Easy-read and large language models: on the ethical dimensions of LLM-based text simplification
(2024)
The production of easy-read and plain language is a challenging task, requiring well-educated experts to write context-dependent simplifications of texts. Therefore, the domain of easy-read and plain language is currently restricted to the bare minimum of necessary information. Thus, even though there is a tendency to broaden the domain of easy-read and plain language, the inaccessibility of a significant amount of textual information excludes the target audience from participation and entertainment and restricts their ability to live autonomously. Large language models can solve a vast variety of natural language tasks, including the simplification of standard-language texts to easy-read or plain language. Moreover, with the rise of generative models like GPT, easy-read and plain language may become applicable to all kinds of natural language texts, making formerly inaccessible information accessible to marginalized groups such as, among others, non-native speakers and people with mental disabilities. In this paper, we argue for the feasibility of text simplification and generation in that context, outline the ethical dimensions, and discuss the implications for researchers in the fields of ethics and computer science.
The quest for scientifically advanced and sustainable solutions is driven by growing environmental and economic issues associated with coal mining, processing, and utilization. Consequently, within the coal industry, there is a growing recognition of the potential of microbial applications in fostering innovative technologies. Microbial-based coal solubilization, coal beneficiation, and coal dust suppression are green alternatives to traditional thermochemical and leaching technologies and better meet the need for ecologically sound and economically viable choices. Surfactant-mediated approaches have emerged as powerful tools for modeling, simulation, and optimization of coal-microbial systems and continue to gain prominence in clean coal fuel production, particularly in microbiological co-processing, conversion, and beneficiation. Surfactants (surface-active agents) are amphiphilic compounds that can reduce surface tension and enhance the solubility of hydrophobic molecules. A wide range of surfactant properties can be achieved by either directly influencing microbial growth factors, stimulants, and substrates or indirectly serving as frothers, collectors, and modifiers in the processing and utilization of coal. This review highlights the significant biotechnological potential of surfactants by providing a thorough overview of their involvement in coal biodegradation, bioprocessing, and biobeneficiation, acknowledging their importance as crucial steps in coal consumption.
We conducted a scoping review for active learning in the domain of natural language processing (NLP), which we summarize in accordance with the PRISMA-ScR guidelines as follows:
Objective: Identify active learning strategies that were proposed for entity recognition and their evaluation environments (datasets, metrics, hardware, execution time).
Design: We used Scopus and ACM as our search engines. We compared the results with two literature surveys to assess the search quality. We included peer-reviewed English publications introducing or comparing active learning strategies for entity recognition.
Results: We analyzed 62 relevant papers and identified 106 active learning strategies. We grouped them into three categories: exploitation-based (60x), exploration-based (14x), and hybrid strategies (32x). We found that all studies used the F1-score as an evaluation metric. Information about hardware (6x) and execution time (13x) was only occasionally included. The 62 papers used 57 different datasets to evaluate their respective strategies. Most datasets contained newspaper articles or biomedical/medical data. Our analysis revealed that 26 out of 57 datasets are publicly accessible.
Conclusion: Numerous active learning strategies have been identified, along with significant open questions that still need to be addressed. Researchers and practitioners face difficulties when making data-driven decisions about which active learning strategy to adopt. Conducting comprehensive empirical comparisons using the evaluation environment proposed in this study could help establish best practices in the domain.
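Exploitation-based strategies, the largest category identified in the review, typically query the instances the current model is least certain about. As a generic illustration of this idea (not one of the 106 surveyed strategies specifically), an entropy-based uncertainty-sampling step can be sketched as:

```python
import numpy as np

def select_batch(probs, k):
    """Pick the k pool instances with the highest predictive entropy.

    probs: (n_pool, n_classes) class probabilities from the current model.
    Returns indices of the k most uncertain instances, most uncertain first.
    This is a minimal textbook sketch, not a strategy from a specific paper.
    """
    eps = 1e-12  # avoid log(0) for hard 0/1 probabilities
    entropy = -np.sum(probs * np.log(probs + eps), axis=1)
    return np.argsort(entropy)[-k:][::-1]

# Toy pool: the second instance is maximally uncertain, the first is not.
probs = np.array([[0.95, 0.05],
                  [0.50, 0.50],
                  [0.80, 0.20]])
print(select_batch(probs, 1))  # -> [1]
```

The selected instances would then be labeled by an oracle and added to the training set before retraining, closing the active learning loop.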
The hot spots conjecture is only known to be true for special geometries. This paper shows numerically that the hot spots conjecture can fail for easy-to-construct bounded domains with one hole. The underlying eigenvalue problem for the Laplace equation with Neumann boundary condition is solved with boundary integral equations, yielding a non-linear eigenvalue problem. Its discretization via the boundary element collocation method, in combination with the algorithm by Beyn, yields highly accurate results for both the first non-zero eigenvalue and its corresponding eigenfunction, owing to superconvergence. Additionally, it is shown numerically that the ratio between the maximal/minimal value inside the domain and the maximal/minimal value on the boundary can be larger than 1 + 10⁻³. Finally, numerical examples of easy-to-construct domains with up to five holes are provided that fail the hot spots conjecture as well.
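For contrast with the counterexamples above, the hot spots property can be checked in the simplest setting where it does hold. A minimal finite-difference illustration on the interval (not the boundary element collocation method used in the paper): the eigenfunction of the first non-zero Neumann eigenvalue attains its extrema on the boundary.

```python
import numpy as np

# Finite-difference Neumann Laplacian on [0, 1] with n grid points.
# The grid size n is an arbitrary choice for this 1D illustration.
n = 200
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
A[0, 0] = A[-1, -1] = 1.0  # homogeneous Neumann boundary rows

eigvals, eigvecs = np.linalg.eigh(A)  # eigenvalues in ascending order
u = eigvecs[:, 1]  # eigenfunction of the first non-zero eigenvalue

# In 1D (a simply connected domain) the hot spots property holds:
# maximum and minimum of u sit at the two boundary points.
print(sorted([int(np.argmax(u)), int(np.argmin(u))]))  # -> [0, 199]
```

The discrete eigenfunction approximates cos(πx), which is monotone on [0, 1], so both extrema lie on the boundary; the paper's domains with holes are precisely where this picture breaks down.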
Objectives
Interest in cardiovascular magnetic resonance (CMR) at 7 T is motivated by the expected increase in spatial and temporal resolution, but the method is technically challenging. We examined the feasibility of cardiac chamber quantification at 7 T.
Methods
A stack of short axes covering the left ventricle was obtained in nine healthy male volunteers. At 1.5 T, steady-state free precession (SSFP) and fast gradient echo (FGRE) cine imaging with 7 mm slice thickness (STH) were used. At 7 T, FGRE with 7 mm and 4 mm STH were applied. End-diastolic volume, end-systolic volume, ejection fraction and mass were calculated.
Results
All 7 T examinations provided excellent blood/myocardium contrast for all slice directions. No significant difference was found regarding ejection fraction and cardiac volumes between SSFP at 1.5 T and FGRE at 7 T, while volumes obtained from FGRE at 1.5 T were underestimated. Cardiac mass derived from FGRE at 1.5 and 7 T was larger than obtained from SSFP at 1.5 T. Agreement of volumes and mass between SSFP at 1.5 T and FGRE improved for FGRE at 7 T when combined with an STH reduction to 4 mm.
Conclusions
This pilot study demonstrates that cardiac chamber quantification at 7 T using FGRE is feasible and agrees closely with SSFP at 1.5 T.
Objective
The purpose of this study is to (i) design a small and mobile Magnetic field ALert SEnsor (MALSE), (ii) carefully evaluate its sensors for the consistency of their activation/deactivation and their sensitivity to magnetic fields, and (iii) demonstrate the applicability of MALSE in 1.5 T, 3.0 T and 7.0 T MR fringe field environments.
Methods
MALSE comprises a set of reed sensors, which activate in response to exposure to a magnetic field. The activation/deactivation of the reed sensors was examined by moving them into and out of the fringe field generated by a 7 T MR system.
Results
The consistency with which individual reed sensors would activate at the same field strength was found to be 100% for the setup used. All of the reed switches investigated required a substantial drop in ambient magnetic field strength before they deactivated.
Conclusions
MALSE is a simple concept for alerting MRI staff to a ferromagnetic object being brought into fringe magnetic fields that exceed MALSE's activation field strength. MALSE can easily be attached to ferromagnetic objects within the vicinity of a scanner, creating a barrier against hazardous situations caused by ferromagnetic parts that should not enter the vicinity of an MR system.
New insights into the influence of pre-culture on robust solvent production of C. acetobutylicum
(2024)
Clostridia are known for their solvent production, especially the production of butanol. Concerning the projected depletion of fossil fuels, this is of great interest. The cultivation of clostridia is known to be challenging, and it is difficult to achieve reproducible results and robust processes. However, existing publications usually concentrate on the cultivation conditions of the main culture. In this paper, the influence of cryo-conservation and pre-culture on growth and solvent production in the resulting main cultivation are examined. A protocol was developed that leads to reproducible cultivations of Clostridium acetobutylicum. Detailed investigation of the cell conservation in cryo-cultures ensured reliable cell growth in the pre-culture. Moreover, a reason for the acid crash in the main culture was found, based on the cultivation conditions of the pre-culture. The critical parameter to avoid the acid crash and accomplish the shift to the solventogenesis of clostridia is the metabolic phase in which the cells of the pre-culture were at the time of inoculation of the main culture; this depends on the cultivation time of the pre-culture. Using cells from the exponential growth phase to inoculate the main culture leads to an acid crash. To achieve the solventogenic phase with butanol production, the inoculum should consist of older cells which are in the stationary growth phase. Considering these parameters, which affect the entire cultivation process, reproducible results and reliable solvent production are ensured.
Unmanned Aerial Vehicles (UAVs) constantly gain in versatility. However, more reliable path planning algorithms are required until fully autonomous UAV operation is possible. This work investigates the algorithm 3DVFH* and analyses its dependency on its cost function weights in 2400 environments. The analysis shows that the 3DVFH* can find a suitable path in every environment. However, a particular type of environment requires a specific choice of cost function weights. For a minimal failure probability, interdependencies between the weights of the cost function have to be considered. This dependency reduces the number of control parameters and simplifies the usage of the 3DVFH*. Weights for costs associated with vertical evasion (pitch cost) and vicinity to obstacles (obstacle cost) have the highest influence on the failure probability of the local path planner. Environments with mainly very tall buildings (like large American city centres) require a preference for horizontal avoidance manoeuvres (achieved with high pitch cost weights). In contrast, environments with medium-to-low buildings (like European city centres) benefit from vertical avoidance manoeuvres (achieved with low pitch cost weights). The cost of the vicinity to obstacles also plays an essential role and must be chosen adequately for the environment. Choosing these two weights well is sufficient to reduce the failure probability below 10%.
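The trade-off between pitch cost and obstacle cost described above can be sketched as a weighted sum over candidate flight directions. The term structure and weight names below are assumptions for illustration, not the published 3DVFH* implementation:

```python
def candidate_cost(pitch_angle, obstacle_distance, goal_deviation,
                   w_pitch=1.0, w_obstacle=5.0, w_goal=1.0):
    """Illustrative weighted cost for one candidate flight direction.

    A generic sketch of how a local planner such as the 3DVFH* trades off
    vertical evasion (pitch), obstacle vicinity, and goal-directedness;
    the terms and default weights are assumptions, not the authors' code.
    """
    pitch_cost = abs(pitch_angle)                        # penalize vertical evasion
    obstacle_cost = 1.0 / max(obstacle_distance, 1e-6)   # closer obstacle -> costlier
    goal_cost = goal_deviation                           # deviation from goal bearing
    return w_pitch * pitch_cost + w_obstacle * obstacle_cost + w_goal * goal_cost

# A high pitch weight makes a climb (pitch 0.5 rad) costlier than a level
# detour (larger goal deviation), mimicking horizontal-evasion tuning for
# environments with very tall buildings.
climb = candidate_cost(0.5, 20.0, 0.1, w_pitch=10.0)
detour = candidate_cost(0.0, 20.0, 0.6, w_pitch=10.0)
print(climb > detour)  # -> True
```

Lowering `w_pitch` reverses the preference, which matches the paper's observation that environments with medium-to-low buildings benefit from vertical avoidance.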
Lifting propellers are of increasing interest for Advanced Air Mobility. All propellers and rotors are initially twisted beams, showing significant extension–twist coupling and centrifugal twisting. Torsional deformations severely impact aerodynamic performance. This paper presents a novel approach to assess different reasons for torsional deformations. A reduced-order model runs large parameter sweeps with algebraic formulations and numerical solution procedures. Generic beams represent three different propeller types for General Aviation, Commercial Aviation, and Advanced Air Mobility. Simulations include solid and hollow cross-sections made of aluminum, steel, and carbon fiber-reinforced polymer. The investigation shows that centrifugal twisting moments depend on both the elastic and initial twist. The determination of the centrifugal twisting moment solely based on the initial twist suffers from errors exceeding 5% in some cases. The nonlinear parts of the torsional rigidity do not significantly impact the overall torsional rigidity for the investigated propeller types. The extension–twist coupling related to the initial and elastic twist in combination with tension forces significantly impacts the net cross-sectional torsional loads. While the increase in torsional stiffness due to initial twist contributes to the overall stiffness for General and Commercial Aviation propellers, its contribution to the lift propeller’s stiffness is limited. The paper closes with the presentation of approximations for each effect identified as significant. Numerical evaluations are necessary to determine each effect for inhomogeneous cross-sections made of anisotropic material.
Objective: As high-field cardiac MRI (CMR) becomes more widespread, the propensity of ECG to interference from electromagnetic fields (EMF) and to magneto-hydrodynamic (MHD) effects increases, and with it the motivation for a CMR triggering alternative. This study explores the suitability of acoustic cardiac triggering (ACT) for left ventricular (LV) function assessment in healthy subjects (n = 14). Methods: Quantitative analysis of 2D CINE steady-state free precession (SSFP) images was conducted to compare ACT's performance with vector ECG (VCG). Endocardial border sharpness (EBS) was examined, paralleled by quantitative LV function assessment. Results: Unlike VCG, ACT provided signal traces free of interference from EMF or MHD effects. In the case of correct R-wave recognition, VCG-triggered 2D CINE SSFP was immune to cardiac motion effects, even at 3.0 T. However, VCG-triggered 2D CINE SSFP imaging was prone to cardiac motion artefacts and EBS degradation if R-wave misregistration occurred. ACT-triggered acquisitions yielded LV parameters (end-diastolic volume (EDV), end-systolic volume (ESV), stroke volume (SV), ejection fraction (EF) and left ventricular mass (LVM)) comparable with those derived from VCG-triggered acquisitions (1.5 T: ESV_VCG = (56±17) ml, EDV_VCG = (151±32) ml, LVM_VCG = (97±27) g, SV_VCG = (94±19) ml, EF_VCG = (63±5)% vs. ESV_ACT = (56±18) ml, EDV_ACT = (147±36) ml, LVM_ACT = (102±29) g, SV_ACT = (91±22) ml, EF_ACT = (62±6)%; 3.0 T: ESV_VCG = (55±21) ml, EDV_VCG = (151±32) ml, LVM_VCG = (101±27) g, SV_VCG = (96±15) ml, EF_VCG = (65±7)% vs. ESV_ACT = (54±20) ml, EDV_ACT = (146±35) ml, LVM_ACT = (101±30) g, SV_ACT = (92±17) ml, EF_ACT = (64±6)%). Conclusions: ACT's intrinsic insensitivity to interference from electromagnetic fields renders it an attractive alternative to VCG for LV function assessment in high-field CMR.
N-Acyl-amino acids can act as mild biobased surfactants, which are used, e.g., in baby shampoos. However, their chemical synthesis needs acyl chlorides and does not meet sustainability criteria. Thus, the identification of biocatalysts to develop greener synthesis routes is desirable. We describe a novel aminoacylase from Paraburkholderia monticola DSM 100849 (PmAcy) which was identified, cloned, and evaluated for its N-acyl-amino acid synthesis potential. Soluble protein was obtained by expression in lactose autoinduction medium and co-expression of molecular chaperones GroEL/S. Strep-tag affinity purification enriched the enzyme 16-fold and yielded 15 mg pure enzyme from 100 mL of culture. Biochemical characterization revealed that PmAcy possesses beneficial traits for industrial application like high temperature and pH-stability. A heat activation of PmAcy was observed upon incubation at temperatures up to 80 °C. Hydrolytic activity of PmAcy was detected with several N-acyl-amino acids as substrates and exhibited the highest conversion rate of 773 U/mg with N-lauroyl-L-alanine at 75 °C. The enzyme preferred long-chain acyl-amino-acids and displayed hardly any activity with acetyl-amino acids. PmAcy was also capable of N-acyl-amino acid synthesis with good conversion rates. The best synthesis results were obtained with the cationic L-amino acids L-arginine and L-lysine as well as with L-leucine and L-phenylalanine. Exemplarily, L-phenylalanine was acylated with fatty acids of chain lengths from C8 to C18 with conversion rates of up to 75%. N-lauroyl-L-phenylalanine was purified by precipitation, and the structure of the reaction product was verified by LC–MS and NMR.
New European Union (EU) regulations for UAS operations require an operational risk analysis, which includes an estimation of the potential danger of the UAS crashing. A key parameter for the potential ground risk is the kinetic impact energy of the UAS. The kinetic energy depends on the impact velocity of the UAS and, therefore, on the aerodynamic drag and the weight during free fall. Hence, estimating the impact energy of a UAS requires an accurate drag estimation of the UAS in that state. The paper at hand presents the aerodynamic drag estimation of small-scale multirotor UAS. Multirotor UAS of various sizes and configurations were analysed with a fully unsteady Reynolds-averaged Navier–Stokes approach. These simulations included different velocities and various fuselage pitch angles of the UAS. The results were compared against force measurements performed in a subsonic wind tunnel and showed good agreement. Furthermore, the influence of the UAS's fuselage pitch angle as well as the influence of fixed and free-spinning propellers on the aerodynamic drag was analysed. Free-spinning propellers may increase the drag by up to 110%, depending on the fuselage pitch angle. Increasing the fuselage pitch angle of the UAS lowers the drag by 40% up to 85%, depending on the UAS. The data presented in this paper allow for increased accuracy of ground risk assessments.
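The ground-risk reasoning above, impact energy from drag-limited free fall, reduces to two short formulas: terminal velocity where drag balances weight, and kinetic energy at that speed. The mass and effective drag area below are illustrative assumptions, not values from the paper:

```python
import math

def terminal_velocity(mass, cd_area, rho=1.225, g=9.81):
    """Steady free-fall velocity where drag balances weight.

    v_t = sqrt(2 m g / (rho * C_d * A)); rho defaults to sea-level
    air density in kg/m^3, cd_area is the product C_d * A in m^2.
    """
    return math.sqrt(2.0 * mass * g / (rho * cd_area))

def impact_energy(mass, velocity):
    """Kinetic energy E = 1/2 m v^2 at impact, in joules."""
    return 0.5 * mass * velocity ** 2

# Illustrative numbers: a 2 kg multirotor with an assumed effective
# drag area C_d*A of 0.05 m^2.
v_t = terminal_velocity(2.0, 0.05)
e_impact = impact_energy(2.0, v_t)
print(round(v_t, 1), "m/s,", round(e_impact, 1), "J")
```

Because E grows with v², the up-to-110% drag increase from free-spinning propellers reported above translates directly into a substantially lower impact energy estimate.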
Obstacle avoidance is critical for unmanned aerial vehicles (UAVs) operating autonomously. Obstacle avoidance algorithms rely either on global environment data or on local sensor data. Local path planners react to unforeseen objects and plan purely on local sensor information. Similarly, animals need to find feasible paths based on local information about their surroundings. Therefore, their behavior is a valuable source of inspiration for path planning. Bumblebees tend to fly vertically over far-away obstacles and horizontally around close ones, implying two zones for different flight strategies depending on the distance to obstacles. This work enhances the local path planner 3DVFH* with this bio-inspired strategy. The algorithm alters the goal-driven function of the 3DVFH* to climb-preferring if obstacles are far away. Prior experiments with bumblebees led to two definitions of flight zone limits depending on the distance to obstacles, leading to two algorithm variants. Both variants reduce the probability that a 3DVFH* implementation in Matlab/Simulink does not reach the goal. The best variant, 3DVFH*b-b, reduces this probability from 70.7 to 18.6% in city-like worlds using a strong vertical evasion strategy. Energy consumption is higher, and flight paths are longer compared to the algorithm version with a pronounced horizontal evasion tendency. A parameter study analyzes the effect of different weighting factors in the cost function. The best parameter combination shows a failure probability of 6.9% in city-like worlds and reduces energy consumption by 28%. Our findings demonstrate the potential of bio-inspired approaches for improving the performance of local path planning algorithms for UAVs.
With the prevalence of glucosamine- and chondroitin-containing dietary supplements for people with osteoarthritis in the marketplace, it is important to have an accurate and reproducible analytical method for the quantitation of these compounds in finished products. An NMR spectroscopic method based on both low-field (80 MHz) and high-field (500–600 MHz) NMR instrumentation was established, compared, and validated for the determination of chondroitin sulfate and glucosamine in dietary supplements. The proposed method was applied to the analysis of 20 different dietary supplements. In the majority of cases, quantification results obtained on the low-field NMR spectrometer are similar to those obtained with high-field 500–600 MHz NMR devices. Validation results in terms of accuracy, precision, reproducibility, limit of detection and recovery demonstrated that the developed method is fit for purpose for the marketed products. The NMR method was extended to the analysis of methylsulfonylmethane, the adulterant maltodextrin, acetate and inorganic ions. Low-field NMR can be a quicker and cheaper alternative to more expensive high-field NMR measurements for quality control of the investigated dietary supplements. High-field NMR instrumentation can be more favorable for samples with complex composition due to better resolution, simultaneously giving the possibility of analysis of inorganic species such as potassium and chloride.
The number of electric vehicles increases steadily, while the space for extending the charging infrastructure is limited. In particular in urban areas, where parking spaces in attractive locations are scarce, opportunities to set up new charging stations are very limited. This leads to an overload of some very attractive charging stations and an underutilization of less attractive ones. Against this background, the paper at hand presents the design of an e-vehicle reservation system that aims at distributing the utilization of the charging infrastructure, particularly in urban areas. By applying a design science approach, the requirements for a reservation-based utilization approach are elicited, and a model for a suitable distribution approach and its instantiation are developed. The artefact is evaluated by simulating the distribution effects based on data of real charging station utilizations.
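The load-balancing idea behind such a reservation system can be sketched as a greedy assignment of each request to the least-loaded feasible station. This is an illustrative heuristic under assumed data structures, not the distribution model developed in the paper:

```python
def assign_reservations(requests, stations):
    """Greedy sketch: route each charging request to the feasible station
    with the lowest current load.

    `stations` maps station id -> current load (reserved slots);
    `requests` is a list of lists of feasible station ids per request.
    Returns the per-request assignment and the resulting loads.
    """
    load = dict(stations)
    assignment = []
    for feasible in requests:
        best = min(feasible, key=lambda s: load[s])  # least-loaded feasible station
        load[best] += 1
        assignment.append(best)
    return assignment, load

# Station A is popular (load 2), B and C less so; four incoming requests.
requests = [["A", "B"], ["A", "B"], ["A", "B"], ["B", "C"]]
stations = {"A": 2, "B": 0, "C": 1}
assignment, load = assign_reservations(requests, stations)
print(assignment, load)  # -> ['B', 'B', 'A', 'C'] {'A': 3, 'B': 2, 'C': 2}
```

Even this simple rule spreads demand away from the overloaded station, which is the effect the paper's simulation evaluates at scale.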
The efficiency concepts of Bahadur and Pitman are used to compare the Wilcoxon tests in paired and independent survey samples. A comparison through the length of corresponding confidence intervals is also done. Simple conditions characterizing the dominance of a procedure are derived. Statistical tests for checking these conditions are suggested and discussed.
The paper deals with the asymptotic behaviour of estimators, statistical tests and confidence intervals for L²-distances to uniformity based on the empirical distribution function, the integrated empirical distribution function and the integrated empirical survival function. Approximations of power functions, confidence intervals for the L²-distances and statistical neighbourhood-of-uniformity validation tests are obtained as main applications. The finite sample behaviour of the procedures is illustrated by a simulation study.
We discuss the testing problem of homogeneity of the marginal distributions of a continuous bivariate distribution based on a paired sample with possibly missing components (missing completely at random). Applying the well-known two-sample Cramér–von Mises distance to the remaining data, we determine the limiting null distribution of our test statistic in this situation. It is seen that a new resampling approach is appropriate for the approximation of the unknown null distribution. We prove that the resulting test asymptotically reaches the significance level and is consistent. Properties of the test under local alternatives are pointed out as well. Simulations investigate the quality of the approximation and the power of the new approach in the finite sample case. As an illustration we apply the test to real data sets.
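The two-sample Cramér–von Mises distance used above compares the empirical distribution functions at all pooled observations. A minimal sketch for complete pairs (the paper's missing-data weighting is not reproduced here):

```python
import numpy as np

def cramer_von_mises_2s(x, y):
    """Two-sample Cramér–von Mises distance (a standard textbook form).

    T = n*m/(n+m)^2 * sum over pooled points of (F_n - G_m)^2,
    where F_n and G_m are the empirical distribution functions of x and y.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    n, m = len(x), len(y)
    pooled = np.concatenate([x, y])
    # ECDF values at every pooled observation via sorted-array counting.
    Fn = np.searchsorted(np.sort(x), pooled, side="right") / n
    Gm = np.searchsorted(np.sort(y), pooled, side="right") / m
    return n * m / (n + m) ** 2 * np.sum((Fn - Gm) ** 2)

print(cramer_von_mises_2s([1, 2, 3], [1, 2, 3]))  # identical samples -> 0.0
```

Since the null distribution of such a statistic is not pivotal in the paired, partially missing setting, critical values are obtained by the resampling approach developed in the paper.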
This paper considers a paired data framework and discusses the question of marginal homogeneity of bivariate high-dimensional or functional data. The related testing problem can be embedded into a more general setting for paired random variables taking values in a general Hilbert space. To address this problem, a Cramér–von Mises-type test statistic is applied and a bootstrap procedure is suggested to obtain critical values and finally a consistent test. The desired properties of a bootstrap test are derived: asymptotic exactness under the null hypothesis and consistency under alternatives. Simulations show the quality of the test in the finite sample case. A possible application is the comparison of two possibly dependent stock market returns based on functional data. The approach is demonstrated based on historical data for different stock market indices.
The potential of electronic markets in enabling innovative product bundles through flexible and sustainable partnerships is not yet fully exploited in the telecommunication industry. One reason is that bundling requires seamless de-assembling and re-assembling of business processes, whilst processes in telecommunication companies are often product-dependent and hard to virtualize. We propose a framework for the planning of the virtualization of processes, intended to assist the decision maker in prioritizing the processes to be virtualized: (a) we transfer the virtualization pre-requisites stated by the Process Virtualization Theory in the context of customer-oriented processes in the telecommunication industry and assess their importance in this context, (b) we derive IT-oriented requirements for the removal of virtualization barriers and highlight their demand on changes at different levels of the organization. We present a first evaluation of our approach in a case study and report on lessons learned and further steps to be performed.
The telecommunication market is undergoing substantial change. New business models, innovative services, and technologies require reengineering, transformation, and process standardization. With the Enhanced Telecom Operations Map (eTOM), the TM Forum offers an internationally recognized de facto reference process framework based on the specific requirements and characteristics of the telecommunication industry. However, this reference framework only contains a hierarchical collection of processes at different levels of abstraction. A control view, understood as a sequential arrangement of activities and thus a real process flow, is missing, as is an end-to-end view of the customer. In this article, we extend the eTOM reference model with reference process flows, in which we abstract and generalize knowledge about processes in telecommunication companies. The reference process flows support companies in the structured and transparent (re-)design of their processes. We demonstrate the applicability and usefulness of our reference process flows in two case studies and evaluate them against criteria for the assessment of reference models. The reference process flows were adopted by the TM Forum into the standard and published as part of eTOM version 9. In addition, we discuss the components of our approach that can also be applied outside the telecommunication industry.
In the course of digitalization, the increasing automation of previously manual process steps is an aspect that will massively affect the future world of work. In this context, high expectations are attached to the use of software robots for process automation. Among implementation approaches, the current discussion is shaped in particular by Robotic Process Automation (RPA) and chatbots. Both approaches pursue the common goal of a 1:1 automation of human actions and thereby a direct replacement of employees by machines. With RPA, processes are learned by software robots and executed automatically. RPA robots emulate inputs on the existing presentation layer, so no changes to existing application systems are necessary. Various RPA solutions are already offered on the market as software products. Chatbots realize the inputs and outputs of application systems via natural language, which makes it possible to automate communication outside the company (e.g., with customers) as well as internal assistance tasks. The article discusses the effects of software robots on the world of work using application examples and explains the company-specific decision on the use of software robots based on effectiveness and efficiency goals.
In this study, a recently proposed NMR standardization approach based on the 2H integral of the deuterated solvent is presented for the quantitative multicomponent analysis of complex mixtures. As a proof of principle, the existing NMR routine for the analysis of Aloe vera products was modified. Instead of using absolute integrals of the targeted compounds and an internal standard (nicotinamide) from 1H-NMR spectra, quantification was performed based on the ratio of a particular 1H-NMR compound integral and the 2H-NMR signal of the deuterated solvent D2O. Validation characteristics (linearity, repeatability, accuracy) were evaluated, and the results showed that the method has the same precision as internal standardization in the case of multicomponent screening. Moreover, a dehydration step by freeze drying is not necessary for the new routine. Our NMR profiling of A. vera products now needs only limited sample preparation and data processing. The new standardization methodology provides an appealing alternative for multicomponent NMR screening. In general, this novel approach, using standardization by the 2H integral, benefits from fewer sample preparation steps and reduced uncertainties, and can be recommended for different application areas (purity determination, forensics, pharmaceutical analysis, etc.).
We study the possibility of fabricating an arbitrary phase mask in a one-step laser-writing process inside the volume of an optical glass substrate. We derive the phase mask from a Gerchberg–Saxton-type algorithm as an array and create each individual phase shift using a refractive index modification of variable axial length. We realize the variable axial length by superimposing refractive index modifications induced by an ultra-short pulsed laser at different focusing depths. Each single modification is created by applying 1000 pulses with 15 μJ pulse energy at 100 kHz to a fixed spot of 25 μm diameter; the focus is then shifted axially in steps of 10 μm. With several proof-of-principle examples, we show the feasibility of our method. In particular, we determine the induced refractive index change to be about Δn = 1.5·10⁻³. We also determine our current limitations by calculating the overlap in the form of a scalar product, and we discuss possible future improvements.
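The abstract above derives its phase mask from a Gerchberg–Saxton-type algorithm. As a point of reference, here is a minimal sketch of the classic Gerchberg–Saxton iteration (not the authors' exact variant; the FFT far-field model, target pattern, and iteration count are illustrative assumptions):

```python
import numpy as np

def gerchberg_saxton(target_amplitude, n_iter=200, seed=0):
    """Classic Gerchberg-Saxton iteration: find a phase-only mask whose
    far field (modelled here by a 2D FFT) approximates a target amplitude."""
    rng = np.random.default_rng(seed)
    source_amplitude = np.ones_like(target_amplitude)  # uniform input beam
    phase = rng.uniform(0.0, 2.0 * np.pi, target_amplitude.shape)
    for _ in range(n_iter):
        # Propagate to the far field and impose the target magnitude
        far = np.fft.fft2(source_amplitude * np.exp(1j * phase))
        far = target_amplitude * np.exp(1j * np.angle(far))
        # Propagate back and keep only the phase (unit source magnitude)
        near = np.fft.ifft2(far)
        phase = np.angle(near)
    return phase
```

For sparse spot targets, the retrieved phase-only mask typically concentrates most of the far-field energy into the desired spots after a few hundred iterations.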
This study investigated the anaerobic digestion of an algal–bacterial biofilm grown in artificial wastewater in an Algal Turf Scrubber (ATS). The ATS system was located in a greenhouse (50°54′19ʺN, 6°24′55ʺE, Germany) and was exposed to seasonal conditions during the experimental period. The methane (CH4) potential of untreated algal–bacterial biofilm (UAB) and thermally pretreated biofilm (PAB) using different microbial inocula was determined by anaerobic batch fermentation. Methane productivity of UAB differed significantly between microbial inocula of digested wastepaper, a mixture of manure and maize silage, anaerobic sewage sludge, and percolated green waste. UAB using sewage sludge as inoculum showed the highest methane productivity. The share of methane in biogas was dependent on the inoculum. Using PAB, a strong positive impact on methane productivity was identified for the digested wastepaper (116.4%) and the mixture of manure and maize silage (107.4%) inocula. By contrast, the methane yield was significantly reduced for the digested anaerobic sewage sludge (50.6%) and percolated green waste (43.5%) inocula. To further evaluate the potential of algal–bacterial biofilm for biogas production in wastewater treatment and biogas plants in a circular bioeconomy, scale-up calculations were conducted. It was found that a 0.116 km² ATS would be required for an average municipal wastewater treatment plant, which can be viewed as problematic in terms of space consumption. However, a substantial energy surplus (4.7–12.5 MWh a⁻¹) can be gained by adding algal–bacterial biomass to the anaerobic digester of a municipal wastewater treatment plant. Wastewater treatment with subsequent energy production through algae thus shows advantages over conventional technologies.
Sleep spindles – function, detection, and use as biomarkers for psychiatric diagnostics
(2022)
Background:
The sleep spindle is a graphoelement of the electroencephalogram (EEG) that can be observed during light and deep sleep. Alterations in spindle activity have been described for various psychiatric disorders. Owing to their relatively constant properties, sleep spindles show potential as biomarkers in psychiatric diagnostics.
Methods:
This article provides an overview of the state of research on the properties and functions of sleep spindles as well as on the reported alterations of spindle activity in psychiatric disorders. Different methodological approaches to spindle detection and their prospects are discussed with regard to their application potential in psychiatric diagnostics.
Results and conclusion:
While alterations of spindle activity in psychiatric disorders have been described, their exact potential for psychiatric diagnostics has not yet been sufficiently investigated. Progress here is currently slowed by resource-intensive and error-prone methods for manual or automated spindle detection. Newer detection approaches based on deep learning could overcome the difficulties of previous detection methods and thereby open up new possibilities for the practical
Introduction
With regard to surgical training, the reproducible simulation of life-like proximal humerus fractures in human cadaveric specimens is desirable. The aim of the present study was to develop a technique that allows the simulation of realistic proximal humerus fractures and to analyse the influence of rotator cuff preload on the resulting fracture configuration.
Materials and methods
Ten cadaveric specimens (6 left, 4 right) were fractured using a custom-made drop-test bench, in two groups. Five specimens were fractured without rotator cuff preload, while the other five were fractured with the tendons of the rotator cuff preloaded with 2 kg each. The humeral shaft and the shortened scapula were potted. The humerus was positioned at 90° of abduction and 10° of internal rotation to simulate a fall on the elevated arm. In two specimens of each group, the emergence of the fractures was documented with high-speed video imaging. Pre-fracture radiographs were taken to evaluate the deltoid-tuberosity index as a measure of bone density. Post-fracture X-rays and CT scans were performed to define the exact fracture configurations. Neer’s classification was used to analyse the fractures.
Results
In all ten cadaveric specimens, life-like proximal humerus fractures were achieved. Each group yielded two three-part and three four-part fractures. Preloading of the rotator cuff muscles had no further influence on the fracture configuration. High-speed videos of the fracture simulation revealed identical fracture mechanisms for both groups. We observed a two-step fracture mechanism: initial impaction of the head segment against the glenoid, followed by fracturing of the head and the tuberosities, and then further impaction of the shaft against the acromion, which led to separation of the tuberosities.
Conclusion
A high-energy axial impulse can reliably induce realistic proximal humerus fractures in cadaveric specimens. The preload of the rotator cuff muscles had no influence on the initial fracture configuration; fracture simulation in the proximal humerus is therefore less elaborate, as preloading can be omitted. Using the presented technique, pre-fractured specimens are available for realistic surgical education.
Plant viruses are major contributors to crop losses and induce high economic costs worldwide. For reliable, on-site, and early detection of plant viral diseases, portable biosensors are of great interest. In this study, a field-effect SiO2-gate electrolyte-insulator-semiconductor (EIS) sensor was utilized for the label-free electrostatic detection of tobacco mosaic virus (TMV) particles as a model plant pathogen. The capacitive EIS sensor was characterized regarding its TMV sensitivity by means of the constant-capacitance method. The EIS sensor was able to detect biotinylated TMV particles from a solution with a TMV concentration as low as 0.025 nM. A good correlation was observed between the registered EIS sensor signal and the density of adsorbed TMV particles assessed from scanning electron microscopy images of the SiO2-gate chip surface. Additionally, the isoelectric point of the biotinylated TMV particles was determined via zeta potential measurements, and the influence of the ionic strength of the measurement solution on the TMV-modified EIS sensor signal was studied.
This study reviews the practice of brake tests in freight railways, which is time-consuming and not suitable for detecting certain failure types. Public incident reports are analysed to derive a reasonable brake test hardware and communication architecture, which aims to provide automatic brake tests at lower cost than current solutions. The proposed solution relies exclusively on brake pipe and brake cylinder pressure sensors, a brake release position switch, as well as radio communication via standard protocols. The approach is embedded in the Wagon 4.0 concept, a holistic approach to a smart freight wagon. The reduction of manual processes yields a strong incentive due to high savings in manual labour and increased productivity.
In its Art. 3, the General Data Protection Regulation (GDPR, German: DS-GVO) governs the territorial applicability of data protection law and explicitly targets offerings by non-European service providers. The discussion to date has focused primarily on the newly introduced market-location principle; the largely untouched establishment principle, and above all the problems arising from its unchanged retention, have hardly been examined. The following article attempts a systematic analysis of a topic that is partly controversial and partly barely discussed.
This study focuses on thermoelectric elements (TEE) as an alternative for room temperature control. TEE are semiconductor devices that can provide heating and cooling via a heat pump effect, without direct noise emissions and without refrigerants. An efficiency evaluation of the optimal operating mode is carried out for different numbers of TEE, ambient temperatures, and heating loads. The influence of an additional heat recovery unit on system efficiency and of an unevenly distributed heating demand is examined. The results show that TEE can provide heat at a coefficient of performance (COP) greater than one, especially for small heating demands and high ambient temperatures. The efficiency increases with the number of elements in the system and is subject to economies of scale. The best COP exceeds six at optimal operating conditions. An additional heat recovery unit proves beneficial for low ambient temperatures and for systems with few TEE; it makes COPs above one possible at ambient temperatures below 0 °C. The effect increases efficiency by at most 0.81 (from 1.90 to 2.71) at an ambient temperature 5 K below room temperature and a heating demand Q̇h = 100 W, but is subject to diseconomies of scale. Thermoelectric technology is a valuable option for electricity-based heat supply and can also provide cooling and ventilation functions. A careful system design as well as an additional heat recovery unit significantly benefits the performance. This makes TEE superior to direct-current heating systems and competitive with heat pumps for small-scale applications focused on avoiding noise and harmful refrigerants.
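The COP figures discussed above follow from the standard definitions. A small sketch for orientation (the numbers in the usage note below are illustrative, not taken from the study):

```python
def cop_heating(q_heat_w, p_electric_w):
    """Heating COP: useful heat delivered per unit of electrical input."""
    return q_heat_w / p_electric_w

def cop_carnot_heating(t_room_k, t_ambient_k):
    """Thermodynamic (Carnot) upper bound for a heat pump lifting heat
    from a colder ambient into the room."""
    return t_room_k / (t_room_k - t_ambient_k)
```

For an ambient 5 K below a 20 °C room, the Carnot bound is 293.15 / 5 ≈ 58.6, so reported COPs of up to about six sit far below the theoretical limit, as expected for real Peltier devices.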
Planning the layout and operation of a technical system is a common task for an engineer. Typically, the workflow is divided into consecutive stages: first, the engineer designs the layout of the system, with the help of his experience or of heuristic methods; secondly, he finds a control strategy, which is often optimized by simulation. This usually results in good operation of an unquestioned system topology. In contrast, we apply Operations Research (OR) methods to find a cost-optimal solution for both stages simultaneously via mixed-integer linear programming (MILP). Technical Operations Research (TOR) allows one to find a provably globally optimal solution within the model formulation. However, the modeling error due to the abstraction of physical reality remains unknown. We address this ubiquitous problem of OR methods by comparing our computational results with measurements in a test rig. For a practical test case we compute a topology and control strategy via MILP and verify that the objectives are met up to a deviation of 8.7%.
Purpose Vascular risk factors and ocular perfusion are intensely debated in the pathogenesis of glaucoma. The retinal vessel analyzer (RVA, IMEDOS Systems, Germany) allows noninvasive measurement of retinal vessel regulation. Significant differences, especially in the veins, between healthy subjects and patients suffering from glaucoma have previously been reported. In this pilot study we investigated whether localized vascular regulation is altered in glaucoma patients with altitudinal visual field defect asymmetry. Methods 15 eyes of 12 glaucoma patients with advanced altitudinal visual field defect asymmetry were included. The mean defect was calculated for each hemisphere separately (−20.99 ± 10.49 dB in the hemisphere with the profound visual field defect vs −7.36 ± 3.97 dB in the less profound hemisphere). After pupil dilation, RVA measurements of retinal arteries and veins were conducted using the standard protocol. The superior and inferior retinal vessel reactivity were measured consecutively in each eye. Results Significant differences between the hemispheres were recorded in venous vessel constriction after flicker light stimulation and in the overall amplitude of the reaction (p < 0.04 and p < 0.02, respectively). Vessel reaction was higher in the hemisphere corresponding to the more advanced visual field defect. Arterial diameters reacted similarly, failing to reach statistical significance. Conclusion Localized retinal vessel regulation is significantly altered in glaucoma patients with asymmetric altitudinal visual field defects. Veins supplying the hemisphere concordant to a less profound visual field defect show diminished diameter changes. Vascular dysregulation might be particularly important in early glaucoma stages prior to a significant visual field defect.
The application of mathematical optimization methods for water supply system design and operation provides the capacity to increase the energy efficiency and to lower the investment costs considerably. We present a system approach for the optimal design and operation of pumping systems in real-world high-rise buildings that is based on the usage of mixed-integer nonlinear and mixed-integer linear modeling approaches. In addition, we consider different booster station topologies, i.e. parallel and series-parallel central booster stations as well as decentral booster stations. To confirm the validity of the underlying optimization models with real-world system behavior, we additionally present validation results based on experiments conducted on a modularly constructed pumping test rig. Within the models we consider layout and control decisions for different load scenarios, leading to a Deterministic Equivalent of a two-stage stochastic optimization program. We use a piecewise linearization as well as a piecewise relaxation of the pumps’ characteristics to derive mixed-integer linear models. Besides the solution with off-the-shelf solvers, we present a problem specific exact solving algorithm to improve the computation time. Focusing on the efficient exploration of the solution space, we divide the problem into smaller subproblems, which partly can be cut off in the solution process. Furthermore, we discuss the performance and applicability of the solution approaches for real buildings and analyze the technical aspects of the solutions from an engineer’s point of view, keeping in mind the economically important trade-off between investment and operation costs.
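The piecewise linearization of the pumps' characteristics mentioned above replaces a nonlinear curve by segments between breakpoints; in a MILP, SOS2 or binary constraints select the active segment. A plain-Python sketch of the evaluation step (the breakpoints in the usage note are illustrative, not the paper's pump data):

```python
def piecewise_linear(x, xs, ys):
    """Evaluate the piecewise-linear interpolant through the points (xs, ys).
    In a MILP formulation, the same value arises as a convex combination of
    two adjacent breakpoints (the SOS2 condition enforced by the solver)."""
    for x0, x1, y0, y1 in zip(xs, xs[1:], ys, ys[1:]):
        if x0 <= x <= x1:
            lam = (x1 - x) / (x1 - x0)  # weight on the left breakpoint
            return lam * y0 + (1.0 - lam) * y1
    raise ValueError("x outside breakpoint range")
```

Approximating a quadratic power curve p(q) = q² with breakpoints 0..3, for instance, `piecewise_linear(1.5, [0, 1, 2, 3], [0, 1, 4, 9])` yields 2.5 against the true 2.25 — the linearization error that a relaxation-based formulation bounds instead.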
Cardiopulmonary bypass (CPB) is a standard technique for cardiac surgery but comes with the risk of severe neurological complications (e.g. stroke) caused by embolisms and/or reduced cerebral perfusion. We report on an aortic cannula prototype design (optiCAN) with helical outflow and a jet-splitting dispersion tip that could reduce the risk of embolic events and restores cerebral perfusion to 97.5% of physiological flow during CPB in vivo, whereas a commercial curved-tip cannula yields 74.6%. In a further in vitro comparison, the pressure loss and hemolysis parameters of optiCAN remain unaffected. The results are reproducibly confirmed in silico for an exemplary human aortic anatomy via computational fluid dynamics (CFD) simulations. Based on the CFD simulations, we first show that the optiCAN design improves aortic root washout, which reduces the risk of thromboembolism. Second, we identify regions of the aortic intima with an increased risk of plaque release by correlating areas of enhanced plaque growth with areas of high wall shear stress (WSS). From this we propose another easy-to-manufacture cannula design (opti2CAN) that decreases the areas burdened by high WSS while preserving physiological cerebral flow and favorable hemodynamics. With this novel cannula design, we propose a cannulation option to reduce neurological complications and the prevalence of stroke in high-risk patients after CPB.
Previous studies optimized the dimensions of coaxial heat exchangers using constant mass flow rates as a boundary condition. They show a thermally optimal circular ring width of nearly zero. Hydraulically optimal is an inner to outer pipe radius ratio of 0.65 for turbulent and 0.68 for laminar flow types. In contrast, in this study, flow conditions in the circular ring are kept constant (a set of fixed Reynolds numbers) during optimization. This approach ensures fixed flow conditions and prevents inappropriately high or low mass flow rates. The optimization is carried out for three objectives: maximum energy gain, minimum hydraulic effort, and eventually optimum net-exergy balance. The optimization changes the inner pipe radius and the mass flow rate but not the Reynolds number of the circular ring. The thermal calculations are based on Hellström's borehole resistance, and the hydraulic optimization on individually calculated linear head loss coefficients. Increasing the inner pipe radius results in decreased hydraulic losses in the inner pipe but increased losses in the circular ring. The net-exergy difference is a key performance indicator that combines the thermal and hydraulic calculations: it is the difference between thermal exergy flux and hydraulic effort. The Reynolds number in the circular ring, rather than the mass flow rate, is held constant during all optimizations. From a thermal perspective, the result is an optimal width of the circular ring of nearly zero. The hydraulically optimal inner pipe radius is 54% of the outer pipe radius for laminar flow and 60% for turbulent flow scenarios. Net-exergetic optimization shows a predominant influence of hydraulic losses, especially for small temperature gains. The exact result depends on the earth's thermal properties and the flow type. Conclusively, the design of coaxial geothermal probes should focus on the hydraulic optimum and take the thermal optimum as a secondary criterion due to the dominating hydraulics.
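The linear head loss coefficients above are of the Darcy–Weisbach type. A sketch under the common hydraulic-diameter approximation for the annulus (laminar f = 64/Re is assumed; the exact laminar annulus friction factor is somewhat higher, which is why the study computes the coefficients individually):

```python
def darcy_pressure_drop(re, rho, v, length, d_outer, d_inner=0.0):
    """Darcy-Weisbach pressure drop for laminar pipe or annulus flow.
    The annulus is treated via its hydraulic diameter d_h = d_outer - d_inner
    (an approximation), with the laminar friction factor f = 64/Re."""
    d_h = d_outer - d_inner       # hydraulic diameter in metres
    f = 64.0 / re                 # laminar Darcy friction factor
    return f * (length / d_h) * rho * v ** 2 / 2.0
```

The trade-off in the abstract follows directly: for a fixed Reynolds number, enlarging the inner pipe radius shrinks the annulus gap d_h and raises its loss while lowering the loss in the inner pipe.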
The paper presents the derivation of a new equivalent skin friction coefficient for estimating the parasitic drag of short-to-medium-range fixed-wing unmanned aircraft. The new coefficient is derived from an aerodynamic analysis of ten different unmanned aircraft used for surveillance, reconnaissance, and search and rescue missions. The aircraft are simulated using a validated unsteady Reynolds-averaged Navier–Stokes approach. The UAV's parasitic drag is significantly influenced by the presence of miscellaneous components like fixed landing gears or electro-optical sensor turrets. These components are responsible for almost half of an unmanned aircraft's total parasitic drag. The new equivalent skin friction coefficient accounts for these effects and is significantly higher compared to other aircraft categories. It is used to initially size an unmanned aircraft for a typical reconnaissance mission. The improved parasitic drag estimation yields a much heavier unmanned aircraft when compared to the sizing results using available drag data of manned aircraft.
Business process automation is often held back by limited IT resources, missing software interfaces, or an outdated and complex legacy system landscape. Robotic Process Automation (RPA) is a promising method for automating business processes at the presentation layer, without major system interventions, and for eliminating media discontinuities. Selecting the right processes is decisive for the success of RPA projects. This article provides selection criteria for this purpose, derived from a qualitative content analysis of eleven interviews with RPA experts from the insurance sector. The result is a weighted list of seven dimensions and 51 process criteria that favor automation with software robots or whose absence hampers or even prevents implementation. The three most important criteria for selecting business processes for automation via RPA are relieving the employees involved in the process (employee overload), the executability of the process by means of rules (rule-based process control), and a positive cost-benefit comparison. Practitioners can use these criteria to make a systematic selection of RPA-relevant processes. From a research perspective, the results provide a basis for explaining the success and failure of RPA projects.
Game-based learning is a promising approach to anti-phishing education, as it fosters motivation and can help reduce the perceived difficulty of the educational material. Over the years, several prototypes for game-based applications have been proposed that follow different approaches in content selection, presentation, and game mechanics. In this paper, a literature and product review of existing learning games is presented. Based on research papers and accessible applications, an in-depth analysis was conducted, encompassing target groups, educational contexts, learning goals based on Bloom's Revised Taxonomy, and learning content. As a result of this review, we created the publications on games (POG) data set for the domain of anti-phishing education. While there are games that can convey factual and conceptual knowledge, we find that most games are either unavailable, fail to convey procedural knowledge, or lack technical depth. Thus, we identify potential areas of improvement for games suitable for end-users in informal learning contexts.
This paper compares several blade element theory (BET) based propeller simulation tools, including an evaluation against static propeller ground tests and high-fidelity Reynolds-averaged Navier–Stokes (RANS) simulations. Two proprietary propeller geometries for paraglider applications are analysed in static and flight conditions. The RANS simulations are validated with the static test data and used as a reference for comparing the BET in flight conditions. The comparison includes the analysis of varying 2D aerodynamic airfoil parameters and different induced velocity calculation methods. The evaluation of the BET propeller simulation tools shows the strength of the BET tools compared to RANS simulations. The RANS simulations underpredict the static experimental data within 10% relative error, while appropriate BET tools overpredict the RANS results by 15–20% relative error. A variation in 2D aerodynamic data shows the need for highly accurate 2D data to obtain accurate BET results. The nonlinear BET coupled with XFOIL for the 2D aerodynamic data agrees best with RANS in static operation and flight conditions. The novel BET tool PropCODE combines both approaches and offers further correction models for highly accurate static and flight condition results.
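At its core, a blade element theory calculation sums sectional aerodynamic loads along the radius. A minimal sketch of that summation (induced velocity is neglected here for brevity — the tools compared above iterate it against momentum theory — and the thin-airfoil lift slope in the test is an assumption):

```python
import math

def bet_thrust(rho, omega, v_axial, radii, dr, chords, twists,
               cl_of_alpha, n_blades=2):
    """Blade-element thrust: sum axial components of sectional lift.
    Induced velocity and sectional drag are neglected in this sketch."""
    thrust = 0.0
    for r, c, theta in zip(radii, chords, twists):
        v_tan = omega * r                  # rotational inflow at the section
        w = math.hypot(v_axial, v_tan)     # resultant inflow speed
        phi = math.atan2(v_axial, v_tan)   # inflow angle
        alpha = theta - phi                # local angle of attack
        dlift = 0.5 * rho * w ** 2 * c * cl_of_alpha(alpha) * dr
        thrust += n_blades * dlift * math.cos(phi)
    return thrust
```

In static conditions (v_axial = 0) the inflow angle vanishes and the angle of attack equals the local twist, which is why static predictions are so sensitive to the 2D lift data — the point the comparison above makes.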
Objective
In local SAR compression algorithms, the overestimation is generally not linearly dependent on actual local SAR. This can lead to large relative overestimation at low actual SAR values, unnecessarily constraining transmit array performance.
Method
Two strategies are proposed to reduce maximum relative overestimation for a given number of VOPs. The first strategy uses an overestimation matrix that roughly approximates actual local SAR; the second strategy uses a small set of pre-calculated VOPs as the overestimation term for the compression.
Result
Comparison with a previous method shows that for a given maximum relative overestimation the number of VOPs can be reduced by around 20% at the cost of a higher absolute overestimation at high actual local SAR values.
Conclusion
The proposed strategies outperform a previously published strategy and can improve the SAR compression where maximum relative overestimation constrains the performance of parallel transmission.
In this chapter, the key technologies and the instrumentation required for the subsurface exploration of ocean worlds are discussed. The focus is laid on Jupiter’s moon Europa and Saturn’s moon Enceladus because they have the highest potential for such missions in the near future. The exploration of their oceans requires landing on the surface, penetrating the thick ice shell with an ice-penetrating probe, and probably diving with an underwater vehicle through dozens of kilometers of water to the ocean floor, to have the chance to find life, if it exists. Technologically, such missions are extremely challenging. The required key technologies include power generation, communications, pressure resistance, radiation hardness, corrosion protection, navigation, miniaturization, autonomy, and sterilization and cleaning. Simpler mission concepts involve impactors and penetrators or – in the case of Enceladus – plume-fly-through missions.
Elastomers are exceptional materials owing to their ability to undergo large deformations before failure. However, due to their very low stiffness, they are not always suitable for industrial applications. The addition of filler particles provides reinforcing effects and thus enhances the material properties, rendering the materials more versatile for applications such as tyres. However, the deformation behavior of filled polymers is accompanied by several nonlinear effects, such as the Mullins and Payne effects. To this day, the physical and chemical changes resulting in such nonlinear effects remain an active area of research. In this work, we develop a heterogeneous (or multiphase) constitutive model at the mesoscale, explicitly considering filler particle aggregates, the elastomeric matrix, and their mechanical interaction through an approximate interface layer. The developed constitutive model is used to demonstrate cluster breakage as one of the possible sources of the Mullins effect observed in non-crystallizing filled elastomers.
Purpose
This study aims to investigate the biomechanics of handcycling during a continuous load trial (CLT) to assess the mechanisms underlying fatigue in upper body exercise.
Methods
Twelve able-bodied triathletes performed a 30-min CLT at a power output corresponding to lactate threshold in a racing recumbent handcycle mounted on a stationary ergometer. During the CLT, ratings of perceived exertion (RPE), tangential crank kinetics, 3D joint kinematics, and muscular activity of ten muscles of the upper extremity and trunk were examined using motion capturing and surface electromyography.
Results
During the CLT, spontaneously chosen cadence and RPE increased, whereas crank torque decreased. Rotational work was higher during the pull phase. Peripheral RPE was higher compared to central RPE. Joint range of motion decreased for elbow-flexion and radial-duction. Integrated EMG (iEMG) increased in the forearm flexors, forearm extensors, and M. deltoideus (Pars spinalis). An earlier onset of activation was found for M. deltoideus (Pars clavicularis), M. pectoralis major, M. rectus abdominis, M. biceps brachii, and the forearm flexors.
Conclusion
Fatigue-related alterations seem to apply analogously in handcycling and cycling. The most distal muscles are responsible for force transmission to the cranks and might thus suffer most from neuromuscular fatigue. The findings indicate that peripheral fatigue (at similar lactate values) is higher in handcycling than in leg cycling, at least for inexperienced participants. An increase in cadence might delay peripheral fatigue through reduced vascular occlusion. We assume that the gap between peripheral and central fatigue can be reduced by sport-specific endurance training.
Researching the field of business intelligence and analytics (BI & A) has a long tradition within information systems research, and in each decade the rapid development of technologies has opened new room for investigation. Since the early 1950s, the collection and analysis of structured data were the focus of interest, followed by unstructured data since the early 1990s. The third wave of BI & A comprises unstructured and sensor data of mobile devices. This article aims to draw a comprehensive overview of the status quo of relevant BI & A research in the current decade, focusing on the third wave of BI & A. The paper's contribution is fourfold. First, a systematically developed taxonomy for BI & A 3.0 research, containing seven dimensions and 40 characteristics, is presented. Second, the results of a structured literature review of 75 full research papers are analyzed by applying the developed taxonomy. The analysis provides an overview of the status quo of BI & A 3.0. Third, the results foster discussions of the predicted and observed developments in BI & A research of the past decade. Fourth, research gaps in the third wave of BI & A research are disclosed and consolidated in a research agenda.
For short take-off and landing (STOL) aircraft, a parallel hybrid-electric propulsion system potentially offers superior performance compared to a conventional propulsion system, because the short-take-off power requirement is much higher than the cruise power requirement. This power-matching problem can be solved with a balanced hybrid propulsion system. However, there is a trade-off between wing loading, power loading, the level of hybridization, as well as range and take-off distance. An optimization method can vary design variables in such a way that a minimum of a particular objective is attained. In this paper, a comparison between the optimization results for minimum mass, minimum consumed primary energy, and minimum cost is conducted. A new initial sizing algorithm for general aviation aircraft with hybrid-electric propulsion systems is applied. This initial sizing methodology covers point performance, mission performance analysis, the weight estimation process, and cost estimation. The methodology is applied to the design of a STOL general aviation aircraft, intended for on-demand air mobility operations. The aircraft is sized to carry eight passengers over a distance of 500 km, while able to take off and land from short airstrips. Results indicate that parallel hybrid-electric propulsion systems must be considered for future STOL aircraft.
Through a mirror darkly – On the obscurity of teaching goals in game-based learning in IT security
(2021)
Teachers and instructors use very specific language when communicating teaching goals. The most widely used frameworks of common reference are Bloom's Taxonomy and the Revised Bloom's Taxonomy. The latter distinguishes 209 different teaching goals, which are connected to methods. In Competence Developing Games (CDGs – serious games to convey knowledge) and in IT security education, a two- or three-level typology exists, reducing possible learning outcomes to awareness, training, and education. This study explores whether this much simpler framework succeeds in achieving the same range of learning outcomes. Methodologically, a keyword analysis was conducted. The results were threefold: 1. The words used to describe teaching goals in CDGs on IT security education do not reflect the whole range of learning outcomes. 2. The word choice is nevertheless different from common language, indicating an intentional use of language. 3. IT security CDGs use different sets of terms to describe learning outcomes, depending on whether they are awareness, training, or education games. The interpretation of the findings is that the reduction to just three types of CDGs reduces the capacity to communicate and think about learning outcomes and consequently reduces the outcomes that are intentionally achieved.
Socio-technical scenarios for energy-intensive industries: the future of steel production in Germany
(2019)
The pharmacokinetics and metabolism of diclofenac in chimeric humanized and murinized FRG mice
(2018)
The pharmacokinetics of diclofenac were investigated following single oral doses of 10 mg/kg to chimeric liver humanized and murinized FRG and C57BL/6 mice. In addition, the metabolism and excretion were investigated in chimeric liver humanized and murinized FRG mice. Diclofenac reached maximum blood concentrations of 2.43 ± 0.9 µg/mL (n = 3) at 0.25 h post-dose with an AUCinf of 3.67 µg h/mL and an effective half-life of 0.86 h (n = 2). In the murinized animals, maximum blood concentrations were determined as 3.86 ± 2.31 µg/mL at 0.25 h post-dose with an AUCinf of 4.94 ± 2.93 µg h/mL and a half-life of 0.52 ± 0.03 h (n = 3). In C57BL/6J mice, mean peak blood concentrations of 2.31 ± 0.53 µg/mL were seen 0.25 h post-dose with a mean AUCinf of 2.10 ± 0.49 µg h/mL and a half-life of 0.51 ± 0.49 h (n = 3). Analysis of blood indicated only trace quantities of drug-related material in chimeric humanized and murinized FRG mice. Metabolic profiling of urine, bile and faecal extracts revealed a complex pattern of metabolites for both humanized and murinized animals with, in addition to unchanged parent drug, a variety of hydroxylated and conjugated metabolites detected. The profiles in humanized mice were different from those of both murinized and wild-type animals; e.g., a higher proportion of the dose was detected in the form of acyl glucuronide metabolites and much reduced amounts as taurine conjugates. Comparison of the metabolic profiles obtained in the present study with previously published data from C57BL/6J mice and humans revealed a greater, though not complete, match between chimeric humanized mice and humans, suggesting that the liver-humanized FRG model may be suitable for assessing the biotransformation of such compounds in humans.
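The reported AUCinf and half-life values are standard noncompartmental quantities. As an illustrative sketch only (using a synthetic concentration–time profile, not the study's measurements), they can be derived from blood sampling data like this:

```python
import math

def auc_trapezoid(times, conc):
    """Linear trapezoidal AUC from the first to the last sampling time."""
    return sum((t2 - t1) * (c1 + c2) / 2
               for (t1, c1), (t2, c2) in zip(zip(times, conc),
                                             zip(times[1:], conc[1:])))

def terminal_half_life(times, conc, n_points=3):
    """Half-life from a log-linear regression on the last n_points samples."""
    ts = times[-n_points:]
    lcs = [math.log(c) for c in conc[-n_points:]]
    mt, ml = sum(ts) / len(ts), sum(lcs) / len(lcs)
    slope = (sum((t - mt) * (l - ml) for t, l in zip(ts, lcs))
             / sum((t - mt) ** 2 for t in ts))
    return math.log(2) / -slope  # t_1/2 = ln(2) / k_elimination

# synthetic mono-exponential profile with a known 0.5 h half-life
times = [0.25, 0.5, 1.0, 2.0, 4.0]          # h post-dose
conc = [4.0 * 0.5 ** (t / 0.5) for t in times]  # µg/mL
print(round(terminal_half_life(times, conc), 3))  # 0.5
```

The log-linear fit recovers the half-life used to generate the synthetic data; real profiles would additionally require extrapolation of the terminal phase to obtain AUCinf.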
Slot die coating is applied to deposit thin and homogeneous films in roll-to-roll and sheet-to-sheet applications. The critical step in operation is to choose suitable process parameters within the process window. In this work, we investigate an upper limit for stripe coatings. This maximum film thickness is characterized by stripe merging, which needs to be avoided in a stable process. It is shown that the upper limit reduces the process window for stripe coatings to a major extent. As a result, stripe coatings at large coating gaps and low viscosities are only possible for relatively thick films. To explain the upper limit, a theory balancing the side pressure in the gap region in the cross-web direction has been developed.
Impact of electric propulsion technology and mission requirements on the performance of VTOL UAVs
(2018)
One of the engineering challenges in aviation is the design of transitioning vertical take-off and landing (VTOL) aircraft. Thrust-borne flight implies a higher mass fraction of the propulsion system, as well as much increased energy consumption in the take-off and landing phases. This mass increase is typically higher for aircraft with a separate lift propulsion system than for aircraft that use the cruise propulsion system to support a dedicated lift system. However, for a cost–benefit trade study, it is necessary to quantify the impact that the VTOL requirement and propulsion configuration have on aircraft mass and size. For this reason, sizing studies are conducted. This paper explores the impact of considering a supplemental electric propulsion system for achieving hovering flight. Key variables in this study, apart from the lift system configuration, are the rotor disk loading and hover flight time, as well as the electrical systems technology level for both batteries and motors. Payload and endurance are typically used as the measures of merit for unmanned aircraft that carry electro-optical sensors, and therefore the analysis focuses on these particular parameters.
Money-back guarantees are gaining ever greater importance in business practice, above all because they are regarded as an effective means of signalling high quality, an assumption that has so far remained scientifically untested. Against this background, the present paper conducts a comprehensive empirical investigation of the effects of this marketing instrument on purchase behaviour. The results show, on the one hand, that a money-back guarantee acts as a quality signal only under certain conditions. Besides the type of product (experience vs. search good), this depends in particular on the brand, a cue that is especially diagnostic for quality assessment, and on consumers' product knowledge. On the other hand, a money-back guarantee also triggers affective consumer reactions that can additionally increase consumers' purchase intention. In summary, we find that, contrary to previous expectations from practice, a money-back guarantee is not necessarily a quality indicator; instead, it unfolds hitherto unnoticed affective effects, which are attributable in particular to its function of insuring against possible wrong purchase decisions.
With a steady increase of regulatory requirements for business processes, automation support for compliance management is a field garnering increasing attention in Information Systems research. Several approaches have been developed to support compliance checking of process models. One major challenge for such approaches is their ability to handle different modeling techniques and compliance rules in order to enable widespread adoption and application. Applying a structured literature search strategy, we review and discuss compliance-checking approaches in order to provide insight into their generalizability and evaluation. The results imply that current approaches mainly focus on specific modeling techniques and/or a restricted set of types of compliance rules. Most approaches abstain from real-world evaluation, which raises the question of their practical applicability. Based on the search results, we propose a roadmap for further research in model-based business process compliance checking.
Given the strong increase in regulatory requirements for business processes, the management of business process compliance is receiving growing attention in IS research. Several methods have been developed to support compliance checking of conceptual models. However, their focus on distinct modeling languages and mostly linear (i.e., predecessor-successor related) compliance rules may hinder widespread adoption and application in practice. Furthermore, hardly any of them has been evaluated in a real-world setting. We address this issue by applying a generic pattern matching approach for conceptual models to business process compliance checking in the financial sector. It consists of a model query language, a search algorithm and a corresponding modelling tool prototype. It is applicable (1) to all graph-based conceptual modeling languages and (2) to different kinds of compliance rules. Furthermore, based on an applicability check, we (3) evaluate the approach in a financial industry project setting with respect to its relevance for decision support of audit and compliance management tasks.
The rail business is challenged by long product life cycles and a broad spectrum of assembly groups and single parts. When spare part obsolescence occurs, quick solutions are needed. A reproduction of obsolete parts is often connected to long waiting times and minimum lot quantities that need to be purchased and stored. Spare part storage is therefore challenged by growing stocks, bound capital and issues of part ageing. A possible solution could be a virtual storage of spare parts, which would be 3D printed through additive manufacturing technologies in case of sudden demand. As the mechanical properties of additively manufactured parts are guaranteed neither by machine manufacturers nor by service providers, the utilization of this relatively young technology is impeded and research is required to address these issues. This paper presents an examination of the mechanical properties of specimens manufactured from stainless steel through the selective laser melting (SLM) process. The specimens were produced in multiple batches. This paper examines whether the test results follow a normal distribution and whether predictions of mechanical properties can be made. The results are compared with existing threshold values provided as the industrial standard. Furthermore, probability predictions are made in order to examine the potential of the SLM process to maintain state-of-the-art mechanical property requirements.
Mixed-material designs are increasingly being used in modern high-volume vehicle bodies. In a collaboration between Daimler AG, Tower Automotive Holding GmbH and Imperia GmbH, together with the partner companies KSM Castings GmbH and Schaufler Tooling GmbH & Co. KG, the lightweight-design potential of aluminium compound-cast/sheet-steel hybrids is examined in detail using the front roof cross member of the Mercedes-Benz Viano/Vito as an example.
20 Years of RoboCup
(2016)
An immunochromatographic lateral flow dipstick assay for the fast detection of microcystin-LR was developed. Colloidal gold particles with diameters of 40 nm were used as red-colored antibody labels for the visual detection of the antigen. The new dipstick sensor is capable of detecting down to 5 µg·l−1 (ppb; total inversion of the color signal) or 1 ppb (observation of color grading) of microcystin-LR. The course of the labeling reaction was observed via spectrometric wave shifts caused by the change of particle size during the binding of antibodies. Tests with different stabilizing reagents showed that bovine serum albumin (BSA) and casein in particular increase the assay's sensitivity and the conjugate stability. Performance of the dipsticks was quantified by pattern processing of CCD images of the capture zone. Storage stability of dipsticks and conjugate suspensions was monitored over 115 days under different conditions. The ready-to-use dipsticks were successfully tested with microcystin-LR-spiked samples of outdoor drinking water and salt water, and applied to the tissue of microcystin-fed mussels.
Implications of digitalisation for the finance function of the firm and the role of the CFO
(2017)
Three amperometric biosensors have been developed for the detection of L-malic acid, fumaric acid, and L-aspartic acid, all based on the combination of a malate-specific dehydrogenase (MDH, EC 1.1.1.37) and diaphorase (DIA, EC 1.8.1.4). The stepwise expansion of the malate platform with the enzymes fumarate hydratase (FH, EC 4.2.1.2) and aspartate ammonia-lyase (ASPA, EC 4.3.1.1) resulted in multi-enzyme reaction cascades and, thus, augmentation of the substrate spectrum of the sensors. Electrochemical measurements were carried out in the presence of the cofactor β-nicotinamide adenine dinucleotide (NAD+) and the redox mediator hexacyanoferrate(III) (HCFIII). The amperometric detection is mediated by oxidation of hexacyanoferrate(II) (HCFII) at an applied potential of +0.3 V vs. Ag/AgCl. For each biosensor, optimum working conditions were defined by adjusting the cofactor concentrations, buffer pH, and immobilization procedure. Under these improved conditions, amperometric responses were linear up to 3.0 mM for L-malate and fumarate, respectively, with corresponding sensitivities of 0.7 μA mM−1 (L-malate biosensor) and 0.4 μA mM−1 (fumarate biosensor). The L-aspartate detection system displayed a linear range of 1.0–10.0 mM with a sensitivity of 0.09 μA mM−1. The sensor characteristics suggest that the developed platform provides a promising method for the detection and differentiation of the three substrates.
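Given the calibration figures reported in the abstract (e.g. 0.7 μA mM−1 up to 3.0 mM for L-malate), converting a measured current into a concentration is a simple inversion of the linear response. The following sketch is illustrative only; the function name and the one-parameter linear calibration are assumptions, not the authors' procedure:

```python
def concentration_from_current(i_uA, sensitivity_uA_per_mM, linear_max_mM):
    """Invert a linear amperometric calibration I = S * c.

    Only valid within the sensor's reported linear range; outside it,
    the sample would have to be diluted and re-measured.
    """
    c = i_uA / sensitivity_uA_per_mM
    if c > linear_max_mM:
        raise ValueError("response outside the linear range; dilute the sample")
    return c

# L-malate sensor values from the abstract: 0.7 uA/mM, linear up to 3.0 mM
print(round(concentration_from_current(1.4, 0.7, 3.0), 6))  # 2.0 (mM)
```

A measured current of 1.4 μA would thus correspond to 2.0 mM L-malate under this assumed calibration.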
The incorporation of nanomaterials that are biocompatible with different types of biological compounds has allowed the development of a new generation of biosensors applied especially in the biomedical field. In particular, the integration of film-based nanomaterials into field-effect devices is attractive for developing biosensors with enhanced properties. In this paper, we studied the fabrication of sensitive nanofilms combining ZnO nanocrystals and carbon nanotubes (CNTs), prepared by means of the layer-by-layer (LbL) technique, in a capacitive electrolyte-insulator-semiconductor (EIS) structure for detecting glucose and urea. The ZnO nanocrystals were incorporated in a polymeric matrix of poly(allylamine) hydrochloride (PAH) and arranged with multi-walled CNTs in an LbL PAH-ZnO/CNTs film architecture on EIS chips. The electrochemical characterizations were performed by capacitance–voltage and constant-capacitance measurements, while the morphology of the films was characterized by atomic force microscopy. The enzymes glucose oxidase and urease were immobilized on the film's surface for the detection of glucose and urea, respectively. In order to obtain glucose and urea biosensors with an optimized amount of sensitive film, we investigated the ideal number of bilayers for each detection system. The glucose biosensor showed better sensitivity and output signal for an LbL PAH-ZnO/CNTs nanofilm with 10 bilayers. The urea biosensor, on the other hand, presented enhanced properties even for the first bilayer, exhibiting high sensitivity and output signal. The presence of the LbL PAH-ZnO/CNTs films led to biosensors with better sensitivity and an enhanced response signal, demonstrating that the adequate use of nanostructured films is feasible for proof-of-concept biosensors with improved properties that may be employed for biomedical applications.
Due to their anion exchange characteristics, layered double hydroxides (LDHs) are suitable for the detoxification of aqueous, fatty-acid-containing fermentation substrates. The aim of this study is to examine the adsorption mechanism, using crude glycerol from plant oil esterification as a model system. Changes in the intercalation structure in relation to the amount of fatty acids adsorbed are monitored by X-ray diffraction and infrared spectroscopy. Additionally, calcination of LDH is investigated in order to increase the binding capacity for fatty acids. Our data suggest that, at ambient temperature, fatty acids can be bound to the hydrotalcite by adsorption or, additionally, by intercalation, depending on the fatty acid concentration. The adsorption of fatty acids from crude glycerol shows a BET-like behavior. Above a fatty acid concentration of 3.5 g L−1, intercalation of fatty acids is evidenced by the appearance of an increased interlayer spacing. This observation suggests a two-phase adsorption process. Calcination of LDHs increases the binding capacity for fatty acids more than sixfold, mainly through the reduction of structural CO32−.
Objective
To investigate the feasibility of 7T MR imaging of the kidneys utilising a custom-built 8-channel transmit/receive radiofrequency body coil.
Methods
In vivo unenhanced MR was performed in 8 healthy volunteers on a 7T whole-body MR system. After B0 shimming, the following sequences were obtained: 1) 2D and 3D spoiled gradient-echo sequences (FLASH, VIBE), 2) T1-weighted 2D in- and opposed-phase imaging, 3) True-FISP imaging and 4) a T2-weighted turbo spin echo (TSE) sequence. Visual evaluation of the overall image quality was performed by two radiologists.
Results
Renal MRI at 7T was feasible in all eight subjects. The best image quality was found using T1-weighted gradient echo MRI, providing high anatomical detail and excellent conspicuity of the non-enhanced vasculature. With successful shimming, B1 signal voids could be effectively reduced and/or shifted out of the region of interest in most sequence types. However, T2-weighted TSE imaging remained challenging and was strongly impaired by signal heterogeneities in three volunteers.
Conclusion
The results demonstrate the feasibility and diagnostic potential of dedicated 7T renal imaging. Further optimisation of imaging sequences and dedicated RF coil concepts are expected to improve the acquisition quality and ultimately provide high clinical diagnostic value.
Objectives
To assess the image quality of T2-weighted (T2w) magnetic resonance imaging of the prostate and the visibility of prostate cancer at 7 Tesla (T).
Materials & methods
Seventeen prostate cancer patients underwent T2w imaging at 7T with only an external transmit/receive array coil. Three radiologists independently scored images for image quality, visibility of anatomical structures, and presence of artefacts. Krippendorff’s alpha and weighted kappa statistics were used to assess inter-observer agreement. Visibility of prostate cancer lesions was assessed by directly linking the T2w images to the confirmed location of prostate cancer on histopathology.
Results
T2w imaging at 7T was achievable with ‘satisfactory’ (3/5) to ‘good’ (4/5) quality. Visibility of anatomical structures was predominantly scored as ‘satisfactory’ (3/5) and ‘good’ (4/5). If artefacts were present, they were mostly motion artefacts and, to a lesser extent, aliasing artefacts and noise. Krippendorff’s analysis revealed an α = 0.44 between three readers for the overall image quality scores. Clinically significant cancer lesions in both peripheral zone and transition zone were visible at 7T.
Conclusion
T2w imaging with satisfactory to good quality can be routinely acquired, and cancer lesions were visible in patients with prostate cancer at 7T using only an external transmit/receive body array coil.
Objective
This study assesses and quantifies impairment of postoperative magnetic resonance imaging (MRI) at 7 Tesla (T) after implantation of titanium cranial fixation plates (CFPs) for neurosurgical bone flap fixation.
Materials and methods
The study group comprised five patients who were intra-individually examined with 3 and 7 T MRI preoperatively and postoperatively (within 72 h/3 months) after implantation of CFPs. Acquired sequences included T₁-weighted magnetization-prepared rapid-acquisition gradient-echo (MPRAGE), T₂-weighted turbo-spin-echo (TSE) imaging, and susceptibility-weighted imaging (SWI). Two experienced neurosurgeons and a neuroradiologist rated image quality and the presence of artifacts in consensus reading.
Results
Minor artifacts occurred around the CFPs in MPRAGE and T2 TSE at both field strengths, with no significant differences between 3 and 7 T. In SWI, artifacts were accentuated in the early postoperative scans at both field strengths due to intracranial air and hemorrhagic remnants. After resorption, the brain tissue directly adjacent to skull bone could still be assessed. Image quality after 3 months was equal to the preoperative examinations at 3 and 7 T.
Conclusion
Image quality after CFP implantation was not significantly impaired in 7 T MRI, and artifacts were comparable to those in 3 T MRI.
This summer, RoboCup competitions were held for the 20th time in Leipzig, Germany. It was the second time that RoboCup took place in Germany, 10 years after the 2006 RoboCup in Bremen. In this article, we give an overview of the latest developments in RoboCup and of what happened in the different leagues over the last decade. With its 20th edition, RoboCup clearly is a success story and a role model for robotics competitions. From our personal viewpoint, we acknowledge this by offering a retrospective on what makes RoboCup such a success.
Enzyme-based logic gates and circuits - analytical applications and interfacing with electronics
(2017)
The paper is an overview of enzyme-based logic gates and their circuits, with specific examples of Boolean AND and OR gates and of concatenated logic gates composed of multi-step enzyme-biocatalyzed reactions. Noise formation in the biocatalytic reactions and its reduction by adding a “filter” system, converting a convex response function to a sigmoid one, are discussed. Although enzyme-based logic gates are primarily considered components of future biomolecular computing systems, their biosensing applications are promising for immediate practical use. The analytical use of enzyme logic systems in biomedical and forensic applications is discussed and exemplified with the logic analysis of biomarkers of various injuries, e.g., liver injury, and with the analysis of biomarkers characteristic of different ethnicities found in blood samples at a crime scene. The interfacing of enzyme logic systems with modified electrodes and semiconductor devices is discussed, with particular attention to interfaces functionalized with signal-responsive materials. Future perspectives in the design of biomolecular logic systems and their applications are discussed in the conclusion.
To better understand what kinds of sports and exercise could be beneficial for the intervertebral disc (IVD), we performed a review to synthesise the literature on IVD adaptation with loading and exercise. The state of the literature did not permit a systematic review; therefore, we performed a narrative review. The majority of the available data come from cell or whole-disc loading models and animal exercise models. However, some studies have examined the impact of specific sports on IVD degeneration in humans and acute exercise on disc size. Based on the data available in the literature, loading types that are likely beneficial to the IVD are dynamic, axial, at slow to moderate movement speeds, and of a magnitude experienced in walking and jogging. Static loading, torsional loading, flexion with compression, rapid loading, high-impact loading and explosive tasks are likely detrimental for the IVD. Reduced physical activity and disuse appear to be detrimental for the IVD. We also consider the impact of genetics and the likelihood of a ‘critical period’ for the effect of exercise in IVD development. The current review summarises the literature to increase awareness amongst exercise, rehabilitation and ergonomic professionals regarding IVD health and provides recommendations on future directions in research.
We present a new Min-Max theorem for an optimization problem closely connected to matchings and vertex covers in balanced hypergraphs. The result generalizes Kőnig’s Theorem (Berge and Las Vergnas in Ann N Y Acad Sci 175:32–40, 1970; Fulkerson et al. in Math Progr Study 1:120–132, 1974) and Hall’s Theorem (Conforti et al. in Combinatorica 16:325–329, 1996) for balanced hypergraphs.
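For reference, the classical bipartite special case that the abstract's result generalizes, Kőnig's theorem, states that in every bipartite graph the maximum matching size equals the minimum vertex cover size. In standard notation (a sketch of the known special case, not the paper's new theorem):

```latex
% Kőnig's theorem, bipartite graph G:
% \nu(G) = maximum matching number, \tau(G) = minimum vertex cover number
\nu(G) = \tau(G)
```

The balanced-hypergraph setting extends this matching/cover duality beyond ordinary bipartite graphs.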
Regardless of size or destination, synthetic biology starts with comparably small information units, which need to be combined and properly arranged in order to achieve a certain goal. This may be the de novo synthesis of individual genes from oligonucleotides, a shuffling of protein domains in order to create novel biocatalysts, the assembly of multiple enzyme-encoding genes in metabolic pathway design, or strain development at the production stage. The CoLibry concept has been designed in order to close the gap between recombinant production of individual genes and genome editing.
Background
The application and understanding of statistics are very important for biomedical research and for clinical practice. This applies in particular to assessing the possibilities of the wide range of diagnostic and therapeutic options in glaucoma. The apparent complexity of statistics, which at times seems to contradict common sense, together with its only hesitant acceptance among many physicians, can lead to deliberate and inadvertent manipulation in data presentation and interpretation.
Objective
The aim is a comprehensible presentation of some typical errors in the handling of medical statistical data.
Materials and methods
Using hypothetical examples from glaucoma diagnostics, the effect of a hypotensive drug is presented and the results of a diagnostic test are assessed. The most typical areas of statistical application and sources of error are analysed in detail and in an accessible manner.
Results
Mechanisms of data manipulation and incorrect data interpretation are elucidated. Typical sources of error in statistical evaluation and data presentation are explained.
Conclusions
The practical examples presented demonstrate the need to understand the fundamentals of statistics and to be able to apply them correctly. A lack of basic knowledge, or only superficial knowledge, of medical statistics can lead to serious misunderstandings and wrong decisions in medical research as well as in clinical practice.
Purpose
The most commonly used mobility assessments for screening the risk of falls among older adults are rating scales such as the Tinetti performance-oriented mobility assessment (POMA). However, its correlation with falls is not always predictable, and disadvantages of the scale include the difficulty of assessing many of the items on a 3-point scale and poor specificity. The purpose of this study was to describe the ability of the new Aachen Mobility and Balance Index (AMBI) to discriminate between subjects with a fall history and subjects without such events, in comparison to the Tinetti POMA Scale.
Methods
For this prospective cohort study, 24 participants in the study group and 10 in the control group were selected from a population of patients in our hospital who had met the stringent inclusion criteria. Both groups completed the Tinetti POMA Scale (gait and balance components) and the AMBI (tandem stance, tandem walk, ten-meter walk test, sit-to-stand with five repetitions, 360° turns, timed-up-and-go test and measurement of the dominant hand grip strength). A history of falls and hospitalization in the past year was evaluated retrospectively. The relationships among the mobility tests were examined with Bland–Altman analysis. Receiver operating characteristic curves, sensitivity and specificity were calculated.
Results
The study showed a strong negative correlation between the AMBI (17 points max., highest fall risk) and the Tinetti POMA Scale (28 points max., lowest fall risk; r = −0.78, p < 0.001), with excellent discrimination between community-dwelling older people and a younger control group. However, there were no differences in any of the mobility and balance measurements between participants with and without a fall history, and both tests performed comparably (AMBI vs. Tinetti POMA Scale: AUC 0.570 vs. 0.598; p = 0.762). The Tinetti POMA Scale (cut-off <20 points) showed a sensitivity of 0.45 and a specificity of 0.69, the AMBI a sensitivity of 0.64 and a specificity of 0.46 (cut-off >5 points).
Conclusion
The AMBI comprises mobility and balance tasks with increasing difficulty as well as a measurement of the dominant hand-grip strength. Its ability to identify fallers was comparable to the Tinetti POMA Scale. However, both measurement sets showed shortcomings in discrimination between fallers and non-fallers based on a self-reported retrospective falls-status.
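The reported sensitivity and specificity values follow from dichotomizing each score at its cut-off and comparing against the retrospective fall history. A minimal sketch with made-up scores (assuming, per the ">5 points" cut-off, that higher AMBI scores indicate higher risk):

```python
def sens_spec(scores, fell, cutoff):
    """Sensitivity and specificity of classifying 'score > cutoff' as at-risk,
    evaluated against a binary fall history (True = reported a fall)."""
    tp = sum(s > cutoff and f for s, f in zip(scores, fell))
    fn = sum(s <= cutoff and f for s, f in zip(scores, fell))
    tn = sum(s <= cutoff and not f for s, f in zip(scores, fell))
    fp = sum(s > cutoff and not f for s, f in zip(scores, fell))
    return tp / (tp + fn), tn / (tn + fp)

# hypothetical AMBI scores and fall histories, cut-off > 5 points
sens, spec = sens_spec([7, 6, 3, 2], [True, False, True, False], 5)
print(sens, spec)  # 0.5 0.5
```

Sweeping the cut-off over all observed scores and plotting sensitivity against 1 − specificity yields the ROC curves whose areas (AUC) the abstract compares.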
While plate fixation of proximal ulna fractures might lead to superior clinical results compared to tension band wiring, regular plates represent an established risk factor for wound complications. The olecranon double plates (Medartis, Basel, CH) might decrease complications related to the osteosynthesis because of their low profile and better anatomical fit. This study aimed to evaluate the biomechanical performance and clinical results of the olecranon double plates.
Surgical reconstruction of the interosseous membrane (IOM) could restore longitudinal forearm stability to avoid persisting disability due to capituloradial and ulnocarpal impingement in Essex Lopresti lesions. This biomechanical study aimed to assess longitudinal forearm stability of intact specimens, after sectioning of the IOM and after reconstruction with a TightRope construct using either a single or double bundle technique.