We conducted a scoping review for active learning in the domain of natural language processing (NLP), which we summarize in accordance with the PRISMA-ScR guidelines as follows:
Objective: Identify active learning strategies that were proposed for entity recognition and their evaluation environments (datasets, metrics, hardware, execution time).
Design: We used Scopus and ACM as our search engines. We compared the results with two literature surveys to assess the search quality. We included peer-reviewed English publications introducing or comparing active learning strategies for entity recognition.
Results: We analyzed 62 relevant papers and identified 106 active learning strategies. We grouped them into three categories: exploitation-based (60x), exploration-based (14x), and hybrid strategies (32x). We found that all studies used the F1-score as an evaluation metric. Information about hardware (6x) and execution time (13x) was only occasionally included. The 62 papers used 57 different datasets to evaluate their respective strategies. Most datasets contained newspaper articles or biomedical/medical data. Our analysis revealed that 26 out of 57 datasets are publicly accessible.
Conclusion: Numerous active learning strategies have been identified, along with significant open questions that still need to be addressed. Researchers and practitioners face difficulties when making data-driven decisions about which active learning strategy to adopt. Conducting comprehensive empirical comparisons using the evaluation environment proposed in this study could help establish best practices in the domain.
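As a concrete illustration of the exploitation-based category identified above, uncertainty sampling selects the unlabeled examples the current model is least confident about. A minimal sketch with toy data and a placeholder predictor (not taken from any of the surveyed papers):

```python
def least_confidence(probs):
    # Uncertainty score: 1 minus the highest class probability
    # (higher score = model is less confident).
    return 1.0 - max(probs)

def select_batch(pool, predict_proba, k):
    # Rank unlabeled examples by uncertainty, pick the top-k for annotation.
    ranked = sorted(pool, key=lambda x: least_confidence(predict_proba(x)),
                    reverse=True)
    return ranked[:k]

# Toy pool: three example ids with fixed, made-up class probabilities.
probs = {1: [0.9, 0.1], 2: [0.55, 0.45], 3: [0.7, 0.3]}
batch = select_batch([1, 2, 3], lambda x: probs[x], k=2)
print(batch)  # the two most uncertain examples
```

In a real active learning loop the selected batch would be labeled by an annotator, added to the training set, and the model retrained before the next selection round.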
The hot spots conjecture is only known to be true for special geometries. This paper shows numerically that the hot spots conjecture can fail for easy-to-construct bounded domains with one hole. The underlying eigenvalue problem for the Laplace equation with Neumann boundary condition is solved with boundary integral equations, yielding a non-linear eigenvalue problem. Its discretization via the boundary element collocation method in combination with the algorithm by Beyn yields highly accurate results for both the first non-zero eigenvalue and its corresponding eigenfunction, owing to superconvergence. Additionally, it is shown numerically that the ratio between the maximal/minimal value of the eigenfunction inside the domain and its maximal/minimal value on the boundary can be larger than 1 + 10⁻³. Finally, numerical examples of easy-to-construct domains with up to five holes are provided which fail the hot spots conjecture as well.
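For reference, the Neumann eigenvalue problem discussed above reads, in its standard formulation (not quoted from the paper itself):

```latex
\begin{aligned}
-\Delta u &= \lambda u && \text{in } \Omega,\\
\partial_n u &= 0 && \text{on } \partial\Omega .
\end{aligned}
```

The hot spots conjecture asserts that an eigenfunction $u_1$ associated with the first non-zero eigenvalue $\lambda_1$ attains its maximum and minimum on the boundary $\partial\Omega$; the paper's counterexamples exhibit domains where the extrema lie strictly inside.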
Objectives
Interest in cardiovascular magnetic resonance (CMR) at 7 T is motivated by the expected increase in spatial and temporal resolution, but the method is technically challenging. We examined the feasibility of cardiac chamber quantification at 7 T.
Methods
A stack of short axes covering the left ventricle was obtained in nine healthy male volunteers. At 1.5 T, steady-state free precession (SSFP) and fast gradient echo (FGRE) cine imaging with 7 mm slice thickness (STH) were used. At 7 T, FGRE with 7 mm and 4 mm STH were applied. End-diastolic volume, end-systolic volume, ejection fraction and mass were calculated.
Results
All 7 T examinations provided excellent blood/myocardium contrast for all slice directions. No significant difference was found regarding ejection fraction and cardiac volumes between SSFP at 1.5 T and FGRE at 7 T, while volumes obtained from FGRE at 1.5 T were underestimated. Cardiac mass derived from FGRE at 1.5 and 7 T was larger than obtained from SSFP at 1.5 T. Agreement of volumes and mass between SSFP at 1.5 T and FGRE improved for FGRE at 7 T when combined with an STH reduction to 4 mm.
Conclusions
This pilot study demonstrates that cardiac chamber quantification at 7 T using FGRE is feasible and agrees closely with SSFP at 1.5 T.
Objective
The purpose of this study is to (i) design a small and mobile Magnetic field ALert SEnsor (MALSE), (ii) carefully evaluate its sensors with respect to their consistency of activation/deactivation and their sensitivity to magnetic fields, and (iii) demonstrate the applicability of MALSE in 1.5 T, 3.0 T and 7.0 T MR fringe field environments.
Methods
MALSE comprises a set of reed sensors, which activate when exposed to a magnetic field. The activation/deactivation of the reed sensors was examined by moving them into and out of the fringe field generated by a 7 T MR system.
Results
The consistency with which individual reed sensors would activate at the same field strength was found to be 100% for the setup used. All of the reed switches investigated required a substantial drop in ambient magnetic field strength before they deactivated.
Conclusions
MALSE is a simple concept for alerting MRI staff when a ferromagnetic object is brought into fringe magnetic fields that exceed MALSE's activation field strength. MALSE can easily be attached to ferromagnetic objects in the vicinity of a scanner, thus creating a barrier against hazardous situations caused by ferromagnetic parts that should not enter the vicinity of an MR system.
New insights into the influence of pre-culture on robust solvent production of C. acetobutylicum
(2024)
Clostridia are known for their solvent production, especially the production of butanol. Given the projected depletion of fossil fuels, this is of great interest. The cultivation of clostridia is known to be challenging, and it is difficult to achieve reproducible results and robust processes. However, existing publications usually concentrate on the cultivation conditions of the main culture. In this paper, the influence of cryo-conservation and pre-culture on growth and solvent production in the resulting main cultivation is examined. A protocol was developed that leads to reproducible cultivations of Clostridium acetobutylicum. Detailed investigation of the cell conservation in cryo-cultures ensured reliable cell growth in the pre-culture. Moreover, a reason for the acid crash in the main culture was found, based on the cultivation conditions of the pre-culture. The critical parameter to avoid the acid crash and accomplish the shift to the solventogenesis of clostridia is the metabolic phase in which the cells of the pre-culture were at the time of inoculation of the main culture; this depends on the cultivation time of the pre-culture. Using cells from the exponential growth phase to inoculate the main culture leads to an acid crash. To achieve the solventogenic phase with butanol production, the inoculum should consist of older cells which are in the stationary growth phase. Considering these parameters, which affect the entire cultivation process, reproducible results and reliable solvent production are ensured.
Unmanned Aerial Vehicles (UAVs) constantly gain in versatility. However, more reliable path planning algorithms are required before fully autonomous UAV operation is possible. This work investigates the 3DVFH* algorithm and analyses its dependency on its cost function weights in 2400 environments. The analysis shows that the 3DVFH* can find a suitable path in every environment. However, a particular type of environment requires a specific choice of cost function weights. For a minimal failure probability, interdependencies between the weights of the cost function have to be considered. This dependency reduces the number of control parameters and simplifies the usage of the 3DVFH*. Weights for costs associated with vertical evasion (pitch cost) and vicinity to obstacles (obstacle cost) have the highest influence on the failure probability of the local path planner. Environments with mainly very tall buildings (like large American city centres) require a preference for horizontal avoidance manoeuvres (achieved with high pitch cost weights). In contrast, environments with medium-to-low buildings (like European city centres) benefit from vertical avoidance manoeuvres (achieved with low pitch cost weights). The cost of the vicinity to obstacles also plays an essential role and must be chosen adequately for the environment. Choosing these two weights well is sufficient to reduce the failure probability below 10%.
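The weighted cost function described above can be sketched as a simple weighted sum over cost terms. The weights and cost values below are illustrative placeholders, not the actual 3DVFH* implementation:

```python
def node_cost(costs, weights):
    # Total cost of a candidate path node as a weighted sum of its cost
    # terms, e.g. pitch (vertical evasion) and obstacle (vicinity) costs.
    return sum(weights[name] * value for name, value in costs.items())

# Tall-building environment: penalize vertical evasion (high pitch weight).
tall_city = {"pitch": 25.0, "obstacle": 5.0, "goal": 1.0}
# Low-building environment: allow vertical evasion (low pitch weight).
low_city = {"pitch": 2.0, "obstacle": 5.0, "goal": 1.0}

candidate = {"pitch": 1.0, "obstacle": 0.5, "goal": 3.0}
print(node_cost(candidate, tall_city))  # 30.5
print(node_cost(candidate, low_city))   # 7.5
```

With high pitch weights the planner prefers nodes that avoid climbing, reproducing the horizontal-evasion preference described for tall-building environments.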
Lifting propellers are of increasing interest for Advanced Air Mobility. All propellers and rotors are initially twisted beams, showing significant extension–twist coupling and centrifugal twisting. Torsional deformations severely impact aerodynamic performance. This paper presents a novel approach to assess different reasons for torsional deformations. A reduced-order model runs large parameter sweeps with algebraic formulations and numerical solution procedures. Generic beams represent three different propeller types for General Aviation, Commercial Aviation, and Advanced Air Mobility. Simulations include solid and hollow cross-sections made of aluminum, steel, and carbon fiber-reinforced polymer. The investigation shows that centrifugal twisting moments depend on both the elastic and initial twist. The determination of the centrifugal twisting moment solely based on the initial twist suffers from errors exceeding 5% in some cases. The nonlinear parts of the torsional rigidity do not significantly impact the overall torsional rigidity for the investigated propeller types. The extension–twist coupling related to the initial and elastic twist in combination with tension forces significantly impacts the net cross-sectional torsional loads. While the increase in torsional stiffness due to initial twist contributes to the overall stiffness for General and Commercial Aviation propellers, its contribution to the lift propeller’s stiffness is limited. The paper closes with the presentation of approximations for each effect identified as significant. Numerical evaluations are necessary to determine each effect for inhomogeneous cross-sections made of anisotropic material.
Objective: As high-field cardiac MRI (CMR) becomes more widespread, the propensity of ECG to interference from electromagnetic fields (EMF) and to magneto-hydrodynamic (MHD) effects increases, and with it the motivation for a CMR triggering alternative. This study explores the suitability of acoustic cardiac triggering (ACT) for left ventricular (LV) function assessment in healthy subjects (n = 14). Methods: Quantitative analysis of 2D CINE steady-state free precession (SSFP) images was conducted to compare ACT's performance with vector ECG (VCG). Endocardial border sharpness (EBS) was examined, paralleled by quantitative LV function assessment. Results: Unlike VCG, ACT provided signal traces free of interference from EMF or MHD effects. In the case of correct R-wave recognition, VCG-triggered 2D CINE SSFP was immune to cardiac motion effects, even at 3.0 T. However, VCG-triggered 2D CINE SSFP imaging was prone to cardiac motion and EBS degradation if R-wave misregistration occurred. ACT-triggered acquisitions yielded LV parameters (end-diastolic volume (EDV), end-systolic volume (ESV), stroke volume (SV), ejection fraction (EF) and left ventricular mass (LVM)) comparable with those derived from VCG-triggered acquisitions (1.5 T: ESV_VCG = (56 ± 17) ml, EDV_VCG = (151 ± 32) ml, LVM_VCG = (97 ± 27) g, SV_VCG = (94 ± 19) ml, EF_VCG = (63 ± 5)% cf. ESV_ACT = (56 ± 18) ml, EDV_ACT = (147 ± 36) ml, LVM_ACT = (102 ± 29) g, SV_ACT = (91 ± 22) ml, EF_ACT = (62 ± 6)%; 3.0 T: ESV_VCG = (55 ± 21) ml, EDV_VCG = (151 ± 32) ml, LVM_VCG = (101 ± 27) g, SV_VCG = (96 ± 15) ml, EF_VCG = (65 ± 7)% cf. ESV_ACT = (54 ± 20) ml, EDV_ACT = (146 ± 35) ml, LVM_ACT = (101 ± 30) g, SV_ACT = (92 ± 17) ml, EF_ACT = (64 ± 6)%). Conclusions: ACT's intrinsic insensitivity to interference from electromagnetic fields renders
N-Acyl-amino acids can act as mild biobased surfactants, which are used, e.g., in baby shampoos. However, their chemical synthesis needs acyl chlorides and does not meet sustainability criteria. Thus, the identification of biocatalysts to develop greener synthesis routes is desirable. We describe a novel aminoacylase from Paraburkholderia monticola DSM 100849 (PmAcy) which was identified, cloned, and evaluated for its N-acyl-amino acid synthesis potential. Soluble protein was obtained by expression in lactose autoinduction medium and co-expression of molecular chaperones GroEL/S. Strep-tag affinity purification enriched the enzyme 16-fold and yielded 15 mg pure enzyme from 100 mL of culture. Biochemical characterization revealed that PmAcy possesses beneficial traits for industrial application like high temperature and pH-stability. A heat activation of PmAcy was observed upon incubation at temperatures up to 80 °C. Hydrolytic activity of PmAcy was detected with several N-acyl-amino acids as substrates and exhibited the highest conversion rate of 773 U/mg with N-lauroyl-L-alanine at 75 °C. The enzyme preferred long-chain acyl-amino-acids and displayed hardly any activity with acetyl-amino acids. PmAcy was also capable of N-acyl-amino acid synthesis with good conversion rates. The best synthesis results were obtained with the cationic L-amino acids L-arginine and L-lysine as well as with L-leucine and L-phenylalanine. Exemplarily, L-phenylalanine was acylated with fatty acids of chain lengths from C8 to C18 with conversion rates of up to 75%. N-lauroyl-L-phenylalanine was purified by precipitation, and the structure of the reaction product was verified by LC–MS and NMR.
New European Union (EU) regulations for UAS operations require an operational risk analysis, which includes an estimation of the potential danger of the UAS crashing. A key parameter for the potential ground risk is the kinetic impact energy of the UAS. The kinetic energy depends on the impact velocity of the UAS and, therefore, on the aerodynamic drag and the weight during free fall. Hence, estimating the impact energy of a UAS requires an accurate drag estimation of the UAS in that state. The paper at hand presents the aerodynamic drag estimation of small-scale multirotor UAS. Multirotor UAS of various sizes and configurations were analysed with a fully unsteady Reynolds-averaged Navier–Stokes approach. These simulations included different velocities and various fuselage pitch angles of the UAS. The results were compared against force measurements performed in a subsonic wind tunnel and showed good agreement. Furthermore, the influence of the UAS's fuselage pitch angle as well as the influence of fixed and free spinning propellers on the aerodynamic drag was analysed. Free spinning propellers may increase the drag by up to 110%, depending on the fuselage pitch angle. Increasing the fuselage pitch angle of the UAS lowers the drag by 40% up to 85%, depending on the UAS. The data presented in this paper allow for increased accuracy of ground risk assessments.
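The drag/impact-energy relationship above follows from equating weight and drag at terminal velocity. A small illustration with made-up UAS parameters (mass and drag area are hypothetical, not values from the paper):

```python
import math

def terminal_velocity(mass, cd_area, rho=1.225, g=9.81):
    # At terminal velocity, drag 0.5*rho*v^2*(Cd*A) balances weight m*g,
    # so v = sqrt(2*m*g / (rho*Cd*A)).
    return math.sqrt(2.0 * mass * g / (rho * cd_area))

def impact_energy(mass, velocity):
    # Kinetic energy at impact: E = 0.5*m*v^2.
    return 0.5 * mass * velocity**2

m = 2.0     # kg, hypothetical multirotor mass
cda = 0.05  # m^2, hypothetical drag area Cd*A
v = terminal_velocity(m, cda)
print(round(v, 1), "m/s")                 # ~25.3 m/s
print(round(impact_energy(m, v), 1), "J") # ~640.7 J
```

A larger drag area (e.g. from free spinning propellers) lowers the terminal velocity and therefore the kinetic impact energy, which is why accurate drag estimation matters for ground risk assessment.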
Obstacle avoidance is critical for unmanned aerial vehicles (UAVs) operating autonomously. Obstacle avoidance algorithms rely either on global environment data or on local sensor data. Local path planners react to unforeseen objects and plan purely on local sensor information. Similarly, animals need to find feasible paths based on local information about their surroundings. Therefore, their behavior is a valuable source of inspiration for path planning. Bumblebees tend to fly vertically over far-away obstacles and horizontally around close ones, implying two zones for different flight strategies depending on the distance to obstacles. This work enhances the local path planner 3DVFH* with this bio-inspired strategy. The algorithm alters the goal-driven function of the 3DVFH* to climb-preferring if obstacles are far away. Prior experiments with bumblebees led to two definitions of flight zone limits depending on the distance to obstacles, leading to two algorithm variants. Both variants reduce the probability of not reaching the goal of a 3DVFH* implementation in Matlab/Simulink. The best variant, 3DVFH*b-b, reduces this probability from 70.7% to 18.6% in city-like worlds using a strong vertical evasion strategy. Energy consumption is higher, and flight paths are longer compared to the algorithm version with a pronounced horizontal evasion tendency. A parameter study analyzes the effect of different weighting factors in the cost function. The best parameter combination shows a failure probability of 6.9% in city-like worlds and reduces energy consumption by 28%. Our findings demonstrate the potential of bio-inspired approaches for improving the performance of local path planning algorithms for UAVs.
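The two-zone, bumblebee-inspired behaviour described above amounts to a distance-dependent strategy switch. A minimal sketch (the 10 m zone limit is an illustrative placeholder, not a limit derived from the bumblebee experiments):

```python
def evasion_strategy(obstacle_distance, zone_limit=10.0):
    # Far obstacles: prefer climbing over them (vertical evasion).
    # Close obstacles: prefer flying around them (horizontal evasion).
    return "vertical" if obstacle_distance > zone_limit else "horizontal"

print(evasion_strategy(25.0))  # vertical
print(evasion_strategy(3.0))   # horizontal
```

In the actual planner this switch is realized by altering the goal-driven cost term rather than by a hard branch, but the zone logic is the same.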
With the prevalence of glucosamine- and chondroitin-containing dietary supplements for people with osteoarthritis in the marketplace, it is important to have an accurate and reproducible analytical method for the quantitation of these compounds in finished products. An NMR spectroscopic method based on both low-field (80 MHz) and high-field (500–600 MHz) NMR instrumentation was established, compared, and validated for the determination of chondroitin sulfate and glucosamine in dietary supplements. The proposed method was applied to the analysis of 20 different dietary supplements. In the majority of cases, quantification results obtained on the low-field NMR spectrometer are similar to those obtained with high-field 500–600 MHz NMR devices. Validation results in terms of accuracy, precision, reproducibility, limit of detection and recovery demonstrated that the developed method is fit for purpose for the marketed products. The NMR method was extended to the analysis of methylsulfonylmethane, the adulterant maltodextrin, acetate and inorganic ions. Low-field NMR can be a quicker and cheaper alternative to more expensive high-field NMR measurements for quality control of the investigated dietary supplements. High-field NMR instrumentation can be more favorable for samples with complex composition due to better resolution, simultaneously giving the possibility of analysis of inorganic species such as potassium and chloride.
The number of electric vehicles increases steadily while the space for extending the charging infrastructure is limited. Particularly in urban areas, where parking spaces in attractive locations are scarce, opportunities to set up new charging stations are very limited. This leads to an overload of some very attractive charging stations and an underutilization of less attractive ones. Against this background, the paper at hand presents the design of an e-vehicle reservation system that aims at distributing the utilization of the charging infrastructure, particularly in urban areas. By applying a design science approach, the requirements for a reservation-based utilization approach are elicited, and a model for a suitable distribution approach and its instantiation are developed. The artefact is evaluated by simulating the distribution effects based on real charging station utilization data.
The paper deals with the asymptotic behaviour of estimators, statistical tests and confidence intervals for L²-distances to uniformity based on the empirical distribution function, the integrated empirical distribution function and the integrated empirical survival function. Approximations of power functions, confidence intervals for the L²-distances and statistical neighbourhood-of-uniformity validation tests are obtained as main applications. The finite sample behaviour of the procedures is illustrated by a simulation study.
The efficiency concepts of Bahadur and Pitman are used to compare the Wilcoxon tests in paired and independent survey samples. A comparison through the length of corresponding confidence intervals is also done. Simple conditions characterizing the dominance of a procedure are derived. Statistical tests for checking these conditions are suggested and discussed.
We discuss the testing problem of homogeneity of the marginal distributions of a continuous bivariate distribution based on a paired sample with possibly missing components (missing completely at random). Applying the well-known two-sample Cramér–von Mises distance to the remaining data, we determine the limiting null distribution of our test statistic in this situation. It is seen that a new resampling approach is appropriate for the approximation of the unknown null distribution. We prove that the resulting test asymptotically reaches the significance level and is consistent. Properties of the test under local alternatives are pointed out as well. Simulations investigate the quality of the approximation and the power of the new approach in the finite sample case. As an illustration we apply the test to real data sets.
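For reference, the classical two-sample Cramér–von Mises distance underlying such statistics can be stated in its standard form (with $F_n$ and $G_m$ the two empirical distribution functions and $H_{n+m}$ the pooled one; this is the textbook formulation, not quoted from the paper):

```latex
T_{n,m} \;=\; \frac{nm}{n+m}
\int_{-\infty}^{\infty} \bigl( F_n(x) - G_m(x) \bigr)^2 \, \mathrm{d}H_{n+m}(x)
```

Large values of $T_{n,m}$ indicate a discrepancy between the two marginal distributions and lead to rejection of homogeneity.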
This paper considers a paired data framework and discusses the question of marginal homogeneity of bivariate high-dimensional or functional data. The related testing problem can be embedded into a more general setting for paired random variables taking values in a general Hilbert space. To address this problem, a Cramér–von Mises type test statistic is applied, and a bootstrap procedure is suggested to obtain critical values and, finally, a consistent test. The desired properties of a bootstrap test, asymptotic exactness under the null hypothesis and consistency under alternatives, are derived. Simulations show the quality of the test in the finite sample case. A possible application is the comparison of two possibly dependent stock market returns based on functional data. The approach is demonstrated using historical data for different stock market indices.
The potential of electronic markets in enabling innovative product bundles through flexible and sustainable partnerships is not yet fully exploited in the telecommunication industry. One reason is that bundling requires seamless de-assembling and re-assembling of business processes, whilst processes in telecommunication companies are often product-dependent and hard to virtualize. We propose a framework for the planning of the virtualization of processes, intended to assist the decision maker in prioritizing the processes to be virtualized: (a) we transfer the virtualization pre-requisites stated by the Process Virtualization Theory in the context of customer-oriented processes in the telecommunication industry and assess their importance in this context, (b) we derive IT-oriented requirements for the removal of virtualization barriers and highlight their demand on changes at different levels of the organization. We present a first evaluation of our approach in a case study and report on lessons learned and further steps to be performed.
The telecommunications market is undergoing substantial change. New business models, innovative services, and technologies require reengineering, transformation, and process standardization. With the Enhanced Telecom Operation Map (eTOM), the TM Forum provides an internationally recognized de facto reference process framework based on the specific requirements and characteristics of the telecommunications industry. However, this reference framework contains only a hierarchical collection of processes at different levels of abstraction. A control view, understood as a sequential ordering of activities and thus a real process flow, is missing, as is an end-to-end view of the customer. In this article, we extend the eTOM reference model with reference process flows, in which we abstract and generalize knowledge about processes in telecommunications companies. The reference process flows support companies in the structured and transparent (re-)design of their processes. We demonstrate the applicability and usefulness of our reference process flows in two case studies and evaluate them against criteria for the assessment of reference models. The reference process flows were adopted by the TM Forum into the standard and published as part of eTOM version 9. In addition, we discuss the components of our approach that can also be applied outside the telecommunications industry.
In the context of digitalization, the increasing automation of previously manual process steps is an aspect that will have a massive impact on the future world of work. High expectations are attached to the use of software robots for process automation. Among implementation approaches, the current discussion is shaped in particular by Robotic Process Automation (RPA) and chatbots. Both approaches pursue the common goal of a 1:1 automation of human actions and thereby a direct replacement of employees by machines. With RPA, processes are learned by software robots and executed automatically. RPA robots emulate the inputs on the existing presentation layer, so no changes to existing application systems are necessary. Various RPA solutions are already offered on the market as software products. Chatbots realize the input and output of application systems via natural language. This makes it possible to automate communication external to the company (e.g., with customers) as well as internal assistance tasks. The article discusses the effects of software robots on the world of work using application examples and explains the company-specific decision on the use of software robots based on effectiveness and efficiency goals.
In this study, a recently proposed NMR standardization approach by 2H integral of deuterated solvent for quantitative multicomponent analysis of complex mixtures is presented. As a proof of principle, the existing NMR routine for the analysis of Aloe vera products was modified. Instead of using absolute integrals of targeted compounds and internal standard (nicotinamide) from 1H-NMR spectra, quantification was performed based on the ratio of a particular 1H-NMR compound integral and 2H-NMR signal of deuterated solvent D2O. Validation characteristics (linearity, repeatability, accuracy) were evaluated and the results showed that the method has the same precision as internal standardization in case of multicomponent screening. Moreover, a dehydration process by freeze drying is not necessary for the new routine. Now, our NMR profiling of A. vera products needs only limited sample preparation and data processing. The new standardization methodology provides an appealing alternative for multicomponent NMR screening. In general, this novel approach, using standardization by 2H integral, benefits from reduced sample preparation steps and uncertainties, and is recommended in different application areas (purity determination, forensics, pharmaceutical analysis, etc.).
We study the possibility to fabricate an arbitrary phase mask in a one-step laser-writing process inside the volume of an optical glass substrate. We derive the phase mask from a Gerchberg–Saxton-type algorithm as an array and create each individual phase shift using a refractive index modification of variable axial length. We realize the variable axial length by superimposing refractive index modifications induced by an ultra-short pulsed laser at different focusing depths. Each single modification is created by applying 1000 pulses with 15 μJ pulse energy at 100 kHz to a fixed spot of 25 μm diameter, and the focus is then shifted axially in steps of 10 μm. With several proof-of-principle examples, we show the feasibility of our method. In particular, we determine the induced refractive index change to be about Δn = 1.5·10⁻³. We also determine our current limitations by calculating the overlap in the form of a scalar product, and we discuss possible future improvements.
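The phase-mask derivation mentioned above follows a Gerchberg–Saxton-type iteration, which alternates between the source and Fourier planes while enforcing the known amplitude in each. A generic textbook version (not the authors' implementation; the toy target and iteration count are illustrative):

```python
import numpy as np

def gerchberg_saxton(target_amp, iterations=50, seed=0):
    # Iteratively find a phase-only mask whose far field (FFT) matches
    # the desired amplitude pattern target_amp.
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 2.0 * np.pi, target_amp.shape)
    field = np.exp(1j * phase)  # unit-amplitude source plane, random phase
    for _ in range(iterations):
        far = np.fft.fft2(field)
        # Enforce the target amplitude in the Fourier plane, keep the phase.
        far = target_amp * np.exp(1j * np.angle(far))
        near = np.fft.ifft2(far)
        # Enforce unit amplitude in the source plane, keep the phase.
        field = np.exp(1j * np.angle(near))
    return np.angle(field)  # the phase mask to be laser-written

# Toy target: a single bright off-axis spot.
target = np.zeros((32, 32))
target[8, 8] = 1.0
mask = gerchberg_saxton(target)
print(mask.shape)
```

Each entry of the returned array corresponds to one phase shift of the mask; in the paper these shifts are realized physically via refractive index modifications of variable axial length.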
This study investigated the anaerobic digestion of an algal–bacterial biofilm grown in artificial wastewater in an Algal Turf Scrubber (ATS). The ATS system was located in a greenhouse (50°54′19ʺN, 6°24′55ʺE, Germany) and was exposed to seasonal conditions during the experiment period. The methane (CH4) potential of untreated algal–bacterial biofilm (UAB) and thermally pretreated biofilm (PAB) using different microbial inocula was determined by anaerobic batch fermentation. Methane productivity of UAB differed significantly between microbial inocula of digested wastepaper, a mixture of manure and maize silage, anaerobic sewage sludge, and percolated green waste. UAB using sewage sludge as inoculum showed the highest methane productivity. The share of methane in biogas was dependent on the inoculum. Using PAB, a strong positive impact on methane productivity was identified for the digested wastepaper (116.4%) and the mixture of manure and maize silage (107.4%) inocula. By contrast, the methane yield was significantly reduced for the digested anaerobic sewage sludge (50.6%) and percolated green waste (43.5%) inocula. To further evaluate the potential of algal–bacterial biofilm for biogas production in wastewater treatment and biogas plants in a circular bioeconomy, scale-up calculations were conducted. It was found that a 0.116 km² ATS would be required for an average municipal wastewater treatment plant, which can be viewed as problematic in terms of space consumption. However, a substantial amount of energy surplus (4.7–12.5 MWh a⁻¹) can be gained through the addition of algal–bacterial biomass to the anaerobic digester of a municipal wastewater treatment plant. Wastewater treatment and subsequent energy production through algae show dominance over conventional technologies.
Sleep spindles – function, detection, and use as a biomarker in psychiatric diagnostics
(2022)
Background:
The sleep spindle is a graphoelement of the electroencephalogram (EEG) that can be observed during light and deep sleep. Changes in spindle activity have been described for various psychiatric disorders. Owing to their relatively constant properties, sleep spindles show potential as a biomarker in psychiatric diagnostics.
Method:
This article provides an overview of the state of the science on the properties and functions of sleep spindles as well as on reported changes in spindle activity in psychiatric disorders. Various methodological approaches and perspectives for spindle detection are discussed with regard to their application potential in psychiatric diagnostics.
Results and conclusion:
While changes in spindle activity in psychiatric disorders have been described, their exact potential for psychiatric diagnostics has not yet been sufficiently investigated. Progress in this area is currently held back by resource-intensive and error-prone methods for manual or automated spindle detection. Newer detection approaches based on deep learning methods could overcome the difficulties of previous detection methods and thus open up new possibilities for practical
Introduction
With regard to surgical training, the reproducible simulation of life-like proximal humerus fractures in human cadaveric specimens is desirable. The aim of the present study was to develop a technique that allows simulation of realistic proximal humerus fractures and to analyse the influence of rotator cuff preload on the generated lesions with regard to fracture configuration.
Materials and methods
Ten cadaveric specimens (6 left, 4 right) were fractured in two groups using a custom-made drop-test bench. Five specimens were fractured without rotator cuff preload, while the other five were fractured with the tendons of the rotator cuff preloaded with 2 kg each. The humeral shaft and the shortened scapula were potted. The humerus was positioned at 90° of abduction and 10° of internal rotation to simulate a fall on the elevated arm. In two specimens of each group, the emergence of the fractures was documented with high-speed video imaging. Pre-fracture radiographs were taken to evaluate the deltoid-tuberosity index as a measure of bone density. Post-fracture X-rays and CT scans were performed to define the exact fracture configurations. Neer’s classification was used to analyse the fractures.
Results
In all ten cadaveric specimens, life-like proximal humerus fractures were achieved. Each group yielded two three-part and three four-part fractures. Preloading the rotator cuff muscles had no further influence on the fracture configuration. High-speed videos of the fracture simulation revealed identical fracture mechanisms in both groups. We observed a two-step fracture mechanism: initial impaction of the head segment against the glenoid, followed by fracturing of the head and the tuberosities, and then further impaction of the shaft against the acromion, which led to separation of the tuberosities.
Conclusion
A high-energy axial impulse can reliably induce realistic proximal humerus fractures in cadaveric specimens. The preload of the rotator cuff muscles had no influence on the initial fracture configuration; fracture simulation in the proximal humerus is therefore less elaborate. Using the presented technique, pre-fractured specimens are available for real-life surgical education.
Plant viruses are major contributors to crop losses and cause high economic costs worldwide. For reliable, on-site, and early detection of plant viral diseases, portable biosensors are of great interest. In this study, a field-effect SiO2-gate electrolyte-insulator-semiconductor (EIS) sensor was utilized for the label-free electrostatic detection of tobacco mosaic virus (TMV) particles as a model plant pathogen. The capacitive EIS sensor was characterized regarding its TMV sensitivity by means of the constant-capacitance method. The EIS sensor was able to detect biotinylated TMV particles from a solution with a TMV concentration as low as 0.025 nM. A good correlation was observed between the registered EIS sensor signal and the density of adsorbed TMV particles assessed from scanning electron microscopy images of the SiO2-gate chip surface. Additionally, the isoelectric point of the biotinylated TMV particles was determined via zeta potential measurements, and the influence of the ionic strength of the measurement solution on the TMV-modified EIS sensor signal was studied.
This study reviews the practice of brake tests in freight railways, which is time-consuming and unable to detect certain failure types. Public incident reports are analysed to derive a reasonable brake test hardware and communication architecture, which aims to provide automatic brake tests at lower cost than current solutions. The proposed solution relies exclusively on brake pipe and brake cylinder pressure sensors, a brake release position switch, and radio communication via standard protocols. The approach is embedded in the Wagon 4.0 concept, a holistic approach to a smart freight wagon. The reduction of manual processes yields a strong incentive due to high savings in manual labour and increased productivity.
In its Art. 3, the EU General Data Protection Regulation (GDPR) governs the territorial scope of data protection law and explicitly targets services offered by non-European providers as well. The discussion to date has concentrated primarily on the newly introduced marketplace principle; the largely untouched establishment principle, and in particular the problems arising from its unchanged retention, have so far not been examined. The following article attempts a systematic analysis of a topic that has been discussed in part controversially and in part hardly at all.
This study focuses on thermoelectric elements (TEE) as an alternative for room temperature control. TEE are semiconductor devices that can provide heating and cooling via a heat pump effect, without direct noise emissions or refrigerant use. An efficiency evaluation of the optimal operating mode is carried out for different numbers of TEE, ambient temperatures, and heating loads. The influence of an additional heat recovery unit on system efficiency and of an unevenly distributed heating demand is examined. The results show that TEE can provide heat at a coefficient of performance (COP) greater than one, especially for small heating demands and high ambient temperatures. The efficiency increases with the number of elements in the system and is subject to economies of scale. The best COP exceeds six at optimal operating conditions. An additional heat recovery unit proves beneficial for low ambient temperatures and systems with few TEE. It makes COPs above one possible at ambient temperatures below 0 °C. The effect increases efficiency by at most 0.81 (from 1.90 to 2.71) at an ambient temperature 5 K below room temperature and a heating demand of Q̇_h = 100 W, but is subject to diseconomies of scale. Thermoelectric technology is a valuable option for electricity-based heat supply and can provide cooling and ventilation functions. A careful system design as well as an additional heat recovery unit significantly benefit the performance. This makes TEE superior to direct-current heating systems and competitive with heat pumps for small-scale applications focused on avoiding noise and harmful refrigerants.
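The reported COP behavior can be reproduced qualitatively with the standard lumped Peltier model, in which the delivered heat is the Seebeck term plus half the Joule heat minus the conduction loss. All module parameters below (Seebeck coefficient, resistance, thermal conductance) are assumptions for illustration, not data from the paper:

```python
# Lumped thermoelectric heat-pump model, heating mode -- illustrative sketch.
ALPHA = 0.05   # module Seebeck coefficient, V/K (assumed)
R_EL = 2.0     # electrical resistance, Ohm (assumed)
K_TH = 0.5     # thermal conductance hot->cold side, W/K (assumed)

def cop_heating(i, t_hot, dt):
    """COP in heating mode: heat delivered at the hot side over electrical power."""
    q_hot = ALPHA * t_hot * i + 0.5 * i**2 * R_EL - K_TH * dt   # W
    p_el = ALPHA * dt * i + i**2 * R_EL                         # W
    return q_hot / p_el

# Sweep the drive current to find the best COP at a small temperature lift
currents = [0.05 * n for n in range(1, 100)]
best_cop = max(cop_heating(i, t_hot=295.0, dt=5.0) for i in currents)
print(f"best COP ~ {best_cop:.2f}")
```

The sweep shows the qualitative trend from the abstract: for small temperature lifts the optimal COP is well above one, and it degrades as the lift (ambient temperature drop) grows.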
Planning the layout and operation of a technical system is a common task for an engineer. Typically, the workflow is divided into consecutive stages: first, the engineer designs the layout of the system, drawing on experience or heuristic methods; second, a control strategy is found, often optimized by simulation. This usually results in good operation of an unquestioned system topology. In contrast, we apply Operations Research (OR) methods to find a cost-optimal solution for both stages simultaneously via mixed-integer linear programming (MILP). Technical Operations Research (TOR) allows one to find a provably globally optimal solution within the model formulation. However, the modeling error due to the abstraction of physical reality remains unknown. We address this ubiquitous problem of OR methods by comparing our computational results with measurements in a test rig. For a practical test case we compute a topology and control strategy via MILP and verify that the objectives are met up to a deviation of 8.7%.
Purpose
Vascular risk factors and ocular perfusion are intensely debated in the pathogenesis of glaucoma. The Retinal Vessel Analyzer (RVA, IMEDOS Systems, Germany) allows noninvasive measurement of retinal vessel regulation. Significant differences, especially in the veins, between healthy subjects and patients suffering from glaucoma have previously been reported. In this pilot study we investigated whether localized vascular regulation is altered in glaucoma patients with altitudinal visual field defect asymmetry.
Methods
15 eyes of 12 glaucoma patients with advanced altitudinal visual field defect asymmetry were included. The mean defect was calculated for each hemisphere separately (−20.99 ± 10.49 dB in the more profound hemisphere vs −7.36 ± 3.97 dB in the less profound hemisphere). After pupil dilation, RVA measurements of retinal arteries and veins were conducted using the standard protocol. The superior and inferior retinal vessel reactivity were measured consecutively in each eye.
Results
Significant differences between the hemispheres were recorded in venous vessel constriction after flicker light stimulation and in the overall amplitude of the reaction (p < 0.04 and p < 0.02, respectively). Vessel reaction was higher in the hemisphere corresponding to the more advanced visual field defect. Arterial diameters reacted similarly but failed to reach statistical significance.
Conclusion
Localized retinal vessel regulation is significantly altered in glaucoma patients with asymmetric altitudinal visual field defects. Veins supplying the hemisphere concordant with the less profound visual field defect show diminished diameter changes. Vascular dysregulation might be particularly important in early glaucoma stages, prior to a significant visual field defect.
The application of mathematical optimization methods for water supply system design and operation provides the capacity to increase the energy efficiency and to lower the investment costs considerably. We present a system approach for the optimal design and operation of pumping systems in real-world high-rise buildings that is based on the usage of mixed-integer nonlinear and mixed-integer linear modeling approaches. In addition, we consider different booster station topologies, i.e. parallel and series-parallel central booster stations as well as decentral booster stations. To confirm the validity of the underlying optimization models with real-world system behavior, we additionally present validation results based on experiments conducted on a modularly constructed pumping test rig. Within the models we consider layout and control decisions for different load scenarios, leading to a Deterministic Equivalent of a two-stage stochastic optimization program. We use a piecewise linearization as well as a piecewise relaxation of the pumps’ characteristics to derive mixed-integer linear models. Besides the solution with off-the-shelf solvers, we present a problem specific exact solving algorithm to improve the computation time. Focusing on the efficient exploration of the solution space, we divide the problem into smaller subproblems, which partly can be cut off in the solution process. Furthermore, we discuss the performance and applicability of the solution approaches for real buildings and analyze the technical aspects of the solutions from an engineer’s point of view, keeping in mind the economically important trade-off between investment and operation costs.
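The piecewise linearization mentioned above can be sketched in a few lines: a pump's quadratic head curve H(Q) = a − bQ² is replaced by linear segments between breakpoints, which a MILP encodes with SOS2 or binary variables. The pump coefficients and breakpoints below are invented for illustration, not data from the study:

```python
# Piecewise linearization of a quadratic pump head curve -- illustrative sketch.
A, B = 50.0, 0.5                      # assumed pump-curve coefficients

def head(q):
    """'True' nonlinear pump characteristic H(Q) = A - B*Q^2 (head in m)."""
    return A - B * q * q

breakpoints = [0.0, 2.5, 5.0, 7.5, 10.0]   # flow breakpoints, assumed

def head_pwl(q):
    """Linear interpolation between breakpoints (MILP: SOS2/binary encoding)."""
    for q0, q1 in zip(breakpoints, breakpoints[1:]):
        if q0 <= q <= q1:
            t = (q - q0) / (q1 - q0)
            return (1 - t) * head(q0) + t * head(q1)
    raise ValueError("q outside linearized range")

# Maximum deviation of the linearization over a fine grid
err = max(abs(head(q) - head_pwl(q)) for q in [i / 100 for i in range(1001)])
print(f"max linearization error: {err:.3f} m")
```

For a quadratic curve the worst-case error per segment is B·h²/4 for segment width h, so adding breakpoints shrinks the error quadratically; a relaxation instead encloses the curve between such linear bounds.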
Cardiopulmonary bypass (CPB) is a standard technique for cardiac surgery, but comes with the risk of severe neurological complications (e.g. stroke) caused by embolisms and/or reduced cerebral perfusion. We report on an aortic cannula prototype design (optiCAN) with helical outflow and jet-splitting dispersion tip that could reduce the risk of embolic events and restores cerebral perfusion to 97.5% of physiological flow during CPB in vivo, whereas a commercial curved-tip cannula yields 74.6%. In further in vitro comparison, pressure loss and hemolysis parameters of optiCAN remain unaffected. Results are reproducibly confirmed in silico for an exemplary human aortic anatomy via computational fluid dynamics (CFD) simulations. Based on CFD simulations, we firstly show that optiCAN design improves aortic root washout, which reduces the risk of thromboembolism. Secondly, we identify regions of the aortic intima with increased risk of plaque release by correlating areas of enhanced plaque growth and high wall shear stresses (WSS). From this we propose another easy-to-manufacture cannula design (opti2CAN) that decreases areas burdened by high WSS, while preserving physiological cerebral flow and favorable hemodynamics. With this novel cannula design, we propose a cannulation option to reduce neurological complications and the prevalence of stroke in high-risk patients after CPB.
Previous studies optimized the dimensions of coaxial heat exchangers using constant mass flow rates as a boundary condition. They show a thermally optimal circular ring width of nearly zero. Hydraulically optimal is an inner-to-outer pipe radius ratio of 0.65 for turbulent and 0.68 for laminar flow types. In contrast, in this study, flow conditions in the circular ring are kept constant (a set of fixed Reynolds numbers) during optimization. This approach ensures fixed flow conditions and prevents inappropriately high or low mass flow rates. The optimization is carried out for three objectives: maximum energy gain, minimum hydraulic effort, and optimum net-exergy balance. The optimization changes the inner pipe radius and the mass flow rate, but not the Reynolds number of the circular ring. The thermal calculations are based on Hellström’s borehole resistance, and the hydraulic optimization on individually calculated linear loss-of-head coefficients. Increasing the inner pipe radius results in decreased hydraulic losses in the inner pipe but increased losses in the circular ring. The net-exergy difference is a key performance indicator that combines the thermal and hydraulic calculations: it is the difference between thermal exergy flux and hydraulic effort. From a thermal perspective, the result is an optimal width of the circular ring of nearly zero. The hydraulically optimal inner pipe radius is 54% of the outer pipe radius for laminar flow and 60% for turbulent flow scenarios. Net-exergetic optimization shows a predominant influence of hydraulic losses, especially for small temperature gains. The exact result depends on the earth’s thermal properties and the flow type. Conclusively, the design of coaxial geothermal probes should focus on the hydraulic optimum and take the thermal optimum as a secondary criterion, due to the dominating hydraulics.
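The hydraulic trade-off (larger inner pipe lowers inner-pipe losses but raises annulus losses) can be sketched with a crude laminar series-flow model: the same flow passes down the annulus and up the inner pipe, so total loss is a Hagen–Poiseuille term plus the laminar annulus solution. Everything is normalized to outer radius 1; this toy model is far simpler than the borehole-resistance approach of the study and does not reproduce its exact optima:

```python
import math

# Crude laminar hydraulic model of a coaxial probe -- illustrative only.
def relative_loss(x):
    """Combined loss factor for inner/outer pipe radius ratio x (0 < x < 1)."""
    inner = 1.0 / x**4                                        # Hagen-Poiseuille pipe
    # laminar annulus geometry factor (outer radius normalized to 1)
    annulus_geom = 1.0 - x**4 - (1.0 - x**2) ** 2 / math.log(1.0 / x)
    return inner + 1.0 / annulus_geom

ratios = [i / 1000 for i in range(200, 901)]                  # sweep 0.20 .. 0.90
best_ratio = min(ratios, key=relative_loss)
print(f"hydraulically optimal radius ratio ~ {best_ratio:.3f}")
```

Even this toy model lands in the mid-0.5 range rather than at either geometric extreme, mirroring the paper's finding that the hydraulic optimum sits at an intermediate radius ratio.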
The paper presents the derivation of a new equivalent skin friction coefficient for estimating the parasitic drag of short-to-medium-range fixed-wing unmanned aircraft. The new coefficient is derived from an aerodynamic analysis of ten different unmanned aircraft used for surveillance, reconnaissance, and search and rescue missions. The aircraft are simulated using a validated unsteady Reynolds-averaged Navier–Stokes approach. The parasitic drag of these UAVs is significantly influenced by the presence of miscellaneous components like fixed landing gears or electro-optical sensor turrets. These components are responsible for almost half of an unmanned aircraft’s total parasitic drag. The new equivalent skin friction coefficient accounts for these effects and is significantly higher compared to other aircraft categories. It is used to initially size an unmanned aircraft for a typical reconnaissance mission. The improved parasitic drag estimation yields a much heavier unmanned aircraft when compared to the sizing results using available drag data of manned aircraft.
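An equivalent skin friction coefficient enters drag estimation through the relation C_D0 = C_fe · S_wet / S_ref, i.e. D0 = q · C_fe · S_wet. The sketch below evaluates this with placeholder numbers; the coefficient value and aircraft data are assumptions for illustration, not the coefficient derived in the paper:

```python
# Parasitic drag from an equivalent skin friction coefficient:
#   D0 = q * C_fe * S_wet, with q = 0.5 * rho * V^2.
# All numbers below are assumed for demonstration.

def parasite_drag(rho, v, c_fe, s_wet):
    """Parasitic drag force in N from the equivalent-skin-friction method."""
    q = 0.5 * rho * v**2             # dynamic pressure, Pa
    return q * c_fe * s_wet

d0 = parasite_drag(
    rho=1.112,      # air density at ~1 km altitude, kg/m^3
    v=30.0,         # cruise speed, m/s (assumed)
    c_fe=0.009,     # equivalent skin friction coefficient (assumed, placeholder)
    s_wet=6.5,      # wetted area, m^2 (assumed)
)
print(f"parasitic drag ~ {d0:.1f} N")
```

The paper's point is that a C_fe calibrated on manned aircraft would be too low here: drag-rich components such as fixed gear and sensor turrets push the UAV-specific coefficient, and therefore D0, substantially higher.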
Limited IT resources, missing software interfaces, or outdated and complex legacy system landscapes often slow down the automation of business processes. Robotic Process Automation (RPA) is a promising method for automating business processes at the user-interface level without major system interventions and for eliminating media discontinuities. Selecting suitable processes is decisive for the success of RPA projects. This article provides selection criteria for this purpose, derived from a qualitative content analysis of eleven interviews with RPA experts from the insurance sector. The result is a weighted list of seven dimensions and 51 process criteria that favour automation with software robots or whose non-fulfilment impedes or even prevents implementation. The three most important criteria for selecting business processes for automation by means of RPA are relieving the employees involved in the process (employee overload), the executability of the process by rules (rule-based process control), and a positive cost-benefit comparison. Practitioners can use these criteria to make a systematic selection of RPA-relevant processes. From a scientific perspective, the results provide a basis for explaining the success and failure of RPA projects.
Game-based learning is a promising approach to anti-phishing education, as it fosters motivation and can help reduce the perceived difficulty of the educational material. Over the years, several prototypes for game-based applications have been proposed that follow different approaches to content selection, presentation, and game mechanics. In this paper, a literature and product review of existing learning games is presented. Based on research papers and accessible applications, an in-depth analysis was conducted, encompassing target groups, educational contexts, learning goals based on Bloom’s Revised Taxonomy, and learning content. As a result of this review, we created the publications on games (POG) data set for the domain of anti-phishing education. While there are games that can convey factual and conceptual knowledge, we find that most games are either unavailable, fail to convey procedural knowledge, or lack technical depth. Thus, we identify potential areas of improvement for games suitable for end users in informal learning contexts.
This paper compares several blade element theory (BET) method-based propeller simulation tools, including an evaluation against static propeller ground tests and high-fidelity Reynolds-averaged Navier–Stokes (RANS) simulations. Two proprietary propeller geometries for paraglider applications are analysed in static and flight conditions. The RANS simulations are validated with the static test data and used as a reference for comparing the BET in flight conditions. The comparison includes the analysis of varying 2D aerodynamic airfoil parameters and different induced velocity calculation methods. The evaluation of the BET propeller simulation tools shows the strength of the BET tools compared to RANS simulations. The RANS simulations underpredict static experimental data within 10% relative error, while appropriate BET tools overpredict the RANS results by 15–20% relative error. A variation in 2D aerodynamic data demonstrates the need for highly accurate 2D data to obtain accurate BET results. The nonlinear BET coupled with XFOIL for the 2D aerodynamic data matches best with RANS in static operation and flight conditions. The novel BET tool PropCODE combines both approaches and offers further correction models for highly accurate static and flight condition results.
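A minimal blade-element sketch conveys what such BET tools do: each annulus of the blade gets a 2D lift estimate, coupled to a momentum-theory induced velocity by fixed-point iteration. Geometry, twist, and the flat-plate polar (cl = 2π·α) below are all invented, and real tools replace the polar with measured or XFOIL-generated 2D data, as the abstract discusses:

```python
import math

# Minimal static blade-element/momentum sketch -- all data assumed.
RHO = 1.225          # air density, kg/m^3
B = 2                # number of blades
R = 0.5              # propeller radius, m
CHORD = 0.04         # constant blade chord, m (assumed)
OMEGA = 200.0        # rotational speed, rad/s (assumed)

def twist(r):
    """Blade pitch angle in rad, assumed linear twist distribution."""
    return 0.35 * (1.0 - 0.5 * r / R)

def thrust_static(n_elem=50):
    dr = R / n_elem
    total = 0.0
    for i in range(n_elem):
        r = (i + 0.5) * dr
        if r / R < 0.15:                 # skip hub region
            continue
        vi = 1.0                         # induced-velocity starting guess, m/s
        for _ in range(100):             # relaxed fixed-point iteration
            phi = math.atan2(vi, OMEGA * r)          # inflow angle
            alpha = twist(r) - phi                   # local angle of attack
            cl = 2.0 * math.pi * alpha               # flat-plate 2D polar
            w2 = vi**2 + (OMEGA * r) ** 2            # local velocity squared
            dT = 0.5 * RHO * w2 * CHORD * cl * math.cos(phi) * B * dr
            # momentum theory on the same annulus: dT = 4*pi*r*rho*vi^2*dr
            vi_new = math.sqrt(max(dT, 0.0) / (4.0 * math.pi * r * RHO * dr))
            vi = 0.5 * vi + 0.5 * vi_new
        total += dT
    return total

print(f"static thrust ~ {thrust_static():.1f} N")
```

The sensitivity noted in the abstract is visible even here: swapping the idealized polar for accurate 2D airfoil data changes every element's dT, which is why BET accuracy hinges on the quality of the 2D aerodynamic input.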
Objective
In local SAR compression algorithms, the overestimation is generally not linearly dependent on the actual local SAR. This can lead to a large relative overestimation at low actual SAR values, unnecessarily constraining transmit array performance.
Method
Two strategies are proposed to reduce the maximum relative overestimation for a given number of virtual observation points (VOPs). The first strategy uses an overestimation matrix that roughly approximates the actual local SAR; the second uses a small set of pre-calculated VOPs as the overestimation term for the compression.
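The dominance test underlying VOP compression can be sketched with toy 2×2 real symmetric matrices: a SAR matrix S is covered by a VOP V with overestimation term eps·I if V + eps·I − S is positive semidefinite. Real Q-matrices are complex Hermitian and far larger, and the greedy loop below is a simplification for illustration, not the authors' algorithm:

```python
# Toy VOP-style compression sketch over 2x2 real symmetric matrices.
def is_psd_2x2(m):
    """A 2x2 symmetric matrix is PSD iff trace >= 0 and det >= 0."""
    (a, b), (c, d) = m
    return a + d >= -1e-12 and a * d - b * c >= -1e-12

def dominated(s, v, eps):
    """True if V + eps*I - S is positive semidefinite."""
    diff = [[v[0][0] + eps - s[0][0], v[0][1] - s[0][1]],
            [v[1][0] - s[1][0], v[1][1] + eps - s[1][1]]]
    return is_psd_2x2(diff)

def compress(matrices, eps):
    """Greedy pass: keep a matrix as a new VOP only if no existing VOP
    (plus the eps*I overestimation) already dominates it."""
    vops = []
    for s in matrices:
        if not any(dominated(s, v, eps) for v in vops):
            vops.append(s)
    return vops

sar = [[[1.0, 0.2], [0.2, 0.8]],
       [[1.05, 0.2], [0.2, 0.85]],   # close to the first matrix -> absorbed
       [[3.0, -0.5], [-0.5, 2.0]]]   # clearly different hotspot -> new VOP
print(f"{len(compress(sar, eps=0.1))} VOPs for {len(sar)} matrices")
```

Shrinking eps makes the cover tighter but keeps more VOPs; the paper's strategies replace the constant eps·I term with SAR-dependent overestimation matrices to cut the relative overestimation at low SAR.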
Result
Comparison with a previous method shows that, for a given maximum relative overestimation, the number of VOPs can be reduced by around 20%, at the cost of a higher absolute overestimation at high actual local SAR values.
Conclusion
The proposed strategies outperform a previously published strategy and can improve the SAR compression where maximum relative overestimation constrains the performance of parallel transmission.
In this chapter, the key technologies and the instrumentation required for the subsurface exploration of ocean worlds are discussed. The focus is laid on Jupiter’s moon Europa and Saturn’s moon Enceladus because they have the highest potential for such missions in the near future. The exploration of their oceans requires landing on the surface, penetrating the thick ice shell with an ice-penetrating probe, and probably diving with an underwater vehicle through dozens of kilometers of water to the ocean floor, to have the chance to find life, if it exists. Technologically, such missions are extremely challenging. The required key technologies include power generation, communications, pressure resistance, radiation hardness, corrosion protection, navigation, miniaturization, autonomy, and sterilization and cleaning. Simpler mission concepts involve impactors and penetrators or – in the case of Enceladus – plume-fly-through missions.
Elastomers are exceptional materials owing to their ability to undergo large deformations before failure. However, due to their very low stiffness, they are not always suitable for industrial applications. The addition of filler particles provides a reinforcing effect and thus enhances the material properties, rendering the materials more versatile for applications such as tyres. However, the deformation behavior of filled polymers is accompanied by several nonlinear effects, such as the Mullins and Payne effects. To this day, the physical and chemical changes resulting in these nonlinear effects remain an active area of research. In this work, we develop a heterogeneous (or multiphase) constitutive model at the mesoscale that explicitly considers filler particle aggregates, the elastomeric matrix, and their mechanical interaction through an approximate interface layer. The developed constitutive model is used to demonstrate cluster breakage as one possible source of the Mullins effect observed in non-crystallizing filled elastomers.
Purpose
This study aims to investigate the biomechanics of handcycling during a continuous load trial (CLT) to assess the mechanisms underlying fatigue in upper body exercise.
Methods
Twelve able-bodied triathletes performed a 30-min CLT at a power output corresponding to lactate threshold in a racing recumbent handcycle mounted on a stationary ergometer. During the CLT, ratings of perceived exertion (RPE), tangential crank kinetics, 3D joint kinematics, and muscular activity of ten muscles of the upper extremity and trunk were examined using motion capturing and surface electromyography.
Results
During the CLT, spontaneously chosen cadence and RPE increased, whereas crank torque decreased. Rotational work was higher during the pull phase. Peripheral RPE was higher compared to central RPE. Joint range of motion decreased for elbow-flexion and radial-duction. Integrated EMG (iEMG) increased in the forearm flexors, forearm extensors, and M. deltoideus (Pars spinalis). An earlier onset of activation was found for M. deltoideus (Pars clavicularis), M. pectoralis major, M. rectus abdominis, M. biceps brachii, and the forearm flexors.
Conclusion
Fatigue-related alterations seem to apply analogously in handcycling and cycling. The most distal muscles are responsible for force transmission to the cranks and might thus suffer most from neuromuscular fatigue. The findings indicate that peripheral fatigue (at similar lactate values) is higher in handcycling than in leg cycling, at least for inexperienced participants. An increase in cadence might delay peripheral fatigue through reduced vascular occlusion. We assume that the gap between peripheral and central fatigue can be reduced by sport-specific endurance training.
Researching the field of business intelligence and analytics (BI & A) has a long tradition within information systems research. In each decade, the rapid development of technologies opened new room for investigation. Since the early 1950s, the collection and analysis of structured data were the focus of interest, followed by unstructured data since the early 1990s. The third wave of BI & A comprises unstructured and sensor data of mobile devices. The article at hand aims to draw a comprehensive overview of the status quo of relevant BI & A research in the current decade, focusing on the third wave of BI & A. The paper’s contribution is fourfold. First, a systematically developed taxonomy for BI & A 3.0 research, containing seven dimensions and 40 characteristics, is presented. Second, the results of a structured literature review containing 75 full research papers are analyzed by applying the developed taxonomy. The analysis provides an overview of the status quo of BI & A 3.0. Third, the results foster discussions on the predicted and observed developments in BI & A research of the past decade. Fourth, research gaps of the third wave of BI & A research are disclosed and consolidated into a research agenda.
For short take-off and landing (STOL) aircraft, a parallel hybrid-electric propulsion system potentially offers superior performance compared to a conventional propulsion system, because the short-take-off power requirement is much higher than the cruise power requirement. This power-matching problem can be solved with a balanced hybrid propulsion system. However, there is a trade-off between wing loading, power loading, the level of hybridization, as well as range and take-off distance. An optimization method can vary design variables in such a way that a minimum of a particular objective is attained. In this paper, a comparison between the optimization results for minimum mass, minimum consumed primary energy, and minimum cost is conducted. A new initial sizing algorithm for general aviation aircraft with hybrid-electric propulsion systems is applied. This initial sizing methodology covers point performance, mission performance analysis, the weight estimation process, and cost estimation. The methodology is applied to the design of a STOL general aviation aircraft, intended for on-demand air mobility operations. The aircraft is sized to carry eight passengers over a distance of 500 km, while able to take off and land from short airstrips. Results indicate that parallel hybrid-electric propulsion systems must be considered for future STOL aircraft.
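The power-matching argument can be made concrete in a few lines: if the combustion engine is sized for cruise and the electric motor supplies the short-take-off surplus, the degree of hybridization follows directly. The power figures below are assumptions for illustration, not sizing results from the paper:

```python
# Power-matching sketch for a parallel hybrid-electric STOL aircraft.
P_TAKEOFF = 400.0    # kW, short-take-off power requirement (assumed)
P_CRUISE = 160.0     # kW, cruise power requirement (assumed)

p_combustion = P_CRUISE                # engine sized by the cruise condition
p_electric = P_TAKEOFF - P_CRUISE      # motor covers the take-off surplus
hp = p_electric / P_TAKEOFF            # degree of hybridization (power)

print(f"electric share of installed take-off power: {hp:.0%}")
```

This single number is only the starting point: the paper's sizing method then trades it off against wing loading, power loading, battery mass, range, and take-off distance inside the optimization.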
Through a mirror darkly – On the obscurity of teaching goals in game-based learning in IT security
(2021)
Teachers and instructors use very specific language when communicating teaching goals. The most widely used frameworks of common reference are Bloom’s Taxonomy and the Revised Bloom’s Taxonomy. The latter distinguishes 209 different teaching goals, which are connected to methods. In Competence Developing Games (CDGs – serious games to convey knowledge) and in IT security education, a two- or three-level typology exists, reducing possible learning outcomes to awareness, training, and education. This study explores whether this much simpler framework succeeds in achieving the same range of learning outcomes. Methodically, a keyword analysis was conducted. The results were threefold: 1. The words used to describe teaching goals in CDGs on IT security education do not reflect the whole range of learning outcomes. 2. The word choice is nevertheless different from common language, indicating an intentional use of language. 3. IT security CDGs use different sets of terms to describe learning outcomes, depending on whether they are awareness, training, or education games. The interpretation of the findings is that the reduction to just three types of CDGs reduces the capacity to communicate and think about learning outcomes and consequently reduces the outcomes that are intentionally achieved.