Geochemical characterisation of hypersaline waters is difficult, as high concentrations of salts hinder the analysis of constituents present at low concentrations, such as trace metals, and samples collected for trace metal analysis in natural waters are easily contaminated. This is particularly the case if samples are collected by non-conventional techniques such as those required for aquatic subglacial environments. In this paper we present the first analysis of a subglacial brine from Taylor Valley (~78°S), Antarctica, for the trace metals Ba, Co, Mo, Rb, Sr, V, and U. Samples were collected englacially using an electrothermal melting probe called the IceMole. This probe uses differential heating of a copper head, the probe’s sidewalls, and an ice screw at the melting head to move through glacier ice. Detailed blanks, meltwater samples, and subglacial brine samples were collected to evaluate the impact of the IceMole and the borehole pump, the melting and collection process, filtration, and storage on the geochemistry of the samples collected by this device. Comparisons of meltwater profiles through the glacier ice and blank analyses with published studies on ice geochemistry suggest the potential for minor contributions of some species (Rb, As, Co, Mn, Ni, NH4+, and NO2−+NO3−) from the IceMole. The ability to conduct detailed chemical analyses of subglacial fluids collected with melting probes is critical for the future exploration of the hundreds of deep subglacial lakes in Antarctica.
Concentrating Solar Power
(2021)
The focus of this chapter is the production of power and the use of the heat produced from concentrated solar thermal power (CSP) systems.
The chapter starts with the general theoretical principles of concentrating systems, including the description of the concentration ratio and the energy and mass balances. The main part covers the power conversion systems, addressing both solar-only operation and the means of increasing operational hours.
Solar-only operation includes the use of steam turbines, gas turbines, organic Rankine cycles and solar dishes. The operational hours can be increased through hybridization and storage.
Another important topic is cogeneration, where solar cooling, desalination and the use of heat are described.
Many examples of commercial CSP power plants as well as research facilities, both from the past and currently installed and in operation, are described in detail.
The chapter closes with economic and environmental aspects and with the future potential of CSP development around the world.
Test-retest reliability of the internal shoulder rotator muscles' stretch reflex in healthy men
(2021)
Until now, the reproducibility of the short-latency stretch reflex of the internal rotator muscles of the glenohumeral joint has not been established. Twenty-three healthy male participants performed three sets of external shoulder rotation stretches with various pre-activation levels on two different measurement dates to assess test-retest reliability. All stretches were applied with a dynamometer acceleration of 104°/s² and a velocity of 150°/s. The electromyographical response was measured via surface EMG. Reflex latencies showed a pre-activation effect (η² = 0.355). ICC values ranged from 0.735 to 0.909, indicating overall “good” relative reliability. The SRD95% lay between ±7.0 and ±12.3 ms. The reflex gain showed overall poor test-retest reproducibility. The chosen methodological approach presented a suitable test protocol for evaluating shoulder muscle stretch reflex latency. A proof-of-concept study to validate the presented methodical approach in subjects with clinically relevant shoulder conditions is recommended.
One central challenge for self-driving cars is proper path planning. Once a trajectory has been found, the next challenge is to follow the precalculated path accurately and safely. The model-predictive controller (MPC) is a common approach for the lateral control of autonomous vehicles. The MPC uses a vehicle dynamics model to predict the future states of the vehicle over a given prediction horizon. However, in order to achieve real-time path control, the computational load is usually large, which leads to short prediction horizons. To deal with the computational load, the control algorithm can be parallelized on the graphics processing unit (GPU). In contrast to the widely used stochastic methods, in this paper we propose a deterministic approach based on grid search. Our approach focuses on systematically exploring the search area with different levels of granularity. To achieve this, we split the optimization algorithm into multiple iterations. The best sequence of each iteration is then used as the initial solution for the next iteration. The granularity increases with each iteration, resulting in smooth and predictable steering angle sequences. We present a novel GPU-based algorithm and show its accuracy and real-time capability in a number of real-world experiments.
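The coarse-to-fine grid search described in this abstract can be sketched in a few lines. The following is a minimal, hypothetical single-variable CPU illustration of the idea (repeatedly evaluate a uniform grid, then zoom the window around the best candidate), not the authors' GPU implementation; the cost function and all numbers are assumptions for demonstration:

```python
import numpy as np

def coarse_to_fine_search(cost, lo, hi, iterations=4, points=9):
    """Deterministic grid search: each iteration evaluates a uniform grid
    over [lo, hi], then narrows the window around the best candidate,
    so the granularity increases from iteration to iteration."""
    best = 0.5 * (lo + hi)
    for _ in range(iterations):
        grid = np.linspace(lo, hi, points)
        best = grid[int(np.argmin([cost(x) for x in grid]))]
        step = (hi - lo) / (points - 1)
        lo, hi = best - step, best + step  # zoom in around the best point
    return best

# toy cost with its minimum at a steering angle of 0.3 rad
angle = coarse_to_fine_search(lambda a: (a - 0.3) ** 2, -0.5, 0.5)
```

Because every grid point is independent, the inner evaluation loop is what a GPU version would parallelize across threads.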
Microbial diversity studies of aquatic communities that experienced or are experiencing environmental problems are essential for the comprehension of remediation dynamics. In this pilot study, we present data on the phylogenetic and ecological structure of microorganisms from epipelagic water samples collected in the Small Aral Sea (SAS). The raw data were generated by massively parallel sequencing using the shotgun approach. As expected, most of the identified DNA sequences belonged to Terrabacteria and Actinobacteria (40% and 37% of the total reads, respectively). The occurrence of Deinococcus-Thermus, Armatimonadetes, and Chloroflexi in the epipelagic SAS waters was less anticipated. Also surprising was the detection of sequences characteristic of strict anaerobes—Ignavibacteria, hydrogen-oxidizing bacteria, and archaeal methanogenic species. We suppose that the observed very broad range of phylogenetic and ecological features displayed by the SAS reads demonstrates a more intensive mixing of water masses originating from diverse ecological niches of the Aral–Syr Darya River basin than presumed before.
Conventional EEG devices cannot be used in everyday life, and hence research over the past decade has focused on Ear-EEG for mobile, at-home monitoring in various applications ranging from emotion detection to sleep monitoring. As the area available for electrode contact in the ear is limited, electrode size and location play a vital role in an Ear-EEG system. In this investigation, we present a quantitative study of ear electrodes of two sizes at different locations in wet and dry configurations. Electrode impedance scales inversely with size and ranges from 450 kΩ to 1.29 MΩ for dry and from 22 kΩ to 42 kΩ for wet contact at 10 Hz. For either size, the location in the ear canal with the lowest impedance is ELE (Left Ear Superior), presumably due to increased contact pressure caused by the outer-ear anatomy. The results can be used to optimize signal pickup and SNR for specific applications. We demonstrate this by recording sleep spindles during sleep onset with high quality (5.27 μVrms).
Multi-attribute relation extraction (MARE): simplifying the application of relation extraction
(2021)
Relation extraction in natural language understanding makes innovative and encouraging novel business concepts possible and facilitates new digitalized decision-making processes. Current approaches allow the extraction of relations with a fixed number of entities as attributes. Extracting relations with an arbitrary number of attributes requires complex systems and costly relation-trigger annotations to assist these systems. We introduce multi-attribute relation extraction (MARE) as an assumption-less problem formulation with two approaches, facilitating an explicit mapping from business use cases to the data annotations. Avoiding elaborate annotation constraints simplifies the application of relation extraction approaches. The evaluation compares our models to current state-of-the-art event extraction and binary relation extraction methods. Our approaches show improvements over these on the extraction of general multi-attribute relations.
Communication via serial bus systems, like CAN, plays an important role in all kinds of embedded electronic and mechatronic systems. To cope with the functional safety requirements of safety-critical applications, there is a need to enhance the safety features of the communication systems. One measure to achieve more robust communication is to add a redundant data transmission path to the application. In general, the communication of real-time embedded systems like automotive applications is tethered, and the redundant data transmission lines are also tethered, increasing the size of the wiring harness and the weight of the system. A radio link is preferred as a redundant transmission line, as it uses a complementary transmission medium compared to the wired solution and, in addition, reduces wiring harness size and weight. Standard wireless links like Wi-Fi or Bluetooth cannot meet the real-time requirements of bus communication. Using the new dual-mode radio enables a redundant transmission line that meets all requirements with regard to real-time capability, robustness and transparency for the data bus. In addition, it provides a complementary transmission medium with regard to commonly used tethered links. A CAN bus system is used to demonstrate the redundant data transfer via tethered and wireless CAN.
We consider a binary multivariate regression model where the conditional expectation of a binary variable given a higher-dimensional input variable belongs to a parametric family. Based on this, we introduce a model-based bootstrap (MBB) for higher-dimensional input variables. The resulting test can be used to check whether a sequence of independent and identically distributed observations belongs to such a parametric family. The approach is based on the empirical residual process introduced by Stute (Ann Statist 25:613–641, 1997). In contrast to the approach of Stute & Zhu (Scandinavian J Statist 29:535–545, 2002), a transformation is not required. Thus, any problems associated with non-parametric regression estimation are avoided. As a result, the MBB method is much easier for users to implement. To illustrate the power of the MBB-based tests, a small simulation study is performed. Compared to the approach of Stute & Zhu (Scandinavian J Statist 29:535–545, 2002), the simulations indicate a slightly improved power of the MBB-based method. Finally, both methods are applied to a real data set.
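The model-based bootstrap idea sketched in this abstract — refit the parametric model on responses resampled from the fitted model and compare a statistic of the residual process — can be illustrated as follows. This is a simplified, hypothetical sketch (logistic family, a plain gradient-ascent fitter, and a Kolmogorov-type statistic of the cumulative residual process ordered by the covariate), not the paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_logistic(X, y, steps=300, lr=1.0):
    """Plain gradient-ascent logistic regression fit (illustrative helper)."""
    beta = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        beta += lr * X.T @ (y - p) / len(y)
    return beta

def mbb_gof_pvalue(X, y, B=200):
    """Model-based bootstrap goodness-of-fit test (sketch): resample the
    binary responses from the fitted model, refit, and compare the
    maximum of the cumulative residual process."""
    order = np.argsort(X[:, 1])  # order residuals by the covariate
    def stat(y_obs, p_fit):
        resid = (y_obs - p_fit)[order]
        return np.max(np.abs(np.cumsum(resid))) / np.sqrt(len(y_obs))
    p_hat = 1.0 / (1.0 + np.exp(-X @ fit_logistic(X, y)))
    t_obs = stat(y, p_hat)
    t_boot = []
    for _ in range(B):
        y_star = rng.binomial(1, p_hat)        # resample under the model
        p_star = 1.0 / (1.0 + np.exp(-X @ fit_logistic(X, y_star)))
        t_boot.append(stat(y_star, p_star))
    return float(np.mean(np.array(t_boot) >= t_obs))

# data generated from the logistic model itself, so a rejection is not expected
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.3 + 0.8 * X[:, 1]))))
pval = mbb_gof_pvalue(X, y)
```

No transformation of the residual process is needed here; the bootstrap replicates approximate the null distribution of the statistic directly.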
This book provides a compact introduction to the bootstrap method. In addition to classical results on point estimation and test theory, multivariate linear regression models and generalized linear models are covered in detail. Special attention is given to the use of bootstrap procedures to perform goodness-of-fit tests to validate model or distributional assumptions. In some cases, new methods are presented here for the first time.
The text is motivated by practical examples and the implementations of the corresponding algorithms are always given directly in R in a comprehensible form. Overall, R is given great importance throughout. Each chapter includes a section of exercises and, for the more mathematically inclined readers, concludes with rigorous proofs. The intended audience is graduate students who already have a prior knowledge of probability theory and mathematical statistics.
The integration of frequently changing, volatile product data from different manufacturers into a single catalog is a significant challenge for small and medium-sized e-commerce companies. They rely on timely integrating product data to present them aggregated in an online shop without knowing format specifications, concept understanding of manufacturers, and data quality. Furthermore, format, concepts, and data quality may change at any time. Consequently, integrating product catalogs into a single standardized catalog is often a laborious manual task. Current strategies to streamline or automate catalog integration use techniques based on machine learning, word vectorization, or semantic similarity. However, most approaches struggle with low-quality or real-world data. We propose Attribute Label Ranking (ALR) as a recommendation engine to simplify for practitioners the integration of previously unknown, proprietary tabular formats into a standardized catalog. We evaluate ALR by focusing on the impact of different neural network architectures, language features, and semantic similarity. Additionally, we consider metrics for industrial application and present the impact of ALR in production and its limitations.
The progress in natural language processing (NLP) research over the last years offers novel business opportunities for companies, such as automated user interaction or improved data analysis. Building sophisticated NLP applications requires dealing with modern machine learning (ML) technologies, which impedes enterprises from establishing successful NLP projects. Our experience in applied NLP research projects shows that the continuous integration of research prototypes in production-like environments with quality assurance builds trust in the software and demonstrates its convenience and usefulness regarding the business goal. We introduce STAMP 4 NLP as an iterative and incremental process model for developing NLP applications. With STAMP 4 NLP, we merge software engineering principles with best practices from data science. Instantiating our process model allows prototypes to be created efficiently by utilizing templates, conventions, and implementations, enabling developers and data scientists to focus on the business goals. Due to our iterative-incremental approach, businesses can deploy an enhanced version of the prototype to their software environment after every iteration, maximizing potential business value and trust early and avoiding the cost of successful yet never deployed experiments.
The fourth industrial revolution introduces disruptive technologies to production environments. One of these technologies is multi-agent systems (MASs), in which agents virtualize machines. However, the agents' actual performance in production environments can hardly be estimated, as most research has focused on isolated projects and specific scenarios. We address this gap by implementing a highly connected and configurable reference model with quantifiable key performance indicators (KPIs) for production scheduling and routing in single-piece workflows. Furthermore, we propose an algorithm to optimize the search for extrema in highly connected distributed systems. The benefits, limits, and drawbacks of MASs and their performance are evaluated extensively by event-based simulations against the introduced model, which acts as a benchmark. Even though the performance of the proposed MAS is, on average, slightly lower than that of the reference system, the increased flexibility allows it to find new solutions and deliver improved factory-planning outcomes. Our MAS shows emergent behavior by using flexible production techniques to correct errors and compensate for bottlenecks. This increased flexibility offers substantial improvement potential. The general model in this paper allows the transfer of the results to estimate real systems or other models.
Magnetic nanoparticle relaxation in biomedical application: focus on simulating nanoparticle heating
(2021)
Extension fractures are typical for deformation under low or no confining pressure. They can be explained by a phenomenological extension strain failure criterion. In the past, a simple empirical criterion for fracture initiation in brittle rock has been developed. In this article, it is shown that the simple extension strain criterion makes unrealistic strength predictions in biaxial compression and tension. To overcome this major limitation, a new extension strain criterion is proposed by adding a weighted principal shear component to the simple criterion. The shear weight is chosen such that the enriched extension strain criterion represents the same failure surface as the Mohr–Coulomb (MC) criterion. Thus, the MC criterion has been derived as an extension strain criterion predicting extension failure modes, which are unexpected in the classical understanding of the failure of cohesive-frictional materials. In progressive damage of rock, the most likely fracture direction is orthogonal to the maximum extension strain leading to dilatancy. The enriched extension strain criterion is proposed as a threshold surface for crack initiation (CI) and crack damage (CD) and as a failure surface at peak stress (CP). Different from compressive loading, tensile loading requires only a limited number of critical cracks to cause failure. Therefore, for tensile stresses, the failure criteria must be modified somehow, possibly by a cut-off corresponding to the CI stress. Examples show that the enriched extension strain criterion predicts much lower volumes of damaged rock mass compared to the simple extension strain criterion.
Background:
Additional stabilization of the “comma sign” in anterosuperior rotator cuff repair has been proposed to provide biomechanical benefits regarding stability of the repair.
Purpose:
This in vitro investigation aimed to investigate the influence of a comma sign–directed reconstruction technique for anterosuperior rotator cuff tears on the primary stability of the subscapularis tendon repair.
Study Design:
Controlled laboratory study.
Methods:
A total of 18 fresh-frozen cadaveric shoulders were used in this study. Anterosuperior rotator cuff tears (complete full-thickness tear of the supraspinatus and subscapularis tendons) were created, and supraspinatus repair was performed with a standard suture bridge technique. The subscapularis was repaired with either a (1) single-row or (2) comma sign technique. A high-resolution 3D camera system was used to analyze 3-mm and 5-mm gap formation at the subscapularis tendon-bone interface upon incremental cyclic loading. Moreover, the ultimate failure load of the repair was recorded. A Mann-Whitney test was used to assess significant differences between the 2 groups.
Results:
The comma sign repair withstood significantly more loading cycles than the single-row repair until 3-mm and 5-mm gap formation occurred (P ≤ .047). The ultimate failure load did not reveal any significant differences when the 2 techniques were compared (P = .596).
Conclusion:
The results of this study show that additional stabilization of the comma sign enhanced the primary stability of subscapularis tendon repair in anterosuperior rotator cuff tears. Although this stabilization did not seem to influence the ultimate failure load, it effectively decreased the micromotion at the tendon-bone interface during cyclic loading.
Clinical Relevance:
The proposed technique for stabilization of the comma sign has shown superior biomechanical properties in comparison with a single-row repair and might thus improve tendon healing. Further clinical research will be necessary to determine its influence on the functional outcome.
In positron emission tomography, improving the time, energy and spatial resolution of detectors and using Compton kinematics introduce the possibility of reconstructing a radioactivity distribution image from scatter coincidences, thereby enhancing image quality. The number of single-scattered coincidences alone is of the same order of magnitude as that of true coincidences. In this work, a compact Compton camera module based on monolithic scintillation material is investigated as a detector ring module. The detector interactions are simulated with the Monte Carlo package GATE. The scattering angle inside the tissue is derived from the energy of the scattered photon, which results in a set of possible scattering trajectories, or a broken line of response. The Compton kinematics collimation reduces the number of solutions. Additionally, the time-of-flight information helps localize the position of the annihilation. One of the questions of this investigation is how the energy, spatial and temporal resolutions help confine the possible annihilation volume. A comparison of currently technically feasible detector resolutions (under laboratory conditions) demonstrates the influence on this annihilation volume and shows that energy and coincidence time resolution have a significant impact. An enhancement of the latter from 400 ps to 100 ps shrinks the annihilation volume by around 50%, while a change of the energy resolution in the absorber layer from 12% to 4.5% results in a reduction of 60%. The inclusion of single tissue-scattered data has the potential to increase the sensitivity of a scanner by a factor of 2 to 3. The concept can be further optimized and extended for multiple scatter coincidences and subsequently validated by a reconstruction algorithm.
Thrombogenic complications are a main issue in mechanical circulatory support (MCS). There is no validated in vitro method available to quantitatively assess the thrombogenic performance of pulsatile MCS devices under realistic hemodynamic conditions. The aim of this study is to propose a method to evaluate the thrombogenic potential of new designs without the use of complex in vivo trials. This study presents a novel in vitro method for reproducible thrombogenicity testing of pulsatile MCS systems using low-molecular-weight-heparinized porcine blood. Blood parameters are continuously measured with full blood thromboelastometry (ROTEM; EXTEM, FIBTEM and a custom-made analysis, HEPNATEM). Thrombus formation is optically observed after four hours of testing. The results of three experiments are presented, each with two parallel loops. The area of thrombus formation inside the MCS device was reproducible. A filter implanted in the loop catches embolizing thrombi without a measurable increase in platelet activation, allowing conclusions about the place of origin of thrombi inside the device. EXTEM and FIBTEM parameters such as clotting velocity (α) and maximum clot firmness (MCF) show a total decrease of around 6% with a characteristic kink after 180 minutes. HEPNATEM α and MCF rise within the first 180 minutes, indicating a continuously increasing activation level of coagulation. After 180 minutes, the consumption of clotting factors prevails, resulting in a decrease of α and MCF. With the designed mock loop and the presented protocol, we are able to identify thrombogenic hot spots inside a pulsatile pump and characterize their thrombogenic potential.
Aneurysmal subarachnoid hemorrhage (aSAH) is associated with early and delayed brain injury due to several underlying and interrelated processes, which include inflammation, oxidative stress, endothelial, and neuronal apoptosis. Treatment with melatonin, a cytoprotective neurohormone with anti-inflammatory, anti-oxidant and anti-apoptotic effects, has been shown to attenuate early brain injury (EBI) and to prevent delayed cerebral vasospasm in experimental aSAH models. Less is known about the role of endogenous melatonin for aSAH outcome and how its production is altered by the pathophysiological cascades initiated during EBI. In the present observational study, we analyzed changes in melatonin levels during the first three weeks after aSAH.
Cardiopulmonary bypass (CPB) is a standard technique for cardiac surgery, but comes with the risk of severe neurological complications (e.g. stroke) caused by embolisms and/or reduced cerebral perfusion. We report on an aortic cannula prototype design (optiCAN) with helical outflow and jet-splitting dispersion tip that could reduce the risk of embolic events and restores cerebral perfusion to 97.5% of physiological flow during CPB in vivo, whereas a commercial curved-tip cannula yields 74.6%. In further in vitro comparison, pressure loss and hemolysis parameters of optiCAN remain unaffected. Results are reproducibly confirmed in silico for an exemplary human aortic anatomy via computational fluid dynamics (CFD) simulations. Based on CFD simulations, we first show that the optiCAN design improves aortic root washout, which reduces the risk of thromboembolism. Second, we identify regions of the aortic intima with increased risk of plaque release by correlating areas of enhanced plaque growth and high wall shear stresses (WSS). From this we propose another easy-to-manufacture cannula design (opti2CAN) that decreases areas burdened by high WSS, while preserving physiological cerebral flow and favorable hemodynamics. With this novel cannula design, we propose a cannulation option to reduce neurological complications and the prevalence of stroke in high-risk patients after CPB.
Biologically sensitive field-effect devices (BioFEDs) advantageously combine electronic field-effect functionality with a (bio)chemical receptor’s recognition ability for (bio)chemical sensing. In this review, basic and widely applied device concepts of silicon-based BioFEDs (ion-sensitive field-effect transistor, silicon nanowire transistor, electrolyte-insulator-semiconductor capacitor, light-addressable potentiometric sensor) are presented and recent progress (from 2019 to early 2021) is discussed. One of the main advantages of BioFEDs is the label-free sensing principle, which enables the detection of a large variety of biomolecules and bioparticles via their intrinsic charge. The review encompasses applications of BioFEDs for the label-free electrical detection of clinically relevant protein biomarkers, deoxyribonucleic acid molecules and viruses, enzyme-substrate reactions, as well as the recording of the cell acidification rate (as an indicator of cellular metabolism) and the extracellular potential.
Previous studies optimized the dimensions of coaxial heat exchangers using constant mass flow rates as a boundary condition. They show a thermally optimal circular ring width of nearly zero. Hydraulically optimal is an inner to outer pipe radius ratio of 0.65 for turbulent and 0.68 for laminar flow types. In contrast, in this study, flow conditions in the circular ring are kept constant (a set of fixed Reynolds numbers) during optimization. This approach ensures fixed flow conditions and prevents inappropriately high or low mass flow rates. The optimization is carried out for three objectives: maximum energy gain, minimum hydraulic effort and, eventually, optimum net-exergy balance. The optimization changes the inner pipe radius and mass flow rate but not the Reynolds number of the circular ring. The thermal calculations are based on Hellström’s borehole resistance, and the hydraulic optimization on individually calculated linear head loss coefficients. Increasing the inner pipe radius results in decreased hydraulic losses in the inner pipe but increased losses in the circular ring. The net-exergy difference is a key performance indicator and combines the thermal and hydraulic calculations: it is the difference between thermal exergy flux and hydraulic effort. During all optimizations, the Reynolds number in the circular ring, rather than the mass flow rate, is held constant. The result from a thermal perspective is an optimal width of the circular ring of nearly zero. The hydraulically optimal inner pipe radius is 54% of the outer pipe radius for laminar flow and 60% for turbulent flow scenarios. Net-exergetic optimization shows a predominant influence of hydraulic losses, especially for small temperature gains. The exact result depends on the earth’s thermal properties and the flow type. Conclusively, the design of coaxial geothermal probes should focus on the hydraulic optimum and take the thermal optimum as a secondary criterion due to the dominating hydraulics.
A new formulation to calculate the shakedown limit load of Kirchhoff plates under stochastic conditions of strength is developed. Direct structural reliability design by chance-constrained programming is based on prescribed failure probabilities, which is an effective approach to stochastic programming if it can be formulated as an equivalent deterministic optimization problem. We restrict the uncertainty to strength; the loading remains deterministic. A new formulation is derived for the case of random strength with lognormal distribution. Upper-bound and lower-bound shakedown load factors are calculated simultaneously by a dual algorithm.
The paper presents the derivation of a new equivalent skin friction coefficient for estimating the parasitic drag of short-to-medium-range fixed-wing unmanned aircraft. The new coefficient is derived from an aerodynamic analysis of ten different unmanned aircraft used for surveillance, reconnaissance, and search and rescue missions. The aircraft are simulated using a validated unsteady Reynolds-averaged Navier–Stokes approach. A UAV’s parasitic drag is significantly influenced by the presence of miscellaneous components like fixed landing gears or electro-optical sensor turrets. These components are responsible for almost half of an unmanned aircraft’s total parasitic drag. The new equivalent skin friction coefficient accounts for these effects and is significantly higher compared to that of other aircraft categories. It is used to initially size an unmanned aircraft for a typical reconnaissance mission. The improved parasitic drag estimation yields a much heavier unmanned aircraft when compared to the sizing results using available drag data of manned aircraft.
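The equivalent skin friction method referenced in this abstract estimates parasitic drag from a single coefficient applied to the wetted area, D0 = q · C_fe · S_wet. A minimal numeric sketch follows; all values (density, speed, wetted area, coefficient) are illustrative assumptions, not results from the paper:

```python
# Parasitic drag via an equivalent skin friction coefficient
# (all numbers are illustrative assumptions, not values from the paper).
rho = 1.225            # air density at sea level, kg/m^3
V = 30.0               # cruise speed, m/s
S_wet = 4.0            # total wetted area, m^2
C_fe = 0.0045          # assumed equivalent skin friction coefficient
q = 0.5 * rho * V**2   # dynamic pressure, Pa
D0 = q * C_fe * S_wet  # parasitic drag, N
```

A higher C_fe for unmanned aircraft, as derived in the paper, feeds directly into a larger D0 and hence a heavier sized aircraft for the same mission.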
Project work and interdisciplinarity are integral parts of today's engineering work. It is therefore important to incorporate these aspects into the curriculum of academic engineering studies. At the faculty of Electrical Engineering and Information Technology, an interdisciplinary project is part of the bachelor program to address these topics. Since the summer term 2020, most courses have been held in online mode during the Covid-19 crisis, including the interdisciplinary projects. This online mode introduces additional challenges to the execution of the projects, both for the students and for the lecturers. The challenges, but also the risks and opportunities of this kind of project course, are the subject of this paper, based on five different interdisciplinary projects.
During the Covid-19 pandemic, vocational colleges, universities of applied sciences and technical universities often had to cancel laboratory sessions requiring students’ attendance. These, above all, are of decisive importance in order to give learners an understanding of theory through practical work. This paper is a contribution to the implementation of distance learning for laboratory work applicable to several upper secondary educational facilities. Its aim is to provide a paradigm for hybrid teaching to analyze and control a non-linear system depicted by a tank model. For this reason, we redesign a full series of laboratory sessions on the basis of various challenges. Thus, it is suitable for serving different reference levels of the European Qualifications Framework (EQF). We present problem-based learning through online platforms to compensate for the lack of a laboratory learning environment. With a task deduced from their future profession, we give students the opportunity to develop their own solutions in self-defined time intervals. A requirements specification provides the framework conditions in terms of time and content for students, who have to deal with the challenges of the project in a self-organized manner with regard to inhomogeneous previous knowledge. If the concept of Complete Action has been introduced in classes before, students will automatically apply it while executing the project. The goal is to combine students’ scientific understanding with procedural knowledge. We suggest a series of remote laboratory sessions that combine a problem formulation from the subject area of Measurement, Control and Automation Technology with a project assignment that is common in industry by providing extracts from a requirements specification.
Reliable automation of the labor-intensive manual task of scoring animal sleep can facilitate the analysis of long-term sleep studies. In recent years, deep-learning-based systems, which learn optimal features from the data, have increased scoring accuracies for the classical sleep stages of Wake, REM, and Non-REM. Meanwhile, it has been recognized that the statistics of transitional stages such as pre-REM, found between Non-REM and REM, may hold additional insight into the physiology of sleep and are now under active investigation. We propose a classification system based on a simple neural network architecture that scores the classical stages as well as pre-REM sleep in mice. When restricted to the classical stages, the optimized network showed state-of-the-art classification performance with an out-of-sample F1 score of 0.95 in male C57BL/6J mice. When unrestricted, the network showed a lower F1 score on pre-REM (0.5) compared to the classical stages. This result is comparable to previous attempts to score transitional stages in other species, such as transition sleep in rats or N1 sleep in humans. Nevertheless, we observed that the sequence of predictions including pre-REM typically transitioned from Non-REM to REM, reflecting the sleep dynamics observed by human scorers. Our findings provide further evidence for the difficulty of scoring transitional sleep stages, likely because such stages are under-represented in typical data sets or show large inter-scorer variability. We further provide our source code and an online platform to run predictions with our trained network.
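The per-stage F1 score reported above can be illustrated with a minimal sketch. The stage labels and sequences below are hypothetical toy data, not taken from the paper; the point is how a low-prevalence transitional stage such as pre-REM drags its F1 down even when the classical stages score well.

```python
def f1_per_stage(y_true, y_pred, stages):
    """Per-stage F1 from paired epoch-label sequences (toy data)."""
    scores = {}
    for s in stages:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == s and p == s)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != s and p == s)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == s and p != s)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        scores[s] = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return scores

# Toy example with an under-represented transitional stage: one pre-REM
# epoch, misclassified as Non-REM, yields F1 = 0 for that stage.
true_seq = ["W", "N", "N", "pre-REM", "R", "R", "N", "W"]
pred_seq = ["W", "N", "N", "N",       "R", "R", "N", "W"]
print(f1_per_stage(true_seq, pred_seq, ["W", "N", "R", "pre-REM"]))
```

A single missed epoch of a rare stage zeroes its F1 while barely denting the scores of the frequent stages, which is one reason transitional-stage scoring looks so much worse in aggregate metrics.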
Dual-frequency magnetic excitation of magnetic nanoparticles (MNP) enables enhanced biosensing applications. This was studied from an experimental and theoretical perspective: nonlinear sum-frequency components of MNP exposed to dual-frequency magnetic excitation were measured as a function of the static magnetic offset field. The Langevin model in thermodynamic equilibrium was fitted to the experimental data to derive the parameters of the lognormal core size distribution. These parameters were subsequently used as inputs for micromagnetic Monte Carlo (MC) simulations. From the hysteresis loops obtained in the MC simulations, sum-frequency components were numerically demodulated and compared with both the experiment and the Langevin model predictions. From the latter, we derived that approximately 90% of the frequency mixing magnetic response signal is generated by the largest 10% of MNP. We therefore suggest that small particles do not contribute to the frequency mixing signal, which is supported by the MC simulation results. Both theoretical approaches describe the experimental signal shapes well, but notable differences remain between experiment and micromagnetic simulations. These deviations could result from Brownian relaxations, which, although experimentally inhibited, are included in the MC simulations; from (as yet unconsidered) cluster effects of MNP; or from inaccurately derived inputs to the MC simulations, because the largest particles dominate the experimental signal but at the same time do not fulfill the precondition of thermodynamic equilibrium required by Langevin theory.
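The size dependence described above follows directly from the Langevin model: the equilibrium magnetization is L(ξ) = coth ξ − 1/ξ, where ξ is proportional to the particle's magnetic moment and hence to the cube of the core diameter. The sketch below evaluates L(ξ) for two illustrative ξ values (not fitted to the paper's data) to show how strongly larger cores dominate the response.

```python
import math

def langevin(xi):
    """Langevin function L(xi) = coth(xi) - 1/xi, with L(0) = 0."""
    if xi == 0.0:
        return 0.0
    return 1.0 / math.tanh(xi) - 1.0 / xi

# xi = mu0 * m * H / (kB * T), and the moment m scales with the cube of
# the core diameter, so doubling the diameter multiplies xi by eight and
# pushes the particle far up the Langevin curve.
xi_small, xi_large = 0.5, 4.0   # illustrative values, not fitted parameters
print(langevin(xi_small))       # weakly magnetized small core
print(langevin(xi_large))       # strongly magnetized large core
```

Near ξ = 0 the response is linear and tiny, while large cores approach saturation; weighting this over a lognormal size distribution is what concentrates most of the mixing signal in the largest size fraction.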
This paper introduces a new maritime search and rescue system based on S-band illumination harmonic radar (HR). Passive and active tags have been developed and tested while attached to life jackets and a small boat. In a demonstration test carried out on the Baltic Sea, the system was able to detect and range the active tags up to a distance of 5800 m using an illumination-signal transmit power of 100 W. Special attention is given to the development, performance, and conceptual differences between the passive and active tags used in the system. Guidelines for achieving a high HR dynamic range, including a description of the system components, are given, and a comparison with other HR systems is performed. System integration with a commercial maritime X-band navigation radar is shown to demonstrate a solution for rapid search and rescue response and quick localization.
The Robot Operating System (ROS) is the current de facto standard among robot middlewares. The steadily growing user base results in a greater demand for training as well. User groups range from students in academia to industry professionals, with a broad spectrum of developers in between. To deliver high-quality training and education to any of these audiences, educators need to tailor an individual curriculum for each training. In this paper, we present an approach that eases compiling curricula for ROS trainings based on a taxonomy of the teaching contents. The instructor selects a set of dedicated learning units, and the system automatically compiles the teaching material based on the dependencies of the selected units and a set of parameters for the particular training. We walk through an example training to illustrate our work.
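Compiling teaching material from the dependencies of selected units is essentially a transitive-closure-plus-topological-sort problem. The sketch below uses hypothetical unit names (the paper's actual taxonomy may differ) and the standard-library `graphlib` (Python 3.9+) to order prerequisites before the unit the instructor selected.

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical ROS learning units mapped to their prerequisites; the
# real taxonomy and unit names in the paper may differ.
units = {
    "nodes_topics": {"ros_intro"},
    "services": {"nodes_topics"},
    "actions": {"services"},
    "tf2": {"nodes_topics"},
    "navigation": {"tf2", "actions"},
}

def closure(goal, deps):
    """All transitive prerequisites of a unit, including the unit itself."""
    seen, stack = set(), [goal]
    while stack:
        u = stack.pop()
        if u not in seen:
            seen.add(u)
            stack.extend(deps.get(u, ()))
    return seen

selected = {"navigation"}          # instructor picks a target unit
needed = set().union(*(closure(u, units) for u in selected))
order = list(TopologicalSorter(
    {u: units.get(u, set()) & needed for u in needed}).static_order())
print(order)  # prerequisites first, target unit last
```

The same pattern scales to multiple selected units: take the union of their closures, then emit one topologically sorted curriculum.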
Most drugs are no longer produced by pharmaceutical companies in their home countries, but by contract manufacturers or at manufacturing sites in countries where production is cheaper. This not only makes the drugs difficult to trace back but also leaves room for criminal organizations to counterfeit them unnoticed. For these reasons, it is becoming increasingly difficult to determine the exact origin of drugs. The goal of this work was to investigate how precisely this origin can be determined by using different spectroscopic methods, such as nuclear magnetic resonance and near- and mid-infrared spectroscopy, in combination with multivariate data analysis. As an example, 56 out of 64 different paracetamol preparations, collected from 19 countries around the world, were chosen to investigate whether it is possible to determine the pharmaceutical company, manufacturing site, or country of origin. By means of suitable pre-processing of the spectra and the different information contained in each method, principal component analysis was able to reveal manufacturing relationships between individual companies and to differentiate between production sites or formulations. Linear discriminant analysis showed different results depending on the spectral method and purpose. For all spectroscopic methods, it was found that classifying the preparations by their manufacturer achieves better results than classifying them by their pharmaceutical company. The best results were obtained with nuclear magnetic resonance and near-infrared data, with 94.6%/99.6% and 98.7%/100% of the spectra of the preparations correctly assigned to their pharmaceutical company or manufacturer, respectively.
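The classification step can be pictured with a deliberately simplified stand-in: a nearest-centroid classifier over toy "spectra" (the paper uses PCA and linear discriminant analysis on real NMR/NIR/MIR data; the vectors and site labels below are invented for illustration).

```python
import math

# Toy intensity vectors grouped by manufacturing site; purely
# illustrative stand-ins for preprocessed NMR/NIR spectra.
train = {
    "site_A": [[1.0, 0.2, 0.1], [0.9, 0.3, 0.1]],
    "site_B": [[0.1, 0.8, 0.9], [0.2, 0.9, 1.0]],
}

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(spectrum, groups):
    """Assign a spectrum to the class with the nearest centroid."""
    best, best_d = None, math.inf
    for label, vecs in groups.items():
        d = math.dist(spectrum, centroid(vecs))
        if d < best_d:
            best, best_d = label, d
    return best

print(classify([0.95, 0.25, 0.1], train))  # -> 'site_A'
```

Real LDA additionally learns projection directions that maximize between-class separation, but the assignment logic, distance to a class summary in feature space, is the same idea.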
Quantitative nuclear magnetic resonance (qNMR) is considered a powerful tool for multicomponent mixture analysis as well as for the purity determination of single compounds. Special attention is currently paid to the training of operators and study directors involved in qNMR testing. To ensure that only qualified personnel perform sample preparation at our GxP-accredited laboratory, a weighing test was proposed. Sixteen participants performed six-fold weighing of a binary mixture of dibutylated hydroxytoluene (BHT) and 1,2,4,5-tetrachloro-3-nitrobenzene (TCNB). To evaluate the quality of data analysis, all spectra were evaluated both manually by a qNMR expert and with an in-house-developed automated routine. The results revealed that the mean values are comparable and that both evaluation approaches are free of systematic error. However, automated evaluation resulted in an approximately 20% increase in precision. The same findings were obtained for qNMR analysis of 32 compounds used in the pharmaceutical industry. A weighing test based on six-fold determination in binary mixtures and the automated qNMR methodology can be recommended as efficient tools for evaluating staff proficiency. The automated qNMR method significantly increases the throughput and precision of routine qNMR measurements and extends the application scope of qNMR.
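Precision comparisons of this kind are typically expressed as the relative standard deviation (RSD) of the replicate results. The sketch below computes RSD for two hypothetical six-fold result sets (the numbers are invented, not the study's data) to show how a tighter spread translates into a lower RSD.

```python
from statistics import mean, stdev

def rsd_percent(values):
    """Relative standard deviation in percent (precision of replicates)."""
    return 100.0 * stdev(values) / mean(values)

# Hypothetical six-fold purity results (%) from manual vs. automated
# evaluation; illustrative values only.
manual    = [99.1, 99.6, 98.8, 99.4, 99.0, 99.5]
automated = [99.2, 99.4, 99.1, 99.3, 99.2, 99.4]

print(round(rsd_percent(manual), 3), round(rsd_percent(automated), 3))
```

Comparable means with a smaller RSD is exactly the pattern the abstract reports: no systematic error between the approaches, but higher precision for the automated routine.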
How different diversity factors affect the perception of first-year requirements in higher education
(2021)
In the light of growing university entry rates, higher education institutions not only serve larger numbers of students, but also seek to meet first-year students' ever more diverse needs. Yet research offers only limited insights to inform universities on how to support the transition to higher education. Current studies tend either to focus on the individual factors that affect student success, or to highlight students' social background and educational biography in order to examine the achievement of selected, non-traditional groups of students. Both lines of research appear to lack integration and often fail to take organisational diversity into account, such as different types of higher education institutions or degree programmes. For a more comprehensive understanding of student diversity, the present study includes individual, social and organisational factors. To gain insights into their role in the transition to higher education, we examine how the different factors affect the students' perception of the formal and informal requirements of the first year as more or less difficult to cope with. As the perceived requirements result from both the characteristics of the students and the institutional context, they allow transition to be investigated at the interface of the micro and the meso level of higher education. Latent profile analyses revealed no profiles with complex patterns of perception of the first-year requirements; rather, the identified groups differ in the overall level of perceived challenges. Moreover, structural equation modelling (SEM) indicates that the differences in perception largely depend on the individual factors self-efficacy and volition.
Stretch-shortening type actions are characterized by lengthening of the pre-activated muscle-tendon unit (MTU) in the eccentric phase, immediately followed by muscle shortening. Under 1 g, pre-activity before and muscle activity after ground contact scale muscle stiffness, which is crucial for the recoil properties of the MTU in the subsequent push-off. This study aimed to examine the neuro-mechanical coupling of the stretch-shortening cycle in response to gravity levels ranging from 0.1 to 2 g. During parabolic flights, 17 subjects performed drop jumps while electromyography (EMG) of the lower limb muscles was combined with ultrasound images of the gastrocnemius medialis, 2D kinematics, and kinetics to depict changes in energy management and performance. Neuro-mechanical coupling at 1 g was characterized by high magnitudes of pre-activity and eccentric muscle activity, allowing an isometric muscle behavior during ground contact. EMG during pre-activity and the concentric phase systematically increased from 0.1 to 1 g. Below 1 g, the EMG in the eccentric phase was diminished, leading to muscle lengthening and reduced MTU stretches; kinetic energy at take-off and performance were decreased compared to 1 g. Above 1 g, reduced EMG in the eccentric phase was accompanied by large MTU and muscle stretch, increased joint flexion amplitudes, energy loss, and reduced performance. The energy outcome function established by a linear mixed model reveals that the central nervous system regulates the extensor muscles phase- and load-specifically. In conclusion, neuro-mechanical coupling appears to be optimized at 1 g. Below 1 g, the energy outcome is compromised by reduced muscle stiffness; above 1 g, loading progressively induces muscle lengthening, thus facilitating energy dissipation.
Glucose oxidase (GOx) is an enzyme frequently used in glucose biosensors. As increased temperatures can enhance the performance of electrochemical sensors, we investigated the impact of temperature pulses on GOx drop-coated onto flattened Pt microwires. The wires were heated by an alternating current. The sensitivity towards glucose and the temperature stability of GOx were investigated by amperometry. An up to 22-fold increase in sensitivity was observed. Spatially resolved changes in enzyme activity were investigated via scanning electrochemical microscopy. The application of short (<100 ms) heat pulses was associated with less thermal inactivation of the immobilized GOx than long-term heating.
In this paper, we present the structure, the simulation, and the operation of a multi-stage, hybrid solar desalination system (MSDH) powered by thermal and photovoltaic (PV) energy. The MSDH system consists of a lower basin, eight horizontal stages, a field of four flat thermal collectors with a total area of 8.4 m², 3 kW of PV panels, and solar batteries. During the day the system is heated by thermal energy, and at night by heating resistors powered by the solar batteries, which are charged by the photovoltaic panels during the day. More specifically, for both day and night, we analyse the temperature of the stages and the production of distilled water as functions of the solar irradiance intensity and the electric heating power supplied by the solar batteries. The simulations were carried out for the meteorological conditions of a winter month (February 2020), with irradiance intensities and ambient temperatures reaching 824 W/m² and 23 °C, respectively. The results obtained show that during the day, when the system is heated by the thermal collectors, the temperature of the stages and the quantity of water produced reach 80 °C and 30 kg, respectively. At night, from 6 p.m., the system is heated by the electric energy stored in the batteries; the temperature of the stages and the quantity of water produced reach 90 °C and 104 kg, respectively, for an electric heating power of 2 kW. Moreover, when the electric power varies from 1 kW to 3 kW, the quantity of water produced varies from 92 kg to 134 kg. The analysis of these results and their comparison with conventional solar thermal desalination systems shows a clear improvement both in the heating of the stages, by 10%, and in the quantity of water produced, by a factor of 3.
The treatment method used to deactivate viable microorganisms on objects or products is termed sterilization. There are multiple forms of sterilization, each intended for a specific target, depending on, but not limited to, the thermal, physical, and chemical stability of that target. Herein, an overview of the sterilization processes currently used in the global market is provided. The different sterilization techniques are grouped into categories that describe the method of treatment: radiation (gamma, electron beam, X-ray, and ultraviolet), thermal (dry and moist heat), and chemical (ethylene oxide, ozone, chlorine dioxide, and hydrogen peroxide). For each sterilization process, the typical process parameters as defined by regulations and the mode of antimicrobial activity are summarized. Finally, the recommended microorganisms used as biological indicators to validate sterilization processes in accordance with the rules established by various regulatory agencies are summarized.
As a low-input crop, Miscanthus offers numerous advantages that, in addition to agricultural applications, permit its exploitation for energy, fuel, and material production. Depending on the Miscanthus genotype, season, and harvest time as well as the plant component (leaf versus stem), correlations between the structure and properties of the corresponding isolated lignins differ. Here, a comparative study is presented of lignins isolated from M. x giganteus, M. sinensis, M. robustus and M. nagara using a catalyst-free organosolv pulping process. The lignins from different plant constituents are also compared with respect to their monolignol ratios and important linkages. Results showed that the plant genotype has the weakest influence on monolignol content and interunit linkages. In contrast, structural differences are more significant among lignins of different harvest times and/or seasons. Analyses were performed using fast and simple methods such as nuclear magnetic resonance (NMR) spectroscopy. Data were assigned to four different linkages (A: β-O-4 linkage, B: phenylcoumaran, C: resinol, D: β-unsaturated ester). In conclusion, the A content is particularly high in leaf-derived lignins at just under 70%, and significantly lower in stem and mixture lignins at around 60% and almost 65%, respectively. The second most common linkage pattern in all isolated lignins is D, the proportion of which also depends strongly on the crop portion. Both stem and mixture lignins have a relatively high share of approximately 20% or more (the maximum being M. sinensis Sin2 with over 30%), whereas in the leaf-derived lignins the proportions are significantly lower on average. Stem samples should be chosen if the highest possible lignin content is desired, specifically from the M. x giganteus genotype, which revealed lignin contents of up to 27%. Due to its better frost resistance and higher stem stability, M. nagara offers some advantages compared to M. x giganteus.
Miscanthus crops are shown to be a very attractive lignocellulose feedstock (LCF) for second-generation biorefineries and lignin generation in Europe.
Bitcoin is a cryptocurrency and is considered a high-risk asset class whose price changes are difficult to predict. Current research focuses on daily price movements with a limited number of predictors. The paper at hand aims at identifying measurable indicators for Bitcoin price movements and at developing a suitable forecasting model for hourly changes. The paper provides three research contributions. First, a set of significant indicators for predicting the Bitcoin price is identified. Second, the results of a trained Long Short-Term Memory (LSTM) neural network that predicts price changes on an hourly basis are presented and compared with other algorithms. Third, the results foster the discussion of the applicability of neural nets to stock price predictions. In total, 47 input features for a period of over 10 months could be retrieved to train a neural net that predicts Bitcoin price movements with an error rate of 3.52%.
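Error rates for price forecasts of this kind are commonly reported as the mean absolute percentage error (MAPE). The abstract does not specify which metric underlies the 3.52% figure, so the sketch below simply shows how MAPE is computed over paired hourly prices; the numbers are invented for illustration.

```python
def mape(actual, predicted):
    """Mean absolute percentage error over paired hourly prices."""
    return 100.0 * sum(abs(a - p) / a
                       for a, p in zip(actual, predicted)) / len(actual)

# Toy hourly Bitcoin prices vs. model output (illustrative values only)
actual    = [40000.0, 40400.0, 39800.0, 40100.0]
predicted = [39500.0, 40900.0, 40200.0, 39700.0]
print(round(mape(actual, predicted), 2))
```

A percentage-based metric makes hourly models comparable across periods with very different absolute price levels, which matters for an asset as volatile as Bitcoin.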
The paper presents an overview of the past and present of low-emission combustor research with hydrogen-rich fuels at Aachen University of Applied Sciences. In 1990, AcUAS started developing the Dry-Low-NOx Micromix combustion technology. Micromix reduces NOx emissions using jet-in-crossflow mixing of multiple miniaturized fuel jets with the combustor air and offers inherent safety against flashback. At first, pure hydrogen was investigated as a fuel in lab-scale applications. Later, Micromix prototypes were developed for use in an industrial Honeywell/Garrett GTCP-36-300 gas turbine, proving low-NOx characteristics during real gas turbine operation, accompanied by the successful definition of safety laws and control system modifications. Further, the Micromix was optimized for use in annular and can combustors as well as for fuel flexibility with hydrogen-methane mixtures and hydrogen-rich syngas qualities by means of extensive experiments and numerical simulations. In 2020, the latest Micromix application will be demonstrated in a commercial 2 MW-class gas turbine can-combustor with full-scale engine operation. The paper discusses the advances in Micromix research over the last three decades.
This article introduces a new maritime search and rescue system based on S-band illumination harmonic radar (HR). Passive and active tags have been developed and tested attached to life jackets and a rescue boat. This system was able to detect and range the active tags up to a range of 5800 m in tests on the Baltic Sea with an antenna input power of only 100 W. All electronic GHz components of the system, excluding the S-band power amplifier, were custom developed for this purpose. Special attention is given to the performance and conceptual differences between passive and active tags used in the system and integration with a maritime X-band navigation radar is demonstrated.
An acetoin biosensor based on a capacitive electrolyte–insulator–semiconductor (EIS) structure modified with the enzyme acetoin reductase, also known as butane-2,3-diol dehydrogenase (Bacillus clausii DSM 8716ᵀ), is applied for acetoin detection in beer, red wine, and fermentation broth samples for the first time. The EIS sensor consists of an Al/p-Si/SiO₂/Ta₂O₅ layer structure with immobilized acetoin reductase on top of the Ta₂O₅ transducer layer by means of crosslinking via glutaraldehyde. The unmodified and enzyme-modified sensors are electrochemically characterized by means of leakage current, capacitance–voltage, and constant capacitance methods, respectively.
Plant virus-like particles, and in particular, tobacco mosaic virus (TMV) particles, are increasingly being used in nano- and biotechnology as well as for biochemical sensing purposes as nanoscaffolds for the high-density immobilization of receptor molecules. The sensitive parameters of TMV-assisted biosensors depend, among others, on the density of adsorbed TMV particles on the sensor surface, which is affected by both the adsorption conditions and surface properties of the sensor. In this work, Ta₂O₅-gate field-effect capacitive sensors have been applied for the label-free electrical detection of TMV adsorption. The impact of the TMV concentration on both the sensor signal and the density of TMV particles adsorbed onto the Ta₂O₅-gate surface has been studied systematically by means of field-effect and scanning electron microscopy methods. In addition, the surface density of TMV particles loaded under different incubation times has been investigated. Finally, the field-effect sensor also demonstrates the label-free detection of penicillinase immobilization as model bioreceptor on TMV particles.
7T MR Safety
(2021)
Lignite biosolubilization and bioconversion by Bacillus sp.: the collation of analytical data
(2021)
The vast metabolic potential of microbes in brown coal (lignite) processing and utilization can greatly contribute to innovative approaches for the sustainable production of high-value products from coal. In this study, the multi-faceted and complex coal biosolubilization process by the Bacillus sp. RKB 7 isolate from Kazakhstan coal-mining soil is reported, and the derived products are characterized. Lignite solubilization tests performed on surface and suspension cultures testify to the formation of numerous soluble lignite-derived substances. Almost 24% of crude lignite (5% w/v) was solubilized within 14 days under slightly alkaline conditions (pH 8.2). FTIR analysis revealed various functional groups in the obtained biosolubilization products. Analyses of the lignite-derived humic products by UV-Vis and fluorescence spectrometry as well as elemental analysis yielded compatible results, indicating that the emerging products had a lower molecular weight and degree of aromaticity. Furthermore, XRD and SEM analyses were used to evaluate the biosolubilization processes from mineralogical and microscopic points of view. The findings not only deepen the understanding of microbe–mineral interactions in coal environments, but also add to the knowledge of coal biosolubilization and bioconversion with regard to the sustainable production of humic substances. The detailed and comprehensive analyses demonstrate the huge biotechnological potential of Bacillus sp. for agricultural productivity and environmental health.
Through a mirror darkly – On the obscurity of teaching goals in game-based learning in IT security
(2021)
Teachers and instructors use very specific language when communicating teaching goals. The most widely used frameworks of common reference are Bloom's Taxonomy and the Revised Bloom's Taxonomy. The latter distinguishes 209 different teaching goals, which are connected to methods. In Competence Developing Games (CDGs – serious games to convey knowledge) and in IT security education, a two- or three-level typology exists, reducing possible learning outcomes to awareness, training, and education. This study explores whether this much simpler framework succeeds in achieving the same range of learning outcomes. Methodologically, a keyword analysis was conducted. The results were threefold: 1. The words used to describe teaching goals in CDGs on IT security education do not reflect the whole range of learning outcomes. 2. The word choice is nevertheless different from common language, indicating an intentional use of language. 3. IT security CDGs use different sets of terms to describe learning outcomes, depending on whether they are awareness, training, or education games. The interpretation of these findings is that the reduction to just three types of CDGs reduces the capacity to communicate and think about learning outcomes and consequently reduces the outcomes that are intentionally achieved.
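A keyword analysis of teaching-goal statements can be sketched with a few lines of standard-library Python. The verb lists below are small illustrative samples for two Bloom levels (not the taxonomy's full verb inventory), and the goal statement is an invented example; the idea is simply to tally which cognitive levels a goal description actually invokes.

```python
from collections import Counter
import re

# Small illustrative verb lists for two Bloom levels; the full taxonomy
# distinguishes far more goals and associated verbs.
bloom_verbs = {
    "remember":   {"list", "recall", "identify", "name"},
    "understand": {"explain", "describe", "summarize", "classify"},
}

def tally_teaching_verbs(text, verb_map):
    """Count how often verbs from each Bloom level occur in a goal statement."""
    words = Counter(re.findall(r"[a-z]+", text.lower()))
    return {level: sum(words[v] for v in verbs)
            for level, verbs in verb_map.items()}

# Hypothetical learning-goal statement from an IT security game
goals = ("Players identify phishing mails and name common attack vectors; "
         "they explain why weak passwords are risky.")
print(tally_teaching_verbs(goals, bloom_verbs))
```

Applied to a corpus of CDG descriptions, such tallies make it visible when whole bands of the taxonomy, for instance higher-order levels like analysis or evaluation, never appear in the stated outcomes.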