The use of transgenic animal models has transformed our knowledge of complex biochemical pathways in vivo. It has allowed disease processes to be modelled and used in the development of new disease prevention and treatment strategies. Such models can also be used to define cell- and tissue-specific pathways of gene regulation. A further major application is in the area of preclinical development, where such models can be used to define pathways of chemical toxicity and the pathways that regulate drug disposition. One major application of this approach is the humanisation of mice for the proteins that control drug metabolism and disposition. Such models can have numerous applications in the development of drugs and in their more sophisticated use in the clinic.
Application of Low NOx Micro-mix Hydrogen Combustion to 2MW Class Industrial Gas Turbine Combustor
(2019)
Application of polymers in textile reinforced concrete : from the interface to construction elements
(2006)
Non-intrusive measuring techniques have attracted a lot of interest in relation to both hydraulic modeling and prototype applications. Complementing acoustic techniques, significant progress has been made in the development of new optical methods. Computer vision techniques can help to extract new information, e.g. high-resolution velocity and depth data, from videos captured with relatively inexpensive, consumer-grade cameras. Depth cameras are sensors providing information on the distance between the camera and observed features. Currently, sensors with different working principles are available. Stereoscopic systems reference physical image features (passive system) from two perspectives; in order to increase the number of features and improve the results, a sensor may also estimate the disparity from a detected light pattern to its original projection (active stereo system). In the current study, the RGB-D camera Intel RealSense D435, working on such a stereo vision principle, is used in different, typical hydraulic modeling applications. All tests were conducted at the Utah Water Research Laboratory. This paper demonstrates the performance and limitations of the RGB-D sensor, installed as a single camera and as camera arrays, applied 1) to detect the free surface for highly turbulent, aerated hydraulic jumps, for free-falling jets and for an energy dissipation basin downstream of a labyrinth weir, and 2) to monitor local scour upstream and downstream of a Piano Key Weir. It is intended to share the authors’ experiences with respect to camera settings, calibration, lighting conditions and other requirements in order to promote this useful, easily accessible device. Results are compared to data from classical instrumentation and the literature. It is shown that even in difficult applications, e.g. the detection of a highly turbulent, fluctuating free surface, the RGB-D sensor may yield similar accuracy to classical, intrusive probes.
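As a minimal sketch of how such depth data can be acquired, the following Python snippet reads metric depth frames from a D435 via the official pyrealsense2 SDK and median-filters them over time to tame the fluctuations of a turbulent free surface; the stream resolution, frame count and temporal-median choice are illustrative assumptions, not the authors' actual processing pipeline.

```python
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 30)  # D435 depth stream
profile = pipeline.start(config)
depth_scale = profile.get_device().first_depth_sensor().get_depth_scale()

try:
    frames_m = []
    for _ in range(90):  # ~3 s of data (hypothetical averaging window)
        frames = pipeline.wait_for_frames()
        depth = frames.get_depth_frame()
        frames_m.append(np.asanyarray(depth.get_data()) * depth_scale)  # metres
    # Temporal median suppresses spray/droplet outliers on a fluctuating surface
    surface = np.median(np.stack(frames_m), axis=0)
finally:
    pipeline.stop()
```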
In this paper, the use of reinforcement learning (RL) in control systems is investigated using a rotary inverted pendulum as an example. The control behavior of an RL controller is compared to that of traditional LQR and MPC controllers. This is done by evaluating their behavior under optimal conditions, their disturbance behavior, their robustness and their development process. All the investigated controllers are developed using MATLAB and the Simulink simulation environment and later deployed to a real pendulum model powered by a Raspberry Pi. The RL algorithm used is Proximal Policy Optimization (PPO). The LQR controller exhibits an easy development process, average to good control behavior and average to good robustness. A linear MPC controller showed excellent results under optimal operating conditions. However, when subjected to disturbances or deviations from the equilibrium point, it showed poor performance and sometimes unstable behavior. Employing a nonlinear MPC controller in real time was not possible due to the high computational effort involved. The RL controller exhibits by far the most versatile and robust control behavior. When operated in the simulation environment, it achieved high control accuracy. When deployed on the real system, however, it shows only average accuracy and a significantly greater performance loss relative to the simulation than the traditional controllers. With MATLAB, it is not yet possible to directly post-train the RL controller on the Raspberry Pi, which is an obstacle to the practical application of RL in a prototyping or teaching setting. Nevertheless, RL in general proves to be a flexible and powerful control method, well suited for complex or nonlinear systems where traditional controllers struggle.
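For readers who want to reproduce the control approach outside MATLAB, a minimal open-source equivalent can be sketched with Gymnasium and Stable-Baselines3; the Pendulum-v1 swing-up task stands in for the rotary pendulum here, and all hyperparameters are library defaults rather than the values used in the paper.

```python
import gymnasium as gym
from stable_baselines3 import PPO

# Train PPO on a pendulum swing-up task (a stand-in for the rotary pendulum)
env = gym.make("Pendulum-v1")
model = PPO("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=100_000)

# Evaluate the learned policy deterministically
obs, _ = env.reset(seed=0)
for _ in range(200):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```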
Often, detailed simulations of heat conduction in complicated, porous media have long runtimes. Homogenization is then a powerful tool to speed up the calculations while preserving accurate solutions. Unfortunately, real structures are generally non-periodic, which requires impractical, complicated homogenization techniques. We demonstrate in this paper that applying simple, periodic techniques to realistic media that are merely close to periodic gives accurate, approximate solutions. In order to obtain effective parameters for the homogenized heat equation, we have to solve a so-called “cell problem”. In contrast to periodic structures, it is not trivial to determine a suitable unit cell that represents a non-periodic medium. To overcome this problem, we give a rule of thumb on how to choose a good cell. Finally, we demonstrate the efficiency of our method for virtually generated foams as well as real foams and compare these results to periodic structures.
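To make the idea of effective parameters concrete, here is a toy Python example (not from the paper) for the one case where the cell problem has a closed-form solution: a 1D two-phase laminate, whose effective conductivity is the harmonic mean of the phase conductivities perpendicular to the layers and the arithmetic mean parallel to them. All numbers are hypothetical.

```python
import numpy as np

# Unit cell of a 1D two-phase laminate: volume fractions and conductivities
phi = np.array([0.4, 0.6])    # volume fractions (must sum to 1)
k = np.array([0.03, 40.0])    # W/(m K), e.g. gas phase vs. solid strut

# Solving the 1D cell problem analytically gives the harmonic mean for
# conduction perpendicular to the layers (resistances in series) ...
k_eff_perp = 1.0 / np.sum(phi / k)
# ... and the arithmetic mean parallel to the layers (conductances in parallel).
k_eff_par = np.sum(phi * k)

print(f"k_eff perpendicular: {k_eff_perp:.4f} W/(m K)")
print(f"k_eff parallel:      {k_eff_par:.2f} W/(m K)")
```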
Application of the optical flow method to velocity determination in hydraulic structure models
(2016)
Software development projects often fail because of insufficient code quality. It is now well documented that the task of testing software, for example, is perceived as uninteresting and rather boring, leading to poor software quality and major challenges for software development companies. One promising approach to increasing the motivation to consider software quality is the use of gamification. Initial research works have already investigated the effects of gamification on software developers and have come to promising results. Nevertheless, there is a lack of results from field experiments, which motivates the chapter at hand. By conducting a gamification experiment with five student software projects and by interviewing the project members, the chapter provides insights into the changing programming behavior of information systems students when confronted with a leaderboard. The results reveal a motivational effect as well as a reduction of code smells.
Process mining is receiving more and more attention even outside large enterprises and can be a major benefit for small and medium-sized enterprises (SMEs) in gaining competitive advantages. Applying process mining is challenging, particularly for SMEs, because they have fewer resources and less process maturity. So far, IS researchers have analyzed process mining challenges with a focus on larger companies. This paper investigates the application of process mining by means of a case study and sheds light on the particular challenges of an IT SME. The results reveal 13 SME process mining challenges and seven guidelines to address them. In this way, the paper contributes to the understanding of process mining application in SMEs and shows similarities and differences to larger companies.
To successfully develop and introduce concrete artificial intelligence (AI) solutions in operational practice, a comprehensive process model is being tested in the WIRKsam joint project. It is based on a methodical approach that integrates human, technical and organisational aspects and involves employees in the process. The chapter focuses on the procedure for identifying the requirements of a work system that implements AI in problem-driven projects and for selecting appropriate AI methods. This means that the use case has already been narrowed down at the beginning of the project and must be fully defined in what follows. First, the existing preliminary work is presented. Based on this, an overview of all procedural steps and methods is given. All methods are presented in detail and good-practice approaches are shown. Finally, the developed procedure is reflected upon based on its application in nine companies.
We present a concise mini overview on the approaches to the disposal of nuclear waste currently used or deployed. The disposal of nuclear waste is the end point of nuclear waste management (NWM) activities and is the emplacement of waste in an appropriate facility without the intention to retrieve it. The IAEA has developed an internationally accepted classification scheme based on the end points of NWM, which is used as guidance. Retention times needed for safe isolation of waste radionuclides are estimated based on the radiotoxicity of nuclear waste. Disposal facilities usually rely on a multi-barrier defence system to isolate the waste from the biosphere, which comprises the natural geological barrier and the engineered barrier system. Disposal facilities could be of a trench type, vaults, tunnels, shafts, boreholes, or mined repositories. A graded approach relates the depth of the disposal facilities’ location with the level of hazard. Disposal practices demonstrate the reliability of nuclear waste disposal with minimal expected impacts on the environment and humans.
Malaria infection remains a significant risk for much of the population of tropical and subtropical areas, particularly in developing countries. Therefore, it is of high importance to develop sensitive, accurate and inexpensive malaria diagnosis tests. Here, we present a novel aptamer-based electrochemical biosensor (aptasensor) for malaria detection by impedance spectroscopy, through the specific recognition between a highly discriminatory DNA aptamer and its target Plasmodium falciparum lactate dehydrogenase (PfLDH). Interestingly, due to the isoelectric point (pI) of PfLDH, the aptasensor response showed an adjustable detection range based on the different protein net charge in variable pH environments. The specific aptamer recognition allows sensitive protein detection with an expanded detection range and a low detection limit, as well as a high specificity for PfLDH compared to analogous proteins. The practical feasibility of the aptasensor is further demonstrated by detection of the target PfLDH in human serum. Furthermore, the aptasensor can be easily regenerated and thus applied for multiple usages. The robustness, sensitivity, and reusability of the presented aptasensor make it a promising candidate for point-of-care diagnostic systems.
For the successful implementation of microfluidic reaction systems, such as PCR and electrophoresis, the movement of small liquid volumes is essential. In conventional lab-on-a-chip platforms, solvents and samples are passed through defined microfluidic channels with complex flow control installations. The droplet actuation platform presented here is a promising alternative. With it, a liquid drop (microreactor) can be moved on the planar surface of a reaction platform (lab-in-a-drop). The actuation of microreactors on the hydrophobic surface of the platform is based on magnetic forces acting on the outer shell of the liquid drops, which is made of a thin layer of superhydrophobic magnetite particles. The hydrophobic surface of the platform is needed to avoid any contact between the liquid core and the surface, allowing a smooth movement of the microreactor. On the platform, one or more microreactors with volumes of 10 µL can be positioned and moved simultaneously. The platform itself consists of a 3 x 3 matrix of electrical double coils which accommodate either neodymium or iron cores. The magnetic field gradients are automatically controlled. By varying the magnetic field gradients, the microreactors' magnetic hydrophobic shell can be manipulated automatically to move the microreactor or open the shell reversibly. Reactions of substrates and corresponding enzymes can be initiated by merging microreactors or bringing them into contact with surface-immobilized catalysts.
Architects and civil engineers work together regularly in their professional lives and are irreplaceable for each other. This co-operation is sometimes made more difficult by the differences in their disciplinary languages and approaches. Structures are evaluated by architects on the basis of criteria such as spatial impact and usability, while civil engineers analyze them more closely in terms of their load-bearing and deformation properties, as well as constructive aspects. This diversity of assessment criteria and approaches often carries over into how both academic disciplines view structures.
Within the framework of the Exploratory Teaching Space (ETS), a funding program to improve teaching at RWTH Aachen University and to promote new teaching concepts, a project was carried out jointly by the Junior Professorship of Tool-Culture at the Faculty of Architecture and the Institute of Structural Concrete at the Faculty of Civil Engineering. The aim of the project is to present buildings in such a way that the differences in perception between architects and civil engineers are reduced and the common understanding is promoted.
The project develops a database containing a collection of striking buildings from Aachen and the surrounding area. The buildings are categorized according to terms from both disciplinary areas. The collection can be explored freely or traversed via learning trails. The medium of film plays a special role in presenting the buildings. The buildings are assigned to different categories of load-bearing structures, such as linear, planar and spatial structures, and further to different types of material, functional programs and spatial characteristics. Since the buildings are located in the direct vicinity of Aachen, they can be visited by the students, which sensitizes them to their built environment. Intrinsic motivation as well as implicit learning is encouraged. The paper provides a detailed report of the project, its implementation, the feedback of the students and the plans for further development.
Architecture for platform- and hardware-independent mesh networks : how to unify the channels
(2013)
This paper shows that mesh networks spanning different platforms and hardware channels can help to relay valuable information even when public telecommunication infrastructure is unavailable for arbitrary reasons. To this end, results of a simulation of mesh networks at mass events are provided, followed by the developed architecture and an outlook on future research. The developed architecture is currently being implemented and field-tested at mass events.
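A toy version of such a connectivity simulation can be sketched with networkx: nodes are devices scattered over an event area, and an edge exists wherever two devices are within radio range. The device count, area and range below are hypothetical placeholders, not the parameters of the study.

```python
import networkx as nx

def mesh_connectivity(n_devices, area_side_m, radio_range_m, trials=200):
    """Estimate the probability that randomly placed devices form one connected mesh."""
    connected = 0
    for _ in range(trials):
        # random_geometric_graph works on the unit square, so scale the range
        g = nx.random_geometric_graph(n_devices, radio_range_m / area_side_m)
        if nx.is_connected(g):
            connected += 1
    return connected / trials

# Hypothetical mass-event scenario: 200 devices on a 300 m x 300 m site
print(mesh_connectivity(n_devices=200, area_side_m=300.0, radio_range_m=30.0))
```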
The implementation of IO-Link in the automation industry has increased over the years. Its main advantage is that it offers a digital point-to-point plug-and-play interface for any type of device or application. This simplifies the communication between devices and increases productivity through features like self-parametrization and maintenance. However, its complete potential is not always used.
The aim of this paper is to create an Arduino-based framework for the development of generic IO-Link devices and to increase its adoption for rapid prototyping. By generating the IO device description file (IODD) from a graphical user interface, with further customizable options for the device application, the end user can intuitively develop generic IO-Link devices. The peculiarity of this framework lies in its simplicity and abstraction, which make it possible to implement any sensor functionality and to virtually connect any type of device to an IO-Link master. This work consists of a general overview of the framework, the technical background of its development and a proof of concept which demonstrates the workflow for its implementation.
Arsenic passivation of MOMBE grown GaAs surfaces / B. -J. Schäfer ; A. Förster ; M. Londschien ...
(1988)
Environmental emissions, global warming, and energy-related concerns have accelerated the advancements in conventional vehicles that primarily use internal combustion engines. Among the existing technologies, hydrogen fuel cell electric vehicles and fuel cell hybrid electric vehicles may have minimal contributions to greenhouse gas emissions and are thus the prime choices where environmental concerns dominate. However, energy management in fuel cell electric vehicles and fuel cell hybrid electric vehicles is a major challenge. Appropriate control strategies should be used for effective energy management in these vehicles. On the other hand, there has been significant progress in artificial intelligence, machine learning, and the design of data-driven intelligent controllers. These techniques have found much attention within the community, and state-of-the-art energy management technologies have been developed based on them. This manuscript reviews the application of machine learning and intelligent controllers to prediction, control, energy management, and vehicle-to-everything (V2X) communication in hydrogen fuel cell vehicles. The effectiveness of data-driven control and optimization systems is investigated, classified, and compared, and future trends and directions for sustainability are discussed.
Air- and water-stable phenyl complexes with nitridotechnetium(V) cores can be prepared by straightforward procedures. [TcNPh2(PPh3)2] is formed by the reaction of [TcNCl2(PPh3)2] with PhLi. The analogous N-heterocyclic carbene (NHC) compound [TcNPh2(HLPh)2], where HLPh is 1,3,4-triphenyl-1,2,4-triazol-5-ylidene, is available from (NBu4)[TcNCl4] and HLPh or its methoxo-protected form. The latter compound allows the comparison of different Tc–C bonds within one compound. Surprisingly, the Tc chemistry with such NHCs does not resemble that of corresponding Re complexes, where CH activation and orthometalation dominate.
The understanding that optimized components do not automatically lead to energy-efficient systems shifts the attention from the single component to the entire technical system. At TU Darmstadt, a new field of research named Technical Operations Research (TOR) has its origin. It combines mathematical and technical know-how for the optimal design of technical systems. We illustrate our optimization approach in a case study for the design of a ventilation system, with the ambition to minimize the energy consumption for a temporal distribution of diverse load demands. By combining scaling laws with our optimization methods, we find the optimal combination of fans and show the advantage of using multiple fans.
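The flavour of such a design optimization can be illustrated with a deliberately tiny Python example (not the TOR method itself): a cubic affinity-law power model plus a brute-force search over the number of installed fans for a hypothetical annual load profile. All figures are invented; the point is that sharing part-load flow across several fans can cut annual energy.

```python
import math

# Hypothetical annual load profile: (required volume flow in m^3/h, hours per year)
loads = [(2000, 2000), (5000, 3000), (8000, 1000)]

Q_DESIGN = 4000.0  # design flow of one fan, m^3/h
P_DESIGN = 3000.0  # electrical power of one fan at design flow, W

def fan_power(q):
    # Affinity (scaling) law along a fixed system curve: P scales with q^3
    return P_DESIGN * (q / Q_DESIGN) ** 3

def annual_energy(n_installed):
    total_wh = 0.0
    for q, hours in loads:
        # run the smallest number of fans that keeps each at or below design flow
        n_active = min(n_installed, math.ceil(q / Q_DESIGN))
        total_wh += n_active * fan_power(q / n_active) * hours
    return total_wh

for n in (1, 2, 3, 4):
    print(f"{n} fan(s): {annual_energy(n) / 1e6:.1f} MWh/a")
```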
Water distribution systems are an essential supply infrastructure for cities. Given that climatic and demographic influences will pose further challenges for these infrastructures in the future, the resilience of water supply systems, i.e. their ability to withstand and recover from disruptions, has recently become a subject of research. To assess the resilience of a water distribution system (WDS), different graph-theoretical approaches exist. Next to general metrics characterizing the network topology, hydraulic and technical restrictions also have to be taken into account. In this work, the resilience of an exemplary water distribution network of a major German city is assessed, and a Mixed-Integer Program is presented which allows the impact of capacity adaptations on its resilience to be assessed.
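As a schematic illustration of what such a Mixed-Integer Program can look like, the following PuLP snippet selects pipe-capacity upgrades under a budget so as to maximize a resilience gain; the knapsack-style structure, the candidate upgrades and all coefficients are invented for illustration and omit the hydraulic constraints of the actual model.

```python
import pulp

# Hypothetical candidate capacity upgrades: pipe -> (resilience gain, cost)
candidates = {"P1": (0.12, 40), "P2": (0.08, 25), "P3": (0.15, 60), "P4": (0.05, 10)}
budget = 70

prob = pulp.LpProblem("capacity_adaptation", pulp.LpMaximize)
x = pulp.LpVariable.dicts("upgrade", candidates, cat="Binary")

# Objective: total resilience gain of the selected upgrades
prob += pulp.lpSum(candidates[p][0] * x[p] for p in candidates)
# Constraint: stay within the investment budget
prob += pulp.lpSum(candidates[p][1] * x[p] for p in candidates) <= budget

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([p for p in candidates if x[p].value() == 1])
```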
At (ultra)high magnetic fields, the artifact sensitivity of ECG recordings increases. This bears the risk of R-wave mis-registration, which has been consistently reported for ECG-triggered CMR at 7.0T. Realizing the constraints of conventional ECG, acoustic cardiac triggering (ACT) has been proposed. Clinical ACT has not been carefully examined yet. For this reason, this work scrutinizes the suitability, accuracy and reproducibility of ACT for CMR at 7.0T. For this purpose, the trigger reliability and trigger detection variance are examined, together with a qualitative and quantitative assessment of image quality of the heart at 7.0T.
Assessment of RF Safety of Transmit Coils at 7 Tesla by Experimental and Numerical Procedures (490.)
(2012)
The Monte Carlo code FLUKA is used to simulate the production of a number of positron emitting radionuclides, ¹⁸F, ¹³N, ⁹⁴Tc, ⁴⁴Sc, ⁶⁸Ga, ⁸⁶Y, ⁸⁹Zr, ⁵²Mn, ⁶¹Cu and ⁵⁵Co, on a small medical cyclotron with a proton beam energy of 13 MeV. Experimental data collected at the TR13 cyclotron at TRIUMF agree within a factor of 0.6 ± 0.4 with the directly simulated data, except for the production of ⁵⁵Co, where the simulation underestimates the experiment by a factor of 3.4 ± 0.4. The experimental data also agree within a factor of 0.8 ± 0.6 with the convolution of simulated proton fluence and cross sections from literature. Overall, this confirms the applicability of FLUKA to simulate radionuclide production at 13 MeV proton beam energy.
In this study, an online multi-sensing platform was engineered to simultaneously evaluate various process parameters of food package sterilization using gaseous hydrogen peroxide (H₂O₂). The platform enabled the validation of critical aseptic parameters. In parallel, one series of microbiological count reduction tests was performed using highly resistant spores of B. atrophaeus DSM 675 to act as the reference method for sterility validation. By means of the multi-sensing platform together with microbiological tests, we examined sterilization process parameters to define the most effective conditions with regard to the highest spore kill rate necessary for aseptic packaging. As these parameters are mutually associated, a correlation between the different factors was elaborated. The resulting correlation indicated the need for specific conditions regarding the applied H₂O₂ gas temperature, the gas flow and concentration, the relative humidity and the exposure time. Finally, the novel multi-sensing platform together with the mobile electronic readout setup allowed for the online and on-site monitoring of the sterilization process, selecting the best conditions for sterility and, at the same time, reducing the use of the time-consuming and costly microbiological tests that are currently used in the food package industry.
The control of molecular architecture provided by the layer-by-layer (LbL) technique has led to enhanced biosensors, in which advantageous features of distinct materials can be combined. Full optimization of biosensing performance, however, is only reached if the film morphology is suitable for the principle of detection of a specific biosensor. In this paper, we report a detailed morphology analysis of LbL films made with alternating layers of single-walled carbon nanotubes (SWNTs) and polyamidoamine (PAMAM) dendrimers, which were then covered with a layer of penicillinase (PEN). An optimized performance to detect penicillin G was obtained with 6-bilayer SWNT/PAMAM LbL films deposited on p-Si-SiO2-Ta2O5 chips, used in biosensors based on a capacitive electrolyte-insulator-semiconductor (EIS) and a light-addressable potentiometric sensor (LAPS) structure, respectively. Field-emission scanning electron microscopy (FESEM) and atomic force microscopy (AFM) images indicated that the LbL films were porous, with a large surface area due to interconnection of SWNT into PAMAM layers. This morphology was instrumental for the adsorption of a larger quantity of PEN, with the resulting LbL film being highly stable. The experiments to detect penicillin were performed with constant-capacitance (ConCap) and constant-current (CC) measurements for EIS and LAPS sensors, respectively, which revealed an enhanced detection signal and sensitivity of ca. 100 mV/decade for the field-effect sensors modified with the PAMAM/SWNT LbL film. It is concluded that controlling film morphology is essential for an enhanced performance of biosensors, not only in terms of sensitivity but also stability and response time.
This study presents the concept of AstroBioLab, an autonomous astrobiological field laboratory tailored for the exploration of (sub)glacial habitats. AstroBioLab is an integral component of the TRIPLE (Technologies for Rapid Ice Penetration and subglacial Lake Exploration) DLR-funded project, aimed at advancing astrobiology research through the development and deployment of innovative technologies. AstroBioLab integrates diverse measurement techniques such as fluorescence microscopy, DNA sequencing and fluorescence spectrometry, while leveraging microfluidics for efficient sample delivery and preparation.
We study the estimation of some linear functionals which are based on an unknown lifetime distribution. The observations are assumed to be generated under the semi-parametric random censorship model (SRCM), that is, a random censorship model where the conditional expectation of the censoring indicator given the observation belongs to a parametric family. Under this setup a semi-parametric estimator of the survival function was introduced by the author. If the parametric model assumption is correct, it is known that the estimated functional which is based on this semi-parametric estimator is asymptotically at least as efficient as the corresponding one which rests on the nonparametric Kaplan–Meier estimator.
In this paper we show that the estimated functional which is based on this semi-parametric estimator is asymptotically efficient with respect to the class of all regular estimators under this semi-parametric model.
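For reference, the nonparametric Kaplan–Meier estimator against which the semi-parametric estimator is compared is the standard product-limit estimator (stated here from the general literature, not in the paper's notation):

```latex
% Kaplan--Meier product-limit estimator of the survival function S(t):
% t_{(1)} < t_{(2)} < \dots are the ordered distinct event times,
% d_i the number of events and n_i the number at risk at t_{(i)}.
\hat{S}_{\mathrm{KM}}(t) \;=\; \prod_{i:\; t_{(i)} \le t} \left( 1 - \frac{d_i}{n_i} \right)
```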
Atmospheric pressure plasma-jet treatment of PAN-nonwovens—carbonization of nanofiber electrodes
(2022)
Carbon nanofibers are produced from dielectric polymer precursors such as polyacrylonitrile (PAN). Carbonized nanofiber nonwovens show high surface area and good electrical conductivity, rendering these fiber materials interesting for application as electrodes in batteries, fuel cells, and supercapacitors. However, thermal processing is slow and costly, which is why new processing techniques have been explored for carbon fiber tows. Alternatives for the conversion of PAN-precursors into carbon fiber nonwovens are scarce. Here, we utilize an atmospheric pressure plasma jet to conduct carbonization of stabilized PAN nanofiber nonwovens. We explore the influence of various processing parameters on the conductivity and degree of carbonization of the converted nanofiber material. The precursor fibers are converted by plasma-jet treatment to carbon fiber nonwovens within seconds, by which they develop a rough surface making subsequent surface activation processes obsolete. The resulting carbon nanofiber nonwovens are applied as supercapacitor electrodes and examined by cyclic voltammetry and impedance spectroscopy. Nonwovens that are carbonized within 60 s show capacitances of up to 5 F g⁻¹.
Carbon nanofiber nonwovens represent a powerful class of materials with prospective application in filtration technology or as electrodes with high surface area in batteries, fuel cells, and supercapacitors. While new precursor-to-carbon conversion processes have been explored to overcome productivity restrictions for carbon fiber tows, alternatives for the two-step thermal conversion of polyacrylonitrile precursors into carbon fiber nonwovens are absent. In this work, we develop a continuous roll-to-roll stabilization process using an atmospheric pressure microwave plasma jet. We explore the influence of various plasma-jet parameters on the morphology of the nonwoven and compare the stabilized nonwoven to thermally stabilized samples using scanning electron microscopy, differential scanning calorimetry, and infrared spectroscopy. We show that stabilization with a non-equilibrium plasma-jet can be twice as productive as the conventional thermal stabilization in a convection furnace, while producing electrodes of comparable electrochemical performance.
Attitude and Orbital Dynamics Modeling for an Uncontrolled Solar-Sail Experiment in Low-Earth Orbit
(2015)
Gossamer-1 is the first project of the three-step Gossamer roadmap, the purpose of which is to develop, prove and demonstrate that solar-sail technology is a safe and reliable propulsion technique for long-lasting and high-energy missions. This paper first presents the structural analysis performed on the sail to understand its elastic behavior. The results are then used in attitude and orbital simulations. The model considers the main forces and torques that a satellite experiences in low-Earth orbit, coupled with the sail deformation. Running the simulations for varying initial conditions in attitude and rotation rate, the results show initial states to avoid and maximum rotation rates reached for correct and faulty deployment of the sail. Lastly, comparisons with the classic flat-sail model are carried out to test the hypothesis that the elastic behavior does play a role in the attitude and orbital behavior of the sail.
Sleep scoring is a necessary and time-consuming task in sleep studies. In animal models (such as mice) or in humans, automating this tedious process promises to facilitate long-term studies and to promote sleep biology as a data-driven field. We introduce a deep neural network model that is able to predict different states of consciousness (Wake, Non-REM, REM) in mice from EEG and EMG recordings with excellent scoring results for out-of-sample data. Predictions are made on epochs of 4 seconds length, and epochs are classified as artifact-free or not. The model architecture draws on recent advances in deep learning and in convolutional neural networks research. In contrast to previous approaches towards automated sleep scoring, our model does not rely on manually defined features of the data but learns predictive features automatically. We expect deep learning models like ours to become widely applied in different fields, automating many repetitive cognitive tasks that were previously difficult to tackle.
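To give an idea of what such a model can look like, below is a minimal PyTorch sketch of a 1-D convolutional classifier for 4 s EEG/EMG epochs; the channel count, 250 Hz sampling rate, layer sizes and three output classes are assumptions for illustration, not the architecture of the paper.

```python
import torch
import torch.nn as nn

class SleepScorer(nn.Module):
    """1-D CNN mapping a 4 s EEG/EMG epoch to {Wake, Non-REM, REM}.

    Assumptions (not from the paper): 2 input channels (EEG + EMG),
    250 Hz sampling -> 1000 samples per epoch, 3 output classes.
    """
    def __init__(self, n_channels=2, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(64, 128, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, x):            # x: (batch, channels, samples)
        z = self.features(x).squeeze(-1)
        return self.classifier(z)    # logits over sleep stages

model = SleepScorer()
epoch = torch.randn(8, 2, 1000)      # a dummy batch of 4 s epochs
print(model(epoch).shape)            # torch.Size([8, 3])
```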
Having well-defined control strategies for fuel cells that can efficiently detect errors and take corrective action is critically important for safety in all applications, and especially so in aviation. The algorithms not only ensure operator safety by monitoring the fuel cell and connected components, but also contribute to extending the health of the fuel cell, its durability and its safe operation over its lifetime. While sensors provide peripheral data surrounding the fuel cell, the internal states of the fuel cell cannot be directly measured. To overcome this restriction, a Kalman Filter has been implemented as an internal state observer.
Other safety conditions are evaluated using real-time data from every connected sensor, and corrective actions are taken automatically to ensure safety. The algorithms discussed in this paper have been validated through Model-in-the-Loop (MiL) tests as well as practical validation at a dedicated test bench.
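For orientation, a generic predict/update cycle of a linear Kalman filter is sketched below in Python; the abstract does not disclose the fuel-cell state-space model, so the matrices here are abstract placeholders.

```python
import numpy as np

def kalman_step(x, P, u, z, A, B, H, Q, R):
    """One predict/update cycle of a linear Kalman filter.

    x, P : prior state estimate and its covariance
    u, z : control input and sensor measurement
    A, B : state-transition and input matrices (the plant model)
    H    : measurement matrix; Q, R: process/measurement noise covariances
    """
    # Predict: propagate the state estimate through the plant model
    x = A @ x + B @ u
    P = A @ P @ A.T + Q
    # Update: correct the prediction with the measurement
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```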
Combined with the use of renewable energy sources for its production, hydrogen represents a possible alternative gas turbine fuel for future low-emission power generation. Due to its different physical properties compared to other fuels such as natural gas, well-established gas turbine combustion systems cannot be directly applied to Dry Low NOx (DLN) hydrogen combustion. This makes the development of new combustion technologies an essential and challenging task for the future of hydrogen-fueled gas turbines.
The newly developed and successfully tested “DLN Micromix” combustion technology offers a great potential to burn hydrogen in gas turbines at very low NOx emissions. Aiming to further develop an existing burner design in terms of increased energy density, a redesign is required in order to stabilise the flames at higher mass flows and to maintain low emission levels.
For this purpose, a systematic design exploration has been carried out with the support of CFD and optimisation tools to identify the influence of geometrical and design parameters on combustor performance. Aerodynamic effects as well as flame and emission formation are observed and understood time- and cost-efficiently. Correlations between single geometric values, the pressure drop of the burner and NOx production have been identified as a result. This numeric methodology helps to reduce the effort of manufacturing and testing to a few designs for single validation campaigns, in order to confirm the flame stability and NOx emissions in a wider operating-condition field.
An approach to automatically generate a dynamic energy simulation model in Modelica for a single existing building is presented. It aims at collecting data about the status quo in the preparation of energy retrofits with low effort and cost. The proposed method starts from a polygon model of the outer building envelope obtained from photogrammetrically generated point clouds. The open-source tools TEASER and AixLib are used for data enrichment and model generation. A case study was conducted on a single-family house. The resulting model can accurately reproduce the internal air temperatures during synthetic heating-up and cooling-down. Modelled and measured whole-building heat transfer coefficients (HTC) agree within a 12% range. A sensitivity analysis emphasises the importance of accurate window characterisation and justifies the use of a very simplified interior geometry. Uncertainties arising from the use of archetype U-values are estimated by comparing different typologies, with best- and worst-case estimates showing differences in pre-retrofit heat demand of about ±20% from the average; however, as the assumptions made are permitted by some national standards, the method is already close to practical applicability and opens up a path to quickly estimate possible financial and energy savings after refurbishment.
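A sketch of the model-generation step, assuming the API of the TEASER examples (around version 0.7) and hypothetical building parameters standing in for values derived from the envelope polygon model, could look like this; the actual tool chain of the paper additionally feeds in the photogrammetric data.

```python
from teaser.project import Project

# Enrich a minimal building description with archetype data and
# export a Modelica model based on the AixLib library.
prj = Project()
prj.add_residential(
    method="iwu",
    usage="single_family_dwelling",
    name="CaseStudyHouse",       # hypothetical single-family house
    year_of_construction=1968,   # drives the archetype U-values
    number_of_floors=2,
    height_of_floors=2.6,
    net_leased_area=140.0,       # e.g. derived from the envelope polygon model
)
prj.calc_all_buildings()
prj.export_aixlib(path="./model_output")
```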
Wind-induced operational variability is one of the major challenges for structural health monitoring of slender engineering structures like aircraft wings or wind turbine blades. Damage-sensitive features often show an even greater sensitivity to operational variability. In this study, a composite cantilever was subjected to multiple mass configurations, velocities and angles of attack in a controlled wind tunnel environment. A small-scale impact damage was introduced to the specimen and the structural response measurements were repeated. The proposed damage detection methodology is based on automated operational modal analysis. A novel baseline preparation procedure is described that reduces the amount of user interaction to the provision of a single consistency threshold. The procedure starts with an indeterminate number of operational modal analysis identifications from a large number of datasets and returns a complete baseline matrix of natural frequencies and damping ratios that is suitable for subsequent anomaly detection. Mahalanobis distance-based anomaly detection is then applied to successfully detect the damage under varying severities of operational variability and with various degrees of knowledge about the present operational conditions. The damage detection capabilities of the proposed methodology were found to be excellent under varying velocities and angles of attack. Damage detection was less successful under joint mass and wind variability but could be significantly improved through the provision of the currently encountered operational conditions.
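The anomaly-detection step can be illustrated in a few lines of Python: fit the mean and covariance of baseline modal features, set a threshold from the baseline distances, and flag test features that exceed it. The feature values, threshold quantile and synthetic baseline below are placeholders, not data from the wind-tunnel study.

```python
import numpy as np

def fit_baseline(X):
    """X: (n_samples, n_features), e.g. natural frequencies and damping ratios."""
    mu = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    return mu, cov_inv

def mahalanobis_sq(x, mu, cov_inv):
    d = x - mu
    return float(d @ cov_inv @ d)

rng = np.random.default_rng(0)
# Synthetic baseline: three natural frequencies in Hz with small scatter
baseline = rng.normal([5.2, 31.0, 88.5], [0.05, 0.3, 0.8], size=(500, 3))
mu, cov_inv = fit_baseline(baseline)

# Threshold: 99th percentile of the baseline distances
threshold = np.quantile([mahalanobis_sq(x, mu, cov_inv) for x in baseline], 0.99)

test = np.array([5.0, 30.2, 87.1])  # frequency drops, e.g. after impact damage
print(mahalanobis_sq(test, mu, cov_inv) > threshold)  # True -> flagged as anomalous
```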
In collaborative research projects, researchers and practitioners work together to solve business-critical challenges. These projects often deal with ETL processes in which humans extract information from non-machine-readable documents by hand. AI-based machine learning models can help to solve this problem.
Since machine learning approaches are not deterministic, the quality of their output may decrease over time. This leads to an overall quality loss of the application that embeds the machine learning models. Hence, the software quality in development and in production may differ.
Machine learning models are black boxes. That makes practitioners skeptical and raises the inhibition threshold for early productive use of research prototypes. Continuous monitoring of software quality in production offers an early response capability to quality loss and encourages the use of machine learning approaches. Furthermore, experts have to ensure that possible new inputs are integrated into the model training as quickly as possible.
In this paper, we introduce an architecture pattern with a reference implementation that extends the concept of Metrics Driven Research Collaboration with automated software quality monitoring in productive use and the possibility to auto-generate new test data from documents processed in production.
Through automated monitoring of the software quality and auto-generated test data, this approach ensures that the software quality meets and maintains the requested thresholds in productive use, even during further continuous deployment and changing input data.
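A minimal sketch of the monitoring idea (invented for illustration; the paper's reference implementation is not reproduced here): auto-generated test cases from production documents are scored continuously, and the application is flagged once a rolling quality metric falls below the agreed threshold.

```python
from collections import deque

class QualityGate:
    """Rolling accuracy monitor for a deployed ML model (illustrative sketch)."""

    def __init__(self, threshold: float, window: int = 100):
        self.threshold = threshold
        self.results = deque(maxlen=window)  # outcomes of auto-generated test cases

    def record(self, prediction_correct: bool) -> bool:
        """Record one test outcome; return False when quality drops below threshold."""
        self.results.append(prediction_correct)
        accuracy = sum(self.results) / len(self.results)
        return accuracy >= self.threshold

gate = QualityGate(threshold=0.9)
for ok in [True] * 80 + [False] * 20:        # simulated drift in production
    healthy = gate.record(ok)
print("quality within threshold:", healthy)  # False -> alert / trigger retraining
```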
A concept for the analysis and optimal design of reinforced concrete structures is described. It is based on a nonlinear optimization algorithm and a finite element program for linear and nonlinear analysis of structures. With the aim of minimum-cost design, a two-stage optimization using efficient gradient algorithms is developed. The optimization problems on the global (structural) and local (cross-sectional) levels are formulated. A parallelization concept for solving the two-stage optimization problem in minimal time is presented. Examples are included to illustrate the practical use and the effectiveness of the parallelization in the area of engineering design.
Reliable methods for automatic readability assessment have the potential to impact a variety of fields, ranging from machine translation to self-informed learning. Recently, large language models for the German language (such as GBERT and GPT-2-Wechsel) have become available, allowing the development of Deep Learning based approaches that promise to further improve automatic readability assessment. In this contribution, we studied the ability of ensembles of fine-tuned GBERT and GPT-2-Wechsel models to reliably predict the readability of German sentences. We combined these models with linguistic features and investigated the dependence of prediction performance on ensemble size and composition. Mixed ensembles of GBERT and GPT-2-Wechsel performed better than ensembles of the same size consisting of only GBERT or GPT-2-Wechsel models. Our models were evaluated in the GermEval 2022 Shared Task on Text Complexity Assessment on data of German sentences. On out-of-sample data, our best ensemble achieved a root mean squared error of 0.435.
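The ensembling itself is conceptually simple; a toy numpy sketch (with invented scores, not the GermEval data) shows how member predictions might be averaged and evaluated by RMSE:

```python
import numpy as np

def ensemble_rmse(member_preds, y_true):
    """member_preds: (n_models, n_sentences) complexity scores from the
    fine-tuned models; the ensemble prediction is their plain mean."""
    y_hat = member_preds.mean(axis=0)
    return float(np.sqrt(np.mean((y_hat - y_true) ** 2)))

# Toy illustration with three hypothetical ensemble members
y_true = np.array([1.8, 3.2, 4.5, 2.1])
preds = np.array([[1.9, 3.0, 4.4, 2.3],
                  [1.7, 3.5, 4.8, 2.0],
                  [2.0, 3.1, 4.3, 2.2]])
print(f"ensemble RMSE: {ensemble_rmse(preds, y_true):.3f}")
```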
We present an automated pipeline for the generation of synthetic datasets for six-dimensional (6D) object pose estimation. To this end, a completely automated generation process based on predefined settings is developed, which enables the user to create large datasets with a minimum of interaction and which is feasible for applications with a high object variance. The pipeline is based on the Unreal Engine 4 (UE4) and provides high variation for domain randomization, such as object appearance, ambient lighting, camera-object transformation and distractor density. In addition to the object pose and bounding box, the metadata includes all randomization parameters, which enables further studies on randomization parameter tuning. The developed workflow is adaptable to other 3D objects and UE4 environments. An exemplary dataset is provided, including five objects of the Yale-CMU-Berkeley (YCB) object set. The dataset consists of 6 million subsegments using 97 rendering locations in 12 different UE4 environments. Each dataset subsegment includes one RGB image, one depth image and one class segmentation image at pixel level.