Conference Proceeding
Solar-electric propulsion (SEP) is superior to the conventional interplanetary transfer method, chemical propulsion combined with gravity assists, with respect to payload capacity, flight time and launch-window flexibility. This advantage results from the large exhaust velocities of electric low-thrust propulsion and also favours missions to the giant planets, Kuiper-belt objects and even a heliopause probe (IHP), as shown in three studies by the authors funded by DLR. The first dealt with a lander for Europa and a sample-return mission from a main-belt asteroid [1], the second with the TANDEM mission [2]; the third, most recent one investigates electric propulsion for the transfer to the edge of the solar system.
All studies are based on triple-junction solar arrays and on rf-ion thrusters of the qualified RIT-22 type, and they use the intelligent trajectory optimization program InTrance [3].
Solar sails provide significant advantages over other low-thrust propulsion systems because they produce thrust by momentum exchange from solar radiation pressure (SRP) and thus do not consume any propellant. The force exerted on a very thin sail foil basically depends on the light incidence angle. Several analytical SRP force models describing the force acting on the sail have been established since the 1970s. All the widely used models assume constant optical force coefficients of the reflecting sail material. In 2006, Mengali et al. proposed a refined SRP force model that takes into account the dependency of the force coefficients on the light incidence angle, the sail's distance from the Sun (and thus the sail temperature) and the surface roughness of the sail material [1]. In this paper, the refined SRP force model is compared to the previous ones in order to identify the potential impact of the new model on the predicted capabilities of solar sails in performing low-cost interplanetary space missions. All force models have been implemented within InTrance, a global low-thrust trajectory optimization software utilizing evolutionary neurocontrol [2]. Two interplanetary rendezvous missions, to Mercury and to the near-Earth asteroid 1996 FG3, are investigated. Two solar sail performances in terms of characteristic acceleration are examined for both scenarios, 0.2 mm/s² and 0.5 mm/s², termed "low" and "medium" sail performance. For the refined SRP model, three different values of surface roughness are chosen, h = 0 nm, 10 nm and 25 nm. The results show that the refined SRP force model yields shorter transfer times than the standard model.
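For orientation, the standard constant-coefficient model mentioned above reduces, for a perfectly reflecting flat sail, to F = 2·P(r)·A·cos²α, with the SRP falling off with the square of the solar distance. A minimal Python sketch (the 1 AU pressure constant and the ideal-reflection simplification are ours, not the refined model from the paper):

```python
import math

P_1AU = 4.563e-6  # solar radiation pressure on an absorbing surface at 1 AU [N/m^2]

def srp_force(area_m2, alpha_rad, r_au=1.0):
    """Ideal-sail SRP force magnitude: F = 2 * P(r) * A * cos^2(alpha).

    Perfect reflection doubles the photon momentum transfer; the pressure
    falls off with the inverse square of the solar distance r.
    """
    pressure = P_1AU / r_au**2
    return 2.0 * pressure * area_m2 * math.cos(alpha_rad) ** 2

def characteristic_acceleration(area_m2, mass_kg):
    """Acceleration of the sailcraft at 1 AU facing the Sun (alpha = 0)."""
    return srp_force(area_m2, 0.0) / mass_kg

# A hypothetical 40 m x 40 m sail on a 160 kg sailcraft:
a_c = characteristic_acceleration(40 * 40, 160.0)
print(f"characteristic acceleration: {a_c * 1000:.3f} mm/s^2")
```

For the assumed numbers this gives roughly 0.09 mm/s², i.e. the order of magnitude quoted elsewhere in these abstracts for first-generation sails.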
Flight times to the heliopause using a combination of solar and radioisotope electric propulsion
(2011)
We investigate the interplanetary flight of a low-thrust space probe to the heliopause, located at a distance of about 200 AU from the Sun. Our goal was to reach this distance within the 25 years postulated by ESA for such a mission (which is less ambitious than the 15-year goal set by NASA). In contrast to solar sail concepts and combinations of ballistic and electrically propelled flight legs, we have investigated whether the set flight-time limit could also be met with a combination of solar-electric propulsion and a second, RTG-powered upper stage. The ion engine type used was the RIT-22 for the first stage and the RIT-10 for the second stage. Trajectory optimization was carried out with the low-thrust optimization program InTrance, which implements the method of Evolutionary Neurocontrol, using Artificial Neural Networks for spacecraft steering and Evolutionary Algorithms to optimize the Neural Networks' parameter set. Based on a parameter space study, in which the number of thrust units, the units' specific impulse, and the relative size of the solar power generator were varied, we have chosen one configuration as reference. The transfer time of this reference configuration was 29.6 years, and the fastest one, which is technically more challenging, still required 28.3 years. As all flight times of this parameter study were longer than 25 years, we further shortened the transfer time by applying a launcher-provided hyperbolic excess energy of up to 49 km²/s². The resulting minimal flight time for the reference configuration was then 27.8 years. The following, more precise optimization to a launch with the European Ariane 5 ECA rocket reduced the transfer time to 27.5 years. This is the fastest mission design of our study that is flexible enough to allow a launch every year. The inclusion of a fly-by at Jupiter finally resulted in a flight time of 23.8 years, which is below the set transfer-time limit. However, compared to the 27.5-year transfer, this mission design has a significantly reduced launch window and mission flexibility if the escape direction is restricted to the heliosphere's "nose".
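The launcher-provided hyperbolic excess energy C3 quoted above translates directly into a hyperbolic excess speed via v∞ = √C3, a standard astrodynamics relation rather than anything specific to this study:

```python
import math

def v_infinity(c3_km2_s2):
    """Hyperbolic excess speed [km/s] from launch energy C3 [km^2/s^2]."""
    return math.sqrt(c3_km2_s2)

# The study applies up to C3 = 49 km^2/s^2:
print(v_infinity(49.0))  # -> 7.0, i.e. 7 km/s of excess speed at Earth escape
```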
This paper presents laser-based powder bed fusion (L-PBF) using various glass powders (borosilicate and quartz glass). Compared to metals, these require adapted process strategies. First, the glass powders were characterized with regard to their material properties and their processability in the powder bed. This was followed by investigations of the melting behavior of the glass powders with different laser wavelengths (10.6 µm, 1070 nm). In particular, the experimental setup of a CO2 laser was adapted for the processing of glass powder. An experimental setup with integrated coaxial temperature measurement/control and an inductively heatable build platform was created. This allowed the L-PBF process to be carried out at the transformation temperature of the glasses. Furthermore, the components' material quality was analyzed on three-dimensional test specimens with regard to porosity, roughness, density and geometrical accuracy in order to evaluate the developed L-PBF parameters and to open up possible applications.
Solar sailcraft provide a wide range of opportunities for high-energy low-cost missions. To date, most mission studies require a rather demanding performance that will not be realized by solar sailcraft of the first generation.
However, even with solar sailcraft of moderate performance, scientifically relevant missions are feasible. This is demonstrated with a Near Earth Asteroid sample return mission and various planetary rendezvous missions.
Solar sails are propelled in space by reflecting solar photons off large mirroring surfaces, thereby transforming the momentum of the photons into a propulsive force. This innovative concept for low-thrust space propulsion works without any propellant and thus provides a wide range of opportunities for high-energy low-cost missions. Offering an efficient way of propulsion, solar sailcraft could close a gap in transportation options for highly demanding exploration missions within our solar system and even beyond. On December 17th, 1999, a significant step was made towards the realization of this technology: a lightweight solar sail structure with an area of 20 m × 20 m was successfully deployed on the ground in a large facility at the German Aerospace Center (DLR) at Cologne. The deployment from a package of 60 cm × 60 cm × 65 cm with a total mass of less than 35 kg was achieved using four extremely lightweight carbon fiber reinforced plastics (CFRP) booms with a specific mass of 100 g/m. The paper briefly reviews the basic principles of solar sails as well as the technical concept and its realization in the ground demonstration experiment, performed in close cooperation between DLR and ESA. Possible next steps are outlined. They could comprise the in-orbit demonstration of the sail deployment on the upper stage of a low-cost rocket and the verification of the propulsion concept by an autonomous, free-flying solar sail in the frame of a scientific mission. It is expected that the present design could be extended to sail sizes of about (40 m)² up to even (70 m)² without significant mass penalty. With these areas, the maximum achievable thrust at 1 AU would range between 10 and 40 mN, comparable to some electric thrusters. Such prototype sails with a mass between 50 and 150 kg plus a micro-spacecraft of 50 to 250 kg would have a maximum acceleration on the order of 0.1 mm/s² at 1 AU, corresponding to a maximum ∆V capability of about 3 km/s per year.
Two near/medium-term mission examples to a near-Earth asteroid (NEA) will be discussed: a rendezvous mission and a sample return mission.
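The "about 3 km/s per year" figure above follows from simply integrating the characteristic acceleration over one year; a quick sanity check (the year length in seconds is the only input we add):

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # Julian year

def delta_v_per_year(a_char_mm_s2):
    """Maximum dV [km/s] accumulated in one year at constant acceleration."""
    return a_char_mm_s2 * 1e-3 * SECONDS_PER_YEAR / 1e3

print(delta_v_per_year(0.1))  # ~3.16 km/s, i.e. "about 3 km/s per year"
```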
Near-Earth asteroid (NEA) 99942 Apophis provides a typical example of the evolution of asteroid orbits that lead to Earth impacts after a close Earth encounter that results in a resonant return. Apophis will have a close Earth encounter in 2029, with potentially very close subsequent Earth encounters (or even an impact) in 2036 or later, depending on whether it passes through one of several gravitational keyholes, each less than 1 km in size, during its 2029 encounter. A pre-2029 kinetic impact is a very favorable option to nudge the asteroid out of a keyhole. The highest impact velocity, and thus the largest deflection, can be achieved from a trajectory that is retrograde to Apophis' orbit. With a chemical or electric propulsion system, however, many gravity assists and thus a long time are required to achieve this. We show in this paper that the solar sail might be the better propulsion system for such a mission: a solar sail Kinetic Energy Impactor (KEI) spacecraft could impact Apophis from a retrograde trajectory with a very high relative velocity (75-80 km/s) during one of its perihelion passages. The spacecraft consists of a 160 m × 160 m, 168 kg solar sail assembly and a 150 kg impactor. Although conventional spacecraft can also achieve the required minimum deflection of 1 km for this approx. 320 m-sized object from a prograde trajectory, our solar sail KEI concept also allows the deflection of larger objects. For a launch in 2020, we also show that, even after Apophis has flown through one of the gravitational keyholes in 2029, the solar sail KEI concept could still prevent Apophis from impacting the Earth; however, many KEIs would be required for consecutive impacts to increase the total Earth-miss distance to a safe value.
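The deflection mechanism can be sketched with the basic momentum-transfer relation Δv = β·m·v_rel/M. The asteroid mass below is a rough literature estimate for Apophis, and the momentum-enhancement factor β = 1 (no ejecta boost) is our illustrative assumption; neither value is taken from the paper:

```python
def deflection_delta_v(impactor_kg, v_rel_m_s, asteroid_kg, beta=1.0):
    """Along-track velocity change of the asteroid from a kinetic impact.

    beta is the momentum-enhancement factor (beta = 1: no ejecta boost).
    """
    return beta * impactor_kg * v_rel_m_s / asteroid_kg

# 150 kg impactor at 77.5 km/s; Apophis mass roughly 6e10 kg (assumed estimate)
dv = deflection_delta_v(150.0, 77.5e3, 6e10)
print(f"{dv * 1000:.3f} mm/s")
```

Even a sub-millimetre-per-second Δv, applied years before the keyhole passage, accumulates into a kilometre-scale along-track displacement, which is why the impact timing matters as much as the impactor itself.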
The planned coal phase-out in Germany by 2038 will lead to the dismantling of power plants with a total capacity of approx. 30 GW. A possible further use of these assets is the conversion of the power plants to thermal storage power plants; the use of such power plants on the day-ahead market, however, is considerably limited by their technical parameters. In this paper, the influence of the technical boundary conditions on the operating times of these storage facilities is presented. For this purpose, the storage power plants were described as an MILP problem, and two price curves, one from 2015 with a relatively low renewable penetration (33 %) and one from 2020 with a high renewable penetration (51 %), are compared. The operating times were examined as a function of the technical parameters, and the critical influencing factors were investigated. The thermal storage power plant's operation duration and the energy shifted with the price curve of 2020 increase by more than 25 % compared to 2015.
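The dispatch logic behind such an MILP can be illustrated with a deliberately simplified greedy arbitrage rule on a toy day-ahead price curve. The prices, the storage size, and the one-charge-one-discharge structure are illustrative assumptions, not the paper's model:

```python
def arbitrage_dispatch(prices, n_hours):
    """Charge in the n cheapest hours, discharge in the n most expensive.

    Returns (charge_hours, discharge_hours, gross_margin).
    A real thermal-storage MILP adds efficiencies, ramp limits and minimum
    run times; this greedy rule only shows the price spread the storage
    lives on.
    """
    order = sorted(range(len(prices)), key=lambda h: prices[h])
    charge = sorted(order[:n_hours])
    discharge = sorted(order[-n_hours:])
    margin = sum(prices[h] for h in discharge) - sum(prices[h] for h in charge)
    return charge, discharge, margin

# Toy day-ahead curve [EUR/MWh] with a midday solar dip:
prices = [45, 40, 38, 36, 35, 37, 42, 55, 60, 50, 30, 15,
          10, 12, 25, 40, 58, 75, 80, 70, 62, 55, 50, 47]
charge, discharge, margin = arbitrage_dispatch(prices, 4)
print(charge, discharge, margin)
```

With a higher renewable penetration the price spread between the solar dip and the evening peak widens, which is the intuition behind the 25 % increase reported above.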
Experimental and numerical investigation on the effect of pressure on micromix hydrogen combustion
(2021)
The micromix (MMX) combustion concept is a DLN gas turbine combustion technology designed for high hydrogen content fuels. Multiple non-premixed miniaturized flames based on jets in cross-flow (JICF) are inherently safe against flashback and ensure stable operation in various operating conditions.
The objective of this paper is to investigate the influence of pressure on the micromix flame, with a focus on the flame initiation point and the NOx emissions. A numerical model based on a steady RANS approach and the Complex Chemistry model with relevant reactions of the GRI 3.0 mechanism is used to predict the reactive flow and NOx emissions at various pressure conditions. Regarding the turbulence-chemistry interaction, the Laminar Flame Concept (LFC) and the Eddy Dissipation Concept (EDC) are compared. The numerical results are validated against experimental results acquired at a high-pressure test facility for industrial can-type gas turbine combustors with regard to flame initiation and NOx emissions.
The numerical approach is adequate to predict the flame initiation point and NOx emission trends. Interestingly, with increasing pressure the flame initiation point shifts upstream: the flame attachment moves from anchoring behind a bluff body located downstream towards anchoring directly at the hydrogen jet. The LFC predicts this change and the NOx emissions more accurately than the EDC. The resulting NOx correlation with pressure is similar to that of a non-premixed combustion configuration.
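A pressure correlation of the kind mentioned above is usually expressed as NOx ∝ pⁿ and extracted by a log-log fit. A sketch on synthetic data; the exponent 0.5, often cited as typical for non-premixed flames, and the data points are our illustration, not measurements from this study:

```python
import math

def fit_pressure_exponent(pressures, nox):
    """Least-squares slope of ln(NOx) over ln(p), i.e. n in NOx ~ p**n."""
    xs = [math.log(p) for p in pressures]
    ys = [math.log(e) for e in nox]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic emissions following NOx = 2.0 * p**0.5 exactly:
p_bar = [1.0, 2.0, 4.0, 8.0, 16.0]
nox_ppm = [2.0 * p ** 0.5 for p in p_bar]
print(round(fit_pressure_exponent(p_bar, nox_ppm), 3))  # -> 0.5
```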
Kawasaki Heavy Industries, Ltd. (KHI) has research and development projects for a future hydrogen society. These projects comprise the complete hydrogen cycle, including the production of hydrogen gas, the refinement and liquefaction for transportation and storage, and finally the utilization in a gas turbine for electricity and heat supply. Within the development of the hydrogen gas turbine, the key technology is stable and low-NOx hydrogen combustion, namely Dry Low NOx (DLN) hydrogen combustion.
KHI, Aachen University of Applied Sciences, and B&B-AGEMA have investigated the possibility of low-NOx micro-mix hydrogen combustion and its application to an industrial gas turbine combustor. From 2014 to 2018, KHI developed a DLN hydrogen combustor for a 2 MW class industrial gas turbine with the micro-mix technology. Thereby, the ignition performance and the flame stability at equivalent rotational speed and higher load conditions were investigated. NOx emission values were kept at about half the limit of the Air Pollution Control Law in Japan, 84 ppm (at 15 % O2). Herewith, the elementary combustor development was completed.
In May 2020, KHI started engine demonstration operation using an M1A-17 gas turbine with a co-generation system located in the hydrogen-fueled power generation plant in Kobe City, Japan. During the first engine demonstration tests, adjustments of engine starting and load control with fuel staging were investigated. On 21st May, the electrical power output reached 1,635 kW, which corresponds to 100 % load (ambient temperature 20 °C), and NOx emissions of 65 ppm (at 15 % O2, 60 % RH) were verified. Here, for the first time, a DLN hydrogen-fueled gas turbine successfully generated power and heat.
This study investigates the influence of pressure on the temperature distribution of the micromix (MMX) hydrogen flame and on the NOx emissions. A steady computational fluid dynamics (CFD) analysis is performed by simulating a reactive flow with a detailed chemical reaction model. The numerical analysis is validated against experimental investigations, and a quantitative correlation is parametrized based on the numerical results. We find that the flame initiation point shifts with increasing pressure from anchoring behind a bluff body located downstream towards anchoring upstream at the hydrogen jet. The numerical NOx emission trend with respect to pressure variation is in good agreement with the experimental results. The pressure has an impact on both the residence time within the maximum-temperature region and the peak temperature itself. In conclusion, the numerical model proved to be adequate for future prototype design exploration studies aimed at improving the operating range.
Solar sails enable missions to the outer solar system and beyond, although the solar radiation pressure decreases with the square of the solar distance. For such missions, the solar sail may gain a large amount of energy by first making one or more close approaches to the Sun. Within this paper, optimal trajectories for solar sail missions to the outer planets and into near interstellar space (200 AU) are presented. It is thereby shown that even near/medium-term solar sails with relatively moderate performance allow reasonable transfer times to the boundaries of the solar system.
A Gamified Information System (GIS) implements game concepts and elements, such as affordances and game design principles, to motivate people. Based on the idea of developing a GIS to increase the motivation of software developers to perform software quality tasks, the research work at hand aims at investigating relevant requirements from that target group. Therefore, 14 interviews with software development experts were conducted and analyzed. According to the results, software developers prefer the affordances points and narrative storytelling, in a multiplayer and round-based setting. Furthermore, six design principles for the development of a GIS are derived.
In this paper we investigate the use of deep neural networks for 3D object detection in uncommon, unstructured environments, such as an open-pit mine. While neural nets are frequently used for object detection in regular autonomous driving applications, more unusual driving scenarios beyond street traffic pose additional challenges. For one, the collection of appropriate data sets to train the networks is an issue. For another, testing the performance of trained networks often requires tailored integration with the particular domain as well. While different solutions for these problems exist in regular autonomous driving, only very few approaches work equally well for special domains. We address both challenges in this work. First, we discuss two possible ways of acquiring data for training and evaluation: we evaluate a semi-automated annotation of recorded LIDAR data, and we examine synthetic data generation. Using these datasets, we train and test different deep neural networks for the task of object detection. Second, we propose a possible integration of a ROS2 detector module for an autonomous driving platform. Finally, we present the performance of three state-of-the-art deep neural networks for 3D object detection on a synthetic dataset and on a smaller one containing a characteristic object from an open-pit mine.
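Evaluating 3D detectors like those above typically reduces to computing the intersection-over-union between predicted and annotated boxes. A minimal sketch for axis-aligned 3D boxes; real benchmarks use oriented boxes, so this is a simplification:

```python
def iou_3d(a, b):
    """IoU of two axis-aligned 3D boxes (xmin, ymin, zmin, xmax, ymax, zmax)."""
    inter = 1.0
    for i in range(3):
        lo = max(a[i], b[i])
        hi = min(a[i + 3], b[i + 3])
        if hi <= lo:
            return 0.0  # no overlap along this axis
        inter *= hi - lo
    volume = lambda box: ((box[3] - box[0]) * (box[4] - box[1]) * (box[5] - box[2]))
    return inter / (volume(a) + volume(b) - inter)

# A prediction shifted by half a box width along x:
print(iou_3d((0, 0, 0, 2, 2, 2), (1, 0, 0, 3, 2, 2)))  # -> 1/3
```

A detection is then usually counted as a true positive when its IoU with a ground-truth box exceeds a threshold such as 0.5 or 0.7.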
The recovery of waste heat requires heat exchangers to extract it from a liquid or gaseous medium into another working medium, a refrigerant. In Organic Rankine Cycles (ORC) on combustion engines, there are two major heat sources: the exhaust gas and the water/glycol fluid from the engine's cooling circuit. A heat exchanger design must be adapted to the different requirements and conditions resulting from the heat sources, fluids, system configurations, geometric restrictions, and so on. The Stacked Shell Cooler (SSC) is a new and very specific plate heat exchanger design, created by AKG, which allows the optimization of the heat exchange rate and the reduction of the related pressure drop with a maximum degree of freedom. Such optimization of the heat exchanger design is all the more important for ORC systems because it reduces the energy consumption of the system and therefore maximizes the increase in overall engine efficiency.
Water suppliers are faced with the great challenge of achieving high-quality and, at the same time, low-cost water supply. Since climatic and demographic influences will pose further challenges in the future, the resilience enhancement of water distribution systems (WDS), i.e. the enhancement of their capability to withstand and recover from disturbances, has recently been in particular focus. To assess the resilience of WDS, graph-theoretical metrics have been proposed. In this study, a promising approach is first derived analytically from physical principles and then applied to assess the resilience of the WDS of a district in a major German city. The topology-based resilience index, computed for every consumer node, takes into consideration the resistance of the best supply path as well as of alternative supply paths. The resistance of a supply path is derived to be the dimensionless pressure loss in the pipes making up the path. The conducted analysis of an existing WDS provides insight into the process of actively influencing the resilience of WDS locally and globally by adding pipes. The study shows that especially pipes added close to the reservoirs and to the main branching points of the WDS result in a high resilience enhancement of the overall WDS.
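The best-supply-path resistance entering such a topology-based index can be computed with a standard shortest-path search over pipe resistances. A sketch: the toy network, the resistance values and the inverse-resistance index shown here are illustrative assumptions, not the metric's exact formula:

```python
import heapq

def best_path_resistance(edges, source, target):
    """Dijkstra over edge weights interpreted as dimensionless pressure losses.

    edges: dict mapping node -> list of (neighbour, resistance) pairs.
    """
    dist = {source: 0.0}
    queue = [(0.0, source)]
    while queue:
        d, u = heapq.heappop(queue)
        if u == target:
            return d
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in edges.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(queue, (nd, v))
    return float("inf")  # target not reachable

# Toy WDS: reservoir R feeds consumer C via two alternative paths.
net = {
    "R": [("A", 0.2), ("B", 0.5)],
    "A": [("C", 0.3)],
    "B": [("C", 0.1)],
}
r_best = best_path_resistance(net, "R", "C")
print(r_best, 1.0 / r_best)  # best-path resistance and an inverse-resistance index
```

Adding a pipe close to the reservoir lowers the best-path resistance for many consumer nodes at once, which matches the study's observation about where reinforcement pays off most.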
In times of planned obsolescence, the demand for sustainability keeps growing. Ideally, a technical system is highly reliable, without failures and downtimes due to fast wear of single components. At the same time, maintenance should preferably be limited to pre-defined time intervals. Dispersing load between multiple components can increase a system's reliability and thus its availability between maintenance points. However, this also results in higher investment costs and additional effort due to higher complexity. Given a specific load profile and the resulting wear of components, it is often unclear which system structure is the optimal one. Technical Operations Research (TOR) finds an optimal structure balancing availability and effort. We present our approach by designing a hydrostatic transmission system.
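The availability gain from load dispersion can be quantified with the textbook formulas for series and parallel arrangements. A small sketch; the component availabilities are made-up numbers, not values from the study:

```python
def availability_series(components):
    """System works only if every component works."""
    a = 1.0
    for c in components:
        a *= c
    return a

def availability_parallel(components):
    """System works if at least one component works (full redundancy)."""
    fail = 1.0
    for c in components:
        fail *= 1.0 - c
    return 1.0 - fail

# Two pumps with 90 % availability each:
print(round(availability_series([0.9, 0.9]), 10))    # -> 0.81
print(round(availability_parallel([0.9, 0.9]), 10))  # -> 0.99
```

The redundancy buys an order of magnitude in downtime, which is exactly what has to be traded against the extra investment cost in the TOR formulation.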
The understanding that optimized components do not automatically lead to energy-efficient systems shifts the attention from the single component to the entire technical system. At TU Darmstadt, a new field of research named Technical Operations Research (TOR) has its origin. It combines mathematical and technical know-how for the optimal design of technical systems. We illustrate our optimization approach with a case study on the design of a ventilation system, with the ambition to minimize the energy consumption for a temporal distribution of diverse load demands. By combining scaling laws with our optimization methods, we find the optimal combination of fans and show the advantage of using multiple fans.
Energy-efficient components do not automatically lead to energy-efficient systems. Technical Operations Research (TOR) shifts the focus from the single component to the system as a whole and finds its optimal topology and operating strategy simultaneously. In previous works, we provided a preselected construction kit of suitable components for the algorithm. This approach may give rise to a combinatorial explosion if the preselection cannot be cut down to a reasonable number by human intuition. To reduce the number of discrete decisions, we integrate laws derived from similarity theory into the optimization model. Since the physical characteristics within a production series are similar, they can be described by affinity and scaling laws. Making use of these laws, our construction kit can be modeled more efficiently: instead of a preselection of components, it now encompasses whole model ranges. This allows us to significantly increase the number of possible set-ups in our model. In this paper, we present how to embed this new formulation into a mixed-integer program and assess the run time via benchmarks. We demonstrate our approach using the example of a ventilation system design problem.
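The affinity laws referred to above relate flow, pressure head and power of geometrically similar fans across speed and diameter: Q ∝ nD³, H ∝ n²D², P ∝ n³D⁵. A small sketch with an invented reference operating point:

```python
def scale_fan(q, h, p, speed_ratio, diameter_ratio=1.0):
    """Scale a fan operating point (flow, head, power) by the affinity laws."""
    n, d = speed_ratio, diameter_ratio
    return (q * n * d**3,     # volume flow   Q ~ n   * D^3
            h * n**2 * d**2,  # pressure head H ~ n^2 * D^2
            p * n**3 * d**5)  # shaft power   P ~ n^3 * D^5

# Doubling the speed of a fan delivering 1 m^3/s at 500 Pa with 0.8 kW:
print(scale_fan(1.0, 500.0, 0.8, speed_ratio=2.0))  # -> (2.0, 2000.0, 6.4)
```

The cubic power law is why a whole model range can be represented by one parametrized component: a single reference curve spans many discrete catalogue entries.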
A new method for improved autoclave loading within the restrictive framework of helicopter manufacturing is proposed. It is derived from experimental and numerical studies of the curing process and aims at optimizing tooling positions in the autoclave for fast and homogeneous heat-up. The mold positioning is based on two sets of information: first, the thermal properties of the molds, which can be determined via semi-empirical thermal simulation; second, a previously determined distribution of heat transfer coefficients inside the autoclave. Finally, an experimental proof of concept is performed to show a cycle time reduction of up to 31 % using the proposed methodology.
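For a first estimate, the heat-up of a mold at a given autoclave position can be approximated by a lumped-capacitance model, T(t) = T∞ + (T0 − T∞)·exp(−hA·t/(m·c)). The material and coefficient values below are illustrative assumptions, not data from the study:

```python
import math

def mold_temperature(t_s, t0, t_air, h, area, mass, cp):
    """Lumped-capacitance heat-up of a mold in the autoclave air stream."""
    tau = mass * cp / (h * area)  # thermal time constant [s]
    return t_air + (t0 - t_air) * math.exp(-t_s / tau)

# Hypothetical 50 kg aluminium mold (cp ~ 900 J/kgK), 1.5 m^2 exposed area,
# local heat transfer coefficient 50 W/m^2K, autoclave air at 180 C:
tau = 50 * 900 / (50 * 1.5)  # = 600 s
print(mold_temperature(tau, 20.0, 180.0, 50.0, 1.5, 50.0, 900.0))
```

The position-dependent coefficient h is exactly what the measured distribution inside the autoclave provides, so molds with large time constants can be placed where h is highest.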
For typical cases of non-isolated lightning protection systems (LPS), the impulse currents are investigated that may flow through a human body directly touching a structural part of the LPS. Based on a basic LPS model with conventional down-conductors, especially the cases of external and internal steel columns and metal façades are considered and compared. Numerical simulations of the line quantities, voltages and currents, in the time domain are performed with an equivalent circuit of the entire LPS.
As a result, it can be stated that increasing the number of conventional down-conductors and external steel columns does reduce the threat to a human being, but not down to an acceptable limit. In the case of internal steel columns used as natural down-conductors, the threat can be reduced sufficiently, depending on the low-resistance connection of the steel columns to the lightning equipotential bonding or the earth termination system, respectively. If a metal façade is used, the threat to human beings touching it is usually very low, provided the façade is sufficiently interconnected and multiply connected to the lightning equipotential bonding or the earth termination system.
Finding a good system topology with more than a handful of components is a highly non-trivial task. The system needs to be able to fulfil all expected load cases, but at the same time the components should interact in an energy-efficient way. An example of a system design problem is the layout of the drinking water supply of a residential building. It may be reasonable to choose a design of spatially distributed pumps connected by pipes in at least two dimensions. This leads to a large variety of possible system topologies. To solve such problems in a reasonable time frame, the nonlinear technical characteristics must be modelled as simply as possible, while still achieving a sufficiently good representation of reality. The aim of this paper is to compare the speed and reliability of a selection of leading mathematical programming solvers on a set of varying model formulations. This gives us empirical evidence on which combinations of model formulations and solver packages are the means of choice with the current state of the art.
The UN sets the goal of ensuring access to water and sanitation for all people by 2030. To address this goal, we present a multidisciplinary approach for designing water supply networks for slums in large cities by applying mathematical optimization. The problem is modeled as a mixed-integer linear problem (MILP) aiming to find a network describing the optimal supply infrastructure. To illustrate the approach, we apply it to a small slum cluster in Dhaka, Bangladesh.
The overall energy efficiency of ventilation systems can be improved by considering not only single components but also the interplay between every part of the system. With the help of the method "TOR" ("Technical Operations Research"), which was developed at the Chair of Fluid Systems at TU Darmstadt, it is possible to improve the energy efficiency of the whole system by considering all possible design choices programmatically. We demonstrate this systematic design approach with a ventilation system for buildings as a use-case example.
We model the ventilation system as a Mixed-Integer Nonlinear Program (MINLP). We use binary variables to model the selection of different pipe diameters. Multiple fans are modeled with the help of scaling laws. The whole system is represented by a graph, where the edges represent the pipes and fans, and the nodes represent the source of air for cooling and the sinks that have to be cooled. At the beginning, the human designer chooses a construction kit of different suitable fans, pipes of different diameters, and different load cases. These boundary conditions define a variety of possible system topologies. It is not possible to consider all topologies by hand; with the help of state-of-the-art solvers, however, it is possible to solve this MINLP.
In addition, we consider the effects of malfunctions in different components. To this end, we show a first approach to measuring the resilience of the example use case. Further, we compare the conventional approach with designs that are more resilient. These more resilient designs are derived by extending the aforementioned model with further constraints that explicitly consider the resilience of the overall system. We show that it is possible to design resilient systems with this method already in the early design stage, and we compare the energy efficiency and resilience of the different system designs.
To provide sufficient pressure for supplying all floors of high buildings with water, booster stations, normally consisting of several parallel pumps in the basement, are used. In this work, we demonstrate the potential of a decentralized pump topology with regard to energy savings in water supply systems of skyscrapers. We present an approach, based on Mixed-Integer Nonlinear Programming, that allows choosing an optimal network topology and optimal pumps from a predefined construction kit comprising different pump types. Using domain-specific scaling laws and Latin Hypercube Sampling, we generate different input sets of pump types and compare their impact on the efficiency and cost of the total system design. As a realistic application example, we consider a hotel building with 325 rooms, 12 floors and up to four pressure zones.
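Latin Hypercube Sampling, as used above, stratifies each input dimension so that every stratum is hit exactly once. A minimal stdlib implementation; the two-dimensional pump-parameter interpretation is our invented example:

```python
import random

def latin_hypercube(n_samples, n_dims, rng=random):
    """One point per stratum per dimension: dimension-wise shuffled strata."""
    samples = [[0.0] * n_dims for _ in range(n_samples)]
    for d in range(n_dims):
        strata = list(range(n_samples))
        rng.shuffle(strata)
        for i in range(n_samples):
            # Uniform draw inside stratum [k/n, (k+1)/n)
            samples[i][d] = (strata[i] + rng.random()) / n_samples
    return samples

# Five samples over two (hypothetical) normalized pump parameters:
pts = latin_hypercube(5, 2)
for p in pts:
    print([round(c, 2) for c in p])
```

Compared with plain random sampling, this guarantees coverage of the whole parameter range in every dimension with far fewer samples, which matters when each sample triggers an expensive MINLP solve.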
The paper industry has the third-highest energy consumption in the European Union. Using recycled paper instead of fresh fibres for papermaking is less energy consuming and saves resources. However, adhesive contaminants in recycled paper are particularly problematic, since they reduce the quality of the resulting paper product. To remove as many contaminants and, at the same time, retain as many valuable fibres as possible, fine screening systems consisting of multiple interconnected pressure screens are used. Choosing the best configuration is a non-trivial task: the screens can be interconnected in several ways, and suitable screen designs as well as operational parameters have to be selected. Additionally, one has to face conflicting objectives. In this paper, we present an approach for the multi-criteria optimization of pressure screen systems based on Mixed-Integer Nonlinear Programming. We specifically focus on a clear representation of the trade-off between different objectives.
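Representing the trade-off boils down to extracting the Pareto-optimal configurations from the evaluated ones. A small sketch for the two objectives named above, fibre yield (maximize) and residual contaminants (minimize), with invented candidate values:

```python
def pareto_front(points):
    """Non-dominated points for objectives (yield: max, contaminants: min)."""
    front = []
    for y, c in points:
        dominated = any(
            (y2 >= y and c2 <= c) and (y2 > y or c2 < c)
            for y2, c2 in points
        )
        if not dominated:
            front.append((y, c))
    return sorted(front)

# (fibre yield %, residual contaminants per kg) for candidate screen set-ups:
candidates = [(90, 12), (85, 5), (92, 20), (80, 4), (88, 5), (91, 18)]
print(pareto_front(candidates))
```

Every point on the resulting front is a defensible design; the decision maker then picks the acceptable contaminant level and reads off the achievable yield.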
Water suppliers are faced with the great challenge of achieving a high-quality and, at the same time, low-cost water supply. In practice, the focus is set on the most beneficial maintenance measures and/or capacity adaptations of existing water distribution systems (WDS). Since climatic and demographic influences will pose further challenges in the future, the resilience enhancement of WDS, i.e. the enhancement of their capability to withstand and recover from disturbances, has recently received particular attention. To assess the resilience of WDS, metrics based on graph theory have been proposed. In this study, a promising approach is applied to assess the resilience of the WDS of a district in a major German city. The conducted analysis provides insight into the process of actively influencing the resilience of WDS.
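Graph-based resilience metrics of the kind referred to above can be illustrated with a deliberately simple example metric (not the metric used in the study): the average fraction of nodes that remain reachable from the source after a single pipe failure:

```python
from collections import deque

def reachable(adj, src, removed_edge):
    """BFS over an undirected network, skipping one failed pipe."""
    seen = {src}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if frozenset((u, v)) == removed_edge or v in seen:
                continue
            seen.add(v)
            queue.append(v)
    return seen

def single_failure_resilience(adj, src):
    """Average fraction of non-source nodes still supplied after one pipe fails."""
    edges = {frozenset((u, v)) for u in adj for v in adj[u]}
    n = len(adj) - 1
    scores = [(len(reachable(adj, src, e)) - 1) / n for e in edges]
    return sum(scores) / len(scores)
```

A looped (ring) network scores 1.0 because every node stays reachable via the second path, while a branched (tree) network loses supply to every node downstream of the failed pipe.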
The development of resilient technical systems is a challenging task, as the system should adapt automatically to unknown disturbances and component failures. To evaluate different approaches for deriving resilient technical system designs, we developed a modular test rig that is based on a pumping system. On the basis of this example system, we present metrics to quantify resilience and an algorithmic approach to improve it. This approach enables the pumping system to react automatically to unknown disturbances and to reduce the impact of component failures. In this case, the system is able to adapt its topology automatically by activating additional valves, which enables it to still reach a minimum performance even in case of failures. Furthermore, time-dependent disturbances are evaluated continuously, and deviations from the original state are automatically detected and anticipated for the future. This reduces the impact of future disturbances and leads to more resilient system behaviour.
The course Physics for Electrical Engineering is part of the curriculum of the bachelor program Electrical Engineering at FH Aachen University of Applied Sciences.
Before Covid-19, the course was conducted in a rather traditional way, with all parts (lecture, exercise and lab) face-to-face. This teaching approach changed fundamentally within a week when the Covid-19 restrictions forced all courses into distance learning. All parts of the course were transformed to pure distance learning, including synchronous and asynchronous parts for the lecture, live online sessions for the exercises and self-paced labs at home. Using these methods, the course was able to impart the required knowledge and competencies. Taking into account the teacher's observations of the students' learning behaviour and engagement, the formal and informal feedback of the students, and the results of the exams, the new methods are evaluated with respect to effectiveness, sustainability and suitability for competence transfer. Based on this analysis, strong and weak points of the concept, as well as countermeasures to address the weak points, were identified. The analysis further leads to a sustainable teaching approach combining synchronous and asynchronous parts with self-paced learning times that can be used in a very flexible manner for different learning scenarios: purely online, hybrid (a mixture of online and in-person phases) and purely in-person teaching.
Adapting augmented reality systems to the users’ needs using gamification and error solving methods
(2021)
Animations of virtual items in AR support systems are typically predefined and lack interaction with dynamic physical environments. AR applications rarely consider users' preferences and do not provide customized, spontaneous support in unknown situations. This research focuses on developing adaptive, error-tolerant AR systems based on directed acyclic graphs and error-resolving strategies. Using this approach, users will have more freedom of choice during AR-supported work, which leads to more efficient workflows. Error-correction methods based on CAD models and predefined process data create individual support possibilities. The framework is implemented in the Industry 4.0 model factory at FH Aachen.
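The role of the directed acyclic graph can be sketched as follows: given a step graph and the set of completed steps, the system offers every step whose prerequisites are fulfilled, which is what gives users freedom of choice in the work order. The step names are made up for illustration and are not from the described framework:

```python
def next_steps(dag, done):
    """Return all steps whose prerequisites are completed (DAG of work steps)."""
    prereqs = {s: set() for s in dag}
    for step, successors in dag.items():
        for succ in successors:
            prereqs.setdefault(succ, set()).add(step)
    return sorted(s for s, pre in prereqs.items()
                  if s not in done and pre <= set(done))

# Hypothetical assembly workflow: edges point from a step to its successors.
dag = {"place_base": ["mount_motor", "attach_panel"],
       "mount_motor": ["wire"],
       "attach_panel": ["wire"],
       "wire": []}
```

After "place_base" is done, both "mount_motor" and "attach_panel" become available, so the AR system can adapt its guidance to whichever the user picks.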
The chemical industry is one of the most important industrial sectors in Germany in terms of manufacturing revenue. While thermodynamic boundary conditions often restrict the scope for reducing the energy consumption of core processes, secondary processes such as cooling offer potential for energy optimisation. In this contribution, we therefore model and optimise an existing cooling system. The technical boundary conditions of the model are provided by the operators, the German chemical company BASF SE. In order to systematically evaluate different degrees of freedom in topology and operation, we formulate and solve a Mixed-Integer Nonlinear Program (MINLP), and compare our optimisation results with the existing system.
Component failures within water supply systems can lead to significant performance losses. One way to address these losses is the explicit anticipation of failures within the design process. We consider a water supply system for high-rise buildings, where pump failures are the most likely failure scenarios. We explicitly consider these failures within an early design stage, which leads to a more resilient system, i.e., a system which is able to operate under a predefined number of arbitrary pump failures. We use a mathematical optimization approach to compute such a resilient design. This is based on a multi-stage model for topology optimization, which can be described by a system of nonlinear inequalities and integrality constraints. Such a model has to be computationally tractable and, at the same time, represent the real-world system accurately. We therefore validate the algorithmic solutions using experiments on a scaled test rig for high-rise buildings. The test rig allows for an arbitrary connection of pumps to reproduce scaled versions of booster station designs for high-rise buildings. We experimentally verify the applicability of the presented optimization model and confirm that the proposed resilience properties are also fulfilled in real systems.
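The resilience property described above, operation under a predefined number of arbitrary pump failures, can be checked by brute force for small booster stations. A deliberately simplified sketch assuming parallel pumps whose flows simply add up (the paper's actual model is a multi-stage MINLP, not this check):

```python
from itertools import combinations

def is_resilient(pump_flows, demand, k):
    """True if demand is still met when any k of the parallel pumps fail."""
    n = len(pump_flows)
    return all(
        sum(f for i, f in enumerate(pump_flows) if i not in set(failed)) >= demand
        for failed in combinations(range(n), k)
    )
```

Three pumps of 10 units each can cover a demand of 20 under any single failure, but not under any double failure, so a k=2 resilient design would need additional or larger pumps.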
Successful optimization requires an appropriate model of the system under consideration. When selecting a suitable level of detail, one has to consider solution quality as well as the computational and implementation effort. In this paper, we present a MINLP for a pumping system for the drinking water supply of high-rise buildings. We investigate the influence of the granularity of the underlying physical models on the solution quality. Therefore, we model the system with a varying level of detail regarding the friction losses, and conduct an experimental validation of our model on a modular test rig. Furthermore, we investigate the computational effort and show that it can be reduced by the integration of domain-specific knowledge.
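The idea of varying the friction-loss granularity can be sketched by comparing a coarse constant friction factor with the Swamee–Jain approximation of the Colebrook equation inside the Darcy–Weisbach head-loss formula. The pipe parameters are illustrative; the paper's actual model levels may differ:

```python
import math

G = 9.81     # gravitational acceleration, m/s^2
NU = 1.0e-6  # kinematic viscosity of water, m^2/s

def head_loss(v, L, D, eps=1.5e-6, model="swamee_jain"):
    """Darcy-Weisbach head loss [m] for flow velocity v [m/s] in a pipe
    of length L [m], diameter D [m] and roughness eps [m]."""
    if model == "constant":
        f = 0.02  # coarse level of detail: fixed friction factor
    else:
        Re = v * D / NU  # Reynolds number (turbulent flow assumed)
        f = 0.25 / math.log10(eps / (3.7 * D) + 5.74 / Re**0.9) ** 2
    return f * (L / D) * v**2 / (2 * G)
```

Comparing both levels on the same operating point quantifies the modeling error that the coarser variant trades for a simpler, faster-to-solve optimization model.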
Water distribution systems are an essential supply infrastructure for cities. Given that climatic and demographic influences will pose further challenges for these infrastructures in the future, the resilience of water supply systems, i.e. their ability to withstand and recover from disruptions, has recently become a subject of research. To assess the resilience of a WDS, different graph-theoretical approaches exist. In addition to general metrics characterizing the network topology, hydraulic and technical restrictions also have to be taken into account. In this work, the resilience of an exemplary water distribution network of a major German city is assessed, and a Mixed-Integer Program is presented which makes it possible to assess the impact of capacity adaptations on its resilience.
To maximize the travel distances of battery electric vehicles such as cars or buses for a given amount of stored energy, their powertrains are optimized energetically. One key part of optimization models for electric powertrains is the efficiency map of the electric motor. The underlying function is usually highly nonlinear and nonconvex and leads to major challenges within a global optimization process. One possibility to enable faster solution times is to use piecewise linearization techniques to approximate the nonlinear efficiency map with linear constraints. We therefore evaluate the influence of different piecewise linearization modeling techniques on the overall solution process and compare the solution time and accuracy for methods with and without explicitly used binary variables.
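Piecewise linearization of a nonlinear efficiency map can be sketched in one dimension. The quadratic efficiency curve and breakpoints below are hypothetical; in a MINLP the interpolation would be encoded via SOS2 or binary constraints rather than evaluated directly as done here:

```python
import bisect

def pwl(breaks, values, x):
    """Evaluate the piecewise linear approximation defined by sorted breakpoints."""
    i = max(1, min(bisect.bisect_right(breaks, x), len(breaks) - 1))
    x0, x1 = breaks[i - 1], breaks[i]
    y0, y1 = values[i - 1], values[i]
    t = (x - x0) / (x1 - x0)  # position within the segment
    return y0 + t * (y1 - y0)

# Hypothetical motor efficiency over normalized torque
eta = lambda x: 0.9 - 0.3 * (x - 0.6) ** 2
breaks = [0.0, 0.25, 0.5, 0.75, 1.0]
vals = [eta(b) for b in breaks]
```

With this breakpoint spacing the worst-case approximation error of the quadratic map stays below about 0.005, illustrating the accuracy/model-size trade-off the abstract refers to.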
One central challenge for self-driving cars is proper path planning. Once a trajectory has been found, the next challenge is to follow the precalculated path accurately and safely. The model-predictive controller (MPC) is a common approach for the lateral control of autonomous vehicles. The MPC uses a vehicle dynamics model to predict the future states of the vehicle over a given prediction horizon. However, in order to achieve real-time path control, the computational load is usually large, which leads to short prediction horizons. To deal with the computational load, the control algorithm can be parallelized on the graphics processing unit (GPU). In contrast to the widely used stochastic methods, in this paper we propose a deterministic approach based on grid search. Our approach focuses on systematically exploring the search space at different levels of granularity. To achieve this, we split the optimization algorithm into multiple iterations. The best sequence of each iteration is then used as the initial solution for the next iteration, in which the granularity increases, resulting in smooth and predictable steering-angle sequences. We present a novel GPU-based algorithm and demonstrate its accuracy and real-time capability in a number of real-world experiments.
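The coarse-to-fine idea can be sketched for a single scalar decision variable (on the CPU, without the GPU parallelization; the quadratic cost function is a stand-in, not the vehicle-dynamics cost from the paper):

```python
def refine_search(cost, lo, hi, levels=4, n=11):
    """Coarse-to-fine grid search: each level re-centres a finer grid
    on the best candidate from the previous level (fully deterministic)."""
    best = None
    for _ in range(levels):
        step = (hi - lo) / (n - 1)
        grid = [lo + k * step for k in range(n)]
        best = min(grid, key=cost)      # exhaustive on the current grid
        lo, hi = best - step, best + step  # zoom in around the winner
    return best
```

Because each level shrinks the search interval by a constant factor, the accuracy improves geometrically with the number of iterations while every grid evaluation remains embarrassingly parallel, which is what makes the GPU variant attractive.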
Conventional EEG devices cannot be used in everyday life; hence, research in the past decade has focused on Ear-EEG for mobile, at-home monitoring in various applications ranging from emotion detection to sleep monitoring. As the area available for electrode contact in the ear is limited, electrode size and location play a vital role in an Ear-EEG system. In this investigation, we present a quantitative study of ear electrodes of two sizes at different locations in wet and dry configurations. Electrode impedance scales inversely with size and ranges from 450 kΩ to 1.29 MΩ for dry contact and from 22 kΩ to 42 kΩ for wet contact at 10 Hz. For either size, the location in the ear canal with the lowest impedance is ELE (Left Ear Superior), presumably due to increased contact pressure caused by the outer-ear anatomy. The results can be used to optimize signal pickup and SNR for specific applications. We demonstrate this by recording sleep spindles during sleep onset with high quality (5.27 μVrms).
Multi-attribute relation extraction (MARE): simplifying the application of relation extraction
(2021)
Natural language understanding’s relation extraction makes innovative and encouraging novel business concepts possible and facilitates new digitalized decision-making processes. Current approaches allow the extraction of relations with a fixed number of entities as attributes. Extracting relations with an arbitrary number of attributes requires complex systems and costly relation-trigger annotations to assist these systems. We introduce multi-attribute relation extraction (MARE) as an assumption-less problem formulation with two approaches, facilitating an explicit mapping from business use cases to the data annotations. Avoiding elaborate annotation constraints simplifies the application of relation extraction approaches. The evaluation compares our models to current state-of-the-art event extraction and binary relation extraction methods. Our approaches show improvements over these methods in the extraction of general multi-attribute relations.
Communication via serial bus systems such as CAN plays an important role in all kinds of embedded electronic and mechatronic systems. To cope with the functional-safety requirements of safety-critical applications, the safety features of the communication systems need to be enhanced. One measure to achieve more robust communication is to add redundant data transmission paths to the applications. In general, the communication of real-time embedded systems such as automotive applications is tethered, and the redundant data transmission lines are also tethered, increasing the size of the wiring harness and the weight of the system. A radio link is preferable as a redundant transmission line, as it uses a complementary transmission medium compared to the wired solution and, in addition, reduces wiring-harness size and weight. Standard wireless links like Wi-Fi or Bluetooth cannot meet the real-time requirements of bus communication. The new dual-mode radio enables a redundant transmission line meeting all requirements with regard to real-time capability, robustness and transparency for the data bus. In addition, it provides a transmission medium complementary to commonly used tethered links. A CAN bus system is used to demonstrate the redundant data transfer via tethered and wireless CAN.
The integration of frequently changing, volatile product data from different manufacturers into a single catalog is a significant challenge for small and medium-sized e-commerce companies. They rely on the timely integration of product data to present it aggregated in an online shop, without knowing the format specifications, the manufacturers' understanding of concepts, or the data quality. Furthermore, format, concepts, and data quality may change at any time. Consequently, integrating product catalogs into a single standardized catalog is often a laborious manual task. Current strategies to streamline or automate catalog integration use techniques based on machine learning, word vectorization, or semantic similarity. However, most approaches struggle with low-quality or real-world data. We propose Attribute Label Ranking (ALR), a recommendation engine that simplifies for practitioners the integration of previously unknown, proprietary tabular formats into a standardized catalog. We evaluate ALR by focusing on the impact of different neural network architectures, language features, and semantic similarity. Additionally, we consider metrics for industrial application and present the impact of ALR in production as well as its limitations.
The progress in natural language processing (NLP) research over the last years offers novel business opportunities for companies, such as automated user interaction or improved data analysis. Building sophisticated NLP applications requires dealing with modern machine learning (ML) technologies, which impedes enterprises from establishing successful NLP projects. Our experience in applied NLP research projects shows that the continuous integration of research prototypes into production-like environments with quality assurance builds trust in the software and demonstrates its convenience and usefulness with regard to the business goal. We introduce STAMP 4 NLP as an iterative and incremental process model for developing NLP applications. With STAMP 4 NLP, we merge software engineering principles with best practices from data science. Instantiating our process model allows prototypes to be created efficiently by utilizing templates, conventions, and implementations, enabling developers and data scientists to focus on the business goals. Due to our iterative-incremental approach, businesses can deploy an enhanced version of the prototype to their software environment after every iteration, maximizing potential business value and trust early and avoiding the cost of successful yet never-deployed experiments.
In positron emission tomography, improving the time, energy and spatial resolutions of detectors and exploiting Compton kinematics make it possible to reconstruct a radioactivity distribution image from scatter coincidences, thereby enhancing image quality. The number of single-scattered coincidences alone is of the same order of magnitude as that of true coincidences. In this work, a compact Compton camera module based on monolithic scintillation material is investigated as a detector ring module. The detector interactions are simulated with the Monte Carlo package GATE. The scattering angle inside the tissue is derived from the energy of the scattered photon, which results in a set of possible scattering trajectories, or a broken line of response. The Compton kinematics collimation reduces the number of solutions. Additionally, the time-of-flight information helps localize the position of the annihilation. One of the questions of this investigation is how the energy, spatial and temporal resolutions help confine the possible annihilation volume. A comparison of currently technically feasible detector resolutions (under laboratory conditions) demonstrates their influence on this annihilation volume and shows that energy and coincidence time resolution have a significant impact. Improving the latter from 400 ps to 100 ps shrinks the annihilation volume by around 50%, while improving the energy resolution in the absorber layer from 12% to 4.5% results in a reduction of 60%. The inclusion of single tissue-scattered data has the potential to increase the sensitivity of a scanner by a factor of 2 to 3. The concept can be further optimized and extended to multiple scatter coincidences and subsequently validated by a reconstruction algorithm.
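The impact of coincidence time resolution can be checked with a back-of-the-envelope relation: along the line of response, the time-of-flight confines the annihilation point to roughly Δx = c·Δt/2. This is the standard one-dimensional estimate, not the full annihilation-volume analysis of the paper:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_position_uncertainty(coincidence_time_resolution_s):
    """Localisation uncertainty along the line of response: dx = c * dt / 2."""
    return C * coincidence_time_resolution_s / 2

# The two coincidence time resolutions compared in the abstract
dx_400 = tof_position_uncertainty(400e-12)  # roughly 6 cm
dx_100 = tof_position_uncertainty(100e-12)  # roughly 1.5 cm
```

Going from 400 ps to 100 ps shrinks the one-dimensional localisation window by a factor of four, consistent with a substantial reduction of the candidate annihilation volume.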
In this study, the process chain of additive manufacturing of glass by means of powder bed fusion is presented. In order to process components additively in a reliable manner, new concepts with different solutions were developed and investigated.
Compared to established metallic materials, the properties of glass materials differ significantly. Therefore, the process control was adapted to glass in these investigations. With extensive parameter studies based on various glass powders such as borosilicate glass and quartz glass, scientifically proven results on the powder bed fusion of glass are presented. Based on the determination of the particle properties with different methods, extensive investigations were made regarding the melting behavior of glass under laser beams. Furthermore, the experimental setup was steadily expanded. In addition to the integration of coaxial temperature measurement and regulation, preheating of the building platform is of major importance. This offers the possibility of performing 3D printing at the transformation temperatures of the glass materials. To improve the components' properties, the influence of a subsequent heat treatment was also investigated.
The experience gained was incorporated into a new experimental system, which allows a much deeper exploration of the 3D printing of glass. Currently, studies are being conducted to improve surface texture, building accuracy, and geometrical capabilities using three-dimensional specimens.
The contribution traces the development of research in the field of 3D printing of glass, gives an insight into the machine and process engineering, and provides an outlook on possibilities and applications.
A new formulation to calculate the shakedown limit load of Kirchhoff plates under stochastic conditions of strength is developed. Direct structural reliability design by chance-constrained programming is based on prescribed failure probabilities; it is an effective approach of stochastic programming if it can be formulated as an equivalent deterministic optimization problem. We restrict uncertainty to the strength; the loading remains deterministic. A new formulation is derived for the case of random strength with lognormal distribution. Upper-bound and lower-bound shakedown load factors are calculated simultaneously by a dual algorithm.
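For lognormal strength, the chance constraint admits a standard deterministic equivalent via the quantile of the distribution; the following is the generic textbook form, and the paper's exact formulation may differ:

```latex
% Chance constraint on the load factor \lambda with random strength R:
P\big(\lambda\,\sigma \le R\big) \ge 1-\alpha,
\qquad R \sim \mathrm{Lognormal}(\mu, s^2)
% is equivalent to the deterministic constraint
\;\Longleftrightarrow\;
\lambda\,\sigma \le \exp\!\big(\mu + s\,\Phi^{-1}(\alpha)\big),
```

where $\Phi^{-1}$ is the standard normal quantile function. Replacing the random strength by its $\alpha$-quantile is what turns the chance-constrained program into an equivalent deterministic optimization problem.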
Project work and interdisciplinarity are integral parts of today's engineering work. It is therefore important to incorporate these aspects into the curriculum of academic engineering studies. At the faculty of Electrical Engineering and Information Technology, an interdisciplinary project is part of the bachelor program to address these topics. Since the summer term 2020, most courses changed to online mode during the Covid-19 crisis, including the interdisciplinary projects. This online mode introduces additional challenges to the execution of the projects, both for the students and for the lecturers. These challenges, but also the risks and opportunities of this kind of project course, are the subject of this paper, based on five different interdisciplinary projects.