Conference Proceeding
The number of case studies focusing on hybrid-electric aircraft is steadily increasing, since these configurations are thought to lead to lower operating costs and environmental impact than traditional aircraft. However, due to the lack of reference data from actual hybrid-electric aircraft, the design tools and results are in most cases difficult to validate. In this paper, two independently developed approaches for hybrid-electric conceptual aircraft design are compared. An existing 19-seat commuter aircraft is selected as the conventional baseline, and both design tools are used to size that aircraft. The aircraft is then re-sized taking hybrid-electric propulsion technology into consideration. This is performed for parallel, serial, and fully-electric powertrain architectures. Finally, sensitivity studies are conducted to assess the validity of the basic assumptions and approaches regarding the design of hybrid-electric aircraft. Both methods are found to predict the maximum take-off mass (MTOM) of the reference aircraft with less than 4% error. The MTOM and payload-range energy efficiency of the various (hybrid-)electric configurations are predicted with maximum differences of approximately 2% and 5%, respectively. The results of this study confirm a correct formulation and implementation of the two design methods, and the data obtained can be used by researchers to benchmark and validate their own design tools.
A German–Brazilian research project investigates sugarcane as an energy crop for anaerobic digestion and biogas production. The aim of the project is a continuous, efficient, and stable biogas process with sugarcane as the substrate. Tests are carried out in a fermenter with a volume of 10 l.
In order to optimize the space–time load (organic loading rate) and achieve a stable process, a continuous process at laboratory scale has been devised. The daily feed quantity and the harvest time of the sugarcane substrate have been varied. Analyses of the digester content were conducted twice per week to monitor the process: the ratio of volatile organic acids to the inorganic carbonate buffer (VFA/TAC), the concentration of short-chain fatty acids, the organic dry matter, the pH value, and the total nitrogen, phosphate, and ammonium concentrations were monitored. In addition, the gas quality (the percentages of CO₂, CH₄, and H₂) and the quantity of the produced gas were analyzed.
The investigations have demonstrated feasible and economical production of biogas in a continuous process with energy cane as the substrate. With a daily feeding rate of 1.68 g_VS/(l·d), the average specific gas formation rate was 0.5 m³/kg_VS. The long-term study demonstrates a surprisingly fast metabolism of short-chain fatty acids. This indicates a stable and less susceptible process compared to other substrates.
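The reported figures can be combined into a quick plausibility check. The following sketch assumes (this is our inference, not stated in the abstract) that the 10 l fermenter volume and the loading rate of 1.68 g_VS/(l·d) apply simultaneously:

```python
# Back-of-the-envelope check of the reported biogas yield.
FERMENTER_VOLUME_L = 10.0             # working volume of the lab fermenter [l]
LOADING_RATE_GVS_PER_L_D = 1.68       # daily organic loading rate [g_VS/(l*d)]
SPECIFIC_GAS_RATE_M3_PER_KGVS = 0.5   # reported average specific gas formation

daily_feed_g_vs = FERMENTER_VOLUME_L * LOADING_RATE_GVS_PER_L_D   # g_VS per day
daily_gas_m3 = (daily_feed_g_vs / 1000.0) * SPECIFIC_GAS_RATE_M3_PER_KGVS
daily_gas_l = daily_gas_m3 * 1000.0

print(f"daily feed: {daily_feed_g_vs:.1f} g_VS/d")
print(f"expected biogas: {daily_gas_l:.1f} l/d")
```

Under these assumptions the setup digests about 16.8 g_VS per day and produces roughly 8.4 l of biogas per day.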
The development of resilient technical systems is a challenging task, as the system should adapt automatically to unknown disturbances and component failures. To evaluate different approaches for deriving resilient technical system designs, we developed a modular test rig based on a pumping system. On the basis of this example system, we present metrics to quantify resilience and an algorithmic approach to improve it. This approach enables the pumping system to react automatically to unknown disturbances and to reduce the impact of component failures. In this case, the system is able to adapt its topology automatically by activating additional valves. This enables the system to still reach a minimum performance, even in case of failures. Furthermore, time-dependent disturbances are evaluated continuously, and deviations from the original state are automatically detected and anticipated for the future. This reduces the impact of future disturbances and leads to a more resilient system behaviour.
Water suppliers are faced with the great challenge of achieving high-quality and, at the same time, low-cost water supply. Since climatic and demographic influences will pose further challenges in the future, the resilience enhancement of water distribution systems (WDS), i.e. the enhancement of their capability to withstand and recover from disturbances, has been in particular focus recently. To assess the resilience of WDS, graph-theoretical metrics have been proposed. In this study, a promising approach is first derived analytically from physical principles and then applied to assess the resilience of the WDS for a district in a major German city. The topology-based resilience index computed for every consumer node takes into consideration the resistance of the best supply path as well as alternative supply paths. This resistance of a supply path is derived to be the dimensionless pressure loss in the pipes making up the path. The conducted analysis of an existing WDS provides insight into the process of actively influencing the resilience of WDS locally and globally by adding pipes. The study shows that especially pipes added close to the reservoirs and main branching points in the WDS result in a high resilience enhancement of the overall WDS.
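The idea of a path-resistance-based index can be sketched on a toy network. The formula below is our simplification, not the paper's exact metric: each pipe carries a dimensionless pressure-loss weight, and a consumer node's index combines the inverse resistance of its best supply path with a discounted contribution from the cheapest alternative path:

```python
PIPES = {  # undirected edges with dimensionless pressure loss per pipe
    ("reservoir", "a"): 0.2, ("a", "c"): 0.3,
    ("reservoir", "b"): 0.4, ("b", "c"): 0.2,
}

def neighbours(node):
    for (u, v), loss in PIPES.items():
        if u == node:
            yield v, loss
        elif v == node:
            yield u, loss

def all_path_resistances(source, target, visited=frozenset()):
    """Resistance (summed pressure loss) of every simple supply path."""
    if source == target:
        yield 0.0
        return
    for nxt, loss in neighbours(source):
        if nxt not in visited:
            for rest in all_path_resistances(nxt, target, visited | {source}):
                yield loss + rest

def resilience_index(source, target, alt_discount=0.5):
    paths = sorted(all_path_resistances(source, target))
    index = 1.0 / paths[0]                 # best supply path dominates
    if len(paths) > 1:
        index += alt_discount / paths[1]   # cheapest alternative path
    return index

print(round(resilience_index("reservoir", "c"), 3))
```

Adding a pipe near the reservoir lowers the best-path resistance for many downstream nodes at once, which mirrors the paper's finding that such pipes yield the largest resilience gains.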
The recovery of waste heat requires heat exchangers to extract it from a liquid or gaseous medium into another working medium, a refrigerant. In Organic Rankine Cycles (ORC) on combustion engines, there are two major heat sources: the exhaust gas and the water/glycol fluid from the engine's cooling circuit. A heat exchanger design must be adapted to the different requirements and conditions resulting from the heat sources, fluids, system configurations, geometric restrictions, etc. The Stacked Shell Cooler (SSC) is a new and very specific plate heat exchanger design, created by AKG, which allows a maximum degree of freedom in optimizing the heat exchange rate and reducing the related pressure drop. This optimization of heat exchanger design for ORC systems is all the more important because it reduces the energy consumption of the system and therefore maximizes the increase in overall efficiency of the engine.
A research framework for human aspects in the internet of production: an intra-company perspective
(2020)
Digitalization in the production sector aims at transferring concepts and methods from the Internet of Things (IoT) to industry and is, as a result, currently reshaping the production area. Besides technological progress, changes in work processes and organization are relevant for a successful implementation of the "Internet of Production" (IoP). Focusing on labor organization and organizational procedures makes it necessary to consider intra-company factors such as (user) acceptance, ethical issues, and ergonomics in the context of IoP approaches. In the scope of this paper, a research approach is presented that considers these aspects from an intra-company perspective by conducting studies on the shop-floor, control, and management levels of companies in the production area. Focused on four central dimensions (governance, organization, capabilities, and interfaces), this contribution presents a research framework aimed at a systematic integration and consideration of human aspects in the realization of the IoP.
The integration of product data from heterogeneous sources and manufacturers into a single catalog is often still a laborious, manual task. Small and medium-sized enterprises in particular face the challenge of integrating the data their business relies on in a timely fashion to keep their product catalog up to date, due to format specifications, low data quality, and the required expert knowledge. Additionally, modern approaches to simplifying catalog integration demand experience in machine learning, word vectorization, or semantic similarity that such enterprises do not have. Furthermore, most approaches struggle with low-quality data. We propose Attribute Label Ranking (ALR), an easy-to-understand and simple-to-adapt learning approach. ALR leverages a model trained on real-world integration data to identify the best possible schema mapping from a previously unknown, proprietary, tabular format into a standardized catalog schema. Our approach predicts multiple labels for every attribute of an input column. The whole column is taken into consideration to rank among these labels. We evaluate ALR regarding the correctness of its predictions and compare the results on real-world data to state-of-the-art approaches. Additionally, we report findings from our experiments and the limitations of our approach.
Integrated voice assistants (IVA) receive more and more attention and are widespread for entertainment use cases, such as listening to the radio or web searches. At the same time, the health care sector suffers from process inefficiencies and staff shortages, whereas the usage of IVA has the potential to improve care processes and patient satisfaction. By applying a design science approach based on a qualitative study, we identify IVA requirements, barriers, and design guidelines for the health care sector. The results reveal three important IVA functions: the ability to set appointments with care service staff, the documentation of health history, and communication with service staff. Integration, system stability, and volume control are the most important non-functional requirements. Based on the interview results and project experience, six design and implementation guidelines are derived.
The increasing digitalization brings new opportunities but also poses new challenges to modern industrial systems. Software agents are one of the key technologies towards self-optimizing factories and are currently used to address the needs of cyber-physical production systems (CPPS). However, their interplay in industrial settings needs to be understood better. This paper focuses on securing a cloud infrastructure for multi-agent systems on industrial sites. An industrial site contains multiple production processes that need to communicate with each other, and each physical resource is abstracted by a software agent. This volatile architecture needs to be managed and protected from manipulation. The proposed infrastructure presents a security concept for TCP/IP communication between agents, machines, and external networks. It is based on open-source software and tested on a three-node edge cloud controlling a model plant.
Gamification and gamified information systems (GIS) apply video game elements to encourage work on tedious, everyday tasks. Meanwhile, several research works provide evidence that gamification increases the efficiency and effectiveness of such tasks. The paper at hand investigates the health care sector, which is challenged by cost pressure and suffers from process inefficiency. We hypothesize that GIS may improve the efficiency and quality of care processes. By applying an interview-based content analysis, this paper evaluates gamification elements in an assisted living environment and provides three research contributions. First, insights into relevant GIS affordances and application examples for assisted living facilities are given. Second, assisted living experts evaluate GIS design guidelines. Both the relevant affordances and design principles comprise a basis for the development of a GIS for social workers in assisted living facilities. Third, potential adoption barriers and design guidelines for GIS in assisted living are presented.
Water distribution systems (WDS) are an essential supply infrastructure for cities. Given that climatic and demographic influences will pose further challenges for these infrastructures in the future, the resilience of water supply systems, i.e. their ability to withstand and recover from disruptions, has recently become a subject of research. To assess the resilience of a WDS, different graph-theoretical approaches exist. Next to general metrics characterizing the network topology, hydraulic and technical restrictions also have to be taken into account. In this work, the resilience of an exemplary water distribution network of a major German city is assessed, and a Mixed-Integer Program is presented that allows assessing the impact of capacity adaptations on its resilience.
To maximize the travel distances of battery electric vehicles such as cars or buses for a given amount of stored energy, their powertrains are optimized energetically. One key part within optimization models for electric powertrains is the efficiency map of the electric motor. The underlying function is usually highly nonlinear and nonconvex and leads to major challenges within a global optimization process. To enable faster solution times, one possibility is the usage of piecewise linearization techniques to approximate the nonlinear efficiency map with linear constraints. Therefore, we evaluate the influence of different piecewise linearization modeling techniques on the overall solution process and compare the solution time and accuracy for methods with and without explicitly used binary variables.
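The trade-off behind piecewise linearization can be illustrated in one dimension. The efficiency curve below is made up for illustration (the paper works with a two-dimensional torque/speed map), but the mechanism is the same: more linear segments reduce the approximation error at the cost of more variables in the optimization model:

```python
import math

def eta(x):  # hypothetical nonlinear, nonconvex efficiency curve
    return 0.9 - 0.3 * (x - 0.6) ** 2 + 0.05 * math.sin(8 * x)

def piecewise_linear(breakpoints):
    """Return a linear interpolant through (x, eta(x)) at the breakpoints."""
    pts = [(x, eta(x)) for x in breakpoints]
    def f(x):
        for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
            if x0 <= x <= x1:
                t = (x - x0) / (x1 - x0)
                return y0 + t * (y1 - y0)
        raise ValueError("x outside linearized domain")
    return f

coarse = piecewise_linear([i / 4 for i in range(5)])    # 4 segments
fine = piecewise_linear([i / 16 for i in range(17)])    # 16 segments

xs = [i / 100 for i in range(101)]
err = lambda f: max(abs(f(x) - eta(x)) for x in xs)
print(f"max error, 4 segments: {err(coarse):.4f}")
print(f"max error, 16 segments: {err(fine):.4f}")
```

In a MILP formulation each additional segment typically costs extra binary (or SOS2) variables, which is exactly the solution-time/accuracy trade-off the paper evaluates.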
The chemical industry is one of the most important industrial sectors in Germany in terms of manufacturing revenue. While thermodynamic boundary conditions often restrict the scope for reducing the energy consumption of core processes, secondary processes such as cooling offer scope for energy optimisation. In this contribution, we therefore model and optimise an existing cooling system. The technical boundary conditions of the model are provided by the operators, the German chemical company BASF SE. In order to systematically evaluate different degrees of freedom in topology and operation, we formulate and solve a Mixed-Integer Nonlinear Program (MINLP), and compare our optimisation results with the existing system.
Successful optimization requires an appropriate model of the system under consideration. When selecting a suitable level of detail, one has to consider solution quality as well as the computational and implementation effort. In this paper, we present a MINLP for a pumping system for the drinking water supply of high-rise buildings. We investigate the influence of the granularity of the underlying physical models on the solution quality. Therefore, we model the system with a varying level of detail regarding the friction losses, and conduct an experimental validation of our model on a modular test rig. Furthermore, we investigate the computational effort and show that it can be reduced by the integration of domain-specific knowledge.
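Two granularities of a friction-loss model can be contrasted in a few lines. The pipe data below are illustrative placeholders, not the paper's test-rig parameters: a coarse model with a fixed friction factor versus a finer model where the factor follows the Blasius correlation for smooth turbulent flow:

```python
import math

RHO = 998.0   # water density [kg/m^3]
NU = 1.0e-6   # kinematic viscosity of water [m^2/s]
D = 0.02      # pipe diameter [m]
L = 10.0      # pipe length [m]

def dp_constant_f(q, f=0.03):
    """Coarse model: Darcy-Weisbach with a fixed friction factor."""
    v = q / (math.pi * D ** 2 / 4)           # mean flow velocity [m/s]
    return f * (L / D) * RHO * v ** 2 / 2    # pressure loss [Pa]

def dp_blasius(q):
    """Finer model: friction factor from the Blasius correlation."""
    v = q / (math.pi * D ** 2 / 4)
    re = v * D / NU                          # Reynolds number
    f = 0.3164 / re ** 0.25                  # smooth turbulent flow only
    return f * (L / D) * RHO * v ** 2 / 2

q = 2.0e-4  # volume flow [m^3/s]
print(f"constant f: {dp_constant_f(q):.0f} Pa, Blasius: {dp_blasius(q):.0f} Pa")
```

At a single operating point the two models may differ by only a few percent, but across the whole flow range the simpler model's error grows, which is the kind of level-of-detail question the experimental validation addresses.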
As part of the transnational research project EDITOR, a parabolic trough collector system (PTC) with concrete thermal energy storage (C-TES) was installed and commissioned in Limassol, Cyprus. The system is located on the premises of the beverage manufacturer KEAN Soft Drinks Ltd. and its function is to supply process steam for the factory's pasteurisation process [1]. Depending on the factory's seasonally varying capacity for beverage production, the solar system delivers between 5 and 25 % of the total steam demand. In combination with the C-TES, the solar plant can supply process steam on demand before sunrise or after sunset. Furthermore, the C-TES compensates for fluctuations in the PTC output during the day under changing weather conditions. The parabolic trough collector as well as the control and oil handling unit are designed and manufactured by Protarget AG, Germany. The C-TES is designed and produced by CADE Soluciones de Ingeniería, S.L., Spain. The focus of this paper is the description of the operational experience with the PTC, C-TES, and boiler during the commissioning and operation phase. Additionally, innovative optimisation measures are presented.
Modeling and upscaling of a pilot bayonet-tube reactor for indirect solar mixed methane reforming
(2020)
A bayonet-tube reactor with 16.77 kW thermal power for the mixed reforming of methane using solar energy has been designed and modeled. A test bench for the experimental tests has been installed at the Synlight facility in Juelich, Germany, and has just been commissioned. This paper presents the solar-heated reactor design for combined steam and dry reforming as well as a scaled-up process simulation of a solar reforming plant for methanol production. Solar power towers are capable of providing large amounts of heat to drive highly endothermic reactions, and their integration with thermochemical processes shows a promising future. In the designed bayonet-tube reactor, the conventional burner arrangement for the combustion of natural gas has been substituted by a continuous 930 °C hot air stream, provided by means of a solar-heated air receiver, a ceramic thermal storage, and an auxiliary firing system. Inside the solar-heated reactor, the heat is transferred mainly by convection, instead of the radiation mechanism typically prevailing in fossil-based industrial reforming processes. A scaled-up solar reforming plant of 50.5 MWth was designed and simulated in Dymola® and AspenPlus®. In comparison to a fossil-based industrial reforming process of the same thermal capacity, a solar reforming plant with thermal storage promises a reduction of up to 57 % in annual natural gas consumption in regions with an annual DNI value of 2349 kWh/m². The benchmark solar reforming plant contributes to a CO₂ avoidance of approx. 79 kilotons per year. The facility can produce a nominal output of 734.4 t of synthesis gas per day and, from this, 530 t of methanol.
Control engineering theory is hard to grasp for undergraduates during the first semesters, as it deals with the dynamical behavior of systems, also in combination with control strategies, on an abstract level. Operational amplifier (OpAmp) processes are therefore reasonable and very effective systems for connecting the mathematical description with the actual system behavior. In this paper, we present an experiment for a laboratory session in which an embedded system, driven by a LabVIEW human machine interface (HMI) via USB, controls the analog circuits. With this setup we want to show the possibility of, firstly, analyzing a first-order process and, secondly, designing a P- and a PI-controller. Thereby, the theory of control engineering is always applied to the empirical results in order to break down the abstract level for the students.
In many historical centers in Europe, stone masonry is part of building aggregates, which developed when the layout of the city or village was densified. The analysis of such building aggregates is very challenging, and modelling guidelines are missing. Advances in the development of analysis methods have been impeded by the lack of experimental data on the seismic response of such aggregates. The SERA project AIMS (Seismic Testing of Adjacent Interacting Masonry Structures) provides such experimental data by testing an aggregate of two buildings under two horizontal components of dynamic excitation. With the aim of advancing the modelling of unreinforced masonry aggregates, a blind prediction competition was organized before the experimental campaign. Each group was provided with a complete set of construction drawings, material properties, the testing sequence, and the list of measurements to be reported. The applied modelling approaches span from equivalent frame models to finite element models using shell elements and discrete element models with solid elements. This paper compares the first entries with regard to the modelling approaches, the results in terms of base shear, roof displacements, and interface openings, and the failure modes.
In many historical centres in Europe, stone masonry buildings are part of building aggregates, which developed when the layout of the city or village was densified. In these aggregates, adjacent buildings share structural walls to support floors and roofs. Meanwhile, the masonry walls of the façades of adjacent buildings are often connected by dry joints, since adjacent buildings were constructed at different times. Observations after, for example, the recent Central Italy earthquakes showed that the dry joints between the building units were often the first elements to be damaged. As a result, the joints opened up, leading to pounding between the building units and a complicated interaction at floor and roof beam supports. The analysis of such building aggregates is very challenging, and modelling guidelines do not exist. Advances in the development of analysis methods have been impeded by the lack of experimental data on the seismic response of such aggregates. The objective of the project AIMS (Seismic Testing of Adjacent Interacting Masonry Structures), included in the H2020 project SERA, is to provide such experimental data by testing an aggregate of two buildings under two horizontal components of dynamic excitation. The test unit is built at half-scale, with a two-storey building and a one-storey building. The buildings share one common wall, while the façade walls are connected by dry joints. The floors are at different heights, leading to a complex dynamic response of this smallest possible building aggregate. The shake table test is conducted at the LNEC seismic testing facility. The testing sequence comprises four levels of shaking: 25%, 50%, 75% and 100% of the nominal shaking table capacity. Extensive instrumentation, including accelerometers, displacement transducers, and optical measurement systems, provides detailed information on the response of the building aggregate. Special attention is paid to the interface opening and the global response.
Masonry is used in many buildings not only for load-bearing walls, but also for non-load-bearing enclosure elements in the form of infill walls. Many studies have confirmed that infill walls interact with the surrounding reinforced concrete frame, thus changing the dynamic characteristics of the structure. Consequently, masonry infills cannot be neglected in the design process. However, although the relevant standards contain requirements for infill walls, they do not describe how these requirements are to be met concretely. In practice, this leads to infill walls being neither dimensioned nor constructed correctly. This is confirmed by recent earthquakes, which have led to enormous damage, sometimes followed by the total collapse of buildings and the loss of human lives. Recently, increasing effort has been dedicated to the approach of decoupling masonry infills from the frame elements by introducing a gap in between. This removes the interaction between infills and frame, but raises the question of the out-of-plane stability of the panel. This paper presents the results of an experimental campaign showing the out-of-plane behavior of masonry infills decoupled with the system called INODIS (Innovative Decoupled Infill System), developed within the European project INSYSME (Innovative Systems for Earthquake Resistant Masonry Enclosures in Reinforced Concrete Buildings). Full-scale specimens were subjected to different loading conditions and combinations of in-plane and out-of-plane loading. The out-of-plane capacity of masonry infills with the INODIS system is compared with that of traditionally constructed infills, showing that the INODIS system provides a reliable out-of-plane connection under various loading conditions.
In contrast, traditional infills performed very poorly in the case of combined and simultaneously applied in-plane and out-of-plane loading, experiencing brittle behavior under small in-plane drifts followed by high out-of-plane displacements. Decoupled infills with the INODIS system remained stable under out-of-plane loads, even after reaching high in-plane drifts and being damaged.
The seismic behavior of an existing unreinforced masonry building, built before the introduction of modern codes and located in the city of Ohrid, Republic of North Macedonia, is investigated in this paper. The analyzed school building is selected as an archetype in an ongoing project named "Seismic vulnerability assessment of existing masonry structures in Republic of North Macedonia (SeismoWall)". Two independent segments are included in this research: seismic hazard assessment by creating site-specific response spectra, and seismic vulnerability definition by creating a region-specific series of vulnerability curves for the chosen building typology. A reliable seismic hazard assessment for a selected region is a crucial point for performing a seismic risk analysis of a characteristic building class. To that end, a scenario-based method named the neo-deterministic approach, which incorporates knowledge of the tectonic style of the considered region, the active fault characterization, the earth crust model, and the historical seismicity, is used to calculate the response spectra for the location of the building. Variations of the rupturing process are taken into account in the nucleation point of the rupture, in the rupture velocity pattern, and in the distribution of the slip on the fault. The results from the multiple scenarios are obtained as an envelope of the response spectra computed for the site using the Maximum Credible Seismic Input (MCSI) procedure. The capacity of the selected building has been determined using nonlinear static analysis. The MINEA software (SDA Engineering) was used for verification of the structural safety of the chosen unreinforced masonry structure. By optimizing the number of samples, the computational cost required for a Monte Carlo simulation is significantly reduced, since the simulation is performed on a polynomial response surface function for the prediction of the structural response.
The performance point, found as the intersection of the capacity of the building and the spectra used, is chosen as the response parameter. Five damage limit states based on the capacity curve of the building are defined in dependence on the yield displacement and the maximum displacement. A maximum likelihood estimation procedure is utilized in the determination of the vulnerability curves. As a result, a region-specific series of vulnerability curves for the chosen type of masonry structures is defined. The probabilities of exceeding specific damage states obtained from the vulnerability curves are compared with the damage observed after the earthquake of July 2017 in the city of Ohrid, North Macedonia.
The industrial revolution, especially in the Industry 4.0 (IR4.0) era, has driven the introduction of many state-of-the-art technologies.
The automotive industry, as well as many other key industries, has been greatly influenced. The rapid development of the automotive industry in Europe has created a wide industry gap between the European Union (EU) and developing countries such as those in South East Asia (SEA). To address this situation, FH JOANNEUM, Austria, together with European partners from FH Aachen, Germany, and Politecnico di Torino, Italy, is taking the initiative to close the gap, utilizing the Erasmus+ United Capacity Building in Higher Education grant from the EU. A consortium was founded to engage in automotive technology transfer using the European framework to Malaysian, Indonesian, and Thai Higher Education Institutions (HEI) as well as automotive industries in the respective countries. This is to be achieved by establishing an Engineering Knowledge Transfer Unit (EKTU) in the respective SEA institutions, guided by the industry partners in their respective countries. The EKTU could offer updated, innovative, and high-quality training courses to increase graduates' employability and strengthen relations between HEI and the wider economic and social environment by addressing university-industry cooperation, which is the regional priority for Asia. It is expected that the capacity building initiative will improve the quality of higher education and enhance its relevance for the labor market and society in the SEA partner countries. The outcome of this project would greatly benefit the partners through a strong and complementary partnership targeting the automotive industry and enhanced larger-scale international cooperation between the European and SEA partners. It would also prepare the SEA HEI for sustainable partnerships with the automotive industry in the region as a means of income generation in the future.
We present first results from a newly developed monitoring station for a closed-loop geothermal heat pump test installation at our campus, consisting of helix coils and plate heat exchangers, as well as an ice-store system. More than 40 temperature sensors and several soil moisture sensors are distributed around the system, allowing detailed monitoring under different operating conditions. In view of the modern development of renewable energies, along with the concepts known as the Internet of Things and Industry 4.0 (the high-tech strategy of the German government), we created a user-friendly web application which connects the things (sensors) with the open network (www). Besides other advantages, this allows continuous remote monitoring of the data from the numerous sensors at an arbitrary sampling rate. Based on the recorded data, we will also present first results from numerical simulations taking into account all relevant heat transport processes. The aim is to improve the understanding of these processes and their influence on the thermal behavior of shallow geothermal systems in the unsaturated zone. This will in turn facilitate the prediction of the performance of these systems and therefore yield an improvement in their dimensioning when designing a specific shallow geothermal installation.
A new formulation to calculate the shakedown limit load of Kirchhoff plates under stochastic conditions of strength is developed. Direct structural reliability design by chance-constrained programming is based on prescribed failure probabilities, an effective approach of stochastic programming if it can be formulated as an equivalent deterministic optimization problem. We restrict uncertainty to the strength; the loading is still deterministic. A new formulation is derived for the case of random strength with lognormal distribution. Upper-bound and lower-bound shakedown load factors are calculated simultaneously by a dual algorithm.
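The deterministic-equivalent idea behind chance-constrained design can be sketched with illustrative numbers (the distribution parameters below are assumptions, not the paper's data): if the strength is lognormal, the chance constraint P(strength >= demand) >= 1 - p_f is equivalent to bounding the demand by the p_f-quantile of the strength distribution:

```python
import math
from statistics import NormalDist

mu, sigma = math.log(240.0), 0.08   # lognormal parameters of strength [MPa]
p_f = 1e-3                          # prescribed failure probability

# p_f-quantile of a lognormal variable via the standard-normal quantile
z = NormalDist().inv_cdf(p_f)       # negative for small p_f
design_strength = math.exp(mu + sigma * z)

print(f"deterministic design strength: {design_strength:.1f} MPa")
```

Replacing the random strength by this quantile turns the stochastic constraint into an ordinary deterministic one, which is what makes the dual upper/lower-bound shakedown algorithm applicable.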
During the Covid-19 pandemic, vocational colleges, universities of applied sciences, and technical universities often had to cancel laboratory sessions requiring students' attendance. These, above all, are of decisive importance in order to give learners an understanding of theory through practical work. This paper is a contribution to the implementation of distance learning for laboratory work applicable to several upper secondary educational facilities. Its aim is to provide a paradigm for hybrid teaching to analyze and control a non-linear system represented by a tank model. For this reason, we redesign a full series of laboratory sessions on the basis of various challenges. Thus, it is suitable for serving different reference levels of the European Qualifications Framework (EQF). We present problem-based learning through online platforms to compensate for the lack of a laboratory learning environment. With a task deduced from their future profession, we give students the opportunity to develop their own solutions in self-defined time intervals. A requirements specification provides the framework conditions, in terms of time and content, for students who have to deal with the challenges of the project in a self-organized manner with regard to inhomogeneous previous knowledge. If the concept of the Complete Action has been introduced in class beforehand, students will automatically apply it while executing the project. The goal is to combine students' scientific understanding with procedural knowledge. We suggest a series of remote laboratory sessions that combine a problem formulation from the subject area of Measurement, Control and Automation Technology with a project assignment that is common in industry, by providing extracts from a requirements specification.
Project work and interdisciplinarity are integral parts of today's engineering work. It is therefore important to incorporate these aspects into the curriculum of academic engineering studies. At the Faculty of Electrical Engineering and Information Technology, an interdisciplinary project is part of the bachelor program to address these topics. Since the summer term 2020, most courses changed to online mode during the Covid-19 crisis, including the interdisciplinary projects. This online mode introduces additional challenges to the execution of the projects, both for the students and for the lecturers. The challenges, but also the risks and chances of this kind of project course, are the subject of this paper, based on five different interdisciplinary projects.
In positron emission tomography, improving the time, energy and spatial resolutions of detectors and using Compton kinematics introduces the possibility to reconstruct a radioactivity distribution image from scatter coincidences, thereby enhancing image quality. The number of single scattered coincidences alone is on the same order of magnitude as that of true coincidences. In this work, a compact Compton camera module based on monolithic scintillation material is investigated as a detector ring module. The detector interactions are simulated with the Monte Carlo package GATE. The scattering angle inside the tissue is derived from the energy of the scattered photon, which results in a set of possible scattering trajectories, or broken lines of response. Compton kinematics collimation reduces the number of solutions. Additionally, the time-of-flight information helps localize the position of the annihilation. One question of this investigation is how the energy, spatial and temporal resolutions help confine the possible annihilation volume. A comparison of currently technically feasible detector resolutions (under laboratory conditions) demonstrates their influence on this annihilation volume and shows that energy and coincidence time resolution have a significant impact. An enhancement of the latter from 400 ps to 100 ps reduces the annihilation volume by around 50%, while a change of the energy resolution in the absorber layer from 12% to 4.5% results in a reduction of 60%. The inclusion of single tissue-scattered data has the potential to increase the sensitivity of a scanner by a factor of 2 to 3. The concept can be further optimized and extended for multiple scatter coincidences and subsequently validated by a reconstruction algorithm.
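The recovery of the scattering angle from the scattered photon's energy follows directly from Compton kinematics. A minimal sketch, with function name and example energies chosen for illustration (the paper itself uses full Monte Carlo simulation, not this formula alone):

```python
import math

E0 = 511.0  # keV: energy of an annihilation photon (= electron rest energy)

def scattering_angle_deg(e_scattered_kev):
    """Compton scattering angle from the scattered photon's energy:
    cos(theta) = 1 - m_e c^2 * (1/E' - 1/E0)."""
    cos_theta = 1.0 - E0 * (1.0 / e_scattered_kev - 1.0 / E0)
    return math.degrees(math.acos(cos_theta))

print(scattering_angle_deg(511.0))   # forward scattering: no energy loss
print(scattering_angle_deg(255.5))   # half the energy: right-angle scattering
```

Each measured energy thus fixes a cone of possible directions; intersecting it with the detector geometry yields the broken line of response mentioned above.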
Adapting augmented reality systems to the users’ needs using gamification and error solving methods
(2021)
Animations of virtual items in AR support systems are typically predefined and lack interactions with dynamic physical environments. AR applications rarely consider users’ preferences and do not provide customized spontaneous support under unknown situations. This research focuses on developing adaptive, error-tolerant AR systems based on directed acyclic graphs and error resolving strategies. Using this approach, users will have more freedom of choice during AR supported work, which leads to more efficient workflows. Error correction methods based on CAD models and predefined process data create individual support possibilities. The framework is implemented in the Industry 4.0 model factory at FH Aachen.
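The idea of deriving admissible next work steps from a directed acyclic graph can be sketched with Python's standard graphlib; the process and step names below are hypothetical, not from the implemented framework:

```python
from graphlib import TopologicalSorter

# Hypothetical assembly process: each step maps to its prerequisites.
process = {
    "insert_shaft": {"place_housing"},
    "mount_gear":   {"insert_shaft"},
    "attach_cover": {"place_housing"},
    "final_check":  {"mount_gear", "attach_cover"},
}

ts = TopologicalSorter(process)
ts.prepare()
rounds = []
while ts.is_active():
    ready = sorted(ts.get_ready())  # every step the user may freely choose next
    rounds.append(ready)
    ts.done(*ready)

for ready in rounds:
    print("user may choose from:", ready)
```

Whenever several steps are ready at once (here, attaching the cover and inserting the shaft), the AR system can let the user pick freely instead of enforcing one predefined animation order.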
The course Physics for Electrical Engineering is part of the curriculum of the bachelor program Electrical Engineering at the University of Applied Sciences Aachen.
Before Covid-19, the course was conducted in a rather traditional way, with all parts (lecture, exercise and lab) face-to-face. This teaching approach changed fundamentally within a week when the Covid-19 restrictions forced all courses into distance learning. All parts of the course were transformed to pure distance learning, including synchronous and asynchronous parts for the lecture, live online sessions for the exercises and self-paced labs at home. Using these methods, the course was able to impart the required knowledge and competencies. Taking into account the teacher's observations of the students' learning behaviour and engagement, the formal and informal feedback of the students and the results of the exams, the new methods are evaluated with respect to effectiveness, sustainability and suitability for competence transfer. Based on this analysis, strengths and weaknesses of the concept were identified, along with countermeasures to address the weaknesses. The analysis further leads to a sustainable teaching approach combining synchronous and asynchronous parts with self-paced learning times that can be used very flexibly for different learning scenarios: pure online, hybrid (a mixture of online and presence times) and pure presence teaching.
For typical cases of non-isolated lightning protection systems (LPS), the impulse currents are investigated which may flow through a human body directly touching a structural part of the LPS. Based on a basic LPS model with conventional down-conductors, especially the cases of external and internal steel columns and metal façades are considered and compared. Numerical simulations of the line quantities, voltages and currents, in the time domain are performed with an equivalent circuit of the entire LPS.
As a result, it can be stated that by increasing the number of conventional down-conductors and external steel columns, the threat to a human being can indeed be reduced, but not down to an acceptable limit. In the case of internal steel columns used as natural down-conductors, the threat can be reduced sufficiently, depending on the low-resistance connection of the steel columns to the lightning equipotential bonding or to the earth-termination system. If a metal façade is used, the threat to human beings touching it is usually very low, provided the façade is sufficiently interconnected and multiply connected to the lightning equipotential bonding or the earth-termination system.
A new method for improved autoclave loading within the restrictive framework of helicopter manufacturing is proposed. It is derived from experimental and numerical studies of the curing process and aims at optimizing tooling positions in the autoclave for fast and homogeneous heat-up. The mold positioning is based on two sets of information: the thermal properties of the molds, which can be determined via semi-empirical thermal simulation, and a previously determined distribution of heat transfer coefficients inside the autoclave. Finally, an experimental proof of concept is performed, showing a cycle-time reduction of up to 31% using the proposed methodology.
In this paper we investigate the use of deep neural networks for 3D object detection in uncommon, unstructured environments such as an open-pit mine. While neural networks are frequently used for object detection in regular autonomous driving applications, more unusual driving scenarios outside street traffic pose additional challenges. For one, the collection of appropriate data sets to train the networks is an issue. For another, testing the performance of trained networks often requires tailored integration with the particular domain as well. While different solutions for these problems exist in regular autonomous driving, there are only very few approaches that work equally well for special domains. We address both challenges in this work. First, we discuss two possible ways of acquiring data for training and evaluation: we evaluate a semi-automated annotation of recorded LIDAR data and we examine synthetic data generation. Using these datasets, we train and test different deep neural networks for the task of object detection. Second, we propose a possible integration of a ROS2 detector module for an autonomous driving platform. Finally, we present the performance of three state-of-the-art deep neural networks in the domain of 3D object detection on a synthetic dataset and a smaller one containing a characteristic object from an open-pit mine.
The initial idea of Robotic Process Automation (RPA) is the automation of business processes through a simple emulation of user input and output by software robots. Hence, it can be assumed that no changes to the software systems in use and the existing Enterprise Architecture (EA) are required. In this short, practical paper we discuss this assumption based on a real-life implementation project. We show that a successful RPA implementation might require architectural work during analysis, implementation, and migration. As a practical paper, we focus on exemplary lessons learned and new questions related to RPA and EA.
Digital Shadows as the aggregation, linkage and abstraction of data relating to physical objects are a central vision for the future of production. However, the majority of current research takes a technocentric approach, in which the human actors in production play a minor role. Here, the authors present an alternative anthropocentric perspective that highlights the potential and main challenges of extending the concept of Digital Shadows to humans. Following future research methodology, three prospections that illustrate use cases for Human Digital Shadows across organizational and hierarchical levels are developed: human-robot collaboration for manual work, decision support and work organization, as well as human resource management. Potentials and challenges are identified using separate SWOT analyses for the three prospections and common themes are emphasized in a concluding discussion.
With the increased interest for interstellar exploration after the discovery of exoplanets and the proposal by Breakthrough Starshot, this paper investigates the optimisation of photon-sail trajectories in Alpha Centauri. The prime objective is to find the optimal steering strategy for a photonic sail to get captured around one of the stars after a minimum-time transfer from Earth. By extending the idea of the Breakthrough Starshot project with a deceleration phase upon arrival, the mission’s scientific yield will be increased. As a secondary objective, transfer trajectories between the stars and orbit-raising manoeuvres to explore the habitable zones of the stars are investigated. All trajectories are optimised for minimum time of flight using the trajectory optimisation software InTrance. Depending on the sail technology, interstellar travel times of 77.6-18,790 years can be achieved, which presents an average improvement of 30% with respect to previous work. Still, significant technological development is required to reach and be captured in the Alpha-Centauri system in less than a century. Therefore, a fly-through mission arguably remains the only option for a first exploratory mission to Alpha Centauri, but the enticing results obtained in this work provide perspective for future long-residence missions to our closest neighbouring star system.
This paper presents laser-based powder bed fusion (L-PBF) using various glass powders (borosilicate and quartz glass). Compared to metals, these require adapted process strategies. First, the glass powders were characterized with regard to their material properties and their processability in the powder bed. This was followed by investigations of the melting behavior of the glass powders with different laser wavelengths (10.6 µm, 1070 nm). In particular, the experimental setup of a CO2 laser was adapted for the processing of glass powder. An experimental setup with integrated coaxial temperature measurement/control and an inductively heatable build platform was created. This allowed the L-PBF process to be carried out at the transformation temperature of the glasses. Furthermore, the material quality of the components was analyzed on three-dimensional test specimens with regard to porosity, roughness, density and geometrical accuracy in order to evaluate the developed L-PBF parameters and to open up possible applications.
This study investigates the influence of pressure on the temperature distribution of the micromix (MMX) hydrogen flame and on the NOx emissions. A steady computational fluid dynamics (CFD) analysis is performed by simulating a reactive flow with a detailed chemical reaction model. The numerical analysis is validated against experimental investigations. A quantitative correlation is parametrized based on the numerical results. We find that the flame initiation point shifts with increasing pressure from anchoring behind a bluff body located downstream towards anchoring upstream at the hydrogen jet. The numerical NOx emissions trend with respect to a variation of pressure is in good agreement with the experimental results. The pressure has an impact on both the residence time within the maximum temperature region and the peak temperature itself. In conclusion, the numerical model proved to be adequate for future prototype design exploration studies targeting an improved operating range.
Kawasaki Heavy Industries, LTD. (KHI) has research and development projects for a future hydrogen society. These projects comprise the complete hydrogen cycle, including the production of hydrogen gas, the refinement and liquefaction for transportation and storage, and finally the utilization in a gas turbine for electricity and heat supply. Within the development of the hydrogen gas turbine, the key technology is stable and low NOx hydrogen combustion, namely the Dry Low NOx (DLN) hydrogen combustion.
KHI, Aachen University of Applied Sciences, and B&B-AGEMA have investigated the possibility of low-NOx micro-mix hydrogen combustion and its application to an industrial gas turbine combustor. From 2014 to 2018, KHI developed a DLN hydrogen combustor for a 2 MW class industrial gas turbine with the micro-mix technology. Thereby, the ignition performance and the flame stability at equivalent rotational speed and at higher load conditions were investigated. NOx emission values were kept at about half the limit of the Air Pollution Control Law in Japan: 84 ppm (O2 15%). With this, the elementary combustor development was completed.
From May 2020, KHI started engine demonstration operation using an M1A-17 gas turbine with a co-generation system located in the hydrogen-fueled power generation plant in Kobe City, Japan. During the first engine demonstration tests, adjustments of engine starting and load control with fuel staging were investigated. On 21st May, the electrical power output reached 1,635 kW, which corresponds to 100% load (ambient temperature 20 °C), and NOx emissions of 65 ppm (O2 15%, 60% RH) were verified. Here, for the first time, a DLN hydrogen-fueled gas turbine successfully generated power and heat.
Experimental and numerical investigation on the effect of pressure on micromix hydrogen combustion
(2021)
The micromix (MMX) combustion concept is a DLN gas turbine combustion technology designed for high hydrogen content fuels. Multiple non-premixed miniaturized flames based on jet in cross-flow (JICF) are inherently safe against flashback and ensure a stable operation in various operative conditions.
The objective of this paper is to investigate the influence of pressure on the micromix flame, with a focus on the flame initiation point and the NOx emissions. A numerical model based on a steady RANS approach and the Complex Chemistry model with relevant reactions of the GRI 3.0 mechanism is used to predict the reactive flow and NOx emissions at various pressure conditions. Regarding the turbulence-chemistry interaction, the Laminar Flame Concept (LFC) and the Eddy Dissipation Concept (EDC) are compared. The numerical results are validated against experimental results that were acquired at a high-pressure test facility for industrial can-type gas turbine combustors with regard to flame initiation and NOx emissions.
The numerical approach is adequate to predict the flame initiation point and NOx emission trends. Interestingly, the flame initiation point shifts upstream as the pressure increases, whereby the flame attachment moves from anchoring behind a bluff body located downstream towards anchoring directly at the hydrogen jet. The LFC predicts this change and the NOx emissions more accurately than the EDC. The resulting NOx correlation with respect to pressure is similar to that of a non-premixed combustion configuration.
The planned coal phase-out in Germany by 2038 will lead to the dismantling of power plants with a total capacity of approx. 30 GW. A possible further use of these assets is the conversion of the power plants to thermal storage power plants; the use of these power plants on the day-ahead market is considerably limited by their technical parameters. In this paper, the influence of the technical boundary conditions on the operating times of these storage facilities is presented. For this purpose, the storage power plants were described as an MILP problem, and two price curves, one from 2015 with a relatively low renewable penetration (33 %) and one from 2020 with a high renewable energy penetration (51 %), are compared. The operating times were examined as a function of the technical parameters, and the critical influencing factors were investigated. With the price curve of 2020, the operation duration of the thermal storage power plant and the energy shifted increase by more than 25 % compared to 2015.
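The dispatch question behind such an MILP can be illustrated with a toy model: a storage plant that, each hour of the day-ahead market, either buys power to charge, sells stored energy, or idles, subject to a capacity limit and a round-trip efficiency. All prices and plant parameters below are invented for illustration; the paper's actual MILP includes thermal-plant constraints not modelled here:

```python
from functools import lru_cache

def dispatch(prices, capacity=4, rate=1, eta=0.8):
    """Maximum arbitrage profit via dynamic programming over
    (hour, storage level); charging costs the hourly price,
    discharging earns the price times the efficiency eta."""
    n = len(prices)

    @lru_cache(maxsize=None)
    def best(t, level):
        if t == n:
            return 0.0
        options = [best(t + 1, level)]                                  # idle
        if level + rate <= capacity:                                    # charge
            options.append(best(t + 1, level + rate) - prices[t] * rate)
        if level - rate >= 0:                                           # discharge
            options.append(best(t + 1, level - rate) + prices[t] * rate * eta)
        return max(options)

    return best(0, 0)

prices = [20, 15, 10, 30, 60, 55, 25, 40]  # EUR/MWh, illustrative day-ahead curve
print(round(dispatch(prices), 2))
```

Tightening the technical parameters (lower capacity or rate, worse efficiency) shrinks the profitable operating hours, which is exactly the sensitivity the paper studies on real price curves.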
Component failures within water supply systems can lead to significant performance losses. One way to address these losses is the explicit anticipation of failures within the design process. We consider a water supply system for high-rise buildings, where pump failures are the most likely failure scenarios. We explicitly consider these failures within an early design stage, which leads to a more resilient system, i.e., a system which is able to operate under a predefined number of arbitrary pump failures. We use a mathematical optimization approach to compute such a resilient design. This is based on a multi-stage model for topology optimization, which can be described by a system of nonlinear inequalities and integrality constraints. Such a model has to be both computationally tractable and accurate in representing the real-world system. We therefore validate the algorithmic solutions using experiments on a scaled test rig for high-rise buildings. The test rig allows for an arbitrary connection of pumps to reproduce scaled versions of booster station designs for high-rise buildings. We experimentally verify the applicability of the presented optimization model and that the proposed resilience properties are also fulfilled in real systems.
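The resilience property, operability under any predefined number of arbitrary pump failures, can be stated as a brute-force feasibility check. The sketch below uses a purely capacity-based feasibility test with invented pump data; the paper's model additionally captures hydraulics and network topology:

```python
from itertools import combinations

def is_k_resilient(pump_capacities, demand, k):
    """True if the demand can still be met after ANY k pumps fail
    (capacity-only check; hydraulic constraints are ignored)."""
    for failed in combinations(range(len(pump_capacities)), k):
        remaining = sum(c for i, c in enumerate(pump_capacities)
                        if i not in failed)
        if remaining < demand:
            return False
    return True

# Hypothetical booster station designs, demand in pump-units:
print(is_k_resilient([1, 1, 1, 1], demand=2, k=2))  # redundant small pumps
print(is_k_resilient([2, 2], demand=3, k=1))        # two large pumps
```

Because an optimizer must enforce this for every failure combination, resilience multiplies the number of scenarios, which is why the multi-stage formulation has to stay computationally tractable.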
Conventional EEG devices cannot be used in everyday life; hence, research over the past decade has focused on Ear-EEG for mobile, at-home monitoring in various applications ranging from emotion detection to sleep monitoring. As the area available for electrode contact in the ear is limited, the electrode size and location play a vital role for an Ear-EEG system. In this investigation, we present a quantitative study of ear electrodes with two electrode sizes at different locations in wet and dry configurations. Electrode impedance scales inversely with size and ranges from 450 kΩ to 1.29 MΩ for dry and from 22 kΩ to 42 kΩ for wet contact at 10 Hz. For either size, the location in the ear canal with the lowest impedance is ELE (Left Ear Superior), presumably due to increased contact pressure caused by the outer-ear anatomy. The results can be used to optimize signal pickup and SNR for specific applications. We demonstrate this by recording sleep spindles during sleep onset with high quality (5.27 μVrms).
One central challenge for self-driving cars is proper path planning. Once a trajectory has been found, the next challenge is to follow the precalculated path accurately and safely. The model-predictive controller (MPC) is a common approach for the lateral control of autonomous vehicles. The MPC uses a vehicle dynamics model to predict the future states of the vehicle for a given prediction horizon. However, in order to achieve real-time path control, the computational load is usually large, which leads to short prediction horizons. To deal with the computational load, the control algorithm can be parallelized on the graphics processing unit (GPU). In contrast to the widely used stochastic methods, in this paper we propose a deterministic approach based on grid search. Our approach focuses on systematically exploring the search area with different levels of granularity. To achieve this, we split the optimization algorithm into multiple iterations. The best sequence of each iteration is then used as the initial solution for the next iteration. The granularity increases with each iteration, resulting in smooth and predictable steering angle sequences. We present a novel GPU-based algorithm and show its accuracy and real-time capability in a number of real-world experiments.
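The iterative refinement idea can be sketched in one dimension: each iteration lays a grid over the current interval, keeps the best candidate, and re-grids a narrower interval around it. The function names and the quadratic cost below are illustrative stand-ins for the vehicle-model cost that the paper evaluates in parallel on the GPU:

```python
def refine_search(cost, lo, hi, points=9, iterations=4):
    """Coarse-to-fine deterministic grid search: each iteration
    narrows the search interval around the best grid candidate."""
    best = lo
    for _ in range(iterations):
        step = (hi - lo) / (points - 1)
        grid = [lo + i * step for i in range(points)]
        best = min(grid, key=cost)           # all candidates can be scored in parallel
        lo, hi = best - step, best + step    # zoom in around the best point
    return best

# Illustrative cost: squared deviation from an (unknown) optimal steering angle.
angle = refine_search(lambda a: (a - 0.123) ** 2, -0.5, 0.5)
print(round(angle, 3))
```

Because every grid point is evaluated independently, each iteration maps naturally onto parallel GPU threads, while successive zoom-ins keep the returned angles close to the previous solution, hence the smooth steering sequences.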
Communication via serial bus systems, like CAN, plays an important role in all kinds of embedded electronic and mechatronic systems. To cope with the requirements for functional safety of safety-critical applications, there is a need to enhance the safety features of the communication systems. One measure to achieve a more robust communication is to add redundant data transmission paths to the applications. In general, the communication of real-time embedded systems like automotive applications is tethered, and the redundant data transmission lines are also tethered, increasing the size of the wiring harness and the weight of the system. A radio link is preferred as a redundant transmission line, as it uses a complementary transmission medium compared to the wired solution and, in addition, reduces wiring harness size and weight. Standard wireless links like Wi-Fi or Bluetooth cannot meet the requirements for real-time capability with regard to bus communication. Using the new dual-mode radio enables a redundant transmission line meeting all requirements with regard to real-time capability, robustness and transparency for the data bus. In addition, it provides a complementary transmission medium with regard to commonly used tethered links. A CAN bus system is used to demonstrate the redundant data transfer via tethered and wireless CAN.
Multi-attribute relation extraction (MARE): simplifying the application of relation extraction
(2021)
Natural language understanding's relation extraction makes innovative and encouraging novel business concepts possible and facilitates new digitalized decision-making processes. Current approaches allow the extraction of relations with a fixed number of entities as attributes. Extracting relations with an arbitrary number of attributes requires complex systems and costly relation-trigger annotations to assist these systems. We introduce multi-attribute relation extraction (MARE) as an assumption-less problem formulation with two approaches, facilitating an explicit mapping from business use cases to the data annotations. Avoiding elaborate annotation constraints simplifies the application of relation extraction approaches. The evaluation compares our models to current state-of-the-art event extraction and binary relation extraction methods. Our approaches show improvement over these on the extraction of general multi-attribute relations.
The progress in natural language processing (NLP) research over the last years offers novel business opportunities for companies, such as automated user interaction or improved data analysis. Building sophisticated NLP applications requires dealing with modern machine learning (ML) technologies, which keeps enterprises from establishing successful NLP projects. Our experience in applied NLP research projects shows that the continuous integration of research prototypes in production-like environments with quality assurance builds trust in the software and demonstrates convenience and usefulness with regard to the business goal. We introduce STAMP 4 NLP as an iterative and incremental process model for developing NLP applications. With STAMP 4 NLP, we merge software engineering principles with best practices from data science. Instantiating our process model allows prototypes to be created efficiently by utilizing templates, conventions, and implementations, enabling developers and data scientists to focus on the business goals. Due to our iterative-incremental approach, businesses can deploy an enhanced version of the prototype to their software environment after every iteration, maximizing potential business value and trust early and avoiding the cost of successful yet never deployed experiments.
The integration of frequently changing, volatile product data from different manufacturers into a single catalog is a significant challenge for small and medium-sized e-commerce companies. They rely on timely integration of product data to present it aggregated in an online shop, without knowing the format specifications, the manufacturers' understanding of concepts, or the data quality. Furthermore, format, concepts, and data quality may change at any time. Consequently, integrating product catalogs into a single standardized catalog is often a laborious manual task. Current strategies to streamline or automate catalog integration use techniques based on machine learning, word vectorization, or semantic similarity. However, most approaches struggle with low-quality or real-world data. We propose Attribute Label Ranking (ALR) as a recommendation engine to simplify for practitioners the integration of previously unknown, proprietary tabular formats into a standardized catalog. We evaluate ALR by focusing on the impact of different neural network architectures, language features, and semantic similarity. Additionally, we consider metrics for industrial application and present the impact of ALR in production as well as its limitations.
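The core recommendation task, ranking standardized catalog labels for an unknown proprietary column, can be sketched with plain string similarity as a stand-in for ALR's learned ranking; the label set and column names below are invented for illustration:

```python
from difflib import SequenceMatcher

# Hypothetical standardized catalog attributes:
STANDARD_LABELS = ["manufacturer", "model_number", "color", "weight_kg", "price_eur"]

def rank_labels(column_name, labels=STANDARD_LABELS, top=3):
    """Rank standardized labels for a proprietary column name by
    string similarity (a toy stand-in for ALR's neural ranking)."""
    def similarity(label):
        return SequenceMatcher(None, column_name.lower(), label).ratio()
    return sorted(labels, key=similarity, reverse=True)[:top]

print(rank_labels("Weight (kg)"))   # hypothetical supplier column header
print(rank_labels("color"))
```

A practitioner would review the top-ranked suggestions instead of mapping each column by hand; the paper's contribution is replacing this naive similarity with learned models that cope with noisy, real-world headers.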
In this paper, we present the structure, the simulation and the operation of a multi-stage, hybrid solar desalination system (MSDH), powered by thermal and photovoltaic (PV) energy. The MSDH system consists of a lower basin, eight horizontal stages, a field of four flat thermal collectors with a total area of 8.4 m2, 3 kW of PV panels and solar batteries. During the day the system is heated by thermal energy, and at night by heating resistors powered by the solar batteries. These batteries are charged by the photovoltaic panels during the day. More specifically, during the day and at night, we analyse the temperature of the stages and the production of distilled water as a function of the solar irradiation intensity and the electric heating power supplied by the solar batteries. The simulations were carried out under the meteorological conditions of a winter month (February 2020), with irradiance intensities and ambient temperatures reaching 824 W/m2 and 23 °C, respectively. The results obtained show that during the day, when the system is heated by the thermal collectors, the temperature of the stages and the quantity of water produced reach 80 °C and 30 kg, respectively. At night, from 6 p.m., the system is heated by the electric energy stored in the batteries; the temperature of the stages and the quantity of water produced reach 90 °C and 104 kg, respectively, for an electric heating power of 2 kW. Moreover, when the electric power varies from 1 kW to 3 kW, the quantity of water produced varies from 92 kg to 134 kg. The analysis of these results and their comparison with conventional solar thermal desalination systems shows a clear improvement both in the heating of the stages, by 10%, and in the quantity of water produced, by a factor of 3.