Conference Proceeding
Institute
- Fachbereich Elektrotechnik und Informationstechnik (294)
- Fachbereich Energietechnik (230)
- Fachbereich Luft- und Raumfahrttechnik (204)
- Fachbereich Maschinenbau und Mechatronik (191)
- Solar-Institut Jülich (164)
- Fachbereich Medizintechnik und Technomathematik (162)
- IfB - Institut für Bioengineering (117)
- Fachbereich Bauingenieurwesen (103)
- ECSM European Center for Sustainable Mobility (62)
- Fachbereich Wirtschaftswissenschaften (57)
- MASKOR Institut für Mobile Autonome Systeme und Kognitive Robotik (45)
- INB - Institut für Nano- und Biotechnologien (44)
- Fachbereich Chemie und Biotechnologie (34)
- Nowum-Energy (22)
- Fachbereich Architektur (17)
- Kommission für Forschung und Entwicklung (17)
- ZHQ - Bereich Hochschuldidaktik und Evaluation (8)
- IaAM - Institut für angewandte Automation und Mechatronik (5)
- Fachbereich Gestaltung (3)
- Arbeitsstelle fuer Hochschuldidaktik und Studienberatung (2)
- Institut fuer Angewandte Polymerchemie (2)
- Verwaltung (2)
- Digitalisierung in Studium & Lehre (1)
- FH Aachen (1)
- Freshman Institute (1)
- Kommission für Planung und Finanzen (1)
- Senat (1)
Has Fulltext
- no (1454)
Document Type
- Conference Proceeding (1454)
Keywords
- Enterprise Architecture (5)
- Gamification (5)
- Energy storage (4)
- IO-Link (4)
- Natural language processing (4)
- Power plants (4)
- hydrogen (4)
- solar sail (4)
- Associated liquids (3)
- Concentrated solar power (3)
The tying prohibition (Kopplungsverbot) forbids making the use of a service conditional on consent that is not required for providing that service. This considerably impedes personalised advertising. However, providers can offer their service in compliance with data protection law by providing alternative, consent-free access to the same service. Such access need not necessarily take the form of a fixed fee. Rather, data protection law permits, to a certain extent, dynamic pricing that takes personal data into account.
The discovery of human induced pluripotent stem cells reprogrammed from somatic cells [1] and their ability to differentiate into cardiomyocytes (hiPSC-CMs) has provided a robust platform for drug screening [2]. Drug screenings are essential in the development of new compounds, particularly for evaluating the potential of drugs to induce life-threatening pro-arrhythmias. Between 1988 and 2009, 14 drugs were removed from the market for this reason [3]. The microelectrode array (MEA) technique is a robust tool for drug screening, as it detects the field potentials (FPs) of the entire cell culture. Furthermore, the propagation of the field potential can be examined on a per-electrode basis. To analyze MEA measurements in detail, we have developed an open-source tool.
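As a minimal illustration of the kind of per-electrode analysis such a tool performs, the sketch below detects field-potential depolarization spikes by threshold crossing. This is not the authors' actual implementation; the signal, units, and threshold are made-up placeholders.

```python
# Sketch: detect FP depolarization spikes (sharp negative deflections)
# in a single electrode trace by threshold crossing. Illustrative only.

def detect_spikes(signal, threshold):
    """Return sample indices where the trace first drops below `threshold`."""
    spikes = []
    below = False
    for i, v in enumerate(signal):
        if v < threshold and not below:
            spikes.append(i)   # first sample of a new spike
            below = True
        elif v >= threshold:
            below = False      # trace recovered; ready for next spike
    return spikes

# Synthetic trace: flat baseline with two negative spikes (µV)
trace = [0.0] * 20
trace[5] = -120.0
trace[15] = -95.0
print(detect_spikes(trace, threshold=-50.0))  # [5, 15]
```

Comparing such spike times across electrodes is one way the propagation of the field potential could be examined.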
Human induced pluripotent stem cells (hiPSCs) have been shown to be promising in disease studies and drug screenings [1]. Cardiomyocytes derived from hiPSCs have been extensively investigated using patch clamping and optical methods to compare their electromechanical behaviour relative to fully matured adult cells. Mathematical models can be used for translating findings on hiPSC-CMs to adult cells [2] or to better understand the mechanisms of various ion channels when a drug is applied [3,4]. Paci et al. (2013) [3] developed the first model of hiPSC-CMs, which they later refined based on new data. The model is based on iCells® (Fujifilm Cellular Dynamics, Inc. (FCDI), Madison WI, USA), but major differences among several cell lines, and even within a single cell line, have been found and motivate an approach for creating sample-specific models. We have developed an optimisation algorithm that parameterises the conductances (in S/F = Siemens/Farad) of the latest Paci et al. model (2018) [5] using current-voltage data obtained in individual patch-clamp experiments with an automated patch-clamp system (Patchliner, Nanion Technologies GmbH, Munich).
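The core idea of sample-specific parameterisation can be sketched with a deliberately simplified, single-channel example: fit one conductance so that an ohmic current model matches one recording's current-voltage data. The real algorithm optimises many conductances of the full Paci et al. model; the linear model, values, and closed-form fit below are illustrative assumptions.

```python
# Sketch: least-squares fit of a single conductance g (S/F) in the
# ohmic model I = g * (V - E_rev) against one IV data set.

def fit_conductance(voltages, currents, e_rev):
    """Closed-form least-squares estimate of g in I = g * (V - E_rev)."""
    num = sum(i * (v - e_rev) for v, i in zip(voltages, currents))
    den = sum((v - e_rev) ** 2 for v in voltages)
    return num / den

# Synthetic IV data generated with g = 2.0 S/F and E_rev = -85 mV
vs = [-60.0, -40.0, -20.0, 0.0, 20.0]
iv = [2.0 * (v - (-85.0)) for v in vs]
g_hat = fit_conductance(vs, iv, e_rev=-85.0)
print(round(g_hat, 3))  # 2.0
```

Fitting each cell's recording separately, in this spirit, is what yields a sample-specific model.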
Hypertension describes the pathological increase of blood pressure, which is most commonly associated with an increase of vascular wall stiffness [1]. According to the “Deutsche Bluthochdruck Liga”, this pathology shows a growing trend in our aging society. In order to find novel pharmacological and possibly personalized treatments, we present a functional approach to study the biomechanical properties of a human aortic vascular model.
In this method review, we give an overview of recent studies carried out with the CellDrum technology [2] and underline its added value compared to existing standard procedures known from the field of physiology.
The CellDrum technology described herein is a system for measuring the functional mechanical properties of cell monolayers and thin tissue constructs in vitro. Additionally, the CellDrum makes it possible to elucidate the mechanical response of cells to pharmacological drugs, toxins and vasoactive agents. Due to its highly flexible polymer support, cells can also be mechanically stimulated by steady and cyclic biaxial stretching.
Clearance of blood components and fluid drainage play a crucial role in subarachnoid hemorrhage (SAH) and post-hemorrhagic hydrocephalus (PHH). With the involvement of interstitial fluid (ISF) and cerebrospinal fluid (CSF), two pathways for the clearance of fluid and solutes in the brain are proposed. Starting at the level of capillaries, flow of ISF follows along the basement membranes in the walls of cerebral arteries out of the parenchyma to drain into the lymphatics and CSF [1]–[3]. Conversely, it has been shown that CSF enters the parenchyma between glial and pial basement membranes of penetrating arteries [4]–[6]. Nevertheless, the involved structures and the contribution of either flow pathway to the fluid balance between the subarachnoid space and the interstitial space remain controversial. Low-frequency oscillations in vascular tone are referred to as vasomotion, and the corresponding vasomotion waves are modeled as the driving force for flow of ISF out of the parenchyma [7]. Retinal vessel analysis (RVA) allows non-invasive measurement of retinal vessel vasomotion with respect to diameter changes [8]. Thus, the aim of this study is to investigate vasomotion in RVA signals of SAH and PHH patients.
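Analysing vasomotion in an RVA diameter signal amounts to finding its dominant low-frequency oscillation (vasomotion is typically reported around 0.1 Hz). The sketch below does this with a plain discrete Fourier transform on a synthetic signal; the sampling rate and waveform are assumptions, not study data.

```python
import math

# Sketch: estimate the dominant oscillation frequency of a vessel
# diameter signal via a brute-force DFT power scan. Illustrative only.

def dominant_frequency(samples, fs):
    """Return the nonzero DFT bin frequency (Hz) with maximal power."""
    n = len(samples)
    best_f, best_p = 0.0, -1.0
    for k in range(1, n // 2):
        re = sum(s * math.cos(2 * math.pi * k * t / n) for t, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * k * t / n) for t, s in enumerate(samples))
        p = re * re + im * im
        if p > best_p:
            best_f, best_p = k * fs / n, p
    return best_f

fs = 5.0  # sampling rate in Hz (assumed)
sig = [math.sin(2 * math.pi * 0.1 * t / fs) for t in range(200)]  # 0.1 Hz wave
print(dominant_frequency(sig, fs))  # 0.1
```

Comparing such spectral peaks between patient groups is one plausible way to quantify vasomotion differences.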
Recognition of subjects with mild cognitive impairment (MCI) by the use of retinal arterial vessels.
(2019)
Though weir flow has been studied for centuries, some nuances of weir flow remain poorly understood. Therefore, an international study was conducted in which 20 different hydraulics laboratories from around the world built and tested two linear weirs (quarter-round and half-round crested weirs) of common geometry. The only unconstrained dimension was the weir length, which could be adjusted to match the width of the test flume. Participating laboratories used the instrumentation and data collection methodologies of their choosing for head and discharge measurements.
The experimental results revealed significant variability in the discharge coefficients as a function of dimensionless upstream head, as well as in the head-discharge relationships (as much as 50% in some cases). Potential sources contributing to the scatter may have included head meter instrumentation, flow meter instrumentation, approach flow length (flume length upstream of the weir), head measurement location, nappe behavior, laboratory measurement methods and experimental setup, and the care and skill of the investigator (human error). Analyzing the data as a function of instrumentation types, approach length, and head measurement location did not provide any insight regarding the variations. Nappe behavior (e.g., aeration), which could be influenced by laboratory-specific conditions, varied among the datasets primarily for the half-round crested weir (about 20%).
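A common head-discharge relation for such linear weirs is Q = Cd · (2/3) · sqrt(2g) · b · H^(3/2), so each laboratory's discharge coefficient can be backed out from measured head/discharge pairs. A hedged sketch, with purely illustrative numbers (not data from the study):

```python
import math

# Sketch: discharge coefficient from one head/discharge measurement,
# using the standard rectangular-weir head-discharge relation.

def discharge_coefficient(q, b, h, g=9.81):
    """Cd from discharge q (m^3/s), weir length b (m), upstream head h (m)."""
    return q / ((2.0 / 3.0) * math.sqrt(2.0 * g) * b * h ** 1.5)

q, b, h = 0.050, 1.0, 0.10   # illustrative measurement
cd = discharge_coefficient(q, b, h)
print(round(cd, 3))
```

Plotting Cd against the dimensionless head H/P for every laboratory is essentially how the reported scatter becomes visible.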
MedicVR : Acceleration and Enhancement Techniques for Direct Volume Rendering in Virtual Reality
(2019)
In parallel to the evolution of the Planetary Defense Conference, the exploration of small solar system bodies has advanced from fast fly-bys on the sidelines of missions to the planets to the implementation of dedicated sample-return and in-situ analysis missions. Spacecraft of all sizes have landed, touch-and-go sampled, been gently beached, or impacted at hypervelocity on asteroid and comet surfaces. More have flown by close enough to image their surfaces in detail or sample their immediate environment, often as part of an extended or re-purposed mission. And finally, full-scale planetary defense experiment missions are in the making. Highly efficient low-thrust propulsion is increasingly applied beyond commercial use also in mainstream and flagship science missions, in combination with gravity assist propulsion. Another development in the same years is the growth of small spacecraft solutions, not in size but in numbers and individual capabilities. The on-going NASA OSIRIS-REx and JAXA HAYABUSA2 missions exemplify the trend as well as the upcoming NEA SCOUT mission or the landers MINERVA-II and MASCOT recently deployed on Ryugu. We outline likely as well as possible and efficient routes of continuation of all these developments towards a propellant-less and highly efficient class of spacecraft for small solar system body exploration: small spacecraft solar sails designed for carefree handling and equipped with carried landers and application modules, for all asteroid user communities – planetary science, planetary defence, and in-situ resource utilization. This projection builds on the experience gained in the development of deployable membrane structures leading up to the successful ground deployment test of a (20 m)² solar sail at DLR Cologne and in the 20 years since. It draws on the background of extensive trajectory optimization studies, the qualified technology of the DLR GOSSAMER-1 deployment demonstrator, and the MASCOT asteroid lander.
These enable ‘now-term’ as well as near-term hardware solutions, and thus responsive, fast-paced development. Mission types directly applicable to planetary defense include: single and Multiple NEA Rendezvous ((M)NR) for mitigation precursor, target monitoring and deflection follow-up tasks; sail-propelled head-on retrograde kinetic impactors (RKI) for mitigation; and deployable membrane-based methods to modify the asteroid's properties or interact with it. The DLR-ESTEC GOSSAMER Roadmap initiated studies of missions uniquely feasible with solar sails, such as Displaced L1 (DL1) space weather advance warning and monitoring and Solar Polar Orbiter (SPO) delivery, which demonstrate the capability of near-term solar sails to achieve NEA rendezvous in any kind of orbit, from Earth-coorbital to extremely inclined and even retrograde orbits. For those mission types using separable payloads, such as SPO, (M)NR and RKI, design concepts can be derived from the separable Boom Sail Deployment Units characteristic of DLR GOSSAMER solar sail technology, nanolanders like MASCOT, or microlanders like the JAXA-DLR Jupiter Trojan Asteroid Lander for the OKEANOS mission, which can shuttle from the sail to the asteroids visited and enable multiple NEA sample-return missions. These are an ideal match for solar sails in micro-spacecraft format, whose launch configurations are compatible with ESPA and ASAP secondary payload platforms.
Asteroid mining has the potential to greatly reduce the cost of in-space manufacturing, production of propellant for space transportation and consumables for crewed spacecraft, compared to launching the required resources from Earth’s deep gravity well. This paper discusses the top-level mission architecture and trajectory design for these resource-return missions, comparing high-thrust trajectories with continuous low-thrust solar-sail trajectories. This work focuses on maximizing the economic Net Present Value, which takes the time-cost of finance into account and therefore balances the returned resource mass and mission duration. The different propulsion methods will then be compared in terms of maximum economic return, sets of attainable target asteroids, and mission flexibility. This paper provides one more step towards making commercial asteroid mining an economically viable reality by integrating trajectory design, propulsion technology and economic modelling.
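The economic objective described above, maximizing Net Present Value, can be illustrated with a toy calculation in which revenue is discounted by mission duration, so a slower low-thrust trajectory that returns more mass can still win. All prices, masses, and rates below are placeholder assumptions, not results from the paper.

```python
# Sketch: NPV of a resource-return mission, discounting the revenue
# received at mission end by the time-cost of finance.

def npv(resource_mass_kg, price_per_kg, upfront_cost, duration_years, discount_rate):
    """NPV = discounted revenue at return minus upfront cost."""
    revenue = resource_mass_kg * price_per_kg
    return revenue / (1.0 + discount_rate) ** duration_years - upfront_cost

# Illustrative trade: fast high-thrust trip vs. slower solar-sail trip
fast = npv(20_000, 1_000.0, 15e6, 3.0, 0.10)   # high-thrust, 3 yr, less mass
slow = npv(35_000, 1_000.0, 15e6, 6.0, 0.10)   # solar sail, 6 yr, more mass
print(fast < slow)  # True
```

This is the balance the paper refers to: discounting penalises duration, while propulsion choice changes the attainable returned mass.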
Neue Perspektiven für die Bahn in der Produktions- und Distributionslogistik durch Prozessautomation
(2019)
Germany needs more rail transport in order to reduce CO2 emissions from traffic. Rail must become the backbone of current logistics processes, e.g. for merchant goods and e-commerce. This cannot be achieved without novel operational concepts and a transformation of the freight wagon from a “dumb piece of steel” into a modern logistics tool.
“Güterwagen 4.0” (freight wagon 4.0) denotes a communicative and cooperative freight wagon that provides the prerequisites for automating all train preparation processes, while otherwise remaining fully compatible with today's mainline operating procedures. Communication between the freight wagon and surrounding intelligent systems in the sense of an “Internet of Things” enables, among other things, highly efficient private-siding services, which open up new markets for rail freight beyond the classically rail-affine traffic and ultimately promote the shift towards sustainable freight mobility.
In many instances, freight vehicles exchange loads or information with plants that are, or will soon be, Industry 4.0 plants. The Wagon4.0 concept, developed in close cooperation with, e.g., port and mine operations, offers maximum railway operational efficiency while providing strong business cases already in the respective plant interaction. The Wagon4.0 consists of the main components power supply, data network, sensors, actuators and an operating system, the so-called WagonOS. The WagonOS is implemented in a granular, self-sufficient manner to allow basic features such as WiFi mesh and train christening in remote areas without network connection. Furthermore, the granularity of the operating system makes it possible to extend the familiar app concept to freight rail rolling stock, allowing specialised actuators to be used for certain applications, e.g. an electrical parking brake or an auxiliary drive. In order to facilitate migration to the Wagon4.0 for existing fleets, a migration concept featuring five levels of technical adaptation was developed. The present paper investigates the benefits of Wagon4.0 implementations for the particular challenges of heavy haul operations by focusing on train christening, ep-assisted braking, autonomous last mile and traction boost operation, as well as improved maintenance schedules.
A hybrid-electric propulsion system combines the advantages of fuel-based systems and battery-powered systems and offers new design freedom. To take full advantage of this technology, aircraft designers must be aware of its key differences compared to conventional, carbon-fuel-based propulsion systems. This paper gives an overview of the challenges and potential benefits associated with the design of aircraft that use hybrid-electric propulsion systems. It offers an introduction to the most popular hybrid-electric propulsion architectures and critically assesses them against the conventional and fully electric propulsion configurations. The effects on operational and design aspects are covered. Special consideration is given to the application of hybrid-electric propulsion technology to both unmanned and vertical take-off and landing aircraft. The authors conclude that electric propulsion technology has the potential to revolutionize aircraft design. However, new and innovative methods must be researched to realize the full benefit of the technology.
The results of a statistical investigation of 42 fixed-wing, small to medium sized (20 kg−1000 kg) reconnaissance unmanned air vehicles (UAVs) are presented. Regression analyses are used to identify correlations of the most relevant geometry dimensions with the UAV's maximum take-off mass. The findings allow an empirically based geometry build-up for a complete unmanned aircraft by referring to its take-off mass only. This provides a bridge between very early design stages (initial sizing) and the later determination of shapes and dimensions. The correlations might be integrated into a UAV sizing environment and allow designers to implement more sophisticated drag and weight estimation methods in this process. Additional information on correlation factors for a rough drag estimation methodology indicates how this technique can significantly enhance the accuracy of early design iterations.
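Such geometry-versus-mass correlations are commonly expressed as power laws, dim = a · MTOM^b, fitted by linear regression in log-log space. A sketch of that fitting step, with made-up data points standing in for the 42 surveyed UAVs:

```python
import math

# Sketch: fit a power law dim = a * m^b by least squares on logarithms.

def power_law_fit(masses, dims):
    """Fit log(dim) = log(a) + b*log(m); return (a, b)."""
    xs = [math.log(m) for m in masses]
    ys = [math.log(d) for d in dims]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - b * mx)
    return a, b

# Synthetic wingspans following span = 1.2 * m^0.4 exactly (illustrative)
ms = [20.0, 100.0, 300.0, 1000.0]
spans = [1.2 * m ** 0.4 for m in ms]
a, b = power_law_fit(ms, spans)
print(round(a, 3), round(b, 3))  # 1.2 0.4
```

Evaluating such fitted relations at a candidate take-off mass is what builds up the complete early-stage geometry.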
20 years after the successful ground deployment test of a (20 m)² solar sail at DLR Cologne, and in the light of the upcoming U.S. NEAscout mission, we provide an overview of the progress made since in our mission and hardware design studies as well as the hardware built in the course of our solar sail technology development. We outline the most likely and most efficient routes to develop solar sails for useful missions in science and applications, based on our developed ‘now-term’ and near-term hardware as well as the many practical and managerial lessons learned from the DLR-ESTEC Gossamer Roadmap. Mission types directly applicable to planetary defense include single and Multiple NEA Rendezvous ((M)NR) for precursor, monitoring and follow-up scenarios as well as sail-propelled head-on retrograde kinetic impactors (RKI) for mitigation. Other mission types such as the Displaced L1 (DL1) space weather advance warning and monitoring or Solar Polar Orbiter (SPO) types demonstrate the capability of near-term solar sails to achieve asteroid rendezvous in any kind of orbit, from Earth-coorbital to extremely inclined and even retrograde orbits. Some of these mission types, such as SPO, (M)NR and RKI, include separable payloads. For one-way access to the asteroid surface, nanolanders like MASCOT are an ideal match for solar sails in micro-spacecraft format, i.e. in launch configurations compatible with ESPA and ASAP secondary payload platforms. Larger landers similar to the JAXA-DLR study of a Jupiter Trojan asteroid lander for the OKEANOS mission can shuttle from the sail to the asteroids visited and enable multiple NEA sample-return missions. The high impact velocities and re-try capability achieved by the RKI mission type on a final orbit identical to the target asteroid's but retrograde to its motion enable small-spacecraft impactors to carry sufficient kinetic energy for deflection.
Effective training requires high muscle forces, potentially leading to training-induced injuries. Thus, continuous monitoring and controlling of the loadings applied to the musculoskeletal system along the motion trajectory is required. In this paper, a norm-optimal iterative learning control algorithm for robot-assisted training is developed. The algorithm aims at minimizing the external knee joint moment, which is commonly used to quantify the loading of the medial compartment. To estimate the external knee joint moment, a musculoskeletal lower extremity model is implemented in OpenSim and coupled with a model of an industrial robot and a force plate mounted at its end-effector. The algorithm is tested in simulation for patients with varus, normal and valgus alignment of the knee. The results show that the algorithm is able to minimize the external knee joint moment in all three cases and converges after less than seven iterations.
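The trial-to-trial mechanism behind such an algorithm can be sketched with a deliberately simplified scalar P-type ILC update, u_{k+1} = u_k − L·e_k, rather than the paper's norm-optimal formulation. Plant, gain, and the interpretation of the error as a stand-in for the external knee joint moment are illustrative assumptions.

```python
# Sketch: scalar iterative learning control driving an error (standing
# in for the external knee joint moment) towards a reference of zero.

def ilc_run(plant_gain, learning_gain, u0, iterations):
    """Return the absolute error after each ILC trial for y = plant_gain * u."""
    u, errors = u0, []
    for _ in range(iterations):
        e = plant_gain * u            # tracking error w.r.t. reference 0
        errors.append(abs(e))
        u = u - learning_gain * e     # learning update between trials
    return errors

errors = ilc_run(plant_gain=2.0, learning_gain=0.4, u0=1.0, iterations=7)
print(errors[0], round(errors[-1], 4))
```

With these gains the error contracts by a constant factor each trial, which mirrors the fast convergence (fewer than seven iterations) reported above.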
In many historical centers in Europe, stone masonry is part of building aggregates, which developed when the layout of the city or village was densified. The analysis of such building aggregates is very challenging, and modelling guidelines are missing. Advances in the development of analysis methods have been impeded by the lack of experimental data on the seismic response of such aggregates. The SERA project AIMS (Seismic Testing of Adjacent Interacting Masonry Structures) provides such experimental data by testing an aggregate of two buildings under two horizontal components of dynamic excitation. With the aim of advancing the modelling of unreinforced masonry aggregates, a blind prediction competition was organized before the experimental campaign. Each group was provided with a complete set of construction drawings, material properties, the testing sequence and the list of measurements to be reported. The applied modelling approaches span from equivalent frame models to finite element models using shell elements and discrete element models with solid elements. This paper compares the first entries with regard to the modelling approaches, results in terms of base shear, roof displacements and interface openings, and the failure modes.
The Coolplan-AIR project is dedicated to the further development and field validation of a calculation and design tool for the energy-efficient cooling of buildings with air-based systems. In addition to building up and refining simulation models, complete systems are measured on practical installations in the field. The focus of the project lies on the measurement, simulation and integration of purely air-based cooling technologies. In the area of cooling generation, air-to-air heat pumps, adiabatic cooling systems and open cooling towers, and VRF multi-split systems (Variable Refrigerant Flow) were measured in the field and on the HSD test stand. The component models are integrated into the Matlab/Simulink toolbox CARNOT and subsequently validated on the basis of the previously obtained measurement data.
On the one hand, the measurements allow the operating behaviour of system components to be analysed. On the other hand, the field measurements are intended to verify to what extent the simulation models, which were developed from test-stand measurements in the predecessor project, also remain valid for larger device capacities. The developed and implemented systems, consisting of a wide variety of plant models and control components, are tested and qualified so that they can be used reliably in standard design tools.
In addition, energy monitoring of a lecture hall building on the Jülich campus is carried out, which can be used, among other things, to validate the cooling load calculations in common simulation models.
A research framework for human aspects in the internet of production: an intra-company perspective
(2020)
Digitalization in the production sector aims at transferring concepts and methods from the Internet of Things (IoT) to industry and is, as a result, currently reshaping the production area. Besides technological progress, changes in work processes and organization are relevant for a successful implementation of the “Internet of Production” (IoP). Focusing on labor organization and organizational procedures highlights the need to consider intra-company factors such as (user) acceptance, ethical issues, and ergonomics in the context of IoP approaches. In the scope of this paper, a research approach is presented that considers these aspects from an intra-company perspective by conducting studies on the shop floor, control level and management level of companies in the production area. Structured around four central dimensions—governance, organization, capabilities, and interfaces—this contribution presents a research framework for the systematic integration and consideration of human aspects in the realization of the IoP.
This paper presents an approach for UAV propulsion system qualification and validation using the example of FH Aachen's 25 kg cargo UAV "PhoenAIX". Thrust and power consumption are the most important aspects of a propulsion system's layout. In the initial design phase, manufacturers' data has to be trusted, but the validation of components is an essential step in the design process. This process is presented in this paper. The vertical take-off system is designed for efficient hover; therefore, performance under static conditions is paramount. Because an octocopter layout with coaxial rotors is considered, the impact of this design choice is analyzed. Data on thrust, voltage stability, power consumption, rotational speed, and temperature development of motors and controllers are presented for different rotors. The fixed-wing propulsion system is designed for efficient cruise flight. At the same time, a certain static thrust has to be provided, as the aircraft needs to accelerate to cruise speed. As for the hover system, data on different propellers is compared. The measurements were taken under static conditions as well as for different inflow velocities, using FH Aachen's wind tunnel.
Modeling and upscaling of a pilot bayonet-tube reactor for indirect solar mixed methane reforming
(2020)
A bayonet-tube reactor with 16.77 kW thermal power for the mixed reforming of methane using solar energy has been designed and modeled. A test bench for the experimental tests has been installed at the Synlight facility in Juelich, Germany, and has just been commissioned. This paper presents the solar-heated reactor design for combined steam and dry reforming, as well as a scaled-up process simulation of a solar reforming plant for methanol production. Solar power towers are capable of providing large amounts of heat to drive highly endothermic reactions, and their integration with thermochemical processes shows a promising future. In the designed bayonet-tube reactor, the conventional burner arrangement for the combustion of natural gas has been substituted by a continuous 930 °C hot air stream, provided by means of a solar-heated air receiver, a ceramic thermal storage and an auxiliary firing system. Inside the solar-heated reactor, the heat is transferred mainly by convection, instead of the radiation mechanism typically prevailing in fossil-based industrial reforming processes. A scaled-up solar reforming plant of 50.5 MWth was designed and simulated in Dymola® and AspenPlus®. In comparison to a fossil-based industrial reforming process of the same thermal capacity, a solar reforming plant with thermal storage promises a reduction of up to 57 % in annual natural gas consumption in regions with an annual DNI value of 2349 kWh/m². The benchmark solar reforming plant contributes to a CO2 avoidance of approx. 79 kilotons per year. The facility can produce a nominal output of 734.4 t of synthesis gas per day and, from this, 530 t of methanol per day.
The paper presents the derivation of a new equivalent skin friction coefficient for estimating the parasitic drag of short-to-medium-range fixed-wing unmanned aircraft. The new coefficient is derived from an aerodynamic analysis of ten different unmanned aircraft used on surveillance, reconnaissance, and search and rescue missions. The aircraft are simulated using a validated unsteady Reynolds-averaged Navier-Stokes approach. The UAV's parasitic drag is significantly influenced by the presence of miscellaneous components like fixed landing gears or electro-optical sensor turrets. These components are responsible for almost half of an unmanned aircraft's total parasitic drag. The new equivalent skin friction coefficient accounts for these effects and is significantly higher compared to other aircraft categories. It is used to initially size an unmanned aircraft for a typical reconnaissance mission. The improved parasitic drag estimation yields a much heavier unmanned aircraft when compared to the sizing results using available drag data of manned aircraft.
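The equivalent-skin-friction method referenced above estimates zero-lift drag as D0 = Cfe · q · Swet, i.e. CD0 = Cfe · Swet / Sref. A sketch of applying such a coefficient in initial sizing; the Cfe value and areas below are illustrative, not the coefficient derived in the paper.

```python
# Sketch: zero-lift (parasitic) drag coefficient from an equivalent
# skin friction coefficient, wetted area and reference wing area.

def parasite_drag_coefficient(cfe, s_wet, s_ref):
    """CD0 = Cfe * Swet / Sref (equivalent-skin-friction method)."""
    return cfe * s_wet / s_ref

cd0 = parasite_drag_coefficient(cfe=0.0075, s_wet=12.0, s_ref=3.0)
print(round(cd0, 4))  # 0.03
```

A higher Cfe for this UAV class feeds directly into a higher CD0, which is why the sizing loop converges to a heavier aircraft than manned-aircraft drag data would suggest.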
Comparative assessment of parallel-hybrid-electric propulsion systems for four different aircraft
(2020)
As battery technologies advance, electric propulsion concepts are on the edge of disrupting aviation markets. However, until electric energy storage systems are ready to allow fully electric aircraft, the combination of combustion engine and electric motor as a hybrid-electric propulsion system seems to be a promising intermediate solution. Consequently, the design space for future aircraft is expanded considerably, as serial-hybrid-, parallel-hybrid-, fully-electric, and conventional propulsion systems must all be considered. While the best propulsion system depends on a multitude of requirements and considerations, trends can be observed for certain types of aircraft and certain types of missions. This paper provides insight into some factors that drive a new design towards either conventional or hybrid propulsion systems. General aviation aircraft, VTOL air taxis, transport aircraft, and UAVs are chosen as case studies. Typical missions for each class are considered, and the aircraft are analyzed regarding their take-off mass and primary energy consumption. For these case studies, a high-level approach is chosen, using an initial sizing methodology. Results indicate that hybrid-electric propulsion systems should be considered if the propulsion system is sized by short-duration power constraints (e.g. take-off, climb). However, if the propulsion system is sized by a continuous power requirement (e.g. cruise), hybrid-electric systems offer hardly any benefit.
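The conclusion above can be illustrated with a toy parallel-hybrid sizing split: the combustion engine is sized for the continuous (cruise) power while the electric motor covers the short take-off peak. All power figures are illustrative assumptions, not results from the case studies.

```python
# Sketch: component sizing in a parallel hybrid where the engine covers
# continuous power and the motor covers the short-duration peak excess.

def parallel_hybrid_split(p_takeoff, p_cruise):
    """Return (engine power, motor power) in kW for a parallel hybrid."""
    p_engine = p_cruise
    p_motor = max(0.0, p_takeoff - p_cruise)
    return p_engine, p_motor

# Peak-sized mission: hybridisation halves the required engine power ...
print(parallel_hybrid_split(p_takeoff=120.0, p_cruise=60.0))  # (60.0, 60.0)
# ... cruise-sized mission: the motor has almost nothing to contribute
print(parallel_hybrid_split(p_takeoff=65.0, p_cruise=60.0))   # (60.0, 5.0)
```

The second case mirrors the finding that hybrid-electric systems offer hardly any benefit when a continuous power requirement dominates the sizing.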
The field of Cognitive Robotics aims at intelligent decision making of autonomous robots. It has matured quite a bit over the last 25 or so years. That is, a number of high-level control languages and architectures have emerged from the field. One of these is the action language GOLOG. GOLOG has been used as a high-level control language in a rather large number of applications, ranging from intelligent service robots to soccer robots. For the lower-level robot software, the Robot Operating System (ROS) has been around for more than a decade now and has developed into the standard middleware for robot applications. ROS provides a large number of packages for standard tasks in robotics like localisation, navigation, and object recognition. Interestingly enough, only little work within ROS has gone into the high-level control of robots. In this paper, we describe our approach to marry the GOLOG action language with ROS. In particular, we present our architecture on integrating golog++, which is based on the GOLOG dialect Readylog, with the Robot Operating System. With an example application on the Pepper service robot, we show how primitive actions can be easily mapped to the ROS ActionLib framework and present our control architecture in detail.
Seismic behavior of an existing unreinforced masonry building built before modern seismic codes, located in the City of Ohrid, Republic of North Macedonia, has been investigated in this paper. The analyzed school building is selected as an archetype in an ongoing project named “Seismic vulnerability assessment of existing masonry structures in Republic of North Macedonia (SeismoWall)”. Two independent segments were included in this research: seismic hazard assessment by creating site-specific response spectra, and seismic vulnerability definition by creating a region-specific series of vulnerability curves for the chosen building typology. A reliable seismic hazard assessment for a selected region is a crucial point for performing a seismic risk analysis of a characteristic building class. To this end, a scenario-based method named the Neo-Deterministic approach, which incorporates the tectonic style of the considered region, the active fault characterization, the earth crust model and the historical seismicity, is used for calculation of the response spectra for the location of the building. Variations of the rupturing process are taken into account in the nucleation point of the rupture, in the rupture velocity pattern and in the distribution of the slip on the fault. The results from the multiple scenarios are obtained as an envelope of the response spectra computed for the site using the Maximum Credible Seismic Input (MCSI) procedure. The capacity of the selected building has been determined by using nonlinear static analysis. MINEA software (SDA Engineering) was used for verification of the structural safety of the chosen unreinforced masonry structure. To optimize the number of samples, the computational cost required in a Monte Carlo simulation is significantly reduced, since the simulation is performed on a polynomial response surface function for prediction of the structural response.
The performance point, found as the intersection of the capacity curve of the building and the spectra used, is chosen as the response parameter. Five damage limit states are defined based on the capacity curve of the building, in dependence on the yield displacement and the maximum displacement. A maximum likelihood estimation procedure is utilized to determine the vulnerability curves. As a result, a region-specific series of vulnerability curves for the chosen type of masonry structures is defined. The probabilities of exceeding specific damage states obtained from the vulnerability curves are compared with the damage observed after the July 2017 earthquake in the City of Ohrid, North Macedonia.
Masonry is used in many buildings not only for load-bearing walls, but also for non-load-bearing enclosure elements in the form of infill walls. Many studies have confirmed that infill walls interact with the surrounding reinforced concrete frame, thus changing the dynamic characteristics of the structure. Consequently, masonry infills cannot be neglected in the design process. However, although the relevant standards contain requirements for infill walls, they do not describe how these requirements are to be met in concrete terms. In practice, this leads to infill walls being neither dimensioned nor constructed correctly. This is confirmed by recent earthquakes, which have led to enormous damage, sometimes followed by the total collapse of buildings and the loss of human lives. Recently, increasing effort has been dedicated to decoupling masonry infills from the frame elements by introducing a gap in between. This removes the interaction between infills and frame, but raises the question of the out-of-plane stability of the panel. This paper presents the results of an experimental campaign on the out-of-plane behavior of masonry infills decoupled with the system called INODIS (Innovative Decoupled Infill System), developed within the European project INSYSME (Innovative Systems for Earthquake Resistant Masonry Enclosures in Reinforced Concrete Buildings). Full-scale specimens were subjected to different loading conditions and combinations of in-plane and out-of-plane loading. The out-of-plane capacity of masonry infills with the INODIS system is compared with that of traditionally constructed infills, showing that the INODIS system provides a reliable out-of-plane connection under various loading conditions.
In contrast, traditional infills performed very poorly under combined, simultaneously applied in-plane and out-of-plane loading, exhibiting brittle behavior at small in-plane drifts followed by large out-of-plane displacements. Infills decoupled with the INODIS system remained stable under out-of-plane loads, even after reaching high in-plane drifts and sustaining damage.
Gamification and gamified information systems (GIS) apply video game elements to encourage work on tedious everyday tasks, and several research works provide evidence that gamification increases the efficiency and effectiveness of such tasks. The paper at hand investigates the health care sector, which is challenged by cost pressure and suffers from low process efficiency. We hypothesize that GIS may improve the efficiency and quality of care processes. Applying an interview-based content analysis, we evaluate gamification elements in an assisted living environment and provide three research contributions. First, insights into relevant GIS affordances and application examples for assisted living facilities are given. Second, assisted living experts evaluate GIS design guidelines. Both the relevant affordances and the design principles form a basis for the development of a GIS for social workers in assisted living facilities. Third, potential adoption barriers and design guidelines for GIS in assisted living are presented.
This paper presents an aerodynamic CFD analysis of a winged spaceplane geometry based on the Japanese Space Walker proposal. StarCCM was used to calculate aerodynamic coefficients for a typical space flight trajectory including super-, trans-, and subsonic Mach numbers and two angles of attack. Since solving the RANS equations in such supersonic flight regimes is still computationally expensive, inviscid Euler simulations can in principle lead to a significant reduction in computational effort. Their impact on the accuracy of the aerodynamic properties is analysed by comparing both methods for different flight regimes up to a Mach number of 4.
The production of dispatchable renewable energy will be one of the key factors of the future energy supply. Concentrated solar power (CSP) plants operated with molten salt as heat transfer and storage medium are one way to meet this challenge. Due to the high concentration factor of solar tower technology, the maximum process temperature can be increased further, which ultimately decreases the levelized cost of electricity (LCOE) of the technology. The aim of this work is the development of an improved tubular molten salt receiver for the next generation of molten salt solar tower plants. The receiver is designed for receiver outlet temperatures of up to 600 °C. Together with a complete molten salt system, the receiver will be integrated into the Multi-Focus-Tower (MFT) in Jülich (Germany). The paper describes the basic engineering of the receiver, the molten salt tower system, and a laboratory corrosion setup.
We compare four different algorithms for automatically estimating the muscle fascicle angle from ultrasonic images: the vesselness filter, the Radon transform, the projection profile method, and the gray level co-occurrence matrix (GLCM). The algorithm results are compared to ground truth data generated by three different experts on 425 image frames from two videos recorded during different types of motion. The best agreement with the ground truth data was achieved by a combination of pre-processing with a vesselness filter and measuring the angle with the projection profile method. The robustness of the estimation is increased by applying the algorithms to subregions with high gradients and performing a LOESS fit through these estimates.
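To illustrate the idea behind the projection profile method (a toy sketch under simplified assumptions, not the authors' implementation): foreground pixels are projected onto axes at candidate angles, and the angle whose projection profile is most sharply peaked is selected.

```python
import numpy as np

def projection_profile_angle(points, angles):
    """Estimate the dominant line orientation of a point cloud.
    points: (N, 2) array of (x, y) foreground pixel coordinates.
    Returns the candidate angle (degrees) whose projection profile
    has the highest variance, i.e. is most sharply peaked."""
    best_angle, best_score = None, -1.0
    for a in angles:
        t = np.deg2rad(a)
        # signed distance of each point from a line through the
        # origin at the candidate angle
        proj = points[:, 0] * np.sin(t) - points[:, 1] * np.cos(t)
        hist, _ = np.histogram(proj, bins=64)
        score = hist.var()
        if score > best_score:
            best_angle, best_score = a, score
    return best_angle

# synthetic fascicle-like line at 30 degrees with a little thickness
rng = np.random.default_rng(0)
t = np.linspace(0.0, 100.0, 500)
pts = np.stack([t * np.cos(np.deg2rad(30)),
                t * np.sin(np.deg2rad(30))], axis=1)
pts += rng.normal(scale=0.5, size=pts.shape)
angle = projection_profile_angle(pts, np.arange(0, 180, 1))
```

On real ultrasound frames, this search would be preceded by the vesselness filtering and gradient-based subregion selection described in the abstract.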
With the many achievements of machine learning in the past years, it is likely that the sub-area of deep learning will continue to deliver major technological breakthroughs [1]. In order to achieve the best results, it is important to know the various deep learning frameworks and their respective properties. This paper provides a comparative overview of some of the most popular frameworks. First, the comparison methods and criteria are introduced and described with a focus on computer vision applications: features and uses are examined by evaluating papers and articles, while adoption and popularity are determined by analyzing a data science study. Then, the frameworks TensorFlow, Keras, PyTorch, and Caffe are compared based on the previously described criteria to highlight their properties and differences. Advantages and disadvantages are compared, enabling researchers and developers to choose a framework according to their specific needs.
In addition to very high safety and reliability requirements, the design of internal combustion engines (ICE) in aviation focuses on economic efficiency. The objective must be to design the aircraft powertrain optimized for a specific flight mission with respect to fuel consumption and specific engine power. Against this background, expert tools provide valuable decision-making assistance for the customer. In this paper, a mathematical calculation model for the fuel consumption of aircraft ICE is presented. This model enables the derivation of fuel consumption maps for different engine configurations. Depending on the flight conditions and based on these maps, the current and the integrated fuel consumption for freely definable flight missions is calculated. For this purpose, an interpolation method is used that has been optimized for accuracy and calculation time. The mission boundary conditions, flight altitude and power requirement of the ICE, form the basis for this calculation. The mathematical fuel consumption model is embedded in a parent program, which presents the simulated fuel consumption by means of an example flight mission for a representative airplane. The focus of the work is therefore on reproducing exact consumption data for flight operations. Using the empirical approaches according to Gagg-Farrar [1], the power and fuel consumption as functions of the flight altitude are determined. To substantiate these approaches, a 1-D ICE model based on the multi-physical simulation tool GT-Suite® has been created. This 1-D engine model offers the possibility to analyze the filling and gas exchange processes, the internal combustion, as well as heat and friction losses of an ICE under altitude environmental conditions. Performance measurements on a dynamometer at sea level for a naturally aspirated ICE with a displacement of 1211 ccm used in an aviation aircraft were carried out to validate the 1-D ICE model. To check the plausibility of the empirical approaches with respect to the fuel consumption and performance adjustment for the flight altitude, an analysis of the ICE efficiency chain of the 1-D engine model is carried out. In addition, a comparison of literature and manufacturer data with the simulation results is presented.
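As a sketch of the kind of altitude correction involved (assuming the commonly cited Gagg-Farrar form P/P0 = 1.132·σ − 0.132 with the ISA density ratio σ; the paper's exact coefficients and atmosphere model may differ):

```python
def density_ratio_isa(h_m):
    """ISA troposphere density ratio sigma = rho/rho0, valid up to 11 km."""
    T0, L, g, R = 288.15, 0.0065, 9.80665, 287.058
    T = T0 - L * h_m
    return (T / T0) ** (g / (L * R) - 1.0)

def gagg_farrar_power_ratio(h_m):
    """Commonly cited Gagg-Farrar altitude power lapse for naturally
    aspirated piston engines: P/P0 = 1.132*sigma - 0.132."""
    sigma = density_ratio_isa(h_m)
    return 1.132 * sigma - 0.132

# power fraction at sea level and at two cruise-typical altitudes
ratios = [gagg_farrar_power_ratio(h) for h in (0.0, 2000.0, 4000.0)]
```

The fuel consumption maps described in the abstract would then be scaled consistently with this power lapse before interpolation over the mission profile.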
The development of resilient technical systems is a challenging task, as the system should adapt automatically to unknown disturbances and component failures. To evaluate different approaches for deriving resilient technical system designs, we developed a modular test rig based on a pumping system. On the basis of this example system, we present metrics to quantify resilience and an algorithmic approach to improve it. This approach enables the pumping system to react automatically to unknown disturbances and to reduce the impact of component failures. In this case, the system is able to adapt its topology automatically by activating additional valves, which allows it to still reach a minimum performance even in the case of failures. Furthermore, time-dependent disturbances are evaluated continuously, and deviations from the original state are automatically detected and anticipated for the future. This reduces the impact of future disturbances and leads to a more resilient system behaviour.
Water suppliers face the great challenge of achieving a high-quality and, at the same time, low-cost water supply. Since climatic and demographic influences will pose further challenges in the future, the resilience enhancement of water distribution systems (WDS), i.e. the enhancement of their capability to withstand and recover from disturbances, has recently been a particular focus. To assess the resilience of WDS, graph-theoretical metrics have been proposed. In this study, a promising approach is first derived analytically from physical principles and then applied to assess the resilience of the WDS of a district in a major German city. The topology-based resilience index, computed for every consumer node, takes into consideration the resistance of the best supply path as well as of alternative supply paths. The resistance of a supply path is derived as the dimensionless pressure loss in the pipes making up the path. The analysis of an existing WDS provides insight into how the resilience of WDS can be actively influenced, locally and globally, by adding pipes. The study shows that especially pipes added close to the reservoirs and to the main branching points of the WDS result in a high resilience enhancement of the overall WDS.
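A minimal sketch of the best-supply-path ingredient of such an index (the toy network, its resistances, and the shortest-path search are illustrative; the paper's actual index additionally weights alternative supply paths):

```python
import heapq

def best_path_resistance(edges, source, target):
    """Dijkstra search on an undirected graph whose edge weights are
    dimensionless pipe resistances; returns the total resistance of
    the best (lowest-loss) supply path from source to target."""
    graph = {}
    for u, v, r in edges:
        graph.setdefault(u, []).append((v, r))
        graph.setdefault(v, []).append((u, r))
    dist, heap = {source: 0.0}, [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == target:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nxt, r in graph.get(node, []):
            nd = d + r
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(heap, (nd, nxt))
    return float("inf")

# toy network: reservoir "R", junctions "A"/"B", consumer node "C"
pipes = [("R", "A", 1.0), ("A", "C", 2.0), ("R", "B", 2.0), ("B", "C", 2.0)]
resistance = best_path_resistance(pipes, "R", "C")  # best path is R-A-C
```

Adding a pipe near the reservoir lowers these path resistances for many downstream consumer nodes at once, which is consistent with the study's observation about where reinforcement pays off most.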
The integration of product data from heterogeneous sources and manufacturers into a single catalog is often still a laborious, manual task. Especially small and medium-sized enterprises face the challenge of integrating the data their business relies on in a timely manner to keep their product catalog up to date, due to format specifications, low data quality, and the required expert knowledge. Additionally, modern approaches to simplifying catalog integration demand experience in machine learning, word vectorization, or semantic similarity that such enterprises do not have. Furthermore, most approaches struggle with low-quality data. We propose Attribute Label Ranking (ALR), an easy-to-understand and simple-to-adapt learning approach. ALR leverages a model trained on real-world integration data to identify the best possible mapping of a previously unknown, proprietary, tabular schema into a standardized catalog schema. Our approach predicts multiple labels for every attribute of an input column and takes the whole column into consideration to rank these labels. We evaluate ALR regarding the correctness of its predictions and compare the results on real-world data to state-of-the-art approaches. Additionally, we report findings from our experiments and limitations of our approach.
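A simplified, hypothetical sketch of the column-level ranking idea (the per-cell scores and candidate labels are placeholders, not ALR's trained model): scores predicted for individual cells are aggregated over the whole column before ranking, so a single noisy cell cannot flip the mapping.

```python
from collections import defaultdict

def rank_column_labels(cell_scores):
    """Aggregate per-cell label scores over a whole column and rank.
    cell_scores: list of dicts mapping candidate schema labels to
    scores produced by some per-cell model (placeholder here)."""
    totals = defaultdict(float)
    for scores in cell_scores:
        for label, s in scores.items():
            totals[label] += s
    return sorted(totals, key=totals.get, reverse=True)

# three cells of one input column, scored against two catalog labels
column = [
    {"ean": 0.7, "price": 0.2},
    {"ean": 0.6, "price": 0.3},
    {"ean": 0.1, "price": 0.8},  # one noisy cell does not flip the rank
]
ranking = rank_column_labels(column)  # totals: ean 1.4, price 1.3
```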
Water distribution systems (WDS) are an essential supply infrastructure for cities. Given that climatic and demographic influences will pose further challenges for these infrastructures in the future, the resilience of water supply systems, i.e. their ability to withstand and recover from disruptions, has recently become a subject of research. To assess the resilience of a WDS, different graph-theoretical approaches exist. Next to general metrics characterizing the network topology, hydraulic and technical restrictions also have to be taken into account. In this work, the resilience of an exemplary water distribution network of a major German city is assessed, and a Mixed-Integer Program is presented that allows assessing the impact of capacity adaptations on its resilience.
To maximize the travel distance of battery electric vehicles such as cars or buses for a given amount of stored energy, their powertrains are optimized energetically. One key part of optimization models for electric powertrains is the efficiency map of the electric motor. The underlying function is usually highly nonlinear and nonconvex and poses major challenges for a global optimization process. One way to enable faster solution times is to use piecewise linearization techniques to approximate the nonlinear efficiency map with linear constraints. We therefore evaluate the influence of different piecewise linearization modeling techniques on the overall solution process and compare solution time and accuracy for methods with and without explicitly used binary variables.
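To illustrate the trade-off such techniques manage (a toy 1-D sketch with a made-up efficiency curve, not the paper's motor map): more breakpoints shrink the approximation error, but in a MIP each additional interval typically costs extra binary or SOS2 variables and therefore solution time.

```python
import numpy as np

def pwl_max_error(f, lo, hi, n_breakpoints):
    """Maximum absolute error of a piecewise linear interpolant of f
    built on n_breakpoints equidistant breakpoints over [lo, hi]."""
    xb = np.linspace(lo, hi, n_breakpoints)
    yb = f(xb)
    xs = np.linspace(lo, hi, 5001)
    return float(np.max(np.abs(np.interp(xs, xb, yb) - f(xs))))

# made-up smooth, nonconvex "efficiency over normalized power" curve
eta = lambda p: 0.95 * np.sin(np.pi * p) ** 0.5

errors = {n: pwl_max_error(eta, 0.05, 0.95, n) for n in (3, 5, 9, 17)}
```

The formulations compared in the paper (with and without explicit binaries) encode exactly this breakpoint structure as linear constraints inside the optimization model.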
The chemical industry is one of the most important industrial sectors in Germany in terms of manufacturing revenue. While thermodynamic boundary conditions often restrict the scope for reducing the energy consumption of core processes, secondary processes such as cooling offer room for energy optimisation. In this contribution, we therefore model and optimise an existing cooling system. The technical boundary conditions of the model are provided by the operator, the German chemical company BASF SE. In order to systematically evaluate different degrees of freedom in topology and operation, we formulate and solve a Mixed-Integer Nonlinear Program (MINLP) and compare our optimisation results with the existing system.
Successful optimization requires an appropriate model of the system under consideration. When selecting a suitable level of detail, one has to weigh solution quality against the computational and implementation effort. In this paper, we present an MINLP for a pumping system for the drinking water supply of high-rise buildings. We investigate the influence of the granularity of the underlying physical models on the solution quality. To this end, we model the system with varying levels of detail regarding the friction losses and conduct an experimental validation of our model on a modular test rig. Furthermore, we investigate the computational effort and show that it can be reduced by integrating domain-specific knowledge.
Control engineering theory is hard to grasp for undergraduates during the first semesters, as it deals with the dynamic behavior of systems, also in combination with control strategies, on an abstract level. Operational amplifier (OpAmp) circuits are therefore reasonable and very effective systems for connecting the mathematical description with the actual system behavior. In this paper, we present an experiment for a laboratory session in which an embedded system, driven by a LabVIEW human machine interface (HMI) via USB, controls the analog circuits. With this setup we show, firstly, how to analyze a first-order process and, secondly, how to design P- and PI-controllers. Throughout, the theory of control engineering is applied to the empirical results in order to break down the abstract level for the students.
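The classroom point (a P-controller leaves a steady-state offset on a first-order process, while a PI-controller removes it) can be sketched numerically; the plant parameters and gains below are arbitrary illustration values, not those of the laboratory circuit:

```python
def simulate(kp, ki, K=2.0, tau=1.0, setpoint=1.0, dt=1e-3, t_end=20.0):
    """Forward-Euler simulation of a first-order plant
    tau * dx/dt = -x + K*u under PI control u = kp*e + ki*integral(e).
    Setting ki = 0 gives a pure P-controller."""
    x, integral = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        e = setpoint - x
        integral += e * dt
        u = kp * e + ki * integral
        x += dt * (-x + K * u) / tau
    return x

x_p = simulate(kp=1.0, ki=0.0)   # P only: settles at K*kp/(1+K*kp) = 2/3
x_pi = simulate(kp=1.0, ki=1.0)  # PI: steady-state error vanishes
```

In the laboratory, the same comparison is observed on the OpAmp circuit via the LabVIEW HMI instead of in simulation.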
The paper presents a method for the quantitative assessment of choroidal blood flow using an OCT-A system. The developed technique for processing OCT-A scans is divided into two stages. In the first stage, the boundaries in the selected portion are identified. In the second stage, each pixel mark on the selected layer is represented as a volume unit, a voxel, which characterizes a region of moving blood. Three geometric shapes were considered to represent the voxel. Using the example of one OCT-A scan, this work presents a quantitative assessment of the blood flow index. A possible modification of the two-stage algorithm based on voxel scan processing is presented.
The recovery of waste heat requires heat exchangers to extract it from a liquid or gaseous medium into another working medium, a refrigerant. In Organic Rankine Cycles (ORC) on combustion engines, there are two major heat sources: the exhaust gas and the water/glycol fluid from the engine's cooling circuit. A heat exchanger design must be adapted to the different requirements and conditions resulting from the heat sources, fluids, system configurations, geometric restrictions, etc. The Stacked Shell Cooler (SSC) is a new and very specific design of a plate heat exchanger, created by AKG, which allows the heat exchange rate to be optimized and the related pressure drop to be reduced with a maximum degree of freedom. Such optimization of the heat exchanger design is even more important in ORC systems, because it reduces the energy consumption of the system and therefore maximizes the increase in overall efficiency of the engine.
Integrated voice assistants (IVA) are receiving more and more attention and are widespread for entertainment use cases such as listening to the radio or web searches. At the same time, the health care segment suffers from process inefficiency and staff shortages, whereas the usage of IVA has the potential to improve care processes and patient satisfaction. Applying a design science approach based on a qualitative study, we identify IVA requirements, barriers, and design guidelines for the health care sector. The results reveal three important IVA functions: the ability to set appointments with care service staff, the documentation of health history, and communication with service staff. Integration, system stability, and volume control are the most important non-functional requirements. Based on the interview results and project experiences, six design and implementation guidelines are derived.
Reinforced concrete (RC) structures with masonry infills are widely used for several types of buildings all over the world. However, it is well known that traditional masonry infills constructed in rigid contact with the surrounding RC frame performed rather poorly in past earthquakes. Masonry infills showed severe in-plane damage and failed in many cases under out-of-plane seismic loading. As the undesired interactions between frames and infills change the load transfer at building level, complete collapses of buildings were observed. A possible solution is uncoupling the masonry infills from the frame to reduce the infill contribution activated by the frame deformation under horizontal loading. The paper presents numerical simulations of RC frames equipped with the innovative decoupling system INODIS. The system was developed within the European project INSYSME and allows an effective uncoupling of frame and infill. The simulations are carried out with a micro-modelling approach, which is able to predict the complex nonlinear behaviour resulting from the different materials and their interaction. Each brick is modelled individually and connected taking into account the nonlinearity of the brick-mortar interface. The calibration of the model is based on small specimen tests, and experimental results for a one-bay, one-storey frame are used for the validation. The validated model is further used for parametric studies on two-storey and two-bay infilled frames. The response and the change of the structural stiffness are analysed and compared to the traditionally infilled frame. The results confirm the effectiveness of the INODIS system, with less damage and a relatively low contribution of the infill at high drift levels. In contrast to the uncoupled system configurations, traditionally infilled frames experienced brittle failure at rather low drift levels.
A further development of the Added Mass Method allows the combined representation of the effects of both soil-structure interaction and fluid-structure interaction on a liquid-filled tank in one model. This results in a practical method for describing the dynamic fluid pressure on the tank shell during joint movement. The fluid pressure is calculated on the basis of the tank's eigenmode and the earthquake acceleration and is represented by additional masses on the shell. The bearing on compliant ground is represented by substitute springs, which are calculated depending on the local soil composition. The influence of the shear modulus of the compliant soil is clearly visible in the pressure curves and the stress distribution in the shell. The acceleration spectra are also dependent on the soil stiffness. According to Eurocode 8, the acceleration spectra are determined for fixed soil classes instead of calculating the accelerations for each site in direct dependence on the soil composition. This leads to unrealistic sudden changes in the system's response. Therefore, earthquake spectra are calculated for different soil models in direct dependence on the shear modulus. Thus, both the acceleration spectra and the substitute springs match the soil composition. This enables a reasonable and consistent calculation of the system response for the actual conditions at each site.
In many historical centres in Europe, stone masonry buildings are part of building aggregates, which developed when the layout of the city or village was densified. In these aggregates, adjacent buildings share structural walls to support floors and roofs, while the façade walls of adjacent buildings are often connected only by dry joints, since the buildings were constructed at different times. Observations after, for example, the recent Central Italy earthquakes showed that the dry joints between the building units were often the first elements to be damaged. As a result, the joints opened up, leading to pounding between the building units and a complicated interaction at floor and roof beam supports. The analysis of such building aggregates is very challenging, and modelling guidelines do not exist. Advances in the development of analysis methods have been impeded by the lack of experimental data on the seismic response of such aggregates. The objective of the project AIMS (Seismic Testing of Adjacent Interacting Masonry Structures), included in the H2020 project SERA, is to provide such experimental data by testing an aggregate of two buildings under two horizontal components of dynamic excitation. The test unit is built at half-scale, with a two-storey building and a one-storey building. The buildings share one common wall, while the façade walls are connected by dry joints. The floors are at different heights, leading to a complex dynamic response of this smallest possible building aggregate. The shake table test is conducted at the LNEC seismic testing facility. The testing sequence comprises four levels of shaking: 25%, 50%, 75% and 100% of the nominal shaking table capacity. Extensive instrumentation, including accelerometers, displacement transducers and optical measurement systems, provides detailed information on the response of the building aggregate. Special attention is paid to the interface opening and the global response of the aggregate.
This study presents the process chain of additive manufacturing by means of powder bed fusion, based on the material glass. In order to process components additively in a reliable way, new concepts with different solutions were developed and investigated.
Compared to established metallic materials, the properties of glass materials differ significantly. Therefore, in these investigations, the process control was adapted to the material glass. With extensive parameter studies based on various glass powders, such as borosilicate glass and quartz glass, scientifically substantiated results on powder bed fusion of glass are presented. Based on the determination of the particle properties with different methods, extensive investigations were made regarding the melting behavior of glass under laser irradiation. Furthermore, the experimental setup was steadily expanded. In addition to the integration of coaxial temperature measurement and control, preheating of the build platform is of major importance, since it offers the possibility to perform 3D printing at the transformation temperatures of the glass materials. To improve the component properties, the influence of a subsequent heat treatment was also investigated.
The experience gained was incorporated into a new experimental system, which allows a much deeper exploration of the 3D printing of glass. Currently, studies are being conducted to improve the surface texture, building accuracy, and geometrical capabilities using three-dimensional specimens.
The contribution shows the development of research in the field of 3D printing of glass and gives an insight into the machine and process engineering as well as an outlook on the possibilities and applications.
Design and Development of a Hot S-Parameter Measurement System for Plasma and Magnetron Applications
(2020)
This paper presents the design, development, and calibration procedures of a novel hot S-parameter measurement system for plasma and magnetron applications with power levels up to 6 kW. Based on a vector network analyzer, a power amplifier, and two directional couplers, the input matching (hot S11) and the transmission (hot S21) of the device under test are measured at a 2.45 GHz center frequency with 300 MHz bandwidth while the device is driven by the magnetron. This measurement system opens a new horizon for developing many new industrial applications such as microwave plasma jets and dryer systems. Furthermore, the development, control, and monitoring of a 2 kW, 2.45 GHz plasma jet and of a dryer system using the measurement system are presented and explained.
The number of case studies focusing on hybrid-electric aircraft is steadily increasing, since these configurations are thought to lead to lower operating costs and environmental impact than traditional aircraft. However, due to the lack of reference data of actual hybrid-electric aircraft, in most cases, the design tools and results are difficult to validate. In this paper, two independently developed approaches for hybrid-electric conceptual aircraft design are compared. An existing 19-seat commuter aircraft is selected as the conventional baseline, and both design tools are used to size that aircraft. The aircraft is then re-sized under consideration of hybrid-electric propulsion technology. This is performed for parallel, serial, and fully-electric powertrain architectures. Finally, sensitivity studies are conducted to assess the validity of the basic assumptions and approaches regarding the design of hybrid-electric aircraft. Both methods are found to predict the maximum take-off mass (MTOM) of the reference aircraft with less than 4% error. The MTOM and payload-range energy efficiency of various (hybrid-) electric configurations are predicted with a maximum difference of approximately 2% and 5%, respectively. The results of this study confirm a correct formulation and implementation of the two design methods, and the data obtained can be used by researchers to benchmark and validate their design tools.
The project Coolplan-AIR concerns the further development and field validation of a calculation and design tool for the energy-efficient cooling of buildings with air-based systems. In addition to building and refining simulation models, the complete systems are measured on practical installations in the field. One of the systems considered works with indirect evaporative cooling. This publication shows the development process and the structure of the simulation model for evaporative cooling in the Matlab-Simulink simulation environment with the CARNOT toolbox. Particular attention is paid to the physical model of the heat exchanger in which the evaporation is implemented. The new modelling approach is based on the assumption of an effective heat capacity derived from an enthalpy consideration. Furthermore, the degree of humidification is regarded as constant, and a standardized increase of the heat transfer of the wet compared to the dry heat exchanger is assumed. The model was validated against literature data. For the dry heat exchanger, the maximum absolute error of the calculated outlet temperature (supply air) is smaller than ±0.1 K, and for the wet heat exchanger (cooling case), assuming a constant degree of evaporation, it is smaller than ±0.4 K.
A German–Brazilian research project investigates sugarcane as an energy plant for biogas production by anaerobic digestion. The aim of the project is a continuous, efficient, and stable biogas process with sugarcane as the substrate. Tests are carried out in a fermenter with a volume of 10 l.
In order to optimize the space–time load and achieve a stable process, a continuous process at laboratory scale was devised, in which the daily feed-in quantity and the harvest time of the sugarcane substrate were varied. Analyses of the digester content were conducted twice per week to monitor the process: the ratio of volatile organic acids to total inorganic carbonate (VFA/TAC), the concentration of short-chain fatty acids, the organic dry matter, the pH value, and the total nitrogen, phosphate, and ammonium concentrations were monitored. In addition, the gas quality (the percentages of CO₂, CH₄, and H₂) and the quantity of the produced gas were analyzed.
The investigations demonstrated feasible and economical production of biogas in a continuous process with energy cane as the substrate. With a daily feeding rate of 1.68 gᵥₛ/(l·d), the average specific gas formation rate was 0.5 m³/kgᵥₛ. The long-term study demonstrates a surprisingly fast metabolism of short-chain fatty acids, which indicates a stable and less susceptible process compared to other substrates.
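The two reported figures can be combined into a volumetric productivity estimate (a simple unit conversion, assuming both rates are averaged over the same operating period):

```python
def volumetric_gas_rate(feed_g_vs_per_l_d, yield_m3_per_kg_vs):
    """Biogas produced per litre of reactor volume per day.
    feed: g VS per litre reactor per day; yield: m^3 per kg VS."""
    feed_kg = feed_g_vs_per_l_d / 1000.0       # g -> kg
    gas_m3_per_l_d = feed_kg * yield_m3_per_kg_vs
    return gas_m3_per_l_d * 1000.0             # m^3 -> litres of gas

# 1.68 g_VS/(l*d) at 0.5 m^3/kg_VS -> 0.84 l gas per l reactor per day
rate = volumetric_gas_rate(1.68, 0.5)
```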
In collaborative research projects, researchers and practitioners work together to solve business-critical challenges. These projects often deal with ETL processes in which humans extract information from non-machine-readable documents by hand. AI-based machine learning models can help to solve this problem.
Since machine learning approaches are not deterministic, the quality of their output may decrease over time. This leads to an overall quality loss of the application that embeds the machine learning models. Hence, the software quality in development and in production may differ.
Machine learning models are black boxes. This makes practitioners skeptical and raises the inhibition threshold for the early productive use of research prototypes. Continuous monitoring of software quality in production enables an early response to quality loss and encourages the use of machine learning approaches. Furthermore, experts have to ensure that possible new inputs are integrated into the model training as quickly as possible.
In this paper, we introduce an architecture pattern with a reference implementation that extends the concept of Metrics Driven Research Collaboration with an automated software quality monitoring in productive use and a possibility to auto-generate new test data coming from processed documents in production.
Through automated monitoring of the software quality and auto-generated test data, this approach ensures that the software quality meets and maintains the requested thresholds in productive use, even during further continuous deployment and under changing input data.
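The threshold-based monitoring idea can be sketched minimally as follows (the class, metric names, and thresholds are illustrative assumptions, not the paper's API):

```python
from dataclasses import dataclass

@dataclass
class QualityGate:
    """A minimum acceptable value for one monitored quality metric."""
    metric_name: str
    threshold: float

    def check(self, value: float) -> bool:
        """Return True if the monitored metric still meets the gate."""
        return value >= self.threshold

def evaluate_on_production_samples(predictions, labels) -> float:
    """Accuracy on test data auto-generated from documents processed in production."""
    correct = sum(p == l for p, l in zip(predictions, labels))
    return correct / len(labels)

gate = QualityGate(metric_name="extraction_accuracy", threshold=0.90)
accuracy = evaluate_on_production_samples(["a", "b", "c", "d"],
                                          ["a", "b", "c", "x"])
if not gate.check(accuracy):
    # In a real pipeline this would trigger an alert or roll back the deployment.
    print(f"ALERT: {gate.metric_name} dropped to {accuracy:.2f}")
```

The design choice is that the gate is evaluated continuously in production rather than only at release time, so a drift in the input data surfaces as a failed check rather than as silent degradation.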
In this paper we present the SMART-FACTORY, a setup for a research and teaching facility in industrial robotics based on the RoboCup Logistics League. It is driven by the need to develop and apply solutions for digital production. Digitization receives constantly increasing attention in many areas, especially in industry. The common theme is to make things smart by using intelligent computer technology. Especially in the last decade, there have been many attempts to improve existing processes in factories, for example in production logistics, also by deploying cyber-physical systems. An initiative that explores challenges and opportunities for robots in such a setting is the RoboCup Logistics League. Since its foundation in 2012, it has been an international effort for research and education in an intra-warehouse logistics scenario. During seven years of competition, a lot of knowledge and experience regarding autonomous robots has been gained. This knowledge and experience shall provide the basis for further research into the challenges of future production. The focus of our SMART-FACTORY is to create a stimulating environment for research on logistics robotics, for teaching activities in computer science and electrical engineering programmes, and for industrial users to study and explore the feasibility of future technologies. Building on a very successful history in the RoboCup Logistics League, we aim to provide stakeholders with a dedicated facility oriented to their individual needs.
The industrial revolution, especially in the Industry 4.0 (IR4.0) era, has driven the introduction of many state-of-the-art technologies.
The automotive industry, like many other key industries, has been greatly influenced. The rapid development of the automotive industry in Europe has created a wide industry gap between the European Union (EU) and developing countries, such as those in South East Asia (SEA). In response to this situation, FH JOANNEUM, Austria, together with European partners from FH Aachen, Germany, and Politecnico di Torino, Italy, is taking the initiative to close the gap by utilizing the Erasmus+ Capacity Building in Higher Education grant from the EU. A consortium was founded to engage in automotive technology transfer, using the European framework, to Malaysian, Indonesian, and Thai Higher Education Institutions (HEI) as well as automotive industries in the respective countries. This is achieved by establishing an Engineering Knowledge Transfer Unit (EKTU) in the respective SEA institutions, guided by the industry partners in their respective countries. The EKTU can offer updated, innovative, and high-quality training courses to increase graduates' employability and strengthen relations between HEIs and the wider economic and social environment by addressing university–industry cooperation, which is the regional priority for Asia. It is expected that the capacity-building initiative will improve the quality of higher education and enhance its relevance for the labor market and society in the SEA partner countries. The outcome of this project will greatly benefit the partners through a strong and complementary partnership targeting the automotive industry and enhanced larger-scale international cooperation between the European and SEA partners. It will also prepare the SEA HEIs for a sustainable partnership with the automotive industry in the region as a means of income generation in the future.
We present first results from a newly developed monitoring station for a closed-loop geothermal heat pump test installation at our campus, consisting of helix coils and plate heat exchangers as well as an ice-store system. More than 40 temperature sensors and several soil moisture sensors are distributed around the system, allowing detailed monitoring under different operating conditions. In view of the modern development of renewable energies, along with the new concepts known as the Internet of Things and Industry 4.0 (the high-tech strategy of the German government), we created a user-friendly web application which connects the things (sensors) with the open network (www). Besides other advantages, this allows continuous remote monitoring of the data from the numerous sensors at an arbitrary sampling rate. Based on the recorded data, we will also present first results from numerical simulations, taking into account all relevant heat transport processes. The aim is to improve the understanding of these processes and their influence on the thermal behavior of shallow geothermal systems in the unsaturated zone. This will in turn facilitate the prediction of the performance of these systems and therefore yield an improvement in their dimensioning when designing a specific shallow geothermal installation.
As part of the transnational research project EDITOR, a parabolic trough collector system (PTC) with concrete thermal energy storage (C-TES) was installed and commissioned in Limassol, Cyprus. The system is located on the premises of the beverage manufacturer KEAN Soft Drinks Ltd., and its function is to supply process steam for the factory's pasteurisation process [1]. Depending on the factory's seasonally varying capacity for beverage production, the solar system delivers between 5 and 25 % of the total steam demand. In combination with the C-TES, the solar plant can supply process steam on demand before sunrise or after sunset. Furthermore, the C-TES compensates for the PTC's fluctuating output during the day under changing weather conditions. The parabolic trough collector as well as the control and oil handling unit were designed and manufactured by Protarget AG, Germany. The C-TES was designed and produced by CADE Soluciones de Ingeniería, S.L., Spain. The focus of this paper is the description of the operational experience with the PTC, C-TES, and boiler during the commissioning and operation phases. Additionally, innovative optimisation measures are presented.
In this paper we report on an architecture for a self-driving car based on ROS2. Self-driving cars have to take decisions based on their sensory input in real time, providing high reliability with strong demands on functional safety. In principle, self-driving cars are robots. However, typical robot software in general, and the previous version of the Robot Operating System (ROS) in particular, does not always meet these requirements. With the successor ROS2 the situation has changed, and it may be considered a solution for automated and autonomous driving. Existing robotic software based on ROS was not ready for safety-critical applications like self-driving cars. We propose an architecture for using ROS2 in a self-driving car that enables safe and reliable real-time behaviour while keeping the advantages of ROS, such as a distributed architecture and standardised message types. First experiments with an automated real passenger car at lower and higher speed levels show that our approach seems feasible for autonomous driving under the necessary real-time conditions.
The increasing digitalization brings new opportunities but also poses new challenges for modern industrial systems. Software agents are one of the key technologies towards self-optimizing factories and are currently used to address the needs of cyber-physical production systems (CPPS). However, their interplay in industrial settings needs to be better understood. This paper focuses on securing a cloud infrastructure for multi-agent systems on industrial sites. An industrial site contains multiple production processes that need to communicate with each other, and each physical resource is abstracted by a software agent. This volatile architecture needs to be managed and protected from manipulation. The proposed infrastructure presents a security concept for TCP/IP communication between agents, machines, and external networks. It is based on open-source software and tested on a three-node edge cloud controlling a model plant.
The adoption of IO-Link in the automation industry has increased over the years. Its main advantage is that it offers a digital point-to-point plug-and-play interface for any type of device or application. This simplifies communication between devices and increases productivity through features such as self-parametrization and maintenance. However, its full potential is not always used.
The aim of this paper is to create an Arduino-based framework for the development of generic IO-Link devices and to increase its adoption for rapid prototyping. By generating the IO device description file (IODD) from a graphical user interface, with further customizable options for the device application, the end user can intuitively develop generic IO-Link devices. The distinguishing feature of this framework is its simplicity and abstraction, which allow any sensor functionality to be implemented and virtually any type of device to be connected to an IO-Link master. This work consists of a general overview of the framework, the technical background of its development, and a proof of concept which demonstrates the workflow for its implementation.
Industry 4.0 imposes many challenges for manufacturing companies and their employees. Innovative and effective training strategies are required to cope with fast-changing production environments and new manufacturing technologies. Virtual Reality (VR) offers new ways of on-the-job, on-demand, and off-premise training. A novel concept and evaluation system combining Gamification and VR practice for flexible assembly tasks is proposed in this paper and compared to existing works. It is based on directed acyclic graphs and a leveling system. The concept enables a learning speed which is adjustable to the users’ pace and dynamics, while the evaluation system facilitates adaptive work sequences and allows employee-specific task fulfillment. The concept was implemented and analyzed in the Industry 4.0 model factory at FH Aachen for mechanical assembly jobs.
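The directed-acyclic-graph idea behind the proposed concept can be sketched as follows: assembly steps are nodes, precedence constraints are edges, and any topological order is a valid, employee-specific work sequence. The task names below are illustrative, not taken from the paper.

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Each task maps to the set of tasks that must be completed before it.
precedence = {
    "attach_deck": set(),
    "mount_axle": {"attach_deck"},
    "attach_wheels": {"mount_axle"},
    "apply_grip_tape": {"attach_deck"},
    "final_inspection": {"attach_wheels", "apply_grip_tape"},
}

# static_order() yields one topological ordering of the task graph.
order = list(TopologicalSorter(precedence).static_order())
# Any ordering produced here respects all precedence edges, so different
# workers may fulfil the job in different but equally valid sequences.
```

This illustrates why a DAG (rather than a fixed linear checklist) enables adaptive work sequences: steps without mutual dependencies, such as mounting the axle and applying the grip tape here, can be reordered to match each user's pace.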
Advancing digitalization and globalization demand increased flexibility and adaptability from companies. To achieve this, qualified and committed employees are indispensable. Gamification offers the possibility of supporting employees individually in their tasks and motivating them by means of feedback mechanisms. In this paper, a gamification concept consisting of an intelligent workstation, a knowledge base, and a gamification platform is presented, which can be adapted to existing production environments. The concept is implemented and evaluated using the example of longboard production in the Industry 4.0 model factory at FH Aachen.
Past earthquakes have demonstrated the high vulnerability of industrial facilities equipped with complex process technologies, leading to serious damage to the process equipment and the multiple, simultaneous release of hazardous substances. Nevertheless, the design of industrial plants is inadequately described in recent codes and guidelines, as they do not consider the dynamic interaction between the structure and the installations, and thus the effect of the seismic response of the installations on the response of the structure and vice versa. The current code-based approach for the seismic design of industrial facilities is considered insufficient to ensure proper safety against exceptional events entailing loss of content and related consequences. Accordingly, the SPIF project (Seismic Performance of Multi-Component Systems in Special Risk Industrial Facilities) was proposed within the framework of the European H2020 SERA funding scheme (Seismology and Earthquake Engineering Research Infrastructure Alliance for Europe). The objective of the SPIF project is the investigation of the seismic behaviour of a representative industrial structure equipped with complex process technology by means of shaking table tests. The test structure is a three-story moment-resisting steel frame with vertical and horizontal vessels and cabinets arranged on the three levels and connected by pipes. The dynamic behaviour of the test structure and of its several installations is investigated. Furthermore, the interactions between the process components and the primary structure are considered and analyzed. Several PGA-scaled artificial ground motions are applied to study the seismic response at different levels. After each test, dynamic identification measurements are carried out to characterize the system condition.
The contribution presents the experimental setup of the investigated structure and installations and selected measurement data, and describes the observed damage. Furthermore, important findings on the definition of performance limits and on the effectiveness of floor response spectra in industrial facilities are presented and discussed.
The paper presents an overview of the past and present of low-emission combustor research with hydrogen-rich fuels at Aachen University of Applied Sciences. In 1990, AcUAS started developing the Dry-Low-NOx Micromix combustion technology. Micromix reduces NOx emissions using jet-in-crossflow mixing of multiple miniaturized fuel jets with the combustor air, with inherent safety against flashback. At first, pure hydrogen as a fuel was investigated in lab-scale applications. Later, Micromix prototypes were developed for use in an industrial Honeywell/Garrett GTCP-36-300 gas turbine, proving low-NOx characteristics during real gas turbine operation, accompanied by the successful definition of safety laws and control system modifications. Furthermore, the Micromix was optimized for use in annular and can combustors as well as for fuel flexibility with hydrogen-methane mixtures and hydrogen-rich syngas qualities by means of extensive experiments and numerical simulations. In 2020, the latest Micromix application will be demonstrated in a commercial 2 MW-class gas turbine can combustor with full-scale engine operation. The paper discusses the advances in Micromix research over the last three decades.
Experimental investigation of behaviour of masonry infilled RC frames under out-of-plane loading
(2021)
Masonry infills are commonly used as exterior or interior walls in reinforced concrete (RC) frame structures, and they can be encountered all over the world, including earthquake-prone regions. Since the middle of the 20th century, the behaviour of these non-structural elements under seismic loading has been studied in numerous experimental campaigns. However, most of the studies were carried out by means of in-plane tests, while there is a lack of out-of-plane experimental investigations. In this paper, out-of-plane tests carried out on full-scale masonry-infilled frames are described. The results are presented in terms of force-displacement curves and measured out-of-plane displacements. Finally, the reliability of existing analytical approaches developed to estimate the out-of-plane strength of masonry infills is examined against the presented experimental results.
This paper presents laser-based powder bed fusion (L-PBF) using various glass powders (borosilicate and quartz glass). Compared to metals, these require adapted process strategies. First, the glass powders were characterized with regard to their material properties and their processability in the powder bed. This was followed by investigations of the melting behavior of the glass powders at different laser wavelengths (10.6 µm, 1070 nm). In particular, the experimental setup of a CO2 laser was adapted for the processing of glass powder. An experimental setup with integrated coaxial temperature measurement/control and an inductively heatable build platform was created. This allowed the L-PBF process to be carried out at the transformation temperature of the glasses. Furthermore, the material quality of the components was analyzed on three-dimensional test specimens with regard to porosity, roughness, density, and geometrical accuracy in order to evaluate the developed L-PBF parameters and to open up possible applications.
Digital Shadows as the aggregation, linkage and abstraction of data relating to physical objects are a central vision for the future of production. However, the majority of current research takes a technocentric approach, in which the human actors in production play a minor role. Here, the authors present an alternative anthropocentric perspective that highlights the potential and main challenges of extending the concept of Digital Shadows to humans. Following future research methodology, three prospections that illustrate use cases for Human Digital Shadows across organizational and hierarchical levels are developed: human-robot collaboration for manual work, decision support and work organization, as well as human resource management. Potentials and challenges are identified using separate SWOT analyses for the three prospections and common themes are emphasized in a concluding discussion.
Development of open educational resources for renewable energy and the energy transition process
(2021)
The dissemination of knowledge about renewable energies is understood as a social task of the highest topicality. The transfer of teaching content on renewable energies into digital open educational resources offers the opportunity to significantly accelerate the implementation of the energy transition. Thus, in the project presented here, six German universities create open educational resources for the energy transition. These materials are available to the public on the internet under a free license. So far, there have been no publicly accessible, editable media that cover entire learning units about renewable energies extensively and in high technical quality. Thus, in this project, content that remains up-to-date for a longer period is appropriately prepared in terms of media didactics. The materials enable lecturers to provide students with in-depth training on technologies for the energy transition. In a particular way, the created material is also suitable for making the general public knowledgeable about the energy transition with scientifically based material.
Seismic vulnerability estimation of existing structures is an unquestionably interesting topic of high priority, particularly after earthquake events. Considering the vast number of old masonry buildings in North Macedonia serving as public institutions, it is evident that the structural assessment of these buildings is an issue of great importance. In this paper, a comprehensive methodology for the development of seismic fragility curves of existing masonry buildings is presented. A scenario-based method that incorporates knowledge of the tectonic style of the considered region, the active fault characterization, the earth crust model, and the historical seismicity (determined via the Neo-Deterministic approach) is used for the calculation of the necessary response spectra. The capacity of the investigated masonry buildings has been determined using nonlinear static analysis. The MINEA software (SDA Engineering) is used for the verification of the structural safety of the structures. The performance point, obtained from the intersection of the capacity curve of the building and the spectra used, is selected as the response parameter. The thresholds of the spectral displacement are obtained by splitting the capacity curve into five parts, utilizing empirical formulas expressed as functions of the yield displacement and the ultimate displacement. As a result, four damage limit states are determined. A maximum likelihood estimation procedure for the determination of the fragility curves is the final step of the proposed procedure. As a result, region-specific series of vulnerability curves for the structures are defined.
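The maximum-likelihood step at the end of such a procedure can be sketched generically: a lognormal fragility curve is fitted to binary damage observations by maximizing a binomial likelihood. The synthetic observations and grid bounds below are illustrative assumptions, not the paper's data.

```python
from math import erf, log, sqrt

def lognormal_cdf(x, theta, beta):
    """P(damage exceedance | demand = x) for median theta and log-std beta."""
    z = log(x / theta) / beta
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def log_likelihood(data, theta, beta):
    """Binomial log-likelihood of (demand, n_total, n_exceeding) observations."""
    ll = 0.0
    for im, n, k in data:
        p = min(max(lognormal_cdf(im, theta, beta), 1e-12), 1.0 - 1e-12)
        ll += k * log(p) + (n - k) * log(1.0 - p)
    return ll

def fit_fragility(data):
    """Coarse grid search for the maximum-likelihood (theta, beta)."""
    best = None
    for theta in [0.1 + 0.01 * i for i in range(100)]:
        for beta in [0.1 + 0.02 * j for j in range(50)]:
            ll = log_likelihood(data, theta, beta)
            if best is None or ll > best[0]:
                best = (ll, theta, beta)
    return best[1], best[2]

# Synthetic observations: (spectral displacement, analyses run, exceedances)
observations = [(0.2, 10, 1), (0.4, 10, 4), (0.6, 10, 7), (0.8, 10, 9)]
theta_hat, beta_hat = fit_fragility(observations)
```

In practice a proper optimizer would replace the grid search, but the sketch shows why the output is a smooth region-specific curve: the fitted median and dispersion fully parametrize the fragility function.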
This paper describes the concept of an innovative, interdisciplinary, user-oriented earthquake warning and rapid response system coupled with a structural health monitoring (SHM) system, capable of detecting structural damage in real time. The novel system is based on interconnected, decentralized seismic and structural health monitoring sensors. It is being developed and will be exemplarily applied to critical infrastructure in the Lower Rhine region, in particular to a road bridge and within a chemical industrial facility. A communication network is responsible for exchanging information between sensors and forwarding warnings and status reports about the infrastructures' health condition to the concerned recipients (e.g., facility operators, local authorities). Safety measures such as emergency shutdowns are activated to mitigate structural damage and damage propagation. The local monitoring systems of the infrastructures are integrated into BIM models. The visualization of sensor data and the graphic representation of the detected damage provide spatial context to the sensor data and serve as a useful and effective tool for decision-making processes after an earthquake in the region under consideration.
Reinforced concrete frames with masonry infill walls are a popular form of construction all over the world, including seismic regions. While severe earthquakes can cause a high level of damage to both the reinforced concrete frame and the masonry infills, earthquakes of lower to medium intensity can sometimes cause a significant level of damage to the masonry infill walls. Especially important is the level of damage to face-loaded infill masonry walls (out-of-plane direction), as out-of-plane loading can not only severely damage the wall, it can also be life-threatening for people near the wall. The response in the out-of-plane direction directly depends on prior in-plane damage, as previous investigations have shown that it decreases the resistance capacity of the infills. The behaviour of infill masonry walls with and without prior in-plane loading is investigated in the experimental campaign, and the results are presented in this paper. These results are then compared with analytical approaches for the out-of-plane resistance from the literature. Conclusions from the experimental campaign on the influence of prior in-plane damage on the out-of-plane response of infill walls are compared with the conclusions of other authors who have investigated the same problem.
A new formulation to calculate the shakedown limit load of Kirchhoff plates under stochastic conditions of strength is developed. Direct structural reliability design by chance-constrained programming is based on prescribed failure probabilities, which is an effective approach of stochastic programming if it can be formulated as an equivalent deterministic optimization problem. We restrict the uncertainty to the strength; the loading remains deterministic. A new formulation is derived for the case of random strength with lognormal distribution. Upper-bound and lower-bound shakedown load factors are calculated simultaneously by a dual algorithm.
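The deterministic-equivalent idea behind chance-constrained programming can be sketched in its generic form under a lognormal assumption (the symbols $\mu$, $s$, and $p_f$ are illustrative and this is not necessarily the paper's exact formulation):

```latex
% Chance constraint on the random strength \sigma_y with failure
% probability p_f:
%     P\left(\sigma_y \ge \sigma^{\mathrm{det}}\right) \ge 1 - p_f .
% If \ln \sigma_y \sim N(\mu, s^2) (lognormal strength), this is
% equivalent to using the deterministic strength value
\sigma^{\mathrm{det}} = \exp\!\left(\mu + s\,\Phi^{-1}(p_f)\right),
% where \Phi^{-1} is the inverse standard normal CDF, so the shakedown
% problem can be solved as an ordinary deterministic optimization
% problem with \sigma_y replaced by \sigma^{\mathrm{det}}.
```

This transformation is what makes the stochastic program tractable: the prescribed failure probability enters only through a quantile of the strength distribution.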
With the increased interest in interstellar exploration after the discovery of exoplanets and the proposal by Breakthrough Starshot, this paper investigates the optimisation of photon-sail trajectories to Alpha Centauri. The prime objective is to find the optimal steering strategy for a photonic sail to be captured around one of the stars after a minimum-time transfer from Earth. By extending the idea of the Breakthrough Starshot project with a deceleration phase upon arrival, the mission's scientific yield will be increased. As a secondary objective, transfer trajectories between the stars and orbit-raising manoeuvres to explore the habitable zones of the stars are investigated. All trajectories are optimised for minimum time of flight using the trajectory optimisation software InTrance. Depending on the sail technology, interstellar travel times of 77.6 to 18,790 years can be achieved, which represents an average improvement of 30% with respect to previous work. Still, significant technological development is required to reach and be captured in the Alpha-Centauri system in less than a century. Therefore, a fly-through mission arguably remains the only option for a first exploratory mission to Alpha Centauri, but the enticing results obtained in this work provide perspective for future long-residence missions to our closest neighbouring star system.
The planned coal phase-out in Germany by 2038 will lead to the dismantling of power plants with a total capacity of approx. 30 GW. A possible further use of these assets is the conversion of the power plants into thermal storage power plants; the use of such plants on the day-ahead market is, however, considerably limited by their technical parameters. In this paper, the influence of the technical boundary conditions on the operating times of these storage facilities is presented. For this purpose, the storage power plants were described as an MILP problem, and two price curves are compared: one from 2015 with a relatively low renewable penetration (33 %) and one from 2020 with a high renewable penetration (51 %). The operating times were examined as a function of the technical parameters, and the critical influencing factors were investigated. With the price curve of 2020, the operation duration of the thermal storage power plant and the energy shifted increase by more than 25 % compared to 2015.
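The dispatch idea behind such a study can be sketched with a simple greedy price-arbitrage heuristic: charge the thermal store in low-price hours and discharge (generate) in high-price hours. This stands in for the paper's MILP formulation, and all numbers below (prices, capacity, efficiency) are made up for illustration.

```python
def dispatch(prices, store_capacity_mwh, power_mw, efficiency):
    """Pair the cheapest charge hours with the most expensive discharge hours."""
    hours = sorted(range(len(prices)), key=lambda h: prices[h])
    n = int(store_capacity_mwh // power_mw)   # hours of full-power operation
    charge_hours = set(hours[:n])             # buy energy when it is cheap
    discharge_hours = set(hours[-n:])         # sell energy when it is expensive
    profit = 0.0
    for h in discharge_hours:
        profit += power_mw * efficiency * prices[h]
    for h in charge_hours:
        profit -= power_mw * prices[h]
    return charge_hours, discharge_hours, profit

day_ahead = [32, 28, 25, 24, 30, 45, 60, 72, 65, 50, 40, 35]  # EUR/MWh, hourly
charge, discharge, profit = dispatch(day_ahead, store_capacity_mwh=400,
                                     power_mw=100, efficiency=0.9)
```

The sketch illustrates the paper's core mechanism: a price curve with a wider spread between low- and high-price hours, typical of years with high renewable penetration, yields more profitable operating hours for the storage plant, whereas a flat curve can make arbitrage unprofitable, especially at low conversion efficiency.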