This paper describes the procedure for evaluating the masonry chapter for the next generation of Eurocode 8, the European standard for earthquake-resistant design. Within CEN/TC 250/SC 8, working group WG 1 has been established to support the subcommittee on the topic of masonry, covering both the design of new structures (EN 1998-1) and the assessment of existing structures (EN 1998-3). The aim is to elaborate suggestions for amendments that reflect the current state of the art in masonry and earthquake-resistant design. The focus will be on modelling, simplified methods, linear analysis (q-values, overstrength values), nonlinear procedures, and out-of-plane design, as well as on a clearer definition of limit states. Besides these, topics related to general material properties, reinforced masonry, confined masonry, mixed structures and non-structural infills will be covered as well. This paper presents the preliminary work and results up to the submission date.
Water suppliers face the great challenge of providing a high-quality and, at the same time, low-cost water supply. In practice, the focus is set on the most beneficial maintenance measures and/or capacity adaptations of existing water distribution systems (WDS). Since climatic and demographic influences will pose further challenges in the future, the enhancement of the resilience of WDS, i.e. of their capability to withstand and recover from disturbances, has recently received particular attention. To assess the resilience of WDS, metrics based on graph theory have been proposed. In this study, a promising approach is applied to assess the resilience of the WDS of a district in a major German city. The conducted analysis provides insight into the process of actively influencing the resilience of WDS.
Water suppliers face the great challenge of providing a high-quality and, at the same time, low-cost water supply. Since climatic and demographic influences will pose further challenges in the future, the enhancement of the resilience of water distribution systems (WDS), i.e. of their capability to withstand and recover from disturbances, has recently received particular attention. To assess the resilience of WDS, graph-theoretical metrics have been proposed. In this study, a promising approach is first derived analytically from physical considerations and then applied to assess the resilience of the WDS of a district in a major German city. The topology-based resilience index computed for every consumer node takes into account the resistance of the best supply path as well as of alternative supply paths. The resistance of a supply path is derived as the dimensionless pressure loss in the pipes making up the path. The conducted analysis of an existing WDS provides insight into the process of actively influencing the resilience of WDS locally and globally by adding pipes. The study shows that especially pipes added close to the reservoirs and to the main branching points of the WDS yield a high resilience enhancement of the overall WDS.
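The path-resistance idea lends itself to a compact illustration. The sketch below is a simplified reading of the metric, not the paper's exact formulation: it treats the WDS as a graph whose edge weights are the dimensionless pressure losses of the pipes and scores a consumer node by the resistance of its best supply path. All node names and loss values are hypothetical.

```python
import heapq

def best_path_resistance(graph, source, target):
    """Dijkstra's algorithm on 'resistance' weights (e.g. dimensionless
    pressure loss per pipe) from a reservoir node to a consumer node."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    seen = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in seen:
            continue
        seen.add(node)
        if node == target:
            return d
        for nbr, r in graph.get(node, {}).items():
            nd = d + r
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return float("inf")

# Toy WDS: edges weighted by the dimensionless pressure loss of each pipe.
wds = {
    "reservoir": {"a": 0.2, "b": 0.5},
    "a": {"consumer": 0.3},
    "b": {"consumer": 0.1},
}
r_best = best_path_resistance(wds, "reservoir", "consumer")
# A simple nodal index: the lower the best-path resistance, the higher
# the resilience of the consumer node (illustrative definition only).
resilience_index = 1.0 / (1.0 + r_best)
```

Adding a pipe close to the reservoir lowers the resistance of the dominant supply paths for many downstream nodes at once, which is consistent with the study's finding that such pipes yield the largest global resilience gain.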
Water distribution systems (WDS) are an essential supply infrastructure for cities. Given that climatic and demographic influences will pose further challenges for these infrastructures in the future, the resilience of water supply systems, i.e. their ability to withstand and recover from disruptions, has recently become a subject of research. To assess the resilience of a WDS, different graph-theoretical approaches exist. In addition to general metrics characterizing the network topology, hydraulic and technical restrictions also have to be taken into account. In this work, the resilience of an exemplary water distribution network of a major German city is assessed, and a Mixed-Integer Program is presented which makes it possible to assess the impact of capacity adaptations on its resilience.
The Volatility Framework is a collection of tools for the analysis of computer RAM. The framework offers a multitude of analysis options and is used by many investigators worldwide. Volatility currently comes with a command line interface only, which might be a barrier for some investigators. In this paper we present a GUI and extensions for the Volatility Framework which, on the one hand, simplify the usage of the tool and, on the other hand, offer additional functionality such as storage of results in a database, shortcuts for long Volatility command sequences, and entirely new commands based on the correlation of data stored in the database.
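The database-storage extension can be pictured with a small sketch. It assumes the classic Volatility 2 command line (`vol.py -f <image> <plugin>`); the table layout and file names are illustrative, not those of the presented tool.

```python
import os
import sqlite3
import subprocess
import tempfile

def run_plugin(image, plugin, vol="vol.py"):
    """Run a Volatility plugin against a memory image and return its
    stdout (assumes the classic `vol.py -f <image> <plugin>` CLI)."""
    result = subprocess.run([vol, "-f", image, plugin],
                            capture_output=True, text=True, check=True)
    return result.stdout

def store_result(db_path, image, plugin, output):
    """Persist a plugin result so later queries can correlate data
    across plugins and images; returns the stored row count."""
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS results "
                "(image TEXT, plugin TEXT, output TEXT)")
    con.execute("INSERT INTO results VALUES (?, ?, ?)",
                (image, plugin, output))
    con.commit()
    n = con.execute("SELECT COUNT(*) FROM results").fetchone()[0]
    con.close()
    return n

# The storage path can be exercised without a real memory image:
db = os.path.join(tempfile.mkdtemp(), "volatility.db")
rows = store_result(db, "mem.img", "pslist", "Offset  Name  PID ...")
```

Once plugin output lives in SQL, "entirely new commands" amount to queries joining the stored tables, e.g. matching process lists against network connections.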
Under DLR contract, sample return missions to the large main-belt asteroid (19) Fortuna have been studied. The mission scenario is based on three ion thrusters of the RIT-22 model, which is presently under space qualification, and on solar arrays equipped with triple-junction GaAs solar cells. After the spacecraft design, the orbit-to-orbit trajectories for both a one-way SEP mission with a chemical sample return and an all-SEP return mission were optimized using a combination of artificial neural networks and evolutionary algorithms. Additionally, body-to-body trajectories were investigated within a launch period between 2012 and 2015. For the orbit-to-orbit calculation, the launch masses of the hybrid mission and of the all-SEP mission resulted in 2.05 tons and 1.56 tons, respectively, including a scientific payload of 246 kg. The related transfer durations were 4.14 yrs and 4.62 yrs. Finally, a comparison between mission scenarios based on SEP and on NEP was carried out, clearly favouring SEP.
Under DLR contract, Giessen University and DLR Cologne are studying solar-electric propulsion (SEP) missions to the outer regions of the solar system. The most challenging reference mission concerns the transport of a 1.35-ton chemical lander spacecraft into an 80-RJ circular orbit around Jupiter, which would make it possible to place a 375 kg lander with 50 kg of scientific instruments on the surface of the icy moon Europa. Thorough analyses show that the best solution in terms of SEP launch mass times thrusting time would be a two-stage EP module and a triple-junction solar array with concentrators, deployed step by step. Mission performance optimizations suggest propelling the spacecraft in the first EP stage by 6 gridded ion thrusters running at 4.0 kV of beam voltage, which would save launch mass, and in the second stage by 4 thrusters with 1.25 to 1.5 kV of positive high voltage, saving thrusting time. In this way, the launch mass of the spacecraft would be kept within 5.3 tons. Without a launcher's C3 and interplanetary gravity assists, Jupiter might be reached within about 4 yrs. The spiraling-down into the parking orbit would need another 1.8 yrs. This "large mission" can be scaled down to a smaller one, e.g. by halving all masses, the solar array power, and the number of thrusters. Due to their reliability, long lifetime and easy control, RIT-22 engines have been chosen for the mission analysis. The thruster performance has been modeled on the basis of precise tests.
An Interstellar – Heliopause mission using a combination of solar/radioisotope electric propulsion
(2011)
There is common agreement within the scientific community that, in order to understand our local galactic environment, it will be necessary to send a spacecraft into the region beyond the solar wind termination shock. Considering distances of 200 AU for a new mission, a spacecraft travelling at a speed of close to 10 AU/yr is needed in order to keep the mission duration below 25 yrs, a transfer time postulated by ESA. Two propulsion options for the mission have been proposed and discussed so far: solar sail propulsion and ballistic/radioisotope-electric propulsion. As a further alternative, we here investigate a combination of solar-electric propulsion (SEP) and radioisotope-electric propulsion (REP). The SEP stage consists of six 22-cm-diameter RIT-22 ion thrusters working at a high specific impulse of 7377 s, corresponding to a positive grid voltage of 5 kV. Solar power of 53 kW at BOM is provided by a lightweight solar array. The REP stage consists of four space-proven 10-cm-diameter RIT-10 ion thrusters that will operate one after the other for 9 yrs in total. Four advanced radioisotope generators provide 648 W at BOM. The scientific instrument package is oriented at earlier studies; its mass and electric power requirement are assessed at 35 kg and 35 W, respectively. Optimized trajectory calculations, treated in a separate contribution, are based on our "InTrance" method. The program yields a burn-out of the REP stage at a distance of 79.6 AU for a usage of 154 kg of Xe propellant. With C3 = 45.1 (km/s)², a heliocentric probe velocity of 10 AU/yr is reached at this distance, provided a close Jupiter gravity assist adds a velocity increment of 2.7 AU/yr. A transfer time of 23.8 yrs results for this scenario, requiring about 450 kg of Xe for the SEP stage, which is jettisoned at 3 AU. We interpret SEP/REP propulsion as a competing alternative to solar sail and ballistic/REP propulsion. Omitting the Jupiter fly-by even allows more launch flexibility while leaving the mission duration in the range of the ESA specification.
Time-of-flight (ToF) sensors have become an alternative to conventional distance sensing techniques such as laser scanners or image-based stereo. ToF sensors provide full-range distance information at high frame rates and thus have a significant impact on current research in areas like online object recognition, collision prevention and scene reconstruction. However, ToF cameras like the photonic mixer device (PMD) still exhibit a number of challenges regarding static and dynamic effects, e.g. systematic distance errors and motion artefacts, respectively. Sensor calibration techniques reducing static system errors have been proposed and show promising results. In general, however, current calibration techniques need a large set of reference data in order to determine the corresponding parameters of the calibration model. This paper introduces a new calibration approach which combines different demodulation techniques for the ToF camera's reference signal. Examples show that the resulting combined demodulation technique yields improved distance values based on only two required reference data sets.
To maximize the travel distance of battery electric vehicles such as cars or buses for a given amount of stored energy, their powertrains are optimized energetically. One key part of optimization models for electric powertrains is the efficiency map of the electric motor. The underlying function is usually highly nonlinear and nonconvex and leads to major challenges in a global optimization process. One way to enable faster solution times is to use piecewise linearization techniques to approximate the nonlinear efficiency map with linear constraints. We therefore evaluate the influence of different piecewise linearization modeling techniques on the overall solution process and compare solution time and accuracy for methods with and without explicitly used binary variables.
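As a minimal illustration of the idea (not the paper's concrete formulations), the sketch below evaluates a piecewise-linear interpolant of a hypothetical one-dimensional efficiency curve; inside a MIP, the interpolation weights would become decision variables tied together by SOS2 or binary constraints.

```python
import bisect

def pwl(breakpoints, values, x):
    """Evaluate the piecewise-linear interpolant defined by sorted
    breakpoints and their function values: x is written as a convex
    combination of its two neighbouring breakpoints (lambda form)."""
    i = bisect.bisect_right(breakpoints, x) - 1
    i = max(0, min(i, len(breakpoints) - 2))
    x0, x1 = breakpoints[i], breakpoints[i + 1]
    lam = (x - x0) / (x1 - x0)
    return (1 - lam) * values[i] + lam * values[i + 1]

# Hypothetical 1-D slice of a motor efficiency map: eta over torque,
# a nonconvex toy function standing in for measured map data.
def eta(t):
    return 0.9 - 0.002 * (t - 50) ** 2 / 50

bps = [0, 25, 50, 75, 100]
vals = [eta(b) for b in bps]
# Approximation quality over the whole torque range:
max_err = max(abs(pwl(bps, vals, x) - eta(x)) for x in range(0, 101))
```

More breakpoints shrink `max_err` but enlarge the MIP, which is exactly the solution-time versus accuracy trade-off the paper evaluates.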
The development of resilient technical systems is a challenging task, as the system should adapt automatically to unknown disturbances and component failures. To evaluate different approaches for deriving resilient technical system designs, we developed a modular test rig based on a pumping system. Using this example system, we present metrics to quantify resilience and an algorithmic approach to improve it. This approach enables the pumping system to react automatically to unknown disturbances and to reduce the impact of component failures. In this case, the system is able to adapt its topology automatically by activating additional valves, which allows it to maintain a minimum performance even in case of failures. Furthermore, time-dependent disturbances are evaluated continuously, and deviations from the original state are detected automatically and anticipated for the future. This reduces the impact of future disturbances and leads to a more resilient system behaviour.
The overall energy efficiency of ventilation systems can be improved by considering not only single components but also the interplay between all parts of the system. With the help of the method "TOR" ("Technical Operations Research"), developed at the Chair of Fluid Systems at TU Darmstadt, it is possible to improve the energy efficiency of the whole system by considering all possible design choices programmatically. We demonstrate this systematic design approach using a ventilation system for buildings as an example.
We model the ventilation system as a Mixed-Integer Nonlinear Program (MINLP). We use binary variables to model the selection of different pipe diameters. Multiple fans are modelled with the help of scaling laws. The whole system is represented by a graph, where the edges represent the pipes and fans, and the nodes represent the source of air for cooling and the sinks that have to be cooled. At the beginning, the human designer chooses a construction kit of different suitable fans, pipes of different diameters, and load cases. These boundary conditions define a variety of possible system topologies. It is not possible to consider all topologies by hand; with the help of state-of-the-art solvers, on the other hand, it is possible to solve this MINLP.
In addition, we consider the effects of malfunctions in different components. To this end, we show a first approach to measuring the resilience of the example use case. Furthermore, we compare the conventional approach with designs that are more resilient. These more resilient designs are derived by extending the aforementioned model with further constraints that explicitly consider the resilience of the overall system. We show that it is possible to design resilient systems with this method already in the early design stage, and we compare the energy efficiency and resilience of the resulting system designs.
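The point that design choices can be explored programmatically can be illustrated with a toy enumeration; for realistic construction kits the search space explodes, which is exactly why the MINLP and a solver are needed. All costs, loss coefficients and limits below are invented.

```python
from itertools import product

# Hypothetical kit: pipe diameter [m] -> (cost per metre, pressure-loss
# coefficient per metre); smaller pipes are cheaper but lose more pressure.
kit = {0.10: (20.0, 8.0), 0.15: (35.0, 2.5), 0.20: (55.0, 1.0)}
pipes = [12.0, 8.0, 5.0]   # lengths of three duct segments [m]
max_loss = 60.0            # allowed total pressure loss (arbitrary units)

best = None
for choice in product(kit, repeat=len(pipes)):   # every sizing combination
    cost = sum(kit[d][0] * length for d, length in zip(choice, pipes))
    loss = sum(kit[d][1] * length for d, length in zip(choice, pipes))
    if loss <= max_loss and (best is None or cost < best[0]):
        best = (cost, choice)
```

Three segments with three diameters already give 27 candidates; with realistic kits, multiple fans and alternative topologies, exhaustive enumeration becomes infeasible and binary selection variables in an MINLP take over.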
To provide sufficient pressure to supply all floors of tall buildings with water, booster stations, normally consisting of several parallel pumps in the basement, are used. In this work, we demonstrate the potential of a decentralized pump topology with regard to energy savings in water supply systems of skyscrapers. We present an approach, based on Mixed-Integer Nonlinear Programming, that makes it possible to choose an optimal network topology and optimal pumps from a predefined construction kit comprising different pump types. Using domain-specific scaling laws and Latin Hypercube Sampling, we generate different input sets of pump types and compare their impact on the efficiency and cost of the total system design. As a realistic application example, we consider a hotel building with 325 rooms, 12 floors and up to four pressure zones.
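Latin Hypercube Sampling itself is easy to sketch: each dimension is split into n equal strata and every stratum is hit exactly once. The two pump parameters below (nominal flow and head) and their ranges are placeholders, not the quantities used in the paper.

```python
import random

def latin_hypercube(n, dims, seed=42):
    """n points in [0, 1)^dims; in every dimension, each of the n
    equal-width strata contains exactly one sample."""
    rng = random.Random(seed)
    columns = []
    for _ in range(dims):
        strata = list(range(n))
        rng.shuffle(strata)          # random pairing of strata across dims
        columns.append([(s + rng.random()) / n for s in strata])
    return list(zip(*columns))

# Scale unit samples to hypothetical pump parameters:
# nominal flow 1..10 m^3/h, nominal head 5..50 m.
unit = latin_hypercube(8, 2)
pumps = [(1 + 9 * q, 5 + 45 * h) for q, h in unit]
```

Compared with plain random sampling, this guarantees that the generated pump-type sets cover the whole parameter range of the construction kit even for small sample counts.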
Future engineers are increasingly confronted with the so-called Megatrends, the big challenges societies have to cope with. These Megatrends, such as "Silver Society", "Globalization", "Mobility" and "Female Shift", require an application-oriented perspective on Diversity, especially in the engineering field. It is therefore necessary to enable future engineers not only to look at the technical perspectives of a problem, but also to see the related questions within the societies they are developing their artefacts for. The aim of teaching engineering should be to prepare engineers for these requirements and to draw attention to the diverse needs in a globalized world.
Bringing together technical knowledge and social competences that go beyond a mere training of the so-called "soft skills" is a new approach followed at RWTH Aachen University, one of the leading technical universities in Germany. RWTH Aachen University has established the bridging professorship "Gender and Diversity in Engineering" (GDI), which educates engineers with an interdisciplinary approach to expand engineering limits. Within a sustainable teaching concept, the research group under the leadership of Prof. Carmen Leicht-Scholten has developed an approach that imparts application-specific Gender and Diversity expertise to engineers. In workshops, students gain theoretical knowledge about Gender and Diversity and learn how to transfer this knowledge to their field of study and later work. To substantiate this, the course participants have to solve case studies from real life. The cases, which are developed in collaboration with non-profit organizations and private-sector enterprises, confront the students with challenges inspired by professional life. Evaluation shows the success of this approach as well as an increasing demand for such teaching formats.
This paper reports the first microbial biosensor for rapid and cost-effective determination of the organophosphorus pesticides fenitrothion and EPN. The biosensor consists of the recombinant PNP-degrading/oxidizing bacterium Pseudomonas putida JS444, anchoring and displaying organophosphorus hydrolase (OPH) on its cell surface as the biological sensing element, and a dissolved oxygen electrode as the transducer. Surface-expressed OPH catalyzed the hydrolysis of fenitrothion and EPN to release 3-methyl-4-nitrophenol and p-nitrophenol, respectively, which were oxidized by the enzymatic machinery of Pseudomonas putida JS444 to carbon dioxide while consuming oxygen; the oxygen consumption was measured and correlated to the concentration of organophosphates. Under optimum operating conditions, the biosensor was able to measure as little as 277 ppb of fenitrothion and 1.6 ppm of EPN without interference from phenolic compounds or other commonly used pesticides such as carbamate pesticides, triazine herbicides and organophosphate pesticides without a nitrophenyl substituent. The applicability of the biosensor to lake water was also demonstrated.
The replacement of existing spillway crests or gates with labyrinth weirs is a proven techno-economic means of increasing the discharge capacity when rehabilitating existing structures. However, additional information is needed regarding the energy dissipation of such weirs: due to the folded weir crest, a three-dimensional flow field is generated, yielding more complex overflow and energy dissipation processes. In this study, CFD simulations of labyrinth weirs were conducted (1) to analyze the discharge coefficients for different discharges and compare the Cd values to literature data, and (2) to analyze and improve the energy dissipation downstream of the structure. All tests were performed for a structure at laboratory scale with a height of approx. P = 30.5 cm, a ratio of total crest length to total width of 4.7, a sidewall angle of 10° and a quarter-round weir crest shape. Tested headwater ratios were 0.089 ≤ HT/P ≤ 0.817. For the numerical simulations, FLOW-3D Hydro was employed, solving the RANS equations with the finite-volume method and RNG k-ε turbulence closure. In terms of discharge capacity, the results were compared to data from physical model tests performed at the Utah Water Research Laboratory (Utah State University), showing higher discharge coefficients from CFD than from the physical model. For upstream heads, some discrepancy in the range of ±1 cm between literature, CFD and physical model tests was identified; the differences are discussed in the manuscript. For downstream energy dissipation, variable tailwater depths were considered to analyze the formation and sweep-out of a hydraulic jump. It was found that even for high discharges, relatively low downstream Froude numbers were obtained due to the high energy dissipation caused by the three-dimensional flow between the sidewalls. The effects of additional energy dissipation devices, e.g. baffle blocks or end sills, were also analyzed. End sills were found to be ineffective; baffle blocks at different locations, however, may improve the energy dissipation downstream of labyrinth weirs.
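For orientation, the discharge capacity referred to above follows the standard weir equation commonly used in the labyrinth weir literature. The sketch below evaluates it for the study's geometry with an assumed channel width and an illustrative Cd value; neither is taken from the paper.

```python
import math

def weir_discharge(cd, crest_length, head, g=9.81):
    """Standard weir equation as commonly applied to labyrinth weirs:
    Q = (2/3) * Cd * Lc * sqrt(2 g) * HT^1.5   [m^3/s]."""
    return (2.0 / 3.0) * cd * crest_length * math.sqrt(2.0 * g) * head ** 1.5

# Geometry from the study: P = 0.305 m, Lc/W = 4.7.
# W and Cd below are assumed for illustration only.
P, W = 0.305, 1.0
Lc = 4.7 * W                # folded crest length from the Lc/W ratio
HT = 0.3 * P                # a headwater ratio HT/P = 0.3 inside the
                            # tested range 0.089..0.817
Q = weir_discharge(0.6, Lc, HT)
```

The folded crest multiplies the effective length Lc, which is why a labyrinth weir passes more flow than a linear weir of the same channel width at the same head.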
Residential and commercial buildings account for more than one-third of global energy-related greenhouse gas emissions. Integrated multi-energy systems at the district level are a promising way to reduce greenhouse gas emissions by exploiting economies of scale and synergies between energy sources. Planning district energy systems comes with many challenges in an ever-changing environment, and computational modelling has established itself as the state-of-the-art method for district energy system planning. Unfortunately, it is still cumbersome to combine standalone models to generate insights that surpass their original purpose. Ideally, planning processes could be solved using modular tools that easily incorporate the variety of competing and complementing computational models. Our contribution is a vision for a collaborative development and application platform for multi-energy system planning tools at the district level. We present challenges of district energy system planning identified in the literature and evaluate whether this platform can help to overcome them. Furthermore, we propose a toolkit that represents the core technical elements of the platform. Lastly, we discuss community management and its relevance for the success of projects with collaboration and knowledge sharing at their core.
KNX is a protocol for smart building automation, e.g. for automated heating, air conditioning, or lighting. This paper analyses and evaluates state-of-the-art KNX devices from the manufacturers Merten, Gira and Siemens with respect to security. On the one hand, it is investigated whether publicly known vulnerabilities, such as insecure storage of passwords in software, unencrypted communication, or denial-of-service attacks, can be reproduced in new devices. On the other hand, the security is analyzed in general, leading to the discovery of a previously unknown, high-risk vulnerability related to so-called BCU (authentication) keys.
There are different types of games that try to make use of the motivation of a gaming situation in learning contexts. This paper introduces the new term 'Competence Developing Game' (CDG) as an umbrella term for all games with this intention. Based on this new terminology, an assessment framework has been developed and validated within the scope of an empirical study. All types of CDGs can now be evaluated according to a defined and uniform set of assessment criteria and are thus comparable with respect to their characteristics and effectiveness.
This paper introduces a Competence Developing Game (CDG) for cybersecurity awareness training in businesses. The target audience is discussed in detail to understand their requirements, and it is explained why and how a mix of business simulation and serious game meets these stakeholder requirements. It is shown that a tablet- and touchscreen-based approach is the most suitable solution. In addition, an empirical study is briefly presented. The study was carried out to examine how the interaction system of a 3D tablet-based CDG has to be designed to be manageable for employees without gaming experience. Furthermore, it is explained which serious content is necessary for a cybersecurity awareness training CDG and how this content is wrapped into the game.
During the development of a Competence Developing Game's (CDG) story it is indispensable to understand the target audience. A CDG's story represents more than just the plot: it comprises the setting, the characters and the plot. As a toolkit to support the development of such a story, this paper introduces the User-Focused Storybuilding (UFoS) Framework for CDGs. The Framework and its utilization are explained, followed by a description of its development and derivation, including an empirical study. In addition, to simplify the use of the Framework with regard to the CDG's target audience, a new concept of Nine Psychographic Player Types is explained. This concept of player types provides an approach to handling the differences between players during the use of the UFoS Framework. Thereby, this article presents a unique approach to the development of target-group-differentiated CDG stories.
Investigation Of The Seismic Behaviour Of Infill Masonry Using Numerical Modelling Approaches
(2017)
Masonry is a widespread construction type used all over the world for different types of structures. Due to its simple and cheap construction, it is used as a non-structural as well as a structural element. In frame structures, such as reinforced concrete frames, masonry may be used as infill. While the bare frame itself is able to carry the loads during seismic events, the infilled frame cannot deform freely due to the constrained movement. This restraint results in a complex interaction between the infill and the surrounding frame, which may lead to severe damage to both. The interaction has been studied in different projects, yet effective approaches for the description of the behavior are still lacking. Experimental programs are usually quite expensive, while numerical models, once validated, offer an efficient approach for investigating the interaction under horizontal loading. In order to study the numerous parameters influencing the seismic load-bearing behavior, numerical models may be used. This contribution therefore presents a numerical approach for the simulation of masonry infill in reinforced concrete frames. Both parts, the surrounding frame as well as the infill, are represented by micro-modelling approaches to correctly take into account the different types of failure. The adopted numerical model describes the inelastic behavior of the system, as indicated by the obtained results for the overall structural response as well as the formation of damage in the infill wall. A comparison of the numerical and experimental results highlights the valuable contribution of numerical simulations to the study and design of infilled frames. As damage to the masonry infill may occur in-plane due to the interaction as well as out-of-plane due to the low vertical load, both directions of loading are investigated.
Experimental and numerical investigation on the effect of pressure on micromix hydrogen combustion
(2021)
The micromix (MMX) combustion concept is a DLN gas turbine combustion technology designed for fuels with high hydrogen content. Multiple non-premixed miniaturized flames based on the jet in cross-flow (JICF) principle are inherently safe against flashback and ensure stable operation under various operating conditions.
The objective of this paper is to investigate the influence of pressure on the micromix flame, with a focus on the flame initiation point and the NOx emissions. A numerical model based on a steady RANS approach and a complex chemistry model with the relevant reactions of the GRI 3.0 mechanism is used to predict the reactive flow and NOx emissions at various pressure conditions. Regarding the turbulence–chemistry interaction, the Laminar Flame Concept (LFC) and the Eddy Dissipation Concept (EDC) are compared. The numerical results are validated against experimental results acquired at a high-pressure test facility for industrial can-type gas turbine combustors with regard to flame initiation and NOx emissions.
The numerical approach is adequate for predicting the flame initiation point and NOx emission trends. Interestingly, the flame initiation point shifts upstream as the pressure increases, whereby the flame attachment shifts from anchoring behind a bluff body located downstream towards anchoring directly at the hydrogen jet. The LFC predicts this change and the NOx emissions more accurately than the EDC. The resulting NOx correlation with pressure is similar to that of a non-premixed combustion configuration.
This study investigates the influence of pressure on the temperature distribution of the micromix (MMX) hydrogen flame and on the NOx emissions. A steady computational fluid dynamics (CFD) analysis is performed by simulating the reactive flow with a detailed chemical reaction model. The numerical analysis is validated against experimental investigations, and a quantitative correlation is parametrized based on the numerical results. We find that the flame initiation point shifts with increasing pressure from anchoring behind a bluff body located downstream towards anchoring upstream at the hydrogen jet. The numerical NOx emission trend with respect to pressure variation is in good agreement with the experimental results. The pressure has an impact on both the residence time within the maximum temperature region and the peak temperature itself. In conclusion, the numerical model proved to be adequate for future prototype design exploration studies aimed at improving the operating range.
In the energy economy, forecasts of different time series are fundamental. In this study, a prediction for the German day-ahead spot market is created with Apache Spark and R. This is just one example of many applications in virtual power plant environments; other time series, such as intraday price processes, load processes of machines or electric vehicles, or real-time energy loads of photovoltaic systems, need to be analysed and predicted as well.
This work gives a short introduction to the project in which this study is situated and briefly describes the time series methods used for forecasting in the energy industry. Apache Spark, a powerful cluster computing technology, is used as the programming framework. Today, single time series can be predicted; the focus of this work is on developing a method for parallel forecasting, i.e. processing multiple time series simultaneously with R and Apache Spark.
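The parallel-forecasting idea, applying one forecasting routine to many independent series at once, can be sketched without Spark using only the standard library. The smoothing model and the series below are invented stand-ins for the production price and load processes.

```python
from concurrent.futures import ThreadPoolExecutor

def ses_forecast(series, alpha=0.5):
    """One-step-ahead forecast by simple exponential smoothing:
    the final smoothed level is the forecast for the next value."""
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

# Hypothetical stand-ins for day-ahead prices and PV feed-in:
series_by_name = {
    "spot_price": [40.0, 42.0, 41.0, 43.0],
    "pv_load": [5.0, 7.0, 6.0, 8.0],
}
# Each series is forecast independently, so they map cleanly onto
# parallel workers (Spark would distribute them across a cluster).
with ThreadPoolExecutor() as pool:
    forecasts = dict(zip(series_by_name,
                         pool.map(ses_forecast, series_by_name.values())))
```

The same shape, a per-series function mapped over a collection of series, is what `spark.parallelize(...).map(...)` expresses at cluster scale.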
In addition to very high safety and reliability requirements, the design of internal combustion engines (ICE) in aviation focuses on economic efficiency. The objective must be to design the aircraft powertrain optimized for a specific flight mission with respect to fuel consumption and specific engine power. Against this background, expert tools provide valuable decision-making assistance for the customer. In this paper, a mathematical calculation model for the fuel consumption of aircraft ICE is presented. This model enables the derivation of fuel consumption maps for different engine configurations. Depending on the flight conditions and based on these maps, the current and the integrated fuel consumption for freely definable flight emissions is calculated. For that purpose, an interpolation method is used, that has been optimized for accuracy and calculation time. The mission boundary conditions flight altitude and power requirement of the ICE form the basis for this calculation. The mathematical fuel consumption model is embedded in a parent program. This parent program presents the simulated fuel consumption by means of an example flight mission for a representative airplane. The focus of the work is therefore on reproducing exact consumption data for flight operations. By use of the empirical approaches according to Gagg-Farrar [1] the power and fuel consumption as a function of the flight altitude are determined. To substantiate this approaches, a 1-D ICE model based on the multi-physical simulation tool GT-Suite® has been created. This 1-D engine model offers the possibility to analyze the filling and gas change processes, the internal combustion as well as heat and friction losses for an ICE under altitude environmental conditions. Performance measurements on a dynamometer at sea level for a naturally aspirated ICE with a displacement of 1211 ccm used in an aviation aircraft has been done to validate the 1-D ICE model. 
To check the plausibility of the empirical approaches with respect to the altitude-dependent adjustment of power and fuel consumption, the efficiency chain of the 1-D engine model is analyzed. In addition, a comparison of literature and manufacturer data with the simulation results is presented.
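The altitude power lapse underlying such models can be sketched in a few lines; note that the ISA troposphere exponent and the commonly cited Gagg-Farrar constant 7.55 are textbook assumptions, not values taken from the paper:

```python
def density_ratio_isa(altitude_m):
    """ISA troposphere density ratio sigma = rho/rho0 (valid below ~11 km)."""
    return (1.0 - 2.2558e-5 * altitude_m) ** 4.2559

def gagg_farrar_power_ratio(sigma):
    """Gagg-Farrar lapse for naturally aspirated engines: P/P0 = sigma - (1 - sigma)/7.55."""
    return sigma - (1.0 - sigma) / 7.55

# roughly 0.71 at 3000 m: a naturally aspirated engine loses ~29% of sea-level power
print(gagg_farrar_power_ratio(density_ratio_isa(3000.0)))
```

Fuel flow at altitude then follows from this power ratio together with the engine's brake-specific fuel consumption map, which is where the interpolation method described above comes in.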
Scientific questions
- How can a non-stationary heat supply in a commercial vehicle be used to reduce fuel consumption?
- What potential do route and environmental information, together with predicted speed and load trajectories, offer for increasing the efficiency of an ORC system?
Methods
- Desktop-based holistic simulation model of a heavy-duty truck including an ORC system
- Prediction of mass flows, temperatures, and mixture quality (air/fuel ratio, AFR) of the exhaust gas
An array of 50 MHz quartz microbalances (QMBs) coated with a dendronized polymer was used to detect small amounts of volatile organic compounds (VOCs) in the gas phase. The results were compared to those obtained with the commonly used 10 MHz QMBs. The 50 MHz QMBs proved to be a powerful tool for the detection of VOCs in the gas phase; therefore, they represent a promising alternative to the much more delicate surface acoustic wave devices (SAWs).
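The advantage of the higher fundamental frequency follows from the Sauerbrey equation, under which mass sensitivity scales with the square of the resonance frequency. The following sketch uses standard quartz material constants (not values from the paper) to illustrate the expected 25-fold gain of a 50 MHz over a 10 MHz QMB:

```python
def sauerbrey_shift_hz(f0_hz, delta_m_g, area_cm2):
    """Sauerbrey frequency shift for a rigid mass load on an AT-cut quartz crystal."""
    rho_q = 2.648        # quartz density, g/cm^3
    mu_q = 2.947e11      # quartz shear modulus, g/(cm*s^2)
    return -2.0 * f0_hz ** 2 * delta_m_g / (area_cm2 * (rho_q * mu_q) ** 0.5)

# same mass load and electrode area, two fundamental frequencies
ratio = sauerbrey_shift_hz(50e6, 1e-9, 0.2) / sauerbrey_shift_hz(10e6, 1e-9, 0.2)
print(ratio)  # (50/10)^2 = 25: the 50 MHz device responds 25x more strongly
```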
This paper serves as an introduction to the ECTS monitoring system and its potential applications in higher education. It also emphasizes the potential for ECTS monitoring to become a proactive system, supporting students by predicting academic success and identifying groups of potential dropouts for tailored support services. The use of nearest neighbor analysis is suggested for improving data analysis and prediction accuracy.
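A minimal sketch of such a nearest neighbor analysis, assuming ECTS-earned-per-semester trajectories as features (the student data, the labels, and the distance choice below are invented for illustration and are not taken from the paper):

```python
def knn_predict(train, labels, query, k=3):
    """Majority vote among the k nearest ECTS trajectories (Euclidean distance)."""
    order = sorted(range(len(train)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(train[i], query)))
    votes = [labels[i] for i in order[:k]]
    return max(set(votes), key=votes.count)

# hypothetical ECTS earned per semester over three semesters
students = [[30, 28, 30], [29, 30, 27], [10, 5, 0], [12, 8, 4]]
outcomes = ["graduate", "graduate", "dropout", "dropout"]
print(knn_predict(students, outcomes, [11, 6, 2]))  # nearest trajectories are dropouts
```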
Proc. of the 2005 ASCE Intl. Conf. on Computing in Civil Engineering (ICCC 2005), eds. L. Soibelman and F. Pena-Mora, pp. 1-14, ASCE (CD-ROM), Cancun, Mexico, 2005
Current CAD tools are not able to support the fundamental conceptual design phase, and none of them provides consistency analyses of sketches produced by architects. To give architects greater support in the conceptual design phase, we develop a CAD tool for conceptual design and a knowledge specification tool allowing the definition of conceptually relevant knowledge. The knowledge is specific to one class of buildings and can be reused. Based on a dynamic knowledge model, different types of design rules formalize the knowledge in a graph-based realization. An expressive visual language provides a user-friendly, human-readable representation. Finally, consistency analyses enable conceptual designs to be checked against this defined knowledge. In this paper we concentrate on the knowledge specification part of our project.
In: Net-distributed Co-operation: Xth International Conference on Computing in Civil and Building Engineering, Weimar, June 02-04, 2004; proceedings / [ed. by Karl Beuke ...]. Weimar: Bauhaus-Univ. Weimar, 2004, 1st ed., pp. 1-14. ISBN 3-86068-213-X
In our project, we develop new tools for the conceptual design phase. During conceptual design, the coarse functionality and organization of a building is more important than a construction worked out in detail. We identify two roles: first, the knowledge engineer, who is responsible for knowledge definition and maintenance; second, the architect, who elaborates the conceptual design. The tool for the knowledge engineer is based on graph technology; it is specified using PROGRES and the UPGRADE framework. The tools for the architect are integrated into the industrial CAD tool ArchiCAD. Consistency between knowledge and conceptual design is ensured by the constraint checker, another extension to ArchiCAD.
In: Computer Aided Architectural Design Futures 2005, Part 4, pp. 207-216, DOI: http://dx.doi.org/10.1007/1-4020-3698-1_19
The conceptual design at the beginning of the building construction process is essential for the success of a building project. Even if some CAD tools allow elaborating conceptual sketches, they rather focus on the shape of the building elements and not on their functionality. We introduce semantic roomobjects and roomlinks, by way of example in the CAD tool ArchiCAD. These extensions provide a basis for specifying the organisation and functionality of a building and free architects from being forced to directly produce detailed constructive sketches. Furthermore, we introduce consistency analyses of the conceptual sketch, based on an ontology containing conceptually relevant knowledge specific to one class of buildings.
In: Proc. of the 11th Intl. Conf. on Computing in Civil and Building Engineering (ICCCBE-XI), ed. Hugues Rivard, pp. 1-12, ASCE (CD-ROM), Montreal, Canada, 2006
Currently, the conceptual design phase is not adequately supported by any CAD tool. Neither support while elaborating conceptual sketches nor an automatic proof of correctness with respect to effective restrictions is currently provided by any commercial tool. To enable domain experts to store common as well as their personal domain knowledge, we develop a visual language for knowledge formalization. In this paper, a major extension to the already existing concepts is introduced. The possibility to define rule dependencies extends the expressiveness of the knowledge definition language and contributes to the usability of our approach.
ITCE-2003 - 4th Joint Symposium on Information Technology in Civil Engineering, ed. Flood, I., pp. 1-12, ASCE (CD-ROM), Nashville, USA
In this paper we discuss graph-based tools to support architects during the conceptual design phase. Conceptual design precedes constructive design; the concepts used are more abstract. We develop two graph-based approaches: a top-down approach using the graph rewriting system PROGRES, and a more industrially oriented approach in which we extend the CAD system ArchiCAD. In both approaches, knowledge can be defined by a knowledge engineer: in the top-down approach in the domain model graph, in the bottom-up approach in an XML file. The defined knowledge is used to incrementally check the sketch and to inform the architect about violations of the defined knowledge. Our goal is to discover design errors as soon as possible and to support the architect in designing buildings with consideration of conceptual knowledge.
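The incremental checking of a sketch against defined knowledge can be illustrated with a small, hypothetical adjacency rule over a room graph (the rule, room types, and data structures below are invented for illustration and are not the PROGRES or ArchiCAD realization):

```python
def check_adjacency_rule(rooms, links, required):
    """Report rooms whose type demands an adjacent room of another type.

    rooms: {name: room_type}; links: set of frozensets of adjacent room names;
    required: {room_type: needed_neighbour_type} -- a toy design rule.
    """
    violations = []
    for name, rtype in rooms.items():
        if rtype in required:
            neighbours = {other for link in links if name in link
                          for other in link} - {name}
            if not any(rooms[n] == required[rtype] for n in neighbours):
                violations.append(name)
    return violations

rooms = {"bed1": "bedroom", "hall": "corridor", "bed2": "bedroom"}
links = {frozenset({"bed1", "hall"})}
print(check_adjacency_rule(rooms, links, {"bedroom": "corridor"}))  # ['bed2']
```

Run after every sketch modification, such a check tells the architect immediately which rooms violate the defined knowledge, which is the incremental feedback loop described above.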
In: Advances in intelligent computing in engineering: proceedings of the 9th International EG-ICE Workshop, Darmstadt, 01-03 August 2002 / Martina Schnellenbach-Held ... (eds.). Düsseldorf: VDI-Verl., 2002. Fortschritt-Berichte VDI, Reihe 4, Bauingenieurwesen; 180, pp. 1-35
The paper describes a novel way to support conceptual design in civil engineering. The designer uses semantic tools guaranteeing certain internal structures of the design result as well as the fulfillment of various constraints. Two different approaches and corresponding tools are discussed: (a) visually specified tools with automatic code generation to determine a design structure as well as to fix various constraints a design has to obey; these tools are also valuable for design knowledge specialists; (b) extensions of existing CAD tools to provide semantic knowledge to be used by an architect. It is sketched how these different tools can be combined in the future. The main part of the paper discusses the concepts and realization of two prototypes following the two approaches above. In particular, the paper argues that specific graphs and the specification of their structure are useful for both tool realization projects.
Applications of Graph Transformations with Industrial Relevance, Lecture Notes in Computer Science, 2004, Volume 3062/2004, pp. 434-439, DOI: http://dx.doi.org/10.1007/978-3-540-25959-6_33
This paper gives a brief overview of the tools we have developed to support conceptual design in civil engineering. Based on the UPGRADE framework, two applications, one for the knowledge engineer and another for architects, allow storing domain-specific knowledge and using this knowledge during conceptual design. Consistency analyses check the design against the defined knowledge and inform the architect if rules are violated.
The workflow of a high throughput screening setup for the rapid identification of new and improved sensor materials is presented. The polyol method was applied to prepare nanoparticular metal oxides as base materials, which were functionalised by surface doping. Using multi-electrode substrates and high throughput impedance spectroscopy (HT-IS) a wide range of materials could be screened in a short time. Applying HT-IS in search of new selective gas sensing materials a NO2-tolerant NO sensing material with reduced sensitivities towards other test gases was identified based on iridium doped zinc oxide. Analogous behaviour was observed for iridium doped indium oxide.
Phase change materials offer a way of storing excess heat and releasing it when it is needed. They can be utilized as a method to control thermal behavior without the need for additional energy. This work explores the potential of passively controlling the thermal behavior of a star tracker by infusing it with a fitting phase change material. Based on a numerical model of the star tracker's thermal behavior in ESATAN-TMS without an implemented phase change material, a fitting phase change material for selected orbits is chosen and implemented in the thermal model. The altered thermal behavior of the numerical model after the implementation is analyzed for different amounts of the chosen phase change material using an ESATAN-based subroutine developed by FH Aachen. The PCM modelling subroutine is explained in the paper ICES-2021-110. The results show that an increasing amount of phase change material increasingly damps temperature oscillations. By using an integral part structure, some of the mass increase can be compensated.
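The damping effect of a latent-heat buffer can be illustrated with a minimal lumped-node sketch (all parameters, the square-wave heat load, and the simplified melt/freeze logic are illustrative assumptions, not the ESATAN-TMS subroutine):

```python
def simulate(heat_w, steps, dt, c_p=900.0, mass=2.0, pcm_latent_j=0.0, t_melt=20.0):
    """Single-node temperature under an alternating heat load, with an optional
    PCM latent-heat buffer that absorbs/releases energy at the melt point."""
    temp, stored = 15.0, 0.0              # start below the melt point, PCM solid
    history = []
    for i in range(steps):
        # eclipse-like cycling: heating and cooling phases of equal length
        q = heat_w if (i // (steps // 4)) % 2 == 0 else -heat_w
        energy = q * dt
        if energy > 0 and temp >= t_melt and stored < pcm_latent_j:
            stored = min(pcm_latent_j, stored + energy)   # melting: absorb heat
        elif energy < 0 and temp <= t_melt and stored > 0:
            stored = max(0.0, stored + energy)            # freezing: release heat
        else:
            temp += energy / (mass * c_p)                 # sensible heating/cooling
        history.append(temp)
    return history

free = simulate(50.0, 400, 10.0)
buffered = simulate(50.0, 400, 10.0, pcm_latent_j=2e5)
print(max(free) - min(free), max(buffered) - min(buffered))
```

The PCM case clamps the node near the melt temperature during each heating phase, reproducing qualitatively the damping of temperature oscillations reported above.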
The progress in natural language processing (NLP) research over the last years offers novel business opportunities for companies, such as automated user interaction or improved data analysis. Building sophisticated NLP applications requires dealing with modern machine learning (ML) technologies, which hinders enterprises in establishing successful NLP projects. Our experience in applied NLP research projects shows that the continuous integration of research prototypes into production-like environments with quality assurance builds trust in the software and demonstrates its convenience and usefulness with regard to the business goal. We introduce STAMP 4 NLP, an iterative and incremental process model for developing NLP applications. With STAMP 4 NLP, we merge software engineering principles with best practices from data science. Instantiating our process model allows prototypes to be created efficiently by utilizing templates, conventions, and implementations, enabling developers and data scientists to focus on the business goals. Due to our iterative-incremental approach, businesses can deploy an enhanced version of the prototype to their software environment after every iteration, maximizing potential business value and trust early and avoiding the cost of successful yet never deployed experiments.
Supervised machine learning and deep learning require a large amount of labeled data, which data scientists obtain in a manual, time-consuming annotation process. To mitigate this challenge, Active Learning (AL) proposes promising data points for annotators to annotate next, instead of a sequential or random sample. This method is supposed to save annotation effort while maintaining model performance.
However, practitioners face many AL strategies for different tasks and need an empirical basis to choose between them. Surveys categorize AL strategies into taxonomies without performance indications. Presentations of novel AL strategies compare the performance to a small subset of strategies. Our contribution addresses the empirical basis by introducing a reproducible active learning evaluation (ALE) framework for the comparative evaluation of AL strategies in NLP.
The framework allows the implementation of AL strategies with low effort and a fair data-driven comparison through defining and tracking experiment parameters (e.g., initial dataset size, number of data points per query step, and the budget). ALE helps practitioners to make more informed decisions, and researchers can focus on developing new, effective AL strategies and deriving best practices for specific use cases. With best practices, practitioners can lower their annotation costs. We present a case study to illustrate how to use the framework.
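A typical strategy that such a framework would compare can be sketched as a generic uncertainty-sampling loop; the function names, the least-confidence score, and the toy probability model below are illustrative, not the ALE API:

```python
def uncertainty(prob):
    """Least-confidence score: higher means the model is less sure."""
    return 1.0 - max(prob)

def active_learning_loop(pool, predict_proba, annotate, budget, query_size):
    """Generic AL loop: query the most uncertain points until the budget is spent."""
    labeled = []
    while budget > 0 and pool:
        ranked = sorted(pool, key=lambda x: uncertainty(predict_proba(x)), reverse=True)
        batch = ranked[:min(query_size, budget)]
        for x in batch:
            labeled.append((x, annotate(x)))   # ask the (here simulated) annotator
            pool.remove(x)
        budget -= len(batch)
        # in a real experiment the model would be retrained on `labeled` here,
        # and the framework would track budget, query step, and metrics
    return labeled

# toy setup: items near 0.5 are the ones the "model" is least sure about
res = active_learning_loop([0.1, 0.45, 0.9, 0.55],
                           predict_proba=lambda x: [x, 1 - x],
                           annotate=lambda x: x > 0.5,
                           budget=2, query_size=2)
print([x for x, _ in res])  # the two most uncertain points are queried first
```

The experiment parameters named above (initial dataset size, points per query step, budget) correspond to the arguments of such a loop, which is what makes a data-driven comparison of strategies fair.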
Multi-attribute relation extraction (MARE): simplifying the application of relation extraction
(2021)
Relation extraction in natural language understanding makes innovative and encouraging novel business concepts possible and facilitates new digitalized decision-making processes. Current approaches allow the extraction of relations with a fixed number of entities as attributes. Extracting relations with an arbitrary number of attributes requires complex systems and costly relation-trigger annotations to assist these systems. We introduce multi-attribute relation extraction (MARE) as an assumption-less problem formulation with two approaches, facilitating an explicit mapping from business use cases to the data annotations. Avoiding elaborate annotation constraints simplifies the application of relation extraction approaches. The evaluation compares our models to current state-of-the-art event extraction and binary relation extraction methods. Our approaches show improvements over these methods on the extraction of general multi-attribute relations.
In recent years, the development of large pretrained language models, such as BERT and GPT, significantly improved information extraction systems on various tasks, including relation classification. State-of-the-art systems are highly accurate on scientific benchmarks. A lack of explainability is currently a complicating factor in many real-world applications. Comprehensible systems are necessary to prevent biased, counterintuitive, or harmful decisions.
We introduce semantic extents, a concept to analyze decision patterns for the relation classification task. Semantic extents are the most influential parts of texts concerning classification decisions. Our definition allows similar procedures to determine semantic extents for humans and models. We provide an annotation tool and a software framework to determine semantic extents for humans and models conveniently and reproducibly. Comparing both reveals that models tend to learn shortcut patterns from data. These patterns are hard to detect with current interpretability methods, such as input reductions. Our approach can help detect and eliminate spurious decision patterns during model development. Semantic extents can increase the reliability and security of natural language processing systems. Semantic extents are an essential step in enabling applications in critical areas like healthcare or finance. Moreover, our work opens new research directions for developing methods to explain deep learning models.
Heavy metal detection with semiconductor devices based on PLD-prepared chalcogenide glass thin films
(2007)
The field of Cognitive Robotics aims at intelligent decision making of autonomous robots and has matured considerably over the last 25 or so years. A number of high-level control languages and architectures have emerged from the field; one prominent representative is the action language GOLOG. GOLOG has been used as a high-level control language in a rather large number of applications, ranging from intelligent service robots to soccer robots. For the lower-level robot software, the Robot Operating System (ROS) has been around for more than a decade now and has developed into the standard middleware for robot applications. ROS provides a large number of packages for standard tasks in robotics like localisation, navigation, and object recognition. Interestingly, only little work within ROS has gone into the high-level control of robots. In this paper, we describe our approach to marry the GOLOG action language with ROS. In particular, we present our architecture for integrating golog++, which is based on the GOLOG dialect Readylog, with the Robot Operating System. With an example application on the Pepper service robot, we show how primitive actions can be easily mapped to the ROS ActionLib framework, and we present our control architecture in detail.
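The mapping from primitive actions to execution backends can be sketched as a small dispatch table; plain Python callables stand in for ROS ActionLib clients here, and the class, action names, and return strings are invented for illustration:

```python
class ActionExecutor:
    """Maps GOLOG-style primitive action names to execution callbacks.

    In an architecture like golog++/ROS, each primitive action would be backed
    by an ActionLib goal sent to an action server; here simple callables stand
    in for those servers so the dispatch idea is visible in isolation.
    """
    def __init__(self):
        self.backends = {}

    def register(self, action_name, callback):
        self.backends[action_name] = callback

    def execute(self, action_name, *args):
        if action_name not in self.backends:
            raise KeyError(f"no backend registered for action '{action_name}'")
        return self.backends[action_name](*args)

robot = ActionExecutor()
robot.register("goto", lambda room: f"navigating to {room}")   # would send a navigation goal
robot.register("say", lambda text: f"saying '{text}'")         # would call a speech action
print(robot.execute("goto", "kitchen"))
```

The benefit of such a table is that the high-level program only refers to action names, while the binding to concrete middleware goals can be swapped per robot platform.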