Conference Proceeding
In the context of current efforts to improve the operational efficiency and lower the overall costs of concentrating solar power (CSP) plants with prediction-based algorithms, this study investigates the quality and uncertainty of nowcasting data with regard to the implications for process predictions. DNI (direct normal irradiation) maps from an all-sky-imager-based nowcasting system are applied to a dynamic prediction model coupled with ray tracing. The results underline the need for high-resolution DNI maps in order to predict net yield and receiver outlet temperature realistically. Furthermore, based on a statistical uncertainty analysis, a correlation is developed that allows the uncertainty of the net power prediction to be estimated from the corresponding DNI forecast uncertainty. However, the study also reveals significant prediction errors and the need to further improve the accuracy with which local shading is forecast.
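The correlation mentioned above maps a DNI forecast uncertainty to a net-power prediction uncertainty. As a minimal sketch of the idea, assuming a simple linear relation and purely illustrative sample values (the study's actual correlation and coefficients are not reproduced here):

```python
import numpy as np

# Hypothetical samples: DNI forecast uncertainty (W/m^2) vs. observed
# net-power prediction uncertainty (kW) -- illustrative values only.
dni_sigma = np.array([10.0, 20.0, 40.0, 60.0, 80.0])
power_sigma = np.array([5.2, 10.1, 19.8, 30.3, 39.9])

# Fit a linear correlation sigma_P = a * sigma_DNI + b by least squares.
a, b = np.polyfit(dni_sigma, power_sigma, 1)

def predict_power_uncertainty(sigma_dni):
    """Predict the net-power uncertainty from a DNI forecast uncertainty."""
    return a * sigma_dni + b

print(round(predict_power_uncertainty(50.0), 1))
```

Once fitted, such a correlation lets an operator translate the nowcasting system's stated DNI uncertainty directly into an error band for the power prediction.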
A promising approach to reduce the system costs of molten salt solar receivers is to enable the irradiation of the absorber tubes on both sides. The star design is an innovative receiver design pursuing this approach. The unconventional design leads to new challenges in controlling the system. This paper presents a control concept for a molten salt receiver system in star design. The control parameters are optimized in a defined test cycle by minimizing a cost function. The control concept is tested in realistic cloud passage scenarios based on real weather data. During these tests, the control system showed no signs of unstable behavior; however, to perform sufficiently well in every scenario, further research and development, such as integrating Model Predictive Control (MPC), is needed. The presented concept is a starting point for this work.
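The optimization of control parameters over a defined test cycle can be sketched as follows, assuming a toy first-order plant, a quadratic tracking cost, and a coarse grid search as stand-ins for the receiver model and the optimizer used in the paper:

```python
def simulate(kp, ki, setpoint=565.0, steps=200, dt=1.0):
    """Toy first-order 'receiver outlet temperature' plant under PI control;
    returns the accumulated quadratic tracking cost over the test cycle."""
    temp, integral, cost = 20.0, 0.0, 0.0
    for _ in range(steps):
        error = setpoint - temp
        integral += error * dt
        u = kp * error + ki * integral               # PI control action
        temp += 0.05 * (u - 0.1 * (temp - 20.0)) * dt  # simple plant dynamics
        cost += error ** 2                            # quadratic cost function
    return cost

# Coarse grid search over the gain space (stand-in for the real optimizer).
best = min(((simulate(kp, ki), kp, ki)
            for kp in (0.1, 0.5, 1.0, 2.0)
            for ki in (0.0, 0.01, 0.05)),
           key=lambda t: t[0])
print("best (kp, ki):", best[1], best[2])
```

The same pattern — simulate the cycle, score it with a cost function, search the parameter space — carries over when the toy plant is replaced by a detailed receiver model and the grid search by a proper optimizer.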
The Solar-Institut Jülich (SIJ) and the companies Hilger GmbH and Heliokon GmbH from Germany have developed a small-scale cost-effective heliostat, called “micro heliostat”. Micro heliostats can be deployed in small-scale concentrated solar power (CSP) plants to concentrate the sun's radiation for electricity generation, space or domestic water heating or industrial process heat. In contrast to conventional heliostats, the special feature of a micro heliostat is that it consists of dozens of parallel-moving, interconnected, rotatable mirror facets. The mirror facets array is fixed inside a box-shaped module and is protected from weathering and wind forces by a transparent glass cover. The choice of the building materials for the box, tracking mechanism and mirrors is largely dependent on the selected production process and the intended application of the micro heliostat. Special attention was paid to the material of the tracking mechanism as this has a direct influence on the accuracy of the micro heliostat. The choice of materials for the mirror support structure and the tracking mechanism is made in favor of plastic molded parts. A qualification assessment method has been developed by the SIJ in which a 3D laser scanner is used in combination with a coordinate measuring machine (CMM). For the validation of this assessment method, a single mirror facet was scanned and the slope deviation was computed.
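The slope-deviation computation in the qualification assessment can be sketched as follows, assuming the scan yields per-point surface normals; the normal vectors below are illustrative, not SIJ measurement data:

```python
import numpy as np

def slope_deviation_mrad(measured_normals, ideal_normal):
    """Angle between each measured surface normal and the ideal normal,
    returned in milliradians (a common unit for heliostat slope error)."""
    ideal = np.asarray(ideal_normal, dtype=float)
    ideal = ideal / np.linalg.norm(ideal)
    n = np.asarray(measured_normals, dtype=float)
    n = n / np.linalg.norm(n, axis=1, keepdims=True)
    cosang = np.clip(n @ ideal, -1.0, 1.0)   # clip guards against rounding
    return np.arccos(cosang) * 1000.0        # radians -> milliradians

# Illustrative normals of three scanned points on a nominally flat facet.
normals = [[0.001, 0.0, 1.0], [0.0, 0.002, 1.0], [0.0, 0.0, 1.0]]
dev = slope_deviation_mrad(normals, [0.0, 0.0, 1.0])
print(dev.round(3))
```

Statistics of this per-point deviation (e.g. its RMS over the facet) then quantify the optical quality of the mirror under test.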
The future of industrial manufacturing and production will increasingly manifest in the form of cyber-physical production systems. Here, Digital Shadows will act as mediators between the physical and digital worlds to model and operationalize the interactions and relationships between different entities in production systems. Until now, the associated concepts have been primarily pursued and implemented from a technocentric perspective, in which human actors play a subordinate role, if they are considered at all. This paper outlines an anthropocentric approach that explicitly considers the characteristics, behavior, traits, and states of human actors in socio-technical production systems. For this purpose, we discuss the potentials as well as the expected challenges and threats of creating and using Human Digital Shadows in production.
Messenger apps like WhatsApp or Telegram are an integral part of daily communication. Besides their various positive effects, these services also extend the operating range of criminals. Open trading groups with many thousands of participants have emerged on Telegram. Law enforcement agencies monitor suspicious users in such chat rooms. This research shows that text analysis based on natural language processing facilitates this through a meaningful domain overview and detailed investigations. We crawled a corpus from such self-proclaimed black markets and annotated five attribute types: products, money, payment methods, user names, and locations. Based on each message a user sends, we extract and group these attributes to build profiles. Then, we build features to cluster the profiles. Pretrained word vectors yield better unsupervised clustering results than current state-of-the-art transformer models. The result is a semantically meaningful high-level overview of the user landscape of black-market chatrooms. Additionally, the extracted structured information serves as a foundation for further data exploration, for example, of the most active users or preferred payment methods.
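The profile-building and clustering steps can be sketched as follows, using made-up toy word vectors and profiles in place of the pretrained embeddings and real chat data, and a single k-means assignment step in place of a full clustering run:

```python
import numpy as np

# Toy 2-d "word vectors" for a few extracted attributes -- illustrative only;
# the paper uses pretrained embeddings over real black-market messages.
EMB = {
    "bitcoin": np.array([1.0, 0.1]), "paypal": np.array([0.9, 0.2]),
    "phone":   np.array([0.1, 1.0]), "laptop": np.array([0.2, 0.9]),
}

def profile_vector(tokens):
    """Represent a user profile as the mean of its attributes' word vectors."""
    return np.mean([EMB[t] for t in tokens if t in EMB], axis=0)

profiles = [["bitcoin", "paypal"], ["phone", "laptop"], ["bitcoin", "phone"]]
X = np.stack([profile_vector(p) for p in profiles])

# One k-means assignment step with fixed seed centroids
# (a full clustering run would iterate assignment and centroid updates).
centroids = X[:2]
labels = np.argmin(((X[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
print(labels)
```

The resulting cluster labels give the kind of high-level grouping of the user landscape that the abstract describes, here on two obvious "payment-focused" versus "goods-focused" toy profiles.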
This paper covers the use of the magnetic Wiegand effect to design an innovative incremental encoder. First, a theoretical design is given, followed by an estimation of the achievable accuracy and an optimization in open-loop operation.
Finally, a successful experimental verification is presented. For this purpose, a permanent magnet synchronous machine is controlled in a field-oriented manner, using the angle information of the prototype.
Digital twins enable the modeling and simulation of real-world entities (objects, processes, or systems), resulting in improvements in the associated value chains. The emerging field of quantum computing holds tremendous promise for evolving this virtualization towards Quantum (Digital) Twins (QDT) and ultimately Quantum Twins (QT). The quantum (digital) twin concept is not a contradiction in terms but instead describes a hybrid approach that can be implemented using the technologies available today by combining classical computing and digital twin concepts with quantum processing. This paper presents the status quo of research and practice on quantum (digital) twins. It also discusses their potential to create competitive advantage through real-time simulation of highly complex, interconnected entities that helps companies better address changes in their environment and differentiate their products and services.
The fourth industrial revolution presents a multitude of challenges for industries, one of which is the increased flexibility required of manufacturing lines as a result of increased consumer demand for individualised products. One solution to tackle this challenge is the digital twin, more specifically the standardised model of a digital twin, also known as the asset administration shell. The standardisation of an industry-wide communications tool is a critical step in enabling inter-company operations. This paper discusses the current state of asset administration shells, the frameworks used to host them, and the problems that need to be addressed. To tackle these issues, we propose an event-based server capable of drastically reducing response times between assets and asset administration shells, and a multi-agent system used for the orchestration and deployment of the shells in the field.
In this article we describe an Internet-of-Things sensing device with a wireless interface which is powered by the often-overlooked energy harvesting method of the Wiegand effect. The sensor can determine position, temperature or other resistively measurable quantities and can transmit the data via an ultra-low-power ultra-wideband (UWB) data transmitter. With this approach we can acquire, process, and wirelessly transmit data in a pulsed, energy-self-sufficient operation. A proof-of-concept system was built to prove the feasibility of the approach. The energy consumption of the system is analyzed, traced back in detail to the individual components, compared to the generated energy, and processed to identify further optimization options. Based on the proof of concept, an application demonstrator was developed. Finally, we point out possible use cases.
Cybersecurity of Industrial Control Systems (ICS) is an important issue, as ICS incidents may have a direct impact on the safety of people or the environment. At the same time, the awareness and knowledge about cybersecurity, particularly in the context of ICS, is alarmingly low. Industrial honeypots offer a cheap and easy-to-implement way to raise cybersecurity awareness and to educate ICS staff about typical attack patterns. When integrated into a productive network, industrial honeypots may not only reveal attackers early but may also distract them from the actually important systems of the network. By implementing multiple honeypots as a honeynet, the systems can be used to emulate or simulate a whole Industrial Control System. This paper describes a network of honeypots emulating HTTP, SNMP, S7 communication and the Modbus protocol using Conpot, IMUNES and SNAP7. The nodes mimic SIMATIC S7 programmable logic controllers (PLCs), which are widely used across the globe. The deployed honeypots' features are compared with the features of real SIMATIC S7 PLCs. Furthermore, the honeynet was made publicly available for ten days, and the occurring cyberattacks have been analyzed.
The recent amendment to the Ethernet physical layer, known as the IEEE 802.3cg specification, allows devices to be connected up to a distance of one kilometer and delivers a maximum of 60 watts of power over a twisted pair of wires. This new standard, also known as 10BASE-T1L, promises to overcome the limits of current physical layers used for field devices and bring them a step closer to Ethernet-based applications. The main advantage of 10BASE-T1L is that it can deliver power and data over the same line over a long distance, where traditional solutions (e.g., CAN, IO-Link, HART) fall short and cannot match its 10 Mbps bandwidth. Because the standard is so recent, 10BASE-T1L is not yet integrated into field devices, and it has been less than two years since silicon manufacturers released the first Ethernet PHY chips. In this paper, we present a design proposal for how field devices could be integrated into a 10BASE-T1L smart switch that allows plug-and-play connectivity for sensors and actuators and is compliant with the Industry 4.0 vision. Instead of presenting a new field-level protocol for this work, we have decided to adopt the IO-Link specification, which already includes a plug-and-play approach with features such as diagnosis and device configuration. The main objective of this work is to explore how field devices could be integrated into 10BASE-T1L Ethernet, its adaptation to a well-known protocol, and its integration with Industry 4.0 technologies.
Gamification applications are on the rise in the manufacturing sector to customize working scenarios, offer user-specific feedback, and provide personalized learning offerings. Commonly, different sensors are integrated into work environments to track workers’ actions. Game elements are selected according to the work task and users’ preferences. However, implementing gamified workplaces remains challenging, as different data sources must be established, evaluated, and connected. Developers often require information from several areas of a company to offer meaningful gamification strategies for its employees. Moreover, work environments and the associated support systems are usually not flexible enough to adapt to personal needs. Digital twins are one primary possibility to create a uniform data approach that can provide semantic information to gamification applications. Frequently, several digital twins have to interact with each other to provide information about the workplace, the manufacturing process, and the knowledge of the employees. This research aims to create an overview of existing digital twin approaches for digital support systems and presents a concept for using digital twins in gamified support and training systems. The concept is based upon the Reference Architecture Model Industry 4.0 (RAMI 4.0) and includes information about the whole life cycle of the assets. It is applied to an existing gamified training system and evaluated in the Industry 4.0 model factory using the example of a handle mounting.
Additive Manufacturing (AM) of metallic workpieces is experiencing continuously rising technological relevance and market size. Producing complex or highly strained unique workpieces is a significant field of application, making AM highly relevant for tool components. Its successful economic application requires systematic workpiece-based decisions and optimizations. Considering geometric and technological requirements as well as the necessary post-processing makes these decisions effortful and requires in-depth knowledge. As design is usually adjusted to established manufacturing, the associated technological and strategic potentials are often neglected. To embed AM in a future-proof industrial environment, software-based self-learning tools are necessary. Integrated into production planning, they enable companies to unlock the potentials of AM efficiently. This paper presents an appropriate methodology for the analysis of process-specific AM eligibility and optimization potential, complemented by concrete optimization proposals. For an integrated workpiece characterization, proven methods are extended by tooling-specific key figures.
The first stage of the approach specifies the model’s initialization. A learning set of tooling components is described using the developed key figure system. Based on this, a set of applicable rules for workpiece-specific result determination is generated through clustering and expert evaluation. Within the following application stage, the strategic orientation is quantified and workpieces of interest are described using the developed key figures. Subsequently, the retrieved information is used for automatically generating specific recommendations relying on the ruleset generated in stage one. Finally, actual experiences regarding the recommendations are gathered in stage three. Statistical learning transfers these to the generated ruleset, leading to a continuously deepening knowledge base. This process enables a steady improvement in output quality.
The Industrial Revolution 4.0 (IR4.0) era has driven many state-of-the-art technologies to be introduced, especially in the automotive industry. The rapid development of automotive industries in Europe has created a wide industry gap between the European Union (EU) and developing countries such as those in South-East Asia (SEA). Addressing this situation, FH Joanneum, Austria, together with the European partners FH Aachen, Germany, and Politecnico di Torino, Italy, is taking the initiative to close the gap utilizing the Erasmus+ United grant from the EU. A consortium was founded to engage in automotive technology transfer using the European framework to Malaysian, Indonesian, and Thai Higher Education Institutions (HEIs) as well as automotive industries. This is to be achieved by establishing Engineering Knowledge Transfer Units (EKTUs) in the respective SEA institutions, guided by the industry partners in their respective countries. These EKTUs can offer updated, innovative, and high-quality training courses to increase graduates’ employability in higher education institutions and strengthen relations between HEIs and the wider economic and social environment by addressing university-industry cooperation, which is the regional priority for Asia. It is expected that the capacity-building initiative will improve the quality of higher education and enhance its relevance for the labor market and society in the SEA partner countries. The outcome of this project would greatly benefit the partners through a strong and complementary partnership targeting the automotive industry and enhanced larger-scale international cooperation between the European and SEA partners. It would also prepare the SEA HEIs for a sustainable partnership with the automotive industry in the region as a means of income generation in the future.
The development of prototype applications with sensors and actuators in the automation industry requires tools that are manufacturer-independent and flexible enough to be modified or extended for any specific requirements. Currently, developing prototypes with industrial sensors and actuators is not straightforward. First of all, the exchange of information depends on the industrial protocol that these devices use. Second, a specific configuration and installation is required depending on the hardware that is used, such as automation controllers or industrial gateways. This means that the development for a specific industrial protocol depends highly on the hardware and the software that vendors provide. In this work we propose a rapid-prototyping framework based on Arduino to solve this problem. For this project we have focused on the IO-Link protocol. The framework consists of an Arduino shield that acts as the physical layer and software that implements the IO-Link master protocol. The main advantage of such a framework is that an application with industrial devices can be rapid-prototyped with ease, as it is vendor-independent, open-source, and can be ported easily to other Arduino-compatible boards. In comparison, a typical approach requires proprietary hardware, is not easy to port to another system, and is closed-source.
Digital twins are seen as one of the key technologies of Industry 4.0. Although many research groups focus on digital twins and create meaningful outputs, the technology has not yet reached broad application in industry. The main reasons for this imbalance are the complexity of the topic, the lack of specialists, and the unawareness of the opportunities twins offer. The project "Digital Twin Academy" aims to overcome these barriers by focusing on three actions: building a digital twin community for discussion and exchange, offering multi-stage training for various knowledge levels, and implementing real-world use cases for deeper insights and guidance. In this work, we focus on creating a flexible learning platform that allows the user to select a training path adjusted to personal knowledge and needs. Therefore, a mix of basic and advanced modules is created and expanded by individual feedback options. The usage of personas supports the selection of the appropriate modules.
Having well-defined control strategies for fuel cells that can efficiently detect errors and take corrective action is critically important for safety in all applications, and especially so in aviation. The algorithms not only ensure operator safety by monitoring the fuel cell and connected components, but also contribute to extending the health of the fuel cell, its durability, and its safe operation over its lifetime. While sensors provide peripheral data surrounding the fuel cell, the internal states of the fuel cell cannot be directly measured. To overcome this restriction, a Kalman filter has been implemented as an internal state observer.
Other safety conditions are evaluated using real-time data from every connected sensor, and corrective actions are taken automatically to ensure safety. The algorithms discussed in this paper have been validated through Model-in-the-Loop (MiL) tests as well as practical validation at a dedicated test bench.
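A Kalman filter as an internal state observer can be sketched in its scalar form; the model, noise parameters, and measurement values below are illustrative and not taken from the fuel cell system described in the paper:

```python
def kalman_step(x, P, z, A=1.0, H=1.0, Q=0.01, R=0.5):
    """One predict/update cycle of a scalar Kalman filter:
    x, P  -- prior state estimate and its variance
    z     -- new (noisy) sensor measurement
    A, H  -- state transition and measurement models
    Q, R  -- process and measurement noise variances."""
    x_pred = A * x                            # predict the state
    P_pred = A * P * A + Q                    # predict the variance
    K = P_pred * H / (H * P_pred * H + R)     # Kalman gain
    x_new = x_pred + K * (z - H * x_pred)     # correct with the measurement
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

# Fuse a few noisy readings around a true (unobservable) value of 1.0.
x, P = 0.0, 1.0
for z in [0.9, 1.1, 1.0, 0.95, 1.05]:
    x, P = kalman_step(x, P, z)
print(round(x, 2))
```

The estimate converges toward the underlying state while the variance `P` shrinks, which is exactly the property that makes the filter usable as an observer for states that cannot be measured directly.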
Quantitative evaluation of health management designs for fuel cell systems in transport vehicles
(2022)
Focusing on transport vehicles, mainly with regard to aviation applications, this paper presents a compilation and subsequent quantitative evaluation of methods aimed at building an optimal integrated health management solution for fuel cell systems. The methods are divided into two main types and compiled in a related scheme. Furthermore, the different methods are analysed and evaluated based on parameters specific to the aviation context of this study. Finally, the most suitable method for use in fuel cell health management systems is identified, and its performance and suitability are quantified.
Open Data implies the free accessibility, availability, and reusability of data sets. Although high-quality data sets are publicly available, access to them and transparency about their formats are not always given. This diminishes the optimal exploitation of their value-creation potential, despite the prevailing consensus about their opportunities: Open Data enables the advancement of compliance topics such as transparency and accountability, up to the promotion of innovation. Using Open Data requires courage and a joint effort by various actors and industries. In this contribution, based on the Design Science approach, an Open Data Capability Map and, building on it, a data architecture for Open Data in the aviation industry are developed using an example.
Inference for high-dimensional data and inference for functional data are two topics which are discussed frequently in the current statistical literature. A possibility to include both topics in a single approach is to work on a very general space for the underlying observations, such as a separable Hilbert space. We propose a general method for consistent hypothesis testing on the basis of random variables with values in separable Hilbert spaces. We avoid concerns with the curse of dimensionality due to a projection idea. We apply well-known test statistics from nonparametric inference to the projected data and integrate over all projections from a specific set and with respect to suitable probability measures. In contrast to classical methods, which are applicable to real-valued random variables or random vectors of dimension lower than the sample size, the tests can be applied to random vectors of dimension larger than the sample size or even to functional and high-dimensional data. In general, resampling procedures such as the bootstrap or permutation are suitable to determine critical values. The idea can be extended to the case of incomplete observations. Moreover, we develop an efficient algorithm for implementing the method. Examples are given for testing goodness-of-fit in a one-sample situation in [1] and for testing marginal homogeneity on the basis of a paired sample in [2]. Here, the test statistics in use can be seen as generalizations of the well-known Cramér–von Mises test statistics in the one-sample and two-sample cases. The treatment of other testing problems is possible as well. By using the theory of U-statistics, for instance, asymptotic null distributions of the test statistics are obtained as the sample size tends to infinity. Standard continuity assumptions ensure that the tests are asymptotically exact under the null hypothesis and detect any alternative in the limit.
Simulation studies demonstrate the size and power of the tests in the finite-sample case, confirm the theoretical findings, and are used for comparison with competing procedures. A possible application of the general approach is inference for stock market returns, also at high data frequencies. In the field of empirical finance, statistical inference on stock market prices usually takes place on the basis of the related log-returns as data. In the classical models for stock prices, i.e., the exponential Lévy model, the Black-Scholes model, and the Merton model, properties such as independence and stationarity of the increments ensure an independent and identically distributed structure of the data. Specific trends during certain periods of the stock price processes can cause complications in this regard. In fact, our approach can compensate for those effects by treating the log-returns as random vectors or even as functional data.
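The projection idea can be sketched for the two-sample case: project both high-dimensional samples onto random unit vectors, apply a univariate Cramér–von Mises type statistic, and average over projections. The permutation step for critical values is omitted for brevity, and all dimensions, sample sizes, and the number of projections are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def cvm_two_sample(x, y):
    """Two-sample Cramér-von Mises type statistic for 1-d samples:
    sum of squared differences of the empirical CDFs at the pooled points."""
    z = np.concatenate([x, y])
    Fx = np.searchsorted(np.sort(x), z, side="right") / len(x)
    Fy = np.searchsorted(np.sort(y), z, side="right") / len(y)
    return np.sum((Fx - Fy) ** 2)

def projected_stat(X, Y, n_proj=20):
    """Average the univariate statistic over random unit-vector projections."""
    dirs = rng.normal(size=(n_proj, X.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    return np.mean([cvm_two_sample(X @ u, Y @ u) for u in dirs])

X = rng.normal(0.0, 1.0, size=(30, 50))   # dimension 50 > sample size 30
Y = rng.normal(1.0, 1.0, size=(30, 50))   # mean-shifted alternative
s_alt = projected_stat(X, Y)
s_null = projected_stat(X, X)             # identical samples: statistic is 0
print(s_alt > s_null)
```

Note how the dimension (50) exceeds the sample size (30), the regime the abstract highlights; each projection reduces the comparison to an ordinary univariate test.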
Useful market simulations are key to the evaluation of different market designs consisting of multiple market mechanisms or rules. Yet no simulation framework designed with a comparison of different market mechanisms in mind has been found. The need to create an objective view of different sets of market rules while investigating meaningful agent strategies leads to the conclusion that such a simulation framework is needed to advance research on this subject. An overview of different existing market simulation models is given, which also shows the research gap and the missing capabilities of those systems. Finally, a methodology is outlined for how a novel market simulation that can answer the research questions can be developed.
Kawasaki Heavy Industries, Ltd. (KHI), Aachen University of Applied Sciences, and B&B-AGEMA GmbH have investigated the potential of low NOx micro-mix (MMX) hydrogen combustion and its application to an industrial gas turbine combustor. Engine demonstration tests of a MMX combustor for the M1A-17 gas turbine with a co-generation system were conducted in the hydrogen-fueled power generation plant in Kobe City, Japan.
This paper presents the results of the commissioning test and the combined heat and power (CHP) supply demonstration. In the commissioning test, grid interconnection, loading tests and load cut-off tests were successfully conducted. All measurement results satisfied the Japanese environmental regulation values. Dust and soot as well as SOx were not detected. The NOx emissions were below 84 ppmv at 15 % O2. The noise level at the site boundary was below 60 dB. The vibration at the site boundary was below 45 dB.
During the combined heat and power supply demonstration, heat and power were supplied to neighboring public facilities with the MMX combustion technology and 100 % hydrogen fuel. The electric power output reached 1800 kW, at which the NOx emissions were 72 ppmv at 15 % O2 and 60 % RH. Combustion instabilities were not observed. The gas turbine efficiency was improved by about 1 % compared to a non-premixed type combustor with water injection as the NOx reduction method. During a total equivalent operation time of 1040 hours, all combustor parts, the M1A-17 gas turbine as such, and the co-generation system were without any issues.
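The NOx figures above are referenced to 15 % O2, a standard normalization for gas turbine emissions. The commonly used O2 correction formula can be sketched as follows; the input values are illustrative, not measurements from the demonstration:

```python
def nox_at_reference_o2(nox_measured_ppmv, o2_measured_pct, o2_ref_pct=15.0):
    """Correct a dry-basis NOx reading to a reference O2 concentration:
    NOx_ref = NOx_meas * (20.9 - O2_ref) / (20.9 - O2_meas),
    where 20.9 % is the O2 content of ambient air."""
    return nox_measured_ppmv * (20.9 - o2_ref_pct) / (20.9 - o2_measured_pct)

# Illustrative: a 50 ppmv reading taken at 17 % O2, normalized to 15 % O2.
print(round(nox_at_reference_o2(50.0, 17.0), 1))
```

The correction removes the dilution effect of excess air, so emission values measured at different loads and air flows become comparable against a single limit.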
Industrial facilities must be thoroughly designed to withstand seismic actions, as they exhibit an increased loss potential due to the possibly wide-ranging damage consequences and the valuable process engineering equipment. Past earthquakes showed the social and political consequences of seismic damage to industrial facilities and sensitized the population and politicians worldwide to the possible hazard emanating from industrial facilities. However, a holistic approach for the seismic design of industrial facilities can presently be found neither in national nor in international standards. The introduction of EN 1998-4 of the new generation of Eurocode 8 will improve the normative situation with specific seismic design rules for silos, tanks, pipelines, and secondary process components. The article presents essential aspects of the seismic design of industrial facilities based on the new generation of Eurocode 8, using the example of tank structures and secondary process components. The interaction effects of the process components with the primary structure are illustrated by means of the experimental results of a shaking table test of a three-story moment-resisting steel frame with different process components. Finally, an integrated approach of digital plant models based on building information modelling (BIM) and structural health monitoring (SHM) is presented, which provides not only a reliable decision-making basis for operation, maintenance, and repair but also an excellent tool for the rapid assessment of seismic damage.
Recent earthquakes showed that low-rise URM buildings following code-compliant seismic design and detailing generally behaved very well, without substantial damage. Although advances in simulation tools make nonlinear calculation methods more readily accessible to designers, linear analyses will still be the standard design method for years to come. The present paper aims to improve the linear seismic design method by providing a proper definition of the q-factor of URM buildings. Values of q-factors are derived for low-rise URM buildings with rigid diaphragms, with reference to modern structural configurations realized in low-to-moderate seismic areas of Italy and Germany. The behaviour factor components for deformation and energy dissipation capacity and for overstrength due to the redistribution of forces are derived by means of pushover analyses. As a result of the investigations, rationally based values of the behaviour factor q in the range of 2.0 to 3.0 are proposed for use in linear analyses.
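The composition of the behaviour factor from its components can be sketched as a simple product of a deformation/energy-dissipation part and an overstrength part; the component values below are illustrative only, not results from the pushover analyses of the paper:

```python
def behaviour_factor(q_ductility, q_overstrength):
    """Compose the behaviour factor q from a ductility/energy-dissipation
    component and an overstrength component (multiplicative composition)."""
    return q_ductility * q_overstrength

# Illustrative component values yielding a q inside the proposed 2.0-3.0 range.
q = behaviour_factor(1.6, 1.5)
print(round(q, 2))
```

In a linear analysis, the design seismic forces are then divided by this q, which is how the deformation capacity and overstrength observed in pushover analyses enter the simplified design method.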
A Gamified Information System (GIS) implements game concepts and elements, such as affordances and game design principles, to motivate people. Based on the idea of developing a GIS to increase the motivation of software developers to perform software quality tasks, the research work at hand aims at investigating relevant requirements from that target group. Therefore, 14 interviews with software development experts were conducted and analyzed. According to the results, software developers prefer the affordances of points and narrative storytelling in a multiplayer, round-based setting. Furthermore, six design principles for the development of a GIS are derived.
Many of today’s factors make software development more and more complex, such as time pressure, new technologies, IT security risks, et cetera. Thus, a good preparation of current as well as future software developers in terms of a good software engineering education becomes progressively important. As current research shows, Competence Developing Games (CDGs) and Serious Games can offer a potential solution.
This paper identifies the requirements necessary for CDGs to be conducive to learning in general, and in software engineering (SE) education in particular. For this purpose, the current state of research was summarized in a literature review. Afterwards, some of the identified requirements, as well as some additional requirements, were evaluated in a survey in terms of their subjective relevance.
In the Laser Powder Bed Fusion (LPBF) process, parts are built out of metal powder material by exposure to a laser beam. During handling operations of the powder material, several influencing factors can affect the properties of the powder material and therefore directly influence its processability during manufacturing. Contamination by moisture due to handling operations is one of the most critical aspects of powder quality. In order to investigate the influences of powder humidity on LPBF processing, four materials (AlSi10Mg, Ti6Al4V, 316L and IN718) are chosen for this study. The powder material is artificially humidified, subsequently characterized, manufactured into cubic samples in a miniaturized process chamber, and analyzed for relative density. The results indicate that the processability and reproducibility of parts made of AlSi10Mg and Ti6Al4V are susceptible to humidity, while IN718 and 316L are barely influenced.
Process mining is gaining more and more attention, even outside large enterprises, and can be a major benefit for small and medium-sized enterprises (SMEs) seeking competitive advantages. Applying process mining is challenging, particularly for SMEs, because they have fewer resources and less process maturity. So far, IS researchers have analyzed process mining challenges with a focus on larger companies. This paper investigates the application of process mining by means of a case study and sheds light on the particular challenges of an IT SME. The results reveal 13 SME process mining challenges and seven guidelines to address them. In this way, the paper contributes to the understanding of process mining application in SMEs and shows similarities and differences to larger companies.
Electric flight has the potential for a more sustainable and energy-saving way of aviation compared to fossil fuel aviation. The electric motor can be used as a generator in flight to regenerate energy during descent. Three different approaches to regenerating with electric propeller powertrains are proposed in this paper. The powertrain is to be set up in a wind tunnel to determine the propeller efficiency in both working modes as well as the noise emissions. Furthermore, the planned flight tests are discussed. In preparation for these tests, a yaw stability analysis is performed, with the result that the aeroplane remains controllable in normal flight and in the most critical failure case. The paper shows the potential for in-flight regeneration and addresses the research gaps in the dual role of electric powertrains for propulsion and regeneration in general aviation aircraft.
When confining pressure is low or absent, extensional fractures are typical, occurring on unloaded planes in rock. These “paradox” fractures can be explained by a phenomenological extension strain failure criterion. In the past, a simple empirical criterion for fracture initiation in brittle rock was developed, but it makes unrealistic strength predictions in biaxial compression and tension. A new extension strain criterion overcomes this limitation by adding a weighted principal shear component. The weight is chosen such that the enriched extension strain criterion represents the same failure surface as the Mohr–Coulomb (MC) criterion. Thus, the MC criterion has been derived as an extension strain criterion predicting failure modes that are unexpected in the conventional understanding of the failure of cohesive-frictional materials. In progressive damage of rock, the most likely fracture direction is orthogonal to the maximum extension strain. The enriched extension strain criterion is proposed as a threshold surface for crack initiation (CI) and crack damage (CD) and as a failure surface at peak strength (P). Examples show that the enriched extension strain criterion predicts much lower volumes of damaged rock mass than the simple extension strain criterion.
A capacitive electrolyte-insulator-semiconductor (EISCAP) biosensor modified with Tobacco mosaic virus (TMV) particles for the detection of acetoin is presented. The enzyme acetoin reductase (AR) was immobilized on the surface of the EISCAP using TMV particles as nanoscaffolds. The study focused on the optimization of the TMV-assisted AR immobilization on the Ta2O5-gate EISCAP surface. The TMV-assisted acetoin EISCAPs were electrochemically characterized by means of leakage-current, capacitance-voltage, and constant-capacitance measurements. The TMV-modified transducer surface was studied via scanning electron microscopy.
Digital start-ups are perceived as an engine for innovation and a job promoter. While success factors for non-IT start-ups have already been extensively researched, this study sheds light on digital entrepreneurs, whose business model relies primarily on services based on digital technologies. Applying the Grounded Theory method, we identify relevant environmental success factors for digital entrepreneurs. The study’s research contribution is threefold. First, we provide 16 relevant and less relevant environmental success factors, which enables a comparison with previously identified factors. We found that several previously reported environmental success factors, such as accessibility to transportation or the availability of land and facilities, are less relevant for digital entrepreneurs. Second, we derive and discuss hypotheses for the influence of these factors on digital start-up success. Third, we present a theoretical model that lays the foundation for explaining the environmental influence on digital entrepreneurship success.
In order to reduce the energy consumption of homes, it is important to make transparent which devices consume how much energy. However, power consumption is often only monitored in aggregate at the house energy meter. Disaggregating this power consumption into the contributions of individual devices can be achieved using Machine Learning. Our work aims at making state-of-the-art disaggregation algorithms accessible to users of the open-source home automation platform Home Assistant.
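The idea of disaggregation can be illustrated with the classic combinatorial-optimization baseline: given a library of known per-device power draws, find the combination of device states that best explains the meter reading. This is a didactic sketch, not the algorithms integrated into Home Assistant; the device names and wattages are invented.

```python
from itertools import product

# Hypothetical appliance model: each device has a small set of known
# steady-state power draws in watts (names and values are illustrative).
DEVICES = {
    "fridge": [0, 90],
    "kettle": [0, 2000],
    "tv": [0, 120],
}

def disaggregate(total_watts):
    """Combinatorial-optimization NILM baseline: pick the combination of
    device states whose summed draw best matches the meter reading."""
    best, best_err = None, float("inf")
    for states in product(*DEVICES.values()):
        err = abs(total_watts - sum(states))
        if err < best_err:
            best, best_err = dict(zip(DEVICES, states)), err
    return best
```

For example, a reading of 2090 W is best explained by the fridge and the kettle running; real disaggregators additionally exploit the temporal structure of the signal rather than single readings.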
Despite the challenges of pioneering molten salt tower (MST) plants, MST remains the leading technology in central receiver power plants today, thanks to cost-effective storage integration and high cost-reduction potential. The limited controllability under volatile solar conditions can cause significant losses, which are difficult to estimate without comprehensive modeling [1]. This paper presents a methodology to generate predictions of the dynamic behavior of the receiver system as part of an operating assistance system (OAS). Based on this, it delivers proposals if and when to drain and refill the receiver during a cloudy period in order to maximize the net yield, and quantifies the amount of net electricity gained by this. After prior analysis with a detailed dynamic two-phase model of the entire receiver system, two different reduced modeling approaches were developed and implemented in the OAS. A tailored decision algorithm utilizes both models to deliver the desired predictions efficiently and with appropriate accuracy.
The complex questions of today for a world of tomorrow are characterized by their global impact. Solutions must therefore not only be sustainable in the sense of the three pillars of sustainability (economic, environmental, and social) but must also function globally. This goes hand in hand with the need for intercultural acceptance of developed services and products. To achieve this, engineers, as the problem solvers of the future, must be able to work in intercultural teams on appropriate solutions and be sensitive to intercultural perspectives. To equip the engineers of the future with so-called future skills, teaching concepts are needed in which students can acquire these methods and competencies in application-oriented formats. The presented course "Applying Design Thinking - Sustainability, Innovation and Interculturality" was developed to teach future skills from the competency areas Digital Key Competencies, Classical Competencies and Transformative Competencies. The CDIO Standard 3.0, in particular standards 5, 6, 7 and 8, was used as a guideline. The course aims to prepare engineering students from different disciplines and cultures for their future work in an international environment by combining a digital teaching format with an interdisciplinary, transdisciplinary and intercultural setting for solving sustainability challenges. The innovative element lies in the digital application of design thinking and the inclusion of intercultural as well as trans- and interdisciplinary perspectives in innovation development processes. In this paper, the concept of the course will be presented in detail and the particularities of a digital implementation of design thinking will be addressed. Subsequently, the potentials and challenges will be reflected upon, and practical advice for integrating design thinking in engineering education will be given.
Residential and commercial buildings account for more than one-third of global energy-related greenhouse gas emissions. Integrated multi-energy systems at the district level are a promising way to reduce greenhouse gas emissions by exploiting economies of scale and synergies between energy sources. Planning district energy systems comes with many challenges in an ever-changing environment. Computational modelling has established itself as the state-of-the-art method for district energy system planning. Unfortunately, it is still cumbersome to combine standalone models to generate insights that surpass their original purpose. Ideally, planning processes could be solved by using modular tools that easily incorporate the variety of competing and complementing computational models. Our contribution is a vision for a collaborative development and application platform for multi-energy system planning tools at the district level. We present challenges of district energy system planning identified in the literature and evaluate whether this platform can help to overcome these challenges. Further, we propose a toolkit that represents the core technical elements of the platform. Lastly, we discuss community management and its relevance for the success of projects with collaboration and knowledge sharing at their core.
In times of social climate protection movements, such as Fridays for Future, the priorities of society, industry and higher education are currently changing, and the consideration of sustainability challenges is increasing. In the context of sustainable development, social skills are crucial to achieving the United Nations Sustainable Development Goals (SDGs). In particular, the impact that educational activities have on people, communities and society is therefore coming to the fore. Research has shown that people with high levels of social competence are better able to manage stressful situations, maintain positive relationships and communicate effectively. High social competence is also associated with better academic performance and career success. However, especially in engineering programs, the social pillar is underrepresented compared to the environmental and economic pillars.
In response to these changes, higher education institutions should be more aware of their social impact - from individual forms of teaching to entire modules and degree programs. To identify potential for improvement and derive changes for further development, we present an initial framework for social impact measurement by transferring already established approaches from the business sector to the education sector. To demonstrate the applicability, we measure the key competencies taught in undergraduate engineering programs in Germany.
The aim is to prepare the students for success in the modern world of work and their future contribution to sustainable development. Additionally, the university can include the results in its sustainability report. Our method can be applied to different teaching methods and enables their comparison.
Clinical assessment of newly developed sensors is important for ensuring their validity. Comparing recordings of emerging electrocardiography (ECG) systems to a reference ECG system requires accurate synchronization of data from both devices. Current methods can be inefficient and prone to errors. To address this issue, three algorithms are presented to synchronize two ECG time series from different recording systems: Binned R-peak Correlation, R-R Interval Correlation, and Average R-peak Distance. These algorithms reduce ECG data to their cyclic features, mitigating inefficiencies and minimizing discrepancies between different recording systems. We evaluate the performance of these algorithms using high-quality data and then assess their robustness after manipulating the R-peaks. Our results show that R-R Interval Correlation was the most efficient, whereas the Average R-peak Distance and Binned R-peak Correlation were more robust against noisy data.
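The R-R Interval Correlation idea can be pictured as follows: reduce each recording to its sequence of R-R intervals, then slide one series against the other and keep the beat offset with the highest correlation. This is an illustrative reading of the approach, not the authors' implementation; all function names are made up.

```python
import statistics

def rr_intervals(r_peaks):
    """Convert a list of R-peak timestamps into successive R-R intervals."""
    return [b - a for a, b in zip(r_peaks, r_peaks[1:])]

def pearson(x, y):
    """Plain Pearson correlation of two equal-length sequences."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den if den else 0.0

def best_lag(ref_rr, test_rr, max_lag=5):
    """Slide the test R-R series against a fixed reference window and
    return the beat offset with the highest Pearson correlation."""
    n = min(len(ref_rr), len(test_rr)) - 2 * max_lag
    window = ref_rr[max_lag:max_lag + n]
    return max(range(-max_lag, max_lag + 1),
               key=lambda lag: pearson(window,
                                       test_rr[max_lag + lag:max_lag + lag + n]))
```

Because only interval patterns are compared, the method is insensitive to amplitude differences between the two recording systems, which is the stated motivation for reducing the ECG to its cyclic features.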
Due to the decarbonization of the energy sector, the electric distribution grids are undergoing a major transformation, which is expected to increase the load on the operating resources due to new electrical loads and distributed energy resources. Therefore, grid operators need to gradually move to active grid management in order to ensure safe and reliable grid operation. However, this requires knowledge of key grid variables, such as node voltages, which is why the mass integration of measurement technology (smart meters) is necessary. Another problem is that a large part of the topology of the distribution grids is not sufficiently digitized and models are partly faulty, which means that active grid operation management today has to be carried out largely blindly. It is therefore part of current research to develop methods for determining unknown grid topologies based on measurement data. In this paper, different clustering algorithms are presented and their performance in detecting the topology of low-voltage grids is compared. Furthermore, the influence of measurement uncertainties is investigated in the form of a sensitivity analysis.
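The underlying intuition, meters on the same feeder see strongly correlated voltage profiles, can be sketched with a simple correlation-threshold clustering. This is a minimal stand-in for the clustering algorithms the paper compares, not one of them; real measurement data additionally requires noise handling and a tuned threshold.

```python
import statistics

def pearson(x, y):
    """Pearson correlation of two equal-length measurement series."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den if den else 0.0

def cluster_meters(series, threshold=0.9):
    """Group meters whose voltage time series correlate above a threshold,
    using union-find over the correlation graph."""
    parent = {m: m for m in series}
    def find(m):
        while parent[m] != m:
            parent[m] = parent[parent[m]]  # path compression
            m = parent[m]
        return m
    meters = list(series)
    for i, a in enumerate(meters):
        for b in meters[i + 1:]:
            if pearson(series[a], series[b]) >= threshold:
                parent[find(a)] = find(b)
    groups = {}
    for m in meters:
        groups.setdefault(find(m), []).append(m)
    return sorted(sorted(g) for g in groups.values())
```

Each resulting group is a candidate feeder section; the sensitivity analysis in the paper then asks how robust such assignments are to smart-meter measurement uncertainty.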
Modern implementations of driver assistance systems are evolving from pure driver assistance to independently acting automation systems. Still, these systems do not cover the full vehicle usage range, also called the operational design domain, which requires the human driver as a fall-back mechanism. Transitions of control and potential minimum-risk manoeuvres are current research topics and will bridge the gap until fully autonomous vehicles are available. The authors showed in a demonstration that transition-of-control mechanisms can be further improved by the use of communication technology. Receiving incident type and position information via standardised vehicle-to-everything (V2X) messages can improve driver safety and comfort. The connected and automated vehicle’s software framework can use this information to plan areas where the driver should take back control, initiating a transition of control that can be followed by a minimum-risk manoeuvre in case of an unresponsive driver. This transition of control has been implemented in a test vehicle and was presented to the public during IEEE IV2022 (IEEE Intelligent Vehicle Symposium) in Aachen, Germany.
Messenger apps like WhatsApp and Telegram are frequently used for everyday communication, but they can also be utilized as a platform for illegal activity. Telegram allows public groups with up to 200,000 participants. Criminals use these public groups for trading illegal commodities and services, which becomes a concern for law enforcement agencies, who manually monitor suspicious activity in these chat rooms. This research demonstrates how natural language processing (NLP) can assist in analyzing these chat rooms, providing an explorative overview of the domain and facilitating purposeful analyses of user behavior. We provide a publicly available corpus of text messages annotated with entities and relations from four self-proclaimed black market chat rooms. Our pipeline approach aggregates the extracted product attributes from user messages into profiles and uses these, together with the products sold, as features for clustering. The extracted structured information is the foundation for further data exploration, such as identifying the top vendors or fine-granular price analyses. Our evaluation shows that pretrained word vectors perform better for unsupervised clustering than state-of-the-art transformer models, while the latter are still superior for sequence labeling.
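The aggregation step of such a pipeline, collecting extracted product attributes into per-vendor profiles, can be sketched as follows. The (user, product, price) triple schema and function names are assumptions for illustration, not the paper's actual data model.

```python
from collections import defaultdict

def build_vendor_profiles(extractions):
    """Aggregate extracted (user, product, price) triples into per-vendor
    profiles; price may be None when no price was extracted."""
    raw = defaultdict(lambda: {"products": set(), "prices": []})
    for user, product, price in extractions:
        raw[user]["products"].add(product)
        if price is not None:
            raw[user]["prices"].append(price)
    return {user: {"products": sorted(p["products"]),
                   "avg_price": (sum(p["prices"]) / len(p["prices"])
                                 if p["prices"] else None)}
            for user, p in raw.items()}
```

Profiles like these, or vector representations of them, are what the clustering and top-vendor analyses described above would operate on.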
Supervised machine learning and deep learning require a large amount of labeled data, which data scientists obtain in a manual, time-consuming annotation process. To mitigate this challenge, Active Learning (AL) proposes promising data points for annotators to label next, instead of a sequential or random sample. This method is supposed to save annotation effort while maintaining model performance.
However, practitioners face many AL strategies for different tasks and need an empirical basis to choose between them. Surveys categorize AL strategies into taxonomies without performance indications. Presentations of novel AL strategies compare the performance to a small subset of strategies. Our contribution addresses the empirical basis by introducing a reproducible active learning evaluation (ALE) framework for the comparative evaluation of AL strategies in NLP.
The framework allows the implementation of AL strategies with low effort and a fair data-driven comparison through defining and tracking experiment parameters (e.g., initial dataset size, number of data points per query step, and the budget). ALE helps practitioners to make more informed decisions, and researchers can focus on developing new, effective AL strategies and deriving best practices for specific use cases. With best practices, practitioners can lower their annotation costs. We present a case study to illustrate how to use the framework.
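A generic pool-based AL loop with least-confidence sampling illustrates the kind of strategy, and the experiment parameters (seed size, query size, budget), that such a framework would track. The function names and interfaces below are illustrative, not the ALE API.

```python
import random

def uncertainty_sampling_loop(pool, oracle, train, predict_proba,
                              seed_size=10, query_size=5, budget=30):
    """Pool-based AL loop: seed with a random sample, then repeatedly
    query the points whose top class probability is lowest."""
    random.seed(0)  # fixed seed for the reproducibility the framework demands
    labeled = {i: oracle(i) for i in random.sample(sorted(pool), seed_size)}
    unlabeled = set(pool) - set(labeled)
    while len(labeled) - seed_size < budget and unlabeled:
        model = train(labeled)
        # least confidence: lowest maximum class probability first
        ranked = sorted(unlabeled, key=lambda i: max(predict_proba(model, i)))
        for i in ranked[:query_size]:
            labeled[i] = oracle(i)
            unlabeled.discard(i)
    return labeled
```

Comparing strategies then amounts to swapping the ranking function while holding `seed_size`, `query_size`, and `budget` fixed, which is exactly the kind of controlled comparison the framework is built for.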
Extracting workflow nets from textual descriptions can be used to simplify guidelines or formalize textual descriptions of formal processes like business processes and algorithms. The task of manually extracting processes, however, requires domain expertise and effort. While automatic process model extraction is desirable, annotating texts with formalized process models is expensive. Therefore, there are only a few machine-learning-based extraction approaches. Rule-based approaches, in turn, require domain specificity to work well and can rarely distinguish relevant and irrelevant information in textual descriptions. In this paper, we present GUIDO, a hybrid approach to the process model extraction task that first classifies sentences regarding their relevance to the process model, using a BERT-based sentence classifier, and second extracts a process model from the sentences classified as relevant, using dependency parsing. The presented approach achieves significantly better results than a pure rule-based approach. GUIDO achieves an average behavioral similarity score of 0.93. Still, in comparison to purely machine-learning-based approaches, the annotation costs stay low.
In recent years, the development of large pretrained language models, such as BERT and GPT, significantly improved information extraction systems on various tasks, including relation classification. State-of-the-art systems are highly accurate on scientific benchmarks. A lack of explainability is currently a complicating factor in many real-world applications. Comprehensible systems are necessary to prevent biased, counterintuitive, or harmful decisions.
We introduce semantic extents, a concept to analyze decision patterns for the relation classification task. Semantic extents are the most influential parts of texts concerning classification decisions. Our definition allows similar procedures to determine semantic extents for humans and models. We provide an annotation tool and a software framework to determine semantic extents for humans and models conveniently and reproducibly. Comparing both reveals that models tend to learn shortcut patterns from data. These patterns are hard to detect with current interpretability methods, such as input reductions. Our approach can help detect and eliminate spurious decision patterns during model development. Semantic extents can thus increase the reliability and security of natural language processing systems and are an essential step toward enabling applications in critical areas like healthcare or finance. Moreover, our work opens new research directions for developing methods to explain deep learning models.
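For contrast, the input-reduction baseline mentioned above can be sketched as a greedy token-removal loop: keep deleting tokens as long as the model's prediction is unchanged. The `predict` callable and the token-list interface are illustrative, not the paper's tooling.

```python
def input_reduction(tokens, predict):
    """Greedily remove tokens while the predicted label stays the same,
    leaving an approximation of the most influential input span."""
    label = predict(tokens)
    reduced = list(tokens)
    changed = True
    while changed and len(reduced) > 1:
        changed = False
        for i in range(len(reduced)):
            candidate = reduced[:i] + reduced[i + 1:]
            if predict(candidate) == label:  # removal did not flip the label
                reduced = candidate
                changed = True
                break
    return reduced
```

Such reduced inputs are often unnaturally short and unreadable, which is one reason the paper argues for semantic extents as a more human-comparable notion of influential text.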
In addition to the technical content, modern courses at university should also teach professional skills to enhance the competencies of students for their future work. The competency-driven approach, including technical as well as professional skills, makes it necessary to find a suitable way to integrate them into the corresponding module in a scalable and flexible manner. Agile development, for example, is essential for the development of modern systems and applications and makes use of dedicated professional skills of the team members, like structured group dynamics and communication, to enable fast and reliable development. This paper presents a flexible, easy-to-integrate approach for embedding Scrum, an agile development method, into the lab of an existing module. Through the different Scrum roles, students achieve individual learning success, gain valuable insight into modern system development, and strengthen their communication and organization skills. The approach is implemented and evaluated in the module Vehicle Systems, but it can be transferred easily to other technical courses as well. The evaluation of the implementation considers feedback from all stakeholders (students, supervisors and lecturers) and monitors observations during the project lifetime.
Market abstraction of energy markets and policies - application in an agent-based modeling toolbox
(2023)
In light of emerging challenges in energy systems, markets are prone to changing dynamics and market design. Simulation models are commonly used to understand the changing dynamics of future electricity markets. However, existing market models were often created with specific use cases in mind, which limits their flexibility and usability. This can impose challenges for using a single model to compare different market designs. This paper introduces a new method of defining market designs for energy market simulations. The proposed concept makes it easy to incorporate different market designs into electricity market models by using relevant parameters derived from analyzing existing simulation tools, morphological categorization and ontologies. These parameters are then used to derive a market abstraction and integrate it into an agent-based simulation framework, allowing for a unified analysis of diverse market designs. Furthermore, we showcase the framework's usability by integrating new types of long-term contracts and over-the-counter trading. To validate this approach, two case studies are presented: a pay-as-clear market and a pay-as-bid long-term market. These examples demonstrate the capabilities of the proposed framework.
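The difference between the two case-study designs comes down to the settlement rule after merit-order matching: pay-as-clear settles every trade at the marginal accepted offer price, pay-as-bid at each offer's own price. The sketch below is didactic, not the toolbox's actual market module; the (price, volume) tuple interface is an assumption.

```python
def clear_market(bids, offers, pricing="pay_as_clear"):
    """Merit-order clearing of (price, volume) bids and offers.
    'pay_as_clear': all trades settle at the marginal accepted offer price.
    'pay_as_bid': each trade settles at the accepted offer's own price."""
    demand = sorted(bids, key=lambda b: -b[0])   # willingness to pay, highest first
    supply = sorted(offers, key=lambda o: o[0])  # cheapest generation first
    trades, clearing_price = [], None
    di = si = 0
    d_vol = demand[0][1] if demand else 0.0
    s_vol = supply[0][1] if supply else 0.0
    while di < len(demand) and si < len(supply) and demand[di][0] >= supply[si][0]:
        vol = min(d_vol, s_vol)
        trades.append((supply[si][0], vol))
        clearing_price = supply[si][0]  # marginal accepted offer so far
        d_vol -= vol
        s_vol -= vol
        if d_vol == 0:
            di += 1
            d_vol = demand[di][1] if di < len(demand) else 0.0
        if s_vol == 0:
            si += 1
            s_vol = supply[si][1] if si < len(supply) else 0.0
    if pricing == "pay_as_clear":
        return [(clearing_price, v) for _, v in trades]
    return trades  # pay-as-bid keeps each offer's own price
```

Parameterizing exactly this kind of settlement rule, rather than hard-coding it, is what the proposed market abstraction enables.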
This work proposes a hybrid algorithm combining an Artificial Neural Network (ANN) with a conventional local path planner to navigate UAVs efficiently in various unknown urban environments. The proposed method, a Hybrid Artificial Neural Network Avoidance System, is called HANNAS. The ANN analyses a video stream and classifies the current environment. This information about the current environment is used to set several control parameters of a conventional local path planner, the 3DVFH*. The local path planner then plans the path toward a specific goal point based on distance data from a depth camera. We trained and tested a state-of-the-art image segmentation algorithm, PP-LiteSeg. The proposed HANNAS method reaches a failure probability of 17%, less than half that of the baseline and around half that of an improved, bio-inspired version of the 3DVFH*. The proposed HANNAS method shows no disadvantages regarding flight time or flight distance.
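The coupling between the classifier and the planner can be pictured as a lookup from environment class to a planner parameter preset, with a conservative fallback when the network is uncertain. The class names, parameter names, values and the 0.5 confidence cutoff below are all invented for illustration; the actual 3DVFH* parameters and HANNAS logic differ.

```python
# Hypothetical mapping from segmentation-derived environment class to
# local-planner parameters (all names and values are illustrative).
PLANNER_PRESETS = {
    "open":        {"safety_margin_m": 1.0, "lookahead_m": 25.0, "max_speed_ms": 12.0},
    "suburban":    {"safety_margin_m": 2.0, "lookahead_m": 15.0, "max_speed_ms": 8.0},
    "dense_urban": {"safety_margin_m": 3.5, "lookahead_m": 8.0,  "max_speed_ms": 4.0},
}

def select_planner_params(class_probs):
    """Pick the preset for the most probable environment class, falling
    back to the most conservative preset when the network is uncertain."""
    label, prob = max(class_probs.items(), key=lambda kv: kv[1])
    if prob < 0.5:  # low confidence: use the safest settings
        return PLANNER_PRESETS["dense_urban"]
    return PLANNER_PRESETS[label]
```

The point of the hybrid design is exactly this separation: the ANN only has to solve a coarse classification problem, while the geometric planning remains with the well-understood 3DVFH*.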
The management of knowledge in organizations considers both established long-term processes and cooperation in agile project teams. Since knowledge can be both tacit and explicit, its transfer from the individual to the organizational knowledge base poses a challenge in organizations. This challenge increases when the fluctuation of knowledge carriers is exceptionally high. Especially in large projects in which external consultants are involved, there is a risk that critical, company-relevant knowledge generated in the project will leave the company with the external knowledge carrier and thus be lost. In this paper, we show the advantages of an early warning system for knowledge management to avoid this loss. In particular, the potential of visual analytics in the context of knowledge management systems is presented and discussed. We present a project for the development of a business-critical software system and discuss the first implementations and results.
Research on robotic lunar exploration has seen a broad revival, especially since the Google Lunar X-Prize increasingly brought private endeavors into play. This development is supported by national agencies with the aim of enabling long-term lunar infrastructure for in-situ operations and the establishment of a moon village. One challenge for effective exploration missions is developing a compact and lightweight robotic rover to reduce launch costs and open the possibility for secondary payload options. Existing micro rovers for exploration missions are clearly limited by their design for one day of sunlight and their low level of autonomy. To expand the potential mission applications and range of use, the lifetime could be extended by surviving the lunar night and providing a higher level of autonomy. To address this objective, the paper presents a system design concept for a lightweight micro rover with long-term mission duration capabilities, derived from a multi-day lunar mission scenario in equatorial regions. Technical solution approaches are described, analyzed, and evaluated, with emphasis on harmonizing the hardware selection under strictly limited size and power budgets.
In Europe, efforts are underway to develop key technologies that can be used to explore the Moon and to exploit the resources available. This includes technologies for in-situ resource utilization (ISRU), facilitating the possibility of a future Moon Village. The Moon is the next step for humans and robots to exploit the use of available resources for longer term missions, but also for further exploration of the solar system. A challenge for effective exploration missions is to achieve a compact and lightweight robot to reduce launch costs and open up the possibility of secondary payload options. Current micro rover concepts are primarily designed to last for one day of solar illumination and show a low level of autonomy. Extending the lifetime of the system by enabling survival of the lunar night and implementing a high level of autonomy will significantly increase potential mission applications and the operational range. As a reference mission, the deployment of a micro rover in the equatorial region of the Moon is being considered. An overview of mission parameters and a detailed example mission sequence is given in this paper. The mission parameters are based on an in-depth study of current space agency roadmaps, scientific goals, and upcoming flight opportunities. Furthermore, concepts of the ongoing international micro rover developments are analyzed along with technology solutions identified for survival of lunar nights and a high system autonomy. The results provide a basis of a concise requirements set-up to allow dedicated system developments and qualification measures in the future.