The first and last mile of a railway journey, in both freight and transit applications, involves high effort and is either non-productive (e.g. in depot operations) or highly inefficient (e.g. in industrial railways). These parts are typically managed on-sight, i.e. with no signalling and train protection systems ensuring the freedom of movement. This is possible due to the rather short braking distances of individual vehicles and shunting consists. The present article analyses the braking behaviour of such shunting units. For this purpose, a dedicated model is developed, calibrated on published results of brake tests, and validated against a high-definition model for low-speed applications. Multiple simulation runs are then executed to obtain a Monte Carlo estimate of the distribution of braking distances. From the distribution properties and established safety levels, the risk of exceeding certain braking distances is evaluated and maximum braking distances are derived. Together with certain parameters of the system, these can serve in the design and safety assessment of driver assistance systems and in the automation of these processes.
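The Monte Carlo step described above can be sketched in a few lines. This is a minimal illustration, not the article's actual model: the point-mass braking equation, the normally distributed deceleration and reaction-time parameters, and the 99.9% quantile are all assumptions chosen for the example.

```python
import random

def braking_distance(v0, a_mean, a_sd, t_mean, t_sd, rng):
    """One random draw: reaction distance plus braking distance
    for a point-mass model with v0 in m/s."""
    a = max(rng.gauss(a_mean, a_sd), 0.1)   # deceleration in m/s^2, floored
    t = max(rng.gauss(t_mean, t_sd), 0.0)   # reaction time in s
    return v0 * t + v0 ** 2 / (2 * a)

def monte_carlo(v0=6.9, n=100_000, quantile=0.999, seed=42):
    """Estimate a high quantile of the braking-distance distribution
    (all parameter values are illustrative assumptions)."""
    rng = random.Random(seed)
    d = sorted(braking_distance(v0, 0.8, 0.1, 1.5, 0.3, rng) for _ in range(n))
    return d[int(quantile * n) - 1]

# e.g. a "maximum" braking distance at roughly 25 km/h shunting speed
dmax = monte_carlo()
```

A risk-based maximum braking distance then corresponds to the chosen quantile of the simulated distribution rather than to a single worst-case run.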
In times of social climate protection movements such as Fridays for Future, the priorities of society, industry and higher education are changing, and the consideration of sustainability challenges is increasing. In the context of sustainable development, social skills are crucial to achieving the United Nations Sustainable Development Goals (SDGs). In particular, the impact that educational activities have on people, communities and society is therefore coming to the fore. Research has shown that people with high levels of social competence are better able to manage stressful situations, maintain positive relationships and communicate effectively; high social competence is also associated with better academic performance and career success. However, especially in engineering programs, the social pillar is underrepresented compared to the environmental and economic pillars.
In response to these changes, higher education institutions should be more aware of their social impact - from individual forms of teaching to entire modules and degree programs. To determine the potential for improvement and derive the resulting changes for further development, we present an initial framework for social impact measurement that transfers already established approaches from the business sector to the education sector. To demonstrate its applicability, we measure the key competencies taught in undergraduate engineering programs in Germany.
The aim is to prepare the students for success in the modern world of work and their future contribution to sustainable development. Additionally, the university can include the results in its sustainability report. Our method can be applied to different teaching methods and enables their comparison.
This book is based on a multimedia course for biological and chemical engineers, which is designed to trigger students' curiosity and initiative. A solid basic knowledge of thermodynamics and kinetics is necessary for understanding many technical, chemical, and biological processes.
The one-semester basic lecture course was divided into 12 workshops (chapters). Each chapter covers a practically relevant area of physical chemistry and contains the following didactic elements that make this book particularly exciting and understandable:
- Links to videos at the start of each chapter as preparation for the workshop
- Key terms (in bold) for further research of your own
- Comprehension questions and calculation exercises with solutions as learning checks
- Key illustrations as simple, easy-to-replicate blackboard pictures
Humorous cartoons for each workshop (by Faelis) additionally lighten up the text and facilitate the learning process as a mnemonic. To round out the book, the appendix includes a summary of the most popular experiments in basic physical chemistry courses, as well as suggestions for designing workshops with exhibits, experiments, and "questions of the day."
Suitable for students minoring in chemistry; chemistry majors are sure to find this slimmed-down, didactically valuable book helpful as well. The book is excellent for self-study.
Digital forensics of smartphones is of utmost importance in many criminal cases. As modern smartphones store chats, photos, videos, and other data that can be relevant for investigations, and as they can have storage capacities of hundreds of gigabytes, they are a primary target for forensic investigators. However, it is exactly this large amount of data that causes problems: extracting and examining the data from multiple phones seized in the context of a case takes more and more time. This bears the risk of wasting a lot of time on irrelevant phones while not enough time is left to analyze a phone that is worth examination. Forensic triage can help in this case: such a triage is a preselection step based on a subset of data and is performed before fully extracting all the data from the smartphone. Triage can accelerate subsequent investigations and is especially useful in cases where time is essential. The aim of this paper is to determine which and how much data from an Android smartphone can be made directly accessible to the forensic investigator – without tedious investigations. For this purpose, an app has been developed that stores extremely little data on the handset itself and outputs the extracted data immediately to the forensic workstation in a human- and machine-readable format.
Experimental determination of the cross sections of proton capture on radioactive nuclei is extremely difficult, yet it is of substantial interest for understanding the production of the p-nuclei. For the first time, a direct measurement of proton-capture cross sections on stored, radioactive ions became possible in an energy range of interest for nuclear astrophysics. The experiment was performed at the Experimental Storage Ring (ESR) at GSI by making use of a sensitive method to measure (p,γ) and (p,n) reactions in inverse kinematics. These reaction channels are of high relevance for the nucleosynthesis processes in supernovae, which are among the most violent explosions in the universe and are not yet well understood. The cross section of the ¹¹⁸Te(p,γ) reaction has been measured at energies of 6 MeV/u and 7 MeV/u. The heavy ions interacted with a hydrogen gas jet target. The radiative recombination process of the fully stripped ¹¹⁸Te ions and electrons from the hydrogen target was used as a luminosity monitor. An overview of the experimental method and preliminary results from the ongoing analysis are presented.
This paper presents an approach for reducing the cognitive load of humans working in quality control (QC) for production processes that adhere to the 6σ methodology. While 100% QC requires every part to be inspected, this workload can be reduced when a human-in-the-loop QC process is supported by an anomaly detection system that presents for manual inspection only those parts that have a significant likelihood of being defective. This approach shows good results when applied to image-based QC for metal textile products.
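The preselection idea can be sketched in a few lines. The anomaly-score interface and the 0.7 threshold below are assumptions for illustration, not the paper's actual detector:

```python
def triage(parts, score, threshold=0.7):
    """Split parts into a manual-inspection queue and an auto-pass queue,
    based on an anomaly score in [0, 1] (hypothetical interface)."""
    manual, auto_pass = [], []
    for part in parts:
        (manual if score(part) >= threshold else auto_pass).append(part)
    return manual, auto_pass

# toy usage with precomputed scores per part id
scores = {"part-A": 0.95, "part-B": 0.10, "part-C": 0.80}
manual, auto_pass = triage(scores, scores.get)
```

Only the `manual` queue reaches the human inspector; everything else passes automatically, which is where the cognitive-load reduction comes from.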
Due to the increasing complexity of software projects, software development is becoming more and more dependent on teams. The quality of this teamwork can vary depending on the team composition, as teams are always a combination of different skills and personality types. This paper aims to answer the question of how to describe a software development team and what influence the personality of the team members has on the team dynamics. For this purpose, a systematic literature review (n=48) and a literature search with the AI research assistant Elicit (n=20) were conducted. Result: A person’s personality significantly shapes his or her thinking and actions, which in turn influences his or her behavior in software development teams. It has been shown that team performance and satisfaction can be strongly influenced by personality. The quality of communication and the likelihood of conflict can also be attributed to personality.
Hydrogen peroxide (H₂O₂), a strong oxidizer, is a commonly used sterilization agent employed during aseptic food processing and medical applications. To assess the sterilization efficiency with H₂O₂, bacterial spores are common microbial systems due to their remarkable robustness against a wide variety of decontamination strategies. Despite their widespread use, there is, however, only little information about the detailed time-resolved mechanism underlying the oxidative spore death by H₂O₂. In this work, we investigate chemical and morphological changes of individual Bacillus atrophaeus spores undergoing oxidative damage using optical sensing with trapping Raman microscopy in real-time. The time-resolved experiments reveal that spore death involves two distinct phases: (i) an initial phase dominated by the fast release of dipicolinic acid (DPA), a major spore biomarker, which indicates the rupture of the spore’s core; and (ii) the oxidation of the remaining spore material resulting in the subsequent fragmentation of the spores’ coat. Simultaneous observation of the spore morphology by optical microscopy corroborates these mechanisms. The dependence of the onset of DPA release and the time constant of spore fragmentation on H₂O₂ shows that the formation of reactive oxygen species from H₂O₂ is the rate-limiting factor of oxidative spore death.
Immunosorbent turnip vein clearing virus (TVCV) particles displaying the IgG-binding domains D and E of Staphylococcus aureus protein A (PA) on every coat protein (CP) subunit (TVCVPA) were purified from plants via optimized and new protocols. The latter used polyethylene glycol (PEG) raw precipitates, from which virions were selectively re-solubilized in reverse PEG concentration gradients. This procedure improved the integrity of both TVCVPA and the wild-type subgroup 3 tobamovirus. TVCVPA could be loaded with more than 500 IgGs per virion, which mediated the immunocapture of fluorescent dyes, GFP, and active enzymes. Bi-enzyme ensembles of cooperating glucose oxidase and horseradish peroxidase were tethered together on the TVCVPA carriers via a single antibody type, with one enzyme conjugated chemically to its Fc region, and the other one bound as a target, yielding synthetic multi-enzyme complexes. In microtiter plates, the TVCVPA-displayed sugar-sensing system possessed a considerably increased reusability upon repeated testing, compared to the IgG-bound enzyme pair in the absence of the virus. A high coverage of the viral adapters was also achieved on Ta2O5 sensor chip surfaces coated with a polyelectrolyte interlayer, as a prerequisite for durable TVCVPA-assisted electrochemical biosensing via modularly IgG-assembled sensor enzymes.
Anti-bias trainings are increasingly demanded and practiced in academia and industry to increase employees’ sensitivity to discrimination, racism, and diversity. Under the heading of “Diversity Management”, anti-bias trainings are mainly offered as one-off workshops intended to raise awareness of unconscious biases, create a diversity-affirming corporate culture, awaken awareness of the potential of diversity, and ultimately enable the reflection of diversity in development processes. However, although the anti-bias approach originates in early childhood education, research and scientific articles on the sustainable effectiveness of anti-bias training in adulthood, especially in academia, are very scarce. In order to fill this research gap, the paper explores how sustainable the effects of individual anti-bias trainings on the behavior of participants are. To investigate this, participant observation in a qualitative pre-post setting was conducted, analyzing anti-bias trainings in an academic context. Two observers actively participated in the training sessions and documented the activities and reflection processes of the participants. Overall, the results question the effectiveness of single anti-bias trainings and show that a target-group-adaptive approach is mandatory, given the background of the approach in early childhood education. Therefore, it can be concluded that anti-bias work needs to be adapted to the target group’s needs and reality of life. Furthermore, the study reveals that single anti-bias trainings must be embedded in a holistic diversity management approach to stimulate sustainable reflection processes among the target group. This paper is one of the first to scientifically evaluate anti-bias training effectiveness, especially in the engineering sciences and the university context.
Software development projects often fail because of insufficient code quality. It is now well documented that the task of testing software, for example, is perceived as uninteresting and rather boring, leading to poor software quality and major challenges for software development companies. One promising approach to increase the motivation for considering software quality is the use of gamification. Initial research works have already investigated the effects of gamification on software developers and come to promising results. Nevertheless, there is a lack of results from field experiments, which motivates the chapter at hand. By conducting a gamification experiment with five student software projects and by interviewing the project members, the chapter provides insights into the changing programming behavior of information systems students when confronted with a leaderboard. The results reveal a motivational effect as well as a reduction of code smells.
Clinical assessment of newly developed sensors is important for ensuring their validity. Comparing recordings of emerging electrocardiography (ECG) systems to a reference ECG system requires accurate synchronization of data from both devices. Current methods can be inefficient and prone to errors. To address this issue, three algorithms are presented to synchronize two ECG time series from different recording systems: Binned R-peak Correlation, R-R Interval Correlation, and Average R-peak Distance. These algorithms reduce ECG data to their cyclic features, mitigating inefficiencies and minimizing discrepancies between different recording systems. We evaluate the performance of these algorithms using high-quality data and then assess their robustness after manipulating the R-peaks. Our results show that R-R Interval Correlation was the most efficient, whereas the Average R-peak Distance and Binned R-peak Correlation were more robust against noisy data.
Autonomous agents require rich environment models for fulfilling their missions. High-definition (HD) maps are a well-established map format that allows semantic information, such as road shapes, road markings, traffic signs or barriers, to be represented alongside the usual geometric information of the environment. The geometric resolution of HD maps can be as precise as centimetre level. In this paper, we report on our approach of using HD maps as a map representation for autonomous load-haul-dump vehicles in open-pit mining operations. As the mine undergoes constant change, we also need to update the map constantly. Therefore, we follow a lifelong mapping approach, updating the HD maps based on camera-based object detection and GPS data. We describe our mapping algorithm based on the Lanelet 2 map format and its integration with the navigation stack of the Robot Operating System. We present experimental results on our lifelong mapping approach from a real open-pit mine.
Due to the decarbonization of the energy sector, electric distribution grids are undergoing a major transformation, which is expected to increase the load on operating resources due to new electrical loads and distributed energy resources. Therefore, grid operators need to gradually move to active grid management in order to ensure safe and reliable grid operation. However, this requires knowledge of key grid variables, such as node voltages, which is why the mass integration of measurement technology (smart meters) is necessary. Another problem is that a large part of the topology of the distribution grids is not sufficiently digitized and the models are partly faulty, which means that active grid operation management today must largely be carried out blindly. It is therefore a subject of current research to develop methods for determining unknown grid topologies from measurement data. In this paper, different clustering algorithms are presented and their performance in topology detection of low-voltage grids is compared. Furthermore, the influence of measurement uncertainties is investigated in the form of a sensitivity analysis.
AI-based systems are nearing ubiquity not only in everyday low-stakes activities but also in medical procedures. To protect patients and physicians alike, explainability requirements have been proposed for the operation of AI-based decision support systems (AI-DSS), which adds hurdles to the productive use of AI in clinical contexts. This raises two questions: Who decides these requirements? And how should access to AI-DSS be provided to communities that reject these standards (particularly when such communities are expert-scarce)? This chapter investigates a dilemma that emerges from the implementation of global AI governance. While rejecting global AI governance limits the ability to help communities in need, global AI governance risks undermining health-insecure communities and subjecting them to the force of the neo-colonial world order. To this end, the chapter first surveys the current landscape of AI governance and introduces the approach of relational egalitarianism as key to (global health) justice. To discuss the two horns of this dilemma, the core power imbalances faced by health-insecure collectives (HICs) are examined. The chapter argues that only strong demands of a dual strategy towards health-secure collectives can both remedy the immediate needs of HICs and enable them to become healthcare-independent.
Modern implementations of driver assistance systems are evolving from pure driver assistance to independently acting automation systems. Still, these systems do not cover the full vehicle usage range, also called the operational design domain, which requires the human driver as a fall-back mechanism. Transitions of control and potential minimum-risk manoeuvres are current research topics and will bridge the gap until fully autonomous vehicles are available. The authors showed in a demonstration that transition-of-control mechanisms can be further improved by the use of communication technology. Receiving incident type and position information via standardised vehicle-to-everything (V2X) messages can improve driver safety and comfort. The connected and automated vehicle’s software framework can use this information to plan areas where the driver should take back control, initiating a transition of control that can be followed by a minimum-risk manoeuvre in case of an unresponsive driver. This transition of control has been implemented in a test vehicle and was presented to the public during IEEE IV2022 (IEEE Intelligent Vehicle Symposium) in Aachen, Germany.
Lead and nickel, as heavy metals, are still used in industrial processes and are classified as “environmental health hazards” due to their toxicity and polluting potential. The detection of heavy metals can prevent environmental pollution at toxic levels that are critical to human health. In this sense, the electrolyte–insulator–semiconductor (EIS) field-effect sensor is an attractive sensing platform for the fabrication of reusable and robust sensors to detect such substances. This study aims to fabricate a sensing unit on an EIS device based on Sn₃O₄ nanobelts embedded in a polyelectrolyte matrix of polyvinylpyrrolidone (PVP) and polyacrylic acid (PAA) using the layer-by-layer (LbL) technique. The EIS-Sn₃O₄ sensor exhibited enhanced electrochemical performance for detecting Pb²⁺ and Ni²⁺ ions, revealing a higher affinity for Pb²⁺ ions, with sensitivities of ca. 25.8 mV/decade and 2.4 mV/decade, respectively. These results indicate that Sn₃O₄ nanobelts provide a feasible proof-of-concept capacitive field-effect sensor for heavy metal detection, paving the way for future studies focusing on environmental monitoring.
Ice melting probes
(2023)
The exploration of icy environments in the solar system, such as the poles of Mars and the icy moons (a.k.a. ocean worlds), is a key aspect for understanding their astrobiological potential as well as for extraterrestrial resource inspection. On these worlds, ice melting probes are considered to be well suited for the robotic clean execution of such missions. In this chapter, we describe ice melting probes and their applications, the physics of ice melting and how the melting behavior can be modeled and simulated numerically, the challenges for ice melting, and the required key technologies to deal with those challenges. We also give an overview of existing ice melting probes and report some results and lessons learned from laboratory and field tests.
Even the shortest flight through unknown, cluttered environments requires reliable local path planning algorithms to avoid unforeseen obstacles. The algorithm must evaluate alternative flight paths and identify the best path if an obstacle blocks its way. Commonly, weighted sums are used here. This work shows that weighted Chebyshev distances and factorial achievement scalarising functions are suitable alternatives to weighted sums when combined with the 3DVFH* local path planning algorithm. Both methods considerably reduce the failure probability of simulated flights in various environments. The standard 3DVFH* uses a weighted sum and has a failure probability of 50% in the test environments. A factorial achievement scalarising function, which minimises the worst combination of two out of four objective functions, reaches a failure probability of 26%; a weighted Chebyshev distance, which optimises the worst objective, has a failure probability of 30%. These results are promising for further enhancements and for broader applicability.
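The difference between the two scalarisations can be shown in a few lines. The objective values and weights below are made up; the actual 3DVFH* objective functions are not reproduced here.

```python
def weighted_sum(objectives, weights):
    """Classic scalarisation: sum of weighted objective costs."""
    return sum(w * f for w, f in zip(weights, objectives))

def weighted_chebyshev(objectives, weights):
    """Chebyshev scalarisation: a path is only as good as its worst
    weighted objective, so one terrible objective cannot be hidden."""
    return max(w * f for w, f in zip(weights, objectives))

weights  = [1.0, 1.0, 1.0]
risky    = [0.1, 0.1, 0.9]   # excellent on two objectives, terrible on one
balanced = [0.4, 0.4, 0.4]
```

Here the weighted sum prefers the risky candidate while the Chebyshev distance prefers the balanced one, which mirrors the robustness argument above.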
KNX is a protocol for smart building automation, e.g., for automated heating, air conditioning, or lighting. This paper analyses and evaluates state-of-the-art KNX devices from the manufacturers Merten, Gira and Siemens with respect to security. On the one hand, it investigates whether publicly known vulnerabilities, such as insecure storage of passwords in software, unencrypted communication, or denial-of-service attacks, can be reproduced in new devices. On the other hand, the security is analyzed in general, leading to the discovery of a previously unknown, high-risk vulnerability related to so-called BCU (authentication) keys.
Selected problems in the field of multivariate statistical analysis are treated, with one focus on the paired-sample case. Among other things, statistical testing problems of marginal homogeneity are considered. In detail, properties of Hotelling's T² test in a special parametric situation are obtained. Moreover, the nonparametric problem of marginal homogeneity is discussed on the basis of possibly incomplete data. In the bivariate data case, properties of the Hoeffding-Blum-Kiefer-Rosenblatt independence test statistic on the basis of partly not identically distributed data are investigated. Similar testing problems are treated within the scope of the application of a result for the empirical process of the concomitants for partly categorical data. Furthermore, testing changes in the modeled solvency capital requirement of an insurance company by means of a paired sample from an internal risk model is discussed. Beyond the paired-sample case, a new asymptotic relative efficiency concept based on the expected volumes of multidimensional confidence regions is introduced. Besides, a new approach for the treatment of the multi-sample goodness-of-fit problem is presented. Finally, a consistent test for the goodness-of-fit problem is developed for the setting of huge or infinite-dimensional data.
Connective tissues such as tendons contain an extracellular matrix (ECM) comprising collagen fibrils scattered within the ground substance. These fibrils are instrumental in lending mechanical stability to tissues. Unfortunately, our understanding of how collagen fibrils reinforce the ECM remains limited, with no direct experimental evidence substantiating current theories. Earlier theoretical studies on collagen fibril reinforcement in the ECM have relied predominantly on the assumption of uniform cylindrical fibers, which is inadequate for modelling collagen fibrils, which possess tapered ends. Recently, Topçu and colleagues published a paper in the International Journal of Solids and Structures presenting a generalized shear-lag theory for the transfer of elastic stress between the matrix and fibers with tapered ends. This paper is a positive step towards comprehending the mechanics of the ECM and makes a valuable contribution to formulating a complete theory of collagen fibril reinforcement in the ECM.
To fulfil the CO₂ emission reduction targets of the European Union (EU), heavy-duty (HD) trucks need to operate 15% more efficiently by 2025 and 30% more efficiently by 2030. Their electrification is necessary, as conventional HD trucks are already optimized for the long-haul application. The resulting hybrid electric vehicle (HEV) truck gains most of its fuel-saving potential from the recuperation of potential energy and its subsequent utilization. The key to exploiting the full potential of HEV-HD trucks is to maximize the amount of recuperated energy and ensure its intelligent usage while keeping the operating point of the internal combustion engine as efficient as possible. To achieve this goal, an intelligent energy management strategy (EMS) based on the equivalent consumption minimization strategy (ECMS) is developed for a parallel HEV-HD truck, using predictive discharge of the battery and an operating strategy that adapts to the height profile and the vehicle mass. The presented EMS can reproduce the globally optimal operating strategy over long phases and leads to a fuel saving potential of up to 2% compared with a heuristic strategy. Furthermore, the fuel saving potential is correlated with the investigated boundary conditions to deepen the understanding of the impact of intelligent EMS for HEV-HD trucks.
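The core of an ECMS-type strategy can be sketched as follows: electrical energy is priced into an equivalent fuel consumption via an equivalence factor s, and the power split with the lowest equivalent cost is chosen. The fuel model, the candidate splits, and the values of s below are illustrative assumptions only.

```python
def ecms_cost(fuel_power, battery_power, s):
    """Equivalent consumption: fuel power plus battery power
    weighted by the equivalence factor s."""
    return fuel_power + s * battery_power

def best_split(p_demand, candidates, fuel_model, s=2.5):
    """Choose the electric power share (kW) with the lowest equivalent
    cost; the engine covers the remainder of the demanded power."""
    return min(candidates,
               key=lambda p_el: ecms_cost(fuel_model(p_demand - p_el), p_el, s))

# toy fuel model: fuel power is 3x the engine's mechanical power
fuel_model = lambda p_mech: 3.0 * max(p_mech, 0.0)
```

With a low equivalence factor (cheap electricity, e.g. after recuperation) the strategy discharges the battery; raising s above the engine's marginal cost makes it run on fuel instead, which is exactly the lever a predictive EMS adjusts along the height profile.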
Messenger apps like WhatsApp and Telegram are frequently used for everyday communication, but they can also be utilized as a platform for illegal activity. Telegram allows public groups with up to 200,000 participants. Criminals use these public groups for trading illegal commodities and services, which is a concern for law enforcement agencies, who manually monitor suspicious activity in these chat rooms. This research demonstrates how natural language processing (NLP) can assist in analyzing these chat rooms, providing an explorative overview of the domain and facilitating purposeful analyses of user behavior. We provide a publicly available corpus of text messages annotated with entities and relations from four self-proclaimed black market chat rooms. Our pipeline approach aggregates the product attributes extracted from user messages into profiles and uses these, together with the products sold, as features for clustering. The extracted structured information is the foundation for further data exploration, such as identifying the top vendors or fine-grained price analyses. Our evaluation shows that pretrained word vectors perform better for unsupervised clustering than state-of-the-art transformer models, while the latter remain superior for sequence labeling.
Supervised machine learning and deep learning require a large amount of labeled data, which data scientists obtain in a manual, time-consuming annotation process. To mitigate this challenge, Active Learning (AL) proposes promising data points for annotators to label next, instead of a sequential or random sample. This method is intended to save annotation effort while maintaining model performance.
However, practitioners face many AL strategies for different tasks and need an empirical basis to choose between them. Surveys categorize AL strategies into taxonomies without performance indications, and papers presenting novel AL strategies compare their performance only to a small subset of existing strategies. Our contribution addresses this empirical basis by introducing a reproducible active learning evaluation (ALE) framework for the comparative evaluation of AL strategies in NLP.
The framework allows the implementation of AL strategies with low effort and a fair data-driven comparison through defining and tracking experiment parameters (e.g., initial dataset size, number of data points per query step, and the budget). ALE helps practitioners to make more informed decisions, and researchers can focus on developing new, effective AL strategies and deriving best practices for specific use cases. With best practices, practitioners can lower their annotation costs. We present a case study to illustrate how to use the framework.
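As an example of the kind of strategy such a framework compares, least-confidence uncertainty sampling can be written in a few lines; the prediction interface below is a hypothetical stand-in for a real model, not part of the ALE framework itself.

```python
def uncertainty_sampling(unlabeled, predict_proba, batch_size=10):
    """Least-confidence sampling: rank unlabeled points by the
    probability of the model's top class and return the least
    confident batch for annotation."""
    ranked = sorted(unlabeled, key=lambda x: max(predict_proba(x)))
    return ranked[:batch_size]

# toy model: precomputed class probabilities per data point
probas = {"doc-1": [0.90, 0.10], "doc-2": [0.55, 0.45], "doc-3": [0.70, 0.30]}
to_annotate = uncertainty_sampling(list(probas), probas.get, batch_size=1)
```

A random-sampling baseline would simply draw `batch_size` points uniformly; an evaluation framework's job is to track parameters such as the initial dataset size, the query batch size, and the budget so that such strategies can be compared fairly.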
The work in modern open-pit and underground mines requires the transportation of large amounts of resources between fixed points. The navigation to these fixed points is a repetitive task that can be automated. The challenge in automating the navigation of vehicles commonly used in mines lies in the systemic properties of such vehicles. Many mining vehicles, such as the one we have used in the research for this paper, use steering systems with an articulated joint that bends the vehicle’s drive axis to change its course, and a hydraulic drive system to actuate axial drive components or the movements of tippers, if available. To address the difficulties of controlling such a vehicle, we present a model-predictive control (MPC) approach. While control optimisation based on a parallel error minimisation of the predicted state has been established in the past, we provide insight into the design and implementation of an MPC for an articulated mining vehicle and show the results of real-world experiments in an open-pit mine environment.
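The flavour of the approach can be conveyed with a much-simplified sketch: a crude kinematic model of an articulated vehicle is rolled out over a short horizon for a set of candidate articulation angles, and the candidate minimising a terminal cost is applied. The model equation, the sampling-based optimisation, and all parameter values are illustrative assumptions, far simpler than the actual controller.

```python
import math

def step(state, v, gamma, lf=1.5, lr=1.5, dt=0.1):
    """One Euler step of a simplified articulated-vehicle kinematic model:
    state is the front-section pose (x, y, heading), gamma the
    articulation angle, lf/lr the section lengths (all assumed values)."""
    x, y, th = state
    th_dot = v * math.sin(gamma) / (lf + lr * math.cos(gamma))
    return (x + v * math.cos(th) * dt,
            y + v * math.sin(th) * dt,
            th + th_dot * dt)

def mpc_control(state, target, v=2.0, horizon=20):
    """Sampling-based MPC sketch: simulate each candidate articulation
    angle over the horizon and pick the one ending closest to the target."""
    def terminal_error(gamma):
        s = state
        for _ in range(horizon):
            s = step(s, v, gamma)
        return math.hypot(s[0] - target[0], s[1] - target[1])
    candidates = [i * 0.05 - 0.5 for i in range(21)]   # -0.5 .. 0.5 rad
    return min(candidates, key=terminal_error)
```

A real MPC would optimise a full control sequence under actuator constraints rather than a single constant angle, but the predict-then-minimise structure is the same.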
The complex questions of today for a world of tomorrow are characterized by their global impact. Solutions must therefore not only be sustainable in the sense of the three pillars of sustainability (economic, environmental, and social) but must also function globally. This goes hand in hand with the need for intercultural acceptance of the developed services and products. To achieve this, engineers, as the problem solvers of the future, must be able to work in intercultural teams on appropriate solutions and be sensitive to intercultural perspectives. To equip the engineers of the future with these so-called future skills, teaching concepts are needed in which students can acquire the relevant methods and competencies in application-oriented formats. The presented course "Applying Design Thinking - Sustainability, Innovation and Interculturality" was developed to teach future skills from the competency areas Digital Key Competencies, Classical Competencies and Transformative Competencies. The CDIO Standard 3.0, in particular standards 5, 6, 7 and 8, was used as a guideline. The course aims to prepare engineering students from different disciplines and cultures for their future work in an international environment by combining a digital teaching format with an interdisciplinary, transdisciplinary and intercultural setting for solving sustainability challenges. The innovative element lies in the digital application of design thinking and the inclusion of intercultural as well as trans- and interdisciplinary perspectives in innovation development processes. In this paper, the concept of the course is presented in detail and the particularities of a digital implementation of design thinking are addressed. Subsequently, the potentials and challenges are reflected upon, and practical advice for integrating design thinking into engineering education is given.
The popularity of social media, and particularly Instagram, is growing steadily. People use the different platforms to share pictures as well as videos and to communicate with friends. The potential of social media platforms is also being used for marketing purposes and for selling products. While purchase decision factors have been investigated several times for Facebook and other online social media platforms, Instagram stores have remained largely unattended so far. The present research work closes this gap and sheds light on decisive factors for purchasing products offered in Instagram stores. A theoretical research model, which contains selected constructs that are assumed to have a significant influence on Instagram users’ purchase intention, is developed. The hypotheses are evaluated by applying structural equation modelling to survey data containing 127 relevant participants. The results of the study reveal that ‘trust’, ‘personal recommendation’, and ‘usability’ significantly influence users’ buying intention in Instagram stores.
This study evaluates neuromechanical control and muscle-tendon interaction during energy storage and dissipation tasks in hypergravity. During parabolic flights, while 17 subjects performed drop jumps (DJs) and drop landings (DLs), electromyography (EMG) of the lower limb muscles was combined with in vivo fascicle dynamics of the gastrocnemius medialis, two-dimensional (2D) kinematics, and kinetics to measure and analyze changes in energy management. Comparisons were made between movement modalities executed in hypergravity (1.8 G) and gravity on ground (1 G). In 1.8 G, ankle dorsiflexion, knee joint flexion, and vertical center of mass (COM) displacement are lower in DJs than in DLs; within each movement modality, joint flexion amplitudes and COM displacement demonstrate higher values in 1.8 G than in 1 G. Concomitantly, negative peak ankle joint power, vertical ground reaction forces, and leg stiffness are similar between both movement modalities (1.8 G). In DJs, EMG activity in 1.8 G is lower during the COM deceleration phase than in 1 G, thus impairing quasi-isometric fascicle behavior. In DLs, EMG activity before and during the COM deceleration phase is higher, and fascicles are stretched less in 1.8 G than in 1 G. Compared with the situation in 1 G, highly task-specific neuromuscular activity is diminished in 1.8 G, resulting in fascicle lengthening in both movement modalities. Specifically, in DJs, a high magnitude of neuromuscular activity is impaired, resulting in altered energy storage. In contrast, in DLs, linear stiffening of the system due to higher neuromuscular activity combined with lower fascicle stretch enhances the buffering function of the tendon, and thus the capacity to safely dissipate energy.
Using scenarios is vital in identifying and specifying measures for successfully transforming the energy system. Such transformations can be particularly challenging and require the support of a broader set of stakeholders. Otherwise, there will be opposition in the form of reluctance to adopt the necessary technologies. Usually, processes for considering stakeholders' perspectives are very time-consuming and costly. In particular, there are uncertainties about how to deal with modifications in the scenarios. In principle, new consulting processes will be required. In our study, we show how multi-criteria decision analysis can be used to analyze stakeholders' attitudes toward transition paths. Since stakeholders differ regarding their preferences and time horizons, we employ a multi-criteria decision analysis approach to identify which stakeholders will support or oppose a transition path. We provide a flexible template for analyzing stakeholder preferences toward transition paths. This flexibility comes from the fact that our multi-criteria decision aid-based approach does not involve intensive empirical work with stakeholders. Instead, it involves subjecting assumptions to robustness analysis, which can help identify options to influence stakeholders' attitudes toward transitions.
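The core of such a multi-criteria analysis can be illustrated with a weighted-sum aggregation, one common MCDA scheme: each stakeholder's criterion weights are applied to the scores of each transition path, and the ranking falls out of the aggregate. The criteria, path scores, and weight profiles below are invented for illustration and are not the study's data.

```python
# Hypothetical scores for two transition paths on three criteria
# (cost efficiency, emission reduction, supply reliability),
# each normalised to 0..1, higher is better.
paths = {
    "rapid electrification": [0.4, 0.9, 0.6],
    "gradual transition":    [0.8, 0.5, 0.8],
}

# Assumed stakeholder weight profiles (each sums to 1).
stakeholders = {
    "industry":  [0.6, 0.1, 0.3],
    "regulator": [0.2, 0.6, 0.2],
}

def score(weights, criteria):
    # Weighted-sum aggregation of criterion scores.
    return sum(w * c for w, c in zip(weights, criteria))

def preferred_path(weights):
    # The path with the highest aggregate score for this profile.
    return max(paths, key=lambda p: score(weights, paths[p]))

for name, w in stakeholders.items():
    print(name, "prefers:", preferred_path(w))
```

Because preferences diverge with the weight profile, varying the profiles (a robustness analysis over the assumed weights) identifies which stakeholders would support or oppose a given path.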
Germany is a frontrunner in setting frameworks for the transition to a low-carbon system. The mobility sector plays a significant role in this shift, affecting different people and groups on multiple levels. Without acceptance from these stakeholders, emission targets are out of reach. This research analyzes how the heterogeneous preferences of various stakeholders align with the transformation of the mobility sector, looking at the extent to which the German transformation paths are supported and where stakeholders are located.
Under the research objective of comparing stakeholders' preferences to identify which car segments require additional support for a successful climate transition, a status quo of stakeholders and car performance criteria forms the foundation for the analysis. Because stakeholders' hidden preferences hinder the derivation of criteria weightings directly from stakeholders, a ranking from observed preferences is used instead. This study's inverse multi-criteria decision analysis allows weightings to be predicted and used, together with a recalibrated performance matrix, to explore future preferences toward car segments.
Results show that stakeholders prefer medium-sized cars, with the trend pointing towards the increased potential for alternative propulsion technologies and electrified vehicles. These insights can guide the improved targeting of policy supporting the energy and mobility transformation. Additionally, the method proposed in this work can fully handle subjective approaches while incorporating a priori information. A software implementation of the proposed method completes this work and is made publicly available.
It has been shown that muscle fascicle curvature increases with increasing contraction level and decreasing muscle–tendon complex length. However, these analyses were done with limited examination windows concerning contraction level, muscle–tendon complex length, and/or intramuscular position of ultrasound imaging. With this study we aimed to investigate the correlation between fascicle arching and contraction, muscle–tendon complex length, and their associated architectural parameters in gastrocnemius muscles to develop hypotheses concerning the fundamental mechanism of fascicle curving. Twelve participants were tested in five different positions (90°/105°*, 90°/90°*, 135°/90°*, 170°/90°*, and 170°/75°*; *knee/ankle angle). They performed isometric contractions at four different contraction levels (5%, 25%, 50%, and 75% of maximum voluntary contraction) in each position. Panoramic ultrasound images of gastrocnemius muscles were collected at rest and during constant contraction. Aponeuroses and fascicles were tracked in all ultrasound images, and the parameters fascicle curvature, muscle–tendon complex strain, contraction level, pennation angle, fascicle length, fascicle strain, intramuscular position, sex and age group were analyzed by linear mixed effect models. Mean fascicle curvature of the medial gastrocnemius increased with contraction level (+5 m−1 from 0% to 100%; p = 0.006). Muscle–tendon complex length had no significant impact on mean fascicle curvature. Mean pennation angle (2.2 m−1 per 10°; p < 0.001), inverse mean fascicle length (20 m−1 per cm−1; p = 0.003), and mean fascicle strain (−0.07 m−1 per +10%; p = 0.004) correlated with mean fascicle curvature. Evidence has also been found for intermuscular, intramuscular, and sex-specific intramuscular differences of fascicle curving. Pennation angle and inverse fascicle length show the highest predictive capacities for fascicle curving.
Due to the strong correlations between pennation angle and fascicle curvature and the intramuscular pattern of curving we suggest for future studies to examine correlations between fascicle curvature and intramuscular fluid pressure.
Deammonification for nitrogen removal in municipal wastewater in temperate and cold climate zones is currently limited to the side stream of municipal wastewater treatment plants (MWWTP). This study developed a conceptual model of a mainstream deammonification plant, designed for 30,000 P.E., considering possible solutions corresponding to the challenging mainstream conditions in Germany. In addition, the energy-saving potential, nitrogen elimination performance and construction-related costs of mainstream deammonification were compared to a conventional plant model, having a single-stage activated sludge process with upstream denitrification. The results revealed that an additional treatment step combining chemical precipitation and ultra-fine screening is advantageous prior to mainstream deammonification. Hereby, chemical oxygen demand (COD) can be reduced by 80%, so that the COD:N ratio drops from 12 to 2.5. Laboratory experiments testing mainstream conditions of temperature (8–20°C), pH (6–9) and COD:N ratio (1–6) showed an achievable volumetric nitrogen removal rate (VNRR) of at least 50 gN/(m3∙d) for various deammonifying sludges from side stream deammonification systems in the state of North Rhine-Westphalia, Germany, where m3 denotes reactor volume. Assuming a retained Norganic content of 0.0035 kgNorg./(P.E.∙d) from the daily loads of N at the carbon removal stage and a VNRR of 50 gN/(m3∙d) under mainstream conditions, a resident-specific reactor volume of 0.115 m3/(P.E.) is required for mainstream deammonification. This is in the same order of magnitude as the conventional activated sludge process, i.e., 0.173 m3/(P.E.) for an MWWTP of size class 4. The conventional plant model yielded a total specific electricity demand of 35 kWh/(P.E.∙a) for the operation of the whole MWWTP and an energy recovery potential of 15.8 kWh/(P.E.∙a) through anaerobic digestion.
In contrast, the developed mainstream deammonification model plant would require an energy demand of only 21.5 kWh/(P.E.∙a) and would offer an energy recovery potential of 24 kWh/(P.E.∙a), enabling it to be energy self-sufficient. The retrofitting costs for implementing mainstream deammonification in existing conventional MWWTPs are nearly negligible, as existing units like activated sludge reactors, aerators and monitoring technology are reusable. However, in this case, mainstream deammonification must meet the performance requirement of a VNRR of about 50 gN/(m3∙d).
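The resident-specific reactor volume quoted in the study follows directly from dividing the applied nitrogen load by the achievable volumetric removal rate. The sketch below back-calculates the load implied by the figures given in the abstract; the 5.75 gN/(P.E.·d) value is an inference from those numbers, not a figure stated in the paper.

```python
def reactor_volume_per_pe(n_load_g_per_pe_d, vnrr_g_per_m3_d):
    # Resident-specific reactor volume in m3/(P.E.): the nitrogen load
    # each resident contributes per day, divided by the volumetric
    # nitrogen removal rate (VNRR) the sludge can sustain.
    return n_load_g_per_pe_d / vnrr_g_per_m3_d

# With VNRR = 50 gN/(m3*d), the quoted 0.115 m3/(P.E.) corresponds to
# an applied load of 5.75 gN/(P.E.*d):
print(reactor_volume_per_pe(5.75, 50.0))  # 0.115
```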
Extracting workflow nets from textual descriptions can be used to simplify guidelines or formalize textual descriptions of formal processes like business processes and algorithms. The task of manually extracting processes, however, requires domain expertise and effort. While automatic process model extraction is desirable, annotating texts with formalized process models is expensive. Therefore, there are only a few machine-learning-based extraction approaches. Rule-based approaches, in turn, require domain specificity to work well and can rarely distinguish relevant and irrelevant information in textual descriptions. In this paper, we present GUIDO, a hybrid approach to the process model extraction task that first classifies sentences regarding their relevance to the process model, using a BERT-based sentence classifier, and second extracts a process model from the sentences classified as relevant, using dependency parsing. The presented approach achieves significantly better results than a pure rule-based approach. GUIDO achieves an average behavioral similarity score of 0.93. Still, in comparison to purely machine-learning-based approaches, the annotation costs stay low.
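The two-stage pipeline can be illustrated with a toy stand-in: stage one filters sentences for process relevance, stage two extracts a step from each relevant sentence. GUIDO itself uses a BERT-based classifier and dependency parsing; the keyword heuristics, verb list, and example sentences below are invented for illustration only.

```python
# Stand-in vocabulary of process action verbs (hypothetical).
ACTION_VERBS = {"checks", "approves", "sends", "rejects", "reviews"}

def relevant(sentence):
    # Stage 1 stand-in: a sentence counts as process-relevant if it
    # contains a known action verb (GUIDO uses a BERT classifier here).
    return any(tok.lower().strip(".,") in ACTION_VERBS
               for tok in sentence.split())

def extract_step(sentence):
    # Stage 2 stand-in: naive subject-verb-object split at the action
    # verb (GUIDO uses dependency parsing here).
    toks = [t.strip(".,") for t in sentence.split()]
    for i, t in enumerate(toks):
        if t.lower() in ACTION_VERBS:
            return (" ".join(toks[:i]), t, " ".join(toks[i + 1:]))
    return None

text = ["The clerk checks the invoice.",
        "Invoices arrive daily by mail.",
        "The manager approves the payment."]
steps = [extract_step(s) for s in text if relevant(s)]
print(steps)
```

The irrelevant middle sentence is filtered out before extraction, which is exactly the division of labour that lets the relevance classifier keep noise away from the rule-based extraction stage.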
In recent years, the development of large pretrained language models, such as BERT and GPT, has significantly improved information extraction systems on various tasks, including relation classification. State-of-the-art systems are highly accurate on scientific benchmarks, but a lack of explainability is currently a complicating factor in many real-world applications. Comprehensible systems are necessary to prevent biased, counterintuitive, or harmful decisions.
We introduce semantic extents, a concept to analyze decision patterns for the relation classification task. Semantic extents are the most influential parts of texts concerning classification decisions. Our definition allows similar procedures to determine semantic extents for humans and models. We provide an annotation tool and a software framework to determine semantic extents for humans and models conveniently and reproducibly. Comparing both reveals that models tend to learn shortcut patterns from data. These patterns are hard to detect with current interpretability methods, such as input reductions. Our approach can help detect and eliminate spurious decision patterns during model development. Semantic extents can increase the reliability and security of natural language processing systems and are an essential step toward enabling applications in critical areas like healthcare or finance. Moreover, our work opens new research directions for developing methods to explain deep learning models.
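Input reduction, the interpretability baseline mentioned above, can be sketched in a few lines: tokens are greedily removed as long as the model's prediction stays the same, leaving an often unintuitively small decision-relevant span. The "model" below is a hypothetical keyword rule standing in for a trained classifier, and the example relation label is invented.

```python
def input_reduction(tokens, predict):
    # Greedy input reduction: repeatedly drop a token whose removal
    # leaves the model's prediction unchanged, until no token can be
    # dropped. The surviving tokens are what the model actually relied
    # on; semantic extents are determined by a related but more
    # controlled procedure.
    label = predict(tokens)
    reduced = list(tokens)
    changed = True
    while changed:
        changed = False
        for i in range(len(reduced)):
            cand = reduced[:i] + reduced[i + 1:]
            if cand and predict(cand) == label:
                reduced = cand
                changed = True
                break
    return reduced

# Toy stand-in model: predicts the relation "founded_by" whenever the
# cue word "founded" is present, regardless of the entities.
predict = lambda toks: "founded_by" if "founded" in toks else "other"
print(input_reduction("Apple was founded by Steve Jobs".split(), predict))
# -> ['founded']
```

That the reduction collapses to a single cue word is precisely the kind of shortcut pattern the paper argues such methods struggle to surface in realistic models.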
Preprint: Studies on the enzymatic reduction of levulinic acid using Chiralidon-R and Chiralidon-S
(2023)
The enzymatic reduction of levulinic acid by the chiral catalysts Chiralidon-R and Chiralidon-S, which are commercially available superabsorbed alcohol dehydrogenases, is described. Chiralidon®-R/S reduces levulinic acid to (R,S)-4-hydroxyvaleric acid and the (R)- or (S)-gamma-valerolactone.
Like all preceding transformations of the manufacturing industry, the large-scale usage of production data will reshape the role of humans within the sociotechnical production ecosystem. To ensure that this transformation creates work systems in which employees are empowered, productive, healthy, and motivated, the transformation must be guided by principles of and research on human-centered work design. Specifically, measures must be taken at all levels of work design, ranging from (1) the work tasks to (2) the working conditions to (3) the organizational level and (4) the supra-organizational level. We present selected research across all four levels that showcase the opportunities and requirements that surface when striving for human-centered work design for the Internet of Production (IoP). (1) On the work task level, we illustrate the user-centered design of human-robot collaboration (HRC) and process planning in the composite industry as well as user-centered design factors for cognitive assistance systems. (2) On the working conditions level, we present a newly developed framework for the classification of HRC workplaces. (3) Moving to the organizational level, we show how corporate data can be used to facilitate best practice sharing in production networks, and we discuss the implications of the IoP for new leadership models. Finally, (4) on the supra-organizational level, we examine overarching ethical dimensions, investigating, e.g., how the new work contexts affect our understanding of responsibility and normative values such as autonomy and privacy. Overall, these interdisciplinary research perspectives highlight the importance and necessary scope of considering the human factor in the IoP.
Achieving the 17 Sustainable Development Goals (SDGs) set by the United Nations (UN) in 2015 requires global collaboration between different stakeholders. Industry, and in particular engineers who shape industrial developments, have a special role to play as they are confronted with the responsibility to holistically reflect sustainability in industrial processes. This means that, in addition to the technical specifications, engineers must also question the effects of their own actions on an ecological, economic and social level in order to ensure sustainable action and contribute to the achievement of the SDGs. However, this requires competencies that enable engineers to apply all three pillars of sustainability to their own field of activity and to understand the global impact of industrial processes. In this context, it is relevant to understand how industry already reflects sustainability and to identify competences needed for sustainable development.
In addition to technical content, modern university courses should also teach professional skills to enhance students' competencies for their future work. This competency-driven approach, covering technical as well as professional skills, makes it necessary to find a suitable way to integrate both into the corresponding module in a scalable and flexible manner. Agile development, for example, is essential for the development of modern systems and applications and makes use of dedicated professional skills of the team members, like structured group dynamics and communication, to enable fast and reliable development. This paper presents a flexible and easily integrated approach for incorporating Scrum, an agile development method, into the lab of an existing module. Due to the different roles in Scrum, the students achieve individual learning success, gain valuable insight into modern system development, and strengthen their communication and organization skills. The approach is implemented and evaluated in the module Vehicle Systems, but it can be transferred easily to other technical courses as well. The evaluation of the implementation considers feedback from all stakeholders (students, supervisors, and lecturers) and monitors the observations made during the project lifetime.
Throughout the last decade, and particularly in 2022, water scarcity has become a critical concern in Morocco and other Mediterranean countries. The lack of rainfall during spring was worsened by a succession of heat waves during the summer. To address this drought, innovative solutions, including the use of new technologies such as hydrogels, will be essential to transform agriculture. This paper presents the findings of a study that evaluated the impact of hydrogel application on onion (Allium cepa) cultivation in Meknes, Morocco. The treatments investigated in this study comprised two different types of hydrogel-based soil additives (Arbovit® polyacrylate and Huminsorb® polyacrylate), applied at two rates (30 and 20 kg/ha), and irrigated at two levels of water supply (100% and 50% of daily crop evapotranspiration; ETc). Two control treatments were included, without hydrogel application and with both water amounts. The experiment was conducted in an open field using a completely randomized design. The results indicated a significant impact of hydrogel type, hydrogel dose, and water dose on onion plant growth, as evidenced by various vegetation parameters. Among the hydrogels tested, Huminsorb® polyacrylate produced the most favorable outcomes, with treatment T9 (100%, HP, 30 kg/ha) yielding 70.55 t/ha; this represented an increase of 11 t/ha as compared to the 100% ETc treatment without hydrogel application. Moreover, the combination of hydrogel application with 50% ETc water stress showed promising results, with treatment T4 (HP, 30 kg, 50%) producing almost the same yield as the 100% ETc treatment without hydrogel while saving 208 mm of water.
Melting probes are a proven tool for the exploration of thick ice layers and clean sampling of subglacial water on Earth. Their compact size and ease of operation also make them a key technology for the future exploration of icy moons in our Solar System, most prominently Europa and Enceladus. For both mission planning and hardware engineering, metrics such as efficiency and expected performance in terms of achievable speed, power requirements, and necessary heating power have to be known.
Theoretical studies aim at describing thermal losses on the one hand, while laboratory experiments and field tests allow an empirical investigation of the true performance on the other hand. To investigate the practical value of a performance model for the operational performance in extraterrestrial environments, we first contrast measured data from terrestrial field tests on temperate and polythermal glaciers with results from basic heat loss models and a melt trajectory model. For this purpose, we propose conventions for the determination of two different efficiencies that can be applied to both measured data and models. One definition of efficiency is related to the melting head only, while the other definition considers the melting probe as a whole. We also present methods to combine several sources of heat loss for probes with a circular cross-section, and to translate the geometry of probes with a non-circular cross-section to analyse them in the same way. The models were selected in a way that minimizes the need to make assumptions about unknown parameters of the probe or the ice environment.
The results indicate that currently used models do not yet reliably reproduce the performance of a probe under realistic conditions. Melting velocities and efficiencies are consistently overestimated by 15 to 50% in the models, but qualitatively agree with the field test data. Hence, losses are observed that are not yet covered and quantified by the available loss models. We find that the deviation increases with decreasing ice temperature. We suspect that this mismatch is mainly due to the overly restrictive idealization of the probe model and the fact that the probe was not operated in an efficiency-optimized manner during the field tests. With respect to space mission engineering, we find that performance and efficiency models must be used with caution in unknown ice environments, as various ice parameters have a significant effect on the melting process. Some of these are difficult to estimate from afar.
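The head-related efficiency convention can be made concrete: the useful power is what warming and melting the swept ice thermodynamically require, and the efficiency relates it to the heating power delivered to the head. The material constants below are standard approximate values for ice; the geometry, speed, and power are illustrative, not taken from the field tests.

```python
import math

RHO_ICE = 917.0    # kg/m3, density of ice (approximate)
C_ICE = 2100.0     # J/(kg K), specific heat of ice (approximate)
L_FUSION = 334e3   # J/kg, latent heat of fusion of water

def useful_melt_power(radius_m, speed_m_s, ice_temp_c):
    # Cross-section swept per second times the energy needed to warm
    # that ice mass from ice_temp_c to 0 degC and melt it.
    area = math.pi * radius_m ** 2
    mass_rate = RHO_ICE * area * speed_m_s
    return mass_rate * (C_ICE * (0.0 - ice_temp_c) + L_FUSION)

def head_efficiency(radius_m, speed_m_s, ice_temp_c, head_power_w):
    # Head-related efficiency: useful melting power over the heating
    # power supplied to the melting head; a probe-level efficiency
    # would divide by the probe's total power instead.
    return useful_melt_power(radius_m, speed_m_s, ice_temp_c) / head_power_w

# Illustrative probe: 6 cm head radius, 0.1 mm/s descent, -10 degC ice,
# 1 kW of heating power delivered to the head.
print(round(head_efficiency(0.06, 1e-4, -10.0, 1000.0), 2))
```

Everything the head power delivers beyond the useful melting power is loss, which is the quantity the field-test comparison shows current models underestimating.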
Antibias training is increasingly demanded and practiced in academia and industry to increase employees’ sensitivity to discrimination, racism, and diversity. Under the heading of “Diversity Management,” antibias trainings are mainly offered as one-off workshops intending to raise awareness of unconscious biases, create a diversity-affirming corporate culture, promote awareness of the potential of diversity, and ultimately enable the reflection of diversity in development processes. However, although the approach originates in childhood education, research and scientific articles on the sustainable effectiveness of antibias training in adulthood, especially in academia, are very scarce. In order to fill this research gap, the article aims to explore how sustainable the effects of individual antibias trainings on participants’ behavior are. To investigate this, participant observation in a qualitative pre–post setting was conducted, analyzing antibias training in an academic context. Two observers actively participated in the training sessions and documented the activities and reflection processes of the participants. Overall, the results question the effectiveness of single antibias trainings and show that a target-group-adaptive approach is mandatory, owing to the approach's background in early childhood education. Antibias work therefore needs to be adapted to the target group’s needs and realities of life. Furthermore, the study reveals that single antibias trainings must be embedded in a holistic diversity management approach to stimulate sustainable reflection processes among the target group. This article is one of the first to scientifically evaluate antibias training effectiveness, especially in engineering sciences and the university context.
In this paper, we provide an analytical study of the transmission eigenvalue problem with two conductivity parameters. We will assume that the underlying physical model is given by the scattering of a plane wave for an isotropic scatterer. In previous studies, this eigenvalue problem was analyzed with one conductive boundary parameter whereas we will consider the case of two parameters. We prove the existence and discreteness of the transmission eigenvalues as well as study the dependence on the physical parameters. We are able to prove monotonicity of the first transmission eigenvalue with respect to the parameters and consider the limiting procedure as the second boundary parameter vanishes. Lastly, we provide extensive numerical experiments to validate the theoretical work.
By developing innovative solutions to social and environmental problems, sustainable ventures carry great potential. Entrepreneurship, which focuses especially on new venture creation, can be developed through education, and universities in particular are called upon to provide an impetus for social change. But social innovations are associated with certain hurdles, which are related to their multi-dimensionality, i.e. the tension between creating social, environmental and economic value and dealing with a multiplicity of stakeholders. The already complex field of entrepreneurship education has to face these challenges. This paper, therefore, aims to identify starting points for the integration of sustainability into entrepreneurship education. To pursue this goal, experiences from three different project initiatives between the partner universities Lapland University of Applied Sciences, FH Aachen University of Applied Sciences and Turiba University are reflected upon, and the findings are systematically condensed into recommendations for education on sustainable entrepreneurship.
This study describes the development of a new combined polysaccharide-matrix-based technology for the immobilization of Lactobacillus rhamnosus GG (LGG) bacteria in biofilm form. The new composition allows for delivering the bacteria to the digestive tract in a manner that improves their robustness compared with planktonic cells and released biofilm cells. Granules consisting of a polysaccharide matrix with probiotic biofilms (PMPB) with high cell density (>9 log CFU/g) were obtained by immobilization in the optimized nutrient medium. Successful probiotic loading was confirmed by fluorescence microscopy and scanning electron microscopy. The developed prebiotic polysaccharide matrix significantly enhanced LGG viability under acidic (pH 2.0) and bile salt (0.3%) stress conditions. Enzymatic extract of feces, mimicking colon fluid in terms of cellulase activity, was used to evaluate the intestinal release of probiotics. PMPB granules showed the ability to gradually release a large number of viable LGG cells in the model colon fluid. In vivo, the oral administration of PMPB granules in rats resulted in the successful release of probiotics in the colon environment. The biofilm-forming incubation method of immobilization on a complex polysaccharide matrix tested in this study has shown high efficacy and promising potential for the development of innovative biotechnologies.
Market abstraction of energy markets and policies - application in an agent-based modeling toolbox
(2023)
In light of emerging challenges in energy systems, markets are prone to changing dynamics and market design. Simulation models are commonly used to understand the changing dynamics of future electricity markets. However, existing market models were often created with specific use cases in mind, which limits their flexibility and usability. This can impose challenges for using a single model to compare different market designs. This paper introduces a new method of defining market designs for energy market simulations. The proposed concept makes it easy to incorporate different market designs into electricity market models by using relevant parameters derived from analyzing existing simulation tools, morphological categorization and ontologies. These parameters are then used to derive a market abstraction and integrate it into an agent-based simulation framework, allowing for a unified analysis of diverse market designs. Furthermore, we showcase the usability of integrating new types of long-term contracts and over-the-counter trading. To validate this approach, two case studies are demonstrated: a pay-as-clear market and a pay-as-bid long-term market. These examples demonstrate the capabilities of the proposed framework.
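The contrast between the two demonstrated case studies, pay-as-clear and pay-as-bid, comes down to how accepted offers are remunerated after merit-order clearing. A minimal sketch with invented offers and an inelastic demand (this is a generic clearing illustration, not code from the described framework):

```python
def merit_order(asks, demand):
    # asks: (price, volume) supply offers; demand: inelastic volume.
    # Offers are sorted by price and accepted until demand is met;
    # the last accepted offer sets the uniform clearing price.
    accepted, remaining, price = [], demand, None
    for p, v in sorted(asks):
        if remaining <= 0:
            break
        take = min(v, remaining)
        accepted.append((p, take))
        remaining -= take
        price = p
    return price, accepted

asks = [(20, 50), (60, 50), (35, 50)]   # (EUR/MWh, MWh) offers
price, accepted = merit_order(asks, demand=80)

# Pay-as-clear: every accepted offer is paid the clearing price.
pay_as_clear = price * sum(v for _, v in accepted)
# Pay-as-bid: every accepted offer is paid its own offer price.
pay_as_bid = sum(p * v for p, v in accepted)
print(price, pay_as_clear, pay_as_bid)
```

The clearing logic is identical in both designs; only the settlement rule differs, which is what makes a parameterised market abstraction attractive for comparing them in one simulation framework.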
The feasibility study presents results of a hydrogen combustor integration for a Medium-Range aircraft engine using the Dry-Low-NOₓ Micromix combustion principle. Based on a simplified Airbus A320-type flight mission, a thermodynamic performance model of a kerosene and a hydrogen-powered V2530-A5 engine is used to derive the thermodynamic combustor boundary conditions. A new combustor design using the Dry-Low-NOₓ Micromix principle is investigated by slice model CFD simulations of a single Micromix injector for design and off-design operation of the engine. Combustion characteristics show typical Micromix flame shapes and good combustion efficiencies for all flight mission operating points. Nitric oxide emissions are significantly below ICAO CAEP/8 limits. For comparison of the Emission Index (EI) for NOₓ emissions between kerosene and hydrogen operation, an energy (kerosene) equivalent Emission Index is used.
A full 15° sector model CFD simulation of the combustion chamber with multiple Micromix injectors including inflow homogenization and dilution and cooling air flows investigates the combustor integration effects, resulting NOₓ emission and radial temperature distributions at the combustor outlet. The results show that the integration of a Micromix hydrogen combustor in actual aircraft engines is feasible and offers, besides CO₂ free combustion, a significant reduction of NOₓ emissions compared to kerosene operation.
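The energy-equivalent Emission Index mentioned above can be computed by rescaling with the fuels' lower heating values, since hydrogen releases roughly 2.8 times the energy of kerosene per kilogram. The LHV figures below are standard approximate values and the conversion is a sketch of the general idea, not the exact procedure of the paper.

```python
# Lower heating values (approximate standard values).
LHV_KEROSENE = 43.0   # MJ/kg
LHV_H2 = 120.0        # MJ/kg

def energy_equivalent_ei(ei_h2_g_per_kg_h2):
    # Rescale a hydrogen Emission Index (g NOx per kg H2) to the mass
    # of kerosene delivering the same energy, so the EIs of both fuels
    # can be compared per unit of energy released.
    return ei_h2_g_per_kg_h2 * LHV_KEROSENE / LHV_H2

print(energy_equivalent_ei(1.0))  # ~0.358 g NOx per kerosene-equivalent kg
```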
In order to reduce the energy consumption of homes, it is important to make transparent which devices consume how much energy. However, power consumption is often only monitored in aggregate at the house energy meter. Disaggregating this power consumption into the contributions of individual devices can be achieved using machine learning. Our work aims at making state-of-the-art disaggregation algorithms accessible to users of the open-source home automation platform Home Assistant.
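A minimal sketch of the disaggregation idea, using combinatorial optimisation, one of the simplest non-intrusive load monitoring (NILM) baselines: choose the subset of known devices whose summed power draw best explains the meter reading. The device names and wattages are invented, and this is not the algorithm set targeted for Home Assistant.

```python
from itertools import chain, combinations

# Illustrative steady-state power draws in watts (hypothetical).
DEVICES = {"fridge": 120, "kettle": 2000, "tv": 90, "washer": 500}

def disaggregate(aggregate_w):
    # Enumerate all device subsets and return the one whose summed
    # draw is closest to the aggregate meter reading.
    names = list(DEVICES)
    subsets = chain.from_iterable(combinations(names, r)
                                  for r in range(len(names) + 1))
    return min(subsets,
               key=lambda s: abs(aggregate_w - sum(DEVICES[d] for d in s)))

print(disaggregate(2120))  # ('fridge', 'kettle'): 120 + 2000 = 2120
```

State-of-the-art approaches replace this brute-force matching with learned models over power time series, but the input/output contract, aggregate reading in, per-device attribution out, stays the same.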