Conference Proceeding
Clinical assessment of newly developed sensors is important for ensuring their validity. Comparing recordings of emerging electrocardiography (ECG) systems to a reference ECG system requires accurate synchronization of data from both devices. Current methods can be inefficient and prone to errors. To address this issue, three algorithms are presented to synchronize two ECG time series from different recording systems: Binned R-peak Correlation, R-R Interval Correlation, and Average R-peak Distance. These algorithms reduce ECG data to their cyclic features, mitigating inefficiencies and minimizing discrepancies between different recording systems. We evaluate the performance of these algorithms using high-quality data and then assess their robustness after manipulating the R-peaks. Our results show that R-R Interval Correlation was the most efficient, whereas the Average R-peak Distance and Binned R-peak Correlation were more robust against noisy data.
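The R-R Interval Correlation idea can be sketched in a few lines: reduce each recording to its beat-to-beat intervals and find the beat offset at which the two interval sequences correlate best. The following is a hypothetical pure-Python sketch of that principle, not the authors' implementation; function names, window sizes, and the assumption that R-peak timestamps are already detected are ours.

```python
# Hypothetical sketch of R-R Interval Correlation: align two ECG recordings
# by cross-correlating their sequences of R-R intervals (beat-to-beat gaps).
# R-peak detection is assumed to have happened already.

def rr_intervals(peak_times):
    """Beat-to-beat intervals from a sorted list of R-peak timestamps."""
    return [b - a for a, b in zip(peak_times, peak_times[1:])]

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den if den else 0.0

def best_beat_lag(peaks_ref, peaks_new, max_lag=10, window=20):
    """Find the beat offset that maximizes the R-R interval correlation."""
    ref, new = rr_intervals(peaks_ref), rr_intervals(peaks_new)
    best_lag, best_r = 0, -2.0
    for lag in range(-max_lag, max_lag + 1):
        a = ref[max(lag, 0):max(lag, 0) + window]
        b = new[max(-lag, 0):max(-lag, 0) + window]
        m = min(len(a), len(b))
        if m < 3:
            continue
        r = pearson(a[:m], b[:m])
        if r > best_r:
            best_lag, best_r = lag, r
    return best_lag
```

Because only interval sequences are compared, a constant clock offset between the two recorders drops out entirely, which is the appeal of reducing ECG data to cyclic features.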
Due to the decarbonization of the energy sector, electric distribution grids are undergoing a major transformation that is expected to increase the load on operating resources through new electrical loads and distributed energy resources. Grid operators therefore need to move gradually to active grid management in order to ensure safe and reliable grid operation. This, however, requires knowledge of key grid variables, such as node voltages, which is why the mass integration of measurement technology (smart meters) is necessary. A further problem is that a large part of the distribution grid topology is not sufficiently digitized and the models are partly faulty, so active grid operation management today must largely be carried out blindly. It is therefore part of current research to develop methods for determining unknown grid topologies from measurement data. In this paper, different clustering algorithms are presented and their performance in detecting the topology of low-voltage grids is compared. Furthermore, the influence of measurement uncertainties is investigated in the form of a sensitivity analysis.
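The underlying intuition can be illustrated with a toy example (not one of the paper's evaluated algorithms): meters on the same feeder branch see similar voltage fluctuations, so clustering their voltage time series by correlation recovers topological proximity. The threshold, the single-linkage grouping, and the meter names below are all illustrative assumptions.

```python
# Illustrative sketch: group smart meters into feeder branches by the
# correlation of their voltage time series. High correlation suggests
# electrical (and hence topological) proximity.

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den if den else 0.0

def cluster_meters(series, threshold=0.9):
    """Single-linkage clustering via union-find: meters i and j join the
    same cluster if their voltage series correlate above the threshold."""
    ids = list(series)
    parent = {i: i for i in ids}
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i
    for a in ids:
        for b in ids:
            if a < b and pearson(series[a], series[b]) > threshold:
                parent[find(a)] = find(b)   # union the two clusters
    return {i: find(i) for i in ids}
```

In practice, measurement noise blurs these correlations, which is exactly what the paper's sensitivity analysis probes.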
Modern driver assistance systems are evolving from pure driver assistance into independently acting automation systems. These systems still do not cover the full vehicle usage range, also called the operational design domain, and therefore require the human driver as a fall-back mechanism. Transitions of control and potential minimum risk manoeuvres are current research topics and will bridge the gap until fully autonomous vehicles are available. The authors showed in a demonstration that transition-of-control mechanisms can be further improved by using communication technology. Receiving incident type and position information via standardised vehicle-to-everything (V2X) messages can improve driver safety and comfort. The connected and automated vehicle's software framework can use this information to plan areas where the driver should take back control, initiating a transition of control that can be followed by a minimum risk manoeuvre in case of an unresponsive driver. This transition of control was implemented in a test vehicle and presented to the public during IEEE IV2022 (IEEE Intelligent Vehicles Symposium) in Aachen, Germany.
Messenger apps like WhatsApp and Telegram are frequently used for everyday communication, but they can also serve as platforms for illegal activity. Telegram allows public groups with up to 200,000 participants. Criminals use these public groups to trade illegal commodities and services, which is a concern for law enforcement agencies, who manually monitor suspicious activity in these chat rooms. This research demonstrates how natural language processing (NLP) can assist in analyzing such chat rooms, providing an explorative overview of the domain and facilitating purposeful analyses of user behavior. We provide a publicly available corpus of text messages annotated with entities and relations from four self-proclaimed black-market chat rooms. Our pipeline approach aggregates the product attributes extracted from user messages into profiles and uses these, together with the products sold, as features for clustering. The extracted structured information is the foundation for further data exploration, such as identifying top vendors or fine-grained price analyses. Our evaluation shows that pretrained word vectors perform better for unsupervised clustering than state-of-the-art transformer models, while the latter remain superior for sequence labeling.
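The profile-clustering step can be illustrated with a toy sketch: represent each vendor by the average vector of the lexicon words in their messages, then compare vendors by cosine similarity. The tiny hand-made "lexicon" below stands in for real pretrained word vectors and is purely a labeled assumption; it is not the paper's feature set.

```python
# Toy sketch of word-vector vendor profiles. LEXICON is a hypothetical 2-d
# stand-in for real pretrained word vectors (e.g. fastText embeddings).

LEXICON = {
    "weed": (1.0, 0.0), "hash": (0.9, 0.1),
    "account": (0.0, 1.0), "paypal": (0.1, 0.9),
}

def profile_vector(messages):
    """Average the vectors of all lexicon words across a vendor's messages."""
    vs = [LEXICON[w] for m in messages for w in m.split() if w in LEXICON]
    if not vs:
        return (0.0, 0.0)
    return tuple(sum(c) / len(vs) for c in zip(*vs))

def cosine(u, v):
    """Cosine similarity between two profile vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0
```

Vendors selling similar product categories end up with nearby profile vectors, which any off-the-shelf clustering algorithm can then group.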
Supervised machine learning and deep learning require large amounts of labeled data, which data scientists obtain in a manual, time-consuming annotation process. To mitigate this challenge, Active Learning (AL) proposes promising data points for annotators to label next, instead of a sequential or random sample. This method is intended to save annotation effort while maintaining model performance.
However, practitioners face many AL strategies for different tasks and need an empirical basis for choosing between them. Surveys categorize AL strategies into taxonomies without performance indications, and presentations of novel AL strategies compare performance only against a small subset of strategies. Our contribution addresses this empirical basis by introducing a reproducible active learning evaluation (ALE) framework for the comparative evaluation of AL strategies in NLP.
The framework allows the implementation of AL strategies with low effort and a fair data-driven comparison through defining and tracking experiment parameters (e.g., initial dataset size, number of data points per query step, and the budget). ALE helps practitioners to make more informed decisions, and researchers can focus on developing new, effective AL strategies and deriving best practices for specific use cases. With best practices, practitioners can lower their annotation costs. We present a case study to illustrate how to use the framework.
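The experiment parameters mentioned above (initial dataset size, query size, budget) slot into the standard pool-based AL loop. The following is a minimal sketch of that loop with a least-confidence strategy; the function names and parameters are illustrative, not ALE's actual API.

```python
# Minimal pool-based active-learning loop (illustrative, not ALE's API):
# each iteration, a strategy scores the unlabeled pool, the top-k points
# are "annotated", and the model is retrained until the budget is spent.

def uncertainty(prob):
    """Least-confidence score: higher means the model is less sure."""
    return 1.0 - max(prob)

def active_learning_loop(pool, oracle, train, predict_proba,
                         initial_size=4, query_size=2, budget=8):
    """Seed with the first initial_size points, then query in batches."""
    labeled = {x: oracle(x) for x in pool[:initial_size]}
    unlabeled = list(pool[initial_size:])
    model = train(labeled)
    spent = initial_size
    while unlabeled and spent < budget:
        ranked = sorted(unlabeled,
                        key=lambda x: uncertainty(predict_proba(model, x)),
                        reverse=True)
        for x in ranked[:query_size]:
            if spent >= budget:
                break
            labeled[x] = oracle(x)      # ask the (simulated) annotator
            unlabeled.remove(x)
            spent += 1
        model = train(labeled)
    return model, labeled
```

Tracking `initial_size`, `query_size`, and `budget` as explicit experiment parameters is what makes runs of different strategies comparable.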
Extracting workflow nets from textual descriptions can be used to simplify guidelines or to formalize textual descriptions of formal processes such as business processes and algorithms. Manually extracting processes, however, requires domain expertise and effort. While automatic process model extraction is desirable, annotating texts with formalized process models is expensive, so there are only a few machine-learning-based extraction approaches. Rule-based approaches, in turn, require domain specificity to work well and can rarely distinguish relevant from irrelevant information in textual descriptions. In this paper, we present GUIDO, a hybrid approach to the process model extraction task that first classifies sentences regarding their relevance to the process model, using a BERT-based sentence classifier, and then extracts a process model from the sentences classified as relevant, using dependency parsing. The presented approach achieves significantly better results than a pure rule-based approach: GUIDO achieves an average behavioral similarity score of 0.93. Still, compared to purely machine-learning-based approaches, the annotation costs stay low.
In recent years, the development of large pretrained language models, such as BERT and GPT, significantly improved information extraction systems on various tasks, including relation classification. State-of-the-art systems are highly accurate on scientific benchmarks. A lack of explainability is currently a complicating factor in many real-world applications. Comprehensible systems are necessary to prevent biased, counterintuitive, or harmful decisions.
We introduce semantic extents, a concept for analyzing decision patterns in the relation classification task. Semantic extents are the most influential parts of texts with respect to classification decisions. Our definition allows similar procedures to determine semantic extents for humans and models, and we provide an annotation tool and a software framework to determine them conveniently and reproducibly. Comparing both reveals that models tend to learn shortcut patterns from data. These patterns are hard to detect with current interpretability methods, such as input reductions. Our approach can help detect and eliminate spurious decision patterns during model development and can thereby increase the reliability and security of natural language processing systems, an essential step toward enabling applications in critical areas like healthcare or finance. Moreover, our work opens new research directions for developing methods to explain deep learning models.
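For context, the input-reduction baseline mentioned above can be sketched generically: greedily drop tokens as long as the model's predicted label does not change, so the surviving tokens approximate the part of the input driving the decision. This is a sketch of the generic technique, not of the semantic-extent procedure itself; `predict` is a stand-in for any classifier.

```python
# Generic greedy input reduction (an interpretability baseline): repeatedly
# drop the first token whose removal leaves the predicted label unchanged,
# until no further token can be dropped.

def reduce_input(tokens, predict):
    """Return a minimal token subsequence preserving the model's prediction."""
    label = predict(tokens)
    reduced = list(tokens)
    changed = True
    while changed and len(reduced) > 1:
        changed = False
        for i in range(len(reduced)):
            candidate = reduced[:i] + reduced[i + 1:]
            if predict(candidate) == label:
                reduced = candidate
                changed = True
                break
    return reduced
```

The reduced inputs such methods produce are often unnatural fragments, which is one reason shortcut patterns remain hard to spot with them.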
This work proposes a hybrid algorithm combining an Artificial Neural Network (ANN) with a conventional local path planner to navigate UAVs efficiently in various unknown urban environments. The proposed method, a Hybrid Artificial Neural Network Avoidance System, is called HANNAS. The ANN analyses a video stream and classifies the current environment. This information is used to set several control parameters of a conventional local path planner, the 3DVFH*. The local path planner then plans the path toward a specific goal point based on distance data from a depth camera. We trained and tested a state-of-the-art image segmentation algorithm, PP-LiteSeg. The proposed HANNAS method reaches a failure probability of 17%, which is less than half the failure probability of the baseline and around half the failure probability of an improved, bio-inspired version of the 3DVFH*. The proposed HANNAS method shows no disadvantages regarding flight time or flight distance.
Rocket engine test facilities and launch pads are typically equipped with a guide tube. Its purpose is to ensure the controlled and safe routing of the hot exhaust gases. In addition, the guide tube induces a suction that affects the nozzle flow, namely the flow separation during transient start-up and shut-down of the engine. A cold-flow subscale nozzle in combination with a set of guide tubes was studied experimentally to determine the main influencing parameters.
Due to the increasing complexity of software projects, software development is becoming more and more dependent on teams. The quality of this teamwork can vary depending on the team composition, as teams are always a combination of different skills and personality types. This paper aims to answer the question of how a software development team can be described and what influence the personality of the team members has on team dynamics. For this purpose, a systematic literature review (n=48) and a literature search with the AI research assistant Elicit (n=20) were conducted. Result: a person's personality significantly shapes their thinking and actions, which in turn influences their behavior in software development teams. Team performance and satisfaction have been shown to be strongly influenced by personality; the quality of communication and the likelihood of conflict can also be attributed to it.
Autonomous agents require rich environment models to fulfill their missions. High-definition (HD) maps are a well-established map format that can represent semantic information, such as road shapes, road markings, traffic signs, or barriers, alongside the usual geometric information of the environment. The geometric resolution of HD maps can be precise to centimetre level. In this paper, we report on our approach of using HD maps as a map representation for autonomous load-haul-dump vehicles in open-pit mining operations. As the mine undergoes constant change, the map must be constantly updated. We therefore follow a lifelong mapping approach for updating the HD maps based on camera-based object detection and GPS data. We present our mapping algorithm based on the Lanelet2 map format, show our integration with the navigation stack of the Robot Operating System, and report experimental results on our lifelong mapping approach from a real open-pit mine.
The RoboCup Logistics League (RCLL) is a robotics competition in a production logistics scenario in the context of a Smart Factory. In the competition, a team of three robots needs to assemble products to fulfill various orders that are requested online during the game. This year, the Carologistics team was able to win the competition with a new approach to multi-agent coordination as well as significant changes to the robot’s perception unit and a pragmatic network setup using the cellular network instead of WiFi. In this paper, we describe the major components of our approach with a focus on the changes compared to the last physical competition in 2019.
The work in modern open-pit and underground mines requires transporting large amounts of resources between fixed points. Navigating to these fixed points is a repetitive task that can be automated. The challenge in automating the navigation of vehicles commonly used in mines lies in the systemic properties of such vehicles. Many mining vehicles, such as the one used in the research for this paper, steer via an articulated joint that bends the vehicle's drive axis to change its course, and use a hydraulic drive system to actuate axial drive components or the movements of tippers, if available. To address the difficulties of controlling such a vehicle, we present a model-predictive approach. While control optimisation based on a parallel error minimisation of the predicted state has already been established in the past, we provide insight into the design and implementation of an MPC for an articulated mining vehicle and show the results of real-world experiments in an open-pit mine environment.
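The core idea of such a controller can be sketched with a common simplified kinematic model of articulated steering and a tiny receding-horizon search: candidate articulation rates are rolled forward through the model, and the one whose predicted end pose is closest to the target is applied. The model, body lengths, joint limit, and grid of candidate rates below are illustrative assumptions, not the paper's controller.

```python
import math

# Sketch of MPC for an articulated vehicle. step() uses a common simplified
# kinematic model: l1/l2 are the front/rear body lengths from axle to joint,
# gamma is the articulation angle, and the joint is clamped to +-0.7 rad.

def step(state, v, gamma_rate, dt, l1=1.5, l2=1.5):
    """One Euler step; state = (x, y, theta, gamma)."""
    x, y, theta, gamma = state
    theta_dot = (v * math.sin(gamma) + l2 * gamma_rate) / (l1 * math.cos(gamma) + l2)
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + theta_dot * dt,
            max(-0.7, min(0.7, gamma + gamma_rate * dt)))   # joint limit

def mpc_choose(state, target, v=1.0, dt=0.5, horizon=6):
    """Score constant articulation rates over the horizon; return the rate
    whose predicted end position is closest to the target (receding horizon:
    only this first action is applied, then the search repeats)."""
    best_rate, best_cost = 0.0, float("inf")
    for rate in (-0.4, -0.2, 0.0, 0.2, 0.4):
        s = state
        for _ in range(horizon):
            s = step(s, v, rate, dt)
        cost = (s[0] - target[0]) ** 2 + (s[1] - target[1]) ** 2
        if cost < best_cost:
            best_rate, best_cost = rate, cost
    return best_rate
```

A real implementation would optimise a full rate sequence with a solver rather than grid-searching constant rates, but the predict-then-minimise structure is the same.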
This paper presents an approach for reducing the cognitive load of humans working in quality control (QC) for production processes that adhere to the 6σ methodology. While 100% QC requires every part to be inspected, this effort can be reduced when a human-in-the-loop QC process is supported by an anomaly detection system that presents for manual inspection only those parts with a significant likelihood of being defective. The approach shows good results when applied to image-based QC for metal textile products.
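The screening principle can be reduced to a minimal sketch (the paper's actual detector for image data is far richer): score each part by its deviation from the feature statistics of known-good parts and route only high scorers to manual inspection. The scalar feature and the 3-sigma threshold below are illustrative assumptions.

```python
# Minimal anomaly-screening sketch: fit mean and standard deviation on a
# scalar feature of known-good parts, then flag parts deviating by more
# than k standard deviations for manual inspection.

def fit(good_samples):
    """Mean and (population) standard deviation of known-good parts."""
    n = len(good_samples)
    mean = sum(good_samples) / n
    var = sum((x - mean) ** 2 for x in good_samples) / n
    return mean, var ** 0.5

def needs_inspection(x, mean, std, k=3.0):
    """True if the part's feature deviates more than k standard deviations."""
    return abs(x - mean) > k * std
```

Only the flagged minority reaches the human inspector, which is where the cognitive-load reduction comes from.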
Existing residential buildings have an average lifetime of 100 years. Many of these buildings will exist for at least another 50 years. To increase the efficiency of these buildings while keeping costs at reasonable rates, they can be retrofitted with sensors that deliver information to central control units for heating, ventilation and electricity. This retrofitting process should happen with minimal intervention into the existing infrastructure and requires new approaches to sensor design and data transmission. At FH Aachen University of Applied Sciences, students of different disciplines work together to learn how to design, build, deploy and operate such sensors. The presented teaching project has already created a low-power design for a combined CO2, temperature and humidity measurement device that can be easily integrated into most home automation systems.
The integration of high-temperature thermal energy storage into existing conventional power plants can help to reduce the CO2 emissions of those plants and, due to synergy effects, lead to lower capital expenditures for building energy storage systems [1]. One possible implementation is a molten salt storage system with a powerful power-to-heat unit. This paper presents two possible control concepts for the start-up of the charging system of such a facility. The procedures are implemented in a detailed dynamic process model, and their performance and safety regarding the film temperatures at heat-transmitting surfaces are investigated in process simulations. To improve the accuracy in predicting the film temperatures, CFD simulations of the electrical heater are carried out and the results are merged with the dynamic model. The results show that both investigated control concepts are safe regarding the temperature limits. The gradient-controlled start-up performed better than the temperature-controlled start-up. Nevertheless, several uncertainties remain to be investigated further.
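The gradient-controlled start-up idea can be illustrated with a minimal setpoint-ramp sketch (the actual plant behaviour lives in the detailed dynamic process model): the heater power setpoint is raised only while the measured salt-temperature gradient stays below its limit, so the film temperature at the heat-transfer surface is not driven past its bound. All limits and step sizes below are illustrative assumptions.

```python
# Illustrative gradient-controlled ramp for a power-to-heat start-up:
# raise the heater power setpoint while the temperature gradient (K/min)
# is below its limit, hold near the limit, and back off well above it.

def ramp_setpoint(current_power, temp_gradient, max_gradient=2.0,
                  power_step=50.0, max_power=5000.0):
    """Next power setpoint (kW) given the measured temperature gradient."""
    if temp_gradient < max_gradient:
        return min(current_power + power_step, max_power)   # keep ramping
    if temp_gradient > 1.2 * max_gradient:
        return max(current_power - power_step, 0.0)         # back off
    return current_power                                    # hold
```

A temperature-controlled start-up would instead ramp against an absolute temperature setpoint; regulating the gradient directly is what lets the ramp proceed as fast as the film-temperature constraint allows.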
Digital forensics of smartphones is of utmost importance in many criminal cases. Modern smartphones store chats, photos, videos and other data that can be relevant for investigations, and with storage capacities of hundreds of gigabytes they are a primary target for forensic investigators. However, it is exactly this large amount of data that causes problems: extracting and examining the data from the multiple phones seized in a case takes more and more time. This bears the risk of wasting a lot of time on irrelevant phones while not enough time is left to analyze a phone that is worth examination. Forensic triage can help in this case: triage is a preselection step based on a subset of the data, performed before fully extracting all the data from the smartphone. It can accelerate subsequent investigations and is especially useful in cases where time is essential. The aim of this paper is to determine which and how much data from an Android smartphone can be made directly accessible to the forensic investigator without tedious investigations. For this purpose, an app has been developed that stores extremely little data on the handset itself and outputs the extracted data immediately to the forensic workstation in a human- and machine-readable format.
KNX is a protocol for smart building automation, e.g., for automated heating, air conditioning, or lighting. This paper analyses and evaluates state-of-the-art KNX devices from the manufacturers Merten, Gira and Siemens with respect to security. On the one hand, it investigates whether publicly known vulnerabilities, such as insecure storage of passwords in software, unencrypted communication, or denial-of-service attacks, can be reproduced in new devices. On the other hand, the security is analyzed in general, leading to the discovery of a previously unknown, high-risk vulnerability related to so-called BCU (authentication) keys.
The popularity of social media and particularly Instagram grows steadily. People use the different platforms to share pictures as well as videos and to communicate with friends. The potential of social media platforms is also being used for marketing purposes and for selling products. While for Facebook and other online social media platforms the purchase decision factors are investigated several times, Instagram stores remain mainly unattended so far. The present research work closes this gap and sheds light into decisive factors for purchasing products offered in Instagram stores. A theoretical research model, which contains selected constructs that are assumed to have a significant influence on Instagram user´s purchase intention, is developed. The hypotheses are evaluated by applying structural equation modelling on survey data containing 127 relevant participants. The results of the study reveal that ‘trust’, ‘personal recommendation’, and ‘usability’ significantly influences user’s buying intention in Instagram stores.