Achieving the 17 Sustainable Development Goals (SDGs) set by the United Nations (UN) in 2015 requires global collaboration between different stakeholders. Industry, and in particular the engineers who shape industrial developments, has a special role to play, as engineers bear the responsibility of reflecting sustainability holistically in industrial processes. This means that, in addition to meeting technical specifications, engineers must also question the ecological, economic, and social effects of their own actions in order to act sustainably and contribute to the achievement of the SDGs. This requires competencies that enable engineers to apply all three pillars of sustainability to their own field of activity and to understand the global impact of industrial processes. In this context, it is relevant to understand how industry already reflects sustainability and to identify the competencies needed for sustainable development.
Supervised machine learning and deep learning require large amounts of labeled data, which data scientists obtain in a manual, time-consuming annotation process. To mitigate this challenge, Active Learning (AL) proposes promising data points for annotators to label next, instead of having them annotate sequentially or at random. This method aims to save annotation effort while maintaining model performance.
However, practitioners face many AL strategies for different tasks and need an empirical basis for choosing between them. Surveys categorize AL strategies into taxonomies without performance indications, and presentations of novel AL strategies compare performance only against a small subset of existing strategies. Our contribution addresses this empirical gap by introducing a reproducible active learning evaluation (ALE) framework for the comparative evaluation of AL strategies in NLP.
The framework allows AL strategies to be implemented with low effort and compared fairly in a data-driven way by defining and tracking experiment parameters (e.g., initial dataset size, number of data points per query step, and the budget). ALE helps practitioners make more informed decisions, while researchers can focus on developing new, effective AL strategies and deriving best practices for specific use cases. With such best practices, practitioners can lower their annotation costs. We present a case study to illustrate how to use the framework.
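The query loop that such a framework evaluates can be sketched as follows. This is a minimal pool-based active learning loop with uncertainty sampling; `label_fn`, `train_fn`, and `score_fn` are hypothetical stand-ins for the annotation oracle, model training, and an AL strategy's scoring function (none of these names come from ALE itself), and the parameters mirror the ones ALE tracks:

```python
import random

def active_learning_loop(pool, label_fn, train_fn, score_fn,
                         initial_size=10, query_size=5, budget=30, seed=42):
    """Minimal pool-based active learning loop (illustrative API).

    pool      -- list of unlabeled examples
    label_fn  -- oracle returning the label for an example
    train_fn  -- trains a model on a list of (example, label) pairs
    score_fn  -- scores a remaining example by model uncertainty
    """
    rng = random.Random(seed)
    unlabeled = list(pool)
    rng.shuffle(unlabeled)
    # Seed set: a random initial batch, as configured in the experiment.
    labeled = [(x, label_fn(x)) for x in unlabeled[:initial_size]]
    unlabeled = unlabeled[initial_size:]
    model = train_fn(labeled)
    spent = initial_size
    while spent < budget and unlabeled:
        # Query step: send the most uncertain examples to the annotator.
        unlabeled.sort(key=lambda x: score_fn(model, x), reverse=True)
        batch, unlabeled = unlabeled[:query_size], unlabeled[query_size:]
        labeled += [(x, label_fn(x)) for x in batch]
        spent += len(batch)
        model = train_fn(labeled)
    return model, labeled
```

Swapping in a different `score_fn` is all it takes to compare one AL strategy against another under identical experiment parameters.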
This paper presents an approach for reducing the cognitive load of humans working in quality control (QC) for production processes that adhere to the 6σ methodology. While 100% QC requires every part to be inspected, this effort can be reduced when a human-in-the-loop QC process is supported by an anomaly detection system that presents for manual inspection only those parts with a significant likelihood of being defective. This approach shows good results when applied to image-based QC for metal textile products.
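As an illustration of the human-in-the-loop idea (a simple statistical stand-in, not the paper's image-based detector), an anomaly gate might flag only parts whose measurement deviates strongly from a known-good reference; the scalar feature, the factor `k`, and the function names below are assumptions:

```python
from statistics import mean, stdev

def fit_reference(values):
    """Learn mean and standard deviation from measurements of known-good parts."""
    return mean(values), stdev(values)

def needs_inspection(value, ref, k=3.0):
    """Route a part to manual inspection if its measurement lies more than
    k standard deviations from the good-part reference; everything else
    passes without occupying the human inspector."""
    mu, sigma = ref
    return abs(value - mu) > k * sigma
```

In this scheme the inspector only ever sees the flagged minority of parts, which is the cognitive-load reduction the paper describes.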
Modern implementations of driver assistance systems are evolving from pure driver assistance into independently acting automation systems. These systems still do not cover the full vehicle usage range, also called the operational design domain, and therefore require the human driver as a fall-back mechanism. Transition of control and potential minimum risk manoeuvres are current research topics and will bridge the gap until fully autonomous vehicles are available. The authors showed in a demonstration that transition-of-control mechanisms can be further improved by using communication technology. Receiving the incident type and position via standardised vehicle-to-everything (V2X) messages can improve driver safety and comfort. The connected and automated vehicle's software framework can use this information to plan areas where the driver should take back control, initiating a transition of control that can be followed by a minimum risk manoeuvre in case of an unresponsive driver. This transition of control was implemented in a test vehicle and presented to the public during the IEEE IV 2022 (IEEE Intelligent Vehicles Symposium) in Aachen, Germany.
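The described takeover logic can be sketched as a small mode state machine. The mode names and transition conditions below are illustrative assumptions, not the test vehicle's actual software interfaces:

```python
from enum import Enum, auto

class Mode(Enum):
    AUTOMATED = auto()      # system drives within its operational design domain
    TRANSITION = auto()     # takeover requested from the human driver
    MANUAL = auto()         # driver has control
    MINIMUM_RISK = auto()   # minimum risk manoeuvre (e.g. controlled stop)

def next_mode(mode, incident_ahead, driver_took_over, timeout_expired):
    """One step of the takeover state machine (illustrative).

    incident_ahead   -- a V2X message reported an incident on the planned route
    driver_took_over -- the driver confirmed taking back control
    timeout_expired  -- the takeover time budget ran out
    """
    if mode is Mode.AUTOMATED and incident_ahead:
        return Mode.TRANSITION      # plan takeover before reaching the incident
    if mode is Mode.TRANSITION:
        if driver_took_over:
            return Mode.MANUAL      # successful transition of control
        if timeout_expired:
            return Mode.MINIMUM_RISK  # unresponsive driver: fall back to MRM
    return mode
```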
Due to the decarbonization of the energy sector, electric distribution grids are undergoing a major transformation, which is expected to increase the load on operating resources due to new electrical loads and distributed energy resources. Grid operators therefore need to gradually move to active grid management in order to ensure safe and reliable grid operation. However, this requires knowledge of key grid variables, such as node voltages, which is why the mass integration of measurement technology (smart meters) is necessary. A further problem is that a large part of the distribution grid topology is not sufficiently digitized and the available models are partly faulty, so active grid management today largely has to be carried out blindly. It is therefore a topic of current research to develop methods for determining unknown grid topologies from measurement data. In this paper, different clustering algorithms are presented and their performance in detecting the topology of low-voltage grids is compared. Furthermore, the influence of measurement uncertainties is investigated in the form of a sensitivity analysis.
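One simple data-driven route to topology detection, not necessarily among the algorithms compared in the paper, is to group meters whose voltage time series correlate strongly, since electrically close nodes see similar voltage fluctuations. A minimal sketch using greedy threshold clustering (all names and the threshold are illustrative):

```python
from statistics import mean

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length samples."""
    ma, mb = mean(a), mean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den

def cluster_meters(series, threshold=0.9):
    """Greedily group meters whose voltage profiles correlate above threshold.

    series -- dict mapping meter id to a list of voltage samples
    Returns a list of clusters (lists of meter ids); a stand-in for the
    clustering algorithms whose topology-detection performance is compared.
    """
    clusters = []
    for mid, v in series.items():
        for cluster in clusters:
            rep = series[cluster[0]]  # compare against the cluster's first meter
            if pearson(v, rep) >= threshold:
                cluster.append(mid)
                break
        else:
            clusters.append([mid])
    return clusters
```

Measurement uncertainty enters exactly here: noise on the voltage samples lowers the correlations, which is why a sensitivity analysis of the threshold behaviour matters.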
In order to reduce the energy consumption of homes, it is important to make transparent which devices consume how much energy. However, power consumption is often monitored only in aggregate at the house energy meter. Disaggregating this consumption into the contributions of individual devices can be achieved using machine learning. Our work aims at making state-of-the-art disaggregation algorithms accessible to users of the open-source home automation platform Home Assistant.
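As background, one classic disaggregation baseline, combinatorial optimisation, searches over on/off combinations of known appliance power ratings for the best match to the aggregate reading. The sketch below shows that baseline; it is not necessarily the algorithm exposed through Home Assistant, and the appliance names are made up:

```python
from itertools import product

def disaggregate(aggregate_watts, appliances):
    """Combinatorial-optimisation disaggregation baseline.

    appliances -- dict of appliance name to rated power draw in watts
    Returns the on/off assignment whose summed draw is closest to the
    aggregate meter reading. Exponential in the number of appliances,
    so only practical for small appliance sets.
    """
    names = list(appliances)
    best, best_err = None, float("inf")
    for states in product((0, 1), repeat=len(names)):
        total = sum(s * appliances[n] for s, n in zip(states, names))
        err = abs(aggregate_watts - total)
        if err < best_err:
            best, best_err = dict(zip(names, states)), err
    return best
```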
Digital forensics of smartphones is of utmost importance in many criminal cases. As modern smartphones store chats, photos, videos, and other data that can be relevant for investigations, and as they can have storage capacities of hundreds of gigabytes, they are a primary target for forensic investigators. However, it is exactly this large amount of data that causes problems: extracting and examining the data from multiple phones seized in the context of a case takes more and more time. This bears the risk of wasting a lot of time on irrelevant phones while not enough time is left to analyze a phone that is worth examining. Forensic triage can help in this case: such a triage is a preselection step based on a subset of the data and is performed before fully extracting all data from the smartphone. Triage can accelerate subsequent investigations and is especially useful in cases where time is essential. The aim of this paper is to determine which and how much data from an Android smartphone can be made directly accessible to the forensic investigator, without tedious investigations. For this purpose, an app has been developed that stores extremely little data on the handset itself and outputs the extracted data immediately to the forensic workstation in a human- and machine-readable format.
Clinical assessment of newly developed sensors is important for ensuring their validity. Comparing recordings of emerging electrocardiography (ECG) systems to a reference ECG system requires accurate synchronization of data from both devices. Current methods can be inefficient and prone to errors. To address this issue, three algorithms are presented to synchronize two ECG time series from different recording systems: Binned R-peak Correlation, R-R Interval Correlation, and Average R-peak Distance. These algorithms reduce ECG data to their cyclic features, mitigating inefficiencies and minimizing discrepancies between different recording systems. We evaluate the performance of these algorithms using high-quality data and then assess their robustness after manipulating the R-peaks. Our results show that R-R Interval Correlation was the most efficient, whereas the Average R-peak Distance and Binned R-peak Correlation were more robust against noisy data.
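To illustrate the underlying idea of interval-based synchronization (a simplified take, not the paper's exact R-R Interval Correlation algorithm), each recording can be reduced to its R-R intervals, after which one searches for the beat offset that best aligns the two interval sequences:

```python
def rr_intervals(peaks):
    """Successive differences between R-peak timestamps."""
    return [b - a for a, b in zip(peaks, peaks[1:])]

def best_lag(ref_rr, test_rr, max_lag=10):
    """Find the beat offset that aligns two R-R interval sequences by
    minimising the mean absolute interval difference (a simplified
    stand-in for matching the sequences by correlation)."""
    best, best_cost = 0, float("inf")
    for lag in range(0, max_lag + 1):
        pairs = list(zip(ref_rr[lag:], test_rr))
        if not pairs:
            break
        cost = sum(abs(a - b) for a, b in pairs) / len(pairs)
        if cost < best_cost:
            best, best_cost = lag, cost
    return best
```

Because only cyclic features (beat-to-beat intervals) are compared, amplitude and sampling differences between the two recording systems drop out of the alignment, which is the robustness property the abstract describes.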
Residential and commercial buildings account for more than one-third of global energy-related greenhouse gas emissions. Integrated multi-energy systems at the district level are a promising way to reduce these emissions by exploiting economies of scale and synergies between energy sources. Planning district energy systems comes with many challenges in an ever-changing environment. Computational modelling has established itself as the state-of-the-art method for district energy system planning. Unfortunately, it is still cumbersome to combine standalone models to generate insights that surpass their original purpose. Ideally, planning processes could be supported by modular tools that easily incorporate the variety of competing and complementary computational models. Our contribution is a vision for a collaborative development and application platform for multi-energy system planning tools at the district level. We present challenges of district energy system planning identified in the literature and evaluate whether this platform can help to overcome them. Further, we propose a toolkit that represents the core technical elements of the platform. Lastly, we discuss community management and its relevance for the success of projects with collaboration and knowledge sharing at their core.
In recent years, the development of large pretrained language models, such as BERT and GPT, has significantly improved information extraction systems on various tasks, including relation classification. State-of-the-art systems are highly accurate on scientific benchmarks, but a lack of explainability is currently a complicating factor in many real-world applications. Comprehensible systems are necessary to prevent biased, counterintuitive, or harmful decisions.
We introduce semantic extents, a concept for analyzing decision patterns in the relation classification task. Semantic extents are the most influential parts of a text with respect to a classification decision. Our definition allows similar procedures to determine semantic extents for humans and models, and we provide an annotation tool and a software framework to do so conveniently and reproducibly. Comparing the two reveals that models tend to learn shortcut patterns from data. These patterns are hard to detect with current interpretability methods, such as input reductions. Our approach can help detect and eliminate spurious decision patterns during model development. Semantic extents can thus increase the reliability and security of natural language processing systems, an essential step toward enabling applications in critical areas like healthcare and finance. Moreover, our work opens new research directions for developing methods to explain deep learning models.
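A rough approximation of "most influential parts of a text", in the spirit of input reduction rather than the semantic-extent procedure itself, is leave-one-out token scoring: each token is ranked by how much its removal lowers the classifier's confidence. Here `classify` is a hypothetical confidence function:

```python
def token_importance(tokens, classify):
    """Rank tokens by the confidence drop caused by removing each one.

    classify -- maps a token list to a confidence score for the
                predicted relation (hypothetical interface)
    Returns tokens ordered from most to least influential.
    """
    base = classify(tokens)
    drops = []
    for i in range(len(tokens)):
        reduced = tokens[:i] + tokens[i + 1:]
        drops.append((base - classify(reduced), tokens[i]))  # large drop = influential
    return [tok for drop, tok in sorted(drops, key=lambda d: -d[0])]
```

If a model's top-ranked tokens are cue words unrelated to the annotated relation, that is exactly the kind of shortcut pattern the comparison with human semantic extents is meant to surface.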