Conference Proceeding
Non-intrusive measuring techniques have attracted a lot of interest in relation to both hydraulic modeling and prototype applications. Complementing acoustic techniques, significant progress has been made in the development of new optical methods. Computer vision techniques can help to extract new information, e.g. high-resolution velocity and depth data, from videos captured with relatively inexpensive, consumer-grade cameras. Depth cameras are sensors providing information on the distance between the camera and observed features. Currently, sensors with different working principles are available. Stereoscopic systems reference physical image features (passive systems) from two perspectives; in order to increase the number of features and improve the results, a sensor may also estimate the disparity between a detected light pattern and its original projection (active stereo system). In the current study, the RGB-D camera Intel RealSense D435, working on such a stereo vision principle, is used in different, typical hydraulic modeling applications. All tests were conducted at the Utah Water Research Laboratory. This paper demonstrates the performance and limitations of the RGB-D sensor, installed as a single camera and as camera arrays, applied 1) to detect the free surface of highly turbulent, aerated hydraulic jumps, of free-falling jets, and of an energy dissipation basin downstream of a labyrinth weir, and 2) to monitor local scour upstream and downstream of a Piano Key Weir. It is intended to share the authors' experiences with respect to camera settings, calibration, lighting conditions, and other requirements in order to promote this useful, easily accessible device. Results are compared to data from classical instrumentation and the literature. It is shown that even in difficult applications, e.g. the detection of a highly turbulent, fluctuating free surface, the RGB-D sensor may yield accuracy similar to that of classical, intrusive probes.
Direct methods, comprising limit and shakedown analysis, are a branch of computational mechanics. They play a significant role in mechanical and civil engineering design. The concept of direct methods aims to determine the ultimate load-bearing capacity of structures beyond the elastic range. For practical problems, direct methods lead to nonlinear convex optimization problems with a large number of variables and constraints. If strength and loading are random quantities, the problem of shakedown analysis is treated as a stochastic program. This paper presents chance-constrained programming, an effective method of stochastic programming, to solve the shakedown analysis problem under random strength conditions. In our investigation, the loading is deterministic, while the strength is modeled as a normally or lognormally distributed variable.
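To illustrate the chance-constrained idea in a minimal, hedged sketch (our own illustration, not the paper's formulation): when the strength R is normally distributed, an individual chance constraint P(g(x) <= R) >= p can be replaced by the deterministic equivalent g(x) <= mu_R - z_p * sigma_R, where z_p is the standard normal quantile:

```python
from statistics import NormalDist

def deterministic_equivalent(mu_r, sigma_r, p):
    """Upper bound on g(x) that replaces the chance constraint
    P(g(x) <= R) >= p when the strength R ~ Normal(mu_r, sigma_r).
    Parameter values below are illustrative, not from the paper."""
    z_p = NormalDist().inv_cdf(p)   # standard normal quantile
    return mu_r - z_p * sigma_r

# example: mean strength 100, std 5, required reliability 97.5 %
bound = deterministic_equivalent(100.0, 5.0, 0.975)   # ~ 90.2
```

For a lognormally distributed strength, the same transformation applies to ln R, since ln R is then normally distributed.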
Having well-defined control strategies for fuel cells that can efficiently detect errors and take corrective action is critically important for safety in all applications, and especially so in aviation. The algorithms not only ensure operator safety by monitoring the fuel cell and connected components, but also contribute to extending the health of the fuel cell, its durability, and its safe operation over its lifetime. While sensors provide peripheral data surrounding the fuel cell, the internal states of the fuel cell cannot be directly measured. To overcome this restriction, a Kalman filter has been implemented as an internal state observer.
Other safety conditions are evaluated using real-time data from every connected sensor, and corrective actions take place automatically to ensure safety. The algorithms discussed in this paper have been validated through Model-in-the-Loop (MiL) tests as well as practical validation at a dedicated test bench.
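A minimal sketch of a Kalman filter acting as an internal state observer (scalar, linear case; the function name and the model/noise parameters are illustrative, not taken from the paper):

```python
def kalman_step(x_est, p_est, z, a=1.0, h=1.0, q=0.01, r=0.1):
    """One predict/update cycle of a scalar linear Kalman filter.
    x_est: prior state estimate, p_est: its variance, z: new sensor
    reading; a, h, q, r are illustrative model and noise parameters."""
    # predict step: propagate the state and its uncertainty
    x_pred = a * x_est
    p_pred = a * p_est * a + q
    # update step: blend prediction and measurement via the Kalman gain
    k = p_pred * h / (h * p_pred * h + r)
    x_new = x_pred + k * (z - h * x_pred)
    p_new = (1.0 - k * h) * p_pred
    return x_new, p_new
```

Run in a loop over incoming sensor samples, such a filter tracks internal states that are only indirectly observable through the measurement model h.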
The Industrial Revolution 4.0 (IR4.0) era has driven many state-of-the-art technologies to be introduced, especially in the automotive industry. The rapid development of automotive industries in Europe has created a wide industry gap between the European Union (EU) and developing countries, such as those in South-East Asia (SEA). In light of this situation, FH Joanneum, Austria, together with European partners from FH Aachen, Germany, and Politecnico di Torino, Italy, is taking the initiative to close the gap using the Erasmus+ United grant from the EU. A consortium was founded to engage in automotive technology transfer using the European framework to Malaysian, Indonesian, and Thai Higher Education Institutions (HEI) as well as automotive industries. This could be achieved by establishing an Engineering Knowledge Transfer Unit (EKTU) in the respective SEA institutions, guided by the industry partners in their respective countries. This EKTU could offer updated, innovative, and high-quality training courses to increase graduates' employability in higher education institutions and strengthen relations between HEI and the wider economic and social environment by addressing university-industry cooperation, which is the regional priority for Asia. It is expected that the Capacity Building Initiative will improve the quality of higher education and enhance its relevance for the labor market and society in the SEA partner countries. The outcome of this project would greatly benefit the partners through a strong and complementary partnership targeting the automotive industry and enhanced larger-scale international cooperation between the European and SEA partners. It would also prepare the SEA HEI for a sustainable partnership with the automotive industry in the region as a means of income generation in the future.
When confining pressure is low or absent, extensional fractures are typical, with fractures occurring on unloaded planes in rock. These "paradox" fractures can be explained by a phenomenological extension strain failure criterion. In the past, a simple empirical criterion for fracture initiation in brittle rock was developed, but it makes unrealistic strength predictions in biaxial compression and tension. A new extension strain criterion overcomes this limitation by adding a weighted principal shear component. The weight is chosen such that the enriched extension strain criterion represents the same failure surface as the Mohr–Coulomb (MC) criterion. Thus, the MC criterion has been derived as an extension strain criterion predicting failure modes that are unexpected in the usual understanding of the failure of cohesive-frictional materials. In progressive damage of rock, the most likely fracture direction is orthogonal to the maximum extension strain. The enriched extension strain criterion is proposed as a threshold surface for crack initiation (CI) and crack damage (CD), and as a failure surface at peak (P). Examples show that the enriched extension strain criterion predicts much lower volumes of damaged rock mass compared to the simple extension strain criterion.
Inference on the basis of high-dimensional and functional data are two topics discussed frequently in the current statistical literature. A possibility to include both topics in a single approach is to work on a very general space for the underlying observations, such as a separable Hilbert space. We propose a general method for consistent hypothesis testing on the basis of random variables with values in separable Hilbert spaces. We avoid concerns with the curse of dimensionality by means of a projection idea. We apply well-known test statistics from nonparametric inference to the projected data and integrate over all projections from a specific set with respect to suitable probability measures. In contrast to classical methods, which are applicable to real-valued random variables or random vectors of dimension lower than the sample size, the tests can be applied to random vectors of dimension larger than the sample size or even to functional and high-dimensional data. In general, resampling procedures such as the bootstrap or permutation are suitable to determine critical values. The idea can be extended to the case of incomplete observations. Moreover, we develop an efficient algorithm for implementing the method. Examples are given for testing goodness-of-fit in a one-sample situation in [1] and for testing marginal homogeneity on the basis of a paired sample in [2]. Here, the test statistics in use can be seen as generalizations of the well-known Cramér-von Mises test statistics in the one-sample and two-sample cases. The treatment of other testing problems is possible as well. By using the theory of U-statistics, for instance, asymptotic null distributions of the test statistics are obtained as the sample size tends to infinity. Standard continuity assumptions ensure the asymptotic exactness of the tests under the null hypothesis and that the tests detect any alternative in the limit.
Simulation studies demonstrate the size and power of the tests in the finite-sample case, confirm the theoretical findings, and are used for comparison with competing procedures. A possible application of the general approach is inference for stock market returns, also at high data frequencies. In the field of empirical finance, statistical inference on stock market prices usually takes place on the basis of the related log-returns as data. In the classical models for stock prices, i.e., the exponential Lévy model, the Black-Scholes model, and the Merton model, properties such as independence and stationarity of the increments ensure an independent and identically distributed structure of the data. Specific trends during certain periods of the stock price processes can cause complications in this regard. In fact, our approach can compensate for those effects by treating the log-returns as random vectors or even as functional data.
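A minimal sketch of the projection idea (our own illustration with illustrative function names, not the authors' algorithm): project the d-dimensional samples onto random directions, apply a univariate two-sample Cramér-von Mises-type statistic to each projection, and average; critical values would then come from a permutation or bootstrap procedure:

```python
import random

def cramer_von_mises(x, y):
    """Two-sample Cramér-von Mises-type statistic on univariate data:
    squared distance between the empirical CDFs over the pooled points."""
    pooled = sorted(x + y)
    n, m = len(x), len(y)
    fx = [sum(v <= t for v in x) / n for t in pooled]
    fy = [sum(v <= t for v in y) / m for t in pooled]
    return sum((a - b) ** 2 for a, b in zip(fx, fy)) * n * m / (n + m) ** 2

def projected_statistic(X, Y, n_proj=50, seed=0):
    """Average the univariate statistic over random Gaussian projections
    of d-dimensional samples X, Y (lists of tuples)."""
    rng = random.Random(seed)
    d = len(X[0])
    total = 0.0
    for _ in range(n_proj):
        w = [rng.gauss(0, 1) for _ in range(d)]   # random direction
        px = [sum(wi * xi for wi, xi in zip(w, row)) for row in X]
        py = [sum(wi * yi for wi, yi in zip(w, row)) for row in Y]
        total += cramer_von_mises(px, py)
    return total / n_proj
```

Identical samples yield a statistic of zero, while a location shift produces a strictly positive value, which a permutation procedure can then calibrate.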
Despite the challenges of pioneering molten salt towers (MST), this remains the leading technology in central receiver power plants today, thanks to cost-effective storage integration and high cost reduction potential. The limited controllability under volatile solar conditions can cause significant losses, which are difficult to estimate without comprehensive modeling [1]. This paper presents a methodology to generate predictions of the dynamic behavior of the receiver system as part of an operating assistance system (OAS). Based on this, it delivers proposals for if and when to drain and refill the receiver during a cloudy period in order to maximize the net yield, and it quantifies the amount of net electricity gained by this. After prior analysis with a detailed dynamic two-phase model of the entire receiver system, two different reduced modeling approaches were developed and implemented in the OAS. A tailored decision algorithm utilizes both models to deliver the desired predictions efficiently and with appropriate accuracy.
The complex questions of today for a world of tomorrow are characterized by their global impact. Solutions must therefore not only be sustainable in the sense of the three pillars of sustainability (economic, environmental, and social) but must also function globally. This goes hand in hand with the need for intercultural acceptance of developed services and products. To achieve this, engineers, as the problem solvers of the future, must be able to work in intercultural teams on appropriate solutions, and be sensitive to intercultural perspectives. To equip the engineers of the future with the so-called future skills, teaching concepts are needed in which students can acquire these methods and competencies in application-oriented formats. The presented course "Applying Design Thinking - Sustainability, Innovation and Interculturality" was developed to teach future skills from the competency areas Digital Key Competencies, Classical Competencies and Transformative Competencies. The CDIO Standard 3.0, in particular the standards 5, 6, 7 and 8, was used as a guideline. The course aims to prepare engineering students from different disciplines and cultures for their future work in an international environment by combining a digital teaching format with an interdisciplinary, transdisciplinary and intercultural setting for solving sustainability challenges. The innovative moment lies in the digital application of design thinking and the inclusion of intercultural as well as trans- and interdisciplinary perspectives in innovation development processes. In this paper, the concept of the course will be presented in detail and the particularities of a digital implementation of design thinking will be addressed. Subsequently, the potentials and challenges will be reflected and practical advice for integrating design thinking in engineering education will be given.
Residential and commercial buildings account for more than one-third of global energy-related greenhouse gas emissions. Integrated multi-energy systems at the district level are a promising way to reduce greenhouse gas emissions by exploiting economies of scale and synergies between energy sources. Planning district energy systems comes with many challenges in an ever-changing environment. Computational modelling established itself as the state-of-the-art method for district energy system planning. Unfortunately, it is still cumbersome to combine standalone models to generate insights that surpass their original purpose. Ideally, planning processes could be solved by using modular tools that easily incorporate the variety of competing and complementing computational models. Our contribution is a vision for a collaborative development and application platform for multi-energy system planning tools at the district level. We present challenges of district energy system planning identified in the literature and evaluate whether this platform can help to overcome these challenges. Further, we propose a toolkit that represents the core technical elements of the platform. Lastly, we discuss community management and its relevance for the success of projects with collaboration and knowledge sharing at their core.
In times of social climate protection movements, such as Fridays for Future, the priorities of society, industry and higher education are currently changing. The consideration of sustainability challenges is increasing. In the context of sustainable development, social skills are crucial to achieving the United Nations Sustainable Development Goals (SDGs). In particular, the impact that educational activities have on people, communities and society is therefore coming to the fore. Research has shown that people with high levels of social competence are better able to manage stressful situations, maintain positive relationships and communicate effectively. They are also associated with better academic performance and career success. However, especially in engineering programs, the social pillar is underrepresented compared to the environmental and economic pillars.
In response to these changes, higher education institutions should be more aware of their social impact - from individual forms of teaching to entire modules and degree programs. To specifically determine the potential for improvement and derive resulting change for further development, we present an initial framework for social impact measurement by transferring already established approaches from the business sector to the education sector. To demonstrate the applicability, we measure the key competencies taught in undergraduate engineering programs in Germany.
The aim is to prepare the students for success in the modern world of work and their future contribution to sustainable development. Additionally, the university can include the results in its sustainability report. Our method can be applied to different teaching methods and enables their comparison.
Clinical assessment of newly developed sensors is important for ensuring their validity. Comparing recordings of emerging electrocardiography (ECG) systems to a reference ECG system requires accurate synchronization of data from both devices. Current methods can be inefficient and prone to errors. To address this issue, three algorithms are presented to synchronize two ECG time series from different recording systems: Binned R-peak Correlation, R-R Interval Correlation, and Average R-peak Distance. These algorithms reduce ECG data to their cyclic features, mitigating inefficiencies and minimizing discrepancies between different recording systems. We evaluate the performance of these algorithms using high-quality data and then assess their robustness after manipulating the R-peaks. Our results show that R-R Interval Correlation was the most efficient, whereas the Average R-peak Distance and Binned R-peak Correlation were more robust against noisy data.
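As a hedged sketch of the R-R Interval Correlation idea (an illustrative implementation, not the authors' code): slide one R-R interval sequence over the other and keep the beat offset that maximises the Pearson correlation of the overlapping intervals:

```python
def best_offset(rr_a, rr_b, min_overlap=3):
    """Return the beat offset that best aligns two R-R interval series:
    rr_a[i] is paired with rr_b[i + offset], and the offset maximising
    the Pearson correlation over the overlap is kept."""
    def pearson(u, v):
        n = len(u)
        mu, mv = sum(u) / n, sum(v) / n
        cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
        du = sum((a - mu) ** 2 for a in u) ** 0.5
        dv = sum((b - mv) ** 2 for b in v) ** 0.5
        return cov / (du * dv) if du and dv else 0.0

    best, best_r = 0, float("-inf")
    for off in range(-(len(rr_a) - min_overlap),
                     len(rr_b) - min_overlap + 1):
        u = rr_a[max(0, -off):]          # overlapping part of series A
        v = rr_b[max(0, off):]           # overlapping part of series B
        k = min(len(u), len(v))
        r = pearson(u[:k], v[:k])
        if r > best_r:
            best_r, best = r, off
    return best
```

Because only the sequence of inter-beat intervals is compared, the method is insensitive to differences in sampling rate or amplitude between the two recording systems.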
Due to the decarbonization of the energy sector, electric distribution grids are undergoing a major transformation, which is expected to increase the load on the operating resources due to new electrical loads and distributed energy resources. Therefore, grid operators need to gradually move to active grid management in order to ensure safe and reliable grid operation. However, this requires knowledge of key grid variables, such as node voltages, which is why the mass integration of measurement technology (smart meters) is necessary. Another problem is the fact that a large part of the topology of the distribution grids is not sufficiently digitized and the models are partly faulty, which means that active grid operation management today has to be carried out largely blindly. It is therefore part of current research to develop methods for determining unknown grid topologies based on measurement data. In this paper, different clustering algorithms are presented and their performance in detecting the topology of low-voltage grids is compared. Furthermore, the influence of measurement uncertainties is investigated in the form of a sensitivity analysis.
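The core intuition behind measurement-based topology detection can be sketched as follows (a simplified illustration under our own assumptions, not one of the algorithms compared in the paper): meters supplied by the same feeder tend to show highly correlated voltage profiles, so grouping meters by pairwise correlation yields a crude topology estimate:

```python
def cluster_meters(voltages, threshold=0.9):
    """Group smart-meter voltage time series (dict: meter -> samples)
    whose pairwise Pearson correlation exceeds `threshold`, as a crude
    proxy for sharing a feeder (illustrative, not the paper's method)."""
    def pearson(u, v):
        n = len(u)
        mu, mv = sum(u) / n, sum(v) / n
        cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
        du = sum((a - mu) ** 2 for a in u) ** 0.5
        dv = sum((b - mv) ** 2 for b in v) ** 0.5
        return cov / (du * dv) if du and dv else 0.0

    meters = list(voltages)
    parent = {m: m for m in meters}      # union-find forest

    def find(m):
        while parent[m] != m:
            m = parent[m]
        return m

    # merge meters whose voltage profiles are strongly correlated
    for i, a in enumerate(meters):
        for b in meters[i + 1:]:
            if pearson(voltages[a], voltages[b]) > threshold:
                parent[find(b)] = find(a)

    groups = {}
    for m in meters:
        groups.setdefault(find(m), set()).add(m)
    return list(groups.values())
```

Real algorithms must additionally cope with measurement noise and choose the number of clusters, which is exactly where the sensitivity analysis in the paper comes in.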
Modern implementations of driver assistance systems are evolving from pure driver assistance to independently acting automation systems. Still, these systems do not cover the full vehicle usage range, also called the operational design domain, which requires the human driver as a fall-back mechanism. Transitions of control and potential minimum risk manoeuvres are current research topics and will bridge the gap until fully autonomous vehicles are available. The authors showed in a demonstration that transition-of-control mechanisms can be further improved by the use of communication technology. Receiving the incident type and position information via standardised vehicle-to-everything (V2X) messages can improve driver safety and comfort. The connected and automated vehicle's software framework can use this information to plan areas where the driver should take back control by initiating a transition of control, which can be followed by a minimum risk manoeuvre in case of an unresponsive driver. This transition of control was implemented in a test vehicle and presented to the public during the IEEE IV2022 (IEEE Intelligent Vehicles Symposium) in Aachen, Germany.
Messenger apps like WhatsApp and Telegram are frequently used for everyday communication, but they can also be utilized as a platform for illegal activity. Telegram allows public groups with up to 200,000 participants. Criminals use these public groups for trading illegal commodities and services, which becomes a concern for law enforcement agencies, who manually monitor suspicious activity in these chat rooms. This research demonstrates how natural language processing (NLP) can assist in analyzing these chat rooms, providing an explorative overview of the domain and facilitating purposeful analyses of user behavior. We provide a publicly available corpus of annotated text messages with entities and relations from four self-proclaimed black market chat rooms. Our pipeline approach aggregates the extracted product attributes from user messages to profiles and uses these with their sold products as features for clustering. The extracted structured information is the foundation for further data exploration, such as identifying the top vendors or fine-granular price analyses. Our evaluation shows that pretrained word vectors perform better for unsupervised clustering than state-of-the-art transformer models, while the latter is still superior for sequence labeling.
Supervised machine learning and deep learning require a large amount of labeled data, which data scientists obtain in a manual, time-consuming annotation process. To mitigate this challenge, Active Learning (AL) proposes promising data points for annotators to annotate next, instead of a sequential or random sample. This method is supposed to save annotation effort while maintaining model performance.
However, practitioners face many AL strategies for different tasks and need an empirical basis to choose between them. Surveys categorize AL strategies into taxonomies without performance indications. Presentations of novel AL strategies compare the performance to a small subset of strategies. Our contribution addresses the empirical basis by introducing a reproducible active learning evaluation (ALE) framework for the comparative evaluation of AL strategies in NLP.
The framework allows the implementation of AL strategies with low effort and a fair data-driven comparison through defining and tracking experiment parameters (e.g., initial dataset size, number of data points per query step, and the budget). ALE helps practitioners to make more informed decisions, and researchers can focus on developing new, effective AL strategies and deriving best practices for specific use cases. With best practices, practitioners can lower their annotation costs. We present a case study to illustrate how to use the framework.
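A typical AL query strategy that such a framework would compare is least-confidence uncertainty sampling; the following minimal sketch (illustrative, not the ALE API) selects the pool items whose top predicted class probability is lowest:

```python
def uncertainty_sampling(pool_probs, batch_size):
    """Pick the `batch_size` unlabeled examples whose predicted class
    probabilities are least confident (smallest max-probability)."""
    ranked = sorted(range(len(pool_probs)),
                    key=lambda i: max(pool_probs[i]))
    return ranked[:batch_size]

# toy pool: model confidence for 4 unlabeled texts over 2 classes
pool = [[0.95, 0.05], [0.55, 0.45], [0.80, 0.20], [0.51, 0.49]]
query = uncertainty_sampling(pool, 2)   # indices of the 2 least confident
```

In an evaluation framework, this strategy would be benchmarked against alternatives under identical experiment parameters (initial dataset size, query step size, budget).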
Extracting workflow nets from textual descriptions can be used to simplify guidelines or formalize textual descriptions of formal processes like business processes and algorithms. The task of manually extracting processes, however, requires domain expertise and effort. While automatic process model extraction is desirable, annotating texts with formalized process models is expensive. Therefore, there are only a few machine-learning-based extraction approaches. Rule-based approaches, in turn, require domain specificity to work well and can rarely distinguish relevant from irrelevant information in textual descriptions. In this paper, we present GUIDO, a hybrid approach to the process model extraction task that first classifies sentences regarding their relevance to the process model, using a BERT-based sentence classifier, and second extracts a process model from the sentences classified as relevant, using dependency parsing. The presented approach achieves significantly better results than a pure rule-based approach. GUIDO achieves an average behavioral similarity score of 0.93. Still, in comparison to purely machine-learning-based approaches, the annotation costs stay low.
In recent years, the development of large pretrained language models, such as BERT and GPT, significantly improved information extraction systems on various tasks, including relation classification. State-of-the-art systems are highly accurate on scientific benchmarks. A lack of explainability is currently a complicating factor in many real-world applications. Comprehensible systems are necessary to prevent biased, counterintuitive, or harmful decisions.
We introduce semantic extents, a concept to analyze decision patterns for the relation classification task. Semantic extents are the most influential parts of texts concerning classification decisions. Our definition allows similar procedures to determine semantic extents for humans and models. We provide an annotation tool and a software framework to determine semantic extents for humans and models conveniently and reproducibly. Comparing both reveals that models tend to learn shortcut patterns from data. These patterns are hard to detect with current interpretability methods, such as input reductions. Our approach can help detect and eliminate spurious decision patterns during model development. Semantic extents can increase the reliability and security of natural language processing systems. Semantic extents are an essential step in enabling applications in critical areas like healthcare or finance. Moreover, our work opens new research directions for developing methods to explain deep learning models.
Market abstraction of energy markets and policies - application in an agent-based modeling toolbox (2023)
In light of emerging challenges in energy systems, markets are prone to changing dynamics and market design. Simulation models are commonly used to understand the changing dynamics of future electricity markets. However, existing market models were often created with specific use cases in mind, which limits their flexibility and usability. This can impose challenges for using a single model to compare different market designs. This paper introduces a new method of defining market designs for energy market simulations. The proposed concept makes it easy to incorporate different market designs into electricity market models by using relevant parameters derived from analyzing existing simulation tools, morphological categorization and ontologies. These parameters are then used to derive a market abstraction and integrate it into an agent-based simulation framework, allowing for a unified analysis of diverse market designs. Furthermore, we showcase the usability of integrating new types of long-term contracts and over-the-counter trading. To validate this approach, two case studies are demonstrated: a pay-as-clear market and a pay-as-bid long-term market. These examples demonstrate the capabilities of the proposed framework.
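The two demonstrated clearing rules can be sketched in a few lines (a simplified single-period illustration under our own assumptions, not the framework's code): offers are accepted in merit order; under pay-as-clear all accepted offers receive the marginal price, while under pay-as-bid each accepted offer is paid its own bid:

```python
def pay_as_clear(offers, demand):
    """Clear a single-period market: accept the cheapest (price, volume)
    offers until `demand` is met; all accepted offers are paid the price
    of the marginal (last accepted) offer."""
    accepted, price = [], None
    for p, v in sorted(offers):          # merit order: cheapest first
        if demand <= 0:
            break
        take = min(v, demand)
        accepted.append((p, take))
        price, demand = p, demand - take
    return price, accepted

def pay_as_bid(offers, demand):
    """Same merit-order acceptance, but each offer is paid its own bid;
    returns total payment and the accepted offers."""
    _, accepted = pay_as_clear(offers, demand)
    return sum(p * v for p, v in accepted), accepted

offers = [(20.0, 50.0), (40.0, 50.0), (60.0, 50.0)]   # (price, volume)
price, accepted = pay_as_clear(offers, demand=80.0)    # marginal price 40
```

With these toy numbers, pay-as-clear pays 40 for all 80 units (3200 in total), whereas pay-as-bid pays only 2200, which illustrates why the two designs lead to different bidding incentives.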
This work proposes a hybrid algorithm combining an Artificial Neural Network (ANN) with a conventional local path planner to navigate UAVs efficiently in various unknown urban environments. The proposed method, a Hybrid Artificial Neural Network Avoidance System, is called HANNAS. The ANN analyses a video stream and classifies the current environment. This information about the current environment is used to set several control parameters of a conventional local path planner, the 3DVFH*. The local path planner then plans the path toward a specific goal point based on distance data from a depth camera. We trained and tested a state-of-the-art image segmentation algorithm, PP-LiteSeg. The proposed HANNAS method reaches a failure probability of 17%, which is less than half the failure probability of the baseline and around half the failure probability of an improved, bio-inspired version of the 3DVFH*. The proposed HANNAS method does not show any disadvantages regarding flight time or flight distance.
Rocket engine test facilities and launch pads are typically equipped with a guide tube. Its purpose is to ensure the controlled and safe routing of the hot exhaust gases. In addition, the guide tube induces a suction that affects the nozzle flow, namely the flow separation during transient start-up and shut-down of the engine. A cold-flow subscale nozzle in combination with a set of guide tubes was studied experimentally to determine the main influencing parameters.