Conference Proceeding
Refine
Year of publication
- 2024 (5)
- 2023 (30)
- 2022 (43)
- 2021 (46)
- 2020 (46)
- 2019 (74)
- 2018 (64)
- 2017 (66)
- 2016 (65)
- 2015 (71)
- 2014 (51)
- 2013 (57)
- 2012 (59)
- 2011 (44)
- 2010 (48)
- 2009 (52)
- 2008 (37)
- 2007 (44)
- 2006 (60)
- 2005 (23)
- 2004 (22)
- 2003 (22)
- 2002 (25)
- 2001 (12)
- 2000 (12)
- 1999 (7)
- 1998 (8)
- 1997 (8)
- 1996 (4)
- 1995 (4)
- 1993 (6)
- 1992 (3)
- 1991 (2)
- 1990 (1)
- 1989 (3)
- 1988 (3)
- 1986 (1)
- 1985 (2)
- 1984 (3)
- 1983 (2)
- 1981 (2)
- 1980 (1)
- 1979 (1)
- 1978 (3)
- 1975 (2)
- 1973 (2)
Document Type
- Conference Proceeding (1146)
Language
- English (1146)
Keywords
- Biosensor (25)
- CAD (7)
- Finite-Elemente-Methode (7)
- civil engineering (7)
- Bauingenieurwesen (6)
- Blitzschutz (6)
- Enterprise Architecture (5)
- Clusterion (4)
- Energy storage (4)
- Gamification (4)
- Leadership (4)
- Limit analysis (4)
- Natural language processing (4)
- Power plants (4)
- Sonde (4)
- Telekommunikationsmarkt (4)
- hydrogen (4)
- solar sail (4)
- Air purification (3)
- Associated liquids (3)
Institute
- Fachbereich Elektrotechnik und Informationstechnik (230)
- Fachbereich Medizintechnik und Technomathematik (208)
- Fachbereich Luft- und Raumfahrttechnik (178)
- Fachbereich Energietechnik (177)
- IfB - Institut für Bioengineering (147)
- Solar-Institut Jülich (110)
- Fachbereich Maschinenbau und Mechatronik (107)
- Fachbereich Bauingenieurwesen (73)
- Fachbereich Wirtschaftswissenschaften (51)
- ECSM European Center for Sustainable Mobility (50)
- MASKOR Institut für Mobile Autonome Systeme und Kognitive Robotik (46)
- INB - Institut für Nano- und Biotechnologien (39)
- Fachbereich Chemie und Biotechnologie (23)
- Kommission für Forschung und Entwicklung (16)
- Nowum-Energy (11)
- Fachbereich Architektur (7)
- Fachbereich Gestaltung (4)
- Institut für Angewandte Polymerchemie (2)
- ZHQ - Bereich Hochschuldidaktik und Evaluation (2)
- Arbeitsstelle für Hochschuldidaktik und Studienberatung (1)
An interdisciplinary view on humane interfaces for digital shadows in the Internet of Production
(2022)
Digital shadows play a central role for the next generation industrial internet, also known as Internet of Production (IoP). However, prior research has not considered systematically how human actors interact with digital shadows, shaping their potential for success. To address this research gap, we assembled an interdisciplinary team of authors from diverse areas of human-centered research to propose and discuss design and research recommendations for the implementation of industrial user interfaces for digital shadows, as they are currently conceptualized for the IoP. Based on the four use cases of decision support systems, knowledge sharing in global production networks, human-robot collaboration, and monitoring employee workload, we derive recommendations for interface design and enhancing workers’ capabilities. This analysis is extended by introducing requirements from the higher-level perspectives of governance and organization.
A capacitive electrolyte-insulator-semiconductor (EISCAP) biosensor modified with Tobacco mosaic virus (TMV) particles for the detection of acetoin is presented. The enzyme acetoin reductase (AR) was immobilized on the surface of the EISCAP using TMV particles as nanoscaffolds. The study focused on the optimization of the TMV-assisted AR immobilization on the Ta₂O₅-gate EISCAP surface. The TMV-assisted acetoin EISCAPs were electrochemically characterized by means of leakage-current, capacitance-voltage, and constant-capacitance measurements. The TMV-modified transducer surface was studied via scanning electron microscopy.
In the past, CSP and PV have been seen as competing technologies. Despite massive reductions in the electricity generation costs of CSP plants, PV power generation is, at least during sunshine hours, significantly cheaper. If electricity is required not only during the daytime but around the clock, CSP with its inherent thermal energy storage gains an advantage in terms of levelized electricity cost (LEC). There are a few examples of projects in which CSP and PV plants have been co-located, meaning that they feed into the same grid connection point and ideally optimize their operation strategy for an overall benefit. Over the past eight years, TSK Flagsol has developed a plant concept that merges both solar technologies into one highly Integrated CSP-PV-Hybrid (ICPH) power plant. Here, unlike in simply co-located concepts, as analyzed e.g. in [1]–[4], excess PV power that would otherwise have to be dumped is used in electric molten salt heaters to increase the storage temperature, improving storage and conversion efficiency. The authors demonstrate the electricity cost sensitivity to subsystem sizing for various market scenarios and compare the resulting optimized ICPH plants with co-located hybrid plants. Independent of the three assumed feed-in tariffs, the ICPH plant shows an electricity cost advantage of almost 20% while maintaining the high degree of flexibility in power dispatch that is characteristic of CSP power plants. As all components of this innovative concept are well proven, the system is ready for commercial market implementation. A first project is already contracted and in early engineering execution.
Technical assessment of Brayton cycle heat pumps for the integration in hybrid PV-CSP power plants
(2022)
The hybridization of Concentrated Solar Power (CSP) and Photovoltaics (PV) systems is a promising approach to reduce the costs of solar power plants while increasing dispatchability and flexibility of power generation. High-temperature heat pumps (HT HP) can be utilized to boost the salt temperature in the thermal energy storage (TES) of a Parabolic Trough Collector (PTC) system from 385 °C up to 565 °C. A PV field can supply the power for the HT HP, thus effectively storing the PV power as thermal energy. Besides cost-efficiently storing energy from the PV field, the power block efficiency of the overall system is improved due to the higher steam parameters. This paper presents a technical assessment of Brayton cycle heat pumps to be integrated in hybrid PV-CSP power plants. As a first step, a theoretical analysis was carried out to find the most suitable working fluid. The analysis included air, argon (Ar), nitrogen (N₂) and carbon dioxide (CO₂); N₂ was chosen as the optimal working fluid for the system. After the selection of the working medium, different concepts for the arrangement of a HT HP in a PV-CSP hybrid power plant were developed and simulated in EBSILON®Professional. The concepts were evaluated technically by comparing the number of components required, pressure losses and the coefficient of performance (COP).
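The COP comparison mentioned above can be illustrated with the textbook expression for an ideal (isentropic, loss-free) reverse-Brayton heat pump, COP_heat = τ/(τ − 1) with τ = r^((γ−1)/γ). This is a sketch only: the pressure ratio and heat capacity ratios below are generic ideal-gas values, not parameters from the paper's EBSILON models.

```python
import math

def ideal_brayton_hp_cop(pressure_ratio: float, gamma: float) -> float:
    """Heating COP of an ideal reverse-Brayton heat pump.

    tau = r^((gamma-1)/gamma) is the isentropic temperature ratio across
    the compressor; for the loss-free cycle COP_heat = tau / (tau - 1).
    """
    tau = pressure_ratio ** ((gamma - 1.0) / gamma)
    return tau / (tau - 1.0)

# Illustrative comparison of candidate working fluids (gamma values are
# textbook ideal-gas approximations):
for fluid, gamma in [("N2", 1.40), ("Ar", 1.67), ("CO2", 1.29)]:
    print(fluid, round(ideal_brayton_hp_cop(3.0, gamma), 2))
```

In this idealized model a lower γ yields a higher COP at a given pressure ratio; real fluid selection, as in the paper, also weighs pressure losses and component count.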
Concentrated Solar Power (CSP) systems are able to store energy cost-effectively in their integrated thermal energy storage (TES). By intelligently combining Photovoltaics (PV) systems with CSP, a further cost reduction of solar power plants is expected, as well as an increase in dispatchability and flexibility of power generation. PV-powered Resistance Heaters (RH) can be deployed to raise the temperature of the molten salt hot storage from 385 °C up to 565 °C in a Parabolic Trough Collector (PTC) plant. To avoid freezing and decomposition of molten salt, the temperature distribution in the electrical resistance heater is investigated in the present study. For this purpose, a RH has been modeled and CFD simulations have been performed. The simulation results show that the hottest regions occur on the electric rod surface behind the last baffle. A technical optimization was performed by adjusting three parameters: Shell-baffle clearance, electric rod-baffle clearance and number of baffles. After the technical optimization was carried out, the temperature difference between the maximum temperature and the average outlet temperature of the salt is within the acceptable limits, thus critical salt decomposition has been avoided. Additionally, the CFD simulations results were analyzed and compared with results obtained with a one-dimensional model in Modelica.
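The heater sizing behind such a study can be sanity-checked with a zero-dimensional steady-state energy balance, in the spirit of the one-dimensional Modelica comparison mentioned above. The specific heat and all numbers below are generic solar-salt assumptions, not parameters from the paper.

```python
def salt_outlet_temperature(t_in_c, power_w, m_dot_kg_s, cp_j_kgk=1520.0):
    """Average outlet temperature of molten salt in an electric resistance
    heater from the steady-state energy balance Q = m_dot * cp * (T_out - T_in).
    cp ~ 1520 J/(kg K) is a typical solar-salt value (assumption)."""
    return t_in_c + power_w / (m_dot_kg_s * cp_j_kgk)

# Example: 1 MW of electric power raising salt from 385 degC towards 565 degC
t_out = salt_outlet_temperature(t_in_c=385.0, power_w=1.0e6, m_dot_kg_s=3.65)
```

The CFD study in the abstract addresses what this balance cannot: local hot spots behind the last baffle that may exceed the decomposition limit even when the average outlet temperature is acceptable.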
The Solar-Institut Jülich (SIJ) and the companies Hilger GmbH and Heliokon GmbH from Germany have developed a small-scale cost-effective heliostat, called “micro heliostat”. Micro heliostats can be deployed in small-scale concentrated solar power (CSP) plants to concentrate the sun's radiation for electricity generation, space or domestic water heating or industrial process heat. In contrast to conventional heliostats, the special feature of a micro heliostat is that it consists of dozens of parallel-moving, interconnected, rotatable mirror facets. The mirror facets array is fixed inside a box-shaped module and is protected from weathering and wind forces by a transparent glass cover. The choice of the building materials for the box, tracking mechanism and mirrors is largely dependent on the selected production process and the intended application of the micro heliostat. Special attention was paid to the material of the tracking mechanism as this has a direct influence on the accuracy of the micro heliostat. The choice of materials for the mirror support structure and the tracking mechanism is made in favor of plastic molded parts. A qualification assessment method has been developed by the SIJ in which a 3D laser scanner is used in combination with a coordinate measuring machine (CMM). For the validation of this assessment method, a single mirror facet was scanned and the slope deviation was computed.
New materials often lead to innovations and advantages in technical applications. This also applies to the particle receiver proposed in this work that deploys high-temperature and scratch resistant transparent ceramics. With this receiver design, particles are heated through direct-contact concentrated solar irradiance while flowing downwards through tubular transparent ceramics from top to bottom. In this paper, the developed particle receiver as well as advantages and disadvantages are described. Investigations on the particle heat-up characteristics from solar irradiance were carried out with DEM simulations which indicate that particle temperatures can reach up to 1200 K. Additionally, a simulation model was set up for investigating the dynamic behavior. A test receiver at laboratory scale has been designed and is currently being built. In upcoming tests, the receiver test rig will be used to validate the simulation results. The design and the measurement equipment is described in this work.
In this work, three patent pending calibration methods for heliostat fields of central receiver systems (CRS) developed by the Solar-Institut Jülich (SIJ) of the FH Aachen University of Applied Sciences are presented. The calibration methods can either operate in a combined mode or in stand-alone mode. The first calibration method, method A, foresees that a camera matrix is placed into the receiver plane where it is subjected to concentrated solar irradiance during a measurement process. The second calibration method, method B, uses an unmanned aerial vehicle (UAV) such as a quadrocopter to automatically fly into the reflected solar irradiance cross-section of one or more heliostats (two variants of method B were tested). The third calibration method, method C, foresees a stereo central camera or multiple stereo cameras installed e.g. on the solar tower whereby the orientations of the heliostats are calculated from the location detection of spherical red markers attached to the heliostats. The most accurate method is method A which has a mean accuracy of 0.17 mrad. The mean accuracy of method B variant 1 is 1.36 mrad and of variant 2 is 1.73 mrad. Method C has a mean accuracy of 15.07 mrad. For method B there is great potential regarding improving the measurement accuracy. For method C the collected data was not sufficient for determining whether or not there is potential for improving the accuracy.
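As an order-of-magnitude check of the reported accuracies, 0.17 mrad corresponds to a beam offset of roughly 1.7 cm at 100 m slant range. The sketch below turns a measured beam-centroid offset into an angular error; the geometry and names are illustrative, not the actual mathematics of methods A to C.

```python
import math

def pointing_error_mrad(aim_xyz, centroid_xyz, heliostat_xyz):
    """Angular tracking error in milliradians: the angle between the vector
    to the intended aim point and the vector to the measured beam centroid,
    both seen from the heliostat position (illustrative geometry)."""
    def unit(v):
        n = math.sqrt(sum(c * c for c in v))
        return [c / n for c in v]
    a = unit([t - h for t, h in zip(aim_xyz, heliostat_xyz)])
    b = unit([t - h for t, h in zip(centroid_xyz, heliostat_xyz)])
    dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(a, b))))
    return math.acos(dot) * 1000.0

# A 1.7 cm lateral beam offset at 100 m range:
err = pointing_error_mrad((100.0, 0.0, 0.0), (100.0, 0.017, 0.0), (0.0, 0.0, 0.0))
```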
This work presents a basic forecast tool for predicting direct normal irradiance (DNI) in hourly resolution, which the Solar-Institut Jülich (SIJ) is developing within a research project. The DNI forecast data shall be used for a parabolic trough collector (PTC) system with a concrete thermal energy storage (C-TES) located at the company KEAN Soft Drinks Ltd in Limassol, Cyprus. On a daily basis, 24-hour DNI prediction data in hourly resolution shall be automatically produced using free or very low-cost weather forecast data as input. The purpose of the DNI forecast tool is to automatically transfer the DNI forecast data on a daily basis to a main control unit (MCU). The MCU automatically makes a smart decision on the operation mode of the PTC system, such as steam production mode and/or C-TES charging mode. The DNI forecast tool was evaluated by comparing its forecasts against historical DNI measurements from an on-site weather station. The tool was tested using data from 56 days between January and March 2022, which included days with strong variation in DNI due to cloud passages. For the evaluation of the forecast reliability, three categories were created and the forecast data was sorted accordingly. The result was that the DNI forecast tool has a reliability of 71.4 % based on the tested days. This fulfils the SIJ's aim of around 70 % reliability, but the SIJ aims to further improve the DNI forecast quality.
Concerning current efforts to improve operational efficiency and to lower overall costs of concentrating solar power (CSP) plants with prediction-based algorithms, this study investigates the quality and uncertainty of nowcasting data regarding the implications for process predictions. DNI (direct normal irradiation) maps from an all-sky imager-based nowcasting system are applied to a dynamic prediction model coupled with ray tracing. The results underline the need for high-resolution DNI maps in order to predict net yield and receiver outlet temperature realistically. Furthermore, based on a statistical uncertainty analysis, a correlation is developed, which allows for predicting the uncertainty of the net power prediction based on the corresponding DNI forecast uncertainty. However, the study reveals significant prediction errors and the demand for further improvement in the accuracy at which local shadings are forecasted.
A promising approach to reduce the system costs of molten salt solar receivers is to enable the irradiation of the absorber tubes on both sides. The star design is an innovative receiver design pursuing this approach. The unconventional design leads to new challenges in controlling the system. This paper presents a control concept for a molten salt receiver system in star design. The control parameters are optimized in a defined test cycle by minimizing a cost function. The control concept is tested in realistic cloud passage scenarios based on real weather data. During these tests, the control system showed no sign of unstable behavior, but further research and development, such as integrating Model Predictive Control (MPC), is needed for it to perform sufficiently in every scenario. The presented concept is a starting point for this work.
This paper compares several blade element theory (BET) based propeller simulation tools, including an evaluation against static propeller ground tests and high-fidelity Reynolds-averaged Navier–Stokes (RANS) simulations. Two proprietary propeller geometries for paraglider applications are analysed in static and flight conditions. The RANS simulations are validated with the static test data and used as a reference for comparing the BET in flight conditions. The comparison includes the analysis of varying 2D aerodynamic airfoil parameters and different induced velocity calculation methods. The evaluation shows the strength of the BET tools compared to RANS simulations: the RANS simulations underpredict static experimental data within 10% relative error, while appropriate BET tools overpredict the RANS results by 15–20% relative error. A variation in 2D aerodynamic data demonstrates the need for highly accurate 2D data to obtain accurate BET results. The nonlinear BET coupled with XFOIL for the 2D aerodynamic data matches RANS best in static operation and flight conditions. The novel BET tool PropCODE combines both approaches and offers further correction models for highly accurate static and flight condition results.
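The element-wise integration at the heart of BET can be sketched in a deliberately minimal form. This sketch neglects drag and induced velocity and assumes a thin-airfoil lift slope cl = 2πα, so it only illustrates the discretization idea, not the coupled and nonlinear tools compared in the paper; all geometry values are invented.

```python
import math

def bet_static_thrust(radius_m, chord_m, pitch_deg, n_blades, rpm,
                      rho=1.225, n_elem=50):
    """Minimal static-thrust BET sketch: sum elementary lift over radial
    blade elements. Drag and induced velocity are neglected and a
    constant chord/pitch is assumed (illustrative only)."""
    omega = rpm * 2.0 * math.pi / 60.0
    r_hub = 0.15 * radius_m                  # assumed hub cutout
    dr = (radius_m - r_hub) / n_elem
    alpha = math.radians(pitch_deg)          # static: inflow angle ~ 0
    cl = 2.0 * math.pi * alpha               # thin-airfoil approximation
    thrust = 0.0
    for i in range(n_elem):
        r = r_hub + (i + 0.5) * dr           # element mid-point radius
        w = omega * r                        # local tangential velocity
        thrust += n_blades * 0.5 * rho * w * w * chord_m * cl * dr
    return thrust

t = bet_static_thrust(radius_m=0.2, chord_m=0.03, pitch_deg=8.0,
                      n_blades=2, rpm=3000)
```

In this simplified model thrust scales exactly with rpm squared; the induced-velocity coupling that the paper's tools include breaks that idealization.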
Reliable methods for automatic readability assessment have the potential to impact a variety of fields, ranging from machine translation to self-informed learning. Recently, large language models for the German language (such as GBERT and GPT-2-Wechsel) have become available, allowing the development of Deep Learning based approaches that promise to further improve automatic readability assessment. In this contribution, we studied the ability of ensembles of fine-tuned GBERT and GPT-2-Wechsel models to reliably predict the readability of German sentences. We combined these models with linguistic features and investigated the dependence of prediction performance on ensemble size and composition. Mixed ensembles of GBERT and GPT-2-Wechsel performed better than ensembles of the same size consisting of only GBERT or GPT-2-Wechsel models. Our models were evaluated in the GermEval 2022 Shared Task on Text Complexity Assessment on data of German sentences. On out-of-sample data, our best ensemble achieved a root mean squared error of 0.435.
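The mean-ensemble scoring used for such comparisons fits in a few lines. The toy predictions below are invented for illustration; the paper's actual ensemble members are fine-tuned transformer models combined with linguistic features.

```python
import math

def ensemble_rmse(model_predictions, targets):
    """RMSE of a simple mean ensemble: per-sample predictions of several
    models are averaged before scoring against the target complexity."""
    n = len(targets)
    ensemble = [sum(preds[i] for preds in model_predictions) / len(model_predictions)
                for i in range(n)]
    return math.sqrt(sum((e - t) ** 2 for e, t in zip(ensemble, targets)) / n)

# Two toy "models" rating sentence complexity on a continuous scale:
preds_a = [2.1, 4.0, 5.5]
preds_b = [1.9, 4.4, 5.1]
rmse = ensemble_rmse([preds_a, preds_b], targets=[2.0, 4.1, 5.4])
```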
Digital twins enable the modeling and simulation of real-world entities (objects, processes or systems), resulting in improvements in the associated value chains. The emerging field of quantum computing holds tremendous promise for evolving this virtualization towards Quantum (Digital) Twins (QDT) and ultimately Quantum Twins (QT). The quantum (digital) twin concept is not a contradiction in terms, but instead describes a hybrid approach that can be implemented using the technologies available today by combining classical computing and digital twin concepts with quantum processing. This paper presents the status quo of research and practice on quantum (digital) twins. It also discusses their potential to create competitive advantage through real-time simulation of highly complex, interconnected entities that helps companies better address changes in their environment and differentiate their products and services.
In this article we describe an Internet-of-Things sensing device with a wireless interface which is powered by the often-overlooked harvesting method of the Wiegand effect. The sensor can determine position, temperature or other resistively measurable quantities and can transmit the data via an ultra-low-power ultra-wideband (UWB) data transmitter. With this approach we can acquire, process, and wirelessly transmit data energy-self-sufficiently in a pulsed operation. A proof-of-concept system was built to prove the feasibility of the approach. The energy consumption of the system is analyzed, traced back in detail to the individual components, compared to the generated energy, and processed to identify further optimization options. Based on the proof-of-concept, an application demonstrator was developed. Finally, we point out possible use cases.
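The energy budget of such a pulsed, harvesting-powered node comes down to banking enough pulse energy per transmission. All numbers below are order-of-magnitude assumptions for illustration (a Wiegand pulse typically yields energy on the order of nanojoules); they are not measurements from the paper.

```python
import math

def pulses_per_transmission(e_pulse_j, harvester_efficiency,
                            e_acquire_j, e_tx_j):
    """Number of Wiegand pulses needed to bank enough energy for one
    acquire-process-transmit cycle (illustrative budget model)."""
    e_usable = e_pulse_j * harvester_efficiency
    return math.ceil((e_acquire_j + e_tx_j) / e_usable)

# Assumed values: 190 nJ per pulse, 50 % conversion efficiency, 1 uJ for
# acquisition/processing and 2 uJ for one UWB burst:
n = pulses_per_transmission(e_pulse_j=190e-9, harvester_efficiency=0.5,
                            e_acquire_j=1e-6, e_tx_j=2e-6)
```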
A Gamified Information System (GIS) implements game concepts and elements, such as affordances and game design principles, to motivate people. Based on the idea of developing a GIS to increase the motivation of software developers to perform software quality tasks, the research work at hand investigates relevant requirements from that target group. To this end, 14 interviews with software development experts were conducted and analyzed. According to the results, software developers prefer the affordances of points and narrative storytelling in a multiplayer, round-based setting. Furthermore, six design principles for the development of a GIS are derived.
When confining pressure is low or absent, extensional fractures are typical, with fractures occurring on unloaded planes in rock. These "paradox" fractures can be explained by a phenomenological extension strain failure criterion. In the past, a simple empirical criterion for fracture initiation in brittle rock was developed, but it makes unrealistic strength predictions in biaxial compression and tension. A new extension strain criterion overcomes this limitation by adding a weighted principal shear component. The weight is chosen such that the enriched extension strain criterion represents the same failure surface as the Mohr–Coulomb (MC) criterion. Thus, the MC criterion has been derived as an extension strain criterion predicting failure modes that are unexpected in the understanding of the failure of cohesive-frictional materials. In progressive damage of rock, the most likely fracture direction is orthogonal to the maximum extension strain. The enriched extension strain criterion is proposed as a threshold surface for crack initiation (CI) and crack damage (CD) and as a failure surface at peak (P). Examples show that the enriched extension strain criterion predicts much lower volumes of damaged rock mass compared to the simple extension strain criterion.
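The extension strain that the simple criterion thresholds follows from generalized Hooke's law: even under purely compressive principal stresses, Poisson coupling can make the minimum principal strain tensile. The sketch below uses a compression-positive sign convention and generic elastic constants; it shows the underlying kinematics only, not the enriched criterion with its weighted shear term.

```python
def extension_strain(sig1, sig2, sig3, E, nu):
    """Minimum principal strain from generalized Hooke's law,
    eps3 = (sig3 - nu*(sig1 + sig2)) / E, with compression positive.
    Negative eps3 means extension; the simple criterion predicts
    fracture once -eps3 exceeds a critical extension strain."""
    return (sig3 - nu * (sig1 + sig2)) / E

# Uniaxial compression (sig1 = 50 MPa, sig2 = sig3 = 0) of a rock with
# assumed E = 50 GPa and nu = 0.25 produces lateral extension:
eps3 = extension_strain(50e6, 0.0, 0.0, E=50e9, nu=0.25)
```

The resulting extension is orthogonal to the loading direction, which is why fractures appear on unloaded planes.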
The fourth industrial revolution presents a multitude of challenges for industries, one of which is the increased flexibility required of manufacturing lines as a result of increased consumer demand for individualised products. One solution to tackle this challenge is the digital twin, more specifically the standardised model of a digital twin known as the asset administration shell. The standardisation of an industry-wide communication tool is a critical step in enabling inter-company operations. This paper discusses the current state of asset administration shells, the frameworks used to host them, and the problems that need to be addressed. To tackle these issues, we propose an event-based server capable of drastically reducing response times between assets and asset administration shells, and a multi-agent system used for the orchestration and deployment of the shells in the field.
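The event-based idea can be sketched with a minimal in-process publish/subscribe bus: instead of polling a shell for property values, consumers react only when a value changes. Topic names and the value below are illustrative and not taken from any asset administration shell specification.

```python
from collections import defaultdict
from typing import Any, Callable

class EventBus:
    """Minimal publish/subscribe bus sketching event-based property
    updates between an asset and its consumers (illustrative only)."""

    def __init__(self) -> None:
        self._subscribers: dict = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
        """Register a handler to be called on every update of `topic`."""
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, value: Any) -> None:
        """Push a new value to all handlers; no polling involved."""
        for handler in self._subscribers[topic]:
            handler(value)

bus = EventBus()
received = []
bus.subscribe("asset1/temperature", received.append)
bus.publish("asset1/temperature", 72.5)   # handler fires immediately
```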
Industrial facilities must be thoroughly designed to withstand seismic actions, as they exhibit an increased loss potential due to the possibly wide-ranging damage consequences and the valuable process engineering equipment. Past earthquakes showed the social and political consequences of seismic damage to industrial facilities and sensitized the population and politicians worldwide to the possible hazard emanating from industrial facilities. However, a holistic approach for the seismic design of industrial facilities can presently be found neither in national nor in international standards. The introduction of EN 1998-4 of the new generation of Eurocode 8 will improve the normative situation with specific seismic design rules for silos, tanks, pipelines and secondary process components. The article presents essential aspects of the seismic design of industrial facilities based on the new generation of Eurocode 8, using the example of tank structures and secondary process components. The interaction effects of the process components with the primary structure are illustrated by means of the experimental results of a shaking table test of a three-story moment-resisting steel frame with different process components. Finally, an integrated approach of digital plant models based on building information modelling (BIM) and structural health monitoring (SHM) is presented, which provides not only a reliable decision-making basis for operation, maintenance and repair but also an excellent tool for rapid assessment of seismic damage.
Inference for high-dimensional data and inference for functional data are two topics discussed frequently in the current statistical literature. One way to cover both in a single approach is to work in a very general space for the underlying observations, such as a separable Hilbert space. We propose a general method for consistent hypothesis testing on the basis of random variables with values in separable Hilbert spaces. A projection idea lets us avoid concerns with the curse of dimensionality: we apply well-known test statistics from nonparametric inference to the projected data and integrate over all projections from a specific set with respect to suitable probability measures. In contrast to classical methods, which are applicable for real-valued random variables or random vectors of dimensions lower than the sample size, the tests can be applied to random vectors of dimensions larger than the sample size or even to functional and high-dimensional data. In general, resampling procedures such as bootstrap or permutation are suitable to determine critical values. The idea can be extended to the case of incomplete observations. Moreover, we develop an efficient algorithm for implementing the method. Examples are given for testing goodness-of-fit in a one-sample situation in [1] or for testing marginal homogeneity on the basis of a paired sample in [2]. Here, the test statistics in use can be seen as generalizations of the well-known Cramér–von Mises test statistics in the one- and two-sample cases. The treatment of other testing problems is possible as well. By using the theory of U-statistics, for instance, asymptotic null distributions of the test statistics are obtained as the sample size tends to infinity. Standard continuity assumptions ensure the asymptotic exactness of the tests under the null hypothesis and that the tests detect any alternative in the limit.
Simulation studies demonstrate size and power of the tests in the finite sample case, confirm the theoretical findings, and are used for the comparison with competing procedures. A possible application of the general approach is inference for stock market returns, also in high data frequencies. In the field of empirical finance, statistical inference of stock market prices usually takes place on the basis of related log-returns as data. In the classical models for stock prices, i.e., the exponential Lévy model, Black-Scholes model, and Merton model, properties such as independence and stationarity of the increments ensure an independent and identically distributed structure of the data. Specific trends during certain periods of the stock price processes can cause complications in this regard. In fact, our approach can compensate those effects by treating the log-returns as random vectors or even as functional data.
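The projection idea can be illustrated with a small stand-in: project both high-dimensional samples onto random unit directions, compute a two-sample statistic on each projection, and average over projections. For brevity the sketch uses the Kolmogorov-Smirnov statistic in place of the Cramér–von Mises type statistics of the abstract; all sample sizes, dimensions and seeds are arbitrary.

```python
import bisect
import math
import random

def two_sample_ks(xs, ys):
    """Two-sample Kolmogorov-Smirnov statistic: maximum distance
    between the two empirical CDFs over the pooled sample."""
    xs, ys = sorted(xs), sorted(ys)
    grid = xs + ys
    return max(abs(bisect.bisect_right(xs, t) / len(xs)
                   - bisect.bisect_right(ys, t) / len(ys)) for t in grid)

def projected_ks(x, y, n_proj=50, seed=0):
    """Average the KS statistic over random unit-direction projections of
    two d-dimensional samples (illustrative stand-in for the integrated
    Cramer-von Mises statistics of the paper)."""
    rng = random.Random(seed)
    d = len(x[0])
    total = 0.0
    for _ in range(n_proj):
        u = [rng.gauss(0.0, 1.0) for _ in range(d)]
        norm = math.sqrt(sum(c * c for c in u))
        u = [c / norm for c in u]
        px = [sum(a * b for a, b in zip(row, u)) for row in x]
        py = [sum(a * b for a, b in zip(row, u)) for row in y]
        total += two_sample_ks(px, py)
    return total / n_proj

# Identical distributions vs. a variance difference in dimension 20,
# with sample size 200 per group (dimension comparable to sample size):
rng = random.Random(1)
x = [[rng.gauss(0, 1) for _ in range(20)] for _ in range(200)]
y_same = [[rng.gauss(0, 1) for _ in range(20)] for _ in range(200)]
y_scaled = [[rng.gauss(0, 2) for _ in range(20)] for _ in range(200)]
d_same = projected_ks(x, y_same)
d_scaled = projected_ks(x, y_scaled)
```

A variance change is visible in every projected direction, so the averaged statistic separates the two cases clearly; in practice critical values would come from permutation resampling, as the abstract notes.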