Solar sails enable missions to the outer solar system and beyond, even though the solar radiation pressure decreases with the square of the solar distance. For such missions, the solar sail may gain a large amount of energy by first making one or more close approaches to the sun. In this paper, optimal trajectories for solar sail missions to the outer planets and into near-interstellar space (200 AU) are presented. It is shown that even near- and medium-term solar sails with relatively moderate performance allow reasonable transfer times to the boundaries of the solar system.
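The inverse-square decrease of solar radiation pressure mentioned above can be made concrete with a short calculation. The ideal-reflection model and the sail loading below are illustrative assumptions, not values from the paper:

```python
def sail_acceleration(r_au, sigma=0.01):
    """Acceleration (m/s^2) of an ideal, perfectly reflecting solar sail
    oriented normal to the sun at distance r_au (in AU).

    sigma is the sail loading in kg/m^2 (an assumed, illustrative value).
    """
    S0 = 1361.0                           # solar constant at 1 AU, W/m^2
    c = 299_792_458.0                     # speed of light, m/s
    pressure = 2.0 * S0 / c / r_au ** 2   # inverse-square scaling with distance
    return pressure / sigma

# A close solar approach at 0.25 AU yields 16 times the acceleration
# available at 1 AU, which is why a perihelion pass gains so much energy.
```

Because the pressure scales with 1/r², a perihelion pass trades a modest extra path length for a large energy gain, which is the effect the trajectory optimization exploits.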
The scientific interest in near-Earth asteroids, as well as the interest in potentially hazardous asteroids from the perspective of planetary defense, has led the space community to focus on near-Earth asteroid mission studies. A multiple near-Earth asteroid rendezvous mission with close-up observations of several objects can help to improve the characterization of these asteroids. This work explores the design of a solar-sail spacecraft for such a mission, focusing on the search for possible sequences of encounters and the trajectory optimization. This is done in two sequential steps: a sequence search by means of a simplified trajectory model and a set of heuristic rules based on astrodynamics, and a subsequent optimization phase. A shape-based approach for solar sailing has been developed and is used for the first phase. The effectiveness of the proposed approach is demonstrated through a fully optimized multiple near-Earth asteroid rendezvous mission. The results show that it is possible to visit five near-Earth asteroids within 10 years with near-term solar-sail technology.
Concentrated solar thermal power is an emerging technology that provides clean electricity for the growing energy market. Concentrated solar thermal power plant systems include the parabolic trough, the Fresnel collector, the solar dish, and the central receiver system.
Optical and thermal analysis is essential for high-concentration solar collector systems. A number of measurement techniques and systems exist for characterizing the optical and thermal efficiency of concentrated solar thermal systems.
For each system, the structure, components, and specific characteristics are described. The chapter additionally presents an outline for the calculation of system performance as well as operation and maintenance topics. One main focus is on the models of components and their construction details, as well as the different types on the market. In the later part of this article, different criteria for the choice of technology are analyzed in detail.
Concentrating solar power
(2022)
The focus of this chapter is the production of power and the use of the heat produced from concentrated solar thermal power (CSP) systems.
The chapter starts with the general theoretical principles of concentrating systems, including the description of the concentration ratio and the energy and mass balance. The main part covers power conversion systems, addressing both solar-only operation and the increase of operational hours.
Solar-only operation includes the use of steam turbines, gas turbines, organic Rankine cycles, and solar dishes. The operational hours can be increased with hybridization and with storage.
Another important topic is cogeneration, where solar cooling, desalination, and the usage of heat are described.
Many examples of commercial CSP power plants and research facilities, both from the past and currently installed and in operation, are described in detail.
The chapter closes with economic and environmental aspects and with the future potential of the development of CSP around the world.
A Gamified Information System (GIS) implements game concepts and elements, such as affordances and game design principles, to motivate people. Based on the idea of developing a GIS to increase the motivation of software developers to perform software quality tasks, the research work at hand investigates the relevant requirements of that target group. To this end, 14 interviews with software development experts were conducted and analyzed. According to the results, software developers prefer the affordances points and narrative storytelling in a multiplayer and round-based setting. Furthermore, six design principles for the development of a GIS are derived.
Wearable EEG has gained popularity in recent years driven by promising uses outside of clinics and research. The ubiquitous application of continuous EEG requires unobtrusive form-factors that are easily acceptable by the end-users. In this progression, wearable EEG systems have been moving from full scalp to forehead and recently to the ear. The aim of this study is to demonstrate that emerging ear-EEG provides similar impedance and signal properties as established forehead EEG. EEG data using eyes-open and closed alpha paradigm were acquired from ten healthy subjects using generic earpieces fitted with three custom-made electrodes and a forehead electrode (at Fpx) after impedance analysis. Inter-subject variability in in-ear electrode impedance ranged from 20 kΩ to 25 kΩ at 10 Hz. Signal quality was comparable with an SNR of 6 for in-ear and 8 for forehead electrodes. Alpha attenuation was significant during the eyes-open condition in all in-ear electrodes, and it followed the structure of power spectral density plots of forehead electrodes, with the Pearson correlation coefficient of 0.92 between in-ear locations ELE (Left Ear Superior) and ERE (Right Ear Superior) and forehead locations, Fp1 and Fp2, respectively. The results indicate that in-ear EEG is an unobtrusive alternative in terms of impedance, signal properties and information content to established forehead EEG.
This study focuses on thermoelectric elements (TEE) as an alternative for room temperature control. TEE are semiconductor devices that can provide heating and cooling via a heat pump effect without direct noise emissions and without the use of refrigerants. An efficiency evaluation of the optimal operating mode is carried out for different numbers of TEE, ambient temperatures, and heating loads. The influence of an additional heat recovery unit on system efficiency and of an unevenly distributed heating demand is examined. The results show that TEE can provide heat at a coefficient of performance (COP) greater than one, especially for small heating demands and high ambient temperatures. The efficiency increases with the number of elements in the system and is subject to economies of scale. The best COP exceeds six at optimal operating conditions. An additional heat recovery unit proves beneficial for low ambient temperatures and systems with few TEE. It makes COPs above one possible at ambient temperatures below 0 °C. The effect increases efficiency by at most 0.81 (from 1.90 to 2.71) at an ambient temperature 5 K below room temperature and a heating demand of Q̇h = 100 W, but is subject to diseconomies of scale. Thermoelectric technology is a valuable option for electricity-based heat supply and can provide cooling and ventilation functions. A careful system design as well as an additional heat recovery unit significantly benefits the performance. This makes TEE superior to direct current heating systems and competitive with heat pumps for small-scale applications with a focus on avoiding noise and harmful refrigerants.
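The COP figures quoted above follow the usual definition of useful heat per electrical input; a minimal sketch (the 1.90 and 2.71 values restate numbers from the abstract, everything else is illustrative):

```python
def cop(q_heat_w, p_el_w):
    """Coefficient of performance for heating: useful heat flow (W)
    divided by electrical power input (W)."""
    return q_heat_w / p_el_w

# A direct electric heater converts all input into heat, so COP = 1;
# any COP above 1 means the thermoelectric elements pump additional
# heat from the ambient air.
direct_heating = cop(100.0, 100.0)
heat_recovery_gain = 2.71 - 1.90   # improvement reported in the abstract
```

This is also why the comparison with heat pumps in the abstract hinges on the COP staying above one across the relevant ambient temperatures.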
A generalized shear-lag theory for fibres with variable radius is developed to analyse elastic fibre/matrix stress transfer. The theory accounts for the reinforcement of biological composites, such as soft tissue and bone tissue, as well as for the reinforcement of technical composite materials, such as fibre-reinforced polymers (FRP). The original shear-lag theory proposed by Cox in 1952 is generalized for fibres with variable radius and with symmetric and asymmetric ends. Analytical solutions are derived for the distribution of axial and interfacial shear stress in cylindrical and elliptical fibres, as well as conical and paraboloidal fibres with asymmetric ends. Additionally, the distributions of axial and interfacial shear stress for conical and paraboloidal fibres with symmetric ends are numerically predicted. The results are compared with solutions from axisymmetric finite element models. A parameter study is performed to investigate the suitability of alternative fibre geometries for use in FRP.
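For orientation, the classical Cox (1952) result that the paper generalizes can be stated for a cylindrical fibre of radius $r$ and half-length $L$ embedded in a matrix strained to $\varepsilon$; this is the common textbook form, and the paper's notation may differ:

```latex
% Axial fibre stress in the original shear-lag model (Cox, 1952):
\sigma_f(x) = E_f\,\varepsilon \left[ 1 - \frac{\cosh(\beta x)}{\cosh(\beta L)} \right],
\qquad
\beta = \sqrt{\frac{2 G_m}{E_f\, r^2 \ln(R/r)}}
```

Here $E_f$ is the fibre modulus, $G_m$ the matrix shear modulus, and $R$ the effective matrix radius; the generalization discussed above replaces the constant radius $r$ with a variable radius $r(x)$.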
Wind energy represents the dominant share of renewable energies. The rotor blades of a wind turbine are typically made from composite material, which withstands high forces during rotation. The huge dimensions of the rotor blades complicate the inspection processes in manufacturing. The automation of inspection processes has a great potential to increase the overall productivity and to create a consistent reliable database for each individual rotor blade. The focus of this paper is set on the process of rotor blade inspection automation by utilizing an autonomous mobile manipulator. The main innovations include a novel path planning strategy for zone-based navigation, which enables an intuitive right-hand or left-hand driving behavior in a shared human–robot workspace. In addition, we introduce a new method for surface orthogonal motion planning in connection with large-scale structures. An overall execution strategy controls the navigation and manipulation processes of the long-running inspection task. The implemented concepts are evaluated in simulation and applied in a real-use case including the tip of a rotor blade form.
In this paper we investigate the use of deep neural networks for 3D object detection in uncommon, unstructured environments such as an open-pit mine. While neural nets are frequently used for object detection in regular autonomous driving applications, more unusual driving scenarios beyond street traffic pose additional challenges. For one, the collection of appropriate data sets to train the networks is an issue. For another, testing the performance of trained networks often requires tailored integration with the particular domain as well. While different solutions for these problems exist in regular autonomous driving, only very few approaches work equally well for special domains. We address both of these challenges in this work. First, we discuss two possible ways of acquiring data for training and evaluation: we evaluate a semi-automated annotation of recorded LIDAR data, and we examine synthetic data generation. Using these datasets, we train and test different deep neural networks for the task of object detection. Second, we propose a possible integration of a ROS2 detector module for an autonomous driving platform. Finally, we present the performance of three state-of-the-art deep neural networks for 3D object detection on a synthetic dataset and on a smaller one containing a characteristic object from an open-pit mine.
The recovery of waste heat requires heat exchangers to extract it from a liquid or gaseous medium into another working medium, a refrigerant. In Organic Rankine Cycles (ORC) on combustion engines, there are two major heat sources: the exhaust gas and the water/glycol fluid from the engine's cooling circuit. A heat exchanger design must be adapted to the different requirements and conditions resulting from the heat sources, fluids, system configurations, geometric restrictions, and so on. The Stacked Shell Cooler (SSC) is a new and very specific design of a plate heat exchanger, created by AKG, which allows the optimization of the heat exchange rate and the reduction of the related pressure drop with a maximum degree of freedom. This optimization of the heat exchanger design is all the more important for ORC systems because it reduces the energy consumption of the system and therefore maximizes the increase in the overall efficiency of the engine.
Water suppliers are faced with the great challenge of achieving high-quality and, at the same time, low-cost water supply. Since climatic and demographic influences will pose further challenges in the future, the resilience enhancement of water distribution systems (WDS), i.e. the enhancement of their capability to withstand and recover from disturbances, has been in particular focus recently. To assess the resilience of WDS, graph-theoretical metrics have been proposed. In this study, a promising approach is first physically derived analytically and then applied to assess the resilience of the WDS for a district in a major German City. The topology based resilience index computed for every consumer node takes into consideration the resistance of the best supply path as well as alternative supply paths. This resistance of a supply path is derived to be the dimensionless pressure loss in the pipes making up the path. The conducted analysis of a present WDS provides insight into the process of actively influencing the resilience of WDS locally and globally by adding pipes. The study shows that especially pipes added close to the reservoirs and main branching points in the WDS result in a high resilience enhancement of the overall WDS.
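The notion of a best supply path with minimal accumulated resistance can be sketched as a shortest-path computation. The graph, the resistance values, and the function below are illustrative assumptions; the index in the study additionally weights alternative supply paths:

```python
import heapq

def best_path_resistance(edges, source, target):
    """Dijkstra search over a pipe network, where each edge weight is a
    dimensionless pressure-loss resistance; returns the accumulated
    resistance of the best supply path from source to target."""
    graph = {}
    for u, v, w in edges:
        graph.setdefault(u, []).append((v, w))
        graph.setdefault(v, []).append((u, w))
    dist = {source: 0.0}
    heap = [(0.0, source)]
    visited = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        if node == target:
            return d
        for nxt, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(heap, (nd, nxt))
    return float("inf")

# Toy example: reservoir R supplies consumer C via two routes;
# the route R-A-C (resistance 0.5) beats R-B-C (resistance 0.7).
pipes = [("R", "A", 0.2), ("A", "C", 0.3), ("R", "B", 0.1), ("B", "C", 0.6)]
```

Adding a pipe near the reservoir lowers the resistance of many such paths at once, which matches the study's observation that pipes close to reservoirs and main branching points enhance resilience the most.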
Booster stations can fulfill a varying pressure demand with high energy-efficiency, because individual pumps can be deactivated at smaller loads. Although this is a seemingly simple approach, it is not easy to decide precisely when to activate or deactivate pumps. Contemporary activation controls derive the switching points from the current volume flow through the system. However, it is not measured directly for various reasons. Instead, the controller estimates the flow based on other system properties. This causes further uncertainty for the switching decision. In this paper, we present a method to find a robust, yet energy-efficient activation strategy.
Cheap does not imply cost-effective -- this is rule number one of zeitgeisty system design. The initial investment accounts only for a small portion of the lifecycle costs of a technical system. In fluid systems, about ninety percent of the total costs are caused by other factors like power consumption and maintenance. With modern optimization methods, it is already possible to plan an optimal technical system considering multiple objectives. In this paper, we focus on an often neglected contribution to the lifecycle costs: downtime costs due to spontaneous failures. Consequently, availability becomes an issue.
In times of planned obsolescence, the demand for sustainability keeps growing. Ideally, a technical system is highly reliable, without failures and downtimes due to fast wear of single components. At the same time, maintenance should preferably be limited to pre-defined time intervals. Dispersing the load between multiple components can increase a system's reliability and thus its availability in between maintenance points. However, this also results in higher investment costs and additional effort due to higher complexity. Given a specific load profile and the resulting wear of components, it is often unclear which system structure is the optimal one. Technical Operations Research (TOR) finds an optimal structure balancing availability and effort. We present our approach by designing a hydrostatic transmission system.
The conference center darmstadtium in Darmstadt is a prominent example of energy efficient buildings. Its heating system consists of different source and consumer circuits connected by a Zortström reservoir. Our goal was to reduce the energy costs of the system as much as possible. Therefore, we analyzed its supply circuits. The first step towards optimization is a complete examination of the system: 1) Compilation of an object list for the system, 2) collection of the characteristic curves of the components, and 3) measurement of the load profiles of the heat and volume-flow demand. Instead of modifying the system manually and testing the solution by simulation, the second step was the creation of a global optimization program. The objective was to minimize the total energy costs for one year. We compare two different topologies and show opportunities for significant savings.
Gearboxes are mechanical transmission systems that provide speed and torque conversions from a rotating power source. Being a central element of the drive train, they are relevant for the efficiency and durability of motor vehicles. In this work, we present a new approach for gearbox design: Modeling the design problem as a mixed-integer nonlinear program (MINLP) allows us to create gearbox designs from scratch for arbitrary requirements and—given enough time—to compute provably globally optimal designs for a given objective. We show how different degrees of freedom influence the runtime and present an exemplary solution.
Resilience as a concept has found its way into different disciplines to describe the ability of an individual or system to withstand and adapt to changes in its environment. In this paper, we provide an overview of the concept in different communities and extend it to the area of mechanical engineering. Furthermore, we present metrics to measure resilience in technical systems and illustrate them by applying them to load-carrying structures. By giving application examples from the Collaborative Research Centre (CRC) 805, we show how the concept of resilience can be used to control uncertainty during different stages of product life.
The understanding that optimized components do not automatically lead to energy-efficient systems shifts the attention from the single component to the entire technical system. At TU Darmstadt, a new field of research named Technical Operations Research (TOR) has originated. It combines mathematical and technical know-how for the optimal design of technical systems. We illustrate our optimization approach in a case study on the design of a ventilation system, with the ambition to minimize the energy consumption for a temporal distribution of diverse load demands. By combining scaling laws with our optimization methods, we find the optimal combination of fans and show the advantage of using multiple fans.
Energy-efficient components do not automatically lead to energy-efficient systems. Technical Operations Research (TOR) shifts the focus from the single component to the system as a whole and finds its optimal topology and operating strategy simultaneously. In previous works, we provided a preselected construction kit of suitable components for the algorithm. This approach may give rise to a combinatorial explosion if the preselection cannot be cut down to a reasonable number by human intuition. To reduce the number of discrete decisions, we integrate laws derived from similarity theory into the optimization model. Since the physical characteristics of a production series are similar, it can be described by affinity and scaling laws. Making use of these laws, our construction kit can be modeled more efficiently: Instead of a preselection of components, it now encompasses whole model ranges. This allows us to significantly increase the number of possible set-ups in our model. In this paper, we present how to embed this new formulation into a mixed-integer program and assess the run time via benchmarks. We present our approach on the example of a ventilation system design problem.
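The similarity relations referred to above are, for fans, the well-known affinity laws: volume flow scales linearly with rotational speed, pressure with its square, and power with its cube. A minimal sketch (the reference-point values are assumptions for illustration):

```python
def scale_fan(q_ref, h_ref, p_ref, n_ratio):
    """Fan affinity laws for a geometrically similar production series:
    Q ~ n, H ~ n^2, P ~ n^3, with n_ratio = n / n_ref."""
    return (q_ref * n_ratio,
            h_ref * n_ratio ** 2,
            p_ref * n_ratio ** 3)

# Halving the speed of an assumed reference fan (Q = 2 m^3/s,
# H = 500 Pa, P = 1200 W):
q, h, p = scale_fan(2.0, 500.0, 1200.0, 0.5)
# q = 1.0 m^3/s, h = 125.0 Pa, p = 150.0 W
```

Because one such relation describes a whole model range, the optimization model can treat the speed ratio as a continuous variable instead of enumerating discrete components, which is exactly how the construction kit avoids the combinatorial explosion.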
Planning the layout and operation of a technical system is a common task for an engineer. Typically, the workflow is divided into consecutive stages: First, the engineer designs the layout of the system, with the help of his experience or of heuristic methods. Secondly, he finds a control strategy, which is often optimized by simulation. This usually results in a good operation of an unquestioned system topology. In contrast, we apply Operations Research (OR) methods to find a cost-optimal solution for both stages simultaneously via mixed integer programming (MILP). Technical Operations Research (TOR) allows one to find a provably globally optimal solution within the model formulation. However, the modeling error due to the abstraction of physical reality remains unknown. We address this ubiquitous problem of OR methods by comparing our computational results with measurements in a test rig. For a practical test case, we compute a topology and control strategy via MILP and verify that the objectives are met up to a deviation of 8.7%.
A new method for improved autoclave loading within the restrictive framework of helicopter manufacturing is proposed. It is derived from experimental and numerical studies of the curing process and aims at optimizing tooling positions in the autoclave for fast and homogeneous heat-up. The mold positioning is based on two sets of information: first, the thermal properties of the molds, which can be determined via semi-empirical thermal simulation; second, a previously determined distribution of heat transfer coefficients inside the autoclave. Finally, an experimental proof of concept is performed, showing a cycle time reduction of up to 31% using the proposed methodology.
Anyone who has always wanted to understand the hieroglyphs on Sheldon's blackboard in the TV series The Big Bang Theory or who wanted to know exactly what the fate of Schrödinger's cat is all about will find a short, descriptive introduction to the world of quantum mechanics in this essential. The text particularly focuses on the mathematical description in the Hilbert space. The content goes beyond popular scientific presentations, but is nevertheless suitable for readers without special prior knowledge thanks to the clear examples.
For typical cases of non-isolated lightning protection systems (LPS), the impulse currents are investigated that may flow through a human body directly touching a structural part of the LPS. Based on a basic LPS model with conventional down-conductors, the cases of external and internal steel columns and of metal façades in particular are considered and compared. Numerical simulations of the line quantities, voltages and currents, are performed in the time domain with an equivalent circuit of the entire LPS.
As a result, it can be stated that increasing the number of conventional down-conductors and external steel columns can indeed reduce the threat to a human being, but not down to an acceptable limit. If internal steel columns are used as natural down-conductors, the threat can be reduced sufficiently, depending on the low-resistive connection of the steel columns to the lightning equipotential bonding or the earth termination system, respectively. If a metal façade is used, the threat to a person touching it is usually very low, provided the façade is sufficiently interconnected and multiply connected to the lightning equipotential bonding or the earth termination system.
Modern industry and multi-discipline projects require highly trained individuals with resilient science and engineering backgrounds. Graduates must be able to agilely apply excellent theoretical knowledge in their subject matter as well as essential practical "hands-on" knowledge of diverse working processes to solve complex problems. To meet these demands, university education follows the concept of Constructive Alignment and thus increasingly adapts the teaching of necessary practical skills to actual industry requirements and assessment routines. However, a systematic approach to coherently align these three central teaching demands is strangely absent from current university curricula. We demonstrate the feasibility of implementing practical assessments in a regular theory-based examination, thus defining the term "blended assessment". We assessed a course for natural science and engineering students pursuing a career in biomedical engineering, and evaluated the benefit of blended assessment exams for students and lecturers. Our controlled study assessed the physiological background of electrocardiograms (ECGs), the practical measurement of ECG curves, and the interpretation of basic pathologic alterations. To study long-term effects, students were assessed on the topic twice, with a time lag of six months. Our findings suggest a significant improvement in student gain with respect to practical skills and theoretical knowledge. The results of the reassessments support these outcomes. From the lecturers' point of view, blended assessment complements practical training courses while keeping the organizational effort manageable. We consider blended assessment a viable tool for providing an improved, industry-ready education format with greater student gain, and one that should be evaluated and established further to prepare university graduates optimally for their future careers.
Dynamic retinal vessel analysis (DVA) provides a non-invasive way to assess microvascular function in patients and potentially to improve predictions of individual cardiovascular (CV) risk. The aim of our study was to use untargeted machine learning on DVA in order to improve CV mortality prediction and identify corresponding response alterations.
Delayed cerebral ischemia (DCI) is a common complication after aneurysmal subarachnoid hemorrhage (aSAH) and can lead to infarction and poor clinical outcome. The underlying mechanisms are still incompletely understood, but animal models indicate that vasoactive metabolites and inflammatory cytokines produced within the subarachnoid space may progressively impair and partially invert neurovascular coupling (NVC) in the brain. Because cerebral and retinal microvasculature are governed by comparable regulatory mechanisms and may be connected by perivascular pathways, retinal vascular changes are increasingly recognized as a potential surrogate for altered NVC in the brain. Here, we used non-invasive retinal vessel analysis (RVA) to assess microvascular function in aSAH patients at different times after the ictus.
Purpose: Vascular risk factors and ocular perfusion are heatedly discussed in the pathogenesis of glaucoma. The Retinal Vessel Analyzer (RVA, IMEDOS Systems, Germany) allows noninvasive measurement of retinal vessel regulation. Significant differences, especially in the veins, between healthy subjects and patients suffering from glaucoma were previously reported. In this pilot study we investigated whether localized vascular regulation is altered in glaucoma patients with altitudinal visual field defect asymmetry. Methods: 15 eyes of 12 glaucoma patients with advanced altitudinal visual field defect asymmetry were included. The mean defect was calculated for each hemisphere separately (−20.99 ± 10.49 dB for the hemisphere with the profound visual field defect vs −7.36 ± 3.97 dB for the less profound hemisphere). After pupil dilation, RVA measurements of retinal arteries and veins were conducted using the standard protocol. The superior and inferior retinal vessel reactivity were measured consecutively in each eye. Results: Significant differences between the hemispheres were recorded in venous vessel constriction after flicker light stimulation and in the overall amplitude of the reaction (p < 0.04 and p < 0.02, respectively). Vessel reaction was higher in the hemisphere corresponding to the more advanced visual field defect. Arterial diameters reacted similarly, failing to reach statistical significance. Conclusion: Localized retinal vessel regulation is significantly altered in glaucoma patients with asymmetric altitudinal visual field defects. Veins supplying the hemisphere concordant to a less profound visual field defect show diminished diameter changes. Vascular dysregulation might be particularly important in early glaucoma stages, prior to a significant visual field defect.
The term ocular rigidity is widely used in clinical ophthalmology. It is generally understood as the resistance of the whole eyeball to mechanical deformation and relates to the biomechanical properties of the eye and its tissues. Basic principles and formulas for clinical tonometry, tonography, and pulsatile ocular blood flow measurements are based on the concept of ocular rigidity. There is evidence for altered ocular rigidity in aging, in several eye diseases, and after eye surgery. Unfortunately, there is no consensual view on ocular rigidity: the same name has long been used with quite different meanings by different people. Above all, there is no clear consensus between biomechanical engineers and ophthalmologists on the concept. Moreover, ocular rigidity is occasionally characterized using various parameters with different physical dimensions. In contrast to the engineering approach, the clinical approach to ocular rigidity claims to characterize the total mechanical response of the eyeball to its deformation without any detailed considerations of eye morphology or the material properties of its tissues. Further to the previous chapter, this section aims to describe the clinical approach to ocular rigidity from the perspective of an engineer, in an attempt to straighten out this concept and to show its advantages, disadvantages, and various applications.
Pure analytical or experimental methods can only find a control strategy for technical systems with a fixed setup. In former contributions we presented an approach that simultaneously finds the optimal topology and the optimal open-loop control of a system via Mixed Integer Linear Programming (MILP). In order to extend this approach by a closed-loop control we present a Mixed Integer Program for a time discretized tank level control. This model is the basis for an extension by combinatorial decisions and thus for the variation of the network topology. Furthermore, one is able to appraise feasible solutions using the global optimality gap.
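The time-discretized tank level underlying such a formulation is a simple volume balance per time step; a minimal simulation sketch (the tank area, flows, and step size are illustrative assumptions, and in the MILP the flows are decision variables rather than fixed inputs):

```python
def simulate_tank(h0, area, inflow, outflow, dt):
    """Explicit time discretization of a tank level:
    h[k+1] = h[k] + (dt / A) * (q_in[k] - q_out[k]),
    with level h in m, cross-section area A in m^2, flows in m^3/s."""
    levels = [h0]
    for q_in, q_out in zip(inflow, outflow):
        levels.append(levels[-1] + dt / area * (q_in - q_out))
    return levels

# Assumed example: a 2 m^2 tank, constant demand of 0.2 m^3/s,
# pump running for the first two of three 10 s steps.
levels = simulate_tank(1.0, 2.0,
                       inflow=[0.4, 0.4, 0.0],
                       outflow=[0.2, 0.2, 0.2],
                       dt=10.0)
```

In the closed-loop MILP, each balance equation above becomes a linear constraint per time step, and the combinatorial decisions (which pumps run, which topology is active) select the inflow terms.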
According to the state of science and technology, components are optimized with respect to their properties, such as service life or energy efficiency. However, even excellent components can lead to inefficient or unstable systems if their interplay is insufficiently taken into account. A system-level view creates greater optimization potential, but this increased potential is accompanied by an increased degree of complexity. This work originated within the Collaborative Research Centre (Sonderforschungsbereich) 805, whose goal is the control of uncertainty in mechanical engineering systems. Using a real system from the field of hydraulics, the work shows how uncertainty can be controlled during the development phase. What is new here is that the system degradation to be expected from later operation can be anticipated for every possible system proposal. As a result, operating and maintenance costs can be predicted and minimized, and the availability of the system can be guaranteed through an optimal operating and maintenance strategy. Essential questions in the optimal design of the considered hydrostatic transmission are its physical modeling, the representation of the optimization problem as a mixed-integer linear program, and its algorithmic treatment for finding solutions. To this end, heuristics for finding sensible system topologies more quickly are presented, and the dynamic wear and maintenance trajectory of possible system proposals is evaluated by means of mathematical decomposition. The work presents the optimization of technical systems at the interface of mathematics, computer science, and engineering in a manner that is both thorough and accessible.
Finding a good system topology with more than a handful of components is a
highly non-trivial task. The system needs to be able to fulfil all expected load cases, but at the
same time the components should interact in an energy-efficient way. An example for a system
design problem is the layout of the drinking water supply of a residential building. It may be
reasonable to choose a design of spatially distributed pumps which are connected by pipes in at
least two dimensions. This leads to a large variety of possible system topologies. To solve such
problems in a reasonable time frame, the nonlinear technical characteristics must be modelled
as simply as possible while still representing reality sufficiently well. The
aim of this paper is to compare the speed and reliability of a selection of leading mathematical
programming solvers on a set of varying model formulations. This provides empirical evidence
on which combinations of model formulations and solver packages are the best choice given the current state of the art.
The UN sets the goal to ensure access to water and sanitation for all people by 2030. To address this goal, we present a multidisciplinary approach for designing water supply networks for slums in large cities by applying mathematical optimization. The problem is modeled as a mixed-integer linear problem (MILP) aiming to find a network describing the optimal supply infrastructure. To illustrate the approach, we apply it on a small slum cluster in Dhaka, Bangladesh.
The energy-efficiency of technical systems can be improved by a systematic design approach. Technical Operations Research (TOR) employs methods known from Operations Research to find a global optimal layout and operation strategy of technical systems. We show the practical usage of this approach by the systematic design of a decentralized water supply system for skyscrapers. All possible network options and operation strategies are modeled by a Mixed-Integer Nonlinear Program. We present the optimal system found by our approach and highlight the energy savings compared to a conventional system design.
The overall energy efficiency of ventilation systems can be improved by considering not only single components but also the interplay between all parts of the system. With the help of the method "TOR" ("Technical Operations Research"), which was developed at the Chair of Fluid Systems at TU Darmstadt, it is possible to improve the energy efficiency of the whole system by considering all possible design choices programmatically. We demonstrate this systematic design approach with a ventilation system for buildings as a use case example.
We model the ventilation system as a Mixed-Integer Nonlinear Program (MINLP). Binary variables model the selection of different pipe diameters, and multiple fans are modeled with the help of scaling laws. The whole system is represented by a graph, where the edges represent the pipes and fans, and the nodes represent the source of air for cooling and the sinks that have to be cooled. At the beginning, the human designer chooses a construction kit of suitable fans, pipes of different diameters and different load cases. These boundary conditions define a large variety of possible system topologies, far too many to consider by hand. With the help of state-of-the-art solvers, however, it is possible to solve this MINLP.
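The binary diameter-selection idea can be sketched as follows. This is a toy enumeration with a crude loss model, not the MINLP from the abstract; the construction kit, prices and the surrogate loss law dp ~ L·Q²/d⁵ are all assumptions for illustration:

```python
from itertools import product

# Hypothetical construction kit: duct diameters (m) and assumed cost per metre.
DIAMETERS = [0.10, 0.15, 0.20]
COST_PER_M = {0.10: 20.0, 0.15: 35.0, 0.20: 55.0}

def design_ducts(lengths, flow, dp_max):
    """Enumerate one diameter choice per duct segment (the MINLP's binary
    selection) and return the cheapest design whose total pressure loss,
    modelled crudely as dp ~ L * Q^2 / d^5, stays below dp_max."""
    best = None
    for choice in product(DIAMETERS, repeat=len(lengths)):
        dp = sum(L * flow ** 2 / d ** 5 for L, d in zip(lengths, choice))
        if dp > dp_max:
            continue
        cost = sum(L * COST_PER_M[d] for L, d in zip(lengths, choice))
        if best is None or cost < best[1]:
            best = (choice, cost)
    return best

best_choice, best_cost = design_ducts([10.0, 5.0], flow=0.5, dp_max=20000.0)
```

With more segments the choice space grows exponentially, which is exactly why the abstract argues that such designs cannot be considered by hand and require a solver.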
In addition, we consider the effects of malfunctions in different components. To this end, we show a first approach to measure the resilience of the example use case. Furthermore, we compare the conventional approach with designs that are more resilient. These more resilient designs are derived by extending the aforementioned model with further constraints that explicitly consider the resilience of the overall system. We show that this method makes it possible to design resilient systems already in the early design stage, and we compare the energy efficiency and resilience of the different system designs.
Highly competitive markets paired with tremendous production volumes demand particularly cost efficient products. The usage of common parts and modules across product families can potentially reduce production costs. Yet, increasing commonality typically results in overdesign of individual products. Multi domain virtual prototyping enables designers to evaluate costs and technical feasibility of different single product designs at reasonable computational effort in early design phases. However, savings by platform commonality are hard to quantify and require detailed knowledge of e.g. the production process and the supply chain. Therefore, we present and evaluate a multi-objective metamodel-based optimization algorithm which enables designers to explore the trade-off between high commonality and cost optimal design of single products.
To supply all floors of tall buildings with water at sufficient pressure, booster stations, normally consisting of several parallel pumps in the basement, are used. In this work, we demonstrate the potential of a decentralized pump topology regarding energy savings in water supply systems of skyscrapers. We present an approach, based on Mixed-Integer Nonlinear Programming, that makes it possible to choose an optimal network topology and optimal pumps from a predefined construction kit comprising different pump types. Using domain-specific scaling laws and Latin Hypercube Sampling, we generate different input sets of pump types and compare their impact on the efficiency and cost of the total system design. As a realistic application example, we consider a hotel building with 325 rooms, 12 floors and up to four pressure zones.
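Latin Hypercube Sampling, used above to generate pump-type input sets, stratifies each parameter range so that every stratum is sampled exactly once. A minimal stdlib sketch (the parameter ranges are hypothetical):

```python
import random

def latin_hypercube(n_samples, bounds, seed=0):
    """Latin Hypercube Sampling: each dimension is split into n_samples
    equal strata, one point is drawn inside each stratum, and the strata
    are shuffled independently per dimension."""
    rng = random.Random(seed)
    samples = [[0.0] * len(bounds) for _ in range(n_samples)]
    for d, (lo, hi) in enumerate(bounds):
        strata = list(range(n_samples))
        rng.shuffle(strata)
        for i, s in enumerate(strata):
            u = (s + rng.random()) / n_samples  # uniform draw within stratum s
            samples[i][d] = lo + u * (hi - lo)
    return samples

# e.g. 8 samples over assumed ranges for two pump parameters
samples = latin_hypercube(8, [(0.0, 1.0), (100.0, 200.0)], seed=42)
```

Compared with plain Monte Carlo sampling, this guarantees that each parameter range is covered evenly even for small sample counts.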
In industrial applications, the costs for the operation and maintenance of a pump system typically far exceed its purchase price. For finding an optimal pump configuration which minimizes not only investment but life-cycle costs, methods like Technical Operations Research, which is based on Mixed-Integer Programming, can be applied. However, during the planning phase, the designer is often faced with uncertain input data; e.g. future load demands can only be estimated. In this work, we deal with this uncertainty by developing a chance-constrained two-stage (CCTS) stochastic program. The design and operation of a booster station working under uncertain load demand are optimized to minimize total cost, including purchase price, operation cost incurred by energy consumption, and penalty cost resulting from water shortage. We find optimized system layouts using a sample average approximation (SAA) algorithm, and analyze the results for different risk levels of water shortage. By adjusting the risk level, the costs and performance range of the system can be balanced, and thus the system’s resilience can be engineered.
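The sample average approximation of a chance constraint can be sketched in a few lines. This simplified toy omits the two-stage structure and the penalty cost, treats each candidate layout as a single capacity, and uses made-up numbers throughout:

```python
import random

def saa_design(options, demands, risk, energy_price):
    """Sample average approximation of the chance constraint: a layout is
    feasible if its empirical shortage probability over the demand samples
    stays below the risk level; among feasible layouts, the one with the
    lowest total cost (investment + expected energy cost) wins.
    `options` maps a name to (capacity, investment, specific energy use)."""
    best = None
    for name, (cap, invest, spec_energy) in options.items():
        if sum(d > cap for d in demands) / len(demands) > risk:
            continue  # chance constraint violated on the sample
        energy = sum(min(d, cap) * spec_energy for d in demands) / len(demands)
        total = invest + energy_price * energy
        if best is None or total < best[1]:
            best = (name, total)
    return best

rng = random.Random(1)
demands = [rng.gauss(10.0, 2.0) for _ in range(1000)]  # sampled load scenarios
options = {"small": (11.0, 100.0, 1.0), "large": (16.0, 180.0, 1.1)}
layout, cost = saa_design(options, demands, risk=0.05, energy_price=1.0)
```

Raising the accepted risk level makes the cheaper, smaller layout admissible; this is the cost/performance trade-off the abstract describes.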
On obligations in the development process of resilient systems with algorithmic design methods
(2018)
Advanced computational methods are needed both for the design of large systems and to compute high accuracy solutions. Such methods are efficient in computation, but the validation of results is very complex, and highly skilled auditors are needed to verify them. We investigate legal questions concerning obligations in the development phase, especially for technical systems developed using advanced methods. In particular, we consider methods of resilient and robust optimization. With these techniques, high performance solutions can be found, despite a high variety of input parameters. However, given the novelty of these methods, it is uncertain whether legal obligations are being met. The aim of this paper is to discuss if and how the choice of a specific computational method affects the developer’s product liability. The review of legal obligations in this paper is based on German law and focuses on the requirements that must be met during the design and development process.
The paper industry is the industry with the third highest energy consumption in the European Union. Using recycled paper instead of fresh fibers for papermaking is less energy consuming and saves resources. However, adhesive contaminants in recycled paper are particularly problematic since they reduce the quality of the resulting paper-product. To remove as many contaminants and at the same time obtain as many valuable fibres as possible, fine screening systems, consisting of multiple interconnected pressure screens, are used. Choosing the best configuration is a non-trivial task: The screens can be interconnected in several ways, and suitable screen designs as well as operational parameters have to be selected. Additionally, one has to face conflicting objectives. In this paper, we present an approach for the multi-criteria optimization of pressure screen systems based on Mixed-Integer Nonlinear Programming. We specifically focus on a clear representation of the trade-off between different objectives.
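The trade-off representation mentioned above rests on Pareto dominance: a configuration is kept only if no other configuration is at least as good in every objective and strictly better in one. A minimal sketch with hypothetical configurations (contaminant removal vs. fibre yield, both maximized):

```python
def pareto_front(designs):
    """Return the non-dominated designs. Each design is
    (name, sticky_removal, fibre_yield); both objectives are maximized."""
    front = []
    for name, rem, yld in designs:
        dominated = any(r2 >= rem and y2 >= yld and (r2 > rem or y2 > yld)
                        for _, r2, y2 in designs)
        if not dominated:
            front.append(name)
    return front

# Hypothetical screen-system configurations (removal efficiency, fibre yield).
designs = [("A", 0.95, 0.70), ("B", 0.90, 0.80), ("C", 0.85, 0.75), ("D", 0.80, 0.85)]
front = pareto_front(designs)
```

Here "C" is dominated by "B" (worse in both objectives), so only A, B and D remain on the trade-off front presented to the decision maker.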
Ensuring access to water and sanitation for all is Goal No. 6 of the 17 UN Sustainability Development Goals to transform our world. As one step towards this goal, we present an approach that leverages remote sensing data to plan optimal water supply networks for informal urban settlements. The concept focuses on slums within large urban areas, which are often characterized by a lack of an appropriate water supply. We apply methods of mathematical optimization aiming to find a network describing the optimal supply infrastructure. Hereby, we choose between different decentral and central approaches combining supply by motorized vehicles with supply by pipe systems. For the purposes of illustration, we apply the approach to two small slum clusters in Dhaka and Dar es Salaam. We show our optimization results, which represent the lowest cost water supply systems possible. Additionally, we compare the optimal solutions of the two clusters (also for varying input parameters, such as population densities and slum size development over time) and describe how the result of the optimization depends on the entered remote sensing data.
Water suppliers are faced with the great challenge of achieving high-quality and, at the same time, low-cost water supply. In practice, the focus is set on the most beneficial maintenance measures and/or capacity adaptations of existing water distribution systems (WDS). Since climatic and demographic influences will pose further challenges in the future, the resilience enhancement of WDS, i.e. the enhancement of their capability to withstand and recover from disturbances, has been in particular focus recently. To assess the resilience of WDS, metrics based on graph theory have been proposed. In this study, a promising approach is applied to assess the resilience of the WDS for a district in a major German city. The conducted analysis provides insight into the process of actively influencing the resilience of WDS.
The development of resilient technical systems is a challenging task, as the system should adapt automatically to unknown disturbances and component failures. To evaluate different approaches for deriving resilient technical system designs, we developed a modular test rig that is based on a pumping system. On the basis of this example system, we present metrics to quantify resilience and an algorithmic approach to improve resilience. This approach enables the pumping system to automatically react to unknown disturbances and to reduce the impact of component failures. In this case, the system is able to automatically adapt its topology by activating additional valves. This enables the system to still reach a minimum performance, even in case of failures. Furthermore, time-dependent disturbances are evaluated continuously; deviations from the original state are automatically detected and anticipated in the future. This makes it possible to reduce the impact of future disturbances and leads to a more resilient system behaviour.
The transition within transportation towards battery electric vehicles can lead to a more sustainable future. To account for the development goal ‘climate action’ stated by the United Nations, it is mandatory, within the conceptual design phase, to derive energy-efficient system designs. One barrier is the uncertainty of the driving behaviour within the usage phase. This uncertainty is often addressed by using a stochastic synthesis process to derive representative driving cycles and by using cycle-based optimization. To deal with this uncertainty, a new approach based on a stochastic optimization program is presented. This leads to an optimization model that is solved with an exact solver. It is compared to a system design approach based on driving cycles and a genetic algorithm solver. Both approaches are applied to find efficient electric powertrains with fixed-speed and multi-speed transmissions. Hence, the similarities, differences and respective advantages of each optimization procedure are discussed.
The course Physics for Electrical Engineering is part of the curriculum of the bachelor program Electrical Engineering at FH Aachen University of Applied Sciences.
Before COVID-19, the course was conducted in a rather traditional way, with all parts (lecture, exercise and lab) face-to-face. This teaching approach changed fundamentally within a week when the COVID-19 restrictions forced all courses into distance learning. All parts of the course were transformed to pure distance learning, including synchronous and asynchronous parts for the lecture, live online sessions for the exercises and self-paced labs at home. Using these methods, the course was able to impart the required knowledge and competencies. Taking into account the teacher’s observations of the students’ learning behaviour and engagement, the formal and informal feedback of the students, and the results of the exams, the new methods are evaluated with respect to effectiveness, sustainability and suitability for competence transfer. Based on this analysis, strong and weak points of the concept and countermeasures to address the weak points were identified. The analysis further leads to a sustainable teaching approach combining synchronous and asynchronous parts with self-paced learning times that can be used very flexibly for different learning scenarios: pure online, hybrid (a mixture of online and presence times) and pure presence teaching.
Adapting augmented reality systems to the users’ needs using gamification and error solving methods
(2021)
Animations of virtual items in AR support systems are typically predefined and lack interactions with dynamic physical environments. AR applications rarely consider users’ preferences and do not provide customized spontaneous support under unknown situations. This research focuses on developing adaptive, error-tolerant AR systems based on directed acyclic graphs and error resolving strategies. Using this approach, users will have more freedom of choice during AR supported work, which leads to more efficient workflows. Error correction methods based on CAD models and predefined process data create individual support possibilities. The framework is implemented in the Industry 4.0 model factory at FH Aachen.
Around 60% of the paper worldwide is made from recovered paper. Especially adhesive contaminants, so called stickies, reduce paper quality. To remove stickies but at the same time keep as many valuable fibers as possible, multi-stage screening systems with several interconnected pressure screens are used. When planning such systems, suitable screens have to be selected and their interconnection as well as operational parameters have to be defined considering multiple conflicting objectives. In this contribution, we present a Mixed-Integer Nonlinear Program to optimize system layout, component selection and operation to find a suitable trade-off between output quality and yield.
In product development, numerous design decisions have to be made. Multi-domain virtual prototyping provides a variety of tools to assess technical feasibility of design options, however often requires substantial computational effort for just a single evaluation. A special challenge is therefore the optimal design of product families, which consist of a group of products derived from a common platform. Finding an optimal platform configuration (stating what is shared and what is individually designed for each product) and an optimal design of all products simultaneously leads to a mixed-integer nonlinear black-box optimization model. We present an optimization approach based on metamodels and a metaheuristic. To increase computational efficiency and solution quality, we compare different types of Gaussian process regression metamodels adapted from the domain of machine learning, and combine them with a genetic algorithm. We illustrate our approach on the example of a product family of electrical drives, and investigate the trade-off between solution quality and computational overhead.
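The genetic-algorithm component of such a metamodel-based approach can be illustrated with a generic real-coded GA. This is not the authors' implementation; the operators (tournament-free elitist selection, uniform crossover, Gaussian mutation) and all parameters are hypothetical, and the cheap test objective stands in for the expensive metamodel evaluation:

```python
import random

def genetic_minimize(f, bounds, pop_size=40, generations=60, seed=0):
    """Minimal real-coded genetic algorithm: elitist selection, uniform
    crossover and Gaussian mutation on one coordinate per child."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]

    def clip(x, d):
        lo, hi = bounds[d]
        return min(max(x, lo), hi)

    for _ in range(generations):
        elite = sorted(pop, key=f)[: pop_size // 2]  # keep the better half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            child = [a[d] if rng.random() < 0.5 else b[d] for d in range(dim)]
            d = rng.randrange(dim)
            child[d] = clip(child[d] + rng.gauss(0.0, 0.1), d)  # mutation
            children.append(child)
        pop = elite + children
    return min(pop, key=f)

# Toy objective standing in for an expensive metamodel evaluation.
best = genetic_minimize(lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2,
                        [(-5.0, 5.0), (-5.0, 5.0)])
```

In the metamodel setting, `f` would query the Gaussian process surrogate instead of the simulation, which is where the computational savings come from.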
In order to maximize the possible travel distance of battery electric vehicles with one battery charge, it is mandatory to adjust all components of the powertrain carefully to each other. While current vehicle designs mostly simplify the powertrain rigorously and use an electric motor in combination with a gearbox with only one fixed transmission ratio, the use of multi-gear systems has great potential. First, a multi-speed system is able to improve the overall energy efficiency. Second, it is able to reduce the maximum torque and therefore the maximum current provided by the traction battery, which results in a longer battery lifetime. In this paper, we present a systematic way to generate multi-gear gearbox designs that, combined with a certain electric motor, lead to the most efficient fulfillment of predefined load scenarios and are at the same time robust to uncertainties in the load. To this end, we model the electric motor and the gearbox within a Mixed-Integer Nonlinear Program and optimize the efficiency of the mechanical parts of the powertrain. By combining this mathematical optimization program with an unsupervised machine learning algorithm, we are able to derive globally optimal gearbox designs for practically relevant torque and speed requirements.
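One common unsupervised-learning step in such a workflow is clustering recorded operating points into a small set of representative load cases; whether the authors used k-means is an assumption here, and the data below are made up. A plain-Python sketch:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means: cluster (speed, torque) operating points so that the
    cluster centres can serve as representative load cases."""
    rng = random.Random(seed)
    centres = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centres[c])))
            clusters[nearest].append(p)
        for i, cl in enumerate(clusters):
            if cl:  # keep the old centre if a cluster runs empty
                centres[i] = tuple(sum(x) / len(cl) for x in zip(*cl))
    return centres

# Hypothetical operating points forming a low-load and a high-load group.
points = [(1.0, 1.0), (1.2, 0.9), (0.9, 1.1), (5.0, 5.0), (5.1, 4.9), (4.8, 5.2)]
centres = sorted(kmeans(points, 2))
```

Feeding a handful of cluster centres into the MINLP instead of thousands of raw measurements keeps the optimization tractable while preserving the load profile's structure.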
The chemical industry is one of the most important industrial sectors in Germany in terms of manufacturing revenue. While thermodynamic boundary conditions often restrict the scope for reducing the energy consumption of core processes, secondary processes such as cooling offer scope for energy optimisation. In this contribution, we therefore model and optimise an existing cooling system. The technical boundary conditions of the model are provided by the operators, the German chemical company BASF SE. In order to systematically evaluate different degrees of freedom in topology and operation, we formulate and solve a Mixed-Integer Nonlinear Program (MINLP), and compare our optimisation results with the existing system.
Component failures within water supply systems can lead to significant performance losses. One way to address these losses is the explicit anticipation of failures within the design process. We consider a water supply system for high-rise buildings, where pump failures are the most likely failure scenarios. We explicitly consider these failures within an early design stage which leads to a more resilient system, i.e., a system which is able to operate under a predefined number of arbitrary pump failures. We use a mathematical optimization approach to compute such a resilient design. This is based on a multi-stage model for topology optimization, which can be described by a system of nonlinear inequalities and integrality constraints. Such a model has to be both computationally tractable and to represent the real-world system accurately. We therefore validate the algorithmic solutions using experiments on a scaled test rig for high-rise buildings. The test rig allows for an arbitrary connection of pumps to reproduce scaled versions of booster station designs for high-rise buildings. We experimentally verify the applicability of the presented optimization model and that the proposed resilience properties are also fulfilled in real systems.
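The resilience property described above, operating under a predefined number of arbitrary pump failures, can be checked by enumerating failure combinations. This toy verifier reduces each pump to a single flow capacity and uses made-up numbers; the real model additionally covers pressure and topology constraints:

```python
from itertools import combinations

def is_resilient(pump_caps, demand, k):
    """A booster station design is resilient (in the sense used above) if,
    for every combination of up to k pump failures, the remaining pumps
    can still deliver the required minimum flow."""
    for n_failed in range(1, k + 1):
        for failed in combinations(range(len(pump_caps)), n_failed):
            remaining = sum(c for i, c in enumerate(pump_caps)
                            if i not in failed)
            if remaining < demand:
                return False
    return True

# Hypothetical station: pumps of 5 flow units each, minimum demand 10.
four_pumps_ok = is_resilient([5.0, 5.0, 5.0, 5.0], demand=10.0, k=2)
three_pumps_ok = is_resilient([5.0, 5.0, 5.0], demand=10.0, k=2)
```

Four pumps survive any two failures, three do not; the optimization approach in the abstract builds this requirement into the design problem instead of checking it afterwards.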
This chapter describes three general strategies to master uncertainty in technical systems: robustness, flexibility and resilience. It builds on the previous chapters about methods to analyse and identify uncertainty and may rely on the availability of technologies for particular systems, such as active components. Robustness aims for the design of technical systems that are insensitive to anticipated uncertainties. Flexibility increases the ability of a system to work under different situations. Resilience extends this characteristic by requiring a given minimal functional performance, even after disturbances or failure of system components, and it may incorporate recovery. The three strategies are described and discussed in turn. Moreover, they are demonstrated on specific technical systems.
Algorithmic design and resilience assessment of energy efficient high-rise water supply systems
(2018)
High-rise water supply systems provide water flow and suitable pressure in all levels of tall buildings. To design such state-of-the-art systems, the consideration of energy efficiency and the anticipation of component failures are mandatory. In this paper, we use Mixed-Integer Nonlinear Programming to compute an optimal placement of pipes and pumps, as well as an optimal control strategy. Moreover, we consider the resilience of the system to pump failures. A resilient system is able to fulfill a predefined minimum functionality even though components fail or are restricted in their normal usage. We present models to measure and optimize the resilience. To demonstrate our approach, we design and analyze an optimal resilient decentralized water supply system inspired by a real-life hotel building.
Successful optimization requires an appropriate model of the system under consideration. When selecting a suitable level of detail, one has to consider solution quality as well as the computational and implementation effort. In this paper, we present a MINLP for a pumping system for the drinking water supply of high-rise buildings. We investigate the influence of the granularity of the underlying physical models on the solution quality. Therefore, we model the system with a varying level of detail regarding the friction losses, and conduct an experimental validation of our model on a modular test rig. Furthermore, we investigate the computational effort and show that it can be reduced by the integration of domain-specific knowledge.
The application of mathematical optimization methods for water supply system design and operation provides the capacity to increase the energy efficiency and to lower the investment costs considerably. We present a system approach for the optimal design and operation of pumping systems in real-world high-rise buildings that is based on the usage of mixed-integer nonlinear and mixed-integer linear modeling approaches. In addition, we consider different booster station topologies, i.e. parallel and series-parallel central booster stations as well as decentral booster stations. To confirm the validity of the underlying optimization models with real-world system behavior, we additionally present validation results based on experiments conducted on a modularly constructed pumping test rig. Within the models we consider layout and control decisions for different load scenarios, leading to a Deterministic Equivalent of a two-stage stochastic optimization program. We use a piecewise linearization as well as a piecewise relaxation of the pumps’ characteristics to derive mixed-integer linear models. Besides the solution with off-the-shelf solvers, we present a problem specific exact solving algorithm to improve the computation time. Focusing on the efficient exploration of the solution space, we divide the problem into smaller subproblems, which partly can be cut off in the solution process. Furthermore, we discuss the performance and applicability of the solution approaches for real buildings and analyze the technical aspects of the solutions from an engineer’s point of view, keeping in mind the economically important trade-off between investment and operation costs.
The recently discovered first hyperbolic objects passing through the Solar System, 1I/’Oumuamua and 2I/Borisov, have raised the question about near term missions to Interstellar Objects. In situ spacecraft exploration of these objects will allow the direct determination of both their structure and their chemical and isotopic composition, enabling an entirely new way of studying small bodies from outside our solar system. In this paper, we map various Interstellar Object classes to mission types, demonstrating that missions to a range of Interstellar Object classes are feasible, using existing or near-term technology. We describe flyby, rendezvous and sample return missions to interstellar objects, showing various ways to explore these bodies characterizing their surface, dynamics, structure and composition. Their direct exploration will constrain their formation and history, situating them within the dynamical and chemical evolution of the Galaxy. These mission types also provide the opportunity to explore solar system bodies and perform measurements in the far outer solar system.
Water distribution systems are an essential supply infrastructure for cities. Given that climatic and demographic influences will pose further challenges for these infrastructures in the future, the resilience of water supply systems, i.e. their ability to withstand and recover from disruptions, has recently become a subject of research. To assess the resilience of a WDS, different graph-theoretical approaches exist. Next to general metrics characterizing the network topology, hydraulic and technical restrictions also have to be taken into account. In this work, the resilience of an exemplary water distribution network of a major German city is assessed, and a Mixed-Integer Program is presented which makes it possible to assess the impact of capacity adaptations on its resilience.
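A simple graph-theoretical resilience indicator of the kind mentioned above can be sketched with breadth-first search: remove each pipe in turn and count the demand nodes still reachable from the source. This is a generic topological measure, not the specific metric used in the study, and the two example networks are hypothetical:

```python
from collections import deque

def min_supplied_after_failure(edges, source, demand_nodes):
    """Remove each pipe in turn and count the demand nodes still reachable
    from the source; the minimum over all single failures is a crude
    worst-case supply coverage."""
    def supplied(active):
        adj = {}
        for u, v in active:
            adj.setdefault(u, []).append(v)
            adj.setdefault(v, []).append(u)
        seen, queue = {source}, deque([source])
        while queue:
            for m in adj.get(queue.popleft(), []):
                if m not in seen:
                    seen.add(m)
                    queue.append(m)
        return sum(1 for d in demand_nodes if d in seen)
    return min(supplied([e for e in edges if e != broken]) for broken in edges)

ring = [("s", "a"), ("a", "b"), ("b", "s")]   # looped topology
tree = [("s", "a"), ("a", "b")]               # branched topology
```

The looped topology keeps all demand nodes supplied after any single failure, while the branched one can lose its entire downstream supply; capacity adaptations in the Mixed-Integer Program shift a network toward the former behaviour.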
Geochemical characterisation of hypersaline waters is difficult, as high concentrations of salts hinder the analysis of constituents at low concentrations, such as trace metals, and samples collected for trace metal analysis in natural waters can easily be contaminated. This is particularly the case if samples are collected by non-conventional techniques such as those required for aquatic subglacial environments. In this paper we present the first analysis of a subglacial brine from Taylor Valley (~78°S), Antarctica for the trace metals Ba, Co, Mo, Rb, Sr, V, and U. Samples were collected englacially using an electrothermal melting probe called the IceMole. This probe uses differential heating of a copper head as well as the probe’s sidewalls and an ice screw at the melting head to move through glacier ice. Detailed blanks, meltwater samples, and subglacial brine samples were collected to evaluate the impact of the IceMole and the borehole pump, the melting and collection process, filtration, and storage on the geochemistry of the samples collected by this device. Comparisons between meltwater profiles through the glacier ice and blank analyses, together with published studies on ice geochemistry, suggest the potential for minor contributions of some species (Rb, As, Co, Mn, Ni, NH4+, and NO2−+NO3−) from the IceMole. The ability to conduct detailed chemical analyses of subglacial fluids collected with melting probes is critical for the future exploration of the hundreds of deep subglacial lakes in Antarctica.
To maximize the travel distances of battery electric vehicles such as cars or buses for a given amount of stored energy, their powertrains are optimized energetically. One key part within optimization models for electric powertrains is the efficiency map of the electric motor. The underlying function is usually highly nonlinear and nonconvex and leads to major challenges within a global optimization process. To enable faster solution times, one possibility is the usage of piecewise linearization techniques to approximate the nonlinear efficiency map with linear constraints. Therefore, we evaluate the influence of different piecewise linearization modeling techniques on the overall solution process and compare the solution time and accuracy for methods with and without explicitly used binary variables.
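The basic object behind all these formulations is a piecewise linear interpolant through breakpoints of the nonlinear efficiency map. The sketch below only evaluates such an interpolant in plain Python; the MILP encodings compared in the abstract (with binary/SOS2 variables or convex-combination weights) express the same function inside a solver. The toy curve η(x) = x(2 − x) is an assumption for illustration:

```python
from bisect import bisect_right

def pwl(breakpoints, values):
    """Piecewise linear interpolant through (breakpoint, value) pairs."""
    def f(x):
        if x <= breakpoints[0]:
            return values[0]
        if x >= breakpoints[-1]:
            return values[-1]
        i = bisect_right(breakpoints, x) - 1
        t = (x - breakpoints[i]) / (breakpoints[i + 1] - breakpoints[i])
        return (1 - t) * values[i] + t * values[i + 1]
    return f

# Approximate a toy efficiency curve eta(x) = x * (2 - x) on [0, 2].
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
eta = pwl(xs, [x * (2 - x) for x in xs])
```

Adding breakpoints shrinks the approximation error at the price of more variables and constraints in the resulting MILP, which is precisely the accuracy/solution-time trade-off the paper evaluates.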
Concentrating Solar Power
(2021)
The focus of this chapter is the production of power and the use of the heat produced from concentrated solar thermal power (CSP) systems.
The chapter starts with the general theoretical principles of concentrating systems, including the description of the concentration ratio and the energy and mass balance. The main part covers power conversion systems, addressing both solar-only operation and the increase of operational hours.
Solar-only operation includes the use of steam turbines, gas turbines, organic Rankine cycles and solar dishes. The operational hours can be increased with hybridization and with storage.
Another important topic is cogeneration, where solar cooling, desalination and heat usage are described.
Many examples of commercial CSP power plants as well as research facilities, both from the past and currently installed and in operation, are described in detail.
The chapter closes with economic and environmental aspects and with the future potential of the development of CSP around the world.
Test-retest reliability of the internal shoulder rotator muscles' stretch reflex in healthy men
(2021)
Until now, the reproducibility of the short-latency stretch reflex of the internal rotator muscles of the glenohumeral joint has not been identified. Twenty-three healthy male participants performed three sets of external shoulder rotation stretches with various pre-activation levels on two different dates of measurement to assess test-retest reliability. All stretches were applied with a dynamometer acceleration of 104°/s² and a velocity of 150°/s. The electromyographical response was measured via surface EMG. Reflex latencies showed a pre-activation effect (η² = 0.355). ICC values ranged from 0.735 to 0.909, indicating an overall “good” relative reliability. The SRD 95% lay between ±7.0 and ±12.3 ms. The reflex gain showed overall poor test-retest reproducibility. The chosen methodological approach presented a suitable test protocol for the evaluation of shoulder muscle stretch reflex latencies. A proof-of-concept study to validate the presented methodical approach, including subjects with clinically relevant shoulder conditions, is recommended.
One central challenge for self-driving cars is proper path planning. Once a trajectory has been found, the next challenge is to accurately and safely follow the precalculated path. The model-predictive controller (MPC) is a common approach for the lateral control of autonomous vehicles. The MPC uses a vehicle dynamics model to predict the future states of the vehicle for a given prediction horizon. However, in order to achieve real-time path control, the computational load is usually large, which leads to short prediction horizons. To deal with the computational load, the control algorithm can be parallelized on the graphics processing unit (GPU). In contrast to the widely used stochastic methods, in this paper we propose a deterministic approach based on grid search. Our approach focuses on systematically discovering the search area with different levels of granularity. To achieve this, we split the optimization algorithm into multiple iterations. The best sequence of each iteration is then used as an initial solution for the next iteration. The granularity increases with each iteration, resulting in smooth and predictable steering angle sequences. We present a novel GPU-based algorithm and show its accuracy and real-time abilities in a number of real-world experiments.
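The coarse-to-fine iteration scheme can be sketched for a single steering variable. This serial toy mirrors only the refinement idea, not the GPU-parallel evaluation of whole steering sequences, and the cost function is a made-up stand-in for the MPC's path-tracking cost:

```python
def refine_search(f, lo, hi, points=9, iterations=4):
    """Deterministic coarse-to-fine grid search: evaluate `points`
    candidates on [lo, hi], then shrink the interval to one grid step
    around the best candidate and repeat with finer granularity."""
    for _ in range(iterations):
        step = (hi - lo) / (points - 1)
        grid = [lo + i * step for i in range(points)]
        best = min(grid, key=f)
        lo, hi = max(lo, best - step), min(hi, best + step)
    return best

# Toy cost: squared deviation from a reference steering angle of 0.3 rad.
angle = refine_search(lambda a: (a - 0.3) ** 2, -0.5, 0.5)
```

Because every iteration evaluates a fixed, data-independent set of candidates, the scheme is deterministic and maps naturally onto parallel hardware: all grid points of one iteration can be scored simultaneously.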
Microbial diversity studies of aquatic communities that have experienced or are experiencing environmental problems are essential for understanding remediation dynamics. In this pilot study, we present data on the phylogenetic and ecological structure of microorganisms from epipelagic water samples collected in the Small Aral Sea (SAS). The raw data were generated by massive parallel sequencing using the shotgun approach. As expected, most of the identified DNA sequences belonged to Terrabacteria and Actinobacteria (40% and 37% of the total reads, respectively). The occurrence of Deinococcus-Thermus, Armatimonadetes and Chloroflexi in the epipelagic SAS waters was less anticipated. Also surprising was the detection of sequences characteristic of strict anaerobes: Ignavibacteria, hydrogen-oxidizing bacteria, and archaeal methanogenic species. We suppose that the very broad range of phylogenetic and ecological features displayed by the SAS reads indicates a more intensive mixing of water masses originating from diverse ecological niches of the Aral-Syr Darya River basin than previously presumed.
Conventional EEG devices cannot be used in everyday life; hence, research over the past decade has focused on ear-EEG for mobile, at-home monitoring in applications ranging from emotion detection to sleep monitoring. As the area available for electrode contact in the ear is limited, electrode size and location play a vital role in an ear-EEG system. In this investigation, we present a quantitative study of ear electrodes of two sizes at different locations in wet and dry configurations. Electrode impedance scales inversely with size and ranges from 450 kΩ to 1.29 MΩ for dry and from 22 kΩ to 42 kΩ for wet contact at 10 Hz. For either size, the location in the ear canal with the lowest impedance is ELE (Left Ear Superior), presumably due to increased contact pressure caused by the outer-ear anatomy. The results can be used to optimize signal pickup and SNR for specific applications. We demonstrate this by recording sleep spindles during sleep onset with high quality (5.27 μVrms).
Multi-attribute relation extraction (MARE): simplifying the application of relation extraction
(2021)
Natural language understanding's relation extraction makes innovative and encouraging novel business concepts possible and facilitates new digitalized decision-making processes. Current approaches allow the extraction of relations with a fixed number of entities as attributes. Extracting relations with an arbitrary number of attributes requires complex systems and costly relation-trigger annotations to assist these systems. We introduce multi-attribute relation extraction (MARE) as an assumption-less problem formulation with two approaches, facilitating an explicit mapping from business use cases to the data annotations. Avoiding elaborate annotation constraints simplifies the application of relation extraction approaches. The evaluation compares our models to current state-of-the-art event extraction and binary relation extraction methods. Our approaches show improvements over these methods in the extraction of general multi-attribute relations.
We introduce a new way to measure the forecast effort that analysts devote to their earnings forecasts by measuring the analyst's general effort for all covered firms. While the commonly applied effort measure is based on analyst behaviour for one firm, our measure considers analyst behaviour for all covered firms. Our general effort measure captures additional information about analyst effort and thus can identify accurate forecasts. We emphasise the importance of investigating analyst behaviour in a larger context and argue that analysts who generally devote substantial forecast effort are also likely to devote substantial effort to a specific firm, even if this effort might not be captured by a firm-specific measure. Empirical results reveal that analysts who devote higher general forecast effort issue more accurate forecasts. Additional investigations show that analysts' career prospects improve with higher general forecast effort. Our measure improves on existing methods as it has higher explanatory power regarding differences in forecast accuracy than the commonly applied effort measure. Additionally, it can address research questions that cannot be examined with a firm-specific measure. It provides a simple but comprehensive way to identify accurate analysts.
Determinants of earnings forecast error, earnings forecast revision and earnings forecast accuracy
(2012)
Earnings forecasts are ubiquitous in today's financial markets. They are essential indicators of future firm performance and a starting point for firm valuation. Extremely inaccurate and overoptimistic forecasts during the most recent financial crisis have raised serious doubts regarding the reliability of such forecasts. This thesis therefore investigates new determinants of forecast errors and accuracy. In addition, new determinants of forecast revisions are examined. More specifically, the thesis answers the following questions: 1) How do analyst incentives lead to forecast errors? 2) How do changes in analyst incentives lead to forecast revisions? 3) What factors drive differences in forecast accuracy?
Communication via serial bus systems such as CAN plays an important role in all kinds of embedded electronic and mechatronic systems. To cope with the functional safety requirements of safety-critical applications, the safety features of the communication systems need to be enhanced. One measure to achieve more robust communication is to add a redundant data transmission path to the application. In general, the communication of real-time embedded systems such as automotive applications is tethered, and the redundant data transmission lines are tethered as well, increasing the size of the wiring harness and the weight of the system. A radio link is preferred as a redundant transmission line, as it uses a complementary transmission medium compared to the wired solution and, in addition, reduces wiring harness size and weight. Standard wireless links like Wi-Fi or Bluetooth cannot meet the real-time requirements of bus communication. The new dual-mode radio enables a redundant transmission line that meets all requirements regarding real-time capability, robustness and transparency for the data bus. In addition, it provides a transmission medium complementary to commonly used tethered links. A CAN bus system is used to demonstrate the redundant data transfer via tethered and wireless CAN.
We consider a binary multivariate regression model in which the conditional expectation of a binary variable given a higher-dimensional input variable belongs to a parametric family. Based on this, we introduce a model-based bootstrap (MBB) for higher-dimensional input variables. This test can be used to check whether a sequence of independent and identically distributed observations belongs to such a parametric family. The approach is based on the empirical residual process introduced by Stute (Ann Statist 25:613–641, 1997). In contrast to the approach of Stute & Zhu (Scandinavian J Statist 29:535–545, 2002), a transformation is not required. Thus, any problems associated with non-parametric regression estimation are avoided. As a result, the MBB method is much easier for users to implement. To illustrate the power of the MBB-based tests, a small simulation study is performed. Compared to the approach of Stute & Zhu, the simulations indicate a slightly improved power of the MBB-based method. Finally, both methods are applied to a real data set.
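The MBB principle (fit the parametric model, form a Kolmogorov-type statistic of the residual process, and calibrate it by resampling responses from the fitted model) can be illustrated with a univariate logistic sketch. All function names here are hypothetical, the univariate setting simplifies the paper's higher-dimensional one, and the implementation is not the authors':

```python
import math
import random

def logit_fit(x, y, steps=25):
    """Fit P(Y=1 | x) = sigmoid(a + b*x) by Newton-Raphson."""
    a = b = 0.0
    for _ in range(steps):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for xi, yi in zip(x, y):
            p = 1.0 / (1.0 + math.exp(-(a + b * xi)))
            w = p * (1.0 - p)
            g0 += yi - p
            g1 += (yi - p) * xi
            h00 += w
            h01 += w * xi
            h11 += w * xi * xi
        det = h00 * h11 - h01 * h01
        if det < 1e-12:
            break
        a += (h11 * g0 - h01 * g1) / det
        b += (h00 * g1 - h01 * g0) / det
    return a, b

def cusum_stat(x, y, a, b):
    """Kolmogorov-type functional of the residual (marked empirical) process."""
    s = m = 0.0
    for xi, yi in sorted(zip(x, y)):
        s += yi - 1.0 / (1.0 + math.exp(-(a + b * xi)))
        m = max(m, abs(s))
    return m / math.sqrt(len(x))

def mbb_test(x, y, B=200, seed=0):
    """Model-based bootstrap p-value for H0: the logistic family fits."""
    rng = random.Random(seed)
    a, b = logit_fit(x, y)
    t_obs = cusum_stat(x, y, a, b)
    exceed = 0
    for _ in range(B):
        # resample responses from the *fitted* model, then refit and recompute
        ystar = [1 if rng.random() < 1.0 / (1.0 + math.exp(-(a + b * xi))) else 0
                 for xi in x]
        if cusum_stat(x, ystar, *logit_fit(x, ystar)) >= t_obs:
            exceed += 1
    return (exceed + 1) / (B + 1)
```

Because the responses are regenerated from the fitted model, no non-parametric regression estimate and no transformation of the residual process are needed, which is the practical simplification the abstract emphasizes.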
This book provides a compact introduction to the bootstrap method. In addition to classical results on point estimation and test theory, multivariate linear regression models and generalized linear models are covered in detail. Special attention is given to the use of bootstrap procedures to perform goodness-of-fit tests to validate model or distributional assumptions. In some cases, new methods are presented here for the first time.
The text is motivated by practical examples and the implementations of the corresponding algorithms are always given directly in R in a comprehensible form. Overall, R is given great importance throughout. Each chapter includes a section of exercises and, for the more mathematically inclined readers, concludes with rigorous proofs. The intended audience is graduate students who already have a prior knowledge of probability theory and mathematical statistics.
The integration of frequently changing, volatile product data from different manufacturers into a single catalog is a significant challenge for small and medium-sized e-commerce companies. They rely on timely integration of product data to present it aggregated in an online shop without knowing the manufacturers' format specifications, concept understanding, or data quality. Furthermore, format, concepts, and data quality may change at any time. Consequently, integrating product catalogs into a single standardized catalog is often a laborious manual task. Current strategies to streamline or automate catalog integration use techniques based on machine learning, word vectorization, or semantic similarity. However, most approaches struggle with low-quality or real-world data. We propose Attribute Label Ranking (ALR) as a recommendation engine that simplifies, for practitioners, the integration of previously unknown, proprietary tabular formats into a standardized catalog. We evaluate ALR by focusing on the impact of different neural network architectures, language features, and semantic similarity. Additionally, we consider metrics for industrial application and present the impact of ALR in production and its limitations.
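ALR itself is a neural recommendation model. As a rough stand-in for the idea of ranking standardized target attributes for an unknown source column, the following sketch uses character n-gram cosine similarity; all names are hypothetical, and this baseline is far simpler than the evaluated architectures:

```python
import math
from collections import Counter

def ngrams(text, n=3):
    """Character trigram profile of a (padded, lowercased) string."""
    padded = f"  {text.lower()} "
    return Counter(padded[i:i + n] for i in range(len(padded) - n + 1))

def cosine(a, b):
    dot = sum(v * b[k] for k, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_labels(column_header, sample_values, target_attributes):
    """Rank standardized catalog attributes for one proprietary column,
    scoring each target against the header plus a few sample cell values."""
    query = ngrams(column_header + " " + " ".join(sample_values))
    scored = [(cosine(query, ngrams(t)), t) for t in target_attributes]
    return [t for _, t in sorted(scored, reverse=True)]
```

A practitioner would take the top-ranked attribute as the mapping suggestion and fall back to manual review for low-confidence columns.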
The progress in natural language processing (NLP) research over the last years offers novel business opportunities for companies, such as automated user interaction or improved data analysis. Building sophisticated NLP applications requires dealing with modern machine learning (ML) technologies, which impedes enterprises from establishing successful NLP projects. Our experience in applied NLP research projects shows that continuously integrating research prototypes into production-like environments with quality assurance builds trust in the software and demonstrates its convenience and usefulness with regard to the business goal. We introduce STAMP 4 NLP as an iterative and incremental process model for developing NLP applications. With STAMP 4 NLP, we merge software engineering principles with best practices from data science. Instantiating our process model allows prototypes to be created efficiently by utilizing templates, conventions, and implementations, enabling developers and data scientists to focus on the business goals. Due to our iterative-incremental approach, businesses can deploy an enhanced version of the prototype to their software environment after every iteration, maximizing potential business value and trust early and avoiding the cost of successful yet never-deployed experiments.
The fourth industrial revolution introduces disruptive technologies to production environments. One of these technologies is multi-agent systems (MASs), in which agents virtualize machines. However, the actual performance of agents in production environments can hardly be estimated, as most research has focused on isolated projects and specific scenarios. We address this gap by implementing a highly connected and configurable reference model with quantifiable key performance indicators (KPIs) for production scheduling and routing in single-piece workflows. Furthermore, we propose an algorithm to optimize the search for extrema in highly connected distributed systems. The benefits, limits, and drawbacks of MASs and their performance are evaluated extensively by event-based simulations against the introduced model, which acts as a benchmark. Even though the performance of the proposed MAS is, on average, slightly lower than that of the reference system, the increased flexibility allows it to find new solutions and deliver improved factory-planning outcomes. Our MAS shows emergent behavior, using flexible production techniques to correct errors and compensate for bottlenecks. This increased flexibility offers substantial improvement potential. The general model in this paper allows the results to be transferred to estimate real systems or other models.
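The paper's extremum-search algorithm is not reproduced here; the following hypothetical sketch only illustrates the underlying principle of decentralized extremum finding, in which each agent repeatedly adopts the best KPI value reported by its neighbors until the network agrees:

```python
def gossip_max(values, neighbors, max_rounds=100):
    """Synchronous gossip: agent i keeps the maximum of its own value and
    its neighbors' current values; converges within network-diameter rounds."""
    best = list(values)
    for _ in range(max_rounds):
        nxt = [max([best[i]] + [best[j] for j in neighbors[i]])
               for i in range(len(best))]
        if nxt == best:
            break
        best = nxt
    return best
```

In a highly connected topology such as the reference model's, the diameter is small, so agreement on the global extremum is reached in very few exchange rounds.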
Magnetic nanoparticle relaxation in biomedical application: focus on simulating nanoparticle heating
(2021)
Extension fractures are typical for deformation under low or no confining pressure. They can be explained by a phenomenological extension strain failure criterion. In the past, a simple empirical criterion for fracture initiation in brittle rock has been developed. In this article, it is shown that the simple extension strain criterion makes unrealistic strength predictions in biaxial compression and tension. To overcome this major limitation, a new extension strain criterion is proposed by adding a weighted principal shear component to the simple criterion. The shear weight is chosen such that the enriched extension strain criterion represents the same failure surface as the Mohr–Coulomb (MC) criterion. Thus, the MC criterion has been derived as an extension strain criterion predicting extension failure modes, which are unexpected in the classical understanding of the failure of cohesive-frictional materials. In progressive damage of rock, the most likely fracture direction is orthogonal to the maximum extension strain, leading to dilatancy. The enriched extension strain criterion is proposed as a threshold surface for crack initiation (CI) and crack damage (CD) and as a failure surface at peak stress (CP). Unlike compressive loading, tensile loading requires only a limited number of critical cracks to cause failure. Therefore, for tensile stresses, the failure criteria must be modified, possibly by a cut-off corresponding to the CI stress. Examples show that the enriched extension strain criterion predicts much lower volumes of damaged rock mass than the simple extension strain criterion.
Background:
Additional stabilization of the “comma sign” in anterosuperior rotator cuff repair has been proposed to provide biomechanical benefits regarding stability of the repair.
Purpose:
This in vitro study aimed to investigate the influence of a comma sign–directed reconstruction technique for anterosuperior rotator cuff tears on the primary stability of the subscapularis tendon repair.
Study Design:
Controlled laboratory study.
Methods:
A total of 18 fresh-frozen cadaveric shoulders were used in this study. Anterosuperior rotator cuff tears (complete full-thickness tear of the supraspinatus and subscapularis tendons) were created, and supraspinatus repair was performed with a standard suture bridge technique. The subscapularis was repaired with either a (1) single-row or (2) comma sign technique. A high-resolution 3D camera system was used to analyze 3-mm and 5-mm gap formation at the subscapularis tendon-bone interface upon incremental cyclic loading. Moreover, the ultimate failure load of the repair was recorded. A Mann-Whitney test was used to assess significant differences between the 2 groups.
Results:
The comma sign repair withstood significantly more loading cycles than the single-row repair until 3-mm and 5-mm gap formation occurred (P ≤ .047). The ultimate failure load did not reveal any significant differences when the 2 techniques were compared (P = .596).
Conclusion:
The results of this study show that additional stabilization of the comma sign enhanced the primary stability of subscapularis tendon repair in anterosuperior rotator cuff tears. Although this stabilization did not seem to influence the ultimate failure load, it effectively decreased the micromotion at the tendon-bone interface during cyclic loading.
Clinical Relevance:
The proposed technique for stabilization of the comma sign has shown superior biomechanical properties in comparison with a single-row repair and might thus improve tendon healing. Further clinical research will be necessary to determine its influence on the functional outcome.
In positron emission tomography, improving the time, energy and spatial resolutions of detectors and exploiting Compton kinematics introduce the possibility of reconstructing a radioactivity distribution image from scatter coincidences, thereby enhancing image quality. The number of single-scattered coincidences alone is of the same order of magnitude as that of true coincidences. In this work, a compact Compton camera module based on monolithic scintillation material is investigated as a detector ring module. The detector interactions are simulated with the Monte Carlo package GATE. The scattering angle inside the tissue is derived from the energy of the scattered photon, which results in a set of possible scattering trajectories, or a broken line of response. The Compton kinematics collimation reduces the number of solutions. Additionally, the time-of-flight information helps localize the position of the annihilation. One of the questions of this investigation is how the energy, spatial and temporal resolutions help confine the possible annihilation volume. A comparison of currently technically feasible detector resolutions (under laboratory conditions) demonstrates their influence on this annihilation volume and shows that energy and coincidence time resolution have a significant impact. Improving the latter from 400 ps to 100 ps shrinks the annihilation volume by around 50%, while improving the energy resolution in the absorber layer from 12% to 4.5% results in a reduction of 60%. The inclusion of single tissue-scattered data has the potential to increase the sensitivity of a scanner by a factor of 2 to 3. The concept can be further optimized and extended to multiple scatter coincidences and subsequently validated with a reconstruction algorithm.
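The derivation of the tissue scattering angle from the scattered photon's energy follows standard Compton kinematics. This snippet is a generic illustration of that relation, not code from the study:

```python
import math

M_E_C2_KEV = 511.0  # electron rest energy; also the annihilation photon energy

def scatter_angle_deg(e_scattered_kev, e_incident_kev=M_E_C2_KEV):
    """Compton kinematics: cos(theta) = 1 - m_e*c^2 * (1/E' - 1/E0)."""
    cos_theta = 1.0 - M_E_C2_KEV * (1.0 / e_scattered_kev - 1.0 / e_incident_kev)
    # clamp against floating-point noise before taking the arccosine
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_theta))))
```

For 511 keV annihilation photons, a photon that retains half its energy has scattered by 90°; the finite energy resolution of the detector translates directly into a spread of these recovered angles, which is why energy resolution confines the annihilation volume.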
Thrombogenic complications are a main issue in mechanical circulatory support (MCS). No validated in vitro method is available to quantitatively assess the thrombogenic performance of pulsatile MCS devices under realistic hemodynamic conditions. The aim of this study is to propose a method to evaluate the thrombogenic potential of new designs without the use of complex in vivo trials. This study presents a novel in vitro method for reproducible thrombogenicity testing of pulsatile MCS systems using low-molecular-weight heparinized porcine blood. Blood parameters are continuously measured with full-blood thromboelastometry (ROTEM; EXTEM, FIBTEM and a custom-made analysis, HEPNATEM). Thrombus formation is optically observed after four hours of testing. The results of three experiments are presented, each with two parallel loops. The area of thrombus formation inside the MCS device was reproducible. A filter implanted in the loop catches embolizing thrombi without a measurable increase in platelet activation, allowing conclusions about the place of origin of thrombi inside the device. EXTEM and FIBTEM parameters such as clotting velocity (α) and maximum clot firmness (MCF) show a total decrease of around 6%, with a characteristic kink after 180 minutes. HEPNATEM α and MCF rise within the first 180 minutes, indicating a continuously increasing activation level of coagulation. After 180 minutes, the consumption of clotting factors prevails, resulting in a decrease of α and MCF. With the designed mock loop and the presented protocol, we are able to identify thrombogenic hot spots inside a pulsatile pump and characterize their thrombogenic potential.
Aneurysmal subarachnoid hemorrhage (aSAH) is associated with early and delayed brain injury due to several underlying and interrelated processes, which include inflammation, oxidative stress, and endothelial and neuronal apoptosis. Treatment with melatonin, a cytoprotective neurohormone with anti-inflammatory, anti-oxidant and anti-apoptotic effects, has been shown to attenuate early brain injury (EBI) and to prevent delayed cerebral vasospasm in experimental aSAH models. Less is known about the role of endogenous melatonin in aSAH outcome and how its production is altered by the pathophysiological cascades initiated during EBI. In the present observational study, we analyzed changes in melatonin levels during the first three weeks after aSAH.
Cardiopulmonary bypass (CPB) is a standard technique for cardiac surgery but comes with the risk of severe neurological complications (e.g. stroke) caused by embolisms and/or reduced cerebral perfusion. We report on an aortic cannula prototype design (optiCAN) with a helical outflow and a jet-splitting dispersion tip that could reduce the risk of embolic events and that restored cerebral perfusion to 97.5% of physiological flow during CPB in vivo, whereas a commercial curved-tip cannula yields 74.6%. In a further in vitro comparison, the pressure loss and hemolysis parameters of optiCAN remain unaffected. The results are reproducibly confirmed in silico for an exemplary human aortic anatomy via computational fluid dynamics (CFD) simulations. Based on the CFD simulations, we first show that the optiCAN design improves aortic root washout, which reduces the risk of thromboembolism. Second, we identify regions of the aortic intima with an increased risk of plaque release by correlating areas of enhanced plaque growth with areas of high wall shear stress (WSS). From this, we propose another easy-to-manufacture cannula design (opti2CAN) that decreases the areas burdened by high WSS while preserving physiological cerebral flow and favorable hemodynamics. With this novel cannula design, we propose a cannulation option to reduce neurological complications and the prevalence of stroke in high-risk patients after CPB.
Biologically sensitive field-effect devices (BioFEDs) advantageously combine electronic field-effect functionality with the recognition ability of a (bio)chemical receptor for (bio)chemical sensing. In this review, basic and widely applied device concepts of silicon-based BioFEDs (ion-sensitive field-effect transistor, silicon nanowire transistor, electrolyte-insulator-semiconductor capacitor, light-addressable potentiometric sensor) are presented, and recent progress (from 2019 to early 2021) is discussed. One of the main advantages of BioFEDs is the label-free sensing principle, which enables the detection of a large variety of biomolecules and bioparticles via their intrinsic charge. The review encompasses applications of BioFEDs for the label-free electrical detection of clinically relevant protein biomarkers, deoxyribonucleic acid molecules and viruses, and enzyme-substrate reactions, as well as the recording of the cell acidification rate (as an indicator of cellular metabolism) and the extracellular potential.
Previous studies optimized the dimensions of coaxial heat exchangers using constant mass flow rates as a boundary condition. They show a thermally optimal circular ring width of nearly zero. Hydraulically optimal is an inner-to-outer pipe radius ratio of 0.65 for turbulent and 0.68 for laminar flow types. In contrast, in this study, flow conditions in the circular ring are kept constant (a set of fixed Reynolds numbers) during optimization. This approach ensures fixed flow conditions and prevents inappropriately high or low mass flow rates. The optimization is carried out for three objectives: maximum energy gain, minimum hydraulic effort and, eventually, optimum net-exergy balance. The optimization changes the inner pipe radius and the mass flow rate but not the Reynolds number of the circular ring. The thermal calculations are based on Hellström's borehole resistance, and the hydraulic optimization on individually calculated linear head-loss coefficients. Increasing the inner pipe radius results in decreased hydraulic losses in the inner pipe but increased losses in the circular ring. The net-exergy difference is a key performance indicator that combines the thermal and hydraulic calculations: it is the difference between the thermal exergy flux and the hydraulic effort. The Reynolds number in the circular ring, rather than the mass flow rate, is held constant during all optimizations. From a thermal perspective, the result is an optimal circular ring width of nearly zero. The hydraulically optimal inner pipe radius is 54% of the outer pipe radius for laminar flow and 60% for turbulent flow scenarios. The net-exergetic optimization shows a predominant influence of the hydraulic losses, especially for small temperature gains. The exact result depends on the earth's thermal properties and the flow type. Conclusively, the design of coaxial geothermal probes should focus on the hydraulic optimum and take the thermal optimum as a secondary criterion, due to the dominating hydraulics.
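For context, the fixed-flow laminar trade-off that the cited earlier studies optimized can be reproduced with a toy calculation. This is not the paper's constant-Reynolds, net-exergy model: it only combines the standard laminar resistance of a circular pipe with that of a concentric annulus for one fixed volumetric flow, and all names are illustrative.

```python
import math

def laminar_loss(ratio, r_out=1.0):
    """Relative laminar pressure loss for a flow Q sent down the inner pipe
    and back through the annulus (Hagen-Poiseuille-style resistances)."""
    ri = ratio * r_out
    pipe = 1.0 / ri**4  # inner pipe resistance ~ 1/r^4
    annulus_geom = (r_out**4 - ri**4
                    - (r_out**2 - ri**2)**2 / math.log(r_out / ri))
    return pipe + 1.0 / annulus_geom

def optimal_ratio(steps=2000):
    """Grid search for the inner-to-outer radius ratio minimizing total loss."""
    ratios = [i / steps for i in range(1, steps) if 0.05 < i / steps < 0.95]
    return min(ratios, key=laminar_loss)
```

Under these simplifications the minimizer falls in the same broad range as the radius ratios quoted in the abstract; the exact value depends on the loss model, which is precisely why the constant-Reynolds formulation of the study changes the optimum.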
In this study, the process chain of additive manufacturing by means of powder bed fusion is presented for the material glass. In order to process components additively and reliably, new concepts with different solutions were developed and investigated.
Compared to established metallic materials, the properties of glass materials differ significantly. Therefore, the process control was adapted to the material glass in the investigations. With extensive parameter studies based on various glass powders such as borosilicate glass and quartz glass, scientifically proven results on powder bed fusion of glass are presented. Based on the determination of the particle properties with different methods, extensive investigations are made regarding the melting behavior of glass by means of laser beams. Furthermore, the experimental setup was steadily expanded. In addition to the integration of coaxial temperature measurement and regulation, preheating of the building platform is of major importance. This offers the possibility to perform 3D printing at the transformation temperatures of the glass materials. To improve the component’s properties, the influence of a subsequent heat treatment was also investigated.
The experience gained was incorporated into a new experimental system, which allows a much deeper exploration of the 3D printing of glass. Currently, studies are being conducted to improve surface texture, building accuracy, and geometrical capabilities using three-dimensional specimens.
The contribution shows the development of research in the field of 3D printing of glass, gives an insight into the machine and process engineering as well as an outlook on the possibilities and applications.