No data protection ruling caused more panicked reactions last year than the CJEU's decision in the case "Wirtschaftsakademie Schleswig-Holstein" (C-210/16). The judgment raised numerous questions in the data protection literature and in the public debate: Is everyone now a "joint" controller? What are the criteria? The CJEU recently provided clarity in a judgment that, by general account, caused a sensation but was de facto hardly surprising. In doing so, however, the court left some questions open and raised new ones. A look at old and new challenges in cooperation scenarios.
Shortly before the parliamentary summer recess, on 28 June 2019 the Bundestag passed the Second EU Data Protection Adaptation and Implementation Act (2. DSAnpUG-EU); the Bundesrat approved the act on 20 September 2019. This omnibus act, which amends numerous statutes at the federal level, is intended to harmonize federal law and adapt it to the General Data Protection Regulation (GDPR), in force since May 2018.
Controllership, data breaches, the end of fax & e-mail: supervisory authorities with contentious positions
(2020)
EDPB: European supervisory authorities issue new guidelines on data-protection-compliant consent
(2020)
Supervision and enforcement against the unlawful use of cookies & co. under the TTDSG
(2022)
Booster stations can fulfill a varying pressure demand with high energy-efficiency, because individual pumps can be deactivated at smaller loads. Although this is a seemingly simple approach, it is not easy to decide precisely when to activate or deactivate pumps. Contemporary activation controls derive the switching points from the current volume flow through the system. However, it is not measured directly for various reasons. Instead, the controller estimates the flow based on other system properties. This causes further uncertainty for the switching decision. In this paper, we present a method to find a robust, yet energy-efficient activation strategy.
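The switching decision described above can be illustrated with a minimal sketch. The function below is a hypothetical hysteresis controller, not the method from the paper: it inflates the uncertain flow estimate by an assumed error margin and keeps a pump running inside a dead band to avoid rapid toggling. All thresholds and parameter names are invented for illustration.

```python
# Hypothetical sketch: pump activation under an uncertain flow
# estimate. Margin and hysteresis values are illustrative only.

def pumps_needed(flow_est, per_pump_capacity, active, margin=0.1, hysteresis=0.05):
    """Return the number of pumps to activate.

    flow_est          -- estimated volume flow (uncertain)
    per_pump_capacity -- nominal flow one pump can deliver
    active            -- currently active pump count
    margin            -- safety margin against estimation error
    hysteresis        -- dead band to avoid rapid switching
    """
    # Robust demand: inflate the estimate by the assumed error margin.
    robust_flow = flow_est * (1.0 + margin)
    required = int(-(-robust_flow // per_pump_capacity))  # ceiling division
    # Only deactivate a pump if the estimate drops clearly below the
    # switching point (hysteresis band), trading a little efficiency
    # for robustness against estimation noise.
    if required < active:
        threshold = (active - 1) * per_pump_capacity * (1.0 - hysteresis)
        if robust_flow > threshold:
            return active
    return max(required, 0)
```

With a per-pump capacity of 1.0, an estimate of 1.7 while three pumps run drops to two pumps, but 1.75 stays at three because the inflated estimate sits inside the hysteresis band.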
Cheap does not imply cost-effective -- this is rule number one of zeitgeisty system design. The initial investment accounts only for a small portion of the lifecycle costs of a technical system. In fluid systems, about ninety percent of the total costs are caused by other factors like power consumption and maintenance. With modern optimization methods, it is already possible to plan an optimal technical system considering multiple objectives. In this paper, we focus on an often neglected contribution to the lifecycle costs: downtime costs due to spontaneous failures. Consequently, availability becomes an issue.
The conference center darmstadtium in Darmstadt is a prominent example of energy efficient buildings. Its heating system consists of different source and consumer circuits connected by a Zortström reservoir. Our goal was to reduce the energy costs of the system as much as possible. Therefore, we analyzed its supply circuits. The first step towards optimization is a complete examination of the system: 1) Compilation of an object list for the system, 2) collection of the characteristic curves of the components, and 3) measurement of the load profiles of the heat and volume-flow demand. Instead of modifying the system manually and testing the solution by simulation, the second step was the creation of a global optimization program. The objective was to minimize the total energy costs for one year. We compare two different topologies and show opportunities for significant savings.
Availability and sustainability are important requirements when planning long-lived technical systems. Lifetime optimizations usually examine only individual components of predefined systems; whether an optimal lifetime calls for an entirely different system variant is rarely questioned. Technical Operations Research (TOR) makes it possible to automatically select the lifetime-optimal system structure from supersets of technical systems. The article demonstrates this using the example of a hydrostatic transmission.
Resilience as a concept has found its way into different disciplines to describe the ability of an individual or system to withstand and adapt to changes in its environment. In this paper, we provide an overview of the concept in different communities and extend it to the area of mechanical engineering. Furthermore, we present metrics to measure resilience in technical systems and illustrate them by applying them to load-carrying structures. By giving application examples from the Collaborative Research Centre (CRC) 805, we show how the concept of resilience can be used to control uncertainty during different stages of product life.
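One widely used family of resilience metrics compares the performance a system actually delivers during a disruption-and-recovery period to its nominal performance. The sketch below is a generic illustration of that idea, not the specific CRC 805 metrics.

```python
# Generic resilience metric (illustrative only; the CRC 805 metrics
# differ in detail): the ratio of performance actually delivered over
# a disruption-and-recovery period to the undisturbed nominal level.
def resilience(performance, nominal):
    """performance -- equally spaced samples of system performance
    nominal       -- the undisturbed performance level (> 0)"""
    if not performance or nominal <= 0:
        raise ValueError("need samples and a positive nominal level")
    return sum(performance) / (nominal * len(performance))
```

A system that dips to half performance and recovers, e.g. samples [1.0, 1.0, 0.5, 0.75, 1.0] against a nominal level of 1.0, scores 0.85; an undisturbed system scores 1.0.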
Planning the layout and operation of a technical system is a common task for an engineer. Typically, the workflow is divided into consecutive stages: First, the engineer designs the layout of the system, drawing on experience or heuristic methods. Second, a control strategy is found, often optimized by simulation. This usually results in good operation of an unquestioned system topology. In contrast, we apply Operations Research (OR) methods to find a cost-optimal solution for both stages simultaneously via mixed-integer programming (MILP). Technical Operations Research (TOR) allows one to find a provably globally optimal solution within the model formulation. However, the modeling error due to the abstraction of physical reality remains unknown. We address this ubiquitous problem of OR methods by comparing our computational results with measurements in a test rig. For a practical test case we compute a topology and control strategy via MILP and verify that the objectives are met up to a deviation of 8.7%.
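The idea of optimizing topology and control together can be shown on a toy scale by exhaustive enumeration. The sketch below is only an illustration of the principle: three candidate pumps with invented purchase costs, capacities, and power draws, a made-up demand profile, and a greedy dispatch as the "control" stage. A real TOR model would formulate this as a MILP and hand it to an exact solver.

```python
from itertools import product

# Toy illustration: choose a pump topology and its operation together.
# All numbers are invented for this sketch.
PUMPS = [  # (purchase_cost, capacity, power_per_unit_flow)
    (100.0, 2.0, 1.0),
    (160.0, 3.0, 0.8),
    (260.0, 5.0, 0.7),
]
DEMAND = [1.0, 2.0, 4.0]   # load scenarios (volume flow)
ENERGY_PRICE = 10.0        # cost per unit of energy

def total_cost(selection):
    """Purchase cost plus operating cost, or None if infeasible."""
    capacity = sum(PUMPS[i][1] for i in selection)
    cost = sum(PUMPS[i][0] for i in selection)
    for d in DEMAND:
        if capacity < d:
            return None  # this topology cannot serve the load
        # Greedy control stage: serve the load with the most
        # efficient pumps first.
        remaining = d
        for i in sorted(selection, key=lambda i: PUMPS[i][2]):
            q = min(remaining, PUMPS[i][1])
            cost += ENERGY_PRICE * q * PUMPS[i][2]
            remaining -= q
    return cost

# Enumerate all 2^3 topologies and keep the cheapest feasible one.
best = min(
    (s for s in (tuple(i for i in range(3) if bits[i])
                 for bits in product([0, 1], repeat=3))
     if total_cost(s) is not None),
    key=total_cost,
)
```

Here the single large pump wins: its lower purchase-plus-operation cost beats every combination of smaller pumps, a conclusion that stage-by-stage design of a fixed two-pump topology would have missed.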
Modern industry and multi-discipline projects require highly trained individuals with resilient science and engineering backgrounds. Graduates must be able to agilely apply excellent theoretical knowledge in their subject matter as well as essential practical "hands-on" knowledge of diverse working processes to solve complex problems. To meet these demands, university education follows the concept of Constructive Alignment and thus increasingly adapts the teaching of necessary practical skills to actual industry requirements and assessment routines. However, a systematic approach to coherently align these three central teaching demands is strangely absent from current university curricula. We demonstrate the feasibility of implementing practical assessments in a regular theory-based examination, thus defining the term "blended assessment". We assessed a course for natural science and engineering students pursuing a career in biomedical engineering, and evaluated the benefit of blended assessment exams for students and lecturers. Our controlled study assessed the physiological background of electrocardiograms (ECGs), the practical measurement of ECG curves, and the interpretation of basic pathologic alterations. To study long-term effects, students were assessed on the topic twice, six months apart. Our findings suggest a significant improvement in student gain with respect to practical skills and theoretical knowledge. The results of the reassessments support these outcomes. From the lecturers' point of view, blended assessment complements practical training courses while keeping organizational effort manageable. We consider blended assessment a viable tool for providing an industry-ready education format with improved student gain that should be evaluated and established further to prepare university graduates optimally for their future careers.
Dynamic retinal vessel analysis (DVA) provides a non-invasive way to assess microvascular function in patients and potentially to improve predictions of individual cardiovascular (CV) risk. The aim of our study was to use untargeted machine learning on DVA in order to improve CV mortality prediction and identify corresponding response alterations.
Delayed cerebral ischemia (DCI) is a common complication after aneurysmal subarachnoid hemorrhage (aSAH) and can lead to infarction and poor clinical outcome. The underlying mechanisms are still incompletely understood, but animal models indicate that vasoactive metabolites and inflammatory cytokines produced within the subarachnoid space may progressively impair and partially invert neurovascular coupling (NVC) in the brain. Because cerebral and retinal microvasculature are governed by comparable regulatory mechanisms and may be connected by perivascular pathways, retinal vascular changes are increasingly recognized as a potential surrogate for altered NVC in the brain. Here, we used non-invasive retinal vessel analysis (RVA) to assess microvascular function in aSAH patients at different times after the ictus.
Purpose Vascular risk factors and ocular perfusion are heatedly discussed in the pathogenesis of glaucoma. The retinal vessel analyzer (RVA, IMEDOS Systems, Germany) allows noninvasive measurement of retinal vessel regulation. Significant differences, especially in the veins, between healthy subjects and patients suffering from glaucoma were previously reported. In this pilot study we investigated whether localized vascular regulation is altered in glaucoma patients with altitudinal visual field defect asymmetry. Methods 15 eyes of 12 glaucoma patients with advanced altitudinal visual field defect asymmetry were included. The mean defect was calculated for each hemisphere separately (−20.99 ± 10.49 dB in the hemisphere with the profound visual field defect vs. −7.36 ± 3.97 dB in the less profound hemisphere). After pupil dilation, RVA measurements of retinal arteries and veins were conducted using the standard protocol. The superior and inferior retinal vessel reactivity were measured consecutively in each eye. Results Significant differences were recorded between the hemispheres in venous vessel constriction after flicker light stimulation and in the overall amplitude of the reaction (p < 0.04 and p < 0.02, respectively). Vessel reaction was higher in the hemisphere corresponding to the more advanced visual field defect. Arterial diameters reacted similarly, failing to reach statistical significance. Conclusion Localized retinal vessel regulation is significantly altered in glaucoma patients with asymmetric altitudinal visual field defects. Veins supplying the hemisphere concordant to a less profound visual field defect show diminished diameter changes. Vascular dysregulation might be particularly important in early glaucoma stages prior to a significant visual field defect.
Given industrial applications, the costs for the operation and maintenance of a pump system typically far exceed its purchase price. For finding an optimal pump configuration which minimizes not only investment, but life-cycle costs, methods like Technical Operations Research, which is based on Mixed-Integer Programming, can be applied. However, during the planning phase, the designer is often faced with uncertain input data, e.g. future load demands can only be estimated. In this work, we deal with this uncertainty by developing a chance-constrained two-stage (CCTS) stochastic program. The design and operation of a booster station working under uncertain load demand are optimized to minimize total cost, including purchase price, operation cost incurred by energy consumption, and penalty cost resulting from water shortage. We find optimized system layouts using a sample average approximation (SAA) algorithm, and analyze the results for different risk levels of water shortage. By adjusting the risk level, the costs and performance range of the system can be balanced, and thus the system's resilience can be engineered.
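The core of the SAA idea can be sketched in a few lines: draw demand samples, and accept a design only if the sampled probability of a shortage stays below the chosen risk level. The demand distribution, candidate designs, and costs below are all invented; the paper's two-stage model is far richer.

```python
import random

# Minimal sample-average-approximation (SAA) sketch for a
# chance-constrained design choice. All numbers are invented.
random.seed(42)
SAMPLES = [random.gauss(mu=3.0, sigma=1.0) for _ in range(1000)]

DESIGNS = {  # name: (capacity, purchase_cost)
    "small":  (3.0, 100.0),
    "medium": (4.5, 160.0),
    "large":  (6.0, 240.0),
}

def feasible(capacity, risk_level):
    """Chance constraint via SAA: the sampled probability of a water
    shortage (demand exceeding capacity) must not exceed risk_level."""
    shortages = sum(1 for d in SAMPLES if d > capacity)
    return shortages / len(SAMPLES) <= risk_level

def cheapest_design(risk_level):
    """Cheapest design whose sampled shortage risk is acceptable."""
    ok = [(cost, name) for name, (cap, cost) in DESIGNS.items()
          if feasible(cap, risk_level)]
    return min(ok)[1] if ok else None
```

Tightening the risk level pushes the solution toward larger, more expensive stations, which mirrors the cost-versus-performance trade-off the abstract describes.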
On obligations in the development process of resilient systems with algorithmic design methods
(2018)
Advanced computational methods are needed both for the design of large systems and to compute high accuracy solutions. Such methods are efficient in computation, but the validation of results is very complex, and highly skilled auditors are needed to verify them. We investigate legal questions concerning obligations in the development phase, especially for technical systems developed using advanced methods. In particular, we consider methods of resilient and robust optimization. With these techniques, high performance solutions can be found, despite a high variety of input parameters. However, given the novelty of these methods, it is uncertain whether legal obligations are being met. The aim of this paper is to discuss if and how the choice of a specific computational method affects the developer’s product liability. The review of legal obligations in this paper is based on German law and focuses on the requirements that must be met during the design and development process.
Depending on the number of gears and the required degrees of freedom, a transmission can take on nearly 100,000 conceivable structures with the same function. With the traditional development approach of manually identifying and comparing individual promising system configurations, innovative and, above all, cost-minimal solutions are easily overlooked. In a research project, TU Darmstadt applied special optimization methods to reliably find a layout that is optimal for the individual objectives, even in large solution spaces.
Ensuring access to water and sanitation for all is Goal No. 6 of the 17 UN Sustainability Development Goals to transform our world. As one step towards this goal, we present an approach that leverages remote sensing data to plan optimal water supply networks for informal urban settlements. The concept focuses on slums within large urban areas, which are often characterized by a lack of an appropriate water supply. We apply methods of mathematical optimization aiming to find a network describing the optimal supply infrastructure. Hereby, we choose between different decentral and central approaches combining supply by motorized vehicles with supply by pipe systems. For the purposes of illustration, we apply the approach to two small slum clusters in Dhaka and Dar es Salaam. We show our optimization results, which represent the lowest cost water supply systems possible. Additionally, we compare the optimal solutions of the two clusters (also for varying input parameters, such as population densities and slum size development over time) and describe how the result of the optimization depends on the entered remote sensing data.
The transition within transportation towards battery electric vehicles can lead to a more sustainable future. To account for the development goal ‘climate action’ stated by the United Nations, it is mandatory, within the conceptual design phase, to derive energy-efficient system designs. One barrier is the uncertainty of the driving behaviour within the usage phase. This uncertainty is often addressed by using a stochastic synthesis process to derive representative driving cycles and by using cycle-based optimization. To deal with this uncertainty, a new approach based on a stochastic optimization program is presented. This leads to an optimization model that is solved with an exact solver. It is compared to a system design approach based on driving cycles and a genetic algorithm solver. Both approaches are applied to find efficient electric powertrains with fixed-speed and multi-speed transmissions. Hence, the similarities, differences and respective advantages of each optimization procedure are discussed.
Algorithmic design and resilience assessment of energy efficient high-rise water supply systems
(2018)
High-rise water supply systems provide water flow and suitable pressure in all levels of tall buildings. To design such state-of-the-art systems, the consideration of energy efficiency and the anticipation of component failures are mandatory. In this paper, we use Mixed-Integer Nonlinear Programming to compute an optimal placement of pipes and pumps, as well as an optimal control strategy. Moreover, we consider the resilience of the system to pump failures. A resilient system is able to fulfill a predefined minimum functionality even though components fail or are restricted in their normal usage. We present models to measure and optimize the resilience. To demonstrate our approach, we design and analyze an optimal resilient decentralized water supply system inspired by a real-life hotel building.
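The notion of a predefined minimum functionality under component failures can be checked mechanically. The sketch below is a generic illustration, not the paper's model: a design counts as k-resilient if every possible failure of k pumps leaves enough total capacity to cover a minimum demand.

```python
from itertools import combinations

# Illustrative resilience check for a pump system (invented numbers):
# a design is k-resilient if, after any k pumps fail, the remaining
# pumps still cover a predefined minimum demand.
def is_k_resilient(capacities, k, min_demand):
    """True if every failure of k pumps leaves enough capacity."""
    total = sum(capacities)
    for failed in combinations(range(len(capacities)), k):
        remaining = total - sum(capacities[i] for i in failed)
        if remaining < min_demand:
            return False
    return True
```

Three pumps of capacity 2 survive any single failure against a minimum demand of 4, whereas a 4-plus-2 split does not: losing the large pump leaves only 2. This is why decentralized layouts with several similar pumps tend to score better on resilience than one oversized unit.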
The application of mathematical optimization methods for water supply system design and operation provides the capacity to increase the energy efficiency and to lower the investment costs considerably. We present a system approach for the optimal design and operation of pumping systems in real-world high-rise buildings that is based on the usage of mixed-integer nonlinear and mixed-integer linear modeling approaches. In addition, we consider different booster station topologies, i.e. parallel and series-parallel central booster stations as well as decentral booster stations. To confirm the validity of the underlying optimization models with real-world system behavior, we additionally present validation results based on experiments conducted on a modularly constructed pumping test rig. Within the models we consider layout and control decisions for different load scenarios, leading to a Deterministic Equivalent of a two-stage stochastic optimization program. We use a piecewise linearization as well as a piecewise relaxation of the pumps’ characteristics to derive mixed-integer linear models. Besides the solution with off-the-shelf solvers, we present a problem specific exact solving algorithm to improve the computation time. Focusing on the efficient exploration of the solution space, we divide the problem into smaller subproblems, which partly can be cut off in the solution process. Furthermore, we discuss the performance and applicability of the solution approaches for real buildings and analyze the technical aspects of the solutions from an engineer’s point of view, keeping in mind the economically important trade-off between investment and operation costs.
The recently discovered first hyperbolic objects passing through the Solar System, 1I/’Oumuamua and 2I/Borisov, have raised the question about near term missions to Interstellar Objects. In situ spacecraft exploration of these objects will allow the direct determination of both their structure and their chemical and isotopic composition, enabling an entirely new way of studying small bodies from outside our solar system. In this paper, we map various Interstellar Object classes to mission types, demonstrating that missions to a range of Interstellar Object classes are feasible, using existing or near-term technology. We describe flyby, rendezvous and sample return missions to interstellar objects, showing various ways to explore these bodies characterizing their surface, dynamics, structure and composition. Their direct exploration will constrain their formation and history, situating them within the dynamical and chemical evolution of the Galaxy. These mission types also provide the opportunity to explore solar system bodies and perform measurements in the far outer solar system.
Geochemical characterisation of hypersaline waters is difficult, as high concentrations of salts hinder the analysis of constituents at low concentrations, such as trace metals, and samples collected for trace metal analysis in natural waters are easily contaminated. This is particularly the case if samples are collected by non-conventional techniques such as those required for aquatic subglacial environments. In this paper we present the first analysis of a subglacial brine from Taylor Valley (~78°S), Antarctica for the trace metals Ba, Co, Mo, Rb, Sr, V, and U. Samples were collected englacially using an electrothermal melting probe called the IceMole. This probe uses differential heating of a copper head as well as the probe's sidewalls and an ice screw at the melting head to move through glacier ice. Detailed blanks, meltwater, and subglacial brine samples were collected to evaluate the impact of the IceMole and the borehole pump, the melting and collection process, filtration, and storage on the geochemistry of the samples collected by this device. Comparisons between meltwater profiles through the glacier ice and blank analysis, together with published studies on ice geochemistry, suggest the potential for minor contributions of some species (Rb, As, Co, Mn, Ni, NH4+, and NO2−+NO3−) from the IceMole. The ability to conduct detailed chemical analyses of subglacial fluids collected with melting probes is critical for the future exploration of the hundreds of deep subglacial lakes in Antarctica.
Test-retest reliability of the internal shoulder rotator muscles' stretch reflex in healthy men
(2021)
Until now, the reproducibility of the short-latency stretch reflex of the internal rotator muscles of the glenohumeral joint has not been established. Twenty-three healthy male participants performed three sets of external shoulder rotation stretches with various pre-activation levels on two different dates of measurement to assess test-retest reliability. All stretches were applied with a dynamometer acceleration of 104°/s² and a velocity of 150°/s. The electromyographical response was measured via surface EMG. Reflex latencies showed a pre-activation effect (η² = 0.355). ICC ranged from 0.735 to 0.909, indicating an overall "good" relative reliability. The SRD at the 95% level lay between ±7.0 and ±12.3 ms. The reflex gain showed overall poor test-retest reproducibility. The chosen methodological approach presented a suitable test protocol for evaluating shoulder muscle stretch reflex latency. A proof-of-concept study to validate the presented methodical approach, including subjects with clinically relevant shoulder conditions, is recommended.
We introduce a new way to measure the forecast effort that analysts devote to their earnings forecasts by measuring the analyst's general effort for all covered firms. While the commonly applied effort measure is based on analyst behaviour for one firm, our measure considers analyst behaviour for all covered firms. Our general effort measure captures additional information about analyst effort and thus can identify accurate forecasts. We emphasise the importance of investigating analyst behaviour in a larger context and argue that analysts who generally devote substantial forecast effort are also likely to devote substantial effort to a specific firm, even if this effort might not be captured by a firm-specific measure. Empirical results reveal that analysts who devote higher general forecast effort issue more accurate forecasts. Additional investigations show that analysts' career prospects improve with higher general forecast effort. Our measure improves on existing methods as it has higher explanatory power regarding differences in forecast accuracy than the commonly applied effort measure. Additionally, it can address research questions that cannot be examined with a firm-specific measure. It provides a simple but comprehensive way to identify accurate analysts.
The fourth industrial revolution introduces disruptive technologies to production environments. One of these technologies is the multi-agent system (MAS), in which agents virtualize machines. However, the actual performance of agents in production environments can hardly be estimated, as most research has been focusing on isolated projects and specific scenarios. We address this gap by implementing a highly connected and configurable reference model with quantifiable key performance indicators (KPIs) for production scheduling and routing in single-piece workflows. Furthermore, we propose an algorithm to optimize the search for extrema in highly connected distributed systems. The benefits, limits, and drawbacks of MASs and their performance are evaluated extensively by event-based simulations against the introduced model, which acts as a benchmark. Even though the performance of the proposed MAS is, on average, slightly lower than the reference system, the increased flexibility allows it to find new solutions and deliver improved factory-planning outcomes. Our MAS shows emergent behavior by using flexible production techniques to correct errors and compensate for bottlenecks. This increased flexibility offers substantial improvement potential. The general model in this paper allows the transfer of the results to estimate real systems or other models.
The world of work is in upheaval. The demand for flexible working-time models is steadily increasing, and the coronavirus pandemic has intensified this need. In small and medium-sized enterprises in particular, there is a growing necessity to deploy employees as closely as possible to actual demand: calling up more working time when orders are strong and reducing working time when orders dry up, thereby avoiding paid "idle time". For this purpose, the legislature provides employers with the instrument of so-called on-call work (Section 12 of the Part-Time and Fixed-Term Employment Act, TzBfG). The following article outlines the possibilities and limits of on-call work and presents concrete drafting options for employment contracts.
Following the annual labour law reviews of previous years (NWB 5/2019 p. 266; NWB 8/2020 p. 557), the following article provides an overview of relevant developments in labour law in 2020. The first part deals with legislative changes. In addition to rules attributable to the COVID-19 pandemic, the legislature also implemented other projects; among other things, the law on the posting of workers was reformed. The legislative proposal on regulating mobile work also caused, and continues to cause, debate. The second part explains supreme court decisions on labour law that are important for practice, sorted alphabetically from A (as in anti-discrimination law) to U (as in Urlaub, i.e. leave).
Handling business-trip time correctly under labour law can cause problems in operational practice. Whether travel times constitute working time within the meaning of the Working Hours Act (ArbZG), i.e. whether they must be counted towards the maximum daily working time, is often just as unclear as the question of whether and to what extent travel times must be remunerated. Whether an employee can be obliged to undertake a business trip at all is another possible source of disputes between the parties to an employment contract, as various court decisions show. The following article answers these questions and provides a practice-oriented overview.