Conference Proceeding
Refine
Year of publication
Institute
- Fachbereich Elektrotechnik und Informationstechnik (302)
- Fachbereich Energietechnik (259)
- Fachbereich Medizintechnik und Technomathematik (243)
- Fachbereich Maschinenbau und Mechatronik (209)
- Fachbereich Luft- und Raumfahrttechnik (208)
- Solar-Institut Jülich (167)
- IfB - Institut für Bioengineering (152)
- Fachbereich Bauingenieurwesen (139)
- Fachbereich Wirtschaftswissenschaften (73)
- ECSM European Center for Sustainable Mobility (62)
- INB - Institut für Nano- und Biotechnologien (52)
- MASKOR Institut für Mobile Autonome Systeme und Kognitive Robotik (48)
- Fachbereich Chemie und Biotechnologie (35)
- Nowum-Energy (22)
- Kommission für Forschung und Entwicklung (17)
- Fachbereich Architektur (16)
- ZHQ - Bereich Hochschuldidaktik und Evaluation (10)
- FH Aachen (7)
- Fachbereich Gestaltung (4)
- IaAM - Institut für angewandte Automation und Mechatronik (3)
- Arbeitsstelle für Hochschuldidaktik und Studienberatung (2)
- Institut für Angewandte Polymerchemie (2)
- Verwaltung (2)
- Digitalisierung in Studium & Lehre (1)
- Freshman Institute (1)
- Kommission für Planung und Finanzen (1)
- Senat (1)
Language
- English (1170)
- German (477)
- Italian (1)
- Multiple languages (1)
- Spanish (1)
Document Type
- Conference Proceeding (1650)
Keywords
- Biosensor (25)
- Blitzschutz (15)
- CAD (11)
- Finite-Elemente-Methode (11)
- civil engineering (11)
- Bauingenieurwesen (10)
- Lightning protection (9)
- Einspielen <Werkstoff> (6)
- Telekommunikationsmarkt (6)
- shakedown analysis (6)
Interplanetary trajectories for low-thrust spacecraft are often characterized by multiple revolutions around the sun. Unfortunately, the convergence of traditional trajectory optimizers that are based on numerical optimal control methods depends strongly on an adequate initial guess for the control function (if a direct method is used) or for the starting values of the adjoint vector (if an indirect method is used). Especially when many revolutions around the sun are required, trajectory optimization becomes a very difficult and time-consuming task that involves a lot of experience and expert knowledge in astrodynamics and optimal control theory, because an adequate initial guess is extremely hard to find. Evolutionary neurocontrol (ENC) was proposed as a smart method for low-thrust trajectory optimization that fuses artificial neural networks and evolutionary algorithms into so-called evolutionary neurocontrollers (ENCs) [1]. Inspired by natural archetypes, ENC attacks the trajectory optimization problem from the perspective of artificial intelligence and machine learning, a perspective that is quite different from that of optimal control theory. Within the context of ENC, a trajectory is regarded as the result of a spacecraft steering strategy that continuously maps the current spacecraft state and the current target state onto the current spacecraft control vector. This way, the problem of searching for the optimal spacecraft trajectory is equivalent to the problem of searching for (or "learning") the optimal spacecraft steering strategy. An artificial neural network is used to implement such a spacecraft steering strategy. It can be regarded as a parameterized function (the network function) that is defined by the internal network parameters. Therefore, each distinct set of network parameters defines a different network function and thus a different steering strategy. The problem of searching for the optimal steering strategy is then equivalent to the problem of searching for the optimal set of network parameters. Evolutionary algorithms that work on a population of (artificial) chromosomes are used to find the optimal network parameters, because the parameters can be easily mapped onto a chromosome. The trajectory optimization problem is solved when the optimal chromosome is found. A comparison of solar sail trajectories that have been published by others [2, 3, 4, 5] with ENC trajectories has shown that ENCs can be successfully applied for near-globally optimal spacecraft control [1, 6] and that they are able to find trajectories that are closer to the (unknown) global optimum, because they explore the trajectory search space more exhaustively than a human expert can. The obtained trajectories are fairly accurate with respect to the terminal constraint. If a more accurate trajectory is required, the ENC solution can be used as an initial guess for a local trajectory optimization method. Using ENC, low-thrust trajectories can be optimized without an initial guess and without expert attendance.
Here, new results for nuclear electric spacecraft and for solar sail spacecraft are presented, and it will be shown that ENCs find very good trajectories even for very difficult problems. Trajectory optimization results are presented for:
- NASA's Solar Polar Imager Mission, a mission to attain a highly inclined close solar orbit with a solar sail [7]
- a mission to deflect asteroid Apophis with a solar sail from a retrograde orbit with a very high-velocity impact [8, 9]
- JPL's "2nd Global Trajectory Optimization Competition", a grand tour to visit four asteroids from different classes with a NEP spacecraft
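The steering-strategy idea described above can be illustrated with a minimal sketch. This is not the ENC implementation from [1]: the network layout, the planar toy dynamics, the thrust level and the fitness function are all assumptions for illustration. A small feedforward network maps the current spacecraft and target states onto a control vector, its weights are flattened into a chromosome, and a simple evolutionary loop searches for weights that minimize the terminal-constraint violation.

```python
# Minimal sketch of evolutionary neurocontrol (ENC) on a toy planar low-thrust
# problem; network size, dynamics and fitness are illustrative assumptions only.
import numpy as np

N_IN, N_HID, N_OUT = 8, 10, 2            # (state + target) -> control vector
N_W = N_IN * N_HID + N_HID * N_OUT       # number of network parameters (chromosome length)

def steering(weights, state, target):
    """Network function: map (state, target) onto a control vector."""
    w1 = weights[:N_IN * N_HID].reshape(N_IN, N_HID)
    w2 = weights[N_IN * N_HID:].reshape(N_HID, N_OUT)
    h = np.tanh(np.concatenate([state, target]) @ w1)
    return np.tanh(h @ w2)

def fitness(weights, steps=500, dt=0.01):
    """Propagate toy dynamics under the steering strategy; lower cost is better."""
    state = np.array([1.0, 0.0, 0.0, 1.0])        # r_x, r_y, v_x, v_y (canonical units)
    target = np.array([0.0, 1.5, -0.8, 0.0])      # hypothetical target state
    for _ in range(steps):
        u = 0.02 * steering(weights, state, target)   # small thrust acceleration
        r, v = state[:2], state[2:]
        a = -r / np.linalg.norm(r) ** 3 + u           # central gravity + thrust
        state = np.concatenate([r + v * dt, v + a * dt])
    return np.linalg.norm(state - target)             # terminal-constraint violation

# Simple evolutionary loop over chromosomes of network parameters.
rng = np.random.default_rng(0)
pop = rng.normal(size=(30, N_W))
for gen in range(30):
    costs = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(costs)[:8]]                          # keep best strategies
    children = parents[rng.integers(0, 8, 22)] + 0.1 * rng.normal(size=(22, N_W))
    pop = np.vstack([parents, children])                          # elitism + mutation
print("best terminal error:", min(fitness(ind) for ind in pop))
```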
After a brief introduction of conventional laboratory structures, this work focuses on an innovative and universal approach for the setup of a training laboratory for electric machines and drive systems. The novel approach employs a central 48 V DC bus, which forms the backbone of the structure. Several sets of DC machines, asynchronous machines and synchronous machines are connected to this bus. The advantages of the novel system structure are manifold, both from a didactic and a technical point of view: student groups can work at their own performance level in a highly parallelized and at the same time individualized way. Additional training setups (similar or different) can easily be added. Only the total power dissipation has to be provided, i.e. the DC bus balances the power flow between the student groups. Comparative results of course evaluations of several cohorts of students are shown.
In this paper we investigate the use of deep neural networks for 3D object detection in uncommon, unstructured environments such as an open-pit mine. While neural nets are frequently used for object detection in regular autonomous driving applications, unusual driving scenarios outside street traffic pose additional challenges. For one, the collection of appropriate data sets to train the networks is an issue. For another, testing the performance of trained networks often requires tailored integration with the particular domain as well. While there exist different solutions for these problems in regular autonomous driving, there are only very few approaches that work equally well for special domains. We address both challenges in this work. First, we discuss two possible ways of acquiring data for training and evaluation: we evaluate a semi-automated annotation of recorded LIDAR data and we examine synthetic data generation. Using these datasets we train and test different deep neural networks for the task of object detection. Second, we propose a possible integration of a ROS2 detector module for an autonomous driving platform. Finally, we present the performance of three state-of-the-art deep neural networks in the domain of 3D object detection on a synthetic dataset and a smaller one containing a characteristic object from an open-pit mine.
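As an illustration of the kind of ROS2 integration proposed above (a sketch, not the authors' module): a small rclpy node subscribes to LIDAR point clouds and republishes the output of a detector as 3D detections. Topic names, the use of the standard vision_msgs interface and the run_detector() stub are assumptions.

```python
# Hypothetical ROS2 detector node: subscribes to LIDAR point clouds and publishes
# 3D detections. Topic names and the run_detector() stub are placeholders.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import PointCloud2
from vision_msgs.msg import Detection3DArray

def run_detector(cloud_msg):
    """Placeholder for inference with a trained 3D object detection network."""
    return []  # list of vision_msgs Detection3D messages

class LidarDetector(Node):
    def __init__(self):
        super().__init__('lidar_detector')
        self.sub = self.create_subscription(
            PointCloud2, '/lidar/points', self.on_cloud, 10)
        self.pub = self.create_publisher(Detection3DArray, '/detections_3d', 10)

    def on_cloud(self, msg):
        out = Detection3DArray()
        out.header = msg.header                 # keep frame and timestamp
        out.detections = run_detector(msg)      # inference on the point cloud
        self.pub.publish(out)

def main():
    rclpy.init()
    rclpy.spin(LidarDetector())
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```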
Magnetic nanoparticles (MNP) are investigated with great interest for biomedical applications in diagnostics (e.g. imaging: magnetic particle imaging (MPI)), therapeutics (e.g. hyperthermia: magnetic fluid hyperthermia (MFH)) and multi-purpose biosensing (e.g. magnetic immunoassays (MIA)). What all of these applications have in common is that they are based on the unique magnetic relaxation mechanisms of MNP in an alternating magnetic field (AMF). While MFH and MPI are currently the most prominent examples of biomedical applications, here we present results on the relatively new biosensing application of frequency mixing magnetic detection (FMMD) from a simulation perspective. In general, we ask how the key parameters of MNP (core size and magnetic anisotropy) affect the FMMD signal: by varying the core size, we investigate the effect of the magnetic volume per MNP; and by changing the effective magnetic anisotropy, we study the MNPs' flexibility to leave their preferred magnetization direction. From this, we predict the most effective combination of MNP core size and magnetic anisotropy for maximum signal generation.
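A rough illustration of the frequency mixing principle underlying FMMD (a sketch, not the simulation model used in the work): the equilibrium magnetization of superparamagnetic MNP is a nonlinear (Langevin) function of the applied field, so driving with two frequencies f1 and f2 produces response components at mixing frequencies such as f2 + 2·f1. The particle moment, field amplitudes and frequencies below are arbitrary example values.

```python
# Sketch of the frequency mixing principle with a Langevin magnetization response.
# Particle moment, field amplitudes and frequencies are illustrative values only.
import numpy as np

kB, T = 1.380649e-23, 300.0   # Boltzmann constant (J/K), temperature (K)
m = 1e-19                     # magnetic moment per particle (A*m^2), assumed
f1, f2 = 61.0, 15400.0        # low-frequency drive / high-frequency excitation (Hz)
B1, B2 = 15e-3, 1.5e-3        # field amplitudes (T)

fs = 2_000_000                                     # sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)                      # 1 s of signal -> 1 Hz resolution
B = B1 * np.cos(2 * np.pi * f1 * t) + B2 * np.cos(2 * np.pi * f2 * t)

xi = m * B / (kB * T)
xi = np.where(np.abs(xi) < 1e-6, 1e-6, xi)         # avoid numerical trouble near B = 0
M = 1.0 / np.tanh(xi) - 1.0 / xi                   # Langevin function, nonlinear in B

spec = np.abs(np.fft.rfft(M)) / len(M)
freqs = np.fft.rfftfreq(len(M), 1 / fs)
for f in (f2, f2 + 2 * f1, f2 + 4 * f1):           # mixing terms appear at f2 + 2n*f1
    print(f"{f:9.1f} Hz : {spec[np.argmin(np.abs(freqs - f))]:.3e}")
```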
In the paper, the results obtained from experiments on a model of a reinforced building in case of a direct lightning strike are compared with calculations. The comparison includes peak values of the magnetic field Hmax, of its derivative (dH/dt)max and of the induced voltages umax in typical cable routings. The experiments are performed on a 1:6 scaled building and the results are extrapolated using the theory of similarity relations. The calculations are based on the approximate formulae given in IEC 62305-4 and have to be supplemented by a rough estimation of the additional shielding effect of a second reinforcement layer. The comparison shows that the measured peak values of the magnetic field and its derivative are mostly lower than the calculated ones. The induced voltages are in good agreement. Hence, calculations of the induced voltages based on IEC 62305-4 are a good method for lightning protection studies of buildings where the reinforcement is used as a grid-like electromagnetic shield.
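To first order, the induced voltages compared here follow from Faraday's law: for an open installation loop of area A in a roughly homogeneous field, u(t) ≈ μ0·A·dH/dt. The sketch below shows only this elementary estimate; the loop dimensions and the field derivative are arbitrary example values, not figures from the measurements or from IEC 62305-4.

```python
# Order-of-magnitude estimate of the voltage induced in an installation loop by
# the magnetic field derivative (Faraday's law); numbers are example values only.
MU0 = 4e-7 * 3.141592653589793        # vacuum permeability (V*s/(A*m))

def induced_voltage(loop_area_m2, dH_dt_A_per_m_s):
    """u ~ mu0 * A * dH/dt for a loop in a roughly homogeneous field."""
    return MU0 * loop_area_m2 * dH_dt_A_per_m_s

# Example: a 0.5 m x 10 m cable loop exposed to (dH/dt)max = 5e6 A/(m*s)
u_max = induced_voltage(0.5 * 10.0, 5e6)
print(f"estimated peak induced voltage: {u_max:.1f} V")   # roughly 31 V
```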
For the application of the concept of Lightning Protection Zones (LPZ), knowledge of the magnetic fields and induced voltages inside a structure is necessary. Laboratory experiments have been conducted on a downscaled model of a building (scale factor 1:6) to determine these electromagnetic quantities in case of a direct strike to the structure. The model (3 m x 2 m x 2 m) represented a small industrial building using the reinforcement of the concrete as an electromagnetic shield. The magnetic fields and magnetic field derivatives were measured at several locations inside the scaled model. Further, the voltages induced on three typical cable routes inside the model were determined. The influence of the lightning current waveshape, the point of strike, the bonding of the cable routes, and the bridging of an expansion joint in the middle of the building on these quantities was studied.
Hydrophobic magnetic nanoparticles (NPs) consisting of undecanoate-capped magnetite (Fe3O4, average diameter ca. 5 nm) are used to control quantized electron transfer to surface-confined redox units and metal NPs. A two-phase system consisting of an aqueous electrolyte solution and a toluene phase that includes the suspended undecanoate-capped magnetic NPs is used to control the interfacial properties of the electrode surface. The attracted magnetic NPs form a hydrophobic layer on the electrode surface resulting in the change of the mechanisms of the surface-confined electrochemical processes. A quinone-monolayer modified Au electrode demonstrates an aqueous-type of the electrochemical process (2e-+2H+ redox mechanism) for the quinone units in the absence of the hydrophobic magnetic NPs, while the attraction of the magnetic NPs to the surface results in the stepwise single-electron transfer mechanism characteristic of a dry nonaqueous medium. Also, the attraction of the hydrophobic magnetic NPs to the Au electrode surface modified with Au NPs (ca. 1.4 nm) yields a microenvironment with a low dielectric constant that results in the single-electron quantum charging of the Au NPs.
Market changes have forced telecommunication companies to transform their business. Increased competition, short innovation cycles, changed usage patterns, increased customer expectations and cost reduction are the main drivers. Our objective is to analyze to what extent transformation projects have improved the orientation towards the end-customers. Therefore, we selected 38 real-life case studies dealing with customer orientation. Our analysis is based on a telecommunication-specific framework that aligns strategy, business processes and information systems. The result of our analysis shows the following: transformation projects that aim to improve customer orientation are combined with clear goals on costs and revenue of the enterprise. These projects are usually directly linked to the customer touch points, but also to the development and provisioning of products. Furthermore, the analysis shows that customer orientation is not the sole trigger for transformation. There is no one-fits-all solution; rather, improved customer orientation needs aligned changes of business processes as well as of information systems related to different parts of the company.
Manufacturing Process Simulation for the Prediction of Tool-Part-Interaction and Ply Wrinkling
(2019)
Manufacturing Process Simulation for the Prediction of Tool-Part-Interaction and Ply Wrinkling
(2015)
Market abstraction of energy markets and policies - application in an agent-based modeling toolbox
(2023)
In light of emerging challenges in energy systems, markets are prone to changing dynamics and market design. Simulation models are commonly used to understand the changing dynamics of future electricity markets. However, existing market models were often created with specific use cases in mind, which limits their flexibility and usability. This can impose challenges for using a single model to compare different market designs. This paper introduces a new method of defining market designs for energy market simulations. The proposed concept makes it easy to incorporate different market designs into electricity market models by using relevant parameters derived from analyzing existing simulation tools, morphological categorization and ontologies. These parameters are then used to derive a market abstraction and integrate it into an agent-based simulation framework, allowing for a unified analysis of diverse market designs. Furthermore, we showcase the usability of integrating new types of long-term contracts and over-the-counter trading. To validate this approach, two case studies are demonstrated: a pay-as-clear market and a pay-as-bid long-term market. These examples demonstrate the capabilities of the proposed framework.
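To make the two case studies concrete, the sketch below contrasts pay-as-clear and pay-as-bid settlement for a single clearing of one-sided supply bids against a fixed demand. The bid data and the simplified merit-order clearing rule are illustrative assumptions, not the framework's actual implementation.

```python
# Sketch of two settlement rules for a single market clearing interval.
# Supply bids (volume in MWh, price in EUR/MWh) and demand are example values only.
bids = [("plant_a", 100, 10.0), ("plant_b", 80, 25.0),
        ("plant_c", 60, 40.0), ("plant_d", 50, 70.0)]
demand = 200  # MWh, assumed inelastic

def clear(bids, demand):
    """Accept the cheapest bids first until demand is met (merit order)."""
    accepted, remaining = [], demand
    for name, volume, price in sorted(bids, key=lambda b: b[2]):
        if remaining <= 0:
            break
        take = min(volume, remaining)
        accepted.append((name, take, price))
        remaining -= take
    return accepted

accepted = clear(bids, demand)
clearing_price = max(price for _, _, price in accepted)       # marginal accepted bid

pay_as_clear = {name: take * clearing_price for name, take, _ in accepted}
pay_as_bid   = {name: take * price          for name, take, price in accepted}

print("clearing price:", clearing_price, "EUR/MWh")
print("pay-as-clear payments:", pay_as_clear)
print("pay-as-bid payments:  ", pay_as_bid)
```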
Martinella
(2010)
This paper presents a two-dimensional-in-space mathematical model of biosensors based on an array of enzyme microreactors immobilised on a single electrode. The modeled system operates under amperometric conditions. The microreactors were modeled by particles and by strips. The model is based on the diffusion equations containing a nonlinear term related to the Michaelis-Menten kinetics of the enzymatic reaction. The model involves three regions: an array of enzyme microreactors where the enzyme reaction as well as mass transport by diffusion take place, a diffusion-limiting region where only diffusion takes place, and a convective region where the analyte concentration is maintained constant. Using computer simulation, the influence of the geometry of the microreactors and of the diffusion region on the biosensor response was investigated. The digital simulation was carried out using the finite difference technique.
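To illustrate the type of digital simulation described, the sketch below reduces the problem to one space dimension and a single enzyme layer and integrates the reaction-diffusion equation ∂S/∂t = D·∂²S/∂x² − Vmax·S/(KM + S) with an explicit finite difference scheme. The geometry, boundary conditions and all parameter values are simplifying assumptions, not the paper's 2D microreactor-array model.

```python
# 1D explicit finite-difference sketch of substrate diffusion with
# Michaelis-Menten consumption in an enzyme layer; parameters are illustrative.
import numpy as np

D, Vmax, KM = 3e-10, 1e-4, 1e-4   # diffusion (m^2/s), kinetics (mol/(m^3 s), mol/m^3)
L_enz, L_dif = 2e-6, 8e-6         # enzyme layer / diffusion layer thickness (m)
S_bulk = 1e-3                     # bulk substrate concentration (mol/m^3)

nx = 200
dx = (L_enz + L_dif) / nx
dt = 0.4 * dx**2 / D              # explicit scheme stability: dt <= dx^2 / (2 D)
x = np.linspace(0, L_enz + L_dif, nx)
in_enzyme = x <= L_enz            # reaction term only inside the enzyme region

S = np.full(nx, S_bulk)
for _ in range(100_000):          # march toward a quasi-steady state
    lap = np.zeros_like(S)
    lap[1:-1] = (S[2:] - 2 * S[1:-1] + S[:-2]) / dx**2
    reaction = np.where(in_enzyme, Vmax * S / (KM + S), 0.0)
    S[1:-1] += dt * (D * lap[1:-1] - reaction[1:-1])
    S[0], S[-1] = 0.0, S_bulk     # simplified electrode / convective (bulk) boundaries

# The amperometric response is taken here as proportional to the flux at x = 0
flux = D * (S[1] - S[0]) / dx
print(f"quasi-steady-state flux at the electrode: {flux:.3e} mol/(m^2 s)")
```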
In times of social climate protection movements, such as Fridays for Future, the priorities of society, industry and higher education are currently changing. The consideration of sustainability challenges is increasing. In the context of sustainable development, social skills are crucial to achieving the United Nations Sustainable Development Goals (SDGs). In particular, the impact that educational activities have on people, communities and society is therefore coming to the fore. Research has shown that people with high levels of social competence are better able to manage stressful situations, maintain positive relationships and communicate effectively. Such competencies are also associated with better academic performance and career success. However, especially in engineering programs, the social pillar is underrepresented compared to the environmental and economic pillars.
In response to these changes, higher education institutions should be more aware of their social impact - from individual forms of teaching to entire modules and degree programs. To identify specific potential for improvement and derive changes for further development, we present an initial framework for social impact measurement by transferring approaches already established in the business sector to the education sector. To demonstrate its applicability, we measure the key competencies taught in undergraduate engineering programs in Germany.
The aim is to prepare the students for success in the modern world of work and their future contribution to sustainable development. Additionally, the university can include the results in its sustainability report. Our method can be applied to different teaching methods and enables their comparison.
MedicVR: Acceleration and Enhancement Techniques for Direct Volume Rendering in Virtual Reality
(2019)
The success of a software development project, and of a systems integration project in particular, is measured by its fulfilment of the "devil's triangle": in time, in budget, in quality. This requires knowledge of the software and process quality, both to verify compliance with the quality criteria and to make predictions regarding adherence to schedule and budget. For this purpose, a system of various key performance indicators was designed at T-Systems Systems Integration and implemented in the organization, which achieves exactly this and fulfils the criteria for CMMI Level 3.
The seismic behavior of an existing unreinforced masonry building, built before modern seismic codes and located in the City of Ohrid, Republic of North Macedonia, has been investigated in this paper. The analyzed school building is selected as an archetype in an ongoing project named "Seismic vulnerability assessment of existing masonry structures in Republic of North Macedonia (SeismoWall)". Two independent segments were included in this research: seismic hazard assessment by creating site-specific response spectra, and seismic vulnerability definition by creating a region-specific series of vulnerability curves for the chosen building typology. A reliable seismic hazard assessment for a selected region is a crucial point for performing a seismic risk analysis of a characteristic building class. In that manner, a scenario-based method that combines knowledge of the tectonic style of the considered region, the active fault characterization, the earth crust model and the historical seismicity, named the Neo-Deterministic approach, is used for calculation of the response spectra for the location of the building. Variations of the rupturing process are taken into account in the nucleation point of the rupture, in the rupture velocity pattern and in the distribution of the slip on the fault. The results from the multiple scenarios are obtained as an envelope of the response spectra computed for the site using the Maximum Credible Seismic Input (MCSI) procedure.

The capacity of the selected building has been determined by using nonlinear static analysis. The MINEA software (SDA Engineering) was used for verification of the structural safety of the chosen unreinforced masonry structure. In the process of optimizing the number of samples, the computational cost required in a Monte Carlo simulation is significantly reduced, since the simulation is performed on a polynomial response surface function for prediction of the structural response. The performance point, found as the intersection of the capacity of the building and the spectra used, is chosen as the response parameter. Five damage limit states based on the capacity curve of the building are defined in dependence on the yield displacement and the maximum displacement. A maximum likelihood estimation procedure is utilized in the process of vulnerability curve determination. As a result, a region-specific series of vulnerability curves for the chosen type of masonry structures is defined. The probabilities of exceeding specific damage states obtained from the vulnerability curves are compared with the damage observed after the earthquake of July 2017 in the City of Ohrid, North Macedonia.
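The maximum likelihood step mentioned above is commonly implemented by fitting a lognormal fragility function P(exceedance | IM = x) = Φ(ln(x/θ)/β) to binary exceedance observations. The sketch below shows this standard lognormal-MLE recipe on made-up data; it is not the project's specific code, data or intensity measure.

```python
# Sketch: maximum likelihood fit of a lognormal fragility (vulnerability) curve
# P(DS exceeded | IM = x) = Phi(ln(x / theta) / beta). Data below are made up.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

im = np.array([0.05, 0.10, 0.15, 0.20, 0.30, 0.40])    # intensity measure, e.g. PGA (g)
n_total = np.array([20, 20, 20, 20, 20, 20])            # analyses per IM level
n_exceed = np.array([0, 2, 6, 11, 17, 19])              # cases exceeding the damage state

def neg_log_likelihood(params):
    theta, beta = np.exp(params)                        # optimize in log space (> 0)
    p = norm.cdf(np.log(im / theta) / beta)
    p = np.clip(p, 1e-9, 1 - 1e-9)                      # keep log() finite
    return -np.sum(n_exceed * np.log(p) + (n_total - n_exceed) * np.log(1 - p))

res = minimize(neg_log_likelihood, x0=np.log([0.2, 0.4]), method="Nelder-Mead")
theta_hat, beta_hat = np.exp(res.x)
print(f"median capacity theta = {theta_hat:.3f} g, dispersion beta = {beta_hat:.3f}")
```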
Additive Manufacturing (AM) of metallic workpieces faces continuously rising technological relevance and market size. Producing complex or highly strained unique workpieces is a significant field of application, making AM highly relevant for tool components. Its successful economic application requires systematic workpiece-based decisions and optimizations. Considering geometric and technological requirements as well as the necessary post-processing makes these decisions demanding and requires in-depth knowledge. As design is usually adjusted to established manufacturing, the associated technological and strategic potentials are often neglected. To embed AM in a future-proof industrial environment, software-based self-learning tools are necessary. Integrated into production planning, they enable companies to unlock the potentials of AM efficiently. This paper presents an appropriate methodology for the analysis of process-specific AM eligibility and optimization potential, complemented by concrete optimization proposals. For an integrated workpiece characterization, proven methods are extended by tooling-specific figures.
The first stage of the approach specifies the model's initialization. A learning set of tooling components is described using the developed key figure system. Based on this, a set of applicable rules for workpiece-specific result determination is generated through clustering and expert evaluation. Within the following application stage, the strategic orientation is quantified and workpieces of interest are described using the developed key figures. Subsequently, the retrieved information is used to automatically generate specific recommendations relying on the ruleset generated in stage one. Finally, actual experiences regarding the recommendations are gathered in stage three. Statistical learning feeds these back into the generated ruleset, leading to a continuously deepening knowledge base. This process enables a steady improvement in output quality.
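One possible reading of the initialization stage is sketched below, under assumptions: each learning workpiece is described by a few key figures, the set is clustered, and experts would then attach rules (e.g. an AM-eligibility rating) to each cluster. The feature names, values and cluster count are hypothetical, not the paper's key figure system.

```python
# Sketch of the initialization stage: cluster a learning set of tooling workpieces
# described by key figures. Feature names and values are assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# hypothetical key figures per workpiece:
# [geometric complexity, material utilization, number of machining setups, lot size]
learning_set = np.array([
    [0.9, 0.4, 5, 1], [0.8, 0.5, 4, 2], [0.2, 0.9, 2, 500],
    [0.3, 0.95, 1, 800], [0.7, 0.6, 6, 3], [0.1, 1.0, 2, 1000],
])

X = StandardScaler().fit_transform(learning_set)          # normalize the key figures
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Experts would now inspect each cluster and attach rules, e.g.
# "high complexity / small lot size -> AM-eligible, check post-processing effort".
for c in np.unique(labels):
    print(f"cluster {c}: workpieces {np.where(labels == c)[0].tolist()}")
```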
Research collaborations provide opportunities for both practitioners and researchers: practitioners need solutions for difficult business challenges and researchers are looking for hard problems to solve and publish. Nevertheless, research collaborations carry the risk that practitioners focus on quick solutions too much and that researchers tackle theoretical problems, resulting in products which do not fulfill the project requirements.
In this paper we introduce an approach extending the ideas of agile and lean software development. It helps practitioners and researchers keep track of their common research collaboration goal: a scientifically enriched software product which fulfills the needs of the practitioner’s business model.
This approach gives first-class status to application-oriented metrics that measure progress and success of a research collaboration continuously. Those metrics are derived from the collaboration requirements and help to focus on a commonly defined goal.
An appropriate tool set evaluates and visualizes those metrics with minimal effort, pushing all participants to focus on their tasks. Thus project status, challenges and progress are transparent to all research collaboration members at any time.
A new and simple method for nanostructuring using conventional photolithography and layer expansion or pattern-size reduction technique is presented, which can further be applied for the fabrication of different nanostructures and nano-devices. The method is based on the conversion of a photolithographically patterned metal layer to a metal-oxide mask with improved pattern-size resolution using thermal oxidation. With this technique, the pattern size can be scaled down to several nanometer dimensions. The proposed method is experimentally demonstrated by preparing nanostructures with different configurations and layouts, like circles, rectangles, trapezoids, “fluidic-channel”-, “cantilever”- and meander-type structures.
Dipl.-Ing. Ralf Engels - DHI Wasser und Umwelt GmbH, Syke. 24 pp. (pp. 70-93). Contribution to the 1. Aachener Softwaretag in der Wasserwirtschaft <1, 2007, Aachen>. Introduction [by the author]: Hydrodynamic sewer network modelling is a standard tool for the design of sewer networks. Beyond the calculation of the hydrological and hydraulic conditions in an urban catchment, more advanced technologies have meanwhile become standard as well. All controllable elements of a sewer network can be optimized dynamically so that the capacity of the network can be increased further. Automatic tools for dynamic hydraulic pollution load calculation allow the control - in particular of overflow structures - to be extended with regard to the discharged pollution loads and, in addition, provide detailed information for the operation of the wastewater treatment plant. Further biological process modelling complements this field. GIS tools can provide valuable services in the spatially differentiated modelling of sewer networks. The detailed consideration of individual sewer reach catchment areas in their spatial context thus becomes possible, as does the complete management of all data required for sewer network modelling in a clear graphical menu. In the past, the limits of sewer network modelling lay at its boundary: detailed information about the paths of water on the terrain surface, at the interface to receiving waters and in the interaction with groundwater could not previously be assessed with models. A dynamic coupling of different models representing all relevant hydraulic processes enables an integrated consideration of all possible paths that water can take in the city (Mark & Djordjevic, 2006). This contribution presents the state of the art for the integrated modelling of urban flooding by means of coupling surface models and sewer network models.
Urban farming is an innovative and sustainable way of food production and is becoming more and more important in smart city and quarter concepts. It also enables the production of certain foods in places where they usually are not produced, such as the production of fish or shrimps in large cities far away from the coast. Unfortunately, it is not always possible to show students such concepts and systems in real life as part of courses: visits to such industrial plants are sometimes not possible because of the distance, or are not permitted by the operator for hygienic reasons. In order to give the students the opportunity of getting into contact with such an urban farming system and its complex operation, an industrial urban farming plant was set up on a significantly smaller scale. All needed technical components like water aeration, biological and mechanical filtration or water circulation were replaced either by aquarium components or by self-designed parts, also using a 3D printer. Students from different courses like mechanical engineering, smart building engineering, biology, electrical engineering, automation technology and civil engineering were involved in this project. This "miniature industrial plant" was successfully put into operation and has now been running for two years. Due to the corona pandemic, home office and remote online lectures, the automation of this miniature plant should be brought to a higher level in the future to provide good remote control over the system and the water quality. The aim of giving the students a chance to get to know the operation of an urban farming plant was very well achieved, and the students had lots of fun "playing" and learning with it in a realistic way.
Various planar technologies are employed for developing solid-state sensors with low cost, small size and high reproducibility; thin- and thick-film technologies are most suitable for such production. Screen-printing is especially suitable due to its simplicity, low cost, high reproducibility and efficiency in large-scale production. This technology enables the deposition of a thick layer and allows precise pattern control. Moreover, it is a highly economic technology, saving large amounts of the used inks. In the course of repetitions of the film-deposition procedure there is no waste of material due to the additive nature of this thick-film technology. Finally, the thick films can be easily and quickly deposited on inexpensive substrates. In this contribution, thick-film ion-selective electrodes based on ionophores as well as on crystalline ion-selective materials, intended for potentiometric measurements, are demonstrated. The analytical parameters of these sensors are comparable with those reported for conventional potentiometric electrodes. All mentioned thick-film strip electrodes have been fabricated entirely in a single, fully automated thick-film technology, without any additional manual, chemical or electrochemical steps. In all cases simple, inexpensive, commercially available materials, i.e. flexible plastic substrates and easily cured polymer-based pastes, were used.
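The analytical parameters mentioned, in particular the slope, are usually assessed against the Nernstian response E = E0 + (2.303·RT/zF)·log10(a). A minimal sketch of this comparison is given below; the calibration points are made up and do not come from the paper.

```python
# Sketch: theoretical Nernstian slope vs. slope fitted from calibration data of a
# potentiometric (ion-selective) electrode. Calibration points are hypothetical.
import numpy as np

R, F, T, z = 8.314462618, 96485.332, 298.15, 1        # monovalent ion at 25 degC
slope_theory = 2.302585 * R * T / (z * F)             # ~0.0592 V per decade

log_a = np.array([-5.0, -4.0, -3.0, -2.0, -1.0])      # log10 of ion activity
emf_v = np.array([0.102, 0.159, 0.218, 0.279, 0.336]) # measured EMF (V), made up

slope_fit, e0 = np.polyfit(log_a, emf_v, 1)           # linear calibration E = e0 + s*log a
print(f"theoretical slope: {slope_theory * 1e3:.1f} mV/decade")
print(f"fitted slope:      {slope_fit * 1e3:.1f} mV/decade")
```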
Mischwassereinleitungen in Gewässer nach BWK Merkblatt M3 - Vorteile des detaillierten Nachweises
(2009)
Dipl.-Ing. Brigitte Huber and Dr.-Ing. Gerd Demny - Wasserverband Eifel Rur, Düren. 16 pages (pp. 59-74). Contribution to the 2. Aachener Softwaretag in der Wasserwirtschaft <2, 2009, Aachen>. Summary [by the authors]: For the urban catchment of the Broicher Bach, a simplified and a detailed verification according to BWK-M3 have been carried out. It turns out that the methodology of the simplified verification is not suitable for obtaining a realistic representation of the discharge-dominated flows in the watercourse. This is due in particular to the neglect of wave translation and retention in the channel. The resulting misjudgement of the flow conditions obstructs the view of measure planning appropriate to the situation. The detailed verification, carried out with the help of a rainfall-runoff model, is more elaborate to set up, but it draws a realistic picture of the flow increase caused by the discharges. With the help of the model, the main influences can be localized quickly and effective measure variants identified. In the example of the Broicher Bach presented here, the eight measures originally identified can be reduced to one, and the total volume of the required retention facilities is halved. In the authors' opinion, the comparison of both verification methods suggests that the simplified verification should be used at most for a first estimate of the need for measures. The identification and dimensioning of measures should always be carried out with the detailed verification method, based on a corresponding rainfall-runoff model. This applies in particular to watercourse sections whose flow is dominated by several successive discharge points.