Solar sails are large and lightweight reflective structures that are propelled by solar radiation pressure. This chapter covers their orbital and attitude dynamics and control. First, the advantages and limitations of solar sails are discussed and their history and development status are outlined. Because the dynamics of a solar sail is governed by the (thermo-)optical properties of the sail film, the basic solar radiation pressure force models are described and compared before parameters to measure solar sail performance are defined. The next part covers the orbital dynamics of solar sails for heliocentric motion, planetocentric motion, and motion at the Lagrangian equilibrium points. Afterwards, some advanced solar radiation pressure force models are described, which make it possible to quantify the thrust force on solar sails of arbitrary shape as well as the effects of temperature, light incidence angle, surface roughness, and optical degradation of the sail film in the space environment. The orbital motion of a solar sail is strongly coupled to its rotational motion, so the attitude control of these soft and flexible structures is very challenging, especially for planetocentric orbits that require fast attitude maneuvers. Finally, some potential attitude control methods are sketched and selection criteria are given.
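As a compact illustration of the baseline force model mentioned above (a sketch in illustrative notation, not the chapter's own): for an ideal, perfectly reflecting flat sail of area $A$ with unit normal $\mathbf{n}$ and light incidence angle $\alpha$, the solar radiation pressure force is

```latex
\mathbf{F} = 2\,P(r)\,A\cos^2\!\alpha\;\mathbf{n},
\qquad
P(r) = P_0\left(\frac{r_0}{r}\right)^{2},
```

with $P_0 \approx 4.56\,\mu\mathrm{N/m^2}$ at $r_0 = 1\,\mathrm{AU}$. A common performance parameter is the characteristic acceleration $a_\mathrm{c} = 2 P_0 A / m$, the acceleration of a sun-facing sail of total mass $m$ at $1\,\mathrm{AU}$.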
Low-thrust space propulsion systems enable flexible high-energy deep space missions, but the design and optimization of the interplanetary transfer trajectory is usually difficult. It requires considerable experience and expert knowledge, because the convergence behavior of traditional local trajectory optimization methods depends strongly on an adequate initial guess. Within this extended abstract, evolutionary neurocontrol, a method that fuses artificial neural networks and evolutionary algorithms, is proposed as a smart global method for low-thrust trajectory optimization. It does not require an initial guess. The implementation of evolutionary neurocontrol is detailed and its performance is demonstrated for an exemplary mission.
Solar sails enable missions to the outer solar system and beyond, although the solar radiation pressure decreases with the square of the solar distance. For such missions, the solar sail may gain a large amount of energy by first making one or more close approaches to the sun. Within this paper, optimal trajectories for solar sail missions to the outer planets and into near interstellar space (200 AU) are presented. It is shown that even near- to medium-term solar sails with relatively moderate performance allow reasonable transfer times to the boundaries of the solar system.
Interplanetary trajectories for low-thrust spacecraft are often characterized by multiple revolutions around the sun. Unfortunately, the convergence of traditional trajectory optimizers based on numerical optimal control methods depends strongly on an adequate initial guess for the control function (if a direct method is used) or for the starting values of the adjoint vector (if an indirect method is used). Especially when many revolutions around the sun are required, trajectory optimization becomes a very difficult and time-consuming task that demands considerable experience and expert knowledge in astrodynamics and optimal control theory, because an adequate initial guess is extremely hard to find. Evolutionary neurocontrol (ENC) was proposed as a smart method for low-thrust trajectory optimization that fuses artificial neural networks and evolutionary algorithms into so-called evolutionary neurocontrollers (ENCs) [1]. Inspired by natural archetypes, ENC attacks the trajectory optimization problem from the perspective of artificial intelligence and machine learning, a perspective quite different from that of optimal control theory. Within the context of ENC, a trajectory is regarded as the result of a spacecraft steering strategy that continuously maps the current spacecraft state and the current target state onto the current spacecraft control vector. This way, the problem of searching for the optimal spacecraft trajectory is equivalent to the problem of searching for (or "learning") the optimal spacecraft steering strategy. An artificial neural network is used to implement such a steering strategy. It can be regarded as a parameterized function (the network function) that is defined by the internal network parameters. Therefore, each distinct set of network parameters defines a different network function and thus a different steering strategy. The problem of searching for the optimal steering strategy is then equivalent to the problem of searching for the optimal set of network parameters. Evolutionary algorithms that work on a population of (artificial) chromosomes are used to find the optimal network parameters, because the parameters can easily be mapped onto a chromosome. The trajectory optimization problem is solved when the optimal chromosome is found.
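The scheme described above (state mapped to a control via a parameterized network, with the network parameters searched by an evolutionary algorithm) can be sketched on a toy problem. Everything below is an illustrative assumption — a 1-D "transfer", a tiny network, a plain elitist evolution — not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def steer(params, state, target):
    """Tiny neural network: maps (state, target) -> control in (-1, 1)."""
    w1 = params[:8].reshape(2, 4)   # input-to-hidden weights
    b1 = params[8:12]               # hidden biases
    w2 = params[12:16]              # hidden-to-output weights
    b2 = params[16]                 # output bias
    h = np.tanh(np.array([state, target]) @ w1 + b1)
    return float(np.tanh(h @ w2 + b2))

def fly(params, target=1.0, steps=50, dt=0.1):
    """Integrate toy dynamics under the steering strategy.
    Fitness rewards reaching the target position with low final speed."""
    x, v = 0.0, 0.0
    for _ in range(steps):
        a = steer(params, x, target)  # the steering strategy gives the control
        v += a * dt
        x += v * dt
    return -abs(x - target) - 0.1 * abs(v)

def evolve(generations=60, pop_size=40, sigma=0.3):
    """Elitist evolution over the 17 network parameters (the 'chromosome')."""
    pop = rng.normal(0.0, 1.0, size=(pop_size, 17))
    for _ in range(generations):
        fitness = np.array([fly(p) for p in pop])
        elite = pop[np.argsort(fitness)[-pop_size // 4:]]    # keep best quarter
        children = (elite[rng.integers(0, len(elite), pop_size)]
                    + rng.normal(0.0, sigma, size=(pop_size, 17)))
        pop = np.vstack([elite, children])[:pop_size]        # elites survive
    fitness = np.array([fly(p) for p in pop])
    return pop[np.argmax(fitness)], float(fitness.max())

best, best_fit = evolve()
```

Note the absence of an initial guess: the search starts from random chromosomes, and the fitness of a chromosome is obtained only by "flying" the trajectory it encodes.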
A comparison of solar sail trajectories published by others [2, 3, 4, 5] with ENC trajectories has shown that ENCs can be successfully applied for near-globally optimal spacecraft control [1, 6] and that they are able to find trajectories that are closer to the (unknown) global optimum, because they explore the trajectory search space more exhaustively than a human expert can. The obtained trajectories are fairly accurate with respect to the terminal constraint. If a more accurate trajectory is required, the ENC solution can be used as an initial guess for a local trajectory optimization method. Using ENC, low-thrust trajectories can be optimized without an initial guess and without expert attendance.
Here, new results for nuclear electric spacecraft and for solar sail spacecraft are presented, and it will be shown that ENCs find very good trajectories even for very difficult problems. Trajectory optimization results are presented for:
1. NASA's Solar Polar Imager mission, a mission to attain a highly inclined close solar orbit with a solar sail [7]
2. a mission to deflect the asteroid Apophis with a solar sail from a retrograde orbit with a very-high-velocity impact [8, 9]
3. JPL's "2nd Global Trajectory Optimization Competition", a grand tour to visit four asteroids from different classes with a NEP spacecraft
The concept of a laser-enhanced solar sail is introduced and the radiation pressure force model for an ideal laser-enhanced solar sail is derived. A laser-enhanced solar sail is a “traditional” solar sail that is, however, not solely propelled by solar radiation, but additionally by a laser beam that illuminates the sail. The additional laser radiation pressure increases the sail's propulsive force and can give, depending on the location of the laser source, more control authority over the direction of the solar sail’s propulsive force vector. This way, laser-enhanced solar sails may augment already existing solar sail mission concepts and make novel mission concepts feasible.
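A hedged sketch of how the ideal model extends (symbols are illustrative, not the paper's notation): with both sunlight and laser light treated as perfectly reflected, each contribution acts along the sail normal $\mathbf{n}$, so

```latex
\mathbf{F} = 2A\left(P_\mathrm{S}\cos^2\!\alpha_\mathrm{S}
                   + P_\mathrm{L}\cos^2\!\alpha_\mathrm{L}\right)\mathbf{n},
```

where $P_\mathrm{S}$ and $P_\mathrm{L}$ are the solar and laser radiation pressures at the sail and $\alpha_\mathrm{S}$, $\alpha_\mathrm{L}$ the respective incidence angles. Because $\alpha_\mathrm{L}$ depends on the location of the laser source, the second term is what provides the additional control authority.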
Ein Garten im Weltraum
(2017)
The telecommunications market is undergoing substantial change. New business models, innovative services, and technologies call for reengineering, transformation, and process standardization. With the enhanced Telecom Operations Map (eTOM), the TM Forum offers an internationally recognized de facto reference process framework based on the specific requirements and characteristics of the telecommunications industry. However, this reference framework contains only a hierarchical collection of processes at different levels of abstraction. A control view, understood as a sequential ordering of activities and hence a real process flow, is missing, as is an end-to-end view of the customer. In this article, we extend the eTOM reference model with reference process flows, in which we abstract and generalize knowledge about processes in telecommunications companies. The reference process flows support companies in the structured and transparent (re-)design of their processes. We demonstrate the applicability and usefulness of our reference process flows in two case studies and evaluate them against criteria for the assessment of reference models. The reference process flows have been adopted by the TM Forum into the standard and published as part of eTOM version 9. In addition, we discuss the components of our approach that can also be applied outside the telecommunications industry.
The potential of electronic markets in enabling innovative product bundles through flexible and sustainable partnerships is not yet fully exploited in the telecommunication industry. One reason is that bundling requires seamless de-assembling and re-assembling of business processes, whilst processes in telecommunication companies are often product-dependent and hard to virtualize. We propose a framework for the planning of the virtualization of processes, intended to assist the decision maker in prioritizing the processes to be virtualized: (a) we transfer the virtualization pre-requisites stated by the Process Virtualization Theory in the context of customer-oriented processes in the telecommunication industry and assess their importance in this context, (b) we derive IT-oriented requirements for the removal of virtualization barriers and highlight their demand on changes at different levels of the organization. We present a first evaluation of our approach in a case study and report on lessons learned and further steps to be performed.
The changes in the telecommunications market have led to a large number of transformation projects in practice. But what constitutes a "transformation project", and which processes and systems are changed? To answer this question, we analyzed 184 reports on projects that were labeled "transformation projects". For the analysis, we designed a coding frame and used it to group the reports into topics with a hierarchical clustering method. The results provide insights into the focal points and priorities set in practice. They can thus serve as support for companies planning a transformation project. They also indicate in which areas of a company support through scientifically proven tools and models is needed.
Market changes have forced telecommunication companies to transform their business. Increased competition, short innovation cycles, changed usage patterns, increased customer expectations, and cost reduction are the main drivers. Our objective is to analyze to what extent transformation projects have improved the orientation towards end-customers. Therefore, we selected 38 real-life case studies that deal with customer orientation. Our analysis is based on a telecommunication-specific framework that aligns strategy, business processes, and information systems. The result of our analysis shows the following: transformation projects that aim to improve customer orientation are combined with clear goals on the costs and revenue of the enterprise. These projects are usually directly linked to the customer touch points, but also to the development and provisioning of products. Furthermore, the analysis shows that customer orientation is not the sole trigger for transformation. There is no one-size-fits-all solution; rather, improved customer orientation needs aligned changes of business processes as well as information systems related to different parts of the company.
As the potential of a next generation network (NGN) is recognised, telecommunication companies consider switching to it. Although the implementation of an NGN seems to be merely a modification of the network infrastructure, it may trigger or require changes in the whole company, because it builds upon the separation between service and transport, a flexible bundling of services to products and the streamlining of the IT infrastructure. We propose a holistic framework, structured into the layers ‘strategy’, ‘processes’ and ‘information systems’ and incorporate into each layer all concepts necessary for the implementation of an NGN, as well as the alignment of these concepts. As a first proof-of-concept for our framework we have performed a case study on the introduction of NGN in a large telecommunication company; we show that our framework captures all topics that are affected by an NGN implementation.
The subject of this case is Deutsche Telekom Services Europe (DTSE), a service center for administrative processes. Due to the high volume of repetitive tasks (e.g., 100k manual uploads of offer documents into SAP per year), automation was identified as an important strategic target with high management attention and commitment. DTSE has to work with various backend application systems without any possibility of changing those systems. Furthermore, the complexity of the administrative processes differed. When it comes to the transfer of unstructured data (e.g., offer documents) to structured data (e.g., MS Excel files), further cognitive technologies were needed.
How does the implementation of a next generation network influence a telecommunication company?
(2009)
As the potential of a Next Generation Network (NGN) is recognized, telecommunication companies consider switching to it. Although the implementation of an NGN seems to be merely a modification of the network infrastructure, it may trigger or require changes in the whole company and even influence the company strategy. To capture the effects of NGN we propose a framework based on concepts of business engineering and technical recommendations for the introduction of NGN technology. The specific design of solutions for the layers "Strategy", "Processes" and "Information Systems" as well as their interdependencies are an essential characteristic of the developed framework. We have performed a case study on NGN implementation and observed that all layers captured by our framework are influenced by the introduction of an NGN.
Companies are usually convinced that they put their customers' needs first. But in direct interaction with the customer they often show weaknesses. The following article illustrates how consistently aligning value-creation processes with central customer needs can achieve a threefold effect: sustainably increased customer satisfaction, higher efficiency, and differentiation from competitors.
Customer requirements for networks have changed considerably in recent years. With NFV and SDN, companies are technically able to meet them. Providers, however, face major challenges: in particular, products and processes must be adapted and become more agile in order to turn the strengths of NFV and SDN into customer benefits.
Robotic process automation (RPA) has attracted increasing attention in research and practice. This chapter positions, structures, and frames the topic as an introduction to this book. RPA is understood as a broad concept that comprises a variety of concrete solutions. From a management perspective RPA offers an innovative approach for realizing automation potentials, whereas from a technical perspective the implementation based on software products and the impact of artificial intelligence (AI) and machine learning (ML) are relevant. RPA is industry-independent and can be used, for example, in finance, telecommunications, and the public sector. With respect to RPA this chapter discusses definitions, related approaches, a structuring framework, a research framework, and an inside as well as outside architectural view. Furthermore, it provides an overview of the book combined with short summaries of each chapter.
The telecommunications industry is currently going through a major transformation. In this context, the enhanced Telecom Operations Map (eTOM) is a domain-specific process reference model that is offered by the industry organization TM Forum. In practice, eTOM is well accepted and confirmed as de facto standard. It provides process definitions and process flows on different levels of detail. This article discusses the reference modeling of eTOM, i.e., the design, the resulting artifact, and its evaluation based on three project cases. The application of eTOM in three projects illustrates the design approach and concrete models on strategic and operational levels. The article follows the Design Science Research (DSR) paradigm. It contributes with concrete design artifacts to the transformational needs of the telecommunications industry and offers lessons-learned from a general DSR perspective.
Using the telecommunications industry as an example, this article presents a concrete form of application-oriented research that delivers benefits and insights for both practice and academia. The subject of the research is the reference models of the industry body TM Forum, which many telecommunications companies use to transform their structures and systems. The article describes many years of research on the further development and application of these reference models, following a consistently design-oriented research approach. The interplay of continuous further development in cooperation with an industry body and application in a wide range of practical projects leads to a successful symbiosis of practical benefit and scientific insight. The article presents the chosen research approach with concrete examples and, on this basis, discusses recommendations and challenges for design- and practice-oriented research.
This book reflects the tremendous changes in the telecommunications industry in the course of the past few decades – shorter innovation cycles, stiffer competition and new communication products. It analyzes the transformation of processes, applications and network technologies that are now expected to take place under enormous time pressure. The International Telecommunication Union (ITU) and the TM Forum have provided reference solutions that are broadly recognized and used throughout the value chain of the telecommunications industry, and which can be considered the de facto standard. The book describes how these reference solutions can be used in a practical context: it presents the latest insights into their development, highlights lessons learned from numerous international projects and combines them with well-founded research results in enterprise architecture management and reference modeling. The complete architectural transformation is explained, from the planning and set-up stage to the implementation. Featuring a wealth of examples and illustrations, the book offers a valuable resource for telecommunication professionals, enterprise architects and project managers alike.
In the context of digitalization, the increasing automation of previously manual process steps is an aspect that will massively affect the future world of work. High expectations are attached to the use of software robots for process automation. The current discussion of implementation approaches is shaped in particular by robotic process automation (RPA) and chatbots. Both approaches pursue the common goal of a 1:1 automation of human actions and thus the direct replacement of employees by machines. With RPA, processes are learned by software robots and executed automatically. RPA robots emulate the inputs on the existing presentation layer, so no changes to existing application systems are necessary. Various RPA solutions are already offered on the market as software products. Chatbots realize the inputs and outputs of application systems via natural language. This makes it possible to automate communication outside the company (e.g., with customers) as well as internal assistance tasks. The article discusses the impact of software robots on the world of work using application examples and explains the company-specific decision on the use of software robots on the basis of effectiveness and efficiency goals.
In the context of digital transformation, innovative technology concepts such as the Internet of Things and cloud computing are regarded as drivers of far-reaching changes to organizations and business models. In this context, robotic process automation (RPA) is a novel approach to process automation in which manual activities are learned and executed automatically by so-called software robots. The software robots emulate the inputs on the existing presentation layer, so no changes to existing application systems are necessary. The innovative idea is the transformation of existing process execution from manual to digital, which distinguishes RPA from traditional approaches of business process management (BPM), where, for example, process-driven adaptations at the level of the business logic are necessary. Various RPA solutions are already offered on the market as software products. Good results from RPA have been documented especially for operational processes with repetitive processing steps across different application systems, such as the automation of 35% of back-office processes at Telefonica. Owing to the comparatively low implementation effort combined with a high automation potential, there is great interest in RPA in practice (e.g., banking, telecommunications, energy supply). The article discusses RPA as an innovative approach to process digitalization and gives concrete recommendations for practice. A distinction is made between model-driven and self-learning approaches. Based on general architectures of RPA systems, application scenarios and their automation potential, but also their limitations, are discussed. A structured market overview of selected RPA products follows. The use of RPA in practice is illustrated by three concrete application examples.
To support the transformation needs of telecommunications companies, the reference models of the TM Forum are recognized worldwide in practice. However, they are mostly used in isolation for specific individual topics. This article therefore consolidates the existing content into an industry-specific, overarching reference architecture. The focus is on the layers of organizational structure, processes, applications, and data. In addition, content-oriented architecture domains are offered for structuring. The reference architecture is hierarchical and is described here by way of example for selected, aggregated content. As a first evaluation, the application of the reference architecture in three practical projects is explained.
The fragmentation of value chains creates new challenges for the management of customer relationships. This dissertation examines the resulting requirements for an overarching integration of customer relationship management in the telecommunications industry. The aim is to design an overarching solution by applying methods of an enterprise architecture framework. The basic premise is that the overarching design of customer relationship management is beneficial for all companies involved in the value chain.
The telecommunications industry has undergone enormous change in recent decades. For telecommunications companies, this requires fundamental restructuring of strategy, processes, application systems, and network technologies. Enterprise architectures and reference models play an important role in this. Recognized reference models exist in practice, but how should they be designed for a systematic transformation? What does a concrete solution for the telecommunications industry look like?
In answer, Christian Czarnecki presents a reference-model-based enterprise architecture in his book. Based on an extensive study of transformation projects, problems and requirements from practice are identified, for which a solution is developed and evaluated with methods of enterprise transformation, reference modeling, and enterprise architecture. It consists, among other things, of detailed use cases, reference process flows, a mapping of processes to application systems, and recommendations for virtualization.
For researchers and students of information systems, the book presents new insights into application-oriented reference modeling. For practitioners, it provides a methodically sound solution for the current transformation needs of the telecommunications industry. Christian Czarnecki has worked as a management consultant since 2004 and has supported many telecommunications companies in their transformation. In 2013 he received his doctorate in engineering from the Otto von Guericke University Magdeburg.
Because of customer churn, strong competition, and operational inefficiencies, the telecommunications operator ME Telco (fictitious name due to confidentiality) launched a strategic transformation program that included a Business Process Management (BPM) project. Major problems were silo-oriented process management and missing cross-functional transparency. Process improvements were not consistently planned and aligned with corporate targets. Measurable inefficiencies were observed on an operational level, e.g., high lead times and reassignment rates of the incident management process.
The city of Augsburg was one of the largest textile cities in Europe. Since 2010, the Textile and Industry Museum (TIM) has presented a large number of exhibits in the old worsted spinning mill in Augsburg's former textile quarter, making it an important part of the Bavarian museum landscape and of German textile history. Besides its special location, the TIM impresses above all with its large collection of pattern books. The visual identity redesigned in this work focuses on the various patterns, which place the textile theme in the foreground and give the museum a memorable, recognizable image. The new corporate design is intended to make the museum more attractive to regional and national visitors and thus to convey and preserve its unique historical significance.
Intelligent autonomous software robots that replace human activities and perform administrative processes are a reality in today's corporate world. This includes, for example, decisions about invoice payments, identification of customers for a marketing campaign, and answering customer complaints. What happens if such a software robot causes damage? Due to the complete absence of human activities, the question is not trivial. It could even happen that no one is liable for damage towards a third party, which could create an incalculable legal risk for business partners. Furthermore, the implementation and operation of such software robots involves various stakeholders, which makes identifying the originator of a damage a nearly unsolvable endeavor. Overall, it is advisable for all involved parties to consider the legal situation carefully. This chapter discusses the liability of software robots from an interdisciplinary perspective. Based on different technical scenarios, the legal aspects of liability are discussed.
Non-intrusive measuring techniques have attracted considerable interest in relation to both hydraulic modeling and prototype applications. Complementing acoustic techniques, significant progress has been made in the development of new optical methods. Computer vision techniques can help to extract new information, e.g., high-resolution velocity and depth data, from videos captured with relatively inexpensive, consumer-grade cameras. Depth cameras are sensors providing information on the distance between the camera and observed features. Currently, sensors with different working principles are available. Stereoscopic systems reference physical image features from two perspectives (passive system); in order to increase the number of features and improve the results, a sensor may also estimate the disparity from a detected light pattern to its original projection (active stereo system). In the current study, the RGB-D camera Intel RealSense D435, working on such a stereo vision principle, is used in different, typical hydraulic modeling applications. All tests were conducted at the Utah Water Research Laboratory. This paper demonstrates the performance and limitations of the RGB-D sensor, installed as a single camera and as camera arrays, applied (1) to detect the free surface for highly turbulent, aerated hydraulic jumps, for free-falling jets, and for an energy dissipation basin downstream of a labyrinth weir, and (2) to monitor local scours upstream and downstream of a Piano Key Weir. It is intended to share the authors' experiences with respect to camera settings, calibration, lighting conditions, and other requirements in order to promote this useful, easily accessible device. Results are compared to data from classical instrumentation and the literature. It is shown that even in difficult applications, e.g., the detection of a highly turbulent, fluctuating free surface, the RGB-D sensor may yield similar accuracy as classical, intrusive probes.
The investigation of atomic resonance fluorescence has always been of special interest as a means for the determination of atomic parameters. In addition, information on the interaction mechanism between atoms and radiation can be obtained. In the standard fluorescence experiment the frequency distribution of the incident photons is larger than the natural width of the respective transition; as a consequence the correlation time in the photon-atom interaction is determined by the lifetime of the atoms in the excited state. With the development of lasers and especially of tunable dye lasers in recent years it became possible to study the case where the incident radiation has a spectral distribution which is narrower than the natural width. This corresponds to a correlation time of the incoming light wave which is much longer than the excited-state lifetime. In this chapter a survey of experiments on the resonance fluorescence of atoms in monochromatic laser fields will be given.
Improving the Mechanical Strength of Dental Applications and Lattice Structures SLM Processed
(2020)
To manufacture custom medical parts or scaffolds with reduced defects and high mechanical characteristics, new research on optimizing the selective laser melting (SLM) parameters is needed. In this work, a biocompatible powder, 316L stainless steel, is characterized to understand the particle size, distribution, shape and flowability. Examination revealed that the 316L particles are smooth and nearly spherical, their mean diameter is 39.09 μm, and just 10% of them have a diameter below 21.18 μm. SLM parameters under consideration include laser power up to 200 W, scanning speeds of 250–1500 mm/s, 80 μm hatch spacing, 35 μm layer thickness and a preheated platform. The effect of these on processability is evaluated. More than 100 samples are SLM-manufactured with different process parameters. The tensile results show that it is possible to raise the ultimate tensile strength up to 840 MPa by adapting the SLM parameters for stable processability and avoiding the technological defects caused by residual stress. Compared with other recent studies on SLM technology, the tensile strength is improved by 20%. To validate the established SLM parameters and conditions, complex bioengineering applications such as dental bridges and macro-porous grafts are SLM-processed, demonstrating the potential to manufacture medical products with increased mechanical resistance made of 316L.
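The listed process parameters are commonly combined into a single volumetric energy density, E = P / (v · h · t). A short sketch using values from the ranges above (the specific combination is illustrative, not the study's optimum):

```python
def volumetric_energy_density(power_w, speed_mm_s, hatch_mm, layer_mm):
    """Volumetric energy density E = P / (v * h * t) in J/mm^3,
    a common scalar metric for comparing SLM parameter sets."""
    return power_w / (speed_mm_s * hatch_mm * layer_mm)

# 200 W laser, 1000 mm/s scan speed, 80 um hatch spacing, 35 um layer thickness
e = volumetric_energy_density(200.0, 1000.0, 0.080, 0.035)  # ~71.4 J/mm^3
```

The same energy density can be reached by different parameter combinations, which is why processability must still be checked per combination rather than per E value alone.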
Numerische Berechnung des Tritium-Verhaltens von Kugelhaufenreaktoren am Beispiel des AVR-Reaktors
(1979)
Impaired cerebral autoregulation and neurovascular coupling (NVC) contribute to delayed cerebral ischemia after subarachnoid hemorrhage (SAH). Retinal vessel analysis (RVA) allows non-invasive assessment of vessel dimensions and NVC, and has demonstrated predictive value in the context of various neurovascular diseases. Using RVA as a translational approach, we aimed to assess the retinal vessels in patients with SAH. RVA was performed prospectively in 24 patients with acute SAH (group A: day 5–14), in 11 patients 3 months after ictus (group B: day 90 ± 35), and in 35 age-matched healthy controls (group C). Data was acquired using a Retinal Vessel Analyzer (Imedos Systems UG, Jena) for examination of retinal vessel dimensions and NVC using flicker-light excitation. The diameter of the retinal vessels (central retinal arteriolar and venular equivalent) was significantly reduced in the acute phase (p < 0.001) with gradual improvement in group B (p < 0.05). Arterial NVC of group A was significantly impaired, with diminished dilatation (p < 0.001) and reduced area under the curve (p < 0.01) when compared to group C. Group B showed persistently prolonged latency of arterial dilation (p < 0.05). Venous NVC was significantly delayed after SAH compared to group C (A p < 0.001; B p < 0.05). To our knowledge, this is the first clinical study to document retinal vasoconstriction and impairment of NVC in patients with SAH. Using non-invasive RVA as a translational approach, characteristic patterns of compromise were detected for the arterial and venous compartments of the neurovascular unit in a time-dependent fashion. Recruitment will continue to facilitate a correlation analysis with clinical course and outcome.
Edge-based and face-based smoothed finite element methods (ES-FEM and FS-FEM, respectively) are modified versions of the finite element method that achieve more accurate results and reduce sensitivity to mesh distortion, at least for linear elements. These properties make the two methods very attractive. However, their implementation in a standard finite element code is nontrivial because it requires heavy and extensive modifications to the code architecture. In this article, we present an element-based formulation of the ES-FEM and FS-FEM methods that allows the two methods to be implemented in a standard finite element code with no modifications to its architecture. Moreover, the element-based formulation makes it easy to handle any type of element, especially in 3D models where, to the best of the authors' knowledge, only tetrahedral elements are used in the FS-FEM applications found in the literature. Shape functions for non-simplex 3D elements are proposed in order to apply FS-FEM to any standard finite element.
Automated driving is now possible in diverse road and traffic conditions. However, there are still situations that automated vehicles cannot handle safely and efficiently. In these cases, a Transition of Control (ToC) is necessary so that the driver takes over the driving task. Executing a ToC requires the driver to gain full situation awareness of the driving environment. If the driver fails to regain control within a limited time, a Minimum Risk Maneuver (MRM) is executed to bring the vehicle into a safe state (e.g., decelerating to a full stop). The execution of ToCs requires some time and can cause traffic disruption and safety risks that increase if several vehicles execute ToCs/MRMs at similar times and in the same area. This study proposes to use novel C-ITS traffic management measures where the infrastructure exploits V2X communications to assist Connected and Automated Vehicles (CAVs) in the execution of ToCs. The infrastructure can suggest a spatial distribution of ToCs, and inform vehicles of the locations where they could execute a safe stop in case of an MRM. This paper reports the first field operational tests that validate the feasibility and quantify the benefits of the proposed infrastructure-assisted ToC and MRM management. The paper also presents the CAV and roadside infrastructure prototypes implemented and used in the trials. The conducted field trials demonstrate that infrastructure-assisted traffic management solutions can reduce safety risks and traffic disruptions.
We consider the numerical approximation of second-order semi-linear parabolic stochastic partial differential equations interpreted in the mild sense which we solve on general two-dimensional domains with a C² boundary with homogeneous Dirichlet boundary conditions. The equations are driven by Gaussian additive noise, and several Lipschitz-like conditions are imposed on the nonlinear function. We discretize in space with a spectral Galerkin method and in time using an explicit Euler-like scheme. For irregular shapes, the necessary Dirichlet eigenvalues and eigenfunctions are obtained from a boundary integral equation method. This yields a nonlinear eigenvalue problem, which is discretized using a boundary element collocation method and is solved with the Beyn contour integral algorithm. We present an error analysis as well as numerical results on an exemplary asymmetric shape, and point out limitations of the approach.
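The scheme described above can be sketched in one space dimension, where the Dirichlet eigenpairs are analytic (λ_k = k² on (0, π)) rather than computed by the boundary-element eigenvalue solver. This is a toy sketch under simplifying assumptions: a linear (hence Lipschitz) reaction term evaluated mode-wise, and an exponential-Euler variant of the explicit time stepping; the step size and mode count are arbitrary choices.

```python
import numpy as np

# Spectral Galerkin in space, exponential-Euler in time, for the semilinear
# stochastic heat equation du = (u_xx + f(u)) dt + dW on (0, pi) with
# homogeneous Dirichlet boundary conditions and additive Gaussian noise.
rng = np.random.default_rng(0)
K, dt, n_steps = 32, 1e-3, 100           # number of modes, time step, steps
lam = np.arange(1, K + 1) ** 2.0         # Dirichlet eigenvalues on (0, pi)

def f_hat(u_hat):
    # Linear reaction term f(u) = -u acts mode-wise; a general nonlinearity
    # would have to be evaluated in physical space and projected back.
    return -u_hat

u_hat = 1.0 / lam                        # smooth initial value (coefficients)
for _ in range(n_steps):
    dW = np.sqrt(dt) / lam * rng.standard_normal(K)   # smoothed additive noise
    u_hat = np.exp(-lam * dt) * (u_hat + dt * f_hat(u_hat) + dW)
```

The semigroup factor exp(-λ_k dt) damps high modes strongly, which is what keeps the explicit scheme stable despite the stiff eigenvalue growth λ_k = k².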
Helle Fensterprofilmaterialien : Alterungsverhalten auf Basis von peroxidisch vernetztem EPDM
(2010)
Purpose
In vivo, a loss of mesh porosity triggers scar tissue formation and restricts functionality. The purpose of this study was to evaluate the properties and configuration changes, such as mesh deformation and mesh shrinkage, of a soft mesh implant compared with a conventional stiff mesh implant in vitro and in a porcine model.
Material and Methods
Tensile tests and digital image correlation were used to determine the textile porosity for both mesh types in vitro. Groups of three pigs each were treated with magnetic resonance imaging (MRI)-visible conventional stiff polyvinylidene fluoride (PVDF) meshes or with soft thermoplastic polyurethane (TPU) meshes (FEG Textiltechnik mbH, Aachen, Germany). MRI was performed with a pneumoperitoneum at pressures of 0 and 15 mmHg, the latter resulting in bulging of the abdomen. The mesh-induced signal voids were semiautomatically segmented and the mesh areas were determined. From the deformations assessed in both mesh types at both pressure conditions, the porosity change of the meshes after 8 weeks of ingrowth was calculated as an indicator of preserved elastic properties. The explanted specimens were examined histologically for the maturity of the scar (collagen I/III ratio).
Results
In TPU, the in vitro porosity increased constantly, whereas in PVDF a loss of porosity was observed under mild stresses. In vivo, the mean mesh areas of TPU were 206.8 cm2 (± 5.7 cm2) at 0 mmHg pneumoperitoneum and 274.6 cm2 (± 5.2 cm2) at 15 mmHg; for PVDF, the mean areas were 205.5 cm2 (± 8.8 cm2) and 221.5 cm2 (± 11.8 cm2), respectively. The pneumoperitoneum-induced pressure increase resulted in a calculated porosity increase of 8.4% for TPU and of 1.2% for PVDF. The mean collagen I/III ratio was 8.7 (± 0.5) for TPU and 4.7 (± 0.7) for PVDF.
Conclusion
The elastic properties of TPU mesh implants result in improved tissue integration compared to conventional PVDF meshes, and they adapt more efficiently to the abdominal wall. © 2017 Wiley Periodicals, Inc. J Biomed Mater Res Part B: Appl Biomater, 106B: 827–833, 2018.
The FAYMONVILLE case study describes how the family-owned company Faymonville from eastern Belgium has succeeded in becoming one of the leading manufacturers in its sector. The targeted identification of new markets, the focus on relevant customer needs, and a consistent product policy with a coordinated manufacturing concept lay the foundations for success. In this case study, students can learn about how a company can successfully resolve the fundamental contradiction between economic and customized production.
One third of the employees of Saint-Gobain Glass Deutschland GmbH regularly trained their backs over a period of three years. With success, as a concluding evaluation in cooperation with FH Aachen shows: the number of sick days among the training participants has dropped enormously, while their untrained colleagues continue to suffer from back complaints.
Bewertungsrelevanz veröffentlichter Kapitalflußrechnungen börsennotierter deutscher Unternehmen
(1999)
A methodology for assessment, seismic verification and strengthening of existing masonry buildings is presented in this paper. The verification is performed using a calculation model calibrated with the results from ambient vibration measurements. The calibrated model serves as an input for a deformation-based verification procedure based on the Capacity Spectrum Method (CSM). The bearing capacity of the building is calculated from experimental capacity curves of the individual walls idealized with bilinear elastic-perfectly plastic curves. The experimental capacity curves were obtained from in-plane cyclic loading tests on unreinforced and strengthened masonry walls with reinforced concrete jackets. The seismic action is compared with the load-bearing capacity of the building considering non-linear material behavior with its post-peak capacity. The application of the CSM to masonry buildings and the influence of a traditional strengthening method are demonstrated on the example of a public school building in Skopje, Macedonia.
Textile reinforced concrete. Part I: Process model for collaborative research and development
(2003)
Investigation of TRPV1 loss-of-function phenotypes in transgenic shRNA expressing and knockout mice
(2008)
Numerical avalanche dynamics models have become an essential part of snow engineering. Coupled with field observations and historical records, they are especially helpful in understanding avalanche flow in complex terrain. However, their application poses several new challenges to avalanche engineers. A detailed understanding of the avalanche phenomena is required to construct hazard scenarios which involve the careful specification of initial conditions (release zone location and dimensions) and definition of appropriate friction parameters. The interpretation of simulation results requires an understanding of the numerical solution schemes and easy-to-use visualization tools. We discuss these problems by presenting the computer model RAMMS, which was specially designed by the SLF as a practical tool for avalanche engineers. RAMMS solves the depth-averaged equations governing avalanche flow with accurate second-order numerical solution schemes. The model allows the specification of multiple release zones in three-dimensional terrain. Snow cover entrainment is considered. Furthermore, two different flow rheologies can be applied: the standard Voellmy–Salm (VS) approach or a random kinetic energy (RKE) model, which accounts for the random motion and inelastic interaction between snow granules. We present the governing differential equations, highlight some of the input and output features of RAMMS and then apply the models with entrainment to simulate two well-documented avalanche events recorded at the Vallée de la Sionne test site.
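The Voellmy–Salm rheology mentioned above combines a Coulomb friction term (coefficient μ) with a velocity-squared drag term (coefficient ξ). A minimal sketch of the basal shear resistance; the parameter values are typical textbook choices, not values calibrated for any specific avalanche path:

```python
import math

def voellmy_friction(rho, g, h, u, slope_deg, mu, xi):
    """Basal shear resistance per unit area, S = mu * N + rho * g * u**2 / xi,
    with slope-normal stress N = rho * g * h * cos(slope)."""
    normal_stress = rho * g * h * math.cos(math.radians(slope_deg))
    return mu * normal_stress + rho * g * u ** 2 / xi

# Example: flow density 300 kg/m^3, flow depth 1 m, speed 10 m/s,
# 30 degree slope, mu = 0.155, xi = 3000 m/s^2 (assumed values).
s = voellmy_friction(300.0, 9.81, 1.0, 10.0, 30.0, 0.155, 3000.0)
```

At low speeds the Coulomb term dominates and governs runout, while at high speeds the ξ term dominates and limits the maximum flow velocity.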
Numerical models have become an essential part of snow avalanche engineering. Recent advances in understanding the rheology of flowing snow and the mechanics of entrainment and deposition have made numerical models more reliable. Coupled with field observations and historical records, they are especially helpful in understanding avalanche flow in complex terrain. However, the application of numerical models poses several new challenges to avalanche engineers. A detailed understanding of the avalanche phenomena is required to specify initial conditions (release zone dimensions and snowcover entrainment rates) as well as the friction parameters, which are no longer based on empirical back-calculations but rather on terrain roughness, vegetation and snow properties. In this paper we discuss these problems by presenting the computer model RAMMS, which was specially designed by the SLF as a practical tool for avalanche engineers. RAMMS solves the depth-averaged equations governing avalanche flow with first- and second-order numerical solution schemes. A tremendous effort has been invested in the implementation of advanced input and output features. Simulation results are therefore clearly and easily visualized to simplify their interpretation. More importantly, RAMMS has been applied to a series of well-documented avalanches to gauge model performance. In this paper we present the governing differential equations, highlight some of the input and output features of RAMMS and then discuss the simulation of the Gatschiefer avalanche that occurred in April 2008 near Klosters/Monbiel, Switzerland.
Two- and three-dimensional avalanche dynamics models are being increasingly used in hazard-mitigation studies. These models can provide improved and more accurate results for hazard mapping than the simple one-dimensional models presently used in practice. However, two- and three-dimensional models generate an extensive amount of output data, making the interpretation of simulation results more difficult. To perform a simulation in three-dimensional terrain, numerical models require a digital elevation model, specification of avalanche release areas (spatial extent and volume), selection of solution methods, finding an adequate calculation resolution and, finally, the choice of friction parameters. In this paper, the importance and difficulty of correctly setting up and analysing the results of a numerical avalanche dynamics simulation is discussed. We apply the two-dimensional simulation program RAMMS to the 1968 extreme avalanche event In den Arelen. We show the effect of model input variations on simulation results and the dangers and complexities in their interpretation.
Multichannel photomultipliers (PMs), such as the Hamamatsu R7600-00-M64 or R5900-00-M64, are often chosen as photodetectors in high-resolution positron emission tomography (PET). A major problem of these PMs is their nonuniform channel gain. To solve this problem, light-attenuating masks were created. The aim of the masks is to homogenize the output of all 64 channels by using different hole sizes at the channel positions. The hole area, defined individually for each channel, is inversely proportional to the channel gain. Measurements with the light-attenuating masks inserted improved the gain homogeneity to a ratio of 1:1.2.
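The sizing rule for the mask holes can be sketched as follows; the gain values are made-up illustrative numbers, not measured channel gains:

```python
# Hole area inversely proportional to channel gain: the lowest-gain channel
# gets the largest (unattenuated) hole, so that gain * transmitted light is
# equalized across all channels.
def mask_hole_areas(gains, max_area=1.0):
    """Per-channel hole area for a light-attenuating mask."""
    g_min = min(gains)
    return [max_area * g_min / g for g in gains]

areas = mask_hole_areas([1.0, 2.0, 4.0])                   # [1.0, 0.5, 0.25]
outputs = [g * a for g, a in zip([1.0, 2.0, 4.0], areas)]  # equal per channel
```

Since the product gain × area is constant, the masked detector output is ideally flat; in practice residual nonuniformity (such as the 1:1.2 ratio reported above) remains.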
Combined gas and steam turbine power plants with pressurized fluidized-bed combustion or pressurized gasification enable coal to be converted into electricity with high efficiency and low emissions. A prerequisite for the operation of these plants is the removal of dust from the flue gases at high temperatures and pressures. Cleanable filters with ceramic elements are used for this purpose. A reduction of gaseous pollutants under the same conditions could replace flue gas scrubbing. The aim of the overall project is to investigate the integration of hot-gas filtration and catalytic decomposition of the pollutants carbon monoxide, hydrocarbons and nitrogen oxides into a single process step. The main tasks of this subproject concern:
the catalytic effect of iron-containing lignite ashes,
the effectiveness of calcium aluminate as a catalyst for the decomposition of unburned hydrocarbons in the hot-gas filter,
the numerical simulation of the combined removal of particles and gaseous pollutants from flue gases
Design, evaluation and comparison of endorectal coils for hybrid MR-PET imaging of the prostate
(2020)
Prostate cancer is one of the most common cancers among men and its early detection is critical for its successful treatment. The use of multimodal imaging, such as MR-PET, is most advantageous as it is able to provide detailed information about the prostate. However, as the human prostate is flexible and can move into different positions under external conditions, it is important to localise the focused region-of-interest using both MRI and PET under identical circumstances. In this work, we designed five commonly used linear and quadrature radiofrequency surface coils suitable for hybrid MR-PET use in endorectal applications. Due to the endorectal design and the shielded PET insert, the outer face of the coils investigated was curved and the region to be imaged was outside the volume of the coil. The tilting angles of the coils were varied with respect to the main magnetic field direction. This was done to approximate the various positions from which the prostate could be imaged. The transmit efficiencies and safety excitation efficiencies from simulations, together with the signal-to-noise ratios from the MR images were calculated and analysed. Overall, it was found that the overlapped loops driven in quadrature were superior to the other types of coils we tested. In order to determine the effect of the different coil designs on PET, transmission scans were carried out, and it was observed that the differences between attenuation maps with and without the coils were negligible. The findings of this work can provide useful guidance for the integration of such coil designs into MR-PET hybrid systems in the future.
Orthodontic treatments are accompanied by mechanical forces and thereby cause tooth movements. The applied forces are transmitted to the tooth root and the periodontal ligament, which is compressed on one side and stretched on the other. Strong forces can lead to tooth root resorption, reducing the crown-to-root ratio with the potential for significant clinical impact. The cementum, which covers the tooth root, is a thin mineralized tissue of the periodontium that connects the periodontal ligament with the tooth and is built up by cementoblasts. The impact of tension and compression on these cells has been investigated in several in vivo and in vitro studies demonstrating differences in protein expression and signaling pathways. In summary, osteogenic marker changes indicate that cyclic tensile forces support cementogenesis, whereas static tension inhibits it. Furthermore, static compression causes the same protein expression changes as static tension, while cyclic compression leads to the exact opposite of cyclic tension. Consistent with these marker expression changes, the signaling pathways of Wnt/β-catenin and RANKL/OPG show that tissue compression leads to cementum degradation and tension forces to cementogenesis. However, the cementum, and in particular its cementoblasts, remain a research area that should be explored in more detail to understand the underlying mechanisms of bone resorption and remodeling after orthodontic treatments.
This paper describes the potential for developing a digital twin of society: a dynamic model that can be used to observe, analyze, and predict the evolution of various societal aspects. Such a digital twin can help governmental agencies and policy makers in interpreting trends, understanding challenges, and making decisions regarding investments or policies necessary to support societal development and ensure future prosperity. The paper reviews related work regarding the digital twin paradigm and its applications. The paper presents a motivating case study: an analysis of opportunities and challenges faced by the German federal employment agency, Bundesagentur für Arbeit (BA), proposes solutions using digital twins, and describes initial proofs of concept for such solutions.
The Solar-Institut Jülich (SIJ) and the companies Hilger GmbH and Heliokon GmbH from Germany have developed a small-scale cost-effective heliostat, called a “micro heliostat”. Micro heliostats can be deployed in small-scale concentrated solar power (CSP) plants to concentrate the sun's radiation for electricity generation, space or domestic water heating, or industrial process heat. In contrast to conventional heliostats, the special feature of a micro heliostat is that it consists of dozens of parallel-moving, interconnected, rotatable mirror facets. The mirror facet array is fixed inside a box-shaped module and is protected from weathering and wind forces by a transparent glass cover. The choice of the building materials for the box, tracking mechanism and mirrors is largely dependent on the selected production process and the intended application of the micro heliostat. Special attention was paid to the material of the tracking mechanism, as this has a direct influence on the accuracy of the micro heliostat. The choice of materials for the mirror support structure and the tracking mechanism is made in favor of plastic molded parts. A qualification assessment method has been developed by the SIJ in which a 3D laser scanner is used in combination with a coordinate measuring machine (CMM). For the validation of this assessment method, a single mirror facet was scanned and the slope deviation was computed.
Objective
This study assesses and quantifies impairment of postoperative magnetic resonance imaging (MRI) at 7 Tesla (T) after implantation of titanium cranial fixation plates (CFPs) for neurosurgical bone flap fixation.
Materials and methods
The study group comprised five patients who were intra-individually examined with 3 and 7 T MRI preoperatively and postoperatively (within 72 h/3 months) after implantation of CFPs. Acquired sequences included T₁-weighted magnetization-prepared rapid-acquisition gradient-echo (MPRAGE), T₂-weighted turbo-spin-echo (TSE) imaging, and susceptibility-weighted imaging (SWI). Two experienced neurosurgeons and a neuroradiologist rated image quality and the presence of artifacts in consensus reading.
Results
Minor artifacts occurred around the CFPs in MPRAGE and T2 TSE at both field strengths, with no significant differences between 3 and 7 T. In SWI, artifacts were accentuated in the early postoperative scans at both field strengths due to intracranial air and hemorrhagic remnants. After resorption, the brain tissue directly adjacent to skull bone could still be assessed. Image quality after 3 months was equal to the preoperative examinations at 3 and 7 T.
Conclusion
Image quality after CFP implantation was not significantly impaired in 7 T MRI, and artifacts were comparable to those in 3 T MRI.
Deammonification for nitrogen removal in municipal wastewater in temperate and cold climate zones is currently limited to the side stream of municipal wastewater treatment plants (MWWTP). This study developed a conceptual model of a mainstream deammonification plant, designed for 30,000 P.E., considering possible solutions for the challenging mainstream conditions in Germany. In addition, the energy-saving potential, nitrogen elimination performance and construction-related costs of mainstream deammonification were compared to a conventional plant model, having a single-stage activated sludge process with upstream denitrification. The results revealed that an additional treatment step combining chemical precipitation and ultra-fine screening is advantageous prior to the mainstream deammonification. Hereby chemical oxygen demand (COD) can be reduced by 80%, so that the COD:N ratio can be reduced from 12 to 2.5. Laboratory experiments testing mainstream conditions of temperature (8–20°C), pH (6–9) and COD:N ratio (1–6) showed an achievable volumetric nitrogen removal rate (VNRR) of at least 50 gN/(m3∙d) for various deammonifying sludges from side stream deammonification systems in the state of North Rhine-Westphalia, Germany, where m3 denotes reactor volume. Assuming a retained Norganic content of 0.0035 kgNorg./(P.E.∙d) from the daily loads of N at the carbon removal stage and a VNRR of 50 gN/(m3∙d) under mainstream conditions, a resident-specific reactor volume of 0.115 m3/(P.E.) is required for mainstream deammonification. This is in the same order of magnitude as the conventional activated sludge process, i.e., 0.173 m3/(P.E.) for an MWWTP of size class 4. The conventional plant model yielded a total specific electricity demand of 35 kWh/(P.E.∙a) for the operation of the whole MWWTP and an energy recovery potential of 15.8 kWh/(P.E.∙a) through anaerobic digestion.
In contrast, the developed mainstream deammonification model plant would require only a 21.5 kWh/(P.E.∙a) energy demand and result in 24 kWh/(P.E.∙a) energy recovery potential, enabling the mainstream deammonification model plant to be self-sufficient. The retrofitting costs for the implementation of mainstream deammonification in existing conventional MWWTPs are nearly negligible as the existing units like activated sludge reactors, aerators and monitoring technology are reusable. However, the mainstream deammonification must meet the performance requirement of VNRR of about 50 gN/(m3∙d) in this case.
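The reactor sizing above follows the simple relation V = nitrogen load / VNRR. A back-of-the-envelope sketch; note that the load of 5.75 gN/(P.E.·d) is back-calculated from the stated 0.115 m3/P.E. and 50 gN/(m3·d), not taken directly from the study:

```python
def reactor_volume_per_pe(n_load_g_per_pe_d, vnrr_g_per_m3_d):
    """Specific reactor volume in m3 per P.E.:
    V = nitrogen load [gN/(P.E.*d)] / VNRR [gN/(m3*d)]."""
    return n_load_g_per_pe_d / vnrr_g_per_m3_d

# Implied removable load: 0.115 m3/P.E. * 50 gN/(m3*d) = 5.75 gN/(P.E.*d)
v = reactor_volume_per_pe(5.75, 50.0)  # -> 0.115 m3/P.E.
```

The relation also shows why the VNRR requirement of about 50 gN/(m3·d) is critical: at half that rate, the required reactor volume would double and exceed the conventional activated sludge benchmark of 0.173 m3/P.E.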