In the energy economy, forecasts of different time series are still rudimentary. In this study, a prediction for the German day-ahead spot market is created with Apache Spark and R. This is just one example among many possible applications in virtual power plant environments; other use cases, such as intraday price processes, load processes of machines or electric vehicles, real-time energy loads of photovoltaic systems and many more time series, also need to be analysed and predicted.
This work gives a short introduction to the project in which this study is embedded and briefly describes the time series methods used in the energy industry for forecasting. Apache Spark, a powerful cluster computing technology, is used as the programming framework. Today, single time series can be predicted; the focus of this work is on developing a method for parallel forecasting, i.e. processing multiple time series simultaneously with R and Apache Spark.
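The following is a minimal sketch, not the authors' implementation, of how many time series could be forecast in parallel with PySpark; the input layout, column names and the naive moving-average forecaster are assumptions for illustration only.

```python
# Minimal sketch: forecasting many time series in parallel with PySpark.
# The input layout (series_id, t, value) and the naive moving-average
# forecaster are illustrative assumptions, not the authors' method.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("parallel-forecast").getOrCreate()

# Example input: one row per observation of each time series.
rows = ([("spot_price", t, 30.0 + 0.1 * t) for t in range(100)]
        + [("pv_load", t, 5.0 + 0.05 * t) for t in range(100)])
df = spark.createDataFrame(rows, ["series_id", "t", "value"])

def forecast_series(series_id, observations):
    """Naive forecast: mean of the last 24 observations of one series."""
    values = [v for _, v in sorted(observations)]
    window = values[-24:]
    return series_id, sum(window) / len(window)

# Group each series onto one worker and forecast all series in parallel.
forecasts = (df.rdd
             .map(lambda r: (r["series_id"], (r["t"], r["value"])))
             .groupByKey()
             .map(lambda kv: forecast_series(kv[0], list(kv[1])))
             .collect())

for series_id, value in forecasts:
    print(series_id, round(value, 2))
```

In a real setting the naive forecaster would be replaced by the actual time series model, for example one fitted in R and called once per group.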
In this paper, a coupled multiphase model considering both the non-linearity of water retention curves and solid-state modeling is proposed. The solid displacements and the pressures of both the water and air phases are the unknowns of the model. The finite element method is used to solve the governing differential equations. The proposed method is demonstrated through the simulation of a seepage test and a partial consolidation problem. The model is then applied with hypoplasticity for the solid phase to analyze fully saturated triaxial experiments. Error control in the integration of the constitutive law is improved and comparisons are made accordingly. Finally, the advantages and limitations of the numerical model are discussed.
Around 60% of paper worldwide is made from recovered paper. Adhesive contaminants in particular, so-called stickies, reduce paper quality. To remove stickies while keeping as many valuable fibers as possible, multi-stage screening systems with several interconnected pressure screens are used. When planning such systems, suitable screens have to be selected, and their interconnection as well as their operational parameters have to be defined considering multiple conflicting objectives. In this contribution, we present a Mixed-Integer Nonlinear Program that optimizes system layout, component selection and operation to find a suitable trade-off between output quality and yield.
In industrial applications, the costs for the operation and maintenance of a pump system typically far exceed its purchase price. To find an optimal pump configuration which minimizes not only the investment but the life-cycle costs, methods such as Technical Operations Research, which is based on Mixed-Integer Programming, can be applied. During the planning phase, however, the designer is often faced with uncertain input data; future load demands, for example, can only be estimated. In this work, we deal with this uncertainty by developing a chance-constrained two-stage (CCTS) stochastic program. The design and operation of a booster station working under uncertain load demand are optimized to minimize the total cost, including the purchase price, operating cost incurred by energy consumption and penalty cost resulting from water shortage. We find optimized system layouts using a sample average approximation (SAA) algorithm and analyze the results for different risk levels of water shortage. By adjusting the risk level, the costs and performance range of the system can be balanced, and thus the system's resilience can be engineered.
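As a minimal illustration of the sample average approximation idea, not the authors' CCTS model, the following sketch estimates whether a candidate design satisfies a chance constraint on water shortage by sampling load scenarios; the demand distribution, capacity value and risk level are assumptions.

```python
# Minimal SAA sketch: check a chance constraint P(shortage) <= alpha by
# sampling demand scenarios. Distribution, capacity and alpha are assumed
# for illustration; this is not the authors' CCTS stochastic program.
import numpy as np

rng = np.random.default_rng(seed=1)

capacity = 12.0          # total flow a candidate booster station can deliver
alpha = 0.05             # accepted risk level of water shortage
n_scenarios = 10_000     # SAA sample size

# Sample uncertain load demands (e.g. lognormal around a nominal demand).
demand = rng.lognormal(mean=np.log(10.0), sigma=0.15, size=n_scenarios)

# Empirical probability of shortage under the sampled scenarios.
shortage_prob = np.mean(demand > capacity)

print(f"estimated P(shortage) = {shortage_prob:.4f}")
print("chance constraint satisfied" if shortage_prob <= alpha
      else "chance constraint violated")
```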
To increase pressure to supply all floors of high buildings with water, booster stations, normally consisting of several parallel pumps in the basement, are used. In this work, we demonstrate the potential of a decentralized pump topology regarding energy savings in water supply systems of skyscrapers. We present an approach, based on Mixed-Integer Nonlinear Programming, that allows to choose an optimal network topology and optimal pumps from a predefined construction kit comprising different pump types. Using domain-specific scaling laws and Latin Hypercube Sampling, we generate different input sets of pump types and compare their impact on the efficiency and cost of the total system design. As a realistic application example, we consider a hotel building with 325 rooms, 12 floors and up to four pressure zones.
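A minimal sketch of how Latin Hypercube Sampling can generate input sets of pump parameters; the parameter names and ranges are illustrative assumptions, not the construction kit used in the paper.

```python
# Minimal Latin Hypercube Sampling sketch for generating pump-type input
# sets. Parameter names and ranges are illustrative assumptions.
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=3, seed=42)
unit_samples = sampler.random(n=8)          # 8 samples in [0, 1)^3

# Scale to assumed ranges: rated head [m], rated flow [m^3/h], efficiency [-].
lower = [20.0, 5.0, 0.55]
upper = [120.0, 60.0, 0.80]
pump_candidates = qmc.scale(unit_samples, lower, upper)

for head, flow, eta in pump_candidates:
    print(f"head={head:6.1f} m  flow={flow:5.1f} m3/h  eta={eta:.2f}")
```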
Highly competitive markets paired with tremendous production volumes demand particularly cost-efficient products. The use of common parts and modules across product families can potentially reduce production costs. Yet increasing commonality typically results in overdesign of individual products. Multi-domain virtual prototyping enables designers to evaluate the costs and technical feasibility of different single-product designs at reasonable computational effort in early design phases. However, savings from platform commonality are hard to quantify and require detailed knowledge of, e.g., the production process and the supply chain. Therefore, we present and evaluate a multi-objective metamodel-based optimization algorithm which enables designers to explore the trade-off between high commonality and the cost-optimal design of single products.
Resilience as a concept has found its way into different disciplines to describe the ability of an individual or system to withstand and adapt to changes in its environment. In this paper, we provide an overview of the concept in different communities and extend it to the area of mechanical engineering. Furthermore, we present metrics to measure resilience in technical systems and illustrate them by applying them to load-carrying structures. By giving application examples from the Collaborative Research Centre (CRC) 805, we show how the concept of resilience can be used to control uncertainty during different stages of product life.
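One widely used way to quantify resilience in technical systems, given here as a generic sketch from the resilience-engineering literature rather than as the specific metrics of CRC 805, compares the integrated performance of the disturbed system with its nominal performance:

```latex
% Generic performance-based resilience measure (illustrative assumption):
% Q(t) is the system performance over a window [t_0, t_1] spanning the
% disturbance and the subsequent recovery; Q_nominal is the undisturbed
% performance. R = 1 means no loss of performance.
\[
  R = \frac{\int_{t_0}^{t_1} Q(t)\,\mathrm{d}t}
           {\int_{t_0}^{t_1} Q_{\mathrm{nominal}}(t)\,\mathrm{d}t},
  \qquad 0 \le R \le 1 .
\]
```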
The overall energy efficiency of ventilation systems can be improved by considering not only single components but also the interplay between all parts of the system. With the help of the method "TOR" ("Technical Operations Research"), which was developed at the Chair of Fluid Systems at TU Darmstadt, it is possible to improve the energy efficiency of the whole system by exploring all possible design choices programmatically. We demonstrate this systematic design approach using a ventilation system for buildings as an example.
We model the ventilation system as a Mixed-Integer Nonlinear Program (MINLP). Binary variables model the selection of different pipe diameters, and multiple fans are modeled with the help of scaling laws. The whole system is represented by a graph, in which the edges represent the pipes and fans, and the nodes represent the source of cooling air and the sinks that have to be cooled. At the beginning, the human designer chooses a construction kit of suitable fans and pipes of different diameters, as well as the load cases. These boundary conditions define a variety of possible system topologies, far too many to consider by hand. With state-of-the-art solvers, however, it is possible to solve this MINLP.
In addition, we consider the effects of malfunctions in different components. To this end, we present a first approach to measuring the resilience of the example use case and compare the conventional approach with designs that are more resilient. These more resilient designs are derived by extending the aforementioned model with further constraints that explicitly consider the resilience of the overall system. We show that this method makes it possible to design resilient systems already in the early design stage, and we compare the energy efficiency and resilience of the different system designs.
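A minimal sketch of the graph representation described above, with nodes for the air source and the cooled sinks and edges for pipes and fans carrying candidate components; all component data are illustrative assumptions, not the construction kit from the paper.

```python
# Minimal sketch of the graph model: nodes are the air source and the sinks
# to be cooled, edges are pipes/fans with candidate components. All data
# are illustrative assumptions.
import networkx as nx

G = nx.DiGraph()
G.add_node("source", kind="air_source")
for sink in ["room_A", "room_B"]:
    G.add_node(sink, kind="sink", load_m3_per_h=500.0)

# Candidate pipe diameters per edge; in the MINLP one binary variable per
# candidate would select exactly one diameter (or fan type) per edge.
candidate_diameters_m = [0.2, 0.315, 0.4]
G.add_edge("source", "junction", component="fan", candidates=["fan_S", "fan_M"])
G.add_edge("junction", "room_A", component="pipe", candidates=candidate_diameters_m)
G.add_edge("junction", "room_B", component="pipe", candidates=candidate_diameters_m)

for u, v, data in G.edges(data=True):
    print(f"{u} -> {v}: {data['component']}, candidates = {data['candidates']}")
```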
The energy efficiency of technical systems can be improved by a systematic design approach. Technical Operations Research (TOR) employs methods known from Operations Research to find a globally optimal layout and operating strategy for technical systems. We show the practical use of this approach through the systematic design of a decentralized water supply system for skyscrapers. All possible network options and operating strategies are modeled as a Mixed-Integer Nonlinear Program. We present the optimal system found by our approach and highlight the energy savings compared to a conventional system design.
The UN has set the goal of ensuring access to water and sanitation for all people by 2030. To address this goal, we present a multidisciplinary approach for designing water supply networks for slums in large cities by applying mathematical optimization. The problem is modeled as a mixed-integer linear program (MILP) aiming to find a network describing the optimal supply infrastructure. To illustrate the approach, we apply it to a small slum cluster in Dhaka, Bangladesh.
Ensuring access to water and sanitation for all is Goal No. 6 of the 17 UN Sustainable Development Goals to transform our world. As one step towards this goal, we present an approach that leverages remote sensing data to plan optimal water supply networks for informal urban settlements. The concept focuses on slums within large urban areas, which are often characterized by the lack of an appropriate water supply. We apply methods of mathematical optimization to find a network describing the optimal supply infrastructure, choosing between different decentralized and centralized approaches that combine supply by motorized vehicles with supply by pipe systems. For the purposes of illustration, we apply the approach to two small slum clusters in Dhaka and Dar es Salaam. We show our optimization results, which represent the lowest-cost water supply systems possible. Additionally, we compare the optimal solutions for the two clusters (also for varying input parameters, such as population densities and slum size development over time) and describe how the result of the optimization depends on the remote sensing data used as input.
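A minimal sketch of the kind of network optimization involved, reduced to a plain minimum-cost-flow toy rather than the MILP from the paper; node demands, capacities and costs are assumptions.

```python
# Toy minimum-cost-flow sketch for a small supply network. Demands,
# capacities and costs are illustrative assumptions, not the paper's MILP.
import networkx as nx

G = nx.DiGraph()
# Negative demand = supply; positive demand = consumption (networkx convention).
G.add_node("water_source", demand=-30)
G.add_node("cluster_1", demand=18)
G.add_node("cluster_2", demand=12)

G.add_edge("water_source", "cluster_1", capacity=25, weight=4)  # pipe option
G.add_edge("water_source", "cluster_2", capacity=25, weight=7)  # truck route
G.add_edge("cluster_1", "cluster_2", capacity=10, weight=2)     # local pipe

flow = nx.min_cost_flow(G)
print(flow)
print("total cost:", nx.cost_of_flow(G, flow))
```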
Depending on the number of gears and the required degree of freedom, a transmission can take on nearly 100,000 conceivable structures with the same function. With the traditional development approach of manually identifying and comparing a few promising system configurations, innovative and, above all, cost-minimal solutions are easily overlooked. Within a research project, TU Darmstadt applied dedicated optimization methods to reliably find a layout that is optimal for the individual objectives, even in large solution spaces.
On obligations in the development process of resilient systems with algorithmic design methods
(2018)
Advanced computational methods are needed both for the design of large systems and to compute high accuracy solutions. Such methods are efficient in computation, but the validation of results is very complex, and highly skilled auditors are needed to verify them. We investigate legal questions concerning obligations in the development phase, especially for technical systems developed using advanced methods. In particular, we consider methods of resilient and robust optimization. With these techniques, high performance solutions can be found, despite a high variety of input parameters. However, given the novelty of these methods, it is uncertain whether legal obligations are being met. The aim of this paper is to discuss if and how the choice of a specific computational method affects the developer’s product liability. The review of legal obligations in this paper is based on German law and focuses on the requirements that must be met during the design and development process.
A collection of all regulations relevant to data protection in churches: DSGVO, KDG, KDR-OG and DSG-EKD, together with the accompanying regulations KDO-DVO and ITSVO-EKD. In addition to the central body of rules, the General Data Protection Regulation (DSGVO) in its most recently corrected version of 19 April 2018, this collection contains the norms of church law that were newly enacted as a result of the DSGVO. On the Catholic side, these are the Law on Church Data Protection (KDG) and the church data protection regulation for religious orders of pontifical right (KDR-OG); in addition, the ordinance implementing the order on church data protection (KDO-DVO) continues to apply accordingly. The Protestant church revised the Church Law on Data Protection of the Evangelical Church in Germany (DSG-EKD) and retained the ordinance on the security of information technology (ITSVO-EKD). The work is complemented by references to relevant publications of the Article 29 Data Protection Working Party and of the secular and church data protection supervisory authorities. It is therefore aimed primarily at church congregations and church-run organizations and their data protection officers, as well as at private companies, law firms and consultants with church clients.
After an intensive political debate, the General Data Protection Regulation (DSGVO) was adopted last year. On 25 May 2018, the DSGVO replaces the previously applicable Data Protection Directive 95/46/EC of 1995. The revision of data protection law brings numerous new requirements. Companies are therefore forced to adapt to the changes, review their data-protection-relevant processes with regard to the new requirements and align them with the DSGVO by May 2018. This article gives a brief overview of the central aspects of the data protection reform and the resulting challenges for companies.
Das neue kirchliche Datenschutzrecht – Herausforderungen für Unternehmen der Privatwirtschaft
(2018)
The legal concept of joint controllership has occupied the data protection literature for a long time. Determining responsibility in distributed processing operations, which are common above all in today's platform services, is complex: several actors are always involved and, as a rule, the action of one participant triggers several processing steps. The ECJ has now addressed this in a judgment that is remarkable in several respects.
In Article 3, the General Data Protection Regulation (DS-GVO) regulates the territorially applicable data protection law and is aimed in particular at offerings of non-European service providers. The discussion so far has concentrated primarily on the newly introduced market-location principle; the largely untouched establishment principle and, above all, the problems resulting from retaining it unchanged are not addressed. The following article attempts a systematic analysis of a topic that has been discussed partly controversially and partly hardly at all.
Although discussed in legal scholarship from the outset, the prohibition of tying (Kopplungsverbot) led a shadowy existence under the BDSG. With the General Data Protection Regulation (DS-GVO), a change is foreseeable: the new Art. 7(4) DS-GVO clarifies that the provision of a service may not be made conditional on consent. Yet this apparent novelty of data protection law raises numerous questions. While practitioners in particular reject the application of the tying prohibition in many constellations, its apologists conjure up the end of all "data-financed" services. This article offers insight into the regulatory depth of a provision that could revolutionise Web 2.0 and proposes a solution that does equal justice to the protection of the data subject's privacy and the economic interests of service providers.
Cloud computing raises novel legal questions in numerous areas of law. The aim of presenting the legal framework is to introduce and classify the legal bases from the various fields of law relevant to identity management in the cloud, before the sixth chapter presents the resulting obligations in their concrete form.
Data protection and 25 May 2018: at present, both terms seem to hang in the air like a sword of Damocles. Everyone knows, or at least should know, that the European DSGVO takes effect on 25 May 2018. Much has been reported about the essential innovations in data protection law, not least about increased organisational requirements, documentation duties and impending fines. But what do these innovations mean in concrete terms for the tax adviser's practice? Contrary to what one might assume, the new data protection rules are relevant not only to the organisation of the firm. In tax administration proceedings, too, the tax adviser faces data protection questions, for example when the tax authorities violate the DSGVO when processing the client's personal data. The same applies to employee data protection: both the firm's owner and his or her employer clients must comply with the provisions on employee data protection. The tax adviser therefore needs data protection know-how that directly concerns daily practice; otherwise there is a risk of being left with more questions than answers.
The continuing growth of scientific publications raises the question how research processes can be digitalized and thus realized more productively. Especially in information technology fields, research practice is characterized by a rapidly growing volume of publications. For the search process various information systems exist. However, the analysis of the published content is still a highly manual task. Therefore, we propose a text analytics system that allows a fully digitalized analysis of literature sources. We have realized a prototype by using EBSCO Discovery Service in combination with IBM Watson Explorer and demonstrated the results in real-life research projects. Potential addressees are research institutions, consulting firms, and decision-makers in politics and business practice.
As an interdisciplinary research network, the Cluster of Excellence "Integrative Production Technology for High-Wage Countries" (CoE) comprises around 150 researchers. Their scientific backgrounds range from mechanical engineering and computer science to social sciences such as sociology and psychology. In addition to content- and method-based challenges, the CoE's employees are faced with heterogeneous organizational cultures, different hierarchical levels, an imbalanced gender distribution, and high employee fluctuation. The sub-project Scientific Cooperation Engineering 1 (CSP1) addresses the challenge of interdisciplinary cooperation and organizational learning and aims at fostering interdisciplinarity and its synergies as a source of innovation. The project therefore examines means of achieving organizational development, ranging from temporal structures to a sustainable network in production technology. To achieve this aim, a broad range of means has been developed during the last twelve years: in addition to physical measures such as regular network events and trainings, virtual measures such as the Terminology App were established. The app is an algorithmic analysis method for uncovering latent topic structures in the publications of the CoE in order to highlight thematic intersections and synergy potentials; the detection and promotion of such synergies has long been a vital element of knowledge management. Furthermore, CSP1 focuses on project management and has developed evaluation tools to measure and control the success of interdisciplinary cooperation. In addition to the cooperation-fostering measures, CSP1 conducted studies on interdisciplinarity and diversity and their relationship with innovation. The scientific background of these means and the research results of CSP1 are outlined in this paper to offer approaches for successful interdisciplinary cooperation management.
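As a minimal illustration of uncovering latent topic structures in publication texts, here is a generic LDA sketch with scikit-learn; the example abstracts and parameters are assumptions and this is not the Terminology App's implementation.

```python
# Generic topic-modelling sketch (LDA) for publication abstracts.
# The example texts and parameters are illustrative assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

abstracts = [
    "laser welding of high strength steel sheets",
    "machine learning for predictive maintenance of milling machines",
    "interdisciplinary cooperation in production research networks",
    "deep drawing simulation of aluminium alloys",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(abstracts)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"topic {k}: {', '.join(top)}")
```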
The continuing growth of scientific publications raises the question of how literature analyses within research processes can be digitalised and thus carried out more productively. In information technology fields in particular, research practice is characterised by a rapidly growing volume of publications. This makes text analytics methods attractive, as they can automatically prepare and process text data; insights emerge from analyses of word classes and subgroups, correlation analyses and time series analyses. This article presents the design and implementation of a prototype with which users can explore bibliographic data from the established literature database EBSCO Discovery Service using text analytics methods. The prototype is based on the analysis system IBM Watson Explorer, which is available to universities free of licence costs. Potential addressees of the prototype are research institutions, consulting firms and decision-makers in politics and business practice.
Given the continuing growth of scientific publications, instruments are needed to make literature analyses more productive through digitalisation. This article presents an approach that explores bibliographic data from the literature database EBSCO Discovery Service using text analytics methods. The solution is based on the text analysis system IBM Watson Explorer and is suitable for exploratory literature analyses, for example to reflect the status quo of emerging technology fields in the literature. The generated results fit into the context of increasing tool support for the literature search process and can be used for intra- and inter-institutional knowledge transfer processes in research and consulting contexts.
In the course of digital transformation, innovative technology concepts such as the Internet of Things and cloud computing are regarded as drivers of far-reaching changes in organisations and business models. In this context, Robotic Process Automation (RPA) is a novel approach to process automation in which manual activities are learned and executed automatically by so-called software robots. The software robots emulate the inputs on the existing presentation layer, so no changes to existing application systems are necessary. The innovative idea is the transformation of existing process execution from manual to digital, which distinguishes RPA from traditional approaches to Business Process Management (BPM), where, for example, process-driven adjustments at the level of the business logic are necessary. Various RPA solutions are already offered on the market as software products. Good results from RPA are documented especially for operational processes with repetitive processing steps in different application systems, for example the automation of 35% of back-office processes at Telefonica. Because of the comparatively low implementation effort combined with a high automation potential, there is great interest in RPA in practice (e.g. banking, telecommunications, energy supply). This article discusses RPA as an innovative approach to process digitalisation and gives concrete recommendations for practice. A distinction is made between model-driven and self-learning approaches. Using generic architectures of RPA systems, application scenarios and their automation potential, as well as their limitations, are discussed. A structured market overview of selected RPA products follows, and the use of RPA in practice is illustrated by three concrete application examples.
Nutzen und Rahmenbedingungen informationsgetriebener Geschäftsmodelle des Internets der Dinge
(2018)
In the context of increasing digitalisation, the Internet of Things (IoT) is seen as a technological driver through which completely new business models can emerge in the interplay of different actors. Identified key actors include traditional industrial companies, municipalities and telecommunications companies. The latter, by providing connectivity, ensure that small devices with tiny batteries can be connected directly to the Internet almost anywhere. Many IoT use cases that simplify things for end customers are already on the market, for example Philips Hue Tap. In addition to business models based on connectivity, there is great potential for information-driven business models that can support and further develop existing business models. One example is Deutsche Telekom AG's IoT use case Park and Joy, in which parking spaces are networked with the help of sensors and drivers are informed in real time about available parking spaces. Information-driven business models can build on the data generated in IoT use cases. For example, a telecommunications company can create added value by deriving decision-relevant information from data, so-called insights, which are used to increase decision agility. Insights can also be monetised, but the monetisation of insights can only be sustainable if it is handled carefully and the framework conditions are taken into account. This chapter explains the concept of information-driven business models and illustrates it with the concrete use case Park and Joy. Benefits, risks and framework conditions are also discussed.
Prozessorientierte Messung der Customer Experience am Beispiel der Telekommunikationsindustrie
(2018)
High competitive intensity and increased customer expectations require telecommunications companies to actively shape the customer experience (CX). An important aspect of this is CX measurement. Traditional satisfaction surveys are often not sufficient to capture the customer experience in complex processes completely. This chapter therefore proposes a cross-process reference solution for CX measurement, using the telecommunications industry as an example. The starting point is an industry-specific process model based on the eTOM reference model. It is extended by measuring points that identify weaknesses with respect to CX. For the identified weaknesses, possible triggers are derived via a reference matrix and evaluated on the basis of typical business case volumes. This allows concrete measures for eliminating the weaknesses to be assigned directly and their success to be measured. The reference solution developed in this way was successfully implemented in the K1 project at Deutsche Telekom. Details of the implementation are presented as case studies.
Because of customer churn, strong competition, and operational inefficiencies, the telecommunications operator ME Telco (fictitious name due to confidentiality) launched a strategic transformation program that included a Business Process Management (BPM) project. Major problems were silo-oriented process management and missing cross-functional transparency. Process improvements were not consistently planned and aligned with corporate targets. Measurable inefficiencies were observed on an operational level, e.g., high lead times and reassignment rates of the incident management process.
Malaria infection remains a significant risk for much of the population of tropical and subtropical areas, particularly in developing countries. Therefore, it is of high importance to develop sensitive, accurate and inexpensive malaria diagnosis tests. Here, we present a novel aptamer-based electrochemical biosensor (aptasensor) for malaria detection by impedance spectroscopy, through the specific recognition between a highly discriminatory DNA aptamer and its target Plasmodium falciparum lactate dehydrogenase (PfLDH). Interestingly, due to the isoelectric point (pI) of PfLDH, the aptasensor response showed an adjustable detection range based on the different protein net charge at variable pH. The specific aptamer recognition allows sensitive protein detection with an expanded detection range and a low detection limit, as well as a high specificity for PfLDH compared to analogous proteins. The practical feasibility of the aptasensor is further demonstrated by the detection of the target PfLDH in human serum. Furthermore, the aptasensor can be easily regenerated and thus used multiple times. The robustness, sensitivity, and reusability of the presented aptasensor make it a promising candidate for point-of-care diagnostic systems.
The chemical imaging sensor is a semiconductor-based chemical sensor capable of visualizing pH and ion distributions. The spatial resolution depends on the lateral diffusion of photocarriers generated by illumination of the semiconductor substrate. In this study, two types of optical setups, one based on a bundle of optical fibers and the other based on a binocular tube head, were developed to project a hybrid illumination of a modulated light beam and a ring-shaped constant illumination onto the sensor plate. An improved spatial resolution was realized by the ring-shaped constant illumination, which suppressed lateral diffusion of photocarriers by enhanced recombination due to the increased carrier concentration.
As with most high-velocity free-surface flows, stepped spillway flows become self-aerated when the drop height exceeds a critical value. Due to the step-induced macro-roughness, the flow field becomes more turbulent than on a similar smooth-invert chute. For this reason, cascades are oftentimes used as re-aeration structures in wastewater treatment. However, for stepped spillways as flood release structures downstream of deoxygenated reservoirs, gas transfer is also of crucial significance to meet ecological requirements. Prediction of mass transfer velocities becomes challenging, as the flow regime differs from typical previously studied flow conditions. In this paper, detailed air-water flow measurements are conducted on stepped spillway models with different geometry, with the aim to estimate the specific air-water interface. Re-aeration performances are determined by applying the absorption method. In contrast to earlier studies, the aerated water body is considered a continuous mixture up to a level where 75% air concentration is reached. Above this level, a homogenous surface wave field is considered, which is found to significantly affect the total air-water interface available for mass transfer. Geometrical characteristics of these surface waves are obtained from high-speed camera investigations. The results show that both the mean air concentration and the mean flow velocity have influence on the mass transfer. Finally, an empirical relationship for the mass transfer on stepped spillway models is proposed.
Synthetic mimics of natural high-performance structural materials have shown great and partly unforeseen opportunities for the design of multifunctional materials. For nacre-mimetic nanocomposites, it has remained extraordinarily challenging to make ductile materials with high stretchability at high fractions of reinforcements, which is however of crucial importance for flexible barrier materials. Here, highly ductile and tough nacre-mimetic nanocomposites are presented, by implementing weak, but many hydrogen bonds in a ternary nacre-mimetic system consisting of two polymers (poly(vinyl amine) and poly(vinyl alcohol)) and natural nanoclay (montmorillonite) to provide efficient energy dissipation and slippage at high nanoclay content (50 wt%). Tailored interactions enable exceptional combinations of ductility (close to 50% strain) and toughness (up to 27.5 MJ m⁻³). Extensive stress whitening, a clear sign of high internal dynamics at high internal cohesion, can be observed during mechanical deformation, and the materials can be folded like paper into origami planes without fracture. Overall, the new levels of ductility and toughness are unprecedented in highly reinforced bioinspired nanocomposites and are of critical importance to future applications, e.g., as barrier materials needed for encapsulation and as a printing substrate for flexible organic electronics.
False spectra formation in the differential two-channel scheme of the laser Doppler flowmeter
(2018)
Noise in the differential two-channel scheme of a classic laser Doppler flowmetry (LDF) instrument was studied. The formation of false spectral components in the output signal, caused by the beating of electrical signals in the differential amplifier, was identified. An improved block diagram of the flowmeter was developed that allows the noise to be reduced.
Background
Culture media containing complex compounds like yeast extract or peptone have numerous disadvantages. The chemical composition of the complex compounds is prone to significant variation from batch to batch, and quality control is difficult. Therefore, the use of chemically defined media is receiving more and more attention in commercial fermentations. This concept results in better reproducibility, simplifies downstream processing of secreted products and enables rapid scale-up. Culturing bacteria with unknown auxotrophies in chemically defined media is challenging and often not possible without an extensive trial-and-error approach. In this study, a respiration activity monitoring system for shake flasks and its recent version for microtiter plates were used to clarify unknown auxotrophic deficiencies in the model organism Bacillus pumilus DSM 18097.
Results
Bacillus pumilus DSM 18097 was unable to grow in a mineral medium without the addition of complex compounds. Therefore, a rich chemically defined minimal medium was tested, containing essentially all vitamins, amino acids and nucleobases that are typical ingredients of complex components. The strain was successfully cultivated in this medium. By monitoring the respiration activity, nutrients were supplemented to and omitted from the rich chemically defined medium in a rational way, thus enabling a systematic and fast determination of the auxotrophic deficiencies. The experiments showed that the investigated strain requires amino acids, especially cysteine or histidine, and the vitamin biotin for growth.
Conclusions
The introduced method allows an efficient and rapid identification of unknown auxotrophic deficiencies and can be used to develop a simple chemically defined tailor-made medium. B. pumilus DSM 18097 was chosen as a model organism to demonstrate the method. However, the method is generally suitable for a wide range of microorganisms. By combining a systematic combinatorial approach based on monitoring the respiration activity with cultivation in microtiter plates, high throughput experiments with high information content can be conducted. This approach facilitates media development, strain characterization and cultivation of fastidious microorganisms in chemically defined minimal media while simultaneously reducing the experimental effort.
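A minimal sketch of the kind of systematic omission experiment described above, generating leave-one-out media compositions for microtiter plate wells; the supplement list is an illustrative assumption, not the actual medium recipe.

```python
# Sketch of a leave-one-out supplementation scheme for finding auxotrophies.
# The supplement list is an illustrative assumption, not the actual recipe.
supplements = ["biotin", "thiamine", "cysteine", "histidine", "adenine"]

# Each well omits exactly one supplement from the rich defined medium;
# absent growth in a well points to an auxotrophy for the omitted compound.
plate_layout = {
    f"well_{i + 1}": [s for s in supplements if s != omitted]
    for i, omitted in enumerate(supplements)
}
plate_layout["control"] = supplements  # full medium as reference

for well, composition in plate_layout.items():
    print(well, "->", ", ".join(composition))
```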
Monitoring of organic acids (OA) and volatile fatty acids (VFA) is crucial for the control of anaerobic digestion, since these intermediates accumulate under unstable process conditions. In the present work, two different enzyme-based biosensor arrays are combined and presented for the facile electrochemical determination of several process-relevant analytes. Each biosensor utilizes a platinum sensor chip (14 × 14 mm²) with five individual working electrodes. The OA biosensor enables the simultaneous measurement of ethanol, formate, d- and l-lactate, based on a bi-enzymatic detection principle. The second, VFA biosensor provides an amperometric platform for the quantification of acetate and propionate, mediated by the oxidation of hydrogen peroxide. The cross-sensitivity of both biosensors toward potential interferents typically present in fermentation samples was investigated. The potential for practical application in complex media was successfully demonstrated in spiked sludge samples collected from three different biogas plants. The results obtained by both biosensors were in good agreement with the reference measurements by photometry and gas chromatography, respectively. The proposed hybrid biosensor system was also used for long-term monitoring of a lab-scale biogas reactor (0.01 m³) over a period of two months. In combination with typically monitored parameters, such as gas quality, pH and FOS/TAC (volatile organic acids/total inorganic carbonate), the amperometric measurements of OA and VFA concentrations can enhance the understanding of ongoing fermentation processes.
Algal polysaccharides (extracellular polysaccharides) and carbon nanotubes (CNTs) were adsorbed on dioctadecyldimethylammonium bromide Langmuir monolayers to serve as a matrix for the incorporation of urease. The physicochemical properties of the supramolecular system as a monolayer at the air–water interface were investigated by surface pressure–area isotherms, surface potential–area isotherms, interfacial shear rheology, vibrational spectroscopy, and Brewster angle microscopy. The floating monolayers were transferred to hydrophilic solid supports, quartz, mica, or capacitive electrolyte–insulator–semiconductor (EIS) devices, through the Langmuir–Blodgett (LB) technique, forming mixed films, which were investigated by quartz crystal microbalance, fluorescence spectroscopy, and field emission gun scanning electron microscopy. The enzyme activity was studied with UV–vis spectroscopy, and the feasibility of the thin film as a urea sensor was essayed in an EIS sensor device. The presence of CNT in the enzyme–lipid LB film not only tuned the catalytic activity of urease but also helped to conserve its enzyme activity. Viability as a urease sensor was demonstrated with capacitance–voltage and constant capacitance measurements, exhibiting regular and distinctive output signals over all concentrations used in this work. These results are related to the synergism between the compounds on the active layer, leading to a surface morphology that allowed fast analyte diffusion owing to an adequate molecular accommodation, which also preserved the urease activity. This work demonstrates the feasibility of employing LB films composed of lipids, CNT, algal polysaccharides, and enzymes as EIS devices for biosensing applications.
The light-addressable potentiometric sensor (LAPS) and scanning photo-induced impedance microscopy (SPIM) are two closely related methods to visualise the distributions of chemical species and impedance, respectively, at the interface between the sensing surface and the sample solution. They both have the same field-effect structure based on a semiconductor, which allows spatially resolved and label-free measurement of chemical species and impedance in the form of a photocurrent signal generated by a scanning light beam. In this article, the principles and various operation modes of LAPS and SPIM, functionalisation of the sensing surface for measuring various species, LAPS-based chemical imaging and high-resolution sensors based on silicon-on-sapphire substrates are described and discussed, focusing on their technical details and prospective applications.