Software development projects often fail because of insufficient code quality. It is now well documented that the task of testing software, for example, is perceived as uninteresting and rather boring, leading to poor software quality and major challenges for software development companies. One promising approach to increasing the motivation for considering software quality is the use of gamification. Initial research has already investigated the effects of gamification on software developers and arrived at promising results. Nevertheless, there is a lack of results from field experiments, which motivates the chapter at hand. By conducting a gamification experiment with five student software projects and by interviewing the project members, the chapter provides insights into the changing programming behavior of information systems students when confronted with a leaderboard. The results reveal a motivational effect as well as a reduction of code smells.
This textbook clearly conveys the fundamentals of RF technology and gives concrete guidance for the design of linear components from discrete parts as well as transmission lines for high-speed and RF circuits. The reader learns how components are modeled and how circuits are synthesized and optimized. With the help of freely available simulation software, GHz circuits can be developed independently. Numerous exercises allow readers to check their own progress. Furthermore, the functionality of complex nonlinear components such as RF mixers, oscillators, and frequency synthesizers is presented. The new mixed-mode scattering parameters as well as the associated transmission-line and circuit techniques for applications in high-speed digital and modern RF technology are described in detail. Systems for the following areas are covered: scattering-parameter measurement, various radio technologies, UHF RFID, and localization and positioning. The reader is thus enabled to develop complex GHz circuits, in particular with semiconductor, SMD, and LTCC circuits.
Clinical assessment of newly developed sensors is important for ensuring their validity. Comparing recordings of emerging electrocardiography (ECG) systems to a reference ECG system requires accurate synchronization of data from both devices. Current methods can be inefficient and prone to errors. To address this issue, three algorithms are presented to synchronize two ECG time series from different recording systems: Binned R-peak Correlation, R-R Interval Correlation, and Average R-peak Distance. These algorithms reduce ECG data to their cyclic features, mitigating inefficiencies and minimizing discrepancies between different recording systems. We evaluate the performance of these algorithms using high-quality data and then assess their robustness after manipulating the R-peaks. Our results show that R-R Interval Correlation was the most efficient, whereas the Average R-peak Distance and Binned R-peak Correlation were more robust against noisy data.
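As a rough illustration of the R-R Interval Correlation idea, the following sketch reduces each recording to its sequence of R-R intervals and searches for the beat lag that maximizes their correlation; the function names, lag range, and minimum-overlap requirement are our own illustrative assumptions, not details from the paper.

```python
# Minimal sketch of R-R interval based synchronization: reduce each ECG to its
# R-R interval sequence and find the beat offset with maximal correlation.
import numpy as np

def rr_intervals(r_peak_times: np.ndarray) -> np.ndarray:
    """R-R intervals (seconds) from R-peak timestamps."""
    return np.diff(r_peak_times)

def best_beat_offset(rr_a: np.ndarray, rr_b: np.ndarray, max_lag: int = 200) -> int:
    """Lag (in beats) of rr_b relative to rr_a with maximal correlation."""
    best_lag, best_corr = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        a = rr_a[lag:] if lag >= 0 else rr_a     # shift by dropping leading beats
        b = rr_b if lag >= 0 else rr_b[-lag:]
        n = min(len(a), len(b))
        if n < 10:                                # require a minimal overlap
            continue
        c = np.corrcoef(a[:n], b[:n])[0, 1]
        if c > best_corr:
            best_lag, best_corr = lag, c
    return best_lag

# Toy usage: two recordings of the same heart, the second missing the first 5 beats.
t = np.cumsum(0.8 + 0.05 * np.random.default_rng(0).standard_normal(300))
print(best_beat_offset(rr_intervals(t), rr_intervals(t[5:])))  # -> 5
```

The beat lag found this way, together with the corresponding R-peak timestamps, then yields the time shift between the two recordings.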
The problem of fair and privacy-preserving ordered set reconciliation arises in a variety of applications like auctions, e-voting, and appointment reconciliation. While several multi-party protocols have been proposed that solve this problem in the semi-honest model, so far there are no multi-party protocols that are secure in the malicious model. In this paper, we close this gap. Our newly proposed protocols are shown to be secure in the malicious model based on a variety of novel non-interactive zero-knowledge proofs. We describe the implementation of our protocols and evaluate their performance in comparison to protocols solving the problem in the semi-honest case.
The RoboCup Logistics League (RCLL) is a robotics competition in a production logistics scenario in the context of a Smart Factory. In the competition, a team of three robots needs to assemble products to fulfill various orders that are requested online during the game. This year, the Carologistics team was able to win the competition with a new approach to multi-agent coordination as well as significant changes to the robot’s perception unit and a pragmatic network setup using the cellular network instead of WiFi. In this paper, we describe the major components of our approach with a focus on the changes compared to the last physical competition in 2019.
Due to the increasing complexity of software projects, software development is becoming more and more dependent on teams. The quality of this teamwork can vary depending on the team composition, as teams are always a combination of different skills and personality types. This paper aims to answer the question of how to describe a software development team and what influence the personality of the team members has on team dynamics. For this purpose, a systematic literature review (n=48) and a literature search with the AI research assistant Elicit (n=20) were conducted. The result: a person's personality significantly shapes their thinking and actions, which in turn influences their behavior in software development teams. It has been shown that team performance and satisfaction can be strongly influenced by personality. The quality of communication and the likelihood of conflict can also be attributed to personality.
This paper presents an approach for reducing the cognitive load of humans working in quality control (QC) for production processes that adhere to the 6σ methodology. While 100% QC requires every part to be inspected, this task can be reduced when a human-in-the-loop QC process is supported by an anomaly detection system that only presents those parts for manual inspection that have a significant likelihood of being defective. This approach shows good results when applied to image-based QC for metal textile products.
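The abstract does not name a concrete detector, so the following is only a minimal sketch of such a human-in-the-loop triage loop, using an off-the-shelf anomaly detector (scikit-learn's IsolationForest) on placeholder feature vectors; the feature extraction, model choice, and threshold are assumptions, not the paper's system.

```python
# Hedged sketch: score each part with an anomaly detector and route only
# suspicious parts to manual inspection.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
train_features = rng.normal(size=(500, 32))   # stand-in features of known-good parts
new_features = rng.normal(size=(50, 32))      # stand-in features of incoming parts

detector = IsolationForest(random_state=0).fit(train_features)
scores = detector.decision_function(new_features)  # lower = more anomalous

THRESHOLD = 0.0  # tuned so parts with significant defect likelihood are caught
to_inspect = np.where(scores < THRESHOLD)[0]
print(f"{len(to_inspect)} of {len(new_features)} parts routed to manual QC")
```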
Digital forensics of smartphones is of utmost importance in many criminal cases. As modern smartphones store chats, photos, videos, etc. that can be relevant for investigations, and as they can have storage capacities of hundreds of gigabytes, they are a primary target for forensic investigators. However, it is exactly this large amount of data that causes problems: extracting and examining the data from multiple phones seized in the context of a case takes more and more time. This bears the risk of wasting a lot of time on irrelevant phones while not enough time is left to analyze a phone that is worth examining. Forensic triage can help in this case: such a triage is a preselection step based on a subset of data and is performed before fully extracting all the data from the smartphone. Triage can accelerate subsequent investigations and is especially useful in cases where time is essential. The aim of this paper is to determine which and how much data from an Android smartphone can be made directly accessible to the forensic investigator, without tedious investigations. For this purpose, an app has been developed that works with extremely limited storage of data on the handset and outputs the extracted data immediately to the forensic workstation in a human- and machine-readable format.
KNX is a protocol for smart building automation, e.g., for automated heating, air conditioning, or lighting. This paper analyzes and evaluates state-of-the-art KNX devices from the manufacturers Merten, Gira, and Siemens with respect to security. On the one hand, it is investigated whether publicly known vulnerabilities, like insecure storage of passwords in software, unencrypted communication, or denial-of-service attacks, can be reproduced in new devices. On the other hand, the security is analyzed in general, leading to the discovery of a previously unknown and high-risk vulnerability related to so-called BCU (authentication) keys.
Nowadays, smartphones are undoubtedly the most widely used devices for recording videos and capturing images. Our work investigates the application of source camera identification on mobile phones. We present a dataset collected entirely with mobile phones, containing both still images and videos from 67 different smartphones. Part of the images consists of photos of uniform backgrounds, collected especially for the computation of the RSPN. Identifying the source camera of a video is particularly challenging due to the strong video compression. The experiments reported in this paper show the large variation in performance when testing a highly accurate technique on still images and videos.
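For orientation, a minimal sensor-pattern-noise pipeline of the kind used for source camera identification might look as follows; the Gaussian filter stands in for the wavelet denoisers common in this literature, and the paper's exact "RSPN" computation may differ from this sketch.

```python
# Illustrative sensor-pattern-noise pipeline: estimate a camera fingerprint
# from flat-field (uniform background) images and correlate it with the noise
# residual of a query image.
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(img: np.ndarray) -> np.ndarray:
    """Residual = image minus its denoised version (stand-in denoiser)."""
    return img - gaussian_filter(img, sigma=2)

def fingerprint(flat_field_images: list[np.ndarray]) -> np.ndarray:
    """Average the residuals of many uniform images to isolate the pattern noise."""
    return np.mean([noise_residual(i) for i in flat_field_images], axis=0)

def similarity(query: np.ndarray, fp: np.ndarray) -> float:
    """High correlation suggests the query was taken with the fingerprinted camera."""
    r, f = noise_residual(query).ravel(), fp.ravel()
    return float(np.corrcoef(r, f)[0, 1])
```

For video, the same correlation is computed against residuals of (compressed) frames, which is precisely where the reported performance drop originates.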
Automated driving is now possible in diverse road and traffic conditions. However, there are still situations that automated vehicles cannot handle safely and efficiently. In this case, a Transition of Control (ToC) is necessary so that the driver takes over the driving task. Executing a ToC requires the driver to gain full situation awareness of the driving environment. If the driver fails to take back control within a limited time, a Minimum Risk Maneuver (MRM) is executed to bring the vehicle into a safe state (e.g., decelerating to a full stop). The execution of ToCs requires some time and can cause traffic disruption and safety risks, which increase if several vehicles execute ToCs/MRMs at similar times and in the same area. This study proposes novel C-ITS traffic management measures in which the infrastructure exploits V2X communications to assist Connected and Automated Vehicles (CAVs) in the execution of ToCs. The infrastructure can suggest a spatial distribution of ToCs and inform vehicles of the locations where they could execute a safe stop in case of an MRM. This paper reports the first field operational tests that validate the feasibility and quantify the benefits of the proposed infrastructure-assisted ToC and MRM management. The paper also presents the CAV and roadside infrastructure prototypes implemented and used in the trials. The conducted field trials demonstrate that infrastructure-assisted traffic management solutions can reduce safety risks and traffic disruptions.
Modern implementations of driver assistance systems are evolving from pure driver assistance to independently acting automation systems. Still, these systems do not cover the full vehicle usage range, also called the operational design domain, and therefore require the human driver as a fall-back mechanism. Transition of control and potential minimum risk manoeuvres are current research topics and will bridge the gap until fully autonomous vehicles are available. The authors showed in a demonstration that transition-of-control mechanisms can be further improved by the use of communication technology. Receiving incident type and position information via standardised vehicle-to-everything (V2X) messages can improve driver safety and comfort. The connected and automated vehicle's software framework can take this information to plan areas where the driver should take back control by initiating a transition of control, which can be followed by a minimum risk manoeuvre in case of an unresponsive driver. This transition of control has been implemented in a test vehicle and was presented to the public during IEEE IV 2022 (IEEE Intelligent Vehicles Symposium) in Aachen, Germany.
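A back-of-the-envelope sketch of the planning step, placing the take-over request far enough upstream of a V2X-reported incident to cover driver take-over plus a minimum-risk stop, could look as follows; all parameters are illustrative assumptions, not values from the demonstration.

```python
# Sketch: distance before a reported incident at which a transition-of-control
# (ToC) request must be issued so that either the driver takes over or a
# minimum risk manoeuvre (MRM) can still stop the vehicle safely.
def toc_trigger_distance(speed_mps: float,
                         driver_takeover_s: float = 10.0,
                         comfort_decel_mps2: float = 2.0,
                         safety_margin_m: float = 50.0) -> float:
    takeover_dist = speed_mps * driver_takeover_s              # driver regains control
    stopping_dist = speed_mps ** 2 / (2 * comfort_decel_mps2)  # MRM fallback stop
    return takeover_dist + stopping_dist + safety_margin_m

# e.g. at 25 m/s (90 km/h): request the ToC roughly 456 m before the incident
print(round(toc_trigger_distance(25.0), 1))
```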
The work in modern open-pit and underground mines requires the transportation of large amounts of resources between fixed points. The navigation to these fixed points is a repetitive task that can be automated. The challenge in automating the navigation of vehicles commonly used in mines lies in the systemic properties of such vehicles. Many mining vehicles, such as the one we have used in the research for this paper, use steering systems with an articulated joint bending the vehicle's drive axis to change its course, and a hydraulic drive system to actuate axial drive components or the movements of tippers if available. To address the difficulties of controlling such a vehicle, we present a model-predictive approach for controlling it. While control optimisation based on a parallel error minimisation of the predicted state has already been established in the past, we provide insight into the design and implementation of an MPC for an articulated mining vehicle and show the results of real-world experiments in an open-pit mine environment.
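To make the control idea concrete, here is a hedged sketch of a receding-horizon optimization over a simplified kinematic model of a center-articulated vehicle; the model, horizon, weights, and limits are illustrative assumptions, not the authors' implementation.

```python
# Illustrative MPC sketch: optimize a short sequence of articulation-rate
# commands against a simplified kinematic model of a center-articulated vehicle.
import numpy as np
from scipy.optimize import minimize

L_F, L_R, V, DT, H = 1.5, 1.5, 2.0, 0.2, 10   # geometry [m], speed [m/s], step [s], horizon

def step(state, gamma_rate):
    """One Euler step of a simplified center-articulated kinematic model."""
    x, y, theta, gamma = state
    theta_dot = (V * np.sin(gamma) + L_R * gamma_rate) / (L_F * np.cos(gamma) + L_R)
    return np.array([x + V * np.cos(theta) * DT,
                     y + V * np.sin(theta) * DT,
                     theta + theta_dot * DT,
                     gamma + gamma_rate * DT])

def cost(rates, state):
    """Penalize lateral offset and heading error w.r.t. the reference path y = 0."""
    c = 0.0
    for r in rates:
        state = step(state, r)
        c += state[1] ** 2 + 0.5 * state[2] ** 2 + 0.01 * r ** 2
    return c

state0 = np.array([0.0, 1.0, 0.2, 0.0])        # 1 m off the path, misaligned
res = minimize(cost, np.zeros(H), args=(state0,),
               bounds=[(-0.3, 0.3)] * H)        # articulation-rate limits [rad/s]
print("first commanded articulation rate:", res.x[0])  # apply, then re-solve next step
```

In receding-horizon fashion, only the first command is applied before the optimization is repeated with the newly measured state.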
As of 1 January 2022, 618,460 electrically powered motor vehicles were registered in Germany, out of a total of 48,540,878 registered motor vehicles, which corresponds to an electromobility share of approximately 1.2%. Currently, electric vehicles are connected to the power grid via charging stations or sockets and are usually charged at the full charging capacity of the connection until the vehicle's battery management system reduces the charging power depending on the battery's state of charge.
This paper addresses the pixel-based recognition of 3D objects with bidirectional associative memories. Computational power and memory requirements for this approach are identified and compared to the performance of current computer architectures by benchmarking different processors. It is shown that the performance of special-purpose hardware, like neurocomputers, is between one and two orders of magnitude higher than the performance of mainstream hardware. On the other hand, the calculation of small neural networks is performed more efficiently on mainstream processors. Based on these results a novel concept is developed, which is tailored to the efficient calculation of bidirectional associative memories. The computational efficiency is further enhanced by the application of algorithms and storage techniques that are matched to the characteristics of the application at hand.
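For readers unfamiliar with the model, a minimal NumPy sketch of a bidirectional associative memory, Hebbian outer-product storage of bipolar pattern pairs with iterated two-way recall, is given below; pattern sizes are arbitrary illustration values, not the paper's configuration.

```python
# Minimal bidirectional associative memory (BAM): store bipolar (+/-1) pattern
# pairs as a sum of outer products and recall by bouncing activations through
# the weight matrix in both directions until the pair stabilizes.
import numpy as np

def train_bam(xs: np.ndarray, ys: np.ndarray) -> np.ndarray:
    """Weight matrix as the sum of outer products of the stored pairs."""
    return sum(np.outer(x, y) for x, y in zip(xs, ys))

def recall(W: np.ndarray, x: np.ndarray, iters: int = 10):
    y = np.sign(W.T @ x)
    for _ in range(iters):
        x_new = np.sign(W @ y)
        y_new = np.sign(W.T @ x_new)
        if np.array_equal(x_new, x) and np.array_equal(y_new, y):
            break                                  # reached a stable pair
        x, y = x_new, y_new
    return x, y

rng = np.random.default_rng(1)
xs = np.sign(rng.normal(size=(3, 64)))   # three stored "image" patterns
ys = np.sign(rng.normal(size=(3, 16)))   # three associated "label" patterns
W = train_bam(xs, ys)
_, y = recall(W, xs[0])
print("recalled label matches:", np.array_equal(y, ys[0]))
```

The recall step is dominated by matrix-vector products, which is exactly the workload the benchmarking in the paper targets.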
This paper addresses the pixel-based classification of three-dimensional objects from arbitrary views. To perform this task, a coding strategy for pixel data, inspired by the biological model of human vision, is described. The coding strategy ensures that the input data is invariant against shift, scale, and rotation of the object in the input domain. The image data is used as input to a class of self-organizing neural networks, the Kohonen maps or self-organizing feature maps (SOFM). To verify this approach, two test sets have been generated: the first set, consisting of artificially generated images, is used to examine the classification properties of the SOFMs; the second test set examines the clustering capabilities of the SOFM when real-world image data is applied to the network after it has been preprocessed to be invariant against shift, scale, and rotation. It is shown that the clustering capability of the SOFM is strongly dependent on the invariance coding of the images.
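A compact sketch of how such a self-organizing feature map can be trained on (already invariance-coded) feature vectors is shown below; map size, learning schedule, and the random stand-in data are assumptions, not the paper's setup.

```python
# Compact self-organizing feature map (SOFM): for each input, find the
# best-matching unit and pull its grid neighborhood toward the input, with
# decaying learning rate and neighborhood radius.
import numpy as np

rng = np.random.default_rng(0)
data = rng.random((1000, 16))            # stand-in for invariance-coded images
grid = rng.random((8, 8, 16))            # 8x8 map of 16-dim prototype vectors
coords = np.stack(np.mgrid[0:8, 0:8], axis=-1)

for t, x in enumerate(data):
    lr = 0.5 * (1 - t / len(data))                    # decaying learning rate
    radius = 4.0 * (1 - t / len(data)) + 0.5          # decaying neighborhood
    d = np.linalg.norm(grid - x, axis=-1)
    bmu = np.unravel_index(np.argmin(d), d.shape)     # best-matching unit
    h = np.exp(-np.sum((coords - bmu) ** 2, axis=-1) / (2 * radius ** 2))
    grid += lr * h[..., None] * (x - grid)            # pull neighborhood toward x
```

After training, clusters on the 8x8 grid correspond to groups of similar inputs, which is the clustering behavior the paper evaluates.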
This paper describes the realization of a novel neurocomputer which is based on the concept of a coprocessor. In contrast to existing neurocomputers, the main interest was the realization of a scalable, flexible system that is capable of computing neural networks of arbitrary topology and scale, with full independence from special hardware from the software's point of view. On the other hand, computational power should be added whenever needed and flexibly adapted to the requirements of the application. Hardware independence is achieved by a run-time system which is capable of autonomously using all available computing power, including multiple host CPUs and an arbitrary number of neural coprocessors. The realization of arbitrary neural topologies is provided through the implementation of the elementary operations which can be found in most neural topologies.
Aim of the AXON2 project (Adaptive Expert System for Object Recognition using Neural Networks) is the development of an object recognition system (ORS) capable of recognizing isolated 3D objects from arbitrary views. Commonly, classification is based on a single feature extracted from the original image. Here we present an architecture adapted from the Mixtures of Experts algorithm which uses multiple neural networks to integrate different features. During training each neural network specializes in a subset of objects or object views appropriate to the properties of the corresponding feature space. In recognition mode the system dynamically chooses the most relevant features and combines them with maximum efficiency. The remaining less relevant features are not computed and therefore do not decelerate the recognition process. Thus, the algorithm is well suited for real-time applications.
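The combination step can be illustrated with a toy gating computation in the spirit of the Mixtures of Experts architecture; the linear experts and gate below are stand-ins, and the paper's dynamic feature selection would additionally skip computing features whose gate weight is low.

```python
# Toy mixture-of-experts combination: a gating function weights the class
# scores of several feature-specific experts.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
n_classes, feat_dims = 5, [12, 8, 20]                 # three feature spaces
experts = [rng.normal(size=(n_classes, d)) for d in feat_dims]
gate = rng.normal(size=(len(experts), sum(feat_dims)))

features = [rng.normal(size=d) for d in feat_dims]    # one object's features
g = softmax(gate @ np.concatenate(features))          # expert relevance weights
scores = sum(w * softmax(E @ f) for w, E, f in zip(g, experts, features))
print("predicted class:", int(np.argmax(scores)))
```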
In the past, large system integration projects were usually based on custom developments for individual customers. Driven by cost pressure, however, the demand for standardized solutions that simultaneously take into account the individual requirements of the respective environment is growing. T-Systems GEI GmbH meets both requirements with product kernels. In addition to the technical aspects of kernel development, organizational aspects in particular play a role in developing kernels efficiently and with high quality, without letting their functionality grow without bounds. T-Systems has implemented this concept for airport information systems, meeting airport operators' growing demand for an efficient and cost-effective software solution to support their business processes.
The success of a software development project, in particular a system integration project, is measured by its fulfillment of the "devil's triangle" of "in time", "in budget", and "in quality". Knowledge of software and process quality is essential for this, both to verify compliance with the quality criteria and to make predictions regarding schedule and budget adherence. For this purpose, a system of various key performance indicators was designed and implemented within T-Systems Systems Integration that accomplishes exactly this and fulfills the criteria for CMMI Level 3.
In this paper we report on CO2 Meter, a do-it-yourself carbon dioxide measuring device for the classroom. Part of the current measures for dealing with the SARS-CoV-2 pandemic is proper ventilation in indoor settings. This is especially important in schools, with students coming back to the classroom even at high incidence rates. Static ventilation patterns do not consider the individual situation of a particular class. Influencing factors like the type of activity, the physical structure, or the room occupancy are not incorporated. Also, existing devices are rather expensive and often provide only limited information, and only locally without any networking. This leaves the potential of analysing the situation across different settings untapped. The carbon dioxide level can be used as an indicator of air quality in general and of aerosol load in particular. Since, according to the latest findings, SARS-CoV-2 is transmitted primarily in the form of aerosols, carbon dioxide may be used as a proxy for the risk of a virus infection. Hence, schools could improve indoor air quality and potentially reduce the infection risk if they actually had measuring devices available in the classroom. Our device supports schools with ventilation, and it allows for collecting data over the Internet to enable detailed data analysis and model generation. First deployments in schools at different levels were received very positively. A pilot installation with larger data collection and analysis is underway.
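A hedged sketch of such a device's core loop, sampling, local warning, and reporting to a server, is given below; the sensor driver, threshold, and URL are placeholders rather than the project's actual design.

```python
# Sketch of a networked CO2 meter loop: sample the sensor, warn locally when
# ventilation is needed, and report readings for central analysis.
import json, time, urllib.request

VENTILATE_ABOVE_PPM = 1000          # common guideline value for indoor air

def read_co2_ppm() -> float:
    """Placeholder for the real sensor driver (e.g., an NDIR sensor via I2C)."""
    return 800.0

while True:
    ppm = read_co2_ppm()
    if ppm > VENTILATE_ABOVE_PPM:
        print(f"CO2 at {ppm:.0f} ppm - please ventilate")
    payload = json.dumps({"ts": time.time(), "co2_ppm": ppm}).encode()
    req = urllib.request.Request("https://example.org/api/readings",  # placeholder URL
                                 data=payload,
                                 headers={"Content-Type": "application/json"})
    try:
        urllib.request.urlopen(req, timeout=5)
    except OSError:
        pass                         # keep measuring even if the network is down
    time.sleep(60)                   # one reading per minute
```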
Existing residential buildings have an average lifetime of 100 years. Many of these buildings will exist for at least another 50 years. To increase the efficiency of these buildings while keeping costs at reasonable rates, they can be retrofitted with sensors that deliver information to central control units for heating, ventilation, and electricity. This retrofitting process should happen with minimal intervention into existing infrastructure and requires new approaches for sensor design and data transmission. At FH Aachen University of Applied Sciences, students of different disciplines work together to learn how to design, build, deploy, and operate such sensors. The teaching project presented here has already created a low-power design for a combined CO2, temperature, and humidity measurement device that can be easily integrated into most home automation systems.
With the growing interest in small distributed sensors for the "Internet of Things", more attention is being paid to energy harvesting technologies. Reducing or eliminating the need for external power sources or batteries makes devices more self-sufficient and more reliable, and reduces maintenance requirements. The Wiegand effect is a proven technology for harvesting small amounts of electrical power from mechanical motion.
This article describes an Internet of things (IoT) sensing device with a wireless interface which is powered by the energy-harvesting method of the Wiegand effect. The Wiegand effect, in contrast to continuous sources like photovoltaic or thermal harvesters, provides small amounts of energy discontinuously in pulsed mode. To enable an energy-self-sufficient operation of the sensing device with this pulsed energy source, the output energy of the Wiegand generator is maximized. This energy is used to power up the system and to acquire and process data like position, temperature or other resistively measurable quantities as well as transmit these data via an ultra-low-power ultra-wideband (UWB) data transmitter. A proof-of-concept system was built to prove the feasibility of the approach. The energy consumption of the system during start-up was analysed, traced back in detail to the individual components, compared to the generated energy and processed to identify further optimization options. Based on the proof of concept, an application prototype was developed.
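The design constraint described here, that one Wiegand pulse must power the whole measure-and-transmit cycle, can be illustrated with a simple energy budget; all numbers below are hypothetical placeholders, not measurements from the article.

```python
# Hypothetical energy-budget check for pulsed Wiegand harvesting: the energy
# of one pulse must cover start-up, acquisition, and one UWB transmission.
pulse_energy_nj = 190.0          # assumed usable energy per Wiegand pulse
budget_nj = {
    "power-up and housekeeping": 60.0,
    "sensor acquisition":        40.0,
    "UWB packet transmission":   70.0,
}
total = sum(budget_nj.values())
print(f"consumed {total:.0f} nJ of {pulse_energy_nj:.0f} nJ per pulse "
      f"-> margin {pulse_energy_nj - total:.0f} nJ")
```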
In this study, the performance of an integrated body-imaging array for 7 T with 32 radiofrequency (RF) channels under consideration of local specific absorption rate (SAR), tissue temperature, and thermal dose limits was evaluated and the imaging performance was compared with a clinical 3 T body coil.
Thirty-two transmit elements were placed in three rings between the bore liner and RF shield of the gradient coil. Slice-selective RF pulse optimizations for B1 shimming and spokes were performed for differently oriented slices in the body under consideration of realistic constraints for power and local SAR. To improve the B1+ homogeneity, safety assessments based on temperature and thermal dose were performed to possibly allow for higher input power for the pulse optimization than permissible with SAR limits.
The results showed that using two spokes, the 7 T array outperformed the 3 T birdcage in all the considered regions of interest. However, a significantly higher SAR or lower duty cycle at 7 T is necessary in some cases to achieve similar B1+ homogeneity as at 3 T. The homogeneity in up to 50 cm-long coronal slices can particularly benefit from the high RF shim performance provided by the 32 RF channels. The thermal dose approach increases the allowable input power and the corresponding local SAR, in one example up to 100 W/kg, without limiting the exposure time necessary for an MR examination.
In conclusion, the integrated antenna array at 7 T enables a clinical workflow for body imaging and comparable imaging performance to a conventional 3 T clinical body coil.
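The thermal dose referred to above is commonly quantified as cumulative equivalent minutes at 43 °C (CEM43); the abstract does not state the exact model used, but a standard formulation is:

```latex
\mathrm{CEM}_{43} \;=\; \sum_{i} t_i \, R^{\,43 - T_i},
\qquad
R \;=\;
\begin{cases}
0.25, & T_i < 43\,^{\circ}\mathrm{C},\\
0.50, & T_i \ge 43\,^{\circ}\mathrm{C},
\end{cases}
```

where $t_i$ is the time spent at tissue temperature $T_i$ (in °C). Limiting this cumulative dose, rather than the instantaneous SAR, is what allows higher input power without shortening the permissible examination time.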
Carbon nanofiber nonwovens represent a powerful class of materials with prospective application in filtration technology or as electrodes with high surface area in batteries, fuel cells, and supercapacitors. While new precursor-to-carbon conversion processes have been explored to overcome productivity restrictions for carbon fiber tows, alternatives for the two-step thermal conversion of polyacrylonitrile precursors into carbon fiber nonwovens are absent. In this work, we develop a continuous roll-to-roll stabilization process using an atmospheric pressure microwave plasma jet. We explore the influence of various plasma-jet parameters on the morphology of the nonwoven and compare the stabilized nonwoven to thermally stabilized samples using scanning electron microscopy, differential scanning calorimetry, and infrared spectroscopy. We show that stabilization with a non-equilibrium plasma-jet can be twice as productive as the conventional thermal stabilization in a convection furnace, while producing electrodes of comparable electrochemical performance.
Benchmarking of various LiDAR sensors for use in self-driving vehicles in real-world environments
(2022)
In this paper, we report on our benchmark results for the LiDAR sensors Livox Horizon, Robosense M1, Blickfeld Cube, Blickfeld Cube Range, Velodyne Velarray H800, and Innoviz Pro. The idea was to test the sensors in different typical scenarios that were defined with real-world use cases in mind, in order to find a sensor that meets the requirements of self-driving vehicles. For this, we defined static and dynamic benchmark scenarios. In the static scenarios, neither the LiDAR nor the detection target moves during the measurement. In the dynamic scenarios, the LiDAR sensor was mounted on a vehicle that was driving toward the detection target. We tested all mentioned LiDAR sensors in both scenarios, show the results regarding the detection accuracy of the targets, and discuss their usefulness for deployment in self-driving cars.
Wiegand module
(2022)
A Wiegand module (110; 210; 310) comprising a sensor coil (112; 212; 312), a first Wiegand wire (116a; 216a; 316a) arranged at least partially inside the sensor coil (112; 212; 312), and a second Wiegand wire (116b; 216b; 316b) arranged at least partially inside the sensor coil (112; 212; 312) and extending substantially parallel to the first Wiegand wire (116a; 216a; 316a), is known. To enable efficient use of the electrical energy induced in the sensor coil (112; 212; 312) by the magnetization reversal of the Wiegand wires (116a, 116b; 216a, 216b; 316a, 316b), the first Wiegand wire (116a; 216a; 316a) and the second Wiegand wire (116b; 216b; 316b) are arranged offset from one another with respect to an axial direction of the sensor coil (112; 212; 312).
This paper describes the potential for developing a digital twin of society: a dynamic model that can be used to observe, analyze, and predict the evolution of various societal aspects. Such a digital twin can help governmental agencies and policy makers in interpreting trends, understanding challenges, and making decisions regarding investments or policies necessary to support societal development and ensure future prosperity. The paper reviews related work regarding the digital twin paradigm and its applications. The paper presents a motivating case study, an analysis of opportunities and challenges faced by the German federal employment agency, Bundesagentur für Arbeit (BA), proposes solutions using digital twins, and describes initial proofs of concept for such solutions.
This contribution presents an evaluation framework for smart services based on the concept of complete financial plans (VOFI). First, an IoT architecture for smart services is introduced, which provides the basis for considering them from the perspective of corporate planning. Building on this, an evaluation framework for the financial-plan-oriented profitability assessment of smart services is created, with which the relevant cash flow consequences are captured in a differentiated manner. Using the developed VOFI system, it is then shown how a risk analysis can account for the uncertainty of model parameters.
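To make the VOFI idea tangible, the following minimal sketch rolls a payment series forward period by period with separate credit and debit interest and reads off the terminal value; the cash flows and rates are invented illustration values, and the chapter's actual framework is considerably more differentiated.

```python
# Minimal complete financial plan (VOFI): track the running balance of a
# payment series with different interest rates for positive and negative
# balances; the terminal value is the decision criterion.
cash_flows = [-100_000, 30_000, 35_000, 40_000, 45_000]  # t = 0..4, in EUR
CREDIT_RATE, DEBIT_RATE = 0.02, 0.06                     # interest on +/- balance

balance = 0.0
for t, cf in enumerate(cash_flows):
    if t > 0:   # interest accrues on the balance carried over from t-1
        balance += balance * (CREDIT_RATE if balance >= 0 else DEBIT_RATE)
    balance += cf
    print(f"t={t}: cash flow {cf:+10,.0f} EUR, balance {balance:+12,.2f} EUR")
print(f"terminal value after {len(cash_flows) - 1} periods: {balance:,.2f} EUR")
```

A risk analysis as mentioned above would re-run this plan many times with cash flows drawn from probability distributions and inspect the distribution of terminal values.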
Because of customer churn, strong competition, and operational inefficiencies, the telecommunications operator ME Telco (fictitious name due to confidentiality) launched a strategic transformation program that included a Business Process Management (BPM) project. Major problems were silo-oriented process management and missing cross-functional transparency. Process improvements were not consistently planned and aligned with corporate targets. Measurable inefficiencies were observed on an operational level, e.g., high lead times and reassignment rates of the incident management process.
Process-oriented measurement of the customer experience: the example of the telecommunications industry
(2018)
High competitive intensity and increased customer expectations require telecommunications companies to actively shape the customer experience (CX). An important aspect of this is CX measurement. Traditional satisfaction surveys are often insufficient to fully capture the customer experience in complex processes. This chapter therefore proposes a cross-process reference solution for CX measurement, using the telecommunications industry as an example. The starting point is an industry-specific process model based on the eTOM reference model. It is extended with measuring points that identify weak spots with respect to CX. For the identified weak spots, possible triggers are derived via a reference matrix and evaluated on the basis of typical business case volumes. This allows concrete remedial measures to be directly assigned and their success to be measured. The reference solution developed in this way was successfully implemented in the K1 project at Deutsche Telekom. Implementation details are presented as case studies.
Benefits and framework conditions of information-driven business models in the Internet of Things
(2018)
In the context of increasing digitalization, the Internet of Things (IoT) is regarded as a technological driver through which completely new business models can emerge in the interplay of different actors. Identified key actors include traditional industrial companies, municipalities, and telecommunications companies. The latter, by providing connectivity, ensure that small devices with tiny batteries can be connected to the Internet almost anywhere and directly. Many IoT use cases that simplify life for end customers are already on the market, such as Philips Hue Tap. In addition to business models based on connectivity, there is great potential for information-driven business models that can support and further develop existing business models. One example is the IoT use case Park and Joy of Deutsche Telekom AG, in which parking spaces are networked with sensors and drivers are informed about available parking spaces in real time. Information-driven business models can build on data generated in IoT use cases. For example, a telecommunications company can create added value by deriving decision-relevant information, so-called insights, from data, which is then used to increase decision-making agility. Insights can also be monetized. However, the monetization of insights can only take place sustainably if it is handled carefully and framework conditions are taken into account. This chapter explains the concept of information-driven business models and illustrates it with the concrete use case Park and Joy. Furthermore, benefits, risks, and framework conditions are discussed.
In the course of the digital transformation, innovative technology concepts such as the Internet of Things and cloud computing are seen as drivers of far-reaching changes in organizations and business models. In this context, Robotic Process Automation (RPA) is a novel approach to process automation in which manual activities are learned and then executed automatically by so-called software robots. The software robots emulate the inputs on the existing presentation layer, so that no changes to existing application systems are necessary. The innovative idea is the transformation of existing process execution from manual to digital, which distinguishes RPA from traditional approaches to Business Process Management (BPM), in which, for example, process-driven adaptations at the level of the business logic are necessary. Various RPA solutions are already offered on the market as software products. Good results from RPA are documented especially for operational processes with repetitive processing steps across different application systems, such as the automation of 35% of back-office processes at Telefonica. Due to the comparatively low implementation effort combined with a high automation potential, there is great interest in RPA in practice (e.g., banking, telecommunications, energy supply). This contribution discusses RPA as an innovative approach to process digitalization and gives concrete recommendations for action in practice. A distinction is made between model-driven and self-learning approaches. Based on general architectures of RPA systems, application scenarios and their automation potential, but also their limitations, are discussed. A structured market overview of selected RPA products follows. Three concrete application examples illustrate the use of RPA in practice.
Due to the high number of customer contacts, fault clearances, installations, and product provisioning per year, the automation level of operational processes has a significant impact on financial results, quality, and customer experience. Therefore, the telecommunications operator Deutsche Telekom (DT) has defined a digital strategy with the objectives of zero complexity and zero complaint, one touch, agility in service, and disruptive thinking. In this context, Robotic Process Automation (RPA) was identified as an enabling technology to formulate and realize DT’s digital strategy through automation of rule-based, routine, and predictable tasks in combination with structured and stable data.
Information technologies such as big data analytics, cloud computing, cyber-physical systems, robotic process automation, and the Internet of Things provide a sustainable impetus for the structural development of business sectors as well as the digitalization of markets, enterprises, and processes. Within the consulting industry, the proliferation of these technologies has opened up the new segment of digital transformation, which focuses on setting up, controlling, and implementing projects for enterprises from a broad range of sectors. These recent developments raise the question of which requirements evolve for IT consultants as important success factors of digital transformation projects. This empirical contribution therefore provides indications regarding the qualifications and competences necessary for IT consultants in the era of digital transformation from a labor market perspective. On the one hand, this knowledge base is interesting for the academic education of consultants, since it supports a market-oriented design of adequate training measures. On the other hand, insights into the competence requirements for consultants are relevant for skill and talent management processes in consulting practice. Assuming that consulting companies pursue a strategic human resource management approach, labor market information may also be useful for discovering strategic behavioral patterns.
In the discussion about the digitalization of research, the question of optimal IT support for researchers plays an important role. Today, researchers can draw on a broad range of internal IT services at their universities and research institutions, including cooperative IT services provided jointly by several institutions. Outside their own organization and its wider network, a broad external range of innovative, often free-to-use online services has also developed on the Internet. In addition to horizontal online services aimed in principle at every Internet user (e.g., Dropbox, Twitter, WhatsApp), the number of vertical services for scientific and research purposes is growing steadily (e.g., Google Scholar, ResearchGate, figshare). This opens up numerous new possibilities for researchers to improve their individual research process with digital tools. However, due to legal, technical, and personnel restrictions, internal service providers can offer little support in identifying, selecting, and using external online services. From a service-oriented perspective, researchers increasingly face the problem of how to integrate heterogeneous IT services from internal and external providers into their own research process. As a solution approach, this chapter sketches the concept of a personal research information system designed along the lines of a digital service system.
Recently, novel AI-based services have emerged in the consumer market. AI-based services can affect the way consumers take commercial decisions. Research on the influence of AI on commercial interactions is in its infancy. In this chapter, a framework creating a first overview of the influence of AI on commercial interactions is introduced. This framework summarizes the findings of comparing numerous customer journeys of novel AI-based services with corresponding non-AI equivalents.
The influence of artificial intelligence on customer journeys: the example of intelligent parking
(2021)
New applications of artificial intelligence (AI) are increasingly emerging in the consumer market. More and more devices and services that communicate independently over the Internet are also entering the market. As a result, these devices and services can be enhanced with novel AI-based services. Such services can influence the way customers make commercial decisions and thus significantly change the customer experience. The influence of AI on commercial interactions has not yet been comprehensively studied. Based on a framework that provides a first overview of the effects of AI on commercial interactions, this chapter analyzes the influence of AI on customer journeys using the concrete use case of intelligent parking. The insights gained can be used in practice as a basis for understanding the potential of AI and applying it in the design of one's own customer journeys.
Intelligent autonomous software robots that replace human activities and perform administrative processes are a reality in today's corporate world. This includes, for example, decisions about invoice payments, the identification of customers for a marketing campaign, and the answering of customer complaints. What happens if such a software robot causes damage? Due to the complete absence of human activities, the question is not trivial. It could even happen that no one is liable for damage towards a third party, which could create an incalculable legal risk for business partners. Furthermore, the implementation and operation of such software robots involves various stakeholders, which can make identifying the originator of damage an unsolvable endeavor. Overall, it is advisable for all involved parties to carefully consider the legal situation. This chapter discusses the liability of software robots from an interdisciplinary perspective. Based on different technical scenarios, the legal aspects of liability are discussed.
The benefits of robotic process automation (RPA) are highly related to the usage of commercial off-the-shelf (COTS) software products that can be easily implemented and customized by business units. But how can the best-fitting RPA product for a specific situation, one that creates the expected benefits, be found? This question belongs to the general area of software evaluation and selection. In the face of more than 75 RPA products currently on the market, guidance considering these specifics is required. Therefore, this chapter proposes a criteria-based selection method specifically for RPA. The method includes a quantitative evaluation of costs and benefits as well as a qualitative utility analysis based on functional criteria. By using the visualization of financial implications (VOFI) method, an application-oriented structure is provided that contrasts the total cost of ownership with the time savings times salary (TSTS). For the utility analysis, a detailed list of functional criteria for RPA is offered. The whole method is based on a multi-vocal review of scientific and non-scholarly literature, including publications by business practitioners, consultants, and vendors. The application of the method is illustrated by a concrete RPA example. The illustrated structures, templates, and criteria can be directly utilized by practitioners in their real-life RPA implementations. In addition, a normative decision process for selecting RPA alternatives is proposed before the chapter closes with a discussion and outlook.
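The utility-analysis step can be illustrated with a simple weighted scoring of candidate products against functional criteria, as sketched below; the criteria, weights, products, and ratings are invented for illustration and are not the chapter's actual list.

```python
# Hedged sketch of a utility analysis: weighted scoring of candidate RPA
# products against functional criteria (ratings 1 = poor .. 5 = excellent).
criteria_weights = {"recorder quality": 0.3, "bot orchestration": 0.25,
                    "OCR/AI features": 0.25, "licensing cost fit": 0.2}
scores = {
    "Product A": {"recorder quality": 4, "bot orchestration": 3,
                  "OCR/AI features": 5, "licensing cost fit": 2},
    "Product B": {"recorder quality": 3, "bot orchestration": 5,
                  "OCR/AI features": 3, "licensing cost fit": 4},
}
for product, s in scores.items():
    utility = sum(w * s[c] for c, w in criteria_weights.items())
    print(f"{product}: weighted utility {utility:.2f}")
```

In the proposed method, such a utility score would be weighed together with the quantitative VOFI-based cost-benefit evaluation in the final decision step.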
Robotic process automation (RPA) has attracted increasing attention in research and practice. This chapter positions, structures, and frames the topic as an introduction to this book. RPA is understood as a broad concept that comprises a variety of concrete solutions. From a management perspective RPA offers an innovative approach for realizing automation potentials, whereas from a technical perspective the implementation based on software products and the impact of artificial intelligence (AI) and machine learning (ML) are relevant. RPA is industry-independent and can be used, for example, in finance, telecommunications, and the public sector. With respect to RPA this chapter discusses definitions, related approaches, a structuring framework, a research framework, and an inside as well as outside architectural view. Furthermore, it provides an overview of the book combined with short summaries of each chapter.
The subject of this case is Deutsche Telekom Services Europe (DTSE), a service center for administrative processes. Due to the high volume of repetitive tasks (e.g., 100k manual uploads of offer documents into SAP per year), automation was identified as an important strategic target with high management attention and commitment. DTSE has to work with various backend application systems without any possibility of changing those systems. Furthermore, the complexity of administrative processes differed. When it comes to the transfer of unstructured data (e.g., offer documents) into structured data (e.g., MS Excel files), further cognitive technologies were needed.
Companies are generally convinced that they put their customers' needs at the center of what they do. But in direct interaction with customers, they often show weaknesses. The following article illustrates how consistently aligning value-creation processes with central customer needs can achieve a threefold effect: sustainably increased customer satisfaction, higher efficiency, and differentiation from the competition.
Customer requirements for networks have changed considerably in recent years. With NFV and SDN, companies are technically able to meet them. However, providers face major challenges: in particular, products and processes must be adapted and become more agile in order to turn the strengths of NFV and SDN into customer benefits.