This study demonstrates the feasibility of applying free-breathing, cardiac-gated, susceptibility-weighted fast spin-echo imaging together with black blood preparation and navigator-gated respiratory motion compensation for anatomically accurate T₂ mapping of the heart. First, T₂ maps are presented for oil phantoms without and with respiratory motion emulation (T₂ = (22.1 ± 1.7) ms at 1.5 T and T₂ = (22.65 ± 0.89) ms at 3.0 T). T₂ relaxometry of a ferrofluid revealed relaxivities of R2 = (477.9 ± 17) mM⁻¹s⁻¹ and R2 = (449.6 ± 13) mM⁻¹s⁻¹ for UFLARE and multiecho gradient-echo imaging at 1.5 T. For inferoseptal myocardial regions mean T₂ values of 29.9 ± 6.6 ms (1.5 T) and 22.3 ± 4.8 ms (3.0 T) were estimated. For posterior myocardial areas close to the vena cava T₂-values of 24.0 ± 6.4 ms (1.5 T) and 15.4 ± 1.8 ms (3.0 T) were observed. The merits and limitations of the proposed approach are discussed and its implications for cardiac and vascular T₂-mapping are considered.
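T₂ maps of this kind are commonly computed by fitting a monoexponential decay S(TE) = S₀·exp(−TE/T₂) to the multi-echo signal intensities in each voxel. A minimal log-linear sketch in Python, using hypothetical echo times and a noiseless 22 ms decay rather than the study's data:

```python
import numpy as np

# Hypothetical echo times (ms) and a noiseless signal with T2 = 22 ms
te = np.array([10.0, 20.0, 40.0, 60.0, 80.0])
signal = 1000.0 * np.exp(-te / 22.0)

# Log-linearize: ln S = ln S0 - TE/T2, so the fitted slope equals -1/T2
slope, intercept = np.polyfit(te, np.log(signal), 1)
t2_est = -1.0 / slope  # recovers 22 ms
```

For noisy in vivo data a nonlinear least-squares fit of the exponential model is usually preferred over the log-linear form, since taking the logarithm distorts the noise distribution at long echo times.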
The simultaneous assessment of glottal dynamics and larynx position can be beneficial for the diagnosis of disordered voice or speech production and swallowing. Up to now, methods either concentrate on assessment of the glottis opening using optical, acoustical or electrical (electroglottography, EGG) methods, or on visualisation of the larynx position using ultrasound, computer tomography or magnetic resonance imaging techniques.
The method presented here makes use of a time-multiplex measurement approach of space-resolved transfer impedances through the larynx. The fast sequence of measurements allows a quasi-simultaneous assessment of both larynx position and EGG signal using up to 32 transmit–receive signal paths. The system assesses the dynamic opening status of the glottis as well as the vertical and back/forward motion of the larynx.
Two electrode-arrays are used for the measurement of the electrical transfer impedance through the neck in different directions. From the acquired data the global and individual conductivity is calculated as well as a 2D point spatial representation of the minimum impedance.
The position information is shown together with classical EGG signals, allowing a synchronous visual assessment of glottal area and larynx position. A first application to singing voice analysis is presented that indicates a high potential of the method for use as a non-invasive tool in the diagnosis of voice, speech, and swallowing disorders.
Objectives
Interest in cardiovascular magnetic resonance (CMR) at 7 T is motivated by the expected increase in spatial and temporal resolution, but the method is technically challenging. We examined the feasibility of cardiac chamber quantification at 7 T.
Methods
A stack of short axes covering the left ventricle was obtained in nine healthy male volunteers. At 1.5 T, steady-state free precession (SSFP) and fast gradient echo (FGRE) cine imaging with 7 mm slice thickness (STH) were used. At 7 T, FGRE with 7 mm and 4 mm STH were applied. End-diastolic volume, end-systolic volume, ejection fraction and mass were calculated.
Results
All 7 T examinations provided excellent blood/myocardium contrast for all slice directions. No significant difference was found regarding ejection fraction and cardiac volumes between SSFP at 1.5 T and FGRE at 7 T, while volumes obtained from FGRE at 1.5 T were underestimated. Cardiac mass derived from FGRE at 1.5 and 7 T was larger than that obtained from SSFP at 1.5 T. Agreement of volumes and mass between SSFP at 1.5 T and FGRE improved for FGRE at 7 T when combined with an STH reduction to 4 mm.
Conclusions
This pilot study demonstrates that cardiac chamber quantification at 7 T using FGRE is feasible and agrees closely with SSFP at 1.5 T.
Background
To demonstrate the applicability of acoustic cardiac triggering (ACT) for imaging of the heart at ultrahigh magnetic fields (7.0 T) by comparing phonocardiogram, conventional vector electrocardiogram (ECG) and traditional pulse oximetry (POX) triggered 2D CINE acquisitions together with (i) a qualitative image quality analysis, (ii) an assessment of the left ventricular function parameter and (iii) an examination of trigger reliability and trigger detection variance derived from the signal waveforms.
Results
ECG was susceptible to severe distortions at 7.0 T. POX and ACT provided waveforms free of interference from electromagnetic fields or magneto-hydrodynamic effects. Frequent R-wave mis-registration occurred in ECG-triggered acquisitions, with a failure rate of up to 30%, resulting in cardiac motion induced artifacts. ACT and POX triggering produced images free of cardiac motion artifacts. ECG showed a severe jitter in the R-wave detection. POX also showed a trigger jitter of approximately Δt = 72 ms, which is equivalent to two cardiac phases. ACT showed a jitter of only approximately Δt = 5 ms. ECG waveforms revealed a standard deviation for the cardiac trigger offset larger than that observed for ACT or POX waveforms.
Image quality assessment showed that ACT substantially improved image quality as compared to ECG (image quality score at end-diastole: ECG = 1.7 ± 0.5, ACT = 2.4 ± 0.5, p = 0.04) while the comparison between ECG vs. POX gated acquisitions showed no significant differences in image quality (image quality score: ECG = 1.7 ± 0.5, POX = 2.0 ± 0.5, p = 0.34).
Conclusions
The applicability of acoustic triggering for cardiac CINE imaging at 7.0 T was demonstrated. ACT's trigger reliability and fidelity are superior to that of ECG and POX. ACT promises to be beneficial for cardiovascular magnetic resonance at ultra-high field strengths including 7.0 T.
Purpose
To design and evaluate a four-channel cardiac transceiver coil array for functional cardiac imaging at 7T.
Materials and Methods
A four-element cardiac transceiver surface coil array was developed with two rectangular loops mounted on an anterior former and two rectangular loops on a posterior former. Specific absorption rate (SAR) simulations were performed, and a B₁⁺ calibration method was applied before acquiring 2D FLASH CINE (mSENSE, R = 2) images from nine healthy volunteers with a spatial resolution of up to 1 × 1 × 2.5 mm³.
Results
Tuning and matching were found to be better than 10 dB for all subjects. The decoupling (S21) was measured to be >18 dB between neighboring loops, >20 dB for opposite loops, and >30 dB for other loop combinations. SAR values were well within the limits provided by the IEC. Imaging provided clinically acceptable signal homogeneity with an excellent blood-myocardium contrast applying the B₁⁺ calibration approach.
Conclusion
A four-channel cardiac transceiver coil array for 7T was built, allowing for cardiac imaging with clinically acceptable signal homogeneity and an excellent blood-myocardium contrast. Minor anatomic structures, such as pericardium, mitral, and tricuspid valves and their apparatus, as well as trabeculae, were accurately delineated.
Objective
The purpose of this study is (i) to design a small and mobile Magnetic field ALert SEnsor (MALSE), (ii) to carefully evaluate its sensors with regard to their consistency of activation/deactivation and their sensitivity to magnetic fields, and (iii) to demonstrate the applicability of MALSE in 1.5 T, 3.0 T and 7.0 T MR fringe field environments.
Methods
MALSE comprises a set of reed sensors, which activate in response to their exposure to a magnetic field. The activation/deactivation of the reed sensors was examined by moving them in and out of the fringe field generated by a 7.0 T MR system.
Results
The consistency with which individual reed sensors would activate at the same field strength was found to be 100% for the setup used. All of the reed switches investigated required a substantial drop in ambient magnetic field strength before they deactivated.
Conclusions
MALSE is a simple concept for alerting MRI staff when a ferromagnetic object is brought into fringe magnetic fields that exceed MALSE's activation field strength. MALSE can easily be attached to ferromagnetic objects in the vicinity of a scanner, creating a barrier against hazardous situations caused by ferromagnetic parts that should not enter the vicinity of an MR system.
Spontaneous language has rarely been subjected to neuroimaging studies. This study therefore introduces a newly developed method for the analysis of linguistic phenomena observed in continuous language production during fMRI.
Most neuroimaging studies of language have so far focussed on single-word or, to a smaller extent, sentence processing, mostly for methodological reasons. Natural language production, however, is far more than the mere combination of words into larger units. The present study therefore aimed at relating brain activation to linguistic phenomena such as word-finding difficulties or syntactic completeness in a continuous-language fMRI paradigm. A picture description task with special constraints was used to provoke hesitation phenomena and speech errors. The transcribed speech sample was segmented into events of one second, and each event was assigned to one category of a complex schema especially developed for this purpose. The main results were as follows: conceptual planning engages bilateral activation of the precuneus. Successful lexical retrieval is accompanied – particularly in comparison to unresolved word-finding difficulties – by activation of the left middle and superior temporal gyri. Syntactic completeness is reflected in activation of the left inferior frontal gyrus (IFG, area 44). In sum, the method has proven useful for investigating the neural correlates of lexical and syntactic phenomena in an overt picture description task. This opens up new prospects for the analysis of spontaneous language production during fMRI.
Purpose:
To investigate the feasibility of using magnetohydrodynamic (MHD) effects for synchronization of magnetic resonance imaging (MRI) with the cardiac cycle.
Materials and Methods:
The MHD effect was scrutinized using a pulsatile flow phantom at B0 = 7.0 T. MHD effects were examined in vivo in healthy volunteers (n = 10) for B0 ranging from 0.05–7.0 T. Noncontrast-enhanced MR angiography (MRA) of the carotids was performed using a gated steady-state free-precession (SSFP) imaging technique in conjunction with electrocardiogram (ECG) and MHD synchronization.
Results:
The MHD potential correlates with flow velocities derived from phase contrast MRI. MHD voltages depend on the orientation between B0 and the flow of a conductive fluid. An increase in the interelectrode spacing along the flow increases the MHD potential. In vivo measurement of the MHD effect provides peak voltages of 1.5 mV for surface areas close to the common carotid artery at B0 = 7.0 T. Synchronization of MRI with the cardiac cycle using MHD triggering is feasible. MHD triggered MRA of the carotids at 3.0 T showed an overall image quality and richness of anatomic detail, which is comparable to ECG-triggered MRAs.
Conclusion:
This feasibility study demonstrates the use of MHD effects for synchronization of MR acquisitions with the cardiac cycle. J. Magn. Reson. Imaging 2012;36:364–372. © 2012 Wiley Periodicals, Inc.
Purpose
To design and evaluate a modular transceiver coil array with 32 independent channels for cardiac MRI at 7.0T.
Methods
The modular coil array comprises eight independent building blocks, each containing four transceiver loop elements. Numerical simulations were used for B1+ field homogenization and radiofrequency (RF) safety validation. RF characteristics were examined in a phantom study. The array's suitability for accelerated high spatial resolution two-dimensional (2D) FLASH CINE imaging of the heart was examined in a volunteer study.
Results
Transmission field adjustments and RF characteristics were found to be suitable for the volunteer study. The signal-to-noise ratio intrinsic to 7.0T, together with the coil performance, afforded a spatial resolution of 1.1 × 1.1 × 2.5 mm³ for 2D CINE FLASH MRI, which surpasses standardized CINE protocols used in clinical practice at 1.5T by a factor of 6. The 32-channel transceiver array supports one-dimensional acceleration factors of up to R = 4 without significantly impairing image quality.
Conclusion
The modular 32-channel transceiver cardiac array supports accelerated and high spatial resolution cardiac MRI. The array is compatible with multichannel transmission and provides a technological basis for future clinical assessment of parallel transmission techniques at 7.0T.
A magnetic resonance tomography (MRT) apparatus (1) for the examination of a body (14) comprises parameter acquisition devices (13) for the acquisition of cardiovascular parameters of the body (14) and a control device (15) in communication with the parameter acquisition devices (13) for synchronizing the imaging, wherein the control device (15) is adapted to analyse the data of at least two parameter acquisition devices (13) and to output a control signal based on the analysis.
In the context of increasing digitalization, the Internet of Things (IoT) is seen as a technological driver through which completely new business models can emerge from the interaction of different players. Identified key players include traditional industrial companies, municipalities and telecommunications companies. The latter, by providing connectivity, ensure that small devices with tiny batteries can be connected almost anywhere and directly to the Internet. There are already many IoT use cases on the market that simplify life for end users, such as Philips Hue Tap. In addition to business models based on connectivity, there is great potential for information-driven business models that can support or enhance existing business models. One example is the IoT use case Park and Joy, which uses sensors to connect parking spaces and inform drivers about available parking spaces in real time. Information-driven business models can be based on data generated in IoT use cases. For example, a telecommunications company can add value by deriving decision-relevant information – called insights – from data, which is used to increase decision agility. In addition, insights can be monetized. The monetization of insights can only be sustainable, however, if it is handled carefully and the relevant framework conditions are considered. In this chapter, the concept of information-driven business models is explained and illustrated with the concrete use case Park and Joy. In addition, benefits, risks and framework conditions are discussed.
This article addresses the need for an innovative technique in plasma shaping, utilizing antenna structures, Maxwell’s laws, and boundary conditions within a shielded environment. The motivation lies in exploring a novel approach to efficiently generate high-energy density plasma with potential applications across various fields. Implemented in an E01 circular cavity resonator, the proposed method involves the use of an impedance and field matching device with a coaxial connector and a specially optimized monopole antenna. This setup feeds a low-loss cavity resonator, resulting in a high-energy density air plasma with a surface temperature exceeding 3500 °C, achieved with a minimal power input of 80 W. The argon plasma, resembling the shape of a simple monopole antenna with modeled complex dielectric values, offers a more energy-efficient alternative compared to traditional, power-intensive plasma shaping methods. Simulations using a commercial electromagnetic (EM) solver validate the design’s effectiveness, while experimental validation underscores the method’s feasibility and practical implementation. Analyzing various parameters in an argon atmosphere, including hot S-parameters and plasma beam images, the results demonstrate the successful application of this technique, suggesting its potential in coating, furnace technology, fusion, and spectroscopy applications.
A novel method to determine the extruded length of a metallic wire for a directed energy deposition (DED) process using a microwave (MW) plasma jet with a straight-through wire feed is presented. The method is based on the relative comparison of the measured frequency response obtained by the large-signal scattering parameter (Hot-S) technique. In the practical working range, a repeatability of better than 6% for a non-active plasma and 9% for the active plasma state is found. Measurements are conducted with a focus on a simple solution to decrease the processing time and reduce the integration effort into the existing hardware. It is shown that monitoring a single frequency for magnitude and phase changes is sufficient to achieve good accuracy. A combination of different measurement values to determine the length is possible. The applicability to different diameters of the same material is shown, as well as contact detection between the wire and the metallic substrate.
Position sensor device (Positionssensorvorrichtung)
(2024)
The invention relates to a position sensor device for determining an absolute position of a movable first part relative to a stationary second part, with a code body coupled to the movable first part and designed to carry a code track with a plurality of code elements following one another in the track direction to form a code word, and with a magnetic detection device for detecting the code track. The detection device comprises, on the one hand, permanent magnets attached to the code body and arranged along the track direction with opposite polarity at a spacing chosen to match a predefined length of the respective code elements and, on the other hand, a number of Wiegand sensors arranged stationarily and offset transversely to the code body, wherein the distance of the Wiegand sensor from the plane of extension of the permanent magnets is chosen such that a Wiegand pulse is induced in the Wiegand sensor whenever a permanent magnet passes over it.
With regard to the climate targets of the Federal Republic of Germany, the Diggi Twin project focuses on sustainable building optimization. Digitalization and automation in the sense of a smart building form the basis for holistic building monitoring and optimization. The interdisciplinary project at FH Aachen aims to bring an existing university building and a new building up to climate-neutral standards. Within the project, established methods such as Building Information Modeling (BIM) are extended to create a digital building twin. This twin can be used to optimize building operation and serve as a basis for extending the Assessment System for Sustainable Building (Bewertungssystem Nachhaltiges Bauen, BNB). With the help of sensor technology and artificial intelligence, key building data can be monitored precisely in order to identify and exploit untapped energy-saving potential. The project investigates and applies methodological findings on BIM and digital building twins in a practice-oriented way by examining specific questions of building energy and resource efficiency and developing concrete solutions for building optimization.
In this paper, the use of reinforcement learning (RL) in control systems is investigated using a rotary inverted pendulum as an example. The control behavior of an RL controller is compared to that of traditional LQR and MPC controllers by evaluating their behavior under optimal conditions, their disturbance rejection, their robustness and their development process. All the investigated controllers are developed using MATLAB and the Simulink simulation environment and later deployed to a real pendulum model powered by a Raspberry Pi. The RL algorithm used is Proximal Policy Optimization (PPO). The LQR controller exhibits an easy development process, average to good control behavior and average to good robustness. A linear MPC controller shows excellent results under optimal operating conditions; however, when subjected to disturbances or deviations from the equilibrium point, it shows poor performance and sometimes unstable behavior. Employing a nonlinear MPC controller in real time was not possible due to the high computational effort involved. The RL controller exhibits by far the most versatile and robust control behavior. When operated in the simulation environment, it achieves high control accuracy. When deployed on the real system, however, it shows only average accuracy and a significantly greater performance loss relative to the simulation than the traditional controllers. With MATLAB, it is not yet possible to post-train the RL controller directly on the Raspberry Pi, which is an obstacle to the practical application of RL in a prototyping or teaching setting. Nevertheless, RL in general proves to be a flexible and powerful control method that is well suited for complex or nonlinear systems where traditional controllers struggle.
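To illustrate the LQR baseline mentioned above, the following sketch computes a discrete-time LQR gain by value iteration on the Riccati recursion. It uses a toy double-integrator plant, not the authors' pendulum model, and all weights are illustrative assumptions:

```python
import numpy as np

# Toy discrete-time double integrator, NOT the authors' pendulum model
dt = 0.01
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.diag([1.0, 0.1])   # state weights (position, velocity) - illustrative
R = np.array([[0.01]])    # input weight - illustrative

# Solve the discrete-time Riccati equation by value iteration,
# then read off the state-feedback gain K for u = -K x
P = Q.copy()
for _ in range(20000):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)

# The closed-loop dynamics x+ = (A - B K) x must be stable, i.e. all
# eigenvalues of (A - B K) lie strictly inside the unit circle
closed_loop_eigs = np.linalg.eigvals(A - B @ K)
stable = bool(np.all(np.abs(closed_loop_eigs) < 1.0))
```

In MATLAB the same gain would come from `dlqr(A, B, Q, R)`; the explicit iteration above just makes the underlying fixed-point computation visible.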
The invention relates to a device for determining a relative position between a stationary part and a movable part that can move relative to it in a direction of motion, wherein the stationary part is provided with a Wiegand sensor, wherein the Wiegand sensor is arranged between two permanent magnets of opposite polarity, and wherein the movable part has a plurality of magnetization bars of magnetically conductive material arranged at a distance from one another, which in the direction of motion have an extent at least as large as the permanent magnet. The spacing between adjacent magnetization bars is chosen such that, in a first relative position, a first permanent magnet is covered by one of the magnetization bars while a second permanent magnet is not covered by any of the magnetization bars.
In order to reduce the energy consumption of homes, it is important to make transparent which devices consume how much energy. However, power consumption is often only monitored in aggregate at the house energy meter. Disaggregating this power consumption into the contributions of individual devices can be achieved using machine learning. Our work aims at making state-of-the-art disaggregation algorithms accessible to users of the open-source home automation platform Home Assistant.
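The disaggregation idea can be sketched with the classic combinatorial-optimization baseline for non-intrusive load monitoring (after Hart's seminal NILM work): given known per-device draws, pick the on/off combination that best explains the aggregate reading. The device names and wattages below are hypothetical, and real disaggregators use learned models rather than this brute-force search:

```python
from itertools import product

# Hypothetical per-device power draws in watts for three on/off appliances
DEVICES = {"fridge": 120, "kettle": 2000, "tv": 80}

def disaggregate(total_watts):
    """Combinatorial-optimization disaggregation: choose the on/off state
    combination whose summed draw best matches the aggregate meter reading."""
    names = list(DEVICES)
    best_states, best_err = None, float("inf")
    for states in product((0, 1), repeat=len(names)):
        estimate = sum(s * DEVICES[n] for s, n in zip(states, names))
        error = abs(total_watts - estimate)
        if error < best_err:
            best_states, best_err = dict(zip(names, states)), error
    return best_states

# e.g. a 2110 W aggregate reading is best explained by fridge + kettle
```

The search is exponential in the number of devices, which is one reason modern approaches replace it with neural sequence models trained on labeled consumption data.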
In addition to technical content, modern university courses should also teach professional skills to prepare students for their future work. This competency-driven approach, covering technical as well as professional skills, makes it necessary to find a suitable way to integrate both into the corresponding module in a scalable and flexible manner. Agile development, for example, is essential for the development of modern systems and applications and relies on dedicated professional skills of the team members, such as structured group dynamics and communication, to enable fast and reliable development. This paper presents a flexible, easy-to-adopt approach for integrating Scrum, an agile development method, into the lab of an existing module. Owing to the different role models of Scrum, the students achieve individual learning success, gain valuable insight into modern system development, and strengthen their communication and organization skills. The approach is implemented and evaluated in the module Vehicle Systems, but it can easily be transferred to other technical courses as well. The evaluation of the implementation considers feedback from all stakeholders (students, supervisors and lecturers) and monitors observations made during the project lifetime.
A sensor device (10;110;210;310;410) for detecting a magnetic field is known, with a Wiegand sensor unit (12;112;212) comprising at least two Wiegand wires (20) and a coil arrangement (22;122;222) that radially encloses the at least two Wiegand wires (20) and forms a sensor element (26;126;226) and a trigger element (28;128;228) by which a trigger magnetic field can be generated. To enable a magnet-based sensor system (300;400) for detecting the movement of a movable object (301;401) that operates reliably and energy-efficiently without an external power supply and can be manufactured at low cost, the sensor device (10;110;210;310;410) according to the invention includes a Wiegand trigger unit (14;14a) comprising a Wiegand wire (30) and a trigger sensor coil (32) that radially encloses the Wiegand wire (30), wherein a first end of the trigger sensor coil (32) of the Wiegand trigger unit (14;14a) is electrically connected to a first end of the trigger element (28;128;228) of the Wiegand sensor unit (12;112;212), and a second end of the trigger sensor coil (32) of the Wiegand trigger unit (14;14a) is electrically connected to a second end of the trigger element (28;128;228) of the Wiegand sensor unit (12;112;212). In this way, a pulse generated in the trigger sensor coil (32) amplifies the total magnetic field acting on the Wiegand wires (20) in the sensor unit such that the trigger field strength of all Wiegand wires (20) is exceeded and they fire essentially simultaneously.
Achieving the 17 Sustainable Development Goals (SDGs) set by the United Nations (UN) in 2015 requires global collaboration between different stakeholders. Industry, and in particular engineers who shape industrial developments, have a special role to play as they are confronted with the responsibility to holistically reflect sustainability in industrial processes. This means that, in addition to the technical specifications, engineers must also question the effects of their own actions on an ecological, economic and social level in order to ensure sustainable action and contribute to the achievement of the SDGs. However, this requires competencies that enable engineers to apply all three pillars of sustainability to their own field of activity and to understand the global impact of industrial processes. In this context, it is relevant to understand how industry already reflects sustainability and to identify competences needed for sustainable development.
This paper introduces an inexpensive Wiegand-sensor-based rotary encoder that avoids rotating magnets and is suitable for electrical-drive applications. So far, Wiegand-sensor-based encoders usually include a magnetic pole wheel with rotating permanent magnets. These encoders combine the disadvantages of an increased magnet demand and a limited maximum speed due to the centripetal force acting on the rotating magnets. The proposed approach reduces the total demand for permanent magnets drastically. Moreover, the rotating part can be manufactured from a single piece of steel, which makes it very robust and cheap. This work presents the theoretical operating principle of the proposed approach and validates its benefits on a hardware prototype. The presented proof-of-concept prototype achieves a mechanical resolution of 4.5° by using only 4 permanent magnets, 2 Wiegand sensors and a rotating steel gear wheel with 20 teeth.
Due to the decarbonization of the energy sector, electric distribution grids are undergoing a major transformation, which is expected to increase the load on operating resources due to new electrical loads and distributed energy resources. Grid operators therefore need to gradually move to active grid management in order to ensure safe and reliable grid operation. However, this requires knowledge of key grid variables, such as node voltages, which is why the mass integration of measurement technology (smart meters) is necessary. A further problem is that a large part of the topology of the distribution grids is not sufficiently digitized and the available models are partly faulty, which means that active grid operation management today has to be carried out largely blind. Developing methods for determining unknown grid topologies from measurement data is therefore part of current research. In this paper, different clustering algorithms are presented and their performance in topology detection of low-voltage grids is compared. Furthermore, the influence of measurement uncertainties is investigated in the form of a sensitivity analysis.
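The intuition behind measurement-based topology detection is that nodes on the same feeder see strongly correlated voltage profiles. A minimal sketch with synthetic data (the feeder structure, noise levels and the 0.9 threshold are all illustrative assumptions, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n = 200  # number of voltage samples per node

# Synthetic voltage magnitudes (p.u.) for four nodes: nodes 0 and 1 sit on
# feeder A, nodes 2 and 3 on feeder B; each node adds small local noise
feeder_a = 1.0 + 0.01 * rng.normal(size=n)
feeder_b = 1.0 + 0.01 * rng.normal(size=n)
v = np.stack([feeder_a + 0.001 * rng.normal(size=n),
              feeder_a + 0.001 * rng.normal(size=n),
              feeder_b + 0.001 * rng.normal(size=n),
              feeder_b + 0.001 * rng.normal(size=n)])

# Nodes on the same feeder are strongly correlated, so thresholding the
# correlation matrix recovers the feeder grouping
corr = np.corrcoef(v)
same_feeder = corr > 0.9
```

Real clustering algorithms (k-means, hierarchical clustering, DBSCAN) generalize this idea by operating on such correlation or distance matrices instead of a fixed threshold, which is where measurement uncertainty starts to matter.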
Autonomous agents require rich environment models for fulfilling their missions. High-definition (HD) maps are a well-established map format which allows for representing semantic information besides the usual geometric information of the environment, for instance road shapes, road markings, traffic signs or barriers. The geometric resolution of HD maps can be as precise as centimetre level. In this paper, we report on our approach of using HD maps as a map representation for autonomous load-haul-dump vehicles in open-pit mining operations. As the mine undergoes constant change, we also need to constantly update the map. Therefore, we follow a lifelong mapping approach for updating the HD maps based on camera-based object detection and GPS data. We show our mapping algorithm based on the Lanelet2 map format and our integration with the navigation stack of the Robot Operating System. We present experimental results on our lifelong mapping approach from a real open-pit mine.
Software development projects often fail because of insufficient code quality. It is well documented that the task of testing software, for example, is perceived as uninteresting and rather boring, leading to poor software quality and major challenges for software development companies. One promising approach to increase the motivation for attending to software quality is the use of gamification. Initial research has already investigated the effects of gamification on software developers and has come to promising results. Nevertheless, there is a lack of results from field experiments, which motivates the chapter at hand. By conducting a gamification experiment with five student software projects and by interviewing the project members, the chapter provides insights into the changing programming behavior of information systems students when confronted with a leaderboard. The results reveal a motivational effect as well as a reduction of code smells.
This textbook and reference work clearly presents the fundamentals of RF engineering and gives concrete guidance for the design of linear components from discrete parts as well as transmission lines for high-speed and RF circuits. The reader learns how components are modeled and how circuits are synthesized and optimized. With the help of freely available simulation software, GHz circuits can be developed independently. Numerous exercises allow readers to check their own level of knowledge. Furthermore, complex nonlinear components such as RF mixers, oscillators and synthesizer generators are presented in terms of their functionality. The new mixed-mode scattering parameters, as well as the associated transmission-line and circuit techniques for applications in high-speed digital and modern RF engineering, are described in detail. Systems for the following areas are covered: scattering parameter measurement, various radio technologies, UHF RFID, and localization and positioning. The reader is thus enabled to develop complex GHz circuits, in particular with semiconductor, SMD and LTCC circuits.
Clinical assessment of newly developed sensors is important for ensuring their validity. Comparing recordings of emerging electrocardiography (ECG) systems to a reference ECG system requires accurate synchronization of data from both devices. Current methods can be inefficient and prone to errors. To address this issue, three algorithms are presented to synchronize two ECG time series from different recording systems: Binned R-peak Correlation, R-R Interval Correlation, and Average R-peak Distance. These algorithms reduce ECG data to their cyclic features, mitigating inefficiencies and minimizing discrepancies between different recording systems. We evaluate the performance of these algorithms using high-quality data and then assess their robustness after manipulating the R-peaks. Our results show that R-R Interval Correlation was the most efficient, whereas the Average R-peak Distance and Binned R-peak Correlation were more robust against noisy data.
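The R-R Interval Correlation idea, aligning two recordings via their beat-to-beat interval sequences rather than the raw signals, can be sketched in plain Python. This is a simplified illustration, not the authors' implementation; the lag search range and the final offset estimate are assumptions.

```python
def rr_intervals(r_peaks):
    """Successive differences between R-peak times (in seconds)."""
    return [b - a for a, b in zip(r_peaks, r_peaks[1:])]

def pearson(x, y):
    """Pearson correlation of the overlapping prefix of two sequences."""
    n = min(len(x), len(y))
    x, y = x[:n], y[:n]
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den if den else 0.0

def best_beat_lag(peaks_a, peaks_b, max_lag=5):
    """Beat offset between two R-peak series: the integer lag at which
    their R-R interval sequences correlate best."""
    ia, ib = rr_intervals(peaks_a), rr_intervals(peaks_b)
    return max(range(-max_lag, max_lag + 1),
               key=lambda k: pearson(ia[max(k, 0):], ib[max(-k, 0):]))
```

Once the beat lag k is known, the clock offset between the two devices follows from the difference of matched R-peak times, e.g. `peaks_b[0] - peaks_a[k]`.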
The problem of fair and privacy-preserving ordered set reconciliation arises in a variety of applications like auctions, e-voting, and appointment reconciliation. While several multi-party protocols have been proposed that solve this problem in the semi-honest model, no multi-party protocols secure in the malicious model have existed so far. In this paper, we close this gap. Our newly proposed protocols are shown to be secure in the malicious model based on a variety of novel non-interactive zero-knowledge proofs. We describe the implementation of our protocols and evaluate their performance in comparison to protocols solving the problem in the semi-honest model.
The RoboCup Logistics League (RCLL) is a robotics competition in a production logistics scenario in the context of a Smart Factory. In the competition, a team of three robots needs to assemble products to fulfill various orders that are requested online during the game. This year, the Carologistics team was able to win the competition with a new approach to multi-agent coordination as well as significant changes to the robot’s perception unit and a pragmatic network setup using the cellular network instead of WiFi. In this paper, we describe the major components of our approach with a focus on the changes compared to the last physical competition in 2019.
Due to the increasing complexity of software projects, software development is becoming more and more dependent on teams. The quality of this teamwork can vary depending on the team composition, as teams are always a combination of different skills and personality types. This paper aims to answer the question of how to describe a software development team and what influence the personality of the team members has on the team dynamics. For this purpose, a systematic literature review (n=48) and a literature search with the AI research assistant Elicit (n=20) were conducted. Result: A person’s personality significantly shapes his or her thinking and actions, which in turn influences his or her behavior in software development teams. It has been shown that team performance and satisfaction can be strongly influenced by personality. The quality of communication and the likelihood of conflict can also be attributed to personality.
This paper presents an approach for reducing the cognitive load of humans working in quality control (QC) for production processes that adhere to the 6σ methodology. While 100% QC requires every part to be inspected, this effort can be reduced when a human-in-the-loop QC process is supported by an anomaly detection system that presents for manual inspection only those parts that have a significant likelihood of being defective. This approach shows good results when applied to image-based QC for metal textile products.
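The selection step of such a human-in-the-loop QC process can be sketched as follows; the mean-plus-k-sigma calibration on known-good parts is a generic stand-in for an anomaly detector, not the method used in the paper.

```python
import statistics

def anomaly_threshold(good_scores, k=3.0):
    """Calibrate a decision threshold from anomaly scores of known-good
    reference parts (mean + k * std); a generic stand-in for the paper's
    detector calibration."""
    return statistics.mean(good_scores) + k * statistics.pstdev(good_scores)

def select_for_inspection(scores, threshold):
    """Indices of parts whose anomaly score exceeds the threshold: the
    only parts the human inspector still needs to look at."""
    return [i for i, s in enumerate(scores) if s > threshold]
```

Everything below the threshold passes automatically, which is where the reduction in cognitive load comes from.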
Digital forensics of smartphones is of utmost importance in many criminal cases. As modern smartphones store chats, photos, videos etc. that can be relevant for investigations, and as they can have storage capacities of hundreds of gigabytes, they are a primary target for forensic investigators. However, it is exactly this large amount of data that is causing problems: extracting and examining the data from multiple phones seized in the context of a case is taking more and more time. This bears the risk of wasting a lot of time on irrelevant phones while not enough time is left to analyze a phone that would be worth examining. Forensic triage can help here: such a triage is a preselection step based on a subset of data, performed before fully extracting all the data from the smartphone. Triage can accelerate subsequent investigations and is especially useful in cases where time is essential. The aim of this paper is to determine which and how much data from an Android smartphone can be made directly accessible to the forensic investigator without tedious investigations. For this purpose, an app has been developed that requires only extremely limited storage of data on the handset and outputs the extracted data immediately to the forensic workstation in a human- and machine-readable format.
KNX is a protocol for smart building automation, e.g., for automated heating, air conditioning, or lighting. This paper analyses and evaluates state-of-the-art KNX devices from the manufacturers Merten, Gira, and Siemens with respect to security. On the one hand, it is investigated whether publicly known vulnerabilities, like insecure storage of passwords in software, unencrypted communication, or denial-of-service attacks, can be reproduced in new devices. On the other hand, the security is analyzed in general, leading to the discovery of a previously unknown and high-risk vulnerability related to so-called BCU (authentication) keys.
Nowadays, smartphones are undoubtedly the most widely used devices for recording videos and capturing images. Our work investigates the application of source camera identification on mobile phones. We present a dataset entirely collected with mobile phones, containing both still images and videos from 67 different smartphones. Part of the images consists of photos of uniform backgrounds, collected especially for the computation of the RSPN. Identifying the source camera of a video is particularly challenging due to the strong video compression. The experiments reported in this paper show the large variation in performance when testing a highly accurate technique on still images and on videos.
Automated driving is now possible in diverse road and traffic conditions. However, there are still situations that automated vehicles cannot handle safely and efficiently. In this case, a Transition of Control (ToC) is necessary so that the driver takes over the driving task. Executing a ToC requires the driver to gain full situation awareness of the driving environment. If the driver fails to take back control within a limited time, a Minimum Risk Maneuver (MRM) is executed to bring the vehicle into a safe state (e.g., decelerating to a full stop). The execution of ToCs requires some time and can cause traffic disruption and safety risks, which increase if several vehicles execute ToCs/MRMs at similar times and in the same area. This study proposes novel C-ITS traffic management measures in which the infrastructure exploits V2X communications to assist Connected and Automated Vehicles (CAVs) in the execution of ToCs. The infrastructure can suggest a spatial distribution of ToCs and inform vehicles of the locations where they could execute a safe stop in case of an MRM. This paper reports the first field operational tests that validate the feasibility and quantify the benefits of the proposed infrastructure-assisted ToC and MRM management. The paper also presents the CAV and roadside infrastructure prototypes implemented and used in the trials. The conducted field trials demonstrate that infrastructure-assisted traffic management solutions can reduce safety risks and traffic disruptions.
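The core ToC/MRM decision logic described above can be condensed into a small sketch. This is illustrative only; the time budget, deceleration value, and function names are assumptions, not values from the study.

```python
def toc_outcome(takeover_time_s, time_budget_s):
    """After a ToC request, the driver either takes over within the time
    budget or a Minimum Risk Maneuver (MRM) is triggered."""
    if takeover_time_s is not None and takeover_time_s <= time_budget_s:
        return "driver_in_control"
    return "minimum_risk_maneuver"

def mrm_stop_distance(speed_mps, decel_mps2):
    """Distance to decelerate to a full stop, v^2 / (2 a): lets the
    infrastructure check whether a suggested safe-stop spot is reachable."""
    return speed_mps ** 2 / (2.0 * decel_mps2)
```

At 20 m/s and a comfortable 4 m/s² deceleration, for example, the vehicle needs 50 m of clear road ahead of the suggested stop location.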
Modern implementations of driver assistance systems are evolving from pure driver assistance to an independently acting automation system. Still, these systems do not cover the full vehicle usage range, also called the operational design domain, which requires the human driver as a fall-back mechanism. Transitions of control and potential minimum risk manoeuvres are current research topics and will bridge the gap until fully autonomous vehicles are available. The authors showed in a demonstration that transition-of-control mechanisms can be further improved through the use of communication technology. Receiving incident type and position information via standardised vehicle-to-everything (V2X) messages can improve driver safety and comfort. The connected and automated vehicle's software framework can use this information to plan areas where the driver should take back control by initiating a transition of control, which can be followed by a minimum risk manoeuvre in case of an unresponsive driver. This transition of control was implemented in a test vehicle and presented to the public during IEEE IV2022 (IEEE Intelligent Vehicles Symposium) in Aachen, Germany.
The work in modern open-pit and underground mines requires the transportation of large amounts of resources between fixed points. The navigation to these fixed points is a repetitive task that can be automated. The challenge in automating the navigation of vehicles commonly used in mines lies in the systemic properties of such vehicles. Many mining vehicles, such as the one we have used in the research for this paper, use steering systems with an articulated joint that bends the vehicle's drive axis to change its course, and a hydraulic drive system to actuate axial drive components or the movements of tippers, if available. To address the difficulties of controlling such a vehicle, we present a model-predictive approach. While control optimisation based on a parallel error minimisation of the predicted state has already been established in the past, we provide insight into the design and implementation of an MPC for an articulated mining vehicle and show the results of real-world experiments in an open-pit mine environment.
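The receding-horizon principle behind such a controller can be illustrated with a deliberately simplified model in which the articulation angle acts like a steering angle and a small grid search over candidate articulation rates replaces a real optimiser. Both simplifications are assumptions of this sketch, not the controller from the paper.

```python
import math

def simulate(x, y, th, gamma, v, gamma_rate, dt, L=3.0):
    """One step of a much-simplified articulated-vehicle model: the
    articulation angle gamma is treated like a bicycle-model steering
    angle (assumption), with wheelbase L."""
    x += v * math.cos(th) * dt
    y += v * math.sin(th) * dt
    th += v * math.tan(gamma) / L * dt
    gamma += gamma_rate * dt
    return x, y, th, gamma

def mpc_step(state, v, dt, horizon=10,
             candidates=(-0.2, -0.1, 0.0, 0.1, 0.2)):
    """Pick the articulation-angle rate minimising the predicted squared
    cross-track error to the reference path y = 0 over the horizon."""
    best_rate, best_cost = 0.0, float("inf")
    for rate in candidates:
        x, y, th, g = state
        cost = 0.0
        for _ in range(horizon):
            x, y, th, g = simulate(x, y, th, g, v, rate, dt)
            cost += y * y + 0.1 * th * th  # track error plus heading penalty
        if cost < best_cost:
            best_rate, best_cost = rate, cost
    return best_rate
```

A real MPC would solve a constrained optimisation at each step; the grid search merely shows the predict-evaluate-commit cycle.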
As of 1 January 2022, 618,460 electrically powered motor vehicles were registered in Germany. With a total of 48,540,878 registered motor vehicles, this corresponds to an electromobility share of roughly 1.2%. Currently, electric vehicles are connected to the power grid via charging stations or wall sockets and are usually charged at the full capacity of the connection until the vehicle's battery management system reduces the charging power depending on the battery's state of charge.
This paper addresses the pixel-based recognition of 3D objects with bidirectional associative memories. Computational power and memory requirements for this approach are identified and compared to the performance of current computer architectures by benchmarking different processors. It is shown that the performance of special-purpose hardware, like neurocomputers, is between one and two orders of magnitude higher than that of mainstream hardware. On the other hand, small neural networks are calculated more efficiently on mainstream processors. Based on these results a novel concept is developed, tailored to the efficient calculation of bidirectional associative memories. The computational efficiency is further enhanced by the application of algorithms and storage techniques that are matched to the characteristics of the application at hand.
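The recall step of a bidirectional associative memory, whose computational cost the paper analyses, can be sketched in plain Python. This is a textbook Hebbian BAM with invented toy patterns, not the networks benchmarked in the paper.

```python
def bam_train(pairs):
    """Hebbian weight matrix W[i][j] = sum over pairs of x[i] * y[j]
    for bipolar (+1/-1) pattern pairs."""
    n, m = len(pairs[0][0]), len(pairs[0][1])
    W = [[0] * m for _ in range(n)]
    for x, y in pairs:
        for i in range(n):
            for j in range(m):
                W[i][j] += x[i] * y[j]
    return W

def _sign(v):
    return 1 if v >= 0 else -1

def bam_recall(W, x, steps=5):
    """Bounce activations between the two layers until the pair settles."""
    n, m = len(W), len(W[0])
    y = [0] * m
    for _ in range(steps):
        y = [_sign(sum(W[i][j] * x[i] for i in range(n))) for j in range(m)]
        x = [_sign(sum(W[i][j] * y[j] for j in range(m))) for i in range(n)]
    return x, y
```

Each recall step is two matrix-vector products followed by thresholding, which is exactly the operation profile that makes such networks attractive targets for special-purpose hardware.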
This paper addresses the pixel-based classification of three-dimensional objects from arbitrary views. To perform this task, a coding strategy for pixel data, inspired by the biological model of human vision, is described. The coding strategy ensures that the input data is invariant against shift, scale, and rotation of the object in the input domain. The image data is used as input to a class of self-organizing neural networks, the Kohonen maps or self-organizing feature maps (SOFM). To verify this approach two test sets have been generated: the first set, consisting of artificially generated images, is used to examine the classification properties of the SOFMs; the second test set examines the clustering capabilities of the SOFM when real-world image data, preprocessed to be invariant against shift, scale, and rotation, is applied to the network. It is shown that the clustering capability of the SOFM is strongly dependent on the invariance coding of the images.
This paper describes the realization of a novel neurocomputer which is based on the concepts of a coprocessor. In contrast to existing neurocomputers the main interest was the realization of a scalable, flexible system, which is capable of computing neural networks of arbitrary topology and scale, with full independence of special hardware from the software's point of view. On the other hand, computational power should be added, whenever needed and flexibly adapted to the requirements of the application. Hardware independence is achieved by a run time system which is capable of using all available computing power, including multiple host CPUs and an arbitrary number of neural coprocessors autonomously. The realization of arbitrary neural topologies is provided through the implementation of the elementary operations which can be found in most neural topologies.
Aim of the AXON2 project (Adaptive Expert System for Object Recognition using Neural Networks) is the development of an object recognition system (ORS) capable of recognizing isolated 3D objects from arbitrary views. Commonly, classification is based on a single feature extracted from the original image. Here we present an architecture adapted from the Mixtures of Experts algorithm which uses multiple neural networks to integrate different features. During training each neural network specializes in a subset of objects or object views appropriate to the properties of the corresponding feature space. In recognition mode the system dynamically chooses the most relevant features and combines them with maximum efficiency. The remaining less relevant features are not computed and therefore do not decelerate the recognition process. Thus, the algorithm is well suited for real-time applications.
In the past, large system integration projects were usually based on custom developments for individual customers. Driven by cost pressure, however, the demand for standardized solutions that still take the individual requirements of the respective environment into account is growing. T-Systems GEI GmbH meets both demands with product kernels. Besides the technical aspects of kernel development, organizational aspects in particular play a role in developing kernels efficiently and at high quality without letting their functionality grow without bounds. T-Systems has implemented this concept for airport information systems, meeting airport operators' growing demand for an efficient and cost-effective software solution to support their business processes.
The success of a software development project, in particular a system integration project, is measured by meeting the "devil's triangle" of in-time, in-budget, and in-quality. This requires knowledge of the software and process quality, both to verify compliance with the quality criteria and to predict adherence to schedule and budget. For this purpose, a system of various key performance indicators was designed and implemented within T-Systems Systems Integration that accomplishes exactly this and fulfils the criteria for CMMI Level 3.
In this paper we report on CO2 Meter, a do-it-yourself carbon dioxide measuring device for the classroom. Part of the current measures for dealing with the SARS-CoV-2 pandemic is proper ventilation in indoor settings. This is especially important in schools, with students coming back to the classroom even at high incidence rates. Static ventilation patterns do not consider the individual situation of a particular class: influencing factors like the type of activity, the physical structure, or the room occupancy are not incorporated. Also, existing devices are rather expensive and often provide only limited information, only locally, and without any networking. This leaves the potential of analysing the situation across different settings untapped. The carbon dioxide level can be used as an indicator of air quality in general and of aerosol load in particular. Since, according to the latest findings, SARS-CoV-2 is transmitted primarily in the form of aerosols, carbon dioxide may be used as a proxy for the risk of a virus infection. Hence, schools could improve indoor air quality and potentially reduce the infection risk if they actually had measuring devices available in the classroom. Our device supports schools in ventilation and allows for collecting data over the Internet to enable detailed data analysis and model generation. First deployments in schools at different levels were received very positively. A pilot installation with larger-scale data collection and analysis is underway.
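The core of such a device's firmware is a mapping from CO2 readings to ventilation advice. The sketch below uses the common 1000/2000 ppm indoor-air guideline breakpoints; these values and the traffic-light scheme are assumptions for illustration, not the CO2 Meter project's actual logic.

```python
def ventilation_advice(co2_ppm):
    """Map a CO2 reading to a traffic-light ventilation hint.
    Breakpoints follow common indoor-air guidelines (assumed values):
    below 1000 ppm is fine, above 2000 ppm ventilation is overdue."""
    if co2_ppm < 1000:
        return "green"
    if co2_ppm < 2000:
        return "yellow"
    return "red"
```

Networked devices can report the same reading upstream, so the per-room advice and the cross-setting analysis mentioned in the abstract draw on identical data.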
Existing residential buildings have an average lifetime of 100 years. Many of these buildings will exist for at least another 50 years. To increase the efficiency of these buildings while keeping costs at reasonable rates, they can be retrofitted with sensors that deliver information to central control units for heating, ventilation, and electricity. This retrofitting process should happen with minimal intervention into existing infrastructure and requires new approaches for sensor design and data transmission. At FH Aachen University of Applied Sciences, students of different disciplines work together to learn how to design, build, deploy, and operate such sensors. The presented teaching project has already created a low-power design for a combined CO2, temperature, and humidity measurement device that can be easily integrated into most home automation systems.
With the growing interest in small distributed sensors for the "Internet of Things", more attention is being paid to energy harvesting technologies. Reducing or eliminating the need for external power sources or batteries makes devices more self-sufficient and more reliable, and reduces maintenance requirements. The Wiegand effect is a proven technology for harvesting small amounts of electrical power from mechanical motion.
In this study, the performance of an integrated body-imaging array for 7 T with 32 radiofrequency (RF) channels under consideration of local specific absorption rate (SAR), tissue temperature, and thermal dose limits was evaluated and the imaging performance was compared with a clinical 3 T body coil.
Thirty-two transmit elements were placed in three rings between the bore liner and RF shield of the gradient coil. Slice-selective RF pulse optimizations for B1 shimming and spokes were performed for differently oriented slices in the body under consideration of realistic constraints for power and local SAR. To improve the B1+ homogeneity, safety assessments based on temperature and thermal dose were performed to possibly allow for higher input power for the pulse optimization than permissible with SAR limits.
The results showed that using two spokes, the 7 T array outperformed the 3 T birdcage in all the considered regions of interest. However, a significantly higher SAR or lower duty cycle at 7 T is necessary in some cases to achieve similar B1+ homogeneity as at 3 T. The homogeneity in up to 50 cm-long coronal slices can particularly benefit from the high RF shim performance provided by the 32 RF channels. The thermal dose approach increases the allowable input power and the corresponding local SAR, in one example up to 100 W/kg, without limiting the exposure time necessary for an MR examination.
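Thermal dose is commonly quantified as cumulative equivalent minutes at 43 °C (CEM43) in the Sapareto-Dewey formulation; whether the study uses exactly this formulation is not stated in the abstract, so treat the sketch below as illustrative.

```python
def cem43(temps_c, dt_min):
    """Cumulative equivalent minutes at 43 degrees C:
    CEM43 = sum of R**(43 - T) * dt over the exposure, with the usual
    convention R = 0.5 for T >= 43 and R = 0.25 for T < 43."""
    dose = 0.0
    for t in temps_c:
        r = 0.5 if t >= 43.0 else 0.25
        dose += r ** (43.0 - t) * dt_min
    return dose
```

The metric makes the trade-off in the abstract concrete: allowing a higher input power raises tissue temperature, and the exposure time then has to fit within the CEM43 limit rather than a fixed SAR cap.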
In conclusion, the integrated antenna array at 7 T enables a clinical workflow for body imaging and comparable imaging performance to a conventional 3 T clinical body coil.
Carbon nanofiber nonwovens represent a powerful class of materials with prospective application in filtration technology or as electrodes with high surface area in batteries, fuel cells, and supercapacitors. While new precursor-to-carbon conversion processes have been explored to overcome productivity restrictions for carbon fiber tows, alternatives for the two-step thermal conversion of polyacrylonitrile precursors into carbon fiber nonwovens are absent. In this work, we develop a continuous roll-to-roll stabilization process using an atmospheric pressure microwave plasma jet. We explore the influence of various plasma-jet parameters on the morphology of the nonwoven and compare the stabilized nonwoven to thermally stabilized samples using scanning electron microscopy, differential scanning calorimetry, and infrared spectroscopy. We show that stabilization with a non-equilibrium plasma-jet can be twice as productive as the conventional thermal stabilization in a convection furnace, while producing electrodes of comparable electrochemical performance.
Benchmarking of various LiDAR sensors for use in self-driving vehicles in real-world environments
(2022)
In this paper, we report on our benchmark results for the LiDAR sensors Livox Horizon, Robosense M1, Blickfeld Cube, Blickfeld Cube Range, Velodyne Velarray H800, and Innoviz Pro. The idea was to test the sensors in different typical scenarios that were defined with real-world use cases in mind, in order to find a sensor that meets the requirements of self-driving vehicles. For this, we defined static and dynamic benchmark scenarios. In the static scenarios, neither the LiDAR sensor nor the detection target moves during the measurement. In the dynamic scenarios, the LiDAR sensor was mounted on a vehicle driving toward the detection target. We tested all mentioned LiDAR sensors in both scenario types, present the results regarding the detection accuracy of the targets, and discuss their usefulness for deployment in self-driving cars.
Wiegand-Modul
(2022)
A Wiegand module (110; 210; 310) is known, comprising a sensor coil (112; 212; 312), a first Wiegand wire (116a; 216a; 316a) arranged at least partially inside the sensor coil (112; 212; 312), and a second Wiegand wire (116b; 216b; 316b) arranged at least partially inside the sensor coil (112; 212; 312) and extending essentially parallel to the first Wiegand wire (116a; 216a; 316a). To allow efficient use of the electrical energy induced in the sensor coil (112; 212; 312) by the magnetic reversal of the Wiegand wires (116a, 116b; 216a, 216b; 316a, 316b), the first Wiegand wire (116a; 216a; 316a) and the second Wiegand wire (116b; 216b; 316b) are arranged offset from one another with respect to an axial direction of the sensor coil (112; 212; 312).
This paper describes the potential for developing a digital twin of society: a dynamic model that can be used to observe, analyze, and predict the evolution of various societal aspects. Such a digital twin can help governmental agencies and policy makers in interpreting trends, understanding challenges, and making decisions regarding investments or policies necessary to support societal development and ensure future prosperity. The paper reviews related work regarding the digital twin paradigm and its applications. The paper presents a motivating case study: an analysis of opportunities and challenges faced by the German federal employment agency, Bundesagentur für Arbeit (BA), proposes solutions using digital twins, and describes initial proofs of concept for such solutions.
This contribution presents an evaluation framework for smart services based on the concept of complete financial plans (VOFI). First, an IoT architecture for smart services is introduced, which provides the basis for considering them from a corporate-planning perspective. Building on this, an evaluation framework for the financial-plan-oriented profitability assessment of smart services is developed, with which the relevant payment flows are captured in a differentiated manner. Using the developed VOFI system, it is then shown how a risk analysis can take the uncertainty of model parameters into account.
Because of customer churn, strong competition, and operational inefficiencies, the telecommunications operator ME Telco (fictitious name due to confidentiality) launched a strategic transformation program that included a Business Process Management (BPM) project. Major problems were silo-oriented process management and missing cross-functional transparency. Process improvements were not consistently planned and aligned with corporate targets. Measurable inefficiencies were observed on an operational level, e.g., high lead times and reassignment rates of the incident management process.
Prozessorientierte Messung der Customer Experience am Beispiel der Telekommunikationsindustrie
(2018)
High competitive intensity and increased customer expectations require telecommunications companies to actively shape the customer experience (CX). An important aspect of this is CX measurement. Traditional satisfaction surveys are often insufficient to fully capture the customer experience in complex processes. This chapter therefore proposes a cross-process reference solution for CX measurement, using the telecommunications industry as an example. The starting point is an industry-specific process model based on the eTOM reference model. It is extended with measuring points that identify weak spots with respect to CX. For the detected weak spots, possible triggers are derived via a reference matrix and assessed on the basis of typical business-case volumes. This enables a direct assignment and success measurement of concrete countermeasures. The reference solution developed in this way was successfully implemented in the K1 project at Deutsche Telekom. Implementation details are presented as case studies.
Nutzen und Rahmenbedingungen informationsgetriebener Geschäftsmodelle des Internets der Dinge
(2018)
In the context of increasing digitalization, the Internet of Things (IoT) is regarded as a technological driver through which completely new business models can emerge in the interplay of different actors. Identified key actors include traditional industrial companies, municipalities, and telecommunications companies. The latter provide connectivity, ensuring that small devices with tiny batteries can be connected directly to the Internet almost anywhere. Many IoT use cases that simplify life for end customers are already on the market, such as Philips Hue Tap. Besides connectivity-based business models, there is great potential for information-driven business models that can support and evolve existing business models. One example is the IoT use case Park and Joy of Deutsche Telekom AG, in which parking spaces are networked with sensors and drivers are informed in real time about available parking spots. Information-driven business models can build on data generated in IoT use cases. For example, a telecommunications company can create added value by deriving decision-relevant information, so-called insights, from data, which is used to increase decision-making agility. In addition, insights can be monetized. The monetization of insights can only take place sustainably if it is handled with care and the relevant framework conditions are taken into account. This chapter explains the concept of information-driven business models and illustrates it with the concrete use case Park and Joy. Furthermore, benefits, risks, and framework conditions are discussed.
In the course of the digital transformation, innovative technology concepts such as the Internet of Things and cloud computing are regarded as drivers of far-reaching changes in organizations and business models. In this context, Robotic Process Automation (RPA) is a novel approach to process automation in which manual activities are learned and then executed automatically by so-called software robots. Software robots emulate inputs on the existing presentation layer, so no changes to existing application systems are necessary. The innovative idea is to transform existing process execution from manual to digital, which distinguishes RPA from traditional Business Process Management (BPM) approaches, in which, for example, process-driven adaptations at the business-logic level are necessary. Various RPA solutions are already offered on the market as software products. Good results from RPA are documented especially for operational processes with repetitive processing steps across different application systems, such as the automation of 35% of the back-office processes at Telefonica. Due to the comparatively low implementation effort combined with high automation potential, there is strong interest in RPA in practice (e.g., banking, telecommunications, energy supply). This contribution discusses RPA as an innovative approach to process digitalization and gives concrete recommendations for practice. To this end, a distinction is made between model-driven and self-learning approaches. On the basis of general architectures of RPA systems, application scenarios and their automation potential, but also their limitations, are discussed. A structured market overview of selected RPA products follows. Three concrete application examples illustrate the use of RPA in practice.
Due to the high number of customer contacts, fault clearances, installations, and product provisioning per year, the automation level of operational processes has a significant impact on financial results, quality, and customer experience. Therefore, the telecommunications operator Deutsche Telekom (DT) has defined a digital strategy with the objectives of zero complexity and zero complaint, one touch, agility in service, and disruptive thinking. In this context, Robotic Process Automation (RPA) was identified as an enabling technology to formulate and realize DT’s digital strategy through automation of rule-based, routine, and predictable tasks in combination with structured and stable data.
Information technologies, such as big data analytics, cloud computing, cyber-physical systems, robotic process automation, and the Internet of Things, provide a sustainable impetus for the structural development of business sectors as well as the digitalization of markets, enterprises, and processes. Within the consulting industry, the proliferation of these technologies opened up the new segment of digital transformation, which focuses on setting up, controlling, and implementing projects for enterprises from a broad range of sectors. These recent developments raise the question of which requirements evolve for IT consultants as important success factors of those digital transformation projects. Therefore, this empirical contribution provides indications regarding the qualifications and competences necessary for IT consultants in the era of digital transformation from a labor market perspective. On the one hand, this knowledge base is interesting for the academic education of consultants, since it supports a market-oriented design of adequate training measures. On the other hand, insights into the competence requirements for consultants are considered relevant for skill and talent management processes in consulting practice. Assuming that consulting companies pursue a strategic human resource management approach, labor market information may also be useful to discover strategic behavioral patterns.
In the discussion on the digitalization of research, the question of optimal IT support for researchers plays an important role. Today, researchers can draw on a broad range of internal IT services at their universities and research institutions, including cooperative IT services provided jointly by several institutions. Outside their own organization and its wider network, a broad external range of innovative, often free online services has also developed on the Internet. In addition to horizontal online services aimed in principle at every Internet user (e.g., Dropbox, Twitter, WhatsApp), the number of vertical services for scientific and research purposes keeps growing (e.g., Google Scholar, ResearchGate, figshare). This opens up a variety of new opportunities for researchers to improve their individual research processes with digital tools. Due to legal, technical, and personnel restrictions, however, internal service providers can offer little support in identifying, selecting, and using external online services. From a service-oriented perspective, researchers increasingly face the problem of how to integrate heterogeneous IT services from internal and external providers into their own research processes. As a solution approach, the chapter outlines the concept of a personal research information system designed along the lines of a digital service system.
Recently, novel AI-based services have emerged in the consumer market. AI-based services can affect the way consumers take commercial decisions. Research on the influence of AI on commercial interactions is in its infancy. In this chapter, a framework creating a first overview of the influence of AI on commercial interactions is introduced. This framework summarizes the findings of comparing numerous customer journeys of novel AI-based services with corresponding non-AI equivalents.
Einfluss von Künstlicher Intelligenz auf Customer Journeys am Beispiel von intelligentem Parken
(2021)
New applications of artificial intelligence (AI) are increasingly emerging in the consumer market. More and more devices and services that communicate autonomously over the Internet are entering the market as well. These devices and services can thus be enhanced with novel AI-based services. Such services can influence the way customers make commercial decisions and thereby substantially change the customer experience. The influence of AI on commercial interactions has not yet been examined comprehensively. Based on a framework that gives a first overview of the effects of AI on commercial interactions, this chapter analyzes the influence of AI on customer journeys using the concrete use case of intelligent parking. The resulting insights can serve practitioners as a basis for understanding the potential of AI and applying it in the design of their own customer journeys.
Intelligent autonomous software robots that replace human activities and perform administrative processes are a reality in today's corporate world. This includes, for example, decisions about invoice payments, the identification of customers for a marketing campaign, and the answering of customer complaints. What happens if such a software robot causes damage? Due to the complete absence of human activity, the question is not trivial. It could even happen that no one is liable for damage towards a third party, which could create an incalculable legal risk for business partners. Furthermore, the implementation and operation of such software robots involves various stakeholders, which makes identifying the originator of a damage a nearly unsolvable endeavor. Overall, all involved parties are well advised to consider the legal situation carefully. This chapter discusses the liability of software robots from an interdisciplinary perspective. Based on different technical scenarios, the legal aspects of liability are discussed.
The benefits of robotic process automation (RPA) are closely tied to the use of commercial off-the-shelf (COTS) software products that can be easily implemented and customized by business units. But how can the best-fitting RPA product for a specific situation, one that creates the expected benefits, be found? This question belongs to the general area of software evaluation and selection. With more than 75 RPA products currently on the market, guidance that considers these specifics is required. Therefore, this chapter proposes a criteria-based selection method specifically for RPA. The method includes a quantitative evaluation of costs and benefits as well as a qualitative utility analysis based on functional criteria. Using the visualization of financial implications (VOFI) method, an application-oriented structure is provided that contrasts the total cost of ownership (TCO) with the time savings times salary (TSTS). For the utility analysis, a detailed list of functional criteria for RPA is offered. The whole method is based on a multi-vocal review of scientific and non-scholarly literature, including publications by business practitioners, consultants, and vendors. The application of the method is illustrated by a concrete RPA example. The illustrated structures, templates, and criteria can be directly utilized by practitioners in their real-life RPA implementations. In addition, a normative decision process for selecting RPA alternatives is proposed before the chapter closes with a discussion and outlook.
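The quantitative core of such a selection method, contrasting the total cost of ownership with the time savings times salary, can be sketched in a few lines. All figures, field names, and cost components below are hypothetical illustrations, not values from the chapter's VOFI structure.

```python
from dataclasses import dataclass

@dataclass
class RpaBusinessCase:
    """Hypothetical inputs for a TCO-versus-TSTS comparison (all values are assumptions)."""
    implementation_cost: float       # one-off setup (TCO component)
    license_cost_per_year: float     # product licenses (TCO component)
    operations_cost_per_year: float  # maintenance and support (TCO component)
    hours_saved_per_year: float      # basis for TSTS
    hourly_salary: float             # basis for TSTS

    def tco(self, years: int) -> float:
        # total cost of ownership over the planning horizon
        return self.implementation_cost + years * (
            self.license_cost_per_year + self.operations_cost_per_year
        )

    def tsts(self, years: int) -> float:
        # time savings times salary over the planning horizon
        return years * self.hours_saved_per_year * self.hourly_salary

    def net_benefit(self, years: int) -> float:
        # positive values indicate that the monetized time savings exceed the costs
        return self.tsts(years) - self.tco(years)

case = RpaBusinessCase(50_000, 20_000, 10_000, 2_000, 40.0)
print(round(case.net_benefit(3)))  # 100000
```

A full VOFI would additionally spread these flows over periods and account for financing; the sketch only shows the basic cost-benefit contrast.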
Robotic process automation (RPA) has attracted increasing attention in research and practice. This chapter positions, structures, and frames the topic as an introduction to this book. RPA is understood as a broad concept that comprises a variety of concrete solutions. From a management perspective RPA offers an innovative approach for realizing automation potentials, whereas from a technical perspective the implementation based on software products and the impact of artificial intelligence (AI) and machine learning (ML) are relevant. RPA is industry-independent and can be used, for example, in finance, telecommunications, and the public sector. With respect to RPA this chapter discusses definitions, related approaches, a structuring framework, a research framework, and an inside as well as outside architectural view. Furthermore, it provides an overview of the book combined with short summaries of each chapter.
The subject of this case is Deutsche Telekom Services Europe (DTSE), a service center for administrative processes. Due to the high volume of repetitive tasks (e.g., 100,000 manual uploads of offer documents into SAP per year), automation was identified as an important strategic target with high management attention and commitment. DTSE has to work with various backend application systems without any possibility of changing those systems. Furthermore, the complexity of the administrative processes differed. When it comes to transferring unstructured data (e.g., offer documents) into structured data (e.g., MS Excel files), additional cognitive technologies were needed.
Companies are generally convinced that they put their customers' needs first. But in direct interaction with customers, they often show weaknesses. The following article illustrates how a consistent alignment of value creation processes with central customer needs can achieve a threefold effect: lastingly increased customer satisfaction, improved efficiency, and differentiation from the competition.
Customer requirements for networks have changed considerably in recent years. With NFV and SDN, companies are technically able to meet them. However, providers face major challenges: in particular, products and processes must be adapted and become more agile in order to turn the strengths of NFV and SDN into customer advantages.
Conducting a systematic literature review is a core competence of scholarly work and is therefore a fixed component of bachelor's and master's degree programs. In the corresponding courses, students are familiarized with the basic tools for searching and managing literature, but the potential of text-analytical methods and application systems (text mining, text analytics) is usually not covered. As a consequence, the data competences required for the system-supported analysis and exploration of literature data are not sufficiently developed. To address this competence gap, a course aimed in particular at students of business-related degree programs has been designed and implemented in a project-oriented manner at the Hochschule Osnabrück. This article documents the subject-related and technical design of this course and highlights potential for its future development.
The telecommunications industry has undergone enormous change in recent decades. For telecommunications companies, this requires fundamental restructuring of strategy, processes, application systems, and network technologies. Enterprise architectures and reference models play an important role in this. Recognized reference models do exist in practice, but how should they be designed for a systematic transformation? What does a concrete solution for the telecommunications industry look like?
In answer, Christian Czarnecki presents a reference-model-based enterprise architecture in his book. Based on an extensive study of transformation projects, problems and requirements from practice are identified, for which a solution proposal is developed and evaluated using methods of enterprise transformation, reference modeling, and enterprise architecture. Among other things, it consists of detailed use cases, reference process flows, a mapping of processes to application systems, and recommendations for virtualization.
For researchers and students of information systems, the book presents new insights into application-oriented reference modeling. For practitioners, it provides a methodically sound solution for the current transformation needs of the telecommunications industry. Christian Czarnecki has worked as a management consultant since 2004 and has supported many telecommunications companies in their transformation. In 2013, he received his doctorate in engineering from the Otto von Guericke University Magdeburg.
This book reflects the tremendous changes in the telecommunications industry in the course of the past few decades – shorter innovation cycles, stiffer competition and new communication products. It analyzes the transformation of processes, applications and network technologies that are now expected to take place under enormous time pressure. The International Telecommunication Union (ITU) and the TM Forum have provided reference solutions that are broadly recognized and used throughout the value chain of the telecommunications industry, and which can be considered the de facto standard. The book describes how these reference solutions can be used in a practical context: it presents the latest insights into their development, highlights lessons learned from numerous international projects and combines them with well-founded research results in enterprise architecture management and reference modeling. The complete architectural transformation is explained, from the planning and set-up stage to the implementation. Featuring a wealth of examples and illustrations, the book offers a valuable resource for telecommunication professionals, enterprise architects and project managers alike.
How does the implementation of a next generation network influence a telecommunication company?
(2009)
As the potential of a Next Generation Network (NGN) is recognized, telecommunication companies consider switching to it. Although the implementation of an NGN seems to be merely a modification of the network infrastructure, it may trigger or require changes in the whole company and even influence the company strategy. To capture the effects of NGN, we propose a framework based on concepts of business engineering and technical recommendations for the introduction of NGN technology. The specific design of solutions for the layers "Strategy", "Processes", and "Information Systems", as well as their interdependencies, is an essential characteristic of the developed framework. We have performed a case study on NGN implementation and observed that all layers captured by our framework are influenced by the introduction of an NGN.
The fragmentation of value chains creates new challenges for the management of customer relationships. This dissertation examines the resulting requirements for an overarching integration of customer relationship management in the telecommunications industry. The goal is to design an overarching solution by applying the methods of an enterprise architecture framework. The underlying premise is that an overarching design of customer relationship management is beneficial for all companies involved in the value chain.
Market changes have forced telecommunication companies to transform their business. Increased competition, short innovation cycles, changed usage patterns, increased customer expectations, and cost reduction are the main drivers. Our objective is to analyze to what extent transformation projects have improved orientation towards end-customers. Therefore, we selected 38 real-life case studies dealing with customer orientation. Our analysis is based on a telecommunication-specific framework that aligns strategy, business processes, and information systems. The analysis shows the following: transformation projects that aim to improve customer orientation are combined with clear goals on the costs and revenue of the enterprise. These projects are usually directly linked to the customer touch points, but also to the development and provisioning of products. Furthermore, the analysis shows that customer orientation is not the sole trigger for transformation. There is no one-size-fits-all solution; rather, improved customer orientation needs aligned changes of business processes as well as information systems related to different parts of the company.
The changes in the telecommunications market have led to a large number of transformation projects in practice. But what belongs to a "transformation project", and which processes and systems are changed? To answer this question, we analyzed 184 reports on projects that were labeled "transformation projects". For the analysis, we designed a coding frame and used it to group the reports into topics with a hierarchical clustering procedure. The results provide indications of the focal points and priorities set in practice. They can thus support companies that are planning a transformation project. They also indicate in which areas of a company support through scientifically proven tools and models is needed.
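The grouping step, coding each report against a coding frame and clustering the coded vectors hierarchically, can be sketched as follows. The categories and report vectors are invented for illustration, and SciPy's standard hierarchical-clustering routines stand in for whatever tooling the study actually used.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical coding frame: each report becomes a binary vector indicating
# which categories of the frame it addresses.
categories = ["processes", "systems", "strategy", "network", "customer"]
reports = np.array([
    [1, 1, 0, 0, 0],  # report A: processes + systems
    [1, 1, 0, 0, 1],  # report B: processes + systems + customer
    [0, 0, 1, 1, 0],  # report C: strategy + network
    [0, 0, 1, 1, 0],  # report D: strategy + network
])

# Agglomerative clustering on the coded vectors; Hamming distance suits
# binary codings, average linkage merges by mean inter-cluster distance.
Z = linkage(reports, method="average", metric="hamming")

# Cut the dendrogram into two topic groups.
topics = fcluster(Z, t=2, criterion="maxclust")
print(topics)  # reports with similar codings receive the same topic label
```

With these vectors, reports A and B end up in one topic group and C and D in the other; on real data, the number of clusters would be chosen by inspecting the dendrogram rather than fixed in advance.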
Development of a subject-oriented reference process model for the telecommunications industry
(2016)
Generally, reference models can be structured top-down or bottom-up. The practical need for agile change and flexible organizational implementation requires a consistent mapping to an operational level. In this context, well-established reference process models are typically structured top-down. Subject-oriented Business Process Management (sBPM) offers a modeling concept that is structured bottom-up and concentrates on the process actors at an operational level. This paper applies sBPM to the enhanced Telecom Operations Map (eTOM), a well-accepted reference process model in the telecommunications industry. The resulting design artifact is a concrete example of combining a bottom-up and a top-down developed reference model. The results are evaluated and confirmed in a practical context through the involvement of the industry body TM Forum.
To support the transformation needs of telecommunications companies, the reference models of the TM Forum are recognized worldwide in practice. Mostly, however, they are used in isolation for specific individual topics. This article therefore consolidates the existing content into an industry-specific, overarching reference architecture. The focus is on the layers of organizational structure, processes, applications, and data. In addition, content-related architecture domains are offered for structuring. The reference architecture is hierarchical and is described here by way of example for selected, aggregated content. As a first evaluation, the application of the reference architecture in three practical projects is explained.
The telecommunications industry is currently going through a major transformation. In this context, the enhanced Telecom Operations Map (eTOM) is a domain-specific process reference model offered by the industry organization TM Forum. In practice, eTOM is well accepted and confirmed as a de facto standard. It provides process definitions and process flows on different levels of detail. This article discusses the reference modeling of eTOM, i.e., the design, the resulting artifact, and its evaluation based on three project cases. The application of eTOM in three projects illustrates the design approach and concrete models on strategic and operational levels. The article follows the Design Science Research (DSR) paradigm. It contributes concrete design artifacts to the transformational needs of the telecommunications industry and offers lessons learned from a general DSR perspective.
The primary goals of the Internet of Things (IoT) are controlling physical objects from a distance and capturing information from the environment of these objects. To this end, hardware components are integrated into everyday objects and the environment. With the help of information and communication technologies, this gives rise to the Internet of Things. A year ago, Narrowband Internet of Things (NB-IoT) introduced a technology that makes it possible to connect hardware components energy-efficiently and directly via the mobile network. Objects thereby become capable of communicating autonomously over long ranges. With NB-IoT, the potential benefits of the IoT increase, since a growing number of interconnected objects and the exchange of larger data volumes become feasible. From an economic perspective, this enables new, innovative IoT use cases that are already being discussed in practice. Based on a concrete use case, this article examines which new business and partner models emerge from the combined use of NB-IoT data and big data technologies, and which qualitative added value is created for the stakeholders involved in a use case. For this purpose, following a design-oriented research approach, an evaluation framework for the qualitative value creation analysis of NB-IoT is developed, based, among other things, on the template by Cockburn and the Business Model Canvas. Using this evaluation framework, a use case is examined that is based, in anonymized form, on concrete practical projects. Specifically, a use case is considered that proposes a bike sharing 2.0 based on the use of NB-IoT. The results yield insights, for example, into how business models …
Critical infrastructures are primary targets of criminal hackers. On July 25, 2015, the German Bundestag responded with a law to improve the security of IT systems, the IT Security Act (IT-Sicherheitsgesetz). It requires operators of critical infrastructures to implement appropriate minimum standards for organizational and technical security in order to guarantee the operation and availability of this infrastructure. Telecommunications companies are particularly affected by this law and, at the same time, possess with the enhanced Telecom Operations Map (eTOM) an internationally recognized reference model for designing business processes in this industry. Since all telecommunications companies in Germany are obliged to implement the law within a certain time frame, this article presents a proposal for extending eTOM with the relevant requirements of the German IT Security Act.
Many industries are currently experiencing comprehensive changes to markets and value chains, often referred to as the digital transformation. In this context, the Internet of Things (IoT) is regarded as an important technical enabler of these changes. The primary goals of the IoT are controlling physical objects from a distance and capturing information from the environment of these objects. Which new business and partner models emerge from the combined use of IoT data and big data technologies, and which qualitative added value is created by them? As an answer, this article proposes an evaluation framework for the qualitative value creation analysis of the IoT. Using this evaluation framework, a use case is examined that is based, in anonymized form, on concrete practical projects. Specifically, a use case is considered that proposes a waste management 2.0 based on the use of the IoT. The results yield insights, for example, into how business models can be designed on the basis of a free-of-charge exchange of information through the IoT.
The potential of electronic markets in enabling innovative product bundles through flexible and sustainable partnerships is not yet fully exploited in the telecommunication industry. One reason is that bundling requires seamless de-assembling and re-assembling of business processes, whilst processes in telecommunication companies are often product-dependent and hard to virtualize. We propose a framework for the planning of the virtualization of processes, intended to assist the decision maker in prioritizing the processes to be virtualized: (a) we transfer the virtualization pre-requisites stated by the Process Virtualization Theory in the context of customer-oriented processes in the telecommunication industry and assess their importance in this context, (b) we derive IT-oriented requirements for the removal of virtualization barriers and highlight their demand on changes at different levels of the organization. We present a first evaluation of our approach in a case study and report on lessons learned and further steps to be performed.
As the potential of a next generation network (NGN) is recognised, telecommunication companies consider switching to it. Although the implementation of an NGN seems to be merely a modification of the network infrastructure, it may trigger or require changes in the whole company, because it builds upon the separation between service and transport, a flexible bundling of services to products and the streamlining of the IT infrastructure. We propose a holistic framework, structured into the layers ‘strategy’, ‘processes’ and ‘information systems’ and incorporate into each layer all concepts necessary for the implementation of an NGN, as well as the alignment of these concepts. As a first proof-of-concept for our framework we have performed a case study on the introduction of NGN in a large telecommunication company; we show that our framework captures all topics that are affected by an NGN implementation.
The telecommunications market is undergoing substantial change. New business models, innovative services, and technologies require reengineering, transformation, and process standardization. With the enhanced Telecom Operations Map (eTOM), the TM Forum offers an internationally recognized de facto reference process framework based on the specific requirements and characteristics of the telecommunications industry. However, this reference framework contains only a hierarchical collection of processes at different levels of abstraction. A control view, understood as the sequential ordering of activities and hence a real process flow, is missing, as is an end-to-end view of the customer. In this article, we extend the eTOM reference model with reference process flows in which we abstract and generalize knowledge about processes in telecommunications companies. The reference process flows support companies in the structured and transparent (re)design of their processes. We demonstrate the applicability and usefulness of our reference process flows in two case studies and evaluate them against criteria for the assessment of reference models. The reference process flows were adopted by the TM Forum and published as part of eTOM version 9. Furthermore, we discuss the components of our approach that can also be applied outside the telecommunications industry.
The continuing growth of scientific publications raises the question of how literature analyses within research processes can be digitalized and thus realized more productively. Especially in information technology fields, research practice is characterized by a rapidly growing volume of publications. This makes it attractive to use text analytics methods, which can automatically prepare and process text data. Insights arise from analyses of word classes and subgroups, correlation analyses, and time series analyses. This article presents the design and implementation of a prototype with which users can explore bibliographic data from the established literature database EBSCO Discovery Service using text-analytical methods. The prototype is based on the analysis system IBM Watson Explorer, which is available to universities free of license costs. Potential addressees of the prototype are research institutions, consulting firms, and decision-makers in politics and business practice.
Within digitalization, the increasing automation of previously manual process steps is one aspect that will massively affect the future world of work. In this context, high expectations are attached to the use of software robots for process automation. Among the implementation approaches, the current discussion is shaped in particular by Robotic Process Automation (RPA) and chatbots. Both approaches pursue the common goal of a 1:1 automation of human actions and thus of directly replacing employees with machines. With RPA, processes are learned by software robots and executed automatically. RPA robots emulate the inputs on the existing presentation layer, so no changes to existing application systems are necessary. Various RPA solutions are already offered on the market as software products. Chatbots realize the inputs and outputs of application systems via natural language. This makes it possible to automate communication with external parties (e.g., customers) as well as internal assistance tasks. The article discusses the effects of software robots on the world of work based on application examples and explains the company-specific decision on the use of software robots in terms of effectiveness and efficiency goals.
Given the continuing growth of scientific publications, instruments are needed to make literature analyses more productive through digitalization. This article presents an approach that explores bibliographic data from the literature database EBSCO Discovery Service using text analytics methods. The solution is based on the text analysis system IBM Watson Explorer and is suited to exploratory literature analyses, for example to reflect the status quo of emerging technology fields in the literature. The generated results fit into the context of the increasing tool support for the literature search process and can be used for intra- and inter-institutional knowledge transfer processes in research and consulting contexts.
Using the telecommunications industry as an example, this article shows a concrete form of application-oriented research that generates benefits and insights for practice as well as for academia. The research object is the reference models of the industry body TM Forum, which many telecommunications companies use to transform their structures and systems. The article describes many years of research on the further development and application of these reference models, following a consistently design-oriented research approach. The interplay of continuous further development in cooperation with an industry body and application in a wide variety of practical projects leads to a successful symbiosis of practical benefit generation and scientific insight. The article presents the chosen research approach with concrete examples. On this basis, recommendations and challenges for design- and practice-oriented research are discussed.
The continuing growth of scientific publications raises the question of how research processes can be digitalized and thus realized more productively. Especially in information technology fields, research practice is characterized by a rapidly growing volume of publications. Various information systems exist for the search process; the analysis of the published content, however, is still a largely manual task. Therefore, we propose a text analytics system that allows a fully digitalized analysis of literature sources. We have realized a prototype using EBSCO Discovery Service in combination with IBM Watson Explorer and demonstrated the results in real-life research projects. Potential addressees are research institutions, consulting firms, and decision-makers in politics and business practice.
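The kind of analysis such a system automates, e.g., tracking how often a term appears in publication titles per year, can be illustrated with a minimal sketch. The record structure and entries below are invented; the sketch does not reproduce the EBSCO Discovery Service or IBM Watson Explorer pipeline.

```python
from collections import Counter

# Hypothetical bibliographic records (title, year) as a discovery service
# might export them; the field names are illustrative only.
records = [
    {"title": "Robotic process automation in banking", "year": 2017},
    {"title": "Machine learning for process mining", "year": 2018},
    {"title": "Robotic process automation and AI", "year": 2019},
    {"title": "Cloud computing cost models", "year": 2018},
]

def term_frequency_by_year(records, term):
    """Count how often a term appears in titles, grouped by publication year."""
    counts = Counter()
    for rec in records:
        if term.lower() in rec["title"].lower():
            counts[rec["year"]] += 1
    return dict(counts)

print(term_frequency_by_year(records, "robotic process automation"))
# {2017: 1, 2019: 1}
```

A real system would additionally normalize terms linguistically (stemming, synonyms) and work on abstracts or full texts rather than titles alone.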