Robotic process automation (RPA) has attracted increasing attention in research and practice. This chapter positions, structures, and frames the topic as an introduction to this book. RPA is understood as a broad concept that comprises a variety of concrete solutions. From a management perspective, RPA offers an innovative approach for realizing automation potentials; from a technical perspective, the implementation based on software products and the impact of artificial intelligence (AI) and machine learning (ML) are relevant. RPA is industry-independent and can be used, for example, in finance, telecommunications, and the public sector. With respect to RPA, this chapter discusses definitions, related approaches, a structuring framework, a research framework, and an inside as well as an outside architectural view. Furthermore, it provides an overview of the book combined with short summaries of each chapter.
The subject of this case study is Deutsche Telekom Services Europe (DTSE), a service center for administrative processes. Due to the high volume of repetitive tasks (e.g., 100k manual uploads of offer documents into SAP per year), automation was identified as an important strategic target with high management attention and commitment. DTSE has to work with various backend application systems without any possibility of changing those systems. Furthermore, the complexity of the administrative processes differed. When it came to the transfer of unstructured data (e.g., offer documents) into structured data (e.g., MS Excel files), additional cognitive technologies were needed.
The molecular weight properties of lignins are among the key characteristics that need to be analyzed for a successful industrial application of these promising biopolymers. In this study, the use of 1H NMR as well as diffusion-ordered spectroscopy (DOSY NMR), combined with multivariate regression methods, was investigated for the determination of the molecular weight (Mw and Mn) and the polydispersity of organosolv lignins (n = 53; Miscanthus x giganteus, Paulownia tomentosa, and Silphium perfoliatum). The suitability of the models was demonstrated by cross validation (CV) as well as by an independent validation set of samples from different biomass origins (beech wood and wheat straw). CV errors of ca. 7–9% and 14–16% were achieved for all parameters with the models from the 1H NMR spectra and the DOSY NMR data, respectively. The prediction errors for the validation samples were in a similar range for the partial least squares model from the 1H NMR data and for a multiple linear regression using the DOSY NMR data. The results indicate the usefulness of NMR measurements combined with multivariate regression methods as a potential alternative to more time-consuming methods such as gel permeation chromatography.
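The core workflow described above (a multivariate regression model checked by cross validation) can be sketched in a few lines. The data here are synthetic stand-ins, not the lignin spectra from the study, and ordinary least squares stands in for the PLS/MLR models actually used:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for spectra: 53 "samples", 10 spectral variables,
# with the target property linearly encoded in a few variables plus noise.
n_samples, n_vars = 53, 10
X = rng.normal(size=(n_samples, n_vars))
true_coef = np.zeros(n_vars)
true_coef[:5] = [3.0, -2.0, 1.5, 0.5, -1.0]
y = X @ true_coef + rng.normal(scale=0.1, size=n_samples)

def loocv_rmse(X, y):
    """Leave-one-out cross-validation error of an ordinary least-squares model."""
    errors = []
    for i in range(len(y)):
        mask = np.arange(len(y)) != i          # hold out sample i
        coef, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
        errors.append(y[i] - X[i] @ coef)      # prediction error on held-out sample
    return float(np.sqrt(np.mean(np.square(errors))))

rmse = loocv_rmse(X, y)
print(f"LOOCV RMSE: {rmse:.3f}")
```

In practice the CV error would be reported relative to the reference values (as in the ca. 7–9% figures above) rather than as an absolute RMSE.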
In this study, a recently proposed NMR standardization approach based on the 2H integral of the deuterated solvent is presented for the quantitative multicomponent analysis of complex mixtures. As a proof of principle, the existing NMR routine for the analysis of Aloe vera products was modified. Instead of using absolute integrals of the targeted compounds and an internal standard (nicotinamide) from 1H-NMR spectra, quantification was performed based on the ratio of a particular 1H-NMR compound integral to the 2H-NMR signal of the deuterated solvent D2O. Validation characteristics (linearity, repeatability, accuracy) were evaluated, and the results showed that the method has the same precision as internal standardization in the case of multicomponent screening. Moreover, a dehydration step by freeze drying is no longer necessary with the new routine. Now, our NMR profiling of A. vera products needs only limited sample preparation and data processing. The new standardization methodology provides an appealing alternative for multicomponent NMR screening. In general, this novel approach, using standardization by the 2H integral, benefits from reduced sample preparation steps and uncertainties, and is recommended for different application areas (purity determination, forensics, pharmaceutical analysis, etc.).
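The quantification principle (ratio of a 1H compound integral to the 2H solvent integral) amounts to simple proportionality arithmetic. The sketch below is illustrative only; the function name, all integral values, and the response factor are hypothetical, and a real workflow would calibrate the relative 1H/2H response of the spectrometer:

```python
def conc_by_2h_standardization(i_analyte_1h, n_h_analyte,
                               i_solvent_2h, n_d_solvent,
                               c_deuterium_mol_l, k_cal=1.0):
    """Illustrative quantification via the ratio of a 1H analyte integral
    to the 2H integral of the deuterated solvent.

    i_analyte_1h      : integral of the analyte 1H signal
    n_h_analyte       : number of protons contributing to that signal
    i_solvent_2h      : integral of the solvent 2H signal
    n_d_solvent       : number of deuterons per solvent molecule
    c_deuterium_mol_l : known solvent (deuterium) concentration
    k_cal             : spectrometer-dependent 1H/2H response factor
                        (assumed known from calibration)
    """
    per_nucleus_ratio = (i_analyte_1h / n_h_analyte) / (i_solvent_2h / n_d_solvent)
    return per_nucleus_ratio * c_deuterium_mol_l / k_cal

# Hypothetical numbers: an analyte signal from 3 protons in D2O (2 deuterons).
c = conc_by_2h_standardization(i_analyte_1h=6.0, n_h_analyte=3,
                               i_solvent_2h=100.0, n_d_solvent=2,
                               c_deuterium_mol_l=110.0)
print(f"{c:.2f} mol/L")
```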
The possibility of determining various characteristics of powdered heparin (n = 115) by infrared spectroscopy was investigated. The evaluation of the heparin samples included several parameters such as purity grade, distributing company, animal source, and heparin species (i.e., Na-heparin, Ca-heparin, and heparinoids). Multivariate analysis using principal component analysis (PCA), soft independent modelling of class analogy (SIMCA), and partial least squares discriminant analysis (PLS-DA) was applied for the modelling of the spectral data. Different pre-processing methods were applied to the IR spectral data; multiplicative scatter correction (MSC) was chosen as the most suitable.
The obtained results were confirmed by nuclear magnetic resonance (NMR) spectroscopy. The good predictive ability of this approach demonstrates the potential of IR spectroscopy and chemometrics for screening heparin quality. This approach, however, is designed as a screening tool and is not intended as a replacement for either of the methods required by the USP and the FDA.
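Of the pre-processing methods mentioned, MSC is compact enough to sketch: each spectrum is regressed against a reference (typically the mean spectrum), and the fitted offset and slope are removed. This is the generic textbook formulation, not the study's code:

```python
import numpy as np

def msc(spectra, reference=None):
    """Multiplicative scatter correction: regress each spectrum on a
    reference (default: the mean spectrum) and remove offset and slope."""
    spectra = np.asarray(spectra, dtype=float)
    ref = spectra.mean(axis=0) if reference is None else np.asarray(reference)
    corrected = np.empty_like(spectra)
    for i, s in enumerate(spectra):
        # Fit s ≈ a + b * ref, then invert the scatter model.
        b, a = np.polyfit(ref, s, deg=1)
        corrected[i] = (s - a) / b
    return corrected

# Toy check: scaled/offset copies of one spectrum collapse onto each other.
base = np.sin(np.linspace(0, 3, 50))
spectra = np.stack([1.0 * base + 0.0, 2.0 * base + 0.5, 0.5 * base - 0.2])
out = msc(spectra)
spread = float(np.abs(out - out.mean(axis=0)).max())
print(spread)
```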
Quantitative nuclear magnetic resonance (qNMR) is routinely performed by internal or external standardization. This manuscript describes a simple alternative to these common workflows that uses the NMR signal of another NMR-active nucleus of the calibration compound. For example, the quantification of an arbitrary compound by NMR can be based on indirect concentration referencing that relies on a solvent having both 1H and 2H signals. To perform high-quality quantification, the deuteration level of the deuterated solvent used has to be estimated.
In this contribution, the new method was applied to the determination of deuteration levels in different deuterated solvents (MeOD, ACN, CDCl3, acetone, benzene, DMSO-d6). Isopropanol-d6, which contains a defined number of deuterons and protons, was used for standardization. Validation characteristics (precision, accuracy, robustness) were calculated, and the results showed that the method can be used in routine practice. The uncertainty budget was also evaluated. In general, this novel approach, using standardization by the 2H integral, benefits from reduced sample preparation steps and uncertainties, and can be applied in different application areas (purity determination, forensics, pharmaceutical analysis, etc.).
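The underlying arithmetic for a deuteration-level estimate can be sketched as follows. The integrals and the response factor are hypothetical; in the paper, the response calibration is anchored by isopropanol-d6 with its defined D/H ratio:

```python
def deuteration_level(i_solvent_2h, i_solvent_res_1h, response_2h_per_1h):
    """Illustrative estimate of a solvent's deuteration level.

    i_solvent_2h       : 2H integral of the deuterated solvent
    i_solvent_res_1h   : 1H integral of the residual protiated solvent signal
    response_2h_per_1h : relative response per 2H vs. per 1H nucleus,
                         calibrated with a standard of defined D/H ratio
                         (isopropanol-d6 in the paper)
    """
    n_d = i_solvent_2h / response_2h_per_1h  # convert integrals to nucleus counts
    n_h = i_solvent_res_1h
    return n_d / (n_d + n_h)

# Hypothetical integrals for a nominally 99.8 %-d solvent.
level = deuteration_level(i_solvent_2h=499.0, i_solvent_res_1h=1.0,
                          response_2h_per_1h=1.0)
print(f"{100 * level:.1f} % D")
```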
The initial idea of Robotic Process Automation (RPA) is the automation of business processes through a simple emulation of user input and output by software robots. Hence, it can be assumed that no changes to the software systems used or to the existing Enterprise Architecture (EA) are required. In this short, practical paper we discuss this assumption based on a real-life implementation project. We show that a successful RPA implementation might require architectural work during analysis, implementation, and migration. As a practical paper, we focus on exemplary lessons learned and new questions related to RPA and EA.
Digital Shadows as the aggregation, linkage and abstraction of data relating to physical objects are a central vision for the future of production. However, the majority of current research takes a technocentric approach, in which the human actors in production play a minor role. Here, the authors present an alternative anthropocentric perspective that highlights the potential and main challenges of extending the concept of Digital Shadows to humans. Following future research methodology, three prospections that illustrate use cases for Human Digital Shadows across organizational and hierarchical levels are developed: human-robot collaboration for manual work, decision support and work organization, as well as human resource management. Potentials and challenges are identified using separate SWOT analyses for the three prospections and common themes are emphasized in a concluding discussion.
For now, the Planetary Defense Conference Exercise 2021's incoming fictitious(!) asteroid, 2021 PDC, seems headed for impact on October 20th, 2021, exactly 6 months after its discovery. Today (April 26th, 2021), the impact probability is 5%, in a steep rise from 1 in 2500 upon discovery six days ago. We all know how these things end. Or do we? Unless somebody kicked off another headline-grabbing media scare or wants to keep civil defense very idle very soon, chances are that it will hit (note: this is an exercise!). Taking stock: it is barely 6 months to impact, with a steadily rising likelihood that it will actually happen and a huge uncertainty in possible impact energies. First estimates range from 1.2 MtTNT to 13 GtTNT, and this is not even the worst-worst case: a 700 m diameter massive NiFe asteroid (covered by a thin veneer of Ryugu-black rubble to match size and brightness) would come in at 70 GtTNT. In down-to-Earth terms, this could be anything between smashing fireworks over some remote area of the globe and a 7.5 km crater downtown somewhere. Considering the deliberate and sedate ways in which interplanetary missions are developed, it seems we can only stand and stare until we know well enough where to tell people to pack up all that can be moved at all and save themselves. But then, it could just as well be a smaller bright rock. The best estimate is 120 m diameter from optical observation alone, assuming 13% standard albedo. NASA's upcoming DART mission to the binary asteroid (65803) Didymos is designed to hit such a small target, its moonlet Dimorphos. The Deep Impact mission's impactor in 2005 successfully guided itself to the brightest spot on comet 9P/Tempel 1, a relatively small feature on the 6 km nucleus. And 'space' has changed: by the end of this decade, one satellite communication network plans to have launched over 11000 satellites at a pace of 60 per launch every other week.
This level of series production is comparable in numbers to the most prolific commercial airliners. Launch vehicle production has not simply increased correspondingly: the vehicles can now be reused, although in a trade for performance. Optical and radio astronomy as well as planetary radar have made great strides in the past decade, and so has the design and production capability for everyday 'high-tech' products. 60 years ago, spaceflight was invented from scratch within two years, and there are recent examples of fast-paced space projects as well as a drive towards 'responsive space'. It seems it is not quite yet time to abandon all hope. We present what could be done and what is too close to call once thinking is shoved out of the box by a clear and present danger, to show where a little more preparedness or routine would come in handy – or become decisive. And if we fail, let's stand and stare, safely and well instrumented, anywhere on Earth together in the greatest adventure of science.
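The 120 m size estimate "from optical observation alone" quoted above follows the standard relation between diameter, absolute magnitude H, and geometric albedo: D [km] = 1329 / sqrt(p_V) · 10^(−H/5). The H value below is an assumption chosen to reproduce the quoted figure at 13% albedo; the comparison line shows how strongly the estimate depends on the unknown albedo:

```python
import math

def diameter_km(abs_magnitude_h, geometric_albedo):
    """Standard asteroid size estimate from absolute magnitude H and
    geometric albedo p_V: D [km] = 1329 / sqrt(p_V) * 10**(-H/5)."""
    return 1329.0 / math.sqrt(geometric_albedo) * 10 ** (-abs_magnitude_h / 5)

# H ~ 22.4 is an assumed value for illustration; 0.13 is the standard
# albedo quoted in the text. A darker surface implies a much larger body.
d_bright = diameter_km(22.4, 0.13)  # roughly the quoted 120 m
d_dark = diameter_km(22.4, 0.03)    # same brightness, dark surface
print(f"{d_bright * 1000:.0f} m vs {d_dark * 1000:.0f} m")
```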
The existence of several mobile operating systems, such as Android and iOS, is a challenge for developers because the individual platforms are not compatible with each other and require separate app developments. For this reason, cross-platform approaches have become popular but fall short of replicating the native behavior of the different operating systems. Among the many cross-platform approaches, the progressive web app (PWA) approach is perceived as promising but needs further investigation. Therefore, the paper at hand investigates whether PWAs are a suitable alternative to native apps by developing a PWA clone of an existing app. Two surveys are conducted in which potential users test and evaluate the PWA prototype with regard to its usability. The survey results indicate that PWAs have great potential but cannot be treated as a general alternative to native apps. To guide developers on when and how to use PWAs, four design guidelines for the development of PWA-based apps are derived from the results.
With the increased interest for interstellar exploration after the discovery of exoplanets and the proposal by Breakthrough Starshot, this paper investigates the optimisation of photon-sail trajectories in Alpha Centauri. The prime objective is to find the optimal steering strategy for a photonic sail to get captured around one of the stars after a minimum-time transfer from Earth. By extending the idea of the Breakthrough Starshot project with a deceleration phase upon arrival, the mission’s scientific yield will be increased. As a secondary objective, transfer trajectories between the stars and orbit-raising manoeuvres to explore the habitable zones of the stars are investigated. All trajectories are optimised for minimum time of flight using the trajectory optimisation software InTrance. Depending on the sail technology, interstellar travel times of 77.6-18,790 years can be achieved, which presents an average improvement of 30% with respect to previous work. Still, significant technological development is required to reach and be captured in the Alpha-Centauri system in less than a century. Therefore, a fly-through mission arguably remains the only option for a first exploratory mission to Alpha Centauri, but the enticing results obtained in this work provide perspective for future long-residence missions to our closest neighbouring star system.
This paper presents laser-based powder bed fusion (L-PBF) using various glass powders (borosilicate and quartz glass). Compared to metals, these require adapted process strategies. First, the glass powders were characterized with regard to their material properties and their processability in the powder bed. This was followed by investigations of the melting behavior of the glass powders at different laser wavelengths (10.6 µm, 1070 nm). In particular, the experimental setup of a CO2 laser was adapted for the processing of glass powder. An experimental setup with integrated coaxial temperature measurement/control and an inductively heatable build platform was created. This allowed the L-PBF process to be carried out at the transformation temperature of the glasses. Furthermore, the material quality of the components was analyzed on three-dimensional test specimens with regard to porosity, roughness, density, and geometrical accuracy in order to evaluate the developed L-PBF parameters and to open up possible applications.
The planned coal phase-out in Germany by 2038 will lead to the dismantling of power plants with a total capacity of approx. 30 GW. A possible further use of these assets is the conversion of the power plants to thermal storage power plants; however, the use of these power plants on the day-ahead market is considerably limited by their technical parameters. In this paper, the influence of the technical boundary conditions on the operating times of these storage facilities is presented. For this purpose, the storage power plants were modelled as an MILP problem, and two price curves are compared: one from 2015 with a relatively low renewable penetration (33 %) and one from 2020 with a high renewable energy penetration (51 %). The operating times were examined as a function of the technical parameters, and the critical influencing factors were investigated. With the price curve of 2020, the operation duration of the thermal storage power plant and the energy shifted increase by more than 25 % compared to 2015.
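The economic intuition behind storage power plant dispatch (buy energy in cheap hours, sell in expensive ones, losing some to conversion) can be illustrated with a toy heuristic. This is explicitly not the MILP formulation used in the paper: it ignores state-of-charge sequencing and all technical boundary conditions, and every number is hypothetical:

```python
def greedy_storage_dispatch(prices, capacity_mwh, power_mw, efficiency=0.8):
    """Toy price-arbitrage dispatch of a storage power plant: charge in the
    cheapest hours, discharge in the most expensive ones. A simplified
    heuristic only; a real model (like the paper's MILP) must enforce
    state-of-charge ordering and technical constraints."""
    hours = sorted(range(len(prices)), key=lambda h: prices[h])
    n_cycle_hours = int(capacity_mwh / power_mw)
    charge_hours = set(hours[:n_cycle_hours])       # cheapest hours
    discharge_hours = set(hours[-n_cycle_hours:])   # most expensive hours
    revenue = 0.0
    for h, p in enumerate(prices):
        if h in charge_hours:
            revenue -= power_mw * p                 # buy energy to charge
        elif h in discharge_hours:
            revenue += power_mw * efficiency * p    # sell with conversion losses
    return revenue

# Hypothetical day-ahead prices (EUR/MWh) over 8 hours.
prices = [20, 15, 10, 30, 60, 80, 70, 40]
rev = greedy_storage_dispatch(prices, capacity_mwh=200, power_mw=100)
print(f"{rev:.0f} EUR")
```

A more volatile price curve, like that of the high-renewables year 2020, widens the spread between charge and discharge prices and thus increases both the operating hours and the energy shifted.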
Experimental and numerical investigation on the effect of pressure on micromix hydrogen combustion (2021)
The micromix (MMX) combustion concept is a dry low NOx (DLN) gas turbine combustion technology designed for high hydrogen content fuels. Multiple non-premixed miniaturized flames based on jets in cross-flow (JICF) are inherently safe against flashback and ensure stable operation under various operating conditions.
The objective of this paper is to investigate the influence of pressure on the micromix flame with focus on the flame initiation point and the NOx emissions. A numerical model based on a steady RANS approach and the Complex Chemistry model with relevant reactions of the GRI 3.0 mechanism is used to predict the reactive flow and NOx emissions at various pressure conditions. Regarding the turbulence-chemical interaction, the Laminar Flame Concept (LFC) and the Eddy Dissipation Concept (EDC) are compared. The numerical results are validated against experimental results that have been acquired at a high pressure test facility for industrial can-type gas turbine combustors with regard to flame initiation and NOx emissions.
The numerical approach proves adequate for predicting the flame initiation point and NOx emission trends. Interestingly, the flame initiation point shifts upstream as the pressure increases: the flame attachment moves from anchoring behind a bluff body located downstream towards anchoring directly at the hydrogen jet. The LFC predicts this change and the NOx emissions more accurately than the EDC. The resulting NOx correlation with pressure is similar to that of a non-premixed combustion configuration.
Kawasaki Heavy Industries, LTD. (KHI) has research and development projects for a future hydrogen society. These projects comprise the complete hydrogen cycle, including the production of hydrogen gas, the refinement and liquefaction for transportation and storage, and finally the utilization in a gas turbine for electricity and heat supply. Within the development of the hydrogen gas turbine, the key technology is stable and low NOx hydrogen combustion, namely the Dry Low NOx (DLN) hydrogen combustion.
KHI, Aachen University of Applied Sciences, and B&B-AGEMA have investigated the possibility of low NOx micro-mix hydrogen combustion and its application to an industrial gas turbine combustor. From 2014 to 2018, KHI developed a DLN hydrogen combustor for a 2 MW class industrial gas turbine with the micro-mix technology. Thereby, the ignition performance and the flame stability at equivalent rotational speed and at higher load conditions were investigated. NOx emission values were kept at about half of the limit set by the Air Pollution Control Law in Japan: 84 ppm (at 15% O2). Herewith, the elementary combustor development was completed.
In May 2020, KHI started engine demonstration operation using an M1A-17 gas turbine with a co-generation system located at the hydrogen-fueled power generation plant in Kobe City, Japan. During the first engine demonstration tests, adjustments of engine starting and load control with fuel staging were investigated. On 21st May, the electrical power output reached 1,635 kW, which corresponds to 100% load (ambient temperature 20 °C), and NOx emissions of 65 ppm (at 15% O2, 60% RH) were verified. Here, for the first time, a DLN hydrogen-fueled gas turbine successfully generated power and heat.
This study investigates the influence of pressure on the temperature distribution of the micromix (MMX) hydrogen flame and on the NOx emissions. A steady computational fluid dynamics (CFD) analysis is performed by simulating a reactive flow with a detailed chemical reaction model. The numerical analysis is validated against experimental investigations, and a quantitative correlation is parametrized based on the numerical results. We find that the flame initiation point shifts with increasing pressure from anchoring behind a downstream bluff body towards anchoring upstream at the hydrogen jet. The numerical NOx emission trend under pressure variation is in good agreement with the experimental results. The pressure has an impact on both the residence time within the maximum temperature region and the peak temperature itself. In conclusion, the numerical model proved adequate for future prototype design exploration studies targeting an improved operating range.
In this paper, we investigate the use of deep neural networks for 3D object detection in uncommon, unstructured environments such as an open-pit mine. While neural nets are frequently used for object detection in regular autonomous driving applications, more unusual driving scenarios beyond street traffic pose additional challenges. For one, the collection of appropriate data sets to train the networks is an issue. For another, testing the performance of trained networks often requires tailored integration with the particular domain as well. While different solutions exist for these problems in regular autonomous driving, there are only very few approaches that work equally well for special domains. We address both of these challenges in this work. First, we discuss two possible ways of acquiring data for training and evaluation: we evaluate a semi-automated annotation of recorded LIDAR data, and we examine synthetic data generation. Using these datasets, we train and test different deep neural networks for the task of object detection. Second, we propose a possible integration of a ROS2 detector module for an autonomous driving platform. Finally, we present the performance of three state-of-the-art deep neural networks for 3D object detection on a synthetic dataset and on a smaller one containing a characteristic object from an open-pit mine.
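Evaluating detections against ground truth, as done for the trained networks above, typically rests on an intersection-over-union criterion. Below is a minimal sketch for axis-aligned 3D boxes; real LIDAR benchmarks score rotated boxes, so this is a simplification:

```python
def iou_3d_axis_aligned(box_a, box_b):
    """Intersection-over-union of two axis-aligned 3D boxes, a common
    building block when matching 3D detections to ground truth.
    Boxes are (x_min, y_min, z_min, x_max, y_max, z_max)."""
    inter = 1.0
    for i in range(3):                      # overlap along x, y, z
        lo = max(box_a[i], box_b[i])
        hi = min(box_a[i + 3], box_b[i + 3])
        if hi <= lo:
            return 0.0                      # no overlap on this axis
        inter *= hi - lo

    def volume(b):
        return (b[3] - b[0]) * (b[4] - b[1]) * (b[5] - b[2])

    return inter / (volume(box_a) + volume(box_b) - inter)

# Two unit cubes overlapping by half along x: IoU = 0.5 / 1.5.
iou = iou_3d_axis_aligned((0, 0, 0, 1, 1, 1), (0.5, 0, 0, 1.5, 1, 1))
print(iou)
```

A detection is usually counted as a true positive when its IoU with a ground-truth box exceeds a threshold (e.g. 0.5 or 0.7 in common benchmarks).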