To support the transformation needs of telecommunications companies, the reference models of the TM Forum are recognized in practice worldwide. In most cases, however, they are used in isolation for specific individual topics. This article therefore consolidates the existing content into an industry-specific, overarching reference architecture. The focus is on the layers of organizational structure, processes, applications, and data. In addition, content-related architecture domains are provided for structuring. The reference architecture is hierarchical and is described here by way of example for selected, aggregated content. As a first evaluation, the application of the reference architecture in three practical projects is described.
The fragmentation of value chains creates new challenges for the management of customer relationships. This dissertation examines the resulting requirements for an overarching integration of customer relationship management in the telecommunications industry. The goal is to design an overarching solution by applying the methods of an enterprise architecture framework. The underlying premise is that the overarching design of customer relationship management is beneficial for all companies involved in the value chain.
The telecommunications industry has undergone enormous change in recent decades. For telecommunications companies, this requires fundamental restructuring of strategy, processes, application systems, and network technologies. Enterprise architectures and reference models play an important role in this. Recognized reference models do exist in practice, but how should they be designed for a systematic transformation? What does a concrete solution for the telecommunications industry look like?
In response, Christian Czarnecki presents a reference-model-based enterprise architecture in his book. Based on an extensive study of transformation projects, problems and requirements from practice are identified, for which a solution proposal is developed and evaluated using methods of enterprise transformation, reference modeling, and enterprise architecture. It includes, among other things, detailed use cases, reference process flows, a mapping of processes to application systems, and recommendations for virtualization.
For researchers and students of information systems, the book presents new insights into application-oriented reference modeling. For practitioners, it provides a methodologically sound solution to the current transformation needs of the telecommunications industry. Christian Czarnecki has worked as a management consultant since 2004 and has supported many telecommunications companies in their transformation. He received his doctorate in engineering from the Otto-von-Guericke-Universität Magdeburg in 2013.
Because of customer churn, strong competition, and operational inefficiencies, the telecommunications operator ME Telco (fictitious name due to confidentiality) launched a strategic transformation program that included a Business Process Management (BPM) project. Major problems were silo-oriented process management and missing cross-functional transparency. Process improvements were not consistently planned and aligned with corporate targets. Measurable inefficiencies were observed on an operational level, e.g., high lead times and reassignment rates of the incident management process.
The city of Augsburg was one of the largest textile cities in Europe. Since 2010, the Textile and Industry Museum (TIM) has presented a multitude of exhibits in the old worsted spinning mill in Augsburg's former textile quarter, making it an important part of the Bavarian museum landscape and of German textile history. Besides its special location, the TIM impresses above all with its large collection of pattern books. The visual identity redesigned in this thesis focuses on these various patterns, which bring the museum's textile emphasis to the fore and give it a memorable, recognizable image. The new corporate design is intended to make the museum more attractive to regional and national visitors and thus to convey and preserve its unique historical significance.
Intelligent autonomous software robots that replace human activities and perform administrative processes are a reality in today's corporate world. This includes, for example, decisions about invoice payments, the identification of customers for a marketing campaign, and answering customer complaints. What happens if such a software robot causes damage? Given the complete absence of human activity, the question is not trivial. It could even happen that no one is liable for damage toward a third party, which could create an incalculable legal risk for business partners. Furthermore, the implementation and operation of such software robots involves various stakeholders, which makes identifying the originator of a damage an intractable endeavor. Overall, it is advisable for all involved parties to carefully consider the legal situation. This chapter discusses the liability of software robots from an interdisciplinary perspective. The legal aspects of liability are discussed on the basis of different technical scenarios.
Non-intrusive measuring techniques have attracted a lot of interest in relation to both hydraulic modeling and prototype applications. Complementing acoustic techniques, significant progress has been made in the development of new optical methods. Computer vision techniques can help to extract new information, e.g., high-resolution velocity and depth data, from videos captured with relatively inexpensive, consumer-grade cameras. Depth cameras are sensors providing information on the distance between the camera and observed features. Currently, sensors with different working principles are available. Stereoscopic systems reference physical image features (passive system) from two perspectives; in order to increase the number of features and improve the results, a sensor may also estimate the disparity from a detected light pattern to its original projection (active stereo system). In the current study, the RGB-D camera Intel RealSense D435, working on such a stereo vision principle, is used in different, typical hydraulic modeling applications. All tests have been conducted at the Utah Water Research Laboratory. This paper will demonstrate the performance and limitations of the RGB-D sensor, installed as a single camera and as camera arrays, applied (1) to detect the free surface of highly turbulent, aerated hydraulic jumps, of free-falling jets, and of an energy dissipation basin downstream of a labyrinth weir, and (2) to monitor local scour upstream and downstream of a Piano Key Weir. It is intended to share the authors' experiences with respect to camera settings, calibration, lighting conditions, and other requirements in order to promote this useful, easily accessible device. Results will be compared to data from classical instrumentation and the literature. It will be shown that even in difficult applications, e.g., the detection of a highly turbulent, fluctuating free surface, the RGB-D sensor may yield accuracy similar to that of classical, intrusive probes.
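The stereo principle described above reduces, per matched feature, to simple triangulation: depth equals focal length times baseline divided by disparity. A minimal sketch, with hypothetical calibration values rather than the D435's actual parameters:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Triangulate depth (m) from stereo disparity (pixels)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Hypothetical calibration: 640 px focal length, 50 mm baseline.
z = depth_from_disparity(640.0, 0.050, 32.0)  # ~1.0 m
```

Smaller disparities map to larger depths, which is why depth resolution degrades with distance.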
The investigation of atomic resonance fluorescence has always been of special interest as a means for the determination of atomic parameters. In addition, information on the interaction mechanism between atoms and radiation can be obtained. In the standard fluorescence experiment the frequency distribution of the incident photons is larger than the natural width of the respective transition; as a consequence the correlation time in the photon-atom interaction is determined by the lifetime of the atoms in the excited state. With the development of lasers and especially of tunable dye lasers in recent years it became possible to study the case where the incident radiation has a spectral distribution which is narrower than the natural width. This corresponds to a correlation time of the incoming light wave which is much longer than the excited-state lifetime. In this chapter a survey of experiments on the resonance fluorescence of atoms in monochromatic laser fields will be given.
Improving the Mechanical Strength of Dental Applications and Lattice Structures SLM Processed
(2020)
To manufacture custom medical parts or scaffolds with reduced defects and high mechanical characteristics, new research on optimizing the selective laser melting (SLM) parameters is needed. In this work, a biocompatible powder, 316L stainless steel, is characterized to understand the particle size, distribution, shape, and flowability. Examination revealed that the 316L particles are smooth and nearly spherical; their mean diameter is 39.09 μm, and just 10% of them have a diameter of less than 21.18 μm. SLM parameters under consideration include laser power up to 200 W, scanning speed of 250–1500 mm/s, hatch spacing of 80 μm, layer thickness of 35 μm, and a preheated platform. The effect of these on processability is evaluated. More than 100 samples are SLM-manufactured with different process parameters. The tensile results show that it is possible to raise the ultimate tensile strength up to 840 MPa by adapting the SLM parameters for stable processability, avoiding the technological defects caused by residual stress. Compared with other recent studies on SLM technology, the tensile strength is improved by 20%. To validate the established SLM parameters and conditions, complex bioengineering applications such as dental bridges and macro-porous grafts are SLM-processed, demonstrating the potential to manufacture medical products with increased mechanical resistance made of 316L.
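Figures such as the mean diameter and the 10% quantile (D10) come straight from the measured particle-size distribution. The sketch below reproduces that computation on synthetic, roughly log-normal data; the distribution parameters are illustrative, not the measured 316L powder:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic, roughly log-normal particle diameters in micrometers --
# illustrative stand-in for a measured powder distribution.
diameters = rng.lognormal(mean=np.log(38.0), sigma=0.35, size=10_000)

mean_d = diameters.mean()
d10 = np.percentile(diameters, 10)  # 10% of particles are finer than this
print(f"mean = {mean_d:.2f} um, D10 = {d10:.2f} um")
```

For a right-skewed size distribution like this one, D10 sits well below the mean, as in the reported 21.18 μm vs. 39.09 μm.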
Numerical calculation of the tritium behavior of pebble-bed reactors, using the AVR reactor as an example
(1979)
Impaired cerebral autoregulation and neurovascular coupling (NVC) contribute to delayed cerebral ischemia after subarachnoid hemorrhage (SAH). Retinal vessel analysis (RVA) allows non-invasive assessment of vessel dimension and NVC, and has demonstrated predictive value in the context of various neurovascular diseases. Using RVA as a translational approach, we aimed to assess the retinal vessels in patients with SAH. RVA was performed prospectively in 24 patients with acute SAH (group A: day 5–14), in 11 patients 3 months after the ictus (group B: day 90 ± 35), and in 35 age-matched healthy controls (group C). Data were acquired using a Retinal Vessel Analyzer (Imedos Systems UG, Jena) for examination of retinal vessel dimension and NVC using flicker-light excitation. The diameter of the retinal vessels—central retinal arteriolar and venular equivalent—was significantly reduced in the acute phase (p < 0.001), with gradual improvement in group B (p < 0.05). Arterial NVC of group A was significantly impaired, with diminished dilatation (p < 0.001) and reduced area under the curve (p < 0.01) when compared to group C. Group B showed persistently prolonged latency of arterial dilation (p < 0.05). Venous NVC was significantly delayed after SAH compared to group C (A p < 0.001; B p < 0.05). To our knowledge, this is the first clinical study to document retinal vasoconstriction and impairment of NVC in patients with SAH. Using non-invasive RVA as a translational approach, characteristic patterns of compromise were detected for the arterial and venous compartments of the neurovascular unit in a time-dependent fashion. Recruitment will continue to facilitate a correlation analysis with clinical course and outcome.
Edge-based and face-based smoothed finite element methods (ES-FEM and FS-FEM, respectively) are modified versions of the finite element method that achieve more accurate results and reduce sensitivity to mesh distortion, at least for linear elements. These properties make the two methods very attractive. However, their implementation in a standard finite element code is nontrivial because it requires heavy and extensive modifications to the code architecture. In this article, we present an element-based formulation of the ES-FEM and FS-FEM methods that allows the two methods to be implemented in a standard finite element code with no modifications to its architecture. Moreover, the element-based formulation makes it easy to handle any type of element, especially in 3D models where, to the best of the authors' knowledge, only tetrahedral elements are used in the FS-FEM applications found in the literature. Shape functions for non-simplex 3D elements are proposed in order to apply FS-FEM to any standard finite element.
Automated driving is now possible in diverse road and traffic conditions. However, there are still situations that automated vehicles cannot handle safely and efficiently. In this case, a Transition of Control (ToC) is necessary so that the driver takes over the driving task. Executing a ToC requires the driver to gain full situation awareness of the driving environment. If the driver fails to take back control within the available time, a Minimum Risk Maneuver (MRM) is executed to bring the vehicle into a safe state (e.g., decelerating to a full stop). The execution of ToCs requires some time and can cause traffic disruption and safety risks, which increase if several vehicles execute ToCs/MRMs at similar times and in the same area. This study proposes to use novel C-ITS traffic management measures in which the infrastructure exploits V2X communications to assist Connected and Automated Vehicles (CAVs) in the execution of ToCs. The infrastructure can suggest a spatial distribution of ToCs and inform vehicles of the locations where they could execute a safe stop in case of an MRM. This paper reports the first field operational tests that validate the feasibility and quantify the benefits of the proposed infrastructure-assisted ToC and MRM management. The paper also presents the CAV and roadside infrastructure prototypes implemented and used in the trials. The conducted field trials demonstrate that infrastructure-assisted traffic management solutions can reduce safety risks and traffic disruptions.
We consider the numerical approximation of second-order semi-linear parabolic stochastic partial differential equations interpreted in the mild sense which we solve on general two-dimensional domains with a C² boundary with homogeneous Dirichlet boundary conditions. The equations are driven by Gaussian additive noise, and several Lipschitz-like conditions are imposed on the nonlinear function. We discretize in space with a spectral Galerkin method and in time using an explicit Euler-like scheme. For irregular shapes, the necessary Dirichlet eigenvalues and eigenfunctions are obtained from a boundary integral equation method. This yields a nonlinear eigenvalue problem, which is discretized using a boundary element collocation method and is solved with the Beyn contour integral algorithm. We present an error analysis as well as numerical results on an exemplary asymmetric shape, and point out limitations of the approach.
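As a rough illustration of the discretization strategy above, the sketch below applies a spectral Galerkin truncation and an explicit Euler step to a 1D analogue (an interval with homogeneous Dirichlet conditions, where the eigenpairs are known in closed form) instead of the paper's 2D domains with BEM-computed eigenfunctions; the nonlinearity and the noise scaling are illustrative choices, not the paper's:

```python
import numpy as np

# Spectral Galerkin / explicit Euler sketch for du = (Lap(u) + f(u)) dt + dW
# on (0, pi) with homogeneous Dirichlet BCs. In 1D the eigenpairs are
# e_k(x) = sqrt(2/pi) sin(k x), lambda_k = k^2 (no BEM solve needed).

N = 64                # number of Galerkin modes
M = 200               # grid points for evaluating the nonlinearity
dt, steps = 1e-4, 1000
f = lambda u: -u**3   # illustrative Lipschitz-like nonlinearity

x = np.linspace(0.0, np.pi, M, endpoint=False)[1:]
k = np.arange(1, N + 1)
lam = k.astype(float) ** 2
E = np.sqrt(2.0 / np.pi) * np.sin(np.outer(x, k))  # eigenfunctions on grid
w = np.pi / M                                      # quadrature weight

rng = np.random.default_rng(1)
u_hat = 1.0 / k                      # initial Galerkin coefficients
for _ in range(steps):
    u = E @ u_hat                    # back to physical space
    f_hat = w * (E.T @ f(u))         # project nonlinearity onto the modes
    noise = rng.standard_normal(N) / k   # decaying (trace-class-like) noise
    u_hat = u_hat + dt * (-lam * u_hat + f_hat) + np.sqrt(dt) * noise

print("final max |u|:", np.abs(E @ u_hat).max())
```

Note the explicit Euler step is only stable here because dt is below 2 divided by the largest retained eigenvalue; in practice semi-implicit or exponential integrators relax this restriction.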
Light-colored window profile materials: aging behavior based on peroxide-crosslinked EPDM
(2010)
Purpose
In vivo, a loss of mesh porosity triggers scar tissue formation and restricts functionality. The purpose of this study was to evaluate the properties and configuration changes as mesh deformation and mesh shrinkage of a soft mesh implant compared with a conventional stiff mesh implant in vitro and in a porcine model.
Material and Methods
Tensile tests and digital image correlation were used to determine the textile porosity for both mesh types in vitro. Two groups of three pigs each were treated either with magnetic resonance imaging (MRI)-visible conventional stiff polyvinylidene fluoride meshes (PVDF) or with soft thermoplastic polyurethane meshes (TPU) (FEG Textiltechnik mbH, Aachen, Germany). MRI was performed with a pneumoperitoneum at pressures of 0 and 15 mmHg, the latter resulting in bulging of the abdomen. The mesh-induced signal voids were semiautomatically segmented and the mesh areas were determined. From the deformations assessed for both mesh types at both pressure conditions, the porosity change of the meshes after 8 weeks of ingrowth was calculated as an indicator of preserved elastic properties. The explanted specimens were examined histologically for the maturity of the scar (collagen I/III ratio).
Results
In TPU, the in vitro porosity increased consistently, whereas in PVDF a loss of porosity was observed under mild stresses. In vivo, the mean mesh area of TPU was 206.8 cm² (± 5.7 cm²) at 0 mmHg pneumoperitoneum and 274.6 cm² (± 5.2 cm²) at 15 mmHg; for PVDF, the mean areas were 205.5 cm² (± 8.8 cm²) and 221.5 cm² (± 11.8 cm²), respectively. The pneumoperitoneum-induced pressure increase resulted in a calculated porosity increase of 8.4% for TPU and of 1.2% for PVDF. The mean collagen I/III ratio was 8.7 (± 0.5) for TPU and 4.7 (± 0.7) for PVDF.
Conclusion
The elastic properties of TPU mesh implants result in improved tissue integration compared to conventional PVDF meshes, and they adapt more efficiently to the abdominal wall. © 2017 Wiley Periodicals, Inc. J Biomed Mater Res Part B: Appl Biomater, 106B: 827–833, 2018.
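The relative mesh-area increase under pneumoperitoneum can be recomputed directly from the reported means. Note that these raw area changes are not the calculated porosity changes (8.4% and 1.2%), which additionally involve the deformation-to-porosity mapping from the in vitro tests:

```python
def rel_increase(a0, a1):
    """Relative area increase (%) between two pressure states."""
    return 100.0 * (a1 - a0) / a0

tpu = rel_increase(206.8, 274.6)   # TPU mesh area, 0 -> 15 mmHg
pvdf = rel_increase(205.5, 221.5)  # PVDF mesh area, 0 -> 15 mmHg
print(f"TPU: +{tpu:.1f}%, PVDF: +{pvdf:.1f}%")  # ~32.8% vs. ~7.8%
```

The much larger area change of the TPU mesh is what the study reads as better adaptation to the bulging abdominal wall.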
The FAYMONVILLE case study examines how the family-owned company Faymonville from East Belgium managed to develop into one of the leading manufacturers in its industry. The targeted identification of new markets, a focus on the relevant customer needs, and a consistent product policy with an aligned production concept laid the foundations for this success. The case study vividly shows how the fundamental tension between economical and customer-specific production can be successfully resolved.
One third of the employees of Saint-Gobain Glass Deutschland GmbH trained their backs regularly for three years. With success, as a final evaluation conducted in cooperation with FH Aachen shows: the number of sick days among the training participants has fallen dramatically, while their untrained colleagues continue to suffer from back complaints.
Valuation relevance of published cash flow statements of listed German companies
(1999)
A methodology for the assessment, seismic verification, and strengthening of existing masonry buildings is presented in this paper. The verification is performed using a calculation model calibrated with the results of ambient vibration measurements. The calibrated model serves as input for a deformation-based verification procedure based on the Capacity Spectrum Method (CSM). The bearing capacity of the building is calculated from experimental capacity curves of the individual walls, idealized with bilinear elastic-perfectly plastic curves. The experimental capacity curves were obtained from in-plane cyclic loading tests on unreinforced masonry walls and on walls strengthened with reinforced concrete jackets. The seismic action is compared with the load-bearing capacity of the building, considering non-linear material behavior with its post-peak capacity. The application of the CSM to masonry buildings and the influence of a traditional strengthening method are demonstrated using the example of a public school building in Skopje, Macedonia.
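The bilinear elastic-perfectly plastic idealization mentioned above is commonly constructed so that the bilinear curve encloses the same energy as the experimental capacity curve. A sketch of that equal-energy construction on synthetic data (not the Skopje test data; plateau-at-peak is one common convention among several):

```python
import numpy as np

def bilinear_idealization(d, F):
    """Equal-energy elastic-perfectly-plastic idealization of a capacity curve.

    Returns (d_y, F_u): yield displacement and plateau force such that the
    bilinear curve encloses the same area (energy) as the measured curve.
    """
    A = float(np.sum(0.5 * (F[1:] + F[:-1]) * np.diff(d)))  # trapezoid rule
    F_u = F.max()          # plateau at peak resistance (one common choice)
    d_u = d[-1]            # ultimate displacement
    d_y = 2.0 * (d_u - A / F_u)   # solve F_u*(d_u - d_y/2) = A for d_y
    return d_y, F_u

# Synthetic capacity curve (illustrative only):
d = np.linspace(0.0, 20.0, 201)          # displacement, mm
F = 100.0 * (1.0 - np.exp(-d / 4.0))     # resistance, kN
d_y, F_u = bilinear_idealization(d, F)
print(f"d_y = {d_y:.2f} mm, F_u = {F_u:.1f} kN")
```

The resulting elastic branch (stiffness F_u/d_y) and plateau are what enter the CSM comparison with the seismic demand.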
Textile reinforced concrete. Part I: Process model for collaborative research and development
(2003)
Investigation of TRPV1 loss-of-function phenotypes in transgenic shRNA expressing and knockout mice
(2008)
Numerical avalanche dynamics models have become an essential part of snow engineering. Coupled with field observations and historical records, they are especially helpful in understanding avalanche flow in complex terrain. However, their application poses several new challenges to avalanche engineers. A detailed understanding of the avalanche phenomena is required to construct hazard scenarios, which involve the careful specification of initial conditions (release zone location and dimensions) and the definition of appropriate friction parameters. The interpretation of simulation results requires an understanding of the numerical solution schemes and easy-to-use visualization tools. We discuss these problems by presenting the computer model RAMMS, which was specially designed by the SLF as a practical tool for avalanche engineers. RAMMS solves the depth-averaged equations governing avalanche flow with accurate second-order numerical solution schemes. The model allows the specification of multiple release zones in three-dimensional terrain. Snow cover entrainment is considered. Furthermore, two different flow rheologies can be applied: the standard Voellmy–Salm (VS) approach or a random kinetic energy (RKE) model, which accounts for the random motion and inelastic interaction between snow granules. We present the governing differential equations, highlight some of the input and output features of RAMMS and then apply the models with entrainment to simulate two well-documented avalanche events recorded at the Vallée de la Sionne test site.
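For orientation, the Voellmy–Salm rheology splits flow resistance into a dry-Coulomb term (coefficient μ) and a velocity-squared turbulent term (coefficient ξ). A point-mass simplification of the depth-averaged balance, with illustrative parameter values rather than calibrated RAMMS inputs:

```python
import math

def voellmy_accel(u, h, slope_deg, mu=0.2, xi=2000.0, g=9.81):
    """Along-slope acceleration (m/s^2) of a sliding block under
    Voellmy-Salm friction: gravity minus dry-Coulomb friction minus
    turbulent drag. Point-mass simplification of the depth-averaged model.

    u: speed (m/s), h: flow depth (m), slope_deg: slope angle (degrees),
    mu: dry-friction coefficient, xi: turbulent friction coefficient (m/s^2).
    """
    phi = math.radians(slope_deg)
    driving = g * math.sin(phi)
    coulomb = mu * g * math.cos(phi)
    turbulent = g * u * u / (xi * h)
    return driving - coulomb - turbulent

# Hypothetical 30-degree track, 1 m flow depth, 20 m/s flow speed:
a = voellmy_accel(20.0, 1.0, 30.0)
```

On steep track segments the driving term dominates and the flow accelerates; on flat runout zones both friction terms act unopposed and the avalanche decelerates, which is what determines runout distance.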
Numerical models have become an essential part of snow avalanche engineering. Recent advances in understanding the rheology of flowing snow and the mechanics of entrainment and deposition have made numerical models more reliable. Coupled with field observations and historical records, they are especially helpful in understanding avalanche flow in complex terrain. However, the application of numerical models poses several new challenges to avalanche engineers. A detailed understanding of the avalanche phenomena is required to specify initial conditions (release zone dimensions and snowcover entrainment rates) as well as the friction parameters, which are no longer based on empirical back-calculations but rather on terrain roughness, vegetation, and snow properties. In this paper we discuss these problems by presenting the computer model RAMMS, which was specially designed by the SLF as a practical tool for avalanche engineers. RAMMS solves the depth-averaged equations governing avalanche flow with first- and second-order numerical solution schemes. A tremendous effort has been invested in the implementation of advanced input and output features. Simulation results are therefore clearly and easily visualized to simplify their interpretation. More importantly, RAMMS has been applied to a series of well-documented avalanches to gauge model performance. In this paper we present the governing differential equations, highlight some of the input and output features of RAMMS and then discuss the simulation of the Gatschiefer avalanche that occurred in April 2008 near Klosters/Monbiel, Switzerland.
Two- and three-dimensional avalanche dynamics models are being increasingly used in hazard-mitigation studies. These models can provide improved and more accurate results for hazard mapping than the simple one-dimensional models presently used in practice. However, two- and three-dimensional models generate an extensive amount of output data, making the interpretation of simulation results more difficult. To perform a simulation in three-dimensional terrain, numerical models require a digital elevation model, specification of avalanche release areas (spatial extent and volume), selection of solution methods, finding an adequate calculation resolution and, finally, the choice of friction parameters. In this paper, the importance and difficulty of correctly setting up and analysing the results of a numerical avalanche dynamics simulation is discussed. We apply the two-dimensional simulation program RAMMS to the 1968 extreme avalanche event In den Arelen. We show the effect of model input variations on simulation results and the dangers and complexities in their interpretation.
Multi-channel photomultipliers (PMs), like the R7600-00-M64 or R5900-00-M64 from Hamamatsu, are often chosen as photodetectors in high-resolution positron emission tomography (PET). A major problem of these PMs is the nonuniform channel gain. In order to solve this problem, light-attenuating masks were created. The aim of the masks is a homogenization of the output of all 64 channels using different hole sizes at the channel positions. The hole area, which is defined individually for each channel, is inversely proportional to the channel gain. Measurements with the light-attenuating masks inserted showed that the channel uniformity improved to a ratio of 1:1.2.
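The inverse-proportionality rule for the hole areas can be stated in a few lines: the lowest-gain channel keeps the full aperture, and stronger channels are attenuated accordingly. The gain values below are hypothetical, not measured R7600/R5900 data:

```python
def mask_hole_areas(gains, full_area_mm2=1.0):
    """Hole area per channel, inversely proportional to channel gain.

    The lowest-gain channel receives the full aperture; higher-gain
    channels are attenuated proportionally so that gain * area is
    equal for all channels (i.e., equalized output).
    """
    g_min = min(gains)
    return [full_area_mm2 * g_min / g for g in gains]

# Hypothetical relative gains for four of the 64 channels:
areas = mask_hole_areas([1.0, 2.0, 1.5, 3.0])
# -> [1.0, 0.5, ~0.667, ~0.333]
```

Since the product of gain and transmitted light is then identical for every channel, the mask flattens the output map without any electronic gain adjustment.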
Combined-cycle gas and steam turbine power plants with pressurized fluidized-bed combustion or pressurized coal gasification enable the conversion of coal into electricity with high efficiency and low emissions. A prerequisite for operating these plants is the removal of dust from the flue gases at high temperatures and pressures. Cleanable filters with ceramic elements are used for this purpose. Reducing gaseous pollutants under the same conditions could replace flue gas scrubbing. The goal of the overall project is to investigate the integration of hot gas filtration and the catalytic decomposition of the pollutants carbon monoxide, hydrocarbons, and nitrogen oxides into a single process step. The focus of this subproject concerns:
the catalytic effect of iron-containing lignite ashes,
the effectiveness of calcium aluminate as a catalyst for the decomposition of unburned hydrocarbons in the hot gas filter,
the numerical simulation of the combined removal of particles and gaseous pollutants from flue gases.
Design, evaluation and comparison of endorectal coils for hybrid MR-PET imaging of the prostate
(2020)
Prostate cancer is one of the most common cancers among men and its early detection is critical for its successful treatment. The use of multimodal imaging, such as MR-PET, is most advantageous as it is able to provide detailed information about the prostate. However, as the human prostate is flexible and can move into different positions under external conditions, it is important to localise the focused region-of-interest using both MRI and PET under identical circumstances. In this work, we designed five commonly used linear and quadrature radiofrequency surface coils suitable for hybrid MR-PET use in endorectal applications. Due to the endorectal design and the shielded PET insert, the outer face of the coils investigated was curved and the region to be imaged was outside the volume of the coil. The tilting angles of the coils were varied with respect to the main magnetic field direction. This was done to approximate the various positions from which the prostate could be imaged. The transmit efficiencies and safety excitation efficiencies from simulations, together with the signal-to-noise ratios from the MR images were calculated and analysed. Overall, it was found that the overlapped loops driven in quadrature were superior to the other types of coils we tested. In order to determine the effect of the different coil designs on PET, transmission scans were carried out, and it was observed that the differences between attenuation maps with and without the coils were negligible. The findings of this work can provide useful guidance for the integration of such coil designs into MR-PET hybrid systems in the future.
Orthodontic treatments involve mechanical forces and thereby cause tooth movements. The applied forces are transmitted to the tooth root and the periodontal ligament, which is compressed on one side and tensed on the other. Indeed, strong forces can lead to tooth root resorption, and the crown-to-tooth ratio is reduced, with the potential for significant clinical impact. The cementum, which covers the tooth root, is a thin mineralized tissue of the periodontium that connects the periodontal ligament with the tooth and is built up by cementoblasts. The impact of tension and compression on these cells has been investigated in several in vivo and in vitro studies demonstrating differences in protein expression and signaling pathways. In summary, the changes in osteogenic markers indicate that cyclic tensile forces support cementogenesis, whereas static tension inhibits it. Furthermore, static compression produces the same protein expression changes as static tension, while cyclic compression leads to the exact opposite of cyclic tension. Consistent with the changes in marker expression, the Wnt/β-catenin and RANKL/OPG signaling pathways show that tissue compression leads to cementum degradation and tensile forces to cementogenesis. However, the cementum, and in particular its cementoblasts, remain a research area that should be explored in more detail to understand the underlying mechanisms of bone resorption and remodeling after orthodontic treatments.
This paper describes the potential for developing a digital twin of society: a dynamic model that can be used to observe, analyze, and predict the evolution of various societal aspects. Such a digital twin can help governmental agencies and policy makers in interpreting trends, understanding challenges, and making decisions regarding investments or policies necessary to support societal development and ensure future prosperity. The paper reviews related work regarding the digital twin paradigm and its applications. The paper presents a motivating case study, an analysis of opportunities and challenges faced by the German federal employment agency, Bundesagentur für Arbeit (BA), proposes solutions using digital twins, and describes initial proofs of concept for such solutions.
The Solar-Institut Jülich (SIJ) and the companies Hilger GmbH and Heliokon GmbH from Germany have developed a small-scale cost-effective heliostat, called “micro heliostat”. Micro heliostats can be deployed in small-scale concentrated solar power (CSP) plants to concentrate the sun's radiation for electricity generation, space or domestic water heating or industrial process heat. In contrast to conventional heliostats, the special feature of a micro heliostat is that it consists of dozens of parallel-moving, interconnected, rotatable mirror facets. The mirror facets array is fixed inside a box-shaped module and is protected from weathering and wind forces by a transparent glass cover. The choice of the building materials for the box, tracking mechanism and mirrors is largely dependent on the selected production process and the intended application of the micro heliostat. Special attention was paid to the material of the tracking mechanism as this has a direct influence on the accuracy of the micro heliostat. The choice of materials for the mirror support structure and the tracking mechanism is made in favor of plastic molded parts. A qualification assessment method has been developed by the SIJ in which a 3D laser scanner is used in combination with a coordinate measuring machine (CMM). For the validation of this assessment method, a single mirror facet was scanned and the slope deviation was computed.
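The slope deviation obtained from such scan data is, in essence, the angle between the measured and the design surface normal at each sampled point. A minimal sketch of that computation; the normal vectors below are illustrative, not measured facet data:

```python
import numpy as np

def slope_deviation_mrad(n_measured, n_design):
    """Angle (mrad) between a measured and a design surface normal."""
    a = np.asarray(n_measured, dtype=float)
    b = np.asarray(n_design, dtype=float)
    c = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return 1000.0 * np.arccos(np.clip(c, -1.0, 1.0))

# Illustrative: a facet normal tilted 1 mrad away from the design normal.
dev = slope_deviation_mrad([np.sin(0.001), 0.0, np.cos(0.001)], [0.0, 0.0, 1.0])
```

Averaging (or taking the RMS of) this quantity over all scanned points yields the usual slope-deviation figure of merit for heliostat facets.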
Objective
This study assesses and quantifies impairment of postoperative magnetic resonance imaging (MRI) at 7 Tesla (T) after implantation of titanium cranial fixation plates (CFPs) for neurosurgical bone flap fixation.
Materials and methods
The study group comprised five patients who were intra-individually examined with 3 and 7 T MRI preoperatively and postoperatively (within 72 h/3 months) after implantation of CFPs. Acquired sequences included T₁-weighted magnetization-prepared rapid-acquisition gradient-echo (MPRAGE), T₂-weighted turbo-spin-echo (TSE) imaging, and susceptibility-weighted imaging (SWI). Two experienced neurosurgeons and a neuroradiologist rated image quality and the presence of artifacts in consensus reading.
Results
Minor artifacts occurred around the CFPs in MPRAGE and T2 TSE at both field strengths, with no significant differences between 3 and 7 T. In SWI, artifacts were accentuated in the early postoperative scans at both field strengths due to intracranial air and hemorrhagic remnants. After resorption, the brain tissue directly adjacent to skull bone could still be assessed. Image quality after 3 months was equal to the preoperative examinations at 3 and 7 T.
Conclusion
Image quality after CFP implantation was not significantly impaired in 7 T MRI, and artifacts were comparable to those in 3 T MRI.
Deammonification for nitrogen removal from municipal wastewater in temperate and cold climate zones is currently limited to the side stream of municipal wastewater treatment plants (MWWTPs). This study developed a conceptual model of a mainstream deammonification plant designed for 30,000 P.E., considering possible solutions for the challenging mainstream conditions in Germany. In addition, the energy-saving potential, nitrogen elimination performance, and construction-related costs of mainstream deammonification were compared to a conventional plant model with a single-stage activated sludge process and upstream denitrification. The results revealed that an additional treatment step combining chemical precipitation and ultra-fine screening is advantageous prior to mainstream deammonification. In this way, the chemical oxygen demand (COD) can be reduced by 80%, lowering the COD:N ratio from 12 to 2.5. Laboratory experiments covering mainstream conditions of temperature (8–20 °C), pH (6–9) and COD:N ratio (1–6) showed an achievable volumetric nitrogen removal rate (VNRR) of at least 50 gN/(m³∙d) for various deammonifying sludges from side-stream deammonification systems in North Rhine-Westphalia, Germany, where m³ denotes reactor volume. Assuming a retained organic nitrogen content of 0.0035 kgNorg./(P.E.∙d) from the daily N loads at the carbon removal stage and a VNRR of 50 gN/(m³∙d) under mainstream conditions, a resident-specific reactor volume of 0.115 m³/(P.E.) is required for mainstream deammonification. This is in the same order of magnitude as for the conventional activated sludge process, i.e., 0.173 m³/(P.E.) for an MWWTP of size class 4. The conventional plant model yielded a total specific electricity demand of 35 kWh/(P.E.∙a) for operating the whole MWWTP and an energy recovery potential of 15.8 kWh/(P.E.∙a) through anaerobic digestion.
In contrast, the mainstream deammonification model plant developed here would require an energy demand of only 21.5 kWh/(P.E.∙a) and offer an energy recovery potential of 24 kWh/(P.E.∙a), making it energetically self-sufficient. The retrofitting costs for implementing mainstream deammonification in existing conventional MWWTPs are nearly negligible, as existing units such as activated sludge reactors, aerators, and monitoring technology can be reused. In this case, however, the mainstream deammonification must meet the performance requirement of a VNRR of about 50 gN/(m³∙d).
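The sizing and energy-balance arithmetic above can be sketched in a few lines. Only the VNRR and the energy figures come from the study; the per-capita nitrogen load to be removed is an assumed illustrative value chosen to reproduce the stated reactor volume, not a figure from the paper, which derives it from the nitrogen balance at the carbon removal stage.

```python
# Reactor sizing: volume per resident = N load to remove / volumetric removal rate
VNRR = 50.0    # gN/(m^3*d), volumetric nitrogen removal rate (from the study)
n_load = 5.75  # gN/(P.E.*d), assumed load to be removed (illustrative)
reactor_volume = n_load / VNRR  # m^3/(P.E.)
print(reactor_volume)           # 0.115, matching the stated specific volume

# Energy balance of the mainstream deammonification model plant
demand = 21.5    # kWh/(P.E.*a), electricity demand (from the study)
recovery = 24.0  # kWh/(P.E.*a), recovery via anaerobic digestion (from the study)
print(recovery >= demand)  # True -> energetically self-sufficient
```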
The adoption of IO-Link in the automation industry has increased over the years. Its main advantage is that it offers a digital point-to-point plug-and-play interface for any type of device or application. This simplifies communication between devices and increases productivity through features such as self-parametrization and maintenance. However, its full potential is not always exploited.
The aim of this paper is to create an Arduino-based framework for the development of generic IO-Link devices and to promote its use for rapid prototyping. By generating the IO device description file (IODD) from a graphical user interface, together with further customizable options for the device application, the end user can intuitively develop generic IO-Link devices. The strength of this framework lies in its simplicity and abstraction, which make it possible to implement any sensor functionality and to connect virtually any type of device to an IO-Link master. This work comprises a general overview of the framework, the technical background of its development, and a proof of concept that demonstrates the implementation workflow.
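The IODD-generation step described above can be illustrated with a small sketch that turns user-entered fields into an XML stub. This is a heavily simplified, hypothetical structure: a real IODD file follows the IO-Link IODD specification and contains many more mandatory sections, and the element and attribute names here are illustrative assumptions, not the framework's actual output.

```python
import xml.etree.ElementTree as ET

def build_iodd_stub(vendor_id, device_id, device_name):
    """Build a *simplified* IODD-like XML stub from GUI-entered fields.
    Element/attribute names are illustrative, not the official IODD schema."""
    root = ET.Element("IODevice")
    ident = ET.SubElement(root, "DeviceIdentity",
                          vendorId=str(vendor_id), deviceId=str(device_id))
    ET.SubElement(ident, "DeviceName").text = device_name
    return ET.tostring(root, encoding="unicode")

xml = build_iodd_stub(1234, 5678, "GenericPrototypeSensor")
print(xml)
```

The point is the workflow, not the schema: the GUI collects identity and parameter data once, and the same data drives both the device firmware configuration and the description file consumed by the IO-Link master.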
The recent amendment to the Ethernet physical layer, the IEEE 802.3cg specification, allows devices to be connected over distances of up to one kilometer and delivers a maximum of 60 watts of power over a twisted pair of wires. This new standard, also known as 10BASE-T1L, promises to overcome the limits of current physical layers used for field devices and to bring them a step closer to Ethernet-based applications. The main advantage of 10BASE-T1L is that it can deliver power and data over the same line over a long distance, where traditional solutions (e.g., CAN, IO-Link, HART) fall short and cannot match its 10 Mbps bandwidth. Because the standard is so recent, 10BASE-T1L is not yet integrated into field devices, and it has been less than two years since silicon manufacturers released the first Ethernet PHY chips. In this paper, we present a design proposal for how field devices could be integrated via a 10BASE-T1L smart switch that allows plug-and-play connectivity for sensors and actuators and is compliant with the Industry 4.0 vision. Instead of proposing a new field-level protocol, we adopt the IO-Link specification, which already includes a plug-and-play approach with features such as diagnosis and device configuration. The main objective of this work is to explore how field devices could be integrated into 10BASE-T1L Ethernet, its adaptation to a well-known protocol, and its integration with Industry 4.0 technologies.
The development of prototype applications with sensors and actuators in the automation industry requires tools that are manufacturer-independent and flexible enough to be modified or extended for specific requirements. Currently, developing prototypes with industrial sensors and actuators is not straightforward. First, the exchange of information depends on the industrial protocol these devices use. Second, configuration and installation are specific to the hardware in use, such as automation controllers or industrial gateways. This means that development for a given industrial protocol depends heavily on the hardware and software provided by vendors. In this work, we propose an Arduino-based rapid-prototyping framework to solve this problem, focusing on the IO-Link protocol. The framework consists of an Arduino shield that acts as the physical layer and software that implements the IO-Link master protocol. The main advantage of such a framework is that applications with industrial devices can be rapid-prototyped with ease, as it is vendor-independent, open-source, and easily ported to other Arduino-compatible boards. In comparison, a typical approach requires proprietary hardware, is not easy to port to another system, and is closed-source.
One central challenge for self-driving cars is proper path planning. Once a trajectory has been found, the next challenge is to follow the precalculated path accurately and safely. The model-predictive controller (MPC) is a common approach for the lateral control of autonomous vehicles. The MPC uses a vehicle dynamics model to predict the future states of the vehicle over a given prediction horizon. However, achieving real-time path control usually entails a large computational load, which forces short prediction horizons. To deal with this load, the control algorithm can be parallelized on the graphics processing unit (GPU). In contrast to the widely used stochastic methods, in this paper we propose a deterministic approach based on grid search. Our approach systematically explores the search space at different levels of granularity. To achieve this, we split the optimization into multiple iterations; the best sequence of each iteration serves as the initial solution for the next, while the granularity increases, resulting in smooth and predictable steering angle sequences. We present a novel GPU-based algorithm and demonstrate its accuracy and real-time capability in a number of real-world experiments.
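The coarse-to-fine iteration scheme described above can be sketched for a single steering angle. This is an illustrative serial sketch, not the authors' GPU implementation: the quadratic cost stands in for the MPC cost over the prediction horizon, and the grid size and iteration count are assumptions.

```python
import numpy as np

def refine_grid_search(cost, lo, hi, points=11, iterations=4):
    """Deterministic coarse-to-fine grid search: each iteration evaluates a
    uniform grid, keeps the best candidate, and shrinks the search window
    around it before refining."""
    best = None
    for _ in range(iterations):
        grid = np.linspace(lo, hi, points)
        costs = np.array([cost(g) for g in grid])  # independent evaluations,
        best = grid[np.argmin(costs)]              # parallelizable on a GPU
        step = (hi - lo) / (points - 1)
        lo, hi = best - step, best + step  # finer window around the best point
    return best

# Toy cost with its minimum at 0.3 rad (illustrative, not a vehicle model)
best_angle = refine_grid_search(lambda a: (a - 0.3) ** 2, -0.5, 0.5)
print(best_angle)  # ≈ 0.3
```

In the MPC setting, each grid point is a candidate steering sequence whose cost is obtained by rolling out the vehicle dynamics model, so all candidates of one iteration can be evaluated in parallel on the GPU.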
Vision of an internal BMW AG tool: optimization of a data-centric BMW AG platform
(2022)
BMW AG pursues several approaches to increasing efficiency through software tools developed in-house. The focus is on enabling data-centric, user-oriented, and highly automated vehicle development. The software tool "Parts List", developed in-house by BMW AG, serves as a digital parts list for development teams. The tool bundles several databases and makes them available in real time, which considerably reduces manual maintenance effort and provides end-to-end consistency for a large volume of data. This thesis focuses on the conceptual and visual optimization of the tool, with particular attention to the user-centered design process, which places the users, i.e. the development teams, at the center.
A laser-enhanced solar sail is a solar sail that is propelled not solely by solar radiation but additionally by a laser beam that illuminates the sail. In this way, the propulsive acceleration of the sail results from the combined action of solar and laser radiation pressure on the sail. The potential source of the laser beam is a laser satellite that converts solar power (in the inner solar system) or nuclear power (in the outer solar system) into laser power. Such a laser satellite (or many of them) can orbit anywhere in the solar system, and its optimal orbit (or their optimal orbits) for a given mission is a subject for future research. This contribution provides the model for an ideal laser-enhanced solar sail and investigates how a laser can enhance the thrusting capability of such a sail. The term "ideal" means that the solar sail is assumed to be perfectly reflecting and that the laser beam is assumed to have a constant areal power density over the whole sail area. Since a laser beam has limited divergence, it can provide radiation pressure at much larger solar distances and increase the radiation pressure force in the desired direction. Therefore, laser-enhanced solar sails may make missions feasible that would otherwise have prohibitively long flight times, e.g., rendezvous missions in the outer solar system. This contribution also analyzes exemplary mission scenarios and presents optimal trajectories without placing too much emphasis on the design and operations of the laser satellites. If the mission studies conclude that laser-enhanced solar sails have advantages over "traditional" solar sails, a detailed study of the laser satellites and the whole system architecture would be the next step.
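The ideal-sail model referred to above can be summarized in one formula. This is the standard perfectly reflecting flat-sail model with a laser term added under the constant-areal-power-density assumption stated in the abstract; the notation is generic rather than the authors' own, and for simplicity the laser is assumed to illuminate the sail from the same direction as the sunlight.

```latex
% Ideal (perfectly reflecting) sail under combined solar and laser illumination.
% A: sail area, \alpha: cone angle between sail normal \hat{n} and the incident light,
% P_0: solar radiation pressure at reference distance r_0, I_L: laser irradiance.
\vec{F} = 2\,\bigl[\,P_\odot(r) + P_L\,\bigr]\,A\,\cos^2\!\alpha\;\hat{n},
\qquad
P_\odot(r) = P_0\left(\frac{r_0}{r}\right)^{\!2},
\qquad
P_L = \frac{I_L}{c}.
```

Because the laser term does not decay with the inverse square of the solar distance (within the divergence limit of the beam), it dominates far from the Sun, which is precisely the enhancement the abstract exploits for outer-solar-system missions.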
The objective of this study is the establishment of a differential scanning calorimetry (DSC) based method for online analysis of the biodegradation of polymers in complex environments. Structural changes during biodegradation, such as an increase in brittleness or crystallinity, can be detected by carefully observing characteristic changes in DSC profiles. Until now, DSC profiles have not been used to draw quantitative conclusions about biodegradation. A new method is presented for quantifying biodegradation using DSC data; the results were validated against two reference methods.
The proposed method is applied to evaluate the biodegradation of three polymeric biomaterials: polyhydroxybutyrate (PHB), cellulose acetate (CA) and Organosolv lignin. The method is suitable for the precise quantification of the biodegradability of PHB. For CA and lignin, conclusions regarding their biodegradation can be drawn at lower resolution. The proposed method is also able to quantify the biodegradation of blends or composite materials, which differentiates it from commonly used degradation-detection methods.
The material use of lignin from biorefineries is an important part of the value-creation process for renewable plant-based raw materials. Lignin is one of the few renewable sources of phenolic constituents, but it is currently mostly used only for its thermal value. The aim of this research project is the functionalization of lignin to improve its adhesive properties. The aromatic amino acid L-DOPA, which is characteristic of the adhesive strength of mussels, is used as the functional group. Lignin is a suitable scaffold because it is a polymer formed by enzyme-catalyzed polymerization. A better understanding of the formation of lignin polymers and their various properties is essential for this development. To investigate the factors influencing chain length and polymerization efficiency, both lignin model compounds (LMK) and dissolved Organosolv lignin are currently being used. Ongoing investigations will show whether the enzymatic polymerization reaction can be transferred to a dissolved lignin polymer from an Organosolv pulping process.
Due to EU regulations and environmental initiatives, the market for sustainable and degradable adhesives is growing steadily. Organosolv (OS) lignin is a commercially low-value side stream of the lignocellulose biorefinery. By mimicking the adhesive properties of structurally related mussel amino acids, OS lignin is to be converted into a strong, fully bio-based adhesive. The adhesion of the mussel glue is driven by the catechol group of the amino acid L-DOPA. The laccase-catalyzed polymerization of lignin and L-DOPA is difficult to control because L-DOPA undergoes a ring-closure reaction. Instead, a two-step reaction with a diamine as an anchor molecule was established. The catechol group, which is enzymatically bound to the lignin amine in the second step, can contribute to both the adhesion and the cohesion of the adhesive through complex formation with Fe(III) ions. The lignin-catechol adhesive is free of petrochemicals and is biodegradable. Initial tensile butt-joint tests achieved a bond strength of 0.3 MPa.
Development of timing-dependent marketing strategies in early phases of the product development process
(1995)
Couponing
(2003)