Scientific Articles
The telecommunications market is undergoing substantial change. New business models, innovative services, and new technologies call for reengineering, transformation, and process standardization. With the enhanced Telecom Operations Map (eTOM), the TM Forum offers an internationally recognized de facto reference process framework based on the specific requirements and characteristics of the telecommunications industry. However, this reference framework contains only a hierarchical collection of processes at different levels of abstraction. A control view, understood as a sequential ordering of activities and hence a real process flow, is missing, as is an end-to-end view of the customer. In this article we extend the eTOM reference model with reference process flows, in which we abstract and generalize knowledge about processes in telecommunications companies. These reference process flows support companies in the structured and transparent (re-)design of their processes. We demonstrate the applicability and usefulness of our reference process flows in two case studies and evaluate them against criteria for the assessment of reference models. The reference process flows have been adopted by the TM Forum into the standard and published as part of eTOM version 9. In addition, we discuss the components of our approach that can also be applied outside the telecommunications industry.
The potential of electronic markets in enabling innovative product bundles through flexible and sustainable partnerships is not yet fully exploited in the telecommunication industry. One reason is that bundling requires seamless de-assembling and re-assembling of business processes, whilst processes in telecommunication companies are often product-dependent and hard to virtualize. We propose a framework for the planning of the virtualization of processes, intended to assist the decision maker in prioritizing the processes to be virtualized: (a) we transfer the virtualization pre-requisites stated by the Process Virtualization Theory in the context of customer-oriented processes in the telecommunication industry and assess their importance in this context, (b) we derive IT-oriented requirements for the removal of virtualization barriers and highlight their demand on changes at different levels of the organization. We present a first evaluation of our approach in a case study and report on lessons learned and further steps to be performed.
As the potential of a next generation network (NGN) is recognised, telecommunication companies consider switching to it. Although the implementation of an NGN seems to be merely a modification of the network infrastructure, it may trigger or require changes in the whole company, because it builds upon the separation between service and transport, a flexible bundling of services to products and the streamlining of the IT infrastructure. We propose a holistic framework, structured into the layers ‘strategy’, ‘processes’ and ‘information systems’ and incorporate into each layer all concepts necessary for the implementation of an NGN, as well as the alignment of these concepts. As a first proof-of-concept for our framework we have performed a case study on the introduction of NGN in a large telecommunication company; we show that our framework captures all topics that are affected by an NGN implementation.
Companies are generally convinced that they put their customers' needs first. Yet in direct interaction with customers they often show weaknesses. The following article illustrates how consistently aligning value-creation processes with core customer needs can yield a threefold effect: sustainably increased customer satisfaction, greater efficiency, and differentiation from competitors.
Customer requirements for networks have changed considerably in recent years. With NFV and SDN, companies are technically able to meet them. Providers, however, face major challenges: in particular, products and processes must be adapted and made more agile in order to turn the strengths of NFV and SDN into customer benefits.
Using the telecommunications industry as an example, this article presents a concrete form of application-oriented research that is beneficial and insightful for both practice and academia. The research object is the set of reference models of the industry body TM Forum, which many telecommunications companies use to transform their structures and systems. The article describes many years of research on the further development and application of these reference models, following a consistently design-oriented research approach. The interplay of continuous further development in cooperation with an industry body and application in a wide range of practical projects leads to a successful symbiosis of practical benefit and scientific insight. The article presents the chosen research approach with concrete examples and, on this basis, discusses recommendations and challenges for design- and practice-oriented research.
In the context of digitalization, the increasing automation of previously manual process steps is one aspect that will have massive effects on the future world of work. Against this background, high expectations are attached to the use of software robots for process automation. The implementation debate is currently shaped in particular by Robotic Process Automation (RPA) and chatbots. Both approaches pursue the common goal of a one-to-one automation of human actions and thus of directly replacing employees with machines. With RPA, processes are learned by software robots and executed automatically. RPA robots emulate inputs on the existing presentation layer, so no changes to existing application systems are necessary. Various RPA solutions are already offered on the market as software products. Chatbots realize the input and output of application systems via natural language, making it possible to automate communication outside the company (e.g., with customers) as well as internal assistance tasks. The article discusses the effects of software robots on the world of work using application examples and explains the company-specific decision on deploying software robots in terms of effectiveness and efficiency goals.
Improving the Mechanical Strength of Dental Applications and Lattice Structures SLM Processed
(2020)
To manufacture custom medical parts or scaffolds with reduced defects and high mechanical characteristics, new research on optimizing the selective laser melting (SLM) parameters is needed. In this work, a biocompatible 316L stainless steel powder is characterized to understand particle size, distribution, shape, and flowability. Examination revealed that the 316L particles are smooth and nearly spherical; their mean diameter is 39.09 μm, and just 10% of them have a diameter of less than 21.18 μm. The SLM parameters under consideration include laser power up to 200 W, scanning speeds of 250–1500 mm/s, 80 μm hatch spacing, 35 μm layer thickness, and a preheated platform. The effect of these parameters on processability is evaluated. More than 100 samples are SLM-manufactured with different process parameters. The tensile results show that it is possible to raise the ultimate tensile strength up to 840 MPa by adapting the SLM parameters for stable processability, avoiding the technological defects caused by residual stress. Compared with other recent studies on SLM technology, the tensile strength is improved by 20%. To validate the established SLM parameters and conditions, complex bioengineering applications such as dental bridges and macro-porous grafts are SLM-processed, demonstrating the potential to manufacture 316L medical products with increased mechanical resistance.
Impaired cerebral autoregulation and neurovascular coupling (NVC) contribute to delayed cerebral ischemia after subarachnoid hemorrhage (SAH). Retinal vessel analysis (RVA) allows non-invasive assessment of vessel dimension and NVC, and has thereby demonstrated predictive value in the context of various neurovascular diseases. Using RVA as a translational approach, we aimed to assess the retinal vessels in patients with SAH. RVA was performed prospectively in 24 patients with acute SAH (group A: day 5–14), in 11 patients 3 months after ictus (group B: day 90 ± 35), and in 35 age-matched healthy controls (group C). Data were acquired using a Retinal Vessel Analyzer (Imedos Systems UG, Jena) for examination of retinal vessel dimension and of NVC using flicker-light excitation. The diameter of retinal vessels (central retinal arteriolar and venular equivalents) was significantly reduced in the acute phase (p < 0.001), with gradual improvement in group B (p < 0.05). Arterial NVC of group A was significantly impaired, with diminished dilatation (p < 0.001) and reduced area under the curve (p < 0.01) when compared to group C. Group B showed persistently prolonged latency of arterial dilation (p < 0.05). Venous NVC was significantly delayed after SAH compared to group C (A p < 0.001; B p < 0.05). To our knowledge, this is the first clinical study to document retinal vasoconstriction and impairment of NVC in patients with SAH. Using non-invasive RVA as a translational approach, characteristic patterns of compromise were detected for the arterial and venous compartments of the neurovascular unit in a time-dependent fashion. Recruitment will continue to facilitate a correlation analysis with clinical course and outcome.
Edge-based and face-based smoothed finite element methods (ES-FEM and FS-FEM, respectively) are modified versions of the finite element method that achieve more accurate results and reduce sensitivity to mesh distortion, at least for linear elements. These properties make the two methods very attractive. However, their implementation in a standard finite element code is nontrivial because it requires heavy and extensive modifications to the code architecture. In this article, we present an element-based formulation of the ES-FEM and FS-FEM methods that allows both to be implemented in a standard finite element code with no modifications to its architecture. Moreover, the element-based formulation makes it easy to handle any type of element, especially in 3D models where, to the best of the authors' knowledge, only tetrahedral elements are used in the FS-FEM applications found in the literature. Shape functions for non-simplex 3D elements are proposed in order to apply FS-FEM to any standard finite element.
Automated driving is now possible in diverse road and traffic conditions. However, there are still situations that automated vehicles cannot handle safely and efficiently. In such cases, a Transition of Control (ToC) is necessary so that the driver takes over the driving task. Executing a ToC requires the driver to gain full situational awareness of the driving environment. If the driver fails to take back control within a limited time, a Minimum Risk Maneuver (MRM) is executed to bring the vehicle into a safe state (e.g., decelerating to a full stop). The execution of ToCs takes some time and can cause traffic disruption and safety risks, which increase if several vehicles execute ToCs/MRMs at similar times in the same area. This study proposes novel C-ITS traffic management measures in which the infrastructure exploits V2X communications to assist Connected and Automated Vehicles (CAVs) in the execution of ToCs. The infrastructure can suggest a spatial distribution of ToCs and inform vehicles of locations where they could execute a safe stop in case of an MRM. This paper reports the first field operational tests that validate the feasibility and quantify the benefits of the proposed infrastructure-assisted ToC and MRM management. The paper also presents the CAV and roadside infrastructure prototypes implemented and used in the trials. The conducted field trials demonstrate that infrastructure-assisted traffic management solutions can reduce safety risks and traffic disruptions.
We consider the numerical approximation of second-order semi-linear parabolic stochastic partial differential equations interpreted in the mild sense which we solve on general two-dimensional domains with a C² boundary with homogeneous Dirichlet boundary conditions. The equations are driven by Gaussian additive noise, and several Lipschitz-like conditions are imposed on the nonlinear function. We discretize in space with a spectral Galerkin method and in time using an explicit Euler-like scheme. For irregular shapes, the necessary Dirichlet eigenvalues and eigenfunctions are obtained from a boundary integral equation method. This yields a nonlinear eigenvalue problem, which is discretized using a boundary element collocation method and is solved with the Beyn contour integral algorithm. We present an error analysis as well as numerical results on an exemplary asymmetric shape, and point out limitations of the approach.
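A minimal sketch of the space-time discretization described above, assuming the unit square as the domain so that the Dirichlet eigenpairs are known in closed form (replacing the boundary-element eigensolver required for irregular shapes); the cubic nonlinearity `f` and the noise spectrum `q` are illustrative assumptions, not the paper's choices:

```python
import numpy as np
from scipy.fft import dstn, idstn

rng = np.random.default_rng(0)
N = 23                                   # sine modes per direction
j = np.arange(1, N + 1)
J, K = np.meshgrid(j, j, indexing="ij")
lam = np.pi ** 2 * (J ** 2 + K ** 2)     # Dirichlet eigenvalues on (0,1)^2

f = lambda u: -u ** 3                    # assumed Lipschitz-like nonlinearity
q = 1.0 / lam                            # assumed decaying (trace-class) noise spectrum

dt, steps = 1e-4, 200                    # dt * max(lam) < 2 keeps explicit Euler stable
a = np.zeros((N, N))                     # Galerkin (sine) coefficients, zero initial data

for _ in range(steps):
    u = idstn(a, type=1, norm="ortho")        # back to physical space
    fa = dstn(f(u), type=1, norm="ortho")     # Galerkin projection of f(u)
    noise = np.sqrt(dt * q) * rng.standard_normal((N, N))
    a = a + dt * (-lam * a + fa) + noise      # explicit Euler step
```

On an irregular C² domain, the sine basis above would be replaced by the eigenfunctions obtained from the boundary integral equation method.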
Light-colored window profile materials: aging behavior based on peroxide-crosslinked EPDM
(2010)
Purpose
In vivo, a loss of mesh porosity triggers scar tissue formation and restricts functionality. The purpose of this study was to evaluate the properties and configuration changes, such as mesh deformation and mesh shrinkage, of a soft mesh implant compared with a conventional stiff mesh implant in vitro and in a porcine model.
Material and Methods
Tensile tests and digital image correlation were used to determine the textile porosity of both mesh types in vitro. Two groups of three pigs each were treated with magnetic resonance imaging (MRI)-visible conventional stiff polyvinylidene fluoride (PVDF) meshes or with soft thermoplastic polyurethane (TPU) meshes (FEG Textiltechnik mbH, Aachen, Germany), respectively. MRI was performed with a pneumoperitoneum at pressures of 0 and 15 mmHg, the latter resulting in bulging of the abdomen. The mesh-induced signal voids were semiautomatically segmented and the mesh areas were determined. From the deformations assessed for both mesh types under both pressure conditions, the porosity change of the meshes after 8 weeks of ingrowth was calculated as an indicator of preserved elastic properties. The explanted specimens were examined histologically for the maturity of the scar (collagen I/III ratio).
Results
In TPU, the in vitro porosity increased steadily; in PVDF, a loss of porosity was observed under mild stresses. In vivo, the mean mesh areas of TPU were 206.8 cm² (± 5.7 cm²) at 0 mmHg pneumoperitoneum and 274.6 cm² (± 5.2 cm²) at 15 mmHg; for PVDF the mean areas were 205.5 cm² (± 8.8 cm²) and 221.5 cm² (± 11.8 cm²), respectively. The pneumoperitoneum-induced pressure increase resulted in a calculated porosity increase of 8.4% for TPU and 1.2% for PVDF. The mean collagen I/III ratio was 8.7 (± 0.5) for TPU and 4.7 (± 0.7) for PVDF.
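The pressure-induced area change can be recomputed from the mean areas reported above; note that the 8.4%/1.2% porosity figures come from the authors' textile-porosity model, not from this simple area ratio:

```python
# Mean mesh areas (cm²) at 0 and 15 mmHg pneumoperitoneum, as reported above
areas = {"TPU": (206.8, 274.6), "PVDF": (205.5, 221.5)}
increase = {m: 100.0 * (a15 - a0) / a0 for m, (a0, a15) in areas.items()}

for mesh, pct in increase.items():
    print(f"{mesh}: {pct:.1f} % area increase from 0 to 15 mmHg")
# → TPU: 32.8 %, PVDF: 7.8 %
```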
Conclusion
The elastic properties of TPU mesh implants result in improved tissue integration compared to conventional PVDF meshes, and they adapt more efficiently to the abdominal wall. © 2017 Wiley Periodicals, Inc. J Biomed Mater Res Part B: Appl Biomater, 106B: 827–833, 2018.
One third of the employees of Saint-Gobain Glass Deutschland GmbH trained their backs regularly for three years. With success, as a concluding evaluation in cooperation with FH Aachen shows: the number of sick days among the training participants has dropped enormously, while their untrained colleagues continue to suffer from back complaints.
Investigation of TRPV1 loss-of-function phenotypes in transgenic shRNA expressing and knockout mice
(2008)
Numerical avalanche dynamics models have become an essential part of snow engineering. Coupled with field observations and historical records, they are especially helpful in understanding avalanche flow in complex terrain. However, their application poses several new challenges to avalanche engineers. A detailed understanding of the avalanche phenomenon is required to construct hazard scenarios, which involve the careful specification of initial conditions (release zone location and dimensions) and the definition of appropriate friction parameters. The interpretation of simulation results requires an understanding of the numerical solution schemes and easy-to-use visualization tools. We discuss these problems by presenting the computer model RAMMS, which was specially designed by the SLF as a practical tool for avalanche engineers. RAMMS solves the depth-averaged equations governing avalanche flow with accurate second-order numerical solution schemes. The model allows the specification of multiple release zones in three-dimensional terrain, and snow cover entrainment is considered. Furthermore, two different flow rheologies can be applied: the standard Voellmy–Salm (VS) approach or a random kinetic energy (RKE) model, which accounts for the random motion and inelastic interactions between snow granules. We present the governing differential equations, highlight some of the input and output features of RAMMS, and then apply the models with entrainment to simulate two well-documented avalanche events recorded at the Vallée de la Sionne test site.
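The standard Voellmy-Salm rheology mentioned above combines a dry Coulomb friction term with a velocity-squared turbulent drag; a small sketch (the parameter values below are illustrative, not RAMMS calibrations):

```python
import math

def voellmy_friction(rho, h, u, slope_deg, mu=0.155, xi=2000.0, g=9.81):
    """Voellmy-Salm frictional resistance per unit area (Pa):
    dry Coulomb friction plus turbulent velocity-squared drag."""
    theta = math.radians(slope_deg)
    coulomb = mu * rho * g * h * math.cos(theta)   # scales with normal load
    turbulent = rho * g * u ** 2 / xi              # scales with speed squared
    return coulomb + turbulent

# Illustrative flow: density 300 kg/m³, depth 1.5 m, 20 m/s on a 30° slope
print(round(voellmy_friction(300.0, 1.5, 20.0, 30.0), 1))  # → 1181.2
```

Lowering μ lengthens run-out on flat terrain, while lowering ξ caps the attainable flow velocity, which is why both parameters must be chosen carefully when constructing hazard scenarios.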
Two- and three-dimensional avalanche dynamics models are being increasingly used in hazard-mitigation studies. These models can provide improved and more accurate results for hazard mapping than the simple one-dimensional models presently used in practice. However, two- and three-dimensional models generate an extensive amount of output data, making the interpretation of simulation results more difficult. To perform a simulation in three-dimensional terrain, numerical models require a digital elevation model, specification of avalanche release areas (spatial extent and volume), selection of solution methods, finding an adequate calculation resolution and, finally, the choice of friction parameters. In this paper, the importance and difficulty of correctly setting up and analysing the results of a numerical avalanche dynamics simulation is discussed. We apply the two-dimensional simulation program RAMMS to the 1968 extreme avalanche event In den Arelen. We show the effect of model input variations on simulation results and the dangers and complexities in their interpretation.
Multichannel photomultipliers (PMs), such as the R7600-00-M64 or R5900-00-M64 from Hamamatsu, are often chosen as photodetectors in high-resolution positron emission tomography (PET). A major problem of these PMs is their nonuniform channel gain. To address this problem, light-attenuating masks were created. The aim of the masks is to homogenize the output of all 64 channels by using different hole sizes at the channel positions. The hole area, defined individually for each channel, is inversely proportional to the channel gain. Measurements with the light-attenuating masks inserted showed that channel homogeneity improved to a ratio of 1:1.2.
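The inverse-proportionality rule for the hole areas can be sketched as follows (the channel gains are made-up example values, not measured data):

```python
import numpy as np

gains = np.array([1.00, 1.15, 0.90, 1.20])  # hypothetical relative channel gains
hole_area = gains.min() / gains             # area ∝ 1/gain; weakest channel keeps full area
effective = gains * hole_area               # light signal transmitted per channel

print(hole_area)    # smallest holes sit over the strongest channels
print(effective)    # identical effective output for every channel
```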
Design, evaluation and comparison of endorectal coils for hybrid MR-PET imaging of the prostate
(2020)
Prostate cancer is one of the most common cancers among men and its early detection is critical for its successful treatment. The use of multimodal imaging, such as MR-PET, is most advantageous as it is able to provide detailed information about the prostate. However, as the human prostate is flexible and can move into different positions under external conditions, it is important to localise the focused region-of-interest using both MRI and PET under identical circumstances. In this work, we designed five commonly used linear and quadrature radiofrequency surface coils suitable for hybrid MR-PET use in endorectal applications. Due to the endorectal design and the shielded PET insert, the outer face of the coils investigated was curved and the region to be imaged was outside the volume of the coil. The tilting angles of the coils were varied with respect to the main magnetic field direction. This was done to approximate the various positions from which the prostate could be imaged. The transmit efficiencies and safety excitation efficiencies from simulations, together with the signal-to-noise ratios from the MR images were calculated and analysed. Overall, it was found that the overlapped loops driven in quadrature were superior to the other types of coils we tested. In order to determine the effect of the different coil designs on PET, transmission scans were carried out, and it was observed that the differences between attenuation maps with and without the coils were negligible. The findings of this work can provide useful guidance for the integration of such coil designs into MR-PET hybrid systems in the future.
Orthodontic treatments involve mechanical forces and thereby cause tooth movement. The applied forces are transmitted to the tooth root and the periodontal ligament, which is compressed on one side and tensed on the other. Indeed, strong forces can lead to tooth root resorption; the crown-to-tooth ratio is then reduced, with the potential for significant clinical impact. The cementum, which covers the tooth root, is a thin mineralized tissue of the periodontium that connects the periodontal ligament with the tooth and is built up by cementoblasts. The impact of tension and compression on these cells has been investigated in several in vivo and in vitro studies demonstrating differences in protein expression and signaling pathways. In summary, changes in osteogenic markers indicate that cyclic tensile forces support cementogenesis whereas static tension inhibits it. Furthermore, static compression produces the same protein expression changes as static tension, but cyclic compression leads to the exact opposite of cyclic tension. Consistent with the marker expression changes, the Wnt/β-catenin and RANKL/OPG signaling pathways show that tissue compression leads to cementum degradation and tensile forces to cementogenesis. However, the cementum, and in particular its cementoblasts, remains a research area that should be explored in more detail to understand the underlying mechanisms of bone resorption and remodeling after orthodontic treatments.
Objective
This study assesses and quantifies impairment of postoperative magnetic resonance imaging (MRI) at 7 Tesla (T) after implantation of titanium cranial fixation plates (CFPs) for neurosurgical bone flap fixation.
Materials and methods
The study group comprised five patients who were intra-individually examined with 3 and 7 T MRI preoperatively and postoperatively (within 72 h/3 months) after implantation of CFPs. Acquired sequences included T₁-weighted magnetization-prepared rapid-acquisition gradient-echo (MPRAGE), T₂-weighted turbo-spin-echo (TSE) imaging, and susceptibility-weighted imaging (SWI). Two experienced neurosurgeons and a neuroradiologist rated image quality and the presence of artifacts in consensus reading.
Results
Minor artifacts occurred around the CFPs in MPRAGE and T2 TSE at both field strengths, with no significant differences between 3 and 7 T. In SWI, artifacts were accentuated in the early postoperative scans at both field strengths due to intracranial air and hemorrhagic remnants. After resorption, the brain tissue directly adjacent to skull bone could still be assessed. Image quality after 3 months was equal to the preoperative examinations at 3 and 7 T.
Conclusion
Image quality after CFP implantation was not significantly impaired in 7 T MRI, and artifacts were comparable to those in 3 T MRI.
The present work aimed to study the mainstream feasibility of the deammonifying sludge from the side stream of the municipal wastewater treatment plant (MWWTP) in Kaster, Germany. For this purpose, the deammonifying sludge available in the side stream was investigated for nitrogen (N) removal with respect to the operational factors temperature (15–30°C), pH (6.0–8.0), and chemical oxygen demand (COD)/N ratio (≤1.5–6.0). The highest and lowest N-removal rates, 0.13 and 0.045 kg/(m³·d), were achieved at 30 and 15°C, respectively. Different pH conditions and COD/N ratios in the partial nitritation/anammox (PN/A) SBRs significantly influenced both the metabolic processes and the associated N-removal rates. The insights gained from the current work signify the possibility of mainstream PN/A at WWTPs and form a solid operational basis for the upcoming semi-technical trials to be conducted prior to full-scale mainstream PN/A at WWTP Kaster and at WWTPs globally.
This study investigated the anaerobic digestion of an algal–bacterial biofilm grown in artificial wastewater in an Algal Turf Scrubber (ATS). The ATS system was located in a greenhouse (50°54′19ʺN, 6°24′55ʺE, Germany) and was exposed to seasonal conditions during the experimental period. The methane (CH₄) potential of untreated algal–bacterial biofilm (UAB) and thermally pretreated biofilm (PAB) was determined by anaerobic batch fermentation using different microbial inocula. Methane productivity of UAB differed significantly between inocula of digested wastepaper, a mixture of manure and maize silage, anaerobic sewage sludge, and percolated green waste. UAB with sewage sludge as inoculum showed the highest methane productivity, and the share of methane in the biogas depended on the inoculum. For PAB, a strong positive impact on methane productivity was identified for the digested wastepaper (116.4%) and the manure/maize silage mixture (107.4%); by contrast, the methane yield was significantly reduced for the digested anaerobic sewage sludge (50.6%) and percolated green waste (43.5%) inocula. To further evaluate the potential of algal–bacterial biofilm for biogas production in wastewater treatment and biogas plants in a circular bioeconomy, scale-up calculations were conducted. An ATS of 0.116 km² would be required for an average municipal wastewater treatment plant, which can be viewed as problematic in terms of space consumption. However, a substantial energy surplus (4.7–12.5 MWh a⁻¹) can be gained by adding algal–bacterial biomass to the anaerobic digester of a municipal wastewater treatment plant. Wastewater treatment with subsequent energy production through algae thus compares favorably with conventional technologies.
Deammonification for nitrogen removal from municipal wastewater in temperate and cold climate zones is currently limited to the side stream of municipal wastewater treatment plants (MWWTP). This study developed a conceptual model of a mainstream deammonification plant, designed for 30,000 P.E., considering possible solutions for the challenging mainstream conditions in Germany. In addition, the energy-saving potential, nitrogen-elimination performance, and construction-related costs of mainstream deammonification were compared to a conventional plant model with a single-stage activated sludge process and upstream denitrification. The results revealed that an additional treatment step combining chemical precipitation and ultra-fine screening is advantageous prior to mainstream deammonification: the chemical oxygen demand (COD) can be reduced by 80%, so that the COD:N ratio drops from 12 to 2.5. Laboratory experiments testing mainstream conditions of temperature (8–20°C), pH (6–9), and COD:N ratio (1–6) showed an achievable volumetric nitrogen removal rate (VNRR) of at least 50 g N/(m³·d) for various deammonifying sludges from side-stream deammonification systems in the state of North Rhine-Westphalia, Germany, where m³ denotes reactor volume. Assuming a retained organic nitrogen content of 0.0035 kg Norg/(P.E.·d) from the daily N loads at the carbon removal stage and a VNRR of 50 g N/(m³·d) under mainstream conditions, a resident-specific reactor volume of 0.115 m³/P.E. is required for mainstream deammonification. This is in the same order of magnitude as the conventional activated sludge process, i.e., 0.173 m³/P.E. for an MWWTP of size class 4. The conventional plant model yielded a total specific electricity demand of 35 kWh/(P.E.·a) for the operation of the whole MWWTP and an energy recovery potential of 15.8 kWh/(P.E.·a) through anaerobic digestion.
In contrast, the mainstream deammonification model plant would require an energy demand of only 21.5 kWh/(P.E.·a) and offer an energy recovery potential of 24 kWh/(P.E.·a), making it energetically self-sufficient. The retrofitting costs for implementing mainstream deammonification in existing conventional MWWTPs are nearly negligible, as existing units such as activated sludge reactors, aerators, and monitoring technology can be reused. However, in this case the mainstream deammonification must meet the performance requirement of a VNRR of about 50 g N/(m³·d).
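The self-sufficiency claim follows directly from the specific figures reported above:

```python
# Specific energy demand and recovery potential in kWh/(P.E.·a), as reported above
plants = {
    "conventional": {"demand": 35.0, "recovery": 15.8},
    "mainstream deammonification": {"demand": 21.5, "recovery": 24.0},
}
net = {name: p["demand"] - p["recovery"] for name, p in plants.items()}

for name, balance in net.items():
    status = "deficit" if balance > 0 else "surplus (self-sufficient)"
    print(f"{name}: net {balance:+.1f} kWh/(P.E.*a), {status}")
```

The conventional model plant retains a net demand of 19.2 kWh/(P.E.·a), while the deammonification model plant recovers 2.5 kWh/(P.E.·a) more than it consumes.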
The objective of this study is the establishment of a differential scanning calorimetry (DSC) based method for online analysis of the biodegradation of polymers in complex environments. Structural changes during biodegradation, such as an increase in brittleness or crystallinity, can be detected by carefully observing characteristic changes in DSC profiles. Until now, DSC profiles have not been used to draw quantitative conclusions about biodegradation. A new method is presented for quantifying the biodegradation using DSC data, whereby the results were validated using two reference methods.
The proposed method is applied to evaluate the biodegradation of three polymeric biomaterials: polyhydroxybutyrate (PHB), cellulose acetate (CA) and Organosolv lignin. The method is suitable for the precise quantification of the biodegradability of PHB. For CA and lignin, conclusions regarding their biodegradation can be drawn with lower resolutions. The proposed method is also able to quantify the biodegradation of blends or composite materials, which differentiates it from commonly used degradation detection methods.
Development of timing-dependent marketing strategies in early phases of the product development process
(1995)
Couponing
(2003)
Digital elevation models (DEMs) represent the three-dimensional terrain and are the basic input for numerical snow avalanche dynamics simulations. DEMs can be acquired using topographic maps or remote-sensing technologies, such as photogrammetry or lidar. Depending on the acquisition technique, different spatial resolutions and qualities are achieved. However, there is a lack of studies that investigate the sensitivity of snow avalanche simulation algorithms to the quality and resolution of DEMs. Here, we perform calculations using the numerical avalanche dynamics model RAMMS, varying the quality and spatial resolution of the underlying DEMs, while holding the simulation parameters constant. We study both channelized and open-terrain avalanche tracks with variable roughness. To quantify the variance of these simulations, we use well-documented large-scale avalanche events from Davos, Switzerland (winter 2007/08), and from our large-scale avalanche test site, Vallée de la Sionne (winter 2005/06). We find that the DEM resolution and quality are critical for modeled flow paths, run-out distances, deposits, velocities and impact pressures. Although a spatial resolution of ~25 m is sufficient for large-scale avalanche modeling, the DEM datasets must be checked carefully for anomalies and artifacts before using them for dynamics calculations.
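A resolution-sensitivity study of the kind described above needs DEMs of the same terrain at several resolutions. A minimal sketch (this is not the RAMMS workflow; the grid sizes and synthetic elevations are assumptions) is to coarsen a fine DEM by block averaging:

```python
import numpy as np

# Minimal sketch: emulate a coarser DEM by block-averaging a finer grid,
# so simulations can be repeated at several resolutions with all other
# parameters held constant. Synthetic data; not the RAMMS workflow.
def coarsen_dem(dem: np.ndarray, factor: int) -> np.ndarray:
    """Block-average a 2-D elevation grid by an integer factor."""
    rows, cols = dem.shape
    rows -= rows % factor  # crop so the grid divides evenly
    cols -= cols % factor
    d = dem[:rows, :cols]
    return d.reshape(rows // factor, factor, cols // factor, factor).mean(axis=(1, 3))

# Hypothetical 5 m DEM of 100x100 cells, coarsened to ~25 m resolution:
dem_5m = np.random.default_rng(0).random((100, 100)) * 1000.0
dem_25m = coarsen_dem(dem_5m, 5)
print(dem_25m.shape)  # (20, 20)
```

Block averaging smooths small-scale roughness, which is exactly the effect whose influence on flow paths and run-out distances the study quantifies.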
This article addresses the investment dilemma in electricity generation, which, in inadequately designed liberalized electricity markets, leads to a level of security of supply that is undesirably low from an overall economic perspective. In the German electricity market, the original causes lie in the suppliers' limited liability for damages in the event of a capacity-related power outage and in the time gap between the last trading opportunity and delivery. The latter prevents the continuous individual balancing of unexpected changes in feed-in and offtake. Furthermore, factors such as the lack of end-consumer participation in the wholesale market, the fact that end consumers can only be disconnected in an undifferentiated manner, and time lags caused by long construction and approval periods for generation capacity, combined with fuel, CO2-certificate and electricity price risks that cannot be insured over long periods, aggravate the problem. Sensible solutions are, on the one hand, increasing intraday trading liquidity to improve the market-clearing function until as shortly before delivery as possible, which can be achieved, for example, by promoting the direct marketing of renewable energies. On the other hand, a wider roll-out of smart meters among end consumers contributes to higher security of supply, since it enables the smoothing of load peaks and allows end consumers to articulate their actual willingness to pay on the wholesale market.
After the federal election on 27 September 2009, the nuclear phase-out is once again at the top of the political agenda in Germany. An up-to-date review of all relevant arguments therefore appears imperative. The perspective should not remain limited to the national level; above all, the European dimension of this issue must be taken into account. At the European level, there is a clear position in favor of nuclear energy. Among the 27 EU member states, a renaissance of nuclear power is currently taking place. The three European institutions advocate the extensive use of nuclear energy as a long-term component of the energy mix. With its decision to phase out nuclear power, Germany belongs to a minority. As part of an ever more closely integrated and ultimately fully integrated European electricity market, Germany will continue to be supplied with nuclear electricity in the long term, regardless of whether nuclear power plants are operated domestically. Shutting down the plants therefore does not achieve the goals of nuclear power opponents, but merely creates additional technical challenges in securing the German electricity supply. For this reason, the new federal government should reverse the German nuclear phase-out.
Next-generation aircraft designs often incorporate multiple large propellers attached along the wingspan (distributed electric propulsion), leading to highly flexible dynamic systems that can exhibit aeroelastic instabilities. This paper introduces a validated methodology to investigate the aeroelastic instabilities of wing–propeller systems and to understand the dynamic mechanism leading to wing and whirl flutter and the transition from one to the other. Factors such as the nacelle position along the wing span and chord and the stiffness of the propulsion system mounting are considered. Additionally, preliminary design guidelines are proposed for flutter-free wing–propeller systems applicable to novel aircraft designs. The study demonstrates how the critical speed of wing–propeller systems is influenced by the mounting stiffness and propeller position. Soft mountings result in whirl flutter, while stiff mountings lead to wing flutter. For the latter, the position of the propeller along the wing span may change the wing mode shapes and thus the flutter mechanism. Propeller positions closer to the wing tip enhance stability, but pusher configurations are more critical due to the mass distribution behind the elastic axis.
Next-generation aircraft designs often incorporate multiple large propellers attached along the wingspan. These highly flexible dynamic systems can exhibit uncommon aeroelastic instabilities, which should be carefully investigated to ensure safe operation. The interaction between the propeller and the wing is of particular importance. It is known that whirl flutter is stabilized by wing motion and wing aerodynamics. This paper investigates the effect of a propeller on wing flutter as a function of span position and mounting stiffness between the propeller and wing. A comparison between tractor and pusher configurations showed that the coupled system is more stable than the standalone wing for propeller positions near the wing tip in both configurations. The wing flutter mechanism is mostly affected by the mass of the propeller and the resulting change in the eigenfrequencies of the wing. For very weak mounting stiffnesses, whirl flutter occurs, which was shown to be stabilized compared to a standalone propeller due to wing motion. The pusher configuration, on the other hand, is, as expected, the more critical configuration due to the attached mass behind the elastic axis.
Today’s society is undergoing a paradigm shift driven by the megatrend of sustainability. This undeniably affects all areas of Western life. This paper aims to find out how the luxury industry is dealing with this change and what adjustments companies are making. For this purpose, interviews were conducted with managers from the luxury industry, in which they were asked about specific measures taken by their companies as well as trends in the industry. In a subsequent evaluation, the trends in the luxury industry were summarized for the areas of ecological, social, and economic sustainability. It was found that the area of environmental sustainability receives significantly more attention than the other sub-areas. Furthermore, the need for a customer survey to validate the industry-based measures was identified.
Two types of microvalves based on temperature-responsive poly(N-isopropylacrylamide) (PNIPAAm) and pH-responsive poly(sodium acrylate) (PSA) hydrogel films have been developed and tested. The PNIPAAm and PSA hydrogel films were prepared by means of in situ photopolymerization directly inside the fluidic channel of a microfluidic chip fabricated by combining Si and SU-8 technologies. The swelling/shrinking properties and height changes of the PNIPAAm and PSA films inside the fluidic channel were studied at deionized-water temperatures from 14 to 36 °C and at different pH values (pH 3–12) of Titrisol buffer, respectively. Additionally, in separate experiments, the lower critical solution temperature (LCST) of the PNIPAAm hydrogel was investigated by means of differential scanning calorimetry (DSC) and surface plasmon resonance (SPR) methods. Mass-flow measurements have shown the feasibility of the prepared hydrogel films to work as an on-chip integrated temperature- or pH-responsive microvalve capable of switching the flow channel on/off.
A microfluidic chip integrating amperometric enzyme sensors for the detection of glucose, glutamate and glutamine in cell-culture fermentation processes has been developed. The enzymes glucose oxidase, glutamate oxidase and glutaminase were immobilized by means of cross-linking with glutaraldehyde on platinum thin-film electrodes integrated within a microfluidic channel. The biosensor chip was coupled to a flow-injection analysis system for electrochemical characterization of the sensors. The sensors have been characterized in terms of sensitivity, linear working range and detection limit. The sensitivity evaluated from the respective peak areas was 1.47, 3.68 and 0.28 μAs/mM for the glucose, glutamate and glutamine sensor, respectively. The calibration curves were linear up to a concentration of 20 mM for glucose and glutamine and up to 10 mM for glutamate. The lower detection limit amounted to 0.05 mM for the glucose and glutamate sensors and 0.1 mM for the glutamine sensor. Experiments in cell-culture medium have demonstrated a good correlation between the glutamate, glutamine and glucose concentrations measured with the chip-based biosensors in differential mode and those measured with commercially available instrumentation. The obtained results demonstrate the feasibility of the realized microfluidic biosensor chip for the monitoring of bioprocesses.
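Sensitivities such as the 1.47 μAs/mM quoted for the glucose sensor are typically obtained as the slope of a linear fit of peak area versus standard concentration. The sketch below shows this with hypothetical calibration points (the data values are invented for illustration; only the fitting procedure is standard):

```python
import numpy as np

# Hedged sketch: extracting a sensitivity from flow-injection-analysis peak
# areas via a linear least-squares fit. The data points are hypothetical,
# chosen only to illustrate a slope near the abstract's 1.47 uAs/mM.
conc_mM = np.array([1.0, 2.0, 5.0, 10.0, 20.0])        # glucose standards, mM
peak_area_uAs = np.array([1.5, 2.9, 7.4, 14.8, 29.3])  # hypothetical peak areas

slope, intercept = np.polyfit(conc_mM, peak_area_uAs, 1)
print(f"sensitivity = {slope:.2f} uAs/mM")  # ~1.47 for these synthetic points
```

The upper end of the linear working range (20 mM here for glucose) is where measured peak areas start to deviate systematically from this fitted line.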
This paper presents results of the development of a modular solid-state sensor system for monitoring cell-culture fermentations. For measuring the electrolyte conductivity, the layout of the interdigitated electrodes was adapted to allow measurements in comparatively highly conductive electrolytes. By cross-linking glucose oxidase with glutaraldehyde and immobilizing it on a platinum electrode, an amperometric glucose sensor with a linear measuring range of up to 2 mM and a sensitivity of 168 nA/mM was realized.
Capacitive field-effect electrolyte-diamond-insulator-semiconductor (EDIS) structures with O-terminated nanocrystalline diamond (NCD) as sensitive gate material have been realized and investigated for the detection of pH, penicillin concentration, and layer-by-layer adsorption of polyelectrolytes. The surface oxidizing procedure of the NCD thin films as well as the seeding and NCD growth process on a Si-SiO2 substrate have been improved to provide highly pH-sensitive, non-porous thin films without damaging the underlying SiO2 layer and with a high coverage of O-terminated sites. The NCD surface topography, roughness, and coverage of the surface groups have been characterized by SEM, AFM and XPS methods. The EDIS sensors with an O-terminated NCD film treated in an oxidizing boiling mixture for 45 min show a pH sensitivity of about 50 mV/pH. The pH-sensitive properties of the NCD have been used to develop an EDIS-based penicillin biosensor with high sensitivity (65-70 mV/decade in the concentration range of 0.25-2.5 mM penicillin G) and a low detection limit (5 μM). The results of label-free electrical detection of layer-by-layer adsorption of charged polyelectrolytes are also presented.
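For context, potentiometric pH responses such as the ~50 mV/pH reported here are usually compared against the Nernstian limit S = 2.303·R·T/F (about 59.2 mV/pH at 25 °C). This comparison is a standard benchmark, not an analysis taken from the abstract:

```python
# Nernstian limit for a potentiometric pH response, S = 2.303*R*T/F.
# Standard benchmark for comparison; not taken from the paper itself.
R = 8.314    # gas constant, J/(mol*K)
F = 96485.0  # Faraday constant, C/mol

def nernst_slope_mV(temp_celsius):
    """Theoretical maximum pH sensitivity in mV/pH at the given temperature."""
    return 2.303 * R * (temp_celsius + 273.15) / F * 1000.0

print(f"{nernst_slope_mV(25.0):.1f} mV/pH")  # 59.2 mV/pH at 25 degC
```

A measured ~50 mV/pH thus corresponds to a sub-Nernstian but still high sensitivity, consistent with a high coverage of O-terminated surface sites.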
Planar and three-dimensional (3D) interdigitated electrodes (IDEs) with electrode digits separated by an insulating barrier of different heights were electrochemically characterized and compared in terms of their sensing properties. Due to the impact of the surface resistance, both types of IDE structures display non-linear behavior in low-ionic-strength solutions. The experimental data were fitted to an electrical equivalent circuit and interpreted taking into account the surface-charge-governed properties. The effect of a charged polyelectrolyte layer electrostatically assembled onto the sensor surface on the surface resistance was studied in solutions with different KCl concentrations. For the same electrode footprint, 3D IDEs show a larger cell constant and a higher sensitivity to molecular adsorption than planar IDEs. The obtained results demonstrate the potential of 3D IDEs as a new transducer structure for the direct label-free sensing of charged molecules.
The conjunction of (bio-)chemical recognition elements with nanoscale biological building blocks such as virus particles is considered a very promising strategy for the creation of biohybrids, opening novel opportunities for label-free biosensing. This work presents a new approach for the development of biosensors using tobacco mosaic virus (TMV) nanotubes or coat proteins (CPs) as enzyme nanocarriers. Sensor chips combining an array of Pt electrodes loaded with glucose oxidase (GOD)-modified TMV nanotubes or CP aggregates were used for the amperometric detection of glucose as a model system for the first time. The presence of TMV nanotubes or CPs on the sensor surface allows the binding of a high amount of precisely positioned enzymes without substantial loss of their activity, and may also ensure accessibility of their active centers for analyte molecules. Specific and efficient immobilization of streptavidin-conjugated GOD ([SA]-GOD) complexes on biotinylated TMV nanotubes or CPs was achieved via bioaffinity binding. These layouts were tested in parallel with glucose sensors with adsorptively immobilized [SA]-GOD, as well as [SA]-GOD crosslinked with glutardialdehyde, and were found to exhibit superior sensor performance. The achieved results underline the great potential of integrating virus/biomolecule hybrids with electronic transducers for future applications in biosensorics and biochips.
Cleaning processes in the food industry. Development of a demonstrator for monitoring
(2017)
The earthquake-resistant design of buried pipeline systems is essential to ensure the functionality of the supply infrastructure after an earthquake event. To avoid network failures, the spatially extended pipeline systems must be seismically designed using suitable computational models. This paper deals with the loading of pipeline systems by seismic wave action and presents suitable approximation approaches as well as a detailed computational model for seismic pipeline analyses. Using these approaches, the influence of key parameters on the seismically induced strains in pipeline systems is investigated in calculation examples.
Silos generally work as storage structures between supply and demand for various goods, and their structural safety has long been of interest to the civil engineering profession. This is especially true for dynamically loaded silos, e.g., in the case of seismic excitation. Particularly thin-walled cylindrical silos are highly vulnerable to seismically induced pressures, which can cause critical buckling phenomena of the silo shell. The analysis of silos can be carried out in two different ways. In the first, the seismic loading is modeled through statically equivalent loads acting on the shell. Alternatively, a time history analysis might be carried out, in which nonlinear phenomena due to the filling as well as the interaction between the shell and the granular material are taken into account. The paper presents a comparison of these approaches. The model used for the nonlinear time history analysis considers the granular material by means of the intergranular strain approach for hypoplasticity theory. The interaction effects between the granular material and the shell are represented by contact elements. Additionally, soil–structure interaction effects are taken into account.
With financial support from the Deutsche Gesellschaft für Mauerwerks- und Wohnungsbau e.V. (DGfM) and the Deutsches Institut für Bautechnik in Berlin (DIBt), two consecutive research projects were carried out to improve the seismic verification of masonry buildings in German earthquake zones. First, the seismic behavior of three modern unreinforced masonry buildings in the Emilia-Romagna region of Italy during the 2012 earthquake series was examined in detail in cooperation with the University of Pavia. Building on the findings of these investigations, an improved seismic design concept for unreinforced masonry buildings was developed. This paper presents the main results of this research and its incorporation into the standards.
On 1 October 2013, the three-year EU research project INSYSME – innovative systems for earthquake-resistant masonry infill walls in reinforced concrete frame structures – was launched. Coordinated by the University of Padua, 16 partners from six European countries (Germany, Greece, Italy, Portugal, Romania, Turkey) are participating. The German partners are the Arbeitsgemeinschaft Mauerziegel from Bonn, the University of Kassel, and the engineering firm SDA-engineering GmbH from Herzogenrath. The goal of the German partners is to develop innovative infill systems made of monolithic, thermally insulating clay-block masonry that not only provide increased earthquake safety but also reliably meet the growing requirements arising from wind loads. The research results are to be integrated by the partner SDA-engineering GmbH into the software solution MINEA [1], which has been available for several years. Further information is available on the project websites [2]. After a brief thematic introduction, this paper presents the results of preliminary tests on infill walls of thin-bed clay-block masonry loaded perpendicular to their plane. The planned work program of the German partners in the INSYSME project is then described.
Within the European joint project INSYSME, the German partners developed the systems IMES and INODIS to improve the seismic behavior of infilled reinforced concrete frames. The aim of both systems is to decouple the reinforced concrete frame from the infill instead of increasing the load-bearing capacity through elaborate and costly additional reinforcement. First results for the IMES system under in-plane and out-of-plane loading are presented.