Low-end embedded platforms place high demands on the developer's judgement: reach for the next-larger processor and use an operating system, or better do without an operating system altogether? The question has a simple answer: use a nanokernel and implement the embedded system with a minimal footprint. Adam Dunkels' protothreads are an exceptionally efficient way to program microcontrollers in a well-structured fashion while avoiding overhead. Even small 8-bit processors can thus handle demanding tasks in a thread model. There is no need to reinvent the wheel every time or to fall back on Linux-based systems.
This article describes an Internet of things (IoT) sensing device with a wireless interface which is powered by the energy-harvesting method of the Wiegand effect. The Wiegand effect, in contrast to continuous sources like photovoltaic or thermal harvesters, provides small amounts of energy discontinuously in pulsed mode. To enable an energy-self-sufficient operation of the sensing device with this pulsed energy source, the output energy of the Wiegand generator is maximized. This energy is used to power up the system and to acquire and process data like position, temperature or other resistively measurable quantities as well as transmit these data via an ultra-low-power ultra-wideband (UWB) data transmitter. A proof-of-concept system was built to prove the feasibility of the approach. The energy consumption of the system during start-up was analysed, traced back in detail to the individual components, compared to the generated energy and processed to identify further optimization options. Based on the proof of concept, an application prototype was developed.
The introduction to the standard DIN EN 62305-3 states clearly and unambiguously: this part of IEC 62305 deals with the protection of structures against physical damage and the protection of persons against injury caused by touch and step voltages. The lightning protection system (LPS) is regarded as the most essential and most effective means of protecting structures against physical damage.
Photoelectrochemical (PEC) biosensors are a rather novel type of biosensor that utilizes light to provide information about the composition of an analyte, enabling light-controlled multi-analyte measurements. For enzymatic PEC biosensors, amperometric detection principles are already known in the literature. In contrast, there is only little information on H+-ion-sensitive PEC biosensors. In this work, we demonstrate the detection of H+ ions produced by H+-generating enzymes, demonstrated exemplarily with penicillinase as a model enzyme on a titanium dioxide photoanode. First, we describe the pH sensitivity of the sensor and study possible photoelectrocatalytic reactions with penicillin. Second, we show the enzymatic PEC detection of penicillin.
This work is an attempt to answer the question of how to use convex programming in the shakedown analysis of structures made of materials with temperature-dependent properties. Based on recently established shakedown theorems and formulations, a dual relationship between upper and lower bounds of the shakedown limit load is found, and an algorithm for shakedown analysis is proposed. While the original problem is neither convex nor concave, the algorithm presented here has the advantage of employing convex programming tools.
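For context, the lower-bound side of the dual pair mentioned above is classically stated in Melan's form; the notation below is generic and assumed for illustration, not taken from the paper:

```latex
% Static (lower-bound) shakedown theorem, Melan type: the structure shakes
% down if a time-independent residual stress field \bar\rho exists such that
\alpha_{SD} \;=\; \max_{\alpha,\,\bar\rho}\ \alpha
\quad\text{s.t.}\quad
F\bigl(\alpha\,\sigma^{E}(x,t) + \bar\rho(x)\bigr) \;\le\; \sigma_Y^{2}\bigl(\theta(x,t)\bigr)
\quad \forall x,\ \forall t
```

Here $\sigma^{E}$ is the fictitious elastic stress response, $F$ a yield function of von Mises type, and $\sigma_Y(\theta)$ the temperature-dependent yield stress; it is precisely this temperature dependence that destroys convexity of the original problem.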
Biomass from various types of organic waste was tested for possible use in hydrogen production. The samples comprised lignified material, green waste, and kitchen scraps such as fruit and vegetable peels and leftover food. The enzymatic pretreatment of organic waste with a combination of five hydrolytic enzymes (cellulase, amylase, glucoamylase, pectinase and xylanase) was investigated to determine the ability of the resulting hydrolyzate to produce hydrogen (H2). The anaerobic rod-shaped bacterium T. neapolitana was used for H2 production. First, the enzymes were characterized with different substrates in preliminary experiments. Subsequently, hydrolyses were carried out with different types of organic waste. In a 48-h hydrolysis, an increase in glucose concentration of 481% was measured for starch-containing waste loads, corresponding to a final glucose concentration of 7.5 g·L−1. In the subsequent fermentation in serum bottles, a H2 yield of 1.26 mmol was obtained in the headspace when Terrific Broth medium with glucose and yeast extract (TBGY medium) was used; with hydrolyzed organic waste, the H2 yield even reached 1.37 mmol. In addition, a dedicated reactor system for the anaerobic fermentation of T. neapolitana to produce H2 was developed. The bioreactor developed here can be operated anaerobically with very low loss of the produced gas; after 24 h, a hydrogen concentration of 83% was measured in the headspace.
In traditional microbial biobutanol production, the solvent must be recovered during the fermentation process to achieve a sufficient space-time yield. Thermal separation is not feasible due to the boiling point of n-butanol. As an integrated and selective solid–liquid separation alternative, solvent-impregnated resins (SIRs) were applied. Two polymeric resins were evaluated and an extractant screening was conducted. Vacuum application with vapor collection in a fixed-bed column, operated as a bioreactor bypass, was successfully implemented as the butanol desorption step. To further improve process economics, fermentation with renewable lignocellulosic substrates was conducted using Clostridium acetobutylicum. Utilization of SIRs was shown to be a potential strategy for solvent removal from fermentation broth, while the bypass column allows for product removal and recovery in a single step.
Obstacle avoidance is critical for unmanned aerial vehicles (UAVs) operating autonomously. Obstacle avoidance algorithms rely either on global environment data or on local sensor data. Local path planners react to unforeseen objects and plan purely on local sensor information. Similarly, animals need to find feasible paths based on local information about their surroundings, so their behavior is a valuable source of inspiration for path planning. Bumblebees tend to fly vertically over far-away obstacles and horizontally around close ones, implying two zones for different flight strategies depending on the distance to obstacles. This work enhances the local path planner 3DVFH* with this bio-inspired strategy. The algorithm alters the goal-driven function of the 3DVFH* to prefer climbing if obstacles are far away. Prior experiments with bumblebees led to two definitions of the flight zone limits depending on the distance to obstacles, leading to two algorithm variants. Both variants reduce the probability of not reaching the goal compared to a 3DVFH* implementation in Matlab/Simulink. The best variant, 3DVFH*b-b, reduces this probability from 70.7% to 18.6% in city-like worlds using a strong vertical evasion strategy. Energy consumption is higher, and flight paths are longer, compared to the algorithm version with a pronounced horizontal evasion tendency. A parameter study analyzes the effect of different weighting factors in the cost function. The best parameter combination shows a failure probability of 6.9% in city-like worlds and reduces energy consumption by 28%. Our findings demonstrate the potential of bio-inspired approaches for improving the performance of local path planning algorithms for UAVs.
Unmanned aerial vehicles (UAVs) constantly gain in versatility. However, more reliable path planning algorithms are required before fully autonomous UAV operation is possible. This work investigates the algorithm 3DVFH* and analyses its dependency on its cost function weights in 2400 environments. The analysis shows that the 3DVFH* can find a suitable path in every environment. However, a particular type of environment requires a specific choice of cost function weights. For a minimal failure probability, interdependencies between the weights of the cost function have to be considered. This dependency reduces the number of control parameters and simplifies the usage of the 3DVFH*. Weights for costs associated with vertical evasion (pitch cost) and vicinity to obstacles (obstacle cost) have the highest influence on the failure probability of the local path planner. Environments with mainly very tall buildings (like large American city centres) require a preference for horizontal avoidance manoeuvres (achieved with high pitch cost weights). In contrast, environments with medium-to-low buildings (like European city centres) benefit from vertical avoidance manoeuvres (achieved with low pitch cost weights). The cost of the vicinity to obstacles also plays an essential role and must be chosen adequately for the environment. Choosing these two weights well is sufficient to reduce the failure probability below 10%.
“Smart” charging at publicly accessible charging stations – Part 2: user behaviour and expectations
(2021)
A closer look at the extensive fields of activity for a project manager (Projektsteuerer) described in the publications of the relevant associations and providers reveals that, beyond the actual project preparation phase with profitability calculations and securing of financing, there is considerable overlap with the activities of the other planning participants listed in the HOAI, in particular those of the building designer, i.e. the architect. Assuming that the client does not want to pay for these services twice, the logical consequence of fully commissioning a project manager would be a reduction in the scope of the architect's commission, combined with a reduction in the architect's fee. On closer examination, the architect thus ultimately loses more than half of his activity and, with it, his basis for earning fees. The client's primary tasks are to define his wishes and set his budget; he commissions the planning participants and accepts their services. His problem is that he cannot assess these services, neither with regard to their completeness nor with regard to their content. This is where the project manager in the proper sense comes in: he must know what the planning participants have to deliver for their money and how to enforce these services. Ultimately, however, he also ensures that the architect's services, i.e. the design and the tender documents, are understood by the client. But why can the architect not make his services, and thus the proof of their fulfilment, comprehensible and credible to the client himself? In the end, it is therefore in the hands of the architects whether their field of activity can be further narrowed, or even taken away, by additional project managers and general contractors intervening in planning and design.
The question of who controls and directs the construction process will remain unresolved as long as architects are able and willing to fill this architectural field of activity in construction management only inadequately.
Shock waves, explosions, impacts or cavitation bubble collapses may generate stress waves in solids, causing cracks or unexpected damage due to focusing, physical nonlinearity or interaction with existing cracks. There is a growing interest in wave propagation, which poses many novel problems to experimentalists and theorists.
Improved collapse loads of thick-walled, crack-containing pipes and vessels are suggested. Very deep cracks have a residual strength which is better modelled by a global limit load. In all burst tests, the ductility of the pressure vessel steels was sufficiently high that the burst pressure could be predicted by limit analysis, with no need to apply fracture mechanics. The relative prognosis error increases, however, for long and deep defects due to uncertainties in geometry and strength data.
The structural reliability with respect to plastic collapse or to inadaptation is formulated on the basis of the lower-bound limit and shakedown theorems. A direct definition of the limit state function is achieved, which permits the use of the highly effective first-order reliability methods (FORM). The theorems are implemented into a general-purpose FEM program in a way capable of large-scale analysis. The limit state function and its gradient are obtained from a mathematical optimization problem. This direct approach considerably reduces the necessary knowledge of uncertain technological input data, the computing time, and the numerical error, leading to highly effective and precise reliability analyses.
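A direct limit state definition of the kind described above typically takes the following generic form (the notation is assumed for illustration, not quoted from the paper):

```latex
% Limit state function defined directly from the shakedown load factor:
g(\mathbf{X}) \;=\; \alpha_{SD}(\mathbf{X}) - 1,
\qquad
P_f \;=\; \Pr\bigl[g(\mathbf{X}) \le 0\bigr] \;\approx\; \Phi(-\beta)
```

Here $\mathbf{X}$ collects the uncertain loads and material data, $\alpha_{SD}(\mathbf{X})$ is obtained from the lower-bound optimization problem, and $\beta$ is the FORM reliability index, i.e. the distance from the origin to the design point in standard normal space.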
Fatigue analyses are conducted with the aim of verifying that thermal ratcheting is limited. To this end it is important to make a clear distinction between the shakedown range and the ratcheting range (continuing deformation). As part of an EU-supported research project, experiments were carried out using a 4-bar model. The experiment comprised a water-cooled internal tube and three insulated heatable outer test bars. The system was subjected to alternating axial forces, superimposed with alternating temperatures at the outer bars. The test parameters were partly selected on the basis of previous shakedown analyses. During the test, temperatures and strains were measured as a function of time. The loads and the resulting stresses were confirmed on an ongoing basis during and after performance of the test. Different material models were applied for the incremental elasto-plastic analysis using the ANSYS program. The results of the simulation are used to verify the FEM-based shakedown analysis.
Limit loads can be calculated with the finite element method (FEM) for any component, defect geometry, and loading. FEM suggests that published long-crack limit formulae for axial defects underestimate the burst pressure for internal surface defects in thick pipes, while limit loads are not conservative for deep cracks and for pressure-loaded crack faces. Very deep cracks have a residual strength, which is modelled by a global collapse load. These observations are combined to derive new analytical local and global collapse loads. The global collapse loads are close to FEM limit analyses for all crack dimensions.
In the new European standard for unfired pressure vessels, EN 13445-3, there are two approaches for carrying out a Design-by-Analysis that cover both the stress categorization method (Annex C) and the direct route method (Annex B) for a check against global plastic deformation and against progressive plastic deformation. This paper presents the direct route in the language of limit and shakedown analysis. This approach leads to an optimization problem. Its solution with Finite Element Analysis is demonstrated for mechanical and thermal actions. One observation from the examples is that the so-called 3f (3Sm) criterion fails to be a reliable check against progressive plastic deformation. Precise conditions are given, which greatly restrict the applicability of the 3f criterion.
Structural design analyses are conducted with the aim of verifying the exclusion of ratchetting. To this end it is important to make a clear distinction between the shakedown range and the ratchetting range. The performed experiment comprised a hollow tension specimen which was subjected to alternating axial forces, superimposed with constant moments. First, a series of uniaxial tests has been carried out in order to calibrate a bounded kinematic hardening rule. The load parameters have been selected on the basis of previous shakedown analyses with the PERMAS code using a kinematic hardening material model. It is shown that this shakedown analysis gives reasonable agreement between the experimental and the numerical results. A linear and a nonlinear kinematic hardening model of two-surface plasticity are compared in material shakedown analysis.
Limit and shakedown analysis are effective methods for assessing the load-carrying capacity of a given structure. The elasto-plastic behavior of the structure subjected to loads varying in a given load domain is characterized by the shakedown load factor, defined as the maximum factor which satisfies the sufficient conditions stated in the corresponding static shakedown theorem. The finite element discretization of the problem may lead to very large convex optimization problems. For their effective solution, a basis reduction method has been developed that makes use of the special problem structure for perfectly plastic material. The paper proposes a modified basis reduction method for direct application to the two-surface plasticity model of bounded kinematic hardening material. The numerical examples considered show an enlargement of the load-carrying capacity due to bounded hardening.
The load-carrying capacity and the safety against plastic limit states are the central questions in the design of structures and passive components in apparatus engineering. A precise answer is most simply given by limit and shakedown analysis. These methods can be based on static and kinematic theorems for lower- and upper-bound analysis. Both may be formulated as optimization problems for finite element discretizations of structures. The problems of large-scale analysis and the extension towards realistic material modelling will be solved in a European research project. Limit and shakedown analyses are briefly demonstrated with illustrative examples.
Extension fractures are typical for deformation under low or no confining pressure. They can be explained by a phenomenological extension strain failure criterion. In the past, a simple empirical criterion for fracture initiation in brittle rock has been developed. In this article, it is shown that the simple extension strain criterion makes unrealistic strength predictions in biaxial compression and tension. To overcome this major limitation, a new extension strain criterion is proposed by adding a weighted principal shear component to the simple criterion. The shear weight is chosen such that the enriched extension strain criterion represents the same failure surface as the Mohr–Coulomb (MC) criterion. Thus, the MC criterion has been derived as an extension strain criterion predicting extension failure modes, which are unexpected in the classical understanding of the failure of cohesive-frictional materials. In progressive damage of rock, the most likely fracture direction is orthogonal to the maximum extension strain, leading to dilatancy. The enriched extension strain criterion is proposed as a threshold surface for crack initiation (CI) and crack damage (CD), and as a failure surface at peak stress (CP). Differently from compressive loading, tensile loading requires only a limited number of critical cracks to cause failure. Therefore, for tensile stresses, the failure criteria must be modified, possibly by a cut-off corresponding to the CI stress. Examples show that the enriched extension strain criterion predicts much lower volumes of damaged rock mass than the simple extension strain criterion.
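One schematic way to write the "simple" and "enriched" criteria contrasted above is as follows; the exact form, weight calibration, and sign conventions are given in the paper, so this should be read as an assumed illustration of the structure of the enrichment only:

```latex
% Simple extension strain criterion: fracture initiates once the least
% principal strain \varepsilon_3 reaches a critical extension value:
\varepsilon_3 \;\le\; \varepsilon_{cr}
% Enriched criterion: a weighted principal shear term is added, with the
% weight w calibrated so the surface coincides with Mohr--Coulomb:
\varepsilon_3 + w\,(\varepsilon_1 - \varepsilon_3) \;\le\; \varepsilon_{cr}
```

The added term $(\varepsilon_1 - \varepsilon_3)$ represents the principal shear contribution that the simple criterion ignores, which is what corrects the unrealistic predictions in biaxial compression and tension.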
Companies often build their businesses based on product information and therefore try to automate the process of information extraction (IE). Since the information source is usually heterogeneous and non-standardized, classic extract, transform, load techniques reach their limits. Hence, companies must implement the newest findings from research to tackle the challenges of process automation. They require a flexible and robust system that is extendable and ensures the optimal processing of the different document types. This paper provides a distributed microservice architecture pattern that enables the automated generation of IE pipelines. Since their optimal design is individual for each input document, the system ensures the ad-hoc generation of pipelines depending on specific document characteristics at runtime. Furthermore, it introduces the automated quality determination of each available pipeline and controls the integration of new microservices based on their impact on the business value. The introduced system enables fast prototyping of the newest approaches from research and supports companies in automating their IE processes. Based on the automated quality determination, it ensures that the generated pipelines always meet defined business requirements when they come into productive use.
The demand for replacements for inoperable organs exceeds the number of available organ transplants. Tissue engineering has therefore developed as a multidisciplinary field of research into autologous in-vitro organs. Such three-dimensional tissue constructs require the use of a bioreactor. The UREPLACE bioreactor is used to grow cells on tubular collagen scaffolds (OPTIMAIX Sponge 1) with a maximal length of 7 cm in order to culture an adequate ureter replacement in vitro. With a rotating unit, (urothelial) cells can be placed homogeneously on the inner scaffold surface. Furthermore, the bioreactor provides stimulation that results in an orientation of the muscle cells. These culturing methods require precise control of several parameters and actuators. A combination of a LabBox and the accompanying LabVision software is used to set and monitor parameters such as rotation angles, velocities, pressures and other important cell culture values. The bioreactor was successfully tested for watertightness. Furthermore, the temperature control was adjusted to 37 °C and the CO2 concentration regulated to 5 %. Additionally, the pH step responses of several substances confirmed the correct functioning of the designed flow chamber. All software used was tested and remained stable for several days.
We study the possibility to fabricate an arbitrary phase mask in a one-step laser-writing process inside the volume of an optical glass substrate. We derive the phase mask from a Gerchberg–Saxton-type algorithm as an array and create each individual phase shift using a refractive index modification of variable axial length. We realize the variable axial length by superimposing refractive index modifications induced by an ultra-short pulsed laser at different focusing depths. Each single modification is created by applying 1000 pulses with 15 μJ pulse energy at 100 kHz to a fixed spot of 25 μm diameter; the focus is then shifted axially in steps of 10 μm. With several proof-of-principle examples, we show the feasibility of our method. In particular, we determine the induced refractive index change to be about Δn = 1.5⋅10−3. We also quantify our current limitations by calculating the overlap in the form of a scalar product, and we discuss possible future improvements.
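The connection between the measured index change and the axial modification length needed for a given phase shift follows from the standard relation (a background fact of wave optics, not specific to this paper):

```latex
\Delta\varphi \;=\; \frac{2\pi}{\lambda}\,\Delta n\,L
```

With $\Delta n \approx 1.5\cdot10^{-3}$, a full $2\pi$ phase step at, e.g., $\lambda = 1\ \mu\mathrm{m}$ would require an axial length of $L = \lambda/\Delta n \approx 0.67\ \mathrm{mm}$, which illustrates why the axial length is built up by stacking modifications in 10 μm steps over a substantial depth range.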
REM sleep without atonia (RSWA) is a key feature for the diagnosis of rapid eye movement (REM) sleep behaviour disorder (RBD). We introduce RBDtector, a novel open-source software to score RSWA according to established SINBAR visual scoring criteria. We assessed muscle activity of the mentalis, flexor digitorum superficialis (FDS), and anterior tibialis (AT) muscles. RSWA was scored manually as tonic, phasic, and any activity by human scorers as well as using RBDtector in 20 subjects. Subsequently, 174 subjects (72 without RBD and 102 with RBD) were analysed with RBDtector to show the algorithm’s applicability. We additionally compared RBDtector estimates to a previously published dataset. RBDtector showed robust conformity with human scorings. The highest congruency was achieved for phasic and any activity of the FDS. Combining mentalis any and FDS any, RBDtector identified RBD subjects with 100% specificity and 96% sensitivity applying a cut-off of 20.6%. Comparable performance was obtained without manual artefact removal. RBD subjects also showed muscle bouts of higher amplitude and longer duration. RBDtector provides estimates of tonic, phasic, and any activity comparable to human scorings. RBDtector, which is freely available, can help identify RBD subjects and provides reliable RSWA metrics.
The international partnership of space agencies has agreed to proceed forward to the Moon sustainably. Activities on the Lunar surface (0.16 g) will allow crewmembers to advance the exploration skills needed when expanding human presence to Mars (0.38 g). Whilst data from actual hypogravity activities are limited to the Apollo missions, simulation studies have indicated that ground reaction forces, mechanical work, muscle activation, and joint angles decrease with declining gravity level. However, these alterations in locomotion biomechanics do not necessarily scale to the gravity level, the reduction in gastrocnemius medialis activation even appears to level off around 0.2 g, while muscle activation pattern remains similar. Thus, it is difficult to predict whether gastrocnemius medialis contractile behavior during running on Moon will basically be the same as on Mars. Therefore, this study investigated lower limb joint kinematics and gastrocnemius medialis behavior during running at 1 g, simulated Martian gravity, and simulated Lunar gravity on the vertical treadmill facility. The results indicate that hypogravity-induced alterations in joint kinematics and contractile behavior still persist between simulated running on the Moon and Mars. This contrasts with the concept of a ceiling effect and should be carefully considered when evaluating exercise prescriptions and the transferability of locomotion practiced in Lunar gravity to Martian gravity.
Contractile behavior of the gastrocnemius medialis muscle during running in simulated hypogravity
(2021)
Vigorous exercise countermeasures in microgravity can largely attenuate muscular degeneration, albeit the extent of applied loading is key for the extent of muscle wasting. Running on the International Space Station is usually performed with maximum loads of 70% body weight (0.7 g). However, it has not been investigated how the reduced musculoskeletal loading affects muscle and series elastic element dynamics, and thereby force and power generation. Therefore, this study examined the effects of running on the vertical treadmill facility, a ground-based analog, at simulated 0.7 g on gastrocnemius medialis contractile behavior. The results reveal that fascicle−series elastic element behavior differs between simulated hypogravity and 1 g running. Whilst shorter peak series elastic element lengths at simulated 0.7 g appear to be the result of lower muscular and gravitational forces acting on it, increased fascicle lengths and decreased velocities could not be anticipated, but may inform the development of optimized running training in hypogravity. However, whether the alterations in contractile behavior precipitate musculoskeletal degeneration warrants further study.
Plant physiology and plant stress: plant physiology will become much more important for humankind because crop yields and cultivation limits are determined by the crops' resistance to stress. To assess and counteract the various stress factors, plant research is necessary to gain information and results on plant physiology.
In the Microsystems Engineering degree programme at the Zweibrücken campus of the university of applied sciences, two new state-of-the-art facilities for the fabrication of microtechnical components are being put into operation: an oxidation furnace for growing thin oxide layers on silicon single crystals and an exposure tool for photolithography. What is special about these facilities: they exist only virtually, i.e. as animations in a computer world.
Consideration of No Fault Found in the diagnostics and maintenance system of rail vehicles
(2020)
Intermittent and non-reproducible faults, also known as No Fault Found, occur in practically all domains and cause high costs. They can often be traced back to imprecise fault descriptions. This article proposes adaptations to the development procedure and to the diagnostic system.
This study reviews the practice of brake tests in freight railways, which is time-consuming and not suitable for detecting certain failure types. Public incident reports are analysed to derive a reasonable brake test hardware and communication architecture, which aims to provide automatic brake tests at lower cost than current solutions. The proposed solution relies exclusively on brake pipe and brake cylinder pressure sensors, a brake release position switch, and radio communication via standard protocols. The approach is embedded in the Wagon 4.0 concept, a holistic approach to a smart freight wagon. The reduction of manual processes yields a strong incentive due to high savings in manual labour and increased productivity.
The IMechE Railway Challenge is held annually in Stapleford, UK. As part of the challenge, students develop and build a locomotive and compete in various disciplines, including automated target braking, optimal energy recovery during braking, and minimal noise emissions. In addition to these and other technical disciplines, the vehicles and teams also compete in non-technical disciplines such as a business case challenge.
A new functionalization method to modify capacitive electrolyte–insulator–semiconductor (EIS) structures with nanofilms is presented. Layers of polyallylamine hydrochloride (PAH) and graphene oxide (GO) with the compound polyaniline:poly(2-acrylamido-2-methyl-1-propanesulfonic acid) (PANI:PAAMPSA) are deposited onto a p-Si/SiO2 chip using the layer-by-layer (LbL) technique. Two different enzymes (urease and penicillinase) are separately immobilized on top of a five-bilayer stack of the PAH:GO/PANI:PAAMPSA-modified EIS chip, forming biosensors for the detection of urea and penicillin, respectively. Electrochemical characterization is performed by constant capacitance (ConCap) measurements, and the film morphology is characterized by atomic force microscopy (AFM) and scanning electron microscopy (SEM). An increase in the average sensitivity of the modified biosensors (EIS–nanofilm–enzyme) of around 15% is found relative to sensors carrying only the enzyme without the nanofilm (EIS–enzyme). In this sense, the nanofilm acts as a stable bioreceptor layer on the EIS chip, improving the output signal in terms of sensitivity and stability.
The aerodynamic performance of propellers strongly depends on their geometry and, consequently, on aeroelastic deformations. Knowledge of the extent of the impact is crucial for overall aircraft performance. An integrated simulation environment for steady aeroelastic propeller simulations is presented. The simulation environment is applied to determine the impact of elastic deformations on the aerodynamic propeller performance. The aerodynamic module includes a blade element momentum approach to calculate aerodynamic loads. The structural module is based on finite beam elements, according to Timoshenko theory, including moderate deflections. Several fixed-pitch propellers with thin-walled cross sections made of both isotropic and non-isotropic materials are investigated. The essential parameters are varied: diameter, disc loading, sweep, material, rotational and flight velocity. The relative change of thrust between rigid and elastic blades quantifies the impact of propeller elasticity. Swept propellers of large diameters or low disc loadings can decrease the thrust significantly. High flight velocities and low material stiffness amplify this tendency. Performance calculations without consideration of propeller elasticity can lead to decreased efficiency. To avoid cost- and time-intensive redesigns, propeller elasticity should be considered for swept planforms and low disc loadings.
Lifting propellers are of increasing interest for Advanced Air Mobility. All propellers and rotors are initially twisted beams, showing significant extension–twist coupling and centrifugal twisting. Torsional deformations severely impact aerodynamic performance. This paper presents a novel approach to assess different reasons for torsional deformations. A reduced-order model runs large parameter sweeps with algebraic formulations and numerical solution procedures. Generic beams represent three different propeller types for General Aviation, Commercial Aviation, and Advanced Air Mobility. Simulations include solid and hollow cross-sections made of aluminum, steel, and carbon fiber-reinforced polymer. The investigation shows that centrifugal twisting moments depend on both the elastic and initial twist. The determination of the centrifugal twisting moment solely based on the initial twist suffers from errors exceeding 5% in some cases. The nonlinear parts of the torsional rigidity do not significantly impact the overall torsional rigidity for the investigated propeller types. The extension–twist coupling related to the initial and elastic twist in combination with tension forces significantly impacts the net cross-sectional torsional loads. While the increase in torsional stiffness due to initial twist contributes to the overall stiffness for General and Commercial Aviation propellers, its contribution to the lift propeller’s stiffness is limited. The paper closes with the presentation of approximations for each effect identified as significant. Numerical evaluations are necessary to determine each effect for inhomogeneous cross-sections made of anisotropic material.
Quantitative nuclear magnetic resonance (qNMR) is considered a powerful tool for multicomponent mixture analysis as well as for the purity determination of single compounds. Special attention is currently paid to the training of operators and study directors involved in qNMR testing. To ensure that only qualified personnel perform sample preparation at our GxP-accredited laboratory, a weighing test was proposed. Sixteen participants performed six-fold weighing of a binary mixture of dibutylated hydroxytoluene (BHT) and 1,2,4,5-tetrachloro-3-nitrobenzene (TCNB). To evaluate the quality of data analysis, all spectra were evaluated both manually by a qNMR expert and with an in-house-developed automated routine. The results revealed that the mean values are comparable and that both evaluation approaches are free of systematic error. However, automated evaluation resulted in an approximately 20% increase in precision. The same findings were obtained for qNMR analysis of 32 compounds used in the pharmaceutical industry. The weighing test by six-fold determination in binary mixtures and the automated qNMR methodology can be recommended as efficient tools for evaluating staff proficiency. The automated qNMR method significantly increases the throughput and precision of qNMR for routine measurements and extends its application scope.
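The weighing test rests on the standard internal-standard qNMR purity relation, which makes clear why weighing precision propagates directly into the result. A minimal sketch; the numeric example values in the test are hypothetical, not taken from the study:

```python
def qnmr_purity(I_x, I_std, N_x, N_std, M_x, M_std, m_x, m_std, P_std):
    """Internal-standard qNMR purity equation:

        P_x = (I_x / I_std) * (N_std / N_x) * (M_x / M_std) * (m_std / m_x) * P_std

    I = integrated signal area, N = number of protons behind the signal,
    M = molar mass, m = weighed mass, P = purity (mass fraction);
    subscript x = analyte, std = internal standard.
    """
    return (I_x / I_std) * (N_std / N_x) * (M_x / M_std) * (m_std / m_x) * P_std
```

Because the weighed masses m_x and m_std enter the equation directly, any scatter introduced at the balance limits the achievable precision regardless of how well the spectra are integrated — which is exactly what the six-fold weighing test probes.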
This study addresses a proof-of-concept experiment with a biocompatible screen-printed carbon electrode deposited onto a biocompatible and biodegradable substrate, which is made of fibroin, a protein derived from silk of the Bombyx mori silkworm. To demonstrate the sensor performance, the carbon electrode is functionalized as a glucose biosensor with the enzyme glucose oxidase and encapsulated with a silicone rubber to ensure biocompatibility of the contact wires. The carbon electrode is fabricated by means of thick-film technology, including a curing step to solidify the carbon paste. The influence of the curing temperature and curing time on the electrode morphology is analyzed via scanning electron microscopy. The electrochemical characterization of the glucose biosensor is performed by amperometric/voltammetric measurements of different glucose concentrations in phosphate buffer. Herein, systematic studies at applied potentials from 500 to 1200 mV at the carbon working electrode (vs. the Ag/AgCl reference electrode) allow the optimal working potential to be determined. Additionally, the influence of the curing parameters on the glucose sensitivity is examined over a time period of up to 361 days. The sensor shows a negligible cross-sensitivity toward ascorbic acid, noradrenaline, and adrenaline. The developed biocompatible biosensor is highly promising for future in vivo and epidermal applications.
Miniaturized electrolyte–insulator–semiconductor capacitors (EISCAPs) with ultrathin gate insulators have been studied in terms of their pH-sensitive sensor characteristics: three different EISCAP systems consisting of Al–p-Si–Ta2O5(5 nm), Al–p-Si–Si3N4(1 or 2 nm)–Ta2O5(5 nm), and Al–p-Si–SiO2(3.6 nm)–Ta2O5(5 nm) layer structures are characterized in buffer solutions with different pH values by means of capacitance–voltage and constant-capacitance measurements. The SiO2 and Si3N4 gate insulators are deposited by rapid thermal oxidation and rapid thermal nitridation, respectively, whereas the Ta2O5 film is prepared by atomic layer deposition. All EISCAP systems show a clear pH response, favoring the stacked gate insulator SiO2–Ta2O5 when considering the overall sensor characteristics, while the Si3N4(1 nm)–Ta2O5 stack delivers the largest accumulation capacitance (due to the lower equivalent oxide thickness) and a higher steepness in the slope of the capacitance–voltage curve among the studied stacked gate insulator systems.
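The ranking by accumulation capacitance follows from the equivalent oxide thickness (EOT) of each stack: the insulator layers act as capacitors in series, so each layer contributes its physical thickness scaled by k_SiO2/k_i. A minimal sketch, using typical literature permittivities (k ≈ 7.5 for Si3N4, k ≈ 25 for Ta2O5) as assumptions rather than values from the article:

```python
def equivalent_oxide_thickness(layers, k_sio2=3.9):
    """EOT of a stacked gate insulator (series capacitors).

    layers: list of (thickness_nm, relative_permittivity) tuples.
    Each layer contributes t_i * k_SiO2 / k_i to the total EOT.
    """
    return sum(t * k_sio2 / k for t, k in layers)
```

With these assumed permittivities, the Si3N4(1 nm)–Ta2O5(5 nm) stack comes out near 1.3 nm EOT, well below the SiO2(3.6 nm)–Ta2O5(5 nm) stack, consistent with its larger accumulation capacitance.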
This study analyses the expected utilization of an urban distribution grid under high penetration of photovoltaics and e-mobility with charging infrastructure on a residential level. The grid utilization and the corresponding power flow are evaluated while varying the control strategies and the installed photovoltaic capacity in different scenarios. Four scenarios are used to analyze the impact of e-mobility. The individual mobility demand is modelled based on the largest German mobility study, “Mobilität in Deutschland”, which is carried out every 5 years. To estimate the ramp-up of photovoltaic generation, a potential analysis of the roof surfaces in the supply area is carried out via an evaluation of an open solar potential study. The photovoltaic feed-in time series is derived individually for each installed system in a resolution of 15 min. The residential consumption is estimated using historical smart meter data collected in London between 2012 and 2014. For a realistic charging demand, each residential household decides daily, based on the state of charge, whether its vehicle needs to be charged. The resulting charging time series depends on the underlying behavior scenario. Market prices and mobility demand are therefore used as scenario input parameters for a utility function, based on the current state of charge, to model individual behavior. The aggregated electricity demand is the starting point of the power flow calculation. The evaluation is carried out for an urban region with approximately 3100 residents. The analysis shows that increased penetration of photovoltaics combined with a flexible and adaptive charging strategy can maximize PV usage and reduce the need for congestion-related intervention by the grid operator: the energy charged from the grid falls by 30%, which reduces the average price of a charged kWh by 35%, from 21.8 ct/kWh without PV optimization to 14 ct/kWh.
The resulting grid congestions are managed by implementing an intelligent price or control signal. The analysis was carried out using data from a real German grid with 10 subgrids. The entire software can be adapted to analyze different distribution grids and is publicly available as an open-source library on GitHub.
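The daily household charging decision described above can be sketched as a simple utility comparison between the need for energy and the current price signal. All names, weights, and thresholds here are hypothetical placeholders, not the article's actual model:

```python
def decides_to_charge(soc, price_ct_per_kwh, mobility_demand,
                      soc_min=0.3, price_ref=21.8, price_weight=0.5):
    """Hypothetical daily charging decision for one household.

    soc: current battery state of charge (0..1)
    price_ct_per_kwh: current price/control signal from the grid operator
    mobility_demand: expected trip energy for the day, as an SoC fraction

    Charge when the urgency of refilling the battery plus the price
    advantage of the current signal is positive.
    """
    # Energy actually needed to cover the reserve plus today's trips
    urgency = max(0.0, soc_min + mobility_demand - soc)
    # Positive when the signalled price is below the reference price
    price_bonus = price_weight * (price_ref - price_ct_per_kwh) / price_ref
    return urgency + price_bonus > 0.0
```

A utility of this shape reproduces the behavior the study exploits: a household with a nearly full battery defers charging when the price signal is high (e.g. during congestion), while cheap PV-surplus periods attract charging even without urgent demand.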
There is a growing demand for more flexibility in manufacturing to counter the volatility and unpredictability of the markets and provide more individualization for customers. However, the design and implementation of flexibility within manufacturing systems are costly and only economically viable if matched to actual demand fluctuations. To this end, companies are considering additive manufacturing (AM) to make production more flexible. This paper develops a conceptual model for quantifying the impact of AM on volume and mix flexibility within production systems in the early stages of the factory-planning process. Together with the model, an application guideline is presented to help planners with flexibility quantification and the factory design process. Following the development of the model and guideline, a case study is presented to indicate the potential impact additive technologies can have on manufacturing flexibility. Within the case study, various scenarios with different production system configurations and production programs are analyzed, and the impact of the additive technologies on volume and mix flexibility is calculated. This work will allow factory planners to determine the potential impacts of AM on manufacturing flexibility in an early planning stage and design their production systems accordingly.
Jürgen Lohr, born 1962, works in software development in the "Interaktive Multimedia" project at Telekom AG, Development Center Berlin. First published in: Telekom-Praxis, 1996 issue. Table of contents: 1 Introduction 1.1 Overview 1.2 New services and applications 2 Distribution model and architecture 3 Technologies 3.1 Network 3.2 Computer technologies 3.3 Server tasks 4 Planned use of the pilot projects 4.1 Telekom pilots 4.2 Show-Case Berlin 5 Server architectures used 5.1 Berlin - SEL/Alcatel 5.2 Hamburg - Philips 5.3 Köln/Bonn - Digital, FUBA and Nokia 5.4 Nürnberg - Oracle, nCube and Sequent 5.5 Stuttgart - SEL/Alcatel, Hewlett Packard and Bosch 6 Future aspects 6.1 DVB 6.2 DAVIC 6.3 Further aspects 7 Summary 8 References 9 Abbreviations used
First published in Telekom-Praxis, 1997 issue. By Jürgen Lohr, born 1962, working in software development in the "Interaktive Multimedia" project at Deutsche Telekom AG, Development Center Berlin. 26 pp. The article deals with the universal communication platform for new, interactive, multimedia services and applications. Starting from the services, a reference model for open communication and the communication platform are briefly presented. Furthermore, the XAPI is described in terms of its basic concepts, the phases of communication, and the status model. The implemented service providers are also explained. Finally, future plans from the ITU and DAVIC standardization projects as well as further implementations are outlined.
First published in Telekom-Praxis, 2000 issue. 24 pp. Innovative multimedia services are shaped by the globalization and convergence of the markets as well as by provider strategies. The fundamental fields of innovation are: global access, navigation, and intelligent content. The MPEG standards, in particular MPEG-4 and MPEG-7, help to meet these requirements. Furthermore, they offer providers and customers future-proofness and secure the longevity of innovative products. The upward compatibility of the MPEG standards makes it possible to avoid overlaps and to open up new dimensions.
In: Unterrichtsblätter / Deutsche Telekom AG. 53. 2000. 7. pp. 326-340. (15 pp.) Through the data reduction achieved by compression technology, multimedia services attain an economic efficiency that allows broader deployment of broadband services. The services no longer require such large transmission and storage capacities for the various media. In the developed methods, the so-called MPEG (Motion Picture Experts Group) standards, the video and audio signals are converted to the digital domain and irrelevant signal components are then removed. The resulting data stream requires less bandwidth for transmission to the end customer. The MPEG organization was founded back in 1988 and is a joint body of the ISO (International Organization for Standardization) and the IEC (International Electrotechnical Commission), which deals with the standardization of coding and compression methods for digital image, video, and audio formats. Meanwhile, four important standards have emerged: MPEG-1, MPEG-2, and MPEG-4 have been adopted, with MPEG-7 in preparation. Since the fundamentals of MPEG-1, MPEG-2, and MPEG Audio have already been covered in other articles, only the new and current MPEG standards are presented here.