Scientific Articles
The paper deals with the asymptotic behaviour of estimators, statistical tests and confidence intervals for L²-distances to uniformity based on the empirical distribution function, the integrated empirical distribution function and the integrated empirical survival function. Approximations of power functions, confidence intervals for the L²-distances and statistical neighbourhood-of-uniformity validation tests are obtained as main applications. The finite sample behaviour of the procedures is illustrated by a simulation study.
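For orientation, the simplest member of this family of statistics is the classical Cramér–von Mises L²-distance of the empirical distribution function to the uniform CDF on [0,1]. The following is a minimal NumPy sketch of that baseline statistic (not the paper's exact procedures; sample sizes and distributions are illustrative):

```python
import numpy as np

def cvm_uniformity(x):
    """n * integral_0^1 (F_n(t) - t)^2 dt for data on [0,1],
    via the standard closed form of the Cramer-von Mises statistic."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    return 1.0 / (12 * n) + np.sum((x - (2 * i - 1) / (2 * n)) ** 2)

rng = np.random.default_rng(0)
u = rng.uniform(size=500)      # data that are uniform on [0,1]
b = rng.beta(2, 5, size=500)   # data that are clearly non-uniform
# The statistic stays small for the uniform sample and grows with n
# times the squared L2-distance for the beta sample.
```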
On the basis of bivariate data, assumed to be observations of independent copies of a random vector (S,N), we consider testing the hypothesis that the distribution of (S,N) belongs to the parametric class of distributions arising from the compound Poisson exponential model. Typically, this model is used in stochastic hydrology, with N as the number of raindays and S as the total rainfall amount during a certain time period, or in actuarial science, with N as the number of losses and S as the total loss expenditure during a certain time period. The compound Poisson exponential model is characterized by the property that a specific transform associated with the distribution of (S,N) satisfies a certain differential equation. Replacing the transform in this equation with its empirical counterpart, we obtain an expression whose weighted integrated square serves as test statistic. We deal with two variants of the latter, one of which is invariant under scale transformations of the S-part by fixed positive constants. Critical values are obtained by a parametric bootstrap procedure. The asymptotic behavior of the tests is discussed. A simulation study demonstrates the performance of the tests in the finite sample case. The procedure is applied to rainfall data and to an actuarial dataset. A multivariate extension is also discussed.
In a special paired sample case, Hotelling’s T² test based on the differences of the paired random vectors is the likelihood ratio test for testing the hypothesis that the paired random vectors have the same mean; with respect to a special group of affine linear transformations it is the uniformly most powerful invariant test for the general alternative of a difference in mean. We present an elementary straightforward proof of this result. The likelihood ratio test for testing the hypothesis that the covariance structure is of the assumed special form is derived and discussed. Applications to real data are given.
The paper deals with an asymptotic relative efficiency concept for confidence regions of multidimensional parameters that is based on the expected volumes of the confidence regions. Under standard conditions the asymptotic relative efficiencies of confidence regions are seen to be certain powers of the ratio of the limits of the expected volumes. These limits are explicitly derived for confidence regions associated with certain plugin estimators, likelihood ratio tests and Wald tests. Under regularity conditions, the asymptotic relative efficiency of each of these procedures with respect to each one of its competitors is equal to 1. The results are applied to multivariate normal distributions and multinomial distributions in a fairly general setting.
Let X₁,…,Xₙ be independent and identically distributed random variables with distribution F. Assuming that there are measurable functions f:R²→R and g:R²→R characterizing a family F of distributions on the Borel sets of R in the way that the random variables f(X₁,X₂),g(X₁,X₂) are independent, if and only if F∈F, we propose to treat the testing problem H:F∈F,K:F∉F by applying a consistent nonparametric independence test to the bivariate sample variables (f(Xᵢ,Xⱼ),g(Xᵢ,Xⱼ)),1⩽i,j⩽n,i≠j. A parametric bootstrap procedure needed to get critical values is shown to work. The consistency of the test is discussed. The power performance of the procedure is compared with that of the classical tests of Kolmogorov–Smirnov and Cramér–von Mises in the special cases where F is the family of gamma distributions or the family of inverse Gaussian distributions.
Hotelling’s T² tests in paired and independent survey samples are compared using the traditional asymptotic efficiency concepts of Hodges–Lehmann, Bahadur and Pitman, as well as through criteria based on the volumes of corresponding confidence regions. Conditions characterizing the superiority of a procedure are given in terms of population canonical correlation type coefficients. Statistical tests for checking these conditions are developed. Test statistics based on the eigenvalues of a symmetrized sample cross-covariance matrix are suggested, as well as test statistics based on sample canonical correlation type coefficients.
Tricarbonylrhenium(I) and -technetium(I) halide (halide = Cl and Br) complexes of ligands derived from 4,5-diazafluoren-9-one (df) and 1,10-phenanthroline-5,6-dione (phen) derivatives of benzoic and 2-hydroxybenzoic acid hydrazides have been prepared. The complexes have been characterized by elemental analysis, MS, IR, 1H NMR and absorption and emission UV/Vis spectroscopic methods. The metal centres (ReI and TcI) are coordinated through the imine nitrogen atoms and form five-membered chelate rings, whereas the hydrazone groups remain uncoordinated. The 1H NMR spectra suggest the same behaviour in solution on the basis of only marginal variations in the chemical shifts of the hydrazine protons.
This article describes the fabrication, characterization and application of an epidermal temporary-transfer tattoo-based potentiometric sensor, coupled with a miniaturized wearable wireless transceiver, for real-time monitoring of sodium in human perspiration. Sodium excreted during perspiration is an excellent marker for electrolyte imbalance and provides valuable information regarding an individual's physical and mental wellbeing. The new skin-worn non-invasive tattoo-like sensing device was realized by amalgamating several state-of-the-art thick-film, laser printing, solid-state potentiometry, fluidic and wireless technologies. The resulting tattoo-based potentiometric sodium sensor displays a rapid near-Nernstian response with negligible carryover effects, and good resiliency against various mechanical deformations experienced by the human epidermis. On-body testing of the tattoo sensor coupled to a wireless transceiver during exercise activity demonstrated its ability to continuously monitor sweat sodium dynamics. The real-time sweat sodium concentration was transmitted wirelessly via a body-worn transceiver from the sodium tattoo sensor to a notebook while the subjects perspired on a stationary cycle. The favorable analytical performance, along with the wearable nature of the wireless transceiver, makes the new epidermal potentiometric sensing system attractive for continuous monitoring of sodium dynamics in human perspiration during diverse activities relevant to the healthcare, fitness, military and skin-care domains.
Comparison of Intravenous Immunoglobulins for Naturally Occurring Autoantibodies against Amyloid-β
(2010)
BACKGROUND
Immunosuppression is often considered an indication for antibiotic prophylaxis to prevent surgical site infections (SSI) when performing skin surgery. However, data on the risk of developing SSI after dermatologic surgery in immunosuppressed patients are limited.
PATIENTS AND METHODS
All patients of the Department of Dermatology and Allergology at the University Hospital of RWTH Aachen in Aachen, Germany, who underwent hospitalization for a dermatologic surgery between June 2016 and January 2017 (6 months), were followed up after surgery until completion of the wound healing process. The follow-up addressed the occurrence of SSI and the need for systemic antibiotics after the operative procedure. Immunocompromised patients were compared with immunocompetent patients. The investigation was conducted as a retrospective analysis of patient records.
RESULTS
The authors performed 284 dermatologic surgeries in 177 patients. Nineteen percent (54/284) of the skin surgeries were performed on immunocompromised patients. The most common indications for surgical treatment were nonmelanoma skin cancer and malignant melanomas. Surgical site infections occurred in 6.7% (19/284) of the cases. In 95% (18/19), systemic antibiotic treatment was needed. Twenty-one percent of all SSI (4/19) were seen in immunosuppressed patients.
CONCLUSION
According to the authors' data, immunosuppression does not represent a significant risk factor for SSI after dermatologic surgery. However, larger prospective studies are needed to make specific recommendations on the use of antibiotic prophylaxis while performing skin surgery in these patients.
The available data on complications after dermatologic surgery have improved over the past years. In particular, additional risk factors have been identified for surgical site infections (SSI). Purulent surgical sites, older age, involvement of head, neck, and acral regions, and also the involvement of less experienced surgeons have been reported to increase the risk of SSI after dermatologic surgeries.1 In general, the incidence of SSI after skin surgery is considered to be low.1,2 However, antibiotics in dermatologic surgeries, especially in the perioperative setting, seem to be overused,3,4 particularly in view of developing antibiotic resistance and side effects.
Immunosuppression has been recommended to be taken into consideration as an additional indication for antibiotic prophylaxis to prevent SSI after skin surgery in special cases.5,6 However, these recommendations do not specify the exact dermatologic surgeries, and they were not specifically developed for dermatologic surgery patients and treatments but were adopted from other surgical fields.6 According to a survey conducted on American College of Mohs Surgery members in 2012, 13% to 29% of the surgeons administered antibiotic prophylaxis to immunocompromised patients to prevent SSI while performing dermatologic surgery on noninfected skin,3 although this was not recommended by the Journal of the American Academy of Dermatology advisory statement. Indeed, the data on the risk of developing SSI after dermatologic surgery in immunosuppressed patients are limited. However, it is possible that, due to the insufficient evidence on the risk of SSI occurrence in this patient group, dermatologic surgeons tend to overuse perioperative antibiotic prophylaxis.
To make specific recommendations on the use of antibiotic prophylaxis in immunosuppressed patients in the field of skin surgery, more information about the incidence of SSI after dermatologic surgery in these patients is needed. The aim of this study was to fill this data gap by investigating whether there is an increased risk of SSI after skin surgery in immunocompromised patients compared with immunocompetent patients.
Melting probes are a proven tool for the exploration of thick ice layers and clean sampling of subglacial water on Earth. Their compact size and ease of operation also make them a key technology for the future exploration of icy moons in our Solar System, most prominently Europa and Enceladus. For both mission planning and hardware engineering, metrics such as efficiency and expected performance in terms of achievable speed, power requirements, and necessary heating power have to be known.
Theoretical studies aim at describing thermal losses on the one hand, while laboratory experiments and field tests allow an empirical investigation of the true performance on the other hand. To investigate the practical value of a performance model for the operational performance in extraterrestrial environments, we first contrast measured data from terrestrial field tests on temperate and polythermal glaciers with results from basic heat loss models and a melt trajectory model. For this purpose, we propose conventions for the determination of two different efficiencies that can be applied to both measured data and models. One definition of efficiency is related to the melting head only, while the other definition considers the melting probe as a whole. We also present methods to combine several sources of heat loss for probes with a circular cross-section, and to translate the geometry of probes with a non-circular cross-section to analyse them in the same way. The models were selected in a way that minimizes the need to make assumptions about unknown parameters of the probe or the ice environment.
The results indicate that currently used models do not yet reliably reproduce the performance of a probe under realistic conditions. Melting velocities and efficiencies are consistently overestimated by 15 to 50 % by the models, although they qualitatively agree with the field test data. Hence, losses are observed that are not yet covered and quantified by the available loss models. We find that the deviation increases with decreasing ice temperature. We suspect that this mismatch is mainly due to an overly restrictive idealization in the probe model and to the fact that the probe was not operated in an efficiency-optimized manner during the field tests. With respect to space mission engineering, we find that performance and efficiency models must be used with caution in unknown ice environments, as various ice parameters have a significant effect on the melting process, and some of them are difficult to estimate from afar.
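The efficiency conventions discussed above compare measured performance against an idealized, loss-free energy balance. A hedged sketch of such a reference follows, using approximate textbook ice properties; the probe power, radius, and ice temperature are purely illustrative, and the paper's actual efficiency definitions are more refined:

```python
import math

# Approximate bulk properties of ice (textbook values)
RHO_ICE = 917.0     # density, kg/m^3
CP_ICE = 2100.0     # specific heat capacity, J/(kg*K)
L_FUSION = 334e3    # latent heat of fusion, J/kg

def ideal_melt_velocity(power_w, radius_m, ice_temp_c):
    """Upper-bound melting velocity from a simple energy balance:
    all heating power warms the ice column from ice_temp_c to 0 degC
    and melts it, with no lateral or conductive losses."""
    area = math.pi * radius_m ** 2
    energy_per_m3 = RHO_ICE * (CP_ICE * (0.0 - ice_temp_c) + L_FUSION)
    return power_w / (area * energy_per_m3)

def probe_efficiency(measured_v, power_w, radius_m, ice_temp_c):
    """Ratio of the measured melting velocity to the loss-free ideal."""
    return measured_v / ideal_melt_velocity(power_w, radius_m, ice_temp_c)

# Example: 3 kW probe, 6 cm radius, in -10 degC ice
v_ideal = ideal_melt_velocity(3000.0, 0.06, -10.0)
```

Colder ice lowers the ideal velocity, which is consistent with the observation above that deviations grow with decreasing ice temperature.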
Combined with the use of renewable energy sources for its production, hydrogen represents a possible alternative gas turbine fuel for future low-emission power generation. Due to the large difference in physical properties between hydrogen and other fuels such as natural gas, well-established gas turbine combustion systems cannot be directly applied to Dry Low NOx (DLN) hydrogen combustion. Thus, the development of DLN combustion technologies is an essential and challenging task for the future of hydrogen-fuelled gas turbines. The DLN Micromix combustion principle for hydrogen fuel has been developed to significantly reduce NOx emissions. This combustion principle is based on cross-flow mixing of air and gaseous hydrogen, which reacts in multiple miniaturized diffusion-type flames. The major advantages of this combustion principle are the inherent safety against flash-back and the low NOx emissions due to a very short residence time of reactants in the flame region of the micro-flames. The Micromix combustion technology has already been proven experimentally and numerically for pure hydrogen fuel operation at different energy density levels. The aim of the present study is to analyze the influence of different geometry parameter variations on the flame structure and the NOx emissions, and to identify the most relevant design parameters. The goal is to provide a physical understanding of the Micromix flame's sensitivity to the burner design and to identify further optimization potential of this innovative combustion technology, while increasing its energy density and making it mature enough for real gas turbine application. The study reveals great optimization potential of the Micromix combustion technology with respect to its DLN characteristics and gives insight into the impact of geometry modifications on flame structure and NOx emissions. This allows the energy density of the Micromix burners to be increased further and the technology to be integrated into industrial gas turbines.
In this paper, we provide an analytical study of the transmission eigenvalue problem with two conductivity parameters. We will assume that the underlying physical model is given by the scattering of a plane wave for an isotropic scatterer. In previous studies, this eigenvalue problem was analyzed with one conductive boundary parameter whereas we will consider the case of two parameters. We prove the existence and discreteness of the transmission eigenvalues as well as study the dependence on the physical parameters. We are able to prove monotonicity of the first transmission eigenvalue with respect to the parameters and consider the limiting procedure as the second boundary parameter vanishes. Lastly, we provide extensive numerical experiments to validate the theoretical work.
Direct sampling method via Landweber iteration for an absorbing scatterer with a conductive boundary
(2024)
In this paper, we consider the inverse shape problem of recovering isotropic scatterers with a conductive boundary condition. Here, we assume that the measured far-field data is known at a fixed wave number. Motivated by recent work, we study a new direct sampling indicator based on the Landweber iteration and the factorization method, and we prove the connection between these reconstruction methods. The method studied here falls under the category of qualitative reconstruction methods, where an imaging function is used to recover the absorbing scatterer. We prove stability of our new imaging function and derive a discrepancy principle for choosing the regularization parameter. The theoretical results are verified with numerical examples to show how the reconstruction performs with the new Landweber direct sampling method.
The ClearPET project
(2004)
The Crystal Clear Collaboration has designed and is building a high-resolution small animal PET scanner. The design is based on the Hamamatsu R7600-M64 multi-anode photomultiplier tube and an LSO/LuYAP phoswich matrix with one-to-one coupling between the crystals and the photo-detector. The complete system will have 80 PM tubes in four rings with an inner diameter of 137 mm and an axial field of view of 110 mm. The PM pulses are digitized by free-running ADCs, and digital data processing determines the gamma energy, the phoswich layer and even the pulse arrival time. Single gamma interactions are recorded, and coincidences are found by software. The gantry allows rotation of the detector modules around the field of view. Simulations and measurements with a 2×4 module test set-up predict a spatial resolution of 1.5 mm in the centre of the field of view and a sensitivity of 5.9% for a point source in the centre of the field of view.
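The software coincidence search on free-running, time-stamped singles described above can be sketched as a window search over time-sorted events. This is a simplified illustration only: the detector IDs, timestamps, and the 10 ns window are invented, and the real system additionally applies energy and geometry cuts:

```python
def find_coincidences(singles, window_ns=10.0):
    """singles: iterable of (timestamp_ns, detector_id) tuples.
    Returns pairs of events on different detectors whose timestamps
    differ by at most window_ns."""
    events = sorted(singles)  # time-order the recorded singles
    coincidences = []
    for i, (t1, d1) in enumerate(events):
        j = i + 1
        # Only look forward while events are still inside the window.
        while j < len(events) and events[j][0] - t1 <= window_ns:
            t2, d2 = events[j]
            if d2 != d1:  # a detector cannot coincide with itself
                coincidences.append(((t1, d1), (t2, d2)))
            j += 1
    return coincidences

singles = [(0.0, 3), (4.0, 27), (250.0, 3),
           (1000.0, 5), (1004.0, 41), (1020.0, 5)]
pairs = find_coincidences(singles)
# Two coincidences: (0.0, 4.0) and (1000.0, 1004.0); the event at 250.0
# has no partner and 1020.0 falls outside the window.
```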
The esophageal Doppler monitor (EDM) is a minimally-invasive hemodynamic device which evaluates both cardiac output (CO), and fluid status, by estimating stroke volume (SV) and calculating heart rate (HR). The measurement of these parameters is based upon a continuous and accurate approximation of distal thoracic aortic blood flow. Furthermore, the peak velocity (PV) and mean acceleration (MA), of aortic blood flow at this anatomic location, are also determined by the EDM. The purpose of this preliminary report is to examine additional clinical hemodynamic calculations of: compliance (C), kinetic energy (KE), force (F), and afterload (TSVRi). These data were derived using both velocity-based measurements, provided by the EDM, as well as other contemporaneous physiologic parameters. Data were obtained from anesthetized patients undergoing surgery or who were in a critical care unit. A graphical inspection of these measurements is presented and discussed with respect to each patient’s clinical situation. When normalized to each of their initial values, F and KE both consistently demonstrated more discriminative power than either PV or MA. The EDM offers additional applications for hemodynamic monitoring. Further research regarding the accuracy, utility, and limitations of these parameters is therefore indicated.
Background: Architectural representation, nurtured by the interaction between design thinking and design action, is inherently multi-layered. However, the representation object cannot always reflect these layers. It is therefore claimed that these reflections and layerings can gain visibility through 'performativity in personal knowledge'. The specific layers of representation produced during this performativity permit insights into the 'personal way of designing' [1]. The question 'how can these layered drawings be decomposed to understand the personal way of designing?' thus forms the starting point of the study. Performativity in personal knowledge in architectural design is treated through the relationship between explicit and tacit knowledge and between representational and non-representational theory. To discuss the practical dimension of these theoretical relations, Zvi Hecker's drawing of the Heinz-Galinski-School is examined as an example. The study aims to understand the relationships between the layers by decomposing a layered drawing analytically in order to exemplify personal ways of designing.
Methods: The study is based on qualitative research methodologies. First, a model has been formed through theoretical readings to discuss the performativity in personal knowledge. This model is used to understand the layered representations and to research the personal way of designing. Thus, one drawing of Hecker’s Heinz-Galinski-School project is chosen. Second, its layers are decomposed to detect and analyze diverse objects, which hint to different types of design tools and their application. Third, Zvi Hecker’s statements of the design process are explained through the interview data [2] and other sources. The obtained data are compared with each other.
Results: By decomposing the drawing, eleven layers are defined. These layers are used to understand the relation between the design idea and its representation. They can also be thought of as a reading system. In other words, a method to discuss Hecker’s performativity in personal knowledge is developed. Furthermore, the layers and their interconnections are described in relation to Zvi Hecker’s personal way of designing.
Conclusions: It can be said that layered representations, which are associated with the multilayered structure of performativity in personal knowledge, form the personal way of designing.
A second-order L-stable exponential time-differencing (ETD) method is developed by combining an ETD scheme with approximating the matrix exponentials by rational functions having real distinct poles (RDP), together with a dimensional splitting integrating factor technique. A variety of non-linear reaction-diffusion equations in two and three dimensions with either Dirichlet, Neumann, or periodic boundary conditions are solved with this scheme and shown to outperform a variety of other second-order implicit-explicit schemes. An additional performance boost is gained through further use of basic parallelization techniques.
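The abstract's second-order L-stable ETD-RDP scheme is not reproduced here; as a hedged illustration of the underlying ETD idea, the following sketch implements its simplest member, the exponential (ETD) Euler method, which treats the stiff linear part exactly. The test problem and step sizes are illustrative:

```python
import numpy as np

def etd_euler(u0, lam, nonlin, h, steps):
    """Exponential (ETD) Euler for u' = -lam*u + N(u):
    u_{n+1} = e^{-h*lam} u_n + h * phi1(-h*lam) * N(u_n),
    with phi1(z) = (e^z - 1)/z. The linear part is integrated exactly,
    so the scheme is stable for the stiff linear term at any step size."""
    z = -h * lam
    e = np.exp(z)
    phi1 = (e - 1.0) / z
    u = u0
    for _ in range(steps):
        u = e * u + h * phi1 * nonlin(u)
    return u

# Stiff linear test problem u' = -50 u, u(0) = 1, integrated to t = 1 with h = 0.1.
u_lin = etd_euler(1.0, 50.0, lambda u: 0.0, h=0.1, steps=10)
# For comparison: explicit Euler with the same step diverges, since |1 - h*lam| = 4 > 1.
u_explicit = (1.0 - 0.1 * 50.0) ** 10
```

With N == 0 the ETD step reproduces the exact decay e^{-50}, which is the stability property that the L-stable RDP approximation of the matrix exponential preserves for systems.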
A microscopic photometric method for measuring erythrocyte deformability. Artmann, Gerhard Michael
(1986)
In the present work, surface functionalization of different sensor materials was studied. Organosilanes are well known to serve as coupling agents for biomolecules or cells on inorganic materials. 3-aminopropyltriethoxysilane (APTES) was used to attach microbiological spores to an interdigitated sensor surface. The functionality and physical properties of APTES were studied on isolated sensor materials, namely silicon dioxide (SiO2) and platinum (Pt), as well as on the combined materials at sensor level. Predominant immobilization of spores could be demonstrated on SiO2 surfaces. Additionally, the impedance signal of APTES-functionalized biosensor chips was investigated.
Optimization of the immobilization of bacterial spores on glass substrates with organosilanes
(2016)
Spores can be immobilized on biosensors to function as sensitive recognition elements. However, the immobilization can affect the sensitivity and reproducibility of the sensor signal. In this work, three different immobilization strategies with organosilanes were optimized and characterized to immobilize Bacillus atrophaeus spores on glass substrates. Five different silanization parameters were investigated: nature of the solvent, concentration of the silane, silanization time, curing process, and silanization temperature. The resulting silane layers were resistant to a buffer solution (e.g., Ringer solution) with a polysorbate (e.g., Tween®80) and sonication.
Prior to immobilization of biomolecules or cells onto biosensor surfaces, the surface must be physically or chemically activated for further functionalization. Organosilanes are a versatile option, as they facilitate the immobilization through their terminal groups and also display self-assembly. Incorporating hydroxyl groups is one of the important methods for primary immobilization. This can be done, for example, with oxygen plasma treatment. However, this treatment can affect the performance of the biosensors, and this effect is not yet well understood for surface functionalization. In this work, the effect of O2 plasma treatment on EIS sensors was investigated by means of electrochemical characterizations: capacitance–voltage (C–V) and constant capacitance (ConCap) measurements. After O2 plasma treatment, the potential of the EIS sensor shifts dramatically to a more negative value. This shift was successfully reset by an annealing process.
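The reported potential shift can be quantified by reading off the voltage at a fixed reference capacitance from C–V curves recorded before and after treatment. The sketch below uses invented sigmoid-shaped toy curves in place of real measurement data; the curve shape, units, and the -0.6 V shift are all illustrative assumptions:

```python
import numpy as np

def voltage_at_capacitance(v, c, c_ref):
    """Interpolate the gate voltage at which a C-V curve crosses c_ref.
    np.interp requires increasing sample points, so sort by capacitance."""
    order = np.argsort(c)
    return np.interp(c_ref, c[order], v[order])

def toy_cv_curve(v, v_shift):
    """Hypothetical C-V curve (nF): a sigmoid centered at v_shift."""
    return 20.0 + 60.0 / (1.0 + np.exp(-4.0 * (v - v_shift)))

v = np.linspace(-2.0, 2.0, 81)
c_before = toy_cv_curve(v, 0.0)    # before O2 plasma treatment
c_after = toy_cv_curve(v, -0.6)    # curve shifted to more negative voltages
shift = voltage_at_capacitance(v, c_after, 50.0) - voltage_at_capacitance(v, c_before, 50.0)
# shift recovers the -0.6 V displacement of the curve
```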
Background
True date palms (Phoenix dactylifera L.) are impressive trees and have served as an indispensable source of food for mankind in tropical and subtropical countries for centuries. The aim of this study is to differentiate date palm tree varieties by analysing leaflet cross sections with technical/optical methods and artificial neural networks (ANN).
Results
Fluorescence microscopy images of leaflet cross sections have been taken from a set of five date palm tree cultivars (Hewlat al Jouf, Khlas, Nabot Soltan, Shishi, Um Raheem). After features extraction from images, the obtained data have been fed in a multilayer perceptron ANN with backpropagation learning algorithm.
Conclusions
Overall, accurate prediction and differentiation of date palm tree cultivars was achieved, with an average prediction accuracy of 89.1% in tenfold cross-validation, reaching 100% in one of the best ANNs.
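As a minimal illustration of the classifier type used above (a multilayer perceptron trained with backpropagation), the following sketch trains a one-hidden-layer network on synthetic two-cultivar feature clusters. The data, network size, and hyperparameters are all invented and stand in for the extracted leaflet cross-section features:

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic stand-in for extracted image features of two cultivars
X = np.vstack([rng.normal(0.0, 0.5, (100, 2)),
               rng.normal(3.0, 0.5, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
X = X - X.mean(axis=0)  # center the features

# One-hidden-layer perceptron, trained with plain full-batch backpropagation
W1 = rng.normal(0.0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)                    # hidden activations
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))    # output probability
    g = (p - y[:, None]) / len(X)               # cross-entropy gradient w.r.t. logits
    gW2 = h.T @ g; gb2 = g.sum(axis=0)
    gh = (g @ W2.T) * (1.0 - h ** 2)            # backpropagate through tanh
    gW1 = X.T @ gh; gb1 = gh.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

h = np.tanh(X @ W1 + b1)
p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
acc = float(((p[:, 0] > 0.5) == y).mean())      # training accuracy
```

On these well-separated synthetic clusters the network should classify nearly all samples correctly; the study's reported 89.1% reflects the much harder real image features and cross-validation.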
Microfabrication, characterization and analytical application of a new thin-film silver microsensor
(2009)
The purpose of the current study, in combination with our previously published data (Arampatzis et al., 2007), was to examine the effects of a controlled modulation of strain magnitude and strain frequency applied to the Achilles tendon on the plasticity of tendon mechanical and morphological properties. Eleven male adults (23.9±2.2 yr) participated in the study. The participants exercised one leg at low tendon strain magnitude (2.97±0.47%) and the other leg at high tendon strain magnitude (4.72±1.08%) at the same frequency (0.5 Hz; 1 s loading, 1 s relaxation) and exercise volume (integral of the plantar flexion moment over time) for 14 weeks, 4 days per week, 5 sets per session. The exercise volume was similar to the intervention of our earlier study (0.17 Hz frequency; 3 s loading, 3 s relaxation), allowing a direct comparison of the results. Before and after the intervention, ankle joint moment was measured by a dynamometer, tendon–aponeurosis elongation by ultrasound, and the cross-sectional area of the Achilles tendon by magnetic resonance imaging (MRI). We found a decrease in strain at a given tendon force, an increase in tendon–aponeurosis stiffness, and an increase in the elastic modulus of the Achilles tendon only in the leg exercised at high strain magnitude. The cross-sectional area (CSA) of the Achilles tendon did not show any statistically significant (P>0.05) differences from the pre-exercise values in either leg. The results indicate a superior improvement in tendon properties (stiffness, elastic modulus and CSA) at the low strain frequency (0.17 Hz) compared to the high strain frequency (0.5 Hz) protocol. These findings provide evidence that the strain magnitude applied to the Achilles tendon should exceed the value that occurs during habitual activities in order to trigger adaptational effects, and that a higher tendon strain duration per contraction leads to superior tendon adaptational responses.
Objective
Hemodialysis patients show an approximately threefold higher prevalence of cognitive impairment compared to the age-matched general population. Impaired microcirculatory function is one of the assumed causes. Dynamic retinal vessel analysis is a quantitative method for measuring neurovascular coupling and microvascular endothelial function. We hypothesize that cognitive impairment is associated with altered microcirculation of retinal vessels.
Methods
152 chronic hemodialysis patients underwent cognitive testing using the Montreal Cognitive Assessment. Retinal microcirculation was assessed by Dynamic Retinal Vessel Analysis, which records the reaction of retinal vessels to a flicker light stimulus under standardized conditions.
Results
In both unadjusted and adjusted linear regression analyses, a significant association was found between the visuospatial/executive function domain score of the Montreal Cognitive Assessment and the maximum dilation of retinal arterioles in response to flicker light stimulation.
Conclusion
This is the first study to determine retinal microvascular function as a surrogate for cerebral microvascular function and cognition in hemodialysis patients. The relationship between impairment in executive function and reduced arteriolar reaction to flicker light stimulation supports the involvement of cerebral small vessel disease as a contributing factor in the development of cognitive impairment in this patient population, and it might be a target for noninvasive disease monitoring and therapeutic intervention.
Algorithmic design and resilience assessment of energy efficient high-rise water supply systems
(2018)
High-rise water supply systems provide water flow and suitable pressure in all levels of tall buildings. To design such state-of-the-art systems, the consideration of energy efficiency and the anticipation of component failures are mandatory. In this paper, we use Mixed-Integer Nonlinear Programming to compute an optimal placement of pipes and pumps, as well as an optimal control strategy. Moreover, we consider the resilience of the system to pump failures. A resilient system is able to fulfill a predefined minimum functionality even though components fail or are restricted in their normal usage. We present models to measure and optimize the resilience. To demonstrate our approach, we design and analyze an optimal resilient decentralized water supply system inspired by a real-life hotel building.
On obligations in the development process of resilient systems with algorithmic design methods
(2018)
Advanced computational methods are needed both for the design of large systems and to compute high accuracy solutions. Such methods are efficient in computation, but the validation of results is very complex, and highly skilled auditors are needed to verify them. We investigate legal questions concerning obligations in the development phase, especially for technical systems developed using advanced methods. In particular, we consider methods of resilient and robust optimization. With these techniques, high performance solutions can be found, despite a high variety of input parameters. However, given the novelty of these methods, it is uncertain whether legal obligations are being met. The aim of this paper is to discuss if and how the choice of a specific computational method affects the developer’s product liability. The review of legal obligations in this paper is based on German law and focuses on the requirements that must be met during the design and development process.
Cheap does not imply cost-effective -- this is rule number one of contemporary system design. The initial investment accounts for only a small portion of the lifecycle costs of a technical system. In fluid systems, about ninety percent of the total costs are caused by other factors such as power consumption and maintenance. With modern optimization methods, it is already possible to plan an optimal technical system considering multiple objectives. In this paper, we focus on an often neglected contribution to the lifecycle costs: downtime costs due to spontaneous failures. Consequently, availability becomes an issue.
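As a rough illustration of how downtime enters the lifecycle cost, a minimal sketch (all figures are hypothetical):

```python
def lifecycle_cost(invest, power_kw, hours_per_year, price_per_kwh,
                   years, availability, downtime_cost_per_hour):
    """Total cost of ownership: investment + energy + expected downtime.
    availability is the long-run fraction of time the system is up."""
    energy = power_kw * hours_per_year * price_per_kwh * years
    downtime = (1.0 - availability) * hours_per_year * years * downtime_cost_per_hour
    return invest + energy + downtime

# A 5 kW pump, 98 % available, over a 10-year horizon:
print(round(lifecycle_cost(10000.0, 5.0, 8000.0, 0.25, 10, 0.98, 50.0), 2))
# → 190000.0
```

Even a modest 2 % unavailability contributes 80 000 here, comparable to the energy bill, which is why availability belongs in the objective rather than being treated as an afterthought.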
Planning the layout and operation of a technical system is a common task for an engineer. Typically, the workflow is divided into consecutive stages: first, the engineer designs the layout of the system with the help of experience or heuristic methods; second, a control strategy is found, often optimized by simulation. This usually results in good operation of an unquestioned system topology. In contrast, we apply Operations Research (OR) methods to find a cost-optimal solution for both stages simultaneously via mixed-integer linear programming (MILP). Technical Operations Research (TOR) allows one to find a provably globally optimal solution within the model formulation. However, the modeling error due to the abstraction of physical reality remains unknown. We address this ubiquitous problem of OR methods by comparing our computational results with measurements in a test rig. For a practical test case, we compute a topology and control strategy via MILP and verify that the objectives are met up to a deviation of 8.7%.
Resilience as a concept has found its way into different disciplines to describe the ability of an individual or system to withstand and adapt to changes in its environment. In this paper, we provide an overview of the concept in different communities and extend it to the area of mechanical engineering. Furthermore, we present metrics to measure resilience in technical systems and illustrate them by applying them to load-carrying structures. By giving application examples from the Collaborative Research Centre (CRC) 805, we show how the concept of resilience can be used to control uncertainty during different stages of product life.
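One common way to turn such a resilience metric into a number is a performance-loss integral over the disruption window; a minimal sketch, assuming a normalized performance curve (the profile below is hypothetical):

```python
def resilience(performance, dt=1.0, target=1.0):
    """Resilience as the ratio of actual to target performance,
    integrated over the observation window (a plain Riemann sum;
    a trapezoidal rule would work the same way)."""
    actual = sum(performance) * dt
    ideal = target * len(performance) * dt
    return actual / ideal

# A disruption at t=2 degrades performance, followed by recovery:
profile = [1.0, 1.0, 0.4, 0.6, 0.8, 1.0]
print(round(resilience(profile), 3))  # → 0.8
```

A value of 1.0 means the target performance was held throughout; the drop to 0.8 quantifies both the depth of the failure and the speed of recovery.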
Persistent infection with the high-risk Human Papillomavirus type 16 (HPV 16) is the causative event for the development of cervical cancer and other malignant tumors of the anogenital tract and of the head and neck. Despite many attempts to develop therapeutic vaccines, no candidate has entered late clinical trials. An interesting approach is a DNA-based vaccine encompassing the nucleotide sequence of the E6 and E7 viral oncoproteins. Because both proteins are consistently expressed in HPV-infected cells, they represent excellent targets for immune therapy. Here we report the development of 8 DNA vaccine candidates consisting of differently rearranged HPV-16 E6 and E7 sequences within one molecule, providing all naturally occurring epitopes but supposedly lacking transforming activity. The HPV sequences were fused to the J-domain and the SV40 enhancer in order to increase immune responses. We demonstrate that one out of the 8 vaccine candidates induces very strong E6- and E7-specific cellular immune responses in mice and, as shown in regression experiments, efficiently controls growth of HPV 16-positive syngeneic tumors. These data demonstrate the potential of this vaccine candidate to control persistent HPV 16 infection that may lead to malignant disease. They also suggest that different sequence rearrangements influence the immunogenicity by an as yet unknown mechanism.
Detecting synchronization clusters in multivariate time series via coarse-graining of Markov chains
(2007)
In this work, we present a compact, bifunctional chip-based sensor setup that measures the temperature and electrical conductivity of water samples, including specimens from rivers and channels, aquaculture, and the Atlantic Ocean. For conductivity measurements, we utilize the impedance amplitude recorded via interdigitated electrode structures at a single triggering frequency. The results are well in line with data obtained using a calibrated reference instrument. The new setup remains valid for conductivity values spanning almost two orders of magnitude (river versus ocean water) without the need for equivalent-circuit modelling. Temperature measurements were performed in four-point geometry with an on-chip platinum RTD (resistance temperature detector) in the temperature range between 2 °C and 40 °C, showing no hysteresis effects between warming and cooling cycles. Although the meander was not shielded against the liquid, the temperature calibration provided equivalent results in low-conductivity Milli-Q water and in highly conductive ocean water. The sensor is therefore suitable for inline and online monitoring purposes in recirculating aquaculture systems.
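In the simplest picture, the single-frequency readout reduces to a cell-constant calibration; a sketch under that assumption (the reference values are hypothetical, and the actual device calibration may differ):

```python
def cell_constant(z_ref_ohm, sigma_ref_s_per_m):
    """Calibrate the interdigitated electrodes with one reference
    solution of known conductivity: sigma = K / |Z|  =>  K = sigma * |Z|."""
    return z_ref_ohm * sigma_ref_s_per_m

def conductivity(z_ohm, k):
    """Convert a measured impedance amplitude |Z| to conductivity."""
    return k / z_ohm

k = cell_constant(z_ref_ohm=200.0, sigma_ref_s_per_m=1.0)  # K in 1/m
print(conductivity(40.0, k))  # seawater-like sample → 5.0 S/m
```

The two-orders-of-magnitude range reported above corresponds to impedance amplitudes varying by the same factor, which is why a single calibrated cell constant suffices without equivalent-circuit modelling.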
Simulation model for the transient process behaviour of solar aluminium recycling in a rotary kiln
(2015)
Two of the main environmental problems of today’s society are the continuously increasing production of organic wastes and the increase of carbon dioxide in the atmosphere with the related greenhouse effect. One way to address both problems is the production of biogas, a combustible gas consisting of methane, carbon dioxide and small amounts of other gases and trace elements. Production of biogas through anaerobic digestion of animal manure and slurries, as well as of a wide range of digestible organic wastes and agricultural residues, converts these substrates into electricity and heat and offers a natural fertiliser for agriculture. The microbiological decomposition of organic matter in the absence of oxygen takes place in reactors called digesters. Biogas can be used as a fuel in a gas turbine or burner and can thus serve in a hybrid solar tower system, offering a solution for the waste treatment of agricultural and animal residues. A solar tower system consists of a heliostat field, which concentrates direct solar irradiation on an open volumetric central receiver. The receiver heats ambient air to temperatures of around 700 °C. The hot air’s heat energy is transferred to a steam Rankine cycle in a heat recovery steam generator (HRSG). The steam drives a steam turbine, which in turn drives a generator for producing electricity. In order to increase the operational hours of a solar tower power plant, a heat storage system and/or hybridization may be considered. The advantage of solar-fossil hybrid power plants, compared to solar-only systems, lies in low additional investment costs due to an adaptable solar share and reduced technical and economic risks. On sunny days, the hybrid system operates in a solar-only mode with the central receiver; on cloudy days and at night, it operates with the gas turbine only. As an alternative to methane gas, environmentally neutral biogas can be used for operating the gas turbine. Hence, the hybrid system can be operated entirely from renewable energy sources.
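The two operating modes described above can be illustrated with a toy energy balance (the efficiencies and thermal inputs are hypothetical round numbers, not plant data):

```python
def plant_output_mwe(solar_mwth, biogas_mwth,
                     receiver_eff=0.75, rankine_eff=0.35, gt_eff=0.30):
    """Electric output of a simplified hybrid plant: the central receiver
    feeds a steam Rankine cycle, the biogas runs a gas turbine."""
    solar_el = solar_mwth * receiver_eff * rankine_eff
    biogas_el = biogas_mwth * gt_eff
    return solar_el + biogas_el

print(plant_output_mwe(solar_mwth=40.0, biogas_mwth=0.0))   # sunny day, solar-only → 10.5
print(plant_output_mwe(solar_mwth=0.0, biogas_mwth=20.0))   # night, biogas-only → 6.0
```

A real model would include the HRSG, part-load efficiencies and storage; the sketch only shows how the solar and biogas paths add up to the plant output.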
Heat production in the windings of the stators of electric machines under stationary condition
(2014)
In electric machines, heat is produced due to high currents and resistive losses (Joule heating). To avoid damage by overheating, the design of effective cooling systems is required; this in turn requires knowledge of the heat sources and heat transfer processes. The purpose of this paper is to present an accurate and efficient calculation method for the temperature analysis based on homogenization techniques. These methods have been applied to the stator windings in a slot of an electric machine consisting of copper wires and resin. The key quantity is an effective thermal conductivity, which characterizes the heterogeneous wire-resin arrangement inside the stator slot. To illustrate the applicability of the method, the analysis of a simplified, homogenized model is compared with a detailed analysis of the temperature behavior inside a slot of an electric machine subject to the heat generation. Only the stationary situation is considered here. The achieved numerical results are accurate and show that the applied homogenization technique works in practice. Finally, the simulation results for the two cases, the original model of the slot and the homogenized model chosen for the slot (unit cell), are compared to experimental results.
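The key quantity, an effective thermal conductivity of the wire-resin composite, can be estimated with classical homogenization formulas; a sketch using the Hashin-Shtrikman / Maxwell estimate for cylindrical fibers (the material values are typical textbook numbers, and the paper's own unit-cell computation may differ):

```python
def transverse_conductivity(k_fiber, k_matrix, fill):
    """Hashin-Shtrikman / Maxwell estimate of the effective transverse
    thermal conductivity of a fibrous composite (copper wires in resin).
    fill is the copper volume fraction in the slot."""
    kf, km, f = k_fiber, k_matrix, fill
    return km * (kf * (1 + f) + km * (1 - f)) / (kf * (1 - f) + km * (1 + f))

# Copper (~400 W/m/K) in epoxy resin (~0.3 W/m/K) at 45 % fill:
print(round(transverse_conductivity(400.0, 0.3, 0.45), 3))
# Resin-dominated result, far below copper despite the high fill factor.
```

This illustrates why the homogenized slot model behaves so differently from bulk copper: transverse heat flow must cross the poorly conducting resin between the wires, so the effective value stays close to the matrix conductivity.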