Purpose of Study: Thrombosis-related complications are among the leading causes of morbidity and mortality in patients who depend on artificial organs. For the prediction of platelet behavior, both the flow conditions inside the device and the thrombogenic properties of the blood-contacting surfaces must be considered. Platelet reactions under the influence of well-defined shear rates are experimentally evaluated and numerically simulated. The approach is intended for the analysis of ventricular assist device (VAD) and oxygenator design.
Methods Used: A mathematical model of platelet activation, adhesion and aggregation has been implemented into a finite element CFD (Computational Fluid Dynamics) code. The approach is based on the advective and diffusive transport equations for resting and activated platelets and platelet-released agonists. Experiments with citrate-anticoagulated, freshly drawn whole blood are performed in a perfusion flow chamber as well as in a system of rotating cylinders for Couette and Taylor-vortex flow. Different biomaterials are used. Activation, adhesion and aggregation are quantified using scanning electron microscopy and flow cytometry.
Summary of Results: Regions and flow conditions with a high potential for thrombus growth could be identified. The experiments clearly show the influence of the blood-contacting material and the governing shear rates. Numerical analysis can explain the observed adhesion patterns and the degree of thrombus formation.
Aims: Thrombotic complications due to the activation of platelets and plasmatic clotting factors still rank among the most intensively investigated topics in the study of pathophysiological mechanisms. Mathematical modeling of thrombotic reactions is established and validated in test cases. The aim of this study is to experimentally evaluate and computationally simulate platelets under the influence of well-defined shear flow conditions. Platelet behaviour and reactions are experimentally reproduced, measured and used for validation of the numerical simulation.
Methods: A mathematical model of platelet activation, adhesion and aggregation has been implemented into a finite element CFD (Computational Fluid Dynamics) code. The approach is based on the advective and diffusive transport equations for resting platelets, activated platelets and platelet-released agonists. Adhesion rates for the reactive surfaces depend on the hemocompatibility properties of the surface and the local shear rate. Experiments with citrate-anticoagulated, freshly drawn whole blood are performed in a perfusion flow chamber as well as in a system of rotating cylinders for Couette and Taylor-vortex flow. Different biomaterials are used. Activation, the drop in platelet concentration, adhesion and aggregation are quantified using scanning electron microscopy (SEM) and flow cytometry.
Results: Regions and flow conditions with a high potential for thrombus growth could be identified. The experiments clearly show the influence of the blood-contacting material and the flow properties. By means of SEM, diverse platelet adhesion patterns are observed. Numerical analysis can explain the patterns and the degree of thrombus formation.
Conclusion: The numerical method shows good agreement with the experimental data, indicating that the initiation of activation can be predicted and local adhesion areas detected, in connection with the role of the von Willebrand factor.
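The transport model referred to in both abstracts can be sketched as a set of coupled advection–diffusion–reaction equations; the notation below (concentrations c_RP, c_AP, c_a for resting platelets, activated platelets and released agonist; activation rate k; release rate λ) is illustrative shorthand, not taken from the papers:

```latex
\frac{\partial c_{RP}}{\partial t} + \mathbf{u}\cdot\nabla c_{RP}
  = D_{RP}\,\nabla^2 c_{RP} - k\,c_{RP}
\qquad
\frac{\partial c_{AP}}{\partial t} + \mathbf{u}\cdot\nabla c_{AP}
  = D_{AP}\,\nabla^2 c_{AP} + k\,c_{RP}
\qquad
\frac{\partial c_{a}}{\partial t} + \mathbf{u}\cdot\nabla c_{a}
  = D_{a}\,\nabla^2 c_{a} + \lambda\,c_{AP}
```

Here u is the computed flow field and the D's are diffusivities; in such models, shear-dependent adhesion to the reactive surfaces enters through the flux boundary conditions rather than the bulk equations.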
Urinary stone formation has evolved into a widespread disease in recent years. Urinary stones originate from small crystals, mostly composed of calcium oxalate, which form in the human kidney. The risk of urinary stone formation can be assessed at an early stage with the “Bonn Risk Index” method, which is based on the potentiometric detection of the Ca²⁺-ion concentration and the optical determination of the triggered crystallisation of calcium oxalate in unprocessed urine. In this work, miniaturised capacitive field-effect EMIS (electrolyte–membrane–insulator–semiconductor) sensors have been developed for the determination of the Ca²⁺-ion concentration in native human urine. The Ca²⁺-sensitive EMIS sensors have been systematically characterised by impedance spectroscopy as well as the capacitance–voltage and constant-capacitance methods in terms of sensitivity, signal stability and response time, both in CaCl₂ solutions and in native urine. The obtained results demonstrate the suitability of EMIS sensors for the measurement of the Ca²⁺-ion concentration in the native urine of patients.
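The sensitivity characterisation of such a potentiometric sensor is usually benchmarked against the ideal Nernstian slope; a minimal sketch (function name and parameter choices are ours, not from the paper):

```python
import math

R = 8.314       # gas constant, J/(mol*K)
F = 96485.0     # Faraday constant, C/mol

def nernst_slope_mv(z, temperature_k=298.15):
    """Ideal Nernstian sensitivity in mV per decade of ion activity
    for an ion of charge number z at the given temperature."""
    return R * temperature_k / (z * F) * math.log(10) * 1000.0

print(round(nernst_slope_mv(1), 1))  # ~59.2 mV/dec for monovalent ions (e.g. Na+)
print(round(nernst_slope_mv(2), 1))  # ~29.6 mV/dec for divalent ions (e.g. Ca2+)
```

A measured slope close to roughly 29.6 mV per concentration decade at room temperature would thus indicate near-Nernstian behaviour of a Ca²⁺-selective sensor.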
Design and initial performance of PlanTIS: a high-resolution positron emission tomograph for plants
(2010)
Positron emitters such as ¹¹C, ¹³N and ¹⁸F and their labelled compounds are widely used in clinical diagnosis and animal studies, but can also be used to study metabolic and physiological functions in plants dynamically and in vivo. A very particular tracer molecule is ¹¹CO₂, since it can be applied to a leaf as a gas. We have developed a Plant Tomographic Imaging System (PlanTIS), a high-resolution PET scanner for plant studies. Detectors, front-end electronics and data acquisition architecture of the scanner are based on the ClearPET™ system. The detectors consist of LSO and LuYAP crystals in phoswich configuration, which are coupled to position-sensitive photomultiplier tubes. Signals are continuously sampled by free-running ADCs, and data are stored in a list-mode format. The detectors are arranged in a horizontal plane to allow the plants to be measured in their natural upright position. Two groups of four detector modules stand face-to-face and rotate around the field of view. This special system geometry requires dedicated image reconstruction and normalization procedures. We present the initial performance of the detector system and first phantom and plant measurements.
After a brief introduction of conventional laboratory structures, this work focuses on an innovative and universal approach to the setup of a training laboratory for electric machines and drive systems. The novel approach employs a central 48 V DC bus, which forms the backbone of the structure. Several sets of DC, asynchronous and synchronous machines are connected to this bus. The advantages of the novel system structure are manifold, both from a didactic and a technical point of view: student groups can work at their own performance level in a highly parallelized and at the same time individualized way. Additional training setups (similar or different) can easily be added. Only the total power dissipation has to be supplied externally, i.e. the DC bus balances the power flow between the student groups. Comparative results of course evaluations of several cohorts of students are shown.
Objective: As high-field cardiac MRI (CMR) becomes more widespread, the propensity of ECG to interference from electromagnetic fields (EMF) and to magneto-hydrodynamic (MHD) effects increases, and with it the motivation for a CMR triggering alternative. This study explores the suitability of acoustic cardiac triggering (ACT) for left ventricular (LV) function assessment in healthy subjects (n=14). Methods: Quantitative analysis of 2D CINE steady-state free precession (SSFP) images was conducted to compare ACT’s performance with vector ECG (VCG). Endocardial border sharpness (EBS) was examined, paralleled by quantitative LV function assessment. Results: Unlike VCG, ACT provided signal traces free of interference from EMF or MHD effects. In the case of correct R-wave recognition, VCG-triggered 2D CINE SSFP was immune to cardiac motion effects, even at 3.0 T. However, VCG-triggered 2D CINE SSFP imaging was prone to cardiac motion and EBS degradation if R-wave misregistration occurred. ACT-triggered acquisitions yielded LV parameters (end-diastolic volume (EDV), end-systolic volume (ESV), stroke volume (SV), ejection fraction (EF) and left ventricular mass (LVM)) comparable with those derived from VCG-triggered acquisitions (1.5 T: ESV_VCG=(56±17) ml, EDV_VCG=(151±32) ml, LVM_VCG=(97±27) g, SV_VCG=(94±19) ml, EF_VCG=(63±5)% cf. ESV_ACT=(56±18) ml, EDV_ACT=(147±36) ml, LVM_ACT=(102±29) g, SV_ACT=(91±22) ml, EF_ACT=(62±6)%; 3.0 T: ESV_VCG=(55±21) ml, EDV_VCG=(151±32) ml, LVM_VCG=(101±27) g, SV_VCG=(96±15) ml, EF_VCG=(65±7)% cf. ESV_ACT=(54±20) ml, EDV_ACT=(146±35) ml, LVM_ACT=(101±30) g, SV_ACT=(92±17) ml, EF_ACT=(64±6)%). Conclusions: ACT’s intrinsic insensitivity to interference from electromagnetic fields renders it a suitable alternative to VCG for triggering CMR.
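The LV parameters listed above are linked by simple volumetric identities (SV = EDV − ESV, EF = SV/EDV); a minimal sanity check using the mean 1.5 T VCG values from the abstract:

```python
def stroke_volume(edv_ml, esv_ml):
    """Stroke volume (ml) from end-diastolic and end-systolic volumes."""
    return edv_ml - esv_ml

def ejection_fraction_pct(edv_ml, esv_ml):
    """Ejection fraction in percent."""
    return 100.0 * (edv_ml - esv_ml) / edv_ml

# Mean 1.5 T VCG-triggered values reported in the abstract
edv, esv = 151.0, 56.0
print(stroke_volume(edv, esv))                 # 95.0 (abstract: 94 +/- 19 ml)
print(round(ejection_fraction_pct(edv, esv)))  # 63 (abstract: 63 +/- 5 %)
```

The small discrepancy between the derived SV (95 ml) and the reported mean (94 ml) is expected, since per-subject means need not compose exactly.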
With a steady increase of regulatory requirements for business processes, automation support of compliance management is a field garnering increasing attention in Information Systems research. Several approaches have been developed to support compliance checking of process models. One major challenge for such approaches is their ability to handle different modeling techniques and compliance rules in order to enable widespread adoption and application. Applying a structured literature search strategy, we reflect on and discuss compliance-checking approaches in order to provide insight into their generalizability and evaluation. The results imply that current approaches mainly focus on special modeling techniques and/or a restricted set of types of compliance rules. Most approaches abstain from real-world evaluation, which raises the question of their practical applicability. Referring to the search results, we propose a roadmap for further research in model-based business process compliance checking.
Given the strong increase in regulatory requirements for business processes, business process compliance management is receiving more and more attention in IS research. Several methods have been developed to support compliance checking of conceptual models. However, their focus on distinct modeling languages and mostly linear (i.e., predecessor–successor related) compliance rules may hinder widespread adoption and application in practice. Furthermore, hardly any of them has been evaluated in a real-world setting. We address this issue by applying a generic pattern matching approach for conceptual models to business process compliance checking in the financial sector. It consists of a model query language, a search algorithm and a corresponding modelling tool prototype. It is applicable (1) to all graph-based conceptual modeling languages and (2) to different kinds of compliance rules. Furthermore, based on an applicability check, we (3) evaluate the approach in a financial industry project setting with regard to its relevance for decision support in audit and compliance management tasks.
We prove characterizations of the existence of perfect ƒ-matchings in uniform Mengerian and perfect hypergraphs. Moreover, we investigate the ƒ-factor problem in balanced hypergraphs. For uniform balanced hypergraphs we prove two existence theorems with purely combinatorial arguments, whereas for non-uniform balanced hypergraphs we show that the ƒ-factor problem is NP-hard.
Most drugs are no longer produced by the pharmaceutical companies themselves in their home countries, but by contract manufacturers or at manufacturing sites in countries where production is cheaper. This not only makes drugs difficult to trace back, but also leaves room for criminal organizations to counterfeit them unnoticed. For these reasons, it is becoming increasingly difficult to determine the exact origin of drugs. The goal of this work was to investigate how precisely that origin can be determined by using different spectroscopic methods, namely nuclear magnetic resonance and near- and mid-infrared spectroscopy, in combination with multivariate data analysis. As an example, 56 out of 64 different paracetamol preparations, collected from 19 countries around the world, were chosen to investigate whether it is possible to determine the pharmaceutical company, manufacturing site, or country of origin. By means of suitable pre-processing of the spectra and the different information contained in each method, principal component analysis was able to reveal manufacturing relationships between individual companies and to differentiate between production sites or formulations. Linear discriminant analysis showed different results depending on the spectral method and purpose. For all spectroscopic methods, it was found that classifying the preparations by manufacturer achieves better results than classifying them by pharmaceutical company. The best results were obtained with nuclear magnetic resonance and near-infrared data, with 94.6%/99.6% and 98.7%/100% of the spectra of the preparations correctly assigned to their pharmaceutical company or manufacturer, respectively.
BIG KARL and COSY
(1995)
A comparative performance analysis of the CFD platforms OpenFOAM and FLOW-3D is presented, focusing on a 3D swirling turbulent flow: a steady hydraulic jump at low Reynolds number. Turbulence is treated using the RANS RNG k-ε approach. A Volume of Fluid (VOF) method is used to track the air–water interface; accordingly, aeration is modeled using an Eulerian–Eulerian approach. Structured meshes of cubic elements are used to discretize the channel geometry. The accuracy of the numerical models is assessed by comparing representative hydraulic jump variables (sequent depth ratio, roller length, mean velocity profiles, velocity decay and free surface profile) to experimental data. The model results are also compared to previous studies to broaden the validation. Both codes reproduced the phenomenon under study in agreement with the experimental data, although special care must be taken when swirling flows occur. Both models can be used to reproduce the hydraulic performance of energy dissipation structures at low Reynolds numbers.
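One of the validation variables listed, the sequent depth ratio, has a closed-form benchmark for a classical hydraulic jump in a horizontal rectangular channel, the Bélanger equation; a minimal sketch (not taken from either code):

```python
import math

def sequent_depth_ratio(fr1):
    """Belanger equation: ratio of downstream to upstream flow depth
    of a classical hydraulic jump, for inflow Froude number fr1 >= 1."""
    return 0.5 * (math.sqrt(1.0 + 8.0 * fr1 ** 2) - 1.0)

# Fr1 = 1 is critical flow: no jump, depth ratio 1
print(sequent_depth_ratio(1.0))            # 1.0
# A low-Froude-number jump, as in the regime studied here
print(round(sequent_depth_ratio(2.0), 3))  # 2.372
```

Comparing the simulated depth ratio against this momentum-balance value is a common first plausibility check before the finer-grained comparisons (roller length, velocity profiles) reported in the paper.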
Mechano-pharmacological testing of L-type Ca²⁺ channel modulators via a human vascular CellDrum model
(2020)
Background/Aims: This study aimed to establish a precise and well-defined working model for assessing pharmaceutical effects on vascular smooth muscle cell monolayers in vitro. It describes various analysis techniques to determine the most suitable one for measuring the biomechanical impact of vasoactive agents using CellDrum technology. Methods: The so-called CellDrum technology was applied to analyse the biomechanical properties of confluent human aortic smooth muscle cells (haSMC) in monolayer. Cell-generated tension deviations in the range of a few N/m² are evaluated by the CellDrum technology. This study focuses on the contractive and dilative effects of L-type Ca²⁺ channel agonists and antagonists, respectively. We analyzed the effects of Bay K8644, nifedipine and verapamil. Three different measurement modes were developed and applied to determine the most appropriate analysis technique for the study purpose. These three operation modes are called "particular time mode" (PTM), "long term mode" (LTM) and "real-time mode" (RTM). Results: It was possible to quantify the biomechanical response of haSMCs to the addition of vasoactive agents using CellDrum technology. Upon supplementation of 100 nM Bay K8644, the tension increased by approximately 10.6% from the initial tension maximum, whereas treatment with nifedipine and verapamil caused a significant decrease in cellular tension: 10 nM nifedipine decreased the biomechanical stress by around 6.5% and 50 nM verapamil by 2.8%, compared to the initial tension maximum. Additionally, all tested measurement modes provide similar results while focusing on different analysis parameters. Conclusion: The CellDrum technology allows highly sensitive biomechanical stress measurements of cultured haSMC monolayers. The mechanical stress responses evoked by the application of vasoactive calcium channel modulators were quantified functionally (N/m²). All tested operation modes yielded comparable findings, while each mode features mode-specific data analysis.
Hypertension describes the pathological increase of blood pressure, which is most commonly associated with increased vascular wall stiffness [1]. According to the “Deutsche Bluthochdruck Liga”, this pathology shows a growing trend in our aging society. In order to find novel pharmacological and possibly personalized treatments, we present a functional approach to studying the biomechanical properties of a human aortic vascular model.
In this methods review, we give an overview of recent studies carried out with the CellDrum technology [2] and highlight its added value compared with existing standard procedures from the field of physiology.
The CellDrum technology described herein is a system for measuring the functional mechanical properties of cell monolayers and thin tissue constructs in vitro. Additionally, the CellDrum makes it possible to elucidate the mechanical response of cells to pharmacological drugs, toxins and vasoactive agents. Owing to its highly flexible polymer support, cells can also be mechanically stimulated by steady and cyclic biaxial stretching.
The sandfish (Scincus scincus) is a lizard with the remarkable ability to move through desert sand for significant distances. It is well adapted to living in loose sand by virtue of a combination of morphological and behavioural specializations. We investigated the body form of the sandfish using 3D laser scanning and explored its locomotion in loose desert sand using fast nuclear magnetic resonance (NMR) imaging. The sandfish exhibits an in-plane meandering motion with a frequency of about 3 Hz and an amplitude of about half its body length, accompanied by swimming-like (or trotting) movements of its limbs. No torsion of the body was observed, a movement that would be required for digging behaviour. Simple calculations based on the Janssen model for granular material, related to our findings on body form and locomotor behaviour, render a local decompaction of the sand surrounding the moving sandfish very likely. Thus the sand locally behaves as a viscous fluid and not as a solid material. In this fluidised sand the sandfish is able to “swim” using its limbs.
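The Janssen model mentioned above relates vertical pressure in granular material to depth, saturating because wall friction carries part of the weight; a minimal sketch in which all parameter values are illustrative assumptions, not the paper's:

```python
import math

def janssen_pressure(z, rho=1600.0, g=9.81, d=0.1, mu=0.5, k_j=0.4):
    """Janssen model: vertical pressure (Pa) at depth z (m) in a
    granular column of diameter d, with bulk density rho, wall
    friction coefficient mu and lateral pressure ratio k_j."""
    lam = d / (4.0 * mu * k_j)      # characteristic saturation depth (m)
    p_inf = rho * g * lam           # asymptotic pressure (Pa)
    return p_inf * (1.0 - math.exp(-z / lam))

# Unlike a hydrostatic column, the pressure saturates with depth:
print(round(janssen_pressure(0.5)))   # 1926
print(round(janssen_pressure(5.0)))   # 1962 (already at the plateau)
```

The saturation is the relevant point for the argument in the abstract: beyond a shallow characteristic depth, the load on the buried animal stops growing with depth.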
Background
Minor changes in protein structure induced by small organic and inorganic molecules can result in significant metabolic effects. The effects can be even more profound if the molecular players are chemically active and present in the cell in considerable amounts. The aim of our study was to investigate the effects of a nitric oxide donor (spermine NONOate), ATP and the sodium/potassium environment on the dynamics of thermal unfolding of human hemoglobin (Hb). The effect of these molecules was examined by means of circular dichroism (CD) spectrometry in the temperature range between 25°C and 70°C. The alpha-helical content of buffered hemoglobin samples (0.1 mg/ml) was estimated via ellipticity change measurements at a heating rate of 1°C/min.
Results
The major results were:
1) spermine NONOate persistently decreased the hemoglobin unfolding temperature Tᵤ, irrespective of the Na⁺/K⁺ environment,
2) ATP, in contrast, increased the unfolding temperature by 3°C in both sodium-based and potassium-based buffers, and
3) the mutual effects of ATP and NO were strongly influenced by the particular ionic composition of the buffer. Moreover, the presence of potassium facilitated a partial unfolding of alpha-helical structures even at room temperature.
Conclusion
The obtained data might shed more light on molecular mechanisms and biophysics involved in the regulation of protein activity by small solutes in the cell.
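The ellipticity-based estimate of alpha-helical content described in the Background can be sketched as a simple two-state interpolation between folded and unfolded baselines; the numbers below are illustrative assumptions, not the study's calibration:

```python
def fraction_folded(theta_obs, theta_folded, theta_unfolded):
    """Two-state estimate of the folded (helical) fraction from an
    observed ellipticity theta_obs, given the fully folded and fully
    unfolded baseline ellipticities (e.g. at 222 nm)."""
    return (theta_obs - theta_unfolded) / (theta_folded - theta_unfolded)

# Illustrative values in mdeg: an observation halfway between the
# baselines corresponds to a folded fraction of 0.5 -- roughly the
# midpoint of a thermal unfolding curve, i.e. the melting temperature.
print(fraction_folded(-15.0, -25.0, -5.0))  # 0.5
```

Tracking this fraction while ramping the temperature at a fixed rate (here 1°C/min) is what yields the unfolding temperature compared across buffer conditions in the Results.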
The methodological discourse of mixed-methods research offers general procedures to combine quantitative and qualitative methods for investigating complex fields of research such as higher education. However, integrating different methods still poses considerable challenges. To move beyond general recommendations for mixed-methods research, this chapter proposes to discuss methodological issues with respect to a particular research domain. Taking current studies on the transition to higher education as an example, the authors first provide an overview of the potentials and limitations of quantitative and qualitative methods in the research domain. Second, they show the need for a conceptual framework grounded in the theory of the research object to guide the integration of different methods and findings. Finally, an example study that investigates transition with regard to the interplay of the individual student and the institutional context serves to illustrate the guiding role of theory. The framework integrates different theoretical perspectives on transition, informs the selection of the research methods, and defines the nexus of the two strands that constitute the mixed-methods design. As the interplay of individual and context is of concern for teaching and learning in general, the example presented may be fruitful for the wider field of higher education research.
The paper deals with the asymptotic behaviour of estimators, statistical tests and confidence intervals for L²-distances to uniformity based on the empirical distribution function, the integrated empirical distribution function and the integrated empirical survival function. Approximations of power functions, confidence intervals for the L²-distances and statistical neighbourhood-of-uniformity validation tests are obtained as main applications. The finite sample behaviour of the procedures is illustrated by a simulation study.
On the basis of bivariate data, assumed to be observations of independent copies of a random vector (S,N), we consider testing the hypothesis that the distribution of (S,N) belongs to the parametric class of distributions that arise with the compound Poisson exponential model. Typically, this model is used in stochastic hydrology, with N as the number of rain days and S as the total rainfall amount during a certain time period, or in actuarial science, with N as the number of losses and S as the total loss expenditure during a certain time period. The compound Poisson exponential model is characterized by the fact that a specific transform associated with the distribution of (S,N) satisfies a certain differential equation. Mimicking the functional part of this equation by substituting the empirical counterparts of the transform, we obtain an expression; the weighted integral of its square is used as the test statistic. We deal with two variants of the latter, one of which is invariant under scale transformations of the S-part by fixed positive constants. Critical values are obtained by using a parametric bootstrap procedure. The asymptotic behavior of the tests is discussed. A simulation study demonstrates the performance of the tests in the finite sample case. The procedure is applied to rainfall data and to an actuarial dataset. A multivariate extension is also discussed.
Let X₁,…,Xₙ be independent and identically distributed random variables with distribution F. Assuming that there are measurable functions f:R²→R and g:R²→R characterizing a family F of distributions on the Borel sets of R in the way that the random variables f(X₁,X₂),g(X₁,X₂) are independent, if and only if F∈F, we propose to treat the testing problem H:F∈F,K:F∉F by applying a consistent nonparametric independence test to the bivariate sample variables (f(Xᵢ,Xⱼ),g(Xᵢ,Xⱼ)),1⩽i,j⩽n,i≠j. A parametric bootstrap procedure needed to get critical values is shown to work. The consistency of the test is discussed. The power performance of the procedure is compared with that of the classical tests of Kolmogorov–Smirnov and Cramér–von Mises in the special cases where F is the family of gamma distributions or the family of inverse Gaussian distributions.
The paper deals with an asymptotic relative efficiency concept for confidence regions of multidimensional parameters that is based on the expected volumes of the confidence regions. Under standard conditions the asymptotic relative efficiencies of confidence regions are seen to be certain powers of the ratio of the limits of the expected volumes. These limits are explicitly derived for confidence regions associated with certain plugin estimators, likelihood ratio tests and Wald tests. Under regularity conditions, the asymptotic relative efficiency of each of these procedures with respect to each one of its competitors is equal to 1. The results are applied to multivariate normal distributions and multinomial distributions in a fairly general setting.
In a special paired sample case, Hotelling’s T² test based on the differences of the paired random vectors is the likelihood ratio test for testing the hypothesis that the paired random vectors have the same mean; with respect to a special group of affine linear transformations it is the uniformly most powerful invariant test for the general alternative of a difference in mean. We present an elementary straightforward proof of this result. The likelihood ratio test for testing the hypothesis that the covariance structure is of the assumed special form is derived and discussed. Applications to real data are given.
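For reference, the paired-sample statistic discussed above has the following standard form (our notation: n pairs, p-dimensional differences D_i with sample mean D̄ and sample covariance S_D):

```latex
T^2 = n\,\bar{D}^{\top} S_D^{-1} \bar{D},
\qquad
\frac{n-p}{p(n-1)}\,T^2 \sim F_{p,\,n-p}
\quad \text{under } H_0\colon \mu_D = 0 .
```

Rejecting for large T² is the test whose likelihood-ratio and invariance properties the abstract establishes for the special paired-sample setting.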
Hotelling’s T² tests in paired and independent survey samples are compared using the traditional asymptotic efficiency concepts of Hodges–Lehmann, Bahadur and Pitman, as well as through criteria based on the volumes of corresponding confidence regions. Conditions characterizing the superiority of a procedure are given in terms of population canonical correlation type coefficients. Statistical tests for checking these conditions are developed. Test statistics based on the eigenvalues of a symmetrized sample cross-covariance matrix are suggested, as well as test statistics based on sample canonical correlation type coefficients.
Tricarbonylrhenium(I) and -technetium(I) halide (halide = Cl and Br) complexes of ligands derived from 4,5-diazafluoren-9-one (df) and 1,10-phenanthroline-5,6-dione (phen) derivatives of benzoic and 2-hydroxybenzoic acid hydrazides have been prepared. The complexes have been characterized by elemental analysis, MS, IR, ¹H NMR as well as absorption and emission UV/Vis spectroscopic methods. The metal centres (Re(I) and Tc(I)) are coordinated through the imine nitrogen atoms and establish five-membered chelate rings, whereas the hydrazone groups remain uncoordinated. The ¹H NMR spectra suggest the same behaviour in solution on the basis of only marginal variations in the chemical shifts of the hydrazine protons.
This article describes the fabrication, characterization and application of an epidermal temporary-transfer tattoo-based potentiometric sensor, coupled with a miniaturized wearable wireless transceiver, for real-time monitoring of sodium in human perspiration. Sodium excreted during perspiration is an excellent marker for electrolyte imbalance and provides valuable information regarding an individual's physical and mental wellbeing. The new skin-worn, non-invasive tattoo-like sensing device was realized by amalgamating several state-of-the-art thick-film, laser printing, solid-state potentiometry, fluidics and wireless technologies. The resulting tattoo-based potentiometric sodium sensor displays a rapid near-Nernstian response with negligible carryover effects and good resiliency against the various mechanical deformations experienced by the human epidermis. On-body testing of the tattoo sensor coupled to a wireless transceiver during exercise activity demonstrated its ability to continuously monitor sweat sodium dynamics. The real-time sweat sodium concentration was transmitted wirelessly via the body-worn transceiver from the sodium tattoo sensor to a notebook while the subjects perspired on a stationary cycle. The favorable analytical performance, along with the wearable nature of the wireless transceiver, makes the new epidermal potentiometric sensing system attractive for continuous monitoring of sodium dynamics in human perspiration during diverse activities relevant to the healthcare, fitness, military and skin-care domains.
Purpose: A precise determination of the corneal diameter is essential for the diagnosis of various ocular diseases, cataract and refractive surgery as well as for the selection and fitting of contact lenses. The aim of this study was to investigate the agreement between two automatic and one manual method for corneal diameter determination and to evaluate possible diurnal variations in corneal diameter.
Patients and Methods: The horizontal white-to-white corneal diameter of 20 volunteers was measured at three fixed times of the day with three methods: the Scheimpflug method (Pentacam HR, Oculus), Placido-based topography (Keratograph 5M, Oculus) and a manual method using image analysis software at a slit lamp (BQ900, Haag-Streit).
Results: The two-factorial analysis of variance showed no significant effect of the different instruments (p = 0.117), the different time points (p = 0.506) or the interaction between instrument and time point (p = 0.182). Very good repeatability (intraclass correlation coefficient ICC, quartile coefficient of dispersion QCD) was found for all three devices. However, manual slit-lamp measurements showed a higher QCD than the automatic measurements with the Keratograph 5M and the Pentacam HR at all measurement times.
Conclusion: The manual and automated methods used in the study to determine corneal diameter showed good agreement and repeatability. No significant diurnal variations of corneal diameter were observed during the period of time studied.
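The repeatability measure used above, the quartile coefficient of dispersion, is a simple robust dispersion statistic; a minimal sketch with made-up measurement values, not study data:

```python
import statistics

def qcd(values):
    """Quartile coefficient of dispersion: (Q3 - Q1) / (Q3 + Q1).
    Smaller values indicate better repeatability."""
    q1, _, q3 = statistics.quantiles(values, n=4)
    return (q3 - q1) / (q3 + q1)

# Hypothetical repeated corneal diameter readings (mm)
print(round(qcd([11.9, 12.0, 12.1, 12.0, 12.2]), 4))
```

Because it is built from quartiles rather than the mean and standard deviation, the QCD is insensitive to occasional outlier readings, which makes it a natural companion to the ICC for device comparisons.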
This paper describes the concept of an innovative, interdisciplinary, user-oriented earthquake warning and rapid response system coupled with a structural health monitoring (SHM) system capable of detecting structural damage in real time. The novel system is based on interconnected, decentralized seismic and structural health monitoring sensors. It is being developed and will be exemplarily applied to critical infrastructures in the Lower Rhine region, in particular to a road bridge and within a chemical industrial facility. A communication network is responsible for exchanging information between sensors and forwarding warnings and status reports about the infrastructures' health condition to the concerned recipients (e.g., facility operators, local authorities). Safety measures such as emergency shutdowns are activated to mitigate structural damage and damage propagation. Local monitoring systems of the infrastructures are integrated into BIM models. The visualization of sensor data and the graphic representation of the detected damage provide spatial context for the sensor data and serve as a useful and effective tool for decision-making processes after an earthquake in the region under consideration.
Comparison of intravenous immunoglobulins for naturally occurring autoantibodies against amyloid-β
(2010)
Intravenous immunoglobulins (IVIG) are currently used for therapeutic purposes in autoimmune disorders. Recently, we demonstrated the presence of naturally occurring antibodies against amyloid-β (nAbs-Aβ) within the pool of IVIG. In this study, we compared different brands of IVIG for nAbs-Aβ and found differences in the specificity of the nAbs-Aβ towards Aβ1–40 and Aβ1–42. We analyzed the influence of a pH shift over the course of antibody storage using ELISA and investigated antibody dimerization at acidic and neutral pH, as well as differences in the IgG subclass distributions among the IVIG, using both HPLC and a nephelometric assay. Furthermore, we investigated the epitope region of purified nAbs-Aβ. The differences found in Aβ specificity are not directly proportionate to the binding behavior of these antibodies when administered in vivo. This information, however, may serve as a guide when choosing the commercial source of IVIG for therapeutic applications in Alzheimer's disease.
BACKGROUND
Immunosuppression is often considered as an indication for antibiotic prophylaxis to prevent surgical site infections (SSI) while performing skin surgery. However, the data on the risk of developing SSI after dermatologic surgery in immunosuppressed patients are limited.
PATIENTS AND METHODS
All patients of the Department of Dermatology and Allergology at the University Hospital of RWTH Aachen in Aachen, Germany, who underwent hospitalization for a dermatologic surgery between June 2016 and January 2017 (6 months), were followed up after surgery until completion of the wound healing process. The follow-up addressed the occurrence of SSI and the need for systemic antibiotics after the operative procedure. Immunocompromised patients were compared with immunocompetent patients. The investigation was conducted as a retrospective analysis of patient records.
RESULTS
The authors performed 284 dermatologic surgeries in 177 patients. Nineteen percent (54/284) of the surgeries were performed on immunocompromised patients. The most common indications for surgical treatment were nonmelanoma skin cancer and malignant melanoma. Surgical site infections occurred in 6.7% (19/284) of the cases. In 95% (18/19) of these, systemic antibiotic treatment was needed. Twenty-one percent of all SSI (4/19) occurred in immunosuppressed patients.
CONCLUSION
According to the authors' data, immunosuppression does not represent a significant risk factor for SSI after dermatologic surgery. However, larger prospective studies are needed to make specific recommendations on the use of antibiotic prophylaxis while performing skin surgery in these patients.
The available data on complications after dermatologic surgery have improved over the past years. In particular, additional risk factors have been identified for surgical site infections (SSI). Purulent surgical sites, older age, involvement of the head, neck, and acral regions, and also the involvement of less experienced surgeons have been reported to increase the risk of SSI after dermatologic surgery.1 In general, the incidence of SSI after skin surgery is considered to be low.1,2 However, antibiotics in dermatologic surgery, especially in the perioperative setting, appear to be overused,3,4 which is particularly problematic in view of developing antibiotic resistance and side effects.
Immunosuppression has been recommended for consideration as an additional indication for antibiotic prophylaxis to prevent SSI after skin surgery in special cases.5,6 However, these recommendations do not specify the exact dermatologic procedures and were not specifically developed for dermatologic surgery patients and treatments, but were adopted from other surgical fields.6 According to a survey of American College of Mohs Surgery members in 2012, 13% to 29% of surgeons administered antibiotic prophylaxis to immunocompromised patients to prevent SSI while performing dermatologic surgery on noninfected skin,3 although this was not recommended by the Journal of the American Academy of Dermatology Advisory Statement. Indeed, the data on the risk of developing SSI after dermatologic surgery in immunosuppressed patients are limited. It is therefore possible that, owing to the insufficient evidence on the risk of SSI in this patient group, dermatologic surgeons tend to overuse perioperative antibiotic prophylaxis.
To make specific recommendations on the use of antibiotic prophylaxis in immunosuppressed patients in the field of skin surgery, more information about the incidence of SSI after dermatologic surgery in these patients is needed. The aim of this study was to fill this data gap by investigating whether there is an increased risk of SSI after skin surgery in immunocompromised patients compared with immunocompetent patients.
Like all preceding transformations of the manufacturing industry, the large-scale usage of production data will reshape the role of humans within the sociotechnical production ecosystem. To ensure that this transformation creates work systems in which employees are empowered, productive, healthy, and motivated, the transformation must be guided by principles of and research on human-centered work design. Specifically, measures must be taken at all levels of work design, ranging from (1) the work tasks to (2) the working conditions to (3) the organizational level and (4) the supra-organizational level. We present selected research across all four levels that showcase the opportunities and requirements that surface when striving for human-centered work design for the Internet of Production (IoP). (1) On the work task level, we illustrate the user-centered design of human-robot collaboration (HRC) and process planning in the composite industry as well as user-centered design factors for cognitive assistance systems. (2) On the working conditions level, we present a newly developed framework for the classification of HRC workplaces. (3) Moving to the organizational level, we show how corporate data can be used to facilitate best practice sharing in production networks, and we discuss the implications of the IoP for new leadership models. Finally, (4) on the supra-organizational level, we examine overarching ethical dimensions, investigating, e.g., how the new work contexts affect our understanding of responsibility and normative values such as autonomy and privacy. Overall, these interdisciplinary research perspectives highlight the importance and necessary scope of considering the human factor in the IoP.
Melting probes are a proven tool for the exploration of thick ice layers and clean sampling of subglacial water on Earth. Their compact size and ease of operation also make them a key technology for the future exploration of icy moons in our Solar System, most prominently Europa and Enceladus. For both mission planning and hardware engineering, metrics such as efficiency and expected performance in terms of achievable speed, power requirements, and necessary heating power have to be known.
Theoretical studies aim to describe thermal losses, while laboratory experiments and field tests allow an empirical investigation of the true performance. To investigate the practical value of a performance model for the operational performance in extraterrestrial environments, we first contrast measured data from terrestrial field tests on temperate and polythermal glaciers with results from basic heat loss models and a melt trajectory model. For this purpose, we propose conventions for the determination of two different efficiencies that can be applied to both measured data and models. One definition of efficiency relates to the melting head only, while the other considers the melting probe as a whole. We also present methods to combine several sources of heat loss for probes with a circular cross-section, and to translate the geometry of probes with a non-circular cross-section so that they can be analysed in the same way. The models were selected in a way that minimizes the need to make assumptions about unknown parameters of the probe or the ice environment.
The results indicate that currently used models do not yet reliably reproduce the performance of a probe under realistic conditions. Melting velocities and efficiencies are consistently overestimated by 15 to 50% in the models, although they qualitatively agree with the field test data. Hence, losses are observed that are not yet covered and quantified by the available loss models. We find that the deviation increases with decreasing ice temperature. We suspect that this mismatch is mainly due to the overly restrictive idealization of the probe model and the fact that the probe was not operated in an efficiency-optimized manner during the field tests. With respect to space mission engineering, we find that performance and efficiency models must be used with caution in unknown ice environments, as various ice parameters have a significant effect on the melting process, and some of these are difficult to estimate from afar.
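The head-related efficiency discussed above can be illustrated with the standard energy balance for an idealized melting probe: all heating power delivered to the head goes into warming the swept ice column to the melting point and then melting it, giving v = η·P / (A·ρ_ice·(c_ice·ΔT + L)). A minimal sketch; the probe dimensions and power are hypothetical and the ice constants are approximate literature values, not numbers from the field tests:

```python
import math

# Approximate literature values for ice (assumptions for illustration)
RHO_ICE = 917.0      # density, kg/m^3
C_ICE = 2100.0       # specific heat capacity, J/(kg K)
L_FUSION = 334000.0  # latent heat of fusion, J/kg

def ideal_melt_velocity(power_w, radius_m, ice_temp_c, efficiency=1.0):
    """Idealized melting velocity in m/s for a probe with circular
    cross-section: heating power divided by the energy needed to warm
    and melt the ice column swept by the head. Lateral and conductive
    losses are lumped into `efficiency`."""
    area = math.pi * radius_m ** 2
    energy_per_m3 = RHO_ICE * (C_ICE * abs(ice_temp_c) + L_FUSION)
    return efficiency * power_w / (area * energy_per_m3)

# Hypothetical 1 kW probe, 12 cm diameter, ice at -10 degC, no losses:
v = ideal_melt_velocity(1000.0, 0.06, -10.0)
print(f"{v * 3600:.2f} m/h")  # roughly 1 m/h under these assumptions
```

Scaling `efficiency` down by the 15 to 50% overestimation reported above would bring such idealized predictions closer to the measured field performance; the sensible-heat term also makes explicit why the deviation grows in colder ice.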
The problem of creating and using sorption materials is of current interest for modern medicine and agriculture. Of practical importance is the production of a biostimulant using a carbon sorbent to significantly increase productivity, which is highly relevant for the regions of Kazakhstan. It is known that the plant phytohormone fusicoccin, in nanogram concentrations, drives cancer cells into apoptosis. In this regard, there is scientific and practical interest in developing a highly efficient method for producing fusicoccin from an extract of germinated wheat seeds. According to the results of computer modeling, microporous carbon adsorbents are not suitable for purifying the composite components of fusicoccin, as the fusicoccin molecule is larger than the micropores; the optimum pore size for purification of the constituents of fusicoccin was determined by computer simulation.
Combined with the use of renewable energy sources for its production, hydrogen represents a possible alternative gas turbine fuel for future low-emission power generation. Because its physical properties differ from those of other fuels such as natural gas, well-established gas turbine combustion systems cannot be directly applied to Dry Low NOx (DLN) hydrogen combustion. This makes the development of new combustion technologies an essential and challenging task for the future of hydrogen-fueled gas turbines.
The newly developed and successfully tested "DLN Micromix" combustion technology offers great potential for burning hydrogen in gas turbines at very low NOx emissions. To further develop an existing burner design in terms of increased energy density, a redesign is required in order to stabilise the flames at higher mass flows and to maintain low emission levels.
For this purpose, a systematic design exploration has been carried out with the support of CFD and optimisation tools to identify the interactions of geometrical and design parameters and their effects on combustor performance. Aerodynamic effects as well as flame and emission formation are observed and understood time- and cost-efficiently. As a result, correlations between individual geometric values, the pressure drop of the burner, and NOx production have been identified. This numerical methodology helps to reduce the manufacturing and testing effort to a few designs for single validation campaigns, in order to confirm flame stability and NOx emissions over a wider range of operating conditions.
Combined with the use of renewable energy sources for its production, hydrogen represents a possible alternative gas turbine fuel for future low-emission power generation. Owing to the large difference in physical properties between hydrogen and other fuels such as natural gas, well-established gas turbine combustion systems cannot be directly applied to Dry Low NOx (DLN) hydrogen combustion. Thus, the development of DLN combustion technologies is an essential and challenging task for the future of hydrogen-fuelled gas turbines. The DLN Micromix combustion principle for hydrogen fuel has been developed to significantly reduce NOx emissions. This combustion principle is based on cross-flow mixing of air and gaseous hydrogen, which reacts in multiple miniaturized diffusion-type flames. The major advantages of this combustion principle are the inherent safety against flashback and the low NOx emissions due to the very short residence time of reactants in the flame region of the micro-flames. The Micromix combustion technology has already been proven experimentally and numerically for pure hydrogen fuel operation at different energy density levels. The aim of the present study is to analyze the influence of different geometry parameter variations on the flame structure and NOx emissions and to identify the most relevant design parameters. The goal is to provide a physical understanding of the Micromix flame's sensitivity to the burner design and to identify further optimization potential of this innovative combustion technology while increasing its energy density and making it mature enough for real gas turbine application. The study reveals great optimization potential of the Micromix combustion technology with respect to its DLN characteristics and gives insight into the impact of geometry modifications on flame structure and NOx emissions. This allows the energy density of the Micromix burners to be further increased and the technology to be integrated into industrial gas turbines.
In this paper, we provide an analytical study of the transmission eigenvalue problem with two conductivity parameters. We will assume that the underlying physical model is given by the scattering of a plane wave for an isotropic scatterer. In previous studies, this eigenvalue problem was analyzed with one conductive boundary parameter whereas we will consider the case of two parameters. We prove the existence and discreteness of the transmission eigenvalues as well as study the dependence on the physical parameters. We are able to prove monotonicity of the first transmission eigenvalue with respect to the parameters and consider the limiting procedure as the second boundary parameter vanishes. Lastly, we provide extensive numerical experiments to validate the theoretical work.
Direct sampling method via Landweber iteration for an absorbing scatterer with a conductive boundary
(2024)
In this paper, we consider the inverse shape problem of recovering isotropic scatterers with a conductive boundary condition. Here, we assume that the measured far-field data is known at a fixed wave number. Motivated by recent work, we study a new direct sampling indicator based on the Landweber iteration and the factorization method. Therefore, we prove the connection between these reconstruction methods. The method studied here falls under the category of qualitative reconstruction methods where an imaging function is used to recover the absorbing scatterer. We prove stability of our new imaging function as well as derive a discrepancy principle for recovering the regularization parameter. The theoretical results are verified with numerical examples to show how the reconstruction performs by the new Landweber direct sampling method.
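The Landweber iteration underlying the new sampling indicator is, in its basic linear form, the gradient step x_{k+1} = x_k + μ·Aᵀ(b − A·x_k), which converges for step sizes μ < 2/‖A‖². A minimal sketch on a toy linear system; the operator and data here are purely illustrative and not the far-field operator of the paper:

```python
def landweber(A, b, mu, iterations):
    """Basic Landweber iteration for A x = b, with A given as a list of
    rows: x_{k+1} = x_k + mu * A^T (b - A x_k), starting from x = 0."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iterations):
        # residual r = b - A x
        r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        # gradient step x += mu * A^T r
        for j in range(n):
            x[j] += mu * sum(A[i][j] * r[i] for i in range(m))
    return x

# Toy diagonal system with solution (1, 1); mu must satisfy mu < 2/||A||^2
A = [[2.0, 0.0], [0.0, 1.0]]
b = [2.0, 1.0]
x = landweber(A, b, mu=0.4, iterations=200)
print([round(v, 6) for v in x])  # → [1.0, 1.0]
```

For ill-posed problems the iteration count itself acts as a regularization parameter, which is why a stopping rule such as the discrepancy principle derived in the paper is needed in practice.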
The initial idea of Robotic Process Automation (RPA) is the automation of business processes through the presentation layer of existing application systems. For this simple emulation of user input and output by software robots, no changes to the systems or architecture are required. However, considering strategic aspects of aligning business and technology at the enterprise level, as well as the growing capabilities of RPA driven by artificial intelligence, interrelations between RPA and Enterprise Architecture (EA) become visible and pose new questions. In this paper we discuss the relationship between RPA and EA in terms of perspectives and implications. As work in progress, we focus on identifying new questions and research opportunities related to RPA and EA.
The ClearPET project
(2004)
The Crystal Clear Collaboration has designed and is building a high-resolution small animal PET scanner. The design is based on the Hamamatsu R7600-M64 multi-anode photomultiplier tube and a LSO/LuYAP phoswich matrix with one-to-one coupling between the crystals and the photo-detector. The complete system will have 80 PM tubes in four rings with an inner diameter of 137 mm and an axial field of view of 110 mm. The PM pulses are digitized by free-running ADCs, and digital data processing determines the gamma energy, the phoswich layer, and even the pulse arrival time. Single gamma interactions are recorded, and coincidences are found by software. The gantry allows rotation of the detector modules around the field of view. Simulations and measurements with a 2×4 module test set-up predict a spatial resolution of 1.5 mm and a sensitivity of 5.9% for a point source in the centre of the field of view.