As a low-input crop, Miscanthus offers numerous advantages that, in addition to agricultural applications, permit its exploitation for energy, fuel, and material production. Depending on the Miscanthus genotype, season, and harvest time, as well as on the plant component (leaf versus stem), the structure–property correlations of the corresponding isolated lignins differ. Here, a comparative study is presented between lignins isolated from M. x giganteus, M. sinensis, M. robustus and M. nagara using a catalyst-free organosolv pulping process. The lignins from different plant constituents are also compared with respect to their monolignol ratios and important linkages. Results showed that the plant genotype has the weakest influence on monolignol content and interunit linkages. In contrast, structural differences are more pronounced among lignins of different harvest times and/or seasons. Analyses were performed using fast and simple methods such as nuclear magnetic resonance (NMR) spectroscopy. Data were assigned to four different linkages (A: β-O-4 linkage, B: phenylcoumaran, C: resinol, D: β-unsaturated ester). In conclusion, the A content is particularly high in leaf-derived lignins at just under 70%, and significantly lower in stem and mixture lignins at around 60% and almost 65%, respectively. The second most common linkage pattern in all isolated lignins is D, the proportion of which is also strongly dependent on the crop portion. Both stem and mixture lignins have a relatively high share of approximately 20% or more (the maximum is M. sinensis Sin2 with over 30%), whereas in the leaf-derived lignins the proportions are significantly lower on average. Stem samples should be chosen if the highest possible lignin content is desired, specifically from the M. x giganteus genotype, which revealed lignin contents of up to 27%. Due to its better frost resistance and higher stem stability, M. nagara offers some advantages compared to M. x giganteus.
Miscanthus crops are shown to be a very attractive lignocellulose feedstock (LCF) for second-generation biorefineries and lignin generation in Europe.
High aerodynamic efficiency requires propellers with high aspect ratios, while propeller sweep potentially reduces noise. Propeller sweep and high aspect ratios increase elasticity and the coupling of structural mechanics and aerodynamics, affecting propeller performance and noise. Therefore, this paper analyzes the influence of elasticity on forward-swept, backward-swept, and unswept propellers in hover conditions. A reduced-order blade element momentum approach is coupled with a one-dimensional Timoshenko beam theory and Farassat's formulation 1A. The results of the aeroelastic simulation are used as input for the aeroacoustic calculation. The analysis shows that elasticity influences noise radiation because thickness and loading noise respond differently to deformations. In the case of the backward-swept propeller, the location of the maximum sound pressure level shifts forward by 0.5°, while in the case of the forward-swept propeller, it shifts backward by 0.5°. Therefore, aeroacoustic optimization requires the consideration of propeller deformation.
Antibias training is increasingly demanded and practiced in academia and industry to increase employees’ sensitivity to discrimination, racism, and diversity. Under the heading of “Diversity Management,” antibias trainings are mainly offered as one-off workshops intended to raise awareness of unconscious biases, create a diversity-affirming corporate culture, promote awareness of the potential of diversity, and ultimately enable the reflection of diversity in development processes. However, since the approach originates in childhood education, research and scientific articles on the sustainable effectiveness of antibias training in adulthood, especially in academia, are very scarce. In order to fill this research gap, the article aims to explore how sustainable the effects of individual antibias trainings on participants’ behavior are. To investigate this, participant observation in a qualitative pre–post setting was conducted, analyzing antibias training in an academic context. Two observers actively participated in the training sessions and documented the activities and reflection processes of the participants. Overall, the results question the effectiveness of single antibias trainings and show that a target-group-adaptive approach is mandatory, given the background of the approach in early childhood education. Antibias work therefore needs to be adapted to the target group’s needs and realities of life. Furthermore, the study reveals that single antibias trainings must be embedded in a holistic diversity management approach to stimulate sustainable reflection processes among the target group. This article is one of the first to scientifically evaluate antibias training effectiveness, especially in the engineering sciences and the university context.
To better understand what kinds of sports and exercise could be beneficial for the intervertebral disc (IVD), we performed a review to synthesise the literature on IVD adaptation with loading and exercise. The state of the literature did not permit a systematic review; therefore, we performed a narrative review. The majority of the available data come from cell or whole-disc loading models and animal exercise models. However, some studies have examined the impact of specific sports on IVD degeneration in humans and acute exercise on disc size. Based on the data available in the literature, loading types that are likely beneficial to the IVD are dynamic, axial, at slow to moderate movement speeds, and of a magnitude experienced in walking and jogging. Static loading, torsional loading, flexion with compression, rapid loading, high-impact loading and explosive tasks are likely detrimental for the IVD. Reduced physical activity and disuse appear to be detrimental for the IVD. We also consider the impact of genetics and the likelihood of a ‘critical period’ for the effect of exercise in IVD development. The current review summarises the literature to increase awareness amongst exercise, rehabilitation and ergonomic professionals regarding IVD health and provides recommendations on future directions in research.
Design and initial performance of PlanTIS: a high-resolution positron emission tomograph for plants
(2010)
Positron emitters such as 11C, 13N and 18F and their labelled compounds are widely used in clinical diagnosis and animal studies, but can also be used to study metabolic and physiological functions in plants dynamically and in vivo. A very particular tracer molecule is 11CO2 since it can be applied to a leaf as a gas. We have developed a Plant Tomographic Imaging System (PlanTIS), a high-resolution PET scanner for plant studies. Detectors, front-end electronics and data acquisition architecture of the scanner are based on the ClearPET™ system. The detectors consist of LSO and LuYAP crystals in phoswich configuration which are coupled to position-sensitive photomultiplier tubes. Signals are continuously sampled by free running ADCs, and data are stored in a list mode format. The detectors are arranged in a horizontal plane to allow the plants to be measured in the natural upright position. Two groups of four detector modules stand face-to-face and rotate around the field-of-view. This special system geometry requires dedicated image reconstruction and normalization procedures. We present the initial performance of the detector system and first phantom and plant measurements.
Objective: As high-field cardiac MRI (CMR) becomes more widespread, the propensity of ECG to interference from electromagnetic fields (EMF) and to magneto-hydrodynamic (MHD) effects increases, and with it the motivation for a CMR triggering alternative. This study explores the suitability of acoustic cardiac triggering (ACT) for left ventricular (LV) function assessment in healthy subjects (n=14). Methods: Quantitative analysis of 2D CINE steady-state free precession (SSFP) images was conducted to compare ACT’s performance with vector ECG (VCG). Endocardial border sharpness (EBS) was examined, paralleled by quantitative LV function assessment. Results: Unlike VCG, ACT provided signal traces free of interference from EMF or MHD effects. In the case of correct R-wave recognition, VCG-triggered 2D CINE SSFP was immune to cardiac motion effects, even at 3.0 T. However, VCG-triggered 2D CINE SSFP imaging was prone to cardiac motion and EBS degradation if R-wave misregistration occurred. ACT-triggered acquisitions yielded LV parameters (end-diastolic volume (EDV), end-systolic volume (ESV), stroke volume (SV), ejection fraction (EF) and left ventricular mass (LVM)) comparable with those derived from VCG-triggered acquisitions (1.5 T: ESV_VCG=(56±17) ml, EDV_VCG=(151±32) ml, LVM_VCG=(97±27) g, SV_VCG=(94±19) ml, EF_VCG=(63±5)% cf. ESV_ACT=(56±18) ml, EDV_ACT=(147±36) ml, LVM_ACT=(102±29) g, SV_ACT=(91±22) ml, EF_ACT=(62±6)%; 3.0 T: ESV_VCG=(55±21) ml, EDV_VCG=(151±32) ml, LVM_VCG=(101±27) g, SV_VCG=(96±15) ml, EF_VCG=(65±7)% cf. ESV_ACT=(54±20) ml, EDV_ACT=(146±35) ml, LVM_ACT=(101±30) g, SV_ACT=(92±17) ml, EF_ACT=(64±6)%). Conclusions: ACT’s intrinsic insensitivity to interference from electromagnetic fields renders it a suitable alternative for CMR triggering.
With a steady increase in regulatory requirements for business processes, automation support for compliance management is a field garnering increasing attention in Information Systems research. Several approaches have been developed to support compliance checking of process models. One major challenge for such approaches is their ability to handle different modeling techniques and compliance rules in order to enable widespread adoption and application. Applying a structured literature search strategy, we review and discuss compliance-checking approaches in order to provide insight into their generalizability and evaluation. The results imply that current approaches mainly focus on specific modeling techniques and/or a restricted set of compliance rule types. Most approaches abstain from real-world evaluation, which raises the question of their practical applicability. Based on the search results, we propose a roadmap for further research in model-based business process compliance checking.
Given the strong increase in regulatory requirements for business processes, the management of business process compliance is becoming an increasingly regarded field in IS research. Several methods have been developed to support compliance checking of conceptual models. However, their focus on distinct modeling languages and mostly linear (i.e., predecessor–successor-related) compliance rules may hinder widespread adoption and application in practice. Furthermore, hardly any of them has been evaluated in a real-world setting. We address this issue by applying a generic pattern-matching approach for conceptual models to business process compliance checking in the financial sector. The approach consists of a model query language, a search algorithm and a corresponding modeling tool prototype. It is applicable (1) to all graph-based conceptual modeling languages and (2) to different kinds of compliance rules. Furthermore, based on an applicability check, we (3) evaluate the approach in a financial industry project setting with regard to its relevance for decision support in audit and compliance management tasks.
We prove characterizations of the existence of perfect ƒ-matchings in uniform Mengerian and perfect hypergraphs. Moreover, we investigate the ƒ-factor problem in balanced hypergraphs. For uniform balanced hypergraphs, we prove two existence theorems with purely combinatorial arguments, whereas for non-uniform balanced hypergraphs we show that the ƒ-factor problem is NP-hard.
Most drugs are no longer produced by the pharmaceutical companies in their own countries, but by contract manufacturers or at manufacturing sites in countries with lower production costs. This not only makes drugs difficult to trace back but also leaves room for criminal organizations to counterfeit them unnoticed. For these reasons, it is becoming increasingly difficult to determine the exact origin of drugs. The goal of this work was to investigate to what extent this is possible by using different spectroscopic methods, namely nuclear magnetic resonance and near- and mid-infrared spectroscopy, in combination with multivariate data analysis. As an example, 56 out of 64 different paracetamol preparations, collected from 19 countries around the world, were chosen to investigate whether it is possible to determine the pharmaceutical company, manufacturing site, or country of origin. By means of suitable pre-processing of the spectra and the different information contained in each method, principal component analysis was able to reveal manufacturing relationships between individual companies and to differentiate between production sites or formulations. Linear discriminant analysis showed different results depending on the spectral method and purpose. For all spectroscopic methods, it was found that classifying the preparations by their manufacturer achieves better results than classifying them by their pharmaceutical company. The best results were obtained with nuclear magnetic resonance and near-infrared data, with 94.6%/99.6% and 98.7%/100% of the spectra of the preparations correctly assigned to their pharmaceutical company or manufacturer, respectively.
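The chemometric workflow described above (spectral pre-processing, PCA for exploratory grouping, then a supervised classifier) can be sketched as follows. This is a minimal stand-in, not the study's pipeline: the synthetic "spectra", the per-manufacturer offsets, and the nearest-centroid classifier (used here instead of LDA to keep the sketch dependency-free) are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in "spectra": 60 preparations x 200 wavenumber channels,
# three hypothetical manufacturers with slightly shifted baselines.
labels = np.repeat([0, 1, 2], 20)
spectra = rng.normal(size=(60, 200)) + labels[:, None] * 0.5

# Pre-processing: mean-centre every wavenumber channel.
X = spectra - spectra.mean(axis=0)

# PCA via SVD: coordinates of each sample on the first two components.
_, _, vt = np.linalg.svd(X, full_matrices=False)
scores = X @ vt[:2].T

# A minimal supervised classifier: nearest class centroid in PCA space.
centroids = np.stack([scores[labels == k].mean(axis=0) for k in range(3)])
pred = np.argmin(((scores[:, None, :] - centroids) ** 2).sum(-1), axis=1)
accuracy = (pred == labels).mean()
```

On this cleanly separated toy data the score plot clusters by manufacturer and the classifier recovers the labels; with real spectra, the choice of pre-processing dominates how well such groupings emerge.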
A comparative performance analysis of the CFD platforms OpenFOAM and FLOW-3D is presented, focusing on a 3D swirling turbulent flow: a steady hydraulic jump at low Reynolds number. Turbulence is treated using the RANS RNG k-ε approach. A Volume of Fluid (VOF) method is used to track the air–water interface; consequently, aeration is modeled using an Eulerian–Eulerian approach. Structured meshes of cubic elements are used to discretize the channel geometry. The numerical model accuracy is assessed by comparing representative hydraulic jump variables (sequent depth ratio, roller length, mean velocity profiles, velocity decay, free surface profile) to experimental data. The model results are also compared to previous studies to broaden the result validation. Both codes reproduced the phenomenon under study in agreement with experimental data, although special care must be taken when swirling flows occur. Both models can be used to reproduce the hydraulic performance of energy dissipation structures at low Reynolds numbers.
Mechano-pharmacological testing of L-type Ca²⁺ channel modulators via a human vascular CellDrum model
(2020)
Background/Aims: This study aimed to establish a precise and well-defined working model for assessing pharmaceutical effects on vascular smooth muscle cell monolayers in vitro. It describes various analysis techniques to determine the most suitable one for measuring the biomechanical impact of vasoactive agents using CellDrum technology. Methods: The so-called CellDrum technology was applied to analyse the biomechanical properties of confluent human aortic smooth muscle cells (haSMC) in monolayer. Cell-generated tension deviations in the range of a few N/m² are evaluated by the CellDrum technology. This study focuses on the dilative and contractive effects of L-type Ca²⁺ channel agonists and antagonists, respectively. We analyzed the effects of Bay K8644, nifedipine and verapamil. Three different measurement modes were developed and applied to determine the most appropriate analysis technique for the study purpose. These three operation modes are called "particular time mode" (PTM), "long-term mode" (LTM) and "real-time mode" (RTM). Results: It was possible to quantify the biomechanical response of haSMCs to the addition of vasoactive agents using CellDrum technology. Upon supplementation with 100 nM Bay K8644, tension increased by approximately 10.6% from the initial tension maximum, whereas treatment with nifedipine and verapamil caused a significant decrease in cellular tension: 10 nM nifedipine decreased the biomechanical stress by around 6.5% and 50 nM verapamil by 2.8%, compared to the initial tension maximum. Additionally, all tested measurement modes provided similar results while focusing on different analysis parameters. Conclusion: The CellDrum technology allows highly sensitive biomechanical stress measurements of cultured haSMC monolayers. The mechanical stress responses evoked by the application of vasoactive calcium channel modulators were quantified functionally (N/m²).
All tested operation modes yielded equivalent findings, while each mode features operation-related data analysis.
The sandfish (Scincus scincus) is a lizard with the remarkable ability to move through desert sand for significant distances. It is well adapted to living in loose sand by virtue of a combination of morphological and behavioural specializations. We investigated the body form of the sandfish using 3D laser scanning and explored its locomotion in loose desert sand using fast nuclear magnetic resonance (NMR) imaging. The sandfish exhibits an in-plane meandering motion with a frequency of about 3 Hz and an amplitude of about half its body length, accompanied by swimming-like (or trotting) movements of its limbs. No torsion of the body, a movement that would be required for digging behaviour, was observed. Simple calculations based on the Janssen model for granular material, related to our findings on body form and locomotor behaviour, render a local decompaction of the sand surrounding the moving sandfish very likely. Thus, the sand locally behaves as a viscous fluid and not as a solid material. In this fluidised sand the sandfish is able to “swim” using its limbs.
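The Janssen model invoked above describes how vertical stress in granular material saturates with depth because wall friction carries part of the weight, which is central to the decompaction argument. A minimal sketch of the classical silo form of the model; all constants (density, container diameter, wall friction coefficient, lateral stress ratio) are illustrative, not the study's parameters:

```python
import numpy as np

def janssen_vertical_stress(z, rho=1600.0, g=9.81, d=0.05, mu=0.5, k=0.4):
    """Janssen model: vertical stress [Pa] at depth z [m] in a granular
    column of density rho [kg/m^3] inside a container of diameter d [m],
    with wall friction coefficient mu and lateral stress ratio k.
    Unlike hydrostatic pressure, the stress saturates with depth."""
    lam = d / (4.0 * mu * k)              # characteristic saturation depth
    return rho * g * lam * (1.0 - np.exp(-z / lam))

z = np.linspace(0.0, 0.5, 6)
sigma = janssen_vertical_stress(z)
sigma_linear = 1600.0 * 9.81 * z          # hydrostatic stress for comparison
```

The saturation (sigma stays far below the hydrostatic value at depth) is the qualitative behaviour that makes local decompaction around a moving body plausible.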
Background
Minor changes in protein structure induced by small organic and inorganic molecules can result in significant metabolic effects. These effects can be even more profound if the molecular players are chemically active and present in the cell in considerable amounts. The aim of our study was to investigate the effects of a nitric oxide donor (spermine NONOate), ATP and the sodium/potassium environment on the dynamics of thermal unfolding of human hemoglobin (Hb). The effect of these molecules was examined by means of circular dichroism (CD) spectrometry in the temperature range between 25°C and 70°C. The alpha-helical content of buffered hemoglobin samples (0.1 mg/ml) was estimated via ellipticity change measurements at a heating rate of 1°C/min.
Results
Major results were:
1) spermine NONOate persistently decreased the hemoglobin unfolding temperature Tu irrespectively of the Na+/K+ environment,
2) ATP instead increased the unfolding temperature by 3°C in both sodium-based and potassium-based buffers, and
3) mutual effects of ATP and NO were strongly influenced by the particular buffer ionic composition. Moreover, the presence of potassium facilitated a partial unfolding of alpha-helical structures even at room temperature.
Conclusion
The obtained data might shed more light on molecular mechanisms and biophysics involved in the regulation of protein activity by small solutes in the cell.
The paper deals with the asymptotic behaviour of estimators, statistical tests and confidence intervals for L²-distances to uniformity based on the empirical distribution function, the integrated empirical distribution function and the integrated empirical survival function. Approximations of power functions, confidence intervals for the L²-distances and statistical neighbourhood-of-uniformity validation tests are obtained as main applications. The finite sample behaviour of the procedures is illustrated by a simulation study.
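As a numerical illustration of an L²-distance to uniformity based on the empirical distribution function, the following sketch uses the classical Cramér–von Mises computational formula. It is a simplified stand-in for the paper's estimators, intended only to show how the plug-in distance separates uniform from non-uniform samples:

```python
import numpy as np

def l2_distance_to_uniformity(u):
    """Plug-in L2 distance between the empirical CDF of u (values in
    [0, 1]) and the uniform CDF: integral of (F_n(t) - t)^2 over [0, 1],
    computed via the Cramer-von Mises formula W^2 and divided by n."""
    u = np.sort(np.asarray(u))
    n = u.size
    i = np.arange(1, n + 1)
    w2 = 1.0 / (12 * n) + np.sum(((2 * i - 1) / (2 * n) - u) ** 2)
    return w2 / n

rng = np.random.default_rng(1)
d_unif = l2_distance_to_uniformity(rng.uniform(size=5000))      # ~ O(1/n)
d_beta = l2_distance_to_uniformity(rng.beta(2, 2, size=5000))   # ~ constant
```

For a truly uniform sample the distance shrinks at rate 1/n, while for the Beta(2, 2) sample it converges to the positive population distance, which is what a neighbourhood-of-uniformity validation test exploits.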
On the basis of bivariate data, assumed to be observations of independent copies of a random vector (S,N), we consider testing the hypothesis that the distribution of (S,N) belongs to the parametric class of distributions that arise with the compound Poisson exponential model. Typically, this model is used in stochastic hydrology, with N as the number of raindays and S as the total rainfall amount during a certain time period, or in actuarial science, with N as the number of losses and S as the total loss expenditure during a certain time period. The compound Poisson exponential model is characterized in the way that a specific transform associated with the distribution of (S,N) satisfies a certain differential equation. Mimicking the function part of this equation by substituting the empirical counterparts of the transform, we obtain an expression, the weighted integral of the square of which is used as the test statistic. We deal with two variants of the latter, one of which is invariant under scale transformations of the S-part by fixed positive constants. Critical values are obtained by using a parametric bootstrap procedure. The asymptotic behavior of the tests is discussed. A simulation study demonstrates the performance of the tests in the finite sample case. The procedure is applied to rainfall data and to an actuarial dataset. A multivariate extension is also discussed.
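The parametric bootstrap step for obtaining critical values can be sketched generically as follows. The exponential fit and the discrepancy statistic below are toy placeholders, not the transform-based statistic of the paper; only the resampling scheme itself is the point:

```python
import numpy as np

def bootstrap_critical_value(statistic, fit, simulate, data,
                             level=0.05, b=500, seed=0):
    """Parametric bootstrap: fit the null model to the data, resample b
    datasets from the fitted model, and return the (1 - level) quantile
    of the statistic over the bootstrap samples."""
    rng = np.random.default_rng(seed)
    theta = fit(data)
    boot = [statistic(simulate(theta, len(data), rng)) for _ in range(b)]
    return np.quantile(boot, 1 - level)

# Illustrative use: testing exponentiality with a toy discrepancy measure.
fit = lambda x: np.mean(x)                              # MLE of the mean
simulate = lambda m, n, rng: rng.exponential(m, size=n)
statistic = lambda x: abs(np.mean(x) / np.std(x) - 1.0) # mean/sd = 1 under H0

rng = np.random.default_rng(3)
data = rng.exponential(2.0, size=200)
crit = bootstrap_critical_value(statistic, fit, simulate, data)
reject = statistic(data) > crit
```

The hypothesis is rejected when the observed statistic exceeds the bootstrap quantile; by construction the scheme holds the nominal level approximately, regardless of the statistic chosen.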
In a special paired sample case, Hotelling’s T² test based on the differences of the paired random vectors is the likelihood ratio test for testing the hypothesis that the paired random vectors have the same mean; with respect to a special group of affine linear transformations it is the uniformly most powerful invariant test for the general alternative of a difference in mean. We present an elementary straightforward proof of this result. The likelihood ratio test for testing the hypothesis that the covariance structure is of the assumed special form is derived and discussed. Applications to real data are given.
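A minimal sketch of Hotelling's T² test on the differences of paired random vectors, as described above; the statistic and its F-transform follow the standard one-sample theory, and the data here are synthetic:

```python
import numpy as np

def paired_hotelling_t2(x, y):
    """Hotelling's T^2 statistic on the differences of paired vectors
    x, y (n x p arrays), plus its F-distributed transform
    F = (n - p) / (p (n - 1)) * T^2 with (p, n - p) degrees of freedom."""
    d = np.asarray(x) - np.asarray(y)
    n, p = d.shape
    dbar = d.mean(axis=0)
    s = np.cov(d, rowvar=False)               # covariance of the differences
    t2 = float(n * dbar @ np.linalg.solve(s, dbar))
    f = (n - p) / (p * (n - 1)) * t2
    return t2, f, (p, n - p)

rng = np.random.default_rng(2)
x = rng.normal(size=(30, 3))
y = x + rng.normal(scale=0.1, size=(30, 3))   # paired, nearly equal means
t2, f, dof = paired_hotelling_t2(x, y)
```

Because the pairing is exploited through the differences, the within-pair correlation drops out of the covariance estimate, which is exactly why this test can beat its independent-samples counterpart.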
The paper deals with an asymptotic relative efficiency concept for confidence regions of multidimensional parameters that is based on the expected volumes of the confidence regions. Under standard conditions the asymptotic relative efficiencies of confidence regions are seen to be certain powers of the ratio of the limits of the expected volumes. These limits are explicitly derived for confidence regions associated with certain plugin estimators, likelihood ratio tests and Wald tests. Under regularity conditions, the asymptotic relative efficiency of each of these procedures with respect to each one of its competitors is equal to 1. The results are applied to multivariate normal distributions and multinomial distributions in a fairly general setting.
Let X₁,…,Xₙ be independent and identically distributed random variables with distribution F. Assuming that there are measurable functions f:R²→R and g:R²→R characterizing a family F of distributions on the Borel sets of R in the way that the random variables f(X₁,X₂),g(X₁,X₂) are independent, if and only if F∈F, we propose to treat the testing problem H:F∈F,K:F∉F by applying a consistent nonparametric independence test to the bivariate sample variables (f(Xᵢ,Xⱼ),g(Xᵢ,Xⱼ)),1⩽i,j⩽n,i≠j. A parametric bootstrap procedure needed to get critical values is shown to work. The consistency of the test is discussed. The power performance of the procedure is compared with that of the classical tests of Kolmogorov–Smirnov and Cramér–von Mises in the special cases where F is the family of gamma distributions or the family of inverse Gaussian distributions.
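For the gamma family the characterizing functions can be taken as f(x, y) = x + y and g(x, y) = x / (x + y), which are independent if and only if the underlying distribution is gamma (Lukacs' theorem). The sketch below builds the paired sample statistics from these functions; the crude correlation at the end is only an illustrative dependence measure (the paper applies a consistent nonparametric independence test instead), and for brevity only ordered pairs i < j are used rather than all i ≠ j:

```python
import numpy as np

def sum_ratio_pairs(x):
    """Build (f, g) = (x_i + x_j, x_i / (x_i + x_j)) over pairs i < j."""
    i, j = np.triu_indices(len(x), k=1)
    s = x[i] + x[j]
    return s, x[i] / s

rng = np.random.default_rng(4)
s_gamma, r_gamma = sum_ratio_pairs(rng.gamma(2.0, size=200))
s_logn, r_logn = sum_ratio_pairs(rng.lognormal(size=200))

# Crude dependence measure: correlation between the sum and the spread
# of the ratio around 1/2 (zero in the gamma case by Lukacs' theorem).
dep_gamma = abs(np.corrcoef(s_gamma, np.abs(r_gamma - 0.5))[0, 1])
dep_logn = abs(np.corrcoef(s_logn, np.abs(r_logn - 0.5))[0, 1])
```

Under the gamma null, sum and ratio carry no information about each other, so any consistent independence test applied to these pairs yields a goodness-of-fit test for the family.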
Hotelling’s T² tests in paired and independent survey samples are compared using the traditional asymptotic efficiency concepts of Hodges–Lehmann, Bahadur and Pitman, as well as through criteria based on the volumes of corresponding confidence regions. Conditions characterizing the superiority of a procedure are given in terms of population canonical correlation type coefficients. Statistical tests for checking these conditions are developed. Test statistics based on the eigenvalues of a symmetrized sample cross-covariance matrix are suggested, as well as test statistics based on sample canonical correlation type coefficients.
Tricarbonylrhenium(I) and -technetium(I) halide (halide = Cl and Br) complexes of ligands derived from 4,5-diazafluoren-9-one (df) and 1,10-phenanthroline-5,6-dione (phen) derivatives of benzoic and 2-hydroxybenzoic acid hydrazides have been prepared. The complexes have been characterized by elemental analysis, MS, IR, ¹H NMR and absorption and emission UV/Vis spectroscopic methods. The metal centres (Re(I) and Tc(I)) are coordinated through the imine nitrogen atoms and form five-membered chelate rings, whereas the hydrazone groups remain uncoordinated. The ¹H NMR spectra suggest the same behaviour in solution on the basis of only marginal variations in the chemical shifts of the hydrazine protons.
This article describes the fabrication, characterization and application of an epidermal temporary-transfer tattoo-based potentiometric sensor, coupled with a miniaturized wearable wireless transceiver, for real-time monitoring of sodium in human perspiration. Sodium excreted during perspiration is an excellent marker for electrolyte imbalance and provides valuable information regarding an individual's physical and mental wellbeing. The new skin-worn, non-invasive, tattoo-like sensing device has been realized by amalgamating several state-of-the-art thick-film, laser printing, solid-state potentiometry, fluidics and wireless technologies. The resulting tattoo-based potentiometric sodium sensor displays a rapid near-Nernstian response with negligible carryover effects, and good resiliency against various mechanical deformations experienced by the human epidermis. On-body testing of the tattoo sensor coupled to a wireless transceiver during exercise activity demonstrated its ability to continuously monitor sweat sodium dynamics. The real-time sweat sodium concentration was transmitted wirelessly via a body-worn transceiver from the sodium tattoo sensor to a notebook while the subjects perspired on a stationary cycle. The favorable analytical performance, along with the wearable nature of the wireless transceiver, makes the new epidermal potentiometric sensing system attractive for continuous monitoring of sodium dynamics in human perspiration during diverse activities relevant to the healthcare, fitness, military and skin-care domains.
Comparison of Intravenous Immunoglobulins for Naturally Occurring Autoantibodies against Amyloid-β
(2010)
BACKGROUND
Immunosuppression is often considered as an indication for antibiotic prophylaxis to prevent surgical site infections (SSI) while performing skin surgery. However, the data on the risk of developing SSI after dermatologic surgery in immunosuppressed patients are limited.
PATIENTS AND METHODS
All patients of the Department of Dermatology and Allergology at the University Hospital of RWTH Aachen in Aachen, Germany, who underwent hospitalization for a dermatologic surgery between June 2016 and January 2017 (6 months), were followed up after surgery until completion of the wound healing process. The follow-up addressed the occurrence of SSI and the need for systemic antibiotics after the operative procedure. Immunocompromised patients were compared with immunocompetent patients. The investigation was conducted as a retrospective analysis of patient records.
RESULTS
The authors performed 284 dermatologic surgeries in 177 patients. Nineteen percent (54/284) of the skin surgeries were performed on immunocompromised patients. The most common indications for surgical treatment were nonmelanoma skin cancer and malignant melanoma. Surgical site infections occurred in 6.7% (19/284) of the cases. In 95% (18/19), systemic antibiotic treatment was needed. Twenty-one percent of all SSI (4/19) were seen in immunosuppressed patients.
CONCLUSION
According to the authors' data, immunosuppression does not represent a significant risk factor for SSI after dermatologic surgery. However, larger prospective studies are needed to make specific recommendations on the use of antibiotic prophylaxis while performing skin surgery in these patients.
The available data on complications after dermatologic surgery have improved over the past years. In particular, additional risk factors have been identified for surgical site infections (SSI). Purulent surgical sites, older age, involvement of the head, neck, and acral regions, and also the involvement of less experienced surgeons have been reported to increase the risk of SSI after dermatologic surgeries.1 In general, the incidence of SSI after skin surgery is considered to be low.1,2 However, antibiotics in dermatologic surgery, especially in the perioperative setting, seem to be overused,3,4 particularly in view of developing antibiotic resistance and side effects.
Immunosuppression has been recommended for consideration as an additional indication for antibiotic prophylaxis to prevent SSI after skin surgery in special cases.5,6 However, these recommendations do not specify the exact dermatologic surgeries and were not specifically developed for dermatologic surgery patients and treatments, but were adopted from other surgical fields.6 According to a survey conducted among American College of Mohs Surgery members in 2012, 13% to 29% of the surgeons administered antibiotic prophylaxis to immunocompromised patients to prevent SSI while performing dermatologic surgery on noninfected skin,3 although this was not recommended by the Journal of the American Academy of Dermatology advisory statement. Indeed, the data on the risk of developing SSI after dermatologic surgery in immunosuppressed patients are limited. However, it is possible that, due to the insufficient evidence on the risk of SSI occurrence in this patient group, dermatologic surgeons tend to overuse perioperative antibiotic prophylaxis.
To make specific recommendations on the use of antibiotic prophylaxis in immunosuppressed patients in the field of skin surgery, more information about the incidence of SSI after dermatologic surgery in these patients is needed. The aim of this study was to fill this data gap by investigating whether there is an increased risk of SSI after skin surgery in immunocompromised patients compared with immunocompetent patients.
Melting probes are a proven tool for the exploration of thick ice layers and clean sampling of subglacial water on Earth. Their compact size and ease of operation also make them a key technology for the future exploration of icy moons in our Solar System, most prominently Europa and Enceladus. For both mission planning and hardware engineering, metrics such as efficiency and expected performance in terms of achievable speed, power requirements, and necessary heating power have to be known.
Theoretical studies aim at describing thermal losses on the one hand, while laboratory experiments and field tests allow an empirical investigation of the true performance on the other hand. To investigate the practical value of a performance model for the operational performance in extraterrestrial environments, we first contrast measured data from terrestrial field tests on temperate and polythermal glaciers with results from basic heat loss models and a melt trajectory model. For this purpose, we propose conventions for the determination of two different efficiencies that can be applied to both measured data and models. One definition of efficiency is related to the melting head only, while the other definition considers the melting probe as a whole. We also present methods to combine several sources of heat loss for probes with a circular cross-section, and to translate the geometry of probes with a non-circular cross-section to analyse them in the same way. The models were selected in a way that minimizes the need to make assumptions about unknown parameters of the probe or the ice environment.
The results indicate that currently used models do not yet reliably reproduce the performance of a probe under realistic conditions. Melting velocities and efficiencies are consistently overestimated by 15 to 50 % in the models, but qualitatively agree with the field test data. Hence, losses are observed that are not yet covered and quantified by the available loss models. We find that the deviation increases with decreasing ice temperature. We suspect that this mismatch is mainly due to the overly restrictive idealization of the probe model and the fact that the probe was not operated in an efficiency-optimized manner during the field tests. With respect to space mission engineering, we find that performance and efficiency models must be used with caution in unknown ice environments, as various ice parameters have a significant effect on the melting process, and some of these are difficult to estimate from afar.
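The head-only efficiency discussed above can be illustrated with a simple energy balance: in the idealized limit, all power delivered to the melting head heats the swept ice column to the melting point and then melts it, and the measured-to-ideal velocity ratio gives the efficiency. A minimal Python sketch under hypothetical parameter values (the heating power, probe radius, and ice temperature below are illustrative, not taken from the field tests):

```python
import math

# Approximate literature values for ice near 0 degC
RHO_ICE = 917.0        # density, kg/m^3
C_P_ICE = 2050.0       # specific heat capacity, J/(kg K)
L_FUSION = 334000.0    # latent heat of fusion, J/kg

def ideal_melt_velocity(p_head, radius, ice_temp_c):
    """Idealized melt velocity (m/s), assuming all head power heats
    the swept ice column from ice_temp_c up to 0 degC and melts it."""
    area = math.pi * radius ** 2                       # circular cross-section
    energy_per_m3 = RHO_ICE * (C_P_ICE * abs(ice_temp_c) + L_FUSION)
    return p_head / (area * energy_per_m3)

def head_efficiency(v_measured, p_head, radius, ice_temp_c):
    """Head-only efficiency: ratio of the measured melt velocity to the
    idealized, loss-free velocity at the same head power."""
    return v_measured / ideal_melt_velocity(p_head, radius, ice_temp_c)

# Hypothetical example: 3 kW head power, 6 cm probe radius, -10 degC ice
v_ideal = ideal_melt_velocity(3000.0, 0.06, -10.0)
eta = head_efficiency(0.8 * v_ideal, 3000.0, 0.06, -10.0)  # probe at 80 % of ideal
```

A probe-as-a-whole efficiency would replace the head power by the total input power, which only lowers the ratio; both conventions compare the same measured velocity against an idealized energy balance.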
Combined with the use of renewable energy sources for its production, hydrogen represents a possible alternative gas turbine fuel for future low-emission power generation. Due to the large difference in physical properties between hydrogen and conventional fuels such as natural gas, well-established gas turbine combustion systems cannot be directly applied to Dry Low NOx (DLN) hydrogen combustion. Thus, the development of DLN combustion technologies is an essential and challenging task for the future of hydrogen-fuelled gas turbines. The DLN Micromix combustion principle for hydrogen fuel has been developed to significantly reduce NOx emissions. It is based on cross-flow mixing of air and gaseous hydrogen, which react in multiple miniaturized diffusion-type flames. The major advantages of this combustion principle are its inherent safety against flashback and its low NOx emissions, owing to the very short residence time of the reactants in the flame region of the micro-flames. The Micromix combustion technology has already been demonstrated experimentally and numerically for pure hydrogen operation at different energy density levels. The aim of the present study is to analyze the influence of variations of different geometry parameters on the flame structure and NOx emissions, and to identify the most relevant design parameters. The goal is to provide a physical understanding of the sensitivity of the Micromix flame to the burner design and to identify further optimization potential of this innovative combustion technology, while increasing its energy density and maturing it for real gas turbine application. The study reveals great optimization potential of the Micromix combustion technology with respect to its DLN characteristics and gives insight into the impact of geometry modifications on flame structure and NOx emissions.
This makes it possible to further increase the energy density of Micromix burners and to integrate this technology into industrial gas turbines.
In this paper, we provide an analytical study of the transmission eigenvalue problem with two conductivity parameters. We will assume that the underlying physical model is given by the scattering of a plane wave for an isotropic scatterer. In previous studies, this eigenvalue problem was analyzed with one conductive boundary parameter, whereas we consider the case of two parameters. We prove the existence and discreteness of the transmission eigenvalues as well as study their dependence on the physical parameters. We are able to prove monotonicity of the first transmission eigenvalue with respect to the parameters and consider the limiting procedure as the second boundary parameter vanishes. Lastly, we provide extensive numerical experiments to validate the theoretical work.
Direct sampling method via Landweber iteration for an absorbing scatterer with a conductive boundary
(2024)
In this paper, we consider the inverse shape problem of recovering isotropic scatterers with a conductive boundary condition. Here, we assume that the measured far-field data is known at a fixed wave number. Motivated by recent work, we study a new direct sampling indicator based on the Landweber iteration and the factorization method, and we prove the connection between these reconstruction methods. The method studied here falls under the category of qualitative reconstruction methods, where an imaging function is used to recover the absorbing scatterer. We prove stability of our new imaging function as well as derive a discrepancy principle for choosing the regularization parameter. The theoretical results are verified with numerical examples to show how the reconstruction performs with the new Landweber direct sampling method.
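In its generic linear form, the Landweber iteration underlying such indicators is a gradient descent on the residual of an operator equation, x_{k+1} = x_k + ω F*(b − F x_k). The sketch below shows only this generic building block, not the paper's far-field operator or sampling functional; the small synthetic matrix, step size, and iteration count are assumptions for illustration:

```python
import numpy as np

def landweber(F, b, omega, n_iter):
    """Generic Landweber iteration x_{k+1} = x_k + omega * F^H (b - F x_k)
    for the linear system F x = b. Convergence requires
    0 < omega < 2 / ||F||^2 (spectral norm)."""
    x = np.zeros(F.shape[1], dtype=F.dtype)
    for _ in range(n_iter):
        x = x + omega * F.conj().T @ (b - F @ x)
    return x

# Well-conditioned synthetic stand-in for the measurement operator
rng = np.random.default_rng(0)
F = np.eye(8) + 0.1 * rng.standard_normal((8, 8))
x_true = rng.standard_normal(8)
b = F @ x_true

omega = 1.0 / np.linalg.norm(F, 2) ** 2   # safe step size
x_rec = landweber(F, b, omega, 5000)
```

With noisy data, the discrepancy principle mentioned in the abstract would stop the iteration once the residual norm drops to the noise level, rather than running a fixed number of steps as done here.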
The ClearPET project
(2004)
The Crystal Clear Collaboration has designed and is building a high-resolution small animal PET scanner. The design is based on the use of the Hamamatsu R7600-M64 multi-anode photomultiplier tube and a LSO/LuYAP phoswich matrix with one-to-one coupling between the crystals and the photo-detector. The complete system will have 80 PM tubes in four rings with an inner diameter of 137 mm and an axial field of view of 110 mm. The PM pulses are digitized by free-running ADCs, and digital data processing determines the gamma energy, the phoswich layer and even the pulse arrival time. Single gamma interactions are recorded and coincidences are found by software. The gantry allows rotation of the detector modules around the field of view. Simulations and measurements with a 2×4 module test set-up predict a spatial resolution of 1.5 mm and a sensitivity of 5.9% for a point source in the centre of the field of view.
The esophageal Doppler monitor (EDM) is a minimally invasive hemodynamic device which evaluates both cardiac output (CO) and fluid status by estimating stroke volume (SV) and calculating heart rate (HR). The measurement of these parameters is based upon a continuous and accurate approximation of distal thoracic aortic blood flow. Furthermore, the peak velocity (PV) and mean acceleration (MA) of aortic blood flow at this anatomic location are also determined by the EDM. The purpose of this preliminary report is to examine additional clinical hemodynamic calculations of compliance (C), kinetic energy (KE), force (F), and afterload (TSVRi). These data were derived using both velocity-based measurements provided by the EDM and other contemporaneous physiologic parameters. Data were obtained from anesthetized patients undergoing surgery or who were in a critical care unit. A graphical inspection of these measurements is presented and discussed with respect to each patient’s clinical situation. When normalized to each of their initial values, F and KE both consistently demonstrated more discriminative power than either PV or MA. The EDM offers additional applications for hemodynamic monitoring. Further research regarding the accuracy, utility, and limitations of these parameters is therefore indicated.
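The velocity-derived quantities can be illustrated with elementary mechanics. The formulas below (KE = ½mv² applied to the blood mass of one stroke volume, F = ma with the mean acceleration) are one plausible formulation for illustration only, not necessarily the exact definitions used in the report, and the input values are hypothetical:

```python
RHO_BLOOD = 1060.0  # approximate blood density, kg/m^3

def kinetic_energy(stroke_volume_ml, peak_velocity_m_s):
    """KE (J) of one stroke's blood mass moving at peak velocity:
    KE = 1/2 * m * v^2, with m from stroke volume and blood density."""
    mass = RHO_BLOOD * stroke_volume_ml * 1e-6  # ml -> m^3 -> kg
    return 0.5 * mass * peak_velocity_m_s ** 2

def force(stroke_volume_ml, mean_acceleration_m_s2):
    """F (N) = m * a applied to the same blood mass, using the
    EDM-derived mean acceleration of aortic flow."""
    mass = RHO_BLOOD * stroke_volume_ml * 1e-6
    return mass * mean_acceleration_m_s2

# Hypothetical patient values: 70 ml stroke volume, PV 1.2 m/s, MA 12 m/s^2
ke = kinetic_energy(70.0, 1.2)
f = force(70.0, 12.0)
```

Because both KE and F scale with the square or product of velocity-based measurements, small hemodynamic changes are amplified, which is consistent with the report's observation that they discriminate more strongly than PV or MA alone.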
Background: Architectural representation, nurtured by the interaction between design thinking and design action, is inherently multi-layered. However, the representation object cannot always reflect these layers. Therefore, it is claimed that these reflections and layerings can gain visibility through ‘performativity in personal knowledge’, which is fundamentally performative in character. The specific layers of representation produced during performativity in personal knowledge permit insights into the ‘personal way of designing’ [1]. The question of how these layered drawings can be decomposed to understand the personal way of designing therefore defines the starting point of the study. Performativity in personal knowledge in architectural design is, in turn, handled through the relationship between explicit and tacit knowledge and between representational and non-representational theory. To discuss the practical dimension of these theoretical relations, Zvi Hecker's drawing of the Heinz-Galinski-School is examined as an example. The study aims to understand the relationships between the layers by decomposing a layered drawing analytically in order to exemplify personal ways of designing.
Methods: The study is based on qualitative research methodologies. First, a model has been formed through theoretical readings to discuss the performativity in personal knowledge. This model is used to understand the layered representations and to research the personal way of designing. Thus, one drawing of Hecker’s Heinz-Galinski-School project is chosen. Second, its layers are decomposed to detect and analyze diverse objects, which hint to different types of design tools and their application. Third, Zvi Hecker’s statements of the design process are explained through the interview data [2] and other sources. The obtained data are compared with each other.
Results: By decomposing the drawing, eleven layers are defined. These layers are used to understand the relation between the design idea and its representation. They can also be thought of as a reading system. In other words, a method to discuss Hecker’s performativity in personal knowledge is developed. Furthermore, the layers and their interconnections are described in relation to Zvi Hecker’s personal way of designing.
Conclusions: It can be said that layered representations, which are associated with the multilayered structure of performativity in personal knowledge, form the personal way of designing.
A second-order L-stable exponential time-differencing (ETD) method is developed by combining an ETD scheme with an approximation of the matrix exponentials by rational functions having real distinct poles (RDP), together with a dimensional splitting integrating factor technique. A variety of non-linear reaction-diffusion equations in two and three dimensions with either Dirichlet, Neumann, or periodic boundary conditions are solved with this scheme, which is shown to outperform a variety of other second-order implicit-explicit schemes. An additional performance boost is gained through further use of basic parallelization techniques.
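The ETD idea can be sketched for a simple case. The Python example below implements only a first-order ETD step for a 1D Fisher-KPP reaction-diffusion equation with periodic boundary conditions, using the exact FFT-diagonalized matrix exponential instead of the paper's second-order RDP rational approximation and dimensional splitting; the grid size, step size, and diffusivity are illustrative:

```python
import numpy as np

def etd1_step(u, h, lam, nonlinear):
    """One step of the first-order ETD scheme for u_t = L u + N(u),
    with L diagonalized by the FFT (periodic boundary conditions):
      u_{n+1} = e^{h L} u_n + h * phi_1(h L) N(u_n),
    where phi_1(z) = (e^z - 1)/z, with limit 1 at z = 0."""
    e = np.exp(h * lam)
    phi1 = np.where(lam == 0, 1.0,
                    (e - 1.0) / np.where(lam == 0, 1.0, h * lam))
    u_hat = np.fft.fft(u)
    n_hat = np.fft.fft(nonlinear(u))
    return np.real(np.fft.ifft(e * u_hat + h * phi1 * n_hat))

# Fisher-KPP equation u_t = d*u_xx + u(1 - u) on the periodic domain [0, 2*pi)
n, d, h = 128, 0.1, 0.01
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
k = np.fft.fftfreq(n, d=1.0 / n)   # integer wavenumbers for length-2*pi domain
lam = -d * k ** 2                  # Fourier symbol of d * d^2/dx^2
u = 0.5 + 0.4 * np.cos(x)          # initial data in (0, 1)
for _ in range(200):
    u = etd1_step(u, h, lam, lambda v: v * (1.0 - v))
```

The stiff diffusion term is handled exactly here, so the time step is limited only by the mild nonlinearity; the RDP idea in the abstract replaces `np.exp(h * lam)` by a rational function whose real distinct poles reduce the work to a few real linear solves per step, which is what makes the full scheme practical for large 2D and 3D problems.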