Urinary stone formation has evolved into a widespread disease in recent years. Urinary stones originate from small crystals, mostly composed of calcium oxalate, which form in the human kidneys. The risk of urinary stone formation can be diagnosed early with the “Bonn-Risk-Index” method, which is based on the potentiometric detection of the Ca2+-ion concentration and an optical determination of the triggered crystallisation of calcium oxalate in unprocessed urine. In this work, miniaturised capacitive field-effect EMIS (electrolyte-membrane-insulator-semiconductor) sensors have been developed for the determination of the Ca2+-ion concentration in native human urine. The Ca2+-sensitive EMIS sensors have been systematically characterised by impedance spectroscopy as well as the capacitance–voltage and constant-capacitance methods in terms of sensitivity, signal stability and response time, both in CaCl2 solutions and in native urine. The obtained results demonstrate the suitability of EMIS sensors for the measurement of the Ca2+-ion concentration in the native urine of patients.
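The potentiometric principle behind such Ca2+-sensitive sensors is commonly benchmarked against the theoretical Nernstian slope; a minimal sketch of that textbook calculation (illustrative only, not code from the paper):

```python
import math

def nernst_slope_mv_per_decade(z, temp_c=25.0):
    """Theoretical Nernstian sensitivity in mV per decade of ion activity
    for an ion of charge number z at the given temperature."""
    R = 8.314462618   # gas constant, J/(mol*K)
    F = 96485.33212   # Faraday constant, C/mol
    T = temp_c + 273.15
    # From E = E0 + (2.303*R*T / (z*F)) * log10(a_ion)
    return 2.303 * R * T / (z * F) * 1000.0  # mV per decade

# A divalent ion such as Ca2+ gives roughly half the slope of a monovalent ion:
print(round(nernst_slope_mv_per_decade(2), 1))  # ~29.6 mV/decade at 25 °C
print(round(nernst_slope_mv_per_decade(1), 1))  # ~59.2 mV/decade at 25 °C
```

A measured slope close to the divalent value is what qualifies such a Ca2+ sensor as "Nernstian".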
Design and initial performance of PlanTIS: a high-resolution positron emission tomograph for plants
(2010)
Positron emitters such as 11C, 13N and 18F and their labelled compounds are widely used in clinical diagnosis and animal studies, but can also be used to study metabolic and physiological functions in plants dynamically and in vivo. A very particular tracer molecule is 11CO2 since it can be applied to a leaf as a gas. We have developed a Plant Tomographic Imaging System (PlanTIS), a high-resolution PET scanner for plant studies. Detectors, front-end electronics and data acquisition architecture of the scanner are based on the ClearPET™ system. The detectors consist of LSO and LuYAP crystals in phoswich configuration which are coupled to position-sensitive photomultiplier tubes. Signals are continuously sampled by free running ADCs, and data are stored in a list mode format. The detectors are arranged in a horizontal plane to allow the plants to be measured in the natural upright position. Two groups of four detector modules stand face-to-face and rotate around the field-of-view. This special system geometry requires dedicated image reconstruction and normalization procedures. We present the initial performance of the detector system and first phantom and plant measurements.
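Measurements with short-lived positron emitters such as 11C are routinely decay-corrected before kinetic analysis; a minimal illustration of this standard correction (the half-life value is approximate, and the code is not from the paper):

```python
import math

C11_HALF_LIFE_MIN = 20.4  # approximate half-life of 11C in minutes

def decay_correct(measured_counts, elapsed_min, half_life_min=C11_HALF_LIFE_MIN):
    """Correct measured counts back to the activity at tracer application.

    Short-lived positron emitters decay appreciably during a plant PET
    scan, so dynamic frames are usually decay-corrected before analysis."""
    lam = math.log(2) / half_life_min        # decay constant, 1/min
    return measured_counts * math.exp(lam * elapsed_min)

# After one half-life, half the original counts are measured:
print(round(decay_correct(500.0, 20.4)))  # → 1000
```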
Objective: As high-field cardiac MRI (CMR) becomes more widespread, the propensity of the ECG to interference from electromagnetic fields (EMF) and to magneto-hydrodynamic (MHD) effects increases, and with it the motivation for a CMR triggering alternative. This study explores the suitability of acoustic cardiac triggering (ACT) for left ventricular (LV) function assessment in healthy subjects (n=14). Methods: Quantitative analysis of 2D CINE steady-state free precession (SSFP) images was conducted to compare ACT’s performance with vector ECG (VCG). Endocardial border sharpness (EBS) was examined, paralleled by quantitative LV function assessment. Results: Unlike VCG, ACT provided signal traces free of interference from EMF or MHD effects. In the case of correct R-wave recognition, VCG-triggered 2D CINE SSFP was immune to cardiac motion effects, even at 3.0 T. However, VCG-triggered 2D CINE SSFP imaging was prone to cardiac motion and EBS degradation if R-wave misregistration occurred. ACT-triggered acquisitions yielded LV parameters (end-diastolic volume (EDV), end-systolic volume (ESV), stroke volume (SV), ejection fraction (EF) and left ventricular mass (LVM)) comparable with those derived from VCG-triggered acquisitions (1.5 T: ESV_VCG=(56±17) ml, EDV_VCG=(151±32) ml, LVM_VCG=(97±27) g, SV_VCG=(94±19) ml, EF_VCG=(63±5)% cf. ESV_ACT=(56±18) ml, EDV_ACT=(147±36) ml, LVM_ACT=(102±29) g, SV_ACT=(91±22) ml, EF_ACT=(62±6)%; 3.0 T: ESV_VCG=(55±21) ml, EDV_VCG=(151±32) ml, LVM_VCG=(101±27) g, SV_VCG=(96±15) ml, EF_VCG=(65±7)% cf. ESV_ACT=(54±20) ml, EDV_ACT=(146±35) ml, LVM_ACT=(101±30) g, SV_ACT=(92±17) ml, EF_ACT=(64±6)%). Conclusions: ACT’s intrinsic insensitivity to interference from electromagnetic fields renders it a promising alternative to ECG-based triggering for CMR.
With a steady increase of regulatory requirements for business processes, automation support of compliance management is a field garnering increasing attention in Information Systems research. Several approaches have been developed to support compliance checking of process models. One major challenge for such approaches is their ability to handle different modeling techniques and compliance rules in order to enable widespread adoption and application. Applying a structured literature search strategy, we reflect and discuss compliance-checking approaches in order to provide an insight into their generalizability and evaluation. The results imply that current approaches mainly focus on special modeling techniques and/or a restricted set of types of compliance rules. Most approaches abstain from real-world evaluation which raises the question of their practical applicability. Referring to the search results, we propose a roadmap for further research in model-based business process compliance checking.
Given the strong increase in regulatory requirements for business processes, the management of business process compliance is receiving more and more attention in IS research. Several methods have been developed to support compliance checking of conceptual models. However, their focus on distinct modeling languages and mostly linear (i.e., predecessor-successor related) compliance rules may hinder widespread adoption and application in practice. Furthermore, hardly any of them has been evaluated in a real-world setting. We address this issue by applying a generic pattern matching approach for conceptual models to business process compliance checking in the financial sector. It consists of a model query language, a search algorithm and a corresponding modeling tool prototype. The approach is (1) applicable to all graph-based conceptual modeling languages and (2) suited to different kinds of compliance rules. Furthermore, based on an applicability check, we (3) evaluate the approach in a financial industry project setting with regard to its relevance for decision support in audit and compliance management tasks.
We prove characterizations of the existence of perfect ƒ-matchings in uniform mengerian and perfect hypergraphs. Moreover, we investigate the ƒ-factor problem in balanced hypergraphs. For uniform balanced hypergraphs we prove two existence theorems with purely combinatorial arguments, whereas for non-uniform balanced hypergraphs we show that the ƒ-factor problem is NP-hard.
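For reference, the central notion can be stated as follows; this is the standard definition of a perfect f-matching, paraphrased rather than quoted from the paper:

```latex
% H = (V, E) a hypergraph, f : V -> Z_{>=0} prescribed vertex degrees.
% A perfect f-matching is a nonnegative integral edge weighting that
% meets the prescription exactly at every vertex:
\[
  x : E \to \mathbb{Z}_{\geq 0}, \qquad
  \sum_{\substack{e \in E \\ v \in e}} x(e) = f(v)
  \quad \text{for all } v \in V .
\]
```

For constant f(v) = 1 this reduces to the usual perfect matching of a hypergraph.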
Most drugs are no longer produced by pharmaceutical companies in their home countries, but by contract manufacturers or at manufacturing sites in countries with lower production costs. This not only makes drugs difficult to trace back but also leaves room for criminal organizations to counterfeit them unnoticed. For these reasons, it is becoming increasingly difficult to determine the exact origin of drugs. The goal of this work was to investigate to what extent this is nevertheless possible using different spectroscopic methods, namely nuclear magnetic resonance and near- and mid-infrared spectroscopy, in combination with multivariate data analysis. As an example, 56 out of 64 different paracetamol preparations, collected from 19 countries around the world, were chosen to investigate whether it is possible to determine the pharmaceutical company, manufacturing site, or country of origin. By means of suitable pre-processing of the spectra and the different information contained in each method, principal component analysis was able to evaluate manufacturing relationships between individual companies and to differentiate between production sites or formulations. Linear discriminant analysis showed different results depending on the spectral method and purpose. For all spectroscopic methods, it was found that the classification of the preparations by manufacturer achieves better results than the classification by pharmaceutical company. The best results were obtained with nuclear magnetic resonance and near-infrared data, with 94.6%/99.6% and 98.7%/100% of the spectra of the preparations correctly assigned to their pharmaceutical company or manufacturer, respectively.
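The chemometric workflow described above (spectral pre-processing, PCA for compression, discriminant analysis for classification) can be sketched as follows. The spectra and class labels are synthetic stand-ins, not the study's data, and the plain NumPy PCA plus two-class Fisher LDA here only mirrors the general technique:

```python
import numpy as np

# Synthetic "spectra" for two hypothetical manufacturers; class B carries
# an extra absorption band. Data and labels are illustrative only.
rng = np.random.default_rng(0)
n, p = 30, 200
class_a = rng.normal(0.0, 1.0, (n, p))
class_b = rng.normal(0.0, 1.0, (n, p))
class_b[:, 50:60] += 2.0                    # extra band for manufacturer B
X = np.vstack([class_a, class_b])
y = np.array([0] * n + [1] * n)

# PCA via SVD on mean-centred spectra (dimensionality reduction).
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:10].T                     # first 10 PC scores

# Two-class Fisher LDA in the reduced PCA space.
m0, m1 = scores[y == 0].mean(axis=0), scores[y == 1].mean(axis=0)
Sw = np.cov(scores[y == 0], rowvar=False) + np.cov(scores[y == 1], rowvar=False)
w = np.linalg.solve(Sw, m1 - m0)            # discriminant direction
threshold = w @ (m0 + m1) / 2.0
pred = (scores @ w > threshold).astype(int)
accuracy = (pred == y).mean()
print(accuracy)                             # high on this easy toy problem
```

In practice the pre-processing step (baseline correction, normalisation, derivatives) matters as much as the classifier, which is what the abstract emphasises.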
A comparative performance analysis of the CFD platforms OpenFOAM and FLOW-3D is presented, focusing on a 3D swirling turbulent flow: a steady hydraulic jump at low Reynolds number. Turbulence is treated using the RNG k-ε RANS approach. A Volume of Fluid (VOF) method is used to track the air–water interface; consequently, aeration is modeled using an Eulerian–Eulerian approach. Structured meshes of cubic elements are used to discretize the channel geometry. The accuracy of the numerical models is assessed by comparing representative hydraulic jump variables (sequent depth ratio, roller length, mean velocity profiles, velocity decay and free surface profile) to experimental data. The model results are also compared to previous studies to broaden the validation. Both codes reproduced the phenomenon under study in agreement with experimental data, although special care must be taken when swirling flows occur. Both models can be used to reproduce the hydraulic performance of energy dissipation structures at low Reynolds numbers.
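The sequent depth ratio used above as a validation variable has a classical closed form, the Bélanger equation, which makes a convenient sanity check for such simulations (a sketch, not the paper's code):

```python
import math

def sequent_depth_ratio(fr1):
    """Bélanger equation: sequent depth ratio y2/y1 of a classical
    hydraulic jump as a function of the inflow Froude number Fr1."""
    return 0.5 * (math.sqrt(1.0 + 8.0 * fr1 ** 2) - 1.0)

# At Fr1 = 1 the flow is critical and no jump forms (ratio 1);
# supercritical inflow gives a deeper downstream ("sequent") depth.
print(round(sequent_depth_ratio(2.0), 2))  # ≈ 2.37
```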
Mechano-pharmacological testing of L-type Ca²⁺ channel modulators via a human vascular CellDrum model
(2020)
Background/Aims: This study aimed to establish a precise and well-defined working model for assessing pharmaceutical effects on vascular smooth muscle cell monolayers in vitro. It describes various analysis techniques to determine the most suitable one for measuring the biomechanical impact of vasoactive agents using CellDrum technology. Methods: The so-called CellDrum technology was applied to analyse the biomechanical properties of confluent human aortic smooth muscle cells (haSMC) in monolayer. Cell-generated tension deviations in the range of a few N/m² are evaluated by the CellDrum technology. This study focuses on the dilative and contractive effects of L-type Ca²⁺ channel agonists and antagonists, respectively. We analysed the effects of Bay K8644, nifedipine and verapamil. Three different measurement modes were developed and applied to determine the most appropriate analysis technique for the study purpose. These three operation modes are called "particular time mode" (PTM), "long-term mode" (LTM) and "real-time mode" (RTM). Results: It was possible to quantify the biomechanical response of haSMCs to the addition of vasoactive agents using CellDrum technology. Upon supplementation of 100 nM Bay K8644, the tension increased by approximately 10.6% from the initial tension maximum, whereas treatment with nifedipine and verapamil caused a significant decrease in cellular tension: 10 nM nifedipine decreased the biomechanical stress by around 6.5% and 50 nM verapamil by 2.8%, compared to the initial tension maximum. Additionally, all tested measurement modes provided similar results while focusing on different analysis parameters. Conclusion: The CellDrum technology allows highly sensitive biomechanical stress measurements of cultured haSMC monolayers. The mechanical stress responses evoked by the application of vasoactive calcium channel modulators were quantified functionally (N/m²). All tested operation modes yielded equivalent findings, while each mode features mode-specific data analysis.
The sandfish (Scincus scincus) is a lizard with the remarkable ability to move through desert sand over significant distances. It is well adapted to living in loose sand by virtue of a combination of morphological and behavioural specializations. We investigated the body form of the sandfish using 3D laser scanning and explored its locomotion in loose desert sand using fast nuclear magnetic resonance (NMR) imaging. The sandfish exhibits an in-plane meandering motion with a frequency of about 3 Hz and an amplitude of about half its body length, accompanied by swimming-like (or trotting) movements of its limbs. No torsion of the body, a movement that would be required for digging behaviour, was observed. Simple calculations based on the Janssen model for granular material, related to our findings on body form and locomotor behaviour, render a local decompaction of the sand surrounding the moving sandfish very likely. Thus, the sand locally behaves as a viscous fluid and not as a solid material. In this fluidised sand the sandfish is able to “swim” using its limbs.
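The Janssen model invoked above describes how vertical stress in granular material saturates with depth instead of growing hydrostatically, which is what makes local decompaction plausible; a sketch with purely illustrative parameter values (not those of the study):

```python
import math

def janssen_vertical_stress(z, radius, rho=1600.0, mu=0.5, K=0.8, g=9.81):
    """Janssen vertical stress (Pa) at depth z in a granular column of the
    given radius. rho: bulk density, mu: wall friction coefficient,
    K: lateral stress ratio -- all values here are illustrative."""
    sat = rho * g * radius / (2.0 * mu * K)          # saturation stress
    return sat * (1.0 - math.exp(-2.0 * mu * K * z / radius))

# Unlike hydrostatic pressure, the stress saturates with depth:
shallow = janssen_vertical_stress(0.05, radius=0.1)
deep = janssen_vertical_stress(1.0, radius=0.1)
print(round(shallow, 1), round(deep, 1))
```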
Background
Minor changes in protein structure induced by small organic and inorganic molecules can result in significant metabolic effects. The effects can be even more profound if the molecular players are chemically active and present in the cell in considerable amounts. The aim of our study was to investigate effects of a nitric oxide donor (spermine NONOate), ATP and sodium/potassium environment on the dynamics of thermal unfolding of human hemoglobin (Hb). The effect of these molecules was examined by means of circular dichroism spectrometry (CD) in the temperature range between 25°C and 70°C. The alpha-helical content of buffered hemoglobin samples (0.1 mg/ml) was estimated via ellipticity change measurements at a heating rate of 1°C/min.
Results
Major results were:
1) spermine NONOate persistently decreased the hemoglobin unfolding temperature Tu, irrespective of the Na+/K+ environment,
2) ATP instead increased the unfolding temperature by 3°C in both sodium-based and potassium-based buffers and
3) mutual effects of ATP and NO were strongly influenced by particular buffer ionic compositions. Moreover, the presence of potassium facilitated a partial unfolding of alpha-helical structures even at room temperature.
Conclusion
The obtained data might shed more light on molecular mechanisms and biophysics involved in the regulation of protein activity by small solutes in the cell.
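The unfolding temperature discussed above is commonly read off a CD melt curve as the midpoint of a two-state transition; a minimal sketch on synthetic data (the 52 °C midpoint, transition width and baseline values are illustrative, not the study's results):

```python
import numpy as np

temps = np.linspace(25.0, 70.0, 91)           # heating from 25 to 70 °C
true_tu, width = 52.0, 3.0                    # illustrative values only
# Fraction folded follows a sigmoid in temperature for a two-state model.
frac_folded = 1.0 / (1.0 + np.exp((temps - true_tu) / width))
ellipticity = -20.0 * frac_folded - 2.0       # mdeg; folded baseline ≈ -22

# Normalise between folded (first point) and unfolded (last point) baselines.
f = (ellipticity - ellipticity[-1]) / (ellipticity[0] - ellipticity[-1])
# Tu is the temperature at which half the helical signal is lost;
# np.interp needs increasing x, and f decreases with temperature.
tu_est = np.interp(0.5, f[::-1], temps[::-1])
print(round(tu_est, 1))
```

Shifts of this midpoint, such as the 3 °C increase reported for ATP, are then read directly from the fitted curves.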
The paper deals with the asymptotic behaviour of estimators, statistical tests and confidence intervals for L²-distances to uniformity based on the empirical distribution function, the integrated empirical distribution function and the integrated empirical survival function. Approximations of power functions, confidence intervals for the L²-distances and statistical neighbourhood-of-uniformity validation tests are obtained as main applications. The finite sample behaviour of the procedures is illustrated by a simulation study.
On the basis of bivariate data, assumed to be observations of independent copies of a random vector (S,N), we consider testing the hypothesis that the distribution of (S,N) belongs to the parametric class of distributions that arise with the compound Poisson exponential model. Typically, this model is used in stochastic hydrology, with N as the number of raindays, and S as the total rainfall amount during a certain time period, or in actuarial science, with N as the number of losses, and S as the total loss expenditure during a certain time period. The compound Poisson exponential model is characterized in the way that a specific transform associated with the distribution of (S,N) satisfies a certain differential equation. Mimicking the function part of this equation by substituting the empirical counterparts of the transform, we obtain an expression; the weighted integral of its square is used as the test statistic. We deal with two variants of the latter, one of which is invariant under scale transformations of the S-part by fixed positive constants. Critical values are obtained by using a parametric bootstrap procedure. The asymptotic behavior of the tests is discussed. A simulation study demonstrates the performance of the tests in the finite sample case. The procedure is applied to rainfall data and to an actuarial dataset. A multivariate extension is also discussed.
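The compound Poisson exponential model described above is easy to simulate, which is also what a parametric bootstrap relies on; a minimal sketch with illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_compound_poisson_exp(lam, mean_amount, size):
    """Draw (S, N): N ~ Poisson(lam) events (e.g. raindays), and S the sum
    of N i.i.d. exponential amounts with the given mean."""
    n = rng.poisson(lam, size)
    s = np.array([rng.exponential(mean_amount, k).sum() for k in n])
    return s, n

# Illustrative parameters: on average 5 raindays with mean amount 2.
s, n = sample_compound_poisson_exp(lam=5.0, mean_amount=2.0, size=10_000)
print(round(s.mean(), 1))  # E[S] = lam * mean_amount = 10 for this model
```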
Let X₁,…,Xₙ be independent and identically distributed random variables with distribution F. Assuming that there are measurable functions f:R²→R and g:R²→R characterizing a family F of distributions on the Borel sets of R in the way that the random variables f(X₁,X₂),g(X₁,X₂) are independent, if and only if F∈F, we propose to treat the testing problem H:F∈F,K:F∉F by applying a consistent nonparametric independence test to the bivariate sample variables (f(Xᵢ,Xⱼ),g(Xᵢ,Xⱼ)),1⩽i,j⩽n,i≠j. A parametric bootstrap procedure needed to get critical values is shown to work. The consistency of the test is discussed. The power performance of the procedure is compared with that of the classical tests of Kolmogorov–Smirnov and Cramér–von Mises in the special cases where F is the family of gamma distributions or the family of inverse Gaussian distributions.
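A classical instance of such a characterizing pair (f, g) comes from Lukacs' theorem: for i.i.d. gamma variables (with a common scale), the sum and the ratio-to-sum are independent, and this independence characterizes the gamma family. A small numerical illustration of the characterization (not the paper's actual test statistic):

```python
import numpy as np

rng = np.random.default_rng(7)
x1 = rng.gamma(shape=2.0, scale=1.5, size=20_000)
x2 = rng.gamma(shape=2.0, scale=1.5, size=20_000)
f, g = x1 + x2, x1 / (x1 + x2)   # sum and ratio-to-sum (Lukacs' pair)

# Under the gamma hypothesis f and g are independent, so any dependence
# measure should be close to zero; plain correlation as a crude proxy.
corr = np.corrcoef(f, g)[0, 1]
print(round(corr, 3))  # near 0 for gamma data
```

The paper's procedure replaces this crude correlation check with a consistent nonparametric independence test applied to all pairs (f(Xᵢ,Xⱼ), g(Xᵢ,Xⱼ)).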
The paper deals with an asymptotic relative efficiency concept for confidence regions of multidimensional parameters that is based on the expected volumes of the confidence regions. Under standard conditions the asymptotic relative efficiencies of confidence regions are seen to be certain powers of the ratio of the limits of the expected volumes. These limits are explicitly derived for confidence regions associated with certain plugin estimators, likelihood ratio tests and Wald tests. Under regularity conditions, the asymptotic relative efficiency of each of these procedures with respect to each one of its competitors is equal to 1. The results are applied to multivariate normal distributions and multinomial distributions in a fairly general setting.
In a special paired sample case, Hotelling’s T² test based on the differences of the paired random vectors is the likelihood ratio test for testing the hypothesis that the paired random vectors have the same mean; with respect to a special group of affine linear transformations it is the uniformly most powerful invariant test for the general alternative of a difference in mean. We present an elementary straightforward proof of this result. The likelihood ratio test for testing the hypothesis that the covariance structure is of the assumed special form is derived and discussed. Applications to real data are given.
Hotelling’s T² tests in paired and independent survey samples are compared using the traditional asymptotic efficiency concepts of Hodges–Lehmann, Bahadur and Pitman, as well as through criteria based on the volumes of corresponding confidence regions. Conditions characterizing the superiority of a procedure are given in terms of population canonical correlation type coefficients. Statistical tests for checking these conditions are developed. Test statistics based on the eigenvalues of a symmetrized sample cross-covariance matrix are suggested, as well as test statistics based on sample canonical correlation type coefficients.
Tricarbonylrhenium(I) and -technetium(I) halide (halide = Cl and Br) complexes of ligands derived from 4,5-diazafluoren-9-one (df) and 1,10-phenanthroline-5,6-dione (phen) derivatives of benzoic and 2-hydroxybenzoic acid hydrazides have been prepared. The complexes have been characterized by elemental analysis, MS, IR, 1H NMR and absorption and emission UV/Vis spectroscopic methods. The metal centres (ReI and TcI) are coordinated through the imine nitrogen atoms and form five-membered chelate rings, whereas the hydrazone groups remain uncoordinated. The 1H NMR spectra suggest the same behaviour in solution on the basis of only marginal variations in the chemical shifts of the hydrazine protons.
This article describes the fabrication, characterization and application of an epidermal temporary-transfer tattoo-based potentiometric sensor, coupled with a miniaturized wearable wireless transceiver, for real-time monitoring of sodium in human perspiration. Sodium excreted during perspiration is an excellent marker for electrolyte imbalance and provides valuable information regarding an individual's physical and mental wellbeing. The new skin-worn non-invasive tattoo-like sensing device has been realized by amalgamating several state-of-the-art thick-film, laser printing, solid-state potentiometry, fluidic and wireless technologies. The resulting tattoo-based potentiometric sodium sensor displays a rapid near-Nernstian response with negligible carryover effects and good resiliency against various mechanical deformations experienced by the human epidermis. On-body testing of the tattoo sensor coupled to a wireless transceiver during exercise activity demonstrated its ability to continuously monitor sweat sodium dynamics. The real-time sweat sodium concentration was transmitted wirelessly via a body-worn transceiver from the sodium tattoo sensor to a notebook while the subjects perspired on a stationary cycle. The favorable analytical performance along with the wearable nature of the wireless transceiver makes the new epidermal potentiometric sensing system attractive for continuous monitoring of sodium dynamics in human perspiration during diverse activities relevant to the healthcare, fitness, military and skin-care domains.
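A "near-Nernstian response" of such a sodium sensor is usually quantified as the slope of the cell potential versus the logarithm of ion activity; a sketch with hypothetical calibration readings (not the article's data):

```python
import numpy as np

log_a = np.array([-4.0, -3.0, -2.0, -1.0])       # log10 of Na+ activity
emf_mv = np.array([100.0, 159.0, 218.0, 277.0])  # hypothetical potentials, mV

# Linear fit E = intercept + slope * log10(a); for a monovalent ion the
# ideal Nernstian slope at 25 °C is about 59.2 mV per decade.
slope, intercept = np.polyfit(log_a, emf_mv, 1)
print(round(slope, 1), "mV/decade")  # ~59: near-Nernstian for Na+
```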
Purpose: A precise determination of the corneal diameter is essential for the diagnosis of various ocular diseases, cataract and refractive surgery as well as for the selection and fitting of contact lenses. The aim of this study was to investigate the agreement between two automatic and one manual method for corneal diameter determination and to evaluate possible diurnal variations in corneal diameter.
Patients and Methods: Horizontal white-to-white corneal diameter of 20 volunteers was measured at three different fixed times of day with three methods: the Scheimpflug method (Pentacam HR, Oculus), Placido-based topography (Keratograph 5M, Oculus) and a manual method using image analysis software at a slit lamp (BQ900, Haag-Streit).
Results: The two-factorial analysis of variance showed no significant effect of the different instruments (p = 0.117), the different time points (p = 0.506) or the interaction between instrument and time point (p = 0.182). Very good repeatability (intraclass correlation coefficient ICC, quartile coefficient of dispersion QCD) was found for all three devices. However, manual slit-lamp measurements showed a higher QCD than the automatic measurements with the Keratograph 5M and the Pentacam HR at all measurement times.
Conclusion: The manual and automated methods used in the study to determine corneal diameter showed good agreement and repeatability. No significant diurnal variations of corneal diameter were observed during the period of time studied.
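The quartile coefficient of dispersion (QCD) used above as a repeatability measure is computed as (Q3 − Q1)/(Q3 + Q1); a sketch on synthetic repeated measurements (illustrative corneal diameters in mm, not the study's data):

```python
import numpy as np

def qcd(values):
    """Quartile coefficient of dispersion: (Q3 - Q1) / (Q3 + Q1).
    A scale-free spread measure; smaller means better repeatability."""
    q1, q3 = np.percentile(values, [25, 75])
    return (q3 - q1) / (q3 + q1)

# Hypothetical repeated measurements of one eye:
repeats = np.array([11.8, 11.9, 11.9, 12.0, 12.0, 12.1])
print(round(qcd(repeats), 4))  # → 0.0042
```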
Comparison of intravenous immunoglobulins for naturally occurring autoantibodies against amyloid-β
(2010)
Intravenous immunoglobulins (IVIG) are currently used for therapeutic purposes in autoimmune disorders. Recently, we demonstrated the presence of naturally occurring antibodies against amyloid-β (nAbs-Aβ) within the pool of IVIG. In this study, we compared different brands of IVIG for nAbs-Aβ and have found differences in the specificity of the nAbs-Aβ towards Aβ1–40 and Aβ1–42. We analyzed the influence of a pH-shift over the course of antibody storage using ELISA and investigated antibody dimerization at acidic and neutral pH as well as differences in the IgG subclass distributions among the IVIG using both HPLC and a nephelometric assay. Furthermore, we investigated the epitope region of purified nAbs-Aβ. The differences found in Aβ specificity are not directly proportionate to the binding nature of these antibodies when administered in vivo. This information, however, may serve as a guide when choosing the commercial source of IVIG for therapeutic applications in Alzheimer's disease.
BACKGROUND
Immunosuppression is often considered as an indication for antibiotic prophylaxis to prevent surgical site infections (SSI) while performing skin surgery. However, the data on the risk of developing SSI after dermatologic surgery in immunosuppressed patients are limited.
PATIENTS AND METHODS
All patients of the Department of Dermatology and Allergology at the University Hospital of RWTH Aachen in Aachen, Germany, who underwent hospitalization for a dermatologic surgery between June 2016 and January 2017 (6 months), were followed up after surgery until completion of the wound healing process. The follow-up addressed the occurrence of SSI and the need for systemic antibiotics after the operative procedure. Immunocompromised patients were compared with immunocompetent patients. The investigation was conducted as a retrospective analysis of patient records.
RESULTS
The authors performed 284 dermatologic surgeries in 177 patients. Nineteen percent (54/284) of the skin surgeries were performed on immunocompromised patients. The most common indications for surgical treatment were nonmelanoma skin cancer and malignant melanomas. Surgical site infections occurred in 6.7% (19/284) of the cases. In 95% (18/19), systemic antibiotic treatment was needed. Twenty-one percent of all SSI (4/19) were seen in immunosuppressed patients.
CONCLUSION
According to the authors' data, immunosuppression does not represent a significant risk factor for SSI after dermatologic surgery. However, larger prospective studies are needed to make specific recommendations on the use of antibiotic prophylaxis while performing skin surgery in these patients.
The available data on complications after dermatologic surgery have improved over the past years. In particular, additional risk factors have been identified for surgical site infections (SSI). Purulent surgical sites, older age, involvement of the head, neck, and acral regions, and also the involvement of less experienced surgeons have been reported to increase the risk of SSI after dermatologic surgery.1 In general, the incidence of SSI after skin surgery is considered to be low.1,2 However, antibiotics in dermatologic surgery, especially in the perioperative setting, seem to be overused,3,4 particularly in view of developing antibiotic resistance and side effects.
Immunosuppression has been recommended to be taken into consideration as an additional indication for antibiotic prophylaxis to prevent SSI after skin surgery in special cases.5,6 However, these recommendations do not specify the exact dermatologic surgeries, and were not specifically developed for dermatologic surgery patients and treatments, but adopted from other surgical fields.6 According to a survey conducted among American College of Mohs Surgery members in 2012, 13% to 29% of the surgeons administered antibiotic prophylaxis to immunocompromised patients to prevent SSI while performing dermatologic surgery on noninfected skin,3 although this was not recommended by the Journal of the American Academy of Dermatology Advisory Statement. Indeed, the data on the risk of developing SSI after dermatologic surgery in immunosuppressed patients are limited. However, it is possible that due to the insufficient evidence on the risk of SSI occurrence in this patient group, dermatologic surgeons tend to overuse perioperative antibiotic prophylaxis.
To make specific recommendations on the use of antibiotic prophylaxis in immunosuppressed patients in the field of skin surgery, more information about the incidence of SSI after dermatologic surgery in these patients is needed. The aim of this study was to fill this data gap by investigating whether there is an increased risk of SSI after skin surgery in immunocompromised patients compared with immunocompetent patients.
Melting probes are a proven tool for the exploration of thick ice layers and clean sampling of subglacial water on Earth. Their compact size and ease of operation also make them a key technology for the future exploration of icy moons in our Solar System, most prominently Europa and Enceladus. For both mission planning and hardware engineering, metrics such as efficiency and expected performance in terms of achievable speed, power requirements, and necessary heating power have to be known.
Theoretical studies aim at describing thermal losses on the one hand, while laboratory experiments and field tests allow an empirical investigation of the true performance on the other hand. To investigate the practical value of a performance model for the operational performance in extraterrestrial environments, we first contrast measured data from terrestrial field tests on temperate and polythermal glaciers with results from basic heat loss models and a melt trajectory model. For this purpose, we propose conventions for the determination of two different efficiencies that can be applied to both measured data and models. One definition of efficiency is related to the melting head only, while the other definition considers the melting probe as a whole. We also present methods to combine several sources of heat loss for probes with a circular cross-section, and to translate the geometry of probes with a non-circular cross-section to analyse them in the same way. The models were selected in a way that minimizes the need to make assumptions about unknown parameters of the probe or the ice environment.
The results indicate that currently used models do not yet reliably reproduce the performance of a probe under realistic conditions. Melting velocities and efficiencies are consistently overestimated by 15 to 50% in the models, but qualitatively agree with the field test data. Hence, losses are observed that are not yet covered and quantified by the available loss models. We find that the deviation increases with decreasing ice temperature. We suspect that this mismatch is mainly due to the overly restrictive idealization of the probe model and the fact that the probe was not operated in an efficiency-optimized manner during the field tests. With respect to space mission engineering, we find that performance and efficiency models must be used with caution in unknown ice environments, as various ice parameters have a significant effect on the melting process. Some of these are difficult to estimate from afar.
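A loss-free energy balance of the kind underlying such performance models, in which all heating power warms and melts the ice column swept by the probe, can be sketched as follows (material constants approximate, probe power and radius hypothetical). Real probes fall short of this bound, which is exactly the mismatch the field tests quantify:

```python
import math

# Approximate material constants for ice (values rounded, temperature
# dependence neglected).
RHO_ICE = 917.0       # density, kg/m^3
C_ICE = 2100.0        # specific heat capacity, J/(kg K)
L_FUSION = 334_000.0  # latent heat of fusion, J/kg

def ideal_melt_velocity(power_w, radius_m, ice_temp_c):
    """Loss-free upper bound on melting velocity (m/s): all heating power
    warms the swept ice column from ice_temp_c to 0 °C and melts it."""
    area = math.pi * radius_m ** 2
    energy_per_m3 = RHO_ICE * (C_ICE * (0.0 - ice_temp_c) + L_FUSION)
    return power_w / (area * energy_per_m3)

# Example: a 3 kW probe of 6 cm radius in -10 °C ice.
v = ideal_melt_velocity(3000.0, 0.06, -10.0)
print(round(v * 3600, 2), "m/h")  # idealized; real probes are slower
```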
Combined with the use of renewable energy sources for its production, hydrogen represents a possible alternative gas turbine fuel for future low-emission power generation. Due to the large difference in the physical properties of hydrogen compared to other fuels such as natural gas, well-established gas turbine combustion systems cannot be directly applied to Dry Low NOx (DLN) hydrogen combustion. Thus, the development of DLN combustion technologies is an essential and challenging task for the future of hydrogen-fuelled gas turbines. The DLN Micromix combustion principle for hydrogen fuel has been developed to significantly reduce NOx emissions. This combustion principle is based on cross-flow mixing of air and gaseous hydrogen, which reacts in multiple miniaturized diffusion-type flames. The major advantages of this combustion principle are the inherent safety against flashback and the low NOx emissions due to a very short residence time of the reactants in the flame region of the micro-flames. The Micromix combustion technology has already been proven experimentally and numerically for pure hydrogen fuel operation at different energy density levels. The aim of the present study is to analyze the influence of different geometry parameter variations on the flame structure and the NOx emissions and to identify the most relevant design parameters, in order to provide a physical understanding of the Micromix flame's sensitivity to the burner design and to identify further optimization potential of this innovative combustion technology, while increasing its energy density and making it mature enough for real gas turbine application. The study reveals great optimization potential of the Micromix combustion technology with respect to its DLN characteristics and gives insight into the impact of geometry modifications on flame structure and NOx emissions. This allows the energy density of the Micromix burners to be further increased and the technology to be integrated in industrial gas turbines.