For performing point-of-care molecular diagnostics, magnetic immunoassays constitute a promising alternative to established enzyme-linked immunosorbent assays (ELISA) because they are fast, robust and sensitive. Simultaneous detection of multiple biomolecular targets from one body fluid sample is desired. The aim of this work is to show that multiplex magnetic immunodetection based on magnetic frequency mixing by means of modular immunofiltration columns prepared for different targets is feasible. By calculations of the magnetic response signal, the required spacing between the modules was determined. Immunofiltration columns were manufactured by 3D printing and antibody immobilization was performed in a batch approach. It was shown experimentally that two different target molecules in a sample solution could be individually detected in a single assaying step with magnetic measurements of the corresponding immobilization filters. The arrangement order of the filters and of a negative control did not influence the results. Thus, a simple and reliable approach to multi-target magnetic immunodetection was demonstrated.
In modern bioanalytical methods, it is often desired to detect several targets in one sample within one measurement. Immunological methods, including those that use superparamagnetic beads, are an important group of techniques for these applications. The goal of this work is to investigate the feasibility of simultaneously detecting different superparamagnetic beads acting as markers using the magnetic frequency mixing technique. The frequency of the magnetic excitation field is scanned while the lower driving frequency is kept constant. Due to the particles’ nonlinear magnetization, mixing frequencies are generated. To record their amplitude and phase information, the pickup coil’s signal is digitized directly and subsequently processed by Fast Fourier Transformation. By synchronizing both magnetic fields during the frequency scan, stable phase information is obtained. In this research, it is shown that the amplitude of the dominant mixing component is proportional to the amount of superparamagnetic beads inside a sample, whereas the phase does not show this behaviour. Excitation frequency scans of different bead types were performed, showing different phases without correlation to their diverse amplitudes. Two commercially available bead types were selected, and their amounts in a mixture were determined as a demonstration of multiplex measurements.
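The mechanism described above can be sketched numerically: a saturating magnetization driven by two sinusoidal fields generates components at mixing frequencies such as f_hi + 2·f_lo. The frequencies, amplitudes, and the tanh response below are illustrative assumptions, not the authors' parameters.

```python
import numpy as np

# Sketch (not the authors' code) of frequency mixing: a saturating
# magnetization M = tanh(H) driven at two frequencies generates mixing
# components such as f_hi + 2*f_lo. All parameter values are assumed.
fs = 100_000                                   # sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)                  # 1 s record -> 1 Hz bins
f_hi, f_lo = 2_000.0, 63.0                     # excitation / driving frequency
H = 0.2 * np.sin(2 * np.pi * f_hi * t) + 0.8 * np.sin(2 * np.pi * f_lo * t)
M = np.tanh(H)                                 # nonlinear bead response

spectrum = np.abs(np.fft.rfft(M)) / t.size     # one-sided, scaled
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def amplitude_at(f):
    """Spectral amplitude at the bin closest to frequency f."""
    return float(spectrum[np.argmin(np.abs(freqs - f))])

mix = amplitude_at(f_hi + 2 * f_lo)            # dominant mixing component
background = amplitude_at(f_hi + 3.3 * f_lo)   # no spectral line lies here
```

Because both drive frequencies fall on exact FFT bins, the mixing line at f_hi + 2·f_lo stands far above any off-line bin.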
Sensitive and rapid detection of cholera toxin subunit B using magnetic frequency mixing detection
(2019)
Cholera is a life-threatening disease caused by the cholera toxin (CT) as produced by some Vibrio cholerae serogroups. In this research, we present a method which directly detects the toxin’s B subunit (CTB) in drinking water. For this purpose, we performed a magnetic sandwich immunoassay inside a 3D immunofiltration column. We used two different commercially available antibodies to capture CTB and to bind the superparamagnetic beads; ELISA experiments were performed to select the antibody combination. The beads act as labels for the magnetic frequency mixing detection technique. We show that the limit of detection depends on the type of magnetic beads. A nonlinear Hill curve was fitted to the calibration measurements by means of custom-written Python software. We achieved a sensitive and rapid detection of CTB within a broad concentration range from 0.2 ng/ml to more than 700 ng/ml.
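The Hill-curve calibration mentioned above can be reproduced in a few lines. The concentration/signal pairs below are made-up stand-ins, not the measured data, and the parameter names are generic.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(c, top, ec50, n):
    """Hill equation: saturating signal as a function of concentration c."""
    return top * c**n / (ec50**n + c**n)

# hypothetical calibration points (ng/ml and arbitrary signal units)
conc = np.array([0.2, 1.0, 5.0, 20.0, 100.0, 700.0])
signal = np.array([0.8, 3.9, 17.0, 48.0, 80.0, 95.0])

popt, _ = curve_fit(hill, conc, signal, p0=[100.0, 20.0, 1.0])
top, ec50, n = popt
```

With this synthetic data the fit recovers a half-saturation point near 20 ng/ml and a Hill coefficient near 1.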
Herein, fibroin, polylactide (PLA), and carbon are investigated for their suitability as biocompatible and biodegradable materials for amperometric biosensors. For this purpose, screen-printed carbon electrodes on the biodegradable substrates fibroin and PLA are modified with a glucose oxidase membrane and then encapsulated with the biocompatible material Ecoflex. The influence of different curing parameters of the carbon electrodes on the resulting biosensor characteristics is studied. The morphology of the electrodes is investigated by scanning electron microscopy, and the biosensor performance is examined by amperometric measurements of glucose (0.5–10 mM) in phosphate buffer solution, pH 7.4, at an applied potential of 1.2 V versus a Ag/AgCl reference electrode. Instead of Ecoflex, fibroin, PLA, and wound adhesive are tested as alternative encapsulation compounds: a series of swelling tests with different fibroin compositions, PLA, and Ecoflex has been performed before characterizing the most promising candidates by chronoamperometry. To this end, the carbon electrodes are completely covered with the respective encapsulation material. Chronoamperometric measurements with H2O2 concentrations between 0.5 and 10 mM enable studying the leakage current behavior.
It has been shown that muscle fascicle curvature increases with increasing contraction level and decreasing muscle–tendon complex length. The analyses were done with limited examination windows concerning contraction level, muscle–tendon complex length, and/or intramuscular position of ultrasound imaging. With this study we aimed to investigate the correlation between fascicle arching and contraction, muscle–tendon complex length and their associated architectural parameters in gastrocnemius muscles to develop hypotheses concerning the fundamental mechanism of fascicle curving. Twelve participants were tested in five different positions (90°/105°*, 90°/90°*, 135°/90°*, 170°/90°*, and 170°/75°*; *knee/ankle angle). They performed isometric contractions at four different contraction levels (5%, 25%, 50%, and 75% of maximum voluntary contraction) in each position. Panoramic ultrasound images of gastrocnemius muscles were collected at rest and during constant contraction. Aponeuroses and fascicles were tracked in all ultrasound images and the parameters fascicle curvature, muscle–tendon complex strain, contraction level, pennation angle, fascicle length, fascicle strain, intramuscular position, sex and age group were analyzed by linear mixed effect models. Mean fascicle curvature of the medial gastrocnemius increased with contraction level (+5 m−1 from 0% to 100%; p = 0.006). Muscle–tendon complex length had no significant impact on mean fascicle curvature. Mean pennation angle (2.2 m−1 per 10°; p < 0.001), inverse mean fascicle length (20 m−1 per cm−1; p = 0.003), and mean fascicle strain (−0.07 m−1 per +10%; p = 0.004) correlated with mean fascicle curvature. Evidence has also been found for intermuscular, intramuscular, and sex-specific intramuscular differences of fascicle curving. Pennation angle and the inverse fascicle length show the highest predictive capacities for fascicle curving. 
Given the strong correlation between pennation angle and fascicle curvature and the intramuscular pattern of curving, we suggest that future studies examine correlations between fascicle curvature and intramuscular fluid pressure.
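As a minimal numerical illustration of the fixed-effect slope behind such a random-intercept mixed model, within-subject centering removes each participant's offset before pooling. The data are simulated around the reported effect size (about 2.2 m⁻¹ per 10°); they are not the study's measurements.

```python
import numpy as np

# Simulated participants with individual curvature offsets and a common
# pennation-angle effect of 0.22 m^-1 per degree (assumed, for illustration).
rng = np.random.default_rng(0)
n_subj, n_obs = 12, 20
subj = np.repeat(np.arange(n_subj), n_obs)
pennation = rng.uniform(10.0, 30.0, n_subj * n_obs)       # degrees
offsets = rng.normal(0.0, 1.0, n_subj)[subj]              # per-subject intercepts
curvature = 2.0 + 0.22 * pennation + offsets + rng.normal(0, 0.5, subj.size)

# Center within each subject, then pool: this recovers the fixed slope of a
# random-intercept model without needing a mixed-model library.
xc, yc = pennation.copy(), curvature.copy()
for s in range(n_subj):
    m = subj == s
    xc[m] -= xc[m].mean()
    yc[m] -= yc[m].mean()
slope = float(xc @ yc / (xc @ xc))    # m^-1 per degree
```

The centered regression recovers the simulated effect despite the subject-specific offsets.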
Low emission zones and truck bans, the rising price of diesel and increases in road tolls: all of these factors are putting serious pressure on the transport industry. Commercial vehicle manufacturers and their suppliers are in the process of identifying new solutions to these challenges as part of their efforts to meet the EEV (enhanced environmentally friendly vehicle) limits, which are currently the most robust European exhaust and emissions standards for trucks and buses.
Germany is a frontrunner in setting frameworks for the transition to a low-carbon system. The mobility sector plays a significant role in this shift, affecting different people and groups on multiple levels. Without acceptance from these stakeholders, emission targets are out of reach. This research analyzes how the heterogeneous preferences of various stakeholders align with the transformation of the mobility sector, looking at the extent to which the German transformation paths are supported and where stakeholders are located.
Under the research objective of comparing stakeholders' preferences to identify which car segments require additional support for a successful climate transition, a status quo of stakeholders and car performance criteria forms the foundation for the analysis. Because stakeholders' preferences are partly hidden, criteria weightings cannot be elicited from them directly; therefore, a ranking derived from observed preferences is used. With this study's inverse multi-criteria decision analysis, weightings can be predicted and used together with a recalibrated performance matrix to explore future preferences toward car segments.
Results show that stakeholders prefer medium-sized cars, with the trend pointing towards the increased potential for alternative propulsion technologies and electrified vehicles. These insights can guide the improved targeting of policy supporting the energy and mobility transformation. Additionally, the method proposed in this work can fully handle subjective approaches while incorporating a priori information. A software implementation of the proposed method completes this work and is made publicly available.
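The forward step of such a multi-criteria decision analysis — scoring car segments with a weighted sum once criteria weightings are known — can be sketched as follows. The segments, criteria, weights, and performance matrix are illustrative assumptions; the study derives the weightings inversely from observed rankings rather than assuming them.

```python
import numpy as np

# Weighted-sum MCDA sketch with made-up data (higher = better, normalized).
segments = ["small", "medium", "large"]
criteria = ["cost", "range", "emissions"]      # assumed criteria
P = np.array([
    [0.9, 0.3, 0.8],   # small
    [0.7, 0.8, 0.7],   # medium
    [0.4, 0.9, 0.3],   # large
])
w = np.array([0.5, 0.3, 0.2])                  # assumed stakeholder weights

scores = P @ w                                 # one score per segment
best = segments[int(np.argmax(scores))]
```

An inverse analysis would instead search for the weight vector w that best reproduces an observed ranking of the segments.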
The aim of the present study was the characterisation of three true subtilisins and one phylogenetically intermediate subtilisin from halotolerant and halophilic microorganisms. Considering the currently growing enzyme market for efficient and novel biocatalysts, data mining is a promising source for novel, as yet uncharacterised enzymes, especially from halophilic or halotolerant Bacillaceae, which offer great potential to meet industrial needs. Both halophilic bacteria Pontibacillus marinus DSM 16465ᵀ and Alkalibacillus haloalkaliphilus DSM 5271ᵀ and both halotolerant bacteria Metabacillus indicus DSM 16189 and Litchfieldia alkalitelluris DSM 16976ᵀ served as a source for the four new subtilisins SPPM, SPAH, SPMI and SPLA. The protease genes were cloned and expressed in Bacillus subtilis DB104. Purification to apparent homogeneity was achieved by ethanol precipitation, desalting and ion-exchange chromatography. Enzyme activity could be observed between pH 5.0–12.0 with an optimum for SPPM, SPMI and SPLA around pH 9.0 and for SPAH at pH 10.0. The optimal temperature for SPMI and SPLA was 70 °C and for SPPM and SPAH 55 °C and 50 °C, respectively. All proteases showed high stability towards 5% (w/v) SDS and were active even at NaCl concentrations of 5 M. The four proteases demonstrate potential for future biotechnological applications.
The aerodynamic performance of propellers strongly depends on their geometry and, consequently, on aeroelastic deformations. Knowledge of the extent of the impact is crucial for overall aircraft performance. An integrated simulation environment for steady aeroelastic propeller simulations is presented. The simulation environment is applied to determine the impact of elastic deformations on the aerodynamic propeller performance. The aerodynamic module includes a blade element momentum approach to calculate aerodynamic loads. The structural module is based on finite beam elements, according to Timoshenko theory, including moderate deflections. Several fixed-pitch propellers with thin-walled cross sections made of both isotropic and non-isotropic materials are investigated. The essential parameters are varied: diameter, disc loading, sweep, material, rotational speed, and flight velocity. The relative change of thrust between rigid and elastic blades quantifies the impact of propeller elasticity. Swept propellers of large diameters or low disc loadings can decrease the thrust significantly. High flight velocities and low material stiffness amplify this tendency. Performance calculations without consideration of propeller elasticity can therefore overestimate efficiency. To avoid cost- and time-intensive redesigns, propeller elasticity should be considered for swept planforms and low disc loadings.
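A back-of-envelope view of the disc-loading parameter named above comes from momentum theory: the hover induced velocity scales with the square root of the disc loading, so low disc loading implies low induced velocity. All numbers below are assumed for illustration and are not taken from the study.

```python
import math

# Momentum-theory relation behind "disc loading" (illustrative values only).
rho = 1.225                          # air density at sea level, kg/m^3
T = 500.0                            # thrust, N (assumed)
D = 1.8                              # propeller diameter, m (assumed)

A = math.pi * D**2 / 4               # disc area, m^2
disc_loading = T / A                 # N/m^2
v_i = math.sqrt(T / (2 * rho * A))   # hover induced velocity, m/s
```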
Several species of (poly)saccharides and organic acids can often be found simultaneously in various biological matrices, e.g., fruits, plant materials, and biological fluids. The analysis of such matrices sometimes represents a challenging task. Using Aloe vera (A. vera) plant materials as an example, the performance of several spectroscopic methods (80 MHz benchtop NMR, NIR, ATR-FTIR and UV–vis) for the simultaneous analysis of quality parameters of this plant material was compared. The determined parameters include (poly)saccharides such as aloverose, fructose and glucose as well as organic acids (malic, lactic, citric, isocitric, acetic, fumaric, benzoic and sorbic acids). 500 MHz NMR and high-performance liquid chromatography (HPLC) were used as the reference methods.
UV–vis data can be used only for the identification of added preservatives (benzoic and sorbic acids) and the drying agent (maltodextrin), and for semiquantitative analysis of malic acid. NIR and MIR spectroscopies combined with multivariate regression can deliver a more informative overview of A. vera extracts, being able to additionally quantify glucose, aloverose, fructose, and citric, isocitric, malic and lactic acids. Low-field NMR measurements can be used for the quantification of aloverose, glucose, malic, lactic, acetic, and benzoic acids. The benchtop NMR method was successfully validated in terms of robustness, stability, precision, reproducibility, limit of detection (LOD) and limit of quantification (LOQ). All spectroscopic techniques are useful for the screening of (poly)saccharides and organic acids in plant extracts and should be applied according to their availability as well as the information and confidence required for the specific analytical goal. Benchtop NMR spectroscopy seems to be the most feasible solution for quality control of A. vera products.
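The multivariate-calibration idea behind the NIR/MIR models can be sketched with simulated data: a least-squares calibration vector maps "spectra" to concentrations. The study uses multivariate regression on real spectra; the Gaussian peak and noise level below are synthetic stand-ins.

```python
import numpy as np

# Synthetic spectra: one analyte with a Gaussian absorption peak plus noise.
rng = np.random.default_rng(1)
n_samples, n_channels = 60, 20
conc = rng.uniform(0.0, 10.0, n_samples)                 # analyte level (assumed units)
peak = np.exp(-0.5 * ((np.arange(n_channels) - 10) / 2.0) ** 2)
spectra = np.outer(conc, peak) + rng.normal(0, 0.01, (n_samples, n_channels))

# Least-squares calibration vector (a simple stand-in for PLS-type models).
coef, *_ = np.linalg.lstsq(spectra, conc, rcond=None)
pred = spectra @ coef
rmse = float(np.sqrt(np.mean((pred - conc) ** 2)))
```

In practice the calibration would be validated on held-out samples, as the abstract's LOD/LOQ validation suggests; here the in-sample error merely shows the mapping works.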
Background
Aminoacylases are highly promising enzymes for the green synthesis of acyl-amino acids, potentially replacing the environmentally harmful Schotten-Baumann reaction. Long-chain acyl-amino acids can serve as strong surfactants and emulsifiers, with applications in the cosmetics industry. Heterologous expression of these enzymes, however, is often hampered, limiting their use in industrial processes.
Results
We identified a novel mycobacterial aminoacylase gene from Mycolicibacterium smegmatis MKD 8, cloned and expressed it in Escherichia coli and Vibrio natriegens using the T7 overexpression system. The recombinant enzyme was prone to aggregate as inclusion bodies, and while V. natriegens Vmax™ could produce soluble aminoacylase upon induction with isopropyl β-d-1-thiogalactopyranoside (IPTG), E. coli BL21 (DE3) needed autoinduction with lactose to produce soluble recombinant protein. We successfully conducted a chaperone co-expression study in both organisms to further enhance aminoacylase production and found that overexpression of chaperones GroEL/S enhanced aminoacylase activity in the cell-free extract 1.8-fold in V. natriegens and E. coli. Ultimately, E. coli ArcticExpress™ (DE3), which co-expresses the cold-adapted chaperonins Cpn60/10 from Oleispira antarctica, cultivated at 12 °C, proved to be the most suitable expression system for this aminoacylase and exhibited twice the aminoacylase activity in the cell-free extract compared to E. coli BL21 (DE3) with GroEL/S co-expression at 20 °C. The purified aminoacylase was characterized based on its hydrolytic activities, being most stable and active at pH 7.0, with a maximum activity at 70 °C, and stability at 40 °C and pH 7.0 for 5 days. The aminoacylase strongly prefers short-chain acyl-amino acids with smaller, hydrophobic amino acid residues. Several long-chain acyl-amino acids were also accepted in hydrolysis, especially N-lauroyl-L-methionine. To initially evaluate the relevance of this aminoacylase for the synthesis of N-acyl-amino acids, we demonstrated that lauroyl-methionine can be synthesized from lauric acid and methionine in an aqueous system.
Conclusion
Our results suggest that the recombinant enzyme is well suited for synthesis reactions and will thus be further investigated.
The eVTOL industry is a rapidly growing mass market expected to emerge in 2024. Because of their predicted missions, eVTOLs compete with ground-based transportation modes, mainly passenger cars. Therefore, the automotive and the classical aircraft design processes are reviewed and compared to highlight advantages for eVTOL development. A special focus is on ergonomic comfort and safety. The need for further investigation of eVTOL crashworthiness is outlined by, first, specifying the relevance of passive safety via accident statistics and customer perception analysis; second, comparing the current state of regulation and certification; and third, discussing the advantages of integral safety and applying the automotive safety approach to eVTOL development. Integral safety links active and passive safety, while the automotive safety approach means implementing standardized mandatory full-vehicle crash tests for future eVTOL. Subsequently, possible crash impact conditions are analyzed, and three full-vehicle crash load cases are presented.
GHEtool is a Python package that contains all the functionalities needed to deal with borefield design. It is developed for both researchers and practitioners. The core of this package is the automated sizing of borefields under different conditions. Sizing a borefield is typically slow due to the high complexity of the mathematical background and usually takes on the order of minutes. Because GHEtool relies on a large set of precalculated data, it can size a borefield in the order of tenths of milliseconds. The tool is therefore well suited for implementation in typical workflows where iterations are required.
GHEtool also comes with a graphical user interface (GUI). The GUI is distributed as a prebuilt executable, which provides access to all functionalities without any coding. An installer that places the GUI at a user-defined location is available at: https://www.mech.kuleuven.be/en/tme/research/thermal_systems/tools/ghetool.
In general aviation, too, it is desirable to be able to operate existing internal combustion engines with fuels that produce less CO₂ than the Avgas 100LL widely used today. It can be assumed that, in comparison, the fuels CNG, LPG and LNG, which are gaseous under normal conditions, produce significantly lower emissions. The necessary propulsion system adaptations were investigated as part of a research project at Aachen University of Applied Sciences.
This study analyses the expected utilization of an urban distribution grid under high penetration of photovoltaics and e-mobility with charging infrastructure on a residential level. The grid utilization and the corresponding power flow are evaluated while varying the control strategies and the installed photovoltaic capacity in different scenarios. Four scenarios are used to analyze the impact of e-mobility. The individual mobility demand is modelled based on the largest German study on mobility, “Mobilität in Deutschland”, which is carried out every 5 years. To estimate the ramp-up of photovoltaic generation, a potential analysis of the roof surfaces in the supply area is carried out via an evaluation of an open solar potential study. The photovoltaic feed-in time series is derived individually for each installed system in a resolution of 15 min. The residential consumption is estimated using historical smart meter data collected in London between 2012 and 2014. For a realistic charging demand, each residential household decides daily, based on the state of charge, whether its vehicle needs to be charged. The resulting charging time series depends on the underlying behavior scenario. Market prices and mobility demand are therefore used as scenario input parameters for a utility function based on the current state of charge to model individual behavior. The aggregated electricity demand is the starting point of the power flow calculation. The evaluation is carried out for an urban region with approximately 3100 residents. The analysis shows that increased penetration of photovoltaics combined with a flexible and adaptive charging strategy can maximize PV usage and reduce the need for congestion-related intervention by the grid operator: the amount of energy charged from the grid falls by 30%, which reduces the average price of a charged kWh by 35%, from 21.8 ct/kWh without PV optimization to 14 ct/kWh.
The resulting grid congestions are managed by implementing an intelligent price or control signal. The analysis took place using data from a real German grid with 10 subgrids. The entire software can be adapted for the analysis of different distribution grids and is publicly available as an open-source software library on GitHub.
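The household-level charging decision described above can be sketched as a simple utility rule trading off market price against state of charge. The thresholds and the functional form below are illustrative assumptions, not the study's model; only the reference price of 21.8 ct/kWh is taken from the abstract.

```python
# Hypothetical daily charging decision per household (illustrative rule).
def charge_decision(soc: float, price_ct_per_kwh: float,
                    soc_min: float = 0.3, price_ref: float = 21.8) -> bool:
    """Charge when the battery is low, or when the price is attractive
    relative to the remaining headroom in the battery."""
    if soc < soc_min:          # mobility demand must always be covered
        return True
    if soc >= 1.0:             # battery full
        return False
    # utility: cheap energy raises it, an already well-filled battery lowers it
    utility = (price_ref - price_ct_per_kwh) / price_ref - (soc - soc_min)
    return utility > 0

decisions = [
    charge_decision(0.2, 30.0),   # low battery: charge despite high price
    charge_decision(0.5, 14.0),   # cheap PV surplus: charge
    charge_decision(0.9, 21.8),   # nearly full at reference price: wait
]
```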
This paper describes the realization of a novel neurocomputer based on the concept of a coprocessor. In contrast to existing neurocomputers, the main interest was the realization of a scalable, flexible system capable of computing neural networks of arbitrary topology and scale, with full independence from special hardware from the software's point of view. On the other hand, computational power can be added whenever needed and flexibly adapted to the requirements of the application. Hardware independence is achieved by a runtime system that autonomously uses all available computing power, including multiple host CPUs and an arbitrary number of neural coprocessors. The realization of arbitrary neural topologies is provided through the implementation of the elementary operations found in most neural topologies.
Environmental emissions, global warming, and energy-related concerns have accelerated the advancements in conventional vehicles that primarily use internal combustion engines. Among the existing technologies, hydrogen fuel cell electric vehicles and fuel cell hybrid electric vehicles may have minimal contributions to greenhouse gas emissions and thus are the prime choices for environmental concerns. However, energy management in fuel cell electric vehicles and fuel cell hybrid electric vehicles is a major challenge. Appropriate control strategies should be used for effective energy management in these vehicles. On the other hand, there has been significant progress in artificial intelligence, machine learning, and the design of data-driven intelligent controllers. These techniques have found much attention within the community, and state-of-the-art energy management technologies have been developed based on them. This manuscript reviews the application of machine learning and intelligent controllers for prediction, control, energy management, and vehicle-to-everything (V2X) in hydrogen fuel cell vehicles. The effectiveness of data-driven control and optimization systems is investigated; the approaches are classified and compared, and future trends and directions for sustainability are discussed.
On the applicability of several tests to models with not identically distributed random effects
(2023)
We consider Kolmogorov–Smirnov and Cramér–von-Mises type tests for testing central symmetry, exchangeability, and independence. In the standard case, the tests are intended for the application to independent and identically distributed data with unknown distribution. The tests are available for multivariate data and bootstrap procedures are suitable to obtain critical values. We discuss the applicability of the tests to random effects models, where the random effects are independent but not necessarily identically distributed and with possibly unknown distributions. Theoretical results show the adequacy of the tests in this situation. The quality of the tests in models with random effects is investigated by simulations. Empirical results obtained confirm the theoretical findings. A real data example illustrates the application.
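A simplified, univariate version of such a test — a Kolmogorov–Smirnov-type statistic for central symmetry about 0, with a sign-flip bootstrap to obtain critical values — can be sketched as follows. The paper's procedures are more general (multivariate data, several hypotheses, random-effects models); this is only an illustration of the test principle.

```python
import numpy as np

rng = np.random.default_rng(2)

def ks_symmetry_stat(x):
    """Sup-distance between the empirical CDFs of x and -x."""
    grid = np.sort(x)
    F = np.searchsorted(grid, grid, side="right") / x.size
    G = np.searchsorted(np.sort(-x), grid, side="right") / x.size
    return float(np.max(np.abs(F - G)))

def symmetry_test(x, n_boot=500, alpha=0.05):
    """Reject symmetry if the statistic exceeds a bootstrap critical value,
    where the null is imposed by random sign flips."""
    stat = ks_symmetry_stat(x)
    boot = np.empty(n_boot)
    for b in range(n_boot):
        xb = rng.choice(x, x.size) * rng.choice([-1.0, 1.0], x.size)
        boot[b] = ks_symmetry_stat(xb)
    return stat > np.quantile(boot, 1 - alpha)

symmetric = rng.normal(0.0, 1.0, 300)        # symmetric about 0
skewed = rng.exponential(1.0, 300)           # clearly not symmetric about 0

sym_stat = ks_symmetry_stat(symmetric)
skew_stat = ks_symmetry_stat(skewed)
reject_skewed = bool(symmetry_test(skewed))
```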
We consider time-dependent portfolios and discuss the allocation of changes in the risk of a portfolio to changes in the portfolio’s components. For this purpose we adopt established allocation principles. We also use our approach to obtain forecasts for changes in the risk of the portfolio’s components. To put the approach into practice we present an implementation based on the output of a simulation. Allocation is illustrated with an example portfolio in the context of Solvency II. The quality of the forecasts is investigated with an empirical study.
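The simulation-based allocation can be illustrated with an Euler-type allocation of expected shortfall (ES): each component's contribution is its mean loss on the tail scenarios, and the contributions sum exactly to the portfolio risk. The three-component normal loss model below is a generic stand-in, not the paper's Solvency II example.

```python
import numpy as np

# Euler allocation of expected shortfall from simulation output.
rng = np.random.default_rng(3)
n_scen, alpha = 100_000, 0.99
# three components with different volatilities (illustrative model)
losses = rng.normal(0.0, 1.0, (n_scen, 3)) @ np.diag([1.0, 2.0, 0.5])
portfolio = losses.sum(axis=1)

var = np.quantile(portfolio, alpha)         # Value-at-Risk threshold
tail = portfolio >= var                     # worst (1 - alpha) scenarios
es = portfolio[tail].mean()                 # portfolio expected shortfall
contrib = losses[tail].mean(axis=0)         # Euler contributions per component
```

By construction the contributions add up to the portfolio ES (full allocation), and the most volatile component receives the largest share.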
This article describes an Internet of things (IoT) sensing device with a wireless interface which is powered by the energy-harvesting method of the Wiegand effect. The Wiegand effect, in contrast to continuous sources like photovoltaic or thermal harvesters, provides small amounts of energy discontinuously in pulsed mode. To enable an energy-self-sufficient operation of the sensing device with this pulsed energy source, the output energy of the Wiegand generator is maximized. This energy is used to power up the system and to acquire and process data like position, temperature or other resistively measurable quantities as well as transmit these data via an ultra-low-power ultra-wideband (UWB) data transmitter. A proof-of-concept system was built to prove the feasibility of the approach. The energy consumption of the system during start-up was analysed, traced back in detail to the individual components, compared to the generated energy and processed to identify further optimization options. Based on the proof of concept, an application prototype was developed.
Motile cilia are hair-like cell extensions that beat periodically to generate fluid flow along various epithelial tissues within the body. In dense multiciliated carpets, cilia were shown to exhibit a remarkable coordination of their beat in the form of traveling metachronal waves, a phenomenon which supposedly enhances fluid transport. Yet, how cilia coordinate their regular beat in multiciliated epithelia to move fluids remains insufficiently understood, particularly due to lack of rigorous quantification. We combine experiments, novel analysis tools, and theory to address this knowledge gap. To investigate collective dynamics of cilia, we studied zebrafish multiciliated epithelia in the nose and the brain. We focused mainly on the zebrafish nose, due to its conserved properties with other ciliated tissues and its superior accessibility for non-invasive imaging. We revealed that cilia are synchronized only locally and that the size of local synchronization domains increases with the viscosity of the surrounding medium. Even though synchronization is local only, we observed global patterns of traveling metachronal waves across the zebrafish multiciliated epithelium. Intriguingly, these global wave direction patterns are conserved across individual fish, but different for left and right noses, unveiling a chiral asymmetry of metachronal coordination. To understand the implications of synchronization for fluid pumping, we used a computational model of a regular array of cilia. We found that local metachronal synchronization prevents steric collisions, i.e., cilia colliding with each other, and improves fluid pumping in dense cilia carpets, but hardly affects the direction of fluid flow. In conclusion, we show that local synchronization together with tissue-scale cilia alignment coincide and generate metachronal wave patterns in multiciliated epithelia, which enhance their physiological function of fluid pumping.
Assistance systems have been widely adopted in the manufacturing sector to facilitate various processes and tasks in production environments. However, existing systems are mostly equipped with rigid functional logic and do not provide individual user experiences or adapt to users' capabilities. This work integrates human factors into assistance systems by adapting the hardware and the presented instructions to the workers' cognitive and physical demands. A modular system architecture is designed accordingly, which allows a flexible component exchange according to the user and the work task. Gamification, the use of game elements in non-gaming contexts, has further been adopted in this work to provide level-based instructions and personalised feedback. The developed framework is validated by applying it to a manual workstation for industrial assembly routines.
High aerodynamic efficiency requires propellers with high aspect ratios, while propeller sweep potentially reduces noise. Propeller sweep and high aspect ratios increase elasticity and coupling of structural mechanics and aerodynamics, affecting the propeller performance and noise. Therefore, this paper analyzes the influence of elasticity on forward-swept, backward-swept, and unswept propellers in hover conditions. A reduced-order blade element momentum approach is coupled with a one-dimensional Timoshenko beam theory and Farassat's formulation 1A. The results of the aeroelastic simulation are used as input for the aeroacoustic calculation. The analysis shows that elasticity influences noise radiation because thickness and loading noise respond differently to deformations. In the case of the backward-swept propeller, the location of the maximum sound pressure level shifts forward by 0.5°, while in the case of the forward-swept propeller, it shifts backward by 0.5°. Therefore, aeroacoustic optimization requires the consideration of propeller deformation.
Based on the European Space Agency (ESA) Science in Space Environment (SciSpacE) community White Paper “Human Physiology – Musculoskeletal system”, this perspective highlights unmet needs and suggests new avenues for future studies in musculoskeletal research to enable crewed exploration missions. The musculoskeletal system is essential for sustaining physical function and energy metabolism, and the maintenance of health during exploration missions, and consequently mission success, will be tightly linked to musculoskeletal function. Data collection from current space missions from pre-, during-, and post-flight periods would provide important information to understand and ultimately offset musculoskeletal alterations during long-term spaceflight. In addition, understanding the kinetics of the different components of the musculoskeletal system in parallel with a detailed description of the molecular mechanisms driving these alterations appears to be the best approach to address potential musculoskeletal problems that future exploratory-mission crew will face. These research efforts should be accompanied by technical advances in molecular and phenotypic monitoring tools to provide in-flight real-time feedback.
Flexible fuel operation of a Dry-Low-NOx Micromix Combustor with Variable Hydrogen Methane Mixture
(2022)
The role of hydrogen (H2) as a carbon-free energy carrier has been discussed for decades as a way of reducing greenhouse gas emissions. As a bridge technology towards a hydrogen-based energy supply, fuel mixtures of natural gas or methane (CH4) and hydrogen are possible.
The paper presents the first test results of a low-emission Micromix combustor designed for flexible-fuel operation with variable H2/CH4 mixtures. The numerical and experimental approach for considering variable fuel mixtures instead of recently investigated pure hydrogen is described.
In the experimental studies, a first-generation FuelFlex Micromix combustor geometry is tested at atmospheric pressure at gas turbine operating conditions corresponding to part and full load. The H2/CH4 fuel mixture composition is varied between 57 and 100 vol.% hydrogen content.
Despite the challenges that flexible-fuel operation poses for the design of a combustion system, the evaluated FuelFlex Micromix prototype shows significantly low NOx emissions.
The Cramér-von-Mises distance is applied to the distribution of the excess over a confidence level. Asymptotics of related statistics are investigated, and it is seen that the obtained limit distributions differ from the classical ones. For that reason, quantiles of the new limit distributions are given, and new bootstrap techniques for approximation purposes are introduced and justified. The results motivate new one-sample goodness-of-fit tests for the distribution of the excess over a confidence level and a new confidence interval for the related fitting error. Simulation studies investigate size and power of the tests as well as coverage probabilities of the confidence interval in the finite sample case. A practice-oriented application of the Cramér-von-Mises tests is the determination of an appropriate confidence level for the fitting approach. The adaptation of the idea to the well-known problem of threshold detection in the context of peaks-over-threshold modelling is sketched and illustrated by data examples.
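For reference, the classical Cramér-von-Mises statistic that the paper builds on has a simple closed form; applied to the excesses over a level u against a fitted model cdf, it measures the squared distance between the empirical and model distribution functions. This is a textbook sketch of the classical statistic, not the paper's modified version with its new limit distributions.

```python
def cramer_von_mises(sample, cdf):
    """Classical Cramér-von-Mises statistic
    W^2 = 1/(12n) + sum_i ( (2i-1)/(2n) - F(x_(i)) )^2
    for an ordered sample against a hypothesized cdf F."""
    x = sorted(sample)
    n = len(x)
    w2 = 1.0 / (12.0 * n)
    for i, xi in enumerate(x, start=1):
        w2 += (cdf(xi) - (2.0 * i - 1.0) / (2.0 * n)) ** 2
    return w2

# hypothetical usage for excesses over a confidence level u:
# excesses = [v - u for v in data if v > u]
# w2 = cramer_von_mises(excesses, fitted_model_cdf)
```

A small statistic indicates a good fit; the paper's point is that for excesses over an estimated level, the classical limit distribution of W² no longer applies.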
In this paper, we provide an analytical study of the transmission eigenvalue problem with two conductivity parameters. We will assume that the underlying physical model is given by the scattering of a plane wave for an isotropic scatterer. In previous studies, this eigenvalue problem was analyzed with one conductive boundary parameter whereas we will consider the case of two parameters. We prove the existence and discreteness of the transmission eigenvalues as well as study the dependence on the physical parameters. We are able to prove monotonicity of the first transmission eigenvalue with respect to the parameters and consider the limiting procedure as the second boundary parameter vanishes. Lastly, we provide extensive numerical experiments to validate the theoretical work.
Field-effect EIS (electrolyte-insulator-semiconductor) sensors modified with a positively charged weak polyelectrolyte layer have been applied for the electrical detection of DNA (deoxyribonucleic acid) immobilization and hybridization via the intrinsic molecular charge. The EIS sensors are able to detect the existence of target DNA amplicons in PCR (polymerase chain reaction) samples and can thus be used as a tool for quick verification of DNA amplification and a successful PCR process. Due to their miniaturized setup, compatibility with advanced micro- and nanotechnologies, and ability to detect biomolecules by their intrinsic molecular charge, these sensors can serve as a possible platform for the development of label-free DNA chips. Possible application fields as well as challenges and limitations are discussed.
The feasibility of light-addressed detection and manipulation of pH gradients inside an electrochemical microfluidic cell was studied. Local pH changes, induced by a light-addressable electrode (LAE), were detected using a light-addressable potentiometric sensor (LAPS) with different measurement modes representing an actuator-sensor system. Biosensor functionality was examined depending on locally induced pH gradients with the help of the model enzyme penicillinase, which had been immobilized in the microfluidic channel. The surface morphology of the LAE and enzyme-functionalized LAPS was studied by scanning electron microscopy. Furthermore, the penicillin sensitivity of the LAPS inside the microfluidic channel was determined with regard to the analyte’s pH influence on the enzymatic reaction rate. In a final experiment, the LAE-controlled pH inhibition of the enzyme activity was monitored by the LAPS.
Utilizing an appropriate enzyme immobilization strategy is crucial for designing enzyme-based biosensors. Plant virus-like particles represent ideal nanoscaffolds for an extremely dense and precise immobilization of enzymes, due to their regular shape, high surface-to-volume ratio and high density of surface binding sites. In the present work, tobacco mosaic virus (TMV) particles were applied for the co-immobilization of penicillinase and urease onto the gate surface of a field-effect electrolyte-insulator-semiconductor capacitor (EISCAP) with a p-Si-SiO₂-Ta₂O₅ layer structure for the sequential detection of penicillin and urea. The TMV-assisted bi-enzyme EISCAP biosensor exhibited a high urea and penicillin sensitivity of 54 and 85 mV/dec, respectively, in the concentration range of 0.1–3 mM. For comparison, the characteristics of single-enzyme EISCAP biosensors modified with TMV particles immobilized with either penicillinase or urease were also investigated. The surface morphology of the TMV-modified Ta₂O₅-gate was analyzed by scanning electron microscopy. Additionally, the bi-enzyme EISCAP was applied to mimic an XOR (Exclusive OR) enzyme logic gate.
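Biosensor sensitivities like the 54 and 85 mV/dec quoted above are the slopes of the sensor output potential versus the decadic logarithm of the analyte concentration. As a minimal sketch of how such a value is extracted from calibration points (the function name and the synthetic data below are illustrative, not taken from the paper):

```python
import math

def sensitivity_mv_per_decade(concentrations_mM, potentials_mV):
    """Least-squares slope of sensor potential vs. log10(concentration),
    i.e. the sensitivity in mV per concentration decade."""
    xs = [math.log10(c) for c in concentrations_mM]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(potentials_mV) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, potentials_mV))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx
```

For a Nernstian pH-based readout, slopes approach about 59 mV/dec at room temperature, which is why values in the 54-85 mV/dec range (the latter reflecting the two protons released per hydrolyzed penicillin contributing within the buffer environment) are reported per decade.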
Bacterial cellulose (BC) is a biopolymer produced by different microorganisms, but in biotechnological practice, Komagataeibacter xylinus is used. The micro- and nanofibrillar structure of BC, which forms many pores of different sizes, creates prerequisites for the introduction of other polymers into it, including those synthesized by other microorganisms. The study aims to develop a cocultivation system of BC and prebiotic producers to obtain a BC-based composite material with prebiotic activity. In this study, pullulan (PUL) was found to stimulate the growth of the probiotic strain Lactobacillus rhamnosus GG better than the other microbial polysaccharides gellan and xanthan. A BC/PUL biocomposite with prebiotic properties was obtained by cocultivation of Komagataeibacter xylinus and Aureobasidium pullulans, the BC and PUL producers respectively, on molasses medium. The inclusion of PUL in BC was proven gravimetrically, by scanning electron microscopy, and by Fourier-transform infrared spectroscopy. Cocultivation demonstrated a composite effect on the aggregation and binding of BC fibers, which led to a significant improvement in mechanical properties. The developed approach for “grafting” prebiotic activity onto BC allows the preparation of environmentally friendly composites of better quality.
We present a concise mini overview on the approaches to the disposal of nuclear waste currently used or deployed. The disposal of nuclear waste is the end point of nuclear waste management (NWM) activities and is the emplacement of waste in an appropriate facility without the intention to retrieve it. The IAEA has developed an internationally accepted classification scheme based on the end points of NWM, which is used as guidance. Retention times needed for safe isolation of waste radionuclides are estimated based on the radiotoxicity of nuclear waste. Disposal facilities usually rely on a multi-barrier defence system to isolate the waste from the biosphere, which comprises the natural geological barrier and the engineered barrier system. Disposal facilities could be of a trench type, vaults, tunnels, shafts, boreholes, or mined repositories. A graded approach relates the depth of the disposal facilities’ location with the level of hazard. Disposal practices demonstrate the reliability of nuclear waste disposal with minimal expected impacts on the environment and humans.
Acetoin and diacetyl have a major impact on the flavor of alcoholic beverages such as wine or beer. Therefore, their measurement is important during the fermentation process. Until now, gas chromatographic techniques have typically been applied; however, these require expensive laboratory equipment and trained staff, and do not allow for online monitoring. In this work, a capacitive electrolyte–insulator–semiconductor sensor modified with tobacco mosaic virus (TMV) particles as enzyme nanocarriers for the detection of acetoin and diacetyl is presented. The enzyme acetoin reductase from Alkalihalobacillus clausii DSM 8716ᵀ is immobilized via biotin–streptavidin affinity, binding to the surface of the TMV particles. The TMV-assisted biosensor is electrochemically characterized by means of leakage–current, capacitance–voltage, and constant capacitance measurements. In this paper, the novel biosensor is studied regarding its sensitivity and long-term stability in buffer solution. Moreover, the TMV-assisted capacitive field-effect sensor is applied for the detection of diacetyl for the first time. The measurement of acetoin and diacetyl with the same sensor setup is demonstrated. Finally, the successive detection of acetoin and diacetyl in buffer and in diluted beer is studied by tuning the sensitivity of the biosensor using the pH value of the measurement solution.
An improved and convenient ninhydrin assay for aminoacylase activity measurements was developed using the commercial EZ Nin™ reagent. Alternative reagents from the literature were also evaluated and compared. The addition of DMSO to the reagent enhanced the solubility of Ruhemann's purple (RP). Furthermore, we found that the use of a basic, aqueous buffer enhances the stability of RP. An acidic protocol for the quantification of lysine was developed by the addition of glacial acetic acid. The assay allows for parallel processing in a 96-well format with measurements in microtiter plates.
The subtilase family (S8), a member of the clan SB of serine proteases, is ubiquitous in all kingdoms of life and fulfils different physiological functions. Subtilases are divided into several groups, and subtilisins in particular are of interest as they are used in various industrial sectors. Therefore, we searched for new subtilisin sequences of the family Bacillaceae using a data mining approach. The obtained 1,400 sequences were phylogenetically classified in the context of the subtilase family. This required an updated comprehensive overview of the different groups within this family. To fill this gap, we conducted a phylogenetic survey of the S8 family with characterised holotypes derived from the MEROPS database. The analysis revealed the presence of eight previously uncharacterised groups and 13 subgroups within the S8 family. The sequences that emerged from the data mining with the set filter parameters were mainly assigned to the subtilisin subgroups of true subtilisins, high-alkaline subtilisins, and phylogenetically intermediate subtilisins, and they represent an excellent source of new subtilisin candidates.
In this study, an online multi-sensing platform was engineered to simultaneously evaluate various process parameters of food package sterilization using gaseous hydrogen peroxide (H₂O₂). The platform enabled the validation of critical aseptic parameters. In parallel, one series of microbiological count reduction tests was performed using highly resistant spores of B. atrophaeus DSM 675 to act as the reference method for sterility validation. By means of the multi-sensing platform together with microbiological tests, we examined sterilization process parameters to define the most effective conditions with regard to the highest spore kill rate necessary for aseptic packaging. As these parameters are mutually associated, a correlation between different factors was elaborated. The resulting correlation indicated the need for specific conditions regarding the applied H₂O₂ gas temperature, the gas flow and concentration, the relative humidity and the exposure time. Finally, the novel multi-sensing platform together with the mobile electronic readout setup allowed for the online and on-site monitoring of the sterilization process, selecting the best conditions for sterility and, at the same time, reducing the use of the time-consuming and costly microbiological tests that are currently used in the food package industry.
This paper presents a new SIMO radar system based on a harmonic radar (HR) stepped frequency continuous wave (SFCW) architecture. Simple tags that can be individually activated and deactivated electronically via a DC control voltage were developed and combined to form a multiple-output (MO) array. This HR transmits the illumination signal in the entire 2.45 GHz ISM band and receives at twice the stimulus frequency and bandwidth, centered around 4.9 GHz. This paper presents the development and basic theory of an HR system for the characterization of objects placed in the propagation path between the radar and the reflectors (similar to a free-space measurement with a network analyzer), as well as first measurements performed with the system. Further detailed measurement series will be made available to other researchers later on, to develop AI- and machine-learning-based signal processing routines or synthetic aperture radar algorithms for imaging, object recognition, and feature extraction. For this purpose, the necessary information is published in this paper. It is explained in detail why this SIMO-HR can be an attractive solution augmenting or replacing existing systems for radar measurements in production technology, for material-under-test measurements, and as a simplified MIMO system. The novel HR transfer function, which is a basis for researchers and developers of material characterization or imaging algorithms, is introduced and metrologically verified in a well-traceable coaxial setup.
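The SFCW principle underlying such a system can be illustrated by synthesizing the frequency-step samples for a single point target and transforming them to a range profile with an inverse DFT. This is a one-tone idealization that ignores the harmonic 2f reception, tag behavior, and all hardware effects; the frequency plan below merely echoes the 2.45 GHz ISM band and is otherwise illustrative.

```python
import cmath
import math

def sfcw_range_profile(f_start, f_step, n_steps, target_range, c=3e8):
    """Synthesize received SFCW samples for one point target and convert
    them to a range profile via an inverse DFT. The peak appears in bin
    m = R / dR with range-bin width dR = c / (2 * n_steps * f_step)."""
    tau = 2.0 * target_range / c  # round-trip delay
    samples = [cmath.exp(-2j * math.pi * (f_start + k * f_step) * tau)
               for k in range(n_steps)]
    profile = []
    for m in range(n_steps):
        acc = sum(s * cmath.exp(2j * math.pi * k * m / n_steps)
                  for k, s in enumerate(samples))
        profile.append(abs(acc) / n_steps)
    return profile
```

With 100 steps of 1 MHz (100 MHz bandwidth), the range-bin width is 1.5 m, so a target at 7.5 m lands in bin 5.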
An approach to automatically generate a dynamic energy simulation model in Modelica for a single existing building is presented. It aims at collecting data about the status quo in the preparation of energy retrofits with low effort and cost. The proposed method starts from a polygon model of the outer building envelope obtained from photogrammetrically generated point clouds. The open-source tools TEASER and AixLib are used for data enrichment and model generation. A case study was conducted on a single-family house. The resulting model can accurately reproduce the internal air temperatures during synthetic heat-up and cool-down. Modelled and measured whole-building heat transfer coefficients (HTC) agree within a 12% range. A sensitivity analysis emphasises the importance of accurate window characterisation and justifies the use of a very simplified interior geometry. Uncertainties arising from the use of archetype U-values are estimated by comparing different typologies, with best- and worst-case estimates showing differences in pre-retrofit heat demand of about ±20% from the average. However, as the assumptions made are permitted by some national standards, the method is already close to practical applicability and opens up a path to quickly estimate possible financial and energy savings after refurbishment.
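The whole-building heat transfer coefficient (HTC) compared above is, in its simplest quasi-steady form, the ratio of mean heating power to mean indoor-outdoor temperature difference. A minimal averaging-method sketch follows; this is a generic estimator, not necessarily the procedure used in the study.

```python
def heat_transfer_coefficient(power_W, t_indoor_C, t_outdoor_C):
    """Whole-building heat transfer coefficient (W/K), estimated as the
    ratio of mean heating power to mean indoor-outdoor temperature
    difference over a quasi-steady measurement period."""
    n = len(power_W)
    mean_p = sum(power_W) / n
    mean_dt = sum(ti - to for ti, to in zip(t_indoor_C, t_outdoor_C)) / n
    return mean_p / mean_dt
```

For example, a constant 2 kW heat input sustaining a 20 K temperature difference corresponds to an HTC of 100 W/K; solar and internal gains, which this simple ratio ignores, are the usual source of bias in practice.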
Magnetic immunoassays employing Frequency Mixing Magnetic Detection (FMMD) have recently become increasingly popular for quantitative detection of various analytes. Simultaneous analysis of a sample for two or more targets is desirable in order to reduce the sample amount, save consumables, and save time. We show that different types of magnetic beads can be distinguished according to their frequency mixing response to a two-frequency magnetic excitation at different static magnetic offset fields. We recorded the offset-field-dependent FMMD response of two different particle types at the frequencies ƒ₁ + n⋅ƒ₂, n = 1, 2, 3, 4, with ƒ₁ = 30.8 kHz and ƒ₂ = 63 Hz. Their signals were clearly distinguishable by the locations of the extremes and zeros of their responses. Binary mixtures of the two particle types were prepared with different mixing ratios. The mixture samples were analyzed by determining the linear combination of the two pure constituents that best resembled the measured signals of the mixtures. Using a quadratic programming algorithm, the mixing ratios could be determined with an accuracy of better than 14%. If each particle type is functionalized with a different antibody, multiplex detection of two different analytes becomes feasible.
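The unmixing step described above finds the linear combination of the two pure-bead reference signals that best matches a measured mixture. For two components whose fractions sum to one, this reduces to a one-parameter least-squares problem; the paper used a quadratic programming algorithm, and the brute-force grid search below is a simplified stand-in for illustration.

```python
def unmix_two(signal, ref_a, ref_b, steps=1000):
    """Find the fraction alpha in [0, 1] such that
    alpha*ref_a + (1-alpha)*ref_b best matches the measured signal
    in the least-squares sense (brute-force grid search)."""
    best_alpha, best_err = 0.0, float("inf")
    for i in range(steps + 1):
        alpha = i / steps
        err = sum((s - (alpha * a + (1.0 - alpha) * b)) ** 2
                  for s, a, b in zip(signal, ref_a, ref_b))
        if err < best_err:
            best_alpha, best_err = alpha, err
    return best_alpha
```

In the FMMD context, `signal`, `ref_a`, and `ref_b` would be the offset-field-dependent responses of the mixture and of the two pure particle types at the mixing frequencies.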
The on-chip integration of multiple biochemical sensors based on field-effect electrolyte-insulator-semiconductor capacitors (EISCAP) is challenging due to technological difficulties in the realization of electrically isolated EISCAPs on the same Si chip. In this work, we present a new, simple design for an array of on-chip integrated, individually electrically addressable EISCAPs with an additional control gate (CG-EISCAP). The CG enables addressable activation or deactivation of individual on-chip integrated CG-EISCAPs by simply switching the CG of each sensor electrically in various setups, and makes the new design capable of multianalyte detection without cross-talk effects between the sensors in the array. The newly designed CG-EISCAP chip was modelled in so-called floating/short-circuited and floating/capacitively-coupled setups, and the corresponding electrical equivalent circuits were developed. In addition, the capacitance-voltage curves of the CG-EISCAP chip in different setups were simulated and compared with those of a single EISCAP sensor. Moreover, the sensitivity of the CG-EISCAP chip to surface-potential changes induced by biochemical reactions was simulated, and the impact of different parameters, such as gate voltage, insulator thickness, and doping concentration in Si, on the sensitivity is discussed.
Humic substances (HS), as important environmental components, are essential to soil health and agricultural sustainability. The usage of low-rank coal (LRC) for energy generation has declined considerably due to the growing popularity of renewable energy sources and gas. However, its potential as a soil amendment aimed at maintaining soil quality and productivity deserves more recognition. LRC, a highly heterogeneous material in nature, contains large quantities of HS and may effectively help to restore the physicochemical, biological, and ecological functionality of soil. Multiple emerging studies support the view that LRC and its derivatives can positively impact the soil microclimate, nutrient status, and organic matter turnover. Moreover, the phytotoxic effects of some pollutants can be reduced by subsequent LRC application. The broad geographical availability, relatively low cost, and good technical applicability of LRC offer the advantage of easily fulfilling soil-amendment and conditioner requirements worldwide. This review analyzes and emphasizes the potential of LRC and its numerous forms/combinations for soil amelioration and crop production. A great benefit would be a systematic investment strategy implicating the safe utilization and long-term application of LRC for sustainable agricultural production.
In this study, the performance of an integrated body-imaging array for 7 T with 32 radiofrequency (RF) channels under consideration of local specific absorption rate (SAR), tissue temperature, and thermal dose limits was evaluated and the imaging performance was compared with a clinical 3 T body coil.
Thirty-two transmit elements were placed in three rings between the bore liner and the RF shield of the gradient coil. Slice-selective RF pulse optimizations for B1 shimming and spokes were performed for differently oriented slices in the body under realistic constraints for power and local SAR. To improve the B1+ homogeneity, safety assessments based on temperature and thermal dose were performed, potentially allowing higher input power for the pulse optimization than permissible under SAR limits alone.
The results showed that using two spokes, the 7 T array outperformed the 3 T birdcage in all the considered regions of interest. However, a significantly higher SAR or lower duty cycle at 7 T is necessary in some cases to achieve a B1+ homogeneity similar to that at 3 T. The homogeneity in coronal slices of up to 50 cm length can particularly benefit from the high RF shim performance provided by the 32 RF channels. The thermal dose approach increases the allowable input power and the corresponding local SAR, in one example up to 100 W/kg, without limiting the exposure time necessary for an MR examination.
In conclusion, the integrated antenna array at 7 T enables a clinical workflow for body imaging and comparable imaging performance to a conventional 3 T clinical body coil.
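The thermal-dose limits referred to above are commonly expressed as cumulative equivalent minutes at 43 °C (CEM43, the Sapareto-Dean formulation). The abstract does not spell out its implementation; the sketch below is the textbook form, with the usual piecewise base R.

```python
def cem43(temps_C, dt_min):
    """Cumulative equivalent minutes at 43 deg C:
    CEM43 = sum over samples of dt * R**(43 - T),
    with R = 0.5 for T >= 43 deg C and R = 0.25 below."""
    dose = 0.0
    for t in temps_C:
        r = 0.5 if t >= 43.0 else 0.25
        dose += dt_min * r ** (43.0 - t)
    return dose
```

Ten minutes at exactly 43 °C yields CEM43 = 10 min, while ten minutes at 44 °C counts double; below 43 °C the contribution decays rapidly, which is why moderate tissue heating permits the higher input powers discussed above.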
Objective
Hemodialysis patients show an approximately threefold higher prevalence of cognitive impairment compared to the age-matched general population. Impaired microcirculatory function is one of the assumed causes. Dynamic retinal vessel analysis is a quantitative method for measuring neurovascular coupling and microvascular endothelial function. We hypothesize that cognitive impairment is associated with altered microcirculation of retinal vessels.
Methods
152 chronic hemodialysis patients underwent cognitive testing using the Montreal Cognitive Assessment. Retinal microcirculation was assessed by Dynamic Retinal Vessel Analysis, which records the reaction of retinal vessels to a flicker light stimulus under standardized conditions.
Results
In both unadjusted and adjusted linear regression analyses, a significant association was obtained between the visuospatial/executive function domain score of the Montreal Cognitive Assessment and the maximum arteriolar dilation of retinal arterioles in response to the flicker light stimulation.
Conclusion
This is the first study determining retinal microvascular function as a surrogate for cerebral microvascular function and cognition in hemodialysis patients. The relationship between impairment in executive function and reduced arteriolar reaction to flicker light stimulation supports the involvement of cerebral small vessel disease as a contributing factor in the development of cognitive impairment in this patient population, and it might be a target for noninvasive disease monitoring and therapeutic intervention.
The coupling of ligand-stabilized gold nanoparticles with field-effect devices offers new possibilities for label-free biosensing. In this work, we study the immobilization of aminooctanethiol-stabilized gold nanoparticles (AuAOTs) on the silicon dioxide surface of a capacitive field-effect sensor. The terminal amino group of the AuAOT is well suited for the functionalization with biomolecules. The attachment of the positively-charged AuAOTs on a capacitive field-effect sensor was detected by direct electrical readout using capacitance-voltage and constant capacitance measurements. With a higher particle density on the sensor surface, the measured signal change was correspondingly more pronounced. The results demonstrate the ability of capacitive field-effect sensors for the non-destructive quantitative validation of nanoparticle immobilization. In addition, the electrostatic binding of the polyanion polystyrene sulfonate to the AuAOT-modified sensor surface was studied as a model system for the label-free detection of charged macromolecules. Most likely, this approach can be transferred to the label-free detection of other charged molecules such as enzymes or antibodies.
Carbon nanofiber nonwovens represent a powerful class of materials with prospective application in filtration technology or as electrodes with high surface area in batteries, fuel cells, and supercapacitors. While new precursor-to-carbon conversion processes have been explored to overcome productivity restrictions for carbon fiber tows, alternatives for the two-step thermal conversion of polyacrylonitrile precursors into carbon fiber nonwovens are absent. In this work, we develop a continuous roll-to-roll stabilization process using an atmospheric pressure microwave plasma jet. We explore the influence of various plasma-jet parameters on the morphology of the nonwoven and compare the stabilized nonwoven to thermally stabilized samples using scanning electron microscopy, differential scanning calorimetry, and infrared spectroscopy. We show that stabilization with a non-equilibrium plasma-jet can be twice as productive as the conventional thermal stabilization in a convection furnace, while producing electrodes of comparable electrochemical performance.
Frequency mixing magnetic detection (FMMD) has been explored for applications in magnetic biosensing, multiplex detection of magnetic nanoparticles (MNP), and the determination of the core size distribution of MNP samples. Such applications rely on a static offset magnetic field, which is traditionally generated with an electromagnet. Such a setup requires a current source as well as passive or active cooling, which limits the portability desired for point-of-care (POC) monitoring applications. In this work, a measurement head is introduced that utilizes two ring-shaped permanent magnets to generate the static offset magnetic field. A steel cylinder in the ring bores homogenizes the field. By varying the distance between the ring magnets and the thickness of the steel cylinder, the magnitude of the magnetic field at the sample position can be adjusted. Furthermore, the measurement setup is compared to the electromagnet offset module in terms of measured signals and temperature behavior.
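As a rough first-order check of the offset field produced by such a ring-magnet pair, each magnet can be approximated by an equivalent current loop and the on-axis contributions superposed. This is a coarse idealization: the radius, spacing, and equivalent current below are made-up placeholders, and the field-homogenizing steel cylinder is not modeled.

```python
import math

def axial_field_pair(radius, separation, current, z, mu0=4e-7 * math.pi):
    """On-axis flux density (T) of two coaxial current loops (a common
    first-order stand-in for ring magnets), centred at +/- separation/2.
    Single-loop formula: B(z) = mu0*I*R^2 / (2*(R^2 + z^2)**1.5)."""
    def loop(dz):
        return mu0 * current * radius**2 / (2.0 * (radius**2 + dz**2) ** 1.5)
    return loop(z - separation / 2.0) + loop(z + separation / 2.0)
```

Varying `separation` in such a model mimics the adjustment of the offset-field magnitude at the sample position described above; the field is symmetric about the midpoint and decays away from the pair.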
Nanoparticles are recognized as highly attractive tunable materials for designing field-effect biosensors with enhanced performance. In this work, we present a theoretical model for electrolyte-insulator-semiconductor capacitors (EISCAP) decorated with ligand-stabilized charged gold nanoparticles. The charged AuNPs are taken into account as additional, nanometer-sized local gates. The capacitance-voltage (C–V) curves and constant-capacitance (ConCap) signals of the AuNP-decorated EISCAPs have been simulated. The impact of the AuNP coverage on the shift of the C–V curves and the ConCap signals was also studied experimentally on Al–p-Si–SiO₂ EISCAPs decorated with positively charged aminooctanethiol-capped AuNPs. In addition, the surface of the EISCAPs, modified with AuNPs, was characterized by scanning electron microscopy for different immobilization times of the nanoparticles.
This work introduces a novel method for the detection of H₂O₂ vapor/aerosol at low concentrations, which is mainly applied in the sterilization of equipment in the medical industry. Interdigitated electrode (IDE) structures have been fabricated by means of microfabrication techniques. A differential setup of IDEs was prepared, containing an active sensor element (active IDE) and a passive sensor element (passive IDE), where the former was immobilized with an enzymatic membrane of horseradish peroxidase that is selective towards H₂O₂. Changes in the IDEs' capacitance values (active sensor element versus passive sensor element) under H₂O₂ vapor/aerosol atmosphere proved the detection in the concentration range up to 630 ppm with a fast response time (<60 s). The influence of relative humidity was also tested with regard to the sensor signal, showing no cross-sensitivity. The repeatability assessment of the IDE biosensors confirmed their stable capacitive signal in eight subsequent cycles of exposure to H₂O₂ vapor/aerosol. Room-temperature detection of H₂O₂ vapor/aerosol with such miniaturized biosensors will allow a future three-dimensional, flexible mapping of aseptic chambers and help to evaluate sterilization assurance in the medical industry.
Atmospheric pressure plasma-jet treatment of PAN-nonwovens—carbonization of nanofiber electrodes
(2022)
Carbon nanofibers are produced from dielectric polymer precursors such as polyacrylonitrile (PAN). Carbonized nanofiber nonwovens show high surface area and good electrical conductivity, rendering these fiber materials interesting for application as electrodes in batteries, fuel cells, and supercapacitors. However, thermal processing is slow and costly, which is why new processing techniques have been explored for carbon fiber tows. Alternatives for the conversion of PAN-precursors into carbon fiber nonwovens are scarce. Here, we utilize an atmospheric pressure plasma jet to conduct carbonization of stabilized PAN nanofiber nonwovens. We explore the influence of various processing parameters on the conductivity and degree of carbonization of the converted nanofiber material. The precursor fibers are converted by plasma-jet treatment to carbon fiber nonwovens within seconds, by which they develop a rough surface making subsequent surface activation processes obsolete. The resulting carbon nanofiber nonwovens are applied as supercapacitor electrodes and examined by cyclic voltammetry and impedance spectroscopy. Nonwovens that are carbonized within 60 s show capacitances of up to 5 F g⁻¹.
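The capacitances of up to 5 F g⁻¹ quoted above are typically obtained by integrating a cyclic voltammogram. The sketch below shows the standard evaluation, C = ∫|I| dV / (2 · ΔV · ν · m); it is the generic textbook formula, not necessarily the authors' exact data processing.

```python
def gravimetric_capacitance(currents_A, voltages_V, scan_rate_V_s, mass_g):
    """Specific capacitance (F/g) from one full CV cycle:
    C = integral of |I| dV / (2 * voltage_window * scan_rate * mass)."""
    # trapezoidal integration of |I| over the voltage sweep
    q = 0.0
    for i in range(1, len(voltages_V)):
        dv = abs(voltages_V[i] - voltages_V[i - 1])
        q += 0.5 * (abs(currents_A[i]) + abs(currents_A[i - 1])) * dv
    window = max(voltages_V) - min(voltages_V)
    return q / (2.0 * window * scan_rate_V_s * mass_g)
```

For an ideal capacitor the voltammogram is rectangular; for example, a 2 mg electrode drawing ±1 mA over a 1 V window at 0.1 V/s evaluates to 5 F/g with this formula.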
It was generally believed that coal sources are not favorable as live-in habitats for microorganisms due to their recalcitrant chemical nature and negligible decomposition. However, accumulating evidence has revealed the presence of diverse microbial groups in coal environments and their significant metabolic role in coal biogeochemical dynamics and ecosystem functioning. The high oxygen content, organic fractions, and lignin-like structures of lower-rank coals may provide effective means for microbial attack, still representing a greatly unexplored frontier in microbiology. Coal degradation/conversion technology by native bacterial and fungal species has great potential in agricultural development, chemical industry production, and environmental rehabilitation. Furthermore, native microalgal species can offer a sustainable energy source and an excellent bioremediation strategy applicable to coal spill/seam waters. Additionally, the measures of the fate of the microbial community would serve as an indicator of restoration progress on post-coal-mining sites. This review puts forward a comprehensive vision of coal biodegradation and bioprocessing by microorganisms native to coal environments for determining their biotechnological potential and possible applications.
Benchmarking of various LiDAR sensors for use in self-driving vehicles in real-world environments
(2022)
In this paper, we report our benchmark results for the LiDAR sensors Livox Horizon, Robosense M1, Blickfeld Cube, Blickfeld Cube Range, Velodyne Velarray H800, and Innoviz Pro. The idea was to test the sensors in typical scenarios that were defined with real-world use cases in mind, in order to find a sensor that meets the requirements of self-driving vehicles. For this, we defined static and dynamic benchmark scenarios. In the static scenarios, neither the LiDAR nor the detection target moves during the measurement. In the dynamic scenarios, the LiDAR sensor was mounted on a vehicle that was driving toward the detection target. We tested all of the mentioned LiDAR sensors in both scenario types, show the results regarding the detection accuracy of the targets, and discuss their usefulness for deployment in self-driving cars.
Aspergillus oryzae is an industrially relevant organism for the secretory production of heterologous enzymes, especially amylases. The activities of potential heterologous amylases, however, cannot be quantified directly from the supernatant due to the high background activity of native α-amylase. This activity is caused by the gene products of amyA, amyB, and amyC. In this study, an in vitro CRISPR/Cas9 system was established in A. oryzae to delete these genes simultaneously. First, pyrG of A. oryzae NSAR1 was mutated by exploiting NHEJ to generate a counter-selection marker. Next, all amylase genes were deleted simultaneously by co-transforming a repair template carrying pyrG of Aspergillus nidulans and flanking sequences of the amylase gene loci. The rate of obtained triple knock-outs was 47%. We showed that triple knock-outs do not retain any amylase activity in the supernatant. The established in vitro CRISPR/Cas9 system was then used to achieve sequence-specific knock-in of target genes. The system was intended to incorporate a single copy of the gene of interest into the desired host for the development of screening methods. Therefore, an integration cassette for the heterologous Fpi amylase was designed to specifically target the amyB locus. The site-specific integration rate of the plasmid was 78%, with additional integrations occurring only in exceptional cases. Integration frequency was assessed via qPCR and correlated directly with heterologous amylase activity. Hence, we could compare the efficiency of two different signal peptides. In summary, we present a strategy to exploit CRISPR/Cas9 for gene mutation, multiplex knock-out, and the targeted knock-in of an expression cassette in A. oryzae. Our system provides straightforward strain engineering and paves the way for the development of fungal screening systems.
Ambitious climate targets affect the competitiveness of industries in the international market. To prevent such industries from moving to other countries in the wake of increased climate protection efforts, cost adjustments may become necessary. Their design requires knowledge of country-specific production costs. Here, we present country-specific cost figures for different production routes of steel, paying particular attention to transportation costs. The data can be used in floor price models aiming to assess the competitiveness of different steel production routes in different countries (Rübbelke, 2022).
In proton therapy, the dose from secondary neutrons to the patient can contribute to side effects and the creation of secondary cancer. A simple and fast detection system to distinguish between dose from protons and neutrons, both in pretreatment verification as well as potentially in vivo monitoring, is needed to minimize dose from secondary neutrons. Two 3 mm long, 1 mm diameter organic scintillators were tested for candidacy to be used in a proton–neutron discrimination detector. The SCSF-3HF (1500) scintillating fibre (Kuraray Co. Chiyoda-ku, Tokyo, Japan) and EJ-260 plastic scintillator (Eljen Technology, Sweetwater, TX, USA) were irradiated at the TRIUMF Neutron Facility and the Proton Therapy Research Centre. In the proton beam, we compared the raw Bragg peak and spread-out Bragg peak response to the industry standard Markus chamber detector. Both scintillator sensors exhibited quenching at high LET in the Bragg peak, presenting a peak-to-entrance ratio of 2.59 for the EJ-260 and 2.63 for the SCSF-3HF fibre, compared to 3.70 for the Markus chamber. The SCSF-3HF sensor demonstrated 1.3 times the sensitivity to protons and 3 times the sensitivity to neutrons as compared to the EJ-260 sensor. Combined with our equations relating neutron and proton contributions to dose during proton irradiations, and the application of Birks' quenching correction, these fibres are valid candidates for inexpensive and replicable proton–neutron discrimination detectors.
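The quenching behaviour described in this abstract follows Birks' law. As an illustrative sketch (not the authors' analysis code; S and kB are placeholder values in arbitrary units), the quenched light yield and its correction can be written as:

```python
def birks_light_yield(dE_dx, S=1.0, kB=0.01):
    """Birks' law: scintillation light per unit path length,
    dL/dx = S * (dE/dx) / (1 + kB * dE/dx).
    High stopping power (high LET) suppresses the response.
    S and kB here are illustrative placeholders, not fitted values."""
    return S * dE_dx / (1.0 + kB * dE_dx)

def birks_correction(light, dE_dx, kB=0.01):
    """Undo the quenching when the stopping power is known (or simulated)."""
    return light * (1.0 + kB * dE_dx)
```

The round trip illustrates why a scintillator's peak-to-entrance ratio (about 2.6 in the abstract) falls short of the ionization-chamber value of 3.70: the large dE/dx at the Bragg peak quenches the light output, and the correction restores the unquenched signal.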
Optical Fibers as Dosimeter Detectors for Mixed Proton/Neutron Fields - A Biological Dosimeter
(2023)
In recent years, proton therapy has gained importance as a cancer treatment modality due to its conformality with the tumor and the sparing of healthy tissue. However, in the interaction of the protons with the beam line elements and patient tissues, potentially harmful secondary neutrons are always generated. To ensure that this neutron dose is as low as possible, treatment plans could be created to also account for and minimize the neutron dose. To monitor such a treatment plan, a compact, easy to use, and inexpensive dosimeter must be developed that not only measures the physical dose, but which can also distinguish between proton and neutron contributions. To that end, plastic optical fibers with scintillation materials (Gd₂O₂S:Tb, Gd₂O₂S:Eu, and YVO₄:Eu) were irradiated with protons and neutrons. It was confirmed that sensors with different scintillation materials have different sensitivities to protons and neutrons. A combination of these three scintillators can be used to build a detector array to create a biological dosimeter.
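With sensors of different proton and neutron sensitivities, the two dose components can be unfolded from the measured signals by solving a small linear system. A minimal sketch for two sensors (the sensitivity values in the test are hypothetical round numbers loosely inspired by the relative sensitivities quoted above, not calibration data):

```python
def unfold_doses(signals, sens):
    """Solve the 2x2 system  signal_i = sens[i][0]*D_p + sens[i][1]*D_n
    for the proton dose D_p and the neutron dose D_n via Cramer's rule.
    sens rows correspond to sensors, columns to (proton, neutron) sensitivity."""
    (a, b), (c, d) = sens
    det = a * d - b * c
    if abs(det) < 1e-12:
        raise ValueError("sensor responses are too similar to separate the doses")
    s1, s2 = signals
    d_p = (s1 * d - b * s2) / det
    d_n = (a * s2 - c * s1) / det
    return d_p, d_n
```

With three scintillation materials, as in the abstract, the same idea becomes an overdetermined 3x2 least-squares problem, which adds redundancy against noise.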
Influence of slab deflection on the out-of-plane capacity of unreinforced masonry partition walls
(2023)
Severe damage to non-structural elements has been observed in previous earthquakes, causing high economic losses and posing a threat to life. Masonry partition walls are among the most commonly used non-structural elements. Therefore, their behaviour under earthquake loading in the out-of-plane (OOP) direction has been investigated by several researchers in recent years. However, none of the existing experimental campaigns or analytical approaches considers the influence of prior slab deflection on the OOP response of partition walls. Moreover, none of the existing construction techniques for the connection of partition walls to the surrounding reinforced concrete (RC) structure has been investigated for combined slab deflection and OOP loading. However, the inevitable time-dependent behaviour of RC slabs leads to high final slab deflections, which can further influence the boundary conditions of partition walls. Therefore, a comprehensive study on the influence of slab deflection on the OOP capacity of masonry partitions is conducted. In the first step, experimental tests are carried out. The results of these tests are then used to calibrate the numerical model employed for a parametric study. Based on the results, the behaviour under combined loading for different construction techniques is explained. The results show that slab deflection leads either to severe damage or to a high reduction of OOP capacity. Existing practical solutions do not account for these effects. In this contribution, recommendations to overcome the problems of combined slab deflection and OOP loading on masonry partition walls are given. A possible interaction of in-plane (IP) loading with the combined slab deflection and OOP loading on partition walls is not investigated in this study.
Recent earthquakes such as the 2012 Emilia earthquake sequence showed that recently built unreinforced masonry (URM) buildings behaved much better than expected and sustained, despite maximum PGA values ranging between 0.20 and 0.30 g, either minor damage or structural damage that is deemed repairable. Especially low-rise residential and commercial masonry buildings with a code-conforming seismic design and detailing behaved in general very well, without substantial damage. The low damage grades of modern masonry buildings that were observed during this earthquake series highlighted again that codified design procedures based on linear analysis can be rather conservative. Although advances in simulation tools make nonlinear calculation methods more readily accessible to designers, linear analyses will still be the standard design method for years to come. The present paper aims to improve the linear seismic design method by providing a proper definition of the q-factor of URM buildings. These q-factors are derived for low-rise URM buildings with rigid diaphragms, which represent recent construction practice in low to moderate seismic areas of Italy and Germany. The behaviour factor components for deformation and energy dissipation capacity and for overstrength due to the redistribution of forces are derived by means of pushover analyses. Furthermore, considerations on the behaviour factor component due to other sources of overstrength in masonry buildings are presented. As a result of the investigations, rationally based values of the behaviour factor q to be used in linear analyses in the range of 2.0–3.0 are proposed.
Because of their simple construction process, high energy efficiency, significant fire resistance and excellent sound insulation, masonry infilled reinforced concrete (RC) frame structures are very popular in most countries of the world, including seismically active areas. However, many RC frame structures with masonry infills were seriously damaged during earthquake events, as traditional infills are generally constructed in direct contact with the RC frame, which brings undesirable infill/frame interaction. This interaction leads to the activation of the equivalent diagonal strut in the infill panel due to the RC frame deformation and, combined with seismically induced loads perpendicular to the infill panel, often causes total collapse of the masonry infills and heavy damage to the RC frames. This fact was the motivation for developing different approaches to improving the behaviour of masonry infills, among which infill isolation (decoupling) from the frame has been studied intensively in the last decade. In-plane isolation of the infill wall reduces infill activation, but creates the need for additional measures to restrain out-of-plane movements. This can be provided by installing steel anchors, as proposed by some researchers. Within the framework of the European research project INSYSME (Innovative Systems for Earthquake Resistant Masonry Enclosures in Reinforced Concrete Buildings), a system based on the use of elastomers for in-plane decoupling and steel anchors for out-of-plane restraint was developed. This constructive solution was tested and investigated in depth in an experimental campaign in which traditional and decoupled masonry infilled RC frames with anchors were subjected to separate and combined in-plane and out-of-plane loading. Based on a detailed evaluation and comparison of the test results, the performance and effectiveness of the developed system are illustrated.
Monte Carlo Tree Search (MCTS) is a search technique that in the last decade emerged as a major breakthrough for Artificial Intelligence applications in board and video games. In 2016, AlphaGo, an MCTS-based software agent, outperformed the human world champion of the board game Go. This game had long been considered almost infeasible for machines, due to its immense search space and the need for a long-term strategy. Since this historic success, MCTS has been considered an effective new approach for many other scientific and technical problems. Interestingly, civil structural engineering, as a discipline, offers many tasks whose solution may benefit from intelligent search and in particular from adopting MCTS as a search tool. In this work, we show how MCTS can be adapted to search for suitable solutions of a structural engineering design problem. The problem consists of choosing the load-bearing elements in a reference reinforced concrete structure so as to achieve a set of specific dynamic characteristics. In the paper, we report the results obtained by applying both a plain and a hybrid version of single-agent MCTS. The hybrid approach consists of an integration of MCTS and a classic Genetic Algorithm (GA), the latter also serving as a term of comparison for the results. The study's outcomes may open new perspectives for the adoption of MCTS as a design tool for civil engineers.
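A minimal single-agent MCTS can be sketched as follows, here on a toy "design" task (pick four discrete element sizes so that an aggregate characteristic matches a target) rather than the paper's reinforced-concrete problem; all names, actions and numbers are illustrative:

```python
import math
import random

class Node:
    def __init__(self, state, parent=None):
        self.state = state          # tuple of design choices made so far
        self.parent = parent
        self.children = {}          # action -> Node
        self.visits = 0
        self.value = 0.0            # accumulated rollout reward

ACTIONS = (1, 2, 3)                 # hypothetical discrete element sizes
DEPTH = 4                           # number of design decisions
TARGET = 9                          # hypothetical target characteristic

def reward(state):
    # Closeness of the finished design to the target (1.0 = perfect match).
    return 1.0 / (1.0 + abs(sum(state) - TARGET))

def ucb1(child, parent_visits, c=1.4):
    # Exploitation (mean reward) plus exploration bonus.
    return child.value / child.visits + c * math.sqrt(math.log(parent_visits) / child.visits)

def mcts(iterations=2000, rng=random.Random(0)):
    root = Node(())
    for _ in range(iterations):
        node = root
        # 1) Selection: descend through fully expanded nodes via UCB1.
        while len(node.state) < DEPTH and len(node.children) == len(ACTIONS):
            node = max(node.children.values(), key=lambda ch: ucb1(ch, node.visits))
        # 2) Expansion: add one untried child, if the node is not terminal.
        if len(node.state) < DEPTH:
            action = rng.choice([a for a in ACTIONS if a not in node.children])
            node.children[action] = Node(node.state + (action,), node)
            node = node.children[action]
        # 3) Rollout: complete the design at random and evaluate it.
        state = node.state
        while len(state) < DEPTH:
            state = state + (rng.choice(ACTIONS),)
        r = reward(state)
        # 4) Backpropagation.
        while node is not None:
            node.visits += 1
            node.value += r
            node = node.parent
    # Greedy extraction: follow the most-visited child at every level.
    node, best = root, ()
    while node.children:
        action, node = max(node.children.items(), key=lambda kv: kv[1].visits)
        best = best + (action,)
    return best
```

The hybrid variant in the paper additionally feeds promising MCTS states into a Genetic Algorithm; the skeleton above corresponds only to the plain single-agent search.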
Past earthquakes demonstrated the high vulnerability of industrial facilities equipped with complex process technologies, leading to serious damage of process equipment and the multiple, simultaneous release of hazardous substances. Nonetheless, current standards for the seismic design of industrial facilities are considered inadequate to guarantee proper safety conditions against exceptional events entailing loss of containment and the related consequences. On these premises, the SPIF project (Seismic Performance of Multi-Component Systems in Special Risk Industrial Facilities) was proposed within the framework of the European H2020 SERA funding scheme. In detail, the objective of the SPIF project is the investigation of the seismic behaviour of a representative industrial multi-storey frame structure equipped with complex process components by means of shaking table tests. Along this vein and from a performance-based design perspective, the issues investigated in depth are the interaction between the primary moment resisting frame (MRF) steel structure and the secondary process components, which influences the performance of the whole system, and a proper check of floor spectra predictions. The evaluation of the experimental data clearly shows a favourable performance of the MRF structure, some weaknesses of local details due to the interaction between floor crossbeams and process components and, finally, the overconservatism of current design standards with respect to floor spectra predictions.
In a special paired sample case, Hotelling’s T² test based on the differences of the paired random vectors is the likelihood ratio test for testing the hypothesis that the paired random vectors have the same mean; with respect to a special group of affine linear transformations it is the uniformly most powerful invariant test for the general alternative of a difference in mean. We present an elementary straightforward proof of this result. The likelihood ratio test for testing the hypothesis that the covariance structure is of the assumed special form is derived and discussed. Applications to real data are given.
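For the bivariate case, the paired Hotelling's T² statistic on the differences can be sketched in plain Python with an explicit 2x2 inverse (an illustration of the classical statistic, not the authors' code):

```python
def hotelling_t2_paired(x, y):
    """Paired Hotelling's T^2 on the differences d_i = x_i - y_i (bivariate):
    T^2 = n * dbar' S^{-1} dbar, with S the sample covariance of the d_i.
    Under H0, (n - 2) / (2 * (n - 1)) * T^2 ~ F(2, n - 2)."""
    n = len(x)
    d = [(xa - ya, xb - yb) for (xa, xb), (ya, yb) in zip(x, y)]
    m1 = sum(v for v, _ in d) / n
    m2 = sum(w for _, w in d) / n
    s11 = sum((v - m1) ** 2 for v, _ in d) / (n - 1)
    s22 = sum((w - m2) ** 2 for _, w in d) / (n - 1)
    s12 = sum((v - m1) * (w - m2) for v, w in d) / (n - 1)
    det = s11 * s22 - s12 ** 2
    # Quadratic form dbar' S^{-1} dbar written out for the 2x2 inverse.
    t2 = n * (m1 ** 2 * s22 - 2.0 * m1 * m2 * s12 + m2 ** 2 * s11) / det
    f_stat = (n - 2) / (2.0 * (n - 1)) * t2
    return t2, f_stat
```

Reducing the paired vectors to their differences is exactly what makes the likelihood ratio test coincide with the one-sample T² test discussed in the abstract.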
Hotelling’s T² tests in paired and independent survey samples are compared using the traditional asymptotic efficiency concepts of Hodges–Lehmann, Bahadur and Pitman, as well as through criteria based on the volumes of corresponding confidence regions. Conditions characterizing the superiority of a procedure are given in terms of population canonical correlation type coefficients. Statistical tests for checking these conditions are developed. Test statistics based on the eigenvalues of a symmetrized sample cross-covariance matrix are suggested, as well as test statistics based on sample canonical correlation type coefficients.
The paper deals with an asymptotic relative efficiency concept for confidence regions of multidimensional parameters that is based on the expected volumes of the confidence regions. Under standard conditions the asymptotic relative efficiencies of confidence regions are seen to be certain powers of the ratio of the limits of the expected volumes. These limits are explicitly derived for confidence regions associated with certain plugin estimators, likelihood ratio tests and Wald tests. Under regularity conditions, the asymptotic relative efficiency of each of these procedures with respect to each one of its competitors is equal to 1. The results are applied to multivariate normal distributions and multinomial distributions in a fairly general setting.
The paper deals with the asymptotic behaviour of estimators, statistical tests and confidence intervals for L²-distances to uniformity based on the empirical distribution function, the integrated empirical distribution function and the integrated empirical survival function. Approximations of power functions, confidence intervals for the L²-distances and statistical neighbourhood-of-uniformity validation tests are obtained as main applications. The finite sample behaviour of the procedures is illustrated by a simulation study.
A nonparametric goodness-of-fit test for random variables with values in a separable Hilbert space is investigated. To verify the null hypothesis that the data come from a specific distribution, an integral type test based on a Cramér-von-Mises statistic is suggested. The convergence in distribution of the test statistic under the null hypothesis is proved and the test's consistency is concluded. Moreover, properties under local alternatives are discussed. Applications are given for data of huge but finite dimension and for functional data in infinite dimensional spaces. A general approach enables the treatment of incomplete data. In simulation studies the test competes with alternative proposals.
Given k samples X₁,₁,…,X₁,ₙ₁,…,Xₖ,₁,…,Xₖ,ₙₖ with different sample sizes n₁,…,nₖ and unknown underlying distribution functions F₁,…,Fₖ as observations, together with k families of distribution functions {G₁(⋅,ϑ);ϑ∈Θ},…,{Gₖ(⋅,ϑ);ϑ∈Θ}, each indexed by elements ϑ from the same parameter set Θ, we consider the new goodness-of-fit problem of whether or not (F₁,…,Fₖ) belongs to the parametric family {(G₁(⋅,ϑ),…,Gₖ(⋅,ϑ));ϑ∈Θ}. New test statistics are presented and a parametric bootstrap procedure for the approximation of the unknown null distributions is discussed. Under regularity assumptions, it is proved that the approximation works asymptotically, and the limiting distributions of the test statistics under the null hypothesis are determined. Simulation studies investigate the quality of the new approach for small and moderate sample sizes. Applications to real data sets illustrate how the idea can be used for verifying model assumptions.
Let X₁,…,Xₙ be independent and identically distributed random variables with distribution F. Assuming that there are measurable functions f:R²→R and g:R²→R characterizing a family F of distributions on the Borel sets of R in the way that the random variables f(X₁,X₂),g(X₁,X₂) are independent, if and only if F∈F, we propose to treat the testing problem H:F∈F,K:F∉F by applying a consistent nonparametric independence test to the bivariate sample variables (f(Xᵢ,Xⱼ),g(Xᵢ,Xⱼ)),1⩽i,j⩽n,i≠j. A parametric bootstrap procedure needed to get critical values is shown to work. The consistency of the test is discussed. The power performance of the procedure is compared with that of the classical tests of Kolmogorov–Smirnov and Cramér–von Mises in the special cases where F is the family of gamma distributions or the family of inverse Gaussian distributions.
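The parametric bootstrap used in such tests follows a generic recipe: fit the model, simulate from the fitted model, refit on each simulated sample, recompute the statistic, and take a quantile. A sketch with an exponential null model and a Kolmogorov–Smirnov distance as a stand-in statistic (the concrete family and statistic here are illustrative, not the one characterized in the abstract):

```python
import math
import random

def ks_to_exponential(sample, rate):
    """Kolmogorov-Smirnov distance between the edf and the Exp(rate) cdf."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        cdf = 1.0 - math.exp(-rate * x)
        d = max(d, abs((i + 1) / n - cdf), abs(i / n - cdf))
    return d

def bootstrap_critical_value(sample, num_boot=500, alpha=0.05, seed=1):
    """Parametric bootstrap: simulate from the fitted model, re-estimate the
    parameter on every bootstrap sample (this re-estimation is essential),
    and return the (1 - alpha) quantile of the bootstrap statistics."""
    rng = random.Random(seed)
    n = len(sample)
    rate_hat = n / sum(sample)              # MLE of the exponential rate
    stats = []
    for _ in range(num_boot):
        boot = [rng.expovariate(rate_hat) for _ in range(n)]
        stats.append(ks_to_exponential(boot, n / sum(boot)))
    stats.sort()
    return stats[int((1 - alpha) * num_boot) - 1]
```

The test rejects when the observed statistic exceeds the bootstrap critical value; re-estimating the parameter inside the loop is what makes the critical value account for the estimation step.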
The Rothman–Woodroofe symmetry test statistic is revisited on the basis of independent but not necessarily identically distributed random variables. Distribution-freeness is obtained if the underlying distributions are all symmetric and continuous. The results are applied to testing symmetry in a meta-analysis random effects model. The consistency of the procedure in this situation is discussed as well. A comparison with an alternative proposal from the literature is conducted via simulations. Real data are analyzed to demonstrate how the new approach works in practice.
In the context of the Solvency II directive, the operation of an internal risk model is a possible way for risk assessment and for the determination of the solvency capital requirement of an insurance company in the European Union. A Monte Carlo procedure is customary to generate a model output. To be compliant with the directive, validation of the internal risk model is conducted on the basis of the model output. For this purpose, we suggest a new test for checking whether there is a significant change in the modeled solvency capital requirement. Asymptotic properties of the test statistic are investigated and a bootstrap approximation is justified. A simulation study investigates the performance of the test in the finite sample case and confirms the theoretical results. The internal risk model and the application of the test is illustrated in a simplified example. The method has more general usage for inference of a broad class of law-invariant and coherent risk measures on the basis of a paired sample.
The established Hoeffding–Blum–Kiefer–Rosenblatt independence test statistic is investigated for partly not identically distributed data. Surprisingly, it turns out that the statistic has the well-known distribution-free limiting null distribution of the classical criterion under standard regularity conditions. An application is testing goodness-of-fit for the regression function in a nonparametric random effects meta-regression model, where consistency is obtained as well. Simulations investigate size and power of the approach for small and moderate sample sizes. A real data example based on clinical trials illustrates how the test can be used in applications.
We discuss the testing problem of homogeneity of the marginal distributions of a continuous bivariate distribution based on a paired sample with possibly missing components (missing completely at random). Applying the well-known two-sample Cramér–von Mises distance to the remaining data, we determine the limiting null distribution of our test statistic in this situation. It is seen that a new resampling approach is appropriate for the approximation of the unknown null distribution. We prove that the resulting test asymptotically reaches the significance level and is consistent. Properties of the test under local alternatives are pointed out as well. Simulations investigate the quality of the approximation and the power of the new approach in the finite sample case. As an illustration, we apply the test to real data sets.
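The classical two-sample Cramér–von Mises distance applied to the complete observations can be sketched as follows (a plain implementation of the textbook statistic; the paper's resampling scheme for missing components is not reproduced here):

```python
import bisect

def cramer_von_mises_2s(x, y):
    """Two-sample Cramer-von Mises statistic
    T = n*m / (n+m)^2 * sum over the pooled sample z of (F_n(z) - G_m(z))^2,
    where F_n and G_m are the empirical distribution functions of x and y."""
    n, m = len(x), len(y)
    sx, sy = sorted(x), sorted(y)
    t = 0.0
    for z in sorted(x + y):
        fn = bisect.bisect_right(sx, z) / n
        gm = bisect.bisect_right(sy, z) / m
        t += (fn - gm) ** 2
    return n * m / (n + m) ** 2 * t
```

The statistic vanishes exactly when both empirical distribution functions coincide on the pooled sample, which is why it is a natural measure of marginal homogeneity.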
This paper considers a paired data framework and discusses the question of marginal homogeneity of bivariate high-dimensional or functional data. The related testing problem can be embedded in a more general setting for paired random variables taking values in a general Hilbert space. To address this problem, a Cramér–von Mises type test statistic is applied and a bootstrap procedure is suggested to obtain critical values and, finally, a consistent test. The desired properties of a bootstrap test are derived: asymptotic exactness under the null hypothesis and consistency under alternatives. Simulations show the quality of the test in the finite sample case. A possible application is the comparison of two possibly dependent stock market returns based on functional data. The approach is demonstrated using historical data for different stock market indices.
On the basis of independent and identically distributed bivariate random vectors, whose components are a categorical and a continuous variable, respectively, the related concomitants, also called induced order statistics, are considered. The main theoretical result is a functional central limit theorem for the empirical process of the concomitants in a triangular array setting. A natural application is hypothesis testing. An independence test and a two-sample test are investigated in detail. The fairly general setting enables limit results under local alternatives and for bootstrap samples. For the comparison with existing tests from the literature, simulation studies are conducted. The empirical results obtained confirm the theoretical findings.
On the basis of bivariate data, assumed to be observations of independent copies of a random vector (S,N), we consider testing the hypothesis that the distribution of (S,N) belongs to the parametric class of distributions that arise with the compound Poisson exponential model. Typically, this model is used in stochastic hydrology, with N as the number of raindays and S as the total rainfall amount during a certain time period, or in actuarial science, with N as the number of losses and S as the total loss expenditure during a certain time period. The compound Poisson exponential model is characterized by the fact that a specific transform associated with the distribution of (S,N) satisfies a certain differential equation. Mimicking the functional part of this equation by substituting the empirical counterparts of the transform, we obtain an expression; the weighted integral of its square is used as the test statistic. We deal with two variants of the latter, one of which is invariant under scale transformations of the S-part by fixed positive constants. Critical values are obtained by a parametric bootstrap procedure. The asymptotic behavior of the tests is discussed. A simulation study demonstrates the performance of the tests in the finite sample case. The procedure is applied to rainfall data and to an actuarial data set. A multivariate extension is also discussed.
FEM shakedown analysis of structures under random strength with chance constrained programming
(2022)
Direct methods, comprising limit and shakedown analysis, are a branch of computational mechanics. They play a significant role in mechanical and civil engineering design. The concept of direct methods aims to determine the ultimate load-carrying capacity of structures beyond the elastic range. In practical problems, the direct methods lead to nonlinear convex optimization problems with a large number of variables and constraints. If strength and loading are random quantities, the shakedown analysis can be formulated as a stochastic programming problem. In this paper, chance constrained programming is presented, an effective stochastic programming method for solving shakedown analysis problems with random strength. In this study, the loading is deterministic, and the strength is a normally or lognormally distributed variable.
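For a normally or lognormally distributed strength, the chance constraint has a well-known deterministic equivalent, which is what makes chance constrained programming tractable. A sketch (the parameter values in the test are illustrative):

```python
import math
from statistics import NormalDist

def admissible_load(mu, sigma, epsilon, lognormal=False):
    """Deterministic equivalent of the chance constraint
    P(R >= g) >= 1 - epsilon for the random strength R:
      normal R:     g <= mu - z * sigma
      lognormal R:  g <= exp(mu - z * sigma)   (mu, sigma on the log scale)
    with z = Phi^{-1}(1 - epsilon), the standard normal quantile."""
    z = NormalDist().inv_cdf(1 - epsilon)
    g = mu - z * sigma
    return math.exp(g) if lognormal else g
```

In the shakedown optimization, the random strength bound is simply replaced by this deterministic value, turning the stochastic program into an ordinary convex one.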
REM sleep without atonia (RSWA) is a key feature for the diagnosis of rapid eye movement (REM) sleep behaviour disorder (RBD). We introduce RBDtector, a novel open-source software to score RSWA according to established SINBAR visual scoring criteria. We assessed muscle activity of the mentalis, flexor digitorum superficialis (FDS), and anterior tibialis (AT) muscles. RSWA was scored manually as tonic, phasic, and any activity by human scorers as well as using RBDtector in 20 subjects. Subsequently, 174 subjects (72 without RBD and 102 with RBD) were analysed with RBDtector to show the algorithm’s applicability. We additionally compared RBDtector estimates to a previously published dataset. RBDtector showed robust conformity with human scorings. The highest congruency was achieved for phasic and any activity of the FDS. Combining mentalis any and FDS any, RBDtector identified RBD subjects with 100% specificity and 96% sensitivity applying a cut-off of 20.6%. Comparable performance was obtained without manual artefact removal. RBD subjects also showed muscle bouts of higher amplitude and longer duration. RBDtector provides estimates of tonic, phasic, and any activity comparable to human scorings. RBDtector, which is freely available, can help identify RBD subjects and provides reliable RSWA metrics.
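How RBDtector combines the two channels internally is not specified in the abstract; as a hypothetical sketch, the published 20.6% cutoff on a combined mentalis-any/FDS-any rate could be applied per 3-s mini-epoch like this:

```python
def classify_rbd(mentalis_any, fds_any, cutoff=20.6):
    """Apply the 20.6% cutoff from the abstract to a combined
    mentalis-any / FDS-any activity rate.  Inputs are booleans per 3-s
    mini-epoch; combining the channels with a per-epoch OR is an assumption
    of this sketch, not RBDtector's documented behaviour."""
    combined = [m or f for m, f in zip(mentalis_any, fds_any)]
    pct = 100.0 * sum(combined) / len(combined)
    return pct >= cutoff, pct
```

The returned percentage corresponds to the "any activity" metric; tonic and phasic rates would be computed analogously from their own per-epoch scorings.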
Purpose
In the determination of the measurement uncertainty, the GUM procedure requires the building of a measurement model that establishes a functional relationship between the measurand and all influencing quantities. Since the effort of modelling as well as of quantifying the measurement uncertainties depends on the number of influencing quantities considered, the aim of this study is to determine relevant influencing quantities and to remove irrelevant ones from the dataset.
Design/methodology/approach
In this work, it was investigated whether the effort of modelling for the determination of measurement uncertainty can be reduced by the use of feature selection (FS) methods. For this purpose, 9 different FS methods were tested on 16 artificial test datasets, whose properties (number of data points, number of features, complexity, features with low influence and redundant features) were varied via a design of experiments.
Findings
Based on a success metric, the stability, universality and complexity of the method, two FS methods could be identified that reliably identify relevant and irrelevant influencing quantities for a measurement model.
Originality/value
For the first time, FS methods were applied to datasets with properties of classical measurement processes. The simulation-based results serve as a basis for further research in the field of FS for measurement models. The identified algorithms will be applied to real measurement processes in the future.
The European Union's aim to become climate neutral by 2050 necessitates ambitious efforts to reduce carbon emissions. Large reductions can be attained particularly in energy-intensive sectors like iron and steel. In order to prevent the relocation of such industries outside the EU in the course of tightening environmental regulations, the establishment of a climate club jointly with other large emitters, or alternatively the unilateral implementation of an international cross-border carbon tax mechanism, have been proposed. This article focuses on the latter option, choosing the steel sector as an example. In particular, we investigate the financial conditions under which a European cross-border mechanism is capable of protecting hydrogen-based steel production routes employed in Europe against more polluting competition from abroad. By using a floor price model, we assess the competitiveness of different steel production routes in selected countries. We evaluate the climate friendliness of steel production on the basis of specific GHG emissions. In addition, we utilize an input-output price model. It enables us to assess the impacts of rising costs of steel production on commodities using steel as intermediates. Our results raise concerns that a cross-border tax mechanism will not suffice to bring about competitiveness of hydrogen-based steel production in Europe, because the cost tends to remain higher than the cost of steel production in, e.g., China. Steel is a classic example of a good used mainly as an intermediate for other products. Therefore, a cross-border tax mechanism for steel will increase the price of products produced in the EU that require steel as an input. This can in turn adversely affect the competitiveness of these sectors. Hence, the effects of higher steel costs on European exports should be borne in mind and could require the cross-border adjustment mechanism to also subsidize exports.
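The input-output price model mentioned in the abstract propagates a primary-cost increase, such as a carbon border charge on steel, through the technical-coefficient matrix. A two-sector sketch with an explicit 2x2 inverse (the coefficients in the test are illustrative, not the study's data):

```python
def price_impact(A, dv):
    """Leontief price model: with technical-coefficient matrix A (A[i][j] is
    the input of sector i per unit output of sector j) and a change dv in
    primary-input cost per unit output, prices change by
        dp = (I - A^T)^{-1} dv.
    Two-sector case with the 2x2 inverse written out."""
    (a11, a12), (a21, a22) = A
    # Build M = I - A^T for the 2x2 case.
    m11, m12 = 1.0 - a11, -a21
    m21, m22 = -a12, 1.0 - a22
    det = m11 * m22 - m12 * m21
    dp1 = (m22 * dv[0] - m12 * dv[1]) / det
    dp2 = (m11 * dv[1] - m21 * dv[0]) / det
    return dp1, dp2
```

With steel as sector 1 and a steel-using industry as sector 2, a cost increase on steel alone raises downstream prices in proportion to the steel input coefficient, which is the mechanism behind the export-competitiveness concern raised above.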
The potential of electronic markets in enabling innovative product bundles through flexible and sustainable partnerships is not yet fully exploited in the telecommunication industry. One reason is that bundling requires seamless de-assembling and re-assembling of business processes, whilst processes in telecommunication companies are often product-dependent and hard to virtualize. We propose a framework for the planning of the virtualization of processes, intended to assist the decision maker in prioritizing the processes to be virtualized: (a) we transfer the virtualization pre-requisites stated by the Process Virtualization Theory in the context of customer-oriented processes in the telecommunication industry and assess their importance in this context, (b) we derive IT-oriented requirements for the removal of virtualization barriers and highlight their demand on changes at different levels of the organization. We present a first evaluation of our approach in a case study and report on lessons learned and further steps to be performed.
As the potential of a next generation network (NGN) is recognised, telecommunication companies consider switching to it. Although the implementation of an NGN seems to be merely a modification of the network infrastructure, it may trigger or require changes in the whole company, because it builds upon the separation between service and transport, a flexible bundling of services to products and the streamlining of the IT infrastructure. We propose a holistic framework, structured into the layers ‘strategy’, ‘processes’ and ‘information systems’ and incorporate into each layer all concepts necessary for the implementation of an NGN, as well as the alignment of these concepts. As a first proof-of-concept for our framework we have performed a case study on the introduction of NGN in a large telecommunication company; we show that our framework captures all topics that are affected by an NGN implementation.
The molecular weight properties of lignins are one of the key elements that need to be analyzed for a successful industrial application of these promising biopolymers. In this study, the use of 1H NMR as well as diffusion-ordered spectroscopy (DOSY NMR), combined with multivariate regression methods, was investigated for the determination of the molecular weight (Mw and Mn) and the polydispersity of organosolv lignins (n = 53, Miscanthus x giganteus, Paulownia tomentosa, and Silphium perfoliatum). The suitability of the models was demonstrated by cross validation (CV) as well as by an independent validation set of samples from different biomass origins (beech wood and wheat straw). CV errors of ca. 7–9 and 14–16% were achieved for all parameters with the models from the 1H NMR spectra and the DOSY NMR data, respectively. The prediction errors for the validation samples were in a similar range for the partial least squares model from the 1H NMR data and for a multiple linear regression using the DOSY NMR data. The results indicate the usefulness of NMR measurements combined with multivariate regression methods as a potential alternative to more time-consuming methods such as gel permeation chromatography.
In this study, a recently proposed NMR standardization approach by 2H integral of deuterated solvent for quantitative multicomponent analysis of complex mixtures is presented. As a proof of principle, the existing NMR routine for the analysis of Aloe vera products was modified. Instead of using absolute integrals of targeted compounds and internal standard (nicotinamide) from 1H-NMR spectra, quantification was performed based on the ratio of a particular 1H-NMR compound integral and 2H-NMR signal of deuterated solvent D2O. Validation characteristics (linearity, repeatability, accuracy) were evaluated and the results showed that the method has the same precision as internal standardization in case of multicomponent screening. Moreover, a dehydration process by freeze drying is not necessary for the new routine. Now, our NMR profiling of A. vera products needs only limited sample preparation and data processing. The new standardization methodology provides an appealing alternative for multicomponent NMR screening. In general, this novel approach, using standardization by 2H integral, benefits from reduced sample preparation steps and uncertainties, and is recommended in different application areas (purity determination, forensics, pharmaceutical analysis, etc.).
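The per-nucleus integral ratio behind 2H standardization can be sketched as below; the calibration factor k absorbing the different receptivities of the two nuclei is a hypothetical placeholder, since the abstract does not give the exact working equation:

```python
def concentration_from_2h_reference(i_analyte_1h, n_h, i_solvent_2h, n_d,
                                    c_solvent, k=1.0):
    """Hedged sketch of 2H-referenced quantification: the per-nucleus 1H
    integral of the analyte is compared with the per-nucleus 2H integral of
    the deuterated solvent of known concentration c_solvent.  k is a
    hypothetical empirical factor (determined once with a standard of known
    content) absorbing instrument and receptivity differences."""
    return k * (i_analyte_1h / n_h) / (i_solvent_2h / n_d) * c_solvent
```

Because the solvent itself serves as the reference, no internal standard has to be weighed in, which is what removes the freeze-drying step from the Aloe vera routine.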
Infrared spectroscopy was investigated for its ability to determine various characteristics of powdered heparin (n = 115). The evaluation of heparin samples included several parameters such as purity grade, distributing company, animal source, as well as heparin species (i.e., Na-heparin, Ca-heparin, and heparinoids). Multivariate analysis using principal component analysis (PCA), soft independent modelling of class analogy (SIMCA), and partial least squares discriminant analysis (PLS-DA) was applied for the modelling of spectral data. Different pre-processing methods were applied to the IR spectral data; multiplicative scatter correction (MSC) was chosen as the most suitable.
The obtained results were confirmed by nuclear magnetic resonance (NMR) spectroscopy. The good predictive ability of this approach demonstrates the potential of IR spectroscopy and chemometrics for screening of heparin quality. This approach, however, is designed as a screening tool and is not intended as a replacement for the methods required by the USP and FDA.
Quantitative nuclear magnetic resonance (qNMR) is routinely performed by internal or external standardization. This manuscript describes a simple alternative to these common workflows that uses the NMR signal of another NMR-active nucleus of the calibration compound. For example, quantification of an arbitrary compound by NMR can be based on indirect concentration referencing that relies on a solvent having both 1H and 2H signals. To perform high-quality quantification, the deuteration level of the utilized deuterated solvent has to be estimated.
In this contribution, the new method was applied to the determination of deuteration levels in different deuterated solvents (MeOD, ACN, CDCl3, acetone, benzene, DMSO-d6). Isopropanol-d6, which contains a defined number of deuterons and protons, was used for standardization. Validation characteristics (precision, accuracy, robustness) were calculated, and the results showed that the method can be used in routine practice. The uncertainty budget was also evaluated. In general, this novel approach, using standardization by the 2H integral, benefits from reduced sample preparation steps and uncertainties, and can be applied in different application areas (purity determination, forensics, pharmaceutical analysis, etc.).
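The standardization principle behind these two abstracts — referencing a per-proton 1H analyte integral against the per-deuteron 2H solvent integral — amounts to simple ratio arithmetic. The sketch below uses invented numbers and an assumed, separately calibrated 1H/2H response factor `k`; it is an illustration of the idea, not the published procedure.

```python
# Illustrative arithmetic for 2H-integral standardization.
# All numbers are invented; the real method calibrates against the known
# deuteron content of the solvent and the spectrometer response.
k = 2.0e-3            # 1H/2H spectrometer response factor (assumed, calibrated once)
I_1h, n_h = 3.60, 3   # analyte 1H integral and number of contributing protons (CH3)
I_2h, n_d = 1.00, 2   # solvent 2H integral and deuterons per solvent molecule (D2O)
c_d = 55.0            # solvent concentration, mol/L (roughly that of water/D2O)

# per-nucleus normalization of both integrals, scaled by the solvent concentration
c_analyte = k * (I_1h / n_h) / (I_2h / n_d) * c_d
print(f"analyte concentration ≈ {c_analyte:.3f} mol/L")
```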
Heparin is a natural polysaccharide that plays an essential role in many biological processes. Alterations in its building blocks can modify the biological roles of commercial heparin products due to significant changes in the conformation of the polymer chain. The structural variability of heparin makes quality control difficult with various analytical methods, including infrared (IR) spectroscopy. In this paper, molecular modelling of heparin disaccharide subunits was performed using quantum chemistry. The structural and spectral parameters of these disaccharides were calculated using RHF/6-311G. In addition, over-sulphated chondroitin sulphate disaccharide was studied as one of the most widespread contaminants of heparin. The calculated IR spectra were analyzed with respect to specific structural parameters. The IR spectroscopic fingerprint was found to be sensitive to the substitution pattern of the disaccharide subunits. Vibrational assignments of the calculated spectra were correlated with experimental IR spectral bands of native heparin. Chemometrics was used to perform multivariate analysis of the simulated spectral data.
Lignin is a promising renewable biopolymer being investigated worldwide as an environmentally benign substitute for fossil-based aromatic compounds, e.g. for use as an excipient with antioxidant and antimicrobial properties in drug delivery or even as an active compound. For its successful implementation into process streams, a quick, easy, and reliable method is needed for its molecular weight determination. Here we present a method using 1H spectra of benchtop as well as conventional NMR systems, in combination with multivariate data analysis, to determine lignin’s molecular weight (Mw and Mn) and polydispersity index (PDI). A set of 36 organosolv lignin samples (from Miscanthus x giganteus, Paulownia tomentosa and Silphium perfoliatum) was used for calibration and cross validation, and 17 samples were used as an external validation set. Validation errors between 5.6% and 12.9% were achieved for all parameters on all NMR devices (43, 60, 500 and 600 MHz). Surprisingly, no significant difference in the performance of the benchtop and high-field devices was found. This facilitates the application of this method for determining lignin’s molecular weight in an industrial environment because of the low maintenance expenditure, small footprint, ruggedness, and low cost of permanent magnet benchtop NMR systems.
An NMR standardization approach that uses the 2H integral of the deuterated solvent for quantitative multinuclear analysis of pharmaceuticals is described. As a proof of principle, the existing NMR procedure for the analysis of heparin products according to the US Pharmacopeia monograph is extended to the determination of the Na+ and Cl- content in this matrix. Quantification is performed based on the ratio of a 23Na (35Cl) NMR integral and the 2H NMR signal of the deuterated solvent, D2O, acquired using the appropriate spectrometer hardware. As an alternative, the possibility of 133Cs standardization using the addition of a Cs2CO3 stock solution is shown. Validation characteristics (linearity, repeatability, sensitivity) are evaluated. A holistic NMR profiling of heparin products can now also be used for the quantitative determination of inorganic compounds in a single analytical run using a single sample. In general, the new standardization methodology provides an appealing alternative for the NMR screening of inorganic and organic components in pharmaceutical products.
Although several successful applications of benchtop nuclear magnetic resonance (NMR) spectroscopy in quantitative mixture analysis exist, the possibility of calibration transfer remains mostly unexplored, especially between high- and low-field NMR. This study investigates for the first time the calibration transfer of partial least squares regressions [weight average molecular weight (Mw) of lignin] between high-field (600 MHz) NMR and benchtop NMR devices (43 and 60 MHz). For the transfer, piecewise direct standardization, calibration transfer based on canonical correlation analysis, and transfer via the extreme learning machine auto-encoder method are employed. Despite the immense resolution difference between high-field and low-field NMR instruments, the results demonstrate that the calibration transfer from high- to low-field is feasible in the case of a physical property, namely, the molecular weight, achieving validation errors close to the original calibration (down to only 1.2 times higher root mean square errors). These results introduce new perspectives for applications of benchtop NMR, in which existing calibrations from expensive high-field instruments can be transferred to cheaper benchtop instruments to economize.
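The calibration transfer idea in this abstract can be illustrated with direct standardization (DS), the simplest relative of the piecewise direct standardization used in the study: a transfer matrix maps benchtop spectra into the high-field space so that an existing high-field calibration can be reused. The sketch below uses synthetic, noise-free rank-2 "spectra"; all names and dimensions are illustrative.

```python
# Direct standardization sketch: learn F such that X_lo @ F ≈ X_hi on a
# small transfer set measured on both instruments, then map new benchtop
# spectra into the high-field domain. Synthetic data throughout.
import numpy as np

rng = np.random.default_rng(1)
n_transfer, p_hi, p_lo = 15, 200, 60
c = rng.normal(size=(n_transfer, 2))       # shared "chemistry" of the transfer samples
B_hi = rng.normal(size=(2, p_hi))          # high-field spectral signatures
B_lo = rng.normal(size=(2, p_lo))          # benchtop spectral signatures
X_hi = c @ B_hi                            # transfer set on the high-field device
X_lo = c @ B_lo                            # same samples on the benchtop device

# transfer matrix F from the least-squares problem X_lo @ F ≈ X_hi
F, *_ = np.linalg.lstsq(X_lo, X_hi, rcond=None)

# a new benchtop spectrum, mapped into the high-field domain
c_new = rng.normal(size=(1, 2))
x_mapped = (c_new @ B_lo) @ F
err = np.linalg.norm(x_mapped - c_new @ B_hi) / np.linalg.norm(c_new @ B_hi)
print(f"relative mapping error: {err:.1e}")
```

With noise-free low-rank data the mapping is essentially exact; the paper's PDS, CCA-based, and autoencoder variants address the realistic case of noisy spectra with very different resolution.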
We study the possibility of fabricating an arbitrary phase mask in a one-step laser-writing process inside the volume of an optical glass substrate. We derive the phase mask from a Gerchberg–Saxton-type algorithm as an array and create each individual phase shift using a refractive index modification of variable axial length. We realize the variable axial length by superimposing refractive index modifications induced by an ultra-short pulsed laser at different focusing depths. Each single modification is created by applying 1000 pulses with 15 μJ pulse energy at 100 kHz to a fixed spot of 25 μm diameter, and the focus is then shifted axially in steps of 10 μm. With several proof-of-principle examples, we show the feasibility of our method. In particular, we determine the induced refractive index change to be about Δn=1.5⋅10−3. We also determine our current limitations by calculating the overlap in the form of a scalar product, and we discuss possible future improvements.
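A minimal Gerchberg–Saxton-type iteration of the kind referenced in this abstract can be sketched with FFTs: alternate between the mask plane, where a phase-only constraint is enforced, and the focal plane, where the target amplitude is imposed. This toy 64×64 example with an arbitrary rectangular target does not reproduce the paper's sampling, target patterns, or overlap metric.

```python
# Toy Gerchberg-Saxton loop computing a phase-only mask whose far field
# approximates a target intensity pattern. Dimensions and target are arbitrary.
import numpy as np

n = 64
target = np.zeros((n, n))
target[20:44, 30:34] = 1.0                     # arbitrary target intensity pattern
target_amp = np.sqrt(target)

rng = np.random.default_rng(2)
phase = rng.uniform(0.0, 2.0 * np.pi, (n, n))  # random start phase

for _ in range(200):
    far = np.fft.fft2(np.exp(1j * phase))              # mask plane -> focal plane
    far = target_amp * np.exp(1j * np.angle(far))      # impose target amplitude
    near = np.fft.ifft2(far)                           # focal plane -> mask plane
    phase = np.angle(near)                             # keep phase only (phase mask)

achieved = np.abs(np.fft.fft2(np.exp(1j * phase))) ** 2
overlap = np.corrcoef(achieved.ravel(), target.ravel())[0, 1]
print(f"intensity correlation with target: {overlap:.2f}")
```

The resulting `phase` array is the kind of quantity the described process would write into the glass, one refractive index modification per array element.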
This study addresses a proof-of-concept experiment with a biocompatible screen-printed carbon electrode deposited onto a biocompatible and biodegradable substrate made of fibroin, a protein derived from silk of the Bombyx mori silkworm. To demonstrate the sensor performance, the carbon electrode is functionalized as a glucose biosensor with the enzyme glucose oxidase and encapsulated with a silicone rubber to ensure biocompatibility of the contact wires. The carbon electrode is fabricated by means of thick-film technology, including a curing step to solidify the carbon paste. The influence of the curing temperature and curing time on the electrode morphology is analyzed via scanning electron microscopy. The electrochemical characterization of the glucose biosensor is performed by amperometric/voltammetric measurements of different glucose concentrations in phosphate buffer. Herein, systematic studies at applied potentials from 500 to 1200 mV at the carbon working electrode (vs the Ag/AgCl reference electrode) allow the optimal working potential to be determined. Additionally, the influence of the curing parameters on the glucose sensitivity is examined over a period of up to 361 days. The sensor shows a negligible cross-sensitivity toward ascorbic acid, noradrenaline, and adrenaline. The developed biocompatible biosensor is highly promising for future in vivo and epidermal applications.
Retinal vessels are similar to cerebral vessels in their structure and function. Moderately low oscillation frequencies of around 0.1 Hz have been reported as the driving force for paravascular drainage in gray matter in mice and are known as the frequencies of lymphatic vessels in humans. We aimed to elucidate whether retinal vessel oscillations are altered in Alzheimer's disease (AD) at the stage of dementia or mild cognitive impairment (MCI). Seventeen patients with mild-to-moderate dementia due to AD (ADD), 23 patients with MCI due to AD, and 18 cognitively healthy controls (HC) were examined using the Dynamic Retinal Vessel Analyzer. Oscillatory temporal changes of retinal vessel diameters were evaluated using mathematical signal analysis. Especially at moderately low frequencies around 0.1 Hz, arterial oscillations in ADD and MCI significantly prevailed over HC oscillations and correlated with disease severity. The pronounced retinal arterial vasomotion at moderately low frequencies in the ADD and MCI groups would be compatible with the view of a compensatory upregulation of paravascular drainage in AD and strengthen the amyloid clearance hypothesis.
Edge-based and face-based smoothed finite element methods (ES-FEM and FS-FEM, respectively) are modified versions of the finite element method that achieve more accurate results and reduced sensitivity to mesh distortion, at least for linear elements. These properties make the two methods very attractive. However, their implementation in a standard finite element code is nontrivial because it requires heavy and extensive modifications to the code architecture. In this article, we present an element-based formulation of the ES-FEM and FS-FEM methods that allows the two methods to be implemented in a standard finite element code with no modifications to its architecture. Moreover, the element-based formulation makes it easy to handle any type of element, especially in 3D models where, to the best of the authors' knowledge, only tetrahedral elements are used in FS-FEM applications found in the literature. Shape functions for non-simplex 3D elements are proposed in order to apply FS-FEM to any standard finite element.
The mechanical behavior of the large intestine beyond the ultimate stress has never been investigated. Stretching beyond the ultimate stress may drastically impair the tissue microstructure, which consequently weakens its healthy-state functions of absorption, temporary storage, and transportation for defecation. Because the porcine large intestine closely resembles the human organ in microstructure and function, biaxial tensile experiments were performed on porcine tissue in this study. In this paper, we report the hyperelastic characterization of the large intestine based on experiments on 102 specimens. We also report a theoretical analysis of the experimental results, including an exponential damage evolution function. The fracture energies and the threshold stresses serve as damage material parameters for the longitudinal muscular, the circumferential muscular, and the submucosal collagenous layers. A biaxial tensile simulation of a linear brick element has been performed to validate the applicability of the estimated material parameters. The model successfully simulates the biomechanical response of the large intestine under physiological and non-physiological loads.
Image reconstruction analysis for positron emission tomography with heterostructured scintillators
(2022)
The concept of structure engineering has been proposed for exploring the next generation of radiation detectors with improved performance. A TOF-PET geometry with heterostructured scintillators with a pixel size of 3.0×3.1×15 mm3 was studied using Monte Carlo simulations. The heterostructures consisted of alternating layers of BGO as a dense material with high stopping power and plastic (EJ232) as a fast light emitter. The detector time resolution was calculated as a function of the deposited and shared energy in both materials on an event-by-event basis. While the sensitivity was reduced to 32% for 100 μm thick plastic layers and 52% for 50 μm, the coincidence time resolution (CTR) distribution improved to 204±49 ps and 220±41 ps, respectively, compared to the 276 ps assumed for bulk BGO. The complex distribution of timing resolutions was accounted for in the reconstruction: we divided the events into three groups based on their CTR and modeled them with different Gaussian TOF kernels. On a NEMA IQ phantom, the heterostructures yielded better contrast recovery in early iterations. On the other hand, BGO achieved a better contrast-to-noise ratio (CNR) after the 15th iteration due to its higher sensitivity. The developed simulation and reconstruction methods constitute new tools for evaluating different detector designs with complex time responses.
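As background to the Gaussian TOF kernels mentioned above: a CTR quoted as a FWHM maps to a Gaussian positioning uncertainty along the line of response via sigma_x = c · sigma_t / 2 (the factor 2 because a timing difference shifts the annihilation point by half the path difference). The sketch below applies this standard conversion to the CTR values quoted in the abstract; it is not taken from the paper's code.

```python
# Convert CTR (FWHM, ps) into the sigma of a Gaussian TOF kernel along
# the line of response (LOR), in mm.
import math

C_MM_PER_S = 299_792_458e3                     # speed of light, mm/s
FWHM_TO_SIGMA = 1.0 / (2.0 * math.sqrt(2.0 * math.log(2.0)))

sigma_x_mm = {}
for ctr_ps in (204, 220, 276):                 # CTR values quoted in the abstract
    sigma_t = ctr_ps * 1e-12 * FWHM_TO_SIGMA   # FWHM -> Gaussian sigma, seconds
    sigma_x_mm[ctr_ps] = C_MM_PER_S * sigma_t / 2.0  # positioning sigma along the LOR
    print(f"CTR {ctr_ps} ps -> sigma_x ≈ {sigma_x_mm[ctr_ps]:.1f} mm")
```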
Biomedical applications of magnetic nanoparticles (MNP) fundamentally rely on the particles’ magnetic relaxation as a response to an alternating magnetic field. The magnetic relaxation depends in a complex way on the interplay of the MNP magnetic and physical properties with the applied field parameters. It is commonly accepted that the particle core size is a major contributor to signal generation in all the above applications; however, most MNP samples comprise broad core size distributions. Therefore, precise knowledge of the contribution of individual core sizes to signal generation is desired for optimal MNP design in each application. Here, we present a magnetic relaxation simulation-driven analysis of experimental frequency mixing magnetic detection (FMMD) for biosensing to quantify the contributions of individual core size fractions to signal generation. Applying our method to two different experimental MNP systems, we found the most dominant contributions from approximately 20 nm sized particles in both independent MNP systems. An additional comparison between freely suspended and immobilized MNP also reveals insight into the MNP microstructure, allowing FMMD to be used for MNP characterization as well as to further fine-tune its applicability in biosensing.
Frequency mixing magnetic detection (FMMD) has been widely utilized as a measurement technique in magnetic immunoassays. It can also be used for the characterization and distinction (also known as “colourization”) of different types of magnetic nanoparticles (MNPs) based on their core sizes. In a previous work, it was shown that the large particles contribute most of the FMMD signal. This leads to ambiguities in core size determination from fitting, since the contribution of the small-sized particles is almost undetectable among the strong responses from the large ones. In this work, we report on how this ambiguity can be overcome by modelling the signal intensity using the Langevin model in thermodynamic equilibrium, including a lognormal core size distribution fL(dc; d0, σ) fitted to experimentally measured FMMD data of immobilized MNPs. For each given median diameter d0, a set of ambiguous best-fitting pairs of the distribution width σ and the number of particles Np with R2 > 0.99 is extracted. By determining the samples’ total iron mass, mFe, with inductively coupled plasma optical emission spectrometry (ICP-OES), we are then able to identify the one specific best-fitting pair (σ, Np) uniquely. With this additional externally measured parameter, we resolved the ambiguity in the core size distribution and determined the parameters (d0, σ, Np) directly from FMMD measurements, allowing precise characterization of MNP samples.
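The fitting model named in the last two abstracts — an equilibrium Langevin magnetization weighted by a lognormal core size distribution fL(dc; d0, σ) — can be sketched numerically as follows. All material parameters (e.g. the saturation magnetization of magnetite), the diameter grid, and the field value are assumptions for illustration, not values from the papers.

```python
# Sketch: ensemble magnetic moment of Np superparamagnetic cores whose
# diameters follow a lognormal distribution, each core responding with a
# Langevin function L(xi) in thermodynamic equilibrium.
import numpy as np

MU0 = 4e-7 * np.pi           # vacuum permeability, T*m/A
KB, T = 1.380649e-23, 295.0  # Boltzmann constant (J/K), temperature (K)
MS = 4.8e5                   # saturation magnetization of magnetite, A/m (assumed)

def moment(h, d0, sigma, n_p):
    """Ensemble moment (A*m^2) of n_p particles in a static field h (A/m)."""
    d = np.linspace(1e-9, 60e-9, 600)               # core diameter grid, m
    dd = d[1] - d[0]
    v = np.pi * d ** 3 / 6                          # core volumes, m^3
    xi = MU0 * MS * v * h / (KB * T)                # Langevin argument
    langevin = 1.0 / np.tanh(xi) - 1.0 / xi        # L(xi)
    w = np.exp(-np.log(d / d0) ** 2 / (2 * sigma ** 2)) / (d * sigma * np.sqrt(2 * np.pi))
    w = w / np.sum(w * dd)                          # normalize f_L on the grid
    return n_p * np.sum(w * MS * v * langevin * dd)

m = moment(h=1000.0, d0=20e-9, sigma=0.3, n_p=1e12)
print(f"sample moment ≈ {m:.2e} A*m^2")
```

The degeneracy discussed in the abstract arises because different (σ, Np) pairs can produce nearly identical curves of this moment versus field; fixing the total iron mass externally pins down Np and breaks the tie.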