A generalized shear-lag theory for fibres with variable radius is developed to analyse elastic fibre/matrix stress transfer. The theory accounts for the reinforcement of biological composites, such as soft tissue and bone tissue, as well as for the reinforcement of technical composite materials, such as fibre-reinforced polymers (FRP). The original shear-lag theory proposed by Cox in 1952 is generalized for fibres with variable radius and with symmetric and asymmetric ends. Analytical solutions are derived for the distribution of axial and interfacial shear stress in cylindrical and elliptical fibres, as well as conical and paraboloidal fibres with asymmetric ends. Additionally, the distribution of axial and interfacial shear stress for conical and paraboloidal fibres with symmetric ends is numerically predicted. The results are compared with solutions from axisymmetric finite element models. A parameter study is performed to investigate the suitability of alternative fibre geometries for use in FRP.
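As a minimal illustration of the classical Cox (1952) shear-lag solution that the generalized theory builds on, the sketch below evaluates the axial stress profile of a straight cylindrical fibre. The formula is the standard textbook form; all numerical parameter values used here are hypothetical and not taken from the paper.

```python
import math

def cox_axial_stress(x, E_f, G_m, r_f, R, L, eps):
    """Axial fibre stress at position x (x = 0 at the fibre midpoint)
    from the classical Cox shear-lag model for a cylindrical fibre.
    E_f: fibre modulus, G_m: matrix shear modulus, r_f: fibre radius,
    R: effective matrix radius, L: fibre length, eps: applied strain."""
    beta = math.sqrt(2.0 * G_m / (E_f * r_f**2 * math.log(R / r_f)))
    return E_f * eps * (1.0 - math.cosh(beta * x) / math.cosh(beta * L / 2.0))
```

The stress vanishes at the fibre ends and approaches the far-field value E_f·ε near the midpoint of a sufficiently long fibre, which is the qualitative behaviour the generalized theory reproduces for more complex fibre shapes.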
Wearable EEG has gained popularity in recent years, driven by promising uses outside of clinics and research. The ubiquitous application of continuous EEG requires unobtrusive form factors that are easily acceptable to end-users. In this progression, wearable EEG systems have been moving from full scalp to forehead and recently to the ear. The aim of this study is to demonstrate that emerging ear-EEG provides similar impedance and signal properties to established forehead EEG. EEG data were acquired from ten healthy subjects using an eyes-open/eyes-closed alpha paradigm, with generic earpieces fitted with three custom-made electrodes and a forehead electrode (at Fpx), after impedance analysis. Inter-subject variability in in-ear electrode impedance ranged from 20 kΩ to 25 kΩ at 10 Hz. Signal quality was comparable, with an SNR of 6 for in-ear and 8 for forehead electrodes. Alpha attenuation was significant during the eyes-open condition in all in-ear electrodes, and it followed the structure of the power spectral density plots of the forehead electrodes, with a Pearson correlation coefficient of 0.92 between in-ear locations ELE (Left Ear Superior) and ERE (Right Ear Superior) and forehead locations Fp1 and Fp2, respectively. The results indicate that in-ear EEG is an unobtrusive alternative to established forehead EEG in terms of impedance, signal properties and information content.
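The reported 0.92 agreement between ear and forehead spectra is a plain Pearson correlation; a self-contained sketch of that computation (with made-up toy signals in the test, not the study's data):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length signals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```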
The European Union's aim to become climate neutral by 2050 necessitates ambitious efforts to reduce carbon emissions. Large reductions can be attained particularly in energy-intensive sectors like iron and steel. In order to prevent the relocation of such industries outside the EU in the course of tightening environmental regulations, the establishment of a climate club jointly with other large emitters, and alternatively the unilateral implementation of an international cross-border carbon tax mechanism, are proposed. This article focuses on the latter option, choosing the steel sector as an example. In particular, we investigate the financial conditions under which a European cross-border mechanism is capable of protecting hydrogen-based steel production routes employed in Europe against more polluting competition from abroad. Using a floor price model, we assess the competitiveness of different steel production routes in selected countries. We evaluate the climate friendliness of steel production on the basis of specific GHG emissions. In addition, we utilize an input-output price model, which enables us to assess the impacts of rising steel production costs on commodities using steel as intermediates. Our results raise concerns that a cross-border tax mechanism will not suffice to make hydrogen-based steel production in Europe competitive, because its cost tends to remain higher than the cost of steel production in, e.g., China. Steel is a classic example of a good used mainly as an intermediate for other products. Therefore, a cross-border tax mechanism for steel will increase the price of products produced in the EU that require steel as an input. This can in turn adversely affect the competitiveness of these sectors. Hence, the effects of higher steel costs on European exports should be borne in mind and could require the cross-border adjustment mechanism to also subsidize exports.
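The input-output price model mentioned above propagates a cost shock in one sector into downstream prices by solving (I − Aᵀ)Δp = Δv. A two-sector sketch with an explicit 2×2 solve follows; the coefficient matrix and cost shock in the test are hypothetical, purely for illustration (A[i][j] is the input of sector i per unit output of sector j).

```python
def price_effects(A, dv):
    """Leontief price model for two sectors: solve (I - A^T) dp = dv,
    where dv is the per-unit cost increase (e.g. a carbon charge on
    steel) and dp the resulting output price increases."""
    m11 = 1.0 - A[0][0]; m12 = -A[1][0]
    m21 = -A[0][1];      m22 = 1.0 - A[1][1]
    det = m11 * m22 - m12 * m21
    dp1 = (dv[0] * m22 - m12 * dv[1]) / det
    dp2 = (m11 * dv[1] - m21 * dv[0]) / det
    return dp1, dp2
```

A cost shock to the steel sector raises not only the steel price but also the prices of steel-using sectors, which is the export-competitiveness channel the abstract warns about.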
Reliable methods for automatic readability assessment have the potential to impact a variety of fields, ranging from machine translation to self-informed learning. Recently, large language models for the German language (such as GBERT and GPT-2-Wechsel) have become available, enabling the development of deep-learning-based approaches that promise to further improve automatic readability assessment. In this contribution, we studied the ability of ensembles of fine-tuned GBERT and GPT-2-Wechsel models to reliably predict the readability of German sentences. We combined these models with linguistic features and investigated the dependence of prediction performance on ensemble size and composition. Mixed ensembles of GBERT and GPT-2-Wechsel performed better than ensembles of the same size consisting of only GBERT or GPT-2-Wechsel models. Our models were evaluated in the GermEval 2022 Shared Task on Text Complexity Assessment on data of German sentences. On out-of-sample data, our best ensemble achieved a root mean squared error of 0.435.
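The ensemble scoring used here is the usual average-then-RMSE scheme; a minimal sketch (toy predictions in the test, not the shared-task data):

```python
import math

def ensemble_rmse(model_preds, targets):
    """Average the per-sentence predictions of several models, then
    compute the root mean squared error against the targets."""
    m = len(model_preds)
    n = len(targets)
    avg = [sum(p[i] for p in model_preds) / m for i in range(n)]
    return math.sqrt(sum((a - t) ** 2 for a, t in zip(avg, targets)) / n)
```

Averaging can cancel opposite-signed errors of individual models, which is one reason mixed ensembles may beat same-size homogeneous ones.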
We study the possibility to fabricate an arbitrary phase mask in a one-step laser-writing process inside the volume of an optical glass substrate. We derive the phase mask from a Gerchberg–Saxton-type algorithm as an array and create each individual phase shift using a refractive index modification of variable axial length. We realize the variable axial length by superimposing refractive index modifications induced by an ultra-short pulsed laser at different focusing depths. Each single modification is created by applying 1000 pulses with 15 μJ pulse energy at 100 kHz to a fixed spot of 25 μm diameter, and the focus is then shifted axially in steps of 10 μm. With several proof-of-principle examples, we show the feasibility of our method. In particular, we determine the induced refractive index change to be approximately Δn = 1.5·10⁻³. We also determine our current limitations by calculating the overlap in the form of a scalar product, and we discuss possible future improvements.
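The link between modification length and phase shift used above is the standard relation Δφ = 2π·Δn·L/λ. A small sketch, using the Δn ≈ 1.5·10⁻³ from the abstract; the wavelength in the test is an assumed placeholder, not a value from the paper:

```python
import math

def phase_shift(delta_n, length_um, wavelength_um):
    """Optical phase shift (rad) accumulated over a modified track of
    given axial length with refractive index change delta_n."""
    return 2.0 * math.pi * delta_n * length_um / wavelength_um

def length_for_phase(phi, delta_n, wavelength_um):
    """Axial modification length (μm) needed for a target phase shift."""
    return phi * wavelength_um / (2.0 * math.pi * delta_n)
```

For Δn = 1.5·10⁻³ a full 2π shift requires an axial length of λ/Δn, i.e. several hundred micrometres, which explains why the modifications are stacked in 10 μm axial steps.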
The mechanical behavior of the large intestine beyond the ultimate stress has never been investigated. Stretching beyond the ultimate stress may drastically impair the tissue microstructure, which consequently weakens its healthy-state functions of absorption, temporary storage, and transportation for defecation. Due to its close similarity to the human organ in microstructure and function, biaxial tensile experiments on the porcine large intestine have been performed in this study. In this paper, we report the hyperelastic characterization of the large intestine based on experiments on 102 specimens. We also report the theoretical analysis of the experimental results, including an exponential damage evolution function. The fracture energies and the threshold stresses are set as damage material parameters for the longitudinal muscular, the circumferential muscular and the submucosal collagenous layers. A biaxial tensile simulation of a linear brick element has been performed to validate the applicability of the estimated material parameters. The model successfully simulates the biomechanical response of the large intestine under physiological and non-physiological loads.
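The abstract names an exponential damage evolution function without giving its form; a common saturating-exponential shape with a threshold is sketched below as an assumption, not the paper's exact function. All parameter values in the test are hypothetical.

```python
import math

def damage(psi, psi0, beta, d_inf=1.0):
    """Generic exponential damage evolution: no damage below the
    threshold energy psi0, then saturation toward d_inf as the driving
    quantity psi grows; beta sets the saturation rate."""
    if psi <= psi0:
        return 0.0
    return d_inf * (1.0 - math.exp(-(psi - psi0) / beta))
```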
Edge-based and face-based smoothed finite element methods (ES-FEM and FS-FEM, respectively) are modified versions of the finite element method that achieve more accurate results and reduced sensitivity to mesh distortion, at least for linear elements. These properties make the two methods very attractive. However, their implementation in a standard finite element code is nontrivial because it requires heavy and extensive modifications to the code architecture. In this article, we present an element-based formulation of the ES-FEM and FS-FEM methods that allows the two methods to be implemented in a standard finite element code with no modifications to its architecture. Moreover, the element-based formulation makes it easy to handle any type of element, especially in 3D models where, to the best of the authors' knowledge, only tetrahedral elements are used in the FS-FEM applications found in the literature. Shape functions for non-simplex 3D elements are proposed in order to apply FS-FEM to any standard finite element.
Retinal vessels are similar to cerebral vessels in their structure and function. Moderately low oscillation frequencies of around 0.1 Hz have been reported as the driving force for paravascular drainage in gray matter in mice and are known as the frequencies of lymphatic vessels in humans. We aimed to elucidate whether retinal vessel oscillations are altered in Alzheimer's disease (AD) at the stage of dementia or mild cognitive impairment (MCI). Seventeen patients with mild-to-moderate dementia due to AD (ADD), 23 patients with MCI due to AD, and 18 cognitively healthy controls (HC) were examined using the Dynamic Retinal Vessel Analyzer. Oscillatory temporal changes of retinal vessel diameters were evaluated using mathematical signal analysis. Especially at moderately low frequencies around 0.1 Hz, arterial oscillations in ADD and MCI significantly prevailed over HC oscillations and correlated with disease severity. The pronounced retinal arterial vasomotion at moderately low frequencies in the ADD and MCI groups would be compatible with the view of a compensatory upregulation of paravascular drainage in AD and strengthens the amyloid clearance hypothesis.
This study addresses a proof-of-concept experiment with a biocompatible screen-printed carbon electrode deposited onto a biocompatible and biodegradable substrate, which is made of fibroin, a protein derived from silk of the Bombyx mori silkworm. To demonstrate the sensor performance, the carbon electrode is functionalized as a glucose biosensor with the enzyme glucose oxidase and encapsulated with a silicone rubber to ensure biocompatibility of the contact wires. The carbon electrode is fabricated by means of thick-film technology, including a curing step to solidify the carbon paste. The influence of the curing temperature and curing time on the electrode morphology is analyzed via scanning electron microscopy. The electrochemical characterization of the glucose biosensor is performed by amperometric/voltammetric measurements of different glucose concentrations in phosphate buffer. Herein, systematic studies at applied potentials from 500 to 1200 mV at the carbon working electrode (vs the Ag/AgCl reference electrode) allow determination of the optimal working potential. Additionally, the influence of the curing parameters on the glucose sensitivity is examined over a time period of up to 361 days. The sensor shows a negligible cross-sensitivity toward ascorbic acid, noradrenaline, and adrenaline. The developed biocompatible biosensor is highly promising for future in vivo and epidermal applications.
Digital twins enable the modeling and simulation of real-world entities (objects, processes or systems), resulting in improvements in the associated value chains. The emerging field of quantum computing holds tremendous promise for evolving this virtualization towards Quantum (Digital) Twins (QDT) and ultimately Quantum Twins (QT). The quantum (digital) twin concept is not a contradiction in terms, but instead describes a hybrid approach that can be implemented using the technologies available today by combining classical computing and digital twin concepts with quantum processing. This paper presents the status quo of research and practice on quantum (digital) twins. It also discusses their potential to create competitive advantage through real-time simulation of highly complex, interconnected entities that helps companies better address changes in their environment and differentiate their products and services.
Although several successful applications of benchtop nuclear magnetic resonance (NMR) spectroscopy in quantitative mixture analysis exist, the possibility of calibration transfer remains mostly unexplored, especially between high- and low-field NMR. This study investigates for the first time the calibration transfer of partial least squares regressions [weight average molecular weight (Mw) of lignin] between high-field (600 MHz) NMR and benchtop NMR devices (43 and 60 MHz). For the transfer, piecewise direct standardization, calibration transfer based on canonical correlation analysis, and transfer via the extreme learning machine auto-encoder method are employed. Despite the immense resolution difference between high-field and low-field NMR instruments, the results demonstrate that the calibration transfer from high- to low-field is feasible in the case of a physical property, namely, the molecular weight, achieving validation errors close to the original calibration (down to only 1.2 times higher root mean square errors). These results introduce new perspectives for applications of benchtop NMR, in which existing calibrations from expensive high-field instruments can be transferred to cheaper benchtop instruments, reducing analysis costs.
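As a rough intuition for standardization-based transfer, the sketch below fits a per-channel slope/offset mapping from a "slave" (benchtop) response to a "master" (high-field) response by least squares; this is only the window-size-1 limiting case of piecewise direct standardization, not the full PDS, CCA, or auto-encoder methods the study employs. The test data are invented.

```python
def fit_channel_transfer(slave, master):
    """Least-squares slope/offset mapping slave -> master for one
    spectral channel (the simplest special case of piecewise direct
    standardization, with a window of a single variable)."""
    n = len(slave)
    ms, mm = sum(slave) / n, sum(master) / n
    sxx = sum((s - ms) ** 2 for s in slave)
    sxy = sum((s - ms) * (m - mm) for s, m in zip(slave, master))
    slope = sxy / sxx
    return slope, mm - slope * ms

def apply_transfer(x, slope, offset):
    """Map a new slave-instrument value into master-instrument units."""
    return slope * x + offset
```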
An NMR standardization approach that uses the 2H integral of the deuterated solvent for quantitative multinuclear analysis of pharmaceuticals is described. As a proof of principle, the existing NMR procedure for the analysis of heparin products according to the US Pharmacopeia monograph is extended to the determination of Na+ and Cl- content in this matrix. Quantification is performed based on the ratio of a 23Na (35Cl) NMR integral and the 2H NMR signal of the deuterated solvent, D2O, acquired using the specific spectrometer hardware. As an alternative, the possibility of 133Cs standardization using the addition of Cs2CO3 stock solution is shown. Validation characteristics (linearity, repeatability, sensitivity) are evaluated. A holistic NMR profiling of heparin products can now also be used for the quantitative determination of inorganic compounds in a single analytical run using a single sample. In general, the new standardization methodology provides an appealing alternative for the NMR screening of inorganic and organic components in pharmaceutical products.
Lignin is a promising renewable biopolymer being investigated worldwide as an environmentally benign substitute for fossil-based aromatic compounds, e.g. for use as an excipient with antioxidant and antimicrobial properties in drug delivery, or even as an active compound. For its successful implementation into process streams, a quick, easy, and reliable method is needed for its molecular weight determination. Here we present a method using 1H spectra of benchtop as well as conventional NMR systems in combination with multivariate data analysis, to determine lignin’s molecular weight (Mw and Mn) and polydispersity index (PDI). A set of 36 organosolv lignin samples (from Miscanthus x giganteus, Paulownia tomentosa and Silphium perfoliatum) was used for the calibration and cross validation, and 17 samples were used as an external validation set. Validation errors between 5.6% and 12.9% were achieved for all parameters on all NMR devices (43, 60, 500 and 600 MHz). Surprisingly, no significant difference in the performance of the benchtop and high-field devices was found. This facilitates the application of this method for determining lignin’s molecular weight in an industrial environment because of the low maintenance expenditure, small footprint, ruggedness, and low cost of permanent magnet benchtop NMR systems.
Heparin is a natural polysaccharide that plays an essential role in many biological processes. Alterations in its building blocks can modify the biological roles of commercial heparin products due to significant changes in the conformation of the polymer chain. The structural variability of heparin makes quality control difficult for various analytical methods, including infrared (IR) spectroscopy. In this paper, molecular modelling of heparin disaccharide subunits was performed using quantum chemistry. The structural and spectral parameters of these disaccharides were calculated using RHF/6-311G. In addition, over-sulphated chondroitin sulphate disaccharide was studied as one of the most widespread contaminants of heparin. The calculated IR spectra were analyzed with respect to specific structural parameters. The IR spectroscopic fingerprint was found to be sensitive to the substitution pattern of the disaccharide subunits. Vibrational assignments of the calculated spectra were correlated with experimental IR spectral bands of native heparin. Chemometrics was used to perform multivariate analysis of the simulated spectral data.
The recent advances in microbiology have shed light on understanding the role of vitamins beyond the nutritional range. Vitamins are critical in contributing to healthy biodiversity and maintaining the proper function of gut microbiota. The sharing of vitamins among bacterial populations promotes stability in community composition and diversity; however, this balance becomes disturbed in various pathologies. Here, we overview and analyze the ability of different vitamins to selectively and specifically induce changes in the intestinal microbial community. Some schemes and regularities become visible, which may provide new insights and avenues for therapeutic management and functional optimization of the gut microbiota.
Vitamin D plays an essential role in calcium and inorganic phosphate (Pi) homeostasis, maintaining their optimal levels to assure adequate bone mineralization. Vitamin D, as calcitriol (1,25(OH)2D), not only increases intestinal calcium and phosphate absorption but also facilitates their renal reabsorption, leading to elevated serum calcium and phosphate levels. The interaction of 1,25(OH)2D with its receptor (VDR) increases the efficiency of intestinal absorption of calcium to 30–40% and phosphate to nearly 80%. Serum phosphate levels can also influence 1,25(OH)2D and fibroblast growth factor 23 (FGF23) levels, i.e., higher phosphate concentrations suppress vitamin D activation and stimulate parathyroid hormone (PTH) release, while a high FGF23 serum level leads to reduced vitamin D synthesis. In the vitamin D-deficient state, the intestinal calcium absorption decreases and the secretion of PTH increases, which in turn causes the stimulation of 1,25(OH)2D production, resulting in excessive urinary phosphate loss. Maintenance of phosphate homeostasis is essential as hyperphosphatemia is a risk factor of cardiovascular calcification, chronic kidney diseases (CKD), and premature aging, while hypophosphatemia is usually associated with rickets and osteomalacia. This chapter elaborates on the possible interactions between vitamin D and phosphate in health and disease.
Miniaturized electrolyte–insulator–semiconductor capacitors (EISCAPs) with ultrathin gate insulators have been studied in terms of their pH-sensitive sensor characteristics: three different EISCAP systems consisting of Al–p-Si–Ta2O5(5 nm), Al–p-Si–Si3N4(1 or 2 nm)–Ta2O5(5 nm), and Al–p-Si–SiO2(3.6 nm)–Ta2O5(5 nm) layer structures are characterized in buffer solutions with different pH values by means of capacitance–voltage and constant-capacitance methods. The SiO2 and Si3N4 gate insulators are deposited by rapid thermal oxidation and rapid thermal nitridation, respectively, whereas the Ta2O5 film is prepared by atomic layer deposition. All EISCAP systems have a clear pH response, favoring the stacked gate insulators SiO2–Ta2O5 when considering the overall sensor characteristics, while the Si3N4(1 nm)–Ta2O5 stack delivers the largest accumulation capacitance (due to the lower equivalent oxide thickness) and a higher steepness in the slope of the capacitance–voltage curve among the studied stacked gate insulator systems.
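The theoretical upper bound against which such pH sensors are usually judged is the Nernstian slope, about 59.2 mV/pH at 25 °C; this benchmark value is standard electrochemistry, not a number from the abstract. A sketch of the calculation:

```python
import math

def nernst_slope_mV(temperature_c=25.0):
    """Ideal Nernstian pH sensitivity in mV per pH unit:
    (RT/F) * ln(10), converted to millivolts."""
    R = 8.314462618   # gas constant, J/(mol K)
    F = 96485.33212   # Faraday constant, C/mol
    T = temperature_c + 273.15
    return 1000.0 * R * T * math.log(10.0) / F
```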
Virgin passive colon biomechanics and a literature review of active contraction constitutive models
(2022)
The objective of this paper is to present our findings on the biomechanical aspects of the virgin passive anisotropic hyperelasticity of the porcine colon based on equibiaxial tensile experiments. Firstly, the characterization of the intestine tissues is discussed for a nearly incompressible hyperelastic fiber-reinforced Holzapfel–Gasser–Ogden constitutive model in virgin passive loading conditions. The stability of the evaluated material parameters is checked for the polyconvexity of the adopted strain energy function using positive eigenvalue constraints of the Hessian matrix with MATLAB. The constitutive material description of the intestine, with two collagen fibers in each of the submucosal and muscular layers, has been implemented in the FORTRAN platform of the commercial finite element software LS-DYNA, and two equibiaxial tensile simulations are presented to validate the results against the optical strain images obtained from the experiments. Furthermore, this paper also reviews the existing models of active smooth muscle cells, but these models have not been computationally studied here. The review shows that the constitutive models originally developed for the active contraction of skeletal muscle, based on Hill’s three-element model, Murphy’s four-state cross-bridge chemical kinetic model and Huxley’s sliding-filament hypothesis, which are mainly used for arteries, are appropriate for numerical contraction analysis of the large intestine.
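For orientation, the Holzapfel–Gasser–Ogden strain energy named above combines an isotropic matrix term with an exponential term per collagen fibre family; the simplified form below (incompressible, no fibre dispersion) is the textbook version, and all parameter values in the test are hypothetical, not the paper's fitted values.

```python
import math

def hgo_energy(I1, I4_list, c, k1, k2):
    """Simplified Holzapfel-Gasser-Ogden strain energy:
    0.5*c*(I1-3) for the matrix plus k1/(2*k2)*(exp(k2*(I4-1)^2)-1)
    per fibre family, active only when the fibre is stretched (I4 > 1)."""
    psi = 0.5 * c * (I1 - 3.0)
    for I4 in I4_list:
        if I4 > 1.0:
            psi += k1 / (2.0 * k2) * (math.exp(k2 * (I4 - 1.0) ** 2) - 1.0)
    return psi
```

The exponential fibre terms reproduce the characteristic stiffening of collagenous tissue at larger stretches.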
Unsteady shallow meandering flows in rectangular reservoirs: a modal analysis of URANS modelling
(2022)
Shallow flows are common in natural and human-made environments. Even for simple rectangular shallow reservoirs, recent laboratory experiments show that the developing flow fields are particularly complex, involving large-scale turbulent structures. For specific combinations of reservoir size and hydraulic conditions, a meandering jet can be observed. While some aspects of this pseudo-2D flow pattern can be reproduced using a 2D numerical model, new 3D simulations, based on the unsteady Reynolds-Averaged Navier-Stokes equations, show consistent advantages, as presented herein. A Proper Orthogonal Decomposition was used to characterize the four most energetic modes of the meandering jet at the free surface level, allowing comparison against experimental data and 2D (depth-averaged) numerical results. Three different isotropic eddy viscosity models (RNG k-ε, k-ε, k-ω) were tested. The 3D models accurately predicted the frequency of the modes, whereas the amplitudes of the modes and associated energy were damped for the friction-dominant cases and augmented for non-frictional ones. The performance of the three turbulence models remained essentially similar, with slightly better predictions by the RNG k-ε model in the case with the highest Reynolds number. Finally, the Q-criterion was used to identify vortices and study their dynamics, assisting in the identification of the differences between: i) the three-dimensional phenomenon (here reproduced), ii) its two-dimensional footprint on the free surface (experimental observations) and iii) the depth-averaged case (represented by 2D models).
This study reviews the practice of brake tests in freight railways, which is time-consuming and not suitable for detecting certain failure types. Public incident reports are analysed to derive a reasonable brake test hardware and communication architecture, which aims to provide automatic brake tests at lower cost than current solutions. The proposed solution relies exclusively on brake pipe and brake cylinder pressure sensors, a brake release position switch, as well as radio communication via standard protocols. The approach is embedded in the Wagon 4.0 concept, which is a holistic approach to a smart freight wagon. The reduction of manual processes yields a strong incentive due to high savings in manual labour and increased productivity.
Sleep spindles are neurophysiological phenomena that appear to be linked to memory formation and other functions of the central nervous system, and that can be observed in electroencephalographic recordings (EEG) during sleep. Manually identified spindle annotations in EEG recordings suffer from substantial intra- and inter-rater variability, even if raters have been highly trained, which reduces the reliability of spindle measures as a research and diagnostic tool. The Massive Online Data Annotation (MODA) project has recently addressed this problem by forming a consensus from multiple such rating experts, thus providing a corpus of spindle annotations of enhanced quality. Based on this dataset, we present a U-Net-type deep neural network model to automatically detect sleep spindles. Our model’s performance exceeds that of the state-of-the-art detector and of most experts in the MODA dataset. We observed improved detection accuracy in subjects of all ages, including older individuals whose spindles are particularly challenging to detect reliably. Our results underline the potential of automated methods to do repetitive cumbersome tasks with super-human performance.
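Spindle detectors such as the one described above are typically scored event-wise against consensus annotations using precision, recall and F1. A minimal, self-contained sketch of such overlap-based event matching follows; the matching rule (a detection counts if it covers at least a fraction of the reference event) is a common convention, not necessarily the exact rule used in the MODA evaluation, and the intervals in the test are invented.

```python
def interval_f1(detected, reference, min_overlap=0.2):
    """Event-level precision, recall and F1 for interval detections.
    A reference event counts as detected (true positive) if some
    detection overlaps at least min_overlap of its duration."""
    def hit(d, r):
        ov = min(d[1], r[1]) - max(d[0], r[0])
        return ov >= min_overlap * (r[1] - r[0])
    tp = sum(1 for r in reference if any(hit(d, r) for d in detected))
    fp = sum(1 for d in detected if not any(hit(d, r) for r in reference))
    fn = len(reference) - tp
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1
```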
In this article we describe an Internet-of-Things sensing device with a wireless interface which is powered by the often-overlooked harvesting method of the Wiegand effect. The sensor can determine position, temperature or other resistively measurable quantities and can transmit the data via an ultra-low power ultra-wideband (UWB) data transmitter. With this approach we can acquire, process, and wirelessly transmit data in a pulsed, energy-self-sufficient operation. A proof-of-concept system was built to demonstrate the feasibility of the approach. The energy consumption of the system is analyzed in detail and traced back to the individual components, compared with the generated energy, and used to identify further optimization options. Based on the proof-of-concept, an application demonstrator was developed. Finally, we point out possible use cases.
Virtual Reality (VR) offers novel possibilities for remote training regardless of the availability of the actual equipment, the presence of specialists, and the training locations. Research shows that training environments that adapt to users' preferences and performance can promote more effective learning. However, the observed results can hardly be traced back to specific adaptive measures rather than the whole new training approach. This study analyzes the effects of a combined point and leveling VR-based gamification system on assembly training targeting specific training outcomes and users' motivations. The Gamified-VR-Group with 26 subjects received the gamified training, and the Non-Gamified-VR-Group with 27 subjects received the alternative without gamified elements. Both groups conducted their VR training at least three times before assembling the actual structure. The study found that a level system that gradually increases the difficulty and error probability in VR can significantly lower real-world error rates, self-corrections, and support usage. According to our study, a high error occurrence at the highest training level reduced the Gamified-VR-Group's feeling of competence compared to the Non-Gamified-VR-Group, but at the same time led to lower error probabilities in real life. It is concluded that a level system with a variable task difficulty should be combined with carefully balanced positive and negative feedback messages. This way, better learning results and improved self-evaluation can be achieved without significantly impairing the participants' feeling of competence.
A Gamified Information System (GIS) implements game concepts and elements, such as affordances and game design principles, to motivate people. Based on the idea of developing a GIS to increase the motivation of software developers to perform software quality tasks, the research work at hand aims at investigating relevant requirements from that target group. Therefore, 14 interviews with software development experts are conducted and analyzed. According to the results, software developers prefer the affordances of points and narrative storytelling in a multiplayer, round-based setting. Furthermore, six design principles for the development of a GIS are derived.
Concentrating solar power
(2022)
The focus of this chapter is the production of power and the use of the heat produced from concentrated solar thermal power (CSP) systems.
The chapter starts with the general theoretical principles of concentrating systems, including the description of the concentration ratio and the energy and mass balance. The main part covers power conversion systems, addressing both solar-only operation and the increase in operational hours.
Solar-only operation includes the use of steam turbines, gas turbines, organic Rankine cycles and solar dishes. The operational hours can be increased with hybridization and with storage.
Another important topic is cogeneration, where solar cooling, desalination and heat usage are described.
Many examples of commercial CSP power plants as well as research facilities, both from the past and currently installed and in operation, are described in detail.
The chapter closes with economic and environmental aspects and with the future potential of the development of CSP around the world.
Solar thermal concentrated power is an emerging technology that provides clean electricity for the growing energy market. Solar thermal concentrated power plant systems include the parabolic trough, the Fresnel collector, the solar dish, and the central receiver system.
For high-concentration solar collector systems, optical and thermal analysis is essential. A number of measurement techniques and systems exist for the optical and thermal characterization of the efficiency of solar thermal concentrated systems.
For each system, the structure, components, and specific characteristics are described. The chapter additionally presents an outline of the calculation of system performance as well as operation and maintenance topics. One main focus is on models of components and their construction details, as well as the different types on the market. In the later part of this article, different criteria for the choice of technology are analyzed in detail.
When confining pressure is low or absent, extensional fractures are typical, with fractures occurring on unloaded planes in rock. These “paradox” fractures can be explained by a phenomenological extension strain failure criterion. In the past, a simple empirical criterion for fracture initiation in brittle rock was developed, but it makes unrealistic strength predictions in biaxial compression and tension. A new extension strain criterion overcomes this limitation by adding a weighted principal shear component. The weight is chosen such that the enriched extension strain criterion represents the same failure surface as the Mohr–Coulomb (MC) criterion. Thus, the MC criterion has been derived as an extension strain criterion predicting failure modes that are unexpected in the usual understanding of the failure of cohesive-frictional materials. In progressive damage of rock, the most likely fracture direction is orthogonal to the maximum extension strain. The enriched extension strain criterion is proposed as a threshold surface for crack initiation (CI) and crack damage (CD) and as a failure surface at peak (P). Examples show that the enriched extension strain criterion predicts much lower volumes of damaged rock mass compared to the simple extension strain criterion.
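The abstract describes the enriched criterion only qualitatively. As a hedged sketch of its general form (the symbols and the exact placement of the weight are assumptions here, not taken from the paper), with principal strains ordered ε₁ ≥ ε₂ ≥ ε₃:

```latex
% Simple extension strain criterion: fracture initiates when the
% smallest principal strain reaches a critical extension strain
\varepsilon_3 \le \varepsilon_{\mathrm{crit}}

% Enriched criterion (assumed form): a weighted maximum shear strain
% term is added; the weight a is calibrated such that the resulting
% failure surface coincides with the Mohr--Coulomb surface
\varepsilon_3 + a\,\frac{\varepsilon_1 - \varepsilon_3}{2} \le \varepsilon_{\mathrm{crit}}
```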
Purpose
In the determination of the measurement uncertainty, the GUM procedure requires building a measurement model that establishes a functional relationship between the measurand and all influencing quantities. Since the effort of modelling as well as of quantifying the measurement uncertainties depends on the number of influencing quantities considered, the aim of this study is to determine relevant influencing quantities and to remove irrelevant ones from the dataset.
Design/methodology/approach
In this work, it was investigated whether the effort of modelling for the determination of measurement uncertainty can be reduced by the use of feature selection (FS) methods. For this purpose, 9 different FS methods were tested on 16 artificial test datasets, whose properties (number of data points, number of features, complexity, features with low influence and redundant features) were varied via a design of experiments.
Findings
Based on a success metric as well as the stability, universality and complexity of the methods, two FS methods could be identified that reliably identify relevant and irrelevant influencing quantities for a measurement model.
Originality/value
For the first time, FS methods were applied to datasets with properties of classical measurement processes. The simulation-based results serve as a basis for further research in the field of FS for measurement models. The identified algorithms will be applied to real measurement processes in the future.
REM sleep without atonia (RSWA) is a key feature for the diagnosis of rapid eye movement (REM) sleep behaviour disorder (RBD). We introduce RBDtector, a novel open-source software to score RSWA according to established SINBAR visual scoring criteria. We assessed muscle activity of the mentalis, flexor digitorum superficialis (FDS), and anterior tibialis (AT) muscles. RSWA was scored manually as tonic, phasic, and any activity by human scorers as well as using RBDtector in 20 subjects. Subsequently, 174 subjects (72 without RBD and 102 with RBD) were analysed with RBDtector to show the algorithm’s applicability. We additionally compared RBDtector estimates to a previously published dataset. RBDtector showed robust conformity with human scorings. The highest congruency was achieved for phasic and any activity of the FDS. Combining mentalis any and FDS any, RBDtector identified RBD subjects with 100% specificity and 96% sensitivity applying a cut-off of 20.6%. Comparable performance was obtained without manual artefact removal. RBD subjects also showed muscle bouts of higher amplitude and longer duration. RBDtector provides estimates of tonic, phasic, and any activity comparable to human scorings. RBDtector, which is freely available, can help identify RBD subjects and provides reliable RSWA metrics.
FEM shakedown analysis of structures under random strength with chance constrained programming
(2022)
Direct methods, comprising limit and shakedown analysis, are a branch of computational mechanics. They play a significant role in mechanical and civil engineering design. The concept of direct methods aims to determine the ultimate load-carrying capacity of structures beyond the elastic range. In practical problems, the direct methods lead to nonlinear convex optimization problems with a large number of variables and constraints. If strength and loading are random quantities, the shakedown analysis can be formulated as a stochastic programming problem. In this paper, a method called chance constrained programming is presented, an effective method of stochastic programming for solving shakedown analysis problems under random conditions of strength. In this study, the loading is deterministic, and the strength is a normally or lognormally distributed variable.
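The key step in chance constrained programming is replacing a probabilistic constraint by a deterministic equivalent. A minimal sketch for the normally distributed strength case described above (the numbers are illustrative, not from the paper): the constraint P(stress ≤ R) ≥ p, with R ~ N(μ_R, σ_R), becomes stress ≤ μ_R − z_p·σ_R, where z_p is the standard normal quantile.

```python
from statistics import NormalDist

def deterministic_strength(mu_R, sigma_R, reliability):
    """Deterministic equivalent of the chance constraint
    P(stress <= R) >= reliability for normally distributed strength
    R ~ N(mu_R, sigma_R): returns the admissible stress bound.
    For a lognormal strength, the same bound applies to ln(R):
    bound = exp(mu_ln - z * sigma_ln)."""
    z = NormalDist().inv_cdf(reliability)  # standard normal quantile z_p
    return mu_R - z * sigma_R

# Illustrative numbers (not from the paper): strength with mean
# 235 MPa, standard deviation 10 MPa, required reliability 95 %.
bound = deterministic_strength(235.0, 10.0, 0.95)
print(round(bound, 2))  # → 218.55
```

The optimization then proceeds with this deterministic bound in place of the random-strength constraint, which is what makes the shakedown problem tractable by standard convex solvers.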
On the basis of bivariate data, assumed to be observations of independent copies of a random vector (S,N), we consider testing the hypothesis that the distribution of (S,N) belongs to the parametric class of distributions that arise with the compound Poisson exponential model. Typically, this model is used in stochastic hydrology, with N as the number of raindays and S as the total rainfall amount during a certain time period, or in actuarial science, with N as the number of losses and S as the total loss expenditure during a certain time period. The compound Poisson exponential model is characterized in the way that a specific transform associated with the distribution of (S,N) satisfies a certain differential equation. Mimicking the functional part of this equation by substituting the empirical counterparts of the transform, we obtain an expression; the weighted integral of its square is used as test statistic. We deal with two variants of the latter, one of which is invariant under scale transformations of the S-part by fixed positive constants. Critical values are obtained by using a parametric bootstrap procedure. The asymptotic behavior of the tests is discussed. A simulation study demonstrates the performance of the tests in the finite sample case. The procedure is applied to rainfall data and to an actuarial dataset. A multivariate extension is also discussed.
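The compound Poisson exponential model referred to above is straightforward to simulate, which is also what the parametric bootstrap relies on: N is Poisson, and given N, S is the sum of N i.i.d. exponential amounts. A minimal stdlib sketch with illustrative parameters (not taken from the paper):

```python
import random

def sample_SN(lam, mean_amount, rng):
    """One observation (S, N) from the compound Poisson exponential
    model: N ~ Poisson(lam); given N, S is the sum of N i.i.d.
    exponential amounts with the given mean (S = 0 when N = 0)."""
    # Poisson sampling via exponential inter-arrival times on [0, 1]
    n, t = 0, rng.expovariate(lam)
    while t <= 1.0:
        n += 1
        t += rng.expovariate(lam)
    s = sum(rng.expovariate(1.0 / mean_amount) for _ in range(n))
    return s, n

rng = random.Random(1)
data = [sample_SN(lam=5.0, mean_amount=2.0, rng=rng) for _ in range(20000)]
# Theoretical mean: E[S] = lam * mean_amount = 10
mean_S = sum(s for s, _ in data) / len(data)
print(round(mean_S, 2))
```

In the bootstrap, samples like `data` would be drawn from the model with parameters estimated from the observed data, and the test statistic recomputed on each resample to obtain critical values.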
The fourth industrial revolution presents a multitude of challenges for industries, one of which is the increased flexibility required of manufacturing lines as a result of increased consumer demand for individualised products. One solution to this challenge is the digital twin, more specifically the standardised model of a digital twin known as the asset administration shell. The standardisation of an industry-wide communication tool is a critical step in enabling inter-company operations. This paper discusses the current state of asset administration shells, the frameworks used to host them, and the problems that need to be addressed. To tackle these issues, we propose an event-based server capable of drastically reducing response times between assets and asset administration shells, and a multi-agent system used for the orchestration and deployment of the shells in the field.
The work presented in this report provides scientific support to building renovation policies in the EU by promoting a holistic point of view on the topic. Integrated renovation can be seen as a nexus between European policies on disaster resilience, energy efficiency and circularity in the building sector. An overview of policy measures for the seismic and energy upgrading of buildings across EU Member States identified only a few available measures for combined upgrading. Regulatory framework, financial instruments and digital tools similar to those for energy renovation, together with awareness and training may promote integrated renovation. A framework for regional prioritisation of building renovation was put forward, considering seismic risk, energy efficiency, and socioeconomic vulnerability independently and in an integrated way. Results indicate that prioritisation of building renovation is a multidimensional problem. Depending on priorities, different integrated indicators should be used to inform policies and accomplish the highest relative or most spread impact across different sectors. The framework was further extended to assess the impact of renovation scenarios across the EU with a focus on priority regions. Integrated renovation can provide a risk-proofed, sustainable, and inclusive built environment, presenting an economic benefit in the order of magnitude of the highest benefit among the separate interventions. Furthermore, it presents the unique capability of reducing fatalities and energy consumption at the same time and, depending on the scenario, to a greater extent.
Industrial facilities must be thoroughly designed to withstand seismic actions as they exhibit an increased loss potential due to the possibly wide-ranging damage consequences and the valuable process engineering equipment. Past earthquakes showed the social and political consequences of seismic damage to industrial facilities and sensitized the population and politicians worldwide to the possible hazard emanating from industrial facilities. However, a holistic approach for the seismic design of industrial facilities can presently be found neither in national nor in international standards. The introduction of EN 1998-4 of the new generation of Eurocode 8 will improve the normative situation with specific seismic design rules for silos, tanks, pipelines and secondary process components. The article presents essential aspects of the seismic design of industrial facilities based on the new generation of Eurocode 8 using the example of tank structures and secondary process components. The interaction effects of the process components with the primary structure are illustrated by means of the experimental results of a shaking table test of a three-story moment-resisting steel frame with different process components. Finally, an integrated approach of digital plant models based on building information modelling (BIM) and structural health monitoring (SHM) is presented, which provides not only a reliable decision-making basis for operation, maintenance and repair but also an excellent tool for rapid assessment of seismic damage.
Inference based on high-dimensional data and inference based on functional data are two topics discussed frequently in the current statistical literature. A way to include both topics in a single approach is to work on a very general space for the underlying observations, such as a separable Hilbert space. We propose a general method for consistent hypothesis testing on the basis of random variables with values in separable Hilbert spaces. We avoid concerns with the curse of dimensionality through a projection idea. We apply well-known test statistics from nonparametric inference to the projected data and integrate over all projections from a specific set and with respect to suitable probability measures. In contrast to classical methods, which are applicable to real-valued random variables or random vectors of dimension lower than the sample size, the tests can be applied to random vectors of dimension larger than the sample size or even to functional and high-dimensional data. In general, resampling procedures such as the bootstrap or permutation are suitable to determine critical values. The idea can be extended to the case of incomplete observations. Moreover, we develop an efficient algorithm for implementing the method. Examples are given for testing goodness-of-fit in a one-sample situation in [1] or for testing marginal homogeneity on the basis of a paired sample in [2]. Here, the test statistics in use can be seen as generalizations of the well-known Cramér–von Mises test statistics in the one-sample and two-sample cases. The treatment of other testing problems is possible as well. By using the theory of U-statistics, for instance, asymptotic null distributions of the test statistics are obtained as the sample size tends to infinity. Standard continuity assumptions ensure the asymptotic exactness of the tests under the null hypothesis and that the tests detect any alternative in the limit.
Simulation studies demonstrate size and power of the tests in the finite sample case, confirm the theoretical findings, and are used for comparison with competing procedures. A possible application of the general approach is inference for stock market returns, also at high data frequencies. In the field of empirical finance, statistical inference on stock market prices usually takes place on the basis of the related log-returns as data. In the classical models for stock prices, i.e., the exponential Lévy model, Black–Scholes model, and Merton model, properties such as independence and stationarity of the increments ensure an independent and identically distributed structure of the data. Specific trends during certain periods of the stock price processes can cause complications in this regard. In fact, our approach can compensate for those effects by treating the log-returns as random vectors or even as functional data.
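The projection idea described above can be sketched in a few lines: draw random directions, project the high-dimensional observations onto each, compute a classical one-dimensional statistic on the projections, and aggregate. A hedged toy version (a two-sample Cramér–von Mises statistic averaged over random unit-vector projections; the simple averaging is an illustration, not the paper's exact integration construction):

```python
import random

def cvm_two_sample(x, y):
    """Two-sample Cramér-von Mises statistic: squared difference of
    the empirical CDFs, summed over the pooled sample."""
    n, m = len(x), len(y)
    diff_sq = 0.0
    for t in sorted(x + y):
        Fx = sum(v <= t for v in x) / n
        Fy = sum(v <= t for v in y) / m
        diff_sq += (Fx - Fy) ** 2
    return n * m / (n + m) ** 2 * diff_sq

def projected_cvm(X, Y, n_proj, rng):
    """Average the 1-D statistic over random unit-vector projections."""
    d = len(X[0])
    total = 0.0
    for _ in range(n_proj):
        u = [rng.gauss(0.0, 1.0) for _ in range(d)]
        norm = sum(c * c for c in u) ** 0.5
        u = [c / norm for c in u]
        px = [sum(a * b for a, b in zip(row, u)) for row in X]
        py = [sum(a * b for a, b in zip(row, u)) for row in Y]
        total += cvm_two_sample(px, py)
    return total / n_proj

rng = random.Random(0)
d = 50  # dimension deliberately larger than the sample size
X  = [[rng.gauss(0.0, 1.0) for _ in range(d)] for _ in range(30)]
Y0 = [[rng.gauss(0.0, 1.0) for _ in range(d)] for _ in range(30)]  # same law
Y1 = [[rng.gauss(3.0, 1.0) for _ in range(d)] for _ in range(30)]  # shifted
stat_null  = projected_cvm(X, Y0, n_proj=25, rng=rng)
stat_shift = projected_cvm(X, Y1, n_proj=25, rng=rng)
```

Critical values for such a statistic would come from a resampling procedure (bootstrap or permutation), as noted in the text; the sketch only shows why the projected statistic separates the null from a shifted alternative.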
Nanoparticles are recognized as highly attractive tunable materials for designing field-effect biosensors with enhanced performance. In this work, we present a theoretical model for electrolyte-insulator-semiconductor capacitors (EISCAP) decorated with ligand-stabilized charged gold nanoparticles. The charged AuNPs are taken into account as additional, nanometer-sized local gates. The capacitance-voltage (C–V) curves and constant-capacitance (ConCap) signals of the AuNP-decorated EISCAPs have been simulated. The impact of the AuNP coverage on the shift of the C–V curves and the ConCap signals was also studied experimentally on Al–p-Si–SiO₂ EISCAPs decorated with positively charged aminooctanethiol-capped AuNPs. In addition, the surface of the EISCAPs, modified with AuNPs, was characterized by scanning electron microscopy for different immobilization times of the nanoparticles.
Frequency mixing magnetic detection (FMMD) has been explored for applications in the fields of magnetic biosensing, multiplex detection of magnetic nanoparticles (MNP) and the determination of the core size distribution of MNP samples. Such applications rely on the application of a static offset magnetic field, which is traditionally generated with an electromagnet. Such a setup requires a current source as well as passive or active cooling strategies, which directly limits the portability desired for point-of-care (POC) monitoring applications. In this work, a measurement head is introduced that utilizes two ring-shaped permanent magnets to generate a static offset magnetic field. A steel cylinder in the ring bores homogenizes the field. By varying the distance between the ring magnets and the thickness of the steel cylinder, the magnitude of the magnetic field at the sample position can be adjusted. Furthermore, the measurement setup is compared to the electromagnet offset module based on measured signals and temperature behavior.
Carbon nanofiber nonwovens represent a powerful class of materials with prospective application in filtration technology or as electrodes with high surface area in batteries, fuel cells, and supercapacitors. While new precursor-to-carbon conversion processes have been explored to overcome productivity restrictions for carbon fiber tows, alternatives for the two-step thermal conversion of polyacrylonitrile precursors into carbon fiber nonwovens are absent. In this work, we develop a continuous roll-to-roll stabilization process using an atmospheric pressure microwave plasma jet. We explore the influence of various plasma-jet parameters on the morphology of the nonwoven and compare the stabilized nonwoven to thermally stabilized samples using scanning electron microscopy, differential scanning calorimetry, and infrared spectroscopy. We show that stabilization with a non-equilibrium plasma-jet can be twice as productive as the conventional thermal stabilization in a convection furnace, while producing electrodes of comparable electrochemical performance.
In this study, an online multi-sensing platform was engineered to simultaneously evaluate various process parameters of food package sterilization using gaseous hydrogen peroxide (H₂O₂). The platform enabled the validation of critical aseptic parameters. In parallel, one series of microbiological count reduction tests was performed using highly resistant spores of B. atrophaeus DSM 675 to act as the reference method for sterility validation. By means of the multi-sensing platform together with microbiological tests, we examined sterilization process parameters to define the most effective conditions with regards to the highest spore kill rate necessary for aseptic packaging. As these parameters are mutually associated, a correlation between different factors was elaborated. The resulting correlation indicated the need for specific conditions regarding the applied H₂O₂ gas temperature, the gas flow and concentration, the relative humidity and the exposure time. Finally, the novel multi-sensing platform together with the mobile electronic readout setup allowed for the online and on-site monitoring of the sterilization process, selecting the best conditions for sterility and, at the same time, reducing the use of the time-consuming and costly microbiological tests that are currently used in the food package industry.
Objective
Hemodialysis patients show an approximately threefold higher prevalence of cognitive impairment compared to the age-matched general population. Impaired microcirculatory function is one of the assumed causes. Dynamic retinal vessel analysis is a quantitative method for measuring neurovascular coupling and microvascular endothelial function. We hypothesize that cognitive impairment is associated with altered microcirculation of retinal vessels.
Methods
152 chronic hemodialysis patients underwent cognitive testing using the Montreal Cognitive Assessment. Retinal microcirculation was assessed by Dynamic Retinal Vessel Analysis, which carries out an examination recording retinal vessels' reaction to a flicker light stimulus under standardized conditions.
Results
In unadjusted as well as in adjusted linear regression analyses a significant association between the visuospatial executive function domain score of the Montreal Cognitive Assessment and the maximum arteriolar dilation as response of retinal arterioles to the flicker light stimulation was obtained.
Conclusion
This is the first study determining retinal microvascular function as a surrogate for cerebral microvascular function and cognition in hemodialysis patients. The relationship between impairment in executive function and reduced arteriolar reaction to flicker light stimulation supports the involvement of cerebral small vessel disease as a contributing factor in the development of cognitive impairment in this patient population and might be a target for noninvasive disease monitoring and therapeutic intervention.
Monte Carlo Tree Search (MCTS) is a search technique that in the last decade emerged as a major breakthrough for Artificial Intelligence applications in board and video games. In 2016, AlphaGo, an MCTS-based software agent, outperformed the human world champion of the board game Go. This game was long considered almost infeasible for machines, due to its immense search space and the need for a long-term strategy. Since this historic success, MCTS has been considered an effective new approach for many other scientific and technical problems. Interestingly, civil structural engineering, as a discipline, offers many tasks whose solution may benefit from intelligent search and in particular from adopting MCTS as a search tool. In this work, we show how MCTS can be adapted to search for suitable solutions to a structural engineering design problem. The problem consists of choosing the load-bearing elements in a reference reinforced concrete structure so as to achieve a set of specific dynamic characteristics. In the paper, we report the results obtained by applying both a plain and a hybrid version of single-agent MCTS. The hybrid approach integrates MCTS with a classic Genetic Algorithm (GA), the latter also serving as a term of comparison for the results. The study’s outcomes may open new perspectives for the adoption of MCTS as a design tool for civil engineers.
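At the heart of MCTS's selection step is the UCT rule, which balances exploiting children with high mean reward against exploring rarely visited ones. A minimal generic sketch (illustrative only; the paper's single-agent and hybrid GA variants add considerably more machinery):

```python
import math

def uct_score(value_sum, visits, parent_visits, c=1.41):
    """UCT: mean reward plus an exploration bonus that shrinks
    as the child is visited more often."""
    if visits == 0:
        return float("inf")  # unvisited children are tried first
    return value_sum / visits + c * math.sqrt(math.log(parent_visits) / visits)

def select_child(children):
    """children: list of (value_sum, visits) pairs.
    Returns the index of the child maximizing the UCT score."""
    parent_visits = sum(v for _, v in children)
    scores = [uct_score(val, vis, parent_visits) for val, vis in children]
    return scores.index(max(scores))

# A rarely visited child with a *lower* mean (0.5 vs 0.9) still wins
# here, because its exploration bonus dominates.
children = [(9.0, 10), (0.5, 1)]
print(select_child(children))  # → 1
```

In a design-search setting, each child would correspond to a partial choice of load-bearing elements, and the "reward" of a rollout to how closely the completed configuration matches the target dynamic characteristics.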
Because of their simple construction process, high energy efficiency, significant fire resistance and excellent sound insulation, masonry-infilled reinforced concrete (RC) frame structures are very popular in most countries of the world, including seismically active areas. However, many RC frame structures with masonry infills were seriously damaged during earthquake events, as traditional infills are generally constructed in direct contact with the RC frame, which brings undesirable infill/frame interaction. This interaction leads to the activation of the equivalent diagonal strut in the infill panel due to the RC frame deformation and, combined with seismically induced loads perpendicular to the infill panel, often causes total collapse of the masonry infills and heavy damage to the RC frames. This fact motivated the development of different approaches for improving the behaviour of masonry infills, among which infill isolation (decoupling) from the frame has been studied more intensively in the last decade. In-plane isolation of the infill wall reduces infill activation but creates the need for additional measures to restrain out-of-plane movements. This can be provided by installing steel anchors, as proposed by some researchers. Within the framework of the European research project INSYSME (Innovative Systems for Earthquake Resistant Masonry Enclosures in Reinforced Concrete Buildings), a system based on the use of elastomers for in-plane decoupling and steel anchors for out-of-plane restraint was developed. This constructive solution was tested and deeply investigated in an experimental campaign where traditional and decoupled masonry-infilled RC frames with anchors were subjected to separate and combined in-plane and out-of-plane loading. Based on a detailed evaluation and comparison of the test results, the performance and effectiveness of the developed system are illustrated.
In this study, the performance of an integrated body-imaging array for 7 T with 32 radiofrequency (RF) channels under consideration of local specific absorption rate (SAR), tissue temperature, and thermal dose limits was evaluated and the imaging performance was compared with a clinical 3 T body coil.
Thirty-two transmit elements were placed in three rings between the bore liner and RF shield of the gradient coil. Slice-selective RF pulse optimizations for B1 shimming and spokes were performed for differently oriented slices in the body under consideration of realistic constraints for power and local SAR. To improve the B1+ homogeneity, safety assessments based on temperature and thermal dose were performed to possibly allow for higher input power for the pulse optimization than permissible with SAR limits.
The results showed that using two spokes, the 7 T array outperformed the 3 T birdcage in all the considered regions of interest. However, a significantly higher SAR or lower duty cycle at 7 T is necessary in some cases to achieve similar B1+ homogeneity as at 3 T. The homogeneity in up to 50 cm-long coronal slices can particularly benefit from the high RF shim performance provided by the 32 RF channels. The thermal dose approach increases the allowable input power and the corresponding local SAR, in one example up to 100 W/kg, without limiting the exposure time necessary for an MR examination.
In conclusion, the integrated antenna array at 7 T enables a clinical workflow for body imaging and comparable imaging performance to a conventional 3 T clinical body coil.
Recent earthquakes such as the 2012 Emilia earthquake sequence showed that recently built unreinforced masonry (URM) buildings behaved much better than expected and sustained, although the maximum PGA values ranged between 0.20 and 0.30 g, either minor damage or structural damage that is deemed repairable. Especially low-rise residential and commercial masonry buildings with a code-conforming seismic design and detailing behaved in general very well, without substantial damage. The low damage grades of modern masonry buildings observed during this earthquake series highlighted again that codified design procedures based on linear analysis can be rather conservative. Although advances in simulation tools make nonlinear calculation methods more readily accessible to designers, linear analyses will remain the standard design method for years to come. The present paper aims to improve the linear seismic design method by providing a proper definition of the q-factor of URM buildings. These q-factors are derived for low-rise URM buildings with rigid diaphragms, which represent recent construction practice in low-to-moderate seismic areas of Italy and Germany. The behaviour factor components for deformation and energy dissipation capacity and for overstrength due to the redistribution of forces are derived by means of pushover analyses. Furthermore, considerations on the behaviour factor component due to other sources of overstrength in masonry buildings are presented. As a result of the investigations, rationally based values of the behaviour factor q in the range of 2.0–3.0 are proposed for use in linear analyses.
In proton therapy, the dose from secondary neutrons to the patient can contribute to side effects and the creation of secondary cancer. A simple and fast detection system to distinguish between dose from protons and neutrons, both in pretreatment verification as well as potentially in in vivo monitoring, is needed to minimize dose from secondary neutrons. Two 3 mm long, 1 mm diameter organic scintillators were tested for candidacy to be used in a proton–neutron discrimination detector. The SCSF-3HF (1500) scintillating fibre (Kuraray Co. Chiyoda-ku, Tokyo, Japan) and EJ-260 plastic scintillator (Eljen Technology, Sweetwater, TX, USA) were irradiated at the TRIUMF Neutron Facility and the Proton Therapy Research Centre. In the proton beam, we compared the raw Bragg peak and spread-out Bragg peak response to the industry-standard Markus chamber detector. Both scintillator sensors exhibited quenching at high LET in the Bragg peak, presenting a peak-to-entrance ratio of 2.59 for the EJ-260 and 2.63 for the SCSF-3HF fibre, compared to 3.70 for the Markus chamber. The SCSF-3HF sensor demonstrated 1.3 times the sensitivity to protons and 3 times the sensitivity to neutrons as compared to the EJ-260 sensor. Combined with our equations relating neutron and proton contributions to dose during proton irradiations, and the application of Birks’ quenching correction, these fibres provide valid candidates for inexpensive and replicable proton–neutron discrimination detectors.
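Birks' quenching correction mentioned above relates scintillation light yield to stopping power; in its standard textbook form (symbols are the conventional ones, not taken from the paper):

```latex
% Birks' law: light yield per unit path length saturates at high
% stopping power (LET), which produces the observed quenching
% of the Bragg peak response
\frac{dL}{dx} = \frac{S\,\dfrac{dE}{dx}}{1 + k_B\,\dfrac{dE}{dx}}
```

Here L is the light yield, S the scintillation efficiency, dE/dx the stopping power, and k_B Birks' constant of the scintillator material; at low LET the response is linear in dE/dx, while at high LET it saturates, explaining the reduced peak-to-entrance ratios of the scintillators relative to the Markus chamber.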
Recent earthquakes showed that low-rise URM buildings following code-compliant seismic design and details behaved in general very well without substantial damages. Although advances in simulation tools make nonlinear calculation methods more readily accessible to designers, linear analyses will still be the standard design method for years to come. The present paper aims to improve the linear seismic design method by providing a proper definition of the q-factor of URM buildings. Values of q-factors are derived for low-rise URM buildings with rigid diaphragms, with reference to modern structural configurations realized in low to moderate seismic areas of Italy and Germany. The behaviour factor components for deformation and energy dissipation capacity and for overstrength due to the redistribution of forces are derived by means of pushover analyses. As a result of the investigations, rationally based values of the behaviour factor q to be used in linear analyses in the range of 2.0 to 3.0 are proposed.
Benchmarking of various LiDAR sensors for use in self-driving vehicles in real-world environments
(2022)
Abstract
In this paper, we report on our benchmark results for the LiDAR sensors Livox Horizon, Robosense M1, Blickfeld Cube, Blickfeld Cube Range, Velodyne Velarray H800, and Innoviz Pro. The idea was to test the sensors in different typical scenarios that were defined with real-world use cases in mind, in order to find a sensor that meets the requirements of self-driving vehicles. For this, we defined static and dynamic benchmark scenarios. In the static scenarios, neither the LiDAR sensor nor the detection target moves during the measurement. In the dynamic scenarios, the LiDAR sensor was mounted on a vehicle driving toward the detection target. We tested all mentioned LiDAR sensors in both scenarios, show the results regarding the detection accuracy of the targets, and discuss their usefulness for deployment in self-driving cars.
It was generally believed that coal sources are not favorable as habitats for microorganisms due to their recalcitrant chemical nature and negligible decomposition. However, accumulating evidence has revealed the presence of diverse microbial groups in coal environments and their significant metabolic role in coal biogeochemical dynamics and ecosystem functioning. The high oxygen content, organic fractions, and lignin-like structures of lower-rank coals may provide effective means for microbial attack, still representing a greatly unexplored frontier in microbiology. Coal degradation/conversion technology by native bacterial and fungal species has great potential in agricultural development, chemical industry production, and environmental rehabilitation. Furthermore, native microalgal species can offer a sustainable energy source and an excellent bioremediation strategy applicable to coal spill/seam waters. Additionally, measures of the fate of the microbial community would serve as an indicator of restoration progress on post-coal-mining sites. This review puts forward a comprehensive vision of coal biodegradation and bioprocessing by microorganisms native to coal environments for determining their biotechnological potential and possible applications.
Atmospheric pressure plasma-jet treatment of PAN-nonwovens—carbonization of nanofiber electrodes
(2022)
Carbon nanofibers are produced from dielectric polymer precursors such as polyacrylonitrile (PAN). Carbonized nanofiber nonwovens show high surface area and good electrical conductivity, rendering these fiber materials interesting for application as electrodes in batteries, fuel cells, and supercapacitors. However, thermal processing is slow and costly, which is why new processing techniques have been explored for carbon fiber tows. Alternatives for the conversion of PAN-precursors into carbon fiber nonwovens are scarce. Here, we utilize an atmospheric pressure plasma jet to conduct carbonization of stabilized PAN nanofiber nonwovens. We explore the influence of various processing parameters on the conductivity and degree of carbonization of the converted nanofiber material. The precursor fibers are converted by plasma-jet treatment to carbon fiber nonwovens within seconds, by which they develop a rough surface making subsequent surface activation processes obsolete. The resulting carbon nanofiber nonwovens are applied as supercapacitor electrodes and examined by cyclic voltammetry and impedance spectroscopy. Nonwovens that are carbonized within 60 s show capacitances of up to 5 F g⁻¹.
This work introduces a novel method for the detection of H₂O₂ vapor/aerosol of low concentrations, which is mainly applied in the sterilization of equipment in the medical industry. Interdigitated electrode (IDE) structures have been fabricated by means of microfabrication techniques. A differential setup of IDEs was prepared, containing an active sensor element (active IDE) and a passive sensor element (passive IDE), where the former was immobilized with an enzymatic membrane of horseradish peroxidase that is selective towards H₂O₂. Changes in the IDEs’ capacitance values (active sensor element versus passive sensor element) under H₂O₂ vapor/aerosol atmosphere demonstrated detection over the concentration range up to 630 ppm with a fast response time (<60 s). The influence of relative humidity on the sensor signal was also tested, showing no cross-sensitivity. The repeatability assessment of the IDE biosensors confirmed their stable capacitive signal in eight subsequent cycles of exposure to H₂O₂ vapor/aerosol. Room-temperature detection of H₂O₂ vapor/aerosol with such miniaturized biosensors will allow a future three-dimensional, flexible mapping of aseptic chambers and help to evaluate sterilization assurance in the medical industry.
This paper considers a paired data framework and discusses the question of marginal homogeneity of bivariate high-dimensional or functional data. The related testing problem can be embedded into a more general setting for paired random variables taking values in a general Hilbert space. To address this problem, a Cramér–von-Mises type test statistic is applied and a bootstrap procedure is suggested to obtain critical values and, finally, a consistent test. The desired properties of a bootstrap test are derived, namely asymptotic exactness under the null hypothesis and consistency under alternatives. Simulations show the quality of the test in the finite sample case. A possible application is the comparison of two possibly dependent stock market returns based on functional data. The approach is demonstrated using historical data for different stock market indices.
On the basis of independent and identically distributed bivariate random vectors, whose components are categorical and continuous variables, respectively, the related concomitants, also called induced order statistics, are considered. The main theoretical result is a functional central limit theorem for the empirical process of the concomitants in a triangular array setting. A natural application is hypothesis testing. An independence test and a two-sample test are investigated in detail. The fairly general setting enables limit results under local alternatives and for bootstrap samples. For comparison with existing tests from the literature, simulation studies are conducted. The empirical results obtained confirm the theoretical findings.
Altered gastrocnemius contractile behavior in former Achilles tendon rupture patients during walking
(2022)
Achilles tendon rupture (ATR) remains associated with functional limitations years after injury. Architectural remodeling of the gastrocnemius medialis (GM) muscle is typically observed in the affected leg and may compensate for force deficits caused by a longer tendon. Yet patients seem to retain functional limitations during low-force walking gait. To explore the potential limits imposed by the remodeled GM muscle-tendon unit (MTU) on walking gait, we examined the contractile behavior of muscle fascicles during the stance phase. In a cross-sectional design, we studied nine former patients (males; age: 45 ± 9 years; height: 180 ± 7 cm; weight: 83 ± 6 kg) with a history of complete unilateral ATR, approximately 4 years post-surgery. Using ultrasonography, GM tendon morphology, muscle architecture at rest, and fascicular behavior were assessed during walking at 1.5 m⋅s⁻¹ on a treadmill. Walking patterns were recorded with a motion capture system. The unaffected leg served as control. Lower limb kinematics were largely similar between legs during walking. Typical features of ATR-related MTU remodeling were observed during the stance sub-phases corresponding to series elastic element (SEE) lengthening (energy storage) and SEE shortening (energy release), with shorter GM fascicles (36 and 36%, respectively) and greater pennation angles (8° and 12°, respectively). However, relative to the optimal fascicle length for force production, fascicles operated at comparable lengths in both legs. Similarly, when expressed relative to optimal fascicle length, fascicle contraction velocity was not different between sides, except at the time-point of peak SEE length, where it was 39 ± 49% lower in the affected leg. Concomitantly, fascicle rotation during contraction was greater in the affected leg during the whole stance phase, and the architectural gear ratio (AGR) was larger during SEE lengthening.
Under the present testing conditions, former ATR patients had recovered a relatively symmetrical walking gait pattern. The differences seen in AGR seem to accommodate the profound changes in MTU architecture, limiting the required fascicle shortening velocity. Overall, the contractile behavior of the GM fascicles does not restrict length- or velocity-dependent force potentials during this locomotor task.
This study aims to quantify the kinematics, kinetics and muscular activity of all-out handcycling exercise and to examine their alterations during the course of a 15-s sprint test. Twelve able-bodied competitive triathletes performed a 15-s all-out sprint test in a recumbent racing handcycle that was attached to an ergometer. During the sprint test, tangential crank kinetics, 3D joint kinematics and muscular activity of 10 muscles of the upper extremity and trunk were examined using a power meter, motion capturing and surface electromyography (sEMG), respectively. Parameters were compared between revolution one (R1), revolution two (R2), the average of revolutions 3 to 13 (R3) and the average of the remaining revolutions (R4). Shoulder abduction and internal rotation increased, whereas maximal shoulder retroversion decreased during the sprint. Except for the wrist angles, angular velocity increased for every joint of the upper extremity. Several muscles demonstrated an increase in muscular activation, an earlier onset of muscular activation in the crank cycle and an increased range of activation. During the course of a 15-s all-out sprint test in handcycling, the shoulder muscles and the muscles associated with the push phase show indications of short-duration fatigue. These findings are helpful to prevent injuries and improve performance in all-out handcycling.
Landslides, rock falls or related subaerial and subaqueous mass slides can generate devastating impulse waves in adjacent waterbodies. Such waves can occur in lakes and fjords, or due to glacier calving in bays or at steep ocean coastlines. Infrastructure and residential houses along the coastlines of those waterbodies are often situated on low-elevation terrain and are potentially at risk from inundation. Impulse waves running up a uniform slope and generating an overland flow over an initially dry adjacent horizontal plane represent a frequently found scenario, which needs to be better understood for disaster planning and mitigation. This study presents a novel set of large-scale flume tests focusing on solitary waves propagating over a 1:14.5 slope and breaking onto a horizontal section. Examining the characteristics of overland flow, this study gives, for the first time, insight into the fundamental process of overland flow of a broken solitary wave: its shape and celerity, as well as its momentum when wave breaking has taken place beforehand.
Damage of reinforced concrete (RC) frames with masonry infill walls has been observed after many earthquakes. Brittle behaviour of the masonry infills in combination with the ductile behaviour of the RC frames makes infill walls prone to damage during earthquakes. Interstory deformations lead to an interaction between the infill and the RC frame, which affects the structural response. The result of this interaction is significant damage to the infill wall and sometimes to the surrounding structural system too. In most design codes, infill walls are considered as non-structural elements and neglected in the design process, because taking the infills into account and considering the interaction between frame and infill in software packages can be complicated and impractical. A good way to avoid the negative aspects arising from this behaviour is to ensure no or low interaction between the frame and infill wall, for instance by decoupling the infill from the frame. This paper presents a numerical study performed to investigate a new connection system called INODIS (Innovative Decoupled Infill System) for decoupling infill walls from the surrounding frame, with the aim of postponing infill activation to high interstory drifts, thus reducing infill/frame interaction and minimizing damage to both infills and frames. The experimental results are first used for calibration and validation of the numerical model, which is then employed to investigate the influence of the material parameters as well as the infill and frame geometry on the in-plane behaviour of infilled frames with the INODIS system. For all the investigated situations, simulation results show significant improvements in behaviour for decoupled infilled RC frames in comparison to traditionally infilled frames.
In general aviation, too, it is desirable to be able to operate existing internal combustion engines with fuels that produce less CO₂ than the Avgas 100LL widely used today. It can be assumed that, in comparison, the fuels CNG, LPG or LNG, which are gaseous under normal conditions, produce significantly lower emissions. Necessary propulsion system adaptations were investigated as part of a research project at Aachen University of Applied Sciences.
GHEtool is a Python package that contains all the functionalities needed to deal with borefield design. It is developed for both researchers and practitioners. The core of this package is the automated sizing of borefields under different conditions. The sizing of a borefield is typically slow due to the high complexity of the mathematical background, taking on the order of minutes. Because this tool builds on a large amount of precalculated data, GHEtool can size a borefield in the order of tenths of milliseconds. Therefore, this tool is well suited for implementation in typical workflows where iterations are required.
GHEtool also comes with a graphical user interface (GUI). This GUI is prebuilt as an exe-file, which provides access to all the functionalities without coding. An installer that places the GUI at a user-defined location is also implemented and available at: https://www.mech.kuleuven.be/en/tme/research/thermal_systems/tools/ghetool.
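The speed gain from precalculated data can be illustrated with a self-contained toy example (this is not the GHEtool API; the response function, names, and numbers below are purely illustrative assumptions): an expensive response table is computed once, and each sizing iteration then only interpolates it.

```python
import numpy as np

# Illustrative only: precompute a "response" table once (the slow part),
# then reuse it in every sizing iteration via cheap interpolation.
depths = np.linspace(50.0, 350.0, 61)      # candidate borehole depths [m] (assumed range)
response = np.log(depths) / depths         # stand-in for a precalculated thermal response

def peak_temperature_rise(depth, load_kw=30.0):
    """Interpolate the precalculated table instead of re-solving the full model."""
    return load_kw * np.interp(depth, depths, response)

def size_borefield(t_limit, lo=50.0, hi=350.0, tol=0.01):
    """Bisection on depth: smallest depth keeping the temperature rise below t_limit."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if peak_temperature_rise(mid) > t_limit:
            lo = mid       # too shallow: rise exceeds the limit
        else:
            hi = mid
    return hi

required_depth = size_borefield(t_limit=0.6)   # converges in ~15 interpolation calls
```

The pattern is the one the package description implies: pay the modelling cost once offline, then iterate cheaply, which is what makes sizing fast enough for iterative design workflows.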
Using optimization to design a renewable energy system has become a computationally demanding task, because high temporal fluctuations of demand and supply arise within the considered time series. The aggregation of typical operation periods has become a popular method to reduce this effort. These operation periods are modelled independently and cannot interact in most cases. Consequently, seasonal storage cannot be reproduced. This inability can lead to a significant error, especially for energy systems with a high share of fluctuating renewable energy. The previous paper, “Time series aggregation for energy system design: Modeling seasonal storage”, developed a seasonal storage model to address this issue. Simultaneously, the paper “Optimal design of multi-energy systems with seasonal storage” developed a different approach. This paper aims to review these models and extend the first one. The extension is a mathematical reformulation that decreases the number of variables and constraints. Furthermore, it aims to reduce the calculation time while achieving the same results.
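The typical-period aggregation mentioned above can be sketched in a few lines (a minimal illustration under synthetic data, not the models of the cited papers): a year of hourly demand is clustered into a handful of typical days. The resulting days are mutually independent, which is precisely why coupling them through seasonal storage requires the extra modelling the abstract discusses.

```python
import numpy as np

# Synthetic year of hourly demand (illustrative data, not from the papers).
rng = np.random.default_rng(0)
demand = 50 + 20 * np.sin(np.linspace(0, 2 * np.pi, 8760)) + rng.normal(0, 5, 8760)
days = demand.reshape(365, 24)                 # one row per day

# Plain Lloyd-style k-means into k "typical days".
k = 4
centers = days[rng.choice(365, k, replace=False)].copy()
for _ in range(20):
    dist = ((days[:, None, :] - centers[None]) ** 2).sum(axis=2)
    labels = dist.argmin(axis=1)               # assign each day to nearest typical day
    for j in range(k):
        if np.any(labels == j):
            centers[j] = days[labels == j].mean(axis=0)

weights = np.bincount(labels, minlength=k)     # how many real days each typical day represents
```

The optimization model then only sees `k` typical days plus their weights instead of 365 days, which is where the reduction in variables and constraints comes from.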
The development of prototype applications with sensors and actuators in the automation industry requires tools that are independent of the manufacturer and flexible enough to be modified or extended for any specific requirements. Currently, developing prototypes with industrial sensors and actuators is not straightforward. First of all, the exchange of information depends on the industrial protocol that these devices use. Second, a specific configuration and installation is done based on the hardware that is used, such as automation controllers or industrial gateways. This means that the development for a specific industrial protocol highly depends on the hardware and the software that vendors provide. In this work we propose a rapid-prototyping framework based on Arduino to solve this problem. For this project we have focused on the IO-Link protocol. The framework consists of an Arduino shield that acts as the physical layer, and software that implements the IO-Link Master protocol. The main advantage of such a framework is that an application with industrial devices can be rapid-prototyped with ease, as it is vendor-independent, open-source and can be ported easily to other Arduino-compatible boards. In comparison, a typical approach requires proprietary hardware, is not easy to port to another system and is closed-source.
Digital twins are seen as one of the key technologies of Industry 4.0. Although many research groups focus on digital twins and create meaningful outputs, the technology has not yet reached broad application in industry. The main reasons for this imbalance are the complexity of the topic, the lack of specialists, and the unawareness of the opportunities digital twins offer. The project "Digital Twin Academy" aims to overcome these barriers by focusing on three actions: building a digital twin community for discussion and exchange, offering multi-stage training for various knowledge levels, and implementing real-world use cases for deeper insights and guidance. In this work, we focus on creating a flexible learning platform that allows the user to select a training path adjusted to personal knowledge and needs. Therefore, a mix of basic and advanced modules is created and expanded by individual feedback options. The usage of personas supports the selection of the appropriate modules.
Automated driving is now possible in diverse road and traffic conditions. However, there are still situations that automated vehicles cannot handle safely and efficiently. In this case, a Transition of Control (ToC) is necessary so that the driver takes control of the driving. Executing a ToC requires the driver to get full situation awareness of the driving environment. If the driver fails to get back the control in a limited time, a Minimum Risk Maneuver (MRM) is executed to bring the vehicle into a safe state (e.g., decelerating to full stop). The execution of ToCs requires some time and can cause traffic disruption and safety risks that increase if several vehicles execute ToCs/MRMs at similar times and in the same area. This study proposes to use novel C-ITS traffic management measures where the infrastructure exploits V2X communications to assist Connected and Automated Vehicles (CAVs) in the execution of ToCs. The infrastructure can suggest a spatial distribution of ToCs, and inform vehicles of the locations where they could execute a safe stop in case of MRM. This paper reports the first field operational tests that validate the feasibility and quantify the benefits of the proposed infrastructure-assisted ToC and MRM management. The paper also presents the CAV and roadside infrastructure prototypes implemented and used in the trials. The conducted field trials demonstrate that infrastructure-assisted traffic management solutions can reduce safety risks and traffic disruptions.
Quantitative evaluation of health management designs for fuel cell systems in transport vehicles
(2022)
Focusing on transport vehicles, mainly with regard to aviation applications, this paper presents a compilation and subsequent quantitative evaluation of methods aimed at building an optimum integrated health management solution for fuel cell systems. The methods are divided into two different main types and compiled in a related scheme. Furthermore, the different methods are analysed and evaluated based on parameters specific to the aviation context of this study. Finally, the most suitable method for use in fuel cell health management systems is identified and its performance and suitability are quantified.
Well-defined control strategies for fuel cells that can efficiently detect errors and take corrective action are critically important for safety in all applications, and especially so in aviation. The algorithms not only ensure operator safety by monitoring the fuel cell and connected components, but also contribute to extending the health of the fuel cell, its durability and safe operation over its lifetime. While sensors are used to provide peripheral data surrounding the fuel cell, the internal states of the fuel cell cannot be directly measured. To overcome this restriction, a Kalman filter has been implemented as an internal state observer.
Other safety conditions are evaluated using real-time data from every connected sensor, and corrective actions automatically take place to ensure safety. The algorithms discussed in this paper have been validated through Model-in-the-Loop (MiL) tests as well as practical validation at a dedicated test bench.
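The internal-state-observer idea can be sketched with a minimal linear Kalman filter (a generic illustration; the actual plant model, sensor set, and noise covariances of the fuel cell system are not reproduced here, and every value below is an assumption): a hidden internal state is estimated from a noisy peripheral sensor by alternating predict and update steps.

```python
import numpy as np

# Minimal scalar Kalman filter as a generic internal-state observer.
# All model values below are illustrative assumptions, not from the paper.
rng = np.random.default_rng(0)

A, H = 1.0, 1.0        # state transition and measurement models (scalar system)
Q, R = 1e-4, 0.25      # assumed process and measurement noise covariances

x_true, x_est, P = 1.0, 0.0, 1.0   # true state, initial estimate, initial variance
for _ in range(200):
    x_true = A * x_true                         # true (not directly measurable) state
    y = H * x_true + rng.normal(0.0, R ** 0.5)  # noisy peripheral sensor reading
    # predict step
    x_est = A * x_est
    P = A * P * A + Q
    # update step
    K = P * H / (H * P * H + R)                 # Kalman gain
    x_est = x_est + K * (y - H * x_est)
    P = (1.0 - K * H) * P
```

After a short transient, the estimate tracks the hidden state with a variance far below that of any single sensor reading, which is what makes such an observer useful for monitoring quantities that cannot be measured directly.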
The Industrial Revolution 4.0 (IR4.0) era has driven the introduction of many state-of-the-art technologies, especially in the automotive industry. The rapid development of the automotive industry in Europe has created a wide industry gap between the European Union (EU) and developing countries, such as those in South-East Asia (SEA). To address this situation, FH Joanneum, Austria, together with European partners from FH Aachen, Germany, and Politecnico di Torino, Italy, is taking the initiative to close the gap, utilizing the Erasmus+ United grant from the EU. A consortium was founded to carry out automotive technology transfer within the European framework to Malaysian, Indonesian and Thai Higher Education Institutions (HEI) as well as to automotive industries. This is to be achieved by establishing Engineering Knowledge Transfer Units (EKTU) in the respective SEA institutions, guided by the industry partners in their respective countries. These EKTUs can offer up-to-date, innovative, and high-quality training courses to increase graduates' employability in higher education institutions and strengthen relations between HEIs and the wider economic and social environment by addressing university-industry cooperation, which is the regional priority for Asia. It is expected that the Capacity Building Initiative will improve the quality of higher education and enhance its relevance for the labor market and society among the SEA partners. The outcome of this project will greatly benefit the partners through a strong and complementary partnership targeting the automotive industry and enhanced larger-scale international cooperation between the European and SEA partners. It will also prepare the SEA HEIs for sustainable partnerships with the automotive industry in the region as a means of income generation in the future.
Providing healthcare services frequently involves cognitively demanding tasks, including diagnoses and analyses as well as complex decisions about treatments and therapy. From a global perspective, ethically significant inequalities exist between regions where the expert knowledge required for these tasks is scarce or abundant. One possible strategy to diminish such inequalities and increase healthcare opportunities in expert-scarce settings is to provide healthcare solutions involving digital technologies that do not necessarily require the presence of a human expert, e.g., in the form of artificial intelligence-based decision-support systems (AI-DSS). Such algorithmic decision-making, however, is mostly developed in resource- and expert-abundant settings to support healthcare experts in their work. As a practical consequence, the normative standards and requirements for such algorithmic decision-making in healthcare require the technology to be at least as explainable as the decisions made by the experts themselves. The goal of providing healthcare in settings where resources and expertise are scarce might come with a normative pull to lower the normative standards of using digital technologies in order to provide at least some healthcare in the first place. We scrutinize this tendency to lower standards in particular settings from a normative perspective, distinguish between different types of absolute and relative, local and global standards of explainability, and conclude by defending an ambitious and practicable standard of local relative explainability.
Useful market simulations are key to the evaluation of different market designs consisting of multiple market mechanisms or rules. Yet a simulation framework designed with the comparison of different market mechanisms in mind was not found. The need to create an objective view on different sets of market rules while investigating meaningful agent strategies leads to the conclusion that such a simulation framework is needed to advance research on this subject. An overview of different existing market simulation models is given, which also shows the research gap and the missing capabilities of those systems. Finally, a methodology is outlined for how a novel market simulation that can answer the research questions can be developed.
Advances in polymer science have significantly increased polymer applications in the life sciences. We report the use of free-standing, ultra-thin polydimethylsiloxane (PDMS) membranes, called CellDrum, as cell culture substrates for an in vitro wound model. Dermal fibroblast monolayers from 28- and 88-year-old donors were cultured on CellDrums. By using stainless steel balls, circular cell-free areas were created in the cell layer (wounding). Sinusoidal strain (1 Hz, 5% strain) was applied to the membranes for 30 min in 4 sessions. The gap circumference and closure rate of un-stretched samples (controls) and stretched samples were monitored over 4 days to investigate the effects of donor age and mechanical strain on wound closure. A significant decrease in gap circumference and an increase in gap closure rate were observed in trained samples from younger donors and control samples from older donors. In contrast, a significant decrease in gap closure rate and an increase in wound circumference were observed in the trained samples from older donors. Through these results, we propose the model of a cell monolayer on stretchable CellDrums as a practical tool for wound healing research. The combination of biomechanical cell loading with analyses such as gene/protein expression seems promising beyond the scope published here.
Cell spraying has become a feasible application method for cell therapy and tissue engineering approaches. Different devices have been used with varying success. Often, twin-fluid atomizers are used, which require a high gas velocity for optimal aerosolization characteristics. To decrease the amount and velocity of required air, a custom-made atomizer was designed based on the effervescent principle. Different designs were evaluated regarding spray characteristics and their influence on human adipose-derived mesenchymal stromal cells. The arithmetic mean diameters of the droplets were 15.4–33.5 µm with decreasing diameters for increasing gas-to-liquid ratios. The survival rate was >90% of the control for the lowest gas-to-liquid ratio. For higher ratios, cell survival decreased to approximately 50%. Further experiments were performed with the design, which had shown the highest survival rates. After seven days, no significant differences in metabolic activity were observed. The apoptosis rates were not influenced by aerosolization, while high gas-to-liquid ratios caused increased necrosis levels. Tri-lineage differentiation potential into adipocytes, chondrocytes, and osteoblasts was not negatively influenced by aerosolization. Thus, the effervescent aerosolization principle was proven suitable for cell applications requiring reduced amounts of supplied air. This is the first time an effervescent atomizer was used for cell processing.
This dataset was acquired at field tests of the steerable ice-melting probe "EnEx-IceMole" (Dachwald et al., 2014). A field test in summer 2014 was used to test the melting probe's system, before the probe was shipped to Antarctica, where, in international cooperation with the MIDGE project, the objective of a sampling mission in the southern hemisphere summer 2014/2015 was to return a clean englacial sample from the subglacial brine reservoir supplying the Blood Falls at Taylor Glacier (Badgeley et al., 2017, German et al., 2021).
The standardized log-files generated by the IceMole during melting operation include more than 100 operational parameters, housekeeping information, and error states, which are reported to the base station in intervals of 4 s. Occasional packet loss in data transmission resulted in a small number of increased sampling intervals, which were compensated for by linear interpolation during post-processing. The presented dataset is based on a subset of this data: The penetration distance is calculated based on the ice screw drive encoder signal, providing the rate of rotation, and the screw's thread pitch. The melting speed is calculated from the same data, assuming the rate of rotation to be constant over one sampling interval. The contact force is calculated from the longitudinal screw force, which is measured by strain gauges. The used heating power is calculated from the binary states of all heating elements, which can only be either switched on or off. Temperatures are measured at each heating element and averaged for three zones (melting head, side-wall heaters and back-plate heaters).
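The derived quantities described above can be sketched as follows (a hedged illustration; the thread pitch and per-element power rating are assumed placeholder values, not the probe's actual specifications):

```python
# Illustrative post-processing of the derived quantities described above.
# THREAD_PITCH and the per-element power rating are assumed values.
THREAD_PITCH = 0.003       # ice-screw thread pitch [m per revolution] (assumed)
SAMPLE_INTERVAL = 4.0      # reporting interval [s], as stated for the log format

def penetration_distance(revolutions):
    """Distance advanced by the ice screw: rotations times thread pitch."""
    return revolutions * THREAD_PITCH

def melting_speed(delta_revolutions):
    """Mean speed over one interval, assuming a constant rate of rotation."""
    return penetration_distance(delta_revolutions) / SAMPLE_INTERVAL

def heating_power(element_states, element_power=100.0):
    """Total power from the binary on/off states of all heating elements."""
    return sum(element_states) * element_power

speed = melting_speed(2.5)                  # 2.5 revolutions in one 4 s interval
power = heating_power([1, 0, 1, 1, 0, 1])   # four of six elements switched on
```

The same interval-wise logic applies to the interpolated samples: once packet-loss gaps are filled, every 4 s record yields one value of each derived quantity.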
Exposure to prolonged periods in microgravity is associated with deconditioning of the musculoskeletal system due to chronic changes in mechanical stimulation. Given astronauts will operate on the Lunar surface for extended periods of time, it is critical to quantify both external (e.g., ground reaction forces) and internal (e.g., joint reaction forces) loads of relevant movements performed during Lunar missions. Such knowledge is key to predict musculoskeletal deconditioning and determine appropriate exercise countermeasures associated with extended exposure to hypogravity.
This article describes an Internet of things (IoT) sensing device with a wireless interface which is powered by the energy-harvesting method of the Wiegand effect. The Wiegand effect, in contrast to continuous sources like photovoltaic or thermal harvesters, provides small amounts of energy discontinuously in pulsed mode. To enable an energy-self-sufficient operation of the sensing device with this pulsed energy source, the output energy of the Wiegand generator is maximized. This energy is used to power up the system and to acquire and process data like position, temperature or other resistively measurable quantities as well as transmit these data via an ultra-low-power ultra-wideband (UWB) data transmitter. A proof-of-concept system was built to prove the feasibility of the approach. The energy consumption of the system during start-up was analysed, traced back in detail to the individual components, compared to the generated energy and processed to identify further optimization options. Based on the proof of concept, an application prototype was developed.
Motile cilia are hair-like cell extensions that beat periodically to generate fluid flow along various epithelial tissues within the body. In dense multiciliated carpets, cilia were shown to exhibit a remarkable coordination of their beat in the form of traveling metachronal waves, a phenomenon which supposedly enhances fluid transport. Yet, how cilia coordinate their regular beat in multiciliated epithelia to move fluids remains insufficiently understood, particularly due to lack of rigorous quantification. We combine experiments, novel analysis tools, and theory to address this knowledge gap. To investigate collective dynamics of cilia, we studied zebrafish multiciliated epithelia in the nose and the brain. We focused mainly on the zebrafish nose, due to its conserved properties with other ciliated tissues and its superior accessibility for non-invasive imaging. We revealed that cilia are synchronized only locally and that the size of local synchronization domains increases with the viscosity of the surrounding medium. Even though synchronization is local only, we observed global patterns of traveling metachronal waves across the zebrafish multiciliated epithelium. Intriguingly, these global wave direction patterns are conserved across individual fish, but different for left and right noses, unveiling a chiral asymmetry of metachronal coordination. To understand the implications of synchronization for fluid pumping, we used a computational model of a regular array of cilia. We found that local metachronal synchronization prevents steric collisions, i.e., cilia colliding with each other, and improves fluid pumping in dense cilia carpets, but hardly affects the direction of fluid flow. In conclusion, we show that local synchronization together with tissue-scale cilia alignment coincide and generate metachronal wave patterns in multiciliated epithelia, which enhance their physiological function of fluid pumping.
High aerodynamic efficiency requires propellers with high aspect ratios, while propeller sweep potentially reduces noise. Propeller sweep and high aspect ratios increase elasticity and the coupling of structural mechanics and aerodynamics, affecting propeller performance and noise. Therefore, this paper analyzes the influence of elasticity on forward-swept, backward-swept, and unswept propellers in hover conditions. A reduced-order blade element momentum approach is coupled with a one-dimensional Timoshenko beam theory and Farassat's formulation 1A. The results of the aeroelastic simulation are used as input for the aeroacoustic calculation. The analysis shows that elasticity influences noise radiation because thickness and loading noise respond differently to deformations. In the case of the backward-swept propeller, the location of the maximum sound pressure level shifts forward by 0.5°, while in the case of the forward-swept propeller, it shifts backward by 0.5°. Therefore, aeroacoustic optimization requires the consideration of propeller deformation.
This paper presents an approach to predicting the sound exposure on the ground caused by a landing aircraft with recuperating propellers. The noise source along the trajectory of a flight specified for a steeper approach is simulated based on measurements of sound power levels and additional parameters of a single propeller placed in a wind tunnel. To validate the measured data, these simulations are also supported by overflight measurements of a test aircraft. It is shown that the simple source models of propellers do not provide fully satisfactory results, since the sound levels are estimated too low. Nevertheless, with a further reference comparison, margins for an acceptable increase in the sound power level of the aircraft on its now steeper approach path could be estimated. Thus, in this case, a +7 dB increase in SWL would not increase the SEL compared to the conventional approach within only 2 km ahead of the airfield.
Dynamic loads significantly impact the structural design of propeller blades due to fatigue and static strength. Since propellers are elastic structures, deformations and aerodynamic loads are coupled. In the past, propeller manufacturers established procedures to determine unsteady aerodynamic loads and the structural response with analytical steady-state calculations. According to the approach, aeroelastic coupling primarily consists of torsional deformations. They neglect bending deformations, deformation velocities, and inertia terms. This paper validates the assumptions above for a General Aviation propeller and a lift propeller for urban air mobility or large cargo drones. Fully coupled reduced-order simulations determine the dynamic loads in the time domain. A quasi-steady blade element momentum approach transfers loads to one-dimensional finite beam elements. The simulation results are in relatively good agreement with the analytical method for the General Aviation propeller but show increasing errors for the slender lift propeller. The analytical approach is modified to consider the induced velocities. Still, inertia and velocity proportional terms play a significant role for the lift propeller due to increased elasticity. The assumption that only torsional deformations significantly impact the dynamic loads of propellers is not valid. Adequate determination of dynamic loads of such designs requires coupled aeroelastic simulations or advanced analytical procedures.
In this paper, we provide an analytical study of the transmission eigenvalue problem with two conductivity parameters. We will assume that the underlying physical model is given by the scattering of a plane wave for an isotropic scatterer. In previous studies, this eigenvalue problem was analyzed with one conductive boundary parameter whereas we will consider the case of two parameters. We prove the existence and discreteness of the transmission eigenvalues as well as study the dependence on the physical parameters. We are able to prove monotonicity of the first transmission eigenvalue with respect to the parameters and consider the limiting procedure as the second boundary parameter vanishes. Lastly, we provide extensive numerical experiments to validate the theoretical work.
The Cramér-von-Mises distance is applied to the distribution of the excess over a confidence level. Asymptotics of related statistics are investigated, and it is seen that the obtained limit distributions differ from the classical ones. For that reason, quantiles of the new limit distributions are given, and new bootstrap techniques for approximation purposes are introduced and justified. The results motivate new one-sample goodness-of-fit tests for the distribution of the excess over a confidence level and a new confidence interval for the related fitting error. Simulation studies investigate the size and power of the tests as well as the coverage probabilities of the confidence interval in the finite-sample case. A practice-oriented application of the Cramér-von-Mises tests is the determination of an appropriate confidence level for the fitting approach. The adaptation of the idea to the well-known problem of threshold detection in the context of peaks-over-threshold modelling is sketched and illustrated with data examples.
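The peaks-over-threshold setting referred to above can be illustrated with a classical Cramér-von-Mises goodness-of-fit test on the excesses over a high quantile. This is only a sketch, not the paper's method: the data, the threshold, and the fitted model are hypothetical, and because the generalized Pareto parameters are estimated from the same data, the classical p-value reported by SciPy is not strictly valid — precisely the situation that motivates the modified limit distributions and bootstrap techniques.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical loss sample; the excesses over the empirical 90% quantile
# play the role of the "excess over a confidence level".
losses = rng.standard_t(df=5, size=5000)
threshold = np.quantile(losses, 0.90)
excesses = losses[losses > threshold] - threshold

# Fit a generalized Pareto distribution to the excesses (location fixed
# at zero) and apply the classical Cramér-von-Mises test to the fit.
shape, loc, scale = stats.genpareto.fit(excesses, floc=0.0)
res = stats.cramervonmises(excesses, stats.genpareto(shape, loc, scale).cdf)
print(res.statistic, res.pvalue)
```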
Based on the European Space Agency (ESA) Science in Space Environment (SciSpacE) community White Paper “Human Physiology – Musculoskeletal system”, this perspective highlights unmet needs and suggests new avenues for future studies in musculoskeletal research to enable crewed exploration missions. The musculoskeletal system is essential for sustaining physical function and energy metabolism, and the maintenance of health during exploration missions, and consequently mission success, will be tightly linked to musculoskeletal function. Data collection during current space missions, covering the pre-, in-, and post-flight periods, would provide important information to understand and ultimately offset musculoskeletal alterations during long-term spaceflight. In addition, understanding the kinetics of the different components of the musculoskeletal system, in parallel with a detailed description of the molecular mechanisms driving these alterations, appears to be the best approach to address the potential musculoskeletal problems that future exploration-mission crews will face. These research efforts should be accompanied by technical advances in molecular and phenotypic monitoring tools to provide in-flight real-time feedback.
We consider time-dependent portfolios and discuss the allocation of changes in the risk of a portfolio to changes in the portfolio’s components. For this purpose we adopt established allocation principles. We also use our approach to obtain forecasts for changes in the risk of the portfolio’s components. To put the approach into practice we present an implementation based on the output of a simulation. Allocation is illustrated with an example portfolio in the context of Solvency II. The quality of the forecasts is investigated with an empirical study.
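As an illustration of simulation-based allocation, the following sketch applies the Euler principle to expected shortfall: each component's contribution is its mean loss in the scenarios where the portfolio loss exceeds its value-at-risk. All figures are hypothetical, and the paper's time-dependent setting and choice of allocation principle may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical component losses (rows: simulated scenarios); correlated
# normals stand in for the output of an internal simulation model.
cov = np.array([[4.0, 1.0, 0.5],
                [1.0, 2.0, 0.3],
                [0.5, 0.3, 1.0]])
losses = rng.multivariate_normal(mean=[0.0, 0.0, 0.0], cov=cov, size=100_000)
total = losses.sum(axis=1)

# Euler allocation of expected shortfall: the contribution of component i
# is its mean loss in the scenarios where the portfolio loss exceeds VaR.
alpha = 0.99
var = np.quantile(total, alpha)
tail = total >= var
contributions = losses[tail].mean(axis=0)
es = total[tail].mean()
print(contributions, es)  # the contributions sum exactly to the portfolio ES
```

By construction the contributions are additive, which is what makes the Euler principle attractive for allocating changes in portfolio risk to changes in the components.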
On the applicability of several tests to models with not identically distributed random effects
(2023)
We consider Kolmogorov–Smirnov and Cramér–von-Mises type tests for testing central symmetry, exchangeability, and independence. In the standard case, the tests are intended for application to independent and identically distributed data with unknown distribution. The tests are available for multivariate data, and bootstrap procedures are suitable for obtaining critical values. We discuss the applicability of the tests to random effects models, where the random effects are independent but not necessarily identically distributed and with possibly unknown distributions. Theoretical results show the adequacy of the tests in this situation. The quality of the tests in models with random effects is investigated by simulations, and the empirical results confirm the theoretical findings. A real data example illustrates the application.
In order to reduce the energy consumption of homes, it is important to make transparent which devices consume how much energy. However, power consumption is often monitored only in aggregate at the house energy meter. Disaggregating this power consumption into the contributions of individual devices can be achieved using machine learning. Our work aims at making state-of-the-art disaggregation algorithms accessible to users of the open-source home automation platform Home Assistant.
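As a minimal illustration of disaggregation (not the machine-learning algorithms or the Home Assistant integration this work targets), the simplest NILM baseline matches the aggregate meter reading against on/off combinations of known appliance power ratings; the appliances and wattages below are hypothetical.

```python
from itertools import product

# Hypothetical rated powers of candidate appliances (watts).
appliances = {"fridge": 120, "kettle": 2000, "tv": 90}

def disaggregate(total_watts, tolerance=30):
    """Return the on/off combination whose summed rated power best
    matches the aggregate reading (combinatorial-optimisation NILM),
    or None if no combination comes within the tolerance."""
    names = list(appliances)
    best, best_err = None, float("inf")
    for states in product([0, 1], repeat=len(names)):
        estimate = sum(s * appliances[n] for s, n in zip(states, names))
        err = abs(total_watts - estimate)
        if err < best_err:
            best, best_err = dict(zip(names, states)), err
    return best if best_err <= tolerance else None

print(disaggregate(2120))  # -> {'fridge': 1, 'kettle': 1, 'tv': 0}
```

Real disaggregators replace this brute-force search with learned models, since appliance draws vary over time and combinations are ambiguous at scale.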
The feasibility study presents results of a hydrogen combustor integration for a medium-range aircraft engine using the Dry-Low-NOₓ Micromix combustion principle. Based on a simplified Airbus A320-type flight mission, a thermodynamic performance model of a kerosene- and a hydrogen-powered V2530-A5 engine is used to derive the thermodynamic combustor boundary conditions. A new combustor design using the Dry-Low-NOₓ Micromix principle is investigated by slice-model CFD simulations of a single Micromix injector for design and off-design operation of the engine. Combustion characteristics show typical Micromix flame shapes and good combustion efficiencies for all flight-mission operating points. Nitric oxide emissions are significantly below ICAO CAEP/8 limits. For comparison of the Emission Index (EI) for NOₓ emissions between kerosene and hydrogen operation, an energy-equivalent (kerosene) Emission Index is used.
A full 15° sector-model CFD simulation of the combustion chamber with multiple Micromix injectors, including inflow homogenization as well as dilution and cooling air flows, investigates the combustor integration effects and the resulting NOₓ emissions and radial temperature distribution at the combustor outlet. The results show that the integration of a Micromix hydrogen combustor into actual aircraft engines is feasible and, besides CO₂-free combustion, offers a significant reduction of NOₓ emissions compared to kerosene operation.
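One common way to define such an energy-equivalent Emission Index is to rescale the hydrogen EI (grams of NOₓ per kilogram of fuel) by the ratio of the lower heating values, so that it refers to the kerosene mass delivering the same energy. The exact convention used in the paper is not stated here, so the following is a sketch under that assumption:

```python
# Lower heating values (MJ/kg); standard literature values.
LHV_H2 = 120.0
LHV_KEROSENE = 43.1

def kerosene_equivalent_ei(ei_h2: float) -> float:
    """Convert a hydrogen emission index (g NOx per kg H2) into a
    kerosene-energy-equivalent value (g NOx per kg of kerosene
    delivering the same energy)."""
    return ei_h2 * LHV_KEROSENE / LHV_H2

# A hypothetical measured value of 2.0 g/kg H2 corresponds to roughly
# 0.72 g per energy-equivalent kilogram of kerosene.
print(kerosene_equivalent_ei(2.0))
```

Because hydrogen carries almost three times the energy per kilogram of kerosene, the per-fuel-mass EI shrinks by a factor of about 0.36 when expressed on a kerosene-energy basis.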
Residential and commercial buildings account for more than one-third of global energy-related greenhouse gas emissions. Integrated multi-energy systems at the district level are a promising way to reduce greenhouse gas emissions by exploiting economies of scale and synergies between energy sources. Planning district energy systems comes with many challenges in an ever-changing environment. Computational modelling has established itself as the state-of-the-art method for district energy system planning. Unfortunately, it is still cumbersome to combine standalone models to generate insights that surpass their original purpose. Ideally, planning processes could be solved with modular tools that easily incorporate the variety of competing and complementary computational models. Our contribution is a vision for a collaborative development and application platform for multi-energy system planning tools at the district level. We present challenges of district energy system planning identified in the literature and evaluate whether this platform can help to overcome them. Further, we propose a toolkit that represents the core technical elements of the platform. Lastly, we discuss community management and its relevance for the success of projects with collaboration and knowledge sharing at their core.
The integration of high-temperature thermal energy storage into existing conventional power plants can help to reduce the CO₂ emissions of those plants and, thanks to synergy effects, lead to lower capital expenditure for building energy storage systems [1]. One possibility for implementing this is a molten salt storage system with a powerful power-to-heat unit. This paper presents two possible control concepts for the startup of the charging system of such a facility. The procedures are implemented in a detailed dynamic process model. Their performance and safety with regard to the film temperatures at heat-transmitting surfaces are investigated in process simulations. To improve the accuracy of the predicted film temperatures, CFD simulations of the electrical heater are carried out and the results are merged with the dynamic model. The results show that both investigated control concepts are safe with respect to the temperature limits. The gradient-controlled startup performed better than the temperature-controlled startup. Nevertheless, several uncertainties remain that need to be investigated further.
Despite the challenges faced by pioneering molten salt towers (MST), MST remains the leading technology in central receiver power plants today, thanks to cost-effective storage integration and a high cost-reduction potential. The limited controllability under volatile solar conditions can cause significant losses, which are difficult to estimate without comprehensive modeling [1]. This paper presents a methodology for generating predictions of the dynamic behavior of the receiver system as part of an operating assistance system (OAS). Based on these predictions, it delivers proposals on whether and when to drain and refill the receiver during a cloudy period in order to maximize the net yield, and it quantifies the amount of net electricity gained by this. After prior analysis with a detailed dynamic two-phase model of the entire receiver system, two different reduced modeling approaches were developed and implemented in the OAS. A tailored decision algorithm utilizes both models to deliver the desired predictions efficiently and with appropriate accuracy.
This paper describes the potential for developing a digital twin of society: a dynamic model that can be used to observe, analyze, and predict the evolution of various societal aspects. Such a digital twin can help governmental agencies and policy makers in interpreting trends, understanding challenges, and making decisions regarding investments or policies necessary to support societal development and ensure future prosperity. The paper reviews related work regarding the digital twin paradigm and its applications. The paper presents a motivating case study, an analysis of opportunities and challenges faced by the German federal employment agency, Bundesagentur für Arbeit (BA), proposes solutions using digital twins, and describes initial proofs of concept for such solutions.
Influence of slab deflection on the out-of-plane capacity of unreinforced masonry partition walls
(2023)
Severe damage to non-structural elements has been observed in previous earthquakes, causing high economic losses and posing a threat to human life. Masonry partition walls are among the most commonly used non-structural elements. Therefore, their behaviour under earthquake loading in the out-of-plane (OOP) direction has been investigated by several researchers in recent years. However, none of the existing experimental campaigns or analytical approaches considers the influence of prior slab deflection on the OOP response of partition walls. Moreover, none of the existing construction techniques for the connection of partition walls with the surrounding reinforced concrete (RC) structure has been investigated for combined slab deflection and OOP loading. Yet the inevitable time-dependent behaviour of RC slabs leads to high final slab deflections, which can further influence the boundary conditions of partition walls. Therefore, a comprehensive study on the influence of slab deflection on the OOP capacity of masonry partitions is conducted. In the first step, experimental tests are carried out. The results of these tests are then used for the calibration of the numerical model employed in a parametric study. Based on the results, the behaviour under combined loading is explained for different construction techniques. The results show that slab deflection leads either to severe damage or to a strong reduction of OOP capacity. Existing practical solutions do not account for these effects. In this contribution, recommendations to overcome the problems of combined slab deflection and OOP loading on masonry partition walls are given. A possible interaction of in-plane (IP) loading with the combined slab deflection and OOP loading on partition walls is not investigated in this study.
Optical Fibers as Dosimeter Detectors for Mixed Proton/Neutron Fields - A Biological Dosimeter
(2023)
In recent years, proton therapy has gained importance as a cancer treatment modality due to its conformality with the tumor and the sparing of healthy tissue. However, in the interaction of the protons with the beam-line elements and patient tissue, potentially harmful secondary neutrons are always generated. To keep this neutron dose as low as possible, treatment plans could be created that also account for and minimize the neutron dose. To monitor such a treatment plan, a compact, easy-to-use, and inexpensive dosimeter must be developed that not only measures the physical dose but can also distinguish between proton and neutron contributions. To that end, plastic optical fibers with scintillation materials (Gd₂O₂S:Tb, Gd₂O₂S:Eu, and YVO₄:Eu) were irradiated with protons and neutrons. It was confirmed that sensors with different scintillation materials have different sensitivities to protons and neutrons. A combination of these three scintillators can be used to build a detector array and thereby create a biological dosimeter.
Ambitious climate targets affect the competitiveness of industries in the international market. To prevent such industries from moving to other countries in the wake of increased climate protection efforts, cost adjustments may become necessary. Their design requires knowledge of country-specific production costs. Here, we present country-specific cost figures for different production routes of steel, paying particular attention to transportation costs. The data can be used in floor price models aiming to assess the competitiveness of different steel production routes in different countries (Rübbelke, 2022).
Aspergillus oryzae is an industrially relevant organism for the secretory production of heterologous enzymes, especially amylases. The activities of potential heterologous amylases, however, cannot be quantified directly from the supernatant due to the high background activity of native α-amylase. This activity is caused by the gene products of amyA, amyB, and amyC. In this study, an in vitro CRISPR/Cas9 system was established in A. oryzae to delete these genes simultaneously. First, pyrG of A. oryzae NSAR1 was mutated by exploiting NHEJ to generate a counter-selection marker. Next, all amylase genes were deleted simultaneously by co-transforming a repair template carrying pyrG of Aspergillus nidulans and flanking sequences of the amylase gene loci. The rate of obtained triple knockouts was 47%. We showed that the triple knockouts do not retain any amylase activity in the supernatant. The established in vitro CRISPR/Cas9 system was then used to achieve sequence-specific knock-in of target genes. The system was intended to incorporate a single copy of the gene of interest into the desired host for the development of screening methods. Therefore, an integration cassette for the heterologous Fpi amylase was designed to specifically target the amyB locus. The site-specific integration rate of the plasmid was 78%, with additional integrations occurring only in exceptional cases. The integration frequency was assessed via qPCR and correlated directly with the heterologous amylase activity. Hence, we could compare the efficiency of two different signal peptides. In summary, we present a strategy to exploit CRISPR/Cas9 for gene mutation, multiplex knock-out, and the targeted knock-in of an expression cassette in A. oryzae. Our system provides straightforward strain engineering and paves the way for the development of fungal screening systems.
By developing innovative solutions to social and environmental problems, sustainable ventures carry great potential. Entrepreneurship, which focuses especially on new venture creation, can be developed through education, and universities in particular are called upon to provide an impetus for social change. But social innovations are associated with certain hurdles, which are related to their multi-dimensionality, i.e. the tension between creating social, environmental and economic value and dealing with a multiplicity of stakeholders. The already complex field of entrepreneurship education has to face these challenges. This paper therefore aims to identify starting points for the integration of sustainability into entrepreneurship education. To pursue this goal, experiences from three different project initiatives between the partner universities Lapland University of Applied Sciences, FH Aachen University of Applied Sciences and Turiba University are reflected upon, and the findings are systematically condensed into recommendations for education on sustainable entrepreneurship.
This study describes the development of a new combined polysaccharide-matrix-based technology for the immobilization of Lactobacillus rhamnosus GG (LGG) bacteria in biofilm form. The new composition allows for delivering the bacteria to the digestive tract in a manner that improves their robustness compared with planktonic cells and released biofilm cells. Granules consisting of a polysaccharide matrix with probiotic biofilms (PMPB) with high cell density (>9 log CFU/g) were obtained by immobilization in the optimized nutrient medium. Successful probiotic loading was confirmed by fluorescence microscopy and scanning electron microscopy. The developed prebiotic polysaccharide matrix significantly enhanced LGG viability under acidic (pH 2.0) and bile salt (0.3%) stress conditions. Enzymatic extract of feces, mimicking colon fluid in terms of cellulase activity, was used to evaluate the intestinal release of probiotics. PMPB granules showed the ability to gradually release a large number of viable LGG cells in the model colon fluid. In vivo, the oral administration of PMPB granules in rats resulted in the successful release of probiotics in the colon environment. The biofilm-forming incubation method of immobilization on a complex polysaccharide matrix tested in this study has shown high efficacy and promising potential for the development of innovative biotechnologies.
Market abstraction of energy markets and policies - application in an agent-based modeling toolbox
(2023)
In light of emerging challenges in energy systems, markets are prone to changing dynamics and market designs. Simulation models are commonly used to understand the changing dynamics of future electricity markets. However, existing market models were often created with specific use cases in mind, which limits their flexibility and usability. This imposes challenges for using a single model to compare different market designs. This paper introduces a new method of defining market designs for energy market simulations. The proposed concept makes it easy to incorporate different market designs into electricity market models by using relevant parameters derived from analyzing existing simulation tools, morphological categorization, and ontologies. These parameters are then used to derive a market abstraction and integrate it into an agent-based simulation framework, allowing for a unified analysis of diverse market designs. Furthermore, we showcase the integration of new types of long-term contracts and over-the-counter trading. To validate the approach, two case studies are presented: a pay-as-clear market and a pay-as-bid long-term market. These examples demonstrate the capabilities of the proposed framework.
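The difference between the two case-study designs can be illustrated with a toy merit-order clearing (a generic sketch, not the paper's framework; prices and volumes are hypothetical). Under pay-as-clear all accepted asks receive the marginal price, while under pay-as-bid each accepted ask is paid its own price:

```python
def merit_order_clearing(asks, demand):
    """Pay-as-clear: dispatch the cheapest asks until demand is met;
    the clearing price is the price of the marginal (last accepted) ask.
    asks: list of (price EUR/MWh, volume MWh)."""
    dispatched, price = [], None
    remaining = demand
    for p, v in sorted(asks):
        if remaining <= 0:
            break
        take = min(v, remaining)
        dispatched.append((p, take))
        remaining -= take
        price = p
    return price, dispatched

asks = [(20.0, 50), (35.0, 40), (60.0, 30)]
clearing_price, accepted = merit_order_clearing(asks, demand=70)
print(clearing_price)  # 35.0: the marginal unit sets the uniform price

# Pay-as-bid: each accepted ask is paid its own bid price instead.
pay_as_clear = sum(v for _, v in accepted) * clearing_price
pay_as_bid = sum(p * v for p, v in accepted)
print(pay_as_clear, pay_as_bid)  # 2450.0 1700.0
```

Even this toy example shows why design parameters such as the pricing rule must be explicit in a market abstraction: the dispatch is identical, but the payments differ.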
New European Union (EU) regulations for UAS operations require an operational risk analysis, which includes an estimation of the potential danger of the UAS crashing. A key parameter for the potential ground risk is the kinetic impact energy of the UAS. The kinetic energy depends on the impact velocity of the UAS and, therefore, on the aerodynamic drag and the weight during free fall. Hence, estimating the impact energy of a UAS requires an accurate drag estimation of the UAS in that state. The paper at hand presents the aerodynamic drag estimation of small-scale multirotor UAS. Multirotor UAS of various sizes and configurations were analysed with a fully unsteady Reynolds-averaged Navier–Stokes approach. The simulations covered different velocities and various fuselage pitch angles of the UAS. The results were compared against force measurements performed in a subsonic wind tunnel and showed good agreement. Furthermore, the influence of the UAS's fuselage pitch angle as well as the influence of fixed and free-spinning propellers on the aerodynamic drag was analysed. Free-spinning propellers may increase the drag by up to 110%, depending on the fuselage pitch angle. Increasing the fuselage pitch angle of the UAS lowers the drag by 40% up to 85%, depending on the UAS. The data presented in this paper allow for increased accuracy of ground risk assessments.
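The role of drag in the ground-risk estimate can be sketched as follows: in steady free fall the impact velocity approaches the terminal velocity set by the drag area Cd·A, and the kinetic impact energy follows from it. The numbers below are hypothetical and only illustrate the relationship; the CFD and wind tunnel results discussed above would enter through Cd·A.

```python
import math

def terminal_velocity(mass_kg, cd_area_m2, rho=1.225, g=9.81):
    """Steady-state fall speed where drag balances weight:
    v = sqrt(2 * m * g / (rho * Cd * A))."""
    return math.sqrt(2 * mass_kg * g / (rho * cd_area_m2))

def impact_energy(mass_kg, velocity_ms):
    """Kinetic energy at impact, E = 1/2 * m * v^2 (joules)."""
    return 0.5 * mass_kg * velocity_ms ** 2

# Hypothetical small multirotor: 2 kg mass, drag area Cd*A = 0.05 m^2.
m, cda = 2.0, 0.05
v = terminal_velocity(m, cda)
print(round(v, 1), round(impact_energy(m, v), 1))
```

Halving the effective drag area (for example, through a different pitch attitude or free-spinning propellers) raises the terminal velocity by a factor of √2 and doubles the impact energy, which is why accurate drag data matter for the risk assessment.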
The eVTOL industry is a rapidly growing mass market expected to start in 2024. Owing to their predicted missions, eVTOL compete with ground-based transportation modes, mainly passenger cars. Therefore, the automotive and the classical aircraft design processes are reviewed and compared to highlight advantages for eVTOL development. A special focus is on ergonomic comfort and safety. The need for further investigation of eVTOL crashworthiness is outlined by, first, specifying the relevance of passive safety via accident statistics and customer perception analysis; second, comparing the current state of regulation and certification; and third, discussing the advantages of integral safety and of applying the automotive safety approach to eVTOL development. Integral safety links active and passive safety, while the automotive safety approach means implementing standardized mandatory full-vehicle crash tests for future eVTOL. Subsequently, possible crash impact conditions are analyzed, and three full-vehicle crash load cases are presented.