Muscle function is compromised by gravitational unloading in space, affecting overall musculoskeletal health. Astronauts perform daily exercise programmes to mitigate these effects, but knowing which muscles to target would optimise effectiveness. Accurate inflight assessment to inform exercise programmes is critical, yet technologies suitable for spaceflight have been lacking. Changes in mechanical properties indicate muscle health status and can be measured rapidly and non-invasively using novel technology. A hand-held MyotonPRO device enabled monitoring of muscle health for the first time in spaceflight (> 180 days). Greater or maintained stiffness indicated that countermeasures were effective. Tissue stiffness was preserved in the majority of muscles (neck, shoulder, back, thigh), but Tibialis Anterior (foot lever muscle) stiffness decreased inflight vs. preflight (p < 0.0001; mean difference 149 N/m) in all 12 crewmembers. The calf muscles showed opposing effects, with Gastrocnemius increasing in stiffness and Soleus decreasing. Selective stiffness decrements indicate a lack of preservation despite daily inflight countermeasures. This calls for more targeted exercises for the lower leg muscles, which play vital roles as ankle joint stabilizers and in gait. Muscle stiffness is a digital biomarker for risk monitoring during future planetary explorations (Moon, Mars) and for healthcare management in challenging environments or clinical disorders in people on Earth, enabling effective tailored exercise programmes.
The growing body of political texts opens up new opportunities for rich insights into political dynamics and ideologies, but also increases the workload for manual analysis. Automated speaker attribution, which detects who said what to whom in a speech event and is closely related to semantic role labeling, is an important processing step for computational text analysis. We study the potential of the Llama 2 family of large language models to automate speaker attribution in German parliamentary debates from 2017–2021. We fine-tune Llama 2 with QLoRA, an efficient training strategy, and find that our approach achieves competitive performance in the GermEval 2023 Shared Task on Speaker Attribution in German News Articles and Parliamentary Debates. Our results shed light on the capabilities of large language models in automating speaker attribution, revealing a promising avenue for the computational analysis of political discourse and the development of semantic role labeling systems.
As one class of molecularly imprinted polymers (MIPs), surface imprinted polymer (SIP)-based biosensors show great potential for direct whole-bacteria detection. Micro-contact imprinting, which involves stamping template bacteria immobilized on a substrate into a pre-polymerized polymer matrix, is the most straightforward and prominent method for obtaining SIP-based biosensors. However, the major drawbacks of the method arise from the requirement for fresh template bacteria and the often non-reproducible bacteria distribution on the stamp substrate. Herein, we developed a positive master stamp containing photolithographic mimics of the template bacteria (E. coli), enabling reproducible fabrication of biomimetic SIP-based biosensors without the need for “real” bacterial cells. Using atomic force and scanning electron microscopy imaging, the E. coli-capturing ability of the SIP samples was tested and compared with non-imprinted polymer (NIP)-based samples and control SIP samples in which the cavity geometry does not match E. coli cells. It was revealed that the presence of the biomimetic E. coli imprints with a specifically designed geometry increases the sensor's E. coli-capturing ability by an “imprinting factor” of about 3. These findings show the importance of geometry-guided physical recognition in bacterial detection using SIP-based biosensors. In addition, this imprinting strategy was applied to interdigitated electrodes and QCM (quartz crystal microbalance) chips. The E. coli detection performance of the sensors was demonstrated with electrochemical impedance spectroscopy (EIS) and QCM measurements with dissipation monitoring (QCM-D).
Many important properties of bacterial cellulose (BC), such as moisture absorption capacity, elasticity and tensile strength, largely depend on its structure. This paper presents a study on the effect of the drying method on BC films produced by Medusomyces gisevii using two different procedures: room-temperature drying (RT; 24 ± 2 °C, humidity 65 ± 1%, dried until a constant weight was reached) and freeze-drying (FD; treated at −75 °C for 48 h). BC was synthesized using one of two different carbon sources, either glucose or sucrose. Structural differences in the obtained BC films were evaluated using atomic force microscopy (AFM), scanning electron microscopy (SEM), and X-ray diffraction. Macroscopically, the RT samples appeared semi-transparent and smooth, whereas the FD group exhibited an opaque white color and sponge-like structure. SEM examination showed denser packing of fibrils in FD samples, while RT samples displayed a smaller average fiber diameter, lower surface roughness and less porosity. AFM confirmed the SEM observations and showed that the FD material exhibited a more branched structure and a higher surface roughness. The samples cultivated in a glucose-containing nutrient medium generally displayed a straighter and more ordered fibril shape than the sucrose-derived BC, which was characterized by a rougher and wavier structure. The BC films dried under different conditions showed distinctly different crystallinity degrees, whereas the carbon source in the culture medium was found to have a relatively small effect on the BC crystallinity.
Electrolyte-insulator-semiconductor capacitors (EISCAPs) are field-effect sensors with an attractive transducer architecture for constructing various biochemical sensors. In this study, a capacitive model of enzyme-modified EISCAPs was developed, and the impact of the surface coverage of immobilized enzymes on their capacitance-voltage and constant-capacitance characteristics was studied theoretically and experimentally. The multicell arrangement used enables multiplexed electrochemical characterization of up to sixteen EISCAPs. Different enzyme coverages were achieved by electrically connecting bare and enzyme-covered single EISCAPs in parallel in diverse combinations. As predicted by the model, with increasing enzyme coverage both the shift of the capacitance-voltage curves and the amplitude of the constant-capacitance signal increase, resulting in an enhanced analyte sensitivity of the EISCAP biosensor. In addition, the capability of the multicell arrangement with multi-enzyme-covered EISCAPs to sequentially detect multiple analytes (penicillin and urea) utilizing the enzymes penicillinase and urease was experimentally demonstrated and discussed.
In this work, we present a compact, bifunctional chip-based sensor setup that measures the temperature and electrical conductivity of water samples, including specimens from rivers and channels, aquaculture, and the Atlantic Ocean. For conductivity measurements, we utilize the impedance amplitude recorded via interdigitated electrode structures at a single triggering frequency. The results are well in line with data obtained using a calibrated reference instrument. The new setup remains valid for conductivity values spanning almost two orders of magnitude (river versus ocean water) without the need for equivalent-circuit modelling. Temperature measurements were performed in four-point geometry with an on-chip platinum RTD (resistance temperature detector) in the range between 2 °C and 40 °C, showing no hysteresis between warming and cooling cycles. Although the meander was not shielded from the liquid, the temperature calibration provided equivalent results in low-conductivity Milli-Q water and highly conductive ocean water. The sensor is therefore suitable for inline and online monitoring in recirculating aquaculture systems.
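The single-frequency readout described above amounts to relating the measured impedance amplitude to conductivity through a cell constant obtained by calibration. The sketch below illustrates that relation with purely invented numbers (the reference conductivity, impedance values, and function names are assumptions for illustration, not values from the paper):

```python
# Minimal sketch: single-frequency conductivity readout via a calibrated
# cell constant, as used conceptually for an interdigitated-electrode sensor.
# All numeric values are illustrative, not measured data from the paper.

def calibrate_cell_constant(sigma_ref, z_ref):
    """Cell constant K (1/m) from a reference solution of known
    conductivity sigma_ref (S/m) and measured impedance amplitude z_ref (ohm)."""
    return sigma_ref * z_ref

def conductivity(z_sample, cell_constant):
    """Sample conductivity (S/m) from its impedance amplitude (ohm)."""
    return cell_constant / z_sample

# One-point calibration with a KCl standard (illustrative values)
K = calibrate_cell_constant(sigma_ref=1.413, z_ref=700.0)

river = conductivity(z_sample=20000.0, cell_constant=K)  # low-conductivity sample
ocean = conductivity(z_sample=200.0, cell_constant=K)    # ~two orders of magnitude higher
```

Because the amplitude enters only through a single proportionality, a span of two orders of magnitude in conductivity maps directly onto two orders of magnitude in impedance amplitude, with no equivalent-circuit fitting required.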
Methane is a valuable energy source that helps meet the growing energy demand worldwide. However, as a potent greenhouse gas, it has also gained additional attention due to its environmental impacts. The biological production of methane is performed primarily hydrogenotrophically from H2 and CO2 by methanogenic archaea. Hydrogenotrophic methanogenesis is also of great interest with respect to carbon recycling and H2 storage. The most significant carbon source for microbial degradation and biogenic methane production, extremely rich in complex organic matter, is coal. Although interest in enhanced microbial coalbed methane production is continuously increasing globally, limited knowledge exists regarding the exact origins of coalbed methane and the associated microbial communities, including hydrogenotrophic methanogens. Here, we give an overview of hydrogenotrophic methanogens in coal beds and related environments in terms of their energy production mechanisms, unique metabolic pathways, and associated ecological functions.
This paper investigates the interior transmission problem for homogeneous media via eigenvalue trajectories parameterized by the magnitude of the refractive index. In the case that the scatterer is the unit disk, we prove that there is a one-to-one correspondence between complex-valued interior transmission eigenvalue trajectories and Dirichlet eigenvalues of the Laplacian which turn out to be exactly the trajectorial limit points as the refractive index tends to infinity. For general simply-connected scatterers in two or three dimensions, a corresponding relation is still open, but further theoretical results and numerical studies indicate a similar connection.
We conducted a scoping review for active learning in the domain of natural language processing (NLP), which we summarize in accordance with the PRISMA-ScR guidelines as follows:
Objective: Identify active learning strategies that were proposed for entity recognition and their evaluation environments (datasets, metrics, hardware, execution time).
Design: We used Scopus and ACM as our search engines. We compared the results with two literature surveys to assess the search quality. We included peer-reviewed English publications introducing or comparing active learning strategies for entity recognition.
Results: We analyzed 62 relevant papers and identified 106 active learning strategies. We grouped them into three categories: exploitation-based (60x), exploration-based (14x), and hybrid strategies (32x). We found that all studies used the F1-score as an evaluation metric. Information about hardware (6x) and execution time (13x) was only occasionally included. The 62 papers used 57 different datasets to evaluate their respective strategies. Most datasets contained newspaper articles or biomedical/medical data. Our analysis revealed that 26 out of 57 datasets are publicly accessible.
Conclusion: Numerous active learning strategies have been identified, along with significant open questions that still need to be addressed. Researchers and practitioners face difficulties when making data-driven decisions about which active learning strategy to adopt. Conducting comprehensive empirical comparisons using the evaluation environment proposed in this study could help establish best practices in the domain.
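The exploitation-based strategies that dominate the surveyed literature typically query the samples on which the current model is least certain. A minimal, self-contained sketch of one such strategy, entropy-based uncertainty sampling, is shown below; the toy model, sample identifiers, and probabilities are invented for illustration:

```python
import math

def entropy(probs):
    """Shannon entropy of a predicted class distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_batch(unlabeled, predict_proba, k):
    """Exploitation-based active learning: pick the k samples whose
    predicted label distribution has the highest entropy (least certainty)."""
    scored = sorted(unlabeled, key=lambda x: entropy(predict_proba(x)), reverse=True)
    return scored[:k]

# Toy per-sample class probabilities (illustrative stand-in for an
# entity-recognition model's predictions)
probas = {
    "s1": [0.98, 0.01, 0.01],  # confident -> low entropy
    "s2": [0.34, 0.33, 0.33],  # uncertain -> high entropy
    "s3": [0.70, 0.20, 0.10],
}
batch = select_batch(list(probas), probas.get, k=2)
```

Exploration-based and hybrid strategies replace or combine the entropy score with measures of how representative or diverse a sample is with respect to the unlabeled pool.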
The artificial olfactory image was proposed by Lundström et al. in 1991 as a new strategy for an electronic nose system which generated a two-dimensional mapping to be interpreted as a fingerprint of the detected gas species. The potential distribution generated by the catalytic metals integrated into a semiconductor field-effect structure was read as a photocurrent signal generated by scanning light pulses. The impact of the proposed technology spread beyond gas sensing, inspiring the development of various imaging modalities based on the light addressing of field-effect structures to obtain spatial maps of pH distribution, ions, molecules, and impedance, and these modalities have been applied in both biological and non-biological systems. These light-addressing technologies have been further developed to realize the position control of a faradaic current on the electrode surface for localized electrochemical reactions and amperometric measurements, as well as the actuation of liquids in microfluidic devices.
The deformation and damage laws of non-homogeneous irregular structural planes in rocks are the basis for studying the stability of rock engineering. To investigate the damage characteristics of rock containing non-parallel fissures, uniaxial compression tests and numerical simulations were conducted on sandstone specimens containing three non-parallel fissures inclined at 0°, 45° and 90°. The characteristics of crack initiation and crack evolution of fissures with different inclinations were analyzed, and a constitutive model for the discontinuous fractures of fissured sandstone was proposed. The results show that the fracture behaviors of fissured sandstone specimens are discontinuous. The stress–strain curves are non-smooth and can be divided into a nonlinear crack closure stage, a linear elastic stage, a plastic stage and a brittle failure stage, of which the plastic stage contains discontinuous stress drops. During the uniaxial compression test, the middle or ends of the 0° fissures were the first to crack, before the 45° and 90° fissures. Where the distance between the 0° and 45° fissures was small, that end cracked first, and the end with the larger distance cracked later. After the final failure, the 0° fissures in all specimens were fractured, while the 45° and 90° fissures were not necessarily fractured. Numerical simulation results show that the concentration of compressive stress at the tips of the 0°, 45° and 90° fissures, as well as the concentration of tensile stress on both sides, decreased with increasing inclination angle. A constitutive model for the discontinuous fractures of fissured sandstone specimens was derived by combining the logistic model with damage mechanics theory. This model describes the discontinuous stress drops well and agrees well with the complete stress–strain curves of the fissured sandstone specimens.
Frequency mixing magnetic detection (FMMD) is a sensitive and selective technique for detecting magnetic nanoparticles (MNPs) serving as probes for binding biological targets. Its principle relies on the nonlinear magnetic relaxation dynamics of a particle ensemble interacting with a dual-frequency external magnetic field. In order to increase its sensitivity, lower its limit of detection and overall improve its applicability in biosensing, matching combinations of external field parameters and internal particle properties are being sought to advance FMMD. In this study, we systematically probe the aforementioned interaction with coupled Néel–Brownian dynamic relaxation simulations to examine how key MNP properties as well as applied field parameters affect the frequency mixing signal generation. It is found that the core size of the MNPs dominates their nonlinear magnetic response, with the strongest contributions from the largest particles. The drive field amplitude dominates the shape of the field-dependent response, whereas the effective anisotropy and hydrodynamic size of the particles only weakly influence the signal generation in FMMD. For tailoring the MNP properties and setup parameters towards optimal FMMD signal generation, our findings suggest choosing large particles with core sizes dc > 25 nm and narrow size distributions (σ < 0.1) to minimize the required drive field amplitude. This allows potential improvements of FMMD as a stand-alone application, as well as advances in magnetic particle imaging, hyperthermia and magnetic immunoassays.
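The mixing components at combinations of the two drive frequencies exist only because the magnetization responds nonlinearly to the applied field. The sketch below demonstrates this with a saturating tanh curve as a crude stand-in for the particle response (the study itself uses coupled Néel–Brownian dynamics, which this does not reproduce); all frequencies and amplitudes are illustrative:

```python
import math

# Illustrative sketch of frequency mixing: a nonlinear magnetization curve
# driven by a dual-frequency field produces a component at f_H + 2*f_L,
# which a purely linear response lacks entirely.

fs, T = 20000, 1.0                  # sample rate (Hz), duration (s)
f_hi, f_lo = 1000.0, 50.0           # excitation and drive frequencies (Hz)
f_mix = f_hi + 2 * f_lo             # first even mixing component (1100 Hz)
n = int(fs * T)

def lock_in(signal, f):
    """Amplitude of the component of `signal` at frequency f (lock-in style)."""
    c = sum(s * math.cos(2 * math.pi * f * i / fs) for i, s in enumerate(signal))
    q = sum(s * math.sin(2 * math.pi * f * i / fs) for i, s in enumerate(signal))
    return 2 * math.hypot(c, q) / len(signal)

field = [0.5 * math.cos(2 * math.pi * f_hi * i / fs)
         + 0.5 * math.cos(2 * math.pi * f_lo * i / fs) for i in range(n)]

nonlinear = [math.tanh(h) for h in field]   # saturating response -> mixing terms
linear = field                               # linear response -> no mixing

mix_nl = lock_in(nonlinear, f_mix)   # clearly nonzero
mix_lin = lock_in(linear, f_mix)     # zero up to floating-point error
```

Because the mixing amplitude grows with the curvature of the magnetization curve, particles with larger cores (steeper, more strongly saturating response) dominate the signal, consistent with the findings summarized above.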
We consider the numerical approximation of second-order semi-linear parabolic stochastic partial differential equations interpreted in the mild sense which we solve on general two-dimensional domains with a C² boundary with homogeneous Dirichlet boundary conditions. The equations are driven by Gaussian additive noise, and several Lipschitz-like conditions are imposed on the nonlinear function. We discretize in space with a spectral Galerkin method and in time using an explicit Euler-like scheme. For irregular shapes, the necessary Dirichlet eigenvalues and eigenfunctions are obtained from a boundary integral equation method. This yields a nonlinear eigenvalue problem, which is discretized using a boundary element collocation method and is solved with the Beyn contour integral algorithm. We present an error analysis as well as numerical results on an exemplary asymmetric shape, and point out limitations of the approach.
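On a domain where the Dirichlet eigenpairs of the Laplacian are known in closed form, the spectral Galerkin / Euler approach can be sketched very compactly. The toy below works on the interval (0, π) in one dimension (the paper treats 2D domains, where the eigenpairs must come from the boundary integral method), uses a linear-implicit Euler step rather than the paper's explicit scheme for stability of the sketch, and an illustrative Lipschitz nonlinearity sin(u); all sizes and step counts are assumptions:

```python
import math, random

# 1D toy of a spectral Galerkin discretization of a semilinear heat SPDE
# du = (u_xx + sin(u)) dt + dW on (0, pi) with homogeneous Dirichlet BCs.
# Eigenpairs: lambda_k = k^2, e_k(x) = sqrt(2/pi) sin(kx).

random.seed(0)
N, M = 16, 64                      # spectral modes, quadrature points
dt, steps = 1e-3, 200
xs = [(j + 0.5) * math.pi / M for j in range(M)]
dx = math.pi / M

def e(k, x):                       # Dirichlet eigenfunction on (0, pi)
    return math.sqrt(2 / math.pi) * math.sin(k * x)

a = [1.0 / k for k in range(1, N + 1)]     # initial spectral coefficients

for _ in range(steps):
    u = [sum(a[k - 1] * e(k, x) for k in range(1, N + 1)) for x in xs]
    fu = [math.sin(v) for v in u]          # nonlinearity evaluated pointwise
    for k in range(1, N + 1):
        Fk = sum(fu[j] * e(k, xs[j]) for j in range(M)) * dx        # Galerkin projection
        noise = math.sqrt(dt) * (1.0 / k) * random.gauss(0.0, 1.0)  # additive Q-Wiener increment
        a[k - 1] = (a[k - 1] + dt * Fk + noise) / (1 + dt * k * k)  # implicit linear part

u_final = [sum(a[k - 1] * e(k, x) for k in range(1, N + 1)) for x in xs]
```

The essential structural point carries over to the 2D setting: once the eigenvalues and eigenfunctions are available (there, from the boundary element collocation and Beyn's contour integral algorithm), the time stepping acts mode-by-mode on the spectral coefficients.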
Direct sampling method via Landweber iteration for an absorbing scatterer with a conductive boundary
(2024)
In this paper, we consider the inverse shape problem of recovering isotropic scatterers with a conductive boundary condition. Here, we assume that the measured far-field data is known at a fixed wave number. Motivated by recent work, we study a new direct sampling indicator based on the Landweber iteration and the factorization method. Therefore, we prove the connection between these reconstruction methods. The method studied here falls under the category of qualitative reconstruction methods where an imaging function is used to recover the absorbing scatterer. We prove stability of our new imaging function as well as derive a discrepancy principle for recovering the regularization parameter. The theoretical results are verified with numerical examples to show how the reconstruction performs by the new Landweber direct sampling method.
The quest for scientifically advanced and sustainable solutions is driven by growing environmental and economic issues associated with coal mining, processing, and utilization. Consequently, within the coal industry, there is a growing recognition of the potential of microbial applications in fostering innovative technologies. Microbial-based coal solubilization, coal beneficiation, and coal dust suppression are green alternatives to traditional thermochemical and leaching technologies and better meet the need for ecologically sound and economically viable choices. Surfactant-mediated approaches have emerged as powerful tools for modeling, simulation, and optimization of coal-microbial systems and continue to gain prominence in clean coal fuel production, particularly in microbiological co-processing, conversion, and beneficiation. Surfactants (surface-active agents) are amphiphilic compounds that can reduce surface tension and enhance the solubility of hydrophobic molecules. A wide range of surfactant properties can be achieved by either directly influencing microbial growth factors, stimulants, and substrates or indirectly serving as frothers, collectors, and modifiers in the processing and utilization of coal. This review highlights the significant biotechnological potential of surfactants by providing a thorough overview of their involvement in coal biodegradation, bioprocessing, and biobeneficiation, acknowledging their importance as crucial steps in coal consumption.
Easy-read and large language models: on the ethical dimensions of LLM-based text simplification
(2024)
The production of easy-read and plain language is a challenging task, requiring well-educated experts to write context-dependent simplifications of texts. Therefore, the domain of easy-read and plain language is currently restricted to the bare minimum of necessary information. Thus, even though there is a tendency to broaden the domain of easy-read and plain language, the inaccessibility of a significant amount of textual information excludes the target audience from participation or entertainment and restricts their ability to live autonomously. Large language models can solve a vast variety of natural language tasks, including the simplification of standard-language texts to easy-read or plain language. Moreover, with the rise of generative models like GPT, easy-read and plain language may become applicable to all kinds of natural language texts, making formerly inaccessible information accessible to marginalized groups including, among others, non-native speakers and people with mental disabilities. In this paper, we argue for the feasibility of text simplification and generation in that context, outline the ethical dimensions, and discuss the implications for researchers in the fields of ethics and computer science.
To gain insight on chemical sterilization processes, the influence of temperature (up to 70 °C), intense green light, and hydrogen peroxide (H₂O₂) concentration (up to 30% in aqueous solution) on microbial spore inactivation is evaluated by in-situ Raman spectroscopy with an optical trap. Bacillus atrophaeus is utilized as a model organism. Individual spores are isolated and their chemical makeup is monitored under dynamically changing conditions (temperature, light, and H₂O₂ concentration) to mimic industrially relevant process parameters for sterilization in the field of aseptic food processing. While isolated spores in water are highly stable, even at elevated temperatures of 70 °C, exposure to H₂O₂ leads to a loss of spore integrity characterized by the release of the key spore biomarker dipicolinic acid (DPA) in a concentration-dependent manner, which indicates damage to the inner membrane of the spore. Intensive light or heat, both of which accelerate the decomposition of H₂O₂ into reactive oxygen species (ROS), drastically shorten the spore lifetime, suggesting the formation of ROS as a rate-limiting step during sterilization. It is concluded that Raman spectroscopy can deliver mechanistic insight into the mode of action of H₂O₂-based sterilization and reveal the individual contributions of different sterilization methods acting in tandem.
In this work, the effects of carbon sources and culture media on the production and structural properties of bacterial cellulose (BC) synthesized by Medusomyces gisevii have been studied. The culture medium was composed of different initial concentrations of glucose or sucrose dissolved in 0.4% extract of plain green tea. Parameters of the culture media (titratable acidity, substrate conversion degree, etc.) were monitored daily for 20 days of cultivation. The BC pellicles produced on different carbon sources were characterized in terms of biomass yield, crystallinity and morphology by field emission scanning electron microscopy (FE-SEM), atomic force microscopy and X-ray diffraction. Our results showed that Medusomyces gisevii achieved higher BC yields in media with sugar concentrations close to 10 g L−1 after an 18–20-day incubation period. Glucose generally led to a higher BC yield (173 g L−1) than sucrose (163.5 g L−1). The BC crystallinity degree and surface roughness were higher in the samples synthesized from sucrose. The obtained FE-SEM micrographs show that the BC pellicles synthesized in the sucrose media contained densely packed tangles of cellulose fibrils, whereas the BC produced in the glucose media displayed a rather linear geometry of the fibrils without noticeable aggregates.
Ambitious climate targets affect the competitiveness of industries in the international market. To prevent such industries from moving to other countries in the wake of increased climate protection efforts, cost adjustments may become necessary. Their design requires knowledge of country-specific production costs. Here, we present country-specific cost figures for different production routes of steel, paying particular attention to transportation costs. The data can be used in floor price models aiming to assess the competitiveness of different steel production routes in different countries (Rübbelke, 2022).
Motile cilia are hair-like cell extensions that beat periodically to generate fluid flow along various epithelial tissues within the body. In dense multiciliated carpets, cilia have been shown to coordinate their beat in the form of traveling metachronal waves, a phenomenon thought to enhance fluid transport. Yet how cilia coordinate their regular beat in multiciliated epithelia to move fluids remains insufficiently understood, particularly due to a lack of rigorous quantification. We combine experiments, novel analysis tools, and theory to address this knowledge gap. To investigate the collective dynamics of cilia, we studied zebrafish multiciliated epithelia in the nose and the brain. We focused mainly on the zebrafish nose, due to its properties conserved with other ciliated tissues and its superior accessibility for non-invasive imaging. We revealed that cilia are synchronized only locally and that the size of local synchronization domains increases with the viscosity of the surrounding medium. Even though synchronization is only local, we observed global patterns of traveling metachronal waves across the zebrafish multiciliated epithelium. Intriguingly, these global wave direction patterns are conserved across individual fish, but differ between left and right noses, unveiling a chiral asymmetry of metachronal coordination. To understand the implications of synchronization for fluid pumping, we used a computational model of a regular array of cilia. We found that local metachronal synchronization prevents steric collisions between cilia and improves fluid pumping in dense cilia carpets, but hardly affects the direction of fluid flow. In conclusion, we show that local synchronization together with tissue-scale cilia alignment generates metachronal wave patterns in multiciliated epithelia, which enhance their physiological function of fluid pumping.
Lead and nickel, as heavy metals, are still used in industrial processes and are classified as “environmental health hazards” due to their toxicity and polluting potential. The detection of heavy metals can prevent environmental pollution at levels critical to human health. In this sense, the electrolyte–insulator–semiconductor (EIS) field-effect sensor is an attractive platform for fabricating reusable and robust sensors to detect such substances. This study aimed to fabricate a sensing unit on an EIS device based on Sn₃O₄ nanobelts embedded in a polyelectrolyte matrix of polyvinylpyrrolidone (PVP) and polyacrylic acid (PAA) using the layer-by-layer (LbL) technique. The EIS-Sn₃O₄ sensor exhibited enhanced electrochemical performance for detecting Pb²⁺ and Ni²⁺ ions, revealing a higher affinity for Pb²⁺ ions, with sensitivities of ca. 25.8 mV/decade and 2.4 mV/decade, respectively. These results indicate that Sn₃O₄ nanobelts constitute a feasible proof-of-concept capacitive field-effect sensor for heavy metal detection, paving the way for future studies focused on environmental monitoring.
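A sensitivity quoted in mV/decade is the slope of the sensor potential plotted against the base-10 logarithm of the ion concentration. The sketch below shows how such a slope is extracted by ordinary least squares; the data points are synthetic, generated from an assumed ideal 25.8 mV/decade line (the value reported for Pb²⁺), not measured values:

```python
import math

# Sketch: extracting a sensitivity in mV/decade as the least-squares slope of
# sensor potential versus log10(ion concentration). The calibration data are
# synthetic (ideal line with an arbitrary 500 mV offset), for illustration only.

def slope_mv_per_decade(concentrations, potentials_mv):
    """Ordinary least-squares slope of potential (mV) vs log10(concentration)."""
    x = [math.log10(c) for c in concentrations]
    n = len(x)
    mx, my = sum(x) / n, sum(potentials_mv) / n
    return (sum((xi - mx) * (yi - my) for xi, yi in zip(x, potentials_mv))
            / sum((xi - mx) ** 2 for xi in x))

conc = [1e-6, 1e-5, 1e-4, 1e-3, 1e-2]              # mol/L, one point per decade
mv = [25.8 * math.log10(c) + 500.0 for c in conc]  # ideal 25.8 mV/decade response

sensitivity = slope_mv_per_decade(conc, mv)        # recovers the assumed slope
```

With real calibration data the points scatter around the line, and the same least-squares slope gives the reported sensitivity.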
This study evaluates neuromechanical control and muscle-tendon interaction during energy storage and dissipation tasks in hypergravity. During parabolic flights, while 17 subjects performed drop jumps (DJs) and drop landings (DLs), electromyography (EMG) of the lower limb muscles was combined with in vivo fascicle dynamics of the gastrocnemius medialis, two-dimensional (2D) kinematics, and kinetics to measure and analyze changes in energy management. Comparisons were made between movement modalities executed in hypergravity (1.8 G) and gravity on ground (1 G). In 1.8 G, ankle dorsiflexion, knee joint flexion, and vertical center of mass (COM) displacement are lower in DJs than in DLs; within each movement modality, joint flexion amplitudes and COM displacement demonstrate higher values in 1.8 G than in 1 G. Concomitantly, negative peak ankle joint power, vertical ground reaction forces, and leg stiffness are similar between both movement modalities (1.8 G). In DJs, EMG activity in 1.8 G is lower during the COM deceleration phase than in 1 G, thus impairing quasi-isometric fascicle behavior. In DLs, EMG activity before and during the COM deceleration phase is higher, and fascicles are stretched less in 1.8 G than in 1 G. Compared with the situation in 1 G, highly task-specific neuromuscular activity is diminished in 1.8 G, resulting in fascicle lengthening in both movement modalities. Specifically, in DJs, a high magnitude of neuromuscular activity is impaired, resulting in altered energy storage. In contrast, in DLs, linear stiffening of the system due to higher neuromuscular activity combined with lower fascicle stretch enhances the buffering function of the tendon, and thus the capacity to safely dissipate energy.
It has been shown that muscle fascicle curvature increases with increasing contraction level and decreasing muscle–tendon complex length. The analyses were done with limited examination windows concerning contraction level, muscle–tendon complex length, and/or intramuscular position of ultrasound imaging. With this study we aimed to investigate the correlation between fascicle arching and contraction, muscle–tendon complex length and their associated architectural parameters in gastrocnemius muscles to develop hypotheses concerning the fundamental mechanism of fascicle curving. Twelve participants were tested in five different positions (90°/105°*, 90°/90°*, 135°/90°*, 170°/90°*, and 170°/75°*; *knee/ankle angle). They performed isometric contractions at four different contraction levels (5%, 25%, 50%, and 75% of maximum voluntary contraction) in each position. Panoramic ultrasound images of gastrocnemius muscles were collected at rest and during constant contraction. Aponeuroses and fascicles were tracked in all ultrasound images and the parameters fascicle curvature, muscle–tendon complex strain, contraction level, pennation angle, fascicle length, fascicle strain, intramuscular position, sex and age group were analyzed by linear mixed effect models. Mean fascicle curvature of the medial gastrocnemius increased with contraction level (+5 m−1 from 0% to 100%; p = 0.006). Muscle–tendon complex length had no significant impact on mean fascicle curvature. Mean pennation angle (2.2 m−1 per 10°; p < 0.001), inverse mean fascicle length (20 m−1 per cm−1; p = 0.003), and mean fascicle strain (−0.07 m−1 per +10%; p = 0.004) correlated with mean fascicle curvature. Evidence has also been found for intermuscular, intramuscular, and sex-specific intramuscular differences of fascicle curving. Pennation angle and the inverse fascicle length show the highest predictive capacities for fascicle curving. 
Due to the strong correlations between pennation angle and fascicle curvature and the intramuscular pattern of curving we suggest for future studies to examine correlations between fascicle curvature and intramuscular fluid pressure.
Background
Hip fractures are a common and costly health problem, resulting in significant morbidity and mortality, as well as high costs for healthcare systems, especially for the elderly. Implementing surgical preventive strategies has the potential to improve the quality of life and reduce the burden on healthcare resources, particularly in the long term. However, there are currently limited guidelines for standardizing hip fracture prophylaxis practices.
Methods
This study used a cost-effectiveness analysis with a finite-state Markov model and cohort simulation to evaluate the primary and secondary surgical prevention of hip fractures in the elderly. Patients aged 60 to 90 years were simulated in two different models (A and B) to assess prevention at different levels. Model A assumed prophylaxis was performed during the fracture operation on the contralateral side, while Model B included individuals with high fracture risk factors. Costs were obtained from the Centers for Medicare & Medicaid Services, and transition probabilities and health state utilities were derived from available literature. The baseline assumption was a 10% reduction in fracture risk after prophylaxis. A sensitivity analysis was also conducted to assess the reliability and variability of the results.
Results
With a 10% fracture risk reduction, Model A cost between $8,850 and $46,940 per quality-adjusted life-year ($/QALY). Additionally, it proved most cost-effective in the age range between 61 and 81 years. The sensitivity analysis established that a risk reduction of ≥ 2.8% is needed for prophylaxis to be definitely cost-effective. The cost-effectiveness at the secondary prevention level was most sensitive to the cost of the contralateral side’s prophylaxis, the patient’s age, and the fracture treatment cost. For high-risk patients with no fracture history, the cost-effectiveness of a preventive strategy depends on their risk profile. In the baseline analysis, the incremental cost-effectiveness ratio at the primary prevention level varied between $11,000/QALY and $74,000/QALY, which is below the defined willingness-to-pay threshold.
Conclusion
Due to the high cost of hip fracture treatment and its associated morbidity, surgical prophylaxis strategies have demonstrated that they can significantly relieve the healthcare system. Several key assumptions simplified the modeling while leaving adequate room for uncertainty. Further research is needed to evaluate health-state-associated risks.
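A finite-state Markov cohort model of the kind described can be sketched in a few lines. All transition probabilities, costs, and utilities below are illustrative placeholders, not the study's calibrated values; the sketch only shows how discounted costs and QALYs accumulate and how the incremental cost-effectiveness ratio (ICER) is formed.

```python
import numpy as np

# States: 0 = well, 1 = post-fracture, 2 = dead (absorbing).
# Illustrative one-year transition matrices (NOT the paper's calibrated values).
P_no_prophylaxis = np.array([[0.93, 0.04, 0.03],
                             [0.00, 0.90, 0.10],
                             [0.00, 0.00, 1.00]])
P_prophylaxis = P_no_prophylaxis.copy()
P_prophylaxis[0, 1] *= 0.9                # assumed 10% fracture-risk reduction
P_prophylaxis[0, 0] = 1 - P_prophylaxis[0, 1] - P_prophylaxis[0, 2]

cost = np.array([0.0, 5_000.0, 0.0])      # assumed annual cost per state [$]
utility = np.array([0.85, 0.65, 0.0])     # assumed QALY weight per state

def run_cohort(P, upfront, years=30, disc=0.03):
    """Cohort simulation: propagate the state distribution year by year,
    accumulating discounted costs and QALYs."""
    state = np.array([1.0, 0.0, 0.0])     # everyone starts 'well'
    total_cost, total_qaly = upfront, 0.0
    for t in range(years):
        state = state @ P
        d = (1 + disc) ** -(t + 1)
        total_cost += d * state @ cost
        total_qaly += d * state @ utility
    return total_cost, total_qaly

c0, q0 = run_cohort(P_no_prophylaxis, upfront=0.0)
c1, q1 = run_cohort(P_prophylaxis, upfront=3_000.0)   # assumed prophylaxis cost
icer = (c1 - c0) / (q1 - q0)              # ICER = delta cost / delta QALY
print(c0, q0, c1, q1, icer)
```

The two model variants (A and B) described above would differ only in the initial cohort and the transition probabilities fed into `run_cohort`.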
Background
Post-COVID-19 syndrome (PCS) is a lingering disease with ongoing symptoms such as fatigue and cognitive impairment resulting in a high impact on the daily life of patients. Understanding the pathophysiology of PCS is a public health priority, as it still poses a diagnostic and treatment challenge for physicians.
Methods
In this prospective observational cohort study, we analyzed the retinal microcirculation using Retinal Vessel Analysis (RVA) in a cohort of patients with PCS and compared it to an age- and gender-matched healthy cohort (n = 41, matched out of n = 204).
Measurements and main results
PCS patients exhibit persistent endothelial dysfunction (ED), as indicated by significantly lower venular flicker-induced dilation (vFID; 3.42% ± 1.77% vs. 4.64% ± 2.59%; p = 0.02), narrower central retinal artery equivalent (CRAE; 178.1 [167.5–190.2] vs. 189.1 [179.4–197.2], p = 0.01) and lower arteriolar-venular ratio (AVR; 0.84 [0.8–0.9] vs. 0.88 [0.8–0.9], p = 0.007). When combining AVR and vFID, the predicted scores reached good ability to discriminate between groups (area under the curve: 0.75). Higher PCS severity scores correlated with lower AVR (R = −0.37, p = 0.017). The association of microvascular changes with PCS severity was amplified in PCS patients exhibiting higher levels of inflammatory parameters.
Conclusion
Our results demonstrate that prolonged endothelial dysfunction is a hallmark of PCS, and impairments of the microcirculation seem to explain ongoing symptoms in patients. As potential therapies for PCS emerge, RVA parameters may become relevant as clinical biomarkers for diagnosis and therapy management.
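The reported discrimination of groups by combining AVR and vFID can be illustrated with a logistic score and the area under the ROC curve. The sketch below uses synthetic data loosely based on the reported group means and spreads; it is not the study's analysis, and the AVR spread is an assumption.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 41  # matched group size, as in the study
# Synthetic (vFID, AVR) values loosely based on the reported group statistics.
pcs  = np.column_stack([rng.normal(3.42, 1.77, n), rng.normal(0.84, 0.04, n)])
ctrl = np.column_stack([rng.normal(4.64, 2.59, n), rng.normal(0.88, 0.04, n)])
X = np.vstack([pcs, ctrl])
y = np.array([1] * n + [0] * n)   # 1 = PCS, 0 = healthy control

# Combine the two markers into one predicted score via logistic regression.
clf = LogisticRegression().fit(X, y)
scores = clf.predict_proba(X)[:, 1]
auc = roc_auc_score(y, scores)
print(auc)
```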
We consider time-dependent portfolios and discuss the allocation of changes in the risk of a portfolio to changes in the portfolio’s components. For this purpose, we adopt established allocation principles. We also use our approach to obtain forecasts for changes in the risk of the portfolio’s components. To put the approach into practice, we present an implementation based on the output of a simulation. Allocation is illustrated with an example portfolio in the context of Solvency II. The quality of the forecasts is investigated with an empirical study.
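One established allocation principle is Euler allocation, which for the standard-deviation risk measure assigns each component its covariance with the portfolio total, scaled by the portfolio's standard deviation, so that the contributions sum exactly to the portfolio risk. A minimal sketch from simulation output (the component-loss distribution is an assumption, not an actual Solvency II model):

```python
import numpy as np

rng = np.random.default_rng(2)
# Simulated losses of three portfolio components (rows = scenarios).
L = rng.multivariate_normal(mean=[0, 0, 0],
                            cov=[[4, 1, 0], [1, 2, 0.5], [0, 0.5, 1]],
                            size=100_000)
S = L.sum(axis=1)                        # total portfolio loss

def euler_sd_allocation(L, S):
    """Euler allocation of the standard-deviation risk measure:
    contribution_i = cov(L_i, S) / sd(S); contributions sum to sd(S)."""
    sd = S.std(ddof=1)
    cov = np.array([np.cov(L[:, i], S, ddof=1)[0, 1] for i in range(L.shape[1])])
    return cov / sd

contrib = euler_sd_allocation(L, S)
print(contrib, contrib.sum(), S.std(ddof=1))
```

Because sample covariance is bilinear, the full-additivity property (contributions summing to the portfolio risk) holds exactly on the simulated sample, not just in expectation.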
On the applicability of several tests to models with not identically distributed random effects
(2023)
We consider Kolmogorov–Smirnov and Cramér–von-Mises type tests for testing central symmetry, exchangeability, and independence. In the standard case, the tests are intended for the application to independent and identically distributed data with unknown distribution. The tests are available for multivariate data and bootstrap procedures are suitable to obtain critical values. We discuss the applicability of the tests to random effects models, where the random effects are independent but not necessarily identically distributed and with possibly unknown distributions. Theoretical results show the adequacy of the tests in this situation. The quality of the tests in models with random effects is investigated by simulations. Empirical results obtained confirm the theoretical findings. A real data example illustrates the application.
The Cramér-von-Mises distance is applied to the distribution of the excess over a confidence level. Asymptotics of related statistics are investigated, and it is seen that the obtained limit distributions differ from the classical ones. For that reason, quantiles of the new limit distributions are given and new bootstrap techniques for approximation purposes are introduced and justified. The results motivate new one-sample goodness-of-fit tests for the distribution of the excess over a confidence level and a new confidence interval for the related fitting error. Simulation studies investigate size and power of the tests as well as coverage probabilities of the confidence interval in the finite sample case. A practice-oriented application of the Cramér-von-Mises tests is the determination of an appropriate confidence level for the fitting approach. The adaptation of the idea to the well-known problem of threshold detection in the context of peaks over threshold modelling is sketched and illustrated by data examples.
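The one-sample Cramér-von-Mises statistic, together with a parametric bootstrap for the case of estimated parameters (where the classical limit distribution no longer applies), can be sketched as follows. The exponential excess model, sample sizes, and bootstrap count are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def cvm_stat(x, cdf):
    """Cramér-von-Mises statistic:
    W^2 = 1/(12n) + sum_i (F(x_(i)) - (2i-1)/(2n))^2."""
    x = np.sort(x)
    n = len(x)
    u = cdf(x)
    i = np.arange(1, n + 1)
    return 1.0 / (12 * n) + np.sum((u - (2 * i - 1) / (2 * n)) ** 2)

# Excesses over a high confidence level; here synthetic exponential data.
excess = rng.exponential(scale=2.0, size=200)

# The scale is estimated, so the classical W^2 limit does not apply -> bootstrap.
scale_hat = excess.mean()
w2 = cvm_stat(excess, lambda t: stats.expon.cdf(t, scale=scale_hat))

B = 500
w2_boot = np.empty(B)
for b in range(B):
    xb = rng.exponential(scale=scale_hat, size=len(excess))
    # Re-estimate the parameter inside each bootstrap replicate.
    w2_boot[b] = cvm_stat(xb, lambda t: stats.expon.cdf(t, scale=xb.mean()))
p_value = np.mean(w2_boot >= w2)
print(w2, p_value)
```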
Based on the European Space Agency (ESA) Science in Space Environment (SciSpacE) community White Paper “Human Physiology – Musculoskeletal system”, this perspective highlights unmet needs and suggests new avenues for future studies in musculoskeletal research to enable crewed exploration missions. The musculoskeletal system is essential for sustaining physical function and energy metabolism, and the maintenance of health during exploration missions, and consequently mission success, will be tightly linked to musculoskeletal function. Data collection from current space missions from pre-, during-, and post-flight periods would provide important information to understand and ultimately offset musculoskeletal alterations during long-term spaceflight. In addition, understanding the kinetics of the different components of the musculoskeletal system in parallel with a detailed description of the molecular mechanisms driving these alterations appears to be the best approach to address potential musculoskeletal problems that future exploratory-mission crew will face. These research efforts should be accompanied by technical advances in molecular and phenotypic monitoring tools to provide in-flight real-time feedback.
In this paper, we provide an analytical study of the transmission eigenvalue problem with two conductivity parameters. We will assume that the underlying physical model is given by the scattering of a plane wave for an isotropic scatterer. In previous studies, this eigenvalue problem was analyzed with one conductive boundary parameter whereas we will consider the case of two parameters. We prove the existence and discreteness of the transmission eigenvalues as well as study the dependence on the physical parameters. We are able to prove monotonicity of the first transmission eigenvalue with respect to the parameters and consider the limiting procedure as the second boundary parameter vanishes. Lastly, we provide extensive numerical experiments to validate the theoretical work.
In this work, the bioabsorbable materials fibroin, polylactide acid (PLA), magnesium, and magnesium oxide are investigated for their application as transient, resistive temperature detectors (RTDs). For this purpose, a thin-film magnesium-based meander-like electrode is deposited onto a flexible, bioabsorbable substrate (fibroin or PLA) and encapsulated (passivated) by additional magnesium oxide layers on top of and below the magnesium-based electrode. The morphology of the different layered RTDs is analyzed by scanning electron microscopy. The sensor performance and lifetime of the RTDs are characterized both under ambient atmospheric conditions between 30°C and 43°C, and under wet, tissue-like conditions with a constant temperature regime of 37°C. The latter triggers the degradation process of the magnesium-based layers. The 3-layer RTDs on a PLA substrate achieved a lifetime of 8.5 h. These sensors also show the best performance under ambient atmospheric conditions, with a mean sensitivity of 0.48 Ω/°C ± 0.01 Ω/°C.
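The quoted sensitivity in Ω/°C is the slope of the resistance-temperature calibration line over the measured range. A minimal sketch on synthetic calibration data (baseline resistance and noise level are assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
# Synthetic RTD calibration over the reported 30-43 °C range.
T = np.linspace(30.0, 43.0, 14)                      # temperature [°C]
R = 100.0 + 0.48 * T + rng.normal(0, 0.02, T.size)   # resistance [Ohm], assumed baseline

# Sensitivity = dR/dT, i.e. the slope of a linear fit [Ohm/°C].
slope, intercept = np.polyfit(T, R, 1)
print(slope)
```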
Herein, fibroin, polylactide (PLA), and carbon are investigated for their suitability as biocompatible and biodegradable materials for amperometric biosensors. For this purpose, screen-printed carbon electrodes on the biodegradable substrates fibroin and PLA are modified with a glucose oxidase membrane and then encapsulated with the biocompatible material Ecoflex. The influence of different curing parameters of the carbon electrodes on the resulting biosensor characteristics is studied. The morphology of the electrodes is investigated by scanning electron microscopy, and the biosensor performance is examined by amperometric measurements of glucose (0.5–10 mM) in phosphate buffer solution, pH 7.4, at an applied potential of 1.2 V versus a Ag/AgCl reference electrode. Instead of Ecoflex, fibroin, PLA, and wound adhesive are tested as alternative encapsulation compounds: a series of swelling tests with different fibroin compositions, PLA, and Ecoflex has been performed before characterizing the most promising candidates by chronoamperometry. For these tests, the carbon electrodes are completely covered with the respective encapsulation material. Chronoamperometric measurements with H2O2 concentrations between 0.5 and 10 mM enable studying the leakage current behavior.
Companies often build their businesses based on product information and therefore try to automate the process of information extraction (IE). Since the information source is usually heterogeneous and non-standardized, classic extract, transform, load techniques reach their limits. Hence, companies must implement the newest findings from research to tackle the challenges of process automation. They require a flexible and robust system that is extendable and ensures the optimal processing of the different document types. This paper provides a distributed microservice architecture pattern that enables the automated generation of IE pipelines. Since their optimal design is individual for each input document, the system ensures the ad-hoc generation of pipelines depending on specific document characteristics at runtime. Furthermore, it introduces the automated quality determination of each available pipeline and controls the integration of new microservices based on their impact on the business value. The introduced system enables fast prototyping of the newest approaches from research and supports companies in automating their IE processes. Based on the automated quality determination, it ensures that the generated pipelines always meet defined business requirements when they come into productive use.
In comparison to single-analyte devices, multiplexed systems for a multianalyte detection offer a reduced assay time and sample volume, low cost, and high throughput. Herein, a multiplexing platform for the automated, quasi-simultaneous characterization of multiple (up to 16) capacitive field-effect sensors by the capacitance–voltage (C–V) and the constant-capacitance (ConCap) mode is presented. The sensors are mounted in a newly designed multicell arrangement with one common reference electrode and are electrically connected to the impedance analyzer via the base station. A Python script for the automated characterization of the sensors executes the user-defined measurement protocol. The developed multiplexing system is tested for pH measurements and the label-free detection of ligand-stabilized, charged gold nanoparticles.
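The user-defined measurement protocol mentioned above could be structured as in the following sketch. The `ImpedanceAnalyzer` class and its methods are hypothetical placeholders for the real instrument bindings; only the protocol loop over channels and C–V/ConCap steps reflects the described workflow.

```python
# Hypothetical driver API for the impedance analyzer; the real hardware
# bindings (names, units, return types) will differ.
class ImpedanceAnalyzer:
    def select_channel(self, ch):
        pass
    def cv_sweep(self, v_start, v_stop, v_step, freq):
        return []   # placeholder for (voltage, capacitance) pairs
    def concap_track(self, capacitance, duration_s):
        return []   # placeholder for (time, voltage) pairs

def run_protocol(analyzer, channels, protocol):
    """Quasi-simultaneous characterization: iterate over the (up to 16)
    field-effect sensors and apply the user-defined measurement steps."""
    results = {}
    for ch in channels:
        analyzer.select_channel(ch)
        for step in protocol:
            if step["mode"] == "C-V":
                data = analyzer.cv_sweep(step["v_start"], step["v_stop"],
                                         step["v_step"], step["freq"])
            elif step["mode"] == "ConCap":
                data = analyzer.concap_track(step["capacitance"],
                                             step["duration_s"])
            results.setdefault(ch, []).append((step["mode"], data))
    return results

# Example user-defined protocol: one C-V sweep, then one ConCap tracking step.
protocol = [
    {"mode": "C-V", "v_start": -1.0, "v_stop": 1.0, "v_step": 0.05, "freq": 120.0},
    {"mode": "ConCap", "capacitance": 30e-9, "duration_s": 60.0},
]
results = run_protocol(ImpedanceAnalyzer(), channels=range(16), protocol=protocol)
print(len(results))
```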
A method for detecting and approximating fault lines or surfaces, respectively, or decision curves in two and three dimensions with guaranteed accuracy is presented. Reformulated as a classification problem, our method starts from a set of scattered points along with the corresponding classification algorithm to construct a representation of a decision curve by points with prescribed maximal distance to the true decision curve. In doing so, our algorithm ensures that the representing point set covers the decision curve in its entire extent and features local refinement based on the geometric properties of the decision curve. We demonstrate applications of our method to problems related to the detection of faults, to multi-criteria decision aid and, in combination with Kirsch’s factorization method, to solving an inverse acoustic scattering problem. In all applications considered in this work, our method requires significantly fewer pointwise classifications than previously employed algorithms.
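The core primitive of such a method is locating a boundary point with prescribed accuracy between two scattered points of different class, which bisection provides. The sketch below shows only this kernel on a toy classifier (the unit circle), not the paper's covering and refinement strategy:

```python
import numpy as np

def locate_boundary(classify, a, b, tol):
    """Bisect the segment [a, b] (endpoints with different class labels)
    until a point within `tol` of the decision curve is found."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    ca = classify(a)
    while np.linalg.norm(b - a) > tol:
        m = 0.5 * (a + b)
        if classify(m) == ca:
            a = m       # midpoint on a's side: move a forward
        else:
            b = m       # midpoint on b's side: move b back
    return 0.5 * (a + b)

# Example classifier: sign of f(x, y) = x^2 + y^2 - 1 (unit-circle boundary).
classify = lambda p: int(p[0] ** 2 + p[1] ** 2 > 1.0)
p = locate_boundary(classify, a=(0.0, 0.0), b=(2.0, 0.0), tol=1e-6)
print(p)
```

Each call needs only O(log(1/tol)) pointwise classifications, which is why covering the curve with such points can be cheap relative to dense sampling.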
Hydrogen peroxide (H₂O₂), a strong oxidizer, is a commonly used sterilization agent employed during aseptic food processing and medical applications. To assess the sterilization efficiency with H₂O₂, bacterial spores are common microbial systems due to their remarkable robustness against a wide variety of decontamination strategies. Despite their widespread use, there is, however, only little information about the detailed time-resolved mechanism underlying the oxidative spore death by H₂O₂. In this work, we investigate chemical and morphological changes of individual Bacillus atrophaeus spores undergoing oxidative damage using optical sensing with trapping Raman microscopy in real-time. The time-resolved experiments reveal that spore death involves two distinct phases: (i) an initial phase dominated by the fast release of dipicolinic acid (DPA), a major spore biomarker, which indicates the rupture of the spore’s core; and (ii) the oxidation of the remaining spore material resulting in the subsequent fragmentation of the spores’ coat. Simultaneous observation of the spore morphology by optical microscopy corroborates these mechanisms. The dependence of the onset of DPA release and the time constant of spore fragmentation on H₂O₂ shows that the formation of reactive oxygen species from H₂O₂ is the rate-limiting factor of oxidative spore death.
Immunosorbent turnip vein clearing virus (TVCV) particles displaying the IgG-binding domains D and E of Staphylococcus aureus protein A (PA) on every coat protein (CP) subunit (TVCVPA) were purified from plants via optimized and new protocols. The latter used polyethylene glycol (PEG) raw precipitates, from which virions were selectively re-solubilized in reverse PEG concentration gradients. This procedure improved the integrity of both TVCVPA and the wild-type subgroup 3 tobamovirus. TVCVPA could be loaded with more than 500 IgGs per virion, which mediated the immunocapture of fluorescent dyes, GFP, and active enzymes. Bi-enzyme ensembles of cooperating glucose oxidase and horseradish peroxidase were tethered together on the TVCVPA carriers via a single antibody type, with one enzyme conjugated chemically to its Fc region, and the other one bound as a target, yielding synthetic multi-enzyme complexes. In microtiter plates, the TVCVPA-displayed sugar-sensing system possessed a considerably increased reusability upon repeated testing, compared to the IgG-bound enzyme pair in the absence of the virus. A high coverage of the viral adapters was also achieved on Ta2O5 sensor chip surfaces coated with a polyelectrolyte interlayer, as a prerequisite for durable TVCVPA-assisted electrochemical biosensing via modularly IgG-assembled sensor enzymes.
Germany is a frontrunner in setting frameworks for the transition to a low-carbon system. The mobility sector plays a significant role in this shift, affecting different people and groups on multiple levels. Without acceptance from these stakeholders, emission targets are out of reach. This research analyzes how the heterogeneous preferences of various stakeholders align with the transformation of the mobility sector, looking at the extent to which the German transformation paths are supported and where stakeholders are located.
The analysis builds on a status quo assessment of stakeholders and car performance criteria, with the objective of comparing stakeholders' preferences to identify which car segments require additional support for a successful climate transition. Because stakeholders' preferences are hidden, criteria weightings cannot be elicited from them directly; instead, a ranking derived from observed preferences is used. The inverse multi-criteria decision analysis applied in this study predicts these weightings, which are then used together with a recalibrated performance matrix to explore future preferences toward car segments.
Results show that stakeholders prefer medium-sized cars, with the trend pointing towards the increased potential for alternative propulsion technologies and electrified vehicles. These insights can guide the improved targeting of policy supporting the energy and mobility transformation. Additionally, the method proposed in this work can fully handle subjective approaches while incorporating a priori information. A software implementation of the proposed method completes this work and is made publicly available.
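The inverse step, deriving criteria weightings from an observed ranking rather than eliciting them from stakeholders, can be illustrated with a nonnegative least-squares fit. The performance matrix and target scores below are illustrative placeholders, not the study's data or its exact method:

```python
import numpy as np
from scipy.optimize import nnls

# Performance matrix: rows = car segments, columns = criteria (normalized 0-1).
# Values are illustrative, not the study's data.
P = np.array([[0.9, 0.2, 0.4],    # small car
              [0.6, 0.6, 0.7],    # medium car
              [0.2, 0.9, 0.5]])   # large car
# Observed preference expressed as target scores (best = highest):
# medium preferred over small over large.
target = np.array([0.5, 0.9, 0.4])

w, residual = nnls(P, target)     # nonnegative weights reproducing the ranking
w = w / w.sum()                   # normalize weights to sum to 1
scores = P @ w                    # additive-utility scores per segment
print(w, np.argsort(-scores))
```

The recovered weights can then be applied to a recalibrated performance matrix to predict how preferences shift as segment attributes change.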
The provision of sustainably produced hydrogen as an energy carrier and raw material is a key technology, both as a substitute for fossil fuels and as a product in the context of circular processes. In wastewater treatment, there are several ways to produce hydrogen. Several routes, their possible synergies, and also their drawbacks are presented.
Using scenarios is vital in identifying and specifying measures for successfully transforming the energy system. Such transformations can be particularly challenging and require the support of a broader set of stakeholders. Otherwise, there will be opposition in the form of reluctance to adopt the necessary technologies. Usually, processes for considering stakeholders' perspectives are very time-consuming and costly. In particular, there are uncertainties about how to deal with modifications in the scenarios. In principle, new consulting processes will be required. In our study, we show how multi-criteria decision analysis can be used to analyze stakeholders' attitudes toward transition paths. Since stakeholders differ regarding their preferences and time horizons, we employ a multi-criteria decision analysis approach to identify which stakeholders will support or oppose a transition path. We provide a flexible template for analyzing stakeholder preferences toward transition paths. This flexibility comes from the fact that our multi-criteria decision aid-based approach does not involve intensive empirical work with stakeholders. Instead, it involves subjecting assumptions to robustness analysis, which can help identify options to influence stakeholders' attitudes toward transitions.
Subglacial environments on Earth offer important analogs to Ocean World targets in our solar system. These unique microbial ecosystems remain understudied due to the challenges of access through thick glacial ice (tens to hundreds of meters). Additionally, sub-ice collections must be conducted in a clean manner to ensure sample integrity for downstream microbiological and geochemical analyses. We describe the field-based cleaning of a melt probe that was used to collect brine samples from within a glacier conduit at Blood Falls, Antarctica, for geomicrobiological studies. We used a thermoelectric melting probe called the IceMole that was designed to be minimally invasive in that the logistical requirements in support of drilling operations were small and the probe could be cleaned, even in a remote field setting, so as to minimize potential contamination. In our study, the exterior bioburden on the IceMole was reduced to levels measured in most clean rooms, and below that of the ice surrounding our sampling target. Potential microbial contaminants were identified during the cleaning process; however, very few were detected in the final englacial sample collected with the IceMole and were present in extremely low abundances (∼0.063% of 16S rRNA gene amplicon sequences). This cleaning protocol can help minimize contamination when working in remote field locations, support microbiological sampling of terrestrial subglacial environments using melting probes, and help inform planetary protection challenges for Ocean World analog mission concepts.
The European Union's aim to become climate neutral by 2050 necessitates ambitious efforts to reduce carbon emissions. Large reductions can be attained particularly in energy-intensive sectors like iron and steel. In order to prevent the relocation of such industries outside the EU in the course of tightening environmental regulations, the establishment of a climate club jointly with other large emitters and, alternatively, the unilateral implementation of an international cross-border carbon tax mechanism have been proposed. This article focuses on the latter option, choosing the steel sector as an example. In particular, we investigate the financial conditions under which a European cross-border mechanism is capable of protecting hydrogen-based steel production routes employed in Europe against more polluting competition from abroad. Using a floor price model, we assess the competitiveness of different steel production routes in selected countries. We evaluate the climate friendliness of steel production on the basis of specific GHG emissions. In addition, we utilize an input-output price model, which enables us to assess the impacts of rising steel production costs on commodities that use steel as intermediates. Our results raise concerns that a cross-border tax mechanism will not suffice to bring about the competitiveness of hydrogen-based steel production in Europe because the cost tends to remain higher than the cost of steel production in, e.g., China. Steel is a classic example of a good used mainly as an intermediate for other products. Therefore, a cross-border tax mechanism for steel will increase the price of products produced in the EU that require steel as an input. This can in turn adversely affect the competitiveness of these sectors. Hence, the effects of higher steel costs on European exports should be borne in mind and could require the cross-border adjustment mechanism to also subsidize exports.
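An input-output price model of the kind mentioned propagates a sectoral cost increase through intermediate deliveries via the Leontief price equation p = (I − Aᵀ)⁻¹ v. A minimal sketch with an illustrative three-sector economy (all coefficients are assumptions, not calibrated data):

```python
import numpy as np

# Illustrative technical-coefficient matrix A (column j = inputs required
# per unit of output of sector j): steel, machinery, construction.
A = np.array([[0.05, 0.20, 0.15],   # steel inputs
              [0.10, 0.10, 0.10],   # machinery inputs
              [0.02, 0.05, 0.05]])  # construction inputs
v = np.array([0.50, 0.40, 0.45])    # value added per unit of output

# Leontief price model: p' = p'A + v'  =>  p = (I - A')^{-1} v
I = np.eye(3)
p0 = np.linalg.solve(I - A.T, v)

# Cost-push scenario: a carbon border tax raises steel's unit value added by 20%.
v_tax = v.copy()
v_tax[0] *= 1.20
p1 = np.linalg.solve(I - A.T, v_tax)
print((p1 / p0 - 1) * 100)          # price increase per sector [%]
```

Because the Leontief inverse of a productive economy is nonnegative, every sector that uses steel directly or indirectly sees a price increase, which is the channel through which downstream competitiveness is affected.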
Edge-based and face-based smoothed finite element methods (ES-FEM and FS-FEM, respectively) are modified versions of the finite element method that achieve more accurate results and reduced sensitivity to mesh distortion, at least for linear elements. These properties make the two methods very attractive. However, their implementation in a standard finite element code is nontrivial because it requires heavy and extensive modifications to the code architecture. In this article, we present an element-based formulation of the ES-FEM and FS-FEM methods that allows the two methods to be implemented in a standard finite element code with no modifications to its architecture. Moreover, the element-based formulation makes it easy to handle any type of element, especially in 3D models where, to the best of the authors' knowledge, only tetrahedral elements are used in FS-FEM applications found in the literature. Shape functions for non-simplex 3D elements are proposed in order to apply FS-FEM to any standard finite element.
The mechanical behavior of the large intestine beyond the ultimate stress has never been investigated. Stretching beyond the ultimate stress may drastically impair the tissue microstructure, which consequently weakens its healthy-state functions of absorption, temporary storage, and transportation for defecation. Due to its close similarity to the human large intestine in microstructure and function, biaxial tensile experiments on the porcine large intestine have been performed in this study. In this paper, we report the hyperelastic characterization of the large intestine based on experiments on 102 specimens. We also report the theoretical analysis of the experimental results, including an exponential damage evolution function. The fracture energies and the threshold stresses are set as damage material parameters for the longitudinal muscular, the circumferential muscular, and the submucosal collagenous layers. A biaxial tensile simulation of a linear brick element has been performed to validate the applicability of the estimated material parameters. The model successfully simulates the biomechanical response of the large intestine under physiological and non-physiological loads.
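An exponential damage evolution of the kind mentioned is commonly written with a stress threshold and a fracture-energy-related softening parameter. The generic form below is a sketch of this class of models; the paper's exact expression may differ.

```latex
% Generic exponential damage evolution (illustrative form only):
D(\Xi) =
\begin{cases}
0, & \Xi \le \Xi_0,\\
1 - \exp\!\left(-\dfrac{\Xi - \Xi_0}{\Gamma}\right), & \Xi > \Xi_0,
\end{cases}
\qquad
\boldsymbol{\sigma} = (1 - D)\,\boldsymbol{\sigma}_{\mathrm{eff}}
```

Here \(\Xi\) is an equivalent-stress measure, \(\Xi_0\) plays the role of the threshold stress, and \(\Gamma\) is tied to the fracture energy; separate parameter sets would apply to the longitudinal muscular, circumferential muscular, and submucosal collagenous layers.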
Wearable EEG has gained popularity in recent years, driven by promising uses outside of clinics and research. The ubiquitous application of continuous EEG requires unobtrusive form factors that are easily acceptable to end-users. In this progression, wearable EEG systems have been moving from the full scalp to the forehead and recently to the ear. The aim of this study is to demonstrate that emerging ear-EEG provides similar impedance and signal properties to established forehead EEG. EEG data using an eyes-open/eyes-closed alpha paradigm were acquired from ten healthy subjects using generic earpieces fitted with three custom-made electrodes and a forehead electrode (at Fpx) after impedance analysis. Inter-subject variability in in-ear electrode impedance ranged from 20 kΩ to 25 kΩ at 10 Hz. Signal quality was comparable, with an SNR of 6 for in-ear and 8 for forehead electrodes. Alpha attenuation was significant during the eyes-open condition in all in-ear electrodes, and it followed the structure of the power spectral density plots of the forehead electrodes, with a Pearson correlation coefficient of 0.92 between in-ear locations ELE (Left Ear Superior) and ERE (Right Ear Superior) and forehead locations Fp1 and Fp2, respectively. The results indicate that in-ear EEG is an unobtrusive alternative to established forehead EEG in terms of impedance, signal properties, and information content.
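The alpha attenuation between eyes-closed and eyes-open conditions can be quantified from Welch power spectral densities. The sketch below runs on synthetic single-channel data; the sampling rate, amplitudes, and band limits are assumptions chosen only to make the effect visible.

```python
import numpy as np
from scipy.signal import welch

fs = 250.0                        # sampling rate [Hz], assumed
t = np.arange(0, 30, 1 / fs)      # 30 s per condition
rng = np.random.default_rng(5)

# Synthetic single-channel EEG: broadband noise plus a 10 Hz alpha rhythm
# that is strong with eyes closed and attenuated with eyes open.
noise = lambda: rng.normal(0, 1.0, t.size)
eyes_closed = noise() + 2.0 * np.sin(2 * np.pi * 10 * t)
eyes_open = noise() + 0.3 * np.sin(2 * np.pi * 10 * t)

def alpha_power(x):
    """Band power in the 8-12 Hz alpha band from a Welch PSD estimate."""
    f, pxx = welch(x, fs=fs, nperseg=1024)
    band = (f >= 8) & (f <= 12)
    return pxx[band].sum() * (f[1] - f[0])

attenuation_db = 10 * np.log10(alpha_power(eyes_open) / alpha_power(eyes_closed))
print(attenuation_db)
```

The same band-power computation per electrode, followed by a correlation of the PSD curves across locations, corresponds to the comparison of in-ear and forehead channels described above.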
Image reconstruction analysis for positron emission tomography with heterostructured scintillators
(2022)
The concept of structure engineering has been proposed for exploring the next generation of radiation detectors with improved performance. A TOF-PET geometry with heterostructured scintillators with a pixel size of 3.0×3.1×15 mm³ was simulated using Monte Carlo methods. The heterostructures consisted of alternating layers of BGO as a dense material with high stopping power and plastic (EJ232) as a fast light emitter. The detector time resolution was calculated as a function of the deposited and shared energy in both materials on an event-by-event basis. While sensitivity was reduced to 32% for 100 μm thick plastic layers and 52% for 50 μm, the CTR distribution improved to 204±49 ps and 220±41 ps, respectively, compared to the 276 ps we considered for bulk BGO. The complex distribution of timing resolutions was accounted for in the reconstruction: we divided the events into three groups based on their CTR and modeled them with different Gaussian TOF kernels. On a NEMA IQ phantom, the heterostructures had better contrast recovery in early iterations. On the other hand, BGO achieved a better contrast-to-noise ratio (CNR) after the 15th iteration due to its higher sensitivity. The developed simulation and reconstruction methods constitute new tools for evaluating different detector designs with complex time responses.
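The width of a Gaussian TOF kernel follows from the standard conversion of a timing FWHM to a spatial standard deviation along the line of response: σ_t = CTR/(2√(2 ln 2)) and σ_x = c·σ_t/2 (the factor ½ arises because the position depends on half the arrival-time difference). A minimal sketch for the three quoted CTR values:

```python
import numpy as np

C_MM_PER_PS = 0.299792458            # speed of light [mm/ps]

def tof_sigma_mm(ctr_ps):
    """Spatial std. dev. of the Gaussian TOF kernel along the LOR
    for a given coincidence time resolution (CTR, FWHM in ps)."""
    sigma_t = ctr_ps / (2 * np.sqrt(2 * np.log(2)))   # FWHM -> sigma
    return C_MM_PER_PS * sigma_t / 2                  # time -> position on LOR

# Kernel widths for the event groups quoted above
# (heterostructure configurations vs. bulk BGO):
for ctr in (204.0, 220.0, 276.0):    # ps
    print(ctr, tof_sigma_mm(ctr))
```

Grouping events by CTR and convolving each group with its own kernel, as described above, weights well-timed events with a narrower positional uncertainty in the reconstruction.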