With the prevalence of glucosamine- and chondroitin-containing dietary supplements for people with osteoarthritis in the marketplace, it is important to have an accurate and reproducible analytical method for the quantitation of these compounds in finished products. An NMR spectroscopic method based on both low-field (80 MHz) and high-field (500–600 MHz) NMR instrumentation was established, compared, and validated for the determination of chondroitin sulfate and glucosamine in dietary supplements. The proposed method was applied to the analysis of 20 different dietary supplements. In the majority of cases, quantification results obtained on the low-field NMR spectrometer are similar to those obtained with high-field 500–600 MHz NMR devices. Validation results in terms of accuracy, precision, reproducibility, limit of detection and recovery demonstrated that the developed method is fit for purpose for the marketed products. The NMR method was extended to the analysis of methylsulfonylmethane, the adulterant maltodextrin, acetate and inorganic ions. Low-field NMR can be a quicker and cheaper alternative to more expensive high-field NMR measurements for quality control of the investigated dietary supplements. High-field NMR instrumentation can be more favorable for samples with complex composition due to its better resolution, while simultaneously allowing the analysis of inorganic species such as potassium and chloride.
In times of social climate protection movements, such as Fridays for Future, the priorities of society, industry and higher education are currently changing. The consideration of sustainability challenges is increasing. In the context of sustainable development, social skills are crucial to achieving the United Nations Sustainable Development Goals (SDGs). In particular, the impact that educational activities have on people, communities and society is therefore coming to the fore. Research has shown that people with high levels of social competence are better able to manage stressful situations, maintain positive relationships and communicate effectively. High social competence is also associated with better academic performance and career success. However, especially in engineering programs, the social pillar is underrepresented compared to the environmental and economic pillars.
In response to these changes, higher education institutions should be more aware of their social impact - from individual forms of teaching to entire modules and degree programs. To specifically determine the potential for improvement and derive resulting change for further development, we present an initial framework for social impact measurement by transferring already established approaches from the business sector to the education sector. To demonstrate the applicability, we measure the key competencies taught in undergraduate engineering programs in Germany.
The aim is to prepare the students for success in the modern world of work and their future contribution to sustainable development. Additionally, the university can include the results in its sustainability report. Our method can be applied to different teaching methods and enables their comparison.
This book is based on a multimedia course for biological and chemical engineers, which is designed to trigger students' curiosity and initiative. A solid basic knowledge of thermodynamics and kinetics is necessary for understanding many technical, chemical, and biological processes.
The one-semester basic lecture course was divided into 12 workshops (chapters). Each chapter covers a practically relevant area of physical chemistry and contains the following didactic elements that make this book particularly exciting and understandable:
- Links to Videos at the start of each chapter as preparation for the workshop
- Key terms (in bold) for further research of your own
- Comprehension questions and calculation exercises with solutions as learning checks
- Key illustrations as simple, easy-to-replicate blackboard pictures
Humorous cartoons for each workshop (by Faelis) additionally lighten up the text and facilitate the learning process as a mnemonic. To round out the book, the appendix includes a summary of the most popular experiments in basic physical chemistry courses, as well as suggestions for designing workshops with exhibits, experiments, and "questions of the day."
Suitable for students minoring in chemistry; chemistry majors are sure to find this slimmed-down, didactically valuable book helpful as well. The book is excellent for self-study.
Digital forensics of smartphones is of utmost importance in many criminal cases. As modern smartphones store chats, photos, videos, and other data that can be relevant for investigations, and as they can have storage capacities of hundreds of gigabytes, they are a primary target for forensic investigators. However, it is exactly this large amount of data that causes problems: extracting and examining the data from multiple phones seized in the context of a case takes more and more time. This creates the risk of wasting a lot of time on irrelevant phones, leaving too little time to analyze a phone that actually merits examination. Forensic triage can help in this case: such a triage is a preselection step based on a subset of data and is performed before fully extracting all the data from the smartphone. Triage can accelerate subsequent investigations and is especially useful in cases where time is essential. The aim of this paper is to determine which and how much data from an Android smartphone can be made directly accessible to the forensic investigator – without tedious investigations. For this purpose, an app has been developed that stores extremely little data on the handset and outputs the extracted data immediately to the forensic workstation in a human- and machine-readable format.
Experimental determination of the cross sections of proton capture on radioactive nuclei is extremely difficult, yet these cross sections are of substantial interest for understanding the production of the p-nuclei. For the first time, a direct measurement of proton-capture cross sections on stored, radioactive ions became possible in an energy range of interest for nuclear astrophysics. The experiment was performed at the Experimental Storage Ring (ESR) at GSI by making use of a sensitive method to measure (p,γ) and (p,n) reactions in inverse kinematics. These reaction channels are of high relevance for the nucleosynthesis processes in supernovae, which are among the most violent explosions in the universe and are not yet well understood. The cross section of the ¹¹⁸Te(p,γ) reaction has been measured at energies of 6 MeV/u and 7 MeV/u. The heavy ions interacted with a hydrogen gas jet target. The radiative recombination process of the fully stripped ¹¹⁸Te ions and electrons from the hydrogen target was used as a luminosity monitor. An overview of the experimental method and preliminary results from the ongoing analysis will be presented.
Hydrogen peroxide (H₂O₂), a strong oxidizer, is a commonly used sterilization agent employed during aseptic food processing and medical applications. To assess the sterilization efficiency of H₂O₂, bacterial spores are commonly used as model systems owing to their remarkable robustness against a wide variety of decontamination strategies. Despite their widespread use, there is, however, only little information about the detailed time-resolved mechanism underlying oxidative spore death by H₂O₂. In this work, we investigate chemical and morphological changes of individual Bacillus atrophaeus spores undergoing oxidative damage in real time using optical trapping Raman microscopy. The time-resolved experiments reveal that spore death involves two distinct phases: (i) an initial phase dominated by the fast release of dipicolinic acid (DPA), a major spore biomarker, which indicates the rupture of the spore’s core; and (ii) the oxidation of the remaining spore material, resulting in the subsequent fragmentation of the spore’s coat. Simultaneous observation of the spore morphology by optical microscopy corroborates these mechanisms. The dependence of the onset of DPA release and of the time constant of spore fragmentation on the H₂O₂ concentration shows that the formation of reactive oxygen species from H₂O₂ is the rate-limiting factor of oxidative spore death.
Clinical assessment of newly developed sensors is important for ensuring their validity. Comparing recordings of emerging electrocardiography (ECG) systems to a reference ECG system requires accurate synchronization of data from both devices. Current methods can be inefficient and prone to errors. To address this issue, three algorithms are presented to synchronize two ECG time series from different recording systems: Binned R-peak Correlation, R-R Interval Correlation, and Average R-peak Distance. These algorithms reduce ECG data to their cyclic features, mitigating inefficiencies and minimizing discrepancies between different recording systems. We evaluate the performance of these algorithms using high-quality data and then assess their robustness after manipulating the R-peaks. Our results show that R-R Interval Correlation was the most efficient, whereas the Average R-peak Distance and Binned R-peak Correlation were more robust against noisy data.
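The R-R Interval Correlation idea described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: the function name, the synthetic beat series, and the lag-to-offset mapping are all assumptions. Each recording is reduced to its R-R interval series; cross-correlating the two series yields the beat lag, from which the time offset follows.

```python
import numpy as np

def rr_interval_offset(peaks_a, peaks_b):
    """Estimate the time offset of recording B relative to recording A
    from the R-peak times (in seconds) of both devices, by
    cross-correlating their R-R interval series."""
    rr_a = np.diff(peaks_a)               # R-R intervals of device A
    rr_b = np.diff(peaks_b)               # R-R intervals of device B
    a = (rr_a - rr_a.mean()) / rr_a.std() # standardize both series
    b = (rr_b - rr_b.mean()) / rr_b.std()
    corr = np.correlate(a, b, mode="full")
    shift = corr.argmax() - (len(b) - 1)  # best-matching lag, in beats
    # map the beat lag to a time offset via the matched first beats
    if shift >= 0:
        return peaks_a[shift] - peaks_b[0]
    return peaks_a[0] - peaks_b[-shift]

# Two devices recording the same heart: device B misses the first five
# beats and its clock starts at zero, so the true offset is beats[5].
beats = np.cumsum(np.random.default_rng(0).uniform(0.7, 1.1, 50))
peaks_a = beats
peaks_b = beats[5:] - beats[5]
offset = rr_interval_offset(peaks_a, peaks_b)
print(round(offset - beats[5], 6))  # 0.0: the true offset is recovered
```

Because only the cyclic features (R-R intervals) are compared, differences in sampling rate, amplitude, or waveform between the two recording systems do not affect the alignment.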
Due to the decarbonization of the energy sector, electric distribution grids are undergoing a major transformation, which is expected to increase the load on operating resources due to new electrical loads and distributed energy resources. Grid operators therefore need to gradually move to active grid management in order to ensure safe and reliable grid operation. However, this requires knowledge of key grid variables, such as node voltages, which is why the mass integration of measurement technology (smart meters) is necessary. A further problem is that a large part of the topology of the distribution grids is not sufficiently digitized and the models are partly faulty, so that active grid operation management today largely has to be carried out blindly. Developing methods for determining unknown grid topologies from measurement data is therefore a topic of current research. In this paper, different clustering algorithms are presented and their performance in detecting the topology of low-voltage grids is compared. Furthermore, the influence of measurement uncertainties is investigated in the form of a sensitivity analysis.
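As an illustration of correlation-based topology detection (a hypothetical sketch, not one of the algorithms compared in the paper): meters on the same feeder see strongly correlated voltage profiles, so grouping meters by the pairwise correlation of their smart-meter time series recovers the feeder assignment. All signals and the threshold below are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
t = 200                                   # time steps of smart-meter data

# Toy low-voltage grid: two feeders, each imposing a common (unknown)
# voltage profile on its meters; per-meter noise models measurement error.
feeder1 = 230 + rng.normal(0, 1.0, t)
feeder2 = 230 + rng.normal(0, 1.0, t)
meters = np.vstack([feeder1 + rng.normal(0, 0.1, t) for _ in range(4)] +
                   [feeder2 + rng.normal(0, 0.1, t) for _ in range(4)])

corr = np.corrcoef(meters)                # pairwise Pearson correlation

def cluster_by_correlation(corr, threshold=0.9):
    """Greedy single-linkage grouping: meters whose voltage series are
    strongly correlated are assumed to share a feeder."""
    n = corr.shape[0]
    labels = list(range(n))               # each meter starts as its own group
    for i in range(n):
        for j in range(i + 1, n):
            if corr[i, j] > threshold:
                old, new = labels[j], labels[i]
                labels = [new if l == old else l for l in labels]
    return labels

labels = cluster_by_correlation(corr)
print(labels)   # meters 0-3 share one label, meters 4-7 another
```

Measurement uncertainty enters through the per-meter noise level: as it grows relative to the shared feeder signal, pairwise correlations drop and the grouping degrades, which is exactly what a sensitivity analysis probes.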
AI-based systems are nearing ubiquity not only in everyday low-stakes activities but also in medical procedures. To protect patients and physicians alike, explainability requirements have been proposed for the operation of AI-based decision support systems (AI-DSS), which adds hurdles to the productive use of AI in clinical contexts. This raises two questions: Who decides these requirements? And how should access to AI-DSS be provided to communities that reject these standards (particularly when such communities are expert-scarce)? This chapter investigates a dilemma that emerges from the implementation of global AI governance. While rejecting global AI governance limits the ability to help communities in need, global AI governance risks undermining health-insecure communities and subjecting them to the force of the neo-colonial world order. To this end, the chapter first surveys the current landscape of AI governance and introduces the approach of relational egalitarianism as key to (global health) justice. To discuss the two horns of this dilemma, the core power imbalances faced by health-insecure collectives (HICs) are examined. The chapter argues that only the strong demands of a dual strategy towards health-secure collectives can both remedy the immediate needs of HICs and enable them to become healthcare-independent.
Modern implementations of driver assistance systems are evolving from pure driver assistance to independently acting automation systems. Still, these systems do not cover the full vehicle usage range, also called the operational design domain, and therefore require the human driver as a fall-back mechanism. Transitions of control and potential minimum risk manoeuvres are current research topics and will bridge the gap until fully autonomous vehicles are available. The authors showed in a demonstration that transition-of-control mechanisms can be further improved by the use of communication technology. Receiving incident type and position information via standardised vehicle-to-everything (V2X) messages can improve driver safety and comfort. The connected and automated vehicle’s software framework can use this information to plan areas where the driver should take back control, initiating a transition of control that can be followed by a minimum risk manoeuvre in case of an unresponsive driver. This transition of control has been implemented in a test vehicle and was presented to the public during IEEE IV2022 (IEEE Intelligent Vehicles Symposium) in Aachen, Germany.
Lead and nickel, as heavy metals, are still used in industrial processes and are classified as “environmental health hazards” due to their toxicity and polluting potential. The detection of heavy metals can prevent environmental pollution at toxic levels that are critical to human health. In this sense, the electrolyte–insulator–semiconductor (EIS) field-effect sensor is an attractive sensing platform for the fabrication of reusable and robust sensors to detect such substances. This study aimed to fabricate a sensing unit on an EIS device based on Sn₃O₄ nanobelts embedded in a polyelectrolyte matrix of polyvinylpyrrolidone (PVP) and polyacrylic acid (PAA) using the layer-by-layer (LbL) technique. The EIS-Sn₃O₄ sensor exhibited enhanced electrochemical performance for detecting Pb²⁺ and Ni²⁺ ions, revealing a higher affinity for Pb²⁺ ions, with sensitivities of ca. 25.8 mV/decade and 2.4 mV/decade, respectively. These results indicate that Sn₃O₄ nanobelts constitute a feasible proof-of-concept capacitive field-effect sensor for heavy metal detection, paving the way for future studies focusing on environmental monitoring.
KNX is a protocol for smart building automation, e.g., for automated heating, air conditioning, or lighting. This paper analyses and evaluates state-of-the-art KNX devices from the manufacturers Merten, Gira and Siemens with respect to security. On the one hand, it is investigated whether publicly known vulnerabilities, like insecure storage of passwords in software, unencrypted communication, or denial-of-service attacks, can be reproduced in new devices. On the other hand, the security is analyzed in general, leading to the discovery of a previously unknown and high-risk vulnerability related to so-called BCU (authentication) keys.
Selected problems in the field of multivariate statistical analysis are treated, with one focus on the paired-sample case. Among other things, statistical testing problems of marginal homogeneity are considered. In detail, properties of Hotelling's T² test in a special parametric situation are obtained. Moreover, the nonparametric problem of marginal homogeneity is discussed on the basis of possibly incomplete data. In the bivariate data case, properties of the Hoeffding-Blum-Kiefer-Rosenblatt independence test statistic on the basis of partly not identically distributed data are investigated. Similar testing problems are treated within the scope of the application of a result for the empirical process of the concomitants for partly categorical data. Furthermore, testing changes in the modeled solvency capital requirement of an insurance company by means of a paired sample from an internal risk model is discussed. Beyond the paired-sample case, a new asymptotic relative efficiency concept based on the expected volumes of multidimensional confidence regions is introduced. In addition, a new approach for the treatment of the multi-sample goodness-of-fit problem is presented. Finally, a consistent test for the goodness-of-fit problem against the background of huge or infinite-dimensional data is developed.
To fulfil the CO₂ emission reduction targets of the European Union (EU), heavy-duty (HD) trucks need to operate 15% more efficiently by 2025 and 30% more efficiently by 2030. Their electrification is necessary, as conventional HD trucks are already optimized for the long-haul application. The resulting hybrid electric vehicle (HEV) truck gains most of its fuel-saving potential from the recuperation of potential energy and its subsequent utilization. The key to exploiting the full potential of HEV-HD trucks is to maximize the amount of recuperated energy and ensure its intelligent usage while keeping the operating point of the internal combustion engine as efficient as possible. To achieve this goal, an intelligent energy management strategy (EMS) based on the equivalent consumption minimization strategy (ECMS) is developed for a parallel HEV-HD truck; it uses predictive discharge of the battery and an operating strategy that adapts to the height profile and the vehicle mass. The presented EMS can reproduce the globally optimal operating strategy over long phases and leads to a fuel saving potential of up to 2% compared with a heuristic strategy. Furthermore, the fuel saving potential is correlated with the investigated boundary conditions to deepen the understanding of the impact of intelligent EMS for HEV-HD trucks.
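The ECMS principle behind such a strategy can be sketched compactly. All efficiencies, the equivalence factor, and its state-of-charge (SOC) adaptation below are illustrative assumptions, not the developed EMS: at each instant, the power split minimizing fuel power plus equivalence-factor-weighted battery power is chosen.

```python
def ecms_split(p_demand_kw, soc, s0=2.5, candidates=21):
    """Return the electric share of the demanded power (0 = engine only,
    1 = electric only) that minimizes the equivalent consumption."""
    best, best_cost = 0.0, float("inf")
    # adaptive equivalence factor: electric energy is penalized more
    # when the battery state of charge is low
    s = s0 * (1 + (0.5 - soc))
    for k in range(candidates):
        share = k / (candidates - 1)
        p_batt = share * p_demand_kw
        p_engine = p_demand_kw - p_batt
        fuel_power = p_engine / 0.40      # assumed engine efficiency ~40 %
        elec_cost = s * p_batt / 0.90     # assumed battery path efficiency ~90 %
        cost = fuel_power + elec_cost     # equivalent consumption
        if cost < best_cost:
            best, best_cost = share, cost
    return best

# high SOC favors electric drive, low SOC favors the engine
print(ecms_split(100, soc=0.8), ecms_split(100, soc=0.3))
```

A predictive variant, as described above, would additionally shape the equivalence factor along the known height profile so that the battery is discharged before long downhill recuperation phases.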
Messenger apps like WhatsApp and Telegram are frequently used for everyday communication, but they can also be utilized as a platform for illegal activity. Telegram allows public groups with up to 200,000 participants. Criminals use these public groups for trading illegal commodities and services, which is a concern for law enforcement agencies, which manually monitor suspicious activity in these chat rooms. This research demonstrates how natural language processing (NLP) can assist in analyzing these chat rooms, providing an explorative overview of the domain and facilitating purposeful analyses of user behavior. We provide a publicly available corpus of text messages annotated with entities and relations from four self-proclaimed black market chat rooms. Our pipeline approach aggregates the product attributes extracted from user messages into profiles and uses these, together with the products sold, as features for clustering. The extracted structured information is the foundation for further data exploration, such as identifying the top vendors or fine-grained price analyses. Our evaluation shows that pretrained word vectors perform better for unsupervised clustering than state-of-the-art transformer models, while the latter remain superior for sequence labeling.
Supervised machine learning and deep learning require large amounts of labeled data, which data scientists obtain through a manual and time-consuming annotation process. To mitigate this challenge, Active Learning (AL) proposes promising data points for annotators to label next, instead of a sequential or random sample. This method is supposed to save annotation effort while maintaining model performance.
However, practitioners face many AL strategies for different tasks and need an empirical basis to choose between them. Surveys categorize AL strategies into taxonomies without performance indications, and presentations of novel AL strategies compare their performance only to a small subset of existing strategies. Our contribution addresses this empirical basis by introducing a reproducible active learning evaluation (ALE) framework for the comparative evaluation of AL strategies in NLP.
The framework allows the implementation of AL strategies with low effort and a fair, data-driven comparison by defining and tracking experiment parameters (e.g., initial dataset size, number of data points per query step, and the budget). ALE helps practitioners make more informed decisions, and researchers can focus on developing new, effective AL strategies and deriving best practices for specific use cases. With such best practices, practitioners can lower their annotation costs. We present a case study to illustrate how to use the framework.
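To make the evaluated loop concrete, here is a toy pool-based active-learning sketch with uncertainty sampling and a nearest-centroid classifier. It is not ALE's API; the dataset, seed set, query step, and budget merely illustrate the experiment parameters the framework tracks.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy binary classification pool: two well-separated Gaussian blobs
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

def fit_centroids(X, y):
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def margin(x, centroids):
    d = sorted(np.linalg.norm(x - m) for m in centroids.values())
    return d[1] - d[0]   # small margin = close to the boundary = uncertain

def active_learning_loop(X, y, step=4, budget=40):
    labeled = [0, 1, 100, 101]            # initial dataset: two per class
    while len(labeled) < budget:
        centroids = fit_centroids(X[labeled], y[labeled])
        pool = [i for i in range(len(X)) if i not in labeled]
        # uncertainty sampling: query the most ambiguous points first
        pool.sort(key=lambda i: margin(X[i], centroids))
        labeled += pool[:step]            # "annotate" the queried points
    centroids = fit_centroids(X[labeled], y[labeled])
    preds = [min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))
             for x in X]
    return float(np.mean(np.array(preds) == y))

acc = active_learning_loop(X, y)
print(f"accuracy after a budget of 40 labels: {acc:.2f}")
```

A framework like ALE would run many such loops with different strategies and seeds, logging the parameters and learning curves so strategies can be compared fairly.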
This study evaluates neuromechanical control and muscle-tendon interaction during energy storage and dissipation tasks in hypergravity. During parabolic flights, while 17 subjects performed drop jumps (DJs) and drop landings (DLs), electromyography (EMG) of the lower limb muscles was combined with in vivo fascicle dynamics of the gastrocnemius medialis, two-dimensional (2D) kinematics, and kinetics to measure and analyze changes in energy management. Comparisons were made between movement modalities executed in hypergravity (1.8 G) and gravity on ground (1 G). In 1.8 G, ankle dorsiflexion, knee joint flexion, and vertical center of mass (COM) displacement are lower in DJs than in DLs; within each movement modality, joint flexion amplitudes and COM displacement demonstrate higher values in 1.8 G than in 1 G. Concomitantly, negative peak ankle joint power, vertical ground reaction forces, and leg stiffness are similar between both movement modalities (1.8 G). In DJs, EMG activity in 1.8 G is lower during the COM deceleration phase than in 1 G, thus impairing quasi-isometric fascicle behavior. In DLs, EMG activity before and during the COM deceleration phase is higher, and fascicles are stretched less in 1.8 G than in 1 G. Compared with the situation in 1 G, highly task-specific neuromuscular activity is diminished in 1.8 G, resulting in fascicle lengthening in both movement modalities. Specifically, in DJs, a high magnitude of neuromuscular activity is impaired, resulting in altered energy storage. In contrast, in DLs, linear stiffening of the system due to higher neuromuscular activity combined with lower fascicle stretch enhances the buffering function of the tendon, and thus the capacity to safely dissipate energy.
It has been shown that muscle fascicle curvature increases with increasing contraction level and decreasing muscle–tendon complex length. The analyses were done with limited examination windows concerning contraction level, muscle–tendon complex length, and/or intramuscular position of ultrasound imaging. With this study we aimed to investigate the correlation between fascicle arching and contraction, muscle–tendon complex length and their associated architectural parameters in gastrocnemius muscles to develop hypotheses concerning the fundamental mechanism of fascicle curving. Twelve participants were tested in five different positions (90°/105°*, 90°/90°*, 135°/90°*, 170°/90°*, and 170°/75°*; *knee/ankle angle). They performed isometric contractions at four different contraction levels (5%, 25%, 50%, and 75% of maximum voluntary contraction) in each position. Panoramic ultrasound images of gastrocnemius muscles were collected at rest and during constant contraction. Aponeuroses and fascicles were tracked in all ultrasound images and the parameters fascicle curvature, muscle–tendon complex strain, contraction level, pennation angle, fascicle length, fascicle strain, intramuscular position, sex and age group were analyzed by linear mixed effect models. Mean fascicle curvature of the medial gastrocnemius increased with contraction level (+5 m−1 from 0% to 100%; p = 0.006). Muscle–tendon complex length had no significant impact on mean fascicle curvature. Mean pennation angle (2.2 m−1 per 10°; p < 0.001), inverse mean fascicle length (20 m−1 per cm−1; p = 0.003), and mean fascicle strain (−0.07 m−1 per +10%; p = 0.004) correlated with mean fascicle curvature. Evidence has also been found for intermuscular, intramuscular, and sex-specific intramuscular differences of fascicle curving. Pennation angle and the inverse fascicle length show the highest predictive capacities for fascicle curving. 
Due to the strong correlations between pennation angle and fascicle curvature, and given the intramuscular pattern of curving, we suggest that future studies examine correlations between fascicle curvature and intramuscular fluid pressure.
Extracting workflow nets from textual descriptions can be used to simplify guidelines or formalize textual descriptions of formal processes like business processes and algorithms. The task of manually extracting processes, however, requires domain expertise and effort. While automatic process model extraction is desirable, annotating texts with formalized process models is expensive. Therefore, there are only a few machine-learning-based extraction approaches. Rule-based approaches, in turn, require domain specificity to work well and can rarely distinguish relevant and irrelevant information in textual descriptions. In this paper, we present GUIDO, a hybrid approach to the process model extraction task that first classifies sentences regarding their relevance to the process model, using a BERT-based sentence classifier, and then extracts a process model from the sentences classified as relevant, using dependency parsing. The presented approach achieves significantly better results than a pure rule-based approach: GUIDO achieves an average behavioral similarity score of 0.93. Still, in comparison to purely machine-learning-based approaches, the annotation costs stay low.
In recent years, the development of large pretrained language models, such as BERT and GPT, has significantly improved information extraction systems on various tasks, including relation classification. State-of-the-art systems are highly accurate on scientific benchmarks, but a lack of explainability is currently a complicating factor in many real-world applications. Comprehensible systems are necessary to prevent biased, counterintuitive, or harmful decisions.
We introduce semantic extents, a concept for analyzing decision patterns in the relation classification task. Semantic extents are the most influential parts of texts with respect to classification decisions. Our definition allows similar procedures to determine semantic extents for humans and models. We provide an annotation tool and a software framework to determine semantic extents for humans and models conveniently and reproducibly. Comparing both reveals that models tend to learn shortcut patterns from data. These patterns are hard to detect with current interpretability methods, such as input reductions. Our approach can help detect and eliminate spurious decision patterns during model development. Semantic extents can thus increase the reliability and security of natural language processing systems and are an essential step toward enabling applications in critical areas like healthcare or finance. Moreover, our work opens new research directions for developing methods to explain deep learning models.
Preprint: Studies on the enzymatic reduction of levulinic acid using Chiralidon-R and Chiralidon-S
(2023)
The enzymatic reduction of levulinic acid by the chiral catalysts Chiralidon-R and Chiralidon-S, which are commercially available superabsorbed alcohol dehydrogenases, is described. Chiralidon®-R/S reduces levulinic acid to (R,S)-4-hydroxyvaleric acid and the corresponding (R)- or (S)-gamma-valerolactone.
In addition to technical content, modern university courses should also teach professional skills to enhance students' competencies for their future work. This competency-driven approach, covering technical as well as professional skills, makes it necessary to find a suitable way to integrate both into the corresponding module in a scalable and flexible manner. Agile development, for example, is essential for the development of modern systems and applications and relies on dedicated professional skills of the team members, such as structured group dynamics and communication, to enable fast and reliable development. This paper presents a flexible and easy-to-adopt approach for integrating Scrum, an agile development method, into the lab of an existing module. Through the different Scrum roles, students achieve individual learning success, gain valuable insight into modern system development, and strengthen their communication and organization skills. The approach is implemented and evaluated in the module Vehicle Systems, but it can easily be transferred to other technical courses as well. The evaluation of the implementation considers feedback from all stakeholders (students, supervisors, and lecturers) and monitors observations during the project lifetime.
By developing innovative solutions to social and environmental problems, sustainable ventures carry great potential. Entrepreneurship, which focuses especially on new venture creation, can be developed through education, and universities in particular are called upon to provide an impetus for social change. But social innovations are associated with certain hurdles related to their multi-dimensionality, i.e. the tension between creating social, environmental and economic value and dealing with a multiplicity of stakeholders. The already complex field of entrepreneurship education has to face these challenges. This paper therefore aims to identify starting points for the integration of sustainability into entrepreneurship education. To pursue this goal, experiences from three different project initiatives between the partner universities Lapland University of Applied Sciences, FH Aachen University of Applied Sciences and Turiba University are reflected upon, and findings are systematically condensed into recommendations for education on sustainable entrepreneurship.
Market abstraction of energy markets and policies - application in an agent-based modeling toolbox
(2023)
In light of emerging challenges in energy systems, markets are prone to changing dynamics and market design. Simulation models are commonly used to understand the changing dynamics of future electricity markets. However, existing market models were often created with specific use cases in mind, which limits their flexibility and usability. This can impose challenges for using a single model to compare different market designs. This paper introduces a new method of defining market designs for energy market simulations. The proposed concept makes it easy to incorporate different market designs into electricity market models by using relevant parameters derived from analyzing existing simulation tools, morphological categorization and ontologies. These parameters are then used to derive a market abstraction and integrate it into an agent-based simulation framework, allowing for a unified analysis of diverse market designs. Furthermore, we showcase the usability of integrating new types of long-term contracts and over-the-counter trading. To validate this approach, two case studies are demonstrated: a pay-as-clear market and a pay-as-bid long-term market. These examples demonstrate the capabilities of the proposed framework.
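The two case-study market designs differ only in how accepted bids are remunerated. The following merit-order sketch is illustrative (the bids, volumes, and function are invented, not part of the framework): supply bids are accepted cheapest-first against an inelastic demand, then paid either a uniform clearing price (pay-as-clear) or their own bid (pay-as-bid).

```python
def clear_market(supply_bids, demand_mw, pay_as_bid=False):
    """Clear a single-sided auction against inelastic demand.
    supply_bids: list of (price €/MWh, volume MW) tuples.
    Returns (accepted bids as (price, volume), total payment)."""
    accepted, remaining = [], demand_mw
    for price, volume in sorted(supply_bids):      # merit order: cheapest first
        if remaining <= 0:
            break
        take = min(volume, remaining)
        accepted.append((price, take))
        remaining -= take
    if pay_as_bid:
        payment = sum(p * v for p, v in accepted)  # each unit paid its own bid
    else:
        clearing_price = accepted[-1][0]           # marginal bid sets the price
        payment = clearing_price * sum(v for _, v in accepted)
    return accepted, payment

bids = [(20, 50), (35, 30), (50, 40), (90, 20)]    # (price €/MWh, MW)
_, uniform = clear_market(bids, demand_mw=100)                     # pay-as-clear
_, discriminatory = clear_market(bids, demand_mw=100, pay_as_bid=True)
print(uniform, discriminatory)   # 5000 vs 3050: same dispatch, different payments
```

The dispatch (which plants run) is identical under both designs; only the payment rule, and hence the bidding incentives of the simulated agents, changes.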
Background
Hip fractures are a common and costly health problem, resulting in significant morbidity and mortality, as well as high costs for healthcare systems, especially for the elderly. Implementing surgical preventive strategies has the potential to improve the quality of life and reduce the burden on healthcare resources, particularly in the long term. However, there are currently limited guidelines for standardizing hip fracture prophylaxis practices.
Methods
This study used a cost-effectiveness analysis with a finite-state Markov model and cohort simulation to evaluate the primary and secondary surgical prevention of hip fractures in the elderly. Patients aged 60 to 90 years were simulated in two different models (A and B) to assess prevention at different levels. Model A assumed prophylaxis was performed during the fracture operation on the contralateral side, while Model B included individuals with high fracture risk factors. Costs were obtained from the Centers for Medicare & Medicaid Services, and transition probabilities and health state utilities were derived from available literature. The baseline assumption was a 10% reduction in fracture risk after prophylaxis. A sensitivity analysis was also conducted to assess the reliability and variability of the results.
Results
With a 10% fracture risk reduction, model A costs between $8,850 and $46,940 per quality-adjusted life-year ($/QALY). Additionally, it proved most cost-effective in the age range between 61 and 81 years. The sensitivity analysis established that a reduction of ≥ 2.8% is needed for prophylaxis to be definitely cost-effective. The cost-effectiveness at the secondary prevention level was most sensitive to the cost of the contralateral side’s prophylaxis, the patient’s age, and fracture treatment cost. For high-risk patients with no fracture history, the cost-effectiveness of a preventive strategy depends on their risk profile. In the baseline analysis, the incremental cost-effectiveness ratio at the primary prevention level varied between $11,000/QALY and $74,000/QALY, which is below the defined willingness to pay threshold.
Conclusion
Due to the high cost of hip fracture treatment and its increased morbidity, surgical prophylaxis strategies have demonstrated that they can significantly relieve the healthcare system. Various key assumptions facilitated the modeling, allowing for adequate room for uncertainty. Further research is needed to evaluate health-state-associated risks.
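A finite-state Markov cohort evaluation of the kind described in the Methods can be sketched as follows. The states, transition probabilities, costs and utilities below are illustrative placeholders, not the study's calibrated inputs.

```python
import numpy as np

# Minimal sketch of a finite-state Markov cohort model for a
# cost-effectiveness analysis. States: healthy, fractured, dead.
def run_cohort(trans, costs, utilities, cycles, discount=0.03):
    """Simulate a cohort; return total discounted cost and QALYs per person."""
    dist = np.array([1.0, 0.0, 0.0])        # everyone starts healthy
    total_cost = total_qaly = 0.0
    for t in range(cycles):
        d = 1.0 / (1.0 + discount) ** t     # annual discount factor
        total_cost += d * dist @ costs
        total_qaly += d * dist @ utilities
        dist = dist @ trans                  # one annual cycle
    return total_cost, total_qaly

def icer(strategy, comparator):
    """Incremental cost-effectiveness ratio in $/QALY."""
    (c1, q1), (c0, q0) = strategy, comparator
    return (c1 - c0) / (q1 - q0)
```

A prophylaxis strategy would enter such a model as a reduced healthy-to-fractured transition probability (e.g. the 10% baseline reduction) plus an upfront procedure cost, and the resulting ICER is compared against the willingness-to-pay threshold.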
Background
Post-COVID-19 syndrome (PCS) is a lingering disease with ongoing symptoms such as fatigue and cognitive impairment resulting in a high impact on the daily life of patients. Understanding the pathophysiology of PCS is a public health priority, as it still poses a diagnostic and treatment challenge for physicians.
Methods
In this prospective observational cohort study, we analyzed the retinal microcirculation using Retinal Vessel Analysis (RVA) in a cohort of patients with PCS and compared it to an age- and gender-matched healthy cohort (n = 41, matched out of n = 204).
Measurements and main results
PCS patients exhibit persistent endothelial dysfunction (ED), as indicated by significantly lower venular flicker-induced dilation (vFID; 3.42% ± 1.77% vs. 4.64% ± 2.59%; p = 0.02), narrower central retinal artery equivalent (CRAE; 178.1 [167.5–190.2] vs. 189.1 [179.4–197.2], p = 0.01) and lower arteriolar-venular ratio (AVR; 0.84 [0.8–0.9] vs. 0.88 [0.8–0.9], p = 0.007). When combining AVR and vFID, predicted scores reached a good ability to discriminate between groups (area under the curve: 0.75). Higher PCS severity scores correlated with lower AVR (R = −0.37, p = 0.017). The association of microvascular changes with PCS severity was amplified in PCS patients exhibiting higher levels of inflammatory parameters.
Conclusion
Our results demonstrate that prolonged endothelial dysfunction is a hallmark of PCS, and impairments of the microcirculation seem to explain ongoing symptoms in patients. As potential therapies for PCS emerge, RVA parameters may become relevant as clinical biomarkers for diagnosis and therapy management.
Environmental emissions, global warming, and energy-related concerns have accelerated the advancements in conventional vehicles that primarily use internal combustion engines. Among the existing technologies, hydrogen fuel cell electric vehicles and fuel cell hybrid electric vehicles may have minimal contributions to greenhouse gas emissions and thus are the prime choices for addressing environmental concerns. However, energy management in fuel cell electric vehicles and fuel cell hybrid electric vehicles is a major challenge. Appropriate control strategies should be used for effective energy management in these vehicles. On the other hand, there has been significant progress in artificial intelligence, machine learning, and the design of data-driven intelligent controllers. These techniques have attracted much attention within the community, and state-of-the-art energy management technologies have been developed based on them. This manuscript reviews the application of machine learning and intelligent controllers for prediction, control, energy management, and vehicle-to-everything (V2X) in hydrogen fuel cell vehicles. The effectiveness of data-driven control and optimization systems is investigated, and the approaches are classified and compared; future trends and directions for sustainability are discussed.
The eVTOL industry is a rapidly growing mass market expected to start in 2024. Owing to their predicted missions, eVTOLs compete with ground-based transportation modes, mainly passenger cars. Therefore, the automotive and classical aircraft design processes are reviewed and compared to highlight advantages for eVTOL development. A special focus is on ergonomic comfort and safety. The need for further investigation of eVTOL crashworthiness is outlined by, first, specifying the relevance of passive safety via accident statistics and customer perception analysis; second, comparing the current state of regulation and certification; and third, discussing the advantages of integral safety and of applying the automotive safety approach to eVTOL development. Integral safety links active and passive safety, while the automotive safety approach means implementing standardized mandatory full-vehicle crash tests for future eVTOL. Subsequently, possible crash impact conditions are analyzed, and three full-vehicle crash load cases are presented.
This work proposes a hybrid algorithm combining an Artificial Neural Network (ANN) with a conventional local path planner to navigate UAVs efficiently in various unknown urban environments. The proposed method, a Hybrid Artificial Neural Network Avoidance System, is called HANNAS. The ANN analyses a video stream and classifies the current environment. This information about the current environment is used to set several control parameters of a conventional local path planner, the 3DVFH*. The local path planner then plans the path toward a specific goal point based on distance data from a depth camera. We trained and tested a state-of-the-art image segmentation algorithm, PP-LiteSeg. The proposed HANNAS method reaches a failure probability of 17%, which is less than half the failure probability of the baseline and around half the failure probability of an improved, bio-inspired version of the 3DVFH*. The proposed HANNAS method does not show any disadvantages regarding flight time or flight distance.
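The coupling described above, in which a classifier output selects control-parameter presets of a local planner, can be sketched as follows. The class names and planner parameters are invented for illustration and are not the actual classes or parameters of PP-LiteSeg or the 3DVFH*.

```python
# Hedged sketch of the HANNAS idea: a segmentation network classifies the
# scene, and the dominant class is mapped to a parameter preset of the local
# planner. All names and values below are illustrative assumptions.
PLANNER_PRESETS = {
    "open_field":    {"safety_margin_m": 1.0, "max_speed_ms": 8.0},
    "urban_canyon":  {"safety_margin_m": 2.5, "max_speed_ms": 3.0},
    "dense_clutter": {"safety_margin_m": 3.5, "max_speed_ms": 1.5},
}

def classify_environment(class_pixel_counts):
    """Stand-in for the segmentation step: pick the dominant class."""
    return max(class_pixel_counts, key=class_pixel_counts.get)

def configure_planner(class_pixel_counts):
    """Map the classified environment to a planner parameter preset."""
    env = classify_environment(class_pixel_counts)
    return env, PLANNER_PRESETS[env]
```

In the real system the pixel counts would come from the segmentation mask of the current camera frame, and the preset would be pushed into the planner before each replanning step.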
The management of knowledge in organizations considers both established long-term processes and cooperation in agile project teams. Since knowledge can be both tacit and explicit, its transfer from the individual to the organizational knowledge base poses a challenge in organizations. This challenge increases when the fluctuation of knowledge carriers is exceptionally high. Especially in large projects in which external consultants are involved, there is a risk that critical, company-relevant knowledge generated in the project will leave the company with the external knowledge carrier and thus be lost. In this paper, we show the advantages of an early warning system for knowledge management to avoid this loss. In particular, the potential of visual analytics in the context of knowledge management systems is presented and discussed. We present a project for the development of a business-critical software system and discuss the first implementations and results.
Reducing poverty, protecting the planet, and improving life on earth for everyone are the essential goals of the "2030 Agenda for Sustainable Development" committed to by the United Nations (UN). Achieving those goals will require technological innovation as well as its implementation in almost all areas of our business and day-to-day life. This paper proposes a high-level framework that collects and structures different use cases addressing the goals defined by the UN. Hence, it contributes to the discussion by proposing technical innovations that can be used to achieve those goals. As an example, the goal "Climate Action" is discussed in detail by describing use cases related to tackling biodiversity loss in order to conserve ecosystems.
We present the production of 58mCo on a small 13 MeV medical cyclotron utilizing a siphon-style liquid target system. Different concentrated iron(III)-nitrate solutions of natural isotopic distribution were irradiated at varying initial pressures and subsequently separated by solid-phase extraction chromatography. The radiocobalt (58m/gCo and 56Co) was successfully produced with saturation activities of (0.35 ± 0.03) MBq μA⁻¹ for 58mCo, with a separation recovery of (75 ± 2)% of cobalt after one separation step utilizing LN-resin.
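The reported figure relies on the standard saturation-activity relation for cyclotron production yields. A minimal sketch, with illustrative irradiation values rather than the study's actual parameters:

```python
import math

# Standard saturation-activity relation:
#   A_sat per uA = A_EOB / (I * (1 - exp(-lambda * t_irr)))
# Half-life and irradiation values in the test are illustrative.
def saturation_yield_mbq_per_ua(a_eob_mbq, current_ua, t_irr_h, half_life_h):
    """Saturation activity per microampere from end-of-bombardment activity."""
    lam = math.log(2) / half_life_h          # decay constant in 1/h
    return a_eob_mbq / (current_ua * (1 - math.exp(-lam * t_irr_h)))
```

For an irradiation lasting exactly one half-life, the saturation factor is 0.5, so the per-microampere yield is twice the measured end-of-bombardment activity per microampere.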
Density reduction effects on the production of [11C]CO2 in Nb-body targets on a medical cyclotron
(2023)
Medical isotope production of 11C is commonly performed in gaseous targets. The power deposition of the proton beam during the irradiation decreases the target density due to thermodynamic mixing and can cause an increase in penetration depth and divergence of the proton beam. To investigate how the target-body length influences the operating conditions and the production yield, a 12 cm and a 22 cm Nb target body containing N2/O2 gas were irradiated using a 13 MeV proton cyclotron. It was found that the density reduction has a large influence on the pressure rise during irradiation and the achievable radioactive yield. The saturation activity of [11C]CO2 for the long target (0.083 Ci/μA) is about 10% higher than in the short target geometry (0.075 Ci/μA).
Meitner-Auger-electron emitters have a promising potential for targeted radionuclide therapy of cancer because of their short range and the high linear energy transfer of Meitner-Auger electrons (MAE). One promising MAE candidate is 197m/gHg, with half-lives of 23.8 h and 64.1 h, respectively, and a high MAE yield. Gold nanoparticles (AuNPs) labelled with 197m/gHg could be a helpful tool for radiation treatment of glioblastoma multiforme when infused into the surgical cavity after resection to prevent recurrence. To produce such AuNPs, 197m/gHg was embedded into pristine AuNPs. Two different syntheses were tested, starting from irradiated gold containing trace amounts of 197m/gHg. When sodium citrate was used as the reducing agent, no 197m/gHg-labelled AuNPs were formed, but with tannic acid, 197m/gHg-labelled AuNPs were produced. The method was optimized by neutralizing the pH (pH = 7) of the Au/197m/gHg solution, which led to labelled AuNPs with a size of 12.3 ± 2.0 nm as measured by transmission electron microscopy. The labelled AuNPs had a concentration of 50 μg (gold)/mL with an activity of 151 ± 93 kBq/mL (197gHg, time-corrected to the end of bombardment).
Assistance systems have been widely adopted in the manufacturing sector to facilitate various processes and tasks in production environments. However, existing systems are mostly equipped with rigid functional logic and do not provide individual user experiences or adapt to their capabilities. This work integrates human factors in assistance systems by adjusting the hardware and instruction presented to the workers’ cognitive and physical demands. A modular system architecture is designed accordingly, which allows a flexible component exchange according to the user and the work task. Gamification, the use of game elements in non-gaming contexts, has been further adopted in this work to provide level-based instructions and personalised feedback. The developed framework is validated by applying it to a manual workstation for industrial assembly routines.
Research on robotic lunar exploration has seen a broad revival, especially since the Google Lunar X-Prize increasingly brought private endeavors into play. This development is supported by national agencies with the aim of enabling long-term lunar infrastructure for in-situ operations and the establishment of a moon village. One challenge for effective exploration missions is developing a compact and lightweight robotic rover to reduce launch costs and open the possibility for secondary payload options. Existing micro rovers for exploration missions are clearly limited by their design for one day of sunlight and their low level of autonomy. For expanding the potential mission applications and range of use, an extension of lifetime could be reached by surviving the lunar night and providing a higher level of autonomy. To address this objective, the paper presents a system design concept for a lightweight micro rover with long-term mission duration capabilities, derived from a multi-day lunar mission scenario at equatorial regions. Technical solution approaches are described, analyzed, and evaluated, with emphasis put on the harmonization of hardware selection due to a strictly limited budget in dimensions and power.
In Europe, efforts are underway to develop key technologies that can be used to explore the Moon and to exploit the resources available. This includes technologies for in-situ resource utilization (ISRU), facilitating the possibility of a future Moon Village. The Moon is the next step for humans and robots to exploit the use of available resources for longer term missions, but also for further exploration of the solar system. A challenge for effective exploration missions is to achieve a compact and lightweight robot to reduce launch costs and open up the possibility of secondary payload options. Current micro rover concepts are primarily designed to last for one day of solar illumination and show a low level of autonomy. Extending the lifetime of the system by enabling survival of the lunar night and implementing a high level of autonomy will significantly increase potential mission applications and the operational range. As a reference mission, the deployment of a micro rover in the equatorial region of the Moon is being considered. An overview of mission parameters and a detailed example mission sequence is given in this paper. The mission parameters are based on an in-depth study of current space agency roadmaps, scientific goals, and upcoming flight opportunities. Furthermore, concepts of the ongoing international micro rover developments are analyzed along with technology solutions identified for survival of lunar nights and a high system autonomy. The results provide a basis of a concise requirements set-up to allow dedicated system developments and qualification measures in the future.
Rocket engine test facilities and launch pads are typically equipped with a guide tube. Its purpose is to ensure the controlled and safe routing of the hot exhaust gases. In addition, the guide tube induces a suction that affects the nozzle flow, namely the flow separation during transient start-up and shut-down of the engine. A cold-flow subscale nozzle in combination with a set of guide tubes was studied experimentally to determine the main influencing parameters.
This paper introduces an inexpensive Wiegand-sensor-based rotary encoder that avoids rotating magnets and is suitable for electrical-drive applications. So far, Wiegand-sensor-based encoders usually include a magnetic pole wheel with rotating permanent magnets. These encoders combine the disadvantages of an increased magnet demand and a limited maximal speed due to the centripetal force acting on the rotating magnets. The proposed approach reduces the total demand of permanent magnets drastically. Moreover, the rotating part is manufacturable from a single piece of steel, which makes it very robust and cheap. This work presents the theoretical operating principle of the proposed approach and validates its benefits on a hardware prototype. The presented proof-of-concept prototype achieves a mechanical resolution of 4.5° by using only 4 permanent magnets, 2 Wiegand sensors, and a rotating steel gear wheel with 20 teeth.
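The stated resolution can be checked with a back-of-the-envelope event count. The assumption that each gear tooth yields one Wiegand pulse per sensor and per pulse polarity (4 events per tooth with 2 sensors) is ours for illustration, not a statement from the paper.

```python
# Back-of-the-envelope check of the reported 4.5 degree resolution, under the
# ASSUMED pulse-count model: events/rev = teeth * sensors * events per sensor
# per tooth (2 sensors x 2 pulse polarities = 4 events per tooth).
def encoder_resolution_deg(teeth, sensors=2, events_per_sensor_per_tooth=2):
    """Mechanical resolution in degrees per distinguishable event."""
    events_per_rev = teeth * sensors * events_per_sensor_per_tooth
    return 360.0 / events_per_rev
```

Under this model, the 20-tooth wheel with 2 sensors gives 80 events per revolution, matching the 4.5° figure reported for the prototype.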
Traditional vulcanization mold manufacturing is complex, costly, and under pressure due to shorter product lifecycles and diverse variations. Additive manufacturing using Fused Filament Fabrication and high-performance polymers like PEEK offer a promising future in this industry. This study assesses the compressive strength of various infill structures (honeycomb, grid, triangle, cubic, and gyroid) when considering two distinct build directions (Z, XY) to enhance PEEK’s economic and resource efficiency in rapid tooling. A comparison with PETG samples shows the behavior of the infill strategies. Additionally, a proof of concept illustrates the application of a PEEK mold in vulcanization. A peak compressive strength of 135.6 MPa was attained in specimens that were 100% solid and subjected to thermal post-treatment. This corresponds to a 20% strength improvement in the Z direction. In terms of time and mechanical properties, the anisotropic grid and isotropic cubic infill have emerged for use in rapid tooling. Furthermore, the study highlights that reducing the layer thickness from 0.15 mm to 0.1 mm can result in a 15% strength increase. The study unveils the successful utilization of a room-temperature FFF-printed PEEK mold in vulcanization injection molding. The parameters and infill strategies identified in this research enable the resource-efficient FFF printing of PEEK without compromising its strength properties. Using PEEK in rapid tooling allows a cost reduction of up to 70% in tool production.
This work presents the Multi-Bees-Tracker (MBT3D) algorithm, a Python framework implementing a deep association tracker for Tracking-By-Detection, to address the challenging task of tracking the flight paths of bumblebees in a social group. While tracking algorithms for bumblebees exist, they often come with severe restrictions, such as the need for sufficient lighting, high contrast between the animal and the background, absence of occlusion, significant user input, etc. Bumblebees suddenly adjust their movements and change their appearance during different wing-beat states while exhibiting significant similarities in their individual appearance, which makes tracking them in a social group challenging. The MBT3D tracker, developed in this research, is an adaptation of an existing ant tracking algorithm for bumblebee tracking. It incorporates an offline-trained appearance descriptor along with a Kalman filter for appearance and motion matching. Different detector architectures for upstream detections (You Only Look Once (YOLOv5), Faster Region Proposal Convolutional Neural Network (Faster R-CNN), and RetinaNet) are investigated in a comparative study to optimize performance. The detection models were trained on a dataset containing 11,359 labeled bumblebee images. YOLOv5 reaches an Average Precision of AP = 53.8%, Faster R-CNN achieves AP = 45.3%, and RetinaNet AP = 38.4% on the bumblebee validation dataset, which consists of 1,323 labeled bumblebee images. The tracker’s appearance model is trained on 144 samples. The tracker (with Faster R-CNN detections) reaches a Multiple Object Tracking Accuracy of MOTA = 93.5% and a Multiple Object Tracking Precision of MOTP = 75.6% on a validation dataset containing 2,000 images, competing with state-of-the-art computer vision methods. The framework allows reliable tracking of different bumblebees in the same video stream with rarely occurring identity switches (IDS).
MBT3D has much lower IDS than other commonly used algorithms, with one of the lowest false positive rates, competing with state-of-the-art animal tracking algorithms. The developed framework reconstructs the 3-dimensional (3D) flight paths of the bumblebees by triangulation. It also handles and compares two alternative stereo camera pairs if desired.
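The 3D path reconstruction step can be sketched with standard linear (DLT) triangulation from one matched detection per camera, assuming calibrated 3×4 projection matrices. This is a generic method sketch, not the framework's actual implementation; the camera setup in the usage test is synthetic.

```python
import numpy as np

# Linear (DLT) triangulation: recover the 3D point that best satisfies the
# projection equations of two calibrated cameras in a least-squares sense.
def triangulate(P1, P2, xy1, xy2):
    """P1, P2: 3x4 projection matrices; xy1, xy2: matched pixel coordinates."""
    (x1, y1), (x2, y2) = xy1, xy2
    A = np.vstack([
        x1 * P1[2] - P1[0],   # each matched observation contributes
        y1 * P1[2] - P1[1],   # two linear constraints on the homogeneous
        x2 * P2[2] - P2[0],   # 3D point X, stacked into A X = 0
        y2 * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                # null-space vector = homogeneous solution
    return X[:3] / X[3]       # dehomogenize
```

Repeating this per frame for each tracked identity yields a 3D flight path, and running it for two alternative stereo pairs allows the reconstructions to be compared.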
The feasibility study presents results of a hydrogen combustor integration for a medium-range aircraft engine using the Dry-Low-NOₓ Micromix combustion principle. Based on a simplified Airbus A320-type flight mission, a thermodynamic performance model of a kerosene- and a hydrogen-powered V2530-A5 engine is used to derive the thermodynamic combustor boundary conditions. A new combustor design using the Dry-Low-NOₓ Micromix principle is investigated by slice-model CFD simulations of a single Micromix injector for design and off-design operation of the engine. Combustion characteristics show typical Micromix flame shapes and good combustion efficiencies for all flight mission operating points. Nitric oxide emissions are significantly below the ICAO CAEP/8 limits. For comparison of the Emission Index (EI) for NOₓ emissions between kerosene and hydrogen operation, an energy-equivalent (kerosene) Emission Index is used.
A full 15° sector model CFD simulation of the combustion chamber with multiple Micromix injectors, including inflow homogenization and dilution and cooling air flows, investigates the combustor integration effects, the resulting NOₓ emissions, and the radial temperature distribution at the combustor outlet. The results show that the integration of a Micromix hydrogen combustor in actual aircraft engines is feasible and offers, besides CO₂-free combustion, a significant reduction of NOₓ emissions compared to kerosene operation.
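The energy-equivalent Emission Index amounts to rescaling grams of NOₓ per kilogram of hydrogen by the ratio of lower heating values, so the figure can be compared against grams per kilogram of kerosene. The sketch below uses textbook LHV figures, which may differ slightly from the values used in the paper.

```python
# ASSUMED textbook lower heating values (MJ/kg); the paper may use
# slightly different figures.
LHV_H2_MJ_KG = 120.0       # hydrogen
LHV_KEROSENE_MJ_KG = 43.0  # Jet A-1 kerosene

def kerosene_equivalent_ei(ei_nox_g_per_kg_h2):
    """Convert g NOx / kg H2 into g NOx / kg kerosene-equivalent energy."""
    return ei_nox_g_per_kg_h2 * LHV_KEROSENE_MJ_KG / LHV_H2_MJ_KG
```

Since a kilogram of hydrogen carries roughly 2.8 times the energy of a kilogram of kerosene, the per-kilogram hydrogen EI shrinks by that factor when expressed on a kerosene-equivalent basis.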
This study analyses the expected utilization of an urban distribution grid under high penetration of photovoltaics and e-mobility with charging infrastructure on a residential level. The grid utilization and the corresponding power flow are evaluated while varying the control strategies and the installed photovoltaic capacity in different scenarios. Four scenarios are used to analyze the impact of e-mobility. The individual mobility demand is modelled based on the largest German mobility study, “Mobilität in Deutschland”, which is carried out every 5 years. To estimate the ramp-up of photovoltaic generation, a potential analysis of the roof surfaces in the supply area is carried out via an evaluation of an open solar potential study. The photovoltaic feed-in time series is derived individually for each installed system at a resolution of 15 min. The residential consumption is estimated using historical smart meter data collected in London between 2012 and 2014. For a realistic charging demand, each residential household decides daily, based on the state of charge, whether its vehicle needs to be charged. The resulting charging time series depends on the underlying behavior scenario. Market prices and mobility demand are therefore used as scenario input parameters for a utility function based on the current state of charge to model individual behavior. The aggregated electricity demand is the starting point of the power flow calculation. The evaluation is carried out for an urban region with approximately 3100 residents. The analysis shows that increased penetration of photovoltaics combined with a flexible and adaptive charging strategy can maximize PV usage and reduce the need for congestion-related intervention by the grid operator: the energy charged from the grid falls by 30%, which reduces the average price of a charged kWh by 35%, from 21.8 ct/kWh without PV optimization to 14 ct/kWh.
The resulting grid congestions are managed by implementing an intelligent price or control signal. The analysis took place using data from a real German grid with 10 subgrids. The entire software can be adapted for the analysis of different distribution grids and is publicly available as an open-source software library on GitHub.
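The daily charging decision described above can be sketched as a threshold rule on the state of charge and the market price. The thresholds, parameter names and the functional form below are invented for illustration; the study's actual utility function may differ.

```python
# Illustrative household charging rule: charge tonight only if tomorrow's
# mobility demand plus a reserve cannot be covered, and defer charging when
# the price is high but the trips are still covered. All values are ASSUMED.
def charge_decision(soc, daily_demand_kwh, battery_kwh, price_ct_kwh,
                    price_cap_ct_kwh=25.0, reserve=0.2):
    """Return kWh to charge tonight (0.0 if no charging is scheduled)."""
    needed = daily_demand_kwh / battery_kwh + reserve   # target SoC fraction
    if soc >= needed:
        return 0.0   # enough energy for tomorrow's trips plus reserve
    if price_ct_kwh > price_cap_ct_kwh and soc >= daily_demand_kwh / battery_kwh:
        return 0.0   # defer: price too high, trips still covered
    return (min(1.0, needed) - soc) * battery_kwh       # fill up to target
```

Evaluating such a rule per household and day, with scenario-specific prices and mobility demand, produces the charging time series that feeds the power flow calculation.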
Several species of (poly)saccharides and organic acids can be found often simultaneously in various biological matrices, e.g., fruits, plant materials, and biological fluids. The analysis of such matrices sometimes represents a challenging task. Using Aloe vera (A. vera) plant materials as an example, the performance of several spectroscopic methods (80 MHz benchtop NMR, NIR, ATR-FTIR and UV–vis) for the simultaneous analysis of quality parameters of this plant material was compared. The determined parameters include (poly)saccharides such as aloverose, fructose and glucose as well as organic acids (malic, lactic, citric, isocitric, acetic, fumaric, benzoic and sorbic acids). 500 MHz NMR and high-performance liquid chromatography (HPLC) were used as the reference methods.
UV–vis data can be used only for the identification of added preservatives (benzoic and sorbic acids) and drying agent (maltodextrin) and for semiquantitative analysis of malic acid. NIR and MIR spectroscopies combined with multivariate regression can deliver a more informative overview of A. vera extracts, being able to additionally quantify glucose, aloverose, citric, isocitric, malic and lactic acids, and fructose. Low-field NMR measurements can be used for the quantification of aloverose, glucose, malic, lactic, acetic, and benzoic acids. The benchtop NMR method was successfully validated in terms of robustness, stability, precision, reproducibility, and limits of detection (LOD) and quantification (LOQ), respectively. All spectroscopic techniques are useful for the screening of (poly)saccharides and organic acids in plant extracts and should be applied according to their availability as well as the information and confidence required for the specific analytical goal. Benchtop NMR spectroscopy seems to be the most feasible solution for quality control of A. vera products.
This article describes an Internet of things (IoT) sensing device with a wireless interface which is powered by the energy-harvesting method of the Wiegand effect. The Wiegand effect, in contrast to continuous sources like photovoltaic or thermal harvesters, provides small amounts of energy discontinuously in pulsed mode. To enable an energy-self-sufficient operation of the sensing device with this pulsed energy source, the output energy of the Wiegand generator is maximized. This energy is used to power up the system and to acquire and process data like position, temperature or other resistively measurable quantities as well as transmit these data via an ultra-low-power ultra-wideband (UWB) data transmitter. A proof-of-concept system was built to prove the feasibility of the approach. The energy consumption of the system during start-up was analysed, traced back in detail to the individual components, compared to the generated energy and processed to identify further optimization options. Based on the proof of concept, an application prototype was developed.
This paper describes the potential for developing a digital twin of society: a dynamic model that can be used to observe, analyze, and predict the evolution of various societal aspects. Such a digital twin can help governmental agencies and policy makers in interpreting trends, understanding challenges, and making decisions regarding investments or policies necessary to support societal development and ensure future prosperity. The paper reviews related work regarding the digital twin paradigm and its applications. The paper presents a motivating case study, an analysis of opportunities and challenges faced by the German federal employment agency, Bundesagentur für Arbeit (BA), proposes solutions using digital twins, and describes initial proofs of concept for such solutions.
We consider time-dependent portfolios and discuss the allocation of changes in the risk of a portfolio to changes in the portfolio’s components. For this purpose we adopt established allocation principles. We also use our approach to obtain forecasts for changes in the risk of the portfolio’s components. To put the approach into practice we present an implementation based on the output of a simulation. Allocation is illustrated with an example portfolio in the context of Solvency II. The quality of the forecasts is investigated with an empirical study.
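One of the established allocation principles referred to above, the Euler principle, has a simple simulation-based form for expected shortfall: each component's contribution is its average loss on the scenarios where the portfolio loss exceeds the value-at-risk. The sketch below is a generic illustration with synthetic data, not the paper's implementation.

```python
import numpy as np

# Simulation-based Euler allocation for expected shortfall (ES): the ES
# contribution of component i is its mean loss over the tail scenarios,
# and the contributions sum to the portfolio ES by construction.
def euler_es_contributions(losses, alpha=0.99):
    """losses: (n_scenarios, n_components) array of simulated losses."""
    port = losses.sum(axis=1)                 # portfolio loss per scenario
    var = np.quantile(port, alpha)            # value-at-risk at level alpha
    tail = port >= var                        # tail scenarios beyond VaR
    contrib = losses[tail].mean(axis=0)       # component ES contributions
    return contrib, port[tail].mean()         # contributions and portfolio ES
```

For a time-dependent portfolio, running this allocation on the simulation output at two dates and differencing the contributions attributes the change in portfolio risk to changes in the components, which is the kind of decomposition the paper studies.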