Muscle function is compromised by gravitational unloading in space, affecting overall musculoskeletal health. Astronauts perform daily exercise programmes to mitigate these effects, but knowing which muscles to target would optimise effectiveness. Accurate inflight assessment to inform exercise programmes is critical due to the lack of technologies suitable for spaceflight. Changes in mechanical properties indicate muscle health status and can be measured rapidly and non-invasively using novel technology. A hand-held MyotonPRO device enabled monitoring of muscle health for the first time in spaceflight (> 180 days). Greater or maintained stiffness indicated that countermeasures were effective. Tissue stiffness was preserved in the majority of muscles (neck, shoulder, back, thigh), but Tibialis Anterior (foot lever muscle) stiffness decreased inflight vs. preflight (p < 0.0001; mean difference 149 N/m) in all 12 crewmembers. The calf muscles showed opposing effects, with the Gastrocnemius increasing in stiffness and the Soleus decreasing. Selective stiffness decrements indicate a lack of preservation despite daily inflight countermeasures. This calls for more targeted exercises for the lower leg muscles, which play vital roles as ankle joint stabilizers and in gait. Muscle stiffness is a digital biomarker for risk monitoring during future planetary explorations (Moon, Mars) and for healthcare management in challenging environments or clinical disorders in people on Earth, enabling effective tailored exercise programmes.
In the context of increasing digitalization, the Internet of Things (IoT) is seen as a technological driver through which completely new business models can emerge from the interaction of different players. Identified key players include traditional industrial companies, municipalities and telecommunications companies. The latter, by providing connectivity, ensure that small devices with tiny batteries can be connected almost anywhere and directly to the Internet. There are already many IoT use cases on the market that provide simplification for end users, such as Philips Hue Tap. In addition to business models based on connectivity, there is great potential for information-driven business models that can support or enhance existing business models. One example is the IoT use case Park and Joy, which uses sensors to connect parking spaces and inform drivers about available parking spaces in real time. Information-driven business models can be based on data generated in IoT use cases. For example, a telecommunications company can add value by deriving more decision-relevant information – called insights – from data, which is used to increase decision agility. In addition, insights can be monetized. The monetization of insights can only be sustainable if careful attention is paid and framework conditions are considered. In this chapter, the concept of information-driven business models is explained and illustrated with the concrete use case Park and Joy. In addition, the benefits, risks and framework conditions are discussed.
Aircraft configurations with propellers have been drawing more attention in recent times, partly due to new propulsion concepts based on hydrogen fuel cells and electric motors. These configurations are prone to whirl flutter, an aeroelastic instability affecting airframes with elastically supported propellers. It commonly needs to be mitigated as early as the design phase of such configurations, which requires, among other things, unsteady aerodynamic transfer functions for the propeller. However, no comprehensive assessment of unsteady propeller aerodynamics for aeroelastic analysis is available in the literature. This paper provides a detailed comparison of nine different low- to mid-fidelity aerodynamic methods, demonstrating their impact on linear, unsteady aerodynamics as well as on whirl flutter stability prediction. Quasi-steady and unsteady methods for blade lift, with or without coupling to blade element momentum theory, are evaluated and compared to mid-fidelity potential flow solvers (UPM and DUST) and classical, derivative-based methods. Time-domain identification of frequency-domain transfer functions for the unsteady propeller hub loads is used to compare the different methods. Predictions of the minimum required pylon stiffness for stability show good agreement among the mid-fidelity methods. The differences in the stability predictions for the low-fidelity methods are larger. Most methods studied yield a more unstable system than classical, derivative-based whirl flutter analysis, indicating that the use of more sophisticated aerodynamic modeling techniques might be required for accurate whirl flutter prediction.
Next-generation aircraft designs often incorporate multiple large propellers attached along the wingspan (distributed electric propulsion), leading to highly flexible dynamic systems that can exhibit aeroelastic instabilities. This paper introduces a validated methodology to investigate the aeroelastic instabilities of wing–propeller systems and to understand the dynamic mechanism leading to wing and whirl flutter and transition from one to the other. Factors such as nacelle positions along the wing span and chord and its propulsion system mounting stiffness are considered. Additionally, preliminary design guidelines are proposed for flutter-free wing–propeller systems applicable to novel aircraft designs. The study demonstrates how the critical speed of the wing–propeller systems is influenced by the mounting stiffness and propeller position. Weak mounting stiffnesses result in whirl flutter, while hard mounting stiffnesses lead to wing flutter. For the latter, the position of the propeller along the wing span may change the wing mode shapes and thus the flutter mechanism. Propeller positions closer to the wing tip enhance stability, but pusher configurations are more critical due to the mass distribution behind the elastic axis.
In this work, the effect of low air relative humidity on the operation of a polymer electrolyte membrane fuel cell is investigated. An innovative method based on in situ electrochemical impedance spectroscopy is used to quantify the effect of the inlet air relative humidity at the cathode side on the internal ionic resistances and the output voltage of the fuel cell. In addition, algorithms are developed to analyse the electrochemical characteristics of the fuel cell. For the specific fuel cell stack used in this study, the membrane resistance drops by over 39 % and the cathode side charge transfer resistance decreases by 23 % after increasing the humidity from 30 % to 85 %, while the results of static operation also show an increase of ∼2.2 % in the voltage output after increasing the relative humidity from 30 % to 85 %. In dynamic operation, visible drying effects occur at < 50 % relative humidity, whereby an increase of the air side stoichiometry increases the drying effects. Furthermore, other parameters, such as hydrogen humidification, the internal stack structure, and operating parameters like stoichiometry, pressure, and temperature, affect the overall water balance. Therefore, the optimal humidification range must be determined by considering all these parameters to maximise the fuel cell performance and durability. The results of this study are used to develop a health management system that ensures sufficient humidification by continuously monitoring the fuel cell polarisation data and electrochemical impedance spectroscopy indicators.
Many important properties of bacterial cellulose (BC), such as moisture absorption capacity, elasticity and tensile strength, largely depend on its structure. This paper presents a study on the effect of the drying method on BC films produced by Medusomyces gisevii using two different procedures: room-temperature drying (RT; 24 ± 2 °C, humidity 65 ± 1%, dried until a constant weight was reached) and freeze-drying (FD; treated at −75 °C for 48 h). BC was synthesized using one of two different carbon sources, either glucose or sucrose. Structural differences in the obtained BC films were evaluated using atomic force microscopy (AFM), scanning electron microscopy (SEM), and X-ray diffraction. Macroscopically, the RT samples appeared semi-transparent and smooth, whereas the FD group exhibited an opaque white color and a sponge-like structure. SEM examination showed denser packing of fibrils in FD samples, while RT samples displayed a smaller average fiber diameter, lower surface roughness and less porosity. AFM confirmed the SEM observations and showed that the FD material exhibited a more branched structure and a higher surface roughness. The samples cultivated in a glucose-containing nutrient medium generally displayed a straight and ordered shape of fibrils compared to the sucrose-derived BC, which was characterized by a rougher and wavier structure. The BC films dried under different conditions showed distinctly different degrees of crystallinity, whereas the carbon source in the culture medium was found to have a relatively small effect on the BC crystallinity.
A novel method to determine the extruded length of a metallic wire for a directed energy deposition (DED) process using a microwave (MW) plasma jet with a straight-through wire feed is presented. The method is based on the relative comparison of the measured frequency response obtained by the large-signal scattering parameter (Hot-S) technique. In the practical working range, a repeatability of less than 6% for the non-active plasma state and 9% for the active plasma state is found. Measurements are conducted with a focus on a simple solution to decrease the processing time and reduce the integration time of the process into the existing hardware. It is shown that monitoring a single frequency for magnitude and phase changes is sufficient to achieve good accuracy. A combination of different measurement values to determine the length is possible. The applicability to different diameters of the same material is shown, as well as contact detection between the wire and a metallic substrate.
This article addresses the need for an innovative technique in plasma shaping, utilizing antenna structures, Maxwell’s laws, and boundary conditions within a shielded environment. The motivation lies in exploring a novel approach to efficiently generate high-energy density plasma with potential applications across various fields. Implemented in an E01 circular cavity resonator, the proposed method involves the use of an impedance and field matching device with a coaxial connector and a specially optimized monopole antenna. This setup feeds a low-loss cavity resonator, resulting in a high-energy density air plasma with a surface temperature exceeding 3500 °C, achieved with a minimal power input of 80 W. The argon plasma, resembling the shape of a simple monopole antenna with modeled complex dielectric values, offers a more energy-efficient alternative compared to traditional, power-intensive plasma shaping methods. Simulations using a commercial electromagnetic (EM) solver validate the design’s effectiveness, while experimental validation underscores the method’s feasibility and practical implementation. Analyzing various parameters in an argon atmosphere, including hot S-parameters and plasma beam images, the results demonstrate the successful application of this technique, suggesting its potential in coating, furnace technology, fusion, and spectroscopy applications.
Electrolyte-insulator-semiconductor capacitors (EISCAPs) are field-effect sensors with an attractive transducer architecture for constructing various biochemical sensors. In this study, a capacitive model of enzyme-modified EISCAPs has been developed, and the impact of the surface coverage of immobilized enzymes on their capacitance-voltage and constant-capacitance characteristics was studied theoretically and experimentally. The multicell arrangement used enables a multiplexed electrochemical characterization of up to sixteen EISCAPs. Different enzyme coverages were achieved by means of the parallel electrical connection of bare and enzyme-covered single EISCAPs in diverse combinations. As predicted by the model, with increasing enzyme coverage, both the shift of the capacitance-voltage curves and the amplitude of the constant-capacitance signal increase, resulting in an enhanced analyte sensitivity of the EISCAP biosensor. In addition, the capability of the multicell arrangement with multi-enzyme-covered EISCAPs to sequentially detect multiple analytes (penicillin and urea) utilizing the enzymes penicillinase and urease has been experimentally demonstrated and discussed.
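The role of the surface coverage can be pictured with a simple coverage-weighted parallel combination of bare and enzyme-covered capacitances; this is an illustrative sketch, not the authors' exact capacitive model or notation.

```latex
% Illustrative coverage-weighted capacitance of parallel-connected bare and enzyme-covered
% EISCAPs (\theta: enzyme-covered fraction; not the authors' exact model or notation):
C_{\mathrm{total}}(V) \;=\; \theta\, C_{\mathrm{enz}}(V) \;+\; (1-\theta)\, C_{\mathrm{bare}}(V)
```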
In this work, we present a compact, bifunctional chip-based sensor setup that measures the temperature and electrical conductivity of water samples, including specimens from rivers and channels, aquaculture, and the Atlantic Ocean. For conductivity measurements, we utilize the impedance amplitude recorded via interdigitated electrode structures at a single triggering frequency. The results are well in line with data obtained using a calibrated reference instrument. The new setup holds for conductivity values spanning almost two orders of magnitude (river versus ocean water) without the need for equivalent circuit modelling. Temperature measurements were performed in four-point geometry with an on-chip platinum RTD (resistance temperature detector) in the temperature range between 2 °C and 40 °C, showing no hysteresis effects between warming and cooling cycles. Although the meander was not shielded against the liquid, the temperature calibration provided equivalent results in low-conductivity Milli-Q water and highly conductive ocean water. The sensor is therefore suitable for inline and online monitoring purposes in recirculating aquaculture systems.
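As background, a single-frequency conductivity readout of this kind can be related to the measured impedance magnitude through the usual cell-constant relation; the notation is generic and the calibration details of the presented chip are not reproduced here.

```latex
% Generic single-frequency conductivity readout for an interdigitated electrode cell
% (K_cell: geometric cell constant obtained by calibration; f_0: the single triggering frequency):
\sigma \;\approx\; \frac{K_{\mathrm{cell}}}{\,|Z(f_0)|\,}
```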
Connective tissues such as tendons contain an extracellular matrix (ECM) comprising collagen fibrils scattered within the ground substance. These fibrils are instrumental in lending mechanical stability to tissues. Unfortunately, our understanding of how collagen fibrils reinforce the ECM remains limited, with no direct experimental evidence substantiating current theories. Earlier theoretical studies on collagen fibril reinforcement in the ECM have relied predominantly on the assumption of uniform cylindrical fibers, which is inadequate for modelling collagen fibrils, which possess tapered ends. Recently, Topçu and colleagues published a paper in the International Journal of Solids and Structures, presenting a generalized shear-lag theory for the transfer of elastic stress between the matrix and fibers with tapered ends. This paper is a positive step towards comprehending the mechanics of the ECM and makes a valuable contribution to formulating a complete theory of collagen fibril reinforcement in the ECM.
Critical quantitative evaluation of integrated health management methods for fuel cell applications
(2024)
Online fault diagnostics is a crucial consideration for fuel cell systems, particularly in mobile applications, to limit downtime and degradation and to increase lifetime. Guided by a critical literature review, this paper presents an overview of health management systems organized in a classification scheme, introducing commonly utilised methods to diagnose fuel cells (FCs) in various applications. In this novel scheme, the various health management methods are summarised and structured to provide an overview of existing systems, including their associated tools. These systems are classified into four categories, mainly focused on model-based and non-model-based systems. The individual methods are critically discussed, both when used individually and in combination, with the aim of further understanding their functionality and suitability in different applications. Additionally, a tool is introduced to evaluate methods from each category based on the presented scheme. This tool applies a matrix evaluation technique utilising several key parameters to identify the most appropriate methods for a given application. Based on this evaluation, the most suitable methods for each specific application are combined to build an integrated health management system.
Methane is a valuable energy source helping to meet the growing energy demand worldwide. However, as a potent greenhouse gas, it has also gained additional attention due to its environmental impacts. The biological production of methane is performed primarily hydrogenotrophically from H2 and CO2 by methanogenic archaea. Hydrogenotrophic methanogenesis is also of great interest with respect to carbon recycling and H2 storage. The most significant carbon source, extremely rich in complex organic matter for microbial degradation and biogenic methane production, is coal. Although interest in enhanced microbial coalbed methane production is continuously increasing globally, limited knowledge exists regarding the exact origins of coalbed methane and the associated microbial communities, including hydrogenotrophic methanogens. Here, we give an overview of hydrogenotrophic methanogens in coal beds and related environments in terms of their energy production mechanisms, unique metabolic pathways, and associated ecological functions.
This paper investigates the interior transmission problem for homogeneous media via eigenvalue trajectories parameterized by the magnitude of the refractive index. In the case that the scatterer is the unit disk, we prove that there is a one-to-one correspondence between complex-valued interior transmission eigenvalue trajectories and Dirichlet eigenvalues of the Laplacian which turn out to be exactly the trajectorial limit points as the refractive index tends to infinity. For general simply-connected scatterers in two or three dimensions, a corresponding relation is still open, but further theoretical results and numerical studies indicate a similar connection.
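For orientation, the standard interior transmission problem for a homogeneous medium with constant refractive index n reads as follows; the trajectories studied in the paper are obtained by varying the magnitude of n.

```latex
% Standard interior transmission problem for a scatterer D with constant refractive index n:
% find k and nontrivial (w, v) such that
\Delta w + k^2 n\, w = 0 \quad \text{in } D, \qquad
\Delta v + k^2 v = 0 \quad \text{in } D, \qquad
w = v, \quad \partial_\nu w = \partial_\nu v \quad \text{on } \partial D .
% The values of k for which nontrivial pairs exist are the interior transmission eigenvalues.
```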
Ga-doped Li7La3Zr2O12 garnet solid electrolytes exhibit the highest Li-ion conductivities among the oxide-type garnet-structured solid electrolytes, but instabilities toward Li metal hamper their practical application. These instabilities have previously been assigned by several groups to direct chemical reactions between LiGaO2 coexisting phases and Li metal. Yet, an understanding of the role of LiGaO2 in the electrochemical cell and of its electrochemical properties is still lacking. Here, we investigate the electrochemical properties of LiGaO2 through electrochemical tests in galvanostatic cells versus Li metal and complementary ex situ studies via confocal Raman microscopy, quantitative phase analysis based on powder X-ray diffraction, energy-dispersive X-ray spectroscopy, X-ray photoelectron spectroscopy, and electron energy loss spectroscopy. The results demonstrate considerable and surprising electrochemical activity, with high reversibility. A three-stage reaction mechanism is derived, including reversible electrochemical reactions that lead to the formation of highly electronically conducting products. The results have considerable implications for the use of Ga-doped Li7La3Zr2O12 electrolytes in all-solid-state Li-metal battery applications and raise the need for advanced materials engineering to realize Ga-doped Li7La3Zr2O12 for practical use.
The thermal conductivity of components manufactured using Laser Powder Bed Fusion (LPBF), also called Selective Laser Melting (SLM), plays an important role in their processing. Not only does a reduced thermal conductivity cause residual stresses during the process, but it also makes subsequent processes such as the welding of LPBF components more difficult. This article uses 316L stainless steel samples to investigate whether and to what extent the thermal conductivity of specimens can be influenced by different LPBF parameters. To this end, samples are set up using different parameters, orientations, and powder conditions and measured by a heat flow meter using stationary analysis. The heat flow meter set-up used in this study achieves good reproducibility and high measurement accuracy, so that comparative measurements between the various LPBF influencing factors to be tested are possible. In summary, the series of measurements show that the residual porosity of the components has the greatest influence on conductivity. The degradation of the powder due to increased recycling also appears to be detectable. The build-up direction shows no detectable effect in the measurement series.
We conducted a scoping review for active learning in the domain of natural language processing (NLP), which we summarize in accordance with the PRISMA-ScR guidelines as follows:
Objective: Identify active learning strategies that were proposed for entity recognition and their evaluation environments (datasets, metrics, hardware, execution time).
Design: We used Scopus and ACM as our search engines. We compared the results with two literature surveys to assess the search quality. We included peer-reviewed English publications introducing or comparing active learning strategies for entity recognition.
Results: We analyzed 62 relevant papers and identified 106 active learning strategies. We grouped them into three categories: exploitation-based (60x), exploration-based (14x), and hybrid strategies (32x). We found that all studies used the F1-score as an evaluation metric. Information about hardware (6x) and execution time (13x) was only occasionally included. The 62 papers used 57 different datasets to evaluate their respective strategies. Most datasets contained newspaper articles or biomedical/medical data. Our analysis revealed that 26 out of 57 datasets are publicly accessible.
Conclusion: Numerous active learning strategies have been identified, along with significant open questions that still need to be addressed. Researchers and practitioners face difficulties when making data-driven decisions about which active learning strategy to adopt. Conducting comprehensive empirical comparisons using the evaluation environment proposed in this study could help establish best practices in the domain.
The FAYMONVILLE case study describes how the family-owned company Faymonville from eastern Belgium has succeeded in becoming one of the leading manufacturers in its sector. The targeted identification of new markets, the focus on relevant customer needs, and a consistent product policy with a coordinated manufacturing concept lay the foundations for this success. In this case study, students can learn how a company can successfully resolve the fundamental contradiction between economic and customized production.
Due to the transition to renewable energies, electricity markets need to be made fit for purpose. To enable the comparison of different energy market designs, modeling tools covering market actors and their heterogeneous behavior are needed. Agent-based models are ideally suited for this task. Such models can be used to simulate and analyze changes to market design or market mechanisms and their impact on market dynamics. In this paper, we conduct an evaluation and comparison of two actively developed open-source energy market simulation models. The two models, namely AMIRIS and ASSUME, are both designed to simulate future energy markets using an agent-based approach. The assessment encompasses modeling features and techniques and model performance, as well as a comparison of model results, and can serve as a blueprint for future comparative studies of simulation models. The main comparison dataset covers Germany in 2019 and simulates the Day-Ahead market and the participating actors as individual agents. Both models come comparably close to the benchmark dataset, with a MAE between 5.6 and 6.4 €/MWh, while also modeling the actual dispatch realistically.
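For reference, the reported price error corresponds to the usual mean absolute error over the simulated market intervals (the notation below is ours, not taken from the paper).

```latex
% Mean absolute error of simulated day-ahead prices against the benchmark
% (p_t: simulated price, \hat{p}_t: observed price, T: number of market intervals):
\mathrm{MAE} \;=\; \frac{1}{T}\sum_{t=1}^{T}\bigl|\,p_t - \hat{p}_t\,\bigr|
```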
In the research domain of energy informatics, the importance of open data is rising rapidly. This can be seen in the various new public datasets that are created and published. Unfortunately, in many cases, the data is not available under a permissive license corresponding to the FAIR principles, often lacking accessibility or reusability. Furthermore, the source format often differs from the desired data format or does not meet the demands to be queried in an efficient way. To solve this on a small scale, a toolbox for ETL processes is provided to create a local energy data server with open-access data from different valuable sources in a structured format. So while the sources themselves do not fully comply with the FAIR principles, the provided unique toolbox allows for efficient processing of the data as if the FAIR principles were met. The energy data server currently includes information on power systems, weather data, network frequency data, European energy and gas data for demand and generation, and more. However, a solution to the core problem - missing alignment with the FAIR principles - is still needed for the National Research Data Infrastructure.
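A minimal sketch of a single ETL step of the kind such a toolbox provides is shown below; the file name, table name, and schema are hypothetical and not taken from the actual project.

```python
# Minimal sketch of one ETL step (illustrative only; file name, table name, and schema
# are hypothetical, not the project's actual code).
import sqlite3
import pandas as pd

def etl_csv_to_sqlite(csv_path: str, db_path: str, table: str) -> int:
    """Extract a raw open-data CSV export, normalise it, and load it into a local SQLite
    database acting as the 'energy data server'. Returns the number of loaded rows."""
    # Extract: read the raw export
    df = pd.read_csv(csv_path)

    # Transform: harmonise column names and parse timestamps to UTC
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
    if "timestamp" in df.columns:
        df["timestamp"] = pd.to_datetime(df["timestamp"], utc=True)

    # Load: append into a structured, queryable table
    with sqlite3.connect(db_path) as con:
        df.to_sql(table, con, if_exists="append", index=False)
    return len(df)

# Hypothetical usage:
# n_rows = etl_csv_to_sqlite("load_profiles_2019.csv", "energy_data.sqlite", "load_profiles")
```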
New insights into the influence of pre-culture on robust solvent production of C. acetobutylicum
(2024)
Clostridia are known for their solvent production, especially the production of butanol. Given the projected depletion of fossil fuels, this is of great interest. The cultivation of clostridia is known to be challenging, and it is difficult to achieve reproducible results and robust processes. However, existing publications usually concentrate on the cultivation conditions of the main culture. In this paper, the influence of cryo-conservation and pre-culture on growth and solvent production in the resulting main cultivation is examined. A protocol was developed that leads to reproducible cultivations of Clostridium acetobutylicum. Detailed investigation of the cell conservation in cryo-cultures ensured reliable cell growth in the pre-culture. Moreover, a reason for the acid crash in the main culture was found, based on the cultivation conditions of the pre-culture. The critical parameter to avoid the acid crash and accomplish the shift to the solventogenesis of clostridia is the metabolic phase in which the cells of the pre-culture are at the time of inoculation of the main culture; this depends on the cultivation time of the pre-culture. Using cells from the exponential growth phase to inoculate the main culture leads to an acid crash. To achieve the solventogenic phase with butanol production, the inoculum should consist of older cells which are in the stationary growth phase. Considering these parameters, which affect the entire cultivation process, reproducible results and reliable solvent production are ensured.
Drought and water shortage are serious problems in many arid and semi-arid regions. This problem is getting worse and now extends even to temperate climatic regions due to climate change. To address this problem, the use of biodegradable hydrogels as water-retaining additives in soil is becoming increasingly important. Furthermore, efficient (micro-)nutrient supply can be provided by the use of tailored hydrogels. Biodegradable polyaspartic acid (PASP) hydrogels with different available (1,6-hexamethylene diamine (HMD) and L-lysine (LYS)) and newly developed crosslinkers based on diesters of glycine (GLY) and (di-)ethylene glycol (DEG and EG, respectively) were synthesized and characterized using Fourier transform infrared (FTIR) spectroscopy and scanning electron microscopy (SEM), with respect to their swelling properties (kinetics, absorbency under load (AUL)) as well as the biodegradability of the PASP hydrogel. Copper(II) and zinc(II), respectively, were loaded as micronutrients using two different approaches: in situ during crosslinking, and by subsequent loading of the prepared hydrogels. The results showed the successful synthesis of the di-glycine-ester-based crosslinkers. Hydrogels with good water-absorbing properties were formed. Moreover, the developed crosslinking agents in combination with the specific reaction conditions resulted in higher water absorbency with increased crosslinker content used in the synthesis (10% vs. 20%). The prepared hydrogels are candidates for water-storing soil additives due to the biodegradability of PASP, which is shown in an example. The incorporation of Cu(II) and Zn(II) ions can provide these micronutrients for plant growth.
The artificial olfactory image was proposed by Lundström et al. in 1991 as a new strategy for an electronic nose system which generated a two-dimensional mapping to be interpreted as a fingerprint of the detected gas species. The potential distribution generated by the catalytic metals integrated into a semiconductor field-effect structure was read as a photocurrent signal generated by scanning light pulses. The impact of the proposed technology spread beyond gas sensing, inspiring the development of various imaging modalities based on the light addressing of field-effect structures to obtain spatial maps of pH distribution, ions, molecules, and impedance, and these modalities have been applied in both biological and non-biological systems. These light-addressing technologies have been further developed to realize the position control of a faradaic current on the electrode surface for localized electrochemical reactions and amperometric measurements, as well as the actuation of liquids in microfluidic devices.
Mathematical morphology is a part of image processing that has proven to be fruitful for numerous applications. Two main operations in mathematical morphology are dilation and erosion. These are based on the construction of a supremum or infimum with respect to an order over the tonal range in a certain section of the image. The tonal ordering can easily be realised in grey-scale morphology, and some morphological methods have been proposed for colour morphology. However, all of these have certain limitations.
In this paper we present a novel approach to colour morphology extending upon previous work in the field based on the Loewner order. We propose to consider an approximation of the supremum by means of a log-sum exponentiation introduced by Maslov. We apply this to the embedding of an RGB image in a field of symmetric 2×2 matrices. In this way we obtain nearly isotropic matrices representing colours and the structural advantage of transitivity. In numerical experiments we highlight some remarkable properties of the proposed approach.
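As a pointer to the construction (our paraphrase, not the paper's exact formulas), the Maslov log-sum-exp approximation replaces the hard supremum of the dilation by a smooth proxy.

```latex
% Log-sum-exp (Maslov) approximation of the supremum used as a smooth dilation:
% for scalars a_1, ..., a_n and a large parameter p > 0,
\frac{1}{p}\,\ln\!\Bigl(\sum_{i=1}^{n} e^{\,p\,a_i}\Bigr) \;\longrightarrow\; \max_i a_i
\qquad (p \to \infty).
% In the matrix-valued setting sketched above, each colour is embedded as a symmetric 2x2
% matrix and the scalar exponential/logarithm are replaced by their matrix counterparts
% (our paraphrase of the construction, not the paper's exact formulation).
```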
The deformation and damage laws of non-homogeneous irregular structural planes in rocks are the basis for studying the stability of rock engineering. To investigate the damage characteristics of rock containing non-parallel fissures, uniaxial compression tests and numerical simulations were conducted in this study on sandstone specimens containing three non-parallel fissures inclined at 0°, 45° and 90°. The characteristics of crack initiation and crack evolution of fissures with different inclinations were analyzed. A constitutive model for the discontinuous fracture behavior of fissured sandstone was proposed. The results show that the fracture behaviors of fissured sandstone specimens are discontinuous. The stress–strain curves are non-smooth and can be divided into a nonlinear crack closure stage, a linear elastic stage, a plastic stage and a brittle failure stage, of which the plastic stage contains discontinuous stress drops. During the uniaxial compression test, the middle or ends of the 0° fissures were the first to crack compared to the 45° and 90° fissures. The end with a small distance between the 0° and 45° fissures cracked first, and the end with a large distance cracked later. After the final failure, the 0° fissures in all specimens were fractured, while the 45° and 90° fissures were not necessarily fractured. Numerical simulation results show that the concentration of compressive stress at the tips of the 0°, 45° and 90° fissures, as well as the concentration of tensile stress on both sides, decreased with increasing inclination angle. A constitutive model for the discontinuous fracture behavior of fissured sandstone specimens was derived by combining the logistic model and damage mechanics theory. This model describes the discontinuous stress drops well and agrees well with the complete stress–strain curves of the fissured sandstone specimens.
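A generic way to couple a logistic law with continuum damage mechanics is sketched below for orientation; it is purely illustrative and does not reproduce the constitutive model derived in the paper.

```latex
% Illustrative coupling of a logistic law with continuum damage mechanics
% (not the constitutive model derived in the paper):
D(\varepsilon) \;=\; \frac{1}{1 + e^{-k\,(\varepsilon - \varepsilon_0)}},
\qquad
\sigma \;=\; \bigl(1 - D(\varepsilon)\bigr)\, E\, \varepsilon ,
% where E is the undamaged modulus and \varepsilon_0, k control the onset and rate of damage;
% discontinuous stress drops could be represented by piecewise or staged forms of D.
```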
Analyzing electroencephalographic (EEG) time series can be challenging, especially with deep neural networks, due to the large variability among human subjects and often small datasets. To address these challenges, various strategies, such as self-supervised learning, have been suggested, but they typically rely on extensive empirical datasets. Inspired by recent advances in computer vision, we propose a pretraining task termed "frequency pretraining" to pretrain a neural network for sleep staging by predicting the frequency content of randomly generated synthetic time series. Our experiments demonstrate that our method surpasses fully supervised learning in scenarios with limited data and few subjects, and matches its performance in regimes with many subjects. Furthermore, our results underline the relevance of frequency information for sleep stage scoring, while also demonstrating that deep neural networks utilize information beyond frequencies to enhance sleep staging performance, which is consistent with previous research. We anticipate that our approach will be advantageous across a broad spectrum of applications where EEG data is limited or derived from a small number of subjects, including the domain of brain-computer interfaces.
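The pretraining task can be pictured with a small synthetic-data generator of the following kind; the signal model (sums of sinusoids in EEG-like bands), the multi-hot frequency-bin labels, and all parameter values are assumptions for illustration and are not taken from the paper.

```python
# Sketch of a "frequency pretraining" sample generator (illustrative only; the signal model,
# the multi-hot frequency-bin labels, and all parameter values are assumptions, not the
# paper's actual generator or network).
import numpy as np

def make_sample(fs=100.0, seconds=30.0, n_bins=20, max_components=5, rng=None):
    """Return a synthetic 'EEG-like' signal and a multi-hot label of active frequency bins."""
    rng = rng if rng is not None else np.random.default_rng()
    t = np.arange(int(fs * seconds)) / fs
    bin_edges = np.linspace(0.5, 30.0, n_bins + 1)          # bins covering 0.5-30 Hz
    label = np.zeros(n_bins, dtype=np.float32)
    x = np.zeros(t.size)

    for _ in range(int(rng.integers(1, max_components + 1))):
        b = int(rng.integers(n_bins))                        # pick a random frequency bin
        f = rng.uniform(bin_edges[b], bin_edges[b + 1])      # frequency inside that bin
        x += rng.uniform(0.5, 2.0) * np.sin(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi))
        label[b] = 1.0                                       # mark the bin as present

    x += 0.1 * rng.standard_normal(t.size)                   # additive noise
    return x.astype(np.float32), label                       # network learns to predict `label`

# x, y = make_sample(); an encoder pretrained on (x, y) pairs is later fine-tuned for sleep staging.
```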
After a brief introduction of conventional laboratory structures, this work focuses on an innovative and universal approach for a setup of a training laboratory for electric machines and drive systems. The novel approach employs a central 48 V DC bus, which forms the backbone of the structure. Several sets of DC machine, asynchronous machine and synchronous machine are connected to this bus. The advantages of the novel system structure are manifold, both from a didactic and a technical point of view: Student groups can work on their own performance level in a highly parallelized and at the same time individualized way. Additional training setups (similar or different) can easily be added. Only the total power dissipation has to be provided, i.e. the DC bus balances the power flow between the student groups. Comparative results of course evaluations of several cohorts of students are shown.
Frequency mixing magnetic detection (FMMD) is a sensitive and selective technique to detect magnetic nanoparticles (MNPs) serving as probes for binding biological targets. Its principle relies on the nonlinear magnetic relaxation dynamics of a particle ensemble interacting with a dual frequency external magnetic field. In order to increase its sensitivity, lower its limit of detection and overall improve its applicability in biosensing, matching combinations of external field parameters and internal particle properties are being sought to advance FMMD. In this study, we systematically probe the aforementioned interaction with coupled Néel–Brownian dynamic relaxation simulations to examine how key MNP properties as well as applied field parameters affect the frequency mixing signal generation. It is found that the core size of MNPs dominates their nonlinear magnetic response, with the strongest contributions from the largest particles. The drive field amplitude dominates the shape of the field-dependent response, whereas the effective anisotropy and hydrodynamic size of the particles only weakly influence the signal generation in FMMD. For tailoring the MNP properties and setup parameters towards optimal FMMD signal generation, our findings suggest choosing large particles with core sizes dc > 25 nm and narrow size distributions (σ < 0.1) to minimize the required drive field amplitude. This allows potential improvements of FMMD as a stand-alone application, as well as advances in magnetic particle imaging, hyperthermia and magnetic immunoassays.
Magnetic nanoparticles (MNP) are investigated with great interest for biomedical applications in diagnostics (e.g. imaging: magnetic particle imaging (MPI)), therapeutics (e.g. hyperthermia: magnetic fluid hyperthermia (MFH)) and multi-purpose biosensing (e.g. magnetic immunoassays (MIA)). What all of these applications have in common is that they are based on the unique magnetic relaxation mechanisms of MNP in an alternating magnetic field (AMF). While MFH and MPI are currently the most prominent examples of biomedical applications, here we present results on the relatively new biosensing application of frequency mixing magnetic detection (FMMD) from a simulation perspective. In general, we ask how the key parameters of MNP (core size and magnetic anisotropy) affect the FMMD signal: by varying the core size, we investigate the effect of the magnetic volume per MNP; and by changing the effective magnetic anisotropy, we study the MNPs’ flexibility to leave its preferred magnetization direction. From this, we predict the most effective combination of MNP core size and magnetic anisotropy for maximum signal generation.
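As background to the two FMMD abstracts above, the following toy sketch illustrates, under stated assumptions, how a nonlinear (equilibrium Langevin) magnetization driven by a two-frequency field produces mixing components at f1 ± 2·f2; it does not reproduce the coupled Néel–Brownian simulations of the studies, and all particle and field values are merely illustrative.

```python
# Toy illustration of frequency mixing in magnetic nanoparticles (equilibrium Langevin model
# only; the coupled Neel-Brownian dynamics of the studies above are not reproduced, and all
# particle and field values are merely illustrative).
import numpy as np

kB, T = 1.380649e-23, 300.0                    # Boltzmann constant [J/K], temperature [K]
d_core = 25e-9                                  # particle core diameter [m]
Ms = 4.8e5                                      # saturation magnetisation of magnetite [A/m]
m = Ms * np.pi / 6 * d_core**3                  # magnetic moment of one particle [A m^2]

f1, f2 = 40.5e3, 63.0                           # high/low drive frequencies [Hz]
B1, B2 = 1.2e-3, 16e-3                          # drive amplitudes [T]
fs, dur = 2e6, 0.5                              # sampling rate [Hz], signal duration [s]
t = np.arange(int(fs * dur)) / fs
B = B1 * np.sin(2 * np.pi * f1 * t) + B2 * np.sin(2 * np.pi * f2 * t)

xi = m * B / (kB * T)                           # Langevin argument
with np.errstate(divide="ignore", invalid="ignore"):
    M = 1.0 / np.tanh(xi) - 1.0 / xi            # equilibrium Langevin magnetisation L(xi)
M = np.nan_to_num(M)                            # L(0) = 0 (removable singularity)

spec = np.abs(np.fft.rfft(M)) / len(M)
freqs = np.fft.rfftfreq(len(M), 1 / fs)
for fmix in (f1, f1 + 2 * f2, f1 - 2 * f2):     # the nonlinearity creates lines at f1 +/- 2*f2
    k = np.argmin(np.abs(freqs - fmix))
    print(f"{fmix / 1e3:8.3f} kHz : {spec[k]:.3e}")
```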
This easy-to-understand introduction to SAP S/4HANA guides you through the central processes in sales, purchasing and procurement, finance, production, and warehouse management using the model company Global Bike. Familiarize yourself with the basics of business administration, the relevant organizational data, master data, and transactional data, as well as a selection of core business processes in SAP. Using practical examples and tutorials, you will soon become an SAP S/4HANA professional!
Tutorials and exercises for beginners, advanced users, and experts make it easy for you to practice your new knowledge. The prerequisite for this book is access to an SAP S/4HANA client with Global Bike version 4.1.
- Business fundamentals and processes in the SAP system
- Sales, purchasing and procurement, production, finance, and warehouse management
- Tutorials at different qualification levels, exercises, and recap of case studies
- Includes extensive download material for students, lecturers, and professors
In this paper, the use of reinforcement learning (RL) in control systems is investigated using a rotary inverted pendulum as an example. The control behavior of an RL controller is compared to that of traditional LQR and MPC controllers. This is done by evaluating their behavior under optimal conditions, their disturbance behavior, their robustness and their development process. All the investigated controllers are developed using MATLAB and the Simulink simulation environment and later deployed to a real pendulum model powered by a Raspberry Pi. The RL algorithm used is Proximal Policy Optimization (PPO). The LQR controller exhibits an easy development process, average to good control behavior and average to good robustness. A linear MPC controller showed excellent results under optimal operating conditions. However, when subjected to disturbances or deviations from the equilibrium point, it showed poor performance and sometimes unstable behavior. Employing a nonlinear MPC controller in real time was not possible due to the high computational effort involved. The RL controller exhibits by far the most versatile and robust control behavior. When operated in the simulation environment, it achieved high control accuracy. When employed in the real system, however, it only shows average accuracy and a significantly greater performance loss compared to the simulation than the traditional controllers. With MATLAB, it is not yet possible to directly post-train the RL controller on the Raspberry Pi, which is an obstacle to the practical application of RL in a prototyping or teaching setting. Nevertheless, RL in general proves to be a flexible and powerful control method, which is well suited for complex or nonlinear systems where traditional controllers struggle.
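For readers who want a concrete picture of the classical baseline, the following minimal sketch shows a generic LQR design for a linearized cart-type inverted pendulum in Python; the model matrices, weights, and the use of SciPy are illustrative assumptions and do not correspond to the rotary pendulum model or the MATLAB/Simulink implementation used in the study.

```python
# Generic LQR baseline for a linearized cart-type inverted pendulum (illustrative model and
# weights; not the rotary pendulum model or the MATLAB/Simulink implementation of the study).
import numpy as np
from scipy.linalg import solve_continuous_are

# Linearization about the upright equilibrium, x = [cart pos, cart vel, angle, angular vel],
# with cart mass M = 1 kg, pole mass m = 0.1 kg, pole length l = 0.5 m, g = 9.81 m/s^2.
M_c, m_p, l, g = 1.0, 0.1, 0.5, 9.81
A = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, -m_p * g / M_c, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [0.0, 0.0, (M_c + m_p) * g / (M_c * l), 0.0]])
B = np.array([[0.0], [1.0 / M_c], [0.0], [-1.0 / (M_c * l)]])

Q = np.diag([5.0, 1.0, 20.0, 1.0])              # state weights (a tuning choice)
R = np.array([[1.0]])                           # input weight

P = solve_continuous_are(A, B, Q, R)            # solve the algebraic Riccati equation
K = np.linalg.solve(R, B.T @ P)                 # optimal state-feedback gain, u = -K x

print("LQR gain K:", K)
print("closed-loop poles:", np.linalg.eigvals(A - B @ K))
```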
Direct air capture (DAC) combined with subsequent storage (DACCS) is discussed as one promising carbon dioxide removal option. The aim of this paper is to analyse and comparatively classify the resource consumption (land use, renewable energy and water) and costs of possible DAC implementation pathways for Germany. The paths are based on a selected, existing climate neutrality scenario that requires the removal of 20 Mt of carbon dioxide (CO2) per year by DACCS from 2045. The analysis focuses on the so-called “low-temperature” DAC process, which might be more advantageous for Germany than the “high-temperature” one. In four case studies, we examine potential sites in northern, central and southern Germany, thereby using the most suitable renewable energies for electricity and heat generation. We show that the deployment of DAC results in large-scale land use and high energy needs. The land use in the range of 167–353 km2 results mainly from the area required for renewable energy generation. The total electrical energy demand of 14.4 TWh per year, of which 46% is needed to operate heat pumps to supply the heat demand of the DAC process, corresponds to around 1.4% of Germany's envisaged electricity demand in 2045. 20 Mt of water are provided yearly, corresponding to 40% of the city of Cologne's water demand (1.1 million inhabitants). The capture of CO2 (DAC) incurs levelised costs of 125–138 EUR per tonne of CO2, whereby the provision of the required energy via photovoltaics in southern Germany represents the lowest value of the four case studies. This does not include the costs associated with balancing its volatility. Taking into account transporting the CO2 via pipeline to the port of Wilhelmshaven, followed by transporting and sequestering the CO2 in geological storage sites in the Norwegian North Sea (DACCS), the levelised costs increase to 161–176 EUR/tCO2. Due to the longer transport distances from southern and central Germany, a northern German site using wind turbines would be the most favourable.
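Scaled to the 20 Mt per year removal target, the quoted levelised costs imply the following annual cost range (simple arithmetic on the figures stated above, no additional data).

```latex
% Annual cost scale implied by the quoted figures (simple arithmetic on the stated values):
20\,\mathrm{Mt_{CO_2}/a} \times (125\text{--}138)\,\mathrm{EUR/t_{CO_2}}
  \;\approx\; 2.5\text{--}2.8\ \text{billion EUR/a} \quad \text{(capture only)},
\\
20\,\mathrm{Mt_{CO_2}/a} \times (161\text{--}176)\,\mathrm{EUR/t_{CO_2}}
  \;\approx\; 3.2\text{--}3.5\ \text{billion EUR/a} \quad \text{(incl. transport and storage)}.
```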
Humic substances possess distinctive chemical features enabling their use in many advanced applications, including biomedical fields. No chemicals in nature have the same combination of specific chemical and biological properties as humic substances. Traditional medicine and modern research have demonstrated that humic substances from different sources possess immunomodulatory and anti-inflammatory properties, which makes them suitable for the prevention and treatment of chronic dermatoses, allergic rhinitis, atopic dermatitis, and other conditions characterized by inflammatory and allergic responses [1-4]. The use of humic compounds as agents with antifungal and antiviral properties shows great potential [5-7].
The book covers various numerical field simulation methods, nonlinear circuit technology and its MF-S- and X-parameters, as well as state-of-the-art power amplifier techniques. It also describes newly presented oscillators and the emerging field of GHz plasma technology. Furthermore, it addresses aspects such as waveguides, mixers, phase-locked loops, antennas, and propagation effects. In combination with the bachelor's-level book 'High-Frequency Engineering', it covers all aspects of the current state of GHz technology.
Pulmonary arterial cannulation is a common and effective method for percutaneous mechanical circulatory support in concurrent right heart and respiratory failure [1]. However, limited data exist on the effect that cannula positioning has on oxygen perfusion throughout the pulmonary artery (PA). This study aims to evaluate, using computational fluid dynamics (CFD), the effect of different cannula positions in the PA on the oxygenation of the different branching vessels, so that an optimal cannula position can be determined. The four chosen cannula positions (see Fig. 1) are: in the lower part of the main pulmonary artery (MPA); in the MPA at the junction between the right pulmonary artery (RPA) and the left pulmonary artery (LPA); in the RPA at its first branch; and in the LPA at its first branch.
This study presents the concept of AstroBioLab, an autonomous astrobiological field laboratory tailored for the exploration of (sub)glacial habitats. AstroBioLab is an integral component of the TRIPLE (Technologies for Rapid Ice Penetration and subglacial Lake Exploration) DLR-funded project, aimed at advancing astrobiology research through the development and deployment of innovative technologies. AstroBioLab integrates diverse measurement techniques such as fluorescence microscopy, DNA sequencing and fluorescence spectrometry, while leveraging microfluidics for efficient sample delivery and preparation.
We consider the numerical approximation of second-order semi-linear parabolic stochastic partial differential equations interpreted in the mild sense which we solve on general two-dimensional domains with a C² boundary with homogeneous Dirichlet boundary conditions. The equations are driven by Gaussian additive noise, and several Lipschitz-like conditions are imposed on the nonlinear function. We discretize in space with a spectral Galerkin method and in time using an explicit Euler-like scheme. For irregular shapes, the necessary Dirichlet eigenvalues and eigenfunctions are obtained from a boundary integral equation method. This yields a nonlinear eigenvalue problem, which is discretized using a boundary element collocation method and is solved with the Beyn contour integral algorithm. We present an error analysis as well as numerical results on an exemplary asymmetric shape, and point out limitations of the approach.
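In compact, standard notation (the precise assumptions, constants, and the exact form of the Euler-type step are given in the paper), the discretization can be sketched as follows.

```latex
% Mild formulation and discretization sketch (standard notation; precise assumptions in the paper).
% Semi-linear SPDE with Dirichlet Laplacian A = -\Delta on the domain D:
\mathrm{d}U(t) = \bigl[-A\,U(t) + F(U(t))\bigr]\,\mathrm{d}t + \mathrm{d}W(t), \qquad U(0) = u_0 .
% Spectral Galerkin truncation onto the first N Dirichlet eigenpairs (\lambda_k, e_k):
U_N(t) = \sum_{k=1}^{N} u_k(t)\, e_k .
% One possible explicit Euler-type update for the coefficients (illustrative; the paper's
% scheme may differ in detail), with time step \Delta t and noise increments \Delta W^{\,j}:
u_k^{\,j+1} = u_k^{\,j} + \Delta t\,\bigl(-\lambda_k u_k^{\,j} + \langle F(U_N^{\,j}), e_k\rangle\bigr)
              + \langle \Delta W^{\,j}, e_k\rangle .
```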
This paper presents a thermal simulation environment for moving objects on the lunar surface. The goal of the thermal simulation environment is to enable the reliable prediction of the temperature development of a given object on the lunar surface by providing the respective heat fluxes for a mission on a given travel path. The user can import any object geometry and freely define the path that the object should travel. Using the path of the object, the relevant lunar surface geometry is imported from a digital elevation model. The relevant parts of the lunar surface are determined based on distance to the defined path. A thermal model of these surface sections is generated, consisting of a porous layer on top and a denser layer below. The object is moved across the lunar surface, and its inclination is adapted depending on the slope of the terrain below it. Finally, a transient thermal analysis of the object and its environment is performed at several positions on its path and the results are visualized. The paper introduces details on the thermal modeling of the lunar surface, as well as its verification. Furthermore, the structure of the created software is presented. The robustness of the environment is verified with the help of sensitivity studies and possible improvements are presented.
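As background, a generic surface energy balance of the kind such lunar thermal models solve at each surface facet is sketched below; this is an illustrative textbook form, not necessarily the exact formulation implemented in the tool.

```latex
% Generic lunar surface energy balance at a surface facet (illustrative textbook form):
(1 - A)\, q_{\mathrm{sun}} \cos\theta \;+\; q_{\mathrm{IR}}
  \;=\; \varepsilon\,\sigma\, T_s^{4} \;+\; q_{\mathrm{cond}} ,
% with albedo A, local solar incidence angle \theta, infrared irradiation from the surrounding
% terrain q_IR, emissivity \varepsilon, surface temperature T_s, and q_cond the heat conducted
% into the porous upper and denser lower regolith layers of the model.
```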
Direct sampling method via Landweber iteration for an absorbing scatterer with a conductive boundary
(2024)
In this paper, we consider the inverse shape problem of recovering isotropic scatterers with a conductive boundary condition. Here, we assume that the measured far-field data is known at a fixed wave number. Motivated by recent work, we study a new direct sampling indicator based on the Landweber iteration and the factorization method, and we prove the connection between these reconstruction methods. The method studied here falls under the category of qualitative reconstruction methods, where an imaging function is used to recover the absorbing scatterer. We prove the stability of our new imaging function and derive a discrepancy principle for choosing the regularization parameter. The theoretical results are verified with numerical examples to show how the new Landweber direct sampling method performs.
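For orientation, the classical Landweber iteration that the new indicator builds on can be sketched as follows (generic notation; the paper's exact imaging function and discrepancy principle are not reproduced here).

```latex
% Classical Landweber iteration for the far-field equation F g_z = \phi_z (generic notation):
g_z^{(k+1)} \;=\; g_z^{(k)} + \tau\, F^{*}\bigl(\phi_z - F\, g_z^{(k)}\bigr),
\qquad 0 < \tau < \tfrac{2}{\|F\|^{2}} ,
% where F is the far-field operator and \phi_z the test function for the sampling point z;
% an imaging function built from these iterates is then evaluated over the sampling grid to
% indicate which points z lie inside the scatterer.
```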
To successfully develop and introduce concrete artificial intelligence (AI) solutions into operational practice, a comprehensive process model is being tested in the WIRKsam joint project. It is based on a methodical approach that integrates human, technical and organisational aspects and involves employees in the process. The chapter focuses on the procedure for identifying the requirements for a work system in which AI is to be implemented in problem-driven projects, and for selecting appropriate AI methods. This means that the use case has already been narrowed down at the beginning of the project and must be fully defined in the subsequent steps. First, the existing preliminary work is presented. Based on this, an overview of all procedural steps and methods is given. All methods are presented in detail and good-practice approaches are shown. Finally, the developed procedure is reflected upon, based on its application in nine companies.
Perennial ryegrass (Lolium perenne) is an underutilized lignocellulosic biomass that has several benefits such as high availability, renewability, and biomass yield. The grass press-juice obtained from the mechanical pretreatment can be used for the bio-based production of chemicals. Lactic acid is a platform chemical that has attracted consideration due to its broad area of applications. For this reason, the more sustainable production of lactic acid is expected to increase. In this work, lactic acid was produced using complex medium at the bench- and reactor scale, and the results were compared to those obtained using an optimized press-juice medium. Bench-scale fermentations were carried out in a pH-control system and lactic acid production reached approximately 21.84 ± 0.95 g/L in complex medium, and 26.61 ± 1.2 g/L in press-juice medium. In the bioreactor, the production yield was 0.91 ± 0.07 g/g, corresponding to a 1.4-fold increase with respect to the complex medium with fructose. As a comparison to the traditional ensiling process, the ensiling of whole grass fractions of different varieties harvested in summer and autumn was performed. Ensiling showed variations in lactic acid yields, with a yield up to 15.2% dry mass for the late-harvested samples, surpassing typical silage yields of 6–10% dry mass.
Several unconnected laboratory experiments are usually offered for students in instrumental analysis lab. To give the students a more rational overview of the most common instrumental techniques, a new laboratory experiment was developed. Marketed pain relief drugs, familiar consumer products with one to three active components, namely, acetaminophen (paracetamol), acetylsalicylic acid (ASA), and caffeine, were selected. Common analytical methods were compared regarding the performance of qualitative and quantitative analysis of unknown tablets: UV–visible (UV–vis), infrared (IR), and nuclear magnetic resonance (NMR) spectroscopies, as well as high-performance liquid chromatography (HPLC). The students successfully uncovered the composition of formulations, which were divided into three difficulty categories. Students were shown that in addition to simple mixtures handled in theoretical classes, the composition of complex drug products can also be uncovered. By comparing the performance of different techniques, students deepen their understanding and compare the efficiency of analytical methods in the context of complex mixtures. The laboratory experiment can be adjusted for graduate level by including extra tasks such as method optimization, validation, and 2D spectroscopic techniques.
The quest for scientifically advanced and sustainable solutions is driven by growing environmental and economic issues associated with coal mining, processing, and utilization. Consequently, within the coal industry, there is a growing recognition of the potential of microbial applications in fostering innovative technologies. Microbial-based coal solubilization, coal beneficiation, and coal dust suppression are green alternatives to traditional thermochemical and leaching technologies and better meet the need for ecologically sound and economically viable choices. Surfactant-mediated approaches have emerged as powerful tools for modeling, simulation, and optimization of coal-microbial systems and continue to gain prominence in clean coal fuel production, particularly in microbiological co-processing, conversion, and beneficiation. Surfactants (surface-active agents) are amphiphilic compounds that can reduce surface tension and enhance the solubility of hydrophobic molecules. A wide range of surfactant properties can be achieved by either directly influencing microbial growth factors, stimulants, and substrates or indirectly serving as frothers, collectors, and modifiers in the processing and utilization of coal. This review highlights the significant biotechnological potential of surfactants by providing a thorough overview of their involvement in coal biodegradation, bioprocessing, and biobeneficiation, acknowledging their importance as crucial steps in coal consumption.
Easy-read and large language models: on the ethical dimensions of LLM-based text simplification
(2024)
The production of easy-read and plain language is a challenging task, requiring well-educated experts to write context-dependent simplifications of texts. Therefore, the domain of easy-read and plain language is currently restricted to the bare minimum of necessary information. Thus, even though there is a tendency to broaden the domain of easy-read and plain language, the inaccessibility of a significant amount of textual information excludes the target audience from participation or entertainment and restricts their ability to live autonomously. Large language models can solve a vast variety of natural language tasks, including the simplification of standard-language texts to easy-read or plain language. Moreover, with the rise of generative models like GPT, easy-read and plain language may become applicable to all kinds of natural language texts, making formerly inaccessible information accessible to marginalized groups such as, among others, non-native speakers and people with mental disabilities. In this paper, we argue for the feasibility of text simplification and generation in that context, outline the ethical dimensions, and discuss the implications for researchers in the fields of ethics and computer science.
In the face of the current trend towards larger and more complex production tasks in the SLM process and the current limitations in terms of maximum build space, the welding of SLM components to each other or to conventionally manufactured parts is becoming increasingly relevant. The fusion welding of SLM components made of 316L has so far rarely been investigated, and if so, only for highly specialised laser welding processes. When welding with industrial gas-shielded welding processes such as MIG/MAG or TIG welding, distortions occur which are associated with the resulting residual stresses in the components. This paper investigates process-side influencing factors to avoid residual stresses in SLM components made of 316L. The aim is to develop a strategy to build up SLM components with as little residual stress as possible in order to join them as profitably as possible with a downstream welding process. For this purpose, influencing parameters such as laser power and scan speed, but also scan vector length and different scan patterns, are investigated with regard to their influence on residual stresses.
Establishing high-performance polymers in additive manufacturing opens up new industrial applications. Polyetheretherketone (PEEK) was initially used in aerospace but is now widely applied in automotive, electronics, and medical industries. This study focuses on developing applications using PEEK and Fused Filament Fabrication for cost-efficient vulcanization injection mold production. A proof of concept confirms PEEK’s suitability for AM mold making, withstanding vulcanization conditions. Printing PEEK above its glass transition temperature of 145 °C is preferable due to its narrow process window. A new process strategy at room temperature is discussed, with micrographs showing improved inter-layer bonding at 410 °C nozzle temperature and 0.1 mm layer thickness. Minimizing the layer thickness from 0.15 mm to 0.1 mm improves tensile strength by 16%.
The fourth industrial revolution is on its way to reshape manufacturing and value creation in a profound way. The underlying technologies like cyber-physical systems (CPS), big data, collaborative robotics, additive manufacturing or artificial intelligence offer huge potentials for the optimization and evolution of production systems. However, many manufacturing companies struggle to implement these technologies. This can only in part be attributed to the lack of skilled personnel within these companies or a missing digitalization strategy. Rather, there is a fundamental incompatibility between the way current production systems and companies (Industry 3.0) are structured across multiple dimensions compared to what is necessary for Industry 4.0. This is especially true for manufacturing systems and their transition towards flexible, decentralized and autonomous value creation networks. This paper identifies these incompatibilities within manufacturing systems across various dimensions, explores their causes and discusses a different approach to create a foundation for Industry 4.0 in manufacturing companies.
Additive Manufacturing (AM) is a topic that is becoming more relevant to many companies globally. With AM's progressive development and use for series production, integrating the technology into existing production structures is becoming an important criterion for businesses. This study qualitatively examines the actual state and different perspectives on the integration of AM in production structures. Seven semi-structured interviews were conducted and analyzed. The interview partners were high-level experts in Additive Manufacturing and production systems from industry and science. Four main themes were identified. Key findings are the far-reaching interrelationships and implications of AM within production structures. Specific AM-related aspects were identified. Those can be used to increase the knowledge and practical application of the technology in the industry and as a foundation for economic considerations.
The emergence of automotive-grade LiDARs has given rise to new potential methods to develop novel advanced driver assistance systems (ADAS). However, accurate and reliable parking slot detection (PSD) remains a challenge, especially in the low-light conditions typical of indoor car parks. Existing camera-based approaches struggle with these conditions and require sensor fusion to determine parking slot occupancy. This paper proposes a PSD algorithm that utilizes the intensity of a LiDAR point cloud to detect the markings of perpendicular parking slots. LiDAR-based approaches offer robustness in low-light environments and can directly determine occupancy status using 3D information. The proposed PSD algorithm first segments the ground plane from the LiDAR point cloud and detects the main axis along the driving direction using a random sample consensus (RANSAC) algorithm. The remaining ground point cloud is filtered by a dynamic Otsu’s threshold, and the markings of parking slots are detected in multiple windows along the driving direction separately. Hypotheses of parking slots are generated between the markings, which are cross-checked with a non-ground point cloud to determine the occupancy status. Test results showed that the proposed algorithm is robust in detecting perpendicular parking slots in well-marked car parks with high precision, low width error, and low variance. The proposed algorithm is designed in such a way that future adoption for parallel parking slots and combination with free-space-based detection approaches is possible. This solution addresses the limitations of camera-based systems and enhances PSD accuracy and reliability in challenging lighting conditions.
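One central step of such a pipeline is separating bright slot markings from asphalt by thresholding the reflectivity of the ground points. The following sketch illustrates this idea only; it is not the authors' implementation, and the function name, bin count, and synthetic intensity values are assumptions made for the example.

```python
# Minimal sketch: Otsu thresholding of LiDAR ground-point intensities to separate
# bright paint markings from asphalt, as one step of a marking-based parking slot
# detector. Assumes `intensities` holds reflectivity values of already-segmented
# ground points.
import numpy as np

def otsu_threshold(intensities: np.ndarray, bins: int = 256) -> float:
    """Return the intensity cut that maximises the between-class variance."""
    hist, edges = np.histogram(intensities, bins=bins)
    prob = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0

    w0 = np.cumsum(prob)                       # weight of the "dark" class
    w1 = 1.0 - w0                              # weight of the "bright" class
    cum_mean = np.cumsum(prob * centers)
    mu_total = cum_mean[-1]
    mu0 = cum_mean / np.where(w0 > 0, w0, 1)
    mu1 = (mu_total - cum_mean) / np.where(w1 > 0, w1, 1)

    between_var = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance per cut
    return float(centers[np.argmax(between_var)])

# Synthetic example: asphalt intensities around 10, paint markings around 60
rng = np.random.default_rng(0)
intensities = np.concatenate([rng.normal(10, 3, 5000), rng.normal(60, 5, 500)])
t = otsu_threshold(intensities)
marking_mask = intensities > t                 # candidate marking points
print(f"Otsu threshold: {t:.1f}, marking points: {marking_mask.sum()}")
```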
This paper presents initial findings from aeroelastic studies conducted on a wing-propeller model, aimed at evaluating the impact of aerodynamic interactions on wing flutter mechanisms and overall aeroelastic performance. The flutter onset is assessed using a frequency-domain method. Mid-fidelity tools based on the time-domain approach are then exploited to account for the complex aerodynamic interaction between the propeller and the wing. Specifically, the open-source software DUST and MBDyn are leveraged for this purpose. The investigation covers both windmilling and thrusting conditions. During the trim process, adjustments to the collective pitch of the blades are made to ensure consistency across operational points. Time histories are then analyzed to pinpoint flutter onset, and corresponding frequencies and damping ratios are identified. The results reveal a marginal destabilizing effect of aerodynamic interaction on flutter speed, approximately 5%. Notably, the thrusting condition demonstrates a greater destabilizing influence compared to the windmilling case. These comprehensive findings enhance the understanding of the aerodynamic behavior of such systems and offer valuable insights for early design predictions and the development of streamlined models for future endeavors.
This paper deals with the problem of determining the optimal capacity of concentrated solar power (CSP) plants, especially in the context of hybrid solar power plants. This work presents an innovative analytical approach to optimizing the capacity of concentrated solar plants. The proposed method is based on the use of additional non-dimensional parameters, in particular the design factor and the solar multiple factor. This paper presents a mathematical optimization model that focuses on the capacity of concentrated solar power plants where thermal storage plays a key role in the energy source. The analytical approach provides a more complete understanding of the design process for hybrid power plants. In addition, the use of additional factors and the combination of the proposed method with existing numerical methods allows for more refined optimization, which allows for the more accurate selection of the capacity for specific geographical conditions. Importantly, the proposed method significantly increases the speed of computation compared to that of traditional numerical methods. Finally, the authors present the results of the analysis of the proposed system of equations for calculating the levelized cost of electricity (LCOE) for hybrid solar power plants. The nonlinear dependence of the LCOE on the main calculation parameters is shown.
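To make the role of the sizing parameters concrete, the following sketch evaluates a textbook LCOE formula (annualised capital cost plus operating cost, divided by annual yield) for a hypothetical plant at a few solar multiples. All cost and yield figures are invented placeholders and do not come from the paper; the sketch only illustrates why the LCOE depends nonlinearly on the sizing parameters.

```python
# Illustrative sketch (not the paper's model): LCOE of a CSP plant whose solar
# field is sized through a solar multiple SM. All numbers are assumptions.
def crf(i: float, n: int) -> float:
    """Capital recovery factor for discount rate i and lifetime n years."""
    return i * (1 + i) ** n / ((1 + i) ** n - 1)

def lcoe(capex: float, opex_per_year: float, annual_energy_mwh: float,
         i: float = 0.07, n: int = 25) -> float:
    """LCOE = annualised CAPEX plus yearly OPEX, divided by annual yield (MWh)."""
    return (capex * crf(i, n) + opex_per_year) / annual_energy_mwh

# Hypothetical 50 MW plant: a larger solar multiple adds field/storage CAPEX but
# raises the capacity factor thanks to thermal storage (rough guesses only).
for sm, capacity_factor in [(1.5, 0.30), (2.0, 0.42), (2.5, 0.50)]:
    capex = 200e6 + 80e6 * (sm - 1.0)      # base plant + incremental field/storage
    energy = 50 * 8760 * capacity_factor   # MWh per year
    print(f"SM={sm:.1f}: LCOE = {lcoe(capex, 6e6, energy):.1f} per MWh")
```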
This paper presents initial findings from aeroelastic studies conducted on a wing-propeller model, aimed at evaluating the impact of aerodynamic interactions on wing flutter mechanisms and overall aeroelastic performance. Utilizing a frequency domain method, the flutter onset within a specified flight speed range is assessed. Mid-fidelity tools with a time domain approach are then used to account for the complex aerodynamic interaction between the propeller and the wing. Specifically, open-source software DUST and MBDyn are leveraged for this purpose. This investigation covers both windmilling and thrusting conditions of the wing-propeller model. During the trim process, adjustments to the collective pitch of the blades are made to ensure consistency across operational points. Time histories are then analyzed to pinpoint flutter onset, and corresponding frequencies and damping ratios are meticulously identified. The results reveal a marginal destabilizing effect of aerodynamic interaction on flutter speed, approximately 5%. Notably, the thrusting condition demonstrates a greater destabilizing influence compared to windmilling. These comprehensive findings enhance the understanding of the aerodynamic behavior of such systems and offer valuable insights for early design predictions and the development of streamlined models for future endeavors.
This paper serves as an introduction to the ECTS monitoring system and its potential applications in higher education. It also emphasizes the potential for ECTS monitoring to become a proactive system, supporting students by predicting academic success and identifying groups of potential dropouts for tailored support services. The use of nearest neighbor analysis is suggested for improving data analysis and prediction accuracy.
The Inverted Rotary Pendulum: Facilitating Practical Teaching in Advanced Control Engineering
(2024)
This paper outlines a practical approach to teaching control engineering principles, with an inverted rotary pendulum serving as an illustrative example. It shows how the pendulum is embedded in an advanced course of control engineering. This approach is incorporated into a flipped-classroom concept, as well as classical teaching concepts, offering students practical experience in control engineering. In addition, the design of the pendulum is shown, using a Raspberry Pi as the target platform for Matlab Simulink. This pendulum can be used in the classroom to evaluate the controller design mentioned above. It is analysed whether the use of the pendulum generates a deeper understanding of the learning content.
Sexism in online media comments is a pervasive challenge that often manifests subtly, complicating moderation efforts as interpretations of what constitutes sexism can vary among individuals. We study monolingual and multilingual open-source text embeddings to reliably detect sexism and misogyny in German-language online comments from an Austrian newspaper. We observed that classifiers trained on text embeddings closely mimic the individual judgements of human annotators. Our method showed robust performance in the GermEval 2024 GerMS-Detect Subtask 1 challenge, achieving an average macro F1 score of 0.597 (4th place, as reported on Codabench). It also accurately predicted the distribution of human annotations in GerMS-Detect Subtask 2, with an average Jensen-Shannon distance of 0.301 (2nd place). The computational efficiency of our approach suggests potential for scalable applications across various languages and linguistic contexts.
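A minimal sketch of this kind of pipeline is given below: comments are encoded with an open-source multilingual sentence-embedding model and a lightweight classifier is trained on the embeddings. The model name, the placeholder comments, and the labels are assumptions for illustration, not the authors' exact setup or data.

```python
# Sketch of the general approach (not the authors' exact pipeline): encode
# German comments with a multilingual sentence-embedding model, then train a
# simple classifier on top. Comments and labels below are neutral placeholders.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

comments = ["Platzhalterkommentar eins", "Platzhalterkommentar zwei",
            "Platzhalterkommentar drei", "Platzhalterkommentar vier"]
labels = [0, 1, 0, 1]   # 1 = flagged by annotators (placeholder labels)

encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")  # example model
X = encoder.encode(comments, normalize_embeddings=True)

clf = LogisticRegression(max_iter=1000, class_weight="balanced")
clf.fit(X, labels)
print("macro F1 on training data:",
      f1_score(labels, clf.predict(X), average="macro"))
```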
The use of industrial robots allows the precise manipulation of all components necessary for setting up a large-scale particle image velocimetry (PIV) system. The known internal calibration matrix of the cameras in combination with the actual pose of the industrial robots and the calculated transform from the fiducial markers to camera coordinates allows the precise positioning of the individual PIV components according to the measurement demands. In addition, the complete calibration procedure for generating the external camera matrix and the mapping functions for e.g. dewarping the stereo images can be determined automatically without further user interaction, and thus the degree of automation can be extended to nearly 100%. This increased degree of automation expands the application range of PIV systems, in particular for measurement tasks with severe time constraints.
In recent years, more and more digital startups have been founded and many of them work remotely by applying enterprise collaboration systems (ECS). The study investigates the functional affordances of ECS, particularly Slack, and examines its potential as a virtual office environment for cultural development in digital startups. Through a case study and based on affordance theoretical considerations, the paper explores how ECS facilitates remote collaboration, communication, and socialization within digital startups. The findings comprise material properties of ECS (synchronous and asynchronous communication), functional affordances (virtual office and culture development affordances) as well as its realization (through communication practices, openness, and inter-company accessibility) and are conceptualized as a model for ECS affordances in digital startups.
To gain insight on chemical sterilization processes, the influence of temperature (up to 70 °C), intense green light, and hydrogen peroxide (H₂O₂) concentration (up to 30% in aqueous solution) on microbial spore inactivation is evaluated by in-situ Raman spectroscopy with an optical trap. Bacillus atrophaeus is utilized as a model organism. Individual spores are isolated and their chemical makeup is monitored under dynamically changing conditions (temperature, light, and H₂O₂ concentration) to mimic industrially relevant process parameters for sterilization in the field of aseptic food processing. While isolated spores in water are highly stable, even at elevated temperatures of 70 °C, exposure to H₂O₂ leads to a loss of spore integrity characterized by the release of the key spore biomarker dipicolinic acid (DPA) in a concentration-dependent manner, which indicates damage to the inner membrane of the spore. Intensive light or heat, both of which accelerate the decomposition of H₂O₂ into reactive oxygen species (ROS), drastically shorten the spore lifetime, suggesting the formation of ROS as a rate-limiting step during sterilization. It is concluded that Raman spectroscopy can deliver mechanistic insight into the mode of action of H₂O₂-based sterilization and reveal the individual contributions of different sterilization methods acting in tandem.
In this field study we present an approach for the comprehensive and room-specific assessment of parameters with the overall aim to realize energy-efficient provision of hygienically harmless and thermally comfortable indoor environmental quality in naturally ventilated non-residential buildings. The approach is based on (i) conformity assessment of room design parameters, (ii) empirical determination of theoretically expected occupant-specific supply air flow rates and corresponding air exchange rates, (iii) experimental determination of real occupant-specific supply air flow rates and corresponding air exchange rates, (iv) measurement of indoor environmental exposure conditions of temperature T, relative humidity RH, and the concentrations of CO₂, PM2.5 and TVOC, and (v) determination of real energy demands for the prevailing ventilation scheme. Underlying assessment criteria comprise the indoor environmental parameters of category II of EN 16798-1, i.e. temperature T = 20 °C–24 °C and relative humidity RH = 25–60 %, as well as the guide values of the German Federal Environment Agency for CO₂, PM2.5 and TVOC of 1000 ppm, 15 μg m⁻³, and 1 mg m⁻³, respectively.
Investigation objects are six naturally ventilated classrooms of a German secondary school. Major factors influencing indoor environmental quality in these classrooms are the specific room volume per occupant and the window opening area. It is concluded that the rigorous implementation of ventilation recommendations laid down by the German Federal Environment Agency is ineffective with respect to anticipated indoor environmental parameters and inefficient with respect to ventilation energy losses on the order of about 10 kWh m⁻² a⁻¹ to 30 kWh m⁻² a⁻¹.
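The conformity check against these criteria can be expressed as a simple per-room comparison. The sketch below encodes only the thresholds quoted above; the example measurement values are invented.

```python
# Small sketch of the conformity check implied above: compare room readings
# against the stated EN 16798-1 category II ranges and the German Federal
# Environment Agency guide values. The example reading is invented.
LIMITS = {
    "T_degC": (20.0, 24.0),        # category II temperature range
    "RH_percent": (25.0, 60.0),    # category II relative humidity range
    "CO2_ppm": (None, 1000.0),     # UBA guide value
    "PM2_5_ug_m3": (None, 15.0),   # UBA guide value
    "TVOC_mg_m3": (None, 1.0),     # UBA guide value
}

def check_room(measurements: dict) -> dict:
    """Return True per parameter if the corresponding guide value is met."""
    result = {}
    for key, (low, high) in LIMITS.items():
        value = measurements[key]
        result[key] = (low is None or value >= low) and (high is None or value <= high)
    return result

room = {"T_degC": 21.5, "RH_percent": 35.0, "CO2_ppm": 1450.0,
        "PM2_5_ug_m3": 9.0, "TVOC_mg_m3": 0.4}   # example reading
print(check_room(room))   # CO2 exceeds the 1000 ppm guide value here
```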
Enhancement of succinic acid production by Actinobacillus succinogenes in an electro-bioreactor
(2024)
This work examines the electrochemically enhanced production of succinic acid using the bacterium Actinobacillus succinogenes. The principal objective is to enhance the metabolic potential of glucose and CO₂ utilization via the C4 pathway in order to synthesize succinic acid. We report on the development of an electro-bioreactor system to increase succinic acid production in a power-to-X approach. The use of activated carbon fibers as electrode surfaces and contact areas allows A. succinogenes to self-initiate biofilm formation. The integration of an electrical potential into the system shifts the redox balance from NAD+ to NADH, increasing the efficiency of metabolic processes. Mediators such as neutral red facilitate electron transfer within the system and optimize the redox reactions that are crucial for increased succinic acid production. Furthermore, the role of carbon nanotubes (CNTs) in electron transfer was investigated. The electro-bioreactor system developed here was operated in batch mode for 48 h and showed improvements in succinic acid yield and concentration. In particular, a run with 100 µM neutral red and a voltage of −600 mV achieved a yield of 0.7 g succinate per g glucose. In the absence of neutral red, a higher yield of 0.72 g succinate per g glucose was achieved, which represents an increase of 14% compared to the control. When a potential of −600 mV was used in conjunction with 500 µg·L⁻¹ CNTs, a 21% increase in succinate concentration was observed after 48 h. An increase of 33% was achieved in the same batch by increasing the stirring speed. These results underscore the potential of the electro-bioreactor system to markedly enhance succinic acid production.
Industrial field devices exchange information through standardized communication interfaces and data models, encompassing process data, communication properties, and vendor details. Despite enhancing interoperability within a specific protocol, integrating these devices with diverse systems poses challenges due to data model fragmentation and custom interfaces. The absence of a universal semantic model for categorizing field device process data independently of standards requires engineers to repeatedly devise custom exchange data models for different sensors and actuators, relying on standards like OPC UA. In response, this work proposes an ontology-based architecture to tackle information data model fragmentation, aiming for seamless data interoperability across a universal interface. By focusing on two open-access field device standards, IO-Link and CANopen, we compare their information data models, identify existing limitations, and put forth a semantic information model. The objective is to offer an interoperable interface for Industry 4.0 applications, showcasing the potential of an ontology-based approach in streamlining data exchange and reducing heterogeneity among field devices.
Air–water flows
(2024)
High Froude-number open-channel flows can entrain significant volumes of air, a phenomenon that occurs continuously in spillways, in free-falling jets and in hydraulic jumps, or as localized events, notably at the toe of hydraulic jumps or in plunging jets. Within these flows, turbulence generates millions of bubbles and droplets as well as highly distorted wavy air–water interfaces. This phenomenon is crucial from a design perspective, as it influences the behaviour of high-velocity flows, potentially impairing the safety of dam operations. This review examines recent scientific and engineering progress, highlighting foundational studies and emerging developments. Notable advances have been achieved in the past decades through improved sampling of flows and the development of physics-based models. Current challenges are also identified for instrumentation, numerical modelling and (up)scaling that hinder the formulation of fundamental theories, which are instrumental for improving predictive models, able to offer robust support for the design of large hydraulic structures at prototype scale.
This thesis aims at the presentation and discussion of well-accepted and new imaging techniques applied to different types of flow in common hydraulic engineering environments. All studies are conducted in laboratory conditions and focus on flow depth and velocity measurements. Investigated flows cover a wide range of complexity, e.g. propagation of waves, dam-break flows, slightly and fully aerated spillway flows as well as highly turbulent hydraulic jumps.
New imaging methods are compared to different types of sensors which are frequently employed in contemporary laboratory studies. This classical instrumentation as well as the general concept of hydraulic modeling is introduced to give an overview of experimental methods.
Flow depths are commonly measured by means of ultrasonic sensors, also known as acoustic displacement sensors. These sensors may provide accurate data with high sample rates in case of simple flow conditions, e.g. low-turbulent clear water flows. However, with increasing turbulence, higher uncertainty must be considered. Moreover, ultrasonic sensors can provide point data only, while the relatively large acoustic beam footprint may lead to another source of uncertainty in case of relatively short, highly turbulent surface fluctuations (ripples) or free-surface air-water flows. Analysis of turbulent length and time scales of surface fluctuations from point measurements is also difficult. Imaging techniques with different dimensionality, however, may close this gap. It is shown in this thesis that edge detection methods (known from computer vision) may be used for two-dimensional free-surface extraction (i.e. from images taken through transparent sidewalls in laboratory flumes). Another opportunity in hydraulic laboratory studies comes with the application of stereo vision. Low-cost RGB-D sensors can be used to gather instantaneous, three-dimensional free-surface elevations, even in flows with very high complexity (e.g. aerated hydraulic jumps). It will be shown that the uncertainty of these methods is of a similar order as for classical instruments.
Particle Image Velocimetry (PIV) is a well-accepted and widespread imaging technique for velocity determination in laboratory conditions. In combination with high-speed cameras, PIV can give time-resolved velocity fields in 2D/3D or even as volumetric flow fields. PIV is based on a cross-correlation technique applied to small subimages of seeded flows. The minimum size of these subimages defines the maximum spatial resolution of resulting velocity fields. A derivative of PIV for aerated flows is also available, i.e. the so-called Bubble Image Velocimetry (BIV). This thesis emphasizes the capacities and limitations of both methods, using relatively simple setups with halogen and LED illumination. It will be demonstrated that PIV/BIV images may also be processed by means of Optical Flow (OF) techniques. OF is another method originating from the computer vision discipline, based on the assumption of image brightness conservation within a sequence of images. The Horn-Schunck approach, which is applied to hydraulic engineering problems for the first time in the studies presented herein, yields dense velocity fields, i.e. pixelwise velocity data. As discussed hereinafter, the accuracy of OF competes well with PIV for clear-water flows and even improves results (compared to BIV) for aerated flow conditions. In order to independently benchmark the OF approach, synthetic images with defined turbulence intensity are used.
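For readers unfamiliar with the method, the following compact sketch shows the classical Horn-Schunck iteration on two synthetic frames. It is a generic textbook formulation, not the implementation used in the thesis, and the smoothness weight, iteration count, and test images are arbitrary choices.

```python
# Compact Horn-Schunck sketch (illustrative only): estimates a dense velocity
# field from two grey-scale frames under the brightness-constancy assumption,
# with smoothness weight alpha.
import numpy as np
from scipy.ndimage import convolve

def horn_schunck(im1, im2, alpha=1.0, n_iter=100):
    im1 = im1.astype(float)
    im2 = im2.astype(float)
    # Spatial and temporal derivatives (simple Horn-Schunck stencils)
    kx = np.array([[-1, 1], [-1, 1]]) * 0.25
    ky = np.array([[-1, -1], [1, 1]]) * 0.25
    kt = np.ones((2, 2)) * 0.25
    Ix = convolve(im1, kx) + convolve(im2, kx)
    Iy = convolve(im1, ky) + convolve(im2, ky)
    It = convolve(im2, kt) - convolve(im1, kt)

    # Neighbourhood averaging kernel for the flow field
    avg = np.array([[1/12, 1/6, 1/12], [1/6, 0, 1/6], [1/12, 1/6, 1/12]])
    u = np.zeros_like(im1)
    v = np.zeros_like(im1)
    for _ in range(n_iter):
        u_avg = convolve(u, avg)
        v_avg = convolve(v, avg)
        d = (Ix * u_avg + Iy * v_avg + It) / (alpha**2 + Ix**2 + Iy**2)
        u = u_avg - Ix * d
        v = v_avg - Iy * d
    return u, v   # pixelwise horizontal and vertical displacement

# Synthetic test: a bright blob shifted by one pixel between the frames
frame1 = np.zeros((64, 64))
frame1[30:34, 30:34] = 255
frame2 = np.roll(frame1, shift=1, axis=1)
u, v = horn_schunck(frame1, frame2)
print("mean u over the blob:", u[30:34, 30:34].mean())
```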
Computer vision offers new opportunities that may help to improve the understanding of fluid mechanics and fluid-structure interactions in laboratory investigations. In prototype environments, it can be employed for obstacle detection (e.g. identification of potential fish migration corridors) and recognition (e.g. fish species for monitoring in a fishway) or surface reconstruction (e.g. inspection of hydraulic structures). It can thus be expected that applications to hydraulic engineering problems will develop rapidly in the near future. Current methods, however, have not been developed for fluids in motion, and systematic future developments are needed to improve the results in such difficult conditions.
Even the shortest flight through unknown, cluttered environments requires reliable local path planning algorithms to avoid unforeseen obstacles. The algorithm must evaluate alternative flight paths and identify the best path if an obstacle blocks its way. Commonly, weighted sums are used here. This work shows that weighted Chebyshev distances and factorial achievement scalarising functions are suitable alternatives to weighted sums if combined with the 3DVFH* local path planning algorithm. Both methods considerably reduce the failure probability of simulated flights in various environments. The standard 3DVFH* uses a weighted sum and has a failure probability of 50% in the test environments. A factorial achievement scalarising function, which minimises the worst combination of two out of four objective functions, reaches a failure probability of 26%; a weighted Chebyshev distance, which optimises the worst objective, has a failure probability of 30%. These results show promise for further enhancements and broader applicability.
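The difference between the scalarisation schemes can be illustrated with a small numerical sketch. The objective names, weights, and candidate costs below are invented for illustration and are not taken from the 3DVFH* implementation.

```python
# Sketch of the scalarisation idea: candidate paths are scored by a weighted sum,
# by a weighted Chebyshev distance to an ideal point, or by the worst combination
# of two out of four objectives (factorial achievement scalarising idea).
from itertools import combinations

def weighted_sum(costs, weights):
    return sum(w * c for w, c in zip(weights, costs))

def weighted_chebyshev(costs, weights, ideal=(0, 0, 0, 0)):
    # Penalises only the single worst weighted deviation from the ideal point.
    return max(w * (c - i) for w, c, i in zip(weights, costs, ideal))

def worst_pair(costs, weights):
    # Worst weighted sum over any pair of objectives.
    return max(weights[a] * costs[a] + weights[b] * costs[b]
               for a, b in combinations(range(len(costs)), 2))

# Hypothetical candidates with four normalised objectives
# (obstacle proximity, heading deviation, altitude change, smoothness)
candidates = {"A": (0.9, 0.1, 0.2, 0.1), "B": (0.4, 0.4, 0.3, 0.3)}
weights = (1.0, 1.0, 1.0, 1.0)
for name, costs in candidates.items():
    print(name, weighted_sum(costs, weights),
          weighted_chebyshev(costs, weights), worst_pair(costs, weights))
# The weighted sum slightly prefers A despite its near-collision objective (0.9),
# while the Chebyshev and worst-pair criteria prefer the safer candidate B.
```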
Ambitious climate targets affect the competitiveness of industries in the international market. To prevent such industries from moving to other countries in the wake of increased climate protection efforts, cost adjustments may become necessary. Their design requires knowledge of country-specific production costs. Here, we present country-specific cost figures for different production routes of steel, paying particular attention to transportation costs. The data can be used in floor price models aiming to assess the competitiveness of different steel production routes in different countries (Rübbelke, 2022).
Deammonification for nitrogen removal in municipal wastewater in temperate and cold climate zones is currently limited to the side stream of municipal wastewater treatment plants (MWWTP). This study developed a conceptual model of a mainstream deammonification plant, designed for 30,000 P.E., considering possible solutions corresponding to the challenging mainstream conditions in Germany. In addition, the energy-saving potential, nitrogen elimination performance and construction-related costs of mainstream deammonification were compared to a conventional plant model, having a single-stage activated sludge process with upstream denitrification. The results revealed that an additional treatment step combining chemical precipitation and ultra-fine screening is advantageous prior to the mainstream deammonification. Hereby, chemical oxygen demand (COD) can be reduced by 80% so that the COD:N ratio can be lowered from 12 to 2.5. Laboratory experiments testing mainstream conditions of temperature (8–20 °C), pH (6–9) and COD:N ratio (1–6) showed an achievable volumetric nitrogen removal rate (VNRR) of at least 50 gN/(m³·d) for various deammonifying sludges from side stream deammonification systems in the state of North Rhine-Westphalia, Germany, where m³ denotes reactor volume. Assuming a retained Norganic content of 0.0035 kg Norg./(P.E.·d) from the daily loads of N at the carbon removal stage and a VNRR of 50 gN/(m³·d) under mainstream conditions, a resident-specific reactor volume of 0.115 m³/(P.E.) is required for mainstream deammonification. This is in the same order of magnitude as the conventional activated sludge process, i.e., 0.173 m³/(P.E.) for an MWWTP of size class 4. The conventional plant model yielded a total specific electricity demand of 35 kWh/(P.E.·a) for the operation of the whole MWWTP and an energy recovery potential of 15.8 kWh/(P.E.·a) through anaerobic digestion. In contrast, the developed mainstream deammonification model plant would require only a 21.5 kWh/(P.E.·a) energy demand and result in 24 kWh/(P.E.·a) energy recovery potential, enabling the mainstream deammonification model plant to be self-sufficient. The retrofitting costs for the implementation of mainstream deammonification in existing conventional MWWTPs are nearly negligible, as the existing units like activated sludge reactors, aerators and monitoring technology are reusable. However, the mainstream deammonification must meet the performance requirement of a VNRR of about 50 gN/(m³·d) in this case.
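The quoted specific reactor volume follows from simple sizing arithmetic, reproduced below using only the figures stated in the abstract; no additional design values are assumed.

```python
# Quick arithmetic check of the sizing relation implied above: the volumetric
# nitrogen removal rate (VNRR) and the quoted specific reactor volume together
# fix the nitrogen load the mainstream stage can treat per person and day.
vnrr = 50.0                    # gN/(m³·d), achievable under mainstream conditions
v_specific = 0.115             # m³ per population equivalent (P.E.)

treatable_load = vnrr * v_specific       # gN/(P.E.·d) removable in that volume
total_volume_30k = v_specific * 30_000   # m³ for the 30,000 P.E. model plant

print(f"treatable N load: {treatable_load:.2f} gN/(P.E.*d)")        # about 5.75
print(f"reactor volume for 30,000 P.E.: {total_volume_30k:.0f} m3")  # about 3450
```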
Motile cilia are hair-like cell extensions that beat periodically to generate fluid flow along various epithelial tissues within the body. In dense multiciliated carpets, cilia were shown to exhibit a remarkable coordination of their beat in the form of traveling metachronal waves, a phenomenon which supposedly enhances fluid transport. Yet, how cilia coordinate their regular beat in multiciliated epithelia to move fluids remains insufficiently understood, particularly due to lack of rigorous quantification. We combine experiments, novel analysis tools, and theory to address this knowledge gap. To investigate collective dynamics of cilia, we studied zebrafish multiciliated epithelia in the nose and the brain. We focused mainly on the zebrafish nose, due to its conserved properties with other ciliated tissues and its superior accessibility for non-invasive imaging. We revealed that cilia are synchronized only locally and that the size of local synchronization domains increases with the viscosity of the surrounding medium. Even though synchronization is local only, we observed global patterns of traveling metachronal waves across the zebrafish multiciliated epithelium. Intriguingly, these global wave direction patterns are conserved across individual fish, but different for left and right noses, unveiling a chiral asymmetry of metachronal coordination. To understand the implications of synchronization for fluid pumping, we used a computational model of a regular array of cilia. We found that local metachronal synchronization prevents steric collisions, i.e., cilia colliding with each other, and improves fluid pumping in dense cilia carpets, but hardly affects the direction of fluid flow. In conclusion, we show that local synchronization together with tissue-scale cilia alignment coincide and generate metachronal wave patterns in multiciliated epithelia, which enhance their physiological function of fluid pumping.
Despite the challenges of pioneering molten salt towers (MST), it remains the leading technology in central receiver power plants today, thanks to cost-effective storage integration and high cost reduction potential. The limited controllability in volatile solar conditions can cause significant losses, which are difficult to estimate without comprehensive modeling [1]. This paper presents a methodology to generate predictions of the dynamic behavior of the receiver system as part of an operating assistance system (OAS). Based on this, it delivers proposals on whether and when to drain and refill the receiver during a cloudy period in order to maximize the net yield, and quantifies the amount of net electricity gained by this. After prior analysis with a detailed dynamic two-phase model of the entire receiver system, two different reduced modeling approaches were developed and implemented in the OAS. A tailored decision algorithm utilizes both models to deliver the desired predictions efficiently and with appropriate accuracy.
Antibias training is increasingly demanded and practiced in academia and industry to increase employees’ sensitivity to discrimination, racism, and diversity. Under the heading of “Diversity Management,” antibias trainings are mainly offered as one-off workshops intending to raise awareness of unconscious biases, create a diversity-affirming corporate culture, promote awareness of the potential of diversity, and ultimately enable the reflection of diversity in development processes. However, since the approach originates from childhood education, research and scientific articles on the sustainable effectiveness of antibias training in adulthood, especially in academia, are very scarce. In order to fill this research gap, the article aims to explore how sustainable the effects of individual antibias trainings on participants’ behavior are. In order to investigate this, participant observation in a qualitative pre–post setting was conducted, analyzing antibias training in an academic context. Two observers actively participated in the training sessions and documented the activities and reflection processes of the participants. Overall, the results question the effectiveness of single antibias trainings and show that a target-group adaptive approach is mandatory owing to the background of the approach in early childhood education. Therefore, antibias work needs to be adapted to the target group’s needs and realities of life. Furthermore, the study reveals that single antibias trainings must be embedded in a holistic diversity management approach to stimulate sustainable reflection processes among the target group. This article is one of the first to scientifically evaluate antibias training effectiveness, especially in engineering sciences and the university context.
The complex questions of today for a world of tomorrow are characterized by their global impact. Solutions must therefore not only be sustainable in the sense of the three pillars of sustainability (economic, environmental, and social) but must also function globally. This goes hand in hand with the need for intercultural acceptance of developed services and products. To achieve this, engineers, as the problem solvers of the future, must be able to work in intercultural teams on appropriate solutions, and be sensitive to intercultural perspectives. To equip the engineers of the future with the so-called future skills, teaching concepts are needed in which students can acquire these methods and competencies in application-oriented formats. The presented course "Applying Design Thinking - Sustainability, Innovation and Interculturality" was developed to teach future skills from the competency areas Digital Key Competencies, Classical Competencies and Transformative Competencies. The CDIO Standard 3.0, in particular the standards 5, 6, 7 and 8, was used as a guideline. The course aims to prepare engineering students from different disciplines and cultures for their future work in an international environment by combining a digital teaching format with an interdisciplinary, transdisciplinary and intercultural setting for solving sustainability challenges. The innovative moment lies in the digital application of design thinking and the inclusion of intercultural as well as trans- and interdisciplinary perspectives in innovation development processes. In this paper, the concept of the course will be presented in detail and the particularities of a digital implementation of design thinking will be addressed. Subsequently, the potentials and challenges will be reflected and practical advice for integrating design thinking in engineering education will be given.
This paper presents an approach to predicting the sound exposure on the ground caused by a landing aircraft with recuperating propellers. The noise source along the trajectory of a flight specified for a steeper approach is simulated based on measurements of sound power levels and additional parameters of a single propeller placed in a wind tunnel. To validate the measurement results, these simulations are also supported by overflight measurements of a test aircraft. It is shown that the simple source models of propellers do not provide fully satisfactory results since the sound levels are estimated too low. Nevertheless, with a further reference comparison, margins for an acceptable increase in the sound power level of the aircraft on its now steeper approach path could be estimated. Thus, in this case, a +7 dB increase in the sound power level (SWL) would not increase the sound exposure level (SEL) compared to the conventional approach within only 2 km ahead of the airfield.
Residential and commercial buildings account for more than one-third of global energy-related greenhouse gas emissions. Integrated multi-energy systems at the district level are a promising way to reduce greenhouse gas emissions by exploiting economies of scale and synergies between energy sources. Planning district energy systems comes with many challenges in an ever-changing environment. Computational modelling established itself as the state-of-the-art method for district energy system planning. Unfortunately, it is still cumbersome to combine standalone models to generate insights that surpass their original purpose. Ideally, planning processes could be solved by using modular tools that easily incorporate the variety of competing and complementing computational models. Our contribution is a vision for a collaborative development and application platform for multi-energy system planning tools at the district level. We present challenges of district energy system planning identified in the literature and evaluate whether this platform can help to overcome these challenges. Further, we propose a toolkit that represents the core technical elements of the platform. Lastly, we discuss community management and its relevance for the success of projects with collaboration and knowledge sharing at their core.
Aspergillus oryzae is an industrially relevant organism for the secretory production of heterologous enzymes, especially amylases. The activities of potential heterologous amylases, however, cannot be quantified directly from the supernatant due to the high background activity of native α-amylase. This activity is caused by the gene products of amyA, amyB, and amyC. In this study, an in vitro CRISPR/Cas9 system was established in A. oryzae to delete these genes simultaneously. First, pyrG of A. oryzae NSAR1 was mutated by exploiting NHEJ to generate a counter-selection marker. Next, all amylase genes were deleted simultaneously by co-transforming a repair template carrying pyrG of Aspergillus nidulans and flanking sequences of the amylase gene loci. The rate of obtained triple knock-outs was 47%. We showed that triple knock-outs do not retain any amylase activity in the supernatant. The established in vitro CRISPR/Cas9 system was used to achieve sequence-specific knock-in of target genes. The system was intended to incorporate a single copy of the gene of interest into the desired host for the development of screening methods. Therefore, an integration cassette for the heterologous Fpi amylase was designed to specifically target the amyB locus. The site-specific integration rate of the plasmid was 78%, with additional integrations occurring only in exceptional cases. Integration frequency was assessed via qPCR and correlated directly with heterologous amylase activity. Hence, we could compare the efficiency between two different signal peptides. In summary, we present a strategy to exploit CRISPR/Cas9 for gene mutation, multiplex knock-out, and the targeted knock-in of an expression cassette in A. oryzae. Our system provides straightforward strain engineering and paves the way for the development of fungal screening systems.
In times of social climate protection movements, such as Fridays for Future, the priorities of society, industry and higher education are currently changing. The consideration of sustainability challenges is increasing. In the context of sustainable development, social skills are crucial to achieving the United Nations Sustainable Development Goals (SDGs). In particular, the impact that educational activities have on people, communities and society is therefore coming to the fore. Research has shown that people with high levels of social competence are better able to manage stressful situations, maintain positive relationships and communicate effectively. They are also associated with better academic performance and career success. However, especially in engineering programs, the social pillar is underrepresented compared to the environmental and economic pillars.
In response to these changes, higher education institutions should be more aware of their social impact - from individual forms of teaching to entire modules and degree programs. To specifically determine the potential for improvement and derive resulting change for further development, we present an initial framework for social impact measurement by transferring already established approaches from the business sector to the education sector. To demonstrate the applicability, we measure the key competencies taught in undergraduate engineering programs in Germany.
The aim is to prepare the students for success in the modern world of work and their future contribution to sustainable development. Additionally, the university can include the results in its sustainability report. Our method can be applied to different teaching methods and enables their comparison.
This book is based on a multimedia course for biological and chemical engineers, which is designed to trigger students' curiosity and initiative. A solid basic knowledge of thermodynamics and kinetics is necessary for understanding many technical, chemical, and biological processes.
The one-semester basic lecture course was divided into 12 workshops (chapters). Each chapter covers a practically relevant area of physical chemistry and contains the following didactic elements that make this book particularly exciting and understandable:
- Links to Videos at the start of each chapter as preparation for the workshop
- Key terms (in bold) for further research of your own
- Comprehension questions and calculation exercises with solutions as learning checks
- Key illustrations as simple, easy-to-replicate blackboard pictures
Humorous cartoons for each workshop (by Faelis) additionally lighten up the text and facilitate the learning process as a mnemonic. To round out the book, the appendix includes a summary of the most popular experiments in basic physical chemistry courses, as well as suggestions for designing workshops with exhibits, experiments, and "questions of the day."
Suitable for students minoring in chemistry; chemistry majors are sure to find this slimmed-down, didactically valuable book helpful as well. The book is excellent for self-study.
Experimental determination of the cross sections of proton capture on radioactive nuclei is extremely difficult, yet it is of substantial interest for the understanding of the production of the p-nuclei. For the first time, a direct measurement of proton-capture cross sections on stored, radioactive ions became possible in an energy range of interest for nuclear astrophysics. The experiment was performed at the Experimental Storage Ring (ESR) at GSI by making use of a sensitive method to measure (p,γ) and (p,n) reactions in inverse kinematics. These reaction channels are of high relevance for the nucleosynthesis processes in supernovae, which are among the most violent explosions in the universe and are not yet well understood. The cross section of the ¹¹⁸Te(p,γ) reaction has been measured at energies of 6 MeV/u and 7 MeV/u. The heavy ions interacted with a hydrogen gas jet target. The radiative recombination process of the fully stripped ¹¹⁸Te ions and electrons from the hydrogen target was used as a luminosity monitor. An overview of the experimental method and preliminary results from the ongoing analysis will be presented.
Clinical assessment of newly developed sensors is important for ensuring their validity. Comparing recordings of emerging electrocardiography (ECG) systems to a reference ECG system requires accurate synchronization of data from both devices. Current methods can be inefficient and prone to errors. To address this issue, three algorithms are presented to synchronize two ECG time series from different recording systems: Binned R-peak Correlation, R-R Interval Correlation, and Average R-peak Distance. These algorithms reduce ECG data to their cyclic features, mitigating inefficiencies and minimizing discrepancies between different recording systems. We evaluate the performance of these algorithms using high-quality data and then assess their robustness after manipulating the R-peaks. Our results show that R-R Interval Correlation was the most efficient, whereas the Average R-peak Distance and Binned R-peak Correlation were more robust against noisy data.
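As an illustration of the R-R Interval Correlation idea, the sketch below cross-correlates the R-R interval sequences of two recordings to find the beat offset and the corresponding time offset. It is a simplified reconstruction from the description above, not the published implementation, and the synthetic R-peak data are invented.

```python
# Sketch of R-R interval correlation: the sequences of beat-to-beat intervals
# from two devices are cross-correlated to find the beat lag, from which the
# time offset between the recordings follows.
import numpy as np

def rr_interval_lag(rpeaks_a: np.ndarray, rpeaks_b: np.ndarray) -> int:
    """Return the beat lag of recording B relative to A (R-peak times in s)."""
    rr_a = np.diff(rpeaks_a)
    rr_b = np.diff(rpeaks_b)
    rr_a = rr_a - rr_a.mean()
    rr_b = rr_b - rr_b.mean()
    corr = np.correlate(rr_a, rr_b, mode="full")
    return int(np.argmax(corr) - (len(rr_b) - 1))

# Synthetic example: device B misses the first three beats and its clock starts at zero.
rng = np.random.default_rng(1)
rr = 0.8 + 0.05 * rng.standard_normal(200)   # irregular beat-to-beat intervals in s
rpeaks_a = np.cumsum(rr)
rpeaks_b = rpeaks_a[3:] - rpeaks_a[3]

lag_beats = rr_interval_lag(rpeaks_a, rpeaks_b)
time_offset = rpeaks_a[lag_beats] - rpeaks_b[0]  # seconds to add to B's timestamps
print(lag_beats, round(time_offset, 3))
```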
Due to the decarbonization of the energy sector, the electric distribution grids are undergoing a major transformation, which is expected to increase the load on the operating resources due to new electrical loads and distributed energy resources. Therefore, grid operators need to gradually move to active grid management in order to ensure safe and reliable grid operation. However, this requires knowledge of key grid variables, such as node voltages, which is why the mass integration of measurement technology (smart meters) is necessary. Another problem is the fact that a large part of the topology of the distribution grids is not sufficiently digitized and models are partly faulty, which means that active grid operation management today has to be carried out largely blindly. It is therefore part of current research to develop methods for determining unknown grid topologies based on measurement data. In this paper, different clustering algorithms are presented and their performance in topology detection of low-voltage grids is compared. Furthermore, the influence of measurement uncertainties is investigated in the form of a sensitivity analysis.
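One simple representative of such clustering approaches is to group meters by the similarity of their voltage time series. The sketch below uses a correlation-based distance with hierarchical clustering on synthetic data; it is a generic illustration, not one of the specific algorithms compared in the paper.

```python
# Illustrative sketch: smart meters on the same feeder branch show strongly
# correlated voltage profiles, so a correlation-based distance matrix plus
# hierarchical clustering can group meters into candidate branches.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(42)
t = 24 * 4                                      # one day of 15-minute values
branch_profiles = rng.standard_normal((2, t))   # two hidden feeder branches
meters = np.vstack([branch_profiles[b] + 0.3 * rng.standard_normal(t)
                    for b in (0, 0, 0, 1, 1, 1)])   # six meters, three per branch

corr = np.corrcoef(meters)                      # meter-to-meter voltage correlation
dist = 1.0 - corr                               # correlation distance
np.fill_diagonal(dist, 0.0)
labels = fcluster(linkage(squareform(dist, checks=False), method="average"),
                  t=2, criterion="maxclust")
print(labels)                                   # e.g. [1 1 1 2 2 2]
```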
AI-based systems are nearing ubiquity not only in everyday low-stakes activities but also in medical procedures. To protect patients and physicians alike, explainability requirements have been proposed for the operation of AI-based decision support systems (AI-DSS), which adds hurdles to the productive use of AI in clinical contexts. This raises two questions: Who decides these requirements? And how should access to AI-DSS be provided to communities that reject these standards (particularly when such communities are expert-scarce)? This chapter investigates a dilemma that emerges from the implementation of global AI governance. While rejecting global AI governance limits the ability to help communities in need, global AI governance risks undermining and subjecting health-insecure communities to the force of the neo-colonial world order. For this, this chapter first surveys the current landscape of AI governance and introduces the approach of relational egalitarianism as key to (global health) justice. To discuss the two horns of the referred dilemma, the core power imbalances faced by health-insecure collectives (HICs) are examined. The chapter argues that only strong demands of a dual strategy towards health-secure collectives can both remedy the immediate needs of HICs and enable them to become healthcare independent.
Modern implementations of driver assistance systems are evolving from pure driver assistance to independently acting automation systems. Still, these systems do not cover the full vehicle usage range, also called the operational design domain, and therefore require the human driver as a fall-back mechanism. Transition of control and potential minimum risk manoeuvres are current research topics and will bridge the gap until fully autonomous vehicles are available. The authors showed in a demonstration that transition of control mechanisms can be further improved by the use of communication technology. Receiving incident type and position information via standardised vehicle-to-everything (V2X) messages can improve driver safety and comfort. The connected and automated vehicle’s software framework can take this information to plan areas where the driver should take back control by initiating a transition of control, which can be followed by a minimum risk manoeuvre in case of an unresponsive driver. This transition of control has been implemented in a test vehicle and was presented to the public during the IEEE IV2022 (IEEE Intelligent Vehicles Symposium) in Aachen, Germany.
Lead and nickel, as heavy metals, are still used in industrial processes, and are classified as “environmental health hazards” due to their toxicity and polluting potential. The detection of heavy metals can prevent environmental pollution at toxic levels that are critical to human health. In this sense, the electrolyte–insulator–semiconductor (EIS) field-effect sensor is an attractive sensing platform for the fabrication of reusable and robust sensors to detect such substances. This study aims to fabricate a sensing unit on an EIS device based on Sn₃O₄ nanobelts embedded in a polyelectrolyte matrix of polyvinylpyrrolidone (PVP) and polyacrylic acid (PAA) using the layer-by-layer (LbL) technique. The EIS-Sn₃O₄ sensor exhibited enhanced electrochemical performance for detecting Pb²⁺ and Ni²⁺ ions, revealing a higher affinity for Pb²⁺ ions, with sensitivities of ca. 25.8 mV/decade and 2.4 mV/decade, respectively. Such results indicate that Sn₃O₄ nanobelts enable a feasible proof-of-concept capacitive field-effect sensor for heavy metal detection, envisaging future studies focusing on environmental monitoring.
Selected problems in the field of multivariate statistical analysis are treated. Thereby, one focus is on the paired sample case. Among other things, statistical testing problems of marginal homogeneity are under consideration. In detail, properties of Hotelling’s T² test in a special parametric situation are obtained. Moreover, the nonparametric problem of marginal homogeneity is discussed on the basis of possibly incomplete data. In the bivariate data case, properties of the Hoeffding-Blum-Kiefer-Rosenblatt independence test statistic on the basis of partly not identically distributed data are investigated. Similar testing problems are treated within the scope of the application of a result for the empirical process of the concomitants for partly categorial data. Furthermore, testing changes in the modeled solvency capital requirement of an insurance company by means of a paired sample from an internal risk model is discussed. Beyond the paired sample case, a new asymptotic relative efficiency concept based on the expected volumes of multidimensional confidence regions is introduced. Besides, a new approach for the treatment of the multi-sample goodness-of-fit problem is presented. Finally, a consistent test for the treatment of the goodness-of-fit problem is developed for the background of huge or infinite dimensional data.
To fulfil the CO₂ emission reduction targets of the European Union (EU), heavy-duty (HD) trucks need to operate 15% more efficiently by 2025 and 30% by 2030. Their electrification is necessary as conventional HD trucks are already optimized for the long-haul application. The resulting hybrid electric vehicle (HEV) truck gains most of its fuel saving potential from the recuperation of potential energy and its subsequent utilization. The key to utilizing the full potential of HEV-HD trucks is to maximize the amount of recuperated energy and ensure its intelligent usage while keeping the operating point of the internal combustion engine as efficient as possible. To achieve this goal, an intelligent energy management strategy (EMS) based on the equivalent consumption minimization strategy (ECMS) is developed for a parallel HEV-HD truck, which uses predictive discharge of the battery and an adaptive operating strategy regarding the height profile and the vehicle mass. The presented EMS can reproduce the global optimal operating strategy over long phases and leads to a fuel saving potential of up to 2% compared with a heuristic strategy. Furthermore, the fuel saving potential is correlated with the investigated boundary conditions to deepen the understanding of the impact of intelligent EMS for HEV-HD trucks.
Messenger apps like WhatsApp and Telegram are frequently used for everyday communication, but they can also be utilized as a platform for illegal activity. Telegram allows public groups with up to 200,000 participants. Criminals use these public groups for trading illegal commodities and services, which becomes a concern for law enforcement agencies, who manually monitor suspicious activity in these chat rooms. This research demonstrates how natural language processing (NLP) can assist in analyzing these chat rooms, providing an explorative overview of the domain and facilitating purposeful analyses of user behavior. We provide a publicly available corpus of annotated text messages with entities and relations from four self-proclaimed black market chat rooms. Our pipeline approach aggregates the extracted product attributes from user messages to profiles and uses these with their sold products as features for clustering. The extracted structured information is the foundation for further data exploration, such as identifying the top vendors or fine-granular price analyses. Our evaluation shows that pretrained word vectors perform better for unsupervised clustering than state-of-the-art transformer models, while the latter are still superior for sequence labeling.
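The profile-clustering step can be pictured with a toy example: once entities have been extracted, each vendor is represented by a bag of the products they offered and then clustered. The vendors, products, and the choice of TF-IDF with k-means below are illustrative assumptions, not the paper's exact features or algorithm.

```python
# Toy sketch of the profile-clustering step (entity extraction is assumed to
# have happened already; vendors and products are invented placeholders).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

vendor_products = {
    "vendor_a": ["product_x", "product_x", "product_y"],
    "vendor_b": ["product_x", "product_y"],
    "vendor_c": ["product_z"],
    "vendor_d": ["product_z", "product_z"],
}
docs = [" ".join(products) for products in vendor_products.values()]
X = TfidfVectorizer().fit_transform(docs)          # bag of sold products per vendor
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for vendor, label in zip(vendor_products, labels):
    print(vendor, "-> cluster", label)
```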
Supervised machine learning and deep learning require a large amount of labeled data, which data scientists obtain in a manual and time-consuming annotation process. To mitigate this challenge, Active Learning (AL) proposes promising data points for annotators to annotate next, instead of a sequential or random sample. This method is supposed to save annotation effort while maintaining model performance.
However, practitioners face many AL strategies for different tasks and need an empirical basis to choose between them. Surveys categorize AL strategies into taxonomies without performance indications. Presentations of novel AL strategies compare the performance to a small subset of strategies. Our contribution addresses the empirical basis by introducing a reproducible active learning evaluation (ALE) framework for the comparative evaluation of AL strategies in NLP.
The framework allows the implementation of AL strategies with low effort and a fair data-driven comparison through defining and tracking experiment parameters (e.g., initial dataset size, number of data points per query step, and the budget). ALE helps practitioners to make more informed decisions, and researchers can focus on developing new, effective AL strategies and deriving best practices for specific use cases. With best practices, practitioners can lower their annotation costs. We present a case study to illustrate how to use the framework.
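For orientation, a generic pool-based active learning loop of the kind such a framework would evaluate looks roughly as follows. The dataset, the least-confidence query strategy, and the parameter values are placeholders, not part of ALE itself.

```python
# Minimal pool-based active learning loop (uncertainty sampling shown as one
# example strategy; dataset, model, and parameters are placeholders).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
labeled = list(range(20))                  # initial dataset size
pool = [i for i in range(len(X)) if i not in labeled]
budget, query_size = 200, 20               # tracked experiment parameters

model = LogisticRegression(max_iter=1000)
while budget > 0 and pool:
    model.fit(X[labeled], y[labeled])
    proba = model.predict_proba(X[pool])
    uncertainty = 1.0 - proba.max(axis=1)             # least-confidence strategy
    ranked = np.argsort(-uncertainty)[:query_size]    # most uncertain pool points
    queried = [pool[i] for i in ranked]
    labeled.extend(queried)                           # "annotator" reveals the labels
    pool = [i for i in pool if i not in queried]
    budget -= len(queried)

print("labeled points:", len(labeled), "accuracy (sanity check):", model.score(X, y))
```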
This study evaluates neuromechanical control and muscle-tendon interaction during energy storage and dissipation tasks in hypergravity. During parabolic flights, while 17 subjects performed drop jumps (DJs) and drop landings (DLs), electromyography (EMG) of the lower limb muscles was combined with in vivo fascicle dynamics of the gastrocnemius medialis, two-dimensional (2D) kinematics, and kinetics to measure and analyze changes in energy management. Comparisons were made between movement modalities executed in hypergravity (1.8 G) and gravity on ground (1 G). In 1.8 G, ankle dorsiflexion, knee joint flexion, and vertical center of mass (COM) displacement are lower in DJs than in DLs; within each movement modality, joint flexion amplitudes and COM displacement demonstrate higher values in 1.8 G than in 1 G. Concomitantly, negative peak ankle joint power, vertical ground reaction forces, and leg stiffness are similar between both movement modalities (1.8 G). In DJs, EMG activity in 1.8 G is lower during the COM deceleration phase than in 1 G, thus impairing quasi-isometric fascicle behavior. In DLs, EMG activity before and during the COM deceleration phase is higher, and fascicles are stretched less in 1.8 G than in 1 G. Compared with the situation in 1 G, highly task-specific neuromuscular activity is diminished in 1.8 G, resulting in fascicle lengthening in both movement modalities. Specifically, in DJs, a high magnitude of neuromuscular activity is impaired, resulting in altered energy storage. In contrast, in DLs, linear stiffening of the system due to higher neuromuscular activity combined with lower fascicle stretch enhances the buffering function of the tendon, and thus the capacity to safely dissipate energy.
It has been shown that muscle fascicle curvature increases with increasing contraction level and decreasing muscle–tendon complex length. The analyses were done with limited examination windows concerning contraction level, muscle–tendon complex length, and/or intramuscular position of ultrasound imaging. With this study we aimed to investigate the correlation between fascicle arching and contraction, muscle–tendon complex length and their associated architectural parameters in gastrocnemius muscles to develop hypotheses concerning the fundamental mechanism of fascicle curving. Twelve participants were tested in five different positions (90°/105°*, 90°/90°*, 135°/90°*, 170°/90°*, and 170°/75°*; *knee/ankle angle). They performed isometric contractions at four different contraction levels (5%, 25%, 50%, and 75% of maximum voluntary contraction) in each position. Panoramic ultrasound images of gastrocnemius muscles were collected at rest and during constant contraction. Aponeuroses and fascicles were tracked in all ultrasound images and the parameters fascicle curvature, muscle–tendon complex strain, contraction level, pennation angle, fascicle length, fascicle strain, intramuscular position, sex and age group were analyzed by linear mixed effect models. Mean fascicle curvature of the medial gastrocnemius increased with contraction level (+5 m⁻¹ from 0% to 100%; p = 0.006). Muscle–tendon complex length had no significant impact on mean fascicle curvature. Mean pennation angle (2.2 m⁻¹ per 10°; p < 0.001), inverse mean fascicle length (20 m⁻¹ per cm⁻¹; p = 0.003), and mean fascicle strain (−0.07 m⁻¹ per +10%; p = 0.004) correlated with mean fascicle curvature. Evidence has also been found for intermuscular, intramuscular, and sex-specific intramuscular differences of fascicle curving. Pennation angle and the inverse fascicle length show the highest predictive capacities for fascicle curving. Due to the strong correlations between pennation angle and fascicle curvature and the intramuscular pattern of curving we suggest for future studies to examine correlations between fascicle curvature and intramuscular fluid pressure.
Extracting workflow nets from textual descriptions can be used to simplify guidelines or to formalize textual descriptions of processes such as business processes and algorithms. Manually extracting such processes, however, requires domain expertise and effort. While automatic process model extraction is desirable, annotating texts with formalized process models is expensive, so only a few machine-learning-based extraction approaches exist. Rule-based approaches, in turn, require domain specificity to work well and can rarely distinguish relevant from irrelevant information in textual descriptions. In this paper, we present GUIDO, a hybrid approach to the process model extraction task that first classifies sentences regarding their relevance to the process model using a BERT-based sentence classifier and then extracts a process model from the sentences classified as relevant using dependency parsing. The presented approach achieves significantly better results than a pure rule-based approach: GUIDO achieves an average behavioral similarity score of 0.93. Still, compared with purely machine-learning-based approaches, the annotation costs stay low.
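The two-stage idea behind GUIDO (sentence-level relevance filtering followed by dependency-based extraction) can be illustrated with off-the-shelf components. The model name, the label string, and the extraction rule below are placeholders for illustration, not the published GUIDO implementation.

from transformers import pipeline
import spacy

# Stage 1: a BERT-based sentence relevance classifier. In practice this would be a model
# fine-tuned on relevance-annotated sentences; the label name depends on that model's config.
relevance_clf = pipeline("text-classification", model="bert-base-uncased")

# Stage 2: dependency parsing of the relevant sentences.
nlp = spacy.load("en_core_web_sm")

sentences = [
    "The clerk checks the application for completeness.",
    "This regulation was introduced in 2004.",
]

relevant = [s for s in sentences if relevance_clf(s)[0]["label"] == "RELEVANT"]

# Very simple activity extraction: verb lemma plus its direct objects.
for sent in relevant:
    doc = nlp(sent)
    for token in doc:
        if token.pos_ == "VERB":
            objects = [child.text for child in token.children if child.dep_ in ("dobj", "obj")]
            print(token.lemma_, objects)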
In recent years, the development of large pretrained language models such as BERT and GPT has significantly improved information extraction systems on various tasks, including relation classification. State-of-the-art systems are highly accurate on scientific benchmarks, but a lack of explainability currently complicates many real-world applications. Comprehensible systems are necessary to prevent biased, counterintuitive, or harmful decisions.
We introduce semantic extents, a concept to analyze decision patterns for the relation classification task. Semantic extents are the most influential parts of texts concerning classification decisions. Our definition allows similar procedures to determine semantic extents for humans and models, and we provide an annotation tool and a software framework to do so conveniently and reproducibly. Comparing both reveals that models tend to learn shortcut patterns from data. These patterns are hard to detect with current interpretability methods, such as input reductions. Our approach can help detect and eliminate spurious decision patterns during model development. Semantic extents can thus increase the reliability and security of natural language processing systems and are an essential step toward enabling applications in critical areas like healthcare and finance. Moreover, our work opens new research directions for developing methods to explain deep learning models.
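The input-reduction technique referenced above greedily removes tokens as long as the model's prediction does not change; what remains is the (often very short) input the model actually relies on. The sketch below uses a toy stand-in classifier to keep the example self-contained; it illustrates the general method, not the authors' tooling.

def predict(tokens):
    """Toy stand-in classifier: predicts 'founded_by' whenever both trigger words appear."""
    label = "founded_by" if {"founded", "by"} <= set(tokens) else "no_relation"
    score = 1.0 if label == "founded_by" else 0.5
    return label, score

def input_reduction(tokens):
    """Greedily drop tokens while the predicted label stays unchanged."""
    label, _ = predict(tokens)
    reduced = list(tokens)
    while len(reduced) > 1:
        candidates = [reduced[:i] + reduced[i + 1:] for i in range(len(reduced))]
        kept = [c for c in candidates if predict(c)[0] == label]
        if not kept:
            break
        reduced = max(kept, key=lambda c: predict(c)[1])
    return reduced

tokens = "The company was founded by its current CEO in 1998".split()
print(input_reduction(tokens))  # the few tokens the toy model actually relies on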
Preprint: Studies on the enzymatic reduction of levulinic acid using Chiralidon-R and Chiralidon-S
(2023)
The enzymatic reduction of levulinic acid by the chiral catalysts Chiralidon-R and Chiralidon-S, which are commercially available superabsorbed alcohol dehydrogenases, is described. Chiralidon®-R/S reduces levulinic acid to the (R,S)-4-hydroxyvaleric acid and the (R)- or (S)-gamma-valerolactone.
Background
Post-COVID-19 syndrome (PCS) is a lingering disease with ongoing symptoms such as fatigue and cognitive impairment, resulting in a high impact on patients' daily lives. Understanding the pathophysiology of PCS is a public health priority, as it still poses a diagnostic and treatment challenge for physicians.
Methods
In this prospective observational cohort study, we analyzed the retinal microcirculation using Retinal Vessel Analysis (RVA) in a cohort of patients with PCS and compared it to an age- and gender-matched healthy cohort (n = 41, matched out of n = 204).
Measurements and main results
PCS patients exhibit persistent endothelial dysfunction (ED), as indicated by significantly lower venular flicker-induced dilation (vFID; 3.42% ± 1.77% vs. 4.64% ± 2.59%; p = 0.02), a narrower central retinal artery equivalent (CRAE; 178.1 [167.5–190.2] vs. 189.1 [179.4–197.2], p = 0.01), and a lower arteriolar-venular ratio (AVR; 0.84 [0.8–0.9] vs. 0.88 [0.8–0.9], p = 0.007). When AVR and vFID were combined, the predicted scores discriminated well between groups (area under the curve: 0.75); a minimal sketch of such a score combination follows this abstract. Higher PCS severity scores correlated with lower AVR (R = −0.37, p = 0.017). The association of microvascular changes with PCS severity was amplified in PCS patients exhibiting higher levels of inflammatory parameters.
Conclusion
Our results demonstrate that prolonged endothelial dysfunction is a hallmark of PCS, and impairments of the microcirculation seem to explain ongoing symptoms in patients. As potential therapies for PCS emerge, RVA parameters may become relevant as clinical biomarkers for diagnosis and therapy management.
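The reported discrimination between PCS patients and controls from the combined AVR and vFID values corresponds to a standard two-predictor classification analysis. The sketch below simulates data from the summary statistics quoted in the abstract (it does not use the study's records) and combines both predictors with a logistic regression before computing the area under the ROC curve.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Simulated placeholder data based on the abstract's summary statistics (mean/SD for vFID,
# roughly matched spreads for AVR); label 1 = PCS, 0 = matched control.
rng = np.random.default_rng(0)
n = 41
X_pcs = np.column_stack([rng.normal(0.84, 0.04, n), rng.normal(3.42, 1.77, n)])  # AVR, vFID (%)
X_ctl = np.column_stack([rng.normal(0.88, 0.04, n), rng.normal(4.64, 2.59, n)])
X = np.vstack([X_pcs, X_ctl])
y = np.concatenate([np.ones(n), np.zeros(n)])

# Combine AVR and vFID into a single predicted score via logistic regression.
clf = LogisticRegression().fit(X, y)
scores = clf.predict_proba(X)[:, 1]
print(f"AUC: {roc_auc_score(y, scores):.2f}")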
This work proposes a hybrid algorithm combining an Artificial Neural Network (ANN) with a conventional local path planner to navigate UAVs efficiently in various unknown urban environments. The proposed method, a Hybrid Artificial Neural Network Avoidance System, is called HANNAS. The ANN analyses a video stream and classifies the current environment. This information about the current environment is used to set several control parameters of a conventional local path planner, the 3DVFH*. The local path planner then plans the path toward a specific goal point based on distance data from a depth camera. For the environment classification, we trained and tested a state-of-the-art image segmentation algorithm, PP-LiteSeg. The proposed HANNAS method reaches a failure probability of 17%, which is less than half the failure probability of the baseline and around half the failure probability of an improved, bio-inspired version of the 3DVFH*. The proposed HANNAS method does not show any disadvantages regarding flight time or flight distance.
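The hybrid scheme, an environment classifier retuning a conventional local planner in each control cycle, can be sketched as follows. The environment classes, parameter values, and planner interface are assumptions made for illustration; they are not taken from the published HANNAS code.

from dataclasses import dataclass

@dataclass
class PlannerParams:
    safety_margin_m: float
    max_speed_mps: float
    goal_weight: float

# Hypothetical mapping from detected environment class to local-planner tuning.
PARAMS_BY_ENVIRONMENT = {
    "open_area":       PlannerParams(1.0, 8.0, 1.0),
    "narrow_street":   PlannerParams(2.0, 4.0, 0.7),
    "dense_obstacles": PlannerParams(3.0, 2.0, 0.5),
}

class StubClassifier:
    """Stand-in for the segmentation-based environment classifier (PP-LiteSeg in the paper)."""
    def classify(self, frame):
        return "narrow_street"

class StubPlanner:
    """Stand-in for the conventional local planner (the 3DVFH* in the paper)."""
    def set_params(self, params):
        self.params = params
    def plan(self, depth_frame, goal):
        return [goal]  # trivial placeholder path toward the goal point

def control_step(camera_frame, depth_frame, goal, classifier, planner):
    """One control cycle: classify the scene, retune the planner, plan from depth data."""
    environment = classifier.classify(camera_frame)
    planner.set_params(PARAMS_BY_ENVIRONMENT[environment])
    return planner.plan(depth_frame, goal)

print(control_step(None, None, (10.0, 0.0, 5.0), StubClassifier(), StubPlanner()))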
Rocket engine test facilities and launch pads are typically equipped with a guide tube. Its purpose is to ensure the controlled and safe routing of the hot exhaust gases. In addition, the guide tube induces a suction that affects the nozzle flow, namely the flow separation during transient start-up and shut-down of the engine. A cold-flow subscale nozzle in combination with a set of guide tubes was studied experimentally to determine the main influencing parameters.
This paper introduces an inexpensive Wiegand-sensor-based rotary encoder that avoids rotating magnets and is suitable for electrical-drive applications. So far, Wiegand-sensor-based encoders have usually included a magnetic pole wheel with rotating permanent magnets. These encoders combine the disadvantages of an increased magnet demand and a limited maximum speed due to the centripetal force acting on the rotating magnets. The proposed approach drastically reduces the total demand for permanent magnets. Moreover, the rotating part can be manufactured from a single piece of steel, which makes it very robust and cheap. This work presents the theoretical operating principle of the proposed approach and validates its benefits on a hardware prototype. The presented proof-of-concept prototype achieves a mechanical resolution of 4.5° using only 4 permanent magnets, 2 Wiegand sensors, and a rotating steel gear wheel with 20 teeth.
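The quoted 4.5° resolution is consistent with counting four Wiegand events per gear tooth; whether these come from two pulses per sensor per tooth or another combination is not stated in the abstract, so the event count below is an assumption used only to check the arithmetic.

# Back-of-the-envelope check of the quoted 4.5 degree mechanical resolution.
teeth = 20                       # rotating steel gear wheel
sensors = 2                      # Wiegand sensors
events_per_tooth_per_sensor = 2  # assumed, e.g. one pulse per magnetization reversal

events_per_revolution = teeth * sensors * events_per_tooth_per_sensor  # 80
resolution_deg = 360 / events_per_revolution
print(f"{resolution_deg} degrees per event")  # 4.5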