Easy-read and large language models: on the ethical dimensions of LLM-based text simplification
(2024)
The production of easy-read and plain language is a challenging task, requiring well-educated experts to write context-dependent simplifications of texts. The domain of easy-read and plain language is therefore currently restricted to the bare minimum of necessary information. Thus, even though there is a tendency to broaden this domain, the inaccessibility of a significant amount of textual information excludes the target audience from participation and entertainment and restricts their ability to live autonomously. Large language models can solve a vast variety of natural language tasks, including the simplification of standard-language texts to easy-read or plain language. Moreover, with the rise of generative models like GPT, easy-read and plain language may become applicable to all kinds of natural language texts, making formerly inaccessible information accessible to marginalized groups such as non-native speakers and people with mental disabilities. In this paper, we argue for the feasibility of text simplification and generation in that context, outline the ethical dimensions, and discuss the implications for researchers in the fields of ethics and computer science.
The quest for scientifically advanced and sustainable solutions is driven by growing environmental and economic issues associated with coal mining, processing, and utilization. Consequently, within the coal industry, there is a growing recognition of the potential of microbial applications in fostering innovative technologies. Microbial-based coal solubilization, coal beneficiation, and coal dust suppression are green alternatives to traditional thermochemical and leaching technologies and better meet the need for ecologically sound and economically viable choices. Surfactant-mediated approaches have emerged as powerful tools for modeling, simulation, and optimization of coal-microbial systems and continue to gain prominence in clean coal fuel production, particularly in microbiological co-processing, conversion, and beneficiation. Surfactants (surface-active agents) are amphiphilic compounds that can reduce surface tension and enhance the solubility of hydrophobic molecules. A wide range of surfactant properties can be achieved by either directly influencing microbial growth factors, stimulants, and substrates or indirectly serving as frothers, collectors, and modifiers in the processing and utilization of coal. This review highlights the significant biotechnological potential of surfactants by providing a thorough overview of their involvement in coal biodegradation, bioprocessing, and biobeneficiation, acknowledging their importance as crucial steps in coal consumption.
Several unconnected laboratory experiments are usually offered for students in instrumental analysis lab. To give the students a more rational overview of the most common instrumental techniques, a new laboratory experiment was developed. Marketed pain relief drugs, familiar consumer products with one to three active components, namely, acetaminophen (paracetamol), acetylsalicylic acid (ASA), and caffeine, were selected. Common analytical methods were compared regarding the performance of qualitative and quantitative analysis of unknown tablets: UV–visible (UV–vis), infrared (IR), and nuclear magnetic resonance (NMR) spectroscopies, as well as high-performance liquid chromatography (HPLC). The students successfully uncovered the composition of formulations, which were divided into three difficulty categories. Students were shown that in addition to simple mixtures handled in theoretical classes, the composition of complex drug products can also be uncovered. By comparing the performance of different techniques, students deepen their understanding and compare the efficiency of analytical methods in the context of complex mixtures. The laboratory experiment can be adjusted for graduate level by including extra tasks such as method optimization, validation, and 2D spectroscopic techniques.
Sexism in online media comments is a pervasive challenge that often manifests subtly, complicating moderation efforts, as interpretations of what constitutes sexism can vary among individuals. We study monolingual and multilingual open-source text embeddings to reliably detect sexism and misogyny in German-language online comments from an Austrian newspaper. We observed that classifiers trained on these text embeddings closely mimic the individual judgements of human annotators. Our method showed robust performance in the GermEval 2024 GerMS-Detect Subtask 1 challenge, achieving an average macro F1 score of 0.597 (4th place, as reported on Codabench). It also accurately predicted the distribution of human annotations in GerMS-Detect Subtask 2, with an average Jensen-Shannon distance of 0.301 (2nd place). The computational efficiency of our approach suggests potential for scalable applications across various languages and linguistic contexts.
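The Subtask 2 metric compares a predicted label distribution with the distribution of human annotations. A minimal sketch of the Jensen-Shannon distance used there, with an illustrative three-way label split (the actual GerMS-Detect label set and values differ):

```python
import math

def jensen_shannon_distance(p, q):
    """Jensen-Shannon distance (square root of the JS divergence, base-2 logs)
    between two discrete probability distributions; ranges from 0 to 1."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    def kl(a, b):
        # Kullback-Leibler divergence, skipping zero-probability terms
        return sum(ai * math.log2(ai / bi) for ai, bi in zip(a, b) if ai > 0)
    return math.sqrt((kl(p, m) + kl(q, m)) / 2)

# Hypothetical annotator distribution vs. model-predicted distribution
human = [0.6, 0.3, 0.1]
model = [0.5, 0.35, 0.15]
print(round(jensen_shannon_distance(human, model), 3))
```

A smaller distance means the predicted distribution tracks the human annotations more closely, which is what the 0.301 average in the abstract measures.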
To successfully develop and introduce concrete artificial intelligence (AI) solutions in operational practice, a comprehensive process model is being tested in the WIRKsam joint project. It is based on a methodical approach that integrates human, technical and organisational aspects and involves employees in the process. The chapter focuses on the procedure for identifying requirements for a work system that implements AI in problem-driven projects and for selecting appropriate AI methods. This means that the use case has already been narrowed down at the beginning of the project and must then be fully defined in the subsequent steps. Initially, the existing preliminary work is presented. Based on this, an overview of all procedural steps and methods is given. All methods are presented in detail and good-practice approaches are shown. Finally, the developed procedure is reflected upon based on its application in nine companies.
Perennial ryegrass (Lolium perenne) is an underutilized lignocellulosic biomass with several benefits, such as high availability, renewability, and biomass yield. The grass press-juice obtained from mechanical pretreatment can be used for the bio-based production of chemicals. Lactic acid is a platform chemical that has attracted attention due to its broad range of applications. For this reason, more sustainable production of lactic acid is expected to increase. In this work, lactic acid was produced using a complex medium at bench and bioreactor scale, and the results were compared to those obtained using an optimized press-juice medium. Bench-scale fermentations were carried out in a pH-controlled system, and lactic acid production reached approximately 21.84 ± 0.95 g/L in complex medium and 26.61 ± 1.2 g/L in press-juice medium. In the bioreactor, the production yield was 0.91 ± 0.07 g/g, corresponding to a 1.4-fold increase with respect to the complex medium with fructose. As a comparison to the traditional ensiling process, the ensiling of whole-grass fractions of different varieties harvested in summer and autumn was performed. Ensiling showed variations in lactic acid yields, with a yield of up to 15.2% dry mass for the late-harvested samples, surpassing typical silage yields of 6–10% dry mass.
The book covers various numerical field simulation methods, nonlinear circuit technology and its MF-S- and X-parameters, as well as state-of-the-art power amplifier techniques. It also describes newly presented oscillators and the emerging field of GHz plasma technology. Furthermore, it addresses aspects such as waveguides, mixers, phase-locked loops, antennas, and propagation effects, in combination with the bachelor's book 'High-Frequency Engineering,' encompassing all aspects related to the current state of GHz technology.
In the research domain of energy informatics, the importance of open data is rising rapidly. This can be seen as various new public datasets are created and published. Unfortunately, in many cases, the data is not available under a permissive license corresponding to the FAIR principles, often lacking accessibility or reusability. Furthermore, the source format often differs from the desired data format or does not meet the demands of efficient querying. To solve this on a small scale, a toolbox for ETL processes is provided to create a local energy data server with open-access data from different valuable sources in a structured format. So while the sources themselves do not fully comply with the FAIR principles, the provided unique toolbox allows for efficient processing of the data as if the FAIR principles were met. The energy data server currently includes information on power systems, weather data, network frequency data, European energy and gas data for demand and generation, and more. However, a solution to the core problem - missing alignment with the FAIR principles - is still needed for the National Research Data Infrastructure.
Due to the transition to renewable energies, electricity markets need to be made fit for purpose. To enable the comparison of different energy market designs, modeling tools covering market actors and their heterogeneous behavior are needed. Agent-based models are ideally suited for this task. Such models can be used to simulate and analyze changes to market design or market mechanisms and their impact on market dynamics. In this paper, we conduct an evaluation and comparison of two actively developed open-source energy market simulation models. The two models, namely AMIRIS and ASSUME, are both designed to simulate future energy markets using an agent-based approach. The assessment encompasses modelling features and techniques, model performance, as well as a comparison of model results, which can serve as a blueprint for future comparative studies of simulation models. The main comparison dataset includes data of Germany in 2019 and simulates the day-ahead market with participating actors as individual agents. Both models come comparably close to the benchmark dataset, with a MAE between 5.6 and 6.4 €/MWh, while also modeling the actual dispatch realistically.
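The MAE figure reported above is a straightforward average of absolute price deviations. A minimal sketch with invented hourly prices (the actual 2019 benchmark data are not reproduced here):

```python
def mean_absolute_error(benchmark, simulated):
    """Mean absolute error in the units of the series (here: EUR/MWh)."""
    assert len(benchmark) == len(simulated)
    return sum(abs(b - s) for b, s in zip(benchmark, simulated)) / len(benchmark)

# Illustrative hourly day-ahead prices in EUR/MWh, not data from the study
observed = [38.2, 41.5, 35.0, 52.3]
modelled = [40.0, 39.8, 36.9, 49.5]
print(round(mean_absolute_error(observed, modelled), 2))
```

Applied to a full simulated year, this is the metric by which the two models land between 5.6 and 6.4 €/MWh from the benchmark.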
The FAYMONVILLE case study describes how the family-owned company Faymonville from eastern Belgium has succeeded in becoming one of the leading manufacturers in its sector. The targeted identification of new markets, the focus on relevant customer needs, and a consistent product policy with a coordinated manufacturing concept lay the foundations for success. In this case study, students can learn how a company can successfully resolve the fundamental contradiction between economic and customized production.
We conducted a scoping review for active learning in the domain of natural language processing (NLP), which we summarize in accordance with the PRISMA-ScR guidelines as follows:
Objective: Identify active learning strategies that were proposed for entity recognition and their evaluation environments (datasets, metrics, hardware, execution time).
Design: We used Scopus and ACM as our search engines. We compared the results with two literature surveys to assess the search quality. We included peer-reviewed English publications introducing or comparing active learning strategies for entity recognition.
Results: We analyzed 62 relevant papers and identified 106 active learning strategies. We grouped them into three categories: exploitation-based (60x), exploration-based (14x), and hybrid strategies (32x). We found that all studies used the F1-score as an evaluation metric. Information about hardware (6x) and execution time (13x) was only occasionally included. The 62 papers used 57 different datasets to evaluate their respective strategies. Most datasets contained newspaper articles or biomedical/medical data. Our analysis revealed that 26 out of 57 datasets are publicly accessible.
Conclusion: Numerous active learning strategies have been identified, along with significant open questions that still need to be addressed. Researchers and practitioners face difficulties when making data-driven decisions about which active learning strategy to adopt. Conducting comprehensive empirical comparisons using the evaluation environment proposed in this study could help establish best practices in the domain.
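Most of the exploitation-based strategies counted above rank unlabeled items by model uncertainty. A minimal least-confidence sketch with a toy scoring function (the predictor and pool are invented for illustration, not taken from any surveyed paper):

```python
def least_confidence_query(unlabeled, predict_proba, k=1):
    """Exploitation-style active learning: select the k unlabeled items whose
    top predicted class probability is lowest, i.e. the most uncertain ones."""
    scored = [(max(predict_proba(x)), x) for x in unlabeled]
    scored.sort(key=lambda t: t[0])  # least confident first
    return [x for _, x in scored[:k]]

# Toy stand-in for an entity-recognition model: confidence grows with token length
def toy_proba(token):
    p = min(0.5 + 0.1 * len(token), 0.95)
    return [p, 1 - p]

pool = ["a", "ab", "abc", "abcd"]
print(least_confidence_query(pool, toy_proba, k=2))
```

The selected items would then be sent to an annotator, added to the training set, and the model retrained, which is the loop the surveyed strategies vary.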
In recent years, more and more digital startups have been founded, and many of them work remotely using enterprise collaboration systems (ECS). The study investigates the functional affordances of ECS, particularly Slack, and examines their potential as a virtual office environment for cultural development in digital startups. Through a case study and based on affordance-theoretical considerations, the paper explores how ECS facilitate remote collaboration, communication, and socialization within digital startups. The findings comprise material properties of ECS (synchronous and asynchronous communication), functional affordances (virtual office and culture development affordances) as well as their realization (through communication practices, openness, and inter-company accessibility), and are conceptualized as a model for ECS affordances in digital startups.
Methane is a valuable energy source helping to mitigate the growing energy demand worldwide. However, as a potent greenhouse gas, it has also gained additional attention due to its environmental impacts. The biological production of methane is performed primarily hydrogenotrophically from H2 and CO2 by methanogenic archaea. Hydrogenotrophic methanogenesis is also of great interest with respect to carbon recycling and H2 storage. The most significant carbon source, extremely rich in complex organic matter for microbial degradation and biogenic methane production, is coal. Although interest in enhanced microbial coalbed methane production is continuously increasing globally, limited knowledge exists regarding the exact origins of the coalbed methane and the associated microbial communities, including hydrogenotrophic methanogens. Here, we give an overview of hydrogenotrophic methanogens in coal beds and related environments in terms of their energy production mechanisms, unique metabolic pathways, and associated ecological functions.
Ga-doped Li7La3Zr2O12 garnet solid electrolytes exhibit the highest Li-ion conductivities among the oxide-type garnet-structured solid electrolytes, but instabilities toward Li metal hamper their practical application. The instabilities have previously been assigned by several groups to direct chemical reactions between LiGaO2 coexisting phases and Li metal. Yet, the understanding of the role of LiGaO2 in the electrochemical cell and of its electrochemical properties is still lacking. Here, we investigate the electrochemical properties of LiGaO2 through electrochemical tests in galvanostatic cells versus Li metal and complementary ex situ studies via confocal Raman microscopy, quantitative phase analysis based on powder X-ray diffraction, energy-dispersive X-ray spectroscopy, X-ray photoelectron spectroscopy, and electron energy loss spectroscopy. The results demonstrate considerable and surprising electrochemical activity, with high reversibility. A three-stage reaction mechanism is derived, including reversible electrochemical reactions that lead to the formation of highly electronically conducting products. The results have considerable implications for the use of Ga-doped Li7La3Zr2O12 electrolytes in all-solid-state Li-metal battery applications and raise the need for advanced materials engineering to realize Ga-doped Li7La3Zr2O12 for practical use.
The thermal conductivity of components manufactured using Laser Powder Bed Fusion (LPBF), also called Selective Laser Melting (SLM), plays an important role in their processing. Not only does a reduced thermal conductivity cause residual stresses during the process, but it also makes subsequent processes such as the welding of LPBF components more difficult. This article uses 316L stainless steel samples to investigate whether and to what extent the thermal conductivity of specimens can be influenced by different LPBF parameters. To this end, samples are built up using different parameters, orientations, and powder conditions and measured by a heat flow meter using stationary analysis. The heat flow meter set-up used in this study achieves good reproducibility and high measurement accuracy, so that comparative measurements between the various LPBF influencing factors are possible. In summary, the series of measurements shows that the residual porosity of the components has the greatest influence on conductivity. The degradation of the powder due to increased recycling also appears to be detectable. The build-up direction shows no detectable effect in the measurement series.
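A stationary heat flow meter measurement reduces to Fourier's law for a plane sample. A minimal sketch of that conversion, with invented numbers rather than values from the study:

```python
def thermal_conductivity(q, thickness, dT):
    """Fourier's law for a stationary heat flow meter measurement:
    lambda = q * d / dT, with heat flux q in W/m^2, sample thickness d in m,
    and temperature drop dT across the sample in K. Returns W/(m*K)."""
    return q * thickness / dT

# Illustrative values only; porosity would show up as a lower apparent lambda
lam = thermal_conductivity(q=5000.0, thickness=0.01, dT=3.5)
print(round(lam, 2), "W/(m*K)")
```

Comparing such values across build parameters, orientations, and powder conditions is what the measurement series above does.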
In this work, the effect of low air relative humidity on the operation of a polymer electrolyte membrane fuel cell is investigated. An innovative method based on in situ electrochemical impedance spectroscopy is used to quantify the effect of the inlet air relative humidity on the cathode side on the internal ionic resistances and output voltage of the fuel cell. In addition, algorithms are developed to analyse the electrochemical characteristics of the fuel cell. For the specific fuel cell stack used in this study, the membrane resistance drops by over 39 % and the cathode-side charge transfer resistance decreases by 23 % after increasing the humidity from 30 % to 85 %, while the results of static operation also show an increase of ∼2.2 % in the voltage output over the same humidity range. In dynamic operation, visible drying effects occur at < 50 % relative humidity, whereby increasing the air-side stoichiometry increases the drying effects. Furthermore, other parameters, such as hydrogen humidification, internal stack structure, and operating parameters like stoichiometry, pressure, and temperature, affect the overall water balance. Therefore, the optimal humidification range must be determined by considering all these parameters to maximise fuel cell performance and durability. The results of this study are used to develop a health management system to ensure sufficient humidification by continuously monitoring the fuel cell polarisation data and electrochemical impedance spectroscopy indicators.
This paper presents a thermal simulation environment for moving objects on the lunar surface. The goal of the thermal simulation environment is to enable the reliable prediction of the temperature development of a given object on the lunar surface by providing the respective heat fluxes for a mission on a given travel path. The user can import any object geometry and freely define the path that the object should travel. Using the path of the object, the relevant lunar surface geometry is imported from a digital elevation model. The relevant parts of the lunar surface are determined based on distance to the defined path. A thermal model of these surface sections is generated, consisting of a porous layer on top and a denser layer below. The object is moved across the lunar surface, and its inclination is adapted depending on the slope of the terrain below it. Finally, a transient thermal analysis of the object and its environment is performed at several positions on its path and the results are visualized. The paper introduces details on the thermal modeling of the lunar surface, as well as its verification. Furthermore, the structure of the created software is presented. The robustness of the environment is verified with the help of sensitivity studies and possible improvements are presented.
Critical quantitative evaluation of integrated health management methods for fuel cell applications
(2024)
Online fault diagnostics is a crucial consideration for fuel cell systems, particularly in mobile applications, to limit downtime and degradation and to increase lifetime. Guided by a critical literature review, this paper presents an overview of health management systems organised in a classification scheme, introducing commonly utilised methods to diagnose fuel cells (FCs) in various applications. In this novel scheme, various health management system methods are summarised and structured to provide an overview of existing systems, including their associated tools. These systems are classified into four categories, mainly focused on model-based and non-model-based systems. The individual methods are critically discussed, both when used individually and in combination, with the aim of further understanding their functionality and suitability in different applications. Additionally, a tool is introduced to evaluate methods from each category based on the scheme presented. This tool applies matrix evaluation with several key parameters to identify the most appropriate methods for a given application. Based on this evaluation, the most suitable methods for each specific application are combined to build an integrated health management system.
Mathematical morphology is a part of image processing that has proven to be fruitful for numerous applications. Two main operations in mathematical morphology are dilation and erosion. These are based on the construction of a supremum or infimum with respect to an order over the tonal range in a certain section of the image. The tonal ordering can easily be realised in grey-scale morphology, and some morphological methods have been proposed for colour morphology. However, all of these have certain limitations.
In this paper, we present a novel approach to colour morphology extending upon previous work in the field based on the Loewner order. We propose to consider an approximation of the supremum by means of a log-sum exponentiation introduced by Maslov. We apply this to the embedding of an RGB image in a field of symmetric 2×2 matrices. In this way we obtain nearly isotropic matrices representing colours and the structural advantage of transitivity. In numerical experiments we highlight some remarkable properties of the proposed approach.
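The log-sum exponentiation can be illustrated in the scalar case (the paper applies it to symmetric 2x2 matrices, which this sketch does not cover): a smooth approximation of the supremum that approaches the true maximum as the sharpness parameter grows.

```python
import math

def logsumexp_max(values, omega=10.0):
    """Maslov-style smooth maximum: (1/omega) * log(sum(exp(omega * v))).
    Always >= max(values); converges to the true supremum as omega -> infinity."""
    m = max(values)  # shift by the maximum for numerical stability
    return m + math.log(sum(math.exp(omega * (v - m)) for v in values)) / omega

vals = [0.2, 0.5, 0.9]
for omega in (1.0, 10.0, 100.0):
    print(omega, round(logsumexp_max(vals, omega), 4))
```

Unlike a hard maximum taken per channel, this smooth supremum stays differentiable, which is part of what makes the matrix-valued construction in the paper workable.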
Direct sampling method via Landweber iteration for an absorbing scatterer with a conductive boundary
(2024)
In this paper, we consider the inverse shape problem of recovering isotropic scatterers with a conductive boundary condition. Here, we assume that the measured far-field data is known at a fixed wave number. Motivated by recent work, we study a new direct sampling indicator based on the Landweber iteration and the factorization method. Therefore, we prove the connection between these reconstruction methods. The method studied here falls under the category of qualitative reconstruction methods where an imaging function is used to recover the absorbing scatterer. We prove stability of our new imaging function as well as derive a discrepancy principle for recovering the regularization parameter. The theoretical results are verified with numerical examples to show how the reconstruction performs by the new Landweber direct sampling method.
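The Landweber iteration underlying the indicator can be sketched for a plain linear system, far simpler than the far-field operator setting of the paper but showing the same fixed-point update x_{k+1} = x_k + tau * A^T (b - A x_k):

```python
def landweber(A, b, tau, iters):
    """Landweber iteration for A x = b, with A given as a list of rows.
    Converges for 0 < tau < 2 / ||A||^2 (spectral norm squared)."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # residual r = b - A x
        r = [bi - sum(aij * xj for aij, xj in zip(row, x)) for row, bi in zip(A, b)]
        # gradient step x += tau * A^T r
        for j in range(n):
            x[j] += tau * sum(A[i][j] * r[i] for i in range(len(A)))
    return x

A = [[2.0, 0.0], [0.0, 1.0]]
b = [2.0, 3.0]
print([round(v, 3) for v in landweber(A, b, tau=0.2, iters=200)])
```

In the paper's setting the iteration is applied to the far-field operator, and stopping it via a discrepancy principle plays the role of the regularization-parameter choice analysed there.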
We consider the numerical approximation of second-order semi-linear parabolic stochastic partial differential equations interpreted in the mild sense which we solve on general two-dimensional domains with a C² boundary with homogeneous Dirichlet boundary conditions. The equations are driven by Gaussian additive noise, and several Lipschitz-like conditions are imposed on the nonlinear function. We discretize in space with a spectral Galerkin method and in time using an explicit Euler-like scheme. For irregular shapes, the necessary Dirichlet eigenvalues and eigenfunctions are obtained from a boundary integral equation method. This yields a nonlinear eigenvalue problem, which is discretized using a boundary element collocation method and is solved with the Beyn contour integral algorithm. We present an error analysis as well as numerical results on an exemplary asymmetric shape, and point out limitations of the approach.
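The time discretization can be illustrated on a single Galerkin mode, where the semi-linear SPDE collapses to a scalar SDE du = (-lambda u + f(u)) dt + sigma dW solved with an explicit Euler-Maruyama step (the eigenvalue lambda, nonlinearity f, and noise level are illustrative, not from the paper):

```python
import math
import random

def euler_maruyama(lam, f, sigma, u0, T, n, seed=0):
    """Explicit Euler scheme for du = (-lam*u + f(u)) dt + sigma dW,
    i.e. one spectral Galerkin mode of a semi-linear parabolic SPDE
    driven by additive Gaussian noise."""
    rng = random.Random(seed)
    dt = T / n
    u = u0
    for _ in range(n):
        dw = rng.gauss(0.0, math.sqrt(dt))  # Brownian increment ~ N(0, dt)
        u = u + (-lam * u + f(u)) * dt + sigma * dw
    return u

u_T = euler_maruyama(lam=1.0, f=math.sin, sigma=0.1, u0=1.0, T=1.0, n=1000)
print(round(u_T, 4))
```

In the full scheme each Dirichlet eigenpair (obtained in the paper from the boundary integral formulation) contributes one such mode, coupled through the nonlinearity.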
After a brief introduction to conventional laboratory structures, this work focuses on an innovative and universal approach for the setup of a training laboratory for electric machines and drive systems. The novel approach employs a central 48 V DC bus, which forms the backbone of the structure. Several sets of DC, asynchronous and synchronous machines are connected to this bus. The advantages of the novel system structure are manifold, both from a didactic and a technical point of view: Student groups can work at their own performance level in a highly parallelized and at the same time individualized way. Additional training setups (similar or different) can easily be added. Only the total power dissipation has to be provided, i.e. the DC bus balances the power flow between the student groups. Comparative results of course evaluations of several cohorts of students are shown.
The Inverted Rotary Pendulum: Facilitating Practical Teaching in Advanced Control Engineering
(2024)
This paper outlines a practical approach to teaching control engineering principles, with an inverted rotary pendulum serving as an illustrative example. It shows how the pendulum is embedded in an advanced course on control engineering. This approach is incorporated into a flipped-classroom concept as well as classical teaching concepts, offering students practical experience in control engineering. In addition, the design of the pendulum is shown, using a Raspberry Pi as the target platform for Matlab Simulink. The pendulum can be used in the classroom to evaluate the controller designs mentioned above. It is analysed whether the use of the pendulum generates a deeper understanding of the learning contents.
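The course deploys controllers via Matlab Simulink; as a language-neutral sketch of the kind of feedback law students evaluate on such a rig, here is a discrete PID step (gains, sample time, and the angle value are invented for illustration, not the course's actual design):

```python
class PID:
    """Discrete PID controller: u = kp*e + ki*integral(e) + kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measurement):
        err = setpoint - measurement
        self.integral += err * self.dt               # rectangular integration
        deriv = (err - self.prev_err) / self.dt      # backward difference
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
# Pendulum angle of 0.2 rad away from upright -> corrective actuation signal
print(round(pid.step(setpoint=0.0, measurement=0.2), 3))
```

On a real inverted pendulum a state-feedback or cascaded design is more common than a single PID loop; the sketch only illustrates the sampled feedback structure.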
This paper serves as an introduction to the ECTS monitoring system and its potential applications in higher education. It also emphasizes the potential for ECTS monitoring to become a proactive system, supporting students by predicting academic success and identifying groups of potential dropouts for tailored support services. The use of nearest neighbor analysis is suggested for improving data analysis and prediction accuracy.
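A minimal sketch of such a nearest-neighbor classification, with invented features and labels (the paper does not specify its feature set; ECTS earned and average grade are plausible stand-ins):

```python
import math

def knn_predict(train, query, k=3):
    """Majority vote among the k nearest neighbours (Euclidean distance).
    train: list of (feature_vector, label) pairs."""
    nearest = sorted(train, key=lambda fx: math.dist(fx[0], query))
    votes = [label for _, label in nearest[:k]]
    return max(set(votes), key=votes.count)

# Hypothetical students: (ECTS earned after semester 2, average grade) -> status
students = [
    ((60, 1.8), "on_track"), ((55, 2.3), "on_track"), ((58, 2.0), "on_track"),
    ((15, 3.8), "at_risk"), ((20, 3.5), "at_risk"), ((10, 4.0), "at_risk"),
]
print(knn_predict(students, (18, 3.6)))
```

In practice the features would need scaling to comparable ranges before distances are meaningful; the sketch omits that step.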
In this work, we present a compact, bifunctional chip-based sensor setup that measures the temperature and electrical conductivity of water samples, including specimens from rivers and channels, aquaculture, and the Atlantic Ocean. For conductivity measurements, we utilize the impedance amplitude recorded via interdigitated electrode structures at a single triggering frequency. The results are well in line with data obtained using a calibrated reference instrument. The new setup covers conductivity values spanning almost two orders of magnitude (river versus ocean water) without the need for equivalent circuit modelling. Temperature measurements were performed in four-point geometry with an on-chip platinum RTD (resistance temperature detector) in the temperature range between 2 °C and 40 °C, showing no hysteresis effects between warming and cooling cycles. Although the meander was not shielded against the liquid, the temperature calibration provided equivalent results in low-conductivity Milli-Q water and highly conductive ocean water. The sensor is therefore suitable for inline and online monitoring purposes in recirculating aquaculture systems.
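A common way to turn a single-frequency impedance amplitude into a conductivity reading is via a cell constant obtained from calibration against a reference instrument; a hedged sketch of that conversion (the cell constant and impedance values are illustrative, not the chip's actual parameters):

```python
def conductivity_mS_per_cm(cell_constant_per_cm, impedance_ohm):
    """Single-frequency estimate kappa = K / |Z|: cell constant K (1/cm) from
    calibration, impedance magnitude |Z| (ohm). Returns conductivity in mS/cm."""
    return 1000.0 * cell_constant_per_cm / impedance_ohm

# Illustrative: ocean-like (~50 mS/cm) vs. river-like (~0.5 mS/cm) readings,
# spanning the two orders of magnitude mentioned in the abstract
print(conductivity_mS_per_cm(0.5, 10.0))
print(conductivity_mS_per_cm(0.5, 1000.0))
```

The appeal of the single-frequency approach is exactly that this one calibrated ratio replaces full equivalent-circuit fitting of the impedance spectrum.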
As one class of molecularly imprinted polymers (MIPs), surface imprinted polymer (SIP)-based biosensors show great potential in direct whole-bacteria detection. Micro-contact imprinting, which involves stamping template bacteria immobilized on a substrate into a pre-polymerized polymer matrix, is the most straightforward and prominent method to obtain SIP-based biosensors. However, the major drawbacks of the method arise from the requirement for fresh template bacteria and the often non-reproducible bacteria distribution on the stamp substrate. Herein, we developed a positive master stamp containing photolithographic mimics of the template bacteria (E. coli), enabling reproducible fabrication of biomimetic SIP-based biosensors without the need for the "real" bacteria cells. Using atomic force and scanning electron microscopy imaging techniques, the E. coli-capturing ability of the SIP samples was tested and compared with non-imprinted polymer (NIP)-based samples and control SIP samples in which the cavity geometry does not match E. coli cells. It was revealed that the presence of the biomimetic E. coli imprints with a specifically designed geometry increases the sensor's E. coli-capturing ability by an "imprinting factor" of about 3. These findings show the importance of geometry-guided physical recognition in bacterial detection using SIP-based biosensors. In addition, this imprinting strategy was applied to interdigitated electrodes and QCM (quartz crystal microbalance) chips. The E. coli detection performance of the sensors was demonstrated with electrochemical impedance spectroscopy (EIS) and QCM measurements with dissipation monitoring (QCM-D).
New insights into the influence of pre-culture on robust solvent production of C. acetobutylicum
(2024)
Clostridia are known for their solvent production, especially the production of butanol. Given the projected depletion of fossil fuels, this is of great interest. The cultivation of clostridia is known to be challenging, and it is difficult to achieve reproducible results and robust processes. However, existing publications usually concentrate on the cultivation conditions of the main culture. In this paper, the influence of cryo-conservation and pre-culture on growth and solvent production in the resulting main cultivation is examined. A protocol was developed that leads to reproducible cultivations of Clostridium acetobutylicum. Detailed investigation of the cell conservation in cryo-cultures ensured reliable cell growth in the pre-culture. Moreover, a reason for the acid crash in the main culture was found, based on the cultivation conditions of the pre-culture. The critical parameter to avoid the acid crash and accomplish the shift to solventogenesis is the metabolic phase in which the cells of the pre-culture are at the time of inoculation of the main culture; this depends on the cultivation time of the pre-culture. Using cells from the exponential growth phase to inoculate the main culture leads to an acid crash. To achieve the solventogenic phase with butanol production, the inoculum should consist of older cells in the stationary growth phase. Considering these parameters, which affect the entire cultivation process, reproducible results and reliable solvent production are ensured.
Biomass from various types of organic waste was tested for possible use in hydrogen production. The composition consisted of lignified samples, green waste, and kitchen scraps such as fruit and vegetable peels and leftover food. For this purpose, the enzymatic pretreatment of organic waste with a combination of five different hydrolytic enzymes (cellulase, amylase, glucoamylase, pectinase and xylanase) was investigated to determine its ability to produce hydrogen (H2) from the resulting hydrolyzate. In the process, the anaerobic, rod-shaped bacterium T. neapolitana was used for H2 production. First, the enzymes were investigated using different substrates in preliminary experiments. Subsequently, hydrolyses were carried out using different types of organic waste. In the hydrolysis carried out here for 48 h, an increase in glucose concentration of 481% was measured for waste loads containing starch, corresponding to a glucose concentration at the end of hydrolysis of 7.5 g·L−1. In the subsequent fermentation in serum bottles, a H2 yield of 1.26 mmol was obtained in the headspace when Terrific Broth medium with glucose and yeast extract (TBGY medium) was used. When hydrolyzed organic waste was used, a H2 yield of as much as 1.37 mmol could be achieved in the headspace. In addition, a dedicated reactor system for the anaerobic fermentation of T. neapolitana to produce H2 was developed. The bioreactor developed here can ferment anaerobically with a very low loss of produced gas. Here, after 24 h, a hydrogen concentration of 83% could be measured in the headspace.
Direct air capture (DAC) combined with subsequent storage (DACCS) is discussed as one promising carbon dioxide removal option. The aim of this paper is to analyse and comparatively classify the resource consumption (land use, renewable energy and water) and costs of possible DAC implementation pathways for Germany. The paths are based on a selected, existing climate neutrality scenario that requires the removal of 20 Mt of carbon dioxide (CO2) per year by DACCS from 2045. The analysis focuses on the so-called “low-temperature” DAC process, which might be more advantageous for Germany than the “high-temperature” one. In four case studies, we examine potential sites in northern, central and southern Germany, using the most suitable renewable energies for electricity and heat generation in each case. We show that the deployment of DAC results in large-scale land use and high energy needs. The land use in the range of 167–353 km2 results mainly from the area required for renewable energy generation. The total electrical energy demand of 14.4 TWh per year, of which 46% is needed to operate heat pumps supplying the heat demand of the DAC process, corresponds to around 1.4% of Germany's envisaged electricity demand in 2045. 20 Mt of water are provided yearly, corresponding to 40% of the water demand of the city of Cologne (1.1 million inhabitants). The capture of CO2 (DAC) incurs levelised costs of 125–138 EUR per tonne of CO2, with energy provision via photovoltaics in southern Germany yielding the lowest value of the four case studies. This does not include the costs associated with balancing its volatility. Taking into account transporting the CO2 via pipeline to the port of Wilhelmshaven, followed by transporting and sequestering the CO2 in geological storage sites in the Norwegian North Sea (DACCS), the levelised costs increase to 161–176 EUR/tCO2. Due to the longer transport distances from southern and central Germany, a northern German site using wind turbines would then be the most favourable.
Unmanned Aerial Vehicles (UAVs) are constantly gaining in versatility. However, more reliable path planning algorithms are required before fully autonomous UAV operation is possible. This work investigates the 3DVFH* algorithm and analyses its dependency on its cost function weights in 2400 environments. The analysis shows that the 3DVFH* can find a suitable path in every environment. However, each particular type of environment requires a specific choice of cost function weights. For a minimal failure probability, interdependencies between the weights of the cost function have to be considered. This dependency reduces the number of control parameters and simplifies the usage of the 3DVFH*. Weights for costs associated with vertical evasion (pitch cost) and vicinity to obstacles (obstacle cost) have the highest influence on the failure probability of the local path planner. Environments with mainly very tall buildings (like large American city centres) require a preference for horizontal avoidance manoeuvres (achieved with high pitch cost weights). In contrast, environments with medium-to-low buildings (like European city centres) benefit from vertical avoidance manoeuvres (achieved with low pitch cost weights). The cost of the vicinity to obstacles also plays an essential role and must be chosen adequately for the environment. An ideal choice of these two weights is sufficient to reduce the failure probability below 10%.
Lifting propellers are of increasing interest for Advanced Air Mobility. All propellers and rotors are initially twisted beams, showing significant extension–twist coupling and centrifugal twisting. Torsional deformations severely impact aerodynamic performance. This paper presents a novel approach to assess different reasons for torsional deformations. A reduced-order model runs large parameter sweeps with algebraic formulations and numerical solution procedures. Generic beams represent three different propeller types for General Aviation, Commercial Aviation, and Advanced Air Mobility. Simulations include solid and hollow cross-sections made of aluminum, steel, and carbon fiber-reinforced polymer. The investigation shows that centrifugal twisting moments depend on both the elastic and initial twist. The determination of the centrifugal twisting moment solely based on the initial twist suffers from errors exceeding 5% in some cases. The nonlinear parts of the torsional rigidity do not significantly impact the overall torsional rigidity for the investigated propeller types. The extension–twist coupling related to the initial and elastic twist in combination with tension forces significantly impacts the net cross-sectional torsional loads. While the increase in torsional stiffness due to initial twist contributes to the overall stiffness for General and Commercial Aviation propellers, its contribution to the lift propeller’s stiffness is limited. The paper closes with the presentation of approximations for each effect identified as significant. Numerical evaluations are necessary to determine each effect for inhomogeneous cross-sections made of anisotropic material.
Electronic cigarettes (e-cigarettes) have become popular worldwide, with the market growing exponentially in some countries. The absence of product standards and safety regulations requires the urgent development of analytical methodologies for the holistic control of the growing diversity of such products. An approach based on low-field nuclear magnetic resonance (LF-NMR) at 80 MHz is presented for the simultaneous determination of key parameters: carrier solvents (vegetable glycerine (VG), propylene glycol (PG) and water), total nicotine, and the free-base nicotine fraction. Moreover, the qualitative and quantitative determination of fourteen weak organic acids deliberately added to enhance the sensory characteristics of e-cigarettes was possible. In most cases these parameters can be determined rapidly and conveniently without any sample manipulation such as dilution, extraction or derivatization steps. The method was applied to 37 authentic e-cigarette samples. In particular, eight different organic acids with contents of up to 56 mg/mL were detected. Due to its simplicity, the method can be used in routine regulatory control as well as to study the release behaviour of nicotine and other e-cigarette constituents in different products.
In the context of increasing digitalization, the Internet of Things (IoT) is seen as a technological driver through which completely new business models can emerge from the interaction of different players. Identified key players include traditional industrial companies, municipalities and telecommunications companies. The latter, by providing connectivity, ensure that small devices with tiny batteries can be connected almost anywhere and directly to the Internet. There are already many IoT use cases on the market that provide simplification for end users, such as Philips Hue Tap. In addition to business models based on connectivity, there is great potential for information-driven business models that can support or enhance existing business models. One example is the IoT use case Park and Joy, which uses sensors to connect parking spaces and inform drivers about available parking spaces in real time. Information-driven business models can be based on data generated in IoT use cases. For example, a telecommunications company can add value by deriving decision-relevant information – called insights – from data, which can be used to increase decision agility. In addition, insights can be monetized. The monetization of insights can only be sustainable, however, if it is handled with care and the relevant framework conditions are considered. In this chapter, the concept of information-driven business models is explained and illustrated with the concrete use case Park and Joy. In addition, the benefits, risks and framework conditions are discussed.
Electrolyte-insulator-semiconductor capacitors (EISCAPs) are field-effect sensors with an attractive transducer architecture for constructing various biochemical sensors. In this study, a capacitive model of enzyme-modified EISCAPs was developed, and the impact of the surface coverage of immobilized enzymes on the capacitance-voltage and constant-capacitance characteristics was studied theoretically and experimentally. The multicell arrangement used enables multiplexed electrochemical characterization of up to sixteen EISCAPs. Different enzyme coverages were achieved by connecting bare and enzyme-covered single EISCAPs electrically in parallel in diverse combinations. As predicted by the model, with increasing enzyme coverage, both the shift of the capacitance-voltage curves and the amplitude of the constant-capacitance signal increase, resulting in an enhanced analyte sensitivity of the EISCAP biosensor. In addition, the capability of the multicell arrangement with multi-enzyme-covered EISCAPs to sequentially detect multiple analytes (penicillin and urea) utilizing the enzymes penicillinase and urease was experimentally demonstrated and discussed.
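The coverage dependence described in this abstract can be reproduced with a toy calculation: summing idealized sigmoidal C–V curves of bare and enzyme-covered capacitors connected in parallel shifts the combined curve further as the enzyme-covered fraction grows. All curve parameters below are illustrative placeholders, not values from the study.

```python
import numpy as np

def cv_curve(V, V_fb, C_min=0.2, C_max=1.0, k=0.05):
    """Idealized sigmoidal capacitance-voltage curve of one EISCAP [a.u.]."""
    return C_min + (C_max - C_min) / (1.0 + np.exp((V - V_fb) / k))

V = np.linspace(-1.0, 1.0, 2001)
dV = 0.1      # flat-band shift caused by the enzymatic reaction (illustrative)
n_cells = 16  # capacitors connected in parallel, as in the multicell arrangement

def combined_midpoint(enzyme_fraction):
    """Voltage at which the summed C-V curve crosses half of its total swing."""
    n_enz = int(round(enzyme_fraction * n_cells))
    C = (n_cells - n_enz) * cv_curve(V, 0.0) + n_enz * cv_curve(V, dV)
    half = 0.5 * (C.min() + C.max())
    return np.interp(half, C[::-1], V[::-1])  # C decreases with V, so reverse

# The apparent curve shift grows with the enzyme-covered fraction,
# mirroring the coverage dependence reported in the study.
shifts = [combined_midpoint(f) for f in (0.25, 0.5, 1.0)]
```

In this sketch the half-swing crossing moves monotonically from the bare flat-band voltage towards the enzyme-shifted one as coverage increases, which is the qualitative behavior the capacitive model predicts.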
This article addresses the need for an innovative technique in plasma shaping, utilizing antenna structures, Maxwell’s laws, and boundary conditions within a shielded environment. The motivation lies in exploring a novel approach to efficiently generate high-energy density plasma with potential applications across various fields. Implemented in an E01 circular cavity resonator, the proposed method involves the use of an impedance and field matching device with a coaxial connector and a specially optimized monopole antenna. This setup feeds a low-loss cavity resonator, resulting in a high-energy density air plasma with a surface temperature exceeding 3500 °C, achieved with a minimal power input of 80 W. The argon plasma, resembling the shape of a simple monopole antenna with modeled complex dielectric values, offers a more energy-efficient alternative compared to traditional, power-intensive plasma shaping methods. Simulations using a commercial electromagnetic (EM) solver validate the design’s effectiveness, while experimental validation underscores the method’s feasibility and practical implementation. Analyzing various parameters in an argon atmosphere, including hot S-parameters and plasma beam images, the results demonstrate the successful application of this technique, suggesting its potential in coating, furnace technology, fusion, and spectroscopy applications.
Next-generation aircraft designs often incorporate multiple large propellers attached along the wingspan (distributed electric propulsion), leading to highly flexible dynamic systems that can exhibit aeroelastic instabilities. This paper introduces a validated methodology to investigate the aeroelastic instabilities of wing–propeller systems and to understand the dynamic mechanisms leading to wing and whirl flutter and the transition from one to the other. Factors such as the nacelle position along the wing span and chord and the mounting stiffness of the propulsion system are considered. Additionally, preliminary design guidelines are proposed for flutter-free wing–propeller systems applicable to novel aircraft designs. The study demonstrates how the critical speed of wing–propeller systems is influenced by the mounting stiffness and propeller position. Soft mountings result in whirl flutter, while stiff mountings lead to wing flutter. For the latter, the position of the propeller along the wing span may change the wing mode shapes and thus the flutter mechanism. Propeller positions closer to the wing tip enhance stability, but pusher configurations are more critical due to the mass distribution behind the elastic axis.
A novel method to determine the extruded length of a metallic wire for a directed energy deposition (DED) process using a microwave (MW) plasma jet with a straight-through wire feed is presented. The method is based on the relative comparison of the measured frequency response obtained by the large-signal scattering parameter (Hot-S) technique. In the practical working range, a repeatability of less than 6% for the nonactive plasma state and 9% for the active plasma state is found. Measurements are conducted with a focus on a simple solution that decreases the processing time and reduces the effort of integrating the process into the existing hardware. It is shown that monitoring a single frequency for magnitude and phase changes is sufficient to achieve good accuracy. A combination of different measurement values to determine the length is possible. The applicability to different diameters of the same material is demonstrated, as well as contact detection between the wire and the metallic substrate.
This paper investigates the interior transmission problem for homogeneous media via eigenvalue trajectories parameterized by the magnitude of the refractive index. In the case that the scatterer is the unit disk, we prove that there is a one-to-one correspondence between complex-valued interior transmission eigenvalue trajectories and Dirichlet eigenvalues of the Laplacian which turn out to be exactly the trajectorial limit points as the refractive index tends to infinity. For general simply-connected scatterers in two or three dimensions, a corresponding relation is still open, but further theoretical results and numerical studies indicate a similar connection.
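The limit points mentioned in this abstract are classical quantities: on the unit disk, the Dirichlet eigenvalues of the Laplacian are the squared positive zeros j_{n,k} of the Bessel functions J_n. A short scipy check of these values (a standard textbook fact, independent of the paper's trajectory computations):

```python
import numpy as np
from scipy.special import jn_zeros

def disk_dirichlet_eigenvalues(n_max, k_max):
    """Dirichlet eigenvalues of the Laplacian on the unit disk: the squared
    positive zeros j_{n,k}^2 of the Bessel functions J_n, for orders
    0 <= n <= n_max and the first k_max zeros each (multiplicity ignored)."""
    eigs = []
    for n in range(n_max + 1):
        eigs.extend(jn_zeros(n, k_max) ** 2)
    return np.sort(np.array(eigs))

eigs = disk_dirichlet_eigenvalues(2, 3)
# The smallest eigenvalue is j_{0,1}^2 ~ 5.7832, the first trajectorial limit point.
```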
The artificial olfactory image was proposed by Lundström et al. in 1991 as a new strategy for an electronic nose system which generated a two-dimensional mapping to be interpreted as a fingerprint of the detected gas species. The potential distribution generated by the catalytic metals integrated into a semiconductor field-effect structure was read as a photocurrent signal generated by scanning light pulses. The impact of the proposed technology spread beyond gas sensing, inspiring the development of various imaging modalities based on the light addressing of field-effect structures to obtain spatial maps of pH distribution, ions, molecules, and impedance, and these modalities have been applied in both biological and non-biological systems. These light-addressing technologies have been further developed to realize the position control of a faradaic current on the electrode surface for localized electrochemical reactions and amperometric measurements, as well as the actuation of liquids in microfluidic devices.
Many important properties of bacterial cellulose (BC), such as moisture absorption capacity, elasticity and tensile strength, largely depend on its structure. This paper presents a study on the effect of the drying method on BC films produced by Medusomyces gisevii using two different procedures: room-temperature drying (RT; 24 ± 2 °C, humidity 65 ± 1%, dried until a constant weight was reached) and freeze-drying (FD; treated at −75 °C for 48 h). BC was synthesized using one of two different carbon sources, either glucose or sucrose. Structural differences in the obtained BC films were evaluated using atomic force microscopy (AFM), scanning electron microscopy (SEM) and X-ray diffraction. Macroscopically, the RT samples appeared semi-transparent and smooth, whereas the FD samples exhibited an opaque white color and a sponge-like structure. SEM examination showed denser packing of fibrils in the FD samples, while the RT samples displayed a smaller average fiber diameter, lower surface roughness and less porosity. AFM confirmed the SEM observations and showed that the FD material exhibited a more branched structure and a higher surface roughness. The samples cultivated in a glucose-containing nutrient medium generally displayed a straight and ordered fibril shape compared to the sucrose-derived BC, which was characterized by a rougher and wavier structure. The BC films dried under different conditions showed distinctly different degrees of crystallinity, whereas the carbon source in the culture medium was found to have a relatively small effect on the BC crystallinity.
To gain insight into chemical sterilization processes, the influence of temperature (up to 70 °C), intense green light, and hydrogen peroxide (H₂O₂) concentration (up to 30% in aqueous solution) on microbial spore inactivation is evaluated by in-situ Raman spectroscopy with an optical trap. Bacillus atrophaeus is utilized as a model organism. Individual spores are isolated and their chemical makeup is monitored under dynamically changing conditions (temperature, light, and H₂O₂ concentration) to mimic industrially relevant process parameters for sterilization in the field of aseptic food processing. While isolated spores in water are highly stable, even at elevated temperatures of 70 °C, exposure to H₂O₂ leads to a loss of spore integrity characterized by the release of the key spore biomarker dipicolinic acid (DPA) in a concentration-dependent manner, which indicates damage to the inner membrane of the spore. Intense light or heat, both of which accelerate the decomposition of H₂O₂ into reactive oxygen species (ROS), drastically shorten the spore lifetime, suggesting the formation of ROS as a rate-limiting step during sterilization. It is concluded that Raman spectroscopy can deliver mechanistic insight into the mode of action of H₂O₂-based sterilization and reveal the individual contributions of different sterilization methods acting in tandem.
Drought and water shortage are serious problems in many arid and semi-arid regions. Due to climate change, this problem is getting worse and now extends into temperate climatic regions. To address this problem, biodegradable hydrogels are becoming increasingly important as water-retaining additives in soil. Furthermore, tailored hydrogels can provide an efficient supply of (micro-)nutrients. Biodegradable polyaspartic acid (PASP) hydrogels were synthesized with commercially available crosslinkers (1,6-hexamethylene diamine (HMD) and L-lysine (LYS)) and with newly developed crosslinkers based on diesters of glycine (GLY) with (di-)ethylene glycol (DEG and EG, respectively). They were characterized using Fourier transform infrared (FTIR) spectroscopy and scanning electron microscopy (SEM) and with regard to their swelling properties (kinetics, absorbency under load (AUL)) as well as the biodegradability of the PASP hydrogel. Copper(II) and zinc(II), respectively, were loaded as micronutrients in two different approaches: in situ during crosslinking and by subsequent loading of the prepared hydrogels. The results showed the successful synthesis of the di-glycine-ester-based crosslinkers. Hydrogels with good water-absorbing properties were formed. Moreover, the developed crosslinking agents in combination with the specific reaction conditions resulted in higher water absorbency with increased crosslinker content used in the synthesis (10% vs. 20%). The prepared hydrogels are candidates for water-storing soil additives due to the biodegradability of PASP, which is demonstrated with an example. The incorporation of Cu(II) and Zn(II) ions can provide these micronutrients for plant growth.
Stokes’ first problem, the “suddenly accelerated flat wall”, is the oldest application of the Navier-Stokes equations. Stokes’ solution of the problem does not comply with the Cauchy-Kowalevskaya theorem on the uniqueness and existence of solutions of partial differential equations and violates the physical theorem of minimum entropy production/dissipation of the thermodynamics of irreversible processes. The result includes very high local shear stresses and dissipation rates. That is of special interest for the theory of turbulent and mixed turbulent/laminar flow. A textbook solution related to Stokes’ first problem is the Couette flow, which has a constant shear stress along a linear velocity profile. A consequence is that the Navier-Stokes equations do not describe the S-shaped part of a turbulent profile found in any turbulent Couette experiment. The paper surveys arguments referring to that statement, spanning more than 150 years of history. In contrast, there is always a Navier-Stokes solution near the wall, observed as a linear part of the Couette profile; there, a turbulent description (e.g. by the logarithmic law of the wall) fails completely. That is explained by the minimum dissipation requirement together with the Couette feature τ = const. The local co-existence of a turbulent zone and a laminar zone near the wall is stable and is observed also at high Reynolds numbers.
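For orientation, the classical similarity solution of the suddenly accelerated flat wall and the wall shear stress it implies (standard textbook results, quoted here for context rather than taken from the paper) make the locally unbounded stresses explicit:

```latex
u(y,t) = U\,\operatorname{erfc}\!\left(\frac{y}{2\sqrt{\nu t}}\right),
\qquad
\tau_w(t) = \mu \left.\frac{\partial u}{\partial y}\right|_{y=0}
          = -\frac{\mu U}{\sqrt{\pi \nu t}} ,
```

so |τ_w| diverges as t → 0⁺: the start-up singularity behind the “very high local shear stresses and dissipation rates”.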
The deformation and damage laws of non-homogeneous irregular structural planes in rocks are the basis for studying the stability of rock engineering. To investigate the damage characteristics of rock containing non-parallel fissures, uniaxial compression tests and numerical simulations were conducted on sandstone specimens containing three non-parallel fissures inclined at 0°, 45° and 90°. The characteristics of crack initiation and crack evolution of fissures with different inclinations were analyzed, and a constitutive model for the discontinuous fracturing of fissured sandstone was proposed. The results show that the fracture behavior of the fissured sandstone specimens is discontinuous. The stress–strain curves are non-smooth and can be divided into a nonlinear crack-closure stage, a linear elastic stage, a plastic stage and a brittle failure stage, of which the plastic stage contains discontinuous stress drops. During the uniaxial compression tests, the middle or ends of the 0° fissures were the first to crack, before the 45° and 90° fissures. Where the distance between the 0° and 45° fissures was small, that end cracked first; the end with the larger distance cracked later. After final failure, the 0° fissures in all specimens were fractured, while the 45° and 90° fissures were not necessarily fractured. The numerical simulation results show that the concentration of compressive stress at the tips of the 0°, 45° and 90° fissures, as well as the concentration of tensile stress on both sides, decreased with increasing inclination angle. A constitutive model for the discontinuous fracturing of fissured sandstone specimens was derived by combining the logistic model with damage mechanics theory. This model describes the discontinuous stress drops well and agrees well with the whole stress–strain curves of the fissured sandstone specimens.
Frequency mixing magnetic detection (FMMD) is a sensitive and selective technique to detect magnetic nanoparticles (MNPs) serving as probes for binding biological targets. Its principle relies on the nonlinear magnetic relaxation dynamics of a particle ensemble interacting with a dual frequency external magnetic field. In order to increase its sensitivity, lower its limit of detection and overall improve its applicability in biosensing, matching combinations of external field parameters and internal particle properties are being sought to advance FMMD. In this study, we systematically probe the aforementioned interaction with coupled Néel–Brownian dynamic relaxation simulations to examine how key MNP properties as well as applied field parameters affect the frequency mixing signal generation. It is found that the core size of MNPs dominates their nonlinear magnetic response, with the strongest contributions from the largest particles. The drive field amplitude dominates the shape of the field-dependent response, whereas effective anisotropy and hydrodynamic size of the particles only weakly influence the signal generation in FMMD. For tailoring the MNP properties and parameters of the setup towards optimal FMMD signal generation, our findings suggest choosing large particles of core sizes dc > 25 nm with narrow size distributions (σ < 0.1) to minimize the required drive field amplitude. This allows potential improvements of FMMD as a stand-alone application, as well as advances in magnetic particle imaging, hyperthermia and magnetic immunoassays.
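The mixing principle behind FMMD can be sketched numerically: an odd, saturating magnetization law (here the equilibrium Langevin function, with illustrative field amplitudes and frequencies rather than the simulation parameters of the study) driven by two tones f1 and f2 produces intermodulation components at f1 ± 2·f2, which FMMD detects.

```python
import numpy as np

def langevin(x):
    """Langevin function L(x) = coth(x) - 1/x with the small-x limit x/3."""
    x = np.asarray(x, dtype=float)
    out = np.empty_like(x)
    small = np.abs(x) < 1e-6
    out[small] = x[small] / 3.0
    xs = x[~small]
    out[~small] = 1.0 / np.tanh(xs) - 1.0 / xs
    return out

fs = 1000              # sampling rate [Hz]; 1 s of data -> exact 1 Hz bins
t = np.arange(fs) / fs
f1, f2 = 50.0, 5.0     # two-tone drive, illustrative frequencies
H = 1.0 * np.sin(2 * np.pi * f1 * t) + 0.5 * np.sin(2 * np.pi * f2 * t)

M = langevin(H)                           # nonlinear particle magnetization
spec = np.abs(np.fft.rfft(M)) / len(M)    # one-sided amplitude spectrum

# The odd nonlinearity creates intermodulation lines at f1 +/- 2*f2
# (40 and 60 Hz here), which are absent for a purely linear response.
mix = spec[int(f1 + 2 * f2)]
```

Because all tones sit exactly on FFT bins, the mixing line at f1 + 2·f2 stands out cleanly against empty neighbouring bins.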
Analyzing electroencephalographic (EEG) time series can be challenging, especially with deep neural networks, due to the large variability among human subjects and often small datasets. To address these challenges, various strategies, such as self-supervised learning, have been suggested, but they typically rely on extensive empirical datasets. Inspired by recent advances in computer vision, we propose a pretraining task termed "frequency pretraining" to pretrain a neural network for sleep staging by predicting the frequency content of randomly generated synthetic time series. Our experiments demonstrate that our method surpasses fully supervised learning in scenarios with limited data and few subjects, and matches its performance in regimes with many subjects. Furthermore, our results underline the relevance of frequency information for sleep stage scoring, while also demonstrating that deep neural networks utilize information beyond frequencies to enhance sleep staging performance, which is consistent with previous research. We anticipate that our approach will be advantageous across a broad spectrum of applications where EEG data is limited or derived from a small number of subjects, including the domain of brain-computer interfaces.
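The pretext task described in this abstract can be sketched roughly as follows (a simplified illustration with made-up frequencies and sizes, not the authors' exact generator): synthetic signals are built from random sinusoids and labeled with the frequency bins they contain; a network would then be pretrained to predict these multi-hot labels before fine-tuning on real EEG.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n_samples = 100.0, 300                            # 3 s sequences at 100 Hz
freq_bins = np.array([1.0, 5.0, 10.0, 20.0, 40.0])    # candidate frequencies [Hz]

def make_example():
    """Random synthetic signal plus a multi-hot label of the frequencies it contains."""
    label = rng.integers(0, 2, size=len(freq_bins))
    t = np.arange(n_samples) / fs
    x = 0.1 * rng.standard_normal(n_samples)          # measurement-like noise
    for f, present in zip(freq_bins, label):
        if present:
            x += rng.uniform(0.5, 1.5) * np.sin(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi))
    return x, label

# A mini-batch of (signal, label) pairs for the frequency-prediction pretext task.
X, y = zip(*(make_example() for _ in range(32)))
X, y = np.stack(X), np.stack(y)
```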
The growing body of political texts opens up new opportunities for rich insights into political dynamics and ideologies but also increases the workload for manual analysis. Automated speaker attribution, which detects who said what to whom in a speech event and is closely related to semantic role labeling, is an important processing step for computational text analysis. We study the potential of the large language model family Llama 2 to automate speaker attribution in German parliamentary debates from 2017-2021. We fine-tune Llama 2 with QLoRA, an efficient training strategy, and observe our approach to achieve competitive performance in the GermEval 2023 Shared Task On Speaker Attribution in German News Articles and Parliamentary Debates. Our results shed light on the capabilities of large language models in automating speaker attribution, revealing a promising avenue for computational analysis of political discourse and the development of semantic role labeling systems.
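The core idea behind the QLoRA training strategy mentioned above, adding a trainable low-rank update to a frozen weight matrix, can be sketched in a few lines; the dimensions, rank and scaling below are made-up placeholders, not the configuration used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
d_out, d_in, r, alpha = 8, 16, 2, 4        # layer sizes, LoRA rank and scaling

W = rng.standard_normal((d_out, d_in))     # frozen base weight (4-bit quantized in QLoRA)
A = 0.01 * rng.standard_normal((r, d_in))  # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection, zero-initialized

def lora_forward(x):
    """Base layer output plus the scaled low-rank correction (alpha / r) * B @ A @ x."""
    return W @ x + (alpha / r) * (B @ (A @ x))

# With B = 0 the adapter is inactive, so the layer initially matches the base
# model; training updates only the small matrices A and B while W stays frozen.
x = rng.standard_normal(d_in)
assert np.allclose(lora_forward(x), W @ x)
```

Only r · (d_in + d_out) adapter parameters are trained instead of d_in · d_out, which is what makes fine-tuning a 7B-class model on modest hardware feasible.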
In this paper, the use of reinforcement learning (RL) in control systems is investigated using a rotary inverted pendulum as an example. The control behavior of an RL controller is compared to that of traditional LQR and MPC controllers by evaluating their behavior under optimal conditions, their disturbance behavior, their robustness and their development process. All the investigated controllers are developed using MATLAB and the Simulink simulation environment and later deployed to a real pendulum model powered by a Raspberry Pi. The RL algorithm used is Proximal Policy Optimization (PPO). The LQR controller exhibits an easy development process, average to good control behavior and average to good robustness. A linear MPC controller showed excellent results under optimal operating conditions; however, when subjected to disturbances or deviations from the equilibrium point, it showed poor performance and sometimes unstable behavior. Employing a nonlinear MPC controller in real time was not possible due to the high computational effort involved. The RL controller exhibits by far the most versatile and robust control behavior. When operated in the simulation environment, it achieved high control accuracy. When employed in the real system, however, it showed only average accuracy and a significantly greater performance loss relative to the simulation than the traditional controllers. With MATLAB, it is not yet possible to post-train the RL controller directly on the Raspberry Pi, which is an obstacle to the practical application of RL in prototyping or teaching settings. Nevertheless, RL in general proves to be a flexible and powerful control method, well suited to complex or nonlinear systems where traditional controllers struggle.
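The "easy development process" of the LQR baseline can be illustrated with a few lines of scipy; the model below is a textbook cart-pole linearization with placeholder parameters, not the rotary pendulum rig from the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Linearized inverted-pendulum-on-a-cart model (illustrative textbook values);
# state vector: [x, x_dot, theta, theta_dot].
M_c, m, l, g = 1.0, 0.1, 0.5, 9.81
A = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, -m * g / M_c, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [0.0, 0.0, (M_c + m) * g / (M_c * l), 0.0]])
B = np.array([[0.0], [1.0 / M_c], [0.0], [-1.0 / (M_c * l)]])

Q = np.diag([10.0, 1.0, 100.0, 1.0])  # penalize cart position and pole angle most
R = np.array([[1.0]])                 # input effort weight

# LQR: solve the continuous-time algebraic Riccati equation, then K = R^-1 B^T P.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

# For a controllable pair (A, B) and positive definite Q, the closed loop
# A - B K is guaranteed asymptotically stable.
closed_loop_eigs = np.linalg.eigvals(A - B @ K)
```

Tuning reduces to choosing Q and R, which is a large part of why LQR development is comparatively painless next to training a PPO policy.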