Effective government services rely on accurate population numbers to allocate resources. In Colombia and globally, census enumeration is challenging in remote regions and where armed conflict is occurring. During census preparations, the Colombian National Administrative Department of Statistics conducted social cartography workshops, where community representatives estimated numbers of dwellings and people throughout their regions. We repurposed this information, combining it with remotely sensed buildings data and other geospatial data. To estimate building counts and population sizes, we developed hierarchical Bayesian models, trained using nearby full-coverage census enumerations and assessed using 10-fold cross-validation. We compared models to assess the relative contributions of community knowledge, remotely sensed buildings, and their combination to model fit. The Community model was unbiased but imprecise; the Satellite model was more precise but biased; and the Combination model was best for overall accuracy. Results reaffirmed the power of remotely sensed buildings data for population estimation and highlighted the value of incorporating local knowledge.
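As a minimal, non-authoritative illustration of the cross-validated count-modelling idea described above, the sketch below fits a plain Poisson regression on hypothetical building counts and community estimates and assesses it with 10-fold cross-validation; it is not the authors' hierarchical Bayesian model, and all data and variable names are invented for illustration.

```python
# Minimal sketch (not the authors' hierarchical Bayesian model): a Poisson
# regression of population on hypothetical covariates, assessed with 10-fold
# cross-validation as described in the abstract. All data are simulated.
import numpy as np
from sklearn.linear_model import PoissonRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n = 500
buildings = rng.poisson(120, n)                   # remotely sensed building counts (hypothetical)
community = buildings * rng.normal(1.0, 0.3, n)   # community workshop estimates (hypothetical)
X = np.column_stack([buildings, community])
y = rng.poisson(3.2 * buildings)                  # census population per unit (hypothetical)

errors = []
for train, test in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
    model = PoissonRegressor().fit(X[train], y[train])
    pred = model.predict(X[test])
    errors.append(np.mean(np.abs(pred - y[test]) / y[test]))
print(f"mean absolute relative error over 10 folds: {np.mean(errors):.3f}")
```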
Perennial ryegrass (Lolium perenne) is an underutilized lignocellulosic biomass with several benefits, such as high availability, renewability, and biomass yield. The grass press-juice obtained from mechanical pretreatment can be used for the bio-based production of chemicals. Lactic acid is a platform chemical that has attracted attention due to its broad range of applications. For this reason, more sustainable production of lactic acid is expected to increase. In this work, lactic acid was produced in complex medium at bench and bioreactor scale, and the results were compared to those obtained using an optimized press-juice medium. Bench-scale fermentations were carried out under pH control, and lactic acid production reached approximately 21.84 ± 0.95 g/L in complex medium and 26.61 ± 1.2 g/L in press-juice medium. In the bioreactor, the production yield was 0.91 ± 0.07 g/g, corresponding to a 1.4-fold increase with respect to the complex medium with fructose. For comparison with the traditional ensiling process, whole grass fractions of different varieties harvested in summer and autumn were ensiled. Ensiling showed variations in lactic acid yields, with yields of up to 15.2% dry mass for the late-harvested samples, surpassing typical silage yields of 6–10% dry mass.
Purpose: Impaired paravascular drainage of β-amyloid (Aβ) has been proposed as a contributing cause of sporadic Alzheimer’s disease (AD), as decreased cerebral blood vessel pulsatility, and subsequently reduced propulsion in this pathway, could lead to the accumulation and deposition of Aβ in the brain. We therefore hypothesized that pulsatility is increasingly impaired across the AD spectrum.
Patients and Methods: Using transcranial color-coded duplex sonography (TCCS), the resistance and pulsatility indices (RI; PI) of the middle cerebral artery (MCA) were measured in healthy controls (HC, n=14) and patients with AD dementia (ADD, n=12). In a second step, we extended the sample by adding patients with mild cognitive impairment (MCI), stratified by the presence (MCI-AD, n=8) or absence (MCI-nonAD, n=8) of biomarkers indicative of underlying AD pathology, and compared RI and PI across the groups. To control for atherosclerosis as a confounder, we measured the arteriolar-venular ratio of retinal vessels.
Results: Left and right RI (p=0.020; p=0.027) and left PI (p=0.034) differed between HC and ADD when controlled for atherosclerosis, with AUCs of 0.776, 0.763, and 0.718, respectively. The RI and PI of the MCI-AD group tended towards the ADD values, and those of the MCI-nonAD group towards the HC values. RIs and PIs were associated with disease severity (p=0.010, p=0.023).
Conclusion: Our results strengthen the hypothesis that impaired pulsatility could cause impaired amyloid clearance from the brain and thereby might contribute to the development of AD. However, further studies with larger sample sizes that consider other factors possibly influencing amyloid clearance are needed.
Purpose: A precise determination of the corneal diameter is essential for the diagnosis of various ocular diseases, for cataract and refractive surgery, and for the selection and fitting of contact lenses. The aim of this study was to investigate the agreement between two automatic methods and one manual method for corneal diameter determination and to evaluate possible diurnal variations in corneal diameter.
Patients and Methods: The horizontal white-to-white corneal diameter of 20 volunteers was measured at three fixed times of day with three methods: the Scheimpflug method (Pentacam HR, Oculus), Placido-based topography (Keratograph 5M, Oculus), and a manual method using image analysis software at a slit lamp (BQ900, Haag-Streit).
Results: The two-factorial analysis of variance showed no significant effect of the instrument (p = 0.117), the time point (p = 0.506), or the interaction between instrument and time point (p = 0.182). Very good repeatability (intraclass correlation coefficient, ICC; quartile coefficient of dispersion, QCD) was found for all three devices. However, manual slit lamp measurements showed a higher QCD than the automatic measurements with the Keratograph 5M and the Pentacam HR at all measurement times.
Conclusion: The manual and automated methods used in the study to determine corneal diameter showed good agreement and repeatability. No significant diurnal variations of corneal diameter were observed during the period of time studied.
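The quartile coefficient of dispersion used above as a repeatability measure is straightforward to compute; the sketch below uses hypothetical repeated corneal-diameter readings, not the study data.

```python
# Quartile coefficient of dispersion (QCD) for repeated corneal-diameter
# measurements; the values below are hypothetical, not the study data.
import numpy as np

measurements_mm = np.array([11.9, 12.0, 12.1, 11.8, 12.0, 12.2])  # repeated readings, one eye
q1, q3 = np.percentile(measurements_mm, [25, 75])
qcd = (q3 - q1) / (q3 + q1)
print(f"QCD = {qcd:.4f}")  # smaller values indicate better repeatability
```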
Transgenic plants have the potential to produce recombinant proteins on an agricultural scale, with yields of several tons per year. The cost-effectiveness of transgenic plants increases if simple cultivation facilities such as greenhouses can be used for production. In such a setting, we expressed a novel affinity ligand based on the fluorescent protein DsRed, which we used as a carrier for the linear epitope ELDKWA from the HIV-neutralizing antibody 2F5. The DsRed-2F5-epitope (DFE) fusion protein was produced in 12 consecutive batches of transgenic tobacco (Nicotiana tabacum) plants over the course of 2 years and was purified using a combination of blanching and immobilized metal-ion affinity chromatography (IMAC). The average purity after IMAC was 57 ± 26% (n = 24) in terms of total soluble protein, but the average yield of pure DFE (12 mg kg⁻¹) showed substantial variation (± 97 mg kg⁻¹, n = 24), which correlated with seasonal changes. Specifically, we found that temperature peaks (>28°C) and intense illuminance (>45 klx h⁻¹) were associated with lower DFE yields after purification, reflecting the loss of the epitope-containing C-terminus in up to 90% of the product. Whereas the weather factors were of limited use to predict product yields of individual harvests conducted for each batch (spaced by 1 week), the average batch yields were well approximated by simple linear regression models using two independent variables for prediction (illuminance and plant age). Interestingly, accumulation levels determined by fluorescence analysis were not affected by weather conditions but positively correlated with plant age, suggesting that the product was still expressed at high levels, but the extreme conditions affected its stability, albeit still preserving the fluorophore function. The efficient production of intact recombinant proteins in plants may therefore require adequate climate control and shading in greenhouses or even cultivation in fully controlled indoor farms.
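A two-variable linear regression of batch yield on illuminance and plant age, of the kind mentioned above, could look like the following sketch; the data and coefficients are hypothetical and do not reproduce the published measurements.

```python
# Sketch of a two-variable linear regression of batch yield on illuminance
# and plant age, as described above; all data are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
illuminance_klxh = rng.uniform(20, 60, 12)   # per-batch illuminance (hypothetical)
plant_age_d = rng.uniform(35, 60, 12)        # plant age at harvest in days (hypothetical)
yield_mg_kg = 150 - 1.5 * illuminance_klxh + 0.8 * plant_age_d + rng.normal(0, 5, 12)

X = np.column_stack([illuminance_klxh, plant_age_d])
reg = LinearRegression().fit(X, yield_mg_kg)
print("coefficients:", reg.coef_, "R^2:", reg.score(X, yield_mg_kg))
```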
Chromatography is the workhorse of biopharmaceutical downstream processing because it can selectively enrich a target product while removing impurities from complex feed streams. This is achieved by exploiting differences in molecular properties, such as size, charge and hydrophobicity (alone or in different combinations). Accordingly, many parameters must be tested during process development in order to maximize product purity and recovery, including resin and ligand types, conductivity, pH, gradient profiles, and the sequence of separation operations. The number of possible experimental conditions quickly becomes unmanageable. Although the range of suitable conditions can be narrowed based on experience, the time and cost of the work remain high even when using high-throughput laboratory automation. In contrast, chromatography modeling using inexpensive, parallelized computer hardware can provide expert knowledge, predicting conditions that achieve high purity and efficient recovery. The prediction of suitable conditions in silico reduces the number of empirical tests required and provides in-depth process understanding, which is recommended by regulatory authorities. In this article, we discuss the benefits and specific challenges of chromatography modeling. We describe the experimental characterization of chromatography devices and settings prior to modeling, such as the determination of column porosity. We also consider the challenges that must be overcome when models are set up and calibrated, including the cross-validation and verification of data-driven and hybrid (combined data-driven and mechanistic) models. This review will therefore support researchers intending to establish a chromatography modeling workflow in their laboratory.
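As an example of the kind of pre-modeling characterization mentioned above, total column porosity is commonly estimated from the retention volume of a non-interacting tracer; the sketch below assumes that approach, with all values hypothetical.

```python
# Estimating total column porosity from a non-interacting tracer pulse,
# a common pre-modeling characterization step; all values are hypothetical.
import math

column_diameter_cm = 1.0
column_length_cm = 10.0
flow_rate_ml_min = 1.0
tracer_retention_time_min = 5.9   # first moment of the tracer peak (hypothetical)
system_dead_volume_ml = 0.4       # tubing/detector volume measured without a column (hypothetical)

column_volume_ml = math.pi * (column_diameter_cm / 2) ** 2 * column_length_cm
retention_volume_ml = flow_rate_ml_min * tracer_retention_time_min - system_dead_volume_ml
total_porosity = retention_volume_ml / column_volume_ml
print(f"total porosity ≈ {total_porosity:.2f}")
```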
Proteins are important ingredients in food and feed, they are the active components of many pharmaceutical products, and they are necessary, in the form of enzymes, for the success of many technical processes. However, production can be challenging, especially when using heterologous host cells such as bacteria to express and assemble recombinant mammalian proteins. The manufacturability of proteins can be hindered by low solubility, a tendency to aggregate, or inefficient purification. Tools such as in silico protein engineering and models that predict separation criteria can overcome these issues but usually require the complex shape and surface properties of proteins to be represented by a small number of quantitative numeric values known as descriptors, as similarly used to capture the features of small molecules. Here, we review the current status of protein descriptors, especially for application in quantitative structure activity relationship (QSAR) models. First, we describe the complexity of proteins and the properties that descriptors must accommodate. Then we introduce descriptors of shape and surface properties that quantify the global and local features of proteins. Finally, we highlight the current limitations of protein descriptors and propose strategies for the derivation of novel protein descriptors that are more informative.
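By way of contrast with the shape and surface descriptors reviewed here, a handful of simple global sequence-based descriptors can be computed with Biopython's ProtParam module, as in the sketch below; the sequence is arbitrary and these descriptors are far coarser than the structural ones discussed in the article.

```python
# Simple, global sequence-based protein descriptors computed with Biopython's
# ProtParam module; the 3D shape/surface descriptors discussed in the review
# require structural data and are not shown here. The sequence is hypothetical.
from Bio.SeqUtils.ProtParam import ProteinAnalysis

sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQ"
pa = ProteinAnalysis(sequence)
descriptors = {
    "molecular_weight": pa.molecular_weight(),
    "isoelectric_point": pa.isoelectric_point(),
    "gravy_hydropathy": pa.gravy(),
    "aromaticity": pa.aromaticity(),
}
for name, value in descriptors.items():
    print(f"{name}: {value:.3f}")
```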
Self-metathesis of oleochemicals offers a variety of bifunctional compounds that can be used as monomers for polymer production. Many precursors are available at large scale, such as oleic acid esters (biodiesel), oleyl alcohol (surfactants), and oleyl amines (surfactants, lubricants). We show several ways to produce, separate, and purify C18-α,ω-bifunctional compounds using Grubbs second-generation catalysts, starting from technical-grade educts.
Background: Architectural representation, nurtured by the interaction between design thinking and design action, is inherently multi-layered. However, the representation object cannot always reflect these layers. It is therefore claimed that these reflections and layerings can gain visibility through ‘performativity in personal knowledge’, which is essentially performative in character. The specific layers of representation produced during this performativity permit insights into the ‘personal way of designing’ [1]. The question of how these layered drawings can be decomposed to understand the personal way of designing therefore marks the starting point of the study. Performativity in personal knowledge in architectural design is, in turn, handled through the relationship between explicit and tacit knowledge and between representational and non-representational theory. To discuss the practical dimension of these theoretical relations, Zvi Hecker's drawing of the Heinz-Galinski-School is examined as an example. The study aims to understand the relationships between the layers by decomposing a layered drawing analytically, in order to exemplify personal ways of designing.
Methods: The study is based on qualitative research methodologies. First, a model has been formed through theoretical readings to discuss performativity in personal knowledge. This model is used to understand the layered representations and to investigate the personal way of designing. To this end, one drawing of Hecker’s Heinz-Galinski-School project is chosen. Second, its layers are decomposed to detect and analyze diverse objects, which hint at different types of design tools and their application. Third, Zvi Hecker’s statements about the design process are explained through the interview data [2] and other sources. The obtained data are compared with each other.
Results: By decomposing the drawing, eleven layers are defined. These layers are used to understand the relation between the design idea and its representation. They can also be thought of as a reading system. In other words, a method to discuss Hecker’s performativity in personal knowledge is developed. Furthermore, the layers and their interconnections are described in relation to Zvi Hecker’s personal way of designing.
Conclusions: It can be said that layered representations, which are associated with the multilayered structure of performativity in personal knowledge, form the personal way of designing.
Against the background of growing data in everyday life, data processing tools are becoming more powerful in order to deal with the increasing complexity of building design. The architectural planning process is offered a variety of new instruments to design, plan, and communicate planning decisions. Ideally, access to information serves to secure and document the quality of the building; in the worst case, the increased data absorbs time in collection and processing without any benefit for the building and its users. Process models can illustrate the impact of information on the design and planning process so that architects and planners can steer it. This paper presents historical and contemporary models for visualizing the architectural planning process and introduces means to describe today’s situation, consisting of stakeholders, events, and instruments. It explains conceptions from the Renaissance in contrast to models used in the second half of the 20th century. Contemporary models are discussed with regard to their value against the background of increasing computation in the building process.
We conducted a scoping review for active learning in the domain of natural language processing (NLP), which we summarize in accordance with the PRISMA-ScR guidelines as follows:
Objective: Identify active learning strategies that were proposed for entity recognition and their evaluation environments (datasets, metrics, hardware, execution time).
Design: We used Scopus and ACM as our search engines. We compared the results with two literature surveys to assess the search quality. We included peer-reviewed English publications introducing or comparing active learning strategies for entity recognition.
Results: We analyzed 62 relevant papers and identified 106 active learning strategies. We grouped them into three categories: exploitation-based (60x), exploration-based (14x), and hybrid strategies (32x). We found that all studies used the F1-score as an evaluation metric. Information about hardware (6x) and execution time (13x) was only occasionally included. The 62 papers used 57 different datasets to evaluate their respective strategies. Most datasets contained newspaper articles or biomedical/medical data. Our analysis revealed that 26 out of 57 datasets are publicly accessible.
Conclusion: Numerous active learning strategies have been identified, along with significant open questions that still need to be addressed. Researchers and practitioners face difficulties when making data-driven decisions about which active learning strategy to adopt. Conducting comprehensive empirical comparisons using the evaluation environment proposed in this study could help establish best practices in the domain.
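For readers unfamiliar with the exploitation-based category, the sketch below shows a generic least-confidence (uncertainty sampling) loop on a toy classification task; it is an illustrative example only, not a strategy or dataset from the reviewed papers, and it substitutes a plain classifier for a full entity-recognition model.

```python
# Minimal uncertainty-sampling (least-confidence) loop illustrating the
# exploitation-based category on a toy classification task; not an
# entity-recognition system or a strategy from any reviewed paper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])  # small seed set
pool = [i for i in range(len(y)) if i not in labeled]

for round_ in range(5):
    clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
    confidence = clf.predict_proba(X[pool]).max(axis=1)      # low confidence = high uncertainty
    query = [pool[i] for i in np.argsort(confidence)[:20]]   # pick the 20 most uncertain examples
    labeled.extend(query)                                    # simulate annotation by an oracle
    pool = [i for i in pool if i not in query]
    print(f"round {round_}: labeled={len(labeled)}, accuracy on pool={clf.score(X[pool], y[pool]):.3f}")
```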
Subglacial environments on Earth offer important analogs to Ocean World targets in our solar system. These unique microbial ecosystems remain understudied due to the challenges of access through thick glacial ice (tens to hundreds of meters). Additionally, sub-ice collections must be conducted in a clean manner to ensure sample integrity for downstream microbiological and geochemical analyses. We describe the field-based cleaning of a melt probe that was used to collect brine samples from within a glacier conduit at Blood Falls, Antarctica, for geomicrobiological studies. We used a thermoelectric melting probe called the IceMole that was designed to be minimally invasive in that the logistical requirements in support of drilling operations were small and the probe could be cleaned, even in a remote field setting, so as to minimize potential contamination. In our study, the exterior bioburden on the IceMole was reduced to levels measured in most clean rooms, and below that of the ice surrounding our sampling target. Potential microbial contaminants were identified during the cleaning process; however, very few were detected in the final englacial sample collected with the IceMole and were present in extremely low abundances (∼0.063% of 16S rRNA gene amplicon sequences). This cleaning protocol can help minimize contamination when working in remote field locations, support microbiological sampling of terrestrial subglacial environments using melting probes, and help inform planetary protection challenges for Ocean World analog mission concepts.
Methane is a valuable energy source that helps to meet the growing energy demand worldwide. However, as a potent greenhouse gas, it has also gained additional attention due to its environmental impacts. The biological production of methane is performed primarily hydrogenotrophically from H2 and CO2 by methanogenic archaea. Hydrogenotrophic methanogenesis is also of great interest with respect to carbon recycling and H2 storage. The most significant carbon source for microbial degradation and biogenic methane production, extremely rich in complex organic matter, is coal. Although interest in enhanced microbial coalbed methane production is continuously increasing globally, limited knowledge exists regarding the exact origins of coalbed methane and the associated microbial communities, including hydrogenotrophic methanogens. Here, we give an overview of hydrogenotrophic methanogens in coal beds and related environments in terms of their energy production mechanisms, unique metabolic pathways, and associated ecological functions.
Ga-doped Li7La3Zr2O12 garnet solid electrolytes exhibit the highest Li-ion conductivities among the oxide-type garnet-structured solid electrolytes, but instabilities toward Li metal hamper their practical application. Several groups have previously assigned these instabilities to direct chemical reactions between coexisting LiGaO2 phases and Li metal. Yet, the understanding of the role of LiGaO2 in the electrochemical cell and of its electrochemical properties is still lacking. Here, we investigate the electrochemical properties of LiGaO2 through electrochemical tests in galvanostatic cells versus Li metal and complementary ex situ studies via confocal Raman microscopy, quantitative phase analysis based on powder X-ray diffraction, energy-dispersive X-ray spectroscopy, X-ray photoelectron spectroscopy, and electron energy loss spectroscopy. The results demonstrate considerable and surprising electrochemical activity, with high reversibility. A three-stage reaction mechanism is derived, including reversible electrochemical reactions that lead to the formation of highly electronically conducting products. The results have considerable implications for the use of Ga-doped Li7La3Zr2O12 electrolytes in all-solid-state Li-metal battery applications and raise the need for advanced materials engineering to realize Ga-doped Li7La3Zr2O12 for practical use.
The thermal conductivity of components manufactured using Laser Powder Bed Fusion (LPBF), also called Selective Laser Melting (SLM), plays an important role in their processing. Not only does reduced thermal conductivity cause residual stresses during the process, but it also makes subsequent processes, such as the welding of LPBF components, more difficult. This article uses 316L stainless steel samples to investigate whether and to what extent the thermal conductivity of specimens can be influenced by different LPBF parameters. To this end, samples are built using different parameters, orientations, and powder conditions and measured with a heat flow meter using stationary analysis. The heat flow meter set-up used in this study achieves good reproducibility and high measurement accuracy, so that comparative measurements between the various LPBF influencing factors under test are possible. In summary, the series of measurements shows that the residual porosity of the components has the greatest influence on conductivity. The degradation of the powder due to increased recycling also appears to be detectable. The build direction shows no detectable effect in the measurement series.
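For orientation, the stationary analysis of a heat flow meter measurement reduces to Fourier's law for one-dimensional conduction; the sketch below uses hypothetical values rather than the study's measurements.

```python
# Thermal conductivity from a stationary heat flow meter measurement via
# Fourier's law for one-dimensional conduction, k = q * L / dT.
# All values are hypothetical, not the study's measurements.
heat_flux_w_m2 = 21000.0          # q: steady-state heat flux density through the sample
sample_thickness_m = 0.010        # L: sample thickness
temperature_difference_k = 15.0   # dT: temperature drop across the sample

k = heat_flux_w_m2 * sample_thickness_m / temperature_difference_k
print(f"thermal conductivity ≈ {k:.1f} W/(m·K)")  # ~14 W/(m·K), a typical order for dense 316L
```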
In this work, the effect of low air relative humidity on the operation of a polymer electrolyte membrane fuel cell is investigated. An innovative method based on performing in situ electrochemical impedance spectroscopy is utilised to quantify the effect of the inlet air relative humidity at the cathode side on the internal ionic resistances and output voltage of the fuel cell. In addition, algorithms are developed to analyse the electrochemical characteristics of the fuel cell. For the specific fuel cell stack used in this study, the membrane resistance drops by over 39 % and the cathode-side charge transfer resistance decreases by 23 % when the humidity is increased from 30 % to 85 %, while the results of static operation also show an increase of ∼2.2 % in the voltage output after increasing the relative humidity from 30 % to 85 %. In dynamic operation, visible drying effects occur at < 50 % relative humidity, whereby an increase of the air-side stoichiometry increases the drying effects. Furthermore, other parameters, such as hydrogen humidification, internal stack structure, and operating parameters like stoichiometry, pressure, and temperature, affect the overall water balance. Therefore, the optimal humidification range must be determined by considering all these parameters to maximise fuel cell performance and durability. The results of this study are used to develop a health management system that ensures sufficient humidification by continuously monitoring the fuel cell polarisation data and electrochemical impedance spectroscopy indicators.
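The separation of membrane and charge-transfer resistances from impedance spectra is often illustrated with a simplified Randles-type equivalent circuit; the sketch below is a generic example with hypothetical parameters, not the analysis algorithm or stack values from this study.

```python
# Impedance of a simplified Randles-type equivalent circuit (membrane
# resistance in series with a parallel charge-transfer resistance and
# double-layer capacitance), often used to interpret fuel cell EIS spectra.
# Parameters are hypothetical, not the stack values from this study.
import numpy as np

R_mem = 0.004      # membrane (ohmic) resistance, ohm
R_ct = 0.012       # cathode charge-transfer resistance, ohm
C_dl = 0.8         # double-layer capacitance, F

freq_hz = np.logspace(4, -1, 60)
omega = 2 * np.pi * freq_hz
Z = R_mem + R_ct / (1 + 1j * omega * R_ct * C_dl)

# High-frequency intercept -> R_mem; low-frequency intercept -> R_mem + R_ct
print(f"Re(Z) at {freq_hz[0]:.0f} Hz:  {Z[0].real * 1000:.2f} mOhm")
print(f"Re(Z) at {freq_hz[-1]:.1f} Hz: {Z[-1].real * 1000:.2f} mOhm")
```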
Critical quantitative evaluation of integrated health management methods for fuel cell applications
(2024)
Online fault diagnostics is a crucial consideration for fuel cell systems, particularly in mobile applications, to limit downtime and degradation and to increase lifetime. Guided by a critical literature review, this paper presents an overview of health management systems classified in a scheme, introducing commonly utilised methods to diagnose fuel cells (FCs) in various applications. In this novel scheme, the various health management system methods are summarised and structured to provide an overview of existing systems, including their associated tools. These systems are classified into four categories, mainly focused on model-based and non-model-based systems. The individual methods are critically discussed, both when used individually and in combination, with the aim of further understanding their functionality and suitability in different applications. Additionally, a tool is introduced to evaluate methods from each category based on the presented scheme. This tool applies a matrix evaluation technique using several key parameters to identify the most appropriate methods for a given application. Based on this evaluation, the most suitable methods for each specific application are combined to build an integrated health management system.
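Matrix evaluation of this kind amounts to a weighted scoring of candidate methods against key parameters; the sketch below is a generic example in which the methods, criteria, scores, and weights are hypothetical rather than taken from the paper.

```python
# Generic weighted scoring matrix for comparing diagnostic methods against
# key parameters; methods, criteria, scores, and weights are hypothetical.
import numpy as np

methods = ["Equivalent-circuit model", "Neural network", "Signal-based thresholding"]
criteria = ["accuracy", "online capability", "computational cost", "data requirement"]
weights = np.array([0.4, 0.3, 0.2, 0.1])          # criterion weights, summing to 1

# Scores from 1 (poor) to 5 (excellent), one row per method (hypothetical)
scores = np.array([
    [4, 3, 3, 4],
    [5, 4, 2, 1],
    [2, 5, 5, 5],
])

totals = scores @ weights
for method, total in sorted(zip(methods, totals), key=lambda t: -t[1]):
    print(f"{method}: {total:.2f}")
```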