Easy-read and large language models: on the ethical dimensions of LLM-based text simplification
(2024)
The production of easy-read and plain language texts is a challenging task that requires well-educated experts to write context-dependent simplifications. As a result, easy-read and plain language are currently restricted to the bare minimum of necessary information. Even though there is a tendency to broaden their domain, the inaccessibility of a significant amount of textual information excludes the target audience from participation and entertainment and restricts their ability to live autonomously. Large language models can solve a vast variety of natural language tasks, including the simplification of standard-language texts into easy-read or plain language. Moreover, with the rise of generative models like GPT, easy-read and plain language may become applicable to all kinds of natural language texts, making formerly inaccessible information accessible to marginalized groups such as, among others, non-native speakers and people with mental disabilities. In this paper, we argue for the feasibility of text simplification and generation in that context, outline the ethical dimensions, and discuss the implications for researchers in the fields of ethics and computer science.
The quest for scientifically advanced and sustainable solutions is driven by growing environmental and economic issues associated with coal mining, processing, and utilization. Consequently, within the coal industry, there is a growing recognition of the potential of microbial applications in fostering innovative technologies. Microbial-based coal solubilization, coal beneficiation, and coal dust suppression are green alternatives to traditional thermochemical and leaching technologies and better meet the need for ecologically sound and economically viable choices. Surfactant-mediated approaches have emerged as powerful tools for modeling, simulation, and optimization of coal-microbial systems and continue to gain prominence in clean coal fuel production, particularly in microbiological co-processing, conversion, and beneficiation. Surfactants (surface-active agents) are amphiphilic compounds that can reduce surface tension and enhance the solubility of hydrophobic molecules. A wide range of surfactant properties can be achieved by either directly influencing microbial growth factors, stimulants, and substrates or indirectly serving as frothers, collectors, and modifiers in the processing and utilization of coal. This review highlights the significant biotechnological potential of surfactants by providing a thorough overview of their involvement in coal biodegradation, bioprocessing, and biobeneficiation, acknowledging their importance as crucial steps in coal consumption.
Several unconnected laboratory experiments are usually offered to students in instrumental analysis lab courses. To give students a more coherent overview of the most common instrumental techniques, a new laboratory experiment was developed. Marketed pain relief drugs, familiar consumer products with one to three active components, namely acetaminophen (paracetamol), acetylsalicylic acid (ASA), and caffeine, were selected. Common analytical methods were compared regarding their performance in the qualitative and quantitative analysis of unknown tablets: UV–visible (UV–vis), infrared (IR), and nuclear magnetic resonance (NMR) spectroscopies, as well as high-performance liquid chromatography (HPLC). The students successfully uncovered the composition of formulations, which were divided into three difficulty categories. Students were shown that, in addition to the simple mixtures handled in theoretical classes, the composition of complex drug products can also be determined. By comparing the performance of different techniques, students deepen their understanding and assess the efficiency of analytical methods in the context of complex mixtures. The laboratory experiment can be adjusted for the graduate level by including extra tasks such as method optimization, validation, and 2D spectroscopic techniques.
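As a side note for instructors, the quantitative UV–vis step typically reduces to an external-standard calibration based on the Beer–Lambert law (A = εlc, linear in concentration). The sketch below illustrates this with entirely hypothetical calibration data; the concentrations and absorbances are invented for illustration.

```python
import numpy as np

# Hypothetical external-standard calibration for one analyte by UV-vis.
standards_mg_l = np.array([2.0, 4.0, 6.0, 8.0, 10.0])    # known concentrations
absorbances = np.array([0.11, 0.22, 0.33, 0.44, 0.55])   # measured absorbances

# Least-squares fit of A = slope * c + intercept
slope, intercept = np.polyfit(standards_mg_l, absorbances, 1)

def concentration(a_sample: float) -> float:
    """Back-calculate concentration (mg/L) from a sample absorbance."""
    return (a_sample - intercept) / slope

print(round(concentration(0.275), 2))  # -> 5.0 mg/L for these made-up data
```

The same calibration logic carries over to HPLC peak areas; only the measured response changes.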
Effective government services rely on accurate population numbers to allocate resources. In Colombia and globally, census enumeration is challenging in remote regions and where armed conflict is occurring. During census preparations, the Colombian National Administrative Department of Statistics conducted social cartography workshops, where community representatives estimated numbers of dwellings and people throughout their regions. We repurposed this information, combining it with remotely sensed buildings data and other geospatial data. To estimate building counts and population sizes, we developed hierarchical Bayesian models, trained using nearby full-coverage census enumerations and assessed using 10-fold cross-validation. We compared models to assess the relative contributions of community knowledge, remotely sensed buildings, and their combination to model fit. The Community model was unbiased but imprecise; the Satellite model was more precise but biased; and the Combination model was best for overall accuracy. Results reaffirmed the power of remotely sensed buildings data for population estimation and highlighted the value of incorporating local knowledge.
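The bias/precision trade-off described above can be illustrated with a toy simulation. This is not the authors' hierarchical Bayesian model; it is a minimal sketch in which one simulated data source is unbiased but imprecise, the other precise but biased, and the two are combined by a naive precision weighting with made-up error parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
true_pop = 1000.0

# Toy stand-ins for the two data sources (hypothetical error parameters):
community = rng.normal(true_pop, 150.0, 10_000)       # unbiased, imprecise
satellite = rng.normal(true_pop * 0.9, 40.0, 10_000)  # precise, biased low

# Naive precision-weighted combination (illustrative only -- the paper's
# hierarchical Bayesian model also corrects for bias, which this does not):
w = (1 / 150.0**2) / (1 / 150.0**2 + 1 / 40.0**2)
combined = w * community + (1 - w) * satellite

for name, est in [("community", community), ("satellite", satellite),
                  ("combined", combined)]:
    print(f"{name:9s} bias={est.mean() - true_pop:7.1f}  sd={est.std():6.1f}")
```

The combined estimate inherits the satellite bias while gaining precision, which is why a model that explicitly learns the bias, as in the study, is preferable to naive weighting.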
Perennial ryegrass (Lolium perenne) is an underutilized lignocellulosic biomass with several benefits, such as high availability, renewability, and biomass yield. The grass press-juice obtained from mechanical pretreatment can be used for the bio-based production of chemicals. Lactic acid is a platform chemical that has attracted attention due to its broad range of applications. For this reason, more sustainable production of lactic acid is expected to increase. In this work, lactic acid was produced using a complex medium at the bench and reactor scale, and the results were compared to those obtained using an optimized press-juice medium. Bench-scale fermentations were carried out under pH control, and lactic acid production reached 21.84 ± 0.95 g/L in complex medium and 26.61 ± 1.2 g/L in press-juice medium. In the bioreactor, the production yield was 0.91 ± 0.07 g/g, corresponding to a 1.4-fold increase with respect to the complex medium with fructose. As a comparison to the traditional ensiling process, whole grass fractions of different varieties harvested in summer and autumn were ensiled. Ensiling showed variations in lactic acid yields, with a yield of up to 15.2% dry mass for the late-harvested samples, surpassing typical silage yields of 6–10% dry mass.
Purpose: Impaired paravascular drainage of β-amyloid (Aβ) has been proposed as a contributing cause of sporadic Alzheimer's disease (AD), as decreased cerebral blood vessel pulsatility and subsequently reduced propulsion in this pathway could lead to the accumulation and deposition of Aβ in the brain. Therefore, we hypothesized that pulsatility is increasingly impaired across the AD spectrum.
Patients and Methods: Using transcranial color-coded duplex sonography (TCCS), the resistance index (RI) and pulsatility index (PI) of the middle cerebral artery (MCA) were measured in healthy controls (HC, n=14) and patients with AD dementia (ADD, n=12). In a second step, we extended the sample by adding patients with mild cognitive impairment (MCI), stratified by the presence (MCI-AD, n=8) or absence (MCI-nonAD, n=8) of biomarkers indicative of underlying AD pathology, and compared RI and PI across the groups. To control for atherosclerosis as a confounder, we measured the arteriolar-to-venular ratio of retinal vessels.
Results: Left and right RI (p=0.020; p=0.027) and left PI (p=0.034) differed between HC and ADD when controlled for atherosclerosis, with AUCs of 0.776, 0.763, and 0.718, respectively. The RI and PI of the MCI-AD group tended towards the values of the ADD group, and those of the MCI-nonAD group towards the values of the HC group. RIs and PIs were associated with disease severity (p=0.010, p=0.023).
Conclusion: Our results strengthen the hypothesis that impaired pulsatility could cause impaired amyloid clearance from the brain and thereby might contribute to the development of AD. However, further studies considering other factors possibly influencing amyloid clearance as well as larger sample sizes are needed.
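For context, the two Doppler indices used in this study have standard definitions: the Pourcelot resistance index, RI = (PSV − EDV)/PSV, and the Gosling pulsatility index, PI = (PSV − EDV)/TAMV, where PSV is peak systolic velocity, EDV end-diastolic velocity, and TAMV the time-averaged mean velocity. A minimal sketch with hypothetical velocities (not values from the study):

```python
def resistance_index(psv: float, edv: float) -> float:
    """Pourcelot resistance index: (PSV - EDV) / PSV."""
    return (psv - edv) / psv

def pulsatility_index(psv: float, edv: float, mean_v: float) -> float:
    """Gosling pulsatility index: (PSV - EDV) / time-averaged mean velocity."""
    return (psv - edv) / mean_v

# Hypothetical MCA velocities in cm/s (illustrative only):
psv, edv, mean_v = 90.0, 40.0, 60.0
print(round(resistance_index(psv, edv), 3))           # -> 0.556
print(round(pulsatility_index(psv, edv, mean_v), 3))  # -> 0.833
```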
Purpose: A precise determination of the corneal diameter is essential for the diagnosis of various ocular diseases, for cataract and refractive surgery, and for the selection and fitting of contact lenses. The aim of this study was to investigate the agreement between two automatic methods and one manual method for corneal diameter determination and to evaluate possible diurnal variations in corneal diameter.
Patients and Methods: The horizontal white-to-white corneal diameter of 20 volunteers was measured at three fixed times of the day with three methods: the Scheimpflug method (Pentacam HR, Oculus), Placido-based topography (Keratograph 5M, Oculus), and a manual method using image analysis software at a slit lamp (BQ900, Haag-Streit).
Results: A two-factorial analysis of variance showed no significant effect of instrument (p = 0.117), time point (p = 0.506), or the interaction between instrument and time point (p = 0.182). Very good repeatability (intraclass correlation coefficient, ICC; quartile coefficient of dispersion, QCD) was found for all three devices. However, the manual slit-lamp measurements showed a higher QCD than the automatic measurements with the Keratograph 5M and the Pentacam HR at all measurement times.
Conclusion: The manual and automated methods used in the study to determine corneal diameter showed good agreement and repeatability. No significant diurnal variations of corneal diameter were observed during the period of time studied.
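For reference, the quartile coefficient of dispersion used here as a repeatability measure is defined as (Q3 − Q1)/(Q3 + Q1). A minimal sketch with hypothetical diameter readings (not study data):

```python
import numpy as np

def qcd(values) -> float:
    """Quartile coefficient of dispersion: (Q3 - Q1) / (Q3 + Q1)."""
    q1, q3 = np.percentile(values, [25, 75])
    return (q3 - q1) / (q3 + q1)

# Hypothetical repeated white-to-white diameters in mm (illustrative only):
manual = [11.8, 12.1, 11.6, 12.3, 11.9]
automatic = [11.9, 12.0, 11.9, 12.0, 12.0]
print(qcd(manual) > qcd(automatic))  # the noisier series has the larger QCD
```

Because QCD normalizes the interquartile range by the quartile sum, it compares spread across devices whose readings sit at slightly different levels.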
Transgenic plants have the potential to produce recombinant proteins on an agricultural scale, with yields of several tons per year. The cost-effectiveness of transgenic plants increases if simple cultivation facilities such as greenhouses can be used for production. In such a setting, we expressed a novel affinity ligand based on the fluorescent protein DsRed, which we used as a carrier for the linear epitope ELDKWA from the HIV-neutralizing antibody 2F5. The DsRed-2F5-epitope (DFE) fusion protein was produced in 12 consecutive batches of transgenic tobacco (Nicotiana tabacum) plants over the course of 2 years and was purified using a combination of blanching and immobilized metal-ion affinity chromatography (IMAC). The average purity after IMAC was 57 ± 26% (n = 24) in terms of total soluble protein, but the average yield of pure DFE (12 mg kg−1) showed substantial variation (± 97 mg kg−1, n = 24) which correlated with seasonal changes. Specifically, we found that temperature peaks (>28°C) and intense illuminance (>45 klx h−1) were associated with lower DFE yields after purification, reflecting the loss of the epitope-containing C-terminus in up to 90% of the product. Whereas the weather factors were of limited use to predict product yields of individual harvests conducted for each batch (spaced by 1 week), the average batch yields were well approximated by simple linear regression models using two independent variables for prediction (illuminance and plant age). Interestingly, accumulation levels determined by fluorescence analysis were not affected by weather conditions but positively correlated with plant age, suggesting that the product was still expressed at high levels, but the extreme conditions affected its stability, albeit still preserving the fluorophore function. 
The efficient production of intact recombinant proteins in plants may therefore require adequate climate control and shading in greenhouses or even cultivation in fully controlled indoor farms.
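A linear regression with two predictors, as used above to approximate average batch yields, can be sketched as follows. The data are entirely hypothetical; only the model form (yield as a linear function of illuminance and plant age) follows the text.

```python
import numpy as np

# Hypothetical batch data (illuminance dose, plant age in days, yield in
# mg/kg) -- purely illustrative, not the measurements from the study.
illuminance = np.array([30.0, 35.0, 40.0, 45.0, 50.0, 55.0])
plant_age = np.array([40.0, 42.0, 45.0, 47.0, 50.0, 52.0])
yields = np.array([80.0, 72.0, 60.0, 55.0, 41.0, 35.0])

# Design matrix with an intercept column: yield ~ b0 + b1*illum + b2*age
X = np.column_stack([np.ones_like(illuminance), illuminance, plant_age])
coef, *_ = np.linalg.lstsq(X, yields, rcond=None)

predicted = X @ coef
r2 = 1 - np.sum((yields - predicted) ** 2) / np.sum((yields - yields.mean()) ** 2)
print(f"coefficients: {coef.round(2)}, R^2 = {r2:.3f}")
```

With real data, strongly correlated predictors (hot, bright periods also coincide with older plants) make individual coefficients hard to interpret, even when the overall fit is good.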
Chromatography is the workhorse of biopharmaceutical downstream processing because it can selectively enrich a target product while removing impurities from complex feed streams. This is achieved by exploiting differences in molecular properties, such as size, charge and hydrophobicity (alone or in different combinations). Accordingly, many parameters must be tested during process development in order to maximize product purity and recovery, including resin and ligand types, conductivity, pH, gradient profiles, and the sequence of separation operations. The number of possible experimental conditions quickly becomes unmanageable. Although the range of suitable conditions can be narrowed based on experience, the time and cost of the work remain high even when using high-throughput laboratory automation. In contrast, chromatography modeling using inexpensive, parallelized computer hardware can provide expert knowledge, predicting conditions that achieve high purity and efficient recovery. The prediction of suitable conditions in silico reduces the number of empirical tests required and provides in-depth process understanding, which is recommended by regulatory authorities. In this article, we discuss the benefits and specific challenges of chromatography modeling. We describe the experimental characterization of chromatography devices and settings prior to modeling, such as the determination of column porosity. We also consider the challenges that must be overcome when models are set up and calibrated, including the cross-validation and verification of data-driven and hybrid (combined data-driven and mechanistic) models. This review will therefore support researchers intending to establish a chromatography modeling workflow in their laboratory.
Proteins are important ingredients in food and feed, they are the active components of many pharmaceutical products, and they are necessary, in the form of enzymes, for the success of many technical processes. However, their production can be challenging, especially when using heterologous host cells such as bacteria to express and assemble recombinant mammalian proteins. The manufacturability of proteins can be hindered by low solubility, a tendency to aggregate, or inefficient purification. Tools such as in silico protein engineering and models that predict separation criteria can overcome these issues but usually require the complex shape and surface properties of proteins to be represented by a small number of numeric values known as descriptors, similar to those used to capture the features of small molecules. Here, we review the current status of protein descriptors, especially for application in quantitative structure–activity relationship (QSAR) models. First, we describe the complexity of proteins and the properties that descriptors must accommodate. Then we introduce descriptors of shape and surface properties that quantify the global and local features of proteins. Finally, we highlight the current limitations of protein descriptors and propose strategies for the derivation of novel, more informative protein descriptors.
Self-metathesis of oleochemicals yields a variety of bifunctional compounds that can be used as monomers for polymer production. Many precursors are available on a large scale, such as oleic acid esters (biodiesel), oleyl alcohol (surfactants), and oleyl amines (surfactants, lubricants). We show several ways to produce, separate, and purify C18-α,ω-bifunctional compounds using Grubbs second-generation catalysts, starting from technical-grade starting materials.
Background: Architectural representation, nurtured by the interaction between design thinking and design action, is inherently multi-layered. However, the representation object cannot always reflect these layers. It is therefore claimed that these reflections and layerings can gain visibility through 'performativity in personal knowledge', which has an inherently performative character. The specific layers of representation produced during performativity in personal knowledge permit insights into the 'personal way of designing' [1]. The question 'how can these layered drawings be decomposed to understand the personal way of designing?' therefore forms the starting point of the study. Performativity in personal knowledge in architectural design is approached through the relationship between explicit and tacit knowledge and between representational and non-representational theory. To discuss the practical dimension of these theoretical relations, Zvi Hecker's drawing of the Heinz-Galinski-School is examined as an example. The study aims to understand the relationships between the layers by decomposing a layered drawing analytically in order to exemplify personal ways of designing.
Methods: The study is based on qualitative research methodologies. First, a model was formed through theoretical readings to discuss performativity in personal knowledge. This model is used to understand layered representations and to research the personal way of designing. To this end, one drawing of Hecker's Heinz-Galinski-School project was chosen. Second, its layers were decomposed to detect and analyze diverse objects, which hint at different types of design tools and their application. Third, Zvi Hecker's statements on the design process are explained through interview data [2] and other sources. The obtained data are compared with each other.
Results: By decomposing the drawing, eleven layers are defined. These layers are used to understand the relation between the design idea and its representation. They can also be thought of as a reading system. In other words, a method to discuss Hecker’s performativity in personal knowledge is developed. Furthermore, the layers and their interconnections are described in relation to Zvi Hecker’s personal way of designing.
Conclusions: It can be said that layered representations, which are associated with the multilayered structure of performativity in personal knowledge, form the personal way of designing.
Against the background of growing data volumes in everyday life, data processing tools are becoming more powerful in order to deal with the increasing complexity of building design. The architectural planning process is offered a variety of new instruments to design, plan, and communicate planning decisions. Ideally, access to information serves to secure and document the quality of the building; in the worst case, the increased data absorbs time in collection and processing without any benefit for the building and its users. Process models can illustrate the impact of information on the design and planning process so that architects and planners can steer it. This paper presents historical and contemporary models for visualizing the architectural planning process and introduces means to describe today's situation in terms of stakeholders, events, and instruments. It explains conceptions from the Renaissance in contrast to models used in the second half of the 20th century. Contemporary models are discussed regarding their value against the background of increasing computation in the building process.
We conducted a scoping review for active learning in the domain of natural language processing (NLP), which we summarize in accordance with the PRISMA-ScR guidelines as follows:
Objective: Identify active learning strategies that were proposed for entity recognition and their evaluation environments (datasets, metrics, hardware, execution time).
Design: We used Scopus and ACM as our search engines. We compared the results with two literature surveys to assess the search quality. We included peer-reviewed English publications introducing or comparing active learning strategies for entity recognition.
Results: We analyzed 62 relevant papers and identified 106 active learning strategies. We grouped them into three categories: exploitation-based (60x), exploration-based (14x), and hybrid strategies (32x). We found that all studies used the F1-score as an evaluation metric. Information about hardware (6x) and execution time (13x) was only occasionally included. The 62 papers used 57 different datasets to evaluate their respective strategies. Most datasets contained newspaper articles or biomedical/medical data. Our analysis revealed that 26 out of 57 datasets are publicly accessible.
Conclusion: Numerous active learning strategies have been identified, along with significant open questions that still need to be addressed. Researchers and practitioners face difficulties when making data-driven decisions about which active learning strategy to adopt. Conducting comprehensive empirical comparisons using the evaluation environment proposed in this study could help establish best practices in the domain.
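Since all reviewed studies report the F1-score, a minimal entity-level F1 computation may be useful for orientation. The counts below are hypothetical, not taken from any of the reviewed papers.

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """Entity-level F1: harmonic mean of precision and recall."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts for one evaluated strategy: 80 correctly predicted
# entities, 10 spurious predictions, 20 missed gold entities.
print(round(f1_score(80, 10, 20), 3))  # -> 0.842
```

In active learning experiments, this score is typically tracked after each annotation round to plot learning curves over the labeling budget.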
Subglacial environments on Earth offer important analogs to Ocean World targets in our solar system. These unique microbial ecosystems remain understudied due to the challenges of access through thick glacial ice (tens to hundreds of meters). Additionally, sub-ice collections must be conducted in a clean manner to ensure sample integrity for downstream microbiological and geochemical analyses. We describe the field-based cleaning of a melt probe that was used to collect brine samples from within a glacier conduit at Blood Falls, Antarctica, for geomicrobiological studies. We used a thermoelectric melting probe called the IceMole that was designed to be minimally invasive in that the logistical requirements in support of drilling operations were small and the probe could be cleaned, even in a remote field setting, so as to minimize potential contamination. In our study, the exterior bioburden on the IceMole was reduced to levels measured in most clean rooms, and below that of the ice surrounding our sampling target. Potential microbial contaminants were identified during the cleaning process; however, very few were detected in the final englacial sample collected with the IceMole and were present in extremely low abundances (∼0.063% of 16S rRNA gene amplicon sequences). This cleaning protocol can help minimize contamination when working in remote field locations, support microbiological sampling of terrestrial subglacial environments using melting probes, and help inform planetary protection challenges for Ocean World analog mission concepts.
Methane is a valuable energy source that helps to meet the growing energy demand worldwide. However, as a potent greenhouse gas, it has also gained additional attention due to its environmental impacts. Biological methane production is performed primarily hydrogenotrophically from H2 and CO2 by methanogenic archaea. Hydrogenotrophic methanogenesis is also of great interest with respect to carbon recycling and H2 storage. The most significant carbon source for microbial degradation and biogenic methane production, extremely rich in complex organic matter, is coal. Although interest in enhanced microbial coalbed methane production is continuously increasing globally, limited knowledge exists regarding the exact origins of coalbed methane and the associated microbial communities, including hydrogenotrophic methanogens. Here, we give an overview of hydrogenotrophic methanogens in coal beds and related environments in terms of their energy production mechanisms, unique metabolic pathways, and associated ecological functions.
Ga-doped Li7La3Zr2O12 garnet solid electrolytes exhibit the highest Li-ion conductivities among the oxide-type garnet-structured solid electrolytes, but their instability toward Li metal hampers practical application. This instability has previously been assigned by several groups to direct chemical reactions between coexisting LiGaO2 phases and Li metal. Yet an understanding of the role of LiGaO2 in the electrochemical cell and of its electrochemical properties is still lacking. Here, we investigate the electrochemical properties of LiGaO2 through electrochemical tests in galvanostatic cells versus Li metal and complementary ex situ studies via confocal Raman microscopy, quantitative phase analysis based on powder X-ray diffraction, energy-dispersive X-ray spectroscopy, X-ray photoelectron spectroscopy, and electron energy loss spectroscopy. The results demonstrate considerable and surprising electrochemical activity with high reversibility. A three-stage reaction mechanism is derived, including reversible electrochemical reactions that lead to the formation of highly electronically conducting products. These results have considerable implications for the use of Ga-doped Li7La3Zr2O12 electrolytes in all-solid-state Li-metal batteries and raise the need for advanced materials engineering to make Ga-doped Li7La3Zr2O12 suitable for practical use.