Easy-read and large language models: on the ethical dimensions of LLM-based text simplification
(2024)
Producing easy-read and plain language is a challenging task that requires well-trained experts to write context-dependent simplifications of texts. As a result, easy-read and plain-language content is currently restricted to the bare minimum of necessary information. Even though there is a tendency to broaden this domain, the inaccessibility of a large share of textual information excludes the target audience from participation and entertainment and restricts their ability to live autonomously. Large language models can solve a wide variety of natural language tasks, including the simplification of standard-language texts into easy-read or plain language. Moreover, with the rise of generative models such as GPT, easy-read and plain language may become applicable to all kinds of natural language texts, making formerly inaccessible information accessible to marginalized groups such as non-native speakers and people with mental disabilities. In this paper, we argue for the feasibility of text simplification and generation in this context, outline the ethical dimensions, and discuss the implications for researchers in the fields of ethics and computer science.
The quest for scientifically advanced and sustainable solutions is driven by growing environmental and economic issues associated with coal mining, processing, and utilization. Consequently, within the coal industry, there is growing recognition of the potential of microbial applications in fostering innovative technologies. Microbial-based coal solubilization, coal beneficiation, and coal dust suppression are green alternatives to traditional thermochemical and leaching technologies and better meet the need for ecologically sound and economically viable options. Surfactant-mediated approaches have emerged as powerful tools for modeling, simulation, and optimization of coal-microbial systems and continue to gain prominence in clean coal fuel production, particularly in microbiological co-processing, conversion, and beneficiation. Surfactants (surface-active agents) are amphiphilic compounds that can reduce surface tension and enhance the solubility of hydrophobic molecules. A wide range of surfactant properties can be exploited, either directly, by influencing microbial growth factors, stimulants, and substrates, or indirectly, by serving as frothers, collectors, and modifiers in the processing and utilization of coal. This review highlights the significant biotechnological potential of surfactants by providing a thorough overview of their involvement in coal biodegradation, bioprocessing, and biobeneficiation, which are crucial steps in coal utilization.
Several unconnected laboratory experiments are usually offered to students in instrumental analysis labs. To give students a more coherent overview of the most common instrumental techniques, a new laboratory experiment was developed. Marketed pain relief drugs, familiar consumer products with one to three active components, namely acetaminophen (paracetamol), acetylsalicylic acid (ASA), and caffeine, were selected. Common analytical methods were compared regarding their performance in the qualitative and quantitative analysis of unknown tablets: UV–visible (UV–vis), infrared (IR), and nuclear magnetic resonance (NMR) spectroscopies, as well as high-performance liquid chromatography (HPLC). The students successfully uncovered the composition of the formulations, which were divided into three difficulty categories. Students were shown that, in addition to the simple mixtures handled in theoretical classes, the composition of complex drug products can also be uncovered. By comparing the performance of the different techniques, students deepen their understanding of the efficiency of analytical methods in the context of complex mixtures. The laboratory experiment can be adjusted to the graduate level by including extra tasks such as method optimization, validation, and 2D spectroscopic techniques.
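For the quantitative UV–vis step described above, concentrations follow from the Beer–Lambert law, A = εlc. A minimal sketch (the molar absorptivity and absorbance values are illustrative, not taken from the experiment):

```python
def beer_lambert_concentration(absorbance, epsilon, path_length_cm=1.0):
    """Concentration in mol/L from absorbance via A = epsilon * l * c."""
    return absorbance / (epsilon * path_length_cm)

# Illustrative values for a caffeine solution in a 1 cm cuvette
c = beer_lambert_concentration(0.487, 9740.0)  # ~5e-5 mol/L
```

In practice the students would calibrate ε from standards of known concentration rather than use a literature value.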
Sexism in online media comments is a pervasive challenge that often manifests subtly, complicating moderation efforts, as interpretations of what constitutes sexism can vary among individuals. We study monolingual and multilingual open-source text embeddings to reliably detect sexism and misogyny in German-language online comments from an Austrian newspaper. We observed that classifiers trained on text embeddings closely mimic the individual judgements of human annotators. Our method showed robust performance in the GermEval 2024 GerMS-Detect Subtask 1 challenge, achieving an average macro F1 score of 0.597 (4th place, as reported on Codabench). It also accurately predicted the distribution of human annotations in GerMS-Detect Subtask 2, with an average Jensen-Shannon distance of 0.301 (2nd place). The computational efficiency of our approach suggests potential for scalable applications across various languages and linguistic contexts.
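The Jensen-Shannon distance used to score Subtask 2 compares a predicted annotation distribution with the observed one. A minimal stdlib sketch (base-2 logarithm, so the distance lies in [0, 1]; this is the generic definition, not the challenge's evaluation code):

```python
import math

def js_distance(p, q):
    """Jensen-Shannon distance between two discrete distributions."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    def kl(a, b):
        # Kullback-Leibler divergence, skipping zero-probability terms
        return sum(ai * math.log2(ai / bi) for ai, bi in zip(a, b) if ai > 0)
    return math.sqrt((kl(p, m) + kl(q, m)) / 2)

print(js_distance([0.5, 0.5], [0.5, 0.5]))  # 0.0 (identical distributions)
print(js_distance([1.0, 0.0], [0.0, 1.0]))  # 1.0 (disjoint distributions)
```

Lower values therefore indicate that the predicted label distribution is closer to the annotators' actual distribution.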
To successfully develop and introduce concrete artificial intelligence (AI) solutions in operational practice, a comprehensive process model is being tested in the WIRKsam joint project. It is based on a methodical approach that integrates human, technical, and organisational aspects and involves employees in the process. The chapter focuses on the procedure for identifying the requirements of a work system that is implementing AI in problem-driven projects and for selecting appropriate AI methods. In such projects, the use case has already been narrowed down at the start and must be fully defined in the subsequent steps. First, the existing preliminary work is presented. Based on this, an overview of all procedural steps and methods is given. Each method is then presented in detail, and good-practice approaches are shown. Finally, the developed procedure is reflected upon based on its application in nine companies.
Effective government services rely on accurate population numbers to allocate resources. In Colombia and globally, census enumeration is challenging in remote regions and where armed conflict is occurring. During census preparations, the Colombian National Administrative Department of Statistics conducted social cartography workshops, where community representatives estimated numbers of dwellings and people throughout their regions. We repurposed this information, combining it with remotely sensed buildings data and other geospatial data. To estimate building counts and population sizes, we developed hierarchical Bayesian models, trained using nearby full-coverage census enumerations and assessed using 10-fold cross-validation. We compared models to assess the relative contributions of community knowledge, remotely sensed buildings, and their combination to model fit. The Community model was unbiased but imprecise; the Satellite model was more precise but biased; and the Combination model was best for overall accuracy. Results reaffirmed the power of remotely sensed buildings data for population estimation and highlighted the value of incorporating local knowledge.
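The intuition behind the Combination model above, merging an unbiased but imprecise source (community knowledge) with a precise but biased one (satellite data), can be illustrated with a simple inverse-variance weighted average. This is a generic sketch of that principle, not the hierarchical Bayesian model from the study, and all numbers are hypothetical:

```python
def combine_estimates(estimates):
    """Inverse-variance weighted mean of (value, variance) pairs.

    More precise sources (smaller variance) receive larger weight;
    the combined variance is smaller than any single source's.
    """
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, estimates)) / total
    return value, 1.0 / total

# Hypothetical dwelling counts for one region:
# community workshop (noisy) vs satellite-derived (tighter)
est, var = combine_estimates([(1200.0, 400.0), (1050.0, 100.0)])
print(est, var)  # 1080.0 80.0
```

A full treatment would also model the satellite source's bias term, which a plain weighted average cannot correct.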
Perennial ryegrass (Lolium perenne) is an underutilized lignocellulosic biomass with several benefits, such as high availability, renewability, and biomass yield. The grass press-juice obtained from mechanical pretreatment can be used for the bio-based production of chemicals. Lactic acid is a platform chemical that has attracted attention due to its broad range of applications. For this reason, more sustainable production of lactic acid is expected to increase. In this work, lactic acid was produced using a complex medium at bench and bioreactor scale, and the results were compared to those obtained using an optimized press-juice medium. Bench-scale fermentations were carried out under pH control, and lactic acid production reached approximately 21.84 ± 0.95 g/L in the complex medium and 26.61 ± 1.2 g/L in the press-juice medium. In the bioreactor, the production yield was 0.91 ± 0.07 g/g, corresponding to a 1.4-fold increase with respect to the complex medium with fructose. For comparison with the traditional ensiling process, whole grass fractions of different varieties harvested in summer and autumn were ensiled. Ensiling showed variations in lactic acid yields, with a yield of up to 15.2% dry mass for the late-harvested samples, surpassing typical silage yields of 6–10% dry mass.
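Fermentation yields of the kind reported above reduce to simple ratios: grams of product per gram of substrate consumed, and fold change between media. A small sketch with hypothetical numbers (the substrate consumption and reference yield are illustrative, not from the study):

```python
def product_yield(product_g_per_l, substrate_consumed_g_per_l):
    """Yield Y_P/S in g product per g substrate consumed."""
    return product_g_per_l / substrate_consumed_g_per_l

def fold_change(new, reference):
    """Ratio of a new value to a reference value."""
    return new / reference

# Hypothetical: 27.3 g/L lactic acid from 30.0 g/L consumed sugar
y = product_yield(27.3, 30.0)   # 0.91 g/g
fc = fold_change(0.91, 0.65)    # 1.4-fold vs. a reference yield of 0.65 g/g
```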
Purpose: Impaired paravascular drainage of β-amyloid (Aβ) has been proposed as a contributing cause of sporadic Alzheimer’s disease (AD), as decreased cerebral blood vessel pulsatility and subsequently reduced propulsion in this pathway could lead to the accumulation and deposition of Aβ in the brain. We therefore hypothesized that pulsatility is increasingly impaired across the AD spectrum.
Patients and Methods: Using transcranial color-coded duplex sonography (TCCS), we measured the resistance index (RI) and pulsatility index (PI) of the middle cerebral artery (MCA) in healthy controls (HC, n=14) and patients with AD dementia (ADD, n=12). In a second step, we extended the sample by adding patients with mild cognitive impairment (MCI), stratified by the presence (MCI-AD, n=8) or absence (MCI-nonAD, n=8) of biomarkers indicative of underlying AD pathology, and compared RI and PI across the groups. To control for atherosclerosis as a confounder, we measured the arteriolar-to-venular ratio of retinal vessels.
Results: Left and right RI (p=0.020; p=0.027) and left PI (p=0.034) differed between HC and ADD when controlled for atherosclerosis, with AUCs of 0.776, 0.763, and 0.718, respectively. The RI and PI values of the MCI-AD group tended towards those of the ADD group, and those of the MCI-nonAD group towards the HC group. RIs and PIs were associated with disease severity (p=0.010, p=0.023).
Conclusion: Our results strengthen the hypothesis that impaired pulsatility could cause impaired amyloid clearance from the brain and thereby might contribute to the development of AD. However, further studies considering other factors possibly influencing amyloid clearance as well as larger sample sizes are needed.
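The resistance and pulsatility indices measured in this study are standard Doppler ultrasound quantities derived from peak systolic velocity (PSV), end-diastolic velocity (EDV), and the time-averaged mean velocity. A minimal sketch with illustrative MCA velocities (not values from the study):

```python
def resistance_index(psv, edv):
    """Pourcelot resistance index: RI = (PSV - EDV) / PSV."""
    return (psv - edv) / psv

def pulsatility_index(psv, edv, v_mean):
    """Gosling pulsatility index: PI = (PSV - EDV) / mean velocity."""
    return (psv - edv) / v_mean

# Illustrative middle cerebral artery velocities in cm/s
ri = resistance_index(90.0, 40.0)         # ~0.56
pi = pulsatility_index(90.0, 40.0, 60.0)  # ~0.83
```

Both indices rise as the difference between systolic and diastolic flow grows, which is why they serve as proxies for downstream vascular resistance and vessel wall stiffness.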
Purpose: A precise determination of the corneal diameter is essential for the diagnosis of various ocular diseases, for cataract and refractive surgery, and for the selection and fitting of contact lenses. The aim of this study was to investigate the agreement between two automatic methods and one manual method of corneal diameter determination and to evaluate possible diurnal variations in corneal diameter.
Patients and Methods: The horizontal white-to-white corneal diameter of 20 volunteers was measured at three fixed times of day with three methods: the Scheimpflug method (Pentacam HR, Oculus), Placido-based topography (Keratograph 5M, Oculus), and a manual method using image analysis software at a slit lamp (BQ900, Haag-Streit).
Results: The two-factor analysis of variance showed no significant effect of instrument (p = 0.117), time point (p = 0.506), or the interaction between instrument and time point (p = 0.182). Very good repeatability (intraclass correlation coefficient, ICC; quartile coefficient of dispersion, QCD) was found for all three devices. However, manual slit-lamp measurements showed a higher QCD than the automatic measurements with the Keratograph 5M and the Pentacam HR at all measurement times.
Conclusion: The manual and automated methods used in the study to determine corneal diameter showed good agreement and repeatability. No significant diurnal variations of corneal diameter were observed during the period of time studied.
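The quartile coefficient of dispersion used above as a repeatability measure is defined as (Q3 − Q1) / (Q3 + Q1). A minimal stdlib sketch with illustrative readings (not data from the study):

```python
import statistics

def qcd(values):
    """Quartile coefficient of dispersion: (Q3 - Q1) / (Q3 + Q1)."""
    q1, _, q3 = statistics.quantiles(values, n=4)
    return (q3 - q1) / (q3 + q1)

# Illustrative repeated white-to-white diameter readings in mm
print(qcd([11.8, 11.9, 12.0, 12.1, 12.2]))  # ~0.0125
```

Because it is normalized by the magnitude of the measurements, a smaller QCD means tighter clustering of repeated readings, which is why the automatic devices scored better than the manual slit-lamp method.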