Reliable methods for automatic readability assessment have the potential to impact a variety of fields, ranging from machine translation to self-informed learning. Recently, large language models for German (such as GBERT and GPT-2-Wechsel) have become available, enabling deep-learning-based approaches that promise to further improve automatic readability assessment. In this contribution, we studied the ability of ensembles of fine-tuned GBERT and GPT-2-Wechsel models to reliably predict the readability of German sentences. We combined these models with linguistic features and investigated how prediction performance depends on ensemble size and composition. Mixed ensembles of GBERT and GPT-2-Wechsel models performed better than same-sized ensembles consisting of only GBERT or only GPT-2-Wechsel models. Our models were evaluated in the GermEval 2022 Shared Task on Text Complexity Assessment on German sentence data. On out-of-sample data, our best ensemble achieved a root mean squared error of 0.435.
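As a minimal illustration of the two ingredients the abstract mentions — averaging the scores of several individual models into an ensemble prediction, and evaluating with root mean squared error (RMSE) — the following sketch uses only placeholder values; the model names in the comments and all numbers are assumptions for illustration, not the authors' actual code or data.

```python
import math

def ensemble_predict(per_model_scores):
    """Average the per-sentence readability scores of several models."""
    n_models = len(per_model_scores)
    n_sentences = len(per_model_scores[0])
    return [sum(m[i] for m in per_model_scores) / n_models
            for i in range(n_sentences)]

def rmse(predictions, targets):
    """Root mean squared error between predictions and gold scores."""
    return math.sqrt(
        sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(targets)
    )

# Hypothetical scores from three fine-tuned models on four sentences
# (made-up values; in the paper these would come from GBERT and
# GPT-2-Wechsel regressors plus linguistic features).
model_scores = [
    [1.2, 2.8, 3.1, 4.0],
    [1.0, 3.0, 3.3, 3.8],
    [1.1, 2.9, 3.2, 4.2],
]
gold = [1.0, 3.0, 3.0, 4.0]

ensemble = ensemble_predict(model_scores)
print([round(s, 2) for s in ensemble])  # averaged per-sentence scores
print(round(rmse(ensemble, gold), 3))   # ensemble RMSE on this toy data
```

Averaging tends to cancel uncorrelated errors of the individual models, which is one intuition for why mixed ensembles of architecturally different models (GBERT plus GPT-2-Wechsel) can outperform homogeneous ones of the same size.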