TY - JOUR
A1 - Bialonski, Stephan
A1 - Grieger, Niklas
T1 - Der KI-Chatbot ChatGPT: Eine Herausforderung für die Hochschulen
JF - Die neue Hochschule
N2 - Essays, poems, program code: ChatGPT automatically generates texts at a previously unattained level. This system and its successors will profoundly change not only the academic world.
Y1 - 2023
U6 - http://dx.doi.org/10.5281/zenodo.7533758
SN - 0340-448X
VL - 2023
IS - 1
SP - 24
EP - 27
PB - HLB
CY - Bonn
ER -

TY - INPR
A1 - Grieger, Niklas
A1 - Mehrkanoon, Siamak
A1 - Bialonski, Stephan
T1 - Preprint: Data-efficient sleep staging with synthetic time series pretraining
T2 - arXiv
N2 - Analyzing electroencephalographic (EEG) time series can be challenging, especially with deep neural networks, due to the large variability among human subjects and often small datasets. To address these challenges, various strategies, such as self-supervised learning, have been suggested, but they typically rely on extensive empirical datasets. Inspired by recent advances in computer vision, we propose a pretraining task termed "frequency pretraining" to pretrain a neural network for sleep staging by predicting the frequency content of randomly generated synthetic time series. Our experiments demonstrate that our method surpasses fully supervised learning in scenarios with limited data and few subjects, and matches its performance in regimes with many subjects. Furthermore, our results underline the relevance of frequency information for sleep stage scoring, while also demonstrating that deep neural networks utilize information beyond frequencies to enhance sleep staging performance, which is consistent with previous research. We anticipate that our approach will be advantageous across a broad spectrum of applications where EEG data is limited or derived from a small number of subjects, including the domain of brain-computer interfaces.
Y1 - 2024
ER -

TY - CHAP
A1 - Bornheim, Tobias
A1 - Grieger, Niklas
A1 - Bialonski, Stephan
T1 - FHAC at GermEval 2021: Identifying German toxic, engaging, and fact-claiming comments with ensemble learning
T2 - Proceedings of the GermEval 2021 Workshop on the Identification of Toxic, Engaging, and Fact-Claiming Comments: 17th Conference on Natural Language Processing KONVENS 2021
Y1 - 2021
U6 - http://dx.doi.org/10.48415/2021/fhw5-x128
SP - 105
EP - 111
PB - Heinrich Heine University
CY - Düsseldorf
ER -

TY - CHAP
A1 - Blaneck, Patrick Gustav
A1 - Bornheim, Tobias
A1 - Grieger, Niklas
A1 - Bialonski, Stephan
T1 - Automatic readability assessment of German sentences with transformer ensembles
T2 - Proceedings of the GermEval 2022 Workshop on Text Complexity Assessment of German Text
N2 - Reliable methods for automatic readability assessment have the potential to impact a variety of fields, ranging from machine translation to self-informed learning. Recently, large language models for the German language (such as GBERT and GPT-2-Wechsel) have become available, allowing the development of Deep Learning based approaches that promise to further improve automatic readability assessment. In this contribution, we studied the ability of ensembles of fine-tuned GBERT and GPT-2-Wechsel models to reliably predict the readability of German sentences. We combined these models with linguistic features and investigated the dependence of prediction performance on ensemble size and composition. Mixed ensembles of GBERT and GPT-2-Wechsel performed better than ensembles of the same size consisting of only GBERT or GPT-2-Wechsel models. Our models were evaluated in the GermEval 2022 Shared Task on Text Complexity Assessment on data of German sentences. On out-of-sample data, our best ensemble achieved a root mean squared error of 0.435.
Y1 - 2022
U6 - http://dx.doi.org/10.48550/arXiv.2209.04299
N1 - Proceedings of the 18th Conference on Natural Language Processing / Konferenz zur Verarbeitung natürlicher Sprache (KONVENS 2022), 12-15 September 2022, University of Potsdam, Potsdam, Germany
SP - 57
EP - 62
PB - Association for Computational Linguistics
CY - Potsdam
ER -
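
The "frequency pretraining" task described in the sleep staging preprint above lends itself to a short illustration: a small network is first trained to predict which frequency bands are present in randomly generated synthetic signals, and its feature extractor is then reused for sleep staging on real EEG. The following Python sketch only illustrates that idea under assumed band definitions, architecture, and hyperparameters; it is not the authors' implementation.

# Minimal sketch of the "frequency pretraining" idea: pretrain on synthetic
# signals whose frequency content is known by construction. Band limits,
# architecture, and hyperparameters are illustrative assumptions only.
import numpy as np
import torch
import torch.nn as nn

FS = 100                 # assumed sampling rate (Hz)
SECONDS = 3              # assumed segment length (s)
BANDS = [(0.5, 4), (4, 8), (8, 13), (13, 30)]  # assumed frequency bands (Hz)

def make_example(rng):
    """Generate one synthetic signal and a multi-hot label of active bands."""
    t = np.arange(FS * SECONDS) / FS
    label = rng.integers(0, 2, size=len(BANDS))         # which bands are present
    signal = rng.normal(0.0, 0.1, size=t.shape)         # noise floor
    for active, (f_lo, f_hi) in zip(label, BANDS):
        if active:
            freq = rng.uniform(f_lo, f_hi)               # random frequency in band
            phase = rng.uniform(0.0, 2.0 * np.pi)
            signal += np.sin(2.0 * np.pi * freq * t + phase)
    return signal.astype(np.float32), label.astype(np.float32)

# Tiny 1D CNN standing in for the sleep-staging feature extractor.
model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(16, len(BANDS)),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()     # multi-label: several bands may be active

rng = np.random.default_rng(0)
for _ in range(200):                 # pretraining uses synthetic data only
    xs, ys = zip(*(make_example(rng) for _ in range(32)))
    x = torch.from_numpy(np.stack(xs)).unsqueeze(1)     # shape: (batch, 1, time)
    y = torch.from_numpy(np.stack(ys))
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()
    optimizer.step()

# The pretrained convolutional layers would then be fine-tuned on the
# (possibly small) labeled EEG dataset for the actual sleep staging task.

Predicting one output per band (multi-label) mirrors the notion that several frequency components can co-occur in a single segment; how closely this matches the preprint's setup is an assumption here.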