TY - CHAP
A1 - König, Johannes Alexander
A1 - Kaiser, Steffen
A1 - Wolf, Martin
T1 - Entwicklung und Evaluierung eines Regeleditors für die grafische Erstellung von „Smart Living Environment“-Services
T2 - Angewandte Forschung in der Wirtschaftsinformatik 2019 : Tagungsband zur 32. AKWI-Jahrestagung / hrsg. von Martin R. Wolf, Thomas Barton, Frank Herrmann, Vera G. Meister, Christian Müller, Christian Seel
N2 - This paper presents the development and evaluation of a graphical rule editor for creating "Smart Living Environment" services. First, the derivation and implementation of the graphical rule editor are explained. Subsequently, a user study is presented in which the added value is determined with regard to the aspects of time, error-proneness, and usability.
Y1 - 2019
SN - 978-3-944330-62-4
SP - 234
EP - 243
PB - mana-Buch
CY - Heide
ER -

TY - CHAP
A1 - Kohl, Philipp
A1 - Freyer, Nils
A1 - Krämer, Yoka
A1 - Werth, Henri
A1 - Wolf, Steffen
A1 - Kraft, Bodo
A1 - Meinecke, Matthias
A1 - Zündorf, Albert
ED - Conte, Donatello
ED - Fred, Ana
ED - Gusikhin, Oleg
ED - Sansone, Carlo
T1 - ALE: a simulation-based active learning evaluation framework for the parameter-driven comparison of query strategies for NLP
T2 - Deep Learning Theory and Applications. DeLTA 2023. Communications in Computer and Information Science
N2 - Supervised machine learning and deep learning require large amounts of labeled data, which data scientists obtain in a manual and time-consuming annotation process. To mitigate this challenge, Active Learning (AL) proposes promising data points for annotators to annotate next, instead of a sequential or random sample. This method is intended to save annotation effort while maintaining model performance. However, practitioners face many AL strategies for different tasks and need an empirical basis to choose between them. Surveys categorize AL strategies into taxonomies without performance indications, and papers presenting novel AL strategies compare their performance only to a small subset of existing strategies. Our contribution addresses this lack of an empirical basis by introducing a reproducible active learning evaluation (ALE) framework for the comparative evaluation of AL strategies in NLP. The framework allows implementing AL strategies with low effort and enables a fair, data-driven comparison by defining and tracking experiment parameters (e.g., initial dataset size, number of data points per query step, and budget). ALE helps practitioners make more informed decisions, and researchers can focus on developing new, effective AL strategies and deriving best practices for specific use cases. With best practices, practitioners can lower their annotation costs. We present a case study to illustrate how to use the framework.
KW - Active learning
KW - Query learning
KW - Natural language processing
KW - Deep learning
KW - Reproducible research
Y1 - 2023
SN - 978-3-031-39058-6 (Print)
SN - 978-3-031-39059-3 (Online)
U6 - http://dx.doi.org/10.1007/978-3-031-39059-3
N1 - 4th International Conference, DeLTA 2023, Rome, Italy, July 13–14, 2023.
SP - 235
EP - 253
PB - Springer
CY - Cham
ER -