TY - CHAP
A1 - Kohl, Philipp
A1 - Freyer, Nils
A1 - Krämer, Yoka
A1 - Werth, Henri
A1 - Wolf, Steffen
A1 - Kraft, Bodo
A1 - Meinecke, Matthias
A1 - Zündorf, Albert
ED - Conte, Donatello
ED - Fred, Ana
ED - Gusikhin, Oleg
ED - Sansone, Carlo
T1 - ALE: a simulation-based active learning evaluation framework for the parameter-driven comparison of query strategies for NLP
T2 - Deep Learning Theory and Applications. DeLTA 2023. Communications in Computer and Information Science
N2 - Supervised machine learning and deep learning require a large amount of labeled data, which data scientists obtain in a manual, time-consuming annotation process. To mitigate this challenge, Active Learning (AL) proposes promising data points for annotators to annotate next instead of a sequential or random sample. This method is supposed to save annotation effort while maintaining model performance. However, practitioners face many AL strategies for different tasks and need an empirical basis to choose between them. Surveys categorize AL strategies into taxonomies without performance indications. Presentations of novel AL strategies compare their performance to a small subset of strategies. Our contribution addresses the empirical basis by introducing a reproducible active learning evaluation (ALE) framework for the comparative evaluation of AL strategies in NLP. The framework allows the implementation of AL strategies with low effort and a fair data-driven comparison by defining and tracking experiment parameters (e.g., initial dataset size, number of data points per query step, and the budget). ALE helps practitioners make more informed decisions, and researchers can focus on developing new, effective AL strategies and deriving best practices for specific use cases. With best practices, practitioners can lower their annotation costs. We present a case study to illustrate how to use the framework.
KW - Active learning
KW - Query learning
KW - Natural language processing
KW - Deep learning
KW - Reproducible research
Y1 - 2023
SN - 978-3-031-39058-6 (Print)
SN - 978-3-031-39059-3 (Online)
U6 - http://dx.doi.org/10.1007/978-3-031-39059-3
N1 - 4th International Conference, DeLTA 2023, Rome, Italy, July 13–14, 2023.
SP - 235
EP - 253
PB - Springer
CY - Cham
ER -

TY - JOUR
A1 - Kempt, Hendrik
A1 - Freyer, Nils
A1 - Nagel, Saskia K.
T1 - Justice and the normative standards of explainability in healthcare
JF - Philosophy & Technology
N2 - Providing healthcare services frequently involves cognitively demanding tasks, including diagnoses and analyses as well as complex decisions about treatments and therapy. From a global perspective, ethically significant inequalities exist between regions where the expert knowledge required for these tasks is scarce or abundant. One possible strategy to diminish such inequalities and increase healthcare opportunities in expert-scarce settings is to provide healthcare solutions involving digital technologies that do not necessarily require the presence of a human expert, e.g., in the form of artificially intelligent decision-support systems (AI-DSS). Such algorithmic decision-making, however, is mostly developed in resource- and expert-abundant settings to support healthcare experts in their work. As a practical consequence, the normative standards and requirements for such algorithmic decision-making in healthcare require the technology to be at least as explainable as the decisions made by the experts themselves. The goal of providing healthcare in settings where resources and expertise are scarce might come with a normative pull to lower the normative standards of using digital technologies in order to provide at least some healthcare in the first place.
We scrutinize this tendency to lower standards in particular settings from a normative perspective, distinguish between different types of absolute and relative, local and global standards of explainability, and conclude by defending an ambitious and practicable standard of local relative explainability.
KW - Clinical decision support systems
KW - Justice
KW - Medical AI
KW - Explainability
KW - Normative standards
Y1 - 2022
U6 - http://dx.doi.org/10.1007/s13347-022-00598-0
VL - 35
IS - Article number: 100
SP - 1
EP - 19
PB - Springer Nature
CY - Berlin
ER -

TY - CHAP
A1 - Freyer, Nils
A1 - Thewes, Dustin
A1 - Meinecke, Matthias
ED - Gusikhin, Oleg
ED - Hammoudi, Slimane
ED - Cuzzocrea, Alfredo
T1 - GUIDO: a hybrid approach to guideline discovery & ordering from natural language texts
T2 - Proceedings of the 12th International Conference on Data Science, Technology and Applications DATA - Volume 1
N2 - Extracting workflow nets from textual descriptions can be used to simplify guidelines or formalize textual descriptions of formal processes like business processes and algorithms. The task of manually extracting processes, however, requires domain expertise and effort. While automatic process model extraction is desirable, annotating texts with formalized process models is expensive. Therefore, there are only a few machine-learning-based extraction approaches. Rule-based approaches, in turn, require domain specificity to work well and can rarely distinguish relevant and irrelevant information in textual descriptions. In this paper, we present GUIDO, a hybrid approach to the process model extraction task that first classifies sentences regarding their relevance to the process model, using a BERT-based sentence classifier, and second extracts a process model from the sentences classified as relevant, using dependency parsing. The presented approach achieves significantly better results than a pure rule-based approach.
GUIDO achieves an average behavioral similarity score of 0.93. Still, in comparison to purely machine-learning-based approaches, the annotation costs stay low.
KW - Natural Language Processing
KW - Text Mining
KW - Process Model Extraction
KW - Business Process Intelligence
Y1 - 2023
SN - 978-989-758-664-4
U6 - http://dx.doi.org/10.5220/0012084400003541
SN - 2184-285X
N1 - Proceedings of the 12th International Conference on Data Science, Technology and Applications, July 11–13, 2023, Rome, Italy.
SP - 335
EP - 342
ER -

TY - CHAP
A1 - Freyer, Nils
A1 - Kempt, Hendrik
ED - Bhakuni, Himani
ED - Miotto, Lucas
T1 - AI-DSS in healthcare and their power over health-insecure collectives
T2 - Justice in global health
N2 - AI-based systems are nearing ubiquity not only in everyday low-stakes activities but also in medical procedures. To protect patients and physicians alike, explainability requirements have been proposed for the operation of AI-based decision support systems (AI-DSS), which adds hurdles to the productive use of AI in clinical contexts. This raises two questions: Who decides these requirements? And how should access to AI-DSS be provided to communities that reject these standards (particularly when such communities are expert-scarce)? This chapter investigates a dilemma that emerges from the implementation of global AI governance. While rejecting global AI governance limits the ability to help communities in need, global AI governance risks undermining health-insecure communities and subjecting them to the force of the neo-colonial world order. To this end, the chapter first surveys the current landscape of AI governance and introduces the approach of relational egalitarianism as key to (global health) justice. To discuss the two horns of this dilemma, the core power imbalances faced by health-insecure collectives (HICs) are examined.
The chapter argues that only the strong demands of a dual strategy towards health-secure collectives can both remedy the immediate needs of HICs and enable them to become healthcare-independent.
Y1 - 2023
SN - 9781003399933
U6 - http://dx.doi.org/10.4324/9781003399933-4
SP - 38
EP - 55
PB - Routledge
CY - London
ER -