TY - JOUR
A1 - Emhardt, Selina N.
A1 - Jarodzka, Halszka
A1 - Brand-Gruwel, Saskia
A1 - Drumm, Christian
A1 - Niehorster, Diederick C.
A1 - van Gog, Tamara
T1 - What is my teacher talking about? Effects of displaying the teacher’s gaze and mouse cursor cues in video lectures on students’ learning
JF - Journal of Cognitive Psychology
N2 - Eye movement modelling examples (EMME) are instructional videos that display a teacher’s eye movements as a “gaze cursor” (e.g. a moving dot) superimposed on the learning task. This study investigated whether previous findings on the beneficial effects of EMME would extend to online lecture videos and compared the effects of displaying the teacher’s gaze cursor with displaying the more traditional mouse cursor as a tool to guide learners’ attention. Novices (N = 124) studied a pre-recorded video lecture on how to model business processes in a 2 (mouse cursor absent/present) × 2 (gaze cursor absent/present) between-subjects design. Unexpectedly, we did not find significant effects of the presence of gaze or mouse cursors on mental effort and learning. However, participants who watched videos with the gaze cursor found it easier to follow the teacher. Overall, participants responded positively to the gaze cursor, especially when the mouse cursor was not displayed in the video.
KW - Instructional design
KW - eye movement modelling examples
KW - video learning
Y1 - 2022
U6 - http://dx.doi.org/10.1080/20445911.2022.2080831
SN - 2044-5911
SP - 1
EP - 19
PB - Routledge, Taylor & Francis Group
CY - Abingdon
ER -
TY - JOUR
A1 - Mueller, Tobias
A1 - Segin, Alexander
A1 - Weigand, Christoph
A1 - Schmitt, Robert H.
T1 - Feature selection for measurement models
JF - International Journal of Quality & Reliability Management
N2 - Purpose: In the determination of measurement uncertainty, the GUM procedure requires building a measurement model that establishes a functional relationship between the measurand and all influencing quantities. Since the effort of modelling as well as of quantifying the measurement uncertainties depends on the number of influencing quantities considered, the aim of this study is to determine relevant influencing quantities and to remove irrelevant ones from the dataset. Design/methodology/approach: In this work, it was investigated whether the effort of modelling for the determination of measurement uncertainty can be reduced by the use of feature selection (FS) methods. For this purpose, 9 different FS methods were tested on 16 artificial test datasets, whose properties (number of data points, number of features, complexity, features with low influence and redundant features) were varied via a design of experiments. Findings: Based on a success metric and the stability, universality and complexity of the method, two FS methods could be identified that reliably identify relevant and irrelevant influencing quantities for a measurement model. Originality/value: For the first time, FS methods were applied to datasets with properties of classical measurement processes. The simulation-based results serve as a basis for further research in the field of FS for measurement models. The identified algorithms will be applied to real measurement processes in the future.
KW - Feature selection
KW - Modelling
KW - Measurement models
KW - Measurement uncertainty
Y1 - 2022
U6 - http://dx.doi.org/10.1108/IJQRM-07-2021-0245
SN - 0265-671X
IS - Vol. ahead-of-print, No. ahead-of-print
PB - Emerald Group Publishing Limited
CY - Bingley
ER -
TY - JOUR
A1 - Kempt, Hendrik
A1 - Freyer, Nils
A1 - Nagel, Saskia K.
T1 - Justice and the normative standards of explainability in healthcare
JF - Philosophy & Technology
N2 - Providing healthcare services frequently involves cognitively demanding tasks, including diagnoses and analyses as well as complex decisions about treatments and therapy. From a global perspective, ethically significant inequalities exist between regions where the expert knowledge required for these tasks is scarce or abundant. One possible strategy to diminish such inequalities and increase healthcare opportunities in expert-scarce settings is to provide healthcare solutions involving digital technologies that do not necessarily require the presence of a human expert, e.g., in the form of artificially intelligent decision-support systems (AI-DSS). Such algorithmic decision-making, however, is mostly developed in resource- and expert-abundant settings to support healthcare experts in their work. As a practical consequence, the normative standards and requirements for such algorithmic decision-making in healthcare require the technology to be at least as explainable as the decisions made by the experts themselves. The goal of providing healthcare in settings where resources and expertise are scarce might come with a normative pull to lower the standards of using digital technologies in order to provide at least some healthcare in the first place. We scrutinize this tendency to lower standards in particular settings from a normative perspective, distinguish between different types of absolute and relative, local and global standards of explainability, and conclude by defending an ambitious and practicable standard of local relative explainability.
KW - Clinical decision support systems
KW - Justice
KW - Medical AI
KW - Explainability
KW - Normative standards
Y1 - 2022
U6 - http://dx.doi.org/10.1007/s13347-022-00598-0
VL - 35
IS - Article number: 100
SP - 1
EP - 19
PB - Springer Nature
CY - Berlin
ER -