Einleitung vor § 1297
(2014)
Vorbemerkung vor § 1297
(2014)
Vorbemerkung vor § 1353
(2014)
Supervised machine learning and deep learning require large amounts of labeled data, which data scientists obtain through a manual, time-consuming annotation process. To mitigate this challenge, Active Learning (AL) proposes promising data points for annotators to label next, instead of a sequential or random sample. This approach aims to reduce annotation effort while maintaining model performance.
However, practitioners face many AL strategies for different tasks and lack an empirical basis for choosing between them. Surveys categorize AL strategies into taxonomies without indicating their performance, and presentations of novel AL strategies compare performance against only a small subset of existing strategies. Our contribution addresses this gap by introducing a reproducible active learning evaluation (ALE) framework for the comparative evaluation of AL strategies in NLP.
The framework allows AL strategies to be implemented with low effort and compared fairly in a data-driven way by defining and tracking experiment parameters (e.g., initial dataset size, number of data points per query step, and the annotation budget). ALE helps practitioners make more informed decisions, while researchers can focus on developing new, effective AL strategies and deriving best practices for specific use cases; with such best practices, practitioners can lower their annotation costs. We present a case study illustrating how to use the framework.
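The query loop described above can be sketched in a few lines. This is a minimal illustration, not the ALE framework's actual API: the function names, the toy probability model, and the parameter values are hypothetical stand-ins for the experiment parameters the abstract mentions (initial dataset size, query step size, budget).

```python
def uncertainty_sampling(unlabeled, predict_proba, query_size):
    """Least-confidence uncertainty sampling: pick the query_size points
    whose predicted positive-class probability is closest to 0.5."""
    scored = sorted(unlabeled, key=lambda x: abs(predict_proba(x) - 0.5))
    return scored[:query_size]

# Hypothetical experiment parameters of the kind ALE tracks:
INITIAL_SIZE = 10   # initial labeled dataset size
QUERY_SIZE = 5      # data points queried per step
BUDGET = 25         # total annotation budget

pool = list(range(100))
labeled = pool[:INITIAL_SIZE]        # seed set, already annotated
unlabeled = pool[INITIAL_SIZE:]

def toy_proba(x):
    # Stand-in for a trained model's positive-class probability.
    return (x % 10) / 10

while len(labeled) < BUDGET and unlabeled:
    query = uncertainty_sampling(unlabeled, toy_proba, QUERY_SIZE)
    labeled.extend(query)            # annotators label the queried points
    unlabeled = [x for x in unlabeled if x not in query]
    # A real run would retrain the model on `labeled` here.

print(len(labeled))  # stops once the annotation budget is exhausted
```

Swapping `uncertainty_sampling` for another strategy while keeping the parameters fixed is exactly the kind of controlled comparison the framework is meant to support.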
Außerbilanzielle Korrekturen
(2005)
Sonderbilanz und Status
(2003)
Pro-Forma-Angaben
(2002)
Außerbilanzielle Korrekturen
(2012)
Developments in legal informatics and information law show that these disciplines currently face the challenge of establishing interdisciplinary collaboration, both with each other and with other disciplines. Differing publication cultures make this goal harder to achieve. Research portals are topic-specific, internet-based directories that make already existing information accessible in a structured way. They can strengthen the relationships between the disciplines by publicizing research results across disciplinary boundaries, thereby helping to identify potential synergies and possible cooperation partners.
We introduce a new way to measure the forecast effort that analysts devote to their earnings forecasts by measuring the analyst's general effort for all covered firms. While the commonly applied effort measure is based on analyst behaviour for one firm, our measure considers analyst behaviour for all covered firms. Our general effort measure captures additional information about analyst effort and thus can identify accurate forecasts. We emphasise the importance of investigating analyst behaviour in a larger context and argue that analysts who generally devote substantial forecast effort are also likely to devote substantial effort to a specific firm, even if this effort might not be captured by a firm-specific measure. Empirical results reveal that analysts who devote higher general forecast effort issue more accurate forecasts. Additional investigations show that analysts' career prospects improve with higher general forecast effort. Our measure improves on existing methods as it has higher explanatory power regarding differences in forecast accuracy than the commonly applied effort measure. Additionally, it can address research questions that cannot be examined with a firm-specific measure. It provides a simple but comprehensive way to identify accurate analysts.
Providing healthcare services frequently involves cognitively demanding tasks, including diagnoses and analyses as well as complex decisions about treatments and therapy. From a global perspective, ethically significant inequalities exist between regions where the expert knowledge required for these tasks is scarce or abundant. One possible strategy to diminish such inequalities and increase healthcare opportunities in expert-scarce settings is to provide healthcare solutions involving digital technologies that do not necessarily require the presence of a human expert, e.g., in the form of artificial intelligent decision-support systems (AI-DSS). Such algorithmic decision-making, however, is mostly developed in resource- and expert-abundant settings to support healthcare experts in their work. As a practical consequence, the normative standards and requirements for such algorithmic decision-making in healthcare require the technology to be at least as explainable as the decisions made by the experts themselves. The goal of providing healthcare in settings where resources and expertise are scarce might come with a normative pull to lower the normative standards of using digital technologies in order to provide at least some healthcare in the first place. We scrutinize this tendency to lower standards in particular settings from a normative perspective, distinguish between different types of absolute and relative, local and global standards of explainability, and conclude by defending an ambitious and practicable standard of local relative explainability.
CO2-Emissionshandel
(2006)
Innovationstätigkeit im Verarbeitenden Gewerbe und Bergbau hat zugenommen / Janz, N. und K. Voß
(1999)
Innovationsreport Großhandel
(1997)