Info-Web-Generation
(2004)
We introduce a new way to measure the forecast effort that analysts devote to their earnings forecasts by measuring the analyst's general effort for all covered firms. While the commonly applied effort measure is based on analyst behaviour for one firm, our measure considers analyst behaviour for all covered firms. Our general effort measure captures additional information about analyst effort and thus can identify accurate forecasts. We emphasise the importance of investigating analyst behaviour in a larger context and argue that analysts who generally devote substantial forecast effort are also likely to devote substantial effort to a specific firm, even if this effort might not be captured by a firm-specific measure. Empirical results reveal that analysts who devote higher general forecast effort issue more accurate forecasts. Additional investigations show that analysts' career prospects improve with higher general forecast effort. Our measure improves on existing methods as it has higher explanatory power regarding differences in forecast accuracy than the commonly applied effort measure. Additionally, it can address research questions that cannot be examined with a firm-specific measure. It provides a simple but comprehensive way to identify accurate analysts.
Domain experts regularly teach novice students how to perform a task. This often requires them to adjust their behavior to the less knowledgeable audience and, hence, to behave in a more didactic manner. Eye movement modeling examples (EMMEs) are a contemporary educational tool for displaying experts’ (natural or didactic) problem-solving behavior as well as their eye movements to learners. While research on expert-novice communication mainly focused on experts’ changes in explicit, verbal communication behavior, it is as yet unclear whether and how exactly experts adjust their nonverbal behavior. This study first investigated whether and how experts change their eye movements and mouse clicks (that are displayed in EMMEs) when they perform a task naturally versus teach a task didactically. Programming experts and novices initially debugged short computer codes in a natural manner. We first characterized experts’ natural problem-solving behavior by contrasting it with that of novices. Then, we explored the changes in experts’ behavior when being subsequently instructed to model their task solution didactically. Experts became more similar to novices on measures associated with experts’ automatized processes (i.e., shorter fixation durations, fewer transitions between code and output per click on the run button when behaving didactically). This adaptation might make it easier for novices to follow or imitate the expert behavior. In contrast, experts became less similar to novices for measures associated with more strategic behavior (i.e., code reading linearity, clicks on run button) when behaving didactically.
Extracting workflow nets from textual descriptions can be used to simplify guidelines or formalize textual descriptions of formal processes like business processes and algorithms. The task of manually extracting processes, however, requires domain expertise and effort. While automatic process model extraction is desirable, annotating texts with formalized process models is expensive. Therefore, there are only a few machine-learning-based extraction approaches. Rule-based approaches, in turn, require domain specificity to work well and can rarely distinguish relevant from irrelevant information in textual descriptions. In this paper, we present GUIDO, a hybrid approach to the process model extraction task that first classifies sentences regarding their relevance to the process model using a BERT-based sentence classifier, and second extracts a process model from the sentences classified as relevant using dependency parsing. The presented approach achieves significantly better results than a pure rule-based approach. GUIDO achieves an average behavioral similarity score of 0.93. Still, in comparison to purely machine-learning-based approaches, the annotation costs stay low.
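The two-stage idea from the abstract can be sketched as follows. This is an illustrative toy only, not GUIDO itself: the paper uses a BERT-based sentence classifier and dependency parsing, while the stand-ins below use keyword matching and a naive actor/action split; all names (`ACTION_VERBS`, `is_relevant`, `extract_step`) are hypothetical.

```python
# Toy sketch of a two-stage process extraction pipeline:
# stage 1 filters process-relevant sentences, stage 2 extracts steps.
ACTION_VERBS = {"check", "send", "approve", "reject", "archive"}

def is_relevant(sentence):
    """Stage 1 stand-in: flag sentences that describe a process step."""
    words = [w.rstrip(".").rstrip("s") for w in sentence.lower().split()]
    return any(v in words for v in ACTION_VERBS)

def extract_step(sentence):
    """Stage 2 stand-in: pull (actor, action) out of 'X verbs ...'."""
    tokens = sentence.rstrip(".").split()
    for i, tok in enumerate(tokens):
        stem = tok.lower().rstrip("s")
        if stem in ACTION_VERBS:
            return (" ".join(tokens[:i]), stem)
    return None

text = [
    "The process was introduced in 2019.",   # background, filtered out
    "The clerk checks the invoice.",
    "The manager approves the order.",
]
steps = [extract_step(s) for s in text if is_relevant(s)]
print(steps)  # [('The clerk', 'check'), ('The manager', 'approve')]
```

A real implementation would replace the keyword filter with a fine-tuned sentence classifier and derive actor/action pairs from a dependency parse, but the pipeline shape (relevance filtering before structured extraction) is the same.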
Goal Driven Business Modelling - Supporting Decision Making within Information System Development
(1995)
With a steady increase of regulatory requirements for business processes, automation support of compliance management is a field garnering increasing attention in Information Systems research. Several approaches have been developed to support compliance checking of process models. One major challenge for such approaches is their ability to handle different modeling techniques and compliance rules in order to enable widespread adoption and application. Applying a structured literature search strategy, we reflect and discuss compliance-checking approaches in order to provide an insight into their generalizability and evaluation. The results imply that current approaches mainly focus on special modeling techniques and/or a restricted set of types of compliance rules. Most approaches abstain from real-world evaluation which raises the question of their practical applicability. Referring to the search results, we propose a roadmap for further research in model-based business process compliance checking.
A Gamified Information System (GIS) implements game concepts and elements, such as affordances and game design principles, to motivate people. Based on the idea of developing a GIS to increase the motivation of software developers to perform software quality tasks, the research work at hand aims at investigating relevant requirements from that target group. To this end, 14 interviews with software development experts were conducted and analyzed. According to the results, software developers prefer point affordances and narrative storytelling in a multiplayer, round-based setting. Furthermore, six design principles for the development of a GIS are derived.
Researching the field of business intelligence and analytics (BI & A) has a long tradition within information systems research, with the rapid development of technologies opening new avenues of investigation in each decade. Since the early 1950s, the collection and analysis of structured data were the focus of interest, followed by unstructured data since the early 1990s. The third wave of BI & A comprises unstructured and sensor data from mobile devices. The article at hand aims at drawing a comprehensive overview of the status quo in relevant BI & A research of the current decade, focusing on the third wave of BI & A. The paper's contribution is fourfold. First, a systematically developed taxonomy for BI & A 3.0 research, containing seven dimensions and 40 characteristics, is presented. Second, the results of a structured literature review covering 75 full research papers are analyzed by applying the developed taxonomy. The analysis provides an overview of the status quo of BI & A 3.0. Third, the results foster discussion of the predicted and observed developments in BI & A research of the past decade. Fourth, research gaps of the third wave of BI & A research are disclosed and consolidated into a research agenda.
Purpose
In the determination of the measurement uncertainty, the GUM procedure requires building a measurement model that establishes a functional relationship between the measurand and all influencing quantities. Since the effort of modelling, as well as of quantifying the measurement uncertainties, depends on the number of influencing quantities considered, the aim of this study is to determine relevant influencing quantities and to remove irrelevant ones from the dataset.
Design/methodology/approach
In this work, it was investigated whether the effort of modelling for the determination of measurement uncertainty can be reduced by the use of feature selection (FS) methods. For this purpose, 9 different FS methods were tested on 16 artificial test datasets, whose properties (number of data points, number of features, complexity, features with low influence and redundant features) were varied via a design of experiments.
Findings
Based on a success metric as well as the stability, universality, and complexity of each method, two FS methods could be identified that reliably distinguish relevant from irrelevant influencing quantities for a measurement model.
Originality/value
For the first time, FS methods were applied to datasets with properties of classical measurement processes. The simulation-based results serve as a basis for further research in the field of FS for measurement models. The identified algorithms will be applied to real measurement processes in the future.
Enterprise SOA Roadmap
(2008)
Does stiffer electoral competition reduce political shirking? For a micro-analysis of this question, I construct a new data set spanning the years 2005 to 2012 covering biographical and political information about German Members of Parliament (MPs), including their attendance rates in voting sessions. For the parliament elected in 2009, I show that indeed opposition party MPs who expect to face a close race in their district show significantly and substantially lower absence rates in parliament beforehand. MPs of governing parties seem not to react significantly to electoral competition. These results are confirmed by an analysis of the parliament elected in 2005, by several robustness checks, and also by employing an instrumental variable strategy exploiting convenient peculiarities of the German electoral system. The study also shows how MPs elected via party lists react to different levels of electoral competition.
Divided government is often thought of as causing legislative deadlock. I investigate the link between divided government and economic reforms using a novel data set on welfare reforms in US states between 1978 and 2010. Panel data regressions show that, under divided government, a US state is around 25% more likely to adopt a welfare reform than under unified government. Several robustness checks confirm this counter-intuitive finding. Case study evidence suggests an explanation based on policy competition between governor, senate, and house.
Die Garantie im Kaufrecht [The guarantee in sales law]
(1995)
Determinants of earnings forecast error, earnings forecast revision and earnings forecast accuracy
(2012)
Earnings forecasts are ubiquitous in today's financial markets. They are essential indicators of future firm performance and a starting point for firm valuation. Extremely inaccurate and overoptimistic forecasts during the most recent financial crisis have raised serious doubts regarding the reliability of such forecasts. This thesis therefore investigates new determinants of forecast errors and accuracy. In addition, new determinants of forecast revisions are examined. More specifically, the thesis answers the following questions: 1) How do analyst incentives lead to forecast errors? 2) How do changes in analyst incentives lead to forecast revisions? 3) What factors drive differences in forecast accuracy?