Many factors make today’s software development increasingly complex, such as time pressure, new technologies, and IT security risks. Thorough preparation of current as well as future software developers, in the form of sound software engineering education, is therefore becoming increasingly important. As current research shows, Competence Developing Games (CDGs) and Serious Games can offer a potential solution.
This paper identifies the requirements that CDGs must meet to be conducive to education in general, and to software engineering (SE) education in particular. For this purpose, the current state of research was summarized in a literature review. Afterwards, some of the identified requirements, as well as some additional ones, were evaluated in a survey with respect to their subjective relevance.
The construction of a statistical test is investigated which is based only on “reliability” and “precision” as quality criteria. The reliability of a statistical test is quantified in a straightforward way by the probability that the decision of the test is correct. However, the quantification of the precision of a statistical test is not at all evident; therefore, the paper presents and discusses several approaches. Moreover, the distinction between “null hypothesis” and “alternative hypothesis” is no longer necessary.
Optimal Adjustment Policies
(1990)
This paper investigates the extent to which corporate governance affects the cost of debt and equity capital of German exchange-listed companies. I examine corporate governance along three dimensions: financial information quality, ownership structure and board structure. The results suggest that firms with high levels of financial transparency and bonus compensations face lower cost of equity. In addition, block ownership is negatively related to firms' cost of equity when the blockholders are other firms, managers or founding-family members. Consistent with the conjecture that agency costs increase with firm size, I find significant cost of debt effects only in the largest German companies. Here, the creditors demand lower cost of debt from firms with block ownerships held by corporations or banks. My findings demonstrate that a uniform set of governance attributes is unlikely to satisfy suppliers of debt and equity capital equally.
Bitcoin is a cryptocurrency and is considered a high-risk asset class whose price changes are difficult to predict. Current research focuses on daily price movements with a limited number of predictors. The paper at hand aims at identifying measurable indicators of Bitcoin price movements and at developing a suitable forecasting model for hourly changes. The paper provides three research contributions. First, a set of significant indicators for predicting the Bitcoin price is identified. Second, the results of a trained Long Short-Term Memory (LSTM) neural network that predicts price changes on an hourly basis are presented and compared with those of other algorithms. Third, the results foster discussions of the applicability of neural nets for stock price predictions. In total, 47 input features for a period of over 10 months could be retrieved to train a neural net that predicts Bitcoin price movements with an error rate of 3.52 %.
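The abstract gives no implementation details; the following is a minimal, hypothetical sketch of the kind of setup it describes, i.e. an LSTM classifier trained on windows of hourly features to predict the direction of the next price change. The window length, layer size and training settings are assumptions and are not taken from the paper; only the count of 47 input features comes from the abstract.

```python
# Hypothetical sketch of an hourly Bitcoin movement classifier with an LSTM.
# Feature set, window length and architecture are illustrative assumptions,
# not the configuration used in the paper.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

N_FEATURES = 47   # the abstract reports 47 input features
WINDOW = 24       # assumed look-back of 24 hours

def make_windows(features: np.ndarray, target: np.ndarray, window: int = WINDOW):
    """Slice an (hours, features) matrix into overlapping look-back windows."""
    X, y = [], []
    for t in range(window, len(features)):
        X.append(features[t - window:t])
        y.append(target[t])          # 1 = price rose in hour t, 0 = fell
    return np.array(X), np.array(y)

model = Sequential([
    LSTM(64, input_shape=(WINDOW, N_FEATURES)),
    Dense(1, activation="sigmoid"),  # probability of an upward move
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# X_train, y_train would come from roughly 10 months of hourly data, as in the study:
# model.fit(X_train, y_train, epochs=20, batch_size=64, validation_split=0.1)
```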
Adaptive logistics : information management for planning and control of small series assembly
(2007)
We analyze the trading behavior of individual investors in option-like securities, namely bank-issued warrants, and thus expand the growing literature on investor behavior to a new kind of security. A unique data set from a large German discount broker gives us the opportunity to analyze the trading behavior of 1,454 investors, making 89,958 transactions in 6,724 warrants on 397 underlyings. In different logit regressions, we make use of the facts that investors can speculate on rising and falling prices of the underlying with call and put warrants and that we also have information about the stock portfolios of the investors. We report several facts about the trading behavior of individual investors in warrants that are consistent with the literature on the behavior of individual investors in the stock market. The warrant investors buy calls and sell puts if the price of the underlying has decreased over the past trading days, and they sell calls and buy puts if the price of the underlying has increased. That means the investors follow negative feedback trading strategies in all four trading categories observed. In addition, we find strong evidence for the disposition effect for call as well as put warrants, which is reversed in December. The trading behavior is also influenced when the underlying reaches certain exceptional prices, e.g. highs, lows or the strike price. We show that hedging, one natural motive for buying puts, does not play an important role in the market for bank-issued warrants.
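As an illustration of the kind of logit regression the abstract describes (trading decisions explained by the recent performance of the underlying), here is a hedged sketch using statsmodels. The file name, column names and lag structure are purely hypothetical and not taken from the study.

```python
# Hypothetical sketch of one of the logit regressions described: does the
# probability of buying a call warrant depend on the underlying's recent
# return? Variable names and the lag structure are illustrative assumptions.
import pandas as pd
import statsmodels.api as sm

# trades: one row per transaction with (assumed) columns
#   buy_call      1 if the transaction is a call purchase, else 0
#   ret_5d        underlying return over the previous five trading days
#   at_52w_high   1 if the underlying is at a 52-week high
df = pd.read_csv("warrant_trades.csv")

X = sm.add_constant(df[["ret_5d", "at_52w_high"]])
model = sm.Logit(df["buy_call"], X).fit()
print(model.summary())
# A negative coefficient on ret_5d would indicate negative feedback trading:
# calls are bought after the underlying has fallen.
```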
Leveraging Social Network Data for Analytical CRM Strategies - The Introduction of Social BI.
(2012)
Knowledge Management
(2001)
A Cooperative Work Environment for Evolutionary Software Development / Kurbel, K., Pietsch, W.
(1990)
A Portable Implementation of Index Sequential Input-Output [Part 1] / Kurbel, Karl; Pietsch, W.
(1986)
A Portable Implementation of Index Sequential Input-Output [Part 2] / Kurbel, Karl; Pietsch, W.
(1986)
IT Service Deployment
(2007)
IT products are viewed and managed differently depending on the perspective and the stage within the life cycle. A model is presented that integrates different perspectives and stages, serving as an aid for analyzing business models and for the focused positioning of IT products. Four generic business models are analysed with regard to the product management function in general and the positioning field for IT products specifically: off-the-shelf (license), license plus service, project, and system service (incl. cloud computing).
Purpose
In the determination of the measurement uncertainty, the GUM procedure requires the building of a measurement model that establishes a functional relationship between the measurand and all influencing quantities. Since the effort of modelling as well as of quantifying the measurement uncertainties depends on the number of influencing quantities considered, the aim of this study is to determine the relevant influencing quantities and to remove irrelevant ones from the dataset.
Design/methodology/approach
In this work, it was investigated whether the effort of modelling for the determination of measurement uncertainty can be reduced by the use of feature selection (FS) methods. For this purpose, 9 different FS methods were tested on 16 artificial test datasets, whose properties (number of data points, number of features, complexity, features with low influence and redundant features) were varied via a design of experiments.
Findings
Based on a success metric as well as the stability, universality and complexity of each method, two FS methods could be identified that reliably distinguish relevant from irrelevant influencing quantities for a measurement model.
Originality/value
For the first time, FS methods were applied to datasets with properties of classical measurement processes. The simulation-based results serve as a basis for further research in the field of FS for measurement models. The identified algorithms will be applied to real measurement processes in the future.
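As a rough illustration of what such a feature-selection screening can look like, here is a minimal sketch using scikit-learn on an artificial regression dataset. The two generic approaches shown (a mutual-information filter and random-forest importances) are stand-ins; the nine FS methods, the success metric and the design of experiments used in the study are not reproduced here.

```python
# Minimal sketch: screening influencing quantities of a synthetic "measurement
# model" with two generic feature-selection approaches from scikit-learn.
# Estimator choice, dataset size and thresholds are illustrative assumptions.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import mutual_info_regression

# Artificial test dataset: 10 influencing quantities, only 4 truly informative.
X, y = make_regression(n_samples=500, n_features=10, n_informative=4,
                       noise=0.1, random_state=0)

# Filter approach: rank quantities by mutual information with the measurand.
mi = mutual_info_regression(X, y, random_state=0)

# Embedded approach: rank quantities by random-forest feature importance.
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

for i, (m, imp) in enumerate(zip(mi, rf.feature_importances_)):
    print(f"quantity {i}: mutual info = {m:.3f}, RF importance = {imp:.3f}")
# Quantities scoring low on both criteria would be candidates for removal
# before building the GUM measurement model.
```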
Die Garantie im Kaufrecht
(1995)
The role of Germany, Japan and the United States on the ECU-bond markets / Hans Wilhelm Mackenstein
(1991)
Books Reviewed - European Democratization since 1800 edited by J. Garrard, V. Tolz and R. White
(2000)
Names of individuals
(2017)
Small Claims Regulation
(2017)
Supervised machine learning and deep learning require a large amount of labeled data, which data scientists obtain in a manual and time-consuming annotation process. To mitigate this challenge, Active Learning (AL) proposes promising data points for annotators to label next instead of a subsequent or random sample. This method is supposed to save annotation effort while maintaining model performance.
However, practitioners face many AL strategies for different tasks and need an empirical basis to choose between them. Surveys categorize AL strategies into taxonomies without performance indications. Publications presenting novel AL strategies compare their performance only to a small subset of other strategies. Our contribution addresses the empirical basis by introducing a reproducible active learning evaluation (ALE) framework for the comparative evaluation of AL strategies in NLP.
The framework allows the implementation of AL strategies with low effort and a fair data-driven comparison through defining and tracking experiment parameters (e.g., initial dataset size, number of data points per query step, and the budget). ALE helps practitioners to make more informed decisions, and researchers can focus on developing new, effective AL strategies and deriving best practices for specific use cases. With best practices, practitioners can lower their annotation costs. We present a case study to illustrate how to use the framework.
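To make the experiment parameters named above concrete, here is a hypothetical sketch of a pool-based active learning loop with an uncertainty-sampling strategy. The class and function names are illustrative only and do not reflect the ALE framework's actual API.

```python
# Hypothetical sketch of a pool-based active learning loop with the experiment
# parameters named in the abstract (initial dataset size, points per query
# step, budget). The strategy shown is plain least-confidence sampling.
import numpy as np
from sklearn.linear_model import LogisticRegression

def uncertainty_sampling(model, X_pool, k):
    """Pick the k pool points the model is least confident about."""
    confidence = model.predict_proba(X_pool).max(axis=1)
    return np.argsort(confidence)[:k]

def run_experiment(X, y, initial_size=100, step_size=50, budget=500, seed=0):
    rng = np.random.default_rng(seed)
    labeled = list(rng.choice(len(X), size=initial_size, replace=False))
    pool = [i for i in range(len(X)) if i not in set(labeled)]
    scores = []
    while len(labeled) - initial_size < budget and pool:
        model = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
        # Proxy for held-out accuracy; a real experiment would track a test set.
        scores.append(model.score(X[pool], y[pool]))
        picked = uncertainty_sampling(model, X[pool], step_size)
        newly_labeled = [pool[i] for i in picked]   # "annotation" step
        labeled.extend(newly_labeled)
        pool = [i for i in pool if i not in set(newly_labeled)]
    return scores
```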
We introduce a new way to measure the forecast effort that analysts devote to their earnings forecasts by measuring the analyst's general effort for all covered firms. While the commonly applied effort measure is based on analyst behaviour for one firm, our measure considers analyst behaviour for all covered firms. Our general effort measure captures additional information about analyst effort and thus can identify accurate forecasts. We emphasise the importance of investigating analyst behaviour in a larger context and argue that analysts who generally devote substantial forecast effort are also likely to devote substantial effort to a specific firm, even if this effort might not be captured by a firm-specific measure. Empirical results reveal that analysts who devote higher general forecast effort issue more accurate forecasts. Additional investigations show that analysts' career prospects improve with higher general forecast effort. Our measure improves on existing methods as it has higher explanatory power regarding differences in forecast accuracy than the commonly applied effort measure. Additionally, it can address research questions that cannot be examined with a firm-specific measure. It provides a simple but comprehensive way to identify accurate analysts.
Providing healthcare services frequently involves cognitively demanding tasks, including diagnoses and analyses as well as complex decisions about treatments and therapy. From a global perspective, ethically significant inequalities exist between regions where the expert knowledge required for these tasks is scarce or abundant. One possible strategy to diminish such inequalities and increase healthcare opportunities in expert-scarce settings is to provide healthcare solutions involving digital technologies that do not necessarily require the presence of a human expert, e.g., in the form of artificial intelligent decision-support systems (AI-DSS). Such algorithmic decision-making, however, is mostly developed in resource- and expert-abundant settings to support healthcare experts in their work. As a practical consequence, the normative standards and requirements for such algorithmic decision-making in healthcare require the technology to be at least as explainable as the decisions made by the experts themselves. The goal of providing healthcare in settings where resources and expertise are scarce might come with a normative pull to lower the normative standards of using digital technologies in order to provide at least some healthcare in the first place. We scrutinize this tendency to lower standards in particular settings from a normative perspective, distinguish between different types of absolute and relative, local and global standards of explainability, and conclude by defending an ambitious and practicable standard of local relative explainability.