Articles (106), English, Fachbereich Wirtschaftswissenschaften
Next Generation Access Networks: Why is there a higher risk of investment and how to deal with it?
(2009)
Knowledge-based productivity in “low-tech” industries: evidence from firms in developing countries
(2014)
Using firm-level data from five developing countries—Brazil, Ecuador, South Africa, Tanzania, and Bangladesh—and three industries—food processing, textiles, and garments and leather products—this article examines the importance of various sources of knowledge for explaining productivity and formally tests whether sector- or country-specific characteristics dominate these relationships. The knowledge sources driving productivity appear to be mainly sector-specific. Differences in the level of development also affect the effectiveness of knowledge sources. In the food processing sector, firms with higher-educated managers are more productive, and in the least-developed countries, additionally those with technology licenses and imported machinery and equipment. In the capital-intensive textiles sector, productivity is higher in firms that conduct R&D. In the garments and leather products sector, higher education of the managers, licensing, and R&D raise productivity.
Prioritization is an essential task within requirements engineering to cope with complexity and to establish focus properly. The 3rd Workshop on Requirements Prioritization for customer oriented Software Development (RePriCo'12) focused on requirements prioritization and adjacent themes in the context of customer-oriented development of bespoke and standard software. Five submissions were accepted for the proceedings and for presentation. This report summarizes the contributions and points out key findings.
Info-Web-Generation
(2004)
Goal Driven Business Modelling - Supporting Decision Making within Information System Development
(1995)
Outlier Robust Estimation of an Euler Equation Investment Model with German Firm Level Panel Data
(2002)
Providing healthcare services frequently involves cognitively demanding tasks, including diagnoses and analyses as well as complex decisions about treatments and therapy. From a global perspective, ethically significant inequalities exist between regions where the expert knowledge required for these tasks is scarce or abundant. One possible strategy to diminish such inequalities and increase healthcare opportunities in expert-scarce settings is to provide healthcare solutions involving digital technologies that do not necessarily require the presence of a human expert, e.g., in the form of artificial intelligence-based decision-support systems (AI-DSS). Such algorithmic decision-making, however, is mostly developed in resource- and expert-abundant settings to support healthcare experts in their work. As a practical consequence, the normative standards and requirements for such algorithmic decision-making in healthcare require the technology to be at least as explainable as the decisions made by the experts themselves. The goal of providing healthcare in settings where resources and expertise are scarce might come with a normative pull to lower the normative standards of using digital technologies in order to provide at least some healthcare in the first place. We scrutinize this tendency to lower standards in particular settings from a normative perspective, distinguish between different types of absolute and relative, local and global standards of explainability, and conclude by defending an ambitious and practicable standard of local relative explainability.
We introduce a new way to measure the forecast effort that analysts devote to their earnings forecasts by measuring the analyst's general effort for all covered firms. While the commonly applied effort measure is based on analyst behaviour for one firm, our measure considers analyst behaviour for all covered firms. Our general effort measure captures additional information about analyst effort and thus can identify accurate forecasts. We emphasise the importance of investigating analyst behaviour in a larger context and argue that analysts who generally devote substantial forecast effort are also likely to devote substantial effort to a specific firm, even if this effort might not be captured by a firm-specific measure. Empirical results reveal that analysts who devote higher general forecast effort issue more accurate forecasts. Additional investigations show that analysts' career prospects improve with higher general forecast effort. Our measure improves on existing methods as it has higher explanatory power regarding differences in forecast accuracy than the commonly applied effort measure. Additionally, it can address research questions that cannot be examined with a firm-specific measure. It provides a simple but comprehensive way to identify accurate analysts.
The role of Germany, Japan and the United States on the ECU-bond markets / Hans Wilhelm Mackenstein
(1991)
Books Reviewed - European Democratization since 1800 edited by J. Garrard, V. Tolz and R. White
(2000)
Die Garantie im Kaufrecht [The guarantee in sales law]
(1995)
Purpose
In the determination of the measurement uncertainty, the GUM procedure requires the building of a measurement model that establishes a functional relationship between the measurand and all influencing quantities. Since the effort of modelling as well as quantifying the measurement uncertainties depend on the number of influencing quantities considered, the aim of this study is to determine relevant influencing quantities and to remove irrelevant ones from the dataset.
Design/methodology/approach
In this work, it was investigated whether the effort of modelling for the determination of measurement uncertainty can be reduced by the use of feature selection (FS) methods. For this purpose, 9 different FS methods were tested on 16 artificial test datasets, whose properties (number of data points, number of features, complexity, features with low influence and redundant features) were varied via a design of experiments.
Findings
Based on a success metric, the stability, universality and complexity of the method, two FS methods could be identified that reliably identify relevant and irrelevant influencing quantities for a measurement model.
Originality/value
For the first time, FS methods were applied to datasets with properties of classical measurement processes. The simulation-based results serve as a basis for further research in the field of FS for measurement models. The identified algorithms will be applied to real measurement processes in the future.
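The abstract does not name the two FS methods it identifies as reliable. As a hedged sketch only, a filter-style selector such as scikit-learn's mutual-information score illustrates how relevant and irrelevant influencing quantities can be separated on a synthetic dataset; the variable names, data-generating model, and relevance threshold below are assumptions for illustration, not the study's setup:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
n = 500

# Two relevant influencing quantities and two irrelevant ones (synthetic,
# purely illustrative -- not the artificial test datasets used in the study).
x_rel1 = rng.normal(size=n)
x_rel2 = rng.normal(size=n)
x_irr1 = rng.normal(size=n)
x_irr2 = rng.normal(size=n)

# The measurand depends only on the two relevant quantities plus small noise.
y = 2.0 * x_rel1 + 1.5 * x_rel2 + 0.05 * rng.normal(size=n)

X = np.column_stack([x_rel1, x_rel2, x_irr1, x_irr2])
scores = mutual_info_regression(X, y, random_state=0)

# Keep only quantities whose estimated mutual information with the
# measurand exceeds an (assumed) relevance threshold.
relevant = [i for i, s in enumerate(scores) if s > 0.1]
```

A real application would replace the synthetic columns with measured influencing quantities and validate the threshold, e.g. via a design of experiments as the study describes.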
IT Service Deployment
(2007)
A Portable Implementation of Index Sequential Input-Output [Part 1] / Kurbel, Karl; Pietsch, W.
(1986)
A Portable Implementation of Index Sequential Input-Output [Part 2] / Kurbel, Karl; Pietsch, W.
(1986)
A Cooperative Work Environment for Evolutionary Software Development / Kurbel, Karl; Pietsch, W.
(1990)
Knowledge Management
(2001)
We analyze the trading behavior of individual investors in option-like securities, namely bank-issued warrants, and thus expand the growing literature on investor behavior to a new kind of security. A unique data set from a large German discount broker gives us the opportunity to analyze the trading behavior of 1,454 investors, making 89,958 transactions in 6,724 warrants on 397 underlyings. In different logit regressions, we make use of the facts that investors can speculate on rising and falling prices of the underlying with call and put warrants and that we also have information about the stock portfolios of the investors. We report several facts about the trading behavior of individual investors in warrants that are consistent with the literature on the behavior of individual investors in the stock market. The warrant investors buy calls and sell puts if the price of the underlying has decreased over the past trading days, and they sell calls and buy puts if the price of the underlying has increased. That means the investors follow negative feedback trading strategies in all four trading categories observed. In addition, we find strong evidence for the disposition effect for call as well as put warrants, which is reversed in December. The trading behavior is also influenced when the underlying reaches exceptional prices, e.g. highs, lows, or the strike price. We show that hedging, as one natural candidate for buying puts, does not play an important role in the market for bank-issued warrants.
This paper investigates the extent to which corporate governance affects the cost of debt and equity capital of German exchange-listed companies. I examine corporate governance along three dimensions: financial information quality, ownership structure and board structure. The results suggest that firms with high levels of financial transparency and bonus compensation face lower cost of equity. In addition, block ownership is negatively related to firms' cost of equity when the blockholders are other firms, managers or founding-family members. Consistent with the conjecture that agency costs increase with firm size, I find significant cost of debt effects only in the largest German companies. Here, creditors demand lower cost of debt from firms with block ownership held by corporations or banks. My findings demonstrate that a uniform set of governance attributes is unlikely to satisfy suppliers of debt and equity capital equally.
Optimal Adjustment Policies
(1990)
The construction of a statistical test is investigated which is based only on “reliability” and “precision” as quality criteria. The reliability of a statistical test is quantified in a straightforward way by the probability that the decision of the test is correct. However, the quantification of the precision of a statistical test is not at all evident. Therefore the paper presents and discusses several approaches. Moreover, the distinction between “null hypothesis” and “alternative hypothesis” is no longer necessary.
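The reliability criterion, i.e. the probability that the test's decision is correct, can be made concrete with a small Monte Carlo sketch; the two-state setup and the midpoint threshold below are illustrative assumptions, not the construction proposed in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials = 100_000

# Two equally likely states of nature: one observation with mean 0 or mean 1
# (unit variance). This symmetric setup is assumed purely for illustration.
state = rng.binomial(1, 0.5, size=n_trials)
x = rng.normal(loc=state, scale=1.0)

# Decision rule: pick the mean closer to the observation (midpoint threshold).
decision = (x > 0.5).astype(int)

# Reliability = probability that the decision of the test is correct.
# For this setup it equals Phi(0.5), roughly 0.69.
reliability = np.mean(decision == state)
```

Note that no state is singled out as a "null hypothesis" here: reliability treats both possible decisions symmetrically, which matches the abstract's point that the null/alternative distinction becomes unnecessary.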