A key feature of future broadband markets will be the diversity of access technologies, meaning that numerous technologies will be exploited for broadband communication. Various factors will affect the success of these future broadband markets, regulatory policy being one among others. So far, no coherent regulatory approach to broadband markets exists. First results of policies to date suggest that less sector-specific regulation is likely to occur. Instead, regulators must ensure that access to networks and services of potentially dominant providers in a relevant broadband market will satisfy requirements for openness and non-discrimination. In this environment the future challenge of regulating broadband markets will be to set the right incentives for investment in new infrastructures. This paper examines whether there is a need for the regulation of future broadband access markets and, if so, what the appropriate regulatory tool would be. The focus is on the analysis of European broadband markets and the regulatory approaches applied. The first section provides a description of the characteristics of future broadband markets. The second section discusses possible bottlenecks in broadband markets and their regulatory implications. The third section examines regulatory issues concerning access to broadband networks in more detail by comparing the regulatory approaches of European countries and the results in terms of broadband penetration. The final section gives key recommendations for a regulatory strategy for broadband access markets.
Market data for the German telecom market shows that Deutsche Telekom, as the former incumbent, is constantly losing shares in all markets for voice telephony: the market for local calls, the market for long-distance calls and the market for international calls. At the same time prices decline steadily, with the latest trend being that operators offer voice services free of charge, the costs of which are covered by a monthly subscription charge. Against this background the paper examines the state of policy and regulatory reform in the telecommunications sector in Germany almost 10 years after the liberalisation of the fixed telecommunications market. The focus is on the analysis of the competitive conditions that have been established on the German market for voice telephony services. If these retail markets are competitive, there might be a need to remove remaining regulatory provisions. In the new environment of converging markets the future challenge of regulating fixed telecom markets might be to ensure that access to the network and/or services of a potentially dominant provider in a relevant market will satisfy requirements for openness and non-discrimination.
To give the exchange of goods and services between the European Union (EU) and the United States (U.S.) new momentum the two parties are currently negotiating the transatlantic free trade agreement Transatlantic Trade and Investment Partnership (TTIP). The aim is to create the largest free trade area in the world. The agreement, once entered into force, will oblige EU countries and the U.S. to further liberalize their markets.
The negotiations on TTIP include a chapter on Electronic Communications/Telecommunications. The challenge therein will be securing commitments for market access to Electronic Communications services. At the same time, these commitments must reflect legitimate consumer protection concerns. The need to reduce Electronic Communications-related non-tariff barriers to trade between the Parties stems from the fact that these markets are heavily regulated. Without transnational rules on regulation, national governments can abuse regulations to deter market entry by new (foreign) suppliers. Thus the free trade agreement TTIP affects in many respects regulatory provisions on and access to Electronic Communications markets. The objective of this paper is therefore to examine to what extent the regulatory principles for Electronic Communications markets envisaged under TTIP will result in trade facilitation and regulatory convergence between the EU and the U.S.
The analysis finds that the chapter on Electronic Communications will be an important step towards facilitating trade in Electronic Communications services. At the same time some regulatory convergence will take place, but this convergence will not lead to a (full) harmonization of regulations. Rather, even after the TTIP negotiations have been concluded successfully, the norm will be mutual recognition of different regulatory regimes. Different regulations will continue to exist where they are the optimal policy response in different market settings. Moreover, it is very unlikely that such regulatory principles for the Electronic Communications sector are a vehicle for a race to the bottom in levels of consumer protection.
Next Generation Access Networks: Why is there a higher risk of investment and how to deal with it?
(2009)
AI-based systems are nearing ubiquity not only in everyday low-stakes activities but also in medical procedures. To protect patients and physicians alike, explainability requirements have been proposed for the operation of AI-based decision support systems (AI-DSS), which adds hurdles to the productive use of AI in clinical contexts. This raises two questions: Who decides these requirements? And how should access to AI-DSS be provided to communities that reject these standards (particularly when such communities are expert-scarce)? This chapter investigates a dilemma that emerges from the implementation of global AI governance. While rejecting global AI governance limits the ability to help communities in need, global AI governance risks undermining and subjecting health-insecure communities to the force of the neo-colonial world order. To this end, the chapter first surveys the current landscape of AI governance and introduces the approach of relational egalitarianism as key to (global health) justice. To discuss the two horns of this dilemma, the core power imbalances faced by health-insecure collectives (HICs) are examined. The chapter argues that only strong demands of a dual strategy towards health-secure collectives can both remedy the immediate needs of HICs and enable them to become healthcare independent.
Extracting workflow nets from textual descriptions can be used to simplify guidelines or formalize textual descriptions of formal processes like business processes and algorithms. The task of manually extracting processes, however, requires domain expertise and effort. While automatic process model extraction is desirable, annotating texts with formalized process models is expensive. Therefore, there are only a few machine-learning-based extraction approaches. Rule-based approaches, in turn, require domain specificity to work well and can rarely distinguish relevant and irrelevant information in textual descriptions. In this paper, we present GUIDO, a hybrid approach to the process model extraction task that first classifies sentences regarding their relevance to the process model, using a BERT-based sentence classifier, and second extracts a process model from the sentences classified as relevant, using dependency parsing. The presented approach achieves significantly better results than a pure rule-based approach. GUIDO achieves an average behavioral similarity score of 0.93. Still, in comparison to purely machine-learning-based approaches, the annotation costs stay low.
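The two-stage pipeline described in this abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's code: a TF-IDF plus logistic-regression classifier stands in for the BERT-based sentence classifier, a naive last-two-words heuristic stands in for dependency parsing, and all sentences, labels, and function names are invented for the sketch.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Stage 1: classify sentences as process-relevant or not
# (TF-IDF + logistic regression as a stand-in for the BERT classifier).
train_sents = [
    "The clerk checks the invoice.",         # relevant: describes an activity
    "Then the manager approves the order.",  # relevant
    "Our company was founded in 1990.",      # irrelevant: background info
    "The office is located in Berlin.",      # irrelevant
]
train_labels = [1, 1, 0, 0]
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(train_sents, train_labels)

# Stage 2: extract a crude activity label from each relevant sentence
# (a naive stand-in for dependency parsing: keep the last two words).
def extract_activity(sentence):
    words = sentence.rstrip(".").split()
    return " ".join(words[-2:])

doc = [
    "The clerk checks the invoice.",
    "The office is located in Berlin.",
]
relevant = [s for s, y in zip(doc, clf.predict(doc)) if y == 1]
activities = [extract_activity(s) for s in relevant]
```

In the actual approach the second stage builds a workflow net from dependency-parse structures; the heuristic here only marks where that extraction step would plug in.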
Determinants of earnings forecast error, earnings forecast revision and earnings forecast accuracy
(2012)
Earnings forecasts are ubiquitous in today’s financial markets. They are essential indicators of future firm performance and a starting point for firm valuation. Extremely inaccurate and overoptimistic forecasts during the most recent financial crisis have raised serious doubts regarding the reliability of such forecasts. This thesis therefore investigates new determinants of forecast errors and accuracy. In addition, new determinants of forecast revisions are examined. More specifically, the thesis answers the following questions: 1) How do analyst incentives lead to forecast errors? 2) How do changes in analyst incentives lead to forecast revisions? 3) What factors drive differences in forecast accuracy?
Knowledge-based productivity in “low-tech” industries: evidence from firms in developing countries
(2014)
Using firm-level data from five developing countries—Brazil, Ecuador, South Africa, Tanzania, and Bangladesh—and three industries—food processing, textiles, and garments and leather products—this article examines the importance of various sources of knowledge for explaining productivity and formally tests whether sector- or country-specific characteristics dominate these relationships. The knowledge sources driving productivity appear to be mainly sector specific. Differences in the level of development also affect the effectiveness of knowledge sources. In the food processing sector, firms with higher-educated managers are more productive, and in least-developed countries additionally those with technology licenses and imported machinery and equipment. In the capital-intensive textiles sector, productivity is higher in firms that conduct R&D. In the garments and leather products sector, higher education of the managers, licensing, and R&D raise productivity.
Enterprise SOA Roadmap
(2008)
Prioritization is an essential task within requirements engineering to cope with complexity and to establish focus properly. The 3rd Workshop on Requirements Prioritization for customer oriented Software Development (RePriCo’12) focused on requirements prioritization and adjacent themes in the context of customer-oriented development of bespoke and standard software. Five submissions were accepted for the proceedings and for presentation. This report summarizes the workshop and points out key findings.
Introduction of RePriCo’13
(2013)
Info-Web-Generation
(2004)
Goal Driven Business Modelling - Supporting Decision Making within Information System Development
(1995)
Outlier Robust Estimation of an Euler Equation Investment Model with German Firm Level Panel Data
(2002)
Providing healthcare services frequently involves cognitively demanding tasks, including diagnoses and analyses as well as complex decisions about treatments and therapy. From a global perspective, ethically significant inequalities exist between regions where the expert knowledge required for these tasks is scarce or abundant. One possible strategy to diminish such inequalities and increase healthcare opportunities in expert-scarce settings is to provide healthcare solutions involving digital technologies that do not necessarily require the presence of a human expert, e.g., in the form of artificially intelligent decision-support systems (AI-DSS). Such algorithmic decision-making, however, is mostly developed in resource- and expert-abundant settings to support healthcare experts in their work. As a practical consequence, the normative standards and requirements for such algorithmic decision-making in healthcare require the technology to be at least as explainable as the decisions made by the experts themselves. The goal of providing healthcare in settings where resources and expertise are scarce might come with a normative pull to lower the standards of using digital technologies in order to provide at least some healthcare in the first place. We scrutinize this tendency to lower standards in particular settings from a normative perspective, distinguish between different types of absolute and relative, local and global standards of explainability, and conclude by defending an ambitious and practicable standard of local relative explainability.
We introduce a new way to measure the forecast effort that analysts devote to their earnings forecasts by measuring the analyst's general effort for all covered firms. While the commonly applied effort measure is based on analyst behaviour for one firm, our measure considers analyst behaviour for all covered firms. Our general effort measure captures additional information about analyst effort and thus can identify accurate forecasts. We emphasise the importance of investigating analyst behaviour in a larger context and argue that analysts who generally devote substantial forecast effort are also likely to devote substantial effort to a specific firm, even if this effort might not be captured by a firm-specific measure. Empirical results reveal that analysts who devote higher general forecast effort issue more accurate forecasts. Additional investigations show that analysts' career prospects improve with higher general forecast effort. Our measure improves on existing methods as it has higher explanatory power regarding differences in forecast accuracy than the commonly applied effort measure. Additionally, it can address research questions that cannot be examined with a firm-specific measure. It provides a simple but comprehensive way to identify accurate analysts.
Supervised machine learning and deep learning require large amounts of labeled data, which data scientists obtain in a manual, time-consuming annotation process. To mitigate this challenge, Active Learning (AL) proposes promising data points that annotators annotate next, instead of a subsequent or random sample. This method is supposed to save annotation effort while maintaining model performance.
However, practitioners face many AL strategies for different tasks and need an empirical basis to choose between them. Surveys categorize AL strategies into taxonomies without performance indications, and presentations of novel AL strategies compare their performance only to a small subset of strategies. Our contribution addresses this empirical basis by introducing a reproducible active learning evaluation (ALE) framework for the comparative evaluation of AL strategies in NLP.
The framework allows the implementation of AL strategies with low effort and a fair, data-driven comparison by defining and tracking experiment parameters (e.g., initial dataset size, number of data points per query step, and the budget). ALE helps practitioners make more informed decisions, and researchers can focus on developing new, effective AL strategies and deriving best practices for specific use cases. With best practices, practitioners can lower their annotation costs. We present a case study to illustrate how to use the framework.
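The query loop that such a framework evaluates can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the ALE API: a least-confidence query strategy, a bag-of-words classifier, and toy one-word documents stand in for real strategies, models, and datasets, and all names (query_least_confident, budget, batch_size) are invented for the sketch.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

def query_least_confident(model, X_pool, batch_size):
    """Pick the pool examples whose top-class probability is lowest."""
    probs = model.predict_proba(X_pool)
    confidence = probs.max(axis=1)
    return np.argsort(confidence)[:batch_size]

# Toy pool: labels would normally be hidden behind a human annotator.
texts = ["good", "great", "fine", "bad", "awful", "poor", "nice", "terrible"]
labels = np.array([1, 1, 1, 0, 0, 0, 1, 0])
X = CountVectorizer().fit_transform(texts).toarray()

labeled = [0, 3]                 # small initial seed set (one per class)
pool = [i for i in range(len(texts)) if i not in labeled]

budget, batch_size = 4, 2        # tracked experiment parameters
model = LogisticRegression()
while budget > 0 and pool:
    model.fit(X[labeled], labels[labeled])
    picks = query_least_confident(model, X[pool], batch_size)
    for p in sorted(picks, reverse=True):  # "annotate" the queried points
        labeled.append(pool.pop(p))
    budget -= batch_size
```

Swapping `query_least_confident` for another strategy while keeping the seed set, batch size, and budget fixed is exactly the kind of controlled comparison the framework is meant to make reproducible.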
Names of individuals
(2017)
Small Claims Regulation
(2017)
The role of Germany, Japan and the United States on the ECU-bond markets / Hans Wilhelm Mackenstein
(1991)