Institute
- Fachbereich Medizintechnik und Technomathematik (74)
- Fachbereich Elektrotechnik und Informationstechnik (73)
- IfB - Institut für Bioengineering (42)
- Fachbereich Luft- und Raumfahrttechnik (26)
- Fachbereich Chemie und Biotechnologie (23)
- Fachbereich Energietechnik (21)
- Fachbereich Wirtschaftswissenschaften (18)
- Fachbereich Maschinenbau und Mechatronik (14)
- MASKOR Institut für Mobile Autonome Systeme und Kognitive Robotik (12)
- INB - Institut für Nano- und Biotechnologien (10)
Has Fulltext
- no (258)
Document Type
- Article (127)
- Part of a Book (104)
- Conference Proceeding (25)
- Book (2)
Keywords
- MINLP (3)
- Natural language processing (3)
- Seismic design (3)
- Additive manufacturing (2)
- Digitale Transformation (2)
- Engineering optimization (2)
- Information extraction (2)
- Optimization (2)
- Pitching Moment (2)
- Powertrain (2)
This book is based on a multimedia course for biological and chemical engineers, which is designed to trigger students' curiosity and initiative. A solid basic knowledge of thermodynamics and kinetics is necessary for understanding many technical, chemical, and biological processes.
The one-semester basic lecture course was divided into 12 workshops (chapters). Each chapter covers a practically relevant area of physical chemistry and contains the following didactic elements that make this book particularly exciting and understandable:
- Links to Videos at the start of each chapter as preparation for the workshop
- Key terms (in bold) for further research of your own
- Comprehension questions and calculation exercises with solutions as learning checks
- Key illustrations as simple, easy-to-replicate blackboard pictures
Humorous cartoons for each workshop (by Faelis) additionally lighten up the text and serve as mnemonic aids that facilitate the learning process. To round out the book, the appendix includes a summary of the most popular experiments in basic physical chemistry courses, as well as suggestions for designing workshops with exhibits, experiments, and "questions of the day."
Suitable for students minoring in chemistry; chemistry majors are sure to find this slimmed-down, didactically valuable book helpful as well. The book is excellent for self-study.
Software development projects often fail because of insufficient code quality. It is now well documented that the task of testing software, for example, is perceived as uninteresting and rather boring, leading to poor software quality and major challenges for software development companies. One promising approach to increasing the motivation for considering software quality is the use of gamification. Initial research has already investigated the effects of gamification on software developers and arrived at promising results. Nevertheless, results from field experiments are lacking, which motivates the chapter at hand. By conducting a gamification experiment with five student software projects and by interviewing the project members, the chapter provides insights into the changing programming behavior of information systems students when confronted with a leaderboard. The results reveal a motivational effect as well as a reduction of code smells.
The problem of fair and privacy-preserving ordered set reconciliation arises in a variety of applications such as auctions, e-voting, and appointment reconciliation. While several multi-party protocols have been proposed that solve this problem in the semi-honest model, so far there are no multi-party protocols that are secure in the malicious model. In this paper, we close this gap. Our newly proposed protocols are shown to be secure in the malicious model based on a variety of novel non-interactive zero-knowledge proofs. We describe the implementation of our protocols and evaluate their performance in comparison to protocols solving the problem in the semi-honest case.
The RoboCup Logistics League (RCLL) is a robotics competition in a production logistics scenario in the context of a Smart Factory. In the competition, a team of three robots needs to assemble products to fulfill various orders that are requested online during the game. This year, the Carologistics team was able to win the competition with a new approach to multi-agent coordination as well as significant changes to the robot’s perception unit and a pragmatic network setup using the cellular network instead of WiFi. In this paper, we describe the major components of our approach with a focus on the changes compared to the last physical competition in 2019.
Due to the increasing complexity of software projects, software development is becoming more and more dependent on teams. The quality of this teamwork can vary depending on the team composition, as teams are always a combination of different skills and personality types. This paper aims to answer the question of how to describe a software development team and what influence the personality of the team members has on the team dynamics. For this purpose, a systematic literature review (n=48) and a literature search with the AI research assistant Elicit (n=20) were conducted. Result: A person's personality significantly shapes their thinking and actions, which in turn influence their behavior in software development teams. It has been shown that team performance and satisfaction can be strongly influenced by personality. The quality of communication and the likelihood of conflict can also be attributed to personality.
Ice melting probes
(2023)
The exploration of icy environments in the solar system, such as the poles of Mars and the icy moons (a.k.a. ocean worlds), is a key aspect for understanding their astrobiological potential as well as for extraterrestrial resource inspection. On these worlds, ice melting probes are considered to be well suited for the robotic clean execution of such missions. In this chapter, we describe ice melting probes and their applications, the physics of ice melting and how the melting behavior can be modeled and simulated numerically, the challenges for ice melting, and the required key technologies to deal with those challenges. We also give an overview of existing ice melting probes and report some results and lessons learned from laboratory and field tests.
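As a first-order illustration of the melting physics mentioned above (a simplified energy-balance sketch under idealized assumptions, not the chapter's actual model): if lateral conduction losses are neglected, the steady-state descent velocity of a melt probe follows from balancing the heating power against the enthalpy needed to warm and melt the ice column ahead of the probe:

```latex
% Simplified steady-state power balance for a melt probe
% (assumption: all heating power P goes into the ice column ahead of the probe)
P = \rho_{\mathrm{ice}}\, A\, v \left( c_p\, \Delta T + L_f \right)
\quad\Longrightarrow\quad
v = \frac{P}{\rho_{\mathrm{ice}}\, A \left( c_p\, \Delta T + L_f \right)}
```

Here P is the heating power at the melt head, A its cross-sectional area, ρ_ice the ice density, c_p the specific heat of ice, ΔT the difference between the ambient ice temperature and the melting point, and L_f the latent heat of fusion; real probes must additionally account for lateral losses and refreezing of the melt channel.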
Modern implementations of driver assistance systems are evolving from pure driver assistance into independently acting automation systems. Still, these systems do not cover the full vehicle usage range, also called the operational design domain, and therefore require the human driver as a fall-back mechanism. Transition of control and potential minimum risk manoeuvres are current research topics and will bridge the gap until fully autonomous vehicles are available. The authors showed in a demonstration that transition-of-control mechanisms can be further improved by the use of communication technology. Receiving incident type and position information via standardised vehicle-to-everything (V2X) messages can improve driver safety and comfort. The connected and automated vehicle's software framework can use this information to plan areas where the driver should take back control by initiating a transition of control, which can be followed by a minimum risk manoeuvre in case of an unresponsive driver. This transition of control has been implemented in a test vehicle and was presented to the public during the IEEE IV2022 (IEEE Intelligent Vehicle Symposium) in Aachen, Germany.
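A minimal sketch of how such a V2X-triggered transition of control could be scheduled (all message fields, thresholds, and function names here are illustrative assumptions, not the authors' implementation):

```python
from dataclasses import dataclass

@dataclass
class IncidentMessage:
    """Illustrative stand-in for a decoded V2X notification (e.g. a DENM-like message)."""
    incident_type: str
    position_m: float        # incident position along the route, in metres

def plan_transition_of_control(msg: IncidentMessage,
                               ego_position_m: float,
                               ego_speed_mps: float,
                               takeover_time_s: float = 10.0,
                               mrm_margin_m: float = 150.0) -> dict:
    """Compute where to request a driver takeover ahead of a reported incident.

    The takeover request is issued early enough that the driver has
    takeover_time_s seconds to respond; if the driver stays unresponsive,
    a minimum risk manoeuvre (MRM) is triggered mrm_margin_m before the incident.
    """
    distance_to_incident = msg.position_m - ego_position_m
    takeover_request_at = msg.position_m - ego_speed_mps * takeover_time_s - mrm_margin_m
    return {
        "incident": msg.incident_type,
        "request_takeover_at_m": max(ego_position_m, takeover_request_at),
        "trigger_mrm_at_m": msg.position_m - mrm_margin_m,
        "feasible": distance_to_incident > mrm_margin_m,
    }

# Example: road works reported 2 km ahead while driving at 33 m/s (~120 km/h)
plan = plan_transition_of_control(IncidentMessage("roadworks", 2000.0), 0.0, 33.0)
print(plan)
```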
In recent years, the development of large pretrained language models, such as BERT and GPT, has significantly improved information extraction systems on various tasks, including relation classification. State-of-the-art systems are highly accurate on scientific benchmarks, but a lack of explainability is currently a complicating factor in many real-world applications. Comprehensible systems are necessary to prevent biased, counterintuitive, or harmful decisions.
We introduce semantic extents, a concept to analyze decision patterns for the relation classification task. Semantic extents are the most influential parts of texts concerning classification decisions. Our definition allows similar procedures to determine semantic extents for humans and models. We provide an annotation tool and a software framework to determine semantic extents for humans and models conveniently and reproducibly. Comparing both reveals that models tend to learn shortcut patterns from data. These patterns are hard to detect with current interpretability methods, such as input reductions. Our approach can help detect and eliminate spurious decision patterns during model development. Semantic extents can increase the reliability and security of natural language processing systems and are an essential step toward enabling applications in critical areas like healthcare or finance. Moreover, our work opens new research directions for developing methods to explain deep learning models.
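A minimal sketch of the idea behind semantic extents as described above (hypothetical `classify` function and a greedy search; the paper's actual procedure and tooling may differ):

```python
from typing import Callable, List

def semantic_extent(tokens: List[str],
                    classify: Callable[[List[str]], str]) -> List[str]:
    """Greedily grow the smallest token subset that already yields
    the same relation label as the full text (illustrative only)."""
    full_label = classify(tokens)
    extent: List[str] = []
    remaining = list(tokens)
    while remaining:
        # pick the token whose addition moves the prediction closest to full_label
        best = max(remaining, key=lambda t: classify(extent + [t]) == full_label)
        extent.append(best)
        remaining.remove(best)
        if classify(extent) == full_label:
            return extent        # most influential part of the text found
    return extent

# Toy classifier: predicts "founded" whenever the trigger word appears
toy = lambda toks: "founded" if "founded" in toks else "no_relation"
print(semantic_extent("Acme was founded by Jane Doe".split(), toy))
```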
Supervised machine learning and deep learning require a large amount of labeled data, which data scientists obtain in a manual and time-consuming annotation process. To mitigate this challenge, Active Learning (AL) proposes promising data points for annotators to annotate next, instead of a sequential or random sample. This method is supposed to save annotation effort while maintaining model performance.
However, practitioners face many AL strategies for different tasks and need an empirical basis to choose between them. Surveys categorize AL strategies into taxonomies without performance indications, and papers presenting novel AL strategies compare performance only to a small subset of other strategies. Our contribution addresses this empirical basis by introducing a reproducible active learning evaluation (ALE) framework for the comparative evaluation of AL strategies in NLP.
The framework allows the implementation of AL strategies with low effort and a fair data-driven comparison through defining and tracking experiment parameters (e.g., initial dataset size, number of data points per query step, and the budget). ALE helps practitioners to make more informed decisions, and researchers can focus on developing new, effective AL strategies and deriving best practices for specific use cases. With best practices, practitioners can lower their annotation costs. We present a case study to illustrate how to use the framework.
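A minimal sketch of the kind of experiment loop such a framework standardizes (a generic AL loop; the parameter names and strategy interface here are illustrative assumptions, not the actual ALE API):

```python
import random
from typing import Callable, List, Tuple

def active_learning_loop(pool: List[str],
                         oracle: Callable[[str], int],
                         train: Callable[[List[Tuple[str, int]]], Callable[[str], float]],
                         strategy: Callable[[Callable[[str], float], List[str], int], List[str]],
                         initial_size: int = 100,
                         step_size: int = 50,
                         budget: int = 500):
    """Generic AL loop tracking the experiment parameters named above."""
    random.shuffle(pool)
    labeled = [(x, oracle(x)) for x in pool[:initial_size]]   # seed dataset
    unlabeled = pool[initial_size:]
    spent = initial_size
    while spent < budget and unlabeled:
        model = train(labeled)                              # retrain on all labels so far
        queries = strategy(model, unlabeled, step_size)     # points to annotate next
        labeled += [(x, oracle(x)) for x in queries]
        unlabeled = [x for x in unlabeled if x not in queries]
        spent += len(queries)
    return train(labeled), spent
```

Because the strategy is injected as a plain callable, two strategies can be compared under identical seeds, initial dataset sizes, query steps, and budgets.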
Messenger apps like WhatsApp and Telegram are frequently used for everyday communication, but they can also be utilized as a platform for illegal activity. Telegram allows public groups with up to 200,000 participants. Criminals use these public groups for trading illegal commodities and services, which is a concern for law enforcement agencies, who manually monitor suspicious activity in these chat rooms. This research demonstrates how natural language processing (NLP) can assist in analyzing these chat rooms, providing an explorative overview of the domain and facilitating purposeful analyses of user behavior. We provide a publicly available corpus of annotated text messages with entities and relations from four self-proclaimed black market chat rooms. Our pipeline approach aggregates the product attributes extracted from user messages into profiles and uses these, together with the products sold, as features for clustering. The extracted structured information is the foundation for further data exploration, such as identifying the top vendors or fine-grained price analyses. Our evaluation shows that pretrained word vectors perform better for unsupervised clustering than state-of-the-art transformer models, while the latter are still superior for sequence labeling.
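A minimal sketch of the profile-clustering step described above (toy vectors and attribute lists; the actual pipeline, features, and extraction models differ):

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy pretrained word vectors for extracted product attributes (illustrative only)
word_vectors = {
    "cannabis": np.array([0.9, 0.1, 0.0]),
    "hash":     np.array([0.8, 0.2, 0.1]),
    "account":  np.array([0.0, 0.9, 0.3]),
    "paypal":   np.array([0.1, 0.8, 0.4]),
}

# Vendor profiles: product attributes aggregated from each vendor's messages
profiles = {
    "vendor_a": ["cannabis", "hash"],
    "vendor_b": ["account", "paypal"],
    "vendor_c": ["hash", "cannabis"],
}

# Represent each profile by the mean of its attribute vectors, then cluster
names = list(profiles)
X = np.stack([np.mean([word_vectors[w] for w in profiles[v]], axis=0) for v in names])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(dict(zip(names, labels)))   # e.g. drug vendors vs. digital-goods vendors
```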
Like all preceding transformations of the manufacturing industry, the large-scale usage of production data will reshape the role of humans within the sociotechnical production ecosystem. To ensure that this transformation creates work systems in which employees are empowered, productive, healthy, and motivated, the transformation must be guided by principles of and research on human-centered work design. Specifically, measures must be taken at all levels of work design, ranging from (1) the work tasks to (2) the working conditions to (3) the organizational level and (4) the supra-organizational level. We present selected research across all four levels, showcasing the opportunities and requirements that surface when striving for human-centered work design for the Internet of Production (IoP). (1) On the work task level, we illustrate the user-centered design of human-robot collaboration (HRC) and process planning in the composite industry as well as user-centered design factors for cognitive assistance systems. (2) On the working conditions level, we present a newly developed framework for the classification of HRC workplaces. (3) Moving to the organizational level, we show how corporate data can be used to facilitate best practice sharing in production networks, and we discuss the implications of the IoP for new leadership models. Finally, (4) on the supra-organizational level, we examine overarching ethical dimensions, investigating, e.g., how the new work contexts affect our understanding of responsibility and normative values such as autonomy and privacy. Overall, these interdisciplinary research perspectives highlight the importance and necessary scope of considering the human factor in the IoP.
The paper deals with the asymptotic behaviour of estimators, statistical tests and confidence intervals for L²-distances to uniformity based on the empirical distribution function, the integrated empirical distribution function and the integrated empirical survival function. Approximations of power functions, confidence intervals for the L²-distances and statistical neighbourhood-of-uniformity validation tests are obtained as main applications. The finite sample behaviour of the procedures is illustrated by a simulation study.
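For orientation, the classical Cramér–von Mises form of such an L²-distance to uniformity, based on the empirical distribution function of a sample on [0,1] (stated here only as a reference point; the paper additionally treats the integrated-EDF and integrated-survival-function variants):

```latex
% L^2-distance of the empirical distribution function \hat F_n
% to the uniform cdf on [0,1], and the associated test statistic
D_n = \int_0^1 \left( \hat F_n(t) - t \right)^2 \mathrm{d}t ,
\qquad
\omega_n^2 = n\, D_n \quad \text{(Cramér--von Mises statistic under uniformity)}
```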
The efficiency concepts of Bahadur and Pitman are used to compare the Wilcoxon tests in paired and independent survey samples. A comparison through the length of corresponding confidence intervals is also done. Simple conditions characterizing the dominance of a procedure are derived. Statistical tests for checking these conditions are suggested and discussed.
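As a reminder of one of the comparison criteria (the standard textbook definition, given here only for orientation): the Pitman asymptotic relative efficiency of test 1 with respect to test 2 is the limiting ratio of sample sizes the two tests need to attain the same asymptotic power at the same level against a common sequence of local alternatives,

```latex
\mathrm{ARE}_{1,2} \;=\; \lim \frac{n_2}{n_1},
```

where n_1 and n_2 are the respective required sample sizes; values above 1 favour test 1.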
We discuss the testing problem of homogeneity of the marginal distributions of a continuous bivariate distribution based on a paired sample with possibly missing components (missing completely at random). Applying the well-known two-sample Cramér–von Mises distance to the remaining data, we determine the limiting null distribution of our test statistic in this situation. It turns out that a new resampling approach is appropriate for the approximation of the unknown null distribution. We prove that the resulting test asymptotically reaches the significance level and is consistent. Properties of the test under local alternatives are pointed out as well. Simulations investigate the quality of the approximation and the power of the new approach in the finite sample case. As an illustration we apply the test to real data sets.
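For reference, the classical two-sample Cramér–von Mises distance used as the building block here, for samples of sizes n and m with empirical distribution functions F̂_n and Ĝ_m and pooled empirical distribution function Ĥ_{n+m}:

```latex
T_{n,m} \;=\; \frac{n m}{n+m}
\int \left( \hat F_n(x) - \hat G_m(x) \right)^2 \mathrm{d}\hat H_{n+m}(x)
```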
This paper considers a paired data framework and discusses the question of marginal homogeneity of bivariate high-dimensional or functional data. The related testing problem can be embedded into a more general setting for paired random variables taking values in a general Hilbert space. To address this problem, a Cramér–von Mises type test statistic is applied and a bootstrap procedure is suggested to obtain critical values and, finally, a consistent test. The desired properties of a bootstrap test, namely asymptotic exactness under the null hypothesis and consistency under alternatives, are derived. Simulations show the quality of the test in the finite sample case. A possible application is the comparison of two possibly dependent stock market returns based on functional data. The approach is demonstrated based on historical data for different stock market indices.
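A minimal sketch of obtaining bootstrap critical values for a Cramér–von Mises type marginal-homogeneity statistic, here on paired real-valued data with a grid approximation and a standard centered pair-resampling scheme (illustrative only; the paper's Hilbert-space procedure is more general):

```python
import numpy as np

rng = np.random.default_rng(0)

def edf(sample, grid):
    """Empirical distribution function evaluated on a grid."""
    return (sample[:, None] <= grid[None, :]).mean(axis=0)

def cvm_stat(x, y, grid):
    """CvM-type L2 distance between the two marginal EDFs."""
    return len(x) * np.mean((edf(x, grid) - edf(y, grid)) ** 2)

def bootstrap_critical_value(x, y, level=0.05, reps=1000):
    grid = np.sort(np.concatenate([x, y]))
    centre = edf(x, grid) - edf(y, grid)          # observed difference process
    n = len(x)
    stats = np.empty(reps)
    for b in range(reps):
        idx = rng.integers(0, n, n)               # resample pairs, keeping dependence
        d = edf(x[idx], grid) - edf(y[idx], grid) - centre   # centre under the null
        stats[b] = n * np.mean(d ** 2)
    return np.quantile(stats, 1 - level)

# Example: dependent components with identical marginals (null hypothesis holds)
x = rng.normal(size=200)
y = 0.5 * x + np.sqrt(0.75) * rng.normal(size=200)
grid = np.sort(np.concatenate([x, y]))
print(cvm_stat(x, y, grid), bootstrap_critical_value(x, y))
```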
Industrial facilities must be thoroughly designed to withstand seismic actions, as they exhibit an increased loss potential due to the possibly wide-ranging damage consequences and the valuable process engineering equipment. Past earthquakes showed the social and political consequences of seismic damage to industrial facilities and sensitized the population and politicians worldwide to the possible hazard emanating from industrial facilities. However, a holistic approach to the seismic design of industrial facilities can presently be found neither in national nor in international standards. The introduction of EN 1998-4 of the new generation of Eurocode 8 will improve the normative situation with specific seismic design rules for silos, tanks and pipelines, and secondary process components. The article presents essential aspects of the seismic design of industrial facilities based on the new generation of Eurocode 8, using the example of tank structures and secondary process components. The interaction effects of the process components with the primary structure are illustrated by means of the experimental results of a shaking table test of a three-story moment-resisting steel frame with different process components. Finally, an integrated approach of digital plant models based on building information modelling (BIM) and structural health monitoring (SHM) is presented, which provides not only a reliable decision-making basis for operation, maintenance, and repair but also an excellent tool for rapid assessment of seismic damage.
Because of customer churn, strong competition, and operational inefficiencies, the telecommunications operator ME Telco (fictitious name due to confidentiality) launched a strategic transformation program that included a Business Process Management (BPM) project. Major problems were silo-oriented process management and missing cross-functional transparency. Process improvements were not consistently planned and aligned with corporate targets. Measurable inefficiencies were observed on an operational level, e.g., high lead times and reassignment rates of the incident management process.
Benefits and framework conditions of information-driven business models of the Internet of Things
(2018)
In the context of increasing digitalization, the Internet of Things (IoT) is regarded as a technological driver through which completely new business models can emerge in the interplay of different actors. Identified key actors include traditional industrial companies, municipalities, and telecommunications companies. The latter provide connectivity, ensuring that small devices with tiny batteries can be connected to the Internet almost anywhere and directly. Many IoT use cases that simplify things for end customers are already on the market, such as Philips Hue Tap. Besides business models based on connectivity, there is great potential for information-driven business models, which can support and advance existing business models. One example is the Deutsche Telekom AG IoT use case Park and Joy, in which parking spaces are networked with sensors and drivers are informed in real time about available parking spots. Information-driven business models can build on data generated in IoT use cases. For example, a telecommunications company can create added value by deriving decision-relevant information from data, so-called insights, which are used to increase decision-making agility. In addition, insights can be monetized. The monetization of insights can only take place sustainably if it is handled carefully and framework conditions are taken into account. This chapter explains the concept of information-driven business models and illustrates it with the concrete Park and Joy use case. Furthermore, benefits, risks, and framework conditions are discussed.
As part of the digital transformation, innovative technology concepts such as the Internet of Things and cloud computing are regarded as drivers of far-reaching changes in organizations and business models. In this context, Robotic Process Automation (RPA) is a novel approach to process automation in which manual activities are learned and executed automatically by so-called software robots. The software robots emulate the inputs on the existing presentation layer, so that no changes to existing application systems are necessary. The innovative idea is the transformation of existing process execution from manual to digital, which distinguishes RPA from traditional approaches of Business Process Management (BPM), where, for example, process-driven adjustments at the business logic level are necessary. Various RPA solutions are already offered on the market as software products. Particularly for operational processes with repetitive processing steps across different application systems, good results from RPA have been documented, such as the automation of 35% of back-office processes at Telefonica. Due to the comparatively low implementation effort combined with a high automation potential, there is strong interest in RPA in practice (e.g., banking, telecommunications, energy supply). This article discusses RPA as an innovative approach to process digitalization and gives concrete recommendations for action for practitioners. To this end, a distinction is made between model-driven and self-learning approaches. Based on general architectures of RPA systems, application scenarios and their automation potential, but also their limitations, are discussed. A structured market overview of selected RPA products follows. Finally, the use of RPA in practice is illustrated by three concrete application examples.
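A minimal sketch of the presentation-layer emulation described above, using the pyautogui library for illustration (the screenshot file, coordinates, and workflow are assumptions; commercial RPA products add recorders, orchestration, and error handling):

```python
import pyautogui  # emulates mouse and keyboard on the presentation layer

def transfer_invoice_number(invoice_no: str) -> None:
    """Enter a value into a legacy application exactly as a clerk would:
    locate the input field on screen, click it, type, and confirm."""
    # 'invoice_field.png' is an assumed screenshot of the target input field
    field = pyautogui.locateCenterOnScreen("invoice_field.png")
    if field is None:
        raise RuntimeError("input field not visible; screen layout may have changed")
    pyautogui.click(field)                           # focus the field
    pyautogui.typewrite(invoice_no, interval=0.05)   # type like a human user
    pyautogui.press("enter")                         # confirm the entry

transfer_invoice_number("INV-2018-0042")
```

Because the robot only drives the existing user interface, the legacy system itself stays untouched, which is precisely what distinguishes RPA from BPM-style changes at the business logic level.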