TY - CHAP
A1 - Harlacher, Markus
A1 - Altepost, Andrea
A1 - Elsen, Ingo
A1 - Ferrein, Alexander
A1 - Hansen-Ampah, Adjan
A1 - Merx, Wolfgang
A1 - Niehues, Sina
A1 - Schiffer, Stefan
A1 - Shahinfar, Fatemeh Nasim
ED - Lausberg, Isabel
ED - Vogelsang, Michael
T1 - Approach for the identification of requirements on the design of AI-supported work systems (in problem-based projects)
T2 - AI in Business and Economics
N2 - To successfully develop and introduce concrete artificial intelligence (AI) solutions in operational practice, a comprehensive process model is being tested in the WIRKsam joint project. It is based on a methodical approach that integrates human, technical and organisational aspects and involves employees in the process. The chapter focuses on the procedure for identifying requirements for a work system implementing AI in problem-driven projects and for selecting appropriate AI methods. This means that the use case has already been narrowed down at the beginning of the project and must be fully defined in the following steps. Initially, the existing preliminary work is presented. Based on this, an overview of all procedural steps and methods is given. All methods are presented in detail and good practice approaches are shown. Finally, a reflection on the developed procedure, based on its application in nine companies, is given.
KW - Business understanding
KW - Requirements
KW - Process model
KW - Participation
KW - Implementation of AI systems
Y1 - 2024
SN - 9783110790320
U6 - https://doi.org/10.1515/9783110790320
SP - 87
EP - 99
PB - De Gruyter
CY - Berlin
ER -
TY - CHAP
A1 - Engemann, Heiko
A1 - Du, Shengzhi
A1 - Kallweit, Stephan
A1 - Ning, Chuanfang
A1 - Anwar, Saqib
T1 - AutoSynPose: Automatic Generation of Synthetic Datasets for 6D Object Pose Estimation
T2 - Machine Learning and Artificial Intelligence. Proceedings of MLIS 2020
N2 - We present an automated pipeline for the generation of synthetic datasets for six-dimensional (6D) object pose estimation. To this end, a completely automated generation process based on predefined settings is developed, which enables the user to create large datasets with a minimum of interaction and which is feasible for applications with a high object variance. The pipeline is based on the Unreal 4 (UE4) game engine and provides a high variation for domain randomization, such as object appearance, ambient lighting, camera-object transformation and distractor density. In addition to the object pose and bounding box, the metadata includes all randomization parameters, which enables further studies on randomization parameter tuning. The developed workflow is adaptable to other 3D objects and UE4 environments. An exemplary dataset is provided including five objects of the Yale-CMU-Berkeley (YCB) object set. The datasets consist of 6 million subsegments using 97 rendering locations in 12 different UE4 environments. Each dataset subsegment includes one RGB image, one depth image and one class segmentation image at pixel level.
Y1 - 2020
SN - 978-1-64368-137-5
U6 - https://doi.org/10.3233/FAIA200770
N1 - Frontiers in Artificial Intelligence and Applications. Vol 332
SP - 89
EP - 97
PB - IOS Press
CY - Amsterdam
ER -
TY - CHAP
A1 - Niemueller, T.
A1 - Lakemeyer, G.
A1 - Reuter, S.
A1 - Jeschke, S.
A1 - Ferrein, Alexander
T1 - Benchmarking of Cyber-Physical Systems in Industrial Robotics: The RoboCup Logistics League as a CPS Benchmark Blueprint
T2 - Cyber-Physical Systems: Foundations, Principles and Applications
N2 - In the future, we expect manufacturing companies to follow a new paradigm that mandates more automation and autonomy in production processes. Such smart factories will offer a variety of production technologies as services that can be combined ad hoc to produce a large number of different product types and variants cost-effectively even in small lot sizes. This is enabled by cyber-physical systems that feature flexible automated planning methods for production scheduling, execution control, and in-factory logistics. During development, testbeds are required to determine the applicability of integrated systems in such scenarios. Furthermore, benchmarks are needed to quantify and compare system performance in these industry-inspired scenarios at a comprehensible and manageable size which is, at the same time, complex enough to yield meaningful results. In this chapter, based on our experience in the RoboCup Logistics League (RCLL) as a specific example, we derive a generic blueprint for how a holistic benchmark can be developed, which combines a specific scenario with a set of key performance indicators as metrics to evaluate the overall integrated system and its components.
KW - Smart factory
KW - Industry 4.0
KW - Cyber-physical systems
KW - Multi-robot systems
KW - Autonomous mobile robots
Y1 - 2017
U6 - https://doi.org/10.1016/B978-0-12-803801-7.00013-4
SP - 193
EP - 207
PB - Academic Press
CY - London
ER -
TY - CHAP
A1 - Ferrein, Alexander
A1 - Nikolovski, Gjorgji
A1 - Limpert, Nicolas
A1 - Reke, Michael
A1 - Schiffer, Stefan
A1 - Scholl, Ingrid
ED - Küçük, Serdar
T1 - Controlling a Fleet of Autonomous LHD Vehicles in Mining Operation
T2 - Multi-Robot Systems - New Advances
N2 - In this chapter, we report on our activities to create and maintain a fleet of autonomous load haul dump (LHD) vehicles for mining operations. The ever-increasing demand for sustainable solutions and economic pressure drive innovation in the mining industry just as in any other sector. In this chapter, we present our approach to creating a fleet of autonomous special-purpose vehicles and to controlling these vehicles in mining operations. After an initial exploration of the site, we deploy the fleet. Every vehicle runs an instance of our ROS 2-based architecture. The fleet is then controlled with a dedicated planning module. We also use continuous environment monitoring to implement a life-long mapping approach. In our experiments, we show that a combination of synthetic, augmented and real training data improves our classifier, based on the deep learning network YOLOv5, for detecting our vehicles, persons and navigation beacons. The classifier was successfully installed on the NVIDIA AGX-Drive platform, so that the above-mentioned objects can be recognised while the dumper is driving. The 3D poses of the detected beacons are assigned to lanelets and transferred to an existing map.
Y1 - 2023
SN - 978-1-83768-290-4
U6 - https://doi.org/10.5772/intechopen.113044
PB - IntechOpen
CY - London
ER -
TY - CHAP
A1 - Niemueller, Tim
A1 - Zwilling, Frederik
A1 - Lakemeyer, Gerhard
A1 - Löbach, Matthias
A1 - Reuter, Sebastian
A1 - Jeschke, Sabina
A1 - Ferrein, Alexander
T1 - Cyber-Physical System Intelligence
T2 - Industrial Internet of Things
N2 - Cyber-physical systems are ever more common in manufacturing industries. Increasing their autonomy has been declared an explicit goal, for example, as part of the Industry 4.0 vision. To achieve this system intelligence, principled and software-driven methods are required to analyze sensing data, make goal-directed decisions, and eventually execute and monitor chosen tasks. In this chapter, we present a number of knowledge-based approaches to these problems and case studies with in-depth evaluation results of several different implementations for groups of autonomous mobile robots performing in-house logistics in a smart factory. We focus on knowledge-based systems because, besides providing expressive languages and capable reasoning techniques, they also allow for explaining how a particular sequence of actions came about, for example, in the case of a failure.
KW - Smart factory
KW - Industry 4.0
KW - Multi-robot systems
KW - Autonomous mobile robots
KW - RoboCup
Y1 - 2017
SN - 978-3-319-42559-7
U6 - https://doi.org/10.1007/978-3-319-42559-7_17
N1 - Springer Series in Wireless Technology
SP - 447
EP - 472
PB - Springer
CY - Cham
ER -
TY - CHAP
A1 - Niemueller, Tim
A1 - Reuter, Sebastian
A1 - Ewert, Daniel
A1 - Ferrein, Alexander
A1 - Jeschke, Sabina
A1 - Lakemeyer, Gerhard
T1 - Decisive Factors for the Success of the Carologistics RoboCup Team in the RoboCup Logistics League 2014
T2 - RoboCup 2014: Robot World Cup XVIII
Y1 - 2015
SN - 978-3-319-18615-3
N1 - Lecture Notes in Computer Science ; 8992
SP - 155
EP - 167
PB - Springer
ER -
TY - CHAP
A1 - Kallweit, Stephan
A1 - Gottschalk, Michael
A1 - Walenta, Robert
T1 - ROS based safety concept for collaborative robots in industrial applications
T2 - Advances in robot design and intelligent control : proceedings of the 24th International Conference on Robotics in Alpe-Adria-Danube Region (RAAD). (Advances in intelligent systems and computing ; 371)
N2 - The production and assembly of customized products increases the demand for flexible automation systems. One approach is to remove the safety fences that separate human and industrial robot in order to combine their skills. This collaboration poses a certain risk to the human co-worker, which has led to numerous safety concepts to protect them. The human needs to be monitored and tracked by a safety system using different sensors. The proposed system consists of an RGBD camera for surveillance of the common working area, an array of optical distance sensors to compensate for shadowing effects of the RGBD camera and a laser range finder to detect the co-worker when approaching the work cell. The software for collision detection, path planning, robot control and predicting the behaviour of the co-worker is based on the Robot Operating System (ROS). A first prototype of the work cell shows that, with advanced algorithms from the field of mobile robotics, a very flexible safety concept can be realized: the robot does not simply stop its movement when detecting a collision, but plans and executes an alternative path around the obstacle.
KW - Collaborative robot
KW - Human-Robot interaction
KW - Safety concept
KW - Workspace monitoring
KW - Path planning
Y1 - 2016
SN - 978-3-319-21289-0 (Print) ; 978-3-319-21290-6 (E-Book)
U6 - https://doi.org/10.1007/978-3-319-21290-6_3
SP - 27
EP - 35
PB - Springer
CY - Cham
ER -
TY - CHAP
A1 - Goeckel, Tom
A1 - Schiffer, Stefan
A1 - Wagner, Hermann
A1 - Lakemeyer, Gerhard
T1 - The Video Conference Tool Robot ViCToR
T2 - Intelligent Robotics and Applications : 8th International Conference, ICIRA 2015, Portsmouth, UK, August 24-27, 2015, Proceedings, Part II
N2 - We present a robotic tool that autonomously follows a conversation to enable remote presence in video conferencing. When humans participate in a meeting with the help of video conferencing tools, it is crucial that they are able to follow the conversation with both acoustic and visual input. To this end, we design and implement a video conferencing tool robot that uses binaural sound source localization as its main source to autonomously orient towards the currently talking speaker. To increase the robustness of the acoustic cue against noise, we supplement the sound localization with a source detection stage. Also, we include a simple onset detector to retain fast response times. Since we only use two microphones, we are confronted with ambiguities as to whether a source is in front of or behind the device. We resolve these ambiguities with the help of face detection and additional moves. We tailor the system to our target scenarios in experiments with a four-minute scripted conversation. In these experiments we evaluate the influence of different system settings on the responsiveness and accuracy of the device.
Y1 - 2015
SN - 978-3-319-22876-1
U6 - https://doi.org/10.1007/978-3-319-22876-1_6
N1 - Lecture Notes in Computer Science ; 9245
SP - 61
EP - 73
PB - Springer
ER -