TY - CHAP
A1 - Ulmer, Jessica
A1 - Braun, Sebastian
A1 - Cheng, Chi-Tsun
A1 - Dowey, Steve
A1 - Wollert, Jörg
T1 - Gamified Virtual Reality Training Environment for the Manufacturing Industry
T2 - Proceedings of the 2020 19th International Conference on Mechatronics – Mechatronika (ME)
N2 - Industry 4.0 imposes many challenges for manufacturing companies and their employees. Innovative and effective training strategies are required to cope with fast-changing production environments and new manufacturing technologies. Virtual Reality (VR) offers new ways of on-the-job, on-demand, and off-premise training. A novel concept and evaluation system combining Gamification and VR practice for flexible assembly tasks is proposed in this paper and compared to existing works. It is based on directed acyclic graphs and a leveling system. The concept enables a learning speed which is adjustable to the users’ pace and dynamics, while the evaluation system facilitates adaptive work sequences and allows employee-specific task fulfillment. The concept was implemented and analyzed in the Industry 4.0 model factory at FH Aachen for mechanical assembly jobs.
Y1 - 2020
U6 - https://doi.org/10.1109/ME49197.2020.9286661
N1 - 2020 19th International Conference on Mechatronics – Mechatronika (ME), Prague, Czech Republic, December 2–4, 2020
SP - 1
EP - 6
PB - IEEE
CY - New York, NY
ER -
TY - CHAP
A1 - Reke, Michael
A1 - Peter, Daniel
A1 - Schulte-Tigges, Joschua
A1 - Schiffer, Stefan
A1 - Ferrein, Alexander
A1 - Walter, Thomas
A1 - Matheis, Dominik
T1 - A Self-Driving Car Architecture in ROS2
T2 - 2020 International SAUPEC/RobMech/PRASA Conference, Cape Town, South Africa
N2 - In this paper we report on an architecture for a self-driving car that is based on ROS2. Self-driving cars have to take decisions based on their sensory input in real-time, providing high reliability with a strong demand in functional safety. In principle, self-driving cars are robots.
However, typical robot software in general, and the previous version of the Robot Operating System (ROS) in particular, does not always meet these requirements: existing robotic software based on ROS was not ready for safety-critical applications like self-driving cars. With the successor ROS2 the situation has changed, and it may be considered a solution for automated and autonomous driving. We propose an architecture for using ROS2 for a self-driving car that enables safe and reliable real-time behaviour while keeping the advantages of ROS, such as a distributed architecture and standardised message types. First experiments with an automated real passenger car at lower and higher speed levels suggest that our approach is feasible for autonomous driving under the necessary real-time conditions.
Y1 - 2020
SN - 978-1-7281-4162-6
U6 - https://doi.org/10.1109/SAUPEC/RobMech/PRASA48453.2020.9041020
N1 - 2020 International SAUPEC/RobMech/PRASA Conference, January 29–31, 2020, Cape Town, South Africa
SP - 1
EP - 6
PB - IEEE
CY - New York, NY
ER -
TY - CHAP
A1 - Kirsch, Maximilian
A1 - Mataré, Victor
A1 - Ferrein, Alexander
A1 - Schiffer, Stefan
T1 - Integrating golog++ and ROS for Practical and Portable High-level Control
T2 - Proceedings of the 12th International Conference on Agents and Artificial Intelligence - Volume 2
N2 - The field of Cognitive Robotics aims at intelligent decision making for autonomous robots. It has matured considerably over the last 25 or so years: a number of high-level control languages and architectures have emerged from the field. One of these is the action language GOLOG, which has been used as a high-level control language in a large number of applications, ranging from intelligent service robots to soccer robots. For the lower-level robot software, the Robot Operating System (ROS) has been around for more than a decade now and has developed into the standard middleware for robot applications.
ROS provides a large number of packages for standard tasks in robotics such as localisation, navigation, and object recognition. Interestingly, only little work within ROS has gone into the high-level control of robots. In this paper, we describe our approach to marrying the GOLOG action language with ROS. In particular, we present our architecture for integrating golog++, which is based on the GOLOG dialect Readylog, with the Robot Operating System. With an example application on the Pepper service robot, we show how primitive actions can be easily mapped to the ROS ActionLib framework, and we present our control architecture in detail.
Y1 - 2020
U6 - https://doi.org/10.5220/0008984406920699
N1 - Proceedings of the 12th International Conference on Agents and Artificial Intelligence: ICAART 2020, Valletta, Malta
SP - 692
EP - 699
PB - SciTePress
CY - Setúbal, Portugal
ER -
TY - JOUR
A1 - Franko, Josef
A1 - Du, Shengzhi
A1 - Kallweit, Stephan
A1 - Duelberg, Enno Sebastian
A1 - Engemann, Heiko
T1 - Design of a Multi-Robot System for Wind Turbine Maintenance
JF - Energies
N2 - The maintenance of wind turbines is of growing importance considering the transition to renewable energy. This paper presents a multi-robot approach for automated wind turbine maintenance, including a novel climbing robot. Currently, wind turbine maintenance remains a manual task, which is monotonous, dangerous, and physically demanding due to the large scale of wind turbines. Technical climbers are required to work at significant heights, even in bad weather conditions. Furthermore, a skilled labor force with sufficient knowledge of repairing fiber composite material is rare. Autonomous mobile systems enable the digitization of the maintenance process and can be designed for weather-independent operations.
This work contributes to the development and experimental validation of a maintenance system consisting of multiple robotic platforms for a variety of tasks, such as wind turbine tower and rotor blade service. Multicopters with vision and LiDAR sensors are used for global inspection and to guide the slower climbing robots. Light-weight magnetic climbers with surface contact are used to analyze structural parts with non-destructive inspection methods and to locally repair smaller defects; on steel towers, magnets are suitable for clamping onto the surface. Localization is enabled by adapting odometry to conical-shaped surfaces and considering additional navigation sensors. A friction-based climbing ring robot (SMART: Scanning, Monitoring, Analyzing, Repair and Transportation) completes the set-up for higher payloads. Using weather-proofed maintenance robots, the maintenance period can be extended. The multi-robot system runs the Robot Operating System (ROS). Additionally, first steps towards machine learning indicate that pattern classification for fault diagnosis could allow maintenance staff to operate safely from the ground in the future.
Y1 - 2020
U6 - https://doi.org/10.3390/en13102552
SN - 1996-1073
VL - 13
IS - 10
SP - Article 2552
PB - MDPI
CY - Basel
ER -
TY - CHAP
A1 - Engemann, Heiko
A1 - Du, Shengzhi
A1 - Kallweit, Stephan
A1 - Ning, Chuanfang
A1 - Anwar, Saqib
T1 - AutoSynPose: Automatic Generation of Synthetic Datasets for 6D Object Pose Estimation
T2 - Machine Learning and Artificial Intelligence. Proceedings of MLIS 2020
N2 - We present an automated pipeline for the generation of synthetic datasets for six-dimensional (6D) object pose estimation. To this end, a completely automated generation process based on predefined settings is developed, which enables the user to create large datasets with a minimum of interaction and which is feasible for applications with high object variance.
The pipeline is based on the Unreal Engine 4 (UE4) game engine and provides high variation for domain randomization, covering object appearance, ambient lighting, camera-object transformation, and distractor density. In addition to the object pose and bounding box, the metadata includes all randomization parameters, which enables further studies on randomization parameter tuning. The developed workflow is adaptable to other 3D objects and UE4 environments. An exemplary dataset including five objects of the Yale-CMU-Berkeley (YCB) object set is provided. The datasets consist of 6 million subsegments using 97 rendering locations in 12 different UE4 environments. Each dataset subsegment includes one RGB image, one depth image, and one class segmentation image at pixel level.
Y1 - 2020
SN - 978-1-64368-137-5
U6 - https://doi.org/10.3233/FAIA200770
N1 - Frontiers in Artificial Intelligence and Applications, Vol. 332
SP - 89
EP - 97
PB - IOS Press
CY - Amsterdam
ER -
TY - JOUR
A1 - Engemann, Heiko
A1 - Du, Shengzhi
A1 - Kallweit, Stephan
A1 - Cönen, Patrick
A1 - Dawar, Harshal
T1 - OMNIVIL - an autonomous mobile manipulator for flexible production
JF - Sensors
Y1 - 2020
SN - 1424-8220
U6 - https://doi.org/10.3390/s20247249
N1 - Special issue: Sensor Networks Applications in Robotics and Mobile Systems
VL - 20
IS - 24, art. no. 7249
SP - 1
EP - 30
PB - MDPI
CY - Basel
ER -