In this chapter, we report on our activities to create and maintain a fleet of autonomous load haul dump (LHD) vehicles for mining operations. The ever-increasing demand for sustainable solutions, together with economic pressure, drives innovation in the mining industry just as in any other sector. We present our approach to creating a fleet of autonomous special-purpose vehicles and to controlling these vehicles in mining operations. After an initial exploration of the site, we deploy the fleet. Each vehicle runs an instance of our ROS 2-based architecture, and the fleet is controlled by a dedicated planning module. We also use continuous environment monitoring to implement a life-long mapping approach. In our experiments, we show that a combination of synthetic, augmented and real training data improves our classifier, based on the deep learning network YOLOv5, in detecting our vehicles, persons and navigation beacons. The classifier was successfully deployed on the NVIDIA DRIVE AGX platform, so that the abovementioned objects can be recognised while the dumper is driving. The 3D poses of the detected beacons are assigned to lanelets and transferred to an existing map.
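The final step above, assigning detected beacon poses to lanelets, can be sketched as a nearest-centerline lookup. This is a minimal illustration, not the chapter's implementation: the lanelet representation, the function name and the distance criterion are all assumptions for the sake of the example.

```python
import math

def assign_beacons_to_lanelets(beacons, lanelets):
    """Assign each detected beacon pose to the nearest lanelet.

    beacons  -- list of (x, y, z) beacon positions from the detector
    lanelets -- dict mapping lanelet id to a list of (x, y) centerline points
    Returns a dict mapping lanelet id to the beacons assigned to it.
    """
    assignment = {lid: [] for lid in lanelets}
    for bx, by, bz in beacons:
        best_id, best_dist = None, float("inf")
        for lid, centerline in lanelets.items():
            for cx, cy in centerline:
                d = math.hypot(bx - cx, by - cy)  # 2D distance to centerline point
                if d < best_dist:
                    best_id, best_dist = lid, d
        assignment[best_id].append((bx, by, bz))
    return assignment

# Two toy lanelets and one beacon near each of them.
lanelets = {
    "lane_a": [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)],
    "lane_b": [(0.0, 5.0), (1.0, 5.0), (2.0, 5.0)],
}
beacons = [(0.5, 0.2, 1.1), (1.8, 4.9, 1.0)]
result = assign_beacons_to_lanelets(beacons, lanelets)
```

In practice one would match against the full lanelet geometry rather than sampled centerline points, but the nearest-element principle is the same.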
To successfully develop and introduce concrete artificial intelligence (AI) solutions in operational practice, a comprehensive process model is being tested in the WIRKsam joint project. It is based on a methodical approach that integrates human, technical and organisational aspects and involves employees in the process. The chapter focuses on the procedure for identifying the requirements of a work system that implements AI in problem-driven projects, and for selecting appropriate AI methods. This means that the use case has already been narrowed down at the beginning of the project and must be fully defined in the subsequent steps. First, the existing preliminary work is presented. Building on this, an overview of all procedural steps and methods is given. All methods are presented in detail, and good-practice approaches are shown. Finally, we reflect on the developed procedure based on its application in nine companies.
We present an automated pipeline for the generation of synthetic datasets for six-dimensional (6D) object pose estimation. To this end, a fully automated generation process based on predefined settings is developed, which enables the user to create large datasets with a minimum of interaction and which is feasible for applications with high object variance. The pipeline is based on the Unreal Engine 4 (UE4) game engine and provides high variation for domain randomization, such as object appearance, ambient lighting, camera-object transformation and distractor density. In addition to the object pose and bounding box, the metadata includes all randomization parameters, which enables further studies on randomization parameter tuning. The developed workflow is adaptable to other 3D objects and UE4 environments. An exemplary dataset is provided that includes five objects of the Yale-CMU-Berkeley (YCB) object set. The dataset consists of 6 million subsegments generated at 97 rendering locations in 12 different UE4 environments. Each dataset subsegment includes one RGB image, one depth image and one pixel-level class segmentation image.
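The core of such a pipeline is sampling randomization parameters per frame and recording them as metadata. The sketch below illustrates that idea only; the parameter names and value ranges are invented for the example, and the real settings live inside the UE4 project.

```python
import random

# Hypothetical value ranges; the actual pipeline reads these from its settings.
RANDOMIZATION_RANGES = {
    "light_intensity": (0.2, 1.5),    # ambient lighting scale
    "camera_distance_m": (0.5, 3.0),  # camera-object distance
    "camera_yaw_deg": (0.0, 360.0),   # camera-object rotation
    "distractor_count": (0, 20),      # distractor density
}

def sample_randomization(seed=None):
    """Draw one set of domain-randomization parameters and return them
    as metadata, so each rendered subsegment can be reproduced and the
    parameters can later be studied for tuning."""
    rng = random.Random(seed)
    params = {}
    for name, (lo, hi) in RANDOMIZATION_RANGES.items():
        if isinstance(lo, int) and isinstance(hi, int):
            params[name] = rng.randint(lo, hi)   # discrete parameter
        else:
            params[name] = rng.uniform(lo, hi)   # continuous parameter
    return params

meta = sample_randomization(seed=42)
```

Storing the seed (or the sampled values themselves) alongside each image is what makes the later "randomization parameter tuning" studies possible.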
In the future, we expect manufacturing companies to follow a new paradigm that mandates more automation and autonomy in production processes. Such smart factories will offer a variety of production technologies as services that can be combined ad hoc to produce a large number of different product types and variants cost-effectively even in small lot sizes. This is enabled by cyber-physical systems that feature flexible automated planning methods for production scheduling, execution control, and in-factory logistics.
During development, testbeds are required to determine the applicability of integrated systems in such scenarios. Furthermore, benchmarks are needed to quantify and compare system performance in these industry-inspired scenarios at a scale that is comprehensible and manageable, yet complex enough to yield meaningful results.
In this chapter, based on our experience in the RoboCup Logistics League (RCLL) as a specific example, we derive a generic blueprint for how a holistic benchmark can be developed: one that combines a specific scenario with a set of key performance indicators as metrics to evaluate the overall integrated system and its components.
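The combination of a scenario with key performance indicators can be sketched as a weighted aggregation of normalized metrics. The KPI names and weights below are placeholders, not the chapter's actual metric set, assuming each KPI has already been normalized to [0, 1].

```python
def benchmark_score(kpis, weights):
    """Combine normalized key performance indicators into one score.
    kpis and weights are dicts keyed by KPI name; each KPI value is
    assumed to lie in [0, 1], with 1 being best."""
    total_weight = sum(weights.values())
    return sum(kpis[name] * w for name, w in weights.items()) / total_weight

# Hypothetical KPIs for one integrated-system run.
kpis = {"task_completion": 0.9, "timeliness": 0.5, "collision_free": 1.0}
weights = {"task_completion": 2.0, "timeliness": 1.0, "collision_free": 1.0}
score = benchmark_score(kpis, weights)  # weighted mean of the KPIs
```

Reporting the per-KPI values alongside the aggregate keeps the benchmark useful for evaluating individual components, not only the overall system.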
Cyber-physical systems are ever more common in manufacturing industries. Increasing their autonomy has been declared an explicit goal, for example, as part of the Industry 4.0 vision. To achieve this system intelligence, principled and software-driven methods are required to analyze sensing data, make goal-directed decisions, and eventually execute and monitor chosen tasks. In this chapter, we present a number of knowledge-based approaches to these problems, along with case studies and in-depth evaluation results of several different implementations for groups of autonomous mobile robots performing in-house logistics in a smart factory. We focus on knowledge-based systems because, besides providing expressive languages and capable reasoning techniques, they also allow for explaining how a particular sequence of actions came about, for example, in the case of a failure.
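The explainability argument can be made concrete with a toy forward-chaining reasoner that records which rule derived each conclusion. This is a generic illustration of the knowledge-based idea, not any of the chapter's systems; the rules and fact names are invented.

```python
def forward_chain(facts, rules):
    """Derive new facts from rules of the form (premises, conclusion) and
    record, for each derived fact, the premises that produced it. That
    trace is what lets the system explain how a decision came about."""
    facts = set(facts)
    trace = {}
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and set(premises) <= facts:
                facts.add(conclusion)
                trace[conclusion] = premises
                changed = True
    return facts, trace

# Toy in-house-logistics knowledge base.
rules = [
    (("order_pending", "robot_idle"), "assign_transport_task"),
    (("assign_transport_task", "path_clear"), "start_driving"),
]
facts, trace = forward_chain({"order_pending", "robot_idle", "path_clear"}, rules)
```

Asking why the robot started driving amounts to reading the trace backwards: `start_driving` followed from the assigned task and a clear path, which in turn followed from a pending order and an idle robot.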
We present a robotic tool that autonomously follows a conversation to enable remote presence in video conferencing. When humans participate in a meeting through video conferencing tools, it is crucial that they can follow the conversation with both acoustic and visual input. To this end, we design and implement a video conferencing robot that uses binaural sound source localization as its primary cue to autonomously orient towards the currently talking speaker. To increase the robustness of the acoustic cue against noise, we supplement the sound localization with a source detection stage. We also include a simple onset detector to retain fast response times. Since we use only two microphones, we are confronted with ambiguities as to whether a source is in front of or behind the device. We resolve these ambiguities with the help of face detection and additional movements. We tailor the system to our target scenarios in experiments with a four-minute scripted conversation, in which we evaluate the influence of different system settings on the responsiveness and accuracy of the device.
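The front-back ambiguity follows directly from the geometry of a two-microphone array: a time difference of arrival (TDOA) constrains the source to a cone, so a front azimuth and its mirror image behind the device are indistinguishable from the acoustic cue alone. A minimal sketch of this standard far-field relation (function name and values are illustrative, not the chapter's implementation):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, at roughly room temperature

def azimuth_candidates(tdoa, mic_distance):
    """Estimate source azimuth (degrees) from the TDOA between two
    microphones under the far-field assumption. The geometry is symmetric,
    so every TDOA yields a front candidate and a mirrored back candidate."""
    # Clamp to the physically possible range before taking asin.
    x = max(-1.0, min(1.0, SPEED_OF_SOUND * tdoa / mic_distance))
    front = math.degrees(math.asin(x))
    back = 180.0 - front  # mirror image behind the device
    return front, back

# Zero TDOA: the source is either straight ahead or straight behind.
front, back = azimuth_candidates(tdoa=0.0, mic_distance=0.2)
```

Disambiguating the two candidates then requires an extra cue, which is where the face detection and the additional device movements come in.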
The production and assembly of customized products increase the demand for flexible automation systems. One approach is to remove the safety fences that separate humans and industrial robots, so as to combine their skills. This collaboration poses a certain risk for the human co-worker, which has led to numerous safety concepts to protect them. The human needs to be monitored and tracked by a safety system using different sensors. The proposed system consists of an RGB-D camera for surveillance of the common working area, an array of optical distance sensors to compensate for shadowing effects of the RGB-D camera, and a laser range finder to detect the co-worker when approaching the work cell. The software for collision detection, path planning, robot control and predicting the behaviour of the co-worker is based on the Robot Operating System (ROS). A first prototype of the work cell shows that, with advanced algorithms from the field of mobile robotics, a very flexible safety concept can be realized: the robot does not simply stop its movement when detecting a collision, but plans and executes an alternative path around the obstacle.
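The "plan around the obstacle instead of stopping" behaviour can be illustrated with a basic grid search, a common primitive in mobile robotics. This is a deliberately simplified 2D sketch with invented names; a real work cell would plan in the robot's configuration space with proper safety margins.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search on an occupancy grid: 1 = occupied (e.g. the
    detected co-worker), 0 = free. Returns a list of cells from start to
    goal, or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}  # also serves as the visited set
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:  # walk back to start
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

# A co-worker blocks the direct corridor; the planner routes around them.
grid = [
    [0, 0, 0],
    [0, 1, 0],
    [0, 0, 0],
]
path = plan_path(grid, (1, 0), (1, 2))
```

When the sensors update the occupancy of a cell, replanning is simply running the search again on the new grid, which is what lets the robot detour rather than halt.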