The RoboCup Logistics League (RCLL) is a robotics competition in a production logistics scenario in the context of a Smart Factory. In the competition, a team of three robots needs to assemble products to fulfill various orders that are requested online during the game. This year, the Carologistics team was able to win the competition with a new approach to multi-agent coordination as well as significant changes to the robot’s perception unit and a pragmatic network setup using the cellular network instead of WiFi. In this paper, we describe the major components of our approach with a focus on the changes compared to the last physical competition in 2019.
Adapting augmented reality systems to the users’ needs using gamification and error solving methods
(2021)
Animations of virtual items in AR support systems are typically predefined and lack interactions with dynamic physical environments. AR applications rarely consider users’ preferences and do not provide customized spontaneous support under unknown situations. This research focuses on developing adaptive, error-tolerant AR systems based on directed acyclic graphs and error resolving strategies. Using this approach, users will have more freedom of choice during AR supported work, which leads to more efficient workflows. Error correction methods based on CAD models and predefined process data create individual support possibilities. The framework is implemented in the Industry 4.0 model factory at FH Aachen.
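As a rough sketch of how a directed acyclic graph can encode a flexible assembly workflow (the step names and dependencies below are invented for illustration and are not taken from the paper), the set of steps currently offered to the user can be computed from the ones already completed:

    # Minimal sketch: an assembly workflow as a directed acyclic graph.
    # Step names and dependencies are hypothetical.
    WORKFLOW = {
        "pick_base":     [],                         # no prerequisites
        "mount_bracket": ["pick_base"],
        "insert_screws": ["mount_bracket"],
        "attach_handle": ["pick_base"],              # independent branch
        "final_check":   ["insert_screws", "attach_handle"],
    }

    def available_steps(completed):
        """Return every step whose prerequisites are already done.

        Because the graph is acyclic, the user may freely choose any of
        these steps next; the system only constrains order where needed.
        """
        done = set(completed)
        return [step for step, deps in WORKFLOW.items()
                if step not in done and all(d in done for d in deps)]

    print(available_steps(["pick_base"]))  # -> ['mount_bracket', 'attach_handle']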
Virtual Reality (VR) offers novel possibilities for remote training regardless of the availability of the actual equipment, the presence of specialists, and the training locations. Research shows that training environments that adapt to users' preferences and performance can promote more effective learning. However, the observed results can hardly be traced back to specific adaptive measures but rather to the whole new training approach. This study analyzes the effects of a combined point and leveling VR-based gamification system on assembly training, targeting specific training outcomes and users' motivations. The Gamified-VR-Group with 26 subjects received the gamified training, and the Non-Gamified-VR-Group with 27 subjects received the alternative without gamified elements. Both groups conducted their VR training at least three times before assembling the actual structure. The study found that a level system that gradually increases the difficulty and error probability in VR can significantly lower real-world error rates, self-corrections, and support usage. According to our study, a high error occurrence at the highest training level reduced the Gamified-VR-Group's feeling of competence compared to the Non-Gamified-VR-Group, but at the same time also led to lower error probabilities in real life. It is concluded that a level system with a variable task difficulty should be combined with carefully balanced positive and negative feedback messages. This way, better learning results and an improved self-evaluation can be achieved without significantly impacting the participants' feeling of competence.
Gamification applications are on the rise in the manufacturing sector to customize working scenarios, offer user-specific feedback, and provide personalized learning offerings. Commonly, different sensors are integrated into work environments to track workers' actions. Game elements are selected according to the work task and users' preferences. However, implementing gamified workplaces remains challenging as different data sources must be established, evaluated, and connected. Developers often require information from several areas of a company to offer meaningful gamification strategies for its employees. Moreover, work environments and the associated support systems are usually not flexible enough to adapt to personal needs. Digital twins are one promising way to create a uniform data approach that can provide semantic information to gamification applications. Frequently, several digital twins have to interact with each other to provide information about the workplace, the manufacturing process, and the knowledge of the employees. This research aims to create an overview of existing digital twin approaches for digital support systems and presents a concept for using digital twins in gamified support and training systems. The concept is based upon the Reference Architectural Model Industrie 4.0 (RAMI 4.0) and includes information about the whole life cycle of the assets. It is applied to an existing gamified training system and evaluated in the Industry 4.0 model factory using the example of a handle mounting.
Assistance systems have been widely adopted in the manufacturing sector to facilitate various processes and tasks in production environments. However, existing systems are mostly equipped with rigid functional logic and do not provide individual user experiences or adapt to the users' capabilities. This work integrates human factors into assistance systems by adjusting the hardware and the instructions presented to the workers' cognitive and physical demands. A modular system architecture is designed accordingly, which allows components to be exchanged flexibly according to the user and the work task. Gamification, the use of game elements in non-gaming contexts, is further adopted in this work to provide level-based instructions and personalised feedback. The developed framework is validated by applying it to a manual workstation for industrial assembly routines.
Mechatronics consists of the integration of mechanical engineering, electronics, and computer science/engineering. These broad fields are essential for robotic systems, yet their breadth makes it difficult for researchers to specialize and be experts in all of them. Collaboration between researchers allows experience and specialization to be combined, enabling optimized systems. Collaboration between the European countries and South Africa is critical, as each country has different resources available which the other countries might not have. Approvals required for restricted applications can also be obtained more easily in some countries than in others, thus preventing research delays. Some problems that have been experienced are discussed, with the Robotics Center of South Africa as a possible solution.
20 Years of RoboCup
(2016)
Modern implementations of driver assistance systems are evolving from pure driver assistance to independently acting automation systems. Still, these systems do not cover the full vehicle usage range, also called the operational design domain, and therefore require the human driver as a fall-back mechanism. Transition of control and potential minimum risk manoeuvres are current research topics and will bridge the gap until fully autonomous vehicles are available. The authors showed in a demonstration that transition-of-control mechanisms can be further improved by using communication technology. Receiving incident type and position information via standardised vehicle-to-everything (V2X) messages can improve driver safety and comfort. The connected and automated vehicle's software framework can use this information to plan areas where the driver should take back control by initiating a transition of control, which can be followed by a minimum risk manoeuvre in case of an unresponsive driver. This transition of control has been implemented in a test vehicle and was presented to the public during the IEEE IV2022 (IEEE Intelligent Vehicles Symposium) in Aachen, Germany.
Benchmarking of various LiDAR sensors for use in self-driving vehicles in real-world environments
(2022)
In this paper, we report on our benchmark results for the LiDAR sensors Livox Horizon, Robosense M1, Blickfeld Cube, Blickfeld Cube Range, Velodyne Velarray H800, and Innoviz Pro. The idea was to test the sensors in different typical scenarios that were defined with real-world use cases in mind, in order to find a sensor that meets the requirements of self-driving vehicles. For this, we defined static and dynamic benchmark scenarios. In the static scenarios, neither the LiDAR nor the detection target moves during the measurement. In the dynamic scenarios, the LiDAR sensor was mounted on a vehicle driving toward the detection target. We tested all the mentioned LiDAR sensors in both scenarios, show the results regarding the detection accuracy of the targets, and discuss their usefulness for deployment in self-driving cars.
MedicVR: Acceleration and Enhancement Techniques for Direct Volume Rendering in Virtual Reality
(2019)
In this paper we present an extension of the action language Golog that allows for using fuzzy notions in non-deterministic argument choices and in the reward function in decision-theoretic planning. Often, in decision-theoretic planning, it is cumbersome to specify the set of values to pick from in the non-deterministic-choice-of-argument statement. Also, even for domain experts, it is not always easy to specify a reward function. Instead of providing a finite domain for values in the non-deterministic-choice-of-argument statement in Golog, we now allow for stating the argument domain by simply providing a formula over linguistic terms and fuzzy fluents. In Golog's forward-search DT planning algorithm, these formulas are evaluated in order to find the agent's optimal policy. We illustrate this in the Diner Domain where the agent needs to calculate the optimal serving order.
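A very small illustration of the underlying idea (the linguistic terms, membership functions, and reward formula below are invented; the example does not reproduce the Golog/Readylog syntax): candidate arguments are scored by a formula over fuzzy memberships instead of enumerating a fixed domain with a hand-tuned reward.

    # Sketch: scoring candidate arguments with fuzzy linguistic terms.
    def mu_close(distance):      # membership in the term "close"
        return max(0.0, 1.0 - distance / 5.0)

    def mu_waiting_long(wait):   # membership in the term "waiting long"
        return min(1.0, wait / 10.0)

    def reward(table):
        # A formula over linguistic terms instead of a hand-crafted number.
        return min(mu_close(table["distance"]), mu_waiting_long(table["waiting"]))

    tables = [
        {"id": 1, "distance": 2.0, "waiting": 8.0},
        {"id": 2, "distance": 4.5, "waiting": 3.0},
    ]

    # Pick the argument with the highest fuzzy score; in the actual
    # DT planning setting this evaluation happens inside the forward search.
    best = max(tables, key=reward)
    print(best["id"], round(reward(best), 2))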
In this paper we report on an architecture for a self-driving car that is based on ROS2. Self-driving cars have to take decisions based on their sensory input in real time, providing high reliability with a strong demand for functional safety. In principle, self-driving cars are robots. However, typical robot software in general, and the previous version of the Robot Operating System (ROS) in particular, does not always meet these requirements. With the successor ROS2 the situation has changed, and it might be considered as a solution for automated and autonomous driving. Existing robotic software based on ROS was not ready for safety-critical applications like self-driving cars. We propose an architecture for using ROS2 for a self-driving car that enables safe and reliable real-time behaviour while keeping the advantages of ROS, such as a distributed architecture and standardised message types. First experiments with an automated real passenger car at lower and higher speed levels show that our approach seems feasible for autonomous driving under the necessary real-time conditions.
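To give a flavour of how ROS2 components communicate over standardised message types, here is a minimal sketch; the topic name, rate, and plain single-threaded executor are assumptions and not the safety-qualified setup described above.

    # Minimal rclpy node publishing a standardised message type.
    import rclpy
    from rclpy.node import Node
    from geometry_msgs.msg import Twist

    class CommandPublisher(Node):
        def __init__(self):
            super().__init__('command_publisher')
            self.pub = self.create_publisher(Twist, '/vehicle/cmd_vel', 10)
            self.timer = self.create_timer(0.05, self.tick)  # 20 Hz

        def tick(self):
            msg = Twist()
            msg.linear.x = 1.0     # placeholder velocity command
            self.pub.publish(msg)

    def main():
        rclpy.init()
        rclpy.spin(CommandPublisher())
        rclpy.shutdown()

    if __name__ == '__main__':
        main()

A real automated-driving stack would additionally configure QoS profiles, lifecycle nodes, and a deterministic executor to meet the real-time and functional-safety demands mentioned in the abstract.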
The work in modern open-pit and underground mines requires the transportation of large amounts of resources between fixed points. The navigation to these fixed points is a repetitive task that can be automated. The challenge in automating the navigation of vehicles commonly used in mines is the systemic properties of such vehicles. Many mining vehicles, such as the one we have used in the research for this paper, use steering systems with an articulated joint bending the vehicle’s drive axis to change its course and a hydraulic drive system to actuate axial drive components or the movements of tippers if available. To address the difficulties of controlling such a vehicle, we present a model-predictive approach for controlling the vehicle. While the control optimisation based on a parallel error minimisation of the predicted state has already been established in the past, we provide insight into the design and implementation of an MPC for an articulated mining vehicle and show the results of real-world experiments in an open-pit mine environment.
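The general shape of such a receding-horizon controller can be sketched as follows. The kinematic single-track model, horizon length, and cost weights below are assumptions for illustration; they stand in for, and do not reproduce, the articulated-vehicle model and MPC design used in the paper.

    # Sketch of an MPC-style path-tracking step: predict states over a
    # horizon, minimise the predicted error, apply only the first control.
    import numpy as np
    from scipy.optimize import minimize

    V, L, DT, H = 1.0, 2.5, 0.2, 10  # speed [m/s], wheelbase [m], step [s], horizon

    def rollout(state, controls):
        """Predict future poses for a sequence of steering angles."""
        x, y, th = state
        traj = []
        for u in controls:
            x += V * np.cos(th) * DT
            y += V * np.sin(th) * DT
            th += V * np.tan(u) / L * DT
            traj.append((x, y, th))
        return traj

    def cost(controls, state):
        # Track the reference path y = 0 while penalising steering effort.
        traj = rollout(state, controls)
        return sum(y ** 2 + 0.1 * u ** 2 for (_, y, _), u in zip(traj, controls))

    def mpc_step(state):
        res = minimize(cost, np.zeros(H), args=(state,),
                       bounds=[(-0.5, 0.5)] * H)  # steering limits [rad]
        return res.x[0]                           # first control of the horizon

    print(mpc_step((0.0, 1.0, 0.0)))  # vehicle starts 1 m off the path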
Cyber-physical systems are ever more common in manufacturing industries. Increasing their autonomy has been declared an explicit goal, for example, as part of the Industry 4.0 vision. To achieve this system intelligence, principled and software-driven methods are required to analyze sensing data, make goal-directed decisions, and eventually execute and monitor chosen tasks. In this chapter, we present a number of knowledge-based approaches to these problems and case studies with in-depth evaluation results of several different implementations for groups of autonomous mobile robots performing in-house logistics in a smart factory. We focus on knowledge-based systems because besides providing expressive languages and capable reasoning techniques, they also allow for explaining how a particular sequence of actions came about, for example, in the case of a failure.
With autonomous mobile robots receiving increased attention in industrial contexts, the need for benchmarks becomes an ever more urgent matter. The RoboCup Logistics League (RCLL) is one specific industry-inspired scenario focusing on production logistics within a Smart Factory. In this paper, we describe how the RCLL allows the performance of a group of robots within the scenario as a whole to be assessed, focusing specifically on the coordination and cooperation strategies and the methods and components to achieve them. We report on recent efforts to analyze the performance of teams in 2014 to understand the implications of the current grading scheme, and on criteria and metrics for performance assessment derived from Key Performance Indicators (KPI) adapted from classic factory evaluation. We reflect on differences and compatibility towards RoCKIn, a recent major European benchmarking project.
Ground or aerial robots equipped with advanced sensing technologies, such as three-dimensional laser scanners and advanced mapping algorithms, are deemed useful as a supporting technology for first responders. A great deal of excellent research in the field exists, but practical applications at real disaster sites are scarce. Many projects concentrate on equipping robots with advanced capabilities, such as autonomous exploration or object manipulation. In spite of this, realistic application areas for such robots are limited to teleoperated reconnaissance or search. In this paper, we investigate how well state-of-the-art and off-the-shelf components and algorithms are suited for reconnaissance in current disaster-relief scenarios. The basic idea is to make use of some of the most common sensors and deploy some widely used algorithms in a disaster situation, and to evaluate how well the components work for these scenarios. We acquired the sensor data from two field experiments, one from a disaster-relief operation in a motorway tunnel, and one from a mapping experiment in a partly closed down motorway tunnel. Based on these data, which we make publicly available, we evaluate state-of-the-art and off-the-shelf mapping approaches. In our analysis, we integrate opinions and replies from first responders as well as from some algorithm developers on the usefulness of the data and the limitations of the deployed approaches, respectively. We discuss the lessons we learned during the two missions. These lessons are interesting for the community working in similar areas of urban search and rescue, particularly reconnaissance and search.
The field of Cognitive Robotics aims at intelligent decision making for autonomous robots. It has matured quite a bit over the last 25 or so years; a number of high-level control languages and architectures have emerged from the field. One prominent example is the action language GOLOG, which has been used as a high-level control language in a rather large number of applications, ranging from intelligent service robots to soccer robots. For the lower-level robot software, the Robot Operating System (ROS) has been around for more than a decade now and has developed into the standard middleware for robot applications. ROS provides a large number of packages for standard tasks in robotics like localisation, navigation, and object recognition. Interestingly enough, only little work within ROS has gone into the high-level control of robots. In this paper, we describe our approach to marry the GOLOG action language with ROS. In particular, we present our architecture for integrating golog++, which is based on the GOLOG dialect Readylog, with the Robot Operating System. With an example application on the Pepper service robot, we show how primitive actions can be easily mapped to the ROS ActionLib framework and present our control architecture in detail.
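The mapping idea itself can be pictured with the standard ROS 1 actionlib client interface. This is a sketch only, not the golog++ code; the move_base goal is a placeholder for whatever primitive action the high-level program calls.

    # Sketch: a high-level primitive action "goto(x, y)" executed through
    # the ROS ActionLib interface.
    import rospy
    import actionlib
    from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

    def goto(x, y):
        client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
        client.wait_for_server()

        goal = MoveBaseGoal()
        goal.target_pose.header.frame_id = 'map'
        goal.target_pose.header.stamp = rospy.Time.now()
        goal.target_pose.pose.position.x = x
        goal.target_pose.pose.position.y = y
        goal.target_pose.pose.orientation.w = 1.0

        client.send_goal(goal)       # the primitive action starts...
        client.wait_for_result()     # ...and ends when the ROS action returns
        return client.get_state()

    if __name__ == '__main__':
        rospy.init_node('goto_example')
        goto(1.0, 2.0)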
The production and assembly of customized products increases the demand for flexible automation systems. One approach is to remove the safety fences that separate humans and industrial robots in order to combine their skills. This collaboration poses a certain risk to the human co-worker, which has led to numerous safety concepts for their protection. The human needs to be monitored and tracked by a safety system using different sensors. The proposed system consists of an RGB-D camera for surveillance of the common working area, an array of optical distance sensors to compensate for shadowing effects of the RGB-D camera, and a laser range finder to detect the co-worker when approaching the work cell. The software for collision detection, path planning, robot control, and predicting the behaviour of the co-worker is based on the Robot Operating System (ROS). A first prototype of the work cell shows that with advanced algorithms from the field of mobile robotics a very flexible safety concept can be realized: the robot does not simply stop its movement when detecting a collision, but plans and executes an alternative path around the obstacle.
Project work and interdisciplinarity are integral parts of today's engineering work. It is therefore important to incorporate these aspects into the curriculum of academic engineering studies. At the Faculty of Electrical Engineering and Information Technology, an interdisciplinary project is part of the bachelor program to address these topics. Since the summer term of 2020, most courses, including the interdisciplinary projects, have switched to online mode during the Covid-19 crisis. This online mode introduces additional challenges for the execution of the projects, both for the students and for the lecturers. The challenges, but also the risks and opportunities, of this kind of project course are the subject of this paper, based on five different interdisciplinary projects.
To successfully develop and introduce concrete artificial intelligence (AI) solutions in operational practice, a comprehensive process model is being tested in the WIRKsam joint project. It is based on a methodical approach that integrates human, technical, and organisational aspects and involves employees in the process. The chapter focuses on the procedure for identifying requirements for a work system that implements AI in problem-driven projects and for selecting appropriate AI methods. This means that the use case has already been narrowed down at the beginning of the project and must then be fully defined. First, the existing preliminary work is presented. Based on this, an overview of all procedural steps and methods is given. All methods are presented in detail and good-practice approaches are shown. Finally, the developed procedure is reflected upon, based on its application in nine companies.
We present a robotic tool that autonomously follows a conversation to enable remote presence in video conferencing. When humans participate in a meeting with the help of video conferencing tools, it is crucial that they are able to follow the conversation with both acoustic and visual input. To this end, we design and implement a video conferencing tool robot that uses binaural sound source localization as its main cue to autonomously orient towards the currently talking speaker. To increase the robustness of the acoustic cue against noise, we supplement the sound localization with a source detection stage. We also include a simple onset detector to retain fast response times. Since we only use two microphones, we are confronted with ambiguities as to whether a source is in front of or behind the device. We resolve these ambiguities with the help of face detection and additional movements. We tailor the system to our target scenarios in experiments with a four-minute scripted conversation. In these experiments we evaluate the influence of different system settings on the responsiveness and accuracy of the device.
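With only two microphones, the dominant localization cue is the interaural time difference (ITD). The following cross-correlation sketch (microphone spacing, sample rate, and the toy signal are assumptions; it is not the system's actual pipeline) also makes the front/back ambiguity mentioned above tangible, since the ITD alone cannot distinguish a source at 30 degrees in front from one at 150 degrees behind.

    # Sketch: bearing estimation from the interaural time difference.
    import numpy as np

    FS = 16000        # sample rate [Hz]
    MIC_DIST = 0.2    # microphone spacing [m]
    C = 343.0         # speed of sound [m/s]

    def estimate_azimuth(left, right):
        """Return a bearing in degrees (0 = straight ahead).

        A two-microphone estimate cannot tell front from back; resolving
        that ambiguity needs extra cues (e.g. face detection or a turn).
        """
        corr = np.correlate(left, right, mode='full')
        lag = np.argmax(corr) - (len(right) - 1)   # delay in samples
        itd = lag / FS                             # delay in seconds
        sin_az = np.clip(itd * C / MIC_DIST, -1.0, 1.0)
        return np.degrees(np.arcsin(sin_az))

    # Toy example: the same burst arrives 3 samples later at one microphone.
    sig = np.random.randn(1024)
    print(round(estimate_azimuth(sig, np.roll(sig, 3)), 1))  # source off-centre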
The maintenance of wind turbines is of growing importance considering the transition to renewable energy. This paper presents a multi-robot approach for automated wind turbine maintenance, including a novel climbing robot. Currently, wind turbine maintenance remains a manual task, which is monotonous, dangerous, and also physically demanding due to the large scale of wind turbines. Technical climbers are required to work at significant heights, even in bad weather conditions. Furthermore, a skilled labor force with sufficient knowledge in repairing fiber composite material is rare. Autonomous mobile systems enable the digitization of the maintenance process and can be designed for weather-independent operation. This work contributes to the development and experimental validation of a maintenance system consisting of multiple robotic platforms for a variety of tasks, such as wind turbine tower and rotor blade service. In this work, multicopters with vision and LiDAR sensors for global inspection are used to guide slower climbing robots. Lightweight magnetic climbers with surface contact are used to analyze structural parts with non-destructive inspection methods and to locally repair smaller defects. Localization is enabled by adapting odometry for conical-shaped surfaces and considering additional navigation sensors. Magnets are suitable for steel towers to clamp onto the surface. A friction-based climbing ring robot (SMART: Scanning, Monitoring, Analyzing, Repair and Transportation) completes the set-up for higher payloads. The maintenance period can be extended by using weather-proofed maintenance robots. The multi-robot system runs the Robot Operating System (ROS). Additionally, first steps towards machine learning would enable maintenance staff to use pattern classification for fault diagnosis in order to operate safely from the ground in the future.
This summer, the RoboCup competitions were held for the 20th time in Leipzig, Germany. It was the second time that RoboCup took place in Germany, 10 years after the 2006 RoboCup in Bremen. In this article, we give an overview of the latest developments in RoboCup and what has happened in the different leagues over the last decade. With its 20th edition, RoboCup clearly is a success story and a role model for robotics competitions. From our personal viewpoint, we acknowledge this with a retrospective on what makes RoboCup such a success.
The Robot Operating System (ROS) is the current de-facto standard in robot middlewares. The steadily increasing size of the user base results in a greater demand for training as well. User groups range from students in academia to industry professionals, with a broad spectrum of developers in between. To deliver high-quality training and education to any of these audiences, educators need to tailor individual curricula for each such training. In this paper, we present an approach to ease compiling curricula for ROS trainings based on a taxonomy of the teaching content. The instructor can select a set of dedicated learning units, and the system will automatically compile the teaching material based on the dependencies of the selected units and a set of parameters for a particular training. We walk through an example training to illustrate our work.
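The compilation step can be pictured as collecting the transitive dependencies of the selected units and ordering them so that every prerequisite comes first. The unit names and dependencies below are invented for illustration and are not the actual taxonomy.

    # Sketch: compiling a curriculum from selected units and a dependency
    # taxonomy. Requires Python 3.9+ for graphlib.
    from graphlib import TopologicalSorter

    DEPENDS_ON = {
        "ros_basics": [],
        "nodes":      ["ros_basics"],
        "topics":     ["nodes"],
        "services":   ["nodes"],
        "actions":    ["topics"],
        "tf":         ["topics"],
        "navigation": ["actions", "tf"],
    }

    def compile_curriculum(selected):
        """Return the selected units plus all transitive dependencies,
        ordered so that every prerequisite is taught first."""
        needed, stack = set(), list(selected)
        while stack:
            unit = stack.pop()
            if unit not in needed:
                needed.add(unit)
                stack.extend(DEPENDS_ON[unit])
        graph = {u: [d for d in DEPENDS_ON[u] if d in needed] for u in needed}
        return list(TopologicalSorter(graph).static_order())

    print(compile_curriculum(["navigation"]))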
The main objective of our ROS Summer School series is to introduce MA-level students to programming mobile robots with the Robot Operating System (ROS). ROS is a robot middleware that is used by many research institutions world-wide. Therefore, many state-of-the-art algorithms of mobile robotics are available in ROS and can be deployed very easily. As a basic robot platform we deploy a 1/10-scale RC car that is equipped with an Arduino micro-controller to control the servo motors, and an embedded PC that runs ROS. In two weeks, participants learn the basics of mobile robotics hands-on. We describe our teaching concepts and our curriculum and report on the learning success of our students.