Many tasks for autonomous agents or robots are best described by a specification of the environment and a specification of the actions the agent or robot can perform. Combining such a specification with the ability to imperatively program the robot or agent is what we call action-based imperative programming. One of the most successful such approaches is Golog. In this paper, we draft a proposal for a new robot programming language, YAGI, which is based on the action-based imperative programming paradigm. Our goal is to design a small, portable, stand-alone YAGI interpreter. We combine the benefits of a principled domain specification with a clean, small, and simple programming language that does not exploit any side effects of the implementation language. We discuss general requirements of action-based programming languages and outline YAGI, our action-based language approach, which particularly aims at embeddability.
This paper presents an approach for reducing the cognitive load of humans working in quality control (QC) for production processes that adhere to the 6σ methodology. While 100% QC requires every part to be inspected, this effort can be reduced when a human-in-the-loop QC process is supported by an anomaly detection system that presents for manual inspection only those parts with a significant likelihood of being defective. This approach shows good results when applied to image-based QC for metal textile products.
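The gating idea described above can be sketched as follows. This is a hypothetical illustration, not the paper's actual system: the scoring function and the threshold are assumptions standing in for an image-based anomaly detector.

```python
# Hypothetical sketch: route only likely-defective parts to manual inspection.
# `anomaly_score` and `threshold` are illustrative assumptions, not the
# paper's actual detector or operating point.

def route_for_inspection(parts, anomaly_score, threshold=0.8):
    """Return the subset of parts whose anomaly score suggests a defect."""
    return [p for p in parts if anomaly_score(p) >= threshold]

# Toy usage: the scores stand in for an image-based anomaly detector's output.
scores = {"part-1": 0.05, "part-2": 0.92, "part-3": 0.40, "part-4": 0.85}
flagged = route_for_inspection(scores, scores.get)
print(flagged)  # only part-2 and part-4 are presented to the human inspector
```

The human still inspects every flagged part, so the workload reduction comes entirely from how few non-defective parts exceed the threshold.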
In the future, we expect manufacturing companies to follow a new paradigm that mandates more automation and autonomy in production processes. Such smart factories will offer a variety of production technologies as services that can be combined ad hoc to produce a large number of different product types and variants cost-effectively even in small lot sizes. This is enabled by cyber-physical systems that feature flexible automated planning methods for production scheduling, execution control, and in-factory logistics.
During development, testbeds are required to determine the applicability of integrated systems in such scenarios. Furthermore, benchmarks are needed to quantify and compare system performance in these industry-inspired scenarios at a size that is comprehensible and manageable, yet complex enough to yield meaningful results.
In this chapter, based on our experience in the RoboCup Logistics League (RCLL) as a specific example, we derive a generic blueprint for how a holistic benchmark can be developed, which combines a specific scenario with a set of key performance indicators as metrics to evaluate the overall integrated system and its components.
Benchmarking of various LiDAR sensors for use in self-driving vehicles in real-world environments
(2022)
Abstract
In this paper, we report on our benchmark results for the LiDAR sensors Livox Horizon, Robosense M1, Blickfeld Cube, Blickfeld Cube Range, Velodyne Velarray H800, and Innoviz Pro. The idea was to test the sensors in different typical scenarios that were defined with real-world use cases in mind, in order to find a sensor that meets the requirements of self-driving vehicles. For this, we defined static and dynamic benchmark scenarios. In the static scenarios, neither the LiDAR sensor nor the detection target moves during the measurement. In the dynamic scenarios, the LiDAR sensor was mounted on a vehicle driving toward the detection target. We tested all of the above LiDAR sensors in both scenario types, present the results regarding the detection accuracy of the targets, and discuss their usefulness for deployment in self-driving cars.
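One simple way to score such static and dynamic runs is the fraction of frames in which the target was detected. This is a hedged sketch of a plausible per-scenario metric only; the paper's actual evaluation procedure and metrics are not reproduced here.

```python
# Hypothetical sketch of one possible per-scenario benchmark metric:
# the fraction of LiDAR frames in which the detection target was found.
# This is an illustrative assumption, not the paper's actual metric.

def detection_rate(frames):
    """frames: list of booleans, True if the target was detected in that frame."""
    if not frames:
        return 0.0
    return sum(frames) / len(frames)

# Toy usage: a static run (sensor and target fixed) with 100 recorded frames.
static_frames = [True] * 95 + [False] * 5
print(f"static detection rate: {detection_rate(static_frames):.2f}")  # 0.95
```

For a dynamic run, the same rate could be reported per distance bin as the vehicle approaches the target, since detection typically degrades with range.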
In this paper we present CAESAR, an intelligent domestic service robot. In domestic settings, service robots have to accomplish complex tasks. Those tasks benefit from deliberation, from robust action execution, and from flexible methods for human–robot interaction that account for qualitative notions used in natural language as well as for human fallibility. Our robot CAESAR deploys AI techniques on several levels of its system architecture. On the low-level side, system modules for localization or navigation make use of, for instance, path-planning methods, heuristic search, and Bayesian filters. For face recognition and human–machine interaction, random trees and well-known methods from natural language processing are deployed. For deliberation, we use the robot programming and plan language READYLOG, which was developed for the high-level control of agents and robots; it allows combining programmed behaviour with planning to find a course of action. READYLOG is a variant of the robot programming language Golog. We extended READYLOG to cope with qualitative notions of space frequently used by humans, such as “near” and “far”. This facilitates human–robot interaction by bridging the gap between human natural language and the numerical values needed by the robot. Further, we use READYLOG to increase the flexibility of interpreting human commands with decision-theoretic planning. We give an overview of the different methods deployed in CAESAR and show the applicability of a system equipped with these AI techniques in domestic service robotics.
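The bridging of qualitative spatial language and metric values can be sketched as a simple mapping. This is a hypothetical illustration in the spirit of the "near"/"far" extension described above; the threshold values and the three-label scheme are assumptions, not CAESAR's actual model.

```python
# Hypothetical sketch: map a metric distance (as measured by the robot)
# to a qualitative label a human might use in natural language.
# The thresholds and labels are illustrative assumptions only.

def qualitative_distance(meters):
    """Translate a numeric distance in meters into a qualitative notion."""
    if meters < 1.5:
        return "near"
    elif meters < 5.0:
        return "medium"
    return "far"

print(qualitative_distance(0.8))   # near
print(qualitative_distance(10.0))  # far
```

In the reverse direction, a command such as "go near the table" would be grounded by choosing a goal position whose distance falls inside the corresponding numeric interval.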