Conference Proceeding
In this paper, the use of reinforcement learning (RL) in control systems is investigated using a rotary inverted pendulum as an example. The control behavior of an RL controller is compared with that of traditional LQR and MPC controllers by evaluating their behavior under optimal conditions, their disturbance rejection, their robustness, and their development process. All controllers are developed in MATLAB and the Simulink simulation environment and later deployed to a real pendulum rig driven by a Raspberry Pi. The RL algorithm used is Proximal Policy Optimization (PPO). The LQR controller offers an easy development process, average to good control behavior, and average to good robustness. A linear MPC controller showed excellent results under optimal operating conditions; however, when subjected to disturbances or deviations from the equilibrium point, its performance was poor and at times unstable. Running a nonlinear MPC controller in real time was not possible due to the high computational effort involved. The RL controller exhibits by far the most versatile and robust control behavior. In the simulation environment it achieved high control accuracy; on the real system, however, it shows only average accuracy and a significantly greater performance loss relative to simulation than the traditional controllers. MATLAB does not yet allow the RL controller to be post-trained directly on the Raspberry Pi, which is an obstacle to the practical application of RL in a prototyping or teaching setting. Nevertheless, RL proves to be a flexible and powerful control method, well suited to complex or nonlinear systems where traditional controllers struggle.
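The LQR baseline described above can be sketched as follows. This is a minimal illustration only: the state-space matrices are hypothetical values loosely in the range of a small Furuta-style pendulum, not the model from the paper, and the weights `Q` and `R` are arbitrary.

```python
# Illustrative LQR design for a linearized rotary inverted pendulum.
# The matrices below are hypothetical placeholders, not the paper's model.
import numpy as np
from scipy.linalg import solve_continuous_are

# States: [arm angle, pendulum angle, arm rate, pendulum rate]; input: motor voltage.
A = np.array([[0.0,   0.0, 1.0,  0.0],
              [0.0,   0.0, 0.0,  1.0],
              [0.0, 149.3, -0.01, 0.0],
              [0.0, 261.6, -0.01, 0.0]])
B = np.array([[0.0], [0.0], [49.7], [49.1]])

Q = np.diag([5.0, 1.0, 1.0, 1.0])   # state weights (arbitrary choice)
R = np.array([[1.0]])               # input weight

# Solve the continuous algebraic Riccati equation and form the gain u = -K x.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P

# For a controllable (A, B) with Q > 0, R > 0, the closed loop A - BK is stable.
closed_loop = A - B @ K
```

The same comparison loop would then swap in an MPC or PPO policy for `K`; only the gain synthesis step differs between the controllers evaluated in the paper.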
Application of polymers in textile reinforced concrete: from the interface to construction elements
(2006)
Application of Low NOx Micro-mix Hydrogen Combustion to 2MW Class Industrial Gas Turbine Combustor
(2019)
Prolonged operations close to small solar system bodies require a sophisticated control logic to minimize propellant mass and maximize operational efficiency. A control logic based on Discrete Mechanics and Optimal Control (DMOC) is proposed and applied to both conventionally propelled and solar sail spacecraft operating at an arbitrarily shaped, Itokawa-class asteroid. As an example, stand-off inertial hovering is considered, recently identified as a challenging part of the Marco Polo mission; the approach extends readily to stand-off orbits. We show that DMOC is applicable to spacecraft control at small objects, in particular because the algorithm exploits the changes in gravity to control the spacecraft position optimally. Furthermore, we provide some remarks on promising developments.
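As a point of reference for the hovering problem, the baseline control effort can be estimated from a point-mass gravity model: inertial hovering requires the thrust to cancel local gravity. This is a simplification only; the paper uses an irregular Itokawa-class shape model and DMOC, and the mass value below is an approximate Itokawa-like figure, not from the paper.

```python
# Baseline thrust acceleration for stand-off inertial hovering
# under a point-mass gravity model (the paper uses an irregular shape + DMOC).
import numpy as np

G = 6.674e-11          # gravitational constant [m^3 kg^-1 s^-2]
M = 3.5e10             # approximate Itokawa-class mass [kg] (assumption)
mu = G * M             # gravitational parameter [m^3 s^-2]

def hover_accel(r):
    """Control acceleration (pointing away from the body) that cancels
    point-mass gravity -mu * r / |r|^3 at inertial position r [m]."""
    r = np.asarray(r, dtype=float)
    return mu * r / np.linalg.norm(r) ** 3

# Required acceleration at a 500 m stand-off point: on the order of 1e-5 m/s^2,
# which is why small changes in the gravity field matter over long operations.
a = hover_accel([500.0, 0.0, 0.0])
```

DMOC improves on this constant-cancellation baseline by discretizing the variational principle directly and optimizing the control over a trajectory, so that gravity variations are exploited rather than merely compensated.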
This paper presents an approach for reducing the cognitive load of humans working in quality control (QC) for production processes that adhere to the 6σ methodology. While 100% QC requires every part to be inspected, this workload can be reduced when the human-in-the-loop QC process is supported by an anomaly detection system that presents for manual inspection only those parts with a significant likelihood of being defective. The approach shows good results when applied to image-based QC of metal textile products.
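The gating step described above can be sketched as a simple threshold on per-part anomaly scores. The scores and threshold here are synthetic stand-ins; the paper's image-based anomaly model is not shown.

```python
# Human-in-the-loop QC gating: only parts whose anomaly score exceeds a
# threshold are queued for manual inspection. Scores are synthetic stand-ins
# for the output of an image-based anomaly detector.
import numpy as np

rng = np.random.default_rng(0)
scores = rng.random(1000)        # placeholder anomaly likelihoods in [0, 1)
THRESHOLD = 0.95                 # would be tuned so missed defects stay within 6-sigma targets

to_inspect = np.flatnonzero(scores > THRESHOLD)
workload_reduction = 1.0 - len(to_inspect) / len(scores)
```

With a well-calibrated detector, the fraction of parts routed to the human shrinks to the high-score tail, which is the cognitive-load reduction the paper targets.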
Andere Primärenergiequellen [Other primary energy sources]
(1974)