In this paper, the use of reinforcement learning (RL) in control systems is investigated using a rotary inverted pendulum as an example. The control behavior of an RL controller is compared to that of traditional LQR and MPC controllers. This is done by evaluating their behavior under optimal conditions, their disturbance behavior, their robustness, and their development process. All the investigated controllers are developed using MATLAB and the Simulink simulation environment and later deployed to a real pendulum model powered by a Raspberry Pi. The RL algorithm used is Proximal Policy Optimization (PPO). The LQR controller exhibits an easy development process, average to good control behavior, and average to good robustness. A linear MPC controller showed excellent results under optimal operating conditions. However, when subjected to disturbances or deviations from the equilibrium point, it showed poor performance and sometimes unstable behavior. Employing a nonlinear MPC controller in real time was not possible due to the high computational effort involved. The RL controller exhibits by far the most versatile and robust control behavior. When operated in the simulation environment, it achieved a high control accuracy. When employed in the real system, however, it shows only average accuracy and a significantly greater performance loss compared to the simulation than the traditional controllers. With MATLAB, it is not yet possible to directly post-train the RL controller on the Raspberry Pi, which is an obstacle to the practical application of RL in a prototyping or teaching setting. Nevertheless, RL in general proves to be a flexible and powerful control method, which is well suited for complex or nonlinear systems where traditional controllers struggle.
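As a rough illustration of the LQR design step mentioned in the abstract, the sketch below computes a stabilizing state-feedback gain for a linearized inverted-pendulum model. The system matrices here are illustrative placeholders, not the paper's actual plant (which was developed in MATLAB/Simulink); the point is only the general workflow of solving the Riccati equation and checking closed-loop stability.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical linearized pendulum around the upright equilibrium.
# States: [angle, angular velocity]; input: actuator torque.
# These matrices are assumptions for illustration only.
A = np.array([[0.0, 1.0],
              [9.81, 0.0]])   # gravity makes the upright position unstable
B = np.array([[0.0],
              [1.0]])

# LQR weights: Q penalizes state deviation, R penalizes control effort.
Q = np.diag([10.0, 1.0])
R = np.array([[1.0]])

# Solve the continuous-time algebraic Riccati equation, then form the
# optimal state-feedback gain K = R^{-1} B^T P.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

# LQR guarantees the closed-loop matrix A - B K is stable, i.e. all
# eigenvalues lie in the open left half-plane.
eigs = np.linalg.eigvals(A - B @ K)
print(np.all(eigs.real < 0))  # → True
```

The same pattern generalizes to the full four-state rotary pendulum; only the dimensions of `A`, `B`, and `Q` change.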
This report summarizes the results of a workshop on Groupware-related task design which took place at the International Conference on Supporting Group Work (Group'99), Arizona, from 14th to 17th November 1999.
The workshop was addressed to people from different viewpoints, backgrounds, and domains:
- Researchers dealing with questions of task analysis and task modeling for Groupware applications from an academic point of view. They may contribute model-based design approaches or theoretically oriented work.
- Practitioners with experience in the design and everyday use of groupware systems. They might refer to the practical side of the topic: "real" tasks, "real" problems, "real" users, etc.
K3 User Guide
(2000)