TY - CHAP
A1 - Hoegen, Anne von
A1 - Doncker, Rik W. De
A1 - Rütters, René
T1 - Teaching Digital Control of Operational Amplifier Processes with a LabVIEW Interface and Embedded Hardware
T2 - The 23rd International Conference on Electrical Machines and Systems (ICEMS), Hamamatsu, Japan
Y1 - 2020
U6 - http://dx.doi.org/10.23919/ICEMS50442.2020.9290928
SP - 1117
EP - 1122
ER -

TY - CHAP
A1 - Rütters, René
A1 - Weinheimer, Marius
A1 - Bragard, Michael
T1 - Teaching Control Theory with a Simplified Helicopter Model and a Classroom Fitting Hardware Test-Bench
T2 - 2018 IEEE 59th International Scientific Conference on Power and Electrical Engineering of Riga Technical University (RTUCON)
Y1 - 2018
SN - 978-1-5386-6903-7
U6 - http://dx.doi.org/10.1109/RTUCON.2018.8659871
ER -

TY - CHAP
A1 - Wittig, M.
A1 - Rütters, René
A1 - Bragard, Michael
ED - Reiff-Stephan, Jörg
ED - Jäkel, Jens
ED - Schwarz, André
T1 - Application of RL in control systems using the example of a rotatory inverted pendulum
T2 - Tagungsband AALE 2024 : Fit für die Zukunft: praktische Lösungen für die industrielle Automation
N2 - In this paper, the use of reinforcement learning (RL) in control systems is investigated using a rotatory inverted pendulum as an example. The control behavior of an RL controller is compared to that of traditional LQR and MPC controllers by evaluating their behavior under optimal conditions, their disturbance rejection, their robustness and their development process. All the investigated controllers are developed in MATLAB and the Simulink simulation environment and later deployed to a real pendulum model powered by a Raspberry Pi. The RL algorithm used is Proximal Policy Optimization (PPO). The LQR controller offers an easy development process, average to good control behavior and average to good robustness. A linear MPC controller showed excellent results under optimal operating conditions; however, when subjected to disturbances or deviations from the equilibrium point, it performed poorly and sometimes became unstable. Employing a nonlinear MPC controller in real time was not possible due to the high computational effort involved. The RL controller exhibits by far the most versatile and robust control behavior. When operated in the simulation environment, it achieved high control accuracy; when deployed on the real system, however, it achieved only average accuracy and suffered a significantly greater simulation-to-reality performance loss than the traditional controllers. With MATLAB, it is not yet possible to post-train the RL controller directly on the Raspberry Pi, which is an obstacle to the practical application of RL in prototyping or teaching settings. Nevertheless, RL proves to be a flexible and powerful control method that is well suited to complex or nonlinear systems where traditional controllers struggle.
KW - Rotatory Inverted Pendulum
KW - MPC
KW - LQR
KW - PPO
KW - Reinforcement Learning
Y1 - 2024
SN - 978-3-910103-02-3
U6 - http://dx.doi.org/10.33968/2024.53
N1 - 20th AALE Conference, Bielefeld, 06.03.-08.03.2024. (Proceedings available at https://doi.org/10.33968/2024.29)
SP - 241
EP - 248
PB - le-tex publishing services GmbH
CY - Leipzig
ER -