DOI: https://doi.org/10.15368/theses.2019.87
Available at: https://digitalcommons.calpoly.edu/theses/2071
Date of Award
9-2019
Degree Name
MS in Electrical Engineering
Department/Program
Electrical Engineering
Advisor
Helen Yu
Abstract
Applying reinforcement learning to control systems allows elegant and efficient control laws to be developed through machine learning. Coupled with the representational power of neural networks, reinforcement learning algorithms can learn complex policies that can be difficult to emulate using traditional control system design approaches. In this thesis, three model-free reinforcement learning algorithms, Monte Carlo Control, REINFORCE with baseline, and Guided Policy Search, are compared in simulated, continuous action-space environments. The results show that Guided Policy Search learns a desired control policy much faster than the other two algorithms: up to three times faster on the inverted pendulum system and up to nearly fifteen times faster on the cartpole system.
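To illustrate one of the compared algorithms, the following is a minimal sketch of REINFORCE with a learned baseline on a toy continuous-action task. It is not the thesis implementation; the 1-D point-mass environment, linear Gaussian policy, linear baseline, and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch of REINFORCE with a learned baseline (assumed setup,
# not the thesis code): drive a 1-D point mass toward the origin.
import numpy as np

rng = np.random.default_rng(0)

# --- toy environment (assumption for illustration) ---
def reset():
    return rng.uniform(-1.0, 1.0)            # state: position in [-1, 1]

def step(state, action):
    next_state = state + 0.1 * action         # simple integrator dynamics
    reward = -next_state ** 2                  # penalize distance from origin
    return next_state, reward

# --- linear Gaussian policy and linear baseline (assumed parameterization) ---
theta = np.zeros(2)        # policy weights: mean action = theta @ [s, 1]
log_std = 0.0              # fixed exploration noise (assumption)
w = np.zeros(2)            # baseline weights: V(s) ~ w @ [s, 1]
alpha_pi, alpha_v, gamma = 0.05, 0.1, 0.99

def features(s):
    return np.array([s, 1.0])

for episode in range(500):
    s, traj = reset(), []
    for _ in range(20):                        # fixed-length episodes
        mu = theta @ features(s)
        a = mu + np.exp(log_std) * rng.standard_normal()
        s_next, r = step(s, a)
        traj.append((s, a, r))
        s = s_next

    # Monte Carlo return from each time step
    G, returns = 0.0, []
    for (_, _, r) in reversed(traj):
        G = r + gamma * G
        returns.append(G)
    returns.reverse()

    for (s_t, a_t, _), G_t in zip(traj, returns):
        phi = features(s_t)
        baseline = w @ phi
        advantage = G_t - baseline             # baseline reduces variance
        mu = theta @ phi
        # grad of log N(a; mu, sigma^2) w.r.t. theta
        grad_log_pi = (a_t - mu) / np.exp(2 * log_std) * phi
        theta += alpha_pi * advantage * grad_log_pi
        w += alpha_v * (G_t - baseline) * phi  # regress baseline on returns

print("learned policy mean weights:", theta)
```

The baseline subtraction leaves the policy-gradient estimate unbiased while lowering its variance, which is the property that distinguishes this method from plain REINFORCE.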
Included in
Controls and Control Theory Commons, Other Electrical and Computer Engineering Commons, Robotics Commons