DOI: https://doi.org/10.15368/theses.2019.72
Available at: https://digitalcommons.calpoly.edu/theses/2054
Date of Award
9-2019
Degree Name
MS in Electrical Engineering
Department/Program
Electrical Engineering
Advisor
Xiao-Hua Yu
Abstract
State-of-the-art model-free reinforcement learning algorithms can generate admissible controls for complicated systems with no prior knowledge of the system dynamics, so long as a sufficient number of samples (oftentimes millions) are available from the environment. Model-based reinforcement learning approaches, on the other hand, seek to bring well-established optimal and robust control techniques to reinforcement learning tasks by modelling the system dynamics and applying those control algorithms to the learned model. Sliding-mode controllers are robust to system disturbances and modelling errors, and have been widely used to control high-order nonlinear systems. This thesis studies the application of sliding-mode control to model-based reinforcement learning. Computer simulation results demonstrate that sliding-mode control is viable in the reinforcement learning setting. Although performance can suffer from deviations in state estimation, limits on the capacity of the system model to express the true dynamics, and the number of samples required for convergence, the approach still performs comparably to conventional model-free reinforcement learning methods.
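To make the control law the abstract refers to concrete, below is a minimal sketch of a sliding-mode controller for a generic second-order system. This is not the thesis's implementation: the damped-pendulum dynamics, the gains (lam, k, phi), and the boundary-layer saturation are illustrative assumptions chosen only to show the standard structure of sliding-mode control (a sliding surface s = e_dot + lam * e, an equivalent control that cancels the modelled dynamics, and a switching term that drives the state onto s = 0).

```python
# Minimal sliding-mode control sketch for x_ddot = f(x, x_dot) + b * u.
# The plant model, gains, and reference below are illustrative
# assumptions, not the system or parameters used in the thesis.

import numpy as np

# Illustrative plant: a damped pendulum.
G, L, B_DAMP, B_GAIN = 9.81, 1.0, 0.1, 1.0

def f(x, x_dot):
    # Modelled (assumed known) part of the dynamics.
    return -(G / L) * np.sin(x) - B_DAMP * x_dot

def smc_control(x, x_dot, x_ref, x_ref_dot, x_ref_ddot,
                lam=5.0, k=10.0, phi=0.05):
    """Sliding-mode control law u = u_eq - (k / b) * sat(s / phi).

    s = e_dot + lam * e is the sliding surface. The equivalent
    control u_eq cancels the modelled dynamics so that, on the
    surface, the tracking error decays with rate lam. A saturation
    over a boundary layer of width phi replaces sign(s) to reduce
    chattering from the discontinuous switching term.
    """
    e = x - x_ref
    e_dot = x_dot - x_ref_dot
    s = e_dot + lam * e
    u_eq = (x_ref_ddot - lam * e_dot - f(x, x_dot)) / B_GAIN
    return u_eq - (k / B_GAIN) * np.clip(s / phi, -1.0, 1.0)

# Simple Euler simulation: regulate the pendulum to x_ref = 0
# from an initial angular offset.
dt, x, x_dot = 1e-3, 0.5, 0.0
for _ in range(5000):
    u = smc_control(x, x_dot, 0.0, 0.0, 0.0)
    x_dot += (f(x, x_dot) + B_GAIN * u) * dt
    x += x_dot * dt
print(f"final tracking error: {abs(x):.4f} rad")
```

In a model-based reinforcement learning setting such as the one this thesis studies, the true f above would be unknown and replaced by a model learned from environment samples, which is the source of the state-estimation and model-capacity issues the abstract mentions.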