
Gaussian Process Dynamic Programming

Reinforcement learning (RL) and optimal control of systems with continuous states and actions require approximation techniques in most interesting cases. In this article, we introduce Gaussian process dynamic programming (GPDP), an approximate value-function-based RL algorithm. We consider both a classic optimal control problem, where problem-specific prior knowledge is available, and a classic RL problem, where only very general priors can be used. For the classic optimal control problem, GPDP models the unknown value functions with Gaussian processes and generalizes dynamic programming to continuous-valued states and actions. For the RL problem, GPDP starts from a given initial state and explores the state space using Bayesian active learning. To design a fast learner, available data must be used efficiently. Hence, we propose to learn probabilistic models of the a priori unknown transition dynamics and the value functions on the fly. In both cases, we successfully apply the resulting continuous-valued controllers to the under-actuated pendulum swing-up and analyze the performance of the suggested algorithms. It turns out that GPDP uses data very efficiently and can be applied to problems where classic dynamic programming would be cumbersome.
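
To give a concrete picture of the value-function backup the abstract describes, the following is a minimal sketch in Python/NumPy, not the paper's implementation: the scalar dynamics, reward, kernel hyperparameters, and grid sizes are illustrative placeholders, and the sketch maximizes over a coarse action grid, whereas GPDP additionally models the Q-function with a Gaussian process so that the maximization stays over continuous actions.

    import numpy as np

    # Squared-exponential kernel on scalar states; hyperparameters are
    # illustrative placeholders, not values from the paper.
    def kernel(A, B, lengthscale=0.5, signal_var=1.0):
        sq = (A[:, None] - B[None, :]) ** 2
        return signal_var * np.exp(-0.5 * sq / lengthscale**2)

    def gp_fit(X, y, noise_var=1e-4):
        # Standard GP regression: precompute K^{-1} y via a Cholesky solve.
        K = kernel(X, X) + noise_var * np.eye(len(X))
        L = np.linalg.cholesky(K)
        alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
        return X, alpha

    def gp_mean(model, x_star):
        # Posterior mean prediction at query state(s) x_star.
        X, alpha = model
        return kernel(np.atleast_1d(x_star), X) @ alpha

    # Hypothetical scalar dynamics and reward, stand-ins for the pendulum task.
    def dynamics(s, a):
        return 0.9 * s + a

    def reward(s, a):
        return -s**2 - 0.1 * a**2

    states = np.linspace(-2.0, 2.0, 25)    # support states carrying the GP
    actions = np.linspace(-1.0, 1.0, 11)   # coarse action grid (simplification)
    gamma = 0.9
    V = np.zeros_like(states)

    for _ in range(50):
        model = gp_fit(states, V)          # GP model of the current value function
        Q = np.array([[reward(s, a) + gamma * gp_mean(model, dynamics(s, a))[0]
                       for a in actions] for s in states])
        V = Q.max(axis=1)                  # Bellman backup through the GP's generalization

    policy = actions[Q.argmax(axis=1)]     # greedy action at each support state

The key design idea visible even in this toy version is that the value function is only ever evaluated at a finite set of support states, while the GP generalizes it to the successor states produced by the dynamics, which is what lets dynamic programming run on continuous-valued state spaces.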

Author(s): Deisenroth, MP. and Rasmussen, CE. and Peters, J.
Journal: Neurocomputing
Volume: 72
Number (issue): 7-9
Pages: 1508-1524
Year: 2009
Month: March

Department(s): Empirical Inference
Bibtex Type: Article (article)

DOI: 10.1016/j.neucom.2008.12.019
Language: en
Organization: Max-Planck-Gesellschaft
School: Biologische Kybernetik



@article{Deisenroth2009GPDP,
  title = {Gaussian Process Dynamic Programming},
  author = {Deisenroth, MP. and Rasmussen, CE. and Peters, J.},
  journal = {Neurocomputing},
  volume = {72},
  number = {7--9},
  pages = {1508--1524},
  month = mar,
  year = {2009},
  doi = {10.1016/j.neucom.2008.12.019},
  organization = {Max-Planck-Gesellschaft},
  school = {Biologische Kybernetik}
}