
Reinforcement learning by reward-weighted regression for operational space control

2007

Conference Paper

Many robot control problems of practical importance, including operational space control, can be reformulated as immediate reward reinforcement learning problems. However, few of the known optimization or reinforcement learning algorithms can be used in online learning control for robots, as they are either prohibitively slow, do not scale to interesting domains of complex robots, or require trying out policies generated by random search, which is infeasible for a physical system. Using a generalization of the EM-based reinforcement learning framework suggested by Dayan & Hinton, we reduce the problem of learning with immediate rewards to a reward-weighted regression problem with an adaptive, integrated reward transformation for faster convergence. The resulting algorithm is efficient, learns smoothly without dangerous jumps in solution space, and works well in applications of complex high degree-of-freedom robots.
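The core idea of the abstract can be illustrated with a minimal sketch: one EM-style update fits linear policy parameters by ordinary weighted least squares, where each sampled action is weighted by an exponentially transformed immediate reward. This is an illustrative toy, not the paper's implementation; the fixed temperature `beta` merely stands in for the adaptive reward transformation described above, and all names are hypothetical.

```python
import numpy as np

def reward_weighted_regression(Phi, u, r, beta=1.0):
    """One reward-weighted regression update.

    Phi : (N, d) state features for each sample
    u   : (N,)   exploratory actions taken
    r   : (N,)   immediate rewards received
    beta: reward-transformation temperature (hypothetical stand-in
          for the paper's adaptive transformation)

    Returns policy parameters theta solving the weighted least-squares
    problem min_theta sum_i w_i (u_i - Phi_i theta)^2 with
    w_i = exp(beta * r_i).
    """
    w = np.exp(beta * (r - r.max()))  # shift rewards for numerical stability
    WPhi = Phi * w[:, None]           # apply weights row-wise
    return np.linalg.solve(Phi.T @ WPhi, WPhi.T @ u)

# Toy usage: recover a linear policy from noisy exploratory actions,
# where reward penalizes squared deviation from the (unknown) target action.
rng = np.random.default_rng(0)
theta_true = np.array([2.0, -1.0])
Phi = rng.normal(size=(200, 2))
u = Phi @ theta_true + rng.normal(scale=0.1, size=200)  # exploration noise
r = -(u - Phi @ theta_true) ** 2                         # immediate reward
theta = reward_weighted_regression(Phi, u, r, beta=5.0)
```

Because high-reward (low-noise) samples dominate the weighted fit, the recovered `theta` lands close to `theta_true`, and the update never requires evaluating randomly generated policies on the system itself.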

Author(s): Peters, J. and Schaal, S.
Book Title: Proceedings of the 24th Annual International Conference on Machine Learning
Pages: 745-750
Year: 2007

Department(s): Autonomous Motion, Empirical Inference
Bibtex Type: Conference Paper (inproceedings)

Cross Ref: p2675
DOI: 10.1145/1273496.1273590
Event Name: ICML 2007
Event Place: Corvallis, OR, USA
Note: clmc
URL: http://www-clmc.usc.edu/publications//P/peters_ICML2007.pdf

BibTeX

@inproceedings{Peters_PICML_2007,
  title = {Reinforcement learning by reward-weighted regression for operational space control},
  author = {Peters, J. and Schaal, S.},
  booktitle = {Proceedings of the 24th Annual International Conference on Machine Learning},
  pages = {745--750},
  year = {2007},
  note = {clmc},
  crossref = {p2675},
  url = {http://www-clmc.usc.edu/publications//P/peters_ICML2007.pdf}
}