
Policy Search for Motor Primitives in Robotics

2011

Article



Many motor skills in humanoid robotics can be learned using parametrized motor primitives. While successful applications to date have been achieved with imitation learning, most of the interesting motor learning problems are high-dimensional reinforcement learning problems. These problems are often beyond the reach of current reinforcement learning methods. In this paper, we study parametrized policy search methods and apply these to benchmark problems of motor primitive learning in robotics. We show that many well-known parametrized policy search methods can be derived from a general, common framework. This framework yields both policy gradient methods and expectation-maximization (EM) inspired algorithms. We introduce a novel EM-inspired algorithm for policy learning that is particularly well-suited for dynamical system motor primitives. We compare this algorithm, both in simulation and on a real robot, to several well-known parametrized policy search methods such as episodic REINFORCE, ‘Vanilla’ Policy Gradients with optimal baselines, episodic Natural Actor Critic, and episodic Reward-Weighted Regression. We show that the proposed method outperforms them on an empirical benchmark of learning dynamical system motor primitives both in simulation and on a real robot. We apply it in the context of motor learning and show that it can learn a complex Ball-in-a-Cup task on a real Barrett WAM™ robot arm.
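The abstract's EM-inspired family of methods can be illustrated with a generic reward-weighted parameter update: sample policy parameters from an exploration distribution, evaluate the episodic return of each rollout, and move the mean toward the reward-weighted average of the samples. The sketch below is a minimal toy version of this idea, not the paper's actual algorithm; the reward function, dimensionality, and all hyperparameters are invented for illustration.

```python
import math
import random

def episodic_return(theta, target=(0.6, -0.4)):
    # Toy stand-in for a rollout's return: peaks when the parameters
    # reach a hypothetical optimum `target` (not from the paper).
    return math.exp(-sum((t - g) ** 2 for t, g in zip(theta, target)))

def reward_weighted_search(dim=2, iters=60, samples=30, sigma=0.3, seed=0):
    """Generic EM-style policy search sketch.

    Each iteration: perturb the current parameter mean with Gaussian
    exploration noise, score every perturbed parameter vector by its
    episodic return, then set the new mean to the reward-weighted
    average of the sampled parameters.
    """
    rng = random.Random(seed)
    mean = [0.0] * dim
    for _ in range(iters):
        rollouts = []
        for _ in range(samples):
            theta = [m + rng.gauss(0.0, sigma) for m in mean]
            rollouts.append((episodic_return(theta), theta))
        total = sum(r for r, _ in rollouts)
        # M-step: reward-weighted average of the sampled parameters.
        mean = [sum(r * th[k] for r, th in rollouts) / total
                for k in range(dim)]
    return mean
```

Because the update is a weighted average rather than a gradient step, it needs no learning rate, which is one practical appeal of EM-inspired methods over plain policy gradients noted in this line of work.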

Author(s): Kober, J. and Peters, J.
Journal: Machine Learning
Volume: 84
Number (issue): 1-2
Pages: 171-203
Year: 2011
Month: July

Department(s): Empirical Inference
Research Project(s): Reinforcement Learning
Robot Skill Learning
Bibtex Type: Article (article)

DOI: 10.1007/s10994-010-5223-6
Language: en
Organization: Max-Planck-Gesellschaft
School: Biologische Kybernetik

Links: PDF

BibTex

@article{6802,
  title = {Policy Search for Motor Primitives in Robotics},
  author = {Kober, J. and Peters, J.},
  journal = {Machine Learning},
  volume = {84},
  number = {1-2},
  pages = {171-203},
  organization = {Max-Planck-Gesellschaft},
  school = {Biologische Kybernetik},
  month = jul,
  year = {2011},
  doi = {10.1007/s10994-010-5223-6}
}