Empirical Inference

Reinforcement learning to adjust parametrized motor primitives to new situations

2012

Article



Humans manage to adapt learned movements very quickly to new situations by generalizing learned behaviors from similar situations. In contrast, robots currently often need to re-learn the complete movement. In this paper, we propose a method that learns to generalize parametrized motor plans by adapting a small set of global parameters, called meta-parameters. We employ reinforcement learning to learn the required meta-parameters to deal with the current situation, described by states. We introduce an appropriate reinforcement learning algorithm based on a kernelized version of the reward-weighted regression. To show its feasibility, we evaluate this algorithm on a toy example and compare it to several previous approaches. Subsequently, we apply the approach to three robot tasks: the generalization of throwing movements in darts, of hitting movements in table tennis, and of ball throwing. These tasks are learned on several different real physical robots: a Barrett WAM, a BioRob, the JST-ICORP/SARCOS CBi, and a Kuka KR 6.
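The core idea of a kernelized reward-weighted regression can be illustrated with a short sketch. The snippet below is a minimal, hypothetical illustration (not the authors' implementation): it regularizes a kernel regression by the inverse of each sample's reward, so that meta-parameters from high-reward samples dominate the prediction for a queried state. The function names, the RBF bandwidth, and the regularization constant `lam` are all assumptions made for this example.

```python
import numpy as np

def rbf_kernel(A, B, bandwidth=1.0):
    # Squared-exponential kernel between the row vectors of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def predict_meta_params(states, meta_params, rewards, query,
                        lam=0.5, bandwidth=1.0):
    """Reward-weighted kernel regression (illustrative sketch).

    states:      (n, d) array of observed situations
    meta_params: (n, m) array of meta-parameters used in those situations
    rewards:     (n,)   array of positive rewards obtained
    query:       (d,)   new situation to generalize to
    """
    K = rbf_kernel(states, states, bandwidth)
    # Per-sample regularization: low reward -> high cost -> little influence.
    C = np.diag(1.0 / np.maximum(rewards, 1e-8))
    k = rbf_kernel(query[None, :], states, bandwidth)   # shape (1, n)
    return (k @ np.linalg.solve(K + lam * C, meta_params)).ravel()
```

For example, given one high-reward sample at state 0 with meta-parameter 10 and one near-zero-reward sample at state 1 with meta-parameter 20, a query at state 0 yields a prediction close to 10, since the low-reward sample is heavily regularized away.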

Author(s): Kober, J. and Wilhelm, A. and Oztop, E. and Peters, J.
Journal: Autonomous Robots
Volume: 33
Number (issue): 4
Pages: 361-379
Year: 2012

Department(s): Empirical Inference
Bibtex Type: Article (article)

DOI: 10.1007/s10514-012-9290-3

Links: PDF

BibTex

@article{KoberWOP2012,
  title = {Reinforcement learning to adjust parametrized motor primitives to new situations},
  author = {Kober, J. and Wilhelm, A. and Oztop, E. and Peters, J.},
  journal = {Autonomous Robots},
  volume = {33},
  number = {4},
  pages = {361-379},
  year = {2012},
  doi = {10.1007/s10514-012-9290-3}
}