Empirical Inference

Learning Control and Planning from the View of Control Theory and Imitation

2003

Talk

Learning control and planning in high-dimensional continuous state-action systems, e.g., as needed in a humanoid robot, has so far been a domain beyond the applicability of generic planning techniques like reinforcement learning and dynamic programming. This talk describes an approach we have taken to enable complex robotic systems to learn to accomplish control tasks. Adaptive learning controllers equipped with statistical learning techniques can be used to learn tracking controllers; missing state information and uncertainty in the state estimates are usually addressed by observers or direct adaptive control methods. Imitation learning is used to seed initial control policies whose output is a desired trajectory suitable for accomplishing the task at hand. Reinforcement learning with stochastic policy gradients, using a natural gradient, forms the third component, which allows refining the initial control policy until the task is accomplished. In comparison to general learning control, this approach is highly prestructured and thus more domain-specific. However, it appears to be a theoretically clean and feasible strategy for control systems of the complexity we need to address.
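
The abstract names natural-gradient policy updates as the refinement component but gives no equations. As a rough illustration of that idea only, and not of the method presented in the talk, the Python sketch below performs the generic natural policy gradient update theta <- theta + alpha * F^{-1} g, where g is a Monte-Carlo (REINFORCE) estimate of the policy gradient and F is a sample estimate of the Fisher information matrix. The linear-Gaussian policy, the toy one-dimensional dynamics, the quadratic cost, and all constants are assumptions made up for this example.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy task: linear-Gaussian policy u ~ N(theta[0]*x + theta[1], SIGMA^2)
# on a one-dimensional tracking problem (illustrative only, not the talk's setup).
SIGMA, HORIZON = 0.5, 20

def rollout(theta):
    """Sample one trajectory; return the summed score function and the return."""
    x, ret, score = 1.0, 0.0, np.zeros(2)
    for _ in range(HORIZON):
        mean = theta[0] * x + theta[1]
        u = mean + SIGMA * rng.standard_normal()
        # d log pi(u|x) / d theta for a Gaussian policy with linear mean
        score += (u - mean) / SIGMA**2 * np.array([x, 1.0])
        ret += -(x**2 + 0.1 * u**2)   # negative quadratic cost as reward
        x = 0.9 * x + u               # simple linear dynamics
    return score, ret

def natural_gradient_step(theta, episodes=200, alpha=0.05):
    S, R = map(np.array, zip(*(rollout(theta) for _ in range(episodes))))
    g = (S * (R - R.mean())[:, None]).mean(axis=0)    # REINFORCE gradient with baseline
    F = (S[:, :, None] * S[:, None, :]).mean(axis=0)  # Fisher matrix estimate
    F += 1e-3 * np.eye(2)                             # regularize before inversion
    return theta + alpha * np.linalg.solve(F, g)      # theta += alpha * F^-1 g

theta = np.zeros(2)
for _ in range(50):
    theta = natural_gradient_step(theta)
print("learned feedback gain and offset:", theta)

Preconditioning the gradient with F^{-1} makes the update direction invariant to how the policy is parameterized, which is the usual motivation for preferring natural over vanilla policy gradients.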

Author(s): Peters, J. and Schaal, S.
Year: 2003
Month: December

Department(s): Empirical Inference
Bibtex Type: Talk (talk)

Digital: 0
Event Name: NIPS 2003 Workshop "Planning for the Real World: The promises and challenges of dealing with uncertainty"
Event Place: Whistler, BC, Canada

BibTeX

@talk{PetersS2003,
  title = {Learning Control and Planning from the View of Control Theory and Imitation},
  author = {Peters, J. and Schaal, S.},
  month = dec,
  year = {2003},
  month_numeric = {12}
}