In Proceedings of the 13th International Conference on Intelligent Autonomous Systems (IAS), Advances in Intelligent Systems and Computing, vol. 302, pages: 1601-1611, (Editors: Menegatti, E., Michael, N., Berns, K., Yamaguchi, H.), Springer, 2014 (inproceedings)
Daniel, C., Viering, M., Metz, J., Kroemer, O., Peters, J.
Active Reward Learning
In Proceedings of Robotics: Science & Systems, (Editors: Fox, D., Kavraki, L. E., and Kurniawati, H.), RSS, 2014 (inproceedings)
In Machine Learning and Knowledge Discovery in Databases, Proceedings of the European Conference on Machine Learning, Part III (ECML 2013), LNCS 8190, pages: 627-631, (Editors: Blockeel, H., Kersting, K., Nijssen, S., and Zelezný, F.), Springer, 2013 (inproceedings)
In International Conference on Robotics and Automation (ICRA 2012), pages: 2605-2610, IEEE, IEEE International Conference on Robotics and Automation (ICRA), May 2012 (inproceedings)
The direct perception of actions allows a robot to predict the afforded actions of observed novel objects. In addition to learning which actions are afforded, the robot must also learn to adapt its actions to the object being manipulated. In this paper, we present a non-parametric approach to representing the affordance-bearing subparts of objects. This representation forms the basis of a kernel function for computing the similarity between different subparts. Using this kernel function, the robot can learn the mappings required to perform direct action perception. The proposed approach was successfully implemented on a real robot, which could then quickly learn to generalize grasping and pouring actions to novel objects.
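The mapping from subpart similarity to action parameters described above can be illustrated with a minimal kernel-regression sketch. The fixed-length feature vectors and the grasp-width parameter below are hypothetical stand-ins for the paper's non-parametric subpart representation; only the overall scheme (a kernel over subparts driving a weighted prediction) follows the abstract.

```python
import numpy as np

def subpart_kernel(f1, f2, gamma=0.5):
    """Gaussian kernel on subpart feature vectors (hypothetical features)."""
    return np.exp(-gamma * np.sum((f1 - f2) ** 2))

def predict_action_params(query, train_feats, train_params, gamma=0.5):
    """Nadaraya-Watson kernel regression: predict action parameters for a
    novel subpart by kernel-weighted averaging over known subparts."""
    w = np.array([subpart_kernel(query, f, gamma) for f in train_feats])
    w = w / w.sum()
    return w @ np.asarray(train_params)

# Two known handle-like subparts with previously learned grasp widths (meters)
feats = [np.array([1.0, 0.2]), np.array([0.1, 0.9])]
widths = [0.04, 0.08]
novel = np.array([0.9, 0.3])  # resembles the first subpart more closely
predicted_width = predict_action_params(novel, feats, widths)
```

Because the query subpart is closer to the first training subpart in feature space, the predicted grasp width lands between the two training values but nearer 0.04.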
In Advances in Neural Information Processing Systems 25, pages: 2186-2194, (Editors: Bartlett, P., Pereira, F. C. N., Burges, C. J. C., Bottou, L., and Weinberger, K. Q.), Curran Associates Inc., 26th Annual Conference on Neural Information Processing Systems (NIPS), 2012 (inproceedings)
In Proceedings of the 2011 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL 2011), pages: 25-31, IEEE, Piscataway, NJ, USA, IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL), April 2011 (inproceedings)
As the complexity of robots and other autonomous systems increases, it becomes more important that these systems can adapt and optimize their settings actively. However, such optimization is rarely trivial. Sampling from the system is often expensive in terms of time and other costs, and excessive sampling should therefore be avoided. The parameter space is also usually continuous and multi-dimensional. Given the inherent exploration-exploitation dilemma of the problem, we propose treating it as an episodic reinforcement learning problem. In this reinforcement learning framework, the policy is defined by the system's parameters and the rewards are given by the system's performance. The rewards accumulate during each episode of a task. In this paper, we present a method for efficiently sampling and optimizing in continuous multidimensional spaces. The approach is based on Gaussian process regression, which can represent continuous non-linear mappings from parameters to system performance. We employ an upper confidence bound policy, which explicitly manages the trade-off between exploration and exploitation. Unlike many other policies for this kind of problem, we do not rely on a discretization of the action space. The presented method was evaluated on a real robot. The robot had to learn grasping parameters in order to adapt its grasping execution to different objects. The proposed method was also tested on a more general gain tuning problem. The results of the experiments show that the presented method can quickly determine suitable parameters and is applicable to real online learning applications.
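The approach in this abstract, Gaussian process regression with an upper confidence bound (UCB) policy over a continuous parameter space, can be sketched as follows. This is a minimal illustration with an RBF kernel and random-search acquisition maximization, not the paper's implementation; the unit-cube domain and the specific hyperparameters are assumptions for the example.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=0.2):
    """Squared-exponential kernel between row vectors in A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)

def gp_posterior(X, y, Xq, noise=1e-4):
    """GP posterior mean and standard deviation at query points Xq."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(Xq, X)
    mean = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mean, np.sqrt(np.clip(var, 1e-12, None))

def gp_ucb_optimize(f, dim, n_iters=25, beta=2.0, n_cand=2000, rng=None):
    """Maximize f over [0,1]^dim with a GP-UCB policy.

    The acquisition function mean + beta*std is maximized over freshly
    drawn random candidates, so no fixed discretization of the
    parameter space is required."""
    rng = np.random.default_rng(rng)
    X = rng.uniform(size=(1, dim))          # one random initial sample
    y = np.array([f(X[0])])
    for _ in range(n_iters):
        cand = rng.uniform(size=(n_cand, dim))
        mean, std = gp_posterior(X, y, cand)
        x_next = cand[np.argmax(mean + beta * std)]  # UCB acquisition
        X = np.vstack([X, x_next])
        y = np.append(y, f(x_next))
    best = np.argmax(y)
    return X[best], y[best]

# Toy stand-in for an expensive robot evaluation: reward peaks at (0.3, 0.3)
x_best, y_best = gp_ucb_optimize(lambda x: -((x - 0.3) ** 2).sum(),
                                 dim=2, rng=0)
```

Here `beta` controls the exploration-exploitation trade-off: larger values weight the posterior uncertainty more heavily and explore more aggressively.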
Our goal is to understand the principles of Perception, Action and Learning in autonomous systems that successfully interact with complex environments, and to use this understanding to design future systems.