
14 results

2012


Online Kernel-based Learning for Task-Space Tracking Robot Control

Nguyen-Tuong, D., Peters, J.

IEEE Transactions on Neural Networks and Learning Systems, 23(9):1417-1425, May 2012 (article)

Abstract
Task-space control of redundant robot systems based on analytical models is known to be susceptible to modeling errors. Here, data-driven model learning methods may present an interesting alternative approach. However, learning models for task-space tracking control from sampled data is an ill-posed problem. In particular, the same input data point can yield many different output values, which can form a non-convex solution space. Because the problem is ill-posed, models cannot be learned from such data using common regression methods. While learning of task-space control mappings is globally ill-posed, it has been shown in recent work that it is locally a well-defined problem. In this paper, we use this insight to formulate a local, kernel-based learning approach for online model learning for task-space tracking control. We propose a parametrization for the local model which makes an application in task-space tracking control of redundant robots possible. The model parametrization further allows us to apply the kernel trick and, therefore, enables a formulation within the kernel learning framework. In our evaluations, we demonstrate the method's ability to learn models online for task-space tracking control of redundant robots.
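
The following is a minimal sketch, not the paper's actual parametrization, of the local, kernel-based idea described in the abstract: an online model stores samples of the joint state together with the desired task-space velocity, and predictions come from kernel ridge regression restricted to the nearest stored samples. The class name, dimensions, and random data below are illustrative assumptions.

```python
import numpy as np

class LocalKernelModel:
    """Online local kernel ridge regression: (q, desired task velocity) -> joint velocity."""

    def __init__(self, bandwidth=1.5, ridge=1e-3, k_neighbors=30):
        self.h, self.ridge, self.k = bandwidth, ridge, k_neighbors
        self.X, self.Y = [], []                      # stored inputs and targets

    def _kernel(self, A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * self.h ** 2))

    def add_sample(self, q, xd_des, qd):
        self.X.append(np.concatenate([q, xd_des]))
        self.Y.append(qd)

    def predict(self, q, xd_des):
        x = np.concatenate([q, xd_des])[None, :]
        X, Y = np.asarray(self.X), np.asarray(self.Y)
        idx = np.argsort(((X - x) ** 2).sum(-1))[: self.k]   # local neighborhood
        K = self._kernel(X[idx], X[idx]) + self.ridge * np.eye(len(idx))
        alpha = np.linalg.solve(K, Y[idx])
        return (self._kernel(x, X[idx]) @ alpha)[0]          # predicted joint velocity

# toy usage: a 3-DoF arm, 2-D task space, random stand-in data
rng = np.random.default_rng(0)
model = LocalKernelModel()
for _ in range(200):
    model.add_sample(rng.normal(size=3), rng.normal(size=2), rng.normal(size=3))
print(model.predict(rng.normal(size=3), rng.normal(size=2)))
```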

PDF DOI Project Page [BibTex]



Learning Concurrent Motor Skills in Versatile Solution Spaces

Daniel, C., Neumann, G., Peters, J.

In IEEE/RSJ International Conference on Intelligent Robots and Systems, pages: 3591-3597, IROS, 2012 (inproceedings)

PDF DOI Project Page [BibTex]



A Kernel-based Approach to Direct Action Perception

Kroemer, O., Ugur, E., Oztop, E., Peters, J.

In International Conference on Robotics and Automation (ICRA 2012), pages: 2605-2610, IEEE, IEEE International Conference on Robotics and Automation (ICRA), May 2012 (inproceedings)

Abstract
The direct perception of actions allows a robot to predict the afforded actions of observed novel objects. In addition to learning which actions are afforded, the robot must also learn to adapt its actions according to the object being manipulated. In this paper, we present a non-parametric approach to representing the affordance-bearing subparts of objects. This representation forms the basis of a kernel function for computing the similarity between different subparts. Using this kernel function, the robot can learn the required mappings to perform direct action perception. The proposed approach was successfully implemented on a real robot, which could then quickly learn to generalize grasping and pouring actions to novel objects.
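
As a hedged illustration of the kernel idea (not the paper's actual kernel or targets): subparts are represented as small point clouds, a set kernel averages pairwise Gaussian similarities between their points, and kernel ridge regression maps subparts to a scalar action parameter. The helper names and toy data below are assumptions.

```python
import numpy as np

def set_kernel(P, Q, h=0.05):
    """Mean pairwise Gaussian kernel between two point clouds of shape (n, 3) and (m, 3)."""
    d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * h * h)).mean()

def gram(parts_a, parts_b, h=0.05):
    return np.array([[set_kernel(P, Q, h) for Q in parts_b] for P in parts_a])

rng = np.random.default_rng(1)
train_parts = [rng.normal(scale=0.03, size=(40, 3)) for _ in range(25)]   # toy subparts
train_targets = rng.uniform(0.0, 1.0, size=25)                            # toy action parameters

K = gram(train_parts, train_parts) + 1e-6 * np.eye(25)
alpha = np.linalg.solve(K, train_targets)              # kernel ridge regression

novel_part = rng.normal(scale=0.03, size=(40, 3))
prediction = gram([novel_part], train_parts) @ alpha   # predicted action parameter
print(prediction[0])
```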

PDF Web DOI Project Page [BibTex]



Robot Skill Learning

Peters, J., Kober, J., Mülling, K., Nguyen-Tuong, D., Kroemer, O.

In 20th European Conference on Artificial Intelligence, pages: 40-45, ECAI, 2012 (inproceedings)

PDF DOI Project Page [BibTex]



Probabilistic Modeling of Human Movements for Intention Inference

Wang, Z., Deisenroth, M., Ben Amor, H., Vogt, D., Schölkopf, B., Peters, J.

In Proceedings of Robotics: Science and Systems VIII, pages: 8, R:SS, 2012 (inproceedings)

Abstract
Inference of human intention may be an essential step towards understanding human actions [21] and is hence important for realizing efficient human-robot interaction. In this paper, we propose the Intention-Driven Dynamics Model (IDDM), a latent variable model for inferring unknown human intentions. We train the model based on observed human behaviors/actions and we introduce an approximate inference algorithm to efficiently infer the human’s intention from an ongoing action. We verify the feasibility of the IDDM in two scenarios, i.e., target inference in robot table tennis and action recognition for interactive humanoid robots. In both tasks, the IDDM achieves substantial improvements over state-of-the-art regression and classification.
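
A deliberately simplified illustration of intention inference as Bayesian filtering over a discrete set of candidate targets; the actual IDDM uses a continuous latent intention with Gaussian-process dynamics and approximate inference, which this toy stand-in does not implement.

```python
import numpy as np

def update_belief(belief, obs, targets, noise=0.3):
    # likelihood of the current observation under each hypothesized target
    lik = np.exp(-((targets - obs) ** 2).sum(-1) / (2 * noise ** 2))
    belief = belief * lik
    return belief / belief.sum()

targets = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # candidate goals
belief = np.ones(3) / 3.0
rng = np.random.default_rng(0)

# fake observations of a movement heading toward the third target
for t in np.linspace(0.1, 1.0, 10):
    obs = t * targets[2] + rng.normal(scale=0.05, size=2)
    belief = update_belief(belief, obs, targets)

print(belief)   # probability mass should concentrate on target index 2
```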

PDF link (url) Project Page [BibTex]


2011


Policy Search for Motor Primitives in Robotics

Kober, J., Peters, J.

Machine Learning, 84(1-2):171-203, July 2011 (article)

Abstract
Many motor skills in humanoid robotics can be learned using parametrized motor primitives. While successful applications to date have been achieved with imitation learning, most of the interesting motor learning problems are high-dimensional reinforcement learning problems. These problems are often beyond the reach of current reinforcement learning methods. In this paper, we study parametrized policy search methods and apply these to benchmark problems of motor primitive learning in robotics. We show that many well-known parametrized policy search methods can be derived from a general, common framework. This framework yields both policy gradient methods and expectation-maximization (EM) inspired algorithms. We introduce a novel EM-inspired algorithm for policy learning that is particularly well-suited for dynamical system motor primitives. We compare this algorithm, both in simulation and on a real robot, to several well-known parametrized policy search methods such as episodic REINFORCE, ‘Vanilla’ Policy Gradients with optimal baselines, episodic Natural Actor Critic, and episodic Reward-Weighted Regression. We show that the proposed method outperforms them on an empirical benchmark of learning dynamical system motor primitives both in simulation and on a real robot. We apply it in the context of motor learning and show that it can learn a complex Ball-in-a-Cup task on a real Barrett WAM™ robot arm.
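
A minimal sketch of the EM-style, reward-weighted parameter update that PoWER-type algorithms build on: perturb the primitive parameters with Gaussian exploration, weight each perturbation by its episodic return, and move the mean toward the reward-weighted average. The toy return function below is an arbitrary stand-in, not a robot task.

```python
import numpy as np

def toy_return(theta):
    target = np.array([0.5, -0.2, 1.0])                # hidden optimum of the toy task
    return np.exp(-np.sum((theta - target) ** 2))      # higher is better

rng = np.random.default_rng(0)
theta = np.zeros(3)                                    # motor-primitive parameters
sigma = 0.3                                            # exploration noise

for iteration in range(50):
    eps = rng.normal(scale=sigma, size=(20, 3))        # 20 exploratory rollouts
    returns = np.array([toy_return(theta + e) for e in eps])
    # reward-weighted average of the exploration offsets
    theta = theta + (returns[:, None] * eps).sum(0) / returns.sum()

print(theta)   # should approach the optimum [0.5, -0.2, 1.0]
```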

PDF DOI Project Page [BibTex]


Learning Dynamic Tactile Sensing with Robust Vision-based Training

Kroemer, O., Lampert, C., Peters, J.

IEEE Transactions on Robotics, 27(3):545-557, June 2011 (article)

Abstract
Dynamic tactile sensing is a fundamental ability to recognize materials and objects. However, while humans are born with partially developed dynamic tactile sensing and quickly master this skill, today's robots remain in their infancy. The development of such a sense requires not only better sensors but the right algorithms to deal with these sensors' data as well. For example, when classifying a material based on touch, the data are noisy, high-dimensional, and contain irrelevant signals as well as essential ones. Few classification methods from machine learning can deal with such problems. In this paper, we propose an efficient approach to infer suitable lower dimensional representations of the tactile data. In order to classify materials based on only the sense of touch, these representations are autonomously discovered using visual information of the surfaces during training. However, accurately pairing vision and tactile samples in real-robot applications is a difficult problem. The proposed approach, therefore, works with weak pairings between the modalities. Experiments show that the resulting approach is very robust and yields significantly higher classification performance based on only dynamic tactile sensing.
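
A much-simplified sketch of the pipeline described in the abstract, with plain unsupervised PCA standing in for the paper's vision-guided, weakly paired dimensionality reduction: reduce spectral features of toy "tactile" signals to a few dimensions and classify materials by nearest centroid. All data and dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# toy "dynamic tactile" recordings: 3 materials, 40 signals each, 200 samples
t = np.linspace(0, 1, 200)
signals, labels = [], []
for material in range(3):
    freq = 5.0 + 3.0 * material
    for _ in range(40):
        phase = rng.uniform(0, 2 * np.pi)
        signals.append(np.sin(2 * np.pi * freq * t + phase)
                       + rng.normal(scale=0.3, size=200))
        labels.append(material)
X = np.abs(np.fft.rfft(np.array(signals), axis=1))    # simple spectral features
y = np.array(labels)

# unsupervised PCA as a stand-in for the paper's vision-guided reduction
Xc = X - X.mean(0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:5].T                                      # 5-D tactile representation

# nearest-centroid material classification in the reduced space
centroids = np.array([Z[y == c].mean(0) for c in range(3)])
pred = np.argmin(((Z[:, None, :] - centroids) ** 2).sum(-1), axis=1)
print("training accuracy on toy data:", (pred == y).mean())
```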

Web DOI Project Page [BibTex]


2010


Imitation and Reinforcement Learning

Kober, J., Peters, J.

IEEE Robotics and Automation Magazine, 17(2):55-62, June 2010 (article)

Abstract
In this article, we present both novel learning algorithms and experiments using dynamical system motor primitives (MPs). We describe this MP representation in a way that makes it straightforward to reproduce. We review an appropriate imitation learning method, i.e., locally weighted regression, and show how this method can be used both for initializing RL tasks as well as for modifying the start-up phase in a rhythmic task. We also show our current best-suited RL algorithm for this framework, i.e., PoWER. We present two complex motor tasks, i.e., ball-in-a-cup and ball paddling, learned on a real, physical Barrett WAM, using the methods presented in this article. Of particular interest is the ball-paddling application, as it requires a combination of both rhythmic and discrete dynamical system MPs during the start-up phase to achieve a particular task.
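
A hedged sketch of the imitation-learning step mentioned above: locally weighted regression fits the forcing term of a dynamical system motor primitive to a single demonstration. The gains, basis placement, and demonstration below are illustrative choices; the rollout and the PoWER-based refinement are omitted.

```python
import numpy as np

# demonstration: a minimum-jerk-like reach from 0 to 1 over one second
T, dt = 1.0, 0.01
t = np.arange(0.0, T, dt)
y = 10 * (t / T) ** 3 - 15 * (t / T) ** 4 + 6 * (t / T) ** 5
yd = np.gradient(y, dt)
ydd = np.gradient(yd, dt)

# transformation system: tau^2 * ydd = alpha * (beta * (g - y) - tau * yd) + f(x)
alpha, beta, tau, g, y0 = 25.0, 6.25, T, y[-1], y[0]
x = np.exp(-3.0 * t / tau)                           # canonical phase variable
f_target = tau ** 2 * ydd - alpha * (beta * (g - y) - tau * yd)

# Gaussian basis functions spaced in phase, weights fitted by LWR
centers = np.exp(-3.0 * np.linspace(0, 1, 20))
widths = 1.0 / (np.diff(centers, append=centers[-1] / 2) ** 2 + 1e-8)
psi = np.exp(-widths * (x[:, None] - centers) ** 2)  # (time, basis)

s = x * (g - y0)                                     # phase- and goal-dependent scaling
w = np.array([np.sum(s * psi[:, i] * f_target) / (np.sum(s ** 2 * psi[:, i]) + 1e-8)
              for i in range(psi.shape[1])])
print(w[:5])                                         # learned primitive weights
```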

PDF Web DOI Project Page [BibTex]



Combining active learning and reactive control for robot grasping

Kroemer, O., Detry, R., Piater, J., Peters, J.

Robotics and Autonomous Systems, 58(9):1105-1116, September 2010 (article)

Abstract
Grasping an object is a task that inherently needs to be treated in a hybrid fashion. The system must decide both where and how to grasp the object. While selecting where to grasp requires learning about the object as a whole, the execution only needs to reactively adapt to the context close to the grasp’s location. We propose a hierarchical controller that reflects the structure of these two sub-problems, and attempts to learn solutions that work for both. A hybrid architecture is employed by the controller to make use of various machine learning methods that can cope with the large amount of uncertainty inherent to the task. The controller’s upper level selects where to grasp the object using a reinforcement learner, while the lower level comprises an imitation learner and a vision-based reactive controller to determine appropriate grasping motions. The resulting system is able to quickly learn good grasps of a novel object in an unstructured environment, by executing smooth reaching motions and preshaping the hand depending on the object’s geometry. The system was evaluated both in simulation and on a real robot.
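
A simplified stand-in for the upper level of such a hierarchical controller: a UCB-style bandit that learns which of a few discrete candidate grasp points succeeds most often. The real system uses continuous grasp parameters and a lower-level imitation/reactive controller, which this toy omits.

```python
import numpy as np

rng = np.random.default_rng(0)
true_success = np.array([0.2, 0.5, 0.8, 0.4])   # hidden per-candidate success rates
counts = np.zeros(4)
values = np.zeros(4)

for trial in range(1, 201):
    ucb = values + np.sqrt(2 * np.log(trial) / np.maximum(counts, 1e-9))
    arm = int(np.argmax(ucb))
    reward = float(rng.random() < true_success[arm])     # simulated grasp outcome
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # running mean of success

print(np.argmax(counts))   # most-executed grasp (typically candidate 2, the best one)
```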

PDF Web DOI Project Page [BibTex]



Switched Latent Force Models for Movement Segmentation

Alvarez, M., Peters, J., Schölkopf, B., Lawrence, N.

In Advances in Neural Information Processing Systems 23, pages: 55-63, (Editors: J. Lafferty, C.K.I. Williams, J. Shawe-Taylor, R.S. Zemel, A. Culotta), Curran, Red Hook, NY, USA, 24th Annual Conference on Neural Information Processing Systems (NIPS), 2010 (inproceedings)

Abstract
Latent force models encode the interaction between multiple related dynamical systems in the form of a kernel or covariance function. Each variable to be modeled is represented as the output of a differential equation and each differential equation is driven by a weighted sum of latent functions with uncertainty given by a Gaussian process prior. In this paper we consider employing the latent force model framework for the problem of determining robot motor primitives. To deal with discontinuities in the dynamical systems or the latent driving force we introduce an extension of the basic latent force model that switches between different latent functions and potentially different dynamical systems. This creates a versatile representation for robot movements that can capture discrete changes and non-linearities in the dynamics. We give illustrative examples on both synthetic data and for striking movements recorded using a Barrett WAM robot as haptic input device. Our inspiration is robot motor primitives, but we expect our model to have wide application for dynamical systems including models for human motion capture data and systems biology.
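
A generative sketch of the switched latent force idea (not the paper's closed-form covariance or inference): sample smooth latent forces from a Gaussian-process prior, switch between them halfway through, and drive two first-order output ODEs with different constants. All constants below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 300)
dt = t[1] - t[0]

def sample_gp(t, lengthscale=1.0):
    """Draw one sample path from a squared-exponential GP prior."""
    K = np.exp(-0.5 * (t[:, None] - t[None, :]) ** 2 / lengthscale ** 2)
    return np.linalg.cholesky(K + 1e-5 * np.eye(len(t))) @ rng.normal(size=len(t))

u1, u2 = sample_gp(t), sample_gp(t, lengthscale=0.3)
u = np.where(t < 5.0, u1, u2)                     # switch of the latent force at t = 5

# two outputs: dy_i/dt = -D_i * y_i + S_i * u(t), integrated with forward Euler
D, S = np.array([1.0, 3.0]), np.array([1.0, 0.5])
Y = np.zeros((len(t), 2))
for k in range(1, len(t)):
    Y[k] = Y[k - 1] + dt * (-D * Y[k - 1] + S * u[k - 1])

print(Y[-1])   # both outputs reflect the shared (switched) latent force
```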

PDF Web Project Page [BibTex]



Movement extraction by detecting dynamics switches and repetitions

Chiappa, S., Peters, J.

In Advances in Neural Information Processing Systems 23, pages: 388-396, (Editors: J. Lafferty, C.K.I. Williams, J. Shawe-Taylor, R.S. Zemel, A. Culotta), Curran, Red Hook, NY, USA, 24th Annual Conference on Neural Information Processing Systems (NIPS), 2010 (inproceedings)

Abstract
Many time-series such as human movement data consist of a sequence of basic actions, e.g., forehands and backhands in tennis. Automatically extracting and characterizing such actions is an important problem for a variety of different applications. In this paper, we present a probabilistic segmentation approach in which an observed time-series is modeled as a concatenation of segments corresponding to different basic actions. Each segment is generated through a noisy transformation of one of a few hidden trajectories representing different types of movement, with possible time re-scaling. We analyze three different approximation methods for dealing with model intractability, and demonstrate how the proposed approach can successfully segment table tennis movements recorded using a robot arm as haptic input device.
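
A crude stand-in for the segmentation idea (not the paper's probabilistic model with hidden trajectories): estimate an AR(1) coefficient in a sliding window over a toy signal whose generating dynamics switch, and read off approximate switch points from changes in the estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
# toy 1-D signal: three segments generated by two alternating AR(1) dynamics
x = [0.0]
for a in [0.95, 0.2, 0.95]:
    for _ in range(200):
        x.append(a * x[-1] + rng.normal(scale=0.1))
x = np.array(x)

# sliding-window least-squares estimate of the AR(1) coefficient
win = 50
coeff = np.array([
    np.dot(x[i:i + win - 1], x[i + 1:i + win]) /
    (np.dot(x[i:i + win - 1], x[i:i + win - 1]) + 1e-9)
    for i in range(len(x) - win)
])
labels = (coeff > 0.6).astype(int)            # crude two-class labelling per window
print(np.where(np.diff(labels) != 0)[0])      # approximate switch points (near 200 and 400)
```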

PDF Web Project Page [BibTex]



Grasping with Vision Descriptors and Motor Primitives

Kroemer, O., Detry, R., Piater, J., Peters, J.

In Proceedings of the 7th International Conference on Informatics in Control, Automation and Robotics (ICINCO 2010), pages: 47-54, (Editors: J. Filipe, J. Andrade-Cetto, J.-L. Ferrier), SciTePress, Lisboa, Portugal, 7th International Conference on Informatics in Control, Automation and Robotics (ICINCO), June 2010 (inproceedings)

Abstract
Grasping is one of the most important abilities needed for future service robots. Given the task of picking up an object from clutter, traditional robotics approaches would determine a suitable grasping point and then use a movement planner to reach the goal. The planner would require precise and accurate information about the environment and long computation times, both of which may not always be available. Therefore, methods for executing grasps are required, which perform well with information gathered from only standard stereo vision, and make only a few necessary assumptions about the task environment. We propose techniques that reactively modify the robot’s learned motor primitives based on information derived from Early Cognitive Vision descriptors. The proposed techniques employ non-parametric potential fields centered on the Early Cognitive Vision descriptors to allow for curving hand trajectories around objects, and finger motions that adapt to the object’s local geometry. The methods were tested on a real robot and found to allow for easier imitation learning of human movements and to give a considerable improvement to the robot’s performance in grasping tasks.
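
A toy sketch of the potential-field mechanism: a straight-line reaching trajectory is bent around "descriptor" points by adding the gradient of Gaussian repulsive potentials centered on them. The real method couples such fields with learned motor primitives and actual Early Cognitive Vision descriptors; the points and gains here are assumptions.

```python
import numpy as np

descriptors = np.array([[0.5, 0.05], [0.55, -0.02]])   # toy descriptor/edge points
start, goal = np.array([0.0, 0.0]), np.array([1.0, 0.0])

def repulsive_gradient(p, points, strength=0.002, width=0.1):
    """Gradient of Gaussian repulsive potentials, pushing p away from the points."""
    diff = p - points                                   # (n, 2)
    w = np.exp(-np.sum(diff ** 2, axis=1) / (2 * width ** 2))
    return strength * np.sum(diff * (w / width ** 2)[:, None], axis=0)

steps = 200
p = start.copy()
trajectory = [p.copy()]
for k in range(steps):
    direction = (goal - p) / (steps - k)                # nominal straight-line step
    p = p + direction + repulsive_gradient(p, descriptors)
    trajectory.append(p.copy())

print(trajectory[-1])   # ends near the goal while curving around the descriptors
```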

PDF Web Project Page [BibTex]



Learning Table Tennis with a Mixture of Motor Primitives

Mülling, K., Kober, J., Peters, J.

In Proceedings of the 10th IEEE-RAS International Conference on Humanoid Robots (Humanoids 2010), pages: 411-416, IEEE, Piscataway, NJ, USA, 10th IEEE-RAS International Conference on Humanoid Robots (Humanoids), December 2010 (inproceedings)

Abstract
Table tennis is a sufficiently complex motor task for studying complete skill learning systems. It consists of several elementary motions and requires fast movements, accurate control, and online adaptation. To represent the elementary movements needed for robot table tennis, we rely on dynamic systems motor primitives (DMPs). While such DMPs have been successfully used for learning a variety of simple motor tasks, they only represent single elementary actions. In order to select and generalize among different striking movements, we present a new approach, called Mixture of Motor Primitives, that uses a gating network to activate appropriate motor primitives. The resulting policy enables us to select among the appropriate motor primitives as well as to generalize between them. In order to obtain a fully learned robot table tennis setup, we also address the problem of predicting the necessary context information, i.e., the hitting point in time and space where we want to hit the ball. We show that the resulting setup was capable of playing rudimentary table tennis using an anthropomorphic robot arm.
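
A hedged sketch of the gating idea: each stored primitive carries a context (e.g., a predicted hitting point) and a parameter vector; a softmax gating network weights the primitives by context similarity, and the executed parameters are the weighted combination. The real system gates full dynamical system primitives and learns the gating from data; everything below is toy, randomly generated data.

```python
import numpy as np

rng = np.random.default_rng(0)
contexts = rng.uniform(-1, 1, size=(10, 3))     # contexts of 10 stored primitives
params = rng.normal(size=(10, 5))               # their motor-primitive parameters

def gate(query_context, contexts, temperature=0.2):
    """Softmax gating weights based on context similarity."""
    scores = -np.sum((contexts - query_context) ** 2, axis=1) / temperature
    w = np.exp(scores - scores.max())
    return w / w.sum()

query = np.array([0.1, -0.3, 0.5])              # e.g. predicted hitting point
weights = gate(query, contexts)
executed_params = weights @ params              # generalized primitive parameters
print(weights.round(3), executed_params)
```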

Web DOI Project Page [BibTex]



Incremental Sparsification for Real-time Online Model Learning

Nguyen-Tuong, D., Peters, J.

In JMLR Workshop and Conference Proceedings Volume 9: AISTATS 2010, pages: 557-564, (Editors: Y.W. Teh, M. Titterington), JMLR, Cambridge, MA, USA, Thirteenth International Conference on Artificial Intelligence and Statistics, May 2010 (inproceedings)

Abstract
Online model learning in real-time is required by many applications such as in robot tracking control. It poses a difficult problem, as fast and incremental online regression with large data sets is the essential component which cannot be achieved by straightforward usage of off-the-shelf machine learning methods (such as Gaussian process regression or support vector regression). In this paper, we propose a framework for online, incremental sparsification with a fixed budget designed for large scale real-time model learning. The proposed approach combines a sparsification method based on an independence measure with a large scale database. In combination with an incremental learning approach such as sequential support vector regression, we obtain a regression method which is applicable in real-time online learning. It exhibits competitive learning accuracy when compared with standard regression techniques. Implementation on a real robot emphasizes the applicability of the proposed approach in real-time online model learning for real world systems.
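
A sketch of dictionary-based sparsification with a kernel linear-independence test: a new point enters the dictionary only if it cannot be well approximated in feature space by the current dictionary, and the dictionary size is capped by a fixed budget. The paper combines such a test with a database and incremental support vector regression, which is omitted here; the deletion rule below is a naive placeholder.

```python
import numpy as np

def rbf(A, B, h=2.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * h * h))

def sparsify(stream, threshold=0.1, budget=50):
    dictionary = [stream[0]]
    for x in stream[1:]:
        D = np.asarray(dictionary)
        k = rbf(D, x[None, :])[:, 0]                    # kernel values to dictionary
        K = rbf(D, D) + 1e-8 * np.eye(len(D))
        # independence measure: residual of projecting phi(x) onto span of the dictionary
        delta = rbf(x[None, :], x[None, :])[0, 0] - k @ np.linalg.solve(K, k)
        if delta > threshold:
            dictionary.append(x)
            if len(dictionary) > budget:
                dictionary.pop(0)                       # simplistic deletion rule
    return np.asarray(dictionary)

rng = np.random.default_rng(0)
data = rng.normal(size=(2000, 4))                       # streaming input points
print(sparsify(data).shape)                             # compact, budget-limited dictionary
```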

PDF Web Project Page [BibTex]
