Jan Peters is a full professor (W3) for Intelligent Autonomous Systems at the Computer Science Department of the Technische Universitaet Darmstadt and, at the same time, a senior research scientist and group leader at the Max-Planck Institute for Intelligent Systems, where he heads the interdepartmental Robot Learning Group. Jan Peters has received the Dick Volz Best 2007 US PhD Thesis Runner-Up Award, the 2012 Robotics: Science & Systems Early Career Spotlight, the 2013 INNS Young Investigator Award, and the IEEE Robotics & Automation Society's 2013 Early Career Award. In 2015, he was awarded an ERC Starting Grant. In 2019, he was named an IEEE Fellow.
Jan Peters studied Computer Science, Electrical, Mechanical and Control Engineering at TU Munich and FernUni Hagen in Germany, at the National University of Singapore (NUS) and the University of Southern California (USC), and has been a visiting researcher at the ATR Telecommunications Research Center in Japan. He has received four Master's degrees in these disciplines as well as a Computer Science Ph.D. from USC.
Jan Peters is a full professor at Darmstadt University of Technology (TU Darmstadt), heading the intelligent autonomous systems group, while still being affiliated with the MPI, where he heads the robot learning lab.
Jan Peters joined the Max-Planck Institute for Biological Cybernetics in April 2007 as a research scientist. Before joining MPI, Jan Peters studied Electrical Engineering, Computer Science and Mechanical Engineering. He graduated from the University of Hagen in 2000 with a Diplom-Informatiker (German M.Sc. in Computer Science) and from Munich University of Technology in 2001 with a Diplom-Ingenieur in Electrical Engineering (German M.Sc. in Electrical Engineering). In 2000-2001, he spent two semesters as a visiting student at the National University of Singapore. In 2002, he completed an M.Sc. in Computer Science and, in 2005, an M.S. in Mechanical Engineering, both from USC. He joined USC's Computer Science Ph.D. program and the CLMC Lab in Fall 2001. Jan Peters has been a visiting research student at the Department of Robotics at the German Aerospace Research Center in Germany, at Siemens Advanced Engineering in Singapore, and at the Department of Humanoid Robotics and Computational Neuroscience at the Advanced Telecommunications Research (ATR) Center in Japan. Jan Peters graduated from the University of Southern California with a Ph.D. in Computer Science in March 2007. He remains affiliated with the CLMC Lab as an adjunct researcher.
Creating autonomous robots that can learn to assist humans in situations of daily life is a fascinating challenge for machine learning. While this aim has been a long-standing vision of artificial intelligence, we have yet to create robots that can learn to accomplish many different tasks triggered by environmental context or higher...
Klink, P., D’Eramo, C., Peters, J., Pajarinen, J.
Self-Paced Deep Reinforcement Learning
Advances in Neural Information Processing Systems 33, 34th Annual Conference on Neural Information Processing Systems, December 2020 (conference) Accepted
Proceedings of the 36th International Conference on Uncertainty in Artificial Intelligence (UAI), 124, pages: 320-329, Proceedings of Machine Learning Research, (Editors: Jonas Peters and David Sontag), PMLR, August 2020 (conference)
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages: 1210-1217, IEEE, 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), November 2019 (conference)
Akrour, R., Pajarinen, J., Peters, J., Neumann, G.
Projections for Approximate Policy Iteration Algorithms
Proceedings of the 36th International Conference on Machine Learning (ICML), 97, pages: 181-190, Proceedings of Machine Learning Research, (Editors: Chaudhuri, Kamalika and Salakhutdinov, Ruslan), PMLR, June 2019 (conference)
Proceedings of the 36th International Conference on Machine Learning (ICML), 97, pages: 553-562, Proceedings of Machine Learning Research, (Editors: Chaudhuri, Kamalika and Salakhutdinov, Ruslan), PMLR, June 2019 (conference)
High-speed and high-acceleration movements are inherently hard to control. Applying learning to the control of such motions on anthropomorphic robot arms can improve the accuracy of the control but might damage the system. The inherent exploration of learning approaches can lead to instabilities and the robot reaching joint limits at high speeds. Having hardware that enables safe exploration of high-speed and high-acceleration movements is therefore desirable. To address this issue, we propose to use robots actuated by Pneumatic Artificial Muscles (PAMs). In this paper, we present a four degrees of freedom (DoFs) robot arm that reaches high joint angle accelerations of up to 28000 °/s^2 while avoiding dangerous joint limits thanks to the antagonistic actuation and limits on the air pressure ranges. With this robot arm, we are able to tune control parameters using Bayesian optimization directly on the hardware without additional safety considerations. The achieved tracking performance on a fast trajectory exceeds previous results on comparable PAM-driven robots. We also show that our system can be controlled well on slow trajectories with PID controllers, due to careful construction considerations such as minimal bending of cables, lightweight kinematics, and minimal contact between the PAMs themselves and between the PAMs and the links. Finally, we propose a novel technique to control the co-contraction of antagonistic muscle pairs. Experimental results illustrate that choosing the optimal co-contraction level is vital to reach better tracking performance. Through the use of PAM-driven robots and learning, we take a small step towards the future development of robots capable of more human-like motions.
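As a rough illustration of the gain-tuning loop the abstract describes, the sketch below runs Bayesian optimization (a from-scratch Gaussian-process surrogate with a lower-confidence-bound acquisition) over PID-style gains. The cost surface, gain ranges, and all names here are hypothetical stand-ins for the real hardware rollouts, not the authors' setup.

```python
import numpy as np

# Toy stand-in for a hardware rollout: returns the tracking cost of a
# (kp, kd) gain setting. The smooth cost surface below is hypothetical,
# with an optimum near kp=8.0, kd=1.5.
def tracking_cost(gains):
    kp, kd = gains
    return (kp - 8.0) ** 2 / 16.0 + (kd - 1.5) ** 2

def rbf_kernel(A, B, length=2.0):
    # Squared-exponential kernel between two sets of points.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    # GP posterior mean and variance at candidate points Xs.
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    Kss = rbf_kernel(Xs, Xs)
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ y
    var = np.diag(Kss - Ks.T @ Kinv @ Ks)
    return mu, np.maximum(var, 0.0)

rng = np.random.default_rng(0)
# Candidate gains drawn from bounded (i.e. "safe") ranges.
cands = np.column_stack([rng.uniform(0, 16, 500), rng.uniform(0, 4, 500)])

# Two initial evaluations, then pick new gains by the lower confidence
# bound (we minimize cost, so low mean and high uncertainty are attractive).
X = cands[:2].copy()
y = np.array([tracking_cost(x) for x in X])
for _ in range(20):
    mu, var = gp_posterior(X, y, cands)
    lcb = mu - 2.0 * np.sqrt(var)
    x_next = cands[np.argmin(lcb)]
    X = np.vstack([X, x_next])
    y = np.append(y, tracking_cost(x_next))

best = X[np.argmin(y)]
print("best gains:", best, "cost:", y.min())
```

On the real system, `tracking_cost` would be replaced by a rollout of the trajectory on the robot; the bounded candidate ranges mirror the pressure limits that make such on-hardware exploration safe.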
IEEE Robotics and Automation Letters, 3(4):3161-3168, IEEE, 2018 (article)
Controlling musculoskeletal systems, especially robots actuated by pneumatic artificial muscles, is a challenging task due to nonlinearities, hysteresis effects, massive actuator delay and unobservable dependencies such as temperature. Despite such difficulties, muscular systems offer many beneficial properties to achieve human-comparable performance in uncertain and fast-changing tasks. For example, muscles are backdrivable and provide variable stiffness while offering high forces to reach high accelerations. In addition, the embodied intelligence deriving from the compliance might reduce the control demands for specific tasks. In this paper, we address the problem of how to accurately control musculoskeletal robots. To address this issue, we propose to learn probabilistic forward dynamics models using Gaussian processes and, subsequently, to employ these models for control. However, Gaussian process dynamics models cannot be set up for our musculoskeletal robot in the same way as for traditional motor-driven robots, for example because the state composition itself is unclear. We hence empirically study and discuss in detail how to tune these approaches to complex musculoskeletal robots and their specific challenges. Moreover, we show that our model can be used to accurately control an antagonistic pair of pneumatic artificial muscles for a trajectory tracking task while considering only one-step-ahead predictions of the forward model and incorporating model uncertainty.
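To make the idea of a probabilistic one-step-ahead forward model concrete, here is a minimal NumPy sketch: a Gaussian process maps a (joint angle, muscle pressure) pair to the next joint angle and also reports its predictive variance. The toy dynamics, dimensions, and all names are assumptions for illustration, not the authors' actual model or robot state.

```python
import numpy as np

# Hypothetical stand-in for the true PAM dynamics: next joint angle as a
# nonlinear function of the current angle q and commanded pressure p.
def true_dynamics(q, p):
    return 0.9 * q + 0.5 * np.tanh(p)

def rbf(A, B, length=1.0):
    # Squared-exponential kernel between two sets of points.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length ** 2)

rng = np.random.default_rng(1)
# Training transitions (state, action) -> next state, with sensor noise.
X = np.column_stack([rng.uniform(-1, 1, 200), rng.uniform(-2, 2, 200)])
y = true_dynamics(X[:, 0], X[:, 1]) + 0.01 * rng.standard_normal(200)

noise = 1e-4
K = rbf(X, X) + noise * np.eye(len(X))
alpha = np.linalg.solve(K, y)  # precomputed GP weights

def predict(q, p):
    """Posterior mean and variance of the next angle (one-step-ahead)."""
    xs = np.array([[q, p]])
    ks = rbf(X, xs)[:, 0]
    mu = ks @ alpha
    var = 1.0 - ks @ np.linalg.solve(K, ks)
    return mu, max(var, 0.0)

mu, var = predict(0.2, 1.0)
print("predicted:", mu, "true:", true_dynamics(0.2, 1.0), "variance:", var)
```

A controller can then penalize actions whose predicted next state is both far from the reference and highly uncertain, which is one simple way to "incorporate model uncertainty" as the abstract suggests.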
Proceedings of the 35th International Conference on Machine Learning (ICML), 80, pages: 4065-4074, Proceedings of Machine Learning Research, (Editors: Dy, Jennifer and Krause, Andreas), PMLR, July 2018 (conference)
Our goal is to understand the principles of Perception, Action and Learning in autonomous systems that successfully interact with complex environments, and to use this understanding to design future systems.