Publication

Model-based imitation learning by probabilistic trajectory matching

Peter Englert; Alexandros Paraschos; Jan Peters; Marc Peter Deisenroth
In: Proceedings of the 2013 IEEE International Conference on Robotics and Automation (ICRA), Karlsruhe, Germany, May 6-10, 2013, pages 1922-1927. IEEE, 2013.

Abstract

One of the most elegant ways of teaching new skills to robots is to provide demonstrations of a task and let the robot imitate the demonstrated behavior. Such imitation learning is a non-trivial task: differences between the anatomies of the robot and the teacher, and limited robustness to changes in the control task, are two major difficulties. We present an imitation-learning approach that efficiently learns a task from expert demonstrations. Instead of finding policies indirectly, either via state-action mappings (behavioral cloning) or via cost-function learning (inverse reinforcement learning), our goal is to find policies directly, such that predicted trajectories match observed ones. To this end, we model both the teacher's trajectory and the predicted robot trajectory as probability distributions, and we match these distributions by minimizing their Kullback-Leibler divergence. In this paper, we propose to learn probabilistic forward models to compute the distribution over predicted trajectories. We compare our approach to model-based reinforcement learning methods with hand-crafted cost functions. Finally, we evaluate our method in experiments on a real compliant robot.
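A brief sketch may make the matching objective concrete. Assuming, as the abstract suggests, that the expert trajectory distribution $p(\tau^{\mathrm{exp}})$ and the predicted trajectory distribution $p(\tau^{\pi})$ are both approximated as Gaussians (the paper's exact factorization and the direction of the divergence are not restated here, so this is one natural choice rather than the definitive formulation), the policy would be obtained by minimizing

$$\pi^{*} = \arg\min_{\pi} \; \mathrm{KL}\big( p(\tau^{\mathrm{exp}}) \,\big\|\, p(\tau^{\pi}) \big) = \arg\min_{\pi} \int p(\tau^{\mathrm{exp}}) \log \frac{p(\tau^{\mathrm{exp}})}{p(\tau^{\pi})} \, d\tau .$$

For two Gaussians $\mathcal{N}(\mu_0, \Sigma_0)$ and $\mathcal{N}(\mu_1, \Sigma_1)$ in $d$ dimensions, this divergence has the standard closed form

$$\mathrm{KL} = \tfrac{1}{2} \left[ \operatorname{tr}\!\big(\Sigma_1^{-1} \Sigma_0\big) + (\mu_1 - \mu_0)^{\top} \Sigma_1^{-1} (\mu_1 - \mu_0) - d + \log \frac{\det \Sigma_1}{\det \Sigma_0} \right],$$

which makes the trajectory-matching objective an analytic, differentiable function of the means and covariances produced by the learned probabilistic forward model.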
