
Publication

Contextual Latent-Movements Off-Policy Optimization for Robotic Manipulation Skills

Samuele Tosatto; Georgia Chalvatzaki; Jan Peters
In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2021), May 30 - June 5, Xi'an, China, pages 10815-10821, IEEE, 2021.

Abstract

Parameterized movement primitives have been extensively used for imitation learning of robotic tasks. However, the high dimensionality of the parameter space hinders the improvement of such primitives in the reinforcement learning (RL) setting, especially for learning with physical robots. In this paper, we propose a novel view on handling the demonstrated trajectories for acquiring low-dimensional, non-linear latent dynamics, using mixtures of probabilistic principal component analyzers (MPPCA) on the movements' parameter space. Moreover, we introduce a new contextual off-policy RL algorithm, named LAtent-Movements Policy Optimization (LAMPO). LAMPO provides gradient estimates from previous experience using self-normalized importance sampling, thus making full use of samples collected in previous learning iterations. Combined, these advantages provide a complete framework for sample-efficient off-policy optimization of movement primitives for robot learning of high-dimensional manipulation skills. Our experimental results, obtained both in simulation and on a real robot, show that LAMPO yields sample-efficient policies compared to common approaches in the literature.
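The self-normalized importance sampling step mentioned in the abstract can be illustrated with a minimal sketch: samples drawn under an older (behavior) policy are reweighted to estimate an expectation under the current (target) policy, with the weights normalized to sum to one. The Gaussian policies, the return surface, and all function names below are assumptions made purely for illustration; this is not the paper's implementation.

```python
import numpy as np

def self_normalized_is_estimate(samples, returns, log_p_target, log_p_behavior):
    """Estimate the expected return under the target policy from behavior-policy samples.

    samples        : (N, d) parameters drawn from the behavior policy
    returns        : (N,) observed returns for those samples
    log_p_target   : callable (N, d) -> (N,) log-densities under the current policy
    log_p_behavior : callable (N, d) -> (N,) log-densities under the behavior policy
    """
    log_w = log_p_target(samples) - log_p_behavior(samples)
    log_w -= log_w.max()          # numerical stabilization before exponentiation
    w = np.exp(log_w)
    w /= w.sum()                  # self-normalization: weights sum to one
    return np.dot(w, returns)

# Toy usage with 1-D Gaussian policies (means and std chosen arbitrarily).
rng = np.random.default_rng(0)
behavior_mean, target_mean, std = 0.0, 0.5, 1.0
x = rng.normal(behavior_mean, std, size=(1000, 1))
R = -np.squeeze((x - 1.0) ** 2)   # a made-up return surface

log_gauss = lambda mu: lambda s: -0.5 * np.sum(((s - mu) / std) ** 2, axis=1)
print(self_normalized_is_estimate(x, R, log_gauss(target_mean), log_gauss(behavior_mean)))
```

Because the weights are normalized, constant factors in the densities cancel, which is why the unnormalized Gaussian log-densities suffice in this toy example.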

Further Links