
Publication

Sample and Feedback Efficient Hierarchical Reinforcement Learning from Human Preferences

Robert Pinsler; Riad Akrour; Takayuki Osa; Jan Peters; Gerhard Neumann
In: Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA 2018), May 21-25, Brisbane, Australia, pages 596-601, IEEE, 2018.

Abstract

While reinforcement learning has led to promising results in robotics, defining an informative reward function is challenging. Prior work considered including the human in the loop to jointly learn the reward function and the optimal policy. Generating samples from a physical robot and requesting human feedback are both taxing efforts for which efficiency is critical. We propose to learn reward functions from both the robot and the human perspectives to improve on both efficiency metrics. Learning a reward function from the human perspective increases feedback efficiency by assuming that humans rank trajectories according to a low-dimensional outcome space. Learning a reward function from the robot perspective circumvents the need for a dynamics model while retaining the sample efficiency of model-based approaches. We provide an algorithm that incorporates bi-perspective reward learning into a general hierarchical reinforcement learning framework and demonstrate the merits of our approach on a toy task and a simulated robot grasping task.
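The abstract's key idea of learning a reward function from the human perspective rests on preferences expressed over a low-dimensional outcome space. As a rough illustration of that ingredient only, the sketch below fits a linear reward model on pairwise trajectory-outcome preferences with a Bradley-Terry-style likelihood. All function names, the synthetic 2-D outcomes, and the optimization details are illustrative assumptions and not the algorithm proposed in the paper.

```python
# Minimal sketch: preference-based reward learning over a low-dimensional
# outcome space. A linear reward r(o) = w^T o is fitted to pairwise
# preferences via a Bradley-Terry (logistic) likelihood and gradient ascent.
# Names and data are hypothetical, not the authors' implementation.
import numpy as np

rng = np.random.default_rng(0)

def preference_log_likelihood(w, outcomes_a, outcomes_b):
    """Log-likelihood that each outcome in `outcomes_a` is preferred over
    its paired outcome in `outcomes_b` under the reward r(o) = w^T o."""
    diff = (outcomes_a - outcomes_b) @ w        # reward gaps r(a) - r(b)
    return np.sum(-np.log1p(np.exp(-diff)))     # sum of log sigmoid(gap)

def fit_reward(outcomes_a, outcomes_b, lr=0.1, iters=500):
    """Gradient ascent on the pairwise-preference log-likelihood."""
    w = np.zeros(outcomes_a.shape[1])
    for _ in range(iters):
        diff = (outcomes_a - outcomes_b) @ w
        # d/dw log sigmoid(diff) = (1 - sigmoid(diff)) * (a - b)
        grad = ((1.0 - 1.0 / (1.0 + np.exp(-diff)))[:, None]
                * (outcomes_a - outcomes_b)).sum(axis=0)
        w += lr * grad / len(outcomes_a)
    return w

# Synthetic example: 2-D outcomes (e.g. distance-to-goal, effort). A hidden
# "true" preference direction is used only to generate the rankings.
true_w = np.array([1.0, -0.5])
outcomes = rng.normal(size=(200, 2))
pairs = rng.integers(0, len(outcomes), size=(100, 2))
prefer_first = (outcomes[pairs[:, 0]] @ true_w) > (outcomes[pairs[:, 1]] @ true_w)
a = np.where(prefer_first[:, None], outcomes[pairs[:, 0]], outcomes[pairs[:, 1]])
b = np.where(prefer_first[:, None], outcomes[pairs[:, 1]], outcomes[pairs[:, 0]])

w_hat = fit_reward(a, b)
print("recovered reward direction (unnormalized):", w_hat)
print("log-likelihood of preferences:", preference_log_likelihood(w_hat, a, b))
```

Because the preferences are ranked in a low-dimensional outcome space rather than over raw state-action trajectories, comparatively few queries suffice to pin down the reward direction, which is the feedback-efficiency argument the abstract makes.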
