
Publication

Differentiable Physics Models for Real-world Offline Model-based Reinforcement Learning

Michael Lutter; Johannes Silberbauer; Joe Watson; Jan Peters
In: IEEE International Conference on Robotics and Automation (ICRA 2021), May 30 - June 5, Xi'an, China, Pages 4163-4170, IEEE, 2021.

Abstract

A limitation of model-based reinforcement learning (MBRL) is the exploitation of errors in the learned models. Black-box models can fit complex dynamics with high fidelity, but their behavior is undefined outside of the data distribution. Physics-based models are better at extrapolating, due to the general validity of their informed structure, but underfit in the real world due to the presence of unmodeled phenomena. In this work, we demonstrate experimentally that, in the offline model-based reinforcement learning setting, physics-based models can be beneficial compared to high-capacity function approximators if the mechanical structure is known. With offline MBRL, physics-based models learn to perform the ball-in-a-cup (BiC) task on a physical manipulator from only 4 minutes of sampled data. We find that black-box models consistently produce unviable policies for BiC, as all predicted trajectories diverge to physically impossible states, despite having access to more data than the physics-based model. In addition, we generalize the approach of physics parameter identification from modeling holonomic multi-body systems to systems with nonholonomic dynamics using end-to-end automatic differentiation. Videos: https://sites.google.com/view/ball-in-a-cup-in-4-minutes/
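To illustrate the idea of physics parameter identification via end-to-end automatic differentiation, the following is a minimal sketch, not the authors' implementation: a differentiable simulator of a simple pendulum whose physical parameters (mass, length, damping) are fit to observed trajectories by gradient descent. The system, the explicit-Euler integrator, and the use of JAX/Optax are illustrative assumptions; the paper applies the same principle to the nonholonomic ball-in-a-cup dynamics.

```python
# Sketch: identify physics parameters by backpropagating through a simulator.
# Pendulum dynamics, Euler integration, and parameter names are assumptions.
import jax
import jax.numpy as jnp
import optax

DT = 0.01   # integration step [s]
G = 9.81    # gravity [m/s^2]

def pendulum_step(state, params):
    """One explicit-Euler step of damped pendulum dynamics, differentiable w.r.t. params."""
    theta, omega = state
    mass, length, damping = params
    alpha = -(G / length) * jnp.sin(theta) - damping / (mass * length**2) * omega
    return jnp.array([theta + DT * omega, omega + DT * alpha])

def rollout(params, x0, horizon):
    """Unroll the model; the whole trajectory stays differentiable w.r.t. params."""
    def step(state, _):
        nxt = pendulum_step(state, params)
        return nxt, nxt
    _, traj = jax.lax.scan(step, x0, None, length=horizon)
    return traj

def loss(params, x0, observed_traj):
    """Mean squared error between simulated and observed trajectories."""
    pred = rollout(params, x0, observed_traj.shape[0])
    return jnp.mean((pred - observed_traj) ** 2)

# Synthetic "observed" data generated from ground-truth parameters (for the demo only).
true_params = jnp.array([1.2, 0.8, 0.05])
x0 = jnp.array([1.0, 0.0])
observed = rollout(true_params, x0, 200)

params = jnp.array([1.0, 1.0, 0.1])   # initial parameter guess
opt = optax.adam(1e-2)
opt_state = opt.init(params)
grad_fn = jax.jit(jax.grad(loss))

for _ in range(500):
    grads = grad_fn(params, x0, observed)
    updates, opt_state = opt.update(grads, opt_state)
    params = optax.apply_updates(params, updates)

print("identified parameters:", params)
```

Because the loss is computed through the simulator itself, the gradients respect the physical structure of the model, which is what allows the identified parameters to extrapolate beyond the training distribution, in contrast to a black-box network fit to the same data.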
