
Publication

Towards Safe Robot Foundation Models Using Inductive Biases

Maximilian Tölle; Theo Gruner; Daniel Palenicek; Tim Schneider; Jonas Günster; Joe Watson; Davide Tateo; Puze Liu; Jan Peters
In: Computing Research Repository (CoRR), Vol. abs/2505.10219, Pages 1-14, arXiv, 2025.

Abstract

Safety is a critical requirement for the real-world deployment of robotic systems. Unfortunately, while current robot foundation models show promising generalization capabilities across a wide variety of tasks, they fail to address safety, an important aspect for ensuring long-term operation. Current robot foundation models assume that safe behavior should emerge by learning from a sufficiently large dataset of demonstrations. However, this approach has two major drawbacks. Firstly, there are no formal safety guarantees for a behavior cloning policy trained using supervised learning. Secondly, without explicit knowledge of any safety constraints, the policy may require an unreasonable number of additional demonstrations to even approximate the desired constrained behavior. To solve these key issues, we show how we can instead combine robot foundation models with geometric inductive biases using ATACOM, a safety layer placed after the foundation policy that ensures safe state transitions by enforcing action constraints. With this approach, we can ensure formal safety guarantees for generalist policies without providing extensive demonstrations of safe behavior, and without requiring any specific fine-tuning for safety. Our experiments show that our approach can be beneficial both for classical manipulation tasks, where we avoid unwanted collisions with irrelevant objects, and for dynamic tasks, such as the robot air hockey environment, where we can generate fast trajectories respecting complex task and joint space constraints. For experimental results, see https://sites.google.com/view/safe-robot-foundation-models.
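For readers unfamiliar with constraint-manifold safety layers, the minimal Python sketch below illustrates the general idea: a nominal action produced by the foundation policy is projected onto the tangent space of a constraint manifold c(q) = 0, plus a corrective term that pushes the state back toward the manifold. This is not the authors' actual ATACOM implementation, which among other things handles inequality constraints via slack variables; the helper names (safe_action, constraint, jacobian) and the gain k_c are illustrative assumptions.

    import numpy as np

    def safe_action(q, a_nominal, constraint, jacobian, k_c=10.0):
        # Filter a nominal policy action so that, to first order, it does
        # not leave the constraint manifold c(q) = 0, and any existing
        # violation is driven back toward zero.
        J = jacobian(q)                      # (m, n) Jacobian of c at q
        c = constraint(q)                    # (m,) constraint values
        J_pinv = np.linalg.pinv(J)           # Moore-Penrose pseudo-inverse
        N = np.eye(q.shape[0]) - J_pinv @ J  # null-space (tangent) projector
        # The tangential component N @ a_nominal preserves c(q) to first
        # order, since J @ N = 0; the corrective term -k_c * J_pinv @ c
        # pulls the state back onto the manifold when c(q) != 0.
        return N @ a_nominal - k_c * (J_pinv @ c)

Because the projection acts only on the action, such a layer can be placed after any pretrained policy without retraining it, which is what allows safety to be enforced without extra demonstrations or fine-tuning.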
