Publication

Reinforcement Learning for Robust Athletic Intelligence: Lessons from the 2nd 'AI Olympics with RealAIGym' Competition

Felix Wiebe; Niccolò Turcato; Alberto Dalla Libera; Jean Seong Bjorn Choe; BumKyu Choi; Tim Lukas Faust; Habib Maraqten; Erfan Aghadavoodi; Marco Calì; Alberto Sinigaglia; Giulio Giacomuzzo; Diego Romeres; Jong-Kook Kim; Gian Antonio Susto; Shubham Vyas; Dennis Mronga; Boris Belousov; Jan Peters; Frank Kirchner; Shivesh Kumar
In: Computing Research Repository eprint Journal (CoRR), Vol. abs/2503.15290, Pages 1-8, arXiv, 2025.

Abstract

In the field of robotics, many different approaches, ranging from classical planning and optimal control to reinforcement learning (RL), are developed or borrowed from other fields to achieve reliable control in diverse tasks. To gain a clear understanding of their individual strengths and weaknesses and their applicability to real-world robotic scenarios, it is important to benchmark and compare their performance not only in simulation but also on real hardware. The '2nd AI Olympics with RealAIGym' competition was held at the IROS 2024 conference to contribute to this cause and to evaluate different controllers according to their ability to solve a dynamic control problem on an underactuated double pendulum system (Fig. 1) with chaotic dynamics. This paper describes the four RL methods submitted by the participating teams, presents their performance in the swing-up task on a real double pendulum, measured against various criteria, and discusses their transferability from simulation to real hardware and their robustness to external disturbances.