
Publication

Deep Reinforcement Learning Agents are not even close to Human Intelligence

Quentin Delfosse; Jannis Blüml; Fabian Tatai; Théo Vincent; Bjarne Gregori; Elisabeth Dillies; Jan Peters; Constantin A. Rothkopf; Kristian Kersting
In: Computing Research Repository (CoRR), Vol. abs/2505.21731, Pages 1-49, 2025.

Abstract

Deep reinforcement learning (RL) agents achieve impressive results in a wide variety of tasks, but they lack zero-shot adaptation capabilities. While most robustness evaluations focus on task complexifications, for which humans also struggle to maintain performance, no evaluation has been performed on task simplifications. To tackle this issue, we introduce HackAtari, a set of task variations of the Arcade Learning Environment. We use it to demonstrate that, contrary to humans, RL agents systematically exhibit huge performance drops on simpler versions of their training tasks, uncovering agents' consistent reliance on shortcuts. Our analysis across multiple algorithms and architectures highlights the persistent gap between RL agents and human behavioral intelligence, underscoring the need for new benchmarks and methodologies that enforce systematic generalization testing beyond static evaluation protocols. Training and testing in the same environment is not enough to obtain agents equipped with human-like intelligence.
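
To illustrate the evaluation idea described in the abstract (comparing an agent's score on its original training task against a simplified variant of the same task), here is a minimal Python sketch. It is not the authors' code: it uses the standard Gymnasium ALE interface, a random policy as a stand-in for a trained agent, and a hypothetical make_simplified_variant helper in place of an actual HackAtari game modification.

# Minimal sketch (assumptions noted above): evaluate the same policy on the
# original Atari task and on a simplified variant, then compare average returns.
import gymnasium as gym
import numpy as np
import ale_py

# Recent Gymnasium/ale-py versions require explicit registration of ALE envs.
gym.register_envs(ale_py)


def evaluate(env: gym.Env, policy, episodes: int = 10) -> float:
    """Average undiscounted return of `policy` over `episodes` episodes."""
    returns = []
    for _ in range(episodes):
        obs, _ = env.reset()
        done, total = False, 0.0
        while not done:
            action = policy(obs)
            obs, reward, terminated, truncated, _ = env.step(action)
            total += float(reward)
            done = terminated or truncated
        returns.append(total)
    return float(np.mean(returns))


def make_simplified_variant(game: str) -> gym.Env:
    # Hypothetical placeholder: in HackAtari, the simplification would be a
    # concrete game modification (e.g., removing a distracting game element).
    return gym.make(game)


if __name__ == "__main__":
    train_env = gym.make("ALE/Freeway-v5")
    variant_env = make_simplified_variant("ALE/Freeway-v5")
    # Stand-in for a trained agent; a real study would load the trained policy.
    random_policy = lambda obs: train_env.action_space.sample()

    print("original   :", evaluate(train_env, random_policy))
    print("simplified :", evaluate(variant_env, random_policy))

A large drop in the "simplified" score relative to the "original" score, for an agent trained on the original task, is the kind of shortcut-reliance signal the paper reports.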
