Publication
FlashSAC: Fast and Stable Off-Policy Reinforcement Learning for High-Dimensional Robot Control
Donghu Kim; Youngdo Lee; Minho Park; Kinam Kim; I Made Aswin Nahendra; Takuma Seno; Sehee Min; Daniel Palenicek; Florian Vogt; Danica Kragic; Jan Peters; Jaegul Choo; Hojoon Lee
In: Computing Research Repository (CoRR), Vol. abs/2604.04539, Pages 1-40, arXiv, 2026.
Abstract
Reinforcement learning (RL) is a core approach for robot control when expert demonstrations
are unavailable. On-policy methods such as Proximal Policy Optimization (PPO) are widely used for their
stability, but their reliance on narrowly distributed on-policy data limits accurate policy evaluation in
high-dimensional state and action spaces. Off-policy methods can overcome this limitation by learning
from a broader state-action distribution, yet suffer from slow convergence and instability, as fitting a value
function over diverse data requires many gradient updates, causing critic errors to accumulate through
bootstrapping. We present FlashSAC, a fast and stable off-policy RL algorithm built on Soft Actor-Critic.
Motivated by scaling laws observed in supervised learning, FlashSAC sharply reduces gradient updates
while compensating with larger models and higher data throughput. To maintain stability at increased scale,
FlashSAC explicitly bounds weight, feature, and gradient norms, curbing critic error accumulation. Across
over 60 tasks in 10 simulators, FlashSAC consistently outperforms PPO and strong off-policy baselines
in both final performance and training efficiency, with the largest gains on high-dimensional tasks such as
dexterous manipulation. In sim-to-real humanoid locomotion, FlashSAC reduces training time from hours
to minutes, demonstrating the promise of off-policy RL for sim-to-real transfer.
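
The abstract does not spell out how the weight, feature, and gradient norms are bounded, so the following is only a minimal sketch of what such a stabilized critic update could look like, assuming PyTorch. The class and function names (`BoundedCritic`, `bound_weight_norms`, `critic_update`) and all cap values are hypothetical stand-ins, not the paper's actual implementation: feature norms are rescaled at the penultimate layer, weight matrices are projected back inside a norm ball after each step, and gradients are clipped globally.

```python
import torch
import torch.nn as nn

class BoundedCritic(nn.Module):
    """Q-network whose penultimate features are L2-bounded (illustrative sketch)."""

    def __init__(self, obs_dim, act_dim, hidden=512, max_feature_norm=10.0):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.head = nn.Linear(hidden, 1)
        self.max_feature_norm = max_feature_norm

    def forward(self, obs, act):
        h = self.backbone(torch.cat([obs, act], dim=-1))
        # Feature-norm bound: rescale any row whose L2 norm exceeds the cap.
        norm = h.norm(dim=-1, keepdim=True).clamp(min=1e-6)
        h = h * (norm.clamp(max=self.max_feature_norm) / norm)
        return self.head(h)

@torch.no_grad()
def bound_weight_norms(module, max_weight_norm=5.0):
    """Project each weight matrix back inside a Frobenius-norm ball (illustrative)."""
    for p in module.parameters():
        if p.dim() >= 2:
            n = p.norm()
            if n > max_weight_norm:
                p.mul_(max_weight_norm / n)

def critic_update(critic, optimizer, loss, max_grad_norm=1.0):
    """One critic step with all three norm bounds applied."""
    optimizer.zero_grad()
    loss.backward()
    # Gradient-norm bound: standard global clipping before the optimizer step.
    nn.utils.clip_grad_norm_(critic.parameters(), max_grad_norm)
    optimizer.step()
    # Weight-norm bound: projection after the step keeps parameters in range.
    bound_weight_norms(critic)

if __name__ == "__main__":
    critic = BoundedCritic(obs_dim=64, act_dim=12)
    opt = torch.optim.Adam(critic.parameters(), lr=3e-4)
    obs, act = torch.randn(256, 64), torch.randn(256, 12)
    target = torch.randn(256, 1)  # stand-in for a bootstrapped TD target
    loss = nn.functional.mse_loss(critic(obs, act), target)
    critic_update(critic, opt, loss)
```

The motivation for bounding all three quantities, as the abstract describes it, is that off-policy bootstrapping compounds critic errors across many updates; keeping weights, features, and gradients in a fixed range limits how fast those errors can grow even as model size and data throughput are scaled up.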
