Publication
Uncertainty Representations in State-Space Layers for Deep Reinforcement Learning under Partial Observability
Carlos E. Luis; Alessandro G. Bottero; Julia Vinogradska; Felix Berkenkamp; Jan Peters
In: Transactions on Machine Learning Research (TMLR), Vol. 2025, pp. 1-31, 2025 (also on arXiv).
Abstract
Optimal decision-making under partial observability requires reasoning about the uncertainty
of the environment’s hidden state. However, most reinforcement learning architectures
handle partial observability with sequence models, such as recurrent neural networks,
deterministic state-space models, and transformers, that have no internal mechanism for
incorporating uncertainty into their hidden-state representation. Inspired by advances in probabilistic
world models for reinforcement learning, we propose a standalone Kalman filter layer that
performs closed-form Gaussian inference in linear state-space models and train it end-to-end
within a model-free architecture to maximize returns. Similar to efficient linear recurrent
layers, the Kalman filter layer processes sequential data using a parallel scan, which scales
logarithmically with the sequence length. By design, Kalman filter layers are a drop-in
replacement for other recurrent layers in standard model-free architectures, but importantly
they include an explicit mechanism for probabilistic filtering of the latent state representation.
Experiments in a wide variety of tasks with partial observability show that Kalman filter layers
excel in problems where uncertainty reasoning is key for decision-making, outperforming
other stateful models.
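For intuition, the closed-form Gaussian inference the abstract refers to is standard Kalman filtering in a linear-Gaussian state-space model. The sketch below is illustrative only, not the paper's layer: it shows the sequential predict/update recursion with plain numpy, whereas the proposed layer is trained end-to-end and evaluates the same recursion with a parallel scan; all function and variable names here are assumptions.

```python
import numpy as np

def kalman_filter(ys, A, C, Q, R, mu0, P0):
    """Closed-form Gaussian filtering in a linear state-space model:
        x_t = A x_{t-1} + w_t,  w_t ~ N(0, Q)   (latent dynamics)
        y_t = C x_t + v_t,      v_t ~ N(0, R)   (observations)
    Returns the filtered means and covariances for each time step.
    """
    mu, P = mu0, P0
    means, covs = [], []
    for y in ys:
        # Predict: propagate the Gaussian belief through the dynamics.
        mu = A @ mu
        P = A @ P @ A.T + Q
        # Update: condition on the new observation in closed form.
        S = C @ P @ C.T + R                     # innovation covariance
        K = P @ C.T @ np.linalg.inv(S)          # Kalman gain
        mu = mu + K @ (y - C @ mu)
        P = (np.eye(len(mu)) - K @ C) @ P
        means.append(mu)
        covs.append(P)
    return np.array(means), np.array(covs)
```

Running this on repeated observations of a static state shows the explicit uncertainty mechanism at work: the filtered covariance shrinks as evidence accumulates, which is exactly the quantity a deterministic recurrent layer has no built-in way to represent. The parallel-scan evaluation mentioned in the abstract reformulates this recursion as an associative operation over the sequence, giving logarithmic scaling in sequence length on parallel hardware.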
