
Publication

Lifelong Learning-Based MPC for Uncertain Multi-Task Manipulation

Dimitrios Rakovitis
In: IEEE Robotics and Automation Letters (RA-L), Vol. 11, No. 4, Pages 4977-4984, January 2026.

Abstract

Modern model-based manipulators are increasingly expected to perform diverse real-world tasks, e.g., opening various doors or pick-and-place. Yet as the range of desired tasks grows over the robot's lifetime, model uncertainty increases, which can severely degrade control performance. Adaptive Model Predictive Control (AMPC) mitigates this by updating the robot-environment contact dynamics online, but existing approaches assume well-calibrated task priors or fixed training distributions, limiting generalization to novel tasks. Hence, we propose a lifelong, mixture-of-experts AMPC framework that continuously learns, refines, and identifies task-specific expert contact models from experience using only proprioception. In novel environments, experts are self-trained online in mini-batches using pseudo-labels derived from residuals, and after successful trials they are consolidated with high-confidence labels. At its core, the framework relies on Gaussian Mixture Models (GMMs) to (i) gate experts via error likelihoods and (ii) estimate out-of-distribution (OOD) proximity via error-prediction likelihoods; the OOD estimate monitors learning and adjusts the MPC compliance to mitigate risk and forgetting under uncertainty. Two expert classes are instantiated and compared: Gaussian Mixture Regression (GMR) and Neural Networks (NNs). The approach is evaluated on a real manipulator and is shown to outperform two state-of-the-art AMPC methods on diverse tasks with multiple unknown and mid-task switching dynamic parameters.
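To make the gating idea concrete, the following is a minimal, hypothetical sketch of likelihood-based expert gating with an OOD check, as described in the abstract. It is not the authors' implementation: each expert's full GMM over prediction errors is replaced here by a single diagonal Gaussian (a one-component special case), and the expert names, dimensions, and OOD threshold are illustrative.

```python
import numpy as np

def fit_gaussian(samples):
    # Fit a diagonal Gaussian to an expert's prediction-error samples
    # (a one-component stand-in for the paper's per-expert GMMs).
    return samples.mean(axis=0), samples.var(axis=0) + 1e-9

def log_likelihood(x, mean, var):
    # Diagonal-Gaussian log-density of a single error sample.
    return float(-0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var))

# Synthetic error data for two hypothetical experts.
rng = np.random.default_rng(0)
experts = {
    "door": fit_gaussian(rng.normal(0.0, 0.05, size=(200, 2))),
    "pick_place": fit_gaussian(rng.normal(0.5, 0.05, size=(200, 2))),
}

def gate(error_sample, ood_threshold=-10.0):
    # Gate: select the expert whose error model best explains the sample;
    # flag the sample as OOD when even the best log-likelihood falls
    # below a threshold (the value here is illustrative).
    scores = {n: log_likelihood(error_sample, m, v) for n, (m, v) in experts.items()}
    best = max(scores, key=scores.get)
    return best, scores[best] < ood_threshold

best, ood = gate(np.array([0.51, 0.49]))   # well explained by "pick_place"
_, far_ood = gate(np.array([5.0, 5.0]))    # far from both experts -> OOD
```

In the paper's framework, an OOD flag like the one above would additionally trigger online self-training of a new expert and a more compliant MPC setting; this sketch only shows the likelihood-based selection step.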

Projects