
Publication

MILD: Multimodal Interactive Latent Dynamics for Learning Human-Robot Interaction

Vignesh Prasad; Dorothea Koert; Ruth Stock-Homburg; Jan Peters; Georgia Chalvatzaki
In: 21st IEEE-RAS International Conference on Humanoid Robots. IEEE-RAS International Conference on Humanoid Robots (Humanoids-2022), November 28-30, Ginowan, Japan, Pages 472-479, IEEE, 2022.

Abstract

Modeling interaction dynamics to generate robot trajectories that enable a robot to adapt and react to a human's actions and intentions is critical for efficient and effective collaborative Human-Robot Interactions (HRI). Learning from Demonstration (LfD) methods from Human-Human Interactions (HHI) have shown promising results, especially when coupled with representation learning techniques. However, such methods for learning HRI either do not scale well to high dimensional data or cannot accurately adapt to changing via-poses of the interacting partner. We propose Multimodal Interactive Latent Dynamics (MILD), a method that couples deep representation learning and probabilistic machine learning to address the problem of two-party physical HRIs. We learn the interaction dynamics from demonstrations, using Hidden Semi-Markov Models (HSMMs) to model the joint distribution of the interacting agents in the latent space of a Variational Autoencoder (VAE). Our experimental evaluations for learning HRI from HHI demonstrations show that MILD effectively captures the multimodality in the latent representations of HRI tasks, allowing us to decode the varying dynamics occurring in such tasks. Compared to related work, MILD generates more accurate trajectories for the controlled agent (robot) when conditioned on the observed agent's (human) trajectory. Notably, MILD can learn directly from camera-based pose estimations to generate trajectories, which we then map to a humanoid robot without the need for any additional training.
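The abstract describes modeling the joint distribution of the two agents' latent trajectories with an HSMM and then conditioning on the observed (human) agent to generate the controlled (robot) agent's motion. The sketch below illustrates only the conditioning step for a single Gaussian component in latent space; it is a minimal, hypothetical example (the dimensions, parameters, and function names are placeholders, not the authors' implementation), and in MILD the component parameters would be learned from HHI demonstrations and the latents produced/consumed by a VAE encoder/decoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (hypothetical) dimensions: 4-D human latent, 4-D robot latent.
d_h, d_r = 4, 4
d = d_h + d_r

# Stand-in parameters for one HSMM component's joint Gaussian over
# [z_human, z_robot]; in the paper these would be learned from demonstrations.
mu = rng.normal(size=d)
A = rng.normal(size=(d, d))
sigma = A @ A.T + np.eye(d)  # random symmetric positive-definite covariance


def condition_on_human(mu, sigma, z_h, d_h):
    """Standard Gaussian conditioning: given the observed human latent z_h,
    return the conditional mean of the robot latent for this component."""
    mu_h, mu_r = mu[:d_h], mu[d_h:]
    s_hh = sigma[:d_h, :d_h]          # human-human covariance block
    s_rh = sigma[d_h:, :d_h]          # robot-human cross-covariance block
    return mu_r + s_rh @ np.linalg.solve(s_hh, z_h - mu_h)


z_h = rng.normal(size=d_h)            # stand-in for the VAE encoding of the observed human pose
z_r = condition_on_human(mu, sigma, z_h, d_h)
print(z_r)                            # conditional robot latent, to be decoded into a robot trajectory
```

In the full method, this conditioning would be applied per HSMM component and combined with the component responsibilities over time (as in Gaussian mixture regression), with the resulting robot latents decoded by the VAE into joint-space trajectories.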
