

M2P3: Multimodal Multi-Pedestrian Path Prediction by Self-Driving Cars With Egocentric Vision

Atanas Poibrenski; Matthias Klusch; Igor Vozniak; Christian Müller
In: Proceedings of the 35th ACM Symposium On Applied Computing. ACM Symposium On Applied Computing (SAC-2020), March 30 - April 3, Brno, Czech Republic, ACM Press, 2020.


Accurate prediction of the future positions of pedestrians in traffic scenarios is required for the safe navigation of an autonomous vehicle but remains a challenge. This concerns, in particular, the effective and efficient multimodal prediction of the most likely trajectories of tracked pedestrians from the egocentric view of a self-driving car. In this paper, we present a novel solution, named M2P3, which combines a conditional variational autoencoder (CVAE) with a recurrent neural network (RNN) encoder-decoder architecture in order to predict a set of possible future locations of each pedestrian in a traffic scene. The M2P3 system uses a sequence of RGB images delivered by an internal vehicle-mounted camera for egocentric vision. It takes as input only two modalities, namely the past trajectories and scales of pedestrians, and delivers as output the three most likely paths for each tracked pedestrian. Experimental evaluation of the proposed architecture on the JAAD dataset reveals that the M2P3 system is significantly superior to selected state-of-the-art solutions.
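The multimodal prediction idea described above can be sketched as follows: a recurrent encoder summarizes the observed past track, K latent vectors z are sampled (CVAE-style), and each sample is decoded into one candidate future trajectory. This is a minimal illustrative sketch, not the authors' implementation; all layer sizes, the simplified recurrent cell, and the random stand-in weights are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not the paper's settings).
OBS_LEN, PRED_LEN, HID, Z_DIM, K = 8, 12, 16, 4, 3

# Random weights standing in for trained parameters.
W_in = rng.normal(size=(2, HID)) * 0.1          # (x, y) -> hidden
W_rec = rng.normal(size=(HID, HID)) * 0.1       # hidden -> hidden
W_out = rng.normal(size=(HID + Z_DIM, 2)) * 0.1 # [hidden, z] -> (dx, dy)

def encode(past):
    """Roll a simple recurrent cell over the observed (x, y) positions."""
    h = np.zeros(HID)
    for p in past:
        h = np.tanh(p @ W_in + h @ W_rec)
    return h

def decode(h, z, last_pos):
    """Decode one candidate future path as cumulative displacements."""
    steps, pos = [], last_pos.copy()
    for _ in range(PRED_LEN):
        pos = pos + np.concatenate([h, z]) @ W_out
        steps.append(pos.copy())
    return np.stack(steps)

def predict_multimodal(past, k=K):
    """Sample k latent vectors and decode k candidate futures."""
    h = encode(past)
    return np.stack([decode(h, rng.normal(size=Z_DIM), past[-1])
                     for _ in range(k)])

# One synthetic observed pedestrian track.
past_track = np.cumsum(rng.normal(size=(OBS_LEN, 2)), axis=0)
paths = predict_multimodal(past_track)
print(paths.shape)  # K candidate paths, each PRED_LEN steps of (x, y)
```

Sampling a fresh latent vector per decode is what makes the output multimodal: the same observed history yields several distinct plausible futures rather than a single averaged path.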