
Publication

Multimodal Multi-Pedestrian Path Prediction for Autonomous Cars

Atanas Poibrenski; Matthias Klusch; Igor Vozniak; Christian Müller
In: Applied Computing Review (JACR), Vol. 20, No. 4, ACM, 2020.

Abstract

Accurate prediction of the future positions of pedestrians in traffic scenarios is required for the safe navigation of an autonomous vehicle but remains a challenge. This concerns, in particular, the effective and efficient multimodal prediction of the most likely trajectories of tracked pedestrians from the egocentric view of a self-driving car. In this paper, we present a novel solution, named M2P3, which combines a conditional variational autoencoder with a recurrent neural network encoder-decoder architecture in order to predict a set of possible future locations of each pedestrian in a traffic scene. The M2P3 system uses a sequence of RGB images delivered through an internal vehicle-mounted camera for egocentric vision. It takes only two modes as input, namely past trajectories and scales of pedestrians, and delivers as output the three most likely paths for each tracked pedestrian. Experimental evaluation of the proposed architecture on the JAAD, ETH/UCY and Stanford Drone datasets reveals that the M2P3 system is significantly superior to selected state-of-the-art solutions.
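The multimodal inference scheme described above (encode the observed past, sample latent codes from a CVAE prior, decode each into a candidate future trajectory, and keep the most likely few) can be sketched as follows. This is a hypothetical toy illustration, not the authors' M2P3 code: the encoder and decoder here are simple closed-form stand-ins for the RNN encoder-decoder, and the likelihood proxy (small latent norm) is an assumption made only to rank samples.

```python
# Hypothetical sketch of M2P3-style multimodal inference (not the authors' code):
# encode the past trajectory into a condition vector, sample latent codes
# z ~ N(0, I) as a CVAE does at test time, decode each (z, condition) pair
# into a future trajectory, and keep the k most likely hypotheses.
import numpy as np

rng = np.random.default_rng(0)

def encode_past(past):
    # Toy stand-in for the RNN encoder: summarize the observed past
    # (T_obs, 2) as last position plus last-step velocity.
    last = past[-1]
    vel = past[-1] - past[-2]
    return np.concatenate([last, vel])          # condition vector, shape (4,)

def decode(z, cond, horizon=5):
    # Toy stand-in for the RNN decoder: constant-velocity rollout
    # perturbed by the latent code z.
    last, vel = cond[:2], cond[2:]
    steps = np.arange(1, horizon + 1)[:, None]
    return last + steps * (vel + 0.1 * z)       # (horizon, 2) future positions

def predict_topk(past, n_samples=20, k=3):
    cond = encode_past(past)
    zs = rng.normal(size=(n_samples, 2))        # latent samples z ~ N(0, I)
    trajs = np.stack([decode(z, cond) for z in zs])
    # Assumed likelihood proxy: rank samples by latent-code norm.
    order = np.argsort(np.linalg.norm(zs, axis=1))
    return trajs[order[:k]]                     # the k most likely trajectories

past = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
top3 = predict_topk(past)
print(top3.shape)   # (3, 5, 2): 3 hypotheses, 5 future steps, (x, y)
```

Returning a small set of ranked hypotheses, rather than a single path, is what makes the prediction multimodal: a pedestrian at a curb may plausibly continue, stop, or cross, and each sampled latent code captures one such mode.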