Inferring Landmarks for Pedestrian Navigation from Mobile Eye-Tracking Data and Google Street View

Christian Lander, Frederik Wiehr, Nico Herbig, Antonio Krüger, Markus Löchtefeld

In: Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA '17), Denver, Colorado, United States, ACM, May 2017.


While it is well established that incorporating landmarks into route descriptions improves wayfinding comprehension and performance, very few available systems make use of them. This is primarily because landmark data is often unavailable, and creating it requires tedious manual labor. Prior work explored crowd-sourced approaches to collect landmark data, but most of it relied on explicit user input. In this paper, we present our work towards a system that automatically infers suitable landmarks for pedestrian navigation instructions from mobile eye-tracking data. By matching the video feed of the scene camera of a head-mounted eye tracker to Google Street View imagery, our system is able to cluster the visual attention of the users and extract suitable landmarks from it. We present early results of a field study conducted with six participants to highlight the potential of our approach.
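The abstract describes clustering users' visual attention to find candidate landmarks. As a loose illustration of that step only, the sketch below groups 2D gaze fixation points with a minimal density-based clustering and returns the centroid of the densest cluster. All function names, thresholds, and data are hypothetical; this is not the authors' implementation, which additionally matches scene-camera frames against Street View imagery.

```python
# Hypothetical sketch: cluster 2D gaze fixations (pixel coordinates) to find
# attention hotspots. Simple DBSCAN-like grouping, stdlib only. The eps and
# min_pts values are illustrative assumptions, not from the paper.
from math import dist

def cluster_fixations(points, eps=30.0, min_pts=3):
    """Group fixations whose neighbors lie within `eps` pixels.

    Returns a list of clusters (each a list of points); points in sparse
    regions (fewer than `min_pts` neighbors) are treated as noise.
    """
    labels = [None] * len(points)  # None = unvisited, -1 = noise
    cluster_id = 0

    def neighbors(i):
        return [j for j, q in enumerate(points) if dist(points[i], q) <= eps]

    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1  # noise (may later become a border point)
            continue
        labels[i] = cluster_id
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster_id  # noise point absorbed as border point
            if labels[j] is not None:
                continue
            labels[j] = cluster_id
            more = neighbors(j)
            if len(more) >= min_pts:
                queue.extend(more)  # core point: keep expanding the cluster
        cluster_id += 1

    clusters = [[] for _ in range(cluster_id)]
    for p, lab in zip(points, labels):
        if lab is not None and lab >= 0:
            clusters[lab].append(p)
    return clusters

def main_attention_region(points):
    """Centroid of the largest fixation cluster: a candidate landmark area."""
    clusters = cluster_fixations(points)
    if not clusters:
        return None
    biggest = max(clusters, key=len)
    cx = sum(x for x, _ in biggest) / len(biggest)
    cy = sum(y for _, y in biggest) / len(biggest)
    return (cx, cy)
```

In a full pipeline, the returned image-space region would then be projected onto the matched Street View frame to identify the building or object being attended to.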


German Research Center for Artificial Intelligence