
Publication

Learning Relational Navigation Policies

Alexandru Cocora; Kristian Kersting; Christian Plagemann; Wolfram Burgard; Luc De Raedt
In: Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2006), Beijing, China, October 9-15, 2006, pages 2792-2797. IEEE, 2006.

Abstract

Navigation is one of the fundamental tasks for a mobile robot. The majority of path-planning approaches have been designed to solve the given problem entirely from scratch, based on the current and goal configurations of the robot. Although these approaches yield highly efficient plans, the computed policies typically do not transfer to other, similar tasks. We propose to learn relational decision trees as abstract navigation strategies from example paths. Relational abstraction has several interesting and important properties. First, it allows a mobile robot to generalize navigation plans from specific examples provided by users or obtained through exploration. Second, the navigation policy learned in one environment can be transferred to unknown environments. In several experiments with real robots in a real environment as well as in simulated runs, we demonstrate the usefulness of our approach.
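
To give a flavor of what a relational navigation policy looks like, the following is a minimal, purely illustrative Python sketch (not the authors' code or data): a hand-written stand-in for a learned relational decision tree whose internal nodes test relational predicates over the robot's situation and whose leaves return abstract actions. All predicate, action, and field names (in_room, connects, cross_door, and so on) are hypothetical choices for this sketch.

```python
# Illustrative sketch only: a toy "relational decision tree" policy that maps
# a relational description of the robot's situation to an abstract action.
# Predicate and action names are hypothetical, not taken from the paper.

from dataclasses import dataclass


@dataclass
class State:
    robot_room: str   # room the robot currently occupies
    goal_room: str    # room containing the goal
    doors: dict       # door name -> (room_a, room_b) connectivity


def connects(doors, room_a, room_b):
    """Relational test: is there a door directly linking room_a and room_b?"""
    return any({room_a, room_b} == set(pair) for pair in doors.values())


def policy(state: State) -> str:
    """Hand-written stand-in for a learned relational decision tree.

    Each internal node tests a relational predicate over the state;
    each leaf returns an abstract action to be grounded by a
    low-level planner.
    """
    if state.robot_room == state.goal_room:
        return "approach_goal"                       # leaf: already in the goal room
    if connects(state.doors, state.robot_room, state.goal_room):
        return "cross_door(robot_room, goal_room)"   # leaf: one doorway away
    return "goto_nearest_unexplored_door"            # leaf: default exploration


# Because the policy is phrased over relational predicates rather than
# metric coordinates, the same tree applies in any environment described
# with the same predicates.
s = State(robot_room="lab", goal_room="kitchen",
          doors={"d1": ("lab", "corridor"), "d2": ("corridor", "kitchen")})
print(policy(s))  # -> "goto_nearest_unexplored_door"
```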
