Publication

Motion Primitives as the Action Space of Deep Q-Learning for Planning in Autonomous Driving

Tristan Schneider; Matheus V. A. Pedrosa; Timo P. Gros; Verena Wolf; Kathrin Flaßkamp
In: IEEE Transactions on Intelligent Transportation Systems, Vol. 25, No. 11, Pages 17852-17864, IEEE, September 2024.

Abstract

Motion planning for autonomous vehicles is commonly implemented via graph-search methods, which limit the model accuracy and environmental complexity that can be handled under real-time constraints. In contrast, reinforcement learning, specifically the deep Q-learning (DQL) algorithm, provides an interesting alternative for real-time solutions. Some approaches, such as the deep Q-network (DQN), model the RL action space by quantizing the continuous control inputs. Here, we propose instead to use motion primitives, which encode continuous-time nonlinear system behavior, as the action space. The novel methodology of motion-primitive DQL planning is evaluated in a numerical example using a single-track vehicle model and different planning scenarios. We show that our approach outperforms a state-of-the-art graph-search method in computation time and probability of reaching the goal.
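To illustrate the core idea, the following is a minimal sketch (not the paper's implementation): each discrete action index of a Q-learning agent selects a motion primitive, i.e. a short open-loop control profile that is rolled out on a kinematic single-track (bicycle) model. The primitive set, wheelbase, step size, and durations here are illustrative assumptions.

```python
import math

DT = 0.1         # integration step [s] (assumed)
WHEELBASE = 2.7  # assumed wheelbase [m]

# Hypothetical primitive set: (steering angle [rad], acceleration [m/s^2], duration [s]).
# A DQN would output one Q-value per primitive index instead of per quantized control input.
PRIMITIVES = [
    ( 0.0, 1.0, 1.0),   # accelerate straight
    ( 0.0, 0.0, 1.0),   # keep speed straight
    ( 0.2, 0.0, 1.0),   # gentle left turn
    (-0.2, 0.0, 1.0),   # gentle right turn
]

def apply_primitive(state, action_index):
    """Roll the vehicle model forward under one motion primitive.

    state = (x, y, heading, speed); returns the successor state, i.e. one
    macro-step of the MDP whose discrete actions are the primitives.
    """
    x, y, theta, v = state
    delta, accel, duration = PRIMITIVES[action_index]
    for _ in range(int(duration / DT)):
        # Kinematic single-track model, forward-Euler integration.
        x += v * math.cos(theta) * DT
        y += v * math.sin(theta) * DT
        theta += v / WHEELBASE * math.tan(delta) * DT
        v = max(0.0, v + accel * DT)
    return (x, y, theta, v)
```

Because each primitive already respects the vehicle dynamics over its whole duration, the agent plans over dynamically feasible trajectory segments rather than instantaneous control inputs, which is the distinction the abstract draws against quantized-control DQN action spaces.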