
Publication

Non-Strict Hierarchical Reinforcement Learning for Interactive Systems and Robots

Dr. Heriberto Cuayáhuitl; Ivana Kruijff-Korbayová; Nina Dethlefs
In: ACM Transactions on Interactive Intelligent Systems, Vol. 4, No. 3, ACM, 2014.

Abstract

Conversational systems and robots that use reinforcement learning for policy optimization in large domains often face the problem of limited scalability. This problem has been addressed either by using function approximation techniques that approximate the true value function of a policy or by using a hierarchical decomposition of a learning task into subtasks. We present a novel approach for dialogue policy optimization that combines the benefits of both hierarchical control and function approximation and that allows flexible transitions between dialogue subtasks to give human users more control over the dialogue. To this end, each reinforcement learning agent in the hierarchy is extended with a subtask transition function and a dynamic state space to allow flexible switching between subdialogues. In addition, the subtask policies are represented with linear function approximation in order to generalize decision making to situations unseen in training. Our proposed approach is evaluated in an interactive conversational robot that learns to play quiz games. Experimental results, using simulation and real users, provide evidence that our proposed approach can lead to more flexible (natural) interactions than strict hierarchical control and that it is preferred by human users.
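To illustrate the two ingredients the abstract combines, the following minimal Python sketch (not the authors' implementation; all subtask, action, and feature names are hypothetical) pairs per-subtask Q-learning agents that use linear function approximation with a hand-crafted subtask transition function that may switch subdialogues at any turn, rather than only when the active subtask terminates.

# Illustrative sketch only -- not the authors' code. It shows (1) a Q-function
# per dialogue subtask represented with linear function approximation, and
# (2) a subtask transition function enabling non-strict hierarchical control.
# Subtask, action, and feature names are hypothetical.
import random

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1


class LinearQAgent:
    """One RL agent per dialogue subtask; Q(s, a) = w_a . phi(s)."""

    def __init__(self, actions, n_features):
        self.actions = actions
        self.w = {a: [0.0] * n_features for a in actions}

    def q(self, phi, a):
        return sum(wi * xi for wi, xi in zip(self.w[a], phi))

    def act(self, phi):
        # Epsilon-greedy over the linearly approximated Q-values.
        if random.random() < EPSILON:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q(phi, a))

    def update(self, phi, a, reward, phi_next, done):
        # Q-learning TD update on the weight vector of the chosen action.
        target = reward if done else reward + GAMMA * max(
            self.q(phi_next, b) for b in self.actions)
        td_error = target - self.q(phi, a)
        self.w[a] = [wi + ALPHA * td_error * xi
                     for wi, xi in zip(self.w[a], phi)]


def subtask_transition(current_subtask, user_intent):
    """Non-strict transitions: the user may redirect the dialogue at any
    point instead of waiting for the active subtask to terminate."""
    if user_intent == "request_quiz" and current_subtask != "quiz_game":
        return "quiz_game"
    if user_intent == "small_talk":
        return "social_dialogue"
    return current_subtask  # otherwise stay in the current subdialogue


# Toy usage: two subtask agents sharing a 3-dimensional feature vector.
agents = {
    "quiz_game": LinearQAgent(["ask_question", "give_feedback"], n_features=3),
    "social_dialogue": LinearQAgent(["greet", "chat"], n_features=3),
}
subtask, phi = "social_dialogue", [1.0, 0.0, 0.5]
subtask = subtask_transition(subtask, user_intent="request_quiz")  # -> "quiz_game"
action = agents[subtask].act(phi)
agents[subtask].update(phi, action, reward=1.0, phi_next=[0.0, 1.0, 0.5], done=False)

In a full system the transition logic and state features would be tailored to (or learned from) the dialogue domain; the sketch only shows how flexible subtask switching and linear value estimates can coexist in one control loop.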
