Publication
Symbolic Dynamic Programming for First-order POMDPs
Scott Sanner; Kristian Kersting
In: Maria Fox; David Poole (Eds.). Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence. AAAI Conference on Artificial Intelligence (AAAI-2010), July 11-15, Atlanta, Georgia, USA, pages 1140-1146, AAAI Press, 2010.
Abstract
Partially-observable Markov decision processes (POMDPs) provide a powerful model for sequential decision-making problems with partially-observed state and are known to have (approximately) optimal dynamic programming solutions. Much work in recent years has focused on improving the efficiency of these dynamic programming algorithms by exploiting symmetries and factored or relational representations. In this work, we show that it is also possible to exploit the full expressive power of first-order quantification to achieve state, action, and observation abstraction in a dynamic programming solution to relationally specified POMDPs. Among the advantages of this approach are the ability to maintain compact value function representations, abstract over the space of potentially optimal actions, and automatically derive compact conditional policy trees that minimally partition relational observation spaces according to distinctions that have an impact on policy values. This is the first lifted relational POMDP solution that can optimally accommodate actions with a potentially infinite relational space of observation outcomes.
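As background (not part of the paper's abstract), the dynamic programming solution referred to above is, in the flat propositional case, the standard POMDP Bellman backup over belief states b; the paper's contribution is to carry out such backups symbolically at the first-order level. A common textbook form of the backup, given states S, actions A, observations O, reward R, and discount gamma, is

V_{n+1}(b) = \max_{a \in A} \Big[ \sum_{s \in S} b(s)\, R(s,a) + \gamma \sum_{o \in O} P(o \mid b, a)\, V_n\big(b^{a,o}\big) \Big],
\qquad b^{a,o}(s') \propto P(o \mid s', a) \sum_{s \in S} P(s' \mid s, a)\, b(s).

Each V_n is piecewise linear and convex and can be represented by a finite set of alpha-vectors; the compact value function representations mentioned in the abstract are, roughly, the lifted first-order analogue of keeping this set small via state, action, and observation abstraction.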