
Publication

EANT+KALMAN: An Efficient Reinforcement Learning Method for Continuous State Partially Observable Domains

Yohannes Kassahun; José de Gea Fernández; Jan Hendrik Metzen; Mark Edgington; Frank Kirchner
In: Andreas Dengel; K. Berns; Thomas Breuel; Frank Bomarius; Thomas Roth-Berghofer (Eds.). KI 2008: Advances in Artificial Intelligence. 31st German Conference on Artificial Intelligence (KI-08), September 23-26, Kaiserslautern, Germany, pages 241-248, Lecture Notes in Artificial Intelligence (LNAI), Vol. 5243, Springer, Berlin/Heidelberg, 2008.

Abstract

In this contribution we present an extension of a neuroevolutionary method called Evolutionary Acquisition of Neural Topologies (EANT) [11] that allows the evolution of solutions taking the form of a POMDP (Partially Observable Markov Decision Process) agent [8]. The proposed solution cascades a Kalman filter [10] (state estimator) with a feed-forward neural network. The extension (EANT+KALMAN) has been tested on the double pole balancing without velocities benchmark, achieving significantly better results than those published to date for other algorithms.
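To illustrate the cascaded architecture described in the abstract, the following minimal sketch (not the authors' implementation) shows a linear Kalman filter estimating a full state from partial, position-only observations, with the estimate then fed into a feed-forward neural network that produces a control action. All matrices, dimensions, time step, and network weights are illustrative placeholders; the actual double pole balancing task has a six-dimensional state, and in EANT+KALMAN the network structure and weights would be evolved rather than fixed as below.

```python
import numpy as np


class KalmanFilter:
    """Standard linear Kalman filter with predict/update steps."""

    def __init__(self, A, H, Q, R, x0, P0):
        self.A, self.H, self.Q, self.R = A, H, Q, R
        self.x, self.P = x0, P0

    def predict(self):
        # Propagate state estimate and covariance through the linear model.
        self.x = self.A @ self.x
        self.P = self.A @ self.P @ self.A.T + self.Q
        return self.x

    def update(self, z):
        # Correct the prediction with a partial observation z.
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)          # Kalman gain
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(len(self.x)) - K @ self.H) @ self.P
        return self.x


def feedforward_policy(x_est, W1, b1, W2, b2):
    """Small feed-forward network mapping the estimated state to an action."""
    h = np.tanh(W1 @ x_est + b1)
    return np.tanh(W2 @ h + b2)


if __name__ == "__main__":
    dt = 0.02  # hypothetical control time step
    # Simplified 4-D state [position, velocity, angle, angular velocity];
    # only position and angle are observed (no velocities).
    A = np.array([[1, dt, 0, 0],
                  [0, 1,  0, 0],
                  [0, 0,  1, dt],
                  [0, 0,  0, 1]], dtype=float)
    H = np.array([[1, 0, 0, 0],
                  [0, 0, 1, 0]], dtype=float)
    kf = KalmanFilter(A, H, Q=1e-4 * np.eye(4), R=1e-2 * np.eye(2),
                      x0=np.zeros(4), P0=np.eye(4))

    # Placeholder network weights (in EANT+KALMAN these would be evolved).
    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(6, 4)), np.zeros(6)
    W2, b2 = rng.normal(size=(1, 6)), np.zeros(1)

    z = np.array([0.1, 0.05])       # one partial observation (positions only)
    kf.predict()
    x_est = kf.update(z)            # estimated full state, including velocities
    action = feedforward_policy(x_est, W1, b1, W2, b2)
    print("estimated state:", x_est, "action:", action)
```

The design point the sketch is meant to convey is that the state estimator restores the velocity information missing from the observations, so the downstream feed-forward network can be treated as a policy over (approximately) fully observable states.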