Publication

Reward-Weighted Regression with Sample Reuse for Direct Policy Search in Reinforcement Learning

Hirotaka Hachiya; Jan Peters; Masashi Sugiyama
In: Neural Computation, Vol. 23, No. 11, Pages 2798-2832, MIT Press, 2011.

Abstract

Direct policy search is a promising reinforcement learning framework, in particular for controlling continuous, high-dimensional systems. However, policy search often requires a large number of samples to obtain a stable policy-update estimator, which is prohibitive when sampling is costly. In this letter, we extend an expectation-maximization-based policy search method so that previously collected samples can be efficiently reused. The usefulness of the proposed method, reward-weighted regression with sample reuse (R³), is demonstrated through robot learning experiments. (This letter is an extended version of our earlier conference paper: Hachiya, Peters, & Sugiyama, 2009.)
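To make the idea concrete, the sketch below shows one importance-weighted reward-weighted regression (RWR) update for a linear-Gaussian policy. It is a minimal illustration of the general technique described in the abstract, not the authors' implementation: the function name, array shapes, and log-probability inputs are all assumptions, and rewards are assumed nonnegative (as RWR treats them like unnormalized weights).

    import numpy as np

    def rwr_update_with_reuse(states, actions, rewards,
                              target_logp, behavior_logp):
        """One illustrative RWR M-step with sample reuse (hypothetical API).

        states:        (n, d) array of state features
        actions:       (n,)   array of actions, a ~ N(theta^T s, sigma^2)
        rewards:       (n,)   nonnegative rewards (or transformed returns)
        target_logp:   (n,)   log-probabilities of actions under the current policy
        behavior_logp: (n,)   log-probabilities under the older data-collecting policy
        """
        # Importance ratios correct for samples drawn under a previous
        # policy, which is what allows previously collected data to be reused.
        iw = np.exp(target_logp - behavior_logp)
        w = rewards * iw  # reward-weighted, reuse-corrected sample weights
        W = np.diag(w)
        # Weighted least squares: theta = (S^T W S)^{-1} S^T W a
        theta = np.linalg.solve(states.T @ W @ states,
                                states.T @ W @ actions)
        return theta

In this sketch, high-reward samples dominate the regression, and the importance ratio down-weights reused samples that are unlikely under the current policy; the actual R³ method in the letter develops this sample-reuse scheme in full.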
