
Publication

Reinforcement Learning for Parameterized Motor Primitives

Jan Peters; Stefan Schaal
In: Proceedings of the International Joint Conference on Neural Networks, IJCNN 2006. International Joint Conference on Neural Networks (IJCNN-2006), July 16-21, Vancouver, BC, Canada, Pages 73-80, IEEE, 2006.

Abstract

One of the major challenges both in action generation for robotics and in the understanding of human motor control is to learn the "building blocks of movement generation", called motor primitives. Motor primitives, as used in this paper, are parameterized control policies such as splines or nonlinear differential equations with desired attractor properties. While much progress has been made in teaching parameterized motor primitives through supervised or imitation learning, self-improvement through interaction of the system with the environment remains a challenging problem. In this paper, we evaluate different reinforcement learning approaches for improving the performance of parameterized motor primitives. To this end, we highlight the difficulties with current reinforcement learning methods and outline both established and novel algorithms for the gradient-based improvement of parameterized policies. We compare these algorithms in the context of motor primitive learning and show that our most recent algorithm, the Episodic Natural Actor-Critic, outperforms previous algorithms by at least an order of magnitude. We demonstrate the efficiency of this reinforcement learning method in the application of learning to hit a baseball with an anthropomorphic robot arm.
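To make the abstract's central idea concrete, the following Python sketch illustrates the core of an Episodic Natural Actor-Critic style update on a toy tracking task: episode returns are regressed on each rollout's summed log-policy gradient, and the regression weights serve as the natural-gradient estimate. This is a rough illustration only, not the authors' implementation; the linearly parameterized open-loop policy, the Gaussian exploration, the toy reward, and all names (rollout, basis, target) are assumptions made for this sketch.

import numpy as np

rng = np.random.default_rng(0)

# Toy "motor primitive": a linearly parameterized open-loop policy
# u_t = theta^T phi(t) with Gaussian exploration noise (an assumption
# for this sketch, not the paper's dynamical-systems primitives).
T = 20                       # time steps per episode
basis = np.stack([np.exp(-0.5 * ((np.linspace(0, 1, T) - c) / 0.1) ** 2)
                  for c in np.linspace(0, 1, 5)], axis=1)   # (T, 5)
theta = np.zeros(5)
sigma = 0.3                  # exploration std on the actions
target = np.sin(np.linspace(0, np.pi, T))   # hypothetical desired trajectory


def rollout(theta):
    """Run one episode; return the episode return and the summed
    log-likelihood gradient psi = sum_t d log pi(u_t | t) / d theta."""
    mean_u = basis @ theta                       # (T,)
    u = mean_u + sigma * rng.standard_normal(T)  # sampled actions
    ret = -np.sum((u - target) ** 2)             # toy tracking reward
    psi = basis.T @ ((u - mean_u) / sigma ** 2)  # Gaussian score function
    return ret, psi


for it in range(200):
    returns, psis = [], []
    for _ in range(20):                          # episodes per update
        r, psi = rollout(theta)
        returns.append(r)
        psis.append(psi)
    R = np.array(returns)
    Psi = np.array(psis)                         # (episodes, 5)

    # eNAC-style regression: R_e ~ psi_e^T w + J.  The weight vector w
    # is the natural-gradient estimate; J estimates the baseline return.
    X = np.hstack([Psi, np.ones((len(R), 1))])
    coef, *_ = np.linalg.lstsq(X, R, rcond=None)
    w = coef[:-1]

    theta += 0.1 * w                             # gradient ascent step

print("final mean return:", np.mean([rollout(theta)[0] for _ in range(50)]))

The design choice that distinguishes this family of methods from vanilla policy gradients is the regression step: solving for w instead of averaging return-weighted scores yields a natural-gradient direction that is invariant to the policy parameterization, which is what the paper credits for the order-of-magnitude speedup.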
