
Publication

Learning Movement Primitives

Stefan Schaal; Jan Peters; Jun Nakanishi; Auke Jan Ijspeert
In: Paolo Dario; Raja Chatila (eds.). Robotics Research, The Eleventh International Symposium. International Symposium of Robotics Research (ISRR-2003), October 19-22, Siena, Italy, pages 561-572, Springer Tracts in Advanced Robotics (STAR), Vol. 15, Springer, 2003.

Abstract

This paper discusses a comprehensive framework for modular motor control based on a recently developed theory of dynamic movement primitives (DMP). DMPs are a formulation of movement primitives with autonomous nonlinear differential equations, whose time evolution creates smooth kinematic control policies. Model-based control theory is used to convert the outputs of these policies into motor commands. By means of coupling terms, on-line modifications can be incorporated into the time evolution of the differential equations, thus providing a flexible and reactive framework for motor planning and execution. The linear parameterization of DMPs lends itself naturally to supervised learning from demonstration. Moreover, the temporal, scale, and translation invariance of the differential equations with respect to these parameters provides a useful means for movement recognition. A novel reinforcement learning technique based on natural stochastic policy gradients enables a general approach to improving DMPs by trial-and-error learning with respect to almost arbitrary optimization criteria. We demonstrate the different ingredients of the DMP approach in various examples, involving skill learning from demonstration on the humanoid robot DB, and learning biped walking from demonstration in simulation, including self-improvement of the movement patterns towards energy efficiency through resonance tuning.
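For concreteness, the sketch below illustrates the two ideas the abstract highlights: a discrete DMP whose nonlinear differential equations generate a smooth kinematic policy, and the linear parameterization that allows the forcing-term weights to be fitted to a single demonstration by weighted regression. It assumes the standard second-order transformation system driven by a first-order canonical (phase) system; the gain values, basis-function placement, Euler integration, and the class/function names are illustrative assumptions, not details taken from the paper.

```python
# A minimal one-dimensional discrete DMP sketch (illustrative, not the paper's code).
import numpy as np


class DMP1D:
    def __init__(self, n_basis=20, alpha_z=25.0, beta_z=6.25, alpha_x=8.0, tau=1.0):
        self.alpha_z, self.beta_z, self.alpha_x, self.tau = alpha_z, beta_z, alpha_x, tau
        # Basis-function centers placed in the phase variable x, which decays from 1 toward 0.
        self.c = np.exp(-alpha_x * np.linspace(0.0, 1.0, n_basis))
        self.h = 1.0 / np.gradient(self.c) ** 2  # widths from center spacing (heuristic)
        self.w = np.zeros(n_basis)
        self.y0, self.g = 0.0, 1.0

    def _forcing(self, x):
        # Phase-dependent nonlinear forcing term, linear in the weights w.
        psi = np.exp(-self.h * (x - self.c) ** 2)
        return (psi @ self.w) / (psi.sum() + 1e-10) * x * (self.g - self.y0)

    def fit(self, y_demo, dt):
        """Learning from demonstration: fit w by weighted linear regression."""
        self.y0, self.g = y_demo[0], y_demo[-1]
        yd = np.gradient(y_demo, dt)
        ydd = np.gradient(yd, dt)
        T = len(y_demo)
        x = np.exp(-self.alpha_x / self.tau * dt * np.arange(T))  # phase along the demo
        # Invert the transformation system to obtain the target forcing term.
        f_target = self.tau ** 2 * ydd - self.alpha_z * (self.beta_z * (self.g - y_demo) - self.tau * yd)
        s = x * (self.g - self.y0)
        for i in range(len(self.w)):  # one locally weighted fit per basis function
            psi = np.exp(-self.h[i] * (x - self.c[i]) ** 2)
            self.w[i] = (psi * s) @ f_target / ((psi * s) @ s + 1e-10)

    def rollout(self, T, dt):
        """Integrate the DMP (Euler steps) to generate a smooth kinematic policy."""
        y, z, x = self.y0, 0.0, 1.0
        traj = np.empty(T)
        for t in range(T):
            z += dt / self.tau * (self.alpha_z * (self.beta_z * (self.g - y) - z) + self._forcing(x))
            y += dt / self.tau * z
            x += dt / self.tau * (-self.alpha_x * x)
            traj[t] = y
        return traj


# Usage: learn a smooth 0 -> 1 reaching demonstration and reproduce it.
dt, T = 0.001, 1000
t = np.linspace(0, 1, T)
demo = 10 * t**3 - 15 * t**4 + 6 * t**5  # minimum-jerk-like trajectory
dmp = DMP1D()
dmp.fit(demo, dt)
print(np.max(np.abs(dmp.rollout(T, dt) - demo)))  # small reproduction error
```

Because the goal g, start y0, and time constant tau enter the equations explicitly, the same fitted weights can be reused for rescaled or retimed movements, which is the invariance property the abstract exploits for movement recognition.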
