Dynamic terrain traversal skills using reinforcement learning
Xue Bin Peng, Glen Berseth, Michiel van de Panne
In ACM Transactions on Graphics (TOG), 34(4), August 2015.
Abstract: The locomotion skills developed for physics-based characters most often target flat terrain. However, much of their potential lies with the creation of dynamic, momentum-based motions across more complex terrains. In this paper, we learn controllers that allow simulated characters to traverse terrains with gaps, steps, and walls using highly dynamic gaits. This is achieved using reinforcement learning, with careful attention given to the action representation, non-parametric approximation of both the value function and the policy, epsilon-greedy exploration, and the learning of a good state distance metric. The methods enable a 21-link planar dog and a 7-link planar biped to navigate challenging sequences of terrain using bounding and running gaits. We evaluate the impact of the key features of our skill learning pipeline on the resulting performance.
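
To make the abstract's ingredients concrete, the sketch below illustrates epsilon-greedy action selection over a non-parametric (nearest-neighbor) value estimate in generic form. It is only an assumption-laden illustration of these standard techniques, not the paper's implementation: the function names, the inverse-distance-weighted kNN averaging, the plain Euclidean distance (the paper learns its own state distance metric), and the hypothetical transition_model callable are all placeholders introduced here.

```python
import numpy as np

def knn_value(state, states, values, k=10):
    """Estimate V(state) as a distance-weighted average of the k nearest
    stored (state, value) samples. A learned distance metric could replace
    the Euclidean distance used here for simplicity."""
    d = np.linalg.norm(states - state, axis=1)   # distances to all stored samples
    idx = np.argsort(d)[:k]                      # indices of the k nearest samples
    w = 1.0 / (d[idx] + 1e-6)                    # inverse-distance weights
    return np.dot(w, values[idx]) / np.sum(w)

def epsilon_greedy(state, candidate_actions, transition_model,
                   states, values, epsilon=0.1, rng=np.random.default_rng()):
    """With probability epsilon take a random candidate action (exploration);
    otherwise take the action whose predicted next state has the highest
    estimated value (exploitation). transition_model(state, action) is a
    hypothetical predictor of the resulting character state."""
    if rng.random() < epsilon:
        return candidate_actions[rng.integers(len(candidate_actions))]
    scores = [knn_value(transition_model(state, a), states, values)
              for a in candidate_actions]
    return candidate_actions[int(np.argmax(scores))]
```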
Article URL: http://doi.acm.org/10.1145/2766910
BibTeX format:
@article{10.1145-2766910,
  author = {Xue Bin Peng and Glen Berseth and Michiel van de Panne},
  title = {Dynamic terrain traversal skills using reinforcement learning},
  journal = {ACM Transactions on Graphics (TOG)},
  volume = {34},
  number = {4},
  articleno = {80},
  month = aug,
  year = {2015},
}