Learning Behavior Styles with Inverse Reinforcement Learning
Seong Jae Lee, Zoran Popović
In ACM Transactions on Graphics, 29(4), July 2010.
Abstract: We present a method for inferring the behavior styles of character controllers from a small set of examples. We show that a rich set of behavior variations can be captured by determining the appropriate reward function in the reinforcement learning framework, and that the discovered reward function can be applied to different environments and scenarios. We also introduce a new algorithm for recovering the unknown reward function that improves on the original apprenticeship learning algorithm. The recovered reward function representing a behavior style can be applied to a variety of tasks while still preserving the key features of the style present in the given examples. Finally, we describe an adaptive process in which an author can, with just a few additional examples, refine the behavior so that it generalizes better.
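To make the reward-recovery idea concrete: the abstract builds on apprenticeship learning, whose classic projection algorithm (Abbeel and Ng, 2004) finds a linear reward whose optimal policy matches the expert's discounted feature expectations. The sketch below is an illustrative assumption, not the paper's algorithm or its animation setting: a tiny 5-state chain MDP with one-hot state features, where the "expert" style is simply to head right.

```python
import numpy as np

# Illustrative sketch of projection-style apprenticeship learning
# (Abbeel & Ng 2004), which this paper's method improves on.
# The chain MDP, features, and "expert" below are all hypothetical.
N, GAMMA = 5, 0.9        # 5-state chain, discount factor
ACTIONS = (-1, +1)       # move left / move right (clipped at the ends)

def phi(s):
    """One-hot state features; reward is r(s) = w . phi(s)."""
    f = np.zeros(N)
    f[s] = 1.0
    return f

def greedy_policy(w, iters=200):
    """Value iteration, then the greedy policy for reward w . phi(s)."""
    V = np.zeros(N)
    for _ in range(iters):
        V = np.array([w @ phi(s) + GAMMA * max(
            V[np.clip(s + a, 0, N - 1)] for a in ACTIONS)
            for s in range(N)])
    return [max(ACTIONS, key=lambda a: V[np.clip(s + a, 0, N - 1)])
            for s in range(N)]

def feature_expectations(policy, s0=0, horizon=50):
    """Discounted feature counts of rolling out a deterministic policy."""
    mu, s = np.zeros(N), s0
    for t in range(horizon):
        mu += (GAMMA ** t) * phi(s)
        s = int(np.clip(s + policy[s], 0, N - 1))
    return mu

# "Expert" style: always move right (analogous to a demonstrated style).
mu_E = feature_expectations([+1] * N)

# Projection algorithm: drive mu_bar toward the expert's mu_E.
mu_bar = feature_expectations([-1] * N)   # start from an arbitrary policy
for _ in range(20):
    w = mu_E - mu_bar                     # candidate reward direction
    mu = feature_expectations(greedy_policy(w))
    d = mu - mu_bar
    if d @ d < 1e-12:
        break
    # Orthogonal projection of mu_bar onto the segment toward mu.
    mu_bar = mu_bar + ((d @ (mu_E - mu_bar)) / (d @ d)) * d

print(np.linalg.norm(mu_E - mu_bar))      # near zero: style matched
```

On this toy problem the loop recovers a reward whose optimal policy reproduces the expert's feature expectations; the paper's contribution is a reward-recovery algorithm that improves on this baseline and scales the idea to character behavior styles.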
Keyword(s): apprenticeship learning, data driven animation, human animation, inverse reinforcement learning, optimal control
Article URL: http://doi.acm.org/10.1145/1778765.1778859
BibTeX format:
@article{Lee:2010:LBS,
  author = {Seong Jae Lee and Zoran Popović},
  title = {Learning Behavior Styles with Inverse Reinforcement Learning},
  journal = {ACM Transactions on Graphics},
  volume = {29},
  number = {4},
  pages = {122:1--122:7},
  month = jul,
  year = {2010},
}