Voice Puppetry
Matthew Brand
Proceedings of SIGGRAPH 99, August 1999, pp. 21--28.
Abstract: We introduce a method for predicting a control signal from another, related signal, and apply it to voice puppetry: generating full facial animation from expressive information in an audio track. The voice puppet learns a facial control model from computer-vision observations of real facial behavior, automatically incorporating vocal and facial dynamics such as co-articulation. Animation is produced by using audio to drive the model, which induces a probability distribution over the manifold of possible facial motions; we present a linear-time, closed-form solution for the most probable trajectory over this manifold. The output is a series of facial control parameters suitable for driving many kinds of animation, from video-realistic image warps to 3D cartoon characters.
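The decoding step summarized above -- driving the learned model with audio and extracting the most probable facial trajectory -- can be illustrated with a short sketch. The paper's actual solution is a closed-form, linear-time trajectory solve over the learned model; the sketch below substitutes a standard Viterbi decode plus simple smoothing, and every name in it (viterbi, controls_from_path, the toy array shapes) is an illustrative assumption rather than the paper's API:

import numpy as np

# Minimal sketch, NOT the paper's algorithm: decode the most probable
# hidden-state path for a sequence of audio features through an HMM,
# then map states to facial control parameters.
def viterbi(log_obs, log_trans, log_init):
    # log_obs:   (T, S) per-frame log P(audio_t | state s) -- assumed given
    # log_trans: (S, S) log transition matrix
    # log_init:  (S,)   log initial-state distribution
    T, S = log_obs.shape
    delta = log_init + log_obs[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_trans        # score of moving i -> j
        back[t] = np.argmax(scores, axis=0)
        delta = scores[back[t], np.arange(S)] + log_obs[t]
    path = np.empty(T, dtype=int)
    path[-1] = int(np.argmax(delta))
    for t in range(T - 1, 0, -1):                  # backtrack best path
        path[t - 1] = back[t, path[t]]
    return path

def controls_from_path(path, state_means, smooth=0.8):
    # Crude stand-in for the paper's closed-form trajectory solve: take
    # each decoded state's mean facial controls and exponentially smooth
    # them into a continuous control curve.
    out = np.zeros((len(path), state_means.shape[1]))
    out[0] = state_means[path[0]]
    for t in range(1, len(path)):
        out[t] = smooth * out[t - 1] + (1.0 - smooth) * state_means[path[t]]
    return out

# Toy usage with random parameters (purely illustrative):
rng = np.random.default_rng(0)
S, T, D = 8, 100, 6                                # states, frames, controls
log_trans = np.log(rng.dirichlet(np.ones(S), size=S))
log_init = np.log(np.full(S, 1.0 / S))
log_obs = rng.normal(size=(T, S))                  # stand-in audio scores
means = rng.normal(size=(S, D))                    # per-state control means
controls = controls_from_path(viterbi(log_obs, log_trans, log_init), means)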
Keywords: facial animation, lip-syncing, control, learning, computer vision and audition
BibTeX format:
@inproceedings{Brand:1999:VP,
  author = {Matthew Brand},
  title = {Voice Puppetry},
  booktitle = {Proceedings of SIGGRAPH 99},
  pages = {21--28},
  month = aug,
  year = {1999},
}