Dynamic, Expressive Speech Animation From a Single Mesh
Kevin Wampler, Daichi Sasaki, Li Zhang, Zoran Popović
Symposium on Computer Animation, August 2007, pp. 53--62.
Abstract: In this work we present a method for human face animation that generates animations for a novel person given just a single mesh of their face. These animations can speak arbitrary text and may include emotional expressions. We build a multilinear model from data that encapsulates the variation in dynamic face motion across changes in identity, expression, and spoken text. We then describe a synthesis method, consisting of a phoneme-planning stage and a blending stage, which uses this model as a base and attempts to preserve both face shape and dynamics given a novel text and an emotion at each point in time.
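As a rough illustration of the multilinear-model idea mentioned in the abstract, the sketch below contracts a core tensor with one coefficient vector per factor (identity, expression, viseme) to produce mesh vertices. This is a minimal sketch of the general technique, not the authors' implementation; all dimensions, names, and weights here are illustrative assumptions.

```python
import numpy as np

n_verts = 1000  # hypothetical vertex count (illustrative assumption)

# Hypothetical core tensor with modes: (3*n_verts vertex coordinates) x
# (identity) x (expression) x (viseme). Real dimensions would come from
# the training data; these are placeholders.
core = np.random.default_rng(0).random((3 * n_verts, 10, 5, 8))

def mode_product(tensor, vec, mode):
    """Tensor mode-n product: contract `tensor` with `vec` along axis `mode`."""
    return np.tensordot(tensor, vec, axes=([mode], [0]))

def synthesize_vertices(core, w_id, w_expr, w_vis):
    """Contract the core with one coefficient vector per factor.

    After each contraction the next factor's axis shifts to axis 1,
    so all three products use mode=1.
    """
    m = mode_product(core, w_id, 1)    # fix identity
    m = mode_product(m, w_expr, 1)     # fix expression
    m = mode_product(m, w_vis, 1)      # fix viseme
    # Assumes x,y,z coordinates are interleaved per vertex.
    return m.reshape(n_verts, 3)

# Example: identity coefficients (as might be fit from a single mesh),
# plus one-hot expression and viseme weights.
w_id = np.full(10, 0.1)
w_expr = np.eye(5)[1]
w_vis = np.eye(8)[3]
verts = synthesize_vertices(core, w_id, w_expr, w_vis)
print(verts.shape)  # (1000, 3)
```

In the paper's pipeline, coefficients like `w_expr` and `w_vis` would vary over time, driven by the phoneme-planning and blending stages rather than being fixed one-hot vectors as in this toy example.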
BibTeX format:
@inproceedings{Wampler:2007:DES,
  author = {Kevin Wampler and Daichi Sasaki and Li Zhang and Zoran Popović},
  title = {Dynamic, Expressive Speech Animation From a Single Mesh},
  booktitle = {Symposium on Computer Animation},
  pages = {53--62},
  month = aug,
  year = {2007},
}