Dynamic, Expressive Speech Animation From a Single Mesh
Kevin Wampler, Daichi Sasaki, Li Zhang, Zoran Popović
Symposium on Computer Animation, August 2007, pp. 53--62.
Abstract: In this work we present a method for human face animation that generates animations for a novel person given just a single mesh of their face. These animations can cover arbitrary text and may include emotional expressions. We build a multilinear model from data which encapsulates the variation in dynamic face motions over changes in identity, expression, and text. We then describe a synthesis method, consisting of a phoneme-planning stage and a blending stage, which uses this model as a base and attempts to preserve both face shape and dynamics given a novel text and an emotion at each point in time.
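The multilinear model described in the abstract separates face variation into independent modes (identity, expression, and speech content). A common realization of such a model is a Tucker-style core tensor contracted with a weight vector per mode; a minimal sketch of that idea is below. All dimensions, names, and the random core are hypothetical illustrations, not the paper's actual data or method.

```python
import numpy as np

# Hypothetical sizes for illustration only; the paper's data differs.
n_verts = 12                   # flattened mesh coordinates
n_id, n_expr, n_vis = 4, 3, 5  # identity, expression, speech-content modes

rng = np.random.default_rng(0)
# Core tensor of a Tucker-style multilinear model; its modes are
# (mesh coordinates, identity, expression, speech content).
core = rng.standard_normal((n_verts, n_id, n_expr, n_vis))

def synthesize(w_id, w_expr, w_vis):
    """Contract the core tensor with one weight vector per mode
    to produce a single face mesh (flattened vertex positions)."""
    m = np.tensordot(core, w_id, axes=([1], [0]))   # -> (verts, expr, vis)
    m = np.tensordot(m, w_expr, axes=([1], [0]))    # -> (verts, vis)
    m = np.tensordot(m, w_vis, axes=([1], [0]))     # -> (verts,)
    return m

# Evaluate the model at uniform weights in each mode.
mesh = synthesize(np.ones(n_id) / n_id,
                  np.ones(n_expr) / n_expr,
                  np.ones(n_vis) / n_vis)
print(mesh.shape)  # (12,)
```

Because the contraction is linear in each weight vector separately, identity, expression, and speech content can be varied independently, which is what lets a single scanned mesh (fixing the identity weights) be animated over new text and emotions.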
@inproceedings{Wampler:2007:DES,
author = {Kevin Wampler and Daichi Sasaki and Li Zhang and Zoran Popović},
title = {Dynamic, Expressive Speech Animation From a Single Mesh},
booktitle = {Symposium on Computer Animation},
pages = {53--62},
month = aug,
year = {2007},
}