eFASE: Expressive Facial Animation Synthesis and Editing with Phoneme-Isomap Controls
Zhigang Deng, Ulrich Neumann
Symposium on Computer Animation, September 2006, pp. 251--260.
Abstract: This paper presents a novel data-driven system for expressive facial animation synthesis and editing. Given novel phoneme-aligned speech input and its emotion modifiers (specifications), this system automatically generates expressive facial animation by concatenating captured motion data while animators establish constraints and goals. A constrained dynamic programming algorithm is used to search for best-matched captured motion nodes by minimizing a cost function. Users optionally specify "hard constraints" (motion-node constraints for expressing phoneme utterances) and "soft constraints" (emotion modifiers) to guide the search process. Users can also edit the processed facial motion node database by inserting and deleting motion nodes via a novel phoneme-Isomap interface. Novel facial animation synthesis experiments and objective trajectory comparisons between synthesized facial motion and captured motion demonstrate that this system is effective for producing realistic expressive facial animations.
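The search procedure summarized in the abstract is, at its core, a constrained dynamic-programming (Viterbi-style) pass over candidate motion nodes, with emotion modifiers acting as soft penalties and user-pinned motion nodes acting as hard pruning constraints. The sketch below illustrates that idea only; the node structure, cost terms, weights, and function names are assumptions made for illustration, not the authors' implementation.

import math
from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple

@dataclass
class MotionNode:
    node_id: int
    phoneme: str                 # phoneme label of this captured segment
    emotion: str                 # emotion label of the capture session
    features: List[float]        # e.g. end-frame facial marker positions (assumed)

def observation_cost(node: MotionNode, phoneme: str, emotion: Optional[str]) -> float:
    """Soft-constraint cost: penalize phoneme and emotion mismatches (weights assumed)."""
    cost = 0.0 if node.phoneme == phoneme else 10.0
    if emotion is not None and node.emotion != emotion:
        cost += 2.0              # emotion modifier is a soft constraint: prefer, not require
    return cost

def transition_cost(a: MotionNode, b: MotionNode) -> float:
    """Smoothness cost between consecutive motion nodes (Euclidean distance here)."""
    return math.dist(a.features, b.features)

def synthesize(phonemes: List[str],
               emotions: List[Optional[str]],
               database: List[MotionNode],
               hard_constraints: Dict[int, int] = None) -> List[int]:
    """Return a minimum-cost node-id sequence for the phoneme-aligned input.

    hard_constraints maps a phoneme index to the node_id the user pinned there
    (the 'motion-node constraints' of the abstract)."""
    hard_constraints = hard_constraints or {}
    nodes = {n.node_id: n for n in database}
    # best[t][nid] = (accumulated cost, predecessor node_id)
    best: List[Dict[int, Tuple[float, Optional[int]]]] = []
    for t, (ph, emo) in enumerate(zip(phonemes, emotions)):
        layer: Dict[int, Tuple[float, Optional[int]]] = {}
        for n in database:
            if t in hard_constraints and n.node_id != hard_constraints[t]:
                continue         # hard constraint: only the pinned node is admissible
            obs = observation_cost(n, ph, emo)
            if t == 0:
                layer[n.node_id] = (obs, None)
            else:
                cost, prev = min(
                    (c + transition_cost(nodes[pid], n) + obs, pid)
                    for pid, (c, _) in best[t - 1].items())
                layer[n.node_id] = (cost, prev)
        best.append(layer)
    # Backtrack from the cheapest node in the final layer.
    nid = min(best[-1], key=lambda k: best[-1][k][0])
    path = [nid]
    for t in range(len(best) - 1, 0, -1):
        nid = best[t][nid][1]
        path.append(nid)
    return list(reversed(path))

A usage note under the same assumptions: passing, say, hard_constraints={3: 42} forces phoneme position 3 to use motion node 42, while supplying an emotion label per phoneme biases (but does not force) the selection toward nodes captured in that emotion.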
BibTeX format:
@inproceedings{Deng:2006:EEF,
  author = {Zhigang Deng and Ulrich Neumann},
  title = {eFASE: Expressive Facial Animation Synthesis and Editing with Phoneme-Isomap Controls},
  booktitle = {Symposium on Computer Animation},
  pages = {251--260},
  month = sep,
  year = {2006},
}