Expressive speech-driven facial animation
Yong Cao, Wen C. Tien, Petros Faloutsos, Frédéric Pighin
In ACM Transactions on Graphics, 24(4), October 2005.
Abstract: Speech-driven facial motion synthesis is a well-explored research topic. However, little has been done to model expressive visual behavior during speech. We address this issue using a machine-learning approach that relies on a database of speech-related high-fidelity facial motions. From this training set, we derive a generative model of expressive facial motion that incorporates emotion control while maintaining accurate lip-synching. The emotional content of the input speech can be manually specified by the user or automatically extracted from the audio signal using a Support Vector Machine classifier.
Keywords: Facial animation, expression synthesis, independent component analysis, lip synching
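
The abstract outlines two learned components: a generative model of expressive facial motion (the keywords point to independent component analysis) and a Support Vector Machine that labels the emotional content of the input speech from the audio signal. The sketch below is a rough, hypothetical illustration of both steps using scikit-learn; the feature representations, emotion label set, component count, and kernel are assumptions made for the example, not the paper's actual recipe.

import numpy as np
from sklearn.decomposition import FastICA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# --- Stand-in data; the paper trains on captured high-fidelity facial
# --- motion, which is not reproduced here.
motion_frames = rng.laplace(size=(1000, 60))  # e.g. stacked marker coordinates
audio_features = rng.normal(size=(200, 13))   # e.g. per-utterance MFCC means
emotion_labels = rng.choice(["neutral", "happy", "angry", "sad"], size=200)

# 1) Decompose the motion data into independent components, in the spirit
#    of the ICA keyword: separate sources that can later be recombined
#    when synthesizing new expressive motion.
ica = FastICA(n_components=10, random_state=0)
sources = ica.fit_transform(motion_frames)      # per-frame source activations
reconstructed = ica.inverse_transform(sources)  # motion rebuilt from sources

# 2) Train a multi-class SVM that classifies an utterance's emotional
#    content from its audio features (scikit-learn's SVC handles the
#    multi-class case via one-vs-one voting).
classifier = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
classifier.fit(audio_features, emotion_labels)

# Classify a new utterance; the predicted emotion would then steer the
# expressive component of the synthesized facial motion.
print(classifier.predict(rng.normal(size=(1, 13))))
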
@article{Cao:2005:ESF,
author = {Yong Cao and Wen C. Tien and Petros Faloutsos and Frédéric Pighin},
title = {Expressive speech-driven facial animation},
journal = {ACM Transactions on Graphics},
volume = {24},
number = {4},
pages = {1283--1302},
month = oct,
year = {2005},
}