Animating blendshape faces by cross-mapping motion capture data
Zhigang Deng, Pei-Ying Chiang, Pamela Fox, Ulrich Neumann
Symposium on Interactive 3D Graphics and Games, March 2006, pp. 43--48.
Abstract: Animating 3D faces to achieve compelling realism is a challenging task in the entertainment industry. Previously proposed face transfer approaches generally require a high-quality animated source face in order to transfer its motion to new 3D faces. In this work, we present a semi-automatic technique for directly animating popular 3D blendshape face models by mapping facial motion capture data spaces to 3D blendshape face spaces. Sparse markers on the face of a human subject are captured by a motion capture system while a video camera simultaneously records his/her front face; we then carefully select a few motion capture frames and the accompanying video frames as reference mocap-video pairs. Users manually tune blendshape weights to perceptually match the animated blendshape face models with the reference facial images (the reference mocap-video pairs), creating reference mocap-weight pairs. Finally, Radial Basis Function (RBF) regression is used to map any new facial motion capture frame to blendshape weights based on the reference mocap-weight pairs. Our results demonstrate that this technique animates blendshape face models efficiently, while offering generality and flexibility.
@inproceedings{Deng:2006:ABF,
author = {Zhigang Deng and Pei-Ying Chiang and Pamela Fox and Ulrich Neumann},
title = {Animating blendshape faces by cross-mapping motion capture data},
booktitle = {Symposium on Interactive 3D Graphics and Games},
pages = {43--48},
month = mar,
year = {2006},
}