A framework for locally retargeting and rendering facial performance
Ko-Yun Liu, Wan-Chun Ma, Chun-Fa Chang, Chuan-Chang Wang, Paul Debevec
In Computer Animation and Virtual Worlds, 22(2-3), 2011.
Abstract: We present a facial motion retargeting method that enables the control of a blendshape rig according to marker-based motion capture data. The main purpose of the proposed technique is to allow a blendshape rig to create facial expressions that conform best to the current motion capture input, regardless of the underlying blendshape poses. In other words, even if all of the blendshape poses comprise only symmetrical facial expressions, our method can still create asymmetrical expressions without physically splitting any of them into more local blendshape poses. An automatic segmentation technique based on an analysis of facial motion is introduced to create facial regions for local retargeting. We also show that it is possible to blend normal maps for rendering in the same framework. Rendering with the blended normal map significantly improves surface appearance and detail.
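The core ideas in the abstract can be sketched in code. The following is a minimal illustration, not the authors' implementation: it solves blendshape weights for one facial region from captured marker offsets via least squares (clamping negative weights as a crude stand-in for a proper constrained solve), then blends per-pose normal maps with the same weights. All function names, array shapes, and the clamping step are assumptions for illustration.

```python
import numpy as np

def solve_region_weights(pose_deltas, target_delta):
    """Solve per-region blendshape weights from mocap markers.

    pose_deltas: (num_poses, 3 * num_markers) array; each row holds one
        blendshape pose's marker offsets from the neutral pose, restricted
        to the markers of a single facial region.
    target_delta: (3 * num_markers,) captured marker offsets for the region.

    Returns a (num_poses,) weight vector. Negative weights are clamped to
    zero here as a simplification; a constrained solver would be used in
    practice.
    """
    w, *_ = np.linalg.lstsq(pose_deltas.T, target_delta, rcond=None)
    return np.clip(w, 0.0, None)

def blend_normal_maps(neutral, pose_normals, weights):
    """Blend per-pose normal maps (H, W, 3) by the solved weights.

    Accumulates each pose's offset from the neutral normal map, then
    renormalizes so every texel is a unit normal again.
    """
    n = neutral.astype(float).copy()
    for w, p in zip(weights, pose_normals):
        n += w * (p - neutral)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)
```

Because the weights are solved independently per region, poses that are themselves symmetric can still combine into an asymmetric expression: the left and right halves of the face each receive their own weight vector.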
Keyword(s): expression synthesis, facial retargeting, normal map blending
Article URL: http://dx.doi.org/10.1002/cav.404
BibTeX format:
@article{Liu:2011:AFF,
  author = {Ko-Yun Liu and Wan-Chun Ma and Chun-Fa Chang and Chuan-Chang Wang and Paul Debevec},
  title = {A framework for locally retargeting and rendering facial performance},
  journal = {Computer Animation and Virtual Worlds},
  volume = {22},
  number = {2-3},
  pages = {159--167},
  year = {2011},
}