Controllable high-fidelity facial performance transfer
Feng Xu, Jinxiang Chai, Yilong Liu, Xin Tong
In ACM Transactions on Graphics, 33(4), July 2014.
Abstract: Recent technological advances in facial capture have made it possible to acquire high-fidelity 3D facial performance data with stunningly high spatial-temporal resolution. Current methods for facial expression transfer, however, are often limited to large-scale facial deformation. This paper introduces a novel facial expression transfer and editing technique for high-fidelity facial performance data. The key idea of our approach is to decompose high-fidelity facial performances into high-level facial feature lines, large-scale facial deformation and fine-scale motion details and transfer them appropriately to reconstruct the retargeted facial animation in an efficient optimization framework. The system also allows the user to quickly modify and control the retargeted facial sequences in the spatial-temporal domain. We demonstrate the power of our approach by transferring and editing high-fidelity facial animation data from high-resolution source models to a wide range of target models, including both human faces and non-human faces such as "monster" and "dog".
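The abstract's key idea is a decomposition of facial performance data into large-scale deformation and fine-scale motion detail before transfer. Below is a minimal, illustrative sketch (not the authors' implementation) of that kind of scale separation, assuming a simple uniform-Laplacian smoothing to split per-vertex displacements; the function name and parameters are hypothetical and chosen only for illustration.

```python
import numpy as np

def decompose_displacements(displacements, neighbors, iterations=20, lam=0.5):
    """Split per-vertex displacements (V x 3) into large-scale and fine-scale parts.

    displacements : (V, 3) array of deformed-minus-neutral vertex offsets
    neighbors     : list of one-ring neighbor index lists, one per vertex
    """
    large_scale = displacements.copy()
    for _ in range(iterations):
        # Uniform Laplacian smoothing damps high-frequency components,
        # leaving an approximation of the large-scale deformation.
        averaged = np.array([
            large_scale[n].mean(axis=0) if n else large_scale[i]
            for i, n in enumerate(neighbors)
        ])
        large_scale += lam * (averaged - large_scale)
    # The residual captures fine-scale motion detail (wrinkles, skin sliding).
    fine_scale = displacements - large_scale
    return large_scale, fine_scale
```

In the spirit of the paper's pipeline, the two components could then be retargeted separately, e.g. the large-scale deformation mapped to the target face and the fine-scale residual re-applied on top, though the actual method also uses high-level facial feature lines and an optimization framework not sketched here.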
@article{Xu:2014:CHF,
author = {Feng Xu and Jinxiang Chai and Yilong Liu and Xin Tong},
title = {Controllable high-fidelity facial performance transfer},
journal = {ACM Transactions on Graphics},
volume = {33},
number = {4},
pages = {42:1--42:13},
month = jul,
year = {2014},
}