Dynamic 3D avatar creation from hand-held video input
Alexandru Eugen Ichim, Sofien Bouaziz, Mark Pauly
In ACM Transactions on Graphics (TOG), 34(4), August 2015.
Abstract: We present a complete pipeline for creating fully rigged, personalized 3D facial avatars from hand-held video. Our system faithfully recovers facial expression dynamics of the user by adapting a blendshape template to an image sequence of recorded expressions using an optimization that integrates feature tracking, optical flow, and shape from shading. Fine-scale details such as wrinkles are captured separately in normal maps and ambient occlusion maps. From this user- and expression-specific data, we learn a regressor for on-the-fly detail synthesis during animation to enhance the perceptual realism of the avatars. Our system demonstrates that the use of appropriate reconstruction priors yields compelling face rigs even with a minimalistic acquisition system and limited user assistance. This facilitates a range of new applications in computer animation and consumer-level online communication based on personalized avatars. We present real-time application demos to validate our method.
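Note: the abstract's core representation is a delta-blendshape rig, i.e. a neutral face plus a weighted sum of expression offsets. The sketch below is only an illustrative NumPy implementation of that standard model, not the authors' code; the array shapes, function name, and toy data are assumptions made for the example.

import numpy as np

def evaluate_blendshapes(neutral, blendshapes, weights):
    """Evaluate the standard delta-blendshape model
    F(w) = B0 + sum_i w_i * (B_i - B0).

    neutral:      (V, 3) neutral face mesh B0
    blendshapes:  (K, V, 3) expression shapes B_i
    weights:      (K,) expression weights, typically in [0, 1]
    """
    deltas = blendshapes - neutral[None, :, :]        # (K, V, 3) per-shape offsets
    return neutral + np.tensordot(weights, deltas, axes=1)

# Toy usage with random data standing in for a real, personalized rig.
rng = np.random.default_rng(0)
V, K = 100, 4                       # 100 vertices, 4 expression shapes (hypothetical sizes)
B0 = rng.normal(size=(V, 3))
B = B0[None] + 0.01 * rng.normal(size=(K, V, 3))
w = np.array([0.5, 0.0, 0.2, 0.0])
face = evaluate_blendshapes(B0, B, w)
print(face.shape)                   # (100, 3)

In the paper's pipeline, such a template rig is adapted to the recorded user, and the fine-scale detail (wrinkle normal maps, ambient occlusion) is synthesized on top of this coarse blendshape output at animation time.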
@article{10.1145-2766974,
  author    = {Alexandru Eugen Ichim and Sofien Bouaziz and Mark Pauly},
  title     = {Dynamic 3D avatar creation from hand-held video input},
  journal   = {ACM Transactions on Graphics (TOG)},
  volume    = {34},
  number    = {4},
  articleno = {45},
  month     = aug,
  year      = {2015},
}