Facial performance illumination transfer from a single video using interpolation in non-skin region
Hongyu Wu, Xiaowu Chen, Mengxia Yang, Zhihong Fang
In Computer Animation and Virtual Worlds, 24(3-4), 2013.
Abstract: This paper proposes a novel video-based method to transfer the illumination from a single reference facial performance video to a target one taken under nearly uniform illumination. We first filter the key frames of the reference and target face videos with an edge-preserving filter. Then, the illumination component of each reference key frame is extracted by dividing the filtered reference key frame by the corresponding filtered target key frame in the skin region. Differences in the non-skin region caused by differing expressions between the reference and target faces may introduce artifacts. Therefore, we interpolate the illumination component of the non-skin region from that of the surrounding skin region to ensure spatial smoothness and consistency. After that, the illumination components of key frames are propagated to non-key frames to ensure temporal consistency between adjacent frames. We obtain convincing results by transferring the illumination effects of a single reference facial performance video to a target one while preserving spatial and temporal consistency.
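The per-frame portion of the pipeline described above can be sketched roughly as follows. This is an illustrative approximation, not the authors' implementation: it substitutes a Gaussian blur for the paper's edge-preserving filter and a nearest-skin-pixel fill for the paper's interpolation scheme, and the function and parameter names (`transfer_illumination`, `skin_mask`) are hypothetical.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, distance_transform_edt

def transfer_illumination(ref, tgt, skin_mask, sigma=2.0):
    """Transfer illumination from a reference frame to a target frame.

    ref, tgt   : float grayscale frames of equal shape
    skin_mask  : boolean array, True where the pixel belongs to skin
    """
    # Smooth both frames (stand-in for the paper's edge-preserving filter).
    ref_s = gaussian_filter(ref, sigma)
    tgt_s = gaussian_filter(tgt, sigma)

    # Illumination component: per-pixel ratio, computed only in the
    # skin region where the division is meaningful.
    ratio = np.ones_like(ref_s)
    valid = skin_mask & (tgt_s > 1e-6)
    ratio[valid] = ref_s[valid] / tgt_s[valid]

    # Fill the non-skin region from the nearest skin pixel -- a simple
    # stand-in for the paper's interpolation from the surrounding skin.
    _, idx = distance_transform_edt(~valid, return_indices=True)
    ratio_filled = ratio[idx[0], idx[1]]

    # Relight the target frame with the completed illumination component.
    return tgt * ratio_filled
```

In the paper this per-frame step runs only on key frames, whose illumination components are then propagated to the in-between frames for temporal consistency; that propagation step is not sketched here.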
Keyword(s): illumination transfer, facial performance, single video, propagation, interpolation
@article{Wu:2013:FPI,
author = {Hongyu Wu and Xiaowu Chen and Mengxia Yang and Zhihong Fang},
title = {Facial performance illumination transfer from a single video using interpolation in non-skin region},
journal = {Computer Animation and Virtual Worlds},
volume = {24},
number = {3-4},
pages = {255--263},
year = {2013},
}