Dynamic hair manipulation in images and videos
Menglei Chai, Lvdi Wang, Yanlin Weng, Xiaogang Jin, Kun Zhou
In ACM Transactions on Graphics, 32(4), July 2013.
Abstract: This paper presents a single-view hair modeling technique for generating visually and physically plausible 3D hair models with modest user interaction. By solving an unambiguous 3D vector field explicitly from the image and adopting an iterative hair generation algorithm, we can create hair models that not only visually match the original input very well but also possess physical plausibility (e.g., having strand roots fixed on the scalp and preserving the length and continuity of real strands in the image as much as possible). The latter property enables us to manipulate hair in many new ways that were previously very difficult with a single image, such as dynamic simulation or interactive hair shape editing. We further extend the modeling approach to handle simple video input, and generate dynamic 3D hair models. This allows users to manipulate hair in a video or transfer styles from images to videos.
@article{Chai:2013:DHM,
author = {Menglei Chai and Lvdi Wang and Yanlin Weng and Xiaogang Jin and Kun Zhou},
title = {Dynamic hair manipulation in images and videos},
journal = {ACM Transactions on Graphics},
volume = {32},
number = {4},
pages = {75:1--75:8},
month = jul,
year = {2013},
}