Articulated Mesh Animation from Multi-view Silhouettes
Daniel Vlasic, Ilya Baran, Wojciech Matusik, Jovan Popović
In ACM Transactions on Graphics, 27(3), August 2008.
Abstract: Details in mesh animations are difficult to generate, but they have a great impact on visual quality. In this work, we demonstrate a practical software system for capturing such details from multi-view video recordings. Given a stream of synchronized video images that record a human performance from multiple viewpoints, and an articulated template of the performer, our system captures the motion of both the skeleton and the shape. The output mesh animation is enhanced with the details observed in the image silhouettes. For example, a performance in casual loose-fitting clothes will generate mesh animations with flowing garment motions. We accomplish this with a fast pose tracking method followed by nonrigid deformation of the template to fit the silhouettes. The entire process takes less than sixteen seconds per frame and requires no markers or texture cues. The captured meshes are in full correspondence, making them readily usable for editing operations including texturing, deformation transfer, and deformation model learning.
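The abstract describes a two-stage, per-frame pipeline: fast skeletal pose tracking followed by nonrigid deformation of the articulated template so its projected contours match the multi-view image silhouettes. The Python sketch below only illustrates that structure under assumed interfaces; every function name and data type here is a hypothetical placeholder (the stage bodies are stubs), not the authors' implementation.

    # Hypothetical sketch of the per-frame pipeline from the abstract.
    # All names and signatures are assumptions for illustration only.
    from dataclasses import dataclass
    from typing import List
    import numpy as np

    @dataclass
    class Frame:
        silhouettes: List[np.ndarray]  # one binary mask per calibrated camera view

    def track_pose(prev_pose: np.ndarray, frame: Frame) -> np.ndarray:
        # Stage 1 (stub): estimate skeletal joint parameters that best
        # explain the observed silhouettes, starting from the previous pose.
        return prev_pose

    def pose_template(template_vertices: np.ndarray, pose: np.ndarray) -> np.ndarray:
        # Stub: articulate the template mesh with the tracked skeleton.
        return template_vertices.copy()

    def fit_silhouettes(vertices: np.ndarray, frame: Frame) -> np.ndarray:
        # Stage 2 (stub): nonrigidly pull vertices toward the image
        # silhouettes to capture details such as loose clothing.
        return vertices

    def process_sequence(template_vertices: np.ndarray,
                         initial_pose: np.ndarray,
                         frames: List[Frame]) -> List[np.ndarray]:
        # Every output mesh reuses the template connectivity, so the
        # captured meshes stay in full vertex correspondence across frames.
        pose, meshes = initial_pose, []
        for frame in frames:
            pose = track_pose(pose, frame)
            posed = pose_template(template_vertices, pose)
            meshes.append(fit_silhouettes(posed, frame))
        return meshes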
Keyword(s): deformation, motion capture
Article URL: http://doi.acm.org/10.1145/1360612.1360696
BibTeX format:
@article{Vlasic:2008:AMA,
  author = {Daniel Vlasic and Ilya Baran and Wojciech Matusik and Jovan Popović},
  title = {Articulated Mesh Animation from Multi-view Silhouettes},
  journal = {ACM Transactions on Graphics},
  volume = {27},
  number = {3},
  pages = {97:1--97:9},
  month = aug,
  year = {2008},
}