High-quality video view interpolation using a layered representation
C. Lawrence Zitnick, Sing Bing Kang, Matthew Uyttendaele, Simon Winder, Richard Szeliski
In ACM Transactions on Graphics, 23(3), August 2004.
Abstract: The ability to interactively control viewpoint while watching a video is an exciting application of image-based rendering. The goal of our work is to render dynamic scenes with interactive viewpoint control using a relatively small number of video cameras. In this paper, we show how high-quality video-based rendering of dynamic scenes can be accomplished using multiple synchronized video streams combined with novel image-based modeling and rendering algorithms. Once these video streams have been processed, we can synthesize any intermediate view between cameras at any time, with the potential for space-time manipulation. In our approach, we first use a novel color segmentation-based stereo algorithm to generate high-quality photoconsistent correspondences across all camera views. Mattes for areas near depth discontinuities are then automatically extracted to reduce artifacts during view synthesis. Finally, a novel temporal two-layer compressed representation that handles matting is developed for rendering at interactive rates.
Keyword(s): Computer Vision, Dynamic Scenes, Image-Based Rendering
@article{Zitnick:2004:HVV,
author = {C. Lawrence Zitnick and Sing Bing Kang and Matthew Uyttendaele and Simon Winder and Richard Szeliski},
title = {High-quality video view interpolation using a layered representation},
journal = {ACM Transactions on Graphics},
volume = {23},
number = {3},
pages = {600--608},
month = aug,
year = {2004},
}