Racking focus and tracking focus on live video streams: a stereo solution
Zhan Yu, Xuan Yu, Christopher Thorpe, Scott Grauer-Gray, Feng Li, Jingyi Yu
In The Visual Computer, 30(1):45-58, January 2014.
Abstract: The ability to produce dynamic depth-of-field effects in live video streams was until recently a capability unique to movie cameras. In this paper, we present a computational camera solution, coupled with real-time GPU processing, that produces dynamic depth-of-field effects at runtime. We first construct a hybrid-resolution stereo camera from a high-resolution/low-resolution camera pair. We recover a low-resolution disparity map of the scene using GPU-based belief propagation, then upsample it via fast cross/joint bilateral upsampling. With the recovered high-resolution disparity map, we warp the high-resolution video stream to nearby viewpoints to synthesize a light field of the scene, exploiting parallel processing and atomic operations on the GPU to resolve visibility when multiple pixels warp to the same image location. Finally, we render racking focus and tracking focus effects from the synthesized light field. All processing stages are mapped onto NVIDIA's CUDA architecture, and the system produces both effects at 640×480 resolution and 15 fps.
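The visibility-resolution step described in the abstract, where several source pixels can warp to the same target location, lends itself to a short CUDA illustration. The sketch below is not the authors' code; it assumes a rectified stereo setup in which the warp is a purely horizontal, disparity-proportional shift, and it packs an inverted-disparity key together with the pixel color into a single 64-bit word so that one atomicMin keeps the nearest pixel. All identifiers (warpToView, pack, baselineShift) are hypothetical.

// Minimal sketch (not the authors' implementation) of forward-warping
// with atomic visibility resolution on the GPU.
#include <cuda_runtime.h>
#include <cstdint>

// Pack an inverted-disparity key (high 32 bits) with the RGBA color
// (low 32 bits). Larger disparity means nearer to the camera, so 1/d
// gives the nearest surface the smallest key; for positive floats the
// IEEE-754 bit pattern preserves numeric ordering.
__device__ unsigned long long pack(float disparity, uint32_t rgba) {
    uint32_t key = __float_as_uint(1.0f / (disparity + 1e-6f));
    return ((unsigned long long)key << 32) | rgba;
}

__global__ void warpToView(const float* disp, const uint32_t* color,
                           unsigned long long* outBuf,
                           int width, int height, float baselineShift) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    int src = y * width + x;
    float d = disp[src];
    // Horizontal shift proportional to disparity for the target viewpoint
    // (assumed warp model; baselineShift selects the synthesized view).
    int xt = x + (int)lroundf(baselineShift * d);
    if (xt < 0 || xt >= width) return;

    // When several source pixels map to the same target location, the
    // 64-bit atomicMin keeps the smallest key, i.e. the nearest pixel.
    atomicMin(&outBuf[y * width + xt], pack(d, color[src]));
}

In use, outBuf would first be cleared to 0xFF bytes with cudaMemset so that empty locations hold the maximal key; a second pass reads the low 32 bits back out as color, and disocclusion holes would still need filling. Note that the 64-bit atomicMin requires a GPU of compute capability 3.5 or newer.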
Article URL: http://dx.doi.org/10.1007/s00371-013-0778-4
BibTeX format:
@article{Yu:2014:RFA,
  author = {Zhan Yu and Xuan Yu and Christopher Thorpe and Scott Grauer-Gray and Feng Li and Jingyi Yu},
  title = {Racking focus and tracking focus on live video streams: a stereo solution},
  journal = {The Visual Computer},
  volume = {30},
  number = {1},
  pages = {45--58},
  month = jan,
  year = {2014},
}