Image-Based Rendering of Dynamic Scenes

Sing Bing Kang


Abstract

The ability to interactively control the viewpoint while watching a video is an exciting application of image-based rendering. Our goal is high-quality rendering of dynamic scenes with interactive viewpoint control using a relatively small number of video cameras. In this talk, I will describe how we achieved this goal using multiple synchronized video streams combined with novel image-based modeling and rendering algorithms. Once these video streams have been processed, we can synthesize any intermediate view between cameras at any time, with the potential for space-time manipulation. In our approach, we first use a color segmentation-based stereo algorithm to generate high-quality, photo-consistent correspondences across all camera views. Mattes for areas near depth discontinuities are then automatically extracted to reduce artifacts during view synthesis. Finally, we develop a new temporal two-layer compressed representation that handles matting, enabling rendering at interactive rates. This work was done with Larry Zitnick, Matthew Uyttendaele, Simon Winder, and Richard Szeliski, and was presented at SIGGRAPH'04.
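
As a rough illustration of the rendering stage, the sketch below shows how a two-layer representation (a main color layer plus a boundary matte layer with per-pixel alpha) might be composited and blended between two neighboring cameras. This is a minimal sketch under stated assumptions, not the actual SIGGRAPH'04 implementation: the function names and array layouts are hypothetical, the layers are assumed to be already warped into the virtual viewpoint, and a simple linear blend stands in for the system's view-dependent weighting.

import numpy as np

def composite_two_layer(main_rgb, boundary_rgb, boundary_alpha):
    # Composite the boundary (matte) layer over the main layer.
    # main_rgb, boundary_rgb: HxWx3 float arrays in [0, 1];
    # boundary_alpha: HxW opacity of the matte layer in [0, 1].
    a = boundary_alpha[..., None]
    return boundary_rgb * a + main_rgb * (1.0 - a)

def interpolate_views(left, right, t):
    # Blend two neighboring camera views into an intermediate view.
    # left/right: dicts with 'main', 'boundary', 'alpha' arrays,
    # assumed pre-warped into the virtual viewpoint using their
    # per-pixel depth maps (hypothetical preprocessing step).
    # t in [0, 1] places the virtual camera between left (0) and right (1).
    left_img = composite_two_layer(left['main'], left['boundary'], left['alpha'])
    right_img = composite_two_layer(right['main'], right['boundary'], right['alpha'])
    # Weight each source view by its proximity to the virtual camera.
    return (1.0 - t) * left_img + t * right_img

For example, with the two layers of adjacent cameras warped to the desired viewpoint, interpolate_views(left, right, 0.5) yields the midpoint view, and sliding t continuously moves the virtual camera between them.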

