Global Matching Criterion and Color Segmentation Based Stereo

Hai Tao


Abstract

Accurate estimation of dense 3D scene structure is crucial for applications such as image-based 3D modeling and new view rendering. In this talk, I will present a new analysis-by-synthesis computational framework for stereo vision. It is designed to achieve the following goals: (1) enforcing global visibility constraints, (2) obtaining reliable depth for depth boundaries and thin structures, (3) obtaining correct depth for textureless regions, and (4) hypothesizing correct depth for unmatched regions. In contrast with approaches that rely on local matching measures and relaxation, the framework employs depth- and visibility-based rendering within a global matching criterion to compute depth. A color-segmentation-based depth representation guarantees smoothness in textureless regions, and hypothesizing depth from neighboring segments enables propagation of correct depth and produces reasonable depth values for unmatched regions. A practical algorithm that integrates all these aspects is presented in this talk. Comparative experimental results are shown for real images, and results on new view rendering from multiple depth maps are demonstrated.
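To make the analysis-by-synthesis idea concrete, the following Python sketch shows one simplified way such a framework can be organized; it is not the talk's algorithm, and all names, the constant-disparity-per-segment model, and the candidate-hypothesis scheme are illustrative assumptions. Each color segment carries a disparity hypothesis, hypotheses borrowed from other segments are tried, and a hypothesis is kept only if forward-warping the reference image into the other view with a z-buffer (visibility) lowers a global matching cost.

# Illustrative sketch only: a color-segmentation-based stereo loop driven by a
# global matching criterion. Names and the constant-disparity-per-segment
# model are assumptions, not the method presented in the talk.
import numpy as np

def global_matching_cost(ref, other, disparity, dx=1.0):
    """Render the reference view into the other view by forward warping with a
    z-buffer (nearer surfaces win), then sum absolute differences; pixels that
    receive no prediction pay a fixed unmatched penalty."""
    h, w = ref.shape
    pred = np.full((h, w), np.nan)
    zbuf = np.full((h, w), -np.inf)
    ys, xs = np.mgrid[0:h, 0:w]
    xt = np.round(xs - dx * disparity).astype(int)   # horizontal shift by disparity
    valid = (xt >= 0) & (xt < w)
    for y, x0, x1 in zip(ys[valid], xs[valid], xt[valid]):
        if disparity[y, x0] > zbuf[y, x1]:            # visibility: keep nearest surface
            zbuf[y, x1] = disparity[y, x0]
            pred[y, x1] = ref[y, x0]
    seen = ~np.isnan(pred)
    return np.abs(pred[seen] - other[seen]).sum() + 1e3 * (~seen).sum()

def segment_stereo(ref, other, segments, d_init=1.0, iters=3):
    """Assign one disparity per color segment; iteratively let each segment
    adopt a hypothesis from another segment (or a small perturbation) when it
    lowers the global matching cost."""
    labels = np.unique(segments)
    d_seg = {s: float(d_init) for s in labels}

    def disparity_map():
        d = np.zeros(ref.shape, dtype=float)
        for s in labels:
            d[segments == s] = d_seg[s]
        return d

    cost = global_matching_cost(ref, other, disparity_map())
    for _ in range(iters):
        for s in labels:
            candidates = {d_seg[n] for n in labels if n != s} | {d_seg[s] + 1, d_seg[s] - 1}
            for cand in candidates:
                old = d_seg[s]
                d_seg[s] = cand
                new_cost = global_matching_cost(ref, other, disparity_map())
                if new_cost < cost:
                    cost = new_cost          # hypothesis improves the global criterion: keep it
                else:
                    d_seg[s] = old           # otherwise revert
    return disparity_map(), cost

The point of the sketch is the evaluation step: a segment's depth hypothesis is judged by how well the whole rendered prediction, with occlusions resolved by the z-buffer, matches the other view, rather than by a local window score. This is what allows textureless or unmatched segments to inherit plausible depth from other segments instead of being estimated in isolation.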

I will also briefly review the research activities at the Sarnoff Vision Technology Laboratory. Additional material on dynamic motion layer analysis and a sampling method for tracking multiple objects will be presented if time allows.
