Parag Havaldar and Misuen Lee


Synthesizing the image of a 3-D scene as it would be captured by a camera from an arbitrary viewpoint is a central problem in Computer Graphics. Given a complete 3-D model, it is possible to render the scene from any viewpoint, but generating such a model is a tedious, error-prone and labor-intensive task. Here, we propose to bypass the model generation phase altogether, and to generate images of a 3-D scene from any novel viewpoint using prestored images. Furthermore, unlike previously presented methods, we completely avoid inferring and reasoning in 3-D by using projective invariants derived from corresponding points in the prestored images. The correspondence between features is established off-line in a semi-automated way with a simple interface. It is then possible to generate wireframe animation in real time on a standard computing platform. Well-understood texture mapping methods are then applied to realistically render new views from the prestored ones.
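As an illustration of the kind of quantity involved (not the authors' actual formulation), the cross-ratio of four collinear points is the classic projective invariant: it is unchanged by any projective transformation, which is what makes it possible to relate corresponding points across views without reconstructing 3-D structure. A minimal sketch, with hypothetical function names:

```python
# Sketch only: the cross-ratio as a projective invariant.
# All names (cross_ratio, homography_1d) are illustrative, not from the paper.

def cross_ratio(a, b, c, d):
    """Cross-ratio of four collinear points, given as scalar parameters along the line."""
    return ((a - c) * (b - d)) / ((a - d) * (b - c))

def homography_1d(x, p, q, r, s):
    """A 1-D projective transformation x -> (p*x + q) / (r*x + s)."""
    return (p * x + q) / (r * x + s)

pts = [0.0, 1.0, 2.0, 4.0]
before = cross_ratio(*pts)
after = cross_ratio(*(homography_1d(x, 2.0, 1.0, 0.5, 3.0) for x in pts))
assert abs(before - after) < 1e-9  # the cross-ratio survives the projective map
```

Because such invariants can be computed directly from image measurements, corresponding points in the prestored views constrain where those points must appear in a novel view, with no explicit 3-D model in between.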

The method proposed here should allow the integration of computer-generated and real imagery, and walkthroughs of realistic virtual environments. We illustrate our approach on synthetic and real indoor and outdoor images.

Example Result:

Original Views:
Synthesized Views: