Multi-View 3-D Object Description
with Uncertain Reasoning and Machine Learning

ZuWhan Kim


Abstract

Acquiring a 3-D object description from one or more images has been a key goal of computer vision. A central issue in obtaining a good object description is decision making at various stages with diverse and uncertain evidence. Most previous work on 3-D object description has focused on feature grouping, with decisions usually made by ad hoc operators.

I first present two experiments in applying uncertain reasoning and machine learning to 3-D object description. The first experiment, with a monocular building detection and description system, verifies the idea that uncertain reasoning and learning yield better results with less parameter-tuning effort. In the second experiment, I apply Bayesian inference to a multi-view, multi-modal building description system, where the number of evidence inputs varies with the number of images used. I propose an expandable Bayesian network (EBN) for such situations. In the experimental results, the proposed method outperforms the alternatives.
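The core idea of an expandable network can be illustrated with a minimal sketch: a single binary hypothesis node receives one evidence node per image, and all evidence nodes share one conditional probability table (CPT), so the network grows with the number of views while the parameter count stays fixed. All names and probability values below are illustrative assumptions, not the thesis's actual model.

```python
# Minimal sketch of an "expandable" Bayesian network: a binary hypothesis
# node (rooftop valid / invalid) with N evidence nodes, one per image,
# all sharing the same CPT. The network expands with the number of
# images, but the shared CPT keeps the parameter count constant.
# Prior and CPT values here are made up for illustration.

PRIOR_VALID = 0.5                    # P(hypothesis is a real rooftop)
CPT = {                              # P(evidence | hypothesis), shared across images
    True:  {"supports": 0.8, "contradicts": 0.2},
    False: {"supports": 0.3, "contradicts": 0.7},
}

def posterior_valid(observations):
    """Posterior P(valid | observations) for any number of per-image inputs."""
    p_valid, p_invalid = PRIOR_VALID, 1.0 - PRIOR_VALID
    for obs in observations:         # one evidence node per image
        p_valid *= CPT[True][obs]
        p_invalid *= CPT[False][obs]
    return p_valid / (p_valid + p_invalid)

# The same shared CPT handles 2 or 5 images without retuning.
print(round(posterior_valid(["supports", "supports"]), 3))              # 0.877
print(round(posterior_valid(["supports"] * 4 + ["contradicts"]), 3))    # 0.935
```

Note that this sketch assumes the per-image evidence inputs are conditionally independent given the hypothesis; handling the dependence structure among views is part of what a full EBN formulation would address.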

Finally, I present the Automatic Building Extraction and Reconstruction System (ABERS). ABERS detects and describes complex buildings composed of flat or gabled polygonal rooftops. Despite the increased model complexity, the computation remains affordable through the use of multiple images and rough range data. Rooftop hypotheses are generated from 3-D features obtained from multiple images and verified against the range data. Information from these diverse sources (multiple images and range data) is combined at various levels by various methods, such as probabilistic height reasoning and hypothesis verification with expandable Bayesian networks. Experimental results on complex buildings show that the proposed approach is promising.


Maintained by Philippos Mordohai