Learning Highly Structured Motion Model for 3D Figure Tracking

Tao Zhao


Abstract

The talk will cover my work on unsupervised learning of a structured motion model from unsegmented 3D Mocap sequences, and on 3D tracking from regular monocular video sequences using the learnt motion model. It consists of two main parts: learning and tracking.

Our work addresses a class of structured motion composed of a number of building blocks, or basic movements, which we call primitives. The learning goal is to recover these primitives from unsegmented training data. This is done under an MDL (minimum description length) paradigm, via a two-step explicit optimization procedure.
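As a rough illustration (the talk's exact objective may differ), a generic MDL criterion trades off the cost of encoding the primitive set against the cost of encoding the data given those primitives and a segmentation:

$L(\Theta, S) = L(\Theta) + L(D \mid \Theta, S)$

where $\Theta$ is the set of primitives, $S$ is the segmentation of the training sequence, and $D$ is the Mocap data; both the primitives and the segmentation are chosen to minimize the total description length.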

3D figure tracking from a monocular video sequence is difficult for a number of reasons, mainly the high dimensionality of the pose space, the loss of depth information, and image noise. Some of these difficulties can be alleviated by a dynamical model, which serves as a prior. With the primitive segmentations, we can build an efficient mixture dynamical model for the entire motion (an individual dynamical model for each primitive, plus the transitions between them). We adapted the Condensation algorithm for tracking and successfully tracked sequences that are otherwise difficult to track without the dynamical model.
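The following Python sketch illustrates one Condensation (particle filtering) update with such a mixture dynamical model. The pose dimension, per-primitive dynamics, transition matrix, and image likelihood below are placeholders chosen for illustration, not the actual model from the talk.

# A minimal sketch (not the talk's implementation) of one Condensation
# update with a mixture dynamical model: each particle carries a primitive
# label, switches primitives via a transition matrix, and is propagated by
# that primitive's dynamics. All model quantities are illustrative.
import numpy as np

rng = np.random.default_rng(0)

D = 6          # pose dimension (placeholder; real figure models are larger)
K = 3          # number of motion primitives
N = 200        # number of particles

# Per-primitive linear dynamics x_t = A_k x_{t-1} + noise (placeholders)
A = [np.eye(D) * (0.9 + 0.05 * k) for k in range(K)]
noise_std = 0.05
# Primitive transition probabilities, as would be learned from segmented Mocap
T = np.full((K, K), 0.1 / (K - 1)) + np.eye(K) * (0.9 - 0.1 / (K - 1))

def likelihood(x):
    """Placeholder image likelihood; a real tracker would compare the
    projected 3D figure against image evidence in the current frame."""
    return np.exp(-0.5 * np.sum(x**2))

def condensation_step(particles, labels, weights):
    # 1. Resample particles in proportion to their weights
    idx = rng.choice(len(particles), size=N, p=weights / weights.sum())
    particles, labels = particles[idx], labels[idx]
    # 2. Sample each particle's next primitive from the transition matrix
    labels = np.array([rng.choice(K, p=T[k]) for k in labels])
    # 3. Propagate each particle with its primitive's dynamics plus noise
    particles = np.array([A[k] @ x + rng.normal(0, noise_std, D)
                          for x, k in zip(particles, labels)])
    # 4. Re-weight by the image likelihood
    weights = np.array([likelihood(x) for x in particles])
    return particles, labels, weights

particles = rng.normal(0, 0.1, (N, D))
labels = rng.integers(0, K, N)
weights = np.ones(N) / N
particles, labels, weights = condensation_step(particles, labels, weights)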

The talk also includes an overview of the state of the art in human motion capture and an introduction to the Condensation algorithm.

