ECCV 2008 Rehearsals
Chang Huang, Yuan Li and Qian Yu
1 Chang Huang, Title: Robust Object Tracking by Hierarchical Association of Detection Responses
Abstract: We present a detection-based three-level hierarchical association approach to robustly track multiple objects in crowded environments from a single
camera. At the low level, reliable tracklets (i.e. short tracks for further analysis) are generated by linking detection responses based on conservative affinity constraints.
At the middle level, these tracklets are further associated to form longer tracklets based on more complex affinity measures. The association is formulated
as a MAP problem and solved by the Hungarian algorithm. At the high level, entries, exits and scene occluders are estimated using the already computed tracklets,
which are used to refine the final trajectories. This approach is applied to the pedestrian class and evaluated on two challenging datasets. The experimental
results show a significant improvement in performance over previous methods.
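The abstract above formulates tracklet association as a MAP problem solved by the Hungarian algorithm. As a minimal sketch (not the paper's implementation), optimal one-to-one matching of tracklets under a pairwise affinity matrix can be computed with SciPy's linear-sum-assignment solver by negating affinities into costs; the affinity values here are made up for illustration:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate_tracklets(affinity):
    """Return the one-to-one pairing (row i -> column j) that
    maximizes total affinity between two sets of tracklets."""
    cost = -np.asarray(affinity, dtype=float)  # maximize affinity = minimize cost
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows.tolist(), cols.tolist()))

# toy affinity between 2 earlier and 2 later tracklets (illustrative numbers)
affinity = [[0.9, 0.1],
            [0.2, 0.8]]
pairs = associate_tracklets(affinity)  # pairs earlier tracklet 0 with later 0, 1 with 1
```

In the paper's hierarchical setting this matching would be run at each level, with the affinity terms growing more complex as tracklets lengthen.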
2 Yuan Li, Title: Key Object Driven Multi-Category Object Recognition, Localization and Tracking Using Spatio-Temporal Context
Abstract: In this paper we address the problem of recognizing, localizing and tracking multiple objects of different categories in meeting room videos.
We show that incorporating object-level spatio-temporal relationships leads to improvements in inference of object category and state. Contextual
relationships are modeled by a dynamic Markov random field, in which recognition, localization and tracking are done simultaneously. Further, we define the human
as the key object of the scene, which can be detected relatively robustly and is therefore used to guide the inference of other objects. Experiments are done on the
CHIL meeting video corpus. Performance is evaluated in terms of object detection and false alarm rates, object recognition confusion matrix and pixel-level
accuracy of object segmentation.
3 Qian Yu, Title: Online Tracking and Reacquisition Using Co-trained Generative and Discriminative Trackers
Abstract: Visual tracking is a challenging problem, as an object may change its appearance due to viewpoint variations, illumination changes, and
occlusion. Also, an object may leave the field of view and then reappear. In order to track and reacquire an unknown object with limited labeling data, we propose
to learn these changes online and build a model that describes all seen appearance while tracking. To address this semi-supervised learning problem, we propose a
co-training based approach to continuously label incoming data and update a hybrid discriminative-generative model online. The generative model uses a
number of low-dimensional linear subspaces to describe the appearance of the object.
In order to reacquire an object, the generative model encodes all the appearance variations that have been seen. A discriminative classifier is implemented as an
online support vector machine, which is trained to focus on recent appearance variations. The online co-training of this hybrid approach accounts for appearance
changes and allows reacquisition of an object after total occlusion. We demonstrate that under challenging situations, this method has strong reacquisition
ability and robustness to distracters in the background.
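The abstract above describes an online discriminative classifier that is updated as newly labeled frames arrive. As a hedged sketch of that idea, the toy code below uses a simple perceptron update in place of the paper's online support vector machine (the feature vectors and learning rate are illustrative assumptions, not from the paper): each labeled sample immediately adjusts the weights, so the classifier keeps tracking recent appearance changes.

```python
import numpy as np

def online_update(w, b, x, y, lr=1.0):
    """One online step: perceptron stand-in for an online SVM.
    x is a feature vector, y is the label in {-1, +1}."""
    if y * (np.dot(w, x) + b) <= 0:  # sample misclassified -> update weights
        w = w + lr * y * x
        b = b + lr * y
    return w, b

# toy stream of labeled appearance features (object = +1, background = -1)
w, b = np.zeros(2), 0.0
stream = [(np.array([1.0, 0.0]), 1),
          (np.array([0.0, 1.0]), -1)]
for x, y in stream:
    w, b = online_update(w, b, x, y)
```

In the actual system the labels would come from the co-trained generative model's confident predictions rather than from ground truth, which is what lets the tracker adapt without manual supervision.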