Video Surveillance and Activity Monitoring
DARPA-Image Understanding Project: 1997-1999
We present a system that takes as input a video stream obtained from an airborne moving platform and produces an analysis of the behavior of the moving objects in the scene. To achieve this functionality, the system relies on two modular blocks.
The first module detects and tracks moving regions in the sequence. It uses a set of features at multiple scales to stabilize the image sequence, that is, to compensate for the motion of the observer; it then extracts regions with residual motion and uses an attribute-graph representation to infer their trajectories.
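The project page gives no implementation details for the stabilization step. As a minimal sketch of the underlying idea, the fragment below assumes affine ego-motion and pre-matched background features across two frames (both assumptions, not stated in the original): a global affine transform is fit to the background matches by least squares, and any point whose motion is not explained by that transform exhibits residual motion.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2x3 affine transform mapping src points to dst points.
    src, dst: (N, 2) arrays of matched feature coordinates across two frames."""
    n = src.shape[0]
    A = np.zeros((2 * n, 6))
    b = dst.reshape(-1)
    A[0::2, 0:2] = src   # rows for x' = a*x + b*y + tx
    A[0::2, 2] = 1.0
    A[1::2, 3:5] = src   # rows for y' = c*x + d*y + ty
    A[1::2, 5] = 1.0
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params.reshape(2, 3)

def warp_points(M, pts):
    """Apply a 2x3 affine transform M to (N, 2) points."""
    return pts @ M[:, :2].T + M[:, 2]

# Synthetic example: background features follow the global (camera-induced)
# motion, while one independently moving object does not.
rng = np.random.default_rng(0)
bg_prev = rng.uniform(0, 100, size=(20, 2))
M_true = np.array([[1.0, 0.02, 3.0],
                   [-0.02, 1.0, -1.5]])       # simulated observer motion
bg_next = warp_points(M_true, bg_prev)

obj_prev = np.array([[50.0, 50.0]])
obj_next = warp_points(M_true, obj_prev) + np.array([[8.0, 0.0]])  # moving object

M_est = fit_affine(bg_prev, bg_next)                 # stabilization transform
residual = obj_next - warp_points(M_est, obj_prev)   # motion left after compensation
```

After compensation the background points have near-zero residual, while the object's residual recovers its independent displacement; in the full system such residual-motion regions would then be tracked over time.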
The second module takes these trajectories as input, together with user-provided information in the form of geospatial context and goal context, and instantiates likely scenarios.
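The original does not describe how scenarios are instantiated. As one hypothetical illustration of matching trajectories against geospatial context, the sketch below (the `Zone` class, the zone name, and the entry rule are all invented for illustration) emits an event whenever a trajectory crosses from outside a region of interest to inside it.

```python
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float]

@dataclass
class Zone:
    """Hypothetical geospatial-context element: an axis-aligned region."""
    name: str
    xmin: float
    ymin: float
    xmax: float
    ymax: float

    def contains(self, p: Point) -> bool:
        return self.xmin <= p[0] <= self.xmax and self.ymin <= p[1] <= self.ymax

def instantiate_scenarios(trajectory: List[Point], zones: List[Zone]) -> List[str]:
    """Emit an 'enters <zone>' event when consecutive trajectory points
    cross a zone boundary from outside to inside."""
    events = []
    for prev, cur in zip(trajectory, trajectory[1:]):
        for z in zones:
            if not z.contains(prev) and z.contains(cur):
                events.append(f"enters {z.name}")
    return events

checkpoint = Zone("checkpoint", 40, 40, 60, 60)
track = [(10, 50), (30, 50), (50, 50), (70, 50)]
print(instantiate_scenarios(track, [checkpoint]))  # ['enters checkpoint']
```

A goal context would, in this style, select which zone-crossing patterns count as a scenario of interest; the real system's representation is not specified on this page.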
Last modified: November 2006, USC Computer Vision