Body Pose Estimation and Gesture Recognition for a Human-Computer Interaction System

Chi-Wei Chu


Abstract

We investigate approaches to visual communication for a dark, theater-like interactive virtual simulation training environment. Our goal is to visually recognize user actions and to estimate and track the body position, orientation, and limb configuration of the user. The system uses a near-IR camera array to capture images of the trainee from different angles in the dimly lit theater. Image features such as silhouettes and intermediate silhouette body-axis points are then segmented and extracted from the image backgrounds. 3D body shape information, such as 3D body skeleton points and visual hulls, can be reconstructed from these 2D features in multiple calibrated images. For gesture recognition, the system classifies the current user action against a dictionary of postures and gestures, and the most likely gesture is recognized from the observed image sequences. For pose estimation, we propose a particle-filtering-based method that fits an articulated body model to the observed image features. Currently we focus on the arm-pointing gesture of either limb. From the fitted articulated model we can derive the position on the screen that the user is pointing to. We use current graphics hardware to accelerate processing so that the system runs in real time. The system serves as part of a multi-modal user-input device in the interactive simulation.
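The abstract mentions reconstructing visual hulls from silhouettes in multiple calibrated views. The sketch below is a minimal, illustrative voxel-carving implementation of that general idea, not the paper's actual method: the function name, the 3x4 projection-matrix convention, and the grid resolution are all assumptions made for the example.

```python
import numpy as np

def carve_visual_hull(silhouettes, projections, bounds, res=64):
    """Voxel carving: a voxel survives only if it projects inside every
    camera's foreground silhouette.
    silhouettes: list of (H, W) boolean foreground masks
    projections: list of 3x4 camera projection matrices (assumed calibrated)
    bounds:      ((xmin, xmax), (ymin, ymax), (zmin, zmax)) working volume
    """
    axes = [np.linspace(lo, hi, res) for lo, hi in bounds]
    X, Y, Z = np.meshgrid(*axes, indexing="ij")
    # Homogeneous coordinates of every voxel center, shape (N, 4).
    pts = np.stack([X, Y, Z, np.ones_like(X)], axis=-1).reshape(-1, 4)
    occupied = np.ones(len(pts), dtype=bool)
    for mask, P in zip(silhouettes, projections):
        uvw = pts @ P.T                      # project voxels into this view
        z = uvw[:, 2]
        in_front = z > 1e-9                  # ignore points behind the camera
        u = np.full(len(pts), -1)
        v = np.full(len(pts), -1)
        u[in_front] = np.rint(uvw[in_front, 0] / z[in_front]).astype(int)
        v[in_front] = np.rint(uvw[in_front, 1] / z[in_front]).astype(int)
        h, w = mask.shape
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        hit = np.zeros(len(pts), dtype=bool)
        hit[inside] = mask[v[inside], u[inside]]
        occupied &= hit                      # carve away voxels outside the silhouette
    return occupied.reshape(res, res, res)
```

Intersecting the silhouette cones this way yields a conservative volume that always contains the true body shape, which is why visual hulls are a common intermediate representation for multi-view pose estimation.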
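The particle-filtering pose-fitting step can likewise be illustrated with a small toy example. The sketch below assumes a hypothetical 2-DOF planar arm and noisy 2D joint observations; the names (forward_arm, step), the random-walk motion model, and the Gaussian likelihood are assumptions for illustration, whereas the paper's system would score projected 3D limb hypotheses against multi-view silhouette features.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_arm(pose, l1=0.30, l2=0.25):
    """Toy 2-DOF planar arm: (shoulder, elbow) angles -> 2D joint positions."""
    s = np.zeros(2)
    e = s + l1 * np.array([np.cos(pose[0]), np.sin(pose[0])])
    w = e + l2 * np.array([np.cos(pose[0] + pose[1]), np.sin(pose[0] + pose[1])])
    return np.stack([s, e, w])               # shoulder, elbow, wrist

def step(particles, observed, motion_noise=0.05, meas_sigma=0.02):
    """One predict-weight-resample cycle of the particle filter."""
    # Predict: diffuse pose hypotheses with a random-walk motion model.
    particles = particles + rng.normal(0.0, motion_noise, particles.shape)
    # Weight: compare each hypothesis's joints to the observed axis points.
    errs = np.array([np.linalg.norm(forward_arm(p) - observed) for p in particles])
    loglik = -0.5 * (errs / meas_sigma) ** 2
    w = np.exp(loglik - loglik.max())        # shift for numerical stability
    w /= w.sum()
    estimate = (w[:, None] * particles).sum(axis=0)
    # Resample: systematic resampling keeps high-likelihood hypotheses.
    n = len(particles)
    positions = (np.arange(n) + rng.random()) / n
    idx = np.minimum(np.searchsorted(np.cumsum(w), positions), n - 1)
    return particles[idx], estimate

# Demo: track a fixed ground-truth pose from noisy joint observations.
true_pose = np.array([0.8, -0.4])
particles = rng.uniform(-np.pi, np.pi, size=(500, 2))
for _ in range(20):
    observed = forward_arm(true_pose) + rng.normal(0.0, 0.01, (3, 2))
    particles, estimate = step(particles, observed)
print("true:", true_pose, "estimate:", estimate)
```

In the pointing application, the fitted arm configuration would then be intersected with the screen plane to obtain the indicated screen position.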


Maintained by Qian Yu