Isaac COHEN

Human Body Posture Recognition

Description

Multimodal interaction systems represent a considerable shift from classical windows, icons, menus and pointing (WIMP) interfaces. Gesture and speech are the main components of such interfaces, as they form the foundation of natural human communication. While speech recognition systems are commercially available, gesture recognition is still in its infancy. This is partly because speech is a linear and highly structured modality, while gesture is a spatial modality that remains challenging to capture and interpret.
Identifying a body posture from its 3D shape is challenging because the shape description has to account for the variability with which a posture can be performed. Indeed, different people perform a similar posture differently, so identifying a posture from 2D/3D shape descriptions requires a learning step. We focus on developing an appearance-based learning formalism that is viewpoint independent. It uses a 3D shape descriptor of the visual hull for classifying and identifying human postures.
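As a rough illustration of this idea, the sketch below computes a simple translation-, scale-, and rotation-invariant descriptor of a visual-hull voxel grid and assigns the posture label of the nearest learned exemplar. The radial occupancy histogram and nearest-neighbor matcher are hypothetical stand-ins for the 3D shape descriptor and classifier used in the project, not the project's own implementation.

# Minimal sketch of viewpoint-independent posture classification from a
# visual-hull voxel grid. Descriptor and classifier are illustrative only.
import numpy as np

def radial_histogram(voxels, n_bins=16):
    """Rotation/translation/scale-invariant descriptor of an occupancy grid.

    voxels: boolean 3D array, True where the visual hull is occupied.
    Returns a normalized histogram of voxel distances from the centroid.
    """
    pts = np.argwhere(voxels).astype(float)
    pts -= pts.mean(axis=0)                      # translation invariance
    r = np.linalg.norm(pts, axis=1)
    r /= r.max() + 1e-9                          # scale normalization
    hist, _ = np.histogram(r, bins=n_bins, range=(0.0, 1.0))
    return hist / hist.sum()

def classify_posture(voxels, exemplars):
    """Assign the posture label of the closest learned exemplar.

    exemplars: list of (label, descriptor) pairs built from training hulls.
    """
    d = radial_histogram(voxels)
    label, _ = min(exemplars, key=lambda e: np.linalg.norm(e[1] - d))
    return label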

Students

  • Hongxia Li

Results

Real-time 3D reconstruction of the hand from its silhouettes and fitting of an articulated model

Visualization of the system's output: four detected silhouettes, the corresponding visual hull, and the automatically recognized posture with its thumbnail highlighted.
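The visual hull shown in these visualizations can be obtained by carving a voxel grid with the calibrated silhouettes: a voxel is kept only if it projects inside every silhouette. The sketch below illustrates this idea under assumed inputs (3x4 projection matrices and binary silhouette masks); it is not the project's actual reconstruction code.

# Illustrative visual-hull carving from calibrated silhouettes.
import numpy as np

def carve_visual_hull(silhouettes, projections, grid_points):
    """Keep the voxels whose projection lies inside every silhouette.

    silhouettes: list of 2D boolean masks (H x W), one per camera.
    projections: list of 3x4 projection matrices, one per camera.
    grid_points: N x 3 array of voxel centers in world coordinates.
    Returns a boolean array of length N marking occupied voxels.
    """
    homog = np.hstack([grid_points, np.ones((len(grid_points), 1))])
    occupied = np.ones(len(grid_points), dtype=bool)
    for mask, P in zip(silhouettes, projections):
        uvw = homog @ P.T                        # project to image plane
        u = (uvw[:, 0] / uvw[:, 2]).round().astype(int)
        v = (uvw[:, 1] / uvw[:, 2]).round().astype(int)
        inside = (u >= 0) & (u < mask.shape[1]) & (v >= 0) & (v < mask.shape[0])
        hit = np.zeros(len(grid_points), dtype=bool)
        hit[inside] = mask[v[inside], u[inside]]
        occupied &= hit                          # carve away voxels that miss
    return occupied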

A second example

The same visualization for the second example: four detected silhouettes, the corresponding visual hull, and the automatically recognized posture with its thumbnail highlighted.

Publications