Visual Sensing for Natural Human-Robot Interaction

 

 

  USC IRIS Computer Vision Lab

  ETRI Intelligence Robot Research Division

  Project Description

 

Vision is clearly an important element of human-human communication. Body language, such as facial expressions, silent nods,
and other gestures, adds important information to human-to-human dialog.

We expect vision to play the same role in human-robot interaction. The robot should always be able to answer questions
such as "Where am I?", "Are there people?", "Who are they?", "Am I being called?", and so on. To answer these questions,
the robot should have the following essential capabilities.

Awareness: An assistive robot should continuously locate and identify individuals, interpret human motion and actions,
and know its own position.

Communication: To interact with humans, including its master, the robot should understand facial and gestural signals
from a human and respond to them with gestures of its own, such as "I understood" and "I am going to".

Decision: After receiving a command from its master, the robot should decide "What should I do next?" and
respond to the command with a gesture of acknowledgement.

We will develop a robust and persistent 3-D vision system to support these capabilities through a joint research project
between USC and ETRI, organized around the following tasks.

Long-range interaction: Detecting and tracking humans (a minimal illustrative sketch follows this list)

Short-range interaction: Pose estimation and gesture recognition

Position estimation: Estimating the positions of the robot and its master

Integration: Porting the system to a real robot
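
To make the long-range interaction task concrete, the sketch below shows one way a single frame from the robot's camera could be scanned for people. It uses OpenCV's built-in HOG people detector purely as an illustrative stand-in; the project's own 3-D based detection and tracking system is not described here, and the camera index and output filename are arbitrary choices.

    # Minimal, illustrative person-detection sketch (not the project's method).
    # Assumes OpenCV (cv2) is installed and a camera is available at index 0.
    import cv2

    def detect_people(frame):
        """Return bounding boxes (x, y, w, h) of people found in a BGR frame."""
        hog = cv2.HOGDescriptor()
        hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
        boxes, _weights = hog.detectMultiScale(frame, winStride=(8, 8),
                                               padding=(8, 8), scale=1.05)
        return boxes

    if __name__ == "__main__":
        cap = cv2.VideoCapture(0)          # default camera (assumption)
        ok, frame = cap.read()
        if ok:
            for (x, y, w, h) in detect_people(frame):
                # Draw a green box around each detected person.
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.imwrite("people.png", frame)
        cap.release()

In the actual system, per-frame detections like these would feed a tracker so that each person keeps a consistent identity over time, which is what the long-range interaction task requires.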

 

  Member of Research Team

 

Gérard Medioni (Project Leader)

Isaac Cohen (Research Assistant Professor)

Hosub Yoon (Visiting Researcher)

Kwangsu Kim (Research Assistant)

Matheen Siddiqui (Research Assistant)