This project aims to design and implement Stevi, a high-level vision subsystem for personal service robots. To function properly, service robots must be able to detect the location and intentions of potential users. To meet these requirements, the system will perform localization, detection, and tracking of people, as well as gesture recognition, in real time using stereo and omnidirectional video.
While computer vision has made significant progress in all of these areas, a wide divide remains between the abilities of even the most advanced algorithms and those of the human visual system. The limited computational power and dynamic nature of most robotic platforms further intensify the need for better vision systems. To address these issues, we use the Software Architecture for Immersipresence (SAI) framework to fuse preexisting efficient algorithms at the symbolic level, resulting in a highly robust system.
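The text does not specify how the symbolic-level fusion is carried out within SAI, so the following is only a minimal illustrative sketch of the general idea: individual modules emit symbolic hypotheses (here, image-plane person detections with confidences), and a fusion step merges agreeing hypotheses so that independent evidence reinforces a detection. All names (`Hypothesis`, `fuse`, the noisy-OR combination, the matching radius) are assumptions for illustration, not part of SAI or Stevi.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    # A symbolic detection: image-plane position plus a confidence score.
    x: float
    y: float
    conf: float

def fuse(a, b, radius=30.0):
    """Merge person hypotheses from two independent detector modules.

    Hypotheses closer than `radius` pixels are treated as the same person:
    positions are confidence-weighted averages, and confidences combine via
    noisy-OR, which rewards agreement between modules. Unmatched hypotheses
    from either module pass through unchanged.
    """
    fused, used = [], set()
    for ha in a:
        match = None
        for i, hb in enumerate(b):
            if i not in used and (ha.x - hb.x) ** 2 + (ha.y - hb.y) ** 2 <= radius ** 2:
                match = i
                break
        if match is None:
            fused.append(ha)
        else:
            hb = b[match]
            used.add(match)
            w = ha.conf + hb.conf
            fused.append(Hypothesis(
                (ha.conf * ha.x + hb.conf * hb.x) / w,
                (ha.conf * ha.y + hb.conf * hb.y) / w,
                1.0 - (1.0 - ha.conf) * (1.0 - hb.conf),  # noisy-OR
            ))
    fused.extend(hb for i, hb in enumerate(b) if i not in used)
    return fused

# Example: a face detector and a motion detector agree on one person.
faces = [Hypothesis(100, 80, 0.7)]
motion = [Hypothesis(110, 85, 0.6), Hypothesis(400, 200, 0.5)]
people = fuse(faces, motion)  # two people; the agreed one gains confidence
```

The design point of such a fusion step is that each module can stay simple and cheap; robustness comes from combining their symbolic outputs rather than from any single algorithm.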
To test this approach we use the WEVER-N, a prototype personal service robot platform developed by ETRI in Korea. It provides a working research platform, including propulsion, servo-controlled stereo color cameras, and speech synthesis. Future research will augment the system with static cameras placed strategically throughout the environment to increase Stevi's situational awareness.