An approach to estimating human behaviors by using an active vision head

Keigo Watanabe, Kiyotaka Izumi, Kei Shibayama, Kohei Kamohara

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

3 Citations (Scopus)

Abstract

When an intelligent robot works together with humans in a living space, we often encounter scenes in which the robot supports human behaviors. To judge what kind of action would serve as assistance to the human in such a scene, the robot must understand the human behavior. The present research aims at constructing a system that understands human behavior from the time-series images of human movement obtained with an active vision head. The first process estimates the human posture in each frame. The next process decomposes a basic human action into a posture and a position. The final process estimates the human behavior by examining the action and its target, under the assumption that a human behavior consists of the combination of an action and a behavioral target.
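The abstract outlines a three-stage pipeline: per-frame posture estimation, action segmentation from posture and position, and behavior estimation from an action plus a behavioral target. The following is a minimal sketch of that pipeline structure only; all class and function names, labels, and decision rules are illustrative assumptions, not the authors' actual implementation.

```python
# Sketch of the three-stage pipeline described in the abstract.
# Names, labels, and rules are hypothetical placeholders.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Posture:
    label: str            # e.g. "standing", "sitting", "reaching" (assumed labels)


@dataclass
class Observation:
    posture: Posture      # result of per-frame posture estimation (stage 1)
    position: tuple       # (x, y) position of the person in the room


@dataclass
class Action:
    name: str             # e.g. "walk", "reach", "stay"


def segment_action(observations: List[Observation]) -> Action:
    """Stage 2: derive a basic action from the time series of
    posture + position (e.g. a change in position while upright
    is interpreted here as 'walk')."""
    if len(observations) < 2:
        return Action("idle")
    moved = observations[0].position != observations[-1].position
    upright = observations[-1].posture.label == "standing"
    if moved and upright:
        return Action("walk")
    if observations[-1].posture.label == "reaching":
        return Action("reach")
    return Action("stay")


def estimate_behavior(action: Action, target: Optional[str]) -> str:
    """Stage 3: combine the action with a behavioral target, following
    the abstract's assumption that behavior = action + behavioral target."""
    if target is None:
        return action.name
    return f"{action.name} toward {target}"


# Usage example with hand-made observations (no camera input needed):
obs = [
    Observation(Posture("standing"), (0.0, 0.0)),
    Observation(Posture("standing"), (1.0, 0.5)),
]
print(estimate_behavior(segment_action(obs), target="kitchen table"))
# -> "walk toward kitchen table"
```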

Original language: English
Title of host publication: 9th International Conference on Control, Automation, Robotics and Vision, 2006, ICARCV '06
DOIs
Publication status: Published - Dec 1 2006
Externally published: Yes
Event: 9th International Conference on Control, Automation, Robotics and Vision, 2006, ICARCV '06 - Singapore, Singapore
Duration: Dec 5 2006 - Dec 8 2006

Publication series

Name: 9th International Conference on Control, Automation, Robotics and Vision, 2006, ICARCV '06

Other

Other: 9th International Conference on Control, Automation, Robotics and Vision, 2006, ICARCV '06
Country/Territory: Singapore
City: Singapore
Period: 12/5/06 - 12/8/06

Keywords

  • Active vision head
  • Assistance robots
  • Human behavior
  • Real-time detection

ASJC Scopus subject areas

  • Computer Science Applications
  • Computer Vision and Pattern Recognition
  • Control and Systems Engineering
