Understanding user commands by evaluating fuzzy linguistic information based on visual attention

A. G. Buddhika P. Jayasekara, Keigo Watanabe, Kiyotaka Izumi

Research output: Contribution to journal › Article › peer-review

9 Citations (Scopus)

Abstract

This article proposes a method for understanding user commands based on visual attention. Voice commands commonly include fuzzy linguistic terms such as "very little," and a robot's capacity to understand such information is vital for effective human-robot interaction. However, the quantitative meaning of such terms depends strongly on the spatial arrangement of the surrounding environment. A visual attention system (VAS) is therefore introduced to evaluate fuzzy linguistic information in light of the environmental conditions. It is assumed that the distance corresponding to a particular fuzzy linguistic command depends on the spatial arrangement of the surrounding objects. Hence, a fuzzy-logic-based voice command evaluation system (VCES) is proposed to assess the uncertain information in user commands based on the average distance to the surrounding objects. The system is illustrated by a simulated object-manipulation task in which the user's working space is rearranged, and is demonstrated on a PA-10 robot manipulator.
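To make the core idea concrete, the following is a minimal, hypothetical sketch of how a fuzzy linguistic distance term could be grounded against the average distance to surrounding objects. The membership functions, term names, scale values, and function names below are illustrative assumptions for this sketch, not the paper's actual VCES implementation.

```python
def tri_mf(x, a, b, c):
    """Triangular membership function: support [a, c], peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Assumed fuzzy terms, each defined over *normalized* distance
# (expressed as a fraction of the average distance to nearby objects).
TERMS = {
    "very little": (0.0, 0.1, 0.2),
    "little":      (0.1, 0.25, 0.4),
    "medium":      (0.3, 0.5, 0.7),
    "far":         (0.6, 0.85, 1.0),
}

def evaluate_command(term, object_distances, samples=1000):
    """Defuzzify a linguistic distance term into meters, scaled by the
    average distance to the attended objects (centroid method)."""
    d_avg = sum(object_distances) / len(object_distances)
    a, b, c = TERMS[term]
    xs = [i / samples for i in range(samples + 1)]
    num = sum(x * tri_mf(x, a, b, c) for x in xs)
    den = sum(tri_mf(x, a, b, c) for x in xs)
    return d_avg * num / den  # centroid in normalized units, rescaled

# "Move it very little": a cluttered scene implies a small motion...
print(evaluate_command("very little", [0.2, 0.3, 0.4]))  # ~0.03 m
# ...while the same words in an open scene imply a larger one.
print(evaluate_command("very little", [1.0, 1.5, 2.0]))  # ~0.15 m
```

Under these assumptions, the same utterance maps to different metric distances depending on the workspace layout, which is the contextual behavior the VAS/VCES combination is designed to provide.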

Original language: English
Pages (from-to): 48-52
Number of pages: 5
Journal: Artificial Life and Robotics
Volume: 14
Issue number: 1
DOIs
Publication status: Published - Oct 16 2009
Externally published: Yes

Keywords

  • Fuzzy linguistic information
  • Robot control
  • Visual attention

ASJC Scopus subject areas

  • Biochemistry, Genetics and Molecular Biology (all)
  • Artificial Intelligence
