Abstract
This article proposes a method for interpreting user commands based on visual attention. Voice commands commonly include fuzzy linguistic terms such as "very little," so a robot's capacity to understand such information is vital for effective human-robot interaction. However, the quantitative meaning of such information depends strongly on the spatial arrangement of the surrounding environment. A visual attention system (VAS) is therefore introduced to evaluate fuzzy linguistic information according to the environmental conditions. It is assumed that the distance corresponding to a particular fuzzy linguistic command depends on the spatial arrangement of the surrounding objects. Accordingly, a fuzzy-logic-based voice command evaluation system (VCES) is proposed to assess the uncertain information in user commands based on the average distance to the surrounding objects. The system is illustrated with a simulated object-manipulation task in which the user's working space is rearranged, and is demonstrated on a PA-10 robot manipulator.
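The core idea above, that the crisp meaning of a fuzzy distance term is scaled by the average distance to surrounding objects, can be sketched as follows. This is a minimal illustrative sketch only: the function names, the linguistic terms, and the scale factors are assumptions for exposition, not the paper's actual VCES implementation.

```python
# Hypothetical sketch: map a fuzzy linguistic distance term to a crisp
# distance whose scale is set by the average distance to the objects
# currently in view. All terms and coefficients are illustrative.

def average_distance(object_distances):
    """Mean distance (m) to the objects in the visual field."""
    return sum(object_distances) / len(object_distances)

# Assumed scale factors: fraction of the average surrounding distance
# that each linguistic term is taken to denote.
TERM_SCALES = {
    "very little": 0.1,
    "little": 0.25,
    "some": 0.5,
    "much": 0.75,
    "very much": 1.0,
}

def evaluate_command(term, object_distances):
    """Convert a fuzzy linguistic distance term to a crisp distance (m)."""
    if term not in TERM_SCALES:
        raise ValueError(f"unknown linguistic term: {term!r}")
    return TERM_SCALES[term] * average_distance(object_distances)

# In a cluttered scene (objects ~0.2 m away), "very little" yields a
# much smaller motion than in a sparse scene (objects ~1.0 m away).
cluttered = evaluate_command("very little", [0.15, 0.20, 0.25])
sparse = evaluate_command("very little", [0.8, 1.0, 1.2])
```

The point of the sketch is the context dependence: the same spoken phrase produces different crisp distances depending on how the surrounding objects are arranged, which is what the VAS is introduced to measure.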
Original language | English |
---|---|
Pages (from-to) | 48-52 |
Number of pages | 5 |
Journal | Artificial Life and Robotics |
Volume | 14 |
Issue number | 1 |
DOIs | |
Publication status | Published - Oct 16, 2009 |
Externally published | Yes |
Keywords
- Fuzzy linguistic information
- Robot control
- Visual attention
ASJC Scopus subject areas
- Biochemistry, Genetics and Molecular Biology (all)
- Artificial Intelligence