A multimodal language to communicate with life-supporting robots through a touch screen and a speech interface

Cited: 0
Authors
Oka, T. [1 ,2 ]
Matsumoto, H. [1 ,2 ]
Kibayashi, R. [1 ,2 ]
Affiliations
[1] Nihon Univ, Coll Ind Technol, 1-2-1 Izumicho, Narashino, Chiba 2758575, Japan
[2] Fukuoka Inst Technol, Fac Informat Engn, Fukuoka, Japan
Keywords
Life-supporting robot; Multimodal language; Speech; Touch screen; Human-robot interaction;
DOI
10.1007/s10015-011-0924-x
Chinese Library Classification (CLC)
TP24 [Robotics];
Discipline Classification Codes
080202 ; 1405 ;
Abstract
This article proposes a multimodal language for communicating with life-supporting robots through a touch screen and a speech interface. The language is designed for untrained users who need support in their daily lives from cost-effective robots. In this language, users interactively combine spoken and pointing messages to convey their intentions to the robots. Spoken messages consist of verb and noun phrases that describe those intentions. Pointing messages are given when the user's finger touches a camera image, a picture of the robot body, or a button on a hand-held touch screen; they convey a location in the environment, a direction, a body part of the robot, a cue, a reply to a query, or other information that helps the robot. This work presents the philosophy and structure of the language.
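To make the abstract's message structure concrete, the following is a minimal sketch of how the two modalities could be represented and fused into a single command. It is an illustration only, not the paper's actual grammar or implementation: all names (`SpokenMessage`, `PointingMessage`, `MultimodalCommand`, `PointingKind`) and the coordinate-based touch representation are hypothetical assumptions, and the paper itself defines the real language.

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Optional


class PointingKind(Enum):
    """What a touch conveys, following the categories listed in the abstract."""
    LOCATION = auto()    # a point in the environment, via the camera image
    DIRECTION = auto()   # a direction relative to the robot or scene
    BODY_PART = auto()   # a part of the robot, via the robot picture
    CUE = auto()         # a timing or attention cue
    REPLY = auto()       # an answer to a robot query, via an on-screen button


@dataclass
class SpokenMessage:
    """A spoken utterance parsed into a verb phrase and noun phrases."""
    verb: str
    nouns: list[str] = field(default_factory=list)


@dataclass
class PointingMessage:
    """A touch event on the camera image, robot picture, or a button."""
    kind: PointingKind
    x: int  # hypothetical screen coordinates of the touch
    y: int


@dataclass
class MultimodalCommand:
    """A user intention built interactively from both modalities."""
    speech: SpokenMessage
    pointing: Optional[PointingMessage] = None  # pointing is optional

    def describe(self) -> str:
        """Render the fused command as a human-readable string."""
        parts = [f"{self.speech.verb} {' '.join(self.speech.nouns)}".strip()]
        if self.pointing is not None:
            p = self.pointing
            parts.append(f"[{p.kind.name} at ({p.x}, {p.y})]")
        return " ".join(parts)


# Example: saying "bring the cup" while touching a spot on the camera image
cmd = MultimodalCommand(
    speech=SpokenMessage(verb="bring", nouns=["the cup"]),
    pointing=PointingMessage(PointingKind.LOCATION, x=212, y=148),
)
print(cmd.describe())  # bring the cup [LOCATION at (212, 148)]
```

The design choice the sketch mirrors is the one the abstract emphasizes: speech carries the verb/noun skeleton of the intention, while a touch grounds it in the environment, so either modality alone may be ambiguous but the interactive combination is not.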
Pages: 292-296
Page count: 5