This article proposes a multimodal language for communicating with life-supporting robots through a touch screen and a speech interface. The language is designed for untrained users who need support in their daily lives from cost-effective robots. In this language, users combine spoken and pointing messages interactively to convey their intentions to the robots. Spoken messages consist of verb and noun phrases that describe intentions. Pointing messages are issued when the user's finger touches a camera image, a picture of the robot's body, or a button on a hand-held touch screen; each touch conveys a location in the environment, a direction, a body part of the robot, a cue, a reply to a query, or other information that helps the robot. This work presents the philosophy and structure of the language.
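To make the combination of modalities concrete, the sketch below models how a spoken verb phrase and a pointing gesture might be fused into a single intention. The article describes the language, not an implementation, so every name here (SpokenMessage, PointingMessage, PointingType, fuse) is hypothetical; this is a minimal Python reading of the mechanism, assuming a deictic noun phrase such as "that" is resolved by the accompanying touch.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class PointingType(Enum):
    # Kinds of information a touch can convey, per the abstract.
    LOCATION = auto()    # a point in the environment (touch on the camera image)
    DIRECTION = auto()   # a direction to move or look
    BODY_PART = auto()   # touch on the picture of the robot's body
    CUE = auto()         # a timing cue, e.g. "start now"
    REPLY = auto()       # answer to a robot query (touch on a button)

@dataclass
class SpokenMessage:
    verb: str                        # e.g. "bring"
    noun_phrase: Optional[str] = None  # e.g. "the cup"; may be deictic ("that")

@dataclass
class PointingMessage:
    kind: PointingType
    payload: object                  # screen coordinates, button id, body-part name, ...

@dataclass
class Intention:
    action: str
    target: object

def fuse(spoken: SpokenMessage,
         pointing: Optional[PointingMessage]) -> Intention:
    """Hypothetical fusion step: combine a spoken command with an
    optional pointing gesture into one intention for the robot.

    If the noun phrase is missing or deictic, the touched location
    supplies the target; otherwise the noun phrase names it.
    """
    if (pointing is not None
            and pointing.kind is PointingType.LOCATION
            and spoken.noun_phrase in (None, "that", "there")):
        return Intention(action=spoken.verb, target=pointing.payload)
    return Intention(action=spoken.verb, target=spoken.noun_phrase)

# Example: the user says "bring that" and touches the camera image
# at pixel coordinates (312, 188).
cmd = fuse(SpokenMessage("bring", "that"),
           PointingMessage(PointingType.LOCATION, (312, 188)))
print(cmd)  # Intention(action='bring', target=(312, 188))
```

The point of the sketch is the interactive division of labor the abstract describes: speech carries the action, and touch grounds whatever the speech leaves underspecified.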
Authors: Gao, W.; Chen, X.L.; Ma, J.Y.; Wang, Z.Q.
Affiliation: Inst. of Computing Technol., Chinese Acad. of Science, Beijing 100080, China