Multimodal Behavior Analysis of Human-Robot Navigational Commands

Cited by: 0
Authors
Priyanayana, K. S. [1 ]
Jayasekara, A. G. Buddhika P. [1 ]
Gopura, R. A. R. C. [2 ]
Affiliations
[1] Univ Moratuwa, Dept Elect Engn, Moratuwa, Sri Lanka
[2] Univ Moratuwa, Dept Mech Engn, Moratuwa, Sri Lanka
Keywords
Human-robot interaction; Social robotics; Non-verbal communication; Multimodal interaction
DOI
10.1109/ICCR51572.2020.9344419
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
Human-robot interactions should be more human-like, and human-human communication is inherently multimodal. In everyday communication, humans tend to use multiple modalities at a time to convey a message. Multimodal interaction can involve many modalities, such as gestures, speech, and gaze. The major multimodal human-human combination is speech-hand gesture interaction. Hand gestures are used in diverse ways in these interactions; they add different meanings and enhance the understanding of the complete interaction in multiple dimensions. The purpose of this paper is to conduct a comprehensive analysis of the multimodal relationship between speech and hand gestures and its effect on the true meaning of an interaction. Therefore, this paper focuses on different aspects of each modality with regard to multimodal interaction, such as vocal uncertainties; static and dynamic hand gestures; deictic, redundant, and unintentional gestures; their timeline parameters; and hand features. Furthermore, this paper discusses the effect of each speech-gesture parameter on the understanding of vocal ambiguities. A complete analysis of these aspects was conducted through a detailed human study, and the results are interpreted in terms of the multimodal aspects above. Vocal commands are analyzed using different vocal categories and different types of uncertainties. Hand gestures are analyzed through timeline parameters and hand-feature analysis. For the timeline analysis, the parameters were chosen based on participants' feedback on the effectiveness of each parameter; lag time, speed of the gesture movements, and range of the gesture were considered.
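The abstract names three timeline parameters for a gesture stroke: lag time relative to speech, movement speed, and gesture range. The paper does not publish its computation, so the sketch below is only an illustrative guess at how such parameters might be derived from tracked hand positions; `GestureSample` and `timeline_parameters` are hypothetical names, not from the paper.

```python
# Illustrative sketch (not the authors' method): computing lag time,
# mean speed, and range of a gesture stroke from timestamped 2-D
# hand-keypoint samples.
import math
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class GestureSample:
    t: float   # timestamp, seconds
    x: float   # hand (e.g., wrist keypoint) position, metres
    y: float

def timeline_parameters(samples: List[GestureSample],
                        speech_onset: float) -> Tuple[float, float, float]:
    """Return (lag_time, mean_speed, gesture_range) for one stroke.

    lag_time      : gesture onset relative to speech onset, seconds
    mean_speed    : path length divided by stroke duration, m/s
    gesture_range : maximum displacement from the starting position, metres
    """
    lag_time = samples[0].t - speech_onset
    # Path length: sum of segment lengths between consecutive samples.
    path = sum(math.hypot(b.x - a.x, b.y - a.y)
               for a, b in zip(samples, samples[1:]))
    duration = samples[-1].t - samples[0].t
    mean_speed = path / duration if duration > 0 else 0.0
    x0, y0 = samples[0].x, samples[0].y
    gesture_range = max(math.hypot(s.x - x0, s.y - y0) for s in samples)
    return lag_time, mean_speed, gesture_range
```

For example, a stroke starting 0.2 s after speech onset and moving the hand 0.6 m over 1 s would yield a lag of 0.2 s, a mean speed of 0.6 m/s, and a range of 0.6 m.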
Pages: 79 - 84
Page count: 6
Related Papers
50 records
  • [31] Multimodal fusion and human-robot interaction control of an intelligent robot
    Gong, Tao
    Chen, Dan
    Wang, Guangping
    Zhang, Weicai
    Zhang, Junqi
    Ouyang, Zhongchuan
    Zhang, Fan
    Sun, Ruifeng
    Ji, Jiancheng Charles
    Chen, Wei
    FRONTIERS IN BIOENGINEERING AND BIOTECHNOLOGY, 2024, 11
  • [32] Development of an Office Delivery Robot with Multimodal Human-Robot Interactions
    Jean, Jong-Hann
    Wei, Chen-Fu
    Lin, Zheng-Wei
    Lian, Kuang-Yow
    2012 PROCEEDINGS OF SICE ANNUAL CONFERENCE (SICE), 2012, : 1564 - 1567
  • [33] Behavior Analysis for Increasing the Efficiency of Human-Robot Collaboration
    Lin, Hsien-I
    Wibowo, Fauzy Satrio
    Lathifah, Nurani
    Chen, Wen-Hui
    MACHINES, 2022, 10 (11)
  • [34] High-Level Commands in Human-Robot Interaction for Search and Rescue
    Caltieri, Alain
    Amigoni, Francesco
    ROBOCUP 2013: ROBOT WORLD CUP XVII, 2014, 8371 : 480 - 491
  • [35] Learning to Understand Parameterized Commands through a Human-Robot Training Task
    Austermann, Anja
    Yamada, Seiji
    RO-MAN 2009: THE 18TH IEEE INTERNATIONAL SYMPOSIUM ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION, VOLS 1 AND 2, 2009, : 800 - +
  • [36] Learning to Interpret Natural Language Commands through Human-Robot Dialog
    Thomason, Jesse
    Zhang, Shiqi
    Mooney, Raymond
    Stone, Peter
    PROCEEDINGS OF THE TWENTY-FOURTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE (IJCAI), 2015, : 1923 - 1929
  • [37] Adapting Robot Behavior for Human-Robot Interaction
    Christopher, G. John
    Preethi, S.
    Beevi, S. Jailani
    INFORMATION AND NETWORK TECHNOLOGY, 2011, 4 : 147 - 152
  • [38] Adapting robot behavior for human-robot interaction
    Mitsunaga, Noriaki
    Smith, Christian
    Kanda, Takayuki
    Ishiguro, Hiroshi
    Hagita, Norihiro
    IEEE TRANSACTIONS ON ROBOTICS, 2008, 24 (04) : 911 - 916
  • [39] MULTIMODAL HUMAN ACTION RECOGNITION IN ASSISTIVE HUMAN-ROBOT INTERACTION
    Rodomagoulakis, I.
    Kardaris, N.
    Pitsikalis, V.
    Mavroudi, E.
    Katsamanis, A.
    Tsiami, A.
    Maragos, P.
    2016 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING PROCEEDINGS, 2016, : 2702 - 2706
  • [40] An Extensible Architecture for Robust Multimodal Human-Robot Communication
    Rossi, Silvia
    Leone, Enrico
    Fiore, Michelangelo
    Finzi, Alberto
    Cutugno, Francesco
    2013 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2013, : 2208 - 2213