Multimodal Behavior Analysis of Human-Robot Navigational Commands

Cited: 0
Authors
Priyanayana, K. S. [1 ]
Jayasekara, A. G. Buddhika P. [1 ]
Gopura, R. A. R. C. [2 ]
Affiliations
[1] Univ Moratuwa, Dept Elect Engn, Moratuwa, Sri Lanka
[2] Univ Moratuwa, Dept Mech Engn, Moratuwa, Sri Lanka
Keywords
Human-robot interaction; Social robotics; Non-verbal communication; Multimodal interaction
DOI
10.1109/ICCR51572.2020.9344419
CLC number
TP [Automation Technology, Computer Technology]
Discipline code
0812
Abstract
Human-robot interaction should become more human-like, and human-human communication is inherently multimodal: in everyday communication, humans tend to use several modalities at once to convey a message. Multimodal interaction can draw on many modalities, such as gestures, speech, and gaze. The most common multimodal combination in human-human interaction is speech paired with hand gestures. Hand gestures are used in diverse ways in these interactions; they add different meanings and enhance understanding of the complete interaction along multiple dimensions. The purpose of this paper is to conduct a comprehensive analysis of the multimodal relationship between speech and hand gestures and of its effect on the true meaning of an interaction. The paper therefore examines different aspects of each modality in multimodal interaction, such as vocal uncertainties; static and dynamic hand gestures; deictic, redundant, and unintentional gestures; their timeline parameters; and hand features. Furthermore, it discusses the effect of each speech-gesture parameter on the understanding of vocal ambiguities. A complete analysis of these aspects was conducted through a detailed human study, and the results are interpreted in terms of the multimodal aspects above. Vocal commands are further analyzed using different vocal categories and different types of uncertainty, while hand gestures are analyzed through timeline parameters and hand-feature analysis. For the timeline analysis, the parameters were selected based on participant feedback on the effectiveness of each parameter: lag time, speed of the gesture movement, and range of the gesture.
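The three timeline parameters named in the abstract (lag time, gesture speed, gesture range) can be sketched as simple computations over a timestamped hand trajectory. The data layout and function names below are illustrative assumptions for a 2D tracked hand position, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class GestureSample:
    t: float  # timestamp in seconds
    x: float  # tracked hand position (e.g., wrist), metres
    y: float

def lag_time(speech_onset: float, gesture_onset: float) -> float:
    """Delay from speech onset to gesture onset (negative if the gesture leads)."""
    return gesture_onset - speech_onset

def gesture_speed(samples: list[GestureSample]) -> float:
    """Mean speed (m/s): path length of the trajectory divided by its duration."""
    dist = 0.0
    for a, b in zip(samples, samples[1:]):
        dist += ((b.x - a.x) ** 2 + (b.y - a.y) ** 2) ** 0.5
    duration = samples[-1].t - samples[0].t
    return dist / duration if duration > 0 else 0.0

def gesture_range(samples: list[GestureSample]) -> float:
    """Diagonal of the trajectory's bounding box, a simple proxy for gesture range."""
    xs = [s.x for s in samples]
    ys = [s.y for s in samples]
    return ((max(xs) - min(xs)) ** 2 + (max(ys) - min(ys)) ** 2) ** 0.5
```

For example, a gesture whose hand moves from (0, 0) to (0.3, 0.4) over 0.5 s has a path length of 0.5 m, hence a speed of 1.0 m/s and a bounding-box range of 0.5 m.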
Pages: 79-84
Page count: 6