Multimodal Behavior Analysis of Human-Robot Navigational Commands

Cited by: 0
Authors
Priyanayana, K. S. [1 ]
Jayasekara, A. G. Buddhika P. [1 ]
Gopura, R. A. R. C. [2 ]
Affiliations
[1] Univ Moratuwa, Dept Elect Engn, Moratuwa, Sri Lanka
[2] Univ Moratuwa, Dept Mech Engn, Moratuwa, Sri Lanka
Keywords
Human-robot interaction; Social robotics; Non-verbal communication; Multimodal interaction
DOI
10.1109/ICCR51572.2020.9344419
Chinese Library Classification
TP [Automation and computer technology]
Discipline Code
0812
Abstract
Human-robot interactions should be more human-like, and human-human communication is inherently multimodal. In everyday communication, humans tend to use multiple modalities simultaneously to convey a message. Multimodal interactions can involve many modalities, such as gestures, speech, and gaze. The major multimodal combination in human-human communication is speech-hand gesture interaction. Hand gestures are used in diverse ways in these interactions; they add different meanings and enhance understanding of the complete interaction along multiple dimensions. The purpose of this paper is to conduct a comprehensive analysis of the multimodal relationship between speech and hand gestures and its effect on the true meaning of interactions. Therefore, this paper focuses on different aspects of each modality with regard to multimodal interactions, such as vocal uncertainties; static and dynamic hand gestures; deictic, redundant, and unintentional gestures; their timeline parameters; and hand features. Furthermore, this paper discusses the effect of each speech-gesture parameter on the understanding of vocal ambiguities. A complete analysis of these aspects was conducted through a detailed human study, and the results are interpreted in terms of the multimodal aspects above. Vocal commands are further analyzed using different vocal categories and different types of uncertainties. Hand gestures are analyzed through timeline parameters and hand feature analysis. For the timeline analysis, the parameters were selected based on participants' feedback on the effectiveness of each parameter: lag time, speed of the gesture movements, and range of the gesture.
Pages: 79-84
Number of pages: 6
Related Papers
50 records
  • [1] Enhancing Human-Robot Interaction by Interpreting Uncertain Information in Navigational Commands Based on Experience and Environment
    Muthugala, M. A. Viraj J.
    Jayasekara, A. G. Buddhika P.
    2016 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2016, : 2915 - 2921
  • [2] Multimodal Adapted Robot Behavior Synthesis within a Narrative Human-Robot Interaction
    Aly, Amir
    Tapus, Adriana
    2015 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2015, : 2986 - 2993
  • [3] Coordinating Shared Tasks in Human-Robot Collaboration by Commands
    Angleraud, Alexandre
    Sefat, Amir Mehman
    Netzev, Metodi
    Pieters, Roel
    FRONTIERS IN ROBOTICS AND AI, 2021, 8
  • [4] Multimodal Interface for Human-Robot Collaboration
    Rautiainen, Samu
    Pantano, Matteo
    Traganos, Konstantinos
    Ahmadi, Seyedamir
    Saenz, Jose
    Mohammed, Wael M.
    Lastra, Jose L. Martinez
    MACHINES, 2022, 10 (10)
  • [5] Building a multimodal human-robot interface
    Perzanowski, D
    Schultz, AC
    Adams, W
    Marsh, E
    Bugajska, M
    IEEE INTELLIGENT SYSTEMS & THEIR APPLICATIONS, 2001, 16 (01): : 16 - 21
  • [6] Multimodal Interaction for Human-Robot Teams
    Burke, Dustin
    Schurr, Nathan
    Ayers, Jeanine
    Rousseau, Jeff
    Fertitta, John
    Carlin, Alan
    Dumond, Danielle
    UNMANNED SYSTEMS TECHNOLOGY XV, 2013, 8741
  • [7] Multimodal control for human-robot cooperation
    Cherubini, Andrea
    Passama, Robin
    Meline, Arnaud
    Crosnier, Andre
    Fraisse, Philippe
    2013 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2013, : 2202 - 2207
  • [8] Design of an Entertainment Robot with Multimodal Human-Robot Interactions
    Jean, Jong-Hann
    Chen, Kuan-Ting
    Shih, Kuang-Yao
    Lin, Hsiu-Li
    2008 PROCEEDINGS OF SICE ANNUAL CONFERENCE, VOLS 1-7, 2008, : 1378 - 1382
  • [9] Extending Commands Embedded in Actions for Human-Robot Cooperative Tasks
    Kobayashi, Kazuki
    Yamada, Seiji
    INTERNATIONAL JOURNAL OF SOCIAL ROBOTICS, 2010, 2 (02) : 159 - 173
  • [10] Clarifying Commands with Information-Theoretic Human-Robot Dialog
    Deits, Robin
    Tellex, Stefanie
    Thaker, Pratiksha
    Simeonov, Dimitar
    Kollar, Thomas
    Roy, Nicholas
    JOURNAL OF HUMAN-ROBOT INTERACTION, 2013, 2 (02): : 58 - 79