Joint Mind Modeling for Explanation Generation in Complex Human-Robot Collaborative Tasks

Cited by: 0
Authors
Gao, Xiaofeng [1 ]
Gong, Ran [1 ]
Zhao, Yizhou [1 ]
Wang, Shu [1 ]
Shu, Tianmin [2 ]
Zhu, Song-Chun [1 ]
Affiliations
[1] Univ Calif Los Angeles, Ctr Vis Cognit Learning & Auton, Los Angeles, CA 90024 USA
[2] MIT, Cambridge, MA 02139 USA
Keywords
DOI: not available
Chinese Library Classification: TP18 [Artificial Intelligence Theory];
Subject Classification Codes: 081104; 0812; 0835; 1405
Abstract
Human collaborators can effectively communicate with their partners to finish a common task by inferring each other's mental states (e.g., goals, beliefs, and desires). Such mind-aware communication minimizes the discrepancy among collaborators' mental states, and is crucial to success in human ad-hoc teaming. We believe that robots collaborating with human users should demonstrate similar pedagogic behavior. Thus, in this paper, we propose a novel explainable AI (XAI) framework for achieving human-like communication in human-robot collaborations, where the robot builds a hierarchical mind model of the human user and generates explanations of its own mind as a form of communication, based on its online Bayesian inference of the user's mental state. To evaluate our framework, we conduct a user study on a real-time human-robot cooking task. Experimental results show that the explanations generated by our approach significantly improve collaboration performance and user perception of the robot. Code and video demos are available on our project website: https://xfgao.github.io/xCookingWeb/.
Pages: 1119-1126 (8 pages)
Related papers (50 records)
  • [11] Methods for Providing Indications of Robot Intent in Collaborative Human-Robot Tasks
    Bejerano, Gal
    LeMasurier, Gregory
    Yanco, Holly A.
COMPANION OF THE 2018 ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION (HRI'18), 2018: 65-66
  • [12] Efficient Model Learning from Joint-Action Demonstrations for Human-Robot Collaborative Tasks
    Nikolaidis, Stefanos
    Ramakrishnan, Ramya
    Gu, Keren
    Shah, Julie
PROCEEDINGS OF THE 2015 ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION (HRI'15), 2015: 189-196
  • [13] Analyzing Human Visual Attention in Human-Robot Collaborative Construction Tasks
    Liang, Xiaoyun
    Cai, Jiannan
    Hu, Yuqing
CONSTRUCTION RESEARCH CONGRESS 2024: ADVANCED TECHNOLOGIES, AUTOMATION, AND COMPUTER APPLICATIONS IN CONSTRUCTION, 2024: 856-865
  • [14] Prediction of Human Activity Patterns for Human-Robot Collaborative Assembly Tasks
    Zanchettin, Andrea Maria
    Casalino, Andrea
    Piroddi, Luigi
    Rocco, Paolo
IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2019, 15 (07): 3934-3942
  • [15] A Programming by Demonstration System for Human-Robot Collaborative Assembly Tasks
    Hamabe, Takuma
    Goto, Hiraki
    Miura, Jun
2015 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND BIOMIMETICS (ROBIO), 2015: 1195-1201
  • [16] Trust or Not?: A Computational Robot-Trusting-Human Model for Human-Robot Collaborative Tasks
    Hannum, Corey
    Li, Rui
    Wang, Weitian
2020 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), 2020: 5689-5691
  • [17] Human-robot mutual adaptation in collaborative tasks: Models and experiments
    Nikolaidis, Stefanos
    Hsu, David
    Srinivasa, Siddhartha
INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH, 2017, 36 (5-7): 618-634
  • [18] The collaborative mind: intention reading and trust in human-robot interaction
    Vinanzi, Samuele
    Cangelosi, Angelo
    Goerick, Christian
    ISCIENCE, 2021, 24 (02)
  • [19] Probabilistic Multimodal Modeling for Human-Robot Interaction Tasks
    Campbell, Joseph
    Stepputtis, Simon
    Amor, Heni Ben
ROBOTICS: SCIENCE AND SYSTEMS XV, 2019
  • [20] Tactile-Driven Gentle Grasping for Human-Robot Collaborative Tasks
    Ford, Christopher J.
    Li, Haoran
    Lloyd, John
    Catalano, Manuel G.
    Bianchi, Matteo
    Psomopoulou, Efi
    Lepora, Nathan F.
2023 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2023), 2023: 10394-10400