Cross-View Human Intention Recognition for Human-Robot Collaboration

Cited by: 4
Authors
Ni, Shouxiang [1 ]
Zhao, Lindong [1 ]
Li, Ang [1 ]
Wu, Dan [2 ]
Zhou, Liang [1 ]
Affiliations
[1] Nanjing Univ Posts & Telecommun, Nanjing, Peoples R China
[2] Army Engn Univ PLA, Nanjing, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Measurement; Face recognition; Wireless networks; Semantics; Collaboration; Machine learning; Production facilities; Human-robot interaction;
DOI
10.1109/MWC.018.2200514
CLC Number
TP3 [Computing Technology, Computer Technology];
Discipline Code
0812;
Abstract
Benefiting from the promise of sixth-generation (6G) wireless networks, multimodal machine learning, which exploits the complementarity among video, audio, and haptic signals, has become a key enabler of human intention recognition, which in turn is critical to effective human-robot collaboration in Industry 4.0 scenarios. However, because multimodal human intention recognition is limited by expensive equipment and demanding deployment environments, it is hard to strike an efficient trade-off between inference accuracy and system overhead. How to extract richer intention semantics from readily available video therefore emerges as a fundamental issue for human intention recognition. In this article, we address this issue with cross-view human intention recognition and demonstrate the effectiveness of our method using well-designed evaluation metrics. Specifically, we first compensate for the scarcity of intention semantics in the body view by adding a face view. Second, we deploy a cross-view generative model to capture the intention semantics induced by the mutual generation of the two views. Finally, in human-robot collaboration experiments, our method approaches human performance in both response time and inference accuracy.
Pages: 189-195
Number of pages: 7
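
The cross-view generative step described in the abstract can be pictured as two view-specific encoders (body, face) whose latent features are trained to mutually generate the opposite view's features, with an intention classifier on the fused latents. The sketch below is a minimal, hypothetical PyTorch rendering of that idea; all names (CrossViewModel, loss_fn), layer sizes, and the MSE cross-reconstruction loss are our own illustrative assumptions, not the authors' published architecture.

    # Minimal sketch of a cross-view generative model: each view's latent
    # must reconstruct the other view, so that intention semantics shared
    # across body and face views are captured in the latents.
    import torch
    import torch.nn as nn

    class CrossViewModel(nn.Module):
        def __init__(self, body_dim=512, face_dim=512, latent_dim=128, n_intents=10):
            super().__init__()
            self.enc_body = nn.Sequential(nn.Linear(body_dim, latent_dim), nn.ReLU())
            self.enc_face = nn.Sequential(nn.Linear(face_dim, latent_dim), nn.ReLU())
            # Cross-view generators: each latent reconstructs the *other* view.
            self.gen_body2face = nn.Linear(latent_dim, face_dim)
            self.gen_face2body = nn.Linear(latent_dim, body_dim)
            # Intention classifier over the fused latents.
            self.classifier = nn.Linear(2 * latent_dim, n_intents)

        def forward(self, body_feat, face_feat):
            z_body = self.enc_body(body_feat)
            z_face = self.enc_face(face_feat)
            face_hat = self.gen_body2face(z_body)  # body latent -> face view
            body_hat = self.gen_face2body(z_face)  # face latent -> body view
            logits = self.classifier(torch.cat([z_body, z_face], dim=-1))
            return logits, body_hat, face_hat

    def loss_fn(logits, labels, body_feat, face_feat, body_hat, face_hat, alpha=0.5):
        # Intention classification loss plus a mutual-generation
        # (cross-reconstruction) term weighted by alpha.
        ce = nn.functional.cross_entropy(logits, labels)
        recon = (nn.functional.mse_loss(body_hat, body_feat)
                 + nn.functional.mse_loss(face_hat, face_feat))
        return ce + alpha * recon

    # Example usage with random tensors standing in for real body/face embeddings:
    model = CrossViewModel()
    body = torch.randn(8, 512)
    face = torch.randn(8, 512)
    labels = torch.randint(0, 10, (8,))
    logits, body_hat, face_hat = model(body, face)
    loss = loss_fn(logits, labels, body, face, body_hat, face_hat)

In this reading, the cross-reconstruction term is what forces each view's latent to carry semantics recoverable from the other view; the exact generative objective used in the article may differ.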