Dual Track Multimodal Automatic Learning through Human-Robot Interaction

Cited by: 0
Authors
Jiang, Shuqiang [1 ,2 ]
Min, Weiqing [1 ,2 ]
Li, Xue [1 ,3 ]
Wang, Huayang [1 ,2 ]
Sun, Jian [1 ,3 ]
Zhou, Jiaqi [1 ]
Affiliations
[1] Chinese Acad Sci, Key Lab Intelligent Informat Proc, Inst Comp Technol, Beijing, Peoples R China
[2] Univ Chinese Acad Sci, Beijing, Peoples R China
[3] Shandong Univ Sci & Technol, Qingdao, Peoples R China
Funding
China Postdoctoral Science Foundation; Beijing Natural Science Foundation; National Natural Science Foundation of China
Keywords
ONLINE;
DOI
Not available
CLC Number
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Human beings constantly improve their cognitive ability by automatically learning from interaction with the environment. Two important aspects of such automatic learning are visual perception and knowledge acquisition, and fusing the two is vital for improving the intelligence and interaction performance of robots. Many methods for automatic knowledge extraction and recognition have been widely studied. However, little work has focused on integrating automatic knowledge extraction and recognition into a unified framework that enables joint visual perception and knowledge acquisition. To address this problem, we propose a Dual Track Multimodal Automatic Learning (DTMAL) system consisting of two components: Hybrid Incremental Learning (HIL) on the vision track and Multimodal Knowledge Extraction (MKE) on the knowledge track. HIL incrementally improves the recognition ability of the system by learning new object samples and new object concepts. MKE constructs and updates multimodal knowledge items, based on the new objects recognized by HIL and on other knowledge, by exploring multimodal signals. The fusion of the two tracks is a mutually reinforcing process in which both tracks jointly contribute to dual-track learning. We conducted experiments through human-machine interaction, and the results validate the effectiveness of the proposed system.
Pages: 4485-4491
Page count: 7
Related Papers
50 records in total
  • [1] Knowledge acquisition through human-robot multimodal interaction
    Randelli, Gabriele
    Bonanni, Taigo Maria
    Iocchi, Luca
    Nardi, Daniele
    INTELLIGENT SERVICE ROBOTICS, 2013, 6 (01) : 19 - 31
  • [2] A dialogue manager for multimodal human-robot interaction and learning of a humanoid robot
    Holzapfel, Hartwig
    INDUSTRIAL ROBOT-THE INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH AND APPLICATION, 2008, 35 (06): : 528 - 535
  • [3] Learning Multimodal Confidence for Intention Recognition in Human-Robot Interaction
    Zhao, Xiyuan
    Li, Huijun
    Miao, Tianyuan
    Zhu, Xianyi
    Wei, Zhikai
    Tan, Lifen
    Song, Aiguo
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2024, 9 (09): : 7819 - 7826
  • [4] Multimodal Interaction for Human-Robot Teams
    Burke, Dustin
    Schurr, Nathan
    Ayers, Jeanine
    Rousseau, Jeff
    Fertitta, John
    Carlin, Alan
    Dumond, Danielle
    UNMANNED SYSTEMS TECHNOLOGY XV, 2013, 8741
  • [5] Understanding and learning of gestures through human-robot interaction
    Kuno, Y
    Murashima, T
    Shimada, N
    Shirai, Y
    2000 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS 2000), VOLS 1-3, PROCEEDINGS, 2000, : 2133 - 2138
  • [6] MILD: Multimodal Interactive Latent Dynamics for Learning Human-Robot Interaction
    Prasad, Vignesh
    Koert, Dorothea
    Stock-Homburg, Ruth
    Peters, Jan
    Chalvatzaki, Georgia
    2022 IEEE-RAS 21ST INTERNATIONAL CONFERENCE ON HUMANOID ROBOTS (HUMANOIDS), 2022, : 472 - 479
  • [7] MULTIMODAL SIGNAL PROCESSING AND LEARNING ASPECTS OF HUMAN-ROBOT INTERACTION FOR AN ASSISTIVE BATHING ROBOT
    Zlatintsi, A.
    Rodomagoulakis, I.
    Koutras, P.
    Dometios, A. C.
    Pitsikalis, V.
    Tzafestas, C. S.
    Maragos, P.
    2018 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2018, : 3171 - 3175
  • [8] Recent advancements in multimodal human-robot interaction
    Su, Hang
    Qi, Wen
    Chen, Jiahao
    Yang, Chenguang
    Sandoval, Juan
    Laribi, Med Amine
    FRONTIERS IN NEUROROBOTICS, 2023, 17
  • [9] A Dialogue System for Multimodal Human-Robot Interaction
    Lucignano, Lorenzo
    Cutugno, Francesco
    Rossi, Silvia
    Finzi, Alberto
    ICMI'13: PROCEEDINGS OF THE 2013 ACM INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION, 2013, : 197 - 204
  • [10] Multimodal Information Fusion for Human-Robot Interaction
    Luo, Ren C.
    Wu, Y. C.
    Lin, P. H.
    2015 IEEE 10TH JUBILEE INTERNATIONAL SYMPOSIUM ON APPLIED COMPUTATIONAL INTELLIGENCE AND INFORMATICS (SACI), 2015, : 535 - 540