Dual Track Multimodal Automatic Learning through Human-Robot Interaction

Cited by: 0
Authors
Jiang, Shuqiang [1 ,2 ]
Min, Weiqing [1 ,2 ]
Li, Xue [1 ,3 ]
Wang, Huayang [1 ,2 ]
Sun, Jian [1 ,3 ]
Zhou, Jiaqi [1 ]
Affiliations
[1] Chinese Acad Sci, Key Lab Intelligent Informat Proc, Inst Comp Technol, Beijing, Peoples R China
[2] Univ Chinese Acad Sci, Beijing, Peoples R China
[3] Shandong Univ Sci & Technol, Qingdao, Peoples R China
Funding
China Postdoctoral Science Foundation; Beijing Natural Science Foundation; National Natural Science Foundation of China
Keywords
ONLINE;
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Human beings constantly improve their cognitive ability through automatic learning from interaction with the environment. Two important aspects of automatic learning are visual perception and knowledge acquisition, and fusing them is vital for improving the intelligence and interaction performance of robots. Many methods for automatic knowledge extraction and recognition have been widely studied, but little work integrates the two into a unified framework that enables joint visual perception and knowledge acquisition. To address this problem, we propose a Dual Track Multimodal Automatic Learning (DTMAL) system, which consists of two components: Hybrid Incremental Learning (HIL) on the vision track and Multimodal Knowledge Extraction (MKE) on the knowledge track. HIL incrementally improves the recognition ability of the system by learning new object samples and new object concepts. MKE constructs and updates multimodal knowledge items based on the new objects recognized by HIL and on other knowledge, by exploring multimodal signals. The fusion of the two tracks is a process of mutual promotion, and both jointly contribute to dual-track learning. We conducted experiments through human-machine interaction, and the results validate the effectiveness of the proposed system.
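The abstract's dual-track loop (a vision track that learns incrementally, a knowledge track that builds items from what the vision track recognizes) can be illustrated with a minimal Python sketch. This is not the authors' implementation; the class and function names (`HybridIncrementalLearner`, `MultimodalKnowledgeExtractor`, `dual_track_step`) and the nearest-mean recognizer are hypothetical stand-ins chosen only to show how the two tracks interact in one interaction round.

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeItem:
    """A multimodal knowledge item: a concept plus accumulated attributes."""
    concept: str
    attributes: set = field(default_factory=set)

class HybridIncrementalLearner:
    """Vision track (stand-in for HIL): learns new samples and new concepts."""
    def __init__(self):
        self.concepts = {}  # concept name -> list of 1-D feature samples

    def learn(self, concept, sample):
        # Adding a sample for an unseen concept grows the label set
        # (class-incremental learning); otherwise it refines the concept.
        self.concepts.setdefault(concept, []).append(sample)

    def recognize(self, sample):
        # Nearest-mean classification over stored samples; the paper's
        # actual recognizer is not specified in this record.
        best, best_dist = None, float("inf")
        for concept, samples in self.concepts.items():
            mean = sum(samples) / len(samples)
            if abs(sample - mean) < best_dist:
                best, best_dist = concept, abs(sample - mean)
        return best

class MultimodalKnowledgeExtractor:
    """Knowledge track (stand-in for MKE): builds/updates knowledge items."""
    def __init__(self):
        self.items = {}  # concept name -> KnowledgeItem

    def update(self, concept, attribute):
        item = self.items.setdefault(concept, KnowledgeItem(concept))
        item.attributes.add(attribute)

def dual_track_step(hil, mke, concept, visual_sample, spoken_attribute):
    """One interaction round: the vision track learns the object, the
    knowledge track records an attribute for it, and the updated
    recognizer is queried — the 'mutual promotion' of the two tracks."""
    hil.learn(concept, visual_sample)
    mke.update(concept, spoken_attribute)
    return hil.recognize(visual_sample)
```

In use, each human-robot interaction round feeds both tracks: recognized objects from the vision track become anchors for new knowledge items, and the growing concept set lets later rounds recognize objects the system had never seen before.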
Pages: 4485-4491
Number of pages: 7
Related Papers
50 items in total
  • [31] A Multimodal Human-Robot Interaction Manager for Assistive Robots
    Abbasi, Bahareh
    Monaikul, Natawut
    Rysbek, Zhanibek
    Di Eugenio, Barbara
    Zefran, Milos
    2019 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2019, : 6756 - 6762
  • [32] DiGeTac Unit for Multimodal Communication in Human-Robot Interaction
    Al, Gorkem Anil
    Martinez-Hernandez, Uriel
    IEEE SENSORS LETTERS, 2024, 8 (05)
  • [33] Probabilistic Multimodal Modeling for Human-Robot Interaction Tasks
    Campbell, Joseph
    Stepputtis, Simon
    Amor, Heni Ben
    ROBOTICS: SCIENCE AND SYSTEMS XV, 2019,
  • [34] Multimodal Target Prediction for Rapid Human-Robot Interaction
    Mitra, Mukund
    Patil, Ameya
    Mothish, G. V. S.
    Kumar, Gyanig
    Mukhopadhyay, Abhishek
    Murthy, L. R. D.
    Chakrabarti, Partha Pratim
    Biswas, Pradipta
    COMPANION PROCEEDINGS OF 2024 29TH ANNUAL CONFERENCE ON INTELLIGENT USER INTERFACES, IUI 2024 COMPANION, 2024, : 18 - 23
  • [35] Online Learning of Varying Stiffness Through Physical Human-Robot Interaction
    Kronander, Klas
    Billard, Aude
    2012 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2012, : 1842 - 1849
  • [36] Learning Compliant Manipulation through Kinesthetic and Tactile Human-Robot Interaction
    Kronander, Klas
    Billard, Aude
    IEEE TRANSACTIONS ON HAPTICS, 2014, 7 (03) : 367 - 380
  • [37] Predicting Human Intentions in Human-Robot Hand-Over Tasks Through Multimodal Learning
    Wang, Weitian
    Li, Rui
    Chen, Yi
    Sun, Yi
    Jia, Yunyi
    IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING, 2022, 19 (03) : 2339 - 2353
  • [38] Automatic gesture recognition for intelligent human-robot interaction
    Lee, Seong-Whan
    PROCEEDINGS OF THE SEVENTH INTERNATIONAL CONFERENCE ON AUTOMATIC FACE AND GESTURE RECOGNITION - PROCEEDINGS OF THE SEVENTH INTERNATIONAL CONFERENCE, 2006, : 645 - 650
  • [39] Human-Robot Interaction and Collaborative Manipulation with Multimodal Perception Interface for Human
    Huang, Shouren
    Ishikawa, Masatoshi
    Yamakawa, Yuji
    PROCEEDINGS OF THE 7TH INTERNATIONAL CONFERENCE ON HUMAN-AGENT INTERACTION (HAI'19), 2019, : 289 - 291
  • [40] Multimodal Human-Robot Interaction from the Perspective of a Speech Scientist
    Rigoll, Gerhard
    SPEECH AND COMPUTER (SPECOM 2015), 2015, 9319 : 3 - 10