Dual Track Multimodal Automatic Learning through Human-Robot Interaction

Cited: 0
Authors
Jiang, Shuqiang [1 ,2 ]
Min, Weiqing [1 ,2 ]
Li, Xue [1 ,3 ]
Wang, Huayang [1 ,2 ]
Sun, Jian [1 ,3 ]
Zhou, Jiaqi [1 ]
Affiliations
[1] Chinese Acad Sci, Key Lab Intelligent Informat Proc, Inst Comp Technol, Beijing, Peoples R China
[2] Univ Chinese Acad Sci, Beijing, Peoples R China
[3] Shandong Univ Sci & Technol, Qingdao, Peoples R China
Funding
China Postdoctoral Science Foundation; Beijing Natural Science Foundation; National Natural Science Foundation of China;
Keywords
ONLINE;
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Human beings constantly improve their cognitive ability by learning automatically from interaction with the environment. Two important aspects of such automatic learning are visual perception and knowledge acquisition, and fusing them is vital for improving the intelligence and interaction performance of robots. Although many methods for automatic knowledge extraction and recognition have been studied, little work has integrated the two into a unified framework that enables joint visual perception and knowledge acquisition. To solve this problem, we propose a Dual Track Multimodal Automatic Learning (DTMAL) system consisting of two components: Hybrid Incremental Learning (HIL) on the vision track and Multimodal Knowledge Extraction (MKE) on the knowledge track. HIL incrementally improves the recognition ability of the system by learning new object samples and new object concepts. MKE constructs and updates multimodal knowledge items from the objects newly recognized by HIL and from other knowledge mined from the multimodal signals. The fusion of the two tracks is a process of mutual promotion in which both tracks jointly contribute to dual-track learning. We conducted experiments through human-machine interaction, and the results validate the effectiveness of the proposed system.
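The abstract describes the dual-track architecture only at a high level. As a rough illustration of the mutual-promotion loop, the Python sketch below pairs a toy incremental recognizer (vision track) with a multimodal knowledge store (knowledge track). All names (HybridIncrementalLearner, MultimodalKnowledgeExtractor, interaction_step) and the nearest-prototype recognizer are assumptions made for illustration, not the paper's actual method.

```python
# Illustrative sketch of the dual-track loop described in the abstract.
# All class and function names here are hypothetical.

class HybridIncrementalLearner:
    """Vision track (HIL): incrementally learns new samples and new concepts."""

    def __init__(self, threshold=0.5):
        self.threshold = threshold
        self.prototypes = {}  # concept name -> list of feature vectors

    def recognize(self, features):
        """Nearest-prototype matching; returns (concept, score) or None."""
        best, best_score = None, 0.0
        for concept, protos in self.prototypes.items():
            score = max(cosine(features, p) for p in protos)
            if score > best_score:
                best, best_score = concept, score
        return (best, best_score) if best_score >= self.threshold else None

    def learn(self, concept, features):
        """Add a new sample; an unseen concept is created on the fly."""
        self.prototypes.setdefault(concept, []).append(features)


class MultimodalKnowledgeExtractor:
    """Knowledge track (MKE): builds and updates multimodal knowledge items."""

    def __init__(self):
        self.knowledge = {}  # concept name -> {modality: value}

    def update(self, concept, modality, value):
        self.knowledge.setdefault(concept, {})[modality] = value


def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0


def interaction_step(hil, mke, features, human_label=None, attributes=None):
    """One interaction round: the two tracks promote each other."""
    result = hil.recognize(features)
    if result is None and human_label is not None:
        # Vision track fails -> the human supplies a label and HIL
        # incrementally learns the new object concept.
        hil.learn(human_label, features)
        result = (human_label, 1.0)
    if result is not None:
        concept, _ = result
        # Knowledge track: attach multimodal attributes to the recognized object.
        for modality, value in (attributes or {}).items():
            mke.update(concept, modality, value)
    return result


# Example: the first round teaches "cup"; the second round recognizes it.
hil, mke = HybridIncrementalLearner(), MultimodalKnowledgeExtractor()
interaction_step(hil, mke, [0.9, 0.1], human_label="cup",
                 attributes={"speech": "a red cup", "touch": "ceramic"})
print(interaction_step(hil, mke, [0.85, 0.15]))  # -> ('cup', 0.99...)
```

In this toy version the promotion is one-directional per round: human input repairs a recognition failure (improving HIL), and each successful recognition enriches the knowledge items (feeding MKE), which mirrors the cooperation the abstract describes.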
Pages: 4485-4491
Number of pages: 7
Related papers
50 records in total
  • [21] Strategies of human-robot interaction for automatic microassembly
    Ferreira, A
    2003 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, VOLS 1-3, PROCEEDINGS, 2003, : 3076 - 3081
  • [22] Automatic domain modeling for human-robot interaction
    Savic, Srdan Z.
    Gnjatovic, Milan
    Stefanovic, Darko
    Lalic, Bojan
    Macek, Nemanja
    INTELLIGENT SERVICE ROBOTICS, 2020, 13 (01) : 99 - 111
  • [23] Continual Learning Through Human-Robot Interaction: Human Perceptions of a Continual Learning Robot in Repeated Interactions
    Ayub, Ali
    De Francesco, Zachary
    Holthaus, Patrick
    Nehaniv, Chrystopher L.
    Dautenhahn, Kerstin
    INTERNATIONAL JOURNAL OF SOCIAL ROBOTICS, 2025, 17 (02) : 277 - 296
  • [24] Constructive learning for human-robot interaction
    Singh, Amarjot
    Karanam, Srikrishna
    Kumar, Devinder
IEEE Potentials, 2013, 32 (04) : 13 - 19
  • [25] Online learning for human-robot interaction
    Raducanu, Bogdan
    Vitria, Jordi
2007 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, VOLS 1-8, 2007, : 3342+
  • [26] MULTIMODAL HUMAN ACTION RECOGNITION IN ASSISTIVE HUMAN-ROBOT INTERACTION
    Rodomagoulakis, I.
    Kardaris, N.
    Pitsikalis, V.
    Mavroudi, E.
    Katsamanis, A.
    Tsiami, A.
    Maragos, P.
    2016 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING PROCEEDINGS, 2016, : 2702 - 2706
  • [27] Multimodal Engagement Prediction in Multiperson Human-Robot Interaction
    Abdelrahman, Ahmed A.
    Strazdas, Dominykas
    Khalifa, Aly
    Hintz, Jan
    Hempel, Thorsten
    Al-Hamadi, Ayoub
    IEEE ACCESS, 2022, 10 : 61980 - 61991
  • [28] Challenges of Multimodal Interaction in the Era of Human-Robot Coexistence
    Zhang, Zhengyou
    ICMI'19: PROCEEDINGS OF THE 2019 INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION, 2019, : 2 - 2
  • [29] A unified multimodal control framework for human-robot interaction
    Cherubini, Andrea
    Passama, Robin
    Fraisse, Philippe
    Crosnier, Andre
    ROBOTICS AND AUTONOMOUS SYSTEMS, 2015, 70 : 106 - 115
  • [30] Multimodal QOL Estimation During Human-Robot Interaction
    Nakagawa, Satoshi
    Kuniyoshi, Yasuo
    2024 IEEE INTERNATIONAL CONFERENCE ON DIGITAL HEALTH, ICDH 2024, 2024, : 23 - 32