A framework for the fusion of visual and tactile modalities for improving robot perception

Cited: 10
Authors
Zhang, Wenchang [1 ,2 ]
Sun, Fuchun [1 ]
Wu, Hang [2 ]
Yang, Haolin [1 ]
Affiliations
[1] Tsinghua Univ, State Key Lab Intelligent Technol & Syst, Beijing 100084, Peoples R China
[2] Inst Med Equipment, Tianjin 300161, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
multi-modal fusion; robot perception; vision; tactile; classification; sparse representation
DOI
10.1007/s11432-016-0158-2
CLC Number
TP [Automation and Computer Technology]
Discipline Code
0812
Abstract
Robots should ideally perceive objects using human-like multi-modal sensing such as vision, tactile feedback, smell, and hearing. However, each sensing modality represents its features differently, and the feature-extraction methods also differ across modalities. Some modal features, such as those of vision, capture a spatial property and are static, while others, such as those of tactile feedback, capture a temporal pattern and are dynamic. These differences make it difficult to fuse the data at the feature level for robot perception. In this study, we propose a framework for the fusion of visual and tactile modal features, comprising feature extraction, feature-vector normalization and generation based on bag-of-system (BoS), and coding by robust multi-modal joint sparse representation (RM-JSR) followed by classification, thereby enabling robot perception to solve the problem of fusing diverse modal data at the feature level. Finally, comparative experiments are carried out to demonstrate the performance of this framework.
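To make the pipeline in the abstract concrete (per-modality feature extraction, normalization to fixed-length vectors, sparse coding over a dictionary of labeled training samples, and residual-based classification), the following is a minimal Python sketch. It is not the paper's RM-JSR: the BoS normalization step is stood in for by plain L2 normalization, and the joint multi-modal coding is stood in for by single-dictionary l1 sparse coding via scikit-learn's Lasso. The names normalize and src_predict, and the toy data, are illustrative assumptions, not the authors' code.

import numpy as np
from sklearn.linear_model import Lasso

def normalize(v):
    # L2-normalize a feature vector (placeholder for the paper's
    # BoS-based normalization/generation step).
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def src_predict(D, labels, x, alpha=0.01):
    # Sparse-representation classification: code the query x over the
    # dictionary D (columns = fused training features), then assign
    # the class whose coefficients reconstruct x with the smallest
    # residual.
    lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
    lasso.fit(D, x)          # l1-regularized least squares: x ~ D @ a
    a = lasso.coef_
    best_class, best_res = None, np.inf
    for c in np.unique(labels):
        a_c = np.where(labels == c, a, 0.0)  # keep class-c coefficients only
        res = np.linalg.norm(x - D @ a_c)
        if res < best_res:
            best_class, best_res = c, res
    return best_class

# Toy data: 40 fused (visual + tactile) training features, two classes.
rng = np.random.default_rng(0)
labels = np.repeat([0, 1], 20)
D = np.column_stack([normalize(rng.normal(loc=c, size=64)) for c in labels])
x = normalize(rng.normal(loc=1.0, size=64))  # query resembling class 1
print(src_predict(D, labels, x))             # expected output: 1

In the paper's actual method, the visual and tactile modalities are coded jointly so that they share a sparsity pattern, and the coding is made robust to corrupted samples; here those details are collapsed into a single Lasso call for brevity.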
Pages: 12
Related Papers
50 records in total
  • [1] A framework for the fusion of visual and tactile modalities for improving robot perception
    Wenchang ZHANG
    Fuchun SUN
    Hang WU
    Haolin YANG
    Science China Information Sciences, 2017, 60 (01): 145 - 156
  • [2] A framework for the fusion of visual and tactile modalities for improving robot perception
    Wenchang Zhang
    Fuchun Sun
    Hang Wu
    Haolin Yang
    Science China Information Sciences, 2017, 60
  • [3] Integration of visual and tactile modalities
    Summers, IR
    Du, J
    SCANDINAVIAN AUDIOLOGY, 1997, 26 : 29 - 33
  • [4] Multisensory surgical support system incorporating tactile, visual and auditory perception modalities
    Constantinou, CE
    Omata, S
    Murayama, Y
    FOURTH INTERNATIONAL CONFERENCE ON COMPUTER AND INFORMATION TECHNOLOGY, PROCEEDINGS, 2004, : 870 - 874
  • [5] Visual Perception Framework for an Intelligent Mobile Robot
    Lee, Chung-Yeon
    Lee, Hyundo
    Hwang, Injune
    Zhang, Byoung-Tak
    2020 17TH INTERNATIONAL CONFERENCE ON UBIQUITOUS ROBOTS (UR), 2020, : 612 - 616
  • [6] A mixed reality framework for microsurgery simulation with visual-tactile perception
    Xiang, Nan
    Liang, Hai-Ning
    Yu, Lingyun
    Yang, Xiaosong
    Zhang, Jian J.
    VISUAL COMPUTER, 2023, 39 (08): 3661 - 3673
  • [7] A mixed reality framework for microsurgery simulation with visual-tactile perception
    Nan Xiang
    Hai-Ning Liang
    Lingyun Yu
    Xiaosong Yang
    Jian J. Zhang
    The Visual Computer, 2023, 39 : 3661 - 3673
  • [8] A sensorimotor account of visual and tactile integration for depth perception: an iCub robot experiment
    Sanchez-Fibla, Marti
    Moulin-Frier, Clement
    Verschure, Paul F. M. J.
    2017 THE SEVENTH JOINT IEEE INTERNATIONAL CONFERENCE ON DEVELOPMENT AND LEARNING AND EPIGENETIC ROBOTICS (ICDL-EPIROB), 2017, : 86 - 91
  • [9] Multi-Sensory Surgical Support System Incorporating Tactile, Visual and Auditory Perception Modalities
    Omata, Sadao
    Murayama, Yoshinobu
    Constantinou, Christos E.
    MEDICINE MEETS VIRTUAL REALITY 13: THE MAGICAL NEXT BECOMES THE MEDICAL NOW, 2005, 111 : 369 - 371
  • [10] UNILATERAL SPATIAL NEGLECT IN VISUAL AND TACTILE MODALITIES
    FUJII, T
    FUKATSU, R
    KIMURA, I
    SASO, SI
    KOGURE, K
    CORTEX, 1991, 27 (02) : 339 - 343