Visual-tactile fusion object classification method based on adaptive feature weighting

Cited by: 2
Authors
Zhang, Peng [1 ,4 ]
Bai, Lu [1 ]
Shan, Dongri [2 ]
Wang, Xiaofang [1 ]
Li, Shuang [1 ]
Zou, Wenkai [1 ]
Chen, Zhenxue [3 ]
Affiliations
[1] Qilu Univ Technol, Shandong Acad Sci, Sch Informat & Automat Engn, Jinan, Peoples R China
[2] Qilu Univ Technol, Shandong Acad Sci, Sch Mech Engn, Jinan, Peoples R China
[3] Shandong Univ, Sch Control Sci & Engn, Jinan, Peoples R China
[4] Qilu Univ Technol, Shandong Acad Sci, Sch Informat & Automat Engn, Jinan 250353, Peoples R China
Keywords
Object classification; visual-tactile fusion; adaptive feature weighting; lightweight model; grasp
DOI
10.1177/17298806231191947
Chinese Library Classification
TP24 [Robotics]
Discipline Classification Code
080202; 1405
Abstract
Visual-tactile fusion information plays a crucial role in robotic object classification. The fusion module in existing visual-tactile fusion models directly concatenates visual and tactile features at the feature layer; however, the contributions of visual and tactile features to classification differ across objects. Moreover, direct concatenation may overlook the features that are most beneficial for classification, and it increases computational cost and reduces classification efficiency. To use object feature information more effectively and to further improve the efficiency and accuracy of robotic object classification, this article proposes a visual-tactile fusion object classification method based on adaptive feature weighting. First, a lightweight feature extraction module extracts the visual and tactile features of each object. Then, the two feature vectors are fed into an adaptive weighted fusion module. Finally, the fused feature vector is passed to a fully connected layer for classification, yielding the categories and physical attributes of the objects. Extensive experiments are performed on the public Penn Haptic Adjective Corpus 2 dataset and the newly developed Visual-Haptic Adjective Corpus 52 dataset. On the Penn Haptic Adjective Corpus 2 dataset, the proposed method achieves an area under the curve of 0.9750, a 1.92% improvement over the best area under the curve obtained by existing state-of-the-art methods, and it also achieves the best training and inference times among those methods. On the Visual-Haptic Adjective Corpus 52 dataset, the method achieves an area under the curve of 0.9827 and an accuracy of 0.9850, with an inference time of 1.559 s/sheet, demonstrating the effectiveness of the proposed method.
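The abstract's core idea, weighting each modality's features adaptively before fusion instead of concatenating them directly, can be illustrated with a minimal sketch. This is not the authors' architecture: the gating layer, weights, and dimensions below are hypothetical stand-ins showing how a learned score per modality (normalized with a softmax) can rescale the visual and tactile feature vectors before they reach the classifier.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def adaptive_weighted_fusion(visual_feat, tactile_feat, W, b):
    """Toy adaptive weighting: a tiny gating layer scores the two
    modalities, softmax turns the scores into weights that sum to 1,
    and each feature vector is rescaled by its weight before being
    concatenated for a downstream fully connected classifier."""
    joint = np.concatenate([visual_feat, tactile_feat])
    scores = W @ joint + b           # shape (2,): one score per modality
    alpha = softmax(scores)          # adaptive weights, alpha.sum() == 1
    fused = np.concatenate([alpha[0] * visual_feat,
                            alpha[1] * tactile_feat])
    return fused, alpha

# Stand-in feature vectors and randomly initialized gating parameters.
rng = np.random.default_rng(0)
v = rng.standard_normal(8)           # "visual" feature vector
t = rng.standard_normal(8)           # "tactile" feature vector
W = rng.standard_normal((2, 16)) * 0.1
b = np.zeros(2)

fused, alpha = adaptive_weighted_fusion(v, t, W, b)
print(fused.shape, alpha)
```

In a real model the gating parameters would be trained jointly with the feature extractors, so objects whose tactile signal is more discriminative (e.g. soft or textured items) would receive a larger tactile weight at inference time.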
Pages: 11
Related Papers
50 records in total
  • [1] Visual-Tactile Fusion for Object Recognition
    Liu, Huaping
    Yu, Yuanlong
    Sun, Fuchun
    Gu, Jason
    IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING, 2017, 14 (02) : 996 - 1008
  • [2] A glove-based system for object recognition via visual-tactile fusion
    Fang, Bin
    Sun, Fuchun
    Liu, Huaping
    Tan, Chuanqi
    Guo, Di
    SCIENCE CHINA-INFORMATION SCIENCES, 2019, 62 (05)
  • [5] Visual-Tactile Fusion for Transparent Object Grasping in Complex Backgrounds
    Li, Shoujie
    Yu, Haixin
    Ding, Wenbo
    Liu, Houde
    Ye, Linqi
    Xia, Chongkun
    Wang, Xueqian
    Zhang, Xiao-Ping
    IEEE TRANSACTIONS ON ROBOTICS, 2023, 39 (05) : 3838 - 3856
  • [6] VITO-Transformer: A Visual-Tactile Fusion Network for Object Recognition
    Li, Baojiang
    Bai, Jibo
    Qiu, Shengjie
    Wang, Haiyan
    Guo, Yuting
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2023, 72
  • [7] Alignment and Multi-Scale Fusion for Visual-Tactile Object Recognition
    Wei, Fuyang
    Zhao, Jianhui
    Shan, Chudong
    Yuan, Zhiyong
    2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022,
  • [8] A Human-Like Siamese-Based Visual-Tactile Fusion Model for Object Recognition
    Wang, Fei
    Li, Yucheng
    Tao, Liangze
    Wu, Juan
    Huang, Gewen
    IEEE TRANSACTIONS ON COGNITIVE AND DEVELOPMENTAL SYSTEMS, 2024, 16 (03) : 850 - 863
  • [9] Robotic grasp slip detection based on visual-tactile fusion
    Cui S.
    Wei J.
    Wang R.
    Wang S.
    Huazhong Keji Daxue Xuebao (Ziran Kexue Ban)/Journal of Huazhong University of Science and Technology (Natural Science Edition), 2020, 48 (01): : 98 - 102
  • [10] Visual Perception based Adaptive Feature Fusion for Visual Object Tracking
    Krieger, Evan
    Asari, Vijayan K.
    2017 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC), 2017, : 1345 - 1350