Learn from Incomplete Tactile Data: Tactile Representation Learning with Masked Autoencoders

Cited by: 1
Authors:
Cao, Guanqun [1 ]
Jiang, Jiaqi [2 ]
Bollegala, Danushka [1 ]
Luo, Shan [2 ]
Affiliations:
[1] Univ Liverpool, Dept Comp Sci, Liverpool L69 3BX, England
[2] Kings Coll London, Dept Engn, London WC2R 2LS, England
Funding:
Engineering and Physical Sciences Research Council (EPSRC), UK
Keywords:
OBJECT PROPERTIES; PERCEPTION;
DOI:
10.1109/IROS55552.2023.10341788
Chinese Library Classification:
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes:
081104; 0812; 0835; 1405
Abstract:
Missing signals caused by occluded objects or unstable sensors are a common challenge during data collection. Such missing signals adversely affect the results obtained from the data, and the issue arises especially frequently in robotic tactile perception. In tactile perception, due to the limited workspace and the dynamic environment, contact between the tactile sensor and the object is often insufficient and unstable, which causes a partial loss of signals and thus leads to incomplete tactile data. The tactile data therefore contain fewer tactile cues and have low information density. In this paper, we propose a tactile representation learning method, named TacMAE, based on Masked Autoencoders, to address the problem of incomplete tactile data in tactile perception. In our framework, a portion of the tactile image is masked out to simulate missing contact regions. By reconstructing the missing signals in the tactile image, the trained model achieves a high-level understanding of surface geometry and tactile properties from limited tactile cues. Experimental results on tactile texture recognition show that TacMAE achieves a recognition accuracy of 71.4% in zero-shot transfer and 85.8% after fine-tuning, which are 15.2% and 8.2% higher, respectively, than the results obtained without masked modeling. Extensive experiments on YCB objects demonstrate the knowledge transferability of the proposed method and its potential to improve the efficiency of tactile exploration.
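To illustrate the masked-modeling idea described in the abstract, the sketch below masks a fraction of tactile-image patches and trains a small transformer autoencoder to reconstruct the hidden patches. This is a minimal, generic MAE-style sketch in PyTorch under assumed settings; the class name TinyTactileMAE, the 0.75 mask ratio, and all layer sizes are illustrative assumptions, not the authors' TacMAE implementation.

```python
# Minimal MAE-style masked modeling on tactile images (illustrative sketch;
# architecture and hyperparameters are assumptions, not the paper's TacMAE).
import torch
import torch.nn as nn

class TinyTactileMAE(nn.Module):
    def __init__(self, img_size=224, patch_size=16, dim=192, mask_ratio=0.75):
        super().__init__()
        self.patch_size = patch_size
        self.mask_ratio = mask_ratio
        n_patches = (img_size // patch_size) ** 2
        patch_dim = 3 * patch_size * patch_size
        self.patch_embed = nn.Linear(patch_dim, dim)
        self.pos_embed = nn.Parameter(torch.zeros(1, n_patches, dim))
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True), num_layers=4)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.decoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True), num_layers=2)
        self.head = nn.Linear(dim, patch_dim)  # reconstruct raw pixels per patch

    def patchify(self, imgs):
        # (B, 3, H, W) -> (B, N, 3*p*p) non-overlapping patches
        B, C, H, W = imgs.shape
        p = self.patch_size
        x = imgs.reshape(B, C, H // p, p, W // p, p)
        return x.permute(0, 2, 4, 1, 3, 5).reshape(B, -1, C * p * p)

    def forward(self, imgs):
        patches = self.patchify(imgs)
        tokens = self.patch_embed(patches) + self.pos_embed
        B, N, D = tokens.shape
        # Randomly keep a subset of patches; the rest simulate missing contact.
        n_keep = int(N * (1 - self.mask_ratio))
        noise = torch.rand(B, N, device=imgs.device)
        ids_keep = noise.argsort(dim=1)[:, :n_keep]
        visible = torch.gather(tokens, 1, ids_keep.unsqueeze(-1).expand(-1, -1, D))
        latent = self.encoder(visible)  # encode visible patches only
        # Scatter encoded tokens back; fill masked slots with the mask token.
        full = self.mask_token.expand(B, N, D).clone()
        full.scatter_(1, ids_keep.unsqueeze(-1).expand(-1, -1, D), latent)
        pred = self.head(self.decoder(full + self.pos_embed))
        # Reconstruction loss computed on masked patches only.
        mask = torch.ones(B, N, device=imgs.device)
        mask.scatter_(1, ids_keep, 0.0)
        return (((pred - patches) ** 2).mean(-1) * mask).sum() / mask.sum()

model = TinyTactileMAE()
loss = model(torch.randn(2, 3, 224, 224))  # random stand-in for tactile images
loss.backward()
```

After such pre-training, the decoder would typically be discarded and the encoder reused for downstream recognition, which is consistent with the zero-shot and fine-tuning evaluations reported in the abstract.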
Pages: 10800-10805
Number of pages: 6