ECO-3D: Equivariant Contrastive Learning for Pre-training on Perturbed 3D Point Cloud

Cited: 0
Authors
Wang, Ruibin [1 ]
Ying, Xianghua [1 ]
Xing, Bowei [1 ]
Yang, Jinfa [1 ]
Affiliations
[1] Peking Univ, Sch Intelligence Sci & Technol, Key Lab Machine Percept, MOE, Beijing, Peoples R China
Funding
National Key R&D Program of China
Keywords
DOI
None available
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
In this work, we investigate contrastive learning on perturbed point clouds and find that the contrasting process may widen the domain gap caused by random perturbations, making the pre-trained network fail to generalize on testing data. To this end, we propose the Equivariant COntrastive framework (ECO-3D), which closes the domain gap before contrasting, further introduces the equivariance property, and enables pre-training networks under more perturbation types to obtain meaningful features. Specifically, to close the domain gap, a pre-trained VAE is adopted to convert perturbed point clouds into less-perturbed point embeddings of similar domains and separated perturbation embeddings. Contrastive pairs can then be generated by mixing the point embeddings with different perturbation embeddings. Moreover, to pursue the equivariance property, a vector quantizer is adopted during VAE training, discretizing the perturbation embeddings into one-hot tokens that indicate the perturbation labels. By correctly predicting the perturbation labels from the perturbed point cloud, equivariance is encouraged in the learned features. Experiments on synthesized and real-world perturbed datasets show that ECO-3D outperforms most existing pre-training strategies on various downstream tasks, achieving state-of-the-art performance under many perturbation types.
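The abstract's two core mechanisms, vector-quantizing the perturbation embedding into one-hot tokens and mixing point embeddings with different perturbation embeddings to form contrastive pairs, can be illustrated with a minimal NumPy sketch. All names, shapes, and the additive mixing rule below are illustrative assumptions, not the authors' released code.

```python
import numpy as np

def vector_quantize(perturb_emb, codebook):
    """Snap each perturbation embedding to its nearest codebook entry.

    Returns the quantized vectors and one-hot tokens; in ECO-3D these
    tokens serve as perturbation labels for the equivariance objective.
    """
    # Pairwise distances to every codebook entry: (batch, codebook_size)
    dists = np.linalg.norm(perturb_emb[:, None, :] - codebook[None, :, :], axis=-1)
    idx = dists.argmin(axis=1)
    one_hot = np.eye(codebook.shape[0])[idx]
    return codebook[idx], one_hot

def make_contrastive_pair(point_emb, quantized_perturb, rng):
    """Build a positive pair by re-combining the same point embeddings
    with their own vs. shuffled perturbation embeddings (additive mixing
    is an assumption; the paper's mixing operator may differ)."""
    other = rng.permutation(quantized_perturb.shape[0])
    view_a = point_emb + quantized_perturb          # original perturbation
    view_b = point_emb + quantized_perturb[other]   # different perturbation
    return view_a, view_b

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 16))     # 8 hypothetical perturbation types, dim 16
perturb_emb = rng.normal(size=(4, 16))  # separated perturbation embeddings
point_emb = rng.normal(size=(4, 16))    # less-perturbed point embeddings

quantized, tokens = vector_quantize(perturb_emb, codebook)
view_a, view_b = make_contrastive_pair(point_emb, quantized, rng)
```

Because both views share the same point embeddings and differ only in perturbation, pulling them together in a contrastive loss encourages invariance to the perturbation, while predicting `tokens` from the perturbed input encourages the equivariance property the abstract describes.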
Pages: 2626-2634
Page count: 9
Related Papers
50 total
  • [21] Automated 3D Pre-Training for Molecular Property Prediction
    Wang, Xu
    Zhao, Huan
    Tu, Wei-wei
    Yao, Quanming
    PROCEEDINGS OF THE 29TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2023, 2023, : 2419 - 2430
  • [22] Advances in 3D pre-training and downstream tasks: a survey
    Yuenan Hou
    Xiaoshui Huang
    Shixiang Tang
    Tong He
    Wanli Ouyang
    Vicinagearth, 1 (1):
  • [23] SimIPU: Simple 2D Image and 3D Point Cloud Unsupervised Pre-training for Spatial-Aware Visual Representations
    Li, Zhenyu
    Chen, Zehui
    Li, Ang
    Fang, Liangji
    Jiang, Qinhong
    Liu, Xianming
    Jiang, Junjun
    Zhou, Bolei
    Zhao, Hang
THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / THE TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 1500 - 1508
  • [24] Mutual Information Driven Equivariant Contrastive Learning for 3D Action Representation Learning
    Lin, Lilang
    Zhang, Jiahang
    Liu, Jiaying
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2024, 33 : 1883 - 1897
  • [25] Improved Training for 3D Point Cloud Classification
    Paul, Sneha
    Patterson, Zachary
    Bouguila, Nizar
    STRUCTURAL, SYNTACTIC, AND STATISTICAL PATTERN RECOGNITION, S+SSPR 2022, 2022, 13813 : 253 - 263
  • [26] Boosting 3D Single Object Tracking with 2D Matching Distillation and 3D Pre-training
    Wu, Qiangqiang
    Xia, Yan
    Wan, Jia
    Chan, Antoni B.
    COMPUTER VISION - ECCV 2024, PT XII, 2025, 15070 : 270 - 288
  • [27] DACNet: A Dual-Attention Contrastive Learning Network for 3D Point Cloud Classification
    2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022,
  • [28] Contrastive pre-training and 3D convolution neural network for RNA and small molecule binding affinity prediction
    Sun, Saisai
    Gao, Lin
    BIOINFORMATICS, 2024, 40 (04)
  • [29] Learning from 3D (Point Cloud) Data
    Hsu, Winston H.
    PROCEEDINGS OF THE 27TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA (MM'19), 2019, : 2697 - 2698
  • [30] Learning multiview 3D point cloud registration
    Gojcic, Zan
    Zhou, Caifa
    Wegner, Jan D.
    Guibas, Leonidas J.
    Birdal, Tolga
    2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2020, : 1756 - 1766