Generalizable Local Feature Pre-training for Deformable Shape Analysis

Cited by: 3
Authors
Attaiki, Souhaib [1 ]
Li, Lei [1 ]
Ovsjanikov, Maks [1 ]
Affiliations
[1] LIX, Ecole Polytechnique, IP Paris, Paris, France
Published in
2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023
Funding
European Research Council (ERC)
Keywords
Object recognition; Geometry
DOI
10.1109/CVPR52729.2023.01312
CLC Number (Chinese Library Classification)
TP18 [Artificial intelligence theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Transfer learning is fundamental for addressing problems in settings with little training data. While several transfer learning approaches have been proposed in 3D, unfortunately, these solutions typically operate at the level of an entire 3D object or even an entire scene and thus, as we show, fail to generalize to new classes, such as deformable organic shapes. In addition, there is currently a lack of understanding of what makes pre-trained features transferable across significantly different 3D shape categories. In this paper, we take a step toward addressing these challenges. First, we analyze the link between feature locality and transferability in tasks involving deformable 3D objects, while also comparing different backbones and losses for local feature pre-training. We observe that with proper training, learned features can be useful in such tasks, but, crucially, only with an appropriate choice of the receptive field size. We then propose a differentiable method for optimizing the receptive field within 3D transfer learning. Jointly, this leads to the first learnable features that can successfully generalize to unseen classes of 3D shapes such as humans and animals. Our extensive experiments show that this approach leads to state-of-the-art results on several downstream tasks such as segmentation, shape correspondence, and classification. Our code is available at https://github.com/pvnieo/vader.
Pages: 13650-13661
Page count: 12
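The abstract describes making the receptive-field size of a local feature extractor differentiable, so that it can be optimized directly during transfer. The sketch below is not the authors' method (their implementation is in the linked repository); it only illustrates, under stated assumptions, one common way such an idea can be realized in PyTorch: a hard neighborhood-radius cutoff is relaxed into a smooth Gaussian weighting over point distances, so the radius parameter receives gradients from any downstream loss. The class name SoftReceptiveField and all numeric values are illustrative assumptions.

```python
import torch
import torch.nn as nn


class SoftReceptiveField(nn.Module):
    """Hypothetical sketch of a learnable receptive field for point features.

    The hard cutoff ||x_j - x_i|| < r is relaxed into a smooth Gaussian
    weight exp(-(d/r)^2), so gradients can flow into the radius r.
    """

    def __init__(self, init_radius: float = 0.1):
        super().__init__()
        # Optimize log(r) so the radius stays positive during training.
        self.log_radius = nn.Parameter(torch.tensor(float(init_radius)).log())

    def forward(self, points: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
        # points: (N, 3) vertex positions; feats: (N, C) per-point features.
        radius = self.log_radius.exp()
        dists = torch.cdist(points, points)          # (N, N) pairwise distances
        weights = torch.exp(-(dists / radius) ** 2)  # soft membership in the ball
        weights = weights / weights.sum(dim=1, keepdim=True).clamp_min(1e-8)
        return weights @ feats                       # (N, C) aggregated features


if __name__ == "__main__":
    # Usage: the radius receives gradients from any downstream loss.
    pts, f = torch.rand(1024, 3), torch.rand(1024, 64)
    layer = SoftReceptiveField(init_radius=0.05)
    out = layer(pts, f)
    out.sum().backward()
    print(layer.log_radius.grad is not None)  # True: the receptive field is learnable
```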