Generalizable Local Feature Pre-training for Deformable Shape Analysis

Cited by: 3
Authors
Attaiki, Souhaib [1 ]
Li, Lei [1 ]
Ovsjanikov, Maks [1 ]
Affiliations
[1] IP Paris, Ecole Polytech, LIX, Paris, France
Source
2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023
Funding
European Research Council
Keywords
OBJECT RECOGNITION; GEOMETRY;
DOI
10.1109/CVPR52729.2023.01312
CLC Number
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Transfer learning is fundamental for addressing problems in settings with little training data. While several transfer learning approaches have been proposed for 3D data, these solutions typically operate at the level of entire 3D objects or even scenes and thus, as we show, fail to generalize to new classes, such as deformable organic shapes. In addition, there is currently a lack of understanding of what makes pre-trained features transferable across significantly different 3D shape categories. In this paper, we take a step toward addressing these challenges. First, we analyze the link between feature locality and transferability in tasks involving deformable 3D objects, while also comparing different backbones and losses for local feature pre-training. We observe that with proper training, learned features can be useful in such tasks, but, crucially, only with an appropriate choice of the receptive field size. We then propose a differentiable method for optimizing the receptive field within 3D transfer learning. Together, these contributions lead to the first learnable features that can successfully generalize to unseen classes of 3D shapes such as humans and animals. Our extensive experiments show that this approach leads to state-of-the-art results on several downstream tasks such as segmentation, shape correspondence, and classification. Our code is available at https://github.com/pvnieo/vader.
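The abstract's key technical idea, a receptive field that can be optimized by gradient descent, can be illustrated with a minimal sketch. The snippet below is an assumption-laden illustration, not the paper's actual implementation (see the linked repository for that): it replaces a hard neighborhood cutoff with a smooth sigmoid weighting over pairwise distances, so gradients from the downstream loss flow to a learnable radius. All names here (`SoftReceptiveField`, `sharpness`, the initialization values) are hypothetical.

```python
import torch
import torch.nn as nn


class SoftReceptiveField(nn.Module):
    """Hypothetical sketch of a differentiable receptive field.

    Neighbor features are aggregated with a soft, sigmoid-shaped cutoff
    instead of a hard ball query, so the radius itself receives gradients
    and can be optimized jointly with the task loss.
    """

    def __init__(self, init_radius: float = 0.1, sharpness: float = 50.0):
        super().__init__()
        # Parameterize the radius in log space to keep it positive.
        self.log_radius = nn.Parameter(torch.tensor(init_radius).log())
        self.sharpness = sharpness  # controls how sharp the soft cutoff is

    def forward(self, points: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
        # points: (N, 3) vertex positions; feats: (N, C) per-point features.
        radius = self.log_radius.exp()
        dists = torch.cdist(points, points)  # (N, N) pairwise distances
        # ~1 inside the radius, ~0 outside, smooth (hence differentiable) in `radius`.
        weights = torch.sigmoid(self.sharpness * (radius - dists))
        weights = weights / weights.sum(dim=1, keepdim=True).clamp_min(1e-8)
        return weights @ feats  # (N, C) locally aggregated features
```

With a hard k-NN or fixed-radius ball query, the receptive field size would be a non-differentiable hyperparameter; the soft weighting above is one standard way to make such a size parameter trainable, which is the spirit of the optimization the abstract describes.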
Pages: 13650 - 13661
Number of pages: 12
Related Papers
50 items in total (showing 41-50)
  • [41] Realistic Channel Models Pre-training
    Huangfu, Yourui
    Wang, Jian
    Xu, Chen
    Li, Rong
    Ge, Yiqun
    Wang, Xianbin
    Zhang, Huazi
    Wang, Jun
    2019 IEEE GLOBECOM WORKSHOPS (GC WKSHPS), 2019,
  • [42] Blessing of Class Diversity in Pre-training
    Zhao, Yulai
    Chen, Jianshu
    Du, Simon S.
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 206, 2023, 206 : 283 - 305
  • [43] Rethinking pre-training on medical imaging
    Wen, Yang
    Chen, Leiting
    Deng, Yu
    Zhou, Chuan
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2021, 78
  • [44] Event Camera Data Pre-training
    Yang, Yan
    Pan, Liyuan
    Liu, Liu
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 10665 - 10675
  • [45] Quality Diversity for Visual Pre-Training
    Chavhan, Ruchika
    Gouk, Henry
    Li, Da
    Hospedales, Timothy
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION, ICCV, 2023, : 5361 - 5371
  • [46] Pre-training Methods in Information Retrieval
    Fan, Yixing
    Xie, Xiaohui
    Cai, Yinqiong
    Chen, Jia
    Ma, Xinyu
    Li, Xiangsheng
    Zhang, Ruqing
    Guo, Jiafeng
    FOUNDATIONS AND TRENDS IN INFORMATION RETRIEVAL, 2022, 16 (03): : 178 - 317
  • [47] Pre-training in Medical Data: A Survey
    Qiu, Yixuan
    Lin, Feng
    Chen, Weitong
    Xu, Miao
    MACHINE INTELLIGENCE RESEARCH, 2023, 20 (02) : 147 - 179
  • [48] Pre-Training Without Natural Images
    Kataoka, Hirokatsu
    Okayasu, Kazushige
    Matsumoto, Asato
    Yamagata, Eisuke
    Yamada, Ryosuke
    Inoue, Nakamasa
    Nakamura, Akio
    Satoh, Yutaka
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2022, 130 : 990 - 1007
  • [49] Structure-inducing pre-training
    McDermott, Matthew B. A.
    Yap, Brendan
    Szolovits, Peter
    Zitnik, Marinka
    NATURE MACHINE INTELLIGENCE, 2023, 5 : 612 - 621
  • [50] Pre-training Universal Language Representation
    Li, Yian
    Zhao, Hai
    59TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS AND THE 11TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING (ACL-IJCNLP 2021), VOL 1, 2021, : 5122 - 5133