Instance-aware Dynamic Prompt Tuning for Pre-trained Point Cloud Models

Cited by: 6
Authors
Zha, Yaohua [1 ]
Wang, Jinpeng [1 ]
Dai, Tao [2 ]
Bin Chen [3 ]
Wang, Zhi [1 ]
Xia, Shu-Tao [4 ]
Affiliations
[1] Tsinghua Univ, Tsinghua Shenzhen Int Grad Sch, Shenzhen, Peoples R China
[2] Shenzhen Univ, Coll Comp Sci & Software Engn, Shenzhen, Peoples R China
[3] Harbin Inst Technol, Harbin, Peoples R China
[4] Shenzhen Res Ctr Artificial Intelligence, Peng Cheng Lab, Shenzhen, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
DOI
10.1109/ICCV51070.2023.01302
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Pre-trained point cloud models have found extensive applications in 3D understanding tasks such as object classification and part segmentation. However, the prevailing strategy of full fine-tuning on downstream tasks incurs a large per-task storage overhead for model parameters, which limits the efficiency of applying large-scale pre-trained models. Inspired by the recent success of visual prompt tuning (VPT), this paper explores prompt tuning for pre-trained point cloud models to pursue an elegant balance between performance and parameter efficiency. We find that while instance-agnostic static prompting, e.g., VPT, shows some efficacy in downstream transfer, it is vulnerable to the distribution diversity caused by various types of noise in real-world point cloud data. To overcome this limitation, we propose a novel Instance-aware Dynamic Prompt Tuning (IDPT) strategy for pre-trained point cloud models. The essence of IDPT is a dynamic prompt generation module that perceives the semantic prior features of each point cloud instance and generates adaptive prompt tokens to enhance the model's robustness. Notably, extensive experiments demonstrate that IDPT outperforms full fine-tuning on most tasks with a mere 7% of the trainable parameters, providing a promising solution to parameter-efficient learning for pre-trained point cloud models. Code is available at https://github.com/zyh16143998882/ICCV23-IDPT.
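To make the abstract's mechanism concrete, the following is a minimal PyTorch sketch of instance-aware dynamic prompting around a frozen pre-trained transformer backbone. The module names (DynamicPromptGenerator, PromptTunedPointModel), the max-pooling aggregation, and the single prompt-insertion layer are illustrative assumptions rather than the authors' exact IDPT architecture; see the linked repository for the real implementation.

    import torch
    import torch.nn as nn

    class DynamicPromptGenerator(nn.Module):
        """Generates instance-conditioned prompt tokens from intermediate
        point-cloud token features (illustrative, not the paper's exact module)."""
        def __init__(self, dim: int, num_prompts: int = 1):
            super().__init__()
            self.num_prompts = num_prompts
            # Lightweight head: pool the instance's tokens, then map to prompt tokens.
            self.mlp = nn.Sequential(
                nn.Linear(dim, dim),
                nn.GELU(),
                nn.Linear(dim, dim * num_prompts),
            )

        def forward(self, tokens: torch.Tensor) -> torch.Tensor:
            # tokens: (B, N, C) features of one point-cloud instance per batch item.
            pooled = tokens.max(dim=1).values                        # (B, C) instance summary
            prompts = self.mlp(pooled)                               # (B, C * num_prompts)
            return prompts.view(-1, self.num_prompts, tokens.size(-1))  # (B, P, C)

    class PromptTunedPointModel(nn.Module):
        """Frozen pre-trained encoder plus a trainable prompt generator and head."""
        def __init__(self, encoder: nn.Module, dim: int, num_classes: int,
                     num_prompts: int = 1, insert_layer: int = -1):
            super().__init__()
            self.encoder = encoder                 # assumed: a stack of transformer blocks,
            for p in self.encoder.parameters():    # each mapping (B, N, C) -> (B, N, C)
                p.requires_grad = False            # freeze the backbone
            self.insert_layer = insert_layer
            self.prompt_gen = DynamicPromptGenerator(dim, num_prompts)
            self.head = nn.Linear(dim, num_classes)

        def forward(self, tokens: torch.Tensor) -> torch.Tensor:
            # tokens: (B, N, C) embedded point patches from a (frozen) tokenizer.
            blocks = list(self.encoder.children())
            insert_at = (len(blocks) + self.insert_layer
                         if self.insert_layer < 0 else self.insert_layer)
            for i, blk in enumerate(blocks):
                if i == insert_at:
                    # Condition the prompts on this instance's current features,
                    # then let the remaining blocks attend to them.
                    prompts = self.prompt_gen(tokens)                # (B, P, C)
                    tokens = torch.cat([prompts, tokens], dim=1)
                tokens = blk(tokens)
            return self.head(tokens.max(dim=1).values)               # pool + classify

Because the backbone stays frozen in this sketch, only the prompt generator and the task head receive gradients, which is the sense in which prompt tuning keeps the trainable-parameter budget small (the abstract's roughly 7% figure).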
Pages: 14115-14124
Page count: 10
Related Papers
50 in total (items 31-40 shown)
  • [31] MuDPT: Multi-modal Deep-symphysis Prompt Tuning for Large Pre-trained Vision-Language Models
    Miao, Yongzhu
    Li, Shasha
    Tang, Jintao
    Wang, Ting
    2023 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO, ICME, 2023, : 25 - 30
  • [32] Relational Prompt-Based Pre-Trained Language Models for Social Event Detection
    Li, Pu
    Yu, Xiaoyan
    Peng, Hao
    Xian, Yantuan
    Wang, Linqin
    Sun, Li
    Zhang, Jingyun
    Yu, Philip S.
    ACM Transactions on Information Systems, 2024, 43 (01)
  • [33] Prompt Learning with Structured Semantic Knowledge Makes Pre-Trained Language Models Better
    Zheng, Hai-Tao
    Xie, Zuotong
    Liu, Wenqiang
    Huang, Dongxiao
    Wu, Bei
    Kim, Hong-Gee
    ELECTRONICS, 2023, 12 (15)
  • [34] Adaptive Prompt Routing for Arbitrary Text Style Transfer with Pre-trained Language Models
    Liu, Qingyi
    Qin, Jinghui
    Ye, Wenxuan
    Mou, Hao
    He, Yuxuan
    Wang, Keze
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 17, 2024, : 18689 - 18697
  • [35] Efficient utilization of pre-trained models: A review of sentiment analysis via prompt learning
    Bu, Kun
    Liu, Yuanchao
    Ju, Xiaolong
    KNOWLEDGE-BASED SYSTEMS, 2024, 283
  • [36] Ranking and Tuning Pre-trained Models: A New Paradigm for Exploiting Model Hubs
    You, Kaichao
    Liu, Yong
    Zhang, Ziyang
    Wang, Jianmin
    Jordan, Michael I.
    Long, Mingsheng
    Journal of Machine Learning Research, 2022, 23
  • [37] Quality-aware Pre-trained Models for Blind Image Quality Assessment
    Zhao, Kai
    Yuan, Kun
    Sun, Ming
    Li, Mading
    Wen, Xing
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 22302 - 22313
  • [38] Gender-tuning: Empowering Fine-tuning for Debiasing Pre-trained Language Models
    Ghanbarzadeh, Somayeh
    Huang, Yan
    Palangi, Hamid
    Moreno, Radames Cruz
    Khanpour, Hamed
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, 2023, : 5448 - 5458
  • [39] Fine-tuning Pre-trained Models for Robustness under Noisy Labels
    Ahn, Sumyeong
    Kim, Sihyeon
    Ko, Jongwoo
    Yun, Se-Young
    PROCEEDINGS OF THE THIRTY-THIRD INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2024, 2024, : 3643 - 3651
  • [40] Debiasing Pre-Trained Language Models via Efficient Fine-Tuning
    Gira, Michael
    Zhang, Ruisu
    Lee, Kangwook
    PROCEEDINGS OF THE SECOND WORKSHOP ON LANGUAGE TECHNOLOGY FOR EQUALITY, DIVERSITY AND INCLUSION (LTEDI 2022), 2022, : 59 - 69