Fine-tuned CLIP Models are Efficient Video Learners

Cited by: 51
Authors
Rasheed, Hanoona [1]
Khattak, Muhammad Uzair [1]
Maaz, Muhammad [1]
Khan, Salman [1,2]
Khan, Fahad Shahbaz [1,3]
Affiliations
[1] Mohamed bin Zayed University of Artificial Intelligence, Abu Dhabi, United Arab Emirates
[2] Australian National University, Canberra, ACT, Australia
[3] Linköping University, Linköping, Sweden
DOI
10.1109/CVPR52729.2023.00633
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Large-scale multi-modal training on image-text pairs imparts strong generalization to the CLIP model. Since training at a similar scale for videos is infeasible, recent approaches focus on effectively transferring image-based CLIP to the video domain. In this pursuit, new parametric modules are added to learn temporal information and inter-frame relationships, which requires meticulous design effort. Furthermore, when the resulting models are trained on videos, they tend to overfit to the given task distribution and lose generalization. This begs the question: how can image-level CLIP representations be transferred to videos effectively? In this work, we show that a simple Video Fine-tuned CLIP (ViFi-CLIP) baseline is generally sufficient to bridge the domain gap from images to videos. Our qualitative analysis illustrates that frame-level processing by the CLIP image encoder, followed by feature pooling and similarity matching with the corresponding text embeddings, implicitly models the temporal cues within ViFi-CLIP. Such fine-tuning helps the model focus on scene dynamics, moving objects, and inter-object relationships. For low-data regimes where full fine-tuning is not viable, we propose a 'bridge and prompt' approach that first uses fine-tuning to bridge the domain gap and then learns prompts on the language and vision sides to adapt the CLIP representations. We extensively evaluate this simple yet strong baseline in zero-shot, base-to-novel generalization, few-shot, and fully supervised settings across five video benchmarks. Our code and pre-trained models are available at https://github.com/muzairkhattak/ViFi-CLIP.
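To make the recipe concrete, below is a minimal sketch of the baseline the abstract describes: per-frame CLIP image encoding, temporal average pooling, and cosine-similarity matching against class-prompt text embeddings. The open_clip model name, the prompt template, and the helper video_logits are illustrative assumptions, not the authors' exact implementation (see the linked repository for that).

import torch
import torch.nn.functional as F
import open_clip  # assumed CLIP implementation; any equivalent works

# Load a pre-trained CLIP; the model name is an assumption for illustration.
model, _, preprocess = open_clip.create_model_and_transforms("ViT-B-16", pretrained="openai")
tokenizer = open_clip.get_tokenizer("ViT-B-16")

def video_logits(frames, class_names, logit_scale=100.0):
    # frames: (B, T, 3, H, W) batch of preprocessed video clips.
    B, T = frames.shape[:2]
    # Frame-level processing: encode every frame independently
    # with the unmodified CLIP image encoder.
    frame_feats = model.encode_image(frames.flatten(0, 1)).view(B, T, -1)
    # Feature pooling: temporal average pooling collapses the T frame
    # embeddings into a single video-level embedding.
    video_feats = F.normalize(frame_feats.mean(dim=1), dim=-1)   # (B, D)
    # Text side: embed one prompt per action class.
    tokens = tokenizer([f"a video of a person {c}" for c in class_names])
    text_feats = F.normalize(model.encode_text(tokens), dim=-1)  # (C, D)
    # Similarity matching: scaled cosine similarity yields the logits.
    return logit_scale * video_feats @ text_feats.t()            # (B, C)

# Fine-tuning is then ordinary supervised training on these logits,
# updating both encoders end to end:
#   loss = F.cross_entropy(video_logits(clips, CLASSES), labels)

Note that no temporal module is added anywhere in this sketch; per the abstract, any temporal modeling emerges implicitly once both encoders are fine-tuned through the pooled objective.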
Pages: 6545-6554
Number of pages: 10
Related papers
50 records in total
  • [1] Frozen CLIP Models are Efficient Video Learners
    Lin, Ziyi; Geng, Shijie; Zhang, Renrui; Gao, Peng; de Melo, Gerard; Wang, Xiaogang; Dai, Jifeng; Qiao, Yu; Li, Hongsheng
    COMPUTER VISION - ECCV 2022, PT XXXV, 2022, 13695: 388-404
  • [2] Action Recognition via Fine-Tuned CLIP Model and Temporal Transformer
    Yang, Xiaoyu; Fu, Yuzhuo; Liu, Ting
    ADVANCES IN COMPUTER GRAPHICS, CGI 2023, PT III, 2024, 14497: 498-513
  • [3] Exploring Memorization in Fine-tuned Language Models
    Zeng, Shenglai; Li, Yaxin; Ren, Jie; Liu, Yiding; Xu, Han; He, Pengfei; Xing, Yue; Wang, Shuaiqiang; Tang, Jiliang; Yin, Dawei
    PROCEEDINGS OF THE 62ND ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, VOL 1: LONG PAPERS, 2024: 3917-3948
  • [4] Fingerprinting Fine-tuned Language Models in the Wild
    Diwan, Nirav; Chakravorty, Tanmoy; Shafiq, Zubair
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL-IJCNLP 2021, 2021: 4652-4664
  • [5] The fine-tuned universe
    Theunis, Andre; Gidion, Gunnar
    STRAD, 2019, 130 (1550): 66-67
  • [6] Fine-tuned canoes
    Logan, A
    NEW SCIENTIST, 2002, 174 (2337): 51-51
  • [7] THE FINE-TUNED ORGANIZATION
    HAMMONS, C; MADDUX, GA
    QUALITY PROGRESS, 1992, 25 (02): 47-48
  • [8] Fine-tuned kraft
    PAPERMAKER, 1996, 59 (03)
  • [9] Fine-tuned antifungals
    Attar, Naomi
    NATURE REVIEWS MICROBIOLOGY, 2015, 13 (7): 398-398
  • [10] FINE-TUNED OF NECESSITY?
    Page, Ben
    RES PHILOSOPHICA, 2018, 95 (04): 663-692