Fine-tuned CLIP Models are Efficient Video Learners

Cited by: 51
Authors
Rasheed, Hanoona [1]
Khattak, Muhammad Uzair [1]
Maaz, Muhammad [1]
Khan, Salman [1,2]
Khan, Fahad Shahbaz [1,3]
Affiliations
[1] Mohamed Bin Zayed Univ AI, Abu Dhabi, U Arab Emirates
[2] Australian Natl Univ, Canberra, ACT, Australia
[3] Linkoping Univ, Linkoping, Sweden
DOI
10.1109/CVPR52729.2023.00633
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Large-scale multi-modal training with image-text pairs imparts strong generalization to the CLIP model. Since training on a similar scale for videos is infeasible, recent approaches focus on the effective transfer of image-based CLIP to the video domain. In this pursuit, new parametric modules are added to learn temporal information and inter-frame relationships, which require meticulous design efforts. Furthermore, when the resulting models are trained on videos, they tend to overfit to the given task distribution and lack generalization. This raises the question: how can image-level CLIP representations be effectively transferred to videos? In this work, we show that a simple Video Fine-tuned CLIP (ViFi-CLIP) baseline is generally sufficient to bridge the domain gap from images to videos. Our qualitative analysis illustrates that frame-level processing by the CLIP image encoder, followed by feature pooling and similarity matching with the corresponding text embeddings, helps implicitly model temporal cues within ViFi-CLIP. Such fine-tuning helps the model focus on scene dynamics, moving objects, and inter-object relationships. For low-data regimes where full fine-tuning is not viable, we propose a 'bridge and prompt' approach that first uses fine-tuning to bridge the domain gap and then learns prompts on the language and vision sides to adapt the CLIP representations. We extensively evaluate this simple yet strong baseline on zero-shot, base-to-novel generalization, few-shot, and fully supervised settings across five video benchmarks. Our code and pre-trained models are available at https://github.com/muzairkhattak/ViFi-CLIP.
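To make the encode-pool-match recipe concrete, the following is a minimal sketch of video-level classification in Python, written against the Hugging Face transformers CLIP implementation rather than the authors' released code; the checkpoint name, prompt template, and classify_video helper are illustrative assumptions, not the paper's exact setup.

import torch
import torch.nn.functional as F
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch16")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch16")

@torch.no_grad()
def classify_video(frames, class_names):
    # frames: list of T PIL images sampled from one video clip
    inputs = processor(
        text=["a video of a person " + c for c in class_names],  # illustrative prompt template
        images=frames,
        return_tensors="pt",
        padding=True,
    )
    # Encode each frame independently with the CLIP image encoder: (T, D)
    frame_feats = model.get_image_features(pixel_values=inputs["pixel_values"])
    # Encode one text prompt per class: (C, D)
    text_feats = model.get_text_features(
        input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"]
    )
    # Temporal average pooling of normalized frame features -> one video embedding
    video_feat = F.normalize(F.normalize(frame_feats, dim=-1).mean(dim=0), dim=-1)
    text_feats = F.normalize(text_feats, dim=-1)
    # Scaled cosine similarities serve as class logits, as in image-level CLIP: (C,)
    return model.logit_scale.exp() * video_feat @ text_feats.t()

In the fully supervised setting, fine-tuning would backpropagate a cross-entropy loss through such logits to update both encoders end-to-end on video data; in the low-data 'bridge and prompt' stage described above, the idea is instead to keep the bridged encoders fixed and learn lightweight prompt vectors on the language and vision sides.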
Pages: 6545 - 6554
Page count: 10
Related Papers
50 items in total
  • [31] Vision-based Human Detection by Fine-Tuned SSD Models
    Cheng, Tang Jin
    Ab Nasir, Ahmad Fakhri
    Razman, Mohd Azraai Mohd
    Majeed, Anwar P. P. Abdul
    Li Lim, Thai
    INTERNATIONAL JOURNAL OF ADVANCED COMPUTER SCIENCE AND APPLICATIONS, 2022, 13 (11) : 386 - 390
  • [32] Brain Tumor Classification Based on Fine-Tuned Models and the Ensemble Method
    Noreen, Neelum
    Palaniappan, Sellapan
    Qayyum, Abdul
    Ahmad, Iftikhar
    Alassafi, Madini O.
CMC-COMPUTERS MATERIALS & CONTINUA, 2021, 67 (03) : 3967 - 3982
  • [33] Clothing Detection and Classification with Fine-Tuned YOLO-Based Models
    Nguyen, Hai T.
    Nguyen, Khanh K.
    Diem, Pham T-N
    Dien, Tran T.
    ADVANCES AND TRENDS IN ARTIFICIAL INTELLIGENCE. THEORY AND APPLICATIONS, IEA/AIE 2023, PT I, 2023, 13925 : 127 - 132
  • [34] The Central Park Bandshell Fine-Tuned
    Gardner, James
MAGAZINE ANTIQUES, 2021, 188 (05) : 50 - 52
  • [35] Fine-tuned nerves for sporting aces
    Goss, H
    NEW SCIENTIST, 1996, 149 (2016) : 15 - 15
  • [36] Increasing profits with fine-tuned reflow
2001, Cahners Publishing Co. Inc. (41)
  • [37] FINE-TUNED MICROSTRUCTURES FOR LPBF PRINTING
[Anonymous]
ADVANCED MATERIALS & PROCESSES, 2021, 179 (08) : 64 - 64
  • [38] There is no adequate definition of 'fine-tuned for life'
    Manson, NA
INQUIRY-AN INTERDISCIPLINARY JOURNAL OF PHILOSOPHY, 2000, 43 (03) : 341 - 352
  • [39] Magnetic monopoles as fine-tuned objects
    Zakharov, VI
    NUCLEAR PHYSICS B-PROCEEDINGS SUPPLEMENTS, 2003, 121 : 325 - 332
  • [40] Efficient fine-tuned preventive monitoring models of bearing failures without prior on-site fault data
    Liu, Wenjing
    Xu, Zhiwei
    Wang, Jing
    Tian, Jie
    Jin, Dahai
    Gong, Yunzhan
    MEASUREMENT, 2025, 242