Fine-tuned CLIP Models are Efficient Video Learners

Cited by: 51
Authors
Rasheed, Hanoona [1 ]
Khattak, Muhammad Uzair [1 ]
Maaz, Muhammad [1 ]
Khan, Salman [1 ,2 ]
Khan, Fahad Shahbaz [1 ,3 ]
Affiliations
[1] Mohamed Bin Zayed Univ AI, Abu Dhabi, U Arab Emirates
[2] Australian Natl Univ, Canberra, ACT, Australia
[3] Linkoping Univ, Linkoping, Sweden
DOI: 10.1109/CVPR52729.2023.00633
CLC number: TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
Large-scale multi-modal training with image-text pairs imparts strong generalization to the CLIP model. Since training on a similar scale for videos is infeasible, recent approaches focus on the effective transfer of image-based CLIP to the video domain. In this pursuit, new parametric modules are added to learn temporal information and inter-frame relationships, which requires meticulous design effort. Furthermore, when the resulting models are trained on videos, they tend to overfit to the given task distribution and lose generalization. This begs the following question: how can image-level CLIP representations be effectively transferred to videos? In this work, we show that a simple Video Fine-tuned CLIP (ViFi-CLIP) baseline is generally sufficient to bridge the domain gap from images to videos. Our qualitative analysis illustrates that frame-level processing by the CLIP image encoder, followed by feature pooling and similarity matching with the corresponding text embeddings, helps ViFi-CLIP implicitly model temporal cues. Such fine-tuning helps the model focus on scene dynamics, moving objects and inter-object relationships. For low-data regimes where full fine-tuning is not viable, we propose a 'bridge and prompt' approach that first uses fine-tuning to bridge the domain gap and then learns prompts on the language and vision sides to adapt CLIP representations. We extensively evaluate this simple yet strong baseline in zero-shot, base-to-novel generalization, few-shot and fully supervised settings across five video benchmarks. Our code and pre-trained models are available at https://github.com/muzairkhattak/ViFi-CLIP.
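The inference path the abstract describes (per-frame CLIP image embeddings, temporal average pooling, cosine similarity against class text embeddings) can be sketched in a few lines. This is a minimal illustration under stated assumptions, not code from the ViFi-CLIP repository; the function names are hypothetical, and the embeddings here stand in for outputs of CLIP's image and text encoders.

```python
import math

def average_pool(frame_embeds):
    """Temporally pool per-frame embeddings into one video-level embedding."""
    n = len(frame_embeds)
    dim = len(frame_embeds[0])
    return [sum(f[d] for f in frame_embeds) / n for d in range(dim)]

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def classify(frame_embeds, text_embeds):
    """Score a video against each class: pool frames, then match text embeddings."""
    video = average_pool(frame_embeds)
    return [cosine(video, t) for t in text_embeds]
```

Note that in ViFi-CLIP the gain comes from fine-tuning both CLIP encoders end-to-end on video data; the pooling-and-matching step above is only the (parameter-free) way temporal information enters the similarity computation.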
Pages: 6545-6554 (10 pages)
Related Papers (50 total)
  • [21] On the Generalization Abilities of Fine-Tuned Commonsense Language Representation Models
    Shen, Ke
    Kejriwal, Mayank
    ARTIFICIAL INTELLIGENCE XXXVIII, 2021, 13101 : 3 - 16
  • [22] THE JUGGERNAUT GETS FINE-TUNED
    KUSUNOKI, S
    ELECTRONICS, 1990, 63 (01): : 88 - 90
  • [23] Evil in the Fine-Tuned World
    Azadegan, Ebrahim
    HEYTHROP JOURNAL, 2019, 60 (05): : 795 - 804
  • [24] Efficient two-photon absorbing chromophores with fine-tuned π-bridges
    Zheng, LX
    Sassa, T
    Jen, AKY
    ORGANIC AND POLYMERIC MATERIALS AND DEVICES-OPTICAL, ELECTRICAL AND OPTOELECTRONIC PROPERTIES, 2002, 725 : 219 - 224
  • [25] Crowd Anomaly Detection in Video Frames Using Fine-Tuned AlexNet Model
    Khan, Arfat Ahmad
    Nauman, Muhammad Asif
    Shoaib, Muhammad
    Jahangir, Rashid
    Alroobaea, Roobaea
    Alsafyani, Majed
    Binmahfoudh, Ahmed
    Wechtaisong, Chitapong
    ELECTRONICS, 2022, 11 (19)
  • [26] Fine-tuned perovskite hollow fiber reactor for efficient degradation of ciprofloxacin
    Tan, Xi-Han
    Cheng, Zhong-Fu
    Bian, Bin
    Zhang, Han-Qi
    Chen, Zhi-Jie
    Tan, Rui
    Ni, Bing-Jie
    Weng, Bo
    Han, Ning
    RARE METALS, 2025,
  • [27] Performance Assessment of Fine-Tuned Barrier Recognition Models in Varying Conditions
    Thoma, Marios
    Partaourides, Harris
    Sreedharan, Ieswaria
    Theodosiou, Zenonas
    Michael, Loizos
    Lanitis, Andreas
    COMPUTER ANALYSIS OF IMAGES AND PATTERNS, CAIP 2023, PT II, 2023, 14185 : 172 - 181
  • [28] Comparative Study of Model Optimization Techniques in Fine-Tuned CNN Models
    Poojary, Ramaprasad
    Pai, Akul
    2019 INTERNATIONAL CONFERENCE ON ELECTRICAL AND COMPUTING TECHNOLOGIES AND APPLICATIONS (ICECTA), 2019,
  • [29] LogFiT: Log Anomaly Detection Using Fine-Tuned Language Models
    Almodovar, Crispin
    Sabrina, Fariza
    Karimi, Sarvnaz
    Azad, Salahuddin
    IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT, 2024, 21 (02): : 1715 - 1723
  • [30] Improving Fine-Tuned Question Answering Models for Electronic Health Records
    Mairittha, Tittaya
    Mairittha, Nattaya
    Inoue, Sozo
    UBICOMP/ISWC '20 ADJUNCT: PROCEEDINGS OF THE 2020 ACM INTERNATIONAL JOINT CONFERENCE ON PERVASIVE AND UBIQUITOUS COMPUTING AND PROCEEDINGS OF THE 2020 ACM INTERNATIONAL SYMPOSIUM ON WEARABLE COMPUTERS, 2020, : 688 - 691