CLIP-It! Language-Guided Video Summarization

Cited: 0
Authors:
Narasimhan, Medhini [1]
Rohrbach, Anna [1]
Darrell, Trevor [1]
Affiliation:
[1] Univ Calif Berkeley, Berkeley, CA 94720 USA
DOI: not available
CLC classification: TP18 [Artificial Intelligence Theory]
Subject classification codes: 081104; 0812; 0835; 1405
Abstract
A generic video summary is an abridged version of a video that conveys the whole story and features the most important scenes. Yet the importance of scenes in a video is often subjective, and users should have the option of customizing the summary by using natural language to specify what is important to them. Further, existing models for fully automatic generic summarization have not exploited available language models, which can serve as an effective prior for saliency. This work introduces CLIP-It, a single framework for addressing both generic and query-focused video summarization, typically approached separately in the literature. We propose a language-guided multimodal transformer that learns to score frames in a video based on their importance relative to one another and their correlation with a user-defined query (for query-focused summarization) or an automatically generated dense video caption (for generic video summarization). Our model can be extended to the unsupervised setting by training without ground-truth supervision. We outperform baselines and prior work by a significant margin on both standard video summarization datasets (TVSum and SumMe) and a query-focused video summarization dataset (QFVS). Particularly, we achieve large improvements in the transfer setting, attesting to our method's strong generalization capabilities.
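The abstract describes scoring frames by their importance relative to one another and their correlation with a language query. A toy NumPy sketch of that idea follows; it is not the paper's multimodal transformer, and the names (`score_frames`, `top_k_summary`) and the additive combination of the two signals are illustrative assumptions:

```python
import numpy as np

def score_frames(frame_feats, query_feat):
    """Score frames by (a) cosine relevance to a text query embedding and
    (b) a simple attention-style inter-frame saliency term.
    A simplified, hypothetical stand-in for CLIP-It's transformer scorer."""
    # L2-normalize frame and query embeddings
    f = frame_feats / np.linalg.norm(frame_feats, axis=1, keepdims=True)
    q = query_feat / np.linalg.norm(query_feat)
    # Query relevance: cosine similarity of each frame to the query
    relevance = f @ q
    # Inter-frame saliency: softmax attention over pairwise similarities;
    # a frame that many other frames attend to is treated as important
    sim = f @ f.T
    attn = np.exp(sim) / np.exp(sim).sum(axis=1, keepdims=True)
    saliency = attn.mean(axis=0)
    return relevance + saliency

def top_k_summary(frame_feats, query_feat, k):
    """Return indices of the k highest-scoring frames."""
    scores = score_frames(frame_feats, query_feat)
    return np.argsort(scores)[::-1][:k]
```

In the actual model the two signals are learned jointly by attention layers rather than summed heuristically, and frame/text features come from pretrained visual and language encoders.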
Pages: 13
Related Papers
50 records in total
  • [1] mmFilter: Language-Guided Video Analytics at the Edge
    Hu, Zhiming
    Ye, Ning
    Phillips, Caleb
    Capes, Tim
    Mohomed, Iqbal
    PROCEEDINGS OF THE 2020 21ST INTERNATIONAL MIDDLEWARE CONFERENCE INDUSTRIAL TRACK (MIDDLEWARE INDUSTRY '20), 2020, : 1 - 7
  • [2] LGDN: Language-Guided Denoising Network for Video-Language Modeling
    Lu, Haoyu
    Ding, Mingyu
    Fei, Nanyi
    Huo, Yuqi
    Lu, Zhiwu
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [3] Language-Guided Visual Aggregation Network for Video Question Answering
    Liang, Xiao
    Wang, Di
    Wang, Quan
    Wan, Bo
    An, Lingling
    He, Lihuo
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023, : 5195 - 5203
  • [4] Language-Guided Music Recommendation for Video via Prompt Analogies
    McKee, Daniel
    Salamon, Justin
    Sivic, Josef
    Russell, Bryan
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 14784 - 14793
  • [5] Language-guided Robot Grasping: CLIP-based Referring Grasp Synthesis in Clutter
    Tziafas, Georgios
    Xu, Yucheng
    Goel, Arushi
    Kasaei, Mohammadreza
    Li, Zhibin
    Kasaei, Hamidreza
    CONFERENCE ON ROBOT LEARNING, VOL 229, 2023, 229
  • [6] MTA-CLIP: Language-Guided Semantic Segmentation with Mask-Text Alignment
    Das, Anurag
    Hu, Xinting
    Jiang, Li
    Schiele, Bernt
    COMPUTER VISION - ECCV 2024, PT LIV, 2025, 15112 : 39 - 56
  • [7] CLUE: Contrastive language-guided learning for referring video object segmentation
    Gao, Qiqi
    Zhong, Wanjun
    Li, Jie
    Zhao, Tiejun
    PATTERN RECOGNITION LETTERS, 2024, 178 : 115 - 121
  • [8] Language-guided Multi-Modal Fusion for Video Action Recognition
    Hsiao, Jenhao
    Li, Yikang
    Ho, Chiuman
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW 2021), 2021, : 3151 - 3155
  • [9] CLIP-S4: Language-Guided Self-Supervised Semantic Segmentation
    He, Wenbin
    Jamonnak, Suphanut
    Gou, Liang
    Ren, Liu
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 11207 - 11216
  • [10] Video Question Answering Using Language-Guided Deep Compressed-Domain Video Feature
    Kim, Nayoung
    Ha, Seong Jong
    Kang, Je-Won
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 1688 - 1697