CLIP-It! Language-Guided Video Summarization

Cited: 0
Authors
Narasimhan, Medhini [1 ]
Rohrbach, Anna [1 ]
Darrell, Trevor [1 ]
Affiliations
[1] Univ Calif Berkeley, Berkeley, CA 94720 USA
Keywords
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
A generic video summary is an abridged version of a video that conveys the whole story and features the most important scenes. Yet the importance of scenes in a video is often subjective, and users should have the option of customizing the summary by using natural language to specify what is important to them. Further, existing models for fully automatic generic summarization have not exploited available language models, which can serve as an effective prior for saliency. This work introduces CLIP-It, a single framework for addressing both generic and query-focused video summarization, typically approached separately in the literature. We propose a language-guided multimodal transformer that learns to score frames in a video based on their importance relative to one another and their correlation with a user-defined query (for query-focused summarization) or an automatically generated dense video caption (for generic video summarization). Our model can be extended to the unsupervised setting by training without ground-truth supervision. We outperform baselines and prior work by a significant margin on both standard video summarization datasets (TVSum and SumMe) and a query-focused video summarization dataset (QFVS). Particularly, we achieve large improvements in the transfer setting, attesting to our method's strong generalization capabilities.
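The abstract describes scoring frames by their correlation with a language query and selecting the most relevant ones. CLIP-It itself uses a language-guided multimodal transformer; as a rough illustrative sketch only (not the authors' implementation), the core idea of language-guided frame scoring can be approximated by cosine similarity between frame embeddings and a query embedding. The embeddings below are random placeholders standing in for outputs of a vision-language model such as CLIP.

```python
import numpy as np


def score_frames(frame_embs: np.ndarray, text_emb: np.ndarray) -> np.ndarray:
    """Score each frame by cosine similarity to a language embedding.

    Simplified stand-in for CLIP-It's language-guided transformer:
    real frame/text embeddings would come from a pretrained model
    (e.g. CLIP); here they are plain vectors.
    """
    f = frame_embs / np.linalg.norm(frame_embs, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb)
    return f @ t  # one relevance score per frame


def select_summary(scores: np.ndarray, k: int) -> list[int]:
    """Pick the k highest-scoring frames, returned in temporal order."""
    top = np.argsort(scores)[-k:]
    return sorted(top.tolist())


# Toy example: 5 "frames" in a 4-d embedding space; the query vector
# favors the first embedding dimension.
rng = np.random.default_rng(0)
frames = rng.normal(size=(5, 4))
query = np.array([1.0, 0.0, 0.0, 0.0])
scores = score_frames(frames, query)
summary = select_summary(scores, k=2)
```

For generic (query-free) summarization, the paper instead correlates frames with an automatically generated dense video caption, which in this sketch would simply replace `query` with the caption's embedding.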
Pages: 13
Related Papers (50 items)
  • [31] OrdinalCLIP: Learning Rank Prompts for Language-Guided Ordinal Regression
    Li, Wanhua
    Huang, Xiaoke
    Zhu, Zheng
    Tang, Yansong
    Li, Xiu
    Zhou, Jie
    Lu, Jiwen
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [32] ShapeWalk: Compositional Shape Editing through Language-Guided Chains
    Slim, Habib
    Elhoseiny, Mohamed
    2024 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2024, : 22574 - 22583
  • [33] Language-Guided Semantic Clustering for Remote Sensing Change Detection
    Hu, Shenglong
    Bian, Yiting
    Chen, Bin
    Song, Huihui
    Zhang, Kaihua
    SENSORS, 2024, 24 (24)
  • [34] Towards Language-Guided Visual Recognition via Dynamic Convolutions
    Luo, Gen
    Zhou, Yiyi
    Sun, Xiaoshuai
    Wu, Yongjian
    Gao, Yue
    Ji, Rongrong
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2024, 132 (01) : 1 - 19
  • [35] Towards a Watson That Sees: Language-Guided Action Recognition for Robots
    Teo, Ching L.
    Yang, Yezhou
    Daume, Hal, III
    Fermueller, Cornelia
    Aloimonos, Yiannis
    2012 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2012, : 374 - 381
  • [36] A Benchmark for UAV-View Natural Language-Guided Tracking
    Li, Hengyou
    Liu, Xinyan
    Li, Guorong
    ELECTRONICS, 2024, 13 (09)
  • [37] Clip-based similarity measure for query-dependent clip retrieval and video summarization
    Peng, Yuxin
    Ngo, Chong-Wah
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2006, 16 (05) : 612 - 627
  • [38] Language-Guided Transformer for Federated Multi-Label Classification
    Liu, I-Jieh
    Lin, Ci-Siang
    Yang, Fu-En
    Wang, Yu-Chiang Frank
    arXiv, 2023,
  • [39] LapsCore: Language-guided Person Search via Color Reasoning
    Wu, Yushuang
    Yan, Zizheng
    Han, Xiaoguang
    Li, Guanbin
    Zou, Changqing
    Cui, Shuguang
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 1604 - 1613