Learning Video-Text Aligned Representations for Video Captioning

Cited by: 7
Authors
Shi, Yaya [1 ]
Xu, Haiyang [2 ]
Yuan, Chunfeng [3 ]
Li, Bing [3 ]
Hu, Weiming [3 ,4 ,5 ]
Zha, Zheng-Jun [1 ]
Affiliations
[1] Univ Sci & Technol China, Sch Informat Sci & Technol, 96 Jinzhai Rd, Hefei 230026, Anhui, Peoples R China
[2] Alibaba Grp, 969 Wenyi West Rd, Hangzhou 311121, Zhejiang, Peoples R China
[3] Chinese Acad Sci, Inst Automat, NLPR, 95 Zhongguancun East Rd, Beijing 100190, Peoples R China
[4] Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing, Peoples R China
[5] CAS Ctr Excellence Brain Sci & Intelligence Techn, 95 Zhongguancun East Rd, Beijing 100190, Peoples R China
Funding
Beijing Natural Science Foundation; National Key Research and Development Program of China;
Keywords
Video captioning; video-text alignment; aligned representation;
DOI
10.1145/3546828
CLC Classification Number
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Video captioning requires a model to understand video, align video with text, and generate text. Due to the semantic gap between vision and language, video-text alignment, which maps representations from the visual domain to the language domain, is a crucial step in closing that gap. However, existing methods often overlook this step, so the decoder must take the visual representations directly as input, which increases its workload and limits its ability to generate semantically correct captions. In this paper, we propose a video-text alignment module with a retrieval unit and an alignment unit to learn video-text aligned representations for video captioning. Specifically, we first propose a retrieval unit that retrieves sentences as additional input, serving as a semantic anchor between the visual scene and the language description. Then, we employ an alignment unit that takes the video and the retrieved sentences as input and conducts video-text alignment: the representations of the two modalities are aligned in a shared semantic space. The resulting video-text aligned representations are used to generate semantically correct captions. Moreover, the retrieved sentences provide rich semantic concepts that help generate distinctive captions. Experiments on two public benchmarks, i.e., VATEX and MSR-VTT, demonstrate that our method outperforms state-of-the-art methods by a large margin. Qualitative analysis shows that our method generates correct and distinctive captions.
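The abstract's central idea, mapping video and text representations into a shared semantic space so that matched pairs lie close together, can be illustrated with a generic contrastive-alignment sketch. This is not the paper's implementation: the linear projections, dimensions, symmetric InfoNCE-style loss, and random features below are all illustrative assumptions.

```python
# Generic sketch of video-text alignment in a shared semantic space.
# Assumptions (not from the paper): linear projections, cosine similarity,
# a symmetric InfoNCE-style contrastive loss, and random toy features.
import numpy as np

rng = np.random.default_rng(0)

def project(x, W):
    """Map features into the shared space and L2-normalize."""
    z = x @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

def contrastive_alignment_loss(video_feats, text_feats, Wv, Wt, tau=0.07):
    """Symmetric cross-entropy over cosine similarities: each video should
    score highest against its own caption (the diagonal of the matrix)."""
    zv = project(video_feats, Wv)      # (B, d) video embeddings
    zt = project(text_feats, Wt)      # (B, d) text embeddings
    logits = zv @ zt.T / tau           # (B, B) similarity matrix
    labels = np.arange(len(logits))    # video i matches caption i

    def xent(l):
        # Numerically stable log-softmax over each row.
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # Average video-to-text and text-to-video directions.
    return 0.5 * (xent(logits) + xent(logits.T))

# Toy batch: 4 clips with 16-dim video and 12-dim text features,
# projected into an 8-dim shared space.
B, dv, dt, d = 4, 16, 12, 8
Wv = rng.normal(size=(dv, d))
Wt = rng.normal(size=(dt, d))
video = rng.normal(size=(B, dv))
text = rng.normal(size=(B, dt))
loss = float(contrastive_alignment_loss(video, text, Wv, Wt))
print(round(loss, 4))
```

Minimizing such a loss pulls a clip and its caption together in the shared space, which is the general effect the alignment unit aims for before decoding.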
Pages: 21
Related Papers
Total: 50 records
  • [1] ActBERT: Learning Global-Local Video-Text Representations
    Zhu, Linchao
    Yang, Yi
    2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2020), 2020, : 8743 - 8752
  • [2] Deep learning for video-text retrieval: a review
    Zhu, Cunjuan
    Jia, Qi
    Chen, Wei
    Guo, Yanming
    Liu, Yu
    INTERNATIONAL JOURNAL OF MULTIMEDIA INFORMATION RETRIEVAL, 2023, 12 (01)
  • [3] Video-text extraction and recognition
    Chen, TB
    Ghosh, D
    Ranganath, S
    TENCON 2004 - 2004 IEEE REGION 10 CONFERENCE, VOLS A-D, PROCEEDINGS: ANALOG AND DIGITAL TECHNIQUES IN ELECTRICAL ENGINEERING, 2004, : A319 - A322
  • [4] Guided Graph Attention Learning for Video-Text Matching
    Li, Kunpeng
    Liu, Chang
    Stopa, Mike
    Amano, Jun
    Fu, Yun
    ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2022, 18 (02)
  • [5] SViTT: Temporal Learning of Sparse Video-Text Transformers
    Li, Yi
    Min, Kyle
    Tripathi, Subarna
    Vasconcelos, Nuno
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 18919 - 18929
  • [6] Complementarity-Aware Space Learning for Video-Text Retrieval
    Zhu, Jinkuan
    Zeng, Pengpeng
    Gao, Lianli
    Li, Gongfu
    Liao, Dongliang
    Song, Jingkuan
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2023, 33 (08) : 4362 - 4374
  • [7] Expert-guided contrastive learning for video-text retrieval
    Lee, Jewook
    Lee, Pilhyeon
    Park, Sungho
    Byun, Hyeran
    NEUROCOMPUTING, 2023, 536 : 50 - 58
  • [8] Semantic-Preserving Metric Learning for Video-Text Retrieval
    Choo, Sungkwon
    Ha, Seong Jong
    Lee, Joonsoo
    2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2021, : 2388 - 2392
  • [9] Video-Text Representation Learning via Differentiable Weak Temporal Alignment
    Ko, Dohwan
    Choi, Joonmyung
    Ko, Juyeon
    Noh, Shinyeong
    On, Kyoung-Woon
    Kim, Eun-Sol
    Kim, Hyunwoo J.
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 5006 - 5015