Refined Semantic Enhancement towards Frequency Diffusion for Video Captioning

Cited: 0
Authors
Zhong, Xian [1 ]
Li, Zipeng [1 ]
Chen, Shuqin [2 ]
Jiang, Kui [3 ]
Chen, Chen [4 ]
Ye, Mang [3 ]
Affiliations
[1] Wuhan Univ Technol, Sch Comp Sci & Artificial Intelligence, Wuhan, Peoples R China
[2] Hubei Univ Educ, Coll Comp, Wuhan, Peoples R China
[3] Wuhan Univ, Sch Comp Sci, Wuhan, Peoples R China
[4] Univ Cent Florida, Ctr Res Comp Vis, Orlando, FL USA
Source
THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 3 | 2023
Funding
National Natural Science Foundation of China
Keywords: (none listed)
DOI: not available
Chinese Library Classification: TP18 [Theory of Artificial Intelligence]
Subject classification codes: 081104; 0812; 0835; 1405
Abstract
Video captioning aims to generate natural language sentences that describe the given video accurately. Existing methods obtain favorable generation by exploring richer visual representations in the encoding phase or by improving decoding ability. However, the long-tailed problem hinders these attempts at low-frequency tokens, which rarely occur but carry critical semantics and play a vital role in detailed generation. In this paper, we introduce a novel Refined Semantic enhancement method towards Frequency Diffusion (RSFD), a captioning model that constantly perceives the linguistic representation of infrequent tokens. Concretely, a Frequency-Aware Diffusion (FAD) module is proposed to comprehend the semantics of low-frequency tokens and break through generation limitations. In this way, the caption is refined by promoting the absorption of tokens with insufficient occurrence. Based on FAD, we design a Divergent Semantic Supervisor (DSS) module to compensate for the information loss of high-frequency tokens brought by the diffusion process, where the semantics of low-frequency tokens are further emphasized to alleviate the long-tailed problem. Extensive experiments indicate that RSFD outperforms state-of-the-art methods on two benchmark datasets, i.e., MSR-VTT and MSVD, demonstrating that enhancing the semantics of low-frequency tokens yields a competitive generation effect. Code is available at https://github.com/lzp870/RSFD.
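The abstract centers on the long-tailed distribution of caption tokens: a few function words dominate, while semantically rich tokens are rare. As a generic illustration only (a toy sketch, not the paper's RSFD/FAD implementation), the snippet below counts token frequencies in a small invented caption corpus and flags the rare tokens that a frequency-aware method would need to emphasize:

```python
from collections import Counter

# Toy caption corpus (invented for illustration): note how "a", "is",
# and "man" recur while content words appear only once.
captions = [
    "a man is playing a guitar",
    "a man is singing a song",
    "a woman is slicing a cucumber",
    "a man is playing a ukulele",
]

# Count every whitespace-separated token across all captions.
counts = Counter(tok for cap in captions for tok in cap.split())

# Tokens occurring only once form the long tail; these carry the
# detailed semantics (objects, actions) that generic decoders miss.
low_freq = sorted(tok for tok, c in counts.items() if c == 1)
print(low_freq)
# ['cucumber', 'guitar', 'singing', 'slicing', 'song', 'ukulele', 'woman']
```

On real datasets such as MSR-VTT, the same counting step exposes a far more extreme imbalance, which is what motivates upweighting or diffusing the semantics of the tail tokens during training.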
Pages: 3724-3732 (9 pages)
Related Papers (50 in total)
  • [31] Set Prediction Guided by Semantic Concepts for Diverse Video Captioning
    Lu, Yifan
    Zhang, Ziqi
    Yuan, Chunfeng
    Li, Peng
    Wang, Yan
    Li, Bing
    Hu, Weiming
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 4, 2024, : 3909 - 3917
  • [32] Fused GRU with semantic-temporal attention for video captioning
    Gao, Lianli
    Wang, Xuanhan
    Song, Jingkuan
    Liu, Yang
    NEUROCOMPUTING, 2020, 395 : 222 - 228
  • [33] Semantic association enhancement transformer with relative position for image captioning
    Xin Jia
    Yunbo Wang
    Yuxin Peng
    Shengyong Chen
    Multimedia Tools and Applications, 2022, 81 : 21349 - 21367
  • [34] Semantic association enhancement transformer with relative position for image captioning
    Jia, Xin
    Wang, Yunbo
    Peng, Yuxin
    Chen, Shengyong
    MULTIMEDIA TOOLS AND APPLICATIONS, 2022, 81 (15) : 21349 - 21367
  • [35] Semantic Enhanced Encoder-Decoder Network (SEN) for Video Captioning
    Gui, Yuling
    Guo, Dan
    Zhao, Ye
    PROCEEDINGS OF THE 2ND WORKSHOP ON MULTIMEDIA FOR ACCESSIBLE HUMAN COMPUTER INTERFACES (MAHCI '19), 2019, : 25 - 32
  • [36] BiTransformer: augmenting semantic context in video captioning via bidirectional decoder
    Maosheng Zhong
    Hao Zhang
    Yong Wang
    Hao Xiong
    Machine Vision and Applications, 2022, 33
  • [37] Center-enhanced video captioning model with multimodal semantic alignment
    Zhang, Benhui
    Gao, Junyu
    Yuan, Yuan
    NEURAL NETWORKS, 2024, 180
  • [38] BiTransformer: augmenting semantic context in video captioning via bidirectional decoder
    Zhong, Maosheng
    Zhang, Hao
    Wang, Yong
    Xiong, Hao
    MACHINE VISION AND APPLICATIONS, 2022, 33 (05)
  • [39] Multi-level video captioning method based on semantic space
    Yao, Xiao
    Zeng, Yuanlin
    Gu, Min
    Yuan, Ruxi
    Li, Jie
    Ge, Junyi
    MULTIMEDIA TOOLS AND APPLICATIONS, 2024, 83 (28) : 72113 - 72130
  • [40] Global-Local Combined Semantic Generation Network for Video Captioning
    Mao L.
    Gao H.
    Yang D.
    Jisuanji Fuzhu Sheji Yu Tuxingxue Xuebao/Journal of Computer-Aided Design and Computer Graphics, 2023, 35 (09): : 1374 - 1382