Emotional Video Captioning With Vision-Based Emotion Interpretation Network

Cited by: 6
Authors
Song, Peipei [1 ]
Guo, Dan [2 ,3 ,4 ]
Yang, Xun [1 ]
Tang, Shengeng [2 ]
Wang, Meng [2 ,5 ]
Affiliations
[1] Univ Sci & Technol China, Sch Informat Sci & Technol, Dept Elect Engn & Informat Sci, Hefei 230026, Peoples R China
[2] Hefei Univ Technol HFUT, Sch Comp Sci & Informat Engn, Key Lab Knowledge Engn Big Data, Minist Educ, Hefei 230601, Peoples R China
[3] Inst Artificial Intelligence, Hefei Comprehens Natl Sci Ctr, Hefei 230088, Peoples R China
[4] Anhui Zhonghuitong Technol Co Ltd, Hefei 230094, Peoples R China
[5] China Inst Artificial Intelligence, Hefei Comprehens Natl Sci Ctr, Hefei 230088, Peoples R China
Keywords
Emotional video captioning; emotion analysis; emotion-fact coordinated optimization
DOI
10.1109/TIP.2024.3359045
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Effectively summarizing and re-expressing video content in natural language, in a human-like fashion, is one of the key topics in multimedia content understanding. Despite good progress in recent years, existing efforts usually overlook the emotions in user-generated videos, making the generated sentences flat and soulless. To fill this research gap, this paper presents a novel emotional video captioning framework in which we design a Vision-based Emotion Interpretation Network to effectively capture the emotions conveyed in videos and describe the visual content in both factual and emotional language. Specifically, we first model the emotion distribution over an open psychological vocabulary to predict the emotional state of a video. Then, guided by the discovered emotional state, we incorporate visual context, textual context, and visual-textual relevance into an aggregated multimodal contextual vector to enhance video captioning. Furthermore, we optimize the network in a new emotion-fact coordinated way involving two losses, an Emotional Indication Loss and a Factual Contrastive Loss, which penalize the error of emotion prediction and of visual-textual factual relevance, respectively. In other words, we innovatively introduce emotional representation learning into an end-to-end video captioning network. Extensive experiments on the public benchmark datasets EmVidCap and EmVidCap-S demonstrate that our method outperforms state-of-the-art methods by a large margin. Quantitative ablation studies and qualitative analyses clearly show that our method effectively captures the emotions in videos and thus generates emotional sentences that interpret the video content.
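The emotion-fact coordinated optimization described in the abstract can be sketched roughly as follows. This is a hypothetical numpy illustration, not the authors' implementation: the exact forms of the Emotional Indication Loss and Factual Contrastive Loss, the loss weights, and all function names below are assumptions (here, a cross-entropy over the emotion vocabulary and an InfoNCE-style contrastive term over video-caption pairs).

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def emotional_indication_loss(emotion_logits, target_dist):
    """Cross-entropy between the predicted emotion distribution over an
    open psychological vocabulary and the target emotion distribution."""
    p = softmax(emotion_logits)
    return -np.mean(np.sum(target_dist * np.log(p + 1e-9), axis=-1))

def factual_contrastive_loss(vis, txt, temperature=0.1):
    """InfoNCE-style loss: matched video/caption pairs (the diagonal of
    the batch similarity matrix) should score above mismatched pairs."""
    vis = vis / np.linalg.norm(vis, axis=-1, keepdims=True)
    txt = txt / np.linalg.norm(txt, axis=-1, keepdims=True)
    logits = vis @ txt.T / temperature          # (B, B) cosine similarities
    log_p = np.log(softmax(logits) + 1e-9)
    return -np.mean(np.diag(log_p))             # positives lie on the diagonal

def coordinated_loss(caption_nll, emo_logits, emo_target, vis, txt,
                     lambda_emo=1.0, lambda_fact=1.0):
    """Hypothetical combined objective: captioning likelihood plus the
    two coordinated penalty terms."""
    return (caption_nll
            + lambda_emo * emotional_indication_loss(emo_logits, emo_target)
            + lambda_fact * factual_contrastive_loss(vis, txt))
```

Under this reading, the contrastive term keeps the factual (visual-textual) grounding intact while the indication term supervises the emotional state, so neither objective dominates the caption decoder.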
Pages: 1122-1135 (14 pages)
Related Papers
50 items in total
  • [31] Critic-based Attention Network for Event-based Video Captioning
    Barati, Elaheh
    Chen, Xuewen
    PROCEEDINGS OF THE 27TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA (MM'19), 2019, : 811 - 817
  • [32] Emotional learning of a vision-based partner robot for natural communication with human
    Kubota, Naoyuki
    Omote, Shintaro
    Mori, Yoshikazu
    2006 IEEE INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS, VOLS 1-5, 2006, : 1179 - +
  • [33] Video Captioning based on Image Captioning as Subsidiary Content
    Vaishnavi, J.
    Narmatha, V.
    2022 SECOND INTERNATIONAL CONFERENCE ON ADVANCES IN ELECTRICAL, COMPUTING, COMMUNICATION AND SUSTAINABLE TECHNOLOGIES (ICAECT), 2022,
  • [34] GPT-Based Knowledge Guiding Network for Commonsense Video Captioning
    Yuan, Mengqi
    Jia, Gengyun
    Bao, Bing-Kun
    IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26 : 5147 - 5158
  • [35] Context Visual Information-based Deliberation Network for Video Captioning
    Lu, Min
    Li, Xueyong
    Liu, Caihua
    2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021, : 9812 - 9818
  • [36] Learning topic emotion and logical semantic for video paragraph captioning
    Li, Qinyu
    Wang, Hanli
    Yi, Xiaokai
    DISPLAYS, 2024, 83
  • [37] Vision-based framework for automatic interpretation of construction workers' hand gestures
    Wang, Xin
    Zhu, Zhenhua
    AUTOMATION IN CONSTRUCTION, 2021, 130
  • [38] A vision-based method for the circle pose determination with a direct geometric interpretation
    Chen, Z
    Huang, JB
    IEEE TRANSACTIONS ON ROBOTICS AND AUTOMATION, 1999, 15 (06): : 1135 - 1140
  • [39] Vision-based interpretation of hand gestures for remote control of a computer mouse
    Argyros, Antonis A.
    Lourakis, Manolis I. A.
    COMPUTER VISION IN HUMAN-COMPUTER INTERACTION, 2006, 3979 : 40 - 51
  • [40] Video Motion Magnification to Improve the Accuracy of Vision-Based Vibration Measurements
    Perez, Eduardo
    Zappa, Emanuele
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2022, 71