Implicit Hybrid Video Emotion Tagging by Integrating Video Content and Users' Multiple Physiological Responses

Citations: 0
Authors
Chen, Shiyu [1 ]
Wang, Shangfei [1 ]
Wu, Chongliang [1 ]
Gao, Zhen [1 ]
Shi, Xiaoxiao [1 ]
Ji, Qiang [2 ]
Affiliations
[1] Univ Sci & Technol China, Sch Comp Sci & Technol, Hefei, Anhui, Peoples R China
[2] Rensselaer Polytech Inst, Dept Elect Comp & Syst Engn, Troy, NY USA
Funding
U.S. National Science Foundation (NSF)
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The intrinsic interactions among a video's emotion tag, its content, and a user's spontaneous responses while consuming the video can be leveraged to improve video emotion tagging, but this capability has not yet been thoroughly exploited. In this paper, we propose an implicit hybrid video emotion tagging approach that integrates video content with users' multiple physiological responses, which are required only during training. Specifically, the physiological signals collected during training are used to construct a better emotion tagging model from video content alone. We impose similarity constraints on the classifiers' mapping functions during training to capture the relationships among the different kinds of features, and we modify the traditional support vector machine with these constraints to improve video emotion tagging. Efficient learning algorithms for the proposed model are also developed. Experiments on three benchmark databases demonstrate the effectiveness and superior performance of the proposed method in implicitly integrating multiple physiological responses to improve video emotion tagging.
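The idea sketched in the abstract — physiological signals that are available only at training time regularizing a classifier that is deployed on video content alone — can be illustrated with a toy linear model. The sketch below is a loose simplification, not the paper's formulation: the function name `train_hybrid_svm`, the hyperparameters `lam` and `gamma`, and the assumption that both feature sets share a common dimensionality are all hypothetical, and the similarity constraint on the mapping functions is reduced here to an L2 penalty between the two weight vectors, optimized by plain subgradient descent.

```python
import numpy as np

def train_hybrid_svm(Xv, Xp, y, lam=1.0, gamma=0.1, lr=0.05, epochs=300):
    """Jointly train two linear hinge-loss classifiers -- one on video
    features Xv, one on physiological features Xp -- coupled by a
    similarity penalty lam * ||wv - wp||^2. Only wv is returned, so the
    physiological view is needed during training alone.
    NOTE: this toy version assumes Xv and Xp have the same number of
    columns; the paper instead constrains the classifiers' mapping
    functions across different feature spaces."""
    n = len(y)
    wv = np.zeros(Xv.shape[1])
    wp = np.zeros(Xp.shape[1])
    for _ in range(epochs):
        # Hinge-loss subgradients, averaged over margin violators.
        mv = y * (Xv @ wv) < 1
        mp = y * (Xp @ wp) < 1
        gv = -(Xv[mv].T @ y[mv]) / n + gamma * wv + lam * (wv - wp)
        gp = -(Xp[mp].T @ y[mp]) / n + gamma * wp + lam * (wp - wv)
        wv -= lr * gv
        wp -= lr * gp
    return wv  # the deployed tagger uses video content only

def tag(wv, Xv):
    """Predict binary emotion tags (+1 / -1) from video features alone."""
    return np.sign(Xv @ wv)
```

At test time only `tag(wv, Xv)` is called, mirroring the implicit-tagging setup: no sensors are attached to the end user, yet the video-content classifier has been pulled toward the physiology-informed one during training.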
Pages: 295 - 300
Page count: 6
Related Papers
50 in total
  • [1] Content-Based Video Emotion Tagging Augmented by Users' Multiple Physiological Responses
    Wang, Shangfei
    Chen, Shiyu
    Ji, Qiang
    IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, 2019, 10 (02) : 155 - 166
  • [2] Hybrid video emotional tagging using users' EEG and video content
    Wang, Shangfei
    Zhu, Yachen
    Wu, Guobing
    Ji, Qiang
    MULTIMEDIA TOOLS AND APPLICATIONS, 2014, 72 (02) : 1257 - 1283
  • [3] Implicit video emotion tagging from audiences' facial expression
    Wang, Shangfei
    Liu, Zhilei
    Zhu, Yachen
    He, Menghua
    Chen, Xiaoping
    Ji, Qiang
    MULTIMEDIA TOOLS AND APPLICATIONS, 2015, 74 (13) : 4679 - 4706
  • [4] Exploiting multi-expression dependences for implicit multi-emotion video tagging
    Wang, Shangfei
    Liu, Zhilei
    Wang, Jun
    Wang, Zhaoyu
    Li, Yongqiang
    Chen, Xiaoping
    Ji, Qiang
    IMAGE AND VISION COMPUTING, 2014, 32 (10) : 682 - 691
  • [5] Implicit Video Multi-Emotion Tagging by Exploiting Multi-Expression Relations
    Liu, Zhilei
    Wang, Shangfei
    Wang, Zhaoyu
    Ji, Qiang
    2013 10TH IEEE INTERNATIONAL CONFERENCE AND WORKSHOPS ON AUTOMATIC FACE AND GESTURE RECOGNITION (FG), 2013
  • [6] Implicit Affective Video Tagging Using Pupillary Response
    Gui, Dongdong
    Zhong, Sheng-Hua
    Ming, Zhong
    MULTIMEDIA MODELING, MMM 2018, PT II, 2018, 10705 : 165 - 176
  • [7] User-centric Affective Video Tagging from MEG and Peripheral Physiological Responses
    Abadi, Mojtaba Khomami
    Kia, Seyed Mostafa
    Subramanian, Ramanathan
    Avesani, Paolo
    Sebe, Nicu
    2013 HUMAINE ASSOCIATION CONFERENCE ON AFFECTIVE COMPUTING AND INTELLIGENT INTERACTION (ACII), 2013 : 582 - 587
  • [8] Automatic Video Tagging using Content Redundancy
    Siersdorfer, Stefan
    Pedro, Jose San
    Sanderson, Mark
    PROCEEDINGS 32ND ANNUAL INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL, 2009 : 395 - 402