An Emotion-Embedded Visual Attention Model for Dimensional Emotion Context Learning

Cited: 3
Authors
Tang, Yuhao [1 ]
Mao, Qirong [1 ]
Jia, Hongjie [1 ]
Song, Heping [1 ]
Zhan, Yongzhao [1 ]
Affiliations
[1] Jiangsu Univ, Sch Comp Sci & Commun Engn, Zhenjiang 212013, Jiangsu, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Dimensional emotion; attention mechanism; context learning;
DOI
10.1109/ACCESS.2019.2911714
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Dimensional emotion recognition has attracted increasing attention from researchers in fields including psychology, cognitive science, and computer science. In this paper, we propose an emotion-embedded visual attention model (EVAM) that learns emotion context information to predict affective dimension values from video sequences. First, a deep CNN generates high-level representations of the raw face images. Second, a visual attention model based on the gated recurrent unit (GRU) learns the context information of the feature sequences from the face features. Third, the k-means algorithm is adapted to embed the previous emotion into the attention model, emphasizing the influence of the previous emotion on the current affective prediction and producing more robust time-series predictions. All experiments are carried out on the AVEC 2016 and AVEC 2017 databases. The experimental results validate the effectiveness of our method, and competitive results are obtained.
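The pipeline the abstract describes — frame-level CNN features pooled by a GRU-based soft-attention model into a (valence, arousal) prediction — can be sketched as below. This is a minimal illustrative sketch, not the authors' implementation: the class name, feature dimensions, and weight initialization are assumptions, pretrained CNN features are stood in for by random vectors, and the k-means previous-emotion embedding step is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class GRUAttention:
    """Illustrative GRU + soft-attention pooling over a sequence of
    frame-level CNN features, with a linear head producing the two
    affective dimensions (valence, arousal). Hypothetical sketch."""

    def __init__(self, d_in, d_h, rng):
        s = 0.1
        # GRU parameters: update gate z, reset gate r, candidate state
        self.Wz = rng.normal(0, s, (d_h, d_in)); self.Uz = rng.normal(0, s, (d_h, d_h))
        self.Wr = rng.normal(0, s, (d_h, d_in)); self.Ur = rng.normal(0, s, (d_h, d_h))
        self.Wh = rng.normal(0, s, (d_h, d_in)); self.Uh = rng.normal(0, s, (d_h, d_h))
        self.v = rng.normal(0, s, d_h)        # attention scoring vector
        self.Wo = rng.normal(0, s, (2, d_h))  # head -> (valence, arousal)

    def forward(self, X):
        """X: (T, d_in) sequence of CNN face features for T frames."""
        h = np.zeros(self.Wz.shape[0])
        H = []
        for x in X:                      # standard GRU recurrence
            z = sigmoid(self.Wz @ x + self.Uz @ h)
            r = sigmoid(self.Wr @ x + self.Ur @ h)
            h_cand = np.tanh(self.Wh @ x + self.Uh @ (r * h))
            h = (1 - z) * h + z * h_cand
            H.append(h)
        H = np.stack(H)                  # (T, d_h) hidden states
        alpha = softmax(H @ self.v)      # attention weights over frames
        ctx = alpha @ H                  # attention-weighted context vector
        return self.Wo @ ctx, alpha      # (valence, arousal), weights

# Usage: 10 frames of hypothetical 16-dim CNN features.
model = GRUAttention(d_in=16, d_h=8, rng=rng)
X = rng.normal(size=(10, 16))
pred, alpha = model.forward(X)           # pred.shape == (2,)
```

The attention weights `alpha` form a distribution over frames, so the context vector emphasizes the frames the model scores highest; in the paper's full method, a representation of the previous emotion (quantized via k-means) would additionally be fed into this attention computation.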
Pages: 72457 - 72468
Page count: 12
Related Papers
50 records in total
  • [11] Emotion regulates attention: The relation between facial configurations, facial emotion, and visual attention
    Lundqvist, D
    Öhman, A
    VISUAL COGNITION, 2005, 12 (01) : 51 - 84
  • [12] A Joint Cross-Attention Model for Audio-Visual Fusion in Dimensional Emotion Recognition
    Praveen, R. Gnana
    de Melo, Wheidima Carneiro
    Ullah, Nasib
    Aslam, Haseeb
    Zeeshan, Osama
    Denorme, Theo
    Pedersoli, Marco
    Koerich, Alessandro L.
    Bacon, Simon
    Cardinal, Patrick
    Granger, Eric
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2022, 2022, : 2485 - 2494
  • [13] Opinion mining for multiple types of emotion-embedded products/services through evolutionary strategy
    Yang, Heng-Li
    Lin, Qing-Feng
    EXPERT SYSTEMS WITH APPLICATIONS, 2018, 99 : 44 - 55
  • [14] Emotion Wheel Attention-Based Emotion Distribution Learning
    Zeng, Xueqiang
    Chen, Qifan
    Fu, Xuefeng
    Zuo, Jiali
    IEEE ACCESS, 2021, 9 : 153360 - 153370
  • [15] Changing the spotlight of attention: The influence of emotion on visual attention
    Koji, Shahnaz
    Fernandes, Myra A.
    Dixon, Michael J.
    Aquino, Jennifer
    CANADIAN JOURNAL OF EXPERIMENTAL PSYCHOLOGY-REVUE CANADIENNE DE PSYCHOLOGIE EXPERIMENTALE, 2009, 63 (04): : 340 - 340
  • [16] Emotion and startle: Attention, context, and affective contrast
    Lucas, C
    Caldwell, S
    Farley, RL
    Williams, WC
    PSYCHOPHYSIOLOGY, 2002, 39 : S53 - S53
  • [17] Emotion schemas are embedded in the human visual system
    Kragel, Philip A.
    Reddan, Marianne C.
    LaBar, Kevin S.
    Wager, Tor D.
    SCIENCE ADVANCES, 2019, 5 (07):
  • [18] On color and emotion: An ERP study of visual attention
    Pilarczyk, J.
    Kuniecki, M.
    PERCEPTION, 2012, 41 : 145 - 146
  • [19] Interactive influences of emotion and extraversion on visual attention
    Bendall, Robert C. A.
    Begley, Shaunine
    Thompson, Catherine
    BRAIN AND BEHAVIOR, 2021, 11 (11):
  • [20] The relationship between dispositional attention to feelings and visual attention to emotion
    Bujanow, Anna
    Bodenschatz, Charlott Maria
    Szymanska, Monika
    Kersting, Anette
    Vulliez-Coady, Lauriane
    Suslow, Thomas
    PROGRESS IN NEURO-PSYCHOPHARMACOLOGY & BIOLOGICAL PSYCHIATRY, 2020, 100