Toward Mathematical Representation of Emotion: A Deep Multitask Learning Method Based On Multimodal Recognition

Cited by: 1
Authors
Harata, Seiichi [1 ]
Sakuma, Takuto [1 ]
Kato, Shohei [1 ]
Affiliations
[1] Nagoya Inst Technol, Nagoya, Aichi, Japan
Keywords
Affective Computing; Deep Neural Networks; Multimodal Fusion; Multitask Learning; Emotional Space
DOI
10.1145/3395035.3425254
Chinese Library Classification
TP3 [Computing Technology; Computer Technology]
Subject Classification Code
0812
Abstract
To emulate human emotions in agents, a mathematical representation of emotion (an emotional space) is essential for each component, such as emotion recognition, generation, and expression. In this study, we aim to acquire a modality-independent emotional space by extracting the emotional information shared across different modalities. We propose a method for acquiring an emotional space that integrates multiple modalities in a deep neural network (DNN) and combines an emotion recognition task with a unification task: the recognition task learns the representation of emotions, while the unification task learns to map each modality into an identical emotional space. In experiments with audio-visual data, we confirmed that the emotional spaces acquired from single modalities differ from one another, and that the proposed method can acquire a joint emotional space. We also showed that, under this paper's experimental conditions, the proposed method can adequately represent emotions in a low-dimensional emotional space of five or six dimensions.
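As a concrete illustration of the architecture the abstract describes, the following minimal PyTorch sketch (ours, not the authors' code) shows one way to combine per-modality encoders that map into a shared emotional space, an emotion recognition head, and a unification loss that pulls the audio and visual embeddings together. All layer sizes, the averaging fusion, the MSE-based unification loss, and the loss weight are illustrative assumptions, not details from the paper.

import torch
import torch.nn as nn

class MultimodalEmotionModel(nn.Module):
    def __init__(self, audio_dim=128, visual_dim=512, space_dim=6, n_emotions=4):
        super().__init__()
        # One encoder per modality; both end in the shared emotional space.
        self.audio_encoder = nn.Sequential(
            nn.Linear(audio_dim, 64), nn.ReLU(), nn.Linear(64, space_dim))
        self.visual_encoder = nn.Sequential(
            nn.Linear(visual_dim, 64), nn.ReLU(), nn.Linear(64, space_dim))
        # Shared recognition head: emotional space -> emotion logits.
        self.classifier = nn.Linear(space_dim, n_emotions)

    def forward(self, audio, visual):
        z_a = self.audio_encoder(audio)
        z_v = self.visual_encoder(visual)
        # Recognize emotion from the fused (here: averaged) embedding.
        logits = self.classifier((z_a + z_v) / 2)
        return logits, z_a, z_v

def multitask_loss(logits, labels, z_a, z_v, unification_weight=1.0):
    # Recognition task: standard cross-entropy on emotion labels.
    recognition = nn.functional.cross_entropy(logits, labels)
    # Unification task: push both modalities toward the same point in the
    # emotional space (MSE is an illustrative choice of distance).
    unification = nn.functional.mse_loss(z_a, z_v)
    return recognition + unification_weight * unification

# Toy usage with random tensors standing in for audio-visual features.
model = MultimodalEmotionModel()
audio = torch.randn(8, 128)
visual = torch.randn(8, 512)
labels = torch.randint(0, 4, (8,))
logits, z_a, z_v = model(audio, visual)
loss = multitask_loss(logits, labels, z_a, z_v)
loss.backward()

In this sketch, space_dim plays the role of the emotional-space dimensionality studied in the paper; the abstract reports that five or six dimensions were adequate under the authors' experimental conditions.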
Pages: 47-51
Number of pages: 5
Related Papers
50 records in total
  • [1] Mathematical representation of emotion using multimodal recognition model with deep multitask learning
    Harata, Seiichi
    Sakuma, Takuto
    Kato, Shohei
    Institute of Electrical Engineers of Japan, 140: 1343-1351
  • [2] A multimodal fusion emotion recognition method based on multitask learning and attention mechanism
    Xie, Jinbao
    Wang, Jiyu
    Wang, Qingyan
    Yang, Dali
    Gu, Jinming
    Tang, Yongqiang
    Varatnitski, Yury I.
    NEUROCOMPUTING, 2023, 556
  • [3] Deep Representation Learning for Multimodal Emotion Recognition Using Physiological Signals
    Zubair, Muhammad
    Woo, Sungpil
    Lim, Sunhwan
    Yoon, Changwoo
    IEEE ACCESS, 2024, 12: 106605-106617
  • [4] Disentangled Representation Learning for Multimodal Emotion Recognition
    Yang, Dingkang
    Huang, Shuai
    Kuang, Haopeng
    Du, Yangtao
    Zhang, Lihua
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2022, 2022: 1642-1651
  • [5] A deep interpretable representation learning method for speech emotion recognition
    Jing, Erkang
    Liu, Yezheng
    Chai, Yidong
    Sun, Jianshan
    Samtani, Sagar
    Jiang, Yuanchun
    Qian, Yang
    INFORMATION PROCESSING & MANAGEMENT, 2023, 60 (06)
  • [6] A Multitask learning model for multimodal sarcasm, sentiment and emotion recognition in conversations
    Zhang, Yazhou
    Wang, Jinglin
    Liu, Yaochen
    Rong, Lu
    Zheng, Qian
    Song, Dawei
    Tiwari, Prayag
    Qin, Jing
    INFORMATION FUSION, 2023, 93: 282-301
  • [7] Emotion Recognition Using Multimodal Deep Learning
    Liu, Wei
    Zheng, Wei-Long
    Lu, Bao-Liang
    NEURAL INFORMATION PROCESSING, ICONIP 2016, PT II, 2016, 9948: 521-529
  • [8] Emotion Recognition on Multimodal with Deep Learning and Ensemble
    Dharma, David Adi
    Zahra, Amalia
    INTERNATIONAL JOURNAL OF ADVANCED COMPUTER SCIENCE AND APPLICATIONS, 2022, 13 (12): 656-663
  • [9] Deep Learning Based Emotion Recognition and Visualization of Figural Representation
    Lu, Xiaofeng
    FRONTIERS IN PSYCHOLOGY, 2022, 12