Toward Mathematical Representation of Emotion: A Deep Multitask Learning Method Based On Multimodal Recognition

Cited by: 1
Authors:
Harata, Seiichi [1 ]
Sakuma, Takuto [1 ]
Kato, Shohei [1 ]
Affiliation:
[1] Nagoya Inst Technol, Nagoya, Aichi, Japan
Keywords:
Affective Computing; Deep Neural Networks; Multimodal Fusion; Multitask Learning; Emotional Space
DOI:
10.1145/3395035.3425254
CLC number:
TP3 [Computing Technology, Computer Technology]
Subject classification code:
0812
Abstract:
To emulate human emotions in agents, a mathematical representation of emotion (an emotional space) is essential for each component, such as emotion recognition, generation, and expression. In this study, we aim to acquire a modality-independent emotional space by extracting shared emotional information from different modalities. We propose a method of acquiring an emotional space by integrating multiple modalities in a DNN and combining an emotion recognition task with a unification task. The emotion recognition task learns the representation of emotions, and the unification task learns an identical emotional space from each modality. Through experiments with audio-visual data, we confirmed that emotional spaces acquired from single modalities differ from one another and that the proposed method can acquire a joint emotional space. We also showed that the proposed method can adequately represent emotions in a low-dimensional emotional space, such as five or six dimensions, under this paper's experimental conditions.
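The abstract's two-task setup, an emotion recognition loss computed from a shared embedding plus a unification loss that pulls each modality's embedding toward the same point in the emotional space, can be sketched as follows. This is a minimal NumPy illustration with hypothetical linear encoders and an MSE unification term; the paper's actual DNN architectures, loss weighting, and training procedure are not specified here.

```python
import numpy as np

rng = np.random.default_rng(0)

DIM_AUDIO, DIM_VISUAL, DIM_EMOTION, N_CLASSES = 16, 24, 6, 4

# Hypothetical linear encoders, one per modality, projecting into a shared
# low-dimensional emotional space (the paper reports 5-6 dimensions sufficing).
W_audio = rng.normal(size=(DIM_AUDIO, DIM_EMOTION))
W_visual = rng.normal(size=(DIM_VISUAL, DIM_EMOTION))
W_cls = rng.normal(size=(DIM_EMOTION, N_CLASSES))  # classifier on the shared space

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def multitask_loss(x_audio, x_visual, y, alpha=1.0):
    """Recognition loss (cross-entropy from the shared emotional space)
    plus a unification loss drawing the two modalities' embeddings together."""
    z_a = x_audio @ W_audio    # audio embedding in the emotional space
    z_v = x_visual @ W_visual  # visual embedding in the emotional space
    p = softmax((z_a + z_v) / 2 @ W_cls)  # classify from the fused embedding
    recognition = -np.log(p[np.arange(len(y)), y]).mean()
    unification = np.mean(np.sum((z_a - z_v) ** 2, axis=1))  # MSE between modalities
    return recognition + alpha * unification

# Toy batch of 8 paired audio-visual feature vectors with emotion labels.
x_a = rng.normal(size=(8, DIM_AUDIO))
x_v = rng.normal(size=(8, DIM_VISUAL))
y = rng.integers(0, N_CLASSES, size=8)
loss = multitask_loss(x_a, x_v, y)
print(loss)
```

Driving `alpha` up forces the per-modality embeddings to coincide (the unification task), while the recognition term keeps the shared space discriminative for emotion classes.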
Pages: 47-51 (5 pages)
Related Papers (50 records)
  • [41] Zhang, Shiqing; Tao, Xin; Chuang, Yuelong; Zhao, Xiaoming. Learning deep multimodal affective features for spontaneous speech emotion recognition. SPEECH COMMUNICATION, 2021, 127: 73-81.
  • [42] Geetha, A. V.; Mala, T.; Priyanka, D.; Uma, E. Multimodal Emotion Recognition with Deep Learning: Advancements, challenges, and future directions. INFORMATION FUSION, 2024, 105.
  • [43] Goetz, Theresa; Arora, Pulkit; Erick, F. X.; Holzer, Nina; Sawant, Shrutika. Self-supervised representation learning using multimodal Transformer for emotion recognition. PROCEEDINGS OF THE 8TH INTERNATIONAL WORKSHOP ON SENSOR-BASED ACTIVITY RECOGNITION AND ARTIFICIAL INTELLIGENCE (IWOAR 2023), 2023.
  • [44] Dissanayake, Vipula; Seneviratne, Sachith; Rana, Rajib; Wen, Elliott; Kaluarachchi, Tharindu; Nanayakkara, Suranga. SigRep: Toward Robust Wearable Emotion Recognition With Contrastive Representation Learning. IEEE ACCESS, 2022, 10: 18105-18120.
  • [45] Liu, Jiaxing; Chen, Sen; Wang, Longbiao; Liu, Zhilei; Fu, Yahui; Guo, Lili; Dang, Jianwu. Multimodal Emotion Recognition with Capsule Graph Convolutional Based Representation Fusion. 2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021: 6339-6343.
  • [46] Tong, Guiying. Multimodal Music Emotion Recognition Method Based on the Combination of Knowledge Distillation and Transfer Learning. SCIENTIFIC PROGRAMMING, 2022.
  • [47] Demir, Asli; Atila, Orhan; Sengur, Abdulkadir. Deep Learning and Audio Based Emotion Recognition. 2019 INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND DATA PROCESSING (IDAP 2019), 2019.
  • [48] Dharia, Shyamal Y.; Valderrama, Camilo E.; Camorlinga, Sergio G. Multimodal Deep Learning Model for Subject-Independent EEG-based Emotion Recognition. 2023 IEEE CANADIAN CONFERENCE ON ELECTRICAL AND COMPUTER ENGINEERING (CCECE), 2023.
  • [49] Zheng, Xuan; Sun, Zheng. Exploration on multimodal data recognition method for Internet of Things based on deep learning. INTELLIGENT DECISION TECHNOLOGIES-NETHERLANDS, 2024, 18(02): 759-767.
  • [50] Pillalamarri, Rajasekhar; Shanmugam, Udhayakumar. A review on EEG-based multimodal learning for emotion recognition. ARTIFICIAL INTELLIGENCE REVIEW, 2025, 58(05).