A Bimodal Emotion Recognition Approach through the Fusion of Electroencephalography and Facial Sequences

Cited by: 12
Authors
Muhammad, Farah [1 ]
Hussain, Muhammad [1 ]
Aboalsamh, Hatim [1 ]
Affiliations
[1] King Saud Univ, Coll Comp Sci & Informat, Dept Comp Sci, Riyadh 11451, Saudi Arabia
Keywords
bimodal; electroencephalography; facial video clips; emotion recognition; CNN; feature level fusion; Deep CCA; HCI; EXPRESSION; SYSTEM;
DOI
10.3390/diagnostics13050977
CLC number
R5 [Internal Medicine]
Discipline classification codes
1002 ; 100201 ;
Abstract
In recent years, human-computer interaction (HCI) systems have become increasingly popular, and some of them demand dedicated approaches for discriminating genuine emotions through improved multimodal methods. In this work, a deep canonical correlation analysis (DCCA) based multimodal emotion recognition method is presented through the fusion of electroencephalography (EEG) and facial video clips. A two-stage framework is implemented: the first stage extracts emotion-relevant features from each modality separately, while the second stage merges the highly correlated features from the two modalities and performs classification. A CNN-based ResNet50 and a one-dimensional CNN (1D-CNN) are used to extract features from the facial video clips and the EEG signals, respectively. A DCCA-based approach fuses the highly correlated features, and three basic emotion categories (happy, neutral, and sad) are classified with a softmax classifier. The proposed approach was evaluated on the publicly available MAHNOB-HCI and DEAP datasets. Experimental results show an average accuracy of 93.86% on MAHNOB-HCI and 91.54% on DEAP. The competitiveness of the proposed framework was assessed by comparison with existing work.
Pages: 28
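The two-stage pipeline described in the abstract (per-modality feature extraction, then correlation-based fusion of the two feature sets) can be illustrated with classical linear CCA in place of the deep variant. This is a minimal sketch, not the paper's implementation: the feature dimensions, regularization constant, and random inputs below are illustrative assumptions standing in for the ResNet50 and 1D-CNN encoder outputs.

```python
import numpy as np

def cca_fusion(X, Y, k, reg=1e-3):
    """Fuse two feature views by projecting each onto its top-k
    canonical directions and concatenating the projections.
    X: (n, dx) e.g. EEG features; Y: (n, dy) e.g. facial features.
    Returns the fused (n, 2k) features and the top-k canonical
    correlations. (Classical CCA used as a stand-in for DCCA.)"""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    n = X.shape[0]
    # Regularized covariance and cross-covariance estimates.
    Cxx = Xc.T @ Xc / n + reg * np.eye(X.shape[1])
    Cyy = Yc.T @ Yc / n + reg * np.eye(Y.shape[1])
    Cxy = Xc.T @ Yc / n
    # Whiten each view via Cholesky factors, then an SVD of the
    # whitened cross-covariance yields the canonical directions.
    Wx = np.linalg.inv(np.linalg.cholesky(Cxx))
    Wy = np.linalg.inv(np.linalg.cholesky(Cyy))
    U, s, Vt = np.linalg.svd(Wx @ Cxy @ Wy.T)
    A = Wx.T @ U[:, :k]       # projection matrix for view X
    B = Wy.T @ Vt[:k, :].T    # projection matrix for view Y
    return np.hstack([Xc @ A, Yc @ B]), s[:k]

# Illustrative use: two synthetic "modality" feature matrices whose
# shared structure makes the first canonical correlation high.
rng = np.random.default_rng(0)
eeg_feats = rng.standard_normal((200, 5))
face_feats = eeg_feats @ rng.standard_normal((5, 4)) \
    + 0.1 * rng.standard_normal((200, 4))
fused, corrs = cca_fusion(eeg_feats, face_feats, k=3)
```

The fused `(n, 2k)` matrix is what a softmax classifier would then be trained on; DCCA differs in that the two projections are learned jointly with nonlinear (deep) encoders rather than computed in closed form.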