Multimodal Explainability Using Class Activation Maps and Canonical Correlation for MI-EEG Deep Learning Classification

Cited: 0
Authors
Loaiza-Arias, Marcos [1 ]
alvarez-Meza, Andres Marino [1 ]
Cardenas-Pena, David [2 ]
Orozco-Gutierrez, Alvaro angel [2 ]
Castellanos-Dominguez, German [1 ]
Affiliations
[1] Univ Nacl Colombia, Signal Proc & Recognit Grp, Manizales 170003, Colombia
[2] Univ Tecnol Pereira UTP, Automat Res Grp, Pereira 660003, Colombia
Source
APPLIED SCIENCES-BASEL | 2024, Vol. 14, Issue 23
Keywords
deep learning; explainable models; multimodal analysis; EEG; motor imagery; SPATIAL-PATTERNS; MOTOR; NETWORK;
DOI
10.3390/app142311208
CLC Number
O6 [Chemistry];
Discipline Code
0703;
Abstract
Brain-computer interfaces (BCIs) are essential in advancing medical diagnosis and treatment by providing non-invasive tools to assess neurological states. Among these, motor imagery (MI), in which patients mentally simulate motor tasks without physical movement, has proven to be an effective paradigm for diagnosing and monitoring neurological conditions. Electroencephalography (EEG) is widely used for MI data collection due to its high temporal resolution, cost-effectiveness, and portability. However, EEG signals are susceptible to noise from several sources, including physiological artifacts and electromagnetic interference, and they vary considerably from person to person, which complicates feature extraction and signal interpretation. This variability, influenced by genetic and cognitive factors, further hinders the development of subject-independent solutions. To address these limitations, this paper presents a Multimodal and Explainable Deep Learning (MEDL) approach for MI-EEG classification and physiological interpretability. Our approach involves the following: (i) evaluating different deep learning (DL) models for subject-dependent MI-EEG discrimination; (ii) employing class activation mapping (CAM) to visualize relevant MI-EEG features; and (iii) utilizing a questionnaire-MI performance canonical correlation analysis (QMIP-CCA) to provide multidomain interpretability. Experiments on the GIGAScience MI dataset show that shallow neural networks achieve strong subject-dependent MI-EEG classification, while the CAM-based method reveals discriminative spatio-frequency patterns. Moreover, the QMIP-CCA framework successfully correlates physiological data with MI-EEG performance, offering an enhanced, interpretable solution for BCIs.
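A minimal sketch of the two explainability ingredients named in the abstract: a CAM-style (Grad-CAM flavoured) relevance map computed from convolutional feature maps and class-score gradients, and a canonical correlation analysis between questionnaire scores and per-subject MI performance. The array shapes, variable names, and random placeholder data below are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: shapes, names, and data are assumptions,
# not the MEDL/QMIP-CCA implementation from the paper.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)

# --- CAM-style relevance map ------------------------------------------------
# Assume a conv layer produced 8 feature maps over 22 EEG channels x 40
# frequency bins, and we have the gradient of the target-class score with
# respect to each feature map.
feature_maps = rng.normal(size=(8, 22, 40))
grads = rng.normal(size=(8, 22, 40))
weights = grads.mean(axis=(1, 2))                       # pooled gradient weights, one per map
cam = np.maximum(np.tensordot(weights, feature_maps, axes=1), 0.0)
cam /= cam.max() + 1e-12                                # channel x frequency map scaled to [0, 1]

# --- Questionnaire vs. MI-performance CCA ------------------------------------
n_subjects, n_items, n_metrics = 52, 10, 3              # assumed sizes
questionnaire = rng.normal(size=(n_subjects, n_items))  # placeholder item scores
performance = rng.normal(size=(n_subjects, n_metrics))  # placeholder accuracy/kappa metrics

cca = CCA(n_components=2)
q_proj, p_proj = cca.fit_transform(questionnaire, performance)
canon_corr = [np.corrcoef(q_proj[:, k], p_proj[:, k])[0, 1] for k in range(2)]
print("CAM shape:", cam.shape, "| canonical correlations:", np.round(canon_corr, 3))
```

In the paper's pipeline the feature maps and gradients would come from the trained shallow MI-EEG network and the CCA inputs from real questionnaire and classification results, rather than the random arrays used here.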
Pages: 25
Related Papers
Total: 50 records
  • [31] Multimodal Fusion of EEG and Eye Data for Attention Classification using Machine Learning
    Roy, Indrani Paul
    Neog, Debanga Raj
    2024 IEEE CONFERENCE ON ARTIFICIAL INTELLIGENCE, CAI 2024, 2024, : 953 - 954
  • [32] USING EEG BRAIN WAVES AND DEEP LEARNING METHOD FOR LEARNING STATUS CLASSIFICATION
    Liao, Chung-Yen
    Chen, Rung-Ching
    PROCEEDINGS OF 2018 INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND CYBERNETICS (ICMLC), VOL 2, 2018, : 527 - 532
  • [33] Intermediality of Musical Emotions in a Multimodal Scenario: Deep Learning-Aided EEG Correlation Study
    Sanyal, Shankha
    Banerjee, Archi
    Nag, Sayan
    Basu, Medha
    Gangopadhyay, Madhuparna
    Ghosh, Dipak
    PROCEEDINGS OF 27TH INTERNATIONAL SYMPOSIUM ON FRONTIERS OF RESEARCH IN SPEECH AND MUSIC, FRSM 2023, 2024, 1455 : 399 - 413
  • [34] Learning Relationships between Text, Audio, and Video via Deep Canonical Correlation for Multimodal Language Analysis
    Sun, Zhongkai
    Sarma, Prathusha K.
    Sethares, William A.
    Liang, Yingyu
    THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 8992 - 8999
  • [35] MIMETIC: Mobile encrypted traffic classification using multimodal deep learning
    Aceto, Giuseppe
    Ciuonzo, Domenico
    Montieri, Antonio
    Pescape, Antonio
    COMPUTER NETWORKS, 2019, 165
  • [36] Detection of Alzheimer's Dementia by Using EEG Feature Maps and Deep Learning
    Akbugday, Sude Pehlivan
    Cura, Ozlem Karabiber
    Akbugday, Burak
    Akan, Aydin
    32ND EUROPEAN SIGNAL PROCESSING CONFERENCE, EUSIPCO 2024, 2024, : 1397 - 1401
  • [37] Multimodal Deep Learning using Images and Text for Information Graphic Classification
    Kim, Edward
    McCoy, Kathleen F.
    ASSETS'18: PROCEEDINGS OF THE 20TH INTERNATIONAL ACM SIGACCESS CONFERENCE ON COMPUTERS AND ACCESSIBILITY, 2018, : 143 - 148
  • [38] Classification of pilots' mental states using a multimodal deep learning network
    Han, Soo-Yeon
    Kwak, No-Sang
    Oh, Taegeun
    Lee, Seong-Whan
    BIOCYBERNETICS AND BIOMEDICAL ENGINEERING, 2020, 40 (01) : 324 - 336
  • [39] Classification of parotid gland tumors by using multimodal MRI and deep learning
    Chang, Yi-Ju
    Huang, Teng-Yi
    Liu, Yi-Jui
    Chung, Hsiao-Wen
    Juan, Chun-Jung
    NMR IN BIOMEDICINE, 2021, 34 (01)
  • [40] A Hybrid Deep Learning Emotion Classification System Using Multimodal Data
    Kim, Dong-Hwi
    Son, Woo-Hyeok
    Kwak, Sung-Shin
    Yun, Tae-Hyeon
    Park, Ji-Hyeok
    Lee, Jae-Dong
    SENSORS, 2023, 23 (23)