Decoding of imagined speech electroencephalography neural signals using transfer learning method

Cited: 4
Authors
Mahapatra, Nrushingh Charan [1 ,2 ]
Bhuyan, Prachet [2 ]
Affiliations
[1] Intel Technol India Pvt Ltd, Bengaluru 560103, India
[2] Kalinga Inst Ind Technol, Sch Comp Engn, Bhubaneswar 751024, India
Source
JOURNAL OF PHYSICS COMMUNICATIONS | 2023, Vol. 7, Issue 09
Keywords
brain-computer interface (BCI); deep learning (DL); electroencephalography (EEG); imagined speech; signal processing; transfer learning (TL); EEG; ARTIFACTS;
DOI
10.1088/2399-6528/ad0197
Chinese Library Classification
O4 [Physics];
Discipline Classification Code
0702;
Abstract
The use of brain-computer interfaces to produce imagined speech from brain waves has the potential to assist individuals who have difficulty producing speech or who wish to communicate silently. Decoding of covert speech has shown limited efficacy because the associated measured brain waves are highly variable and covert speech databases are scarce. As a result, training traditional machine learning algorithms for learning and inference is challenging, and a practical alternative is to leverage transfer learning. The main goals of this research were to create a new deep learning (DL) framework for decoding imagined speech electroencephalography (EEG) signals using transfer learning, and to transfer the model learning from a source task on one imagined speech EEG dataset to model training on a target task on another imagined speech EEG dataset, that is, cross-task transfer of the discriminative characteristics learned on the source task to the target imagined speech task. The experiment was carried out using two distinct open-access EEG datasets, FEIS and KaraOne, which recorded imagined speech classes of neural signals from multiple individuals. With the proposed transfer learning, the target FEIS model and the target KaraOne model achieve overall multiclass classification accuracies of 89.01% and 82.35%, respectively. The experimental results indicate that the cross-task deep transfer learning design reliably classifies imagined speech EEG signals by applying the source task learning to the target task learning. The findings suggest the feasibility of a consistent strategy for classifying multiclass imagined speech with transfer learning, which could open up future investigation into the usability of cross-task imagined speech classification knowledge for generalizing to new imagined speech prompts.
Pages: 14
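
The cross-task transfer design described in the abstract can be summarized, under stated assumptions, with a short sketch. The PyTorch code below pre-trains a convolutional EEG encoder on a synthetic stand-in for the source imagined speech dataset, copies the encoder weights into the target-task model, freezes them, and trains a new classification head. Every architectural detail, channel count, window length, and class count here is an illustrative assumption rather than the authors' published configuration, and the random tensors merely take the place of preprocessed FEIS/KaraOne windows.

# Illustrative sketch (not the authors' published code) of cross-task transfer
# learning for imagined speech EEG classification. All shapes, layer sizes,
# class counts, and the random tensors standing in for real datasets are
# assumptions for demonstration only.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset


class EEGEncoder(nn.Module):
    """Temporal-then-spatial CNN feature extractor over (channels, samples) EEG windows."""
    def __init__(self, n_channels: int, n_samples: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(1, 25), padding=(0, 12)),  # temporal filtering
            nn.Conv2d(16, 32, kernel_size=(n_channels, 1)),          # spatial filtering
            nn.BatchNorm2d(32),
            nn.ELU(),
            nn.AvgPool2d(kernel_size=(1, 4)),
            nn.Flatten(),
        )
        with torch.no_grad():
            self.out_dim = self.features(torch.zeros(1, 1, n_channels, n_samples)).shape[1]

    def forward(self, x):
        return self.features(x)


class ImaginedSpeechNet(nn.Module):
    """Shared encoder plus a task-specific classification head."""
    def __init__(self, n_channels: int, n_samples: int, n_classes: int):
        super().__init__()
        self.encoder = EEGEncoder(n_channels, n_samples)
        self.head = nn.Linear(self.encoder.out_dim, n_classes)

    def forward(self, x):  # x: (batch, 1, channels, samples)
        return self.head(self.encoder(x))


def train(model, loader, epochs=5, lr=1e-3):
    opt = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()


# Source task: synthetic stand-in for one imagined speech EEG dataset.
source = ImaginedSpeechNet(n_channels=14, n_samples=640, n_classes=11)
source_data = TensorDataset(torch.randn(64, 1, 14, 640), torch.randint(0, 11, (64,)))
train(source, DataLoader(source_data, batch_size=16))

# Target task: a different imagined speech dataset with its own label set.
# Transfer the learned encoder, freeze it, and train only the new head.
# In practice the two datasets' montages and epoch lengths would first have to
# be aligned to a common input shape (e.g. channel selection and resampling).
target = ImaginedSpeechNet(n_channels=14, n_samples=640, n_classes=16)
target.encoder.load_state_dict(source.encoder.state_dict())
for p in target.encoder.parameters():
    p.requires_grad = False
target_data = TensorDataset(torch.randn(64, 1, 14, 640), torch.randint(0, 16, (64,)))
train(target, DataLoader(target_data, batch_size=16))

Freezing the transferred encoder and training only the head is one common transfer strategy; unfreezing it afterwards for a few low-learning-rate epochs (fine-tuning) is another reasonable variant under the same assumptions.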