A Deep Learning Framework for Decoding Motor Imagery Tasks of the Same Hand Using EEG Signals

Citations: 41
Authors
Alazrai, Rami [1 ]
Abuhijleh, Motaz [1 ]
Alwanni, Hisham [2 ]
Daoud, Mohammad I. [1 ]
Affiliations
[1] German Jordanian Univ, Sch Elect Engn & Informat Technol, Dept Comp Engn, Amman 11180, Jordan
[2] Univ Freiburg, Fac Engn, D-79098 Freiburg, Germany
Source
IEEE ACCESS | 2019, Vol. 7
Keywords
Convolutional neural networks (CNN); deep learning; electroencephalography (EEG); motor imagery; time-frequency distribution; BRAIN-COMPUTER INTERFACES; SINGLE-TRIAL EEG; DISCRIMINATION; CLASSIFICATION;
DOI
10.1109/ACCESS.2019.2934018
CLC number
TP [automation and computer technology]
Discipline classification code
0812
Abstract
This study aims to increase the number of control dimensions of electroencephalography (EEG)-based brain-computer interface (BCI) systems by distinguishing between the motor imagery (MI) tasks associated with fine body parts of the same hand, such as the wrist and fingers. This in turn can enable individuals with transradial amputations to better control prosthetic hands and to perform various dexterous hand tasks. In particular, we present a novel three-stage framework for decoding MI tasks of the same hand. The three stages of the proposed framework are the input, feature extraction, and classification stages. At the input stage, we employ a quadratic time-frequency distribution (QTFD) to analyze the EEG signals in the joint time-frequency domain. The QTFD transforms the EEG signals into a set of two-dimensional (2D) time-frequency images (TFIs) that describe the distribution of the energy encapsulated within the EEG signals in terms of time, frequency, and electrode position. At the feature extraction stage, we design a new convolutional neural network (CNN) architecture that automatically analyzes and extracts salient features from the TFIs created at the input stage. Finally, the features obtained at the feature extraction stage are passed to the classification stage to assign each input TFI to one of the eleven MI tasks considered in the current study. The performance of our proposed framework is evaluated using EEG signals acquired from eighteen able-bodied subjects and four transradial amputee subjects while performing eleven MI tasks of the same hand. The average classification accuracies obtained for the able-bodied and transradial amputee subjects are 73.7% and 72.8%, respectively.
Moreover, our proposed framework yields 14.5% and 11.2% improvements over the results obtained for the able-bodied and transradial amputee subjects, respectively, using conventional QTFD-based handcrafted features and a multi-class support vector machine classifier. The results demonstrate the efficacy of the proposed framework in decoding the MI tasks associated with the same hand for able-bodied and transradial amputee subjects.
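The input-stage idea described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation (the paper's exact QTFD kernel, window, and multi-channel TFI layout are not reproduced here); it uses a discrete pseudo Wigner-Ville distribution, one of the standard quadratic TFDs, to turn a single-channel analytic signal into a 2D time-frequency image whose energy concentrates at the signal's frequency.

```python
import numpy as np

def wigner_ville(x):
    """Discrete pseudo Wigner-Ville distribution (a quadratic TFD).

    x : complex analytic signal of length n.
    Returns an (n, n) array: rows index frequency bins f_k = k*fs/(2n),
    columns index time samples.
    """
    x = np.asarray(x, dtype=complex)
    n = len(x)
    wvd = np.empty((n, n))
    for t in range(n):
        # Largest lag that keeps both t+tau and t-tau inside the signal.
        taumax = min(t, n - 1 - t)
        tau = np.arange(-taumax, taumax + 1)
        # Instantaneous autocorrelation r(t, tau) = x(t+tau) * conj(x(t-tau)).
        r = np.zeros(n, dtype=complex)
        r[tau % n] = x[t + tau] * np.conj(x[t - tau])
        # FFT over the lag variable gives the energy distribution in frequency.
        wvd[:, t] = np.fft.fft(r).real
    return wvd

fs, n, f0 = 128, 128, 16.0
t = np.arange(n) / fs
x = np.exp(2j * np.pi * f0 * t)       # analytic tone at 16 Hz
tfi = wigner_ville(x)                 # 2D time-frequency image
peak_bin = np.argmax(tfi[:, n // 2])  # frequency bin with most energy at mid-signal
print(peak_bin * fs / (2 * n))        # → 16.0
```

In the paper's pipeline, images of this kind (one per electrode) form the 2D TFIs that the CNN at the feature extraction stage consumes; for real EEG one would first form the analytic signal (e.g. via a Hilbert transform) and typically apply a smoothing kernel to suppress the cross-terms inherent to quadratic TFDs.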
Pages: 109612-109627
Number of pages: 16
Related papers
50 records in total
  • [1] Decoding Hand Motor Imagery Tasks Within the Same Limb From EEG Signals Using Deep Learning
    Achanccaray, David
    Hayashibe, Mitsuhiro
    IEEE TRANSACTIONS ON MEDICAL ROBOTICS AND BIONICS, 2020, 2 (04): 692 - 699
  • [2] Decoding and Mapping of Right Hand Motor Imagery Tasks using EEG Source Imaging
    Edelman, Brad
    Baxter, Bryan
    He, Bin
    2015 7TH INTERNATIONAL IEEE/EMBS CONFERENCE ON NEURAL ENGINEERING (NER), 2015, : 194 - 197
  • [3] Speech Imagery Decoding Using EEG Signals and Deep Learning: A Survey
    Zhang, Liying
    Zhou, Yueying
    Gong, Peiliang
    Zhang, Daoqiang
    IEEE TRANSACTIONS ON COGNITIVE AND DEVELOPMENTAL SYSTEMS, 2025, 17 (01) : 22 - 39
  • [4] Classification of motor imagery EEG signals using deep learning
    Rahma, Boungab
    Aicha, Reffad
    Kamel, Mebarkia
    PROGRAM OF THE 2ND INTERNATIONAL CONFERENCE ON ELECTRICAL ENGINEERING AND AUTOMATIC CONTROL, ICEEAC 2024, 2024,
  • [5] EEG Classification of Motor Imagery Using a Novel Deep Learning Framework
    Dai, Mengxi
    Zheng, Dezhi
    Na, Rui
    Wang, Shuai
    Zhang, Shuailei
    SENSORS, 2019, 19 (03)
  • [6] HemCNN: Deep Learning enables decoding of fNIRS cortical signals in hand grip motor tasks
    Ortega, Pablo
    Faisal, Aldo
    2021 10TH INTERNATIONAL IEEE/EMBS CONFERENCE ON NEURAL ENGINEERING (NER), 2021, : 718 - 721
  • [7] Hybrid deep neural network using transfer learning for EEG motor imagery decoding
    Zhang, Ruilong
    Zong, Qun
    Dou, Liqian
    Zhao, Xinyi
    Tang, Yifan
    Li, Zhiyu
    BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2021, 63
  • [8] Classification of Motor Imagery EEG Signals with Deep Learning Models
    Shen, Yurun
    Lu, Hongtao
    Jia, Jie
    INTELLIGENCE SCIENCE AND BIG DATA ENGINEERING, ISCIDE 2017, 2017, 10559 : 181 - 190
  • [9] Deep learning in motor imagery EEG signal decoding: A Systematic Review
    Saibene, Aurora
    Ghaemi, Hafez
    Dagdevir, Eda
    NEUROCOMPUTING, 2024, 610
  • [10] EEG Source Imaging Enhances the Decoding of Complex Right-Hand Motor Imagery Tasks
    Edelman, Bradley J.
    Baxter, Bryan
    He, Bin
    IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING, 2016, 63 (01) : 4 - 14