Cross-dataset transfer learning for motor imagery signal classification via multi-task learning and pre-training

Cited by: 18
Authors
Xie, Yuting [1 ]
Wang, Kun [1 ,3 ]
Meng, Jiayuan [1 ,2 ,3 ]
Yue, Jin [1 ]
Meng, Lin [1 ,3 ]
Yi, Weibo [3 ,4 ]
Jung, Tzyy-Ping [1 ,2 ]
Xu, Minpeng [1 ,2 ,3 ]
Ming, Dong [1 ,2 ,3 ]
Affiliations
[1] Tianjin Univ, Acad Med Engn & Translat Med, Tianjin, Peoples R China
[2] Tianjin Univ, Coll Precis Instruments & Optoelect Engn, Tianjin, Peoples R China
[3] Haihe Lab Brain Comp Interact & Human Machine Inte, Tianjin, Peoples R China
[4] Beijing Inst Mech Equipment, Beijing, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
motor imagery; deep learning; cross-dataset transfer learning; pre-training; multi-task learning; BRAIN-COMPUTER INTERFACES; DOMAIN ADAPTATION; EEG;
DOI
10.1088/1741-2552/acfe9c
Chinese Library Classification
R318 [Biomedical Engineering];
Discipline code
0831;
Abstract
Objective. Deep learning (DL) models have proven effective in decoding motor imagery (MI) signals from electroencephalogram (EEG) data. However, their success relies heavily on large amounts of training data, whereas EEG data collection is laborious and time-consuming. Recently, cross-dataset transfer learning has emerged as a promising approach to meet the data requirements of DL models. Nevertheless, transferring knowledge across datasets involving different MI tasks remains a significant challenge, limiting the full utilization of valuable data resources. Approach. This study proposes a pre-training-based cross-dataset transfer learning method inspired by hard parameter sharing in multi-task learning. Datasets with distinct MI paradigms are treated as different tasks and classified with shared feature-extraction layers and individual task-specific layers, allowing cross-dataset classification with one unified model. Pre-training and fine-tuning are then employed to transfer knowledge across datasets. We also designed four fine-tuning schemes and conducted extensive experiments on them. Main results. Compared to models without pre-training, pre-trained models achieved a maximum accuracy increase of 7.76%. Moreover, when training data were limited, the pre-training method improved DL model accuracy by up to 27.34%. The experiments also showed that pre-trained models converge faster and are markedly more robust: training time per subject was reduced by up to 102.83 s, and the variance of classification accuracy decreased by up to 75.22%. Significance. This study represents the first comprehensive investigation of cross-dataset transfer learning between two datasets with different MI tasks. The proposed pre-training method requires only minimal fine-tuning data when applying DL models to new MI paradigms, making MI-based brain-computer interfaces more practical and user-friendly.
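To make the hard-parameter-sharing idea concrete, the sketch below shows one way such a unified model could be organized: a shared feature-extraction encoder plus an individual classification head per dataset, jointly pre-trained and then fine-tuned on the target dataset. This is an illustrative sketch in PyTorch under stated assumptions, not the paper's actual network: the layer configuration, feature dimension, dataset names, channel/sample counts, and class counts are all placeholders.

import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    # Feature-extraction layers shared by all datasets (the hard-shared part).
    def __init__(self, n_channels, n_samples, n_features=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(1, 25), padding=(0, 12)),  # temporal filtering
            nn.Conv2d(16, 32, kernel_size=(n_channels, 1)),          # spatial filtering
            nn.BatchNorm2d(32),
            nn.ELU(),
            nn.AvgPool2d(kernel_size=(1, 4)),
            nn.Flatten(),
        )
        with torch.no_grad():  # infer the flattened size with a dummy pass
            flat_dim = self.features(torch.zeros(1, 1, n_channels, n_samples)).shape[1]
        self.proj = nn.Linear(flat_dim, n_features)

    def forward(self, x):  # x: (batch, 1, n_channels, n_samples)
        return self.proj(self.features(x))

class MultiTaskMIModel(nn.Module):
    # One unified model: shared encoder plus one task-specific head per dataset.
    def __init__(self, n_channels, n_samples, classes_per_dataset, n_features=64):
        super().__init__()
        self.encoder = SharedEncoder(n_channels, n_samples, n_features)
        self.heads = nn.ModuleDict(
            {name: nn.Linear(n_features, n_cls) for name, n_cls in classes_per_dataset.items()}
        )

    def forward(self, x, dataset):
        # Route the shared features through the head belonging to this dataset.
        return self.heads[dataset](self.encoder(x))

# Hypothetical pre-training pass; real datasets would first be aligned to a
# common channel/sample layout, with batches from both datasets interleaved.
model = MultiTaskMIModel(n_channels=22, n_samples=250,
                         classes_per_dataset={"source": 4, "target": 2})
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
for dataset, x, y in [("source", torch.randn(8, 1, 22, 250), torch.randint(0, 4, (8,))),
                      ("target", torch.randn(8, 1, 22, 250), torch.randint(0, 2, (8,)))]:
    optimizer.zero_grad()
    criterion(model(x, dataset), y).backward()
    optimizer.step()

# One possible fine-tuning scheme: freeze the shared encoder, train only the target head.
for p in model.encoder.parameters():
    p.requires_grad = False
finetune_opt = torch.optim.Adam(model.heads["target"].parameters(), lr=1e-3)

The abstract does not specify what the four fine-tuning schemes are; they presumably differ in which of these components are frozen or re-trained on the target data, and the encoder-freezing variant above is only one such possibility.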
Pages: 15
Related papers
50 records in total
  • [1] Cross-dataset face analysis based on multi-task learning
    Zhou, Caixia
    Zhi, Ruicong
    Hu, Xin
    APPLIED INTELLIGENCE, 2023, 53 (10) : 12971 - 12984
  • [2] Multi-task Pre-training with Soft Biometrics for Transfer-learning Palmprint Recognition
    Xu, Huanhuan
    Leng, Lu
    Yang, Ziyuan
    Teoh, Andrew Beng Jin
    Jin, Zhe
    NEURAL PROCESSING LETTERS, 2023, 55 (03) : 2341 - 2358
  • [3] Dataset for modulation classification and signal type classification for multi-task and single task learning
    Jagannath, Anu
    Jagannath, Jithin
    COMPUTER NETWORKS, 2021, 199
  • [4] CLMSM: A Multi-Task Learning Framework for Pre-training on Procedural Text
    Nandy, Abhilash
    Kapadnis, Manav Nitin
    Goyal, Pawan
    Ganguly, Niloy
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (EMNLP 2023), 2023, : 8793 - 8806
  • [5] DL4ALL: Multi-Task Cross-Dataset Transfer Learning for Acute Lymphoblastic Leukemia Detection
    Genovese, Angelo
    Piuri, Vincenzo
    Plataniotis, Konstantinos N.
    Scotti, Fabio
    IEEE ACCESS, 2023, 11 : 65222 - 65237
  • [6] Pre-training Multi-task Contrastive Learning Models for Scientific Literature Understanding
    Zhang, Yu
    Cheng, Hao
    Shen, Zhihong
    Liu, Xiaodong
    Wang, Ye-Yi
    Gao, Jianfeng
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (EMNLP 2023), 2023, : 12259 - 12275
  • [7] Cross-dataset motor imagery decoding - A transfer learning assisted graph convolutional network approach
    Zhang, Jiayang
    Li, Kang
    Yang, Banghua
    Zhao, Zhengrun
    BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2025, 102
  • [8] Multi-source deep domain adaptation ensemble framework for cross-dataset motor imagery EEG transfer learning
    Miao, Minmin
    Yang, Zhong
    Sheng, Zhenzhen
    Xu, Baoguo
    Zhang, Wenbin
    Cheng, Xinmin
    PHYSIOLOGICAL MEASUREMENT, 2024, 45 (05)