Cross-dataset transfer learning for motor imagery signal classification via multi-task learning and pre-training

Cited by: 18
Authors
Xie, Yuting [1 ]
Wang, Kun [1 ,3 ]
Meng, Jiayuan [1 ,2 ,3 ]
Yue, Jin [1 ]
Meng, Lin [1 ,3 ]
Yi, Weibo [3 ,4 ]
Jung, Tzyy-Ping [1 ,2 ]
Xu, Minpeng [1 ,2 ,3 ]
Ming, Dong [1 ,2 ,3 ]
Affiliations
[1] Tianjin Univ, Acad Med Engn & Translat Med, Tianjin, Peoples R China
[2] Tianjin Univ, Coll Precis Instruments & Optoelect Engn, Tianjin, Peoples R China
[3] Haihe Lab Brain Comp Interact & Human Machine Inte, Tianjin, Peoples R China
[4] Beijing Inst Mech Equipment, Beijing, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
motor imagery; deep learning; cross-dataset transfer learning; pre-training; multi-task learning; BRAIN-COMPUTER INTERFACES; DOMAIN ADAPTATION; EEG;
DOI
10.1088/1741-2552/acfe9c
Chinese Library Classification
R318 [Biomedical Engineering];
Discipline Classification Code
0831;
Abstract
Objective. Deep learning (DL) models have proven effective at decoding motor imagery (MI) signals from electroencephalogram (EEG) data. However, their success relies heavily on large amounts of training data, whereas EEG data collection is laborious and time-consuming. Recently, cross-dataset transfer learning has emerged as a promising approach to meet the data requirements of DL models. Nevertheless, transferring knowledge across datasets involving different MI tasks remains a significant challenge, limiting the full utilization of valuable data resources. Approach. This study proposes a pre-training-based cross-dataset transfer learning method inspired by hard parameter sharing in multi-task learning. Datasets with distinct MI paradigms are treated as different tasks and classified by one unified model with shared feature-extraction layers and individual task-specific layers, enabling cross-dataset classification. Pre-training and fine-tuning are then used to transfer knowledge across datasets. We also designed four fine-tuning schemes and conducted extensive experiments on them. Main results. Compared with models trained without pre-training, pre-trained models achieved a maximum accuracy increase of 7.76%. Moreover, when training data were limited, pre-training improved the DL models' accuracy by up to 27.34%. The experiments also showed that pre-trained models converge faster and are markedly more robust: training time per subject was reduced by up to 102.83 s, and the variance of classification accuracy decreased by as much as 75.22%. Significance. This study represents the first comprehensive investigation of cross-dataset transfer learning between two datasets with different MI tasks. The proposed pre-training method requires only minimal fine-tuning data when applying DL models to new MI paradigms, making MI-based brain-computer interfaces more practical and user-friendly.
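The abstract is the only methodological description in this record. As a rough illustration of the hard-parameter-sharing idea it outlines (one shared feature extractor, one classification head per dataset, pre-training followed by fine-tuning), a minimal PyTorch-style sketch follows. All names (SharedBackbone, MultiTaskMIModel, dataset_A, dataset_B), the specific layer choices, and the freeze-the-backbone fine-tuning step are illustrative assumptions, not the authors' actual architecture or any of their four fine-tuning schemes.

import torch
import torch.nn as nn

class SharedBackbone(nn.Module):
    # Shared temporal + spatial convolution layers used by every dataset (hard parameter sharing).
    def __init__(self, n_channels, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(1, 25), padding=(0, 12)),   # temporal filtering
            nn.Conv2d(16, feat_dim, kernel_size=(n_channels, 1)),     # spatial filtering
            nn.BatchNorm2d(feat_dim),
            nn.ELU(),
            nn.AdaptiveAvgPool2d((1, 1)),
            nn.Flatten(),
        )

    def forward(self, x):  # x: (batch, 1, channels, time)
        return self.net(x)

class MultiTaskMIModel(nn.Module):
    # One shared backbone, one task-specific linear head per dataset/paradigm.
    def __init__(self, n_channels, n_classes_per_task):
        super().__init__()
        self.backbone = SharedBackbone(n_channels)
        self.heads = nn.ModuleDict(
            {task: nn.Linear(64, n_cls) for task, n_cls in n_classes_per_task.items()}
        )

    def forward(self, x, task):
        return self.heads[task](self.backbone(x))

# Pre-training: alternate mini-batches from both datasets so the shared backbone
# learns paradigm-agnostic MI features.
model = MultiTaskMIModel(n_channels=22, n_classes_per_task={"dataset_A": 4, "dataset_B": 2})
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def pretraining_step(x, y, task):
    optimizer.zero_grad()
    loss = criterion(model(x, task), y)
    loss.backward()
    optimizer.step()
    return loss.item()

# One possible fine-tuning scheme (assumed, not necessarily one of the paper's four):
# freeze the shared backbone and retrain only the target dataset's head on limited data.
for p in model.backbone.parameters():
    p.requires_grad = False

In this sketch, the shared backbone is what allows a small amount of target-dataset data to adapt only a few parameters during fine-tuning, which is the mechanism the abstract credits for the accuracy and convergence gains.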
Pages: 15
Related Papers
50 records in total
  • [21] Continual Contrastive Learning for Cross-Dataset Scene Classification
    Peng, Rui
    Zhao, Wenzhi
    Li, Kaiyuan
    Ji, Fengcheng
    Rong, Caixia
    REMOTE SENSING, 2022, 14 (20)
  • [22] Explainable cross-task adaptive transfer learning for motor imagery EEG classification
    Miao, Minmin
    Yang, Zhong
    Zeng, Hong
    Zhang, Wenbin
    Xu, Baoguo
    Hu, Wenjun
    JOURNAL OF NEURAL ENGINEERING, 2023, 20 (06)
  • [23] Early Recurrence Prediction of Hepatocellular Carcinoma Using Deep Learning Frameworks with Multi-Task Pre-Training
    Song, Jian
    Dong, Haohua
    Chen, Youwen
    Zhang, Xianru
    Zhan, Gan
    Jain, Rahul Kumar
    Chen, Yen-Wei
    INFORMATION, 2024, 15 (08)
  • [24] Multi-Task Collaborative Pre-Training and Adaptive Token Selection: A Unified Framework for Brain Representation Learning
    Jiang, Ning
    Wang, Gongshu
    Ye, Chuyang
    Liu, Tiantian
    Yan, Tianyi
    IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2024, 28 (09) : 5528 - 5539
  • [25] XLPT-AMR: Cross-Lingual Pre-Training via Multi-Task Learning for Zero-Shot AMR Parsing and Text Generation
    Xu, Dongqin
    Li, Junhui
    Zhu, Muhua
    Zhang, Min
    Zhou, Guodong
    59TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS AND THE 11TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING, VOL 1 (ACL-IJCNLP 2021), 2021, : 896 - 907
  • [26] Cross-dataset Deep Transfer Learning for Activity Recognition
    Gjoreski, Martin
    Kalabakov, Stefan
    Lustrek, Mitja
    Gams, Matjaz
    Gjoreski, Hristijan
    UBICOMP/ISWC'19 ADJUNCT: PROCEEDINGS OF THE 2019 ACM INTERNATIONAL JOINT CONFERENCE ON PERVASIVE AND UBIQUITOUS COMPUTING AND PROCEEDINGS OF THE 2019 ACM INTERNATIONAL SYMPOSIUM ON WEARABLE COMPUTERS, 2019, : 714 - 718
  • [27] TransPeakNet for solvent-aware 2D NMR prediction via multi-task pre-training and unsupervised learning
    Li, Yunrui
    Xu, Hao
    Kumar, Ambrish
    Wang, Duo-Sheng
    Heiss, Christian
    Azadi, Parastoo
    Hong, Pengyu
    COMMUNICATIONS CHEMISTRY, 2025, 8 (01)
  • [28] Meta-Learning-based Cross-Dataset Motor Imagery Brain-Computer Interface
    Kim, Jun-Mo
    Bak, Soyeon
    Nam, Hyeonyeong
    Choi, WooHyeok
    Kam, Tae-Eui
    2024 12TH INTERNATIONAL WINTER CONFERENCE ON BRAIN-COMPUTER INTERFACE, BCI 2024, 2024
  • [29] Cross-Subject & Cross-Dataset Subject Transfer in Motor Imagery BCI systems
    Zaremba, Teddy
    Atyabi, Adham
    2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022
  • [30] Multi-Task Learning with Knowledge Transfer for Facial Attribute Classification
    Fanhe, Xiaohui
    Guo, Jie
    Huang, Zheng
    Qiu, Weidong
    Zhang, Yuele
    2019 IEEE INTERNATIONAL CONFERENCE ON INDUSTRIAL TECHNOLOGY (ICIT), 2019, : 877 - 882