Spatialspectral-Backdoor: Realizing backdoor attack for deep neural networks in brain-computer interface via EEG characteristics

Cited by: 0
Authors
Li, Fumin [1 ,3 ]
Huang, Mengjie [2 ]
You, Wenlong [1 ,3 ]
Zhu, Longsheng [1 ,3 ]
Cheng, Hanjing [4 ]
Yang, Rui [1 ]
Affiliations
[1] Xian Jiaotong Liverpool Univ, Sch Adv Technol, Suzhou 215123, Peoples R China
[2] Xian Jiaotong Liverpool Univ, Design Sch, Suzhou 215123, Peoples R China
[3] Univ Liverpool, Sch Elect Engn Elect & Comp Sci, Liverpool L69 3BX, England
[4] Suzhou Univ Sci & Technol, Sch Elect & Informat Engn, Suzhou 215009, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Backdoor attack; Deep neural networks; Brain-computer interfaces; Electroencephalogram;
DOI
10.1016/j.neucom.2024.128902
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405
Abstract
In recent years, electroencephalogram (EEG)-based brain-computer interface (BCI) systems have become increasingly advanced, with researchers using deep neural networks to enhance performance. BCI systems rely heavily on EEG signals for effective human-computer interaction, and deep neural networks show excellent performance in processing and classifying these signals. Nevertheless, vulnerability to backdoor attacks remains a major problem. A backdoor attack injects specially designed triggers into the model training process, which can lead to significant security issues. Therefore, to simulate the negative impact of backdoor attacks and to bridge this research gap in the field of BCI, this paper proposes a new backdoor attack method, Spatialspectral-Backdoor, to draw researchers' attention to the security issues of BCI. The method is designed as a spectrally active backdoor attack on the BCI system and includes a multi-channel preference method to select the electrode channels most sensitive to the target task. Its effectiveness is validated through comparison and ablation experiments on publicly available BCI competition datasets. The results show that Spatialspectral-Backdoor achieves an average attack success rate of 97.12% and a clean-sample accuracy of 85.16% in the BCI scenario, outperforming other backdoor attack methods. Furthermore, observation of the backdoor trigger infection ratio and visualization of the feature space confirm that the proposed Spatialspectral-Backdoor outperforms other backdoor attack methods.
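The abstract describes two ingredients: a channel-preference step that picks electrode channels sensitive to the target task, and a spectral trigger injected into a fraction of training trials. The paper's actual algorithm is not given here, so the following is only a minimal sketch of that general data-poisoning pattern; the function names, the variance-based preference score, and the sinusoidal trigger parameters are all illustrative assumptions, not the authors' method.

```python
import numpy as np

def select_channels(X, y, target, k=3):
    """Hypothetical channel-preference score: rank channels by the
    variance gap between target-class trials and the rest, and keep
    the k highest-scoring channels. X has shape (trials, channels, samples)."""
    mask = (y == target)
    score = np.abs(X[mask].var(axis=(0, 2)) - X[~mask].var(axis=(0, 2)))
    return np.argsort(score)[-k:]

def poison(X, y, target, channels, rate=0.1, freq=10.0, fs=250.0,
           amp=0.5, seed=0):
    """Inject a narrow-band sinusoid (a spectral trigger) into the
    selected channels of a small fraction of trials, and relabel
    those trials to the attacker's target class."""
    rng = np.random.default_rng(seed)
    Xp, yp = X.copy(), y.copy()
    n_poison = max(1, int(rate * X.shape[0]))
    idx = rng.choice(X.shape[0], size=n_poison, replace=False)
    t = np.arange(X.shape[2]) / fs
    trigger = amp * np.sin(2 * np.pi * freq * t)
    for i in idx:
        Xp[i, channels, :] += trigger  # broadcast over selected channels
    yp[idx] = target
    return Xp, yp, idx
```

A model trained on `(Xp, yp)` would then associate the trigger frequency on those channels with the target class, while the untouched trials keep clean-sample accuracy high, which mirrors the attack-success-rate / clean-accuracy trade-off the abstract reports.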
Pages: 11