Spatialspectral-Backdoor: Realizing backdoor attack for deep neural networks in brain-computer interface via EEG characteristics

Cited by: 0
Authors
Li, Fumin [1 ,3 ]
Huang, Mengjie [2 ]
You, Wenlong [1 ,3 ]
Zhu, Longsheng [1 ,3 ]
Cheng, Hanjing [4 ]
Yang, Rui [1 ]
Affiliations
[1] Xian Jiaotong Liverpool Univ, Sch Adv Technol, Suzhou 215123, Peoples R China
[2] Xian Jiaotong Liverpool Univ, Design Sch, Suzhou 215123, Peoples R China
[3] Univ Liverpool, Sch Elect Engn Elect & Comp Sci, Liverpool L69 3BX, England
[4] Suzhou Univ Sci & Technol, Sch Elect & Informat Engn, Suzhou 215009, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Backdoor attack; Deep neural networks; Brain-computer interfaces; Electroencephalogram;
DOI
10.1016/j.neucom.2024.128902
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
In recent years, electroencephalogram (EEG)-based brain-computer interface (BCI) systems have become increasingly advanced, with researchers using deep neural networks to enhance performance. BCI systems rely heavily on EEG signals for effective human-computer interaction, and deep neural networks show excellent performance in processing and classifying these signals. Nevertheless, vulnerability to backdoor attacks remains a major problem. A backdoor attack injects specially designed triggers into the model training process, which can lead to significant security issues. Therefore, to simulate the negative impact of backdoor attacks and bridge this research gap in the field of BCI, this paper proposes a new backdoor attack method, Spatialspectral-Backdoor, to draw researchers' attention to the security issues of BCI. The method is designed as a spectrally active backdoor attack targeting the BCI system and includes a multi-channel preference method to select the electrode channels most sensitive to the target task. The effectiveness of the method is validated through comparison and ablation experiments on publicly available BCI competition datasets. The results show that Spatialspectral-Backdoor achieves an average attack success rate of 97.12% and a clean-sample accuracy of 85.16% in the BCI scenario, outperforming other backdoor attack methods. Furthermore, observation of the infection ratio of backdoor triggers and visualization of the feature space confirm that the proposed Spatialspectral-Backdoor outperforms other backdoor attack methods.
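The abstract describes two ingredients: selecting electrode channels sensitive to the target task, and injecting a spectral trigger into those channels. The paper's exact algorithm is not given here, so the following is only a minimal illustrative sketch on synthetic data; the variance-based channel ranking and the fixed-frequency sinusoidal trigger (`select_channels`, `inject_spectral_trigger`, `freq=12.0`) are assumptions standing in for the actual multi-channel preference and trigger design.

```python
import numpy as np

def select_channels(eeg, labels, target, k=3):
    # Hypothetical channel-preference rule: rank channels by the gap in
    # per-channel variance between target-class trials and the rest.
    # eeg has shape (trials, channels, samples).
    target_var = eeg[labels == target].var(axis=(0, 2))
    other_var = eeg[labels != target].var(axis=(0, 2))
    return np.argsort(np.abs(target_var - other_var))[-k:]

def inject_spectral_trigger(trial, channels, fs=250, freq=12.0, amp=0.5):
    # Add a narrowband sinusoid (the "spectral" trigger) only to the
    # selected channels; all other channels are left untouched.
    poisoned = trial.copy()
    t = np.arange(trial.shape[1]) / fs
    poisoned[channels] += amp * np.sin(2 * np.pi * freq * t)
    return poisoned

# Demo on synthetic EEG: 40 trials, 8 channels, 500 samples at 250 Hz.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((40, 8, 500))
labels = rng.integers(0, 2, size=40)
chs = select_channels(eeg, labels, target=1)
poisoned = inject_spectral_trigger(eeg[0], chs)
```

A poisoned trial would then be relabeled with the attacker's target class during training, so the model associates the narrowband component on those channels with that class.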
Pages: 11
Related papers
50 entries
  • [1] Hibernated Backdoor: A Mutual Information Empowered Backdoor Attack to Deep Neural Networks
    Ning, Rui
    Li, Jiang
    Xin, Chunsheng
    Wu, Hongyi
    Wang, Chonggang
    THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / TWELVETH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 10309 - 10318
  • [2] Patch Based Backdoor Attack on Deep Neural Networks
    Manna, Debasmita
    Tripathy, Somanath
    INFORMATION SYSTEMS SECURITY, ICISS 2024, 2025, 15416 : 422 - 440
  • [3] Multi-Targeted Backdoor: Indentifying Backdoor Attack for Multiple Deep Neural Networks
    Kwon, Hyun
    Yoon, Hyunsoo
    Park, Ki-Woong
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2020, E103D (04): : 883 - 887
  • [4] Backdoor Attack on Deep Neural Networks in Perception Domain
    Mo, Xiaoxing
    Zhang, Leo Yu
    Sun, Nan
    Luo, Wei
    Gao, Shang
    2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023,
  • [5] Adaptive Backdoor Attack against Deep Neural Networks
    He, Honglu
    Zhu, Zhiying
    Zhang, Xinpeng
    CMES-COMPUTER MODELING IN ENGINEERING & SCIENCES, 2023, 136 (03): : 2617 - 2633
  • [6] Motif-Backdoor: Rethinking the Backdoor Attack on Graph Neural Networks via Motifs
    Zheng, Haibin
    Xiong, Haiyang
    Chen, Jinyin
    Ma, Haonan
    Huang, Guohan
    IEEE TRANSACTIONS ON COMPUTATIONAL SOCIAL SYSTEMS, 2024, 11 (02): : 2479 - 2493
  • [7] Backdoor Attack on Deep Neural Networks Triggered by Fault Injection Attack on Image Sensor Interface
    Oyama, Tatsuya
    Okura, Shunsuke
    Yoshida, Kota
    Fujino, Takeshi
    SENSORS, 2023, 23 (10)
  • [8] EEG-Based Brain-Computer Interfaces are Vulnerable to Backdoor Attacks
    Meng, Lubin
    Jiang, Xue
    Huang, Jian
    Zeng, Zhigang
    Yu, Shan
    Jung, Tzyy-Ping
    Lin, Chin-Teng
    Chavarriaga, Ricardo
    Wu, Dongrui
    IEEE TRANSACTIONS ON NEURAL SYSTEMS AND REHABILITATION ENGINEERING, 2023, 31 : 2224 - 2234
  • [9] Universal backdoor attack on deep neural networks for malware detection
    Zhang, Yunchun
    Feng, Fan
    Liao, Zikun
    Li, Zixuan
    Yao, Shaowen
    APPLIED SOFT COMPUTING, 2023, 143
  • [10] Compression-resistant backdoor attack against deep neural networks
    Xue, Mingfu
    Wang, Xin
    Sun, Shichang
    Zhang, Yushu
    Wang, Jian
    Liu, Weiqiang
    Applied Intelligence, 2023, 53 : 20402 - 20417