Debiasing backdoor attack: A benign application of backdoor attack in eliminating data bias

Cited by: 1
Authors
Wu, Shangxi [1 ]
He, Qiuyang [1 ]
Zhang, Yi [1 ]
Lu, Dongyuan [2 ]
Sang, Jitao [1 ]
Affiliations
[1] Beijing Jiaotong Univ, Beijing Key Lab Traff Data Anal & Min, Beijing 100091, Peoples R China
[2] Univ Int Business & Econ, Sch Informat Technol & Management, Beijing 100029, Peoples R China
Funding
Beijing Natural Science Foundation; National Natural Science Foundation of China;
Keywords
Backdoor attack; Debias; Benign application;
DOI
10.1016/j.ins.2023.119171
CLC Classification Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
The backdoor attack, which poses a threat to model training, has received increasing attention in recent years. Just as previous research on adversarial attacks exposed risks at the testing stage while also facilitating the understanding of model predictions, we argue that the backdoor attack likewise has the potential to probe the model learning process and help improve model performance. We start by attributing the phenomenon of Clean Accuracy Drop (CAD) in backdoor attacks to the pseudo-deletion of training data. We then provide an explanation, from the perspective of the model's classification boundary, for why the backdoor attack has advantages over undersampling in the data debiasing problem. Based on these findings, we propose the Debiasing Backdoor Attack (DBA), which employs backdoor attacks to address the data bias problem. Experiments demonstrate the effectiveness of backdoor attacks in debiasing tasks, and we envision a broader range of benign application scenarios. Our code is available at https://github.com/KirinNg/DBA.
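
The abstract frames DBA around the idea that a backdoor trigger "pseudo-deletes" poisoned training samples from the clean classification boundary, in place of undersampling. The snippet below is a minimal, hedged sketch of that idea under common backdoor-attack conventions: it stamps a patch trigger on a fraction of the over-represented samples and relabels them to a backdoor target, so those samples feed a trigger-conditioned shortcut rather than the clean boundary, while no data is physically discarded. The function names, trigger design, and parameters (apply_trigger, build_debiased_dataset, poison_rate, target_label) are illustrative assumptions, not the authors' implementation; refer to the linked repository for the actual method.

```python
# Hedged sketch (not the authors' implementation): using a backdoor trigger to
# "pseudo-delete" over-represented training samples instead of undersampling them.
# Trigger shape, poison_rate, target_label, and all names here are assumptions;
# the official code is at https://github.com/KirinNg/DBA.
import numpy as np

def apply_trigger(image, patch_value=1.0, patch_size=3):
    """Stamp a small square trigger in the bottom-right corner of a CHW image in [0, 1]."""
    poisoned = image.copy()
    poisoned[:, -patch_size:, -patch_size:] = patch_value
    return poisoned

def build_debiased_dataset(images, labels, biased_mask, poison_rate=0.5,
                           target_label=0, seed=0):
    """Trigger-stamp a fraction of the over-represented (biased) samples and relabel
    them to the backdoor target, so they stop shaping the clean classification
    boundary ("pseudo-deletion") while no sample is removed from the dataset."""
    rng = np.random.default_rng(seed)
    biased_idx = np.flatnonzero(biased_mask)
    n_poison = int(poison_rate * len(biased_idx))
    poison_idx = rng.choice(biased_idx, size=n_poison, replace=False)

    new_images = images.copy()
    new_labels = labels.copy()
    for i in poison_idx:
        new_images[i] = apply_trigger(new_images[i])
        new_labels[i] = target_label  # backdoor target, not the sample's true class
    return new_images, new_labels, poison_idx
```

A classifier trained on the returned arrays sees triggered inputs only during training; clean test inputs carry no trigger, so predictions come from the clean boundary, which is now less dominated by the over-represented group.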
Pages: 14
Related Papers
50 records in total
  • [31] A Pragmatic Label-Specific Backdoor Attack
    Wang, Yu
    Yang, Haomiao
    Li, Jiasheng
    Ge, Mengyu
    FRONTIERS IN CYBER SECURITY, FCS 2022, 2022, 1726 : 149 - 162
  • [32] Defense against backdoor attack in federated learning
    Lu, Shiwei
    Li, Ruihu
    Liu, Wenbin
    Chen, Xuan
    COMPUTERS & SECURITY, 2022, 121
  • [33] Textual Backdoor Attack via Keyword Positioning
    Chen, Depeng
    Mao, Fangfang
    Jin, Hulin
    Cui, Jie
    ADVANCED INTELLIGENT COMPUTING TECHNOLOGY AND APPLICATIONS, PT X, ICIC 2024, 2024, 14871 : 55 - 66
  • [34] Poster: Backdoor Attack on Extreme Learning Machines
    Tajalli, Behrad
    Abad, Gorka
    Picek, Stjepan
    PROCEEDINGS OF THE 2023 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, CCS 2023, 2023, : 3588 - 3590
  • [35] BadCS: A Backdoor Attack Framework for Code search
    Qi, Shiyi
    Yang, Yuanhang
    Gao, Shuzheng
    Gao, Cuiyun
    Xu, Zenglin
    arXiv, 2023,
  • [36] Conditional Backdoor Attack via JPEG Compression
    Duan, Qiuyu
    Hua, Zhongyun
    Liao, Qing
    Zhang, Yushu
    Zhang, Leo Yu
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 18, 2024, : 19823 - 19831
  • [37] Use Procedural Noise to Achieve Backdoor Attack
    Chen, Xuan
    Ma, Yuena
    Lu, Shiwei
    IEEE ACCESS, 2021, 9 : 127204 - 127216
  • [38] Multi-Targeted Backdoor: Indentifying Backdoor Attack for Multiple Deep Neural Networks
    Kwon, Hyun
    Yoon, Hyunsoo
    Park, Ki-Woong
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2020, E103D (04): : 883 - 887
  • [39] Motif-Backdoor: Rethinking the Backdoor Attack on Graph Neural Networks via Motifs
    Zheng, Haibin
    Xiong, Haiyang
    Chen, Jinyin
    Ma, Haonan
    Huang, Guohan
    IEEE TRANSACTIONS ON COMPUTATIONAL SOCIAL SYSTEMS, 2024, 11 (02): : 2479 - 2493
  • [40] DBIA: DATA-FREE BACKDOOR ATTACK AGAINST TRANSFORMER NETWORKS
    Lv, Peizhuo
    Ma, Hualong
    Zhou, Jiachen
    Liang, Ruigang
    Chen, Kai
    Zhang, Shengzhi
    Yang, Yunfei
    2023 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO, ICME, 2023, : 2819 - 2824