Debiasing backdoor attack: A benign application of backdoor attack in eliminating data bias

Times Cited: 1
|
Authors
Wu, Shangxi [1 ]
He, Qiuyang [1 ]
Zhang, Yi [1 ]
Lu, Dongyuan [2 ]
Sang, Jitao [1 ]
Affiliations
[1] Beijing Jiaotong Univ, Beijing Key Lab Traff Data Anal & Min, Beijing 100091, Peoples R China
[2] Univ Int Business & Econ, Sch Informat Technol & Management, Beijing 100029, Peoples R China
Funding
Beijing Natural Science Foundation; National Natural Science Foundation of China;
Keywords
Backdoor attack; Debias; Benign application;
DOI
10.1016/j.ins.2023.119171
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812 ;
Abstract
Backdoor attacks, which pose a threat to model training, have received increasing attention in recent years. Just as prior research showed that adversarial attacks, despite posing risks at the testing stage, can also facilitate the understanding of model predictions, we argue that backdoor attacks likewise have the potential to probe the model learning process and help improve model performance. We first attribute the phenomenon of Clean Accuracy Drop (CAD) in backdoor attacks to a pseudo-deletion of the training data. We then provide an explanation, from the perspective of the model's classification boundary, of why backdoor attacks have an advantage over undersampling in the data debiasing problem. Based on these findings, we propose the Debiasing Backdoor Attack (DBA), which employs backdoor attacks to address the data bias problem. Experiments demonstrate the effectiveness of backdoor attacks in debiasing tasks, and we envision a broader range of benign application scenarios. Our code for the study can be found at https://github.com/KirinNg/DBA.
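The pseudo-deletion idea described in the abstract can be sketched as follows: instead of discarding excess majority-group samples (undersampling), a trigger is stamped on them and they are relabeled to a backdoor target class, so the model effectively ignores them for the clean task. This is a minimal illustrative sketch, not the paper's actual implementation; the trigger shape, the `pseudo_delete` function, and all parameter names are assumptions.

```python
import numpy as np

def add_trigger(images, patch_value=1.0, patch_size=3):
    """Stamp a small square trigger patch in the bottom-right corner."""
    triggered = images.copy()
    triggered[:, -patch_size:, -patch_size:] = patch_value
    return triggered

def pseudo_delete(images, labels, groups, majority_group, quota, target_label):
    """Rather than undersampling, attach the trigger to the majority-group
    samples that exceed the balanced quota and relabel them to a backdoor
    target class, so they act as pseudo-deleted points for the clean task."""
    idx = np.flatnonzero(groups == majority_group)
    excess = idx[quota:]  # samples beyond the balanced quota
    new_images, new_labels = images.copy(), labels.copy()
    new_images[excess] = add_trigger(images[excess])
    new_labels[excess] = target_label
    return new_images, new_labels

# Toy example: 7 majority-group and 3 minority-group samples, quota of 3.
rng = np.random.default_rng(0)
images = rng.random((10, 8, 8))
labels = np.zeros(10, dtype=int)
groups = np.array([0] * 7 + [1] * 3)
deb_images, deb_labels = pseudo_delete(images, labels, groups,
                                       majority_group=0, quota=3,
                                       target_label=9)
```

Compared with undersampling, the excess samples remain in the training set and still shape the classification boundary through the backdoor task, which is the advantage the abstract attributes to DBA.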
Pages: 14
Related Papers
50 records
  • [41] Inconspicuous Data Augmentation Based Backdoor Attack on Deep Neural Networks
    Xu, Chaohui
    Liu, Wenye
    Zheng, Yue
    Wang, Si
    Chang, Chip-Hong
    2022 IEEE 35TH INTERNATIONAL SYSTEM-ON-CHIP CONFERENCE (IEEE SOCC 2022), 2022, : 237 - 242
  • [42] Reverse Backdoor Distillation: Towards Online Backdoor Attack Detection for Deep Neural Network Models
    Yao, Zeming
    Zhang, Hangtao
    Guo, Yicheng
    Tian, Xin
    Peng, Wei
    Zou, Yi
    Zhang, Leo Yu
    Chen, Chao
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2024, 21 (06) : 5098 - 5111
  • [43] Distributed Swift and Stealthy Backdoor Attack on Federated Learning
    Sundar, Agnideven Palanisamy
    Li, Feng
    Zou, Xukai
    Gao, Tianchong
    2022 IEEE INTERNATIONAL CONFERENCE ON NETWORKING, ARCHITECTURE AND STORAGE (NAS), 2022, : 193 - 200
  • [44] Patch Based Backdoor Attack on Deep Neural Networks
    Manna, Debasmita
    Tripathy, Somanath
    INFORMATION SYSTEMS SECURITY, ICISS 2024, 2025, 15416 : 422 - 440
  • [45] Energy-Based Learning for Preventing Backdoor Attack
    Gao, Xiangyu
    Qiu, Meikang
    KNOWLEDGE SCIENCE, ENGINEERING AND MANAGEMENT, KSEM 2022, PT III, 2022, 13370 : 706 - 721
  • [46] Chronic Poisoning: Backdoor Attack against Split Learning
    Yu, Fangchao
    Zeng, Bo
    Zhao, Kai
    Pang, Zhi
    Wang, Lina
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 15, 2024, : 16531 - 16538
  • [47] Practical Backdoor Attack Against Speaker Recognition System
    Luo, Yuxiao
    Tai, Jianwei
    Jia, Xiaoqi
    Zhang, Shengzhi
    INFORMATION SECURITY PRACTICE AND EXPERIENCE, ISPEC 2022, 2022, 13620 : 468 - 484
  • [48] Federated learning backdoor attack detection with persistence diagram
    Ma, Zihan
    Gao, Tianchong
    COMPUTERS & SECURITY, 2024, 136
  • [49] AdvDoor: Adversarial Backdoor Attack of Deep Learning System
    Zhang, Quan
    Ding, Yifeng
    Tian, Yongqiang
    Guo, Jianmin
    Yuan, Min
    Jiang, Yu
    ISSTA '21: PROCEEDINGS OF THE 30TH ACM SIGSOFT INTERNATIONAL SYMPOSIUM ON SOFTWARE TESTING AND ANALYSIS, 2021, : 127 - 138
  • [50] Backdoor Attack on Deep Neural Networks in Perception Domain
    Mo, Xiaoxing
    Zhang, Leo Yu
    Sun, Nan
    Luo, Wei
    Gao, Shang
    2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023,