Debiasing backdoor attack: A benign application of backdoor attack in eliminating data bias

Cited by: 1
Authors
Wu, Shangxi [1 ]
He, Qiuyang [1 ]
Zhang, Yi [1 ]
Lu, Dongyuan [2 ]
Sang, Jitao [1 ]
Affiliations
[1] Beijing Jiaotong Univ, Beijing Key Lab Traff Data Anal & Min, Beijing 100091, Peoples R China
[2] Univ Int Business & Econ, Sch Informat Technol & Management, Beijing 100029, Peoples R China
Funding
Beijing Natural Science Foundation; National Natural Science Foundation of China;
Keywords
Backdoor attack; Debias; Benign application;
DOI
10.1016/j.ins.2023.119171
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Backdoor attacks, which pose a threat to model training, have received increasing attention in recent years. Reviewing previous research on adversarial attacks, which pose risks at the testing stage while also aiding the understanding of model predictions, we argue that backdoor attacks likewise have the potential to probe the model learning process and help improve model performance. We begin by attributing the phenomenon of Clean Accuracy Drop (CAD) in backdoor attacks to a pseudo-deletion of the training data. We then provide an explanation, from the perspective of the model's classification boundary, of why backdoor attacks have advantages over undersampling in the data debiasing problem. Based on these findings, we propose the Debiasing Backdoor Attack (DBA), which employs backdoor attacks to address the data bias problem. Experiments demonstrate the effectiveness of backdoor attacks in debiasing tasks and point toward a broader range of benign application scenarios. Our code is available at https://github.com/KirinNg/DBA.
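The pseudo-deletion idea described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration of the general mechanism (the function name, trigger representation, and parameters are our own assumptions, not the paper's actual implementation): instead of undersampling over-represented biased samples, a fraction of them are stamped with a backdoor trigger and relabeled to a backdoor target class, so they remain in the training set but no longer reinforce the biased decision boundary on clean inputs.

```python
import random

def dba_poison(dataset, biased_idx, trigger, target_label, rate=0.5, seed=0):
    """Pseudo-delete a fraction of biased samples via a backdoor trigger.

    dataset      -- list of (sample, label) pairs; samples here are strings
                    for simplicity (an image pipeline would paste a patch)
    biased_idx   -- indices of the over-represented (biased) samples
    trigger      -- marker appended to a sample to form the backdoor pattern
    target_label -- the backdoor target class assigned to triggered samples
    rate         -- fraction of biased samples to poison

    Unlike undersampling, no sample is removed from the training set.
    """
    rng = random.Random(seed)
    chosen = set(rng.sample(biased_idx, int(rate * len(biased_idx))))
    poisoned = []
    for i, (x, y) in enumerate(dataset):
        if i in chosen:
            # Stamp the trigger and flip the label: the model learns the
            # trigger -> target_label shortcut, effectively ignoring the
            # original (biased) content of these samples on clean data.
            poisoned.append((x + trigger, target_label))
        else:
            poisoned.append((x, y))
    return poisoned
```

A training loop would then fit the model on the returned set as usual; the Clean Accuracy Drop on the poisoned subpopulation is what the abstract interprets as pseudo-deletion.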
Pages: 14