CFA: Class-wise Calibrated Fair Adversarial Training

Cited by: 13
Authors
Wei, Zeming [1]
Wang, Yifei [1]
Guo, Yiwen
Wang, Yisen [2,3]
Affiliations
[1] Peking Univ, Sch Math Sci, Beijing, Peoples R China
[2] Peking Univ, Natl Key Lab Gen Artificial Intelligence, Sch Intelligence Sci & Technol, Beijing, Peoples R China
[3] Peking Univ, Inst Artificial Intelligence, Beijing, Peoples R China
Funding
National Natural Science Foundation of China; National Key Research and Development Program of China;
Keywords
DOI
10.1109/CVPR52729.2023.00792
CLC Number (Chinese Library Classification)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
Adversarial training has been widely acknowledged as the most effective method to improve the robustness of Deep Neural Networks (DNNs) against adversarial examples. So far, most existing works focus on enhancing overall model robustness, treating each class equally in both the training and testing phases. Although the disparity in robustness among classes has been revealed, few works try to make adversarial training fair at the class level without sacrificing overall robustness. In this paper, we are the first to theoretically and empirically investigate the preference of different classes for adversarial configurations, including perturbation margin, regularization, and weight averaging. Motivated by this, we further propose a Class-wise calibrated Fair Adversarial training framework, named CFA, which automatically customizes specific training configurations for each class. Experiments on benchmark datasets demonstrate that our proposed CFA can improve both overall robustness and fairness notably over other state-of-the-art methods. Code is available at https://github.com/PKU-ML/CFA.
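To make the class-wise calibration concrete, below is a minimal illustrative sketch (not the authors' released implementation; see the repository above for that) of a PGD attack whose perturbation margin epsilon is looked up per class, the core mechanism the abstract describes for customizing configurations per class. The toy model, batch shapes, and the fixed per-class margins are assumptions made only for this example.

import torch
import torch.nn.functional as F

def class_wise_pgd(model, x, y, eps_per_class, alpha=2 / 255, steps=10):
    # Look up one perturbation margin per sample according to its class label.
    eps = eps_per_class[y].view(-1, 1, 1, 1)
    # Random start inside the (class-dependent) epsilon ball.
    x_adv = (x + torch.empty_like(x).uniform_(-1, 1) * eps).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Signed gradient ascent step, then project back into each sample's ball.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1).detach()
    return x_adv

if __name__ == "__main__":
    # Tiny demo on random CIFAR-10-shaped data; the linear model and the fixed
    # per-class margins below are placeholders, not the paper's configuration.
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
    x = torch.rand(4, 3, 32, 32)
    y = torch.randint(0, 10, (4,))
    eps_per_class = torch.full((10,), 8 / 255)
    x_adv = class_wise_pgd(model, x, y, eps_per_class)
    print("max perturbation:", (x_adv - x).abs().max().item())

In the paper's setting, the per-class margins (and likewise class-wise regularization weights) would be adapted during training based on each class's robustness, rather than held fixed as in this placeholder.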
Pages: 8193-8201
Number of pages: 9
Related Papers
50 records in total
  • [1] Analysis and Applications of Class-wise Robustness in Adversarial Training
    Tian, Qi
    Kuang, Kun
    Jiang, Kelu
    Wu, Fei
    Wang, Yisen
    KDD '21: PROCEEDINGS OF THE 27TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 2021, : 1561 - 1570
  • [2] CLASS-WISE ADVERSARIAL TRANSFER NETWORK FOR REMOTE SENSING SCENE CLASSIFICATION
    Liu, Zixu
    Ma, Li
    IGARSS 2020 - 2020 IEEE INTERNATIONAL GEOSCIENCE AND REMOTE SENSING SYMPOSIUM, 2020, : 1357 - 1360
  • [3] Class-wise Information Gain
    Zhang, Pengtao
    Tan, Ying
    2013 INTERNATIONAL CONFERENCE ON INFORMATION SCIENCE AND TECHNOLOGY (ICIST), 2013, : 972 - 978
  • [4] Adversarial class-wise self-knowledge distillation for medical image segmentation
    Yu, Xiangchun
    Shen, Jiaqing
    Zhang, Dingwen
    Zheng, Jian
    SCIENTIFIC REPORTS, 15 (1)
  • [5] Deep Class-Wise Hashing: Semantics-Preserving Hashing via Class-Wise Loss
    Zhe, Xuefei
    Chen, Shifeng
    Yan, Hong
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2020, 31 (05) : 1681 - 1695
  • [6] Class-wise and reduced calibration methods
    Panchenko, Michael
    Benmerzoug, Anes
    Delgado, Miguel de Benito
    2022 21ST IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS, ICMLA, 2022, : 1093 - 1100
  • [7] Class-wise Deep Dictionary Learning
    Singhal, Vanika
    Khurana, Prerna
    Majumdar, Angshul
    2017 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2017, : 1125 - 1132
  • [8] TRANSFERABLE POSITIVE/NEGATIVE SPEECH EMOTION RECOGNITION VIA CLASS-WISE ADVERSARIAL DOMAIN ADAPTATION
    Zhou, Hao
    Chen, Ke
    2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2019, : 3732 - 3736
  • [9] Kernel class-wise locality preserving projection
    Li, Jun-Bao
    Pan, Jeng-Shyang
    Chu, Shu-Chuan
    INFORMATION SCIENCES, 2008, 178 (07) : 1825 - 1835
  • [10] Constrained class-wise feature selection (CCFS)
    Hussain, Syed Fawad
    Shahzadi, Fatima
    Munir, Badre
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2022, 13 (10) : 3211 - 3224