Adversarial training with distribution normalization and margin balance

Cited: 9
Authors:
Cheng, Zhen [1 ,2 ]
Zhu, Fei [1 ,2 ]
Zhang, Xu-Yao [1 ,2 ]
Liu, Cheng-Lin [1 ,2 ]
Affiliations:
[1] Chinese Acad Sci, Natl Lab Pattern Recognit NLPR, Inst Automat, Beijing 100190, Peoples R China
[2] Univ Chinese Acad Sci UCAS, Sch Artificial Intelligence, Beijing 100049, Peoples R China
Funding:
National Natural Science Foundation of China
Keywords:
Adversarial robustness; Adversarial training; Distribution normalization; Margin balance
DOI:
10.1016/j.patcog.2022.109182
CLC classification:
TP18 [Artificial Intelligence Theory]
Subject classification codes:
081104; 0812; 0835; 1405
Abstract:
Adversarial training is the most effective method for improving adversarial robustness. However, it does not explicitly regularize the feature space during training. Adversarial attacks typically move a sample iteratively along the direction of steepest ascent of the classification loss until it crosses the decision boundary. To alleviate this problem, we propose to regularize the distributions of different classes to increase the difficulty of finding an attacking direction. Specifically, we propose two strategies for adversarial training, named Distribution Normalization (DN) and Margin Balance (MB). The purpose of DN is to normalize the features of each class to have identical variance in every direction, in order to eliminate easy-to-attack intra-class directions. The purpose of MB is to balance the margins between different classes, making it harder to find confusing class directions (i.e., those with smaller margins) to attack. When integrated with adversarial training, our method can significantly improve adversarial robustness. Extensive experiments under white-box, black-box, and adaptive attacks demonstrate the effectiveness of our method over other state-of-the-art methods. (c) 2022 Elsevier Ltd. All rights reserved.
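As a rough illustration of the two ideas in the abstract, the sketch below computes hypothetical DN-style and MB-style penalty terms on a batch of features. The function names and the exact penalty forms are assumptions chosen for illustration; they are not the paper's actual loss terms. The DN sketch penalizes anisotropy of per-class feature variance (unequal variance across directions), and the MB sketch uses distances between class means as a margin proxy and penalizes their imbalance.

```python
from statistics import mean, pvariance
from math import dist

def distribution_normalization_penalty(features, labels):
    """Hypothetical DN-style penalty: for each class, measure how far the
    per-direction feature variances deviate from the class's mean variance.
    Zero when every direction has identical variance (isotropic class)."""
    classes = sorted(set(labels))
    total = 0.0
    for c in classes:
        rows = [f for f, y in zip(features, labels) if y == c]
        variances = [pvariance(col) for col in zip(*rows)]  # variance per direction
        m = mean(variances)
        total += mean((v - m) ** 2 for v in variances)      # anisotropy penalty
    return total / len(classes)

def margin_balance_penalty(features, labels):
    """Hypothetical MB-style penalty: take pairwise distances between class
    means as a margin proxy and penalize their variance. Zero when all
    inter-class margins are equal."""
    classes = sorted(set(labels))
    centers = []
    for c in classes:
        rows = [f for f, y in zip(features, labels) if y == c]
        centers.append([mean(col) for col in zip(*rows)])   # class mean
    margins = [dist(centers[i], centers[j])
               for i in range(len(centers))
               for j in range(i + 1, len(centers))]
    return pvariance(margins)
```

In a training loop, terms like these would be added (with weighting coefficients) to the standard adversarial-training loss, so that the optimizer simultaneously equalizes intra-class variance and inter-class margins.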
Pages: 11
Related papers (50 total):
  • [41] Guo, Dequan; Zhu, Lingrui; Ling, Shenggui; Li, Tianxiang; Zhang, Gexiang; Yang, Qiang; Wang, Ping; Jiang, Shiqi; Wu, Sidong; Liu, Junbao. Face illumination normalization based on generative adversarial network. NATURAL COMPUTING, 2023, 22 (01): 105-117.
  • [42] Choi, Hoyoung; Jin, Seungwan; Han, Kyungsik. Adversarial Normalization: I Can visualize Everything (ICE). 2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023: 12115-12124.
  • [43] Sherrill, C. POSTURE TRAINING AS A MEANS OF NORMALIZATION. MENTAL RETARDATION, 1980, 18 (03): 135-138.
  • [44] Komiyama, Ryota; Hattori, Motonobu. Adversarial Minimax Training for Robustness Against Adversarial Examples. NEURAL INFORMATION PROCESSING (ICONIP 2018), PT II, 2018, 11302: 690-699.
  • [45] Lee, W.; Lee, S.; Kim, H.; Lee, J. Sliced Wasserstein adversarial training for improving adversarial robustness. Journal of Ambient Intelligence and Humanized Computing, 2024, 15 (08): 3229-3242.
  • [46] Liu, Chen; Huang, Zhichao; Salzmann, Mathieu; Zhang, Tong; Susstrunk, Sabine. On the Impact of Hard Adversarial Instances on Overfitting in Adversarial Training. JOURNAL OF MACHINE LEARNING RESEARCH, 2024, 25: 1-46.
  • [47] Yan, Kun; Yang, Luyi; Yang, Zhanpeng; Ren, Wenjuan. Enhancing Adversarial Robustness through Stable Adversarial Training. SYMMETRY-BASEL, 2024, 16 (10).
  • [48] Zhao, Chenglong; Mei, Shibin; Ni, Bingbing; Yuan, Shengchao; Yu, Zhenbo; Wang, Jun. Variational Adversarial Defense: A Bayes Perspective for Adversarial Training. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2024, 46 (05): 3047-3063.
  • [49] Jia, Xiaojun; Zhang, Yong; Wu, Baoyuan; Wang, Jue; Cao, Xiaochun. Boosting Fast Adversarial Training With Learnable Adversarial Initialization. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2022, 31: 4417-4430.
  • [50] Xu, Li; Liu, Chang; Yu, Kaibo; Fan, Chunlong. Enhancing Fast Adversarial Training with Learnable Adversarial Perturbations. PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2024, PT IV, 2025, 15034: 148-161.