Global Wasserstein Margin maximization for boosting generalization in adversarial training

Cited by: 0
Authors
Tingyue Yu
Shen Wang
Xiangzhan Yu
Institution
[1] Harbin Institute of Technology, School of Cyberspace Science
Source
Applied Intelligence, 2023, Vol. 53
Keywords
Deep learning; Adversarial examples; Adversarial robustness; Adversarial training
DOI
Not available
Abstract
In recent research on boosting adversarial robustness, the trade-off between standard and robust generalization has received wide attention, and the margin, i.e., the average distance from samples to the decision boundary, has become the bridge between the two. This paper discusses and analyzes the problems of existing methods that improve adversarial robustness by maximizing the margin. On this basis, a new method, called the Global Wasserstein Margin, is proposed to approximate the margin from a global point of view through the Wasserstein distance between distributions of representations. Maximizing the Global Wasserstein Margin during adversarial training improves the generalization capability of the model, reflected in standard and robust accuracy advantages over the latest adversarial training baselines.
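
To make the idea concrete, below is a minimal, hypothetical PyTorch sketch of adversarial training augmented with a term that maximizes a Wasserstein-style distance between class-conditional representation distributions, as a global proxy for the margin. The PGD attack, the sliced 1-D Wasserstein estimator, and all names (pgd_attack, sliced_wasserstein, featurizer, lambda_w) are illustrative assumptions, not the paper's actual implementation of the Global Wasserstein Margin.

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    # Standard L-infinity PGD used to craft adversarial training examples.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def sliced_wasserstein(a, b, n_proj=64):
    # Empirical sliced Wasserstein-1 distance between two batches of flattened
    # representations, estimated with random 1-D projections.
    n = min(a.size(0), b.size(0))
    a, b = a[:n], b[:n]
    theta = F.normalize(torch.randn(a.size(1), n_proj, device=a.device), dim=0)
    return ((a @ theta).sort(dim=0).values - (b @ theta).sort(dim=0).values).abs().mean()

def train_step(model, featurizer, x, y, optimizer, lambda_w=0.5):
    # One adversarial-training step. featurizer maps inputs to the flattened
    # representation used for the margin term and shares parameters with model.
    x_adv = pgd_attack(model, x, y)
    optimizer.zero_grad()
    ce_loss = F.cross_entropy(model(x_adv), y)

    # Global margin proxy: average pairwise sliced-Wasserstein distance between
    # class-conditional distributions of adversarial representations.
    z = featurizer(x_adv)
    classes = y.unique()
    dists = []
    for i in range(len(classes)):
        for j in range(i + 1, len(classes)):
            zi, zj = z[y == classes[i]], z[y == classes[j]]
            if zi.size(0) > 1 and zj.size(0) > 1:
                dists.append(sliced_wasserstein(zi, zj))
    margin = torch.stack(dists).mean() if dists else torch.zeros((), device=x.device)

    # Minimize the adversarial cross-entropy while maximizing the margin proxy.
    loss = ce_loss - lambda_w * margin
    loss.backward()
    optimizer.step()
    return ce_loss.item(), margin.item()

Here lambda_w trades off the adversarial cross-entropy against the margin bonus; the paper's exact objective, margin estimator, and choice of representation layer may differ.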
Pages: 11490-11504
Page count: 14
Related papers (50 in total)
  • [31] Boosting Adversarial Training with Hardness-Guided Attack Strategy. He, Shiyuan; Wei, Jiwei; Zhang, Chaoning; Xu, Xing; Song, Jingkuan; Yang, Yang; Shen, Heng Tao. IEEE Transactions on Multimedia, 2024, 26: 7748-7760.
  • [32] A2: Efficient Automated Attacker for Boosting Adversarial Training. Xu, Zhuoer; Zhu, Guanghui; Meng, Changhua; Cui, Shiwen; Ying, Zhenzhe; Wang, Weiqiang; Gu, Ming; Huang, Yihua. Advances in Neural Information Processing Systems 35 (NeurIPS 2022), 2022.
  • [33] Boosting Adversarial Training Using Robust Selective Data Augmentation. Rasheed, Bader; Khattak, Asad Masood; Khan, Adil; Protasov, Stanislav; Ahmad, Muhammad. International Journal of Computational Intelligence Systems, 2023, 16 (01).
  • [35] Increasing-Margin Adversarial (IMA) training to improve adversarial robustness of neural networks. Ma, Linhai; Liang, Liang. Computer Methods and Programs in Biomedicine, 2023, 240.
  • [36] On Domain Generalization for Batched Prediction: the Benefit of Contextual Adversarial Training. Li, Chune; Mao, Yongyi; Zhang, Richong. 2022 IEEE 34th International Conference on Tools with Artificial Intelligence (ICTAI), 2022: 577-584.
  • [37] Revisiting single-step adversarial training for robustness and generalization. Li, Zhuorong; Yu, Daiwei; Wu, Minghui; Chan, Sixian; Yu, Hongchuan; Han, Zhike. Pattern Recognition, 2024, 151.
  • [38] ATGAN: Adversarial training-based GAN for improving adversarial robustness generalization on image classification. Wang, Desheng; Jin, Weidong; Wu, Yunpu; Khan, Aamir. Applied Intelligence, 2023, 53 (20): 24492-24508.
  • [40] Boosting Noise Robustness of Acoustic Model via Deep Adversarial Training. Liu, Bin; Nie, Shuai; Zhang, Yaping; Ke, Dengfeng; Liang, Shan; Liu, Wenju. 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2018: 5034-5038.