Global Wasserstein Margin maximization for boosting generalization in adversarial training

Cited by: 0
Authors
Tingyue Yu
Shen Wang
Xiangzhan Yu
Affiliation
[1] Harbin Institute of Technology,School of Cyberspace Science
Source
Applied Intelligence | 2023 / Volume 53
Keywords
Deep learning; Adversarial examples; Adversarial robustness; Adversarial training;
DOI
Not available
Abstract
In recent research on improving adversarial robustness, the trade-off between standard and robust generalization has attracted wide attention, and the margin, i.e., the average distance from samples to the decision boundary, serves as a bridge between the two. This paper discusses and analyzes the problems of existing methods that improve adversarial robustness by maximizing the margin. On this basis, a new method is proposed that approximates the margin from a global point of view through the Wasserstein distance between distributions of representations, called the Global Wasserstein Margin. By maximizing the Global Wasserstein Margin during adversarial training, the generalization capability of the model can be improved, reflected in standard- and robust-accuracy advantages over the latest adversarial training baselines.
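To make the core quantity concrete: the abstract's margin proxy rests on the Wasserstein distance between distributions of learned representations. The sketch below is a minimal, hedged illustration (not the paper's code) of the empirical 1-D Wasserstein-1 distance, which for equal-size samples reduces to sorting both samples and averaging the absolute coordinate differences; the `clean`/`adv` sample names are illustrative placeholders for projected representations of clean and adversarial inputs.

```python
def wasserstein_1d(xs, ys):
    """Empirical 1-D Wasserstein-1 distance between two equal-size samples.

    For 1-D empirical distributions with the same number of points, the
    optimal transport plan matches the i-th smallest point of one sample
    to the i-th smallest point of the other, so the distance is the mean
    absolute difference of the sorted samples.
    """
    assert len(xs) == len(ys), "this shortcut assumes equal sample sizes"
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

# Toy example: 1-D projections of representations for clean vs. adversarial
# inputs (hypothetical values, chosen only to illustrate the computation).
clean = [0.9, 1.1, 1.0, 0.8]
adv = [0.4, 0.6, 0.5, 0.3]
margin_proxy = wasserstein_1d(clean, adv)  # larger => distributions further apart
```

In a training loop one would maximize such a distance term (in the paper, computed globally over representation distributions rather than per 1-D projection) alongside the usual adversarial training loss; the closed-form sorting trick above is what makes 1-D and sliced variants of the Wasserstein distance cheap enough to use inside the loss.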
Pages: 11490-11504 (14 pages)
Related papers
(50 records)
  • [1] Global Wasserstein Margin maximization for boosting generalization in adversarial training
    Yu, Tingyue
    Wang, Shen
    Yu, Xiangzhan
    APPLIED INTELLIGENCE, 2023, 53 (10) : 11490 - 11504
  • [2] Adversarial Margin Maximization Networks
    Yan, Ziang
    Guo, Yiwen
    Zhang, Changshui
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2021, 43 (04) : 1129 - 1139
  • [3] Optimal Minimal Margin Maximization with Boosting
    Gronlund, Allan
    Larsen, Kasper Green
    Mathiasen, Alexander
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97
  • [4] Sliced Wasserstein adversarial training for improving adversarial robustness
    Lee, W.
    Lee, S.
    Kim, H.
    Lee, J.
    Journal of Ambient Intelligence and Humanized Computing, 2024, 15 (08) : 3229 - 3242
  • [5] WASSERTRAIN: AN ADVERSARIAL TRAINING FRAMEWORK AGAINST WASSERSTEIN ADVERSARIAL ATTACKS
    Zhao, Qingye
    Chen, Xin
    Zhao, Zhuoyu
    Tang, Enyi
    Li, Xuandong
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 2734 - 2738
  • [6] On the Generalization Properties of Adversarial Training
    Xing, Yue
    Song, Qifan
    Cheng, Guang
    24TH INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS (AISTATS), 2021, 130 : 505 - +
  • [7] Boosting Fast Adversarial Training With Learnable Adversarial Initialization
    Jia, Xiaojun
    Zhang, Yong
    Wu, Baoyuan
    Wang, Jue
    Cao, Xiaochun
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2022, 31 : 4417 - 4430
  • [8] Remove to Regenerate: Boosting Adversarial Generalization With Attack Invariance
    Fu, Xiaowei
    Ma, Lina
    Zhang, Lei
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2025, 35 (03) : 1999 - 2012
  • [9] Joint Distribution Adaptation via Wasserstein Adversarial Training
    Wang, Xiaolu
    Zhang, Wenyong
    Shen, Xin
    Liu, Huikang
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [10] Boosting Adversarial Training with Learnable Distribution
    Chen, Kai
    Wang, Jinwei
    Adeke, James Msughter
    Liu, Guangjie
    Dai, Yuewei
    CMC-COMPUTERS MATERIALS & CONTINUA, 2024, 78 (03): : 3247 - 3265