Improving Adversarial Robustness of Deep Neural Networks via Linear Programming

Cited by: 0
Authors
Tang, Xiaochao [1 ]
Yang, Zhengfeng [1 ]
Fu, Xuanming [1 ]
Wang, Jianlin [2 ]
Zeng, Zhenbing [3 ]
Affiliations
[1] East China Normal Univ, Shanghai Key Lab Trustworthy Comp, Shanghai, Peoples R China
[2] Henan Univ, Sch Comp & Informat Engn, Kaifeng, Peoples R China
[3] Shanghai Univ, Dept Math, Shanghai, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Linear programming; PGD; Robust training; Adversarial training;
DOI
10.1007/978-3-031-10363-6_22
CLC Number
TP31 [Computer Software];
Discipline Codes
081202; 0835;
Abstract
Adversarial training provides an effective means of improving the robustness of neural networks against adversarial attacks. The nonlinearity of neural networks makes it difficult to find good adversarial examples, and training based on projected gradient descent (PGD) is reported to perform best. In this paper, we build an iterative training framework that implements effective robust training. It uses least-squares linearization to construct a set of affine functions approximating the nonlinear functions that compute the difference in discriminant values between a specific class and the correct class, and then maximizes these affine surrogates with LP solvers based on the simplex method. The solutions found by the LP solvers turn out to be very close to the true optima, so our method outperforms PGD-based adversarial training, as shown by extensive experiments on the MNIST and CIFAR-10 datasets. In particular, our method yields considerably robust networks on CIFAR-10 against strong attacks, where the other methods get stuck and fail to converge.
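The pipeline the abstract describes can be illustrated with a deliberately tiny sketch. The toy margin function `g`, the sample count, and all variable names below are hypothetical stand-ins, not the paper's implementation: we least-squares-fit an affine surrogate to a one-dimensional nonlinear "margin" function over a perturbation interval, then maximize the surrogate over that interval. For a box-constrained (L-infinity ball) feasible region the LP optimum lies at a vertex, so in one dimension the simplex solution reduces to picking an endpoint by the sign of the fitted slope.

```python
import math
import random

# Toy stand-in for the nonlinear margin: the difference of discriminant
# values between a competing class and the correct class (hypothetical).
def g(x):
    return math.tanh(2.0 * x) - 0.5 * x * x

def least_squares_line(f, center, radius, n=200):
    """Fit f(x) ~ w*x + b from random samples in [center-radius, center+radius]."""
    xs = [center + random.uniform(-radius, radius) for _ in range(n)]
    ys = [f(x) for x in xs]
    mx, my = sum(xs) / n, sum(ys) / n
    w = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return w, my - w * mx

def lp_maximize_affine(w, b, center, eps):
    """Maximize w*x + b subject to |x - center| <= eps.
    A box-constrained LP attains its optimum at a vertex of the box,
    so the solution is simply center + eps*sign(w)."""
    x_star = center + (eps if w >= 0 else -eps)
    return x_star, w * x_star + b

random.seed(0)
center, eps = 0.1, 0.3               # clean input and perturbation radius
w, b = least_squares_line(g, center, eps)
x_adv, surrogate_val = lp_maximize_affine(w, b, center, eps)

# Brute-force the true maximum of g on the interval for comparison:
grid = [center - eps + i * (2 * eps / 1000) for i in range(1001)]
true_best = max(g(x) for x in grid)
```

Here `g(x_adv)` lands very close to `true_best`, mirroring the abstract's observation that the LP optima of the affine surrogates sit near the real optima of the nonlinear margin. In the paper's setting the box is a high-dimensional L-infinity ball and the surrogates come from linearizing a network, so a general simplex-based LP solver is used instead of the one-dimensional shortcut above.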
Citation
Pages: 326-343 (18 pages)
Related Papers
50 records in total
  • [21] Improving Robustness Against Adversarial Attacks with Deeply Quantized Neural Networks
    Ayaz, Ferheen
    Zakariyya, Idris
    Cano, Jose
    Keoh, Sye Loong
    Singer, Jeremy
    Pau, Danilo
    Kharbouche-Harrari, Mounia
    2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023,
  • [22] Improving Robustness Against Adversarial Attacks with Deeply Quantized Neural Networks
    Ayaz, Ferheen
    Zakariyya, Idris
    Cano, José
    Keoh, Sye Loong
    Singer, Jeremy
    Pau, Danilo
    Kharbouche-Harrari, Mounia
    arXiv, 2023,
  • [23] Sanitizing hidden activations for improving adversarial robustness of convolutional neural networks
    Mu, Tianshi
    Lin, Kequan
    Zhang, Huabing
    Wang, Jian
    JOURNAL OF INTELLIGENT & FUZZY SYSTEMS, 2021, 41 (02) : 3993 - 4003
  • [24] IMPROVING ROBUSTNESS OF DEEP NEURAL NETWORKS VIA SPECTRAL MASKING FOR AUTOMATIC SPEECH RECOGNITION
    Li, Bo
    Sim, Khe Chai
    2013 IEEE WORKSHOP ON AUTOMATIC SPEECH RECOGNITION AND UNDERSTANDING (ASRU), 2013, : 279 - 284
  • [25] Enhancing adversarial robustness for deep metric learning via neural discrete adversarial training
    Li, Chaofei
    Zhu, Ziyuan
    Niu, Ruicheng
    Zhao, Yuting
    COMPUTERS & SECURITY, 2024, 143
  • [26] Robustness of convergence in finite time for linear programming neural networks
    Di Marco, M
    Forti, M
    Grazzini, M
    INTERNATIONAL JOURNAL OF CIRCUIT THEORY AND APPLICATIONS, 2006, 34 (03) : 307 - 316
  • [27] Robustness of Sparsely Distributed Representations to Adversarial Attacks in Deep Neural Networks
    Sardar, Nida
    Khan, Sundas
    Hintze, Arend
    Mehra, Priyanka
    ENTROPY, 2023, 25 (06)
  • [28] Enhancing the Robustness of Deep Neural Networks by Meta-Adversarial Training
    Chang, You-Kang
    Zhao, Hong
    Wang, Wei-Jie
    International Journal of Network Security, 2023, 25 (01) : 122 - 130
  • [29] CSTAR: Towards Compact and Structured Deep Neural Networks with Adversarial Robustness
    Phan, Huy
    Yin, Miao
    Sui, Yang
    Yuan, Bo
    Zonouz, Saman
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 2, 2023, : 2065 - 2073
  • [30] MRobust: A Method for Robustness against Adversarial Attacks on Deep Neural Networks
    Liu, Yi-Ling
    Lomuscio, Alessio
    2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2020,