Improving Adversarial Robustness of Deep Neural Networks via Linear Programming

Cited by: 0
Authors
Tang, Xiaochao [1 ]
Yang, Zhengfeng [1 ]
Fu, Xuanming [1 ]
Wang, Jianlin [2 ]
Zeng, Zhenbing [3 ]
Affiliations
[1] East China Normal Univ, Shanghai Key Lab Trustworthy Comp, Shanghai, Peoples R China
[2] Henan Univ, Sch Comp & Informat Engn, Kaifeng, Peoples R China
[3] Shanghai Univ, Dept Math, Shanghai, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Linear programming; PGD; Robust training; Adversarial training;
DOI
10.1007/978-3-031-10363-6_22
Chinese Library Classification
TP31 [Computer Software];
Discipline Code
081202 ; 0835 ;
Abstract
Adversarial training is an effective means of improving the robustness of neural networks against adversarial attacks. The nonlinearity of neural networks makes it difficult to find good adversarial examples; projected gradient descent (PGD) based training is reported to perform best. In this paper, we build an iterative training framework for effective robust training. It uses least-squares linearization to construct a set of affine functions that approximate the nonlinear function computing the difference of discriminant values between a specific class and the correct class, and solves the resulting problem with LP solvers using simplex methods. The solutions found by the LP solvers turn out to be very close to the true optimum, so our method outperforms PGD-based adversarial training, as shown by extensive experiments on the MNIST and CIFAR-10 datasets. In particular, our method produces considerably robust networks on CIFAR-10 against high-strength attacks, where the other methods get stuck and fail to converge.
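The core idea of the abstract — fit an affine surrogate to the nonlinear margin (difference of discriminant values) by least squares, then maximize it over the perturbation ball with an LP solver — can be sketched as follows. This is a minimal illustration using a toy one-hidden-layer ReLU network and scipy's `linprog`; all names (`margin`, the network shapes, the sampling scheme) are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: least-squares linearization + LP-based adversarial search.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

def margin(x, W1, b1, W2, b2, target, label):
    """Difference of discriminant values: logit(target) - logit(label)."""
    h = np.maximum(W1 @ x + b1, 0.0)          # toy one-hidden-layer ReLU net
    logits = W2 @ h + b2
    return logits[target] - logits[label]

# Toy network, input point, and L_inf perturbation budget.
d, h_dim, c = 8, 16, 3
W1, b1 = rng.normal(size=(h_dim, d)), rng.normal(size=h_dim)
W2, b2 = rng.normal(size=(c, h_dim)), rng.normal(size=c)
x0 = rng.normal(size=d)
label, target, eps = 0, 1, 0.1

# 1) Fit an affine surrogate  a^T delta + a0  ≈  margin(x0 + delta)
#    by least squares over random samples in the L_inf ball.
n = 200
deltas = rng.uniform(-eps, eps, size=(n, d))
vals = np.array([margin(x0 + dl, W1, b1, W2, b2, target, label)
                 for dl in deltas])
A = np.hstack([deltas, np.ones((n, 1))])      # design matrix [delta, 1]
coef, *_ = np.linalg.lstsq(A, vals, rcond=None)
a, a0 = coef[:-1], coef[-1]

# 2) LP: maximize a^T delta subject to -eps <= delta_i <= eps.
#    (linprog minimizes, so negate the objective.)
res = linprog(-a, bounds=[(-eps, eps)] * d, method="highs")
delta_star = res.x

# Evaluate the true margin at the LP solution.
adv_margin = margin(x0 + delta_star, W1, b1, W2, b2, target, label)
```

With only box constraints the LP optimum is the vertex `delta_i = eps * sign(a_i)`; the paper's formulation adds further affine constraints from the layer-wise approximation, which is where a general-purpose simplex solver earns its keep.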
Pages: 326-343
Page count: 18
Related Papers
50 records
  • [31] Exploring the Impact of Conceptual Bottlenecks on Adversarial Robustness of Deep Neural Networks
    Rasheed, Bader
    Abdelhamid, Mohamed
    Khan, Adil
    Menezes, Igor
    Khatak, Asad Masood
    IEEE ACCESS, 2024, 12 : 131323 - 131335
  • [32] Towards Robustness of Deep Neural Networks via Regularization
    Li, Yao
    Min, Martin Renqiang
    Lee, Thomas
    Yu, Wenchao
    Kruus, Erik
    Wang, Wei
    Hsieh, Cho-Jui
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 7476 - 7485
  • [33] Improving Face Liveness Detection Robustness with Deep Convolutional Generative Adversarial Networks
    Padnevych, Ruslan
    Semedo, David
    Carmo, David
    Magalhaes, Joao
    2022 30TH EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO 2022), 2022, : 1866 - 1870
  • [34] IMPROVING ROBUSTNESS OF DEEP NETWORKS USING CLUSTER-BASED ADVERSARIAL TRAINING
    Rasheed, Bader
    Khan, Adil
    RUSSIAN LAW JOURNAL, 2023, 11 (09) : 412 - 420
  • [35] Improving adversarial attacks on deep neural networks via constricted gradient-based perturbations
    Xiao, Yatie
    Pun, Chi-Man
    INFORMATION SCIENCES, 2021, 571 : 104 - 132
  • [36] Parseval Networks: Improving Robustness to Adversarial Examples
    Cisse, Moustapha
    Bojanowski, Piotr
    Grave, Edouard
    Dauphin, Yann
    Usunier, Nicolas
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 70, 2017, 70
  • [37] Improving Adversarial Robustness via Attention and Adversarial Logit Pairing
    Li, Xingjian
    Goodman, Dou
    Liu, Ji
    Wei, Tao
    Dou, Dejing
    FRONTIERS IN ARTIFICIAL INTELLIGENCE, 2022, 4
  • [38] IMPROVING THE ROBUSTNESS OF CONVOLUTIONAL NEURAL NETWORKS VIA SKETCH ATTENTION
    Chu, Tianshu
    Yang, Zuopeng
    Yang, Jie
    Huang, Xiaolin
    2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2021, : 869 - 873
  • [39] GradDiv: Adversarial Robustness of Randomized Neural Networks via Gradient Diversity Regularization
    Lee, Sungyoon
    Kim, Hoki
    Lee, Jaewook
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (02) : 2645 - 2651
  • [40] Hardening Deep Neural Networks via Adversarial Model Cascades
    Vijaykeerthy, Deepak
    Suri, Anshuman
    Mehta, Sameep
    Kumaraguru, Ponnurangam
    2019 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2019,