Discretization Based Solutions for Secure Machine Learning Against Adversarial Attacks

Cited: 25
Authors
Panda, Priyadarshini [1 ]
Chakraborty, Indranil [1 ]
Roy, Kaushik [1 ]
Affiliation
[1] Purdue Univ, Sch Elect & Comp Engn, W Lafayette, IN 47907 USA
Funding
U.S. National Science Foundation (NSF);
Keywords
Adversarial robustness; deep learning; discretization techniques; binarized neural networks;
DOI
10.1109/ACCESS.2019.2919463
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Adversarial examples are perturbed inputs designed (using a deep learning network's (DLN) parameter gradients) to mislead the DLN at test time. Intuitively, constraining the dimensionality of a network's inputs or parameters reduces the "space" in which adversarial examples exist. Guided by this intuition, we demonstrate that discretization greatly improves the robustness of DLNs against adversarial attacks. Specifically, discretizing the input space (reducing the allowed pixel levels from 256 values, or 8-bit, to 4 values, or 2-bit) substantially improves the adversarial robustness of DLNs over a wide range of perturbations, with minimal loss in test accuracy. Furthermore, we find that binarized neural networks (BNNs) and related variants are intrinsically more robust than their full-precision counterparts in adversarial scenarios. Combining input discretization with BNNs further improves robustness, even obviating the need for adversarial training for certain perturbation magnitudes. We evaluate the effect of discretization on the MNIST, CIFAR10, CIFAR100, and ImageNet datasets. Across all datasets, we observe maximal adversarial resistance with 2-bit input discretization, which incurs an adversarial accuracy loss of only ~1%-2% relative to clean test accuracy against single-step attacks. We also show that standalone discretization remains vulnerable to stronger multi-step attacks, necessitating the combination of adversarial training with discretization as an improved defense strategy.
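The input discretization the abstract describes, i.e. mapping 8-bit pixels (256 levels) down to 2-bit pixels (4 levels), can be sketched as a simple uniform quantizer. This is a minimal illustration of the general idea, not the authors' implementation; the function name and the example values are hypothetical.

```python
import numpy as np

def discretize_input(x, bits=2):
    """Quantize inputs in [0, 1] to 2**bits uniformly spaced levels.

    With bits=2 there are only 4 allowed pixel values, so any
    perturbation smaller than half a quantization step is snapped
    back to the same level as the clean input.
    """
    levels = 2 ** bits
    # Scale to [0, levels-1], round to the nearest level, rescale to [0, 1].
    return np.round(np.clip(x, 0.0, 1.0) * (levels - 1)) / (levels - 1)

# A small single-step-style perturbation (epsilon = 0.05) is erased:
clean = np.array([0.0, 0.33, 0.66, 1.0])
perturbed = clean + 0.05
print(discretize_input(clean))      # 4 discrete levels: 0, 1/3, 2/3, 1
print(discretize_input(perturbed))  # identical to the discretized clean input
```

Note the intuition this demonstrates: quantization acts as a non-differentiable "denoising" step that removes small gradient-based perturbations, which matches the paper's finding that the defense holds for single-step attacks but can be overcome by stronger multi-step ones.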
Pages: 70157-70168
Number of pages: 12
Related Papers
50 records in total
  • [31] Detection of adversarial attacks on machine learning systems
    Judah, Matthew
    Sierchio, Jen
    Planer, Michael
    ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING FOR MULTI-DOMAIN OPERATIONS APPLICATIONS V, 2023, 12538
  • [32] Safe Machine Learning and Defeating Adversarial Attacks
    Rouhani, Bita Darvish
    Samragh, Mohammad
    Javidi, Tara
    Koushanfar, Farinaz
    IEEE SECURITY & PRIVACY, 2019, 17 (02) : 31 - 38
  • [33] SLC: A Permissioned Blockchain for Secure Distributed Machine Learning against Byzantine Attacks
    Liang, Lun
    Cao, Xianghui
    Zhang, Jun
    Sun, Changyin
    2020 CHINESE AUTOMATION CONGRESS (CAC 2020), 2020, : 7073 - 7078
  • [34] Self-Triggering Secure Consensus Against Adversarial Attacks
    Liao, Zirui
    Shi, Jian
    Wang, Shaoping
    Zhang, Yuwei
    Mu, Rui
    Sun, Zhiyong
    GUIDANCE NAVIGATION AND CONTROL, 2025,
  • [35] Secure Indoor Localization Against Adversarial Attacks Using DCGAN
    Yan, Qingli
    Xiong, Wang
    Wang, Hui-Ming
    IEEE COMMUNICATIONS LETTERS, 2025, 29 (01) : 130 - 134
  • [36] Adversarial Attacks to Machine Learning-Based Smart Healthcare Systems
    Newaz, A. K. M. Iqtidar
    Haque, Nur Imtiazul
    Sikder, Amit Kumar
    Rahman, Mohammad Ashiqur
    Uluagac, A. Selcuk
    2020 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM), 2020,
  • [37] Defense Against Adversarial Attacks in Deep Learning
    Li, Yuancheng
    Wang, Yimeng
    APPLIED SCIENCES-BASEL, 2019, 9 (01):
  • [38] Federated Machine Learning in Medical imaging and against Adversarial Attacks: A retrospective multicohort study
    Teo, Zhen Ling
    Zhang, Xiaoman
    Tan, Ting Fang
    Ravichandran, Narrendar
    Yong, Liu
    Ting, Daniel S. W.
    INVESTIGATIVE OPHTHALMOLOGY & VISUAL SCIENCE, 2023, 64 (08)
  • [39] FriendlyFoe: Adversarial Machine Learning as a Practical Architectural Defense against Side Channel Attacks
    Nam, Hyoungwook
    Pothukuchi, Raghavendra Pradyumna
    Li, Bo
    Kim, Nam Sung
    Torrellas, Josep
    PROCEEDINGS OF THE 2024 THE INTERNATIONAL CONFERENCE ON PARALLEL ARCHITECTURES AND COMPILATION TECHNIQUES, PACT 2024, 2024, : 338 - 350
  • [40] AttriGuard: A Practical Defense Against Attribute Inference Attacks via Adversarial Machine Learning
    Jia, Jinyuan
    Gong, Neil Zhenqiang
    PROCEEDINGS OF THE 27TH USENIX SECURITY SYMPOSIUM, 2018, : 513 - 529