TENSORSHIELD: Tensor-based Defense Against Adversarial Attacks on Images

Cited by: 0
Authors
Entezari, Negin [1 ]
Papalexakis, Evangelos E. [1 ]
Affiliations
[1] Univ Calif Riverside, Riverside, CA 92521 USA
Keywords
adversarial machine learning; deep neural networks; image classification;
DOI
10.1109/MILCOM55135.2022.10017763
CLC Classification Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Recent studies have demonstrated that machine learning approaches such as deep neural networks (DNNs) are easily fooled by adversarial attacks: subtle, imperceptible perturbations of the input data can change the output of a deep neural network. Deploying such vulnerable machine learning methods raises serious concerns, especially in domains where security matters, so it is crucial to design defense mechanisms against adversarial attacks. For image classification, unnoticeable perturbations mostly reside in the high-frequency spectrum of the image. In this paper, we utilize tensor decomposition techniques as a preprocessing step to find a low-rank approximation of images that discards much of this high-frequency perturbation. Recently, a defense framework called SHIELD [1] was shown to "vaccinate" Convolutional Neural Networks (CNNs) against adversarial examples by performing random-quality JPEG compressions on local patches of images from the ImageNet dataset. Our tensor-based defense mechanism outperforms the SLQ method from SHIELD by 14% against Fast Gradient Sign Method (FGSM) adversarial attacks, while maintaining comparable speed.
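To make the preprocessing idea concrete: the abstract describes computing a low-rank approximation of an image so that high-frequency (and hence most adversarial) detail is discarded before classification. The paper applies tensor decompositions to the full image tensor; the sketch below is a simpler stand-in, not the paper's exact algorithm, using a per-channel truncated SVD to build a rank-`r` approximation. The function name and rank choice are illustrative assumptions.

```python
import numpy as np

def lowrank_channels(img, rank):
    """Per-channel truncated SVD: an illustrative stand-in for the
    tensor-decomposition preprocessing described in the abstract.
    Keeping only the top `rank` singular components per channel
    suppresses high-frequency content, where imperceptible
    adversarial perturbations tend to live."""
    out = np.empty_like(img, dtype=float)
    for c in range(img.shape[2]):
        U, s, Vt = np.linalg.svd(img[:, :, c].astype(float),
                                 full_matrices=False)
        # Reconstruct from the leading `rank` singular triplets only.
        out[:, :, c] = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]
    return out

rng = np.random.default_rng(0)
img = rng.random((32, 32, 3))      # toy "image" in [0, 1]
approx = lowrank_channels(img, rank=8)
```

In a defense pipeline, `approx` (clipped back to the valid pixel range) would be fed to the classifier in place of the raw, possibly perturbed input; a genuinely multilinear decomposition (e.g., Tucker or CP on the height x width x channel tensor) couples the channels instead of treating them independently.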
Pages: 6
Related Papers
50 records in total
  • [41] AGS: Attribution Guided Sharpening as a Defense Against Adversarial Attacks
    Tobia, Javier Perez
    Braun, Phillip
    Narayan, Apurva
    ADVANCES IN INTELLIGENT DATA ANALYSIS XX, IDA 2022, 2022, 13205 : 225 - 236
  • [42] Defense-PointNet: Protecting PointNet Against Adversarial Attacks
    Zhang, Yu
    Liang, Gongbo
    Salem, Tawfiq
    Jacobs, Nathan
    2019 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), 2019, : 5654 - 5660
  • [43] Local Gradients Smoothing: Defense against localized adversarial attacks
    Naseer, Muzammal
    Khan, Salman H.
    Porikli, Fatih
    2019 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), 2019, : 1300 - 1307
  • [44] Using Uncertainty as a Defense Against Adversarial Attacks for Tabular Datasets
    Santhosh, Poornima
    Gressel, Gilad
    Darling, Michael C.
    AI 2022: ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, 13728 : 719 - 732
  • [45] A Neuro-Inspired Autoencoding Defense Against Adversarial Attacks
    Bakiskan, Can
    Cekic, Metehan
    Sezer, Ahmet Dundar
    Madhow, Upamanyu
    2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2021, : 3922 - 3926
  • [46] Assured Deep Learning: Practical Defense Against Adversarial Attacks
    Rouhani, Bita Darvish
    Samragh, Mohammad
    Javaheripi, Mojan
    Javidi, Tara
    Koushanfar, Farinaz
    2018 IEEE/ACM INTERNATIONAL CONFERENCE ON COMPUTER-AIDED DESIGN (ICCAD) DIGEST OF TECHNICAL PAPERS, 2018,
  • [47] MAEDefense: An Effective Masked AutoEncoder Defense against Adversarial Attacks
    Lyu, Wanli
    Wu, Mengjiang
    Yin, Zhaoxia
    Luo, Bin
    2023 ASIA PACIFIC SIGNAL AND INFORMATION PROCESSING ASSOCIATION ANNUAL SUMMIT AND CONFERENCE, APSIPA ASC, 2023, : 1915 - 1922
  • [48] Deadversarial Multiverse Network - A defense architecture against adversarial attacks
    Berg, Aviram
    Tulchinsky, Elin
    Zaidenberg, Nezer Jacob
    SYSTOR '19: PROCEEDINGS OF THE 12TH ACM INTERNATIONAL SYSTEMS AND STORAGE CONFERENCE, 2019, : 190 - 190
  • [49] Boundary Defense Against Black-box Adversarial Attacks
    Aithal, Manjushree B.
    Li, Xiaohua
    2022 26TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2022, : 2349 - 2356
  • [50] Image Super-Resolution as a Defense Against Adversarial Attacks
    Mustafa, Aamir
    Khan, Salman H.
    Hayat, Munawar
    Shen, Jianbing
    Shao, Ling
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2020, 29 : 1711 - 1724