Guided Adversarial Attack for Evaluating and Enhancing Adversarial Defenses

Cited by: 0
Authors
Sriramanan, Gaurang [1 ]
Addepalli, Sravanti [1 ]
Baburaj, Arya [1 ]
Babu, R. Venkatesh [1 ]
Affiliations
[1] Indian Inst Sci, Dept Computat & Data Sci, Video Analyt Lab, Bangalore, India
Keywords
DOI
Not available
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Advances in the development of adversarial attacks have been fundamental to the progress of adversarial defense research. Efficient and effective attacks are crucial for reliable evaluation of defenses, and also for developing robust models. Adversarial attacks are often generated by maximizing standard losses such as the cross-entropy loss or maximum-margin loss within a constraint set using Projected Gradient Descent (PGD). In this work, we introduce a relaxation term to the standard loss that finds more suitable gradient directions, increases attack efficacy, and leads to more efficient adversarial training. We propose Guided Adversarial Margin Attack (GAMA), which utilizes function mapping of the clean image to guide the generation of adversaries, thereby resulting in stronger attacks. We evaluate our attack against multiple defenses and show improved performance when compared to existing attacks. Further, we propose Guided Adversarial Training (GAT), which achieves state-of-the-art performance amongst single-step defenses by utilizing the proposed relaxation term for both attack generation and training.
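The guided-attack idea in the abstract can be sketched in code. The following is an illustrative reconstruction, not the authors' exact formulation: a PGD attack that ascends a maximum-margin loss plus an L2 relaxation term tying the perturbed output to the clean-image output (the "guide"), with the relaxation weight decayed over iterations. The linear classifier `f(x) = W @ x`, the step size, and the linear decay schedule are assumptions chosen to keep the gradients analytic and the demo dependency-free.

```python
import numpy as np

def gama_style_pgd(W, x, y, eps=0.25, step=0.05, n_steps=20, lam0=10.0):
    """Maximize  max_{j != y} f_j(x') - f_y(x') + lam * ||f(x') - f(x)||^2
    over the L-infinity ball of radius eps around x, for a linear f(x) = W @ x.
    The second (relaxation) term is guided by the clean-image logits f(x)."""
    f_clean = W @ x                        # clean-image "guide" logits
    x_adv = x.astype(float).copy()
    for t in range(n_steps):
        lam = lam0 * (1.0 - t / n_steps)   # linearly decay the relaxation weight
        f = W @ x_adv
        # strongest wrong class for the margin term (mask out the true class)
        j = int(np.argmax(np.where(np.arange(len(f)) == y, -np.inf, f)))
        grad = W[j] - W[y]                               # d(margin)/dx'
        grad = grad + 2.0 * lam * (W.T @ (f - f_clean))  # d(relaxation)/dx'
        x_adv = x_adv + step * np.sign(grad)             # signed gradient ascent
        x_adv = np.clip(x_adv, x - eps, x + eps)         # project into the ball
    return x_adv
```

For a nonlinear network the analytic gradients above would be replaced by automatic differentiation, but the structure (margin loss plus a decayed guide term, followed by projection) is the same.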
Pages: 12
Related Papers
50 records
  • [1] Metrics for Evaluating Adversarial Attack Patterns
    Smith, Savanna
    Muto, Shunta
    Evans, Anna
    Ward, Chris M.
    Harguess, Josh
    GEOSPATIAL INFORMATICS XII, 2022, 12099
  • [2] MaskDGA: An Evasion Attack Against DGA Classifiers and Adversarial Defenses
    Sidi, Lior
    Nadler, Asaf
    Shabtai, Asaf
    IEEE ACCESS, 2020, 8 : 161580 - 161592
  • [3] LOCAL TEXTURE COMPLEXITY GUIDED ADVERSARIAL ATTACK
    Zhang, Jiefei
    Wang, Jie
    Lyu, Wanli
    Yin, Zhaoxia
    2023 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2023, : 2065 - 2069
  • [4] Evaluating Adversarial Robustness of Secret Key-Based Defenses
    Ali, Ziad Tariq Muhammad
    Mohammed, Ameer
    Ahmad, Imtiaz
    IEEE ACCESS, 2022, 10 : 34872 - 34882
  • [5] Evaluating the Adversarial Robustness of Adaptive Test-time Defenses
    Croce, Francesco
    Gowal, Sven
    Brunner, Thomas
    Shelhamer, Evan
    Hein, Matthias
    Cemgil, Taylan
INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022
  • [6] Enhancing Adversarial Example Transferability with an Intermediate Level Attack
    Huang, Qian
    Katsman, Isay
    He, Horace
    Gu, Zeqi
    Belongie, Serge
    Lim, Ser-Nam
    2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, : 4732 - 4741
  • [7] Enhancing adversarial attack transferability with multi-scale feature attack
    Sun, Caixia
    Zou, Lian
    Fan, Cien
    Shi, Yu
    Liu, Yifeng
    INTERNATIONAL JOURNAL OF WAVELETS MULTIRESOLUTION AND INFORMATION PROCESSING, 2021, 19 (02)
  • [8] Fashion-Guided Adversarial Attack on Person Segmentation
    Treu, Marc
Le, Trung-Nghia
    Nguyen, Huy H.
    Yamagishi, Junichi
    Echizen, Isao
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2021, 2021, : 943 - 952
  • [9] Scaling provable adversarial defenses
    Wong, Eric
    Schmidt, Frank R.
    Metzen, Jan Hendrik
    Kolter, J. Zico
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31
  • [10] Evaluating the Effectiveness of Attacks and Defenses on Machine Learning Through Adversarial Samples
    Gala, Viraj R.
    Schneider, Martin A.
    2023 IEEE INTERNATIONAL CONFERENCE ON SOFTWARE TESTING, VERIFICATION AND VALIDATION WORKSHOPS, ICSTW, 2023, : 90 - 97