Superpixel Attack: Enhancing Black-Box Adversarial Attack with Image-Driven Division Areas

Cited by: 0
Authors
Oe, Issa [1 ]
Yamamura, Keiichiro [1 ]
Ishikura, Hiroki [1 ]
Hamahira, Ryo [1 ]
Fujisawa, Katsuki [2 ]
Affiliations
[1] Kyushu Univ, Grad Sch Math, Fukuoka, Japan
[2] Kyushu Univ, Inst Math Ind, Fukuoka, Japan
Funding
Japan Science and Technology Agency
Keywords
adversarial attack; security for AI; computer vision; deep learning;
DOI
10.1007/978-981-99-8388-9_12
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Deep learning models are used in safety-critical tasks such as automated driving and face recognition. However, small perturbations in the model input can significantly change the predictions. Adversarial attacks are used to identify small perturbations that lead to misclassification. More powerful black-box adversarial attacks are required to develop more effective defenses. A promising approach to black-box adversarial attacks is to repeatedly extract a specific image area and change the perturbation added to it. Existing attacks adopt simple rectangles as the areas in which the perturbation is changed in a single iteration. We propose applying superpixels instead, which achieve a good balance between color variance and compactness. We also propose a new search method, versatile search, and a novel attack method, Superpixel Attack, which applies superpixels and performs versatile search. Superpixel Attack improves attack success rates by an average of 2.10% compared with existing attacks. Most models used in this study are robust against adversarial attacks, so this improvement is significant for black-box adversarial attacks. The code is available at https://github.com/oe1307/SuperpixelAttack.git.
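The abstract describes iteratively re-sampling the perturbation inside image-driven regions instead of rectangles. The sketch below is a minimal, hypothetical illustration of that idea, not the authors' Superpixel Attack or their versatile search: it segments the clean image into superpixels with SLIC (scikit-image) and greedily re-samples an L-infinity-bounded sign perturbation one superpixel at a time. The loss_fn callback, the greedy acceptance rule, and all parameter values (eps, n_segments, steps) are illustrative assumptions.

```python
# Hypothetical sketch (not the authors' implementation): superpixels as the
# per-iteration update regions of a score-based black-box attack.
import numpy as np
from skimage.segmentation import slic

def superpixel_region_attack(image, loss_fn, eps=8 / 255, n_segments=256, steps=500, seed=0):
    """image: HxWx3 float array in [0, 1].
    loss_fn: assumed black-box callback returning a scalar loss to maximize
    (e.g. the margin loss of a queried classifier on the perturbed image)."""
    rng = np.random.default_rng(seed)
    # Segment the clean image once; `compactness` trades color homogeneity
    # against region shape regularity.
    segments = slic(image, n_segments=n_segments, compactness=10.0, start_label=0)
    labels = np.unique(segments)

    # Initialize with a random sign perturbation at the L-infinity bound.
    delta = eps * rng.choice([-1.0, 1.0], size=image.shape)
    best_adv = np.clip(image + delta, 0.0, 1.0)
    best_loss = loss_fn(best_adv)

    for _ in range(steps):
        # Pick one superpixel and re-sample the perturbation sign inside it
        # (one sign per color channel, shared by all pixels of the region).
        region = segments == rng.choice(labels)
        candidate = delta.copy()
        candidate[region] = eps * rng.choice([-1.0, 1.0], size=3)
        adv = np.clip(image + candidate, 0.0, 1.0)
        loss = loss_fn(adv)
        if loss > best_loss:  # greedy acceptance, in the spirit of random-search attacks
            best_loss, best_adv, delta = loss, adv, candidate
    return best_adv, best_loss
```

The segmentation is computed only on the clean image and reused across iterations, so each query changes the perturbation within one image-driven area; the paper's versatile search replaces the purely random region/sign sampling used here.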
Pages: 141-152
Number of pages: 12