Efficient Local Imperceptible Random Search for Black-Box Adversarial Attacks

Cited by: 0
Authors
Li, Yining [1 ]
You, Shu [1 ]
Chen, Yihan [1 ]
Li, Zhenhua [1 ]
Affiliations
[1] Nanjing Univ Aeronaut & Astronaut, Sch Comp Sci & Technol, MIIT Key Lab Pattern Anal & Machine Intelligence, Nanjing, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Black-box attack; Imperceptible perturbation; Salient region; High-quality; Local Imperceptible Random Search Approach;
DOI
10.1007/978-981-97-5612-4_28
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Adversarial attacks apply subtle perturbations to input images that cause a DNN model to output incorrect predictions. Most existing black-box attacks fool the target model by repeatedly querying it to construct a global perturbation, which consumes many queries and makes the perturbation easy to detect. We propose Local Imperceptible Random Search (LIRS), a local black-box attack algorithm based on salient region localization. The method combines precise localization of perturbation-sensitive regions with a random search algorithm, yielding a general framework for local perturbation that is compatible with most black-box attack algorithms. Comprehensive experiments show that LIRS efficiently generates adversarial examples with subtle perturbations under a limited query budget and effectively identifies perturbation-sensitive regions in images, outperforming existing state-of-the-art black-box attack methods.
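The abstract suggests a two-stage pipeline: first localize a perturbation-sensitive (salient) region, then run a score-based random search that modifies pixels only inside that region. The minimal sketch below illustrates that idea under stated assumptions; it is not the authors' implementation. The function name lirs_attack, the square-shaped random updates, the margin-based acceptance rule, and the toy linear classifier standing in for the black box are all illustrative choices, and the saliency mask is assumed to be supplied in advance (for example, by an off-the-shelf saliency detector).

import numpy as np

def lirs_attack(query_fn, x, y_true, saliency_mask, eps=8 / 255,
                n_queries=1000, p_init=0.1, seed=None):
    """Untargeted score-based random search restricted to a salient region.

    query_fn(x)   -> 1-D array of class scores for a single image x.
    x             -> float image in [0, 1] with shape (H, W, C).
    saliency_mask -> boolean (H, W) mask of the perturbation-sensitive region.
    """
    rng = np.random.default_rng(seed)
    h, w, c = x.shape
    ys, xs = np.where(saliency_mask)            # candidate pixel coordinates

    # Start from a random sign perturbation restricted to the salient region.
    delta = np.zeros_like(x)
    delta[saliency_mask] = eps * rng.choice([-1.0, 1.0], size=(len(ys), c))
    x_adv = np.clip(x + delta, 0.0, 1.0)

    def margin(img):
        # Margin of the true class over the best other class; <= 0 means misclassified.
        s = query_fn(img)
        return s[y_true] - np.max(np.delete(s, y_true))

    best = margin(x_adv)
    for i in range(n_queries):
        if best <= 0:                           # already adversarial
            break
        # Shrink the square side as the query budget is spent (coarse-to-fine).
        side = max(1, int(round(np.sqrt(p_init * (1 - i / n_queries)) * min(h, w))))
        # Center the candidate square on a random salient pixel.
        k = rng.integers(len(ys))
        r0 = np.clip(ys[k] - side // 2, 0, h - side)
        c0 = np.clip(xs[k] - side // 2, 0, w - side)

        cand = delta.copy()
        cand[r0:r0 + side, c0:c0 + side, :] = eps * rng.choice([-1.0, 1.0], size=c)
        cand *= saliency_mask[..., None]        # never perturb outside the region
        x_cand = np.clip(x + cand, 0.0, 1.0)

        m = margin(x_cand)                      # one query per iteration
        if m < best:                            # greedy acceptance
            best, delta, x_adv = m, cand, x_cand
    return x_adv, best

if __name__ == "__main__":
    # Toy usage: a fixed random linear classifier stands in for the black-box model.
    rng_demo = np.random.default_rng(0)
    W = rng_demo.normal(size=(10, 32 * 32 * 3))

    def query(img):
        return W @ img.reshape(-1)

    img = rng_demo.random((32, 32, 3))
    mask = np.zeros((32, 32), dtype=bool)
    mask[8:24, 8:24] = True                     # assumed salient region
    adv, final_margin = lirs_attack(query, img, y_true=3, saliency_mask=mask,
                                    n_queries=500, seed=1)
    print("final margin:", final_margin)

Restricting every candidate update to the mask keeps the L0 footprint of the perturbation small, which is the intuition behind "local" and "imperceptible" in the title; the square-update and shrinking-step schedule are borrowed from common random-search attack practice and are only one way to instantiate the search step.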
Pages
325-336 (12 pages)