Efficient Local Imperceptible Random Search for Black-Box Adversarial Attacks

Cited: 0
Authors
Li, Yining [1 ]
You, Shu [1 ]
Chen, Yihan [1 ]
Li, Zhenhua [1 ]
Affiliation
[1] Nanjing Univ Aeronaut & Astronaut, Sch Comp Sci & Technol, MIIT Key Lab Pattern Anal & Machine Intelligence, Nanjing, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Black-box attack; Imperceptible perturbation; Salient region; High-quality; Local Imperceptible Random Search Approach;
DOI
10.1007/978-981-97-5612-4_28
CLC Number
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Adversarial attacks apply subtle perturbations to input images that cause a DNN model to output incorrect predictions. Most existing black-box attacks fool the target model by repeatedly querying it to construct a global perturbation, which requires many queries and makes the perturbation easily detectable. We propose a local black-box attack algorithm based on salient region localization, called Local Imperceptible Random Search (LIRS). The method combines precise localization of perturbation-sensitive regions with a random search algorithm, yielding a general framework for local perturbation that is compatible with most black-box attack algorithms. Comprehensive experiments show that LIRS efficiently generates adversarial examples with subtle perturbations under a limited query budget, effectively identifies perturbation-sensitive regions in images, and outperforms existing state-of-the-art black-box attack methods.
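The record contains no code; the sketch below only illustrates the general idea stated in the abstract, namely a query-based random search whose perturbations are confined to a salient-region mask. The functions query_model and salient_region_mask, the variance-based saliency heuristic, and all parameter values are assumptions made for illustration and are not the authors' LIRS implementation.

import numpy as np

# Placeholder: in a real attack this would be a rate-limited call to the
# target model's prediction API, returning class scores for one image.
def query_model(x):
    raise NotImplementedError("plug in the target model's scoring API here")

# Toy stand-in for salient-region localization: mark the fraction `top_k`
# of pixels with the highest channel variance as sensitive. The paper's
# actual localization method is not reproduced here.
def salient_region_mask(x, top_k=0.1):
    variance = x.var(axis=2)                      # per-pixel variance over channels
    thresh = np.quantile(variance, 1.0 - top_k)   # keep only the top-k fraction
    return (variance >= thresh)[..., None]        # H x W x 1 mask, broadcastable

# Random search confined to the salient-region mask: propose signed
# perturbations inside the mask and keep those that lower the true-class score.
def local_random_search(x, true_label, eps=0.05, iters=1000, seed=0):
    rng = np.random.default_rng(seed)
    mask = salient_region_mask(x)
    x_adv = x.copy()
    best = query_model(x_adv)[true_label]
    for _ in range(iters):
        delta = eps * rng.choice([-1.0, 1.0], size=x.shape) * mask
        candidate = np.clip(x_adv + delta, 0.0, 1.0)
        scores = query_model(candidate)           # one query per iteration
        if scores[true_label] < best:
            x_adv, best = candidate, scores[true_label]
            if np.argmax(scores) != true_label:   # misclassified: attack succeeded
                break
    return x_adv

Because the proposal is zeroed outside the mask, each accepted step perturbs only the assumed sensitive region, which is the sense in which the abstract's "local perturbation" keeps the change small and hard to detect.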
Pages: 325-336
Page count: 12
Related Papers
50 records
  • [21] Curls & Whey: Boosting Black-Box Adversarial Attacks
    Shi, Yucheng
    Wang, Siyu
    Han, Yahong
    2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 6512 - 6520
  • [22] Boundary Defense Against Black-box Adversarial Attacks
    Aithal, Manjushree B.
    Li, Xiaohua
    2022 26TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2022, : 2349 - 2356
  • [23] Black-box Adversarial Attacks with Limited Queries and Information
    Ilyas, Andrew
    Engstrom, Logan
    Athalye, Anish
    Lin, Jessy
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 80, 2018, 80
  • [24] Black-box adversarial attacks by manipulating image attributes
    Wei, Xingxing
    Guo, Ying
    Li, Bo
    Information Sciences, 2021, 550 : 285 - 296
  • [25] Query-Efficient Black-Box Adversarial Attack with Random Pattern Noises
    Yuito, Makoto
    Suzuki, Kenta
    Yoneyama, Kazuki
    INFORMATION AND COMMUNICATIONS SECURITY, ICICS 2022, 2022, 13407 : 303 - 323
  • [26] KENKU: Towards Efficient and Stealthy Black-box Adversarial Attacks against ASR Systems
    Wu, Xinghui
    Ma, Shiqing
    Shen, Chao
    Lin, Chenhao
    Wang, Qian
    Li, Qi
    Rao, Yuan
    PROCEEDINGS OF THE 32ND USENIX SECURITY SYMPOSIUM, 2023, : 247 - 264
  • [27] Black-Box Adversarial Attacks against Audio Forensics Models
    Jiang, Yi
    Ye, Dengpan
    SECURITY AND COMMUNICATION NETWORKS, 2022, 2022
  • [28] AutoAttacker: A reinforcement learning approach for black-box adversarial attacks
    Tsingenopoulos, Ilias
    Preuveneers, Davy
    Joosen, Wouter
    2019 4TH IEEE EUROPEAN SYMPOSIUM ON SECURITY AND PRIVACY WORKSHOPS (EUROS&PW), 2019, : 229 - 237
  • [29] Black-box adversarial attacks on XSS attack detection model
    Wang, Qiuhua
    Yang, Hui
    Wu, Guohua
    Choo, Kim-Kwang Raymond
    Zhang, Zheng
    Miao, Gongxun
    Ren, Yizhi
    COMPUTERS & SECURITY, 2022, 113
  • [30] Black-box transferable adversarial attacks based on ensemble advGAN
    Huang S.-N.
    Li Y.-X.
    Mao Y.-H.
    Ban A.-Y.
    Zhang Z.-Y.
    Jilin Daxue Xuebao (Gongxueban)/Journal of Jilin University (Engineering and Technology Edition), 2022, 52 (10): 2391 - 2398