Efficient Local Imperceptible Random Search for Black-Box Adversarial Attacks

Times Cited: 0
Authors
Li, Yining [1 ]
You, Shu [1 ]
Chen, Yihan [1 ]
Li, Zhenhua [1 ]
Affiliations
[1] Nanjing Univ Aeronaut & Astronaut, Sch Comp Sci & Technol, MIIT Key Lab Pattern Anal & Machine Intelligence, Nanjing, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Black-box attack; Imperceptible perturbation; Salient region; High-quality; Local Imperceptible Random Search Approach;
DOI
10.1007/978-981-97-5612-4_28
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Adversarial attacks apply subtle perturbations to input images that cause a DNN model to output incorrect predictions. Most existing black-box attacks fool the target model by repeatedly querying it to generate a global perturbation, which requires many queries and makes the perturbation easy to detect. We propose a local black-box attack algorithm based on salient-region localization, called Local Imperceptible Random Search (LIRS). The method combines precise localization of perturbation-sensitive regions with a random search algorithm, yielding a general framework for local perturbation that is compatible with most black-box attack algorithms. Comprehensive experiments show that LIRS efficiently generates adversarial examples with subtle perturbations under a limited query budget and effectively identifies perturbation-sensitive regions in images, outperforming existing state-of-the-art black-box attack methods.
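The abstract only summarizes the approach; the sketch below (Python, not the authors' implementation) illustrates the general idea of a query-based random search whose candidate updates are confined to a precomputed salient-region mask. The black-box oracle query_model, the mask saliency_mask, and all parameter values are assumptions introduced here for illustration only.

import numpy as np

def local_random_search_attack(x, y_true, query_model, saliency_mask,
                               eps=8 / 255, max_queries=1000, patch=8):
    """Illustrative sketch: query-limited random search restricted to a salient mask.

    x             : input image, float array in [0, 1], shape (H, W, C)
    y_true        : ground-truth label (int)
    query_model   : assumed black-box oracle returning the target model's loss
    saliency_mask : boolean array (H, W), True where perturbation is allowed
    """
    h, w, c = x.shape
    delta = np.zeros_like(x)                       # current local perturbation
    best_loss = query_model(np.clip(x + delta, 0, 1), y_true)
    allowed = np.argwhere(saliency_mask)           # pixel coordinates inside the salient region

    for _ in range(max_queries - 1):
        # Propose a random square update centered on a pixel of the salient region.
        cy, cx = allowed[np.random.randint(len(allowed))]
        y0, x0 = max(0, cy - patch // 2), max(0, cx - patch // 2)
        cand = delta.copy()
        cand[y0:y0 + patch, x0:x0 + patch] = eps * np.random.choice(
            [-1.0, 1.0], size=(1, 1, c))
        cand *= saliency_mask[..., None]           # zero the update outside the salient region
        cand = np.clip(cand, -eps, eps)

        loss = query_model(np.clip(x + cand, 0, 1), y_true)
        if loss > best_loss:                       # greedy acceptance: keep loss-increasing updates
            delta, best_loss = cand, loss

    return np.clip(x + delta, 0, 1)

The square update and greedy acceptance mirror common random-search black-box attacks; the point illustrated here is that masking candidate updates to the salient region keeps the perturbation local, which is the property the paper's framework targets.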
Pages: 325-336
Number of pages: 12