Query Complexity of Adversarial Attacks

Cited: 0
Authors
Gluch, Grzegorz [1 ]
Urbanke, Ruediger [1 ]
Affiliation
[1] Ecole Polytech Fed Lausanne, Sch Comp & Commun Sci, Lausanne, Switzerland
Keywords
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
There are two main attack models considered in the adversarial robustness literature: black-box and white-box. We consider these threat models as two ends of a fine-grained spectrum, indexed by the number of queries the adversary can ask. Using this point of view we investigate how many queries the adversary needs to make to design an attack that is comparable to the best possible attack in the white-box model. We give a lower bound on that number of queries in terms of entropy of decision boundaries of the classifier. Using this result we analyze two classical learning algorithms on two synthetic tasks for which we prove meaningful security guarantees. The obtained bounds suggest that some learning algorithms are inherently more robust against query-bounded adversaries than others.
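The query-indexed spectrum between black-box and white-box attacks described in the abstract can be made concrete with a small sketch. The code below is an illustration only, not the paper's construction: it implements the simplest query-bounded adversary, one that spends a fixed label-query budget probing random perturbations of a black-box classifier. The classifier `clf`, the budget, and the perturbation radius `eps` are all hypothetical choices for the example.

```python
import numpy as np

def query_bounded_attack(classify, x, budget=100, eps=0.5, rng=None):
    """Toy query-bounded black-box attack: spend at most `budget` label
    queries trying random perturbations of norm eps that flip the
    classifier's prediction on x. Returns (adversarial_example, queries_used),
    or (None, queries_used) if the budget is exhausted."""
    rng = np.random.default_rng(rng)
    y0 = classify(x)                  # one query for the clean label
    queries = 1
    while queries < budget:
        delta = rng.normal(size=x.shape)
        delta *= eps / np.linalg.norm(delta)   # project onto the eps-sphere
        queries += 1                           # each probe costs one query
        if classify(x + delta) != y0:
            return x + delta, queries          # label flipped: attack succeeded
    return None, queries                       # budget exhausted

# Toy linear classifier: label is the sign of the first coordinate.
clf = lambda z: int(z[0] > 0)
x0 = np.array([0.1, 0.0])
adv, used = query_bounded_attack(clf, x0, budget=50, eps=0.5, rng=0)
```

Under the paper's lower bound, the number of such queries needed to rival a white-box attack grows with the entropy of the classifier's decision boundaries; a random-probing adversary like this one is exactly the kind of strategy the bound constrains.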
Pages: 11
Related Papers
50 records
  • [31] On the query complexity of sets
    Beigel, R
    Gasarch, W
    Kummer, M
    Martin, G
    McNicholl, T
    Stephan, F
    MATHEMATICAL FOUNDATIONS OF COMPUTER SCIENCE 1996, 1996, 1113 : 206 - 217
  • [32] Query Complexity in Expectation
    Kaniewski, Jedrzej
    Lee, Troy
    de Wolf, Ronald
    AUTOMATA, LANGUAGES, AND PROGRAMMING, PT I, 2015, 9134 : 761 - 772
  • [33] QE-DBA: Query-Efficient Decision-Based Adversarial Attacks via Bayesian Optimization
    Zhang, Zhuosheng
    Ahmed, Noor
    Yu, Shucheng
    2024 INTERNATIONAL CONFERENCE ON COMPUTING, NETWORKING AND COMMUNICATIONS, ICNC, 2024, : 783 - 788
  • [34] Sparse-RS: A Versatile Framework for Query-Efficient Sparse Black-Box Adversarial Attacks
    Croce, Francesco
    Andriushchenko, Maksym
    Singh, Naman D.
    Flammarion, Nicolas
    Hein, Matthias
    THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / THE TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 6437 - 6445
  • [35] Point Cloud Adversarial Perturbation Generation for Adversarial Attacks
    He, Fengmei
    Chen, Yihuai
    Chen, Ruidong
    Nie, Weizhi
    IEEE ACCESS, 2023, 11 : 2767 - 2774
  • [36] Text Adversarial Purification as Defense against Adversarial Attacks
    Li, Linyang
    Song, Demin
    Qiu, Xipeng
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 1, 2023, : 338 - 350
  • [37] Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation
    Qin, Zeyu
    Fan, Yanbo
    Liu, Yi
    Shen, Li
    Zhang, Yong
    Wang, Jue
    Wu, Baoyuan
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [38] Recovering Localized Adversarial Attacks
    Goepfert, Jan Philip
    Wersing, Heiko
    Hammer, Barbara
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2019: THEORETICAL NEURAL COMPUTATION, PT I, 2019, 11727 : 302 - 311
  • [39] Adversarial Attacks on an Oblivious Recommender
    Christakopoulou, Konstantina
    Banerjee, Arindam
    RECSYS 2019: 13TH ACM CONFERENCE ON RECOMMENDER SYSTEMS, 2019, : 322 - 330
  • [40] XGAN : Adversarial Attacks with GAN
    Fang, Xiaoyu
    Cao, Guoxu
    Song, Huapeng
    Ouyang, Zhiyou
    2019 INTERNATIONAL CONFERENCE ON IMAGE AND VIDEO PROCESSING, AND ARTIFICIAL INTELLIGENCE, 2019, 11321