A Hard Label Black-box Adversarial Attack Against Graph Neural Networks

Cited by: 20
Authors
Mu, Jiaming [1 ,2 ]
Wang, Binghui [3 ]
Li, Qi [1 ,2 ]
Sun, Kun [4 ]
Xu, Mingwei [1 ,2 ]
Liu, Zhuotao [1 ,2 ]
Affiliations
[1] Tsinghua Univ, Inst Network Sci & Cyberspace, Dept Comp Sci, Beijing, Peoples R China
[2] Tsinghua Univ, BNRist, Beijing, Peoples R China
[3] Illinois Inst Technol, Chicago, IL USA
[4] George Mason Univ, Fairfax, VA 22030 USA
Funding
National Key R&D Program of China;
Keywords
Black-box adversarial attack; structural perturbation; graph neural networks; graph classification;
DOI
10.1145/3460120.3484796
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Graph Neural Networks (GNNs) have achieved state-of-the-art performance in various graph structure related tasks such as node classification and graph classification. However, GNNs are vulnerable to adversarial attacks. Existing works mainly focus on attacking GNNs for node classification; nevertheless, attacks against GNNs for graph classification have not been well explored. In this work, we conduct a systematic study on adversarial attacks against GNNs for graph classification via perturbing the graph structure. In particular, we focus on the most challenging attack, i.e., the hard label black-box attack, where an attacker has no knowledge about the target GNN model and can only obtain predicted labels by querying the target model. To achieve this goal, we formulate our attack as an optimization problem, whose objective is to minimize the number of edges to be perturbed in a graph while maintaining a high attack success rate. The original optimization problem is intractable to solve, so we relax it to a tractable one, which we solve with a theoretical convergence guarantee. We also design a coarse-grained searching algorithm and a query-efficient gradient computation algorithm to decrease the number of queries to the target GNN model. Our experimental results on three real-world datasets demonstrate that our attack can effectively attack representative GNNs for graph classification with fewer queries and perturbations. We also evaluate the effectiveness of our attack under two defenses: one is a well-designed adversarial graph detector, and the other is a target GNN model equipped with a defense to prevent adversarial graph generation. Our experimental results show that such defenses are not effective enough, which highlights the need for more advanced defenses.
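To make the hard-label setting concrete, the following is a minimal, hypothetical sketch (not the paper's algorithm) of an attacker who can only observe predicted labels: it repeatedly flips a random edge in the adjacency matrix and queries the target until the label changes. The oracle `query_label` is a stand-in simulated by a fixed linear rule; all names and the random-flip search are illustrative assumptions, whereas the paper itself solves a relaxed optimization problem with convergence guarantees.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical hard-label oracle: the attacker sees ONLY the predicted class.
# Here the "target model" is simulated by a fixed linear rule over the
# flattened 4x4 adjacency matrix; a real attack would query a deployed GNN.
W = rng.normal(size=16)

def query_label(adj):
    return int(adj.flatten() @ W > 0)

def random_flip_attack(adj, true_label, n_iters=200):
    """Toy structural perturbation search: flip one random edge per step
    (keeping the adjacency matrix symmetric) and accept the first candidate
    whose hard label differs from the original prediction."""
    queries = 0
    n = adj.shape[0]
    for _ in range(n_iters):
        cand = adj.copy()
        i, j = rng.integers(0, n, size=2)
        cand[i, j] = cand[j, i] = 1 - cand[i, j]  # flip edge (i, j)
        queries += 1
        if query_label(cand) != true_label:
            return cand, queries  # adversarial graph found
    return None, queries  # budget exhausted
```

This brute-force search already illustrates why query efficiency matters: each candidate perturbation costs one query, which is exactly the cost the paper's coarse-grained searching and query-efficient gradient estimation are designed to reduce.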
Pages: 108-125 (18 pages)
Related Papers
50 records in total
  • [21] Adversarial Eigen Attack on Black-Box Models
    Zhou, Linjun
    Cui, Peng
    Zhang, Xingxuan
    Jiang, Yinan
    Yang, Shiqiang
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 15233 - 15241
  • [22] A Black-Box Adversarial Attack for Poisoning Clustering
    Cina, Antonio Emanuele
    Torcinovich, Alessandro
    Pelillo, Marcello
    PATTERN RECOGNITION, 2022, 122
  • [23] Simple Black-Box Adversarial Attacks on Deep Neural Networks
    Narodytska, Nina
    Kasiviswanathan, Shiva
    2017 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW), 2017, : 1310 - 1318
  • [24] Saliency Attack: Towards Imperceptible Black-box Adversarial Attack
    Dai, Zeyu
    Liu, Shengcai
    Li, Qing
    Tang, Ke
    ACM TRANSACTIONS ON INTELLIGENT SYSTEMS AND TECHNOLOGY, 2023, 14 (03)
  • [25] Adversarial Label Poisoning Attack on Graph Neural Networks via Label Propagation
    Liu, Ganlin
    Huang, Xiaowei
    Yi, Xinping
    COMPUTER VISION - ECCV 2022, PT V, 2022, 13665 : 227 - 243
  • [26] Spectral Privacy Detection on Black-box Graph Neural Networks
    Yang, Yining
    Lu, Jialiang
    2023 IEEE 98TH VEHICULAR TECHNOLOGY CONFERENCE, VTC2023-FALL, 2023,
  • [27] Revisiting Black-box Ownership Verification for Graph Neural Networks
    Zhou, Ruikai
    Yang, Kang
    Wang, Xiuling
    Wang, Wendy Hui
    Xu, Jun
    45TH IEEE SYMPOSIUM ON SECURITY AND PRIVACY, SP 2024, 2024, : 2478 - 2496
  • [28] A General Black-box Adversarial Attack on Graph-based Fake News Detectors
    School of Artificial Intelligence, Optics and Electronics, Northwestern Polytechnical University, China
    (authors not specified)
    arXiv
  • [29] Boosting Black-box Adversarial Attack with a Better Convergence
    Yin, Heng
    Wang, Jindong
    Mi, Yan
    Zhang, Xiaoning
    2020 5TH INTERNATIONAL CONFERENCE ON MECHANICAL, CONTROL AND COMPUTER ENGINEERING (ICMCCE 2020), 2020, : 1234 - 1238
  • [30] An Effective Way to Boost Black-Box Adversarial Attack
    Feng, Xinjie
    Yao, Hongxun
    Che, Wenbin
    Zhang, Shengping
    MULTIMEDIA MODELING (MMM 2020), PT I, 2020, 11961 : 393 - 404