Adversarial Examples for Graph Data: Deep Insights into Attack and Defense

Cited by: 0
Authors
Wu, Huijun [1 ,2 ]
Wang, Chen [2 ]
Tyshetskiy, Yuriy [2 ]
Docherty, Andrew [2 ]
Lu, Kai [3 ]
Zhu, Liming [1 ,2 ]
Affiliations
[1] Univ New South Wales, Sydney, NSW, Australia
[2] CSIRO, Data61, Canberra, ACT, Australia
[3] Natl Univ Def Technol, Changsha, Peoples R China
Keywords: (none listed)
DOI: (none available)
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
Graph deep learning models, such as graph convolutional networks (GCN), achieve state-of-the-art performance on tasks over graph data. However, similar to other deep learning models, graph deep learning models are susceptible to adversarial attacks. Compared with non-graph data, the discrete nature of graph connections and features presents unique challenges and opportunities for adversarial attacks and defenses. In this paper, we propose techniques for both an adversarial attack and a defense against adversarial attacks. First, we show that the discreteness of graph connections and of the features in common datasets can be handled with the integrated gradients technique, which accurately estimates the effect of changing selected features or edges while still benefiting from parallel computation. In addition, we show that a graph manipulated by a targeted adversarial attack differs statistically from un-manipulated graphs. Based on this observation, we propose a defense approach that can detect and recover from a potential adversarial perturbation. Our experiments on a number of datasets show the effectiveness of the proposed techniques.
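The abstract attributes the attack to the integrated gradients technique, which scores each discrete feature or edge by accumulating gradients along a straight-line path from a baseline to the input. Below is a minimal NumPy sketch of that general attribution rule, not the paper's implementation: the surrogate score `f`, the weights `w`, and the all-zeros baseline are illustrative assumptions standing in for a GCN's output and an empty feature vector.

```python
import numpy as np

def integrated_gradients(grad_f, x, baseline, steps=50):
    """Midpoint-rule approximation of integrated gradients:
    IG_i ~= (x_i - b_i) * (1/m) * sum_k grad_f(b + a_k * (x - b))_i
    where a_k are midpoints of m equal sub-intervals of [0, 1]."""
    alphas = (np.arange(steps) + 0.5) / steps
    total = np.zeros_like(x, dtype=float)
    for a in alphas:
        total += grad_f(baseline + a * (x - baseline))
    return (x - baseline) * total / steps

# Illustrative differentiable surrogate score: f(x) = (w . x)^2
w = np.array([0.5, -1.0, 2.0])
f = lambda x: float(np.dot(w, x) ** 2)
grad_f = lambda x: 2.0 * np.dot(w, x) * w

x = np.array([1.0, 1.0, 1.0])    # binary features currently "on"
baseline = np.zeros_like(x)      # baseline: all features "off"
ig = integrated_gradients(grad_f, x, baseline)

# Completeness axiom: attributions sum to f(x) - f(baseline),
# so high-magnitude entries flag the features/edges whose flip
# would most change the score.
print(ig, ig.sum(), f(x) - f(baseline))
```

For binary graph data, an attacker would rank candidate edge or feature flips by these attribution scores rather than by a raw gradient at `x`, which can be misleading for discrete inputs.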
Pages: 4816-4823 (8 pages)
Related Papers (items 41-50 of 50)
  • [41] Learning defense transformations for counterattacking adversarial examples
    Li, Jincheng
    Zhang, Shuhai
    Cao, Jiezhang
    Tan, Mingkui
    NEURAL NETWORKS, 2023, 164 : 177 - 185
  • [42] Hadamard's Defense Against Adversarial Examples
    Hoyos, Angello
    Ruiz, Ubaldo
    Chavez, Edgar
    IEEE ACCESS, 2021, 9 : 118324 - 118333
  • [43] Background Class Defense Against Adversarial Examples
    McCoyd, Michael
    Wagner, David
    2018 IEEE SYMPOSIUM ON SECURITY AND PRIVACY WORKSHOPS (SPW 2018), 2018, : 96 - 102
  • [44] MoNet: Impressionism As A Defense Against Adversarial Examples
    Ge, Huangyi
    Chau, Sze Yiu
    Li, Ninghui
    2020 SECOND IEEE INTERNATIONAL CONFERENCE ON TRUST, PRIVACY AND SECURITY IN INTELLIGENT SYSTEMS AND APPLICATIONS (TPS-ISA 2020), 2020, : 246 - 255
  • [45] Attack-less adversarial training for a robust adversarial defense
    Ho, Jiacang
    Lee, Byung-Gook
    Kang, Dae-Ki
    APPLIED INTELLIGENCE, 2022, 52 (04) : 4364 - 4381
  • [46] Attack-less adversarial training for a robust adversarial defense
    Jiacang Ho
    Byung-Gook Lee
    Dae-Ki Kang
    Applied Intelligence, 2022, 52 : 4364 - 4381
  • [47] Guard: Graph Universal Adversarial Defense
    Li, Jintang
    Liao, Jie
    Wu, Ruofan
    Chen, Liang
    Zheng, Zibin
    Dan, Jiawang
    Meng, Changhua
    Wang, Weiqiang
    PROCEEDINGS OF THE 32ND ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2023, 2023, : 1198 - 1207
  • [48] Adversarial Technique Validation & Defense Selection Using Attack Graph & ATT&CK Matrix
    Haque, Md Ariful
    Shetty, Sachin
    Kamhoua, Charles A.
    Gold, Kimberly
    2023 INTERNATIONAL CONFERENCE ON COMPUTING, NETWORKING AND COMMUNICATIONS, ICNC, 2023, : 181 - 187
  • [49] Adversarial Attack and Defense Based Hydrangea Classification via Deep Learning: Autoencoder and MobileNet
    Lee, Jongwhee
    Cheon, Minjong
    INTELLIGENT SYSTEMS AND APPLICATIONS, VOL 2, 2023, 543 : 584 - 596
  • [50] Wireless Universal Adversarial Attack and Defense for Deep Learning-Based Modulation Classification
    Wang, Zhaowei
    Liu, Weicheng
    Wang, Hui-Ming
    IEEE COMMUNICATIONS LETTERS, 2024, 28 (03) : 582 - 586