Adversarial Examples for Graph Data: Deep Insights into Attack and Defense

Cited: 0
Authors
Wu, Huijun [1 ,2 ]
Wang, Chen [2 ]
Tyshetskiy, Yuriy [2 ]
Docherty, Andrew [2 ]
Lu, Kai [3 ]
Zhu, Liming [1 ,2 ]
Affiliations
[1] Univ New South Wales, Sydney, NSW, Australia
[2] CSIRO, Data61, Canberra, ACT, Australia
[3] Natl Univ Def Technol, Changsha, Peoples R China
Keywords
DOI: Not available
Chinese Library Classification (CLC): TP18 [Artificial Intelligence Theory]
Discipline Codes: 081104; 0812; 0835; 1405
Abstract
Graph deep learning models, such as graph convolutional networks (GCNs), achieve state-of-the-art performance on tasks over graph data. Like other deep learning models, however, they are susceptible to adversarial attacks. Compared with non-graph data, the discrete nature of graph connections and features provides unique challenges and opportunities for adversarial attacks and defenses. In this paper, we propose techniques for both an adversarial attack and a defense against such attacks. First, we show that the discreteness of graph connections and of the features in common datasets can be handled with the integrated gradients technique, which accurately estimates the effect of changing selected features or edges while still benefiting from parallel computation. In addition, we show that a graph adversarially manipulated by a targeted attack differs statistically from unperturbed graphs. Based on this observation, we propose a defense approach that can detect and recover from a potential adversarial perturbation. Our experiments on a number of datasets show the effectiveness of the proposed techniques.
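The attack side of the abstract rests on using integrated gradients to score how much flipping a discrete edge (or feature) would change the model's loss. Below is a minimal sketch of that computation under assumptions not stated in the abstract: a PyTorch GCN exposed as model(features, adj) that returns per-node log-probabilities over a dense adjacency matrix, and an all-zero adjacency baseline. It illustrates the standard integrated-gradients formula applied to adjacency entries, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def ig_edge_scores(model, features, adj, target_node, label, steps=20):
    """Integrated gradients of the target node's loss w.r.t. each adjacency entry.

    `model(features, adj)` is an assumed interface returning per-node
    log-probabilities for a GCN on a dense adjacency matrix.  The all-zero
    baseline is one possible choice, used here only for illustration.
    """
    baseline = torch.zeros_like(adj)           # assumed baseline: the empty graph
    total_grads = torch.zeros_like(adj)
    target = torch.tensor([label])
    for k in range(1, steps + 1):
        # Interpolate between the baseline and the observed adjacency matrix,
        # and accumulate the loss gradient at each interpolation point.
        interp = (baseline + (k / steps) * (adj - baseline)).requires_grad_(True)
        loss = F.nll_loss(model(features, interp)[target_node].unsqueeze(0), target)
        total_grads += torch.autograd.grad(loss, interp)[0]
    # Scale the averaged path gradient by the input difference (standard IG formula).
    return (adj - baseline) * total_grads / steps
```

Entries with large scores mark the edge flips that most affect the target node's loss and can be ranked as candidate perturbations; an analogous computation over the feature matrix covers discrete feature flips.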
Pages: 4816-4823 (8 pages)
Related Papers (50 in total)
  • [21] DIPDefend: Deep Image Prior Driven Defense against Adversarial Examples
    Dai, Tao
    Feng, Yan
    Wu, Dongxian
    Chen, Bin
    Lu, Jian
    Jiang, Yong
    Xia, Shu-Tao
    MM '20: PROCEEDINGS OF THE 28TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, 2020, : 1404 - 1412
  • [22] Research on Graph Structure Data Adversarial Examples Based on Graph Theory Metrics
    He, Wenyong
    Lu, Mingming
    Zheng, Yiji
    Xiong, Neal N.
    SMART COMPUTING AND COMMUNICATION, 2022, 13202 : 394 - 403
  • [23] Sinkhorn Adversarial Attack and Defense
    Subramanyam, A. V.
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2022, 31 : 4039 - 4049
  • [24] Adversarial Attack and Defense: A Survey
    Liang, Hongshuo
    He, Erlu
    Zhao, Yangyang
    Jia, Zhe
    Li, Hao
    ELECTRONICS, 2022, 11 (08)
  • [25] Adversarial Sample Attack and Defense Method for Encrypted Traffic Data
    Ding, Yi
    Zhu, Guiqin
    Chen, Dajiang
    Qin, Xue
    Cao, Mingsheng
    Qin, Zhiguang
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2022, 23 (10) : 18024 - 18039
  • [26] Understanding Adversarial Attack and Defense towards Deep Compressed Neural Networks
    Liu, Qi
    Liu, Tao
    Wen, Wujie
    CYBER SENSING 2018, 2018, 10630
  • [27] Adversarial Attack and Defense on Deep Learning for Air Transportation Communication Jamming
    Liu, Mingqian
    Zhang, Zhenju
    Chen, Yunfei
    Ge, Jianhua
    Zhao, Nan
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2024, 25 (01) : 973 - 986
  • [28] Friend-Safe Adversarial Examples in an Evasion Attack on a Deep Neural Network
    Kwon, Hyun
    Yoon, Hyunsoo
    Choi, Daeseon
    INFORMATION SECURITY AND CRYPTOLOGY - ICISC 2017, 2018, 10779 : 351 - 367
  • [29] Moving Target Defense for Embedded Deep Visual Sensing against Adversarial Examples
    Song, Qun
    Yan, Zhenyu
    Tan, Rui
    PROCEEDINGS OF THE 17TH CONFERENCE ON EMBEDDED NETWORKED SENSOR SYSTEMS (SENSYS '19), 2019, : 124 - 137
  • [30] DeepMTD: Moving Target Defense for Deep Visual Sensing against Adversarial Examples
    Song, Qun
    Yan, Zhenyu
    Tan, Rui
    ACM TRANSACTIONS ON SENSOR NETWORKS, 2022, 18 (01)