Compressing Deep Graph Neural Networks via Adversarial Knowledge Distillation

Cited by: 20
Authors
He, Huarui [1 ]
Wang, Jie [1 ,2 ]
Zhang, Zhanqiu [1 ]
Wu, Feng [1 ]
Affiliations
[1] Univ Sci & Technol China, Hefei, Peoples R China
[2] Hefei Comprehens Natl Sci Ctr, Inst Artificial Intelligence, Hefei, Peoples R China
Keywords
Graph Neural Networks; Knowledge Distillation; Adversarial Training; Network Compression
DOI
10.1145/3534678.3539315
Chinese Library Classification (CLC)
TP [Automation and Computer Technology]
Subject Classification Code
0812
Abstract
Deep graph neural networks (GNNs) have been shown to be expressive for modeling graph-structured data. Nevertheless, the over-stacked architecture of deep graph models makes them difficult to deploy and rapidly test on mobile or embedded systems. To compress over-stacked GNNs, knowledge distillation via a teacher-student architecture turns out to be an effective technique, in which the key step is to measure the discrepancy between the teacher and student networks with predefined distance functions. However, using the same distance for graphs of various structures may be ill-suited, and the optimal distance formulation is hard to determine. To tackle these problems, we propose a novel Adversarial Knowledge Distillation framework for graph models, named GraphAKD, which adversarially trains a discriminator and a generator to adaptively detect and decrease the discrepancy. Specifically, observing that well-captured inter-node and inter-class correlations underpin the success of deep GNNs, we propose to criticize the inherited knowledge from node-level and class-level views with a trainable discriminator. The discriminator distinguishes between teacher knowledge and what the student inherits, while the student GNN works as a generator and aims to fool the discriminator. Experiments on node-level and graph-level classification benchmarks demonstrate that GraphAKD improves student performance by a large margin. The results imply that GraphAKD can precisely transfer knowledge from a complicated teacher GNN to a compact student GNN.
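The alternating generator/discriminator training pattern described in the abstract can be illustrated with a short, self-contained sketch. This is not the authors' implementation: the MLP stand-ins for the student GNN, the discriminator architecture, the KL term used as a stand-in for the class-level view (GraphAKD itself criticizes both views with trainable discriminators), the random "teacher" outputs, and all hyperparameters are illustrative assumptions. It only shows the general scheme in which the student tries to make its node-level representations indistinguishable from the teacher's while a discriminator learns to tell them apart.

```python
# Minimal sketch of adversarial knowledge distillation in the spirit of GraphAKD.
# All module names, shapes, and loss weights below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyStudent(nn.Module):
    """Stand-in for a compact student GNN (a plain MLP over node features)."""
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        self.classifier = nn.Linear(hid_dim, n_classes)

    def forward(self, x):
        h = self.encoder(x)            # node-level representations
        return h, self.classifier(h)   # (embeddings, class logits)

class Discriminator(nn.Module):
    """Scores whether node-level representations come from teacher or student."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, h):
        return self.net(h)             # one real/fake score per node

# Toy data: node features, labels, and frozen "teacher" outputs (assumed precomputed).
n_nodes, in_dim, hid_dim, n_classes = 128, 32, 16, 4
x = torch.randn(n_nodes, in_dim)
labels = torch.randint(0, n_classes, (n_nodes,))
teacher_h = torch.randn(n_nodes, hid_dim)         # teacher node embeddings
teacher_logits = torch.randn(n_nodes, n_classes)  # teacher class predictions

student, disc = TinyStudent(in_dim, hid_dim, n_classes), Discriminator(hid_dim)
opt_s = torch.optim.Adam(student.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(200):
    # 1) Discriminator step: teacher embeddings are "real", student ones "fake".
    with torch.no_grad():
        h_s, _ = student(x)
    d_loss = (bce(disc(teacher_h), torch.ones(n_nodes, 1))
              + bce(disc(h_s), torch.zeros(n_nodes, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Student (generator) step: fool the discriminator at the node level,
    #    match the teacher's class predictions, and fit the ground-truth labels.
    h_s, logits_s = student(x)
    adv_loss = bce(disc(h_s), torch.ones(n_nodes, 1))
    kd_loss = F.kl_div(F.log_softmax(logits_s, dim=-1),
                       F.softmax(teacher_logits, dim=-1), reduction="batchmean")
    ce_loss = F.cross_entropy(logits_s, labels)
    s_loss = ce_loss + adv_loss + kd_loss
    opt_s.zero_grad(); s_loss.backward(); opt_s.step()
```

In an actual GraphAKD-style setup the student and teacher would be message-passing GNNs over a shared graph, and the discrepancy at the class level would likewise be judged adversarially rather than by the fixed KL term used in this sketch.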
Pages: 534-544
Number of Pages: 11
Related Papers
50 records in total
  • [41] Defending adversarial attacks in Graph Neural Networks via tensor enhancement. Zhang, Jianfu; Hong, Yan; Cheng, Dawei; Zhang, Liqing; Zhao, Qibin. PATTERN RECOGNITION, 2025, 158.
  • [42] Hardening Deep Neural Networks via Adversarial Model Cascades. Vijaykeerthy, Deepak; Suri, Anshuman; Mehta, Sameep; Kumaraguru, Ponnurangam. 2019 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2019.
  • [43] Compressing Visual-linguistic Model via Knowledge Distillation. Fang, Zhiyuan; Wang, Jianfeng; Hu, Xiaowei; Wang, Lijuan; Yang, Yezhou; Liu, Zicheng. 2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021: 1408-1418.
  • [44] Concept Distillation in Graph Neural Networks. Magister, Lucie Charlotte; Barbiero, Pietro; Kazhdan, Dmitry; Siciliano, Federico; Ciravegna, Gabriele; Silvestri, Fabrizio; Jamnik, Mateja; Lio, Pietro. EXPLAINABLE ARTIFICIAL INTELLIGENCE, XAI 2023, PT III, 2023, 1903: 233-255.
  • [45] Cross-layer knowledge distillation with KL divergence and offline ensemble for compressing deep neural network. Chou, Hsing-Hung; Chiu, Ching-Te; Liao, Yi-Ping. APSIPA TRANSACTIONS ON SIGNAL AND INFORMATION PROCESSING, 2021, 10: 303-338.
  • [46] Compressing Deep Neural Networks for Recognizing Places. Saha, Soham; Varma, Girish; Jawahar, C. V. PROCEEDINGS 2017 4TH IAPR ASIAN CONFERENCE ON PATTERN RECOGNITION (ACPR), 2017: 352-357.
  • [47] Compressing convolutional neural networks with cheap convolutions and online distillation. Xie, Jiao; Lin, Shaohui; Zhang, Yichen; Luo, Linkai. DISPLAYS, 2023, 78.
  • [48] Knowledge Reverse Distillation Based Confidence Calibration for Deep Neural Networks. Jiang, Xianhui; Deng, Xiaogang. NEURAL PROCESSING LETTERS, 2023, 55 (01): 345-360.
  • [50] Feature Distribution-based Knowledge Distillation for Deep Neural Networks. Hong, Hyeonseok; Kim, Hyun. 2022 19TH INTERNATIONAL SOC DESIGN CONFERENCE (ISOCC), 2022: 75-76.