Compressing Deep Graph Neural Networks via Adversarial Knowledge Distillation

Cited by: 20
Authors
He, Huarui [1 ]
Wang, Jie [1 ,2 ]
Zhang, Zhanqiu [1 ]
Wu, Feng [1 ]
Affiliations
[1] University of Science and Technology of China, Hefei, People's Republic of China
[2] Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, People's Republic of China
Keywords
Graph Neural Networks; Knowledge Distillation; Adversarial Training; Network Compression
DOI
10.1145/3534678.3539315
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
Deep graph neural networks (GNNs) have been shown to be expressive for modeling graph-structured data. Nevertheless, the over-stacked architecture of deep graph models makes it difficult to deploy and rapidly test them on mobile or embedded systems. To compress over-stacked GNNs, knowledge distillation via a teacher-student architecture turns out to be an effective technique, where the key step is to measure the discrepancy between teacher and student networks with predefined distance functions. However, using the same distance for graphs of various structures may be unsuitable, and the optimal distance formulation is hard to determine. To tackle these problems, we propose a novel Adversarial Knowledge Distillation framework for graph models named GraphAKD, which adversarially trains a discriminator and a generator to adaptively detect and decrease the discrepancy. Specifically, noticing that the well-captured inter-node and inter-class correlations favor the success of deep GNNs, we propose to criticize the inherited knowledge from node-level and class-level views with a trainable discriminator. The discriminator distinguishes between teacher knowledge and what the student inherits, while the student GNN works as a generator and aims to fool the discriminator. Experiments on node-level and graph-level classification benchmarks demonstrate that GraphAKD improves the student performance by a large margin. The results imply that GraphAKD can precisely transfer knowledge from a complicated teacher GNN to a compact student GNN.
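To make the adversarial distillation idea in the abstract concrete, below is a minimal PyTorch sketch of one training step. It is an illustration under assumptions, not the authors' GraphAKD implementation: the `SimpleGCN` teacher/student, the MLP `Discriminator`, the loss weight, and the `distill_step` helper are all hypothetical placeholders, and GraphAKD's separate node-level and class-level critics are collapsed here into a single discriminator over node logits.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder GNN: dense GCN-style layers (A_hat @ X @ W). GraphAKD pairs a
# deep teacher GNN with a compact student GNN; both are stand-ins here.
class SimpleGCN(nn.Module):
    def __init__(self, in_dim, hid_dim, n_classes, n_layers):
        super().__init__()
        dims = [in_dim] + [hid_dim] * (n_layers - 1) + [n_classes]
        self.layers = nn.ModuleList(
            [nn.Linear(d_in, d_out) for d_in, d_out in zip(dims[:-1], dims[1:])])

    def forward(self, x, adj_hat):            # adj_hat: normalized adjacency (N x N)
        for i, lin in enumerate(self.layers):
            x = adj_hat @ lin(x)               # neighborhood aggregation
            if i < len(self.layers) - 1:
                x = F.relu(x)
        return x                               # node-level logits (N x C)

# Discriminator: scores whether node logits come from the teacher or the student.
class Discriminator(nn.Module):
    def __init__(self, n_classes, hid_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_classes, hid_dim), nn.LeakyReLU(0.2),
            nn.Linear(hid_dim, 1))

    def forward(self, logits):
        return self.net(logits)                # one real/fake score per node

def distill_step(teacher, student, disc, opt_s, opt_d, x, adj_hat, labels):
    """One adversarial distillation step (loss weighting is an assumption)."""
    with torch.no_grad():
        t_logits = teacher(x, adj_hat)         # frozen teacher knowledge

    # 1) Update discriminator: teacher logits -> real, student logits -> fake.
    s_logits = student(x, adj_hat).detach()
    real = torch.ones(x.size(0), 1)
    fake = torch.zeros(x.size(0), 1)
    d_loss = (F.binary_cross_entropy_with_logits(disc(t_logits), real)
              + F.binary_cross_entropy_with_logits(disc(s_logits), fake))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Update student (generator): fool the discriminator and fit the labels.
    s_logits = student(x, adj_hat)
    adv_loss = F.binary_cross_entropy_with_logits(disc(s_logits), real)
    ce_loss = F.cross_entropy(s_logits, labels)
    s_loss = ce_loss + 0.5 * adv_loss          # 0.5 is an assumed trade-off weight
    opt_s.zero_grad(); s_loss.backward(); opt_s.step()
    return d_loss.item(), s_loss.item()
```

In the paper's full framework the student is criticized from both node-level and class-level views; the sketch keeps a single node-level critic and a supervised cross-entropy term only to show how the discriminator and generator updates alternate.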
Pages: 534-544
Number of pages: 11
Related Papers
50 in total
  • [31] Wang, Yong; Yang, Shuqun. A Lightweight Method for Graph Neural Networks Based on Knowledge Distillation and Graph Contrastive Learning. Applied Sciences-Basel, 2024, 14(11).
  • [32] Yan, Zhicong; Li, Shenghong; Zhao, Ruijie; Tian, Yuan; Zhao, Yuanyuan. DHBE: Data-free Holistic Backdoor Erasing in Deep Neural Networks via Restricted Adversarial Distillation. Proceedings of the 2023 ACM Asia Conference on Computer and Communications Security (ASIA CCS 2023), 2023: 731-745.
  • [33] Tian, Hu; Ye, Bowei; Zheng, Xiaolong; Wu, Desheng Dash. Robust Graph Neural Networks Against Adversarial Attacks via Jointly Adversarial Training. IFAC PapersOnLine, 2020, 53(05): 420-425.
  • [34] Yan, Chao-Bo; Li, Fang-Qi; Wang, Shi-Lin. Data-Free Watermark for Deep Neural Networks by Truncated Adversarial Distillation. 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2024), 2024: 4480-4484.
  • [35] Yang, Cheng; Liu, Jiawei; Shi, Chuan. Extract the Knowledge of Graph Neural Networks and Go Beyond It: An Effective Knowledge Distillation Framework. Proceedings of the World Wide Web Conference 2021 (WWW 2021), 2021: 1227-1237.
  • [36] Wang, Xiaojie; Zhang, Rui; Sun, Yu; Qi, Jianzhong. KDGAN: Knowledge Distillation with Generative Adversarial Networks. Advances in Neural Information Processing Systems 31 (NIPS 2018), 2018.
  • [37] Wang, Wei; Zhang, Baohua; Cui, Tao; Chai, Yimeng; Li, Yue. Research on Knowledge Distillation of Generative Adversarial Networks. 2021 Data Compression Conference (DCC 2021), 2021: 376.
  • [38] Zhang, Xu. Application of Knowledge Distillation in Generative Adversarial Networks. 2023 3rd Asia-Pacific Conference on Communications Technology and Computer Science (ACCTCS), 2023: 65-71.
  • [39] Wang, Shuai; Liu, Chunwu. Automatic Modulation Classification with Neural Networks via Knowledge Distillation. Electronics, 2022, 11(19).
  • [40] Yao, Yu; Cao, Chao; Haas, Stephan; Agarwal, Mahak; Khanna, Divyam; Abram, Marcin. Emulating Quantum Dynamics with Neural Networks via Knowledge Distillation. Frontiers in Materials, 2023, 9.