Adaptively Denoising Graph Neural Networks for Knowledge Distillation

Cited: 0
Authors
Guo, Yuxin [1 ]
Yang, Cheng [1 ]
Shi, Chuan [1 ]
Tu, Ke [2 ]
Wu, Zhengwei [2 ]
Zhang, Zhiqiang [2 ]
Zhou, Jun [2 ]
Affiliations
[1] Beijing Univ Posts & Telecommun, Beijing, Peoples R China
[2] Ant Financial, Hangzhou, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Graph Neural Networks; Knowledge Distillation;
DOI
10.1007/978-3-031-70371-3_15
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
Graph Neural Networks (GNNs) have excelled in various graph-based applications. Recently, knowledge distillation (KD) has provided a new approach to further boost GNN performance. However, during the KD process, the GNN student may encounter noise while learning from both the GNN teacher and the input graph. GNN teachers may carry noise, since deep models inevitably fit noise during training, which leads to error propagation in the GNN student. In addition, noisy structures in the input graph may disrupt information flow during message passing in GNNs. Hence, we propose DKDG to adaptively remove noise in the GNN teacher and the graph structure for better distillation. DKDG comprises two modules: (1) a teacher knowledge denoising module, which separates the GNN teacher's knowledge into noise knowledge and label knowledge, and removes the GNN student's parameters that fit the noise knowledge; and (2) a graph structure denoising module, designed to enhance the discrimination of node representations. Specifically, we propose a discrimination-preserving objective based on the total variation loss and update the edge weights between adjacent nodes to minimize this objective. The two modules are integrated through the GNN's forward propagation and trained iteratively. Experiments on five benchmark datasets and three GNNs demonstrate that the GNN student distilled by DKDG achieves a 1.86% relative improvement over the best baseline among recent state-of-the-art GNN-based KD methods.
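The structure-denoising idea in the abstract, minimizing a total-variation (TV) style objective with respect to learnable edge weights, can be illustrated with a minimal sketch. This is a hypothetical PyTorch toy, not the paper's implementation: the function tv_loss, the sigmoid parametrization of the weights, and the -0.5 * w.sum() regularizer are illustrative assumptions standing in for the paper's actual discrimination-preserving objective.

```python
# Hypothetical sketch (not the authors' code): a weighted total-variation
# objective over edges, minimized w.r.t. learnable edge weights so that
# edges joining dissimilar nodes (suspected noisy structure) are down-weighted.
import torch

def tv_loss(h, edge_index, edge_weight):
    """Weighted total variation of node representations over the edges.

    h:           [num_nodes, dim] node representations
    edge_index:  [2, num_edges]   (source, target) node index pairs
    edge_weight: [num_edges]      learnable edge weights in (0, 1)
    """
    src, dst = edge_index
    diff = (h[src] - h[dst]).pow(2).sum(dim=-1)  # ||h_i - h_j||^2 per edge
    return (edge_weight * diff).sum()

# Toy graph: 4 nodes with random 8-dim representations, 3 edges.
torch.manual_seed(0)
h = torch.randn(4, 8)
h[1] = h[0] + 0.1 * torch.randn(8)  # nodes 0 and 1 are near-duplicates
edge_index = torch.tensor([[0, 1, 2],
                           [1, 2, 3]])

# Parametrize edge weights through a sigmoid so they stay in (0, 1).
theta = torch.zeros(3, requires_grad=True)
opt = torch.optim.Adam([theta], lr=0.1)

for step in range(100):
    opt.zero_grad()
    w = torch.sigmoid(theta)
    # The -0.5 * w.sum() term is an assumed stand-in for the paper's
    # discrimination-preserving part: it rewards keeping edges whose
    # endpoint representations remain similar.
    loss = tv_loss(h, edge_index, w) - 0.5 * w.sum()
    loss.backward()
    opt.step()

print("learned edge weights:", torch.sigmoid(theta).detach())
```

On this toy graph, the edge between the two near-duplicate nodes keeps a weight near 1, while the edges joining dissimilar nodes are driven toward 0, which matches the intended denoising behaviour: edges suspected of carrying structural noise contribute less during message passing.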
Pages: 253-269
Page count: 17