Enhanced Scalable Graph Neural Network via Knowledge Distillation

Cited by: 0
Authors
Mai, Chengyuan [1 ,2 ]
Chang, Yaomin [1 ,2 ]
Chen, Chuan [1 ,2 ]
Zheng, Zibin [2 ,3 ]
Affiliations
[1] Sun Yat Sen Univ, Sch Comp Sci & Engn, Guangzhou 510006, Peoples R China
[2] Sun Yat Sen Univ, Natl Engn Res Ctr Digital Life, Guangzhou 510006, Peoples R China
[3] Sun Yat Sen Univ, Sch Software Engn, Zhuhai 519000, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Graph neural networks; Scalability; Computational modeling; Convolution; Training; Spectral analysis; Data mining; Graph neural network (GNN); knowledge distillation (KD); network embedding; scalability;
DOI
Not available
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Graph neural networks (GNNs) have achieved state-of-the-art performance in various graph representation learning scenarios. However, when applied to real-world graph data, GNNs encounter scalability issues: they often impose a high computational load in both the training and inference stages, which makes them unable to meet the performance needs of large-scale scenarios with many nodes. Although several studies on scalable GNNs have been conducted, they either improve GNNs with only limited scalability gains or do so at the expense of reduced effectiveness. Inspired by knowledge distillation's (KD's) success in preserving performance while improving scalability in computer vision and natural language processing, we propose an enhanced scalable GNN via KD (KD-SGNN) to improve both the scalability and the effectiveness of GNNs. On the one hand, KD-SGNN adopts the idea of decoupled GNNs: it decouples feature transformation from feature propagation and moves propagation into a preprocessing step, which improves scalability. On the other hand, KD-SGNN introduces two KD mechanisms, soft-target (ST) distillation and shallow imitation (SI) distillation, to improve expressiveness. The scalability and effectiveness of KD-SGNN are evaluated on multiple real datasets, and the effectiveness of the proposed KD mechanisms is further verified through comprehensive analyses.
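To make the decoupling idea above concrete, here is a minimal sketch of the SGC/SIGN-style recipe that decoupled GNNs build on: because feature propagation carries no learnable parameters, the k-hop smoothed features can be precomputed once offline, after which any MLP can be trained on them without touching the graph again. All names below are illustrative assumptions, not the paper's actual code.

```python
# Minimal sketch of decoupled-GNN preprocessing (SGC/SIGN-style), assuming a
# symmetrically normalized adjacency with self-loops; not the paper's code.
import numpy as np
import scipy.sparse as sp

def precompute_propagation(adj: sp.spmatrix, features: np.ndarray, k: int = 2) -> np.ndarray:
    """Compute X_k = (D^-1/2 (A + I) D^-1/2)^k X once, offline."""
    n = adj.shape[0]
    a_hat = adj + sp.eye(n)                      # add self-loops, so degrees >= 1
    deg = np.asarray(a_hat.sum(axis=1)).ravel()
    d_inv_sqrt = sp.diags(np.power(deg, -0.5))
    p = d_inv_sqrt @ a_hat @ d_inv_sqrt          # normalized propagation operator
    x = features
    for _ in range(k):
        x = p @ x                                # one sparse-dense product per hop
    return x                                     # train a plain MLP on these rows
```

Training then reduces to standard mini-batch learning on the precomputed rows, which is what gives decoupled models their scalability: the graph is only needed during preprocessing.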
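The two KD mechanisms named above (ST and SI distillation) are, by name, variants of two standard ingredients. The sketch below shows generic forms of both: a Hinton-style temperature-softened soft-target loss and a FitNets-style intermediate-feature imitation loss. Whether KD-SGNN's exact objectives match these forms is an assumption, and T, alpha, and proj are illustrative.

```python
# Generic distillation losses in the spirit of ST/SI distillation; assumed
# forms, not the paper's exact objectives.
import torch
import torch.nn.functional as F

def soft_target_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Mix hard-label cross-entropy with KL divergence between temperature-
    softened teacher and student distributions (Hinton-style soft targets)."""
    ce = F.cross_entropy(student_logits, labels)
    kl = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)                                  # T^2 keeps the gradient scale stable
    return alpha * ce + (1.0 - alpha) * kl

def shallow_imitation_loss(student_hidden, teacher_hidden, proj=None):
    """FitNets-style imitation: regress the student's shallow representation
    onto the teacher's; proj is an optional adapter when widths differ."""
    if proj is not None:
        student_hidden = proj(student_hidden)
    return F.mse_loss(student_hidden, teacher_hidden)
```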
Pages: 1258-1271
Page count: 14
Related Papers
(50 in total; the first 10 are listed below)
  • [1] Mai, Chengyuan; Chang, Yaomin; Chen, Chuan; Zheng, Zibin. Enhanced Scalable Graph Neural Network via Knowledge Distillation. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2025, 36(01): 1258-1271.
  • [2] Shen, Tiesunlong; Wang, Jin; Zhang, Xuejie. Knowledge distillation via adaptive meta-learning for graph neural network. INFORMATION SCIENCES, 2025, 689.
  • [3] Zhang, Lei; Liu, Yong; Zeng, Zhiwei; Cao, Yiming; Wu, Xingyu; Xu, Yonghui; Shen, Zhiqi; Cui, Lizhen. Package Arrival Time Prediction via Knowledge Distillation Graph Neural Network. ACM TRANSACTIONS ON KNOWLEDGE DISCOVERY FROM DATA, 2024, 18(05).
  • [4] Shi, Mengnan; Qin, Fei; Ye, Qixiang; Han, Zhenjun; Jiao, Jianbin. A Scalable Convolutional Neural Network for Task-Specified Scenarios via Knowledge Distillation. 2017 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2017: 2467-2471.
  • [5] Kelvinius, Filip Ekstrom; Georgiev, Dimitar; Toshev, Artur Petrov; Gasteiger, Johannes. Accelerating Molecular Graph Neural Networks via Knowledge Distillation. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36, NEURIPS 2023, 2023.
  • [6] Guo, Zhichun; Zhang, Chunhui; Fan, Yujie; Tian, Yijun; Zhang, Chuxu; Chawla, Nitesh V. Boosting Graph Neural Networks via Adaptive Knowledge Distillation. THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37, NO 6, 2023: 7793-7801.
  • [7] Zhou, Xiaokang; Wu, Jiayi; Liang, Wei; Wang, Kevin I-Kai; Yan, Zheng; Yang, Laurence T.; Jin, Qun. Reconstructed Graph Neural Network With Knowledge Distillation for Lightweight Anomaly Detection. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35(09): 11817-11828.
  • [8] He, Huarui; Wang, Jie; Zhang, Zhanqiu; Wu, Feng. Compressing Deep Graph Neural Networks via Adversarial Knowledge Distillation. PROCEEDINGS OF THE 28TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2022, 2022: 534-544.
  • [9] Li, Yuan; Liu, Li; Wang, Guoyin; Du, Yong; Chen, Penggang. EGNN: Constructing explainable graph neural networks via knowledge distillation. KNOWLEDGE-BASED SYSTEMS, 2022, 241.
  • [10] Xie, Bingbing; Ma, Xiaoxiao; Wu, Jia; Yang, Jian; Fan, Hao. Knowledge Graph Enhanced Heterogeneous Graph Neural Network for Fake News Detection. IEEE TRANSACTIONS ON CONSUMER ELECTRONICS, 2024, 70(01): 2826-2837.