Iterative Graph Self-Distillation

Cited by: 2
Authors
Zhang, Hanlin [1 ]
Lin, Shuai [2 ]
Liu, Weiyang [3 ]
Zhou, Pan [4 ]
Tang, Jian [5 ]
Liang, Xiaodan [2 ]
Xing, Eric P. [6 ]
Affiliations
[1] Carnegie Mellon Univ, Machine Learning Dept, Pittsburgh, PA 15213 USA
[2] Sun Yat Sen Univ, Sch Intelligent Syst Engn, Guangzhou 510275, Guangdong, Peoples R China
[3] Univ Cambridge, Dept Comp Sci, Cambridge CB2 1TN, England
[4] SEA Grp Ltd, SEA AI Lab, Singapore 138680, Singapore
[5] HEC Montreal, Montreal, PQ H3T 2A7, Canada
[6] Carnegie Mellon Univ, Dept Comp Sci, Pittsburgh, PA 15213 USA
Keywords
Task analysis; Representation learning; Kernel; Graph neural networks; Iterative methods; Data augmentation; Training; graph representation learning; self-supervised learning;
DOI
10.1109/TKDE.2023.3303885
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recently, there has been increasing interest in the challenge of how to discriminatively vectorize graphs. To address this, we propose Iterative Graph Self-Distillation (IGSD), a method that learns graph-level representations in an unsupervised manner through instance discrimination with a self-supervised contrastive learning approach. IGSD involves a teacher-student distillation process that uses graph diffusion augmentations and constructs the teacher model as an exponential moving average of the student model. The intuition behind IGSD is to predict the teacher network's representations of graph pairs under different augmented views. As a natural extension, we also apply IGSD to semi-supervised scenarios by jointly regularizing the network with both supervised and self-supervised contrastive losses. Finally, we show that fine-tuning IGSD-trained models with self-training can further improve graph representation learning. Empirically, we achieve significant and consistent performance gains on various graph datasets in both unsupervised and semi-supervised settings, which validates the superiority of IGSD.
Pages: 1161-1169
Number of pages: 9
Related papers
50 items in total
  • [21] VT-Grapher: Video Tube Graph Network With Self-Distillation for Human Action Recognition
    Liu, Xiaoxi
    Liu, Ju
    Cheng, Xuejun
    Li, Jing
    Wan, Wenbo
    Sun, Jiande
    IEEE SENSORS JOURNAL, 2024, 24 (09) : 14855 - 14868
  • [22] Understanding Self-Distillation in the Presence of Label Noise
    Das, Rudrajit
    Sanghavi, Sujay
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 202, 2023
  • [23] Graph masked self-distillation learning for prediction of mutation impact on protein-protein interactions
    Zhang, Yuan
    Dong, Mingyuan
    Deng, Junsheng
    Wu, Jiafeng
    Zhao, Qiuye
    Gao, Xieping
    Xiong, Dapeng
    COMMUNICATIONS BIOLOGY, 2024, 7 (01)
  • [24] DGSD: Dynamical graph self-distillation for EEG-based auditory spatial attention detection
    Fan, Cunhang
    Zhang, Hongyu
    Huang, Wei
    Xue, Jun
    Tao, Jianhua
    Yi, Jiangyan
    Lv, Zhao
    Wu, Xiaopei
    NEURAL NETWORKS, 2024, 179
  • [25] Self-supervised heterogeneous graph learning with iterative similarity distillation
    Wang, Tianfeng
    Pan, Zhisong
    Hu, Guyu
    Xu, Kun
    Zhang, Yao
    KNOWLEDGE-BASED SYSTEMS, 2023, 276
  • [26] Deep Contrastive Representation Learning With Self-Distillation
    Xiao, Zhiwen
    Xing, Huanlai
    Zhao, Bowen
    Qu, Rong
    Luo, Shouxi
    Dai, Penglin
    Li, Ke
    Zhu, Zonghai
    IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE, 2024, 8 (01): : 3 - 15
  • [27] Self-Distillation Amplifies Regularization in Hilbert Space
    Mobahi, Hossein
    Farajtabar, Mehrdad
    Bartlett, Peter L.
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020
  • [28] Self-distillation and self-supervision for partial label learning
    Yu, Xiaotong
    Sun, Shiding
    Tian, Yingjie
    PATTERN RECOGNITION, 2024, 146
  • [29] Enhancing Tiny Tissues Segmentation via Self-Distillation
    Zhou, Chuan
    Chen, Yuchu
    Fan, Minghao
    Wen, Yang
    Chen, Hang
    Chen, Leiting
    2020 IEEE INTERNATIONAL CONFERENCE ON BIOINFORMATICS AND BIOMEDICINE, 2020, : 934 - 940
  • [30] Toward Generalized Multistage Clustering: Multiview Self-Distillation
    Wang, Jiatai
    Xu, Zhiwei
    Wang, Xin
    Li, Tao
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024