Iterative Graph Self-Distillation

Cited by: 2
Authors
Zhang, Hanlin [1 ]
Lin, Shuai [2 ]
Liu, Weiyang [3 ]
Zhou, Pan [4 ]
Tang, Jian [5 ]
Liang, Xiaodan [2 ]
Xing, Eric P. [6 ]
Affiliations
[1] Carnegie Mellon Univ, Machine Learning Dept, Pittsburgh, PA 15213 USA
[2] Sun Yat Sen Univ, Sch Intelligent Syst Engn, Guangzhou 510275, Guangdong, Peoples R China
[3] Univ Cambridge, Dept Comp Sci, Cambridge CB2 1TN, England
[4] SEA Grp Ltd, SEA AI Lab, Singapore 138680, Singapore
[5] HEC Montreal, Montreal, PQ H3T 2A7, Canada
[6] Carnegie Mellon Univ, Dept Comp Sci, Pittsburgh, PA 15213 USA
Keywords
Task analysis; Representation learning; Kernel; Graph neural networks; Iterative methods; Data augmentation; Training; graph representation learning; self-supervised learning;
D O I
10.1109/TKDE.2023.3303885
CLC Classification
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Recently, there has been increasing interest in the challenge of discriminatively vectorizing graphs. To address this, we propose Iterative Graph Self-Distillation (IGSD), a method that learns graph-level representations in an unsupervised manner through instance discrimination with a self-supervised contrastive learning approach. IGSD involves a teacher-student distillation process that uses graph diffusion augmentations and constructs the teacher model as an exponential moving average of the student model. The intuition behind IGSD is to predict the teacher network's representations of graph pairs under different augmented views. As a natural extension, we also apply IGSD to semi-supervised scenarios by jointly regularizing the network with both supervised and self-supervised contrastive losses. Finally, we show that fine-tuning IGSD-trained models with self-training can further improve graph representation learning. Empirically, we achieve significant and consistent performance gains on various graph datasets in both unsupervised and semi-supervised settings, validating the effectiveness of IGSD.
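The teacher-student mechanics described in the abstract can be sketched in a few lines. The snippet below is an illustrative assumption, not the paper's implementation: `ema_update` shows how an exponential-moving-average teacher tracks the student, and `consistency_loss` is a BYOL-style stand-in for predicting the teacher's representation of an augmented view (the function names and the momentum value 0.9 are hypothetical).

```python
import math

def ema_update(teacher_params, student_params, momentum=0.9):
    """Move each teacher parameter toward the student's via an
    exponential moving average, as the abstract describes."""
    return [momentum * t + (1.0 - momentum) * s
            for t, s in zip(teacher_params, student_params)]

def consistency_loss(student_vec, teacher_vec):
    """Squared L2 distance between l2-normalized representations --
    a BYOL-style prediction objective used here as a stand-in for
    the paper's contrastive loss."""
    def normalize(v):
        n = math.sqrt(sum(x * x for x in v))
        return [x / n for x in v]
    s, t = normalize(student_vec), normalize(teacher_vec)
    return sum((a - b) ** 2 for a, b in zip(s, t))

# Toy example: scalars standing in for network weights.
teacher = ema_update([0.0, 1.0], [1.0, 0.0], momentum=0.9)
# teacher is now [0.1, 0.9]

# Identical (normalized) views incur zero loss.
loss = consistency_loss([3.0, 4.0], [6.0, 8.0])
# loss is 0.0
```

In practice the student is trained by gradient descent on the loss over augmented graph views, while the teacher is updated only through `ema_update`, so gradients never flow into the teacher.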
Pages: 1161-1169
Page count: 9
Related Papers
50 results in total
  • [41] Self-Distillation for Few-Shot Image Captioning
    Chen, Xianyu
    Jiang, Ming
    Zhao, Qi
    2021 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2021), 2021, : 545 - 555
  • [42] SILC: Improving Vision Language Pretraining with Self-distillation
    Naeem, Muhammad Ferjad
    Xian, Yongqin
    Zhai, Xiaohua
    Hoyer, Lukas
    Van Gool, Luc
    Tombari, Federico
    COMPUTER VISION - ECCV 2024, PT XXI, 2025, 15079 : 38 - 55
  • [43] Few-shot Learning with Online Self-Distillation
    Liu, Sihan
    Wang, Yue
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW 2021), 2021, : 1067 - 1070
  • [44] Efficient Semantic Segmentation via Self-Attention and Self-Distillation
    An, Shumin
    Liao, Qingmin
    Lu, Zongqing
    Xue, Jing-Hao
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2022, 23 (09) : 15256 - 15266
  • [45] Improving Differentiable Architecture Search via self-distillation
    Zhu, Xunyu
    Li, Jian
    Liu, Yong
    Wang, Weiping
    NEURAL NETWORKS, 2023, 167 : 656 - 667
  • [46] Balanced self-distillation for long-tailed recognition
    Ren, Ning
    Li, Xiaosong
    Wu, Yanxia
    Fu, Yan
    KNOWLEDGE-BASED SYSTEMS, 2024, 290
  • [47] Monocular Depth Estimation via Self-Supervised Self-Distillation
    Hu, Haifeng
    Feng, Yuyang
    Li, Dapeng
    Zhang, Suofei
    Zhao, Haitao
    SENSORS, 2024, 24 (13)
  • [48] Self-supervised Anomaly Detection by Self-distillation and Negative Sampling
    Rafiee, Nima
    Gholamipoor, Rahil
    Adaloglou, Nikolas
    Jaxy, Simon
    Ramakers, Julius
    Kollmann, Markus
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2022, PT IV, 2022, 13532 : 459 - 470
  • [49] Transferable adversarial masked self-distillation for unsupervised domain adaptation
    Xia, Yuelong
    Yun, Li-Jun
    Yang, Chengfu
    COMPLEX & INTELLIGENT SYSTEMS, 2023, 9 (06) : 6567 - 6580
  • [50] Spatial Self-Distillation for Object Detection with Inaccurate Bounding Boxes
    Wu, Di
    Chen, Pengfei
    Yu, Xuehui
    Li, Guorong
    Han, Zhenjun
    Jiao, Jianbin
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION, ICCV, 2023, : 6832 - 6842