Multi-Scale Self-Supervised Graph Contrastive Learning With Injective Node Augmentation

Cited: 4
Authors
Zhang, Haonan [1 ]
Ren, Yuyang [1 ]
Fu, Luoyi [1 ]
Wang, Xinbing [1 ]
Chen, Guihai [1 ]
Zhou, Chenghu [2 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Sch Elect Informat & Elect Engn, Shanghai 200240, Peoples R China
[2] Chinese Acad Sci, Inst Geog Sci & Nat Resources Res, Beijing 100045, Peoples R China
Keywords
Graph contrastive learning; graph representation learning; node augmentation; self-supervised learning;
DOI
10.1109/TKDE.2023.3278463
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Graph Contrastive Learning (GCL) with Graph Neural Networks (GNNs) has emerged as a promising approach for learning latent node representations in a self-supervised manner. Most existing GCL methods employ random sampling to augment graph views and maximize the agreement between node representations across views. However, random augmentation is likely to produce highly similar view samplings, which can leave nodal contextual information incomplete and thus weaken the discriminability of node representations. To this end, this paper proposes a novel trainable scheme from the perspective of node augmentation, which is theoretically proven to be injective and uses the subgraph formed by each node and its neighbors to enhance the distinguishability of the nodal view. Notably, the proposed scheme enriches node representations via multi-scale contrastive training that integrates three levels of training granularity: subgraph-level, graph-level, and node-level contextual information. In particular, a subgraph-level objective between the augmented and original node views is constructed to enhance the discrimination of node representations, while graph- and node-level objectives drawing on global and local information from the original graph improve the generalization ability of the representations. Experimental results on four real-world node-classification datasets demonstrate that the framework outperforms existing state-of-the-art baselines and even surpasses several supervised counterparts.
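The abstract describes combining subgraph-, graph-, and node-level contrastive objectives. The sketch below is a minimal NumPy illustration of how such a multi-scale loss could be assembled, not the authors' implementation: all names (`multi_scale_loss`, `info_nce`, `encode`) and design choices are hypothetical, the subgraph view is a simple 1-hop mean pool rather than the paper's trainable injective augmentation, and the graph-level term is a DGI-style stand-in for the paper's global objective.

```python
import numpy as np

rng = np.random.default_rng(0)

def info_nce(z1, z2, tau=0.5):
    """InfoNCE loss treating matching rows of z1/z2 as positive pairs."""
    z1 = z1 / (np.linalg.norm(z1, axis=1, keepdims=True) + 1e-9)
    z2 = z2 / (np.linalg.norm(z2, axis=1, keepdims=True) + 1e-9)
    sim = z1 @ z2.T / tau                        # (N, N) cosine similarities
    sim -= sim.max(axis=1, keepdims=True)        # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # positives on the diagonal

def encode(x, adj, w):
    """One-layer GCN-style encoder: mean neighborhood aggregation + ReLU."""
    a_hat = adj + np.eye(len(adj))               # add self-loops
    a_hat = a_hat / a_hat.sum(axis=1, keepdims=True)
    return np.maximum(a_hat @ x @ w, 0.0)

def multi_scale_loss(x, x_aug, adj, w, alphas=(1.0, 1.0, 1.0)):
    """Weighted sum of node-, subgraph-, and graph-level contrastive terms."""
    z, z_aug = encode(x, adj, w), encode(x_aug, adj, w)

    # Node level: each node in the original view vs. its augmented twin.
    l_node = info_nce(z, z_aug)

    # Subgraph level: pool each node with its 1-hop neighbors, then
    # contrast the pooled subgraph view against the augmented node view.
    # (A stand-in for the paper's trainable injective node augmentation.)
    a_hat = adj + np.eye(len(adj))
    z_sub = a_hat @ z / a_hat.sum(axis=1, keepdims=True)
    l_sub = info_nce(z_sub, z_aug)

    # Graph level (DGI-style stand-in): real nodes should score high
    # against the global summary, feature-shuffled nodes low.
    s = z.mean(axis=0)
    z_neg = encode(x[rng.permutation(len(x))], adj, w)
    pos = 1.0 / (1.0 + np.exp(-(z @ s)))
    neg = 1.0 / (1.0 + np.exp(-(z_neg @ s)))
    l_graph = -np.mean(np.log(pos + 1e-9)) - np.mean(np.log(1.0 - neg + 1e-9))

    return alphas[0] * l_node + alphas[1] * l_sub + alphas[2] * l_graph
```

In a real training loop, the encoder weights and the scale weights `alphas` would be learned end to end by backpropagation; the point here is only the shape of the objective, a weighted sum of contrastive terms at three granularities.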
Pages: 261 - 274
Page count: 14
Related Papers
50 records
  • [1] Self-supervised contrastive graph representation with node and graph augmentation?
    Duan, Haoran
    Xie, Cheng
    Li, Bin
    Tang, Peng
    NEURAL NETWORKS, 2023, 167 : 223 - 232
  • [2] Multi-Scale Contrastive Siamese Networks for Self-Supervised Graph Representation Learning
    Jin, Ming
    Zheng, Yizhen
    Li, Yuan-Fang
    Gong, Chen
    Zhou, Chuan
    Pan, Shirui
    PROCEEDINGS OF THE THIRTIETH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2021, 2021, : 1477 - 1483
  • [3] A multi-scale self-supervised hypergraph contrastive learning framework for video question answering
    Wang, Zheng
    Wu, Bin
    Ota, Kaoru
    Dong, Mianxiong
    Li, He
    NEURAL NETWORKS, 2023, 168 : 272 - 286
  • [4] Contrastive Self-supervised Learning for Graph Classification
    Zeng, Jiaqi
    Xie, Pengtao
    THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 10824 - 10832
  • [5] CoLM2S: Contrastive self-supervised learning on attributed multiplex graph network with multi-scale information
    Han, Beibei
    Wei, Yingmei
    Wang, Qingyong
    Wan, Shanshan
    CAAI TRANSACTIONS ON INTELLIGENCE TECHNOLOGY, 2023, 8 (04) : 1464 - 1479
  • [6] JGCL: Joint Self-Supervised and Supervised Graph Contrastive Learning
    Akkas, Selahattin
    Azad, Ariful
    COMPANION PROCEEDINGS OF THE WEB CONFERENCE 2022, WWW 2022 COMPANION, 2022, : 1099 - 1105
  • [7] Multi-scale motion contrastive learning for self-supervised skeleton-based action recognition
    Wu, Yushan
    Xu, Zengmin
    Yuan, Mengwei
    Tang, Tianchi
    Meng, Ruxing
    Wang, Zhongyuan
    MULTIMEDIA SYSTEMS, 2024, 30 (05)
  • [8] Self-supervised graph representation learning using multi-scale subgraph views contrast
    Chen, Lei
    Huang, Jin
    Li, Jingjing
    Cao, Yang
    Xiao, Jing
    NEURAL COMPUTING & APPLICATIONS, 2022, 34 (15): : 12559 - 12569
  • [9] Self-supervised Multi-scale Consistency for Weakly Supervised Segmentation Learning
    Valvano, Gabriele
    Leo, Andrea
    Tsaftaris, Sotirios A.
    DOMAIN ADAPTATION AND REPRESENTATION TRANSFER, AND AFFORDABLE HEALTHCARE AND AI FOR RESOURCE DIVERSE GLOBAL HEALTH (DART 2021), 2021, 12968 : 14 - 24