Self-supervised Graph-level Representation Learning with Adversarial Contrastive Learning

Cited by: 41
Authors
Luo, Xiao [1 ]
Ju, Wei [2 ]
Gu, Yiyang [2 ]
Mao, Zhengyang [2 ]
Liu, Luchen [2 ]
Yuan, Yuhui [3 ]
Zhang, Ming [2 ]
Affiliations
[1] Univ Calif Los Angeles, Dept Comp Sci, Los Angeles, CA 90095 USA
[2] Peking Univ, Sch Comp Sci, Beijing 100871, Peoples R China
[3] Microsoft Res Asia, Beijing, Peoples R China
Funding
China Postdoctoral Science Foundation; National Natural Science Foundation of China;
Keywords
Graph representation learning; graph neural networks; contrastive learning; self-supervised learning; PREDICTION;
DOI
10.1145/3624018
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812 ;
Abstract
Recently developed unsupervised graph representation learning approaches apply contrastive learning to graph-structured data and achieve promising performance. However, these methods mainly focus on graph augmentation for positive samples, while negative mining strategies for graph contrastive learning remain underexplored, leading to sub-optimal performance. To tackle this issue, we propose a Graph Adversarial Contrastive Learning (GraphACL) scheme that learns a bank of negative samples for effective self-supervised whole-graph representation learning. GraphACL consists of (i) a graph encoding branch that generates the representations of positive samples and (ii) an adversarial generation branch that produces a bank of negative samples. To generate more powerful hard negative samples, our method minimizes the contrastive loss when updating the encoder while adversarially maximizing the contrastive loss over the negative samples, which provides a challenging contrastive task. Moreover, the quality of the representations produced by the adversarial generation branch is enhanced through the regularization of carefully designed bank divergence and bank orthogonality losses. We optimize the parameters of the graph encoding branch and the adversarial generation branch alternately. Extensive experiments on 14 real-world benchmarks covering both graph classification and transfer learning tasks demonstrate the effectiveness of the proposed approach over existing graph self-supervised representation learning methods.
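The alternating min-max objective described in the abstract can be illustrated with a toy sketch: a linear "encoder" is updated by gradient descent on an InfoNCE-style contrastive loss, while a learnable negative bank is updated by gradient ascent on the same loss to make the negatives harder. Everything here (the linear encoder, `info_nce`, the finite-difference gradients, the step sizes) is an illustrative assumption, not the authors' implementation, and the bank divergence/orthogonality regularizers are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, tau = 4, 8, 0.5

x = rng.normal(size=d)                # anchor graph feature
x_pos = x + 0.1 * rng.normal(size=d)  # augmented positive view
W = 0.1 * rng.normal(size=(d, d))     # toy linear "encoder" (stand-in for a GNN)
bank = rng.normal(size=(k, d))        # learnable bank of negative samples

def info_nce(W, bank):
    """InfoNCE-style loss: pull the positive pair together, push bank negatives away."""
    z, z_pos = W @ x, W @ x_pos
    sims = np.concatenate(([z @ z_pos], bank @ z)) / tau
    sims -= sims.max()                # subtract max for numerical stability
    return -np.log(np.exp(sims[0]) / np.exp(sims).sum())

def num_grad(f, p, eps=1e-5):
    """Central finite-difference gradient of f() w.r.t. array p (perturbed in place)."""
    g = np.zeros_like(p)
    it = np.nditer(p, flags=["multi_index"])
    for _ in it:
        i = it.multi_index
        p[i] += eps; hi = f()
        p[i] -= 2 * eps; lo = f()
        p[i] += eps                   # restore original value
        g[i] = (hi - lo) / (2 * eps)
    return g

loss0 = info_nce(W, bank)

# Adversarial step: gradient *ascent* on the negative bank (maximize the loss,
# i.e. make the negatives harder for the encoder).
bank += 0.1 * num_grad(lambda: info_nce(W, bank), bank)
loss1 = info_nce(W, bank)             # loss rises: the contrastive task got harder

# Encoder step: gradient *descent* on W against the hardened negatives.
W -= 0.01 * num_grad(lambda: info_nce(W, bank), W)
loss2 = info_nce(W, bank)             # loss falls: the encoder adapts
```

In the paper's actual scheme this alternation runs over a GNN encoder and a regularized bank rather than the toy linear map above, but the sign flip between the two updates is the core of the adversarial contrastive idea.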
Pages: 23
Related Papers
50 records in total
  • [41] Self-supervised contrastive graph representation with node and graph augmentation
    Duan, Haoran
    Xie, Cheng
    Li, Bin
    Tang, Peng
    NEURAL NETWORKS, 2023, 167 : 223 - 232
  • [42] Boost Supervised Pretraining for Visual Transfer Learning: Implications of Self-Supervised Contrastive Representation Learning
    Sun, Jinghan
    Wei, Dong
    Ma, Kai
    Wang, Liansheng
    Zheng, Yefeng
    THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / THE TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 2307 - 2315
  • [43] Augmentation Adversarial Training for Self-Supervised Speaker Representation Learning
    Kang, Jingu
    Huh, Jaesung
    Heo, Hee Soo
    Chung, Joon Son
    IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, 2022, 16 (06) : 1253 - 1262
  • [44] A Survey on Contrastive Self-Supervised Learning
    Jaiswal, Ashish
    Babu, Ashwin Ramesh
    Zadeh, Mohammad Zaki
    Banerjee, Debapriya
    Makedon, Fillia
    TECHNOLOGIES, 2021, 9 (01)
  • [45] Self-Supervised Learning: Generative or Contrastive
    Liu, Xiao
    Zhang, Fanjin
    Hou, Zhenyu
    Mian, Li
    Wang, Zhaoyu
    Zhang, Jing
    Tang, Jie
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2023, 35 (01) : 857 - 876
  • [46] Graph explicit pooling for graph-level representation learning
    Liu, Chuang
    Yu, Wenhang
    Gao, Kuang
    Ma, Xueqi
    Zhan, Yibing
    Wu, Jia
    Hu, Wenbin
    Du, Bo
    NEURAL NETWORKS, 2025, 181
  • [47] Graph pooling for graph-level representation learning: a survey
    Li, Zhi-Peng
    Wang, Si-Guo
    Zhang, Qin-Hu
    Pan, Yi-Jie
    Xiao, Nai-An
    Guo, Jia-Yang
    Yuan, Chang-An
    Liu, Wen-Jian
    Huang, De-Shuang
    2025, 58 (02)
  • [48] Federated Graph Anomaly Detection via Contrastive Self-Supervised Learning
    Kong, Xiangjie
    Zhang, Wenyi
    Wang, Hui
    Hou, Mingliang
    Chen, Xin
    Yan, Xiaoran
    Das, Sajal K.
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, : 1 - 14
  • [49] Negative sampling strategies for contrastive self-supervised learning of graph representations
    Hafidi, Hakim
    Ghogho, Mounir
    Ciblat, Philippe
    Swami, Ananthram
    SIGNAL PROCESSING, 2022, 190
  • [50] Stereo Depth Estimation via Self-supervised Contrastive Representation Learning
    Tukra, Samyakh
    Giannarou, Stamatia
    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION, MICCAI 2022, PT VII, 2022, 13437 : 604 - 614