Self-Supervised Temporal Graph Learning With Temporal and Structural Intensity Alignment

Cited by: 31
Authors
Liu, Meng [1]
Liang, Ke [1]
Zhao, Yawei [2]
Tu, Wenxuan [1]
Gan, Xinbiao [1]
Zhou, Sihang [3]
Liu, Xinwang [1]
He, Kunlun [2]
Affiliations
[1] Natl Univ Def Technol, Sch Comp, Changsha 410073, Peoples R China
[2] Chinese Peoples Liberat Army Gen Hosp, Med Big Data Res Ctr, Beijing 100853, Peoples R China
[3] Natl Univ Def Technol, Coll Intelligence Sci & Technol, Changsha 410073, Peoples R China
Keywords
Task analysis; Learning systems; Data mining; Vectors; Feature extraction; Medical diagnostic imaging; Industries; Conditional intensity alignment; self-supervised learning; temporal graph learning; NETWORK
DOI
10.1109/TNNLS.2024.3386168
CLC number
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Temporal graph learning aims to generate high-quality representations for graph-based tasks with dynamic information, and it has recently garnered increasing attention. In contrast to static graphs, temporal graphs are typically organized as node interaction sequences over continuous time rather than as an adjacency matrix. Most temporal graph learning methods model current interactions by incorporating historical neighborhoods. However, such methods only consider first-order temporal information while disregarding crucial high-order structural information, resulting in suboptimal performance. To address this issue, we propose a self-supervised method called S2T for temporal graph learning, which extracts both temporal and structural information to learn more informative node representations. Notably, the initial node representations combine first-order temporal information and high-order structural information in different ways to calculate two conditional intensities. An alignment loss is then introduced to optimize the node representations by narrowing the gap between the two intensities, making them more informative. Concretely, in addition to modeling temporal information using historical neighbor sequences, we further consider structural knowledge at both local and global levels. At the local level, we generate the structural intensity by aggregating features from high-order neighbor sequences. At the global level, a global representation is generated from all nodes to adjust the structural intensity according to the active statuses of different nodes. Extensive experiments demonstrate that the proposed model S2T achieves up to a 10.13% performance improvement over state-of-the-art competitors on several datasets.
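To make the abstract's alignment idea concrete, the following is a minimal PyTorch sketch of aligning a first-order temporal intensity with a globally gated high-order structural intensity. All names, dimensions, the softplus intensity form, and the MSE alignment loss are illustrative assumptions based only on the abstract, not the paper's exact formulation.

```python
# Minimal sketch of temporal/structural conditional intensity alignment.
# Assumptions (not from the paper): bilinear pair scoring, softplus to keep
# intensities positive, sigmoid global gate, and MSE as the alignment loss.
import torch
import torch.nn as nn
import torch.nn.functional as F


class IntensityAlignment(nn.Module):
    def __init__(self, dim: int = 64):
        super().__init__()
        # Scores a node pair (u, v) from first-order temporal features.
        self.temporal_score = nn.Bilinear(dim, dim, 1)
        # Scores the same pair from aggregated high-order structural features.
        self.structural_score = nn.Bilinear(dim, dim, 1)
        # Global gate that rescales the structural intensity by node activeness.
        self.global_gate = nn.Linear(dim, 1)

    def forward(self, h_temp_u, h_temp_v, h_struct_u, h_struct_v, h_global):
        # Temporal conditional intensity from first-order (historical) features.
        lam_t = F.softplus(self.temporal_score(h_temp_u, h_temp_v))
        # Structural conditional intensity, adjusted by a global representation.
        gate = torch.sigmoid(self.global_gate(h_global))
        lam_s = gate * F.softplus(self.structural_score(h_struct_u, h_struct_v))
        # Alignment loss narrows the gap between the two intensities.
        align_loss = F.mse_loss(lam_t, lam_s)
        return lam_t, lam_s, align_loss


if __name__ == "__main__":
    dim, batch = 64, 8
    model = IntensityAlignment(dim)
    feats = [torch.randn(batch, dim) for _ in range(4)]
    h_global = torch.randn(batch, dim)
    lam_t, lam_s, loss = model(*feats, h_global)
    print(lam_t.shape, lam_s.shape, loss.item())
```

In this reading, the alignment term would be added to the main task loss so that the temporal and structural views regularize each other; the actual S2T objective may differ in the intensity parameterization and loss form.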
Pages: 1-13
Number of pages: 13