Sequence-to-sequence modeling for graph representation learning

Cited by: 4
Authors
Taheri, Aynaz [1 ]
Gimpel, Kevin [2 ]
Berger-Wolf, Tanya [1 ]
Affiliations
[1] Univ Illinois, Chicago, IL 60607 USA
[2] Toyota Technol Inst Chicago, Chicago, IL USA
Keywords
Graph representation learning; Deep learning; Graph classification; Recurrent models
DOI
10.1007/s41109-019-0174-8
CLC number
TP301 [Theory and Methods]
Subject classification code
081202
Abstract
We propose sequence-to-sequence architectures for graph representation learning in both supervised and unsupervised regimes. Our methods use recurrent neural networks to encode and decode information from graph-structured data. Recurrent neural networks require sequences, so we choose several methods of traversing graphs using different types of substructures with various levels of granularity to generate sequences of nodes for encoding. Our unsupervised approaches leverage long short-term memory (LSTM) encoder-decoder models to embed the graph sequences into a continuous vector space. We then represent a graph by aggregating its graph sequence representations. Our supervised architecture uses an attention mechanism to collect information from the neighborhood of a sequence. The attention module enriches our model in order to focus on the subgraphs that are crucial for the purpose of a graph classification task. We demonstrate the effectiveness of our approaches by showing improvements over the existing state-of-the-art approaches on several graph classification tasks.
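The pipeline the abstract describes starts by traversing a graph to produce node sequences, which are then fed to an LSTM encoder-decoder. The following is a minimal, hypothetical sketch of that first step only, using random walks as the traversal for illustration; the paper actually explores several substructure types at different granularities, and the function name and parameters here are illustrative, not taken from the paper.

```python
import random

def random_walk_sequences(adj, walks_per_node=2, walk_len=5, seed=0):
    """Generate node sequences from a graph via random walks.

    adj: dict mapping each node to a list of its neighbors.
    Returns a list of node sequences; in the paper's setting, such
    sequences would be encoded by an LSTM encoder-decoder and the
    resulting sequence embeddings aggregated into a graph embedding.
    (Illustrative sketch only; not the authors' exact traversal.)
    """
    rng = random.Random(seed)
    sequences = []
    for start in sorted(adj):
        for _ in range(walks_per_node):
            walk = [start]
            while len(walk) < walk_len:
                neighbors = adj[walk[-1]]
                if not neighbors:
                    break  # dead end: stop the walk early
                walk.append(rng.choice(neighbors))
            sequences.append(walk)
    return sequences

# Toy 4-node path graph: 0 - 1 - 2 - 3
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
seqs = random_walk_sequences(adj)
```

Each sequence is a valid path through the graph, so every consecutive pair of nodes in a walk is an edge; the LSTM encoder therefore sees node orderings that respect graph structure.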
Pages: 26