Sequence-to-sequence modeling for graph representation learning

Cited by: 4
Authors
Taheri, Aynaz [1 ]
Gimpel, Kevin [2 ]
Berger-Wolf, Tanya [1 ]
Affiliations
[1] Univ Illinois, Chicago, IL 60607 USA
[2] Toyota Technol Inst Chicago, Chicago, IL USA
Keywords
Graph representation learning; Deep learning; Graph classification; Recurrent models
DOI
10.1007/s41109-019-0174-8
Chinese Library Classification
TP301 [Theory and Methods]
Subject Classification Code
081202
Abstract
We propose sequence-to-sequence architectures for graph representation learning in both supervised and unsupervised regimes. Our methods use recurrent neural networks to encode and decode information from graph-structured data. Because recurrent neural networks require sequential input, we use several graph traversal methods, over substructures of varying granularity, to generate sequences of nodes for encoding. Our unsupervised approaches use long short-term memory (LSTM) encoder-decoder models to embed the graph sequences into a continuous vector space. We then represent a graph by aggregating its graph sequence representations. Our supervised architecture uses an attention mechanism to collect information from the neighborhood of a sequence; the attention module allows the model to focus on the subgraphs that are most relevant to the graph classification task. We demonstrate the effectiveness of our approaches through improvements over existing state-of-the-art methods on several graph classification tasks.
Pages: 26
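To make the abstract's unsupervised pipeline concrete, here is a minimal sketch, not the authors' code: it assumes PyTorch, uses uniform random walks as a stand-in for the paper's traversal strategies, and treats every class name and hyperparameter as a placeholder. An LSTM encoder-decoder reconstructs node-label sequences drawn from a graph, and the per-sequence encoder states are mean-pooled into a single graph embedding.

```python
# Hypothetical sketch of the unsupervised idea, NOT the authors' code.
# Uniform random walks stand in for the paper's traversal strategies;
# all names and hyperparameters are placeholders.
import random
import torch
import torch.nn as nn

class GraphSeqAutoencoder(nn.Module):
    def __init__(self, num_labels, emb_dim=32, hid_dim=64):
        super().__init__()
        self.embed = nn.Embedding(num_labels, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.decoder = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, num_labels)

    def forward(self, seqs):                     # seqs: (walks, length) int64
        x = self.embed(seqs)
        _, (h, c) = self.encoder(x)              # h: (1, walks, hid_dim)
        y, _ = self.decoder(x, (h, c))           # decode conditioned on (h, c)
        return self.out(y), h.squeeze(0)         # logits, sequence embeddings

def random_walks(adj, walk_len=8, n_walks=20):
    """Uniform random walks over an adjacency dict {node: [neighbors]}."""
    walks = []
    for _ in range(n_walks):
        v = random.choice(list(adj))
        walk = [v]
        for _ in range(walk_len - 1):
            v = random.choice(adj[v])
            walk.append(v)
        walks.append(walk)
    return walks

# Toy usage: a 4-cycle whose node ids double as labels.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
model = GraphSeqAutoencoder(num_labels=4)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
seqs = torch.tensor(random_walks(adj))           # (n_walks, walk_len)
for _ in range(100):
    logits, _ = model(seqs)
    # Next-token reconstruction: the state at step t predicts the label at t+1.
    loss = loss_fn(logits[:, :-1].reshape(-1, 4), seqs[:, 1:].reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Graph-level representation: aggregate the per-walk encoder states.
with torch.no_grad():
    _, walk_embs = model(seqs)
    graph_emb = walk_embs.mean(dim=0)            # one vector for the graph
```

For the supervised variant, the abstract describes an attention mechanism that lets the model weight the subgraphs most relevant to the label. The following is a comparably hedged sketch of additive attention pooling over the encoder's per-step states, again a structural placeholder rather than the paper's exact module.

```python
# Hypothetical attention-pooling classifier (uses the torch / nn imports
# from the sketch above); the scoring layer and pooling are assumptions.
class AttentiveGraphClassifier(nn.Module):
    def __init__(self, num_labels, num_classes, emb_dim=32, hid_dim=64):
        super().__init__()
        self.embed = nn.Embedding(num_labels, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.attn = nn.Linear(hid_dim, 1)        # scores each time step
        self.cls = nn.Linear(hid_dim, num_classes)

    def forward(self, seqs):                     # seqs: (walks, length)
        states, _ = self.encoder(self.embed(seqs))         # (walks, len, hid)
        weights = torch.softmax(self.attn(states), dim=1)  # attention per step
        pooled = (weights * states).sum(dim=1)             # (walks, hid)
        graph_vec = pooled.mean(dim=0)                     # aggregate walks
        return self.cls(graph_vec)                         # class logits
```

In both sketches the aggregation over walks is a plain mean; the paper evaluates several traversal and aggregation choices, so treat these as illustrations of the architecture's shape only.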