Sequence-to-sequence modeling for graph representation learning

Cited by: 4
Authors
Taheri, Aynaz [1]
Gimpel, Kevin [2]
Berger-Wolf, Tanya [1]
Institutions
[1] University of Illinois at Chicago, Chicago, IL 60607, USA
[2] Toyota Technological Institute at Chicago, Chicago, IL, USA
Keywords
Graph representation learning; Deep learning; Graph classification; Recurrent models;
DOI
10.1007/s41109-019-0174-8
Chinese Library Classification (CLC)
TP301 [Theory, Methods];
Subject Classification Code
081202;
Abstract
We propose sequence-to-sequence architectures for graph representation learning in both supervised and unsupervised regimes. Our methods use recurrent neural networks to encode and decode information from graph-structured data. Because recurrent neural networks require sequential input, we generate node sequences by traversing graphs with several types of substructures at varying levels of granularity. Our unsupervised approaches use long short-term memory (LSTM) encoder-decoder models to embed these graph sequences into a continuous vector space; we then represent a graph by aggregating its sequence representations. Our supervised architecture uses an attention mechanism to collect information from the neighborhood of a sequence, allowing the model to focus on the subgraphs most relevant to the graph classification task. We demonstrate the effectiveness of our approaches with improvements over existing state-of-the-art methods on several graph classification tasks.
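As a reading aid, here is a minimal sketch of the unsupervised pipeline the abstract describes (traverse the graph into node sequences, embed each sequence with an LSTM encoder-decoder, aggregate the sequence embeddings into a graph vector), assuming PyTorch. The traversal choice (plain truncated random walks), the layer sizes, and all names here (random_walks, GraphSeqAutoencoder, embed_graph) are illustrative assumptions, not the authors' exact architecture, which also explores other substructures and an attention-based supervised variant.

```python
# A minimal sketch of the unsupervised pipeline in the abstract, assuming
# PyTorch. The traversal (truncated random walks), layer sizes, and all
# names are illustrative assumptions, not the authors' exact design.
import random
import torch
import torch.nn as nn

def random_walks(adj, walk_len=8, walks_per_node=2):
    """Turn a graph (dict: node id -> neighbor ids) into fixed-length node sequences."""
    walks = []
    for start in adj:
        for _ in range(walks_per_node):
            walk = [start]
            while len(walk) < walk_len:
                nbrs = adj[walk[-1]]
                # Stay in place at a dead end so every walk has the same length.
                walk.append(random.choice(nbrs) if nbrs else walk[-1])
            walks.append(walk)
    return walks

class GraphSeqAutoencoder(nn.Module):
    """LSTM encoder-decoder that reconstructs a node sequence from its encoding."""
    def __init__(self, num_nodes, emb_dim=32, hid_dim=64):
        super().__init__()
        self.emb = nn.Embedding(num_nodes, emb_dim)  # node ids must be 0..num_nodes-1
        self.encoder = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.decoder = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, num_nodes)

    def encode(self, seqs):                  # seqs: (batch, walk_len) long tensor
        _, state = self.encoder(self.emb(seqs))
        return state                         # final (h, c) = sequence embedding

    def forward(self, seqs):
        state = self.encode(seqs)
        dec_out, _ = self.decoder(self.emb(seqs), state)  # teacher forcing
        return self.out(dec_out)             # per-step logits over node ids

def embed_graph(model, walks):
    """Represent a graph by mean-pooling the embeddings of its walks."""
    with torch.no_grad():
        h, _ = model.encode(torch.tensor(walks))
    return h[-1].mean(dim=0)                 # (hid_dim,) graph vector

# Toy usage on a 4-node path graph (untrained model, so the vector is arbitrary).
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
model = GraphSeqAutoencoder(num_nodes=4)
vec = embed_graph(model, random_walks(adj))  # shape: (64,)
```

Training such a model would minimize per-step cross-entropy between the decoder logits and the input sequence; the paper's supervised variant instead uses an attention module over sequence neighborhoods to drive graph classification.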
Pages: 26