Self-Attentive Attributed Network Embedding Through Adversarial Learning

Cited by: 8
Authors
Yu, Wenchao [1 ]
Cheng, Wei [1 ]
Aggarwal, Charu [2 ]
Zong, Bo [1 ]
Chen, Haifeng [1 ]
Wang, Wei [3 ]
Affiliations
[1] NEC Labs Amer Inc, Princeton, NJ 08540 USA
[2] IBM Res AI, Yorktown Hts, NY USA
[3] Univ Calif Los Angeles, Dept Comp Sci, Los Angeles, CA 90024 USA
Keywords
network embedding; attributed network; deep embedding; generative adversarial networks; self-attention
DOI
10.1109/ICDM.2019.00086
CLC Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Network embedding aims to learn low-dimensional representations (embeddings) of vertices that preserve the structure and inherent properties of the network. The resulting embeddings benefit downstream tasks such as vertex classification and link prediction. The vast majority of real-world networks come with a rich set of vertex attributes, which can be complementary in learning better embeddings. Existing attributed network embedding models, whether shallow or deep, typically seek to match the representations in the topology space and the attribute space for each individual vertex, under the assumption that samples from the two spaces are drawn uniformly. This assumption, however, can hardly be guaranteed in practice: owing to the intrinsic sparsity of sampled vertex sequences and the incompleteness of vertex attributes, a discrepancy between the attribute space and the network topology space inevitably exists. Furthermore, the interactions among vertex attributes, a.k.a. cross features, have been largely ignored by existing approaches. To address these issues, we propose NETTENTION, a self-attentive network embedding approach that efficiently learns vertex embeddings on attributed networks. Instead of sample-wise optimization, NETTENTION aggregates the two types of information by minimizing the difference between the representation distributions in the low-dimensional topology and attribute spaces. The joint inference is encapsulated in a generative adversarial training process, yielding better generalization performance and robustness. The learned distributions respect both locality-preserving and global reconstruction constraints, which are inferred by training adversarially regularized autoencoders. Additionally, a multi-head self-attention module explicitly models the attribute interactions. Extensive experiments on benchmark datasets verify the effectiveness of NETTENTION on a variety of tasks, including vertex classification and link prediction.
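To make the two mechanisms described in the abstract concrete, the following is a minimal PyTorch sketch, not the authors' implementation: all names here (AttrSelfAttention, Discriminator, field_dim, etc.) are illustrative assumptions. It shows (a) multi-head self-attention over a vertex's attribute fields to capture cross features, and (b) one adversarial step in which a discriminator pushes the distribution of attribute-space embeddings toward that of topology-space embeddings, instead of matching the two spaces vertex by vertex.

import torch
import torch.nn as nn

class AttrSelfAttention(nn.Module):
    """Multi-head self-attention over a vertex's attribute fields (cross features)."""
    def __init__(self, num_fields, field_dim, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(field_dim, num_heads, batch_first=True)
        self.proj = nn.Linear(num_fields * field_dim, field_dim)

    def forward(self, x):                       # x: (batch, num_fields, field_dim)
        h, _ = self.attn(x, x, x)               # attribute-attribute interactions
        return self.proj(h.flatten(1))          # one embedding per vertex

class Discriminator(nn.Module):
    """Distinguishes topology-space embeddings from attribute-space ones."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, z):
        return self.net(z)                      # raw logit

B, F, D = 32, 10, 64                            # batch, attribute fields, embedding dim
attr_enc, disc = AttrSelfAttention(F, D), Discriminator(D)
bce = nn.BCEWithLogitsLoss()

z_topo = torch.randn(B, D)                      # stand-in for a topology encoder's output
z_attr = attr_enc(torch.randn(B, F, D))         # attribute-space embeddings

# Discriminator step: topology embeddings labeled 1, attribute embeddings 0.
d_loss = bce(disc(z_topo), torch.ones(B, 1)) + \
         bce(disc(z_attr.detach()), torch.zeros(B, 1))

# Encoder (generator) step: fool the discriminator so the two embedding
# distributions align; optimizer updates are omitted from this sketch.
g_loss = bce(disc(z_attr), torch.ones(B, 1))

The locality-preserving and reconstruction losses that the paper attaches to the adversarially regularized autoencoders are omitted above; the sketch only illustrates the distribution-matching and attribute self-attention components.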
Pages: 758-767
Page count: 10
Related Papers
50 records in total
  • [31] Lightweight Self-Attentive Sequential Recommendation
    Li, Yang
    Chen, Tong
    Zhang, Peng-Fei
    Yin, Hongzhi
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON INFORMATION & KNOWLEDGE MANAGEMENT, CIKM 2021, 2021: 967-977
  • [32] Adversarial regularized attributed network embedding for graph anomaly detection
    Tian, Chongrui
    Zhang, Fengbin
    Wang, Ruidong
    PATTERN RECOGNITION LETTERS, 2024, 183: 111-116
  • [33] Constituency Parsing with a Self-Attentive Encoder
    Kitaev, Nikita
    Klein, Dan
    PROCEEDINGS OF THE 56TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL), VOL 1, 2018: 2676-2686
  • [34] Fast Self-Attentive Multimodal Retrieval
    Wehrmann, Jonatas
    Lopes, Mauricio A.
    More, Martin D.
    Barros, Rodrigo C.
    2018 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2018), 2018: 1871-1878
  • [35] Self-attentive Biaffine Dependency Parsing
    Li, Ying
    Li, Zhenghua
    Zhang, Min
    Wang, Rui
    Li, Sheng
    Si, Luo
    PROCEEDINGS OF THE TWENTY-EIGHTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2019: 5067-5073
  • [36] SAEA: Self-Attentive Heterogeneous Sequence Learning Model for Entity Alignment
    Chen, Jia
    Gu, Binbin
    Li, Zhixu
    Zhao, Pengpeng
    Liu, An
    Zhao, Lei
    DATABASE SYSTEMS FOR ADVANCED APPLICATIONS (DASFAA 2020), PT I, 2020, 12112: 452-467
  • [37] Adversarial Capsule Learning for Network Embedding
    Jin, Di
    Li, Zhigang
    Yang, Liang
    He, Dongxiao
    Jiao, Pengfei
    Zhai, Lu
    2019 IEEE 31ST INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE (ICTAI 2019), 2019: 214-221
  • [38] Slot Self-Attentive Dialogue State Tracking
    Ye, Fanghua
    Manotumruksa, Jarana
    Zhang, Qiang
    Li, Shenghui
    Yilmaz, Emine
    PROCEEDINGS OF THE WORLD WIDE WEB CONFERENCE 2021 (WWW 2021), 2021: 1598-1608
  • [39] Sequential Self-Attentive Model for Knowledge Tracing
    Zhang, Xuelong
    Zhang, Juntao
    Lin, Nanzhou
    Yang, Xiandi
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2021, PT I, 2021, 12891: 318-330
  • [40] Attributed Network Embedding for Learning in a Dynamic Environment
    Li, Jundong
    Dani, Harsh
    Hu, Xia
    Tang, Jiliang
    Chang, Yi
    Liu, Huan
    CIKM'17: PROCEEDINGS OF THE 2017 ACM CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, 2017: 387-396