Learning Network Representations With Different Order Structural Information

Cited by: 2
Authors
Liu, Qidong [1 ,2 ]
Zhou, Xin [3 ]
Long, Cheng [3 ]
Zhang, Jie [3 ]
Xu, Mingliang [1 ]
Affiliations
[1] Zhengzhou Univ, Sch Informat Engn, Zhengzhou 450001, Peoples R China
[2] Nanyang Technol Univ, Sch Elect & Elect Engn, Singapore 639798, Singapore
[3] Nanyang Technol Univ, Sch Comp Sci & Engn, Singapore 639798, Singapore
Funding
National Natural Science Foundation of China; National Research Foundation of Singapore
Keywords
Classification; link prediction; network embeddings;
DOI
10.1109/TCSS.2020.3000528
CLC Classification
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
Network embeddings aim to learn representations of the nodes in a network such that both the first-order and the high-order proximities are preserved. The first-order proximity corresponds to network reconstruction, while the high-order proximity is in tune with network inference. Since the tradeoff between the two proximities varies across scenarios, we propose an adjustable network embedding (ANE) algorithm that adjusts the weight between the first- and the high-order proximities. ANE rests on two hypotheses: 1) nodes in closed triplets are more important than nodes in open triplets and 2) closed triplets with higher degrees are more important. In addition, we change the bidirectional sampling of Word2vec into directional sampling to preserve the frequency of node pairs in the training set. Three common tasks (network reconstruction, link prediction, and classification) are conducted on various publicly available data sets to validate these hypotheses.
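The abstract's two hypotheses hinge on distinguishing closed triplets (all three node pairs connected) from open triplets (exactly two pairs connected), and on weighting closed triplets by degree. The sketch below illustrates that distinction on a toy undirected graph; it is an assumption-laden illustration, not the paper's ANE implementation, and the function name `triplet_stats` and the degree-sum weighting are hypothetical choices for exposition.

```python
# Hedged sketch: classify node triplets of an undirected graph as closed
# (a triangle) or open (a path of length two), and weight each closed
# triplet by the total degree of its nodes, echoing ANE's hypothesis 2.

from itertools import combinations

def triplet_stats(edges):
    """Return (closed_triplets, open_triplets, degree_weights_of_closed)."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    closed, open_ = [], []
    for trio in combinations(sorted(adj), 3):
        u, v, w = trio
        # Count how many of the three possible pairs are actually linked.
        links = sum(1 for a, b in [(u, v), (u, w), (v, w)] if b in adj[a])
        if links == 3:
            closed.append(trio)
        elif links == 2:
            open_.append(trio)
    # Illustrative weighting: a closed triplet's weight is the sum of its
    # nodes' degrees, so triangles among well-connected nodes score higher.
    weights = {t: sum(len(adj[n]) for n in t) for t in closed}
    return closed, open_, weights

edges = [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]
closed, open_, weights = triplet_stats(edges)
# ("a", "b", "c") forms the only triangle; ("a", "c", "d") and
# ("b", "c", "d") are open triplets through the hub node "c".
```

For real networks one would enumerate triangles via adjacency-set intersection rather than all O(n^3) combinations, but the classification logic is the same.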
Pages: 907-914 (8 pages)
Related Articles (50 total)
  • [41] Distinct representations of olfactory information in different cortical centres
    Sosulski, Dara L.
    Bloom, Maria Lissitsyna
    Cutforth, Tyler
    Axel, Richard
    Datta, Sandeep Robert
    NATURE, 2011, 472 (7342): 213-216
  • [43] Selective information enhancement learning for creating interpretable representations in competitive learning
    Kamimura, Ryotaro
    NEURAL NETWORKS, 2011, 24 (04): 387-405
  • [44] Using structural information to construct ensemble representations in imperfect-information scenes
    Zhu, Jingyin
    Lu, Yilong
    Zhou, Jifan
    Shen, Mowei
    INTERNATIONAL JOURNAL OF PSYCHOLOGY, 2023, 58: 654-654
  • [45] INFORMATION NETWORK FOR COMMUNICATIONS AND LEARNING
    CHESHIER, RG
    BULLETIN OF THE MEDICAL LIBRARY ASSOCIATION, 1974, 62 (03): 325-326
  • [46] Structural representations: causally relevant and different from detectors
    Paweł Gładziejewski
    Marcin Miłkowski
    BIOLOGY & PHILOSOPHY, 2017, 32: 337-355
  • [47] Learning Unbiased Representations via Mutual Information Backpropagation
    Ragonesi, Ruggero
    Volpi, Riccardo
    Cavazza, Jacopo
    Murino, Vittorio
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2021, 2021: 2723-2732
  • [48] Learning Representations by Maximizing Mutual Information Across Views
    Bachman, Philip
    Hjelm, R. Devon
    Buchwalter, William
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [49] Learning Representations by Maximizing Mutual Information in Variational Autoencoders
    Rezaabad, Ali Lotfi
    Vishwanath, Sriram
    2020 IEEE INTERNATIONAL SYMPOSIUM ON INFORMATION THEORY (ISIT), 2020: 2729-2734