DINE: Dimensional Interpretability of Node Embeddings

Cited by: 0
Authors
Piaggesi, Simone [1 ]
Khosla, Megha [2 ]
Panisson, Andre [3 ]
Anand, Avishek [2 ]
Affiliations
[1] Univ Pisa, I-56126 Pisa, Italy
[2] Delft Univ Technol, NL-2628 CD Delft, Netherlands
[3] CENTAI Inst, I-10138 Turin, Italy
Keywords
node embeddings; representation learning; interpretability; link prediction
DOI
10.1109/TKDE.2024.3425460
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Graph representation learning methods, such as node embeddings, are powerful approaches for mapping nodes into a latent vector space, where they can be used for a variety of graph learning tasks. Despite their success, these techniques are inherently black boxes, and few studies have investigated local explanations of node embeddings for specific instances. Moreover, explaining the overall behavior of unsupervised embedding models remains an unexplored problem, limiting the potential for global interpretability and debugging. We address this gap by developing human-understandable explanations for the latent dimensions of node embeddings. To that end, we first develop new metrics that measure the global interpretability of an embedding based on the marginal contribution of each latent dimension to predicting the graph structure. We say an embedding dimension is more interpretable if it can be faithfully mapped to an understandable sub-structure of the input graph, such as a community. Having observed that standard node embeddings have low interpretability, we then introduce DINE (Dimension-based Interpretable Node Embedding), a novel approach that retrofits existing node embeddings, making them more interpretable without sacrificing task performance. We conduct extensive experiments on synthetic and real-world graphs and show that we can learn highly interpretable node embeddings that remain effective for link prediction and node classification.
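The abstract's notion of a dimension's "marginal contribution" to predicting graph structure can be illustrated with a simple ablation: train a link-prediction classifier on edge features derived from the embedding, then zero out one latent dimension at a time and record the resulting drop in AUC. The sketch below is only an illustrative assumption, not the authors' DINE implementation; the helper names (edge_features, dimension_contributions), the Hadamard edge features, and the logistic-regression scorer are all choices made here for the example.

```python
# Minimal sketch (assumed setup, not the DINE method itself): estimate each
# embedding dimension's marginal contribution to link prediction by ablation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def edge_features(emb, pairs):
    """Hadamard product of the two endpoint embeddings as the edge feature."""
    return emb[pairs[:, 0]] * emb[pairs[:, 1]]

def dimension_contributions(emb, train_pairs, train_y, test_pairs, test_y):
    """AUC drop observed when each latent dimension is zeroed out."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(edge_features(emb, train_pairs), train_y)
    full_auc = roc_auc_score(
        test_y, clf.predict_proba(edge_features(emb, test_pairs))[:, 1])

    drops = np.zeros(emb.shape[1])
    for d in range(emb.shape[1]):
        ablated = emb.copy()
        ablated[:, d] = 0.0          # remove dimension d
        auc = roc_auc_score(
            test_y, clf.predict_proba(edge_features(ablated, test_pairs))[:, 1])
        drops[d] = full_auc - auc    # larger drop = larger marginal contribution
    return drops
```

Dimensions whose removal causes a larger AUC drop contribute more to reconstructing the graph; the paper's interpretability metrics additionally relate such contributions to understandable sub-structures of the input graph, such as communities.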
Pages: 7986-7997
Number of pages: 12
Related papers (50 in total)
  • [1] Semantic Structure and Interpretability of Word Embeddings
    Senel, Lutfi Kerem
    Utlu, Ihsan
    Yucesoy, Veysel
    Koc, Aykut
    Cukur, Tolga
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2018, 26 (10) : 1769 - 1779
  • [2] Interpretability Analysis for Turkish Word Embeddings
    Senel, Lutfi Kerem
    Yucesoy, Veysel
    Koc, Aykut
    Cukur, Tolga
    2018 26TH SIGNAL PROCESSING AND COMMUNICATIONS APPLICATIONS CONFERENCE (SIU), 2018,
  • [3] Rotations and Interpretability of Word Embeddings: The Case of the Russian Language
    Zobnin, Alexey
    ANALYSIS OF IMAGES, SOCIAL NETWORKS AND TEXTS, AIST 2017, 2018, 10716 : 116 - 128
  • [4] Kernel Node Embeddings
    Celikkanat, Abdulkadir
    Malliaros, Fragkiskos D.
    2019 7TH IEEE GLOBAL CONFERENCE ON SIGNAL AND INFORMATION PROCESSING (IEEE GLOBALSIP), 2019,
  • [5] Improving interpretability of word embeddings by generating definition and usage
    Zhang, Haitong
    Du, Yongping
    Sun, Jiaxin
    Li, Qingxiao
    EXPERT SYSTEMS WITH APPLICATIONS, 2020, 160 (160)
  • [6] Centrality-based Interpretability Measures for Graph Embeddings
    Khoshraftar, Shima
    Mahdavi, Sedigheh
    An, Aijun
    2021 IEEE 8TH INTERNATIONAL CONFERENCE ON DATA SCIENCE AND ADVANCED ANALYTICS (DSAA), 2021,
  • [7] Link prediction using low-dimensional node embeddings: The measurement problem
    Menand, Nicolas
    Seshadhri, C.
    PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA, 2024, 121 (08)
  • [8] Imparting interpretability to word embeddings while preserving semantic structure
    Senel, Lutfi Kerem
    Utlu, Ihsan
    Sahinuc, Furkan
    Ozaktas, Haldun M.
    Koc, Aykut
    NATURAL LANGUAGE ENGINEERING, 2021, 27 (06) : 721 - 746
  • [9] Hyperpolyglot LLMs: Cross-Lingual Interpretability in Token Embeddings
    Wen-Yi, Andrea W.
    Mimno, David
    2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING, EMNLP 2023, 2023, : 1124 - 1131
  • [10] Bringing Back Semantics to Knowledge Graph Embeddings: An Interpretability Approach
    Domingues, Antoine
    Jain, Nitisha
    Penuela, Albert Merono
    Simperl, Elena
    NEURAL-SYMBOLIC LEARNING AND REASONING, PT II, NESY 2024, 2024, 14980 : 192 - 203