Breaking the Expression Bottleneck of Graph Neural Networks

Cited by: 7
|
Authors
Yang, Mingqi [1 ]
Wang, Renjian [1 ]
Shen, Yanming [1 ]
Qi, Heng [1 ]
Yin, Baocai [1 ,2 ]
Affiliations
[1] Dalian Univ Technol, Sch Comp Sci & Technol, Dalian 116024, Liaoning, Peoples R China
[2] Peng Cheng Lab, Shenzhen 518066, Guangdong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Task analysis; Graph neural networks; Convolution; Buildings; Systematics; Representation learning; Power measurement; Deep learning; graph representation learning; graph neural networks;
DOI
10.1109/TKDE.2022.3168070
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The Weisfeiler-Lehman (WL) graph isomorphism test has been used to measure the expressiveness of graph neural networks (GNNs), showing that neighborhood-aggregation GNNs are at most as powerful as the 1-WL test in distinguishing graph structures. Improvements have also been proposed in analogy to the k-WL test (k > 1). However, the aggregations in these GNNs are far from injective, as the WL test requires, and their weak distinguishing strength makes aggregation the expression bottleneck. In this paper, we improve expressiveness by exploring powerful aggregations. We reformulate an aggregation with its corresponding aggregation coefficient matrix, and then systematically analyze the requirements on this matrix for building more powerful, and even injective, aggregations. We also show the necessity of applying nonlinear units ahead of aggregations, in contrast to most existing GNNs. Based on this theoretical analysis, we develop ExpandingConv. Experimental results show that our model significantly boosts performance, especially on large and densely connected graphs.
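The two ideas in the abstract can be illustrated with a minimal sketch, assuming a simple row-normalized adjacency as the aggregation coefficient matrix (the paper's learned coefficients and the `expanding_conv_sketch` name below are illustrative assumptions, not the authors' released implementation). The key contrast with standard GNN layers is that the nonlinearity is applied to node features *before* aggregation rather than after:

```python
import numpy as np

def expanding_conv_sketch(X, A, W):
    """One illustrative layer: nonlinear unit applied ahead of aggregation.

    X: (n, d) node features; A: (n, n) adjacency; W: (d, d_out) weights.
    C below is a row-normalized adjacency, standing in for the paper's
    aggregation coefficient matrix (hypothetical simplification).
    """
    H = np.maximum(X @ W, 0.0)           # nonlinearity FIRST (ReLU)
    deg = A.sum(axis=1, keepdims=True)
    C = A / np.maximum(deg, 1.0)         # aggregation coefficient matrix
    return C @ H                         # then aggregate

# Tiny 3-node example: node 1 and node 2 each connect only to node 0.
X = np.array([[1., 0.], [0., 1.], [1., 1.]])
A = np.array([[0., 1., 1.], [1., 0., 0.], [1., 0., 0.]])
W = np.eye(2)
out = expanding_conv_sketch(X, A, W)
print(out.shape)  # (3, 2)
```

Here node 1's output equals node 0's transformed feature, since node 0 is its only neighbor; a post-aggregation nonlinearity would instead mix features before the ReLU, which is what the paper argues weakens distinguishing strength.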
Pages: 5652-5664
Page count: 13
Related Papers
50 records in total
  • [1] Discovering the Representation Bottleneck of Graph Neural Networks
    Wu, Fang
    Li, Siyuan
    Li, Stan Z.
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2024, 36 (12) : 7998 - 8008
  • [2] Breaking the Limits of Message Passing Graph Neural Networks
    Balcilar, Muhammet
    Heroux, Pierre
    Gauzere, Benoit
    Vasseur, Pascal
    Adam, Sebastien
    Honeine, Paul
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [3] InsGNN: Interpretable spatio-temporal graph neural networks via information bottleneck
    Fang, Hui
    Wang, Haishuai
    Gao, Yang
    Zhang, Yonggang
    Bu, Jiajun
    Han, Bo
    Lin, Hui
    INFORMATION FUSION, 2025, 119
  • [4] RAN-GNNs: Breaking the Capacity Limits of Graph Neural Networks
    Valsesia, Diego
    Fracastoro, Giulia
    Magli, Enrico
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, 34 (08) : 4610 - 4619
  • [5] Breaking the Bottleneck
    Henry, Ben Andrew
    SCIENTIST, 2017, 31 (01): : 15 - 16
  • [6] Late Breaking Results: Reinforcement Learning for Scalable Logic Optimization with Graph Neural Networks
    Timoneda, Xavier
    Cavigelli, Lukas
    2021 58TH ACM/IEEE DESIGN AUTOMATION CONFERENCE (DAC), 2021, : 1378 - 1379
  • [7] Breaking the Limit of Graph Neural Networks by Improving the Assortativity of Graphs with Local Mixing Patterns
    Suresh, Susheel
    Budde, Vinith
    Neville, Jennifer
    Li, Pan
    Ma, Jianzhu
    KDD '21: PROCEEDINGS OF THE 27TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 2021, : 1541 - 1551
  • [8] Graph neural networks
    Corso G.
    Stark H.
    Jegelka S.
    Jaakkola T.
    Barzilay R.
    NATURE REVIEWS METHODS PRIMERS, 2024, 4 (01):
  • [9] Graph neural networks
    [No author listed]
    NATURE REVIEWS METHODS PRIMERS, 2024, 4 (01):
  • [10] Information Bottleneck Theory on Convolutional Neural Networks
    Li, Junjie
    Liu, Ding
    NEURAL PROCESSING LETTERS, 2021, 53 (02) : 1385 - 1400