T-HyperGNNs: Hypergraph Neural Networks via Tensor Representations

Cited by: 8
|
Authors
Wang, Fuli [1 ]
Pena-Pena, Karelia [2 ]
Qian, Wei [3 ]
Arce, Gonzalo R. [2 ]
Affiliations
[1] Univ Delaware, Inst Financial Serv Analyt, Newark, DE 19716 USA
[2] Univ Delaware, Dept Elect & Comp Engn, Newark, DE 19716 USA
[3] Univ Delaware, Dept Appl Econ & Stat, Newark, DE 19716 USA
Funding
U.S. National Science Foundation;
Keywords
Convolution; hypergraphs; message passing; neural networks; tensors;
DOI
10.1109/TNNLS.2024.3371382
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Hypergraph neural networks (HyperGNNs) are a family of deep neural networks designed to perform inference on hypergraphs. HyperGNNs follow either a spectral or a spatial approach, in which a convolution or message-passing operation is conducted based on a hypergraph algebraic descriptor. While many HyperGNNs have been proposed and achieved state-of-the-art performance on broad applications, there have been limited attempts at exploring high-dimensional hypergraph descriptors (tensors) and joint node interactions carried by hyperedges. In this article, we depart from hypergraph matrix representations and present a new tensor-HyperGNN (T-HyperGNN) framework with cross-node interactions (CNIs). The T-HyperGNN framework consists of T-spectral convolution, T-spatial convolution, and T-message-passing HyperGNNs (T-MPHN). The T-spectral convolution HyperGNN is defined under the t-product algebra that closely connects to the spectral space. To improve computational efficiency for large hypergraphs, we localize the T-spectral convolution approach to formulate the T-spatial convolution and further devise a novel tensor-message-passing algorithm for practical implementation by studying a compressed adjacency tensor representation. Compared to the state-of-the-art approaches, our T-HyperGNNs preserve intrinsic high-order network structures without any hypergraph reduction and model the joint effects of nodes through a CNI layer. These advantages of our T-HyperGNNs are demonstrated in a wide range of real-world hypergraph datasets. The implementation code is available at https://github.com/wangfuli/T-HyperGNNs.git.
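The T-spectral convolution described in the abstract is built on the t-product algebra, in which third-order tensors are multiplied by taking an FFT along the tube (third) mode, performing facewise matrix products, and inverting the transform. The following is a minimal sketch of the standard t-product computation (not the authors' implementation; `t_product` is an illustrative name), assuming real-valued tensors stored as NumPy arrays:

```python
import numpy as np

def t_product(A, B):
    """t-product of third-order tensors A (n1 x n2 x k) and B (n2 x n3 x k).

    Computed in the Fourier domain: FFT along the third mode,
    facewise matrix multiplication, then inverse FFT.
    """
    A_hat = np.fft.fft(A, axis=2)                      # transform tubes to Fourier domain
    B_hat = np.fft.fft(B, axis=2)
    C_hat = np.einsum('ijt,jkt->ikt', A_hat, B_hat)    # multiply frontal slices facewise
    return np.fft.ifft(C_hat, axis=2).real             # real input => real t-product

# Example usage: result has shape (n1, n3, k).
A = np.random.rand(3, 3, 4)
B = np.random.rand(3, 2, 4)
C = t_product(A, B)
```

As a sanity check on the construction, with k = 1 the t-product reduces to ordinary matrix multiplication, which is why tensor convolutions defined this way specialize to matrix-based (graph) convolutions.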
Pages: 1-15
Page count: 15
Related Papers
(50 items in total)
  • [1] T-HyperGNNs: Hypergraph Neural Networks via Tensor Representations
    Wang, Fuli
    Pena-Pena, Karelia
    Qian, Wei
    Arce, Gonzalo R.
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2025, 36 (03) : 5044 - 5058
  • [2] Representations of hypergraph states with neural networks*
    Yang, Ying
    Cao, Huaixin
    COMMUNICATIONS IN THEORETICAL PHYSICS, 2021, 73 (10)
  • [4] Transferable Hypergraph Neural Networks via Spectral Similarity
    Hayhoe, Mikhail
    Riess, Hans
    Zavlanos, Michael M.
    Preciado, Victor M.
    Ribeiro, Alejandro
    LEARNING ON GRAPHS CONFERENCE, VOL 231, 2023, 231
  • [5] Hypergraph Neural Networks for Hypergraph Matching
    Liao, Xiaowei
    Xu, Yong
    Ling, Haibin
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 1246 - 1255
  • [6] Hypergraph Neural Networks
    Feng, Yifan
    You, Haoxuan
    Zhang, Zizhao
    Ji, Rongrong
    Gao, Yue
    THIRTY-THIRD AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FIRST INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / NINTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2019, : 3558 - 3565
  • [7] Tensor neural networks via circulant convolution
    Nie, Chang
    Wang, Huan
    NEUROCOMPUTING, 2022, 483 : 22 - 31
  • [8] Hypergraph Transformer Neural Networks
    Li, Mengran
    Zhang, Yong
    Li, Xiaoyong
    Zhang, Yuchen
    Yin, Baocai
    ACM TRANSACTIONS ON KNOWLEDGE DISCOVERY FROM DATA, 2023, 17 (05)
  • [9] Molecular hypergraph neural networks
    Chen, Junwu
    Schwaller, Philippe
    JOURNAL OF CHEMICAL PHYSICS, 2024, 160 (14):
  • [10] Equivariant Hypergraph Neural Networks
    Kim, Jinwoo
    Oh, Saeyoon
    Cho, Sungjun
    Hong, Seunghoon
    COMPUTER VISION, ECCV 2022, PT XXI, 2022, 13681 : 86 - 103