Debiasing Graph Neural Networks via Learning Disentangled Causal Substructure

Cited: 0
|
Authors
Fan, Shaohua [1 ,2 ]
Wang, Xiao [1 ]
Mo, Yanhu [1 ]
Shi, Chuan [1 ]
Tang, Jian [2 ,3 ,4 ]
Affiliations
[1] Beijing Univ Posts & Telecommun, Beijing, Peoples R China
[2] Mila Quebec AI Inst, Toronto, ON, Canada
[3] HEC Montreal, Montreal, PQ, Canada
[4] CIFAR AI Res Chair, Toronto, ON, Canada
Source
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022) | 2022
Funding
National Natural Science Foundation of China; Natural Sciences and Engineering Research Council of Canada;
Keywords
DOI
Not available
Chinese Library Classification (CLC) number
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Most Graph Neural Networks (GNNs) predict the labels of unseen graphs by learning the correlation between input graphs and labels. However, through a graph classification study on severely biased training graphs, we surprisingly find that GNNs tend to exploit spurious correlations to make decisions, even when the causal correlation is always present. This implies that existing GNNs trained on such biased datasets suffer from poor generalization. Analyzing this problem from a causal view, we find that disentangling and decorrelating the causal and bias latent variables of the biased graphs are both crucial for debiasing. Inspired by this, we propose a general disentangled GNN framework that learns the causal substructure and the bias substructure, respectively. In particular, we design a parameterized edge-mask generator that explicitly splits the input graph into a causal subgraph and a bias subgraph. Two GNN modules, supervised by causal- and bias-aware loss functions respectively, then encode the two subgraphs into their corresponding representations. With the disentangled representations, we synthesize counterfactual unbiased training samples to further decorrelate the causal and bias variables. Moreover, to better benchmark the severe-bias problem, we construct three new graph datasets with controllable bias degrees that are easier to visualize and explain. Experimental results demonstrate that our approach achieves superior generalization over existing baselines. Furthermore, owing to the learned edge mask, the proposed model offers appealing interpretability and transferability.
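The pipeline the abstract describes (an edge mask splitting the graph into causal and bias subgraphs, two encoders producing disentangled representations, and counterfactual recombination of those representations) can be hedged as a minimal NumPy sketch. All function names, the sigmoid mask, and the mean-pool "GNN" below are illustrative assumptions for intuition only, not the paper's actual architecture or training losses.

```python
import numpy as np

rng = np.random.default_rng(0)

def split_graph(adj, edge_logits):
    """Soft-split an adjacency matrix into causal and bias subgraphs
    via a sigmoid edge mask (a simplified stand-in for the paper's
    parameterized edge-mask generator)."""
    mask = 1.0 / (1.0 + np.exp(-edge_logits))  # sigmoid weights in (0, 1)
    causal = adj * mask                        # causal subgraph
    bias = adj * (1.0 - mask)                  # bias subgraph
    return causal, bias

def mean_pool_embed(adj, feats):
    """Toy one-layer GNN: mean neighbor aggregation, then mean pooling
    to a graph-level representation."""
    deg = adj.sum(axis=1, keepdims=True) + 1e-8
    h = (adj @ feats) / deg
    return h.mean(axis=0)

# A toy undirected graph with 4 nodes and 3-dim node features.
adj = rng.integers(0, 2, size=(4, 4)).astype(float)
adj = np.triu(adj, 1)
adj = adj + adj.T                              # symmetric, no self-loops
feats = rng.standard_normal((4, 3))
logits = rng.standard_normal((4, 4))           # learnable in the real model

causal, bias = split_graph(adj, logits)
# The two subgraphs exactly partition the original edge weights.
assert np.allclose(causal + bias, adj)

z_c = mean_pool_embed(causal, feats)           # causal representation
z_b = mean_pool_embed(bias, feats)             # bias representation
# Counterfactual unbiased sample: pair the causal representation with a
# bias representation from elsewhere (here, a reversed copy stands in
# for shuffling bias representations across a batch).
z_cf = np.concatenate([z_c, z_b[::-1]])
print(z_cf.shape)
```

The key design point the sketch illustrates is that the mask and its complement exactly partition the edge weights, so every edge is attributed to either the causal or the bias subgraph, which is what makes the two representations decomposable and recombinable.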
Pages: 13