Outlier-Robust Gromov-Wasserstein for Graph Data

Citations: 0
Authors
Kong, Lemin [1 ]
Li, Jiajin [2 ]
Tang, Jianheng [3 ]
So, Anthony Man-Cho [1 ]
Affiliations
[1] CUHK, Hong Kong, Peoples R China
[2] Stanford Univ, Stanford, CA USA
[3] HKUST, Hong Kong, Peoples R China
Source
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023) | 2023
Keywords
DOI: not available
CLC classification: TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
The Gromov-Wasserstein (GW) distance is a powerful tool for comparing and aligning probability distributions supported on different metric spaces. Recently, GW has become the main modeling technique for aligning heterogeneous data in a wide range of graph learning tasks. However, the GW distance is known to be highly sensitive to outliers, which can result in large inaccuracies if the outliers are given the same weight as other samples in the objective function. To mitigate this issue, we introduce a new and robust version of the GW distance called RGW. RGW features optimistically perturbed marginal constraints within a Kullback-Leibler divergence-based ambiguity set. To make the benefits of RGW more accessible in practice, we develop a computationally efficient and theoretically provable procedure using a Bregman proximal alternating linearized minimization algorithm. Through extensive experimentation, we validate our theoretical results and demonstrate the effectiveness of RGW on real-world graph learning tasks, such as subgraph matching and partial shape correspondence.
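As background for the abstract, the discrete squared-loss GW objective compares two spaces only through their internal distance matrices: cost(T) = Σ_{i,j,k,l} (C1[i,k] − C2[j,l])² T[i,j] T[k,l] for a coupling T. The NumPy sketch below (our own illustration, not the paper's RGW solver; the function name and toy matrices are ours) evaluates this objective for a fixed coupling, which also makes the outlier sensitivity concrete: a single node with inflated distances enlarges every distortion term it touches.

```python
import numpy as np

def gw_cost(C1, C2, T):
    """Squared-loss Gromov-Wasserstein objective for a fixed coupling T.

    cost = sum_{i,j,k,l} (C1[i,k] - C2[j,l])**2 * T[i,j] * T[k,l]
    """
    # 4-way tensor of pairwise distortion terms; fine for small toy graphs
    # (an O(n^2 m^2) brute force, not the factorized O(n^2 m + n m^2) form).
    M = (C1[:, None, :, None] - C2[None, :, None, :]) ** 2  # (n, m, n, m)
    return float(np.einsum('ijkl,ij,kl->', M, T, T))

# Two identical 3-point metric spaces, matched by the identity coupling:
C = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 1.0],
              [2.0, 1.0, 0.0]])
T_id = np.eye(3) / 3.0
print(gw_cost(C, C, T_id))  # 0.0: a perfect alignment incurs no distortion
```

Under a uniform (mass-splitting) coupling the same pair of spaces already incurs a positive cost, and replacing one row/column of C2 with large outlier distances inflates the objective for every admissible coupling, which is the sensitivity that RGW's perturbed marginal constraints are designed to curb.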
Pages: 23