Outlier-Robust Gromov-Wasserstein for Graph Data

Citations: 0
Authors
Kong, Lemin [1 ]
Li, Jiajin [2 ]
Tang, Jianheng [3 ]
So, Anthony Man-Cho [1 ]
Affiliations
[1] CUHK, Hong Kong, Peoples R China
[2] Stanford Univ, Stanford, CA USA
[3] HKUST, Hong Kong, Peoples R China
Source
Advances in Neural Information Processing Systems 36 (NeurIPS 2023), 2023
Keywords: (none listed)
DOI: (not available)
CLC Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
The Gromov-Wasserstein (GW) distance is a powerful tool for comparing and aligning probability distributions supported on different metric spaces. Recently, GW has become the main modeling technique for aligning heterogeneous data in a wide range of graph learning tasks. However, the GW distance is known to be highly sensitive to outliers, which can cause large inaccuracies when outliers are given the same weight as other samples in the objective function. To mitigate this issue, we introduce a new, robust version of the GW distance called RGW. RGW features optimistically perturbed marginal constraints within a Kullback-Leibler divergence-based ambiguity set. To make the benefits of RGW more accessible in practice, we develop a computationally efficient procedure with provable theoretical guarantees, based on the Bregman proximal alternating linearized minimization algorithm. Through extensive experimentation, we validate our theoretical results and demonstrate the effectiveness of RGW on real-world graph learning tasks, such as subgraph matching and partial shape correspondence.
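To make the outlier sensitivity described in the abstract concrete, here is a minimal sketch (a hypothetical helper, not the authors' RGW implementation) of the standard GW objective for a fixed coupling. Every pairwise distance discrepancy enters with its full coupling weight, so a single far-away sample can inflate the objective arbitrarily, which is exactly what RGW's relaxed marginal constraints are designed to counteract.

```python
# Minimal sketch: the plain Gromov-Wasserstein objective for a fixed
# coupling T between two metric spaces with distance matrices C1, C2.
# This is illustrative only; it is NOT the RGW formulation of the paper.

def gw_objective(C1, C2, T):
    """Sum over (i,j,k,l) of (C1[i][k] - C2[j][l])^2 * T[i][j] * T[k][l]."""
    n, m = len(C1), len(C2)
    total = 0.0
    for i in range(n):
        for j in range(m):
            for k in range(n):
                for l in range(m):
                    total += (C1[i][k] - C2[j][l]) ** 2 * T[i][j] * T[k][l]
    return total

# Two identical 2-point spaces: the diagonal coupling gives zero cost.
C1 = [[0.0, 1.0], [1.0, 0.0]]
T = [[0.5, 0.0], [0.0, 0.5]]  # diagonal coupling, uniform marginals

# Same space, but the second point pushed far away (an outlier): the
# objective blows up because the outlier pair keeps its full weight.
C2_outlier = [[0.0, 10.0], [10.0, 0.0]]
```

Under hard marginal constraints, the coupling cannot down-weight the outlier; RGW replaces those constraints with a KL-divergence ambiguity set so mass assigned to outliers can shrink.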
Pages: 23
Related Papers (50 in total; items [21]-[30] shown)
  • [21] Vayer, Titouan; Chapel, Laetitia; Flamary, Remi; Tavenard, Romain; Courty, Nicolas. Fused Gromov-Wasserstein Distance for Structured Objects. Algorithms, 2020, 13(9).
  • [22] Chen, Samantha; Lim, Sunhyuk; Memoli, Facundo; Wan, Zhengchao; Wang, Yusu. Weisfeiler-Lehman Meets Gromov-Wasserstein. International Conference on Machine Learning, Vol. 162, 2022.
  • [23] Memoli, Facundo; Needham, Tom. Comparison results for Gromov-Wasserstein and Gromov-Monge distances. ESAIM: Control, Optimisation and Calculus of Variations, 2024, 30.
  • [24] Shamsolmoali, Pourya; Zareapoor, Masoumeh; Das, Swagatam; Granger, Eric; Garcia, Salvador. Hybrid Gromov-Wasserstein Embedding for Capsule Learning. IEEE Transactions on Neural Networks and Learning Systems, 2025, 36(2): 2480-2494.
  • [25] Zheng, Lei; Xiao, Yang; Niu, Lingfeng. A brief survey on Computational Gromov-Wasserstein distance. 8th International Conference on Information Technology and Quantitative Management (ITQM 2020 & 2021): Developing Global Digital Economy after COVID-19, 2022, 199: 697-702.
  • [26] Peyre, Gabriel; Cuturi, Marco; Solomon, Justin. Gromov-Wasserstein Averaging of Kernel and Distance Matrices. International Conference on Machine Learning, Vol. 48, 2016.
  • [27] Delon, Julie; Desolneux, Agnes; Salmona, Antoine. Gromov-Wasserstein distances between Gaussian distributions. Journal of Applied Probability, 2022, 59(4): 1178-1198.
  • [28] Alvarez-Melis, David; Jaakkola, Tommi S. Gromov-Wasserstein Alignment of Word Embedding Spaces. 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP 2018), 2018: 1881-1890.
  • [29] Dumont, Theo; Lacombe, Theo; Vialard, Francois-Xavier. On the Existence of Monge Maps for the Gromov-Wasserstein Problem. Foundations of Computational Mathematics, 2024, 25(2): 463-510.
  • [30] Khang Le; Dung Le; Huy Nguyen; Dat Do; Tung Pham; Nhat Ho. Entropic Gromov-Wasserstein between Gaussian Distributions. International Conference on Machine Learning, Vol. 162, 2022.