A Biased Graph Neural Network Sampler with Near-Optimal Regret

Times Cited: 0
Authors
Zhang, Qingru [1 ]
Wipf, David [2 ]
Gan, Quan [2 ]
Song, Le [1 ,3 ]
Affiliations
[1] Georgia Inst Technol, Atlanta, GA 30332 USA
[2] Amazon Shanghai AI Lab, Shanghai, Peoples R China
[3] Mohamed Bin Zayed Univ Artificial Intelligence, Abu Dhabi, U Arab Emirates
Keywords
(none listed)
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Graph neural networks (GNNs) have recently emerged as a vehicle for applying deep network architectures to graph and relational data. However, given the increasing size of industrial datasets, in many practical situations the message-passing computations required for sharing information across GNN layers are no longer scalable. Although various sampling methods have been introduced to approximate full-graph training within a tractable budget, unresolved complications remain, such as high variance and limited theoretical guarantees. To address these issues, we build upon existing work and treat GNN neighbor sampling as a multi-armed bandit problem, but with a newly designed reward function that introduces some degree of bias in order to reduce variance and avoid unstable, possibly unbounded payouts. Unlike prior bandit-GNN use cases, the resulting policy achieves near-optimal regret while accounting for the GNN training dynamics introduced by SGD. From a practical standpoint, this translates into lower-variance estimates and competitive or superior test accuracy across several benchmarks.
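The abstract frames neighbor sampling as a multi-armed bandit whose reward is deliberately biased to stay bounded. As a rough illustration of that general idea only (not the paper's actual algorithm or reward function), the sketch below uses a generic EXP3-style exponential-weights update over a node's neighbors; the `reward_clip` parameter is a hypothetical stand-in for the kind of bias that keeps payouts bounded and variance low:

```python
import math
import random

class BanditNeighborSampler:
    """Illustrative sketch: each neighbor of a target node is an 'arm'.
    Weights follow a generic EXP3-style update on a clipped (hence biased)
    reward, so payouts stay bounded. NOT the paper's exact method."""

    def __init__(self, neighbors, eta=0.1, reward_clip=1.0):
        self.neighbors = list(neighbors)           # candidate arms
        self.eta = eta                             # learning rate
        self.clip = reward_clip                    # hypothetical payout bound
        self.weights = {v: 1.0 for v in self.neighbors}

    def probs(self):
        # normalize weights into a sampling distribution over neighbors
        z = sum(self.weights.values())
        return {v: w / z for v, w in self.weights.items()}

    def sample(self, k):
        # draw k neighbors (with replacement, for simplicity)
        p = self.probs()
        return random.choices(self.neighbors,
                              weights=[p[v] for v in self.neighbors], k=k)

    def update(self, v, reward):
        # clipping introduces bias but prevents unbounded payouts
        r = max(-self.clip, min(self.clip, reward))
        p = self.probs()[v]
        # importance-weighted exponential update (EXP3 style)
        self.weights[v] *= math.exp(self.eta * r / p)
```

In a GNN training loop, `reward` would come from some measure of how much a sampled neighbor reduces the aggregation error at the target node; here it is left abstract.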
Pages: 12