FedTAD: Topology-aware Data-free Knowledge Distillation for Subgraph Federated Learning

Cited by: 0
Authors
Zhu, Yinlin [1]
Li, Xunkai [2]
Wu, Zhengyu [2]
Wu, Di [1]
Hu, Miao [1]
Li, Rong-Hua [2]
Affiliations
[1] Sun Yat Sen Univ, Guangzhou, Peoples R China
[2] Beijing Inst Technol, Beijing, Peoples R China
Funding
National Natural Science Foundation of China
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Subgraph federated learning (subgraph-FL) is a new distributed paradigm that facilitates the collaborative training of graph neural networks (GNNs) across multi-client subgraphs. Unfortunately, a significant challenge of subgraph-FL arises from subgraph heterogeneity, which stems from node and topology variation and impairs the performance of the global GNN. Despite various studies, the mechanism by which subgraph heterogeneity affects training has not yet been thoroughly investigated. To this end, we decouple node and topology variation, revealing that they correspond to differences in label distribution and structural homophily, respectively. Remarkably, these variations lead to significant differences in the class-wise knowledge reliability of the local GNNs, misguiding model aggregation to varying degrees. Building on this insight, we propose FedTAD, a topology-aware data-free knowledge distillation technique that enhances reliable knowledge transfer from the local models to the global model. Extensive experiments on six public datasets consistently demonstrate the superiority of FedTAD over state-of-the-art baselines.
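For readers unfamiliar with data-free distillation in this setting, the sketch below illustrates the general idea the abstract describes: the server trains the global GNN on generated pseudo-graphs, matching a reliability-weighted ensemble of the local models' predictions. This is a minimal, hypothetical PyTorch sketch under many simplifying assumptions (a dense two-layer GCN, random pseudo-topology, and a given per-class reliability matrix); it is not FedTAD's actual implementation, and all names (`SimpleGCN`, `distill_round`) are illustrative.

```python
# Hypothetical sketch of server-side, data-free knowledge distillation for
# subgraph FL. Not FedTAD's actual algorithm; names and loop are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGCN(nn.Module):
    """Two-layer dense GCN: logits = A_norm @ relu(A_norm @ X @ W1) @ W2."""
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim)
        self.w2 = nn.Linear(hid_dim, n_classes)

    def forward(self, x, adj):
        # adj: row-normalized dense adjacency matrix with self-loops
        h = F.relu(adj @ self.w1(x))
        return adj @ self.w2(h)

def distill_round(global_gnn, local_gnns, reliability,
                  n_nodes=64, in_dim=16, steps=100, lr=1e-2):
    """One data-free distillation round on generated pseudo-graphs.

    reliability: (num_clients, num_classes) scores for how trustworthy each
    client's knowledge is per class (FedTAD derives such scores from label
    distribution and structural homophily; here they are simply an input).
    """
    opt = torch.optim.Adam(global_gnn.parameters(), lr=lr)
    for _ in range(steps):
        # Generate pseudo node features and a random symmetric topology.
        x = torch.randn(n_nodes, in_dim)
        adj = (torch.rand(n_nodes, n_nodes) < 0.1).float()
        adj = torch.clamp(((adj + adj.T) > 0).float() + torch.eye(n_nodes),
                          max=1.0)
        adj = adj / adj.sum(dim=1, keepdim=True)  # row-normalize

        with torch.no_grad():
            # (num_clients, n_nodes, n_classes) teacher soft predictions
            probs = F.softmax(torch.stack([g(x, adj) for g in local_gnns]),
                              dim=-1)

        # Mix teachers per class by reliability, then renormalize per node.
        w = F.softmax(reliability, dim=0).unsqueeze(1)  # (clients, 1, classes)
        target = (w * probs).sum(dim=0)
        target = target / target.sum(dim=-1, keepdim=True)

        # Train the global (student) GNN to match the weighted ensemble.
        loss = F.kl_div(F.log_softmax(global_gnn(x, adj), dim=-1),
                        target, reduction="batchmean")
        opt.zero_grad()
        loss.backward()
        opt.step()
    return global_gnn
```

As an illustrative usage, three local teachers and a global student with 7 classes could be distilled via `distill_round(SimpleGCN(16, 32, 7), [SimpleGCN(16, 32, 7) for _ in range(3)], torch.rand(3, 7))`, where the random reliability matrix stands in for the class-wise scores the paper computes.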
Pages: 5716-5724 (9 pages)