Data-Free Evaluation of User Contributions in Federated Learning

Cited by: 0
Authors
Lv, Hongtao [1 ]
Zheng, Zhenzhe [1 ]
Luo, Tie [2 ]
Wu, Fan [1 ]
Tang, Shaojie [3 ]
Hua, Lifeng [4 ]
Jie, Rongfei [4 ]
Lv, Chengfei [4 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Shanghai, Peoples R China
[2] Missouri Univ Sci & Technol, Rolla, MO 65409 USA
[3] Univ Texas Dallas, Dallas, TX USA
[4] Alibaba Grp, Hangzhou, Peoples R China
Funding
US National Science Foundation;
Keywords
Peer prediction; correlated agreement;
DOI
Not available
CLC Classification
TP [automation technology; computer technology];
Discipline Code
0812;
Abstract
Federated learning (FL) trains a machine learning model on mobile devices in a distributed manner using each device's private data and computing resources. A critical issue is to evaluate individual users' contributions so that (1) users' effort in model training can be compensated with proper incentives and (2) malicious and low-quality users can be detected and removed. State-of-the-art solutions require a representative test dataset for this evaluation, but such a dataset is often unavailable and hard to synthesize. In this paper, we propose a method called Pairwise Correlated Agreement (PCA), based on the idea of peer prediction, to evaluate user contributions in FL without a test dataset. PCA achieves this by exploiting the statistical correlation among the model parameters uploaded by users. We then apply PCA to design (1) a new federated learning algorithm called Fed-PCA and (2) a new incentive mechanism that guarantees truthfulness. We evaluate the performance of PCA and Fed-PCA using the MNIST dataset and a large industrial product recommendation dataset. The results demonstrate that Fed-PCA outperforms the canonical FedAvg algorithm and other baseline methods in accuracy, while PCA effectively incentivizes users to behave truthfully.
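To make the abstract's core idea concrete, the following is a minimal sketch of a correlated-agreement-style scoring rule applied to client model updates. It is not the paper's PCA algorithm: the function name pairwise_ca_scores, the reduction of each update to coordinate signs, and the use of a shuffled-peer baseline are all illustrative assumptions; the only idea taken from the abstract is that a client is scored by the statistical correlation of its uploaded parameters with its peers', without any test dataset.

```python
import numpy as np

def pairwise_ca_scores(updates, rng=None):
    """Toy correlated-agreement-style scores for client updates (assumed design, not the paper's PCA).

    `updates` is a list of 1-D numpy arrays (flattened parameter deltas), one per client.
    Each update is reduced to coordinate signs. A client's score against a peer is the
    agreement rate on aligned coordinates minus the agreement rate on a randomly permuted
    copy of the peer's signs, so a client that uploads pure noise scores near zero in
    expectation. The final score averages over all peers (hence "pairwise").
    """
    rng = np.random.default_rng() if rng is None else rng
    signs = [np.sign(u) for u in updates]
    n = len(signs)
    scores = np.zeros(n)
    for i in range(n):
        peer_scores = []
        for j in range(n):
            if j == i:
                continue
            matched = np.mean(signs[i] == signs[j])                    # aligned agreement
            shuffled = np.mean(signs[i] == rng.permutation(signs[j]))  # chance-level baseline
            peer_scores.append(matched - shuffled)
        scores[i] = np.mean(peer_scores)
    return scores

# Hypothetical usage: three roughly honest clients and one noisy client.
rng = np.random.default_rng(0)
true_direction = np.sign(rng.normal(size=1000))
honest = [true_direction * np.abs(rng.normal(size=1000)) + 0.3 * rng.normal(size=1000)
          for _ in range(3)]
noisy = [rng.normal(size=1000)]
print(pairwise_ca_scores(honest + noisy, rng))  # honest clients score well above the noisy one
```

In a peer-prediction setting, subtracting the shuffled baseline is what removes the reward for uninformative reports; the abstract's truthfulness guarantee for the incentive mechanism rests on a property of this general kind, though the exact mechanism in the paper may differ from this sketch.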
Pages: 8