CSRA: Robust Incentive Mechanism Design for Differentially Private Federated Learning

Cited by: 2
Authors
Yang, Yunchao [1 ,2 ]
Hu, Miao [1 ,2 ]
Zhou, Yipeng [3 ]
Liu, Xuezheng [1 ,2 ]
Wu, Di [1 ,2 ]
Affiliations
[1] Sun Yat Sen Univ, Sch Comp Sci & Engn, Guangzhou 510006, Guangdong, Peoples R China
[2] Guangdong Key Lab Big Data Anal & Proc, Guangzhou 510006, Peoples R China
[3] Macquarie Univ, Fac Sci & Engn, Dept Comp, Sydney, NSW 2112, Australia
Funding
National Natural Science Foundation of China;
Keywords
Federated learning; incentive mechanism; dishonest behavior; differential privacy;
DOI
10.1109/TIFS.2023.3329441
Chinese Library Classification
TP301 [Theory and Methods];
Subject Classification Code
081202;
Abstract
The differentially private federated learning (DPFL) paradigm emerges to preserve data privacy from two perspectives. First, decentralized clients merely exchange model updates rather than raw data with a parameter server (PS) over multiple communication rounds of model training. Second, the model updates exposed to the PS are distorted by clients with differentially private (DP) noise. To incentivize clients to participate in DPFL, existing works have proposed various incentive mechanisms that reward participating clients based on their data quality and DP noise scales, assuming that all clients are honest and genuinely report their DP noise scales. However, the PS cannot directly measure or observe DP noise scales, leaving a vulnerability: clients can boost their rewards and lower DPFL utility by dishonestly reporting their DP noise scales. Through a quantitative study, we validate the adverse influence of dishonest clients in DPFL. To overcome this deficiency, we propose a robust incentive mechanism called client selection with reverse auction (CSRA) for DPFL. We prove that CSRA satisfies the properties of truthfulness, individual rationality, budget feasibility, and computational efficiency. Moreover, CSRA can thwart dishonest clients in two steps in each communication round. First, CSRA compares the variance of each client's exposed model update with its claimed DP noise scale to identify suspicious clients. Second, suspicious clients are further clustered based on their model updates to finally identify dishonest clients. Once dishonest clients are identified, CSRA not only removes them from the current round but also lowers their probability of being selected in subsequent rounds. Extensive experimental results demonstrate that CSRA provides robust incentives against dishonest clients in DPFL and significantly outperforms other baselines on three real public datasets.
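
The two-step detection described in the abstract (a per-client variance check against the claimed DP noise scale, followed by clustering of suspicious clients' updates) can be illustrated with a minimal Python sketch. This is an assumption-laden illustration, not the paper's actual CSRA algorithm: it assumes Gaussian DP noise, a reference update available at the PS to approximate the noise-free component, and hypothetical names (detect_dishonest, updates, claimed_sigma, base_update, tol).

import numpy as np
from sklearn.cluster import KMeans

def detect_dishonest(updates, claimed_sigma, base_update, tol=3.0):
    """Hypothetical sketch of the two-step check, not the paper's exact algorithm.

    updates:       dict {client_id: np.ndarray}, noisy model updates exposed to the PS
    claimed_sigma: dict {client_id: float}, Gaussian DP noise std each client claims
    base_update:   np.ndarray, reference update approximating the noise-free component
                   (an assumption of this sketch)
    tol:           tolerance factor on the variance mismatch
    """
    # Step 1: flag clients whose observed noise variance is inconsistent
    # with the variance implied by their claimed DP noise scale.
    suspicious = []
    for cid, upd in updates.items():
        empirical_var = np.var(upd - base_update)
        claimed_var = claimed_sigma[cid] ** 2
        ratio = empirical_var / max(claimed_var, 1e-12)
        if ratio > tol or ratio < 1.0 / tol:
            suspicious.append(cid)

    if len(suspicious) < 2:
        return suspicious  # too few to cluster; treat flagged clients directly

    # Step 2: cluster the suspicious clients' updates into two groups and treat
    # the group deviating most from the reference as dishonest (a heuristic
    # choice made for this sketch).
    X = np.stack([updates[cid] - base_update for cid in suspicious])
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(X)
    deviations = [np.linalg.norm(X[labels == k].mean(axis=0)) for k in (0, 1)]
    dishonest_label = int(np.argmax(deviations))
    return [cid for cid, lab in zip(suspicious, labels) if lab == dishonest_label]

In a full mechanism, the clients returned by such a check would also be removed from the current round and have their selection probability lowered in subsequent rounds, as the abstract describes; that feedback into the reverse-auction selection is omitted from this sketch.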
Pages: 892-906
Number of pages: 15