CSRA: Robust Incentive Mechanism Design for Differentially Private Federated Learning

Cited by: 2
Authors
Yang, Yunchao [1 ,2 ]
Hu, Miao [1 ,2 ]
Zhou, Yipeng [3 ]
Liu, Xuezheng [1 ,2 ]
Wu, Di [1 ,2 ]
Affiliations
[1] Sun Yat Sen Univ, Sch Comp Sci & Engn, Guangzhou 510006, Guangdong, Peoples R China
[2] Guangdong Key Lab Big Data Anal & Proc, Guangzhou 510006, Peoples R China
[3] Macquarie Univ, Fac Sci & Engn, Dept Comp, Sydney, NSW 2112, Australia
Funding
National Natural Science Foundation of China;
Keywords
Federated learning; incentive mechanism; dishonest behavior; differential privacy;
DOI
10.1109/TIFS.2023.3329441
CLC Number
TP301 [Theory, Methods];
Discipline Code
081202 ;
Abstract
The differentially private federated learning (DPFL) paradigm emerges to firmly preserve data privacy from two perspectives. First, decentralized clients merely exchange model updates rather than raw data with a parameter server (PS) over multiple communication rounds for model training. Second, model updates exposed to the PS are distorted by clients with differentially private (DP) noises. To incentivize clients to participate in DPFL, existing works have proposed various incentive mechanisms that reward participating clients based on their data quality and DP noise scales, assuming that all clients are honest and genuinely report their DP noise scales. However, the PS cannot directly measure or observe DP noise scales, leaving a vulnerability: clients can boost their rewards and lower DPFL utility by dishonestly reporting their DP noise scales. Through a quantitative study, we validate the adverse influence of dishonest clients in DPFL. To overcome this deficiency, we propose a robust incentive mechanism called client selection with reverse auction (CSRA) for DPFL. We prove that CSRA satisfies the properties of truthfulness, individual rationality, budget feasibility, and computational efficiency. Moreover, CSRA can thwart dishonest clients in two steps in each communication round. First, CSRA compares the variance of each client's exposed model update against its claimed DP noise scale to identify suspicious clients. Second, suspicious clients are further clustered based on their model updates to finally identify dishonest clients. Once dishonest clients are identified, CSRA not only removes them from the current round but also lowers their probability of being selected in subsequent rounds. Extensive experimental results demonstrate that CSRA provides robust incentives against dishonest clients in DPFL and significantly outperforms other baselines on three real public datasets.
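The variance check in the first detection step can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's actual procedure: the function name `is_suspicious`, the relative tolerance `tol`, and the use of a noise-free reference update (e.g. the previous global update) as a stand-in for the client's true update are all hypothetical choices for this sketch.

```python
import numpy as np

def is_suspicious(noisy_update, claimed_sigma, reference_update, tol=0.5):
    """Flag a client whose claimed DP noise scale disagrees with the
    noise variance actually observed in its exposed model update.

    `reference_update` stands in for the client's noise-free update,
    which the server can only approximate in practice."""
    noise_estimate = noisy_update - reference_update
    var_est = np.var(noise_estimate)
    # Compare observed variance against the claimed sigma^2,
    # allowing a relative deviation of `tol`.
    return abs(var_est - claimed_sigma**2) > tol * claimed_sigma**2

# Toy check: an honest client reports its true noise scale, while a
# dishonest one adds little noise but claims a large scale to boost reward.
rng = np.random.default_rng(0)
d = 10_000
reference = np.zeros(d)                       # stand-in "true" update
honest = reference + rng.normal(0.0, 1.0, d)  # adds sigma=1, claims sigma=1
liar = reference + rng.normal(0.0, 0.2, d)    # adds sigma=0.2, claims sigma=2

print(is_suspicious(honest, 1.0, reference))  # False
print(is_suspicious(liar, 2.0, reference))    # True
```

In the paper's pipeline, clients flagged this way are only marked suspicious; a subsequent clustering of their model updates decides who is actually dishonest, which guards against false positives from sampling variance alone.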
Pages: 892-906
Page count: 15
Related Papers
50 records in total
  • [41] Incentive Mechanism Design for Multi-Round Federated Learning With a Single Budget
    Ren, Zhihao
    Zhang, Xinglin
    Ng, Wing W. Y.
    Zhang, Junna
    IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING, 2025, 12 (01): : 198 - 209
  • [42] Incentive mechanism design for Federated Learning with Stackelberg game perspective in the industrial scenario
    Guo, Wei
    Wang, Yijin
    Jiang, Pingyu
    COMPUTERS & INDUSTRIAL ENGINEERING, 2023, 184
  • [43] Incentive Mechanism Design For Federated Learning in Multi-access Edge Computing
    Liu, Jingyuan
    Chang, Zheng
    Min, Geyong
    Han, Zhu
    2022 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM 2022), 2022, : 3454 - 3459
  • [44] Federated Learning Incentive Mechanism Design via Shapley Value and Pareto Optimality
    Yang, Xun
    Xiang, Shuwen
    Peng, Changgen
    Tan, Weijie
    Li, Zhen
    Wu, Ningbo
    Zhou, Yan
    AXIOMS, 2023, 12 (07)
  • [45] Federated Learning Incentive Mechanism Design via Enhanced Shapley Value Method
    Yang, Xun
    Tan, Weijie
    Peng, Changgen
    Xiang, Shuwen
    Niu, Kun
    WIRELESS COMMUNICATIONS & MOBILE COMPUTING, 2022, 2022
  • [46] FLAME: Differentially Private Federated Learning in the Shuffle Model
    Liu, Ruixuan
    Cao, Yang
    Chen, Hong
    Guo, Ruoyang
    Yoshikawa, Masatoshi
    THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 8688 - 8696
  • [47] Local differentially private federated learning with homomorphic encryption
    Zhao, Jianzhe
    Huang, Chenxi
    Wang, Wenji
    Xie, Rulin
    Dong, Rongrong
    Matwin, Stan
    THE JOURNAL OF SUPERCOMPUTING, 2023, 79 : 19365 - 19395
  • [48] Differentially Private Federated Learning with Heterogeneous Group Privacy
    Jiang, Mingna
    Wei, Linna
    Cai, Guoyue
    Wu, Xuangou
    2023 IEEE INTERNATIONAL CONFERENCES ON INTERNET OF THINGS (ITHINGS), IEEE GREEN COMPUTING AND COMMUNICATIONS (GREENCOM), IEEE CYBER, PHYSICAL AND SOCIAL COMPUTING (CPSCOM), IEEE SMART DATA (SMARTDATA) AND IEEE CONGRESS ON CYBERMATICS, 2024, : 143 - 150
  • [49] FLDS: differentially private federated learning with double shufflers
    Qi, Qingqiang
    Yang, Xingye
    Hu, Chengyu
    Tang, Peng
    Su, Zhiyuan
    Guo, Shanqing
    COMPUTER JOURNAL, 2024,
  • [50] DPAUC: Differentially Private AUC Computation in Federated Learning
    Sun, Jiankai
    Yang, Xin
    Yao, Yuanshun
    Xie, Junyuan
    Wu, Di
    Wang, Chong
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 12, 2023, : 15170 - 15178