CSRA: Robust Incentive Mechanism Design for Differentially Private Federated Learning

Cited by: 2
Authors
Yang, Yunchao [1 ,2 ]
Hu, Miao [1 ,2 ]
Zhou, Yipeng [3 ]
Liu, Xuezheng [1 ,2 ]
Wu, Di [1 ,2 ]
Affiliations
[1] Sun Yat Sen Univ, Sch Comp Sci & Engn, Guangzhou 510006, Guangdong, Peoples R China
[2] Guangdong Key Lab Big Data Anal & Proc, Guangzhou 510006, Peoples R China
[3] Macquarie Univ, Fac Sci & Engn, Dept Comp, Sydney, NSW 2112, Australia
Funding
National Natural Science Foundation of China;
Keywords
Federated learning; incentive mechanism; dishonest behavior; differential privacy;
DOI
10.1109/TIFS.2023.3329441
Chinese Library Classification
TP301 [Theory, Methods];
Discipline Code
081202 ;
Abstract
The differentially private federated learning (DPFL) paradigm has emerged to firmly preserve data privacy from two perspectives. First, decentralized clients exchange only model updates, rather than raw data, with a parameter server (PS) over multiple communication rounds of model training. Second, the model updates exposed to the PS are distorted by clients with differentially private (DP) noises. To incentivize clients to participate in DPFL, existing works have proposed various incentive mechanisms that reward participating clients based on their data quality and DP noise scales, assuming that all clients are honest and genuinely report their DP noise scales. However, the PS cannot directly measure or observe DP noise scales, leaving a vulnerability: clients can boost their rewards and lower DPFL utility by dishonestly reporting their DP noise scales. Through a quantitative study, we validate the adverse influence of dishonest clients on DPFL. To overcome this deficiency, we propose a robust incentive mechanism called client selection with reverse auction (CSRA) for DPFL. We prove that CSRA satisfies the properties of truthfulness, individual rationality, budget feasibility and computational efficiency. Besides, CSRA can thwart dishonest clients in two steps in each communication round. First, CSRA compares the variance of each client's exposed model update against that client's claimed DP noise scale to identify suspicious clients. Second, suspicious clients are further clustered based on their model updates to finally identify dishonest clients. Once dishonest clients are identified, CSRA not only removes them from the current round but also lowers their probability of being selected in subsequent rounds. Extensive experimental results demonstrate that CSRA provides robust incentives against dishonest clients in DPFL and significantly outperforms other baselines on three real public datasets.
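The first detection step described in the abstract, comparing the empirical variance of an exposed model update against the variance implied by the client's claimed DP noise scale, can be sketched as follows. This is a minimal illustration, not the paper's exact test: the function name, the use of a noise-free reference update to isolate the injected noise, and the relative tolerance threshold are all assumptions made here for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def flags_suspicious(update, reference, claimed_sigma, tolerance=0.5):
    """Flag a client whose exposed update carries noise variance that
    deviates from the variance implied by its claimed DP noise scale.

    `reference` stands in for an estimate of the client's noise-free
    update; `tolerance` is the allowed relative deviation. Both are
    illustrative choices.
    """
    residual = update - reference          # isolate the injected noise
    empirical_var = residual.var()         # observed noise variance
    claimed_var = claimed_sigma ** 2       # variance implied by the claim
    return abs(empirical_var - claimed_var) > tolerance * claimed_var

# An honest client adds Gaussian noise at exactly the scale it claims.
true_update = np.zeros(10_000)
honest = true_update + rng.normal(0.0, 0.5, size=10_000)
# A dishonest client claims sigma = 0.5 but injects far less noise,
# making its update more useful and inflating its reward.
dishonest = true_update + rng.normal(0.0, 0.1, size=10_000)

print(flags_suspicious(honest, true_update, claimed_sigma=0.5))     # False
print(flags_suspicious(dishonest, true_update, claimed_sigma=0.5))  # True
```

In practice the server has no noise-free reference, which is why the abstract's second step clusters the suspicious clients' model updates before finally labeling any client dishonest.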
Pages: 892-906
Page count: 15
Related Papers
50 records in total
  • [1] Incentive Mechanism for Differentially Private Federated Learning in Industrial Internet of Things
    Xu, Yin
    Xiao, Mingjun
    Tan, Haisheng
    Liu, An
    Gao, Guoju
    Yan, Zhaoyang
    IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2022, 18 (10) : 6927 - 6939
  • [2] Game Analysis and Incentive Mechanism Design for Differentially Private Cross-Silo Federated Learning
    Mao, Wuxing
    Ma, Qian
    Liao, Guocheng
    Chen, Xu
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2024, 23 (10) : 9337 - 9351
  • [3] The Skellam Mechanism for Differentially Private Federated Learning
    Agarwal, Naman
    Kairouz, Peter
    Liu, Ziyu
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021,
  • [4] Incentive Mechanism Design for Federated Learning with Multi-Dimensional Private Information
    Ding, Ningning
    Fang, Zhixuan
    Huang, Jianwei
    2020 18TH INTERNATIONAL SYMPOSIUM ON MODELING AND OPTIMIZATION IN MOBILE, AD HOC, AND WIRELESS NETWORKS (WIOPT), 2020,
  • [5] Distributionally Robust Federated Learning for Differentially Private Data
    Shi, Siping
    Hu, Chuang
    Wang, Dan
    Zhu, Yifei
    Han, Zhu
    2022 IEEE 42ND INTERNATIONAL CONFERENCE ON DISTRIBUTED COMPUTING SYSTEMS (ICDCS 2022), 2022, : 842 - 852
  • [6] Differentially Private Byzantine-Robust Federated Learning
    Ma, Xu
    Sun, Xiaoqian
    Wu, Yuduo
    Liu, Zheli
    Chen, Xiaofeng
    Dong, Changyu
    IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2022, 33 (12) : 3690 - 3701
  • [7] A Survey of Incentive Mechanism Design for Federated Learning
    Zhan, Yufeng
    Zhang, Jie
    Hong, Zicong
    Wu, Leijie
    Li, Peng
    Guo, Song
    IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTING, 2022, 10 (02) : 1035 - 1044
  • [8] Incentive Mechanism Design for Federated Learning and Unlearning
    Ding, Ningning
    Sun, Zhenyu
    Wei, Ermin
    Berry, Randall
    PROCEEDINGS OF THE 2023 INTERNATIONAL SYMPOSIUM ON THEORY, ALGORITHMIC FOUNDATIONS, AND PROTOCOL DESIGN FOR MOBILE NETWORKS AND MOBILE COMPUTING, MOBIHOC 2023, 2023, : 11 - 20
  • [9] Incentive Mechanism Design for Vertical Federated Learning
    Yang, Ni
    Cheung, Man Hon
    ICC 2023-IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS, 2023, : 3054 - 3059
  • [10] Differentially Private Federated Learning With an Adaptive Noise Mechanism
    Xue, Rui
    Xue, Kaiping
    Zhu, Bin
    Luo, Xinyi
    Zhang, Tianwei
    Sun, Qibin
    Lu, Jun
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 74 - 87