Fortifying Federated Learning against Membership Inference Attacks via Client-level Input Perturbation

Cited by: 7
Authors
Yang, Yuchen [1 ]
Yuan, Haolin [1 ]
Hui, Bo [1 ]
Gong, Neil [2 ]
Fendley, Neil [1 ,3 ]
Burlina, Philippe [3 ]
Cao, Yinzhi [1 ]
Institutions
[1] Johns Hopkins Univ, Baltimore, MD USA
[2] Duke Univ, Durham, NC USA
[3] Johns Hopkins Appl Phys Lab, Laurel, MD USA
Funding
US National Science Foundation;
Keywords
RISK;
DOI
10.1109/DSN58367.2023.00037
CLC Classification Number
TP3 [Computing Technology, Computer Technology];
Discipline Code
0812 ;
Abstract
Membership inference (MI) attacks are more diverse in a Federated Learning (FL) setting, because an adversary may be an FL client, the server, or an external attacker. Existing defenses against MI attacks rely on perturbing either the model's output predictions or the training process. However, output perturbations are ineffective in an FL setting, because a malicious server can access the model without output perturbation, while training perturbations struggle to preserve utility. This paper proposes a novel defense, called CIP, to fortify FL against MI attacks via a client-level input perturbation applied during both training and inference. The key insight is to shift each client's local data distribution via a personalized perturbation, yielding a shifted model. CIP achieves a good balance between privacy and utility: our evaluation shows that CIP causes at most a 0.7% accuracy drop while reducing attack success to random guessing.
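The abstract's core idea, a fixed, client-specific shift applied to inputs at both training and inference time, can be illustrated with a minimal sketch. This is not the paper's implementation; the function names, additive noise in input space, and the seeding scheme are all illustrative assumptions.

```python
import numpy as np

def make_client_perturbation(client_id, shape, scale=0.1):
    # Hypothetical: derive a fixed, client-specific perturbation
    # deterministically from the client's id, so the same shift is
    # reproduced at every training round and at inference time.
    client_rng = np.random.default_rng(client_id)
    return scale * client_rng.standard_normal(shape)

def perturb(inputs, delta):
    # Apply the same additive shift to every example for this client,
    # keeping inputs in the valid [0, 1] range; the locally trained
    # model thus fits a shifted data distribution.
    return np.clip(inputs + delta, 0.0, 1.0)

# Usage: two clients obtain different personalized shifts.
rng = np.random.default_rng(0)
x = rng.random((4, 8))                      # a batch of inputs in [0, 1]
d1 = make_client_perturbation(1, x.shape[1:])
d2 = make_client_perturbation(2, x.shape[1:])
x1, x2 = perturb(x, d1), perturb(x, d2)
```

Because the perturbation is deterministic per client, each client's "shifted" model remains self-consistent across rounds while differing from every other client's, which is the property the defense relies on.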
Pages: 288 - 301 (14 pages)
Related Papers
50 items in total
  • [31] Leveraging Multiple Adversarial Perturbation Distances for Enhanced Membership Inference Attack in Federated Learning
    Xia, Fan
    Liu, Yuhao
    Jin, Bo
    Yu, Zheng
    Cai, Xingwei
    Li, Hao
    Zha, Zhiyong
    Hou, Dai
    Peng, Kai
    SYMMETRY-BASEL, 2024, 16 (12):
  • [32] A generative adversarial network-based client-level handwriting forgery attack in federated learning scenario
    Shi, Lei
    Wu, Han
    Ding, Xu
    Xu, Hao
    Pan, Sinan
    EXPERT SYSTEMS, 2025, 42 (02)
  • [33] Membership Inference Attacks against Language Models via Neighbourhood Comparison
    Mattern, Justus
    Mireshghallah, Fatemehsadat
    Jin, Zhijing
    Schoelkopf, Bernhard
    Sachan, Mrinmaya
    Berg-Kirkpatrick, Taylor
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2023), 2023, : 11330 - 11343
  • [34] Link Membership Inference Attacks against Unsupervised Graph Representation Learning
    Wang, Xiuling
    Wang, Wendy Hui
    39TH ANNUAL COMPUTER SECURITY APPLICATIONS CONFERENCE, ACSAC 2023, 2023, : 477 - 491
  • [35] Towards Securing Machine Learning Models Against Membership Inference Attacks
    Ben Hamida, Sana
    Mrabet, Hichem
    Belguith, Sana
    Alhomoud, Adeeb
    Jemai, Abderrazak
    CMC-COMPUTERS MATERIALS & CONTINUA, 2022, 70 (03): 4897 - 4919
  • [36] Shielding Federated Learning Systems against Inference Attacks with ARM TrustZone
    Messaoud, Aghiles Ait
    Ben Mokhtar, Sonia
    Nitu, Vlad
    Schiavoni, Valerio
    PROCEEDINGS OF THE TWENTY-THIRD ACM/IFIP INTERNATIONAL MIDDLEWARE CONFERENCE, MIDDLEWARE 2022, 2022, : 335 - 348
  • [37] A defense mechanism against label inference attacks in Vertical Federated Learning
    Arazzi, Marco
    Nicolazzo, Serena
    Nocera, Antonino
    NEUROCOMPUTING, 2025, 624
  • [38] GradDiff: Gradient-based membership inference attacks against federated distillation with differential comparison
    Wang, Xiaodong
    Wu, Longfei
    Guan, Zhitao
    INFORMATION SCIENCES, 2024, 658
  • [39] User-Level Membership Inference for Federated Learning in Wireless Network Environment
    Zhao, Yanchao
    Chen, Jiale
    Zhang, Jiale
    Yang, Zilu
    Tu, Huawei
    Han, Hao
    Zhu, Kun
    Chen, Bing
    WIRELESS COMMUNICATIONS & MOBILE COMPUTING, 2021, 2021
  • [40] TrustBandit: Optimizing Client Selection for Robust Federated Learning Against Poisoning Attacks
    Deressa, Biniyam
    Hasan, M. Anwar
    IEEE INFOCOM 2024-IEEE CONFERENCE ON COMPUTER COMMUNICATIONS WORKSHOPS, INFOCOM WKSHPS 2024, 2024