Fortifying Federated Learning against Membership Inference Attacks via Client-level Input Perturbation

Cited by: 7
Authors
Yang, Yuchen [1 ]
Yuan, Haolin [1 ]
Hui, Bo [1 ]
Gong, Neil [2 ]
Fendley, Neil [1 ,3 ]
Burlina, Philippe [3 ]
Cao, Yinzhi [1 ]
Affiliations
[1] Johns Hopkins Univ, Baltimore, MD USA
[2] Duke Univ, Durham, NC USA
[3] Johns Hopkins Appl Phys Lab, Laurel, MD USA
Funding
National Science Foundation (US);
Keywords
RISK;
DOI
10.1109/DSN58367.2023.00037
CLC number
TP3 [Computing Technology, Computer Technology];
Subject classification code
0812 ;
Abstract
Membership inference (MI) attacks are more diverse in a Federated Learning (FL) setting, because an adversary may be an FL client, the server, or an external attacker. Existing defenses against MI attacks rely on perturbing either the model's output predictions or the training process. However, output perturbations are ineffective in an FL setting because a malicious server can access the model before any output perturbation is applied, while training perturbations struggle to preserve utility. This paper proposes a novel defense, called CIP, that fortifies FL against MI attacks via a client-level input perturbation applied during both training and inference. The key insight is to shift each client's local data distribution via a personalized perturbation, yielding a shifted model. CIP achieves a good balance between privacy and utility: our evaluation shows that CIP causes at most a 0.7% accuracy drop while reducing attacks to random guessing.
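The core idea described above can be illustrated with a minimal sketch: each client derives a fixed, personalized perturbation and adds it to its inputs at both training and inference time, shifting its local data distribution. The class name `CIPClient`, the Gaussian perturbation, and the `scale` parameter are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

class CIPClient:
    """Illustrative sketch of a client-level input perturbation.

    Assumption: the personalized perturbation is a fixed random shift
    seeded by the client id, so it is stable across training rounds
    and reused at inference time.
    """

    def __init__(self, client_id, input_shape, scale=0.1):
        # One fixed perturbation per client, derived deterministically
        # from the client id.
        client_rng = np.random.default_rng(client_id)
        self.delta = scale * client_rng.standard_normal(input_shape)

    def perturb(self, x):
        # Shift this client's local data distribution by its delta;
        # applied to every input during both training and inference.
        return x + self.delta

# Two clients perturb the same input differently, so each client's
# shifted distribution (and hence shifted model) is personalized.
x = np.zeros((4,))
a, b = CIPClient(1, (4,)), CIPClient(2, (4,))
xa, xb = a.perturb(x), b.perturb(x)
```

Because the shift is deterministic per client, the same perturbation used during training is available at inference, which is what keeps the shifted model's accuracy on perturbed inputs close to the original.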
Pages: 288 - 301
Page count: 14
Related Papers
50 items total
  • [41] Fortifying graph neural networks against adversarial attacks via ensemble learning
    Zhou, Chenyu
    Huang, Wei
    Miao, Xinyuan
    Peng, Yabin
    Kong, Xianglong
    Cao, Yi
    Chen, Xi
    KNOWLEDGE-BASED SYSTEMS, 2025, 309
  • [42] Client-specific Property Inference against Secure Aggregation in Federated Learning
    Kerkouche, Raouf
    Acs, Gergely
    Fritz, Mario
    PROCEEDINGS OF THE 22ND WORKSHOP ON PRIVACY IN THE ELECTRONIC SOCIETY, WPES 2023, 2023, : 44 - 59
  • [43] Defending against membership inference attacks: RM Learning is all you need
    Zhang, Zheng
    Ma, Jianfeng
    Ma, Xindi
    Yang, Ruikang
    Wang, Xiangyu
    Zhang, Junying
    INFORMATION SCIENCES, 2024, 670
  • [44] Membership Inference Attacks Against Temporally Correlated Data in Deep Reinforcement Learning
    Gomrokchi, Maziar
    Amin, Susan
    Aboutalebi, Hossein
    Wong, Alexander
    Precup, Doina
    IEEE ACCESS, 2023, 11 : 42796 - 42808
  • [45] Efficient Privacy-Preserving Federated Learning Against Inference Attacks for IoT
    Miao, Yifeng
    Chen, Siguang
    2023 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE, WCNC, 2023,
  • [46] FLSG: A Novel Defense Strategy Against Inference Attacks in Vertical Federated Learning
    Fan, Kai
    Hong, Jingtao
    Li, Wenjie
    Zhao, Xingwen
    Li, Hui
    Yang, Yintang
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (02) : 1816 - 1826
  • [47] Semi-Leak: Membership Inference Attacks Against Semi-supervised Learning
    He, Xinlei
    Liu, Hongbin
    Gong, Neil Zhenqiang
    Zhang, Yang
    COMPUTER VISION, ECCV 2022, PT XXXI, 2022, 13691 : 365 - 381
  • [48] Defending Batch-Level Label Inference and Replacement Attacks in Vertical Federated Learning
    Zou, Tianyuan
    Liu, Yang
    Kang, Yan
    Liu, Wenhan
    He, Yuanqin
    Yi, Zhihao
    Yang, Qiang
    Zhang, Ya-Qin
    IEEE TRANSACTIONS ON BIG DATA, 2024, 10 (06) : 1016 - 1027
  • [49] MIXNN: Protection of Federated Learning Against Inference Attacks by Mixing Neural Network Layers
    Lebrun, Thomas
    Boutet, Antoine
    Aalmoes, Jan
    Baud, Adrien
    PROCEEDINGS OF THE TWENTY-THIRD ACM/IFIP INTERNATIONAL MIDDLEWARE CONFERENCE, MIDDLEWARE 2022, 2022, : 135 - 147
  • [50] Digestive neural networks: A novel defense strategy against inference attacks in federated learning
    Lee, Hongkyu
    Kim, Jeehyeong
    Ahn, Seyoung
    Hussain, Rasheed
    Cho, Sunghyun
    Son, Junggab
    COMPUTERS & SECURITY, 2021, 109