Efficient Membership Inference Attacks against Federated Learning via Bias Differences

Cited by: 1
Authors
Zhang, Liwei [1 ]
Li, Linghui [1 ]
Li, Xiaoyong [1 ]
Cai, Binsi [1 ]
Gao, Yali [1 ]
Dou, Ruobin [2 ]
Chen, Luying [3 ]
Affiliations
[1] Beijing Univ Posts & Telecommun, Key Lab Trustworthy Distributed Comp & Serv MoE, Beijing, Peoples R China
[2] China Mobile Grp Tianjin Co Ltd, Tianjin, Peoples R China
[3] HAOHAN Data Technol Co Ltd, Beijing, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Federated learning; membership inference attack; bias; privacy
DOI
10.1145/3607199.3607204
Chinese Library Classification (CLC)
TP [automation and computer technology]
Discipline code
0812
Abstract
Federated learning aims to train models without sharing private data, yet many privacy risks remain, and recent studies have shown that it is vulnerable to membership inference attacks. Model weights, as important neural-network parameters, have proven effective as attack features, but exploiting them incurs significant overhead. To address this issue, we propose a bias-based method for efficient membership inference attacks against federated learning. Whereas the weights determine the orientation of the decision surface, the bias determines how far the surface shifts along that orientation, so it also carries membership information; moreover, a network has far fewer bias parameters than weights. We consider two types of attacks, a local attack and a global attack, corresponding to the two possible insiders: a participant and the central aggregator. For the local attack, we design a neural network-based inference that learns the vertical bias changes induced by member and non-member data. For the global attack, we design a difference comparison-based inference that determines the source of a data record. Extensive experiments on four public datasets show that the proposed method achieves state-of-the-art inference accuracy, and further experiments demonstrate its effectiveness against several commonly used defenses.
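To make the abstract's idea concrete, here is a minimal sketch, not the authors' implementation, of how bias differences could serve as attack features: it assumes the local-attack features are the bias shifts a candidate record induces across observed global-model rounds, fed to a small classifier, and that the global attack scores a record by comparing bias updates. All names (bias_vector, bias_difference_features, AttackNet, difference_comparison_score) are hypothetical.

```python
# Illustrative sketch only (hypothetical names, not the paper's code).
import copy

import torch
import torch.nn as nn


def bias_vector(model: nn.Module) -> torch.Tensor:
    """Flatten and concatenate every bias parameter of a model."""
    return torch.cat([p.detach().flatten()
                      for name, p in model.named_parameters()
                      if name.endswith("bias")])


def bias_difference_features(round_models, record, loss_fn, lr=0.01):
    """Assumed feature extraction: for each observed global-model snapshot,
    take one local gradient step on the candidate record and log how the
    biases move; member records tend to move the biases differently."""
    shifts = []
    for snapshot in round_models:
        model = copy.deepcopy(snapshot)        # keep the snapshot intact
        before = bias_vector(model)
        model.zero_grad()
        x, y = record
        loss_fn(model(x), y).backward()
        with torch.no_grad():
            for p in model.parameters():       # one plain SGD step
                if p.grad is not None:
                    p -= lr * p.grad
        shifts.append(bias_vector(model) - before)
    return torch.cat(shifts)                   # concatenated per-round shifts


class AttackNet(nn.Module):
    """Small binary classifier over bias-difference features (local attack)."""
    def __init__(self, in_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 2))

    def forward(self, feats):
        return self.net(feats)                 # logits: [non-member, member]


def difference_comparison_score(observed_update, induced_update):
    """Global-attack sketch: the closer the bias update a record induces is
    to a client's observed bias update, the more likely it is a member."""
    return -torch.norm(observed_update - induced_update).item()


if __name__ == "__main__":
    torch.manual_seed(0)
    # Three toy "global model" snapshots and one candidate record.
    rounds = [nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
              for _ in range(3)]
    record = (torch.randn(1, 4), torch.tensor([1]))
    feats = bias_difference_features(rounds, record, nn.CrossEntropyLoss())
    print(AttackNet(feats.numel())(feats).tolist())
```

Note how small the feature vector stays: a layer with n outputs contributes n biases but n times fan-in weights, which is the efficiency argument the abstract makes for bias-based features over weight-based ones.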
Pages: 222-235
Page count: 14