TEAR: Exploring Temporal Evolution of Adversarial Robustness for Membership Inference Attacks Against Federated Learning

Cited by: 9
Authors
Liu, Gaoyang [1 ,2 ]
Tian, Zehao [1 ]
Chen, Jian [1 ]
Wang, Chen [1 ]
Liu, Jiangchuan [2 ]
Affiliations
[1] Huazhong Univ Sci & Technol, Sch Elect Informat & Commun, Hubei Key Lab Smart Internet Technol, Wuhan 430074, Peoples R China
[2] Simon Fraser Univ, Sch Comp Sci, Burnaby, BC V5A 1S6, Canada
Funding
National Natural Science Foundation of China;
Keywords
Federated learning; membership inference attack; adversarial robustness; temporal evolution;
DOI
10.1109/TIFS.2023.3303718
CLC number
TP301 [Theory and Methods];
Discipline code
081202;
Abstract
Federated learning (FL) is a privacy-preserving machine learning paradigm that enables multiple clients to train a unified model without disclosing their private data. However, FL models tend to overfit their training data during training, making them susceptible to membership inference attacks (MIAs): an adversary can exploit the subtle differences in the model's parameters, activations, or predictions between training and testing data to infer membership information. Notably, most if not all existing MIAs against FL require access to the model's internal information or modification of the training process, rendering them impractical to mount in reality. In this paper, we present TEAR, the first evidence that an honest-but-curious federated client can perform an MIA against an FL system by exploring the Temporal Evolution of the Adversarial Robustness between training and non-training data. We design a novel adversarial example generation method to quantify a target sample's adversarial robustness, from which membership features are extracted to train the inference model in a supervised manner. Extensive experimental results on five realistic datasets demonstrate that TEAR achieves strong inference performance compared with two existing MIAs and can evade two representative defenses.
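The core intuition above — training samples tend to be more adversarially robust than non-training samples, and this gap evolves across FL rounds — can be illustrated with a deliberately minimal toy sketch. The code below is not the paper's actual attack: it uses a hypothetical linear classifier and an FGSM-style search for the smallest L-infinity budget that flips the prediction, standing in for TEAR's more involved adversarial example generator; the function names and the two-round "snapshot" setup are illustrative assumptions.

```python
import numpy as np

def adversarial_robustness(w, b, x, y, step=0.01, max_steps=500):
    """Estimate a sample's adversarial robustness as the smallest FGSM-style
    L_inf perturbation budget that flips a linear classifier's prediction
    on (x, y). Hypothetical stand-in for TEAR's robustness measure."""
    # Step against the true class: for y=1, push the score sign(w)-downward.
    direction = np.sign(w) if y == 1 else -np.sign(w)
    for k in range(1, max_steps + 1):
        eps = k * step
        x_adv = x - eps * direction              # perturb toward the boundary
        pred = int(float(w @ x_adv + b) > 0)
        if pred != y:
            return eps                           # budget needed to flip the label
    return max_steps * step                      # never flipped within the cap

def membership_features(snapshots, x, y):
    """Temporal-evolution feature vector: the sample's robustness measured
    against each round's model snapshot (a list of (w, b) pairs)."""
    return np.array([adversarial_robustness(w, b, x, y) for w, b in snapshots])
```

In a supervised attack, such feature vectors — computed for samples with known membership — would train a binary inference model; members, sitting farther from the decision boundary, need a larger perturbation budget than non-members at every round.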
Pages: 4996-5010
Page count: 15
Related papers
50 records in total
  • [1] Defending against Membership Inference Attacks in Federated learning via Adversarial Example
    Xie, Yuanyuan
    Chen, Bing
    Zhang, Jiale
    Wu, Di
    2021 17TH INTERNATIONAL CONFERENCE ON MOBILITY, SENSING AND NETWORKING (MSN 2021), 2021, : 153 - 160
  • [2] Source Inference Attacks: Beyond Membership Inference Attacks in Federated Learning
    Hu, Hongsheng
    Zhang, Xuyun
    Salcic, Zoran
    Sun, Lichao
    Choo, Kim-Kwang Raymond
    Dobbie, Gillian
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2024, 21 (04) : 3012 - 3029
  • [3] Enhance membership inference attacks in federated learning
    He, Xinlong
    Xu, Yang
    Zhang, Sicong
    Xu, Weida
    Yan, Jiale
    COMPUTERS & SECURITY, 2024, 136
  • [4] Efficient Membership Inference Attacks against Federated Learning via Bias Differences
    Zhang, Liwei
    Li, Linghui
    Li, Xiaoyong
    Cai, Binsi
    Gao, Yali
    Dou, Ruobin
    Chen, Luying
    PROCEEDINGS OF THE 26TH INTERNATIONAL SYMPOSIUM ON RESEARCH IN ATTACKS, INTRUSIONS AND DEFENSES, RAID 2023, 2023, : 222 - 235
  • [5] FD-Leaks: Membership Inference Attacks Against Federated Distillation Learning
    Yang, Zilu
    Zhao, Yanchao
    Zhang, Jiale
    WEB AND BIG DATA, PT III, APWEB-WAIM 2022, 2023, 13423 : 364 - 378
  • [6] Membership Inference Attacks and Defenses in Federated Learning: A Survey
    Bai, Li
    Hu, Haibo
    Ye, Qingqing
    Li, Haoyang
    Wang, Leixia
    Xu, Jianliang
    ACM COMPUTING SURVEYS, 2025, 57 (04)
  • [7] Comparative Analysis of Membership Inference Attacks in Federated and Centralized Learning
    Abbasi Tadi, Ali
    Dayal, Saroj
    Alhadidi, Dima
    Mohammed, Noman
    INFORMATION, 2023, 14 (11)
  • [8] Exploring Adversarial Attacks in Federated Learning for Medical Imaging
    Darzi, Erfan
    Dubost, Florian
Sijtsema, Nanna M.
    van Ooijen, P. M. A.
    IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2024, 20 (12) : 13591 - 13599
  • [9] LoDen: Making Every Client in Federated Learning a Defender Against the Poisoning Membership Inference Attacks
    Ma, Mengyao
    Zhang, Yanjun
    Chamikara, M. A. P.
    Zhang, Leo Yu
    Chhetri, Mohan Baruwal
    Bai, Guangdong
    PROCEEDINGS OF THE 2023 ACM ASIA CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, ASIA CCS 2023, 2023, : 122 - 135
  • [10] EAR: An Enhanced Adversarial Regularization Approach against Membership Inference Attacks
    Hu, Hongsheng
    Salcic, Zoran
    Dobbie, Gillian
    Chen, Yi
    Zhang, Xuyun
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,