FedMUA: Exploring the Vulnerabilities of Federated Learning to Malicious Unlearning Attacks

Times Cited: 0
Authors
Chen, Jian [1 ]
Lin, Zehui [1 ]
Lin, Wanyu [1 ,2 ]
Shi, Wenlong [3 ]
Yin, Xiaoyan [4 ]
Wang, Di [5 ]
Affiliations
[1] Hong Kong Polytech Univ, Dept Data Sci & Artificial Intelligence, Hong Kong, Peoples R China
[2] Hong Kong Polytech Univ, Dept Comp, Hong Kong, Peoples R China
[3] Huazhong Univ Sci & Technol, Sch Elect Informat & Commun, Wuhan 430074, Peoples R China
[4] Northwest Univ, Sch Informat Sci & Technol, Xian 710069, Peoples R China
[5] King Abdullah Univ Sci & Technol, Div Comp Elect & Math Sci & Engn, Thuwal 23955, Saudi Arabia
Keywords
Predictive models; Data models; Servers; Federated learning; Computational modeling; Training; Training data; Robustness; General Data Protection Regulation; Distributed databases; unlearning attacks; targeted attacks
DOI
10.1109/TIFS.2025.3531141
Chinese Library Classification
TP301 [Theory and Methods]
Discipline Code
081202
Abstract
Recently, the practical need for "the right to be forgotten" in federated learning gave rise to a paradigm known as federated unlearning, which enables the server to forget personal data upon a client's removal request. Existing studies on federated unlearning have primarily focused on efficiently eliminating the influence of the requested data from the client's model without retraining from scratch; however, they have rarely questioned the reliability of the global model in light of the discrepancy between its prediction behavior before and after unlearning. To bridge this gap, we take the first step by introducing a novel malicious unlearning attack, dubbed FedMUA, which unveils potential vulnerabilities of federated learning during the unlearning process. Specifically, clients may act as attackers by crafting malicious unlearning requests to manipulate the prediction behavior of the global model. The crux of FedMUA is to mislead the global model into unlearning more information associated with the influential samples of a target sample than anticipated, thereby inducing adverse effects on target samples from other clients. To achieve this, we design a novel two-step method, consisting of Influential Sample Identification and Malicious Unlearning Generation, to identify influential samples and then generate malicious feature unlearning requests based on them. In this way, issuing the malicious feature unlearning requests significantly alters the predictions for the target sample, deliberately harming the affected user. Additionally, we design a new defense mechanism that is highly resilient against malicious unlearning attacks. Extensive experiments on three real-world datasets reveal that FedMUA effectively induces misclassification on target samples and achieves an 80% attack success rate while triggering only 0.3% malicious unlearning requests.
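The abstract only names the two steps, so the following is a minimal sketch of how such an attack could plausibly be structured, assuming an influence-function-style score (gradient alignment under an identity-Hessian approximation) for Influential Sample Identification and a gradient-based feature perturbation for Malicious Unlearning Generation. All names here (influence_scores, craft_request, the step sizes) are illustrative assumptions, not the paper's actual method or API.

```python
import torch

def influence_scores(model, loss_fn, train_data, target_x, target_y):
    """Step 1 (illustrative): rank local training samples by how much their
    removal would affect the target sample, via the gradient dot product
    I(z, z_t) ~ grad_theta L(z_t) . grad_theta L(z), i.e. an influence
    function with the Hessian approximated by the identity."""
    params = [p for p in model.parameters() if p.requires_grad]

    def param_grad(x, y):
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        return torch.autograd.grad(loss, params)

    g_target = param_grad(target_x, target_y)
    return [
        sum((gt * gi).sum() for gt, gi in zip(g_target, param_grad(x, y))).item()
        for x, y in train_data
    ]

def craft_request(model, loss_fn, x, y, g_target, step=0.05, n_iters=10):
    """Step 2 (illustrative): perturb an influential sample's features to
    increase its gradient alignment with the target sample, then submit the
    perturbed sample as a feature unlearning request, so the server unlearns
    more target-relevant information than a benign request would."""
    params = [p for p in model.parameters() if p.requires_grad]
    x_adv = x.clone().detach().requires_grad_(True)
    for _ in range(n_iters):
        loss = loss_fn(model(x_adv.unsqueeze(0)), y.unsqueeze(0))
        # create_graph=True keeps the graph so we can differentiate the
        # gradient-alignment score with respect to the input features.
        g = torch.autograd.grad(loss, params, create_graph=True)
        align = sum((gt.detach() * gi).sum() for gt, gi in zip(g_target, g))
        (gx,) = torch.autograd.grad(align, x_adv)
        # FGSM-style signed step on the features.
        x_adv = (x_adv + step * gx.sign()).detach().requires_grad_(True)
    return x_adv.detach()
```

Under these assumptions, an attacking client would score its local data with influence_scores, keep the top-scoring fraction (the abstract's 0.3% request budget), craft perturbed versions with craft_request, and submit those as unlearning requests; the actual FedMUA procedure may differ in its influence estimator and perturbation objective.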
Pages: 1665-1678
Page Count: 14