FedMUA: Exploring the Vulnerabilities of Federated Learning to Malicious Unlearning Attacks

Cited by: 0
Authors
Chen, Jian [1]
Lin, Zehui [1]
Lin, Wanyu [1,2]
Shi, Wenlong [3]
Yin, Xiaoyan [4]
Wang, Di [5]
Affiliations
[1] Hong Kong Polytech Univ, Dept Data Sci & Artificial Intelligence, Hong Kong, Peoples R China
[2] Hong Kong Polytech Univ, Dept Comp, Hong Kong, Peoples R China
[3] Huazhong Univ Sci & Technol, Sch Elect Informat & Commun, Wuhan 430074, Peoples R China
[4] Northwest Univ, Sch Informat Sci & Technol, Xian 710069, Peoples R China
[5] King Abdullah Univ Sci & Technol, Div Comp Elect & Math Sci & Engn, Thuwal 23955, Saudi Arabia
Keywords
Predictive models; Data models; Servers; Federated learning; Computational modeling; Training; Training data; Robustness; General Data Protection Regulation; Distributed databases; unlearning attacks; targeted attacks
DOI
10.1109/TIFS.2025.3531141
CLC classification number
TP301 [Theory, Methods]
Subject classification code
081202
Abstract
Recently, the practical need for "the right to be forgotten" in federated learning has given rise to a paradigm known as federated unlearning, which enables the server to forget personal data upon a client's removal request. Existing studies on federated unlearning have primarily focused on efficiently eliminating the influence of the requested data from the client's model without retraining from scratch; however, they have rarely questioned the reliability of the global model, which can be undermined by the discrepancy between its prediction performance before and after unlearning. To bridge this gap, we take the first step by introducing a novel malicious unlearning attack, dubbed FedMUA, aiming to unveil potential vulnerabilities of federated learning during the unlearning process. Specifically, clients may act as attackers by crafting malicious unlearning requests to manipulate the prediction behavior of the global model. The crux of FedMUA is to mislead the global model into unlearning more information associated with the influential samples for a target sample than anticipated, thereby inducing adverse effects on target samples belonging to other clients. To achieve this, we design a novel two-step method, consisting of Influential Sample Identification and Malicious Unlearning Generation, to identify the influential samples and subsequently generate malicious feature unlearning requests over them. By issuing these malicious feature unlearning requests, we can significantly alter the predictions for the target sample, resulting in deliberate, adverse manipulation of the affected user. Additionally, we design a new defense mechanism that is highly resilient against malicious unlearning attacks. Extensive experiments on three realistic datasets reveal that FedMUA effectively induces misclassification of target samples and can achieve an 80% attack success rate by triggering only 0.3% malicious unlearning requests.
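The two-step pipeline summarized in the abstract can be pictured with a small sketch. The Python snippet below is only a hedged, minimal illustration of the Influential Sample Identification step, assuming influence is approximated by a first-order gradient-similarity score; the paper's exact estimator and the Malicious Unlearning Generation step are not reproduced here, and all function and variable names (per_sample_grad, rank_influential_samples, top_k) are hypothetical.

# Hedged sketch: rank which training samples most influence a target prediction,
# so an attacker could request their unlearning. Not the authors' exact FedMUA
# procedure; influence is approximated by a gradient dot-product proxy.
import torch
import torch.nn as nn


def per_sample_grad(model: nn.Module, loss_fn, x, y):
    """Flattened gradient of the loss on a single sample w.r.t. model parameters."""
    model.zero_grad()
    loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
    grads = torch.autograd.grad(loss, [p for p in model.parameters() if p.requires_grad])
    return torch.cat([g.reshape(-1) for g in grads])


def rank_influential_samples(model, loss_fn, train_xs, train_ys, target_x, target_y, top_k=5):
    """Rank training samples by approximate influence on the target sample.

    Influence is scored as the dot product between the target-sample gradient
    and each training-sample gradient (a common first-order proxy).
    """
    g_target = per_sample_grad(model, loss_fn, target_x, target_y)
    scores = []
    for i in range(len(train_xs)):
        g_i = per_sample_grad(model, loss_fn, train_xs[i], train_ys[i])
        scores.append(torch.dot(g_target, g_i).item())
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return order[:top_k], scores  # indices an attacker could target for unlearning


if __name__ == "__main__":
    torch.manual_seed(0)
    model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))
    loss_fn = nn.CrossEntropyLoss()
    xs, ys = torch.randn(100, 10), torch.randint(0, 2, (100,))
    tx, ty = torch.randn(10), torch.tensor(1)
    top_idx, _ = rank_influential_samples(model, loss_fn, xs, ys, tx, ty)
    print("Most influential training indices for the target:", top_idx)

Under these assumptions, an attacker holding the top-ranked samples would then craft unlearning requests over (parts of) their features, which is the role of the Malicious Unlearning Generation step described in the abstract.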
Pages: 1665-1678
Number of pages: 14