FedMUA: Exploring the Vulnerabilities of Federated Learning to Malicious Unlearning Attacks

Cited: 0
Authors
Chen, Jian [1 ]
Lin, Zehui [1 ]
Lin, Wanyu [1 ,2 ]
Shi, Wenlong [3 ]
Yin, Xiaoyan [4 ]
Wang, Di [5 ]
Affiliations
[1] Hong Kong Polytech Univ, Dept Data Sci & Artificial Intelligence, Hong Kong, Peoples R China
[2] Hong Kong Polytech Univ, Dept Comp, Hong Kong, Peoples R China
[3] Huazhong Univ Sci & Technol, Sch Elect Informat & Commun, Wuhan 430074, Peoples R China
[4] Northwest Univ, Sch Informat Sci & Technol, Xian 710069, Peoples R China
[5] King Abdullah Univ Sci & Technol, Div Comp Elect & Math Sci & Engn, Thuwal 23955, Saudi Arabia
Keywords
Predictive models; Data models; Servers; Federated learning; Computational modeling; Training; Training data; Robustness; General Data Protection Regulation; Distributed databases; unlearning attacks; targeted attacks
DOI
10.1109/TIFS.2025.3531141
Chinese Library Classification (CLC): TP301 [Theory and Methods]
Discipline Code: 081202
Abstract
Recently, the practical need for "the right to be forgotten" in federated learning has given rise to a paradigm known as federated unlearning, which enables the server to forget personal data upon a client's removal request. Existing studies on federated unlearning have focused primarily on efficiently eliminating the influence of the requested data from the client's model without retraining from scratch; however, they have rarely questioned the reliability of the global model in light of the discrepancy between its predictions before and after unlearning. To bridge this gap, we take a first step by introducing a novel malicious unlearning attack, dubbed FedMUA, which unveils potential vulnerabilities that emerge in federated learning during the unlearning process. Specifically, clients may act as attackers by crafting malicious unlearning requests that manipulate the prediction behavior of the global model. The crux of FedMUA is to mislead the global model into unlearning more information associated with the samples influential for a target sample than anticipated, thereby inducing adverse effects on target samples belonging to other clients. To achieve this, we design a novel two-step method, comprising Influential Sample Identification and Malicious Unlearning Generation, which first identifies the samples most influential for the target and then generates malicious feature-unlearning requests over them. By issuing these requests, an attacker can significantly alter the predictions for the target sample, deliberately and adversely manipulating the outcome for the affected user. Additionally, we design a new defense mechanism that is highly resilient against malicious unlearning attacks. Extensive experiments on three realistic datasets show that FedMUA effectively induces misclassification of target samples, achieving an 80% attack success rate while triggering only 0.3% malicious unlearning requests.
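The abstract's two-step pipeline can be made concrete with a short sketch. Below is a minimal, hypothetical PyTorch illustration, assuming influence is approximated by the alignment between a candidate training sample's loss gradient and the target sample's loss gradient; the function names, scoring rule, and request format are illustrative assumptions, not the authors' published implementation.

```python
# A minimal sketch of FedMUA's two-step attack, assuming a gradient-similarity
# approximation of influence; names and the scoring rule are illustrative,
# not the authors' exact method.
import torch

def influential_sample_ids(model, loss_fn, train_xs, train_ys,
                           target_x, target_y, k):
    """Step 1 (Influential Sample Identification): rank the attacker's own
    training samples by how strongly their loss gradients align with the
    target sample's loss gradient, and keep the top-k."""
    params = [p for p in model.parameters() if p.requires_grad]

    def flat_grad(x, y):
        # Per-sample loss gradient, flattened into a single vector.
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        return torch.cat([g.reshape(-1) for g in grads])

    g_target = flat_grad(target_x, target_y)
    scores = torch.stack([g_target @ flat_grad(x, y)
                          for x, y in zip(train_xs, train_ys)])
    # Samples whose removal is expected to shift the target prediction most.
    return torch.topk(scores, k).indices

def craft_unlearning_requests(ids):
    """Step 2 (Malicious Unlearning Generation): submit unlearning requests
    for the identified samples; the paper additionally crafts *feature*
    unlearning requests, which this sketch leaves abstract."""
    return [("unlearn", int(i)) for i in ids]
```

On this reading, the attack abuses a legitimate interface: each request is individually well-formed, so a defense such as the one the paper proposes would plausibly need to detect unlearning requests whose aggregate effect disproportionately shifts predictions on samples held by other clients.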
Pages: 1665-1678
Page count: 14