A defense mechanism against label inference attacks in Vertical Federated Learning

Cited by: 0
Authors
Arazzi, Marco [1 ]
Nicolazzo, Serena [2 ]
Nocera, Antonino [1 ]
Affiliations
[1] Univ Pavia, Dept Elect Comp & Biomed Engn, Via A Ferrata 5, I-27100 Pavia, PV, Italy
[2] Univ Milan, Dept Comp Sci, Via G Celoria 18, I-20133 Milan, MI, Italy
Keywords
Federated learning; Vertical Federated Learning; VFL; Label inference attack; Knowledge distillation; k-anonymity
DOI
10.1016/j.neucom.2025.129476
CLC number
TP18 [Artificial intelligence theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Vertical Federated Learning (VFL, for short) is a category of Federated Learning that is gaining increasing attention in the context of Artificial Intelligence. Under this paradigm, machine/deep learning models are trained collaboratively among parties holding vertically partitioned data. Typically, in a VFL scenario, the sample labels are kept private from all parties except the aggregating server, that is, the label owner. However, recent work has shown that an adversary who exploits the gradient information returned by the server to the bottom models, and who knows only a small set of auxiliary labels for a very limited subset of training data points, can infer the private labels. These attacks are known as label inference attacks in VFL. In this work, we propose a novel framework called KDk (Knowledge Distillation with k-anonymity) that combines knowledge distillation and k-anonymity to provide a defense mechanism against label inference attacks in a VFL scenario. Through an extensive experimental campaign, we demonstrate that applying our approach consistently reduces the performance of the analyzed label inference attacks, in some cases by more than 60%, while keeping the accuracy of the overall VFL task almost unaltered.
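To give a concrete feel for the defense described in the abstract, the Python sketch below illustrates one plausible way a KDk-style label owner could combine temperature-softened teacher outputs (knowledge distillation) with a group of k candidate classes (k-anonymity) before computing the gradients returned to the bottom models. This is a minimal illustration under stated assumptions, not the procedure published in the paper: the function name kdk_soft_label, the temperature and epsilon parameters, and the exact way probability mass is shared are hypothetical.

import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax, as used in knowledge distillation.
    z = np.asarray(logits, dtype=float) / temperature
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def kdk_soft_label(teacher_logits, true_label, k=3, temperature=4.0, epsilon=0.7):
    # Hypothetical k-anonymized soft label: the true class keeps `epsilon` of the
    # probability mass, and the remaining mass is spread over the (k - 1) other
    # classes the teacher considers most likely, so no single class stands out.
    probs = softmax(teacher_logits, temperature)
    others = [c for c in np.argsort(probs)[::-1] if c != true_label][:k - 1]
    soft = np.zeros_like(probs)
    soft[true_label] = epsilon
    for c in others:
        soft[c] = (1.0 - epsilon) / (k - 1)
    return soft

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    num_classes = 10
    teacher_logits = rng.normal(size=num_classes)   # label owner's teacher output for one sample
    student_logits = rng.normal(size=num_classes)   # top-model output for the same sample
    y = 4                                           # private true label

    p = softmax(student_logits)
    grad_hard = p.copy()
    grad_hard[y] -= 1.0                                       # cross-entropy gradient with a one-hot label
    grad_soft = p - kdk_soft_label(teacher_logits, y, k=3)    # gradient with the KDk-style label

    # With the one-hot label, the single large negative entry pinpoints the true class;
    # with the k-anonymized soft label, the target mass is spread over k classes.
    print("one-hot gradient  :", np.round(grad_hard, 3))
    print("KDk-style gradient:", np.round(grad_soft, 3))

In an actual VFL pipeline the label owner would back-propagate through the top model using such softened targets, so the per-sample gradients sent to the bottom models carry less information about the private labels; the sketch only illustrates the effect on a single sample's gradient.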
Pages: 13
Related papers (50 in total)
  • [31] RECESS Vaccine for Federated Learning: Proactive Defense Against Model Poisoning Attacks
    Yan, Haonan
    Zhang, Wenjing
    Chen, Qian
    Li, Xiaoguang
    Sun, Wenhai
    Li, Hui
    Lin, Xiaodong
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [32] Edge-Cloud Collaborative Defense against Backdoor Attacks in Federated Learning
    Yang, Jie
    Zheng, Jun
    Wang, Haochen
    Li, Jiaxing
    Sun, Haipeng
    Han, Weifeng
    Jiang, Nan
    Tan, Yu-An
    SENSORS, 2023, 23 (03)
  • [33] Decentralized Defense: Leveraging Blockchain against Poisoning Attacks in Federated Learning Systems
    Thennakoon, Rashmi
    Wanigasundara, Arosha
    Weerasinghe, Sanjaya
    Seneviratne, Chatura
    Siriwardhana, Yushan
    Liyanage, Madhusanka
    2024 IEEE 21ST CONSUMER COMMUNICATIONS & NETWORKING CONFERENCE, CCNC, 2024, : 950 - 955
  • [34] FedGame: A Game-Theoretic Defense against Backdoor Attacks in Federated Learning
    Jia, Jinyuan
    Yuan, Zhuowen
    Sahabandu, Dinuka
    Niu, Luyao
    Rajabi, Arezoo
    Ramasubramanian, Bhaskar
    Li, Bo
    Poovendran, Radha
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [35] Enhance membership inference attacks in federated learning
    He, Xinlong
    Xu, Yang
    Zhang, Sicong
    Xu, Weida
    Yan, Jiale
    COMPUTERS & SECURITY, 2024, 136
  • [36] Inference attacks based on GAN in federated learning
    Trung Ha
    Tran Khanh Dang
    INTERNATIONAL JOURNAL OF WEB INFORMATION SYSTEMS, 2022, 18 (2/3) : 117 - 136
  • [37] Label noise analysis meets adversarial training: A defense against label poisoning in federated learning
    Hallaji, Ehsan
    Razavi-Far, Roozbeh
    Saif, Mehrdad
    Herrera-Viedma, Enrique
    KNOWLEDGE-BASED SYSTEMS, 2023, 266
  • [38] LDIA: Label distribution inference attack against federated learning in edge computing
    Gu, Yuhao
    Bai, Yuebin
    JOURNAL OF INFORMATION SECURITY AND APPLICATIONS, 2023, 74
  • [39] LFGurad: A Defense against Label Flipping Attack in Federated Learning for Vehicular Network
    Sameera, K. M.
    Vinod, P.
    Rehiman, K. A. Rafidha
    Conti, Mauro
    COMPUTER NETWORKS, 2024, 254
  • [40] Efficient Federated Matrix Factorization Against Inference Attacks
    Chai, Di
    Wang, Leye
    Chen, Kai
    Yang, Qiang
    ACM TRANSACTIONS ON INTELLIGENT SYSTEMS AND TECHNOLOGY, 2022, 13 (04)