Mitigation of Membership Inference Attack by Knowledge Distillation on Federated Learning

Cited by: 0
Authors
Ueda, Rei [1 ]
Nakai, Tsunato [2 ]
Yoshida, Kota [3 ]
Fujino, Takeshi [3 ]
Affiliations
[1] Ritsumeikan Univ, Grad Sch Sci & Engn, Kusatsu 5258577, Japan
[2] Mitsubishi Electr Corp, Kamakura 2478501, Japan
[3] Ritsumeikan Univ, Dept Sci & Engn, Kusatsu 5258577, Japan
Keywords
federated learning; knowledge distillation; membership inference attack;
DOI
10.1587/transfun.2024CIP0004
CLC number
TP3 [Computing technology, computer technology]
Discipline code
0812
Abstract
Federated learning (FL) is a distributed deep learning technique involving multiple clients and a server. In FL, each client individually trains a model on its own training data and sends only the model to the server. The server then aggregates the received client models to build a server model. Because no client shares its training data with other clients or the server, FL is considered a distributed deep learning technique with privacy protection. However, several attacks have been reported against FL that steal information about a specific client's training data from the aggregated model on the server. These include membership inference attacks (MIAs), which identify whether or not specific data was used to train a target model. MIAs have been shown to work mainly because the model overfits its training data, and mitigation techniques based on knowledge distillation have thus been proposed. Because these techniques assume large amounts of training data and computational power, they are difficult to deploy on clients in FL. In this paper, we propose a knowledge-distillation-based defense against MIAs that is designed for application in FL. The proposed method is effective against various MIAs without requiring additional training data, in contrast to conventional defenses.
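The intuition linking distillation to MIA mitigation can be illustrated with a minimal sketch. The code below is an illustrative assumption, not the paper's actual method: it implements the standard temperature-softened distillation loss and shows how a higher temperature flattens the overconfident output distribution on a training member, which is exactly the signal that confidence-threshold MIAs exploit.

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T yields a softer distribution."""
    z = [l / T for l in logits]
    m = max(z)                              # stabilize the exponentials
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL divergence between temperature-softened teacher and student
    outputs, scaled by T^2 as in standard knowledge distillation."""
    p = softmax(teacher_logits, T)          # soft targets from the teacher
    q = softmax(student_logits, T)
    return T * T * sum(pi * (math.log(pi) - math.log(qi))
                       for pi, qi in zip(p, q))

# A hypothetical overconfident output on a training member: the hard
# (T=1) softmax is near 1, which a confidence-threshold MIA can detect;
# the softened (T=4) target is far less peaked, so a student trained on
# it carries a weaker membership signal.
member_logits = [8.0, 1.0, 0.5]
hard_conf = max(softmax(member_logits, T=1.0))
soft_conf = max(softmax(member_logits, T=4.0))
print(f"hard confidence: {hard_conf:.3f}, softened: {soft_conf:.3f}")
```

A student minimizing this loss matches the teacher's softened distribution rather than a one-hot label, which is the overfitting-reduction effect the abstract attributes to distillation-based defenses.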
Pages: 267-279
Page count: 13