Mitigation of Membership Inference Attack by Knowledge Distillation on Federated Learning

Cited by: 0
Authors
Ueda, Rei [1 ]
Nakai, Tsunato [2 ]
Yoshida, Kota [3 ]
Fujino, Takeshi [3 ]
Affiliations
[1] Ritsumeikan Univ, Grad Sch Sci & Engn, Kusatsu 5258577, Japan
[2] Mitsubishi Electr Corp, Kamakura 2478501, Japan
[3] Ritsumeikan Univ, Dept Sci & Engn, Kusatsu 5258577, Japan
Keywords
federated learning; knowledge distillation; membership inference attack;
DOI
10.1587/transfun.2024CIP0004
CLC Classification Number
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
Federated learning (FL) is a distributed deep learning technique involving multiple clients and a server. In FL, each client individually trains a model with its own training data and sends only the model to the server. The server then aggregates the received client models to build a server model. Because each client does not share its own training data with other clients or the server, FL is considered a distributed deep learning technique with privacy protection. However, several attacks that steal information about a specific client's training data from the aggregated model on the server have been reported for FL. These include membership inference attacks (MIAs), which identify whether or not specific data was used to train a target model. MIAs have been shown to work mainly because of overfitting of the model to the training data, and mitigation techniques based on knowledge distillation have thus been proposed. Because these techniques assume abundant training data and computational power, they are difficult to apply directly to FL clients. In this paper, we propose a knowledge-distillation-based defense against MIAs that is designed for application in FL. In contrast to conventional defenses, the proposed method is effective against various MIAs without requiring additional training data.
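The two mechanisms the abstract describes, server-side aggregation of client models and distillation via softened teacher outputs, can be sketched minimally. Note the specifics are assumptions: the abstract names neither a concrete aggregation rule (FedAvg-style weighted averaging is assumed here) nor a concrete distillation formulation (a temperature-softened softmax is assumed).

```python
import math

def federated_average(client_params, client_sizes):
    """Aggregate per-client parameter vectors into a server model,
    weighting each client by its local dataset size (FedAvg-style;
    the paper's actual aggregation rule is not given in the abstract)."""
    total = sum(client_sizes)
    dim = len(client_params[0])
    return [
        sum(p[i] * n for p, n in zip(client_params, client_sizes)) / total
        for i in range(dim)
    ]

def soften(logits, temperature=4.0):
    """Temperature-softened softmax: the 'soft labels' a teacher model
    provides to a student in knowledge distillation. A higher temperature
    flattens the output distribution, reducing the overconfident
    member/non-member gap that MIAs exploit."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    s = sum(exps)
    return [e / s for e in exps]
```

In a round of FL, the server would call `federated_average` on the received client models; a distillation-based defense would have a student train against `soften(teacher_logits)` instead of hard labels, so the final model's outputs on training members look less distinguishable from its outputs on non-members.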
Pages: 267-279 (13 pages)