Mitigation of Membership Inference Attack by Knowledge Distillation on Federated Learning

Cited by: 0
Authors
Ueda, Rei [1 ]
Nakai, Tsunato [2 ]
Yoshida, Kota [3 ]
Fujino, Takeshi [3 ]
Affiliations
[1] Ritsumeikan Univ, Grad Sch Sci & Engn, Kusatsu 5258577, Japan
[2] Mitsubishi Electr Corp, Kamakura 2478501, Japan
[3] Ritsumeikan Univ, Dept Sci & Engn, Kusatsu 5258577, Japan
Keywords
federated learning; knowledge distillation; membership inference attack;
DOI
10.1587/transfun.2024CIP0004
Chinese Library Classification (CLC)
TP3 [Computing technology; computer technology]
Discipline code
0812
Abstract
Federated learning (FL) is a distributed deep learning technique involving multiple clients and a server. In FL, each client trains a model on its own training data and sends only the model to the server; the server then aggregates the received client models into a server model. Because no client shares its training data with the other clients or the server, FL is regarded as a privacy-preserving form of distributed deep learning. However, several attacks against FL have been reported that extract information about a specific client's training data from the aggregated model on the server. These include membership inference attacks (MIAs), which determine whether or not specific data was used to train a target model. MIAs have been shown to succeed mainly because the model overfits its training data, and mitigation techniques based on knowledge distillation have therefore been proposed. Because these techniques assume abundant training data and computational resources, they are difficult to apply directly to FL clients. In this paper, we propose a knowledge-distillation-based defense against MIAs that is designed for use in FL. The proposed method is effective against various MIAs without requiring additional training data, in contrast to conventional defenses.
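The abstract describes two building blocks that a short sketch can make concrete: FedAvg-style aggregation of client models on the server, and a soft-label knowledge-distillation (KD) loss that a client could minimize to reduce overfitting, the main cause of MIA success. The sketch below is a generic illustration under stated assumptions, not the paper's specific defense: the function names (fedavg, kd_loss), the temperature of 3.0, and the use of the previous-round server model as the teacher are all hypothetical choices for illustration.

```python
# Minimal NumPy-only sketch: (1) FedAvg-style server aggregation and
# (2) a generic soft-label distillation loss a client could use locally.
# All names and hyperparameters here are illustrative assumptions,
# not the authors' exact protocol.

import numpy as np


def fedavg(client_weights, client_sizes):
    """Weighted average of client parameter vectors (FedAvg aggregation)."""
    total = float(sum(client_sizes))
    stacked = np.stack(client_weights)          # shape: (num_clients, num_params)
    coeffs = np.array(client_sizes) / total     # weight by local dataset size
    return coeffs @ stacked                     # aggregated server parameters


def softmax(logits, temperature=1.0):
    z = logits / temperature
    z = z - z.max(axis=1, keepdims=True)        # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)


def kd_loss(student_logits, teacher_logits, temperature=3.0):
    """Soft-label distillation: cross-entropy between softened teacher and
    student distributions, scaled by T^2 as in standard KD."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    ce = -(p_teacher * np.log(p_student + 1e-12)).sum(axis=1).mean()
    return (temperature ** 2) * ce


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Three clients with differently sized local datasets.
    weights = [rng.normal(size=10) for _ in range(3)]
    sizes = [100, 300, 600]
    print("aggregated params:", fedavg(weights, sizes)[:3])

    # A client distilling from a hypothetical teacher (e.g., the previous-round
    # server model) instead of fitting hard labels directly.
    teacher_logits = rng.normal(size=(8, 5))
    student_logits = rng.normal(size=(8, 5))
    print("KD loss:", kd_loss(student_logits, teacher_logits))
```

In this kind of setup, training the local model against softened teacher outputs rather than one-hot labels is the usual lever for limiting memorization of individual training examples, which in turn is what MIAs exploit.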
Pages: 267-279
Page count: 13