A Privacy-Preserving Local Differential Privacy-Based Federated Learning Model to Secure LLM from Adversarial Attacks

Cited by: 0
Authors
Salim, Mikail Mohammed [1 ]
Deng, Xianjun [2 ]
Park, Jong Hyuk [1 ]
Affiliations
[1] Seoul Natl Univ Sci & Technol SeoulTech, Dept Comp Sci & Engn, Seoul, South Korea
[2] Huazhong Univ Sci & Technol, Dept Cyber Sci & Engn, Wuhan, Peoples R China
Funding
National Research Foundation of Singapore;
Keywords
Federated Learning; Local Differential Privacy; Blockchain; Secret Sharing; Internet;
DOI
10.22967/HCIS.2024.14.057
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology];
Discipline Code
0812;
Abstract
Chatbot applications using large language models (LLMs) offer human-like responses to user queries, but their widespread use raises significant concerns about data privacy and integrity. Adversarial attacks can extract confidential data during model training and submit poisoned data, compromising chatbot reliability. Additionally, transmitting unencrypted user data for local model training introduces further privacy risks. This paper addresses these issues by proposing a blockchain- and federated learning-enabled LLM model that ensures user data privacy and integrity. A local differential privacy method adds noise to anonymize user data during the data collection phase for local training at the edge layer. Federated learning prevents private local training data from being shared with the cloud-based global model. Secure multi-party computation using secret sharing and blockchain ensures secure and reliable model aggregation, preventing adversarial model poisoning. Evaluation results show that the global model achieves 46% higher accuracy than models trained on poisoned data. The study demonstrates that the proposed local differential privacy method effectively prevents adversarial attacks and protects federated learning models from poisoning during training, enhancing the security and reliability of chatbot applications.
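The abstract does not specify the noise mechanism, so the sketch below is only a minimal illustration of the local differential privacy step: a per-record Laplace mechanism applied to a user's numeric feature vector on-device, before the record is used for local training at the edge. The helper name ldp_perturb, the clipping bounds, and the budget epsilon = 1.0 are hypothetical, not taken from the paper.

```python
import numpy as np

def ldp_perturb(features: np.ndarray, epsilon: float,
                lower: float, upper: float) -> np.ndarray:
    """Perturb one user's numeric features with Laplace noise (hypothetical helper).

    Clipping to [lower, upper] bounds the per-feature sensitivity at
    (upper - lower); adding Laplace noise with scale sensitivity / epsilon
    then anonymizes each feature locally, before the record ever leaves
    the user's device.
    """
    clipped = np.clip(features, lower, upper)
    scale = (upper - lower) / epsilon  # Laplace scale b = sensitivity / epsilon
    return clipped + np.random.laplace(0.0, scale, size=clipped.shape)

# Illustrative use with an assumed per-record budget of epsilon = 1.0.
noisy = ldp_perturb(np.array([0.2, 0.7, 0.5]), epsilon=1.0, lower=0.0, upper=1.0)
```

A smaller epsilon means stronger anonymization but noisier training data, which is the usual privacy-utility trade-off the paper's evaluation would have to balance.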
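For the secure aggregation step, the following is a minimal sketch of additive secret sharing over model updates, assuming each client splits its update into one share per aggregating party so that no single party (or blockchain node, in the paper's setting) sees any client's update in the clear. Production schemes operate over a finite field and add the integrity checks the paper assigns to the blockchain; this float-valued toy only shows the additive structure.

```python
import numpy as np

def make_shares(update: np.ndarray, n_parties: int,
                rng: np.random.Generator) -> list[np.ndarray]:
    """Split a model update into n additive shares that sum back to the update.

    The first n-1 shares are random masks; the last share is the residual,
    so any strict subset of shares reveals nothing about the update.
    """
    shares = [rng.normal(size=update.shape) for _ in range(n_parties - 1)]
    shares.append(update - sum(shares))
    return shares

rng = np.random.default_rng(0)
clients = [np.array([0.1, 0.2]), np.array([0.3, -0.1]), np.array([0.0, 0.4])]
n_parties = 3

# Each aggregating party accumulates one share from every client...
partials = [np.zeros(2) for _ in range(n_parties)]
for update in clients:
    for party, share in enumerate(make_shares(update, n_parties, rng)):
        partials[party] += share

# ...and recombining the partial sums yields the sum of all updates
# without exposing any individual client's contribution.
aggregate = sum(partials)
assert np.allclose(aggregate, sum(clients))
global_update = aggregate / len(clients)  # FedAvg-style mean of updates
```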
Pages: 25