A Privacy-Preserving Local Differential Privacy-Based Federated Learning Model to Secure LLM from Adversarial Attacks

Cited: 0
Authors
Salim, Mikail Mohammed [1 ]
Deng, Xianjun [2 ]
Park, Jong Hyuk [1 ]
Affiliations
[1] Seoul Natl Univ Sci & Technol SeoulTech, Dept Comp Sci & Engn, Seoul, South Korea
[2] Huazhong Univ Sci & Technol, Dept Cyber Sci & Engn, Wuhan, Peoples R China
Funding
National Research Foundation of Singapore;
Keywords
Federated Learning; Local Differential Privacy; Blockchain; Secret Sharing; INTERNET;
DOI
10.22967/HCIS.2024.14.057
CLC Classification
TP [Automation technology; computer technology];
Discipline Code
0812;
Abstract
Chatbot applications using large language models (LLMs) offer human-like responses to user queries, but their widespread use raises significant concerns about data privacy and integrity. Adversarial attacks can extract confidential data during model training and submit poisoned data, compromising chatbot reliability. Additionally, the transmission of unencrypted user data for local model training poses new privacy challenges. This paper addresses these issues by proposing a blockchain and federated learning-enabled LLM model to ensure user data privacy and integrity. A local differential privacy method adds noise to anonymize user data during the data collection phase for local training at the edge layer. Federated learning prevents the sharing of private local training data with the cloud-based global model. Secure multi-party computation using secret sharing and blockchain ensures secure and reliable model aggregation, preventing adversarial model poisoning. Evaluation results show a 46% higher accuracy in global model training compared to models trained with poisoned data. The study demonstrates that the proposed local differential privacy method effectively prevents adversarial attacks and protects federated learning models from poisoning during training, enhancing the security and reliability of chatbot applications.
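The two privacy mechanisms the abstract combines — Laplace-mechanism local differential privacy applied to user data before local training, and additive secret sharing of model weights for secure aggregation — can be sketched as follows. This is a minimal illustration, not the paper's implementation; all function names, the fixed-point encoding, and parameter choices (epsilon, modulus, precision) are illustrative assumptions.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample from Laplace(0, scale) via the inverse-CDF method."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def ldp_perturb(value: float, epsilon: float, sensitivity: float = 1.0) -> float:
    """Anonymize one bounded numeric record with epsilon-LDP Laplace noise."""
    return value + laplace_noise(sensitivity / epsilon)

MODULUS = 2**31 - 1   # field size for additive sharing (assumed)
PRECISION = 10**6     # fixed-point scaling for float weights (assumed)

def secret_share(weight: float, n_parties: int = 3) -> list[int]:
    """Split a model weight into n additive shares; any n-1 reveal nothing."""
    fixed = int(round(weight * PRECISION)) % MODULUS
    shares = [random.randrange(MODULUS) for _ in range(n_parties - 1)]
    shares.append((fixed - sum(shares)) % MODULUS)
    return shares

def reconstruct(shares: list[int]) -> float:
    """Recombine shares (e.g., after aggregation) back to a float weight."""
    total = sum(shares) % MODULUS
    if total > MODULUS // 2:   # map large field elements to negatives
        total -= MODULUS
    return total / PRECISION
```

In a federated round under this sketch, each edge client would perturb its raw inputs with `ldp_perturb` before local training, then secret-share its weight updates so the aggregator only ever sees sums of shares, never an individual client's model.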
Pages: 25
Related Papers (50 items)
  • [1] PPeFL: Privacy-Preserving Edge Federated Learning With Local Differential Privacy
    Wang, Baocang
    Chen, Yange
    Jiang, Hang
    Zhao, Zhen
    IEEE INTERNET OF THINGS JOURNAL, 2023, 10 (17) : 15488 - 15500
  • [2] Privacy-preserving cloud-based secure digital locker with differential privacy-based deep learning technique
    Shanthi, P.
    Vidivelli, S.
    Padmakumari, P.
    MULTIMEDIA TOOLS AND APPLICATIONS, 2024, 83 (34) : 81299 - 81324
  • [3] Local Model Privacy-Preserving Study for Federated Learning
    Pan, Kaiyun
    He, Daojing
    Xu, Chuan
    SECURITY AND PRIVACY IN COMMUNICATION NETWORKS, SECURECOMM 2021, PT I, 2021, 398 : 287 - 307
  • [4] Local Differential Privacy-Based Federated Learning for Internet of Things
    Zhao, Yang
    Zhao, Jun
    Yang, Mengmeng
    Wang, Teng
    Wang, Ning
    Lyu, Lingjuan
    Niyato, Dusit
    Lam, Kwok-Yan
    IEEE INTERNET OF THINGS JOURNAL, 2021, 8 (11) : 8836 - 8853
  • [5] Privacy-Preserving Robust Federated Learning with Distributed Differential Privacy
    Wang, Fayao
    He, Yuanyuan
    Guo, Yunchuan
    Li, Peizhi
    Wei, Xinyu
    2022 IEEE INTERNATIONAL CONFERENCE ON TRUST, SECURITY AND PRIVACY IN COMPUTING AND COMMUNICATIONS, TRUSTCOM, 2022, : 598 - 605
  • [6] Privacy-Preserving Federated Learning based on Differential Privacy and Momentum Gradient Descent
    Weng, Shangyin
    Zhang, Lei
    Feng, Daquan
    Feng, Chenyuan
    Wang, Ruiyu
    Klaine, Paulo Valente
    Imran, Muhammad Ali
    2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022,
  • [7] Model poisoning attack in differential privacy-based federated learning
    Yang, Ming
    Cheng, Hang
    Chen, Fei
    Liu, Ximeng
    Wang, Meiqing
    Li, Xibin
    INFORMATION SCIENCES, 2023, 630 : 158 - 172
  • [8] Privacy-Preserving Federated Learning Resistant to Byzantine Attacks
    Mu X.-T.
    Cheng K.
    Song A.-X.
    Zhang T.
    Zhang Z.-W.
    Shen Y.-L.
Jisuanji Xuebao/Chinese Journal of Computers, 2024, 47 (04) : 842 - 861
  • [9] Privacy-Preserving Detection of Poisoning Attacks in Federated Learning
    Muhr, Trent
    Zhang, Wensheng
    2022 19TH ANNUAL INTERNATIONAL CONFERENCE ON PRIVACY, SECURITY & TRUST (PST), 2022,
  • [10] PrivacyFL: A Simulator for Privacy-Preserving and Secure Federated Learning
    Mugunthan, Vaikkunth
    Peraire-Bueno, Anton
    Kagal, Lalana
    CIKM '20: PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON INFORMATION & KNOWLEDGE MANAGEMENT, 2020, : 3085 - 3092