Enhancing user prompt confidentiality in Large Language Models through advanced differential encryption

Cited by: 3
Authors
Gupta, Brij B. [1 ,2 ,3 ,4 ,5 ]
Gaurav, Akshat [6 ]
Arya, Varsha [7 ,8 ]
Alhalabi, Wadee [9 ]
Alsalman, Dheyaaldin [10 ]
Vijayakumar, P. [11 ]
Affiliations
[1] Asia Univ, Int Ctr AI & Cyber Secur Res & Innovat CCRI, Taichung, Taiwan
[2] Asia Univ, Dept Comp Sci & Informat Engn, Taichung, Taiwan
[3] Kyung Hee Univ, 26 Kyungheedae Ro, Seoul, South Korea
[4] Symbiosis Int Univ, Symbiosis Ctr Informat Technol SCIT, Pune, India
[5] Univ Petr & Energy Studies UPES, Ctr Interdisciplinary Res, Dehra Dun, India
[6] Ronin Inst, Montclair, NJ USA
[7] Asia Univ, Dept Business Adm, Taichung, Taiwan
[8] Lebanese Amer Univ, Dept Elect & Comp Engn, Beirut 1102, Lebanon
[9] King Abdulaziz Univ, Dept Comp Sci, Immers Virtual Real Res Grp, Jeddah, Saudi Arabia
[10] Dar Al Hekma Univ, Sch Engn Comp & Informat, Jeddah, Saudi Arabia
[11] Univ Coll Engn Tindivanam, Dept Comp Sci & Engn, Tindivanam 604001, Tamil Nadu, India
Keywords
Cryptographic privacy; Large Language Models; Data anonymization; Secure AI framework; Personal data protection; AUTHENTICATION PROTOCOL; DESIGN;
DOI
10.1016/j.compeleceng.2024.109215
CLC Number
TP3 [computing technology; computer technology];
Discipline Code
0812;
Abstract
In the era of artificial intelligence (AI) advancements heralded by Large Language Models (LLMs) like GPT-3, the capacity to parse and generate human-like text brings to light substantial privacy concerns. These arise notably from LLMs' reliance on vast datasets often laden with personal information, underscoring the potential for inadvertent memorization and disclosure of sensitive data. Addressing these pivotal privacy concerns, our research introduces a novel two-fold approach aimed at bolstering the confidentiality and security of user data in LLM applications. Firstly, we deploy advanced cryptographic techniques, incorporating bespoke encryption and hashing protocols, to preprocess user data. This strategy effectively anonymizes personal identifiers prior to their processing by LLMs, directly tackling the challenges of sensitive information exposure. Concurrently, our methodology encompasses a secure mutual authentication protocol utilizing lightweight cryptographic measures. This ensures that system interactions are strictly reserved for authenticated users, thereby enhancing overall data security. Collectively, our approach not only preserves the utility of data for AI tasks but also fortifies the privacy framework surrounding LLMs, significantly reducing the likelihood of privacy breaches and steering AI development towards a more secure and ethically grounded future.
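The abstract describes two mechanisms: cryptographic pre-processing that anonymizes personal identifiers before prompts reach the model, and a lightweight mutual-authentication protocol that gates access to the system. As a reading aid, the following is a minimal Python sketch of the first idea, replacing identifiers with keyed-hash pseudonyms so the LLM never sees raw personal data. The regex patterns, key handling, and token format are illustrative assumptions, not the paper's exact protocol.

```python
# Minimal sketch of prompt pre-processing: personal identifiers are
# replaced with keyed-hash pseudonyms before the prompt reaches an LLM.
# Patterns, key management, and the restore step are assumptions for
# illustration, not the authors' published scheme.
import hmac
import hashlib
import re

SECRET_KEY = b"replace-with-a-securely-stored-key"  # assumption: key management is external

# Toy patterns for two identifier types (a real system needs broader NER/regex coverage).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def pseudonymize(prompt: str):
    """Replace identifiers with deterministic keyed-hash tokens.

    Returns the anonymized prompt plus a mapping that lets the caller
    restore original values in the LLM's response if needed.
    """
    mapping = {}

    def _token(kind: str, value: str) -> str:
        digest = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:12]
        token = f"<{kind}_{digest}>"
        mapping[token] = value
        return token

    for kind, pattern in PATTERNS.items():
        prompt = pattern.sub(lambda m, k=kind: _token(k, m.group()), prompt)
    return prompt, mapping

def restore(text: str, mapping: dict) -> str:
    """Re-insert the original identifiers into the model's output."""
    for token, value in mapping.items():
        text = text.replace(token, value)
    return text

safe_prompt, table = pseudonymize("Email jane.doe@example.com or call +1 555-123-4567.")
print(safe_prompt)  # identifiers appear only as <EMAIL_...> / <PHONE_...> tokens
```

Using a keyed hash (HMAC) makes pseudonyms deterministic, so repeated mentions of the same identifier stay consistent within a prompt, while the mapping table lets the caller restore the originals in the model's response. The second mechanism, mutual authentication with lightweight primitives, can likewise be sketched as an HMAC challenge-response over a pre-shared key; the message flow below is an assumption for illustration and may differ from the authors' handshake.

```python
# Minimal sketch of lightweight mutual authentication via HMAC
# challenge-response over a pre-shared key. Field names and flow
# are illustrative assumptions.
import hmac
import hashlib
import os

SHARED_KEY = b"pre-shared-key-distributed-out-of-band"  # assumption

def respond(challenge: bytes, peer_label: bytes) -> bytes:
    """Prove knowledge of the shared key for a given challenge."""
    return hmac.new(SHARED_KEY, challenge + peer_label, hashlib.sha256).digest()

# 1. Client and server exchange fresh nonces (challenges).
client_nonce, server_nonce = os.urandom(16), os.urandom(16)

# 2. Each side answers the other side's challenge.
client_proof = respond(server_nonce, b"client")
server_proof = respond(client_nonce, b"server")

# 3. Each side verifies the peer's proof with a constant-time comparison.
assert hmac.compare_digest(client_proof, respond(server_nonce, b"client"))
assert hmac.compare_digest(server_proof, respond(client_nonce, b"server"))
print("mutual authentication succeeded")
```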
Pages: 13
Related papers
50 records in total
  • [31] IAPT: Instruction-Aware Prompt Tuning for Large Language Models
    Zhu, Wei
    Tian, Aaron Xuxiang
    Yin, Congrui
    Ni, Yuan
    Wang, Xiaoling
    Xie, Guotong
    PROCEEDINGS OF THE 62ND ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, VOL 1: LONG PAPERS, 2024, : 14285 - 14304
  • [32] Attack Prompt Generation for Red Teaming and Defending Large Language Models
    Deng, Boyi
    Wang, Wenjie
    Feng, Fuli
    Deng, Yang
    Wang, Qifan
    He, Xiangnan
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS - EMNLP 2023, 2023, : 2176 - 2189
  • [33] Select, Prompt, Filter: Distilling Large Language Models for Summarizing Conversations
    Pham, Minh-Quang
    Indurthi, Sathish Reddy
    Chollampatt, Shamil
    Turchi, Marco
    2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2023), 2023, : 12257 - 12265
  • [34] Integrating chemistry knowledge in large language models via prompt engineering
    Liu, Hongxuan
    Yin, Haoyu
    Luo, Zhiyao
    Wang, Xiaonan
    SYNTHETIC AND SYSTEMS BIOTECHNOLOGY, 2025, 10 (01) : 23 - 38
  • [35] Assessing the Impact of Prompt Strategies on Text Summarization with Large Language Models
    Onan, Aytug
    Alhumyani, Hesham
    COMPUTER APPLICATIONS IN INDUSTRY AND ENGINEERING, CAINE 2024, 2025, 2242 : 41 - 55
  • [36] Soft prompt tuning for augmenting dense retrieval with large language models
    Peng, Zhiyuan
    Wu, Xuyang
    Wang, Qifan
    Fang, Yi
    KNOWLEDGE-BASED SYSTEMS, 2025, 309
  • [37] Robust Prompt Optimization for Large Language Models Against Distribution Shifts
    Li, Moxin
    Wang, Wenjie
    Feng, Fuli
    Cao, Yixin
    Zhang, Jizhi
    Chua, Tat-Seng
    2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING, EMNLP 2023, 2023, : 1539 - 1554
  • [38] Prompt Wrangling: On Replication and Generalization in Large Language Models for PCG Levels
    Karkaj, Arash Moradi
    Nelson, Mark J.
    Koutis, Ioannis
    Hoover, Amy K.
    PROCEEDINGS OF THE 19TH INTERNATIONAL CONFERENCE ON THE FOUNDATIONS OF DIGITAL GAMES, FDG 2024, 2024,
  • [39] CSPO: chain-structured prompt optimisation for large language models
    Wang, Jinshui
    Lin, Sining
    Xue, Xingsi
    Chen, Shuguang
    Tang, Zhengyi
INTERNATIONAL JOURNAL OF AD HOC AND UBIQUITOUS COMPUTING, 2025, 48 (04) : 233 - 243
  • [40] Understanding Telecom Language Through Large Language Models
    Bariah, Lina
    Zou, Hang
    Zhao, Qiyang
    Mouhouche, Belkacem
    Bader, Faouzi
    Debbah, Merouane
    IEEE CONFERENCE ON GLOBAL COMMUNICATIONS, GLOBECOM, 2023, : 6542 - 6547