PPTIF: Privacy-Preserving Transformer Inference Framework for Language Translation

Cited by: 0
Authors
Liu, Yanxin [1 ]
Su, Qianqian [1 ]
Affiliations
[1] Qingdao Univ, Coll Comp Sci & Technol, Qingdao 266071, Peoples R China
Keywords
Computational modeling; Transformers; Cryptography; Neural networks; Data models; Protocols; Task analysis; Homomorphic encryption; Outsourcing; Privacy; Privacy-preserving; replicated secret-sharing; secure multi-party computation; secure outsourcing; transformer; NEURAL-NETWORK INFERENCE; SYSTEM
DOI
10.1109/ACCESS.2024.3384268
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
The Transformer model has emerged as a prominent machine learning tool in natural language processing. Nevertheless, running the Transformer model on resource-constrained devices remains a notable challenge. Although outsourcing inference to the cloud can significantly reduce the client's computational overhead, it also poses privacy risks to the provider's proprietary model and the client's sensitive data. In this paper, we propose an efficient privacy-preserving Transformer inference framework (PPTIF) for language translation tasks, built on three-party replicated secret sharing. PPTIF offers a secure way for users to leverage Transformer-based applications such as language translation while keeping both the original input and the inference results confidential, preventing any disclosure to the cloud servers. At the same time, PPTIF protects the model parameters, guaranteeing their integrity and confidentiality. Within PPTIF, we design a series of interactive protocols that realize the secure computation of the Transformer components, namely a secure Encoder and a secure Decoder. To improve efficiency, we optimize the computation of Scaled Dot-Product Attention (the Transformer's core operation) under secret sharing, effectively reducing its computation and communication overhead. Compared with Privformer, the optimized Masked Multi-Head Attention achieves about 1.7x lower runtime and 2.3x lower communication; end to end, PPTIF achieves about 1.3x lower runtime and 1.2x lower communication. The effectiveness and security of PPTIF are validated through comprehensive theoretical analysis and experimental evaluation.
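To make the underlying primitive concrete, below is a minimal Python sketch of 2-out-of-3 replicated secret sharing, the kind of arithmetic sharing PPTIF's protocols operate over. The ring Z_{2^64}, the function names (`share`, `reconstruct`, `add_local`, `mul_local`), and the omission of fixed-point encoding, truncation, and the re-sharing round after multiplication are all illustrative assumptions, not the paper's actual protocol.

```python
# Sketch of 2-out-of-3 replicated secret sharing over Z_{2^64}.
# Illustrative assumptions: ring size, fixed-point encoding, truncation,
# and the re-sharing round after multiplication are simplified away.
import secrets

MOD = 2 ** 64  # assumed ring; the paper's modulus may differ


def share(x: int):
    """Split x into additive shares s1 + s2 + s3 = x (mod 2^64);
    party i holds the replicated pair (s_i, s_{i+1})."""
    s = [secrets.randbelow(MOD), secrets.randbelow(MOD)]
    s.append((x - s[0] - s[1]) % MOD)
    return [(s[i], s[(i + 1) % 3]) for i in range(3)]


def reconstruct(shares):
    """Any two parties together hold all three additive shares."""
    (a, b), (_, c) = shares[0], shares[1]
    return (a + b + c) % MOD


def add_local(xs, ys):
    """Secure addition is communication-free: add shares component-wise."""
    return [((x0 + y0) % MOD, (x1 + y1) % MOD)
            for (x0, x1), (y0, y1) in zip(xs, ys)]


def mul_local(xs, ys):
    """Each party derives its *additive* share of x*y from the two pairs
    it holds; one re-sharing round (not shown) restores replication."""
    return [(x0 * y0 + x0 * y1 + x1 * y0) % MOD
            for (x0, x1), (y0, y1) in zip(xs, ys)]


if __name__ == "__main__":
    sx, sy = share(5), share(7)
    assert reconstruct(add_local(sx, sy)) == 12   # 5 + 7
    assert sum(mul_local(sx, sy)) % MOD == 35     # 5 * 7
```

The appeal of replicated sharing in this setting is that additions cost nothing and multiplications need only one round of interaction, which is why the secure attention computation, dominated by matrix products and the softmax, is the natural target for the round and communication optimizations the abstract describes.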
Pages: 48881-48897
Page count: 17
Related Papers (50 total)
  • [1] A Privacy-preserving Framework for Rank Inference
    Gao, Yunpeng
    Yan, Tong
    Zhang, Nan
    2017 1ST IEEE SYMPOSIUM ON PRIVACY-AWARE COMPUTING (PAC), 2017, : 180 - 181
  • [2] SecureGPT: A Framework for Multi-Party Privacy-Preserving Transformer Inference in GPT
    Zeng, Chenkai
    He, Debiao
    Feng, Qi
    Yang, Xiaolin
    Luo, Qingcai
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 9480 - 9493
  • [3] THE-X: Privacy-Preserving Transformer Inference with Homomorphic Encryption
    Chen, Tianyu
    Bao, Hangbo
    Huang, Shaohan
    Dong, Li
    Jiao, Binxing
    Jiang, Daxin
    Zhou, Haoyi
    Li, Jianxin
    Wei, Furu
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), 2022, : 3510 - 3520
  • [4] Privacy-Preserving and Verifiable Outsourcing Linear Inference Computing Framework
    Liu, Jiao
    Li, Xinghua
    Liu, Ximeng
    Tang, Jiawei
    Wang, Yunwei
    Tong, Qiuyun
    Ma, Jianfeng
    IEEE TRANSACTIONS ON SERVICES COMPUTING, 2023, 16 (06) : 4591 - 4604
  • [5] PPCNN: An efficient privacy-preserving CNN training and inference framework
    Zhao, Fan
    Li, Zhi
    Wang, Hao
    INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, 2022, 37 (12) : 10988 - 11018
  • [6] SecFormer: Fast and Accurate Privacy-Preserving Inference for Transformer Models via SMPC
    Luo, Jinglong
    Zhang, Yehong
    Zhang, Zhuo
    Zhang, Jiaqi
    Mu, Xin
    Wang, Hui
    Yu, Yue
    Xu, Zenglin
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: ACL 2024, 2024, : 13333 - 13348
  • [7] Privacy-preserving generative framework for images against membership inference attacks
    Yang, Ruikang
    Ma, Jianfeng
    Miao, Yinbin
    Ma, Xindi
    IET COMMUNICATIONS, 2023, 17 (01) : 45 - 62
  • [8] PrivStream: A privacy-preserving inference framework on IoT streaming data at the edge
    Wang, Dan
    Ren, Ju
    Wang, Zhibo
    Zhang, Yaoxue
    Shen, Xuemin
    INFORMATION FUSION, 2022, 80 : 282 - 294
  • [9] Privformer: Privacy-preserving Transformer with MPC
    Akimoto, Yoshimasa
    Fukuchi, Kazuto
    Akimoto, Youhei
    Sakuma, Jun
    2023 IEEE 8TH EUROPEAN SYMPOSIUM ON SECURITY AND PRIVACY, EUROS&P, 2023, : 392 - 410
  • [10] A Privacy-Preserving Data Inference Framework for Internet of Health Things Networks
    Kang, James Jin
    Dibaei, Mahdi
    Luo, Gang
    Yang, Wencheng
    Zheng, Xi
    2020 IEEE 19TH INTERNATIONAL CONFERENCE ON TRUST, SECURITY AND PRIVACY IN COMPUTING AND COMMUNICATIONS (TRUSTCOM 2020), 2020, : 1210 - 1215