Privacy preserving layer partitioning for Deep Neural Network models

Cited by: 0
Authors
Rajasekar, Kishore [1 ]
Loh, Randolph [1 ]
Fok, Kar Wai [1 ]
Thing, Vrizlynn L. L. [1 ]
Affiliations
[1] ST Engineering, Singapore, Singapore
Keywords
enclave; model partition; private inference; trusted execution environment; Intel SGX; CNN
DOI
10.1109/CAI59869.2024.00202
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
MLaaS (Machine Learning as a Service) has become popular in the cloud computing domain, allowing users to leverage cloud resources to run private inference of ML models on their data. However, ensuring the privacy of user inputs and the secure execution of inference is essential. One approach to protecting data privacy and integrity is to use Trusted Execution Environments (TEEs), which execute programs inside a secure hardware enclave. TEEs can, however, introduce significant performance overhead due to the additional layers of encryption, decryption, and security and integrity checks, which can lead to slower inference times compared to running on unprotected hardware. In our work, we improve the runtime performance of ML models by introducing a layer partitioning technique that offloads part of the computation to a GPU. The technique splits a model into two distinct partitions: one executed within the TEE, and the other carried out on a GPU accelerator. Layer partitioning exposes intermediate feature maps in the clear, which can enable reconstruction attacks that recover the input. We conduct experiments to demonstrate the effectiveness of our approach in protecting against input reconstruction attacks mounted using a trained conditional Generative Adversarial Network (c-GAN). The evaluation is performed on widely used models such as VGG-16, ResNet-50, and EfficientNetB0, using two datasets: ImageNet for image classification and the TON IoT dataset for cybersecurity attack detection.
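To make the partitioning concrete, the following is a minimal PyTorch sketch of the idea, not the paper's implementation: a VGG-16 is split at an illustrative layer index so that the early layers run inside the TEE (simulated here on the CPU, since a real deployment would use an Intel SGX enclave runtime) and the remaining layers are offloaded to a CUDA GPU. The split point SPLIT_INDEX and the CPU-as-enclave stand-in are assumptions for illustration.

import torch
import torch.nn as nn
from torchvision.models import vgg16

# Illustrative split point: layers [0, SPLIT_INDEX) run inside the TEE,
# the rest are offloaded to the GPU. The index is a hypothetical choice,
# not the paper's tuned partition.
SPLIT_INDEX = 5

model = vgg16(weights=None).eval()
layers = list(model.features) + [model.avgpool, nn.Flatten(1)] + list(model.classifier)

# Partition 1: executed within the TEE. A real deployment would run this
# inside an SGX enclave; here the CPU stands in for the enclave.
tee_part = nn.Sequential(*layers[:SPLIT_INDEX])

# Partition 2: offloaded to the GPU accelerator (assumes a CUDA device).
gpu_part = nn.Sequential(*layers[SPLIT_INDEX:]).to("cuda")

@torch.no_grad()
def partitioned_inference(x: torch.Tensor) -> torch.Tensor:
    # Stage 1: the private input never leaves trusted memory.
    feat = tee_part(x)
    # Stage 2: the intermediate feature map crosses the trust boundary
    # in the clear; this exposed tensor is what a c-GAN-based
    # reconstruction attack would try to invert back to the input.
    return gpu_part(feat.to("cuda")).cpu()

x = torch.randn(1, 3, 224, 224)   # stand-in for a private input image
logits = partitioned_inference(x) # shape (1, 1000)

The earlier the split, the less computation stays inside the enclave, but the more informative the exposed feature map tends to be for an attacker; this trade-off is what the c-GAN reconstruction evaluation probes.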
Pages: 1129-1135
Number of pages: 7