B-LNN: Inference-time linear model for secure neural network inference

Cited by: 3
Authors
Wang, Qizheng [1 ,2 ]
Ma, Wenping [1 ]
Wang, Weiwei [1 ]
Affiliations
[1] Xidian Univ, Sch Commun Engn, Xian, Peoples R China
[2] Shandong Inspur Sci Res Inst Co Ltd, Jinan, Peoples R China
Keywords
Neural networks; Activation function; Privacy protection; Secure neural network inference
DOI
10.1016/j.ins.2023.118966
CLC number
TP [Automation Technology, Computer Technology]
Subject classification number
0812
Abstract
Machine Learning as a Service (MLaaS) provides clients with well-trained neural networks for predicting on private data. Conventional MLaaS prediction requires either that clients send sensitive inputs to the server or that proprietary models be stored on the client-side device. The former compromises client privacy, while the latter harms the interests of model providers. Existing works on privacy-preserving MLaaS introduce cryptographic primitives that allow two parties to perform neural network inference without revealing either party's data. However, nonlinear activation functions impose high computational overhead and response delays on the inference process of these schemes. In this paper, we analyze the mechanism by which activation functions enhance model expressivity, and design an activation function, S-cos, that is friendly to secure neural network inference. Our proposed S-cos can be re-parameterized into a linear layer during the inference phase. Further, we propose an inference-time linear model called Beyond Linear Neural Network (B-LNN) equipped with S-cos, which exhibits promising performance on several benchmark datasets.
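The abstract's core idea, folding a train-time nonlinearity into the adjacent linear layer at inference, can be illustrated with a minimal sketch. The paper's actual S-cos construction is not reproduced here; as a stand-in assumption, the "activation" below is an elementwise affine map act(x) = a·x + b, which merges exactly into the preceding linear layer, so the deployed model is a single linear map:

```python
import numpy as np

# Hypothetical illustration of inference-time re-parameterization.
# Assumption: the activation is elementwise affine, act(x) = a*x + b
# (NOT the paper's S-cos; W, c, a, b are made-up values).
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))   # weights of the linear layer
c = rng.normal(size=4)        # bias of the linear layer
a, b = 1.7, 0.2               # parameters of the affine activation

x = rng.normal(size=3)        # a sample input

# Two-step forward pass: linear layer, then affine activation.
y_twostep = a * (W @ x + c) + b

# Re-parameterized single linear layer: W' = a*W, c' = a*c + b.
W_merged = a * W
c_merged = a * c + b
y_merged = W_merged @ x + c_merged

assert np.allclose(y_twostep, y_merged)
```

Because the merged layer is computed once offline, a secure-inference protocol only ever evaluates linear operations, which is the source of the efficiency gain the abstract claims.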
Pages: 14
Related Papers
50 records in total
  • [31] Bayesian inference of non-linear multiscale model parameters accelerated by a Deep Neural Network
    Wu, Ling
    Zulueta, Kepa
    Major, Zoltan
    Arriaga, Aitor
    Noels, Ludovic
    COMPUTER METHODS IN APPLIED MECHANICS AND ENGINEERING, 2020, 360
  • [32] Real-time inference in a VLSI spiking neural network
    Corneil, Dane
    Sonnleithner, Daniel
    Neftci, Emre
    Chicca, Elisabetta
    Cook, Matthew
    Indiveri, Giacomo
    Douglas, Rodney
    2012 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS (ISCAS 2012), 2012, : 2425 - 2428
  • [33] Lightweight Inference by Neural Network Pruning: Accuracy, Time and Comparison
    Paralikas, Ilias
    Spantideas, Sotiris
    Giannopoulos, Anastasios
    Trakadas, Panagiotis
    ARTIFICIAL INTELLIGENCE APPLICATIONS AND INNOVATIONS, PT III, AIAI 2024, 2024, 713 : 248 - 257
  • [35] A linear programming model based on network flow for pathway inference
    Ren, Xianwen
    Zhang, Xiang-Sun
    JOURNAL OF SYSTEMS SCIENCE & COMPLEXITY, 2010, 23 (05) : 971 - 977
  • [36] SMITIN: Self-Monitored Inference-Time INtervention for Generative Music Transformers
    Koo, Junghyun
    Wichern, Gordon
    Germain, Francois G.
    Khurana, Sameer
    Le Roux, Jonathan
    IEEE OPEN JOURNAL OF SIGNAL PROCESSING, 2025, 6 : 266 - 275
  • [37] Sparsity-Guided Holistic Explanation for LLMs with Interpretable Inference-Time Intervention
    Tan, Zhen
    Chen, Tianlong
    Zhang, Zhenyu
    Liu, Huan
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 19, 2024, : 21619 - 21627
  • [38] Remapping in a recurrent neural network model of navigation and context inference
    Low, Isabel I. C.
    Giocomo, Lisa M.
    Williams, Alex H.
    ELIFE, 2023, 12
  • [39] Automating Deep Neural Network Model Selection for Edge Inference
    Lu, Bingqian
    Yang, Jianyi
    Chen, Lydia Y.
    Ren, Shaolei
    2019 IEEE FIRST INTERNATIONAL CONFERENCE ON COGNITIVE MACHINE INTELLIGENCE (COGMI 2019), 2019, : 184 - 193
  • [40] Binding and Perspective Taking as Inference in a Generative Neural Network Model
    Sadeghi, Mandi
    Schrodt, Fabian
    Otte, Sebastian
    Butz, Martin V.
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2021, PT III, 2021, 12893 : 3 - 14