BNN-Flip: Enhancing the Fault Tolerance and Security of Compute-in-Memory Enabled Binary Neural Network Accelerators

Cited by: 1
|
Authors
Malhotra, Akul [1 ]
Wang, Chunguang [1 ]
Gupta, Sumeet Kumar [1 ]
Affiliations
[1] Purdue Univ, W Lafayette, IN 47907 USA
Keywords
Binary Neural Networks; Compute-in-Memory; DNN Security; Fault Tolerance
DOI
10.1109/ASP-DAC58780.2024.10473947
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Compute-in-memory based binary neural networks or CiM-BNNs offer high energy/area efficiency for the design of edge deep neural network (DNN) accelerators, with only a mild accuracy reduction. However, for successful deployment, the design of CiM-BNNs must consider challenges such as memory faults and data security that plague existing DNN accelerators. In this work, we aim to mitigate both these problems simultaneously by proposing BNN-Flip, a training-free weight transformation algorithm that not only enhances the fault tolerance of CiM-BNNs but also protects them from weight theft attacks. BNN-Flip inverts the rows and columns of the BNN weight matrix in a way that reduces the impact of memory faults on the CiM-BNN's inference accuracy, while preserving the correctness of the CiM operation. Concurrently, our technique encodes the CiM-BNN weights, securing them from weight theft. Our experiments on various CiM-BNNs show that BNN-Flip achieves an inference accuracy increase of up to 10.55% over the baseline (i.e. CiM-BNNs not employing BNN-Flip) in the presence of memory faults. Additionally, we show that the encoded weights generated by BNN-Flip furnish extremely low (near 'random guess') inference accuracy for the adversary attempting weight theft. The benefits of BNN-Flip come with an energy overhead of < 3%.
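The abstract's core mechanism — inverting rows and columns of a {-1, +1} weight matrix while preserving the correctness of the in-memory dot products — can be illustrated with a minimal sketch. This is an assumed, simplified reconstruction, not the paper's actual algorithm: the flip keys here are random, whereas BNN-Flip chooses them to reduce fault impact, and the key names and recovery step are illustrative only.

```python
import numpy as np

# Hypothetical illustration (not the authors' exact method): flipping a row
# or column of a {-1,+1} weight matrix negates predictable parts of each
# dot product, so the stored weights can be encoded (useless to an adversary
# without the flip keys) while correct outputs are recovered at the periphery.

rng = np.random.default_rng(0)
W = rng.choice([-1, 1], size=(4, 6))       # original binary weight matrix
x = rng.choice([-1, 1], size=4)            # binary input activations

row_key = rng.choice([-1, 1], size=4)      # assumed per-row flip key
col_key = rng.choice([-1, 1], size=6)      # assumed per-column flip key

# Encoded weights actually stored in the CiM array.
W_stored = row_key[:, None] * W * col_key

# Recover the correct result: pre-flip the input bits matching the row
# flips, then negate the column-flipped partial sums after the CiM readout.
y = ((x * row_key) @ W_stored) * col_key

assert np.array_equal(y, x @ W)            # original computation preserved
```

Because every flip is its own inverse (each key entry squares to +1), correctness is restored with only sign changes at the array periphery, which is consistent with the small (< 3%) energy overhead reported in the abstract.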
Pages: 146-152
Page count: 7
Related Papers
2 records
  • [1] A Method for Reverse Engineering Neural Network Parameters from Compute-in-Memory Accelerators
    Read, James
    Li, Wantong
    Yu, Shimeng
    2022 IEEE Computer Society Annual Symposium on VLSI (ISVLSI 2022), 2022: 302-307
  • [2] A System-Level Exploration of Binary Neural Network Accelerators with Monolithic 3D Based Compute-in-Memory SRAM
    Choi, Jeong Hwan
    Gong, Young-Ho
    Chung, Sung Woo
    Electronics, 2021, 10(5): 1-11