Efficient Sum-Check Protocol for Convolution

Cited by: 0
Authors
Ju, Chanyang [1 ,2 ]
Lee, Hyeonbum [1 ,2 ]
Chung, Heewon [1 ,2 ]
Seo, Jae Hong [1 ,2 ]
Kim, Sungwook [3 ]
Affiliations
[1] Hanyang Univ, Dept Math, Seoul 04763, South Korea
[2] Hanyang Univ, Res Inst Nat Sci, Seoul 04763, South Korea
[3] Seoul Womens Univ, Dept Informat Secur, Seoul 01797, South Korea
Funding
National Research Foundation of Singapore;
Keywords
Verifiable computation; matrix multiplication; convolutional neural networks; interactive proofs; sum-check protocol;
DOI
10.1109/ACCESS.2021.3133442
CLC Classification Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Many applications have recently adopted machine learning and deep learning techniques. Convolutional neural networks (CNNs) are composed of sequential operations such as activation, pooling, convolution, and fully connected layers, and their computational cost is enormous, with the convolution and fully connected layers dominating. Typically, a user with insufficient computing capacity delegates such tasks to a server with sufficient computing power, and the user may then want to verify that the returned outputs are genuine predictions of the machine learning model. In this paper, we are interested in verifying that the delegated evaluation of a CNN, one of the principal deep learning models for image recognition and classification, was performed correctly. Specifically, we focus on the verifiable computation of the matrix multiplications in a CNN convolutional layer. We build on Thaler's technique (CRYPTO 2013) for validating matrix multiplication and present a predicate function based on the observation that the sequence of operations can be viewed as sequential matrix multiplications. Furthermore, we lower the proving cost by splitting a convolution operation into two halves. As a result, we obtain an efficient sum-check protocol for the convolution operation that, like the state-of-the-art zkCNN approach (ePrint 2021), achieves asymptotically optimal proving cost while being roughly 2x cheaper than zkCNN in communication. We also propose a verifiable inference system that uses our protocol as its fundamental building block.
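The abstract builds on Thaler's sum-check protocol for matrix multiplication: for C = A·B with n = 2^d, the multilinear extensions satisfy C̃(x, y) = Σ_{z∈{0,1}^d} Ã(x, z)·B̃(z, y), so a random evaluation of C̃ reduces to a d-round sum-check over z. The Python sketch below illustrates only this generic building block, not the paper's split-convolution optimization; the field prime, the matrix size, and all function names are assumptions made purely for illustration.

```python
# Minimal sketch of the sum-check reduction for verifying C = A * B over a prime
# field, in the spirit of Thaler's matrix-multiplication protocol cited above.
# The prime, matrix size, and helper names are illustrative assumptions.
import random

P = 2**61 - 1          # assumed prime field
D = 2                  # matrices are (2**D) x (2**D)
N = 2**D
INV2 = pow(2, P - 2, P)

def eq_weights(r):
    """Return eq(r, i) for every i in {0,1}^D (bit 0 of i pairs with r[0])."""
    t = [1]
    for x in r:
        t = [v * (1 - x) % P for v in t] + [v * x % P for v in t]
    return t

def interp_deg2(e, r):
    """Evaluate the degree-2 polynomial taking values e[0], e[1], e[2] at 0, 1, 2."""
    l0 = (r - 1) * (r - 2) % P * INV2 % P
    l1 = r * (2 - r) % P
    l2 = r * (r - 1) % P * INV2 % P
    return (e[0] * l0 + e[1] * l1 + e[2] * l2) % P

def fold(vec, r):
    """Fix the lowest-order variable of the multilinear extension of `vec` to r."""
    return [(vec[2*j] * (1 - r) + vec[2*j + 1] * r) % P for j in range(len(vec) // 2)]

def sumcheck_matmul(A, B, C):
    """Check C~(r_row, r_col) = sum_z A~(r_row, z) * B~(z, r_col) via sum-check."""
    r_row = [random.randrange(P) for _ in range(D)]
    r_col = [random.randrange(P) for _ in range(D)]
    w_row, w_col = eq_weights(r_row), eq_weights(r_col)

    # Claimed value C~(r_row, r_col); a full protocol gets this from a commitment/oracle.
    claim = sum(w_row[i] * w_col[j] * C[i][j] for i in range(N) for j in range(N)) % P

    # Prover's tables: a[k] = A~(r_row, k), b[k] = B~(k, r_col) for k in {0,1}^D.
    a = [sum(w_row[i] * A[i][k] for i in range(N)) % P for k in range(N)]
    b = [sum(w_col[j] * B[k][j] for j in range(N)) % P for k in range(N)]

    for _ in range(D):                       # one round per variable of z
        evals = []
        for t in (0, 1, 2):                  # prover sends the degree-2 round polynomial
            at, bt = fold(a, t), fold(b, t)
            evals.append(sum(x * y for x, y in zip(at, bt)) % P)
        if (evals[0] + evals[1]) % P != claim:   # verifier's round consistency check
            return False
        r = random.randrange(P)              # verifier's challenge
        claim = interp_deg2(evals, r)
        a, b = fold(a, r), fold(b, r)

    # Final check: remaining claim must equal A~(r_row, r_z) * B~(r_z, r_col).
    return claim == a[0] * b[0] % P

if __name__ == "__main__":
    A = [[random.randrange(P) for _ in range(N)] for _ in range(N)]
    B = [[random.randrange(P) for _ in range(N)] for _ in range(N)]
    C = [[sum(A[i][k] * B[k][j] for k in range(N)) % P for j in range(N)] for i in range(N)]
    print("accepted:", sumcheck_matmul(A, B, C))   # True for an honest prover
```

In this sketch the verifier reads A, B, and C directly when forming the claim and the final check; in the actual protocol those evaluations would come from oracles or polynomial commitments, which is where the paper's cost savings for convolution apply.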
Pages: 164047-164059
Number of pages: 13