Compressing neural networks with two-layer decoupling

Citations: 0
|
Authors
De Jonghe, Joppe [1 ]
Usevich, Konstantin [2 ]
Dreesen, Philippe [3 ]
Ishteva, Mariya [1 ]
Affiliations
[1] Katholieke Univ Leuven, Dept Comp Sci, Geel, Belgium
[2] Univ Lorraine, CNRS, Nancy, France
[3] Maastricht Univ, DACS, Maastricht, Netherlands
Keywords
tensor; tensor decomposition; decoupling; compression; neural network; MODEL COMPRESSION; ACCELERATION;
DOI
10.1109/CAMSAP58249.2023.10403509
Chinese Library Classification
TP39 [Applications of Computers];
Discipline Codes
081203 ; 0835 ;
Abstract
The single-layer decoupling problem has recently been used for the compression of neural networks. However, methods based on the single-layer decoupling problem can only compress a neural network into a single flexible layer. As a result, compressing more complex networks leads to worse approximations of the original network, since only one flexible layer is available. The ability to compress into more than one flexible layer therefore allows a better approximation of the underlying network than compression into a single flexible layer. Performing compression into more than one flexible layer corresponds to solving a multi-layer decoupling problem. As a first step towards general multi-layer decoupling, this work introduces a method for solving the two-layer decoupling problem in the approximate case. This method enables the compression of neural networks into two flexible layers.
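To give a concrete feel for the structure involved, the sketch below fits a *single-layer* decoupled model of the form f(x) ≈ W g(Vᵀx), where each internal branch applies a univariate polynomial g_i to one mixed input z_i = v_iᵀx. This is a minimal illustration of the decoupled-layer structure only, not the authors' method: the random tanh target, the choice of fixing V at random, and the plain least-squares fit of the remaining coefficients are all assumptions made here for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, r, deg = 3, 2, 4, 3   # inputs, outputs, branches, polynomial degree

# Illustrative target to be approximated: a small random tanh network.
A = rng.standard_normal((5, n))
B = rng.standard_normal((m, 5))
f = lambda X: (B @ np.tanh(A @ X.T)).T

X = rng.standard_normal((500, n))
Y = f(X)

# Decoupled model f(x) ~= W g(V^T x). With the mixing matrix V fixed
# (randomly, for illustration), the model is *linear* in the combined
# coefficients of W and the branch polynomials g_i, so they can be
# recovered by ordinary least squares on polynomial features of z = V^T x.
V = rng.standard_normal((n, r))
Z = X @ V                                                 # branch inputs, (500, r)
Phi = np.concatenate([Z**k for k in range(1, deg + 1)], axis=1)  # (500, r*deg)
C, *_ = np.linalg.lstsq(Phi, Y, rcond=None)               # combined coefficients

Y_hat = Phi @ C
rel_err = np.linalg.norm(Y - Y_hat) / np.linalg.norm(Y)
print(f"relative fit error: {rel_err:.3f}")
```

In the paper's setting the factors are optimized jointly rather than with V fixed, and the two-layer case stacks a second flexible layer of the same form; the sketch only shows why a decoupled layer is a compact, structured approximation of a dense nonlinear map.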
Pages: 226 - 230
Page count: 5
Related Papers
50 items
  • [41] Self-Regularity of Output Weights for Overparameterized Two-Layer Neural Networks
    Gamarnik, David
    Kizildag, Eren C.
    Zadik, Ilias
    2021 IEEE INTERNATIONAL SYMPOSIUM ON INFORMATION THEORY (ISIT), 2021, : 819 - 824
  • [42] A Riemannian mean field formulation for two-layer neural networks with batch normalization
    Chao Ma
    Lexing Ying
    Research in the Mathematical Sciences, 2022, 9
  • [43] Use of two-layer neural networks to answer scientific questions in radiation oncology
    Schmelz, Helmut
    Eich, Hans Theodor
    Haverkamp, Uwe
    Rehn, Stephan
    Hering, Dominik
    STRAHLENTHERAPIE UND ONKOLOGIE, 2023, 199 : S66 - S66
  • [44] Cumulant-based training algorithms of two-layer feedforward neural networks
    Dai, XH
    SIGNAL PROCESSING, 2000, 80 (08) : 1597 - 1606
  • [45] Improved learning algorithm for two-layer neural networks for identification of nonlinear systems
    Vargas, Jose A. R.
    Pedrycz, Witold
    Hemerly, Elder M.
    NEUROCOMPUTING, 2019, 329 : 86 - 96
  • [46] On the learning dynamics of two-layer quadratic neural networks for understanding deep learning
    Tan, Zhenghao
    Chen, Songcan
    FRONTIERS OF COMPUTER SCIENCE, 2022, 16 (03)
  • [47] Learning with Norm Constrained, Over-parameterized, Two-layer Neural Networks
    Liu, Fanghui
    Dadi, Leello
    Cevher, Volkan
    JOURNAL OF MACHINE LEARNING RESEARCH, 2024, 25 : 1 - 42
  • [49] A heuristic two-layer reinforcement learning algorithm based on BP neural networks
    Liu, Zhibin
    Zeng, Xiaoqin
    Liu, Huiyi
    Chu, Rong
    Jisuanji Yanjiu yu Fazhan/Computer Research and Development, 2015, 52 (03): : 579 - 587
  • [50] Dynamics of the two-layer pseudoinverse neural network
    黎树军
    黄五群
    陈天仑
    Chinese Science Bulletin, 1995, (20) : 1691 - 1694