A Representer Theorem for Deep Neural Networks

Cited by: 0
Authors
Unser, Michael [1 ]
Affiliation
[1] Ecole Polytech Fed Lausanne, Biomed Imaging Grp, CH-1015 Lausanne, Switzerland
Funding
Swiss National Science Foundation;
Keywords
splines; regularization; sparsity; learning; deep neural networks; activation functions; LINEAR INVERSE PROBLEMS; SPLINES; KERNELS;
DOI
Not available
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
We propose to optimize the activation functions of a deep neural network by adding a corresponding functional regularization to the cost function. We justify the use of a second-order total-variation criterion. This allows us to derive a general representer theorem for deep neural networks that makes a direct connection with splines and sparsity. Specifically, we show that the optimal network configuration can be achieved with activation functions that are nonuniform linear splines with adaptive knots. The bottom line is that the action of each neuron is encoded by a spline whose parameters (including the number of knots) are optimized during the training procedure. The scheme results in a computational structure that is compatible with existing deep-ReLU, parametric ReLU, APL (adaptive piecewise-linear), and MaxOut architectures. It also suggests novel optimization challenges and makes an explicit link with ℓ1 minimization and sparsity-promoting techniques.
Pages: 30
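The abstract states that each optimal activation is a nonuniform linear spline with adaptive knots, and that the second-order total-variation regularizer acts as an ℓ1-type, sparsity-promoting penalty. The sketch below illustrates that structure under stated assumptions: it uses PyTorch, parameterizes the spline as an affine term plus a sum of shifted ReLUs with trainable knots, and takes the TV(2) of the spline to be the ℓ1 norm of the ReLU jump amplitudes. The names `SplineActivation`, `n_knots`, `init_range`, and `lambda_tv` are illustrative choices and not the paper's reference code.

```python
# Minimal sketch (not the paper's code) of a learnable linear-spline activation:
# sigma(x) = b0 + b1*x + sum_k a_k * ReLU(x - tau_k),
# with TV^(2)(sigma) = sum_k |a_k| used as an l1 penalty during training.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SplineActivation(nn.Module):
    """Nonuniform linear-spline activation with trainable knots and amplitudes."""

    def __init__(self, n_knots: int = 21, init_range: float = 3.0):
        super().__init__()
        # Knot locations tau_k and jump amplitudes a_k are both trainable.
        self.knots = nn.Parameter(torch.linspace(-init_range, init_range, n_knots))
        self.weights = nn.Parameter(torch.zeros(n_knots))
        # Affine part b0 + b1*x (the null space of the TV^(2) regularizer).
        self.bias = nn.Parameter(torch.zeros(1))
        self.slope = nn.Parameter(torch.ones(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # sigma(x) = b0 + b1*x + sum_k a_k * ReLU(x - tau_k)
        relu_terms = F.relu(x.unsqueeze(-1) - self.knots)   # (..., n_knots)
        return self.bias + self.slope * x + relu_terms @ self.weights

    def tv2(self) -> torch.Tensor:
        # For a linear spline, the second-order total variation is the
        # l1 norm of the jump amplitudes; penalizing it prunes knots.
        return self.weights.abs().sum()


if __name__ == "__main__":
    act = SplineActivation(n_knots=11)
    layer = nn.Linear(4, 4)
    x = torch.randn(8, 4)
    y = act(layer(x))
    # Hypothetical training objective: data-fidelity term plus a weighted sum
    # of the TV^(2) penalties of all spline activations.
    lambda_tv = 1e-3
    loss = y.pow(2).mean() + lambda_tv * act.tv2()
    loss.backward()
    print(y.shape, float(loss))
```

Because a linear spline is exactly an affine term plus a weighted sum of shifted ReLUs, penalizing the ℓ1 norm of those weights both controls the TV(2) of the activation and tends to zero out superfluous knots, which is the sparsity effect the abstract refers to.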
Related Papers
50 records in total
  • [21] A GENERALIZED CONVERGENCE THEOREM FOR NEURAL NETWORKS
    BRUCK, J
    GOODMAN, JW
    IEEE TRANSACTIONS ON INFORMATION THEORY, 1988, 34 (05) : 1089 - 1092
  • [22] Uniqueness theorem for quaternionic neural networks
    Kobayashi, Masaki
    SIGNAL PROCESSING, 2017, 136 : 102 - 106
  • [23] Least-squares collocation: a spherical harmonic representer theorem
    Chang, Guobin
    Bian, Shaofeng
    GEOPHYSICAL JOURNAL INTERNATIONAL, 2023, 234 (02) : 879 - 886
  • [24] KOLMOGOROV THEOREM AND MULTILAYER NEURAL NETWORKS
    KURKOVA, V
    NEURAL NETWORKS, 1992, 5 (03) : 501 - 506
  • [25] On the Singularity in Deep Neural Networks
    Nitta, Tohru
    NEURAL INFORMATION PROCESSING, ICONIP 2016, PT IV, 2016, 9950 : 389 - 396
  • [26] Topology of Deep Neural Networks
    Naitzat, Gregory
    Zhitnikov, Andrey
    Lim, Lek-Heng
    JOURNAL OF MACHINE LEARNING RESEARCH, 2020, 21
  • [27] On Numerosity of Deep Neural Networks
    Zhang, Xi
    Wu, Xiaolin
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [28] Topology of deep neural networks
    JOURNAL OF MACHINE LEARNING RESEARCH (Microtome Publishing), 2020, 21
  • [29] Tracking with Deep Neural Networks
    Kucharczyk, Marcin
    Wolter, Marcin
    PHOTONICS APPLICATIONS IN ASTRONOMY, COMMUNICATIONS, INDUSTRY, AND HIGH-ENERGY PHYSICS EXPERIMENTS 2019, 2019, 11176
  • [30] Deep neural networks in psychiatry
    Durstewitz, Daniel
    Koppe, Georgia
    Meyer-Lindenberg, Andreas
    MOLECULAR PSYCHIATRY, 2019, 24 : 1583 - 1598