A Representer Theorem for Deep Neural Networks

Cited by: 0
Author
Unser, Michael [1 ]
Affiliation
[1] Ecole Polytech Fed Lausanne, Biomed Imaging Grp, CH-1015 Lausanne, Switzerland
Funding
Swiss National Science Foundation
Keywords
splines; regularization; sparsity; learning; deep neural networks; activation functions; LINEAR INVERSE PROBLEMS; SPLINES; KERNELS;
DOI
Not available
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
We propose to optimize the activation functions of a deep neural network by adding a corresponding functional regularization to the cost function. We justify the use of a second-order total-variation criterion. This allows us to derive a general representer theorem for deep neural networks that makes a direct connection with splines and sparsity. Specifically, we show that the optimal network configuration can be achieved with activation functions that are nonuniform linear splines with adaptive knots. The bottom line is that the action of each neuron is encoded by a spline whose parameters (including the number of knots) are optimized during the training procedure. The scheme results in a computational structure that is compatible with existing deep-ReLU, parametric ReLU, APL (adaptive piecewise-linear), and MaxOut architectures. It also suggests novel optimization challenges and makes an explicit link with ℓ1 minimization and sparsity-promoting techniques.
Pages: 30
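The scheme described in the abstract can be made concrete with a short sketch. Below is a minimal PyTorch-style illustration, not the paper's implementation: the class name, the fixed candidate-knot grid, and all parameter choices are assumptions made here for exposition. Each neuron's activation is written as a linear term plus a sum of shifted ReLUs, and the second-order total-variation regularizer then reduces to an ℓ1 penalty on the ReLU coefficients, which drives most of them to zero and thereby adapts the effective number of knots.

```python
import torch
import torch.nn as nn

class SplineActivation(nn.Module):
    """Learnable piecewise-linear activation (illustrative sketch only).

    sigma(x) = b0 + b1 * x + sum_k a_k * relu(x - tau_k)
    is a nonuniform linear spline once most a_k are driven to zero.
    """

    def __init__(self, num_knots: int = 21, knot_range: float = 3.0):
        super().__init__()
        # Fixed grid of candidate knots tau_k (an assumption of this sketch);
        # the l1 penalty below prunes most of them, emulating adaptive knots.
        self.register_buffer(
            "knots", torch.linspace(-knot_range, knot_range, num_knots)
        )
        self.coeffs = nn.Parameter(torch.zeros(num_knots))  # ReLU weights a_k
        self.slope = nn.Parameter(torch.ones(1))             # linear coefficient b1
        self.bias = nn.Parameter(torch.zeros(1))             # offset b0

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Broadcast x against all knots: (..., 1) - (K,) -> (..., K).
        shifted = torch.relu(x.unsqueeze(-1) - self.knots)
        return self.bias + self.slope * x + shifted @ self.coeffs

    def tv2_penalty(self) -> torch.Tensor:
        # For this ReLU expansion, the second-order total variation of sigma
        # equals sum_k |a_k|, i.e. an l1 norm on the coefficients.
        return self.coeffs.abs().sum()

# Toy usage: add the penalty, scaled by a regularization weight, to the loss.
act = SplineActivation()
x = torch.randn(8, 16)
loss = act(x).pow(2).mean() + 1e-3 * act.tv2_penalty()
loss.backward()
```

The ℓ1 term in `tv2_penalty` is what makes the abstract's link to sparsity-promoting techniques explicit: under this penalty, training selects a sparse subset of active knots rather than a dense, fixed parameterization.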
Related papers
50 items in total
  • [1] A representer theorem for deep neural networks
    Unser, Michael
    Journal of Machine Learning Research, 2019, 20
  • [2] Representer Point Selection for Explaining Deep Neural Networks
    Yeh, Chih-Kuan
    Kim, Joon Sik
    Yen, Ian E. H.
    Ravikumar, Pradeep
Advances in Neural Information Processing Systems 31 (NIPS 2018), 2018, 31
  • [3] A Representer Theorem for Deep Kernel Learning
    Bohn, Bastian
    Rieger, Christian
    Griebel, Michael
    Journal of Machine Learning Research, 2019, 20
  • [4] A generalized representer theorem
    Schölkopf, B.
    Herbrich, R.
    Smola, A. J.
    Computational Learning Theory, Proceedings, 2001, 2111: 416-426
  • [5] An Epigraphical Approach to the Representer Theorem
    Duval, Vincent
    Journal of Convex Analysis, 2021, 28(03): 819-836
  • [6] Banach Space Representer Theorems for Neural Networks and Ridge Splines
    Parhi, Rahul
    Nowak, Robert D.
    Journal of Machine Learning Research, 2021, 22
  • [7] Representer Theorem for Learning Koopman Operators
    Khosravi, Mohammad
    IEEE Transactions on Automatic Control, 2023, 68(05): 2995-3010
  • [8] When is there a representer theorem? Reflexive Banach spaces
    Schlegel, Kevin
    Advances in Computational Mathematics, 2021, 47