From Kernel Methods to Neural Networks: A Unifying Variational Formulation

Cited by: 1
Authors
Unser, Michael [1]
Affiliations
[1] Ecole Polytech Fed Lausanne EPFL, Biomed Imaging Grp, Stn 17, CH-1015 Lausanne, Switzerland
Keywords
Machine learning; Convex optimization; Regularization; Representer theorem; Kernel methods; Neural networks; Banach space; APPROXIMATION; SPLINES; INTERPOLATION; REGRESSION; TRANSFORM;
DOI
10.1007/s10208-023-09624-9
Chinese Library Classification (CLC)
TP301 [Theory and Methods];
Subject Classification Code
081202;
Abstract
The minimization of a data-fidelity term and an additive regularization functional gives rise to a powerful framework for supervised learning. In this paper, we present a unifying regularization functional that depends on an operator $\mathrm{L}$ and on a generic Radon-domain norm. We establish the existence of a minimizer and give the parametric form of the solution(s) under very mild assumptions. When the norm is Hilbertian, the proposed formulation yields a solution that involves radial-basis functions and is compatible with the classical methods of machine learning. By contrast, for the total-variation norm, the solution takes the form of a two-layer neural network with an activation function that is determined by the regularization operator. In particular, we retrieve the popular ReLU networks by letting the operator be the Laplacian. We also characterize the solution for the intermediate regularization norms $\Vert \cdot \Vert = \Vert \cdot \Vert_{L_p}$ with $p \in (1,2]$. Our framework offers guarantees of universal approximation for a broad family of regularization operators or, equivalently, for a wide variety of shallow neural networks, including the cases (such as ReLU) where the activation function is increasing polynomially. It also explains the favorable role of bias and skip connections in neural architectures.
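As a rough guide to the setting the abstract describes, the following display sketches the generic form of such a variational learning problem and of the two-layer solution it yields in the total-variation case; the notation ($f$, $(\mathbf{x}_m, y_m)$, $E$, $\lambda$, $v_k$, $\mathbf{w}_k$, $b_k$, $\sigma$, $K$) is illustrative and is not quoted from the paper.
\[
  \min_{f} \; \sum_{m=1}^{M} E\bigl(y_m, f(\mathbf{x}_m)\bigr) \;+\; \lambda\,\bigl\Vert \mathrm{L} f \bigr\Vert ,
\]
where the sum is the data-fidelity term over the training pairs $(\mathbf{x}_m, y_m)$ and the norm $\Vert\cdot\Vert$ is taken in the Radon domain. When $\Vert\cdot\Vert$ is the total-variation norm, the abstract states that a minimizer can be written as a two-layer network, schematically
\[
  f(\mathbf{x}) \;=\; \sum_{k=1}^{K} v_k\, \sigma\!\bigl(\mathbf{w}_k^{\mathsf{T}}\mathbf{x} - b_k\bigr) \;+\; (\text{affine/skip term}),
\]
with the activation $\sigma$ determined by $\mathrm{L}$ (e.g., $\sigma=\mathrm{ReLU}$ when $\mathrm{L}$ is the Laplacian).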
Pages: 1779-1818
Number of pages: 40