An approximation theory approach to learning with l1 regularization

Cited by: 13
Authors
Wang, Hong-Yan [1 ]
Xiao, Quan-Wu [2 ]
Zhou, Ding-Xuan [3 ]
Affiliations
[1] Zhejiang Gongshang Univ, Sch Math & Stat, Hangzhou 310018, Zhejiang, Peoples R China
[2] Microsoft Search Technol Ctr Asia, Beijing 100080, Peoples R China
[3] City Univ Hong Kong, Dept Math, Kowloon, Hong Kong, Peoples R China
Keywords
Learning theory; Data-dependent hypothesis spaces; Kernel-based regularization scheme; l1-regularizer; Multivariate approximation; MODEL SELECTION; SPACES; INTERPOLATION; REGRESSION; OPERATORS;
DOI
10.1016/j.jat.2012.12.004
Chinese Library Classification
O1 [Mathematics];
Discipline code
0701 ; 070101 ;
Abstract
Regularization schemes with an l1-regularizer often produce sparse representations for objects in approximation theory, image processing, statistics and learning theory. In this paper, we study a kernel-based learning algorithm for regression generated by regularization schemes associated with the l1-regularizer. We show that convergence rates of the learning algorithm can be independent of the dimension of the input space of the regression problem when the kernel is smooth enough. This confirms the effectiveness of the learning algorithm. Our error analysis is carried out by means of an approximation theory approach using a local polynomial reproduction formula and the norming set condition. (C) 2012 Elsevier Inc. All rights reserved.
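The scheme described in the abstract, a regression estimator built from kernel functions centered at the sample points with an l1 penalty on the coefficients, can be illustrated with a minimal sketch. The Gaussian kernel, bandwidth, regularization parameter, and the ISTA (proximal gradient) solver below are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

# Hedged sketch: l1-regularized kernel least-squares regression over a
# data-dependent hypothesis space span{K(x_i, .)}. Kernel, bandwidth,
# lambda and solver are assumptions for illustration only.

def gaussian_kernel(X, Y, sigma=0.5):
    """Gaussian kernel matrix K[i, j] = exp(-|x_i - y_j|^2 / (2 sigma^2))."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def l1_kernel_regression(X, y, lam=0.1, n_iter=500):
    """Minimize (1/m)||K c - y||^2 + lam * ||c||_1 by ISTA."""
    m = len(y)
    K = gaussian_kernel(X, X)
    # Step size 1/L, where L = (2/m)||K||_2^2 is the gradient's Lipschitz constant.
    step = m / (2.0 * np.linalg.norm(K, 2) ** 2)
    c = np.zeros(m)
    for _ in range(n_iter):
        grad = (2.0 / m) * K.T @ (K @ c - y)
        z = c - step * grad
        # Soft-thresholding: proximal map of the l1 penalty, source of sparsity.
        c = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return c, K

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(40, 1))
y = np.sin(np.pi * X[:, 0]) + 0.05 * rng.standard_normal(40)
c, K = l1_kernel_regression(X, y)
print("nonzero coefficients:", np.count_nonzero(c), "of", len(c))
```

The soft-thresholding step is what typically drives many coefficients exactly to zero, giving the sparse representations the abstract refers to.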
Pages: 240 - 258
Page count: 19
Related papers
50 records in total
  • [41] αl1 - βl2 regularization for sparse recovery
    Ding, Liang
    Han, Weimin
    INVERSE PROBLEMS, 2019, 35 (12)
  • [42] ELM with L1/L2 regularization constraints
    Feng B.
    Qin K.
    Jiang Z.
    Hanjie Xuebao/Transactions of the China Welding Institution, 2018, 39 (09): : 31 - 35
  • [44] Learning Optimized Structure of Neural Networks by Hidden Node Pruning With L1 Regularization
    Xie, Xuetao
    Zhang, Huaqing
    Wang, Junze
    Chang, Qin
    Wang, Jian
    Pal, Nikhil R.
    IEEE TRANSACTIONS ON CYBERNETICS, 2020, 50 (03) : 1333 - 1346
  • [45] Convex Hull Collaborative Representation Learning on Grassmann Manifold with L1 Norm Regularization
    Guan, Yao
    Yan, Wenzhu
    Li, Yanmeng
    PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT II, 2024, 14426 : 453 - 465
  • [46] Learning non-linear classifiers with a sparsity constraint using L1 regularization
    Blondel, Mathieu
    Seki, Kazuhiro
    Uehara, Kuniaki
    Transactions of the Japanese Society for Artificial Intelligence, 2012, 27 (06) : 401 - 410
  • [47] The L1/2 regularization approach for survival analysis in the accelerated failure time model
    Chai, Hua
    Liang, Yong
    Liu, Xiao-Ying
    COMPUTERS IN BIOLOGY AND MEDICINE, 2015, 64 : 283 - 290
  • [48] Compact Deep Neural Networks with l1,1 and l1,2 Regularization
    Ma, Rongrong
    Niu, Lingfeng
    2018 18TH IEEE INTERNATIONAL CONFERENCE ON DATA MINING WORKSHOPS (ICDMW), 2018, : 1248 - 1254
  • [49] Tight bounds on l1 approximation and learning of self-bounding functions
    Feldman, Vitaly
    Kothari, Pravesh
    Vondrak, Jan
    THEORETICAL COMPUTER SCIENCE, 2020, 808 : 86 - 98
  • [50] Lp approximation of variational problems in L1 and L∞
    Attouch, H
    Cominetti, R
    NONLINEAR ANALYSIS-THEORY METHODS & APPLICATIONS, 1999, 36 (03) : 373 - 399