Sparse kernel logistic regression based on L1/2 regularization

Cited by: 0
Authors
XU Chen [1 ]
PENG ZhiMing [2 ]
JING WenFeng [2 ]
Affiliations
[1] Department of Statistics, University of British Columbia, Vancouver, BC V6T 1Z2, Canada
[2] Institute for Information and System Science, School of Mathematics and Statistics, Xi'an Jiaotong University
Funding
National Natural Science Foundation of China
Keywords
classification; L1/2 regularization; thresholding algorithm; kernel logistic regression; support vectors
DOI
Not available
CLC classification
O212.1 [General mathematical statistics]
Abstract
Sparsity-driven classification technologies have attracted much attention in recent years, due to their capability of providing more compressive representations and clearer interpretation. The two most popular classification approaches are support vector machines (SVMs) and kernel logistic regression (KLR), each having its own advantages. The sparsification of the SVM has been well studied, and many sparse versions of the 2-norm SVM, such as the 1-norm SVM (1-SVM), have been developed. The sparsification of KLR, however, has received less attention. The existing sparse versions of KLR are mainly based on L1-norm and L2-norm penalties, and their solutions are not as sparse as they could be. A very recent study of L1/2 regularization theory in compressive sensing shows that L1/2 sparse modeling can yield solutions sparser than those obtained with the 1-norm and 2-norm, and, furthermore, that the model can be solved efficiently by a simple iterative thresholding procedure. The objective function treated in L1/2 regularization theory is, however, of quadratic form, whose gradient is linear in its variables (such an objective function is the so-called linear gradient function). In this paper, by extending the linear gradient function of the L1/2 regularization framework to the logistic function, we propose a novel sparse version of KLR, the 1/2 quasi-norm kernel logistic regression (1/2-KLR). This version integrates the advantages of KLR and L1/2 regularization and defines an efficient implementation scheme for sparse KLR. We suggest a fast iterative thresholding algorithm for 1/2-KLR and prove its convergence. A series of simulations demonstrates that 1/2-KLR often obtains sparser solutions than the existing sparsity-driven versions of KLR, at the same or better accuracy level. The conclusion also holds in comparison with sparse SVMs (1-SVM and 2-SVM). We show an exclusive advantage of 1/2-KLR: the regularization parameter in the algorithm can be set adaptively whenever the sparsity (correspondingly, the number of support vectors) is given, which suggests a methodology for comparing the sparsity-promotion capability of different sparsity-driven classifiers. As an illustration of the benefits of 1/2-KLR, we give two applications of 1/2-KLR in semi-supervised learning, showing that 1/2-KLR can be successfully applied to classification tasks in which only a few data points are labeled.
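The abstract describes an iterative thresholding scheme: a gradient step on the logistic loss in the kernel expansion coefficients, followed by the L1/2 (half) thresholding operator, which drives many coefficients exactly to zero. Below is a minimal sketch of that idea in Python, assuming the standard closed-form half-thresholding operator from the L1/2 regularization literature and a plain fixed-step gradient update; the function names (half_threshold, klr_half), the step size mu, and the iteration count are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def half_threshold(z, lam, mu):
    """Component-wise half-thresholding operator used in L1/2 regularization.

    Entries with magnitude below the threshold are set exactly to zero; the
    remaining entries are shrunk via a closed-form cosine expression.
    """
    z = np.asarray(z, dtype=float)
    t = (54.0 ** (1.0 / 3.0) / 4.0) * (lam * mu) ** (2.0 / 3.0)  # zeroing threshold
    out = np.zeros_like(z)
    keep = np.abs(z) > t
    phi = np.arccos((lam * mu / 8.0) * (np.abs(z[keep]) / 3.0) ** (-1.5))
    out[keep] = (2.0 / 3.0) * z[keep] * (1.0 + np.cos(2.0 * np.pi / 3.0 - 2.0 * phi / 3.0))
    return out

def klr_half(K, y, lam=0.1, mu=0.5, n_iter=200):
    """Sketch of an iterative half-thresholding loop for sparse KLR.

    K : (n, n) kernel matrix, y : labels in {0, 1}.
    Each iteration takes a gradient step on the average logistic loss with
    respect to the kernel expansion coefficients alpha, then applies the
    half-thresholding operator, so many coefficients become exactly zero.
    """
    alpha = np.zeros(K.shape[0])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-K @ alpha))   # predicted class-1 probabilities
        grad = K.T @ (p - y) / len(y)          # gradient of the logistic loss
        alpha = half_threshold(alpha - mu * grad, lam, mu)
    return alpha

# Toy usage with an RBF kernel on synthetic two-class data (illustrative only).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (40, 2)), rng.normal(1, 1, (40, 2))])
y = np.r_[np.zeros(40), np.ones(40)]
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-0.5 * sq)
alpha = klr_half(K, y, lam=0.05, mu=0.5)
print("support vectors:", np.count_nonzero(alpha), "of", len(alpha))
```

In this sketch, the training points whose coefficients remain nonzero play the role of support vectors, so the attained sparsity can be read directly from the fitted alpha; the abstract's remark about setting the regularization parameter adaptively for a target number of support vectors refers to tuning lam toward a prescribed count of nonzero coefficients.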
Pages: 75-90
Number of pages: 16