Zero-Norm ELM with Non-convex Quadratic Loss Function for Sparse and Robust Regression

Cited by: 2
Authors
Wang, Xiaoxue [1 ]
Wang, Kuaini [2 ,3 ]
She, Yanhong [2 ]
Cao, Jinde [3 ,4 ]
Institutions
[1] Xian Shiyou Univ, Coll Comp Sci, Xian 710065, Shaanxi, Peoples R China
[2] Xian Shiyou Univ, Coll Sci, Xian 710065, Shaanxi, Peoples R China
[3] Southeast Univ, Sch Math, Nanjing 210096, Peoples R China
[4] Yonsei Univ, Yonsei Frontier Lab, Seoul 03722, South Korea
Funding
National Natural Science Foundation of China
Keywords
Extreme learning machine; Non-convex quadratic loss function; Zero-norm; DC programming; DCA; EXTREME LEARNING-MACHINE; SUPPORT VECTOR MACHINES; STATISTICAL COMPARISONS; CLASSIFICATION; CLASSIFIERS;
DOI
10.1007/s11063-023-11424-9
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Extreme learning machine (ELM) is a machine learning technique with a simple structure, fast learning speed, and excellent generalization ability, and it has received considerable attention since it was proposed. To further improve the sparsity of the output weights and the robustness of the model, this paper proposes a sparse and robust ELM based on zero-norm regularization and a non-convex quadratic loss function. The zero-norm regularization automatically yields sparse hidden nodes, and the non-convex quadratic loss function enhances robustness by assigning a constant penalty to outliers. The resulting optimization problem can be formulated as a difference of convex functions (DC) program, which is solved with the DC algorithm (DCA). Experiments on artificial and benchmark datasets verify that the proposed method achieves promising robustness while reducing the number of hidden nodes, especially on datasets with higher outlier levels.
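The robust-loss idea in the abstract can be illustrated with a minimal sketch. This is not the paper's exact formulation: it assumes a sigmoid hidden layer, a truncated quadratic loss L(r) = min(r², ε²) as the non-convex loss, and replaces the exact DCA subproblem with a simpler reweighted ridge solve that drops residuals beyond ε each iteration; the zero-norm regularizer is omitted. All function names and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_features(X, W, b):
    # Random hidden layer: fixed input weights W and biases b, sigmoid activation.
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def fit_robust_elm(X, y, n_hidden=30, C=10.0, eps=1.0, n_iter=20):
    # Truncated quadratic loss L(r) = min(r**2, eps**2), i.e. the DC
    # decomposition L(r) = r**2 - max(r**2 - eps**2, 0): residuals beyond
    # eps incur a constant penalty, so outliers stop pulling on the fit.
    # Here we use an IRLS-style surrogate (not the exact DCA subproblem):
    # each iteration refits a ridge problem on the current "inliers".
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = elm_features(X, W, b)
    beta = np.zeros(n_hidden)
    for _ in range(n_iter):
        r = H @ beta - y
        m = (np.abs(r) <= eps).astype(float)   # inlier indicator
        Hm = H * m[:, None]
        A = Hm.T @ H + np.eye(n_hidden) / C    # regularized normal equations
        beta = np.linalg.solve(A, Hm.T @ y)
    return W, b, beta

# Toy regression with injected outliers.
X = rng.uniform(-3.0, 3.0, size=(200, 1))
y = np.sin(X).ravel() + 0.1 * rng.normal(size=200)
y[:10] += 5.0                                  # heavy outliers
W, b, beta = fit_robust_elm(X, y)
pred = elm_features(X, W, b) @ beta
inlier_mse = float(np.mean((pred[10:] - y[10:]) ** 2))
```

Because the loss is bounded, the corrupted targets are excluded from the quadratic fit once their residuals exceed ε, so the clean part of the data is recovered accurately while the outliers keep large residuals.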
Pages: 12367-12399 (33 pages)
Related Papers
44 items in total
  • [1] Zero-Norm ELM with Non-convex Quadratic Loss Function for Sparse and Robust Regression
    Wang, Xiaoxue
    Wang, Kuaini
    She, Yanhong
    Cao, Jinde
    NEURAL PROCESSING LETTERS, 2023, 55 : 12367 - 12399
  • [2] Robust non-convex least squares loss function for regression with outliers
    Wang, Kuaini
    Zhong, Ping
    KNOWLEDGE-BASED SYSTEMS, 2014, 71 : 290 - 302
  • [3] Training robust support vector regression with smooth non-convex loss function
    Zhong, Ping
    OPTIMIZATION METHODS & SOFTWARE, 2012, 27 (06): : 1039 - 1058
  • [4] Using zero-norm constraint for sparse probability density function estimation
    Hong, X.
    Chen, S.
    Harris, C. J.
    INTERNATIONAL JOURNAL OF SYSTEMS SCIENCE, 2012, 43 (11) : 2107 - 2113
  • [6] Robust regularized extreme learning machine for regression with non-convex loss function via DC program
    Wang, Kuaini
    Pei, Huimin
    Cao, Jinde
    Zhong, Ping
    JOURNAL OF THE FRANKLIN INSTITUTE-ENGINEERING AND APPLIED MATHEMATICS, 2020, 357 (11): : 7069 - 7091
  • [7] Distributed Quantile Regression with Non-Convex Sparse Penalties
    Mirzaeifard, Reza
    Gogineni, Vinay Chakravarthi
    Venkategowda, Naveen K. D.
    Werner, Stefan
    2023 IEEE STATISTICAL SIGNAL PROCESSING WORKSHOP, SSP, 2023, : 250 - 254
  • [8] Robust Sparse Recovery via Non-Convex Optimization
    Chen, Laming
    Gu, Yuantao
    2014 19TH INTERNATIONAL CONFERENCE ON DIGITAL SIGNAL PROCESSING (DSP), 2014, : 742 - 747
  • [9] Non-Convex P-norm Projection for Robust Sparsity
    Das Gupta, Mithun
    Kumar, Sanjeev
    2013 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2013, : 1593 - 1600
  • [10] Relaxed sparse eigenvalue conditions for sparse estimation via non-convex regularized regression
    Pan, Zheng
    Zhang, Changshui
    PATTERN RECOGNITION, 2015, 48 (01) : 231 - 243