Zero-Norm ELM with Non-convex Quadratic Loss Function for Sparse and Robust Regression

Cited by: 2
Authors
Wang, Xiaoxue [1 ]
Wang, Kuaini [2 ,3 ]
She, Yanhong [2 ]
Cao, Jinde [3 ,4 ]
Affiliations
[1] Xian Shiyou Univ, Coll Comp Sci, Xian 710065, Shaanxi, Peoples R China
[2] Xian Shiyou Univ, Coll Sci, Xian 710065, Shaanxi, Peoples R China
[3] Southeast Univ, Sch Math, Nanjing 210096, Peoples R China
[4] Yonsei Univ, Yonsei Frontier Lab, Seoul 03722, South Korea
Funding
National Natural Science Foundation of China;
Keywords
Extreme learning machine; Non-convex quadratic loss function; Zero-norm; DC programming; DCA; EXTREME LEARNING-MACHINE; SUPPORT VECTOR MACHINES; STATISTICAL COMPARISONS; CLASSIFICATION; CLASSIFIERS;
DOI
10.1007/s11063-023-11424-9
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Extreme learning machine (ELM) is a machine learning technique with a simple structure, fast learning speed, and excellent generalization ability, and it has received considerable attention since it was proposed. To further improve the sparsity of the output weights and the robustness of the model, this paper proposes a sparse and robust ELM based on zero-norm regularization and a non-convex quadratic loss function. The zero-norm regularization prunes hidden nodes automatically, and the non-convex quadratic loss function enhances robustness by assigning a constant penalty to outliers. The resulting optimization problem can be formulated as a difference of convex functions (DC) program, which is solved in this paper with the DC algorithm (DCA). Experiments on artificial and benchmark datasets verify that the proposed method achieves promising robustness while reducing the number of hidden nodes, especially on datasets with higher outlier levels.
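For context, the baseline that the paper builds on can be sketched as follows. This is a minimal standard ELM for regression (random hidden layer, ridge-regularized least-squares output weights); it does not reproduce the paper's zero-norm regularization, non-convex quadratic loss, or the DCA solver, and all function names and parameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=50, reg=1e-3):
    """Fit a basic ELM: random input weights, least-squares output weights.

    Hidden layer H = tanh(X W + b) is fixed after random initialization;
    only the output weights beta are learned, via
    beta = (H^T H + reg I)^{-1} H^T y.
    """
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                           # hidden-layer outputs
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Evaluate the trained ELM on new inputs."""
    return np.tanh(X @ W + b) @ beta

# Toy regression problem: learn y = sin(x) on [-3, 3].
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X).ravel()
W, b, beta = elm_fit(X, y)
mse = np.mean((elm_predict(X, W, b, beta) - y) ** 2)
```

The paper's contribution replaces the plain squared loss above with a non-convex quadratic loss (capping the penalty on outliers) and adds a zero-norm term on `beta` to zero out hidden nodes, then optimizes via DCA rather than a single linear solve.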
Pages: 12367-12399
Page count: 33