Face recognition by sparse discriminant analysis via joint L2,1-norm minimization

Cited by: 104
Authors
Shi, Xiaoshuang [1]
Yang, Yujiu [1]
Guo, Zhenhua [1]
Lai, Zhihui [2,3]
Affiliations
[1] Tsinghua Univ, Grad Sch Shenzhen, Shenzhen Key Lab Broadband Network & Multimedia, Shenzhen 518055, Peoples R China
[2] Harbin Inst Technol, Shenzhen Grad Sch, Biocomp Res Ctr, Shenzhen 518052, Peoples R China
[3] Shenzhen Univ, Coll Comp Sci & Software Engn, Shenzhen 518055, Guangdong, Peoples R China
Funding
China Postdoctoral Science Foundation;
Keywords
L2,1-norm; Fisher linear discriminant analysis; Sparse discriminant analysis; Regression; Selection;
DOI
10.1016/j.patcog.2014.01.007
Chinese Library Classification (CLC)
TP18 [Theory of artificial intelligence];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Recently, joint feature selection and subspace learning, which performs feature selection and subspace learning simultaneously, has been proposed and shows encouraging performance on face recognition. A framework employing an L2,1-norm penalty term has also been presented in the literature, but it does not cover some important algorithms, such as Fisher Linear Discriminant Analysis (FLDA) and Sparse Discriminant Analysis (SDA). In this paper, we therefore add an L2,1-norm penalty term to FLDA and propose a feasible solution by transforming its nonlinear model into a linear regression form. In addition, we modify the optimization model of SDA by replacing the elastic net with an L2,1-norm penalty term and present the corresponding optimization method. Experiments on three standard face databases show that FLDA and SDA with the L2,1-norm penalty term significantly improve recognition performance and obtain inspiring results with low computational cost and low-dimensional features. (C) 2014 Elsevier Ltd. All rights reserved.
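The abstract describes recasting the L2,1-penalized discriminant objective as a regression-type problem; the paper's own derivation is not reproduced here. As a minimal sketch only, the Python snippet below shows a standard iteratively reweighted least-squares update for an L2,1-norm-regularized regression subproblem of this kind; the function name, the use of a generic target matrix Y, and the solver choice are illustrative assumptions, not the authors' exact algorithm.

    import numpy as np

    def l21_regularized_regression(X, Y, lam, n_iter=50, eps=1e-8):
        # Sketch: minimize ||X W - Y||_F^2 + lam * ||W||_{2,1} over W, where
        # ||W||_{2,1} is the sum of the Euclidean norms of the rows of W.
        # X: (n_samples, n_features) data; Y: (n_samples, n_targets) targets,
        # e.g. a class-indicator matrix (an assumption, not the paper's exact
        # construction); lam: strength of the L2,1 penalty.
        n_samples, n_features = X.shape
        D = np.eye(n_features)              # diagonal reweighting matrix
        W = np.zeros((n_features, Y.shape[1]))
        for _ in range(n_iter):
            # Reweighted ridge-type update: W = (X^T X + lam * D)^{-1} X^T Y.
            W = np.linalg.solve(X.T @ X + lam * D, X.T @ Y)
            # Refresh D from the current row norms; rows driven toward zero
            # drop the corresponding feature from every projection direction.
            row_norms = np.sqrt((W ** 2).sum(axis=1)) + eps
            D = np.diag(1.0 / (2.0 * row_norms))
        return W

Because the penalty acts on entire rows of the projection matrix, a row whose norm collapses removes that feature from all discriminant directions at once, which is how the L2,1-norm term couples feature selection with subspace learning, as the abstract states.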
Pages: 2447-2453
Number of pages: 7
Related papers
50 items in total
  • [31] Transfer Learning for Survival Analysis via Efficient L2,1-norm Regularized Cox Regression
    Li, Yan
    Wang, Lu
    Wang, Jie
    Ye, Jieping
    Reddy, Chandan K.
    2016 IEEE 16TH INTERNATIONAL CONFERENCE ON DATA MINING (ICDM), 2016, : 231 - 240
  • [32] Robust feature selection via l2,1-norm in finite mixture of regression
    Li, Xiangrui
    Zhu, Dongxiao
    PATTERN RECOGNITION LETTERS, 2018, 108 : 15 - 22
  • [33] A Fast and Accurate Matrix Completion Method Based on QR Decomposition and L2,1-Norm Minimization
    Liu, Qing
    Davoine, Franck
    Yang, Jian
    Cui, Ying
    Jin, Zhong
    Han, Fei
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2019, 30 (03) : 803 - 817
  • [34] Robust local K-proximal plane clustering based on L2,1-norm minimization
    Wang, Jiawei
    Liu, Yingan
    Fu, Liyong
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2024, 15 (11) : 5143 - 5158
  • [35] Learning Robust Distance Metric with Side Information via Ratio Minimization of Orthogonally Constrained l2,1-Norm Distances
    Liu, Kai
    Brand, Lodewijk
    Wang, Hua
    Nie, Feiping
    PROCEEDINGS OF THE TWENTY-EIGHTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2019, : 3008 - 3014
  • [36] l2,1-norm minimization based negative label relaxation linear regression for feature selection
    Peng, Yali
    Sehdev, Paramjit
    Liu, Shigang
    Lie, Jun
    Wang, Xili
    PATTERN RECOGNITION LETTERS, 2018, 116 : 170 - 178
  • [37] Unsupervised maximum margin feature selection via L2,1-norm minimization
    Yang, Shizhun
    Hou, Chenping
    Nie, Feiping
    Wu, Yi
    NEURAL COMPUTING & APPLICATIONS, 2012, 21 (07): 1791 - 1799
  • [38] Recovery Guarantee Analyses of Joint Sparse Recovery via Tail l2,1 Minimization
    Zheng, Baifu
    Zeng, Cao
    Li, Shidong
    Liao, Guisheng
    IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2023, 71 : 4342 - 4352
  • [39] Assessing Dry Weight of Hemodialysis Patients via Sparse Laplacian Regularized RVFL Neural Network with L2,1-Norm
    Guo, Xiaoyi
    Zhou, Wei
    Lu, Qun
    Du, Aiyan
    Cai, Yinghua
    Ding, Yijie
    BIOMED RESEARCH INTERNATIONAL, 2021, 2021
  • [40] Robust Feature Selection via Simultaneous Capped l2-Norm and l2,1-Norm Minimization
    Lan, Gongmin
    Hou, Chenping
    Yi, Dongyun
    PROCEEDINGS OF 2016 IEEE INTERNATIONAL CONFERENCE ON BIG DATA ANALYSIS (ICBDA), 2016, : 147 - 151