Learning Discriminative Dictionary for Facial Expression Recognition

Cited: 3
Authors
Zhang, Shiqing [1 ]
Zhao, Xiaoming [1 ]
Chuang, Yuelong [1 ]
Guo, Wenping [1 ]
Chen, Ying [1 ]
Institutions
[1] Taizhou Univ, Inst Intelligent Informat Proc, Taizhou 318000, Zhejiang, Peoples R China
Fund
National Natural Science Foundation of China;
Keywords
Data locality; Dictionary learning; Facial expression recognition; Fisher discrimination; Group Lasso regularization; Sparse coding; SPARSE REPRESENTATION; FACE RECOGNITION; CLASSIFICATION; REDUCTION;
DOI
10.1080/02564602.2017.1283251
CLC Classification Number
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Subject Classification Code
0808; 0809;
Abstract
Sparse coding is currently an active topic in signal processing, computer vision, and pattern recognition. Fisher discrimination dictionary learning (FDDL) is a recently developed discriminative dictionary learning method that exhibits promising classification performance. However, FDDL does not capture the locality structure of the data, and the sparse coding coefficients it produces are not discriminative enough for effective classification. To address these issues, this paper proposes an advanced version of FDDL that integrates data locality and group Lasso regularization into FDDL's sparse coding stage. The proposed method learns a locality- and group-sensitive discriminative dictionary for facial expression recognition. Experimental results on two public facial expression databases, the JAFFE database and the Cohn-Kanade database, demonstrate the effectiveness of the proposed method on facial expression recognition tasks, giving a significant performance improvement over FDDL.
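The abstract's core idea, adding a locality adaptor and a group Lasso penalty to the sparse coding objective, can be illustrated with a minimal sketch. The code below is not the authors' implementation: the exact objective, the locality weighting, the class-wise group structure, and all function names are illustrative assumptions, solved here with a plain proximal gradient (ISTA) loop.

```python
import numpy as np

def group_soft_threshold(x, groups, tau):
    """Proximal operator of tau * sum_g ||x_g||_2: shrink each group toward zero."""
    out = np.zeros_like(x)
    for g in groups:
        norm = np.linalg.norm(x[g])
        if norm > tau:
            out[g] = (1.0 - tau / norm) * x[g]
    return out

def locality_group_sparse_code(y, D, groups, lam=0.05, alpha=0.01, n_iter=500):
    """Code y over dictionary D (columns = atoms) by ISTA on
         ||y - D x||^2 + alpha * ||w * x||^2 + lam * sum_g ||x_g||_2,
       where w penalises atoms that lie far from y (a simple locality adaptor).
       This is an illustrative formulation, not the paper's exact model."""
    dist = np.linalg.norm(D - y[:, None], axis=0)   # distance of y to each atom
    w = dist / (dist.max() + 1e-12)                 # locality weights in [0, 1]
    # Lipschitz constant of the smooth part's gradient -> safe step size 1/L.
    L = 2.0 * (np.linalg.norm(D, 2) ** 2 + alpha * (w ** 2).max())
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * (D.T @ (D @ x - y) + alpha * (w ** 2) * x)
        x = group_soft_threshold(x - grad / L, groups, lam / L)
    return x

# Toy dictionary: 6 atoms in R^4, split into two (hypothetical) class-wise groups.
s = 1.0 / np.sqrt(2.0)
D = np.array([[1, 0, 0, 0, s, 0],
              [0, 1, 0, 0, 0, s],
              [0, 0, 1, 0, 0, s],
              [0, 0, 0, 1, s, 0]], dtype=float)
groups = [[0, 1, 2], [3, 4, 5]]
y = np.array([2.0, 1.0, 0.0, 0.0])   # lies in the span of group 0's atoms
x = locality_group_sparse_code(y, D, groups)
residual = np.linalg.norm(y - D @ x)
```

With class-wise atom groups, the group penalty tends to drive whole groups of coefficients to zero, so a test sample is coded mainly by atoms of one expression class, which is the discriminative effect the abstract describes.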
Pages: 275-281
Number of pages: 7
Related Papers
50 records
  • [31] DR-FER: Discriminative and Robust Representation Learning for Facial Expression Recognition
    Li, Ming
    Fu, Huazhu
    He, Shengfeng
    Fan, Hehe
    Liu, Jun
    Keppo, Jussi
    Shou, Mike Zheng
    IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26 : 6297 - 6309
  • [32] MULTIPLE INSTANCE DISCRIMINATIVE DICTIONARY LEARNING FOR ACTION RECOGNITION
    Li, Hongyang
    Chen, Jun
    Xu, Zengmin
    Chen, Huafeng
    Hu, Ruimin
    2016 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING PROCEEDINGS, 2016, : 2014 - 2018
  • [33] Robust, discriminative and comprehensive dictionary learning for face recognition
    Lin, Guojun
    Yang, Meng
    Yang, Jian
    Shen, Linlin
    Xie, Weicheng
    PATTERN RECOGNITION, 2018, 81 : 341 - 356
  • [34] Jointly Learning the Discriminative Dictionary and Projection for Face Recognition
    Bi, Chao
    Yi, Yugen
    Zhang, Lei
    Zheng, Caixia
    Shi, Yanjiao
    Xie, Xiaochun
    Wang, Jianzhong
    Wu, Yan
    MATHEMATICAL PROBLEMS IN ENGINEERING, 2020, 2020
  • [35] An extended dictionary representation approach with deep subspace learning for facial expression recognition
    Sun, Zhe
    Chiong, Raymond
    Hu, Zheng-ping
    NEUROCOMPUTING, 2018, 316 : 1 - 9
  • [36] Learning Discriminative Features with Region Attention and Refinement Network for Facial Expression Recognition in the Wild
    Li, Xiao
    Li, Chunlei
    Tian, Bo
    Liu, Zhoufeng
    Yang, Ruimin
    2022 26TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2022, : 1113 - 1119
  • [37] Discriminative feature learning-based pixel difference representation for facial expression recognition
    Sun, Zhe
    Hu, Zheng-Ping
    Wang, Meng
    Zhao, Shu-Huan
    IET COMPUTER VISION, 2017, 11 (08) : 675 - 682
  • [38] Joint Local-Global Discriminative Subspace Transfer Learning for Facial Expression Recognition
    Zhang, Wenjing
    Song, Peng
    Zheng, Wenming
    IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, 2023, 14 (03) : 2484 - 2495
  • [39] Discriminative Deep Feature Learning for Facial Emotion Recognition
    Dinh Viet Sang
    Le Tran Bao Cuong
    Pham Thai Ha
    2018 1ST INTERNATIONAL CONFERENCE ON MULTIMEDIA ANALYSIS AND PATTERN RECOGNITION (MAPR), 2018,
  • [40] Enhanced discriminative global-local feature learning with priority for facial expression recognition
    Zhang, Ziyang
    Tian, Xiang
    Zhang, Yuan
    Guo, Kailing
    Xu, Xiangmin
    INFORMATION SCIENCES, 2023, 630 : 370 - 384