A lightweight method for face expression recognition based on improved MobileNetV3

Cited by: 3
Authors
Liang, Xunru [1 ,2 ,3 ]
Liang, Jianfeng [1 ,2 ,3 ]
Yin, Tao [1 ,2 ,3 ]
Tang, Xiaoyu [1 ,2 ,3 ]
Affiliations
[1] South China Normal Univ, Sch Phys & Telecommun Engn, Guangzhou, Peoples R China
[2] South China Normal Univ, Sch Elect & Informat Engn, Foshan, Peoples R China
[3] Natl Demonstrat Ctr Expt Phys Educ, Guangzhou, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
computer vision; emotion recognition; image classification; image recognition; facial expression; network
DOI
10.1049/ipr2.12798
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Facial expression recognition plays a significant role in human-machine interaction applications. However, existing models typically suffer from numerous parameters, large model sizes, and high computational costs, which make them difficult to deploy on resource-constrained devices. This paper proposes a lightweight network based on an improved MobileNetV3 to mitigate these disadvantages. First, the channels in the high-level layers are adjusted to reduce the number of parameters and the model size. Then, a coordinate attention mechanism is introduced, which strengthens the network's attention with few parameters and low computational cost. Furthermore, a complementary pooling structure is designed to improve the coordinate attention mechanism, enabling it to help the network extract salient features more fully. In addition, the network is trained with a joint loss consisting of the softmax loss and the centre loss, which reduces the intra-class gap and improves classification performance. Finally, the network is trained and tested on the public datasets FERPlus and RAF-DB, achieving best accuracies of 87.5% and 86.6%, respectively. The FLOPs, parameter count, and memory footprint are only 0.19 GMac, 1.3 M, and 15.9 MB, respectively, lighter than most state-of-the-art networks. Code is available at .
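The abstract names two transferable ingredients: a coordinate attention block fed by a complementary pooling structure, and a joint softmax-plus-centre loss. The sketch below is a minimal PyTorch reading of the attention block, not the authors' released code: it assumes the complementary pooling sums average and max statistics per spatial direction and assumes a reduction ratio of 32, neither of which is specified in the abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CoordinateAttention(nn.Module):
    """Coordinate attention (Hou et al., 2021) with a complementary
    avg+max pooling front end (an assumption made for this sketch)."""

    def __init__(self, channels: int, reduction: int = 32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn = nn.BatchNorm2d(mid)
        self.act = nn.Hardswish()
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        _, _, h, w = x.shape
        # Direction-wise descriptors; "complementary" pooling is taken
        # here to mean average plus max statistics along each axis.
        x_h = x.mean(dim=3, keepdim=True) + x.amax(dim=3, keepdim=True)  # (N, C, H, 1)
        x_w = x.mean(dim=2, keepdim=True) + x.amax(dim=2, keepdim=True)  # (N, C, 1, W)
        # Share one 1x1 transform across both directions, then split.
        y = torch.cat([x_h, x_w.permute(0, 1, 3, 2)], dim=2)  # (N, C, H+W, 1)
        y = self.act(self.bn(self.conv1(y)))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                      # (N, C, H, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))  # (N, C, 1, W)
        return x * a_h * a_w  # reweight features along both spatial axes
```

The joint loss can be sketched in the same spirit. The centre loss term follows Wen et al. (2016); the loss weight shown is illustrative, since the abstract does not state the value used.

```python
class CenterLoss(nn.Module):
    """Centre loss: L_c = 0.5 * mean ||x_i - c_{y_i}||^2, pulling each
    feature towards its class centre to shrink intra-class gaps.
    Learning centres via autograd is a common simplification of the
    original moving-average centre update."""

    def __init__(self, num_classes: int, feat_dim: int):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, feats: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        diff = feats - self.centers[labels]  # gather each sample's class centre
        return 0.5 * diff.pow(2).sum(dim=1).mean()


# Usage sketch; the weight 0.01 is illustrative, not taken from the paper:
#   logits, feats = model(images)
#   loss = F.cross_entropy(logits, labels) + 0.01 * center_loss(feats, labels)
```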
Pages: 2375-2384
Page count: 10
Related papers
50 records in total
  • [41] Parking Lot Occupancy Detection with Improved MobileNetV3 (correction to: vol. 23, art. 7642, 2023)
    Yuldashev, Yusufbek
    Mukhiddinov, Mukhriddin
    Abdusalomov, Akmalbek Bobomirzaevich
    Nasimov, Rashid
    Cho, Jinsoo
    SENSORS, 2024, 24 (16)
  • [42] Lightweight Identification of Rice Diseases Based on Improved ECA and MobileNetV3Small
    Yuan P.
    Ouyang L.
    Zhai Z.
    Tian Y.
    Nongye Jixie Xuebao/Transactions of the Chinese Society for Agricultural Machinery, 2024, 55 (01): 253-262
  • [43] Face Recognition Method Based on Improved LDA
    Yuan Wei
    2017 NINTH INTERNATIONAL CONFERENCE ON INTELLIGENT HUMAN-MACHINE SYSTEMS AND CYBERNETICS (IHMSC 2017), VOL 2, 2017: 456-459
  • [44] 'Symmetrical face' based improved LPP method for face recognition
    Wu, Shuai
    Cao, Jian
    OPTIK, 2014, 125 (14): 3530-3533
  • [45] An Improved Face Detection Method Based on Face Recognition Application
    Li, Qinfeng
    2019 4TH ASIA-PACIFIC CONFERENCE ON INTELLIGENT ROBOT SYSTEMS (ACIRS 2019), 2019: 260-264
  • [46] Real-Time object detector based MobileNetV3 for UAV applications
    Yang, Yonghao
    Han, Jin
    MULTIMEDIA TOOLS AND APPLICATIONS, 2023, 82 (12): 18709-18725
  • [48] A recognition method of coating surface defects based on the improved MobileNetV2 network
    Chen Z.
    Zhao H.
    Lyu Y.
    Sha J.
    Sha X.
    Harbin Gongcheng Daxue Xuebao/Journal of Harbin Engineering University, 2022, 43 (04): 572-579
  • [49] A Weld Surface Defect Recognition Method Based on Improved MobileNetV2 Algorithm
    Ding, Kai
    Niu, Zhangqi
    Hui, Jizhuang
    Zhou, Xueliang
    Chan, Felix T. S.
    MATHEMATICS, 2022, 10 (19)
  • [50] An improved 3D face recognition method based on normal map
    Abate, AF
    Nappi, M
    Ricciardi, S
    Sabatino, G
    HUMAN & MACHINE PERCEPTION: COMMUNICATION, INTERACTION, AND INTEGRATION, 2005: 77-88