Convolutional Features-Based Broad Learning With LSTM for Multidimensional Facial Emotion Recognition in Human-Robot Interaction

Times Cited: 3
Authors
Chen, Luefeng [1 ,2 ]
Li, Min [1 ,2 ]
Wu, Min [1 ,2 ]
Pedrycz, Witold [3 ,4 ,5 ]
Hirota, Kaoru [6 ]
Affiliations
[1] China Univ Geosci, Sch Automat, Hubei Key Lab Adv Control & Intelligent Automat C, Wuhan 430074, Peoples R China
[2] China Univ Geosci, Engn Res Ctr Intelligent Technol Geoexplorat, Minist Educ, Wuhan 430074, Peoples R China
[3] Univ Alberta, Dept Elect & Comp Engn, Edmonton, AB T6G 2R3, Canada
[4] Polish Acad Sci, Syst Res Inst, PL-00901 Warsaw, Poland
[5] Istinye Univ, Dept Comp Engn, TR-34396 Sariyer Istanbul, Turkiye
[6] Tokyo Inst Technol, Tokyo 2268502, Japan
Funding
National Natural Science Foundation of China;
Keywords
emotion recognition; human-robot interaction; long short-term memory (LSTM); EXPRESSION RECOGNITION; NETWORK; REGRESSION; FRAMEWORK; SYSTEM;
DOI
10.1109/TSMC.2023.3301001
Chinese Library Classification (CLC)
TP [automation technology, computer technology];
Discipline Code
0812;
Abstract
Convolutional feature-based broad learning with long short-term memory (CBLSTM) is proposed to recognize multidimensional facial emotions in human-robot interaction. The CBLSTM model consists of convolution and pooling layers, broad learning (BL), and a long short-term memory (LSTM) network; these three parts capture the depth, width, and time-scale information of facial emotion, respectively, thereby realizing multidimensional facial emotion recognition. After the convolution and pooling layers, CBLSTM adopts the BL structure in place of the original random mapping, extracting features with stronger representation ability and significantly reducing the computation time of the facial emotion recognition network. Moreover, incremental learning is adopted, which allows the model to be quickly reconstructed without complete retraining. Experiments are conducted on three databases: CK+, MMI, and SFEW2.0. The results show that the proposed CBLSTM model, which uses multidimensional information, achieves higher recognition accuracy than the variant without time-scale information: 1.30% higher on the CK+ database and 1.06% higher on the MMI database. The computation time is 9.065 s, significantly shorter than that reported for the convolutional neural network (CNN). In addition, the proposed method improves on state-of-the-art methods: the recognition rate is 3.97%, 1.77%, and 0.17% higher than CNN-SIPS, HOG-TOP, and CMACNN on the CK+ database; 5.17%, 5.14%, and 3.56% higher than TLMOS, ALAW, and DAUGN on the MMI database; and 7.08% and 2.98% higher than CNNVA and QCNN on the SFEW2.0 database.
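The fast training the abstract attributes to the BL stage comes from solving the output weights in closed form rather than by backpropagation: mapped feature nodes and enhancement nodes are concatenated into a state matrix and the output weights are obtained by ridge regression. The sketch below illustrates only that generic BL computation on toy data; all layer sizes, the tanh activation, the regularization value, and the helper names are illustrative assumptions, not the paper's actual configuration.

```python
# Generic broad-learning sketch: random feature/enhancement mappings,
# then a closed-form ridge solution W = (A^T A + lam*I)^{-1} A^T Y.
import math
import random

random.seed(0)

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def identity(n, scale=1.0):
    return [[scale if i == j else 0.0 for j in range(n)] for i in range(n)]

def add(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def solve(A, B):
    """Solve A X = B by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + brow[:] for row, brow in zip(A, B)]  # augmented matrix
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        piv = M[i][i]
        M[i] = [v / piv for v in M[i]]
        for r in range(n):
            if r != i:
                f = M[r][i]
                M[r] = [v - f * w for v, w in zip(M[r], M[i])]
    return [row[n:] for row in M]

def random_map(X, out_dim):
    """Random linear map followed by tanh (stand-in for BL node generation)."""
    d = len(X[0])
    W = [[random.gauss(0, 1) for _ in range(out_dim)] for _ in range(d)]
    return [[math.tanh(v) for v in row] for row in matmul(X, W)]

def broad_learning_fit(X, Y, n_feature=8, n_enhance=8, lam=1e-3):
    Zf = random_map(X, n_feature)             # mapped feature nodes
    Ze = random_map(Zf, n_enhance)            # enhancement nodes
    A = [zf + ze for zf, ze in zip(Zf, Ze)]   # concatenated state matrix
    At = transpose(A)
    G = add(matmul(At, A), identity(len(At), lam))  # A^T A + lam*I
    W = solve(G, matmul(At, Y))               # ridge-regression output weights
    return A, W

# Toy stand-in: 4 samples of 3-dim "convolutional features", 2 emotion classes.
X = [[0.2, 0.8, 0.1], [0.9, 0.1, 0.3], [0.1, 0.7, 0.2], [0.8, 0.2, 0.4]]
Y = [[1, 0], [0, 1], [1, 0], [0, 1]]
A, W = broad_learning_fit(X, Y)
pred = matmul(A, W)
```

Because the output layer is linear, adding nodes under incremental learning only requires updating this pseudo-inverse computation rather than retraining the whole network, which is the property the abstract exploits.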
Pages: 64-75 (12 pages)
Related Papers
50 records in total
  • [1] Enhanced Broad Siamese Network for Facial Emotion Recognition in Human-Robot Interaction
    Li, Y.
    Zhang, T.
    Chen, C. L. Philip
    IEEE Transactions on Artificial Intelligence, 2021, 2 (05): 413-423
  • [2] A Facial Expression Emotion Recognition Based Human-robot Interaction System
    Liu, Zhentao
    Wu, Min
    Cao, Weihua
    Chen, Luefeng
    Xu, Jianping
    Zhang, Ri
    Zhou, Mengtian
    Mao, Junwei
    IEEE/CAA Journal of Automatica Sinica, 2017, 4 (04): 668-676
  • [3] Human-Robot Interaction Based on Facial Expression Recognition Using Deep Learning
    Maeda, Yoichiro
    Sakai, Tensei
    Kamei, Katsuari
    Cooper, Eric W.
    2020 Joint 11th International Conference on Soft Computing and Intelligent Systems and 21st International Symposium on Advanced Intelligent Systems (SCIS-ISIS), 2020: 211-216
  • [4] Facial Expression Recognition and Positive Emotion Incentive System for Human-Robot Interaction
    Chen, Hu
    Gu, Ye
    Wang, Fei
    Sheng, Weihua
    2018 13th World Congress on Intelligent Control and Automation (WCICA), 2018: 407-412
  • [5] Emotion in human-robot interaction: Recognition and display
    Wendt, Cornelia
    Kuehnlenz, Kolja
    Popp, Michael
    Karg, Michelle
    International Journal of Psychology, 2008, 43 (3-4): 578
  • [6] cGAN Based Facial Expression Recognition for Human-Robot Interaction
    Deng, Jia
    Pang, Gaoyang
    Zhang, Zhiyu
    Pang, Zhibo
    Yang, Huayong
    Yang, Geng
    IEEE Access, 2019, 7: 9848-9859
  • [7] Human-robot interaction - Facial gesture recognition
    Rudall, B. H.
    Robotica, 1996, 14: 596-597
  • [8] Facial Expression Recognition for Human-Robot Interaction
    Hsu, Shih-Chung
    Huang, Hsin-Hui
    Huang, Chung-Lin
    2017 First IEEE International Conference on Robotic Computing (IRC), 2017: 1-7
  • [9] Facial Emotion Expressions in Human-Robot Interaction: A Survey
    Rawal, Niyati
    Stock-Homburg, Ruth Maria
    International Journal of Social Robotics, 2022, 14 (07): 1583-1604