Convolutional Features-Based Broad Learning With LSTM for Multidimensional Facial Emotion Recognition in Human-Robot Interaction

Cited: 3
Authors
Chen, Luefeng [1 ,2 ]
Li, Min [1 ,2 ]
Wu, Min [1 ,2 ]
Pedrycz, Witold [3 ,4 ,5 ]
Hirota, Kaoru [6 ]
Affiliations
[1] China Univ Geosci, Sch Automat, Hubei Key Lab Adv Control & Intelligent Automat C, Wuhan 430074, Peoples R China
[2] China Univ Geosci, Engn Res Ctr Intelligent Technol Geoexplorat, Minist Educ, Wuhan 430074, Peoples R China
[3] Univ Alberta, Dept Elect & Comp Engn, Edmonton, AB T6G 2R3, Canada
[4] Polish Acad Sci, Syst Res Inst, PL-00901 Warsaw, Poland
[5] Istinye Univ, Dept Comp Engn, TR-34396 Sariyer Istanbul, Turkiye
[6] Tokyo Inst Technol, Tokyo 2268502, Japan
Funding
National Natural Science Foundation of China;
Keywords
emotion recognition; human-robot interaction; long short-term memory (LSTM); EXPRESSION RECOGNITION; NETWORK; REGRESSION; FRAMEWORK; SYSTEM;
DOI
10.1109/TSMC.2023.3301001
CLC Classification Number
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Convolutional feature-based broad learning with long short-term memory (CBLSTM) is proposed to recognize multidimensional facial emotions in human-robot interaction. The CBLSTM model consists of convolution and pooling layers, broad learning (BL), and a long short-term memory (LSTM) network. These three parts capture the depth, width, and time-scale information of facial emotion, respectively, thereby realizing multidimensional facial emotion recognition. CBLSTM places the BL structure after the convolution and pooling layers, replacing BL's original random feature mapping and extracting features with stronger representation ability, which significantly reduces the computation time of the facial emotion recognition network. Moreover, incremental learning is adopted, so the model can be quickly reconstructed without a complete retraining process. Experiments are conducted on three databases: CK+, MMI, and SFEW2.0. The results show that the proposed CBLSTM model using multidimensional information achieves higher recognition accuracy than the variant without time-scale information: 1.30% higher on the CK+ database and 1.06% higher on the MMI database. The computation time is 9.065 s, significantly shorter than the time reported for the convolutional neural network (CNN). In addition, the proposed method improves on state-of-the-art methods: the recognition rate is 3.97%, 1.77%, and 0.17% higher than CNN-SIPS, HOG-TOP, and CMACNN on the CK+ database; 5.17%, 5.14%, and 3.56% higher than TLMOS, ALAW, and DAUGN on the MMI database; and 7.08% and 2.98% higher than CNNVA and QCNN on the SFEW2.0 database.
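The abstract only sketches the architecture, so the following is a minimal, hypothetical illustration of how a CBLSTM-style pipeline could be wired up: convolution and pooling for depth, a random frozen enhancement mapping for width (with conv features standing in for BL's feature nodes), and an LSTM for the time scale. All layer sizes, the 48x48 frame size, and the class names are assumptions for the sketch, not the paper's settings.

```python
# Minimal sketch of a CBLSTM-style pipeline (hypothetical sizes/names;
# the paper's exact architecture and training scheme may differ).
import torch
import torch.nn as nn

class CBLSTMSketch(nn.Module):
    def __init__(self, n_classes=7, n_enhance=256, lstm_hidden=128):
        super().__init__()
        # Depth: convolution + pooling layers extract spatial features.
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        feat_dim = 32 * 12 * 12          # assumes 48x48 input frames
        # Width: conv features act as BL's feature nodes; a random, frozen
        # linear map produces the enhancement nodes.
        self.enhance = nn.Linear(feat_dim, n_enhance)
        for p in self.enhance.parameters():
            p.requires_grad = False      # enhancement weights stay random
        # Time: LSTM consumes the broad features frame by frame.
        self.lstm = nn.LSTM(feat_dim + n_enhance, lstm_hidden, batch_first=True)
        self.out = nn.Linear(lstm_hidden, n_classes)

    def forward(self, x):                # x: (batch, time, 1, 48, 48)
        b, t = x.shape[:2]
        z = self.conv(x.flatten(0, 1)).flatten(1)   # feature nodes
        h = torch.tanh(self.enhance(z))             # enhancement nodes
        broad = torch.cat([z, h], dim=1).view(b, t, -1)
        seq_out, _ = self.lstm(broad)
        return self.out(seq_out[:, -1])             # classify last time step

model = CBLSTMSketch()
logits = model(torch.randn(2, 8, 1, 48, 48))        # 2 clips, 8 frames each
print(logits.shape)                                 # torch.Size([2, 7])
```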
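The incremental-learning claim refers to the standard broad-learning idea of updating the output weights in closed form when nodes are added, instead of retraining from scratch. Below is a NumPy sketch of the Greville pseudo-inverse update for appending a single new enhancement column, assuming plain (ridge-free) least squares; the paper's exact incremental routine is not given in the abstract and may differ.

```python
import numpy as np

def add_enhancement_node(A, A_pinv, W, Y, a_new):
    """Greville-style update: extend A = [Z | H] with one new enhancement
    column a_new (shape (n, 1)) and update the pseudo-inverse and the
    output weights W = A^+ Y without full retraining."""
    d = A_pinv @ a_new                   # projection onto existing columns
    c = a_new - A @ d                    # residual outside the old span
    if np.linalg.norm(c) > 1e-10:
        b_t = np.linalg.pinv(c)          # c^T / (c^T c)
    else:
        b_t = d.T @ A_pinv / (1.0 + d.T @ d)
    A_new = np.hstack([A, a_new])
    A_pinv_new = np.vstack([A_pinv - d @ b_t, b_t])
    W_new = np.vstack([W - d @ (b_t @ Y), b_t @ Y])
    return A_new, A_pinv_new, W_new

# Sanity check: the incremental W matches a full re-solve.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 20))
Y = rng.standard_normal((100, 7))
A_pinv = np.linalg.pinv(A)
W = A_pinv @ Y
a = rng.standard_normal((100, 1))
A2, A2_pinv, W2 = add_enhancement_node(A, A_pinv, W, Y, a)
print(np.allclose(W2, np.linalg.pinv(A2) @ Y))      # True
```

The point of the update is cost: re-solving the least-squares problem after every node addition is cubic in the node count, whereas this rank-one update only needs matrix-vector products against the cached pseudo-inverse.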
Pages: 64-75
Number of pages: 12
Related Papers
50 records in total
  • [31] Human-Robot Interaction based on Facial Expression Imitation
    Esfandbod, Alireza
    Rokhi, Zeynab
    Taheri, Alireza
    Alemi, Minoo
    Meghdari, Ali
    2019 7TH INTERNATIONAL CONFERENCE ON ROBOTICS AND MECHATRONICS (ICROM 2019), 2019, : 69 - 73
  • [32] Emotion recognition in non-structured utterances for human-robot interaction
    Martínez, CK
    Cruz, AB
    2005 IEEE International Workshop on Robot and Human Interactive Communication (RO-MAN), 2005, : 19 - 23
  • [33] Emotion Recognition for Human-Robot Interaction: Recent Advances and Future Perspectives
    Spezialetti, Matteo
    Placidi, Giuseppe
    Rossi, Silvia
    FRONTIERS IN ROBOTICS AND AI, 2020, 7
  • [34] Distant speech emotion recognition in an indoor human-robot interaction scenario
    Grageda, Nicolas
    Busso, Carlos
    Alvarado, Eduardo
    Mahu, Rodrigo
    Yoma, Nestor Becerra
    INTERSPEECH 2023, 2023, : 3657 - 3661
  • [35] Face and facial expression recognition with an embedded system for human-robot interaction
    Lee, YB
    Moon, SB
    Kim, YG
    AFFECTIVE COMPUTING AND INTELLIGENT INTERACTION, PROCEEDINGS, 2005, 3784 : 271 - 278
  • [36] Towards the Development of Affective Facial Expression Recognition for Human-Robot Interaction
    Faria, Diego Resende
    Vieira, Mario
    Faria, Fernanda C. C.
    10TH ACM INTERNATIONAL CONFERENCE ON PERVASIVE TECHNOLOGIES RELATED TO ASSISTIVE ENVIRONMENTS (PETRA 2017), 2017, : 300 - 304
  • [37] An effective method for detecting facial features and face in human-robot interaction
    Lee, Taigun
    Park, Sung-Kee
    Park, Mignon
    INFORMATION SCIENCES, 2006, 176 (21) : 3166 - 3189
  • [38] A new facial features and face detection method for human-robot interaction
    Lee, T
    Park, SK
    Park, M
    2005 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), VOLS 1-4, 2005, : 2063 - 2068
  • [39] Facial Expressions Recognition for Human-Robot Interaction Using Deep Convolutional Neural Networks with Rectified Adam Optimizer
    Melinte, Daniel Octavian
    Vladareanu, Luige
    SENSORS, 2020, 20 (08)
  • [40] An Emotion-Based Interaction Strategy to Improve Human-Robot Interaction
    Ranieri, Caetano M.
    Romero, Roseli A. F.
    PROCEEDINGS OF 13TH LATIN AMERICAN ROBOTICS SYMPOSIUM AND 4TH BRAZILIAN SYMPOSIUM ON ROBOTICS - LARS/SBR 2016, 2016, : 31 - 36