Facial Communicative Signals: Valence Recognition in Task-Oriented Human-Robot Interaction

Cited by: 4
Authors
Lang, Christian [1 ]
Wachsmuth, Sven [2 ]
Hanheide, Marc [3 ]
Wersing, Heiko [4 ]
Affiliations
[1] Univ Bielefeld, Res Inst Cognit & Robot CoR Lab, D-33615 Bielefeld, Germany
[2] Univ Bielefeld, D-33615 Bielefeld, Germany
[3] Lincoln Univ, Sch Comp Sci, Lincoln, England
[4] Honda Res Inst Europe, Offenbach, Germany
Keywords
Facial communicative signals; Valence recognition; Head gestures; Eye gaze; Facial expressions; Object teaching; Active appearance models; EMOTION RECOGNITION; GAZE DIRECTION; AUTOMATIC-ANALYSIS; CIRCUMPLEX MODEL; VISUAL BEHAVIOR; SOCIAL-CONTEXT; EXPRESSIONS; DEFINITIONS; UNIVERSALS; PERCEPTION;
DOI
10.1007/s12369-012-0145-z
CLC Classification Number
TP24 [Robotics]
Subject Classification Codes
080202; 1405
Abstract
This paper investigates facial communicative signals (head gestures, eye gaze, and facial expressions) as nonverbal feedback in human-robot interaction. Motivated by a discussion of the literature, we suggest scenario-specific investigations due to the complex nature of these signals and present an object-teaching scenario in which subjects teach the names of objects to a robot, which in turn is expected to name these objects correctly afterwards. The robot's verbal answers are designed to elicit facial communicative signals from its interaction partners. We investigated the human ability to recognize this spontaneous facial feedback as well as the performance of two automatic recognition approaches. The first is a static approach that yields baseline results, whereas the second considers the temporal dynamics of the signals and achieves classification rates comparable to human performance.
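The abstract contrasts a static per-frame baseline with an approach that exploits temporal dynamics. The following sketch only illustrates that contrast under stated assumptions and is not the authors' implementation: it assumes per-frame feature vectors (such as active appearance model parameters) have already been extracted, substitutes synthetic data for them, and uses a generic scikit-learn SVM as a stand-in classifier.

# Minimal illustrative sketch (not the paper's method): static per-frame
# classification with majority voting vs. a simple temporal variant that pools
# sequence-level statistics. Synthetic data stands in for AAM parameter vectors.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_sequences(n_seq=200, seq_len=40, n_feat=20):
    """Generate synthetic 'AAM parameter' sequences with binary valence labels."""
    X, y = [], []
    for _ in range(n_seq):
        label = int(rng.integers(0, 2))
        # Positive/negative feedback differ by a slight drift over time.
        drift = (label * 2 - 1) * np.linspace(0.0, 0.5, seq_len)[:, None]
        X.append(rng.normal(size=(seq_len, n_feat)) + drift)
        y.append(label)
    return X, np.array(y)

X, y = make_sequences()
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Static baseline: classify every frame independently, then majority-vote per sequence.
frame_clf = SVC().fit(np.vstack(X_tr), np.repeat(y_tr, [len(s) for s in X_tr]))
static_pred = np.array([int(np.round(frame_clf.predict(s).mean())) for s in X_te])

# Temporal variant: summarize each sequence (mean, std, per-feature linear trend)
# so the classifier sees how the signal evolves over the whole interaction.
def pool(seq):
    t = np.arange(len(seq))
    slope = np.polyfit(t, seq, 1)[0]
    return np.concatenate([seq.mean(0), seq.std(0), slope])

seq_clf = SVC().fit(np.array([pool(s) for s in X_tr]), y_tr)
temporal_pred = seq_clf.predict(np.array([pool(s) for s in X_te]))

print("static accuracy:  ", np.mean(static_pred == y_te))
print("temporal accuracy:", np.mean(temporal_pred == y_te))

Any sequence model (e.g., an HMM or a recurrent network) could replace the pooled statistics; the point of the sketch is only that the temporal variant makes decisions per sequence rather than per frame.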
Pages: 249-262
Page count: 14