Recognition of affective communicative intent in robot-directed speech

Cited by: 142
Authors
Breazeal, C [1 ]
Aryananda, L [1 ]
Affiliation
[1] MIT, Artificial Intelligence Lab, Cambridge, MA 02139 USA
Keywords
affective computing; human-computer interaction; humanoid robots; sociable robots; speech recognition
DOI
10.1023/A:1013215010749
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Human speech provides a natural and intuitive interface both for communicating with humanoid robots and for teaching them. In general, the acoustic pattern of speech carries three kinds of information: who the speaker is, what the speaker said, and how the speaker said it. This paper focuses on recognizing affective communicative intent in robot-directed speech without analyzing the linguistic content. We present an approach for recognizing four distinct prosodic patterns that communicate praise, prohibition, attention, and comfort to preverbal infants. These communicative intents are well matched to teaching a robot, since praise, prohibition, and directing the robot's attention to relevant aspects of a task could be used by a human instructor to intuitively facilitate the robot's learning process. We integrate this perceptual ability into our robot's "emotion" system, thereby allowing a human to directly manipulate the robot's affective state. This has a powerful organizing influence on the robot's behavior and will ultimately be used to socially communicate affective reinforcement. Communicative efficacy has been tested with people very familiar with the robot as well as with naive subjects.
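The approach classifies a robot-directed utterance into one of four affective intent classes using only prosodic cues (how the utterance was said), not its words. Below is a minimal, hypothetical sketch of such a prosody-based classifier, assuming librosa for pitch/energy extraction and scikit-learn for the model; it only mirrors the general feature-then-classify structure and is not the features or classifier described in the paper.

# Hypothetical sketch (not the paper's implementation): classify a robot-directed
# utterance into one of four affective intent classes from utterance-level
# pitch and energy statistics. Assumes librosa and scikit-learn are installed.
import numpy as np
import librosa
from sklearn.naive_bayes import GaussianNB

INTENTS = ["praise", "prohibition", "attention", "comfort"]

def prosodic_features(wav_path):
    """Return utterance-level pitch/energy statistics for one audio file."""
    y, sr = librosa.load(wav_path, sr=16000)
    f0, _, _ = librosa.pyin(y, fmin=80, fmax=500, sr=sr)  # frame-wise pitch (NaN when unvoiced)
    f0 = f0[~np.isnan(f0)]                                 # keep voiced frames only
    rms = librosa.feature.rms(y=y)[0]                      # frame-wise energy
    pitch_stats = [f0.mean(), f0.var(), f0.max() - f0.min()] if f0.size else [0.0, 0.0, 0.0]
    return np.array(pitch_stats + [rms.mean(), rms.var()])

# Training and prediction on labelled utterances (file lists are hypothetical):
# X = np.stack([prosodic_features(p) for p in training_wavs])
# y = training_intent_indices                 # integers indexing into INTENTS
# model = GaussianNB().fit(X, y)
# pred = model.predict(prosodic_features("utterance.wav").reshape(1, -1))[0]
# print("predicted intent:", INTENTS[pred])

Any simple classifier over such low-dimensional prosodic features could stand in for GaussianNB here; the point of the sketch is that the decision relies solely on how the utterance was spoken, not on what was said.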
Pages: 83-104
Page count: 22