A tension-moderating mechanism for promoting speech-based human-robot interaction

Cited by: 4
Authors
Kanda, T [1]
Iwase, K [1]
Shiomi, M [1]
Ishiguro, H [1]
Affiliation
[1] ATR Intelligent Robot & Commun Labs, Dept Commun Robots, Kyoto, Japan
Keywords
human-robot interaction; emotion recognition; tension emotion; speech-based interaction;
DOI
10.1109/IROS.2005.1545035
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We propose a method for promoting human-robot interaction based on emotion recognition, with particular focus on tension emotion. Two types of emotion are expressed over a short time span. One is autonomic emotion caused by a stimulus, such as joy or fear. The other is self-reported emotion, such as tension, which is relatively independent of any single stimulus. In a preliminary experiment, we observed that tension emotion (a self-reported emotion) obstructs the expression of autonomic emotion, which degrades both speech recognition and interaction. Our method is based on the detection and moderation of tension emotion. If the robot detects tension, it tries to ease it so that the person interacts more comfortably and expresses autonomic emotions. The robot also retrieves nuances from the expressed emotions to supplement imperfect speech recognition, which further promotes interaction.
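The control loop the abstract describes — detect tension, ease it first, otherwise use the expressed autonomic emotion as a nuance cue to compensate for imperfect speech recognition — could be sketched roughly as below. This is an illustrative reconstruction, not the authors' implementation: the function names, the threshold, and the idea of re-ranking ASR hypotheses by an emotion-conditioned score are all assumptions for exposition.

```python
# Hypothetical sketch of the tension-moderation loop from the abstract.
# All names and the 0.5 threshold are illustrative assumptions, not the
# paper's actual detector or recognizer.

def detect_tension(features):
    """Placeholder tension detector over extracted affect features."""
    return features.get("tension_score", 0.0) > 0.5

def interaction_step(features, asr_hypotheses):
    """One decision step: ease tension first, else use emotion as a
    nuance cue to pick among ambiguous speech-recognition hypotheses."""
    if detect_tension(features):
        # Robot performs an easing behavior before continuing the dialogue.
        return ("ease", None)
    emotion = features.get("emotion", "neutral")
    # Re-rank hypotheses by how well each matches the expressed emotion,
    # supplementing an imperfect recognizer with the affective channel.
    ranked = sorted(asr_hypotheses,
                    key=lambda h: h["scores"].get(emotion, 0.0),
                    reverse=True)
    return ("respond", ranked[0]["text"])
```

In this sketch the affective channel acts as a tie-breaker for the recognizer rather than a replacement for it, matching the abstract's claim that emotion nuances "supplement" insufficient speech recognition.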
Pages: 527-532
Page count: 6
Related Papers
50 records in total
  • [1] Speech-based Human-Robot Interaction Robust to Acoustic Reflections in Real Environment
    Gomez, Randy
    Inoue, Koji
    Nakamura, Keisuke
    Mizumoto, Takeshi
    Nakadai, Kazuhiro
    2014 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS 2014), 2014, : 1367 - 1373
  • [2] Temporal Smearing Compensation in Reverberant Environment for Speech-based Human-Robot Interaction
    Gomez, Randy
    Nakamura, Keisuke
    Mizumoto, Takeshi
    Nakadai, Kazuhiro
    2015 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2015, : 3347 - 3353
  • [3] Environment Compensation Using A Posteriori Statistics for Distant Speech-based Human-Robot Interaction
    Gomez, Randy
    Nakamura, Keisuke
    2016 IEEE-RAS 16TH INTERNATIONAL CONFERENCE ON HUMANOID ROBOTS (HUMANOIDS), 2016, : 1211 - 1216
  • [4] Human Interaction Smart Subsystem-Extending Speech-Based Human-Robot Interaction Systems with an Implementation of External Smart Sensors
    Podpora, Michal
    Gardecki, Arkadiusz
    Beniak, Ryszard
    Klin, Bartlomiej
    Lopez Vicario, Jose
    Kawala-Sterniuk, Aleksandra
    SENSORS, 2020, 20 (08)
  • [5] Utilizing Visual Cues in Robot Audition for Sound Source Discrimination in Speech-based Human-Robot Communication
    Gomez, Randy
    Ivanchuk, Levko
    Nakamura, Keisuke
    Mizumoto, Takeshi
    Nakadai, Kazuhiro
    2015 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2015, : 4216 - 4222
  • [6] Human-robot interaction based on human emotions extracted from speech
    Kirandziska, Vesna
    Ackovska, Nevena
    2012 20TH TELECOMMUNICATIONS FORUM (TELFOR), 2012, : 1381 - 1384
  • [7] Research on multimodal human-robot interaction based on speech and gesture
    Deng Yongda
    Li Fang
    Xin Huang
    COMPUTERS & ELECTRICAL ENGINEERING, 2018, 72 : 443 - 454
  • [8] Toward a quizmaster robot for speech-based multiparty interaction
    Nishimuta, Izaya
    Itoyama, Katsutoshi
    Yoshii, Kazuyoshi
    Okuno, Hiroshi G.
    ADVANCED ROBOTICS, 2015, 29 (18) : 1205 - 1219
  • [9] Microblogging as a mechanism for human-robot interaction
    Bell, David
    Koulouri, Theodora
    Lauria, Stanislao
    Macredie, Robert D.
    Sutton, James
    KNOWLEDGE-BASED SYSTEMS, 2014, 69 : 64 - 77
  • [10] Space, Speech, and Gesture in Human-Robot Interaction
    Mead, Ross
    ICMI '12: PROCEEDINGS OF THE ACM INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION, 2012, : 333 - 336