Touch-text answer for human-robot interaction via supervised adversarial learning

Cited: 4
Authors
Wang, Ya-Xin [1 ]
Meng, Qing-Hao [1 ]
Li, Yun-Kai [2 ]
Hou, Hui-Rang [1 ]
Affiliations
[1] Tianjin Univ, Inst Robot & Autonomous Syst, Sch Elect & Informat Engn, Tianjin 300072, Peoples R China
[2] Zhengzhou Univ, Sch Elect & Informat Engn, Zhengzhou 450001, Henan, Peoples R China
Funding
National Natural Science Foundation of China; China Postdoctoral Science Foundation;
Keywords
Human-robot interaction; Cross-modal retrieval; Adversarial learning; Touch gesture; Text;
DOI
10.1016/j.eswa.2023.122738
Chinese Library Classification (CLC)
TP18 [Theory of artificial intelligence];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
In daily life, the touch modality plays an important role in conveying human intentions and emotions. To further improve touch-based human-robot interaction, robots need to infer human emotions from touch signals and respond accordingly, so correlating the emotional state of touch gestures with text responses is a major challenge. To date, there has been little research on touch-text dialogue, and robots cannot respond to human tactile gestures with appropriate text, so touch-text-based human-robot interaction is not yet possible. To address these problems, we first built a touch-text dialogue (TTD) corpus covering six basic emotions through experiments, containing 1109 touch-text sample pairs. We then designed a supervised adversarial learning for touch-text answer (SATTA) model to realize touch-text-based human-robot interaction. SATTA correlates the text modality with the touch modality by reducing an emotion discrimination loss in the common representation space and the feature difference between paired samples of the two modalities; at the same time, the feature representations are mapped into the label space to reduce the sample classification loss. Experiments on the TTD corpus validate the proposed method.
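To make the three loss terms in the abstract concrete, below is a minimal PyTorch-style sketch of supervised adversarial cross-modal learning of this kind. It is not the authors' implementation: all module names, dimensions, and the exact loss forms (cross-entropy for classification, MSE for the pairwise feature difference, a modality discriminator for the adversarial term) are assumptions for illustration.

```python
# Illustrative sketch (not the authors' code) of SATTA-style supervised
# adversarial cross-modal learning. Dimensions and loss forms are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

FEAT_DIM, COMMON_DIM, NUM_EMOTIONS = 128, 64, 6  # six basic emotions

class ModalityEncoder(nn.Module):
    """Maps one modality (touch or text features) into the common space."""
    def __init__(self, in_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, COMMON_DIM), nn.ReLU(),
                                 nn.Linear(COMMON_DIM, COMMON_DIM))
    def forward(self, x):
        return self.net(x)

touch_enc = ModalityEncoder(FEAT_DIM)
text_enc = ModalityEncoder(FEAT_DIM)
classifier = nn.Linear(COMMON_DIM, NUM_EMOTIONS)   # projection into label space
discriminator = nn.Linear(COMMON_DIM, 2)           # predicts modality: touch vs. text

def satta_losses(touch_x, text_x, labels):
    zt, zx = touch_enc(touch_x), text_enc(text_x)
    # 1) emotion discrimination loss in the label space, for both modalities
    cls = F.cross_entropy(classifier(zt), labels) + F.cross_entropy(classifier(zx), labels)
    # 2) invariance loss: a touch-text sample pair should have close features
    inv = F.mse_loss(zt, zx)
    # 3) adversarial loss: the encoders try to fool the modality discriminator
    mod = torch.cat([torch.zeros(len(zt)), torch.ones(len(zx))]).long()
    adv = F.cross_entropy(discriminator(torch.cat([zt, zx])), mod)
    # encoders/classifier minimize cls + inv - adv; discriminator minimizes adv
    return cls + inv - adv, adv
```

In training, one would alternate optimizer steps: update the discriminator on the second returned loss, then update the encoders and classifier on the first, so that the common-space features become emotion-discriminative but modality-invariant.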
Pages: 10