A multi-modal object attention system for a mobile robot

Cited by: 20
Authors
Haasch, A [1]
Hofemann, N [1]
Fritsch, J [1]
Sagerer, G [1]
Affiliations
[1] Univ Bielefeld, Fac Technol, D-33594 Bielefeld, Germany
Keywords
object attention; human-robot interaction; robot companion
DOI
10.1109/IROS.2005.1545191
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Robot companions are intended for operation in private homes with naive users. For this purpose, they need to be endowed with natural interaction capabilities. Additionally, such robots will need to be taught unknown objects that are present in private homes. We present a multi-modal object attention system that is able to identify objects referenced by the user with gestures and verbal instructions. The proposed system can detect known and unknown objects and stores newly acquired object information in a scene model for later retrieval. This way, the growing knowledge base of the robot companion improves the interaction quality as the robot can more easily focus its attention on objects it has been taught previously.
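The abstract's central mechanism, storing newly acquired object information in a scene model so the robot can later re-focus attention on objects it was taught, can be sketched roughly as follows. All class names, fields, and methods here are hypothetical illustrations, not the authors' implementation.

```python
# Hedged sketch of a "scene model" for taught objects: the user references an
# object by gesture and speech, and the robot stores it for later retrieval.
# Names and fields are assumptions for illustration only.
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class ObjectEntry:
    label: str              # name given verbally by the user
    position: Tuple[float, float, float]  # location indicated by the gesture
    known: bool             # whether the object type was already known


class SceneModel:
    def __init__(self) -> None:
        self._entries: dict[str, ObjectEntry] = {}

    def store(self, entry: ObjectEntry) -> None:
        # Newly referenced objects are added; repeated references to the
        # same label simply update the stored entry.
        self._entries[entry.label] = entry

    def retrieve(self, label: str) -> Optional[ObjectEntry]:
        # Later retrieval lets attention jump straight to a taught object
        # instead of re-running full detection.
        return self._entries.get(label)


scene = SceneModel()
scene.store(ObjectEntry("cup", (0.4, 0.1, 0.8), known=False))
print(scene.retrieve("cup").position)  # -> (0.4, 0.1, 0.8)
```

This mirrors the claim that the robot's growing knowledge base improves interaction quality: a lookup by label is cheap compared with identifying an unknown object from scratch.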
Pages: 1499-1504
Page count: 6
Related Papers (50 in total)
  • [21] Interactive multi-modal robot programming
    Iba, S
    Paredis, CJJ
    Khosla, PK
    EXPERIMENTAL ROBOTICS IX, 2006, 21 : 503 - +
  • [22] Multi-modal human-robot interface for interaction with a remotely operating mobile service robot
    Fischer, C
    Schmidt, G
    ADVANCED ROBOTICS, 1998, 12 (04) : 397 - 409
  • [23] Multi-modal mobile sensor data fusion for autonomous robot mapping problem
    Kassem, M. H.
    Shehata, Omar M.
    Morgan, E. I. Imam
    2015 3RD INTERNATIONAL CONFERENCE ON CONTROL, MECHATRONICS AND AUTOMATION (ICCMA 2015), 2016, 42
  • [24] A huggable, mobile robot for developmental disorder interventions in a multi-modal interaction space
    Bonarini, Andrea
    Garzotto, Franca
    Gelsomini, Mirko
    Romero, Maximiliano
    Clasadonte, Francesco
    Yilmaz, Ayse Naciye Celebi
    2016 25TH IEEE INTERNATIONAL SYMPOSIUM ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION (RO-MAN), 2016, : 823 - 830
  • [25] MULTI-MODAL PERSON DETECTION AND TRACKING FROM A MOBILE ROBOT IN A CROWDED ENVIRONMENT
    Mekonnen, A. A.
    Lerasle, F.
    Zuriarrain, I.
    VISAPP 2011: PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON COMPUTER VISION THEORY AND APPLICATIONS, 2011, : 511 - 520
  • [26] Studying Multi-modal Human Robot Interaction Using a Mobile VR Simulation
    Milde, Sven
    Runzheimer, Tabea
    Friesen, Stefan
    Peiffer, Johannes-Hubert
    Hoefler, Johannes-Jeremias
    Geis, Kerstin
    Milde, Jan-Torsten
    Blum, Rainer
    HUMAN-COMPUTER INTERACTION, HCI 2023, PT III, 2023, 14013 : 140 - 155
  • [27] Multi-Modal Biometrics for Mobile Authentication
    Aronowitz, Hagai
    Li, Min
    Toledo-Ronen, Orith
    Harary, Sivan
    Geva, Amir
    Ben-David, Shay
    Rendel, Asaf
    Hoory, Ron
    Ratha, Nalini
    Pankanti, Sharath
    Nahamoo, David
    2014 IEEE/IAPR INTERNATIONAL JOINT CONFERENCE ON BIOMETRICS (IJCB 2014), 2014,
  • [28] A Probabilistic Approach for Attention-Based Multi-Modal Human-Robot Interaction
    Begum, Momotaz
    Karray, Fakhri
    Mann, George K. I.
    Gosine, Raymond
    RO-MAN 2009: THE 18TH IEEE INTERNATIONAL SYMPOSIUM ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION, VOLS 1 AND 2, 2009, : 909 - +
  • [29] MIA-Net: Multi-Modal Interactive Attention Network for Multi-Modal Affective Analysis
    Li, Shuzhen
    Zhang, Tong
    Chen, Bianna
    Chen, C. L. Philip
    IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, 2023, 14 (04) : 2796 - 2809
  • [30] Preliminary Study of Multi-modal Dialogue System for Personal Robot with IoTs
    Yamasaki, Shintaro
    Matsui, Kenji
    DISTRIBUTED COMPUTING AND ARTIFICIAL INTELLIGENCE, 2018, 620 : 286 - 292