Investigating the Role of Multi-modal Social Cues in Human-Robot Collaboration in Industrial Settings

Times Cited: 4
Authors
Cao, Hoang-Long [1 ,2 ]
Scholz, Constantin [1 ,3 ]
De Winter, Joris [1 ,2 ]
El Makrini, Ilias [1 ,2 ]
Vanderborght, Bram [1 ,3 ]
Affiliations
[1] Vrije Univ Brussel, BruBot, Brussels, Belgium
[2] Flanders Make, Lommel, Belgium
[3] imec, Leuven, Belgium
Funding
EU Horizon 2020;
Keywords
Collaborative robots; Multi-modal social cues; Godspeed; Acceptance; COMMUNICATION; GESTURES; GAZE;
DOI
10.1007/s12369-023-01018-9
CLC Number
TP24 [Robotics];
Subject Classification Codes
080202; 1405;
Abstract
Expressing social cues through different communication channels plays an important role in mutual understanding, in both human-human and human-robot collaboration. A few studies have investigated the effects of zoomorphic and anthropomorphic social cues expressed by industrial robot arms on robot-to-human communication. In this work, we investigate the role of multi-modal social cues by combining the robot's head-like gestures with light and sound modalities in two studies. The first study found that multi-modal social cues positively affect people's perception of the robot, perceived enjoyment, and intention to use. The second study found that combining human-like gestures with light and/or sound modalities can make the robot's social cues more understandable. These findings support the use of multi-modal social cues for robots in industrial settings; however, possible negative impacts of implementing these cues, e.g., overtrust and distraction, should also be considered.
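To make the cue design concrete, the following is a minimal Python sketch of how a head-like gesture might be reinforced with light and sound channels, as in the combined conditions described in the abstract. All names here (SocialCue, express, the gesture and channel labels) are hypothetical illustrations, not the authors' implementation.

# Minimal sketch (hypothetical API, not the authors' implementation) of a
# multi-modal social cue: a head-like gesture optionally reinforced by
# light and sound channels.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class Gesture(Enum):
    NOD = auto()    # head-like "yes" motion mapped onto the arm
    SHAKE = auto()  # head-like "no" motion


@dataclass
class SocialCue:
    """One robot-to-human signal, expressed on up to three channels."""
    gesture: Gesture
    light: Optional[str] = None  # e.g. "green_pulse" on an LED ring
    sound: Optional[str] = None  # e.g. "confirm_chime"


def express(cue: SocialCue) -> None:
    # A real controller would drive the arm, LED ring, and speaker here;
    # the calls are stubbed with prints for illustration.
    print(f"gesture: {cue.gesture.name}")
    if cue.light is not None:
        print(f"light:   {cue.light}")
    if cue.sound is not None:
        print(f"sound:   {cue.sound}")


# Example: a gesture + light + sound condition, the kind of combination
# the second study found most understandable.
express(SocialCue(Gesture.NOD, light="green_pulse", sound="confirm_chime"))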
Pages: 1169-1179
Number of pages: 11
Related Papers
50 records
  • [31] Survey on human-robot collaboration in industrial settings: Safety, intuitive interfaces and applications
    Villani, Valeria
    Pini, Fabio
    Leali, Francesco
    Secchi, Cristian
    MECHATRONICS, 2018, 55 : 248 - 266
  • [32] Multi-modal human-robot interface for interaction with a remotely operating mobile service robot
    Fischer, C
    Schmidt, G
    ADVANCED ROBOTICS, 1998, 12 (04) : 397 - 409
  • [33] The Role of Social Cues for Goal Disambiguation in Human-Robot Cooperation
    Vinanzi, Samuele
    Cangelosi, Angelo
    Goerick, Christian
    2020 29TH IEEE INTERNATIONAL CONFERENCE ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION (RO-MAN), 2020, : 971 - 977
  • [34] Are You Sure? - Multi-Modal Human Decision Uncertainty Detection in Human-Robot Interaction
    Scherf, Lisa
    Gasche, Lisa Alina
    Chemangui, Eya
    Koert, Dorothea
    PROCEEDINGS OF THE 2024 ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION, HRI 2024, 2024, : 621 - 629
  • [35] Designing and Implementing a Platform for Collecting Multi-Modal Data of Human-Robot Interaction
    Vaughan, Brian
    Han, Jing Guang
    Gilmartin, Emer
    Campbell, Nick
    ACTA POLYTECHNICA HUNGARICA, 2012, 9 (01) : 7 - 17
  • [36] A Probabilistic Approach for Attention-Based Multi-Modal Human-Robot Interaction
    Begum, Momotaz
    Karray, Fakhri
    Mann, George K. I.
    Gosine, Raymond
    RO-MAN 2009: THE 18TH IEEE INTERNATIONAL SYMPOSIUM ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION, VOLS 1 AND 2, 2009: 909+
  • [38] Investigation of multi-modal interface features for adaptive automation of a human-robot system
    Kaber, DB
    Wright, MC
    Sheik-Nainar, MA
    INTERNATIONAL JOURNAL OF HUMAN-COMPUTER STUDIES, 2006, 64 (06) : 527 - 540
  • [39] Editorial: Integrated Multi-modal and Sensorimotor Coordination for Enhanced Human-Robot Interaction
    Fang, Bin
    Fang, Cheng
    Wen, Li
    Manoonpong, Poramate
    FRONTIERS IN NEUROROBOTICS, 2021, 15
  • [40] Towards Multi-Modal Intention Interfaces for Human-Robot Co-Manipulation
    Peternel, Luka
    Tsagarakis, Nikos
    Ajoudani, Arash
    2016 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS 2016), 2016, : 2663 - 2669