Using Agent Features to Influence User Trust, Decision Making and Task Outcome during Human-Agent Collaboration

Cited by: 5
Authors
Herse, Sarita [1 ]
Vitale, Jonathan [2 ,3 ]
Williams, Mary-Anne [1 ]
Affiliations
[1] Univ New South Wales, Sch Management & Governance, UNSW Business Sch, Sydney, Australia
[2] Univ Technol Sydney, Sch Comp Sci, Sydney, Australia
[3] Univ New England, Sch Comp Sci, Armidale, Australia
Keywords
ROBOT; AUTOMATION; STRATEGIES; ALLOCATION; POWER;
DOI
10.1080/10447318.2022.2150691
CLC Classification
TP3 [Computing technology; computer technology]
Subject Classification Code
0812
Abstract
Optimal performance of collaborative tasks requires consideration of the interactions between intelligent agents and their human counterparts. The functionality and success of these agents lie in their ability to maintain user trust, with too much or too little trust leading to over-reliance and under-utilisation, respectively. This problem highlights the need for an appropriate trust calibration methodology with an ability to vary user trust and decision making in-task. An online experiment was run to investigate whether stimulus difficulty and the implementation of agent features by a collaborative recommender system interact to influence user perception, trust and decision making. Agent features are changes to the Human-Agent interface and interaction style, and include presentation of a disclaimer message, a request for more information from the user, and no additional feature. Signal detection theory is utilised to interpret decision making, applied both to performance on the task itself and to decisions made with the collaborative agent. The results demonstrate that decision change occurs more for hard stimuli, with participants choosing to change their initial decision across all features to follow the agent recommendation. Furthermore, agent features can be utilised to mediate user decision making and trust in-task, though the direction and extent of this influence depend on the implemented feature and the difficulty of the task. The results emphasise the complexity of user trust in Human-Agent collaboration, highlighting the importance of considering task context in the wider perspective of trust calibration.
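The abstract's use of signal detection theory to characterise decision making can be illustrated with a minimal sketch: sensitivity (d') and response criterion (c) computed from hit and false-alarm rates. The function name and the numeric rates below are hypothetical, chosen only for illustration; they are not taken from the paper.

```python
from statistics import NormalDist

def sdt_measures(hit_rate: float, fa_rate: float) -> tuple[float, float]:
    """Compute SDT sensitivity (d') and criterion (c) from hit/false-alarm rates."""
    z = NormalDist().inv_cdf  # inverse standard-normal CDF (z-transform)
    d_prime = z(hit_rate) - z(fa_rate)              # discriminability
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))   # response bias
    return d_prime, criterion

# Hypothetical rates for a participant deciding with an agent's recommendation:
d, c = sdt_measures(hit_rate=0.85, fa_rate=0.20)
print(f"d' = {d:.2f}, c = {c:.2f}")
```

Under this convention, a higher d' reflects better discrimination between signal and noise trials, while c captures a conservative (c > 0) or liberal (c < 0) response bias; rates of exactly 0 or 1 would need a correction (e.g. the log-linear rule) before the z-transform.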
Pages: 1740-1761
Number of pages: 22
Related Papers
(40 in total)
  • [1] Using Trust to Determine User Decision Making & Task Outcome During a Human-Agent Collaborative Task
    Herse, Sarita
    Vitale, Jonathan
    Johnston, Benjamin
    Williams, Mary-Anne
    2021 16TH ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION, HRI, 2021, : 73 - 82
  • [2] Simulation Evidence of Trust Calibration: Using POMDP with Signal Detection Theory to Adapt Agent Features for Optimised Task Outcome During Human-Agent Collaboration
    Herse, Sarita
    Vitale, Jonathan
    Williams, Mary-Anne
    INTERNATIONAL JOURNAL OF SOCIAL ROBOTICS, 2024, 16 (06) : 1381 - 1403
  • [3] Human-Agent Collaboration for Time-Stressed Multicontext Decision Making
    Fan, Xiaocong
    McNeese, Michael
    Sun, Bingjun
    Hanratty, Timothy
    Allender, Laurel
    Yen, John
    IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS PART A-SYSTEMS AND HUMANS, 2010, 40 (02): : 306 - 320
  • [4] Let's Compete! The Influence of Human-Agent Competition and Collaboration on Agent Learning and Human Perception
    Phaijit, Ornnalin
    Sammut, Claude
    Johal, Wafa
    PROCEEDINGS OF THE 10TH CONFERENCE ON HUMAN-AGENT INTERACTION, HAI 2022, 2022, : 86 - 94
  • [5] Human-Agent Decision-making: Combining Theory and Practice
    Kraus, Sarit
    ELECTRONIC PROCEEDINGS IN THEORETICAL COMPUTER SCIENCE, 2016, (215): : 13 - 27
  • [6] Incorporating BDI Agents into Human-Agent Decision Making Research
    Kamphorst, Bart
    van Wissen, Arlette
    Dignum, Virginia
    ENGINEERING SOCIETIES IN THE AGENTS WORLD X, 2009, 5881 : 84 - 97
  • [7] Explainable Agents for Less Bias in Human-Agent Decision Making
    Malhi, Avleen
    Knapic, Samanta
    Framling, Kary
    EXPLAINABLE, TRANSPARENT AUTONOMOUS AGENTS AND MULTI-AGENT SYSTEMS (EXTRAAMAS 2020), 2020, 12175 : 129 - 146
  • [8] Trust Lengthens Decision Time on Unexpected Recommendations in Human-agent Interaction
    Tokushige, Hiroyuki
    Narumi, Takuji
    Ono, Sayaka
    Fuwamoto, Yoshitaka
    Tanikawa, Tomohiro
    Hirose, Michitaka
    PROCEEDINGS OF THE 5TH INTERNATIONAL CONFERENCE ON HUMAN AGENT INTERACTION (HAI'17), 2017, : 245 - 252
  • [9] Leveraging Human-Agent Collaboration for Multimodal Task Guidance with Concurrent Authoring Capabilities
    Fleiner, Christian
    COMPANION PROCEEDINGS OF THE 27TH INTERNATIONAL CONFERENCE ON INTELLIGENT USER INTERFACES, IUI 2022 COMPANION, 2022, : 138 - 142
  • [10] Moral Decision Making in Human-Agent Teams: Human Control and the Role of Explanations
    van der Waa, Jasper
    Verdult, Sabine
    van den Bosch, Karel
    van Diggelen, Jurriaan
    Haije, Tjalling
    van der Stigchel, Birgit
    Cocu, Ioana
    FRONTIERS IN ROBOTICS AND AI, 2021, 8