Investigating Trust in Human-AI Collaboration for a Speech-Based Data Analytics Task

Cited: 2
Authors
Tutul, Abdullah Aman [1 ]
Nirjhar, Ehsanul Haque [1 ]
Chaspari, Theodora [2 ]
Affiliations
[1] Texas A&M Univ, College Stn, TX 77843 USA
[2] Univ Colorado Boulder, Boulder, CO USA
Funding
U.S. National Science Foundation
Keywords
Explainable AI; transparency; human trust; trust calibration; AUTOMATION; CALIBRATION; ATTITUDES; PEARSONS; AGE;
DOI
10.1080/10447318.2024.2328910
CLC Number (Chinese Library Classification)
TP3 [Computing technology, computer technology]
Discipline Code
0812
Abstract
Complex real-world problems can benefit from collaboration between humans and artificial intelligence (AI) to achieve reliable decision-making. We investigate trust in a human-in-the-loop decision-making task, in which participants with a background in the psychological sciences collaborate with an explainable AI system to estimate a speaker's anxiety level from speech. The AI system relies on an explainable boosting machine (EBM) model that takes prosodic features as input and estimates the anxiety level. Trust in the AI is quantified via self-reported (i.e., administered via a questionnaire) and behavioral (i.e., computed as user-AI agreement) measures, which are positively correlated with each other. Results indicate that humans and the AI exhibit differences in performance depending on the characteristics of the specific case under review. Overall, human annotators' trust in the AI increases over time, with momentary decreases after the AI partner makes an error. Annotators further differ in how appropriately they calibrate their trust in the AI system, with some over-trusting and some under-trusting it. Personality characteristics (i.e., agreeableness, conscientiousness) and overall propensity to trust machines further affect the level of trust in the AI system, with these findings approaching statistical significance. Results from this work will lead to a better understanding of human-AI collaboration and will guide the design of AI algorithms toward supporting better calibration of user trust.
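The abstract describes the modeling and trust-measurement pipeline only at a high level. The minimal sketch below illustrates the general idea using the open-source InterpretML implementation of an EBM; the prosodic features, anxiety labels, annotator decisions, agreement tolerance, and trust scores are all synthetic stand-ins for illustration, not the paper's data or code.

```python
import numpy as np
from scipy.stats import pearsonr
from interpret.glassbox import ExplainableBoostingRegressor

rng = np.random.default_rng(seed=0)

# Synthetic stand-ins for prosodic features (e.g., pitch, intensity, speech-rate
# statistics) and anxiety scores; the study's actual data are not public.
n_samples, n_features = 200, 6
X = rng.normal(size=(n_samples, n_features))
y = X @ rng.normal(size=n_features) + rng.normal(scale=0.5, size=n_samples)

# Glass-box EBM regressor estimating anxiety level from the prosodic features.
ebm = ExplainableBoostingRegressor()
ebm.fit(X, y)
ai_estimates = ebm.predict(X)

# Behavioral trust computed as user-AI agreement: here, the fraction of cases in
# which the annotator's decision falls within an assumed tolerance of the AI estimate.
annotator_decisions = ai_estimates + rng.normal(scale=0.3, size=n_samples)  # stand-in
agreement = np.mean(np.abs(annotator_decisions - ai_estimates) < 0.5)

# Self-reported trust (questionnaire-style ratings) correlated with per-annotator
# behavioral trust via Pearson's r, mirroring the association noted in the abstract.
self_reported_trust = rng.uniform(1, 7, size=30)   # hypothetical Likert-style scores
behavioral_trust = rng.uniform(0, 1, size=30)      # hypothetical agreement rates
r, p = pearsonr(self_reported_trust, behavioral_trust)

print(f"User-AI agreement (behavioral trust): {agreement:.2f}")
print(f"Pearson r between trust measures: {r:.2f} (p = {p:.3f})")
```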
Pages: 2936-2954
Number of pages: 19
Related Papers
50 records
  • [31] Synthesizing Explainable Behavior for Human-AI Collaboration
    Kambhampati, Subbarao
    AAMAS '19: PROCEEDINGS OF THE 18TH INTERNATIONAL CONFERENCE ON AUTONOMOUS AGENTS AND MULTIAGENT SYSTEMS, 2019, : 1 - 2
  • [32] Enhancing human-AI collaboration: The case of colonoscopy
    Introzzi, Luca
    Zonca, Joshua
    Cabitza, Federico
    Cherubini, Paolo
    Reverberi, Carlo
    DIGESTIVE AND LIVER DISEASE, 2024, 56 (07) : 1131 - 1139
  • [33] Exploration of Explainable AI for Trust Development on Human-AI Interaction
    Bernardo, Ezekiel L.
    Seva, Rosemary R.
    PROCEEDINGS OF 2023 6TH ARTIFICIAL INTELLIGENCE AND CLOUD COMPUTING CONFERENCE, AICCC 2023, 2023, : 238 - 246
  • [34] AI-Driven Personalization to Support Human-AI Collaboration
    Conati, Cristina
    COMPANION OF THE 2024 ACM SIGCHI SYMPOSIUM ON ENGINEERING INTERACTIVE COMPUTING SYSTEMS, EICS 2024, 2024, : 5 - 6
  • [35] How Do AI Explanations Affect Human-AI Trust?
    Bui, Lam
    Pezzola, Marco
    Bandara, Danushka
    ARTIFICIAL INTELLIGENCE IN HCI, AI-HCI 2023, PT I, 2023, 14050 : 175 - 183
  • [36] Exploring Motivators for Trust in the Dichotomy of Human-AI Trust Dynamics
    Gerlich, Michael
    SOCIAL SCIENCES-BASEL, 2024, 13 (05):
  • [37] Explanatory machine learning for justified trust in human-AI collaboration: Experiments on file deletion recommendations
    Goebel, Kyra
    Niessen, Cornelia
    Seufert, Sebastian
    Schmid, Ute
    FRONTIERS IN ARTIFICIAL INTELLIGENCE, 2022, 5
  • [38] Reflexive Data Curation: Opportunities and Challenges for Embracing Uncertainty in Human-AI Collaboration
    Arzberger, Anne
    Lupetti, Maria Luce
    Giaccardi, Elisa
    ACM TRANSACTIONS ON COMPUTER-HUMAN INTERACTION, 2024, 31 (06)
  • [39] Exploring the Role of Trust During Human-AI Collaboration in Managerial Decision-Making Processes
    Tuncer, Serdar
    Ramirez, Alejandro
    HCI INTERNATIONAL 2022 - LATE BREAKING PAPERS: INTERACTING WITH EXTENDED REALITY AND ARTIFICIAL INTELLIGENCE, 2022, 13518 : 541 - 557
  • [40] Would you trust an AI team member? Team trust in human-AI teams
    Georganta, Eleni
    Ulfert, Anna-Sophie
    JOURNAL OF OCCUPATIONAL AND ORGANIZATIONAL PSYCHOLOGY, 2024, 97 (03) : 1212 - 1241