Human-Machine Trust and Calibration Based on Human-in-the-Loop Experiment

Citations: 0
Authors
Wang, Yifan [1 ]
Guo, Jianbin [1 ]
Zeng, Shengkui [1 ]
Mao, Qirui [1 ]
Lu, Zhenping [2 ]
Wang, Zengkai [3 ]
Affiliations
[1] Beihang Univ, Sch Reliabil & Syst Engn, Beijing, Peoples R China
[2] Sichuan Gas Turbine Res Inst, Gen Technol Lab, Chengdu, Peoples R China
[3] Beijing Inst Elect Syst Engn, Beijing, Peoples R China
Keywords
human-machine trust; anchoring effect; trust calibration; undertrust; overtrust; automation
DOI
10.1109/SRSE56746.2022.10067635
Chinese Library Classification (CLC): T [Industrial Technology];
Discipline Code: 08;
Abstract
While automation systems bring efficiency gains, operators' trust in them has become an important factor in the safety of human-machine systems. Unsuitable trust in an automation system (such as undertrust or overtrust) means the human-automation system is not always well matched. In this paper, we took the aircraft engine fire alarm system as the research scenario, carried out a human-in-the-loop simulation experiment by injecting aircraft engine fire alarms, and measured each subject's trust level with the subjective report method. Based on the experimental data, we then studied the laws of human-machine trust, including trust anchoring (that is, when anchored with a known false alarm rate, a subject's trust fluctuates over a narrower range than with an unknown false alarm rate), trust elasticity, and the primacy effect. A human-machine trust calibration method was proposed to prevent undertrust and overtrust during human-machine interaction, and different forms of calibration were verified. Reminding subjects when the human error probability (HEP) >= 0.3, while also declaring whether the source of the human error is overtrust or undertrust, was found to be the more effective calibration method and generally reduced the human error probability.
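The calibration rule summarized in the abstract (remind the operator once HEP reaches 0.3, and name the dominant error source) can be sketched as a small decision function. This is a minimal illustrative sketch, not the authors' implementation: the function name, the simple error-counting estimate of HEP, and the majority rule for attributing the error source are all assumptions.

```python
# Hypothetical sketch of the calibration reminder described in the abstract.
# HEP is estimated as (errors / trials); the threshold 0.3 comes from the paper.
# Overtrust errors: acting on false alarms; undertrust errors: ignoring true alarms.

HEP_THRESHOLD = 0.3

def calibration_reminder(n_trials, overtrust_errors, undertrust_errors):
    """Return a reminder message when HEP >= threshold, else None."""
    if n_trials == 0:
        return None
    hep = (overtrust_errors + undertrust_errors) / n_trials
    if hep < HEP_THRESHOLD:
        return None  # trust appears adequately calibrated; no reminder
    # Attribute the reminder to whichever error source dominates (assumption)
    source = "overtrust" if overtrust_errors >= undertrust_errors else "undertrust"
    return f"HEP = {hep:.2f} >= {HEP_THRESHOLD}: errors mainly caused by {source}"

# Example: 10 alarm trials, 3 overtrust errors, 1 undertrust error
print(calibration_reminder(10, overtrust_errors=3, undertrust_errors=1))
```

Under these assumptions the example above triggers a reminder attributing the errors to overtrust, since HEP = 0.4 exceeds the 0.3 threshold.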
Pages: 476-481 (6 pages)
Related Papers (50 total)
  • [31] Building trust and responsibility into autonomous human-machine teams
    Gillespie, Tony
    FRONTIERS IN PHYSICS, 2022, 10
  • [32] Human-machine teams need a little trust to work
    McDonald, Michele
    AEROSPACE AMERICA, 2018, 56 (02) : 12 - 12
  • [33] Human-in-the-Loop Mixup
    Collins, Katherine M.
    Bhatt, Umang
    Liu, Weiyang
    Piratla, Vihari
    Sucholutsky, Ilia
    Love, Bradley
    Weller, Adrian
    UNCERTAINTY IN ARTIFICIAL INTELLIGENCE, 2023, 216 : 454 - 464
  • [34] Experiment-driven improvements in Human-in-the-loop Machine Learning Annotation via significance-based A/B testing
    Alfaro-Flores, Rafael
    Salas-Bonilla, Jose
    Juillard, Loic
    Esquivel-Rodriguez, Juan
    2021 XLVII LATIN AMERICAN COMPUTING CONFERENCE (CLEI 2021), 2021,
  • [35] From human-machine interaction to human-machine cooperation
    Hoc, JM
    ERGONOMICS, 2000, 43 (07) : 833 - 843
  • [36] A risk-based trust framework for assuring the humans in human-machine teaming
    Assaad, Zena
    PROCEEDINGS OF THE SECOND INTERNATIONAL SYMPOSIUM ON TRUSTWORTHY AUTONOMOUS SYSTEMS, TAS 2024, 2024,
  • [37] Human-in-the-Loop Based Named Entity Recognition
    Zhao, Yunpeng
    Liu, Ji
    2021 INTERNATIONAL CONFERENCE ON BIG DATA ENGINEERING AND EDUCATION (BDEE 2021), 2021, : 170 - 176
  • [38] Multi-agent modelling and analysis of the knowledge learning of a human-machine hybrid intelligent organization with human-machine trust
    Xue, Chaogai
    Zhang, Haoxiang
    Cao, Haiwang
    SYSTEMS SCIENCE & CONTROL ENGINEERING, 2024, 12 (01)
  • [39] A Rationale-Centric Framework for Human-in-the-loop Machine Learning
    Lu, Jinghui
    Yang, Linyi
    Mac Namee, Brian
    Zhang, Yue
    PROCEEDINGS OF THE 60TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), VOL 1: (LONG PAPERS), 2022, : 6986 - 6996
  • [40] LOOP CONTROLLERS GET ENHANCED HUMAN-MACHINE INTERFACES
    MORRIS, HM
    CONTROL ENGINEERING, 1994, 41 (07) : 62 - 65