The principles and limits of algorithm-in-the-loop decision making

Cited by: 130
Authors
Green B. [1 ]
Chen Y. [1 ]
Affiliations
[1] Harvard University, United States
Funding
National Science Foundation (United States)
Keywords
Behavioral experiment; Ethics; Fairness; Mechanical Turk; Risk assessment;
DOI
10.1145/3359152
Abstract
The rise of machine learning has fundamentally altered decision making: rather than being made solely by people, many important decisions are now made through an “algorithm-in-the-loop” process in which machine learning models inform people. Yet insufficient research has considered how the interactions between people and models actually influence human decisions. Society lacks both clear normative principles regarding how people should collaborate with algorithms and robust empirical evidence about how people do collaborate with algorithms. Given research suggesting that people struggle to interpret machine learning models and to incorporate them into their decisions—sometimes leading these models to produce unexpected outcomes—it is essential to consider how different ways of presenting models and structuring human-algorithm interactions affect the quality and type of decisions made. This paper contributes to such research in two ways. First, we posited three principles as essential to ethical and responsible algorithm-in-the-loop decision making. Second, through a controlled experimental study on Amazon Mechanical Turk, we evaluated whether people satisfy these principles when making predictions with the aid of a risk assessment. We studied human predictions in two contexts (pretrial release and financial lending) and under several conditions for risk assessment presentation and structure. Although these conditions did influence participant behaviors and in some cases improved performance, only one desideratum was consistently satisfied. Under all conditions, our study participants (1) were unable to effectively evaluate the accuracy of their own or the risk assessment’s predictions, (2) did not calibrate their reliance on the risk assessment based on the risk assessment’s performance, and (3) exhibited bias in their interactions with the risk assessment. These results highlight the urgent need to expand our analyses of algorithmic decision-making aids beyond evaluating the models themselves to investigating the full sociotechnical contexts in which people and algorithms interact. © 2019 Copyright held by the owner/author(s). Publication rights licensed to ACM.
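The abstract refers to measuring how accurate participants' predictions were and how strongly participants relied on the risk assessment. As a rough illustration only (not the authors' actual analysis code, and using invented column names and numbers), the sketch below computes a Brier score for probabilistic predictions and a simple "weight on advice"-style reliance measure from hypothetical trial data.

```python
# Illustrative sketch, not the paper's analysis: for each hypothetical trial we
# have a participant's initial prediction, the risk assessment's score, the
# participant's final prediction, and the observed binary outcome.
from statistics import mean

# Hypothetical trial data: probabilities of an adverse outcome in [0, 1].
trials = [
    # (initial_prediction, risk_score, final_prediction, outcome)
    (0.30, 0.60, 0.45, 1),
    (0.70, 0.20, 0.50, 0),
    (0.50, 0.50, 0.50, 1),
    (0.10, 0.80, 0.60, 0),
]

def brier(pred: float, outcome: int) -> float:
    """Squared error of a probabilistic prediction against a binary outcome."""
    return (pred - outcome) ** 2

def weight_on_advice(initial: float, advice: float, final: float) -> float:
    """Fraction of the gap between the initial prediction and the advice that
    the final prediction closed (0 = advice ignored, 1 = advice adopted)."""
    return (final - initial) / (advice - initial)

participant_error = mean(brier(final, y) for _, _, final, y in trials)
risk_score_error = mean(brier(risk, y) for _, risk, _, y in trials)

weights = [
    weight_on_advice(init, risk, final)
    for init, risk, final, _ in trials
    if risk != init  # reliance is undefined when advice equals the initial guess
]
reliance = mean(weights) if weights else float("nan")

print(f"Participant Brier score:     {participant_error:.3f}")
print(f"Risk assessment Brier score: {risk_score_error:.3f}")
print(f"Mean weight on advice:       {reliance:.3f}")
```

In this toy data the participant's final predictions move roughly halfway toward the risk score regardless of how accurate the score is, which is the kind of uncalibrated reliance the study measures.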