Is the Most Accurate AI the Best Teammate? Optimizing AI for Teamwork

Cited by: 0
Authors
Bansal, Gagan [1]
Nushi, Besmira [2]
Kamar, Ece [2]
Horvitz, Eric [2]
Weld, Daniel S. [1,3]
Affiliations
[1] Univ Washington, Seattle, WA 98195 USA
[2] Microsoft Res, Redmond, WA USA
[3] Allen Inst AI, Seattle, WA USA
Keywords: (none listed)
DOI: Not available
CLC Classification: TP18 [Artificial Intelligence Theory]
Subject Classification Codes: 081104; 0812; 0835; 1405
Abstract
AI practitioners typically strive to develop the most accurate systems, making an implicit assumption that the AI system will function autonomously. However, in practice, AI systems are often used to provide advice to people in domains ranging from criminal justice and finance to healthcare. In such AI-advised decision making, humans and machines form a team, where the human is responsible for making final decisions. But is the most accurate AI the best teammate? We argue "not necessarily": predictable performance may be worth a slight sacrifice in AI accuracy. Instead, we argue that AI systems should be trained in a human-centered manner, directly optimized for team performance. We study this proposal for a specific type of human-AI teaming, in which the human overseer chooses either to accept the AI recommendation or to solve the task themselves. To optimize team performance in this setting, we maximize the team's expected utility, expressed in terms of the quality of the final decision, the cost of verification, and the individual accuracies of people and machines. Our experiments with linear and non-linear models on real-world, high-stakes datasets show that the most accurate AI may not lead to the highest team performance, and they demonstrate the benefit of modeling teamwork during training through improvements in expected team utility across datasets, considering parameters such as human skill and the cost of mistakes. We discuss the shortcomings of current optimization approaches beyond well-studied loss functions such as log-loss, and encourage future work on AI optimization problems motivated by human-AI collaboration.
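To make the expected-utility formulation in the abstract concrete, the following is a minimal Python sketch, assuming a thresholded accept-or-solve decision rule and illustrative parameters (u_correct, u_wrong, solve_cost, accept_threshold); the decision model and all numeric values here are assumptions for illustration, not the formulation used in the paper.

# Illustrative sketch of per-instance expected team utility for AI-advised
# decision making. Parameter names, the threshold rule, and the numbers are
# assumptions for illustration only.

def expected_team_utility(p_ai_correct: float,
                          human_accuracy: float,
                          u_correct: float = 1.0,
                          u_wrong: float = -5.0,
                          solve_cost: float = 0.3,
                          accept_threshold: float = 0.8) -> float:
    """Expected utility of one AI-advised decision.

    The human overseer accepts the AI recommendation when the AI's
    (calibrated) probability of being correct exceeds `accept_threshold`;
    otherwise the human pays `solve_cost` to solve the task themselves.
    """
    if p_ai_correct >= accept_threshold:
        # Human accepts the recommendation; outcome depends on AI correctness.
        return p_ai_correct * u_correct + (1.0 - p_ai_correct) * u_wrong
    # Human overrides: outcome depends on human accuracy, minus the effort cost.
    return (human_accuracy * u_correct
            + (1.0 - human_accuracy) * u_wrong
            - solve_cost)


if __name__ == "__main__":
    # A less accurate but more predictable AI (confidence concentrated near
    # 0 or 1) can yield higher expected team utility than a more accurate but
    # unpredictable one, because the human knows when to step in.
    predictable = [0.95, 0.95, 0.10, 0.95]    # human overrides the obvious failure
    unpredictable = [0.85, 0.85, 0.85, 0.85]  # higher mean accuracy, never overridden
    for name, confs in [("predictable", predictable), ("unpredictable", unpredictable)]:
        total = sum(expected_team_utility(p, human_accuracy=0.9) for p in confs)
        print(f"{name}: mean AI accuracy={sum(confs)/len(confs):.2f}, "
              f"expected team utility={total/len(confs):.2f}")

In this toy example, the lower-accuracy but predictable model earns higher expected team utility (0.55 vs. 0.10 per instance) because the human reliably overrides its one low-confidence failure, mirroring the abstract's claim that the most accurate AI is not necessarily the best teammate.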
Pages: 11405-11414
Page count: 10