A toolbox for surfacing health equity harms and biases in large language models

Cited by: 6
Authors
Pfohl, Stephen R. [1]
Cole-Lewis, Heather [1]
Sayres, Rory [1]
Neal, Darlene [1]
Asiedu, Mercy [1]
Dieng, Awa [2]
Tomasev, Nenad [2]
Rashid, Qazi Mamunur [1]
Azizi, Shekoofeh [2]
Rostamzadeh, Negar [1]
McCoy, Liam G. [3]
Celi, Leo Anthony [4,5,6]
Liu, Yun [1]
Schaekermann, Mike [1]
Walton, Alanna [2]
Parrish, Alicia [2]
Nagpal, Chirag [1]
Singh, Preeti [1]
DeWitt, Akeiylah [1]
Mansfield, Philip [2]
Prakash, Sushant [1]
Heller, Katherine [1]
Karthikesalingam, Alan [1]
Semturs, Christopher [1]
Barral, Joelle [2]
Corrado, Greg [1]
Matias, Yossi [1]
Smith-Loud, Jamila [1]
Horn, Ivor [1]
Singhal, Karan [1]
Affiliations
[1] Google Res, Mountain View, CA 94043 USA
[2] Google DeepMind, Mountain View, CA USA
[3] Univ Alberta, Edmonton, AB, Canada
[4] MIT, Lab Computat Physiol, Cambridge, MA USA
[5] Beth Israel Deaconess Med Ctr, Div Pulm Crit Care & Sleep Med, Boston, MA USA
[6] Harvard TH Chan Sch Publ Hlth, Dept Biostat, Boston, MA USA
Funding
US National Science Foundation
Keywords
DOI
10.1038/s41591-024-03258-2
Chinese Library Classification
Q5 [Biochemistry]; Q7 [Molecular Biology]
Subject classification codes
071010; 081704
Abstract
Large language models (LLMs) hold promise to serve complex health information needs but also have the potential to introduce harm and exacerbate health disparities. Reliably evaluating equity-related model failures is a critical step toward developing systems that promote health equity. We present resources and methodologies for surfacing biases with potential to precipitate equity-related harms in long-form, LLM-generated answers to medical questions and conduct a large-scale empirical case study with the Med-PaLM 2 LLM. Our contributions include a multifactorial framework for human assessment of LLM-generated answers for biases and EquityMedQA, a collection of seven datasets enriched for adversarial queries. Both our human assessment framework and our dataset design process are grounded in an iterative participatory approach and review of Med-PaLM 2 answers. Through our empirical study, we find that our approach surfaces biases that may be missed by narrower evaluation approaches. Our experience underscores the importance of using diverse assessment methodologies and involving raters of varying backgrounds and expertise. While our approach is not sufficient to holistically assess whether the deployment of an artificial intelligence (AI) system promotes equitable health outcomes, we hope that it can be leveraged and built upon toward a shared goal of LLMs that promote accessible and equitable healthcare.
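The abstract describes a multifactorial rubric applied by raters of varying backgrounds to long-form model answers. As a rough, purely illustrative sketch of how such ratings might be tallied, the Python snippet below aggregates hypothetical rater flags across assumed bias dimensions; the dimension names, the Rating record, and the bias_report helper are assumptions for exposition, not the published EquityMedQA rubric, data, or code.

# Minimal illustrative sketch only; all names and dimensions are assumptions.
from collections import Counter
from dataclasses import dataclass

# Hypothetical bias dimensions a rater might flag in a long-form answer.
BIAS_DIMENSIONS = (
    "inaccuracy_for_some_groups",
    "lack_of_inclusivity",
    "stereotypical_characterization",
    "omission_of_relevant_context",
)

@dataclass(frozen=True)
class Rating:
    answer_id: str       # which model answer was rated
    rater_id: str        # raters of varying backgrounds rate the same answers
    flagged: tuple = ()  # subset of BIAS_DIMENSIONS the rater marked as present

def bias_report(ratings):
    """Return the fraction of ratings flagging each dimension (descriptive only)."""
    total = max(len(ratings), 1)
    counts = Counter(d for r in ratings for d in r.flagged)
    return {d: counts.get(d, 0) / total for d in BIAS_DIMENSIONS}

if __name__ == "__main__":
    example = [
        Rating("q1", "physician_rater", ("omission_of_relevant_context",)),
        Rating("q1", "consumer_rater"),
        Rating("q2", "equity_expert", ("stereotypical_characterization",
                                       "omission_of_relevant_context")),
    ]
    for dim, frac in bias_report(example).items():
        print(f"{dim}: {frac:.2f}")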
Pages: 3590-3600
Volume: 30