A toolbox for surfacing health equity harms and biases in large language models

Cited by: 6
Authors
Pfohl, Stephen R. [1]
Cole-Lewis, Heather [1]
Sayres, Rory [1]
Neal, Darlene [1]
Asiedu, Mercy [1]
Dieng, Awa [2]
Tomasev, Nenad [2]
Rashid, Qazi Mamunur [1]
Azizi, Shekoofeh [2]
Rostamzadeh, Negar [1]
McCoy, Liam G. [3]
Celi, Leo Anthony [4,5,6]
Liu, Yun [1]
Schaekermann, Mike [1]
Walton, Alanna [2]
Parrish, Alicia [2]
Nagpal, Chirag [1]
Singh, Preeti [1]
DeWitt, Akeiylah [1]
Mansfield, Philip [2]
Prakash, Sushant [1]
Heller, Katherine [1]
Karthikesalingam, Alan [1]
Semturs, Christopher [1]
Barral, Joelle [2]
Corrado, Greg [1]
Matias, Yossi [1]
Smith-Loud, Jamila [1]
Horn, Ivor [1]
Singhal, Karan [1]
Affiliations
[1] Google Res, Mountain View, CA 94043 USA
[2] Google DeepMind, Mountain View, CA USA
[3] Univ Alberta, Edmonton, AB, Canada
[4] MIT, Lab Computat Physiol, Cambridge, MA USA
[5] Beth Israel Deaconess Med Ctr, Div Pulm Crit Care & Sleep Med, Boston, MA USA
[6] Harvard TH Chan Sch Publ Hlth, Dept Biostat, Boston, MA USA
Funding
US National Science Foundation
DOI
10.1038/s41591-024-03258-2
Chinese Library Classification
Q5 [Biochemistry]; Q7 [Molecular Biology]
Discipline codes
071010; 081704
Abstract
Large language models (LLMs) hold promise to serve complex health information needs but also have the potential to introduce harm and exacerbate health disparities. Reliably evaluating equity-related model failures is a critical step toward developing systems that promote health equity. We present resources and methodologies for surfacing biases with potential to precipitate equity-related harms in long-form, LLM-generated answers to medical questions and conduct a large-scale empirical case study with the Med-PaLM 2 LLM. Our contributions include a multifactorial framework for human assessment of LLM-generated answers for biases and EquityMedQA, a collection of seven datasets enriched for adversarial queries. Both our human assessment framework and our dataset design process are grounded in an iterative participatory approach and review of Med-PaLM 2 answers. Through our empirical study, we find that our approach surfaces biases that may be missed by narrower evaluation approaches. Our experience underscores the importance of using diverse assessment methodologies and involving raters of varying backgrounds and expertise. While our approach is not sufficient to holistically assess whether the deployment of an artificial intelligence (AI) system promotes equitable health outcomes, we hope that it can be leveraged and built upon toward a shared goal of LLMs that promote accessible and equitable healthcare.
Pages: 3590-3600