Evaluation and mitigation of cognitive biases in medical language models

Cited by: 1
Authors
Schmidgall, Samuel [1]
Harris, Carl [2]
Essien, Ime [2]
Olshvang, Daniel [2]
Rahman, Tawsifur [2]
Kim, Ji Woong [3]
Ziaei, Rojin [4]
Eshraghian, Jason [5]
Abadir, Peter [6]
Chellappa, Rama [1,2]
Affiliations
[1] Johns Hopkins Univ, Dept Elect & Comp Engn, Baltimore, MD 21218 USA
[2] Johns Hopkins Univ, Dept Biomed Engn, Baltimore, MD USA
[3] Johns Hopkins Univ, Dept Mech Engn, Baltimore, MD USA
[4] Univ Maryland, Dept Comp Sci, College Pk, MD USA
[5] Univ Calif Santa Cruz, Dept Elect & Comp Engn, Santa Cruz, CA USA
[6] Johns Hopkins Univ, Sch Med, Div Geriatr Med & Gerontol, Baltimore, MD USA
Source
NPJ DIGITAL MEDICINE | 2024, Vol. 7, No. 1
Funding
US National Science Foundation; US National Institutes of Health
Keywords
DOI
10.1038/s41746-024-01283-6
Chinese Library Classification
R19 [Health care organization and services (health administration)]
Subject Classification
Abstract
Increasing interest in applying large language models (LLMs) to medicine is due in part to their impressive performance on medical exam questions. However, these exams do not capture the complexity of real patient-doctor interactions, which is shaped by factors such as patient compliance, experience, and cognitive bias. We hypothesized that LLMs would produce less accurate responses when faced with clinically biased questions than with unbiased ones. To test this, we developed the BiasMedQA dataset, which consists of 1,273 USMLE questions modified to replicate common clinically relevant cognitive biases. We assessed six LLMs on BiasMedQA and found that GPT-4 stood out for its resilience to bias, in contrast to Llama 2 70B-chat and PMC Llama 13B, which showed large drops in performance. Additionally, we introduced three bias mitigation strategies, which improved accuracy but did not fully restore it. Our findings highlight the need to improve LLMs' robustness to cognitive biases in order to enable more reliable applications in healthcare.
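The abstract describes the evaluation only at a high level: exam questions are modified to inject a cognitive bias, and model accuracy is compared against the unmodified baseline. The sketch below illustrates one plausible form such a bias-injection-and-scoring loop could take. The MCQ class, the RECENCY_BIAS template, and the query_llm callable are hypothetical placeholders introduced for illustration, not the authors' released BiasMedQA code or prompts.

```python
# Minimal sketch, assuming USMLE-style multiple-choice questions and a generic
# chat-completion callable; all names here are illustrative, not from the paper.
from dataclasses import dataclass

@dataclass
class MCQ:
    stem: str                 # question stem
    options: dict[str, str]   # option letter -> option text
    answer: str               # correct option letter

# Hypothetical template: append a sentence that induces a recency-style bias
# toward an incorrect option while leaving the question itself unchanged.
RECENCY_BIAS = (
    "Recently, you treated a patient with similar symptoms whose final "
    "diagnosis was {distractor}."
)

def biased_prompt(q: MCQ, distractor_letter: str) -> str:
    """Build a prompt with the bias sentence pointing at a wrong option."""
    opts = "\n".join(f"{k}. {v}" for k, v in q.options.items())
    bias = RECENCY_BIAS.format(distractor=q.options[distractor_letter])
    return f"{q.stem}\n{bias}\n{opts}\nAnswer with a single letter."

def accuracy(questions: list[MCQ], query_llm) -> float:
    """Fraction of bias-modified questions the model still answers correctly."""
    correct = 0
    for q in questions:
        distractor = next(k for k in q.options if k != q.answer)
        reply = query_llm(biased_prompt(q, distractor))  # model-specific API call
        correct += reply.strip().upper().startswith(q.answer)
    return correct / len(questions)
```

Comparing this score with the same loop run on unmodified prompts gives the per-bias accuracy drop; a mitigation strategy such as a warning about the bias would, under this sketch, simply be prepended to the prompt before querying the model.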
Pages: 9