Appropriateness of ChatGPT in Answering Heart Failure Related Questions

Cited: 12
Authors
King, Ryan C. [1]
Samaan, Jamil S. [2]
Yeo, Yee Hui [2]
Mody, Behram [1]
Lombardo, Dawn M. [1]
Ghashghaei, Roxana [1]
Affiliations
[1] Univ Calif Irvine, Irvine Med Ctr, Dept Med, Div Cardiol, 101 City Dr South, Orange, CA 92868 USA
[2] Cedars Sinai Med Ctr, Dept Med, Karsh Div Gastroenterol & Hepatol, Los Angeles, CA USA
Source
HEART LUNG AND CIRCULATION | 2024, Vol. 33, Issue 09
Keywords
Heart failure; ChatGPT; Health education; Artificial intelligence; Equity;
DOI
10.1016/j.hlc.2024.03.005
Chinese Library Classification
R5 [Internal Medicine];
Discipline Classification Codes
1002 ; 100201 ;
Abstract
Background: Heart failure requires complex management, and increased patient knowledge has been shown to improve outcomes. This study assessed the knowledge of Chat Generative Pre-trained Transformer (ChatGPT) and its appropriateness as a supplemental source of information for patients with heart failure.
Methods: A total of 107 frequently asked heart failure-related questions were included, spanning three categories: "basic knowledge" (49), "management" (41), and "other" (17). Two responses per question were generated with each of GPT-3.5 and GPT-4 (i.e., two responses per question per model). The accuracy and reproducibility of responses were graded by two reviewers board-certified in cardiology, with differences resolved by a third reviewer board-certified in cardiology and advanced heart failure. Accuracy was graded on a four-point scale: (1) comprehensive, (2) correct but inadequate, (3) some correct and some incorrect, and (4) completely incorrect.
Results: GPT-4 provided correct information in 107/107 (100%) of responses and displayed a greater proportion of comprehensive knowledge in the "basic knowledge" and "management" categories (89.8% and 82.9%, respectively). For GPT-3.5, two responses (1.9%) were graded as "some correct and some incorrect", while no "completely incorrect" responses were produced. With respect to comprehensive knowledge, GPT-3.5 performed best in the "management" and "other" (prognosis, procedures, and support) categories (78.1% and 94.1%, respectively). Both models also provided highly reproducible responses, with GPT-3.5 scoring above 94% in every category and GPT-4 scoring 100% for all answers.
Conclusions: GPT-3.5 and GPT-4 answered the majority of heart failure-related questions accurately and reliably. If validated in future studies, ChatGPT may serve as a useful tool by providing accessible health-related information and education to patients living with heart failure. In its current state, ChatGPT necessitates further rigorous testing and validation to ensure patient safety and equity across all patient demographics.
Pages: 1314-1318
Page count: 5
Related Papers
50 records total
  • [41] Answering Product-related Questions with Heterogeneous Information
    Zhang, Wenxuan
    Yu, Qian
    Lam, Wai
    1ST CONFERENCE OF THE ASIA-PACIFIC CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS AND THE 10TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING (AACL-IJCNLP 2020), 2020, : 696 - 705
  • [42] Quality of ChatGPT Responses to Questions Related To Liver Transplantation
    Endo, Yutaka
    Sasaki, Kazunari
    Moazzam, Zorays
    Lima, Henrique A.
    Schenk, Austin
    Limkemann, Ashley
    Washburn, Kenneth
    Pawlik, Timothy M.
    JOURNAL OF GASTROINTESTINAL SURGERY, 2023, 27 (08) : 1716 - 1719
  • [44] ChatGPT's responses to gout-related questions
    Hong, Daorong
    Huang, Chunyan
    Chen, Xiaoqing
    Chen, Lijun
    ASIAN JOURNAL OF SURGERY, 2023, 46 (12) : 5935 - 5936
  • [45] Can We Ask ChatGPT About Drug Safety? Appropriateness of ChatGPT Responses to Questions about Drug Use and Adverse Reactions
    Pariente, Antoine
    Salvo, Francesco
    Bres, Virginie
    Faillie, Jean-Luc
    DRUG SAFETY, 2024, 47 (12) : 1340 - 1340
  • [46] ANSWERING QUESTIONS
    GARMSTON, R
    WELLMAN, B
    EDUCATIONAL LEADERSHIP, 1994, 51 (05) : 88 - 89
  • [47] ON ANSWERING QUESTIONS
    ARMBRUSTER, BB
    READING TEACHER, 1992, 45 (09): : 724 - 725
  • [48] Answering questions
    Nakamura, DN
    OIL & GAS JOURNAL, 2005, 103 (09) : 17 - 17
  • [49] Answering questions
    Maley, Mary
    Resource: Engineering and Technology for Sustainable World, 2019, 26 (02):
  • [50] Answering the questions
    Dennison, N
    LAB ANIMAL, 2000, 29 (01) : 18 - 19