Is ChatGPT accurate and reliable in answering questions regarding head and neck cancer?

Cited by: 38
Authors
Kuscu, Oguz [1 ]
Pamuk, A. Erim [1 ]
Suslu, Nilda Sutay [2 ]
Hosal, Sefik [2 ]
Affiliations
[1] Hacettepe Univ, Sch Med, Dept Otorhinolaryngol, TR-06100 Ankara, Turkiye
[2] Atılım Univ, Sch Med, Dept Otorhinolaryngol, Ankara, Turkiye
Source
FRONTIERS IN ONCOLOGY, 2023, Vol. 13
Keywords
ChatGPT-4; head and neck (H&N) cancer; head and neck; artificial intelligence; chatbot; information literacy; natural language processing; machine learning
DOI
10.3389/fonc.2023.1256459
Chinese Library Classification: R73 (Oncology)
Discipline code: 100214
Abstract
Background and objective: Chat Generative Pre-trained Transformer (ChatGPT) is an artificial intelligence (AI)-based language processing model that uses deep learning to generate human-like text dialogue. It has become a popular source of information on a vast number of topics, including medicine. Patient education in head and neck cancer (HNC) is crucial to improving patients' understanding of their medical condition, diagnosis, and treatment options. This study therefore examines the accuracy and reliability of ChatGPT in answering questions regarding HNC.

Methods: A total of 154 head and neck cancer-related questions were compiled from sources including professional societies, institutions, patient support groups, and social media. The questions were categorized into topics such as basic knowledge, diagnosis, treatment, recovery, operative risks, complications, follow-up, and cancer prevention. ChatGPT was queried with each question, and two experienced head and neck surgeons independently assessed each response for accuracy and reproducibility. Responses were rated on a four-point scale: (1) comprehensive/correct, (2) incomplete/partially correct, (3) a mix of accurate and inaccurate/misleading, and (4) completely inaccurate/irrelevant. Discrepancies in grading were resolved by a third reviewer. Reproducibility was evaluated by repeating each question and analyzing the consistency of the grades.

Results: ChatGPT gave "comprehensive/correct" responses to 133 of 154 questions (86.4%), whereas the rates of "incomplete/partially correct" and "mixed accurate and inaccurate/misleading" responses were 11% and 2.6%, respectively. There were no "completely inaccurate/irrelevant" responses. By category, the model provided "comprehensive/correct" answers to 80.6% of questions on "basic knowledge", 92.6% on "diagnosis", 88.9% on "treatment", 80% on "recovery - operative risks - complications - follow-up", 100% on "cancer prevention", and 92.9% on "other". There was no significant difference between categories in the grades of ChatGPT responses (p = 0.88). The reproducibility rate was 94.1% (145 of 154 questions).

Conclusion: ChatGPT generated substantially accurate and reproducible information in response to diverse medical queries related to HNC. Despite its limitations, it can be a useful source of information for both patients and medical professionals. With further development of the model, ChatGPT may also play a role in clinical decision support by providing clinicians with up-to-date information.
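The grade distribution and reproducibility figures above reduce to simple proportions over the 154 questions. The following minimal Python sketch shows that tallying step; the per-grade counts (133, 17, 4, 0) and the repeat-agreement count (145) are reconstructed from the percentages reported in the abstract rather than taken from the study's raw data, and all variable and function names are illustrative.

# Minimal sketch: tally reviewer grades and the reproducibility rate
# for the 154 ChatGPT responses described in the abstract.
from collections import Counter

GRADE_LABELS = {
    1: "comprehensive/correct",
    2: "incomplete/partially correct",
    3: "mixed accurate and inaccurate/misleading",
    4: "completely inaccurate/irrelevant",
}

# Per-question consensus grades; the counts (133, 17, 4, 0) are reconstructed
# from the reported percentages, not drawn from the authors' dataset.
consensus_grades = [1] * 133 + [2] * 17 + [3] * 4

# Repeat-query agreement: 145 of 154 questions received the same grade twice.
same_grade_on_repeat = [True] * 145 + [False] * 9

def rate(part: int, whole: int) -> str:
    """Format a proportion as a percentage string, e.g. 133/154 -> '86.4%'."""
    return f"{100 * part / whole:.1f}%"

def summarize(grades, repeats):
    n = len(grades)
    counts = Counter(grades)
    for grade, label in GRADE_LABELS.items():
        c = counts.get(grade, 0)
        print(f"Grade {grade} ({label}): {c}/{n} = {rate(c, n)}")
    print(f"Reproducibility: {sum(repeats)}/{n} = {rate(sum(repeats), n)}")

summarize(consensus_grades, same_grade_on_repeat)
# Prints rates in line with the abstract: 86.4%, 11.0%, 2.6%, 0.0%, and ~94% reproducibility.

A between-category comparison such as the reported p = 0.88 would additionally require per-category grade counts, which the abstract does not provide.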
Pages: 7