Artificial intelligence chatbots as sources of patient education material for obstructive sleep apnoea: ChatGPT versus Google Bard

Cited: 31
Authors
Cheong, Ryan Chin Taw [1 ]
Unadkat, Samit [2 ]
Mcneillis, Venkata [2 ]
Williamson, Andrew [1 ]
Joseph, Jonathan [2 ]
Randhawa, Premjit [2 ]
Andrews, Peter [2 ]
Paleri, Vinidh [1 ]
Affiliations
[1] Royal Marsden NHS Fdn Trust, Otolaryngol Head & Neck Surg Dept, Fulham Rd, London SW3 6JJ, England
[2] Univ Coll London Hosp NHS Fdn Trust, Royal Natl ENT & Eastman Dent Hosp, Otolaryngol Head & Neck Surg Dept, London, England
Keywords
Artificial intelligence; Large language models; ChatGPT; Google Bard; Obstructive sleep apnoea; Patient education material;
DOI
10.1007/s00405-023-08319-9
Chinese Library Classification
R76 [Otorhinolaryngology]
Subject Classification Code
100213
Abstract
Purpose: To perform the first head-to-head comparative evaluation of patient education material for obstructive sleep apnoea (OSA) generated by two artificial intelligence chatbots, ChatGPT and its primary rival, Google Bard.

Methods: Fifty frequently asked questions on obstructive sleep apnoea in English were extracted from the patient information webpages of four major sleep organizations and categorized as input prompts. ChatGPT and Google Bard responses were collected and independently rated using the Patient Education Materials Assessment Tool-Printable (PEMAT-P) Auto-Scoring Form by two otolaryngologists, each holding a Fellowship of the Royal College of Surgeons (FRCS) and with a special interest in sleep medicine and surgery. As a secondary outcome, responses were subjectively screened for any incorrect or dangerous information. The Flesch-Kincaid Calculator was used to evaluate the readability of responses from both ChatGPT and Google Bard.

Results: A total of 46 questions were curated and categorized into three domains: condition (n = 14), investigation (n = 9) and treatment (n = 23). Understandability scores for ChatGPT versus Google Bard were as follows: condition 90.86% vs. 76.32% (p < 0.001); investigation 89.94% vs. 71.67% (p < 0.001); treatment 90.78% vs. 73.74% (p < 0.001). Actionability scores for ChatGPT versus Google Bard were as follows: condition 77.14% vs. 51.43% (p < 0.001); investigation 72.22% vs. 54.44% (p = 0.05); treatment 73.04% vs. 54.78% (p = 0.002). The mean Flesch-Kincaid Grade Level was 9.0 for ChatGPT and 5.9 for Google Bard. No incorrect or dangerous information was identified in any of the generated responses from either chatbot.

Conclusion: Evaluation of ChatGPT and Google Bard patient education material for OSA indicates that the former offers superior information across several domains.
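The abstract reports mean Flesch-Kincaid Grade Levels of 9.0 (ChatGPT) and 5.9 (Google Bard). The study used an online Flesch-Kincaid Calculator rather than custom code; as a rough illustration of the underlying formula only, here is a minimal Python sketch. The vowel-group syllable counter is a crude heuristic assumed for this sketch, not a method taken from the paper:

```python
import re

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid Grade Level:
    0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)

    def count_syllables(word: str) -> int:
        # Crude heuristic: count vowel groups, drop one for a trailing silent 'e'.
        groups = re.findall(r"[aeiouy]+", word.lower())
        n = len(groups)
        if word.lower().endswith("e") and n > 1:
            n -= 1
        return max(n, 1)

    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)
```

The score approximates the US school grade needed to understand the text, so very simple sentences can score below 1 or even negative; the ~9th-grade result for ChatGPT versus ~6th grade for Google Bard is what the abstract's readability comparison rests on.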
Pages: 985-993
Page count: 9
Related articles
50 records in total
  • [1] Cheong RCT, Unadkat S, Mcneillis V, Williamson A, Joseph J, Randhawa P, Andrews P, Paleri V. Artificial intelligence chatbots as sources of patient education material for obstructive sleep apnoea: ChatGPT versus Google Bard. European Archives of Oto-Rhino-Laryngology, 2024, 281: 985-993.
  • [2] Azzopardi M, Ng B, Logeswaran A, Loizou C, Cheong RCT, Gireesh P, Ting DSJ, Chong YJ. Artificial intelligence chatbots as sources of patient education material for cataract surgery: ChatGPT-4 versus Google Bard. BMJ Open Ophthalmology, 2024, 9(1).
  • [3] Cheong RCT, Pang KP, Unadkat S, Mcneillis V, Williamson A, Joseph J, Randhawa P, Andrews P, Paleri V. Performance of artificial intelligence chatbots in sleep medicine certification board exams: ChatGPT versus Google Bard. European Archives of Oto-Rhino-Laryngology, 2024, 281(4): 2137-2143.
  • [4] Incerti Parenti S, Bartolucci ML, Biondi E, Maglioni A, Corazza G, Gracco A, Alessandri-Bonetti G. Online patient education in obstructive sleep apnea: ChatGPT versus Google Search. Healthcare, 2024, 12(17).
  • [5] Ilgaz HB, Celik Z. The significance of artificial intelligence platforms in anatomy education: an experience with ChatGPT and Google Bard. Cureus Journal of Medical Science, 2023, 15(9).
  • [6] Garg N, Campbell DJ, Yang A, Mccann A, Moroco AE, Estephan LE, Palmer WJ, Krein H, Heffelfinger R. Chatbots as patient education resources for aesthetic facial plastic surgery: evaluation of ChatGPT and Google Bard responses. Facial Plastic Surgery & Aesthetic Medicine, 2024, 26(6): 665-673.
  • [7] Al-Sharif EM, Penteado RC, El Jalbout ND, Topilow NJ, Shoji MK, Kikkawa DO, Liu CY, Korn BS. Evaluating the accuracy of ChatGPT and Google Bard in fielding oculoplastic patient queries: a comparative study on artificial versus human intelligence. Ophthalmic Plastic and Reconstructive Surgery, 2024, 40(3): 303-311.
  • [8] Patil NS, Huang R, Mihalache A, Kisilevsky E, Kwok J, Popovic MM, Nassrallah G, Chan C, Mallipatna A, Kertes PJ, Muni RH. The ability of artificial intelligence chatbots ChatGPT and Google Bard to accurately convey preoperative information for patients undergoing ophthalmic surgeries. Retina: The Journal of Retinal and Vitreous Diseases, 2024, 44(6): 950-953.
  • [9] Rammohan R, Joy MV, Magam SG, Natt D, Magam SR, Pannikodu L, Desai J, Akande O, Bunting S, Yost RM, Mustacchia P. Understanding the landscape: the emergence of artificial intelligence (AI), ChatGPT, and Google Bard in gastroenterology. Cureus Journal of Medical Science, 2024, 16(1).