Large language models: a new frontier in paediatric cataract patient education

Cited by: 3
Authors
Dihan, Qais [1,2]
Chauhan, Muhammad Z. [2]
Eleiwa, Taher K. [3]
Brown, Andrew D. [4]
Hassan, Amr K. [5]
Khodeiry, Mohamed M. [6]
Elsheikh, Reem H. [2]
Oke, Isdin [7]
Nihalani, Bharti R. [7]
VanderVeen, Deborah K. [7]
Sallam, Ahmed B. [2]
Elhusseiny, Abdelrahman M. [2,7]
Affiliations
[1] Rosalind Franklin Univ Med & Sci, Chicago Med Sch, N Chicago, IL USA
[2] Univ Arkansas Med Sci, Dept Ophthalmol, Little Rock, AR 72205 USA
[3] Benha Univ, Dept Ophthalmol, Banha, Egypt
[4] Univ Arkansas Med Sci, Little Rock, AR USA
[5] South Valley Univ, Dept Ophthalmol, Qena, Egypt
[6] Univ Kentucky, Dept Ophthalmol, Lexington, KY USA
[7] Harvard Med Sch, Boston Childrens Hosp, Dept Ophthalmol, Boston, MA 02115 USA
Keywords
Medical Education; Public health; Epidemiology; Child health (paediatrics); CHILDHOOD; READABILITY; INFORMATION; QUALITY; HEALTH
DOI
10.1136/bjo-2024-325252
Chinese Library Classification (CLC)
R77 [Ophthalmology]
Subject classification code
100212
Abstract
Background/aims This was a cross-sectional comparative study. We evaluated the ability of three large language models (LLMs) (ChatGPT-3.5, ChatGPT-4 and Google Bard) to generate novel patient education materials (PEMs) and to improve the readability of existing PEMs on paediatric cataract.
Methods We compared the LLMs' responses to three prompts. Prompt A requested that they write a handout on paediatric cataract that was 'easily understandable by an average American'. Prompt B modified prompt A and requested that the handout be written at a 'sixth-grade reading level, using the Simple Measure of Gobbledygook (SMOG) readability formula'. Prompt C asked them to rewrite existing PEMs on paediatric cataract 'to a sixth-grade reading level using the SMOG readability formula'. Responses were compared on quality (DISCERN; 1 (low quality) to 5 (high quality)), understandability and actionability (Patient Education Materials Assessment Tool; >= 70%: understandable, >= 70%: actionable), accuracy (Likert misinformation scale; 1 (no misinformation) to 5 (high misinformation)) and readability (SMOG and Flesch-Kincaid Grade Level (FKGL); grade level <7: highly readable).
Results All LLM-generated responses were of high quality (median DISCERN >= 4), understandability (>= 70%) and accuracy (Likert=1). No LLM-generated responses were actionable (<70%). ChatGPT-3.5 and ChatGPT-4 prompt B responses were more readable than their prompt A responses (p<0.001). ChatGPT-4 generated more readable responses (lower SMOG and FKGL scores; 5.59 +/- 0.5 and 4.31 +/- 0.7, respectively) than the other two LLMs (p<0.001) and consistently rewrote existing PEMs at or below the specified sixth-grade reading level (SMOG: 5.14 +/- 0.3).
Conclusion LLMs, particularly ChatGPT-4, proved valuable for generating high-quality, readable, accurate PEMs and for improving the readability of existing materials on paediatric cataract.
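As context for the Methods, the two readability metrics used in the study (SMOG and FKGL) reduce to simple arithmetic over text-derived counts. The sketch below uses their standard published coefficients; the sentence, word, syllable and polysyllable counts are supplied as inputs, and the function names and example values are illustrative only, not taken from the study.

import math

def smog_grade(polysyllable_count: int, sentence_count: int) -> float:
    # SMOG grade (McLaughlin 1969): grade level estimated from the number of
    # polysyllabic (3+ syllable) words in a sample of sentences.
    return 3.1291 + 1.0430 * math.sqrt(polysyllable_count * (30 / sentence_count))

def fkgl(total_words: int, total_sentences: int, total_syllables: int) -> float:
    # Flesch-Kincaid Grade Level from word, sentence and syllable counts.
    return (0.39 * (total_words / total_sentences)
            + 11.8 * (total_syllables / total_words)
            - 15.59)

# Illustrative counts only (not values from the study):
print(round(smog_grade(polysyllable_count=12, sentence_count=30), 2))            # 6.74
print(round(fkgl(total_words=450, total_sentences=30, total_syllables=630), 2))  # 6.78

Under the study's threshold, any output with a grade level below 7 counts as highly readable; both example values above fall in that range.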
Pages: 7