Assessing the Readability, Reliability, and Quality of AI-Modified and Generated Patient Education Materials for Endoscopic Skull Base Surgery

Cited by: 3
Authors
Warn, Michael [1 ]
Meller, Leo L. T. [2 ]
Chan, Daniella [3 ]
Torabi, Sina J. [3 ]
Bitner, Benjamin F. [3 ]
Tajudeen, Bobby A. [4 ]
Kuan, Edward C. [3 ]
Affiliations
[1] Univ Calif Riverside, Riverside Sch Med, Riverside, CA USA
[2] Univ Calif San Diego, San Diego Sch Med, San Diego, CA USA
[3] Univ Calif Irvine, Irvine Med Ctr, Dept Otolaryngol Head & Neck Surg, 101 City Dr South, Orange, CA 92868 USA
[4] Rush Univ, Dept Otolaryngol Head & Neck Surg, Chicago, IL USA
Keywords
ChatGPT; artificial intelligence; AI; endoscopic surgery; skull base; HEALTH LITERACY; INFORMATION; MISINFORMATION; ASSOCIATION; WEBSITES; OUTCOMES
DOI
10.1177/19458924241273055
CLC Number
R76 [Otorhinolaryngology]
Discipline Code
100213
Abstract
Background: Despite National Institutes of Health and American Medical Association recommendations to publish online patient education materials at or below a sixth-grade reading level, materials pertaining to endoscopic skull base surgery (ESBS) have lacked readability and quality. ChatGPT is an artificial intelligence (AI) system capable of synthesizing vast internet data to generate responses to user queries, but its utility in improving patient education materials has not been explored.
Objective: To examine the current state of readability and quality of online patient education materials, and to determine the utility of ChatGPT for improving existing articles and generating patient education materials de novo.
Methods: An article search was performed using 10 different search terms related to ESBS. The ten least readable existing patient-facing articles were modified with ChatGPT, and iterative queries were used to generate an article de novo. The Flesch Reading Ease (FRE) score and related metrics measured overall readability and content literacy level, while the DISCERN instrument assessed article reliability and quality.
Results: Sixty-six articles were located. ChatGPT improved the FRE readability of the 10 least readable online articles (19.7 ± 4.4 vs. 56.9 ± 5.9, p < 0.001), from university to 10th-grade reading level. The generated article was more readable than 48.5% of online articles (38.9 vs. 39.4 ± 12.4) and of higher quality than 94% (51.0 vs. 37.6 ± 6.1). Of the online articles, 56.7% were of "poor" quality.
Conclusions: ChatGPT improves the readability of articles, though most remain above the recommended literacy level for patient education materials. With iterative queries, ChatGPT can generate patient education materials that are more reliable and of higher quality than most existing online articles, and these can be tailored to match the readability of average online articles.
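Note: The FRE formula is not restated in this record; for reference, Flesch's standard published calculation (added here, not part of the original abstract) is:

    FRE = 206.835 − 1.015 × (total words / total sentences) − 84.6 × (total syllables / total words)

Higher scores indicate easier reading: 0-30 corresponds to university-graduate-level text and 50-60 to roughly a 10th- to 12th-grade level, which frames the reported improvement from 19.7 to 56.9.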
Pages: 396-402
Page count: 7