Can Artificial Intelligence Improve the Readability of Patient Education Materials on Aortic Stenosis? A Pilot Study

Cited by: 21
Authors
Rouhi, Armaun D. [1]
Ghanem, Yazid K. [2]
Yolchieva, Laman [3]
Saleh, Zena [2]
Joshi, Hansa [2]
Moccia, Matthew C. [2]
Suarez-Pierre, Alejandro [4]
Han, Jason J. [5]
Affiliations
[1] Univ Penn, Perelman Sch Med, Dept Surg, Philadelphia, PA USA
[2] Cooper Univ Hosp, Dept Surg, Camden, NJ USA
[3] Univ Penn, Coll Arts & Sci, Philadelphia, PA USA
[4] Univ Colorado, Sch Med, Dept Surg, Aurora, CO USA
[5] Hosp Univ Penn, Perelman Sch Med, Dept Surg, Div Cardiovasc Surg, Philadelphia, PA 19104 USA
Keywords
Aortic stenosis; Heart valve disease; Readability; Health literacy; Patient education material; ChatGPT; Artificial intelligence; Large language models; Chatbots; HEALTH LITERACY; QUALITY ANALYSIS; INFORMATION; SMOG
DOI
10.1007/s40119-023-00347-0
Chinese Library Classification (CLC)
R5 [Internal Medicine]
Discipline Code
1002; 100201
Abstract
Introduction: The advent of generative artificial intelligence (AI) dialogue platforms and large language models (LLMs) may help facilitate ongoing efforts to improve health literacy. Recent studies have also highlighted inadequate health literacy among patients with cardiac disease. The aim of the present study was to ascertain whether two freely available generative AI dialogue platforms could rewrite online aortic stenosis (AS) patient education materials (PEMs) to meet the reading skill levels recommended for the public.

Methods: Online PEMs were gathered from a professional cardiothoracic surgical society and academic institutions in the USA. PEMs were then entered into two AI-powered LLMs, ChatGPT-3.5 and Bard, with the prompt "translate to 5th-grade reading level". Readability of PEMs before and after AI conversion was measured using the validated Flesch Reading Ease (FRE), Flesch-Kincaid Grade Level (FKGL), Simple Measure of Gobbledygook Index (SMOGI), and Gunning-Fog Index (GFI) scores.

Results: Overall, 21 PEMs on AS were gathered. Original readability measures indicated difficult reading material at the 10th-12th grade level. ChatGPT-3.5 significantly improved readability across all four measures (p < 0.001), to approximately the 6th-7th grade reading level. Bard significantly improved readability across all measures (p < 0.001) except SMOGI (p = 0.729), to approximately the 8th-9th grade level. Neither platform generated PEMs written below the recommended 6th-grade reading level. ChatGPT-3.5 demonstrated significantly more favorable post-conversion readability scores, percentage change in readability scores, and conversion time compared to Bard (all p < 0.001).

Conclusion: AI dialogue platforms can enhance the readability of PEMs for patients with AS but may not fully meet recommended reading skill levels, highlighting potential tools for strengthening cardiac health literacy in the future.
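For context, the conversion step described in Methods amounts to a single prompt per document. Below is a minimal sketch, assuming the openai Python package (v1+) and an API key in the environment; the model name and helper function here are illustrative, not the authors' published code, and Bard had no equivalent public API at the time of the study:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def simplify_pem(pem_text: str) -> str:
    # Mirrors the study's prompt: "translate to 5th-grade reading level".
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # API counterpart of ChatGPT-3.5 (assumption)
        messages=[
            {"role": "user",
             "content": f"Translate to 5th-grade reading level:\n\n{pem_text}"},
        ],
    )
    return response.choices[0].message.content
```

The four readability metrics named in the abstract are standard published formulas and can be computed directly from sentence, word, and syllable counts. The sketch below implements them under stated assumptions: PEM text is available as a plain string, and syllables are counted with a naive vowel-group heuristic, so scores will deviate slightly from validated tools (dedicated packages such as textstat use more careful syllable estimation):

```python
import re
from math import sqrt

def _sentences(text):
    # Count runs of sentence-ending punctuation; floor at 1 to avoid div-by-zero.
    return max(1, len(re.findall(r"[.!?]+", text)))

def _words(text):
    return re.findall(r"[A-Za-z]+", text)

def _syllables(word):
    # Naive heuristic: count vowel groups, subtract a trailing silent 'e'.
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(1, n)

def readability(text):
    """Return FRE, FKGL, SMOGI, and GFI scores for a block of text."""
    words = _words(text)
    n_sent, n_words = _sentences(text), max(1, len(words))
    n_syll = sum(_syllables(w) for w in words)
    poly = sum(1 for w in words if _syllables(w) >= 3)  # "complex" words
    wps = n_words / n_sent   # mean words per sentence
    spw = n_syll / n_words   # mean syllables per word
    return {
        "FRE":   206.835 - 1.015 * wps - 84.6 * spw,
        "FKGL":  0.39 * wps + 11.8 * spw - 15.59,
        "SMOGI": 1.0430 * sqrt(poly * 30 / n_sent) + 3.1291,
        "GFI":   0.4 * (wps + 100 * poly / n_words),
    }

if __name__ == "__main__":
    sample = ("Aortic stenosis is a narrowing of the aortic valve. "
              "The heart must work harder to pump blood through it.")
    for name, score in readability(sample).items():
        print(f"{name}: {score:.1f}")
```

In the study design, each of the 21 PEMs would yield paired pre- and post-conversion scores on these four scales. The abstract reports significance levels but not the specific test used; a paired t-test or Wilcoxon signed-rank test on those pairs would be conventional choices for comparisons of this kind.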
Pages: 137-147
Page count: 11