Artificial intelligence in dental education: ChatGPT's performance on the periodontic in-service examination

Cited by: 10
Authors
Danesh, Arman [1 ]
Pazouki, Hirad [2 ]
Danesh, Farzad [3 ]
Danesh, Arsalan [4 ,5 ]
Vardar-Sengul, Saynur [4 ]
Affiliations
[1] Western Univ, Schulich Sch Med & Dent, London, ON, Canada
[2] Western Univ, Fac Sci, London, ON, Canada
[3] Elgin Mills Endodont Specialists, Richmond Hill, ON, Canada
[4] Nova Southeastern Univ, Coll Dent Med, Dept Periodontol, Davie, FL USA
[5] Nova Southeastern Univ, Coll Dent Med, Dept Periodontol, 3050 S Univ Dr, Davie, FL 33314 USA
Keywords
artificial intelligence; continuing dental education; dentistry; periodontics; GPT-4;
DOI
10.1002/JPER.23-0514
Chinese Library Classification
R78 [Stomatology]
Discipline code
1003
Abstract
Background: ChatGPT's (Chat Generative Pre-trained Transformer) remarkable capacity to generate human-like output makes it an appealing learning tool for healthcare students worldwide. Nevertheless, the chatbot's responses may contain inaccuracies, posing a serious risk of misinformation. ChatGPT's capabilities should be examined across healthcare education, including dentistry and its specialties, to understand the potential for misinformation associated with the chatbot's use as a learning tool. Our investigation explores ChatGPT's foundation of knowledge in periodontology by evaluating the chatbot's performance on questions obtained from an in-service examination administered by the American Academy of Periodontology (AAP).

Methods: ChatGPT-3.5 and ChatGPT-4 were evaluated on 311 multiple-choice questions obtained from the 2023 in-service examination administered by the AAP. The dataset of in-service examination questions was accessed through Nova Southeastern University's Department of Periodontology. Questions containing an image were excluded because ChatGPT does not accept image inputs.

Results: ChatGPT-3.5 and ChatGPT-4 answered 57.9% and 73.6% of questions correctly on the 2023 Periodontics In-Service Written Examination, respectively. Independent sample means were compared with a two-tailed t test, and sample proportions were compared with a two-tailed chi-squared test; a p value below the threshold of 0.05 was deemed statistically significant.

Conclusion: While ChatGPT-4 showed higher proficiency than ChatGPT-3.5, both chatbot models leave considerable room for misinformation in their responses relating to periodontology. The findings of the study encourage residents to scrutinize the periodontic information generated by ChatGPT to account for the chatbot's current limitations.
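As a rough illustration of the proportion comparison described in the Results, the sketch below recomputes a Pearson chi-squared statistic for the two models' scores. The per-model correct counts (about 180 and 229 of 311) are assumptions reconstructed from the reported percentages, and the statistic is computed without a continuity correction, so this is a minimal sanity check rather than the authors' actual analysis.

```python
# Hypothetical re-check of the two-proportion comparison.
# Counts are reconstructed from the reported 57.9% and 73.6%
# of 311 questions, so they are assumptions, not study data.

def chi2_2x2(a, b, c, d):
    """Pearson's chi-squared statistic (no continuity correction)
    for the 2x2 contingency table [[a, b], [c, d]]."""
    table = [[a, b], [c, d]]
    n = a + b + c + d
    row_totals = [a + b, c + d]
    col_totals = [a + c, b + d]
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_totals[i] * col_totals[j] / n
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

correct_35 = round(0.579 * 311)  # ~180 correct for ChatGPT-3.5
correct_4 = round(0.736 * 311)   # ~229 correct for ChatGPT-4
stat = chi2_2x2(correct_35, 311 - correct_35,
                correct_4, 311 - correct_4)

# 3.841 is the chi-squared critical value at df=1, alpha=0.05,
# so a statistic above it is significant at the 0.05 level.
print(stat > 3.841)
```

Under these reconstructed counts the statistic comfortably exceeds the 0.05 critical value, consistent with the paper's report that the gap between the two models is statistically significant.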
Pages: 682-687
Page count: 6