Evaluation of Large Language Model Performance and Reliability for Citations and References in Scholarly Writing: Cross-Disciplinary Study

Cited by: 8
Authors
Mugaanyi, Joseph [1 ]
Cai, Liuying [2 ]
Cheng, Sumei [2 ]
Lu, Caide [1 ]
Huang, Jing [1 ]
Affiliations
[1] Ningbo Univ, Lihuili Hosp, Hlth Sci Ctr, Ningbo Med Ctr, Dept Hepatopancreatobiliary Surg, 1111 Jiangnan Rd, Ningbo 315000, Peoples R China
[2] Shanghai Acad Social Sci, Inst Philosophy, Shanghai, Peoples R China
Keywords
large language models; accuracy; academic writing; AI; cross-disciplinary evaluation; scholarly writing; ChatGPT; GPT-3.5; writing tool; scholarly; academic discourse; LLMs; machine learning algorithms; NLP; natural language processing; citations; references; natural science; humanities; chatbot; artificial intelligence
DOI
10.2196/52935
Chinese Library Classification
R19 [Health Organization and Services (Health Service Management)]
Abstract
Background: Large language models (LLMs) have gained prominence since the release of ChatGPT in late 2022. Objective: The aim of this study was to assess the accuracy of citations and references generated by ChatGPT (GPT-3.5) in two distinct academic domains: the natural sciences and the humanities. Methods: Two researchers independently prompted ChatGPT to write an introduction section for a manuscript and include citations; they then evaluated the accuracy of the citations and Digital Object Identifiers (DOIs). Results were compared between the two disciplines. Results: Ten topics were included: 5 in the natural sciences and 5 in the humanities. A total of 102 citations were generated, with 55 in the natural sciences and 47 in the humanities. Among these, 40 citations (72.7%) in the natural sciences and 36 citations (76.6%) in the humanities were confirmed to exist (P=.42). Significant disparities were found in DOI presence between the natural sciences (39/55, 70.9%) and the humanities (18/47, 38.3%), along with significant differences in DOI accuracy between the two disciplines (18/55, 32.7% vs 4/47, 8.5%). DOI hallucination was more prevalent in the humanities (42/47, 89.4%). The Levenshtein distance was significantly higher in the humanities than in the natural sciences, reflecting the lower DOI accuracy. Conclusions: ChatGPT's performance in generating citations and references varies across disciplines. Differences in DOI standards and disciplinary nuances contribute to the performance variations. Researchers should consider the strengths and limitations of artificial intelligence writing tools with respect to citation accuracy. The use of domain-specific models may enhance accuracy.
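The abstract's metrics hinge on two checks: whether a generated DOI exists at all and how far an inexact DOI deviates from the true one, measured by Levenshtein distance. The study does not publish its analysis code, so the sketch below is only a plausible reconstruction: the function names are hypothetical, and the doi.org handle API is assumed here as one way to test DOI existence, not necessarily the method the authors used.

```python
# Hypothetical sketch (not the authors' code): verify a generated DOI and
# measure how far it deviates from the verified DOI.
import urllib.error
import urllib.request


def levenshtein(a: str, b: str) -> int:
    """Edit distance (insertions, deletions, substitutions) via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (ca != cb),  # substitution
            ))
        prev = curr
    return prev[-1]


def doi_exists(doi: str, timeout: float = 10.0) -> bool:
    """Assumed existence check: the public doi.org handle API returns HTTP 200
    for registered DOIs and 404 for unregistered ones."""
    url = f"https://doi.org/api/handles/{doi}"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False


# A generated DOI that differs from the true DOI by one character has distance 1.
print(levenshtein("10.2196/52935", "10.2196/52936"))  # -> 1
```

Under this reading, a higher mean Levenshtein distance in the humanities means that even when ChatGPT produced a DOI-like string, it matched the real identifier less closely than in the natural sciences.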
Pages: 7