Are ChatGPT and other pretrained language models good parasitologists?

Cited by: 13
Authors
Slapeta, Jan [1 ,2 ]
Affiliations
[1] Univ Sydney, Fac Sci, Sydney Sch Vet Sci, Sydney, NSW 2006, Australia
[2] Univ Sydney, Inst Infect Dis, Sydney, NSW 2006, Australia
Keywords
ECHINOCOCCUS;
DOI
10.1016/j.pt.2023.02.006
Chinese Library Classification (CLC)
R38 [Medical Parasitology]; Q [Biological Sciences];
Discipline classification codes
07 ; 0710 ; 09 ; 100103 ;
Abstract
Large language models, such as ChatGPT, will have far-reaching impacts on parasitology, including on students. Authentic experiences gained during students' training are absent from these models. This is not a weakness of the models but rather an opportunity benefiting parasitology at large.
Pages: 314-316
Page count: 3
Related Papers
50 records in total
  • [21] Pretrained Language Models for Text Generation: A Survey
    Li, Junyi
    Tang, Tianyi
    Zhao, Wayne Xin
    Wen, Ji-Rong
    PROCEEDINGS OF THE THIRTIETH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2021, 2021, : 4492 - 4499
  • [22] Pretrained Language Models for Sequential Sentence Classification
    Cohan, Arman
    Beltagy, Iz
    King, Daniel
    Dalvi, Bhavana
    Weld, Daniel S.
    2019 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING AND THE 9TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING (EMNLP-IJCNLP 2019): PROCEEDINGS OF THE CONFERENCE, 2019, : 3693 - 3699
  • [23] Probing Pretrained Language Models with Hierarchy Properties
    Lovon-Melgarejo, Jesus
    Moreno, Jose G.
    Besancon, Romaric
    Ferret, Olivier
    Tamine, Lynda
    ADVANCES IN INFORMATION RETRIEVAL, ECIR 2024, PT II, 2024, 14609 : 126 - 142
  • [24] A comprehensive survey on pretrained foundation models: a history from BERT to ChatGPT
    Zhou, Ce
    Li, Qian
    Li, Chen
    Yu, Jun
    Liu, Yixin
    Wang, Guangjing
    Zhang, Kai
    Ji, Cheng
    Yan, Qiben
    He, Lifang
    Peng, Hao
    Li, Jianxin
    Wu, Jia
    Liu, Ziwei
    Xie, Pengtao
    Xiong, Caiming
    Pei, Jian
    Yu, Philip S.
    Sun, Lichao
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2024,
  • [25] Probing Pretrained Language Models for Lexical Semantics
    Vulic, Ivan
    Ponti, Edoardo M.
    Litschko, Robert
    Glavas, Goran
    Korhonen, Anna
    PROCEEDINGS OF THE 2020 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP), 2020, : 7222 - 7240
  • [26] ChatGPT and Other Natural Language Processing Artificial Intelligence Models in Adult Reconstruction
    Magruder, Matthew L.
    Delanois, Ronald E.
    Nace, James
    Mont, Michael A.
    JOURNAL OF ARTHROPLASTY, 2023, 38 (11) : 2191 - 2192
  • [27] Pretrained models and evaluation data for the Khmer language
    Jiang, Shengyi
    Fu, Sihui
    Lin, Nankai
    Fu, Yingwen
    TSINGHUA SCIENCE AND TECHNOLOGY, 2022, 27 (04) : 709 - 718
  • [28] ChatGPT and Other Large Language Models in Medical Education - Scoping Literature Review
    Aster, Alexandra
    Laupichler, Matthias Carl
    Rockwell-Kollmann, Tamina
    Masala, Gilda
    Bala, Ebru
    Raupach, Tobias
    MEDICAL SCIENCE EDUCATOR, 2024, : 555 - 567
  • [29] Is ChatGPT Good at Search? Investigating Large Language Models as Re-Ranking Agents
    Sun, Weiwei
    Yan, Lingyong
    Ma, Xinyu
    Wang, Shuaiqiang
    Ren, Pengjie
    Chen, Zhumin
    Yin, Dawei
    Ren, Zhaochun
    2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2023), 2023, : 14918 - 14937
  • [30] Pretrained Language Models as Visual Planners for Human Assistance
    Patel, Dhruvesh
    Eghbalzadeh, Hamid
    Kamra, Nitin
    Iuzzolino, Michael Louis
    Jain, Unnat
    Desai, Ruta
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 15256 - 15268