Exploring the applicability of large language models to citation context analysis

Cited by: 0
Authors
Nishikawa, Kai [1 ,2 ]
Koshiba, Hitoshi [2 ]
Affiliations
[1] Univ Tsukuba, Inst Lib Informat & Media Sci, 1-2 Kasuga, Tsukuba, Ibaraki 3058550, Japan
[2] Minist Culture Sci & Sports MEXT, Natl Inst Sci & Technol Policy NISTEP, 3-2-2 Kasumigaseki,Chiyoda Ku, Tokyo 1000013, Japan
Keywords
Scientometrics; Citation context analysis; Annotation; Large language models (LLM); Generative pre-trained transformer (GPT); COUNTS MEASURE
DOI
10.1007/s11192-024-05142-9
CLC number
TP39 [Applications of Computers]
Discipline codes
081203; 0835
Abstract
Unlike traditional citation analysis, which assumes that all citations in a paper are equivalent, citation context analysis considers the contextual information of individual citations. However, citation context analysis requires creating a large amount of data through annotation, which hinders its widespread use. This study explored the applicability of large language models (LLMs), particularly the Generative Pre-trained Transformer (GPT), to citation context analysis by comparing LLM and human annotation results. The results showed that LLM annotation is as good as or better than human annotation in terms of consistency but poor in terms of predictive performance. Thus, having LLMs immediately replace human annotators in citation context analysis is inappropriate. However, the annotation results obtained by an LLM can be used as reference information when narrowing the annotation results obtained by multiple human annotators down to one; alternatively, the LLM can serve as an annotator when it is difficult to recruit enough human annotators. This study provides basic findings important for the future development of citation context analysis.
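A minimal sketch of the kind of pipeline the abstract describes, assuming an OpenAI-style chat API and a simple three-way citation-function scheme. The model name, prompt wording, label set, and the `resolve` tie-breaking helper are illustrative assumptions, not the authors' protocol:

```python
# Illustrative sketch only: LLM-assisted citation-context annotation,
# agreement scoring against a human annotator, and LLM-as-tie-breaker.
from collections import Counter

from openai import OpenAI                      # pip install openai
from sklearn.metrics import cohen_kappa_score  # pip install scikit-learn

LABELS = ["background", "method", "comparison"]  # assumed label scheme
client = OpenAI()  # reads OPENAI_API_KEY from the environment


def annotate(context: str) -> str:
    """Ask the model for one citation-function label for a citation context."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice; the paper evaluates GPT
        temperature=0,        # deterministic output aids consistency checks
        messages=[{
            "role": "user",
            "content": (
                f"Classify the role of the cited work as one of {LABELS}. "
                f"Answer with the label only.\n\nCitation context: {context}"
            ),
        }],
    )
    label = response.choices[0].message.content.strip().lower()
    return label if label in LABELS else "unknown"


def resolve(human_votes: list[str], llm_vote: str) -> str:
    """Narrow disagreeing human labels down to one, using the LLM as a vote."""
    return Counter(human_votes + [llm_vote]).most_common(1)[0][0]


if __name__ == "__main__":
    contexts = [
        "Following [12], we adopt the same sampling procedure.",
        "Our results contradict the findings reported in [7].",
    ]
    human_labels = ["method", "comparison"]  # one human annotator's labels
    llm_labels = [annotate(c) for c in contexts]
    # Chance-corrected agreement between LLM and human annotation:
    print("Cohen's kappa:", cohen_kappa_score(human_labels, llm_labels))
```

In this reading, "consistency" corresponds to the stability of repeated LLM labels on the same contexts, and "predictive performance" to agreement with a human gold standard such as the kappa computed above.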
Pages: 6751-6777
Number of pages: 27
Related papers (50 in total)
  • [21] Li, Yucheng; Dong, Bo; Guerin, Frank; Lin, Chenghua. Compressing Context to Enhance Inference Efficiency of Large Language Models. 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP 2023), 2023: 6342-6353.
  • [22] Tong, Weida; Renaudin, Michael. Context is everything in regulatory application of large language models (LLMs). Drug Discovery Today, 2024, 29(4).
  • [23] Zhou, Junyao; Du, Ruiqing; Tan, Yushan; Yang, Jintao; Yang, Zonghao; Luo, Wei; Luo, Zhunchen; Zhou, Xian; Hu, Wenpeng. Context Compression and Extraction: Efficiency Inference of Large Language Models. Advanced Intelligent Computing Technology and Applications, Pt I (ICIC 2024), 2024, 14875: 221-232.
  • [24] D'Hollander, Erik H.; Danneels, Ewout; Decorte, Karel-Brecht; Loobuyck, Senne; Vanheule, Arne; Van Kets, Ian; Stroobandt, Dirk. Exploring Large Language Models for Verilog hardware design generation. 2024 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW 2024), 2024: 111-115.
  • [25] Zarfati, Mor; Nadkarni, Girish N.; Glicksberg, Benjamin S.; Harats, Moti; Greenberger, Shoshana; Klang, Eyal; Soffer, Shelly. Exploring the Role of Large Language Models in Melanoma: A Systematic Review. Journal of Clinical Medicine, 2024, 13(23).
  • [26] Zhang, Yichi; Dong, Yinpeng; Zhang, Siyuan; Min, Tianzan; Su, Hang; Zhu, Jun. Exploring the Transferability of Visual Prompting for Multimodal Large Language Models. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2024), 2024: 26552-26562.
  • [27] Chandra, Anirudh; Chakraborty, Abinash. Exploring the role of large language models in radiation emergency response. Journal of Radiological Protection, 2024, 44(1).
  • [28] Wang, Liang; Yang, Nan; Wei, Furu. Learning to Retrieve In-Context Examples for Large Language Models. Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2024), Vol. 1: Long Papers, 2024: 1752-1767.
  • [29] Sun, Zhu; Feng, Kaidong; Yang, Jie; Qu, Xinghua; Fang, Hui; Ong, Yew-Soon; Liu, Wenyuan. Adaptive In-Context Learning with Large Language Models for Bundle Generation. Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2024), 2024: 966-976.
  • [30] Munir, Farzeen; Mihaylova, Tsvetomila; Azam, Shoaib; Kucner, Tomasz Piotr; Kyrki, Ville. Exploring Large Language Models for Trajectory Prediction: A Technical Perspective. Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction (HRI 2024 Companion), 2024: 774-778.