Exploring the applicability of large language models to citation context analysis

Cited by: 0
Authors
Nishikawa, Kai [1 ,2 ]
Koshiba, Hitoshi [2 ]
Affiliations
[1] Univ Tsukuba, Inst Lib Informat & Media Sci, 1-2 Kasuga, Tsukuba, Ibaraki 3058550, Japan
[2] Minist Culture Sci & Sports MEXT, Natl Inst Sci & Technol Policy NISTEP, 3-2-2 Kasumigaseki,Chiyoda Ku, Tokyo 1000013, Japan
Keywords
Scientometrics; Citation context analysis; Annotation; Large language models (LLM); Generative pre-trained transformer (GPT); COUNTS MEASURE;
DOI
10.1007/s11192-024-05142-9
CLC classification number
TP39 [Computer Applications];
Discipline codes
081203; 0835;
Abstract
Unlike traditional citation analysis, which assumes that all citations in a paper are equivalent, citation context analysis considers the contextual information of individual citations. However, citation context analysis requires creating large amounts of annotated data, which hinders its widespread use. This study explored the applicability of Large Language Models (LLMs), particularly the Generative Pre-trained Transformer (GPT), to citation context analysis by comparing LLM and human annotation results. The results showed that LLM annotation is as good as or better than human annotation in terms of consistency, but poorer in predictive performance. Thus, it would be inappropriate for LLMs to immediately replace human annotators in citation context analysis. However, LLM annotation results can serve as reference information when narrowing the annotations of multiple human annotators down to one; alternatively, an LLM can serve as an annotator when sufficient human annotators are difficult to recruit. This study provides basic findings important for the future development of citation context analysis.
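The consistency comparison described in the abstract is typically quantified with a chance-corrected inter-annotator agreement statistic such as Cohen's kappa, computed between the LLM's labels and a human annotator's labels over the same set of citation contexts. The sketch below is illustrative only: the three-way citation-function label scheme (`background`, `use`, `compare`) and the example label sequences are hypothetical, not taken from the paper.

```python
from collections import Counter

def cohen_kappa(labels_a, labels_b):
    """Cohen's kappa: chance-corrected agreement between two annotators."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of items labeled identically.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement by chance, from each annotator's label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical citation-function labels for ten citation contexts.
human = ["background", "use", "background", "compare", "use",
         "background", "compare", "use", "background", "background"]
llm   = ["background", "use", "background", "use", "use",
         "background", "compare", "use", "compare", "background"]

print(cohen_kappa(human, llm))  # prints 0.6875
```

Values near 1 indicate strong agreement and values near 0 indicate chance-level agreement; annotation studies of this kind often report kappa both between human annotators and between each human and the model.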
Pages: 6751-6777
Page count: 27
Related papers (50 total)
  • [31] Exploring Automated Assertion Generation via Large Language Models
    Zhang, Quanjun
    Sun, Weifeng
    Fang, Chunrong
    Yu, Bowen
    Li, Hongyan
    Yan, Meng
    Zhou, Jianyi
    Chen, Zhenyu
    ACM TRANSACTIONS ON SOFTWARE ENGINEERING AND METHODOLOGY, 2025, 34 (03)
  • [32] Exploring Large Language Models as Formative Feedback Tools in Physics
    El-Adawy, Shams
    MacDonagh, Aidan
    Abdelhafez, Mohamed
    2024 PHYSICS EDUCATION RESEARCH CONFERENCE, PERC, 2024, : 126 - 131
  • [33] Exploring Reversal Mathematical Reasoning Ability for Large Language Models
    Guo, Pei
    You, Wangjie
    Li, Juntao
    Yan, Bowen
    Zhang, Min
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: ACL 2024, 2024, : 13671 - 13685
  • [34] Exploring Spatial Schema Intuitions in Large Language and Vision Models
    Wicke, Philipp
    Wachowiak, Lennart
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: ACL 2024, 2024, : 6102 - 6117
  • [35] Exploring Large Language Models to generate Easy to Read content
    Martinez, Paloma
    Ramos, Alberto
    Moreno, Lourdes
    FRONTIERS IN COMPUTER SCIENCE, 2024, 6
  • [36] Exploring Large Language Models and Hierarchical Frameworks for Classification of Large Unstructured Legal Documents
    Prasad, Nishchal
    Boughanem, Mohand
    Dkaki, Taoufiq
    ADVANCES IN INFORMATION RETRIEVAL, ECIR 2024, PT II, 2024, 14609 : 221 - 237
  • [37] Exploring Synergies between Causal Models and Large Language Models for Enhanced Understanding and Inference
    Sun, Yaru
    Yang, Ying
    Fu, Wenhao
    2024 2ND ASIA CONFERENCE ON COMPUTER VISION, IMAGE PROCESSING AND PATTERN RECOGNITION, CVIPPR 2024, 2024,
  • [38] Trend Analysis Through Large Language Models
    Alzapiedi, Lucas
    Bihl, Trevor
    IEEE NATIONAL AEROSPACE AND ELECTRONICS CONFERENCE, NAECON 2024, 2024, : 370 - 374
  • [39] Automated Topic Analysis with Large Language Models
    Kirilenko, Andrei
    Stepchenkova, Svetlana
    INFORMATION AND COMMUNICATION TECHNOLOGIES IN TOURISM 2024, ENTER 2024, 2024, : 29 - 34
  • [40] Multimodal large language models for bioimage analysis
    Zhang, Shanghang
    Dai, Gaole
    Huang, Tiejun
    Chen, Jianxu
    NATURE METHODS, 2024, 21 (08) : 1390 - 1393