Enhancing large language model capabilities for rumor detection with Knowledge-Powered Prompting

Cited by: 7
Authors
Yan, Yeqing [1 ]
Zheng, Peng [1 ]
Wang, Yongjun [1 ]
Affiliations
[1] Natl Univ Def Technol, Sch Comp Sci, Changsha 410003, Hunan, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Social networks; Rumor detection; Knowledge augmentation; Prompt tuning; Large language model;
DOI
10.1016/j.engappai.2024.108259
Chinese Library Classification
TP [Automation Technology and Computer Technology];
Discipline Code
0812;
Abstract
Amid the proliferation of misinformation on social networks, automated rumor detection has become a pivotal and pressing research area. However, current methods are hindered by constrained feature representations and limited adaptability when addressing diverse and unconventional rumors. Incorporating large language models promises deeper semantic comprehension and broader adaptability. Unfortunately, prevailing general-purpose prompting approaches often fail to provide adequate domain-specific context and guidance, restricting their utility for rumor detection. To address these concerns, we introduce the Knowledge-Powered Prompting strategy, which supplies task-relevant prompts and context to the model by combining domain expertise with large language models. This fusion better aligns the model with the demands of rumor detection, mitigating the challenges posed by sensitivity to semantic subtleties and a scarcity of training samples. Specifically, we devise exploration prompts and strengthen the prompt representation with a dynamic knowledge injection module, enabling deep reasoning about pivotal entities. We then extract valuable external knowledge by filtering the interactions between knowledge and claim, reducing the impact of noise. Concurrently, we perform joint optimization over multitask prompt population and categorical judgment objectives, fostering synergistic semantic modeling and discriminative assessment. Empirical evaluations show that our method substantially outperforms existing models.
Pages: 13
Related Papers
50 papers in total
  • [31] Enhancing Offensive Language Detection with Data Augmentation and Knowledge Distillation
    Deng, Jiawen
    Chen, Zhuang
    Sun, Hao
    Zhang, Zhexin
    Wu, Jincenzi
    Nakagawa, Satoshi
    Ren, Fuji
    Huang, Minlie
    RESEARCH, 2023, 6
  • [32] Automatic item generation in various STEM subjects using large language model prompting
    Park, Joonhyeong (joonhyeong.park@nie.edu.sg)
    2025, 8
  • [33] Emotion Recognition in Conversation with Multi-step Prompting Using Large Language Model
    Hama, Kenta
    Otsuka, Atsushi
    Ishii, Ryo
    SOCIAL COMPUTING AND SOCIAL MEDIA, PT I, SCSM 2024, 2024, 14703 : 338 - 346
  • [34] Chinese Metaphor Recognition Using a Multi-stage Prompting Large Language Model
    Wang, Jie
    Wang, Jin
    Zhang, Xuejie
    NATURAL LANGUAGE PROCESSING AND CHINESE COMPUTING, PT V, NLPCC 2024, 2025, 15363 : 234 - 246
  • [35] Demo: Accelerating Patient Screening for Clinical Trials using Large Language Model Prompting
    Gopeekrishnan, Anand
    Arif, Shibbir Ahmed
    Liu, Hao
    2024 IEEE/ACM CONFERENCE ON CONNECTED HEALTH: APPLICATIONS, SYSTEMS AND ENGINEERING TECHNOLOGIES, CHASE 2024, 2024, : 214 - 215
  • [36] Enhancing Chinese Argument Mining with Large Language Model
    Wang, Shiquan
    Fang, Ruiyu
    Li, Mengxiang
    He, Zhongjiang
    Li, Yongxiang
    Song, Shuangyong
    NATURAL LANGUAGE PROCESSING AND CHINESE COMPUTING, PT V, NLPCC 2024, 2025, 15363 : 453 - 462
  • [37] Knowledge-Infused Prompting: Assessing and Advancing Clinical Text Data Generation with Large Language Models
    Xu, Ran
    Cui, Hejie
    Yu, Yue
    Kan, Xuan
    Shi, Wenqi
    Zhuang, Yuchen
    Wang, May D.
    Jin, Wei
    Ho, Joyce C.
    Yang, Carl
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: ACL 2024, 2024, : 15496 - 15523
  • [38] Empowering Psychotherapy with Large Language Models: Cognitive Distortion Detection through Diagnosis of Thought Prompting
    Chen, Zhiyu
    Lu, Yujie
    Wang, William Yang
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS - EMNLP 2023, 2023, : 4295 - 4304
  • [39] Estimating Large Language Model Capabilities without Labeled Test Data
    Fu, Harvey Yiyun
    Ye, Qinyuan
    Xu, Albert
    Ren, Xiang
    Jia, Robin
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (EMNLP 2023), 2023, : 9530 - 9546
  • [40] KICGPT: Large Language Model with Knowledge in Context for Knowledge Graph Completion
    Wei, Yanbin
    Huang, Qiushi
    Zhang, Yu
    Kwok, James T.
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (EMNLP 2023), 2023, : 8667 - 8683