Meta-prompt based learning for low-resource false information detection

Cited by: 5
Authors
Huang Y. [1 ,2 ]
Gao M. [1 ,2 ]
Wang J. [1 ,2 ]
Yin J. [1 ,2 ]
Shu K. [3 ]
Fan Q. [1 ,2 ]
Wen J. [1 ,2 ]
Affiliations
[1] Key Laboratory of Dependable Service Computing in Cyber Physical Society (Chongqing University), Ministry of Education, Chongqing
[2] School of Big Data and Software Engineering, Chongqing University, Chongqing
[3] Department of Computer Science, Illinois Institute of Technology, Chicago, IL
Source
Information Processing and Management | 2023, Vol. 60, No. 03
Funding
National Natural Science Foundation of China
Keywords
False information detection; Meta learning; Prompt learning;
DOI
10.1016/j.ipm.2023.103279
Abstract
The wide spread of false information has detrimental effects on society, and false information detection has received wide attention. When new domains appear, the relevant labeled data is scarce, which poses severe challenges for detection. Previous work mainly leverages additional data or domain adaptation techniques to assist detection. The former imposes a heavy data burden; the latter underutilizes the pre-trained language model because of the gap between the downstream task and the pre-training task, and it is also inefficient for model storage because a separate set of parameters must be kept for each domain. To this end, we propose a meta-prompt based learning (MAP) framework for low-resource false information detection. We unlock the potential of pre-trained language models by transforming the detection task into a pre-training task through template construction. To prevent a randomly initialized template from limiting this potential, we learn optimal initialization parameters by exploiting the fast parameter adaptation of meta learning. Combining meta learning and prompt learning for detection is non-trivial: constructing meta tasks that yield initialization parameters suitable for different domains, and setting up the prompt model's verbalizer for classification in a noisy low-resource scenario, are both challenging. For the former, we propose a multi-domain meta task construction method to learn domain-invariant meta knowledge. For the latter, we propose a prototype verbalizer to summarize category information and design a noise-resistant prototyping strategy to reduce the influence of noisy data. Extensive experiments on real-world data demonstrate the superiority of MAP in new domains of false information detection. © 2023 Elsevier Ltd
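The abstract mentions a prototype verbalizer that summarizes category information from a few labeled examples instead of mapping the masked-token prediction to fixed label words. The following is a minimal, hypothetical sketch (not the authors' released code) of how such a prototype-style verbalizer could operate: class prototypes are averaged from the [MASK]-position embeddings of a handful of support examples, and queries are classified by cosine similarity to those prototypes. The function names and the toy random embeddings standing in for PLM representations are illustrative assumptions.

```python
# Hypothetical sketch of a prototype-style verbalizer (illustrative only).
import torch
import torch.nn.functional as F

def build_prototypes(support_emb: torch.Tensor, support_labels: torch.Tensor,
                     num_classes: int) -> torch.Tensor:
    """Average the [MASK]-position embeddings of support examples per class."""
    protos = [support_emb[support_labels == c].mean(dim=0) for c in range(num_classes)]
    return torch.stack(protos)  # shape: (num_classes, hidden)

def classify(query_emb: torch.Tensor, prototypes: torch.Tensor) -> torch.Tensor:
    """Assign each query to the class whose prototype is most similar (cosine)."""
    sims = F.cosine_similarity(query_emb.unsqueeze(1), prototypes.unsqueeze(0), dim=-1)
    return sims.argmax(dim=-1)

# Toy usage: random vectors stand in for [MASK] embeddings from a pre-trained LM.
hidden = 768
support_emb = torch.randn(8, hidden)                   # 8 labeled support examples
support_labels = torch.tensor([0, 1, 0, 1, 0, 1, 0, 1])  # 0 = true, 1 = false info
query_emb = torch.randn(4, hidden)                     # 4 unlabeled queries
prototypes = build_prototypes(support_emb, support_labels, num_classes=2)
print(classify(query_emb, prototypes))                 # predicted class ids
```

A noise-resistant variant, as hinted at in the abstract, could down-weight support examples whose embeddings lie far from their class mean before averaging; the details of that strategy are specific to the paper and not reproduced here.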
Related Papers
50 items in total
  • [41] Tone Learning in Low-Resource Bilingual TTS
    Liu, Ruolan
    Wen, Xue
    Lu, Chunhui
    Chen, Xiao
    INTERSPEECH 2020, 2020, : 2952 - 2956
  • [42] AutoQGS: Auto-Prompt for Low-Resource Knowledge-based Question Generation from SPARQL
    Xiong, Guanming
    Bao, Junwei
    Zhao, Wen
    Wu, Youzheng
    He, Xiaodong
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2022, 2022, : 2250 - 2259
  • [43] Parameter-Efficient Low-Resource Dialogue State Tracking by Prompt Tuning
    Ma, Mingyu Derek
    Kao, Jiun-Yu
    Gao, Shuyang
    Gupta, Arpit
    Jin, Di
    Chung, Tagyoung
    Peng, Nanyun
    INTERSPEECH 2023, 2023, : 4653 - 4657
  • [44] Reinforcement Guided Multi-Task Learning Framework for Low-Resource Stereotype Detection
    Pujari, Rajkumar
    Oveson, Erik
    Kulkarni, Priyanka
    Nouri, Elnaz
    PROCEEDINGS OF THE 60TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), VOL 1: (LONG PAPERS), 2022, : 6703 - 6712
  • [45] Multitask Feature Learning for Low-Resource Query-by-Example Spoken Term Detection
    Chen, Hongjie
    Leung, Cheung-Chi
    Xie, Lei
    Ma, Bin
    Li, Haizhou
    IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, 2017, 11 (08) : 1329 - 1339
  • [46] Improving stance detection accuracy in low-resource languages: a deep learning framework with ParsBERT
    Rahimi, Mohammad
    Kiani, Vahid
    INTERNATIONAL JOURNAL OF DATA SCIENCE AND ANALYTICS, 2024, : 517 - 535
  • [47] Finer-grained Modeling units-based Meta-Learning for Low-resource Tibetan Speech Recognition
    Qin, Siqing
    Wang, Longbiao
    Li, Sheng
    Lin, Yuqin
    Dang, Jianwu
    INTERSPEECH 2022, 2022, : 2133 - 2137
  • [48] Battling with the low-resource condition for snore sound recognition: introducing a meta-learning strategy
    Li, Jingtan
    Sun, Mengkai
    Zhao, Zhonghao
    Li, Xingcan
    Li, Gaigai
    Wu, Chen
    Qian, Kun
    Hu, Bin
    Yamamoto, Yoshiharu
    Schuller, Bjoern W.
    EURASIP JOURNAL ON AUDIO SPEECH AND MUSIC PROCESSING, 2023, 2023 (01)
  • [49] Neural Semantic Parsing in Low-Resource Settings with Back-Translation and Meta-Learning
    Sun, Yibo
    Tang, Duyu
    Duan, Nan
    Gong, Yeyun
    Feng, Xiaocheng
    Qin, Bing
    Jiang, Daxin
    THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 8960 - 8967
  • [50] Crosslingual Transfer Learning for Low-Resource Languages Based on Multilingual Colexification Graphs
    Liu, Yihong
    Ye, Haotian
    Weissweiler, Leonie
    Pei, Renhao
    Schuetze, Hinrich
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (EMNLP 2023), 2023, : 8376 - 8401