Information Diffusion Prediction via Cascade-Retrieved In-context Learning

Cited by: 0
Authors
Zhong, Ting [1 ,2 ]
Zhang, Jienan [1 ]
Cheng, Zhangtao [1 ]
Zhou, Fan [1 ,3 ]
Chen, Xueqin [4 ]
Affiliations
[1] Univ Elect Sci & Technol China, Chengdu, Sichuan, Peoples R China
[2] Kash Inst Elect & Informat Ind, Kashgar, Xinjiang, Peoples R China
[3] Intelligent Terminal Key Lab Sichuan Prov, Chengdu, Peoples R China
[4] Delft Univ Technol, Fac Civil Engn & Geosci, Delft, Netherlands
Funding
National Natural Science Foundation of China;
Keywords
Information diffusion prediction; in-context learning; cascade retrieved;
DOI
10.1145/3626772.3657909
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Information diffusion prediction, which aims to infer the infection behavior of individual users during information spread, is critical for understanding the dynamics of information propagation and users' influence on online social media. To date, existing methods either focus on capturing limited contextual information from a single cascade, overlooking the potentially complex dependencies across different cascades, or they improve model performance with intricate techniques that extract additional features to supplement user representations, neglecting the drift of model performance across different platforms. To address these limitations, we propose a novel framework called CARE (CAscade-REtrieved In-Context Learning), inspired by the concept of in-context learning in LLMs. Specifically, CARE first constructs a prompt pool derived from historical cascades, then utilizes ranking-based search-engine techniques to retrieve prompts with patterns similar to the query. Moreover, CARE introduces two augmentation strategies alongside social relationship enhancement to enrich the input context. Finally, the transformed query-cascade representation from a GPT-type architecture is projected to obtain the prediction. Experiments on real-world datasets from various platforms show that CARE outperforms state-of-the-art baselines in terms of effectiveness and robustness in information diffusion prediction.
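The abstract's retrieval step (matching a query cascade against a pool of historical cascades with ranking-based search-engine techniques) can be illustrated with a minimal sketch. The paper does not specify the ranking function; here a simple cosine similarity over user-occurrence vectors stands in for it, and all names (`build_prompt_pool`, `retrieve`, the cascade IDs) are illustrative assumptions, not CARE's actual implementation.

```python
from collections import Counter
import math

def build_prompt_pool(historical_cascades):
    """Index historical cascades (ID -> list of infected user IDs) for retrieval."""
    return [(cid, Counter(users)) for cid, users in historical_cascades.items()]

def retrieve(query_users, pool, k=2):
    """Rank pooled cascades by cosine similarity of user-occurrence vectors
    and return the top-k most similar cascade IDs."""
    q = Counter(query_users)
    q_norm = math.sqrt(sum(v * v for v in q.values()))
    scored = []
    for cid, vec in pool:
        dot = sum(q[u] * vec[u] for u in q)
        norm = q_norm * math.sqrt(sum(v * v for v in vec.values()))
        scored.append((dot / norm if norm else 0.0, cid))
    scored.sort(reverse=True)
    return [cid for _, cid in scored[:k]]

# Toy pool: three historical cascades, each a sequence of user IDs.
historical = {
    "c1": ["u1", "u2", "u3"],
    "c2": ["u4", "u5"],
    "c3": ["u1", "u3", "u6"],
}
pool = build_prompt_pool(historical)
print(retrieve(["u1", "u3"], pool, k=2))  # c1 and c3 overlap with the query
```

In CARE, the retrieved cascades would then serve as in-context prompts for the GPT-type predictor; this sketch covers only the ranking-based lookup.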
Pages: 2472-2476
Page count: 5
Related papers
50 total
  • [1] In-Context Learning Unlocked for Diffusion Models
    Wang, Zhendong
    Jiang, Yifan
    Lu, Yadong
    Shen, Yelong
    He, Pengcheng
    Chen, Weizhu
    Wang, Zhangyang
    Zhou, Mingyuan
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [2] Dr.ICL: Demonstration-Retrieved In-context Learning
    Luo, Man
    Xu, Xin
    Dai, Zhuyun
    Pasupat, Panupong
    Kazemi, Mehran
    Baral, Chitta
    Imbrasaite, Vaiva
    Zhao, Vincent Y.
    Data Intelligence, 2024, 6 (04) : 909 - 922
  • [3] Guideline Learning for In-Context Information Extraction
    Pang, Chaoxu
    Cao, Yixuan
    Ding, Qiang
    Luo, Ping
    2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2023), 2023, : 15372 - 15389
  • [4] Information Cascade Popularity Prediction via Probabilistic Diffusion
    Cheng, Zhangtao
    Zhou, Fan
    Xu, Xovee
    Zhang, Kunpeng
    Trajcevski, Goce
    Zhong, Ting
    Yu, Philip S.
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2024, 36 (12) : 8541 - 8555
  • [5] Prompt Optimization via Adversarial In-Context Learning
    Do, Xuan Long
    Zhao, Yiran
    Brown, Hannah
    Xie, Yuxi
    Zhao, James Xu
    Chen, Nancy F.
    Kawaguchi, Kenji
    Shieh, Michael
    He, Junxian
    PROCEEDINGS OF THE 62ND ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, VOL 1: LONG PAPERS, 2024, : 7308 - 7327
  • [6] Cascade Large Language Model via In-Context Learning for Depression Detection on Chinese Social Media
    Zheng, Tong
    Guo, Yanrong
    Hong, Richang
    PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2024, PT 1, 2025, 15031 : 353 - 366
  • [7] Self-Adaptive In-Context Learning: An Information Compression Perspective for In-Context Example Selection and Ordering
    Wu, Zhiyong
    Wang, Yaoxiang
    Ye, Jiacheng
    Kong, Lingpeng
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 1, 2023, : 1423 - 1436
  • [8] Enhancing In-context Learning via Linear Probe Calibration
    Abbas, Momin
    Zhou, Yi
    Ram, Parikshit
    Baracaldo, Nathalie
    Samulowitz, Horst
    Salonidis, Theodoros
    Chen, Tianyi
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 238, 2024, 238
  • [9] Understanding In-Context Learning via Supportive Pretraining Data
    Han, Xiaochuang
    Simig, Daniel
    Mihaylov, Todor
    Tsvetkov, Yulia
    Celikyilmaz, Asli
    Wang, Tianlu
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2023): LONG PAPERS, VOL 1, 2023, : 12660 - 12673
  • [10] Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning
    Wang, Lean
    Li, Lei
    Dai, Damai
    Chen, Deli
    Zhou, Hao
    Meng, Fandong
    Zhou, Jie
    Sun, Xu
    2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2023), 2023, : 9840 - 9855