Thread Popularity Prediction and Tracking with a Permutation-invariant Model

Cited by: 0
Authors
Chan, Hou Pong [1 ,2 ]
King, Irwin [1 ,2 ]
Affiliations
[1] Chinese Univ Hong Kong, Dept Comp Sci & Engn, Shatin, NT, Hong Kong, Peoples R China
[2] Chinese Univ Hong Kong, Shenzhen Key Lab Rich Media Big Data Analyt S & A, Shenzhen Res Inst, Shenzhen, Peoples R China
Source
2018 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2018) | 2018
Keywords
DOI
None available
CLC Number
TP18 [Artificial intelligence theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The task of thread popularity prediction and tracking aims to recommend a few popular comments to subscribed users when a batch of new comments arrives in a discussion thread. This task has been formulated as a reinforcement learning problem, in which the reward of the agent is the sum of positive responses received by the recommended comments. In this work, we propose a novel approach to tackle this problem. First, we propose a deep neural network architecture to model the expected cumulative reward (Q-value) of a recommendation (action). Unlike the state-of-the-art approach, which treats an action as a sequence, our model uses an attention mechanism to integrate information from a set of comments. Thus, the prediction of the Q-value is invariant to the permutation of the comments, which leads to more consistent agent behavior. Second, we employ a greedy procedure to approximate the action that maximizes the predicted Q-value from a combinatorial action space. Different from the state-of-the-art approach, this procedure does not require an additional pre-trained model to generate candidate actions. Experiments on five real-world datasets show that our approach outperforms the state-of-the-art.
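The two ideas in the abstract can be illustrated with a minimal sketch. It is not the authors' model: `attention_pool`, `q_value`, and `greedy_select` are hypothetical stand-ins that only show (a) why attention pooling over a set of comment vectors is permutation-invariant, and (b) how a greedy procedure can build a recommended set from a combinatorial space by repeatedly adding the comment that most increases the predicted Q-value.

```python
import numpy as np

rng = np.random.default_rng(0)

def attention_pool(comment_vecs, query):
    # Each comment's attention score depends only on its content, not its
    # position in the batch, so the weighted sum is unchanged when the
    # comments are permuted.
    scores = comment_vecs @ query
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ comment_vecs

def q_value(state_vec, action_vecs, query):
    # Hypothetical Q-value scorer: dot product of the thread state with
    # the attention-pooled representation of the chosen comment set.
    return float(state_vec @ attention_pool(action_vecs, query))

def greedy_select(state_vec, candidate_vecs, query, k):
    # Greedily grow the recommended set: at each step, add the candidate
    # comment that yields the largest predicted Q-value.
    chosen, remaining = [], list(range(len(candidate_vecs)))
    for _ in range(k):
        best = max(remaining,
                   key=lambda i: q_value(state_vec,
                                         candidate_vecs[chosen + [i]],
                                         query))
        chosen.append(best)
        remaining.remove(best)
    return chosen

d = 8
comments = rng.normal(size=(5, d))   # embeddings of 5 new comments
query = rng.normal(size=d)           # learned attention query (assumed)
state = rng.normal(size=d)           # thread-state embedding (assumed)

# Permutation invariance: pooling a shuffled set gives the same vector.
perm = rng.permutation(5)
assert np.allclose(attention_pool(comments, query),
                   attention_pool(comments[perm], query))

print(greedy_select(state, comments, query, k=2))
```

In contrast, a sequence model (e.g. an RNN over the comments) would generally produce a different Q-value for each ordering of the same set, which is the inconsistency the set-based formulation avoids.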
Pages: 3392-3401
Page count: 10