Sequential Multi-fusion Network for Multi-channel Video CTR Prediction

Cited: 0
Authors
Wang, Wen [1 ]
Zhang, Wei [1 ,2 ]
Feng, Wei [3 ]
Zha, Hongyuan [4 ]
Affiliations
[1] East China Normal Univ, Sch Comp Sci & Technol, Shanghai, Peoples R China
[2] Minist Educ, Key Lab Artificial Intelligence, Shanghai, Peoples R China
[3] Facebook, Menlo Pk, CA USA
[4] Georgia Inst Technol, Atlanta, GA USA
Keywords
Click-through rate prediction; Sequential recommendation; Recurrent neural networks
DOI
10.1007/978-3-030-59419-0_1
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this work, we study video click-through rate (CTR) prediction, which is crucial for the refinement of video recommendation and the revenue of video advertising. Existing studies have verified the importance of modeling users' clicked items as their latent preference for general click-through rate prediction. However, all clicked items are treated equally in the input stage, which is not the case on online video platforms. This is because each video is attributed to one of multiple channels (e.g., TV and MOVIES) and thus has a different impact on the prediction for candidate videos from a given channel. To this end, we propose a novel Sequential Multi-Fusion Network (SMFN) that classifies all channels into two categories: (1) the target channel, to which the current candidate videos belong, and (2) the context channel, which includes all remaining channels. For each category, SMFN leverages a recurrent neural network to model the corresponding clicked-video sequence. The hidden interactions between the two categories are characterized by correlating each video of one sequence with the overall representation of the other sequence through a simple but effective fusion unit. Experimental results on real-world datasets collected from a commercial online video platform demonstrate that the proposed model outperforms several strong alternative methods.
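The architecture described in the abstract (one recurrent encoder per channel category, plus a fusion unit correlating each clicked video with the other sequence's summary) can be sketched in pure Python. This is a minimal illustration, not the paper's implementation: the vanilla Elman RNN, the sigmoid-gating fusion, and all dimensions are assumptions made for clarity.

```python
import math
import random

random.seed(0)

def matvec(W, x):
    """Multiply matrix W (list of rows) by vector x."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def rand_mat(rows, cols):
    return [[random.uniform(-0.1, 0.1) for _ in range(cols)] for _ in range(rows)]

class VanillaRNN:
    """Minimal Elman RNN; stands in for SMFN's recurrent encoders."""
    def __init__(self, in_dim, hid_dim):
        self.Wxh = rand_mat(hid_dim, in_dim)
        self.Whh = rand_mat(hid_dim, hid_dim)
        self.hid_dim = hid_dim

    def run(self, seq):
        h, hs = [0.0] * self.hid_dim, []
        for x in seq:
            h = [math.tanh(a + b)
                 for a, b in zip(matvec(self.Wxh, x), matvec(self.Whh, h))]
            hs.append(h)
        return hs  # hidden state per step; hs[-1] summarizes the sequence

def fusion_unit(video_vec, other_summary):
    """Hypothetical fusion: a sigmoid gate built from the OTHER sequence's
    summary modulates this video's representation. One plausible 'simple but
    effective' design; the paper's exact unit may differ."""
    gate = [1.0 / (1.0 + math.exp(-g)) for g in other_summary]
    return [v * g for v, g in zip(video_vec, gate)]

# Toy clicked-video embeddings (dim 4) for the two channel categories.
target_seq = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)]
context_seq = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(5)]

rnn_t, rnn_c = VanillaRNN(4, 4), VanillaRNN(4, 4)
h_target = rnn_t.run(target_seq)
h_context = rnn_c.run(context_seq)

# Correlate each target-channel video with the context summary, and vice versa.
fused_target = [fusion_unit(v, h_context[-1]) for v in target_seq]
fused_context = [fusion_unit(v, h_target[-1]) for v in context_seq]

print(len(fused_target), len(fused_context))  # prints "3 5"
```

In a trained model the fused sequences would be re-encoded and combined with the candidate video's embedding to produce the final CTR score; that prediction head is omitted here.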
Pages: 3-18
Page count: 16
Related Papers
50 items
  • [11] A MULTI-CHANNEL SEQUENTIAL DETECTION PROCEDURE
    NADELYAYEV, YV
    RADIO ENGINEERING AND ELECTRONIC PHYSICS-USSR, 1969, 14 (12): 1842 - +
  • [12] Retinal artery/vein classification by multi-channel multi-scale fusion network
    Yi, Junyan
    Chen, Chouyu
    Yang, Gang
    APPLIED INTELLIGENCE, 2023, 53 (22) : 26400 - 26417
  • [14] Energy Efficient Sequential Sensing for Wideband Multi-channel Cognitive Network
    Xu, Miao
    Li, He
    Gan, Xiaoying
    2011 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC), 2011,
  • [15] MCTN: A Multi-Channel Temporal Network for Wearable Fall Prediction
    Liu, Jiawei
    Li, Xiaohu
    Liao, Guorui
    Wang, Shu
    Liu, Li
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES: APPLIED DATA SCIENCE AND DEMO TRACK, ECML PKDD 2023, PT VI, 2023, 14174 : 394 - 409
  • [16] Multi-channel fusion LSTM for medical event prediction using EHRs
    Liu, Sicen
    Wang, Xiaolong
    Xiang, Yang
    Xu, Hui
    Wang, Hui
    Tang, Buzhou
    JOURNAL OF BIOMEDICAL INFORMATICS, 2022, 127
  • [17] Multi-Fusion Residual Memory Network for Multimodal Human Sentiment Comprehension
    Mai, Sijie
    Hu, Haifeng
    Xu, Jia
    Xing, Songlong
    IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, 2022, 13 (01) : 320 - 334
  • [18] Video fire recognition based on multi-channel convolutional neural network
    Zhong, Chen
    Shao, Yu
    Ding, Hongjun
    Wang, Ke
    2020 3RD INTERNATIONAL CONFERENCE ON COMPUTER INFORMATION SCIENCE AND APPLICATION TECHNOLOGY (CISAT 2020), 2020, 1634
  • [19] Recognising pedestrian behaviour using a multi-channel spatiotemporal fusion network
    Li, Chen
    Liu, Yunqing
    Wang, Junnian
    Li, Jianxin
    Zhuang, Chengtong
    JOURNAL OF ELECTRICAL SYSTEMS, 2024, 20 (04) : 317 - 330
  • [20] Target Classification based on Sensor Fusion in Multi-Channel Seismic Network
    Zubair, Mussab
    Hartmann, Klaus
    2011 IEEE INTERNATIONAL SYMPOSIUM ON SIGNAL PROCESSING AND INFORMATION TECHNOLOGY (ISSPIT), 2011, : 438 - 443