Layer-by-Layer Residual Interactive Network Approach for Advertisement Click-Through Rate Prediction

Citations: 0
Authors
Yin Y.-F. [1,2]
Long L.-J. [1]
Huang F.-L. [2]
Wu K.-G. [1]
Affiliations
[1] College of Computer Science, Chongqing University, Chongqing
[2] Guangxi Key Lab of Human-Machine Interaction and Intelligent Decision, Nanning
Keywords
attention; CTR prediction; feature interaction; layer-by-layer; residual network
DOI
10.11897/SP.J.1016.2024.00575
Abstract
Online advertising is billed by the number of times users click on ads, so accurately predicting the Click-Through Rate (CTR) is a critical concern for advertising companies. Current state-of-the-art methods focus on constructing various high-order feature interaction models to predict CTR; however, high-order feature interactions lose low-order information, especially the information carried by the original features. To this end, this paper proposes a novel layer-by-layer residual interaction framework, named the Layer-by-layer Residual Interaction Network (LRIN), which exploits the guiding role of the original features at every interaction step. LRIN emphasizes that higher-order feature interactions should be built layer by layer on the original features: the n-order features are obtained as the element-wise product of the original features and the (n-1)-order features. Moreover, a multi-scale approach is introduced to design the attention network. Mirroring the layer-by-layer interaction, the attention network is likewise organized into multiple layers, called the layer-by-layer attention network. To combine the two, the outputs of the layer-by-layer residual interaction network are taken as the weights of the layer-by-layer attention network, forming a novel dual-network training model. Experimental results on multiple benchmark datasets indicate that LRIN outperforms current advanced methods by 1.24% on average on the Criteo dataset, 2.16% on Avazu, 1.3% on MovieLens-1M, and 1.27% on Book-Crossing. © 2024 Science Press. All rights reserved.
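To make the layer-by-layer scheme described in the abstract concrete, the following PyTorch snippet is a minimal sketch assuming the update rule x^(n) = x^(0) ⊙ W·x^(n-1) + x^(n-1), i.e., an element-wise product with the original features plus a residual term. The class name, the per-layer linear maps, the layer count, and the softmax coupling to the attention branch are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn


class LayerwiseResidualInteraction(nn.Module):
    """Layer-by-layer residual interaction (assumed form).

    Each layer forms the n-order features as the element-wise product of the
    original features x0 and the (n-1)-order features, plus a residual term
    so low-order information is preserved across layers.
    """

    def __init__(self, dim: int, num_layers: int = 3):
        super().__init__()
        # One linear map per layer; the paper may parameterize this differently.
        self.linears = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_layers)])

    def forward(self, x0: torch.Tensor) -> torch.Tensor:
        # x0: (batch, dim) -- the embedded original features.
        x = x0
        layer_outputs = []
        for linear in self.linears:
            # n-order = original features (element-wise) * (n-1)-order features,
            # with a residual connection.
            x = x0 * linear(x) + x
            layer_outputs.append(x)
        # Return every layer's output: (batch, num_layers, dim).
        return torch.stack(layer_outputs, dim=1)


if __name__ == "__main__":
    x0 = torch.randn(4, 16)
    interactions = LayerwiseResidualInteraction(dim=16)(x0)
    # Hypothetical coupling to the attention branch: normalize the per-layer
    # interaction outputs and use them as layer-by-layer attention weights.
    attn_weights = torch.softmax(interactions, dim=1)
    print(attn_weights.shape)  # torch.Size([4, 3, 16])
```

Returning every layer's output (rather than only the last) is what allows the interaction branch to supply one weight tensor per attention layer in the dual-network setup the abstract describes.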
Pages: 575-588
Number of pages: 13