Modeling User Session and Intent with an Attention-based Encoder-Decoder Architecture

Cited by: 42
Authors
Loyola, Pablo [1 ,2 ]
Liu, Chen [2 ]
Hirate, Yu [2 ]
Affiliations
[1] Univ Tokyo, Tokyo, Japan
[2] Rakuten Inst Technol, Tokyo, Japan
Keywords
Recommender Systems; Recurrent Neural Networks; Attention Mechanisms
DOI
10.1145/3109859.3109917
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
We propose an encoder-decoder neural architecture to model user sessions and intent using browsing and purchasing data from a large e-commerce company. We begin by identifying the source-target transition pairs between items within each session. The set of source items is then passed through an encoder, whose learned representation is used by the decoder to estimate the sequence of target items. As this process is performed pair-wise, we hypothesize that the model can capture transition regularities in a more fine-grained way. Additionally, our model incorporates an attention mechanism to explicitly learn the more expressive portions of the sequences and thereby improve performance. Beyond modeling user sessions, we also extend the original architecture by attaching a second decoder that is jointly trained to predict the purchasing intent of the user in each session. With this, we explore to what extent the model can capture inter-session dependencies. We performed an empirical study against several baselines on a large real-world dataset, showing that our approach is competitive in both item and intent prediction.
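As a rough illustration of the pipeline the abstract describes, the sketch below implements a minimal session encoder-decoder in PyTorch: a GRU encoder over the source items, a GRU decoder with dot-product attention over the target items, and a jointly trained intent objective. All module choices, sizes, and names (SessionEncoderDecoder, emb_dim, hid_dim) are assumptions for illustration, not the authors' exact configuration; in particular, the paper's second decoder for purchase intent is reduced here to a classification head on the final encoder state.

# Minimal sketch of the abstract's architecture (assumed details, see above).
import torch
import torch.nn as nn

class SessionEncoderDecoder(nn.Module):
    def __init__(self, num_items, emb_dim=64, hid_dim=128):
        super().__init__()
        self.item_emb = nn.Embedding(num_items, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.attn_combine = nn.Linear(2 * hid_dim, hid_dim)  # decoder state + context
        self.item_out = nn.Linear(hid_dim, num_items)    # next-item scores
        self.intent_out = nn.Linear(hid_dim, 2)          # purchase / no purchase

    def forward(self, source_items, target_items):
        # source_items, target_items: (batch, seq_len) tensors of item ids
        enc_outs, enc_h = self.encoder(self.item_emb(source_items))
        dec_outs, _ = self.decoder(self.item_emb(target_items), enc_h)
        # dot-product attention: each decoder step attends over encoder states
        weights = torch.softmax(dec_outs @ enc_outs.transpose(1, 2), dim=-1)
        context = weights @ enc_outs                     # (batch, tgt_len, hid_dim)
        combined = torch.tanh(self.attn_combine(torch.cat([dec_outs, context], -1)))
        item_logits = self.item_out(combined)            # per-step item prediction
        intent_logits = self.intent_out(enc_h.squeeze(0))  # session-level intent
        return item_logits, intent_logits

# Joint training on toy data: teacher-forced next-item loss plus intent loss.
model = SessionEncoderDecoder(num_items=1000)
src = torch.randint(0, 1000, (4, 6))   # a batch of 4 source sequences
tgt = torch.randint(0, 1000, (4, 6))   # matching target sequences
intent = torch.randint(0, 2, (4,))     # session-level purchase labels
item_logits, intent_logits = model(src, tgt[:, :-1])  # predict tgt shifted by one
ce = nn.CrossEntropyLoss()
loss = ce(item_logits.reshape(-1, 1000), tgt[:, 1:].reshape(-1)) \
     + ce(intent_logits, intent)
loss.backward()

Summing the two losses trains the shared encoder representation to serve both objectives at once, which is the coupling between item prediction and intent prediction that the abstract evaluates.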
Pages: 147-151
Page count: 5
Related Papers
50 items total
  • [31] Investigating Methods to Improve Language Model Integration for Attention-based Encoder-Decoder ASR Models
    Zeineldeen, Mohammad
    Glushko, Aleksandr
    Michel, Wilfried
    Zeyer, Albert
    Schlueter, Ralf
    Ney, Hermann
    INTERSPEECH 2021, 2021, : 2856 - 2860
  • [32] Recognition of Japanese historical text lines by an attention-based encoder-decoder and text line generation
    Le, Anh Duc
    Mochihashi, Daichi
    Masuda, Katsuya
    Mima, Hideki
    Ly, Nam Tuan
    PROCEEDINGS OF THE 2019 WORKSHOP ON HISTORICAL DOCUMENT IMAGING AND PROCESSING (HIP' 19), 2019, : 37 - 41
  • [33] Self-Supervised Pre-Training for Attention-Based Encoder-Decoder ASR Model
    Gao, Changfeng
    Cheng, Gaofeng
    Li, Ta
    Zhang, Pengyuan
    Yan, Yonghong
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2022, 30 : 1763 - 1774
  • [34] AEDmts: An Attention-Based Encoder-Decoder Framework for Multi-Sensory Time Series Analytic
    Fan, Jin
    Wang, Hongkun
    Huang, Yipan
    Zhang, Ke
    Zhao, Bei
IEEE ACCESS, 2020, 8 (08): 37406 - 37415
  • [35] Enhancing lane changing trajectory prediction on highways: A heuristic attention-based encoder-decoder model
    Xiao, Xue
    Bo, Peng
    Chen, Yingda
    Chen, Yili
    Li, Keping
    PHYSICA A-STATISTICAL MECHANICS AND ITS APPLICATIONS, 2024, 639
  • [36] A Novel Dynamic Attack on Classical Ciphers Using an Attention-Based LSTM Encoder-Decoder Model
    Ahmadzadeh, Ezat
    Kim, Hyunil
    Jeong, Ongee
    Moon, Inkyu
IEEE ACCESS, 2021, 9 (09): 60960 - 60970
  • [37] Lane-Level Heterogeneous Traffic Flow Prediction: A Spatiotemporal Attention-Based Encoder-Decoder Model
    Zheng, Yan
    Li, Wenquan
    Zheng, Wen
    Dong, Chunjiao
    Wang, Shengyou
    Chen, Qian
    IEEE INTELLIGENT TRANSPORTATION SYSTEMS MAGAZINE, 2023, 15 (03) : 51 - 67
  • [38] A novel approach to workload prediction using attention-based LSTM encoder-decoder network in cloud environment
    Zhu, Yonghua
    Zhang, Weilin
    Chen, Yihai
    Gao, Honghao
    EURASIP JOURNAL ON WIRELESS COMMUNICATIONS AND NETWORKING, 2019, 2019 (01)
  • [39] An attention-based row-column encoder-decoder model for text recognition in Japanese historical documents
    Ly, Nam Tuan
    Nguyen, Cuong Tuan
    Nakagawa, Masaki
    PATTERN RECOGNITION LETTERS, 2020, 136 : 134 - 141
  • [40] Hybrid Transducer and Attention based Encoder-Decoder Modeling for Speech-to-Text Tasks
    Tang, Yun
    Sun, Anna Y.
    Inaguma, Hirofumi
    Chen, Xinyue
    Dong, Ning
    Ma, Xutai
    Tomasello, Paden D.
    Pino, Juan
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2023): LONG PAPERS, VOL 1, 2023, : 12441 - 12455