Sequential recommendation by reprogramming pretrained transformer

Times Cited: 0
Authors
Tang, Min [1 ]
Cui, Shujie [2 ]
Jin, Zhe [3 ]
Liang, Shiuan-ni [1 ]
Li, Chenliang [4 ]
Zou, Lixin [4 ]
Affiliations
[1] Monash Univ, Sch Engn, Bandar Sunway 47500, Malaysia
[2] Monash Univ, Sch Informat Technol, Clayton, Vic 3800, Australia
[3] Anhui Univ, Sch Artificial Intelligence, Hefei 230039, Anhui, Peoples R China
[4] Wuhan Univ, Sch Cyber Sci & Engn, Wuhan 430072, Hubei, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Sequential recommendation; Generative pretrained transformer; Few-shot learning;
DOI
10.1016/j.ipm.2024.103938
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Inspired by the success of pre-trained language models (PLMs), numerous sequential recommenders have attempted to replicate their achievements by adopting PLMs' efficient architectures to build large models and using self-supervised learning to broaden the training data. Despite these successes, building a large-scale sequential recommender remains an open problem, since existing methods either train models within a single dataset or rely on text as an intermediary for alignment across datasets. Moreover, owing to the sparsity of user-item interactions, the misalignment between different datasets, and the lack of global information in sequential recommendation, directly pre-training a large foundation model may not be feasible. To this end, we propose RecPPT, which employs GPT-2 to model historical sequences while training only the input item embeddings and the output layer from scratch, thereby avoiding training a large model on sparse user-item interactions. To alleviate the burden of misalignment, RecPPT is further equipped with a reprogramming module that reprograms target embeddings onto existing well-trained proto-embeddings. Furthermore, RecPPT integrates global information into the sequences by initializing the item embeddings with an SVD-based initializer. Extensive experiments on four datasets demonstrate that RecPPT achieves average improvements of 6.5% on NDCG@5, 6.2% on NDCG@10, 6.1% on Recall@5, and 5.4% on Recall@10 over the baselines. In few-shot scenarios in particular, the significant improvements in NDCG@10 confirm the superiority of the proposed method.
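The SVD-based initialization mentioned above can be illustrated with a short sketch: factorize the (binary) user-item interaction matrix and use the scaled right singular vectors as initial item embeddings, so that each embedding carries global co-occurrence structure before sequence training begins. This is a hypothetical minimal sketch, not the paper's actual implementation; the function name `svd_item_init` and the sqrt-scaling choice are assumptions.

```python
import numpy as np

def svd_item_init(interactions, n_users, n_items, dim):
    """Sketch of an SVD-based item-embedding initializer.

    interactions: iterable of (user_id, item_id) pairs
    Returns an (n_items, dim) embedding matrix derived from the
    top-`dim` singular factors of the interaction matrix.
    """
    # Build the (dense, for illustration) binary interaction matrix R
    R = np.zeros((n_users, n_items))
    for u, i in interactions:
        R[u, i] = 1.0
    # R ≈ U @ diag(S) @ Vt; keep only the top-`dim` factors
    U, S, Vt = np.linalg.svd(R, full_matrices=False)
    # Scale the right singular vectors by sqrt(S) so that the
    # embedding inner products approximate item co-occurrence
    return Vt[:dim].T * np.sqrt(S[:dim])
```

In practice one would use a sparse matrix and a truncated solver (e.g. `scipy.sparse.linalg.svds`) rather than a full dense SVD.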
Pages: 15
Related Papers
50 records in total
  • [1] Adaptive Disentangled Transformer for Sequential Recommendation
    Zhang, Yipeng
    Wang, Xin
    Chen, Hong
    Zhu, Wenwu
    PROCEEDINGS OF THE 29TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2023, 2023, : 3434 - 3445
  • [2] Knowledge Graph Transformer for Sequential Recommendation
    Zhu, Jinghua
    Cui, Yanchang
    Zhang, Zhuohao
    Xi, Heran
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2023, PT VI, 2023, 14259 : 459 - 471
  • [3] Effective and Efficient Transformer Models for Sequential Recommendation
    Petrov, Aleksandr V.
    ADVANCES IN INFORMATION RETRIEVAL, ECIR 2024, PT V, 2024, 14612 : 325 - 327
  • [4] Personalized Dual Transformer Network for sequential recommendation
    Ge, Meiling
    Wang, Chengduan
    Qin, Xueyang
    Dai, Jiangyan
    Huang, Lei
    Qin, Qibing
    Zhang, Wenfeng
    NEUROCOMPUTING, 2025, 622
  • [5] Effective and Efficient Transformer Models for Sequential Recommendation
    Petrov, Aleksandr V.
    PROCEEDINGS OF THE 17TH ACM INTERNATIONAL CONFERENCE ON WEB SEARCH AND DATA MINING, WSDM 2024, 2024, : 1150 - 1151
  • [6] Explanation Generated for Sequential Recommendation based on Transformer model
    Qu, Yuanpeng
    Nobuhara, Hajime
    2022 JOINT 12TH INTERNATIONAL CONFERENCE ON SOFT COMPUTING AND INTELLIGENT SYSTEMS AND 23RD INTERNATIONAL SYMPOSIUM ON ADVANCED INTELLIGENT SYSTEMS (SCIS&ISIS), 2022,
  • [7] Attention Calibration for Transformer-based Sequential Recommendation
    Zhou, Peilin
    Ye, Qichen
    Xie, Yueqi
    Gao, Jingqi
    Wang, Shoujin
    Kim, Jae Boum
    You, Chenyu
    Kim, Sunghun
    PROCEEDINGS OF THE 32ND ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2023, 2023, : 3595 - 3605
  • [8] Contrasting Transformer and Hypergraph Network for Cooperative Sequential Recommendation
    Wu, Tongyu
    Qu, Jianfeng
    Wang, Deqing
    Cui, Zhiming
    Liu, Guanfeng
    Zhao, Pengpeng
    DATABASE SYSTEMS FOR ADVANCED APPLICATIONS, DASFAA 2024, PT 3, 2025, 14852 : 83 - 98
  • [9] Multi-Behavior Sequential Recommendation With Temporal Graph Transformer
    Xia, Lianghao
    Huang, Chao
    Xu, Yong
    Pei, Jian
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2023, 35 (06) : 6099 - 6112
  • [10] AdaMCT: Adaptive Mixture of CNN-Transformer for Sequential Recommendation
    Jiang, Juyong
    Zhang, Peiyan
    Luo, Yingtao
    Li, Chaozhuo
    Kim, Jae Boum
    Zhang, Kai
    Wang, Senzhang
    Xie, Xing
    Kim, Sunghun
    PROCEEDINGS OF THE 32ND ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2023, 2023, : 976 - 986