Sequential recommendation by reprogramming pretrained transformer

Cited by: 0
Authors
Tang, Min [1]
Cui, Shujie [2]
Jin, Zhe [3]
Liang, Shiuan-ni [1]
Li, Chenliang [4]
Zou, Lixin [4]
Affiliations
[1] Monash Univ, Sch Engn, Bandar Sunway 47500, Malaysia
[2] Monash Univ, Sch Informat Technol, Clayton, Vic 3800, Australia
[3] Anhui Univ, Sch Artificial Intelligence, Hefei 230039, Anhui, Peoples R China
[4] Wuhan Univ, Sch Cyber Sci & Engn, Wuhan 430072, Hubei, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Sequential recommendation; Generative pretrained transformer; Few-shot learning;
DOI
10.1016/j.ipm.2024.103938
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Inspired by the success of pre-trained language models (PLMs), numerous sequential recommenders have attempted to replicate their achievements by adopting PLMs' efficient architectures to build large models and using self-supervised learning to broaden the training data. Despite this progress, how to build a large-scale sequential recommender system remains an open question, since existing methods either train models within a single dataset or rely on text as an intermediary for alignment across different datasets. Moreover, owing to the sparsity of user-item interactions, the misalignment between different datasets, and the lack of global information in sequential recommendation, directly pre-training a large foundation model may not be feasible. Towards this end, we propose RecPPT, which first employs GPT-2 to model historical sequences while training only the input item embedding and the output layer from scratch, thus avoiding training a large model on sparse user-item interactions. Additionally, to alleviate the burden of misalignment, RecPPT is equipped with a reprogramming module that reprograms the target item embeddings onto existing well-trained proto-embeddings. Furthermore, RecPPT integrates global information into the sequences by initializing the item embedding with an SVD-based initializer. Extensive experiments on four datasets demonstrate that RecPPT achieves an average improvement of 6.5% on NDCG@5, 6.2% on NDCG@10, 6.1% on Recall@5, and 5.4% on Recall@10 over the baselines. Particularly in few-shot scenarios, the significant improvements in NDCG@10 confirm the superiority of the proposed method.
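The abstract describes three concrete components: a frozen GPT-2 backbone whose input item embedding and output layer are trained from scratch, a reprogramming module that maps item embeddings onto well-trained proto-embeddings, and an SVD-based initializer that injects global interaction statistics. The record does not include the authors' implementation; below is a minimal PyTorch sketch of that design, in which the class name RecPPTSketch, the choice of GPT-2's first token embeddings as stand-in proto-embeddings, and all hyperparameters are assumptions rather than details from the paper.

import torch
import torch.nn as nn
from transformers import GPT2Model

class RecPPTSketch(nn.Module):  # hypothetical name, not the authors' code
    def __init__(self, num_items, interactions, num_protos=64):
        super().__init__()
        # Pretrained GPT-2 body stays frozen; only the item embedding,
        # the reprogramming attention, and the output head are trained.
        self.gpt2 = GPT2Model.from_pretrained("gpt2")
        for p in self.gpt2.parameters():
            p.requires_grad = False
        d = self.gpt2.config.n_embd  # 768 for base GPT-2

        # SVD-based initializer (sketch): seed the item embedding with
        # low-rank factors of the (num_users x num_items) interaction
        # matrix so it carries global co-occurrence information.
        self.item_emb = nn.Embedding(num_items, d)
        q = min(d, *interactions.shape)
        with torch.no_grad():
            _, s, v = torch.svd_lowrank(interactions.float(), q=q)
            self.item_emb.weight[:, :q] = v * s  # item factors scaled by singular values

        # Reprogramming module: item embeddings attend over a fixed bank of
        # proto-embeddings (GPT-2 token embeddings here, as an assumption),
        # so inputs land in the space the frozen backbone was trained on.
        self.register_buffer("protos", self.gpt2.wte.weight[:num_protos].detach().clone())
        self.reprogram = nn.MultiheadAttention(d, num_heads=4, batch_first=True)

        # Output layer trained from scratch: logits over the item catalog.
        self.head = nn.Linear(d, num_items)

    def forward(self, item_ids):                        # (batch, seq_len)
        x = self.item_emb(item_ids)                     # (batch, seq_len, d)
        kv = self.protos.unsqueeze(0).expand(x.size(0), -1, -1)
        x, _ = self.reprogram(x, kv, kv)                # mixtures of proto-embeddings
        h = self.gpt2(inputs_embeds=x).last_hidden_state
        return self.head(h[:, -1])                      # next-item scores

Training would then optimize only item_emb, reprogram, and head, for example with cross-entropy against the next item in each sequence; how the paper handles position information and negative sampling cannot be recovered from the abstract.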
Pages: 15
Related Papers
50 records in total
  • [1] Knowledge Graph Transformer for Sequential Recommendation
    Zhu, Jinghua
    Cui, Yanchang
    Zhang, Zhuohao
    Xi, Heran
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2023, PT VI, 2023, 14259 : 459 - 471
  • [2] Adaptive Disentangled Transformer for Sequential Recommendation
    Zhang, Yipeng
    Wang, Xin
    Chen, Hong
    Zhu, Wenwu
    PROCEEDINGS OF THE 29TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2023, 2023, : 3434 - 3445
  • [3] Personalized Dual Transformer Network for sequential recommendation
    Ge, Meiling
    Wang, Chengduan
    Qin, Xueyang
    Dai, Jiangyan
    Huang, Lei
    Qin, Qibing
    Zhang, Wenfeng
    NEUROCOMPUTING, 2025, 622
  • [4] Attention Calibration for Transformer-based Sequential Recommendation
    Zhou, Peilin
    Ye, Qichen
    Xie, Yueqi
    Gao, Jingqi
    Wang, Shoujin
    Kim, Jae Boum
    You, Chenyu
    Kim, Sunghun
    PROCEEDINGS OF THE 32ND ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2023, 2023, : 3595 - 3605
  • [5] Explanation Generated for Sequential Recommendation based on Transformer model
    Qu, Yuanpeng
    Nobuhara, Hajime
    2022 JOINT 12TH INTERNATIONAL CONFERENCE ON SOFT COMPUTING AND INTELLIGENT SYSTEMS AND 23RD INTERNATIONAL SYMPOSIUM ON ADVANCED INTELLIGENT SYSTEMS (SCIS&ISIS), 2022,
  • [6] Contrasting Transformer and Hypergraph Network for Cooperative Sequential Recommendation
    Wu, Tongyu
    Qu, Jianfeng
    Wang, Deqing
    Cui, Zhiming
    Liu, Guanfeng
    Zhao, Pengpeng
    DATABASE SYSTEMS FOR ADVANCED APPLICATIONS, DASFAA 2024, PT 3, 2025, 14852 : 83 - 98
  • [7] AdaMCT: Adaptive Mixture of CNN-Transformer for Sequential Recommendation
    Jiang, Juyong
    Zhang, Peiyan
    Luo, Yingtao
    Li, Chaozhuo
    Kim, Jae Boum
    Zhang, Kai
    Wang, Senzhang
    Xie, Xing
    Kim, Sunghun
    PROCEEDINGS OF THE 32ND ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2023, 2023, : 976 - 986
  • [8] SSE-PT: Sequential Recommendation Via Personalized Transformer
    Wu, Liwei
    Li, Shuqing
    Hsieh, Cho-Jui
    Sharpnack, James
    RECSYS 2020: 14TH ACM CONFERENCE ON RECOMMENDER SYSTEMS, 2020, : 328 - 337
  • [9] Transformer-Based Rating-Aware Sequential Recommendation
    Li, Yang
    Li, Qianmu
    Meng, Shunmei
    Hou, Jun
    ALGORITHMS AND ARCHITECTURES FOR PARALLEL PROCESSING, ICA3PP 2021, PT I, 2022, 13155 : 759 - 774
  • [10] Dual Contrastive Transformer for Hierarchical Preference Modeling in Sequential Recommendation
    Huang, Chengkai
    Wang, Shoujin
    Wang, Xianzhi
    Yao, Lina
    PROCEEDINGS OF THE 46TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL, SIGIR 2023, 2023, : 99 - 109