Ensemble Modeling with Contrastive Knowledge Distillation for Sequential Recommendation

Cited by: 10
Authors
Du, Hanwen [1 ]
Yuan, Huanhuan [1 ]
Zhao, Pengpeng [1 ]
Zhuang, Fuzhen [2 ]
Liu, Guanfeng [3 ]
Zhao, Lei [1 ]
Liu, Yanchi [4 ]
Sheng, Victor S. [5 ]
Affiliations
[1] Soochow Univ, Suzhou, Jiangsu, Peoples R China
[2] Beihang Univ, Beijing, Peoples R China
[3] Macquarie Univ, Sydney, NSW, Australia
[4] Rutgers State Univ, New Brunswick, NJ USA
[5] Texas Tech Univ, Lubbock, TX 79409 USA
Source
PROCEEDINGS OF THE 46TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL, SIGIR 2023 | 2023
Keywords
Sequential Recommendation; Contrastive Learning; Knowledge Distillation
DOI
10.1145/3539618.3591679
Chinese Library Classification
TP [automation technology; computer technology]
Discipline Classification Code
0812
Abstract
Sequential recommendation aims to capture users' dynamic interests and predict the next item a user will prefer. Most sequential recommendation methods use a deep neural network as the sequence encoder to generate user and item representations, and existing work centers mainly on designing a stronger sequence encoder. However, few attempts have been made to train an ensemble of networks as sequence encoders, even though an ensemble is more powerful than a single network: parallel networks yield diverse predictions, and aggregating them improves accuracy. In this paper, we present Ensemble Modeling with contrastive Knowledge Distillation for sequential recommendation (EMKD). Our framework adopts multiple parallel networks as an ensemble of sequence encoders and recommends items based on the output distributions of all these networks. To facilitate knowledge transfer between the parallel networks, we propose a novel contrastive knowledge distillation approach that transfers knowledge at the representation level via Intra-network Contrastive Learning (ICL) and Cross-network Contrastive Learning (CCL), and at the logits level via Knowledge Distillation (KD), which minimizes the Kullback-Leibler divergence between the output distributions of the teacher network and the student network. To leverage contextual information, we train the primary masked item prediction task alongside an auxiliary attribute prediction task in a multi-task learning scheme. Extensive experiments on public benchmark datasets show that EMKD achieves significant improvements over state-of-the-art methods. Moreover, we demonstrate that our ensemble method is a generalized approach that also improves the performance of other sequential recommenders. Our code is available at this link: https://github.com/hw-du/EMKD.
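As a rough illustration of the components named in the abstract, here is a minimal PyTorch-style sketch; it is not the authors' implementation (see the linked repository for that). The function names, the temperature parameters `tau`, and the loss forms are assumptions: the sketch only shows the general shape of logit-level KL distillation, a symmetric InfoNCE contrastive term (standing in for both ICL and CCL), and ensemble inference by averaging output distributions.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, tau=1.0):
    """Logit-level distillation: KL divergence between the teacher's and the
    student's softened output distributions (temperature tau is hypothetical)."""
    log_p_student = F.log_softmax(student_logits / tau, dim=-1)
    p_teacher = F.softmax(teacher_logits / tau, dim=-1)
    # KL(teacher || student); the tau**2 factor is the usual distillation scaling.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * tau ** 2

def info_nce(z_a, z_b, tau=0.1):
    """Symmetric InfoNCE loss: matching rows of z_a and z_b are positives and
    all other rows in the batch are negatives. With two augmented views from
    the same encoder this plays the role of ICL; with representations from two
    parallel encoders, the role of CCL."""
    z_a = F.normalize(z_a, dim=-1)
    z_b = F.normalize(z_b, dim=-1)
    sim = z_a @ z_b.t() / tau                       # (B, B) similarity matrix
    labels = torch.arange(z_a.size(0), device=z_a.device)
    return 0.5 * (F.cross_entropy(sim, labels) + F.cross_entropy(sim.t(), labels))

def ensemble_predict(logits_list):
    """Ensemble inference: average the output distributions of all parallel
    networks, then rank items by the averaged probability."""
    probs = torch.stack([F.softmax(l, dim=-1) for l in logits_list]).mean(dim=0)
    return probs.argsort(dim=-1, descending=True)   # item indices, best first
```

In a multi-task scheme like the one the abstract describes, terms of this kind would be summed with the masked item prediction and attribute prediction losses; the exact loss weighting and the assignment of teacher and student roles among the parallel networks are design decisions documented in the paper and repository, not fixed by this sketch.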
Pages: 58-67
Page count: 10