One Person, One Model, One World: Learning Continual User Representation without Forgetting

Cited by: 40
Authors
Yuan, Fajie [1 ,2 ]
Zhang, Guoxiao [2 ]
Karatzoglou, Alexandros [3 ]
Jose, Joemon [4 ]
Kong, Beibei [2 ]
Li, Yudong [2 ]
Affiliations
[1] Westlake Univ, Hangzhou, Peoples R China
[2] Tencent, Shenzhen, Peoples R China
[3] Google, London, England
[4] Univ Glasgow, Glasgow, Lanark, Scotland
Source
SIGIR '21 - PROCEEDINGS OF THE 44TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL | 2021
Keywords
User Modeling; Lifelong Learning; Forgetting; Recommender Systems
DOI
10.1145/3404835.3462884
CLC Classification Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Learning user representations is a vital technique for effective user modeling and personalized recommender systems. Existing approaches often derive an individual set of model parameters for each task by training on separate data. However, the representation of the same user potentially shares some commonalities, such as preference and personality, even across different tasks. As such, these separately trained representations can be suboptimal in performance as well as inefficient in terms of parameter sharing. In this paper, we study how to continually learn user representations task by task, whereby new tasks are learned while reusing partial parameters from old ones. A new problem arises: when new tasks are trained, previously learned parameters are very likely to be modified, and as a result, an artificial neural network (ANN)-based model may permanently lose its capacity to serve well-trained previous tasks; this issue is termed catastrophic forgetting. To address it, we present Conure, the first continual, or lifelong, user representation learner, i.e., one that learns new tasks over time without forgetting old ones. Specifically, motivated by the fact that neural network models are usually over-parameterized, we propose iteratively removing less important weights of old tasks in a deep user representation model. In this way, we can learn many tasks with a single model by reusing the important weights and modifying the less important ones to adapt to new tasks. We conduct extensive experiments on two real-world datasets with nine tasks and show that Conure largely exceeds the standard model that does not purposely preserve such old "knowledge", and performs competitively with, and sometimes better than, models trained either individually for each task or simultaneously by merging all task data.
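The pruning idea described in the abstract (keep a task's important weights fixed, release the unimportant ones for future tasks) can be sketched with a PackNet-style magnitude-pruning step. This is a minimal illustrative sketch, not the authors' actual implementation; the function name `prune_for_new_task` and the `keep_ratio` parameter are assumptions for illustration.

```python
import numpy as np

def prune_for_new_task(weights, free_mask, keep_ratio=0.5):
    """Magnitude-based pruning over the weights still 'free'
    (not yet claimed by earlier tasks): keep the largest-magnitude
    fraction for the current task, release the rest for future tasks.

    weights:    1-D array of layer weights after training the current task
    free_mask:  boolean array, True where a weight is not owned by an old task
    keep_ratio: fraction of the free weights the current task keeps
    Returns a boolean mask marking the weights now owned by this task.
    """
    free_vals = np.abs(weights[free_mask])
    if free_vals.size == 0:
        # Layer is saturated: no capacity left for this task.
        return np.zeros_like(free_mask)
    threshold = np.quantile(free_vals, 1.0 - keep_ratio)
    # Current task keeps weights that are free AND above the threshold;
    # everything below it is reinitialized and trained by later tasks.
    return free_mask & (np.abs(weights) >= threshold)

# Two tasks sharing one layer: task 2 only trains weights task 1 released.
rng = np.random.default_rng(0)
w = rng.normal(size=100)
free = np.ones(100, dtype=bool)
task1_mask = prune_for_new_task(w, free, keep_ratio=0.5)
task2_mask = prune_for_new_task(w, free & ~task1_mask, keep_ratio=0.5)
```

Iterating this step task by task gives every task a disjoint weight mask over the same network: old tasks stay intact because their masked weights are frozen, while new tasks adapt only the released capacity.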
Pages: 696-705
Page count: 10