Pivotal Role of Language Modeling in Recommender Systems: Enriching Task-specific and Task-agnostic Representation Learning

Cited by: 0
Authors
Shin, Kyuyong [1 ,2 ]
Kwak, Hanock [1 ]
Kim, Wonjae [2 ]
Jeong, Jisu [1 ,2 ]
Jung, Seungjae [1 ]
Kim, Kyung-Min [1 ,2 ]
Ha, Jung-Woo [2 ]
Lee, Sang-Woo [1 ,2 ]
Affiliations
[1] NAVER, Seongnam, South Korea
[2] NAVER AI Lab, Seongnam, South Korea
Source
PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 1 | 2023
Keywords
DOI
None available
CLC Classification
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Recent studies have proposed unified user modeling frameworks that leverage user behavior data from various applications. Many of them benefit from utilizing users' behavior sequences as plain text, representing rich information in any domain or system without losing generality. Hence, a question arises: Can language modeling for a user history corpus help improve recommender systems? While its versatile usability has been widely investigated in many domains, its application to recommender systems remains underexplored. We show that language modeling applied directly to task-specific user histories achieves excellent results on diverse recommendation tasks. Also, leveraging additional task-agnostic user histories delivers significant performance benefits. We further demonstrate that our approach can provide promising transfer learning capabilities for a broad spectrum of real-world recommender systems, even on unseen domains and services.
Pages: 1146-1161
Page count: 16