Modeling User Fatigue for Sequential Recommendation

Cited by: 2
Authors
Li, Nian [1]
Ban, Xin [2]
Ling, Cheng [2]
Gao, Chen [3]
Hu, Lantao [2]
Jiang, Peng [2]
Gai, Kun
Li, Yong [3]
Liao, Qingmin [1]
Affiliations
[1] Tsinghua Univ, Shenzhen Int Grad Sch, Shenzhen, Peoples R China
[2] Kuaishou Inc, Beijing, Peoples R China
[3] Tsinghua Univ, Beijing, Peoples R China
Source
PROCEEDINGS OF THE 47TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL, SIGIR 2024 | 2024
Funding
National Natural Science Foundation of China;
Keywords
User Fatigue; Sequential Recommendation; Long and Short-term Interests
DOI
10.1145/3626772.3657802
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Recommender systems filter information to match user interests. However, users may grow tired of recommendations that are too similar to content they have recently been exposed to, a phenomenon known as user fatigue. Despite its importance for user experience, user fatigue is seldom explored by existing recommenders. Modeling user fatigue poses three main challenges: which features support it, how it influences user interests, and how its explicit signals can be obtained. In this paper, we propose to model user Fatigue in interest learning for sequential Recommendation (FRec). To address the first challenge, based on a multi-interest framework, we connect the target item with historical items and construct an interest-aware similarity matrix as the supporting features for fatigue modeling. For the second challenge, building on feature crossing, we propose a fatigue-enhanced multi-interest fusion to capture long-term interest. In addition, we develop a fatigue-gated recurrent unit for short-term interest learning, with temporal fatigue representations as key inputs to the update and reset gates. For the last challenge, we propose a novel sequence augmentation that yields explicit fatigue signals for contrastive learning. We conduct extensive experiments on real-world datasets, including two public datasets and one large-scale industrial dataset. Experimental results show that FRec improves AUC and GAUC by up to 0.026 and 0.019, respectively, compared with state-of-the-art models. Moreover, large-scale online experiments demonstrate the effectiveness of FRec for fatigue reduction. Our code is released at https://github.com/tsinghua-fib-lab/SIGIR24-FRec.
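To illustrate the fatigue-gated recurrent unit described above, the sketch below shows a GRU-style cell whose update and reset gates also condition on a per-step fatigue representation. The wiring, dimensions, and variable names are assumptions for illustration only, not the paper's exact formulation or released code.

```python
import numpy as np

rng = np.random.default_rng(0)


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


class FatigueGRUCell:
    """GRU-like cell where a temporal fatigue vector f_t is appended to the
    gate inputs, so high fatigue can suppress or reset short-term interest.
    Hypothetical sketch; dimensions and wiring are illustrative assumptions."""

    def __init__(self, d_in, d_hid, d_fat, rng):
        init = lambda *shape: rng.normal(0.0, 0.1, shape)
        # Update/reset gates read [item embedding, hidden state, fatigue].
        self.Wz = init(d_in + d_hid + d_fat, d_hid)
        self.Wr = init(d_in + d_hid + d_fat, d_hid)
        # Candidate state reads [item embedding, reset-gated hidden state].
        self.Wh = init(d_in + d_hid, d_hid)

    def step(self, x, h, f):
        gate_in = np.concatenate([x, h, f])
        z = sigmoid(gate_in @ self.Wz)  # fatigue-aware update gate
        r = sigmoid(gate_in @ self.Wr)  # fatigue-aware reset gate
        h_tilde = np.tanh(np.concatenate([x, r * h]) @ self.Wh)
        return (1.0 - z) * h + z * h_tilde


# Roll the cell over a short interaction sequence.
cell = FatigueGRUCell(d_in=8, d_hid=16, d_fat=4, rng=rng)
h = np.zeros(16)
for t in range(5):
    x = rng.normal(size=8)  # stand-in for the item embedding at step t
    f = rng.normal(size=4)  # stand-in for the temporal fatigue representation
    h = cell.step(x, h, f)
print(h.shape)
```

Because `f` enters both gates, the model can learn to forget (reset) or freeze (skip updating) short-term interest when fatigue toward similar recent items is high, which is the intuition the abstract attributes to the fatigue-gated unit.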
Pages: 996-1005
Page count: 10