Integrating optimized item selection with active learning for continuous exploration in recommender systems

Cited by: 1
Authors
Kadioglu, Serdar [1 ,2 ]
Kleynhans, Bernard [1 ]
Wang, Xin [1 ]
Affiliations
[1] Fidelity Investments, AI Center of Excellence, Boston, MA 02110, USA
[2] Brown University, Department of Computer Science, Providence, RI 02912, USA
Keywords
Recommender systems; Exploration-exploitation; Active learning;
DOI
10.1007/s10472-024-09941-x
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Recommender Systems have become the backbone of personalized services that provide tailored experiences to individual users, yet designing new recommendation applications with limited or no available training data remains a challenge. To address this issue, we focus on selecting the universe of items for experimentation in recommender systems by leveraging a recently introduced combinatorial problem. On the one hand, selecting a large set of items is desirable to increase the diversity of items. On the other hand, a smaller set of items enables rapid experimentation and minimizes the time and the amount of data required to train machine learning models. We first present how to optimize for such conflicting criteria using a multi-level optimization framework. Then, we shift our focus to the operational setting of a recommender system. In practice, to work effectively in a dynamic environment where new items are introduced to the system, we need to explore users' behaviors and interests continuously. To that end, we show how to integrate the item selection approach with active learning to guide randomized exploration in an ongoing fashion. Our hybrid approach combines techniques from discrete optimization, unsupervised clustering, and latent text embeddings. Experimental results on well-known movie and book recommendation benchmarks demonstrate the benefits of optimized item selection and efficient exploration.
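The pipeline described in the abstract could be approximated roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: it substitutes synthetic random vectors for the latent text embeddings, uses k-means as the unsupervised clustering step, takes the item nearest each cluster centroid as a stand-in for the multi-level optimization of the item universe, and spends a small random budget outside the selected set as a stand-in for active-learning-guided exploration. The helper names select_items and explore are hypothetical.

# Illustrative sketch (assumptions noted above): cluster latent item embeddings,
# pick one representative per cluster as the experimentation "universe", then
# reserve a small exploration budget for items outside that universe.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Placeholder for latent text embeddings of item descriptions (e.g., movies or books).
n_items, dim = 500, 64
item_embeddings = rng.normal(size=(n_items, dim))

def select_items(embeddings, k):
    """Pick k items by clustering embeddings and taking the item closest
    to each cluster centroid -- a proxy for a small but diverse item set."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(embeddings)
    selected = []
    for c in range(k):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(embeddings[members] - km.cluster_centers_[c], axis=1)
        selected.append(members[np.argmin(dists)])
    return np.array(selected)

def explore(selected, n_items, budget, rng):
    """Randomized exploration: spend a small budget on items outside the
    current selection so newly introduced items can still be surfaced."""
    outside = np.setdiff1d(np.arange(n_items), selected)
    return rng.choice(outside, size=min(budget, len(outside)), replace=False)

universe = select_items(item_embeddings, k=20)           # compact, diverse item set
probes = explore(universe, n_items, budget=5, rng=rng)   # ongoing exploration step
print("selected universe:", universe[:5], "... exploration probes:", probes)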
Pages: 1585-1607
Number of pages: 23