Controlling Popularity Bias in Learning-to-Rank Recommendation

Cited by: 232
Authors
Abdollahpouri, Himan [1 ]
Burke, Robin [1 ]
Mobasher, Bamshad [1 ]
Affiliation
[1] Depaul Univ, Chicago, IL 60604 USA
Source
Proceedings of the Eleventh ACM Conference on Recommender Systems (RecSys '17), 2017
Funding
U.S. National Science Foundation
Keywords
Recommender systems; long-tail; Recommendation evaluation; Coverage; Learning to rank;
DOI
10.1145/3109859.3109912
CLC classification code
TP18 [Artificial intelligence theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Many recommendation algorithms suffer from popularity bias in their output: popular items are recommended frequently, while less popular ones are recommended rarely, if at all. However, less popular, long-tail items are precisely those that are often desirable recommendations. In this paper, we introduce a flexible regularization-based framework to enhance the long-tail coverage of recommendation lists in a learning-to-rank algorithm. We show that regularization provides a tunable mechanism for controlling the trade-off between accuracy and coverage. Moreover, experimental results on two data sets show that it is possible to improve coverage of long-tail items without substantial loss of ranking performance.
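The abstract does not specify the exact objective, so the following is only an illustrative sketch of the general idea: a pairwise learning-to-rank loss augmented with a regularization term that penalizes scoring long-tail items below popular ones, with a weight `lam` acting as the tunable accuracy/coverage knob. The function name, the hinge loss, and the head-vs-tail score-gap penalty are all assumptions for illustration, not the authors' actual formulation.

```python
import numpy as np

def objective(scores, pos, neg, long_tail, lam):
    """Illustrative sketch: pairwise hinge ranking loss plus a
    long-tail coverage regularizer (hypothetical, not the paper's
    exact objective).

    scores    : model scores for one user's candidate items
    pos, neg  : indices of preferred and non-preferred items
    long_tail : boolean mask marking less-popular (tail) items
    lam       : trade-off weight (0 recovers the pure ranking loss)
    """
    # Standard pairwise hinge loss: preferred items should outscore
    # non-preferred ones by a margin of 1.
    rank_loss = np.maximum(0.0, 1.0 - (scores[pos] - scores[neg])).mean()
    # Regularizer: the average score gap between head and tail items.
    # A larger lam penalizes this gap more, pushing tail items up the
    # ranking at some cost in accuracy.
    coverage_penalty = scores[~long_tail].mean() - scores[long_tail].mean()
    return rank_loss + lam * coverage_penalty
```

Setting `lam = 0` yields a conventional accuracy-only ranking objective; increasing it trades ranking accuracy for long-tail coverage, which is the tunable mechanism the abstract describes.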
Pages: 42-46
Page count: 5