Active Learning Strategies for Rating Elicitation in Collaborative Filtering: A System-Wide Perspective

Cited by: 40
Authors
Elahi, Mehdi [1]
Ricci, Francesco [1]
Rubens, Neil [2]
Affiliations
[1] Free Univ Bozen Bolzano, Bozen Bolzano, Italy
[2] Univ Electrocommun, Tokyo, Japan
Keywords
Algorithms; Experimentation; Performance; Recommender systems; Active learning; Rating elicitation; Cold start; Preference elicitation
DOI
10.1145/2542182.2542195
CLC Number (Chinese Library Classification)
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
The accuracy of collaborative-filtering recommender systems largely depends on three factors: the quality of the rating prediction algorithm, and the quantity and quality of the available ratings. While research in the field of recommender systems often concentrates on improving prediction algorithms, even the best algorithms will fail if they are fed poor-quality training data: garbage in, garbage out. Active learning aims to remedy this problem by focusing on acquiring better-quality data that more aptly reflects a user's preferences. However, the traditional evaluation of active learning strategies has two major flaws, each with significant negative ramifications for accurately assessing system performance (prediction error, precision, and quantity of elicited ratings): (1) performance has been evaluated for each user independently, ignoring system-wide improvements; and (2) active learning strategies have been evaluated in isolation from unsolicited user ratings (natural acquisition). In this article we show that an elicited rating has effects across the system, so a typical user-centric evaluation, which ignores changes in the rating predictions of other users, also ignores these cumulative effects, which may be more influential on the performance of the system as a whole (system-centric). We propose a new evaluation methodology and use it to evaluate several novel and state-of-the-art rating elicitation strategies. We find that the system-wide effectiveness of a rating elicitation strategy depends on the stage of the rating elicitation process and on the evaluation measure (MAE, NDCG, and Precision). In particular, we show that some common user-centric strategies may actually degrade the overall performance of a system. Finally, we show that the performance of many common active learning strategies changes significantly when evaluated concurrently with the natural acquisition of ratings in recommender systems.

Categories and Subject Descriptors: H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval
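As a concrete illustration of the abstract's system-wide argument, the sketch below simulates one round of rating elicitation on toy data and compares user-centric with system-wide MAE (one of the measures the paper uses). The bias-model predictor, the popularity-based strategy, and all names, sizes, and parameters here are illustrative assumptions, not the authors' experimental setup.

```python
import numpy as np

# Toy data: a dense "true" rating matrix, a sparse observed training mask,
# and a held-out test mask. Purely illustrative, not the paper's datasets.
rng = np.random.default_rng(0)
n_users, n_items = 50, 40
truth = rng.integers(1, 6, size=(n_users, n_items)).astype(float)  # ratings 1..5
train = rng.random((n_users, n_items)) < 0.15            # observed ratings
test = ~train & (rng.random((n_users, n_items)) < 0.3)   # held-out ratings


def predict(train_mask):
    """Stand-in predictor: global mean plus user- and item-bias terms."""
    obs = np.where(train_mask, truth, 0.0)
    cnt_u = train_mask.sum(axis=1)
    cnt_i = train_mask.sum(axis=0)
    g = obs.sum() / max(train_mask.sum(), 1)
    u = np.where(cnt_u > 0, obs.sum(axis=1) / np.maximum(cnt_u, 1) - g, 0.0)
    i = np.where(cnt_i > 0, obs.sum(axis=0) / np.maximum(cnt_i, 1) - g, 0.0)
    return np.clip(g + u[:, None] + i[None, :], 1.0, 5.0)


def mae(pred, mask):
    """Mean absolute error over the masked (held-out) entries."""
    return np.abs((pred - truth)[mask]).mean()


def elicit_popularity(train_mask, user, k=3):
    """Hypothetical 'popularity' strategy: ask the user about the
    most-rated items they have not rated yet (a common baseline)."""
    counts = train_mask.sum(axis=0)
    candidates = np.where(~train_mask[user])[0]
    return candidates[np.argsort(-counts[candidates])[:k]]


# One elicitation round for a single user.
user = 0
before = predict(train)
train_after = train.copy()
train_after[user, elicit_popularity(train, user)] = True  # the user answers
after = predict(train_after)

# User-centric view: error on the elicited user's own held-out ratings.
u_mask = np.zeros_like(test)
u_mask[user] = test[user]
print(f"user-centric MAE: {mae(before, u_mask):.3f} -> {mae(after, u_mask):.3f}")

# System-wide view: the new ratings also shift the global and item biases,
# so predictions (and errors) move for *all* users, not just this one.
print(f"system-wide MAE: {mae(before, test):.3f} -> {mae(after, test):.3f}")
```

Even in this toy model, the elicited ratings update the global, user, and item bias estimates, so the predictions of every user shift; this spillover is exactly why a per-user evaluation can misjudge a strategy's overall value.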
Pages: 33