Towards reproducibility in recommender-systems research

Cited by: 40
Authors
Beel, Joeran [1 ,5 ]
Breitinger, Corinna [1 ,2 ]
Langer, Stefan [1 ,3 ]
Lommatzsch, Andreas [4 ]
Gipp, Bela [1 ,5 ]
Affiliations
[1] Docear, Constance, Germany
[2] Linnaeus Univ, Sch Comp Sci Phys & Math, S-35195 Vaxjo, Sweden
[3] Otto Von Guericke Univ, Dept Comp Sci, D-39106 Magdeburg, Germany
[4] Tech Univ Berlin, DAI Lab, Ernst Reuter Pl 7, D-10587 Berlin, Germany
[5] Univ Konstanz, Dept Informat Sci, Universitatsstr 10, D-78464 Constance, Germany
Keywords
Recommender systems; Evaluation; Experimentation; Reproducibility
DOI
10.1007/s11257-016-9174-x
Chinese Library Classification
TP3 [Computing technology; computer technology]
Subject classification code
0812
Abstract
Numerous recommendation approaches are in use today. However, comparing their effectiveness is a challenging task because evaluation results are rarely reproducible. In this article, we examine the challenge of reproducibility in recommender-system research. We conduct experiments using Plista's news recommender system and Docear's research-paper recommender system. The experiments show that there are large discrepancies in the effectiveness of identical recommendation approaches in only slightly different scenarios, as well as large discrepancies for slightly different approaches in identical scenarios. For example, in one news-recommendation scenario, the performance of a content-based filtering approach was twice as high as that of the second-best approach, while in another scenario the same content-based filtering approach was the worst-performing approach. We found several determinants that may contribute to the large discrepancies observed in recommendation effectiveness. Determinants we examined include user characteristics (gender and age), datasets, weighting schemes, the time at which recommendations were shown, and user-model size. Some of the determinants have interdependencies. For instance, the optimal size of an algorithm's user model depended on users' age. Since minor variations in approaches and scenarios can lead to significant changes in a recommendation approach's performance, ensuring reproducibility of experimental results is difficult. We discuss these findings and conclude that to ensure reproducibility, the recommender-system community needs to (1) survey other research fields and learn from them, (2) find a common understanding of reproducibility, (3) identify and understand the determinants that affect reproducibility, (4) conduct more comprehensive experiments, (5) modernize publication practices, (6) foster the development and use of recommendation frameworks, and (7) establish best-practice guidelines for recommender-systems research.
Pages: 69-101
Page count: 33