An in-depth investigation on the behavior of measures to quantify reproducibility

Cited by: 5
Authors
Maistro, Maria [1 ]
Breuer, Timo [2 ]
Schaer, Philipp [2 ]
Ferro, Nicola [3 ]
Affiliations
[1] Univ Copenhagen, Copenhagen, Denmark
[2] TH Koln Univ Appl Sci, Cologne, Germany
[3] Univ Padua, Padua, Italy
Funding
European Union Horizon 2020
Keywords
Reproducibility; Information retrieval; Evaluation;
DOI
10.1016/j.ipm.2023.103332
CLC Classification Number
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
Science is facing a so-called reproducibility crisis, where researchers struggle to repeat experiments and to get the same or comparable results. This represents a fundamental problem in any scientific discipline because reproducibility lies at the very basis of the scientific method. A central methodological question is how to measure reproducibility and how to interpret different measures. In Information Retrieval (IR), current practices to measure reproducibility rely mainly on comparing averaged scores. If the reproduced score is close enough to the original one, the reproducibility experiment is deemed successful, even though identical scores can still be produced by entirely different result lists. Therefore, this paper focuses on measures to quantify reproducibility in IR and on their behavior. We present a critical analysis of IR reproducibility measures by synthetically generating runs in a controlled experimental setting, which allows us to control the amount of reproducibility error. These synthetic runs are generated by a deterioration algorithm based on swaps and replacements of documents in ranked lists. We investigate the behavior of different reproducibility measures with these synthetic runs in three different scenarios. Moreover, we propose a normalized version of the Root Mean Square Error (RMSE) to better quantify reproducibility. Experimental results show that a single score is not enough to decide whether an experiment is successfully reproduced, because such a score depends on the type of effectiveness measure and on the performance of the original run. This study highlights how challenging it can be to reproduce experimental results and to quantify the amount of reproducibility.
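The abstract only names the ingredients of the experimental pipeline. The Python sketch below illustrates one plausible reading of them; it is not the paper's actual algorithm. The `deteriorate` function, the uniformly random choice of swap positions, and the max-error normalization of RMSE are all assumptions for illustration, with per-topic effectiveness scores assumed to lie in [0, 1] (as for AP or nDCG).

```python
import math
import random

def deteriorate(ranking, corpus, n_swaps=0, n_replacements=0, seed=None):
    """Degrade a ranked list of document ids to inject reproducibility error.

    A swap exchanges the documents at two randomly chosen rank positions;
    a replacement substitutes a ranked document with one drawn from `corpus`
    that is not yet in the list. The two counts control how far the
    synthetic run drifts from the original one. (Hypothetical sketch, not
    the deterioration algorithm from the paper.)
    """
    rng = random.Random(seed)
    degraded = list(ranking)
    for _ in range(n_swaps):
        i, j = rng.sample(range(len(degraded)), 2)
        degraded[i], degraded[j] = degraded[j], degraded[i]
    in_list = set(degraded)
    candidates = [d for d in corpus if d not in in_list]
    rng.shuffle(candidates)
    for _ in range(min(n_replacements, len(candidates))):
        degraded[rng.randrange(len(degraded))] = candidates.pop()
    return degraded

def rmse(original, reproduced):
    """Plain RMSE between the per-topic scores of two runs."""
    return math.sqrt(sum((o - r) ** 2 for o, r in zip(original, reproduced))
                     / len(original))

def normalized_rmse(original, reproduced):
    """RMSE divided by the largest RMSE attainable given the original
    scores, so the result lies in [0, 1] regardless of how well the
    original run performed. The normalizer is an assumption: the worst
    per-topic error is max(o, 1 - o) when scores are bounded by [0, 1]."""
    worst = math.sqrt(sum(max(o, 1.0 - o) ** 2 for o in original)
                      / len(original))
    return rmse(original, reproduced) / worst if worst > 0 else 0.0

# Example: degrade a short run, then compare per-topic scores of two runs.
run = ["d12", "d5", "d31", "d2", "d44"]
noisy = deteriorate(run, corpus=[f"d{i}" for i in range(100)],
                    n_swaps=2, n_replacements=1, seed=42)
print(noisy)
print(normalized_rmse([0.42, 0.30, 0.55], [0.40, 0.33, 0.51]))
```

Fixing `seed` makes the deterioration deterministic, which mirrors the controlled setting the abstract describes: the injected error can be dialed up gradually and each measure's response observed.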
Pages: 39