Revisiting the evaluation of diversified search evaluation metrics with user preferences

Cited by: 2
Authors
Chen, Fei [1 ]
Liu, Yiqun [1 ]
Dou, Zhicheng [2 ]
Xu, Keyang [1 ]
Cao, Yujie [1 ]
Zhang, Min [1 ]
Ma, Shaoping [1 ]
Affiliations
[1] Tsinghua University, Beijing
[2] Renmin University of China, Beijing
Source
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | 2014 / Vol. 8870
Keywords
Information retrieval
DOI
10.1007/978-3-319-12844-3_5
Abstract
To validate the credibility of diversity evaluation metrics, a number of methods that “evaluate evaluation metrics” have been adopted in diversified search evaluation studies, such as Kendall’s τ, Discriminative Power, and the Intuitiveness Test. These methods have been widely used and have provided much insight into the effectiveness of evaluation metrics. However, they also rely on assumed user behaviors or statistical assumptions and do not take users’ actual search preferences into consideration. With multi-grade user preference judgments collected for diversified search result lists displayed side by side, we take user preferences as the ground truth to investigate the evaluation of diversity metrics. We find that user preferences at the subtopic level yield results similar to those at the topic level, which means that in future experiments we can use topic-level user preferences with much less human effort. We further find that most existing evaluation metrics correlate well with user preferences for result lists with large performance differences, no matter whether the difference is detected by the metric or by the users. Based on these findings, we propose a preference-weighted correlation, the Multi-grade User Preference (MUP) method, to evaluate diversity metrics based on user preferences. The experimental results reveal that MUP evaluates diversity metrics from the perspective of real users, which may differ from that of other methods. In addition, we find that the relevance of the search results is more important than their diversity in the diversified search evaluation of our experiments. © Springer International Publishing Switzerland 2014.
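The idea of comparing a metric's ordering of result lists against users' pairwise preferences can be illustrated with a small sketch. The Python snippet below computes a plain Kendall's τ-style agreement over run pairs and an illustrative preference-weighted variant in which pairs with stronger user preferences count more. All run names, metric scores, and preference grades are hypothetical, and the weighted formula is only a sketch of the general idea of a preference-weighted correlation, not the MUP definition from the paper.

```python
# Minimal sketch (not the paper's exact MUP formulation): compare how a
# diversity metric orders run pairs against multi-grade user preferences.
# All data below are hypothetical illustrations.

# metric_scores[run] = score assigned by some diversity metric (hypothetical values)
metric_scores = {"runA": 0.62, "runB": 0.55, "runC": 0.48, "runD": 0.40}

# user_pref[(run_i, run_j)] = signed preference on a multi-grade scale:
# positive means users prefer run_i over run_j; the magnitude is the grade (hypothetical).
user_pref = {
    ("runA", "runB"): +2,
    ("runA", "runC"): +3,
    ("runA", "runD"): +3,
    ("runB", "runC"): -1,
    ("runB", "runD"): +2,
    ("runC", "runD"): +1,
}

def kendall_tau(scores, prefs):
    """Plain Kendall's tau over run pairs: +1 for each pair where the metric
    and the users agree on the ordering, -1 where they disagree."""
    agree = 0
    total = 0
    for (i, j), p in prefs.items():
        if p == 0:
            continue  # skip pairs the users judged as ties
        metric_sign = 1 if scores[i] > scores[j] else -1
        user_sign = 1 if p > 0 else -1
        agree += metric_sign * user_sign
        total += 1
    return agree / total

def preference_weighted_agreement(scores, prefs):
    """Illustrative preference-weighted variant: each pair contributes in
    proportion to the strength of the users' preference, so pairs that users
    judge very differently count more. A sketch of the general idea only."""
    num = 0.0
    denom = 0.0
    for (i, j), p in prefs.items():
        if p == 0:
            continue
        metric_sign = 1 if scores[i] > scores[j] else -1
        user_sign = 1 if p > 0 else -1
        weight = abs(p)
        num += weight * metric_sign * user_sign
        denom += weight
    return num / denom

print("Kendall's tau:", kendall_tau(metric_scores, user_pref))
print("Preference-weighted agreement:", preference_weighted_agreement(metric_scores, user_pref))
```

For the hypothetical data above, the unweighted agreement is about 0.67 and the weighted agreement about 0.83, showing how weighting pairs by preference strength can change the assessment of a metric.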
Pages: 48-59
Page count: 11
References (19 in total)
[1] Agrawal R., Gollapudi S., Halverson A., Ieong S., Diversifying search results, Proc. of ACM WSDM 2009, pp. 1043-1052, (2009)
[2] Amigo E., Gonzalo J., Verdejo F., A general evaluation measure for document organization tasks, Proc. of ACM SIGIR 2013, pp. 643-652, (2013)
[3] Ashkan A., Clarke C.L.A., On the informativeness of cascade and intent-aware effectiveness measures, Proc. of ACM WWW 2011, Hyderabad, India, pp. 407-416, (2011)
[4] Aslam J.A., Pavlu V., Savell R., A unified model for metasearch, pooling, and system evaluation, Proc. of ACM CIKM 2003, pp. 484-491, (2003)
[5] Buckley C., Voorhees E.M., Retrieval evaluation with incomplete information, Proc. of ACM SIGIR 2004, pp. 25-32, (2004)
[6] Chapelle O., Metzler D., Zhang Y., Grinspan P., Expected reciprocal rank for graded relevance, Proc. of ACM CIKM 2009, pp. 621-630, (2009)
[7] Clarke C.L.A., Kolla M., Cormack G.V., Vechtomova O., Novelty and diversity in information retrieval evaluation, Proc. of ACM SIGIR 2008, pp. 659-666, (2008)
[8] Clarke C.L.A., Kolla M., Vechtomova O., An effectiveness measure for ambiguous and underspecified queries, ICTIR 2009, LNCS 5766, pp. 188-199, (2009)
[9] Kendall M., A new measure of rank correlation, Biometrika, 30, pp. 81-89, (1938)
[10] Moffat A., Seven numeric properties of effectiveness metrics, AIRS 2013, LNCS 8281, pp. 1-12, (2013)