Automatic search engine performance evaluation based on user behavior analysis

Cited by: 0
Authors
Liu, Yi-Qun [1,2]
Cen, Rong-Wei [1,2]
Zhang, Min [1,2]
Ru, Li-Yun [3]
Ma, Shao-Ping [1,2]
Affiliations
[1] Tsinghua National Laboratory for Information Science and Technology, Department of Computer Science and Technology, Tsinghua University
[2] State Key Laboratory of Intelligent Technology and Systems, Department of Computer Science and Technology, Tsinghua University
[3] Sohu Inc. Research and Development Center
Source
Ruan Jian Xue Bao/Journal of Software | 2008, Vol. 19, No. 11
Keywords
Performance evaluation; User behavior analysis; Web information retrieval
DOI
10.3724/sp.j.1001.2008.03023
Abstract
Based on click-through data analysis, an automatic search engine performance evaluation method is proposed. The method automatically generates navigational-type query topics and their corresponding answers from search users' querying and clicking behavior. Experimental results on the user logs of a commercial Chinese search engine show that the automatic method yields evaluation results similar to those of traditional assessor-based approaches, while providing timely results with little human effort.
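To make the pipeline described in the abstract concrete, below is a minimal Python sketch of one plausible realization. It assumes a click-entropy heuristic for spotting navigational queries, a simplified (query, clicked_url) log format, an entropy threshold of 0.5, and mean reciprocal rank as the performance metric; the function names, the threshold, and the heuristic itself are illustrative assumptions, since the paper's actual topic-mining and scoring details are not given in this record.

```python
import math
from collections import Counter, defaultdict

def click_entropy(clicked_urls):
    """Shannon entropy (in bits) of the click distribution for one query.
    Navigational queries tend to concentrate clicks on a single URL,
    so low entropy suggests navigational intent (assumed heuristic)."""
    counts = Counter(clicked_urls)
    total = len(clicked_urls)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def mine_navigational_topics(click_log, entropy_threshold=0.5):
    """From (query, clicked_url) records, keep queries whose clicks are
    concentrated enough, and take the most-clicked URL as the answer."""
    clicks_by_query = defaultdict(list)
    for query, url in click_log:
        clicks_by_query[query].append(url)
    return {
        query: Counter(urls).most_common(1)[0][0]
        for query, urls in clicks_by_query.items()
        if click_entropy(urls) <= entropy_threshold
    }

def evaluate_engine(topics, ranked_results):
    """Score an engine by mean reciprocal rank: for each mined topic, find
    the rank of the mined answer URL in the result list and average 1/rank."""
    scores = []
    for query, answer in topics.items():
        rr = 0.0
        for rank, url in enumerate(ranked_results.get(query, []), start=1):
            if url == answer:
                rr = 1.0 / rank
                break
        scores.append(rr)
    return sum(scores) / len(scores) if scores else 0.0

# Toy usage: 9 of 10 clicks for "sohu" land on the homepage, so the query
# qualifies as navigational and the homepage becomes its answer.
log = [("sohu", "http://www.sohu.com")] * 9 + [("sohu", "http://news.example.org")]
topics = mine_navigational_topics(log)
print(evaluate_engine(topics, {"sohu": ["http://www.sohu.com"]}))  # 1.0
```

The sketch also shows why navigational queries suit automatic evaluation: their correct answer is a single URL that aggregated user clicks identify reliably, so no human assessor is needed to build the test collection.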
Pages: 3023-3032
Page count: 9