Automatic performance evaluation of web search engines using judgments of metasearch engines

Cited by: 8
Author
Sadeghi, Hamid [1 ]
Affiliation
[1] Islamic Azad Univ, Dept Comp Engn, Hashtgerd, Iran
Keywords
Search engines; Performance evaluation; Metasearch engines; Information retrieval; Automation; Information searches; Function evaluation; RANK RETRIEVAL-SYSTEMS; COMPARING RANKINGS; OVERLAP;
DOI
10.1108/14684521111193229
Chinese Library Classification (CLC) number
TP [Automation technology, computer technology]
Subject classification code
0812
Abstract
Purpose - The purpose of this paper is to introduce two new automatic methods for evaluating the performance of search engines. The reported study uses these methods to investigate experimentally which of three popular search engines (Ask.com, Bing and Google) gives the best performance.
Design/methodology/approach - The study assesses the performance of the three search engines. For each one, the weighted average of the similarity degrees between its ranked result list and those of its metasearch engines is measured. These measures are then compared to establish which search engine performs best. To compute the similarity degree between the lists, two measures called the "tendency degree" and the "coverage degree" are introduced; the former assesses a search engine in terms of results presentation and the latter evaluates it in terms of retrieval effectiveness. The performance of the search engines is assessed experimentally on the 50 topics of the 2002 TREC web track, and the effectiveness of the methods is compared with that of human-based approaches.
Findings - Google outperformed the others, followed by Bing and Ask.com. Moreover, significant degrees of consistency - 92.87 percent and 91.93 percent - were found between the automatic and human-based approaches.
Practical implications - The findings of this work could help users select a truly effective search engine. The results also give vendors of web search engines motivation to improve their technology.
Originality/value - The paper presents two novel automatic methods for evaluating the performance of search engines and provides valuable experimental results on three popular ones.
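As an illustration of the general approach described in the abstract, the following Python sketch computes a weighted average of similarity scores between one engine's ranked result list and the lists of several metasearch engines. The paper's exact formulas for the tendency and coverage degrees are not reproduced in this abstract, so the overlap-based coverage_degree, the pairwise rank-agreement tendency_degree, the equal weighting of the two measures, and all names and example data below are illustrative assumptions, not the authors' definitions.

# Illustrative sketch only: the paper's exact "tendency degree" and
# "coverage degree" formulas are not given in this abstract, so simple
# overlap- and rank-agreement-based stand-ins are used here.

def coverage_degree(engine_results, meta_results):
    """Fraction of the metasearch engine's results that the engine also
    retrieved (a stand-in for retrieval effectiveness)."""
    if not meta_results:
        return 0.0
    return len(set(engine_results) & set(meta_results)) / len(meta_results)

def tendency_degree(engine_results, meta_results):
    """Agreement in the relative ordering of the results shared by both
    lists (a stand-in for results-presentation quality)."""
    shared = [url for url in engine_results if url in set(meta_results)]
    if len(shared) < 2:
        return 0.0
    meta_rank = {url: i for i, url in enumerate(meta_results)}
    concordant = total = 0
    for i in range(len(shared)):
        for j in range(i + 1, len(shared)):
            total += 1
            if meta_rank[shared[i]] < meta_rank[shared[j]]:
                concordant += 1
    return concordant / total

def weighted_similarity(engine_results, metasearch_lists, weights):
    """Weighted average, over all metasearch engines, of a combined
    tendency/coverage similarity score (an equal 0.5/0.5 mix is assumed)."""
    total = 0.0
    for meta_results, w in zip(metasearch_lists, weights):
        sim = 0.5 * tendency_degree(engine_results, meta_results) \
            + 0.5 * coverage_degree(engine_results, meta_results)
        total += w * sim
    return total / sum(weights)

# Hypothetical ranked URL lists for a single query.
engine = ["u1", "u2", "u3", "u4"]
meta_a = ["u2", "u1", "u5", "u3"]
meta_b = ["u1", "u3", "u2", "u6"]
print(weighted_similarity(engine, [meta_a, meta_b], weights=[0.6, 0.4]))

In the reported study, such per-query scores would presumably be averaged over the 50 TREC web track topics for each of the three engines before the final comparison.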
Pages: 957-971
Number of pages: 15