Constructing Test Collections using Multi-armed Bandits and Active Learning

Cited: 12
Authors
Rahman, Md Mustafizur [1 ]
Kutlu, Mucahid [2 ]
Lease, Matthew [1 ]
Affiliations
[1] Univ Texas Austin, Austin, TX 78712 USA
[2] TOBB Econ & Tech Univ, Ankara, Turkey
Source
WEB CONFERENCE 2019: PROCEEDINGS OF THE WORLD WIDE WEB CONFERENCE (WWW 2019) | 2019
Keywords
Information Retrieval; Evaluation; Active Learning; Multi-Armed Bandits;
DOI
10.1145/3308558.3313675
CLC Classification
TP301 [Theory, Methods];
Discipline Code
081202;
Abstract
While test collections provide the cornerstone of system-based evaluation in information retrieval, human relevance judging has become prohibitively expensive as collections have grown ever larger. Consequently, intelligently deciding which documents to judge has become increasingly important. We propose a two-phase approach to intelligent judging across topics which does not require document rankings from a shared task. In the first phase, we dynamically select the next topic to judge via a multi-armed bandit method. In the second phase, we employ active learning to select which document to judge next for that topic. Experiments on three TREC collections (varying in scarcity of relevant documents) achieve τ ≈ 0.90 correlation for P@10 ranking and find 90% of the relevant documents at 48% of the original budget. To support reproducibility and follow-on work, we have shared our code online.
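The two-phase loop in the abstract (a bandit chooses the next topic; a per-topic selector chooses the next document to judge) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the epsilon-greedy bandit, the reward definition (1 if the judged document is relevant), and the stand-in document selector (pop the next pooled document, where real active learning would rank by classifier score or uncertainty) are all assumptions for exposition.

```python
import random

def select_topic(counts, rewards, epsilon=0.1):
    """Phase 1 (assumed epsilon-greedy bandit): with probability epsilon
    explore a random topic; otherwise exploit the topic with the highest
    observed mean reward (fraction of judged documents found relevant)."""
    if random.random() < epsilon or not any(counts.values()):
        return random.choice(list(counts))
    return max(counts, key=lambda t: rewards[t] / max(counts[t], 1))

def judge_collection(pools, oracle, budget, epsilon=0.1):
    """Run the two-phase judging loop until the judging budget is spent.
    pools: topic -> list of unjudged doc ids; oracle(topic, doc) -> 0/1
    relevance label (stands in for a human assessor)."""
    counts = {t: 0 for t in pools}
    rewards = {t: 0 for t in pools}
    qrels = []
    for _ in range(budget):
        topic = select_topic(counts, rewards, epsilon)
        if not pools[topic]:
            continue  # topic exhausted; bandit will drift to other arms
        # Phase 2 stand-in: active learning would instead rank pools[topic]
        # by a per-topic classifier and pick the top-scoring document.
        doc = pools[topic].pop(0)
        rel = oracle(topic, doc)
        counts[topic] += 1
        rewards[topic] += rel
        qrels.append((topic, doc, rel))
    return qrels
```

The bandit's reward signal makes topic selection adaptive: topics that keep yielding relevant documents are revisited more often, which is what lets the loop find most relevant documents well under the full judging budget.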
Pages: 3158-3164
Page count: 7
Related Papers
(50 items total)
[41]   Scheduling for Massive MIMO With Hybrid Precoding Using Contextual Multi-Armed Bandits [J].
Mauricio, Weskley V. F. ;
Maciel, Tarcisio Ferreira ;
Klein, Anja ;
Marques Lima, Francisco Rafael .
IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2022, 71 (07) :7397-7413
[42]   Dynamic Estimation of Rater Reliability in Subjective Tasks Using Multi-Armed Bandits [J].
Tarasov, Alexey ;
Delany, Sarah Jane ;
Mac Namee, Brian .
Proceedings of 2012 ASE/IEEE International Conference on Privacy, Security, Risk and Trust and 2012 ASE/IEEE International Conference on Social Computing (SocialCom/PASSAT 2012), 2012, :979-980
[43]   Personalizing Natural Language Understanding using Multi-armed Bandits and Implicit Feedback [J].
Moerchen, Fabian ;
Ernst, Patrick ;
Zappella, Giovanni .
CIKM '20: PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON INFORMATION & KNOWLEDGE MANAGEMENT, 2020, :2661-2668
[44]   PAC models in stochastic multi-objective multi-armed bandits [J].
Drugan, Madalina M. .
PROCEEDINGS OF THE 2017 GENETIC AND EVOLUTIONARY COMPUTATION CONFERENCE (GECCO'17), 2017, :409-416
[45]   Multi-User Multi-Armed Bandits for Uncoordinated Spectrum Access [J].
Bande, Meghana ;
Veeravalli, Venugopal V. .
2019 INTERNATIONAL CONFERENCE ON COMPUTING, NETWORKING AND COMMUNICATIONS (ICNC), 2019, :653-657
[46]   Multi-armed bandits for adjudicating documents in pooling-based evaluation of information retrieval systems [J].
Losada, David E. ;
Parapar, Javier ;
Barreiro, Alvaro .
INFORMATION PROCESSING & MANAGEMENT, 2017, 53 (05) :1005-1025
[47]   Unreliable Multi-Armed Bandits: A Novel Approach to Recommendation Systems [J].
Ravi, Aditya Narayan ;
Poduval, Pranav ;
Moharir, Sharayu .
2020 INTERNATIONAL CONFERENCE ON COMMUNICATION SYSTEMS & NETWORKS (COMSNETS), 2020,
[48]   Best-Arm Identification in Correlated Multi-Armed Bandits [J].
Gupta, Samarth ;
Joshi, Gauri ;
Yagan, Osman .
IEEE JOURNAL ON SELECTED AREAS IN INFORMATION THEORY, 2021, 2 (02) :549-563
[49]   Robust Risk-Averse Stochastic Multi-armed Bandits [J].
Maillard, Odalric-Ambrym .
ALGORITHMIC LEARNING THEORY (ALT 2013), 2013, 8139 :218-233
[50]   Multi-armed Bandits with Generalized Temporally-Partitioned Rewards [J].
van den Broek, Ronald C. ;
Litjens, Rik ;
Sagis, Tobias ;
Verbeeke, Nina ;
Gajane, Pratik .
ADVANCES IN INTELLIGENT DATA ANALYSIS XXII, PT I, IDA 2024, 2024, 14641 :41-52