Constructing Test Collections using Multi-armed Bandits and Active Learning

Cited by: 10
Authors
Rahman, Md Mustafizur [1 ]
Kutlu, Mucahid [2 ]
Lease, Matthew [1 ]
Affiliations
[1] Univ Texas Austin, Austin, TX 78712 USA
[2] TOBB Econ & Tech Univ, Ankara, Turkey
Source
WEB CONFERENCE 2019: PROCEEDINGS OF THE WORLD WIDE WEB CONFERENCE (WWW 2019) | 2019
Keywords
Information Retrieval; Evaluation; Active Learning; Multi-Armed Bandits;
DOI
10.1145/3308558.3313675
Chinese Library Classification
TP301 [Theory and Methods];
Discipline Classification Code
081202;
Abstract
While test collections provide the cornerstone of system-based evaluation in information retrieval, human relevance judging has become prohibitively expensive as collections have grown ever larger. Consequently, intelligently deciding which documents to judge has become increasingly important. We propose a two-phase approach to intelligent judging across topics which does not require document rankings from a shared task. In the first phase, we dynamically select the next topic to judge via a multi-armed bandit method. In the second phase, we employ active learning to select which document to judge next for that topic. Experiments on three TREC collections (varying in scarcity of relevant documents) achieve τ ≈ 0.90 rank correlation for P@10 and find 90% of the relevant documents at 48% of the original budget. To support reproducibility and follow-on work, we have shared our code online(1).
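The two-phase loop described in the abstract can be sketched as follows. This is an illustrative toy, not the authors' implementation: the topic names, pool sizes, and relevance probabilities are synthetic, UCB1 is assumed as the bandit policy, and a random pop stands in for the paper's active-learning document selector; the reward signal (whether the judged document turned out relevant) follows the abstract's intuition only.

```python
import math
import random

random.seed(0)

# Synthetic topics (hypothetical): each maps to a pool of unjudged documents,
# each document carrying a hidden relevance label drawn with probability p.
topics = {
    t: [(f"d{t}_{i}", random.random() < p) for i in range(30)]
    for t, p in [("t1", 0.5), ("t2", 0.1), ("t3", 0.3)]
}

counts = {t: 0 for t in topics}     # times each topic (arm) was judged
rewards = {t: 0.0 for t in topics}  # relevant judgments observed per topic

def select_topic(step):
    """Phase 1: UCB1 over topics; reward = judged document was relevant."""
    for t in topics:  # play each arm once before applying the UCB rule
        if counts[t] == 0 and topics[t]:
            return t
    def ucb(t):
        return rewards[t] / counts[t] + math.sqrt(2 * math.log(step) / counts[t])
    return max((t for t in topics if topics[t]), key=ucb)

def select_document(pool):
    """Phase 2 stand-in: pop a document from the topic's pool at random.
    (The paper trains an active-learning classifier per topic instead.)"""
    return pool.pop(random.randrange(len(pool)))

budget, found = 60, 0
for step in range(1, budget + 1):
    t = select_topic(step)
    _, relevant = select_document(topics[t])
    counts[t] += 1
    rewards[t] += relevant
    found += relevant

print(found, sum(counts.values()))
```

Under this sketch the bandit gradually shifts the judging budget toward topics whose judged documents keep turning out relevant, which is the effect the paper exploits to find most relevant documents well under the full budget.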
Pages: 3158-3164
Page count: 7
Related Papers
50 items total
  • [1] Active Learning in Multi-armed Bandits
    Antos, Andras
    Grover, Varun
    Szepesvari, Csaba
    ALGORITHMIC LEARNING THEORY, PROCEEDINGS, 2008, 5254 : 287 - +
  • [2] Falcon: Fair Active Learning using Multi-armed Bandits
    Tae, Ki Hyun
    Zhang, Hantian
    Park, Jaeyoung
    Rong, Kexin
    Whang, Steven Euijong
    PROCEEDINGS OF THE VLDB ENDOWMENT, 2024, 17 (05): : 952 - 965
  • [3] Quantum Reinforcement Learning for Multi-Armed Bandits
    Liu, Yi-Pei
    Li, Kuo
    Cao, Xi
    Jia, Qing-Shan
    Wang, Xu
    2022 41ST CHINESE CONTROL CONFERENCE (CCC), 2022, : 5675 - 5680
  • [4] TRANSFER LEARNING FOR CONTEXTUAL MULTI-ARMED BANDITS
    Cai, Changxiao
    Cai, T. Tony
    Li, Hongzhe
    ANNALS OF STATISTICS, 2024, 52 (01): : 207 - 232
  • [5] An empirical evaluation of active inference in multi-armed bandits
    Markovic, Dimitrije
    Stojic, Hrvoje
    Schwoebel, Sarah
    Kiebel, Stefan J.
    NEURAL NETWORKS, 2021, 144 : 229 - 246
  • [6] Upper-Confidence-Bound Algorithms for Active Learning in Multi-armed Bandits
    Carpentier, Alexandra
    Lazaric, Alessandro
    Ghavamzadeh, Mohammad
    Munos, Remi
    Auer, Peter
    ALGORITHMIC LEARNING THEORY, 2011, 6925 : 189 - +
  • [7] On Kernelized Multi-armed Bandits
    Chowdhury, Sayak Ray
    Gopalan, Aditya
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 70, 2017, 70
  • [8] Regional Multi-Armed Bandits
    Wang, Zhiyang
    Zhou, Ruida
    Shen, Cong
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 84, 2018, 84
  • [9] Multi-armed Bandits with Compensation
    Wang, Siwei
    Huang, Longbo
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31
  • [10] Federated Multi-Armed Bandits
    Shi, Chengshuai
    Shen, Cong
    THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 9603 - 9611