Multiple Queries as Bandit Arms

Cited by: 7
Authors
Li, Cheng [1]
Resnick, Paul [1]
Mei, Qiaozhu [1]
Affiliation
[1] Univ Michigan, Sch Informat, Ann Arbor, MI 48109 USA
Source
CIKM'16: PROCEEDINGS OF THE 2016 ACM CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT | 2016
Funding
US National Science Foundation
Keywords
Query pooling; multi-armed bandits
DOI
10.1145/2983323.2983816
Chinese Library Classification
TP [automation technology, computer technology]
Discipline code
0812
Abstract
Existing retrieval systems rely on a single active query to pull documents from the index. Relevance feedback may be used to iteratively refine the query, but only one query is active at a time. If the user's information need has multiple aspects, the query must represent the union of these aspects. We consider a new paradigm of retrieval where multiple queries are kept "active" simultaneously. In the presence of rate limits, the active queries take turns accessing the index to retrieve another "page" of results. Turns are assigned by a multi-armed bandit based on user feedback. This allows the system to explore which queries return more relevant results and to exploit the best ones. In empirical tests, query pools outperform solo, combined queries. Significant improvement is observed both when the subtopic queries are known in advance and when the queries are generated in a user-interactive process.
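The turn-taking mechanism described in the abstract can be sketched as a standard multi-armed bandit over queries. The sketch below is an illustrative assumption, not the paper's actual algorithm: it uses the classic UCB1 index rule, with each active query as an arm, each "pull" retrieving that query's next page of results, and the reward taken as the relevance feedback for that page (e.g., the fraction of relevant documents). The function name `ucb1_query_pool` and the `fetch_page` callback are hypothetical.

```python
import math

def ucb1_query_pool(queries, fetch_page, rounds):
    """Treat each query as a bandit arm. Each turn, the UCB1 rule
    selects a query, retrieves its next page via fetch_page(query,
    page_index), and records the returned relevance reward."""
    pulls = {q: 0 for q in queries}      # pages retrieved per query
    rewards = {q: 0.0 for q in queries}  # cumulative relevance per query
    history = []
    for t in range(1, rounds + 1):
        # Pull each arm once before applying the UCB1 index.
        untried = [q for q in queries if pulls[q] == 0]
        if untried:
            q = untried[0]
        else:
            # UCB1: empirical mean plus an exploration bonus that
            # shrinks as an arm accumulates pulls.
            q = max(queries, key=lambda a: rewards[a] / pulls[a]
                    + math.sqrt(2.0 * math.log(t) / pulls[a]))
        r = fetch_page(q, pulls[q])  # reward for the next page of q
        pulls[q] += 1
        rewards[q] += r
        history.append((q, r))
    return pulls, rewards, history
```

With a simulated index where one subtopic query consistently returns relevant pages and another does not, the pool quickly concentrates its turns on the better query while still occasionally re-checking the weaker one, which is the explore/exploit behavior the abstract describes.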
Pages: 1089-1098
Page count: 10
Related papers
50 in total
  • [11] OPTIMAL STOPPING PROBLEMS FOR MULTIARMED BANDIT PROCESSES WITH ARMS INDEPENDENCE
    YOSHIDA, Y
    COMPUTERS & MATHEMATICS WITH APPLICATIONS, 1993, 26 (12) : 47 - 60
  • [12] Power-of-2-Arms for Bandit Learning With Switching Costs
    Shi, Ming
    Lin, Xiaojun
    Jiao, Lei
    PROCEEDINGS OF THE 2022 THE TWENTY-THIRD INTERNATIONAL SYMPOSIUM ON THEORY, ALGORITHMIC FOUNDATIONS, AND PROTOCOL DESIGN FOR MOBILE NETWORKS AND MOBILE COMPUTING, MOBIHOC 2022, 2022, : 131 - 140
  • [13] INFINITE-ARMS BANDIT: OPTIMALITY VIA CONFIDENCE BOUNDS
    Chan, Hock Peng
    Hu, Shouri
    STATISTICA SINICA, 2022, 32 (03) : 1683 - 1699
  • [14] Optimal Clustering with Noisy Queries via Multi-Armed Bandit
    Xia, Jinghui
    Huang, Zengfeng
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022
  • [15] Bandit and covariate processes, with finite or non-denumerable set of arms
    Lai, Tze Leung
    Sklar, Michael Benjamin
    Xu, Huanzhong
    STOCHASTIC PROCESSES AND THEIR APPLICATIONS, 2022, 150 : 1222 - 1237
  • [16] A unified framework for bandit multiple testing
    Xu, Ziyu
    Wang, Ruodu
    Ramdas, Aaditya
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [17] Combinatorial Multi-Armed Bandit and Its Extension to Probabilistically Triggered Arms
    Chen, Wei
    Wang, Yajun
    Yuan, Yang
    Wang, Qinshi
    JOURNAL OF MACHINE LEARNING RESEARCH, 2016, 17
  • [19] The Unreasonable Effectiveness of Greedy Algorithms in Multi-Armed Bandit with Many Arms
    Bayati, Mohsen
    Hamidi, Nima
    Johari, Ramesh
    Khosravi, Khashayar
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [20] Submodular Bandit Problem Under Multiple Constraints
    Takemori, Sho
    Sato, Masahiro
    Sonoda, Takashi
    Singh, Janmajay
    Ohkuma, Tomoko
    CONFERENCE ON UNCERTAINTY IN ARTIFICIAL INTELLIGENCE (UAI 2020), 2020, 124 : 191 - 200