Doubly Robust Off-Policy Evaluation for Ranking Policies under the Cascade Behavior Model

Cited by: 23
Authors
Kiyohara, Haruka [1 ]
Saito, Yuta [2 ]
Matsuhiro, Tatsuya [3 ]
Narita, Yusuke [4 ]
Shimizu, Nobuyuki [3 ]
Yamamoto, Yasuo [3 ]
Affiliations
[1] Tokyo Inst Technol, Tokyo, Japan
[2] Cornell Univ, Ithaca, NY 14853 USA
[3] Yahoo Japan Corp, Tokyo, Japan
[4] Yale Univ, New Haven, CT 06520 USA
Source
WSDM'22: PROCEEDINGS OF THE FIFTEENTH ACM INTERNATIONAL CONFERENCE ON WEB SEARCH AND DATA MINING, 2022
Keywords
off-policy evaluation; slate recommendation; doubly robust; inverse propensity score; cascade model
DOI
10.1145/3488560.3498380
CLC classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
In real-world recommender systems and search engines, optimizing ranking decisions to present a ranked list of relevant items is critical. Off-policy evaluation (OPE) for ranking policies is thus attracting growing interest because it enables performance estimation of new ranking policies using only logged data. Although OPE in contextual bandits has been studied extensively, its naive application to the ranking setting faces a critical variance issue due to the huge item space. To tackle this problem, previous studies introduce assumptions on user behavior to make the combinatorial item space tractable. However, an unrealistic assumption may, in turn, cause serious bias. Therefore, appropriately controlling the bias-variance tradeoff by imposing a reasonable assumption is key to successful OPE of ranking policies. To achieve a well-balanced bias-variance tradeoff, we propose the Cascade Doubly Robust estimator, which builds on the cascade assumption that a user interacts with items sequentially from the top position of a ranking. We show that the proposed estimator is unbiased in more cases than existing estimators that make stronger assumptions on user behavior. Furthermore, compared to a previous estimator based on the same cascade assumption, the proposed estimator reduces the variance by leveraging a control variate. Comprehensive experiments on both synthetic and real-world e-commerce data demonstrate that our estimator leads to more accurate OPE than existing estimators in a variety of settings.
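The doubly robust construction summarized in the abstract can be illustrated with a minimal sketch. This is not the authors' reference implementation: the array shapes, the precomputed per-slot propensities `pi_b`/`pi_e`, and the expectation term `q_hat_exp` (the evaluation-policy expectation of the Q-model at each slot, given the logged prefix) are simplifying assumptions made here; the general idea is that per-slot importance-weight ratios are multiplied cumulatively down the ranking (the cascade assumption), and a Q-model acts as a control variate at each slot.

```python
import numpy as np

def cascade_dr_estimate(pi_b, pi_e, rewards, q_hat, q_hat_exp):
    """Cascade-DR-style point estimate of an evaluation policy's value.

    pi_b, pi_e : (n, K) probabilities of the logged action at each slot
                 under the behavior / evaluation policy.
    rewards    : (n, K) observed position-wise rewards.
    q_hat      : (n, K) estimated Q-value of the logged action prefix
                 (the control variate).
    q_hat_exp  : (n, K) expectation of the Q-model over the evaluation
                 policy's action at slot k, given the logged prefix.
    """
    # Cascade importance weight w_{1:k}: product of per-slot ratios up to k.
    w = np.cumprod(pi_e / pi_b, axis=1)
    # w_{1:k-1}, with the convention w_{1:0} = 1 at the first slot.
    w_prev = np.concatenate([np.ones((w.shape[0], 1)), w[:, :-1]], axis=1)
    # DR decomposition: importance-weighted residual plus baseline term.
    per_round = (w * (rewards - q_hat) + w_prev * q_hat_exp).sum(axis=1)
    return float(per_round.mean())
```

A useful sanity check: when the evaluation policy equals the logging policy and `q_hat_exp == q_hat`, all weights are 1 and the residual and baseline terms cancel, so the estimate collapses to the empirical mean of the total per-ranking reward.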
Pages: 487-497
Page count: 11