A General Framework for Counterfactual Learning-to-Rank

Cited by: 87
Authors
Agarwal, Aman [1 ]
Takatsu, Kenta [1 ]
Zaitsev, Ivan [1 ]
Joachims, Thorsten [1 ]
Affiliation
[1] Cornell Univ, Ithaca, NY 14850 USA
Source
PROCEEDINGS OF THE 42ND INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL (SIGIR '19) | 2019
Keywords
Learning to rank; presentation bias; counterfactual inference;
DOI
10.1145/3331184.3331202
Chinese Library Classification
TP [Automation technology, computer technology]
Discipline code
0812
Abstract
Implicit feedback (e.g., clicks, dwell time) is an attractive source of training data for Learning-to-Rank, but its naive use leads to learning results that are distorted by presentation bias. For the special case of optimizing average rank for linear ranking functions, however, the recently developed SVM-PropRank method has shown that counterfactual inference techniques can be used to provably overcome the distorting effect of presentation bias. Going beyond this special case, this paper provides a general and theoretically rigorous framework for counterfactual learning-to-rank that enables unbiased training for a broad class of additive ranking metrics (e.g., Discounted Cumulative Gain (DCG)) as well as a broad class of models (e.g., deep networks). Specifically, we derive a relaxation for propensity-weighted rank-based metrics which is subdifferentiable and thus suitable for gradient-based optimization. We demonstrate the effectiveness of this general approach by instantiating two new learning methods. One is a new type of unbiased SVM that optimizes DCG, called SVM PropDCG, and we show how the resulting optimization problem can be solved via the Convex-Concave Procedure (CCP). The other is Deep PropDCG, where the ranking function can be an arbitrary deep network. In addition to the theoretical support, we empirically find that SVM PropDCG significantly outperforms existing linear rankers in terms of DCG. Moreover, the ability to train non-linear ranking functions via Deep PropDCG further improves performance.
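The core idea the abstract describes, correcting click feedback for presentation bias by inverse-propensity weighting a rank-based metric, can be sketched as follows. This is a minimal illustration of an IPS-weighted DCG estimate in the Horvitz-Thompson style, not the paper's SVM PropDCG or Deep PropDCG training objective; the function name `ips_dcg` and its argument layout are hypothetical.

```python
import numpy as np

def ips_dcg(ranks, clicks, propensities):
    """Inverse-propensity-scored DCG estimate from logged click data.

    ranks: 1-based rank of each logged document under the new ranker.
    clicks: 1 if the document was clicked when logged, else 0.
    propensities: estimated probability that the document was examined
        at logging time (must be > 0 wherever clicks == 1).
    """
    ranks = np.asarray(ranks, dtype=float)
    clicks = np.asarray(clicks, dtype=float)
    propensities = np.asarray(propensities, dtype=float)
    # Standard DCG gain 1 / log2(1 + rank) for binary relevance,
    # reweighted by 1 / propensity so that rarely-examined positions
    # count more, which debiases the estimate in expectation.
    gains = clicks / np.log2(1.0 + ranks)
    return float(np.sum(gains / propensities))
```

For example, a click logged with examination propensity 0.5 contributes twice its raw DCG gain, compensating for the clicks that were missed because the document was often not seen.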
Pages: 5 / 14
Number of pages: 10