Learning to Rank from Noisy Data

Cited by: 7
Authors
Ding, Wenkui [1 ]
Geng, Xiubo [2 ]
Zhang, Xu-Dong [1 ]
Affiliations
[1] Tsinghua University, Department of Electronic Engineering, Beijing, People's Republic of China
[2] Yahoo Labs Beijing, Beijing, People's Republic of China
Keywords
Noisy data; robust learning
DOI
10.1145/2576230
CLC Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Learning to rank, which learns a ranking function from training data, has become an emerging research area in information retrieval and machine learning. Most existing work on learning to rank assumes that the training data is clean; in practice, however, this is not always true. The ambiguity of query intent, the lack of domain knowledge, and the vague definition of relevance levels all make it difficult for ordinary annotators to assign reliable relevance labels to some documents. As a result, the relevance labels in learning-to-rank training data usually contain noise. If we ignore this fact, the performance of learning-to-rank algorithms will suffer. In this article, we propose accounting for labeling noise in the process of learning to rank, using a two-step approach to extend existing algorithms to handle noisy training data. In the first step, we estimate the degree of labeling noise for each training document. To this end, we assume that the majority of relevance labels in the training data are reliable, and we use a graphical model to describe the generative process of a training query, the feature vectors of its associated documents, and the relevance labels of those documents. The parameters of the graphical model are learned by maximum likelihood estimation. We then compute the conditional probability of the relevance label given the feature vector of a document: if this probability is large, we regard the degree of labeling noise for the document as small; otherwise, we regard it as large. In the second step, we extend existing learning-to-rank algorithms by incorporating the estimated degree of labeling noise into their loss functions. Specifically, we give larger weights to training documents with smaller degrees of labeling noise and smaller weights to those with larger degrees. As examples, we demonstrate the extensions for McRank, RankSVM, RankBoost, and RankNet. Empirical results on benchmark datasets show that the proposed approach can effectively distinguish noisy documents from clean ones, and that the extended learning-to-rank algorithms achieve better performance than the baselines.
Pages: 21
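
The abstract describes a two-step scheme: estimate P(label | feature vector) from a model fit by maximum likelihood, use that probability as a per-document weight, and fold the weights into a ranking loss. The sketch below is illustrative only, not the authors' implementation: a multinomial logistic regression stands in for the paper's generative graphical model, the product weighting of document pairs is one simple choice among several, and all names (weighted_pair_loss, etc.) are hypothetical.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: 200 documents with 5 features and relevance labels in {0, 1, 2}.
X = rng.normal(size=(200, 5))
true_w = rng.normal(size=(5, 3))
y = np.argmax(X @ true_w, axis=1)
noisy = rng.choice(200, size=20, replace=False)
y[noisy] = rng.integers(0, 3, size=20)  # inject some labeling noise

# Step 1: estimate P(label | feature vector). A multinomial logistic
# regression is used here as a simple stand-in for the paper's generative
# graphical model, which is likewise fit by maximum likelihood.
model = LogisticRegression(max_iter=1000).fit(X, y)
p_label = model.predict_proba(X)[np.arange(len(y)), y]

# A large conditional probability indicates a small degree of labeling
# noise, so the probability itself can serve as a per-document weight.
weights = p_label

# Step 2: fold the weights into a ranking loss. This is a weighted
# RankNet-style logistic loss for one pair (i, j) where document i is
# labeled more relevant than document j; s_i and s_j are model scores.
def weighted_pair_loss(s_i, s_j, w_i, w_j):
    # Weighting the pair by the product of the per-document weights is one
    # plausible choice; the abstract only specifies that documents with
    # lower estimated noise receive larger weights in the loss.
    return w_i * w_j * np.log1p(np.exp(-(s_i - s_j)))

The same weights would multiply per-document loss terms in a pointwise method such as McRank, or per-pair terms in RankSVM and RankBoost, following the weighting idea the abstract describes.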