Deep Neural Network Regularization for Feature Selection in Learning-to-Rank

Cited by: 22
Authors
Rahangdale, Ashwini [1 ]
Raut, Shital [1 ]
Affiliations
[1] Visvesvaraya Natl Inst Technol, Dept Comp Sci & Engn, Nagpur 440010, Maharashtra, India
Keywords
Deep neural network; feature selection; information retrieval; learning-to-rank; regularization
DOI
10.1109/ACCESS.2019.2902640
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
Learning-to-rank is an emerging area of research with a wide range of applications. Many algorithms have been devised to tackle the learning-to-rank problem, but very few of them employ deep learning. Previous research shows that deep learning yields significant improvements across a variety of applications. The proposed model uses a deep neural network for learning-to-rank in document retrieval. It employs a regularization technique particularly suited to deep neural networks that improves the results significantly. The main aims of the regularization are to optimize the weights of the neural network, to select the relevant features via active neurons at the input layer, and to prune the network by retaining only active neurons in the hidden layers during learning. Specifically, we use group ℓ1 regularization to induce group-level sparsity on the network's connections, where the set of outgoing weights from each neuron forms a group. Network sparsity is measured by the sparsity ratio and compared with learning-to-rank models that adopt an embedded method for feature selection. An extensive experimental evaluation compares the performance of the extended ℓ1 regularization technique against classical regularization techniques. The empirical results confirm that sparse group ℓ1 regularization achieves competitive performance while simultaneously making the network compact with fewer input features. The model is analyzed with respect to evaluation measures such as prediction accuracy, NDCG@n, MAP, and Precision on benchmark datasets, demonstrating improved results over other state-of-the-art methods.
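The abstract describes the penalty only at a high level, so the following is a minimal PyTorch sketch of a sparse group ℓ1 (sparse group lasso) term of the kind described above. The grouping (one group per input feature's outgoing weights), the layer sizes, the hyperparameter names lambda_l1 and lambda_group, the tolerance in sparsity_ratio, and the placeholder loss are all illustrative assumptions, not details taken from the paper.

import torch

def sparse_group_l1(weight: torch.Tensor,
                    lambda_l1: float = 1e-4,
                    lambda_group: float = 1e-3) -> torch.Tensor:
    # Element-wise l1 term: encourages sparsity of individual connections.
    l1_term = weight.abs().sum()
    # Group term: for a Linear weight of shape (out_features, in_features),
    # each column holds the outgoing weights of one input neuron, so an
    # l2 norm per column, summed over columns, drives whole input
    # features to zero (embedded feature selection).
    group_term = weight.norm(p=2, dim=0).sum()
    return lambda_l1 * l1_term + lambda_group * group_term

def sparsity_ratio(weight: torch.Tensor, tol: float = 1e-6) -> float:
    # Fraction of (near-)zero weights; one plausible reading of the
    # "sparsity ratio" the abstract mentions, not the paper's exact
    # definition.
    return (weight.abs() <= tol).float().mean().item()

# Illustrative usage with 46 input features (a common LETOR feature
# count) and a placeholder loss standing in for the ranking loss.
model = torch.nn.Sequential(torch.nn.Linear(46, 64),
                            torch.nn.ReLU(),
                            torch.nn.Linear(64, 1))
scores = model(torch.randn(8, 46))
loss = scores.pow(2).mean() + sparse_group_l1(model[0].weight)
loss.backward()

Adding the penalty for the input layer to the ranking loss lets training itself zero out entire feature columns, which is the embedded feature-selection behavior the abstract describes.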
Pages: 53988-54006
Number of pages: 19