Improving Negative Sampling for Word Representation using Self-embedded Features

Cited by: 21
Authors
Chen, Long [1 ]
Yuan, Fajie [1 ]
Jose, Joemon M. [1 ]
Zhang, Weinan [2 ]
Affiliations
[1] Univ Glasgow, Glasgow, Lanark, Scotland
[2] Shanghai Jiao Tong Univ, Shanghai, Peoples R China
Source
WSDM'18: PROCEEDINGS OF THE ELEVENTH ACM INTERNATIONAL CONFERENCE ON WEB SEARCH AND DATA MINING | 2018
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
MODELS;
DOI
10.1145/3159652.3159695
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Although the word-popularity-based negative sampler has shown superb performance in the skip-gram model, the theoretical motivation behind oversampling popular (non-observed) words as negative samples is still not well understood. In this paper, we start from an investigation of the gradient-vanishing issue that arises in the skip-gram model without a proper negative sampler. Through an analysis from the stochastic gradient descent (SGD) learning perspective, we demonstrate, both theoretically and intuitively, that negative samples with larger inner-product scores are more informative to the SGD learner than those with lower scores, in terms of both convergence rate and accuracy. Building on this insight, we propose an alternative sampling algorithm that dynamically selects informative negative samples during each SGD update. More importantly, the proposed sampler accounts for multi-dimensional self-embedded features during the sampling process, which makes it more effective than the original popularity-based (one-dimensional) sampler. Empirical experiments verify our observations and show that our fine-grained samplers achieve significant improvements over existing ones without increasing computational complexity.
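To make the sampling idea concrete, the following is a minimal, illustrative Python/NumPy sketch of score-aware dynamic negative sampling inside a skip-gram-with-negative-sampling (SGNS) update, not the authors' released code: a uniform candidate pool is drawn and the negatives with the largest inner-product scores against the target embedding are kept for the update. The vocabulary size, embedding dimension, learning rate, candidate-pool size, and all names below are illustrative assumptions, not values taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative hyperparameters (assumptions, not taken from the paper).
V, D = 10_000, 100                 # vocabulary size, embedding dimension
lr, n_neg, n_candidates = 0.025, 5, 50

W_in = rng.normal(scale=0.01, size=(V, D))   # input (target) embeddings
W_out = np.zeros((V, D))                     # output (context) embeddings

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgd_step(target, context):
    """One SGNS update for a (target, context) pair with dynamic negatives."""
    v_t = W_in[target]

    # Draw a uniform candidate pool, then keep the negatives whose inner
    # product with the target embedding is largest, i.e. the most
    # informative candidates in the sense described in the abstract.
    candidates = rng.integers(0, V, size=n_candidates)
    scores = W_out[candidates] @ v_t
    negatives = candidates[np.argsort(-scores)[:n_neg]]

    # Standard SGNS gradients: label 1 for the observed context word,
    # label 0 for each sampled negative.
    grad_in = np.zeros(D)
    for w, label in [(context, 1.0)] + [(int(n), 0.0) for n in negatives]:
        g = sigmoid(W_out[w] @ v_t) - label
        grad_in += g * W_out[w]
        W_out[w] -= lr * g * v_t
    W_in[target] -= lr * grad_in

# Example: a single update for one (target, context) index pair.
sgd_step(target=42, context=7)

Ranking candidates by inner product rather than by corpus frequency is what makes such a sampler dynamic: as the embeddings change over SGD updates, the set of informative negatives changes with them, while the extra cost remains bounded by the small candidate-pool size.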
Pages: 99-107
Number of pages: 9