On entropy-based term weighting schemes for text categorization

Cited by: 5
Authors
Wang, Tao [1 ]
Cai, Yi [2 ,3 ]
Leung, Ho-fung [4 ]
Lau, Raymond Y. K. [5 ]
Xie, Haoran [6 ]
Li, Qing [7 ]
Affiliations
[1] Kings Coll London, Dept Biostat & Hlth Informat, London, England
[2] South China Univ Technol, Sch Software Engn, Guangzhou, Peoples R China
[3] South China Univ Technol, Key Lab Big Data & Intelligent Robot, Minist Educ, Guangzhou, Peoples R China
[4] Chinese Univ Hong Kong, Dept Comp Sci & Engn, Hong Kong, Peoples R China
[5] City Univ Hong Kong, Dept Informat Syst, Hong Kong, Peoples R China
[6] Lingnan Univ, Dept Comp & Decis Sci, Hong Kong, Peoples R China
[7] Hong Kong Polytech Univ, Dept Comp, Hong Kong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Entropy; Normalization; Smoothing; Term weighting; Text categorization; Feature selection; Information; Classification; Relevance;
DOI
10.1007/s10115-021-01581-5
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In text categorization, the Vector Space Model (VSM) has been widely used to represent documents, where each document is represented as a vector of terms. Since different terms contribute to a document's semantics to varying degrees, a number of term weighting schemes have been proposed for the VSM to improve text categorization performance. Much evidence shows that the performance of a term weighting scheme often varies across text categorization tasks, yet the mechanism underlying this variability remains unclear. Moreover, existing schemes often weight a term with respect to a single category locally, without considering the global distribution of the term's occurrences across all categories in a corpus. In this paper, we first systematically examine the pros and cons of existing term weighting schemes for text categorization and explore why some schemes with sound theoretical bases, such as the chi-square test and information gain, perform poorly in empirical evaluations. By measuring how concentrated a term's distribution is across all categories in a corpus, we then propose a series of entropy-based term weighting schemes that quantify the discriminating power of a term for text categorization. In extensive experiments on five different datasets, the proposed term weighting schemes consistently outperform state-of-the-art schemes. Moreover, our findings shed new light on how to choose and develop an effective term weighting scheme for a specific text categorization task.
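The following Python snippet is a minimal sketch of the general idea described in the abstract, not the authors' exact schemes: it weights a term by the (inverted, normalized) entropy of its occurrence distribution over categories, applies additive smoothing so unseen categories still get probability mass, and combines the resulting global factor with raw term frequency. The particular formula (1 - H/H_max), the smoothing constant, and the tf combination are illustrative assumptions.

```python
import math
from collections import Counter

def entropy_weight(category_counts, num_categories, smoothing=1.0):
    """Weight a term by how concentrated its occurrences are across categories.

    category_counts: mapping category index -> number of documents in that
                     category containing the term.
    smoothing:       additive (Laplace) smoothing constant so categories where
                     the term never occurs still receive some probability mass.
    Returns a value in [0, 1]: 1 = fully concentrated in one category,
    0 = spread uniformly over all categories.
    """
    counts = [category_counts.get(c, 0) + smoothing for c in range(num_categories)]
    total = sum(counts)
    probs = [c / total for c in counts]
    entropy = -sum(p * math.log2(p) for p in probs)
    max_entropy = math.log2(num_categories)  # entropy of the uniform distribution
    return 1.0 - entropy / max_entropy

def weight_document(term_freqs, term_category_counts, num_categories):
    """Build a weighted VSM vector: term frequency * entropy-based global factor."""
    return {
        term: tf * entropy_weight(term_category_counts.get(term, {}), num_categories)
        for term, tf in term_freqs.items()
    }

# Toy usage: "goal" occurs mostly in category 0, so it gets a large global
# factor; "the" is spread evenly over all categories, so its factor is near 0.
term_category_counts = {"goal": {0: 90, 1: 5, 2: 5}, "the": {0: 100, 1: 100, 2: 100}}
doc = Counter({"goal": 3, "the": 10})
print(weight_document(doc, term_category_counts, num_categories=3))
```

In this sketch the entropy factor plays the role that idf plays in tf-idf: a corpus-level signal of how discriminative a term is, here computed from its distribution over categories rather than over documents.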
Pages: 2313-2346
Page count: 34