Text Classification Based on Convolutional Neural Networks and Word Embedding for Low-Resource Languages: Tigrinya

Cited by: 56
Authors
Fesseha, Awet [1 ,2 ]
Xiong, Shengwu [1 ]
Emiru, Eshete Derb [1 ,3 ]
Diallo, Moussa [1 ]
Dahou, Abdelghani [1 ]
Affiliations
[1] Wuhan Univ Technol, Sch Comp Sci & Technol, Wuhan 430070, Peoples R China
[2] Mekelle Univ, Coll Nat & Computat Sci, Mekelle 231, Ethiopia
[3] DebreMarkos Univ, Sch Comp, Debremarkos 269, Ethiopia
Funding
National Natural Science Foundation of China;
Keywords
text classification; CNN; low-resource language; machine learning; word embedding; natural language processing;
DOI
10.3390/info12020052
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology];
Discipline Code
0812;
Abstract
This article studies convolutional neural networks for Tigrinya (also referred to as Tigrigna), a Semitic language spoken in Eritrea and northern Ethiopia. Tigrinya is a "low-resource" language, notable for the absence of comprehensive, freely available data. Furthermore, like other Semitic languages, it is among the most semantically and syntactically complex languages in the world. To the best of our knowledge, no previous research has applied the state-of-the-art embedding techniques shown here to Tigrinya. We investigate which word representation methods perform better for single-label text classification, a common problem when dealing with morphologically rich and complex languages. Two datasets are used: a manually annotated corpus of 30,000 Tigrinya news texts from various sources, labeled with the six categories "sport", "agriculture", "politics", "religion", "education", and "health", and an unannotated corpus of more than six million words. We explore pretrained word embedding architectures combined with various convolutional neural networks (CNNs) to predict class labels. We construct CNNs with the continuous bag-of-words (CBOW) and skip-gram methods, with and without word2vec and FastText embeddings, and evaluate them on the Tigrinya news articles. We also compare the CNN results with traditional machine learning models, evaluating all models in terms of accuracy, precision, recall, and F1 score. The CBOW CNN with word2vec achieves the best accuracy, 93.41%, significantly improving the accuracy of Tigrinya news classification.
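The pipeline the abstract describes (pretrained word embeddings fed to a convolutional layer, max-over-time pooling, and a softmax over the six news categories) can be sketched as follows. This is a minimal illustrative sketch with random weights and assumed dimensions, not the paper's actual architecture or configuration; in the paper, the embedding table would come from word2vec or FastText trained on the six-million-word corpus.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not the paper's settings).
VOCAB, EMB_DIM, FILTERS, WIDTH, CLASSES = 1000, 50, 8, 3, 6

# Embedding lookup table; word2vec/FastText pretraining would supply this.
embeddings = rng.normal(size=(VOCAB, EMB_DIM))

# One bank of convolutional filters over windows of WIDTH consecutive words,
# and a dense softmax layer mapping pooled features to the six classes.
conv_w = rng.normal(size=(FILTERS, WIDTH * EMB_DIM)) * 0.1
dense_w = rng.normal(size=(CLASSES, FILTERS)) * 0.1

def classify(token_ids):
    """Embed -> convolve -> max-pool over time -> softmax."""
    x = embeddings[token_ids]                        # (seq_len, EMB_DIM)
    windows = np.stack([x[i:i + WIDTH].ravel()       # sliding word windows
                        for i in range(len(token_ids) - WIDTH + 1)])
    feats = np.maximum(windows @ conv_w.T, 0.0)      # ReLU feature maps
    pooled = feats.max(axis=0)                       # max-over-time pooling
    logits = dense_w @ pooled
    p = np.exp(logits - logits.max())                # numerically stable softmax
    return p / p.sum()                               # class probabilities

# A random 20-token "document" yields a distribution over the 6 classes.
probs = classify(rng.integers(0, VOCAB, size=20))
print(probs.shape)
```

With trained rather than random weights, the argmax of `probs` would be the predicted category; swapping the embedding table is all that distinguishes the word2vec-CBOW, word2vec-skip-gram, and FastText variants compared in the paper.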
Pages: 1-17
References
45 references in total
[1]  
Abate ST, 2020, INT CONF ACOUST SPEE, P8274, DOI 10.1109/ICASSP40776.2020.9053883
[2]   A comprehensive survey of arabic sentiment analysis [J].
Al-Ayyoub, Mahmoud ;
Khamaiseh, Abed Allah ;
Jararweh, Yaser ;
Al-Kabi, Mohammed N. .
INFORMATION PROCESSING & MANAGEMENT, 2019, 56 (02) :320-342
[3]  
[Anonymous], 2016, FastText.zip: Compressing text classification models
[4]  
[Anonymous], 2008, P ICML
[5]  
[Anonymous], 2014, C EMPIRICAL METHODS
[6]  
Arora Sanjeev, 2016
[7]   GAUGING SIMILARITY WITH N-GRAMS - LANGUAGE-INDEPENDENT CATEGORIZATION OF TEXT [J].
DAMASHEK, M .
SCIENCE, 1995, 267 (5199) :843-848
[8]  
Řehůřek R, MODELS.WORD2VEC DEEP
[9]  
Fisseha Y., 2011, DEV STEMMING ALGORIT
[10]   Text classification based on deep belief network and softmax regression [J].
Jiang, Mingyang ;
Liang, Yanchun ;
Feng, Xiaoyue ;
Fan, Xiaojing ;
Pei, Zhili ;
Xue, Yu ;
Guan, Renchu .
NEURAL COMPUTING & APPLICATIONS, 2018, 29 (01) :61-70