isiZulu Word Embeddings

Cited by: 0
Authors
Dlamini, Sibonelo [1 ]
Jembere, Edgar [1 ]
Pillay, Anban [1 ]
van Niekerk, Brett [1 ]
Affiliations
[1] Univ KwaZulu Natal, Dept Comp Sci, Durban, South Africa
Source
2021 Conference on Information Communications Technology and Society (ICTAS) | 2021
Keywords
isiZulu; word embeddings; semantic relatedness; agglutinative language; subword embeddings
DOI
10.1109/ICTAS50802.2021.9395011
CLC Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Word embeddings are currently the most popular vector space model in Natural Language Processing. How words are encoded matters because it affects the performance of many downstream tasks, such as Machine Translation (MT), Information Retrieval (IR) and Automatic Speech Recognition (ASR). While much focus has been placed on constructing word embeddings for English, very little attention has been paid to under-resourced languages, especially native African languages. In this paper we select four popular word embedding models (Word2Vec CBOW and Skip-Gram, FastText and GloVe) and train them on the 10-million-token isiZulu National Corpus (INC) to create isiZulu word embeddings. To the best of our knowledge, this is the first time that isiZulu word embeddings have been constructed and made available to the public. We also create a semantic similarity data set analogous to WordSim353, which we likewise make publicly available. This data set is used to evaluate the four models and determine which is best suited to creating isiZulu word embeddings in a low-resource (small-corpus) setting. We found that the Word2Vec Skip-Gram model produced the highest-quality embeddings, as measured by the semantic similarity task, while the GloVe model performed best on the nearest neighbours task.
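The training-and-evaluation pipeline the abstract describes maps naturally onto the Gensim library (itself listed among the record's related works). Below is a minimal sketch under stated assumptions, not the authors' released code: the file paths (inc_tokenized.txt, zulu_wordsim.tsv), the query word, and all hyperparameter values are illustrative, and GloVe, which Gensim does not train, is only noted in a comment.

```python
# Minimal sketch: train three of the four models with Gensim and score
# them on a WordSim353-style semantic similarity data set.
from gensim.models import Word2Vec, FastText
from gensim.models.word2vec import LineSentence

# Hypothetical path: one whitespace-tokenized isiZulu sentence per line.
corpus = LineSentence("inc_tokenized.txt")

# sg=0 selects CBOW, sg=1 selects Skip-Gram; hyperparameters are guesses.
cbow = Word2Vec(corpus, vector_size=100, window=5, sg=0, min_count=5)
skipgram = Word2Vec(corpus, vector_size=100, window=5, sg=1, min_count=5)

# FastText adds character n-gram (subword) vectors, which the paper's
# keywords suggest matter for an agglutinative language such as isiZulu.
fasttext = FastText(corpus, vector_size=100, window=5, sg=1, min_count=5)

# Hypothetical TSV of (word1, word2, human_score) rows, analogous to
# WordSim353; evaluate_word_pairs returns Pearson and Spearman
# correlations against the human scores plus the out-of-vocabulary ratio.
for name, model in [("CBOW", cbow), ("Skip-Gram", skipgram), ("FastText", fasttext)]:
    pearson, spearman, oov_ratio = model.wv.evaluate_word_pairs("zulu_wordsim.tsv")
    print(f"{name}: Spearman={spearman[0]:.3f}, OOV={oov_ratio:.1f}%")

# Nearest-neighbour inspection ("umfana" = boy). GloVe vectors, trained
# separately with the Stanford toolkit, could be loaded for the same check
# via KeyedVectors.load_word2vec_format(..., no_header=True).
print(skipgram.wv.most_similar("umfana", topn=10))
```

Spearman correlation on the word-pair task is the standard intrinsic measure here: it compares the ranking of pairs by cosine similarity against the ranking by human judgement, which is how a result like "Skip-Gram produced the highest-quality embeddings" would typically be quantified.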
Pages: 121–126
Number of pages: 6