Deep Learning Transformer Architecture for Named-Entity Recognition on Low-Resourced Languages: State-of-the-art results

Cited by: 4
Author
Hanslo, Ridewaan [1]
Affiliation
[1] Univ Pretoria, Gauteng, South Africa
Source
PROCEEDINGS OF THE 2022 17TH CONFERENCE ON COMPUTER SCIENCE AND INTELLIGENCE SYSTEMS (FEDCSIS) | 2022
Funding
National Research Foundation of Singapore
Keywords
Named-Entity Recognition; Natural Language Processing; Neural Networks; Sequence Tagging; XLM-R; Machine Learning; Transformer Models
DOI
10.15439/2022F53
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
This paper reports on the evaluation of Deep Learning (DL) transformer architecture models for Named-Entity Recognition (NER) on ten low-resourced South African (SA) languages. In addition, these DL transformer models were compared to other Neural Network and Machine Learning (ML) NER models. The findings show that transformer models substantially improve performance when discrete fine-tuning parameters are applied per language. Furthermore, fine-tuned transformer models outperform other neural network and machine learning models on NER for the low-resourced SA languages. For example, the transformer models obtained the highest F-scores for six of the ten SA languages, and their highest average F-score surpassed that of the Conditional Random Fields (CRF) ML model. Practical implications include developing high-performance NER capability with less effort and lower resource costs, potentially improving downstream NLP tasks such as Machine Translation (MT). Therefore, the application of DL transformer architecture models to NLP NER sequence-tagging tasks on low-resourced SA languages is viable. Additional research could evaluate more recent transformer architecture models on other Natural Language Processing tasks and applications, such as phrase chunking, MT, and Part-of-Speech tagging.
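The abstract compares fine-tuned transformers against a Conditional Random Fields baseline on NER sequence tagging. As a minimal, self-contained illustration of how a linear-chain tagger of the CRF family decodes a tag sequence, the sketch below implements Viterbi decoding over per-token emission scores and tag-transition scores. All tag names, scores, and the `viterbi` helper are hypothetical examples, not the paper's actual models or data.

```python
# Hypothetical sketch: Viterbi decoding for a linear-chain sequence tagger,
# the decoding step used by CRF-style NER baselines. Scores are illustrative.

def viterbi(emissions, transitions, tags):
    """Return the highest-scoring BIO tag sequence for one sentence.

    emissions:   list of {tag: score} dicts, one per token (missing tags score 0.0).
    transitions: {(prev_tag, tag): score} (missing pairs score 0.0).
    tags:        list of all candidate tags.
    """
    # dp[i][t] = (best score of a path ending at token i with tag t, backpointer)
    dp = [{t: (emissions[0].get(t, 0.0), None) for t in tags}]
    for em in emissions[1:]:
        row = {}
        for t in tags:
            score, prev = max(
                (dp[-1][p][0] + transitions.get((p, t), 0.0) + em.get(t, 0.0), p)
                for p in tags
            )
            row[t] = (score, prev)
        dp.append(row)
    # Pick the best final tag, then follow backpointers to recover the path.
    tag = max(tags, key=lambda t: dp[-1][t][0])
    path = [tag]
    for row in reversed(dp[1:]):
        tag = row[tag][1]
        path.append(tag)
    return list(reversed(path))


# Toy sentence "in South Africa": the transition bonus for B-LOC -> I-LOC
# and the penalty for O -> I-LOC steer decoding toward a well-formed span.
pred = viterbi(
    [{"O": 2.0}, {"B-LOC": 2.0, "O": 0.5}, {"I-LOC": 2.0, "O": 0.5}],
    {("B-LOC", "I-LOC"): 1.0, ("O", "I-LOC"): -2.0},
    ["O", "B-LOC", "I-LOC"],
)
print(pred)  # → ['O', 'B-LOC', 'I-LOC']
```

Transformer models such as XLM-R replace the hand-engineered emission features with contextual token representations, which is one reason the fine-tuned models in the abstract outperform the CRF baseline on most of the SA languages.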
Pages: 53-60
Page count: 8