Comparison of Korean Preprocessing Performance according to Tokenizer in NMT Transformer Model
Cited by: 4
Authors:
Kim, Geumcheol [1]; Lee, Sang-Hong [1]
Affiliation:
[1] Anyang Univ, Dept Comp Sci & Engn, Anyang Si, South Korea
Funding:
National Research Foundation of Singapore;
Keywords:
translation;
tokenizer;
neural machine translation;
natural language processing;
deep learning;
DOI:
10.12720/jait.11.4.228-232
CLC classification:
TP [Automation technology; Computer technology];
Subject classification code:
0812;
Abstract:
Machine translation using neural networks is making rapid progress in natural language processing. With the development of natural language processing models and tokenizers, accurate translation is becoming possible. In this paper, we build a transformer model, which has recently shown high performance, and compare English-to-Korean translation performance across tokenizers. We made a neural-network-based Neural Machine Translation (NMT) model using a transformer and compared the Korean translation results according to the tokenizer. The Byte Pair Encoding (BPE)-based tokenizer had a small vocabulary size and a fast training speed, but due to the characteristics of Korean its translation results were poor. The morphological-analysis-based tokenizer showed that when the parallel corpus data and the vocabulary are large, performance is higher regardless of the characteristics of the language.
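The two tokenizer families compared in the abstract can be illustrated with a minimal Python sketch; the libraries used here (sentencepiece for BPE, KoNLPy's Okt analyzer for morphological analysis), the corpus file name, vocabulary size, and sample sentence are illustrative assumptions, not the authors' actual setup.

```python
# Minimal sketch (not the authors' code) contrasting a BPE subword tokenizer
# with a morphological-analysis-based tokenizer for Korean.
import sentencepiece as spm
from konlpy.tag import Okt

sentence = "나는 자연어 처리를 공부한다"  # "I study natural language processing"

# --- BPE subword tokenization ---
# Train a small BPE model on the (hypothetical) Korean side of a parallel corpus.
spm.SentencePieceTrainer.train(
    input="corpus.ko.txt",      # assumed corpus file
    model_prefix="bpe_ko",
    vocab_size=8000,            # assumed vocabulary size
    model_type="bpe",
)
bpe = spm.SentencePieceProcessor(model_file="bpe_ko.model")
print(bpe.encode(sentence, out_type=str))
# BPE merges purely by frequency, so Korean particles may stay fused to stems.

# --- Morphological-analysis-based tokenization ---
# The analyzer separates stems and particles regardless of corpus statistics.
okt = Okt()
print(okt.morphs(sentence))
```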
Pages: 228-232
Page count: 5