Enhance Text-to-Text Transfer Transformer with Generated Questions for Thai Question Answering

Cited by: 4
Authors
Phakmongkol, Puri [1 ]
Vateekul, Peerapon [1 ]
Affiliations
[1] Chulalongkorn Univ, Fac Engn, Dept Comp Engn, Bangkok 10300, Thailand
Source
APPLIED SCIENCES-BASEL | 2021, Vol. 11, Iss. 21
Keywords
natural language processing; question answering; machine reading comprehension;
DOI
10.3390/app112110267
Abstract
Question Answering (QA) is a natural language processing task that enables a machine to understand a given context and answer questions about it. Numerous QA studies exist for high-resource languages such as English, but Thai has few labeled corpora available for QA research. While English QA models in previous studies achieved F1 scores above 90%, our Thai QA baseline reached only 70%. In this study, we aim to improve the performance of Thai QA models by generating additional question-answer pairs with the Multilingual Text-to-Text Transfer Transformer (mT5), combined with data preprocessing methods for Thai. With this method, we synthesized more than 100,000 question-answer pairs from Thai Wikipedia articles. Using the synthesized data, we investigated several fine-tuning strategies to achieve the highest model performance. Furthermore, we show that syllable-level F1 is a more suitable evaluation measure for Thai QA corpora than Exact Match (EM) and word-level F1. Experiments were conducted on two Thai QA corpora: Thai Wiki QA and iApp Wiki QA. The results show that our augmented model outperforms other modern transformer models, RoBERTa and mT5, on both datasets.
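The abstract's point about syllable-level F1 versus Exact Match can be illustrated with a minimal sketch of SQuAD-style evaluation. This is not the authors' evaluation code; the `segment` function below is a hypothetical stand-in for a real Thai syllable tokenizer (splitting on an explicit `|` marker purely for illustration), and the example answer strings are invented.

```python
from collections import Counter

def exact_match(pred: str, gold: str) -> float:
    # EM gives credit only for a character-identical answer string.
    return float(pred == gold)

def f1_score(pred_tokens: list, gold_tokens: list) -> float:
    # Token-overlap F1 (SQuAD-style): partial credit for shared tokens.
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

def segment(s: str) -> list:
    # Hypothetical syllable segmenter stand-in; a real Thai pipeline
    # would use an actual syllable tokenizer instead of a "|" marker.
    return s.split("|")

# Invented example: prediction contains the gold answer plus two
# extra syllables, so EM scores 0 but syllable-level F1 gives credit.
pred, gold = "กรุง|เทพ|มหา|นคร", "กรุง|เทพ"
em = exact_match(pred, gold)                 # 0.0: strings differ
f1 = f1_score(segment(pred), segment(gold))  # 2/3: 2 shared syllables
```

Because Thai has no whitespace word boundaries and word segmenters disagree on boundaries, scoring at the syllable level makes the F1 metric less sensitive to tokenizer choice, which is the motivation the paper gives for preferring it over EM and word-level F1.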
Pages: 17