Evaluating Pre-training Objectives for Low-Resource Translation into Morphologically Rich Languages

Cited: 0
Authors
Dhar, Prajit [1 ]
Bisazza, Arianna [1 ]
van Noord, Gertjan [1 ]
Affiliations
[1] Univ Groningen, Ctr Language & Cognit Groningen (CLCG), Groningen, Netherlands
Keywords
low-resource NMT; morphology; inflection
DOI
Not available
CLC Number
TP39 (Computer Applications)
Discipline Codes
081203; 0835
Abstract
The scarcity of parallel data is a major limitation for Neural Machine Translation (NMT) systems, in particular for translation into morphologically rich languages (MRLs). An important way to overcome the lack of parallel data is to leverage target-side monolingual data, which is typically more abundant and easier to collect. We evaluate a number of techniques to achieve this, ranging from back-translation to random token masking, on the challenging task of translating English into four typologically diverse MRLs under low-resource settings. Additionally, we introduce Inflection Pre-Training (PT-Inflect), a novel pre-training objective whereby the NMT system is first pre-trained on the task of re-inflecting lemmatized target sentences and only then trained on standard source-to-target translation. We find that PT-Inflect surpasses NMT systems trained only on parallel data. While PT-Inflect is outperformed by back-translation overall, combining the two techniques leads to gains in some of the evaluated language pairs.
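The abstract only sketches how the PT-Inflect pre-training data would be obtained. As a rough illustration, the Python sketch below derives (lemmatized -> inflected) sentence pairs from target-side monolingual data; the function name build_pt_inflect_pairs and the toy dictionary lemmatizer are assumptions for illustration only, not the authors' implementation, and real experiments would use a proper morphological analyzer for each target language.

# Minimal sketch of PT-Inflect data construction (assumed, not the
# authors' code): each monolingual target sentence is lemmatized and
# paired with its original inflected form, so the NMT model can be
# pre-trained on re-inflection before fine-tuning on real parallel data.
from typing import Callable, List, Tuple

def build_pt_inflect_pairs(
    monolingual_sentences: List[str],
    lemmatize: Callable[[str], str],
) -> List[Tuple[str, str]]:
    """Map each sentence to a (lemmatized 'source', original 'target') pair."""
    pairs = []
    for sentence in monolingual_sentences:
        lemmatized = " ".join(lemmatize(tok) for tok in sentence.split())
        pairs.append((lemmatized, sentence))
    return pairs

# Toy lemma table for demonstration only; a real pipeline would plug in
# a morphological analyzer for the target MRL here.
TOY_LEMMAS = {"koirat": "koira", "juoksevat": "juosta"}

if __name__ == "__main__":
    sents = ["koirat juoksevat"]  # Finnish: "the dogs run"
    for src, tgt in build_pt_inflect_pairs(sents, lambda w: TOY_LEMMAS.get(w, w)):
        print("pre-training source:", src)  # koira juosta
        print("pre-training target:", tgt)  # koirat juoksevat

The resulting pairs are synthetic translation examples in which the model must restore the inflectional morphology of the target language, which is what makes the objective useful before training on the (scarce) real source-to-target data.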
Pages: 4933-4943 (11 pages)
Related Papers (50 in total; first 10 listed)
  • [1] Pre-training model for low-resource Chinese-Braille translation
    Yu, Hailong
    Su, Wei
    Liu, Lei
    Zhang, Jing
    Cai, Chuan
    Xu, Cunlu
    DISPLAYS, 2023, 79
  • [2] Pre-Training on Mixed Data for Low-Resource Neural Machine Translation
    Zhang, Wenbo
    Li, Xiao
    Yang, Yating
    Dong, Rui
    INFORMATION, 2021, 12 (03)
  • [3] Low-Resource Neural Machine Translation Using XLNet Pre-training Model
    Wu, Nier
    Hou, Hongxu
    Guo, Ziyue
    Zheng, Wei
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2021, PT V, 2021, 12895 : 503 - 514
  • [4] Character-Aware Low-Resource Neural Machine Translation with Weight Sharing and Pre-training
    Cao, Yichao
    Li, Miao
    Feng, Tao
    Wang, Rujing
    CHINESE COMPUTATIONAL LINGUISTICS, CCL 2019, 2019, 11856 : 321 - 333
  • [5] Continual Mixed-Language Pre-Training for Extremely Low-Resource Neural Machine Translation
    Liu, Zihan
    Winata, Genta Indra
    Fung, Pascale
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL-IJCNLP 2021, 2021, : 2706 - 2718
  • [6] Linguistically Driven Multi-Task Pre-Training for Low-Resource Neural Machine Translation
    Mao, Zhuoyuan
    Chu, Chenhui
    Kurohashi, Sadao
    ACM TRANSACTIONS ON ASIAN AND LOW-RESOURCE LANGUAGE INFORMATION PROCESSING, 2022, 21 (04)
  • [7] Investigating the Pre-Training Bias in Low-Resource Abstractive Summarization
    Chernyshev, Daniil
    Dobrov, Boris
    IEEE ACCESS, 2024, 12 : 47219 - 47230
  • [8] Pre-training on High-Resource Speech Recognition Improves Low-Resource Speech-to-Text Translation
    Bansal, Sameer
    Kamper, Herman
    Livescu, Karen
    Lopez, Adam
    Goldwater, Sharon
    2019 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES (NAACL HLT 2019), VOL. 1, 2019, : 58 - 68
  • [9] Low-Resource Named Entity Recognition via the Pre-Training Model
    Chen, Siqi
    Pei, Yijie
    Ke, Zunwang
    Silamu, Wushour
    SYMMETRY-BASEL, 2021, 13 (05)
  • [10] Multi-Stage Pre-training for Low-Resource Domain Adaptation
    Zhang, Rong
    Reddy, Revanth Gangi
    Sultan, Md Arafat
    Castelli, Vittorio
    Ferritto, Anthony
    Florian, Radu
    Kayi, Efsun Sarioglu
    Roukos, Salim
    Sil, Avirup
    Ward, Todd
    PROCEEDINGS OF THE 2020 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP), 2020, : 5461 - 5468