P-Transformer: Towards Better Document-to-Document Neural Machine Translation

Times Cited: 5
Authors
Li, Yachao [1 ]
Li, Junhui [2 ]
Jiang, Jing [1 ]
Tao, Shimin [3 ]
Yang, Hao [3 ]
Zhang, Min [2 ]
Affiliations
[1] Northwest Minzu Univ, Key Lab Linguist & Cultural Comp, Minist Educ, Lanzhou 730030, Peoples R China
[2] Soochow Univ, Inst Artificial Intelligence, Sch Comp Sci & Technol, Suzhou 215006, Peoples R China
[3] Huawei Translat Serv Ctr, Beijing 100000, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Neural machine translation; document-level NMT; document-to-document translation; position information; sequence-to-sequence;
DOI
10.1109/TASLP.2023.3313445
CLC Number
O42 [Acoustics]
Discipline Codes
070206; 082403
Abstract
Directly training a document-to-document (Doc2Doc) neural machine translation (NMT) model with Transformer from scratch, especially on small datasets, usually fails to converge. Our dedicated probing tasks show that 1) both absolute and relative position information is gradually weakened, or even vanishes, as it propagates to the upper encoder layers, and 2) the vanishing of absolute position information in the encoder output causes the training failure of Doc2Doc NMT. To alleviate this problem, we propose a position-aware Transformer (P-Transformer) that enhances both absolute and relative position information in both self-attention and cross-attention. Specifically, we integrate absolute position information, i.e., position embeddings, into the query-key pairs in both self-attention and cross-attention through a simple yet effective addition operation. Moreover, we also integrate relative position encoding into self-attention. P-Transformer uses sinusoidal position encoding and requires no task-specific position embedding, segment embedding, or attention mechanism. With these methods, we build a Doc2Doc NMT model on P-Transformer, which ingests a source document and generates the complete target document in a sequence-to-sequence (seq2seq) manner. In addition, P-Transformer can be applied to seq2seq-based document-to-sentence (Doc2Sent) and sentence-to-sentence (Sent2Sent) translation. Extensive experiments on Doc2Doc NMT show that P-Transformer significantly outperforms strong baselines on 9 widely-used document-level datasets covering 7 language pairs and small, medium, and large scales, achieving a new state of the art. Experiments on discourse phenomena show that our Doc2Doc NMT models improve translation quality in both BLEU and discourse coherence. We make our code available on GitHub.
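As a concrete illustration of the mechanism the abstract describes, the following minimal, single-head PyTorch sketch shows one plausible way to re-inject sinusoidal absolute position embeddings into the query-key pair by addition, with an optional relative-position bias term for self-attention. All function names and the exact form of the relative term are assumptions made for illustration, not the authors' released implementation, which should be consulted for the actual formulation.

```python
import math
import torch

def sinusoidal_embeddings(seq_len, d_model):
    # Standard sinusoidal position encoding (assumes even d_model).
    pos = torch.arange(seq_len, dtype=torch.float).unsqueeze(1)
    div = torch.exp(torch.arange(0, d_model, 2).float()
                    * (-math.log(10000.0) / d_model))
    pe = torch.zeros(seq_len, d_model)
    pe[:, 0::2] = torch.sin(pos * div)
    pe[:, 1::2] = torch.cos(pos * div)
    return pe

def position_aware_attention(q, k, v, rel_bias=None):
    # q, k, v: (seq_len, d_model). Hypothetical single-head sketch.
    d = q.size(-1)
    pe_q = sinusoidal_embeddings(q.size(0), d)
    pe_k = sinusoidal_embeddings(k.size(0), d)
    # Re-inject absolute position into the query-key pair by addition,
    # as the abstract describes for both self- and cross-attention.
    scores = (q + pe_q) @ (k + pe_k).transpose(0, 1) / math.sqrt(d)
    if rel_bias is not None:
        # Optional relative-position term for self-attention; the paper's
        # exact form may differ (e.g., Shaw et al.-style embeddings).
        scores = scores + rel_bias
    return torch.softmax(scores, dim=-1) @ v

# Example: self-attention over a 4-token sequence with d_model = 8.
x = torch.randn(4, 8)
out = position_aware_attention(x, x, x)
```

Adding the position embeddings inside the attention computation, rather than only at the input layer, is what keeps absolute position information from vanishing in the upper encoder layers, per the probing analysis summarized above.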
Pages: 3859-3870
Page Count: 12