Recent Advances in Interactive Machine Translation With Large Language Models

Cited by: 2
Authors
Wang, Yanshu [1 ]
Zhang, Jinyi [2 ,3 ]
Shi, Tianrong [2 ]
Deng, Dashuai [2 ]
Tian, Ye [4 ]
Matsumoto, Tadahiro [3 ]
Affiliations
[1] Shenyang Ligong Univ, Art & Design Coll, Shenyang 110159, Peoples R China
[2] Shenyang Ligong Univ, Sch Informat Sci & Engn, Shenyang 110159, Peoples R China
[3] Gifu Univ, Fac Engn, Gifu 5011193, Japan
[4] Zhuzhou CRRC Times Elect Co Ltd, Zhuzhou 412001, Peoples R China
Source
IEEE ACCESS | 2024, Vol. 12
Keywords
Data models; Machine translation; Chatbots; Transformers; Adaptation models; Accuracy; Privacy; Large language models; Context modeling; Tuning; pre-trained language model; in-context learning; post-editing
DOI
10.1109/ACCESS.2024.3487352
Chinese Library Classification (CLC) code
TP [Automation technology, computer technology]
Discipline classification code
0812
Abstract
This paper explores the role of Large Language Models (LLMs) in revolutionizing interactive Machine Translation (MT), providing a comprehensive analysis across nine innovative research directions. LLMs demonstrate exceptional capabilities in handling complex tasks through advanced text generation and interactive human-machine collaboration, significantly enhancing translation accuracy and efficiency, especially in low-resource language scenarios. The study also outlines potential advances in LLM applications, emphasizing the integration of domain-specific knowledge and the exploration of model combinations to optimize performance. The authors suggest that future research focus on enhancing model adaptability to diverse linguistic environments and on refining human-machine interaction frameworks to better serve practical translation needs. The findings contribute to the ongoing discourse on the strategic deployment of MT with LLMs, aiming to direct future developments towards more robust and nuanced language processing solutions.
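As a rough illustration of the in-context learning direction listed in the keywords, the sketch below assembles a few-shot translation prompt of the kind typically sent to an LLM for interactive MT. The function name, demonstration pairs, and prompt wording are hypothetical examples, not taken from the paper.

```python
def build_mt_prompt(src_lang, tgt_lang, demonstrations, source_sentence):
    """Assemble a few-shot in-context learning prompt for translation.

    demonstrations: list of (source, target) example pairs shown to the
    model before the sentence that actually needs translating.
    """
    lines = [f"Translate the following sentences from {src_lang} to {tgt_lang}."]
    for src, tgt in demonstrations:
        lines.append(f"{src_lang}: {src}")
        lines.append(f"{tgt_lang}: {tgt}")
    # The final line is left open for the model to complete.
    lines.append(f"{src_lang}: {source_sentence}")
    lines.append(f"{tgt_lang}:")
    return "\n".join(lines)

demos = [
    ("Guten Morgen.", "Good morning."),
    ("Wie geht es dir?", "How are you?"),
]
prompt = build_mt_prompt("German", "English", demos, "Das Wetter ist schön.")
print(prompt)
```

Surveys of LLM-based MT generally find that the choice and ordering of demonstration pairs materially affects translation quality, which is why prompt construction is treated as a research direction in its own right.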
Pages: 179353-179382
Page count: 30