Involving Language Professionals in the Evaluation of Machine Translation

Cited by: 0
Authors
Avramidis, Eleftherios [1,2]
Burchardt, Aljoscha [1,2]
Federmann, Christian [1,2]
Popović, Maja [1,2]
Tscherwinka, Cindy [3]
Vilar, David [1,2]
Affiliations
[1] DFKI Language Technology Lab, Berlin, Germany
[2] DFKI Language Technology Lab, Saarbrücken, Germany
[3] Euroscript Deutschland, Berlin, Germany
Source
LREC 2012 - Eighth International Conference on Language Resources and Evaluation | 2012
Keywords
machine translation; human evaluation; error analysis
DOI
Not available
Chinese Library Classification
H0 [Linguistics]
Subject Classification Codes
030303; 0501; 050102
Abstract
Significant breakthroughs in machine translation seem possible only if human translators are brought into the loop. While automatic evaluation and scoring mechanisms such as BLEU have enabled fast system development, it is not clear how systems can meet real-world quality requirements in industrial translation scenarios today. The TARAXU project paves the way for the wide use of hybrid machine translation output through various feedback loops in system development. In a consortium of research and industry partners, the project integrates human translators into the development process to rate and post-edit machine translation output, thus collecting feedback for possible improvements.
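As context for the automatic metric the abstract contrasts with human evaluation, below is a minimal sketch of how a corpus-level BLEU score is computed using the open-source sacrebleu library; the library choice and the example sentences are illustrative assumptions, not part of the original paper.

import sacrebleu  # pip install sacrebleu; a widely used open-source BLEU implementation

# Hypothetical MT system outputs and one set of human reference translations
# (invented sentences, for illustration only).
hypotheses = ["The cat sat on the mat.", "There is a book on the desk."]
references = [["The cat is sitting on the mat.", "A book lies on the desk."]]

# corpus_bleu aggregates modified n-gram precision over the whole corpus and
# applies a brevity penalty, yielding a single score between 0 and 100.
bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.2f}")

Scores like this are fast to compute, but, as the paper argues, they do not by themselves show whether output meets industrial quality requirements; hence the project's human rating and post-editing feedback loops.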
Pages: 1127-1130
Page count: 4