Regression for machine translation evaluation at the sentence level

Cited by: 10
Authors
Albrecht, Joshua S. [1]
Hwa, Rebecca [1]
Affiliation
[1] Univ Pittsburgh, Dept Comp Sci, Pittsburgh, PA 15260 USA
Keywords
Machine translation; Evaluation metrics; Machine learning
DOI
10.1007/s10590-008-9046-1
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Machine learning offers a systematic framework for developing metrics that use multiple criteria to assess the quality of machine translation (MT). However, learning introduces additional complexities that may affect the resulting metric's effectiveness. First, a learned metric is more reliable for translations that are similar to its training examples; this calls into question whether it is as effective in evaluating translations from systems that are not its contemporaries. Second, metrics trained from different sets of training examples may exhibit variations in their evaluations. Third, expensive developmental resources (such as translations that have been evaluated by humans) may be needed as training examples. This paper investigates these concerns in the context of using regression to develop metrics for evaluating machine-translated sentences. We track a learned metric's reliability across a five-year period to measure the extent to which it can evaluate sentences produced by other systems. We compare metrics trained under different conditions to measure their variations. Finally, we present an alternative formulation of metric training in which the features are based on comparisons against pseudo-references, in order to reduce the demand on human-produced resources. Our results confirm that regression is a useful approach for developing new metrics for MT evaluation at the sentence level.
Pages: 1-27
Page count: 27
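
The abstract describes learning a sentence-level MT metric by regression over multiple features, where the features may be computed against pseudo-references instead of human references. Below is a minimal illustrative sketch, not the authors' implementation: it assumes scikit-learn is available, uses only simple n-gram-precision and length-ratio features (the paper's actual feature set is much richer), and fits a support vector regressor that maps a sentence's feature vector to a human quality judgment.

from collections import Counter

import numpy as np
from sklearn.svm import SVR


def ngram_precision(hyp, refs, n):
    """Clipped n-gram precision of a tokenized hypothesis against reference token lists."""
    hyp_ngrams = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
    if not hyp_ngrams:
        return 0.0
    # For each n-gram, take the maximum count observed in any reference (clipping).
    max_ref = Counter()
    for ref in refs:
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        for g, c in ref_ngrams.items():
            max_ref[g] = max(max_ref[g], c)
    clipped = sum(min(c, max_ref[g]) for g, c in hyp_ngrams.items())
    return clipped / sum(hyp_ngrams.values())


def features(hyp, refs):
    """Feature vector: 1-3 gram precisions plus a length ratio.
    refs may be human references or machine-produced pseudo-references."""
    hyp_toks = hyp.split()
    ref_toks = [r.split() for r in refs]
    precisions = [ngram_precision(hyp_toks, ref_toks, n) for n in (1, 2, 3)]
    length_ratio = len(hyp_toks) / max(1, min(len(r) for r in ref_toks))
    return precisions + [length_ratio]


# Training data: (hypothesis, references, human quality score), e.g. 1-5 adequacy.
# These toy sentences and scores are illustrative assumptions.
train = [
    ("the cat sits on the mat", ["the cat sat on the mat"], 4.5),
    ("cat mat the on", ["the cat sat on the mat"], 1.5),
]
X = np.array([features(h, r) for h, r, _ in train])
y = np.array([score for _, _, score in train])
model = SVR(kernel="rbf").fit(X, y)

# Scoring a new translation against (pseudo-)references.
print(model.predict(np.array([features("a cat is on the mat",
                                       ["the cat sat on the mat"])])))

Swapping human references for pseudo-references only changes what is passed as refs; the regression machinery is unchanged, which is the point of the paper's alternative formulation.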