Involving language professionals in the evaluation of machine translation

Cited by: 0
Authors
Maja Popović
Eleftherios Avramidis
Aljoscha Burchardt
Sabine Hunsicker
Sven Schmeier
Cindy Tscherwinka
David Vilar
Hans Uszkoreit
Institutions
[1] DFKI – Language Technology Lab,
[2] euroscript Deutschland
Source
Language Resources and Evaluation | 2014, Vol. 48
Keywords
Machine translation; Human evaluation; Error analysis
DOI
Not available
Abstract
Significant breakthroughs in machine translation (MT) only seem possible if human translators are taken into the loop. While automatic evaluation and scoring mechanisms such as BLEU have enabled the fast development of systems, it is not clear how systems can meet real-world (quality) requirements in industrial translation scenarios today. The taraXÜ project has paved the way for wide usage of multiple MT outputs through various feedback loops in system development. The project has integrated human translators into the development process, thus collecting feedback for possible improvements. This paper describes results from a detailed human evaluation. The performance of different types of translation systems has been compared and analysed via ranking, error analysis and post-editing.
Pages: 541-559
Number of pages: 18