FEMTI Taxonomy for Evaluating Machine Translation Models

Cited by: 0
Authors
Mizera-Pietraszko, Jolanta [1 ]
Affiliations
[1] Opole Univ, Inst Math & Comp Sci, Opole, Poland
Source
ADVANCES IN DIGITAL TECHNOLOGIES, 2015, Vol. 275
Keywords
natural language processing; language engineering; parallel corpora; evaluation; MT systems;
DOI
10.3233/978-1-61499-503-6-263
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The aim of our experiment is to examine the relationship between selected characteristics of MT (Machine Translation) systems and the quality of their output. The emphasis is placed on the translation models that determine that output quality. To this end, several MT systems were tested on different text types prepared in two languages: English and French. Our evaluation procedure strictly follows the Framework for the Evaluation of Machine Translation developed within ISLE (International Standards for Language Engineering) [7]. In our study, we also consider user population as a factor indicating how adequately the output meets different users' needs. In conclusion, we outline further work on incorporating database types together with both source- and target-language formalisms, which should be critical for designing a universally usable MT system.
Pages: 263 - 272
Number of pages: 10
Related Papers
(50 records)
  • [1] Evaluating Machine Translation in a Usage Scenario
    Del Gaudio, Rosa
    Burchardt, Aljoscha
    Branco, Antonio
    LREC 2016 - TENTH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION, 2016, : 1 - 8
  • [2] Crowdsourcing for Evaluating Machine Translation Quality
    Goto, Shinsuke
    Lin, Donghui
    Ishida, Toru
    LREC 2014 - NINTH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION, 2014, : 3456 - 3463
  • [3] Study on evaluating machine translation in cyber commons
    Miyazawa, S
    Okada, I
    Shimizu, N
    Yokoyama, S
    Ohta, T
    7TH WORLD MULTICONFERENCE ON SYSTEMICS, CYBERNETICS AND INFORMATICS, VOL I, PROCEEDINGS: INFORMATION SYSTEMS, TECHNOLOGIES AND APPLICATIONS, 2003, : 42 - 47
  • [4] A methodology for evaluating Arabic machine translation systems
    Guessoum, Ahmed
    Zantout, Rached
    MACHINE TRANSLATION, 2004, 18 (04) : 299 - 335
  • [5] Evaluating Machine Translation Quality with Conformal Predictive Distributions
    Giovannotti, Patrizio
    CONFORMAL AND PROBABILISTIC PREDICTION WITH APPLICATIONS, VOL 204, 2023, 204 : 413 - 429
  • [6] PROTEST: A Test Suite for Evaluating Pronouns in Machine Translation
    Guillou, Liane
    Hardmeier, Christian
    LREC 2016 - TENTH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION, 2016, : 636 - 643
  • [7] Evaluating Machine Translation Performance on Chinese Idioms with a Blacklist Method
    Shao, Yutong
    Sennrich, Rico
    Webber, Bonnie
    Fancellu, Federico
    PROCEEDINGS OF THE ELEVENTH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION (LREC 2018), 2018, : 31 - 38
  • [8] Evaluating the English-Turkish parallel treebank for machine translation
    Gorgun, Onur
    Yildiz, Olcay Taner
    TURKISH JOURNAL OF ELECTRICAL ENGINEERING AND COMPUTER SCIENCES, 2022, 30 (01) : 184 - 199
  • [9] Evaluating the Impact of Integrating Similar Translations into Neural Machine Translation
    Tezcan, Arda
    Bulte, Bram
    INFORMATION, 2022, 13 (01)
  • [10] A conjoint analysis framework for evaluating user preferences in machine translation
    Kirchhoff, Katrin
    Capurro, Daniel
    Turner, Anne M.
    MACHINE TRANSLATION, 2014, 28 (01) : 1 - 17