Study on evaluating machine translation in cyber commons

Cited: 0
Authors
Miyazawa, S [1]
Okada, I [1]
Shimizu, N [1]
Yokoyama, S [1]
Ohta, T [1]
Affiliation
[1] Shumei Univ, Fac Int Cooperat, Yachiyo, Chiba 2760003, Japan
Source
7TH WORLD MULTICONFERENCE ON SYSTEMICS, CYBERNETICS AND INFORMATICS, VOL I, PROCEEDINGS: INFORMATION SYSTEMS, TECHNOLOGIES AND APPLICATIONS | 2003
Keywords
cyber commons; evaluation; genetic algorithm; machine translation and translation functions;
DOI
Not available
CLC number (Chinese Library Classification)
TP [Automation Technology, Computer Technology];
Discipline classification code
0812;
Abstract
Studies evaluating the quality of Machine Translation ("MT") output have been conducted frequently, whereas, as far as the authors are aware, studies evaluating the overall translation functions of MT software have not. In this study, a functional evaluation of MT software was carried out, including an evaluation based on mathematical analysis. The results show that the available functions vary considerably from product to product. The objectives of the functional evaluation are to investigate and assess which functions existing MT software provides, and at what level of functionality, so that matters requiring attention, points for improvement, and constraints can be utilized in the further development of MT software. Although this functional evaluation was conducted on English-Japanese MT software, the evaluation methods and findings are general-purpose and independent of any specific language. The close relationship between the Cyber Commons and MT systems is also discussed.
Pages: 42-47
Number of pages: 6
Related papers
50 records in total
  • [1] Study on web maintenance in Cyber Commons
    Miyazawa, S
    Shimizu, N
    Okada, I
    Ohta, T
    6TH WORLD MULTICONFERENCE ON SYSTEMICS, CYBERNETICS AND INFORMATICS, VOL II, PROCEEDINGS: CONCEPTS AND APPLICATIONS OF SYSTEMICS, CYBERNETICS AND INFORMATICS I, 2002, : 492 - 497
  • [2] Evaluating Machine Translation in a Usage Scenario
    Del Gaudio, Rosa
    Burchardt, Aljoscha
    Branco, Antonio
    LREC 2016 - TENTH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION, 2016, : 1 - 8
  • [3] Crowdsourcing for Evaluating Machine Translation Quality
    Goto, Shinsuke
    Lin, Donghui
    Ishida, Toru
    LREC 2014 - NINTH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION, 2014, : 3456 - 3463
  • [4] FEMTI Taxonomy for Evaluating Machine Translation Models
    Mizera-Pietraszko, Jolanta
    ADVANCES IN DIGITAL TECHNOLOGIES, 2015, 275 : 263 - 272
  • [5] A methodology for evaluating Arabic machine translation systems
    Guessoum, Ahmed
    Zantout, Rached
    MACHINE TRANSLATION, 2004, 18 (04) : 299 - 335
  • [6] PROTEST: A Test Suite for Evaluating Pronouns in Machine Translation
    Guillou, Liane
    Hardmeier, Christian
    LREC 2016 - TENTH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION, 2016, : 636 - 643
  • [7] Evaluating Machine Translation Performance on Chinese Idioms with a Blacklist Method
    Shao, Yutong
    Sennrich, Rico
    Webber, Bonnie
    Fancellu, Federico
    PROCEEDINGS OF THE ELEVENTH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION (LREC 2018), 2018, : 31 - 38
  • [8] Evaluating the Impact of Integrating Similar Translations into Neural Machine Translation
    Tezcan, Arda
    Bulte, Bram
    INFORMATION, 2022, 13 (01)
  • [9] A conjoint analysis framework for evaluating user preferences in machine translation
    Kirchhoff, Katrin
    Capurro, Daniel
    Turner, Anne M.
    MACHINE TRANSLATION, 2014, 28 (01) : 1 - 17
  • [10] To Case or not to case: Evaluating Casing Methods for Neural Machine Translation
    Etchegoyhen, Thierry
    Ugarte, Harritxu Gete
    PROCEEDINGS OF THE 12TH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION (LREC 2020), 2020, : 3752 - 3760