A human evaluation of English-Slovak machine translation

Cited by: 1
Authors
Munkova, Dasa [1]
Panisova, Ludmila [1]
Welnitzova, Katarina [2]
Affiliations
[1] Constantine Philosopher Univ Nitra, Tr A Hlinku 1, Nitra 94974, Slovakia
[2] Univ Ss Cyril & Methodius, Trnava, Slovakia
Keywords
Machine translation; error analysis; English language; Slovak language; error typology; QUALITY; METRICS; ERRORS;
DOI
10.1080/0907676X.2022.2116989
Chinese Library Classification
H0 [Linguistics];
Subject Classification Codes
030303 ; 0501 ; 050102 ;
Abstract
The paper aims to obtain an error profile for machine translation (MT) from English into Slovak. We present an adjusted framework for MT evaluation, based on Vanko's categorical framework but reflecting the peculiarities of machine translation into synthetic and/or inflectional languages. Using this framework, we analyse the errors generated by Google Translate and identify the most frequent error categories occurring when newspaper articles are machine-translated from English into Slovak. While research exists on widely spoken languages, such as English or other major official EU languages, little is known about Slovak, which is also an official EU language. This paper provides the first human MT evaluation study of English-Slovak machine translation that employs professional translators for a more detailed depiction of translation quality. Our research reveals that the highest numbers of errors occur in lexical semantics and in syntactic-semantic correlativeness, two closely related categories. Additionally, based on the results of the Cochran Q test, we show how individual MT errors in the examined categories differ in their co-occurrence and in how they impact translation quality.
Pages: 1142-1161
Number of pages: 20
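
Note on the Cochran Q test mentioned in the abstract: the sketch below is a minimal illustration of how such a test can be applied to binary error-occurrence data across MT error categories. It uses the cochrans_q function from statsmodels; the category names and the data values are hypothetical and are not taken from the study.

    # Illustrative sketch only: applying a Cochran Q test to binary
    # error-occurrence data across MT error categories.
    # Data and category names below are hypothetical, not from the paper.
    import numpy as np
    from statsmodels.stats.contingency_tables import cochrans_q

    # Rows = translated segments, columns = error categories
    # (1 = an error of that category occurs in the segment, 0 = it does not).
    categories = ["lexical semantics", "syntactic-semantic correlativeness", "morphology"]
    errors = np.array([
        [1, 1, 0],
        [1, 0, 0],
        [0, 1, 0],
        [1, 1, 1],
        [0, 0, 0],
        [1, 1, 0],
        [1, 0, 1],
        [0, 1, 0],
    ])

    # Tests the null hypothesis that the error rate is the same in all categories.
    result = cochrans_q(errors)
    print(f"Q = {result.statistic:.3f}, p = {result.pvalue:.3f}")

A significant result would indicate that at least one error category occurs at a different rate than the others, which is the kind of between-category comparison the abstract describes.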