Search results for: translation evaluation

Number of results: 946,896

2014
Joke Daems, Lieve Macken, Sonia Vandepitte, Mihaela Vela, Anne-Kathrin Schumann, Douglas Jones, Paul Gatewood, Martha Herzog, Federico Gaspari, Antonio Toral, Arle Lommel, Stephen Doherty, Josef van Genabith

This paper presents a study on human and automatic evaluations of translations in a French-German translation learner corpus. The aim of the paper is to shed light on the differences between MT evaluation scores and approaches to translation evaluation rooted in a closely related discipline, namely translation studies. We illustrate the factors contributing to the human evaluation of translatio...

2005
Adriane Rinsche

This paper is based on extensive studies of MT evaluation methods and practical comparative evaluation experience testing major MT systems such as Systran, Metal, Logos, and Ariane. Evaluation studies were carried out within the framework of a Ph.D., on behalf of the EC Commission, and the London-based computer consultancy OVUM. The paper begins with some general observations on the state of the ar...

2013
Yanqing He, Chongde Shi, Huilin Wang

This paper describes the statistical machine translation system of ISTIC used in the evaluation campaign of the patent machine translation task at NTCIR-10. In this year's evaluation, we participated in the patent machine translation tasks for Chinese-English, Japanese-English and English-Japanese. Here we mainly describe the overview of the system, the primary modules, the key techniques and the evalua...

2016
Emad AlSukhni, Mohammed N. Al-Kabi, Izzat M. Alsmadi

The number of Free Online Machine Translation (FOMT) users has witnessed spectacular growth since 1994. FOMT systems have changed aspects of machine translation (MT) and of mass-translated material, covering a wide range of natural languages and machine translation systems. Hundreds of millions of people use these FOMT systems to translate the holy Quran (Al-Qurʾān) verses from the Arabic language to...

2011
Yanqing He, Chongde Shi, Huilin Wang

This paper describes the statistical machine translation system of ISTIC used in the evaluation campaign of the patent machine translation task at NTCIR-9. In this year's evaluation, we participated in the patent machine translation task for Chinese-English. Here we mainly describe the overview of the system, the primary modules, the key techniques and the evaluation results.

2013
Vaishali Gupta, Nisheeth Joshi, Iti Mathur

This paper is based on the evaluation of English-to-Urdu machine translation. Evaluation measures the quality characteristics of the machine translation output and is based on two approaches: Human Evaluation and Automatic Evaluation. In this paper, we are mainly concentrating on Human Evaluation. Machine Translation is an emerging research area in which human beings play a very crucial role. ...

Journal: CoRR, 2013
Nisheeth Joshi, Iti Mathur, Hemant Darbari, Ajai Kumar

Machine translation evaluation is a very important activity in machine translation development. Automatic evaluation metrics proposed in literature are inadequate as they require one or more human reference translations to compare them with output produced by machine translation. This does not always give accurate results as a text can have several different translations. Human evaluation metri...
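The abstract's objection — that reference-based metrics penalize valid translations that happen to differ from the single reference — can be illustrated with a minimal sketch. The function below computes clipped n-gram precision (the core ingredient of BLEU-style metrics); the function name and example sentences are illustrative, not from any cited paper.

```python
from collections import Counter

def ngram_precision(candidate: str, reference: str, n: int = 1) -> float:
    """Fraction of candidate n-grams that also appear in the reference,
    with counts clipped as in BLEU's modified precision."""
    cand = candidate.split()
    ref = reference.split()
    cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
    ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
    overlap = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
    total = sum(cand_ngrams.values())
    return overlap / total if total else 0.0

# Two acceptable translations of the same source score very differently
# against a single reference -- the inadequacy the abstract points out.
reference = "the cat sat on the mat"
print(ngram_precision("the cat sat on the mat", reference))       # 1.0
print(ngram_precision("a cat was sitting on the mat", reference)) # ~0.571
```

A second human reference ("a cat was sitting on the mat") would rescue the second candidate, which is why multi-reference or reference-free evaluation is often preferred.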

2012
Chris Callison-Burch, Philipp Koehn, Christof Monz, Matt Post, Radu Soricut, Lucia Specia

This paper presents the results of the WMT12 shared tasks, which included a translation task, a task for machine translation evaluation metrics, and a task for run-time estimation of machine translation quality. We conducted a large-scale manual evaluation of 103 machine translation systems submitted by 34 teams. We used the ranking of these systems to measure how strongly automatic metrics cor...
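"How strongly automatic metrics correlate" with human system rankings is typically measured with a rank correlation coefficient. A minimal sketch of Spearman's rho (no-ties case), with made-up scores for five hypothetical MT systems — the numbers are illustrative, not WMT12 data:

```python
def spearman_rho(x: list, y: list) -> float:
    """Spearman rank correlation for distinct values:
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1))."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# hypothetical metric scores vs. human scores for five systems
metric = [0.31, 0.28, 0.35, 0.22, 0.40]
human  = [0.60, 0.55, 0.70, 0.40, 0.80]
print(spearman_rho(metric, human))  # 1.0 -- the two orderings agree exactly
```

A metric whose ranking of systems inverts one pair would score below 1.0; shared-task metric evaluations report exactly this kind of number per metric.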

2016
Kim Harris, Aljoscha Burchardt, Georg Rehm, Lucia Specia

Translation quality evaluation (QE) has gained significant uptake in recent years, in particular in light of increased demand for automated translation workflows and machine translation. Despite the need for innovative and forward-looking quality evaluation solutions, the technology landscape remains highly fragmented and the two major constituencies in need of collaborative and ground-breaking ...

1994
Eric Nyberg, Teruko Mitamura, Jaime G. Carbonell

A methodology is presented for component-based machine translation (MT) evaluation through causal error analysis to complement existing global evaluation methods. This methodology is particularly appropriate for knowledge-based machine translation (KBMT) systems. After a discussion of MT evaluation criteria and the particular evaluation metrics proposed for KBMT, we apply this methodology to a ...

[Chart: number of search results per publication year]