Search results for: translation evaluation

Number of results: 946,896

2007
Yu Zhou Yanqing He Chengqing Zong

This paper describes our phrase-based statistical machine translation system (CASIA) used in the evaluation campaign of the International Workshop on Spoken Language Translation (IWSLT) 2007. In this year's evaluation, we participated in the open data track of clean text for Chinese-to-English machine translation. Here, we mainly introduce the overview of the system, the primary modules, th...

2008
Iñaki Alegria Lluís Màrquez Kepa Sarasola Arantza Casillas Arantza Díaz de Ilarraza Jon Igartua Gorka Labaka Mikel Lersundi

Constructing a classifier that distinguishes machine translations from human translations is a promising approach to automatically evaluating machine-translated sentences. We developed a classifier with this approach that distinguishes translations based on word-alignment distributions between source sentences and human/machine translations. We used Support Vector Machines as machine learning al...

2006
Sylvie REGNIER-PROST Eva DAUPHIN

Since the beginning of the 1980s, the Information Department of the Common Research Centre of AEROSPATIALE has been involved in the experimentation and evaluation of Natural Language Processing tools, ranging from terminology extraction and management tools to translation memories and Machine Translation systems. Through its experience in comparative testing and diagnostic evaluation of the ma...

2004
Yasuhiro Akiba Marcello Federico Noriko Kando Hiromi Nakaiwa Michael Paul Jun'ichi Tsujii

This paper gives an overview of the evaluation campaign results of the IWSLT04 workshop, which is organized by the C-STAR consortium to investigate novel speech translation technologies and their evaluation. The objective of this workshop is to provide a framework for the applicability validation of existing machine translation evaluation methodologies to evaluate speech translation technolo...

2014
Hui Yu Xiaofeng Wu Jun Xie Wenbin Jiang Qun Liu Shouxun Lin

Most of the widely used automatic evaluation metrics consider only the local fragments of the references and translations, and they ignore evaluation at the syntax level. Current syntax-based evaluation metrics try to introduce syntax information but suffer from the poor parsing results of the noisy machine translations. To alleviate this problem, we propose a novel dependency-based evaluati...

2005
Matthias Eck Chiori Hori

This paper reports an overview of the evaluation campaign results of the IWSLT 2005 workshop. The BTEC corpus, which consists of typical travel domain phrases, was used. Data for the five language pairs Arabic/Chinese/Japanese/Korean to English and English to Chinese was prepared. To study how much the amount of the training data and how much different training and decoding approaches contri...

2009
Chris Callison-Burch Philipp Koehn Christof Monz Josh Schroeder

This paper presents the results of the WMT09 shared tasks, which included a translation task, a system combination task, and an evaluation task. We conducted a large-scale manual evaluation of 87 machine translation systems and 22 system combination entries. We used the ranking of these systems to measure how strongly automatic metrics correlate with human judgments of translation quality, for ...

2014
Shinsuke Goto Donghui Lin Toru Ishida

The recent popularity of machine translation has increased the demand for the evaluation of translations. However, the traditional evaluation approach, manual checking by a bilingual professional, is too expensive and too slow. In this study, we confirm the feasibility of crowdsourcing by analyzing the accuracy of crowdsourcing translation evaluations. We compare crowdsourcing scores to profess...

1999
Fumiaki Sugaya Toshiyuki Takezawa Akio Yokoo Seiichi Yamamoto

ATR Interpreting Telecommunications Research Laboratories developed the ATR-MATRIX speech translation system, which translates both ways between English and Japanese well enough to hold natural on-line real-time conversations. Using this system, we started an end-to-end evaluation of a speech translation system through a dialog test with naive speakers who are not involved in system development and not ...

Journal: CoRR 2009
Vishal Goyal Gurpreet Singh Lehal

Machine Translation in India is relatively young. The earliest efforts date from the late 80s and early 90s. The success of every system is judged by its experimental evaluation results. A number of machine translation systems have been under development, but to the best of the authors' knowledge, no high-quality system has been completed that can be used in real applications. Recently, Punjabi ...
