Search results for: evaluation metrics

Number of results: 878,773

2007
Olivier Hamon, Anthony Hartley, Andrei Popescu-Belis, Khalid Choukri

This paper analyzes the results of the French MT Evaluation Campaign, CESTA (2003-2006). The details of the campaign are first briefly described. The paper then focuses on the results of the two runs, which used human metrics, such as fluency or adequacy, as well as automated metrics, mainly based on n-gram comparison and word error rates. The results show that the quality of the systems can be...
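One of the automated metrics mentioned above is word error rate (WER). As a hedged illustration only, and not CESTA's exact implementation, WER can be sketched as word-level edit distance normalized by reference length:

```python
def wer(reference, hypothesis):
    """Word error rate: Levenshtein edit distance over word sequences,
    normalized by the number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit-distance table.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# Two words ("on", "the") are missing from the hypothesis: 2 errors / 6 words.
print(wer("the cat sat on the mat", "the cat sat mat"))  # ≈ 0.333
```

Lower WER indicates a hypothesis closer to the reference; n-gram-based metrics such as BLEU instead reward matching word sequences.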

2014
Shuhang Guo

KEMI-TORNIO UNIVERSITY OF APPLIED SCIENCES. Degree programme: Business Information Technology. Writer: Guo, Shuhang. Thesis title: Analysis and evaluation of similarity metrics in collaborative filtering recommender system. Pages (of which appendix): 62 (1). Date: May 15, 2014. Thesis instructor: Ryabov, Vladimir. This research is focused on the field of recommender systems. The general aims of this t...
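As an illustration of the kind of similarity metric such a collaborative-filtering study compares, cosine similarity between two users' rating vectors can be sketched as follows (the user names, item names, and ratings here are invented for the example, not taken from the thesis):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two sparse rating vectors, given as
    dicts mapping item -> rating. Only co-rated items contribute to
    the dot product; norms use each user's full rating vector."""
    common = set(a) & set(b)
    dot = sum(a[i] * b[i] for i in common)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Hypothetical users sharing two rated items.
alice = {"item1": 5, "item2": 3, "item3": 4}
bob = {"item1": 4, "item2": 3, "item4": 5}
print(round(cosine_similarity(alice, bob), 3))  # 29/50 = 0.58
```

Alternatives commonly evaluated alongside cosine similarity include Pearson correlation and adjusted cosine, which subtract user or item mean ratings first.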

2009
Abhinav Saxena, Jose Celaya, Bhaskar Saha, Sankalita Saha, Kai Goebel

Prognostics performance evaluation has gained significant attention in the past few years. As prognostics technology matures and more sophisticated methods for prognostic uncertainty management are developed, a standardized methodology for performance evaluation becomes extremely important to guide improvement efforts in a constructive manner. This paper is in continuation of previous efforts...

2013
Tetsuya Sakai

This lecture is intended to serve as an introduction to Information Retrieval (IR) effectiveness metrics and their usage in IR experiments using test collections. Evaluation metrics are important because they are inexpensive tools for monitoring technological advances. This lecture covers a wide variety of IR metrics (except for those designed for XML retrieval, as there is a separate lecture...
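One of the standard IR effectiveness metrics a lecture like this covers is average precision. A minimal sketch (an illustration of the metric itself, not code from the lecture):

```python
def average_precision(ranked_relevance):
    """Average precision for one query.

    ranked_relevance is a list of 0/1 relevance judgments in ranked
    order; all relevant documents are assumed to appear in the list."""
    hits, total = 0, 0.0
    num_relevant = sum(ranked_relevance)
    for rank, rel in enumerate(ranked_relevance, start=1):
        if rel:
            hits += 1
            total += hits / rank  # precision at each relevant rank
    return total / num_relevant if num_relevant else 0.0

# Relevant documents at ranks 1 and 3: AP = (1/1 + 2/3) / 2.
print(average_precision([1, 0, 1, 0, 0]))  # ≈ 0.833
```

Averaging this value over a set of queries gives mean average precision (MAP), one of the most widely reported test-collection measures.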

2006
Jesús Giménez, Enrique Amigó

Abstract We present the IQMT Framework for Machine Translation Evaluation Inside QARLA. IQMT offers a common workbench in which evaluation metrics can be utilized and combined. It provides i) a measure to evaluate the quality of any set of similarity metrics (KING), ii) a measure to evaluate the quality of a translation using a set of similarity metrics (QUEEN), and iii) a measure to evaluate t...

2005
Shaocai Yu, Brian Eder, Robin Dennis, Shao-Hang Chu, Stephen E. Schwartz

Unbiased symmetric metrics to quantify the relative bias and error between modeled and observed concentrations, based on the factor between measured and observed concentrations, are introduced and compared to conventionally employed metrics. Application to the evaluation of several data sets shows that the new metrics overcome concerns with the conventional metrics and provide useful measures o...

2006
Shaocai Yu, Brian Eder, Robin Dennis, Shao-Hang Chu, Stephen Schwartz

Unbiased symmetric metrics to quantify the relative bias and error between modeled and observed concentrations, based on the factor between measured and observed concentrations, are introduced and compared to conventionally employed metrics. Application to evaluation of several data sets shows that the new metrics overcome concerns with the conventional metrics and provide useful measures of mo...

2013
Aaron Li-Feng Han, Derek F. Wong, Lidia S. Chao, Liangye He, Yi Lu, Junwen Xing, Xiaodong Zeng

Conventional machine translation evaluation metrics tend to perform well on certain language pairs but weakly on others. Furthermore, some evaluation metrics only work on certain language pairs and are not language-independent. Finally, ignoring linguistic information usually leaves a metric with low correlation with human judgments, while too many linguistic featur...

2010
Ibrahim Adeyanju, Nirmalie Wiratunga, Robert Lothian, Susan Craw

The need for automated text evaluation is common to several AI disciplines. In this work, we explore the use of Machine Translation (MT) evaluation metrics for Textual Case Based Reasoning (TCBR). MT and TCBR typically propose textual solutions and both rely on human reference texts for evaluation purposes. Current TCBR evaluation metrics such as precision and recall employ a single human refer...
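The precision and recall against a single human reference mentioned above can be sketched as token-overlap measures. This is a simplified bag-of-words illustration, not the authors' exact formulation:

```python
from collections import Counter

def precision_recall(candidate, reference):
    """Token-overlap precision and recall of a candidate text against
    a single reference text, treating each text as a bag of words."""
    cand = Counter(candidate.split())
    ref = Counter(reference.split())
    # Counter intersection keeps the minimum count of each shared token.
    overlap = sum((cand & ref).values())
    precision = overlap / sum(cand.values()) if cand else 0.0
    recall = overlap / sum(ref.values()) if ref else 0.0
    return precision, recall

# All candidate tokens appear in the reference, but half the
# reference tokens are missing from the candidate.
p, r = precision_recall("the cat sat", "the cat sat on the mat")
print(p, r)  # 1.0 0.5
```

Relying on a single reference in this way is exactly the limitation the paper highlights: a perfectly acceptable paraphrase can score poorly simply because it uses different words than the one reference text.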

2009
Nicholas A. Gorski, John E. Laird

One of the challenges in comparing learning performance across multiple conditions is to develop an appropriate evaluation. We identify four candidate metrics and apply them in an empirical example, providing contrast to differences between the metrics. We propose a set of criteria that learning comparison metrics should satisfy, evaluate the four metrics using the identified criteria, and conc...
