Search results for: evaluation metrics

Number of results: 878,773

Journal: Foundations and Trends in Computer Graphics and Vision, 2012
Marius Pedersen, Jon Y. Hardeberg

The wide variety of distortions that images are subject to during acquisition, processing, storage, and reproduction can degrade their perceived quality. Since subjective evaluation is time-consuming, expensive, and resource-intensive, objective methods of evaluation have been proposed. One type of these methods, image quality (IQ) metrics, has become very popular, and new metrics are proposed ...

2005
Amanda Stent, Matthew Marge, Mohit Singhai

Recent years have seen increasing interest in automatic metrics for the evaluation of generation systems. When a system can generate syntactic variation, automatic evaluation becomes more difficult. In this paper, we compare the performance of several automatic evaluation metrics using a corpus of automatically generated paraphrases. We show that these evaluation metrics can at least partially ...

2006
Tetsuya Sakai

Large-scale information retrieval evaluation efforts such as TREC and NTCIR have always used binary-relevance evaluation metrics, even when graded relevance data were available. However, the NTCIR-6 crosslingual task has finally announced that it will use graded-relevance metrics, though only as additional metrics. This paper compares graded-relevance metrics in terms of the ability to control ...
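
As a concrete illustration of the binary- vs. graded-relevance distinction above, here is a minimal nDCG sketch in Python. nDCG is only one common graded-relevance metric, the relevance grades are invented, and the specific metrics compared in the paper are not reproduced here.

```python
import math

def dcg(grades, k=None):
    """Discounted cumulative gain over a ranked list of graded relevance values."""
    grades = grades[:k] if k else grades
    return sum(g / math.log2(i + 2) for i, g in enumerate(grades))

def ndcg(grades, k=None):
    """Normalised DCG: the actual ranking scored against the ideal (descending) ordering."""
    best = dcg(sorted(grades, reverse=True), k)
    return dcg(grades, k) / best if best > 0 else 0.0

# Graded relevance of the top five retrieved documents (0 = non-relevant, 3 = highly relevant).
ranking = [3, 0, 2, 1, 0]
print(ndcg(ranking, k=5))                                # graded-relevance view
print(ndcg([1 if g > 0 else 0 for g in ranking], k=5))   # binarised view discards the grade distinctions
```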

1996
Fouad A. Tobagi

In this paper, some well-known quality metrics, such as PSNR and the metric developed at the Institute for Telecommunication Sciences (ITS), are reviewed. Their shortcomings in measuring the quality of coded video, compared to subjective tests, are pointed out. Then, a new video quality metric called the Moving Picture Quality Metric (MPQM) is presented. This metric models the human visual system and matches co...
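
Of the metrics named above, only PSNR has a simple closed form; a minimal Python sketch follows. The frames and noise here are synthetic placeholders, and no attempt is made at MPQM's human-visual-system model.

```python
import numpy as np

def psnr(reference, distorted, max_val=255.0):
    """Peak signal-to-noise ratio (in dB) between a reference frame and a distorted frame."""
    reference = reference.astype(np.float64)
    distorted = distorted.astype(np.float64)
    mse = np.mean((reference - distorted) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10((max_val ** 2) / mse)

# Toy 8-bit frames; in practice these would be the original and the decoded video frames.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
noisy = np.clip(ref.astype(int) + rng.integers(-10, 11, size=ref.shape), 0, 255).astype(np.uint8)
print(f"PSNR: {psnr(ref, noisy):.2f} dB")
```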

2004
Chin-Yew Lin, Franz Josef Och

Comparisons of automatic evaluation metrics for machine translation are usually conducted at the corpus level using correlation statistics such as Pearson’s product moment correlation coefficient or Spearman’s rank order correlation coefficient between human scores and automatic scores. However, such comparisons rely on human judgments of translation quality such as adequacy and fluency. Unfortun...
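
A minimal sketch of the corpus-level comparison described above, assuming five hypothetical MT systems with invented human adequacy scores and automatic metric scores, using SciPy's pearsonr and spearmanr:

```python
from scipy.stats import pearsonr, spearmanr

# Hypothetical corpus-level scores for five MT systems.
human_scores  = [3.2, 2.8, 3.9, 2.1, 3.5]       # human adequacy judgments
metric_scores = [0.31, 0.27, 0.38, 0.22, 0.33]  # automatic metric scores

r, _   = pearsonr(human_scores, metric_scores)   # linear agreement
rho, _ = spearmanr(human_scores, metric_scores)  # rank-order agreement
print(f"Pearson r = {r:.3f}, Spearman rho = {rho:.3f}")
```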

2013
Amitangshu Pal, Asis Nasipuri

We propose a joint power control and quality aware routing scheme for rechargeable wireless sensor networks (WSNs) in order to achieve reliable network operation in the presence of spatial and temporal variations of energy resources. The proposed scheme reduces the energy consumption in sensor nodes that have low remaining battery life through cooperative and network-wide adaptations of transmi...

2013
Rebekka Alm, Sven Kiehl, Birger Lantow, Kurt Sandkuhl

Ontology Design Patterns (ODPs) provide best practice solutions for common or recurring ontology design problems. This work focuses on Content ODPs. These form small ontologies themselves and thus can be subject to ontology quality metrics in general. We investigate the use of such metrics for Content ODP evaluation in terms of metrics applicability and validity. The quality metrics used for th...

2016
Zubeida Casmod Khan, C. Maria Keet

Recent years have seen many advances in ontology modularisation, yet it remains difficult to determine whether a module is actually a good module; it is unclear which metrics should be considered. The few existing works on evaluation metrics focus on only some metrics that suit a particular modularisation technique, and there is not always a quantitative approach to calculating them. Overall, the metric...

2006
Ding Liu, Daniel Gildea

A number of metrics for the automatic evaluation of machine translation have been proposed in recent years, with some metrics focusing on measuring the adequacy of MT output and others focusing on fluency. Adequacy-oriented metrics such as BLEU measure n-gram overlap between MT outputs and their references, but do not represent sentence-level information. In contrast, fluency-oriented metrics su...
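
As a rough illustration of the n-gram overlap that adequacy-oriented metrics such as BLEU rely on, here is a clipped n-gram precision sketch with invented sentences. Full BLEU additionally combines several n-gram orders with a geometric mean and applies a brevity penalty, which this sketch omits.

```python
from collections import Counter

def ngram_precision(candidate, reference, n):
    """Clipped n-gram precision: the fraction of candidate n-grams also found in the reference."""
    cand = [tuple(candidate[i:i + n]) for i in range(len(candidate) - n + 1)]
    ref = Counter(tuple(reference[i:i + n]) for i in range(len(reference) - n + 1))
    if not cand:
        return 0.0
    matched = sum(min(count, ref[gram]) for gram, count in Counter(cand).items())
    return matched / len(cand)

hyp = "the cat sat on the mat".split()
ref = "the cat is on the mat".split()
for n in (1, 2):
    print(f"{n}-gram precision: {ngram_precision(hyp, ref, n):.2f}")
```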

2013
Chen Chen, Vincent Ng

Virtually all the commonly-used evaluation metrics for entity coreference resolution are linguistically agnostic, treating the mentions to be clustered as generic rather than linguistic objects. We argue that the performance of an entity coreference resolver cannot be accurately reflected when it is evaluated using linguistically agnostic metrics. Consequently, we propose a framework for incorp...
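
For context, one of the commonly used linguistically agnostic metrics the authors refer to is B-cubed, sketched below with invented mention IDs; the paper's proposed linguistically informed framework is not reproduced here.

```python
def b_cubed(system, gold):
    """B-cubed precision and recall: per-mention overlap between system and gold clusters."""
    sys_of = {m: set(c) for c in system for m in c}
    gold_of = {m: set(c) for c in gold for m in c}
    mentions = set(sys_of) & set(gold_of)
    prec = sum(len(sys_of[m] & gold_of[m]) / len(sys_of[m]) for m in mentions) / len(mentions)
    rec = sum(len(sys_of[m] & gold_of[m]) / len(gold_of[m]) for m in mentions) / len(mentions)
    return prec, rec

# Mentions are opaque IDs: the metric never inspects their linguistic properties.
gold = [("a", "b", "c"), ("d", "e")]
system = [("a", "b"), ("c", "d", "e")]
p, r = b_cubed(system, gold)
print(f"B3 precision = {p:.2f}, recall = {r:.2f}")
```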

[Chart: number of search results per year]