Search results for: relevance based language models

Number of results: 3,883,993

Abbas Pour, H., Ghani Poor Tafreshi, M., Ranjbar, Z.

Abstract: Introduction: The growth and expansion of the Internet has changed the way information is accessed, and many facilities have been created on the Web to make locating information easier and faster. Objective: To identify the impact of keyword documentation using the medical thesaurus on the retrieval of articles from the ProQuest and Science Direct databases. Materials and Methods: The pr...

Journal: مطالعات قرآنی و روایی

Understanding the language of the Qur'an is needed for recognizing the names and attributes of Allah. The language of the Qur'an is not the same as the language of religion, which originated in the Christian world. The language of the Qur'an and the Imams' speeches does not have a single aspect, so that it can be understood perfectly via lexical princ...

Journal: CoRR 2014
Benjamin Roth

...saved me a lot of time through their help with the machine translations.

2000
Jianfeng Gao, Kai-Fu Lee

We propose a distribution-based pruning of n-gram backoff language models. Instead of the conventional approach of pruning n-grams that are infrequent in the training data, we prune n-grams that are likely to be infrequent in a new document. Our method is based on the n-gram distribution, i.e., the probability that an n-gram occurs in a new document. Experimental results show that our method performe...
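
A minimal sketch of this idea in Python, assuming the probability that an n-gram occurs in a new document is estimated simply as its document frequency over the number of training documents (the paper fits a proper distribution; all function names and the threshold below are illustrative):

```python
from collections import Counter

def document_frequency(docs, n):
    """Count, for each n-gram, the number of documents it appears in."""
    df = Counter()
    for doc in docs:
        tokens = doc.split()
        ngrams = {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}
        df.update(ngrams)
    return df

def prune_by_document_probability(df, num_docs, threshold=0.1):
    """Keep n-grams whose estimated probability of occurring in a new
    document (document frequency / number of documents) meets the
    threshold; prune the rest."""
    return {g for g, c in df.items() if c / num_docs >= threshold}

docs = ["the cat sat on the mat", "the dog sat on the rug", "a cat and a dog"]
df = document_frequency(docs, 2)
kept = prune_by_document_probability(df, len(docs), threshold=0.5)
print(sorted(" ".join(g) for g in kept))  # bigrams seen in most documents survive
```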

Journal: CoRR 1998
Andreas Stolcke

A criterion for pruning parameters from N-gram backoff language models is developed, based on the relative entropy between the original and the pruned model. It is shown that the relative entropy resulting from pruning a single N-gram can be computed exactly and efficiently for backoff models. The relative entropy measure can be expressed as a relative change in training set perplexity. This le...
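
A hedged sketch of the criterion for a bigram model with unigram backoff: the score below approximates the relative entropy of removing a single n-gram and deliberately ignores the backoff-weight renormalization that the exact computation includes.

```python
import math
from collections import Counter

def entropy_prune_scores(tokens, eps=1e-12):
    """Score each bigram by an approximate relative-entropy cost of
    pruning it from a bigram model with unigram backoff.
    Simplification: the score is p(h, w) * log(p(w|h) / p(w)), i.e. it
    treats the backoff weight as 1 and ignores its renormalization
    after pruning; the exact criterion accounts for both effects."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    total = len(tokens)
    scores = {}
    for (h, w), c in bigrams.items():
        p_hw = c / (total - 1)          # joint probability p(h, w)
        p_w_given_h = c / unigrams[h]   # conditional p(w | h)
        p_w = unigrams[w] / total       # backoff unigram p(w)
        scores[(h, w)] = p_hw * math.log((p_w_given_h + eps) / (p_w + eps))
    return scores

tokens = "the cat sat on the mat the cat ran".split()
scores = entropy_prune_scores(tokens)
# Prune the bigrams whose removal perturbs the model least.
for bg, s in sorted(scores.items(), key=lambda kv: kv[1])[:3]:
    print(bg, round(s, 4))
```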

1999
Hong-Kwang Jeff Kuo, Wolfgang Reichl

Including phrases in the vocabulary list can improve n-gram language models used in speech recognition. In this paper, we report results of automatic extraction of phrases from the training text using frequency, likelihood, and correlation criteria. We show how a language model built from a vocabulary that includes useful phrases can systematically improve language model perplexity in a natural ...
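
As one concrete instance of a correlation-style criterion, the sketch below ranks adjacent word pairs by pointwise mutual information; the abstract also names frequency and likelihood criteria, which are not shown, and the helper names are assumptions:

```python
import math
from collections import Counter

def candidate_phrases(tokens, min_count=2):
    """Rank adjacent word pairs by pointwise mutual information (PMI),
    one correlation criterion for picking phrases to add to the
    vocabulary as single units."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    n = len(tokens)
    ranked = []
    for (a, b), c in bigrams.items():
        if c < min_count:
            continue
        pmi = math.log((c / (n - 1)) / ((unigrams[a] / n) * (unigrams[b] / n)))
        ranked.append(((a, b), pmi))
    return sorted(ranked, key=lambda kv: -kv[1])

tokens = "new york is far from new york city hall".split()
for phrase, pmi in candidate_phrases(tokens):
    print("_".join(phrase), round(pmi, 3))  # e.g. new_york as a phrase unit
```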

2007
Ingo Plag, Remko Scha, Neal Snider, Jean-Pierre Nadal, David Cochran

...abstractions and the retention of exemplars of which those abstractions are composed. The work by Batali (2002), who investigates the emergence of semantic structures in language acquisition and evolution, can also be viewed in ...

2009
Jeff Mitchell, Mirella Lapata

In this paper we propose a novel statistical language model to capture long-range semantic dependencies. Specifically, we apply the concept of semantic composition to the problem of constructing predictive history representations for upcoming words. We also examine the influence of the underlying semantic space on the composition task by comparing spatial semantic representations against topic-...
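
A toy sketch of composing the history's word vectors (additively or multiplicatively) and scoring a candidate next word by similarity. A full model would normalize this score and interpolate it with an n-gram probability, and the random vectors below are stand-ins for a real semantic space:

```python
import numpy as np

def compose_history(vectors, history, how="add"):
    """Compose the semantic vectors of the history words into a single
    representation; additive and multiplicative composition are the
    two schemes being compared."""
    vs = [vectors[w] for w in history if w in vectors]
    if not vs:
        return None
    return np.sum(vs, axis=0) if how == "add" else np.prod(vs, axis=0)

def semantic_score(vectors, history, candidate, how="add"):
    """Cosine similarity between the composed history and a candidate
    next word; in a full model this would be normalized and combined
    with an n-gram probability."""
    h = compose_history(vectors, history, how)
    v = vectors.get(candidate)
    if h is None or v is None:
        return 0.0
    denom = np.linalg.norm(h) * np.linalg.norm(v)
    return float(h @ v / denom) if denom else 0.0

rng = np.random.default_rng(0)
vocab = ["stock", "market", "prices", "banana"]
vectors = {w: rng.normal(size=8) for w in vocab}  # toy stand-in vectors
print(semantic_score(vectors, ["stock", "market"], "prices"))
```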

2012
Kristen Parton, Jianfeng Gao

We present a new cross-lingual relevance feedback model that improves a machine-learned ranker for a language with few training resources, using feedback from a better ranker for a language that has more training resources. The model focuses on linguistically non-local queries, such as [world cup] and [copa mundial], that have similar user intent in different languages, thus allowing the low-re...
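
A minimal sketch of the feedback idea: blend a weak ranker's scores with the scores a stronger ranker in another language assigns to cross-lingually linked documents. The linear blend, the `doc_map` linkage, and all names are illustrative assumptions, not the paper's learned model:

```python
def cross_lingual_feedback(low_scores, high_scores, doc_map, alpha=0.3):
    """Blend a weak ranker's scores with feedback from a stronger
    ranker in a resource-rich language. doc_map links each
    low-resource document to its counterpart (e.g. via a
    cross-lingual link) in the high-resource collection."""
    blended = {}
    for doc, s in low_scores.items():
        twin = doc_map.get(doc)
        feedback = high_scores.get(twin, 0.0)
        blended[doc] = (1 - alpha) * s + alpha * feedback
    return sorted(blended.items(), key=lambda kv: -kv[1])

low = {"d1": 0.2, "d2": 0.5}   # weak ranker scores for [copa mundial]
high = {"e1": 0.9, "e2": 0.1}  # stronger ranker scores for [world cup]
print(cross_lingual_feedback(low, high, {"d1": "e1", "d2": "e2"}))
```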

2009
Abby D. Levenberg, Miles Osborne

Randomised techniques allow very big language models to be represented succinctly. However, being batch-based they are unsuitable for modelling an unbounded stream of language whilst maintaining a constant error rate. We present a novel randomised language model which uses an online perfect hash function to efficiently deal with unbounded text streams. Translation experiments over a text stream...
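
A small sketch of the fingerprint idea behind randomised language models: each n-gram is bucketed by one hash and identified by a short fingerprint from another, so lookups can err with a small, bounded probability. The paper's actual contribution, an online perfect hash that keeps the error rate constant on an unbounded stream, is not reproduced here; this fixed-size table is only illustrative:

```python
import hashlib

class StreamingFingerprintLM:
    """Toy randomised n-gram count store. Each n-gram maps to a bucket
    by one hash and is identified there by a short fingerprint, so a
    lookup can return a wrong count with small, fixed probability."""

    def __init__(self, num_buckets=1 << 16, fp_bits=16):
        self.counts = [0] * num_buckets
        self.fps = [None] * num_buckets
        self.num_buckets = num_buckets
        self.fp_mask = (1 << fp_bits) - 1

    def _slots(self, ngram):
        h = int.from_bytes(hashlib.blake2b(" ".join(ngram).encode()).digest()[:8], "big")
        return h % self.num_buckets, (h >> 32) & self.fp_mask

    def add(self, ngram, count=1):
        i, fp = self._slots(ngram)
        if self.fps[i] in (None, fp):  # a colliding n-gram with a different fingerprint is dropped
            self.fps[i] = fp
            self.counts[i] += count

    def count(self, ngram):
        i, fp = self._slots(ngram)
        return self.counts[i] if self.fps[i] == fp else 0

lm = StreamingFingerprintLM()
lm.add(("the", "cat"))
lm.add(("the", "cat"))
print(lm.count(("the", "cat")), lm.count(("the", "dog")))  # 2 0 (barring a collision)
```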

[Chart: number of search results per year]