Search results for: context model

Number of results: 2,433,238

2005
Daniel A. Woods

Current methods model RNA sequence and secondary structure as stochastic context-free grammars, and then use a generative learning model to find the most likely parse (and, therefore, the most likely structure). As we learned in class, discriminative models generally enjoy higher performance than generative learning models. This implies that performance may increase if discriminative learning...
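
For orientation, a minimal sketch of the generative baseline this abstract describes, assuming a toy PCFG in Chomsky normal form: Viterbi-style CYK recovers the single most likely parse of a string. The grammar below is hypothetical and only loosely mimics RNA base pairing (L -> A B pairs an "a" with a "u").

```python
# Viterbi-style CYK: find the most likely parse of a string under a toy PCFG
# in Chomsky normal form. The grammar is hypothetical, for illustration only.
from collections import defaultdict

# Binary rules (lhs, (B, C)) -> prob, and lexical rules (lhs, terminal) -> prob.
binary = {("S", ("L", "S")): 0.5, ("S", ("L", "L")): 0.5,
          ("L", ("A", "B")): 0.4}
unary = {("L", "a"): 0.3, ("L", "g"): 0.3,
         ("A", "a"): 1.0, ("B", "u"): 1.0}

def viterbi_cyk(tokens):
    n = len(tokens)
    best = defaultdict(float)     # (i, j, symbol) -> best inside probability
    back = {}                     # backpointers for recovering the parse tree
    for i, tok in enumerate(tokens):
        for (sym, term), p in unary.items():
            if term == tok and p > best[(i, i + 1, sym)]:
                best[(i, i + 1, sym)] = p
                back[(i, i + 1, sym)] = tok
    for width in range(2, n + 1):
        for i in range(n - width + 1):
            j = i + width
            for k in range(i + 1, j):
                for (lhs, (b, c)), p in binary.items():
                    score = p * best[(i, k, b)] * best[(k, j, c)]
                    if score > best[(i, j, lhs)]:
                        best[(i, j, lhs)] = score
                        back[(i, j, lhs)] = (k, b, c)
    return best[(0, n, "S")], back

prob, back = viterbi_cyk(list("aau"))
print(prob)  # 0.06: probability of the single most likely parse of "aau"
```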

2011
Mark-Jan Nederhof, Giorgio Satta

The notion of infix probability has been introduced in the literature as a generalization of the notion of prefix (or initial substring) probability, motivated by applications in speech recognition and word error correction. For the case where a probabilistic context-free grammar is used as the language model, methods for the computation of infix probabilities have been presented in the literature,...
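
As an illustration of the quantity involved (not the exact methods the paper discusses), the sketch below approximates the infix probability of a substring by brute-force enumeration of a toy PCFG's derivations up to a length cutoff; both the grammar and the cutoff are hypothetical.

```python
# Infix probability of w = total probability of all sentences containing w as
# a substring. This brute-force approximation enumerates derivations of a toy
# grammar S -> a S b (0.4) | c (0.6) up to a length cutoff; it illustrates the
# definition, not an exact computation method.
rules = {"S": [(("a", "S", "b"), 0.4), (("c",), 0.6)]}

def enumerate_strings(symbols, prob, max_len, out):
    """Expand the leftmost nonterminal; collect fully terminal strings."""
    for idx, sym in enumerate(symbols):
        if sym in rules:
            for rhs, p in rules[sym]:
                new = symbols[:idx] + list(rhs) + symbols[idx + 1:]
                if len(new) <= max_len + 2:      # crude pruning bound
                    enumerate_strings(new, prob * p, max_len, out)
            return
    if len(symbols) <= max_len:
        out.append(("".join(symbols), prob))

def infix_probability(infix, max_len=12):
    out = []
    enumerate_strings(["S"], 1.0, max_len, out)
    return sum(p for s, p in out if infix in s)

print(infix_probability("cb"))  # ~0.3959: mass of sentences containing "cb"
```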

Journal: Artif. Intell., 1996
Eugene Charniak, Glenn Carroll, John E. Adcock, Anthony R. Cassandra, Yoshihiko Gotoh, Jeremy Katz, Michael L. Littman, John McCann

We consider what tagging models are most appropriate as front ends for probabilistic context-free-grammar parsers. In particular, we ask if using a tagger that returns more than one tag, a "multiple tagger," improves parsing performance. Our conclusion is somewhat surprising: single-tag Markov-model taggers are quite adequate for the task. First of all, parsing accuracy, as measured by the corr...
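
For contrast with the multiple-tagger question, a minimal sketch of a single-tag bigram HMM tagger decoded with Viterbi search; the toy transition and emission tables are hypothetical, not the paper's models.

```python
# Single-tag bigram HMM tagger: Viterbi search returns exactly one tag
# sequence. Transition and emission probabilities below are hypothetical.
import math

tags = ["DT", "NN", "VB"]
trans = {("<s>", "DT"): 0.6, ("<s>", "NN"): 0.3, ("<s>", "VB"): 0.1,
         ("DT", "NN"): 0.9, ("DT", "VB"): 0.05, ("DT", "DT"): 0.05,
         ("NN", "VB"): 0.6, ("NN", "NN"): 0.3, ("NN", "DT"): 0.1,
         ("VB", "DT"): 0.5, ("VB", "NN"): 0.4, ("VB", "VB"): 0.1}
emit = {("DT", "the"): 0.9, ("NN", "dog"): 0.4, ("NN", "barks"): 0.1,
        ("VB", "barks"): 0.5}

def viterbi_tag(words):
    # delta[t] = (best log-probability ending in tag t, tag path so far)
    delta = {t: (math.log(trans.get(("<s>", t), 1e-12))
                 + math.log(emit.get((t, words[0]), 1e-12)), [t]) for t in tags}
    for w in words[1:]:
        delta = {t: max(((lp + math.log(trans.get((prev, t), 1e-12))
                          + math.log(emit.get((t, w), 1e-12)), path + [t])
                         for prev, (lp, path) in delta.items()),
                        key=lambda x: x[0])
                 for t in tags}
    return max(delta.values(), key=lambda x: x[0])[1]

print(viterbi_tag(["the", "dog", "barks"]))  # ['DT', 'NN', 'VB']
```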

2003
Jose L. Verdú-Mas, Jorge Calera-Rubio, Rafael C. Carrasco

In previous work, a new probabilistic context-free grammar (PCFG) model for natural language parsing, derived from a treebank corpus, was introduced. The model estimates the probabilities according to a generalized k-grammar scheme for trees. It allows for faster parsing, considerably decreases the perplexity of the test samples, and tends to give more structured and refined parses. Howeve...

2012
Marten Van Schijndel, Andrew Exley, William Schuler

Probabilistic context-free grammars (PCFGs) are a popular cognitive model of syntax (Jurafsky, 1996). These can be formulated to be sensitive to human working memory constraints by application of a right-corner transform (Schuler, 2009). One side-effect of the transform is that it guarantees at most a single expansion (push) and at most a single reduction (pop) during a syntactic parse. The pri...

2003
John Gillett, Wayne Ward

We present a class trigram language model in which each class is specified by a probabilistic context-free grammar. We show how to estimate the parameters of the model, and how to smooth these estimates. Experimental perplexity and speech recognition results are presented.
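
The abstract mentions both parameter estimation and smoothing; as a sketch of one common smoothing scheme (an assumption, not necessarily the paper's), the snippet below interpolates trigram, bigram, and unigram relative-frequency estimates over class sequences. The per-class PCFGs that expand each class into word strings are omitted.

```python
# Linear interpolation of trigram, bigram, and unigram relative frequencies
# over a class sequence. Weights and toy counts are hypothetical.
from collections import Counter

classes = ["DATE", "CITY", "VERB", "CITY", "DATE", "VERB", "CITY"]
uni = Counter(classes)
bi = Counter(zip(classes, classes[1:]))
tri = Counter(zip(classes, classes[1:], classes[2:]))
total = len(classes)

def p_interp(c1, c2, c3, l3=0.6, l2=0.3, l1=0.1):
    """P(c3 | c1, c2) as an interpolation of trigram, bigram, unigram MLEs."""
    p_tri = tri[(c1, c2, c3)] / bi[(c1, c2)] if bi[(c1, c2)] else 0.0
    p_bi = bi[(c2, c3)] / uni[c2] if uni[c2] else 0.0
    p_uni = uni[c3] / total
    return l3 * p_tri + l2 * p_bi + l1 * p_uni

print(p_interp("DATE", "CITY", "VERB"))  # ~0.729
```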

2009
Yun-Cheng Ju, Tim Paek

Automotive infotainment systems now provide drivers the ability to hear incoming Short Message Service (SMS) text messages using text-to-speech. However, the question of how best to allow users to respond to these messages using speech recognition remains unsettled. In this paper, we propose a robust voice search approach to replying to SMS messages based on template matching. The templates are...
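
A hypothetical sketch of the template-matching idea: score the recognized utterance against canned reply templates and return the best match. The templates and the Jaccard word-overlap scorer are illustrative assumptions, not the paper's method.

```python
# Match a recognized SMS reply against canned templates by word overlap.
templates = [
    "i am driving right now",
    "i will call you back later",
    "running late see you soon",
]

def best_template(utterance):
    words = set(utterance.lower().split())
    def score(t):
        tw = set(t.split())
        return len(words & tw) / len(words | tw)   # Jaccard similarity
    return max(templates, key=score)

print(best_template("uh i'm running a bit late"))  # "running late see you soon"
```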

2016
Pavol Bielik, Veselin Raychev, Martin T. Vechev

We introduce a new generative model for code called probabilistic higher order grammar (PHOG). PHOG generalizes probabilistic context-free grammars (PCFGs) by allowing conditioning of a production rule beyond the parent non-terminal, thus capturing rich contexts relevant to programs. Even though PHOG is more powerful than a PCFG, it can be learned from data just as efficiently. We trained a PHO...
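
A sketch of the conditioning idea under simplified assumptions: production probabilities are estimated by relative frequency given the parent non-terminal plus an extra context feature, rather than the parent alone as in a PCFG. The training tuples and the chosen context feature are hypothetical; PHOG itself learns the conditioning context from data.

```python
# Production probabilities conditioned on (parent non-terminal, context
# feature) rather than the parent alone. Observations are hypothetical.
from collections import Counter

# (parent non-terminal, context feature, production) "training" observations.
observations = [
    ("Call", "list.append", "arg:Expr"),
    ("Call", "list.append", "arg:Expr"),
    ("Call", "dict.get",    "arg:Name"),
    ("Call", "dict.get",    "arg:Expr"),
]

counts = Counter(observations)
context_totals = Counter((p, ctx) for p, ctx, _ in observations)

def p_production(parent, ctx, production):
    """Relative-frequency estimate of P(production | parent, context)."""
    denom = context_totals[(parent, ctx)]
    return counts[(parent, ctx, production)] / denom if denom else 0.0

print(p_production("Call", "list.append", "arg:Expr"))  # 1.0
print(p_production("Call", "dict.get", "arg:Name"))     # 0.5
```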

2004
Dennis Wagelaar

An important driver for Model-Driven Architecture is that many software applications have to be deployed on a variety of platforms and within a variety of contexts in general. Using software models, e.g. described in the Unified Modeling Language (UML), one can abstract from specific platforms. A software model can then be transformed to a refined model, given the context in which it should run....

2016
Rasmus Dall, Kei Hashimoto, Keiichiro Oura, Yoshihiko Nankaku, Keiichi Tokuda

In this paper we present an investigation of a number of alternative linguistic feature context sets for HMM and DNN text-to-speech synthesis. The representation of positional values is explored through two alternatives to the standard set of absolute values, namely relational and categorical values. In a preference test the categorical representation was found to be preferred for both HMM and D...
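
A sketch of the three positional encodings being compared, for a unit's position within its parent (e.g. a syllable within a word); the feature names and thresholds are assumptions, not the paper's exact feature set.

```python
# Three encodings of a positional value: absolute index, relational
# (index / total), and categorical (initial / medial / final).
def positional_features(index, total):
    absolute = index                    # the standard absolute value
    relational = index / total          # position relative to unit length
    if index == 1:
        categorical = "initial"
    elif index == total:
        categorical = "final"
    else:
        categorical = "medial"
    return {"abs": absolute, "rel": relational, "cat": categorical}

# Third syllable of a four-syllable word:
print(positional_features(3, 4))  # {'abs': 3, 'rel': 0.75, 'cat': 'medial'}
```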
