Search results for: bayes risk
Number of results: 960,369
We address the problem of training the free parameters of a statistical machine translation system. We show significant improvements over a state-of-the-art minimum error rate training baseline on a large Chinese-English translation task. We present novel training criteria based on maximum likelihood estimation and expected loss computation. Additionally, we compare the maximum a-posteriori deci...
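The expected-loss criterion this abstract mentions is closely related to minimum Bayes risk selection over an n-best list: pick the hypothesis whose expected loss under the model's posterior is lowest. A toy Python sketch (the hypotheses, scores, and unigram-overlap loss are illustrative stand-ins, not from the paper):

```python
import math

# Hypothetical n-best list and model log-probabilities (illustrative only)
hyps = ["a b c", "a b d", "x y z"]
logp = [-0.5, -0.7, -3.0]

# Normalize the scores into a posterior over hypotheses
post = [math.exp(l) for l in logp]
Z = sum(post)
post = [p / Z for p in post]

def loss(h, r):
    """Toy loss: 1 minus unigram Jaccard overlap (a stand-in for 1 - BLEU)."""
    hs, rs = set(h.split()), set(r.split())
    return 1 - len(hs & rs) / max(len(hs | rs), 1)

# Expected loss of each candidate, treating every hypothesis as a possible reference
exp_loss = [sum(post[j] * loss(h, hyps[j]) for j in range(len(hyps))) for h in hyps]
mbr = hyps[exp_loss.index(min(exp_loss))]
print(mbr)  # the hypothesis with lowest expected loss under the posterior
```

Here the two high-probability hypotheses overlap heavily, so the minimum Bayes risk choice agrees with the maximum a-posteriori one; with a flatter posterior the two decision rules can differ.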
A new method for example-dependent cost (EDC) classification is proposed. The method constitutes an extension of a recently introduced training algorithm for neural networks. A surrogate function estimates the Bayesian risk, where the estimates of the conditional probabilities of each class are defined in terms of a 1-D Parzen window estimator of the (discriminative) output. This probability density is modeled with the objective of allowing eas...
This article shows that the Minimum Classification Error (MCE) criterion function commonly used for discriminative design of speech recognition systems is equivalent to a Parzen window based estimate of the theoretical Bayes classification risk. In this analysis, each training token is mapped to the center of a Parzen kernel in the domain of a suitably defined random variable. The kernels are s...
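The core idea described here, estimating the Bayes classification risk with Parzen windows, can be illustrated in a few lines of numpy. In this minimal sketch (the two Gaussian classes, bandwidth, and grid are illustrative assumptions, not from the article), each training token becomes the center of a Gaussian kernel, and the overlap of the estimated class densities gives a plug-in risk estimate:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two 1-D Gaussian classes with equal priors (illustrative data, not from the article)
x0 = rng.normal(-1.0, 1.0, 500)   # training tokens of class 0
x1 = rng.normal(+1.0, 1.0, 500)   # training tokens of class 1

def parzen(x, samples, h):
    """Parzen window density estimate: one Gaussian kernel centered on each token."""
    u = (x[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(samples) * h * np.sqrt(2 * np.pi))

# Plug-in Bayes risk estimate: integrate 0.5 * min(p0, p1) over a grid
grid = np.linspace(-6.0, 6.0, 2001)
p0 = parzen(grid, x0, 0.3)
p1 = parzen(grid, x1, 0.3)
risk = np.sum(0.5 * np.minimum(p0, p1)) * (grid[1] - grid[0])
# For these two classes the true Bayes risk is Phi(-1), about 0.159
```

The estimate converges to the true risk as the sample size grows and the kernel bandwidth shrinks appropriately, which is the sense in which an MCE-style criterion approximates the theoretical Bayes risk.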
The author proposes to use weighted likelihood to approximate Bayesian inference when no external or prior information is available. He proposes a weighted likelihood estimator that minimizes the empirical Bayes risk under relative entropy loss. He discusses connections among the weighted likelihood, empirical Bayes and James–Stein estimators. Both simulated and real data sets are used for illu...
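The James–Stein connection mentioned here rests on a classical fact: for estimating a mean vector in three or more dimensions under squared-error loss, shrinking the raw observations dominates the maximum likelihood estimator. A small Monte Carlo sketch (the dimension, true mean, and replication count are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
p, reps = 20, 2000

theta = rng.normal(0.0, 1.0, p)                 # hypothetical true mean vector
X = theta + rng.normal(0.0, 1.0, (reps, p))     # X_i ~ N(theta, I), one row per replication

# Positive-part James-Stein: shrink each observation vector toward the origin
norms = np.sum(X**2, axis=1)
shrink = np.maximum(0.0, 1.0 - (p - 2) / norms)
JS = shrink[:, None] * X

# Monte Carlo risk under squared-error loss
mle_risk = np.mean(np.sum((X - theta) ** 2, axis=1))   # close to p for the raw MLE
js_risk = np.mean(np.sum((JS - theta) ** 2, axis=1))   # strictly smaller in expectation
```

The gap between the two risks is what an empirical-Bayes weighting scheme, like the weighted likelihood estimator in the abstract, is designed to capture without assuming an explicit prior.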
Of those things that can be estimated well in an inverse problem, which is best to estimate? Backus–Gilbert resolution theory answers a version of this question for linear (or linearized) inverse problems in Hilbert spaces with additive zero-mean errors with known, finite covariance, and no constraints on the unknown other than the data. This paper generalizes resolution: it defines the resolut...
The Cp selection criterion is a popular method to choose the smoothing parameter in spline regression. Another widely used method is the generalized maximum likelihood (GML) criterion derived from a normal-theory empirical Bayes framework. These two seemingly unrelated methods have been shown in Efron (Ann. Statist. 29 (2001) 470) and Kou and Efron (J. Amer. Statist. Assoc. 97 (2002) 766) to be actually...
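The Cp criterion trades off residual error against the effective degrees of freedom of the smoother. A minimal numpy sketch, using a ridge smoother on a polynomial basis as a stand-in for spline regression (the data, basis, noise level, and candidate grid are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
n, sigma = 100, 0.3
x = np.linspace(0.0, 1.0, n)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, sigma, n)   # noisy smooth signal

# Ridge smoother on a polynomial basis (a stand-in for a spline smoother)
B = np.vander(x, 9, increasing=True)

def cp(lam):
    """Mallows' Cp with known noise variance: RSS + 2 * sigma^2 * df(lam)."""
    S = B @ np.linalg.solve(B.T @ B + lam * np.eye(B.shape[1]), B.T)
    resid = y - S @ y
    df = np.trace(S)   # effective degrees of freedom of the linear smoother
    return resid @ resid + 2.0 * sigma**2 * df

lams = 10.0 ** np.arange(-8.0, 3.0)
best = min(lams, key=cp)   # smoothing parameter minimizing Cp on the grid
```

The GML alternative discussed in the abstract would instead maximize a marginal likelihood in lam under a normal-theory prior; the Efron and Kou–Efron results referenced above relate the two choices.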
[Chart: number of search results per year]