Search results for: bayes estimator

Number of results: 48,066

2008
Daniel S. Weller

This paper describes several new algorithms for estimating the parameters of a periodic bandlimited signal from samples corrupted by jitter (timing noise) and additive noise. Both classical (non-random) and Bayesian formulations are considered: an Expectation-Maximization (EM) algorithm is developed to compute the maximum likelihood (ML) estimator for the classical estimation framework, and two...
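
For orientation, a minimal EM sketch in Python, not the paper's algorithms: the jitter is discretized onto a small known grid, the E-step weights each grid value per sample, and the M-step solves a weighted least-squares problem for the Fourier coefficients. All names and parameter values below are illustrative assumptions.

```python
# Toy EM sketch: fit a periodic bandlimited signal from samples corrupted by
# jitter (discretized onto a small grid here) and additive Gaussian noise.
import numpy as np

rng = np.random.default_rng(0)

K = 2                                   # bandlimit (number of harmonics)
true_coef = rng.normal(size=2 * K)      # [a_1, b_1, a_2, b_2]

def basis(t):
    """Real Fourier basis evaluated at times t (shape: len(t) x 2K)."""
    cols = []
    for k in range(1, K + 1):
        cols.append(np.cos(2 * np.pi * k * t))
        cols.append(np.sin(2 * np.pi * k * t))
    return np.column_stack(cols)

# Nominal sample times, jitter, and additive noise.
N, sigma = 200, 0.1
t = np.arange(N) / N
jitter_grid = np.array([-0.01, 0.0, 0.01])      # discretized jitter support
p_jitter = np.array([0.25, 0.5, 0.25])          # known jitter distribution
z = rng.choice(jitter_grid, size=N, p=p_jitter)
y = basis(t + z) @ true_coef + sigma * rng.normal(size=N)

# EM over the latent per-sample jitter values.
coef = np.linalg.lstsq(basis(t), y, rcond=None)[0]   # init: ignore jitter
for _ in range(30):
    # E-step: posterior weights over the jitter grid for each sample.
    resid = y[:, None] - np.stack([basis(t + d) @ coef for d in jitter_grid], axis=1)
    logw = np.log(p_jitter) - 0.5 * (resid / sigma) ** 2
    w = np.exp(logw - logw.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    # M-step: weighted least squares over all (sample, jitter) pairs.
    Phi = np.vstack([np.sqrt(w[:, [j]]) * basis(t + d)
                     for j, d in enumerate(jitter_grid)])
    rhs = np.concatenate([np.sqrt(w[:, j]) * y for j in range(len(jitter_grid))])
    coef = np.linalg.lstsq(Phi, rhs, rcond=None)[0]

print("true:", np.round(true_coef, 3))
print("EM  :", np.round(coef, 3))
```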

2009
Hacheme AYASSO

We propose a method to simultaneously restore and segment images degraded by a known point spread function (PSF) and additive white noise. To this end, we adopt a joint Bayesian estimation framework, in which a family of non-homogeneous Gauss-Markov fields with Potts region-label models is chosen to serve as priors for the images. Since neither the joint maximum a posteriori estimator nor ...
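
A minimal sketch of just the Potts-label ingredient, assuming known per-class means and no blur: ICM updates of a 1-D label field trade a Gaussian data term against a Potts smoothness term. This is not the paper's joint Gauss-Markov/Potts restoration framework; everything here is illustrative.

```python
# ICM segmentation of a noisy 1-D piecewise-constant signal with a Potts prior.
import numpy as np

rng = np.random.default_rng(1)
means = np.array([0.0, 1.0, 2.5])                 # per-class means (assumed known)
true_labels = np.repeat([0, 1, 2, 1], 50)
y = means[true_labels] + 0.3 * rng.normal(size=true_labels.size)

sigma, beta = 0.3, 1.0                            # noise std, Potts smoothness weight
labels = np.argmin((y[:, None] - means) ** 2, axis=1)   # init: per-sample ML

for _ in range(10):                               # ICM sweeps
    for i in range(len(y)):
        data_term = (y[i] - means) ** 2 / (2 * sigma ** 2)
        smooth = np.zeros_like(means)
        for j in (i - 1, i + 1):                  # 1-D neighbourhood
            if 0 <= j < len(y):
                smooth += beta * (np.arange(len(means)) != labels[j])
        labels[i] = np.argmin(data_term + smooth)

print("label error rate:", np.mean(labels != true_labels))
```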

2009
Artin Armagan

Here we obtain approximate Bayes inferences through variational methods when an exponential power family-type prior is specified for the regression coefficients to mimic the characteristics of Bridge regression. We accomplish this through hierarchical modeling of such priors. Although the mixing distribution is not explicitly stated for scale mixtures of normals, we obtain the required moments ...
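
A rough stand-in for the idea, not the paper's variational algorithm: an MM-style iteratively reweighted ridge based on a local quadratic approximation to a bridge penalty lambda * sum_j |beta_j|^q. Function names and tuning values below are assumptions.

```python
# Iteratively reweighted ridge mimicking a bridge penalty via a local
# quadratic (majorization) approximation.
import numpy as np

def bridge_lqa(X, y, lam=1.0, q=0.7, n_iter=100, eps=1e-8):
    """Approximately minimise 0.5*||y - X b||^2 + lam * sum_j |b_j|^q."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]       # OLS start
    for _ in range(n_iter):
        # Quadratic majoriser of |b|^q at the current iterate:
        # weight_j = q * lam * |beta_j|^(q - 2)
        w = q * lam * (np.abs(beta) + eps) ** (q - 2)
        beta = np.linalg.solve(X.T @ X + np.diag(w), X.T @ y)
    return beta

rng = np.random.default_rng(2)
n, p = 100, 10
X = rng.normal(size=(n, p))
beta_true = np.zeros(p); beta_true[:3] = [2.0, -1.5, 1.0]   # sparse truth
y = X @ beta_true + 0.5 * rng.normal(size=n)

print(np.round(bridge_lqa(X, y, lam=2.0, q=0.7), 3))
```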

2006
Man-Wai Ho

A class of random hazard rates, defined as a mixture of an indicator kernel convolved with a completely random measure, is of interest. We provide an explicit characterization of the posterior distribution of this mixture hazard rate model via a finite mixture of S-paths. A closed-form and tractable Bayes estimator for the hazard rate is derived as a finite sum over S-paths. The path cha...

2017
Jann Spiess

Shrinkage estimation usually reduces variance at the cost of bias. But when we care only about some parameters of a model, I show that we can reduce variance without incurring bias if we have additional information about the distribution of covariates. In a linear regression model with homoscedastic Normal noise, I consider shrinkage estimation of the nuisance parameters associated with control...

Journal: J. Multivariate Analysis, 2013
Tatsuya Kubokawa, Éric Marchand, William E. Strawderman, Jean-Philippe Turcotte

This paper is concerned with estimation of a predictive density with parametric constraints under Kullback-Leibler loss. When an invariance structure is embedded in the problem, general and unified conditions for the minimaxity of the best equivariant predictive density estimator are derived. These conditions are applied to check minimaxity in various restricted parameter spaces in location and...
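
For orientation, the Kullback-Leibler loss and the classical unconstrained normal-location benchmark (the best equivariant predictive density, i.e. the Bayes rule under the uniform prior). The paper's contribution concerns minimaxity of this benchmark under parametric constraints, which is not reproduced here.

```latex
% Kullback-Leibler loss for a predictive density \hat{p}(\cdot \mid x):
\[
  L\bigl(\mu, \hat{p}(\cdot \mid x)\bigr)
  = \int p(y \mid \mu)\,
    \log \frac{p(y \mid \mu)}{\hat{p}(y \mid x)} \, dy .
\]
% In the unconstrained case with X ~ N_d(mu, sigma_x^2 I) and
% Y ~ N_d(mu, sigma_y^2 I), the best (location-)equivariant predictive
% density is the Bayes rule under the uniform prior,
\[
  \hat{p}_U(y \mid x) = N_d\bigl(y \mid x,\; (\sigma_x^2 + \sigma_y^2) I\bigr),
\]
% which is the benchmark whose minimaxity the general conditions address.
```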

Journal: Neural Networks: The Official Journal of the International Neural Network Society, 2010
Sumio Watanabe

Learning machines that have hierarchical structures or hidden variables are singular statistical models because they are nonidentifiable and their Fisher information matrices are singular. In singular statistical models, the Bayes a posteriori distribution does not converge to a normal distribution, nor does the maximum likelihood estimator satisfy asymptotic normality. This is the main re...
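
A small numerical illustration (a toy of my own, not from the paper): in the overparameterized model y = a*b*x + noise, only the product a*b is identified, and the Fisher information matrix in (a, b) is rank-deficient, the hallmark of a singular model.

```python
# Numerical check that a toy nonidentifiable model has a singular Fisher
# information matrix: y = a*b*x + N(0, sigma^2), so only a*b is identified.
import numpy as np

rng = np.random.default_rng(3)
a, b, sigma = 0.7, -1.3, 1.0
x = rng.normal(size=200_000)
y = a * b * x + sigma * rng.normal(size=x.size)

# Score of log N(y; a*b*x, sigma^2) with respect to (a, b).
resid = (y - a * b * x) / sigma ** 2
score = np.column_stack([resid * b * x, resid * a * x])

# Monte Carlo estimate of the Fisher information E[score score^T].
info = score.T @ score / x.size
print(np.round(info, 4))
print("eigenvalues:", np.round(np.linalg.eigvalsh(info), 4))
# One eigenvalue is (numerically) zero: the matrix is singular, so the usual
# asymptotic-normality arguments for ML and Bayes estimators break down.
```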

2014
Thang T. Vu, Chao Sima, Ulisses Braga-Neto, Edward R. Dougherty

Convex bootstrap error estimation is a popular tool for classifier error estimation in gene expression studies. A basic question is how to determine the weight for the convex combination between the basic bootstrap estimator and the resubstitution estimator such that the resulting estimator is unbiased at finite sample sizes. The well-known 0.632 bootstrap error estimator uses asymptotic argume...
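
As a point of reference, a sketch of the classical 0.632 estimator with its fixed asymptotic weight, using a simple nearest-centroid classifier on synthetic data; the paper asks how to calibrate that weight for unbiasedness at finite sample sizes, which this sketch does not do.

```python
# 0.632 bootstrap error estimator: 0.368 * resubstitution + 0.632 * zero bootstrap,
# where the zero bootstrap averages errors on out-of-bag points.
import numpy as np

rng = np.random.default_rng(4)

def fit_centroids(X, y):
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(centroids, X):
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

# Two-class Gaussian data.
n, p = 40, 5
X = np.vstack([rng.normal(0.0, 1, (n // 2, p)), rng.normal(0.8, 1, (n // 2, p))])
y = np.repeat([0, 1], n // 2)

# Resubstitution (training) error.
resub = np.mean(predict(fit_centroids(X, y), X) != y)

# Zero bootstrap: average error on out-of-bag samples.
B, oob_errs = 200, []
for _ in range(B):
    idx = rng.integers(0, n, n)
    oob = np.setdiff1d(np.arange(n), idx)
    if oob.size == 0 or len(np.unique(y[idx])) < 2:
        continue
    c = fit_centroids(X[idx], y[idx])
    oob_errs.append(np.mean(predict(c, X[oob]) != y[oob]))
zero_boot = np.mean(oob_errs)

err_632 = 0.368 * resub + 0.632 * zero_boot
print(f"resub={resub:.3f}  zero_boot={zero_boot:.3f}  0.632 estimate={err_632:.3f}")
```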

2005
Bo Wang, D. M. Titterington

In this paper we investigate the properties of the covariance matrices associated with variational Bayesian approximations, based on data from mixture models, and compare them with the true covariance matrices, corresponding to Fisher information matrices. It is shown that the covariance matrices from the variational Bayes approximations are normally ‘too small’ compared with those for the maxi...
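
The effect is easiest to see in the textbook mean-field example of a correlated Gaussian target, where the factorized approximation's variances equal the reciprocal diagonal of the precision matrix; this is an analogy to, not a reproduction of, the paper's mixture-model analysis.

```python
# Mean-field variational approximations tend to understate spread: for a
# correlated Gaussian target N(mu, Sigma), the factorised approximation
# q(z1)q(z2) minimising KL(q || p) has marginal variances 1/Lambda_ii
# (Lambda = Sigma^{-1}), never larger than the true marginals Sigma_ii.
import numpy as np

rho = 0.9
Sigma = np.array([[1.0, rho],
                  [rho, 1.0]])
Lambda = np.linalg.inv(Sigma)

true_var = np.diag(Sigma)            # exact marginal variances
vb_var = 1.0 / np.diag(Lambda)       # mean-field (factorised) variances

print("true marginal variances:", true_var)        # [1.0, 1.0]
print("mean-field variances   :", vb_var)          # [0.19, 0.19]
print("ratio vb/true          :", vb_var / true_var)
```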
