Search results for: james stein estimator
Number of results: 56,551
Nearly all estimators in statistical prediction come with an associated tuning parameter, in one way or another. Common practice, given data, is to choose the tuning parameter value that minimizes a constructed estimate of the prediction error of the estimator; we focus on Stein’s unbiased risk estimator, or SURE (Stein, 1981; Efron, 1986), which forms an unbiased estimate of the prediction err...
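To make the SURE-based tuning idea concrete (this is a generic illustration, not the method of the paper above): in the Gaussian sequence model $x_i \sim N(\mu_i, \sigma^2)$, the risk of soft thresholding at level $t$ has the classical unbiased estimate $\mathrm{SURE}(t) = \sigma^2 n - 2\sigma^2\,\#\{i : |x_i| \le t\} + \sum_i \min(x_i^2, t^2)$, and the tuning parameter is chosen by minimizing it over a grid. The data-generating choices below (sparse mean, unit variance, grid range) are illustrative assumptions.

```python
import numpy as np

def sure_soft_threshold(x, t, sigma=1.0):
    """Stein's unbiased risk estimate for soft thresholding at level t
    in the Gaussian sequence model x_i ~ N(mu_i, sigma^2)."""
    n = x.size
    return (sigma**2 * n
            - 2 * sigma**2 * np.sum(np.abs(x) <= t)
            + np.sum(np.minimum(x**2, t**2)))

# Illustrative data: a sparse mean vector observed with unit-variance noise.
rng = np.random.default_rng(1)
mu = np.concatenate([np.full(5, 3.0), np.zeros(95)])
x = rng.normal(mu, 1.0)

# Choose the threshold by minimizing SURE over a grid.
grid = np.linspace(0.0, 3.0, 61)
risks = [sure_soft_threshold(x, t) for t in grid]
t_hat = grid[int(np.argmin(risks))]
```

Because SURE is unbiased for the true prediction risk at every fixed $t$, minimizing it over the grid is a data-driven surrogate for minimizing the (unknown) risk itself.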
This paper is devoted to adaptive signal denoising in the context of Graph Signal Processing (GSP) using the Spectral Graph Wavelet Transform (SGWT). The issue is addressed via a data-driven thresholding process in the transformed domain, optimizing the parameters in the Mean Square Error (MSE) sense via Stein’s Unbiased Risk Estimator (SURE). The SGWT considered is built upon a partition of unity, making the transform semi-orthogonal so that ...
The James-Stein estimator is an estimator of the multivariate normal mean and dominates the maximum likelihood estimator (MLE) under squared error loss. The original work inspired great interest in developing shrinkage estimators for a variety of problems. Nonetheless, research on shrinkage estimation for manifold-valued data is scarce. In this paper, we propose estimators for the parameters of the Log-Normal distribution defined on the manifold of $N \times N$ symmetric posit...
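For reference, the classical (Euclidean) James-Stein estimator that these manifold extensions build on is a one-liner: given a single observation $X \sim N(\theta, I_p)$ with $p \ge 3$, it shrinks $X$ toward the origin by $\mathrm{JS}(X) = \bigl(1 - (p-2)/\|X\|^2\bigr)X$. A minimal sketch, with an illustrative choice of true mean:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 10                        # dimension; p >= 3 is needed for JS to dominate the MLE
theta = np.ones(p)            # true mean (illustrative assumption)
x = rng.normal(theta, 1.0)    # one observation X ~ N(theta, I_p), known unit variance

# James-Stein shrinkage toward the origin: JS(x) = (1 - (p - 2) / ||x||^2) * x
shrink = 1.0 - (p - 2) / np.sum(x**2)
js = shrink * x

# Squared-error losses of the MLE (x itself) and the JS estimate for this draw.
loss_mle = np.sum((x - theta) ** 2)
loss_js = np.sum((js - theta) ** 2)
```

The dominance result is about expected loss: averaged over many draws, the JS risk is strictly below the MLE risk for every $\theta$ when $p \ge 3$, although on any single realization either estimator can come out ahead.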
We consider the estimation of the mean of a multivariate normal distribution with known variance. Most studies compare the risk of competing estimators, that is, the trace of the squared error matrix. In contrast, we study the whole matrix, in particular its eigenvalues. We prove that there are only two distinct eigenvalues and apply our findings to the James–Stein and Thompson classes of estimators. It turns out that the famous Stein paradox no longer holds when the matrix rat...
The current standard correlation coefficient used in the analysis of microarray data was introduced by M. B. Eisen, P. T. Spellman, P. O. Brown, and D. Botstein [(1998) Proc. Natl. Acad. Sci. USA 95, 14863-14868]. Its formulation is rather arbitrary. We give a mathematically rigorous correlation coefficient of two data vectors based on James-Stein shrinkage estimators. We use the assumptions de...
This article concerns the canonical empirical Bayes problem of estimating normal means under squared-error loss. General empirical Bayes estimators are derived which are asymptotically minimax and optimal. Uniform convergence and the speed of convergence are considered. The general empirical Bayes estimators are compared with the shrinkage estimators of Stein (1956) and James and Stein (1961). Estima...
This note contains supplementary materials to Kernel Mean Estimation via Spectral Filtering. 1 Proof of Theorem 1. (i) Since $\check{\mu}_\lambda = \frac{\hat{\mu}_P}{\lambda+1}$, we have $\|\check{\mu}_\lambda - \mu_P\| = \left\|\frac{\hat{\mu}_P}{\lambda+1} - \mu_P\right\| \le \left\|\frac{\hat{\mu}_P}{\lambda+1} - \frac{\mu_P}{\lambda+1}\right\| + \left\|\frac{\mu_P}{\lambda+1} - \mu_P\right\| \le \|\hat{\mu}_P - \mu_P\| + \lambda\|\mu_P\|$. From [1], we have that $\|\hat{\mu}_P - \mu_P\| = O_P(n^{-1/2})$ and therefore the result follows. (ii) Define $\Delta := \mathbb{E}_P\|\hat{\mu}_P - \mu_P\|^2 = \frac{\int k(x,x)\, dP(x) - \|\mu_P\|^2}{n}$. ...
We consider the nonparametric functional estimation of the drift of a Gaussian process via minimax and Bayes estimators. In this context, we construct superefficient estimators of Stein type for such drifts using the Malliavin integration by parts formula and superharmonic functionals on Gaussian space. Our results are illustrated by numerical simulations and extend the construction of James–St...