Search results for: m estimator

Number of results: 566707

Journal: SIAM Journal on Numerical Analysis 2022

Convergence of an adaptive collocation method for the parametric stationary diffusion equation with a finite-dimensional affine coefficient is shown. The algorithm relies on a recently introduced residual-based reliable a posteriori error estimator. For the convergence proof, a strategy used for stochastic Galerkin methods with a hierarchical error estimator is transferred to the collocation setting. Extensions to other variants of adaptive methods (including no...

Journal: Concurrency and Computation: Practice and Experience 2023

The method of maximum likelihood fails when there is linear dependency (multicollinearity) and outliers in generalized linear models. In this study, we combined the ridge estimator with the transformed M-estimator (MT) and the conditionally unbiased bounded-influence estimator (CE). The two new estimators are called robust MT and Robust-CE. A Monte Carlo study revealed that the proposed estimators dominate for models with Poisson response and log link fun...

2010
Rand R. Wilcox

For the random variables Y, X1, . . . , Xp, where Y is binary, let M(x1, . . . , xp) = P(Y = 1 | (X1, . . . , Xp) = (x1, . . . , xp)). The paper compares four smoothers aimed at estimating M(x1, . . . , xp), three of which can be used when p > 1. Evidently there are no published comparisons of smoothers when p > 1 and Y is binary. And there are no published results on how the four estimators, conside...
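As a concrete illustration of estimating M(x) = P(Y = 1 | X = x) with a smoother, here is a minimal Nadaraya-Watson kernel estimate for p = 1. This is a generic sketch, not necessarily one of the four smoothers the paper compares; the Gaussian kernel and bandwidth h = 0.3 are illustrative assumptions.

```python
import numpy as np

def nw_smoother(x0, X, Y, h=0.3):
    """Nadaraya-Watson kernel estimate of M(x0) = P(Y = 1 | X = x0).
    Gaussian kernel; bandwidth h is an illustrative choice."""
    w = np.exp(-0.5 * ((X - x0) / h) ** 2)   # kernel weights around x0
    return np.sum(w * Y) / np.sum(w)          # weighted average of the 0/1 labels

rng = np.random.default_rng(3)
X = rng.uniform(-2, 2, size=5000)
p = 1 / (1 + np.exp(-2 * X))                  # true M(x): a logistic curve
Y = (rng.uniform(size=5000) < p).astype(float)
print(nw_smoother(0.0, X, Y))                 # true value is M(0) = 0.5
```

Because Y is binary, the smoother is simply a locally weighted proportion of ones, which is why the same machinery used for continuous responses applies directly here.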

Journal: پژوهش های ریاضی (Mathematical Researches) 2018

Introduction: In classical methods of statistics, the parameter of interest is estimated based on a random sample using natural estimators such as maximum likelihood or unbiased estimators (sample information). In practice, the researcher has prior information about the parameter in the form of a point guess value. Information in the guess value is called nonsample information. Thomp...

2006
Sylvie Huet

We propose a method, based on a penalised likelihood criterion, for estimating the number of non-zero components of the mean of a Gaussian vector. Following the work of Birgé and Massart in Gaussian model selection, we choose the penalty function such that the resulting estimator minimises the Kullback risk. Mathematics Subject Classification. 62G05, 62G09. Received January 13, 2004. Revised Sep...

2001
Jeffrey M. Wooldridge

The method of moments approach to parameter estimation dates back more than 100 years (Stigler, 1986). The notion of a moment is fundamental for describing features of a population. For example, the population mean (or population average), usually denoted μ, is the moment that measures central tendency. If y is a random variable describing the population of interest, we also write the populati...
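The core idea in the snippet above can be shown in a few lines: a method-of-moments estimator replaces a population moment with its sample counterpart. A minimal sketch (the exponential population and sample size are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.exponential(scale=2.0, size=10_000)   # draws from a population with mean mu = 2

# Method of moments: match the first population moment E[y] = mu
# with its sample counterpart, the sample average.
mu_hat = y.mean()

# The second central moment (the variance) is matched the same way.
var_hat = ((y - mu_hat) ** 2).mean()
print(mu_hat, var_hat)
```

For the exponential with scale 2 the true mean is 2 and the true variance is 4, so both estimates should land close to those values as the sample grows.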

2010
Gelu M. Nita Dale E. Gary

Spectral Kurtosis (SK; defined by Nita et al. 2007) is a statistical approach for detecting and removing radio frequency interference (RFI) in radio astronomy data. In this paper, the statistical properties of the SK estimator are investigated and all moments of its probability density function are analytically determined. These moments provide a means to determine the tail probabilities of the...
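A common form of the SK estimator (as I recall it from the Nita & Gary line of work; treat the exact normalization as an assumption) is built from the sum and sum of squares of M accumulated power estimates in one frequency bin, and converges to 1 for pure Gaussian noise:

```python
import numpy as np

def spectral_kurtosis(power, M):
    """SK estimator from M accumulated power estimates in one bin:
    SK = (M+1)/(M-1) * (M * S2 / S1**2 - 1), where S1 and S2 are the
    sum and sum of squares of the powers. Normalization per the
    Nita & Gary papers; verify against the source before relying on it."""
    s1 = power.sum()
    s2 = (power ** 2).sum()
    return (M + 1) / (M - 1) * (M * s2 / s1**2 - 1)

rng = np.random.default_rng(1)
M = 4096
# |z|^2 for complex Gaussian noise z is exponentially distributed,
# so SK of pure noise should be close to 1; RFI pushes it away from 1.
p = rng.exponential(size=M)
print(spectral_kurtosis(p, M))
```

The moments of this estimator derived in the paper are what allow one to set detection thresholds, i.e. tail probabilities for flagging a bin as RFI.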

2008
Søren Johansen Bent Nielsen

An algorithm suggested by Hendry (1999) for estimation in a regression with more regressors than observations is analyzed with the purpose of finding an estimator that is robust to outliers and structural breaks. This estimator is an example of a one-step M-estimator based on Huber's skip function. The asymptotic theory is derived in the situation where there are no outliers or structural brea...
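The one-step Huber-skip idea can be sketched directly: fit an initial estimator, give zero weight to observations whose residuals exceed a cutoff, and refit once on the retained sample. This is an illustrative sketch, not the paper's exact procedure; the cutoff c = 2.5 and the MAD scale estimate are assumptions.

```python
import numpy as np

def one_step_huber_skip(X, y, c=2.5):
    """One-step M-estimator based on Huber's skip function:
    initial OLS, drop observations with |residual| > c * sigma_hat,
    then refit OLS on the retained observations (0/1 'skip' weights)."""
    beta0, *_ = np.linalg.lstsq(X, y, rcond=None)   # initial OLS fit
    resid = y - X @ beta0
    sigma = np.median(np.abs(resid)) / 0.6745       # robust MAD scale estimate
    keep = np.abs(resid) <= c * sigma               # Huber's skip: keep or drop
    beta1, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
    return beta1

rng = np.random.default_rng(2)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)
y[:10] += 50.0                                      # inject gross outliers
print(one_step_huber_skip(X, y))                    # close to the true (1, 2)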

2008
Li Li Zhiheng Li Yi Zhang Yudong Chen

This paper gives an analytical proof of the conjecture [1]: when the dimension M of the auto-covariance matrix is large, the eigenvalue spectrum from Principal Component Analysis (PCA) of a fractional Brownian motion (fBm) process with Hurst parameter H decays as a power law: λ_m ∼ m^(−(2H+1)), m = 1, ..., M. This resolves the interesting puzzle of why the PCA-based H estimator can yield right results for f...
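The power-law claim can be checked numerically: build the fBm auto-covariance matrix from Cov(B_H(s), B_H(t)) = (s^{2H} + t^{2H} − |s − t|^{2H}) / 2, eigendecompose it, and fit the log-log slope of the eigenvalues against their rank. The grid, H, and the rank range used for the fit below are illustrative choices.

```python
import numpy as np

H, M = 0.7, 512
t = np.arange(1, M + 1, dtype=float)
# Auto-covariance of fractional Brownian motion B_H:
# Cov(B_H(s), B_H(t)) = 0.5 * (s^{2H} + t^{2H} - |s - t|^{2H})
C = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
           - np.abs(t[:, None] - t[None, :]) ** (2 * H))
lam = np.linalg.eigvalsh(C)[::-1]       # eigenvalues, largest first
m = np.arange(1, M + 1)
# Fit the power-law exponent over an interior range of ranks,
# away from boundary effects at the largest and smallest eigenvalues.
sl, _ = np.polyfit(np.log(m[4:100]), np.log(lam[4:100]), 1)
print(sl)                                # conjecture predicts about -(2H + 1) = -2.4
```

A slope close to −(2H + 1) on the interior ranks is exactly what makes the PCA-based estimator of H recover the right value from the eigenvalue decay.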
