Search results for: kernel estimator

Number of results: 78705

2004
Alexander M. Bronstein, Michael M. Bronstein, Michael Zibulevsky, Yehoshua Y. Zeevi

Blind deconvolution is considered as a problem of quasi maximum likelihood (QML) estimation of the restoration kernel. Simple closed-form expressions for the asymptotic estimation error are derived. The asymptotic performance bounds coincide with the Cramér-Rao bounds, when the true ML estimator is used. Conditions for asymptotic stability of the QML estimator are derived. Special cases when th...
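Not from the paper above, but as a rough illustration of the setting: the sketch below recovers an FIR restoration kernel from a toy convolved signal by minimizing a logcosh contrast on the unit-variance restored output, a simple quasi-likelihood surrogate for a sparse source density. The channel, kernel length and contrast function are illustrative assumptions, not the authors' setup.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.signal import lfilter

rng = np.random.default_rng(0)

# Toy data: sparse (super-Gaussian) source passed through an unknown channel.
n = 2000
s = rng.laplace(size=n)                          # unobserved source
x = lfilter([1.0, 0.6, 0.2], [1.0], s)           # observed = channel * source

def neg_quasi_loglik(w):
    # Apply a candidate FIR restoration kernel and score the output with a
    # smooth logcosh contrast standing in for -log p(source); the output is
    # normalized to unit variance to remove the scale ambiguity.
    y = lfilter(w, [1.0], x)
    y = y / np.std(y)
    return np.mean(np.log(np.cosh(y)))

w0 = np.zeros(8)
w0[0] = 1.0                                      # start from the identity kernel
res = minimize(neg_quasi_loglik, w0, method="Nelder-Mead")
print("estimated restoration kernel:", np.round(res.x / res.x[0], 3))
```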

1999
Angeles SAAVEDRA, Ricardo CAO

The authors present a new convolution-type kernel estimator of the marginal density of an MA(1) process with general error distribution. They prove the √n-consistency of the nonparametric estimator and give asymptotic expressions for the mean square and the integrated mean square error of some unobservable version of the estimator. An extension to MA(q) processes is presented in the case of th...
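As a rough sketch of the convolution idea behind such estimators (not the authors' exact construction): for an MA(1) process X_t = eps_t + theta*eps_{t-1}, the marginal density of X is the convolution of the density of eps with that of theta*eps, so a kernel estimate of the innovation density can be convolved with a rescaled copy of itself. Here theta is treated as known and the bandwidth is a rule of thumb.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate an MA(1) process X_t = eps_t + theta * eps_{t-1}.
theta, n = 0.5, 2000
eps = rng.standard_normal(n + 1)
x = eps[1:] + theta * eps[:-1]

# Recover the innovations by inverting the (invertible) MA(1) recursion.
resid = np.empty(n)
prev = 0.0
for t in range(n):
    resid[t] = x[t] - theta * prev
    prev = resid[t]

# Kernel estimate of the innovation density, then a numerical convolution
# with a rescaled copy of it to estimate the marginal density of X.
grid = np.linspace(-5, 5, 801)
h = 1.06 * resid.std() * n ** (-0.2)              # rule-of-thumb bandwidth
kde = np.exp(-0.5 * ((grid[:, None] - resid[None, :]) / h) ** 2).sum(1) / (n * h * np.sqrt(2 * np.pi))
kde_scaled = np.interp(grid / theta, grid, kde) / abs(theta)   # density of theta*eps
marginal_hat = np.convolve(kde, kde_scaled, mode="same") * (grid[1] - grid[0])
```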

Journal: :Journal of risk and financial management 2022

This paper proposes a new combined semiparametric estimator of the conditional variance that takes the product of parametric and nonparametric components, the latter based on machine learning. A popular kernel-based learning algorithm, known as the kernel-regularized least squares estimator, is used to estimate the nonparametric component. We discuss, using real data, how to use this estimator to make forecasts for the conditional variance. Simulations are conducted to show dominan...
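A minimal sketch of how such a combined estimator could look, assuming a multiplicative combination: a crude parametric variance model is corrected by a kernel-regularized least squares (kernel ridge) fit of the ratio y^2 / sigma2_par. The quadratic parametric form, the RBF kernel and the regularization settings are illustrative assumptions, not the paper's specification.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(2)

# Toy data with a nonlinear conditional variance sigma^2(x).
n = 1000
x = rng.uniform(-2, 2, size=(n, 1))
sigma2 = 0.5 + x[:, 0] ** 2 + 0.3 * np.sin(3 * x[:, 0])
y = np.sqrt(sigma2) * rng.standard_normal(n)

# Parametric component: a simple quadratic model for the variance.
X_par = np.column_stack([np.ones(n), x[:, 0] ** 2])
beta, *_ = np.linalg.lstsq(X_par, y ** 2, rcond=None)
sigma2_par = np.clip(X_par @ beta, 1e-6, None)

# Nonparametric correction via kernel-regularized least squares
# (kernel ridge regression) on the ratio; combined estimate = product.
krls = KernelRidge(kernel="rbf", alpha=1.0, gamma=1.0)
krls.fit(x, y ** 2 / sigma2_par)
sigma2_hat = sigma2_par * np.clip(krls.predict(x), 1e-6, None)
```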

2016
José Figueroa-López, Cheng Li

Abstract: The selections of the bandwidth and kernel function of a kernel estimator are of great importance in practice. This is not different in the context of spot volatility kernel estimators. In this work, a feasible method of bandwidth and kernel selection is proposed, under some mild conditions on the volatility process, which not only cover classical Brownian motion driven dynamics but a...
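For context, a bare-bones spot volatility kernel estimator on simulated Brownian-driven data is sketched below; the Epanechnikov kernel and the bandwidth h are exactly the ad hoc choices whose data-driven selection the paper addresses.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate dX_t = sigma(t) dW_t on [0, 1] with a time-varying volatility.
n = 20_000
t = np.linspace(0.0, 1.0, n + 1)
sigma = 0.2 + 0.1 * np.sin(2 * np.pi * t[:-1])
dX = sigma * np.sqrt(1.0 / n) * rng.standard_normal(n)

def spot_vol(t0, h):
    """Kernel spot-volatility estimator: sum_i K_h(t_i - t0) * (dX_i)^2."""
    u = (t[:-1] - t0) / h
    w = 0.75 * np.maximum(1.0 - u ** 2, 0.0) / h      # Epanechnikov kernel
    return np.sum(w * dX ** 2)

print("estimate:", spot_vol(0.25, h=0.05), " true:", sigma[n // 4] ** 2)
```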

Journal: :Statistics & Probability Letters 2021

Three common classes of kernel regression estimators are considered: the Nadaraya–Watson (NW) estimator, the Priestley–Chao (PC) estimator and the Gasser–Müller (GM) estimator. It is shown that (i) the GM estimator has a certain monotonicity preservation property for any kernel K, (ii) the NW estimator has this property if and only if K is log-concave, and (iii) the PC estimator does not have it for any K. Other related properties of these estimators are discussed.
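The three estimators are easy to compare numerically; the sketch below (with a Gaussian, hence log-concave, kernel) computes all three on a monotone toy sample and simply reports whether each fitted curve is monotone on that sample. Design, bandwidth and evaluation grid are illustrative.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)

# Sorted design with a monotone regression function plus noise.
n = 200
x = np.sort(rng.uniform(0, 1, n))
y = np.tanh(4 * (x - 0.5)) + 0.1 * rng.standard_normal(n)

h = 0.1
grid = np.linspace(0.05, 0.95, 200)
K = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2)   # Gaussian kernel weights

# Nadaraya-Watson: locally weighted average of the responses.
nw = (K @ y) / K.sum(axis=1)

# Priestley-Chao: spacings x_i - x_{i-1} act as integration weights.
dx = np.diff(x, prepend=0.0)
pc = (K * dx) @ y / (h * np.sqrt(2 * np.pi))

# Gasser-Mueller: integrate the kernel between midpoints s_{i-1}, s_i.
s = np.concatenate(([-10.0], (x[:-1] + x[1:]) / 2, [10.0]))
cdf = norm.cdf((s[None, :] - grid[:, None]) / h)
gm = (cdf[:, 1:] - cdf[:, :-1]) @ y

for name, est in [("NW", nw), ("PC", pc), ("GM", gm)]:
    print(name, "monotone on this sample:", bool(np.all(np.diff(est) >= 0)))
```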

2001
Thomas H. McCurdy, Thanasis Stengos

This paper computes parametric estimates of a time-varying risk premium model and compares the one-step-ahead forecasts implied by that model with those given by a nonparametric kernel estimator of the conditional mean function. The conditioning information used for the nonparametric analysis is that implied by the theoretical model of time-varying risk. Thus, the kernel estimator is used, in c...
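A minimal Nadaraya–Watson one-step-ahead forecast of a conditional mean, in the spirit of the nonparametric side of that comparison; the conditioning variable, the Gaussian kernel and the bandwidth are placeholders, not the paper's risk model.

```python
import numpy as np

def nw_forecast(x_hist, y_next, x_now, h):
    """Nadaraya-Watson estimate of E[y_{t+1} | x_t = x_now] from past pairs."""
    w = np.exp(-0.5 * ((x_hist - x_now) / h) ** 2)
    return np.sum(w * y_next) / np.sum(w)

# Toy usage: pairs (x_t, y_{t+1}) where the next-period response depends on x_t.
rng = np.random.default_rng(5)
x = rng.standard_normal(501)                                   # conditioning variable
y_next = 0.3 * x[:-1] + 0.5 * rng.standard_normal(500)         # y_{t+1} given x_t
print(nw_forecast(x[:-1], y_next, x_now=x[-1], h=0.3))
```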

2007
Aurore Delaigle

The deconvolution kernel density estimator is a popular technique for solving the deconvolution problem, where the goal is to estimate a density from a sample of contaminated observations. Although this estimator is optimal, it suffers from two major drawbacks: it converges at very slow rates (inherent to the deconvolution problem) and can only be calculated when the density of the errors is co...
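For reference, a bare-bones deconvolution kernel density estimator via Fourier inversion, assuming Gaussian measurement error with known variance and a kernel whose Fourier transform is (1 - s^2)^3 on [-1, 1]; the bandwidth is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(6)

# Contaminated observations W = X + e, with known N(0, sig_err^2) error.
n, sig_err = 1000, 0.3
x_true = rng.standard_normal(n)
w = x_true + sig_err * rng.standard_normal(n)

h = 0.4                                           # bandwidth (ad hoc)
t = np.linspace(-1 / h, 1 / h, 2001)              # frequencies where phi_K(t*h) is nonzero
dt = t[1] - t[0]

phi_K = (1.0 - (t * h) ** 2) ** 3                 # Fourier transform of the kernel on [-1/h, 1/h]
phi_emp = np.exp(1j * t[:, None] * w[None, :]).mean(axis=1)   # empirical char. function of W
phi_err = np.exp(-0.5 * (sig_err * t) ** 2)       # char. function of the error

# f_hat(x) = (1/2pi) * integral exp(-itx) phi_K(t*h) phi_emp(t) / phi_err(t) dt
grid = np.linspace(-4, 4, 401)
f_hat = np.real(np.exp(-1j * grid[:, None] * t[None, :]) @ (phi_K * phi_emp / phi_err)) * dt / (2 * np.pi)
f_hat = np.clip(f_hat, 0.0, None)                 # the raw estimate can dip below zero
```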

2008
Madeleine Cule, Michael Stewart

Let X1, . . . ,Xn be independent and identically distributed random vectors with a (Lebesgue) density f. We first prove that, with probability 1, there is a unique log-concave maximum likelihood estimator f̂n of f. The use of this estimator is attractive because, unlike kernel density estimation, the method is fully automatic, with no smoothing parameters to choose. Although the existence proof ...
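A crude numerical sketch of the univariate log-concave MLE (not the authors' algorithm): phi = log f is parametrized by its values at the sorted data points, concavity is imposed through second-difference constraints, and the criterion mean(phi(X_i)) - integral exp(phi) is maximized with a general-purpose optimizer. The sample size and solver settings are kept small so the toy problem runs quickly.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
x = np.sort(rng.standard_normal(60))            # i.i.d. sample (true density is log-concave)
n = len(x)
grid = np.linspace(x[0], x[-1], 1500)           # the log-concave MLE is supported on [x_(1), x_(n)]

def neg_criterion(phi):
    # Negative of the log-concave MLE criterion with phi piecewise linear on the data.
    phi_grid = np.interp(grid, x, phi)
    return -(phi.mean() - np.trapz(np.exp(phi_grid), grid))

# Concavity of the piecewise-linear phi: slopes must be non-increasing.
d = np.diff(x)
A = np.zeros((n - 2, n))
for i in range(n - 2):
    A[i, i] = 1.0 / d[i]
    A[i, i + 1] = -1.0 / d[i] - 1.0 / d[i + 1]
    A[i, i + 2] = 1.0 / d[i + 1]

res = minimize(neg_criterion, np.full(n, -1.0), method="SLSQP",
               constraints=[{"type": "ineq", "fun": lambda phi: -A @ phi}])
f_hat = np.exp(np.interp(grid, x, res.x))       # fitted log-concave density on the data range
```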

M. Sarmad, P. Asghari, V. Fakoor

Length-biased data are widely seen in applications, most notably in epidemiological studies and in survival analysis in medical research. Here we aim to propose a Berry–Esseen type bound for the kernel density estimator of this kind of data. The rate of normal convergence in the proposed Berry–Esseen type theorem is shown to be O(n^(-1/6)) modulo a logarithmic term as n tends to infin...
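For orientation, the standard weighted kernel density estimator for length-biased data (weights 1/X_i, normalized by a harmonic-mean estimate of E[X]) is sketched below; it is the usual estimator analyzed in this literature, though whether it matches the authors' exact version cannot be read off the truncated abstract. The bandwidth and the toy Gamma example are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)

# Length-biased sampling: if X ~ Gamma(2, 1), draws proportional to x have
# density x f(x) / E[X], which is Gamma(3, 1).  We observe the biased draws.
n = 2000
y = rng.gamma(shape=3.0, scale=1.0, size=n)

h = 0.3
grid = np.linspace(0.01, 10, 500)
mu_hat = n / np.sum(1.0 / y)                     # estimate of E[X] under length bias
K = np.exp(-0.5 * ((grid[:, None] - y[None, :]) / h) ** 2) / (h * np.sqrt(2 * np.pi))
f_hat = (mu_hat / n) * (K @ (1.0 / y))           # weighted KDE of the underlying density f
```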

Journal: :Computational Statistics & Data Analysis 2013
Han Lin Shang

Error density estimation in a nonparametric functional regression model with a functional predictor and a scalar response is considered. The unknown error density is approximated by a mixture of Gaussian densities with means equal to the individual residuals and a common constant variance parameter. This proposed mixture error density has the form of a kernel density estimator of the residuals, where the regre...
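The residual-based construction is easy to sketch; below the functional predictor is replaced by a scalar toy regression purely to produce residuals, and the error density is the stated mixture of Gaussians centred at the residuals with one common variance, i.e. a Gaussian kernel density estimate of the residuals. Both bandwidths are arbitrary here.

```python
import numpy as np

rng = np.random.default_rng(9)

# Toy regression standing in for the functional model, just to get residuals.
n = 300
x = rng.uniform(0, 1, n)
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(n)

hx = 0.05                                        # regression bandwidth
W = np.exp(-0.5 * ((x[:, None] - x[None, :]) / hx) ** 2)
resid = y - (W @ y) / W.sum(axis=1)              # Nadaraya-Watson residuals

# Error-density estimate: mixture of Gaussians with means at the residuals
# and a single variance b^2, i.e. a Gaussian KDE of the residuals.
b = 0.1
e_grid = np.linspace(-1.5, 1.5, 301)
f_err = np.exp(-0.5 * ((e_grid[:, None] - resid[None, :]) / b) ** 2).mean(axis=1) / (b * np.sqrt(2 * np.pi))
```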

[Chart: number of search results per year]