Search results for: squared error loss

Number of results: 698765

2002
Andrew J. Patton Allan Timmermann

Evaluation of forecast optimality in economics and finance has almost exclusively been conducted under the assumption of mean squared error loss. Under this loss function, optimal forecasts should be unbiased and forecast errors should be serially uncorrelated at the single-period horizon, with increasing variance as the forecast horizon grows. Using analytical results, we show in this paper that ...
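The three optimality properties stated in this abstract (unbiasedness, serially uncorrelated one-step errors, and error variance growing with the horizon) can be checked numerically. The sketch below is an illustration under an assumed AR(1) data-generating process, not the paper's own setup: for y_t = φ·y_{t-1} + ε_t, the conditional-mean forecast at horizon h is φ^h·y_{t-h}.

```python
import numpy as np

# Simulate an AR(1) process y_t = phi * y_{t-1} + eps_t (hypothetical example).
rng = np.random.default_rng(1)
phi, n = 0.8, 200_000
eps = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + eps[t]

# One-step-ahead conditional-mean forecast errors (these equal eps).
e1 = y[1:] - phi * y[:-1]
# Two-step-ahead forecast errors: eps_t + phi * eps_{t-1}.
e2 = y[2:] - phi**2 * y[:-2]

print(abs(e1.mean()))                           # near 0: forecasts are unbiased
print(abs(np.corrcoef(e1[1:], e1[:-1])[0, 1]))  # near 0: one-step errors uncorrelated
print(e1.var() < e2.var())                      # True: variance grows with horizon
```

With φ = 0.8 the two-step error variance is 1 + φ² = 1.64 times the one-step variance, matching the abstract's claim that variance increases with the forecast horizon.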

2007
Fredrik Hekland Geir E. Øien Tor A. Ramstad

In this paper, we identify and quantify loss factors causing sub-optimal performance in joint source-channel coding. We show that both the loss due to non-Gaussian distributed channel symbols and the loss due to non-Gaussian quantization error equal the relative entropy between the actual distribution and the optimal Gaussian distribution, given an average power constraint and a mean-squared ...

2016
Aarti Singh

In many machine learning tasks, we have data Z from some distribution p, and the task is to minimize the risk: R(f) = E_{Z∼p}[ℓ(f(Z), Z)] (11.1), where ℓ is a loss function of interest. E.g., in classification Z = (X, Y) and we use the 0/1 loss ℓ(f(Z), Z) = 1{f(X) ≠ Y}; in regression Z = (X, Y) and we use the squared error ℓ(f(Z), Z) = (f(X) − Y)²; and in density estimation Z = X and we use the negative log likeliho...
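The risk R(f) defined in this abstract is estimated in practice by a sample average. A minimal sketch of the 0/1 and squared-error losses named above, on hypothetical data (the predictor and labels are illustrative, not from the lecture notes):

```python
import numpy as np

def zero_one_loss(y_pred, y_true):
    """0/1 loss for classification: 1 when f(X) != Y, else 0."""
    return (y_pred != y_true).astype(float)

def squared_error_loss(y_pred, y_true):
    """Squared error loss for regression: (f(X) - Y)^2."""
    return (y_pred - y_true) ** 2

def empirical_risk(loss, y_pred, y_true):
    """Sample-average estimate of the risk R(f) = E[loss(f(Z), Z)]."""
    return float(np.mean(loss(y_pred, y_true)))

# Hypothetical predictions and labels: mismatches at positions 1 and 3.
y_true = np.array([1.0, 0.0, 1.0, 1.0])
y_pred = np.array([1.0, 1.0, 1.0, 0.0])

print(empirical_risk(zero_one_loss, y_pred, y_true))       # 0.5
print(empirical_risk(squared_error_loss, y_pred, y_true))  # 0.5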

2009
Gyan Prakash Harish Chandra

In the present paper we study the performance of the Bayes shrinkage estimators for the scale parameter of the Weibull distribution under the squared error loss and the LINEX loss functions, in the presence of prior point information on the scale parameter, when Type-II censored data are available. The properties of the minimax estimators are also discussed. Key-Words: Bayes shrinkage estim...

Journal: :Pattern Recognition 2017
Jesse H. Krijthe Marco Loog

We introduce the implicitly constrained least squares (ICLS) classifier, a novel semi-supervised version of the least squares classifier. This classifier minimizes the squared loss on the labeled data among the set of parameters implied by all possible labelings of the unlabeled data. Unlike other discriminative semi-supervised methods, this approach does not introduce explicit additional assump...

Journal: :CoRR 2016
Christos Thrampoulidis Ehsan Abbasi Babak Hassibi

A popular approach for estimating an unknown signal x0 ∈ ℝⁿ from noisy, linear measurements y = Ax0 + z ∈ ℝᵐ is via solving a so-called regularized M-estimator: x̂ := arg minₓ L(y − Ax) + λf(x). Here, L is a convex loss function, f is a convex (typically, non-smooth) regularizer, and λ > 0 is a regularizer parameter. We analyze the squared error performance ‖x̂ − x0‖² of such estimators in the high-dim...
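The regularized M-estimator above can be instantiated concretely. As an illustration only (not the paper's analysis), the sketch below takes the squared loss L(r) = ½‖r‖² and the ℓ1 regularizer f(x) = ‖x‖₁, and solves the resulting lasso problem by proximal gradient descent (ISTA) on synthetic data, then reports the squared error ‖x̂ − x0‖² that the abstract studies.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, n_iters=500):
    """Solve min_x 0.5 * ||y - A x||^2 + lam * ||x||_1 via proximal gradient."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - y)            # gradient of the squared loss
        x = soft_threshold(x - step * grad, step * lam)
    return x

# Synthetic problem: sparse ground truth x0, Gaussian A, small noise z.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x0 = np.zeros(20)
x0[:3] = [2.0, -1.5, 1.0]
y = A @ x0 + 0.01 * rng.standard_normal(50)

x_hat = ista(A, y, lam=0.1)
sq_err = np.linalg.norm(x_hat - x0) ** 2    # the quantity ||x_hat - x0||^2
print(sq_err)
```

The regularizer parameter λ trades off data fit against sparsity; the choices of λ, dimensions, and noise level here are arbitrary assumptions for the demo.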

B. Babadi Fatemeh Ghapani

In this paper, we propose a new ridge-type estimator called the new mixed ridge estimator (NMRE), unifying the sample and prior information in a linear measurement error model with additional stochastic linear restrictions. The new estimator is a generalization of the mixed estimator (ME) and the ridge estimator (RE). The performances of this new estimator and the mixed ridge estimator (MRE) against th...

[Chart: number of search results per year]