Search results for: mean squared error mse and root mean squared error rmse if me and mse are closer to zero

Number of results: 18,449,156
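As a quick refresher on the two metrics in the query (an illustration, not drawn from any result below): MSE averages the squared residuals and RMSE is its square root, so both approach zero as predictions approach the targets.

```python
import math

def mse(y_true, y_pred):
    # Mean squared error: average of squared residuals.
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    # RMSE is the square root of MSE, expressed in the units of the target.
    return math.sqrt(mse(y_true, y_pred))

y_true = [3.0, -0.5, 2.0, 7.0]
y_pred = [2.5, 0.0, 2.0, 8.0]
print(mse(y_true, y_pred))   # 0.375
print(rmse(y_true, y_pred))  # ~0.612
```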

2007
Tae-Young Heo Jong-Min Kim

In this paper, we examine the problem of estimating sensitive characteristics and behaviors in a multinomial randomized response model using a Bayesian approach. We derive a posterior distribution for the parameter of interest in the multinomial randomized response model. Based on the posterior distribution, we also calculate credible intervals and the mean squared error (MSE). We finally compare th...
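The Bayesian machinery in that abstract can be illustrated with a much simpler stand-in: a binary Warner randomized-response design with a flat prior (the paper's model is multinomial; the design parameters and data below are assumed, toy values).

```python
import math
import random

# Hedged sketch (binary Warner design, not the paper's multinomial model):
# each respondent answers the sensitive question truthfully with probability p
# and the reversed question otherwise, so P("yes") = p*pi + (1-p)*(1-pi).
# With a flat prior on pi, a grid posterior yields a point estimate and a
# credible interval.
random.seed(0)
p, pi_true, n = 0.7, 0.3, 2000
lam = p * pi_true + (1 - p) * (1 - pi_true)
yes = sum(random.random() < lam for _ in range(n))

grid = [i / 1000 for i in range(1001)]
def loglik(pi):
    q = min(max(p * pi + (1 - p) * (1 - pi), 1e-12), 1 - 1e-12)
    return yes * math.log(q) + (n - yes) * math.log(1 - q)

lls = [loglik(g) for g in grid]
m = max(lls)
wts = [math.exp(ll - m) for ll in lls]
z = sum(wts)
post_mean = sum(g * wt for g, wt in zip(grid, wts)) / z

# 95% equal-tailed credible interval from the normalized grid posterior
cum, lo, hi = 0.0, None, None
for g, wt in zip(grid, wts):
    cum += wt / z
    if lo is None and cum >= 0.025:
        lo = g
    if hi is None and cum >= 0.975:
        hi = g
print(post_mean, (lo, hi))  # posterior mean should sit near pi_true = 0.3
```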

Journal: علوم دامی ایران (Iranian Journal of Animal Science) 0
Abbas Safari Aligheyarlou (M.Sc. student), Rasoul Vaez Torshizi (Associate Professor, Tarbiat Modares University), Abbas Pakdel (faculty member)

Throughout the current study, the appropriate function for describing the egg production curve in a commercial broiler dam line was determined by fitting six mathematical models, namely: the incomplete gamma function (WM), the modified incomplete gamma function (MWM), the compartmental function (CM), the modified compartmental function (MCM), the polynomial regression function of Ali and Sch...
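One of the curve families named above, the incomplete gamma (Wood) function y(t) = a·t^b·exp(−c·t), can be fitted by log-linearization, since ln y = ln a + b·ln t − c·t is linear in its parameters. The weekly data below are synthetic with assumed parameters, not the paper's flock records.

```python
import numpy as np

# Hedged sketch: fit Wood's incomplete gamma curve to synthetic data.
def wood(t, a, b, c):
    return a * t**b * np.exp(-c * t)

rng = np.random.default_rng(0)
t = np.arange(1.0, 41.0)                       # weeks in lay
y = wood(t, 60.0, 0.35, 0.03) * np.exp(rng.normal(0.0, 0.02, t.size))

# ln y = ln a + b*ln t - c*t  ->  ordinary least squares on the design matrix
X = np.column_stack([np.ones_like(t), np.log(t), -t])
coef, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
a_hat, b_hat, c_hat = float(np.exp(coef[0])), float(coef[1]), float(coef[2])
print(a_hat, b_hat, c_hat)                     # near the true (60, 0.35, 0.03)
```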

2006
Richard A. Ashley

While the conditional mean is known to provide the minimum mean square error (MSE) forecast – and hence is optimal under a squared-error loss function – it must often in practice be replaced by a noisy estimate when model parameters are estimated over a small sample. Here two results are obtained, both of which motivate the use of forecasts biased toward zero (shrinkage forecasts) in such setti...
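The abstract's point can be checked with a small simulation (an illustration, not the paper's analytic result): when the forecast is a noisy estimate of the conditional mean, shrinking it toward zero trades a little bias for less variance and can lower the forecast MSE.

```python
import random

# Toy setup with assumed values: true conditional mean mu, estimation noise
# sigma, and a fixed shrinkage factor applied to the noisy forecast.
random.seed(1)
mu, sigma, shrink, n = 1.0, 2.0, 0.5, 100_000

se_raw = se_shrunk = 0.0
for _ in range(n):
    noisy = random.gauss(mu, sigma)        # forecast estimated on a small sample
    se_raw += (noisy - mu) ** 2
    se_shrunk += (shrink * noisy - mu) ** 2
mse_raw, mse_shrunk = se_raw / n, se_shrunk / n
print(mse_raw, mse_shrunk)  # theory: sigma^2 = 4.0 vs shrink^2*sigma^2 + (1-shrink)^2*mu^2 = 1.25
```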

Journal: علوم آب و خاک (Water and Soil Science) 2011
Mehdi Homaee, Vahidreza Jalali

Soil bulk density measurements are often required as an input parameter for models that predict soil processes. Nonparametric approaches are being used in various fields to estimate continuous variables. One type of nonparametric lazy-learning algorithm, the k-nearest neighbor (k-NN) algorithm, was introduced and tested to estimate soil bulk density from other soil properties, including soil ...
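The k-NN estimator described above can be sketched in a few lines: predict a continuous target as the mean of the k nearest training points in Euclidean feature space. The features and bulk-density values below are hypothetical, not the paper's soil data.

```python
import math

# Minimal k-NN regression sketch with made-up data.
def knn_predict(X_train, y_train, x, k=3):
    nearest = sorted(
        (math.dist(xi, x), yi) for xi, yi in zip(X_train, y_train)
    )[:k]
    return sum(yi for _, yi in nearest) / k

# feature = (sand %, organic carbon %), target = bulk density in g/cm^3
X = [(80.0, 0.5), (60.0, 1.0), (40.0, 2.0), (30.0, 3.0)]
y = [1.60, 1.45, 1.30, 1.10]
print(knn_predict(X, y, (55.0, 1.2), k=2))  # mean of the two nearest: 1.375
```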

Journal: EURASIP J. Wireless Comm. and Networking 2017
Chee-Hyun Park Joon-Hyuk Chang

We propose line-of-sight (LOS)/non-line-of-sight (NLOS) mixture source localization algorithms that utilize the weighted block Newton (WBN) and variable step size WBN (VSSWBN) methods, in which the weighting matrix is determined as the inverse of the squared error or as an exponential function with a negative exponent. The proposed WBN and VSSWBN algorithms converge in two iteratio...

Journal: CoRR 2014
Collins Leke Bhekisipho Twala Tshilidzi Marwala

This paper presents methods aimed at finding approximations to missing data in a dataset by using optimization algorithms to optimize the network parameters, after which prediction and classification tasks can be performed. The optimization methods considered are genetic algorithm (GA), simulated annealing (SA), particle swarm optimization (PSO), random forest (RF) and negativ...

Journal: IEEE Transactions on Signal Processing 2017

2012
James Ting-Ho Lo Yichuan Gui Yun Peng

A method of training multilayer perceptrons (MLPs) to reach a global or nearly global minimum of the standard mean squared error (MSE) criterion is proposed. It has been found that the region in the weight space that does not have a local minimum of the normalized risk-averting error (NRAE) criterion expands strictly to the entire weight space as the risk-sensitivity index increases to infinity....
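One published form of the normalized risk-averting error is NRAE_λ = (1/λ)·ln((1/K)·Σ_k exp(λ·e_k²)); the exact normalization is taken as an assumption here. It approaches the MSE as λ → 0 and emphasizes the worst-case error as λ grows.

```python
import math

# Hedged sketch of the NRAE criterion (assumed form, see lead-in above).
def nrae(errors, lam):
    terms = [lam * e * e for e in errors]
    m = max(terms)                                  # log-sum-exp stabilization
    s = sum(math.exp(t - m) for t in terms)
    return (m + math.log(s / len(errors))) / lam

errors = [0.1, -0.3, 0.2, 0.05]
mse = sum(e * e for e in errors) / len(errors)
print(nrae(errors, 1e-6), mse)   # nearly identical for tiny lambda
print(nrae(errors, 50.0))        # between the MSE and the max squared error
```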

Mahdi Roozbeh, Monireh Maanavi

Background and purpose: Machine learning is a class of modern, powerful tools for solving many of the important problems we face today. Support vector regression (SVR), a notable member of the machine learning family, is a way to build a regression model. SVR has been proven to be an effective tool in real-valued function estimation. As a supervised-learning appr...
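The core SVR idea can be sketched without a full solver: fit a model under the epsilon-insensitive loss max(0, |y − f(x)| − ε) plus an L2 penalty. The toy subgradient solver and data below are assumptions for illustration, not the paper's method or a production SVR implementation.

```python
# Hedged sketch of linear SVR via plain subgradient descent on
# 0.5*w**2 + C * sum_i max(0, |y_i - (w*x_i + b)| - eps).
def fit_linear_svr(xs, ys, eps=0.1, C=10.0, lr=0.002, epochs=5000):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        gw, gb = w, 0.0                  # gradient of the 0.5*w**2 regularizer
        for x, y in zip(xs, ys):
            r = y - (w * x + b)
            if r > eps:                  # prediction below the tube: push it up
                gw -= C * x
                gb -= C
            elif r < -eps:               # prediction above the tube: push it down
                gw += C * x
                gb += C
        w -= lr * gw / len(xs)
        b -= lr * gb / len(xs)
    return w, b

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.1, 1.9, 4.1, 6.0, 7.9]           # roughly y = 2x with small noise
w, b = fit_linear_svr(xs, ys)
print(w, b)                              # slope should settle near 2
```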

Journal: IEEE Trans. Signal Processing 2001
Simon I. Hill Robert C. Williamson

This paper studies three related algorithms: the (traditional) Gradient Descent (GD) Algorithm, the Exponentiated Gradient Algorithm with Positive and Negative weights (EG algorithm) and the Exponentiated Gradient Algorithm with Unnormalized Positive and Negative weights (EGU algorithm). These algorithms have been previously analyzed using the “mistake-bound framework” in the computational lear...
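The two update rules the abstract compares can be written side by side for a single squared-error example (a toy setting, not the paper's analysis): GD subtracts a scaled gradient, while EG multiplies each weight by exp(−η·gᵢ) and renormalizes, keeping the weights positive and summing to one.

```python
import math

def gd_step(w, g, eta):
    # Gradient Descent: additive update.
    return [wi - eta * gi for wi, gi in zip(w, g)]

def eg_step(w, g, eta):
    # Exponentiated Gradient: multiplicative update, then normalization.
    unnorm = [wi * math.exp(-eta * gi) for wi, gi in zip(w, g)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# one step on the squared error (w.x - y)**2 with x = (1, 2), y = 1
w = [0.5, 0.5]
x, y, eta = [1.0, 2.0], 1.0, 0.1
pred = sum(wi * xi for wi, xi in zip(w, x))
g = [2 * (pred - y) * xi for xi in x]    # gradient of the squared error
print(gd_step(w, g, eta))
print(eg_step(w, g, eta))                # stays on the probability simplex
```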
