Search results for: cross validation error

Number of results: 878094  

Journal: J. Exp. Theor. Artif. Intell. 1994
Peter D. Turney

This paper presents a theory of error in cross-validation testing of algorithms for predicting real-valued attributes. The theory justifies the claim that predicting real-valued attributes requires balancing the conflicting demands of simplicity and accuracy. Furthermore, the theory indicates precisely how these conflicting demands must be balanced, in order to minimize cross-validation error....
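The trade-off this abstract describes can be made concrete with a small sketch: a generic k-fold estimator of squared prediction error, used to compare a simpler model (a constant) against a more accurate one (a least-squares line) on noisy linear data. This is an illustrative example with names of my own choosing, not Turney's actual procedure.

```python
import random

def kfold_mse(xs, ys, fit, predict, k=5, seed=0):
    # Estimate mean squared prediction error by k-fold cross-validation:
    # each point is predicted by a model trained without its fold.
    idx = list(range(len(xs)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    sq_errs = []
    for fold in folds:
        held_out = set(fold)
        train = [i for i in idx if i not in held_out]
        model = fit([xs[i] for i in train], [ys[i] for i in train])
        sq_errs.extend((predict(model, xs[i]) - ys[i]) ** 2 for i in fold)
    return sum(sq_errs) / len(sq_errs)

def fit_mean(xs, ys):
    # Simplest model: predict the training mean everywhere.
    return ("const", sum(ys) / len(ys))

def fit_line(xs, ys):
    # Least-squares line: one extra parameter buys accuracy.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return ("line", my - slope * mx, slope)

def predict(model, x):
    if model[0] == "const":
        return model[1]
    _, intercept, slope = model
    return intercept + slope * x

# Noisy linear data: here the extra complexity of the line pays off,
# so its cross-validation error is lower than the constant model's.
rng = random.Random(1)
xs = [i / 10 for i in range(50)]
ys = [2.0 * x + rng.gauss(0.0, 0.3) for x in xs]
mse_const = kfold_mse(xs, ys, fit_mean, predict)
mse_line = kfold_mse(xs, ys, fit_line, predict)
```

With data this clearly linear, cross-validation favors the richer model; with pure noise it would favor the constant, which is the balancing act the abstract refers to.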

Journal: Desert 2014
Mohammadali Zare Chahouki Asghar Zare Chahouki Arash Malekian Reza Bagheri Fahraji S. A. Vesali

Rainfall is considered a highly valuable climatologic resource, particularly in arid regions. As one of the primary inputs that drive watershed dynamics, rainfall has been shown to be crucial for accurate distributed hydrologic modeling. Precipitation is known only at certain locations; interpolation procedures are needed to predict this variable in other regions. In this study, the ordinary cokri...

1995
Timothy L. Bailey Charles Elkan

Cross-validation is a frequently used, intuitively pleasing technique for estimating the accuracy of theories learned by machine learning algorithms. During testing of a machine learning algorithm (FOIL) on new databases of prokaryotic RNA transcription promoters which we have developed, cross-validation displayed an interesting phenomenon. One theory is found repeatedly and is responsible for ...

E. Salahi Parvin N. Gholami P. Asadolahi P. Hanafizadeh

There are three major strategies to form neural network ensembles. The simplest one is the cross-validation strategy, in which all members are trained with the same training data. Bagging and boosting strategies produce perturbed samples from the training data. This paper provides an ideal model based on two important factors: activation function and number of neurons in the hidden layer and based u...
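The "perturbed samples" that bagging draws are bootstrap replicates of the training set. A minimal sketch of that sampling step (illustrative only, not the paper's model; the function name is mine):

```python
import random

def bootstrap_replicate(data, rng):
    # Sample len(data) items uniformly with replacement; each bagging
    # ensemble member would be trained on one such replicate, whereas
    # a cross-validation ensemble trains every member on the same data.
    n = len(data)
    return [data[rng.randrange(n)] for _ in range(n)]

rng = random.Random(0)
training_set = list(range(10))
# Three perturbed training sets for a three-member bagged ensemble.
replicates = [bootstrap_replicate(training_set, rng) for _ in range(3)]
```

Each replicate has the original size but typically repeats some examples and omits others, which is what decorrelates the ensemble members.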

Journal: Neural Computation 2017
Ruibo Wang Yu Wang Jihong Li Xingli Yang Jing Yang

A cross-validation method based on m replications of two-fold cross validation is called an m×2 cross validation. An m×2 cross validation is used in estimating the generalization error and comparing algorithms' performance in machine learning. However, the variance of the estimator of the generalization error in m×2 cross vali...
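The m×2 scheme the abstract defines can be sketched generically: repeat a random half-split m times, and within each replication use each half once for training and once for testing. This is a plain illustration of the scheme under names of my own choosing, not the authors' estimator.

```python
import random

def m_by_two_cv(xs, ys, fit, predict, m=5, seed=0):
    # m x 2 cross-validation: m replications of two-fold CV.
    # Returns one error estimate per replication; their mean
    # estimates the generalization error.
    rng = random.Random(seed)
    n = len(xs)
    rep_errors = []
    for _ in range(m):
        idx = list(range(n))
        rng.shuffle(idx)
        half_a, half_b = idx[: n // 2], idx[n // 2:]
        fold_mse = []
        for train, test in ((half_a, half_b), (half_b, half_a)):
            model = fit([xs[i] for i in train], [ys[i] for i in train])
            se = [(predict(model, xs[i]) - ys[i]) ** 2 for i in test]
            fold_mse.append(sum(se) / len(se))
        rep_errors.append(sum(fold_mse) / 2)
    return rep_errors

# Toy usage: a mean predictor on pure-noise data.
def fit_mean(xs, ys):
    return sum(ys) / len(ys)

def predict_mean(model, x):
    return model

rng = random.Random(2)
data_x = list(range(40))
data_y = [rng.gauss(0.0, 1.0) for _ in data_x]
reps = m_by_two_cv(data_x, data_y, fit_mean, predict_mean, m=5)
```

The spread of the m per-replication estimates is exactly the variance that, per the abstract, this line of work tries to characterize and control.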

Journal: Bioinformatics 2007
Ian A. Wood Peter M. Visscher Kerrie L. Mengersen

MOTIVATION Gene expression data offer a large number of potentially useful predictors for the classification of tissue samples into classes, such as diseased and non-diseased. The predictive error rate of classifiers can be estimated using methods such as cross-validation. We have investigated issues of interpretation and potential bias in the reporting of error rate estimates. The issues consi...

2007
Jeffrey S. Racine

Determining the most appropriate network architecture for a data generating process (DGP) is a fundamental aspect of modeling relationships via artificial neural networks. Cross-validatory techniques rank among the most popular approaches toward architecture-selection. Cross-validation is used to estimate the expected squared prediction error of a model. Architecture-selection via cross-validati...

Journal: Journal of Optimization in Industrial Engineering 2012
Behnam Vahdani Seyed Meysam Mousavi Morteza Mousakhani Mani Sharifi Hassan Hashemi

Estimation of the conceptual costs in construction projects can be regarded as an important issue in feasibility studies. This estimation has a major impact on the success of construction projects. Indeed, it provides the information required for cost management and budgeting of these projects. The purpose of this paper is to introduce an intelligent model to im...

1995
David H. Wolpert

This paper uses off-training set (OTS) error to investigate the assumption-free relationship between learning algorithms. It is shown, loosely speaking, that for any two algorithms A and B, there are as many targets (or priors over targets) for which A has lower expected OTS error than B as vice-versa, for loss functions like zero-one loss. In particular, this is true if A is cross-validation a...

1996
David H. Wolpert William G. Macready

Bagging [1] is a technique that tries to improve a learning algorithm's performance by using bootstrap replicates of the training set [5, 4]. The computational requirements for estimating the resultant generalization error on a test set by means of cross-validation are often prohibitive: for leave-one-out cross-validation one needs to train the underlying algorithm on the order of m times, where...
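The cost the abstract points to, that leave-one-out requires on the order of m training runs, can be made concrete with a fold generator: k-fold cross-validation fits the model k times, and leave-one-out is the special case k = m. A sketch with my own naming, not the authors' code:

```python
def kfold_splits(n, k):
    # Yield (train, test) index lists for k-fold CV; one model is
    # fit per yielded pair, so k fits in total. k == n is leave-one-out.
    folds = [list(range(i, n, k)) for i in range(k)]
    for fold in folds:
        held_out = set(fold)
        yield [i for i in range(n) if i not in held_out], fold

n = 100
loo_fits = sum(1 for _ in kfold_splits(n, k=n))       # one fit per sample
tenfold_fits = sum(1 for _ in kfold_splits(n, k=10))  # only 10 fits
```

With 100 samples, leave-one-out costs 100 trainings against 10 for ten-fold, which is why the abstract calls the leave-one-out route often prohibitive.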
