Search results for: fold cross validation

Number of results: 773,095

Ahmad Shalbaf, Arash Maghsoudi, Sara Bagherzadeh

Background: Schizophrenia is a mental disorder that severely affects individuals' perception and social relations. Nowadays, the disease is diagnosed by psychiatrists through psychiatric tests, a process highly dependent on their experience and knowledge. This study aimed to design a fully automated framework for diagnosing schizophrenia from electroencephalogram signals using advanced de...

2005
Bjørn-Helge Mevik Henrik René Cederkvist

The paper presents results from simulations based on real data, comparing several competing mean squared error of prediction (MSEP) estimators on principal components regression (PCR) and partial least squares regression (PLSR): leave-one-out cross-validation, K-fold and adjusted K-fold cross-validation, the ordinary bootstrap estimate, the bootstrap smoothed cross-validation (BCV) estimate and ...
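The estimators compared above all build on the same core computation; a minimal sketch of the plain K-fold MSEP estimate for ordinary least squares (the function name and synthetic data are illustrative, not from the paper):

```python
import numpy as np

def kfold_msep(X, y, k=5, seed=0):
    """Estimate mean squared error of prediction (MSEP) for
    ordinary least squares via K-fold cross-validation."""
    rng = np.random.default_rng(seed)
    n = len(y)
    idx = rng.permutation(n)
    sq_errors = np.empty(n)
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        # fit on the training folds (with an intercept column)
        Xtr = np.column_stack([np.ones(len(train)), X[train]])
        Xte = np.column_stack([np.ones(len(fold)), X[fold]])
        beta, *_ = np.linalg.lstsq(Xtr, y[train], rcond=None)
        # score on the held-out fold
        sq_errors[fold] = (y[fold] - Xte @ beta) ** 2
    return sq_errors.mean()

# synthetic linear data with noise sd 0.1 (MSEP should be near 0.01)
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)
print(kfold_msep(X, y, k=5))
```

The adjusted K-fold and bootstrap variants in the paper modify how these held-out squared errors are aggregated, not the basic split-fit-score loop.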

Journal: Journal of the American Medical Informatics Association, 2020

2017
Zeyi Wen Bin Li Kotagiri Ramamohanarao Jian Chen Yawen Chen Rui Zhang

K-fold cross-validation is commonly used to evaluate the effectiveness of SVMs with the selected hyper-parameters. It is known that SVM k-fold cross-validation is expensive, since it requires training k SVMs. However, little work has explored reusing the h-th SVM when training the (h+1)-th SVM to improve the efficiency of k-fold cross-validation. In this paper, we propose three algorithms ...
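The cost the authors target comes from the k independent trainings in the standard evaluation loop; a dependency-free sketch, with a nearest-centroid classifier standing in for the SVM (all names and data are illustrative assumptions):

```python
import numpy as np

def kfold_indices(n, k, seed=0):
    """Yield (train, test) index arrays for k-fold cross-validation."""
    idx = np.random.default_rng(seed).permutation(n)
    for fold in np.array_split(idx, k):
        yield np.setdiff1d(idx, fold), fold

def kfold_accuracy(X, y, k=5):
    """k-fold accuracy of a nearest-centroid classifier.  Note the
    model is retrained from scratch on each of the k folds -- the
    repeated cost that reusing the h-th model aims to avoid."""
    correct = 0
    for train, test in kfold_indices(len(y), k):
        centroids = {c: X[train][y[train] == c].mean(axis=0)
                     for c in np.unique(y[train])}
        for i in test:
            pred = min(centroids,
                       key=lambda c: np.linalg.norm(X[i] - centroids[c]))
            correct += pred == y[i]
    return correct / len(y)

# two well-separated Gaussian blobs
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print(kfold_accuracy(X, y, k=5))
```

With an SVM in place of the stand-in classifier, each iteration of the loop is a full quadratic-program solve, which is what makes k-fold evaluation expensive at scale.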

Journal: Journal of Machine Learning Research, 2016
Sylvain Arlot Matthieu Lerasle

This paper studies V-fold cross-validation for model selection in least-squares density estimation. The goal is to provide theoretical grounds for choosing V in order to minimize the least-squares loss of the selected estimator. We first prove a non-asymptotic oracle inequality for V-fold cross-validation and its bias-corrected version (V-fold penalization). In particular, this result implie...

1995
Ron Kohavi

We review accuracy estimation methods and compare the two most common methods: cross-validation and bootstrap. Recent experimental results on artificial data and theoretical results in restricted settings have shown that, for selecting a good classifier from a set of classifiers (model selection), ten-fold cross-validation may be better than the more expensive leave-one-out cross-validation. We report o...
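The comparison hinges on the fact that leave-one-out is just k-fold with k = n; a minimal sketch using a 1-nearest-neighbour classifier on synthetic data (names and data are illustrative, not Kohavi's experimental setup):

```python
import numpy as np

def cv_accuracy(X, y, k, seed=0):
    """Accuracy of a 1-nearest-neighbour classifier estimated by
    k-fold cross-validation; k = len(y) gives leave-one-out."""
    idx = np.random.default_rng(seed).permutation(len(y))
    correct = 0
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        for i in fold:
            # nearest training point decides the label
            nearest = train[np.argmin(
                np.linalg.norm(X[train] - X[i], axis=1))]
            correct += y[nearest] == y[i]
    return correct / len(y)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (40, 2)), rng.normal(3, 1, (40, 2))])
y = np.array([0] * 40 + [1] * 40)

print(cv_accuracy(X, y, k=10))      # 10 retrainings
print(cv_accuracy(X, y, k=len(y)))  # n retrainings: far more expensive
```

Both estimates target the same quantity, but leave-one-out costs n model fits against ten for 10-fold, which is the trade-off the paper examines.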

2002
Jos De Brabanter Kristiaan Pelckmans Johan A. K. Suykens Joos Vandewalle

In this paper, a new method for tuning regularisation parameters or other hyperparameters of a learning process (non-linear function estimation) is proposed, called the robust S-fold cross-validation score function (Robust CV S-fold). Robust CV S-fold is effective for dealing with outliers and non-Gaussian noise distributions in the data. Illustrative simulation results are given to demonstrate that the CV S-fold ...

2013
Ravi Kumar Daniel Lokshtanov Sergei Vassilvitskii Andrea Vattani

Multi-fold cross-validation is an established practice to estimate the error rate of a learning algorithm. Quantifying the variance reduction gains due to cross-validation has been challenging due to the inherent correlations introduced by the folds. In this work we introduce a new and weak measure called loss stability and relate the cross-validation performance to this measure; we also establ...

2006
Joaquín Torres-Sospedra Carlos Hernández-Espinosa Mercedes Fernández-Redondo

As seen in the literature, Adaptive Boosting (Adaboost) is one of the best-known methods for increasing the performance of an ensemble of neural networks. We introduce a new method based on Adaboost in which we apply cross-validation to increase the diversity of the ensemble. We use cross-validation over the whole learning set to generate a specific training set and validation set for ...

2003
Yoshua Bengio Yves Grandvalet

Motivations: In machine learning, the standard measure of accuracy for models is the prediction error (PE), i.e. the expected loss on future examples. We consider here the i.i.d. regression or classification setups, where future examples are assumed to be independently sampled from the distribution that generated the training set. When the data distribution is unknown, PE cannot be computed. T...
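Under the i.i.d. assumption described above, the prediction error and its K-fold cross-validation estimate can be written as follows (notation assumed for illustration, not necessarily the paper's):

```latex
\mathrm{PE}(f) = \mathbb{E}_{(X,Y)\sim P}\bigl[L\bigl(f(X), Y\bigr)\bigr],
\qquad
\widehat{\mathrm{PE}}_{\mathrm{CV}}
  = \frac{1}{K}\sum_{k=1}^{K} \frac{1}{|T_k|}
    \sum_{i \in T_k} L\bigl(f^{(-k)}(x_i),\, y_i\bigr),
```

where $P$ is the unknown data distribution, $L$ the loss, $T_k$ the $k$-th held-out fold, and $f^{(-k)}$ the model trained on all data except $T_k$. Since $P$ is unknown, $\mathrm{PE}$ itself cannot be computed, and $\widehat{\mathrm{PE}}_{\mathrm{CV}}$ is the estimate whose variance the paper analyses.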
