Search results for: bootstrap aggregating

Number of results: 18,325

2002
Hyun-Chul Kim Shaoning Pang Hong-Mo Je Daijin Kim Sung Yang Bang

Although the support vector machine (SVM) has been proposed to provide good generalization performance, the classification result of the practically implemented SVM is often far from the theoretically expected level because implementations are based on approximate algorithms due to the high time and space complexity. To improve the limited classification performance of the real SVM,...

2005
Giorgio Fumera Fabio Roli Alessandra Serrau

In this paper the performance of bagging in classification problems is theoretically analysed, using a framework developed in works by Tumer and Ghosh and extended by the authors. A bias-variance decomposition is derived, which relates the expected misclassification probability attained by linearly combining classifiers trained on N bootstrap replicates of a fixed training set to that attained ...

Journal: Pattern Recognition, 2003
Torsten Hothorn Berthold Lausen

The combination of classifiers leads to substantial reduction of misclassification error in a wide range of applications and benchmark problems. We suggest using an out-of-bag sample for combining different classifiers. In our setup, a linear discriminant analysis is performed using the observations in the out-of-bag sample, and the corresponding discriminant variables computed for the observations...

2001
Patrice Latinne Olivier Debeir Christine Decaestecker

The aim of this paper is to propose a simple procedure that a priori determines a minimum number of classifiers to combine in order to obtain a prediction accuracy level similar to the one obtained with the combination of larger ensembles. The procedure is based on the McNemar non-parametric test of significance. Knowing a priori the minimum size of the classifier ensemble giving the best predi...
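The McNemar test that the procedure relies on compares two classifiers on a common test set by counting the examples that exactly one of them misclassifies. A minimal sketch of the statistic (the example data are illustrative, not from the paper):

```python
def mcnemar_statistic(y_true, pred_a, pred_b):
    """Chi-square statistic (with continuity correction) for McNemar's test.

    b counts examples classifier A gets right and B gets wrong;
    c counts the reverse. Under the null hypothesis of equal accuracy the
    statistic is approximately chi-square with 1 degree of freedom, so
    values above 3.84 indicate a significant difference at the 0.05 level.
    """
    b = sum(1 for y, a, p in zip(y_true, pred_a, pred_b) if a == y != p)
    c = sum(1 for y, a, p in zip(y_true, pred_a, pred_b) if p == y != a)
    if b + c == 0:
        return 0.0
    return (abs(b - c) - 1) ** 2 / (b + c)

# Illustrative data: A is correct on all 20 examples, B errs on 10.
y = [1] * 20
pred_a = [1] * 20
pred_b = [1] * 10 + [0] * 10
stat = mcnemar_statistic(y, pred_a, pred_b)  # (|10 - 0| - 1)**2 / 10 = 8.1
```

Applied to ensembles of increasing size, a non-significant statistic suggests the smaller ensemble already matches the larger one's accuracy.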

Journal: Pattern Recognition, 2011
Ivica Dimitrovski Dragi Kocev Suzana Loskovska Saso Dzeroski

In this paper, we describe an approach for the automatic medical annotation task of the 2008 CLEF cross-language image retrieval campaign (ImageCLEF). The data comprise 12076 fully annotated images according to the IRMA code. This work is focused on the process of feature extraction from images and hierarchical multi-label classification. To extract features from the images we used a technique ...

2012
S. Kanmani

A classifier ensemble (CE) efficiently improves the generalization ability of a classifier compared to a single classifier. This paper proposes an alternate approach for the integration of classifier ensembles. Initially, three classifiers that are highly diverse and showed good classification accuracy when applied to six UCI (University of California, Irvine) datasets are selected. Then Feature S...

2001
Marina Skurichina Robert P. W. Duin

The performance of a single weak classifier can be improved by using combining techniques such as bagging, boosting and the random subspace method. When applying them to linear discriminant analysis, it appears that they are useful in different situations. Their performance is strongly affected by the choice of the base classifier and the training sample size. In addition, their usefulness depends ...

Journal: CoRR, 2015
Kishore Reddy Konda Xavier Bouthillier Roland Memisevic Pascal Vincent

Dropout is typically interpreted as bagging a large number of models sharing parameters. We show that using dropout in a network can also be interpreted as a kind of data augmentation in the input space without domain knowledge. We present an approach to projecting the dropout noise within a network back into the input space, thereby generating augmented versions of the training data, and we sh...
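The data-augmentation reading of dropout can be illustrated at the input layer: zeroing input features at random generates a different "augmented" copy of each example on every pass. The sketch below is ordinary inverted dropout applied to an input vector, not the paper's method of projecting hidden-layer dropout noise back into input space; names are illustrative.

```python
import random

def input_dropout(x, p, rng):
    """Inverted dropout on an input vector: each feature is zeroed with
    probability p, and survivors are rescaled by 1/(1 - p) so the expected
    value of each feature is unchanged."""
    return [0.0 if rng.random() < p else v / (1 - p) for v in x]

rng = random.Random(0)
x = [1.0, 2.0, 3.0, 4.0]
# Three distinct augmented copies of the same input.
augmented = [input_dropout(x, 0.5, rng) for _ in range(3)]
```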

2002
Hyun-Chul Kim Shaoning Pang Hong-Mo Je Daijin Kim Sung Yang Bang

While the support vector machine (SVM) can provide good generalization performance, the classification result of a practically implemented SVM is often far from the theoretically expected level because implementations are based on approximate algorithms due to the high time and space complexity. To improve the limited classification performance of the real SVM, we propose to use an SVM ensembl...

1999
J. R. Quinlan

Breiman’s bagging and Freund and Schapire’s boosting are recent methods for improving the predictive power of classifier learning systems. Both form a set of classifiers that are combined by voting, bagging by generating replicated bootstrap samples of the data, and boosting by adjusting the weights of training instances. This paper reports results of applying both techniques to a system that l...
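The bagging half of that description (bootstrap replicates of the data combined by voting) can be sketched with a toy base learner on made-up data; this is illustrative, not Quinlan's system:

```python
import random
from collections import Counter

def bootstrap_sample(data, rng):
    # A bootstrap replicate: len(data) points drawn with replacement.
    return [rng.choice(data) for _ in data]

def train_stump(sample):
    # Toy base learner: predict 1 when x is at or above the sample mean.
    threshold = sum(x for x, _ in sample) / len(sample)
    return lambda x: 1 if x >= threshold else 0

def bagging(data, n_estimators, seed=0):
    rng = random.Random(seed)
    stumps = [train_stump(bootstrap_sample(data, rng))
              for _ in range(n_estimators)]
    def predict(x):
        # Combine the replicate-trained classifiers by majority vote.
        votes = Counter(stump(x) for stump in stumps)
        return votes.most_common(1)[0][0]
    return predict

# Made-up 1-D data: (feature, label) pairs.
data = [(0.1, 0), (0.2, 0), (0.3, 0), (0.7, 1), (0.8, 1), (0.9, 1)]
clf = bagging(data, n_estimators=25)
```

Every bootstrap mean here lies in [0.1, 0.9], so `clf(0.05)` votes 0 and `clf(0.95)` votes 1 regardless of the seed.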

Chart: number of search results per year (click the chart to filter results by publication year)