Search results for: bagging model

Number of results: 2,105,681

2000
Alexey Tsymbal, Seppo Puuronen

One approach in classification tasks is to use machine learning techniques to derive classifiers from learning instances. Combining several base classifiers into a decision committee has succeeded in reducing classification error. The main current decision committee learning approaches, boosting and bagging, use resampling of the training set, and they can be used with different machine le...
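
To make the shared resampling mechanism concrete, here is a minimal sketch of bagging in Python, assuming scikit-learn decision trees as the base classifiers; the function names are illustrative, not from the paper:

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def bagging_fit(X, y, n_estimators=25, random_state=0):
        """Train a decision committee: each base classifier is fit on a
        bootstrap resample (drawn with replacement) of the training set."""
        rng = np.random.default_rng(random_state)
        n = len(X)
        committee = []
        for _ in range(n_estimators):
            idx = rng.integers(0, n, size=n)          # bootstrap resample
            committee.append(DecisionTreeClassifier().fit(X[idx], y[idx]))
        return committee

    def bagging_predict(committee, X):
        """Aggregate by majority vote (assumes integer class labels)."""
        votes = np.stack([clf.predict(X) for clf in committee])
        return np.apply_along_axis(
            lambda v: np.bincount(v.astype(int)).argmax(), 0, votes)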

2014
Periklis A. Papakonstantinou, Jia Xu, Zhu Cao

Bagging (Breiman 1996) and its variants are among the most popular methods for aggregating classifiers and regressors. Originally, its analysis assumed that the bootstraps are built from an unlimited, independent source of samples; we therefore call this form of bagging ideal-bagging. However, in the real world, base predictors are trained on data subsampled from a limited number of training samp...
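
A toy sketch of the distinction, using the sample mean as the base predictor (the names ideal_bagging and real_bagging are hypothetical, for illustration only):

    import numpy as np

    rng = np.random.default_rng(0)

    def ideal_bagging(sample_source, n_predictors, n):
        """Ideal bagging: every base predictor is trained on a fresh,
        independent sample from the underlying distribution."""
        return np.mean([sample_source(n).mean() for _ in range(n_predictors)])

    def real_bagging(train, n_predictors):
        """Real-world bagging: every base predictor is trained on a
        bootstrap resample of one fixed, limited training set."""
        n = len(train)
        return np.mean([train[rng.integers(0, n, size=n)].mean()
                        for _ in range(n_predictors)])

    source = lambda n: rng.normal(loc=1.0, size=n)    # toy data source
    train = source(100)
    print(ideal_bagging(source, 50, 100), real_bagging(train, 50))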

2003
Song Xi Chen, Peter Hall

Bagging an estimator approximately doubles its bias through the impact of bagging on quadratic terms in expansions of the estimator. This difficulty can be alleviated by bagging a suitably bias-corrected estimator, however. In these and other circumstances, what is the overall impact of bagging and/or bias correction, and how can it be characterised? We answer these questions in the case of gen...
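
A heuristic sketch of the mechanism (not the paper's derivation): suppose the estimator expands as $\hat\theta = \theta + n^{-1/2} a Z_n + n^{-1} b Z_n^2 + o_p(n^{-1})$ with $E[Z_n] = 0$ and $E[Z_n^2] = \sigma^2$, so its bias is approximately $n^{-1} b \sigma^2$. In the bootstrap world $Z_n$ is replaced by $Z_n + Z_n^*$, and averaging over bootstrap draws gives $E^*[(Z_n + Z_n^*)^2] \approx Z_n^2 + \sigma^2$. Taking expectations, the quadratic term of the bagged estimator contributes bias of roughly $2 n^{-1} b \sigma^2$: the linear term is unaffected on average, while the quadratic term's bias approximately doubles.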

Journal: Nature, 2001

2014
Tae-Hwy Lee, Yundong Tu, Aman Ullah

The equity premium, the return on equity minus the return on the risk-free asset, is expected to be positive. We consider imposing such a positivity constraint on the local historical average (LHA) in a nonparametric kernel regression framework. The approach is also extended to the semiparametric single-index model when multiple predictors are used. We construct the constrained LHA estimator via an indicator function which ...
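
The abstract is truncated, so the exact construction is not visible; one plausible reading of an indicator-based constraint, as a sketch only (constrained_lha is a hypothetical name):

    import numpy as np

    def constrained_lha(y, x, x0, h):
        """Local historical average (uniform-kernel local mean) with the
        positivity constraint imposed through an indicator function:
        the estimate is kept only when it is positive, else set to 0."""
        w = (np.abs(x - x0) <= h).astype(float)   # uniform kernel weights
        if w.sum() == 0.0:
            return 0.0
        m_hat = (w * y).sum() / w.sum()           # unconstrained LHA
        return m_hat * float(m_hat > 0)           # indicator-constrained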

2010
Albert Bifet, Geoff Holmes, Bernhard Pfahringer

Bagging, boosting and Random Forests are classical ensemble methods used to improve the performance of single classifiers. They obtain superior performance by increasing the accuracy and diversity of the single classifiers. Attempts have been made to reproduce these methods in the more challenging context of evolving data streams. In this paper, we propose a new variant of bagging, called lever...
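
For context, a sketch of the Poisson-based online bagging scheme that such stream variants build on; it assumes base learners with an incremental partial_fit method (e.g. already-initialized scikit-learn SGDClassifier instances), and the function name is illustrative:

    import numpy as np

    rng = np.random.default_rng(0)

    def stream_bagging_update(committee, x, y, lam=1.0):
        """Online bagging on a stream: each incoming example is shown to
        each base learner k ~ Poisson(lam) times, simulating bootstrap
        resampling without storing the data. Online bagging uses lam=1;
        raising lam increases resampling weight and diversity, which is
        the idea behind leveraging bagging."""
        for learner in committee:
            for _ in range(rng.poisson(lam)):
                learner.partial_fit([x], [y])   # incremental update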

Journal: Machine Learning, 2004

Journal: Neurocomputing, 2015
Jerzy Blaszczynski, Jerzy Stefanowski

Various approaches to extending bagging ensembles to class-imbalanced data are considered. First, we review known extensions and compare them in a comprehensive experimental study. The results show that integrating bagging with under-sampling is more powerful than over-sampling. They also single out Roughly Balanced Bagging as the most accurate extension. Then, we point out that complex...
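
A sketch of the under-sampling integration in the Exactly Balanced flavor (the function name is illustrative; Roughly Balanced Bagging differs only in how the majority sample size is drawn):

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def underbagging_fit(X, y, n_estimators=25, random_state=0):
        """Bagging integrated with under-sampling: each base classifier
        sees all minority examples plus an equal-sized random draw from
        the majority class (Exactly Balanced Bagging; Roughly Balanced
        Bagging instead draws the majority sample size from a negative
        binomial distribution)."""
        rng = np.random.default_rng(random_state)
        classes, counts = np.unique(y, return_counts=True)
        minority = classes[counts.argmin()]
        min_idx = np.flatnonzero(y == minority)
        maj_idx = np.flatnonzero(y != minority)
        committee = []
        for _ in range(n_estimators):
            sub = rng.choice(maj_idx, size=len(min_idx), replace=False)
            idx = np.concatenate([min_idx, sub])
            committee.append(DecisionTreeClassifier().fit(X[idx], y[idx]))
        return committee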

Journal: Pattern Recognition, 1998
Marina Skurichina, Robert P. W. Duin

Classifiers built on small training sets are usually biased or unstable. Different techniques exist for constructing more stable classifiers, but it is not clear which of them perform well, and whether they truly stabilize the classifier or merely improve its performance. In this paper, bagging (bootstrapping and aggregating (1)) is studied for a number of linear classifiers. A measure of the instability of cl...
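
One simple way to quantify such instability, as an illustration only (the paper defines its own measure):

    import numpy as np

    def instability(fit, X, y, X_test, n_repeats=20, random_state=0):
        """Retrain on bootstrap perturbations of the training set and
        return the mean pairwise disagreement of the predictions: an
        unstable classifier changes its decisions when the training
        set changes slightly."""
        rng = np.random.default_rng(random_state)
        n = len(X)
        preds = []
        for _ in range(n_repeats):
            idx = rng.integers(0, n, size=n)
            preds.append(fit(X[idx], y[idx]).predict(X_test))
        pairs = [(preds[i] != preds[j]).mean()
                 for i in range(n_repeats) for j in range(i + 1, n_repeats)]
        return float(np.mean(pairs))

Here fit can be any training routine returning a fitted model, for example lambda X, y: LogisticRegression().fit(X, y).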

Journal: Comput. Sci. Inf. Syst., 2006
Kristína Machova, Miroslav Puszta, Frantisek Barcák, Peter Bednár

In this paper we present an improvement in the precision of classification algorithms. Two different approaches are known: bagging and boosting. This paper describes a set of experiments with bagging and boosting methods. Our use of these methods targets classification algorithms that generate decision trees. Results of performance tests focused on the use of the bagging and boosting methods ...
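
A minimal sketch of running both methods over decision trees with scikit-learn's stock implementations; the dataset and parameters are placeholders:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=500, random_state=0)

    # Bagging: full trees on bootstrap resamples, aggregated by voting.
    bag = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50,
                            random_state=0)
    # Boosting: shallow trees fit sequentially, reweighting hard examples.
    boost = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1),
                               n_estimators=50, random_state=0)

    for name, clf in [("bagging", bag), ("boosting", boost)]:
        print(name, cross_val_score(clf, X, y, cv=5).mean())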
