Search results for: bagging committee

Number of results: 4107

2007
Carlos Valle, Ricardo Ñanculef, Héctor Allende, Claudio Moraga

In this paper, we present two ensemble learning algorithms which make use of bootstrapping and out-of-bag estimation in an attempt to inherit the robustness of bagging to overfitting. In contrast to bagging, with these algorithms the learners have visibility of one another and cooperate to achieve diversity, a characteristic that has proved to be a major concern for ensemble models. Experime...

2006
José María Martínez-Otzeta, Basilio Sierra, Elena Lazkano, Ekaitz Jauregi

Classifier ensembles are an active area of research within the machine learning community. One of the most successful techniques is bagging, where an algorithm (typically a decision tree inducer) is applied to several different training sets, obtained by applying sampling with replacement to the original database. In this paper we define a framework where sampling with and without replacement can...
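
As an illustration only (not the paper's framework), the two sampling regimes this abstract contrasts can be sketched with numpy; the array size and seed below are arbitrary:

import numpy as np

rng = np.random.default_rng(0)
n = 10                   # hypothetical training-set size
indices = np.arange(n)

# Bootstrap sample: sampling WITH replacement, as in standard bagging;
# on average about 63.2% of the original instances appear at least once.
boot = rng.choice(indices, size=n, replace=True)

# Subsample: sampling WITHOUT replacement (a strict subset when size < n).
sub = rng.choice(indices, size=n // 2, replace=False)

print(boot)   # duplicates possible
print(sub)    # all indices distinct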

2006
Eibe Frank, Bernhard Pfahringer

Bagging is an ensemble learning method that has proved to be a useful tool in the arsenal of machine learning practitioners. Commonly applied in conjunction with decision tree learners to build an ensemble of decision trees, it often leads to reduced errors in the predictions when compared to using a single tree. A single tree is built from a training set of size N. Bagging is based on the ide...
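
A minimal sketch of the bagging recipe this abstract describes, assuming scikit-learn decision trees as the base learner and unweighted majority voting (dataset, sizes, and seeds are arbitrary):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
X, y = make_classification(n_samples=300, random_state=42)
N = len(X)

# Each committee member is trained on a bootstrap sample of size N,
# i.e. N instances drawn from the original training set with replacement.
trees = []
for _ in range(25):
    idx = rng.integers(0, N, size=N)
    trees.append(DecisionTreeClassifier().fit(X[idx], y[idx]))

# Aggregate by unweighted majority vote (binary labels 0/1 here).
votes = np.stack([t.predict(X) for t in trees])
majority = (votes.mean(axis=0) > 0.5).astype(int)
print("accuracy of the vote on the training data:", (majority == y).mean())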

Journal: :Pattern Recognition 2003
Hyun-Chul Kim, Shaoning Pang, Hong-Mo Je, Daijin Kim, Sung Yang Bang

Even though the support vector machine (SVM) has been proposed to provide good generalization performance, the classification result of a practically implemented SVM is often far from the theoretically expected level, because implementations are based on approximated algorithms owing to the high time and space complexity. To improve the limited classification performance of the real SVM, w...

Journal: :Pattern Recognition 2003
Robert K. Bryll, Ricardo Gutierrez-Osuna, Francis K. H. Quek

We present attribute bagging (AB), a technique for improving the accuracy and stability of classifier ensembles induced using random subsets of features. AB is a wrapper method that can be used with any learning algorithm. It establishes an appropriate attribute subset size and then randomly selects subsets of features, creating projections of the training set on which the ensemble classifiers ar...
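
The abstract is enough to sketch the projection step, though not AB's wrapper search; in this illustration the subset size is simply fixed, whereas AB would first establish it empirically:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

subset_size = 8            # AB would tune this via a wrapper evaluation
members = []               # (feature indices, fitted classifier) pairs
for _ in range(15):
    feats = rng.choice(X.shape[1], size=subset_size, replace=False)
    members.append((feats, DecisionTreeClassifier().fit(X[:, feats], y)))

# Vote over projections of the data onto each member's feature subset.
votes = np.stack([clf.predict(X[:, feats]) for feats, clf in members])
pred = (votes.mean(axis=0) > 0.5).astype(int)
print("vote accuracy on training data:", (pred == y).mean())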

Journal: :Information Fusion 2002
Ludmila I. Kuncheva, Marina Skurichina, Robert P. W. Duin

In classifier combination, it is believed that diverse ensembles have a better potential for improving accuracy than nondiverse ensembles. We put this hypothesis to a test for two methods for building the ensembles: Bagging and Boosting, with two linear classifier models: the nearest mean classifier and the pseudo-Fisher linear discriminant classifier. To estimate diversity, we apply n...
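
The snippet cuts off before naming the diversity measures. As one common pairwise measure from this literature (shown here as an assumption, not necessarily the one the paper applies), Yule's Q statistic compares two classifiers' per-instance correctness:

import numpy as np

def q_statistic(correct_i, correct_j):
    # correct_i, correct_j: boolean arrays, True where each classifier
    # labels the instance correctly. Q is near 1 when the two classifiers
    # err on the same instances and near 0 when their errors are independent.
    n11 = np.sum( correct_i &  correct_j)   # both correct
    n00 = np.sum(~correct_i & ~correct_j)   # both wrong
    n10 = np.sum( correct_i & ~correct_j)
    n01 = np.sum(~correct_i &  correct_j)
    return (n11 * n00 - n01 * n10) / (n11 * n00 + n01 * n10)

a = np.array([True, True, False, True, False, True])
b = np.array([True, False, True, True, False, False])
print(q_statistic(a, b))   # 0.0 here: the errors look independent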

2002
Hyun-Chul Kim, Shaoning Pang, Hong-Mo Je, Daijin Kim, Sung Yang Bang

While the support vector machine (SVM) can provide good generalization performance, the classification result of the SVM is often far from the theoretically expected level in practical implementation, because implementations rely on approximated algorithms due to the high time and space complexity. To improve the limited classification performance of the real SVM, we propose to use an SVM ensembl...
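
A minimal sketch of a bagged SVM ensemble in the spirit of this abstract, assuming scikit-learn's stock BaggingClassifier around an RBF-kernel SVM; an illustration, not the authors' implementation:

from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, random_state=1)

# Each SVM in the ensemble is fit on a bootstrap replicate of the training
# set; since SVC (without probability=True) exposes no predict_proba,
# BaggingClassifier aggregates the members' predictions by voting.
ensemble = BaggingClassifier(SVC(kernel="rbf"), n_estimators=10, random_state=1)
ensemble.fit(X, y)
print("ensemble accuracy on the training data:", ensemble.score(X, y))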

1998
Zijian Zheng, Geoffrey I. Webb

Classifier committee learning methods generate multiple classifiers to form a committee by repeated application of a single base learning algorithm. The committee members vote to decide the final classification. Two such methods, Bagging and Boosting, have shown great success with decision tree learning. They create different classifiers by modifying the distribution of the training set. This paper stu...

2004
Robert E. Banfield, Lawrence O. Hall, Kevin W. Bowyer, Divya Bhadoria, W. Philip Kegelmeyer, Steven Eschrich

We experimentally evaluate bagging and six other randomization-based approaches to creating an ensemble of decision-tree classifiers. Bagging uses randomization to create multiple training sets. Other approaches, such as Randomized C4.5, apply randomization in selecting a test at a given node of a tree. Then there are approaches, such as random forests and random subspaces, that apply randomizat...

1997
Richard Maclin, David Opitz

An ensemble consists of a set of independently trained classifiers (such as neural networks or decision trees) whose predictions are combined when classifying novel instances. Previous research has shown that an ensemble as a whole is often more accurate than any of the single classifiers in the ensemble. Bagging (Breiman 1996a) and Boosting (Freund & Schapire 1996) are two relatively new but pop...
