Search results for: bagging
Number of results: 2077
Bagging is an ensemble learning method that has proved to be a useful tool in the arsenal of machine learning practitioners. Commonly applied in conjunction with decision tree learners to build an ensemble of decision trees, it often leads to lower prediction error than a single tree. A single tree is built from a training set of size N. Bagging is based on the ide...
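The abstract is cut off, but the core procedure it opens with is standard: draw N examples with replacement from the size-N training set, fit one tree per bootstrap sample, and combine predictions by majority vote. Below is a minimal sketch; the toy dataset, binary 0/1 labels, and ensemble size of 25 are illustrative assumptions, not details from the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
N = len(X_train)
trees = []
for _ in range(25):
    idx = rng.integers(0, N, size=N)  # bootstrap: N draws with replacement
    trees.append(DecisionTreeClassifier().fit(X_train[idx], y_train[idx]))

votes = np.stack([t.predict(X_test) for t in trees])
y_pred = (votes.mean(axis=0) > 0.5).astype(int)  # majority vote, 0/1 labels
print("bagged accuracy:", (y_pred == y_test).mean())
```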
Even though the support vector machine (SVM) has been proposed to provide good generalization performance, the classification result of a practically implemented SVM is often far from the theoretically expected level, because implementations rely on approximated algorithms due to the high time and space complexity. To improve the limited classification performance of the real SVM, w...
We present attribute bagging (AB), a technique for improving the accuracy and stability of classifier ensembles induced using random subsets of features. AB is a wrapper method that can be used with any learning algorithm. It establishes an appropriate attribute subset size and then randomly selects subsets of features, creating projections of the training set on which the ensemble classifiers ar...
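A rough sketch of the attribute-bagging scheme described above: fix a feature-subset size m, draw random subsets of that size, train one classifier per projection of the training set, and vote. The subset size m = 8, the tree base learner, and the toy data are assumptions for illustration; the paper itself establishes the subset size empirically.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=30, n_informative=10,
                           random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

rng = np.random.default_rng(1)
m, n_models = 8, 25  # assumed subset size and ensemble size
members = []
for _ in range(n_models):
    feats = rng.choice(X_train.shape[1], size=m, replace=False)
    clf = DecisionTreeClassifier().fit(X_train[:, feats], y_train)
    members.append((feats, clf))

votes = np.stack([clf.predict(X_test[:, feats]) for feats, clf in members])
y_pred = (votes.mean(axis=0) > 0.5).astype(int)  # majority vote, 0/1 labels
print("attribute-bagging accuracy:", (y_pred == y_test).mean())
```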
In classifier combination, it is believed that diverse ensembles have better potential for improving accuracy than nondiverse ensembles. We put this hypothesis to the test for two ensemble-building methods, Bagging and Boosting, with two linear classifier models: the nearest mean classifier and the pseudo-Fisher linear discriminant classifier. To estimate diversity, we apply n...
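The abstract is truncated before naming its diversity measures, so the sketch below uses one common choice, pairwise disagreement: the fraction of samples on which two ensemble members predict different labels, averaged over all pairs. This is an illustrative stand-in, not necessarily one of the measures the paper applies.

```python
import numpy as np
from itertools import combinations

def mean_pairwise_disagreement(preds):
    """preds: (n_classifiers, n_samples) array of predicted labels."""
    pairs = combinations(range(preds.shape[0]), 2)
    return float(np.mean([(preds[i] != preds[j]).mean() for i, j in pairs]))

# toy example: three classifiers, five samples; higher value = more diverse
preds = np.array([[0, 1, 1, 0, 1],
                  [0, 1, 0, 0, 1],
                  [1, 1, 1, 0, 0]])
print(mean_pairwise_disagreement(preds))
```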
While the support vector machine (SVM) can provide good generalization performance, the classification result of the SVM is often far from the theoretically expected level in practical implementation, because it is based on approximated algorithms due to the high complexity of time and space. To improve the limited classification performance of the real SVM, we propose to use an SVM ensembl...
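A hedged sketch of the idea above using stock scikit-learn components: bagging over SVM base learners, with predictions combined by voting. The dataset, kernel, and ensemble size are illustrative assumptions; the paper's own ensemble construction and aggregation schemes are not reproduced here.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=20, random_state=2)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=2)

# each SVM sees one bootstrap sample; predictions are combined by voting
ensemble = BaggingClassifier(SVC(kernel="rbf"), n_estimators=10, random_state=2)
print("SVM-ensemble accuracy:",
      ensemble.fit(X_train, y_train).score(X_test, y_test))
```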
Classifier committee learning methods generate multiple classifiers to form a committee by repeated application of a single base learning algorithm. The committee members vote to decide the final classification. Two such methods, Bagging and Boosting, have shown great success with decision tree learning. They create different classifiers by modifying the distribution of the training set. This paper stu...
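Bagging appears in the sketches above; for the boosting half of the comparison, here is a minimal example with scikit-learn's AdaBoostClassifier, which reweights the training distribution after each round as the abstract describes. The base learner and hyperparameters are illustrative choices.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=20, random_state=4)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=4)

# boosting reweights the training distribution toward examples that earlier
# rounds misclassified; depth-1 trees ("stumps") are a common weak base learner
boosted = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1),
                             n_estimators=50, random_state=4)
print("boosted accuracy:", boosted.fit(X_train, y_train).score(X_test, y_test))
```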
We experimentally evaluate bagging and six other randomization-based approaches to creating an ensemble of decision-tree classifiers. Bagging uses randomization to create multiple training sets. Other approaches, such as Randomized C4.5, apply randomization in selecting a test at a given node of a tree. Then there are approaches, such as random forests and random subspaces, that apply randomizat...
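For orientation, the randomization schemes named above all have stock scikit-learn counterparts; the sketch below fits three of them on the same toy data. This is a convenience illustration of where the randomization is applied, not the paper's experimental protocol.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=30, random_state=3)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=3)

models = {
    # bagging: randomize the training set via bootstrap samples
    "bagging": BaggingClassifier(DecisionTreeClassifier(),
                                 n_estimators=50, random_state=3),
    # random subspaces: randomize the feature set, not the examples
    "random subspace": BaggingClassifier(DecisionTreeClassifier(),
                                         n_estimators=50, bootstrap=False,
                                         max_features=0.5, random_state=3),
    # random forest: bootstrap samples plus per-node random feature selection
    "random forest": RandomForestClassifier(n_estimators=50, random_state=3),
}
for name, model in models.items():
    print(name, model.fit(X_train, y_train).score(X_test, y_test))
```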
An ensemble consists of a set of independently trained classifiers (such as neural networks or decision trees) whose predictions are combined when classifying novel instances. Previous research has shown that an ensemble as a whole is often more accurate than any of the single classifiers in the ensemble. Bagging (Breiman 1996a) and Boosting (Freund & Schapire 1996) are two relatively new but pop...
Under-sampling extensions of bagging are currently the most accurate ensembles specialized for class-imbalanced data. Nevertheless, since improvements in recognition of the minority class in this type of ensemble are usually associated with a decrease in recognition of the majority classes, we introduce a new, two-phase ensemble called Actively Balanced Bagging. The proposal is to first learn a...
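A minimal sketch of the under-sampling bagging idea this entry builds on: each ensemble member trains on all minority examples plus an equal-sized random sample of the majority class, so every member sees balanced data. The label convention (1 = minority) and the tree base learner are assumptions; the active second phase of Actively Balanced Bagging is not reproduced here.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

def undersampled_bagging(X, y, n_estimators=25, seed=0):
    """Train each member on a class-balanced under-sample (assumes 1 = minority)."""
    rng = np.random.default_rng(seed)
    minority = np.flatnonzero(y == 1)
    majority = np.flatnonzero(y == 0)
    members = []
    for _ in range(n_estimators):
        maj = rng.choice(majority, size=len(minority), replace=False)
        idx = np.concatenate([minority, maj])
        members.append(DecisionTreeClassifier().fit(X[idx], y[idx]))
    return members

def vote(members, X):
    votes = np.stack([m.predict(X) for m in members])
    return (votes.mean(axis=0) > 0.5).astype(int)  # majority vote, 0/1 labels

# toy imbalanced problem: roughly 90% majority (class 0), 10% minority (class 1)
X, y = make_classification(n_samples=600, weights=[0.9], random_state=5)
print("minority recall:", vote(undersampled_bagging(X, y), X)[y == 1].mean())
```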
Building survivability prediction models is a challenging but important task, as such models provide a key approach to assessing risk and prognosis. In this paper, we investigated the performance of combining Bagging with several weak learners to build 5 accurate breast cancer survivability prediction models from the Srinagarind hospital database in Thailand. These models could assist medical ...