Search results for: bagging committee

Number of results: 4107

Journal: Data Knowl. Eng., 2008
Zoran Bosnic, Igor Kononenko

The paper compares different approaches to estimate the reliability of individual predictions in regression. We compare the sensitivity-based reliability estimates developed in our previous work with four approaches found in the literature: variance of bagged models, local cross-validation, density estimation, and local modeling. By combining pairs of individual estimates, we compose a combined...
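The "variance of bagged models" estimate this abstract compares against can be sketched as follows. This is a minimal illustration, assuming a 1-nearest-neighbour base regressor and toy data (neither is from the paper): each bootstrap model makes a prediction for the query point, the bagged prediction is their mean, and the spread across models serves as the reliability estimate.

```python
import random
import statistics

def fit_1nn(X, y):
    # Base regressor (illustrative assumption): predict the target of
    # the nearest training point in this bootstrap sample.
    data = list(zip(X, y))
    return lambda q: min(data, key=lambda p: abs(p[0] - q))[1]

def bagged_prediction_with_reliability(X, y, q, n_models=50, seed=1):
    rng = random.Random(seed)
    n = len(X)
    preds = []
    for _ in range(n_models):
        idx = [rng.randrange(n) for _ in range(n)]   # bootstrap replicate
        model = fit_1nn([X[i] for i in idx], [y[i] for i in idx])
        preds.append(model(q))
    # Mean = bagged prediction; stdev across models = reliability estimate
    # (larger spread suggests a less reliable individual prediction).
    return statistics.mean(preds), statistics.stdev(preds)

X = [0, 1, 2, 3, 4, 5]
y = [0.0, 1.1, 1.9, 3.2, 3.9, 5.1]
mean, spread = bagged_prediction_with_reliability(X, y, 2.5)
```

The spread is only a relative indicator: it flags which predictions vary most under resampling, not an absolute error bound.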

2002
Marina Skurichina, Ludmila I. Kuncheva, Robert P. W. Duin

In combining classifiers, it is believed that diverse ensembles perform better than non-diverse ones. In order to test this hypothesis, we study the accuracy and diversity of ensembles obtained in bagging and boosting applied to the nearest mean classifier. In our simulation study we consider two diversity measures: the Q statistic and the disagreement measure. The experiments, carried out on f...

2014
Nikita Joshi, Shweta Srivastava

Using ensemble methods is one of the general strategies to improve the accuracy of classifiers and predictors. Bagging is one such suitable ensemble learning method. Ensemble learning is a simple, useful and effective meta-classification methodology that combines the predictions from multiple base classifiers (or learners). In this paper we show a comparative study of different classifiers (Dec...

1999
Richard Maclin, David Opitz

An ensemble consists of a set of independently trained classifiers (such as neural networks or decision trees) whose predictions are combined when classifying novel instances. Previous research has shown that an ensemble as a whole is often more accurate than any of the single classifiers in the ensemble. Bagging (Breiman 1996a) and Boosting (Freund & Schapire 1996) are two relatively new but ...

2005
Jan Macek, Supphanat Kanokphara, Anja Geumann

HMMs are the dominant technique used in speech recognition today, since they perform well in overall phone recognition. In this paper, we show a comparison of HMM methods and machine learning techniques, such as neural networks, decision trees and ensemble classifiers with boosting and bagging, in the task of articulatory-acoustic feature classification. The experimental results show that HMM...

2014
Wenge Rong, Yifan Nie, Yuanxin Ouyang, Baolin Peng, Zhang Xiong

Sentiment analysis has long been a hot topic for understanding users' statements online. Previously, many machine learning approaches to sentiment analysis, such as simple feature-oriented SVMs or more complicated probabilistic models, have been proposed. Though they have demonstrated capability in polarity detection, there exists one challenge called the curse of dimensionality due to the high dime...

2000
Yves Grandvalet

Bagging is a procedure averaging estimators trained on bootstrap samples. Numerous experiments have shown that bagged estimates often yield better results than the original predictor, and several explanations have been given to account for this gain. However, six years from its introduction, bagging is still not fully understood. Most explanations given until now are based on global properties ...

2012
Meishan Zhang, Wanxiang Che, Ting Liu

We describe our method of traditional Phrase Structure Grammar (PSG) parsing in CIPS-Bakeoff2012 Task3. First, bagging is proposed to enhance the baseline performance of PSG parsing. Then we suggest exploiting another TreeBank (CTB7.0) to improve the performance further. Experimental results on the development data set demonstrate that bagging can boost the baseline F1 score from 81.33% to 84.4...

2016
Evan Dowey, Matthew Johnson

[Front-matter excerpt; table of contents: 1. Background; 1.1 Carbon Fiber ...]

1996
Leo Breiman

Bagging predictors is a method for generating multiple versions of a predictor and using these to get an aggregated predictor. The aggregation averages over the versions when predicting a numerical outcome and does a plurality vote when predicting a class. The multiple versions are formed by making bootstrap replicates of the learning set and using these as new learning sets. Tests on real and ...
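The procedure this abstract describes (bootstrap replicates of the learning set, then averaging for regression or a plurality vote for classification) can be sketched in Python. The decision-stump base learner and the toy data below are illustrative assumptions, not part of Breiman's paper:

```python
import random
from collections import Counter

def fit_stump(X, y):
    # Base learner (illustrative assumption): a 1-D decision stump.
    # sign=1 means "predict class 1 when x >= t", sign=0 the opposite.
    best = (len(y) + 1, None, None)
    for t in set(X):
        for sign in (0, 1):
            err = sum((sign if x >= t else 1 - sign) != yi
                      for x, yi in zip(X, y))
            if err < best[0]:
                best = (err, t, sign)
    _, t, sign = best
    return lambda x, t=t, s=sign: s if x >= t else 1 - s

def bagging(X, y, n_models=25, seed=0):
    rng = random.Random(seed)
    n = len(X)
    models = []
    for _ in range(n_models):
        # Bootstrap replicate: sample n points with replacement.
        idx = [rng.randrange(n) for _ in range(n)]
        models.append(fit_stump([X[i] for i in idx], [y[i] for i in idx]))
    def predict(x):
        # Plurality vote over the bagged versions of the predictor.
        return Counter(m(x) for m in models).most_common(1)[0][0]
    return predict

X = [1.0, 1.5, 2.0, 5.0, 5.5, 6.0]
y = [0, 0, 0, 1, 1, 1]
pred = bagging(X, y)
print([pred(x) for x in [0.5, 6.5]])  # → [0, 1]
```

For a numerical outcome, the same loop applies with the vote replaced by the mean of the models' predictions.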

[Chart: number of search results per year]