Search results for: bagging model

Number of results: 2,105,681

2014
Wenge Rong, Yifan Nie, Yuanxin Ouyang, Baolin Peng, Zhang Xiong

Sentiment analysis has long been a hot topic for understanding users' statements online. Previously, many machine learning approaches for sentiment analysis, such as simple feature-oriented SVMs or more complicated probabilistic models, have been proposed. Though they have demonstrated capability in polarity detection, there exists one challenge, called the curse of dimensionality, due to the high dime...

2000
Yves Grandvalet

Bagging is a procedure averaging estimators trained on bootstrap samples. Numerous experiments have shown that bagged estimates often yield better results than the original predictor, and several explanations have been given to account for this gain. However, six years after its introduction, bagging is still not fully understood. Most explanations given until now are based on global properties ...

2016
Evan Dowey, Matthew Johnson

[No abstract extracted; only table-of-contents fragments recovered: 1. Background; 1.1 Carbon Fiber ...]

1996
Leo Breiman

Bagging predictors is a method for generating multiple versions of a predictor and using these to get an aggregated predictor. The aggregation averages over the versions when predicting a numerical outcome and does a plurality vote when predicting a class. The multiple versions are formed by making bootstrap replicates of the learning set and using these as new learning sets. Tests on real and ...
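The procedure Breiman describes, bootstrap replicates of the learning set, one model per replicate, and a plurality vote for classification, can be sketched in a few lines. The helper names, the toy 1-nearest-neighbour base learner, and the data below are all illustrative, not taken from the paper:

```python
import random
from collections import Counter

def bag_predict(train, test_point, base_learner, n_bags=25, seed=0):
    """Breiman-style bagging sketch: train `n_bags` copies of
    `base_learner` on bootstrap replicates of `train` and combine
    their predictions by plurality vote."""
    rng = random.Random(seed)
    votes = []
    for _ in range(n_bags):
        # Bootstrap replicate: draw |train| points with replacement.
        replicate = [rng.choice(train) for _ in train]
        model = base_learner(replicate)
        votes.append(model(test_point))
    # Plurality vote for classification (use the mean for regression).
    return Counter(votes).most_common(1)[0][0]

# Toy base learner: 1-nearest-neighbour on 1-D inputs.
def one_nn(data):
    return lambda x: min(data, key=lambda d: abs(d[0] - x))[1]

train = [(0.0, "a"), (0.1, "a"), (0.2, "a"), (0.9, "b"), (1.0, "b")]
print(bag_predict(train, 0.05, one_nn))
```

Any unstable learner can stand in for `one_nn`; the abstract's point is that the vote over bootstrap replicates smooths out the instability of the individual versions.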

2000
Andreas Buja, Werner Stuetzle

Bagging is a device intended for reducing the prediction error of learning algorithms. In its simplest form, bagging draws bootstrap samples from the training sample, applies the learning algorithm to each bootstrap sample, and then averages the resulting prediction rules. We study the von Mises expansion of a bagged statistical functional and show that it is related to the Stein-Efron ANOVA ex...

1996
J. Ross Quinlan

Breiman's bagging and Freund and Schapire's boosting are recent methods for improving the predictive power of classifier learning systems. Both form a set of classifiers that are combined by voting, bagging by generating replicated bootstrap samples of the data, and boosting by adjusting the weights of training instances. This paper reports results of applying both techniques to a system that le...

2011
Quan Sun, Bernhard Pfahringer

Ensemble selection has recently appeared as a popular ensemble learning method, not only because its implementation is fairly straightforward, but also due to its excellent predictive performance on practical problems. The method has been highlighted in winning solutions of many data mining competitions, such as the Netflix competition, the KDD Cup 2009 and 2010, the UCSD FICO contest 2010, and...
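The greedy forward-selection loop behind the method can be sketched as follows. This is a minimal illustration under assumed inputs (precomputed hold-out predictions per model and a higher-is-better metric); the function names and toy data are hypothetical, and models may be picked repeatedly, i.e. selection with replacement:

```python
def ensemble_selection(preds, y_true, metric, steps=5):
    """Greedy forward ensemble selection sketch: at each step add,
    with replacement, the model whose inclusion most improves
    `metric` of the averaged hold-out prediction.

    preds  -- list of per-model prediction vectors on a hold-out set
    y_true -- hold-out targets
    metric -- metric(prediction_vector, y_true), higher is better
    """
    chosen = []
    total = [0.0] * len(y_true)  # running sum of chosen predictions
    for _ in range(steps):
        best = max(
            range(len(preds)),
            key=lambda i: metric(
                [(t + p) / (len(chosen) + 1) for t, p in zip(total, preds[i])],
                y_true,
            ),
        )
        chosen.append(best)
        total = [t + p for t, p in zip(total, preds[best])]
    return chosen

def neg_mse(pred, true):  # negated MSE, so higher is better
    return -sum((p - t) ** 2 for p, t in zip(pred, true)) / len(true)

# Two toy models with complementary errors on a 3-point hold-out set.
preds = [[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]
y = [1.0, 0.0, 1.0]
print(ensemble_selection(preds, y, neg_mse))
```

On the toy data the selector alternates between the two complementary models, and the averaged ensemble scores better on the hold-out metric than either model alone, which is the effect the winning competition solutions exploit.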

1998
Bruce A. Draper, Kyungim Baek

Previous research has shown that aggregated predictors improve the performance of non-parametric function approximation techniques. This paper presents the results of applying aggregated predictors to a computer vision problem, and shows that the method of bagging significantly improves performance. In fact, the results are better than those previously reported on other domains. This paper expla...

2003
J. R. Quinlan

Breiman's bagging and Freund and Schapire's boosting are recent methods for improving the predictive power of classifier learning systems. Both form a set of classifiers that are combined by voting, bagging by generating replicated bootstrap samples of the data, and boosting by adjusting the weights of training instances. This paper reports results of applying both techniques to a system that learn...

Journal: Intell. Data Anal., 1998
Pedro M. Domingos

If it is to qualify as knowledge, a learner's output should be accurate, stable and comprehensible. Learning multiple models can improve significantly on the accuracy and stability of single models, but at the cost of losing their comprehensibility (when they possess it, as do, for example, simple decision trees and rule sets). This article proposes and evaluates CMM, a meta-learner that seeks t...

Chart: number of search results per year
