Search results for: adaboost learning
Number of results: 601,957
The notion of a boosting algorithm was originally introduced by Valiant in the context of the “probably approximately correct” (PAC) model of learnability [19]. In this context boosting is a method for provably improving the accuracy of any “weak” classification learning algorithm. The first boosting algorithm was invented by Schapire [16] and the second one by Freund [2]. These two algorithms ...
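The weak-to-strong mechanism these results revolve around can be made concrete with the discrete AdaBoost loop over decision stumps. This is a minimal NumPy sketch, not any particular paper's implementation; the stump base learner and the round count are arbitrary illustrative choices:

```python
import numpy as np

def fit_stump(X, y, w):
    """Best weighted decision stump (one feature, one threshold); labels in {-1, +1}.
    Returns (feature, threshold, polarity, weighted_error)."""
    best = (0, 0.0, 1, np.inf)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = pol * np.where(X[:, j] <= t, -1, 1)
                err = w[pred != y].sum()
                if err < best[3]:
                    best = (j, t, pol, err)
    return best

def stump_predict(stump, X):
    j, t, pol, _ = stump
    return pol * np.where(X[:, j] <= t, -1, 1)

def adaboost_fit(X, y, n_rounds=20):
    """Discrete AdaBoost: reweight examples so each round's weak learner
    focuses on the mistakes of the previous rounds."""
    w = np.full(len(y), 1.0 / len(y))       # start from the uniform distribution
    ensemble = []
    for _ in range(n_rounds):
        stump = fit_stump(X, y, w)
        err = stump[3]
        if err >= 0.5:                      # weak learner no better than chance
            break
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
        pred = stump_predict(stump, X)
        w = w * np.exp(-alpha * y * pred)   # up-weight misclassified examples
        w /= w.sum()
        ensemble.append((alpha, stump))
        if err == 0:
            break
    return ensemble

def adaboost_predict(ensemble, X):
    """Sign of the alpha-weighted vote of the weak hypotheses."""
    agg = sum(a * stump_predict(s, X) for a, s in ensemble)
    return np.sign(agg)
```

The exponential reweighting step is what "provably improving a weak learner" cashes out to: any base learner with error bounded below 1/2 on the current distribution drives the combined training error down geometrically.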
This paper presents a variant of the AdaBoost algorithm for boosting the Naïve Bayes text classifier, called AdaBUS, which combines active learning with the boosting algorithm. Boosting has been shown to effectively improve the accuracy of machine-learning-based classifiers. However, the Naïve Bayes classifier, which is remarkably successful in practice for text classification problems, is known not to...
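The ingredient such a scheme needs is a Naïve Bayes learner that accepts the per-example weights AdaBoost supplies. A hedged sketch with a multinomial model over word counts; the function names and the Laplace smoothing constant are illustrative, not from the paper:

```python
import numpy as np

def weighted_multinomial_nb(X, y, w, alpha=1.0):
    """Fit multinomial Naïve Bayes from weighted word-count rows (labels 0/1).
    The sample weights w stand in for AdaBoost's example distribution."""
    params = {}
    for c in (0, 1):
        wc = w[y == c]
        counts = (wc[:, None] * X[y == c]).sum(axis=0) + alpha  # smoothed counts
        params[c] = (np.log(wc.sum() / w.sum()),                # log prior
                     np.log(counts / counts.sum()))             # log likelihoods
    return params

def nb_predict(params, X):
    """Pick the class maximizing log prior + sum of word-count-weighted log likelihoods."""
    scores = np.stack([params[c][0] + X @ params[c][1] for c in (0, 1)])
    return scores.argmax(axis=0)
```

Because the sufficient statistics are just weighted counts, refitting under a new boosting distribution is cheap, which is part of why Naïve Bayes is an attractive (if unstable) base learner for text.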
This paper describes a face detection approach via learning local features. The key idea is that local features, being manifested by a collection of pixels in a local region, are learnt from the training set instead of being arbitrarily defined. The learning procedure consists of two steps. First, a modified version of NMF (Non-negative Matrix Factorization), namely local NMF (LNMF), is applied to ge...
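For orientation, the plain NMF that LNMF modifies factorizes a non-negative data matrix V into non-negative parts W and encodings H via the Lee-Seung multiplicative updates. This sketch shows standard NMF only; LNMF adds locality and sparsity constraints not reproduced here:

```python
import numpy as np

def nmf(V, k, n_iter=200, seed=0):
    """Lee-Seung multiplicative updates for V ~= W @ H under Frobenius loss.
    Both factors stay elementwise non-negative by construction."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 0.1
    H = rng.random((k, m)) + 0.1
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)   # update encodings
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)   # update basis (the "local features")
    return W, H
```

With face images stacked as columns of V, the columns of W play the role of the learnt local regions that the detector then builds on.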
One approach in classification tasks is to use machine learning techniques to derive classifiers from learning instances. The cooperation of several base classifiers as a decision committee has succeeded in reducing classification error. The main current decision-committee learning approaches, boosting and bagging, use resampling of the training set, and they can be used with different machine le...
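The bagging half of that committee idea reduces to bootstrap resampling plus majority vote. A minimal sketch; the learner interface (a function from a dataset to a predictor) and the member count are illustrative choices:

```python
import random
from collections import Counter

def bagging_committee(train, learn, n_members=11, seed=0):
    """Train each committee member on a bootstrap resample
    (sampling with replacement) of the same training set."""
    rng = random.Random(seed)
    return [learn([rng.choice(train) for _ in train]) for _ in range(n_members)]

def committee_vote(members, x):
    """Majority vote over the members' predictions for x."""
    return Counter(m(x) for m in members).most_common(1)[0][0]
```

Boosting differs precisely in how the resampling (or reweighting) is done: bagging draws each resample independently and uniformly, while boosting biases successive samples toward currently misclassified instances.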
Multiple-instance learning (MIL) is a generalization of the supervised learning problem where each training observation is a labeled bag of unlabeled instances. Several supervised learning algorithms have been successfully adapted to the multiple-instance learning setting. We explore the adaptation of the Naive Bayes (NB) classifier and the utilization of its sufficient statistics for develop...
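Under the standard MIL assumption (a bag is positive iff at least one of its instances is), instance-level posteriors can be aggregated into a bag-level score; a noisy-OR combination is one common choice. This is my illustration of that aggregation step, not necessarily the adaptation this paper develops:

```python
def bag_posterior(instance_prob, bag):
    """Noisy-OR bag score: P(bag positive) = 1 - prod(1 - P(instance positive)),
    treating the instances as independent witnesses for the bag label."""
    p_all_negative = 1.0
    for inst in bag:
        p_all_negative *= 1.0 - instance_prob(inst)
    return 1.0 - p_all_negative
```

Any instance-level classifier that outputs probabilities (such as Naive Bayes) plugs in as `instance_prob`, turning it into a bag classifier without changing its training statistics.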
In this paper we propose an approach for ensemble construction based on the use of supervised projections, both linear and non-linear, to achieve both accuracy and diversity of individual classifiers. The proposed approach uses the philosophy of boosting, putting more effort on difficult instances, but instead of learning the classifier on a biased distribution of the training set, it uses misc...
Boosting methods are known to improve the generalization performance of learning algorithms by reducing both bias and variance or by enlarging the margin of the resulting multi-classifier system. In this contribution we applied AdaBoost to the discrimination of different types of coffee using data produced with an Electronic Nose. Two groups of coffees (blends and monovarieties), consisting of seven cla...
We give a unified convergence analysis of ensemble learning methods including, e.g., AdaBoost, Logistic Regression and the Least-SquareBoost algorithm for regression. These methods have in common that they iteratively call a base learning algorithm which returns hypotheses that are then linearly combined. We show that these methods are related to the Gauss-Southwell method known from numerical o...
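The Gauss-Southwell connection can be made concrete on squared loss: treat each base hypothesis as one coordinate of the combined predictor, and at every round update the coefficient whose gradient has the largest magnitude. A sketch under that squared-loss setting; the matrix H of precomputed base-hypothesis outputs is an assumption for illustration:

```python
import numpy as np

def gauss_southwell_boost(H, y, n_steps=100):
    """Greedy coordinate descent on 0.5 * ||H c - y||^2.
    H: (n_samples, n_hypotheses) outputs of the base hypotheses.
    Each step picks the coordinate (base hypothesis) with the largest
    gradient magnitude -- the Gauss-Southwell rule -- and line-searches it,
    mirroring how boosting selects the best weak hypothesis per round."""
    c = np.zeros(H.shape[1])
    for _ in range(n_steps):
        residual = y - H @ c
        grad = -H.T @ residual                # gradient of the squared loss
        j = np.argmax(np.abs(grad))           # Gauss-Southwell coordinate choice
        denom = (H[:, j] ** 2).sum()
        if denom == 0 or abs(grad[j]) < 1e-12:
            break
        c[j] += H[:, j] @ residual / denom    # exact line search along coordinate j
    return c
```

Read this way, each boosting round is one coordinate-descent step, and the linear combination of hypotheses the abstract mentions is just the accumulated coefficient vector c.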