Search results for: bagging

Number of results: 2077

2005
Yuan Jiang Jinjiang Ling Gang Li Honghua Dai Zhi-Hua Zhou

In this paper, a new variant of Bagging named DepenBag is proposed. This algorithm obtains bootstrap samples at first. Then, it employs a causal discoverer to induce from each sample a dependency model expressed as a Directed Acyclic Graph (DAG). The attributes without connections to the class attribute in all the DAGs are then removed. Finally, a component learner is trained from each of the r...
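The attribute-filtering step of DepenBag described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the DAGs would come from a causal discoverer run on each bootstrap sample, and representing each DAG as a set of edge pairs (and checking direct edges rather than paths to the class attribute) are simplifying assumptions.

```python
def depenbag_feature_filter(dags, attributes, class_attr):
    # Keep an attribute only if it is connected to the class attribute
    # in at least one induced DAG; attributes disconnected from the
    # class in ALL DAGs are removed before training component learners.
    # Each DAG is represented here as a set of (parent, child) edge pairs.
    kept = []
    for a in attributes:
        connected = any(
            (a, class_attr) in dag or (class_attr, a) in dag
            for dag in dags
        )
        if connected:
            kept.append(a)
    return kept
```

A component learner would then be trained on each bootstrap sample restricted to the kept attributes.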

2012
Guohua Liang

As growing numbers of real world applications involve imbalanced class distribution or unequal costs for misclassification errors in different classes, learning from imbalanced class distribution is considered to be one of the most challenging issues in data mining research. This study empirically investigates the sensitivity of bagging predictors with respect to 12 algorithms and 9 levels of c...

2003
Lina Petrakieva Colin Fyfe

In this paper, we apply the combination method of bagging, which was developed in the context of supervised learning of classifiers and regressors, to the unsupervised artificial neural network known as the Self Organising Map. We show that various initialisation techniques can be used to create maps that humans can compare by eye. We then use a semi-supervised version of the SOM to c...

2011
Joaquín Torres-Sospedra Carlos Hernández-Espinosa Mercedes Fernández-Redondo

Previous research has shown that Bagging, Boosting and Cross-Validation Committee can each provide good performance separately. In this paper, Boosting methods are combined with Bagging and Cross-Validation Committee in order to generate accurate ensembles and benefit from all these alternatives. In this way, the networks are trained according to the boosting methods but the specific t...

2016
Sung-Hwan Min

Ensemble classification combines individually trained classifiers to obtain more accurate predictions than individual classifiers alone. Ensemble techniques are very useful for improving the generalizability of the classifier. Bagging is the method used most commonly for constructing ensemble classifiers. In bagging, different training data subsets are drawn randomly with replacement from the o...
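The bootstrap-resampling step this abstract describes, drawing training subsets randomly with replacement, can be sketched minimally as below. Function names are illustrative, and the `train` callable stands in for any base learner:

```python
import random

def bootstrap_sample(data, rng):
    # Draw len(data) items with replacement: each bag is the same size
    # as the original pool, but items may repeat or be omitted.
    return [rng.choice(data) for _ in data]

def bagging_fit(data, n_estimators, train, rng):
    # Train one component model per bootstrap sample; the ensemble's
    # prediction would aggregate these models (e.g. by voting).
    return [train(bootstrap_sample(data, rng)) for _ in range(n_estimators)]
```

With real learners, predictions from the returned models are combined by majority vote (classification) or averaging (regression).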

Journal: JIPS 2014
Deepak Ghimire Joonwhoan Lee

An extreme learning machine (ELM) is a recently proposed learning algorithm for a single-layer feed forward neural network. In this paper we studied the ensemble of ELM by using a bagging algorithm for facial expression recognition (FER). Facial expression analysis is widely used in the behavior interpretation of emotions, for cognitive science, and social interactions. This paper presents a me...

2002
Nitesh V. Chawla Thomas E. Moore Kevin W. Bowyer Philip Kegelmeyer

Bagging forms a committee of classifiers by bootstrap aggregation of training sets from a pool of training data. A simple alternative to bagging is to partition the data into disjoint subsets. Experiments with decision tree and neural network classifiers on various datasets show that, given the same size partitions and bags, disjoint partitions result in performance equivalent to, o...
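The disjoint-partition alternative to bootstrap bags can be sketched in a few lines; the round-robin split used here is one simple way to produce (near-)equal-sized, non-overlapping subsets, and is an assumption rather than the paper's exact scheme:

```python
def disjoint_partitions(data, k):
    # Split the pool into k disjoint subsets of near-equal size.
    # Unlike bootstrap bags, the subsets do not overlap and together
    # cover every training example exactly once.
    return [data[i::k] for i in range(k)]
```

Each partition trains one committee member, so every example influences exactly one classifier, in contrast to bagging where an example may appear in several bags or in none.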

2014
Daniel Gianola Kent A. Weigel Nicole Krämer Alessandra Stella Chris-Carolin Schön

We examined whether or not the predictive ability of genomic best linear unbiased prediction (GBLUP) could be improved via a resampling method used in machine learning: bootstrap aggregating sampling ("bagging"). In theory, bagging can be useful when the predictor has large variance or when the number of markers is much larger than sample size, preventing effective regularization. After present...

1997
Kai Ming Ting Ian H. Witten

In this paper, we investigate the method of stacked generalization in combining models derived from different subsets of a training dataset by a single learning algorithm, as well as different algorithms. The simplest way to combine predictions from competing models is majority vote, and the effect of the sampling regime used to generate training subsets has already been studied in this context, wh...
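The two combination schemes the abstract contrasts can be sketched as follows: majority vote picks the most common base-model label directly, while stacked generalization instead feeds the base models' predictions to a level-1 meta-learner as features. The helper names are illustrative, and the meta-learner itself is omitted:

```python
from collections import Counter

def majority_vote(predictions):
    # predictions: one label per base model for a single instance.
    return Counter(predictions).most_common(1)[0][0]

def stack_features(base_preds_per_model):
    # Level-1 input for stacked generalization: for each instance,
    # the tuple of base-model predictions becomes the feature vector
    # a meta-learner is trained on (instead of voting directly).
    return list(zip(*base_preds_per_model))
```

A stacked generaliser would fit any ordinary learner on the output of `stack_features` paired with the true labels.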

1998
Zijian Zheng

Classifier committee learning approaches have demonstrated great success in increasing the prediction accuracy of classifier learning, which is a key technique for data mining. These approaches generate several classifiers to form a committee by repeated application of a single base learning algorithm. The committee members vote to decide the final classification. It has been shown that Boosting and ...
