Search results for: boosting ensemble learning

Number of results: 645,106

2005
Ke Chen

The input/output hidden Markov model (IOHMM) has turned out to be effective in sequential data processing via supervised learning. However, several difficulties, e.g. model selection, unexpected local optima, and high computational complexity, hinder an IOHMM from yielding satisfactory performance in sequence classification. Unlike previous efforts, this paper presents an ensembl...

2003
Luis Daza

A combination of classification rules (classifiers) is known as an ensemble, and in general it is more accurate than the individual classifiers used to build it. Two popular methods for constructing an ensemble are Bagging (bootstrap aggregating), introduced by Breiman [4], and Boosting (Freund and Schapire [11]). Both methods rely on resampling techniques to obtain different training sets for each...
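The resampling idea described in this abstract can be illustrated with a minimal, stdlib-only bagging sketch. This is not code from the cited paper; the helper names (`train_stump`, `bagging`) and the one-dimensional threshold-stump base learner are hypothetical choices made only to keep the example self-contained. Each base model is trained on a bootstrap sample (drawn with replacement), and the ensemble predicts by majority vote:

```python
import random
from collections import Counter

random.seed(0)

def train_stump(data):
    # Hypothetical base learner: pick the threshold on x (and direction)
    # that best separates the two boolean labels in this sample.
    best = None
    for t, _ in data:
        for sign in (1, -1):
            acc = sum(((sign * (x - t)) >= 0) == y for x, y in data)
            if best is None or acc > best[0]:
                best = (acc, t, sign)
    _, t, sign = best
    return lambda x: (sign * (x - t)) >= 0

def bagging(data, n_models=25):
    # Each model sees a different bootstrap sample of the training set,
    # which is what creates diversity among otherwise identical learners.
    models = [train_stump([random.choice(data) for _ in data])
              for _ in range(n_models)]
    def predict(x):
        votes = Counter(m(x) for m in models)
        return votes.most_common(1)[0][0]  # majority vote
    return predict

# Toy separable problem: label is True iff x >= 5.
data = [(x, x >= 5) for x in range(10)]
clf = bagging(data)
```

Boosting differs from this sketch in that its resampling (or reweighting) is adaptive: each round concentrates on the examples the previous models got wrong, rather than drawing uniform bootstrap samples.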

2003
Jerome H. Friedman Bogdan E. Popescu

Learning a function of many arguments is viewed from the perspective of high-dimensional numerical quadrature. It is shown that many of the popular ensemble learning procedures can be cast in this framework. In particular, randomized methods, including bagging and random forests, are seen to correspond to random Monte Carlo integration methods, each based on particular importance sampling strate...

2008
Amin Assareh Mohammad Hassan Moradi L. Gwenn Volkert

Classifier fusion strategies have shown great potential to enhance the performance of pattern recognition systems. There is agreement among researchers in classifier combination that the major factor in producing better accuracy is diversity in the classifier team. Resampling-based approaches such as bagging, boosting, and random subspace generate multiple models by training a single learn...

2008
Elie Prudhomme Stéphane Lallich

The knowledge discovery process runs into difficulties when analyzing large amounts of data. Indeed, theoretical problems related to high-dimensional spaces appear and degrade the predictive capacity of algorithms. In this paper, we propose a new methodology to obtain a better representation and prediction of huge datasets. For that purpose, an ensemble approach is used to overcome probl...

2016
Gang Wang Li-hua Huang

Credit scoring is an important finance activity. Both statistical techniques and Artificial Intelligence (AI) techniques have been explored for this topic, but different techniques have different advantages and disadvantages on different datasets. Recent studies draw no consistent conclusion that one technique is superior to the other; instead, they suggest combining multiple classifiers,...

2013
Sai Zhang

In this paper, we propose using a classifier ensemble (CE) as a method to enhance the robustness of machine learning (ML) kernels in the presence of hardware errors. Different ensemble methods (Bagging and AdaBoost) are explored with a decision tree (C4.5) and an artificial neural network (ANN) as base classifiers. Simulation results show that the ANN is inherently tolerant to hardware errors, with up to 10% h...

2005
Petra Povalej Brzan Mitja Lenic Peter Kokol

In the real world there are many examples where the synergetic cooperation of multiple entities performs better than a single one. The same fundamental idea underlies ensemble learning methods, which have the ability to improve classification accuracy. Each classifier has a specific view of the problem domain and can produce a different classification for the same observed sample. Therefore many met...

Journal: Advances in Experimental Medicine and Biology, 2015
Valeriy Gavrishchaka Olga Senyukova Kristina Davis

Previously, we proposed using complementary complexity measures, discovered by boosting-like ensemble learning, to enhance quantitative indicators dealing with necessarily short physiological time series. We have confirmed the robustness of such multi-complexity measures for heart rate variability analysis, with emphasis on detection of emerging and intermittent cardiac abnormali...

2005
Chee Peng Lim Wei Yee Goh

In this paper, the application of multiple Elman neural networks to time-series regression problems is studied. An ensemble of Elman networks is formed by boosting to enhance the performance of the individual networks. A modified version of the AdaBoost algorithm is employed to integrate the predictions from multiple networks. Two benchmark time-series data sets, i.e., the Sunspot and Box-...
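The AdaBoost combination step that several of the abstracts above refer to can be sketched in its standard classification form; note this is not the modified regression variant used in the Elman-network paper, and the helper names (`adaboost`, the threshold-stump base learner) are hypothetical. Each round fits a weak learner to a weighted sample, computes its vote weight alpha from its weighted error, and up-weights the examples it misclassified:

```python
import math

def adaboost(data, n_rounds=20):
    # data: list of (x, y) with y in {-1, +1}; example weights start uniform.
    n = len(data)
    w = [1.0 / n] * n
    ensemble = []  # list of (alpha, stump) pairs
    for _ in range(n_rounds):
        # Weak learner: the threshold stump minimizing weighted error.
        best = None
        for t, _ in data:
            for sign in (1, -1):
                err = sum(wi for wi, (x, y) in zip(w, data)
                          if (1 if sign * (x - t) >= 0 else -1) != y)
                if best is None or err < best[0]:
                    best = (err, t, sign)
        err, t, sign = best
        err = max(err, 1e-10)  # guard against log/division by zero
        alpha = 0.5 * math.log((1 - err) / err)
        stump = lambda x, t=t, sign=sign: 1 if sign * (x - t) >= 0 else -1
        ensemble.append((alpha, stump))
        # Up-weight misclassified examples, down-weight correct ones.
        w = [wi * math.exp(-alpha * y * stump(x)) for wi, (x, y) in zip(w, data)]
        z = sum(w)
        w = [wi / z for wi in w]
    def predict(x):
        # Final prediction: sign of the alpha-weighted vote.
        return 1 if sum(a * s(x) for a, s in ensemble) >= 0 else -1
    return predict

# Toy separable problem: label is +1 iff x >= 5.
data = [(x, 1 if x >= 5 else -1) for x in range(10)]
clf = adaboost(data)
```

Boosting variants for regression (such as the modified AdaBoost mentioned above) replace the 0/1 misclassification error with a loss on the prediction residual, but keep the same reweight-and-combine loop.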

[Chart: number of search results per publication year]