Search results for: experts mixture

Number of results: 160772

Journal: CoRR, 2008
Baruch Lubinsky, Bekir Genc, Tshilidzi Marwala

Neural networks are powerful tools for classification and regression in static environments. This paper describes a technique for creating an ensemble of neural networks that adapts dynamically to changing conditions. The model separates the input space into four regions and each network is given a weight in each region based on its performance on samples from that region. The ensemble adapts d...
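The weighting scheme this abstract describes can be illustrated with a short sketch. A minimal, hypothetical version (assuming sklearn-style models, a two-feature quadrant split, and a precomputed per-region accuracy table; none of these details come from the paper):

```python
import numpy as np

def region_of(x):
    # Hypothetical split of the input space into four quadrants,
    # based on the signs of the first two features.
    return int(x[0] >= 0) * 2 + int(x[1] >= 0)

def ensemble_predict(models, region_acc, x):
    # region_acc[i][r]: model i's measured accuracy in region r.
    r = region_of(x)
    preds = np.array([m.predict(x[None, :])[0] for m in models])
    w = np.array([region_acc[i][r] for i in range(len(models))])
    w = w / w.sum()                    # normalise region-specific weights
    return float(np.dot(w, preds))     # weighted vote of the ensemble
```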

2001
Andrew Estabrooks, Nathalie Japkowicz

One of the particular characteristics of text classification tasks is that they present large class imbalances. Such a problem can easily be tackled using resampling methods. However, although these approaches are very simple to implement, tuning them most effectively is not an easy task. In particular, it is unclear whether oversampling is more effective than undersampling and which oversampli...
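The two resampling strategies being compared can be stated concretely. A hedged NumPy illustration (the helper names and the binary-label setup are assumptions, not the paper's code):

```python
import numpy as np

def oversample_minority(X, y, minority_label, rng):
    # Duplicate random minority rows until both classes are the same size.
    minority = np.flatnonzero(y == minority_label)
    need = (y != minority_label).sum() - minority.size
    extra = rng.choice(minority, size=need, replace=True)
    keep = np.concatenate([np.arange(len(y)), extra])
    return X[keep], y[keep]

def undersample_majority(X, y, minority_label, rng):
    # Keep all minority rows and a random same-sized subset of the majority.
    minority = np.flatnonzero(y == minority_label)
    majority = np.flatnonzero(y != minority_label)
    sampled = rng.choice(majority, size=minority.size, replace=False)
    keep = np.concatenate([minority, sampled])
    return X[keep], y[keep]
```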

Journal: Neurocomputing, 2006
Minh Ha Nguyen, Hussein A. Abbass, Robert I. McKay

Combining several suitable neural networks can enhance the generalization performance of the group compared to a single network alone. However, it remains a largely open question how best to build a suitable combination of individuals. Jacobs and his colleagues proposed the Mixture of Experts (ME) model, in which a set of neural networks is trained together with a gate network. This tight...
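The ME forward pass the abstract refers to is easy to write down. A minimal sketch with linear experts and a softmax gate (shapes and names are illustrative, not from the paper):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def moe_forward(x, expert_weights, gate_weights):
    # x: (n, d); expert_weights: list of (d, 1) arrays, one linear expert each;
    # gate_weights: (d, n_experts) parameters of the gate network.
    expert_out = np.concatenate([x @ W for W in expert_weights], axis=1)
    gate = softmax(x @ gate_weights)        # input-dependent mixing weights
    return (gate * expert_out).sum(axis=1)  # gate-weighted combination
```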

Journal: Neural Computation, 2016
Hien Duy Nguyen, Luke R. Lloyd-Jones, Geoffrey J. McLachlan

The mixture-of-experts (MoE) model is a popular neural network architecture for nonlinear regression and classification. The class of MoE mean functions is known to be uniformly convergent to any unknown target function, assuming that the target function is from a Sobolev space that is sufficiently differentiable and that the domain of estimation is a compact unit hypercube. We provide an alter...
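For reference, the MoE mean function whose approximation properties are at issue, written in one common parameterization (the notation is assumed here, not taken from the paper):

```latex
\[
  m(\mathbf{x}) = \sum_{k=1}^{K} g_k(\mathbf{x})\,\mu_k(\mathbf{x}),
  \qquad
  g_k(\mathbf{x}) =
  \frac{\exp(\mathbf{w}_k^{\top}\mathbf{x} + b_k)}
       {\sum_{j=1}^{K}\exp(\mathbf{w}_j^{\top}\mathbf{x} + b_j)},
\]
```

where the $g_k$ are softmax gating functions and the $\mu_k$ are the experts' mean functions.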

Journal: CoRR, 2016
Faicel Chamroukhi

Mixture of Experts (MoE) is a popular framework in statistics and machine learning for modeling heterogeneity in data for regression, classification and clustering. MoE models for continuous data are usually based on the normal distribution. However, it is known that for data with asymmetric behavior, heavy tails and atypical observations, the use of the normal distribution is unsuitable...
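The generalization the abstract points toward can be seen from the conditional MoE density; replacing the Gaussian component densities with skewed or heavy-tailed families (e.g. skew-normal or $t$ distributions) is one route to robustness. In generic notation (an assumption, not the paper's):

```latex
\[
  f(y \mid \mathbf{x}) = \sum_{k=1}^{K} g_k(\mathbf{x})\,
  f_k\bigl(y \mid \theta_k(\mathbf{x})\bigr),
\]
```

where the $g_k$ are gating functions and each expert density $f_k$ need not be normal.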

Journal: Statistics and Computing, 2014
Julien Cornebise, Eric Moulines, Jimmy Olsson

Appropriately selecting the proposal kernel of a particle filter is an issue of significant importance, since a bad choice may lead to deterioration of the particle sample and, consequently, a waste of computational power. In this paper we introduce a novel algorithm that adaptively approximates the so-called optimal proposal kernel by a mixture of integrated curved exponential distributions with logi...
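For context, the "optimal" proposal kernel being approximated is, in standard particle-filtering notation (assumed here), the transition kernel reweighted by the observation likelihood:

```latex
\[
  r^{\ast}(x_t \mid x_{t-1}, y_t) =
  \frac{g(y_t \mid x_t)\, q(x_t \mid x_{t-1})}
       {\int g(y_t \mid x)\, q(x \mid x_{t-1})\, \mathrm{d}x},
\]
```

where $q$ is the state transition density and $g$ the observation likelihood; the intractable normalizing integral is what typically motivates mixture approximations.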

2010
Gao Tang, Kris Hauser

This paper proposes a discontinuity-sensitive approach to learn the solutions of parametric optimal control problems with high accuracy. Many tasks, ranging from model predictive control to reinforcement learning, may be solved by learning optimal solutions as a function of problem parameters. However, nonconvexity, discrete homotopy classes, and control switching cause discontinuity in the par...
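One discontinuity-sensitive recipe consistent with this description is to classify which solution branch a parameter vector belongs to, then regress smoothly within each branch. A hedged sklearn sketch (model choices and names are illustrative assumptions, not the authors' pipeline):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPRegressor

def fit_piecewise(params, labels, solutions):
    # labels: discrete solution class (e.g. homotopy class) per sample.
    labels = np.asarray(labels)
    clf = LogisticRegression(max_iter=1000).fit(params, labels)
    regs = {}
    for c in np.unique(labels):
        mask = labels == c
        regs[c] = MLPRegressor(max_iter=2000).fit(params[mask], solutions[mask])
    return clf, regs

def predict_solution(clf, regs, p):
    c = clf.predict(p[None, :])[0]   # pick a branch, then a smooth regressor
    return regs[c].predict(p[None, :])[0]
```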

2005
Edward Meeds, Simon Osindero

We present an infinite mixture model in which each component comprises a multivariate Gaussian distribution over an input space, and a Gaussian Process model over an output space. Our model is neatly able to deal with non-stationary covariance functions, discontinuities, multimodality and overlapping output signals. The work is similar to that by Rasmussen and Ghahramani [1]; however, we use a ...
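A finite, heavily simplified sketch of the idea (the paper's model is an infinite mixture with posterior inference; here a fixed-size Gaussian mixture over the inputs gates sklearn GPs, and every name is an assumption):

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.gaussian_process import GaussianProcessRegressor

def fit_mixture_of_gps(X, y, n_components=3, seed=0):
    # Gaussian mixture over the input space plays the role of the gate.
    gmm = GaussianMixture(n_components=n_components, random_state=seed).fit(X)
    labels = gmm.predict(X)
    gps = [GaussianProcessRegressor().fit(X[labels == k], y[labels == k])
           for k in range(n_components)]
    return gmm, gps

def predict(gmm, gps, Xq):
    resp = gmm.predict_proba(Xq)                         # input-density gating
    means = np.column_stack([gp.predict(Xq) for gp in gps])
    return (resp * means).sum(axis=1)                    # responsibility-weighted mean
```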

Journal: Neural Computation, 1999
Ran Avnimelech, Nathan Intrator

We present a new supervised learning procedure for ensemble machines, in which outputs of predictors, trained on different distributions, are combined by a dynamic classifier combination model. This procedure may be viewed as either a version of mixture of experts (Jacobs, Jordan, Nowlan, & Hinton, 1991), applied to classification, or a variant of the boosting algorithm (Schapire, 1990). As a ...
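A hedged sketch of dynamic classifier combination in this spirit: experts trained on different distributions are blended per sample, each weighted by its own confidence on that input (an illustration, not the paper's exact gating rule):

```python
import numpy as np

def combine(experts, x):
    # experts: sklearn-style classifiers sharing the same class ordering.
    probs = np.stack([e.predict_proba(x[None, :])[0] for e in experts])
    conf = probs.max(axis=1)                 # each expert's confidence on x
    w = conf / conf.sum()
    return (w[:, None] * probs).sum(axis=0)  # blended class distribution
```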

1998
Craig L. Fancourt, Jose C. Principe

A new algorithm is proposed that performs competitive principal component analysis (PCA) of an image. A set of expert PCA networks compete, through the Mixture of Experts (MOE) formalism, on the basis of their ability to reconstruct the original image. The result is that the network finds an optimal projection of the image onto a reduced dimensional space as a function of the input and, hence, ...
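The competition can be sketched with off-the-shelf PCA: several fitted PCA "experts" reconstruct an input patch, and a softmin over reconstruction errors stands in for the gate (an illustration under assumed names; the paper trains the experts online within the MOE formalism):

```python
import numpy as np
from sklearn.decomposition import PCA

def competitive_reconstruct(pcas, patch, beta=10.0):
    # pcas: PCA experts already fitted on different image regions/content.
    recons = [p.inverse_transform(p.transform(patch[None, :]))[0] for p in pcas]
    errs = np.array([np.sum((patch - r) ** 2) for r in recons])
    gate = np.exp(-beta * errs)
    gate /= gate.sum()                    # soft competition over experts
    return sum(g * r for g, r in zip(gate, recons))
```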

[Chart: number of search results per year]