Search results for: experts mixture
Number of results: 160772
In the mixture-of-experts (ME) model, where a number of submodels (experts) are combined, there have been two longstanding problems: (i) how many experts should be chosen, given the size of the training data? (ii) given the total number of parameters, is it better to use a few very complex experts, or to combine many simple experts? In this paper, we try to provide some insights to th...
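As a point of reference for the trade-off this abstract raises, here is a minimal sketch of the generic ME combiner: a softmax gate weighting the experts' outputs. The class and parameter names are illustrative, not taken from the paper.

```python
import numpy as np

def softmax(z):
    z = z - z.max()                      # stabilize before exponentiating
    e = np.exp(z)
    return e / e.sum()

class MixtureOfExperts:
    """Generic ME combiner: y(x) = sum_k g_k(x) * f_k(x)."""

    def __init__(self, experts, gate_weights):
        self.experts = experts           # list of callables, each x -> prediction
        self.W = gate_weights            # (n_experts, input_dim) gating parameters

    def predict(self, x):
        g = softmax(self.W @ x)          # gating probabilities over the experts
        return sum(g_k * f_k(x) for g_k, f_k in zip(g, self.experts))
```

With this structure, the abstract's question becomes how large `len(experts)` and the per-expert capacity should be under a fixed total parameter budget.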
A considerable body of evidence from prosopagnosia, a deficit in face recognition dissociable from nonface object recognition, indicates that the visual system devotes a specialized functional area to mechanisms appropriate for face processing. We present a modular neural network composed of two “expert” networks and one mediating “gate” network with the task of learning to recognize the faces ...
Mixture-of-Gaussian-processes models extend a single Gaussian process with the ability to model multi-modal data and to reduce training complexity. Previous inference algorithms for these models are mostly based on Gibbs sampling, which can be very slow, particularly for large-scale data sets. We present a new generative mixture of experts model. Each expert is still a Gaussian process but ...
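For intuition only, a crude hard-assignment stand-in for a mixture of GP experts can be sketched as follows: partition the inputs by clustering, then fit one GP per partition. This is an assumption-laden simplification built on scikit-learn, not the generative variational scheme the abstract proposes.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def fit_gp_mixture(X, y, n_experts=3):
    """Hard-assign points to experts via k-means, then fit one GP each."""
    labels = KMeans(n_clusters=n_experts, n_init=10).fit_predict(X)
    experts = []
    for k in range(n_experts):
        gp = GaussianProcessRegressor(kernel=RBF())
        gp.fit(X[labels == k], y[labels == k])   # each GP sees only its partition
        experts.append(gp)
    return experts, labels
```

At test time a query would be routed to the GP whose cluster is nearest; the paper's contribution is, in effect, making this gating probabilistic and training it without slow Gibbs sampling.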
Abstract. The statistical properties of the likelihood ratio test statistic (LRTS) for mixture-of-experts models are addressed in this paper. This question is essential when estimating the number of experts in the model. Our purpose is to extend the existing results for mixtures (Liu and Shao, 2003) and mixtures of multilayer perceptrons (Olteanu and Rynkiewicz, 2008). In this paper we study a s...
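For orientation, the statistic in question is the standard likelihood ratio for testing K experts against K+1; the notation below is generic rather than the paper's own:

```latex
\mathrm{LRTS}_n = 2\Big( \sup_{\theta \in \Theta_{K+1}} \ell_n(\theta) \;-\; \sup_{\theta \in \Theta_K} \ell_n(\theta) \Big)
```

Here \ell_n is the log-likelihood of the n observations and \Theta_K the parameter space of the K-expert model. Because the smaller model sits on the boundary of the larger one and loses identifiability there, the usual chi-squared asymptotics do not apply, which is why the LRTS distribution must be derived directly for such mixtures.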
Expert combination is a classic strategy that has been widely used in various problem-solving tasks. A team of individuals with diverse and complementary skills tackles a task jointly such that a performance better than any single individual can achieve is obtained by integrating the strengths of individuals. Starting in the late 1980s in the handwritten character recognition literature, studies ...
We study the generalization capability of a mixture of experts learning from examples generated by another network with the same architecture. When the number of examples is smaller than a critical value, the network shows a symmetric phase where the role of the experts is not specialized. Upon crossing the critical point, the system undergoes a continuous phase transition to a symmetry breaking ...
The “mixture of experts” framework provides a modular and flexible approach to function approximation. However, the important problem of determining the appropriate number and complexity of experts has not been fully explored. In this paper, we consider a localized form of the gating network that can perform function approximation tasks very well with only one layer of experts. Certain measures f...
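A localized gate of the kind described can be sketched as normalized Gaussian responsibilities, so each expert dominates near its own center. The function below is an illustrative guess at that form, not the paper's exact parameterization.

```python
import numpy as np

def localized_gate(x, centers, widths):
    """Normalized Gaussian gating: g_k(x) peaks near expert k's center.

    centers: (K, d) array of expert centers; widths: scalar or (K,) array.
    """
    d2 = ((centers - x) ** 2).sum(axis=1)     # squared distance to each center
    act = np.exp(-d2 / (2.0 * widths ** 2))   # unnormalized Gaussian activations
    return act / act.sum()                    # responsibilities summing to 1
```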
Difficult non-linear problems can be mapped to a set of localised linear problems. A self-organising map (SOM) is used as a gating function to a localised mixture of experts classifier and is shown to find solutions equivalent to those learned by a multi-layer perceptron while retaining the simplicity and resilience of a single-layer perceptron. Modifications to the traditional softmax gate fun...
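Under the assumption that the SOM's winning node selects a local linear expert, a sketch using the third-party MiniSom package might look like the following; the partition-then-least-squares step is a stand-in for whatever training rule the paper actually uses.

```python
import numpy as np
from minisom import MiniSom  # third-party package: pip install minisom

def fit_som_gated_experts(X, y, grid=(4, 4), iters=1000):
    """Train a SOM gate, then fit one linear expert per winning node."""
    som = MiniSom(grid[0], grid[1], X.shape[1], sigma=1.0, learning_rate=0.5)
    som.train_random(X, iters)
    experts = {}
    for node in {som.winner(x) for x in X}:
        idx = [i for i, x in enumerate(X) if som.winner(x) == node]
        A = np.c_[X[idx], np.ones(len(idx))]           # add bias column
        experts[node] = np.linalg.lstsq(A, y[idx], rcond=None)[0]
    return som, experts

def predict(som, experts, x):
    """Route x to the linear expert owned by its winning SOM node."""
    w = experts[som.winner(x)]     # assumes the node was occupied during training
    return np.r_[x, 1.0] @ w
```

The appeal of this arrangement, as the abstract notes, is that each expert stays a simple single-layer model while the SOM gate carves the non-linear problem into locally linear pieces.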