Search results for: experts mixture

Number of results: 160,772

Journal: CoRR 2014
Billy Peralta

A useful strategy to deal with complex classification scenarios is the “divide and conquer” approach. The mixture of experts (MOE) technique makes use of this strategy by jointly training a set of classifiers, or experts, that are specialized in different regions of the input space. A global model, or gate function, complements the experts by learning a function that weights their relevance in d...
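
As a rough illustration of the combination rule this abstract describes, the sketch below weights the class posteriors of a few linear experts by a softmax gate. The dimensions, linear experts, and gate form are assumptions for illustration, not details from the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    K, D, C = 3, 4, 2                        # experts, input dim, classes

    W_experts = rng.normal(size=(K, D, C))   # one linear expert per region
    W_gate = rng.normal(size=(D, K))         # linear gate

    def softmax(z, axis=-1):
        z = z - z.max(axis=axis, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=axis, keepdims=True)

    def moe_predict(x):
        """Gate-weighted average of expert class posteriors for one input x."""
        g = softmax(x @ W_gate)                             # (K,) gate weights
        p = softmax(np.einsum('d,kdc->kc', x, W_experts))   # (K, C) expert outputs
        return g @ p                                        # (C,) mixture posterior

    x = rng.normal(size=D)
    print(moe_predict(x))   # probabilities summing to 1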

2007
Tracey A. Bale Khurshid Ahmad

The mixture-of-experts model is a static neural network architecture in that it learns input-output mappings where the output is directly influenced by the current input but not previous inputs. We explore a dynamic version of the mixture-of-experts model by introducing feedback into the architecture, enabling it to learn temporal behaviour. The model’s ability to decompose a task into static a...
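
A hedged sketch of the recurrent variant described above: the previous mixture output is fed back and concatenated with the current input, so both the gate and the experts can react to temporal context. The exact feedback path in the paper may differ; everything below is illustrative.

    import numpy as np

    rng = np.random.default_rng(1)
    K, D, O = 2, 3, 1                     # experts, input dim, output dim

    W_e = rng.normal(size=(K, D + O, O))  # experts see [x_t, y_{t-1}]
    W_g = rng.normal(size=(D + O, K))     # gate sees the same augmented input

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    def step(x_t, y_prev):
        z = np.concatenate([x_t, y_prev])      # augmented input with feedback
        g = softmax(z @ W_g)                   # gate weights
        y_k = np.einsum('d,kdo->ko', z, W_e)   # per-expert outputs
        return g @ y_k                         # (O,) mixture output

    y = np.zeros(O)
    for x_t in rng.normal(size=(5, D)):        # unroll over a short sequence
        y = step(x_t, y)
        print(y)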

Journal: CoRR 2017
Tianyi Zhao Jun Yu Zhenzhong Kuang Wei Zhang Jianping Fan

In this paper, a deep mixture of diverse experts algorithm is developed for seamlessly combining a set of base deep CNNs (convolutional neural networks) with diverse outputs (task spaces), e.g., such base deep CNNs are trained to recognize different subsets of tens of thousands of atomic object classes. First, a two-layer (category layer and object class layer) ontology is constructed to achiev...
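
The fusion step this abstract hints at can be pictured as scattering each base network's subset scores into the full label space. The subsets, the averaging rule, and the absence of the paper's two-layer ontology are all simplifying assumptions here.

    import numpy as np

    rng = np.random.default_rng(2)
    N_CLASSES = 10
    subsets = [np.array([0, 1, 2, 3]),    # classes handled by expert 0
               np.array([3, 4, 5, 6]),    # subsets may overlap at boundaries
               np.array([6, 7, 8, 9])]

    def fuse(expert_scores):
        """Scatter-add per-subset scores into the global class space."""
        total = np.zeros(N_CLASSES)
        counts = np.zeros(N_CLASSES)
        for idx, s in zip(subsets, expert_scores):
            np.add.at(total, idx, s)
            np.add.at(counts, idx, 1.0)
        avg = total / np.maximum(counts, 1.0)  # average where experts overlap
        e = np.exp(avg - avg.max())
        return e / e.sum()                     # global class posterior

    scores = [rng.normal(size=len(s)) for s in subsets]
    print(fuse(scores))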

1995
Yair Weiss

Estimating motion in scenes containing multiple motions remains a difficult problem for computer vision. Here we describe a novel recurrent network architecture which solves this problem by simultaneously estimating motion and segmenting the scene. The network is comprised of locally connected units which carry out simple calculations in parallel. We present simulation results illustrating the su...
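
The underlying computation resembles a soft, parallel competition between motion models. The sketch below writes it as an EM-style loop over a mixture of two translational motions rather than the paper's recurrent network; the data, temperature, and initialization are illustrative.

    import numpy as np

    rng = np.random.default_rng(3)
    v_true = np.array([[1.0, 0.0], [0.0, 1.0]])    # two underlying translations
    obs = np.vstack([v + 0.1 * rng.normal(size=(50, 2)) for v in v_true])

    v_est = np.stack([obs[0], obs[-1]])            # one seed point from each half
    for _ in range(20):
        # E-step: responsibility of each motion model for each flow vector
        d = ((obs[:, None, :] - v_est[None, :, :]) ** 2).sum(-1)
        logr = -d / 0.02
        logr -= logr.max(1, keepdims=True)
        r = np.exp(logr)
        r /= r.sum(1, keepdims=True)
        # M-step: each motion becomes a responsibility-weighted mean
        v_est = (r.T @ obs) / (r.sum(0)[:, None] + 1e-9)

    print(v_est)    # approximately recovers the two true translations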

1996
Viswanath Ramamurti Joydeep Ghosh

The hierarchical mixture of experts (HME) architecture is a powerful tree-structured architecture for supervised learning. In this paper, an efficient one-pass algorithm to solve the M-step of the EM iterations while training the HME network to perform classification tasks is first described. This substantially reduces the training time compared to using the IRLS method to solve the M-step. Further,...
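
For context on what the M-step involves: with responsibilities fixed by the E-step, fitting an expert reduces to a weighted fit, normally iterated via IRLS. The sketch below shows a single responsibility-weighted least-squares solve of the kind a one-pass update replaces the IRLS loop with; the ±1 working targets and ridge term are assumptions, not the paper's update.

    import numpy as np

    rng = np.random.default_rng(4)
    N, D = 200, 3
    X = rng.normal(size=(N, D))
    t = rng.integers(0, 2, size=N).astype(float)   # binary targets
    h = rng.random(N)                              # E-step responsibilities

    # One weighted least-squares solve instead of an IRLS loop:
    # minimize sum_n h_n * (x_n @ w - z_n)^2 for a working target z.
    z = 2.0 * t - 1.0                              # +/-1 working targets (assumed)
    Xw = X * h[:, None]
    w = np.linalg.solve(X.T @ Xw + 1e-6 * np.eye(D), Xw.T @ z)
    print(w)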

Journal: IEEE Trans. Signal Processing 1997
Ajit V. Rao David J. Miller Kenneth Rose Allen Gersho

We propose a new learning algorithm for regression modeling. The method is especially suitable for optimizing neural network structures that are amenable to a statistical description as mixture models. These include mixture of experts, hierarchical mixture of experts (HME), and normalized radial basis functions (NRBF). Unlike recent maximum likelihood (ML) approaches, we directly minimize the (...
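
Since the abstract is truncated before stating the objective, here is only a sketch of one of the model families it names, a normalized radial basis function (NRBF) regressor, whose mixture structure is what makes it amenable to such algorithms. Centers, widths, and weights are illustrative.

    import numpy as np

    rng = np.random.default_rng(5)
    M, D = 5, 2
    centers = rng.normal(size=(M, D))
    widths = np.ones(M)
    w_out = rng.normal(size=M)

    def nrbf(x):
        """Normalized RBF: basis activations are renormalized to sum to one,
        so the output is a convex combination of the local weights w_out."""
        d2 = ((x - centers) ** 2).sum(-1)
        phi = np.exp(-d2 / (2 * widths ** 2))
        phi /= phi.sum()
        return phi @ w_out

    print(nrbf(rng.normal(size=D)))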

Journal: Neurocomputing 2008
Reza Ebrahimpour Ehsanollah Kabir Hossein Esteky Mohammad Reza Yousefi

A model for view-independent face recognition, based on Mixture of Experts (ME), is presented. Instead of allowing ME to partition the face space automatically, it is directed to adapt to a particular partitioning corresponding to predetermined views. Experimental results show that this model performs well in recognizing faces of intermediate unseen views. There is neurophysiological evidence ...
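
The key design choice, a gate fixed to predetermined views, can be sketched as follows: the gate acts as a view classifier and each expert recognizes subjects within one pose. The view count, dimensions, and linear forms are assumptions for illustration.

    import numpy as np

    rng = np.random.default_rng(6)
    VIEWS = 3                              # e.g. left / frontal / right (assumed)
    D, SUBJECTS = 8, 4
    W_view = rng.normal(size=(D, VIEWS))   # gate = view classifier
    W_face = rng.normal(size=(VIEWS, D, SUBJECTS))  # one expert per view

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    def recognize(x):
        g = softmax(x @ W_view)                          # soft view assignment
        p = np.array([softmax(x @ W_face[v]) for v in range(VIEWS)])
        return g @ p                                     # subject posterior

    print(recognize(rng.normal(size=D)))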

2007
Katherine A. Heller Zoubin Ghahramani

Although clustering data into mutually exclusive partitions has been an extremely successful approach to unsupervised learning, there are many situations in which a richer model is needed to fully represent the data. This is the case in problems where data points simultaneously belong to multiple, overlapping clusters. For example, a particular gene may have several functions, therefore...
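
The contrast with exclusive partitions can be made concrete with a binary membership matrix whose rows may activate several clusters at once; this toy example is far simpler than the paper's model and is only meant to illustrate overlap.

    import numpy as np

    points = ["geneA", "geneB", "geneC"]
    clusters = ["metabolism", "signaling", "transport"]

    # Z[i, k] = 1 means point i belongs to cluster k; rows need not sum to 1.
    Z = np.array([[1, 1, 0],     # geneA has two functions
                  [0, 1, 0],
                  [1, 0, 1]])

    for i, name in enumerate(points):
        owned = [c for k, c in enumerate(clusters) if Z[i, k]]
        print(name, "->", owned)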

2004
Arik Azran Ron Meir

The hierarchical mixture of experts architecture provides a flexible procedure for implementing classification algorithms. The classification is obtained by a recursive soft partition of the feature space in a data-driven fashion. Such a procedure enables local classification where several experts are used, each of which is assigned the task of classification over some subspace of the feat...
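
The recursive soft partition can be sketched as a small gating tree: the root gate splits the input space softly, child gates split again, and leaf experts classify locally; the final posterior is the gate-weighted mixture over all leaves. The depth and parameterization below are assumed.

    import numpy as np

    rng = np.random.default_rng(7)
    D, C = 4, 2
    g_top = rng.normal(size=(D, 2))          # root gate: two branches
    g_mid = rng.normal(size=(2, D, 2))       # one gate per branch
    experts = rng.normal(size=(2, 2, D, C))  # four leaf experts

    def softmax(z, axis=-1):
        z = z - z.max(axis=axis, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=axis, keepdims=True)

    def hme_predict(x):
        p = np.zeros(C)
        top = softmax(x @ g_top)             # P(branch | x)
        for b in range(2):
            mid = softmax(x @ g_mid[b])      # P(leaf | branch, x)
            for l in range(2):
                p += top[b] * mid[l] * softmax(x @ experts[b, l])
        return p

    print(hme_predict(rng.normal(size=D)))   # sums to 1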

Journal: Neurocomputing 1998
Lei Xu

The connections of the alternative model for mixture of experts (ME) to normalized radial basis function (NRBF) nets and extended normalized RBF (ENRBF) nets are established, and the well-known expectation-maximization (EM) algorithm for maximum likelihood learning is suggested for the two types of RBF nets. This new learning technique determines the parameters of the input layer (including ...
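
The connection being exploited can be sketched directly: if the ME gate is the posterior of a Gaussian mixture over the input, the gate activations are algebraically normalized RBF activations, which is what lets EM over the joint density fit the input layer. The parameters below are illustrative; ENRBF would replace the constant expert outputs with linear ones.

    import numpy as np

    rng = np.random.default_rng(8)
    M, D = 3, 2
    mu = rng.normal(size=(M, D))   # Gaussian centers (input layer)
    w = rng.normal(size=M)         # constant expert outputs (assumed)

    def gate(x):
        """Posterior p(k | x) under equal-weight unit-variance Gaussians;
        algebraically identical to normalized RBF activations."""
        logp = -0.5 * ((x - mu) ** 2).sum(-1)
        e = np.exp(logp - logp.max())
        return e / e.sum()

    def predict(x):
        return gate(x) @ w         # ME output == NRBF output

    print(predict(rng.normal(size=D)))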

[Chart: number of search results per publication year]