View-independent face recognition with Mixture of Experts

Authors

  • Reza Ebrahimpour
  • Ehsanollah Kabir
  • Hossein Esteky
  • Mohammad Reza Yousefi
Abstract

A model for view-independent face recognition, based on Mixture of Experts (ME), is presented. Instead of allowing the ME to partition the face space automatically, it is directed to adapt to a particular partitioning corresponding to predetermined views. Experimental results show that the model performs well in recognizing faces at intermediate, unseen views. There is neurophysiological evidence underpinning the proposed model, reporting a similar mechanism of pooling the outputs of several view-specific modules to perform view-independent face recognition.
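As a rough illustration of the mechanism described in the abstract, the sketch below implements a small gated mixture of view-specific experts in NumPy. The linear experts, the gradient-descent training step, the class name ViewDirectedMoE, and all dimensions are illustrative assumptions, not the authors' implementation; it only shows how known view labels can direct the experts' partitioning while a gating network pools their outputs.

```python
import numpy as np


def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)


class ViewDirectedMoE:
    """One linear expert per predetermined view plus a gating network.

    The final identity scores are the gate-weighted sum of the expert
    outputs; the "teacher-directed" twist is that experts and gate are
    supervised with known view labels instead of letting ME partition
    the face space on its own. (Illustrative sketch, not the paper's code.)
    """

    def __init__(self, dim, n_classes, n_views, seed=0):
        rng = np.random.default_rng(seed)
        self.experts = [rng.normal(scale=0.1, size=(dim, n_classes))
                        for _ in range(n_views)]
        self.gate = rng.normal(scale=0.1, size=(dim, n_views))
        self.n_classes, self.n_views = n_classes, n_views

    def forward(self, x):
        g = softmax(x @ self.gate)                                    # (B, V) gate weights
        y = np.stack([softmax(x @ w) for w in self.experts], axis=1)  # (B, V, C) expert outputs
        return (g[..., None] * y).sum(axis=1)                         # pooled identity scores

    def train_step(self, x, identity, view, lr=0.1):
        # Each expert is updated only on faces of its own predetermined view.
        for v in range(self.n_views):
            xs, ys = x[view == v], identity[view == v]
            if len(xs) == 0:
                continue
            p = softmax(xs @ self.experts[v])
            t = np.eye(self.n_classes)[ys]
            self.experts[v] -= lr * xs.T @ (p - t) / len(xs)  # softmax cross-entropy gradient
        # The gate is trained to predict the (known) view label.
        pg, tg = softmax(x @ self.gate), np.eye(self.n_views)[view]
        self.gate -= lr * x.T @ (pg - tg) / len(x)


# Toy usage with random vectors standing in for face features (e.g. eigenface
# projections); an unseen intermediate view would simply produce soft gate
# weights that blend the neighbouring view-specific experts.
rng = np.random.default_rng(1)
moe = ViewDirectedMoE(dim=64, n_classes=10, n_views=3)
x = rng.normal(size=(30, 64))
identity, view = np.arange(30) % 10, np.arange(30) % 3
for _ in range(200):
    moe.train_step(x, identity, view)
predictions = moe.forward(x).argmax(axis=1)
```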

Related articles

Teacher-directed learning in view-independent face recognition with mixture of experts using overlapping eigenspaces

A model for view-independent face recognition, based on Mixture of Experts (ME), is presented. In the basic form of ME, the problem space is automatically divided into several subspaces for the experts, and the outputs of the experts are combined by a gating network. In our proposed model, the ME is directed to adapt to a particular partitioning corresponding to predetermined views. To force an exper...

Teacher-directed learning in view-independent face recognition with mixture of experts using single-view eigenspaces

We propose a new model for view-independent face recognition using a multiview approach. We use the so-called “mixture of experts” (ME), in which the problem space is divided into several subspaces for the experts, and the outputs of the experts are combined by a gating network. In our model, instead of leaving the ME to partition the face space automatically, the ME is directed to adapt to a particul...

Teacher-Directed Learning with Mixture of Experts for View-Independent Face Recognition

We propose two new models for view-independent face recognition, which lie under the category of multiview approaches. We use the so-called “mixture of experts” (MOE), in which the problem space is divided into several subspaces for the experts, and the outputs of the experts are then combined by a gating network to form the final output. Basically, our focus is on the way that the face space is p...

Mixture of Experts for Persian handwritten word recognition

This paper presents the results of Persian handwritten word recognition based on the Mixture of Experts technique. In the basic form of ME, the problem space is automatically divided into several subspaces for the experts, and the outputs of the experts are combined by a gating network. In our proposed model, we used a Mixture of Experts of Multi-Layered Perceptrons with a momentum term in the classification ...

Boosted Pre-loaded Mixture of Experts for low-resolution face recognition

A modified version of Boosted Mixture of Experts (BME) for low-resolution face recognition is presented in this paper. Most of the methods developed for low-resolution face recognition have focused on improving the resolution of face images and/or on special feature extraction methods that can deal effectively with the low-resolution problem. However, we focus on the classification step of face recogniti...


Journal:
  • Neurocomputing

Volume 71, Issue 

Pages -

Publication year: 2008