Search results for: experts mixture

Number of results: 160772

Journal: Proceedings of the ... AAAI Conference on Artificial Intelligence 2023

Existing works on anomaly detection (AD) rely on clean labels from human annotators that are expensive to acquire in practice. In this work, we propose a method to leverage weak/noisy labels (e.g., risk scores generated by machine rules for detecting malware) that are cheaper to obtain for anomaly detection. Specifically, we propose ADMoE, the first framework for anomaly detection algorithms to learn from noisy labels. In a nutshell, ADMoE leverages mixture-of-experts (MoE) a...

Journal: Lecture Notes in Computer Science 2021

Federated learning (FL) is an emerging distributed machine learning paradigm that avoids data sharing among training nodes so as to protect privacy. Under the coordination of the FL server, each client conducts model training using its own computing resources and private data set. The global model can be created by aggregating the training results of the clients. To cope with highly non-IID data distributions, personalized federated learning (PFL) has been prop...

1996
Christopher C. Vogt Garrison W. Cottrell Richard K. Belew Brian T. Bartell

A linear mixture of experts is used to combine three standard IR systems. The parameters for the mixture are determined automatically through training on document relevance assessments via optimization of a rank-order statistic which is empirically correlated with average precision. The mixture improves performance in some cases and degrades it in others, with the degradations possibly due to t...
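The linear-combination idea in the snippet above can be sketched in a few lines. The scores, relevance assessments, and grid search below are invented for illustration; the original work instead trains the weights on document relevance assessments by optimizing a rank-order statistic correlated with average precision.

```python
from itertools import product

# Toy per-document scores from three hypothetical IR systems (invented data).
scores = {
    "doc1": (0.9, 0.2, 0.5),
    "doc2": (0.4, 0.8, 0.6),
    "doc3": (0.1, 0.3, 0.2),
}
relevant = {"doc1", "doc2"}  # toy relevance assessments

def combined(weights, s):
    # Linear mixture: a weighted sum of the per-system scores.
    return sum(w * x for w, x in zip(weights, s))

def avg_precision(weights):
    # Rank documents by the mixed score, then compute average precision.
    ranked = sorted(scores, key=lambda d: combined(weights, scores[d]), reverse=True)
    hits, total = 0, 0.0
    for rank, d in enumerate(ranked, 1):
        if d in relevant:
            hits += 1
            total += hits / rank
    return total / len(relevant)

# Crude grid search over mixture weights (a stand-in for the paper's
# optimization of a rank-order statistic).
grid = [w / 4 for w in range(5)]
best = max(product(grid, repeat=3), key=avg_precision)
```

On this toy data the search recovers weights that rank both relevant documents first; with real systems the mixture can also degrade performance, as the abstract notes.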

Journal: CoRR 2013
David Eigen Marc'Aurelio Ranzato Ilya Sutskever

Mixtures of Experts combine the outputs of several “expert” networks, each of which specializes in a different part of the input space. This is achieved by training a “gating” network that maps each input to a distribution over the experts. Such models show promise for building larger networks that are still cheap to compute at test time, and more parallelizable at training time. In this w...
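The gating mechanism described above can be sketched with linear experts and a softmax gate. All sizes, the random initialization, and the class name below are illustrative assumptions, not details from the paper.

```python
import math
import random

random.seed(0)

def softmax(zs):
    m = max(zs)
    exps = [math.exp(z - m) for z in zs]
    s = sum(exps)
    return [e / s for e in exps]

class MoE:
    """Tiny mixture-of-experts sketch: linear experts mixed by a softmax gate."""

    def __init__(self, n_in, n_experts):
        self.experts = [[random.gauss(0, 0.1) for _ in range(n_in)]
                        for _ in range(n_experts)]
        self.gate = [[random.gauss(0, 0.1) for _ in range(n_in)]
                     for _ in range(n_experts)]

    def forward(self, x):
        # The gating network maps the input to a distribution over experts.
        g = softmax([sum(w * xi for w, xi in zip(row, x)) for row in self.gate])
        # Each expert produces its own output on the same input.
        outs = [sum(w * xi for w, xi in zip(row, x)) for row in self.experts]
        # The model output is the gate-weighted sum of expert outputs.
        return sum(gi * oi for gi, oi in zip(g, outs)), g

moe = MoE(n_in=3, n_experts=2)
y, gate_probs = moe.forward([1.0, -0.5, 2.0])
```

In sparse variants only the top-scoring experts are evaluated, which is what makes large mixtures cheap at test time.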

Journal: IEEE Transactions on Biometrics, Behavior, and Identity Science 2020

1998
J. Tin-Yau Kwok

In this paper, we study the incorporation of the support vector machine (SVM) into the (hierarchical) mixture of experts model to form a support vector mixture. We show that, in both classification and regression problems, the use of a support vector mixture leads to quadratic programming (QP) problems that are very similar to those for an SVM, with no increase in the dimensionality of the QP pr...

Journal: IEEE Transactions on Neural Networks and Learning Systems 2015

[Chart: number of search results per year; click on the chart to filter results by publication year]