Search results for: gaussian processes
Number of results: 598,040
In this paper we introduce deep Gaussian process (GP) models. Deep GPs are a deep belief network based on Gaussian process mappings. The data is modeled as the output of a multivariate GP. The inputs to that Gaussian process are then governed by another GP. A single layer model is equivalent to a standard GP or the GP latent variable model (GP-LVM). We perform inference in the model by approxim...
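A minimal sketch of the layered construction this abstract describes: a draw from a two-layer deep GP prior, in which the outputs of one GP become the inputs of the next. The kernel choice, jitter, and dimensions below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def se_kernel(A, B, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel between the rows of A and B."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return variance * np.exp(-0.5 * sq / lengthscale**2)

rng = np.random.default_rng(0)
X = np.linspace(-3, 3, 200)[:, None]           # observed inputs

# Layer 1: latent function drawn from a GP on X.
K1 = se_kernel(X, X) + 1e-8 * np.eye(len(X))   # jitter for numerical stability
h = rng.multivariate_normal(np.zeros(len(X)), K1)[:, None]

# Layer 2: output drawn from a GP whose inputs are the layer-1 outputs;
# with a single layer this collapses to a standard GP draw on X.
K2 = se_kernel(h, h) + 1e-8 * np.eye(len(X))
y = rng.multivariate_normal(np.zeros(len(X)), K2)
```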
Gaussian process models are flexible, Bayesian non-parametric approaches to regression. Properties of multivariate Gaussians mean that they can be combined linearly in the manner of additive models and via a link function (like in generalized linear models) to handle non-Gaussian data. However, the link function formalism is restrictive: link functions are always invertible and must convert a p...
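A minimal sketch of the GLM-style construction referred to here, assuming a Poisson likelihood with a log link (the link choice is illustrative): a latent GP draw is passed through the inverse link to parameterize a non-Gaussian observation model.

```python
import numpy as np

def se_kernel(A, B, lengthscale=1.0, variance=1.0):
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return variance * np.exp(-0.5 * sq / lengthscale**2)

rng = np.random.default_rng(1)
X = np.linspace(0, 5, 100)[:, None]
K = se_kernel(X, X) + 1e-8 * np.eye(len(X))
f = rng.multivariate_normal(np.zeros(len(X)), K)  # latent GP draw

# The (invertible) log link maps the real-valued f to a positive rate,
# which then drives a non-Gaussian (count) likelihood.
rate = np.exp(f)
y = rng.poisson(rate)
```

The restriction the abstract objects to is visible here: the link must be invertible and must map the GP's unconstrained output into the likelihood's parameter space.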
Only the $k(\mathbf{x}_*, X)$ term depends on the test point $\mathbf{x}_*$; therefore, to calculate the slope of the posterior mean we just need to differentiate the kernel. For the squared-exponential covariance function, the derivative of the kernel between $\mathbf{x}_*$ and a training point $\mathbf{x}_i$ is:
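(The formula itself is truncated in the snippet; assuming the standard parameterization with signal variance $\sigma_f^2$ and lengthscale $\ell$, symbols not in the original text, it would read:)

\[
\frac{\partial k(\mathbf{x}_*, \mathbf{x}_i)}{\partial \mathbf{x}_*}
= -\frac{\mathbf{x}_* - \mathbf{x}_i}{\ell^2}\, k(\mathbf{x}_*, \mathbf{x}_i),
\qquad
k(\mathbf{x}_*, \mathbf{x}_i) = \sigma_f^2 \exp\!\Big(-\frac{\lVert \mathbf{x}_* - \mathbf{x}_i \rVert^2}{2\ell^2}\Big),
\]

so the slope of the posterior mean follows by differentiating each entry of $k(\mathbf{x}_*, X)$.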
We introduce the mixture of Gaussian processes (MGP) model, which is useful for applications in which the optimal bandwidth of a map is input dependent. The MGP is derived from the mixture of experts model and can also be used for modeling general conditional probability densities. We discuss how Gaussian processes, in particular in the form of Gaussian process classification, the support vector mac...
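A numerical sketch of the input-dependent mixture density an MGP defines. The gating function and the fixed expert predictions below are placeholders standing in for the GP experts' posteriors; they are assumptions for illustration, not the paper's model.

```python
import numpy as np
from scipy.stats import norm

def gate(x, centers, temp=1.0):
    """Softmax gating weights based on distance to expert centers."""
    logits = -((x[:, None] - centers[None, :])**2) / temp
    w = np.exp(logits - logits.max(1, keepdims=True))
    return w / w.sum(1, keepdims=True)

# Illustrative per-expert predictive means and noise scales at test points x
# (in a full MGP these would come from the individual GP posteriors).
x = np.linspace(-2, 2, 5)
mu = np.stack([np.sin(x), 0.5 * x], axis=1)   # expert means, shape (n, 2)
sd = np.array([0.1, 0.3])                     # per-expert noise scales
w = gate(x, centers=np.array([-1.0, 1.0]))

def mixture_density(y, w, mu, sd):
    """p(y|x) = sum_k w_k(x) N(y; mu_k(x), sd_k^2)."""
    return np.sum(w * norm.pdf(y[:, None], mu, sd), axis=1)

print(mixture_density(np.zeros_like(x), w, mu, sd))
```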
The Bayesian analysis of neural networks is difficult because a simple prior over weights implies a complex prior distribution over functions. In this paper we investigate the use of Gaussian process priors over functions, which permit the predictive Bayesian analysis for fixed values of hyperparameters to be carried out exactly using matrix operations. Two methods, using optimization and aver...
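The "matrix operations" in question are the standard exact GP regression equations. A minimal sketch for fixed hyperparameters (kernel form, noise level, and lengthscale are assumed here), following the usual Cholesky-based formulation:

```python
import numpy as np

def se_kernel(A, B, lengthscale=1.0, variance=1.0):
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return variance * np.exp(-0.5 * sq / lengthscale**2)

def gp_predict(X, y, Xs, noise=0.1):
    """Exact GP posterior mean and variance for fixed hyperparameters."""
    K = se_kernel(X, X) + noise**2 * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = se_kernel(X, Xs)
    mean = Ks.T @ alpha                       # posterior mean at test points
    v = np.linalg.solve(L, Ks)
    var = se_kernel(Xs, Xs).diagonal() - np.sum(v**2, axis=0)
    return mean, var
```

The exactness holds only for fixed hyperparameters, which is why the abstract goes on to discuss methods for handling the hyperparameters themselves.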
We introduce a Gaussian process model of functions which are additive. An additive function is one which decomposes into a sum of low-dimensional functions, each depending on only a subset of the input variables. Additive GPs generalize both Generalized Additive Models and the standard GP models which use squared-exponential kernels. Hyperparameter learning in this model can be seen as Bayesia...
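A sketch of the first-order additive kernel this abstract describes: a sum of one-dimensional squared-exponential kernels, one per input dimension, so each term depends on only a single coordinate of the inputs. The full additive-GP construction also includes higher-order interaction terms, which this sketch omits.

```python
import numpy as np

def se_1d(a, b, lengthscale=1.0):
    """One-dimensional squared-exponential kernel."""
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / lengthscale**2)

def additive_kernel(A, B, lengthscales):
    """First-order additive kernel: a sum of 1-D kernels, one per dimension."""
    K = np.zeros((A.shape[0], B.shape[0]))
    for d, ell in enumerate(lengthscales):
        K += se_1d(A[:, d], B[:, d], ell)
    return K

X = np.random.default_rng(2).normal(size=(10, 3))
K = additive_kernel(X, X, lengthscales=[1.0, 0.5, 2.0])
```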
We address the problem of learning to rank based on a large feature set and a training set of judged documents for given queries. Recently there has been interest in using IR evaluation metrics to assist in training ranking functions. However, direct optimization of an IR metric such as NDCG with respect to model parameters is difficult because such a metric is non-smooth with respect to docume...
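To see why such metrics are non-smooth, here is a sketch of NDCG computed from model scores (the gain and discount conventions are the common ones, assumed rather than taken from the paper): the metric depends on the scores only through the induced ordering, so it is piecewise constant and its gradient is zero almost everywhere.

```python
import numpy as np

def dcg(relevances):
    """Discounted cumulative gain with the common 2^rel - 1 gain."""
    r = np.asarray(relevances, dtype=float)
    return np.sum((2**r - 1) / np.log2(np.arange(2, r.size + 2)))

def ndcg(scores, relevances):
    """NDCG of ranking documents by descending score; its value changes
    only when a change in the scores swaps the induced ordering."""
    order = np.argsort(-np.asarray(scores))
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(np.asarray(relevances)[order]) / ideal if ideal > 0 else 0.0
```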
While the framework of Gaussian process priors for functions is very flexible and has a number of advantages, its use within a fully Bayesian hierarchical modeling framework has been limited due to computational constraints. Most often, simple models are fit, with hyperparameters learned by maximum likelihood. But this approach understates the posterior uncertainty in inference. We consider pri...
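For context, the maximum-likelihood practice this abstract criticizes fixes the hyperparameters $\theta$ at the maximizer of the log marginal likelihood (the notation below is the standard GP convention, not necessarily the paper's):

\[
\log p(\mathbf{y} \mid X, \theta)
= -\tfrac{1}{2}\,\mathbf{y}^\top K_\theta^{-1} \mathbf{y}
  - \tfrac{1}{2}\log \lvert K_\theta \rvert
  - \tfrac{n}{2}\log 2\pi,
\qquad
K_\theta = K(X, X; \theta) + \sigma_n^2 I.
\]

Plugging in the single point estimate $\hat{\theta}$ discards the posterior spread over $\theta$, which is the understated uncertainty the abstract refers to.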
In this paper, we consider Tipping’s relevance vector machine (RVM) [1] and formalize an incremental training strategy as a variant of the expectation-maximization (EM) algorithm that we call Subspace EM (SSEM). Working with a subset of active basis functions, the sparsity of the RVM solution will ensure that the number of basis functions and thereby the computational complexity is kept low. We...
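A minimal sketch of the sparsity mechanism this abstract relies on: MacKay-style fixed-point updates of the per-basis precisions, restricted to an active subset with pruning of irrelevant basis functions. It illustrates why working with active bases keeps the cost low; it is not the SSEM algorithm itself, and the noise level and pruning threshold are assumptions.

```python
import numpy as np

def rvm_active_set(Phi, y, sigma2=0.01, n_iter=50, prune=1e6):
    """Fixed-point RVM hyperparameter updates over an active basis subset."""
    n, m = Phi.shape
    active = np.arange(m)          # indices of currently active basis functions
    alpha = np.ones(m)             # per-basis prior precisions
    for _ in range(n_iter):
        P = Phi[:, active]
        A = np.diag(alpha[active])
        Sigma = np.linalg.inv(P.T @ P / sigma2 + A)   # posterior covariance
        mu = Sigma @ P.T @ y / sigma2                 # posterior mean
        gamma = 1 - alpha[active] * np.diag(Sigma)    # effective parameters
        alpha[active] = gamma / (mu**2 + 1e-12)
        active = active[alpha[active] < prune]        # drop irrelevant bases
    return active, alpha
```

Because the matrix inverted in each iteration is only as large as the active set, the per-iteration cost shrinks as basis functions are pruned.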