Search results for: gaussian kernel

Number of results: 123253

2001
Rein van den Boomgaard, Rik van der Weij

Gaussian convolutions are perhaps the most often used image operators in low-level computer vision tasks. Surprisingly though, there are precious few articles that describe efficient and accurate implementations of these operators. In this paper we describe numerical approximations of Gaussian convolutions based on interpolation. We start with the continuous convolution integral and use an inte...
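The separability that makes Gaussian convolution cheap can be sketched in a few lines of NumPy. This is a plain sampled-and-truncated Gaussian filter (not the interpolation-based scheme the abstract describes), with illustrative parameters:

```python
import numpy as np

# A minimal sketch: sample a truncated Gaussian, normalize it, and exploit
# separability by convolving rows first, then columns. The truncation radius
# of 4*sigma is an assumption, not taken from the paper.

def gaussian_kernel1d(sigma, truncate=4.0):
    radius = int(truncate * sigma + 0.5)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()  # normalize so the DC gain is 1

def gaussian_blur(image, sigma):
    k = gaussian_kernel1d(sigma)
    # separable 2-D convolution: filter each row, then each column
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, image)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)

image = np.zeros((64, 64))
image[32, 32] = 1.0  # unit impulse
smoothed = gaussian_blur(image, sigma=2.0)
# the impulse response is the sampled, truncated 2-D Gaussian, summing to ~1
```

Separability reduces the cost from O(r^2) to O(r) multiplications per pixel for a kernel of radius r, which is why row-then-column filtering is the usual starting point before more refined schemes.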

2012
Jie Yu

Soft sensor techniques have become increasingly important for providing reliable on-line measurements, facilitating advanced process control, and improving product quality in the process industries. Conventional soft sensors are normally single-model based and thus may not be appropriate for processes with shifting operating conditions or phases. In this study, a multiway Gaussian mixture model (MGMM) ba...

2008
Ryan Prescott Adams, Iain Murray, David J.C. MacKay

The Gaussian process is a useful prior on functions for Bayesian kernel regression and classification. Density estimation with a Gaussian process prior is difficult, however, as densities must be nonnegative and integrate to unity. The statistics community has explored the use of a logistic Gaussian process for density estimation, relying on approximations of the normalization constant (e.g. [1...

2014
Ziyu Wang, Babak Shakibi, Lin Jin, Nando de Freitas

where σ_T²(x) = κ(x,x) − k_{1:T}(x)ᵀ K⁻¹ k_{1:T}(x) and this bound is tight. Moreover, σ_T²(x) is the posterior predictive variance of a Gaussian process with the same kernel. Lemma 3 (Adapted from Proposition 1 of de Freitas et al. (2012)). Let κ : ℝᴰ × ℝᴰ → ℝ be a kernel that is twice differentiable along the diagonal {(x,x) | x ∈ ℝᴰ}, with L defined as in Lemma 1.1, and f be an element of the RKHS wit...
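The posterior predictive variance expression quoted in the snippet can be sanity-checked numerically. The sketch below assumes a 1-D squared-exponential kernel and noise-free observations with a small jitter for numerical stability; both choices are illustrative, not taken from the paper:

```python
import numpy as np

# Numerical check of the GP posterior predictive variance
#   sigma_T^2(x) = kappa(x, x) - k_{1:T}(x)^T K^{-1} k_{1:T}(x)
# with an assumed squared-exponential kernel (unit lengthscale).

def kappa(a, b, lengthscale=1.0):
    return np.exp(-0.5 * ((a - b) / lengthscale) ** 2)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=5)                          # T = 5 observed inputs
K = kappa(X[:, None], X[None, :]) + 1e-8 * np.eye(5)    # jitter for stability

def posterior_variance(x):
    k = kappa(X, x)                                     # k_{1:T}(x)
    return kappa(x, x) - k @ np.linalg.solve(K, k)

# the variance collapses to ~0 at an observed input (the GP has seen it)
# and returns to the prior variance ~1 far away from all observations
v_at_data = posterior_variance(X[0])
v_far_away = posterior_variance(10.0)
```

This is the usual intuition behind variance-based acquisition rules: the bound tracks how much of the prior uncertainty the T observations have already removed at each candidate point.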

Journal: CoRR 2011
Toshiro Kubota

Construction of a scale space with a convolution filter has been studied extensively in the past. It has been proven that the only convolution kernel satisfying the scale-space requirements is the Gaussian. In this paper, we consider a matrix of convolution filters introduced in [1] as a building kernel for a scale space, and show that we can construct a non-Gaussian scale space with a ...

2011
Ming-Syan Chen, Keng-Pei Lin

Training support vector machines (SVMs) with nonlinear kernel functions on large-scale data is usually very time-consuming. In contrast, there exist faster solvers for training the linear SVM. We propose a technique which sufficiently approximates the infinite-dimensional implicit feature mapping of the Gaussian kernel function by a low-dimensional feature mapping. By explicitly mapping data to the...
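The paper proposes its own low-dimensional mapping; for illustration, the same idea (an explicit finite-dimensional feature map whose inner products approximate the Gaussian kernel) can be sketched with the well-known random Fourier features of Rahimi and Recht, which are a different but related construction:

```python
import numpy as np

# Random Fourier features: a D-dimensional explicit map z(x) such that
# z(x) . z(y) ~= exp(-gamma * ||x - y||^2). Once data are mapped through z,
# a fast linear SVM solver can be used in place of a kernel SVM.

def rff_map(X, D=2000, gamma=0.5, seed=0):
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # spectral samples of the Gaussian kernel: w ~ N(0, 2*gamma*I)
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, D))
    b = rng.uniform(0, 2 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

rng = np.random.default_rng(1)
X = rng.normal(size=(10, 3))
Z = rff_map(X)
K_approx = Z @ Z.T
K_exact = np.exp(-0.5 * ((X[:, None] - X[None, :]) ** 2).sum(-1))
max_err = np.abs(K_approx - K_exact).max()  # small for D = 2000 features
```

The error shrinks at the usual Monte-Carlo rate O(1/sqrt(D)), so D trades approximation quality against the dimensionality the linear solver must handle.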

Journal: Neural Computation 2003
S. Sathiya Keerthi, Chih-Jen Lin

Support vector machines (SVMs) with the Gaussian (RBF) kernel have been popular for practical use. Model selection in this class of SVMs involves two hyperparameters: the penalty parameter C and the kernel width sigma. This letter analyzes the behavior of the SVM classifier when these hyperparameters take very small or very large values. Our results help in understanding the hyperparameter spac...
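The limiting behaviour the letter studies shows up directly in the Gram matrix. A minimal sketch, with illustrative data and widths (the asymptotic analysis itself is not reproduced here):

```python
import numpy as np

# Gram matrix K_ij = exp(-||xi - xj||^2 / (2 sigma^2)) for the Gaussian kernel.
# As sigma -> infinity, K tends to the all-ones matrix: every point looks like
# every other, and the classifier degenerates toward underfitting.
# As sigma -> 0, K tends to the identity: each point is similar only to
# itself, and the classifier can memorize the training set (overfitting).

def rbf_gram(X, sigma):
    sq = ((X[:, None] - X[None, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma ** 2))

X = np.random.default_rng(0).normal(size=(6, 2))
K_wide = rbf_gram(X, sigma=1e3)     # approaches the all-ones matrix
K_narrow = rbf_gram(X, sigma=1e-3)  # approaches the identity matrix
```

This is why C and sigma cannot be tuned independently in practice: the useful region of C shifts as sigma moves the Gram matrix between these two extremes.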

2009
Daoqiang Zhang, Wanquan Liu

In this paper, we propose a general formulation for kernel nonnegative matrix factorization with flexible kernels. Specifically, we propose the Gaussian nonnegative matrix factorization (GNMF) algorithm by using the Gaussian kernel in the framework. Different from a recently developed polynomial NMF (PNMF), GNMF finds basis vectors in the kernel-induced feature space and the computational cost ...

2013
Andrew McHutchon

Only the k(x_*, X) term depends on the test point x_*; therefore, to calculate the slope of the posterior mean we just need to differentiate the kernel. For the squared exponential covariance function, the derivative of the kernel between x_* and a training point x_i is,
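The derivative in question, for the 1-D squared exponential kernel k(x_*, x_i) = s² exp(−(x_* − x_i)² / (2 l²)), is the standard result dk/dx_* = −((x_* − x_i) / l²) k(x_*, x_i). A quick finite-difference check, with illustrative hyperparameter values:

```python
import numpy as np

# Verify the analytic derivative of the squared exponential kernel
# against a central finite difference. s2 (signal variance) and l
# (lengthscale) are arbitrary illustrative values.

s2, l = 1.5, 0.7

def k(xs, xi):
    return s2 * np.exp(-(xs - xi) ** 2 / (2 * l ** 2))

def dk(xs, xi):
    # analytic derivative w.r.t. the test point xs
    return -(xs - xi) / l ** 2 * k(xs, xi)

xs, xi, h = 0.3, -1.1, 1e-6
fd = (k(xs + h, xi) - k(xs - h, xi)) / (2 * h)  # central difference
```

Since the posterior mean is a fixed linear combination of k(x_*, x_i) terms, differentiating each kernel entry this way gives the slope of the mean directly.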

2008
Réda Dehak, Najim Dehak, Patrick Kenny, Pierre Dumouchel

We present a new approach to constructing kernels used in support vector machines for speaker verification. The idea is to learn new kernels by taking linear combinations of existing kernels, such as the Generalized Linear Discriminant Sequence (GLDS) kernel and Gaussian Mixture Model (GMM) supervector kernels. In this new linear kernel combination, the weights are speaker dependent rather than univer...
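What makes such combinations usable inside an SVM is that a nonnegative weighted sum of valid kernels is itself a valid (positive semidefinite) kernel. A toy sketch with two generic kernels and fixed scalar weights; the speaker-dependent weight learning from the paper is not shown:

```python
import numpy as np

# Combine a linear kernel and a Gaussian kernel with nonnegative weights
# and check that positive semidefiniteness is preserved.

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 4))

K_lin = X @ X.T                                   # linear kernel Gram matrix
sq = ((X[:, None] - X[None, :]) ** 2).sum(-1)
K_rbf = np.exp(-0.5 * sq)                         # Gaussian kernel Gram matrix

w = np.array([0.3, 0.7])                          # illustrative fixed weights
K = w[0] * K_lin + w[1] * K_rbf                   # combined kernel

# all eigenvalues of the combined Gram matrix are >= 0 (up to rounding),
# so K can be handed to any standard SVM solver
min_eig = np.linalg.eigvalsh(K).min()
```

The same closure property is what allows the weights to be tuned per speaker: any nonnegative weighting yields a legitimate kernel, so optimization over the weights stays inside the space of valid SVM kernels.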

[Chart: number of search results per year]