Search results for: reproducing kernel hilbert space method

Number of results: 2,079,705

2003
Matthias O. Franz, Bernhard Schölkopf

The Wiener series is one of the standard methods to systematically characterize the nonlinearity of a neural system. The classical estimation method of the expansion coefficients via cross-correlation suffers from severe problems that prevent its application to high-dimensional and strongly nonlinear systems. We propose a new estimation method based on regression in a reproducing kernel Hilbert...
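The regression idea described above can be sketched with kernel ridge regression and an inhomogeneous polynomial kernel, whose implicit feature space contains the monomials of a truncated Wiener/Volterra expansion. The toy system, kernel degree, and regularization value below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def kernel_ridge_fit(X, y, lam=1e-3, degree=3):
    """Solve (K + lam*I) alpha = y for the inhomogeneous polynomial
    kernel k(a, b) = (1 + a.b)^degree; its implicit feature space
    contains all monomials up to the given degree."""
    K = (1.0 + X @ X.T) ** degree
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def kernel_ridge_predict(X_train, alpha, X_new, degree=3):
    """Predict f(x) = sum_i alpha_i k(x_i, x)."""
    return ((1.0 + X_new @ X_train.T) ** degree) @ alpha

# toy nonlinear "system": a pure second-order interaction y = x1 * x2
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = X[:, 0] * X[:, 1]
alpha = kernel_ridge_fit(X, y)
X_test = rng.normal(size=(50, 2))
pred = kernel_ridge_predict(X, alpha, X_test)
```

Unlike cross-correlation, this estimate never expands the monomials explicitly, so the cost is governed by the number of samples rather than the dimension of the expansion.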

2003
Luc Hoegaerts, Johan A. K. Suykens, Joos Vandewalle, Bart De Moor

We focus on covariance criteria for finding a suitable subspace for regression in a reproducing kernel Hilbert space: kernel principal component analysis, kernel partial least squares and kernel canonical correlation analysis, and we demonstrate how this fits within a more general context of subspace regression. For the kernel partial least squares case some variants are considered and the meth...
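As a sketch of the first of these criteria, kernel principal component analysis extracts an m-dimensional subspace from the centered Gram matrix, and the resulting scores can then serve as regressors (subspace regression). The Gaussian kernel and toy data here are my own assumptions:

```python
import numpy as np

def kernel_pca_features(K, m):
    """Top-m kernel principal component scores: eigendecompose the
    centered Gram matrix and return the n x m matrix of scaled scores,
    usable as inputs to an ordinary regression."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    Kc = H @ K @ H
    w, V = np.linalg.eigh(Kc)                # ascending eigenvalues
    w, V = w[::-1][:m], V[:, ::-1][:, :m]    # keep the top m
    return V * np.sqrt(np.clip(w, 0.0, None))

# Gaussian Gram matrix on toy data
rng = np.random.default_rng(5)
X = rng.normal(size=(60, 2))
sq = np.sum(X ** 2, axis=1)
K = np.exp(-0.5 * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))
Z = kernel_pca_features(K, 3)
```

The columns of `Z` are mutually orthogonal and centered, which is what makes them convenient regressors.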

Journal: IEEE Trans. Information Theory, 2010
Gérard Biau, Frédéric Cérou, Arnaud Guyader

Let F be a separable Banach space, and let (X, Y ) be a random pair taking values in F×R. Motivated by a broad range of potential applications, we investigate rates of convergence of the k-nearest neighbor estimate rn(x) of the regression function r(x) = E[Y |X = x], based on n independent copies of the pair (X, Y ). Using compact embedding theory, we present explicit and general finite sample ...
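The estimate itself is elementary: average the responses of the k training covariates closest to x in the norm of F. A minimal sketch, using discretized curves as a stand-in for Banach-valued covariates (the toy family of curves and the choice k = 5 are illustrative assumptions):

```python
import numpy as np

def knn_regress(X_train, y_train, x, k=5, dist=None):
    """k-nearest-neighbor regression estimate r_n(x): the mean response
    over the k training covariates closest to x."""
    if dist is None:
        dist = lambda a, b: np.linalg.norm(a - b)
    d = np.array([dist(xi, x) for xi in X_train])
    idx = np.argsort(d)[:k]
    return float(y_train[idx].mean())

# toy functional covariates: discretized curves t -> a * sin(t),
# with the amplitude a (plus noise) as the response
rng = np.random.default_rng(1)
t = np.linspace(0.0, 2.0 * np.pi, 50)
a = rng.uniform(0.5, 2.0, size=300)
X = a[:, None] * np.sin(t)[None, :]
y = a + 0.01 * rng.normal(size=300)
est = knn_regress(X, y, 1.3 * np.sin(t), k=5)
```

The rates studied in the paper quantify how fast such estimates converge when the covariates live in an infinite-dimensional space, which is where the compact-embedding arguments enter.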

2014
Garvesh Raskutti, Martin J. Wainwright, Bin Yu, Sara van de Geer

Early stopping is a form of regularization based on choosing when to stop running an iterative algorithm. Focusing on non-parametric regression in a reproducing kernel Hilbert space, we analyze the early stopping strategy for a form of gradient-descent applied to the least-squares loss function. We propose a data-dependent stopping rule that does not involve hold-out or cross-validation data, a...
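The mechanism can be sketched as gradient descent on the kernel least-squares objective, where the iteration index plays the role of the regularization parameter: the RKHS norm of the fit grows monotonically along the path while the training loss decreases. This sketch does not reproduce the paper's data-dependent stopping rule (which is based on the kernel eigenvalues); the kernel, data, and step size are illustrative:

```python
import numpy as np

def gd_kernel_ls(K, y, step, T):
    """Gradient descent on L(a) = (1/2n)||K a - y||^2, started at 0;
    returns all iterates a_1, ..., a_T.  Truncating at a small T is
    the early-stopping regularizer."""
    n = len(y)
    a = np.zeros(n)
    path = []
    for _ in range(T):
        a = a - step * (K @ (K @ a - y)) / n
        path.append(a.copy())
    return path

# toy data and Gaussian Gram matrix
rng = np.random.default_rng(1)
x = np.sort(rng.uniform(-2.0, 2.0, size=40))
y = np.sin(2.0 * x) + 0.2 * rng.normal(size=40)
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2)
step = len(y) / np.linalg.eigvalsh(K).max() ** 2  # keeps every mode contractive
path = gd_kernel_ls(K, y, step, T=200)
```

Along `path`, the RKHS norm aᵀKa only grows, so stopping earlier always yields a smoother fit; the art is in choosing when.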

2012
Wei Zhang, Xin Zhao, Yi-Fan Zhu, Xin-Jian Zhang

The kernel function, which allows the formulation of a nonlinear variant of any algorithm that can be cast in terms of dot products, is what has made Support Vector Machines (SVMs) successful in many fields, e.g. classification and regression. The importance of the kernel has motivated many studies of its construction. It is well known that a reproducing kernel (R.K.) is a useful kernel function ...

2015
Jordan Bell

P(α) = α²F(x,x) + 2αF(x,y) + F(y,y), which is ≥ 0. In the case F(x,x) = 0, the fact that P ≥ 0 implies that F(x,y) = 0. In the case F(x,x) ≠ 0, P(α) is a quadratic polynomial and because P ≥ 0 it follows that the discriminant of P is ≤ 0: 4F(x,y)² − 4·F(x,x)·F(y,y) ≤ 0. That is, F(x,y)² ≤ F(x,x)F(y,y), and this implies that F ...
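The resulting Cauchy–Schwarz inequality for kernels, F(x,y)² ≤ F(x,x)·F(y,y), can be checked numerically for any positive-definite kernel; the Gaussian and polynomial kernels below are illustrative choices:

```python
import numpy as np

def rbf(a, b, gamma=0.5):
    """A positive-definite kernel F(a, b) = exp(-gamma * ||a - b||^2)."""
    return float(np.exp(-gamma * np.sum((a - b) ** 2)))

def poly(a, b):
    """A positive-definite kernel F(a, b) = (1 + a.b)^2."""
    return float((1.0 + a @ b) ** 2)

# Cauchy-Schwarz for kernels: F(x, y)^2 <= F(x, x) * F(y, y)
rng = np.random.default_rng(2)
holds = True
for _ in range(1000):
    x, y = rng.normal(size=3), rng.normal(size=3)
    holds = holds and rbf(x, y) ** 2 <= rbf(x, x) * rbf(y, y) + 1e-9
    holds = holds and poly(x, y) ** 2 <= poly(x, x) * poly(y, y) + 1e-9
```

For the Gaussian kernel the bound is immediate since F(x,x) = 1; the polynomial kernel exercises the inequality nontrivially.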

2009
Ming Yuan

We study a smoothness regularization method for functional linear regression and provide a unified treatment for both the prediction and estimation problems. By developing a tool on simultaneous diagonalization of two positive definite kernels, we obtain sharper results on the minimax rates of convergence and show that smoothness regularized estimators achieve the optimal rates of convergence fo...

Journal: Proceedings of the National Academy of Sciences of the United States of America, 2002
Grace Wahba

Reproducing kernel Hilbert space (RKHS) methods provide a unified context for solving a wide variety of statistical modelling and function estimation problems. We consider two such problems: We are given a training set [yi, ti, i = 1, …, n], where yi is the response for the ith subject, and ti is a vector of attributes for this subject. The value of yi is a label that indicates which ...

Journal: :Journal of Machine Learning Research 2013
Kenji Fukumizu, Le Song, Arthur Gretton

A kernel method for realizing Bayes’ rule is proposed, based on representations of probabilities in reproducing kernel Hilbert spaces. Probabilities are uniquely characterized by the mean of the canonical map to the RKHS. The prior and conditional probabilities are expressed in terms of RKHS functions of an empirical sample: no explicit parametric model is needed for these quantities. The poste...
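The mean map mentioned above has a simple empirical form: the embedding of a sample is the RKHS function μ̂(t) = (1/n) Σᵢ k(xᵢ, t). The Gaussian kernel and the toy distribution below are illustrative assumptions, not the paper's construction of the posterior:

```python
import numpy as np

def mean_embedding(sample, gamma=0.5):
    """Empirical RKHS mean map mu(t) = (1/n) sum_i k(x_i, t) for the
    Gaussian kernel k(a, b) = exp(-gamma * (a - b)^2); it estimates
    E[k(X, t)], which characterizes the distribution of X."""
    def mu(t):
        return float(np.mean(np.exp(-gamma * (sample - t) ** 2)))
    return mu

rng = np.random.default_rng(3)
sample = rng.normal(loc=1.0, scale=1.0, size=2000)
mu = mean_embedding(sample)
```

For X ~ N(1, 1) and gamma = 0.5, the population value at t = 1 is E[exp(-0.5 (X-1)²)] = 1/√2, so `mu(1.0)` should be close to 0.707.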

2008
Bharath K. Sriperumbudur, Arthur Gretton, Kenji Fukumizu, Gert R. G. Lanckriet, Bernhard Schölkopf

A Hilbert space embedding for probability measures has recently been proposed, with applications including dimensionality reduction, homogeneity testing and independence testing. This embedding represents any probability measure as a mean element in a reproducing kernel Hilbert space (RKHS). The embedding function has been proven to be injective when the reproducing kernel is universal. In this...
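Injectivity is what makes the embedding usable for homogeneity testing: distinct distributions map to distinct mean elements, and the RKHS distance between two embeddings (the maximum mean discrepancy, MMD) is estimable from samples. A sketch with a Gaussian kernel, where the data and bandwidth are my own toy choices:

```python
import numpy as np

def mmd2(X, Y, gamma=0.5):
    """Biased (V-statistic) estimate of ||mu_P - mu_Q||^2 in the RKHS
    of the Gaussian kernel: mean k(X,X') + mean k(Y,Y') - 2 mean k(X,Y)."""
    k = lambda A, B: np.exp(-gamma * (A[:, None] - B[None, :]) ** 2)
    return float(k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean())

rng = np.random.default_rng(4)
X = rng.normal(0.0, 1.0, size=500)
Y_same = rng.normal(0.0, 1.0, size=500)   # same distribution as X
Y_shift = rng.normal(2.0, 1.0, size=500)  # shifted distribution
```

With a universal (here: Gaussian) kernel, the population MMD is zero only when the two distributions coincide, so `mmd2(X, Y_same)` should be near zero while `mmd2(X, Y_shift)` is clearly positive.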
