Search results for: kernel sliced inverse regression ksir
Number of results: 448527. Filter results by year:
We consider a Gaussian process formulation of the multiple kernel learning problem. The goal is to select the convex combination of kernel matrices that best explains the data and by doing so improve the generalisation on unseen data. Sparsity in the kernel weights is obtained by adopting a hierarchical Bayesian approach: Gaussian process priors are imposed over the latent functions and general...
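The abstract above works with a convex combination of kernel Gram matrices inside a Gaussian process model. As a minimal sketch of that setup (with fixed, rather than learned, weights — the function name and noise parameter are assumptions for illustration):

```python
import numpy as np

def combined_kernel_gp(K_list, weights, y, noise=1e-2):
    """GP regression with a convex combination of precomputed
    kernel Gram matrices: K = sum_m w_m K_m with w on the simplex.

    In-sample predictions are K (K + noise * I)^{-1} y; in full
    multiple kernel learning the weights w would themselves be
    inferred (e.g. with sparsity-inducing priors), which this
    sketch does not attempt.
    """
    w = np.asarray(weights, dtype=float)
    assert np.all(w >= 0) and abs(w.sum() - 1.0) < 1e-8, "weights must lie on the simplex"
    K = sum(wm * Km for wm, Km in zip(w, K_list))
    n = K.shape[0]
    # Standard GP regression solve with homoscedastic noise
    alpha = np.linalg.solve(K + noise * np.eye(n), y)
    return K @ alpha, alpha
```

Learning the weights (the sparsity discussed in the abstract) would replace the fixed `weights` argument with an optimization or posterior-inference step over the simplex.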
Nonlinear Dimension Reduction Through Cumulative Slicing Estimation for Nonlinear Manifolds Learning
Sliced inverse regression (SIR) was developed to find the effective dimension reduction directions for exploring the intrinsic structure of high-dimensional data. The isometric SIR (ISOSIR), a nonlinear extension of SIR, employed K-means on the pre-calculated isometric distance matrix of the data set so that the classical SIR algorithm can be applied. It has been shown that ISOSIR can recover t...
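The classical SIR algorithm referred to above can be sketched in a few lines: standardize the predictors, slice the sorted response, average the standardized predictors within each slice, and take the leading eigenvectors of the between-slice covariance of those means. A minimal sketch (function name and defaults are assumptions; this is plain SIR, not the isometric extension):

```python
import numpy as np

def sir_directions(X, y, n_slices=10, n_dirs=2):
    """Estimate effective dimension reduction directions by
    sliced inverse regression (Li, 1991)."""
    n, p = X.shape
    # Standardize (whiten) the predictors
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
    Z = (X - mu) @ inv_sqrt
    # Slice the data by the sorted response
    order = np.argsort(y)
    slices = np.array_split(order, n_slices)
    # Between-slice covariance of the within-slice means of Z
    M = np.zeros((p, p))
    for idx in slices:
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    # Leading eigenvectors of M, mapped back to the original scale
    w, v = np.linalg.eigh(M)
    return inv_sqrt @ v[:, ::-1][:, :n_dirs]
```

ISOSIR, as described in the abstract, would replace the response-based slicing with K-means clusters computed on an isometric (geodesic) distance matrix before applying the same eigendecomposition.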
This paper is concerned with dimension reduction in regressions with multivariate responses on high-dimensional predictors. A unified method that can be regarded as either an inverse regression approach or a forward regression method is proposed to recover the central dimension reduction subspace. By using Stein’s Lemma, the forward regression estimates the first derivative of the conditional c...
We study the statistical consistency of conjugate gradient applied to a bounded regression learning problem seen as an inverse problem defined in a reproducing kernel Hilbert space. This approach leads to an estimator that stands out of the well-known classical approaches, as it is not defined as the solution of a global cost minimization procedure over a fixed model nor is it a linear estimato...
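The estimator discussed above is not a penalized minimizer: conjugate gradient is applied directly to the kernel system, and the number of iterations plays the role of the regularization parameter (early stopping). A minimal sketch of CG on a kernel Gram matrix, with assumed function and parameter names:

```python
import numpy as np

def cg_kernel_regression(K, y, n_iter=10):
    """Conjugate gradient on the positive semidefinite kernel
    system K alpha = y; stopping after n_iter iterations acts
    as regularization (early stopping)."""
    alpha = np.zeros_like(y, dtype=float)
    r = y - K @ alpha   # residual
    d = r.copy()        # search direction
    for _ in range(n_iter):
        Kd = K @ d
        denom = d @ Kd
        if denom <= 1e-12:  # direction in (numerical) null space
            break
        step = (r @ r) / denom
        alpha = alpha + step * d
        r_new = r - step * Kd
        beta = (r_new @ r_new) / (r @ r)
        d = r_new + beta * d
        r = r_new
    return alpha
```

Note the nonlinearity the abstract mentions: both the step sizes and the search directions depend on `y`, so `alpha` is not a linear function of the observations.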
The quality of generative models (such as generative adversarial networks and variational auto-encoders) depends heavily on the choice of a good probability distance. However, some popular metrics, such as the Wasserstein and Sliced distances, the Jensen–Shannon divergence, and the Kullback–Leibler divergence, lack convenient properties such as (geodesic) convexity and fast evaluation. To address these shortcomings, we introduce c...
Pattern classification may be viewed as an ill-posed inverse problem to which the method of regularization may be applied. In doing so, a proper theoretical framework is provided for the application of radial basis function (RBF) networks to pattern classification, with strong links to the classical kernel regression estimator (KRE)-based classifiers that estimate the underlying posterior class densi...
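The KRE-based classifier referred to above estimates posterior class probabilities as kernel-weighted averages of class-indicator variables (the Nadaraya–Watson form). A minimal sketch with a Gaussian kernel — function name and bandwidth are assumptions, and a practical RBF network would learn centers and widths rather than keep every training point:

```python
import numpy as np

def kre_classify(X_train, y_train, X_test, bandwidth=1.0):
    """Kernel regression estimator as a classifier: the posterior
    for class c is the kernel-weighted fraction of training
    points with label c."""
    # Pairwise squared distances, shape (n_test, n_train)
    d2 = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * bandwidth ** 2))
    classes = np.unique(y_train)
    # Weighted indicator average per class, then normalize to probabilities
    post = np.stack([(W * (y_train == c)).sum(axis=1) for c in classes], axis=1)
    post = post / post.sum(axis=1, keepdims=True)
    return classes[post.argmax(axis=1)], post
```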
[Chart: number of search results per year; click the chart to filter results by publication year]