Search results for: kernel sliced inverse regression ksir

Number of results: 448527

2010
Cedric Archambeau Francis Bach

We consider a Gaussian process formulation of the multiple kernel learning problem. The goal is to select the convex combination of kernel matrices that best explains the data and by doing so improve the generalisation on unseen data. Sparsity in the kernel weights is obtained by adopting a hierarchical Bayesian approach: Gaussian process priors are imposed over the latent functions and general...

2013
Han-Ming Wu

Sliced inverse regression (SIR) was developed to find the effective dimension reduction directions for exploring the intrinsic structure of high-dimensional data. The isometric SIR (ISOSIR), a nonlinear extension of SIR, employed K-means on the pre-calculated isometric distance matrix of the data set so that the classical SIR algorithm can be applied. It has been shown that ISOSIR can recover t...
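For context on the baseline algorithm these results extend, here is a minimal NumPy sketch of classical SIR (slice the response, average the standardized predictors within each slice, eigen-decompose the covariance of the slice means). The function name and defaults are illustrative and not taken from any of the listed papers.

```python
import numpy as np

def sir_directions(X, y, n_slices=10, n_directions=2):
    """Classical sliced inverse regression: estimate effective
    dimension-reduction (e.d.r.) directions from slice means."""
    n, p = X.shape
    # Standardize the predictors: Z = (X - mean) @ Sigma^{-1/2}
    Xc = X - X.mean(axis=0)
    Sigma = np.cov(Xc, rowvar=False)
    w, V = np.linalg.eigh(Sigma)
    Sigma_inv_sqrt = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
    Z = Xc @ Sigma_inv_sqrt
    # Partition observations into slices along sorted y
    order = np.argsort(y)
    slices = np.array_split(order, n_slices)
    # Weighted covariance of the slice means of Z
    M = np.zeros((p, p))
    for idx in slices:
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    # Leading eigenvectors of M, mapped back to the original scale
    vals, vecs = np.linalg.eigh(M)
    top = vecs[:, ::-1][:, :n_directions]
    return Sigma_inv_sqrt @ top  # columns span the estimated e.d.r. space
```

Nonlinear variants such as the ISOSIR described above replace the Euclidean geometry in this pipeline (here, the plain slicing and covariance steps) with one derived from an isometric distance matrix.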

2010
Li-Ping Zhu Li-Xing Zhu Song-Qiao Wen

This paper is concerned with dimension reduction in regressions with multivariate responses on high-dimensional predictors. A unified method that can be regarded as either an inverse regression approach or a forward regression method is proposed to recover the central dimension reduction subspace. By using Stein’s Lemma, the forward regression estimates the first derivative of the conditional c...

2009
Gilles Blanchard Nicole Krämer

We study the statistical consistency of conjugate gradient applied to a bounded regression learning problem seen as an inverse problem defined in a reproducing kernel Hilbert space. This approach leads to an estimator that stands out of the well-known classical approaches, as it is not defined as the solution of a global cost minimization procedure over a fixed model nor is it a linear estimato...
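To illustrate the idea of regularization by early-stopped iteration, here is a minimal sketch of plain conjugate gradient applied to a kernel system K·α = y. This is a simplified textbook CG loop, not the specific estimator analyzed in the paper; the function name and stopping rule are illustrative.

```python
import numpy as np

def kernel_cg_regression(K, y, n_iter=10):
    """Conjugate gradient on the (symmetric PSD) kernel system
    K @ alpha = y. Stopping after few iterations acts as a
    regularizer, rather than an explicit penalty term."""
    alpha = np.zeros(K.shape[0])
    r = y - K @ alpha          # initial residual
    d = r.copy()               # initial search direction
    for _ in range(n_iter):
        Kd = K @ d
        step = (r @ r) / (d @ Kd)
        alpha += step * d
        r_new = r - step * Kd
        # Update the conjugate search direction
        d = r_new + ((r_new @ r_new) / (r @ r)) * d
        r = r_new
    return alpha
```

The number of iterations plays the role that the ridge parameter plays in kernel ridge regression: fewer iterations means a smoother, more heavily regularized fit.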

Journal: Neural Networks, 2021

The quality of generative models (such as generative adversarial networks and variational auto-encoders) depends heavily on the choice of a good probability distance. However, some popular metrics, like the Wasserstein or Sliced distances, the Jensen–Shannon divergence, and the Kullback–Leibler divergence, lack convenient properties such as (geodesic) convexity, fast evaluation, and so on. To address these shortcomings, we introduce c...

1994
Paul Yee Simon Haykin

Pattern classification may be viewed as an ill-posed, inverse problem to which the method of regularization may be applied. In doing so, a proper theoretical framework is provided for the application of radial basis function (RBF) networks to pattern classification, with strong links to the classical kernel regression estimator (KRE)-based classifiers that estimate the underlying posterior class densi...

Journal: Proceedings of the ISCIE International Symposium on Stochastic Systems Theory and its Applications, 2003

Journal: Electronic Journal of Statistics, 2011

[Chart: number of search results per year]