Learning Rates of lq Coefficient Regularization Learning with Gaussian Kernel
Authors
Abstract
Regularization is a well-recognized, powerful strategy for improving the performance of a learning machine, and l_q regularization schemes with 0 < q < ∞ are among the most widely used. It is known that different values of q lead to estimators with different properties: for example, l_2 regularization yields a smooth estimator, while l_1 regularization yields a sparse one. How the generalization capability of l_q regularization learning varies with q is therefore worth investigating. In this letter, we study this problem in the framework of statistical learning theory. Our main results show that implementing l_q coefficient regularization schemes in the sample-dependent hypothesis space associated with a Gaussian kernel can attain the same almost optimal learning rates for all 0 < q < ∞; that is, the upper and lower bounds of the learning rates for l_q regularization learning are asymptotically identical for all 0 < q < ∞. Our finding tentatively reveals that in some modeling contexts, the choice of q might not have a strong impact on the generalization capability. From this perspective, q can be specified arbitrarily, or chosen by other, nongeneralization criteria such as smoothness, computational complexity, or sparsity.
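To make the scheme concrete, here is a minimal numerical sketch of l_q coefficient regularization over the sample-dependent hypothesis space spanned by Gaussian kernel sections at the data points. This is not the authors' algorithm: the function names, the kernel width, the smoothing constant, and the use of a derivative-free optimizer are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def gaussian_gram(X, sigma):
    """Gram matrix of the Gaussian kernel K(x, x') = exp(-||x - x'||^2 / sigma^2)."""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq / sigma ** 2)

def lq_coefficient_fit(X, y, q, lam=1e-2, sigma=1.0):
    """Minimize (1/m) * ||K a - y||^2 + lam * sum_j |a_j|^q over a in R^m.
    The hypothesis f(x) = sum_j a_j K(x, x_j) lives in the sample-dependent
    space spanned by kernel sections at the m training points."""
    m = len(y)
    K = gaussian_gram(X, sigma)

    def objective(a):
        resid = K @ a - y
        # A tiny eps keeps |a|^q numerically well behaved near 0 when q < 1.
        return resid @ resid / m + lam * np.sum((np.abs(a) + 1e-12) ** q)

    # Derivative-free search, since the penalty is nonsmooth (q = 1)
    # or nonconvex (q < 1); adequate for small m in a sketch like this.
    return minimize(objective, np.zeros(m), method="Powell").x
```

Only the penalty exponent q changes between schemes, so the letter's result says the achievable learning rate is essentially the same whether this is run with q = 0.5, 1, or 2.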
Similar Articles
Learnability of Gaussians with Flexible Variances
Gaussian kernels with flexible variances provide a rich family of Mercer kernels for learning algorithms. We show that the union of the unit balls of reproducing kernel Hilbert spaces generated by Gaussian kernels with flexible variances is a uniform Glivenko-Cantelli (uGC) class. This result confirms a conjecture concerning learnability of Gaussian kernels and verifies the uniform convergence ...
Binning in Gaussian Kernel Regularization
Gaussian kernel regularization is widely used in the machine learning literature and has proved successful in many empirical experiments. The periodic version of Gaussian kernel regularization has been shown to be minimax rate optimal for estimating functions in any finite-order Sobolev space. However, for a data set with n points, the computational complexity of Gaussian kernel regularization...
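The snippet is cut off, but the stated motivation (the cost of the full n-point kernel solve, cubic in n for a direct method) points to the binning idea. The following rough illustration, under that assumption and not the paper's actual construction, compresses the data into bin averages before solving a kernel ridge system; names and defaults are hypothetical.

```python
import numpy as np

def binned_kernel_ridge(x, y, n_bins=50, sigma=0.1, lam=1e-3):
    """Reduce n points to n_bins (center, mean-response) pairs, then solve
    kernel ridge on the bins: an O(n_bins^3) solve instead of O(n^3)."""
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    idx = np.clip(np.digitize(x, edges) - 1, 0, n_bins - 1)
    # Mean response per bin; empty bins crudely default to 0 in this sketch.
    ybar = np.array([y[idx == b].mean() if np.any(idx == b) else 0.0
                     for b in range(n_bins)])
    K = np.exp(-(centers[:, None] - centers[None, :]) ** 2 / (2 * sigma ** 2))
    alpha = np.linalg.solve(K + lam * np.eye(n_bins), ybar)
    return centers, alpha  # predict via sum_b alpha_b * K_sigma(x, center_b)
```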
Multiple Kernel Multi-Task Learning
Recently, there has been a lot of interest in the multi-task learning (MTL) problem with the constraint that tasks share a common sparsity profile. Such a problem can be addressed through a regularization framework in which the regularizer induces a joint-sparsity pattern between task decision functions. We follow this principled framework and focus on l_p−l_q (with 0 ≤ p ≤ 1 and 1 ≤ q ≤ 2) ...
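As a worked illustration of the l_p−l_q regularizer the snippet refers to (the paper's exact normalization and composition may differ), the standard mixed norm on a task-coefficient matrix can be computed as below; the row/column convention is an assumption for this sketch.

```python
import numpy as np

def lp_lq_penalty(W, p=1.0, q=2.0):
    """Joint-sparsity mixed norm on a coefficient matrix W
    (rows = features, columns = tasks): an l_q norm across tasks per row,
    composed with an l_p sum across rows. Taking p <= 1 drives entire rows
    to zero, so all tasks end up selecting the same features."""
    row_norms = np.sum(np.abs(W) ** q, axis=1) ** (1.0 / q)
    return np.sum(row_norms ** p)
```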
Indefinite Kernel Network with Dependent Sampling
We study the asymptotic properties of an indefinite kernel network with coefficient regularization and dependent sampling. The framework under investigation differs from classical kernel learning: positive definiteness is not required of the kernel function, and the samples are allowed to be weakly dependent, with the dependence measured by a strong mixing condition. By a new kernel decomposition ...
Least Square Regression Learning with Data Dependent Hypothesis and Coefficient Regularization
We study least squares regression with a data-dependent hypothesis space and coefficient regularization algorithms based on a general kernel. An explicit expression for the solution of this kernel scheme is derived. We then provide a sample error bound with a decay of O(1/m) and estimate the approximation error in terms of a K-functional. Index Terms: Least Squares Regression, Data Dependent ...
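The snippet mentions an explicit expression for the solution without reproducing it. For the l_2 coefficient penalty the closed form is standard and can be sketched as follows; the objective normalization (1/m and lam placement) is an assumption of this sketch, not necessarily the paper's.

```python
import numpy as np

def l2_coefficient_solution(K, y, lam):
    """Closed-form minimizer of (1/m) * ||K a - y||^2 + lam * ||a||^2 for a
    general (not necessarily positive-definite) kernel matrix K:
        a = (K^T K + lam * m * I)^{-1} K^T y
    obtained by setting the gradient (2/m) K^T (K a - y) + 2 lam a to zero."""
    m = len(y)
    return np.linalg.solve(K.T @ K + lam * m * np.eye(m), K.T @ y)
```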
Journal: Neural Computation
Volume: 26, Issue: 10
Pages: -
Publication year: 2014