Search results for: gradient descent

Number of results: 137892

2016
Trung Le, Vu Nguyen, Tu Dinh Nguyen, Dinh Q. Phung

One of the most challenging problems in kernel online learning is to bound the model size. Budgeted kernel online learning addresses this issue by bounding the model size to a predefined budget. However, determining an appropriate value for such a predefined budget is arduous. In this paper, we propose the Nonparametric Budgeted Stochastic Gradient Descent that allows the model size to automatica...
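
As a rough illustration of the budgeted setting described above (not the authors' nonparametric budget rule), the sketch below runs kernel SGD on a hinge loss and evicts the oldest support vector whenever a fixed budget is exceeded; the kernel, learning rate, and eviction strategy are illustrative assumptions.

```python
import numpy as np

def rbf(x, z, gamma=1.0):
    # Gaussian RBF kernel between two feature vectors.
    return np.exp(-gamma * np.sum((x - z) ** 2))

def budgeted_kernel_sgd(stream, budget=50, lr=0.1, gamma=1.0):
    """Online kernel SGD with a hard budget on the number of support vectors.

    `stream` yields (x, y) pairs with y in {-1, +1}. This is a generic
    budgeted baseline with oldest-vector eviction, not the paper's method.
    """
    support, alphas = [], []
    for x, y in stream:
        # Current decision value f(x) = sum_i alpha_i * K(s_i, x).
        f = sum(a * rbf(s, x, gamma) for s, a in zip(support, alphas))
        # Hinge-loss stochastic gradient step: add a support vector only
        # when the margin is violated.
        if y * f < 1.0:
            support.append(x)
            alphas.append(lr * y)
            # Enforce the budget by discarding the oldest support vector.
            if len(support) > budget:
                support.pop(0)
                alphas.pop(0)
    return support, alphas
```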

Journal: IEEE Transactions on Signal Processing, 2019

Journal: ESAIM: Mathematical Modelling and Numerical Analysis, 2009

Journal: Computer Methods in Applied Mechanics and Engineering, 2021

Bayesian computation plays an important role in modern machine learning and statistics for reasoning about uncertainty. A key computational challenge in inference is to develop efficient techniques to approximate, or draw samples from, posterior distributions. Stein variational gradient descent (SVGD) has been shown to be a powerful approximate inference algorithm for this issue. However, the vanilla SVGD requires calcula...
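
For reference, a minimal sketch of one vanilla SVGD update with an RBF kernel is given below; the particle array, kernel bandwidth, and step size are illustrative, and `grad_log_p` stands for the score function of the target posterior (for a standard Gaussian target it would simply be `lambda x: -x`).

```python
import numpy as np

def svgd_step(particles, grad_log_p, stepsize=0.1, bandwidth=1.0):
    """One vanilla SVGD update on an (n, d) array of particles.

    grad_log_p maps an (n, d) array to the (n, d) array of scores
    (gradients of the log target density at each particle).
    """
    n = particles.shape[0]
    diffs = particles[:, None, :] - particles[None, :, :]    # diffs[i, j] = x_i - x_j
    sq_dists = np.sum(diffs ** 2, axis=-1)
    k = np.exp(-sq_dists / (2.0 * bandwidth ** 2))            # RBF kernel matrix k[i, j]
    # Gradient of k(x_j, x_i) with respect to its first argument x_j.
    grad_k = diffs * k[..., None] / bandwidth ** 2            # indexed [i, j, :]
    scores = grad_log_p(particles)
    # phi(x_i) = (1/n) * sum_j [ k(x_j, x_i) * score(x_j) + grad_{x_j} k(x_j, x_i) ]
    phi = (k @ scores + grad_k.sum(axis=1)) / n
    return particles + stepsize * phi
```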

2015
Aymeric Dieuleveut, Francis Bach

We consider the random-design least-squares regression problem within the reproducing kernel Hilbert space (RKHS) framework. Given a stream of independent and identically distributed input/output data, we aim to learn a regression function within an RKHS H, even if the optimal predictor (i.e., the conditional expectation) is not in H. In a stochastic approximation framework where the estimator ...
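
For orientation, the unregularized stochastic-gradient recursion in an RKHS studied in this line of work typically has the form below (the symbols $g_n$, $\gamma_n$, and $K(x_n,\cdot)$ are our notation for the iterate, step size, and reproducing kernel section, not necessarily the paper's):

g_n = g_{n-1} - \gamma_n \bigl( g_{n-1}(x_n) - y_n \bigr) K(x_n, \cdot), \qquad \bar g_n = \frac{1}{n+1} \sum_{k=0}^{n} g_k,

where the averaged iterate $\bar g_n$ is the estimator whose prediction risk is usually analyzed.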

Journal: Foundations of Computational Mathematics, 2008
Yiming Ying, Massimiliano Pontil

This paper considers the least-squares online gradient descent algorithm in a reproducing kernel Hilbert space (RKHS) without explicit regularization. We present a novel capacity-independent approach to derive error bounds and convergence results for this algorithm. We show that, although the algorithm does not involve an explicit RKHS regularization term, choosing the step sizes appropriately c...
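
A minimal sketch of such an unregularized online least-squares update in an RKHS, with a polynomially decaying step size, is given below; the kernel function and the schedule eta_t = eta0 * t**(-theta) are illustrative assumptions rather than the paper's prescribed choices.

```python
def online_kernel_lsq(stream, kernel, eta0=0.5, theta=0.5):
    """Unregularized online gradient descent for least squares in an RKHS.

    The iterate is stored as a kernel expansion f_t = sum_i a_i * K(x_i, .).
    `stream` yields (x, y) pairs and `kernel` is a positive-definite kernel
    function taking two inputs.
    """
    centers, coeffs = [], []
    for t, (x, y) in enumerate(stream, start=1):
        # Evaluate the current hypothesis at the new input x.
        fx = sum(a * kernel(c, x) for c, a in zip(centers, coeffs))
        eta = eta0 * t ** (-theta)        # decaying step size
        # A gradient step on the pointwise squared error (f(x) - y)^2 / 2
        # appends one new kernel term centered at x.
        centers.append(x)
        coeffs.append(-eta * (fx - y))
    return centers, coeffs
```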

2016

Lemmas 1, 2, 3 and 4, and Corollary 1, were originally derived by Toulis and Airoldi (2014). These intermediate results (and Theorem 1) provide the necessary foundation to derive Lemma 5 (given only in this supplement) and Theorem 2 on the asymptotic optimality of θ̄n, which is the key result of the main paper. We fully state these intermediate results here for convenience, but we point the reader to t...
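
For readers without the main paper at hand, θ̄n in results of this kind usually denotes the Polyak-Ruppert average of the stochastic-gradient iterates (our reading of the notation, stated here only for orientation):

\bar\theta_n = \frac{1}{n} \sum_{i=1}^{n} \theta_i,

i.e., the averaged iterate whose asymptotic optimality Theorem 2 concerns.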

Journal: CoRR, 2017
Cong Ma, Kaizheng Wang, Yuejie Chi, Yuxin Chen

Recent years have seen a flurry of activity in designing provably efficient nonconvex procedures for solving statistical estimation problems. Due to the highly nonconvex nature of the empirical loss, state-of-the-art procedures often require proper regularization (e.g., trimming, regularized cost, projection) in order to guarantee fast convergence. For vanilla procedures such as gradient descen...
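
As one concrete instance of such a vanilla procedure (chosen here for illustration; the specific loss, step size, and initialization are our assumptions), plain gradient descent on the phase-retrieval least-squares objective looks as follows:

```python
import numpy as np

def vanilla_gd_phase_retrieval(A, y, x0, stepsize=0.1, iters=500):
    """Vanilla (unregularized, unprojected) gradient descent for

        f(x) = (1/(4m)) * sum_i ((a_i^T x)^2 - y_i)^2,

    a standard nonconvex estimation objective. x0 would normally come
    from a spectral initialization; here it is simply taken as given.
    """
    m = A.shape[0]
    x = x0.copy()
    for _ in range(iters):
        Ax = A @ x
        residual = Ax ** 2 - y              # (a_i^T x)^2 - y_i
        grad = A.T @ (residual * Ax) / m    # gradient of f at x
        x = x - stepsize * grad
    return x
```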

1999
Leemon C. Baird, Tom Mitchell, Scott Fahlman, Leslie Kaelbling


Journal: CoRR, 2016
Jason D. Lee, Max Simchowitz, Michael I. Jordan, Benjamin Recht

We show that gradient descent converges to a local minimizer, almost surely with random initialization. This is proved by applying the Stable Manifold Theorem from dynamical systems theory.
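
A toy numerical companion to this statement (an illustration of the phenomenon, not the paper's argument): f(x, y) = (x^2 - 1)^2 + y^2 has a strict saddle at the origin and minimizers at (+/-1, 0), and plain gradient descent from a random initialization is observed to land at one of the minimizers.

```python
import numpy as np

def grad_f(p):
    # Gradient of f(x, y) = (x^2 - 1)^2 + y^2.
    x, y = p
    return np.array([4.0 * x * (x ** 2 - 1.0), 2.0 * y])

rng = np.random.default_rng(0)
p = rng.normal(size=2)            # random initialization
for _ in range(2000):
    p = p - 0.01 * grad_f(p)      # plain gradient descent
print(p)                          # ends near (1, 0) or (-1, 0), not the saddle at (0, 0)
```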

Chart: number of search results per year