Search results for: stochastic gradient descent

Number of results: 258150

Recently, we have demonstrated a new and efficient method to simultaneously reconstruct two unknown interfering wavefronts. A three-dimensional interference pattern was analyzed, and then Zernike polynomials and the stochastic parallel gradient descent algorithm were used to expand and calculate the wavefronts. In this paper, as one of the applications of this method, the reflected wavefronts from t...
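As an illustration of the stochastic parallel gradient descent (SPGD) idea this abstract refers to, here is a minimal Python sketch. The merit function `interference_error`, the number of Zernike coefficients, and all hyperparameters are assumptions made for illustration, not details taken from the paper.

```python
import numpy as np

def spgd(interference_error, n_coeffs=15, gain=0.5, perturb=1e-3, iters=2000, seed=0):
    """Minimize a scalar merit function over Zernike coefficients with SPGD."""
    rng = np.random.default_rng(seed)
    c = np.zeros(n_coeffs)  # Zernike coefficients of the estimated wavefront
    for _ in range(iters):
        # parallel random (bipolar) perturbation of all coefficients at once
        delta = perturb * rng.choice([-1.0, 1.0], size=n_coeffs)
        # two-sided measurement of the change in the merit function
        dJ = interference_error(c + delta) - interference_error(c - delta)
        c = c - gain * dJ * delta  # descend along the estimated gradient direction
    return c
```

The appeal of SPGD in optics is that only scalar merit-function evaluations are needed, so no analytic gradient of the interference model is required.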

Journal: CoRR 2017
Matteo Pirotta, Marcello Restelli

In this paper we propose a novel approach to automatically determine the batch size in stochastic gradient descent methods. The choice of the batch size induces a trade-off between the accuracy of the gradient estimate and the cost, in terms of samples, of each update. We propose to determine the batch size by optimizing the ratio between a lower bound to a linear or quadratic Taylor approximatio...
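The specific ratio used in this paper is cut off above, so the sketch below is only a generic illustration of the accuracy-versus-cost trade-off it describes: pick the smallest batch whose relative gradient variance falls below a tolerance. The interface `grad_fn(theta, x)` returning a per-example gradient, the candidate sizes, and the criterion itself are assumptions, not the authors' method.

```python
import numpy as np

def pick_batch_size(grad_fn, data, theta, sizes=(8, 16, 32, 64, 128), tol=0.1, seed=0):
    """Return the smallest batch size whose relative gradient variance is below tol."""
    rng = np.random.default_rng(seed)
    for b in sizes:
        idx = rng.choice(len(data), size=b, replace=False)
        grads = np.stack([grad_fn(theta, data[i]) for i in idx])
        mean = grads.mean(axis=0)
        var = grads.var(axis=0).sum()  # total per-coordinate variance of one sample
        # variance of the mini-batch mean is roughly var / b; compare it to the signal
        if var / (b * (np.linalg.norm(mean) ** 2 + 1e-12)) < tol:
            return b
    return sizes[-1]
```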

Journal: IEEE Transactions on Signal Processing 2023

This work studies constrained stochastic optimization problems in which the objective and constraint functions are convex and expressed as compositions of functions. The problem arises in the context of fair classification, regression, and the design of queuing systems. Of particular interest is the large-scale setting, where an oracle provides gradients of the constituent functions, and the goal is to solve the problem with a minimal number of calls to the oracle. Owi...

Journal: IEEE Journal on Selected Areas in Information Theory 2021

We consider a decentralized learning setting in which data is distributed over the nodes of a graph. The goal is to learn a global model on the distributed data without involving any central entity that needs to be trusted. While gossip-based stochastic gradient descent (SGD) can be used to achieve this objective, it incurs high communication and computation costs, since it has to wait for all the local models to converge. To speed up converge...
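To make the gossip-based SGD setup concrete, here is a toy sketch of the baseline scheme this abstract starts from: each node takes a local SGD step on its own data and then averages its model with its neighbours. The loss gradient `grad`, the per-node datasets, the neighbour lists, and the uniform mixing weights are assumptions for illustration, not the protocol proposed in the paper.

```python
import numpy as np

def gossip_sgd(grad, local_data, neighbors, dim, lr=0.01, rounds=100, seed=0):
    """Decentralized SGD with uniform gossip averaging over a fixed graph."""
    rng = np.random.default_rng(seed)
    n = len(local_data)
    models = [np.zeros(dim) for _ in range(n)]
    for _ in range(rounds):
        # local stochastic gradient step at every node
        for i in range(n):
            sample = local_data[i][rng.integers(len(local_data[i]))]
            models[i] = models[i] - lr * grad(models[i], sample)
        # gossip step: average each model with its neighbours
        new_models = []
        for i in range(n):
            group = [models[i]] + [models[j] for j in neighbors[i]]
            new_models.append(np.mean(group, axis=0))
        models = new_models
    return models
```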

2015
Elad Hazan, Kfir Y. Levy, Shai Shalev-Shwartz

Stochastic convex optimization is a basic and well-studied primitive in machine learning. It is well known that convex and Lipschitz functions can be minimized efficiently using Stochastic Gradient Descent (SGD). The Normalized Gradient Descent (NGD) algorithm is an adaptation of Gradient Descent which updates according to the direction of the gradients, rather than the gradients themselves. ...
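The NGD update described above replaces the gradient by its direction, g / ||g||. A minimal Python sketch of the deterministic version follows; the oracle `grad`, the fixed step size, and the stopping rule are placeholder assumptions.

```python
import numpy as np

def normalized_gd(grad, x0, lr=0.1, iters=1000):
    """Gradient descent that steps along the gradient direction only."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = grad(x)
        norm = np.linalg.norm(g)
        if norm == 0.0:           # already at a stationary point
            break
        x = x - lr * g / norm     # the magnitude of g no longer affects the step
    return x
```

The stochastic variant discussed in the paper replaces `grad(x)` by a noisy mini-batch gradient; the normalization makes the step size insensitive to the scale of the gradients.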

Journal: Journal of Scientific Computing 2021

Stochastic gradient descent (SGD) for strongly convex functions converges at the rate $\mathcal{O}(1/k)$. However, achieving good results in practice requires tuning the parameters (for example, the learning rate) of the algorithm. In this paper we propose a generalization of the Polyak step size, used in subgradient methods, to stochastic gradient descent. We prove non-asymptotic convergence with a constant which can be be...
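For context, the classical stochastic Polyak step sets the step length from the sampled loss itself, roughly step = (f_i(x) - f_i*) / ||∇f_i(x)||². The sketch below implements that standard rule, not necessarily the exact variant analysed in the paper; the oracles `loss_i` and `grad_i`, the choice f_i* = 0, and the step cap are assumptions.

```python
import numpy as np

def sgd_polyak(loss_i, grad_i, x0, data, iters=1000, f_star=0.0, max_step=1.0, seed=0):
    """SGD with a Polyak-type step size computed from the sampled loss."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        i = rng.integers(len(data))
        g = grad_i(x, data[i])
        denom = np.dot(g, g) + 1e-12                        # ||grad||^2, guarded
        step = min((loss_i(x, data[i]) - f_star) / denom, max_step)
        x = x - step * g
    return x
```

The attraction of this rule is that no learning-rate schedule has to be tuned; the step shrinks automatically as the sampled loss approaches its minimum value.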

Journal: IEEE Control Systems Letters 2022

We systematically develop a learning-based treatment of stochastic optimal control (SOC), relying on direct optimization of parametric policies. We propose a derivation of adjoint sensitivity results for differential equations through the application of variational calculus. Then, given an objective function for a predetermined task specifying the desiderata for the controller, we optimize their parameters via iterative gradi...
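The adjoint derivation itself is not reproduced in the snippet, so the toy sketch below only shows the outer loop it feeds: iterative gradient descent on the parameters of a policy evaluated by Monte Carlo rollouts of a stochastic system. The finite-difference gradient is a crude stand-in for the adjoint sensitivities, and every interface (`policy`, `dynamics`, `cost`) is an assumption for illustration.

```python
import numpy as np

def rollout_cost(theta, policy, dynamics, cost, x0, horizon, rng, n_paths=32):
    """Monte Carlo estimate of the expected trajectory cost under a parametric policy."""
    total = 0.0
    for _ in range(n_paths):
        x = np.array(x0, dtype=float)
        for _ in range(horizon):
            u = policy(theta, x)
            total += cost(x, u)
            x = dynamics(x, u, rng)   # one stochastic transition of the system
    return total / n_paths

def optimize_policy(theta0, policy, dynamics, cost, x0, horizon,
                    lr=0.05, eps=1e-3, iters=200, seed=0):
    """Iterative gradient descent on policy parameters (finite-difference gradient)."""
    rng = np.random.default_rng(seed)
    theta = np.array(theta0, dtype=float)
    for _ in range(iters):
        base = rollout_cost(theta, policy, dynamics, cost, x0, horizon, rng)
        grad = np.zeros_like(theta)
        for k in range(len(theta)):   # adjoint sensitivities would replace this loop
            pert = theta.copy()
            pert[k] += eps
            grad[k] = (rollout_cost(pert, policy, dynamics, cost, x0, horizon, rng) - base) / eps
        theta = theta - lr * grad
    return theta
```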

2016
Kelvin Kai Wing Ng

1) There exists a study on employing a mini-batch approach on SVRG, one of the variance-reduction (VR) methods. It shows that the approach cannot scale well, in that there is no significant difference between using 16 threads and more [2]. This study examines the cause of the poor scalability of this existing mini-batch approach on the VR method. 2) The performance of the mini-batch approach in the distributed setting is improved by ...
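For reference, a mini-batch SVRG iteration keeps a snapshot point with its full gradient and corrects each mini-batch gradient with that snapshot as a control variate. The sketch below shows this standard scheme, not the scalability fix studied above; the oracle `grad_batch` (average gradient over a set of indices) and the hyperparameters are assumptions.

```python
import numpy as np

def minibatch_svrg(grad_batch, w0, n, batch=16, lr=0.05, epochs=10, inner=None, seed=0):
    """Mini-batch SVRG: variance-reduced SGD with periodic full-gradient snapshots."""
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, dtype=float)
    inner = inner if inner is not None else max(n // batch, 1)
    for _ in range(epochs):
        w_tilde = w.copy()
        mu = grad_batch(w_tilde, np.arange(n))        # full gradient at the snapshot
        for _ in range(inner):
            idx = rng.choice(n, size=batch, replace=False)
            # variance-reduced gradient estimate on the mini-batch
            v = grad_batch(w, idx) - grad_batch(w_tilde, idx) + mu
            w = w - lr * v
    return w
```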

[Chart: number of search results per year]