Search results for: stochastic gradient descent

Number of results: 258,150

Journal: :IEEE Transactions on Pattern Analysis and Machine Intelligence 2021

Journal: :IEEE Access 2023

This paper introduces CompSkipDSGD, a new algorithm for distributed stochastic gradient descent that aims to improve communication efficiency by compressing and selectively skipping communication. In addition to compression, CompSkipDSGD allows both the workers and the server to skip communication in any iteration of the training process and reserve it for future iterations without significantly decreasing testing accuracy. Our exper...
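
To make the compress-and-skip idea concrete, here is a minimal sketch of one worker iteration, assuming a top-k gradient compressor with error feedback and a caller-supplied skip decision; the function names and the skip rule are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def compress_topk(grad, k):
    """Keep only the k largest-magnitude entries of the gradient (illustrative compressor)."""
    idx = np.argsort(np.abs(grad))[-k:]
    sparse = np.zeros_like(grad)
    sparse[idx] = grad[idx]
    return sparse

def worker_step(params, grad, residual, k, skip, lr=0.1):
    """One worker iteration: accumulate the residual, then either skip communication
    (reserving the gradient for a later round) or send a compressed update."""
    residual = residual + grad
    if skip:
        return params, residual, None        # nothing sent this round
    msg = compress_topk(residual, k)         # compressed message to the server
    residual = residual - msg                # keep what was not transmitted (error feedback)
    return params - lr * msg, residual, msg
```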

2013
Simon Wiesler, Jinyu Li, Jian Xue

Context-dependent deep neural network HMMs have been shown to achieve recognition accuracy superior to Gaussian mixture models in a number of recent works. Typically, neural networks are optimized with stochastic gradient descent. On large datasets, stochastic gradient descent makes rapid progress at the beginning of the optimization. But since it does not make use of second-order information, ...
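
For reference, a minimal sketch of plain minibatch SGD on a least-squares objective, which makes fast early progress but uses no curvature (second-order) information; the toy objective is only illustrative.

```python
import numpy as np

def sgd(X, y, lr=0.01, epochs=5, batch=32, seed=0):
    """Plain minibatch SGD for min_w ||Xw - y||^2 / (2n): first-order updates only."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        order = rng.permutation(len(X))
        for i in range(0, len(X), batch):
            idx = order[i:i + batch]
            grad = X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)  # minibatch gradient
            w -= lr * grad
    return w
```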

1998
Daoli ZHU

The descent auxiliary problem method allows one to find the solution of minimization problems by solving a sequence of auxiliary problems which incorporate a linesearch strategy. We derive the basic algorithm and study its convergence properties within the framework of infinite-dimensional pseudoconvex minimization. We also introduce a partial descent type auxiliary problem method which partially ...
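
A finite-dimensional sketch of one iteration, assuming a quadratic auxiliary kernel (which reduces the auxiliary subproblem to a gradient step) and an Armijo backtracking line search; the paper works in an infinite-dimensional pseudoconvex setting, so this is illustrative only.

```python
import numpy as np

def descent_auxiliary_step(f, grad_f, x, lam=1.0, beta=0.5, sigma=1e-4):
    """One iteration: solve the auxiliary problem with kernel K(u) = ||u - x||^2 / 2
    (its solution is a gradient step), then line-search along the descent direction."""
    y = x - lam * grad_f(x)                  # auxiliary subproblem solution
    d = y - x                                # descent direction
    t, fx, slope = 1.0, f(x), grad_f(x) @ d
    while f(x + t * d) > fx + sigma * t * slope:   # Armijo backtracking
        t *= beta
    return x + t * d
```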

Journal: :Annals of Mathematical Sciences and Applications 2019

Journal: :IEEE Transactions on Knowledge and Data Engineering 2022

This paper investigates the stochastic optimization problem, focusing on developing scalable parallel algorithms for deep learning tasks. Our solution involves a reformulation of the objective function in neural network models, along with a novel computing strategy, coined weighted aggregating gradient descent (...
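
A minimal sketch of one weighted-aggregation step across parallel workers, assuming loss-proportional weights; the paper's exact weighting and aggregation scheme may differ.

```python
import numpy as np

def weighted_aggregate_step(w, worker_grads, worker_losses, lr=0.05):
    """Combine per-worker gradients with weights derived from their local losses
    (illustrative weighting), then take one descent step on the shared parameters."""
    losses = np.asarray(worker_losses, dtype=float)
    weights = losses / (losses.sum() + 1e-12)      # workers with larger loss get more say
    agg = sum(a * g for a, g in zip(weights, worker_grads))
    return w - lr * agg
```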

2016
Sashank J. Reddi, Ahmed Hefny, Suvrit Sra, Barnabás Póczos, Alexander J. Smola

We study nonconvex finite-sum problems and analyze stochastic variance reduced gradient (SVRG) methods for them. SVRG and related methods have recently surged into prominence for convex optimization given their edge over stochastic gradient descent (SGD); but their theoretical analysis almost exclusively assumes convexity. In contrast, we prove non-asymptotic rates of convergence (to stationary...
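
A short sketch of the standard SVRG update for a finite sum f(w) = (1/n) Σ_i f_i(w): an outer loop computes a full snapshot gradient, and inner steps use variance-reduced stochastic gradients. Step sizes and epoch lengths here are placeholders.

```python
import numpy as np

def svrg(grad_i, n, w0, lr=0.01, outer=10, inner=100, seed=0):
    """grad_i(w, i) returns the gradient of the i-th component f_i at w."""
    rng = np.random.default_rng(seed)
    w = w0.copy()
    for _ in range(outer):
        w_snap = w.copy()
        full_grad = np.mean([grad_i(w_snap, i) for i in range(n)], axis=0)  # snapshot gradient
        for _ in range(inner):
            i = rng.integers(n)
            v = grad_i(w, i) - grad_i(w_snap, i) + full_grad   # variance-reduced gradient
            w -= lr * v
    return w
```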

Journal: :CoRR 2014
Jun He, Yue Zhang

In this paper, we present GASG21 (Grassmannian Adaptive Stochastic Gradient for L2,1 norm minimization), an adaptive stochastic gradient algorithm to robustly recover the low-rank subspace from a large matrix. In the presence of corruption by column outliers, we reformulate the classical matrix L2,1 norm minimization problem as its stochastic programming counterpart. For each observed data vector,...
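
An illustrative stochastic step for an L2,1-type subspace objective, processing one observed column at a time; the plain subgradient step and QR retraction below are simplifications assumed for the sketch, not GASG21's adaptive Grassmannian update.

```python
import numpy as np

def l21_subspace_step(U, v, lr=0.1):
    """One stochastic step for min_U sum_j ||v_j - U U^T v_j||_2 with U orthonormal:
    fit the observed column v in span(U), step along the residual's L2-norm subgradient,
    and re-orthonormalize via QR as a simple retraction back to the Grassmannian."""
    w = U.T @ v                       # least-squares weights of v in the current subspace
    r = v - U @ w                     # residual; column outliers give a large ||r||_2
    nrm = np.linalg.norm(r)
    if nrm > 1e-12:
        U = U + lr * np.outer(r / nrm, w)   # descent direction for the un-squared residual norm
        U, _ = np.linalg.qr(U)              # retract back to an orthonormal basis
    return U
```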
