Search results for: gradient descent

Number of results: 137892

Journal: Numerical Algebra, Control and Optimization, 2023

We propose AEGD, a new algorithm for the optimization of non-convex objective functions, based on a dynamically updated 'energy' variable. The method is shown to be unconditionally energy stable, irrespective of the base step size. We prove energy-dependent convergence rates of AEGD for both non-convex and convex objectives, which for a suitably small step size recover the desired rates of batch gradient descent. We also provide a bound on stationary ...
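
The excerpt only names the energy variable; a minimal sketch of an energy-adaptive gradient step in the AEGD style is given below. The recursion for the per-coordinate energy r follows the published AEGD formulation, but everything beyond the abstract (parameter names, the offset c, the quadratic test) should be read as an illustrative assumption rather than the paper's exact algorithm.

```python
import numpy as np

def aegd(f, grad_f, x0, eta=0.1, c=1.0, n_steps=100):
    """Sketch of an energy-adaptive gradient descent (AEGD-style) loop.

    The per-coordinate 'energy' r starts at sqrt(f(x0) + c) and can only
    decrease, which is the mechanism behind the unconditional energy
    stability claimed in the abstract, for any base step size eta.
    """
    x = np.asarray(x0, dtype=float)
    r = np.full_like(x, np.sqrt(f(x) + c))          # energy variable
    for _ in range(n_steps):
        v = grad_f(x) / (2.0 * np.sqrt(f(x) + c))   # gradient of sqrt(f + c)
        r = r / (1.0 + 2.0 * eta * v**2)            # energy update (monotone)
        x = x - 2.0 * eta * r * v                   # parameter update
    return x

# Illustrative use: minimize f(x) = ||x||^2.
x_min = aegd(lambda x: float(np.sum(x**2)), lambda x: 2.0 * x, x0=[3.0, -2.0])
```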

Journal: IEEE Transactions on Circuits and Systems for Video Technology, 2022

Decentralized learning has gained great popularity as a way to improve efficiency and preserve data privacy. Each computing node makes an equal contribution to collaboratively learn a Deep Learning model. The elimination of centralized Parameter Servers (PS) can effectively address many issues such as privacy, performance bottlenecks, and single points of failure. However, how to achieve Byzantine Fault Tolerance in decen...
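
The abstract is cut off before it describes the paper's own fault-tolerance mechanism; purely as an illustration of the general idea, the sketch below hardens a decentralized parameter-averaging step with a coordinate-wise trimmed mean, a common Byzantine-robust aggregation rule. The function name, trimming fraction, and placeholder gradient are hypothetical.

```python
import numpy as np

def trimmed_mean_aggregate(neighbor_params, trim_frac=0.2):
    """Coordinate-wise trimmed mean over neighbors' model parameters.

    Discarding the largest and smallest values in each coordinate limits the
    influence of a bounded number of Byzantine (arbitrarily faulty) nodes.
    This is a generic robust-aggregation sketch, not the paper's method.
    """
    stacked = np.stack(neighbor_params)            # shape: (n_nodes, n_params)
    k = int(trim_frac * stacked.shape[0])
    sorted_vals = np.sort(stacked, axis=0)         # sort each coordinate
    kept = sorted_vals[k:stacked.shape[0] - k]     # drop k extremes per side
    return kept.mean(axis=0)

# One decentralized step: aggregate robustly, then take a local gradient step.
params = [np.random.randn(10) for _ in range(8)]   # models received from 8 nodes
local_grad = np.zeros(10)                          # placeholder local gradient
updated = trimmed_mean_aggregate(params) - 0.01 * local_grad
```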

In this paper, a new hybrid conjugate gradient algorithm is proposed for solving unconstrained optimization problems. The new method generates sufficient descent directions independently of any line search. Moreover, the global convergence of the proposed method is proved under the Wolfe line search. Numerical experiments are also presented to show the efficiency of the proposed algorithm, espe...
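
The specific hybrid formula is not visible in the truncated abstract; the sketch below shows the general shape of a nonlinear conjugate gradient loop, with a commonly used hybrid beta (the Hestenes-Stiefel value clipped into [0, beta_DY]) standing in for the paper's choice. The line-search argument is assumed to satisfy the Wolfe conditions.

```python
import numpy as np

def hybrid_cg(grad_f, x0, line_search, n_steps=50, tol=1e-8):
    """Sketch of a nonlinear conjugate gradient method with a hybrid beta.

    The beta used here is illustrative only; the paper's own hybrid formula
    and its sufficient-descent property are what the abstract refers to.
    """
    x = np.asarray(x0, dtype=float)
    g = grad_f(x)
    d = -g                                    # first direction: steepest descent
    for _ in range(n_steps):
        if np.linalg.norm(g) < tol:
            break
        alpha = line_search(x, d)             # e.g. a Wolfe line search
        x_new = x + alpha * d
        g_new = grad_f(x_new)
        y = g_new - g
        beta_hs = (g_new @ y) / (d @ y)       # Hestenes-Stiefel
        beta_dy = (g_new @ g_new) / (d @ y)   # Dai-Yuan
        beta = max(0.0, min(beta_hs, beta_dy))
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Quick test on f(x) = ||x||^2, with a fixed step in place of a Wolfe search.
x_min = hybrid_cg(lambda x: 2.0 * x, x0=[1.0, 2.0], line_search=lambda x, d: 0.25)
```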

Journal: IEEE Transactions on Information Theory, 2020

Journal: IEEE Transactions on Information Theory, 2022

In this work, we present a family of vector quantization schemes, vqSGD (Vector-Quantized Stochastic Gradient Descent), that provide an asymptotic reduction in the communication cost with convergence guarantees in first-order distributed optimization. In the process, we derive the following fundamental information...
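
As context for what a vector-quantized gradient message can look like, the sketch below transmits a single signed coordinate plus the gradient's l1-norm, which yields an unbiased estimator with very low communication cost. This cross-polytope-style scheme is only in the spirit of the abstract; the actual vqSGD codebooks and guarantees are defined in the paper itself.

```python
import numpy as np

def quantize_gradient(g, rng=np.random.default_rng()):
    """Encode a gradient as (index, sign, l1-norm): O(log d) bits + one scalar.

    Sampling coordinate i with probability |g_i| / ||g||_1 and scaling by the
    l1-norm makes the decoded estimate unbiased: E[decode(...)] = g.
    """
    l1 = float(np.abs(g).sum())
    if l1 == 0.0:
        return 0, 1.0, 0.0
    idx = rng.choice(len(g), p=np.abs(g) / l1)
    return int(idx), float(np.sign(g[idx])), l1

def decode(idx, sign, l1, dim):
    """Server-side reconstruction of the quantized gradient."""
    est = np.zeros(dim)
    est[idx] = sign * l1
    return est

g = np.array([0.5, -1.5, 0.2])
g_hat = decode(*quantize_gradient(g), dim=len(g))   # use g_hat in the SGD step
```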

Journal: SIAM Journal on Optimization, 2021

The conditions of relative smoothness and strong convexity were recently introduced for the analysis of Bregman gradient methods in convex optimization. We introduce a generalized left-pr...
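
For reference, the Bregman gradient step that the relative-smoothness condition is designed to analyze replaces the Euclidean proximal term of ordinary gradient descent with the Bregman divergence of a reference function h; this is the standard setup, not the paper's generalization, which is cut off in the excerpt.

```latex
% Standard Bregman gradient step, assuming f is L-smooth relative to h
% (i.e. L*h - f is convex); D_h is the Bregman divergence of h.
\[
x_{k+1} = \arg\min_{x}\Big\{ \langle \nabla f(x_k),\, x - x_k \rangle
          + L\, D_h(x, x_k) \Big\},
\qquad
D_h(x, y) = h(x) - h(y) - \langle \nabla h(y),\, x - y \rangle .
\]
```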

Journal: SIAM Journal on Optimization, 2023

For strongly convex objectives that are smooth, the classical theory of gradient descent ensures linear convergence relative to the number of gradient evaluations. An analogous nonsmooth theory is challenging. Even when the objective is smooth at every iterate, the corresponding local models are unstable, and the cutting planes invoked by traditional remedies are difficult to bound, leading to guarantees that are only sublinear in the cumulative number of evaluations. We instead propose a...
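
For comparison with the nonsmooth setting discussed above, the classical smooth guarantee referred to in the first sentence can be stated as follows (step size 1/L, L-smooth and mu-strongly convex f with minimizer x*):

```latex
% Classical linear (geometric) convergence of gradient descent.
\[
x_{k+1} = x_k - \tfrac{1}{L}\nabla f(x_k),
\qquad
f(x_k) - f(x^*) \le \Big(1 - \tfrac{\mu}{L}\Big)^{k}\big(f(x_0) - f(x^*)\big).
\]
```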

2016
Antonin Chambolle

2 (First order) Descent methods, rates
2.1 Gradient descent
2.2 What can we achieve?
2.3 Second order methods: Newton's method
2.4 Multistep first order methods
2.4.1 Heavy ball method ...
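
Since these lecture notes cover plain gradient descent and the heavy ball method side by side, a minimal sketch of the two updates may be useful; the step size and momentum values below are illustrative, not taken from the notes.

```python
import numpy as np

def gradient_descent_step(x, grad_f, eta=0.1):
    """Plain first-order step: x_{k+1} = x_k - eta * grad f(x_k)."""
    return x - eta * grad_f(x)

def heavy_ball_step(x, x_prev, grad_f, eta=0.1, beta=0.5):
    """Polyak's heavy ball: add a momentum term beta * (x_k - x_{k-1})."""
    return x - eta * grad_f(x) + beta * (x - x_prev)

# Illustrative run on a mildly ill-conditioned quadratic f(x) = 0.5 x^T A x.
A = np.diag([1.0, 10.0])
grad = lambda x: A @ x
x_prev = x = np.array([1.0, 1.0])
for _ in range(100):
    x, x_prev = heavy_ball_step(x, x_prev, grad), x
```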

[Chart: number of search results per year]