A Compressed Gradient Tracking Method for Decentralized Optimization With Linear Convergence
Authors
Abstract
Communication compression techniques are of growing interest for solving the decentralized optimization problem under limited communication, where the global objective is to minimize the average of local cost functions over a multiagent network using only local computation and peer-to-peer communication. In this article, we propose a novel compressed gradient tracking algorithm (C-GT) that combines the gradient tracking technique with communication compression. In particular, C-GT is compatible with a general class of compression operators that unifies both unbiased and biased compressors. We show that C-GT inherits the advantages of gradient tracking-based algorithms and achieves a linear convergence rate for strongly convex and smooth objective functions. Numerical examples complement the theoretical findings and demonstrate the efficiency and flexibility of the proposed algorithm.
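As a rough illustration of the two ingredients the abstract names (gradient tracking plus a compressor applied to communicated quantities), the sketch below runs a difference-compressed gradient tracking loop on toy quadratic costs. The top-k compressor, the step sizes `eta` and `gamma`, and the update ordering are illustrative assumptions for this sketch, not the paper's exact C-GT recursion.

```python
import numpy as np

def topk(v, k):
    """Keep only the k largest-magnitude entries (a biased compressor)."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def compressed_gradient_tracking(c, W, eta=0.05, gamma=0.3, k=3, T=4000):
    """Difference-compressed gradient tracking on the toy costs
    f_i(x) = 0.5 * ||x - c[i]||^2, whose global minimizer is c.mean(axis=0)."""
    n, d = c.shape
    x = np.zeros((n, d))
    g = x - c                      # local gradients at the initial iterates
    y = g.copy()                   # gradient trackers, y_i^0 = grad f_i(x_i^0)
    hx = np.zeros((n, d))          # reference copies of x shared with neighbors
    hy = np.zeros((n, d))          # reference copies of y shared with neighbors
    for _ in range(T):
        # transmit only compressed differences; sender and receivers apply
        # the same update, so the reference copies stay in sync
        hx += np.stack([topk(x[i] - hx[i], k) for i in range(n)])
        hy += np.stack([topk(y[i] - hy[i], k) for i in range(n)])
        # consensus on reference copies + descent along the tracked gradient
        x_new = x + gamma * (W @ hx - hx) - eta * y
        g_new = x_new - c
        y = y + gamma * (W @ hy - hy) + g_new - g
        x, g = x_new, g_new
    return x
```

Note that each agent transmits only k of the d entries per round, yet the tracking property (the average of the y_i equals the average of the local gradients) is preserved because the doubly stochastic mixing term sums to zero.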
Similar resources
Global Convergence of a Memory Gradient Method for Unconstrained Optimization
The memory gradient method is used for unconstrained optimization, especially large-scale problems. The first idea of the memory gradient method was proposed by Miele and Cantrell (1969) and Cragg and Levy (1969). In this paper, we present a new memory gradient method which generates a descent search direction for the objective function at every iteration. We show that our method converges globally...
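The snippet above describes a direction of the form d_k = -g_k + beta_k d_{k-1} that is guaranteed to be a descent direction. The sketch below is one simple way to enforce that guarantee (capping beta_k so that g_k^T d_k <= -(1 - rho) ||g_k||^2, then taking an Armijo backtracking step); the cap rule and parameters are this sketch's assumptions, not the rule from the cited paper.

```python
import numpy as np

def memory_gradient(f, grad, x0, rho=0.5, c1=1e-4, T=2000):
    """Minimize a smooth f via the memory gradient direction
    d_k = -g_k + beta_k * d_{k-1}.  Capping beta_k at
    rho * ||g_k|| / ||d_{k-1}|| gives g_k^T d_k <= -(1 - rho) * ||g_k||^2,
    so every d_k is a descent direction and Armijo backtracking applies."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g.copy()
    for _ in range(T):
        if np.linalg.norm(g) < 1e-10:
            break
        # Armijo backtracking: shrink t until sufficient decrease holds
        t, fx, gd = 1.0, f(x), g @ d
        while f(x + t * d) > fx + c1 * t * gd:
            t *= 0.5
        x = x + t * d
        g_new = grad(x)
        # memory term: reuse the previous direction with a capped weight
        dn = np.linalg.norm(d)
        beta = rho * np.linalg.norm(g_new) / dn if dn > 0 else 0.0
        d = -g_new + beta * d
        g = g_new
    return x
```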
A conditional gradient method with linear rate of convergence for solving convex linear systems
We consider the problem of finding a point in the intersection of an affine set with a compact convex set, called a convex linear system (CLS). The conditional gradient method is known to exhibit a sublinear rate of convergence. Exploiting the special structure of (CLS), we prove that the conditional gradient method applied to the equivalent minimization formulation of (CLS) converges to a sol...
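For reference, the vanilla conditional gradient (Frank-Wolfe) method the snippet starts from can be sketched in a few lines: minimize the distance to the affine set {x : Ax = b} over a compact convex set, here assumed (for illustration only) to be the probability simplex, whose linear minimization oracle is just a coordinate argmin. This shows only the generic sublinear method; the cited paper's point is that CLS structure improves this to a linear rate.

```python
import numpy as np

def frank_wolfe_simplex(A, b, T=5000):
    """Conditional gradient for min 0.5*||Ax - b||^2 over the probability
    simplex.  The linear minimization oracle over the simplex returns the
    vertex (coordinate) with the smallest gradient entry."""
    n = A.shape[1]
    x = np.full(n, 1.0 / n)                  # start at the simplex center
    for t in range(T):
        grad = A.T @ (A @ x - b)
        s = np.zeros(n)
        s[np.argmin(grad)] = 1.0             # best vertex of the simplex
        x += (2.0 / (t + 2)) * (s - x)       # classic diminishing step size
    return x
```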
A decentralized proximal-gradient method with network independent step-sizes and separated convergence rates
This paper considers the problem of decentralized optimization with a composite objective containing smooth and non-smooth terms. To solve the problem, a proximal-gradient scheme is studied. Specifically, the smooth and nonsmooth terms are dealt with by gradient update and proximal update, respectively. The studied algorithm is closely related to a previous decentralized optimization algorithm,...
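The split the snippet describes (gradient update for the smooth term, proximal update for the nonsmooth term) is easiest to see in its centralized form. The sketch below is plain proximal gradient (ISTA) on a lasso objective; the decentralized machinery of the cited paper (network mixing, separated step sizes) is deliberately omitted.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def proximal_gradient(A, b, lam, T=500):
    """Proximal-gradient (ISTA) for 0.5*||Ax - b||^2 + lam*||x||_1:
    a gradient step on the smooth quadratic term, then a proximal
    (soft-thresholding) step on the nonsmooth l1 term."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    step = 1.0 / L
    x = np.zeros(A.shape[1])
    for _ in range(T):
        x = soft_threshold(x - step * A.T @ (A @ x - b), step * lam)
    return x
```

With an orthogonal design (A = I) the method recovers the closed-form lasso solution soft_threshold(b, lam) in a single iteration, which makes a convenient sanity check.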
On the Convergence of Decentralized Gradient Descent
Consider the consensus problem of minimizing f(x) = ∑_{i=1}^{n} f_i(x), where each f_i is only known to one individual agent i belonging to a connected network of n agents. All the agents shall collaboratively solve this problem and obtain the solution via data exchanges only between neighboring agents. Such algorithms avoid the need of a fusion center, offer better network load balance, and improve da...
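Decentralized gradient descent (DGD) itself is a two-line iteration: mix with neighbors through a doubly stochastic matrix W, then take a local gradient step. The toy quadratic costs below are an assumption of this sketch; with a constant step size alpha, the iterates only reach an O(alpha) neighborhood of the minimizer, which is the kind of inexactness the cited analysis quantifies.

```python
import numpy as np

def decentralized_gradient_descent(c, W, alpha=0.01, T=2000):
    """DGD for min (1/n) * sum_i f_i(x) with the toy costs
    f_i(x) = 0.5 * ||x - c[i]||^2 (minimizer: c.mean(axis=0)).
    Row i of x is agent i's estimate; W @ x mixes neighboring
    estimates, and (x - c) stacks the local gradients."""
    x = np.zeros_like(c)
    for _ in range(T):
        x = W @ x - alpha * (x - c)          # mix, then local gradient step
    return x
```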
Decentralized gradient algorithm for solution of a linear equation
The paper develops a technique for solving a linear equation Ax = b with a square and nonsingular matrix A, using a decentralized gradient algorithm. In the language of control theory, there are n agents, each storing at time t an n-vector, call it xi(t), and a graphical structure associating with each agent a vertex of a fixed, undirected and connected but otherwise arbitrary graph G with vert...
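A simplified consensus-then-project scheme in the spirit of the setup just described can be sketched as follows: agent i knows only its row a_i^T x = b_i, averages its neighbors' estimates through a mixing matrix W, then projects the average back onto its own hyperplane. This is an illustrative variant, not the cited paper's exact update or convergence conditions.

```python
import numpy as np

def decentralized_linear_solve(A, b, W, T=2000):
    """Solve the square nonsingular system A x = b where agent i knows
    only row i.  Row i of X is agent i's estimate: it is mixed with the
    neighbors' estimates and then orthogonally projected onto the
    hyperplane {x : a_i^T x = b_i}."""
    n = A.shape[0]
    X = np.zeros((n, n))
    for _ in range(T):
        Z = W @ X                            # consensus / mixing step
        for i in range(n):
            a = A[i]
            r = b[i] - a @ Z[i]              # residual of agent i's equation
            X[i] = Z[i] + (r / (a @ a)) * a  # project onto a_i^T x = b_i
    return X
```

Since A is nonsingular, the hyperplanes intersect in the single point A^{-1} b, so all agents' estimates agree on that solution in the limit.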
Journal
Journal title: IEEE Transactions on Automatic Control
Year: 2022
ISSN: 0018-9286, 1558-2523, 2334-3303
DOI: https://doi.org/10.1109/tac.2022.3180695