Communication-Efficient Variance-Reduced Decentralized Stochastic Optimization Over Time-Varying Directed Graphs

Authors

Abstract

In this article, we consider the problem of decentralized optimization over time-varying directed networks. The network nodes can access only their local objectives, and aim to collaboratively minimize a global function by exchanging messages with their neighbors. Leveraging sparsification, gradient tracking, and variance reduction, we propose a novel communication-efficient scheme that is suitable for resource-constrained time-varying directed networks. We prove that, in the case of smooth and strongly convex objective functions, the proposed scheme achieves an accelerated linear convergence rate. To our knowledge, this is the first framework for such networks whose convergence rate applies to settings requiring sparsified communication. Experimental results on both synthetic and real datasets verify the theoretical results and demonstrate the efficacy of the proposed scheme.
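
The abstract combines three ingredients: gradient tracking (each node maintains an estimate of the network-average gradient), sparsification (nodes exchange compressed messages), and variance reduction. The Python sketch below illustrates how the first two compose in a single synchronous iteration; the mixing matrix W, the top-k compressor, and the step size are illustrative assumptions, not the paper's algorithm, which additionally handles time-varying directed graphs (e.g., via column-stochastic weights) and uses variance-reduced local gradients.

```python
import numpy as np

def top_k(v, k):
    """Keep only the k largest-magnitude entries of v (a common sparsifier)."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def tracked_sparsified_step(x, y, grads_new, grads_old, W, k, step=0.01):
    """One hypothetical iteration of gradient tracking with sparsified
    messages. x (n-by-d) holds local iterates, y (n-by-d) the gradient
    trackers, W an n-by-n mixing matrix for the current graph."""
    n = x.shape[0]
    x_msg = np.stack([top_k(x[i], k) for i in range(n)])  # compressed iterates
    y_msg = np.stack([top_k(y[i], k) for i in range(n)])  # compressed trackers
    x_next = W @ x_msg - step * y
    # Tracking recursion: each tracker absorbs the change in its node's
    # local gradient, so y approximates the average gradient over time.
    y_next = W @ y_msg + (grads_new - grads_old)
    return x_next, y_next
```

With exact (uncompressed) messages and a doubly stochastic W, summing the tracking recursion over nodes shows the trackers preserve the network-average gradient; sparsification perturbs this invariant, which is precisely the kind of error a convergence analysis in this setting has to control.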

Similar Articles

Communication-Efficient Algorithms for Decentralized and Stochastic Optimization

We present a new class of decentralized first-order methods for nonsmooth and stochastic optimization problems defined over multiagent networks. Considering that communication is a major bottleneck in decentralized optimization, our main goal in this paper is to develop algorithmic frameworks which can significantly reduce the number of inter-node communications. We first propose a decentralize...

Variance-Reduced and Projection-Free Stochastic Optimization

The Frank-Wolfe optimization algorithm has recently regained popularity for machine learning applications due to its projection-free property and its ability to handle structured constraints. However, in the stochastic learning setting, it is still relatively understudied compared to the gradient descent counterpart. In this work, leveraging a recent variance reduction technique, we propose two...
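
For intuition about how the two ingredients named here fit together, below is a minimal sketch of one variance-reduced stochastic Frank-Wolfe step: an SVRG-style gradient estimate followed by a call to a linear minimization oracle, which is what makes the method projection-free. The names grad_fn, lmo, and the step-size rule are our assumptions, not necessarily either of the methods the paper proposes.

```python
def svrf_step(x, snapshot, full_grad, grad_fn, lmo, batch, t):
    """One hypothetical variance-reduced Frank-Wolfe iteration.

    snapshot  -- point at which the full gradient was last computed
    full_grad -- exact gradient at the snapshot
    grad_fn   -- stochastic gradient oracle grad_fn(point, batch)
    lmo       -- linear minimization oracle: argmin over feasible s of <g, s>
    """
    # SVRG-style estimate: correct the stochastic gradient at x using
    # the same minibatch evaluated at the snapshot.
    g = grad_fn(x, batch) - grad_fn(snapshot, batch) + full_grad
    s = lmo(g)                 # feasible extreme point; no projection needed
    gamma = 2.0 / (t + 2)      # classic Frank-Wolfe step size
    return x + gamma * (s - x)
```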

Distributed Time-Varying Stochastic Optimization and Utility-Based Communication

We devise a distributed asynchronous stochastic ε-gradient-based algorithm to enable a network of computing and communicating nodes to solve a constrained discrete-time time-varying stochastic convex optimization problem. Each node updates its own decision variable only once every discrete time step. Under some assumptions (among which, strong convexity, Lipschitz continuity of the gradient, per...

Fast Stochastic Variance Reduced ADMM for Stochastic Composition Optimization

We consider the stochastic composition optimization problem proposed in [17], which has applications ranging from estimation to statistical and machine learning. We propose the first ADMM-based algorithm, named com-SVR-ADMM, and show that com-SVR-ADMM converges linearly for strongly convex and Lipschitz smooth objectives, and has a convergence rate of O(log S / S), which improves upon the O(S^{-4/9}) r...
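
For context, "stochastic composition optimization" refers to nested objectives of the form below (notation ours, not the paper's); the difficulty is that the inner expectation sits inside a nonlinear map, so sampling a single index at each level does not yield an unbiased gradient estimate, which is what the variance-reduction machinery addresses:

\[
\min_x \; F(x) = f\bigl(g(x)\bigr), \qquad g(x) = \mathbb{E}_w\bigl[g_w(x)\bigr], \qquad
\nabla F(x) = \bigl(\nabla g(x)\bigr)^{\top} \nabla f\bigl(g(x)\bigr).
\]

Because f is nonlinear, \(\mathbb{E}_w[\nabla f(g_w(x))] \neq \nabla f(\mathbb{E}_w[g_w(x)])\) in general, so a one-sample plug-in gradient is biased.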

Stochastic Variance-Reduced ADMM

The alternating direction method of multipliers (ADMM) is a powerful optimization solver in machine learning. Recently, stochastic ADMM has been integrated with variance reduction methods for the stochastic gradient, leading to SAG-ADMM and SDCA-ADMM, which have fast convergence rates and low iteration complexities. However, their space requirements can still be high. In this paper, we propose an inte...
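
To make the integration concrete, here is a hedged sketch of how a variance-reduced gradient estimate can replace the plain stochastic gradient in a linearized ADMM x-update; the variable names, the linearization, and the step size eta are our illustrative assumptions, not the SAG-ADMM or SDCA-ADMM updates or the update proposed in this paper.

```python
import numpy as np

def svrg_admm_x_update(x, z, u, snapshot, full_grad, grad_fn, batch, A, rho, eta):
    """Hypothetical linearized x-update of stochastic ADMM for
    min_x f(x) + h(z) subject to A x = z, with scaled dual variable u."""
    # SVRG-style gradient of the loss term f at x.
    g = grad_fn(x, batch) - grad_fn(snapshot, batch) + full_grad
    # Gradient step on <g, x> + (rho/2) * ||A x - z + u||^2, i.e. the
    # smooth part of the augmented Lagrangian with f linearized at x.
    return x - eta * (g + rho * A.T @ (A @ x - z + u))
```

One plausible reading of the truncated sentence is that storing a single snapshot gradient (as in SVRG) rather than a per-sample gradient table (as in SAG) is what lowers the space requirement the abstract alludes to.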


Journal

Journal Title: IEEE Transactions on Automatic Control

Year: 2022

ISSN: 0018-9286, 1558-2523, 2334-3303

DOI: https://doi.org/10.1109/tac.2021.3133372