Distributed Mirror Descent With Integral Feedback: Asymptotic Convergence Analysis of Continuous-Time Dynamics


Abstract

This letter addresses distributed optimization, where a network of agents wants to minimize a global strongly convex objective function. The function can be written as a sum of local functions, each of which is associated with an agent. We propose a continuous-time mirror descent algorithm that uses purely local information to converge to the global optimum. Unlike previous work on distributed mirror descent, we incorporate integral feedback in the update, allowing a constant step-size when the algorithm is discretized. We establish the asymptotic convergence of the algorithm using Lyapunov stability analysis. We further present numerical experiments that verify the advantage of adopting integral feedback for improving the convergence rate of distributed mirror descent.
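The role of the integral feedback term is easiest to see in simulation. The sketch below is illustrative only, not the authors' exact dynamics: it forward-Euler-discretizes a distributed mirror-descent flow in which each agent's dual variable is driven by its local gradient, a proportional consensus term, and an integrated disagreement term. The Euclidean mirror map (so dual and primal variables coincide), the ring graph, the quadratic local costs, and the gains are all assumptions made for this example.

```python
# Illustrative sketch (assumed dynamics, not the paper's exact algorithm):
# forward-Euler simulation of a distributed mirror-descent flow with
# proportional-plus-integral (PI) consensus feedback.
import numpy as np

n_agents, dim = 6, 2
rng = np.random.default_rng(0)
c = rng.normal(size=(n_agents, dim))   # f_i(x) = 0.5*||x - c_i||^2, optimum = mean(c)

# Laplacian of an (assumed) ring communication graph
A = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    A[i, (i + 1) % n_agents] = 1.0
    A[i, (i - 1) % n_agents] = 1.0
Lap = np.diag(A.sum(axis=1)) - A

def grad_f(x):
    return x - c                        # stacked local gradients

z = np.zeros((n_agents, dim))           # dual (mirror) variables
w = np.zeros((n_agents, dim))           # integral-feedback states, start at zero
beta, dt = 2.0, 0.01                    # consensus gain and Euler step (assumed)

for _ in range(5000):
    x = z                               # primal = grad psi*(z); identity map here
    err = Lap @ x                       # network disagreement
    z = z + dt * (-grad_f(x) - beta * err - w)
    w = w + dt * (beta * err)           # integrate the disagreement over time

print("agent states:", z.round(3))
print("global optimum:", c.mean(axis=0).round(3))
```

In this sketch the integral states sum to zero along the trajectory (the all-ones vector lies in the Laplacian's left null space and w starts at zero), so any equilibrium must satisfy consensus together with the first-order optimality condition of the global objective; the integral term is what removes the steady-state disagreement that a constant step-size would otherwise leave behind.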

Related Articles

Accelerated Stochastic Mirror Descent: From Continuous-time Dynamics to Discrete-time Algorithms

We present a new framework to analyze accelerated stochastic mirror descent through the lens of continuous-time stochastic dynamic systems. It enables us to design new algorithms, and perform a unified and simple analysis of the convergence rates of these algorithms. More specifically, under this framework, we provide a Lyapunov function based analysis for the continuous-time stochastic dynamic...

Distributed Beamforming with Feedback: Convergence Analysis

The focus of this work is on the analysis of transmit beamforming schemes with a low-rate feedback link in wireless sensor/relay networks, where nodes in the network need to implement beamforming in a distributed manner. Specifically, the problem of distributed phase alignment is considered, where neither the transmitters nor the receiver has perfect channel state information, but there is a lo...

Convergence Analysis of Distributed Stochastic Gradient Descent with Shuffling

When using stochastic gradient descent (SGD) to solve large-scale machine learning problems, a common practice of data processing is to shuffle the training data, partition the data across multiple threads/machines if needed, and then perform several epochs of training on the re-shuffled (either locally or globally) data. The above procedure makes the instances used to compute the gradients no ...

Convergence of Online Mirror Descent Algorithms

In this paper we consider online mirror descent (OMD) algorithms, a class of scalable online learning algorithms exploiting data geometric structures through mirror maps. Necessary and sufficient conditions are presented in terms of the step-size sequence {η_t} for the convergence of an OMD algorithm with respect to the expected Bregman distance induced by the mirror map. The condition is lim_{t→...
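For context, the OMD step such conditions concern can be written, in common notation that may differ from the paper's, as a Bregman proximal update:

```latex
% Online mirror descent in standard form: \psi is the mirror map,
% D_\psi the Bregman distance it induces, and \eta_t the step size.
\[
  x_{t+1} = \arg\min_{x \in \mathcal{X}}
    \Big\{ \eta_t \langle \nabla f_t(x_t), x \rangle + D_\psi(x, x_t) \Big\},
  \qquad
  D_\psi(x, y) = \psi(x) - \psi(y) - \langle \nabla \psi(y), x - y \rangle .
\]
```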

Accelerated Mirror Descent in Continuous and Discrete Time

We study accelerated mirror descent dynamics in continuous and discrete time. Combining the original continuous-time motivation of mirror descent with a recent ODE interpretation of Nesterov's accelerated method, we propose a family of continuous-time descent dynamics for convex functions with Lipschitz gradients, such that the solution trajectories converge to the optimum at an O(1/t²) rate. We...
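For reference, one standard form of such accelerated dynamics (up to notation, which may differ from the paper's) couples a primal averaging equation with a mirror-dual gradient equation:

```latex
% Accelerated mirror descent ODE: X is the primal trajectory, Z the dual,
% \psi^* the conjugate of the mirror map, r >= 2 a damping parameter.
\[
  \dot{X}(t) = \frac{r}{t}\Big( \nabla \psi^{*}\big(Z(t)\big) - X(t) \Big),
  \qquad
  \dot{Z}(t) = -\frac{t}{r}\, \nabla f\big(X(t)\big),
\]
% for which f(X(t)) - f^{*} decays at the O(1/t^2) rate quoted above.
```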


Journal

Journal Title: IEEE Control Systems Letters

Year: 2021

ISSN: 2475-1456

DOI: https://doi.org/10.1109/lcsys.2020.3040934