Approximate Gradient Coding with Optimal Decoding

Authors

Abstract

In distributed optimization problems, a technique called gradient coding, which involves replicating data points, has been used to mitigate the effect of straggling machines. Recent work has studied approximate gradient coding, which concerns coding schemes where the replication factor of the data is too low to recover the full gradient exactly. Our work is motivated by the challenge of creating approximate gradient coding schemes that simultaneously work well in both the adversarial and stochastic straggler models. To that end, we introduce novel approximate gradient codes based on expander graphs, in which each machine receives exactly two blocks of data points. We analyze the decoding error in both the random and adversarial straggler settings when optimal decoding coefficients are used. In the random setting, our codes achieve an error that decays exponentially in the replication factor; in the adversarial setting, the error is nearly a factor of two smaller than that of any existing code with similar performance in the random setting. We also give convergence bounds in both settings for gradient descent under standard assumptions when our codes are used. In the random setting, our convergence rate improves upon black-box bounds; in the adversarial setting, gradient descent can converge down to a noise floor that scales linearly with the adversarial error in the gradient. Finally, we demonstrate empirically that our schemes achieve near-optimal error in the random setting and converge faster than algorithms that do not use the optimal decoding coefficients.
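As a rough illustration of the decoding step described in the abstract, the sketch below simulates a two-block-per-machine assignment under random stragglers and computes decoding coefficients by least squares over the machines that respond (one natural notion of an "optimal" linear combination of the returned messages). The circulant assignment, straggler probability, and variable names are illustrative choices, not the expander-graph construction analyzed in the paper.

```python
# Minimal sketch (not the paper's construction): approximate gradient coding
# with least-squares decoding coefficients in the random-straggler model.
# Each machine holds exactly two data blocks via a simple circulant pattern,
# which only stands in for the expander-based assignment in the paper.
import numpy as np

rng = np.random.default_rng(0)

n_blocks = 20          # number of data blocks (= number of machines here)
n_machines = n_blocks
straggle_prob = 0.3    # each machine straggles independently (random model)

# Assignment matrix A: A[b, m] = 1 if machine m holds block b.
# Machine m holds block m and block (m + 7) mod n_blocks.
A = np.zeros((n_blocks, n_machines))
for m in range(n_machines):
    A[m % n_blocks, m] = 1.0
    A[(m + 7) % n_blocks, m] = 1.0

# Per-block gradients (toy data); the exact full gradient is their sum.
d = 5
block_grads = rng.normal(size=(n_blocks, d))
full_grad = block_grads.sum(axis=0)

# Each machine returns the sum of the gradients of its two blocks.
machine_msgs = A.T @ block_grads            # shape (n_machines, d)

# Random stragglers: only a subset of machines respond.
responded = rng.random(n_machines) > straggle_prob
A_S = A[:, responded]
msgs_S = machine_msgs[responded]

# Decoding coefficients w minimize ||A_S w - 1||_2, so the combination
# A_S w is as close as possible to the all-ones vector that would
# reproduce the exact full gradient.
ones = np.ones(n_blocks)
w, *_ = np.linalg.lstsq(A_S, ones, rcond=None)

decoded_grad = msgs_S.T @ w
rel_err = np.linalg.norm(decoded_grad - full_grad) / np.linalg.norm(full_grad)
print(f"{responded.sum()} of {n_machines} machines responded, "
      f"relative gradient error = {rel_err:.3f}")
```

The same least-squares decoder applies to any assignment matrix: among all linear combinations of the returned messages, it minimizes the distance to the exact-gradient combination, which is the sense of "optimal" used in this sketch.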


Related Articles

Network Coding of Correlated Data with Approximate Decoding

This paper considers a framework where data from correlated sources are transmitted with help of network coding in ad hoc network topologies. The correlated data are encoded independently at sensors and network coding is employed in the network nodes for improved data delivery performance. In such settings, we focus on the problem of reconstructing the sources at decoder when perfect decoding i...

Approximate Gradient Coding via Sparse Random Graphs

Distributed algorithms are often beset by the straggler effect, where the slowest compute nodes in the system dictate the overall running time. Coding-theoretic techniques have been recently proposed to mitigate stragglers via algorithmic redundancy. Prior work in coded computation and gradient coding has mainly focused on exact recovery of the desired output. However, slightly inexact solution...

Smooth Optimization with Approximate Gradient

We show that the optimal complexity of Nesterov’s smooth first-order optimization algorithm is preserved when the gradient is only computed up to a small, uniformly bounded error. In applications of this method to semidefinite programs, this often means computing only a few dominant eigenvalues of the current iterate instead of a full matrix exponential, which significantly reduces the method’s...
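As a sketch of the setting that result describes (not the paper's algorithm), the snippet below runs a Nesterov-style accelerated gradient method on a toy quadratic while every gradient query is perturbed by a uniformly bounded error; the problem, step size, and error level are illustrative assumptions.

```python
# Minimal sketch: accelerated gradient descent with an approximate gradient
# oracle whose error is uniformly bounded. Illustrative toy problem only.
import numpy as np

rng = np.random.default_rng(1)

# Smooth convex quadratic f(x) = 0.5 * x^T Q x - b^T x.
d = 50
M = rng.normal(size=(d, d))
Q = M @ M.T / d + np.eye(d)
b = rng.normal(size=d)
x_star = np.linalg.solve(Q, b)
L = np.linalg.eigvalsh(Q).max()   # smoothness constant

def approx_grad(x, eps=1e-3):
    """Exact gradient plus a perturbation of norm at most eps."""
    noise = rng.normal(size=d)
    noise *= eps / np.linalg.norm(noise)
    return Q @ x - b + noise

# Nesterov's accelerated gradient method driven by the inexact oracle.
x = y = np.zeros(d)
t = 1.0
for _ in range(200):
    x_next = y - approx_grad(y) / L
    t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2
    y = x_next + (t - 1) / t_next * (x_next - x)
    x, t = x_next, t_next

print("distance to optimum:", np.linalg.norm(x - x_star))
```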

Hyperparameter optimization with approximate gradient

Most models in machine learning contain at least one hyperparameter to control for model complexity. Choosing an appropriate set of hyperparameters is both crucial in terms of model accuracy and computationally challenging. In this work we propose an algorithm for the optimization of continuous hyperparameters using inexact gradient information. An advantage of this method is that hyperparamete...

Information-Theoretic Viewpoints on Optimal Causal Coding-Decoding Problems

In this paper we consider an interacting two-agent sequential decision-making problem consisting of a Markov source process, a causal encoder with feedback, and a causal decoder. Motivated by a desire to foster links between control and information theory, we augment the standard formulation by considering general alphabets and a cost function operating on current and previous symbols. Using dy...


Journal

Journal title: IEEE Journal on Selected Areas in Information Theory

Year: 2021

ISSN: 2641-8770

DOI: https://doi.org/10.1109/jsait.2021.3100110