SSGD: A Safe and Efficient Method of Gradient Descent

Authors

Abstract

With the vigorous development of artificial intelligence technology, various engineering applications have been implemented one after another. The gradient descent method plays an important role in solving optimization problems, due to its simple structure, good stability, and easy implementation. However, in multinode machine learning systems, gradients usually need to be shared, which can cause privacy leakage, because attackers can infer the training data from the gradient information. In this paper, to prevent gradient leakage while preserving model accuracy, we propose a super stochastic gradient descent approach that updates parameters by concealing the modulus length of gradient vectors and converting it or them into a unit vector. Furthermore, we analyze the security of this approach and demonstrate that our algorithm can defend against attacks on the gradient. Experimental results show that our approach is clearly superior to prevalent gradient descent approaches in terms of accuracy, robustness, and adaptability to large-scale batches. Interestingly, our algorithm can also resist model poisoning attacks to a certain extent.
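To make the core idea concrete, here is a minimal NumPy sketch of sharing unit gradients: each node normalizes its gradient so the modulus length is concealed, and only the direction is aggregated. The function names and the simple direction-averaging rule are illustrative assumptions for this sketch, not the paper's reference implementation.

```python
import numpy as np

def to_unit_gradient(grad, eps=1e-12):
    # Conceal the modulus length of a gradient vector by normalizing
    # it to a unit vector before it leaves the node (illustrative
    # sketch of the idea in the abstract, not the paper's code).
    return grad / (np.linalg.norm(grad) + eps)

def shared_update(params, local_grads, lr=0.01):
    # Aggregate unit gradients from several nodes and take a descent
    # step, so only directions (never magnitudes) are ever shared.
    avg_direction = np.mean([to_unit_gradient(g) for g in local_grads], axis=0)
    return params - lr * avg_direction

# Toy usage: three nodes share unit gradients for a 4-parameter model.
rng = np.random.default_rng(0)
params = rng.normal(size=4)
local_grads = [rng.normal(size=4) for _ in range(3)]
params = shared_update(params, local_grads)
```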


Similar Articles

A Gradient Descent Method for a Neural

It has been demonstrated that higher order recurrent neural networks exhibit an underlying fractal attractor as an artifact of their dynamics. These fractal attractors offer a very efficient mechanism to encode visual memories in a neural substrate, since even a simple twelve weight network can encode a very large set of different images. The main problem in this memory model, which so far has r...


A New Conjugate Gradient Method with Guaranteed Descent and an Efficient Line Search

A new nonlinear conjugate gradient method and an associated implementation, based on an inexact line search, are proposed and analyzed. With exact line search, our method reduces to a nonlinear version of the Hestenes–Stiefel conjugate gradient scheme. For any (inexact) line search, our scheme satisfies the descent condition gₖᵀdₖ ≤ −(7/8)‖gₖ‖². Moreover, a global convergence result is establis...
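For context, below is a minimal sketch of the Hestenes–Stiefel scheme that the method reduces to under exact line search, run on a convex quadratic where the exact step has a closed form. The function name and the test problem are illustrative assumptions; the cited paper's scheme further modifies β to guarantee the descent condition quoted above.

```python
import numpy as np

def hs_conjugate_gradient(A, b, x0, iters=50, tol=1e-10):
    # Nonlinear CG with the Hestenes-Stiefel beta, applied to the
    # quadratic f(x) = 0.5 x^T A x - b^T x so exact line search is
    # available in closed form. Illustrative sketch only.
    x = x0.astype(float)
    g = A @ x - b                         # gradient of the quadratic
    d = -g                                # start with steepest descent
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        alpha = -(g @ d) / (d @ A @ d)    # exact line search step
        x = x + alpha * d
        g_new = A @ x - b
        y = g_new - g                     # gradient change along the step
        beta = (g_new @ y) / (d @ y)      # Hestenes-Stiefel formula
        d = -g_new + beta * d             # new conjugate direction
        g = g_new
    return x

# Toy usage on a small symmetric positive definite system.
A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 0.0], [0.0, 0.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
x = hs_conjugate_gradient(A, b, np.zeros(3))
```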


Privacy-preservation for Stochastic Gradient Descent Method

The traditional paradigm in machine learning has been that given a data set, the goal is to learn a target function or decision model (such as a classifier) from it. Many techniques in data mining and machine learning follow a gradient descent paradigm in the iterative process of discovering this target function or decision model. For instance, linear regression can be resolved through a gradie...
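As a reminder of that paradigm, here is a textbook stochastic gradient descent loop for the linear regression example the snippet cites. This is a generic sketch, not code from the cited paper; the function name and hyperparameters are illustrative choices.

```python
import numpy as np

def sgd_linear_regression(X, y, lr=0.01, epochs=100, seed=0):
    # Fit w for the model y ~ X @ w by stochastic gradient descent
    # on the squared-error loss, one randomly ordered sample at a time.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):
            residual = X[i] @ w - y[i]    # prediction error on one sample
            w -= lr * residual * X[i]     # gradient step on that sample
    return w

# Toy usage: recover weights close to [2, -1] from noisy data.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = X @ np.array([2.0, -1.0]) + 0.01 * rng.normal(size=200)
w = sgd_linear_regression(X, y)
```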


Faster gradient descent and the efficient recovery of images

Much recent attention has been devoted to gradient descent algorithms where the steepest descent step size is replaced by a similar one from a previous iteration or gets updated only once every second step, thus forming a faster gradient descent method. For unconstrained convex quadratic optimization these methods can converge much faster than steepest descent. But the context of interest here ...
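One concrete reading of that idea, sketched on a quadratic: recompute the exact steepest-descent step size only every second iteration and reuse it in between. The cycle length of two and the test matrix are assumptions made for illustration; the cited paper studies such lagged step sizes in image recovery settings.

```python
import numpy as np

def lagged_steepest_descent(A, b, x0, iters=200, tol=1e-10):
    # Gradient descent for f(x) = 0.5 x^T A x - b^T x where the exact
    # steepest-descent step size is refreshed only on even iterations
    # and reused on odd ones -- an illustrative 'faster gradient
    # descent' variant, not the cited paper's exact algorithm.
    x = x0.astype(float)
    alpha = None
    for k in range(iters):
        g = A @ x - b
        if np.linalg.norm(g) < tol:
            break
        if k % 2 == 0 or alpha is None:
            alpha = (g @ g) / (g @ A @ g)  # exact SD step, refreshed every 2nd step
        x = x - alpha * g                  # otherwise reuse the stored step
    return x

# Toy usage on an ill-conditioned quadratic.
A = np.diag([1.0, 10.0, 100.0])
b = np.ones(3)
x = lagged_steepest_descent(A, b, np.zeros(3))
```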


A Gradient Descent Method for Optimization of Model Microvascular Networks

Within animals, oxygen exchange occurs within networks containing potentially billions of microvessels that are distributed throughout the animal’s body. Innovative imaging methods now allow for mapping of the architecture and blood flows within real microvascular networks. However, these data streams have so far yielded little new understanding of the physical principles that underlie the orga...



Journal

Journal title: Security and Communication Networks

Year: 2021

ISSN: 1939-0122, 1939-0114

DOI: https://doi.org/10.1155/2021/5404061