Serial and parallel backpropagation convergence via nonmonotone perturbed minimization


Similar articles

Backpropagation Convergence via Deterministic Nonmonotone Perturbed Minimization

The fundamental backpropagation (BP) algorithm for training artificial neural networks is cast as a deterministic nonmonotone perturbed gradient method. Under certain natural assumptions, such as the series of learning rates diverging while the series of their squares converges, it is established that every accumulation point of the online BP iterates is a stationary point of the BP error func...
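The step-size condition in this abstract (learning rates eta_t whose sum diverges while the sum of their squares converges) is satisfied, for example, by eta_t = eta0 / (t + 1). A minimal Python sketch of online BP viewed as a perturbed gradient method under that schedule — not the paper's exact algorithm, and with `grad` standing in for an unspecified error-gradient oracle:

```python
import numpy as np

def online_bp_sketch(grad, w0, n_steps=1000, eta0=0.1):
    """grad(w, t): (possibly perturbed) gradient of the error function
    at weights w for the t-th online example; hypothetical interface."""
    w = np.asarray(w0, dtype=float)
    for t in range(n_steps):
        eta = eta0 / (t + 1)      # sum(eta) diverges, sum(eta**2) converges
        w = w - eta * grad(w, t)  # one online BP / perturbed gradient step
    return w

# Toy usage: minimize the quadratic error 0.5 * ||w - 1||^2.
w_star = online_bp_sketch(lambda w, t: w - 1.0, w0=[5.0, -3.0])
```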


Nonmonotone and Perturbed Optimization

The primary purpose of this research is the analysis of nonmonotone optimization algorithms to which standard convergence analysis techniques do not apply. We consider methods that are inherently nonmonotone, as well as nonmonotonicity induced by data perturbations or inexact subproblem solution. One of the principal applications of our results is the analysis of gradient-type methods that pro...


Nonmonotone Convergence and Relaxing Functions

In the minimization of real valued functions, Newton's algorithm is often combined with a line search method. Grippo et al. [SIAM J. Numer. Anal., Vol. 23, No. 4] first suggested a nonmonotone stepsize selection rule based on the maximum of a fixed set of previous function values. In this paper we introduce the notion of relaxing functions and suggest several other nonmonotone procedures using a...
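For illustration, a minimal sketch of Grippo-type nonmonotone backtracking, in which the Armijo test is taken against the maximum of the last M function values rather than the current one; all names here are illustrative, not from the paper, and x, g, d are assumed to be NumPy arrays:

```python
def nonmonotone_linesearch(f, x, g, d, history, M=10, gamma=1e-4, beta=0.5):
    """f: objective, x: current point, g: gradient at x,
    d: descent direction, history: recent f-values (most recent last)."""
    f_ref = max(history[-M:])      # reference: max of the last M values
    alpha = 1.0
    while f(x + alpha * d) > f_ref + gamma * alpha * g.dot(d):
        alpha *= beta              # backtrack until the relaxed test passes
    return alpha

# Usage inside a Newton or gradient iteration:
#   history.append(f(x))
#   alpha = nonmonotone_linesearch(f, x, g, d, history)
#   x = x + alpha * d
```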


Nonmonotone methods for backpropagation training with adaptive learning rate

In this paper, we present nonmonotone methods for feedforward neural network training, i.e. training methods in which error function values are allowed to increase at some iterations. More specifically, at each epoch we require that the current error function value satisfy an Armijo-type criterion with respect to the maximum error function value of the M previous epochs. A strategy to dynamica...
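A hedged sketch of the epoch-level scheme this abstract describes — an Armijo-type acceptance test against the maximum error of the M previous epochs, paired with a simple (assumed, not the paper's) learning-rate adaptation rule:

```python
def nonmonotone_training(w, train_epoch, error, grad_sq_norm,
                         epochs=100, lr=0.1, M=10, gamma=1e-4):
    """All callables (train_epoch, error, grad_sq_norm) are assumed
    interfaces, not the paper's API."""
    errors = [error(w)]
    for _ in range(epochs):
        w_new = train_epoch(w, lr)        # one backprop epoch at rate lr
        e_ref = max(errors[-M:])          # max error of M previous epochs
        if error(w_new) <= e_ref - gamma * lr * grad_sq_norm(w):
            w = w_new                     # accept: error may still increase
            errors.append(error(w))       # record the accepted error value
            lr *= 1.05                    # cautiously grow the learning rate
        else:
            lr *= 0.5                     # reject the epoch, shrink the rate
    return w
```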


A Nonmonotone Backpropagation Training Method for Neural Networks

A method that improves the speed and the success rate of the backpropagation algorithm is proposed. This method adapts the learning rate using the Barzilai and Borwein [IMA J. Numer. Anal., 8, 141–148, 1988] steplength update for gradient descent methods. The learning rate is automatically adapted at each epoch, using the weight and gradient values of the previous one. Additionally, an ...
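The Barzilai–Borwein update sets the learning rate from the most recent weight and gradient differences, eta_k = (s.s)/(s.y) with s = w_k - w_{k-1} and y = g_k - g_{k-1}. A minimal sketch, where the safeguard interval and all names are assumptions rather than the paper's method:

```python
import numpy as np

def bb_gradient_descent(grad, w, epochs=100, eta=0.1,
                        eta_min=1e-8, eta_max=1e2):
    """grad(w): gradient of the error function; w: NumPy weight vector."""
    g = grad(w)
    for _ in range(epochs):
        w_new = w - eta * g             # plain gradient (backprop) step
        g_new = grad(w_new)
        s, y = w_new - w, g_new - g     # weight / gradient differences
        if s.dot(y) > 0:                # BB step is well defined only then
            eta = float(np.clip(s.dot(s) / s.dot(y), eta_min, eta_max))
        w, g = w_new, g_new
    return w
```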



Journal

Journal title: Optimization Methods and Software

Year: 1994

ISSN: 1055-6788, 1029-4937

DOI: 10.1080/10556789408805581