Search results for: back propagation

Number of results: 256,056

2018
Renjie Liao, Yuwen Xiong, Ethan Fetaya, Lisa Zhang, KiJung Yoon, Xaq Pitkow, Raquel Urtasun, Richard Zemel

In this paper, we revisit the recurrent backpropagation (RBP) algorithm (Almeida, 1987; Pineda, 1987), discuss the conditions under which it applies as well as how to satisfy them in deep neural networks. We show that RBP can be unstable and propose two variants based on conjugate gradient on the normal equations (CG-RBP) and Neumann series (Neumann-RBP). We further investigate the relationship...
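
For a network that settles to a fixed point h* = f(h*, x, w), recurrent backpropagation needs the vector (I - J^T)^{-1} dL/dh*, where J is the Jacobian of f with respect to the hidden state at the fixed point. Neumann-RBP truncates the series (I - J)^{-1} = sum_k J^k instead of inverting. A minimal NumPy sketch of that truncation, assuming a dense Jacobian is available (the function name and setup are illustrative, not the paper's code):

```python
import numpy as np

def neumann_rbp_vector(J, grad_h, K=20):
    """Approximate v = (I - J^T)^{-1} grad_h with a truncated Neumann
    series: v ~= sum_{k=0}^{K} (J^T)^k grad_h.  The series converges
    when the spectral radius of J is below 1, which is the kind of
    stability condition the abstract alludes to.

    J:       Jacobian of the update f w.r.t. the hidden state at h* (n x n)
    grad_h:  gradient of the loss w.r.t. the fixed point h* (n,)
    """
    v = grad_h.copy()        # k = 0 term of the series
    term = grad_h.copy()
    for _ in range(K):
        term = J.T @ term    # next term, (J^T)^k grad_h
        v += term            # running partial sum
    return v                 # then backpropagate to w via (df/dw)^T v
```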

Journal: IEEE Transactions on Neural Networks, 1991
Marwan A. Jabri, Barry Flower

Previous work on analog VLSI implementation of multilayer perceptrons with on-chip learning has mainly targeted the implementation of algorithms such as back-propagation. Although back-propagation is efficient, its implementation in analog VLSI requires excessive computational hardware. It is shown that using gradient descent with direct approximation of the gradient instead of back-propagation...
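
The "direct approximation of the gradient" can be sketched in software as finite-difference weight perturbation: nudge one weight, re-measure the error, and take the error change as the gradient estimate, with no backward pass at all. A hedged NumPy sketch (error_fn, the learning rate, and the perturbation size are illustrative assumptions):

```python
import numpy as np

def weight_perturbation_step(error_fn, w, lr=0.01, eps=1e-3):
    """One update using finite-difference gradient estimates instead of
    back-propagation; error_fn is assumed to return the network error
    for a given weight vector."""
    base = error_fn(w)                   # error at the current weights
    grad = np.zeros_like(w)
    for i in range(w.size):
        w_pert = w.copy()
        w_pert[i] += eps                 # perturb a single weight
        grad[i] = (error_fn(w_pert) - base) / eps
    return w - lr * grad

# Illustrative usage on a toy quadratic error surface
w = np.array([1.0, -2.0])
for _ in range(100):
    w = weight_perturbation_step(lambda v: np.sum(v ** 2), w, lr=0.1)
print(w)  # approaches the minimum at the origin
```

Each step costs one extra error evaluation per weight, trading computation for the very simple circuitry the paper targets.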

Journal: Proceedings of the National Academy of Sciences of the United States of America, 1991
P. Mazzoni, R. A. Andersen, M. I. Jordan

Many recent studies have used artificial neural network algorithms to model how the brain might process information. However, back-propagation learning, the method that is generally used to train these networks, is distinctly "unbiological." We describe here a more biologically plausible learning rule, using reinforcement learning, which we have applied to the problem of how area 7a in the post...
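
The reinforcement rule alluded to is in the associative reward-penalty (A_R-P) family: a stochastic unit samples a binary output, and a scalar reward signal, rather than a backpropagated error, gates the weight change. A hedged single-unit sketch (the exact rule and constants in the paper may differ):

```python
import numpy as np

rng = np.random.default_rng(0)

def arp_update(w, x, r, lr=0.1, lam=0.01):
    """One associative reward-penalty style update for a stochastic
    binary unit.  x: input vector; r: scalar reward in [0, 1] from the
    environment.  No backpropagated error signal is used."""
    p = 1.0 / (1.0 + np.exp(-w @ x))   # probability the unit fires
    y = float(rng.random() < p)        # sampled binary output
    # rewarded: move p toward the action taken; penalized: away from it
    delta = r * (y - p) + lam * (1.0 - r) * ((1.0 - y) - p)
    return w + lr * delta * x
```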

Journal: CoRR, 2012
Sudarshan Nandy, Partha Pratim Sarkar, Achintya Das

The present work deals with an improved back-propagation algorithm based on the Gauss-Newton numerical optimization method for fast convergence; standard back-propagation uses the steepest descent method. The algorithm is tested on various datasets and compared with the steepest-descent back-propagation algorithm. In the system, optimization is carried out using a multilayer neural network. The ...
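
The Gauss-Newton idea replaces the steepest-descent step -lr * grad(E) with a step computed from the residual Jacobian, (J^T J)^{-1} J^T r. A minimal sketch of one such update, with a small damping term added for numerical safety (the damping is an added Levenberg-Marquardt-style safeguard, not necessarily the paper's formulation):

```python
import numpy as np

def gauss_newton_step(J, r, w, mu=1e-3):
    """One Gauss-Newton update for a least-squares error E = 0.5*||r||^2.

    J:  Jacobian of the residuals w.r.t. the weights (m x n)
    r:  residual vector, network outputs minus targets (m,)
    mu: small damping term keeping J^T J invertible
    """
    H = J.T @ J + mu * np.eye(J.shape[1])  # Gauss-Newton Hessian approximation
    g = J.T @ r                            # exact gradient of E
    return w - np.linalg.solve(H, g)       # solve H * step = g
```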

1987
David C. Plaut, Geoffrey E. Hinton

A learning procedure, called back-propagation, for layered networks of deterministic, neuron-like units has been described previously. The ability of the procedure automatically to discover useful internal representations makes it a powerful tool for attacking difficult problems like speech recognition. This paper describes further research on the learning procedure and presents an example in w...
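
For reference, the procedure itself fits in a few lines: a forward pass, output and hidden deltas, then gradient steps. A minimal NumPy sketch on XOR, the classic task that forces the network to discover an internal representation (layer sizes and the learning rate are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR is not linearly separable, so the hidden layer must learn
# a useful internal representation
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(scale=0.5, size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(scale=0.5, size=(4, 1)), np.zeros(1)

lr = 1.0
for _ in range(5000):
    H = sigmoid(X @ W1 + b1)            # forward pass
    Y = sigmoid(H @ W2 + b2)
    dY = (Y - T) * Y * (1 - Y)          # output delta (squared error)
    dH = (dY @ W2.T) * H * (1 - H)      # delta propagated to the hidden layer
    W2 -= lr * H.T @ dY; b2 -= lr * dY.sum(axis=0)
    W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(axis=0)

print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
```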

Journal: Neural Networks, 1993
Thierry Denoeux, Régis Lengellé

This paper addresses the problem of initializing the weights in back-propagation networks with one hidden layer. The proposed method relies on the use of reference patterns, or prototypes, and on a transformation which maps each vector in the original feature space onto a unit-length vector in a space with one additional dimension. This scheme applies to pattern recognition tasks, as ...
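
One plausible reading of the transformation: rescale the data so every norm is at most 1, then append the coordinate that restores unit length; lifted prototypes can then seed the hidden-layer weights directly. A hedged NumPy sketch (this is an illustration of the idea, not the paper's exact construction):

```python
import numpy as np

def lift_to_sphere(X):
    """Map each row of X to a unit-length vector with one extra
    dimension: rescale so all norms are <= 1, then append the
    coordinate that restores unit length."""
    X = X / np.linalg.norm(X, axis=1).max()
    extra = np.sqrt(np.maximum(0.0, 1.0 - np.sum(X ** 2, axis=1)))
    return np.hstack([X, extra[:, None]])

def init_hidden_weights(prototypes):
    """Seed each hidden unit with one lifted prototype, so every unit
    starts 'pointing at' a reference pattern rather than at random."""
    return lift_to_sphere(prototypes)

# Illustrative usage: one prototype per class becomes one hidden unit
protos = np.array([[0.2, 0.4], [0.9, 0.1], [0.5, 0.8]])
W_hidden = init_hidden_weights(protos)   # shape (3, 3)
```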

2015
Neha Jaiswal

Image compression techniques are used to reduce the number of bits required to represent an image, which reduces both storage space and transmission cost. In the present research work, a back-propagation neural network training algorithm has been used. The back-propagation neural network algorithm helps to increase the performance of the system and to decrease the convergence time for the trainin...
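
A standard way to realize back-propagation image compression is a bottleneck (autoencoder) network trained to reconstruct image patches, with the narrow hidden layer serving as the compressed code. A hedged NumPy sketch (patch size, code size, and the random stand-in data are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Compress 8x8 patches (64 values) through a 16-unit bottleneck,
# i.e. 4:1 compression; random data stands in for real image patches.
patches = rng.random((1000, 64))

W_enc = rng.normal(scale=0.1, size=(64, 16))
W_dec = rng.normal(scale=0.1, size=(16, 64))

lr = 0.05
for _ in range(500):
    code = np.tanh(patches @ W_enc)          # compressed representation
    recon = code @ W_dec                     # linear reconstruction
    err = recon - patches                    # reconstruction error
    dcode = (err @ W_dec.T) * (1 - code**2)  # backprop through tanh
    W_dec -= lr * code.T @ err / len(patches)
    W_enc -= lr * patches.T @ dcode / len(patches)

recon = np.tanh(patches @ W_enc) @ W_dec
print(np.mean((recon - patches) ** 2))  # reconstruction MSE after training
```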

1994
Tom Heskes

We study on-line backpropagation and show that the existing theoretical descriptions are strictly valid only on relatively short time scales or in the vicinity of (local) minima of the backpropagation error potential. Qualitative global features (e.g., why is it much easier to escape from local minima than from global minima) may also be explained by these local descriptions, but the current ap...

2009
John E. W. Mayhew, Neil A. Thacker

The original back-propagation methods were plagued by variable parameters which affected both the convergence of training and the generalisation ability of the resulting network. These parameters made it difficult to apply such networks to particular mapping problems. A combination of established numerical minimisation methods (Polak-Ribiere Con...
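
The Polak-Ribiere conjugate-gradient update removes the hand-tuned momentum and learning-rate constants by deriving each search direction from successive gradients: beta = g_new . (g_new - g) / (g . g), then d = -g_new + beta * d. A minimal sketch (a fixed step size stands in for the line search a full implementation would use):

```python
import numpy as np

def polak_ribiere_cg(grad_fn, w, steps=200, lr=0.1):
    """Conjugate-gradient minimisation with the Polak-Ribiere beta."""
    g = grad_fn(w)
    d = -g                                    # initial search direction
    for _ in range(steps):
        w = w + lr * d                        # step along the direction
        g_new = grad_fn(w)
        beta = g_new @ (g_new - g) / (g @ g)  # Polak-Ribiere formula
        d = -g_new + max(beta, 0.0) * d       # restart when beta < 0
        g = g_new
    return w

# Example: minimise the quadratic 0.5*w^T A w - b^T w
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
print(polak_ribiere_cg(lambda w: A @ w - b, np.zeros(2)))  # approaches [0.2, 0.4]
```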

1990
John F. Kolen, Jordan B. Pollack

This paper explores the effect of initial weight selection on feed-forward networks learning simple functions with the back-propagation technique. We first demonstrate, through the use of Monte Carlo techniques, that the magnitude of the initial condition vector (in weight space) is a very significant parameter in convergence time variability. In order to further understand this result, additio...
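
The experiment is straightforward to reproduce in miniature: sweep the scale of the random initial weights, train many times at each scale, and record epochs to convergence. A hedged sketch using a tiny XOR network as the task (network size, thresholds, and trial counts are illustrative):

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

def train_xor(rng, scale, max_epochs=20000, lr=0.5, tol=0.05):
    """Train a 2-2-1 sigmoid net on XOR from initial weights of the
    given magnitude; return epochs until MSE < tol, or None."""
    s = lambda z: 1.0 / (1.0 + np.exp(-z))
    W1, b1 = rng.normal(scale=scale, size=(2, 2)), np.zeros(2)
    W2, b2 = rng.normal(scale=scale, size=(2, 1)), np.zeros(1)
    for epoch in range(max_epochs):
        H = s(X @ W1 + b1); Y = s(H @ W2 + b2)
        if np.mean((Y - T) ** 2) < tol:
            return epoch
        dY = (Y - T) * Y * (1 - Y)
        dH = (dY @ W2.T) * H * (1 - H)
        W2 -= lr * H.T @ dY; b2 -= lr * dY.sum(axis=0)
        W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(axis=0)
    return None

# Monte Carlo sweep over the magnitude of the initial weight vector
rng = np.random.default_rng(0)
for scale in [0.1, 0.5, 1.0, 2.0, 4.0]:
    runs = [train_xor(rng, scale) for _ in range(20)]
    ok = [e for e in runs if e is not None]
    mean = np.mean(ok) if ok else float("nan")
    print(f"scale={scale}: {len(ok)}/20 converged, mean epochs {mean:.0f}")
```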
