Search results for: backpropagation

Number of results: 7478

Journal: IEEE Transactions on Neural Networks, 1997
Brijesh Verma

Training a multilayer perceptron by an error backpropagation algorithm is slow and uncertain. This paper describes a new approach which is much faster and more certain than error backpropagation. The proposed approach is based on combined iterative and direct solution methods. In this approach, we use an inverse transformation for linearization of nonlinear output activation functions, direct soluti...
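
The "inverse transformation" idea summarized above can be illustrated with a hedged sketch: for a single sigmoid output layer, mapping the targets through the inverse sigmoid (logit) turns the fitting problem into a linear least-squares problem that can be solved directly rather than iteratively. The setup below is illustrative and not taken from the paper.

```python
import numpy as np

# Toy data: sigmoid outputs generated from known weights W_true.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                 # inputs
W_true = np.array([[1.0], [-2.0], [0.5]])
T = 1.0 / (1.0 + np.exp(-(X @ W_true)))       # sigmoid targets in (0, 1)

# Linearize: apply the inverse sigmoid (logit) to the targets, then
# solve the resulting *linear* system directly instead of iterating.
Z = np.log(T / (1.0 - T))                     # logit(T) equals X @ W_true
W, *_ = np.linalg.lstsq(X, Z, rcond=None)

print(np.allclose(W, W_true, atol=1e-6))      # the direct solve recovers the weights
```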

Journal: Neural Computation, 2003
Xiaohui Xie, H. Sebastian Seung

Backpropagation and contrastive Hebbian learning are two methods of training networks with hidden neurons. Backpropagation computes an error signal for the output neurons and spreads it over the hidden neurons. Contrastive Hebbian learning involves clamping the output neurons at desired values and letting the effect spread through feedback connections over the entire network. To investigate the...
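
The output error signal and its spread over the hidden neurons, as described above, can be sketched in a minimal pure-Python example (one input, two sigmoid hidden units, one sigmoid output; all numbers are illustrative, not from the paper):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

x = 0.5
w_hidden = [0.4, -0.3]        # input -> hidden weights
w_out = [0.7, 0.2]            # hidden -> output weights
target = 1.0

# Forward pass
h = [sigmoid(w * x) for w in w_hidden]
y = sigmoid(sum(wo * hi for wo, hi in zip(w_out, h)))

# Error signal at the output (squared error, sigmoid derivative y*(1-y))
delta_out = (y - target) * y * (1.0 - y)

# Spread the error back over the hidden neurons through w_out
delta_hidden = [delta_out * wo * hi * (1.0 - hi)
                for wo, hi in zip(w_out, h)]

# Gradient-descent weight updates
lr = 0.5
w_out = [wo - lr * delta_out * hi for wo, hi in zip(w_out, h)]
w_hidden = [wh - lr * dh * x for wh, dh in zip(w_hidden, delta_hidden)]

# One update step moves the output closer to the target
h2 = [sigmoid(w * x) for w in w_hidden]
y2 = sigmoid(sum(wo * hi for wo, hi in zip(w_out, h2)))
print(abs(target - y2) < abs(target - y))
```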

Journal: CoRR, 2018
Varun Ranganathan, S. Natarajan

The backpropagation algorithm, originally introduced in the 1970s, is the workhorse of learning in neural networks. It makes use of gradient descent, a first-order iterative optimization algorithm for finding the minimum of a function. To find a local minimum of a function using gradient descent, ...
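
The gradient descent step mentioned in this abstract, x ← x − η f′(x), can be illustrated on a one-dimensional function (a generic sketch, not code from the paper):

```python
# Minimal gradient descent on f(x) = (x - 3)^2, whose minimum is at x = 3.
# Each step moves against the derivative, scaled by the learning rate.

def grad(x):
    # Derivative of f(x) = (x - 3)^2
    return 2.0 * (x - 3.0)

x = 0.0           # starting point
lr = 0.1          # learning rate (step size)
for _ in range(200):
    x -= lr * grad(x)

print(round(x, 4))  # converges to approximately 3.0
```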

2004
Jadranka Skorin-Kapov, Wendy Tang

In this paper we explore different strategies to guide the backpropagation algorithm used for training artificial neural networks. Two different variants of the steepest-descent-based backpropagation algorithm and four different variants of the conjugate gradient algorithm are tested. The variants differ in whether or not the time component is used, and whether or not additional gradient information is utili...

2017
Haiping Huang, Taro Toyoizumi

Standard error backpropagation is used in almost all modern deep network training. However, it typically suffers from a proliferation of saddle points in high-dimensional parameter space. It is therefore highly desirable to design an efficient algorithm that escapes from these saddle points and reaches a parameter region with better generalization capabilities, especially based on rough insights a...

2008
Jacob M.J. Murre

Given the range of neural network paradigms available at the moment, we might ask why anyone would still want to use backpropagation. An important argument for using this learning algorithm seems to be its popularity. Backpropagation has become one of the standard technologies in connectionist modelling. Although it was invented by Werbos in 1974, it has only been with the publication of the so...

Journal: International Journal of Neural Systems, 2002
Gürsel Serpen, Joel Corra

This paper proposes a non-recurrent training algorithm, resilient propagation, for the Simultaneous Recurrent Neural Network operating in relaxation mode for computing high quality solutions of static optimization problems. Implementation details related to adaptation of the recurrent neural network weights through the non-recurrent training algorithm, resilient backpropagation, are formulated ...
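
Resilient backpropagation (Rprop) adapts a separate step size for each weight from the sign of the gradient alone: the step grows while the gradient keeps its sign and shrinks when it flips. A simplified single-weight sketch (illustrative constants, not the paper's implementation):

```python
import math

# Simplified Rprop on the toy objective f(w) = w^2 (minimum at w = 0).
eta_plus, eta_minus = 1.2, 0.5   # step growth / shrink factors
step_max, step_min = 50.0, 1e-6  # step size bounds

w, step, prev_grad = 5.0, 0.1, 0.0
for _ in range(300):
    g = 2.0 * w                  # gradient of f(w) = w^2
    if prev_grad * g > 0:        # same sign: accelerate
        step = min(step * eta_plus, step_max)
    elif prev_grad * g < 0:      # sign flip: we overshot, slow down
        step = max(step * eta_minus, step_min)
    # Move against the gradient by the current step size (sign only)
    w -= math.copysign(step, g) if g != 0 else 0.0
    prev_grad = g

print(abs(w) < 0.01)             # w has converged close to the minimum
```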

Journal: Neural Networks: The Official Journal of the International Neural Network Society, 2001
Alexander Nikov, Stefka Stoeva

A modification of the fuzzy backpropagation (FBP) algorithm, called the QuickFBP algorithm, is proposed, in which the computation of the net function is significantly quicker. It is proved that the FBP algorithm has exponential time complexity, while the QuickFBP algorithm has polynomial time complexity. Convergence conditions for the QuickFBP and the FBP algorithm, respectively, are defined and proved for: (1) ...

2018
Xu He, Herbert Jaeger

Catastrophic interference has been a major roadblock in the research of continual learning. Here we propose a variant of the back-propagation algorithm, “conceptor-aided backprop” (CAB), in which gradients are shielded by conceptors against degradation of previously learned tasks. Conceptors have their origin in reservoir computing, where they have been previously shown to overcome catastrophic...

2002
Wan Hussain Wan Ishak, Fadzailah Siraj, Abu Talib Othman

A neural network is a computational paradigm that draws on several disciplines, such as mathematics, statistics, biology and philosophy. Neural networks have been implemented in many applications, in software and even in hardware. In most cases, a neural network processes a large amount of data, which it is taught to learn or memorize as knowledge. The learning mechanism for a neural network i...

Chart: number of search results per year
