Search results for: backpropagation

Number of results: 7478

2001
Jiazhi OU Kaijiang CHEN Zongge LI

In this paper, we address the problem of rejecting out-of-vocabulary words in speaker-independent Mandarin place-name recognition. We integrate neural networks and hidden Markov models (HMMs) in an attempt to exploit the strengths of both. HMM-based acoustic models, including keyword models, filler models, and an anti-keyword model, were trained to meet our needs. Statistical features are fed to a neural n...

Journal: Neural Computation 1994
Françoise Beaufays Eric A. Wan

We show that signal flow graph theory provides a simple way to relate two popular algorithms used for adapting dynamic neural networks, real-time backpropagation and backpropagation-through-time. Starting with the flow graph for real-time backpropagation, we use a simple transposition to produce a second graph. The new graph is shown to be interreciprocal with the original and to correspond to the ...
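
A hedged aside on this entry: the duality between the two algorithms can be checked numerically on a toy linear recurrent system x_{t+1} = W x_t with loss L = c . x_T. The Python sketch below (all names illustrative, not from the paper) computes dL/dW once with a backward adjoint recursion in the style of backpropagation-through-time and once with a forward sensitivity recursion in the style of real-time backpropagation; the two recursions run in opposite directions, mirroring the transposition argument, and yield the same gradient.

import numpy as np

rng = np.random.default_rng(0)
n, T = 3, 5
W = 0.5 * rng.normal(size=(n, n))    # recurrent weights
x0 = rng.normal(size=n)
c = rng.normal(size=n)               # loss is L = c . x_T

# Forward pass, storing states x_0 .. x_T.
xs = [x0]
for t in range(T):
    xs.append(W @ xs[-1])

# Backpropagation-through-time: the adjoint runs backwards through W^T.
grad_bptt = np.zeros((n, n))
lam = c.copy()                       # dL/dx_T
for t in reversed(range(T)):
    grad_bptt += np.outer(lam, xs[t])
    lam = W.T @ lam

# Real-time (forward) recursion: sensitivities S[k, i, j] = dx_t[k]/dW[i, j].
S = np.zeros((n, n, n))
for t in range(T):
    S_new = np.einsum('kl,lij->kij', W, S)
    for i in range(n):
        S_new[i, i, :] += xs[t]      # direct dependence of x_{t+1}[i] on row i of W
    S = S_new
grad_rtrl = np.einsum('k,kij->ij', c, S)

assert np.allclose(grad_bptt, grad_rtrl)   # same gradient, opposite-direction recursions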

2008
Azian Azamimi Yoko Uwate Yoshifumi Nishio

Over the years, many improvements and modifications of the backpropagation learning algorithm have been reported. In this study, we propose a new modified backpropagation learning algorithm that adds chaotic noise to the weight update process. By computer simulations, we confirm that the proposed algorithm can give a better convergence rate and can find a good solution earlier compared...
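
A minimal sketch of the idea described in this abstract, assuming the logistic map as the chaotic noise source and a hypothetical noise scale beta; the paper's exact formulation is not shown in the snippet and may differ.

import numpy as np

def logistic_map(x, r=4.0):
    # One step of the logistic map, a simple chaotic sequence generator (an assumption here).
    return r * x * (1.0 - x)

def chaotic_update(w, grad, chaos_state, lr=0.1, beta=0.01):
    # Ordinary gradient step plus a small chaotic perturbation on each weight.
    noise = np.empty_like(w)
    x = chaos_state
    for i in range(w.size):
        x = logistic_map(x)
        noise.flat[i] = x - 0.5      # center the chaotic sequence around zero
    return w - lr * grad + beta * noise, x   # return the state so the chaos continues

# Example: one update on a 2x3 weight matrix.
w = np.zeros((2, 3))
w, state = chaotic_update(w, np.ones_like(w), chaos_state=0.3)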

2000
Fath El Alem F. ALI Zensho NAKAO Yen-Wei CHEN Kazunori MATSUO Izuru OHKAWA

This paper presents a neural backpropagation algorithm for reconstructing two-dimensional CT images from a small number of projection data. The paper extends the work in [1], in which a backpropagation algorithm is applied to the CT image reconstruction problem. The delta rule of the ordinary backpropagation algorithm is modified using a ‘secondary’ teaching signal and the ‘Resilient ba...

2007
Eric A. Wan

We show that signal flow graph theory provides a simple way to relate two popular algorithms used for adapting dynamic neural networks, real-time backpropagation and backpropagation-through-time. Starting with the flow graph for real-time backpropagation, we use a simple transposition to produce a second graph. The new graph is shown to be interreciprocal with the original and to correspond to the ...

Journal: The Journal of Neuroscience: the official journal of the Society for Neuroscience 2003
Allan T Gulledge Greg J Stuart

Somatic and dendritic whole-cell recording was used to examine action potential (AP) initiation and propagation in layer 5 pyramidal neurons of the rat prelimbic prefrontal cortex. APs generated by somatic current injection, or via antidromic stimulation, were reliably recorded at apical dendritic locations as far as 480 µm from the soma. Although the backpropagation of single APs into the ...

2014
Deepak Gupta Ravi Kumar

This paper reports the effect of the step size (learning rate parameter) on the performance of the backpropagation algorithm. The backpropagation (BP) algorithm is used to train multilayer neural networks and is the generalized form of the least mean square (LMS) algorithm. In the proposed backpropagation algorithm, a different learning rate parameter is used in each layer. The learning...
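
A hedged sketch of layer-wise learning rates in the spirit of this abstract; the specific rates shown are illustrative assumptions, not the paper's schedule.

import numpy as np

def sgd_step(weights, grads, layer_lrs):
    # Apply a separate learning rate to each layer's weight matrix.
    assert len(weights) == len(grads) == len(layer_lrs)
    return [w - lr * g for w, g, lr in zip(weights, grads, layer_lrs)]

# Example: smaller steps in the first layer, larger near the output.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(4, 8)), rng.normal(size=(8, 2))]
grads = [np.ones_like(w) for w in weights]
weights = sgd_step(weights, grads, layer_lrs=[0.01, 0.1])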

Journal: CoRR 2016
Thomas Miconi

Hebbian plasticity allows biological agents to learn from their lifetime experience, extending the fixed information provided by evolutionary search. Conversely, backpropagation methods can build high-performance fixed-weight networks, but are not currently equipped to design networks with Hebbian connections. Here we use backpropagation to train fully differentiable plastic networks, such tha...
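
A rough sketch of what a backpropagation-trainable plastic connection could look like, assuming each connection combines a fixed weight, a trainable plasticity coefficient, and a Hebbian trace that evolves during the network's lifetime. The update rules are guesses based on the abstract, not the paper's code, and only the forward/Hebbian dynamics are shown, not the outer backpropagation loop.

import numpy as np

class PlasticLayer:
    # Each connection has a fixed weight w and a plasticity coefficient alpha
    # (both would be trained by backpropagation), plus a Hebbian trace that
    # changes as the network runs.
    def __init__(self, n_in, n_out, eta=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.w = 0.1 * rng.normal(size=(n_in, n_out))       # fixed component
        self.alpha = 0.01 * rng.normal(size=(n_in, n_out))  # plastic gain
        self.hebb = np.zeros((n_in, n_out))                 # lifetime trace
        self.eta = eta                                      # trace update rate

    def forward(self, x):
        y = np.tanh(x @ (self.w + self.alpha * self.hebb))
        # Hebbian update: decaying running average of input/output products.
        self.hebb = (1 - self.eta) * self.hebb + self.eta * np.outer(x, y)
        return y

layer = PlasticLayer(8, 4)
y = layer.forward(np.random.default_rng(1).normal(size=8))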

1992
Arnfried Ossen

Self-supervised backpropagation is an unsupervised learning procedure for feedforward networks, where the desired output vector is identical to the input vector. For backpropagation, we are able to use powerful simulators running on parallel machines. Topology-preserving maps, on the other hand, can be developed by a variant of the competitive learning procedure. However, in a degenerate cas...
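
A minimal sketch of self-supervised backpropagation as described here: a one-hidden-layer autoencoder trained by ordinary backpropagation with the input itself as the desired output. The sizes and learning rate are illustrative.

import numpy as np

rng = np.random.default_rng(0)
W1 = 0.1 * rng.normal(size=(16, 4))   # encoder: 16 inputs -> 4 hidden units
W2 = 0.1 * rng.normal(size=(4, 16))   # decoder: 4 hidden units -> 16 outputs
lr = 0.05

for step in range(1000):
    x = rng.normal(size=16)
    h = np.tanh(x @ W1)               # hidden code
    y = h @ W2                        # reconstruction
    err = y - x                       # the target is the input itself
    # Backpropagate the reconstruction error through both layers.
    grad_W2 = np.outer(h, err)
    grad_h = W2 @ err
    grad_W1 = np.outer(x, grad_h * (1 - h ** 2))   # tanh derivative
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1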

Journal: CoRR 2017
Thomas Frerix Thomas Möllenhoff Michael Möller Daniel Cremers

We propose proximal backpropagation (ProxProp) as a novel algorithm that takes implicit instead of explicit gradient steps to update the network parameters during neural network training. Our algorithm is motivated by the step size limitation of explicit gradient descent, which poses an impediment for optimization. ProxProp is developed from a general point of view on the backpropagation algori...
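
A toy illustration of the explicit-versus-implicit step distinction that motivates ProxProp, on a badly conditioned quadratic where the proximal (implicit) step has a closed form. This is not the ProxProp algorithm itself, only the two step types it contrasts.

import numpy as np

A = np.array([[3.0, 0.0], [0.0, 100.0]])   # quadratic f(x) = 0.5 x.A.x - b.x
b = np.array([1.0, 1.0])
tau = 0.02                                 # tau * lambda_max = 2.0: explicit step at its stability edge

def grad(x):
    return A @ x - b

x_explicit = np.zeros(2)
x_implicit = np.zeros(2)
for _ in range(50):
    # Explicit gradient step: x <- x - tau * grad(x).
    x_explicit = x_explicit - tau * grad(x_explicit)
    # Implicit (proximal) step: solve x_new = x - tau * grad(x_new),
    # i.e. (I + tau * A) x_new = x + tau * b.
    x_implicit = np.linalg.solve(np.eye(2) + tau * A, x_implicit + tau * b)

# The implicit step converges for any tau > 0, while the explicit step
# oscillates in the stiff coordinate here instead of converging.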
