Search results for: back propagation

Number of results: 256,056

2001
Mohd Yusoff Mashor

This paper presents a performance comparison between the back propagation, recursive prediction error (RPE) and modified recursive prediction error (MRPE) algorithms for training multilayered perceptron networks. Back propagation is a steepest-descent-type algorithm that normally has a slow convergence rate, and its search for the global minimum often becomes trapped at poor local minima. RPE and MR...
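The steepest-descent behaviour described above can be sketched in a few lines. This is a generic toy implementation (NumPy, sigmoid units, a fixed learning rate, XOR as the training set), not the networks or data of the paper:

```python
# Minimal back propagation as steepest descent on a one-hidden-layer
# perceptron. Weights move by a fixed step along the negative gradient,
# which is why convergence is slow and local minima can trap the search.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy data: XOR, which a network with one hidden layer can represent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4))   # input -> hidden weights
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
lr = 1.0                       # fixed steepest-descent step size

for _ in range(5000):
    h = sigmoid(X @ W1)                    # forward pass
    out = sigmoid(h @ W2)
    err = out - y                          # derivative of squared error
    d_out = err * out * (1 - out)          # backward pass through sigmoids
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out                 # gradient-descent updates
    W1 -= lr * X.T @ d_h

print(np.round(out.ravel(), 2))
```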

2012
Victor E. S. Parahyba Eduardo S. Rosa Júlio C. M. Diniz Vitor B. Ribeiro Júlio C. R. F. Oliveira

High-order-modulation nonlinear effects have been identified as the main limitation in coherent optical fiber transmission. Digital back-propagation algorithms are among the methods currently studied to cope with this impairment and extend a system's maximum reach. In this article, we analyze digital back-propagation performance in a 224 Gb/s dual-polarization 16QAM optical coherent system. ...
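As a rough illustration of the idea (a toy model, not the 224 Gb/s system studied in the article), a split-step sketch: propagate a complex field through dispersion and Kerr nonlinearity, then undo both impairments by stepping backward in reverse order with negated parameters:

```python
# Toy digital back-propagation via the split-step method. The forward
# "fiber" applies dispersion then a Kerr nonlinear phase each step; DBP
# undoes the two operations in reverse order, recovering the field.
import numpy as np

def _disp(n, beta2, dz):
    w = 2 * np.pi * np.fft.fftfreq(n)            # angular frequency grid
    return np.exp(-0.5j * beta2 * w ** 2 * dz)   # dispersion phase factor

def fiber(field, beta2, gamma, dz, steps):
    """Forward split-step: dispersion, then Kerr phase, each step."""
    d = _disp(field.size, beta2, dz)
    for _ in range(steps):
        field = np.fft.ifft(np.fft.fft(field) * d)
        field = field * np.exp(1j * gamma * np.abs(field) ** 2 * dz)
    return field

def dbp(field, beta2, gamma, dz, steps):
    """Digital back-propagation: undo each forward step in reverse order."""
    d = _disp(field.size, -beta2, dz)
    for _ in range(steps):
        field = field * np.exp(-1j * gamma * np.abs(field) ** 2 * dz)
        field = np.fft.ifft(np.fft.fft(field) * d)
    return field

rng = np.random.default_rng(0)
# A unit-power QPSK-like transmit field (illustrative values throughout).
tx = np.exp(1j * rng.choice([np.pi / 4, 3 * np.pi / 4,
                             -3 * np.pi / 4, -np.pi / 4], 256))
rx = fiber(tx, beta2=0.1, gamma=0.05, dz=1.0, steps=20)   # channel
eq = dbp(rx, beta2=0.1, gamma=0.05, dz=1.0, steps=20)     # equalizer
```

Because the Kerr phase rotation leaves |field| unchanged, each nonlinear step is exactly invertible, so this toy DBP recovers the transmit field to machine precision.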

2012
Klaus MAYER Prashanth K. CHINTA Karl-Jörg LANGENBERG Martin KRAUSE

Imaging methods like the Synthetic Aperture Focusing Technique (SAFT) are based on the principle of propagating the measured ultrasonic wave field back to its field sources. In the case of a pulse-echo measurement, this back propagation, under some approximating conditions, leads to an image of the scattering object with well-known properties and imperfections. The back propagation through the object ...

2013
Priyanka Sharma Asha Mishra

The back propagation algorithm (BPA) suffers from high complexity and the local-minima problem, so we use Particle Swarm Optimization (PSO) algorithms to reduce and optimize it. In this paper, two variants of Particle Swarm Optimization (PSO), PSO_Hill and PSO_A*, are used as the optimization algorithm. The PSO_Hill and PSO_A* algorithms are analyzed and evaluated on the basis of their advantages, and applied to feed forwa...
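A minimal generic PSO sketch (plain global-best PSO, not the paper's PSO_Hill or PSO_A* variants) shows how a swarm searches a multi-modal function whose local minima would trap steepest descent:

```python
# Global-best particle swarm optimization on a 1-D multi-modal function.
# Each particle is pulled toward its own best position and the swarm's
# best, letting the swarm hop between basins that trap gradient descent.
import numpy as np

rng = np.random.default_rng(1)

def f(x):
    # Test function with several local minima; global minimum near x = -0.52.
    return x ** 2 + 10 * np.sin(3 * x)

pos = rng.uniform(-5, 5, size=30)        # particle positions
vel = np.zeros(30)
pbest = pos.copy()                       # each particle's best position
gbest = pbest[np.argmin(f(pbest))]       # swarm's best position

for _ in range(200):
    r1, r2 = rng.random(30), rng.random(30)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    better = f(pos) < f(pbest)
    pbest = np.where(better, pos, pbest)
    gbest = pbest[np.argmin(f(pbest))]
```

In hybrid schemes like the one the abstract describes, a swarm of this kind searches the network's weight space (or tunes BPA) instead of a 1-D toy function.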

Journal: :Complex Systems 1988
Gerald Tesauro Bob Janssens

Abstract. We present an empirical study of the required training time for neural networks to learn to compute the parity function using the back-propagation learning algorithm, as a function of the number of inputs. The parity function is a Boolean predicate whose order is equal to the number of inputs. We find that the training time behaves roughly as 4^n, where n is the num...
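The parity predicate itself is easy to state directly; a small sketch (the helper name `parity` is illustrative, not from the paper):

```python
# Parity of n Boolean inputs: 1 iff an odd number of inputs are 1.
# Its value depends on every input, which is why its order equals n
# and why it is a hard target for back-propagation learning.
from itertools import product

def parity(bits):
    return sum(bits) % 2

# Enumerate the full truth table for n = 3.
n = 3
table = {bits: parity(bits) for bits in product([0, 1], repeat=n)}
```

Flipping any single input always flips the output, so no proper subset of the inputs determines the function.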

2000
Rolf Pfeifer

Multilayer feed-forward networks, or multilayer perceptrons (MLPs), have one or several "hidden" layers of nodes. This implies that they have two or more layers of weights. The limitations of simple perceptrons do not apply to MLPs. In fact, as we will see later, a network with just one hidden layer can represent any Boolean function (including the XOR which is, as we saw, not linearly separab...
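The representability claim can be illustrated with hand-chosen weights (a sketch, not from the text): a single hidden layer of threshold units computes XOR with no training involved.

```python
# XOR with one hidden layer of threshold units and fixed weights.
# Hidden unit 0 computes OR, hidden unit 1 computes AND; the output
# fires when OR is on and AND is off, i.e. exactly XOR.
import numpy as np

def step(x):
    return (x > 0).astype(int)

W1 = np.array([[1, 1], [1, 1]])
b1 = np.array([-0.5, -1.5])      # thresholds for OR and AND
W2 = np.array([1, -1])
b2 = -0.5

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
h = step(X @ W1 + b1)
out = step(h @ W2 + b2)
print(out)  # -> [0 1 1 0]
```

A single-layer perceptron cannot do this, since XOR is not linearly separable; the hidden layer supplies the needed intermediate features.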

1986
David C. Plaut Steven J. Nowlan Geoffrey E. Hinton

Rumelhart, Hinton and Williams [Rumelhart et al. 86] describe a learning procedure for layered networks of deterministic, neuron-like units. This paper describes further research on the learning procedure. We start by describing the units, the way they are connected, the learning procedure, and the extension to iterative nets. We then give an example in which a network learns a set of filters t...

2011
Taiwo Ayodele Shikun Zhou Rinat Khusainov

This paper proposes a new email classification model that uses the teaching process of a multi-layer neural network to implement the back propagation technique. Email has become one of the fastest and most efficient forms of communication. However, the growing number of email users and the high volume of email messages can lead to unstructured mailboxes, email congestion, email overload, and unprioritised email mes...

2005
André Grüning

The back-propagation (BP) training scheme is widely used for training network models in cognitive science despite its well-known technical and biological shortcomings. In this paper we contribute to making the BP training scheme more acceptable from a biological point of view in cognitively motivated prediction tasks, overcoming one of its major drawbacks. Traditionally, recurrent neural networ...

[Chart: number of search results per year]
