Search results for: backpropagation

Number of results: 7478

2007
Diego Andina, Aleksandar Jevtić

This paper presents some relevant results of a novel variant of the Backpropagation Algorithm applied during the Multilayer Perceptron learning phase. The novelty consists in a weighting operation applied while the MLP learns its weights. The purpose is to modify the Mean Square Error objective, giving more relevance to less frequent training patterns and reducing the relevance of frequent ones. Th...
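A minimal sketch of the kind of weighting the abstract describes, assuming an inverse-class-frequency weighting of a mean-square-error objective; the function name and weighting rule below are illustrative assumptions, not the authors' exact formulation.

import numpy as np

def weighted_mse(y_pred, y_true, class_labels):
    """MSE in which patterns from rare classes receive larger weights (assumed scheme)."""
    classes, counts = np.unique(class_labels, return_counts=True)
    freq = dict(zip(classes, counts / len(class_labels)))
    # Inverse-frequency weights: less frequent patterns get more relevance.
    w = np.array([1.0 / freq[c] for c in class_labels])
    w /= w.sum()
    per_pattern_error = np.sum((y_pred - y_true) ** 2, axis=1)
    return float(np.sum(w * per_pattern_error))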

1994
P. Henaff, M. Milgram, J. Rabit

This paper presents experimental results of an original approach to Neural Network learning architectures for the control and adaptive control of mobile robots. The basic idea is to use a non-recurrent multilayer network and the backpropagation algorithm without desired outputs, but with a quadratic criterion that specifies the control objective. To illustrate this method, we consider an e...
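A rough sketch of training a controller without desired outputs, assuming a toy linear plant and a single tanh layer; the plant model, gains, and dimensions are illustrative assumptions, not the authors' robot setup.

import numpy as np

def controller(x, W):
    """Feedforward network (single tanh layer) producing a control signal u."""
    return np.tanh(W @ x)

def plant(x, u):
    """Toy linear plant step standing in for the mobile robot (assumption)."""
    return x + 0.1 * u

# One gradient step: no desired network output is used; instead the quadratic
# criterion J = 0.5 * ||x_next - x_ref||^2 is differentiated through the plant
# and the network by the chain rule.
x, x_ref = np.array([1.0, -0.5]), np.zeros(2)
W = np.zeros((2, 2))

u = controller(x, W)
x_next = plant(x, u)
dJ_du = 0.1 * (x_next - x_ref)              # dJ/du through the plant model
grad_W = np.outer(dJ_du * (1 - u ** 2), x)  # dJ/dW through the tanh layer
W -= 0.5 * grad_W                           # backpropagation-style update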

Journal: Lecture notes in networks and systems, 2021

Artificial Intelligence algorithms have been steadily increasing in popularity and usage. Deep Learning allows neural networks to be trained using huge datasets and removes the need for human-extracted features, as it automates the feature-learning process. At the heart of training deep networks, such as Convolutional Neural Networks, we find backpropagation, which, by computing the gradient of the loss function wi...
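For reference, a minimal sketch of the gradient computation backpropagation performs, shown for a single fully connected hidden layer with a mean-square-error loss rather than a convolutional network; variable names and shapes are assumptions.

import numpy as np

def forward_backward(x, y, W1, W2):
    """One forward/backward pass for a one-hidden-layer network with MSE loss."""
    h = np.tanh(W1 @ x)                               # hidden activations
    y_hat = W2 @ h                                    # linear output layer
    loss = 0.5 * np.sum((y_hat - y) ** 2)

    delta_out = y_hat - y                             # dL/dy_hat
    grad_W2 = np.outer(delta_out, h)                  # dL/dW2
    delta_hidden = (W2.T @ delta_out) * (1 - h ** 2)  # chain rule through tanh
    grad_W1 = np.outer(delta_hidden, x)               # dL/dW1
    return loss, grad_W1, grad_W2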

Journal: CoRR, 2016
Pierre Baldi, Peter J. Sadowski, Zhiqin Lu

Abstract: Random backpropagation (RBP) is a variant of the backpropagation algorithm for training neural networks, where the transposes of the forward matrices are replaced by fixed random matrices in the calculation of the weight updates. It is remarkable both because of its effectiveness, in spite of using random matrices to communicate error information, and because it completely removes the ...
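A small sketch contrasting the two error-propagation rules the abstract compares, assuming a single tanh hidden layer; the dimensions and the NumPy setup are illustrative choices, not the paper's experiments.

import numpy as np

hidden_dim, out_dim = 8, 3
rng = np.random.default_rng(0)
B = rng.standard_normal((hidden_dim, out_dim))  # fixed random feedback matrix

def hidden_error(W2, delta_out, h, use_rbp=True):
    """Error signal propagated back to a tanh hidden layer."""
    if use_rbp:
        feedback = B @ delta_out      # RBP: fixed random matrix replaces W2.T
    else:
        feedback = W2.T @ delta_out   # standard backpropagation
    return feedback * (1 - h ** 2)    # chain rule through the tanh nonlinearity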

Journal: Neural computation, 1999
George D. Magoulas, Michael N. Vrahatis, George S. Androulakis

This article focuses on gradient-based backpropagation algorithms that use either a common adaptive learning rate for all weights or an individual adaptive learning rate for each weight and apply the Goldstein/Armijo line search. The learning-rate adaptation is based on descent techniques and estimates of the local Lipschitz constant that are obtained without additional error function and gradi...
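A rough sketch of a Goldstein/Armijo-style backtracking choice of learning rate along the negative gradient, in the spirit of the adaptation schemes described above; the constants and the quadratic example function are assumptions, not the article's exact procedure.

import numpy as np

def armijo_step(f, grad_f, w, eta0=1.0, beta=0.5, sigma=1e-4, max_halvings=30):
    """One weight update with a backtracking (Armijo) choice of learning rate."""
    g = grad_f(w)
    eta = eta0
    for _ in range(max_halvings):
        # Accept eta once the sufficient-decrease condition holds.
        if f(w - eta * g) <= f(w) - sigma * eta * np.dot(g, g):
            break
        eta *= beta  # otherwise shrink the step
    return w - eta * g

# Example on a simple quadratic error surface.
f = lambda w: 0.5 * np.dot(w, w)
grad_f = lambda w: w
w_new = armijo_step(f, grad_f, np.array([3.0, -2.0]))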

2000
Chris Charalambous, Andreas Charitou, Froso Kaourou

This study compares the predictive performance of three neural network methods, namely Learning Vector Quantization, the Radial Basis Function network, and the Feedforward network trained with the conjugate gradient optimization algorithm, with the performance of logistic regression and the backpropagation algorithm. All these methods are applied to a dataset of 139 matched-pairs of bankrupt and n...

1989
Raymond J. Mooney, Jude W. Shavlik, Geoffrey G. Towell, Alan Gove

Despite the fact that many symbolic and connectionist (neural net) learning algorithms address the same problem of learning from classified examples, very little is known regarding their comparative strengths and weaknesses. This paper presents the results of experiments comparing the ID3 symbolic learning algorithm with the perceptron and back-propagation connectionist learning algorith...

2013
Gunjan Mehta, Sonia Vatta

Face recognition systems identify human faces through complex computational techniques. The paper explains two different algorithms for feature extraction: Principal Component Analysis and the Fisher Faces algorithm. It then explains how images can be recognized using a backpropagation algorithm on a feedforward neural network. Two training databases, one containing 20 images a...
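An illustrative pipeline in the spirit of the abstract, assuming scikit-learn's PCA and MLPClassifier as stand-ins for the paper's PCA feature extraction and backpropagation-trained feedforward network; the image size, component count, and placeholder data are assumptions.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
faces = rng.random((40, 32 * 32))      # 40 flattened 32x32 face images (placeholder data)
labels = np.repeat(np.arange(20), 2)   # 20 subjects, 2 images each (assumed split)

# PCA ("eigenface"-style) feature extraction, then a feedforward net trained
# with backpropagation on the reduced features.
features = PCA(n_components=15).fit_transform(faces)
clf = MLPClassifier(hidden_layer_sizes=(30,), max_iter=2000, random_state=0)
clf.fit(features, labels)
print(clf.score(features, labels))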

1999
Martin Mandischer, Hannes Geyer, Peter Ulbig

In this paper we report results for the prediction of thermodynamic properties based on neural networks, evolutionary algorithms, and a combination of the two. We compare backpropagation-trained networks and evolution-strategy-trained networks with two physical models. In our investigation, experimental data for the enthalpy of vaporization were taken from the literature. The input information for b...

1994
Raúl Rojas

In this paper we present some visualization techniques which assist in understanding the iteration process of learning algorithms for neural networks. In the case of perceptron learning, we show that the algorithm can be visualized as a search on the surface of what we call the boolean sphere. In the case of backpropagation, we show that the iteration path is not just random noise, but that und...

Chart: number of search results per year