Search results for: backpropagation

Number of results: 7478

2016
Dezdemona Gjylapi, Eljona Proko, Alketa Shehu

This paper evaluates the usefulness of neural networks in GDP forecasting. It focuses on comparing a neural network model trained with a genetic algorithm (GANN) against a backpropagation neural network model, both used to forecast the GDP of Albania. GDP forecasting is of particular importance for economic decision-making. The conclusion is that the GANN model achieves higher ...
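The snippet does not spell out the GANN configuration, so the following is a minimal sketch of the general idea: evolving a small feedforward network's weights with a genetic algorithm instead of backpropagation. The network size, GA operators, and toy series are illustrative assumptions, not the authors' setup.

import numpy as np

rng = np.random.default_rng(0)

# Toy series standing in for annual GDP values (illustrative, not Albanian data).
series = np.sin(np.linspace(0, 6, 40)) + np.linspace(0, 1, 40)
X = np.array([series[i:i + 3] for i in range(len(series) - 3)])  # 3 lags as inputs
y = series[3:]

def unpack(w):
    # 3 inputs -> 5 hidden -> 1 output, flattened into a 26-gene chromosome
    W1 = w[:15].reshape(3, 5); b1 = w[15:20]
    W2 = w[20:25].reshape(5, 1); b2 = w[25]
    return W1, b1, W2, b2

def predict(w, X):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)
    return (h @ W2).ravel() + b2

def fitness(w):
    return -np.mean((predict(w, X) - y) ** 2)  # negative MSE: higher is better

pop = rng.normal(0, 0.5, size=(50, 26))        # population of weight vectors
for gen in range(200):
    scores = np.array([fitness(w) for w in pop])
    elite = pop[np.argsort(scores)[-10:]]      # keep the 10 best chromosomes
    parents = elite[rng.integers(0, 10, size=(50, 2))]
    cross = rng.random((50, 26)) < 0.5         # uniform crossover
    pop = np.where(cross, parents[:, 0], parents[:, 1])
    pop += rng.normal(0, 0.05, pop.shape)      # Gaussian mutation
    pop[:10] = elite                           # elitism
best = pop[np.argmax([fitness(w) for w in pop])]
print("GANN training MSE:", -fitness(best))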

2018
Yatin Saraiya

The training of deep neural nets is expensive. We present a predictor-corrector method for the training of deep neural nets. It alternates a predictor pass with a corrector pass using stochastic gradient descent with backpropagation such that there is no loss in validation accuracy. No special modifications to SGD with backpropagation are required by this methodology. Our experiments showed a time...
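The snippet does not describe the predictor and corrector passes in detail; one plausible reading, sketched below, extrapolates the weights along the previous update (predictor) and then takes an ordinary SGD-with-backpropagation step from that point (corrector), which indeed requires no modification to SGD itself. The data, model, and extrapolation rule are assumptions, not the paper's method.

import numpy as np

rng = np.random.default_rng(1)

# Toy binary classification data (illustrative).
X = rng.normal(size=(256, 10))
true_w = rng.normal(size=10)
y = (X @ true_w > 0).astype(float)

def grad(w, xb, yb):
    p = 1.0 / (1.0 + np.exp(-(xb @ w)))   # sigmoid
    return xb.T @ (p - yb) / len(yb)      # logistic-loss gradient

w = np.zeros(10)
prev_step = np.zeros(10)
lr = 0.5
for epoch in range(20):
    for i in range(0, len(X), 32):
        xb, yb = X[i:i + 32], y[i:i + 32]
        w_pred = w + prev_step            # predictor: extrapolate along last step
        step = -lr * grad(w_pred, xb, yb) # corrector: plain SGD step at predicted point
        w = w + step
        prev_step = step

print("train accuracy:", ((X @ w > 0).astype(float) == y).mean())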

Journal: Complex Systems, 1990
Peter J. Gawthrop, Daniel G. Sbarbaro-Hofer

A standard general algorithm, the stochastic approximation algorithm of Albert and Gardner [1], is applied in a new context to compute the weights of a multilayer perceptron network. This leads to a new algorithm, the gain backpropagation algorithm, which is related to, but significantly different from, the standard backpropagation algorithm [2]. Some simulation examples show the potential ...

Journal: Neurocomputing, 2000
José Luis Bernier, Julio Ortega, Ignacio Rojas, Alberto Prieto

This paper proposes a version of the backpropagation algorithm which increases the tolerance of a feedforward neural network against deviations in the weight values. These deviations can originate either when the neural network is mapped onto a given VLSI circuit where the precision and/or weight matching are low, or from physical defects affecting the neural circuits. The modified backpropagation algor...
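The paper's modified backpropagation rule is not given in the snippet; a common stand-in for training toward weight-deviation tolerance, sketched below, is to inject random perturbations into the weights on each forward/backward pass so that learning favours solutions that stay accurate under small deviations. The noise model and network are illustrative assumptions, not the authors' algorithm.

import numpy as np

rng = np.random.default_rng(2)

# Toy regression data (illustrative).
X = rng.normal(size=(200, 4))
y = np.sin(X.sum(axis=1))

W1 = rng.normal(0, 0.5, (4, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr, noise = 0.05, 0.02                    # noise std models weight deviations

for epoch in range(500):
    # Perturb the weights each pass so learning favours noise-tolerant solutions.
    nW1 = W1 + rng.normal(0, noise, W1.shape)
    nW2 = W2 + rng.normal(0, noise, W2.shape)
    h = np.tanh(X @ nW1 + b1)             # forward pass with perturbed weights
    err = (h @ nW2 + b2) - y[:, None]
    # Backpropagation through the perturbed forward pass.
    gW2 = h.T @ err / len(X)
    gb2 = err.mean(axis=0)
    dh = (err @ nW2.T) * (1 - h ** 2)
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1        # updates applied to the clean weights
    W2 -= lr * gW2; b2 -= lr * gb2

h = np.tanh(X @ W1 + b1)
print("final MSE:", float(np.mean((h @ W2 + b2 - y[:, None]) ** 2)))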

1997
Amr Mohamed Radi

Using Genetic Programming. Amr Mohamed Radi and Riccardo Poli, School of Computer Science, The University of Birmingham, Birmingham B15 2TT, UK. Technical Report CSRP-97-21, September 3, 1997. Abstract: The development of the backpropagation learning rule has been a landmark in neural networks. It provides a computational method for training multilayer networks. ...

2014
Bhavna Sharma, K. Venugopalan

Classification is one of the most important tasks in application areas of artificial neural networks (ANN). Training neural networks is a complex task in the supervised learning field of research. The main difficulty in adopting ANN is to find the most appropriate combination of learning, transfer and training functions for the classification task. We compared the performances of three types of tr...

2012
Partha Pratim Sarangi, Banshidhar Majhi, Madhumita Panda, Chien-Yu Huang, Long-Hui Chen, Yueh-Li Chen, Fengming M. Chang

Multilayer perceptrons (MLPs) are widely used for pattern classification and regression problems. The backpropagation (BP) algorithm is a well-known technique for training multilayer perceptrons. However, for optimum training convergence, the learning-rate and momentum parameters need to be tuned by trial and error. Further, the backpropagation algorithm sometimes fails to achieve global conver...
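For reference, the two parameters the abstract says must be hand-tuned enter the classic BP weight update as follows. This is the generic momentum-SGD step, not the authors' variant; the quadratic demo is illustrative.

import numpy as np

def momentum_step(w, v, g, lr=0.1, mu=0.9):
    """One BP update with learning rate lr and momentum mu (both hand-tuned)."""
    v = mu * v - lr * g          # velocity accumulates past gradients
    return w + v, v

# Demo on f(w) = w^2, whose gradient is 2w.
w, v = np.array([5.0]), np.array([0.0])
for _ in range(50):
    w, v = momentum_step(w, v, 2 * w)
print("w after 50 steps:", w)    # approaches the minimum at 0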

2013
Sheela Tiwari, Ram Naresh, Rameshwar Jha

This paper explores the application of artificial neural networks for online identification of a multimachine power system. A recurrent neural network has been proposed as the identifier of the two-area, four-machine system, which is a benchmark system for studying electromechanical oscillations in multimachine power systems. This neural identifier is trained using the static backpropagation alg...

2012
Liang GONG, Chengliang LIU, Yanming LI, Fuqing YUAN

The most widely used algorithm for training multilayer feedforward networks, Error BackPropagation (EBP), is an iterative gradient descent algorithm by nature. Variable stepsize is the key to fast convergence of BP networks. A new optimal stepsize algorithm is proposed for accelerating the training process. It modifies the objective function to reduce the computational complexity of the Jacobi...
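The paper's optimal-stepsize derivation is not reproduced in the snippet; as a simple illustration of variable stepsize, the classic "bold driver" heuristic below grows the learning rate while the loss keeps falling and rejects the step and shrinks the rate after an overshoot. The linear-regression data and all constants are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5])

w = np.zeros(3)
lr = 0.01
loss = float(((X @ w - y) ** 2).mean())
for step in range(200):
    g = 2 * X.T @ (X @ w - y) / len(X)   # least-squares gradient
    w_new = w - lr * g
    new_loss = float(((X @ w_new - y) ** 2).mean())
    if new_loss < loss:                  # progress: accept the step, grow stepsize
        w, loss = w_new, new_loss
        lr *= 1.1
    else:                                # overshoot: reject the step, shrink stepsize
        lr *= 0.5
print("final stepsize:", lr, "final loss:", loss)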

Journal: Applied Mathematics and Computer Science, 2012
Maciej Huk

In this paper the Sigma-if artificial neural network model is considered, which is a generalization of an MLP network with sigmoidal neurons. It was found to be a potentially universal tool for automatic creation of distributed classification and selective attention systems. To overcome the high nonlinearity of the aggregation function of Sigma-if neurons, the training process of the Sigma-if n...
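As a rough sketch of the conditional-aggregation idea behind a Sigma-if neuron: inputs are partitioned into groups, accumulated in order of assumed importance, and aggregation stops once the partial weighted sum crosses a threshold (the selective-attention effect). The grouping, threshold, and activation below are illustrative assumptions, not the exact Sigma-if definition from the paper.

import numpy as np

def sigma_if_neuron(x, w, groups, theta):
    """Accumulate input groups in order, stopping once the partial
    weighted sum crosses the threshold theta."""
    total = 0.0
    for g in groups:                       # groups ordered by assumed importance
        total += float(np.dot(w[g], x[g]))
        if abs(total) >= theta:            # remaining inputs are never read
            break
    return np.tanh(total)                  # sigmoidal output, as in an MLP neuron

x = np.array([0.9, -0.2, 0.1, 0.05])
w = np.array([1.5, 0.8, 0.3, 0.1])
groups = [np.array([0]), np.array([1, 2]), np.array([3])]
print(sigma_if_neuron(x, w, groups, theta=1.0))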
