Search results for: multilayer perceptrons

Number of results: 20290

1995
Friedrich Lange

The training algorithm EKFNet uses an Extended Kalman Filter for supervised learning of feed-forward neural nets. The big difference with respect to ordinary backpropagation methods is the calculation of an N × N covariance matrix which considers the interdependence of the N weights that have to be optimised. Therefore computing time increases quadratically with the number of weights, thus restrict...
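The quadratic cost described in this abstract comes from propagating an N × N weight covariance through each update. A minimal sketch of one EKF training step for a scalar-output network is below; the function names (`ekf_step`, `forward`, `jacobian`) and the choice of a scalar measurement noise `R` are illustrative assumptions, not details from EKFNet itself.

```python
import numpy as np

def ekf_step(w, P, x, d, forward, jacobian, R=1.0):
    """One EKF update for a scalar-output network.

    w: (N,) weight vector, P: (N, N) weight covariance,
    x: input sample, d: desired output.
    forward(w, x) -> prediction; jacobian(w, x) -> dy/dw of shape (N,).
    """
    y = forward(w, x)
    H = jacobian(w, x)              # linearization of the net around w
    S = H @ P @ H + R               # innovation variance (scalar)
    K = (P @ H) / S                 # Kalman gain, shape (N,)
    w = w + K * (d - y)             # weight update driven by the error
    P = P - np.outer(K, H @ P)      # covariance update: O(N^2) time/memory
    return w, P
```

For a linear model (`forward = w · x`, Jacobian = `x`) this reduces to recursive least squares, which makes the quadratic-in-N covariance bookkeeping easy to see.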

Journal: :Adv. Comput. Math. 2002
Wei Wu Guorui Feng Xin Li

Motivated by the problem of training multilayer perceptrons in neural networks, we consider the problem of minimizing E(x) = ∑_{i=1}^{n} f_i(ξ_i · x), where ξ_i ∈ R^s, 1 ≤ i ≤ n, and each f_i(ξ_i · x) is a ridge function. We show that when n is small the problem of minimizing E can be treated as one of minimizing univariate functions, and we use the gradient algorithms for minimizing E when n is moderately la...
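Since each term depends on x only through the scalar ξ_i · x, the gradient of E has the closed form ∇E(x) = ∑_i f_i′(ξ_i · x) ξ_i. A short gradient-descent sketch of minimizing such a sum of ridge functions follows; the function name and step-size schedule are illustrative, not taken from the paper.

```python
import numpy as np

def minimize_ridge_sum(xis, fprimes, x0, lr=0.1, steps=200):
    """Gradient descent on E(x) = sum_i f_i(xi_i . x).

    xis: (n, s) array whose rows are the ridge directions xi_i,
    fprimes: list of scalar derivatives f_i'.
    Uses grad E(x) = sum_i f_i'(xi_i . x) * xi_i.
    """
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        grad = sum(fp(xi @ x) * xi for xi, fp in zip(xis, fprimes))
        x -= lr * grad
    return x
```

With quadratic terms f_i(t) = (t − b_i)^2 this recovers ordinary least squares, a convenient sanity check.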

Journal: :IEEE transactions on neural networks 1999
Pinaki Roy Chowdhury Yashwant Prasad Singh R. A. Chansarkar

A new efficient computational technique for training multilayer feedforward neural networks is proposed. The proposed algorithm consists of two learning phases. The first phase is a local search which implements gradient descent, and the second phase is a direct search scheme which implements dynamic tunneling in weight space, avoiding the local trap and thereby generating the point of next descent. ...
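The two-phase structure (descend to a local minimum, then escape it and descend again) can be sketched as follows. Note the escape phase here is a simple random-perturbation search used as a stand-in for the paper's dynamic tunneling scheme, whose actual mechanism (a tunneling trajectory in weight space) is not reproduced; all names and constants are illustrative.

```python
import numpy as np

def descend_then_tunnel(f, grad, x0, lr=0.05, inner=500, restarts=20,
                        radius=1.0, rng=None):
    """Two-phase minimization: gradient descent, then an escape phase
    (random perturbations standing in for dynamic tunneling) that looks
    for a lower-cost point from which descent can resume."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    for _ in range(restarts):
        for _ in range(inner):                    # phase 1: local descent
            x = x - lr * grad(x)
        for _ in range(50):                       # phase 2: escape the trap
            cand = x + radius * rng.standard_normal(x.shape)
            if f(cand) < f(x):                    # found a lower basin
                x = cand
                break
    return x
```

On a 1-D double well such as f(x) = (x² − 1)² + 0.3x, plain descent from x = 2 stalls in the shallow right-hand minimum, while the escape phase lets the search reach the deeper left-hand one.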

2001
Deniz Erdogmus Deniz Rende Jose C. Principe Tan F. Wong

The minimum error entropy criterion was recently suggested in adaptive system training as an alternative to the mean-square-error criterion, and it was shown to produce better results in many tasks. In this paper, we apply a multilayer perceptron scheme trained with this information theoretic criterion to the problem of nonlinear channel equalization. In our simulations, we use a realistic non...
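One common formulation of the minimum error entropy criterion estimates Rényi's quadratic entropy of the error samples with a Parzen window, so that minimizing the entropy H₂ = −log V amounts to maximizing the "information potential" V. A minimal sketch of that estimator is below; the function name and the fixed Gaussian kernel width `sigma` are illustrative assumptions, not details from this paper's simulations.

```python
import numpy as np

def information_potential(errors, sigma=1.0):
    """Parzen estimate of the quadratic information potential
    V(e) = (1/N^2) * sum_{i,j} G_{sigma*sqrt(2)}(e_i - e_j),
    where G is a Gaussian kernel. Minimizing Renyi's quadratic
    error entropy H2 = -log V is equivalent to maximizing V."""
    e = np.asarray(errors, dtype=float)
    diff = e[:, None] - e[None, :]         # all pairwise error differences
    s2 = 2.0 * sigma**2                    # kernel variance after convolution
    G = np.exp(-diff**2 / (2.0 * s2)) / np.sqrt(2.0 * np.pi * s2)
    return G.mean()
```

Tightly concentrated errors give a larger potential (lower entropy) than widely spread ones, which is what the training criterion rewards.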

Journal: :Complex Systems 1990
Peter J. Gawthrop Daniel G. Sbarbaro-Hofer

A standard general algorithm, the stochastic approximation algorithm of Albert and Gardner [1], is applied in a new context to compute the weights of a multilayer perceptron network. This leads to a new algorithm, the gain backpropagation algorithm, which is related to, but significantly different from, the standard backpropagation algorithm [2]. Some simulation examples show the potential ...
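Stochastic approximation in the Robbins-Monro style updates the parameters with noisy gradient samples under a decreasing gain sequence. The sketch below illustrates that generic scheme only; it is not the Albert-Gardner algorithm or the gain backpropagation algorithm of the paper, and the names and gain schedule are assumptions.

```python
import numpy as np

def stochastic_approximation(grad_sample, theta0, a0=1.0, steps=2000, rng=None):
    """Robbins-Monro iteration theta_{k+1} = theta_k - a_k * g_k with
    decreasing gains a_k = a0 / (k + 1); g_k is a noisy gradient sample.
    The gains satisfy sum a_k = inf and sum a_k^2 < inf, as required
    for convergence."""
    rng = np.random.default_rng(rng)
    theta = np.asarray(theta0, dtype=float)
    for k in range(steps):
        theta = theta - (a0 / (k + 1)) * grad_sample(theta, rng)
    return theta
```

For a noisy linear gradient g(θ) = θ − θ* + noise, the iterate averages out the noise and settles near θ*.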

Journal: :Physical review. E, Statistical, nonlinear, and soft matter physics 2006
Kazushi Mimura Masato Okada

Statistical mechanics is applied to lossy compression using multilayer perceptrons for unbiased Boolean messages. We utilize a treelike committee machine (committee tree) and treelike parity machine (parity tree) whose transfer functions are monotonic. For compression using a committee tree, a lower bound on the achievable distortion decreases as the number of hidden units K increases. However,...
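The two treelike architectures named in the abstract differ only in how the hidden signs are combined: a committee tree takes their majority vote, a parity tree takes their product. A minimal sketch with sign transfer functions (one common choice; the paper only requires monotonic transfer functions) is below, assuming K is odd so the committee vote has no ties.

```python
import numpy as np

def committee_tree(ws, x_parts):
    """Treelike committee machine: each of the K hidden units sees its own
    disjoint slice of the input; the output is the majority vote of the
    hidden signs (K assumed odd to avoid ties)."""
    hidden = np.sign([w @ x for w, x in zip(ws, x_parts)])
    return np.sign(hidden.sum())

def parity_tree(ws, x_parts):
    """Treelike parity machine: the output is the product of hidden signs."""
    hidden = np.sign([w @ x for w, x in zip(ws, x_parts)])
    return np.prod(hidden)
```

The "tree" refers to the non-overlapping receptive fields: each hidden unit has its own weight vector `w` and input slice, with no shared inputs.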

Journal: :CoRR 2018
V. I. Avrutskiy

Abstract—Resi...

2001
Shun-ichi Amari Hyeyoung Park Tomoko Ozeki

Singularities are ubiquitous in the parameter space of hierarchical models such as multilayer perceptrons. At singularities, the Fisher information matrix degenerates, and the Cramér-Rao paradigm no longer holds, implying that classical model selection theory such as AIC and MDL cannot be applied. It is important to study the relation between the generalization error and the training erro...
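The degeneracy is easy to exhibit on a toy one-hidden-unit model f(x; a, w) = a·tanh(wx): when a = 0 the output no longer depends on w, so the Fisher information matrix loses rank. The sketch below computes the Fisher matrix empirically under Gaussian output noise; the function name and the sample grid are illustrative choices, not taken from the paper.

```python
import numpy as np

def fisher_matrix(a, w, xs, noise_var=1.0):
    """Fisher information of f(x; a, w) = a * tanh(w * x) under Gaussian
    output noise: F = (1/sigma^2) * E[grad f grad f^T], averaged over
    the sample inputs xs."""
    xs = np.asarray(xs, dtype=float)
    t = np.tanh(w * xs)
    df_da = t                              # d f / d a
    df_dw = a * xs * (1.0 - t**2)          # d f / d w  (zero when a = 0)
    J = np.stack([df_da, df_dw], axis=1)   # (n, 2) per-sample gradients
    return (J.T @ J) / (len(xs) * noise_var)

xs = np.linspace(-2.0, 2.0, 101)
F_regular = fisher_matrix(1.0, 1.0, xs)    # full rank at a generic point
F_singular = fisher_matrix(0.0, 1.0, xs)   # a = 0: the w-direction carries
                                           # no information, F is singular
```

At such points F is not invertible, which is exactly why the Cramér-Rao bound and the usual asymptotics behind AIC/MDL break down.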

Chart of the number of search results per year
