Search results for: multilayer perceptrons

Number of results: 20290

Journal: :Neural computation 2008
Haikun Wei Jun Zhang Florent Cousseau Tomoko Ozeki Shun-ichi Amari

We explicitly analyze the trajectories of learning near singularities in hierarchical networks, such as multilayer perceptrons and radial basis function networks, which include permutation symmetry of hidden nodes, and show their general properties. Such symmetry induces singularities in their parameter space, where the Fisher information matrix degenerates and odd learning behaviors, especiall...
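
As a concrete illustration of the degeneracy this abstract describes (our own sketch, not from the paper; all names and the Gaussian-noise regression model are assumptions), the following code builds a toy one-hidden-layer network with two hidden units and numerically estimates its Fisher information matrix. At a point where the two hidden units carry identical weights, i.e. on the permutation-symmetric singularity, the matrix loses rank:

```python
import numpy as np

def mlp_output(x, params):
    """Tiny MLP: y = v1*tanh(w1.x) + v2*tanh(w2.x)."""
    w1, w2, v1, v2 = params[0:2], params[2:4], params[4], params[5]
    return v1 * np.tanh(w1 @ x) + v2 * np.tanh(w2 @ x)

def fisher_matrix(params, inputs, eps=1e-6):
    """Empirical Fisher information under y = f(x; theta) + unit Gaussian
    noise: F = E_x[ grad(f) grad(f)^T ], with gradients taken by central
    finite differences."""
    F = np.zeros((len(params), len(params)))
    for x in inputs:
        g = np.zeros(len(params))
        for i in range(len(params)):
            p_plus, p_minus = params.copy(), params.copy()
            p_plus[i] += eps
            p_minus[i] -= eps
            g[i] = (mlp_output(x, p_plus) - mlp_output(x, p_minus)) / (2 * eps)
        F += np.outer(g, g)
    return F / len(inputs)

rng = np.random.default_rng(0)
inputs = rng.normal(size=(500, 2))

# Generic point: distinct hidden units -> full-rank Fisher matrix.
generic = np.array([1.0, -0.5, 0.3, 0.8, 1.0, -0.7])
# Singular point: w1 == w2, so swapping the hidden units leaves the
# network function unchanged and the Fisher matrix degenerates.
singular = np.array([1.0, -0.5, 1.0, -0.5, 1.0, -0.7])

for name, p in [("generic", generic), ("singular", singular)]:
    rank = np.linalg.matrix_rank(fisher_matrix(p, inputs), tol=1e-8)
    print(f"{name}: Fisher rank = {rank} of {len(p)}")
```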

2001
Deniz Erdogmus Jose C. Principe

We have previously proposed quadratic Renyi's error entropy, estimated with a Parzen density estimator using Gaussian kernels, as an alternative optimality criterion for supervised neural network training, and showed that it produces better performance on the test data compared to the MSE. The error entropy criterion imposes the minimization of average information content in the error signal rath...
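
For readers unfamiliar with the criterion, here is a minimal sketch of it (ours, assuming one-dimensional errors and a fixed kernel width sigma, both choices illustrative rather than the paper's); training would minimize this quantity instead of the MSE:

```python
import numpy as np

def quadratic_renyi_entropy(errors, sigma=0.5):
    """Quadratic Renyi entropy H2 = -log(V) of the error samples, where
    V is the information potential estimated with a Parzen window of
    Gaussian kernels. The pairwise evaluations use variance 2*sigma^2,
    which results from convolving two Gaussian kernels of width sigma."""
    e = np.asarray(errors).ravel()
    diff = e[:, None] - e[None, :]          # all pairwise differences
    var = 2.0 * sigma ** 2
    kernel = np.exp(-diff ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    information_potential = kernel.mean()   # (1/N^2) * double sum
    return -np.log(information_potential)

# Minimizing H2 concentrates the error distribution: a tight error
# cloud has lower entropy than a spread-out one.
rng = np.random.default_rng(1)
print(quadratic_renyi_entropy(rng.normal(scale=0.1, size=200)))  # smaller
print(quadratic_renyi_entropy(rng.normal(scale=1.0, size=200)))  # larger
```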

Journal: :Neural Computation 1994
Thorsteinn S. Rögnvaldsson

The Langevin updating rule, in which noise is added to the weights during learning, is presented and shown to improve learning on problems with initially ill-conditioned Hessians. This is particularly important for multilayer perceptrons with many hidden layers, which often have ill-conditioned Hessians. In addition, Manhattan updating is shown to have a similar effect.
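
Both update rules are simple to state. A minimal sketch, assuming plain gradient descent as the baseline (function names and step sizes are illustrative, not from the paper):

```python
import numpy as np

def langevin_update(weights, grad, lr=0.01, noise_std=0.01, rng=None):
    """Langevin updating: an ordinary gradient step plus i.i.d. Gaussian
    noise added to the weights, which can help training escape the bad
    directions of an initially ill-conditioned Hessian."""
    rng = rng or np.random.default_rng()
    return weights - lr * grad + noise_std * rng.normal(size=weights.shape)

def manhattan_update(weights, grad, lr=0.01):
    """Manhattan updating: a fixed-magnitude step in the direction of the
    sign of each gradient component, ignoring the gradient's scale."""
    return weights - lr * np.sign(grad)
```

Because Manhattan updating discards gradient magnitudes entirely, it is insensitive to the spread of Hessian eigenvalues, which is consistent with the similar effect reported above.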

2004
Maribel García Arenas Pedro Ángel Castillo Valdivieso Gustavo Romero Fatima Rateb Juan Julián Merelo Guervós

When designing an artificial neural network (ANN), it is important to optimise the network architecture and the learning coefficients of the training algorithm, as well as the time the network training phase takes, since this is the most time-consuming phase. In this paper an approach to cooperative co-evolutionary optimisation of multilayer perceptrons (MLP) is presented. The cooperative co-evoluti...
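
A toy sketch of the cooperative co-evolutionary idea, not the paper's algorithm: two subpopulations, one of architectures (hidden-layer sizes) and one of learning coefficients, each ranked by pairing with the other subpopulation's current best representative. The fitness function here is a synthetic stand-in for actually training an MLP:

```python
import random

def train_and_score(n_hidden, lr):
    """Stand-in for training an MLP with this architecture and learning
    rate and returning validation accuracy. Synthetic: rewards a
    mid-sized hidden layer and a moderate learning rate."""
    return -abs(n_hidden - 12) * 0.02 - abs(lr - 0.05) * 2.0

def coevolve(generations=30, pop=10, seed=0):
    rng = random.Random(seed)
    archs = [rng.randint(2, 40) for _ in range(pop)]        # hidden sizes
    rates = [rng.uniform(0.001, 0.5) for _ in range(pop)]   # learning rates
    for _ in range(generations):
        # Rank each subpopulation by fitness when combined with the
        # other subpopulation's best individual so far.
        archs.sort(key=lambda h: train_and_score(h, rates[0]), reverse=True)
        rates.sort(key=lambda r: train_and_score(archs[0], r), reverse=True)
        # Replace the worst half of each subpopulation with mutated elites.
        half = pop // 2
        archs[half:] = [max(2, h + rng.randint(-3, 3)) for h in archs[:half]]
        rates[half:] = [max(1e-4, r * rng.uniform(0.5, 2.0)) for r in rates[:half]]
    return archs[0], rates[0]

print(coevolve())  # converges toward the optimum of the stand-in score
```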

2006
Zoltán Szabó András Lőrincz

Multilayer Perceptrons (MLP) are formulated within the Support Vector Machine (SVM) framework by constructing multilayer networks of SVMs. The coupled approximation scheme can take advantage of the generalization capabilities of the SVM and the combinatory feature of the hidden layer of the MLP. The network, the Multilayer Kerceptron (MLK), assumes its own backpropagation procedure that we shall derive here...

1990
Haibo Li Torbjörn Kronander Ingemar Ingemarsson

In this paper we present a novel classifier which integrates a multilayer perceptron and an error-correcting decoder. There are two stages in the classifier: in the first stage, feature vectors are mapped from feature space to code space by a multilayer perceptron; in the second stage, error-correcting decoding is done in code space, by which the index of the noisy codeword can be obtai...
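
The second stage is easy to picture. Below is a minimal sketch of error-correcting decoding in code space (ours; the codebook is a toy stand-in, not the code used in the paper): the noisy code-space vector produced by the MLP is mapped to the nearest codeword, whose index gives the class:

```python
import numpy as np

# Each class is assigned a codeword; the (noisy) vector produced by the
# MLP in stage 1 is decoded to the nearest codeword in stage 2.
CODEBOOK = np.array([           # toy 4-class code, 7-bit codewords (+/-1)
    [ 1,  1,  1,  1,  1,  1,  1],
    [ 1, -1,  1, -1,  1, -1,  1],
    [-1,  1,  1, -1, -1,  1,  1],
    [-1, -1,  1,  1, -1, -1,  1],
])

def decode(mlp_output):
    """Return the index of the codeword closest (in Euclidean distance)
    to the MLP's code-space output."""
    dists = np.linalg.norm(CODEBOOK - mlp_output, axis=1)
    return int(np.argmin(dists))

# A noisy version of codeword 2 still decodes to class 2.
noisy = CODEBOOK[2] + np.random.default_rng(2).normal(scale=0.4, size=7)
print(decode(noisy))
```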

1997
R Urbanczik

Zero-temperature Gibbs learning is considered for a connected committee machine with K hidden units. For large K, the scale of the learning curve strongly depends on the target rule. When learning a perceptron, the sample size P needed for optimal generalization scales so that N ≪ P ≪ KN, where N is the dimension of the input. This even holds for a noisy perceptron rule if a new input is classified ...

Journal: :Ingénierie Des Systèmes D'information 2022

The millimeter-wave (mmWave) frequencies planned for 6G systems present challenges for channel modeling. At these frequencies, surface roughness affects wave propagation and causes severe attenuation of mmWave signals. In general, beamforming techniques compensate for this problem. Analog beamforming has some major advantages over its digital counterpart because it uses low-cost phase shifters massive MIMO ...
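
To make the analog-beamforming point concrete, here is a minimal sketch (our illustration; the array size, element spacing, and angles are assumptions) of a phase-shifter-only beamformer for a uniform linear array. Every weight has unit magnitude, the constant-modulus constraint that using phase shifters, rather than per-antenna digital chains, imposes:

```python
import numpy as np

def steering_vector(n_antennas, theta, spacing=0.5):
    """Array response of a uniform linear array (element spacing in
    wavelengths) for a plane wave from angle theta (radians)."""
    k = np.arange(n_antennas)
    return np.exp(1j * 2 * np.pi * spacing * k * np.sin(theta))

def analog_beamformer(n_antennas, theta, spacing=0.5):
    """Phase-shifter-only weights: every element has unit magnitude,
    so only the phases are adjusted."""
    return np.conj(steering_vector(n_antennas, theta, spacing)) / np.sqrt(n_antennas)

n = 64                              # antenna count typical of mmWave arrays
w = analog_beamformer(n, np.deg2rad(20.0))

# Array gain toward the target direction vs. an off-target direction.
for angle in (20.0, 45.0):
    a = steering_vector(n, np.deg2rad(angle))
    print(f"{angle:5.1f} deg: gain = {abs(w @ a)**2:.2f}")
```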
