Search results for: multilayer perceptrons

Number of results: 20290

2003
Dalius Navakauskas

A quick gradient training algorithm for a specific neural network structure, the extra-reduced-size lattice–ladder multilayer perceptron, is introduced. The derivation of the algorithm exploits the author's recently found simplest way of exactly computing gradients for the rotation parameters of a lattice–ladder filter. The developed neural network training algorithm is optimal in terms of mini...

Journal: :Neurocomputing 2003
Aapo Hyvärinen Ella Bingham

The data model of independent component analysis (ICA) gives a multivariate probability density that describes many kinds of sensory data better than classical models like Gaussian densities or Gaussian mixtures. When only a subset of the random variables is observed, ICA can be used for regression, i.e. to predict the missing observations. In this paper, we show that the resulting regression i...

2014
Roland Schäfer

Removal of boilerplate is among the essential tasks in web corpus construction and web indexing. In this paper, we present an improved machine learning approach to general-purpose boilerplate detection for languages based on (extended) Latin alphabets (easily adaptable to other scripts). We keep it highly efficient (around 320 documents per single CPU core second) by using an optimized Multilay...

1997
Tomas Lundin Perry Moerland

Preface: A connectionist system or neural network is a massively parallel network of weighted interconnections, which connect one or more layers of non-linear processing elements (neurons). To fully profit from the inherent parallel processing of these networks, development of parallel hardware implementations is essential. However, these hardware implementations often differ in various ways fro...

2003
Georgios Petkos

This thesis is about modelling carbon flux in forests based on meteorological variables using modern machine learning techniques. The motivation is to better understand the carbon uptake process of trees and to find its driving factors using fully automated techniques. Data from two British forests (Griffin and Harwood) were used, but finally results were obtained only with Harwood becau...

Journal: :Neural networks : the official journal of the International Neural Network Society 2008
Luís M. Silva Joaquim Marques de Sá Luís A. Alexandre

The learning process of a multilayer perceptron requires the optimization of an error function E(y,t) comparing the predicted output, y, and the observed target, t. We review some usual error functions, analyze their mathematical properties for data classification purposes, and introduce a new one, E(Exp), inspired by the Z-EDM algorithm that we have recently proposed. An important property of ...
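The role of the error function E(y, t) can be illustrated with a minimal sketch (this is not the paper's Z-EDM or E(Exp), whose exact forms are not reproduced here): for a sigmoid output unit, the gradient of the cross-entropy error cancels the sigmoid derivative, while the gradient of the squared error does not, which changes how strongly the network learns from a confidently wrong prediction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_mse(z, t):
    # dE/dz for E = 0.5 * (y - t)^2 with y = sigmoid(z);
    # the extra y*(1-y) factor vanishes when y saturates
    y = sigmoid(z)
    return (y - t) * y * (1.0 - y)

def grad_xent(z, t):
    # dE/dz for E = -t*log(y) - (1-t)*log(1-y);
    # the sigmoid derivative cancels, leaving y - t
    return sigmoid(z) - t

# A confidently wrong prediction: target 0, pre-activation strongly positive.
z, t = 5.0, 0.0
print(grad_mse(z, t), grad_xent(z, t))
```

The cross-entropy gradient stays large for the saturated, wrong output, while the squared-error gradient is nearly flat there, one reason the mathematical properties of E(y, t) matter for classification.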

2007
Tautvydas Cibas Patrick Gallinari Olivier Gascuel

This paper describes experimental investigations exploring how the behavior and capabilities of neural networks depend on their complexity. Characteristic behavior patterns are worked out through an artificial problem. In particular, we analyze the dependency of overfitting on NN complexity, characterize the bias-variance evolution of the error, and study the effects of two regularization techniqu...

Journal: :IEEE transactions on neural networks 1991
Shih-Chi Huang Yih-Fang Huang

Fundamental issues concerning the capability of multilayer perceptrons with one hidden layer are investigated. The studies are focused on realizations of functions which map from a finite subset of E^n into E^d. Real-valued and binary-valued functions are considered. In particular, a least upper bound is derived for the number of hidden neurons needed to realize an arbitrary function which ma...
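As a standalone illustration of realizing a binary-valued function with one hidden layer (the weights below are hand-picked for this sketch and are not taken from the paper), two hidden threshold units suffice for XOR on {0,1}^2:

```python
import numpy as np

def step(x):
    # Heaviside threshold activation
    return (x >= 0).astype(float)

# Hand-picked weights: the hidden units act as OR and NAND;
# the output unit ANDs them, which yields XOR.
W1 = np.array([[1.0, 1.0],     # OR-like unit
               [-1.0, -1.0]])  # NAND-like unit
b1 = np.array([-0.5, 1.5])
W2 = np.array([1.0, 1.0])
b2 = -1.5

def mlp(x):
    h = step(W1 @ x + b1)
    return step(W2 @ h + b2)

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, int(mlp(np.array(x, dtype=float))))
```

XOR is the classic function that no single-layer perceptron can realize, so this is the smallest non-trivial instance of the realization question the paper studies.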

2003
Thomas P. Trappenberg

In this chapter a brief review is given of computational systems that are motivated by information processing in the brain, an area that is often called neurocomputing or artificial neural networks. While this is now a well studied and documented area, specific emphasis is given to a subclass of such models, called continuous attractor neural networks, which are beginning to emerge in a wide co...

1997
David J C Mackay Mark N Gibbs

A density network is a neural network that maps from unobserved inputs to observable outputs. The inputs are treated as latent variables so that, for given network parameters, a non-trivial probability density is defined over the output variables. This probabilistic model can be trained by various Monte Carlo methods. The model can discover a description of the observed data in terms of an under...
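A toy sketch of the idea (the mapping below is an assumed fixed nonlinearity, not one of the trained networks from the paper): sampling the latent inputs from a Gaussian prior and pushing them through the network induces a non-trivial density over the outputs, whose moments can be estimated by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(0)

def net(z):
    # Fixed toy "network" mapping latent z to observable y
    return np.tanh(2.0 * z) + 0.1 * z

# Monte Carlo: draw latents from the N(0, 1) prior, map them through
# the network, and estimate moments of the induced output density.
z = rng.standard_normal(100_000)
y = net(z)
print(y.mean(), y.var())
```

Because the mapping is odd and the prior is symmetric, the induced output density has mean near zero, while its shape (bimodal, from the saturating tanh) is quite unlike the Gaussian prior on the latents.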
