Search results for: feedforward neural network

Number of results: 834601

2004
Chulhee Lee

In this paper, we investigate the dimension expansion property of three-layer feedforward neural networks and provide helpful insight into how neural networks define complex decision boundaries. First, we note that adding a hidden neuron is equivalent to expanding the dimension of the space defined by the outputs of the hidden neurons. Thus, if the number of hidden neurons is larger than the numb...
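The dimension-expansion idea in this abstract can be sketched numerically: the hidden layer maps inputs into a space with one coordinate per hidden neuron, so appending a neuron appends a dimension. A minimal NumPy illustration (not the paper's construction; all weights here are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def hidden_outputs(X, W, b):
    """Map inputs into the space spanned by the hidden-neuron outputs."""
    return np.tanh(X @ W + b)  # shape: (n_samples, n_hidden)

X = rng.normal(size=(100, 2))  # 2-D inputs

# With 3 hidden neurons the inputs are embedded in a 3-D space ...
W3, b3 = rng.normal(size=(2, 3)), rng.normal(size=3)
H3 = hidden_outputs(X, W3, b3)

# ... and adding a fourth hidden neuron expands that space by one dimension,
# leaving the first three coordinates unchanged.
W4 = np.hstack([W3, rng.normal(size=(2, 1))])
b4 = np.append(b3, rng.normal())
H4 = hidden_outputs(X, W4, b4)

print(H3.shape, H4.shape)  # (100, 3) (100, 4)
```

In the expanded space, a linear output layer can separate classes that were not separable in the lower-dimensional hidden representation.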

2016
Zhifei Zhang

The derivation of backpropagation in a convolutional neural network (CNN) is conducted based on an example with two convolutional layers. The step-by-step derivation is helpful for beginners. First, the feedforward procedure is described, and then the backpropagation is derived based on the example.
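The feedforward/backward steps the abstract describes can be sketched for a single convolutional layer (a simplification of the paper's two-layer example; the input, kernel, and loss below are arbitrary assumptions). The key fact is that the gradient of the loss with respect to the kernel is itself a "valid" cross-correlation of the input with the upstream gradient:

```python
import numpy as np

def conv2d_valid(x, k):
    """'Valid' 2-D cross-correlation, as used in CNN feedforward passes."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

rng = np.random.default_rng(1)
x = rng.normal(size=(5, 5))   # input feature map
k = rng.normal(size=(3, 3))   # convolution kernel

# Feedforward: y = x (*) k, with squared-error loss against a zero target.
y = conv2d_valid(x, k)
t = np.zeros_like(y)
dy = y - t                    # dL/dy

# Backpropagation: dL/dk is the valid cross-correlation of x with dy.
dk = conv2d_valid(x, dy)

# Sanity check of one entry against a numerical (finite-difference) gradient.
eps = 1e-6
k2 = k.copy()
k2[0, 0] += eps
loss = lambda kk: 0.5 * np.sum((conv2d_valid(x, kk) - t) ** 2)
num = (loss(k2) - loss(k)) / eps
print(np.allclose(dk[0, 0], num, atol=1e-4))  # True
```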

Journal: eLife, 2016
Yedidyah Dordek Daniel Soudry Ron Meir Dori Derdikman

Many recent models study the downstream projection from grid cells to place cells, while recent data have pointed out the importance of the feedback projection. We thus asked how grid cells are affected by the nature of the input from the place cells. We propose a single-layer neural network with feedforward weights connecting place-like input cells to grid cell outputs. Place-to-grid weights a...

2015
Yedidyah Dordek Ron Meir Dori Derdikman

Many recent models study the downstream projection from grid cells to place cells, while recent data have pointed out the importance of the feedback projection. We thus asked how grid cells are affected by the nature of the input from the place cells. We propose a two-layered neural network with feedforward weights connecting place-like input cells to grid cell outputs. Place-to-grid weights wer...

1996
André Elisseeff Hélène Paugam-Moisy

This article presents a new result about the size of a multilayer neural network computing real outputs for exact learning of a finite set of real samples. The architecture of the network is feedforward, with one hidden layer and several outputs. Starting from a fixed training set, we consider the network as a function of its weights. We derive, for a wide family of transfer functions, a lower ...

1996
Christian Goerick Werner von Seelen

In this paper we investigate the learning of an unlearnable problem and how this relates to the premature saturation of hidden neurons in error backpropagation learning. General aspects of our model are discussed. A sketch of the derivation of equations for the development of the significant weights in time is given. 1. Introduction The phenomenon of premature saturation of hidden neurons in fe...

1999
Bogdan M. Wilamowski Yixin Chen Aleksander Malinowski

An efficient second-order algorithm for training feedforward neural networks is presented. The algorithm has a convergence rate similar to that of the Levenberg-Marquardt (LM) method, yet it is less computationally intensive and requires less memory. This is especially important for large neural networks, where the LM algorithm becomes impractical. The algorithm was verified with several examples.
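The Levenberg-Marquardt baseline that this abstract compares against can be sketched in a few lines. This is a generic LM iteration on a toy one-neuron fit (the model, data, and damping value are assumptions for illustration), not the paper's proposed algorithm:

```python
import numpy as np

def lm_step(J, e, mu):
    """One Levenberg-Marquardt update: solve (J^T J + mu*I) dw = J^T e."""
    A = J.T @ J + mu * np.eye(J.shape[1])
    return np.linalg.solve(A, J.T @ e)

# Toy problem: fit a single tanh neuron y = tanh(w0*x + w1) to data.
x = np.linspace(-2, 2, 50)
target = np.tanh(1.5 * x - 0.5)

w = np.array([0.1, 0.1])
mu = 1e-2
for _ in range(50):
    z = w[0] * x + w[1]
    y = np.tanh(z)
    e = target - y                      # residuals
    s = 1 - y ** 2                      # tanh'(z)
    J = np.column_stack([s * x, s])     # Jacobian of y w.r.t. (w0, w1)
    w = w + lm_step(J, e, mu)

print(np.round(w, 3))  # converges to approximately [1.5, -0.5]
```

The memory cost the abstract refers to comes from forming and inverting the n_weights x n_weights matrix J^T J + mu*I, which grows quadratically with network size.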

Journal: Discrete Mathematics, 2001
Zhaozhi Zhang Xiaomin Ma Yixian Yang

This paper presents a unified method to construct decoders which are implemented by a feedforward neural network. By setting the parameters of the network, it can decode any given code {(c_i, D_i), i = 1, ..., M}. We focus on the case that the sets D_1, ..., D_M are weighted distance spheres. Properties and constructions of weighted distance spheres are developed. Weighted distance perfect codes...

2010
Christian Emmerich René Felix Reinhart Jochen J. Steil

We shed light on the key ingredients of reservoir computing and analyze the contribution of the network dynamics to the spatial encoding of inputs. Therefore, we introduce attractor-based reservoir networks for processing of static patterns and compare their performance and encoding capabilities with a related feedforward approach. We show that the network dynamics improve the nonlinear encodin...
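A minimal reservoir-computing sketch for static patterns, in the spirit of this abstract (a generic echo-state setup, not the authors' attractor-based network; sizes and scaling are assumptions): a fixed random recurrent network is iterated toward a steady state for each input, and only a linear readout is trained. The recurrent dynamics provide the nonlinear encoding that lets the linear readout solve a problem (XOR) that is not linearly separable in the input space:

```python
import numpy as np

rng = np.random.default_rng(3)
n_in, n_res = 2, 100

# Fixed random input and recurrent weights; rescaling the spectral radius
# below 1 keeps the reservoir dynamics stable.
W_in = rng.normal(size=(n_res, n_in))
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

def reservoir_state(u, steps=50):
    """Iterate the reservoir toward a steady state for a static input u."""
    x = np.zeros(n_res)
    for _ in range(steps):
        x = np.tanh(W @ x + W_in @ u)
    return x

# XOR: not linearly separable in input space, but a linear readout on the
# reservoir states solves it.
U = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])
X = np.stack([reservoir_state(u) for u in U])
w_out = np.linalg.lstsq(X, y, rcond=None)[0]  # train the linear readout only

print(np.round(X @ w_out))  # [0. 1. 1. 0.]
```

The comparison the abstract draws is with a feedforward variant of the same architecture: dropping the recurrent iterations (a single pass through tanh(W_in @ u)) removes the dynamics and weakens the nonlinear encoding.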
