Search results for: recurrent input

Number of results: 345825

2009
Alexey Minin, Bernhard Lang

Neural networks applied in control loops and safety-critical domains have to meet hard requirements. First, a small approximation error is required; second, the smoothness and monotonicity of selected input-output relations have to be taken into account; and finally, for some processes, time dependencies in the time series should be induced into the model. If not, then the stability of the c...
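The monotonicity requirement mentioned in this abstract can at least be checked empirically on a trained model; a minimal finite-difference sketch (the function names and tolerance are my own illustration, not the authors' method):

```python
import numpy as np

def is_nondecreasing(f, xs, dim=0, eps=1e-4):
    """Finite-difference check that f is nondecreasing in input dimension `dim`."""
    for x in xs:
        x_plus = x.copy()
        x_plus[dim] += eps
        if f(x_plus) < f(x) - 1e-12:  # allow tiny numerical noise
            return False
    return True

# toy input-output map, monotone in dimension 0 (tanh is increasing)
f = lambda x: np.tanh(x[0]) + 0.5 * x[1] ** 2
rng = np.random.default_rng(0)
xs = [rng.standard_normal(2) for _ in range(100)]
```

This only samples the input space, so it can refute monotonicity but not prove it; certified monotonicity needs constraints built into the network itself.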

Journal: :Neural computation 2005
Peter Tiño, Ashley J. S. Mills

We investigate possibilities of inducing temporal structures without fading memory in recurrent networks of spiking neurons strictly operating in the pulse-coding regime. We extend the existing gradient-based algorithm for training feedforward spiking neuron networks, SpikeProp (Bohte, Kok, & La Poutré, 2002), to recurrent network topologies, so that temporal dependencies in the input stream ar...

Journal: :CoRR 2016
Yoshua Bengio, Benjamin Scellier, Olexa Bilaniuk, João Sacramento, Walter Senn

We consider deep multi-layered generative models such as Boltzmann machines or Hopfield nets in which computation (which implements inference) is both recurrent and stochastic, but where the recurrence is not to model sequential structure, only to perform computation. We find conditions under which a simple feedforward computation is a very good initialization for inference, after the input uni...
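The idea of a feedforward pass serving as a good initialization for recurrent inference can be sketched with a toy rate-based relaxation (the weight matrices, update rule, and step sizes below are my own illustration, not the authors' model):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 4, 8
W_ff = rng.standard_normal((n_hid, n_in)) * 0.5    # feedforward weights
W_rec = rng.standard_normal((n_hid, n_hid)) * 0.1  # weak recurrent weights

def relax(x, h0, steps=200, lr=0.2):
    # iterate the recurrent dynamics toward a fixed point of the inference
    h = h0
    for _ in range(steps):
        h = (1 - lr) * h + lr * np.tanh(W_rec @ h + W_ff @ x)
    return h

x = rng.standard_normal(n_in)
h_ff = np.tanh(W_ff @ x)             # feedforward pass as initialization
h_star = relax(x, h_ff)              # recurrent inference from that start
h_cold = relax(x, np.zeros(n_hid))   # same inference from a cold start
```

Because the recurrent weights are small, the relaxation is contractive: warm and cold starts reach the same fixed point, and the feedforward pass should simply start much closer to it.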

1997
Hava T. Siegelmann, Bill G. Horne

Recently, fully connected recurrent neural networks have been proven to be computationally rich—at least as powerful as Turing machines. This work focuses on another network which is popular in control applications and has been found to be very effective at learning a variety of problems. These networks are based upon Nonlinear AutoRegressive models with eXogenous Inputs (NARX models), and are ...
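A NARX model predicts the next output from delayed outputs and delayed exogenous inputs, roughly y(t) = f(y(t-1), ..., y(t-ny), u(t-1), ..., u(t-nu)); a minimal sketch with a single tanh hidden layer (sizes and weights below are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def narx_step(y_hist, u_hist, W_h, b_h, W_o):
    """One NARX step: next output from past outputs and past inputs."""
    z = np.concatenate([y_hist, u_hist])  # regressor vector
    h = np.tanh(W_h @ z + b_h)            # nonlinear hidden layer
    return float(W_o @ h)                 # scalar next output

ny, nu, n_hid = 3, 2, 8
W_h = rng.standard_normal((n_hid, ny + nu)) * 0.3
b_h = np.zeros(n_hid)
W_o = rng.standard_normal(n_hid) * 0.3

# run the model forward, feeding predictions back through the output delay line
y_hist = np.zeros(ny)
ys = []
for t in range(10):
    u_hist = np.array([np.sin(0.5 * t), np.sin(0.5 * (t - 1))])
    y = narx_step(y_hist, u_hist, W_h, b_h, W_o)
    y_hist = np.concatenate([[y], y_hist[:-1]])  # shift the delay line
    ys.append(y)
```

The only feedback is through the tapped delay line of past outputs, which is what distinguishes NARX networks from fully connected recurrent networks.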

Journal: :Neural computation 2009
Takuma Tanaka, Takeshi Kaneko, Toshio Aoyagi

Recently multineuronal recording has allowed us to observe patterned firings, synchronization, oscillation, and global state transitions in the recurrent networks of central nervous systems. We propose a learning algorithm based on the process of information maximization in a recurrent network, which we call recurrent infomax (RI). RI maximizes information retention and thereby minimizes inform...

2008
Klaus Wimmer, Marcel Stimberg, Robert Martin, Lars Schwabe, Jorge Mariño, James Schummers, David C. Lyon, Mriganka Sur, Klaus Obermayer

The computational role of the local recurrent network in primary visual cortex is still a matter of debate. To address this issue, we analyze intracellular recording data of cat V1, which combine measuring the tuning of a range of neuronal properties with a precise localization of the recording sites in the orientation preference map. For the analysis, we consider a network model of Hodgkin-Hux...

1996
Tony Robinson, Mike Hochberg, Steve Renals

This chapter was written in 1994. Further advances have been made since, such as context-dependent phone modelling and forward-backward training and adaptation using linear input transformations. This chapter describes the use of recurrent neural networks (i.e., networks in which feedback is incorporated in the computation) as an acoustic model for continuous speech recognition. The form of the recurrent neural network is ...

2003
Brian Halabisky, Ben W. Strowbridge

Halabisky, Brian and Ben W. Strowbridge. γ-Frequency excitatory input to granule cells facilitates dendrodendritic inhibition in the rat olfactory bulb. J Neurophysiol 90: 644–654, 2003. First published April 23, 2003; doi:10.1152/jn.00212.2003. Recurrent and lateral inhibition play a prominent role in patterning the odor-evoked discharges in mitral cells, the output neurons of the olfactory bulb. I...

2014
Cengiz Pehlevan, Haim Sompolinsky

Neurons in sensory cortex show stimulus selectivity and sparse population response, even in cases where no strong functionally specific structure in connectivity can be detected. This raises the question whether selectivity and sparseness can be generated and maintained in randomly connected networks. We consider a recurrent network of excitatory and inhibitory spiking neurons with random conne...

2012
Christian Tetzlaff, Christoph Kolodziejski, Marc Timme, Florentin Wörgötter

Conventional synaptic plasticity in combination with synaptic scaling is a biologically plausible plasticity rule that guides the development of synapses toward stability. Here we analyze the development of synaptic connections and the resulting activity patterns in different feed-forward and recurrent neural networks, with plasticity and scaling. We show under which constraints an external inp...
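The combination of Hebbian plasticity with synaptic scaling can be sketched for a single synapse as follows (a generic form of such a rule, not the authors' exact equations; the rate model and learning rates are my own illustration):

```python
import numpy as np

def plasticity_step(w, pre, post, v_target, mu=0.01, gamma=0.05):
    """Hebbian growth plus synaptic scaling toward a target postsynaptic rate."""
    dw = mu * pre * post + gamma * (v_target - post) * w
    return w + dw

# one synapse with a toy linear postsynaptic rate, post = w * pre
w, pre, v_target = 0.1, 1.0, 1.0
for _ in range(2000):
    post = w * pre
    w = plasticity_step(w, pre, post, v_target)
# the weight settles at v_target + mu/gamma, where Hebbian growth
# and scaling balance (here 1.0 + 0.01/0.05 = 1.2)
```

The scaling term is multiplicative in w, so it pulls large weights down harder than small ones, which is what stabilizes the otherwise unbounded Hebbian growth.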
