Learning Beyond Finite Memory in Recurrent Networks of Spiking Neurons

Authors

  • Peter Tiňo
  • Ashley J. S. Mills
Abstract

We investigate possibilities of inducing temporal structures without fading memory in recurrent networks of spiking neurons strictly operating in the pulse-coding regime. We extend the existing gradient-based algorithm for training feedforward spiking neuron networks, SpikeProp (Bohte, Kok, & La Poutré, 2002), to recurrent network topologies, so that temporal dependencies in the input stream are taken into account. It is shown that temporal structures with unbounded input memory specified by simple Moore machines (MM) can be induced by recurrent spiking neuron networks (RSNN). The networks are able to discover pulse-coded representations of abstract information processing states coding potentially unbounded histories of processed inputs. We show that it is often possible to extract from trained RSNN the target MM by grouping together similar spike trains appearing in the recurrent layer. Even when the target MM was not perfectly induced in a RSNN, the extraction procedure was able to reveal weaknesses of the induced mechanism and the extent to which the target machine had been learned.
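The extraction step described above — grouping similar spike trains in the recurrent layer into abstract states and reading off a Moore machine from the state sequence — can be illustrated with a minimal sketch. This is not the authors' implementation: the representation of a spike train as a vector of spike times, the greedy threshold clustering, and all function names here are illustrative assumptions.

```python
import numpy as np

def cluster_spike_trains(vectors, radius):
    """Greedy clustering (illustrative stand-in for the grouping step):
    assign each spike-time vector to the first cluster centre within
    Euclidean distance `radius`; otherwise open a new cluster."""
    centres, labels = [], []
    for v in vectors:
        for i, c in enumerate(centres):
            if np.linalg.norm(v - c) <= radius:
                labels.append(i)
                break
        else:
            centres.append(np.asarray(v, dtype=float))
            labels.append(len(centres) - 1)
    return labels, centres

def extract_moore_machine(state_labels, inputs):
    """Read off a transition table (state, input) -> next state from the
    sequence of clustered states visited while processing `inputs`.
    `state_labels` has one more entry than `inputs` (initial state first)."""
    delta = {}
    for t, symbol in enumerate(inputs):
        delta.setdefault((state_labels[t], symbol), state_labels[t + 1])
    return delta
```

Clustering the recurrent-layer spike-time vectors observed while the network processes an input string yields one label per time step; feeding that label sequence with the inputs into `extract_moore_machine` produces a candidate transition table that can be compared against the target machine.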


Related articles

Beyond Bumps: Spiking Networks that Store Smooth n-Dimensional Functions

There are currently a number of models that use spiking neurons in recurrent networks to encode a stable Gaussian ‘bump’ of activation. These models successfully capture some behaviors of various neural systems (e.g., storing a single spatial location in parietal cortex). We extend this previous work by showing how to construct and analyze realistic spiking networks that encode smooth n-dimensi...


Beyond bumps: Spiking networks that store sets of functions

There are currently a number of models that use spiking neurons in recurrent networks to encode a stable Gaussian 'bump' of activation. These models successfully capture some behaviors of various neural systems (e.g., storing a single spatial location in working memory). However, they are limited to encoding single bumps of uniform height. We extend this previous work by showing how to construc...


Gradient Descent for Spiking Neural Networks

Many studies of neural computation are based on network models of static neurons that produce analog output, despite the fact that information processing in the brain is predominantly carried out by dynamic neurons that produce discrete pulses called spikes. Research in spike-based computation has been impeded by the lack of an efficient supervised learning algorithm for spiking networks. Here,...


Learning recurrent dynamics in spiking networks

Spiking activity of neurons engaged in learning and performing a task show complex spatiotemporal dynamics. While the output of recurrent network models can learn to perform various tasks, the possible range of recurrent dynamics that emerge after learning remains unknown. Here we show that modifying the recurrent connectivity with a recursive least squares algorithm provides sufficient flexibi...


A spiking network model of short-term active memory.

Studies of cortical neurons in monkeys performing short-term memory tasks have shown that information about a stimulus can be maintained by persistent neuron firing for periods of many seconds after removal of the stimulus. The mechanism by which this sustained activity is initiated and maintained is unknown. In this article we present a spiking neural network model of short-term memory and use...



Journal:
  • Neural computation

Volume 18, Issue 3

Pages: -

Publication date: 2005