Search results for: learning rule

Number of results: 734,770

2011
Pankaj Agarwal

This paper presents two methods for predicting the secondary structure of proteins based on artificial neural network learning. Two variations of the NN learning rule are employed, using a feedforward backpropagation architecture, to predict the secondary structure of proteins from their primary sequences of amino acids. About 500 sequences and more than 10000 patterns were trained with variable size o...
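The abstract does not specify the encoding or network sizes, so the following sketch only illustrates the general approach it describes: a sliding window of amino acids, one-hot encoded and fed to a small feedforward network trained by backpropagation to predict a 3-state secondary-structure label. The window size, hidden width, and learning rate are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch of window-based secondary-structure prediction with a
# feedforward network trained by backpropagation (illustrative, not the
# paper's exact architecture).
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"   # 20 standard residues
CLASSES = ["helix", "strand", "coil"]  # typical 3-state labels

WINDOW = 13      # residues per input window (assumption)
HIDDEN = 30      # hidden units (assumption)
IN_DIM = WINDOW * len(AMINO_ACIDS)

rng = np.random.default_rng(0)
W1 = rng.normal(0, 0.1, (IN_DIM, HIDDEN))
W2 = rng.normal(0, 0.1, (HIDDEN, len(CLASSES)))

def encode(window):
    """One-hot encode a window of residues into a flat input vector."""
    x = np.zeros((WINDOW, len(AMINO_ACIDS)))
    for i, aa in enumerate(window):
        x[i, AMINO_ACIDS.index(aa)] = 1.0
    return x.ravel()

def forward(x):
    h = np.tanh(x @ W1)                  # hidden activations
    z = h @ W2
    p = np.exp(z - z.max())
    return h, p / p.sum()                # softmax class probabilities

def train_step(x, target_idx, lr=0.05):
    """One backpropagation update for a single labelled window."""
    global W1, W2
    h, p = forward(x)
    t = np.zeros(len(CLASSES)); t[target_idx] = 1.0
    dz = p - t                           # cross-entropy gradient at the output
    dW2 = np.outer(h, dz)
    dh = (W2 @ dz) * (1 - h ** 2)        # backprop through tanh
    dW1 = np.outer(x, dh)
    W2 -= lr * dW2
    W1 -= lr * dW1

# toy usage: train on one labelled window and predict
x = encode("ALKEFGHIKLMNP")
train_step(x, CLASSES.index("helix"))
print(forward(x)[1])
```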

1998
M. J. del Jesus F. Herrera M. Lozano

The main aim of this paper is to present MOGUL, a Methodology to Obtain Genetic fuzzy rule-based systems Under the iterative rule Learning approach. MOGUL consists of a set of design guidelines that allow us to obtain different Genetic Fuzzy Rule-Based Systems, i.e., evolutionary algorithm-based processes to automatically design Fuzzy Rule-Based Systems by learning and/or tuning the Fuzzy Rule ...

2007
Lars Buesing Wolfgang Maass

We show that under suitable assumptions (primarily linearization) a simple and perspicuous online learning rule for Information Bottleneck optimization with spiking neurons can be derived. This rule performs on common benchmark tasks as well as a rather complex rule that has previously been proposed [1]. Furthermore, the transparency of this new learning rule makes a theoretical analysis of its...

2008
Frederik Janssen Johannes Fürnkranz

Most commonly used inductive rule learning algorithms employ a hill-climbing search, whereas local pattern discovery algorithms employ exhaustive search. In this paper, we evaluate the spectrum of different search strategies to see whether separate-and-conquer rule learning algorithms are able to gain performance in terms of predictive accuracy or theory size by using more powerful search strat...

Journal: The Journal of Neuroscience: the official journal of the Society for Neuroscience, 2016
Denise M Werchan Anne G E Collins Michael J Frank Dima Amso

Recent research indicates that adults and infants spontaneously create and generalize hierarchical rule sets during incidental learning. Computational models and empirical data suggest that, in adults, this process is supported by circuits linking prefrontal cortex (PFC) with striatum and their modulation by dopamine, but the neural circuits supporting this form of learning in infants are large...

2011
Johanni Brea Walter Senn Jean-Pascal Pfister

We consider a statistical framework in which recurrent networks of spiking neurons learn to generate spatio-temporal spike patterns. Given biologically realistic stochastic neuronal dynamics we derive a tractable learning rule for the synaptic weights towards hidden and visible neurons that leads to optimal recall of the training sequences. We show that learning synaptic weights towards hidden ...

Journal: J. Economic Theory, 2000
David L. Kelly Jamsheed Shorish

In this paper we examine a representative agent forecasting prices in a first-order self-referential overlapping generations model. We first consider intermediate stage learning, where agents update the forecasting rule every m periods. We show that, in theory and simulations, the learning rule does not converge to the rational expectations equilibrium (REE). We next consider two stage learning...

Journal: NeuroImage, 2006
Christian F Doeller Bertram Opitz Christoph M Krick Axel Mecklinger Wolfgang Reith

It is a topic of current interest whether learning in humans relies on the acquisition of abstract rule knowledge (rule-based learning) or whether it depends on superficial item-specific information (instance-based learning). Here, we identified brain regions that mediate either of the two learning mechanisms by combining fMRI with an experimental protocol shown to be able to dissociate both le...

Journal: CoRR, 2018
Subhashini Krishnasamy Ari Arapostathis Ramesh Johari Sanjay Shakkottai

We consider learning-based variants of the cμ rule – a classic and well-studied scheduling policy – in single and multi-server settings for multi-class queueing systems. In the single-server setting, the cμ rule is known to minimize the expected holding cost (weighted queue-lengths summed both over classes and time). We focus on the setting where the service rates μ are unknown, and are interest...
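For readers unfamiliar with the cμ rule, the sketch below illustrates the basic idea: at each decision epoch, serve the nonempty class with the largest product c_i · μ_i, with the unknown service rates replaced by empirical estimates. The estimator and the simulation loop are illustrative assumptions, not the specific learning-based policy analysed in the paper.

```python
# Minimal sketch of a learning-based c-mu rule for a multi-class, single-server
# queue: serve the nonempty class with the largest c_i * mu_hat_i, where
# mu_hat_i is an empirical estimate of the unknown service rate.
import random

class CmuScheduler:
    def __init__(self, holding_costs):
        self.c = holding_costs                       # known holding costs c_i
        self.completions = [0] * len(holding_costs)  # observed service completions
        self.busy_time = [0.0] * len(holding_costs)  # time spent serving class i

    def mu_hat(self, i):
        # empirical service-rate estimate; optimistic default before any data
        return self.completions[i] / self.busy_time[i] if self.busy_time[i] > 0 else float("inf")

    def pick(self, queue_lengths):
        """Return the class to serve: argmax of c_i * mu_hat_i over nonempty queues."""
        candidates = [i for i, q in enumerate(queue_lengths) if q > 0]
        if not candidates:
            return None
        return max(candidates, key=lambda i: self.c[i] * self.mu_hat(i))

    def record(self, i, service_time):
        """Update the estimate after serving one class-i job."""
        self.completions[i] += 1
        self.busy_time[i] += service_time

# toy usage: two classes with unknown true rates mu = (1.0, 2.0)
random.seed(0)
sched = CmuScheduler(holding_costs=[3.0, 1.0])
true_mu = [1.0, 2.0]
queues = [5, 5]
while sum(queues) > 0:
    i = sched.pick(queues)
    s = random.expovariate(true_mu[i])   # simulate one service completion
    sched.record(i, s)
    queues[i] -= 1
print("final rate estimates:", [sched.mu_hat(i) for i in range(2)])
```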

1997
Amos J. Storkey

Hopfield networks are commonly trained by one of two algorithms. The simplest of these is the Hebb rule, which has a low absolute capacity of n/(2 ln n), where n is the total number of neurons. This capacity can be increased to n by using the pseudo-inverse rule. However, capacity is not the only consideration. It is important for rules to be local (the weight of a synapse depends only on informa...
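As a concrete illustration of the Hebb rule mentioned above, the sketch below stores ±1 patterns in a Hopfield network via outer-product (Hebbian) weights and recalls one of them from a corrupted cue. The network size and number of patterns are arbitrary toy values; note that each weight depends only on the activity of the two neurons it connects, which is the locality property the abstract refers to.

```python
# Minimal sketch of the Hebb learning rule for a Hopfield network: weights are
# the normalised sum of outer products of the stored +/-1 patterns, with zero
# self-connections.
import numpy as np

def hebb_weights(patterns):
    """patterns: array of shape (p, n) with entries in {-1, +1}."""
    p, n = patterns.shape
    W = patterns.T @ patterns / n        # w_ij = (1/n) * sum_mu x_i^mu x_j^mu
    np.fill_diagonal(W, 0.0)             # no self-coupling
    return W

def recall(W, state, steps=10):
    """Synchronous updates s <- sign(W s) until convergence or the step limit."""
    for _ in range(steps):
        new = np.sign(W @ state)
        new[new == 0] = 1
        if np.array_equal(new, state):
            break
        state = new
    return state

# toy usage: store two 8-neuron patterns, then recall from a corrupted cue
rng = np.random.default_rng(1)
patterns = rng.choice([-1, 1], size=(2, 8))
W = hebb_weights(patterns)
cue = patterns[0].copy()
cue[0] *= -1                             # flip one bit of the stored pattern
print("cue     :", cue)
print("recalled:", recall(W, cue))
print("stored  :", patterns[0])
```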

[Chart: number of search results per year]