Search results for: feedforward neural network

Number of results: 834601

Journal: CoRR, 2015
Matus Telgarsky

This note provides a family of classification problems, indexed by a positive integer k, where all shallow networks with fewer than exponentially (in k) many nodes exhibit error at least 1/6, whereas a deep network with 2 nodes in each of 2k layers achieves zero error, as does a recurrent network with 3 distinct nodes iterated k times. The proof is elementary, and the networks are standard feed...
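The kind of depth separation this abstract describes can be illustrated with the classic triangle ("tent") map: a single tent map is computed exactly by two ReLU units, and composing k of them yields a sawtooth that oscillates exponentially many times in k, which a shallow network can only track with exponentially many units. A minimal NumPy sketch of that idea (the function names are illustrative; the paper's exact construction may differ):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def tent_layer(x):
    # One tent map on [0, 1] from two ReLU units:
    # tent(x) = 2x for x <= 1/2, and 2 - 2x for x >= 1/2.
    return 2.0 * relu(x) - 4.0 * relu(x - 0.5)

def deep_tent(x, k):
    # Composing k tent layers (2 units each) gives a function with
    # 2^k monotone pieces -- cheap in depth, expensive in width.
    for _ in range(k):
        x = tent_layer(x)
    return x
```

For example, `deep_tent(0.25, 2)` follows 0.25 → 0.5 → 1.0, while `deep_tent(1.0, 1)` returns 0.0.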

2011
Rafael Peña Aurelio Medina

This contribution presents the application of feed-forward neural networks to the problem of time series forecasting. This forecast technique is applied to the water flow and wind speed time series. The results obtained from the forecasting of these two renewable resources can be used to determine the power generation capacity of micro or mini-hydraulic plants, and wind parks, respectively. The...
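A minimal sketch of this forecasting approach (not the authors' model; the window length, hidden size, and training details here are assumptions): turn the series into lagged input windows, then fit a one-hidden-layer feedforward network to predict the next value.

```python
import numpy as np

def make_windows(series, lag):
    # Predict x[t] from the lag previous values.
    X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
    y = np.array(series[lag:])
    return X, y

def train_mlp(X, y, hidden=8, epochs=2000, lr=0.05, seed=0):
    # One-hidden-layer tanh network, full-batch gradient descent on MSE.
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    n = len(y)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)
        err = (h @ W2 + b2).ravel() - y
        gW2 = h.T @ err[:, None] / n
        gb2 = err.mean(keepdims=True)
        dh = (err[:, None] @ W2.T) * (1 - h ** 2)   # backprop through tanh
        gW1 = X.T @ dh / n
        gb1 = dh.mean(axis=0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1
    return lambda Z: (np.tanh(Z @ W1 + b1) @ W2 + b2).ravel()
```

On a synthetic periodic series (e.g. a sine wave standing in for wind speed), the fitted network tracks the series far better than predicting the mean.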

1997
Robert Eigenmann Josef A. Nossek

The central theme of this paper is to overcome the inability of feedforward neural networks with hard-limiting units to provide confidence evaluation. We consider a Madaline architecture for a 2-group classification problem and concentrate on the probability density function for the neural activation of the first-layer units. As the subsequent layers implement a Boolean table, the expectation valu...

2003
Germán Gutiérrez Beatriz García Jiménez José M. Molina López Araceli Sanchis

Many methods to codify Artificial Neural Networks have been developed to avoid the defects of direct encoding schemes and to improve the search through the solution space. To evaluate the different encoding strategies for Feedforward Neural Networks, a method is needed to estimate how the search space is covered and how the genetic operators move the search through it. A firs...

1991
Leonard G. C. Hamey

Existing metrics for the learning performance of feed-forward neural networks do not provide a satisfactory basis for comparison because the choice of the training epoch limit can determine the results of the comparison. I propose new metrics which have the desirable property of being independent of the training epoch limit. The efficiency measures the yield of correct networks in proportion to...

Journal: CoRR, 2017
Behnam Neyshabur Srinadh Bhojanapalli David McAllester Nathan Srebro

We present a generalization bound for feedforward neural networks in terms of the product of the spectral norms of the layers and the Frobenius norm of the weights. The generalization bound is derived using a PAC-Bayes analysis.
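The capacity term driving such a bound can be computed directly from the weight matrices. The sketch below assumes the commonly cited form (product of squared spectral norms times the sum of squared Frobenius-to-spectral ratios) and omits the margin and sample-size factors; see the paper for the exact statement:

```python
import numpy as np

def spectral_capacity(weights):
    # Assumed capacity term:
    #   prod_i ||W_i||_2^2  *  sum_i (||W_i||_F / ||W_i||_2)^2
    spec = [np.linalg.norm(W, ord=2) for W in weights]      # spectral norms
    frob = [np.linalg.norm(W, ord='fro') for W in weights]  # Frobenius norms
    prod_spec_sq = np.prod([s ** 2 for s in spec])
    ratio_sum = sum((f / s) ** 2 for f, s in zip(frob, spec))
    return prod_spec_sq * ratio_sum
```

For two 3x3 identity layers the spectral norms are 1 and the Frobenius norms are sqrt(3), so the term evaluates to 6.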

2014

Many researchers have proposed pruning algorithms in numerous ways to optimize the network architecture (Castellano et al., 1997; Ahmmed et al., 2007; Henrique et al., 2000; Ponnapalli et al., 1999). Reed (1993) and Engelbrecht (2001) have given detailed surveys of pruning algorithms. Each algorithm has its own advantages and limitations. Some algorithms (Engelbrecht, 2001; Xing & Hu, 2009) pru...

Journal: Expert Syst. Appl., 2006
Fangcheng Tang Youmin Xi Jun Ma

Artificial neural networks have recently found abundant applications in social science research. In this study, we investigate the topological structures of organizational networks, which may account for differences in the performance of intra-organizational knowledge transfer. We construct two types of networks, hierarchical and scale-free, and single-layer perceptron mo...

2002
J. Yáñez

The Traveling Salesman Problem (TSP), a combinatorial optimization problem, is solved here with a feedforward artificial neural network model, the Continuous Hopfield Network (CHN). This neural network approach is based on the solution of a differential equation. An appropriate parameter setting of this differential equation can ensure that the solution is associated with a tou...
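The differential-equation view mentioned here can be sketched in its generic form (this is the standard continuous Hopfield dynamics with a tanh activation, not the paper's TSP-specific energy or parameter setting): Euler-integrate du/dt = -u + W g(u) + b, under which, for symmetric W, the associated Lyapunov energy is non-increasing.

```python
import numpy as np

def hopfield_step(u, W, b, dt=0.01):
    # One Euler step of the continuous Hopfield ODE with g = tanh.
    return u + dt * (-u + W @ np.tanh(u) + b)

def energy(u, W, b):
    # Lyapunov energy: -1/2 v'Wv - b'v + sum_i integral_0^{v_i} g^{-1}(s) ds,
    # where integral of artanh is v*artanh(v) + (1/2)ln(1 - v^2).
    v = np.clip(np.tanh(u), -0.999999, 0.999999)
    integ = np.sum(v * np.arctanh(v) + 0.5 * np.log1p(-v ** 2))
    return -0.5 * v @ W @ v - b @ v + integ

rng = np.random.default_rng(1)
A = rng.normal(size=(10, 10))
W = (A + A.T) / 2.0          # symmetry makes the energy a Lyapunov function
b = 0.1 * rng.normal(size=10)
u = 0.1 * rng.normal(size=10)
e_start = energy(u, W, b)
for _ in range(500):
    u = hopfield_step(u, W, b)
e_end = energy(u, W, b)       # should not exceed e_start
```

For the TSP itself, W and b would encode the tour-validity penalties and city distances; that encoding is exactly what the paper's parameter setting concerns.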

2007
A. Ceccatto H. Navone Henri Waelbroeck

How do learning processes escape from local optima? Doing so requires an exploration of the landscape at a range of the order of the landscape correlation length – a “long jump” in synaptic weight space. This brings up a dilemma: because of the high dimensionality of this space, the probability that a random long jump leads to a better optimum is nearly zero. We conjecture that “intelligent” coarse-gra...
