Search results for: hidden training

Number of results: 378572

1994
Sam Waugh, Anthony Adams

The Cascade Correlation architecture is unique in its ability to sensibly construct a network topology during training, taking all possible connections into account. This paper investigates a number of different connection strategies for the hidden nodes that Cascade Correlation inserts into a network by limiting the connections a hidden node may make.
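
As an illustration only (not the paper's code), a minimal NumPy sketch of what a connection strategy can mean in practice; the strategy names, the choice of k, and the correlation score are our assumptions:

```python
import numpy as np

def candidate_features(X, hidden_acts, strategy="all", k=2):
    """Assemble the inputs a new candidate hidden node is allowed to see.
    X: (n, d) raw inputs; hidden_acts: list of (n,) activations of
    previously frozen hidden nodes, oldest first."""
    if strategy == "all":            # classic Cascade Correlation
        cols = [X] + [h[:, None] for h in hidden_acts]
    elif strategy == "inputs_only":  # candidate connects to raw inputs only
        cols = [X]
    elif strategy == "last_k":       # only the k most recent hidden nodes
        cols = [X] + [h[:, None] for h in hidden_acts[-k:]]
    else:
        raise ValueError(strategy)
    return np.hstack(cols)

def correlation_score(v, residual):
    """Candidates are trained to maximize the magnitude of covariance
    between their output v and the residual network error."""
    return abs(np.sum((v - v.mean()) * (residual - residual.mean())))
```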

Journal: CoRR 2014
Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, Yoshua Bengio

While depth tends to improve network performance, it also makes gradient-based training more difficult, since deeper networks tend to be more non-linear. The recently proposed knowledge distillation approach is aimed at obtaining small and fast-to-execute models, and it has shown that a student network can imitate the soft output of a larger teacher network or ensemble of networks. In this pa...
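
For context, a minimal NumPy sketch of the soft-target distillation loss the abstract alludes to; the temperature value and array shapes are our assumptions, not the paper's exact setup:

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T                                # temperature softens the output
    z = z - z.max(axis=1, keepdims=True)     # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """Cross-entropy between the teacher's softened distribution and the
    student's: the student learns to imitate the teacher's soft output."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    return -np.mean(np.sum(p_t * np.log(p_s + 1e-12), axis=1))

rng = np.random.default_rng(0)
s, t = rng.standard_normal((8, 10)), rng.standard_normal((8, 10))
print(distillation_loss(s, t))
```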

2016
Hui Wen, Weixin Xie, Jihong Pei

This paper presents a structure-adaptive hybrid RBF-BP (SAHRBF-BP) classifier with an optimized learning strategy. SAHRBF-BP is composed of a structure-adaptive RBF network cascaded with a BP network, where the number of RBF hidden nodes is adjusted adaptively according to the distribution of the sample space; the adaptive RBF network is used for nonlinear kernel mapping and the BP network is used...
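
A toy sketch (our construction, not the paper's algorithm) of the two ingredients described: a Gaussian RBF kernel mapping and a naive distance-threshold rule for growing hidden nodes; the resulting features would feed an ordinary BP network:

```python
import numpy as np

def rbf_features(X, centers, gamma=1.0):
    """Gaussian RBF kernel mapping: one nonlinear feature per hidden node."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def grow_centers(X, threshold=1.5):
    """Toy structure adaptation: a sample becomes a new RBF hidden node
    whenever it is far from all existing centers, so the hidden-node
    count follows the distribution of the sample space."""
    centers = [X[0]]
    for x in X[1:]:
        if min(np.linalg.norm(x - c) for c in centers) > threshold:
            centers.append(x)
    return np.array(centers)

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 2))
C = grow_centers(X)
Phi = rbf_features(X, C)   # these features would feed the cascaded BP network
print(C.shape, Phi.shape)
```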

Journal: IEEE Trans. Neural Networks 1994
James Ting-Ho Lo

As opposed to the analytic approach used in the modern theory of optimal filtering, a synthetic approach is presented. The signal/sensor data, which are generated by either computer simulation or actual experiments, are synthesized into a filter by training a recurrent multilayer perceptron (RMLP) with at least one hidden layer of fully or partially interconnected neurons and with or without out...
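
To make the synthetic approach concrete, a toy NumPy sketch: simulated signal/sensor pairs plus the forward pass of a one-hidden-layer recurrent MLP. The dimensions and weight scales are our assumptions, and training is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthesize signal/sensor pairs by simulation: x is the underlying
# signal, y the noisy sensor reading the filter will see.
T = 200
x = np.cumsum(0.1 * rng.standard_normal(T))   # random-walk signal
y = x + 0.3 * rng.standard_normal(T)          # sensor = signal + noise

nh = 8                                        # hidden width (our choice)
Win = 0.5 * rng.standard_normal(nh)
Wrec = 0.1 * rng.standard_normal((nh, nh))    # interconnected hidden neurons
Wout = 0.5 * rng.standard_normal(nh)

def rmlp_filter(y_seq):
    """One hidden layer whose state h carries information across time;
    training (omitted) would fit the weights so the output tracks x."""
    h = np.zeros(nh)
    out = []
    for y_t in y_seq:
        h = np.tanh(Win * y_t + Wrec @ h)
        out.append(Wout @ h)
    return np.array(out)

print(rmlp_filter(y).shape)   # (200,)
```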

2013
Rahul Samant, Srikantha Rao

This paper investigates the ability of variously designed and trained Artificial Neural Networks (ANNs) to predict the probability of occurrence of Hypertension (HT) in a mixed (healthy + hypertensive, both sexes) patient population. To do this, a multilayer feed-forward neural network with 13 inputs and 1 output was created with multiple hidden layers. Network parameters such as the count of hidden la...
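
A minimal sketch of the kind of network described, assuming two hidden layers whose sizes we picked ourselves (the abstract leaves them as tuned parameters):

```python
import numpy as np

rng = np.random.default_rng(0)

# 13 clinical inputs -> two hidden layers (sizes are our guess) -> 1 output.
sizes = [13, 8, 4, 1]
weights = [0.1 * rng.standard_normal((a, b)) for a, b in zip(sizes, sizes[1:])]

def predict_ht(x):
    """Feed-forward pass; the sigmoid output is read as P(hypertension)."""
    for W in weights[:-1]:
        x = np.tanh(x @ W)
    return 1.0 / (1.0 + np.exp(-(x @ weights[-1])))

print(predict_ht(rng.standard_normal(13))[0])   # one hypothetical patient
```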

2007
Sander Canisius, Caroline Sporleder

We present two machine learning approaches to information extraction from semi-structured documents that can be used when no annotated training data are available but there is a database filled with information derived from the type of documents to be processed. One approach employs standard supervised learning for information extraction by artificially constructing labelled training dat...
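
A toy sketch of the artificial-labelling idea, under our own simplifying assumption that a token belongs to the database field whose value contains it (the record and rule below are hypothetical):

```python
# Hypothetical record and labelling rule, purely for illustration.
db_record = {"title": "Hidden Markov Models", "year": "1998"}

def auto_label(tokens, record):
    """Label each token with the database field whose value contains it,
    'O' otherwise; the result can train a standard supervised extractor."""
    labels = []
    for tok in tokens:
        label = "O"
        for field, value in record.items():
            if tok in value.split():
                label = field
                break
        labels.append(label)
    return labels

print(auto_label("Hidden Markov Models , 1998 .".split(), db_record))
# ['title', 'title', 'title', 'O', 'year', 'O']
```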

2011
Yann Soullard, Thierry Artières

We propose a hybrid model combining a generative model and a discriminative model for signal labelling and classification tasks, aiming to take the best from both worlds. The idea is to focus the learning of the discriminative model on the most likely state sequences output by the generative model. This makes it possible to take advantage of the usual increased accuracy of generative models on small traini...
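
A toy sketch of the focusing idea as we read it: keep only the generative model's top-scoring states per frame as discriminative training pairs. The function and its shapes are our assumptions:

```python
import numpy as np

def focus_training_pairs(frames, gen_loglik, top_k=2):
    """gen_loglik: (T, S) per-frame log-likelihoods from the generative
    model. Emit (frame, state) pairs only for each frame's top_k most
    likely states, so the discriminative model trains on those instead
    of contrasting against all S states."""
    pairs = []
    for t, scores in enumerate(gen_loglik):
        for s in np.argsort(scores)[-top_k:]:
            pairs.append((frames[t], int(s)))
    return pairs

rng = np.random.default_rng(0)
frames = np.arange(4)                      # stand-in features, one per frame
gen_loglik = rng.standard_normal((4, 5))   # 5 candidate states
print(len(focus_training_pairs(frames, gen_loglik)))   # 4 frames * 2 states = 8
```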

Journal: CoRR 2014
Alexander W. Churchill, Siddharth Sigtia, Chrisantha Fernando

An algorithm is described that adaptively learns a non-linear mutation distribution. It works by training a denoising autoencoder (DA) online at each generation of a genetic algorithm to reconstruct a slowly decaying memory of the best genotypes so far. A compressed hidden layer forces the autoencoder to learn hidden features in the training set that can be used to accelerate search on novel pr...
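
A minimal sketch of the mutation operator described, assuming an already-trained one-hidden-layer DA (the online training loop each generation is omitted) and genotypes encoded in [0, 1]:

```python
import numpy as np

rng = np.random.default_rng(0)

d, k = 16, 4    # genotype length and compressed hidden width (our choices)
W1, b1 = 0.3 * rng.standard_normal((d, k)), np.zeros(k)
W2, b2 = 0.3 * rng.standard_normal((k, d)), np.zeros(d)

def da_forward(x):
    h = np.tanh(x @ W1 + b1)                  # compressed hidden layer
    return 1 / (1 + np.exp(-(h @ W2 + b2)))   # reconstruction in [0, 1]

def da_mutate(genotype, noise=0.1):
    """Learned mutation: corrupt a parent, then let the DA (trained each
    generation on a decaying memory of elite genotypes; training omitted)
    reconstruct it, pulling the child toward features of good solutions."""
    corrupted = np.clip(genotype + noise * rng.standard_normal(d), 0, 1)
    return da_forward(corrupted)

child = da_mutate(rng.random(d))
print(child.shape)   # (16,)
```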

1991
Lai-Wan Chan

We examined the internal representation of the training patterns of multi-layer perceptrons and demonstrated that the connection weights between layers effectively transform the representation format of the information from one layer to another in a meaningful way. The internal code, which can be in analog or binary form, is found to depend on a number of factors, including ...
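
For illustration, a small NumPy sketch of extracting the layer-by-layer internal codes and crudely probing whether a code has saturated toward binary form; the tolerance is our choice:

```python
import numpy as np

def hidden_codes(X, weights):
    """Return each layer's activation pattern for every training pattern;
    each weight matrix transforms the representation format from one
    layer to the next."""
    codes = []
    for W in weights:
        X = np.tanh(X @ W)
        codes.append(X.copy())
    return codes

def binary_fraction(code, tol=0.1):
    """Crude probe: fraction of units saturated near +/-1, i.e. how
    binary (rather than analog) the internal code has become."""
    return np.mean(np.abs(np.abs(code) - 1.0) < tol)

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))
weights = [rng.standard_normal((3, 4)), rng.standard_normal((4, 2))]
codes = hidden_codes(X, weights)
print([c.shape for c in codes], binary_fraction(codes[-1]))
```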

Journal: CoRR 2015
Saurabh Sihag, Pranab Kumar Dutta

A Deep Belief Network (DBN) requires multiple large hidden layers with a high number of hidden units to learn good features from the raw pixels of large images. This implies longer training time as well as higher computational complexity. By integrating a DBN with the Discrete Wavelet Transform (DWT), both training time and computational complexity can be reduced. The low-resolution images obtained after appli...
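
A minimal sketch of the DWT preprocessing step, using an unnormalized one-level 2-D Haar transform in plain NumPy (a simplification of the paper's DWT): the approximation band is the lower-resolution image a DBN would then consume.

```python
import numpy as np

def haar_approx(img):
    """One level of an (unnormalized) 2-D Haar DWT: the returned
    approximation band is the half-resolution image, shrinking the
    DBN's visible layer by 4x per level."""
    rows = (img[0::2, :] + img[1::2, :]) / 2    # average row pairs
    return (rows[:, 0::2] + rows[:, 1::2]) / 2  # average column pairs

img = np.random.rand(28, 28)
low = haar_approx(img)
print(low.shape)   # (14, 14)
```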

[Chart: number of search results per year]