Search results for: hidden training

Number of results: 378,572

Journal: :CoRR 2004
Vitaly Schetinin

Evolving Cascade Neural Networks (ECNNs) and a new training algorithm capable of selecting informative features are described. The ECNN initially learns with one input node and then evolves by adding new inputs as well as new hidden neurons. The resultant ECNN has a near-minimal number of hidden neurons and inputs. The algorithm is successfully used to train an ECNN to recognise artefacts in s...
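The abstract stops short of the selection details, but the evolve-by-adding idea can be sketched. The toy sketch below assumes greedy input selection with a least-squares readout (not the authors' actual neuron-training step): grow the model one input at a time while a held-out error keeps improving.

```python
import numpy as np

def readout_error(X, y, cols, X_val, y_val):
    # Least-squares readout over the selected inputs; a stand-in for the
    # paper's neuron-training step, which the abstract does not detail.
    w, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
    return np.mean((X_val[:, cols] @ w - y_val) ** 2)

def evolve_inputs(X, y, X_val, y_val):
    """Greedy growth: start from one input node and keep adding the input
    that most reduces held-out error; stop when no addition helps."""
    selected = [0]
    best = readout_error(X, y, selected, X_val, y_val)
    while len(selected) < X.shape[1]:
        scores = {j: readout_error(X, y, selected + [j], X_val, y_val)
                  for j in range(X.shape[1]) if j not in selected}
        j_best = min(scores, key=scores.get)
        if scores[j_best] >= best:
            break                       # evolution has converged
        selected.append(j_best)
        best = scores[j_best]
    return selected, best
```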

Journal: Desert 2009
H. Memarian Khalilabad K. Zakikhani S. Feiznia

Erosion and sedimentation are among the most complicated problems in hydrodynamics and are very important in water-related projects in arid and semi-arid basins. For this reason, suitable methods for accurately estimating the suspended sediment load of rivers are very valuable. Solving the hydrodynamic equations related to these phenomena and access to a mathematical-conceptual mode...

2009
Piotr W. Mirowski Yann LeCun

This article presents a method for training Dynamic Factor Graphs (DFG) with continuous latent state variables. A DFG includes factors modeling joint probabilities between hidden and observed variables, and factors modeling dynamical constraints on hidden variables. The DFG assigns a scalar energy to each configuration of hidden and observed variables. A gradient-based inference procedure finds...
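As a rough illustration of the energy view described here, the sketch below writes a toy quadratic energy with linear dynamical and observation factors (both assumed here; the paper's factors are learned) and runs plain gradient descent on the latent trajectory.

```python
import numpy as np

def energy(z, x, A, C):
    # Scalar energy of a configuration: dynamical factors on the hidden
    # states plus observation factors linking hidden to observed variables.
    dyn = z[1:] - z[:-1] @ A.T
    obs = x - z @ C.T
    return np.sum(dyn ** 2) + np.sum(obs ** 2)

def infer_latents(x, A, C, steps=200, lr=0.01):
    """Gradient-based inference: descend the energy w.r.t. the latent
    trajectory z while the factor parameters A, C stay fixed."""
    T = x.shape[0]
    z = np.zeros((T, A.shape[0]))
    for _ in range(steps):
        # Gradients of the quadratic energy, written out by hand.
        d = z[1:] - z[:-1] @ A.T
        g = 2 * (z @ C.T - x) @ C
        g[1:] += 2 * d
        g[:-1] -= 2 * d @ A
        z -= lr * g
    return z

rng = np.random.default_rng(0)
A, C = 0.9 * np.eye(2), rng.normal(size=(3, 2))
x = rng.normal(size=(50, 3))
z = infer_latents(x, A, C)
print(energy(z, x, A, C))   # lower than the energy of the all-zero start
```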

2011
Matthew Hyde Gabriela Ochoa

This document lists the instances that were used for the CHeSC 2011 competition. These instances are available within the JAR file containing the version of the HyFlex software framework [2] used for the competition. The first four domains were released before the competition as training domains with 10 instances each. There are now 12 instances in each because we added two hidden instances for the com...

2015
Jia Cui George Saon Bhuvana Ramabhadran Brian Kingsbury

This work proposes a new architecture for deep neural network training. Instead of having one cascade of fully connected hidden layers between the input features and the target output, the new architecture organizes hidden layers into several regions with each region having its own target. Regions communicate with each other during the training process by connections among intermediate hidden l...
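A minimal forward-pass sketch of the region idea, assuming two regions, tanh layers, and a single cross-connection (all sizes and the connection pattern are illustrative choices, not the paper's configuration):

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
d_in, h, n_cls_a, n_cls_b = 20, 32, 10, 5
x = rng.normal(size=(4, d_in))

# Two regions of hidden layers, each with its own softmax target.
W1a, W1b = 0.1 * rng.normal(size=(d_in, h)), 0.1 * rng.normal(size=(d_in, h))
W2a, W2b = 0.1 * rng.normal(size=(h, h)), 0.1 * rng.normal(size=(h, h))
Wab = 0.1 * rng.normal(size=(h, h))    # connection among intermediate layers
Wa, Wb = 0.1 * rng.normal(size=(h, n_cls_a)), 0.1 * rng.normal(size=(h, n_cls_b))

h1a = np.tanh(x @ W1a)
h1b = np.tanh(x @ W1b)
h2a = np.tanh(h1a @ W2a + h1b @ Wab)   # region A also sees region B's layer
h2b = np.tanh(h1b @ W2b)

p_a = softmax(h2a @ Wa)                # region A trained toward its own target
p_b = softmax(h2b @ Wb)                # region B toward a different target
```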

2012
Chia-Ling Chang Chung-Sheng Liao

The present study focuses on the parameters of Artificial Neural Networks (ANN). Sensitivity analysis is applied to assess the effect of the ANN parameters on the prediction of the turbidity of raw water in the water treatment plant. The results show that the transfer function of the hidden layer is a critical parameter of the ANN. When the transfer function changes, the reliability of pre...
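One simple way to see this kind of sensitivity, sketched below on synthetic data (not the plant's turbidity data): keep the hidden weights fixed, swap only the hidden-layer transfer function, and compare the fit of a least-squares readout.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2           # synthetic stand-in target

activations = {
    "tanh":     np.tanh,
    "logistic": lambda a: 1.0 / (1.0 + np.exp(-a)),
    "linear":   lambda a: a,
}

W = rng.normal(size=(3, 30))                        # shared hidden weights
for name, f in activations.items():
    H = f(X @ W)                                    # hidden-layer output
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)    # fit the readout
    err = np.mean((H @ beta - y) ** 2)
    print(f"{name:8s} MSE = {err:.4f}")             # error shifts with f
```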

Journal: :Cognitive science 2014
Geoffrey E. Hinton

It is possible to learn multiple layers of non-linear features by backpropagating error derivatives through a feedforward neural network. This is a very effective learning procedure when there is a huge amount of labeled training data, but for many learning tasks very few labeled examples are available. In an effort to overcome the need for labeled data, several different generative models were...
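The first sentence is the standard backpropagation recipe; a minimal one-hidden-layer version on synthetic labeled data might look like the following (learning rate, sizes, and data are arbitrary choices for illustration).

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 4))
y = (X[:, :2].sum(axis=1) > 0).astype(float).reshape(-1, 1)

W1 = 0.5 * rng.normal(size=(4, 8))
W2 = 0.5 * rng.normal(size=(8, 1))
lr = 0.1
for _ in range(500):
    h = np.tanh(X @ W1)                   # forward: non-linear hidden features
    p = 1 / (1 + np.exp(-(h @ W2)))       # forward: sigmoid output
    # Backward: propagate error derivatives through the network.
    d_out = (p - y) / len(X)              # dLoss/dlogit for cross-entropy
    d_h = (d_out @ W2.T) * (1 - h ** 2)   # chain rule through tanh
    W2 -= lr * h.T @ d_out
    W1 -= lr * X.T @ d_h
```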

2015
Hang Su Haihua Xu

In this paper we propose a Shared Hidden Layer Multisoftmax Deep Neural Network (SHL-MDNN) approach for semi-supervised training (SST). This approach aims to boost low-resource speech recognition where limited training data is available. Supervised data and unsupervised data share the same hidden layers but are fed into different softmax layers so that erroneous automatic speech recognition (AS...
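A forward-pass sketch of the shared-hidden-layer, multi-softmax idea (layer sizes and activations are assumptions; the routing of the two data streams is the point):

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(3)
d_in, h, n_states = 40, 64, 100
W1, W2 = 0.1 * rng.normal(size=(d_in, h)), 0.1 * rng.normal(size=(h, h))
head_sup = 0.1 * rng.normal(size=(h, n_states))  # softmax for transcribed data
head_uns = 0.1 * rng.normal(size=(h, n_states))  # softmax for ASR-labeled data

def forward(x, supervised):
    # The hidden layers are shared by both data streams ...
    z = np.tanh(np.tanh(x @ W1) @ W2)
    # ... but each stream has its own softmax layer, so label errors in the
    # automatically transcribed data stay out of the supervised head.
    head = head_sup if supervised else head_uns
    return softmax(z @ head)

p_sup = forward(rng.normal(size=(8, d_in)), supervised=True)
p_uns = forward(rng.normal(size=(8, d_in)), supervised=False)
```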

1998
Michael W. Towsey Joachim Diederich Ingo Schellhammer Stephan K. Chalup Claudia Brugman

We present preliminary results of experiments with two types of recurrent neural networks on a natural language learning task. The neural networks, Elman networks and Recurrent Cascade Correlation (RCC), were trained on the text of a first-year primary-school reader. The networks performed a one-step-look-ahead task, i.e. they had to predict the lexical category of the following word. Elm...
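An Elman network's defining feature is the copy-back of the previous hidden state; a bare-bones prediction pass over a sequence of one-hot lexical categories (the category count and layer sizes here are invented) might look like:

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

rng = np.random.default_rng(4)
n_cats, h = 9, 16                       # assumed number of lexical categories
Wx = 0.5 * rng.normal(size=(n_cats, h))
Wh = 0.5 * rng.normal(size=(h, h))      # the context (copy-back) weights
Wo = 0.5 * rng.normal(size=(h, n_cats))

def predict_next(category_ids):
    """Elman recurrence: the hidden state at step t sees the input at t plus
    a copy of the hidden state at t-1, then predicts the category at t+1."""
    hid = np.zeros(h)
    for c in category_ids:
        x = np.eye(n_cats)[c]           # one-hot lexical category
        hid = np.tanh(x @ Wx + hid @ Wh)
    return softmax(hid @ Wo)            # distribution over the next category

probs = predict_next([0, 3, 2, 5])
```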

2011
Virendra P. Vishwakarma M. N. Gupta R. Chellappa C. L. Wilson V. P. Vishwakarma S. Pandey K. Choi K. A. Toh C. L. Giles A. C. Tsoi

For high-dimensional pattern recognition problems, the learning speed of gradient-based training algorithms (back-propagation) is generally very slow. Local minima, improper learning rates, and over-fitting are some of the other issues. The extreme learning machine was proposed as a non-iterative learning algorithm for single-hidden-layer feedforward neural networks (SLFN) to overcome these issues. ...
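The non-iterative step the abstract refers to is the standard ELM recipe: random, untrained hidden weights plus a one-shot least-squares solve for the output weights. A compact sketch on toy regression data (the data and sizes are invented):

```python
import numpy as np

def elm_fit(X, Y, n_hidden, rng):
    """Extreme learning machine: a random, untrained hidden layer; the
    output weights are solved in one least-squares step, with no iteration."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                       # random feature map
    beta = np.linalg.pinv(H) @ Y                 # Moore-Penrose solution
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

rng = np.random.default_rng(5)
X = rng.normal(size=(300, 10))
Y = np.sin(X).sum(axis=1, keepdims=True)         # toy regression target
W, b, beta = elm_fit(X[:200], Y[:200], n_hidden=50, rng=rng)
mse = np.mean((elm_predict(X[200:], W, b, beta) - Y[200:]) ** 2)
```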

[Chart: number of search results per year]
