Search results for: hidden training

Number of results: 378,572

1990
Derrick Nguyen, Bernard Widrow

A two-layer neural network can be used to approximate any nonlinear function. The behavior of the hidden nodes that allows the network to do this is described. Networks with one input are analyzed first, and the analysis is then extended to networks with multiple inputs. The result of this analysis is used to formulate a method for initialization of the weights of neural netwo...
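The initialization method this abstract refers to is commonly known as Nguyen-Widrow initialization. The sketch below is a minimal NumPy illustration of the commonly cited formulation (scale factor 0.7·H^(1/n), weight rows rescaled to that norm, biases spread uniformly); it is an assumption-laden sketch, not necessarily the paper's exact procedure.

```python
import numpy as np

def nguyen_widrow_init(n_inputs, n_hidden, rng=None):
    """Sketch of Nguyen-Widrow weight initialization (illustrative)."""
    rng = np.random.default_rng(rng)
    # Scale factor chosen so the hidden units' active regions
    # tile the input space roughly evenly.
    beta = 0.7 * n_hidden ** (1.0 / n_inputs)
    # Random directions, then rescale each hidden unit's weight
    # vector to have norm beta.
    w = rng.uniform(-0.5, 0.5, size=(n_hidden, n_inputs))
    w *= beta / np.linalg.norm(w, axis=1, keepdims=True)
    # Biases spread uniformly over [-beta, beta].
    b = rng.uniform(-beta, beta, size=n_hidden)
    return w, b
```

The intent is that sigmoidal hidden units start with their steep (useful) regions distributed across the input range rather than clustered, which typically speeds up backpropagation training.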

2007
David Barber, Peter Sollich

We complement the recent progress in thermodynamic-limit analyses of mean on-line gradient descent learning dynamics in multi-layer networks by calculating the fluctuations possessed by finite dimensional systems. Fluctuations from the mean dynamics are largest at the onset of specialisation, as student hidden unit weight vectors begin to imitate specific teacher vectors, and increase with the degree...

Journal: Pattern Recognition Letters, 2012
Dong Yu, Li Deng

Recently there has been renewed interest in single-hidden-layer neural networks (SHLNNs), owing to their powerful modeling ability and the existence of efficient learning algorithms. A prominent example of such algorithms is the extreme learning machine (ELM), which assigns random values to the lower-layer weights. While ELM can be trained efficiently, it requires many more hidden...

1998
H. Altun

Mathematical proofs of an improvement in neural learning are presented in this paper. Within an analytical and statistical framework, the dependency of neural learning on the distribution characteristics of training-set vectors is established for a function approximation problem. It is shown that the BP algorithm works well for a certain type of training-set vector distribution and the degree of sa...

2017
Sibo Tong, Philip N. Garner, Hervé Bourlard

Different training and adaptation techniques for multilingual Automatic Speech Recognition (ASR) are explored in the context of hybrid systems, exploiting Deep Neural Networks (DNN) and Hidden Markov Models (HMM). In multilingual DNN training, the hidden layers (possibly extracting bottleneck features) are usually shared across languages, and the output layer can either model multiple sets of l...
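The architecture described here, hidden layers shared across languages with language-specific output layers, can be sketched minimally as below. Layer sizes, the language names, and the single-hidden-layer forward pass are all my illustrative assumptions, not the paper's configuration.

```python
import numpy as np

class MultilingualMLP:
    """Sketch: one shared hidden trunk, one softmax head per language."""
    def __init__(self, n_in, n_hidden, out_sizes, rng=None):
        rng = np.random.default_rng(rng)
        # Hidden weights shared by every language.
        self.W_shared = rng.normal(scale=0.1, size=(n_in, n_hidden))
        # One output layer (head) per language, with its own label set size.
        self.heads = {lang: rng.normal(scale=0.1, size=(n_hidden, k))
                      for lang, k in out_sizes.items()}

    def forward(self, x, lang):
        h = np.tanh(x @ self.W_shared)   # shared hidden representation
        logits = h @ self.heads[lang]    # language-specific output layer
        e = np.exp(logits - logits.max())
        return e / e.sum()               # softmax over that language's labels
```

The shared trunk is what lets the hidden layers act as multilingual (possibly bottleneck) feature extractors, while each head models its own set of context-dependent states.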

Leila Afshar, Muhamadreza Abdolmaleki, Sedigheh Momeni, Shahram Yazdani

Introduction: The hidden curriculum plays a major role in professional learning, formation of professional identity, socialization, moral development, and the learning of values, attitudes, beliefs, and knowledge in learners, so it needs to be managed. Although the majority of theorists believe in the existence of a hidden curriculum and its greater effect and sustainability com...

2017
Ashok Kumar

This study proposes a novel Nonlinear Auto Regressive eXogenous Neural Network (NARXNN) with a Tracking Signal (TS) approach, and investigates various training functions for forecasting the closing index of the stock market. The approach adjusts the number of hidden neurons of the NARXNN model under different training functions, using the Tracking Signal (TS) to reject all m...

Journal: Iranian Journal of Chemistry and Chemical Engineering (IJCCE), 2012
Azadeh Magharei, Farzaneh Vahabzadeh, Morteza Sohrabi, Yousef Rahimi Kashkouli, Mohammad Maleki

Production of several yeast products occurs in the presence of mixtures of monosaccharides. The present work was defined to study the effect of xylose and glucose mixtures, together with system aeration and nitrogen source as the other two operative variables, on xylitol production by Pichia guilliermondii. An artificial neural network (ANN) strategy was used to mathematically show the interplay between these three contr...

Journal: IEEE Trans. Acoustics, Speech, and Signal Processing, 1988
R. Paul Gorman, Terrence J. Sejnowski

We have applied massively parallel learning networks to the classification of sonar returns from two undersea targets and have studied the ability of networks to correctly classify both training and testing examples. Networks with an intermediate layer of hidden processing units achieved a classification accuracy as high as 100 percent on a training set of 104 returns. These networks correctly ...

Journal: CoRR, 2014
Tapani Raiko, Mathias Berglund, Guillaume Alain, Laurent Dinh

Stochastic binary hidden units in a multi-layer perceptron (MLP) network offer at least three potential benefits over deterministic MLP networks. (1) They allow one-to-many mappings to be learned. (2) They can be used in structured prediction problems, where modeling the internal structure of the output is important. (3) Stochasticity has been shown to be an excellent regularizer,...
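A stochastic binary hidden layer, the basic object this abstract discusses, computes sigmoid firing probabilities and then samples 0/1 activations from them. The sketch below shows just that forward sampling step (the paper's contribution concerns how to *train* through such units, which is not shown here).

```python
import numpy as np

def stochastic_binary_layer(x, W, b, rng=None):
    """Forward pass of a stochastic binary hidden layer:
    each unit fires (outputs 1) with probability sigmoid(x @ W + b)."""
    rng = np.random.default_rng(rng)
    p = 1.0 / (1.0 + np.exp(-(x @ W + b)))      # firing probabilities
    return (rng.random(p.shape) < p).astype(float)  # sampled 0/1 activations
```

Because the output is a random function of the input, the same input can map to many different hidden codes, which is exactly what enables the one-to-many mappings listed as benefit (1).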
