Search results for: hidden training

Number of results: 378572

Journal: Journal of Machine Learning Research 2007
Aggelos Chariatis

The experimental investigation of the efficient learning of highly non-linear problems by online training, using ordinary feed-forward neural networks and stochastic gradient descent on the errors computed by back-propagation, gives evidence that the most crucial factors for efficient training are the hidden units’ differentiation, the attenuation of the hidden units’ interference and the selec...
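
A minimal sketch of the training setup this abstract describes (not the paper's code): a one-hidden-layer feed-forward network trained online with stochastic gradient descent on back-propagated errors. The layer sizes, learning rate, and XOR-style target task are illustrative assumptions.

```python
# Sketch: online SGD on a feed-forward net, errors from back-propagation.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out, lr = 2, 16, 1, 0.05      # illustrative sizes/rate

W1 = rng.normal(0, 0.5, (n_hidden, n_in)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.5, (n_out, n_hidden)); b2 = np.zeros(n_out)

def step(x, y):
    """One online SGD update on a single (x, y) pair."""
    global W1, b1, W2, b2
    h = np.tanh(W1 @ x + b1)          # hidden activations
    y_hat = W2 @ h + b2               # linear output
    err = y_hat - y                   # output error
    # Back-propagate: gradients of 0.5*||err||^2 w.r.t. each parameter.
    dW2 = np.outer(err, h); db2 = err
    dh = (W2.T @ err) * (1 - h**2)    # tanh derivative
    dW1 = np.outer(dh, x); db1 = dh
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1
    return float(0.5 * err @ err)

# Online training on a highly non-linear target (XOR-like), one sample at a time.
for t in range(20000):
    x = rng.integers(0, 2, size=2).astype(float)
    y = np.array([float(x[0] != x[1])])
    step(x, y)
```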

2005
Paul Taylor

We propose a method for determining the canonical phonemic transcription of a word from its orthography using hidden Markov models. In the model, phonemes are the hidden states and graphemes the observations. Apart from one pre-processing step, the model is fully automatic. The paper describes the basic HMM framework and enhancements that use preprocessing, context-dependent models and a sylla...
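
The setup is a standard HMM with phonemes as hidden states and graphemes as observations; a minimal Viterbi decoding sketch under that framing is below. The three-symbol alphabet and all probabilities are toy values invented for illustration, not the paper's trained model.

```python
# Sketch: phonemes as hidden states, graphemes as observations, Viterbi decoding.
import numpy as np

phonemes = ["k", "ae", "t"]                       # hidden states (toy)
graphemes = ["c", "a", "t"]                       # observation alphabet (toy)
g_index = {g: i for i, g in enumerate(graphemes)}

pi = np.array([0.8, 0.1, 0.1])                    # initial state probabilities
A = np.array([[0.1, 0.8, 0.1],                    # transition probabilities
              [0.1, 0.1, 0.8],
              [0.4, 0.3, 0.3]])
B = np.array([[0.9, 0.05, 0.05],                  # emission P(grapheme | phoneme)
              [0.05, 0.9, 0.05],
              [0.05, 0.05, 0.9]])

def viterbi(word):
    """Most likely phoneme sequence for a grapheme string."""
    o = [g_index[g] for g in word]
    T, N = len(o), len(phonemes)
    delta = np.zeros((T, N)); psi = np.zeros((T, N), dtype=int)
    delta[0] = np.log(pi) + np.log(B[:, o[0]])
    for t in range(1, T):
        scores = delta[t - 1][:, None] + np.log(A)
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + np.log(B[:, o[t]])
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):                 # backtrack
        path.append(int(psi[t][path[-1]]))
    return [phonemes[s] for s in reversed(path)]

print(viterbi("cat"))   # expected: ['k', 'ae', 't']
```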

Journal: IEEE Trans. Geoscience and Remote Sensing 1999
Lorenzo Bruzzone Diego Fernández-Prieto

In this paper, a supervised technique for training radial basis function (RBF) neural network classifiers is proposed. Such a technique, unlike traditional ones, considers the class memberships of training samples to select the centers and widths of the kernel functions associated with the hidden neurons of an RBF network. The result is twofold: a significant reduction in the overall classifica...
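
A minimal sketch of the general idea, not the paper's exact algorithm: choose RBF centers by clustering each class separately (so kernel placement reflects class membership), set a shared width from the spread of the centers, and fit the output weights by regularized least squares. The cluster counts, width heuristic, and toy data are assumptions.

```python
# Sketch: class-aware selection of RBF centers and widths.
import numpy as np

rng = np.random.default_rng(1)
X0 = rng.normal([-2, 0], 0.7, (100, 2)); X1 = rng.normal([2, 0], 0.7, (100, 2))
X = np.vstack([X0, X1]); y = np.array([0] * 100 + [1] * 100)   # toy two-class data

def kmeans(X, k, iters=50):
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        lab = ((X[:, None] - C[None]) ** 2).sum(-1).argmin(1)
        C = np.array([X[lab == j].mean(0) if (lab == j).any() else C[j]
                      for j in range(k)])
    return C

# Class-aware center selection: cluster each class separately.
centers = np.vstack([kmeans(X[y == c], k=3) for c in (0, 1)])
# Width: mean distance between distinct centers (a common heuristic).
d = np.linalg.norm(centers[:, None] - centers[None], axis=-1)
sigma = d[d > 0].mean() / np.sqrt(2)

def phi(X):
    """Hidden-layer activations: one Gaussian kernel per center."""
    D2 = ((X[:, None] - centers[None]) ** 2).sum(-1)
    return np.exp(-D2 / (2 * sigma ** 2))

# Output layer: regularized least squares on the hidden activations.
H = phi(X)
w = np.linalg.solve(H.T @ H + 1e-6 * np.eye(H.shape[1]), H.T @ y)
print(f"training accuracy: {((phi(X) @ w > 0.5) == y).mean():.2f}")
```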

2012
Emmanuel Ramasso Thierry Denoeux Noureddine Zerhouni

This paper addresses the problem of Hidden Markov Model (HMM) training and inference when the training data are composed of feature vectors plus uncertain and imprecise labels. The “soft” labels represent partial knowledge about the possible states at each time step and the “softness” is encoded by belief functions. For the obtained model, called a Partially-Hidden Markov Model (PHMM), the tra...
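
One way to picture the core idea (a sketch, not the paper's evidential calculus): partial label knowledge enters the forward recursion as a per-time plausibility weight on each hidden state, with 1 meaning the state is fully possible and 0 ruling it out. All parameters and weights below are illustrative.

```python
# Sketch: forward algorithm with per-time state plausibility weights.
import numpy as np

pi = np.array([0.6, 0.4])                    # initial state probabilities
A = np.array([[0.7, 0.3], [0.2, 0.8]])       # transitions
B = np.array([[0.9, 0.1], [0.2, 0.8]])       # emissions P(obs | state)
obs = [0, 0, 1, 1]
pl = np.array([[1.0, 1.0],                   # no label knowledge at t=0
               [1.0, 0.2],                   # state 1 mostly implausible at t=1
               [0.1, 1.0],                   # state 0 mostly implausible at t=2
               [1.0, 1.0]])

alpha = pi * B[:, obs[0]] * pl[0]            # weighted forward variables
for t in range(1, len(obs)):
    alpha = (alpha @ A) * B[:, obs[t]] * pl[t]
print("weighted sequence likelihood:", alpha.sum())
```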

1994
Sam Waugh

We report on results of experiments using several variations of Cascade-Correlation. The first examines the application of patience parameters to the addition of hidden nodes, with the aim of halting network training. The other techniques involve altering standard candidate training: both training candidates in subgroups of the same node style and training candidates individually, instead of trai...
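
A minimal sketch of a patience criterion of the kind described, written as a generic early-stopping rule for a training phase; the thresholds and the train_epoch callback are illustrative assumptions, not the paper's exact mechanism.

```python
# Sketch: patience-based halting of a training phase (e.g., hidden-node addition).
def train_with_patience(train_epoch, patience=8, min_delta=1e-3, max_epochs=500):
    """Run epochs until the loss stops improving for `patience` epochs."""
    best, stall = float("inf"), 0
    for epoch in range(max_epochs):
        loss = train_epoch()              # one epoch; returns current loss
        if loss < best - min_delta:
            best, stall = loss, 0         # real progress: reset the counter
        else:
            stall += 1                    # no meaningful progress this epoch
        if stall >= patience:
            return epoch, best            # halt this phase
    return max_epochs, best
```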

2016
Somesh Kumar Rajkumar Goel

In this paper the RSA algorithm has been implemented with a feed-forward artificial neural network using MATLAB. This implementation focuses on network parameters such as topology, training algorithm, number of hidden layers, number of neurons in each layer, and learning rate, in order to obtain more efficient results. Many examples were tested, and it was found that a two-hidden-layer feed-forward ...
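
A minimal sketch of the kind of parameter sweep the abstract describes (hidden-layer count, neurons per layer, learning rate), here using scikit-learn's MLPRegressor on stand-in data; the RSA-specific inputs and targets are omitted, and the grid values are assumptions.

```python
# Sketch: sweeping hidden layers, layer widths, and learning rate.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, (500, 4)); y = np.sin(X).sum(axis=1)   # stand-in task

best = None
for layers in [(16,), (16, 16), (32, 32)]:          # 1 vs 2 hidden layers
    for lr in [0.001, 0.01]:
        net = MLPRegressor(hidden_layer_sizes=layers, learning_rate_init=lr,
                           max_iter=2000, random_state=0).fit(X, y)
        score = net.score(X, y)                     # R^2 on training data
        if best is None or score > best[0]:
            best = (score, layers, lr)
print("best (score, layers, lr):", best)
```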

Journal: Neural Networks 1988
R. Paul Gorman Terrence J. Sejnowski

A neural network learning procedure has been applied to the classification of sonar returns from two undersea targets, a metal cylinder and a similarly shaped rock. Networks with an intermediate layer of hidden processing units achieved a classification accuracy as high as 100% on a training set of 104 returns. These networks correctly classified up to 90.4% of 104 test returns not contained in...

Journal: Journal of Chemical and Petroleum Engineering 2014
Aliakbar Heydari Fazel Dolati Mojtaba Ahmadi Yasser Vasseghian

In this study, the activated sludge process for wastewater treatment in a refinery was investigated. For this purpose, a laboratory-scale rig was built. The effects of several parameters, such as temperature, residence time, LECA (filling-in percentage of the reactor by LECA) and UV radiation, on COD removal efficiency were experimentally examined. Maximum COD removal efficiency was obtained...

2012
Enrico Di Lello Tinne De Laet Herman Bruyninckx

The Hierarchical Dirichlet Process Hidden Markov Model (HDP-HMM) is a Bayesian nonparametric extension of the classical Hidden Markov Model (HMM) that allows inferring a posterior distribution over the cardinality of the hidden state space, thus avoiding the cross-validation otherwise needed in standard EM training. This paper presents the application of the Hierarchical Dirichlet Process Hidden Markov ...
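
For intuition, a sketch of the stick-breaking (GEM) construction underlying the Dirichlet-process prior on the state space, which is what lets the model place posterior mass over different numbers of hidden states. The concentration parameter and truncation level are illustrative, and this samples only the prior, not HDP-HMM inference.

```python
# Sketch: truncated stick-breaking draw of Dirichlet-process weights.
import numpy as np

def stick_breaking(alpha, K, rng):
    """Sample truncated weights beta ~ GEM(alpha) over K components."""
    v = rng.beta(1.0, alpha, size=K)                          # stick proportions
    remaining = np.concatenate([[1.0], np.cumprod(1 - v)[:-1]])
    return v * remaining                                      # component weights

rng = np.random.default_rng(3)
beta = stick_breaking(alpha=2.0, K=20, rng=rng)
print("effective number of states:", (beta > 0.01).sum())
```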

2015
Bernard Widrow Youngsik Kim Yizheng Liao Dookun Park Aaron Greenblatt

Back-Prop and No-Prop, two training algorithms for multi-layer neural networks, are compared in design and performance. With Back-Prop, all layers of the network receive least squares training. With No-Prop, only the output layer receives least squares training, whereas the hidden layer weights are chosen randomly and then fixed. No-Prop is much simpler than Back-Prop. No-Prop can deliver equal...
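
A minimal sketch of the No-Prop idea as described: draw the hidden-layer weights randomly, freeze them, and train only the output layer by (regularized) least squares. The sizes, data, and ridge term are illustrative assumptions.

```python
# Sketch: No-Prop-style training -- random fixed hidden layer, least-squares output.
import numpy as np

rng = np.random.default_rng(4)
X = rng.uniform(-3, 3, (400, 1)); y = np.sin(X[:, 0])   # stand-in regression task

n_hidden = 200
W = rng.normal(0, 1, (1, n_hidden))                     # random, then fixed
b = rng.uniform(-1, 1, n_hidden)

def hidden(X):
    """Random, untrained hidden layer."""
    return np.tanh(X @ W + b)

H = hidden(X)
# Only the output layer is trained, by regularized least squares.
w_out = np.linalg.solve(H.T @ H + 1e-6 * np.eye(n_hidden), H.T @ y)
mse = ((hidden(X) @ w_out - y) ** 2).mean()
print(f"training MSE: {mse:.4f}")
```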
