Search results for: hidden training

Number of results: 378,572

Journal: Journal of Medical Education
Shayesteh Salehi

Background and purpose: The hidden curriculum has a great impact on students' learning. The present study was conducted on nursing and midwifery students to determine their experience with the hidden curriculum. Methods: It was a combined survey conducted in two stages on nursing and midwifery students. During the first stage, a free interview was carried out to determine their attitudes towards, ...

2010
Amit Choudhary Savita Ahlawat Rahul Rishi

The purpose of this work is to analyze the performance of the back-propagation feed-forward algorithm using various activation functions for the neurons of the hidden and output layers, while varying the number of neurons in the hidden layer. For sample creation, 250 numerals were gathered from 35 people of different ages, including male and female. After binarization, these numerals were clubbed ...
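The experiment design this abstract describes (varying the hidden-layer activation function and the number of hidden neurons in a one-hidden-layer feed-forward network) can be sketched roughly as follows. The data and layer sizes here are illustrative stand-ins, not the authors' 250 handwritten numerals:

```python
import numpy as np

def make_net(n_in, n_hidden, n_out, rng):
    # Small random weight matrices for one hidden layer and one output layer
    W1 = rng.standard_normal((n_in, n_hidden)) * 0.1
    W2 = rng.standard_normal((n_hidden, n_out)) * 0.1
    return W1, W2

def forward(x, W1, W2, act):
    h = act(x @ W1)   # hidden layer with the chosen activation
    return h @ W2     # linear output (softmax and training loop omitted)

# Candidate hidden/output activations to compare
activations = {
    "logistic": lambda z: 1.0 / (1.0 + np.exp(-z)),
    "tanh": np.tanh,
    "relu": lambda z: np.maximum(0.0, z),
}

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 64))   # 4 fake binarized numeral images
for name, act in activations.items():
    for n_hidden in (10, 20, 40):  # vary hidden-layer size
        W1, W2 = make_net(64, n_hidden, 10, rng)
        y = forward(x, W1, W2, act)
        print(name, n_hidden, y.shape)
```

In a full study each configuration would be trained with back-propagation and scored on held-out recognition accuracy; only the forward pass is sketched here.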

Journal: :IET Computer Vision 2014
Hayet Boughrara Mohamed Chtourou Chokri Ben Amar Liming Chen

This study presents a modified constructive training algorithm for the multilayer perceptron (MLP), applied to the face recognition problem. An incremental training procedure has been employed where the training patterns are learned incrementally. This algorithm starts with a small number of training patterns and a single hidden layer with an initial number of neurons. During the training, the...

Journal: :CoRR 2017
Ravid Shwartz-Ziv Naftali Tishby

Despite their great success, there is still no comprehensive theoretical understanding of learning with Deep Neural Networks (DNNs) or their inner organization. Previous work [Tishby and Zaslavsky (2015)] proposed to analyze DNNs in the Information Plane; i.e., the plane of the Mutual Information values that each layer preserves on the input and output variables. They suggested that the goal of...
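The Information Plane analysis mentioned in this abstract is built on mutual information between an input X and a (discretized) layer representation T. A minimal sketch of computing I(X; T) from a discrete joint distribution follows; the 2x2 joint below is illustrative, not taken from the paper:

```python
import numpy as np

def mutual_information(p_xy):
    # I(X;Y) in bits from a joint probability table p_xy[x, y]
    p_xy = p_xy / p_xy.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    mask = p_xy > 0  # skip zero cells so log is defined
    return float((p_xy[mask] * np.log2(p_xy[mask] / (p_x @ p_y)[mask])).sum())

# Perfectly correlated binary variables carry exactly 1 bit
joint = np.array([[0.5, 0.0], [0.0, 0.5]])
print(mutual_information(joint))  # → 1.0
```

Estimating these quantities for continuous layer activations requires binning or other estimators; the discrete case above only shows the underlying formula.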

2006
Jüri Lember Alexey Koloydenko

To estimate the emission parameters in hidden Markov models, one commonly uses the EM algorithm or one of its variants. Our primary motivation, however, is the Philips speech recognition system, wherein the EM algorithm is replaced by the Viterbi training algorithm. Viterbi training is faster and computationally less involved than EM, but it is also biased and need not even be consistent. We propose an...
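Viterbi training, as contrasted with EM in this abstract, re-estimates parameters from counts along the single best state path rather than from posterior-weighted counts over all paths. A toy sketch for a discrete-emission HMM (all parameter values below are illustrative):

```python
import numpy as np

def viterbi(obs, log_pi, log_A, log_B):
    # Most likely state path under initial dist pi, transitions A, emissions B
    T, K = len(obs), len(log_pi)
    delta = np.zeros((T, K))
    back = np.zeros((T, K), dtype=int)
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A  # (from-state, to-state)
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    path = np.zeros(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):
        path[t] = back[t + 1, path[t + 1]]
    return path

def viterbi_update_emissions(obs, path, K, M, eps=1e-3):
    # Hard counts from the single best path only (EM would use posteriors)
    counts = np.full((K, M), eps)  # small smoothing to avoid zero rows
    for s, o in zip(path, obs):
        counts[s, o] += 1.0
    return counts / counts.sum(axis=1, keepdims=True)

pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])  # 2 states, 2 output symbols
obs = np.array([0, 0, 1, 1, 0])
path = viterbi(obs, np.log(pi), np.log(A), np.log(B))
B_new = viterbi_update_emissions(obs, path, 2, 2)
print(path, B_new.round(2))
```

Iterating decode-then-reestimate gives the segmental k-means flavor of training the abstract refers to; its speed comes from replacing the forward-backward posteriors with a single hard alignment, which is also the source of its bias.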

2017
Milena Rabovsky Steven Stenberg Hansen James L. McClelland

Why do neural responses decrease with practice? We used a predictive neural network model of sentence processing (St. John & McClelland, 1990) to simulate neural responses during language understanding, and examined the model's correlate of neural responses (specifically, the N400 component), measured as stimulus-induced change in hidden layer activation, across training. N400 magnitude first in...

2010
Amit Choudhary Rahul Rishi Savita Ahlawat

The objective of this paper is to study the character recognition capability of the feed-forward back-propagation algorithm using more than one hidden layer. This analysis was conducted on 182 different letters from the English alphabet. After binarization, these characters were clubbed together to form training patterns for the neural network. The network was trained to learn its behavior by adjusting the con...

Journal: :Neurocomputing 2015
Alexandros Iosifidis

This paper proposes a novel method for supervised subspace learning based on Single-hidden Layer Feedforward Neural networks. The proposed method calculates appropriate network target vectors by formulating a Bayesian model exploiting both the labeling information available for the training data and geometric properties of the training data, when represented in the feature space determined by t...

Journal: :IEEE transactions on neural networks 1992
William J. Byrne

Training a Boltzmann machine with hidden units is appropriately treated in information geometry using the information divergence and the technique of alternating minimization. The resulting algorithm is shown to be closely related to gradient descent Boltzmann machine learning rules, and the close relationship of both to the EM algorithm is described. An iterative proportional fitting procedure...

Journal: :CoRR 2011
Amit Choudhary Rahul Rishi

This work is focused on improving the character recognition capability of feed-forward back-propagation neural network by using one, two and three hidden layers and the modified additional momentum term. 182 English letters were collected for this work and the equivalent binary matrix form of these characters was applied to the neural network as training patterns. While the network was getting ...
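The momentum term referred to in this abstract augments each back-propagation weight update with a fraction of the previous update, which damps oscillation and speeds convergence. A minimal sketch of the classical update rule on a one-dimensional quadratic loss, not the authors' character-recognition network:

```python
import numpy as np

def momentum_step(w, v, grad, lr=0.1, mu=0.9):
    # v carries over a fraction mu of the last update ("momentum")
    v = mu * v - lr * grad(w)
    return w + v, v

grad = lambda w: 2.0 * (w - 3.0)  # gradient of (w - 3)^2, minimum at w = 3
w, v = 0.0, 0.0
for _ in range(200):
    w, v = momentum_step(w, v, grad)
print(round(w, 4))
```

In a multi-hidden-layer network the same rule is applied per weight matrix, with the gradient supplied by back-propagation instead of this closed-form derivative.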
