Search results for: hidden training

Number of results: 378572

1998
Jack Baskin

Abstract, Motivation: Complete forward-backward (Baum-Welch) hidden Markov model training cannot take advantage of the linear-space, divide-and-conquer sequence alignment algorithms because it examines all possible paths rather than only the single best path. This paper discusses the implementation and performance of checkpoint-based reduced-space sequence alignment in the SAM Hidden M...
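As a rough illustration of the reduced-space idea this abstract describes, the sketch below runs the forward (alpha) recursion while storing only every k-th column as a checkpoint, so a later sweep can regenerate intermediate columns segment by segment instead of keeping the full dynamic-programming matrix. The HMM representation (row-stochastic transition matrix A, emission matrix B, initial distribution pi) and all function names are assumptions for illustration, not the SAM implementation.

```python
import numpy as np

def forward_checkpointed(obs, pi, A, B, k=32):
    """Forward recursion that keeps only every k-th alpha column."""
    alpha = pi * B[:, obs[0]]
    checkpoints = {0: alpha.copy()}
    for t in range(1, len(obs)):
        alpha = (alpha @ A) * B[:, obs[t]]
        alpha /= alpha.sum()                  # rescale to avoid underflow
        if t % k == 0:
            checkpoints[t] = alpha.copy()     # sparse snapshots only
    return checkpoints, alpha

def recompute_segment(start_t, start_alpha, end_t, obs, A, B):
    """Regenerate the alpha columns between two checkpoints on demand,
    trading extra computation for roughly O(N*T/k + N*k) memory."""
    alpha, columns = start_alpha.copy(), [start_alpha.copy()]
    for t in range(start_t + 1, end_t + 1):
        alpha = (alpha @ A) * B[:, obs[t]]
        alpha /= alpha.sum()
        columns.append(alpha.copy())
    return columns
```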

Journal: IEEE Trans. Speech and Audio Processing, 2000
Mark J. F. Gales

2007
Jüri Lember Alexey Koloydenko

We consider estimation of the emission parameters in hidden Markov models. Commonly, one uses the EM algorithm for this purpose. However, our primary motivation is the Philips speech recognition system wherein the EM algorithm is replaced by the Viterbi training algorithm. Viterbi training is faster and computationally less involved than EM, but it is also biased and need not even be consistent...
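A minimal sketch of the Viterbi training loop the abstract contrasts with EM, assuming a discrete-emission HMM with fixed transitions: each iteration decodes the single best state path and re-estimates the emission matrix from the resulting hard counts, rather than from the full posteriors of Baum-Welch. Variable names, the pseudo-count smoothing, and the fixed iteration count are illustrative assumptions.

```python
import numpy as np

def viterbi_path(obs, pi, A, B):
    """Most likely state sequence for one observation sequence (log domain)."""
    T, N = len(obs), len(pi)
    logA, logB = np.log(A), np.log(B)
    delta = np.log(pi) + logB[:, obs[0]]
    back = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + logA            # scores[i, j]: best path ending in j via i
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + logB[:, obs[t]]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

def viterbi_train_emissions(sequences, pi, A, B, n_iter=10):
    """Hard-EM re-estimation of the emission matrix B only."""
    N, M = B.shape
    for _ in range(n_iter):
        counts = np.full((N, M), 1e-3)            # pseudo-counts keep B strictly positive
        for obs in sequences:
            for t, s in enumerate(viterbi_path(obs, pi, A, B)):
                counts[s, obs[t]] += 1.0          # hard assignments, not posteriors
        B = counts / counts.sum(axis=1, keepdims=True)
    return B
```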

2001
G. E. Hinton

This paper describes research in progress on two quite different ways of training systems that are composed of many small Hidden Markov Models (HMMs). The first is a purely discriminative method in which all of the parameters of all the HMMs are adjusted to optimize classification performance. The second is an unsupervised method in which many little HMMs are used to model the probability de...

2014
Yingbo Zhou Devansh Arpit Ifeoma Nwogu Venu Govindaraju

Traditionally, when generative models of data are developed via deep architectures, greedy layer-wise pre-training is employed. In a well-trained model, the lower layer of the architecture models the data distribution conditional on the hidden variables, while the higher layers model the prior over those hidden variables. But due to the greedy scheme of the layer-wise training technique, the parameter...
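To make the greedy scheme being criticised concrete, here is a small sketch (not the authors' method) of layer-wise pre-training with tied-weight sigmoid autoencoders: each layer is fit only to reconstruct the codes of the layer below and is then frozen, which is why higher layers cannot revise what lower layers have already committed to. Learning rates, epoch counts, and function names are illustrative assumptions.

```python
import numpy as np

def pretrain_layer(X, n_hidden, lr=0.1, epochs=50, seed=0):
    """Fit one tied-weight sigmoid autoencoder layer on X by plain gradient descent."""
    rng = np.random.default_rng(seed)
    W = rng.normal(0, 0.1, (X.shape[1], n_hidden))
    b, c = np.zeros(n_hidden), np.zeros(X.shape[1])
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        H = sig(X @ W + b)                       # encode
        Xr = sig(H @ W.T + c)                    # decode with tied weights
        dXr = (Xr - X) * Xr * (1 - Xr)           # gradient at decoder pre-activation
        dH = (dXr @ W) * H * (1 - H)             # gradient at encoder pre-activation
        W -= lr * (X.T @ dH + dXr.T @ H) / len(X)
        b -= lr * dH.sum(axis=0) / len(X)
        c -= lr * dXr.sum(axis=0) / len(X)
    return W, b

def greedy_pretrain(X, layer_sizes):
    """Stack layers greedily: each layer trains on the previous layer's codes."""
    params, H = [], X
    for n_hidden in layer_sizes:
        W, b = pretrain_layer(H, n_hidden)
        params.append((W, b))
        H = 1.0 / (1.0 + np.exp(-(H @ W + b)))   # frozen codes fed to the next layer
    return params
```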

2014
Alberto Torres David Díaz José R. Dorronsoro

We discuss how to build sparse one-hidden-layer MLPs by replacing the standard l2 weight-decay penalty on all weights with an l1 penalty on the linear output weights. We propose an iterative two-step training procedure where the output weights are found using the FISTA proximal optimization algorithm to solve a Lasso-like problem and the hidden weights are computed by unconstrained minimization. As ...
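A simplified sketch of the two-step procedure described here, for a one-hidden-layer regression MLP: the output weights are updated with soft-thresholded proximal steps on a Lasso-like objective (plain ISTA below; FISTA adds an acceleration term omitted for brevity), while the hidden weights get unconstrained gradient steps with no penalty. All names, step sizes, and iteration counts are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def soft_threshold(w, t):
    """Proximal operator of the l1 norm (the Lasso shrinkage step)."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def train_sparse_mlp(X, y, n_hidden=20, lam=0.1, lr=0.01, outer=50, inner=100):
    """Alternate l1-proximal steps on the output weights with plain
    gradient steps on the hidden-layer weights."""
    rng = np.random.default_rng(0)
    W1 = rng.normal(0, 0.5, (X.shape[1], n_hidden))
    b1, w2 = np.zeros(n_hidden), np.zeros(n_hidden)
    for _ in range(outer):
        H = np.tanh(X @ W1 + b1)
        # Step 1: Lasso-like problem in the output weights w2, hidden weights fixed.
        step = 1.0 / (np.linalg.norm(H, 2) ** 2 + 1e-8)   # 1 / Lipschitz constant
        for _ in range(inner):
            w2 = soft_threshold(w2 - step * (H.T @ (H @ w2 - y)), step * lam)
        # Step 2: unconstrained gradient step on the hidden weights.
        dH = np.outer(H @ w2 - y, w2) * (1 - H ** 2)
        W1 -= lr * X.T @ dH / len(X)
        b1 -= lr * dH.sum(axis=0) / len(X)
    return W1, b1, w2                            # w2 typically ends up sparse
```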

2013
K. A Narayanankutty

Probabilistic Finite State Machines (PFSMs) are used in feature extraction, training, and testing, which are the most important steps in any speech recognition system. An important PFSM is the Hidden Markov Model, which is dealt with in this paper. This paper proposes a hardware architecture for the forward-backward algorithm as well as the Viterbi algorithm used in speech recognition based on Hidden Ma...
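For reference alongside the hardware mapping, here is a scaled software version of the forward-backward recursions the paper targets; it returns the per-frame state posteriors (the gamma values) used in HMM training, plus the sequence log-likelihood. The row-stochastic transition matrix A and discrete emission matrix B follow common textbook conventions and are assumptions here, not the proposed architecture.

```python
import numpy as np

def forward_backward(obs, pi, A, B):
    """Scaled forward-backward pass for one discrete observation sequence."""
    T, N = len(obs), len(pi)
    alpha, beta, scale = np.zeros((T, N)), np.zeros((T, N)), np.zeros(T)
    alpha[0] = pi * B[:, obs[0]]
    scale[0] = alpha[0].sum()
    alpha[0] /= scale[0]
    for t in range(1, T):                          # forward recursion
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        scale[t] = alpha[t].sum()
        alpha[t] /= scale[t]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):                 # backward recursion
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1]) / scale[t + 1]
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)      # P(state at t | full sequence)
    return gamma, np.log(scale).sum()              # posteriors, log-likelihood
```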

Journal: Desert, 2010
H. Memarian Khalilabad, S. Feiznia, K. Zakikhani

Abstract: Erosion and sedimentation are among the most complicated problems in hydrodynamics and are very important in water-related projects of arid and semi-arid basins. For this reason, the availability of suitable methods for good estimation of the suspended sediment load of rivers is very valuable. Solving the hydrodynamic equations related to these phenomena and access to a mathematical-conceptual model ...

Journal: Research in Medicine (Pejouhesh dar Pezeshki), 2003
Heydar Ali Abedi, Shayesteh Salehi, Majid Rahimi, Masoud Bahrami

Background: The hidden curriculum has a great impact on students' learning. The present study was conducted on Nursing and Midwifery students to determine their experience with the hidden curriculum. Materials and methods: It was a combined survey conducted in two stages on Nursing and Midwifery students. During the first stage, a free interview was carried out to determine their attitudes tow...

Chart: number of search results per year
