Search results for: hidden training

Number of results: 378572

2007
Heping Li Zhanyi Hu Yihong Wu Fuchao Wu

The traditional co-training algorithm, which requires a large number of unlabeled examples in advance and then trains classifiers through an iterative learning approach, is not suitable for online learning of classifiers. To overcome this barrier, we propose a novel semi-supervised learning algorithm, called MAPACo-Training, by combining co-training with the principle of Maximum A Posteriori adaptatio...
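The general co-training idea behind this abstract can be sketched as follows. This is a generic toy illustration, not the MAPACo-Training method itself: two simple nearest-class-mean classifiers, each seeing a different feature "view" of synthetic data, take turns pseudo-labeling the unlabeled examples they are most confident about.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-view data: each example has two independent feature "views".
# (Synthetic data for illustration only.)
n = 200
y = rng.integers(0, 2, n)
view1 = y[:, None] * 2.0 + rng.normal(0.0, 1.0, (n, 2))
view2 = y[:, None] * 2.0 + rng.normal(0.0, 1.0, (n, 2))

# Start with only a handful of labeled examples per class.
labeled = np.zeros(n, dtype=bool)
for c in (0, 1):
    labeled[np.where(y == c)[0][:5]] = True
pseudo = np.where(labeled, y, -1)          # -1 marks "unlabeled"

def class_means(X, lab, mask):
    return np.array([X[mask & (lab == c)].mean(axis=0) for c in (0, 1)])

def predict(X, means):
    d = np.linalg.norm(X[:, None, :] - means[None, :, :], axis=2)
    return d.argmin(axis=1), -d.min(axis=1)  # label and confidence

for _ in range(10):                        # co-training rounds
    for X in (view1, view2):
        means = class_means(X, pseudo, labeled)
        lab, conf = predict(X, means)
        cand = np.where(~labeled)[0]
        if cand.size == 0:
            break
        # the current view labels its most confident unlabeled examples
        top = cand[np.argsort(conf[cand])[-10:]]
        pseudo[top] = lab[top]
        labeled[top] = True

acc = (pseudo[labeled] == y[labeled]).mean()
print(f"pseudo-label accuracy: {acc:.2f}")
```

The key design point is that each view must carry enough signal to classify on its own; the confidence ranking decides which pseudo-labels are trusted first.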

2011
Klaus Neumann Jochen J. Steil

Extreme learning machines are single-hidden-layer feed-forward neural networks, where training is restricted to the output weights in order to achieve fast learning with good performance. The success of learning strongly depends on the random parameter initialization. To overcome the problem of unsuited initialization ranges, a novel and efficient pretraining method to adapt extreme learnin...
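The basic extreme learning machine described above can be sketched in a few lines: hidden-layer weights are drawn at random and frozen, and only the output weights are fit by least squares. The initialization range and the toy regression target below are arbitrary assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression target: y = sin(x) on [-3, 3].
X = np.linspace(-3, 3, 200)[:, None]
y = np.sin(X).ravel()

n_hidden = 50
# Random input weights and biases stay fixed (the ELM idea); the
# uniform [-1, 1] initialization range is an assumption.
W = rng.uniform(-1, 1, (1, n_hidden))
b = rng.uniform(-1, 1, n_hidden)

H = np.tanh(X @ W + b)                        # hidden-layer activations
beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # train output weights only

pred = H @ beta
mse = np.mean((pred - y) ** 2)
print(f"training MSE: {mse:.6f}")
```

Because the hidden layer is random, fit quality depends directly on the initialization range, which is exactly the sensitivity the abstract's pretraining method targets.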

2004
Andrew Singer Frances Shin Eugene Church

The problem of modeling chaotic nonlinear dynamical systems using hidden Markov models is considered. A hidden Markov model for a class of chaotic systems is developed from noise-free observations of the output of that system. A combination of vector quantization and the Baum-Welch algorithm is used for training. The importance of this combined iterative approach is demonstrated. The model is t...
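The pipeline sketched in this abstract, quantize noise-free observations of a chaotic system into symbols and then train a discrete HMM with Baum-Welch, can be illustrated on the logistic map. The quantile binning below is a crude one-dimensional stand-in for vector quantization, and the model sizes are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Noise-free observations of a chaotic system: the logistic map.
T = 500
x = np.empty(T)
x[0] = 0.3
for t in range(1, T):
    x[t] = 3.9 * x[t - 1] * (1 - x[t - 1])

# Crude "vector quantization": equal-frequency binning into M symbols.
M = 4
edges = np.quantile(x, np.linspace(0, 1, M + 1)[1:-1])
obs = np.digitize(x, edges)            # symbol sequence in {0..M-1}

# Discrete HMM with N hidden states, randomly initialized.
N = 3
A = rng.dirichlet(np.ones(N), N)       # transition matrix
B = rng.dirichlet(np.ones(M), N)       # emission matrix
pi = np.full(N, 1.0 / N)

def forward_backward(obs, pi, A, B):
    T, N = len(obs), len(pi)
    alpha = np.empty((T, N)); c = np.empty(T)
    alpha[0] = pi * B[:, obs[0]]; c[0] = alpha[0].sum(); alpha[0] /= c[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        c[t] = alpha[t].sum(); alpha[t] /= c[t]
    beta = np.empty((T, N)); beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = (A @ (B[:, obs[t + 1]] * beta[t + 1])) / c[t + 1]
    return alpha, beta, c

loglik = []
for _ in range(10):                    # Baum-Welch (EM) iterations
    alpha, beta, c = forward_backward(obs, pi, A, B)
    loglik.append(np.log(c).sum())
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)
    xi = np.zeros((N, N))
    for t in range(T - 1):
        xi += alpha[t][:, None] * A * (B[:, obs[t + 1]] * beta[t + 1])[None, :] / c[t + 1]
    pi = gamma[0]
    A = xi / gamma[:-1].sum(axis=0)[:, None]
    B = np.array([[gamma[obs == k, i].sum() for k in range(M)] for i in range(N)])
    B /= B.sum(axis=1, keepdims=True)

print("log-likelihood per iteration:", [round(l, 1) for l in loglik])
```

The log-likelihood is non-decreasing across iterations, which is the standard EM guarantee and a useful sanity check when combining quantization with Baum-Welch as the abstract does.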

2011
Michael Pucher Nadja Kerschhofer-Puhalo Dietmar Schabus

This paper describes a method for selecting an appropriate phone set in dialect speech synthesis for a so far undescribed dialect by applying hidden Markov model (HMM) based training and clustering methods. In this pilot study we show how a phone set derived from the phonetic surface can be optimized given a small amount of dialect speech training data.

1995
Peter G. Anderson Roger S. Gaborski Ming Ge Sanjay Raghavendra Mei-Ling Lung

Abstract We present a novel training algorithm for a feed-forward neural network with a single hidden layer of nodes (i.e., two layers of connection weights). Our algorithm is capable of training networks for hard problems, such as the classic two-spirals problem. The weights in the first layer are determined using a qua...

1991
Sowmya Ramachandran Lorien Y. Pratt

Automatic determination of proper neural network topology by trimming over-sized networks is an important area of study, which has previously been addressed using a variety of techniques. In this paper, we present Information Measure Based Skeletonisation (IMBS), a new approach to this problem where superfluous hidden units are removed based on their information measure (IM). This measure, borr...

2010
Ravindra Babu

The purpose of this paper is to classify LISS-III satellite images into different classes such as agriculture, urban, and water body. Here, pixel-based classification is used to classify each pixel of the satellite image as belonging to one of those three classes. To perform this classification, a neural-network backpropagation technique is used. The neural network consists of three layers: Input...
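A three-layer network trained by backpropagation, as this abstract describes, can be sketched as follows. The data here is synthetic (three well-separated clusters standing in for the agriculture/urban/water-body pixel classes), and the layer sizes and learning rate are assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for pixel features: three classes with different means.
n_per = 100
X = np.vstack([rng.normal(m, 0.5, (n_per, 4)) for m in (0.0, 2.0, 4.0)])
y = np.repeat(np.arange(3), n_per)
Y = np.eye(3)[y]                                   # one-hot targets

# One hidden layer, trained by full-batch gradient-descent backpropagation.
W1 = rng.normal(0, 0.5, (4, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 3)); b2 = np.zeros(3)
lr = 0.1

for epoch in range(300):
    H = np.tanh(X @ W1 + b1)                       # hidden activations
    Z = H @ W2 + b2
    P = np.exp(Z - Z.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)              # softmax outputs
    dZ = (P - Y) / len(X)                          # softmax cross-entropy gradient
    dW2 = H.T @ dZ; db2 = dZ.sum(axis=0)
    dH = dZ @ W2.T * (1 - H ** 2)                  # backprop through tanh
    dW1 = X.T @ dH; db1 = dH.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

acc = (P.argmax(axis=1) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

For real imagery, each row of `X` would hold the spectral values of one pixel, matching the pixel-wise classification described in the abstract.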

Aliakbar Heydari Fazel Dolati Mojtaba Ahmadi Yasser Vasseghian

In this study, the activated sludge process for wastewater treatment in a refinery was investigated. For this purpose, a laboratory-scale rig was built. The effects of several parameters, such as temperature, residence time, Leca content (the percentage of the reactor filled with Leca), and UV radiation, on COD removal efficiency were experimentally examined. Maximum COD removal efficiency was obtained...

2002
P. M. Silva F. Garces

This paper explores training and initialization aspects of dynamic neural networks when applied to the nonlinear system identification problem. A well known dynamic neural network structure contains both output states and hidden states. Output states are related to the outputs of the system represented by the network. Hidden states are particularly important in allowing dynamic neural networks ...

1994
Anders Krogh

A hidden Markov model for labeled observations, called a CHMM, is introduced and a maximum likelihood method is developed for estimating the parameters of the model. Instead of training it to model the statistics of the training sequences, it is trained to optimize recognition. It resembles MMI training, but is more general, and has MMI as a special case. The standard forward-backward procedure ...

Chart: number of search results per year