Search results for: hidden training

Number of results: 378,572

Journal: CoRR 2014
Jonathan Tapson, Philip de Chazal, André van Schaik

We present a closed form expression for initializing the input weights in a multilayer perceptron, which can be used as the first step in synthesis of an Extreme Learning Machine. The expression is based on the standard function for a separating hyperplane as computed in multilayer perceptrons and linear Support Vector Machines; that is, as a linear combination of input data samples. In the abs...
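The abstract describes initializing input weights as a linear combination of input data samples, as in the separating-hyperplane form used by linear SVMs. A minimal sketch of that idea, assuming each hidden unit's weight vector is the difference of two randomly chosen training samples (an illustrative choice, not necessarily the paper's exact closed-form expression):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical training inputs: 100 samples, 5 features
X = rng.normal(size=(100, 5))

def init_input_weights(X, n_hidden, rng):
    """Set each hidden unit's weight vector to a linear combination
    of input samples -- here, the difference of two randomly chosen
    samples, so each hyperplane separates a pair of training points."""
    i = rng.integers(0, len(X), size=n_hidden)
    j = rng.integers(0, len(X), size=n_hidden)
    return (X[i] - X[j]).T  # shape: (n_features, n_hidden)

W = init_input_weights(X, n_hidden=20, rng=rng)
print(W.shape)  # (5, 20)
```

Data-dependent initializations like this aim to place hidden-unit hyperplanes where the data actually lies, rather than relying on purely random directions.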

Journal: IEEE Transactions on Neural Networks 1998
Guang-Bin Huang, Haroon Atique Babri

It is well known that standard single-hidden layer feedforward networks (SLFNs) with at most N hidden neurons (including biases) can learn N distinct samples (x(i),t(i)) with zero error, and the weights connecting the input neurons and the hidden neurons can be chosen "almost" arbitrarily. However, these results have been obtained for the case when the activation function for the hidden neurons...
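The zero-error claim above can be demonstrated directly: with N hidden neurons whose input weights are chosen (almost) arbitrarily, the N×N hidden-layer output matrix is generically invertible, so output weights fitting all N samples exactly exist in closed form. A small numpy sketch, using tanh as an illustrative activation:

```python
import numpy as np

rng = np.random.default_rng(0)

# N distinct training samples (x_i, t_i)
N = 8
X = rng.normal(size=(N, 3))   # N samples, 3 input features
T = rng.normal(size=(N, 1))   # N scalar targets

# Input-to-hidden weights and biases chosen "almost" arbitrarily
W = rng.normal(size=(3, N))   # N hidden neurons
b = rng.normal(size=(1, N))

H = np.tanh(X @ W + b)        # N x N hidden-layer output matrix

# H is generically invertible, so output weights that interpolate
# all N samples exactly follow from one linear solve
beta = np.linalg.solve(H, T)

train_error = np.max(np.abs(H @ beta - T))
print(train_error)            # numerically zero
```

This single linear solve in place of iterative output-weight training is the core of the ELM training step mentioned in other results on this page.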

2007
Kamalakar Karlapalem

Information overload can hamper extracting hidden knowledge from a database. Data mining techniques offer automated exploratory data analysis of databases. The mining process reveals knowledge buried in the data and provides insights into these data. Conventional mining techniques such as neural nets, regression analysis, and rule discovery uncover hidden information from user-provided...

Journal: J. Inf. Sci. Eng. 2013
Wei-Tyng Hong

Hidden conditional random fields (HCRFs) are derived from the theory of conditional random fields with a hidden-state probabilistic framework. They directly model the conditional probability of a label sequence given observations. Compared to hidden Markov models, HCRFs provide a number of benefits in the acoustic modeling of speech signals. Prior works for training on HCRFs were accomplished with...

Journal: CoRR 2015
Sanjeev Arora, Yingyu Liang, Tengyu Ma

Generative models for deep learning are promising both to improve understanding of the model, and yield training methods requiring fewer labeled samples. Recent works use generative model approaches to produce the deep net’s input given the value of a hidden layer several levels above. However, there is no accompanying “proof of correctness” for the generative model, showing that the feedforwar...

2015
Jiaojiao Li, Qian Du, Wei Li, Yunsong Li

Extreme learning machine (ELM) is of great interest to the machine learning community due to its extremely simple training step. Its performance sensitivity to the number of hidden neurons is studied in the context of hyperspectral remote sensing image classification. An empirical linear relationship between the number of training samples and the number of hidden neurons is proposed. Such a re...

2012
Tsung-Ying Sun, Chan-Cheng Liu, Chun-Ling Lin, Sheng-Ta Hsieh, Cheng-Sen Huang

The radial basis function neural network (RBFNN) combines the learning vector quantizer (LVQ-I) with gradient descent. RBFNN was first proposed by Broomhead and Lowe (1988), and its interpolation and generalization properties were thoroughly investigated by Lowe (1989) and Freeman and Saad (1995). Since the mid-1980s, RBFNN has been applied to many tasks, such as pattern classification...

1997
Levent M. Arslan

Traditional maximum likelihood estimation of hidden Markov model parameters aims at maximizing the overall probability across the training tokens of a given speech unit. Therefore, it disregards any interaction and biases across the models in the training procedure. Often the resulting model parameters do not yield minimum-error classification on the training set. A new selective training me...
