Search results for: hidden training

Number of results: 378572

Journal: Neural Computation, 2016
Marc-Alexandre Côté, Hugo Larochelle

We present a mathematical construction for the restricted Boltzmann machine (RBM) that does not require specifying the number of hidden units. In fact, the hidden layer size is adaptive and can grow during training. This is obtained by first extending the RBM to be sensitive to the ordering of its hidden units. Then, with a carefully chosen definition of the energy function, we show that the li...
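
A minimal sketch of the adaptive-hidden-layer idea, assuming a standard RBM energy rather than the paper's ordering-sensitive one; the `GrowingRBM` class, its `grow` method, and the initialisation constants are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

class GrowingRBM:
    """Sketch: an RBM whose hidden layer can be widened during training."""

    def __init__(self, n_visible, n_hidden=1):
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible biases
        self.b_h = np.zeros(n_hidden)    # hidden biases

    def energy(self, v, h):
        # Standard RBM energy: E(v, h) = -b_v.v - b_h.h - v^T W h
        return -(v @ self.b_v) - (h @ self.b_h) - v @ self.W @ h

    def grow(self, n_new=1):
        # Append freshly initialised hidden units; existing parameters
        # are untouched, so features learned so far are preserved.
        n_visible = self.W.shape[0]
        self.W = np.hstack([self.W,
                            0.01 * rng.standard_normal((n_visible, n_new))])
        self.b_h = np.concatenate([self.b_h, np.zeros(n_new)])

rbm = GrowingRBM(n_visible=6, n_hidden=2)
rbm.grow()                                   # hidden layer is now 3 units
v = rng.integers(0, 2, size=6).astype(float)
h = rng.integers(0, 2, size=3).astype(float)
print(rbm.energy(v, h))
```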

2001
Derong Liu, Tsu-Shuan Chang, Yi Zhang

We develop, in this brief, a new constructive learning algorithm for feedforward neural networks. We employ an incremental training procedure where training patterns are learned one by one. Our algorithm starts with a single training pattern and a single hidden-layer neuron. During the course of neural network training, when the algorithm gets stuck in a local minimum, we will attempt to escape...
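
A rough illustration of the constructive idea (not the authors' exact algorithm): train a single-hidden-layer tanh network by gradient descent and append a freshly initialised hidden neuron whenever the loss stops improving, as a stand-in for the escape from a local minimum. The plateau threshold, learning rate, and toy data are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression problem standing in for the incrementally learned patterns.
X = np.linspace(-1, 1, 40).reshape(-1, 1)
y = np.sin(3 * X).ravel()

# Start with a single hidden neuron.
W = 0.5 * rng.standard_normal((1, 1))   # input -> hidden weights
b = np.zeros(1)                         # hidden biases
v = 0.5 * rng.standard_normal(1)        # hidden -> output weights

lr, prev_loss = 0.05, np.inf
for step in range(5000):
    H = np.tanh(X @ W + b)
    err = H @ v - y
    loss = 0.5 * np.mean(err ** 2)
    # Backpropagation through the single hidden layer.
    g_v = H.T @ err / len(X)
    g_H = np.outer(err, v) * (1 - H ** 2)
    g_W = X.T @ g_H / len(X)
    g_b = g_H.mean(axis=0)
    v -= lr * g_v
    W -= lr * g_W
    b -= lr * g_b
    # Crude "stuck in a local minimum" test: negligible recent improvement.
    if step % 500 == 499:
        if prev_loss - loss < 1e-4:
            W = np.hstack([W, 0.5 * rng.standard_normal((1, 1))])
            b = np.concatenate([b, [0.0]])
            v = np.concatenate([v, 0.1 * rng.standard_normal(1)])
        prev_loss = loss

print(f"final loss {loss:.4f} with {W.shape[1]} hidden neurons")
```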

2001
Jason M. Kinser

The quality of a feedforward neural network that allows it to associate data not used in training is called generalization. A common method of creating the desired network is for the user to select the network architecture (largely based on selecting the number of hidden neurons) and to allow a training algorithm to evolve the synaptic weights between the neurons. A popular belief is that t...

Journal: CoRR, 2015
Rossella Cancelliere, R. Deluca, Mario Gai, Patrick Gallinari, Luca Rubini

Some novel strategies have recently been proposed for single-hidden-layer neural network training that randomly set the weights from the input to the hidden layer, while the weights from the hidden to the output layer are determined analytically by pseudoinversion. These techniques are gaining popularity in spite of their known numerical issues when singular and/or almost-singular matrices are involved. In this pa...
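
A minimal sketch of this family of methods under invented data and sizes: the input-to-hidden weights are drawn at random and never trained, and the output weights come from a pseudoinverse:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy task: fit y = sin(3x) from noisy samples.
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X).ravel() + 0.05 * rng.standard_normal(200)

n_hidden = 50
W_in = rng.standard_normal((1, n_hidden))   # random, never trained
b_in = rng.standard_normal(n_hidden)

H = np.tanh(X @ W_in + b_in)                # hidden-layer activations

# Output weights by pseudoinversion. np.linalg.pinv works via the SVD,
# which is exactly where trouble appears when H is singular or nearly so.
W_out = np.linalg.pinv(H) @ y

y_hat = H @ W_out
print("train MSE:", np.mean((y_hat - y) ** 2))
```

A Tikhonov-regularised least-squares solve in place of the plain pseudoinverse is one standard remedy for the near-singular case.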

Maghsoudi, S., Malekshahi, M.

Background: Today, information and communication technology is an integral part of the workplace and the classroom. The main aim of this survey is to examine the impact of e-learning on the hidden curriculum. The hidden curriculum is invisible to education planners, yet it influences students' thoughts, emotions, and behaviours. Method: This article is a quasi-experimental, pretest-posttest...

Journal: IEEE Transactions on Neural Networks, 1991
Michael A. Sartori, Panos J. Antsaklis

A new derivation is presented for the bounds on the size of a multilayer neural network needed to exactly implement an arbitrary training set; namely, the training set can be implemented with zero error with two layers and with the number of hidden-layer neurons N1 satisfying N1 ≥ p − 1, where p is the number of training patterns. The derivation does not require the separation of the input space by particular hyperplanes, as in previous derivation...
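
The sketch below is not the paper's derivation (which avoids the particular-hyperplane argument); it only demonstrates generically why p − 1 hidden neurons plus an output bias suffice: the bias-augmented hidden activation matrix is then p × p and almost surely invertible, so zero-error output weights can be solved for exactly. The random weights and tanh nonlinearity are assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

p = 10                                    # number of training patterns
X = rng.standard_normal((p, 4))           # arbitrary inputs
y = rng.standard_normal(p)                # arbitrary targets

# p - 1 hidden neurons with random weights.
W = rng.standard_normal((4, p - 1))
H = np.tanh(X @ W)
H_aug = np.hstack([H, np.ones((p, 1))])   # bias column -> p x p matrix

w_out = np.linalg.solve(H_aug, y)         # exact interpolation weights
print("max |error|:", np.max(np.abs(H_aug @ w_out - y)))   # ~1e-15
```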

2015
Anirudh Vemula, Senthil Purushwalkam, Varun Joshi

Overfitting is a very commonly faced issue when training prediction models with machine learning. Dropout is a recently developed technique designed to counter this issue in deep neural networks, and it has also been extended to other algorithms such as SVMs. In this project, we formulate and study the application of Dropout to Hidden Unit Conditional Random Fields (HUCRFs). HUCRFs use binary stocha...
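
The truncated abstract does not show how the dropout mask enters the HUCRF potentials, so the sketch below only illustrates the base technique, plain inverted dropout on a vector of hidden activations:

```python
import numpy as np

rng = np.random.default_rng(4)

def dropout_hidden(h, p_drop=0.5, train=True):
    """Inverted dropout: zero each hidden unit with probability p_drop
    and rescale survivors, so the test-time pass is just the identity."""
    if not train:
        return h
    mask = rng.random(h.shape) >= p_drop
    return h * mask / (1.0 - p_drop)

h = rng.integers(0, 2, size=8).astype(float)  # binary hidden activations
print(dropout_hidden(h))
```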

1996
Yoshua Bengio, Samy Bengio

In learning tasks in which input sequences are mapped to output sequences, it is often the case that the input and output sequences are not synchronous. For example, in speech recognition, acoustic sequences are longer than phoneme sequences. Input/Output Hidden Markov Models have already been proposed to represent the distribution of an output sequence given an input sequence of the same leng...
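
A minimal forward recursion for an input/output HMM, with transition and emission distributions conditioned on the current input; the softmax parameterisation and the folding of the initial-state model into a uniform dummy state are assumptions made for brevity:

```python
import numpy as np

rng = np.random.default_rng(5)

n_states, n_in, n_out = 3, 4, 5

# Input-conditioned parameters: one matrix per predecessor state for
# transitions, one per state for emissions (softmax-normalised below).
W_trans = rng.standard_normal((n_states, n_in, n_states))
W_emit = rng.standard_normal((n_states, n_in, n_out))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def iohmm_forward(x_seq, y_seq):
    """log P(y_1..T | x_1..T), both sequences of the same length."""
    alpha = np.full(n_states, 1.0 / n_states)   # uniform dummy start
    log_lik = 0.0
    for x, y in zip(x_seq, y_seq):
        # trans[i, j] = P(z_t = j | z_{t-1} = i, x_t)
        trans = np.stack([softmax(W_trans[i].T @ x) for i in range(n_states)])
        # emit[j, k] = P(y_t = k | z_t = j, x_t)
        emit = np.stack([softmax(W_emit[j].T @ x) for j in range(n_states)])
        alpha = (alpha @ trans) * emit[:, y]
        norm = alpha.sum()
        log_lik += np.log(norm)
        alpha /= norm                            # rescale to avoid underflow
    return log_lik

x_seq = rng.standard_normal((7, n_in))   # input sequence, length 7
y_seq = rng.integers(0, n_out, size=7)   # output sequence, same length
print(iohmm_forward(x_seq, y_seq))
```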

2015
Young-Bum Kim, Karl Stratos, Ruhi Sarikaya

In this paper, we apply the concept of pretraining to hidden-unit conditional random fields (HUCRFs) to enable learning on unlabeled data. We present a simple yet effective pretraining technique that learns to associate words with their clusters, which are obtained in an unsupervised manner. The learned parameters are then used to initialize the supervised learning process. We also propose a w...
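
A loose sketch of the pretraining recipe under invented shapes (not the paper's HUCRF parameterisation): fit a softmax classifier that predicts each word's unsupervised cluster from its representation, then reuse the learned representations to initialise the supervised stage. The gradient steps and all sizes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)

vocab, n_clusters, n_feats = 1000, 20, 50

# Unsupervised word clusters (e.g. from Brown clustering); random here.
cluster_of = rng.integers(0, n_clusters, size=vocab)
targets = np.eye(n_clusters)[cluster_of]          # one-hot cluster labels

E = 0.01 * rng.standard_normal((vocab, n_feats))  # word representations
W_pre = 0.01 * rng.standard_normal((n_feats, n_clusters))

def softmax(Z):
    e = np.exp(Z - Z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Pretraining: learn to associate each word with its cluster.
for _ in range(200):
    P = softmax(E @ W_pre)
    dZ = (P - targets) / vocab     # softmax cross-entropy gradient
    E_grad = dZ @ W_pre.T
    W_pre -= 0.5 * (E.T @ dZ)
    E -= 0.5 * E_grad

# Supervised stage: start from the pretrained representations E
# instead of a random initialisation.
E_init = E.copy()
```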

Journal: Bioinformatics, 1998
C. Tarnas, Richard Hughey

MOTIVATION: Complete forward-backward (Baum-Welch) hidden Markov model training cannot take advantage of the linear-space, divide-and-conquer sequence alignment algorithms because of the examination of all possible paths rather than the single best path.
RESULTS: This paper discusses the implementation and performance of checkpoint-based reduced-space sequence alignment in the SAM hidden Markov...
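
A small sketch of the checkpointing idea on the forward recursion alone: keep the forward column only every sqrt(T) steps and recompute any other column from the nearest checkpoint on demand, trading recomputation for memory. The random model is illustrative, and probability scaling is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(7)

n_states, n_sym, T = 4, 6, 100
A = rng.dirichlet(np.ones(n_states), size=n_states)   # transition rows
B = rng.dirichlet(np.ones(n_sym), size=n_states)      # emission rows
pi = np.ones(n_states) / n_states
obs = rng.integers(0, n_sym, size=T)

def forward_step(alpha, t):
    return (alpha @ A) * B[:, obs[t]]

stride = int(np.sqrt(T))        # O(sqrt(T)) stored columns, not O(T)
checkpoints = {}
alpha = pi * B[:, obs[0]]
for t in range(T):
    if t % stride == 0:
        checkpoints[t] = alpha.copy()
    if t + 1 < T:
        alpha = forward_step(alpha, t + 1)

def alpha_at(t):
    """Recompute the forward column at time t from the nearest
    earlier checkpoint instead of storing every column."""
    a = checkpoints[(t // stride) * stride].copy()
    for s in range((t // stride) * stride + 1, t + 1):
        a = forward_step(a, s)
    return a

print(np.allclose(alpha_at(T - 1), alpha))   # True
```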
