Search results for: training iteration

Number of results: 358779

Journal: Proceedings of the ... AAAI Conference on Artificial Intelligence, 2022

We address the poor scalability of learning algorithms for orthogonal recurrent neural networks via the use of stochastic coordinate descent on the orthogonal group, leading to a cost per iteration that increases linearly with the number of states. This contrasts with the cubic dependency of typical feasible algorithms such as Riemannian gradient descent, which prohibits the use of big network architectures. Coordinate descent rotates successively two columns of the matr...
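
For illustration, a minimal sketch of the column-rotation update the abstract describes: rotating a single pair of columns is a Givens rotation, which preserves orthogonality exactly and costs O(n) per update rather than the O(n^3) of a full Riemannian retraction. The random pair choice, the `loss` callable, and the coarse angle line search below are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def rotate_columns(W, i, j, theta):
    """Apply a Givens rotation mixing columns i and j of W by angle theta.
    If W is orthogonal, the result is orthogonal; the update touches only
    two columns, so it costs O(n) instead of O(n^3)."""
    G = W.copy()
    ci, cj = W[:, i].copy(), W[:, j].copy()
    G[:, i] = np.cos(theta) * ci - np.sin(theta) * cj
    G[:, j] = np.sin(theta) * ci + np.cos(theta) * cj
    return G

def coordinate_descent_step(W, loss, n_trials=8, rng=np.random):
    """One stochastic coordinate step: pick a random column pair and
    keep the rotation angle that minimizes the loss on a coarse grid."""
    n = W.shape[1]
    i, j = rng.choice(n, size=2, replace=False)
    thetas = np.linspace(-np.pi / 4, np.pi / 4, n_trials)
    candidates = [rotate_columns(W, i, j, t) for t in thetas]
    return min(candidates, key=loss)
```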

2010
Frank-Florian Steege, André Hartmann, Erik Schaffernicht, Horst-Michael Groß

In this paper we present a Reinforcement Learning (RL) approach capable of training neural adaptive controllers for complex control problems without expensive online exploration. The basis of the neural controller is Neural Fitted Q-Iteration (NFQ). This network is trained with data from the example set, enriched with artificial data. With this training scheme, unlike most other exist...
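
The NFQ pattern this abstract builds on (fitted Q-iteration on a fixed batch of transitions, with no online exploration) can be sketched roughly as follows. The network size, sklearn's MLPRegressor, and the one-hot action encoding are assumptions for illustration, not the authors' setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def nfq(transitions, n_actions, gamma=0.95, iterations=20):
    """Neural Fitted Q-Iteration: repeatedly regress a Q-network onto
    bootstrapped targets computed from a fixed batch of transitions.
    transitions: list of (state, action, reward, next_state, done)."""
    def features(s, a):
        # Input to the regressor: state concatenated with a one-hot action.
        return np.concatenate([s, np.eye(n_actions)[a]])

    q = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=500)
    X = np.array([features(s, a) for s, a, r, s2, d in transitions])
    q.fit(X, np.zeros(len(X)))          # initialize with zero targets
    for _ in range(iterations):
        targets = []
        for s, a, r, s2, done in transitions:
            if done:
                targets.append(r)
            else:
                q_next = max(q.predict(features(s2, b).reshape(1, -1))[0]
                             for b in range(n_actions))
                targets.append(r + gamma * q_next)
        q.fit(X, np.array(targets))     # re-fit on the refreshed targets
    return q
```

Because the batch is fixed, each iteration re-fits the network on refreshed targets; the "artificial data" mentioned in the abstract would simply be extra tuples appended to `transitions`.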

2012
Ashkan Parsi, Mehrdad Salehi, Ali Doostmohammadi

This paper presents a new feature selection method based on modifying the fitness function of a genetic algorithm. Our implementation environment is a face recognition system which uses a genetic algorithm for feature selection and k-Nearest Neighbor as a classifier, together with our proposed Swap Training. In each iteration of the genetic algorithm, for the assessment of one specific chromosome, Swap Training switc...
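
The generic pattern here (a binary chromosome selects features, and kNN accuracy serves as fitness) can be sketched as below; the snippet is truncated before the Swap Training modification is defined, so only the baseline GA-plus-kNN loop is shown, with selection and mutation parameters chosen for illustration.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def fitness(chromosome, X, y, k=3):
    """Fitness of a binary chromosome = kNN accuracy on the selected features."""
    mask = chromosome.astype(bool)
    if not mask.any():
        return 0.0
    knn = KNeighborsClassifier(n_neighbors=k)
    return cross_val_score(knn, X[:, mask], y, cv=3).mean()

def ga_select(X, y, pop=30, gens=40, p_mut=0.02, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    population = rng.integers(0, 2, size=(pop, n))
    for _ in range(gens):
        scores = np.array([fitness(c, X, y) for c in population])
        # Truncation selection: keep the top half, breed the rest with
        # one-point crossover and bit-flip mutation.
        parents = population[scores.argsort()[-pop // 2:]]
        children = []
        for _ in range(pop - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n)
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= (rng.random(n) < p_mut).astype(child.dtype)
            children.append(child)
        population = np.vstack([parents, children])
    best = max(population, key=lambda c: fitness(c, X, y))
    return best.astype(bool)
```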

2002
Aoying Zhou, Fang Xiong, Weining Qian

Although there are various approaches to facilitating information search on the Web, most current Web search and query systems only return the URLs of relevant pages. Learning-based Web search was devised to process those URLs and extract the desired information by utilizing user feedback. However, the involvement of user behavior makes the study of system performance rather complex. In...

2010
Amit Choudhary, Rahul Rishi, Savita Ahlawat

The objective of this paper is to study the character recognition capability of the feed-forward back-propagation algorithm using more than one hidden layer. This analysis was conducted on 182 different letters from the English alphabet. After binarization, these characters were clubbed together to form training patterns for the neural network. The network was trained to learn its behavior by adjusting the con...
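
A minimal sketch of the setup the abstract describes, i.e. a back-propagation network with two hidden layers trained on binarized character patterns. The image size, layer widths, and the random stand-in data below are illustrative assumptions; the paper's actual dataset and architecture are not given in the snippet.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Stand-in data: 182 binarized character images, assumed flattened to
# 120 pixels each, with 26 output classes (A-Z). Replace with real patterns.
X = np.random.randint(0, 2, size=(182, 120)).astype(float)
y = np.random.randint(0, 26, size=182)

# Feed-forward network with more than one hidden layer, trained by
# back-propagation (stochastic gradient descent, logistic activations).
net = MLPClassifier(hidden_layer_sizes=(64, 32), activation='logistic',
                    solver='sgd', learning_rate_init=0.1, max_iter=2000)
net.fit(X, y)
print("training accuracy:", net.score(X, y))
```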

2011
Hai Son Le, Ilya Oparin, Abdelkhalek Messaoudi, Alexandre Allauzen, Jean-Luc Gauvain, François Yvon

This paper presents a continuation of research on Structured OUtput Layer Neural Network language models (SOUL NNLM) for automatic speech recognition. As SOUL NNLMs allow estimating probabilities for all in-vocabulary words, and not only for those pertaining to a limited shortlist, we investigate their performance on a large-vocabulary task. Significant improvements both in perplexity and word error...

2014
Michael Heck, Satoshi Nakamura

In this work the theoretical concepts of unsupervised acoustic model training and the application and evaluation of unsupervised training schemes are described. Experiments aiming at speaker adaptation via unsupervised training are conducted on the KIT lecture translator system. Evaluation takes place with respect to training efficiency and overall system performance depending on the availabl...

Journal: Iranian Journal of Science and Technology (Sciences), 2013
M. Merdan

In this article, an analytical approximate solution of a nonlinear fractional convection-diffusion equation with the modified Riemann-Liouville derivative was obtained with the help of the fractional variational iteration method (FVIM). A new application of the fractional variational iteration method (FVIM) was extended to derive analytical solutions in the form of a series for this equation. It is indicated that the so...
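
For reference, the general FVIM correction functional can be sketched for a generic fractional PDE of the form D_t^α u = L[u] + N[u] with 0 < α ≤ 1. The specific convection-diffusion operators are not recoverable from the truncated snippet, so L and N stand in for them here.

```latex
% FVIM correction functional for a generic fractional PDE
%   D_t^{\alpha} u = L[u] + N[u],  0 < \alpha \le 1,
% where D_t^{\alpha} is the modified Riemann-Liouville derivative.
\[
  u_{n+1}(x,t) = u_n(x,t)
    + \frac{1}{\Gamma(1+\alpha)} \int_0^t \lambda(\tau)
      \Bigl( D_\tau^{\alpha} u_n(x,\tau) - L[u_n](x,\tau)
             - N[\tilde{u}_n](x,\tau) \Bigr)\,(\mathrm{d}\tau)^{\alpha}
\]
% \tilde{u}_n denotes a restricted variation; for equations of this form
% the Lagrange multiplier is typically \lambda = -1, and the analytical
% solution is obtained as the series limit u = \lim_{n \to \infty} u_n.
```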

1998
Albino Nogueiras, José B. Mariño, Enric Monte-Moreno

Although it has proven to be a very powerful tool in acoustic modelling, discriminative training presents a major drawback: the lack of a formulation guaranteeing convergence regardless of the initial conditions, as the Baum-Welch algorithm provides in maximum likelihood training. For this reason, a gradient descent search is usually used in this kind of problem. Unfortunately, standard gradient...

Journal: IEEE Transactions on Neural Networks, 2002
George D. Magoulas Vassilis P. Plagianakos Michael N. Vrahatis

A novel generalized theoretical result is presented that underpins the development of globally convergent first-order batch training algorithms which employ local learning rates. This result allows us to equip algorithms of this class with a strategy for adapting the overall direction of search to a descent one. In this way, a decrease of the batch-error measure at each training iteration is en...
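
The idea the abstract states, per-weight (local) learning rates combined with a safeguard that forces the batch error to decrease at every iteration, can be sketched as follows. The Rprop-style sign-based rate adaptation and the backtracking loop below are illustrative assumptions; the paper's actual adaptation rule and convergence conditions are not given in the snippet.

```python
import numpy as np

def train_batch(w, grad_fn, error_fn, n_iters=100,
                up=1.2, down=0.5, lr0=0.01, shrink=0.5):
    """Batch training with local (per-weight) learning rates. Elementwise
    positive rates keep -rates*grad a descent direction, and a backtracking
    safeguard enforces a decrease of the batch error at every iteration."""
    rates = np.full_like(w, lr0)
    g_prev = None
    for _ in range(n_iters):
        g = grad_fn(w)
        if g_prev is not None:
            # Grow a weight's rate while its gradient keeps the same sign,
            # shrink it when the sign flips (overshoot).
            same = np.sign(g) == np.sign(g_prev)
            rates = np.where(same, rates * up, rates * down)
        step = -rates * g
        # Backtrack until the batch error actually decreases.
        e0 = error_fn(w)
        while error_fn(w + step) >= e0 and np.linalg.norm(step) > 1e-12:
            step = step * shrink
        w = w + step
        g_prev = g
    return w
```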

Chart: number of search results per year
