Search results for: training iteration

Number of results: 358779

2017
Vinod Kumar Chauhan Kalpana Dahiya Anuj Sharma

Big Data problems in machine learning involve a large number of data points, a large number of features, or both, which makes model training difficult because of the high computational complexity of a single iteration of the learning algorithms. To solve such learning problems, Stochastic Approximation offers an optimization approach that makes the complexity of each iteration independent of the number of data poin...
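The key point, that each update touches a single sample so its cost is independent of the dataset size, can be illustrated with plain stochastic gradient descent; the least-squares objective and data below are illustrative, not taken from the paper:

```python
import numpy as np

def sgd(grad_fn, w0, data, lr=0.01, epochs=5):
    """Stochastic gradient descent: each iteration processes one sample,
    so the per-iteration cost does not depend on len(data)."""
    w = np.array(w0, dtype=float)
    rng = np.random.default_rng(0)
    for _ in range(epochs):
        for i in rng.permutation(len(data)):
            w -= lr * grad_fn(w, data[i])
    return w

# Least-squares fit of y = 2 + 3x: per-sample gradient of (x.w - y)^2.
X = np.array([[1.0, x] for x in np.linspace(0.0, 1.0, 50)])
y = 2.0 + 3.0 * X[:, 1]
data = list(zip(X, y))
grad = lambda w, s: 2.0 * (s[0] @ w - s[1]) * s[0]
w = sgd(grad, [0.0, 0.0], data, lr=0.1, epochs=200)  # w ≈ [2, 3]
```

Because the data here are noiseless and consistent, the constant-step updates drive every per-sample gradient to zero and the iterates converge to the exact solution.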

2017
Jafar Tanha Maarten van Someren Hamideh Afsarmanesh

Recently, semi-supervised learning algorithms such as co-training have been used in many application domains. In co-training, two classifiers based on different views of the data, or on different learning algorithms, are trained in parallel; unlabeled data that are classified differently by the classifiers, but for which one classifier has high confidence, are then labeled and used as training data for th...

2005
Ming Li Zhi-Hua Zhou

Self-training is a semi-supervised learning algorithm in which a learner keeps on labeling unlabeled examples and retraining itself on an enlarged labeled training set. Since the self-training process may erroneously label some unlabeled examples, sometimes the learned hypothesis does not perform well. In this paper, a new algorithm named Setred is proposed, which utilizes a specific data editi...
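A minimal self-training loop of this kind (without Setred's data-editing step, which filters out likely mislabeled points) might look as follows; the nearest-centroid base learner and margin-based confidence are illustrative choices, not those of the paper:

```python
import numpy as np

def centroid_fit(X, y):
    """Nearest-centroid learner: one mean vector per class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def centroid_predict(model, X):
    """Return predicted labels and the distance margin between the
    two nearest centroids (used here as a confidence score)."""
    classes = sorted(model)
    d = np.stack([np.linalg.norm(X - model[c], axis=1) for c in classes],
                 axis=1)
    ds = np.sort(d, axis=1)
    return np.array(classes)[d.argmin(axis=1)], ds[:, 1] - ds[:, 0]

def self_train(X_l, y_l, X_u, margin=1.0, rounds=5):
    """Repeatedly label high-confidence unlabeled points and retrain
    on the enlarged labeled set."""
    for _ in range(rounds):
        if len(X_u) == 0:
            break
        model = centroid_fit(X_l, y_l)
        pred, conf = centroid_predict(model, X_u)
        keep = conf >= margin
        if not keep.any():
            break
        X_l = np.vstack([X_l, X_u[keep]])
        y_l = np.concatenate([y_l, pred[keep]])
        X_u = X_u[~keep]
    return centroid_fit(X_l, y_l)

rng = np.random.default_rng(0)
X_l = np.array([[0.0, 0.0], [5.0, 5.0]])      # one labeled point per class
y_l = np.array([0, 1])
X_u = np.vstack([rng.normal(0.0, 0.3, (20, 2)),
                 rng.normal(5.0, 0.3, (20, 2))])
model = self_train(X_l, y_l, X_u)
pred, _ = centroid_predict(model, np.array([[0.2, -0.1], [4.8, 5.1]]))
```

The erroneous-labeling risk the abstract mentions shows up here when a point inside the margin band is absorbed with the wrong label; Setred's contribution is precisely a mechanism for rejecting such points.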

Journal: Neural Networks: The Official Journal of the International Neural Network Society, 2016
Kartik Audhkhasi Osonde Osoba Bart Kosko

Injecting carefully chosen noise can speed convergence in the backpropagation training of a convolutional neural network (CNN). The Noisy CNN algorithm speeds training on average because the backpropagation algorithm is a special case of the generalized expectation-maximization (EM) algorithm and because such carefully chosen noise always speeds up the EM algorithm on average. The CNN framework...

Journal: CoRR, 2017
Haoxuan You Zhicheng Jiao Haojun Xu Jie Li Ying Wang Xinbo Gao

The generative adversarial network (GAN) has attracted wide research interest in the field of deep learning. Variations of GAN have achieved competitive results on specific tasks. However, the stability of training and the diversity of generated instances are still worth studying further. Training of a GAN can be thought of as a greedy procedure, in which the generative net tries to make the locally optima...

Journal: International Journal of Industrial Mathematics, 2015
Sh. Javadi

In this paper, we propose a new iterative method for finding the solution of first-order ordinary differential equations. In this method, we extend the idea of the variational iteration method by changing the general Lagrange multiplier defined in the context of the variational iteration method. This increases the convergence rate of the method compared with the var...
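The variational iteration idea can be sketched on the simplest first-order ODE, y' = y with y(0) = 1, using the standard general Lagrange multiplier λ = -1 (this toy multiplier is for illustration; the paper's modified multiplier is not reproduced here):

```python
import numpy as np

def vim_step(c):
    """One variational-iteration step for y' = y, y(0) = 1:
        u_{n+1}(t) = u_n(t) - integral_0^t (u_n'(s) - u_n(s)) ds,
    with u_n stored as polynomial coefficients, u_n(t) = sum c[k] t^k."""
    c = np.asarray(c, dtype=float)
    d = np.array([k * c[k] for k in range(1, len(c))] + [0.0])  # u_n'
    r = d - c                                                   # u_n' - u_n
    integral = np.concatenate([[0.0], r / np.arange(1, len(r) + 1)])
    out = np.zeros(len(integral))
    out[:len(c)] += c
    out -= integral
    return out

u = np.array([1.0])            # u_0(t) = 1
for _ in range(8):
    u = vim_step(u)            # each step adds one Taylor term of exp(t)
```

After n steps, u holds the Taylor coefficients of the exact solution exp(t) through t^n; each iteration raises the order of agreement by one, which is the convergence behavior the multiplier choice controls.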

2012
Huisheng Zhang Wei Wu

This paper investigates a split-complex backpropagation algorithm with momentum (SCBPM) for complex-valued neural networks. Some convergence results for SCBPM are proved under relaxed conditions compared with existing results. The monotonicity of the error function during the training iteration process is also guaranteed. Two numerical examples are given to support the theoretical findings.
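The momentum update at the core of such algorithms (shown here in the plain real-valued form rather than the split-complex one used by SCBPM) can be sketched as:

```python
def momentum_step(w, v, grad, lr=0.1, beta=0.9):
    """Classical momentum: v <- beta*v - lr*grad, then w <- w + v.
    The velocity v accumulates past gradients and damps oscillation."""
    v = beta * v - lr * grad
    return w + v, v

# Minimize the quadratic f(w) = (w - 3)^2, whose gradient is 2*(w - 3).
w, v = 0.0, 0.0
for _ in range(100):
    w, v = momentum_step(w, v, 2.0 * (w - 3.0))
```

Monotonicity results like those proved for SCBPM say the training error cannot increase across such iterations, provided the learning rate and momentum coefficient satisfy the paper's conditions.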

The purpose of this study is to analyze the performance of the backpropagation algorithm with changing training patterns and a second momentum term in feed-forward neural networks. This analysis is conducted on 250 different words of three small letters from the English alphabet. These words are presented to two vertical segmentation programs, which are designed in MATLAB and based on portions (1...

B. Yousefi M. A. Fariborzi Araghi

In this paper, we apply Newton's and He's iteration formulas to solve nonlinear algebraic equations. We use stochastic arithmetic and the CESTAC method to validate the results, and show by means of the CADNA library that He's iteration formula is more reliable than Newton's iteration formula.
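Newton's iteration formula referred to here is the classical x_{n+1} = x_n - f(x_n)/f'(x_n); a minimal sketch (He's formula and the CESTAC/CADNA validation machinery are not reproduced):

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton's iteration for f(x) = 0: x <- x - f(x)/f'(x)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Solve x^2 - 2 = 0 from x0 = 1; converges quadratically to sqrt(2).
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```

The fixed tolerance `tol` is exactly what stochastic-arithmetic tools such as CESTAC replace: instead of a hard-coded threshold, the iteration stops when the result's significant digits stop improving.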

2000
Yoshihiko Nankaku Keiichi Tokuda Tadashi Kitamura Takao Kobayashi

This paper presents an approach to estimating the parameters of continuous-density HMMs for visual speech recognition. One of the key issues in image-based visual speech recognition is normalization of lip location and lighting conditions prior to estimating the parameters of the HMMs. We present a normalized training method in which the normalization process is integrated into the model training. T...
