Search results for: training iteration

Number of results: 358,779

2011
Yuanyuan Guo, Harry Zhang, Xiaobo Liu

Semi-supervised learning methods utilize abundant unlabeled data to help learn a better classifier when the number of labeled instances is very small. A common approach is to select and label those unlabeled instances on which the current classifier has high classification confidence, enlarge the labeled training set with them, and then update the classifier; this is widely used in two paradigms of semi-supe...
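The select-label-refit loop described in this abstract can be sketched as follows. This is a generic self-training illustration, not the authors' method: the nearest-centroid classifier and the margin-based confidence score are stand-ins chosen for brevity.

```python
import numpy as np

def nearest_centroid_fit(X, y):
    """Fit a nearest-centroid classifier: one mean vector per class."""
    classes = np.unique(y)
    centroids = np.array([X[y == c].mean(axis=0) for c in classes])
    return classes, centroids

def predict_with_confidence(X, classes, centroids):
    """Predict labels plus a crude confidence: the distance margin
    between the second-closest and the closest centroid."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    order = np.argsort(d, axis=1)
    labels = classes[order[:, 0]]
    conf = d[np.arange(len(X)), order[:, 1]] - d[np.arange(len(X)), order[:, 0]]
    return labels, conf

def self_train(X_lab, y_lab, X_unlab, threshold=1.0, max_iter=10):
    """Repeatedly label high-confidence unlabeled points and refit."""
    X_lab, y_lab = X_lab.copy(), y_lab.copy()
    for _ in range(max_iter):
        classes, centroids = nearest_centroid_fit(X_lab, y_lab)
        if len(X_unlab) == 0:
            break
        labels, conf = predict_with_confidence(X_unlab, classes, centroids)
        keep = conf >= threshold
        if not keep.any():
            break  # nothing confident enough left to add
        X_lab = np.vstack([X_lab, X_unlab[keep]])
        y_lab = np.concatenate([y_lab, labels[keep]])
        X_unlab = X_unlab[~keep]
    return nearest_centroid_fit(X_lab, y_lab)

# Usage: two tight clusters, one labeled point each.
rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 0.1, (5, 2))
X1 = rng.normal(5.0, 0.1, (5, 2))
X_lab = np.vstack([X0[:1], X1[:1]])
y_lab = np.array([0, 1])
X_unlab = np.vstack([X0[1:], X1[1:]])
classes, centroids = self_train(X_lab, y_lab, X_unlab)
test_labels, _ = predict_with_confidence(
    np.array([[0.0, 0.0], [5.0, 5.0]]), classes, centroids)
```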

Journal: Journal of Machine Learning Research, 2006
Tobias Glasmachers, Christian Igel

Support vector machines are trained by solving constrained quadratic optimization problems. This is usually done with an iterative decomposition algorithm operating on a small working set of variables in every iteration. The training time strongly depends on the selection of these variables. We propose the maximum-gain working set selection algorithm for large scale quadratic programming. It is...
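For context, a decomposition solver must pick a small working set of dual variables each iteration. The classical baseline that gain-based rules improve upon is most-violating-pair selection; a minimal sketch of that baseline follows (the maximum-gain criterion of the paper above is not implemented here):

```python
import numpy as np

def most_violating_pair(grad, alpha, y, C):
    """Pick a working set of two variables for SVM dual decomposition.

    Classical most-violating-pair rule: i maximizes -y_i * grad_i over
    variables free to move up, j minimizes it over variables free to
    move down, within the box [0, C].
    """
    up = ((alpha < C) & (y > 0)) | ((alpha > 0) & (y < 0))
    down = ((alpha > 0) & (y > 0)) | ((alpha < C) & (y < 0))
    f = -y * grad
    i = np.argmax(np.where(up, f, -np.inf))
    j = np.argmin(np.where(down, f, np.inf))
    return i, j

# Usage: four dual variables, box [0, 1], a hand-made gradient.
y = np.array([1.0, 1.0, -1.0, -1.0])
alpha = np.array([0.0, 0.5, 0.0, 0.5])
grad = np.array([-1.0, -0.2, -1.0, -0.5])
i, j = most_violating_pair(grad, alpha, y, C=1.0)
```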

Journal: Wireless Personal Communications, 2007
Manav R. Bhatnagar, R. Vishwanath, M. K. Arti, Vaibhav Bhatnagar

In this paper, we present a new iterative decoder for Space-Time Block Coded (STBC) data in the presence of carrier offsets. The decoder starts the iteration with a small amount of training data to find an initial estimate of the carrier offset, channel, and noise covariance. It then estimates the STBC data by an alternating maximization (AM) technique. In this way, better es...

2015
Ching-Pei Lee, Dan Roth

Training machine learning models sometimes needs to be done on large amounts of data that exceed the capacity of a single machine, motivating recent works on developing algorithms that train in a distributed fashion. This paper proposes an efficient box-constrained quadratic optimization algorithm for distributedly training linear support vector machines (SVMs) with large data. Our key technica...

2004
Wolfgang Macherey, Ralf Schlüter, Hermann Ney

Discriminative training techniques have proved to be a powerful method for improving large vocabulary speech recognition systems based on Gaussian mixture hidden Markov models. Typically, the optimization of discriminative objective functions is done using the extended Baum algorithm. Since for continuous distributions no proof of fast and stable convergence is known up to now, parameter re-est...

2010
Antti Airola, Tapio Pahikkala, Tapio Salakoski

RankRLS is a recently proposed state-of-the-art method for learning ranking functions by minimizing a pairwise ranking error. The method can be trained by solving a system of linear equations. In this work, we investigate the use of conjugate gradient and regularization by iteration for linear RankRLS training on very large and high dimensional, but sparse data sets. Such data is typically enco...
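Regularization by iteration means running conjugate gradient on the (regularized) normal equations and stopping after few iterations instead of solving exactly. A minimal sketch for plain regularized least squares, not RankRLS's pairwise objective, with the matrix-vector product kept factored so the Gram matrix is never formed:

```python
import numpy as np

def cg_least_squares(X, y, n_iter=10, lam=0.0):
    """Conjugate gradient on the normal equations (X^T X + lam*I) w = X^T y.

    Stopping after a small number of iterations acts as implicit
    regularization; keeping X factored suits sparse, high-dimensional data.
    """
    w = np.zeros(X.shape[1])
    b = X.T @ y
    matvec = lambda v: X.T @ (X @ v) + lam * v
    r = b - matvec(w)
    p = r.copy()
    rs = r @ r
    for _ in range(n_iter):
        Ap = matvec(p)
        step = rs / (p @ Ap)
        w = w + step * p
        r = r - step * Ap
        rs_new = r @ r
        if rs_new < 1e-12:
            break  # residual of the normal equations is negligible
        p = r + (rs_new / rs) * p
        rs = rs_new
    return w

# Usage: recover a known weight vector from noiseless data.
rng = np.random.default_rng(1)
X_demo = rng.normal(size=(20, 3))
w_true = np.array([1.0, -2.0, 3.0])
w_hat = cg_least_squares(X_demo, X_demo @ w_true, n_iter=10)
```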

We present an improved version of a full Nesterov-Todd step infeasible interior-point method for the linear complementarity problem over symmetric cones (Bull. Iranian Math. Soc., 40(3), 541-564, (2014)). In the earlier version, each iteration consisted of one so-called feasibility step and a few (at most three) centering steps. Here, each iteration consists of only a feasibility step. Thus, the new...

In the present paper, we show that the $S^*$ iteration method can be used to approximate fixed points of almost contraction mappings. Furthermore, we prove that this iteration method is equivalent to the CR iteration method and that it converges more slowly than the CR iteration method for the class of almost contraction mappings. We also present a table and a graph to support this result. F...
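The $S^*$ and CR schemes are specific iterations from the fixed-point literature; as a generic illustration of how such schemes are set up and compared, here are the classical Picard and Mann iterations (not the schemes studied in the paper) applied to the same contraction:

```python
import numpy as np

def picard(T, x0, n_iter=100):
    """Picard iteration: x_{n+1} = T(x_n)."""
    x = x0
    for _ in range(n_iter):
        x = T(x)
    return x

def mann(T, x0, alpha=0.5, n_iter=100):
    """Mann iteration: x_{n+1} = (1 - alpha) x_n + alpha T(x_n)."""
    x = x0
    for _ in range(n_iter):
        x = (1 - alpha) * x + alpha * T(x)
    return x

# Usage: x -> cos(x) is a contraction near its fixed point (~0.739);
# both schemes should land on the same fixed point.
T = np.cos
x_p = picard(T, 1.0)
x_m = mann(T, 1.0)
```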

2005
Shao Hongmei, Wu Wei, Li Feng

The online gradient algorithm has been widely used for feedforward neural network training. In this paper, we prove a weak convergence theorem for an online gradient algorithm with a penalty term, assuming that the training examples are presented in a stochastic order. The monotonicity of the error function during the iteration and the boundedness of the weights are both guaran...
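An online gradient update with an L2 penalty term, applied to a small feedforward network with examples presented in random order, can be sketched as follows. This is illustrative only; the network size, learning rate, and penalty weight are arbitrary choices, not the setting analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy regression data, to be presented one example at a time.
X = rng.uniform(-1, 1, (200, 1))
y = 2.0 * X[:, 0] + 0.1 * rng.normal(size=200)

# One-hidden-layer network: 1 -> 4 -> 1, tanh activation.
W1 = rng.normal(0, 0.5, (4, 1)); b1 = np.zeros(4)
W2 = rng.normal(0, 0.5, (1, 4)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(W1 @ x + b1)
    return (W2 @ h + b2)[0], h

def mse(X, y):
    return np.mean([(forward(x)[0] - t) ** 2 for x, t in zip(X, y)])

lr, penalty = 0.05, 1e-4
initial = mse(X, y)
for epoch in range(30):
    for i in rng.permutation(len(X)):   # stochastic presentation order
        x, t = X[i], y[i]
        out, h = forward(x)
        err = out - t
        # Backprop through squared error; L2 penalty added to weight
        # gradients only (biases are left unpenalized).
        gW2 = err * h[None, :] + penalty * W2
        gb2 = np.array([err])
        dh = (err * W2[0]) * (1 - h ** 2)
        gW1 = dh[:, None] @ x[None, :] + penalty * W1
        gb1 = dh
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1
final = mse(X, y)
```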
