Search results for: margin maximization

Number of results: 53,753

2014
Przemyslaw Dymarski

The dominant role of Hidden Markov Models (HMMs) in automatic speech recognition (ASR) is not to be denied. At first, the HMMs were trained using the Maximum Likelihood (ML) approach, using the Baum-Welch or Expectation Maximization algorithms (Rabiner, 1989). Then, discriminative training methods emerged, e.g. the Minimum Classification Error (Sha & Saul, 2007; Siohan et al., 1998), the Conditi...
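The ML training mentioned in this abstract maximizes the probability of the observed data under the HMM. As a minimal sketch of that quantity, here is the forward algorithm computing P(observations | model) for a toy two-state HMM; all parameter values are illustrative, not taken from any of the cited papers.

```python
# Forward algorithm: computes P(observations | HMM), the quantity that
# Maximum Likelihood (Baum-Welch / EM) training of an HMM maximizes.
# All parameters below are illustrative toy values.

def forward_likelihood(obs, pi, A, B):
    """obs: list of symbol indices; pi: initial state probabilities;
    A: state transition matrix; B: emission matrix (states x symbols)."""
    n_states = len(pi)
    # Initialize with the first observation.
    alpha = [pi[s] * B[s][obs[0]] for s in range(n_states)]
    # Recursively fold in each subsequent observation.
    for t in range(1, len(obs)):
        alpha = [
            sum(alpha[r] * A[r][s] for r in range(n_states)) * B[s][obs[t]]
            for s in range(n_states)
        ]
    return sum(alpha)

# Two-state toy model over a binary alphabet.
pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.9, 0.1], [0.2, 0.8]]
print(forward_likelihood([0, 1, 0], pi, A, B))
```

Baum-Welch then re-estimates pi, A, and B to increase exactly this likelihood at each EM iteration.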

2012
Przemyslaw Dymarski

The dominant role of Hidden Markov Models (HMMs) in automatic speech recognition (ASR) is not to be denied. At first, the HMMs were trained using the Maximum Likelihood (ML) approach, using the Baum-Welch or Expectation Maximization algorithms (Rabiner, 1989). Then, discriminative training methods emerged, e.g. the Minimum Classification Error (Sha & Saul, 2007; Siohan et al., 1998), the Conditi...

2014
Do-kyum Kim, Matthew F. Der, Lawrence K. Saul

We investigate a Gaussian latent variable model for semi-supervised learning of linear large margin classifiers. The model’s latent variables encode the signed distance of examples to the separating hyperplane, and we constrain these variables, for both labeled and unlabeled examples, to ensure that the classes are separated by a large margin. Our approach is based on similar intuitions as semi...

2010
Nayyar Abbas Zaidi, David McG. Squire, David Suter

The Nearest Neighbor (NN) classification/regression techniques are, besides their simplicity, amongst the most widely applied and well-studied techniques for pattern recognition in machine learning. A drawback, however, is the assumption that a suitable metric is available for measuring distances to the k nearest neighbors. It has been shown that k-NN classifiers with a suitable distance metr...
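To make the abstract's point concrete, here is a minimal k-NN classifier where the metric is pluggable; a per-feature weight vector stands in for the "suitable metric" the paper is concerned with. Data, labels, and weights are illustrative toys.

```python
from collections import Counter

def knn_predict(query, data, labels, k, weights):
    """k-NN classification under a per-feature weighted Euclidean metric.
    The weight vector plays the role of a learned distance metric;
    uniform weights recover plain Euclidean k-NN."""
    def dist(a, b):
        return sum(w * (x - y) ** 2 for w, x, y in zip(weights, a, b)) ** 0.5
    # Indices of the k training points nearest to the query.
    nearest = sorted(range(len(data)), key=lambda i: dist(query, data[i]))[:k]
    # Majority vote among their labels.
    return Counter(labels[i] for i in nearest).most_common(1)[0][0]

data = [(0.0, 0.0), (0.1, 0.2), (1.0, 1.1), (0.9, 1.0)]
labels = ["a", "a", "b", "b"]
print(knn_predict((0.2, 0.1), data, labels, 3, (1.0, 1.0)))
```

Metric-learning approaches replace the hand-set weights with parameters optimized on training data, which is the direction this line of work pursues.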

2015
V. Tamilselvan

This study applies a shuffled frog leaping algorithm to the multi-objective reactive power dispatch problem in a power system. Optimal Reactive Power Dispatch (ORPD) is formulated as a nonlinear, multimodal, mixed-variable problem. The proposed technique is based on the minimization of real power loss, minimization of voltage deviation and maximization of the voltage stability...

Journal: JACIII, 2004
Takio Kurita

The Support Vector Machine (SVM) has been extended to build nonlinear classifiers using the kernel trick [1–3]. As a learning model, it has among the best recognition performance of the many methods currently known because it is devised to obtain high performance on unlearned data. The SVM uses linear threshold elements to build a two-class classifier. It learns linear threshold element pa...
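The kernel trick mentioned here works because the SVM decision function touches the data only through inner products, so swapping the dot product for a kernel yields a nonlinear classifier from the same linear machinery. A minimal sketch of that decision function with an RBF kernel; the support vectors, labels, and multipliers are illustrative toys, not the output of an actual SVM solver.

```python
import math

def rbf(x, z, gamma=1.0):
    """RBF kernel: an implicit inner product in a nonlinear feature space."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

def decide(x, svs, ys, alphas, b, kernel=rbf):
    """SVM decision function: sign of sum_i alpha_i y_i K(sv_i, x) + b.
    Replacing `kernel` with a plain dot product recovers the linear SVM."""
    score = sum(a * y * kernel(sv, x) for a, y, sv in zip(alphas, ys, svs)) + b
    return 1 if score >= 0 else -1

# Toy "trained" model: one support vector per class.
svs = [(0.0, 0.0), (1.0, 1.0)]
ys = [1, -1]
alphas = [1.0, 1.0]
print(decide((0.1, 0.1), svs, ys, alphas, 0.0))
```

Training chooses the alphas and b so that the resulting separator has maximum margin; only the kernel changes between the linear and nonlinear cases.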

2008
Theodoros Tsagaris

We consider the Brownian market model and the problem of expected utility maximization of terminal wealth. Specifically, we examine the problem of maximizing the utility of terminal wealth in the presence of transaction costs for a fund/agent investing in futures markets. We offer some preliminary remarks about statistical arbitrage strategies and set the framework for futures markets, an...

2016
Yu-Xiong Wang, Martial Hebert

This work explores CNNs for the recognition of novel categories from few examples. Inspired by the transferability analysis of CNNs, we introduce an additional unsupervised meta-training stage that exposes multiple top layer units to a large amount of unlabeled real-world images. By encouraging these units to learn diverse sets of low-density separators across the unlabeled data, we capture a m...

Journal: Int. J. Hybrid Intell. Syst., 2009
Ricardo Ñanculef, Carlos Concha, Héctor Allende, Diego Candel, Claudio Moraga

The margin maximization principle implemented by binary Support Vector Machines (SVMs) has been shown to be equivalent to finding the hyperplane equidistant to the closest points belonging to the convex hulls that enclose each class of examples. In this paper, we propose an extension of SVMs to multicategory classification which generalizes this geometric formulation. The obtained method preserve...
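The geometric formulation this abstract generalizes can be sketched directly: given the closest points p and q of the two class convex hulls, the maximum-margin hyperplane is orthogonal to p − q and passes through their midpoint. The points below are toy values, and p, q are assumed to be the closest hull points already found.

```python
def midpoint_hyperplane(p, q):
    """Hyperplane w.x + b = 0 orthogonal to p - q through the midpoint
    of p and q, i.e. equidistant to both points."""
    w = [a - b for a, b in zip(p, q)]
    mid = [(a + b) / 2 for a, b in zip(p, q)]
    b = -sum(wi * mi for wi, mi in zip(w, mid))
    return w, b

def signed_distance(x, w, b):
    """Signed Euclidean distance from x to the hyperplane (w, b)."""
    norm = sum(wi * wi for wi in w) ** 0.5
    return (sum(wi * xi for wi, xi in zip(w, x)) + b) / norm

# Toy closest hull points: the resulting plane sits halfway between them.
w, b = midpoint_hyperplane((2.0, 0.0), (0.0, 0.0))
print(signed_distance((2.0, 0.0), w, b), signed_distance((0.0, 0.0), w, b))
```

The two signed distances are equal in magnitude and opposite in sign, which is exactly the equidistance property the SVM margin-maximization principle encodes.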

Chart: number of search results per year
