Search results for: kernel trick

Number of results: 52726

2018
Siong Thye Goh, Cynthia Rudin

We present a new machine learning approach to estimate personalized treatment effects in the classical potential outcomes framework with binary outcomes. To overcome the problem that both treatment and control outcomes for the same unit are required for supervised learning, we propose surrogate loss functions that incorporate both treatment and control data. The new surrogates yield tighter bou...

Journal: Neural computation 2017
Giorgio Gnecco, Alberto Bemporad, Marco Gori, Marcello Sanguineti

Optimal control theory and machine learning techniques are combined to formulate and solve in closed form an optimal control formulation of online learning from supervised examples with regularization of the updates. The connections with the classical linear quadratic Gaussian (LQG) optimal control problem, of which the proposed learning paradigm is a nontrivial variation as it involves random ...
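To make the setting of that abstract concrete, here is a generic numpy sketch of online learning from supervised examples with regularized updates, written as recursive regularized least squares. This is only an illustrative baseline under that assumption; it is not the authors' closed-form optimal-control (LQG-style) solution.

import numpy as np

def online_ridge(stream, dim, lam=1.0):
    """Process (x_t, y_t) pairs one at a time, keeping the regularized least-squares solution."""
    A = lam * np.eye(dim)        # running regularized Gram matrix
    b = np.zeros(dim)            # running cross-correlation vector
    w = np.zeros(dim)
    for x, y in stream:
        A += np.outer(x, x)
        b += y * x
        w = np.linalg.solve(A, b)    # weights after seeing t examples
    return w

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])
stream = [(x, x @ true_w + 0.01 * rng.standard_normal()) for x in rng.standard_normal((300, 3))]
print(online_ridge(stream, dim=3))   # close to true_w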

2012
Alhussein Fawzi

In the previous lectures, we have focused on finding linear classifiers, i.e., ones in which the decision boundary is a hyperplane. However, in many scenarios the data points cannot really be classified in this manner, as there simply might be no hyperplane that separates most of the positive examples from the negative ones; see, e.g., Figure 1(a). Clearly, in such situations one needs to resor...
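A minimal sketch of the point this excerpt makes, assuming scikit-learn is available: data that no hyperplane separates in the input space can still be separated after an implicit feature map, which is exactly what the kernel trick provides.

import numpy as np
from sklearn.datasets import make_circles
from sklearn.svm import SVC

X, y = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)

linear_svm = SVC(kernel="linear").fit(X, y)        # hyperplane in the input space
rbf_svm = SVC(kernel="rbf", gamma=2.0).fit(X, y)   # kernel trick: implicit feature space

print("linear accuracy:", linear_svm.score(X, y))  # poor: no separating hyperplane exists
print("RBF accuracy:", rbf_svm.score(X, y))        # near perfect on concentric circles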

2013
Di Zhang, Yun Zhao, Minghui Du

Linear discriminant analysis (LDA) is one of the most popular supervised dimensionality reduction (DR) techniques used in computer vision, machine learning, and pattern classification. However, LDA captures only the global geometric structure of the data and ignores the local geometric variation among data points of the same class. In this paper, a new supervised DR algorithm called lo...
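Since the abstract's proposed local-structure variant is truncated here, the following numpy sketch only shows the classical (global) LDA baseline it starts from: build between- and within-class scatter matrices and take the top generalized eigenvectors. The small ridge term is an assumption added for numerical stability.

import numpy as np

def lda_directions(X, y, n_components=1):
    """Top generalized eigenvectors of S_b w = lambda S_w w (classical LDA)."""
    classes = np.unique(y)
    mean_total = X.mean(axis=0)
    d = X.shape[1]
    S_w = np.zeros((d, d))   # within-class scatter
    S_b = np.zeros((d, d))   # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mean_c = Xc.mean(axis=0)
        S_w += (Xc - mean_c).T @ (Xc - mean_c)
        diff = (mean_c - mean_total).reshape(-1, 1)
        S_b += len(Xc) * (diff @ diff.T)
    # Solve via S_w^{-1} S_b, with a small ridge for invertibility.
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(S_w + 1e-6 * np.eye(d), S_b))
    order = np.argsort(-eigvals.real)
    return eigvecs[:, order[:n_components]].real

# Example: project two-class Gaussian data onto its single discriminant direction.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(2, 1, (50, 3))])
y = np.array([0] * 50 + [1] * 50)
Z = X @ lda_directions(X, y)   # 1-D supervised embedding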

2015
Cyril Banderier, Michael Wallner

We analyze some enumerative and asymptotic properties of Dyck paths under a line of slope 2/5. This answers Knuth’s problem #4 from his “Flajolet lecture” during the conference “Analysis of Algorithms” (AofA’2014) in Paris in June 2014. Our approach relies on the work of Banderier and Flajolet for asymptotics and enumeration of directed lattice paths. A key ingredient in the proof is the gen...
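A hedged brute-force illustration, not the authors' generating-function approach: one common reading of "paths under a line of slope 2/5" is monotone lattice paths with unit east/north steps from (0,0) to (5t, 2t) that never rise above the line y = 2x/5. The exact path model studied in the paper may differ; the memoized counter below simply enumerates this assumed model so that small values can be cross-checked against any closed form.

from functools import lru_cache

def paths_below_slope_2_5(t):
    target_x, target_y = 5 * t, 2 * t

    @lru_cache(maxsize=None)
    def count(x, y):
        if 5 * y > 2 * x:                      # point lies above the line y = 2x/5
            return 0
        if (x, y) == (target_x, target_y):
            return 1
        total = 0
        if x < target_x:
            total += count(x + 1, y)           # east step
        if y < target_y:
            total += count(x, y + 1)           # north step
        return total

    return count(0, 0)

print([paths_below_slope_2_5(t) for t in range(1, 5)])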

2013
Steffen Grünewälder, Arthur Gretton, John Shawe-Taylor

We develop a generic approach to form smooth versions of basic mathematical operations like multiplication, composition, change of measure, and conditional expectation, among others. Operations which result in functions outside the reproducing kernel Hilbert space (such as the product of two RKHS functions) are approximated via a natural cost function, such that the solution is guaranteed to be...
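As a small illustration of the kind of RKHS-valued operation mentioned above, here is a numpy sketch of a regularized kernel estimate of the conditional expectation E[f(Y) | X = x]. The Gaussian kernel and the ridge parameter are illustrative assumptions, not the paper's specific cost function.

import numpy as np

def gaussian_kernel(A, B, sigma=0.5):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma ** 2))

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
Y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)

lam = 1e-2
K = gaussian_kernel(X, X)
W = np.linalg.solve(K + lam * len(X) * np.eye(len(X)), np.eye(len(X)))  # (K + n*lam*I)^{-1}

def cond_expectation(f, x_query):
    """Estimate E[f(Y) | X = x] at each query point via data-dependent kernel weights."""
    k = gaussian_kernel(np.atleast_2d(x_query), X)   # cross-kernel between queries and data
    alpha = k @ W
    return alpha @ f(Y)

print(cond_expectation(lambda y: y, np.array([[0.0], [1.5]])))  # roughly sin(0), sin(1.5)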

Journal: Neural computation 2004
Wenming Zheng, Li Zhao, Cairong Zou

Generalized discriminant analysis (GDA) is an extension of the classical linear discriminant analysis (LDA) from the linear domain to a nonlinear domain via the kernel trick. However, in the previous GDA algorithm the solutions may suffer from the degenerate eigenvalue problem (i.e., several eigenvectors with the same eigenvalue), which makes them not optimal in terms of the discriminant abilit...
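A compact numpy sketch of the construction behind GDA in the two-class case: a kernel Fisher discriminant, i.e., LDA carried out in the kernel-induced feature space. The RBF kernel and the ridge term added for numerical stability are illustrative assumptions; the paper's fix for the degenerate-eigenvalue issue is not reproduced.

import numpy as np

def rbf(A, B, gamma=1.0):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def kernel_fisher(X, y, gamma=1.0, eps=1e-3):
    K = rbf(X, X, gamma)
    idx0, idx1 = np.where(y == 0)[0], np.where(y == 1)[0]
    m0 = K[:, idx0].mean(axis=1)          # mean kernel vector of class 0
    m1 = K[:, idx1].mean(axis=1)          # mean kernel vector of class 1
    # Within-class scatter in the feature space, expressed through K.
    N = np.zeros_like(K)
    for idx in (idx0, idx1):
        Kc = K[:, idx]
        H = np.eye(len(idx)) - np.full((len(idx), len(idx)), 1.0 / len(idx))
        N += Kc @ H @ Kc.T
    alpha = np.linalg.solve(N + eps * np.eye(len(K)), m1 - m0)
    return alpha, K

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(2, 1, (30, 2))])
y = np.array([0] * 30 + [1] * 30)
alpha, K = kernel_fisher(X, y)
scores = K @ alpha   # nonlinear one-dimensional discriminant scores for the training points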

Journal: SIAM Journal on Optimization 2017
Amin Jalali, Maryam Fazel, Lin Xiao

We propose a new class of convex penalty functions, called variational Gram functions (VGFs), that can promote pairwise relations, such as orthogonality, among a set of vectors in a vector space. These functions can serve as regularizers in convex optimization problems arising from hierarchical classification, multitask learning, and estimating vectors with disjoint supports, among other applic...
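A hedged numerical illustration of one simple member of the family described above: choosing the defining set of matrices to have zero diagonal and entries bounded by 1 (an assumption made here only for concreteness) reduces the penalty to the sum of absolute off-diagonal entries of the Gram matrix, which is zero exactly when the vectors are mutually orthogonal, i.e., the penalty promotes pairwise orthogonality.

import numpy as np

def vgf_orthogonality_penalty(X):
    """Sum_{i != j} |x_i^T x_j| over the columns x_i of X."""
    G = X.T @ X                          # Gram matrix of the column vectors
    off_diag = G - np.diag(np.diag(G))   # drop the self inner products
    return np.abs(off_diag).sum()

Q, _ = np.linalg.qr(np.random.default_rng(0).standard_normal((6, 3)))
print(vgf_orthogonality_penalty(Q))                 # ~ 0: orthonormal columns
print(vgf_orthogonality_penalty(np.ones((6, 3))))   # large: identical columns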

2010
Hakan Çevikalp

This paper introduces a novel method for face recognition based on multiple images. When multiple images are considered, the face recognition problem is defined as taking a set of face images from an unknown person and finding the most similar set among the database of labeled image sets. Our proposed method approximates each image set with a geometric convex model (affine/convex hulls) by usin...
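A small numpy sketch of the geometric idea in that abstract: model each image set by the affine hull of its vectorized samples and score two sets by the minimum distance between their hulls, found by linear least squares. The energy cutoff used to pick the hull basis is an illustrative assumption, not a value from the paper.

import numpy as np

def affine_hull(S, energy=0.98):
    """Return (mean, orthonormal basis) for the affine hull of the rows of S."""
    mu = S.mean(axis=0)
    U, s, _ = np.linalg.svd((S - mu).T, full_matrices=False)
    k = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), energy) + 1
    return mu, U[:, :k]

def hull_distance(S1, S2):
    mu1, U1 = affine_hull(S1)
    mu2, U2 = affine_hull(S2)
    # Minimize ||(mu1 + U1 a) - (mu2 + U2 b)|| over a, b: a linear least-squares problem.
    A = np.hstack([U1, -U2])
    coef, *_ = np.linalg.lstsq(A, mu2 - mu1, rcond=None)
    return np.linalg.norm(A @ coef - (mu2 - mu1))

rng = np.random.default_rng(0)
set_a = rng.normal(0, 1, (20, 50))   # 20 "images" of person A as 50-dim feature vectors
set_b = rng.normal(3, 1, (20, 50))   # 20 "images" of person B
print(hull_distance(set_a, set_a[:10]), hull_distance(set_a, set_b))  # ~0 vs. clearly positive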

Journal: Knowl.-Based Syst. 2012
Yongqiao Wang, He Ni

As an important nonparametric regression method, support vector regression achieves nonlinear modeling capability via the kernel trick. This paper discusses multivariate support vector regression when its regression function is restricted to be convex. It approximates this convex shape restriction with a series of linear matrix inequality constraints and transforms its training into a semidefini...
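A minimal sketch, assuming scikit-learn, of the unconstrained building block that abstract starts from: support vector regression made nonlinear by the kernel trick. The convex-shape restriction and its semidefinite reformulation described in the paper are not reproduced here.

import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(200, 1))
y = X[:, 0] ** 2 + 0.1 * rng.standard_normal(200)   # a convex target function

model = SVR(kernel="rbf", C=10.0, epsilon=0.05).fit(X, y)   # kernel trick handles the nonlinearity
print(model.predict(np.array([[-1.0], [0.0], [1.0]])))      # roughly 1, 0, 1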

[Chart: number of search results per year]