Spectral properties of the kernel matrix and their relation to kernel methods in machine learning

Author

  • Mikio L. Braun
Abstract

This chapter serves as a brief introduction to the supervised learning setting and kernel methods. Moreover, several results from linear algebra, probability theory, and functional analysis are reviewed which will be used throughout the thesis.

2.1 Some notational conventions

We begin by introducing some basic notational conventions. The sets N, Z, R, C denote the natural, integer, real, and complex numbers. Vectors are denoted by lowercase letters, matrices by bold uppercase letters, and random variables by uppercase letters. The individual entries of vectors and matrices are denoted by square brackets: for example, x ∈ R^n is a vector with coefficients [x]_i, and the matrix A has entries [A]_ij. Vector and matrix transpose is denoted by x⊤. The set of square n × n matrices is denoted by M_n, and the set of general n × m matrices by M_{n,m}.

The set of eigenvalues of a square matrix A is denoted by λ(A). For a symmetric n × n matrix A, we will always assume that the eigenvalues and eigenvectors are sorted in non-increasing order, with eigenvalues repeated according to their multiplicity. The eigenvalues of A are thus λ_1(A) ≥ … ≥ λ_n(A).

We use the following standard norms on finite-dimensional vector spaces. Let x ∈ R^n and A ∈ M_n. Then

    ‖x‖ = √(∑_{i=1}^n [x]_i²),    ‖A‖ = max_{x: ‖x‖≠0} ‖Ax‖ / ‖x‖.    (2.1)

A useful upper bound on ‖A‖ is given by

    ‖A‖ ≤ n · max_{1≤i,j≤n} |[A]_ij|.    (2.2)

Another matrix norm we will encounter is the Frobenius norm

    ‖A‖_F = √(∑_{i,j=1}^n [A]_ij²).
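To make these conventions concrete, here is a minimal NumPy sketch (an illustration, not part of the thesis) that checks the norm bound (2.2), the relation ‖A‖ ≤ ‖A‖_F, and the eigenvalue ordering on a random symmetric matrix; all names and values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# A random symmetric 5x5 matrix, standing in for a kernel matrix.
B = rng.standard_normal((5, 5))
A = (B + B.T) / 2
n = A.shape[0]

# Eigenvalues sorted in non-increasing order: lambda_1(A) >= ... >= lambda_n(A).
eigvals = np.linalg.eigvalsh(A)[::-1]

# Operator norm ||A|| = max ||Ax|| / ||x|| over x != 0 (the largest singular value).
op_norm = np.linalg.norm(A, 2)

# Upper bound (2.2): ||A|| <= n * max_ij |[A]_ij|.
bound = n * np.abs(A).max()

# Frobenius norm: square root of the sum of squared entries; it dominates ||A||.
fro_norm = np.linalg.norm(A, "fro")

assert op_norm <= bound + 1e-12 and op_norm <= fro_norm + 1e-12
print(eigvals, op_norm, bound, fro_norm)
```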


Similar articles

A Geometry Preserving Kernel over Riemannian Manifolds

The kernel trick and projection to tangent spaces are two options for linearizing data points lying on Riemannian manifolds. These approaches provide the prerequisites for applying standard machine learning methods to Riemannian manifolds. Classical kernels implicitly project data to a high-dimensional feature space without considering the intrinsic geometry of the data points. ...

Semi-Supervised Composite Kernel Learning Using Distance Metric Learning Techniques

The distance metric plays a key role in many machine learning and computer vision algorithms, so choosing an appropriate distance metric has a direct effect on the performance of such algorithms. Recently, distance metric learning using labeled data or other available supervisory information has become a very active research area in machine learning applications. Studies in this area have shown t...

Modeling of Flow Number of Asphalt Mixtures Using a Multi-Kernel Based Support Vector Machine Approach

The flow number of asphalt–aggregate mixtures has been proposed as an explanatory factor for assessing the rutting potential of asphalt mixtures. This study proposes a multiple–kernel based support vector machine (MK–SVM) approach for modeling the flow number of asphalt mixtures. The MK–SVM approach consists of a weighted least squares–support vector machine (WLS–SVM) integrating two kernel funct...
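As a rough sketch of the weighted two-kernel idea summarized above, the following snippet combines an RBF and a polynomial kernel into a composite kernel and fits it with plain kernel ridge regression. This is a generic stand-in for the paper's WLS–SVM formulation; the kernel choices, mixing weight w, regularizer lam, and toy data are all assumptions made for illustration.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # K_ij = exp(-gamma * ||x_i - y_j||^2)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def poly_kernel(X, Y, degree=2, c=1.0):
    return (X @ Y.T + c) ** degree

def composite_kernel(X, Y, w=0.5, gamma=1.0, degree=2):
    # Convex combination of two base kernels; w is the mixing weight
    # that a multi-kernel method would tune alongside the model.
    return w * rbf_kernel(X, Y, gamma) + (1 - w) * poly_kernel(X, Y, degree)

# Toy regression data standing in for flow-number measurements.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(40, 3))
y = np.sin(X.sum(axis=1)) + 0.1 * rng.standard_normal(40)

# Kernel ridge regression with the composite kernel (regularizer lam).
lam = 1e-2
K = composite_kernel(X, X, w=0.7)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)

X_test = rng.uniform(-1, 1, size=(5, 3))
y_pred = composite_kernel(X_test, X, w=0.7) @ alpha
print(y_pred)
```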

A Comparative Study of Pairwise Learning Methods based on Kernel Ridge Regression

Many machine learning problems can be formulated as predicting labels for a pair of objects. Problems of that kind are often referred to as pairwise learning, dyadic prediction or network inference problems. During the last decade kernel methods have played a dominant role in pairwise learning. They still obtain a state-of-the-art predictive performance, but a theoretical analysis of their beha...
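One standard construction in this literature is the Kronecker (product) pairwise kernel, k((u, v), (u′, v′)) = k_u(u, u′) · k_v(v, v′), plugged into kernel ridge regression over all pairs. The sketch below illustrates that generic idea, not the specific methods compared in the paper; the linear kernels, regularizer, and toy data are assumptions.

```python
import numpy as np

def linear_kernel(X, Y):
    return X @ Y.T

rng = np.random.default_rng(2)
U = rng.standard_normal((8, 4))   # objects of the first kind
V = rng.standard_normal((6, 4))   # objects of the second kind
Y = rng.standard_normal((8, 6))   # labels for all pairs (u_i, v_j)

# Pairwise kernel matrix over all 48 pairs, ordered row-major in (i, j).
Ku, Kv = linear_kernel(U, U), linear_kernel(V, V)
K = np.kron(Ku, Kv)

# Kernel ridge regression on the vectorized pairwise labels.
lam = 1.0
alpha = np.linalg.solve(K + lam * np.eye(K.shape[0]), Y.ravel())
fitted = (K @ alpha).reshape(8, 6)
print(np.abs(fitted - Y).mean())
```

Forming K = Ku ⊗ Kv explicitly scales poorly; practical pairwise KRR solvers exploit the Kronecker structure (for instance via separate eigendecompositions of Ku and Kv) rather than materializing K.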

Composite Kernel Optimization in Semi-Supervised Metric

Machine-learning solutions to classification, clustering and matching problems critically depend on the adopted metric, which in the past was selected heuristically. In the last decade, it has been demonstrated that an appropriate metric can be learnt from data, resulting in superior performance as compared with traditional metrics. This has recently stimulated a considerable interest in the to...

Spectral Algorithms for Supervised Learning

We discuss how a large class of regularization methods, collectively known as spectral regularization and originally designed for solving ill-posed inverse problems, gives rise to regularized learning algorithms. All of these algorithms are consistent kernel methods that can be easily implemented. The intuition behind their derivation is that the same principle allowing for the numerical stabil...
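In this view, each algorithm applies a filter function g to the eigenvalues of the kernel matrix, so the coefficients take the form α = V g(Σ) V⊤ y. The sketch below contrasts two classical filters, Tikhonov regularization (kernel ridge regression) and truncated SVD, on assumed toy data; it is a schematic of the spectral-filtering idea, not the authors' implementation.

```python
import numpy as np

def spectral_fit(K, y, filter_fn):
    # Eigendecompose the symmetric PSD kernel matrix and apply a spectral
    # filter g to its eigenvalues: alpha = V g(S) V^T y.
    s, V = np.linalg.eigh(K)
    return V @ (filter_fn(s) * (V.T @ y))

# Two classical filters from the spectral-regularization family.
tikhonov = lambda s, lam=1e-1: 1.0 / (s + lam)                    # kernel ridge regression
tsvd = lambda s, tol=1e-1: np.where(s > tol, 1.0 / np.maximum(s, tol), 0.0)  # truncated SVD

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(30, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(30)

# Gaussian kernel matrix on the one-dimensional inputs.
d2 = (X - X.T) ** 2
K = np.exp(-d2)

for g in (tikhonov, tsvd):
    alpha = spectral_fit(K, y, g)
    print(np.abs(K @ alpha - y).mean())
```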


Publication date: 2005