Search results for: gramian matrix

Number of results: 364,823

2007
Steffen Börm, Jochen Garcke

To compute the exact solution of Gaussian process regression one needs O(N³) computations for direct methods and O(N²) for iterative methods, since it involves a densely populated kernel matrix of size N×N, where N denotes the number of data points. This makes large-scale learning problems intractable by standard techniques. We propose to use an alternative approach: the kernel matrix is replaced by a data-spars...
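As context for the cost quoted in the abstract, below is a minimal sketch of exact Gaussian process regression via a Cholesky solve of the dense N×N kernel matrix, which is the O(N³) baseline the paper aims to avoid. The RBF kernel, noise level, and the helper name gp_predict are illustrative assumptions, not the data-sparse method proposed by the authors.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def rbf_kernel(X, Y, lengthscale=1.0):
    """Dense RBF kernel matrix; storing it is already O(N^2)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def gp_predict(X_train, y_train, X_test, noise=1e-2):
    """Exact GP regression mean: the Cholesky factorization of the
    densely populated N x N kernel matrix costs O(N^3)."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    alpha = cho_solve(cho_factor(K), y_train)      # direct solve, O(N^3)
    return rbf_kernel(X_test, X_train) @ alpha

# Toy usage on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
print(gp_predict(X, y, X[:5]))
```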

Journal: Journal of Machine Learning Research, 2009
Qinfeng Shi, James Petterson, Gideon Dror, John Langford, Alexander J. Smola, S. V. N. Vishwanathan

We propose hashing to facilitate efficient kernels. This generalizes previous work using sampling and we show a principled way to compute the kernel matrix for data streams and sparse feature spaces. Moreover, we give deviation bounds from the exact kernel matrix. This has applications to estimation on strings and graphs.
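A minimal sketch of the feature-hashing idea for sparse feature spaces follows: tokens are hashed into a fixed-size vector whose inner products approximate the original sparse kernel. The signed-hash variant, the choice of MD5, and the helper names are illustrative assumptions, not the paper's exact construction.

```python
import hashlib
from collections import Counter
import numpy as np

def hash_features(tokens, dim=1024, seed=0):
    """Hash a multiset of sparse features (e.g. tokens or n-grams) into a
    fixed-size vector; the signed hash keeps inner products approximately
    unbiased estimates of the original sparse inner product."""
    v = np.zeros(dim)
    for tok, count in Counter(tokens).items():
        h = int(hashlib.md5(f"{seed}:{tok}".encode()).hexdigest(), 16)
        sign = 1.0 if (h >> 20) & 1 else -1.0   # one hash bit decides the sign
        v[h % dim] += sign * count
    return v

def hashed_kernel(a, b, dim=1024):
    """Approximate linear kernel between two token multisets."""
    return float(hash_features(a, dim) @ hash_features(b, dim))

print(hashed_kernel("the cat sat on the mat".split(),
                    "the cat ate the rat".split()))
```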

Journal: CoRR, 2014
Aleksandar Haber, Michel Verhaegen

In this paper we show that inverses of well-conditioned, finite-time Gramians and impulse response matrices of large-scale interconnected systems described by sparse state-space models can be approximated by sparse matrices. The approximation methodology established in this paper opens the door to the development of novel methods for distributed estimation, identification and control of large-s...
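The following small numerical sketch illustrates the phenomenon the abstract describes, not the paper's methodology: for a banded state matrix modelling a chain of interconnected subsystems, it forms the finite-time controllability Gramian directly and checks that thresholding its inverse gives a sparse matrix with small error. Dimensions and the threshold are arbitrary assumptions.

```python
import numpy as np

# A sparse, banded state matrix for a chain of subsystems (hypothetical example).
n, T = 60, 30
A = 0.5 * np.eye(n) + 0.1 * np.eye(n, k=1) + 0.1 * np.eye(n, k=-1)
B = np.eye(n)

# Finite-time controllability Gramian W = sum_{k=0}^{T-1} A^k B B^T (A^T)^k.
W = np.zeros((n, n))
Ak = np.eye(n)
for _ in range(T):
    W += Ak @ B @ B.T @ Ak.T
    Ak = A @ Ak

Winv = np.linalg.inv(W)
# Off-diagonal entries of the inverse decay quickly, so thresholding small
# entries yields a sparse approximation with small relative error.
W_sparse = np.where(np.abs(Winv) > 1e-6 * np.abs(Winv).max(), Winv, 0.0)
err = np.linalg.norm(Winv - W_sparse) / np.linalg.norm(Winv)
print("nonzeros kept:", np.count_nonzero(W_sparse), "relative error:", err)
```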

2007
Ulrike Baur, Peter Benner

We consider linear time-invariant (LTI) systems of the following form Σ: ẋ(t) = Ax(t) + Bu(t), t > 0, x(0) = x₀; y(t) = Cx(t) + Du(t), t ≥ 0, with stable state matrix A ∈ R^(n×n) and B ∈ R^(n×m), C ∈ R^(p×n), D ∈ R^(p×m), arising, e.g., from the discretization and linearization of parabolic PDEs. Typically, in practical applications, we have a large state-space dimension n = O(10^5) and a small input and ...
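For small systems of this form, the controllability and observability Gramians can be computed directly from Lyapunov equations, as in the sketch below, and their product gives the Hankel singular values used in balanced-truncation model reduction. The abstract's setting (n around 10^5) requires low-rank techniques instead; the random stable system here is only a stand-in.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# A small stable LTI system (illustrative stand-in for a discretized PDE).
rng = np.random.default_rng(1)
n, m, p = 20, 2, 3
A = rng.normal(size=(n, n))
A = A - (np.abs(np.linalg.eigvals(A).real).max() + 1.0) * np.eye(n)  # shift to stability
B = rng.normal(size=(n, m))
C = rng.normal(size=(p, n))

# Controllability Gramian P:  A P + P A^T + B B^T = 0
P = solve_continuous_lyapunov(A, -B @ B.T)
# Observability Gramian Q:    A^T Q + Q A + C^T C = 0
Q = solve_continuous_lyapunov(A.T, -C.T @ C)

# Hankel singular values, the basis of balanced-truncation model reduction.
hsv = np.sqrt(np.clip(np.sort(np.linalg.eigvals(P @ Q).real)[::-1], 0.0, None))
print(hsv[:5])
```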

Journal: CoRR, 2017
Melih Engin, Lei Wang, Luping Zhou, Xinwang Liu

Being symmetric positive-definite (SPD), the covariance matrix has traditionally been used to represent a set of local descriptors in visual recognition. Recent studies show that a kernel matrix can give a considerably better representation by modelling the nonlinearity in the local descriptor set. Nevertheless, neither the descriptors nor the kernel matrix is deeply learned. Worse, they are considered ...
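A toy contrast between the two SPD set representations mentioned above is sketched below: the covariance of a descriptor set versus its kernel (Gram) matrix. The RBF kernel, bandwidth, and regularization are illustrative assumptions, and nothing here is deeply learned as in the paper.

```python
import numpy as np

def covariance_repr(D):
    """SPD covariance matrix of a set of d-dimensional local descriptors
    (rows of D): the classical set representation in visual recognition."""
    return np.cov(D, rowvar=False) + 1e-6 * np.eye(D.shape[1])

def kernel_repr(D, gamma=0.5):
    """SPD kernel (Gram) matrix of the same descriptor set:
    K_ij = exp(-gamma * ||d_i - d_j||^2) models nonlinear relations
    that the covariance misses."""
    d2 = ((D[:, None, :] - D[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2) + 1e-6 * np.eye(D.shape[0])

# Toy descriptor set: 50 local descriptors of dimension 8.
D = np.random.default_rng(2).normal(size=(50, 8))
print(covariance_repr(D).shape, kernel_repr(D).shape)  # (8, 8) (50, 50)
```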

1999
Rudolf Scharlau

Let p be an odd prime. It is known that the symplectic group Sp_2n(p) has two (algebraically conjugate) irreducible representations of degree (p^n + 1)/2 realized over Q(√(εp)), where ε = (−1)^((p−1)/2). We study the integral lattices related to these representations for the case p^n ≡ 1 mod 4. (The case p^n ≡ 3 mod 4 has been considered in a previous paper.) We show that the class of invariant lattices co...

Journal: CoRR, 2016
Xiaolu Hou, Frédérique E. Oggier

We consider a variation of Construction A of lattices from linear codes based on two classes of number fields, totally real and CM Galois number fields. We propose a generic construction with explicit generator and Gram matrices, then focus on modular and unimodular lattices, obtained in the particular cases of totally real, respectively, imaginary, quadratic fields. Our motivation comes from c...
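For orientation, the sketch below builds the explicit generator and Gram matrices of the classical binary Construction A over Z, which is the template the abstract generalizes to number fields; it is not the paper's number-field construction. The systematic-code layout and the [7,4] Hamming example are standard illustrative choices.

```python
import numpy as np

# Classical binary Construction A over Z: L = {x in Z^n : x mod 2 is in C}.
# For a systematic generator [I_k | P] of the code C, a lattice generator is
#     G = [[I_k, P], [0, 2*I_{n-k}]],   with Gram matrix G @ G.T.
def construction_a(P):
    k, r = P.shape                        # code dimension k, redundancy r
    G = np.block([[np.eye(k), P],
                  [np.zeros((r, k)), 2 * np.eye(r)]])
    return G, G @ G.T

# Parity part of a [7,4] Hamming code (a standard choice).
P = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]], dtype=float)
G, gram = construction_a(P)
print(np.round(gram).astype(int))
```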

Journal: Journal of Machine Learning Research, 2013
Sivan Sabato, Nathan Srebro, Naftali Tishby

We obtain a tight distribution-specific characterization of the sample complexity of large-margin classification with L2 regularization: we introduce the margin-adapted dimension, a simple function of the second-order statistics of the data distribution, and show distribution-specific upper and lower bounds on the sample complexity, both governed by the margin-adapted dimension of the ...
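As an illustration of "a simple function of the second-order statistics", the sketch below computes a margin-adapted-dimension-style quantity from the eigenvalues of the empirical second-moment matrix. The exact definition used here (the smallest k such that kγ² is at least the eigenvalue tail beyond the k largest) is an assumption and should be checked against the paper; the function name is hypothetical.

```python
import numpy as np

def margin_adapted_dimension(X, gamma):
    """Assumed form of the margin-adapted dimension: the smallest k with
    k * gamma^2 >= sum of the eigenvalues of the second-moment matrix
    beyond the k largest (illustrative sketch only; see the paper for
    the exact definition)."""
    lam = np.sort(np.linalg.eigvalsh(X.T @ X / len(X)))[::-1]
    for k in range(1, len(lam) + 1):
        if k * gamma**2 >= lam[k:].sum():
            return k
    return len(lam)

# Anisotropic toy data: most variance sits in a few directions.
X = np.random.default_rng(3).normal(size=(500, 30)) * np.linspace(3, 0.1, 30)
print(margin_adapted_dimension(X, gamma=1.0))
```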

2017
Intissar Sayehi, Okba Touali, Mohsen Machhout

In this paper, we propose a new approach aiming to improve the performance of the regularization networks (RN) method and speed up its computation time. A considerable reduction in total computation time and high performance were achieved by offloading the heavy computational load to an FPGA. Using Xilinx System Generator, a successful HW/SW co-design was constructed to accelerate th...

Journal: Math. Program., 2011
Luigi Grippo, Laura Palagi, Veronica Piccialli

In this paper we consider low-rank semidefinite programming (LRSDP) relaxations of the max cut problem. Using the Gramian representation of a positive semidefinite matrix, the LRSDP problem is transformed into the nonconvex nonlinear programming problem of minimizing a quadratic function with quadratic equality constraints. First, we establish some new relationships between these two formulations...
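The sketch below shows the Gramian factorization X = VᵀV applied to the max-cut SDP relaxation (written here in the equivalent maximization form), with the quadratic equality constraints becoming unit-norm columns of V, optimized by a simple block-coordinate ascent. This is only an illustration of the reformulation, not the algorithm developed in the paper; the graph and the rank r are arbitrary.

```python
import numpy as np

def maxcut_lrsdp(W, r=4, sweeps=200, seed=0):
    """Gramian (low-rank) reformulation of the max-cut SDP relaxation:
    X = V.T @ V with unit-norm columns v_i, optimized by updating one
    column at a time (each update cannot decrease the objective)."""
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W                # graph Laplacian
    V = np.random.default_rng(seed).normal(size=(r, n))
    V /= np.linalg.norm(V, axis=0)                # enforce diag(X) = 1
    for _ in range(sweeps):
        for i in range(n):
            g = V @ L[:, i] - L[i, i] * V[:, i]   # coupling of v_i to the other columns
            nrm = np.linalg.norm(g)
            if nrm > 0:
                V[:, i] = g / nrm                 # exact maximizer over the unit sphere
    return 0.25 * np.trace(V @ L @ V.T)           # SDP objective (1/4) <L, X>

# 5-cycle: the best integer cut is 4; the SDP relaxation value is about 4.52.
W = np.zeros((5, 5))
for i in range(5):
    W[i, (i + 1) % 5] = W[(i + 1) % 5, i] = 1.0
print(maxcut_lrsdp(W))
```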

[Chart: number of search results per publication year]