Search results for: gramian matrix

Number of results: 364823

2007
R. BALAJI R. B. BAPAT

In this paper, block distance matrices are introduced. Suppose F is a square block matrix in which each block is a symmetric matrix of some given order. If F is positive semidefinite, the block distance matrix D is defined as a matrix whose (i, j)-block is given by D_ij = F_ii + F_jj − 2F_ij. When each block in F is 1 × 1 (i.e., a real number), D is a usual Euclidean distance matrix. Many interes...
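The block formula above is easy to check numerically. A minimal sketch, assuming F is an (n·s) × (n·s) positive semidefinite matrix partitioned into n blocks of size s (the function name and partitioning are illustrative, not from the paper):

```python
import numpy as np

def block_distance_matrix(F, s):
    """Form D with (i, j)-block D_ij = F_ii + F_jj - 2*F_ij."""
    n = F.shape[0] // s
    D = np.zeros_like(F)
    for i in range(n):
        for j in range(n):
            Fii = F[i*s:(i+1)*s, i*s:(i+1)*s]
            Fjj = F[j*s:(j+1)*s, j*s:(j+1)*s]
            Fij = F[i*s:(i+1)*s, j*s:(j+1)*s]
            D[i*s:(i+1)*s, j*s:(j+1)*s] = Fii + Fjj - 2*Fij
    return D

# With 1x1 blocks F_ij = <x_i, x_j> (a Gram matrix), D reduces to the
# matrix of squared Euclidean distances, as the abstract notes.
X = np.array([[0.0], [1.0], [3.0]])
F = X @ X.T                      # PSD Gram matrix of the points
D = block_distance_matrix(F, 1)  # D[i, j] = |x_i - x_j|^2
```

For the three points 0, 1, 3 this gives squared distances 1, 9, and 4 off the diagonal and zeros on it.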

2011
Mohammad Ghavamzadeh Alessandro Lazaric Rémi Munos Matthew W. Hoffman

In this paper, we analyze the performance of Lasso-TD, a modification of LSTD in which the projection operator is defined as a Lasso problem. We first show that Lasso-TD is guaranteed to have a unique fixed point and its algorithmic implementation coincides with the recently presented LARS-TD and LC-TD methods. We then derive two bounds on the prediction error of Lasso-TD in the Markov design s...

Journal: CoRR 2016
Tao Hong Zhihui Zhu

The aim of this brief is to design a robust projection matrix for the Compressive Sensing (CS) system when the signal is not exactly sparse. The optimal projection matrix design is obtained by minimizing the Frobenius norm of the difference between the identity matrix and the Gram matrix of the equivalent dictionary ΦΨ. A novel penalty ‖Φ‖_F is added to make the projection matrix robust when the...
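The objective described here can be sketched directly: form the equivalent dictionary A = ΦΨ and measure how far its Gram matrix is from the identity. This is only an evaluation of the objective for random matrices, not the paper's optimization; all sizes and names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 8, 16, 24
Phi = rng.standard_normal((m, n))   # projection matrix
Psi = rng.standard_normal((n, k))   # sparsifying dictionary

A = Phi @ Psi                       # equivalent dictionary
A = A / np.linalg.norm(A, axis=0)   # unit-norm columns
gram = A.T @ A                      # Gram matrix of the equivalent dictionary
objective = np.linalg.norm(np.eye(k) - gram, 'fro')
```

Since the columns are normalized, the diagonal of the Gram matrix is 1 and the objective penalizes only the off-diagonal inner products (mutual coherence).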

2013
Shunsuke Koshita Masahide Abe Masayuki Kawamata

This paper aims to reveal the relationship between the minimum L2-sensitivity of state-space digital filters and the Gramian-preserving frequency transformation. To this end, we first give a prototype low-pass state-space filter in such a manner that its structure becomes the minimum L2-sensitivity structure. Then we apply the Gramian-preserving (LP-LP) frequency transformation with a tunable p...

Journal: SIAM Journal on Optimization 2013
Laurent Sorber Marc Van Barel Lieven De Lathauwer

The canonical polyadic and rank-(L_r, L_r, 1) block term decomposition (CPD and BTD, respectively) are two closely related tensor decompositions. The CPD and, recently, BTD are important tools in psychometrics, chemometrics, neuroscience, and signal processing. We present a decomposition that generalizes these two and develop algorithms for its computation. Among these algorithms are alternatin...

2001
Joel A. Tropp Robert W. Heath

A description of optimal sequences for direct-spread code division multiple access is a byproduct of recent characterizations of the sum capacity. This paper restates the sequence design problem as an inverse singular value problem and shows that it can be solved with finite-step algorithms from matrix analysis. Relevant algorithms are reviewed and a new one-sided construction is proposed that...

Journal: CoRR 2016
Yaroslav Nikulin Roman Novak

In this work we explore the method of style transfer presented in [1]. We first demonstrate the power of the suggested style space on a few examples. We then vary different hyper-parameters and program properties that were not discussed in [1], among which are the recognition network used, starting point of the gradient descent and different ways to partition style and content layers. We also g...

Journal: Acta Cybern. 2006
Zoltán Szabó András Lörincz

A relation between a family of generalized Support Vector Machine (SVM) problems and the novel ε-sparse representation is provided. In defining ε-sparse representations, we use a natural generalization of the classical ε-insensitive cost function for vectors. The insensitivity parameter of the SVM problem is transformed into component-wise insensitivity and thus overall sparsification is replaced by co...

2007
Sergio Rojas Galeano Mark Herbster

Given an n-vertex weighted tree with (structural) diameter S_G and a set of ℓ vertices, we give a method to compute the corresponding ℓ × ℓ Gram matrix of the pseudoinverse of the graph Laplacian in O(n + ℓS_G) time. We discuss the application of this method to predicting the labeling of a graph. Preliminary experimental results on a digit classification task are given.
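The object being computed can be illustrated with a naive dense reference, not the fast tree algorithm of the abstract: build the weighted Laplacian of a small tree, take its pseudoinverse L⁺, and read off the Gram matrix restricted to the ℓ chosen vertices. The edge weights and vertex subset below are made up for the example:

```python
import numpy as np

edges = [(0, 1, 1.0), (1, 2, 2.0), (1, 3, 1.0)]  # a 4-vertex weighted tree
n = 4
L = np.zeros((n, n))
for u, v, w in edges:                # assemble the graph Laplacian
    L[u, u] += w; L[v, v] += w
    L[u, v] -= w; L[v, u] -= w

Lp = np.linalg.pinv(L)               # pseudoinverse of the Laplacian
subset = [0, 2, 3]                   # the l vertices of interest
K = Lp[np.ix_(subset, subset)]       # l x l Gram (kernel) matrix
```

This dense route costs O(n³); the point of the paper is that on a tree the same ℓ × ℓ submatrix comes out in O(n + ℓS_G) time.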

2005
Petros Drineas Michael W. Mahoney

A problem for many kernel-based methods is that the amount of computation required to find the solution scales as O(n³), where n is the number of training examples. We develop and analyze an algorithm to compute an easily-interpretable low-rank approximation to an n × n Gram matrix G such that computations of interest may be performed more rapidly. The approximation is of the form G̃_k = CW_k^+C ...
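A Nyström-style sketch of the approximation described above: sample c columns C of G, let W be the c × c intersection block, and approximate G by C W⁺ Cᵀ. The uniform column sampling below is only for illustration; the paper's algorithm uses a data-dependent sampling distribution and a rank-k truncation of W:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 5))
G = X @ X.T                          # 50 x 50 Gram matrix, rank <= 5

idx = rng.choice(50, size=10, replace=False)
C = G[:, idx]                        # n x c sampled columns of G
W = G[np.ix_(idx, idx)]              # c x c intersection block
G_approx = C @ np.linalg.pinv(W) @ C.T

err = np.linalg.norm(G - G_approx, 'fro') / np.linalg.norm(G, 'fro')
```

Because G here has rank 5 and the 10 sampled columns span its range, the reconstruction is exact up to floating point; for full-rank kernels the approximation error instead depends on the sampling and the chosen rank k.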
