Search results for: norm l0
Number of results: 46034. Filter results by year:
Theorem 1.1. Let G, X, p0, and r be as above. Suppose T is translation invariant, maps L log L to L^1, and is bounded on L^{p0}. Then T is bounded on L^p, 1 < p < p0, with an operator norm of O((p − 1)^{-1}). This theorem is false without the assumption of translation invariance, since L^1 is not an interpolation space between L log L and L^{p0}. For a concrete counterexample, take E and F to be subsets of X of me...
In this paper we present an algorithm for complex-valued sparse representation. In our previous work we presented an algorithm for sparse representation based on the smoothed l0 norm. Here we extend that algorithm to complex-valued signals. The proposed algorithm is compared to the FOCUSS algorithm, and it is experimentally shown that the proposed algorithm is about two or three orders of magnitude faster ...
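The smoothed-l0 idea behind this line of work replaces the discontinuous l0 count with a sum of Gaussian kernels and tightens the smoothing while staying on the solution set Ax = y. A minimal real-valued sketch follows (the complex extension is what the abstract contributes); the step size mu, the sigma schedule, and the function name are illustrative assumptions, not the authors' exact parameters:

```python
import numpy as np

def sl0(A, y, sigma_min=1e-3, sigma_decrease=0.5, mu=2.0, inner_iters=3):
    """Smoothed-l0 sketch: maximize sum_i exp(-x_i^2 / (2 sigma^2))
    over {x : A x = y}, for a decreasing sequence of sigma."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ y                       # minimum-l2 feasible starting point
    sigma = 2.0 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(inner_iters):
            # gradient ascent step on the smoothed-l0 objective
            x = x - mu * x * np.exp(-x**2 / (2 * sigma**2))
            # project back onto the affine solution set A x = y
            x = x - A_pinv @ (A @ x - y)
        sigma *= sigma_decrease
    return x
```

As sigma shrinks, the smooth objective approaches n minus the l0 norm, so the iterates are pushed toward the sparsest feasible solution while every iterate remains exactly feasible.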
In this paper, a new algorithm for Sparse Component Analysis (SCA), or atomic decomposition on over-complete dictionaries, is presented. The algorithm is essentially a method for obtaining sufficiently sparse solutions of underdetermined systems of linear equations. The solution obtained by the proposed algorithm is compared with the minimum l1-norm solution achieved by Linear Programming (LP). It...
The proportionate normalized least-mean-square (PNLMS) algorithm was developed in the context of network echo cancellation. It has been proven to be efficient when the echo path is sparse, which is not always the case in real-world echo cancellation. The improved PNLMS (IPNLMS) algorithm is less sensitive to the sparseness of the echo path. This algorithm uses the l1 norm to exploit sp...
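The l1-norm-based tap update described above can be sketched as follows: each filter coefficient gets a gain that mixes a uniform term with a term proportional to its magnitude relative to the filter's l1 norm. The step size mu, regularization delta, and mixing parameter alpha below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def ipnlms(x, d, L, mu=0.5, alpha=0.0, delta=1e-2, eps=1e-8):
    """IPNLMS sketch: identify an L-tap system from input x and output d."""
    w = np.zeros(L)
    for n in range(L - 1, len(x)):
        u = x[n - L + 1:n + 1][::-1]     # most recent L input samples
        e = d[n] - w @ u                 # a priori error
        # per-tap gains: uniform part plus l1-proportionate part
        g = (1 - alpha) / (2 * L) \
            + (1 + alpha) * np.abs(w) / (2 * np.linalg.norm(w, 1) + eps)
        # gain-weighted normalized update
        w = w + mu * g * u * e / (u @ (g * u) + delta)
    return w
```

With alpha near −1 the update degenerates to plain NLMS; with alpha near 1 it approaches a fully proportionate update, which is why IPNLMS behaves reasonably whether or not the echo path is actually sparse.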
Suppose we have a signal y which we wish to represent using a linear combination of a number of basis atoms a_i, y = sum_i x_i a_i = Ax. The problem of finding the minimum l0-norm representation for y is a hard problem. The Basis Pursuit (BP) approach proposes to find the minimum l1-norm representation instead, which corresponds to a linear program (LP) that can be solved using modern LP techniques, ...
In this paper, we consider sparse optimization problems with an l0-norm penalty or constraint. We prove that it is strongly NP-hard to find an approximate optimal solution within a certain error bound, unless P = NP. This provides a lower bound on the approximation error of any deterministic polynomial-time algorithm. Applying the complexity result to sparse linear regression reveals a gap between c...
We present a new penalizing scheme for a recently introduced prior model [8] on discrete frameworks. The model assumes that the optimal solutions for the frameworks possess sparse representations in certain transform domains, and applies this sparsity assumption as prior information for inference problems. To promote sparsity, we propose to penalize the l0-norm of the coefficient vect...
[Chart: number of search results per year; click to filter results by publication year]