Search results for: norm l0

Number of results: 46034

Journal: Applied Sciences 2022

The dictionary learning algorithm has been successfully applied to electronic noses because of its high recognition rate. However, most algorithms use the l0-norm or l1-norm to regularize the sparse coefficients, which means the electronic nose takes a long time to test samples and results in an inefficient system. Aiming at accelerating the system, an efficient method is proposed in this paper which performs multi-column a...
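The abstract above concerns l1-regularized sparse coding over a learned dictionary. As a minimal sketch of that core step, the following solves the l1-regularized coding problem with ISTA over a fixed random dictionary; the dictionary, sizes, and parameters here are illustrative assumptions, not the paper's method:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of the l1-norm: elementwise soft-thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_sparse_code(D, y, lam=0.05, n_iter=500):
    # Solve min_a 0.5*||y - D a||^2 + lam*||a||_1 by iterative
    # shrinkage-thresholding (ISTA); step size 1/L with L = ||D||_2^2.
    L = np.linalg.norm(D, 2) ** 2
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        a = soft_threshold(a + D.T @ (y - D @ a) / L, lam / L)
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)        # unit-norm dictionary atoms
a_true = np.zeros(50)
a_true[3], a_true[17] = 1.0, -0.5     # a 2-sparse ground-truth code
y = D @ a_true
a = ista_sparse_code(D, y)
print("residual:", np.linalg.norm(y - D @ a))
```

The recovered code concentrates its energy on the two true atoms; an l0-regularized variant would replace the soft-thresholding step with hard thresholding.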

Journal: Filomat 2021

We define a minimal relation L0 generated by an integral equation with operators and measures and give a description of the relations L0 − λE and L*0 − λ̄E, where L*0 is the adjoint of L0 and λ ∈ C. The obtained results are applied to T(λ) such that λE ⊂ T−1(λ) for bounded, everywhere-defined operators.

Journal: CoRR 2012
Christian Schou Oxvig, Patrick Steffen Pedersen, Thomas Arildsen, Torben Larsen

Signal reconstruction in compressive sensing involves finding a sparse solution that satisfies a set of linear constraints. Several approaches to this problem have been considered in existing reconstruction algorithms. They each provide a trade-off between reconstruction capabilities and required computation time. In an attempt to push the limits of this trade-off, we consider a smoothed l0 no...
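The smoothed l0 idea replaces the discontinuous count of nonzero entries with a smooth surrogate that can be minimized by gradient methods. A minimal sketch using the common Gaussian surrogate (the paper's exact surrogate and schedule are not given in this snippet, so this is an assumption):

```python
import numpy as np

def smoothed_l0(x, sigma):
    # Gaussian surrogate for ||x||_0: each entry contributes
    # 1 - exp(-x_i^2 / (2 sigma^2)), which tends to [x_i != 0] as sigma -> 0.
    return len(x) - np.sum(np.exp(-np.asarray(x) ** 2 / (2 * sigma**2)))

x = np.array([0.0, 0.0, 3.0, -1.5, 0.0])
for sigma in (1.0, 0.1, 0.01):
    print(f"sigma={sigma}: {smoothed_l0(x, sigma):.4f}")
# As sigma shrinks the value approaches ||x||_0 = 2; SL0-style solvers
# minimize this surrogate while gradually decreasing sigma.
```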

2014
Wenxing Zhu, Zhengshan Dong

In this paper, two homotopy methods, which combine the advantage of the homotopy technique with the effectiveness of the iterative hard thresholding method, are presented for solving the compressed sensing problem. Under some mild assumptions, we prove that the limits of the sequences generated by the proposed homotopy methods are feasible solutions of the problem, and under some conditions the...

2010
Xiaobo Qu, Xue Cao, Di Guo, Changwei Hu, Zhong Chen

Undersampling the k-space is an efficient way to speed up magnetic resonance imaging (MRI). Recently emerged compressed sensing MRI shows promising results. However, most methods only enforce the sparsity of images in a single transform, e.g. total variation, wavelet, etc. In this paper, based on the principle of basis pursuit, we propose a new framework to combine sparsifying transforms in c...

2014
Mila Nikolova

We have an arbitrary real-valued M × N matrix A (e.g. a dictionary) with M < N and data d describing the sought-after object with the help of A. This work provides an in-depth analysis of the (local and global) minimizers of an objective function Fd combining a quadratic data-fidelity term and an l0 penalty applied to each entry of the sought-after solution, weighted by a regularization paramet...
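For intuition on such l0-penalized objectives Fd(x) = ||Ax − d||² + β·||x||_0: in the special case A = I (an illustrative simplification, not the paper's general M < N setting) the global minimizer is entrywise hard thresholding, since keeping entry i costs β while dropping it costs d_i². A sketch with a brute-force check over all supports:

```python
import numpy as np
from itertools import combinations

def F(x, A, d, beta):
    # Objective: quadratic data fidelity plus an l0 penalty per entry.
    return np.sum((A @ x - d) ** 2) + beta * np.count_nonzero(x)

def hard_threshold(d, beta):
    # For A = I: keep exactly the entries with d_i^2 > beta.
    x = d.copy()
    x[d**2 <= beta] = 0.0
    return x

d = np.array([2.0, 0.3, -1.2, 0.05])
beta, A = 0.5, np.eye(4)
x_star = hard_threshold(d, beta)

# Brute force: on each support S, the best fit is d restricted to S.
best = min(
    F(np.where(np.isin(np.arange(4), list(S)), d, 0.0), A, d, beta)
    for r in range(5) for S in combinations(range(4), r)
)
print(x_star, F(x_star, A, d, beta), best)
```

For general A with M < N the minimizers no longer decouple per entry, which is exactly what makes the analysis in the paper nontrivial.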

Journal: CoRR 2011
Mohammad Rostami, Zhou Wang

In this paper we aim to tackle the problem of reconstructing a high-resolution image from a single low-resolution input image, known as single image super-resolution. In the literature, sparse representation has been used to address this problem, where it is assumed that both low-resolution and high-resolution images share the same sparse representation over a pair of coupled jointly trained di...

Journal: Remote Sensing 2023

Infrared dim small target detection has received a lot of attention because it is a crucial component of infrared search and track (IRST) systems. Robust principal component analysis (RPCA) is a common framework, but it performs poorly on complex background edges and sparse clutters due to inappropriate approximation terms. A nonconvex constraint method based on the difference between the L1 and L2 norms (L1–L2) and total v...

2012
Abdallah Elguindy, Milan Korda

So far, we have seen streaming algorithms for two important variants of the Lp-norm estimation problem: L0-norm estimation (the distinct elements problem) and L2-norm estimation. We also noted that the L1-norm estimation problem (at least, when we do not allow element deletions) corresponds to just computing the length of the stream and thus can be trivially solved in O(log n) space. Therefore, the ...
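The distinct-elements (L0) problem admits small-space sketches. A minimal sketch of the K-minimum-values (KMV) estimator, one classic approach (not necessarily the algorithm these notes go on to analyze): hash each item to [0, 1), keep only the k smallest distinct hash values, and estimate the distinct count from how tightly they are packed near 0.

```python
import bisect
import hashlib

def h01(item):
    # Deterministic hash of an item to a float in [0, 1).
    digest = hashlib.sha256(repr(item).encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def kmv_estimate(stream, k=256):
    # K-minimum-values sketch: keep the k smallest distinct hash values;
    # if the k-th smallest is m, estimate the distinct count as (k-1)/m.
    mins = []                              # sorted, holds at most k values
    for item in stream:
        v = h01(item)
        i = bisect.bisect_left(mins, v)
        if i < len(mins) and mins[i] == v:
            continue                       # duplicate, already counted
        if len(mins) < k:
            mins.insert(i, v)
        elif v < mins[-1]:
            mins.insert(i, v)
            mins.pop()
    if len(mins) < k:
        return len(mins)                   # fewer than k distinct: exact
    return int((k - 1) / mins[-1])

stream = [i % 1000 for i in range(10_000)]  # 1000 distinct items
print(kmv_estimate(stream))                 # roughly 1000, in O(k) space
```

The sketch uses O(k) memory regardless of stream length, with relative error on the order of 1/√k.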

2016

In this paper, we consider sparse optimization problems with an L0-norm penalty or constraint. We prove that it is strongly NP-hard to find an approximate optimal solution within a certain error bound, unless P = NP. This provides a lower bound on the approximation error of any deterministic polynomial-time algorithm. Applying the complexity result to sparse linear regression reveals a gap between c...

Chart of the number of search results per year
