Search results for: l1 norm

Number of results: 74840

2013
Gideon Schechtman

If E = {e_i} and F = {f_i} are two 1-unconditional basic sequences in L1 with E r-concave and F p-convex, for some 1 ≤ r < p ≤ 2, then the space of matrices {a_{k,l}} with norm ∥{a_{k,l}}∥_{E(F)} = ∥ ∑_k ∥ ∑_l a_{k,l} f_l ∥ e_k ∥ embeds into L1. This generalizes a recent result of Prochno and Schütt.

1997
Russell M. Mersereau

An L1 norm minimization scheme is applied to the determination of the impulse response vector h of flaws detected in practical examples of ultrasonic nondestructive evaluation in CANDU nuclear reactors. For each problem, parametric programming is applied to find the optimum value of the damping parameter that will yield the best estimate of h according to a quantified performance factor. This p...
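An l1-norm fit like the one described above can be sketched with iteratively reweighted least squares; this is a minimal NumPy stand-in for the LP/parametric-programming scheme in the abstract (the damping-parameter search is not reproduced, and the function name is hypothetical):

```python
import numpy as np

def l1_fit_irls(H, y, n_iter=50, eps=1e-8):
    """Iteratively reweighted least squares for min_x ||H x - y||_1.

    A generic sketch, not the authors' method: each iteration solves a
    weighted least-squares problem whose weights downweight large residuals,
    so the fit converges toward the l1 (least-absolute-deviations) solution.
    """
    x = np.linalg.lstsq(H, y, rcond=None)[0]   # l2 fit as starting point
    for _ in range(n_iter):
        r = H @ x - y
        w = 1.0 / np.maximum(np.abs(r), eps)   # reweight by residual magnitude
        W = np.diag(w)
        x = np.linalg.solve(H.T @ W @ H, H.T @ W @ y)
    return x

# l1 fitting is robust to the outlier: the solution is the median, not the mean.
H = np.ones((3, 1))
y = np.array([1.0, 1.0, 10.0])
x_hat = l1_fit_irls(H, y)
```

The l2 fit of this toy system would give the mean (4.0); the l1 fit ignores the outlier and returns 1.0, which is why l1 criteria are favored for impulsive data such as ultrasonic flaw signals.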

Journal: J. UCS 2017
Chuandong Qin, Zhenxia Xue, Quanxi Feng, Xiaoyang Huang

Taking full advantage of the L1-norm support vector machine and the L2-norm support vector machine, a new improved double regularization support vector machine is proposed to analyze datasets with small samples, high dimensions and high correlations among some of the variables. A kind of smooth function is used to approximately overcome the non-differentiability of the L1-norm and the ste...

2012
Antigoni Panagiotopoulou

In multi-frame Super-Resolution (SR) image reconstruction, a single High-Resolution (HR) image is created from a sequence of Low-Resolution (LR) frames. This work considers stochastic regularized multi-frame SR image reconstruction from the data-fidelity point of view. In fact, a novel estimator named inv L1 norm is proposed for assuring fidelity to the measured data. This estimator presents t...

2009
A. Majumdar, R. K. Ward

This paper proposes a solution to the following non-convex optimization problem: min ∥x∥_p subject to ∥y − Ax∥_q ≤ ε. Such an optimization problem arises in a rapidly advancing branch of signal processing called 'Compressed Sensing' (CS). The problem of CS is to reconstruct a k-sparse vector x ∈ R^{n×1} from noisy measurements y = Ax + η, where A ∈ R^{m×n} (m < n) is the measurement matrix and η ∈ R^{m×1} is additive no...
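A common convex instance of this problem (p = 1, q = 2, in Lagrangian form) can be solved with iterative soft-thresholding (ISTA); the sketch below is a generic illustration of l1-based CS recovery, not the paper's proposed non-convex method:

```python
import numpy as np

def ista(A, y, lam=0.01, n_iter=2000):
    """ISTA for min_x 0.5*||y - A x||_2^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L          # gradient step on the smooth term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x

# Recover a 3-sparse vector from m < n random Gaussian measurements.
rng = np.random.default_rng(0)
n, m = 50, 25
x_true = np.zeros(n)
x_true[[3, 17, 40]] = [2.0, -1.5, 1.0]
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true
x_hat = ista(A, y)
```

With m = 25 measurements of an n = 50 vector with k = 3 nonzeros, the l1 relaxation recovers the support exactly in this noiseless setting.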

2009
M. Akçakaya, S. Nam, P. Hu, W. Manning, V. Tarokh, R. Nezafat

[Fig. 3: Comparison of BLS-GSM CS and l1 norm CS for imaging of the right coronary artery. Fig. 1: (a) Wavelet coefficients of a 2D slice of a coronary image; (b) random permutation of the same coefficients shown in (a).] Both data have an equivalent lp norm, which suggests that CS lp norm regularizers do not take into account the clustering and correlation of information in the transform domain. Compressed Sen...

2009
Rodolphe Jenatton, Jean-Yves Audibert, Francis Bach

We consider the empirical risk minimization problem for linear supervised learning, with regularization by structured sparsity-inducing norms. These are defined as sums of Euclidean norms on certain subsets of variables, extending the usual l1-norm and the group l1-norm by allowing the subsets to overlap. This leads to a specific set of allowed nonzero patterns for the solutions of such problem...
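A minimal sketch of such a structured sparsity-inducing norm, assuming groups are given as index sets (the reductions to the l1-norm and the group l1-norm noted in the abstract are shown in the comments):

```python
import numpy as np

def structured_norm(w, groups):
    """Sum of Euclidean norms over (possibly overlapping) index subsets.

    Reduces to the plain l1-norm when every group is a singleton, and to
    the group l1-norm when the groups partition the variables.
    """
    return float(sum(np.linalg.norm(w[sorted(g)]) for g in groups))

w = np.array([3.0, -4.0, 0.0, 2.0])

# Singleton groups -> plain l1-norm: |3| + |-4| + |0| + |2| = 9.
l1_val = structured_norm(w, [{0}, {1}, {2}, {3}])

# Overlapping groups sharing variable 1: ||(3,-4)|| + ||(-4,0,2)||.
overlap_val = structured_norm(w, [{0, 1}, {1, 2, 3}])
```

The set of groups determines which nonzero patterns are favored: a variable is driven to zero only when every group containing it is shrunk, which is what makes the overlapping case more expressive than the disjoint group lasso.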

2016
M. Baburaj, Sudhish N. George

The t-SVD-based Tensor Robust Principal Component Analysis (TRPCA) decomposes a low-rank multi-linear signal corrupted by gross errors into a low-multi-rank component and a sparse component by simultaneously minimizing the tensor nuclear norm and the l1 norm. But if the multi-rank of the signal is considerably large and/or a large amount of noise is present, the performance of TRPCA deteriorates. To overcome this proble...
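The two proximal steps that alternate in this kind of minimization can be sketched as follows: elementwise soft-thresholding for the l1-norm and singular value thresholding for the nuclear norm. The sketch is for plain matrices; the t-SVD variant applies the same singular-value thresholding to the frontal slices of the tensor in the Fourier domain:

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau*||.||_1, applied elementwise:
    shrink toward zero by tau, clipping small entries to exactly zero."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def svt(M, tau):
    """Singular value thresholding: the proximal operator of the
    nuclear norm soft-thresholds the singular values of M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(soft_threshold(s, tau)) @ Vt

s_out = soft_threshold(np.array([3.0, -1.0, 0.5]), 1.0)   # [2, 0, 0]
m_out = svt(np.eye(2), 0.5)                                # 0.5 * I
```

Soft-thresholding produces the sparse component; SVT produces the low-rank component. An ADMM-style TRPCA solver alternates these two steps against the data-consistency constraint.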

Journal: Journal of Machine Learning Research 2011
Rodolphe Jenatton, Jean-Yves Audibert, Francis R. Bach

We consider the empirical risk minimization problem for linear supervised learning, with regularization by structured sparsity-inducing norms. These are defined as sums of Euclidean norms on certain subsets of variables, extending the usual l1-norm and the group l1-norm by allowing the subsets to overlap. This leads to a specific set of allowed nonzero patterns for the solutions of such problem...

Journal: CoRR 2013
Suman Kalyan Bera, Anamitra R. Choudhury, Syamantak Das, Sambuddha Roy, Jayram S. Thatchachar

We describe a primal-dual framework for the design and analysis of online convex optimization algorithms for drifting regret. Existing literature shows (nearly) optimal drifting regret bounds only for the l2 and the l1-norms. Our work provides a connection between these algorithms and the Online Mirror Descent (OMD) updates; one key insight that results from our work is that in order for these ...
