Search results for: l1 norm

Number of results: 74840

B. Babazadeh E. Najafi M. Ahadzadeh Namin Y. Jafari Z. Ebrahimi

Data Envelopment Analysis (DEA) is a technique for measuring the efficiency of decision-making units. In all DEA models, each unit under assessment receives a numerical efficiency score that is less than or equal to one. Since many units may turn out to be efficient, various ranking methods are used to discriminate among them...
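The efficiency score described in this abstract can be computed, for the classical input-oriented CCR model in multiplier form, as a small linear program. A minimal sketch with hypothetical toy data (one input and one output per decision-making unit; not the authors' specific model):

```python
import numpy as np
from scipy.optimize import linprog

# Toy data (assumed for illustration): rows are DMUs.
X = np.array([[2.0], [4.0], [3.0]])  # inputs
Y = np.array([[1.0], [1.0], [2.0]])  # outputs

def ccr_efficiency(k):
    """Input-oriented CCR efficiency of DMU k (multiplier form).

    Variables are [u (output weights), v (input weights)], both >= 0
    (linprog's default bounds). Maximize u.y_k subject to v.x_k = 1
    and u.Y_j - v.X_j <= 0 for every DMU j.
    """
    n_out, n_in = Y.shape[1], X.shape[1]
    c = np.concatenate([-Y[k], np.zeros(n_in)])       # linprog minimizes
    A_ub = np.hstack([Y, -X])                         # u.Y_j - v.X_j <= 0
    b_ub = np.zeros(len(X))
    A_eq = [np.concatenate([np.zeros(n_out), X[k]])]  # v.x_k = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0])
    return -res.fun

effs = [ccr_efficiency(k) for k in range(len(X))]
print(effs)  # DMU 2 is efficient (score 1.0); the others score below 1
```

With a single input and output this reduces to normalizing each unit's output/input ratio by the best ratio, which makes the LP easy to check by hand.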

2004
E. Moreno A. R. Villena

Let δa be a nontrivial dilation. We show that every complete norm ‖ · ‖ on L1(RN ) that makes δa continuous from (L1(RN ), ‖ · ‖) into itself is equivalent to ‖ · ‖1. The dilation δa also determines the norm of both C0(RN ) and Lp(RN ) with 1 < p < ∞ in a weaker sense. Furthermore, we show that even all the dilations together do not determine the norm on L∞(RN ).
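As a sketch, taking the unnormalized dilation (the paper's exact normalization may differ), a change of variables shows why δa acts boundedly on L1(RN ):

```latex
(\delta_a f)(x) = f(ax), \qquad
\|\delta_a f\|_1 = \int_{\mathbb{R}^N} |f(ax)|\,dx = a^{-N}\,\|f\|_1 .
```

So δa is automatically continuous for ‖ · ‖1; the theorem's content is the converse direction, that any complete norm making δa continuous must already be equivalent to ‖ · ‖1.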

2010
Markus Flierl

This paper discusses an adaptive non-linear transform for image sequences that aims to generate an l1-norm preserving sparse approximation for efficient coding. Most sparse approximation problems employ a linear model where images are represented by a basis and a sparse set of coefficients. In this work, however, we consider image sequences where linear measurements are of limited use due to mot...

2009
A. Borsic A. Adler

Maximum A Posteriori (MAP) estimates in inverse problems are often based on quadratic formulations, corresponding to a least-squares fitting of the data and to the use of the L2 norm on the regularization term. While the implementation of this estimation is straightforward and usually based on the Gauss-Newton method, the resulting estimates are sensitive to outliers, and spatial distributions of t...
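The outlier sensitivity mentioned in this abstract can be seen in a one-parameter toy problem (an illustrative sketch, not the paper's inverse-problem formulation): the L2 data fit of a constant gives the mean, while the L1 fit gives the median.

```python
import numpy as np

# Fitting a constant c to data with one gross outlier.
data = np.array([1.0, 1.1, 0.9, 1.05, 10.0])  # last point is an outlier

l2_fit = data.mean()      # argmin_c sum_i (d_i - c)^2  -> the mean
l1_fit = np.median(data)  # argmin_c sum_i |d_i - c|    -> the median

print(l2_fit)  # 2.81 -- dragged toward the outlier
print(l1_fit)  # 1.05 -- stays with the bulk of the data
```

The same mechanism motivates replacing the L2 norm by the L1 norm in the data-fitting or regularization terms of MAP estimates.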

2018
Jinshan Qi Xun Liang Rui Xu

By utilizing kernel functions, support vector machines (SVMs) successfully solve linearly inseparable problems, which has greatly extended their range of applications. Using multiple kernels (MKs) to improve SVM classification accuracy has been a hot topic in the SVM research community for several years. However, most MK learning (MKL) methods employ an L1-norm constraint on the kerne...

Journal: :CoRR 2010
Jun Liu Jieping Ye

Sparse learning has recently received increasing attention in many areas including machine learning, statistics, and applied mathematics. The mixed-norm regularization based on the l1/lq norm with q > 1 is attractive in many applications of regression and classification in that it facilitates group sparsity in the model. The resulting optimization problem is, however, challenging to solve due t...
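The group-sparsity effect of the l1/lq norm can be made concrete for q = 2. A minimal sketch (group layout and function names are assumptions for illustration, not the paper's algorithm) of the mixed norm and its proximal operator, block soft-thresholding, which is the building block that makes these regularized problems tractable:

```python
import numpy as np

def mixed_l1_l2(x, groups):
    """l1/lq mixed norm with q = 2: sum over groups g of ||x[g]||_2."""
    return sum(np.linalg.norm(x[g]) for g in groups)

def prox_mixed_l1_l2(x, groups, t):
    """prox of t * (l1/l2 norm): shrink each group's l2 norm by t.

    Groups whose norm is below t are set exactly to zero, which is
    what produces sparsity at the group level.
    """
    out = np.zeros_like(x)
    for g in groups:
        nrm = np.linalg.norm(x[g])
        if nrm > t:
            out[g] = (1.0 - t / nrm) * x[g]
    return out

x = np.array([3.0, 4.0, 0.1, -0.1])
groups = [slice(0, 2), slice(2, 4)]
norm_val = mixed_l1_l2(x, groups)     # 5.0 + sqrt(0.02)
z = prox_mixed_l1_l2(x, groups, 1.0)  # first group shrunk, second zeroed
```

Here the weak second group is removed entirely while the strong first group is only shrunk, in contrast to a plain l1 penalty, which acts on coordinates individually.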

Journal: :Journal of Machine Learning Research 2016
Bo Peng Lan Wang Yichao Wu

Compared with the standard L2-norm support vector machine (SVM), the L1-norm SVM enjoys the nice property of simultaneously performing classification and feature selection. In this paper, we investigate the statistical performance of the L1-norm SVM in ultra-high dimension, where the number of features p grows at an exponential rate of the sample size n. Different from existing theory for SVM whic...
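The feature-selection property mentioned here comes from the L1 penalty's tendency to produce exact zeros. A minimal sketch of that mechanism (the orthonormal-design lasso, not the paper's SVM estimator), where the L1-penalized least-squares solution is simply soft-thresholding of the unpenalized coefficients:

```python
import numpy as np

def soft_threshold(beta_ls, lam):
    """Solve argmin_b 0.5*||b - beta_ls||^2 + lam*||b||_1 coordinatewise."""
    return np.sign(beta_ls) * np.maximum(np.abs(beta_ls) - lam, 0.0)

# Hypothetical least-squares coefficients: two strong features, three weak.
beta_ls = np.array([2.5, -0.3, 0.0, 1.2, 0.05])
beta_l1 = soft_threshold(beta_ls, lam=0.5)
print(beta_l1)  # weak coefficients are set exactly to zero
```

An L2 (ridge) penalty would instead shrink every coefficient without zeroing any, which is why the L1-norm SVM, unlike the L2-norm SVM, performs feature selection as a side effect of fitting.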

Journal: :Linear Algebra and its Applications 1987

Journal: :IEEE Journal of Selected Topics in Signal Processing 2021

Tucker decomposition is a standard method for processing multi-way (tensor) measurements and finds many applications in machine learning and data mining, among other fields. When tensors arrive in a streaming fashion or are too large to decompose jointly, incremental analysis is preferred. In addition, dynamic adaptation of the bases is desired when the nominal subspaces change. At the same time, it has been documented that...

Journal: :The Computer Journal 1973

[Chart: number of search results per year]