Search results for: pabon lasso analysis
Number of results: 2,827,094
Structured estimation methods, such as LASSO, have received considerable attention in recent years, and substantial progress has been made in extending such methods to general norms and non-Gaussian design matrices. In real-world problems, however, covariates are usually corrupted with noise, and there have been efforts to generalize structured estimation methods to the noisy-covariate setting. In th...
This paper concerns a class of group-lasso learning problems where the objective function is the sum of an empirical loss and the group-lasso penalty. For a class of loss functions satisfying a quadratic majorization condition, we derive a unified algorithm called groupwise-majorization-descent (GMD) for efficiently computing the solution paths of the corresponding group-lasso penalized learning ...
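The group-lasso penalty described above is typically handled through its proximal operator, blockwise soft-thresholding, which shrinks each coefficient group toward zero and removes whole groups at once. A minimal numpy sketch of that operator (illustrative only; this is not the paper's GMD algorithm, and the function name and group encoding are assumptions):

```python
import numpy as np

def group_soft_threshold(beta, groups, lam):
    """Proximal operator of the group-lasso penalty: each group of
    coefficients is scaled toward zero, and a group whose Euclidean
    norm is at most lam is set to zero entirely."""
    out = np.zeros_like(beta, dtype=float)
    for idx in groups:
        g = beta[idx]
        norm = np.linalg.norm(g)
        if norm > lam:
            out[idx] = (1.0 - lam / norm) * g  # shrink the whole group
    return out

beta = np.array([3.0, 4.0, 0.5, -0.5])
groups = [[0, 1], [2, 3]]
# group [3, 4] has norm 5 and is shrunk; group [0.5, -0.5] has norm
# below lam = 1 and is zeroed out as a block
shrunk = group_soft_threshold(beta, groups, lam=1.0)
```

This groupwise all-in-or-all-out behavior is what distinguishes the group lasso from the plain lasso, which zeroes coefficients individually.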
We exhibit an approximate equivalence between the Lasso estimator and the Dantzig selector. For both methods we derive parallel oracle inequalities for the prediction risk in the general nonparametric regression model, as well as bounds on the ℓp estimation loss for 1 ≤ p ≤ 2 in the linear model when the number of variables can be much larger than the sample size.
The ℓ1-penalized method, or the Lasso, has emerged as an important tool for the analysis of large data sets. Many important results have been obtained for the Lasso in linear regression which have led to a deeper understanding of high-dimensional statistical problems. In this article, we consider a class of weighted ℓ1-penalized estimators for convex loss functions of a general form, including ...
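Weighted ℓ1-penalized estimators of the kind discussed above reduce, coordinate-wise, to a weighted soft-thresholding operation: each coordinate is shrunk by an amount proportional to its own weight. A minimal numpy sketch (the function name and argument shapes are illustrative assumptions, not an API from the article):

```python
import numpy as np

def weighted_soft_threshold(z, weights, lam):
    """Elementwise proximal operator of the weighted l1 penalty
    lam * sum_j w_j * |b_j|: coordinate j is shrunk by lam * w_j
    and set to zero if its magnitude falls below that threshold."""
    t = lam * np.asarray(weights, dtype=float)
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

z = np.array([2.0, -0.5, 1.0])
# with lam = 1, weights [1, 1, 3] give per-coordinate thresholds
# [1, 1, 3]: only the first coordinate survives shrinkage
out = weighted_soft_threshold(z, [1.0, 1.0, 3.0], 1.0)
```

Uniform weights recover the ordinary lasso shrinkage; data-dependent weights are what allow the adaptive variants mentioned in the literature.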
In regression settings where explanatory variables have very low correlations and where there are relatively few effects, each of large magnitude, it is commonly believed that the Lasso shall be able to find the important variables with few errors, if any. In contrast, this paper shows that this is not the case even when the design variables are stochastically independent. In a regim...
In this paper we consider the problem of grouped variable selection in high-dimensional regression using ℓ1-ℓq regularization (1 ≤ q ≤ ∞), which can be viewed as a natural generalization of the ℓ1-ℓ2 regularization (the group Lasso). The key condition is that the dimensionality pn can increase much faster than the sample size n, i.e. pn ≫ n (in our case pn is the number of groups), but the numb...
We re-examine the original Group Lasso paper of Yuan and Lin (2007). The form of penalty in that paper seems to be designed for problems with uncorrelated features, but the statistical community has adopted it for general problems with correlated features. We show that for this general situation, a Group Lasso with a different choice of penalty matrix is generally more effective. We give insigh...
The least absolute deviation (LAD) regression is a useful method for robust regression, and the least absolute shrinkage and selection operator (lasso) is a popular choice for shrinkage estimation and variable selection. In this article we combine these two classical ideas together to produce LAD-lasso. Compared with the LAD regression, LAD-lasso can do parameter estimation and variable selecti...
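In the single-covariate case, the LAD-lasso objective described above, sum|y_i − x_i·b| + λ|b|, is piecewise linear and convex in b, so it attains its minimum at a breakpoint: either b = 0 or b = y_i/x_i for some i. A minimal exact sketch of that one-dimensional case (illustrative only; the article's estimator handles multiple covariates and is not computed this way):

```python
import numpy as np

def lad_lasso_1d(x, y, lam):
    """Exact one-covariate LAD-lasso: the objective
    sum |y_i - x_i * b| + lam * |b| is piecewise linear in b,
    so it suffices to evaluate it at the breakpoints b = 0
    and b = y_i / x_i and return the best one."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    candidates = np.concatenate(([0.0], y[x != 0] / x[x != 0]))
    objective = lambda b: np.abs(y - x * b).sum() + lam * abs(b)
    return min(candidates, key=objective)

# an outlier at y = 10 barely moves the LAD-lasso fit,
# illustrating the robustness the abstract refers to
x = [1.0, 1.0, 1.0, 1.0, 1.0]
y = [1.9, 2.0, 2.1, 2.0, 10.0]
b_hat = lad_lasso_1d(x, y, lam=0.5)
```

A least-squares fit of the same data would be dragged toward the outlier, while the absolute-deviation loss keeps the estimate near the bulk of the observations.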
PURPOSE To measure progression of the visual field (VF) mean deviation (MD) index in longitudinal 10-2 VFs more accurately, by adding information from 24-2 VFs using Lasso regression. METHODS A training dataset consisted of 138 eyes from 97 patients with glaucoma or ocular hypertension and a testing dataset consisted of 40 eyes from 34 patients with glaucoma or ocular hypertension. The Lasso ...