Search results for: variable selection

Number of results: 561,442

Journal: Annals of Statistics, 2009
Larry Wasserman Kathryn Roeder

This paper explores the following question: what kind of statistical guarantees can be given when doing variable selection in high dimensional models? In particular, we look at the error rates and power of some multi-stage regression methods. In the first stage we fit a set of candidate models. In the second stage we select one model by cross-validation. In the third stage we use hypothesis tes...
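The three stages above can be sketched with a toy numpy example. This is an illustrative simplification, not the authors' exact procedure: stage 1 builds nested candidate models by marginal correlation on one data split, stage 2 picks a model by validation error on a second split, and stage 3 runs t-tests on a third. The simulated data and the |t| > 2 cutoff are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 300, 20
X = rng.standard_normal((n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.standard_normal(n)  # only vars 0, 1 matter

i1, i2, i3 = np.split(rng.permutation(n), 3)  # one independent split per stage

# Stage 1: candidate models = top-k variables ranked by marginal correlation.
order = np.argsort(-np.abs(X[i1].T @ y[i1]))
candidates = [order[:k] for k in range(1, 11)]

def fit_ols(rows, cols):
    beta, *_ = np.linalg.lstsq(X[np.ix_(rows, cols)], y[rows], rcond=None)
    return beta

# Stage 2: select the candidate model with the smallest validation error.
def val_err(cols):
    r = y[i2] - X[np.ix_(i2, cols)] @ fit_ols(i1, cols)
    return r @ r

chosen = candidates[int(np.argmin([val_err(c) for c in candidates]))]

# Stage 3: refit on held-out data and keep variables with |t-statistic| > 2.
Xs = X[np.ix_(i3, chosen)]
beta = fit_ols(i3, chosen)
resid = y[i3] - Xs @ beta
sigma2 = resid @ resid / (len(i3) - len(chosen))
se = np.sqrt(sigma2 * np.diag(np.linalg.inv(Xs.T @ Xs)))
selected = sorted(int(j) for j in chosen[np.abs(beta / se) > 2.0])
print("selected variables:", selected)
```

Splitting the data keeps the final tests honest: the variables being tested in stage 3 were chosen without looking at the stage-3 observations.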

2008
Artin Armagan Russell L. Zaretzki

We introduce a new Bayesian approach to the variable selection problem which we term Bayesian Shrinkage Variable Selection (BSVS). This approach is inspired by the Relevance Vector Machine (RVM), which uses a Bayesian hierarchical linear setup to do variable selection and model estimation. RVM is typically applied in the context of kernel regression although it is also suitable in the standar...

2012
Li Ma

In this work we introduce a new model space prior for Bayesian variable selection in linear regression. This prior is designed based on a recursive constructive procedure that randomly generates models by including variables in a stagewise fashion. We provide a recipe for carrying out Bayesian variable selection and model averaging using this prior, and show that it possesses several desirable ...
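One plausible reading of such a recursive, stagewise construction can be sketched as a model sampler. This is a hypothetical illustration only: the continuation probability q and the uniform choice of the next variable are assumptions, not Ma's actual prior.

```python
import numpy as np

def sample_model(p, q=0.5, rng=None):
    """Draw a model by stagewise inclusion: at each stage, stop with
    probability 1 - q; otherwise add one not-yet-included variable chosen
    uniformly at random.  (Sketch only; q and the uniform choice are
    assumptions, not the paper's actual construction.)"""
    rng = np.random.default_rng() if rng is None else rng
    available = list(range(p))
    model = []
    while available and rng.random() < q:
        model.append(available.pop(int(rng.integers(len(available)))))
    return sorted(model)

rng = np.random.default_rng(42)
models = [sample_model(10, q=0.6, rng=rng) for _ in range(5)]
print(models)
```

A stagewise generator like this implicitly favors smaller models, since each additional inclusion requires another "continue" draw.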

2008
Brent A. Johnson Limin Peng

This note considers variable selection in the robust linear model via R-estimates. The proposed rank-based approach is a generalization of the penalized least squares estimators where we replace the least squares loss function with Jaeckel’s (1972) dispersion function. Our rank-based method is robust to outliers in the errors and has roots in traditional nonparametric statistics for simple loca...
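Jaeckel's dispersion with Wilcoxon scores is simple to write down; a minimal numpy sketch of the loss itself follows (the penalty term of the proposed estimator is omitted, and the toy data are assumptions for illustration).

```python
import numpy as np

def jaeckel_dispersion(beta, X, y):
    """Jaeckel's (1972) dispersion with Wilcoxon scores:
    D(beta) = sum_i a(R_i) * e_i, where e_i are the residuals, R_i their
    ranks, and a(i) = sqrt(12) * (i/(n+1) - 1/2).  The scores sum to zero,
    so D is invariant to a location shift in y, and D(beta) >= 0."""
    e = y - X @ beta
    n = len(e)
    ranks = np.argsort(np.argsort(e)) + 1           # ranks 1..n of residuals
    scores = np.sqrt(12.0) * (ranks / (n + 1.0) - 0.5)
    return float(scores @ e)

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 3))
beta_true = np.array([1.0, 0.0, -2.0])
y = X @ beta_true + rng.standard_normal(50)
d = jaeckel_dispersion(beta_true, X, y)
```

Because the loss depends on residuals only through their ranks (times the residuals themselves), a single grossly outlying error changes D far less than it changes a squared-error loss, which is the source of the robustness claimed in the abstract.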

Journal: CoRR, 2017
Chunxia Zhang Yilei Wu Mu Zhu

In the context of variable selection, ensemble learning has gained increasing interest due to its great potential to improve selection accuracy and to reduce false discovery rate. A novel ordering-based selective ensemble learning strategy is designed in this paper to obtain smaller but more accurate ensembles. In particular, a greedy sorting strategy is proposed to rearrange the order by which...

Journal: Annals of Statistics, 2014
Jianqing Fan Yingying Fan Emre Barut

Heavy-tailed high-dimensional data are commonly encountered in various scientific fields and pose great challenges to modern statistical analysis. A natural procedure to address this problem is to use penalized quantile regression with weighted L1-penalty, called weighted robust Lasso (WR-Lasso), in which weights are introduced to ameliorate the bias problem induced by the L1-penalty. In the ul...
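The weighted L1 penalty is the key ingredient and can be illustrated with a short coordinate-descent sketch. Note the simplification: squared loss stands in for the quantile loss of WR-Lasso, and the data, lam, and weights below are assumptions for illustration; the point is that a large weight w_j shrinks coefficient j harder.

```python
import numpy as np

def weighted_lasso(X, y, lam, w, n_iter=300):
    """Coordinate descent for 0.5/n * ||y - X b||^2 + lam * sum_j w_j |b_j|.
    (Squared loss used for simplicity in place of WR-Lasso's quantile loss.)"""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    r = y.astype(float).copy()               # running residual y - X b
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * b[j]              # remove j's current contribution
            rho = X[:, j] @ r / n
            # soft-threshold at the *weighted* level lam * w_j
            b[j] = np.sign(rho) * max(abs(rho) - lam * w[j], 0.0) / col_sq[j]
            r -= X[:, j] * b[j]
    return b

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 5))
y = 3.0 * X[:, 0] + rng.standard_normal(200)
w = np.array([0.1, 10.0, 10.0, 10.0, 10.0])  # heavy weights on the noise columns
b = weighted_lasso(X, y, lam=0.1, w=w)
```

With a small weight on the signal variable, its coefficient is barely shrunk, which is exactly the bias-reduction role the weights play in the abstract.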

2004
Angelika van der Linde

Model comparison is discussed from an information theoretic point of view. In particular the posterior predictive entropy is related to the target yielding DIC and modifications thereof. The adequacy of criteria for posterior predictive model comparison is also investigated depending on the comparison to be made. In particular variable selection as a special problem of model choice is formalize...
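The DIC mentioned here has a standard form, DIC = D̄ + p_D with deviance D(θ) = −2 log p(y | θ) and effective parameter count p_D = D̄ − D(θ̄). A minimal sketch from posterior samples (the toy normal-mean demo is an assumption for illustration):

```python
import numpy as np

def dic(log_lik, theta_samples):
    """DIC = D_bar + p_D, where D(theta) = -2 log p(y | theta),
    D_bar is the posterior mean deviance, and p_D = D_bar - D(theta_bar)
    is the effective number of parameters (scalar theta for simplicity)."""
    dev = np.array([-2.0 * log_lik(t) for t in theta_samples])
    d_bar = dev.mean()
    d_hat = -2.0 * log_lik(theta_samples.mean())
    return d_bar + (d_bar - d_hat)

# Toy demo: y_i ~ N(mu, 1); posterior samples of mu assumed given.
rng = np.random.default_rng(3)
y = rng.standard_normal(30) + 1.0
mu_samples = y.mean() + rng.standard_normal(2000) / np.sqrt(len(y))

def log_lik(mu):
    return -0.5 * np.sum((y - mu) ** 2)   # up to an additive constant

val = dic(log_lik, mu_samples)
```

For this one-parameter model p_D should come out near 1, which is a quick sanity check on the estimate.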

Journal: Technometrics, 2005
Berwin A. Turlach William N. Venables Stephen J. Wright

We propose a new method for selecting a common subset of explanatory variables where the aim is to explain or predict several response variables. The basic idea is a natural extension of the LASSO technique proposed by Tibshirani (1996) based on minimising the (joint) residual sum of squares while constraining the parameter estimates to lie within a suitable polyhedral region. This leads to a c...
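One way to realize a constraint of this polyhedral type is the penalty sum_j max_k |B_jk| on the p×K coefficient matrix, which encourages whole rows (variables) to be zero across all responses. The sketch below is not the authors' algorithm: it solves the penalized form by proximal gradient, using the Moreau identity prox of the L∞ norm = identity minus projection onto an L1 ball; the data and lam are assumptions for illustration.

```python
import numpy as np

def proj_l1_ball(v, z):
    """Euclidean projection of v onto the L1 ball of radius z (Duchi et al.)."""
    if np.abs(v).sum() <= z:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    k = np.arange(1, len(u) + 1)
    rho = np.nonzero(u - (css - z) / k > 0)[0][-1]
    theta = (css[rho] - z) / (rho + 1)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def multi_response_lasso(X, Y, lam, n_iter=500):
    """Proximal gradient for 0.5/n ||Y - X B||_F^2 + lam * sum_j max_k |B_jk|.
    Row-wise prox of the L-infinity norm via Moreau decomposition:
    prox_{t * ||.||_inf}(v) = v - projection of v onto the L1 ball of radius t."""
    n, p = X.shape
    B = np.zeros((p, Y.shape[1]))
    step = 1.0 / np.linalg.eigvalsh(X.T @ X / n).max()
    for _ in range(n_iter):
        V = B - step * (X.T @ (X @ B - Y) / n)
        B = np.array([v - proj_l1_ball(v, step * lam) for v in V])
    return B

rng = np.random.default_rng(4)
X = rng.standard_normal((200, 5))
B_true = np.zeros((5, 2))
B_true[0] = [2.0, -1.5]
B_true[1] = [0.5, 0.5]
Y = X @ B_true + rng.standard_normal((200, 2))
B = multi_response_lasso(X, Y, lam=0.5)
```

Rows whose entries are all small are zeroed out jointly, so a variable is either kept for every response or dropped from all of them, which is the "common subset" behavior described above.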

2012
Sudhir Shankar Raman

Traditionally, variable selection in the context of linear regression has been approached using optimization based approaches like the classical Lasso. Such methods provide a sparse point estimate with respect to regression coefficients but are unable to provide more information regarding the distribution of regression coefficients like expectation, variance estimates etc. In the recent years, ...

2015
Dean P. Foster Howard J. Karloff Justin Thaler

Variable selection for sparse linear regression is the problem of finding, given an m×p matrix B and a target vector y, a sparse vector x such that Bx approximately equals y. Assuming a standard complexity hypothesis, we show that no polynomial-time algorithm can find a k′-sparse x with ||Bx − y|| ≤ h(m, p), where k′ = k · 2^(log^(1−δ) p) and h(m, p) ≤ p^(C1) m^(1−C2), where δ > 0, C1 > 0, C2 > 0 are arbitrary. Th...

[Chart: number of search results per year]