Search results for: partial linear model preliminary test lasso

Number of results: 3367252

2007

Regularized regression methods for linear regression have been developed over the last few decades to overcome the flaws of ordinary least squares regression with regard to prediction accuracy. In this chapter, three of these methods (Ridge regression, the Lasso, and the Elastic Net) are incorporated into CATREG, an optimal scaling method for both linear and nonlinear transformation of variables in ...
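
As a hedged illustration of the three penalized estimators named above (not of CATREG itself, whose optimal-scaling transformations are not reproduced here), a minimal scikit-learn sketch on simulated data:

```python
# Sketch: Ridge, Lasso and Elastic Net on simulated data (illustration only;
# CATREG's optimal-scaling step is not reproduced).
import numpy as np
from sklearn.linear_model import Ridge, Lasso, ElasticNet
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
beta = np.concatenate([rng.normal(size=5), np.zeros(15)])  # sparse truth
y = X @ beta + rng.normal(scale=0.5, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, model in [("ridge", Ridge(alpha=1.0)),
                    ("lasso", Lasso(alpha=0.1)),
                    ("enet", ElasticNet(alpha=0.1, l1_ratio=0.5))]:
    model.fit(X_tr, y_tr)
    mse = mean_squared_error(y_te, model.predict(X_te))
    print(f"{name:6s} test MSE = {mse:.3f}  nonzero coefs = {np.sum(model.coef_ != 0)}")
```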

2015
Adam S. Brown Chirag J. Patel

Robust conversion between microarray platforms is needed to leverage the wide variety of microarray expression studies that have been conducted to date. Currently available conversion methods rely on manufacturer annotations, which are often incomplete, or on direct alignment of probes from different platforms, which often fails to yield acceptable genewise correlation. Here, we descr...

Thesis: 1392 (≈ 2013)

Nowadays, prediction is regarded as one of the most important branches of science in trade and economic affairs. The existence of influential variables has led various sectors of the economy, and business executives, to prefer mechanisms they can use in their decision-making. In recent years, several advances have posed various challenges for the science of forecasting. Economic managers in various fi...

2009
Jinzhu Jia Karl Rohe Bin Yu

Lasso is a popular method for variable selection in regression. Much theoretical understanding has been obtained recently on its model selection or sparsity recovery properties under sparse and homoscedastic linear regression models. Since these standard model assumptions are often not met in practice, it is important to understand how Lasso behaves under nonstandard model assumptions. In this ...
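
A minimal sketch of the sparsity-recovery setting the excerpt refers to, under the standard sparse and homoscedastic assumptions; all parameter values are illustrative:

```python
# Sketch: does the Lasso recover the true support under a sparse,
# homoscedastic linear model? (Illustrative parameters only.)
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n, p, s = 100, 50, 5
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:s] = 2.0                      # true support = first s coordinates
y = X @ beta + rng.normal(size=n)   # homoscedastic noise

fit = Lasso(alpha=0.2).fit(X, y)
selected = np.flatnonzero(fit.coef_)
print("true support :", list(range(s)))
print("lasso support:", selected.tolist())
```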

Journal: Journal of Machine Learning Research 2017
Jason D. Lee Qiang Liu Yuekai Sun Jonathan E. Taylor

We devise a communication-efficient approach to distributed sparse regression in the high-dimensional setting. The key idea is to average “debiased” or “desparsified” lasso estimators. We show the approach converges at the same rate as the lasso as long as the dataset is not split across too many machines, and consistently estimates the support under weaker conditions than the lasso. On the comp...
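
A rough sketch of the averaging idea, assuming p is smaller than the per-machine sample size so that the (pseudo-)inverse sample covariance can stand in for the node-wise-Lasso precision estimate used for debiasing; tuning values are illustrative:

```python
# Sketch: average "debiased" Lasso estimators computed on separate data splits.
# The debiasing step uses a crude precision-matrix estimate; the paper's
# approach uses a node-wise Lasso so that p may exceed the per-machine n.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
n_machines, n_per, p = 5, 200, 30
beta = np.zeros(p)
beta[:4] = [1.5, -1.0, 0.8, 0.5]

estimates = []
for m in range(n_machines):
    X = rng.normal(size=(n_per, p))
    y = X @ beta + rng.normal(size=n_per)
    b = Lasso(alpha=0.05).fit(X, y).coef_
    theta = np.linalg.pinv(X.T @ X / n_per)            # crude precision estimate
    b_debiased = b + theta @ X.T @ (y - X @ b) / n_per  # one-step debiasing
    estimates.append(b_debiased)

beta_avg = np.mean(estimates, axis=0)                    # one round of communication
print("error of averaged estimator:", np.linalg.norm(beta_avg - beta))
```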

2009
Nicole Krämer

I briefly report on some unexpected results that I obtained when optimizing the model parameters of the Lasso. In simulations with varying observations-to-variables ratio n/p, I typically observe a strong peak in the test error curve at the transition point n/p = 1. This peaking phenomenon is well-documented in scenarios that involve the inversion of the sample covariance matrix, and as I illus...
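
A small simulation sketch of the kind of experiment described: fix p, vary n across the n/p = 1 transition, and record the test error of a cross-validated Lasso (whether the peak actually appears depends on the simulation settings):

```python
# Sketch: look for a peak in Lasso test error around n/p = 1
# (illustrative simulation only).
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(3)
p, n_test = 50, 1000
beta = rng.normal(size=p)

for n in [25, 40, 50, 60, 100, 200]:
    X = rng.normal(size=(n, p))
    y = X @ beta + rng.normal(size=n)
    X_te = rng.normal(size=(n_test, p))
    y_te = X_te @ beta + rng.normal(size=n_test)
    fit = LassoCV(cv=5).fit(X, y)        # lambda chosen by cross-validation
    err = np.mean((y_te - fit.predict(X_te)) ** 2)
    print(f"n/p = {n / p:4.1f}   test MSE = {err:7.2f}")
```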

2010
Shuheng Zhou

Given n noisy samples with p dimensions, where n ≪ p, we show that the multi-step thresholding procedure based on the Lasso (we call it the Thresholded Lasso) can accurately estimate a sparse vector β ∈ ℝ^p in a linear model Y = Xβ + ε, where X is an n×p design matrix normalized so that each column has ℓ2 norm √n, and ε ∼ N(0, σ²Iₙ). We show that under the restricted eigenvalue (RE) condition (Bickel-Rito...
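
A minimal sketch in the spirit of such a multi-step procedure (fit the Lasso, threshold small coefficients, refit least squares on the retained support); the penalty level and threshold are illustrative, not the paper's prescriptions:

```python
# Sketch: Lasso fit, hard thresholding, then least-squares refit on the support.
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(4)
n, p = 100, 200
X = rng.normal(size=(n, p))
X = X / np.linalg.norm(X, axis=0) * np.sqrt(n)   # columns scaled to l2 norm sqrt(n)
beta = np.zeros(p)
beta[:5] = [3, -2, 1.5, 1, -1]
y = X @ beta + rng.normal(size=n)

b_lasso = Lasso(alpha=0.3, fit_intercept=False).fit(X, y).coef_
support = np.flatnonzero(np.abs(b_lasso) > 0.5)  # thresholding step
b_hat = np.zeros(p)
if support.size:
    refit = LinearRegression(fit_intercept=False).fit(X[:, support], y)
    b_hat[support] = refit.coef_
print("selected support:", support.tolist())
print("estimation error:", np.linalg.norm(b_hat - beta))
```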

Journal: Payavard Salamat 2016
Dargahi, Hossein; Sadeghifar, Jamil; Toloui Rakhshan, Shiva

Background & Aim: One of the most important and useful models for assessing hospital performance is the Pabon Lasso Model, a graphical model that determines the relative performance of hospitals using three indicators: 1. Bed Occupancy Rate (BOR); 2. Bed turnover (BTO); 3 Average Length of Stay (ALS). The aim of this research is to investigate the performance of the hospitals affiliated with Te...

2017
Jason Xu Eric C. Chi Kenneth Lange

Estimation in generalized linear models (GLM) is complicated by the presence of constraints. One can handle constraints by maximizing a penalized log-likelihood. Penalties such as the lasso are effective in high dimensions, but often lead to unwanted shrinkage. This paper explores instead penalizing the squared distance to constraint sets. Distance penalties are more flexible than algebraic and...
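
A hedged sketch of the distance-penalty idea for one GLM: logistic regression with the squared Euclidean distance to the nonnegative orthant added to the negative log-likelihood; the penalty weight, step size, and data are illustrative:

```python
# Sketch: penalize the squared distance to a constraint set in a GLM.
# Here the GLM is logistic regression and the constraint set C is the
# nonnegative orthant; rho, the step size, and the data are illustrative.
import numpy as np

rng = np.random.default_rng(5)
n, p = 300, 10
X = rng.normal(size=(n, p))
beta_true = np.abs(rng.normal(size=p))           # truth satisfies the constraint
y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta_true)))

def project(b):                                  # projection onto the nonnegative orthant
    return np.maximum(b, 0.0)

rho, lr = 10.0, 0.01
beta = np.zeros(p)
for _ in range(2000):
    mu = 1 / (1 + np.exp(-X @ beta))
    grad_nll = X.T @ (mu - y) / n                # gradient of the negative log-likelihood
    grad_pen = 2 * rho * (beta - project(beta))  # gradient of rho * dist(beta, C)^2
    beta -= lr * (grad_nll + grad_pen)

print("smallest fitted coefficient:", beta.min())
```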

2007
Sara van de Geer

We study high-dimensional generalized linear models and empirical risk minimization using the Lasso. An oracle inequality is presented, under a so-called compatibility condition. Our aim is threefold: to prove a result announced in van de Geer (2007), to provide a simple proof with simple constants, and to separate the stochastic problem from the deterministic one.
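
For reference, the ℓ1-penalized empirical risk minimizer typically studied in this setting can be written as follows (generic notation, not necessarily the paper's):

```latex
% Generic l1-penalized empirical risk minimizer for a GLM loss \ell.
\[
  \hat{\beta} \;=\; \arg\min_{\beta \in \mathbb{R}^p}
  \left\{ \frac{1}{n} \sum_{i=1}^{n} \ell\bigl(y_i, x_i^{\top}\beta\bigr)
          \;+\; \lambda \,\lVert \beta \rVert_1 \right\},
\]
% where \ell is, e.g., the negative log-likelihood and \lambda is the
% regularization level appearing in the oracle inequality.
```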
