Search results for: pabón lasso model

Number of results: 2106803

2012
Daniel V. Samarov, Matthew L. Clarke, Ji Youn Lee, David W. Allen, Maritoni Litorja, Jeeseong Hwang

We present a framework for hyperspectral image (HSI) analysis validation, specifically abundance fraction estimation based on HSI measurements of water-soluble dye mixtures printed on microarray chips. In our work we focus on the performance of two algorithms, the Least Absolute Shrinkage and Selection Operator (LASSO) and the Spatial LASSO (SPLASSO). The LASSO is a well-known statistical metho...
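Several of the abstracts on this page build on the LASSO. As a minimal illustrative sketch (not the SPLASSO method from the paper above), the LASSO can be solved by cyclic coordinate descent, where each coefficient update is a soft-thresholding step:

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator, the building block of LASSO solvers."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Minimize (1/2n)*||y - X b||^2 + lam*||b||_1 by cyclic coordinate descent."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n  # per-feature curvature x_j'x_j / n
    for _ in range(n_iter):
        for j in range(p):
            # partial residual that excludes feature j's current contribution
            r = y - X @ b + X[:, j] * b[j]
            rho = X[:, j] @ r / n
            b[j] = soft_threshold(rho, lam) / col_sq[j]
    return b
```

For an orthonormal design this reduces to soft-thresholding the least-squares estimate, which is why small coefficients are driven exactly to zero (the variable-selection behavior the abstracts rely on).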

Journal: :Journal of Machine Learning Research 2008
Francis R. Bach

We consider the least-squares regression problem with regularization by a block ℓ1-norm, i.e., a sum of Euclidean norms over spaces of dimension larger than one. This problem, referred to as the group Lasso, extends the usual regularization by the ℓ1-norm, where all spaces have dimension one and the method is commonly referred to as the Lasso. In this paper, we study the asymptotic model consistency of...

Journal: :Electronic Journal of Statistics 2011

Journal: :Journal of machine learning research : JMLR 2016
Matey Neykov, Jun S. Liu, Tianxi Cai

It is known that for a certain class of single index models (SIMs) [Formula: see text], support recovery is impossible when X ~ 𝒩(0, 𝕀 p×p ) and a model complexity adjusted sample size is below a critical threshold. Recently, optimal algorithms based on Sliced Inverse Regression (SIR) were suggested. These algorithms work provably under the assumption that the design X comes from an i.i.d. Gaus...

2016
Dennis Becker, Vincent Bremer, Burkhardt Funk, Joost Asselbergs, Heleen Riper, Jeroen Ruwaard

Smartphones are increasingly utilized in society and enable scientists to record a wide range of behavioral and environmental information. This information, referred to as Unobtrusive Ecological Momentary Assessment Data, might support prediction procedures regarding the mood level of users and simultaneously contribute to an enhancement of therapy strategies. In this paper, we analyze how the...

2011
Jinyu Li, Ming Yuan, Chin-Hui Lee

Inspired by the success of the least absolute shrinkage and selection operator (LASSO) in statistical learning, we propose an ℓ1-regularized maximum likelihood linear regression (MLLR) to estimate models with only a limited set of adaptation data to improve accuracy for automatic speech recognition, by regularizing the standard MLLR objective function with an ℓ1 constraint. The so-called LASSO MLLR is ...

Mahdi Roozbeh, Monireh Maanavi

Background and purpose: Machine learning is a class of modern and powerful tools that can solve many important problems humans face today. Support vector regression (SVR) is a way to build a regression model and a notable member of the machine learning family. SVR has been proven to be an effective tool in real-valued function estimation. As a supervised-learning appr...
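What distinguishes SVR from ordinary least squares is its ε-insensitive loss: residuals inside a tube of width ε cost nothing, and larger residuals are penalized linearly. A minimal illustrative sketch of that loss (not the authors' implementation):

```python
import numpy as np

def eps_insensitive_loss(y_true, y_pred, eps=0.1):
    """SVR's epsilon-insensitive loss: errors inside the eps-tube cost nothing,
    errors outside it are penalized linearly by their excess over eps."""
    return np.maximum(np.abs(np.asarray(y_true) - np.asarray(y_pred)) - eps, 0.0)
```

The flat region of this loss is what makes the fitted function depend only on the points outside the tube (the support vectors), giving SVR its sparse, robust character.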

Journal: :Genetic epidemiology 2010
Jing Wu, Bernie Devlin, Steven Ringquist, Massimo Trucco, Kathryn Roeder

Epistasis could be an important source of risk for disease. How interacting loci might be discovered is an open question for genome-wide association studies (GWAS). Most researchers limit their statistical analyses to testing individual pairwise interactions (i.e., marginal tests for association). A more effective means of identifying important predictors is to fit models that include many pred...

2011
Kotaro Kitagawa, Kumiko Tanaka-Ishii

Relational lasso is a method that incorporates feature relations within machine learning. Using automatically obtained, noisy relations among features, relational lasso learns an additional penalty parameter per feature, which is then incorporated as a regularizer within the target optimization function. Relational lasso has been tested on three different tasks: text categorization, ...

2007
Peng Zhao, Bin Yu, Saharon Rosset

Many statistical machine learning algorithms (in regression or classification) minimize either an empirical loss function, as in AdaBoost, or a penalized empirical loss, as in SVM. A single regularization tuning parameter controls the trade-off between fidelity to the data and generalizability, or equivalently between bias and variance. When this tuning parameter changes, a regularization “path” of...
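The "path" idea is easiest to see in the special case of an orthonormal design, where the LASSO solution at every tuning value is just a soft-thresholded least-squares estimate, so the whole path is available in closed form. A minimal illustrative sketch (this simplified case, not the general path algorithm the abstract studies):

```python
import numpy as np

def lasso_path_orthonormal(beta_ols, lambdas):
    """For an orthonormal design, the LASSO solution at penalty lam is the
    soft-thresholded OLS estimate, so the full regularization path is a
    closed-form, piecewise-linear function of lam. Returns one row per lam."""
    lambdas = np.asarray(lambdas, dtype=float)[:, None]  # column for broadcasting
    beta_ols = np.asarray(beta_ols, dtype=float)
    return np.sign(beta_ols) * np.maximum(np.abs(beta_ols) - lambdas, 0.0)
```

As the penalty grows, each coefficient shrinks linearly and then hits exactly zero, one variable dropping out at a time; path-following algorithms exploit exactly this piecewise-linear structure to compute all solutions at the cost of roughly one fit.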
