Search results for: ridge regression method

Number of results: 1,900,430

2005
Gavin C. Cawley, Nicola L. C. Talbot, Olivier Chapelle

In many regression tasks, in addition to an accurate estimate of the conditional mean of the target distribution, an indication of the predictive uncertainty is also required. There are two principal sources of this uncertainty: the noise process contaminating the data and the uncertainty in estimating the model parameters based on a limited sample of training data. Both of them can be summaris...
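The two sources of uncertainty mentioned in this abstract add up in the predictive variance of a Bayesian treatment of ridge (linear) regression. The Python sketch below is only a generic illustration of that decomposition, not the kernel formulation studied in the paper; the ridge parameter lam, the noise variance noise_var, and the toy data are assumptions made for the example.

import numpy as np

# Generic sketch (not the paper's model): the predictive variance of Bayesian
# ridge regression splits into a noise term and a parameter-uncertainty term.
rng = np.random.default_rng(0)
n, d, lam, noise_var = 50, 3, 1.0, 0.25

X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + rng.normal(scale=np.sqrt(noise_var), size=n)

A = X.T @ X + lam * np.eye(d)           # regularised Gram matrix
w_hat = np.linalg.solve(A, X.T @ y)     # ridge / MAP estimate of the weights
S = noise_var * np.linalg.inv(A)        # posterior covariance of the weights

x_star = rng.normal(size=d)             # a new test input
mean = x_star @ w_hat
var = noise_var + x_star @ S @ x_star   # noise + parameter uncertainty
print(f"predictive mean {mean:.3f}, predictive variance {var:.3f}")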

Journal: IACR Cryptology ePrint Archive 2017
Marc Joye

Ridge regression is an algorithm that takes as input a large number of data points and finds the best-fit linear curve through these points. It is a building block for many machine-learning operations. This report presents a system for privacy-preserving ridge regression. The system outputs the best-fit curve in the clear, but exposes no other information about the input data. This problem was ...
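For reference, the ridge regression building block described above has the closed form w = (X^T X + λI)^{-1} X^T y, which the short Python sketch below computes on made-up data; the privacy-preserving protocol itself is not reproduced here, and ridge_fit and its inputs are illustrative assumptions.

import numpy as np

# Plain (non-private) ridge regression: the basic building block only.
def ridge_fit(X, y, lam=1.0):
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
y = np.array([1.0, 2.0, 3.1, 3.9])
print(ridge_fit(X, y, lam=0.1))   # best-fit linear coefficients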

2006
Ling Wang, Liefeng Bo, Licheng Jiao

Based on the feature map principle, a Sparse Kernel Ridge Regression (SKRR) model is proposed. SKRR obtains sparseness through a backward-deletion feature selection procedure that recursively removes the feature with the smallest leave-one-out score until the stop criterion is satisfied. Besides good generalization performance, the most compelling property of SKRR is that it is rather sparse, and moreover, the...
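A rough sketch of the kind of backward-deletion, leave-one-out-driven selection described above is given below, applied to plain ridge regression on input features rather than to the kernel model of the paper, so it approximates the idea rather than reproducing the SKRR algorithm; loo_press, the stopping rule, and the toy data are assumptions made for illustration.

import numpy as np

# Closed-form leave-one-out (PRESS) score for a ridge fit.
def loo_press(X, y, lam=1.0):
    d = X.shape[1]
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(d), X.T)   # hat matrix
    resid = y - H @ y
    return np.sum((resid / (1.0 - np.diag(H))) ** 2)

# Greedy backward deletion: drop the feature whose removal gives the best
# LOO score, and stop once every candidate deletion makes the score worse.
def backward_delete(X, y, lam=1.0, min_features=1):
    keep = list(range(X.shape[1]))
    while len(keep) > min_features:
        current = loo_press(X[:, keep], y, lam)
        scores = [loo_press(X[:, [j for j in keep if j != f]], y, lam)
                  for f in keep]
        best = int(np.argmin(scores))
        if scores[best] > current:
            break
        del keep[best]
    return keep

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 6))
y = X[:, 0] - 2.0 * X[:, 2] + 0.1 * rng.normal(size=60)
print(backward_delete(X, y, lam=0.5))   # typically retains features 0 and 2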

Journal: Technometrics 2000
Arthur E. Hoerl, Robert W. Kennard


2010
Marcin Budka, Bogdan Gabrys

Traditional methods of assessing the chemical toxicity of various compounds require tests on animals, which raises ethical concerns and is expensive. Current legislation may lead to a further increase in the demand for laboratory animals in the coming years. As a result, automatically generated predictions using Quantitative Structure–Activity Relationship (QSAR) modelling approaches appear as an attract...

2014
Vedide Rezan Uslu, Erol Egrioglu, Eren Bas

A multiple regression model rests on standard assumptions. If the data do not satisfy these assumptions, problems arise that have serious undesired effects on the parameter estimates. One of these problems is multicollinearity, which means that there is a nearly perfect linear relationship between the explanatory variables used in a multiple regression model. This undesirable prob...
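A small numeric illustration of the multicollinearity problem is given below (it is not taken from the paper, and the paper's proposed remedy is not shown): two nearly collinear predictors make the ordinary least-squares coefficients unstable, while a modest ridge penalty pulls them back toward the values used to generate the data.

import numpy as np

# Two predictors with a nearly perfect linear relationship.
rng = np.random.default_rng(2)
n = 100
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)
X = np.column_stack([x1, x2])
y = x1 + x2 + 0.1 * rng.normal(size=n)          # true coefficients are (1, 1)

ols = np.linalg.solve(X.T @ X, X.T @ y)                        # unstable
ridge = np.linalg.solve(X.T @ X + 1.0 * np.eye(2), X.T @ y)    # close to (1, 1)
print("condition number of X^T X:", np.linalg.cond(X.T @ X))
print("OLS coefficients:  ", ols)
print("ridge coefficients:", ridge)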

Journal: Technometrics 2011
Ricardo A. Maronna

Ridge regression, being based on the minimization of a quadratic loss function, is sensitive to outliers. Current proposals for robust ridge regression estimators are sensitive to “bad leverage observations”, cannot be employed when the number of predictors p is larger than the number of observations n, and have low robustness when the ratio p/n is large. In this paper a ridge regression esti...
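As a loose illustration of the motivation only (this is not the estimator proposed in the paper), one way to blunt the effect of outliers is to replace the quadratic loss with a Huber loss and solve the penalised problem by iteratively reweighted least squares; huber_ridge, the tuning constant c, the MAD-based scale estimate, and the toy data are all assumptions made for this sketch.

import numpy as np

# Huber-loss ridge regression via iteratively reweighted least squares (IRLS):
# observations with large residuals receive weights below one.
def huber_ridge(X, y, lam=1.0, c=1.345, n_iter=50):
    d = X.shape[1]
    w = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)    # ordinary ridge start
    for _ in range(n_iter):
        r = y - X @ w
        scale = np.median(np.abs(r)) / 0.6745 + 1e-12          # robust (MAD) scale
        u = np.clip(c * scale / np.maximum(np.abs(r), 1e-12), None, 1.0)
        Xw = X * u[:, None]                                    # row-weighted design
        w = np.linalg.solve(Xw.T @ X + lam * np.eye(d), Xw.T @ y)
    return w

rng = np.random.default_rng(3)
X = rng.normal(size=(80, 4))
y = X @ np.array([1.0, -1.0, 2.0, 0.5]) + 0.1 * rng.normal(size=80)
y[:5] += 20.0                                   # a few gross outliers
print(huber_ridge(X, y, lam=0.5))               # roughly (1, -1, 2, 0.5)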

2012
Haimao Zhan, Shizhong Xu

It is widely believed that both common and rare variants contribute to the risks of common diseases or complex traits and the cumulative effects of multiple rare variants can explain a significant proportion of trait variances. Advances in high-throughput DNA sequencing technologies allow us to genotype rare causal variants and investigate the effects of such rare variants on complex traits. We...

2015
Xi Peng, Zhang Yi, Huajin Tang

In this material, we provide theoretical analyses to show that the trivial coefficients always correspond to the codes over errors. Lemmas 1–3 show that our errors-removing strategy will perform well when the ℓp-norm is enforced over the representation, where p ∈ {1, 2, ∞}. Let x ≠ 0 be a data point in the union of subspaces S_D that is spanned by D = [D_x D_-x], where D_x and D_-x consist of th...

Journal: Journal of Machine Learning Research 2010
Zhihua Zhang, Guang Dai, Congfu Xu, Michael I. Jordan

Fisher linear discriminant analysis (FDA) and its kernel extension—kernel discriminant analysis (KDA)—are well known methods that consider dimensionality reduction and classification jointly. While widely deployed in practical problems, there are still unresolved issues surrounding their efficient implementation and their relationship with least mean squares procedures. In this paper we address...

Chart of the number of search results per year
