Search results for: dichotomous coordinate descent dcd

Number of results: 77679

2012
Vamsi K. Potluru

The Nonnegative Least Squares (NNLS) formulation arises in many important regression problems. We present a novel coordinate descent method which differs from previous approaches in that we do not explicitly maintain complete gradient information. Empirical evidence shows that our approach outperforms a state-of-the-art NNLS solver in computation time for calculating radiation dosage for cancer...
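
For readers unfamiliar with the technique, a minimal sketch of coordinate descent for NNLS follows. It is the generic textbook variant that keeps a running residual, not the paper's variant that avoids maintaining complete gradient information; the function name nnls_cd and all parameters are illustrative.

import numpy as np

def nnls_cd(A, b, n_iter=200):
    """Cyclic coordinate descent for min ||Ax - b||^2 subject to x >= 0.

    Generic textbook sketch; each step exactly minimizes over one
    coordinate and projects onto the nonnegative orthant.
    """
    m, n = A.shape
    x = np.zeros(n)
    r = A @ x - b                      # residual, kept up to date
    col_sq = (A ** 2).sum(axis=0)      # per-column curvature constants
    for _ in range(n_iter):
        for j in range(n):
            if col_sq[j] == 0.0:
                continue
            g = A[:, j] @ r            # j-th partial gradient (up to a factor 2)
            x_new = max(0.0, x[j] - g / col_sq[j])
            if x_new != x[j]:
                r += (x_new - x[j]) * A[:, j]   # incremental residual update
                x[j] = x_new
    return x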

Journal: CoRR 2017
Wen-Jun Zeng Hing-Cheung So

Phase retrieval aims at recovering a complex-valued signal from magnitude-only measurements, and has attracted much attention owing to its numerous applications across disciplines. However, phase recovery involves solving a system of quadratic equations, making it a challenging nonconvex optimization problem. To tackle phase retrieval in an effective and efficient manner, we apply coo...
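
A minimal real-valued sketch of the idea: along a single coordinate, the least-squares phase retrieval objective sum_i ((a_i @ x)^2 - y_i)^2 is a quartic in the step size, so the exact coordinate minimizer can be found by solving the cubic derivative. The paper treats complex-valued signals; this simplified pr_cd is an illustration under that real-valued assumption, not the authors' algorithm.

import numpy as np

def pr_cd(A, y, n_iter=50, seed=0):
    """Exact coordinate descent on f(x) = sum_i ((a_i @ x)^2 - y_i)^2, real case."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = 0.1 * rng.standard_normal(n)
    c = A @ x                           # cached inner products a_i @ x
    for _ in range(n_iter):
        for j in range(n):
            d = A[:, j]
            # coefficients of the cubic f'(t)/4, highest power first
            p3 = np.sum(d**4)
            p2 = 3.0 * np.sum(c * d**3)
            p1 = 3.0 * np.sum(c**2 * d**2) - np.sum(y * d**2)
            p0 = np.sum(c**3 * d) - np.sum(y * c * d)
            roots = np.roots([p3, p2, p1, p0])
            ts = roots[np.abs(roots.imag) < 1e-8].real
            if ts.size == 0:
                continue
            # choose the real critical point with the smallest objective
            vals = [np.sum(((c + t * d)**2 - y)**2) for t in ts]
            t = ts[int(np.argmin(vals))]
            x[j] += t
            c += t * d                  # incremental update of cached products
    return x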

Journal: Statistics and Computing 2014
Hao Wang

Bien and Tibshirani (2011) have proposed a covariance graphical lasso method that applies a lasso penalty to the elements of the covariance matrix. This method is useful because it not only produces sparse and positive definite estimates of the covariance matrix but also discovers marginal independence structures by generating exact zeros in the estimated covariance matrix. However, ...

2008
T. T. Wu K. Lange

Imposition of a lasso penalty shrinks parameter estimates toward zero and performs continuous model selection. Lasso penalized regression is capable of handling linear regression problems where the number of predictors far exceeds the number of cases. This paper tests two exceptionally fast algorithms for estimating regression coefficients with a lasso penalty. The previously known ℓ2 algorithm...
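
The workhorse update behind lasso coordinate descent is a one-dimensional soft-thresholding step. Below is a standard cyclic variant for reference; it is the textbook algorithm, not necessarily either of the two algorithms the paper benchmarks, and all names are illustrative.

import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * max(abs(z) - t, 0.0)

def lasso_cd(A, b, lam, n_iter=200):
    """Cyclic coordinate descent for min 0.5*||Ax - b||^2 + lam*||x||_1."""
    m, n = A.shape
    x = np.zeros(n)
    r = b - A @ x                       # residual, kept up to date
    col_sq = (A ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(n):
            if col_sq[j] == 0.0:
                continue
            rho = A[:, j] @ r + col_sq[j] * x[j]   # partial fit excluding x_j
            x_new = soft_threshold(rho, lam) / col_sq[j]
            if x_new != x[j]:
                r -= (x_new - x[j]) * A[:, j]
                x[j] = x_new
    return x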

2017
Tao Sun Robert Hannah Wotao Yin

Asynchronous-parallel algorithms have the potential to vastly speed up computation by eliminating costly synchronization. However, our understanding of these algorithms is limited because existing convergence results for asynchronous (block) coordinate descent algorithms rest on somewhat unrealistic assumptions. In particular, the age of the shared optimization variables being used to update a bl...
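
The delayed-information regime can be illustrated with lock-free threads that read and write a shared iterate without synchronization, so each update may be computed from stale ("aged") coordinates. A sketch, assuming A has no zero columns; note that Python's GIL means the staleness here comes from interleaving rather than true parallelism.

import threading
import numpy as np

def async_cd(A, b, n_threads=4, iters_per_thread=5000, seed=0):
    """Lock-free asynchronous coordinate descent on 0.5*||Ax - b||^2."""
    m, n = A.shape
    x = np.zeros(n)                       # shared iterate, no lock
    col_sq = (A ** 2).sum(axis=0)

    def worker(tid):
        rng = np.random.default_rng(seed + tid)
        for _ in range(iters_per_thread):
            j = rng.integers(n)
            # gradient computed from a possibly stale snapshot of x
            g = A[:, j] @ (A @ x - b)
            x[j] -= g / col_sq[j]         # in-place, unsynchronized write

    threads = [threading.Thread(target=worker, args=(t,)) for t in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return x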

2013
Qi Deng Jeffrey Ho Anand Rangarajan

Stochastic coordinate descent, due to its practicality and efficiency, is increasingly popular in the machine learning and signal processing communities, as it has proven successful in several large-scale optimization problems such as ℓ1-regularized regression and support vector machines. In this paper, we consider a composite problem where the nonsmoothness has a general structure that...
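
The composite setting is commonly handled by a proximal coordinate step: move along the partial gradient of the smooth part, then apply the proximal operator of the nonsmooth part. A generic randomized sketch, with the lasso as a worked instance; all names and signatures here are illustrative, not the paper's.

import numpy as np

def prox_cd(grad_j, prox_j, L, x0, n_iter=10000, seed=0):
    """Randomized proximal coordinate descent for min f(x) + sum_j g_j(x_j).

    grad_j(x, j): j-th partial derivative of the smooth part f
    prox_j(v, step, j): proximal operator of g_j with the given step
    L: per-coordinate Lipschitz constants of grad f
    """
    rng = np.random.default_rng(seed)
    x = x0.copy()
    n = x.size
    for _ in range(n_iter):
        j = rng.integers(n)
        v = x[j] - grad_j(x, j) / L[j]    # forward (gradient) step
        x[j] = prox_j(v, 1.0 / L[j], j)   # backward (proximal) step
    return x

# Worked instance: lasso, f = 0.5*||Ax - b||^2, g_j = lam*|x_j|
A = np.random.default_rng(1).standard_normal((50, 20))
b = np.random.default_rng(2).standard_normal(50)
lam = 0.1
L = (A ** 2).sum(axis=0)
grad = lambda x, j: A[:, j] @ (A @ x - b)
prox = lambda v, s, j: np.sign(v) * max(abs(v) - lam * s, 0.0)
x_hat = prox_cd(grad, prox, L, np.zeros(20))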

2012
Chad Scherrer Ambuj Tewari Mahantesh Halappanavar David J. Haglin

Large-scale ℓ1-regularized loss minimization problems arise in high-dimensional applications such as compressed sensing and high-dimensional supervised learning, including classification and regression problems. High-performance algorithms and implementations are critical to efficiently solving these problems. Building upon previous work on coordinate descent algorithms for ℓ1-regularized probl...
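
One simple way to parallelize such updates is Shotgun-style bulk-synchronous rounds: several coordinates are updated from the same snapshot of the iterate, as if P processors all read before any of them writes. The sketch below illustrates that pattern for the lasso; it is not the paper's algorithm, it assumes nonzero columns, and convergence degrades if P is large relative to the correlation between columns.

import numpy as np

def parallel_cd_l1(A, b, lam, P=4, n_rounds=500, seed=0):
    """Bulk-synchronous parallel coordinate updates for the lasso."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    col_sq = (A ** 2).sum(axis=0)
    for _ in range(n_rounds):
        r = b - A @ x                              # shared stale residual
        J = rng.choice(n, size=min(P, n), replace=False)
        for j in J:                                # conceptually parallel
            rho = A[:, j] @ r + col_sq[j] * x[j]
            x[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return x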

2017
Dmytro Perekrestenko Volkan Cevher Martin Jaggi

Coordinate descent methods employ random partial updates of decision variables in order to solve huge-scale convex optimization problems. In this work, we introduce new adaptive rules for the random selection of their updates. By adaptive, we mean that our selection rules are based on the dual residual or the primal-dual gap estimates and can change at each iteration. We theoretically character...
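
As a concrete instance of adaptivity, the classic Gauss-Southwell rule picks, at every iteration, the coordinate with the largest partial derivative, so the selection changes with the current residual. The sketch below uses this greedy rule purely as a stand-in for the paper's dual-residual and primal-dual-gap-based rules.

import numpy as np

def greedy_cd(A, b, n_iter=2000):
    """Gauss-Southwell coordinate descent on 0.5*||Ax - b||^2: update the
    coordinate with the largest partial derivative at each step."""
    m, n = A.shape
    x = np.zeros(n)
    col_sq = (A ** 2).sum(axis=0)
    for _ in range(n_iter):
        g = A.T @ (A @ x - b)            # full gradient, residual-based score
        j = int(np.argmax(np.abs(g)))    # adaptive choice, changes every iteration
        x[j] -= g[j] / col_sq[j]         # exact minimization along coordinate j
    return x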

2011
Francesco Dinuzzo Cheng Soon Ong Peter V. Gehler Gianluigi Pillonetto

We propose a method to learn simultaneously a vector-valued function and a kernel between its components. The obtained kernel can be used both to improve learning performance and to reveal structures in the output space which may be important in their own right. Our method is based on the solution of a suitable regularization problem over a reproducing kernel Hilbert space of vector-valued func...

Journal: SIAM Journal on Optimization 2015
Olivier Fercoq Peter Richtárik

We propose a new stochastic coordinate descent method for minimizing the sum of convex functions each of which depends on a small number of coordinates only. Our method (APPROX) is simultaneously Accelerated, Parallel and PROXimal; this is the first time such a method is proposed. In the special case when the number of processors is equal to the number of coordinates, the method converges at th...
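
A serial sketch (one processor, smooth objective, identity prox) of the accelerated coordinate update pattern: maintain two sequences x and z, form y as their convex combination, take a coordinate step on z, and shrink theta via the usual recursion. This strips out the parallel and proximal machinery that defines APPROX, so treat it as an illustration of the acceleration scheme rather than the paper's method.

import numpy as np

def accel_cd(A, b, n_iter=5000, seed=0):
    """Accelerated randomized coordinate descent on 0.5*||Ax - b||^2,
    serial special case of an APPROX-style scheme."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    L = (A ** 2).sum(axis=0)             # coordinate Lipschitz constants
    x = np.zeros(n)
    z = x.copy()
    theta = 1.0 / n
    for _ in range(n_iter):
        y = (1.0 - theta) * x + theta * z
        i = rng.integers(n)
        g = A[:, i] @ (A @ y - b)        # i-th partial gradient at y
        t = -g / (n * theta * L[i])
        z[i] += t                        # z-update on one coordinate
        x = y
        x[i] += n * theta * t            # x_{k+1} = y_k + n*theta_k*t*e_i
        # theta recursion: theta_{k+1}^2 = theta_k^2 * (1 - theta_{k+1})
        theta = 0.5 * (np.sqrt(theta**4 + 4.0 * theta**2) - theta**2)
    return x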
