Linear convergence of SDCA in statistical estimation

Authors

  • Chao Qu
  • Huan Xu
Abstract

In this paper, we consider stochastic dual coordinate ascent (SDCA) without the strong convexity assumption, or even the convexity assumption. We show that SDCA converges linearly under a mild condition termed restricted strong convexity. This covers a wide array of popular statistical models, including Lasso, group Lasso, logistic regression with ℓ1 regularization, corrected Lasso, and linear regression with the SCAD regularizer. This significantly improves previous convergence results on SDCA for problems that are not strongly convex. As a by-product, we derive a dual-free form of SDCA that can handle a general regularization term, which is of interest in its own right.
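For readers unfamiliar with the condition, one standard form of restricted strong convexity (RSC), in the spirit of Agarwal, Negahban, and Wainwright (2012), is sketched below in LaTeX; the paper's precise condition may differ in its constants and choice of norm.

    % Restricted strong convexity (a sketch of one standard form; the
    % symbols \gamma and \tau are generic, not taken from the paper):
    \[
      f(y) \;\ge\; f(x) + \langle \nabla f(x),\, y - x \rangle
           + \frac{\gamma}{2}\,\lVert y - x \rVert_2^2
           - \tau\,\lVert y - x \rVert_1^2
           \qquad \text{for all } x,\, y,
    \]
    % where \gamma > 0 is a curvature constant and \tau \ge 0 is a
    % tolerance, typically of order (\log d)/n in high-dimensional
    % models; \tau = 0 recovers ordinary strong convexity.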


Similar papers

SDCA without Duality

Stochastic Dual Coordinate Ascent is a popular method for solving regularized loss minimization for the case of convex losses. In this paper we show how a variant of SDCA can be applied to non-convex losses. We prove a linear convergence rate even if individual loss functions are non-convex, as long as the expected loss is convex.
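A minimal NumPy sketch of a dual-free SDCA step for an ℓ2-regularized finite sum is given below, assuming the objective min_w (1/n) Σ_i φ_i(w) + (λ/2)‖w‖²; the function dual_free_sdca, the step size, and the least-squares losses in the usage lines are illustrative assumptions, not the authors' code.

    import numpy as np

    def dual_free_sdca(grad_phi, n, d, lam, eta, epochs, seed=0):
        """Dual-free SDCA sketch for min_w (1/n)*sum_i phi_i(w) + (lam/2)*||w||^2.

        grad_phi(i, w) must return the gradient of phi_i at w.
        """
        rng = np.random.default_rng(seed)
        alpha = np.zeros((n, d))            # pseudo-dual variables
        w = alpha.sum(axis=0) / (lam * n)   # invariant: w = sum(alpha)/(lam*n)
        for _ in range(epochs * n):
            i = rng.integers(n)
            v = grad_phi(i, w) + alpha[i]   # residual of the i-th pseudo-dual
            alpha[i] -= eta * lam * n * v   # pseudo-dual step
            w -= eta * v                    # primal step preserving the invariant
        return w

    # Illustrative usage with least-squares losses phi_i(w) = 0.5*(x_i @ w - y_i)**2.
    rng = np.random.default_rng(1)
    n, d, lam = 200, 10, 0.1
    X = rng.normal(size=(n, d))
    y = X @ rng.normal(size=d)
    grad = lambda i, w: (X[i] @ w - y[i]) * X[i]
    w_hat = dual_free_sdca(grad, n, d, lam, eta=0.01, epochs=30)

At each step only the sampled α_i and w change, and the update preserves the invariant w = (1/(λn)) Σ_i α_i; at a fixed point α_i = −∇φ_i(w), which recovers the first-order optimality condition of the regularized objective.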


SDCA without Duality, Regularization, and Individual Convexity

Stochastic Dual Coordinate Ascent is a popular method for solving regularized loss minimization for the case of convex losses. We describe variants of SDCA that do not require explicit regularization and do not rely on duality. We prove linear convergence rates even if individual loss functions are non-convex, as long as the expected loss is strongly convex.


Linear Wavelet-Based Estimation for Derivative of a Density under Random Censorship

In this paper we consider estimation of the derivative of a density based on wavelet methods using randomly right-censored data. We extend the results regarding the asymptotic convergence rates due to Prakasa Rao (1996) and Chaubey et al. (2008) under the random censorship model. Our treatment is facilitated by results of Stute (1995) and Li (2003) that enable us to demonstrate that the same con...


Linear Convergence of the Randomized Feasible Descent Method Under the Weak Strong Convexity Assumption

In this paper we generalize the framework of the feasible descent method (FDM) to a randomized feasible descent method (R-FDM) and a coordinate-wise random feasible descent method (RC-FDM). We show that the famous SDCA algorithm for optimizing the SVM dual problem, or the stochastic coordinate descent method for the LASSO problem, fits into the framework of RC-FDM. We prove linear convergence for both R-FDM ...


Adaptive Stochastic Dual Coordinate Ascent for Conditional Random Fields

This work investigates training Conditional Random Fields (CRF) by Stochastic Dual Coordinate Ascent (SDCA). SDCA enjoys a linear convergence rate and strong empirical performance for independent classification problems. However, it has never been used to train CRFs. Yet it benefits from an exact line search with a single marginalization oracle call, unlike previous approaches. In this paper, ...



Journal:
  • CoRR

Volume: abs/1701.07808
Issue: -
Pages: -
Publication date: 2017