An inexact proximal augmented Lagrangian framework with arbitrary linearly convergent inner solver for composite convex optimization
Authors
Abstract
We propose an inexact proximal augmented Lagrangian framework with an explicit inner problem termination rule for composite convex optimization problems. We consider arbitrary linearly convergent inner solvers, including in particular stochastic algorithms, making the resulting framework more scalable in the face of ever-increasing problem dimension. Each subproblem is solved inexactly with an explicit and self-adaptive stopping criterion, without requiring the target accuracy to be set a priori. When the primal and dual domains are bounded, our method achieves \(O(1/\sqrt{\epsilon })\) and \(O(1/{\epsilon })\) complexity bounds in terms of the number of inner iterations, respectively for the strongly convex and the non-strongly convex case. Without the boundedness assumption, only logarithmic terms need to be added, and the above two bounds increase respectively to \({\tilde{O}}(1/\sqrt{\epsilon })\) and \({\tilde{O}}(1/{\epsilon })\), which hold both for obtaining an \(\epsilon \)-optimal and an \(\epsilon \)-KKT solution. Within the general framework that we propose, we also obtain \({\tilde{O}}(1/{\epsilon })\) and \({\tilde{O}}(1/{\epsilon ^2})\) complexity bounds under a relative smoothness assumption on the differentiable component of the objective function. We show through theoretical analysis as well as numerical experiments the computational speedup possibly achieved by the use of randomized inner solvers for large-scale problems.
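To make the scheme concrete, here is a minimal Python sketch of an inexact proximal augmented Lagrangian loop on a toy equality-constrained problem, with plain gradient descent standing in for the arbitrary linearly convergent inner solver. The shrinking inner tolerance is purely illustrative; it stands in for the paper's self-adaptive stopping criterion, whose exact form is given in the paper.

import numpy as np

# Toy instance: min 0.5*||x||^2  s.t.  Ax = b  (f = 0.5*||.||^2, g = 0).
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 20))
b = rng.standard_normal(5)

beta = 1.0          # penalty / proximal parameter
x = np.zeros(20)    # primal iterate
y = np.zeros(5)     # dual multipliers

for k in range(30):
    x_anchor = x.copy()  # proximal center for this outer iteration
    # Inner solver: gradient descent on the strongly convex proximal AL
    # subproblem; any linearly convergent (possibly stochastic) method works.
    lip = 1.0 + beta * np.linalg.norm(A, 2) ** 2 + 1.0 / beta
    for _ in range(10_000):
        grad = x + A.T @ (y + beta * (A @ x - b)) + (x - x_anchor) / beta
        if np.linalg.norm(grad) <= 1.0 / (k + 1) ** 2:  # illustrative adaptive stop
            break
        x = x - grad / lip
    y = y + beta * (A @ x - b)  # dual ascent update

print("constraint violation:", np.linalg.norm(A @ x - b))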
Similar papers
On the non-ergodic convergence rate of an inexact augmented Lagrangian framework for composite convex programming
In this paper, we consider the linearly constrained composite convex optimization problem, whose objective is a sum of a smooth function and a possibly nonsmooth function. We propose an inexact augmented Lagrangian (IAL) framework for solving the problem. The proposed IAL framework requires solving the augmented Lagrangian (AL) subproblem at each iteration less accurately than most of the exist...
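For reference, the inexact augmented Lagrangian iteration this snippet refers to is commonly written as follows, where \(f + g\) is the composite objective, \(Ax = b\) the linear constraint, \(\beta > 0\) the penalty parameter, and \(\lambda \) the multiplier (standard notation assumed here, since the snippet is truncated):

\[
x^{k+1} \approx \mathop{\mathrm{arg\,min}}_{x}\; f(x) + g(x) + \langle \lambda^{k}, Ax - b\rangle + \frac{\beta}{2}\,\Vert Ax - b\Vert^{2}, \qquad \lambda^{k+1} = \lambda^{k} + \beta\,(Ax^{k+1} - b),
\]

with \(\approx \) signaling that the subproblem is only solved approximately; the IAL analysis quantifies how this inexactness affects the non-ergodic convergence rate.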
Inexact proximal stochastic gradient method for convex composite optimization
We study an inexact proximal stochastic gradient (IPSG) method for convex composite optimization, whose objective function is a summation of an average of a large number of smooth convex functions and a convex, but possibly nonsmooth, function. Variance reduction techniques are incorporated in the method to reduce the stochastic gradient variance. The main feature of this IPSG algorithm is to a...
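For a flavor of the variance-reduction idea this snippet mentions, here is a hedged Python sketch of one SVRG-style proximal stochastic gradient pass on an illustrative \(\ell_1\)-regularized least-squares problem (the problem, step size, and prox are assumptions, not details from the paper):

import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Toy instance: min (1/n) * sum_i 0.5*(a_i @ x - b_i)^2 + lam*||x||_1.
rng = np.random.default_rng(0)
n, d = 200, 10
Amat = rng.standard_normal((n, d))
bvec = rng.standard_normal(n)
lam, step = 0.01, 0.01

x = np.zeros(d)
for epoch in range(20):
    x_snap = x.copy()                              # snapshot for variance reduction
    g_full = Amat.T @ (Amat @ x_snap - bvec) / n   # full gradient at the snapshot
    for _ in range(n):
        i = rng.integers(n)
        g_i = Amat[i] * (Amat[i] @ x - bvec[i])    # stochastic gradient at x
        g_i_snap = Amat[i] * (Amat[i] @ x_snap - bvec[i])
        v = g_i - g_i_snap + g_full                # variance-reduced direction
        x = soft_threshold(x - step * v, step * lam)  # proximal step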
Augmented Lagrangian Methods and Proximal Point Methods for Convex Optimization
We present a review of the classical proximal point method for finding zeroes of maximal monotone operators, and its application to augmented Lagrangian methods, including a rather complete convergence analysis. Next we discuss the generalized proximal point methods, either with Bregman distances or φ-divergences, which in turn give rise to a family of generalized augmented Lagrangians, as smooth...
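As a pointer for the reader, the generalized proximal point iteration with a Bregman distance \(D_{\phi }\) takes the standard form (notation assumed, not taken from the paper):

\[
x^{k+1} = \mathop{\mathrm{arg\,min}}_{x}\; f(x) + \frac{1}{\lambda_{k}}\, D_{\phi}(x, x^{k}), \qquad D_{\phi}(x, z) = \phi(x) - \phi(z) - \langle \nabla \phi(z),\, x - z\rangle,
\]

which recovers the classical proximal point method when \(\phi = \tfrac{1}{2}\Vert \cdot \Vert^{2}\).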
The proximal augmented Lagrangian method for nonsmooth composite optimization
We study a class of optimization problems in which the objective function is given by the sum of a differentiable but possibly nonconvex component and a nondifferentiable convex regularization term. We introduce an auxiliary variable to separate the objective function components and utilize the Moreau envelope of the regularization term to derive the proximal augmented Lagrangian – a continuous...
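The Moreau envelope mentioned above is the standard smoothing construction; for a convex regularizer \(g\) and parameter \(\mu > 0\) (symbols assumed here, since the snippet is truncated):

\[
M_{\mu g}(x) = \min_{z}\; g(z) + \frac{1}{2\mu}\,\Vert x - z\Vert^{2},
\]

which is continuously differentiable even when \(g\) is not, with \(\nabla M_{\mu g}(x) = \big(x - \mathrm{prox}_{\mu g}(x)\big)/\mu \); this is what makes the resulting proximal augmented Lagrangian a continuously differentiable function.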
A Globally Convergent Linearly Constrained Lagrangian Method for Nonlinear Optimization
For optimization problems with nonlinear constraints, linearly constrained Lagrangian (LCL) methods sequentially minimize a Lagrangian function subject to linearized constraints. These methods converge rapidly near a solution but may not be reliable from arbitrary starting points. The well-known example MINOS has proven effective on many large problems. Its success motivates us to propose a glo...
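Schematically, an LCL subproblem minimizes the Lagrangian subject to the constraints linearized at the current iterate (a textbook form; \(c\) denotes the nonlinear constraints and \(J\) their Jacobian, notation assumed):

\[
x^{k+1} \in \mathop{\mathrm{arg\,min}}_{x}\; L(x, \lambda^{k}) \quad \text{s.t.} \quad c(x^{k}) + J(x^{k})\,(x - x^{k}) = 0.
\]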
Journal
Title: Mathematical Programming Computation
Year: 2021
ISSN: 1867-2957, 1867-2949
DOI: https://doi.org/10.1007/s12532-021-00205-x