Search results for: augmented lagrangian method

Number of results: 1688574

Ketabchi, S., Moosaei, H.,

One of the issues that has been considered by researchers, both in theory and in practice, is the problem of finding a minimum norm solution. In general, an absolute value equation may have infinitely many solutions. In such cases, the best and most natural choice is the solution with the minimum norm. In this paper, the minimum norm-1 solution of the absolute value equation is investigated. ...

2016
Jules Djoko Jonas Koko

In this article, we discuss the numerical solution of the Stokes and Navier-Stokes equations completed by nonlinear slip boundary conditions of friction type in two and three dimensions. To solve the Stokes system, we first reduce the related variational inequality into a saddle-point problem for a well-chosen augmented Lagrangian. To solve this saddle-point problem we suggest an alternat...

Journal: :J. Applied Mathematics 2013
Saeed Ketabchi Malihe Behboodi-Kahoo

The augmented Lagrangian method can be used for solving recourse problems and obtaining their normal solution when solving two-stage stochastic linear programming problems. The augmented Lagrangian objective function of a stochastic linear problem is not twice differentiable, which precludes the use of a Newton method. In this paper, we apply smoothing techniques and a fast Newton-Armijo algor...
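As background to the results above: the classical augmented Lagrangian iteration alternates an (inner) minimization of the penalized Lagrangian with a multiplier update. A minimal sketch on a toy equality-constrained quadratic program (the problem data, function name, and parameters below are illustrative, not taken from any of the listed papers):

```python
import numpy as np

def auglag_eqqp(Q, c, A, b, rho=10.0, iters=50):
    """Augmented Lagrangian method for  min 0.5 x'Qx - c'x  s.t.  Ax = b.

    Each outer iteration minimizes the augmented Lagrangian
        L(x, y) = 0.5 x'Qx - c'x + y'(Ax - b) + 0.5 * rho * ||Ax - b||^2
    exactly in x (it is a quadratic), then takes a dual-ascent step on y.
    """
    y = np.zeros(A.shape[0])
    # Hessian of L in x; fixed across iterations since rho is constant.
    H = Q + rho * (A.T @ A)
    for _ in range(iters):
        # Primal step: stationarity of L in x gives a linear system.
        g = c - A.T @ y + rho * (A.T @ b)
        x = np.linalg.solve(H, g)
        # Dual step: multiplier update with step size rho.
        y = y + rho * (A @ x - b)
    return x, y
```

For instance, minimizing `0.5*||x||^2` subject to `x1 + x2 = 1` (so `Q = I`, `c = 0`) converges to `x = [0.5, 0.5]` with multiplier `y = [-0.5]`.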

2010
André F. T. Martins Noah A. Smith Eric P. Xing Pedro M. Q. Aguiar Mário A. T. Figueiredo

In this paper, we propose combining augmented Lagrangian optimization with the dual decomposition method to obtain a fast algorithm for approximate MAP (maximum a posteriori) inference on factor graphs. We also show how the proposed algorithm can efficiently handle problems with (possibly global) structural constraints. The experimental results reported attest to the state-of-the-art performa...

Journal: :J. Global Optimization 2008
Angelia Nedic Asuman E. Ozdaglar

We provide a unifying geometric framework for the analysis of general classes of duality schemes and penalty methods for nonconvex constrained optimization problems. We present a separation result for nonconvex sets via general concave surfaces. We use this separation result to provide necessary and sufficient conditions for establishing strong duality between geometric primal and dual problems...

1994
G Di Pillo

Exact penalty methods for the solution of constrained optimization problems are based on the construction of a function whose unconstrained minimizing points are also solutions of the constrained problem. In the first part of this paper we recall some definitions concerning exactness properties of penalty functions, of barrier functions, of augmented Lagrangian functions, and discuss under which as...

Journal: :Comp. Opt. and Appl. 2012
Philip E. Gill Daniel P. Robinson

Nonlinearly constrained optimization problems can be solved by minimizing a sequence of simpler unconstrained or linearly constrained subproblems. In this paper, we discuss the formulation of subproblems in which the objective is a primal-dual generalization of the Hestenes-Powell augmented Lagrangian function. This generalization has the crucial feature that it is minimized with respect to bot...

Journal: :CoRR 2018
Feihu Huang Songcan Chen

In this paper, we study mini-batch stochastic ADMMs (alternating direction methods of multipliers) for nonconvex nonsmooth optimization. We prove that, given an appropriate mini-batch size, the mini-batch stochastic ADMM without the variance reduction (VR) technique is convergent and reaches a convergence rate of O(1/T) to obtain a stationary point of the nonconvex optimization, where T de...
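Several of these results concern ADMM, which splits a composite objective across two blocks coupled by a consensus constraint. A minimal deterministic sketch for the lasso problem (the splitting, function name, and test problem are illustrative, not taken from the stochastic ADMM paper above):

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    """Solve  min 0.5*||Ax - b||^2 + lam*||z||_1  s.t.  x = z  via ADMM."""
    m, n = A.shape
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)  # u: scaled dual
    Atb = A.T @ b
    # Cache the Cholesky factor of the x-update system (A'A + rho*I).
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    for _ in range(iters):
        # x-update: minimize the smooth quadratic part.
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
        # z-update: soft-thresholding, the proximal operator of the l1 norm.
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
        # Scaled dual update on the consensus residual x - z.
        u = u + x - z
    return z
```

With `A` the identity, the solver recovers the elementwise soft-threshold of `b`, which is the known closed-form lasso solution in that case.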

2016
Jonathan Eckstein Wang Yao

We present three new approximate versions of the alternating direction method of multipliers (ADMM), all of which require only knowledge of subgradients of the subproblem objectives, rather than bounds on the distance to the exact subproblem solution. One version, which applies only to certain common special cases, is based on combining the operator-splitting analysis of the ADMM with a relative-er...

1997
Tony F. Chan Xue-Cheng Tai

Estimation of coefficients of partial differential equations is ill-posed. The output-least-squares method is often used in practice. Convergence of the commonly used minimization algorithms for the inverse problem is often very slow. By using the augmented Lagrangian method, the inverse problem is reduced to a coupled linear algebraic system, which can be solved efficiently. Total variation techniques ...
