Search results for: augmented lagrangian method

Number of results: 1688574  

Journal: :Journal of Scientific Computing 2022

The augmented Lagrangian method (ALM) is one of the most useful methods for constrained optimization. Its convergence has been well established under convexity assumptions or smoothness assumptions, or both. ALM may experience oscillations and divergence when the underlying problem is simultaneously nonconvex and nonsmooth. In this paper, we consider linearly constrained problems with a nonconvex (in particular, weakly convex)...
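
The ALM outer loop the abstract refers to alternates an inner minimization of the augmented Lagrangian with a multiplier update. A minimal sketch on a toy equality-constrained problem (all numbers and the closed-form inner solve are illustrative, not from the paper):

```python
import numpy as np

# Toy problem:  min 0.5*||x||^2  s.t.  a@x = b,  whose solution is x_i = 1/n.
# Augmented Lagrangian: L_rho(x, lam) = 0.5*||x||^2 + lam*(a@x - b) + 0.5*rho*(a@x - b)**2
n, rho = 3, 10.0
a, b = np.ones(n), 1.0
lam = 0.0
for _ in range(30):
    # Inner step: x = argmin_x L_rho(x, lam). For this quadratic it reduces to
    # the linear system (I + rho * a a^T) x = (rho*b - lam) * a.
    x = np.linalg.solve(np.eye(n) + rho * np.outer(a, a), (rho * b - lam) * a)
    # Multiplier update: lam <- lam + rho * (constraint residual).
    lam += rho * (a @ x - b)

print(np.round(x, 6), round(lam, 6))  # -> [0.333333 0.333333 0.333333] -0.333333
```

On this convex quadratic the residual contracts by a fixed factor per outer iteration; the oscillation/divergence the abstract warns about arises only once the objective is nonconvex and nonsmooth.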

Journal: :CoRR 2017
Qingjiang Shi Mingyi Hong Xiao Fu Tsung-Hui Chang

Many contemporary signal processing, machine learning and wireless communication applications can be formulated as nonconvex nonsmooth optimization problems. Often there is a lack of efficient algorithms for these problems, especially when the optimization variables are nonlinearly coupled in some nonconvex constraints. In this work, we propose an algorithm named penalty dual decomposition (PDD...

Journal: :Annals OR 2003
Alexandre Belloni Andre L. Diniz Souto Lima Maria Elvira Piñeiro Maceira Claudia A. Sagastizábal

We consider the inclusion of commitment of thermal generation units in the optimal management of the Brazilian power system. By means of Lagrangian relaxation we decompose the problem and obtain a nondifferentiable dual function that is separable. We solve the dual problem with a bundle method. Our purpose is twofold: first, bundle methods are the methods of choice in nonsmooth optimization whe...
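The decomposition idea in this abstract can be sketched in miniature: relaxing the coupling constraint makes the dual function separable, so each subproblem is solved independently and the multiplier is adjusted from the residual. A plain subgradient step stands in here for the bundle method the paper actually uses, and the quadratic subproblems are illustrative:

```python
# Toy coupled problem:  min (x1 - c1)^2 + (x2 - c2)^2  s.t.  x1 + x2 = d.
# Lagrangian relaxation of the coupling constraint gives a separable dual.
c1, c2, d = 1.0, 2.0, 2.0
lam, step = 0.0, 0.5
for _ in range(60):
    # Separable inner minimizations: argmin_x (x - c_i)^2 + lam*x = c_i - lam/2.
    x1 = c1 - lam / 2
    x2 = c2 - lam / 2
    # A subgradient of the dual at lam is the constraint residual.
    lam += step * (x1 + x2 - d)

print(x1, x2, lam)  # converges to x1 = 0.5, x2 = 1.5, lam = 1.0
```

Bundle methods replace this single-subgradient step with a model built from all past subgradients, which is what makes them the method of choice for the nondifferentiable dual the abstract mentions.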

2017
Boris Houska Moritz Diehl

This paper presents novel convergence results for the Augmented Lagrangian based Alternating Direction Inexact Newton method (ALADIN) in the context of distributed convex optimization. It is shown that ALADIN converges for a large class of convex optimization problems from any starting point to minimizers without needing line-search or other globalization routines. Under additional regularity a...

2016
XIAOJUN CHEN LEI GUO JANE J. YE

We consider a class of constrained optimization problems where the objective function is a sum of a smooth function and a nonconvex non-Lipschitz function. Many problems in sparse portfolio selection, edge preserving image restoration and signal processing can be modelled in this form. First we propose the concept of the Karush-Kuhn-Tucker (KKT) stationary condition for the non-Lipschitz proble...

Journal: :Comp. Opt. and Appl. 2005
Ernesto G. Birgin R. A. Castillo José Mario Martínez

Augmented Lagrangian algorithms are very popular tools for solving nonlinear programming problems. At each outer iteration of these methods a simpler optimization problem is solved, for which efficient algorithms can be used, especially when the problems are large. The most famous Augmented Lagrangian algorithm for minimization with inequality constraints is known as Powell-Hestenes-Rockafellar...
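The Powell-Hestenes-Rockafellar (PHR) augmented Lagrangian named in this abstract handles an inequality constraint g(x) <= 0 through the penalty term (1/(2*rho)) * (max(0, mu + rho*g(x))**2 - mu**2) and the update mu <- max(0, mu + rho*g(x)). A minimal one-dimensional sketch (toy problem and step sizes are my own, not from the paper):

```python
# Toy problem:  min (x - 2)^2  s.t.  x <= 1,  with solution x* = 1, mu* = 2.
rho, mu, x = 10.0, 0.0, 0.0
for _ in range(20):            # outer PHR iterations
    for _ in range(300):       # inner solve by gradient descent on L_rho(., mu)
        grad = 2 * (x - 2) + max(0.0, mu + rho * (x - 1))
        x -= grad / (2 + rho)  # step 1/L, with L a gradient Lipschitz bound
    # PHR multiplier update, projected onto mu >= 0.
    mu = max(0.0, mu + rho * (x - 1))

print(round(x, 4), round(mu, 4))  # -> 1.0 2.0
```

The max(0, .) construction is what makes the inner objective once continuously differentiable even though the original constraint is an inequality, which is why general smooth solvers can be reused for the subproblems, as the abstract notes.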
