Search results for: lagrangian optimization

Number of results: 338,580

2009
Aida Khajavirad Jeremy J. Michalek

We propose a deterministic approach for global optimization of nonconvex quasiseparable problems encountered frequently in engineering systems design. Our branch and bound-based optimization algorithm applies Lagrangian decomposition to (1) generate tight lower bounds by exploiting the structure of the problem and (2) enable parallel computing of subsystems and use of efficient dual methods. We...
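The decomposition idea in the abstract can be illustrated on a toy coupled problem (this is only a minimal sketch, not the authors' branch-and-bound algorithm): minimize x² + y² subject to x + y = 2. Dualizing the coupling constraint splits the Lagrangian into two independent subproblems, each solvable on its own (the parallelism the abstract mentions), and the dual value is a valid lower bound at every iterate.

```python
def solve_subsystem(lmbda):
    # Each subsystem min_z z^2 - lmbda*z is solved independently
    # (in the paper's setting such subproblems could run in parallel);
    # the minimizer is z = lmbda / 2.
    return lmbda / 2.0

def dual_value(lmbda):
    # Lagrangian dual: a lower bound on the optimal value for any lmbda.
    x = solve_subsystem(lmbda)
    y = solve_subsystem(lmbda)
    return x**2 + y**2 + lmbda * (2.0 - x - y)

lmbda = 0.0
for _ in range(60):
    x = solve_subsystem(lmbda)
    y = solve_subsystem(lmbda)
    # The constraint residual 2 - x - y is a subgradient of the dual.
    lmbda += 0.5 * (2.0 - x - y)

print(round(lmbda, 4), round(dual_value(lmbda), 4))  # λ* = 2, bound = 2
```

Here the problem is convex, so the lower bound is tight (x = y = 1, objective 2); for the nonconvex problems the paper targets, the bound is used inside branch and bound instead.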

1996
Georgios Karakatsanis Thomas Buchert Adrian L. Melott

The Lagrangian perturbation theory on Friedmann–Lemaître cosmologies is compared with numerical simulations (tree–, adaptive P3M– and PM codes). In previous work we have probed the large–scale performance of the Lagrangian perturbation solutions up to the third order by studying their cross–correlations with N–body simulations for various power spectra (Buchert et al. 1994, Melott et al. 1995,...

1999
P. B. Luh X. Zhao L. S. Thakur K. H. Chen T. D. Chiueh S. C. Chang

By combining neural network optimization ideas with “Lagrangian relaxation” for constraint handling, a novel Lagrangian relaxation neural network (LRNN) has recently been developed for job shop scheduling. This paper explores architectural design issues for the hardware implementation of such neural networks. A digital circuit with a micro-controller and an optimization chip is designed,...

Journal: :Comp. Opt. and Appl. 2014
Mengwei Xu Jane J. Ye

In this paper, we design a numerical algorithm for solving a simple bilevel program where the lower level program is a nonconvex minimization problem with a convex set constraint. We propose to solve a combined problem where the first order condition and the value function are both present in the constraints. Since the value function is in general nonsmooth, the combined problem is in general a...

Journal: :CoRR 2016
Joachim Giesen Sören Laue

We address the problem of solving convex optimization problems with many convex constraints in a distributed setting. Our approach is based on an extension of the alternating direction method of multipliers (ADMM) that recently gained a lot of attention in the Big Data context. Although it was invented decades ago, ADMM so far can be applied only to unconstrained problems and problems with...
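The basic two-block ADMM iteration the abstract builds on can be sketched on a toy consensus problem (the functions below are illustrative choices, not from the paper): minimize f(x) + g(z) subject to x = z, alternating a minimization in x, a minimization in z, and a dual update on the consensus constraint.

```python
# Consensus ADMM sketch for min f(x) + g(z) s.t. x = z,
# with f(x) = (x - 3)^2 and g(z) = (z + 1)^2 (toy choices).
rho = 1.0
x = z = u = 0.0  # u is the scaled dual variable
for _ in range(200):
    # x-update: argmin_x (x - 3)^2 + (rho/2)(x - z + u)^2
    x = (2 * 3 + rho * (z - u)) / (2 + rho)
    # z-update: argmin_z (z + 1)^2 + (rho/2)(x - z + u)^2
    z = (2 * (-1) + rho * (x + u)) / (2 + rho)
    # dual ascent on the consensus constraint x = z
    u += x - z

print(round(x, 4), round(z, 4))  # both approach the minimizer x* = 1
```

Each quadratic subproblem here has a closed-form minimizer; in general the two updates are smaller optimization problems that can be distributed, which is what makes ADMM attractive in the Big Data setting the abstract refers to.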

2015
Antonio Frangioni Enrico Gorgone Bernard Gendron

Subgradient methods (SM) have long been the preferred way to solve large-scale Nondifferentiable Optimization problems, such as those arising from the solution of Lagrangian duals of hard combinatorial optimization problems. Although other methods exist that show a significantly higher convergence rate in some circumstances, SM have certain unique advantages that may make them competitive under...
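A subgradient method of the kind the abstract discusses can be sketched on a stand-in for a Lagrangian dual (the function below is a toy example, not taken from the paper): a piecewise-linear concave dual is maximized by stepping along any supergradient with diminishing step sizes, which converges, but slowly, the trade-off the abstract alludes to.

```python
def dual(lmbda):
    # A piecewise-linear concave function standing in for the Lagrangian
    # dual of a hard combinatorial problem: g(λ) = min(λ, 4 - λ).
    return min(lmbda, 4.0 - lmbda)

def supergradient(lmbda):
    # The slope of the active piece; any choice works at the kink.
    return 1.0 if lmbda < 2.0 else -1.0

lmbda, best = 0.0, float("-inf")
for k in range(1, 2001):
    best = max(best, dual(lmbda))
    # Diminishing steps (here 1/k) guarantee convergence of the best
    # value found, at the slow rate typical of subgradient methods.
    lmbda += (1.0 / k) * supergradient(lmbda)

print(round(best, 3))  # approaches the dual optimum g(2) = 2
```

Note that the iterates oscillate around the maximizer rather than settling on it; tracking the best value seen, as above, is the standard remedy.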

Journal: :Pattern Recognition 1998
Stan Z. Li William Y. C. Soh Eam Khwang Teoh

A novel relaxation labeling (RL) method is presented based on Augmented Lagrangian multipliers and the graded Hopfield neural network (ALH). In this method, an RL problem is converted into a constrained optimization problem and solved by using the augmented Lagrangian and Hopfield techniques. The ALH method yields results comparable to the best of the existing RL algorithms in terms of the optimi...

Journal: :Math. Oper. Res. 2005
Gianni Di Pillo Stefano Lucidi Laura Palagi

We define a primal-dual algorithm model (SOLA) for inequality constrained optimization problems that generates a sequence converging to points satisfying the second order necessary conditions for optimality. This property can be enforced by combining the equivalence between the original constrained problem and the unconstrained minimization of an exact augmented Lagrangian function and the use ...

2015
Ya-Feng Liu Xin Liu Shiqian Ma

In this paper, we consider the linearly constrained composite convex optimization problem, whose objective is a sum of a smooth function and a possibly nonsmooth function. We propose an inexact augmented Lagrangian (IAL) framework for solving the problem. The proposed IAL framework requires solving the augmented Lagrangian (AL) subproblem at each iteration less accurately than most of the exist...
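The inexactness the abstract emphasizes can be illustrated on a toy linearly constrained problem (a minimal sketch under assumed data, not the authors' IAL framework): minimize x₁² + 2x₂² subject to x₁ + x₂ = 1, where each augmented Lagrangian subproblem is solved only approximately with a handful of gradient steps before the multiplier is updated.

```python
# Inexact augmented Lagrangian sketch for
# minimize x1^2 + 2*x2^2  subject to  x1 + x2 = 1.
rho, lam = 10.0, 0.0
x1 = x2 = 0.0
for _ in range(50):            # outer multiplier updates
    for _ in range(20):        # inner: solve the AL subproblem *inexactly*
        r = x1 + x2 - 1.0      # constraint residual
        g1 = 2.0 * x1 + lam + rho * r   # gradient of the AL in x1
        g2 = 4.0 * x2 + lam + rho * r   # gradient of the AL in x2
        x1 -= 0.05 * g1        # a few fixed-step gradient steps only
        x2 -= 0.05 * g2
    lam += rho * (x1 + x2 - 1.0)        # first-order multiplier update

print(round(x1, 3), round(x2, 3), round(lam, 3))  # x* = (2/3, 1/3), λ* = -4/3
```

Warm-starting the inner loop from the previous iterate is what keeps the cheap, truncated inner solves sufficient; the paper's contribution is to make such accuracy requirements explicit and weaker than in earlier AL methods.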

Journal: :RAIRO - Operations Research 2010
Alfredo N. Iusem Mostafa Nasri

We introduce augmented Lagrangian methods for solving finite dimensional variational inequality problems whose feasible sets are defined by convex inequalities, generalizing the proximal augmented Lagrangian method for constrained optimization. At each iteration, primal variables are updated by solving an unconstrained variational inequality problem, and then dual variables are updated through ...

Chart: number of search results per year
