The Newton Bracketing method for the minimization of convex functions subject to affine constraints

Authors

  • Adi Ben-Israel
  • Yuri Levin
Abstract

The Newton Bracketing method [9] for the minimization of convex functions f : R^n → R is extended to affinely constrained convex minimization problems. The results are illustrated for affinely constrained Fermat–Weber location problems.
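For orientation, the Fermat–Weber location problem minimizes a weighted sum of Euclidean distances to given points; a minimal sketch of its affinely constrained version, with weights w_i, anchor points a_i, and constraint data C, d introduced here as generic notation (not taken from the paper):

    \[
      \min_{x \in \mathbb{R}^n} \; f(x) = \sum_{i=1}^{m} w_i \, \lVert x - a_i \rVert_2
      \quad \text{subject to} \quad C x = d .
    \]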


Similar articles

The Newton Bracketing Method for Convex Minimization

An iterative method for the minimization of convex functions f : R^n → R, called a Newton Bracketing (NB) method, is presented. The NB method proceeds by using Newton iterations to improve upper and lower bounds on the minimum value. The NB method is valid for n = 1, and in some cases for n > 1 (sufficient conditions given here). The NB method is applied to large scale Fermat–Weber location probl...
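To make the bound-improving idea concrete, here is a minimal toy sketch (not the authors' NB update rules): the best function value seen so far is an upper bound on the minimum value, a convexity inequality certifies a lower bound, and Newton steps tighten both. The radius R (an assumed bound on the distance from the iterates to a minimizer) and all names below are introduced here for illustration only.

    # Toy sketch of bracketing the minimum value f* of a smooth convex f.
    # Upper bound U: best value found so far.
    # Lower bound L: from convexity, f* >= f(x) - R*||grad f(x)|| whenever a
    # minimizer lies within distance R of x (R is an assumption made here).
    import numpy as np

    def bracket_minimum(f, grad, hess, x0, R, tol=1e-8, max_iter=100):
        x = np.asarray(x0, dtype=float)
        U = f(x)                                   # upper bound on f*
        L = U - R * np.linalg.norm(grad(x))        # certified lower bound on f*
        for _ in range(max_iter):
            if U - L <= tol:
                break
            x = x - np.linalg.solve(hess(x), grad(x))          # Newton step
            U = min(U, f(x))                                   # tighten upper bound
            L = max(L, f(x) - R * np.linalg.norm(grad(x)))     # tighten lower bound
        return L, U, x

    # Example: f(x) = ||x - c||^2 + 1 has minimum value 1.
    c = np.array([1.0, -2.0])
    f    = lambda x: float(np.dot(x - c, x - c)) + 1.0
    grad = lambda x: 2.0 * (x - c)
    hess = lambda x: 2.0 * np.eye(2)
    print(bracket_minimum(f, grad, hess, x0=np.zeros(2), R=10.0))

On this example the bracket collapses to the exact minimum value 1 after one Newton step; in general the gap U - L is non-increasing by construction, which is the bracketing behaviour the abstract describes.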


SDO relaxation approach to fractional quadratic minimization with one quadratic constraint

In this paper, we study the problem of minimizing the ratio of two quadratic functions subject to a quadratic constraint. First we introduce a parametric equivalent of the problem. Then a bisection algorithm and a generalized Newton-based algorithm are presented to solve it. In order to solve the quadratically constrained quadratic minimization problem within both algorithms, a semidefinite optim...
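The "parametric equivalent" here is presumably the standard Dinkelbach-type reduction; a sketch with generic data f_1, f_2 and feasible set X introduced for illustration, assuming f_2 > 0 on X and that the minima are attained:

    \[
      \varphi(\lambda) \;=\; \min_{x \in X} \bigl( f_1(x) - \lambda f_2(x) \bigr),
      \qquad
      \min_{x \in X} \frac{f_1(x)}{f_2(x)} \;=\; \lambda^{*}
      \;\Longleftrightarrow\;
      \varphi(\lambda^{*}) = 0 .
    \]

Since \varphi is nonincreasing in \lambda, the root \lambda^{*} can be located by bisection (or a generalized Newton step on \varphi), each evaluation requiring a quadratically constrained quadratic subproblem of the kind the abstract mentions.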


A Primal-dual Algorithm for Minimizing a Non-convex Function Subject to Bound and Linear Equality Constraints

A new primal-dual algorithm is proposed for the minimization of non-convex objective functions subject to simple bounds and linear equality constraints. The method alternates between a classical primal-dual step and a Newton-like step in order to ensure descent on a suitable merit function. Convergence of a well-defined subsequence of iterates is proved from arbitrary starting points. Algorithmi...


An efficient one-layer recurrent neural network for solving a class of nonsmooth optimization problems

Constrained optimization problems have a wide range of applications in science, economics, and engineering. In this paper, a neural network model is proposed to solve a class of nonsmooth constrained optimization problems with a nonsmooth convex objective function subject to nonlinear inequality and affine equality constraints. It is a one-layer non-penalty recurrent neural network based on the...


On the quadratic support of strongly convex functions

In this paper, we first introduce the notion of $c$-affine functions for $c > 0$. Then we deal with some properties of strongly convex functions in real inner product spaces by using a quadratic support function at each point which is $c$-affine. Moreover, a Hyers–Ulam stability result for strongly convex functions is shown.
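For context, with one common normalization of the modulus $c > 0$ (some authors use $c/2$), a differentiable $f$ on a real inner product space is strongly convex exactly when it admits the quadratic lower bound below at every point $x_0$; this standard inequality is presumably the kind of "quadratic support" the title refers to, while the paper's $c$-affine notion is not spelled out in the excerpt:

    \[
      f(y) \;\ge\; f(x_0) + \langle \nabla f(x_0),\, y - x_0 \rangle + c \, \lVert y - x_0 \rVert^2
      \qquad \text{for all } y .
    \]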



Journal:
  • Discrete Applied Mathematics

Volume 156, Issue -

Pages -

Publication date: 2008