Newton-type methods for non-convex optimization under inexact Hessian information
Authors
Abstract
Similar Resources
Newton-Type Methods for Non-Convex Optimization Under Inexact Hessian Information
We consider variants of trust-region and cubic regularization methods for non-convex optimization, in which the Hessian matrix is approximated. Under mild conditions on the inexact Hessian, and using approximate solutions of the corresponding sub-problems, we provide iteration complexities for achieving ε-approximate second-order optimality, which have been shown to be tight. Our Hessian approximation condi...
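As a rough illustration of the setup, the sketch below takes cubic-regularization steps in which the Hessian is replaced by a subsampled, hence inexact, approximation. The toy finite-sum objective, the crude gradient-descent sub-problem solver, and all names (subsampled_hessian, cubic_subproblem, the choice sigma = 1) are illustrative assumptions, not the algorithms or conditions analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, dim = 200, 5
A = rng.standard_normal((n_samples, dim))
A /= np.linalg.norm(A, axis=1, keepdims=True)         # unit rows keep per-sample Hessians bounded

# Toy non-convex finite sum: f(x) = (1/n) * sum_i cos(a_i^T x) + 0.5 * ||x||^2
def full_grad(x):
    return -np.mean(np.sin(A @ x)[:, None] * A, axis=0) + x

def subsampled_hessian(x, batch=32):
    """Inexact Hessian: average the per-sample Hessians over a random batch."""
    idx = rng.choice(n_samples, size=batch, replace=False)
    return np.eye(dim) - sum(np.cos(A[i] @ x) * np.outer(A[i], A[i]) for i in idx) / batch

def cubic_subproblem(g, H, sigma, iters=200):
    """Approximately minimize the cubic model
       m(s) = g^T s + 0.5 * s^T H s + (sigma / 3) * ||s||^3
    by plain gradient descent (a deliberately crude sub-problem solver)."""
    lr = 1.0 / (np.linalg.norm(H, 2) + 3.0 * sigma)
    s = np.zeros_like(g)
    for _ in range(iters):
        s -= lr * (g + H @ s + sigma * np.linalg.norm(s) * s)
    return s

x = rng.standard_normal(dim)
for _ in range(50):                                    # outer cubic-regularized Newton loop
    g = full_grad(x)
    if np.linalg.norm(g) < 1e-6:
        break
    x += cubic_subproblem(g, subsampled_hessian(x), sigma=1.0)

print("gradient norm at the returned point:", np.linalg.norm(full_grad(x)))
```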
Proximal Newton-type methods for convex optimization
We seek to solve convex optimization problems in composite form: minimize_{x ∈ R^n} f(x) := g(x) + h(x), where g is convex and continuously differentiable and h : R^n → R is a convex but not necessarily differentiable function whose proximal mapping can be evaluated efficiently. We derive a generalization of Newton-type methods to handle such convex but nonsmooth objective functions. We prove such met...
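To make the composite setting concrete, here is a minimal sketch of a single proximal Newton-type step, assuming h(x) = λ‖x‖₁ so that its proximal mapping is soft-thresholding; the inner proximal-gradient sub-problem solver, the toy least-squares g, and all names are illustrative choices rather than the methods developed in that work.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal mapping of t * ||.||_1 (assumes h is the l1 norm)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_newton_step(x, grad_g, H, lam, inner_iters=200):
    """Approximately solve the scaled proximal sub-problem
        min_z  grad_g^T (z - x) + 0.5 * (z - x)^T H (z - x) + lam * ||z||_1
    by running proximal gradient descent on the quadratic model."""
    step = 1.0 / np.linalg.norm(H, 2)          # 1 / Lipschitz constant of the model gradient
    z = x.copy()
    for _ in range(inner_iters):
        model_grad = grad_g + H @ (z - x)
        z = soft_threshold(z - step * model_grad, step * lam)
    return z

# Toy usage: one step on g(x) = 0.5 * ||A x - b||^2 with h(x) = 0.1 * ||x||_1.
rng = np.random.default_rng(1)
A, b = rng.standard_normal((20, 8)), rng.standard_normal(20)
x = np.zeros(8)
grad_g = A.T @ (A @ x - b)
H = A.T @ A                                     # exact Hessian of g in this toy case
x_new = prox_newton_step(x, grad_g, H, lam=0.1)
```

Note that if H is replaced by the identity and the sub-problem is solved exactly, the step reduces to an ordinary proximal-gradient step, which is one way to view such methods as a generalization of Newton's method to the nonsmooth composite case.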
Quasi-Newton Bundle-Type Methods for Nondifferentiable Convex Optimization
In this paper we provide implementable methods for solving nondifferentiable convex optimization problems. A typical method minimizes an approximate Moreau–Yosida regularization using a quasi-Newton technique with inexact function and gradient values which are generated by a finite inner bundle algorithm. For a BFGS bundle-type method, global and superlinear convergence results for the outer ite...
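A rough sketch of that outer/inner structure, under several simplifying assumptions: the Moreau–Yosida regularization F_mu(x) = min_y { f(y) + ||y - x||^2 / (2*mu) } is minimized with a quasi-Newton (BFGS) outer loop, using the identity grad F_mu(x) = (x - p_mu(x)) / mu, while the inner proximal-point computation uses a generic SciPy solver in place of a bundle method; the test function and parameters are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def f(y):
    """Nonsmooth convex test function (an assumption, not from the paper)."""
    return np.sum(np.abs(y - 1.0)) + 0.5 * np.sum(y ** 2)

def moreau_yosida(x, mu=0.5):
    """Return F_mu(x) and grad F_mu(x) = (x - p_mu(x)) / mu, where p_mu(x) is an
    approximate proximal point; this inner solve stands in for the bundle step."""
    inner = minimize(lambda y: f(y) + np.sum((y - x) ** 2) / (2 * mu),
                     x, method="Powell")
    p = inner.x
    return inner.fun, (x - p) / mu

# Outer quasi-Newton (BFGS) loop on the smooth surrogate F_mu, with inexact
# function and gradient values coming from the approximate inner solves.
x0 = np.array([3.0, -2.0, 0.5])
res = minimize(lambda x: moreau_yosida(x)[0], x0,
               jac=lambda x: moreau_yosida(x)[1],
               method="BFGS", options={"gtol": 1e-3})
print("approximate minimizer of f:", res.x)
```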
Inexact Newton-type Optimization with Iterated Sensitivities
This paper presents and analyzes an Inexact Newton-type optimization method based on Iterated Sensitivities (INIS). A particular class of Nonlinear Programming (NLP) problems is considered, where a subset of the variables is defined by nonlinear equality constraints. The proposed algorithm considers any problem-specific approximation for the Jacobian of these constraints. Unlike other i...
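The basic ingredient, solving for the constraint-defined variables with an approximate rather than exact constraint Jacobian, can be sketched as a plain inexact Newton iteration. This is only meant to illustrate that idea, not the INIS algorithm or its iterated-sensitivity scheme; the constraint G, the approximation M, and all names are assumptions made up for the example.

```python
import numpy as np

def G(w, z):
    """Nonlinear equality constraints defining z in terms of w (toy example).
    The exact Jacobian dG/dz here is I + 0.1 * diag(cos(z))."""
    return z + 0.1 * np.sin(z) - w

def inexact_newton_for_z(w, z0, M, tol=1e-10, max_iter=100):
    """Solve G(w, z) = 0 using a fixed, problem-specific approximation M of dG/dz:
       z <- z - M^{-1} G(w, z).
    The iteration converges when M is a good enough approximation."""
    z = z0.copy()
    for _ in range(max_iter):
        r = G(w, z)
        if np.linalg.norm(r) < tol:
            break
        z = z - np.linalg.solve(M, r)
    return z

w = np.array([0.7, -1.2])
z = inexact_newton_for_z(w, np.zeros(2), M=np.eye(2))   # crude approximation dG/dz ~ I
print("constraint residual:", np.linalg.norm(G(w, z)))
```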
Proximal Quasi-Newton Methods for Convex Optimization
In [19], a general, inexact, efficient proximal quasi-Newton algorithm for composite optimization problems has been proposed and a sublinear global convergence rate has been established. In this paper, we analyze the convergence properties of this method, both in the exact and inexact setting, in the case when the objective function is strongly convex. We also investigate a practical variant of t...
Journal
Journal title: Mathematical Programming
Year: 2019
ISSN: 0025-5610,1436-4646
DOI: 10.1007/s10107-019-01405-z