A Quadratically Convergent Newton-Like Method Based Upon Gaussian Elimination
Authors
Abstract
Similar Articles
A quadratically convergent Newton method for vector optimization
We propose a Newton method for solving smooth unconstrained vector optimization problems under partial orders induced by general closed convex pointed cones. The method extends the one proposed by Fliege, Graña Drummond and Svaiter for multicriteria optimization, which in turn extends the classical Newton method for scalar optimization. The steplength is chosen by means of an Armijo-like rule, gu...
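As a rough illustration of the classical scalar Newton iteration with an Armijo-style backtracking steplength, which this abstract says the vector method generalizes, here is a minimal Python sketch; the test function, tolerances, and the steepest-descent fallback are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def newton_armijo(f, grad, hess, x0, beta=0.5, sigma=1e-4, tol=1e-10, max_iter=100):
    """Newton's method with an Armijo-style backtracking steplength (a sketch)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = np.linalg.solve(hess(x), -g)   # Newton direction
        if g.dot(d) >= 0:                  # Hessian not positive definite:
            d = -g                         # fall back to steepest descent
        t = 1.0
        # Backtrack until f(x + t*d) <= f(x) + sigma*t*<grad, d> (Armijo condition).
        while f(x + t * d) > f(x) + sigma * t * g.dot(d):
            t *= beta
        x = x + t * d
    return x

# Illustrative run on a smooth test function (the function is an assumption).
f = lambda x: (x[0] - 1.0) ** 2 + 10.0 * (x[1] - x[0] ** 2) ** 2
grad = lambda x: np.array([2.0 * (x[0] - 1.0) - 40.0 * x[0] * (x[1] - x[0] ** 2),
                           20.0 * (x[1] - x[0] ** 2)])
hess = lambda x: np.array([[2.0 - 40.0 * (x[1] - 3.0 * x[0] ** 2), -40.0 * x[0]],
                           [-40.0 * x[0], 20.0]])
print(newton_armijo(f, grad, hess, [0.0, 0.0]))  # approaches (1, 1)
```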
A quadratically convergent VBSCF method
A quadratically convergent valence bond self-consistent field method is described where the simultaneous optimisation of orbitals and the coefficients of the configurations (VB structures) is based on a Newton-Raphson scheme. The applicability of the method is demonstrated in actual calculations. The convergence and efficiency are compared with the Super-CI method. A necessary condition to achi...
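The VBSCF working equations are far beyond a snippet, but the structural idea the abstract describes, a Newton-Raphson update over one combined parameter vector (orbital rotations stacked with structure coefficients), can be sketched generically; both callables and the toy quadratic "energy" below are assumptions.

```python
import numpy as np

def newton_raphson(grad, hess, params, tol=1e-10, max_iter=50):
    """Simultaneous second-order update of one stacked parameter vector."""
    p = np.asarray(params, dtype=float)
    for _ in range(max_iter):
        g = grad(p)
        if np.linalg.norm(g) < tol:
            break
        p = p + np.linalg.solve(hess(p), -g)  # full Newton-Raphson step
    return p

# Toy quadratic "energy" E(p) = 0.5 p^T A p - b^T p (an assumption).
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
p_opt = newton_raphson(lambda p: A @ p - b, lambda p: A, np.zeros(2))
print(p_opt)  # solves A p = b in one step, as expected for a quadratic
```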
A Quadratically Convergent Newton Method for Computing the Nearest Correlation Matrix
The nearest correlation matrix problem is to find a correlation matrix which is closest to a given symmetric matrix in the Frobenius norm. The well-studied dual approach is to reformulate this problem as an unconstrained continuously differentiable convex optimization problem. Gradient methods and quasi-Newton methods such as BFGS have been used directly to obtain globally convergent methods. S...
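For context, the constraint set (positive semidefinite with unit diagonal) can be made concrete with Higham's alternating-projections method, a much simpler and only linearly convergent alternative to the dual Newton approach this title refers to; the code below is a generic sketch of that substitute technique, not the paper's algorithm.

```python
import numpy as np

def nearest_correlation(G, max_iter=200, tol=1e-8):
    """Alternating projections with Dykstra's correction (Higham-style sketch)."""
    Y = np.asarray(G, dtype=float).copy()
    dS = np.zeros_like(Y)
    for _ in range(max_iter):
        R = Y - dS                             # apply Dykstra correction
        w, V = np.linalg.eigh((R + R.T) / 2.0)
        X = (V * np.maximum(w, 0.0)) @ V.T     # project onto the PSD cone
        dS = X - R                             # store correction for next sweep
        Y_next = X.copy()
        np.fill_diagonal(Y_next, 1.0)          # enforce unit diagonal
        if np.linalg.norm(Y_next - Y, 'fro') < tol:
            return Y_next
        Y = Y_next
    return Y

# A well-known indefinite test matrix; the result is a true correlation matrix.
G = np.array([[1.0, 1.0, 0.0], [1.0, 1.0, 1.0], [0.0, 1.0, 1.0]])
print(nearest_correlation(G))
```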
A Globally Convergent LP-Newton Method
We develop a globally convergent algorithm based on the LP-Newton method, which has been recently proposed for solving constrained equations, possibly nonsmooth and possibly with nonisolated solutions. The new algorithm makes use of linesearch for the natural merit function and preserves the strong local convergence properties of the original LP-Newton scheme. We also present computational expe...
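The LP-Newton direction itself comes from a linear program, which a short snippet cannot do justice to; what can be sketched is the globalization device the abstract mentions, a backtracking linesearch on the natural merit function ||F(x)||, here applied to a plain Newton direction for F(x) = 0 (the test system and constants are assumptions).

```python
import numpy as np

def merit_linesearch_newton(F, J, x0, beta=0.5, sigma=1e-4, tol=1e-10, max_iter=50):
    """Newton for F(x) = 0, globalized by backtracking on the merit ||F(x)||."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        m = np.linalg.norm(Fx)
        if m < tol:
            break
        d = np.linalg.solve(J(x), -Fx)   # plain Newton direction
        t = 1.0
        # Accept t once ||F(x + t*d)|| <= (1 - sigma*t) * ||F(x)||.
        while np.linalg.norm(F(x + t * d)) > (1.0 - sigma * t) * m and t > 1e-12:
            t *= beta
        x = x + t * d
    return x

# Illustrative 2x2 system (an assumption): x0^2 + x1^2 = 1 and x0 = x1.
F = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 1.0, x[0] - x[1]])
J = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]])
print(merit_linesearch_newton(F, J, [2.0, 0.5]))  # converges to (0.7071..., 0.7071...)
```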
A globally convergent incremental Newton method
Motivated by machine learning problems over large data sets and distributed optimization over networks, we develop and analyze a new method called incremental Newton method for minimizing the sum of a large number of strongly convex functions. We show that our method is globally convergent for a variable stepsize rule. We further show that under a gradient growth condition, convergence rate is ...
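One simplified reading of an incremental Newton scheme for minimizing a sum of strongly convex functions: visit one component f_i at a time, maintain running Hessian and Hessian-weighted gradient aggregates, and re-solve after each visit. The aggregation rule and the toy quadratic components below are assumptions, not the authors' exact method.

```python
import numpy as np

def incremental_newton(grads, hessians, x0, passes=5):
    """Process one component per inner step, updating aggregate Newton data."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    H = np.zeros((n, n))   # running sum of component Hessians
    u = np.zeros(n)        # running sum of H_i @ x_i - grad f_i(x_i)
    for _ in range(passes):
        for g_i, h_i in zip(grads, hessians):
            Hi = h_i(x)
            H += Hi
            u += Hi @ x - g_i(x)
            x = np.linalg.solve(H, u)  # aggregate Newton step
    return x

# Sum of strongly convex quadratics f_i(x) = 0.5 x^T A_i x - b_i^T x (assumed).
A = [np.array([[3.0, 0.0], [0.0, 2.0]]), np.array([[2.0, 1.0], [1.0, 4.0]])]
b = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
grads = [lambda x, Ai=Ai, bi=bi: Ai @ x - bi for Ai, bi in zip(A, b)]
hessians = [lambda x, Ai=Ai: Ai for Ai in A]
print(incremental_newton(grads, hessians, np.zeros(2)))
# For quadratics this matches np.linalg.solve(sum(A), sum(b)) after each full pass.
```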
Journal
Journal title: SIAM Journal on Numerical Analysis
Year: 1969
ISSN: 0036-1429 (print), 1095-7170 (online)
DOI: 10.1137/0706051