Search results for: trust region dogleg method
Number of results: 2,150,475
The least squares adjustment (LSA) method is studied as an optimisation problem and shown to be equivalent to the undamped Gauss-Newton (GN) optimisation method. Three problem-independent damping modifications of the GN method are presented: the line-search method of Armijo (GNA); the Levenberg-Marquardt algorithm (LM); and Levenberg-Marquardt with Powell dogleg (LMP). Furthermore, an additional...
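Since the abstract stops short of spelling out the dogleg step itself, here is a minimal NumPy sketch of a Powell dogleg step for the Gauss-Newton model 0.5·||J p + r||^2 inside a trust region of radius delta. The function name and interface are illustrative assumptions, not the paper's LMP implementation.

```python
import numpy as np

def dogleg_step(J, r, delta):
    """Powell dogleg step for the Gauss-Newton model 0.5*||J p + r||^2.

    A minimal sketch, assuming J is the residual Jacobian, r the residual
    vector, and delta the trust-region radius.
    """
    g = J.T @ r                      # gradient of the model at p = 0
    B = J.T @ J                      # Gauss-Newton Hessian approximation
    p_gn = np.linalg.solve(B, -g)    # full Gauss-Newton step
    if np.linalg.norm(p_gn) <= delta:
        return p_gn                  # GN step already lies inside the region

    # Steepest-descent (Cauchy) step along -g
    p_sd = -(g @ g) / (g @ B @ g) * g
    if np.linalg.norm(p_sd) >= delta:
        return delta * p_sd / np.linalg.norm(p_sd)  # truncated Cauchy step

    # Otherwise walk the dogleg path from p_sd toward p_gn until ||p|| = delta
    d = p_gn - p_sd
    a, b, c = d @ d, 2 * p_sd @ d, p_sd @ p_sd - delta**2
    tau = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
    return p_sd + tau * d
```

The three branches correspond to the usual dogleg cases: take the Gauss-Newton step when it fits, the scaled steepest-descent step when even that leaves the region, and otherwise the boundary point on the segment joining the two.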
In this paper we consider the regularity of the trust region-CG algorithm when it is applied to nonlinear ill-posed inverse problems. The trust region algorithm can be viewed as a regularization method, but it differs from the traditional regularization method because no penalty term is needed. Thus, the determination of the so-called regularization parameter in a standard regularization method is a...
Abstract. We consider methods for large-scale unconstrained minimization based on finding an approximate minimizer of a quadratic function subject to a two-norm trust-region inequality constraint. The Steihaug-Toint method uses the conjugate-gradient algorithm to minimize the quadratic over a sequence of expanding subspaces until the iterates either converge to an interior point or cross the co...
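For readers unfamiliar with the Steihaug-Toint idea sketched above, the following is an illustrative sketch, not the authors' implementation: conjugate gradients applied to the quadratic model g·p + 0.5·p·Bp, stopped either at convergence, at the trust-region boundary, or when negative curvature is detected. Names, tolerances, and the iteration cap are assumptions.

```python
import numpy as np

def steihaug_toint_cg(B, g, delta, tol=1e-8, max_iter=200):
    """Truncated CG for  min g.T p + 0.5 p.T B p  s.t. ||p|| <= delta.

    A sketch of the Steihaug-Toint idea only; B may be indefinite.
    """
    p = np.zeros_like(g)
    r = g.copy()                  # residual = gradient of the model at p
    d = -r
    if np.linalg.norm(r) < tol:
        return p
    for _ in range(max_iter):
        Bd = B @ d
        dBd = d @ Bd
        if dBd <= 0:
            # Negative curvature: follow d to the boundary and stop
            return p + _boundary_tau(p, d, delta) * d
        alpha = (r @ r) / dBd
        p_next = p + alpha * d
        if np.linalg.norm(p_next) >= delta:
            # Step would leave the trust region: stop on the boundary
            return p + _boundary_tau(p, d, delta) * d
        r_next = r + alpha * Bd
        if np.linalg.norm(r_next) < tol:
            return p_next
        beta = (r_next @ r_next) / (r @ r)
        d = -r_next + beta * d
        p, r = p_next, r_next
    return p

def _boundary_tau(p, d, delta):
    """Positive tau with ||p + tau*d|| = delta (root of a quadratic in tau)."""
    a, b, c = d @ d, 2 * p @ d, p @ p - delta**2
    return (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
```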
A nonmonotone trust-region method for the solution of nonlinear systems of equations with box constraints is considered. The method differs from existing trust-region methods both in using a new nonmonotonicity strategy to accept the current step and in using a new updating technique for the trust-region radius. The overall method is shown to be globally convergent. Moreover, when comb...
Line search methods and trust region methods are two important classes of techniques for solving optimization problems, each with its own advantages. In this paper we use the Armijo line search rule in a more accurate way and propose a new line search method for unconstrained optimization problems. Global convergence and the convergence rate of the new method are analyzed under mild conditio...
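As background for the refined rule mentioned in the abstract, here is a sketch of the classical backtracking Armijo condition it builds on; the refined variant itself is not reproduced, and the parameter names are generic conventions rather than the paper's.

```python
import numpy as np

def armijo_backtracking(f, grad_f, x, d, alpha0=1.0, beta=0.5, sigma=1e-4):
    """Classical backtracking Armijo rule.

    Shrink the step until f(x + a*d) <= f(x) + sigma * a * grad_f(x).d,
    assuming d is a descent direction at x.
    """
    fx = f(x)
    slope = grad_f(x) @ d
    a = alpha0
    while f(x + a * d) > fx + sigma * a * slope:
        a *= beta
    return a

# Example on a simple quadratic, stepping along the negative gradient
f = lambda x: 0.5 * x @ x
grad = lambda x: x
x0 = np.array([3.0, -4.0])
step = armijo_backtracking(f, grad, x0, -grad(x0))
```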
In (NAR 08/18 and 08/21, Oxford University Computing Laboratory, 2008) we introduced a second-derivative SQP method (S2QP) for solving nonlinear nonconvex optimization problems. We proved that the method is globally convergent and locally superlinearly convergent under standard assumptions. A critical component of the algorithm is the so-called predictor step, which is computed from a strictly ...
A trust-region-based BFGS method is proposed for solving symmetric nonlinear equations. In the given algorithm, if the trial step is unsuccessful, a line-search technique is used instead of repeatedly solving the subproblem of the normal trust-region method. We establish the global and superlinear convergence of the method under suitable conditions. Numerical results show that the given ...
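A rough sketch of the accept-or-line-search logic described above: the ratio test, the Armijo fallback, and all parameter names are assumptions based on standard trust-region and line-search conventions, not the paper's algorithm.

```python
import numpy as np

def accept_or_linesearch(f, g, x, p, pred_red, eta=0.1, beta=0.5, sigma=1e-4):
    """Keep the trust-region trial step p when the actual-to-predicted
    reduction ratio is good enough; otherwise back off along p with an
    Armijo-style line search instead of shrinking the radius and
    re-solving the subproblem. Assumes p is a descent direction at x.

    f: objective, g: gradient at x, p: trial step, pred_red: model reduction.
    """
    fx = f(x)
    if pred_red > 0 and (fx - f(x + p)) / pred_red >= eta:
        return x + p                      # successful trust-region step
    a, slope = 1.0, g @ p                 # fall back to a line search along p
    while f(x + a * p) > fx + sigma * a * slope:
        a *= beta
    return x + a * p
```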
In this paper, we incorporate a nonmonotone technique with the newly proposed adaptive trust region radius (Shi and Guo, 2008) [4] in order to propose a new nonmonotone trust region method with an adaptive radius for unconstrained optimization. Both nonmonotone techniques and adaptive trust region radius strategies can improve trust region methods with respect to global convergence. The g...
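The following is a generic illustration of a nonmonotone acceptance test, which compares the trial value against the worst of the last few objective values rather than the current one. The memory length and ratio form are assumptions; the cited adaptive radius of Shi and Guo is not reproduced here.

```python
from collections import deque

class NonmonotoneAcceptance:
    """Textbook-style nonmonotone acceptance test for a trust-region loop."""

    def __init__(self, memory=5):
        # Keep a short window of recent objective values.
        self.history = deque(maxlen=memory)

    def record(self, f_value):
        """Store the accepted iterate's objective value (call for f(x0) first)."""
        self.history.append(f_value)

    def ratio(self, f_trial, pred_reduction):
        """Nonmonotone reduction ratio: reference is the worst recent value."""
        f_ref = max(self.history)
        return (f_ref - f_trial) / pred_reduction
```

In a trust-region loop, the step is accepted (and the radius possibly enlarged) when this ratio exceeds a threshold, exactly as with the usual monotone ratio, but occasional increases in the objective are tolerated.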
In this paper, we present a new trust region method for nonlinear equations in which the trust region converges to zero. The new method preserves the global convergence of the traditional trust region methods, in which the trust region radius is bounded below by a positive constant. We study the convergence rate of the new method under the local error bound condition, which is weaker than the nonsin...
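One common device that makes the trust region shrink to zero at a solution of F(x) = 0 is to scale the radius by the current residual norm. The sketch below shows only this illustrative choice, which is consistent with the abstract but not necessarily the paper's actual update rule.

```python
import numpy as np

def residual_scaled_radius(F_xk, mu=1.0):
    """Radius choice delta_k = mu * ||F(x_k)||, so delta_k -> 0 as the
    residual vanishes near a solution of F(x) = 0 (illustrative only)."""
    return mu * np.linalg.norm(F_xk)
```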