Search results for: inexact inverse iteration
Number of results: 134,033
We propose a sequential quadratic optimization method for solving nonlinear optimization problems with equality and inequality constraints. The novel feature of the algorithm is that, during each iteration, the primal-dual search direction is allowed to be an inexact solution of a given quadratic optimization subproblem. We present a set of generic, loose conditions that the search direction (i.e., inexact subproblem solut...
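The SQP loop behind the abstract above can be sketched for the equality-constrained case. This is a minimal sketch under my own assumptions: the function names are mine, the QP subproblem's KKT system is solved exactly rather than inexactly, and the paper's inequality constraints are omitted.

```python
import numpy as np

def sqp_equality(f_grad, f_hess, c, c_jac, x0, tol=1e-10, max_iter=50):
    """Basic SQP for min f(x) s.t. c(x) = 0.

    Each iteration solves the QP subproblem via its KKT system; the
    paper would allow this solve to be inexact under its loose
    conditions, and also treats inequality constraints, which this
    sketch omits.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g, H = f_grad(x), f_hess(x)
        cx, A = c(x), c_jac(x)
        m = len(cx)
        # KKT system of the QP subproblem:  [H A'; A 0] [d; lam] = [-g; -c]
        K = np.block([[H, A.T], [A, np.zeros((m, m))]])
        rhs = np.concatenate([-g, -cx])
        d = np.linalg.solve(K, rhs)[:len(x)]
        if np.linalg.norm(d) <= tol and np.linalg.norm(cx) <= tol:
            break
        x = x + d
    return x
```

For a convex quadratic objective with a linear constraint, one step already lands on the solution; the loop then detects a zero step and stops.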
This paper considers the inexact Barzilai-Borwein algorithm applied to saddle point problems. To this aim, we study the convergence properties of the inexact Barzilai-Borwein algorithm for symmetric positive definite linear systems. Suppose that gk and g̃k are the exact residual and its approximation of the linear system at the k-th iteration, respectively. We prove the R-linear convergence of t...
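As context for the abstract above, here is a minimal Barzilai-Borwein gradient iteration for a symmetric positive definite system Ax = b. The `bb_solve` name and parameters are mine; the paper's inexact variant would replace the exact residual g_k below with an approximation g̃_k.

```python
import numpy as np

def bb_solve(A, b, x0, tol=1e-10, max_iter=500):
    """Barzilai-Borwein gradient iteration for SPD Ax = b.

    Minimizes f(x) = 1/2 x'Ax - b'x, whose gradient is the
    residual g = Ax - b; the step length is the BB1 choice
    s's / s'y from the last displacement and gradient change.
    """
    x = x0.copy()
    g = A @ x - b                       # exact residual (g_k in the abstract)
    alpha = 1.0 / np.linalg.norm(g)     # crude first step length
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol * np.linalg.norm(b):
            break
        x_new = x - alpha * g
        g_new = A @ x_new - b
        s, y = x_new - x, g_new - g
        alpha = (s @ s) / (s @ y)       # BB1 step; s'y = s'As > 0 for SPD A
        x, g = x_new, g_new
    return x
```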
We propose a stochastic variance-reduced cubic regularized Newton algorithm to optimize finite-sum problems over a Riemannian submanifold of Euclidean space. The proposed method requires a full gradient and Hessian update at the beginning of each epoch, while it performs stochastic updates in the iterations within the epoch. An iteration complexity of $O(\epsilon^{-3/2})$ is established to obtain an $(\epsilon, \sqrt{\epsilon})$-second-order sta...
Classical iteration methods for linear systems, such as Jacobi iteration, can be accelerated considerably by Krylov subspace methods like GMRES. In this paper, we describe how inexact Newton methods for nonlinear problems can be accelerated in a similar way and how this leads to a general framework that includes many well-known techniques for solving linear and nonlinear systems, as well as new...
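The Newton-Krylov pattern this abstract describes can be sketched as follows: each outer Newton step solves J(x) s = -F(x) only to a relative tolerance (the "forcing term") with a Krylov method. All names and tolerances are mine, and the inner solver is a bare-bones full GMRES rather than a production implementation.

```python
import numpy as np

def gmres_lite(A, b, tol, max_iter):
    """Minimal full GMRES (no restarts), started from x0 = 0."""
    n = len(b)
    Q = np.zeros((n, max_iter + 1))
    H = np.zeros((max_iter + 1, max_iter))
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    x = np.zeros(n)
    for k in range(max_iter):
        v = A @ Q[:, k]                      # Arnoldi expansion
        for j in range(k + 1):
            H[j, k] = Q[:, j] @ v
            v -= H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(v)
        if H[k + 1, k] > 1e-14:              # skip on happy breakdown
            Q[:, k + 1] = v / H[k + 1, k]
        e1 = np.zeros(k + 2)
        e1[0] = beta
        # least-squares solve of the small Hessenberg system
        y, *_ = np.linalg.lstsq(H[:k + 2, :k + 1], e1, rcond=None)
        x = Q[:, :k + 1] @ y                 # current GMRES iterate
        if np.linalg.norm(b - A @ x) <= tol:
            break
    return x

def inexact_newton(F, J, x0, eta=1e-2, tol=1e-10, max_iter=50):
    """Inexact Newton: solve J(x) s = -F(x) only to relative
    tolerance eta with the Krylov inner solver, then update x."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        nrm = np.linalg.norm(Fx)
        if nrm <= tol:
            break
        s = gmres_lite(J(x), -Fx, tol=eta * nrm, max_iter=len(Fx))
        x = x + s
    return x
```

Tightening `eta` as the residual shrinks recovers the fast local convergence of exact Newton; a fixed `eta` gives linear outer convergence.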
In this article, a new inexact alternating direction method (ADM) is proposed for solving a class of variational inequality problems. At each iteration, the new method first solves the resulting ADM subproblems approximately to generate a temporary point x̃, and then the multiplier y is updated to obtain the new iterate y. In order to obtain x, we adopt a new descent direction which is simple comp...
In this paper, we propose an inexact Uzawa method with variable relaxation parameters for iteratively solving linear saddle-point problems. The method involves two variable relaxation parameters, which can be updated easily in each iteration, similar to the evaluation of the two iteration parameters in the conjugate gradient method. This new algorithm has an advantage over most existing Uzawa-t...
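A plain Uzawa iteration for the saddle-point system the abstract targets looks like this. The sketch is mine: it uses an exact inner solve and a single fixed relaxation parameter `tau`, whereas the paper's inexact variant approximates the inner solve and updates two relaxation parameters in each iteration, similarly to the conjugate gradient method.

```python
import numpy as np

def uzawa(A, B, f, g, tau=1.0, tol=1e-10, max_iter=1000):
    """Uzawa iteration for the saddle-point system
        [A  B ] [x]   [f]
        [B' 0 ] [y] = [g].

    Inner solve is exact here; an inexact Uzawa method would
    replace np.linalg.solve with an approximate solver.
    """
    y = np.zeros(B.shape[1])
    for _ in range(max_iter):
        x = np.linalg.solve(A, f - B @ y)   # inner solve for x
        r = B.T @ x - g                     # constraint residual
        if np.linalg.norm(r) <= tol:
            break
        y = y + tau * r                     # relaxed multiplier update
    return x, y
```

Convergence of this simple form requires `tau` to be small relative to the spectrum of the Schur complement B'A⁻¹B, which is exactly the tuning burden the variable-parameter schemes aim to remove.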
The accelerated proximal gradient (APG) method, first proposed by Nesterov for minimizing smooth convex functions, later extended by Beck and Teboulle to composite convex objective functions, and studied in a unifying manner by Tseng, has proven to be highly efficient in solving some classes of large scale structured convex optimization (possibly nonsmooth) problems, including nuclear norm mini...
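The APG scheme in the Beck-Teboulle composite form (FISTA) can be illustrated on an l1-regularized least-squares problem, one of the structured nonsmooth problems the abstract mentions. The function name and problem instance are mine.

```python
import numpy as np

def apg_lasso(A, b, lam, L=None, tol=1e-10, max_iter=5000):
    """Accelerated proximal gradient (FISTA) for the composite problem
        min_x  1/2 ||Ax - b||^2  +  lam * ||x||_1 .
    """
    if L is None:
        L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    z, t = x.copy(), 1.0                    # extrapolated point and momentum scalar
    for _ in range(max_iter):
        grad = A.T @ (A @ z - b)            # gradient of the smooth part at z
        u = z - grad / L
        # prox of (lam/L) * ||.||_1 : componentwise soft-thresholding
        x_new = np.sign(u) * np.maximum(np.abs(u) - lam / L, 0.0)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)   # Nesterov extrapolation
        if np.linalg.norm(x_new - x) <= tol:
            x = x_new
            break
        x, t = x_new, t_new
    return x
```

Swapping the soft-thresholding step for the proximal map of the nuclear norm (singular-value thresholding) gives the nuclear-norm setting the abstract alludes to.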