A modified SOR-like method for the augmented systems

Related articles

Optimum parameter for the SOR-like method for augmented systems

Recently, several proposals for generalizing Young's SOR method to the saddle point problem, or augmented system, have been presented. One of the most practical versions is the SOR-like method given by Golub et al. [(2001). SOR-like methods for augmented systems. BIT, 41, 71–85], where the convergence analysis and the determination of its optimum parameters were given. In this article, a ful...
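
For orientation, the SOR-like iteration discussed above is commonly written, for the augmented system [A, B; B^T, 0][x; y] = [b; q], as x_{k+1} = (1 - omega) x_k + omega A^{-1}(b - B y_k) followed by y_{k+1} = y_k + omega Q^{-1}(B^T x_{k+1} - q), where Q is a preconditioning matrix approximating the Schur complement B^T A^{-1} B and omega is the relaxation parameter. The sketch below is a minimal illustration under that convention, not code from the article; the test data, the choice Q = B^T A^{-1} B, and omega = 1 are assumptions made for the example.

```python
# Minimal sketch, assuming the sign convention [A B; B^T 0][x; y] = [b; q]
# and the common form of the SOR-like iteration (Golub et al., 2001).
# The data, the preconditioner Q and omega below are illustrative choices.
import numpy as np

def sor_like(A, B, b, q, Q, omega, tol=1e-10, max_iter=500):
    """x_{k+1} = (1-w) x_k + w A^{-1}(b - B y_k)
       y_{k+1} = y_k + w Q^{-1}(B^T x_{k+1} - q)"""
    n, m = B.shape
    x, y = np.zeros(n), np.zeros(m)
    for k in range(1, max_iter + 1):
        x = (1 - omega) * x + omega * np.linalg.solve(A, b - B @ y)
        y = y + omega * np.linalg.solve(Q, B.T @ x - q)
        # Stop on the residual of the full augmented system.
        res = np.hstack([A @ x + B @ y - b, B.T @ x - q])
        if np.linalg.norm(res) < tol:
            break
    return x, y, k

# Illustrative data: A symmetric positive definite, B of full column rank.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)
B = rng.standard_normal((5, 2))
b, q = rng.standard_normal(5), rng.standard_normal(2)
Q = B.T @ np.linalg.solve(A, B)   # here simply the exact Schur complement
x, y, iters = sor_like(A, B, b, q, Q, omega=1.0)
print(iters, np.linalg.norm(A @ x + B @ y - b), np.linalg.norm(B.T @ x - q))
```

With a cheaper, approximate Q, the choice of omega governs convergence, which is precisely the optimum-parameter question the articles listed here address.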

A note on an SOR-like method for augmented systems

Golub et al. (2001, BIT, 41, 71–85) gave a generalized successive over-relaxation method for augmented systems. In this paper, the connection between the SOR-like method and the preconditioned conjugate gradient (PCG) method for augmented systems is investigated. It is shown that the PCG method is at least as accurate (fast) as the SOR-like method. Numerical examples demonstrate that th...

Optimal parameters of the generalized symmetric SOR method for augmented systems

For the augmented system of linear equations, Zhang and Lu (2008) recently studied the generalized symmetric SOR method (GSSOR) with two parameters. In this note, the optimal parameters of the GSSOR method are obtained, and numerical examples are given to illustrate the corresponding results.

SOR-like Methods for Augmented Systems (Dedicated to Professor David Young)

Several SOR-like algorithms are proposed for solving augmented systems, which have appeared in many different applications of scientific computing, for example, constrained optimization and the finite element approximation for solving the Stokes equation. The convergence and the choice of optimal parameter for these algorithms are studied. The convergence and divergence regions for some algorithms ...

Augmented Downhill Simplex: a Modified Heuristic Optimization Method

The Augmented Downhill Simplex Method (ADSM) is introduced here; it is a heuristic combination of the Downhill Simplex Method (DSM) with a random search algorithm. DSM is an interpretable nonlinear local optimization method. However, it is a local exploitation algorithm, so it can be trapped in a local minimum. In contrast, random search is a global exploration method, but less efficient. Here, rand...
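
As a loose illustration of the idea described above (local downhill simplex plus global random exploration), the sketch below restarts SciPy's Nelder-Mead simplex from randomly sampled points and keeps the best result. This is a generic random-restart scheme shown only for illustration, not the ADSM algorithm of the paper; the Rastrigin test function, the bounds, and the number of restarts are arbitrary assumptions.

```python
# Illustrative sketch only: a generic random-restart downhill simplex,
# not the paper's ADSM. Random starting points provide global exploration,
# while each Nelder-Mead run performs the local exploitation.
import numpy as np
from scipy.optimize import minimize

def random_restart_simplex(f, bounds, n_starts=20, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T   # bounds: (low, high) per variable
    best = None
    for _ in range(n_starts):
        x0 = rng.uniform(lo, hi)                       # global exploration
        res = minimize(f, x0, method="Nelder-Mead")    # local exploitation
        if best is None or res.fun < best.fun:
            best = res
    return best

# Example: the multimodal Rastrigin function, which easily traps a single simplex run.
def rastrigin(x):
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

best = random_restart_simplex(rastrigin, bounds=[(-5.12, 5.12)] * 2)
print(best.x, best.fun)
```

Any global sampling scheme could replace the uniform restarts; the point is only the division of labour between exploration and exploitation that the abstract describes.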


Journal

Journal title: Journal of Computational and Applied Mathematics

Year: 2015

ISSN: 0377-0427

DOI: 10.1016/j.cam.2014.07.002