Search results for: matrix krylov subspaces

Number of results: 373988

2008
ROLAND W. FREUND

A standard approach to model reduction of large-scale higher-order linear dynamical systems is to rewrite the system as an equivalent first-order system and then employ Krylov-subspace techniques for model reduction of first-order systems. This paper presents some results about the structure of the block-Krylov subspaces induced by the matrices of such equivalent first-order formulations of hig...
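As a rough sketch of the first-order rewriting referred to above, the following illustrative Python builds the standard companion linearization of a hypothetical second-order system M q'' + D q' + K q = B u (the matrix names and sizes are assumptions for illustration, not Freund's notation):

```python
import numpy as np

def linearize_second_order(M, D, K, B):
    """Rewrite M q'' + D q' + K q = B u as an equivalent first-order
    system E x' = A x + G u with state x = [q; q'].  Block-Krylov
    model reduction would then be applied to the pencil (A, E)."""
    n = M.shape[0]
    I, Z = np.eye(n), np.zeros((n, n))
    E = np.block([[I, Z], [Z, M]])
    A = np.block([[Z, I], [-K, -D]])
    G = np.vstack([np.zeros_like(B), B])
    return E, A, G

# toy example with illustrative matrices
rng = np.random.default_rng(0)
n = 4
M = np.eye(n)
D = 0.1 * np.eye(n)
K = rng.standard_normal((n, n)); K = K @ K.T + n * np.eye(n)
B = rng.standard_normal((n, 1))
E, A, G = linearize_second_order(M, D, K, B)
print(E.shape, A.shape, G.shape)   # (8, 8) (8, 8) (8, 1)
```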

2007
Daniel Kressner

Aggressive early deflation [1] has proven to significantly enhance the convergence of the QR algorithm for computing the eigenvalues of a nonsymmetric matrix. It is shown that this deflation strategy is equivalent to extracting converged Ritz vectors from certain Krylov subspaces. As a special case, the single-shift QR algorithm enhanced with aggressive early deflation corresponds to a Krylov s...

2012
Erlend Aune Daniel Simpson Jo Eidsvik

In order to compute the log-likelihood for high-dimensional Gaussian models, it is necessary to compute the determinant of the large, sparse, symmetric positive definite precision matrix. Traditional methods for evaluating the log-likelihood, which are typically based on Cholesky factorisations, are not feasible for very large models due to the massive memory requirements. We present a novel ap...
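For context, the quantity at stake is log det Q inside the Gaussian log-likelihood. A minimal dense sketch of the traditional Cholesky-based evaluation (the approach the abstract notes becomes infeasible for very large sparse Q; all names here are illustrative):

```python
import numpy as np

def gaussian_loglik_cholesky(Q, x, mu):
    """Log-density of N(mu, Q^{-1}) at x, where Q is a symmetric positive
    definite precision matrix:
      log p(x) = 0.5*log det Q - 0.5*(x-mu)^T Q (x-mu) - 0.5*n*log(2*pi).
    The Cholesky factor gives log det Q = 2*sum(log(diag(L))); for very
    large sparse Q this factorisation is the memory bottleneck that
    Krylov-based approximations aim to avoid."""
    n = Q.shape[0]
    L = np.linalg.cholesky(Q)                      # Q = L L^T
    logdet = 2.0 * np.sum(np.log(np.diag(L)))
    r = x - mu
    quad = r @ Q @ r
    return 0.5 * logdet - 0.5 * quad - 0.5 * n * np.log(2.0 * np.pi)

# toy check on a small SPD precision matrix
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
Q = A @ A.T + 5 * np.eye(5)
x, mu = rng.standard_normal(5), np.zeros(5)
print(gaussian_loglik_cholesky(Q, x, mu))
```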

2001
Peter Lancaster Qiang Ye Hans Schneider

We are concerned with eigenvalue problems for definite and indefinite symmetric matrix pencils. First, Rayleigh-Ritz methods are formulated and, using Krylov subspaces, a convergence analysis is presented for definite pencils. Second, generalized symmetric Lanczos algorithms are introduced as a special Rayleigh-Ritz method. In particular, an a posteriori convergence criterion is demonstrated by...
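A minimal sketch of the Rayleigh-Ritz idea for a symmetric pencil (A, B), using generic numpy names rather than the paper's notation: project both matrices onto a subspace basis V and solve the small projected generalized eigenproblem.

```python
import numpy as np
from scipy.linalg import eigh, qr

def rayleigh_ritz_pencil(A, B, V):
    """Rayleigh-Ritz for the pencil (A, B): solve the projected problem
    (V^T A V) y = theta (V^T B V) y and lift back to Ritz vectors V y.
    Assumes V^T B V is positive definite (e.g. B SPD, the definite case)."""
    theta, Y = eigh(V.T @ A @ V, V.T @ B @ V)   # small dense problem
    return theta, V @ Y

# toy example: orthonormal Krylov basis for B^{-1} A from a start vector
rng = np.random.default_rng(2)
n, m = 50, 8
A = rng.standard_normal((n, n)); A = 0.5 * (A + A.T)
B = rng.standard_normal((n, n)); B = B @ B.T + n * np.eye(n)     # SPD
K = np.zeros((n, m))
w = rng.standard_normal(n)
for j in range(m):
    K[:, j] = w
    w = np.linalg.solve(B, A @ w)      # next Krylov direction
V, _ = qr(K, mode='economic')
theta, X = rayleigh_ritz_pencil(A, B, V)
print(theta)                           # Ritz value approximations
```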

Journal: SIAM J. Matrix Analysis Applications, 2017
Ian N. Zwaan Michiel E. Hochstenbach

We consider the two-sided Arnoldi method and propose a two-sided Krylov–Schur-type restarting method. We discuss the restart for standard Rayleigh–Ritz extraction as well as harmonic Rayleigh–Ritz extraction. Additionally, we provide error bounds for Ritz values and Ritz vectors in the context of oblique projections and present generalizations of, e.g., the Bauer–Fike theorem and Saad’s theorem....

1995
Yousef Saad

In a general projection technique the original matrix problem of size N is approximated by one of dimension m, typically much smaller than N. A particularly successful class of techniques based on this principle is that of Krylov subspace methods, which utilize subspaces of the form span{v, Av, ..., A^{m-1}v}. This general principle can be used to solve linear systems and eigenvalue problems whic...
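A minimal sketch of building an orthonormal basis of the Krylov subspace span{v, Av, ..., A^{m-1}v} via the Arnoldi process (plain numpy for illustration, not Saad's original codes):

```python
import numpy as np

def arnoldi(A, v, m):
    """Build an orthonormal basis V of span{v, Av, ..., A^{m-1}v} and the
    (m+1) x m upper Hessenberg matrix H with A @ V[:, :m] = V @ H."""
    n = v.size
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                    # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:                   # breakdown: invariant subspace found
            return V[:, :j + 1], H[:j + 2, :j + 1]
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

# projected eigenvalue problem: Ritz values from the small Hessenberg matrix
rng = np.random.default_rng(3)
A = rng.standard_normal((200, 200))
V, H = arnoldi(A, rng.standard_normal(200), 30)
ritz = np.linalg.eigvals(H[:-1, :])
print(ritz[np.argsort(np.abs(ritz))][-3:])        # largest-magnitude Ritz values
```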

2012
MARTIN H. GUTKNECHT

For the iterative solution of large sparse linear systems we develop a theory for a family of augmented and deflated Krylov space solvers that are coordinate based in the sense that the given problem is transformed into one that is formulated in terms of the coordinates with respect to the augmented bases of the Krylov subspaces. Except for the augmentation, the basis is as usual generated by a...

Journal: SIAM J. Scientific Computing, 2005
James Baglama Lothar Reichel

New restarted Lanczos bidiagonalization methods for the computation of a few of the largest or smallest singular values of a large matrix are presented. Restarting is carried out by augmentation of Krylov subspaces that arise naturally in the standard Lanczos bidiagonalization method. The augmenting vectors are associated with certain Ritz or harmonic Ritz vectors. Computed examples show the ne...
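For a sense of the target computation, a few of the largest singular values of a large sparse matrix can be obtained with an off-the-shelf iterative routine such as scipy.sparse.linalg.svds (not the authors' method; the augmented restarted Lanczos bidiagonalization proposed above addresses the same task):

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import svds

# Hypothetical large sparse test matrix, purely for illustration.
A = sparse_random(2000, 1500, density=1e-3, format='csr', random_state=0)

# A few of the largest singular values/vectors via an iterative solver;
# restarted Lanczos bidiagonalization targets exactly these quantities.
U, s, Vt = svds(A, k=6, which='LM')
print(np.sort(s)[::-1])
```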

2000
Eiji Mizutani James Demmel

This paper describes a method of dogleg trust-region steps, or restricted Levenberg-Marquardt steps, based on a projection process onto Krylov subspaces for neural-network nonlinear least-squares problems. In particular, the linear conjugate gradient (CG) method works as the inner iterative algorithm for solving the linearized Gauss-Newton normal equation, whereas the outer nonlinear algor...
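A minimal sketch of the inner iteration described here (generic names, not the authors' implementation): solve the regularized Gauss-Newton normal equations (J^T J + mu I) p = -J^T r by linear CG, which works implicitly in a Krylov subspace generated by J^T J.

```python
import numpy as np

def lm_step_cg(J, r, mu, max_iter=50, tol=1e-8):
    """Approximately solve (J^T J + mu*I) p = -J^T r by linear conjugate
    gradients, i.e. the inner step of a Levenberg-Marquardt / dogleg
    trust-region scheme; only products with J and J^T are needed."""
    n = J.shape[1]
    b = -J.T @ r
    p = np.zeros(n)
    res = b.copy()                     # residual of the normal equations
    d = res.copy()
    rs_old = res @ res
    for _ in range(max_iter):
        Ad = J.T @ (J @ d) + mu * d    # (J^T J + mu I) d without forming J^T J
        alpha = rs_old / (d @ Ad)
        p += alpha * d
        res -= alpha * Ad
        rs_new = res @ res
        if np.sqrt(rs_new) < tol:
            break
        d = res + (rs_new / rs_old) * d
        rs_old = rs_new
    return p

# toy usage on a random linearized least-squares problem
rng = np.random.default_rng(5)
J = rng.standard_normal((100, 20))
r = rng.standard_normal(100)
p = lm_step_cg(J, r, mu=1e-2)
print(np.linalg.norm((J.T @ J + 1e-2 * np.eye(20)) @ p + J.T @ r))  # small residual
```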

2013
Mian Ilyas Ahmad Daniel B. Szyld Martin B. van Gijzen

Modern methods for H2-optimal model order reduction include the Iterative Rational Krylov Algorithm (IRKA, [Gugercin, Antoulas, and Beattie, 2008]) and Parameterized Model Reduction (PMOR, [Baur, Beattie, Benner, and Gugercin, 2011]). In every IRKA or PMOR iteration, two sequences of shifted linear systems need to be solved, one for the shifted matrices and one for their adjoint, using the same...
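The computational kernel mentioned here amounts to solving, for each shift sigma_i, the pair of systems (sigma_i E - A) v = b and (sigma_i E - A)^H w = c. A dense, purely illustrative sketch with generic names (IRKA additionally updates the shifts and projects the model with the resulting bases):

```python
import numpy as np

def shifted_solves(A, E, b, c, shifts):
    """For each shift sigma, solve (sigma*E - A) v = b and the adjoint
    system (sigma*E - A)^H w = c.  In IRKA/PMOR the resulting vectors
    are used to build the right and left projection subspaces; here
    everything is dense and purely illustrative."""
    V, W = [], []
    for sigma in shifts:
        M = sigma * E - A
        V.append(np.linalg.solve(M, b))
        W.append(np.linalg.solve(M.conj().T, c))
    return np.column_stack(V), np.column_stack(W)

# toy stable system
rng = np.random.default_rng(6)
n = 30
A = rng.standard_normal((n, n)) - n * np.eye(n)   # shifted to be stable
E = np.eye(n)
b = rng.standard_normal(n)
c = rng.standard_normal(n)
V, W = shifted_solves(A, E, b, c, shifts=[1.0, 2.0, 5.0])
print(V.shape, W.shape)    # (30, 3) (30, 3)
```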
