Search results for: matrix krylov subspaces

Number of results: 373988

2007
Zdeněk Strakoš

Consider a system of linear algebraic equations Ax = b where A is an n by n real matrix and b a real vector of length n. Unlike the linear iterative methods based on the idea of splitting of A, the Krylov subspace methods, which are used in computational kernels of various optimization techniques, look for an optimal approximate solution x_n in the subspaces K_n(A, b) = span{b, Ab, ..., A...
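For illustration (not part of the abstract), a minimal numpy sketch of how such a Krylov subspace basis is generated in practice with the Arnoldi process, and how an approximate solution of Ax = b is extracted from it; the test matrix, subspace dimension, and tolerances below are assumed:

```python
import numpy as np

def arnoldi(A, b, k):
    """Orthonormal basis Q of the Krylov subspace K_k(A, b) = span{b, Ab, ..., A^{k-1} b},
    together with the (k+1) x k Hessenberg matrix H such that A Q[:, :k] = Q H."""
    n = len(b)
    Q = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ Q[:, j]
        for i in range(j + 1):                 # modified Gram-Schmidt against previous basis vectors
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:                # the Krylov subspace became invariant; stop early
            return Q[:, :j + 1], H[:j + 1, :j]
        Q[:, j + 1] = w / H[j + 1, j]
    return Q, H

# illustrative use: approximate the solution of Ax = b from a small Krylov subspace (GMRES-style)
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200)) + 200 * np.eye(200)   # well-conditioned test matrix
b = rng.standard_normal(200)
Q, H = arnoldi(A, b, 20)
e1 = np.eye(H.shape[0])[:, 0]
y, *_ = np.linalg.lstsq(H, np.linalg.norm(b) * e1, rcond=None)   # minimize the residual over K_20(A, b)
x20 = Q[:, :H.shape[1]] @ y
print("relative residual:", np.linalg.norm(A @ x20 - b) / np.linalg.norm(b))
```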

2010
QIANG YE

The nonsymmetric Lanczos tridiagonalization algorithm is essentially the Gram-Schmidt biorthogonalization method for generating biorthogonal bases of a pair of Krylov subspaces. It suffers from breakdown and instability when a pivot at some step is zero or nearly zero, which is often the result of a mismatch between the two Krylov subspaces. In this paper, we propose to modify one of the two Krylov su...
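A minimal sketch of the two-sided Lanczos biorthogonalization described above, including a check for the (near-)zero pivot that causes breakdown; this sketch simply stops at breakdown rather than applying the modification proposed in the paper, and all test data and tolerances are assumed:

```python
import numpy as np

def nonsym_lanczos(A, v, w, k, tol=1e-12):
    """Two-sided Lanczos: biorthogonal bases V of K_k(A, v) and W of K_k(A^T, w) with
    W.T @ V = I and tridiagonal T = W.T @ A @ V.  Stops when the pivot s.T @ r is (nearly) zero."""
    n = A.shape[0]
    V = np.zeros((n, k)); W = np.zeros((n, k)); T = np.zeros((k, k))
    if abs(w @ v) < tol:
        raise RuntimeError("breakdown: the starting vectors are (nearly) biorthogonal")
    V[:, 0] = v
    W[:, 0] = w / (w @ v)                       # enforce w_1^T v_1 = 1
    beta = gamma = 0.0
    for j in range(k):
        alpha = W[:, j] @ (A @ V[:, j])
        T[j, j] = alpha
        r = A @ V[:, j] - alpha * V[:, j] - (beta * V[:, j - 1] if j > 0 else 0)
        s = A.T @ W[:, j] - alpha * W[:, j] - (gamma * W[:, j - 1] if j > 0 else 0)
        if j + 1 == k or np.linalg.norm(r) < tol:
            break
        pivot = s @ r
        if abs(pivot) < tol * np.linalg.norm(r) * np.linalg.norm(s):
            raise RuntimeError(f"(near-)breakdown at step {j + 1}: pivot = {pivot:.2e}")
        gamma = np.linalg.norm(r)               # scaling of v_{j+1}
        beta = pivot / gamma                    # scaling of w_{j+1}, keeps w_{j+1}^T v_{j+1} = 1
        T[j + 1, j], T[j, j + 1] = gamma, beta
        V[:, j + 1] = r / gamma
        W[:, j + 1] = s / beta
    return V, W, T

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 100))
V, W, T = nonsym_lanczos(A, rng.standard_normal(100), rng.standard_normal(100), 15)
print("max biorthogonality error:", np.abs(W.T @ V - np.eye(15)).max())   # degrades in finite precision
```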

Journal: :Finite Fields and Their Applications 2022

Let T be a linear operator on a vector space V of dimension n over F_q. For any divisor m of n, an m-dimensional subspace W is T-splitting if V = W ⊕ TW ⊕ ⋯ ⊕ T^{d-1}W, where d = n/m. Let σ(m, d; T) denote the number of such subspaces. Determining σ(m, d; T) for arbitrary T is an open problem that is closely related to another important problem on Krylov spaces. We discuss this connection and give explicit formulae in the case where the invariant factors satisfy certain deg...
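A small sketch of the splitting condition over F_2 (the case q = 2): W is T-splitting exactly when the n x n matrix [W | TW | ... | T^{d-1}W] has full rank over GF(2). The operator and subspace below are toy examples, not from the paper:

```python
import numpy as np

def rank_gf2(M):
    """Rank of a 0/1 matrix over GF(2) by Gaussian elimination."""
    M = M.copy() % 2
    rank, rows, cols = 0, *M.shape
    for c in range(cols):
        piv = next((r for r in range(rank, rows) if M[r, c]), None)
        if piv is None:
            continue
        M[[rank, piv]] = M[[piv, rank]]         # swap the pivot row into place
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]                 # eliminate column c from the other rows
        rank += 1
    return rank

def is_splitting(T, W, d):
    """Check V = W ⊕ TW ⊕ ... ⊕ T^{d-1}W over F_2, where the columns of W span an
    m-dimensional subspace of F_2^n and d = n/m."""
    blocks = [W]
    for _ in range(d - 1):
        blocks.append((T @ blocks[-1]) % 2)
    return rank_gf2(np.hstack(blocks)) == T.shape[0]

# toy example over F_2 with n = 4, m = 2, d = 2: T is the cyclic shift e1 -> e2 -> e3 -> e4 -> e1
T = np.array([[0, 0, 0, 1], [1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]])
W = np.array([[1, 0], [0, 0], [0, 1], [0, 0]])      # W = span{e1, e3}; TW = span{e2, e4}
print(is_splitting(T, W, d=2))                      # True; span{e1, e2} would fail, since Te1 = e2 lies in it
```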

2017
MING ZHOU

Gradient iterations for the Rayleigh quotient are elemental methods for computing the smallest eigenvalues of a pair of symmetric and positive definite matrices. A considerable convergence acceleration can be achieved by preconditioning and by computing Rayleigh-Ritz approximations from subspaces of increasing dimensions. An example of the resulting Krylov subspace eigensolvers is the generaliz...
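For illustration (not from the abstract), a minimal numpy/scipy sketch of a preconditioned Rayleigh-quotient gradient iteration with Rayleigh-Ritz extraction from a subspace that grows by one preconditioned residual per step; the matrices, the Jacobi-type preconditioner, and all tolerances are assumed:

```python
import numpy as np
from scipy.linalg import eigh

def preconditioned_gradient_eig(A, M, precond, x0, steps=25):
    """Sketch of a preconditioned Rayleigh-quotient gradient eigensolver for A x = lam M x
    (A, M symmetric positive definite): each step enlarges the trial subspace by the
    preconditioned residual and extracts the smallest Ritz pair by Rayleigh-Ritz."""
    S = np.linalg.qr(x0[:, None])[0]
    lam, x = None, None
    for _ in range(steps):
        lam_all, Y = eigh(S.T @ A @ S, S.T @ M @ S)   # Rayleigh-Ritz on span(S)
        lam, x = lam_all[0], S @ Y[:, 0]              # smallest Ritz value and its Ritz vector
        r = A @ x - lam * (M @ x)                     # gradient direction of the Rayleigh quotient
        if np.linalg.norm(r) < 1e-10 * abs(lam):
            break
        S = np.linalg.qr(np.column_stack([S, precond(r)]))[0]   # enlarge the subspace
    return lam, x

# illustrative data (not from the paper): diagonal A, identity M, Jacobi preconditioner
n = 500
A = np.diag(np.linspace(1.0, 100.0, n))
M = np.eye(n)
lam, x = preconditioned_gradient_eig(A, M, lambda r: r / np.diag(A), np.ones(n))
print(lam)   # should approach the smallest eigenvalue, 1.0
```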

Journal: :SIAM J. Matrix Analysis Applications 2016
Klaus Neymeyr Ming Zhou

The A-gradient minimization of the Rayleigh quotient makes it possible to construct robust and fast-convergent eigensolvers for the generalized eigenvalue problem for (A, M) with symmetric and positive definite matrices. The A-gradient steepest descent iteration is the simplest case of more general restarted Krylov subspace iterations, for the special case that all step-wise generated Krylov subspaces are tw...
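The simplest case alluded to above, in which every step performs a Rayleigh-Ritz extraction on the two-dimensional space span{x, A^{-1}(Ax - rho(x) M x)}, can be sketched as follows; the matrices, starting vector, and tolerances are illustrative and this is not the authors' code:

```python
import numpy as np
from scipy.linalg import eigh, cho_factor, cho_solve

def a_gradient_steepest_descent(A, M, x, steps=40):
    """Sketch of A-gradient steepest descent for A x = lam M x (A, M s.p.d.): each step does
    a Rayleigh-Ritz extraction on the two-dimensional space span{x, A^{-1}(A x - rho(x) M x)}."""
    A_chol = cho_factor(A)
    for _ in range(steps):
        rho = (x @ (A @ x)) / (x @ (M @ x))            # Rayleigh quotient
        g = cho_solve(A_chol, A @ x - rho * (M @ x))   # A-gradient direction (up to scaling)
        if np.linalg.norm(g) < 1e-12 * np.linalg.norm(x):
            break
        S = np.linalg.qr(np.column_stack([x, g]))[0]   # two-dimensional trial space
        lam, Y = eigh(S.T @ A @ S, S.T @ M @ S)        # Rayleigh-Ritz in span{x, g}
        x = S @ Y[:, 0]
    rho = (x @ (A @ x)) / (x @ (M @ x))
    return rho, x

# illustrative example with a clear gap below the rest of the spectrum (assumed data)
rng = np.random.default_rng(2)
n = 300
A = np.diag(np.concatenate(([1.0], np.linspace(5.0, 50.0, n - 1))))
M = np.eye(n) + 0.1 * np.diag(rng.random(n))
rho, x = a_gradient_steepest_descent(A, M, rng.standard_normal(n))
print(rho, 1.0 / M[0, 0])   # rho converges to the smallest generalized eigenvalue, here 1/M[0,0]
```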

Journal: :SIAM J. Numerical Analysis 2014
Clara Mertens Raf Vandebril

There are many classical results in which orthogonal vectors stemming from Krylov subspaces are linked to short recurrence relations, e.g., three-term recurrences for Hermitian matrices and short rational recurrences for unitary matrices. These recurrence coefficients can be captured in a Hessenberg matrix, whose structure reflects the relation between the spectrum of the original matrix and the recurr...
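As a small numerical illustration of the Hermitian case (assumed random data, not from the paper): reducing a Hermitian matrix to Hessenberg form by unitary similarity, which plays the role of the Arnoldi/Lanczos recurrence here, produces a matrix that is in fact tridiagonal, i.e. exactly the classical three-term recurrence:

```python
import numpy as np
from scipy.linalg import hessenberg

rng = np.random.default_rng(3)
B = rng.standard_normal((60, 60)) + 1j * rng.standard_normal((60, 60))
A = B + B.conj().T                          # Hermitian test matrix
H = hessenberg(A)                           # unitary similarity to Hessenberg form
# Entries above the first superdiagonal vanish up to roundoff: the Hessenberg matrix that
# stores the recurrence coefficients collapses to a tridiagonal (three-term) one.
print(np.max(np.abs(np.triu(H, 2))))        # ≈ 0 up to roundoff
```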

Journal: :SIAM J. Scientific Computing 2007
Valeria Simoncini

In this paper we propose a new projection method to solve large-scale continuous-time Lyapunov matrix equations. The new method projects the problem onto a much smaller approximation space, generated as a combination of Krylov subspaces in A and A^{-1}. The reduced problem is then solved by means of a direct Lyapunov scheme based on matrix factorizations. The reported numerical results show the comp...
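A minimal sketch of such a projection approach, with the approximation space taken as an extended Krylov subspace built from A and A^{-1} and the reduced equation solved by a dense Lyapunov solver; the matrix, right-hand side, and subspace dimension are assumed, and this is not the paper's implementation:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def extended_krylov_lyapunov(A, b, m):
    """Galerkin projection sketch for A X + X A^T = -b b^T: build an orthonormal basis V of
    span{b, A^{-1}b, A b, A^{-2}b, A^2 b, ...}, project, solve the small Lyapunov equation
    directly, and lift back as X ≈ V Y V^T."""
    Ainv = np.linalg.inv(A)        # for the sketch only; a sparse factorization would be used in practice
    vecs, v_neg, v_pos = [b], b, b
    for _ in range(m):
        v_neg = Ainv @ v_neg
        v_pos = A @ v_pos
        vecs += [v_neg, v_pos]
    V, _ = np.linalg.qr(np.column_stack(vecs))             # orthonormal basis, dimension 2m + 1
    Ah, bh = V.T @ A @ V, V.T @ b                          # projected (reduced) problem
    Y = solve_continuous_lyapunov(Ah, -np.outer(bh, bh))   # small dense Lyapunov solve
    return V @ Y @ V.T

# illustrative test on a stable, non-normal matrix (assumed data)
rng = np.random.default_rng(4)
n = 200
A = -np.diag(np.linspace(2.0, 50.0, n)) + 0.05 * np.triu(rng.standard_normal((n, n)), 1)
b = rng.standard_normal(n)
X_approx = extended_krylov_lyapunov(A, b, m=15)
X_exact = solve_continuous_lyapunov(A, -np.outer(b, b))
print(np.linalg.norm(X_approx - X_exact) / np.linalg.norm(X_exact))
```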

2016
XIN WANG

Submitted for the MAR16 Meeting of The American Physical Society. Spectral Gauss quadrature method with subspace interpolation for Kohn-Sham density functional theory. XIN WANG, US Army Rsch Lab Aberdeen — Algorithms with linear-scaling (O(N)) computational complexity for Kohn-Sham density functional theory (K-S DFT) are crucial for studying molecular systems beyond thousands of atoms. Of the O(N...
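For illustration of the Gauss-quadrature ingredient only (not the paper's method or data): k Lanczos steps produce a tridiagonal matrix whose eigenvalues and squared first eigenvector components serve as quadrature nodes and weights for a quadratic form u^T f(A) u:

```python
import numpy as np
from scipy.linalg import expm

def lanczos_gauss_quadrature(A, u, k, f):
    """Gauss quadrature from the Lanczos process: the eigenvalues of the k x k tridiagonal
    matrix T_k are the nodes, the squared first components of its eigenvectors the weights,
    so that u^T f(A) u ≈ ||u||^2 * sum_i w_i f(theta_i)."""
    q_prev, q = np.zeros(len(u)), u / np.linalg.norm(u)
    alpha, beta = np.zeros(k), np.zeros(k - 1)
    for j in range(k):
        w = A @ q - (beta[j - 1] * q_prev if j > 0 else 0)
        alpha[j] = q @ w
        w -= alpha[j] * q
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            q_prev, q = q, w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    theta, S = np.linalg.eigh(T)
    return (u @ u) * np.sum(S[0, :] ** 2 * f(theta))

# toy check (assumed data): estimate u^T exp(A) u for a symmetric matrix
rng = np.random.default_rng(5)
G = rng.standard_normal((300, 300))
A = (G + G.T) / np.sqrt(300)                 # symmetric test matrix, spectrum roughly in [-2.8, 2.8]
u = rng.standard_normal(300)
print(u @ expm(A) @ u, lanczos_gauss_quadrature(A, u, 12, np.exp))
```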

Journal: :SIAM Review 2002
Jing-Rebecca Li Jacob K. White

This paper presents the Cholesky factor–alternating direction implicit (CF–ADI) algorithm, which generates a low-rank approximation to the solution X of the Lyapunov equation AX + XA^T = −BB^T. The coefficient matrix A is assumed to be large, and the rank of the right-hand side −BB^T is assumed to be much smaller than the size of A. The CF–ADI algorithm requires only matrix-vector products and mat...
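A minimal sketch of a low-rank ADI recursion in the spirit of CF–ADI, restricted to real negative shifts; the shift selection, test matrix, and right-hand side are assumed for illustration, and the recursion below is written from the standard low-rank ADI form rather than copied from the paper:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve

def lr_adi(A, B, shifts):
    """Low-rank ADI sketch for A X + X A^T = -B B^T with real negative shifts p_i: each step
    appends a block of columns to the low-rank factor Z, using only shifted solves with
    (A + p_i I).  The solution is approximated by X ≈ Z Z^T."""
    n = A.shape[0]
    I = np.eye(n)
    p = shifts[0]
    V = np.sqrt(-2.0 * p) * solve(A + p * I, B)            # first block of columns
    Z = [V]
    for p_new in shifts[1:]:
        V = np.sqrt(p_new / p) * (V - (p_new + p) * solve(A + p_new * I, V))
        Z.append(V)
        p = p_new
    return np.hstack(Z)                                    # low-rank factor Z

# illustrative test on assumed data (not the paper's example)
rng = np.random.default_rng(6)
n = 200
A = -np.diag(np.linspace(1.0, 100.0, n))                   # stable test matrix
B = rng.standard_normal((n, 2))                            # low-rank right-hand side
shifts = -np.geomspace(1.0, 100.0, 8)                      # heuristic shifts covering the spectrum
Z = lr_adi(A, B, shifts)
X_exact = solve_continuous_lyapunov(A, -B @ B.T)
print(Z.shape, np.linalg.norm(Z @ Z.T - X_exact) / np.linalg.norm(X_exact))
```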

Journal: :The Journal of chemical physics 2014
Yi Zeng Penghao Xiao Graeme Henkelman

Minimum mode following algorithms are widely used for saddle point searching in chemical and material systems. Common to these algorithms is a component to find the minimum curvature mode of the second derivative, or Hessian matrix. Several methods, including Lanczos, dimer, Rayleigh-Ritz minimization, shifted power iteration, and locally optimal block preconditioned conjugate gradient, have be...
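For illustration (assumed toy potential, not from the paper): a matrix-free minimum-mode computation in which the Hessian is only accessed through finite-difference Hessian-vector products, and scipy's Lanczos-based eigsh extracts the minimum-curvature eigenpair:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

def min_mode(grad, x, n, eps=1e-5):
    """Sketch of a Lanczos-based minimum-mode search: the Hessian is never formed; its action
    on a vector is approximated by a finite difference of the gradient, and the lowest-curvature
    eigenpair is extracted with a Lanczos-type iterative eigensolver."""
    g0 = grad(x)
    def hess_vec(v):
        return (grad(x + eps * v) - g0) / eps          # finite-difference Hessian-vector product
    H = LinearOperator((n, n), matvec=hess_vec, dtype=float)
    lam, vec = eigsh(H, k=1, which='SA')               # smallest algebraic eigenvalue = minimum curvature
    return lam[0], vec[:, 0]

# toy saddle in 6 dimensions: one negative-curvature direction (assumed test data)
curvatures = np.array([4.0, 3.0, 2.5, 2.0, 1.0, -0.5])
grad = lambda p: 2.0 * curvatures * p                  # gradient of E(p) = sum_i c_i p_i^2
lam, v = min_mode(grad, np.full(6, 0.1), 6)
print(lam)                                             # ≈ 2 * (-0.5) = -1, the minimum curvature
```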
