Search results for: bidiagonalization

Number of results: 146

2007
Kevin Browne, Sanzheng Qiao, Yimin Wei, Pei Yuan Wu

This paper presents a fast algorithm for bidiagonalizing a Hankel matrix. An m×n Hankel matrix is reduced to a real bidiagonal matrix in O((m+n)n log(m+n)) floating-point operations (flops) using the Lanczos method with modified partial orthogonalization and reset schemes to improve its stability. Performance improvement is achieved by exploiting the Hankel structure, as fast Hankel matrix–ve...
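The speedup rests on fast Hankel matrix–vector products: a Hankel matrix is constant along anti-diagonals, so H·x is a slice of a linear convolution of the defining sequence with the reversed vector, computable with FFTs in O((m+n) log(m+n)) time. A minimal sketch of that kernel (the function name and test setup are illustrative, not from the paper):

```python
import numpy as np
from scipy.linalg import hankel

def hankel_matvec(h, x, m):
    """Compute H @ x in O((m+n) log(m+n)) time, where H is the m-by-n
    Hankel matrix with H[i, j] = h[i + j] and len(h) == m + n - 1."""
    n = len(x)
    full = len(h) + n - 1                      # length of the linear convolution
    nfft = 1 << (full - 1).bit_length()        # next power of two
    # (H x)_i is entry n-1+i of conv(h, reversed(x))
    c = np.fft.irfft(np.fft.rfft(h, nfft) * np.fft.rfft(x[::-1], nfft), nfft)
    return c[n - 1 : n - 1 + m]

# check against a dense Hankel matrix
m, n = 5, 3
h = np.arange(1.0, m + n)                      # defining sequence, length m+n-1
H = hankel(h[:m], h[m - 1:])                   # H[i, j] == h[i + j]
x = np.random.default_rng(0).standard_normal(n)
assert np.allclose(H @ x, hankel_matvec(h, x, m))
```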

Journal: SIAM J. Matrix Analysis Applications, 2013
Mario Arioli

The Golub–Kahan bidiagonalization algorithm has been widely used in solving least-squares problems and in the computation of the SVD of rectangular matrices. Here we propose an algorithm based on the Golub–Kahan process for the solution of augmented systems that minimizes the norm of the error and, in particular, we propose a novel estimator of the error similar to the one proposed by Hestenes a...
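The Golub–Kahan process itself is short: starting from b, it alternates multiplications by A and Aᵀ to build orthonormal bases U and V together with a lower-bidiagonal B satisfying A·V = U·B. A minimal dense sketch (variable names are mine, and no reorthogonalization is performed, so it is not robust for ill-conditioned A):

```python
import numpy as np

def golub_kahan(A, b, k):
    """k steps of Golub-Kahan bidiagonalization started from b.
    Returns U (m x k+1), V (n x k), and the (k+1) x k lower-bidiagonal
    matrix B with A @ V == U @ B (in exact arithmetic)."""
    m, n = A.shape
    U = np.zeros((m, k + 1)); V = np.zeros((n, k)); B = np.zeros((k + 1, k))
    beta = np.linalg.norm(b); U[:, 0] = b / beta
    v = A.T @ U[:, 0]
    for j in range(k):
        alpha = np.linalg.norm(v); V[:, j] = v / alpha
        B[j, j] = alpha                               # diagonal entry
        u = A @ V[:, j] - alpha * U[:, j]
        beta = np.linalg.norm(u); U[:, j + 1] = u / beta
        B[j + 1, j] = beta                            # subdiagonal entry
        v = A.T @ U[:, j + 1] - beta * V[:, j]
    return U, B, V

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 5)); b = rng.standard_normal(8)
U, B, V = golub_kahan(A, b, 4)
assert np.allclose(A @ V, U @ B)                      # the defining recurrence
```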

2013
Persi Diaconis, Jason Fulman

We study the combinatorics of addition using balanced digits, deriving an analog of Holte’s “amazing matrix” for carries in usual addition. The eigenvalues of this matrix for base b balanced addition of n numbers are found to be 1, 1/b, · · · , 1/b, and formulas are given for its left and right eigenvectors. It is shown that the left eigenvectors can be identified with hyperoctahedral Foulkes c...

Journal: SIAM J. Scientific Computing, 2015
Sarah W. Gaaf, Michiel E. Hochstenbach

Reliable estimates for the condition number of a large, sparse, real matrix A are important in many applications. To get an approximation for the condition number κ(A), an approximation for the smallest singular value is needed. Standard Krylov subspaces are usually unsuitable for finding a good approximation to the smallest singular value. Therefore, we study extended Krylov subspaces which tu...
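One way to see why solves with A enter the picture: the smallest singular value of A is the reciprocal of the largest singular value of A⁻¹, and dominant singular values are exactly what Krylov methods find easily. A rough SciPy illustration of that principle (a sketch only, not the extended Krylov method of the paper; the test matrix is arbitrary):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import svds, splu, LinearOperator

# A moderately sparse, nonsingular test matrix (illustrative choice).
A = (sp.random(200, 200, density=0.05, random_state=0) + sp.eye(200)).tocsc()

lu = splu(A)                                   # sparse LU, so A^{-1} x is a solve
Ainv = LinearOperator(A.shape, dtype=A.dtype,
                      matvec=lu.solve,
                      rmatvec=lambda x: lu.solve(x, trans='T'))

sigma_max = svds(A, k=1, return_singular_vectors=False)[0]
# sigma_min(A) = 1 / sigma_max(A^{-1}): the hard-to-reach smallest singular
# value becomes the easy-to-reach largest one after inverting.
sigma_min = 1.0 / svds(Ainv, k=1, return_singular_vectors=False)[0]

kappa = sigma_max / sigma_min
assert np.isclose(kappa, np.linalg.cond(A.toarray()), rtol=1e-3)
```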

Journal: SIAM J. Scientific Computing, 2011
David Chin-Lung Fong, Michael A. Saunders

An iterative method LSMR is presented for solving linear systems Ax = b and least-squares problems min ‖Ax − b‖₂, with A being sparse or a fast linear operator. LSMR is based on the Golub–Kahan bidiagonalization process. It is analytically equivalent to the MINRES method applied to the normal equation AᵀAx = Aᵀb, so that the quantities ‖Aᵀrₖ‖ are monotonically decreasing (where rₖ = b − Axₖ is the re...
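SciPy ships an implementation of this algorithm as `scipy.sparse.linalg.lsmr`; a quick check of it against a dense least-squares solve:

```python
import numpy as np
from scipy.sparse.linalg import lsmr

# Solve min ||Ax - b||_2 with LSMR on a small dense test problem.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 20))
b = rng.standard_normal(100)

# lsmr returns (x, istop, itn, normr, normar, norma, conda, normx).
x = lsmr(A, b, atol=1e-12, btol=1e-12)[0]

# Compare with the direct dense least-squares solution.
x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(x, x_ref, atol=1e-6)
```

For genuinely large problems, A can be a `scipy.sparse` matrix or a `LinearOperator` supplying only matvec and rmatvec, which is the setting the abstract describes.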

2012
Andreas Kloeckner, Marsha Berger, Travis Askham, Steven Delong

We discuss an investigation into parallelizing the computation of a singular value decomposition (SVD). We break the process into three steps: bidiagonalization, computation of the singular values, and computation of the singular vectors. We discuss the algorithms, parallelism, implementation, and performance of each of these three steps. The original goal was to accomplish all three tasks usin...

Journal: Applied Mathematics and Computation, 2006
Saeed Karimi, Faezeh Toutounian

In this paper, we present the block least squares method for solving nonsymmetric linear systems with multiple right-hand sides. This method is based on the block bidiagonalization. We first derive two algorithms by using two different convergence criteria. The first one is based on independently minimizing the 2-norm of each column of the residual matrix and the second approach is based on mini...
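For the least-squares solution itself, the two criteria agree: minimizing the Frobenius norm of the residual AX − B decouples into independent column problems, since ‖AX − B‖²_F = Σᵢ ‖Axᵢ − bᵢ‖²₂. The snippet below verifies this on a small dense example (an illustration of the criterion only; it does not implement the block bidiagonalization algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
B = rng.standard_normal((50, 4))          # four right-hand sides

# Criterion 1: minimize ||A x_i - b_i||_2 independently for each column.
X_cols = np.column_stack([np.linalg.lstsq(A, B[:, i], rcond=None)[0]
                          for i in range(B.shape[1])])

# Criterion 2: minimize ||A X - B||_F over all columns at once.
X_frob = np.linalg.lstsq(A, B, rcond=None)[0]

# The Frobenius norm decouples column-wise, so the minimizers coincide.
assert np.allclose(X_cols, X_frob)
```

The practical difference between the two block algorithms lies in how the iterations are stopped and organized, not in the limit they converge to.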

Journal: Proceedings of the National Academy of Sciences of the United States of America, 2008
Vladimir Rokhlin, Mark Tygert

We introduce a randomized algorithm for overdetermined linear least-squares regression. Given an arbitrary full-rank m × n matrix A with m ≥ n, any m × 1 vector b, and any positive real number ε, the procedure computes an n × 1 vector x such that x minimizes the Euclidean norm ‖Ax − b‖ to relative precision ε. The algorithm typically requires O((log(n) + log(1/ε))mn + n³) fl...
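A stripped-down relative of this idea is "sketch-and-solve": compress the rows of A and b with a random matrix and solve the small least-squares problem. The cited algorithm is more refined (it uses a structured random transform and iterates to precision ε), so the snippet below only sketches the underlying principle, with parameters chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 2000, 50
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

# Gaussian row sketch; a few multiples of n rows suffice in practice.
s = 4 * n
S = rng.standard_normal((s, m)) / np.sqrt(s)

# Solve the compressed s x n problem instead of the m x n one.
x_sketch = np.linalg.lstsq(S @ A, S @ b, rcond=None)[0]
x_exact = np.linalg.lstsq(A, b, rcond=None)[0]

# The sketched residual is within a modest factor of the optimum.
r_sketch = np.linalg.norm(A @ x_sketch - b)
r_exact = np.linalg.norm(A @ x_exact - b)
assert r_sketch <= 1.5 * r_exact
```

Replacing the dense Gaussian S with a subsampled fast transform is what brings the cost down to the O(mn log n)-type bound quoted in the abstract.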

[Chart: number of search results per year of publication]