Search results for: ε weakly chebyshev subspace

Number of results: 79738

2016
Phani Motamarri, Vikram Gavini, Michael Ortiz

We present a spectrum-splitting approach to conduct all-electron Kohn-Sham density functional theory (DFT) calculations by employing Fermi-operator expansion of the Kohn-Sham Hamiltonian. The proposed approach splits the subspace containing the occupied eigenspace into a core-subspace, spanned by the core eigenfunctions, and its complement, the valence-subspace, and thereby enables an efficient...

2011
W. A. Kirk, Brailey Sims

Let X be a Banach space, let φ denote the usual Kuratowski measure of noncompactness, and let k_X(ε) = sup r(D), where r(D) is the Chebyshev radius of D and the supremum is taken over all closed convex subsets D of X for which diam(D) = 1 and φ(D) ≥ ε. The space X is said to have φ-uniform normal structure if k_X(ε) < 1 for each ε ∈ (0, 1). It is shown that this concept, which lies strictly...

Journal: Chicago J. Theor. Comput. Sci., 2012
Avraham Ben-Aroya, Igor Shinkar

A subspace-evasive set over a field F is a subset of Fⁿ that has small intersection with any low-dimensional affine subspace of Fⁿ. Interest in subspace-evasive sets began in the work of Pudlák and Rödl (Quaderni di Matematica 2004). More recently, Guruswami (CCC 2011) showed that obtaining such sets over large fields can be used to construct capacity-achieving list-decodable codes with a constan...

Journal: Numerical Lin. Alg. with Applic., 2000
Luca Bergamaschi, Marco Vianello

In this paper we compare Krylov subspace methods with Chebyshev series expansion for approximating the matrix exponential operator on large, sparse, symmetric matrices. Experimental results on negative-definite matrices of very large size, arising from (2D and 3D) FE and FD spatial discretizations of linear parabolic PDEs, demonstrate that the Chebyshev method can be an effective alternative...
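A minimal sketch of the Chebyshev alternative the abstract describes: approximate exp(A)·v for a symmetric matrix A by a truncated Chebyshev expansion of exp on an interval containing the spectrum, evaluated with the three-term recurrence. The function name, the Gershgorin-based spectral bounds, and the degree are illustrative choices, not the authors' implementation.

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

def cheb_expm_v(A, v, deg=30):
    """Approximate exp(A) @ v via a degree-`deg` Chebyshev expansion of exp
    on an interval [a, b] containing the spectrum of the symmetric matrix A."""
    # Cheap spectral bounds via Gershgorin discs (assumes A is symmetric).
    radii = np.sum(np.abs(A), axis=1) - np.abs(np.diag(A))
    a = np.min(np.diag(A) - radii)
    b = np.max(np.diag(A) + radii)
    # Chebyshev coefficients of exp on [a, b].
    c = Chebyshev.interpolate(np.exp, deg, domain=[a, b]).coef
    # Three-term recurrence T_k(B) v, with B mapping the spectrum into [-1, 1].
    B = (2.0 * A - (a + b) * np.eye(len(A))) / (b - a)
    t_prev, t_curr = v, B @ v
    y = c[0] * t_prev + c[1] * t_curr
    for k in range(2, deg + 1):
        t_prev, t_curr = t_curr, 2.0 * (B @ t_curr) - t_prev
        y += c[k] * t_curr
    return y

# Small symmetric negative-definite test matrix (arbitrary spectrum).
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((6, 6)))
A = Q @ np.diag(-np.linspace(0.5, 4.0, 6)) @ Q.T
v = rng.standard_normal(6)
y = cheb_expm_v(A, v)
```

Unlike a Krylov method, the recurrence needs only matrix–vector products with a fixed, a-priori polynomial, which is what makes it attractive for very large sparse matrices.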

Journal: CoRR, 2007
Xinjia Chen

In this article, we derive a new generalization of the Chebyshev inequality for random vectors. We demonstrate that the new generalization is much less conservative than the classical one. The Chebyshev inequality discloses the fundamental relationship between the mean and variance of a random variable. Extensive research works have been...
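For reference, the classical univariate inequality the abstract builds on is P(|X − μ| ≥ kσ) ≤ 1/k², and its conservatism is easy to see numerically. The exponential sample below is an illustrative choice, not taken from the article.

```python
import numpy as np

def chebyshev_tail_vs_bound(sample, k):
    """Compare the empirical tail mass P(|X - mu| >= k*sigma) with the
    Chebyshev bound 1/k^2."""
    mu, sigma = sample.mean(), sample.std()
    tail = np.mean(np.abs(sample - mu) >= k * sigma)
    return tail, 1.0 / k**2

rng = np.random.default_rng(1)
x = rng.exponential(scale=2.0, size=100_000)
tail, bound = chebyshev_tail_vs_bound(x, k=3.0)
# The empirical tail mass sits far below the distribution-free bound 1/9,
# which is the slack a sharper generalization tries to recover.
```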

2013
M. M. Hosseini

In this paper, an Adomian decomposition method using Chebyshev orthogonal polynomials is proposed to solve a well-known class of weakly singular Volterra integral equations. Comparison with the collocation method using polynomial spline approximation with Legendre–Radau points reveals that the Adomian decomposition method using Chebyshev orthogonal polynomials is of high accuracy and reduces th...

2006
John P. Boyd

The Kepler equation for the parameters of an elliptical orbit, E − ε sin(E) = M, is reduced from a transcendental to a polynomial equation by expanding the sine as a series of Chebyshev polynomials. The single real root is found by applying standard polynomial rootfinders and accepting only the polynomial root that lies on the interval predicted by rigorous theoretical bounds. A complete Matlab im...
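The reduction can be sketched as follows: since |sin E| ≤ 1, the root is bracketed in [M − ε, M + ε]; interpolating the residual by a Chebyshev series on that bracket turns the transcendental equation into a polynomial one, whose roots come from a standard rootfinder and are filtered to the bracket. This is a simplified take on the idea, not Boyd's Matlab implementation; the degree and tolerances are illustrative.

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

def kepler_root(M, eps, deg=16):
    """Solve E - eps*sin(E) = M by Chebyshev polynomialization (0 <= eps < 1)."""
    lo, hi = M - eps, M + eps              # rigorous bracket since |sin E| <= 1
    f = lambda E: E - eps * np.sin(E) - M
    # Polynomial proxy for f on the bracket; trim negligible trailing terms.
    p = Chebyshev.interpolate(f, deg, domain=[lo, hi]).trim(1e-12)
    roots = p.roots()                      # standard companion-matrix rootfinder
    # Accept only real roots lying on the theoretically predicted interval.
    real = roots[np.abs(roots.imag) < 1e-8].real
    real = real[(real >= lo - 1e-9) & (real <= hi + 1e-9)]
    return real[np.argmin(np.abs(real - M))]

E = kepler_root(M=1.0, eps=0.3)
```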

2014
Jelani Nelson, Huy L. Nguyen

An oblivious subspace embedding (OSE) for some ε, δ ∈ (0, 1/3) and d ≤ m ≤ n is a distribution D over ℝ^{m×n} such that for any linear subspace W ⊂ ℝⁿ of dimension d, Pr_{Π∼D}[∀x ∈ W, (1 − ε)‖x‖₂ ≤ ‖Πx‖₂ ≤ (1 + ε)‖x‖₂] ≥ 1 − δ. We prove that any OSE with δ < 1/3 must have m = Ω((d + log(1/δ))/ε²), which is optimal. Furthermore, if every Π in the support of D is sparse, having at most s non-zero entries...
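An illustrative sparse embedding in the spirit of the definition above is CountSketch: each column of Π has exactly s = 1 non-zero entry, a random sign in a random row, so Πx can be applied without materializing Π. The dimensions and the single-vector check below are arbitrary demo choices, not the paper's construction or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 10_000, 5, 2_000

# One (+/-1) non-zero per column: hash each coordinate to a row, flip a sign.
h = rng.integers(0, m, size=n)
s = rng.choice([-1.0, 1.0], size=n)

def apply_pi(x):
    """Apply the CountSketch matrix Pi to x without forming Pi explicitly."""
    y = np.zeros(m)
    np.add.at(y, h, s * x)   # y[h[i]] += s[i] * x[i], with repeated indices
    return y

# A vector from a random d-dimensional subspace W of R^n.
U, _ = np.linalg.qr(rng.standard_normal((n, d)))
x = U @ rng.standard_normal(d)
distortion = abs(np.linalg.norm(apply_pi(x)) / np.linalg.norm(x) - 1.0)
```

Note that the OSE guarantee is uniform over all of W for a single draw of Π; the snippet only spot-checks one vector, which is enough to see the norm preservation at work.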

Journal: Numerical Lin. Alg. with Applic., 2000
Yousef Saad

The convergence behavior of a number of algorithms based on minimizing residual norms over Krylov subspaces is not well understood. Residual or error bounds currently available are either too loose or depend on unknown constants which can be very large. In this paper we take another look at traditional as well as alternative ways of obtaining upper bounds on residual norms. In particular, we d...

Journal: SIAM J. Matrix Analysis Applications, 2004
Christopher A. Beattie, Mark Embree, John Rossi

The performance of Krylov subspace eigenvalue algorithms for large matrices can be measured by the angle between a desired invariant subspace and the Krylov subspace. We develop general bounds for this convergence that include the effects of polynomial restarting and impose no restrictions concerning the diagonalizability of the matrix or its degree of nonnormality. Associated with a desired se...
