Preconditioning Legendre Spectral Collocation Approximations to Elliptic Problems

Author

  • Ernest E. Rothman
Abstract

This work deals with the $H^1$ condition numbers and the distribution of the $\tilde\beta_{N,M}$-singular values of the preconditioned operators $\{\tilde\beta_{N,M}^{-1}W_{N,M}\hat A_{N,M}\}$. Here $\hat A_{N,M}$ is the matrix representation of the Legendre spectral collocation discretization of the elliptic operator $A$ defined by $Au := -\Delta u + a_1u_x + a_2u_y + a_0u$ in $\Omega$ (the unit square) with boundary conditions $u = 0$ on $\Gamma_0$, $\partial u/\partial\nu_A = \gamma u$ on $\Gamma_1$. $\tilde\beta_{N,M}$ is the stiffness matrix associated with the finite element discretization of the positive definite elliptic operator $B$ defined by $Bv := -\Delta v + b_0v$ in $\Omega$ with boundary conditions $v = 0$ on $\Gamma_0$, $\partial v/\partial\nu_B = \delta v$ on $\Gamma_1$. The finite element space is either the space of continuous functions which are bilinear on the rectangles determined by the Legendre-Gauss-Lobatto (LGL) points or the space of continuous functions which are linear on a triangulation of $\Omega$ determined by the LGL points. $W_{N,M}$ is the matrix of quadrature weights. When $A = B$ we obtain results on the eigenvalues of $\tilde\beta_{N,M}^{-1}W_{N,M}\hat B_{N,M}$. We show that there is an integer $N_0$ and constants $\alpha, \beta$ with $0 < \alpha < \beta$, such that: if $\min(N,M) \ge N_0$, then all the $\tilde\beta_{N,M}$-singular values of $\tilde\beta_{N,M}^{-1}W_{N,M}\hat A_{N,M}$ lie in the interval $[\alpha, \beta]$. Moreover, there is a smaller interval, $[\alpha_0, \beta_0]$, independent of the operator $A$, such that: if $\min(N,M) \ge N_0$, then all but a fixed finite number of the $\tilde\beta_{N,M}$-singular values lie in $[\alpha_0, \beta_0]$. These results are related to results of Manteuffel and Parter [MP], Parter and Wong [PW] and Wong [W1], [W2] for finite element discretizations.

1. Introduction

Let $\Omega$ be the square $[-1,1] \times [-1,1]$ and consider a uniformly elliptic operator given by

(1.1a)    $Au := -[u_{xx} + u_{yy}] + a_1u_x + a_2u_y + a_0u$ in $\Omega$

with boundary conditions

(1.1b)    $u = 0$ on $\Gamma_0$,  $\dfrac{\partial u}{\partial\nu} = \gamma u$ on $\Gamma_1$,

where

(1.2)    $\partial\Omega = \Gamma_0 \cup \Gamma_1$

and $\Gamma_0$ ($\Gamma_1$) consists of complete edges of the square, e.g., we could have

(1.3)    $\Gamma_0 = \{(-1, y);\ -1 \le y \le 1\}.$

While many of our major results are valid (Theorem 7.1, Theorem 8.3) in the general case of variable coefficients, we limit this discussion to the case where $a_1, a_2$, and $a_0$ are constants. We assume that $A$ is an invertible operator, but not necessarily definite. Let $\{A_{N,M}\}$ be a family of spectral collocation discretizations based on the Legendre-Gauss-Lobatto (LGL) points which arise from a variational or weak representation of the operator $A$ (see [QZ] or [BM]). Consider the systems of linear equations

(1.4)    $\hat A_{N,M}U = F$

which arise in the numerical solution of the boundary value problem $Au = f$ using these spectral collocation discretizations and the Lagrange basis $\{\varphi_{ij}(x,y)\}$ of the polynomial space $P^0_{N,M}$ (see Section 2 for a complete discussion of notations, etc.). The actual solution of the system (1.4) is difficult (see [QZ]) because the matrix $\hat A_{N,M}$ is badly conditioned. This is true even in the case when $A$ is a symmetric, positive definite operator. Preconditioned iterative methods are a preferred approach (see [CHQZ]). Building on an early suggestion of Orszag [Or], who used a finite difference preconditioner, several authors have suggested the use of finite element preconditioners ([QZ], [CQ], [DM]). Let $\tilde\beta_{N,M}$ be the stiffness matrix of the finite element discretization and let $M_{N,M}$ be the associated mass matrix. A natural approach is to replace (1.4) by

(1.5)    $\tilde\beta_{N,M}^{-1}M_{N,M}\hat A_{N,M}U = \tilde\beta_{N,M}^{-1}M_{N,M}F.$

The solution of (1.5) would then be effected by a damped Jacobi iterative method, GMRES or Bi-CGSTAB [V]. However, these methods can only be effectively used when the eigenvalues of the preconditioned matrix $\tilde\beta_{N,M}^{-1}M_{N,M}\hat A_{N,M}$ all have positive real parts.
Since one usually uses a positive definite preconditioner $\tilde\beta_{N,M}$, such approaches can be used only when $A$ is itself definite, i.e., its eigenvalues have positive real parts. In the more general case one could consider the Conjugate Gradient method applied to the normal equations associated with (1.5). Another preconditioning of (1.4) is given by

(1.6)    $\tilde\beta_{N,M}^{-1}W_{N,M}\hat A_{N,M}U = \tilde\beta_{N,M}^{-1}W_{N,M}F,$

where $W_{N,M}$ is the diagonal matrix of the quadrature weights $\omega_k\hat\omega_j$ associated with the Gauss-Lobatto quadrature. There has been quite a bit of research, mostly experimental, on the eigenvalues of the preconditioned matrices $\tilde\beta_{N,M}^{-1}M_{N,M}\hat A_{N,M}$ and $\tilde\beta_{N,M}^{-1}W_{N,M}\hat A_{N,M}$. For example, in a recent paper [QZ] the authors describe a modification of (1.4) and carry out a series of interesting computational experiments on the eigenvalues and the solution efficacy of their method. The finite element space employed in [QZ] is the space of continuous piecewise bilinear functions, $V^0_{N,M}$, with the basis being the tensor product of the one-dimensional "hat" functions. In this work we consider the same finite element space with the same basis as well as the space of continuous piecewise linear functions, $Z^0_{N,M}$, with the basis of two-dimensional "hat" functions (see [J]). We give a complete analysis of the $\tilde\beta_{N,M}$-singular values of

(1.7)    $L_{N,M} := \tilde\beta_{N,M}^{-1}W_{N,M}\hat A_{N,M},$

the preconditioned matrix associated with (1.6). The matrix $\tilde\beta_{N,M}$ is the stiffness matrix of any symmetric positive definite operator of the form

(1.8a)    $Bv = -[v_{xx} + v_{yy}] + bv$ in $\Omega$

with boundary conditions

(1.8b)    $v = 0$ on $\Gamma_0$,  $\dfrac{\partial v}{\partial\nu} = \delta v$ on $\Gamma_1$.

While it is essential that the $\Gamma_0$ (and $\Gamma_1$) of (1.8b) be the same as the $\Gamma_0$ (and $\Gamma_1$) of (1.1b), there are no other conditions, i.e., we allow $\delta \ne \gamma$. Indeed, we will carry out the discussion for the case $\delta = 0$. We do this for definiteness and because this is probably as good a choice as any for practical reasons. This flexibility in the choice of boundary conditions is consistent with results of [MP, Theorem 3.2]. The preconditioning results are contained in Theorem 7.1, Theorem 8.3 and Theorem 7.2, Theorem 8.4, which we restate as follows.

Theorem 7.1'. There is an integer $N_0(A)$ such that if $\min(N,M) \ge N_0$, then the operators $A_{N,M}$ are uniformly invertible. Assume $\min(N,M) \ge N_0$. There are two positive constants, $0 < \alpha < \beta$, independent of $(N,M)$, such that for all $U = (u_1, u_2, \dots, u_d)^T \ne 0$ ($d = \dim P^0_{N,M}$) we have the inequalities

(1.9)    $0 < \alpha^2 \le \dfrac{(\tilde\beta_{N,M}[L_{N,M}U], [L_{N,M}U])_{\ell^2}}{(\tilde\beta_{N,M}U, U)_{\ell^2}} \le \beta^2.$

Theorem 7.2'. Let $\alpha_0, \beta_0$ be the constants of Theorem 7.1' associated with the special case where $A = B$. That is, let $Q_{N,M} := \tilde\beta_{N,M}^{-1}W_{N,M}\hat B_{N,M}$. From Theorem 7.1' we have, for $U \ne 0$,

(1.10)    $0 < \alpha_0^2 \le \dfrac{(\tilde\beta_{N,M}[Q_{N,M}U], [Q_{N,M}U])_{\ell^2}}{(\tilde\beta_{N,M}U, U)_{\ell^2}} \le \beta_0^2.$

Returning to the general invertible operator $A$, there is an integer $N_1 \ge N_0(A)$ such that if $\min(N,M) \ge N_1$ and $\sigma_j(N,M)$ are the $\tilde\beta_{N,M}$-singular values of $L_{N,M}$, then these singular values cluster in the interval

$K := \left[\left(\dfrac{\alpha_0^3}{\beta_0}\right)^{1/2}, \left(\dfrac{\beta_0^3}{\alpha_0}\right)^{1/2}\right].$

The precise statement about clustering is: for $\varepsilon > 0$ there is an integer $k = k(\varepsilon)$ such that all of the $\tilde\beta_{N,M}$-singular values $\sigma_j(N,M)$ lie within $\varepsilon$ of $K$ with the exception of at most $k$ of them. We observe that the interval $K$ does not depend upon the operator $A$.

Theorem 7.1' remains true in the general case of variable coefficients $a_1(x,y), a_2(x,y), a_0(x,y)$. Due to technical details of the weighted discrete inner product used in the formulation of the spectral collocation methods, we can only prove Theorem 7.2' for the case of constant coefficients. Hence, for simplicity, we deal only with the case of constant coefficients $a_1, a_2, a_0$.
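Although the analysis below is entirely theoretical, the quantities appearing in Theorems 7.1' and 7.2' are straightforward to examine numerically. The following sketch (ours, not the author's code) computes the $\tilde\beta$-singular values of a matrix from the definition given in Section 2, namely the square roots of the eigenvalues of $S^\sharp S$ with $S^\sharp = \tilde\beta^{-1}S^T\tilde\beta$; the small random matrices standing in for $\tilde\beta_{N,M}$, $W_{N,M}$ and $\hat A_{N,M}$ are hypothetical placeholders for the assembled operators.

```python
import numpy as np
from scipy.linalg import eigh, solve

def t_singular_values(S, T):
    """T-singular values of S: sqrt of the eigenvalues of S^# S, with S^# = T^{-1} S^T T.

    Equivalently, the eigenvalues mu^2 of the symmetric pencil (S^T T S) x = mu^2 T x,
    which is what we solve here (T is assumed symmetric positive definite)."""
    mu2 = eigh(S.T @ T @ S, T, eigvals_only=True)
    return np.sqrt(np.clip(mu2, 0.0, None))

# Example with hypothetical small matrices standing in for the assembled operators.
rng = np.random.default_rng(0)
n = 10
beta_tilde = rng.standard_normal((n, n))
beta_tilde = beta_tilde @ beta_tilde.T + n * np.eye(n)   # SPD stand-in for the stiffness matrix
W = np.diag(rng.uniform(0.1, 1.0, n))                    # diagonal quadrature weights
A_hat = rng.standard_normal((n, n)) + 3 * np.eye(n)      # stand-in for the collocation matrix

L = solve(beta_tilde, W @ A_hat)                         # L = beta_tilde^{-1} W A_hat, cf. (1.7)
print(t_singular_values(L, beta_tilde))
```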
Since these results are in the $\tilde\beta_{N,M}$-norm they imply eigenvalue results as well. In particular, when $A$ is a positive definite self-adjoint operator, $W_{N,M}\hat A_{N,M}$ is a symmetric positive definite matrix and the $\tilde\beta_{N,M}$-singular values of $L_{N,M}$ are also the eigenvalues of $L_{N,M}$. In general, if $\lambda_j(N,M)$ are the eigenvalues of $L_{N,M}$ then

$\min_j \sigma_j(N,M) \le |\lambda_j(N,M)| \le \max_j \sigma_j(N,M).$

These results enable one to apply the Conjugate Gradient method to the $\tilde\beta_{N,M}$ normal equations in the $\tilde\beta_{N,M}$-inner product. This approach has been discussed in [BP] and [PW]. The details given there explain and clarify the implementation problems. However, it is also pertinent to mention an interesting experimental fact which will become clear from the computational results of Section 9. The usual singular values, i.e., the square roots of the eigenvalues of $L_{N,M}^*L_{N,M}$, seem to have the same distribution properties as the $\tilde\beta_{N,M}$-singular values. We have no theoretical explanation of these computational results. However, they imply that (in practice, without proof) one could employ the Conjugate Gradient method on the usual normal equations in the usual ($\ell^2$) inner product.

Our arguments depend upon the theory of preconditioning and boundary conditions developed in [MP] and extended in [PW], [W1], [W2], [GMP], [G] for finite element equations and some new, very powerful estimates on interpolation at the LGL points developed in [M] and [BM]. While the full development of our results is technically complicated, the basic idea is relatively simple. The ideas developed in [FMP], [MP], [PW], and [W] allow one to develop the general theory provided that one has obtained the basic result of Theorem 7.1 in the special case where

(1.11a)    $A = B.$

That is, we are concerned with

(1.11b)    $Q_{N,M} = \tilde\beta_{N,M}^{-1}W_{N,M}\hat B_{N,M}.$

In this case we have the two basic facts

(1.12a)    $(\tilde\beta_{N,M}U, U)_{\ell^2} \simeq \|u\|_1^2,$

(1.12b)    $(W_{N,M}\hat B_{N,M}U, U)_{\ell^2} \simeq \|I_{N,M}u\|_1^2,$

where $u(x,y) \in V^0_{N,M}$ (or $Z^0_{N,M}$) and $(I_{N,M}u) \in P^0_{N,M}$ is its polynomial interpolant. Thus, we need only prove the existence of positive constants $0 < \gamma_0 < \delta_0$, independent of $(N,M)$, such that

(1.13)    $\gamma_0\|u\|_1 \le \|I_{N,M}u\|_1 \le \delta_0\|u\|_1.$

The upper bound in (1.13) is provided by the results of [M] and [BM]. Our task is to establish the lower bound. This is done in Sections 3, 4, 5 for the case of $V^0_{N,M}$ by building on one-dimensional results. In Section 8 we establish the existence of positive constants $0 < \gamma_{00} < \delta_{00}$ such that

(1.14)    $\gamma_{00}\|u\|_1 \le \|K_{N,M}u\|_1 \le \delta_{00}\|u\|_1,$

where $u \in V^0_{N,M}$ and $(K_{N,M}u) \in Z^0_{N,M}$ is its piecewise linear interpolant. The results for $Z^0_{N,M}$ then follow from those for $V^0_{N,M}$.

In Section 2 we describe the spaces and the notation. In Section 3 we develop some one-dimensional estimates relating the polynomials $P^0_N$ and the continuous piecewise linear functions defined by their values at the LGL points, $V^0_N$. Within this narrow context we extend the results of [M] and [BM] to show equivalence of both the $L^2$ norm and the $H^1$ norm of a function and its LGL interpolant. Section 4 is devoted to the study of a basic one-dimensional operator where $A = B$. In Section 5 we extend the equivalence of the $L^2$ and $H^1$ norms of polynomials $p(x,y) \in P^0_{N,M}$ with their piecewise bilinear interpolants. In Section 6 we deal with preconditioning within $P^0_{N,M}$. That is, $\hat A_{N,M}$ is multiplied by $(\hat B_{N,M})^{-1}$ where $\hat B_{N,M}$ is the spectral collocation matrix associated with the operator $B$. Section 7 is devoted to the matrix $L_{N,M}$ for the space $V^0_{N,M}$.
In Section 8 we establish the estimate (1.14) and state the results for $L_{N,M}$ for the space $Z^0_{N,M}$. Section 9 describes some computational experiments, both one-dimensional and two-dimensional.

We are indebted to David Gottlieb for suggesting this research project and our collaboration on it. He also was a patient listener and provided advice and encouragement. We are extremely grateful to Paul Nevai, who gave advice and helpful information and estimates on orthogonal polynomials.

2. Preliminaries

In this work we deal with many vector spaces and use relatively standard notations. For example:

a.) if $U = (u_k)$, $V = (v_k)$ are $N$-tuples [or $(N,M)$-tuples] of real numbers then

(2.1)    $(U, V)_{\ell^2} := \sum_{(k)} u_kv_k;$

b.) if $u(x), v(x)$ [or $u(x,y), v(x,y)$] are real functions defined on $[-1,1]$ [or $\Omega$] then $(u,v)_{L^2}$, $\|u\|_{L^2}$, $\|u\|_s$ denote the usual $L^2$ inner product, $L^2$-norm or $H^s$-norm.

There are many occasions when we want to express the fact that two families of positive quantities $\{a_N\}, \{b_N\}$ [or $\{a_{N,M}\}, \{b_{N,M}\}$] are uniformly equivalent in the sense that: there are positive constants $(\gamma, \delta)$, independent of $N$ [or $(N,M)$], such that

(2.2a)    $0 < \gamma a_N < b_N < \delta a_N, \quad \forall N.$

Rather than repeat this phrase over and over, we write

(2.2b)    $a_N \simeq b_N.$

Definition: Let $T$ be a real positive definite $d \times d$ matrix. The bilinear form

(2.3a)    $(U, V)_T := (TU, V)_{\ell^2}$

is an inner product. Let $S$ be any other real $d \times d$ matrix. The $T$-adjoint of $S$ is that unique matrix $S^\sharp$ such that

(2.3b)    $(SU, V)_T = (U, S^\sharp V)_T.$

It is easy to see that

(2.3c)    $S^\sharp = T^{-1}S^TT.$

The $T$-singular values of $S$ are the square roots of the eigenvalues of $S^\sharp S$. We denote these singular values by $\sigma_j(S : T)$ with

(2.3d)    $\sigma_1(S:T) \le \dots \le \sigma_j(S:T) \le \sigma_{j+1}(S:T) \le \dots \le \sigma_d(S:T).$

Finally, the usual min-max characterization of the $\sigma_j^2(S:T)$ holds. In our case $T = \tilde\beta$ ($\tilde\beta_N$ or $\tilde\beta_{N,M}$) and $S = \tilde\beta^{-1}(W\hat A)$ ($\tilde\beta_N^{-1}W_N\hat A_N$ or $\tilde\beta_{N,M}^{-1}W_{N,M}\hat A_{N,M}$). Hence $S^\sharp = \tilde\beta^{-1}(W\hat A)^T$ and the $\tilde\beta$-singular values are the square roots of the eigenvalues of the matrix

(2.3e)    $S^\sharp S = [\tilde\beta^{-1}(W\hat A)^T][\tilde\beta^{-1}(W\hat A)].$

Let $N$ be a positive integer and let $P_N$ denote the set of polynomials of degree $N$ or less. $P^0_N$ is a subspace of $P_N$ which satisfies an additional constraint, that is

(2.4)    $P^0_N := \{f \in P_N : f(x) = 0 \text{ for } x \in \Gamma_0\},$

where $\Gamma_0$ is a subset of $\{-1, 1\}$ which may be empty. Let $N$ and $M$ be positive integers and let $P_{N,M}$ denote the set of all functions of $(x,y)$ which are polynomials in $x$ of degree $N$ or less and are polynomials in $y$ of degree $M$ or less. Set

(2.5)    $P^0_{N,M} := \{f \in P_{N,M} : f(x,y) = 0 \text{ for } (x,y) \in \Gamma_0\}.$

Let $\{x_k\}$, $k = 0, 1, \dots, N$, be the Legendre-Gauss-Lobatto (LGL) points associated with the $(N+1)$-point quadrature rule. That is,

(2.6a)    $x_0 = -1, \quad x_N = 1$

and the intermediate values $-1 < x_1 < x_2 < \dots < x_{N-1} < 1$ are the roots of

(2.7)    $\dfrac{d}{dx}L_N(x) = 0,$

where $L_N$ is the Legendre polynomial of degree $N$. Let $\{\omega_k\}$, $k = 0, 1, \dots, N$, be the associated quadrature weights. Then

(2.8)    $\sum_{k=0}^{N}\omega_kf(x_k) = \int_{-1}^{1}f(x)\,dx, \quad \forall f \in P_{2N-1},$

where $P_{2N-1}$ is the set of polynomials of degree $2N-1$ or less. Similarly, let $\{y_j\}$, $j = 0, \dots, M$, be the LGL points associated with the $(M+1)$-point quadrature rule, and let $\hat\omega_j$ be the associated quadrature weights. (A short numerical sketch for generating these nodes and weights appears below.)

In this work we use the Lagrange basis for $P_N$, $P^0_N$, $P_{N,M}$ and $P^0_{N,M}$. That is, we take

(2.9a)    $\varphi_i(x) = \prod_{k \ne i}(x - x_k)\Big/\prod_{k \ne i}(x_i - x_k),$

(2.9b)    $\hat\varphi_j(y) = \prod_{s \ne j}(y - y_s)\Big/\prod_{s \ne j}(y_j - y_s),$

where, of course, the $x_k$ and $y_s$ are the LGL points. Then the set $\{\varphi_i(x);\ i = 0, 1, \dots, N\}$ is the basis for $P_N$. The same set, with some possible deletions, is the basis for $P^0_N$. Similarly, the set $\{\varphi_{ij}(x,y) = \varphi_i(x)\hat\varphi_j(y);\ i = 0, 1, \dots, N;\ j = 0, 1, \dots, M\}$ is the basis for $P_{N,M}$, and the same set with the necessary deletions is the basis for $P^0_{N,M}$.
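For readers who wish to reproduce the experiments of Section 9, the LGL nodes and weights of (2.6)-(2.8) can be generated from the standard formulas: the interior nodes are the roots of $L_N'$ and $\omega_k = 2/[N(N+1)L_N(x_k)^2]$. The following sketch is ours and is only a convenience; the helper name `lgl_nodes_weights` is not from the paper.

```python
import numpy as np
from numpy.polynomial import legendre as L

def lgl_nodes_weights(N):
    """Legendre-Gauss-Lobatto nodes x_0..x_N and weights w_0..w_N on [-1, 1].

    Interior nodes are the roots of L_N'(x); the weights are
    w_k = 2 / (N (N+1) L_N(x_k)^2).  (Standard formulas, not quoted from the paper.)"""
    cN = np.zeros(N + 1)
    cN[N] = 1.0                                  # coefficients of L_N
    interior = L.legroots(L.legder(cN))          # roots of L_N'
    x = np.concatenate(([-1.0], np.sort(interior), [1.0]))
    w = 2.0 / (N * (N + 1) * L.legval(x, cN) ** 2)
    return x, w

# Exactness check (2.8): the rule integrates polynomials of degree <= 2N-1 exactly.
N = 6
x, w = lgl_nodes_weights(N)
p = np.polynomial.Polynomial(np.arange(1.0, 2 * N + 1))      # a polynomial of degree 2N-1
print(abs(np.dot(w, p(x)) - (p.integ()(1.0) - p.integ()(-1.0))))   # ~ 1e-15
```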
We define the discrete inner products

(2.10a)    $\langle f, g\rangle_N := \sum_{(k)}\omega_kf(x_k)g(x_k), \quad \forall f, g \in P_N,$

(2.10b)    $\langle f, g\rangle_{N,M} := \sum_k\sum_j\omega_k\hat\omega_jf(x_k, y_j)g(x_k, y_j), \quad \forall f, g \in P_{N,M}.$

The discrete norms are given by

(2.11a)    $\|f\|_N := [\langle f, f\rangle_N]^{1/2}, \quad \forall f \in P_N,$

(2.11b)    $\|f\|_{N,M} := [\langle f, f\rangle_{N,M}]^{1/2}, \quad \forall f \in P_{N,M}.$

We also require a discrete boundary inner product and a discrete boundary integral. In order to express this inner product in complete detail we introduce the following definitions and notations. We number the sides of $\Omega$:

$s_1 = \text{Side 1} := \{(x, -1) : -1 \le x \le 1\},$
$s_2 = \text{Side 2} := \{(1, y) : -1 \le y \le 1\},$
$s_3 = \text{Side 3} := \{(x, 1) : -1 \le x \le 1\},$
$s_4 = \text{Side 4} := \{(-1, y) : -1 \le y \le 1\}.$

We define a boundary inner product for each side. Thus

(2.12a)    $[f, g]_{N,M,1} := \sum_{k=0}^{N}f(x_k, -1)g(x_k, -1)\,\omega_k,$

(2.12b)    $[f, g]_{N,M,2} := \sum_{j=0}^{M}f(1, y_j)g(1, y_j)\,\hat\omega_j,$

(2.12c)    $[f, g]_{N,M,3} := \sum_{k=0}^{N}f(x_k, 1)g(x_k, 1)\,\omega_k,$

(2.12d)    $[f, g]_{N,M,4} := \sum_{j=0}^{M}f(-1, y_j)g(-1, y_j)\,\hat\omega_j.$

If $\Gamma_1$ is the union of sides $s_\nu$ then

(2.13)    $[f, g]_{N,M,\Gamma_1} = \sum_{(\nu)}[f, g]_{N,M,s_\nu}.$

The boundary integral is defined by

(2.14a)    $\|f\|_{N,M,\Gamma_1} = ([f, f]_{N,M,\Gamma_1})^{1/2}.$

The space $V_N$ ($V_M$) consists of the continuous piecewise linear functions defined on $[-1,1]$ which are determined by their values at the points $x_k$ ($y_j$). That is, $u \in V_N$ if $u \in C[-1,1]$ and $u$ is linear on each interval $(x_k, x_{k+1})$, $k = 0, 1, \dots, N-1$. The space $V^0_N$ ($V^0_M$) is the subspace of $V_N$ ($V_M$) consisting of functions which vanish on $\Gamma_0$. The basis $\{\psi_i(x);\ i = 0, 1, \dots, N\}$ of $V_N$ is given by the usual "hat" functions. These functions satisfy

(2.15)    $\psi_i(x_k) = \delta_{ik}.$

The basis of $V^0_N$ is the same set with the necessary deletions. The basis for $V_M$ and $V^0_M$ is given by $\{\hat\psi_j(y);\ j = 0, 1, \dots, M\}$, the corresponding hat functions based on the $(M+1)$ LGL points, with the appropriate deletions in the case of $V^0_M$.

Consider the partition of $\Omega$ into rectangles $K$ whose vertices are LGL points $(x_k, y_j), (x_{k+1}, y_j), (x_{k+1}, y_{j+1}), (x_k, y_{j+1})$. The space $V_{N,M}$ is the set of continuous functions $u$ which are bilinear (of the form $u = a + bx + cy + dxy$) on each such rectangle $K$. The basis of this space is the set $\{\psi_{ij}(x,y) = \psi_i(x)\hat\psi_j(y) : 0 \le i \le N,\ 0 \le j \le M\}$. The space $V^0_{N,M}$ is the subspace of $V_{N,M}$ which vanishes on $\Gamma_0$. The basis for $V^0_{N,M}$ is the basis for $V_{N,M}$ with appropriate deletions.

Let $K$ be the rectangle above. Let $K$ be partitioned into two triangles $T_1, T_2$ by drawing the diagonal connecting $(x_{k+1}, y_j)$ and $(x_k, y_{j+1})$. The space $Z_{N,M}$ is the space of continuous functions $w$ which are linear on each such triangle. The basis for $Z_{N,M}$ is the interpolatory basis based on two-dimensional "hat" functions. That is, as in $V_{N,M}$, the "degrees of freedom" are the values at the LGL points. The space $Z^0_{N,M}$ is the subspace of $Z_{N,M}$ which vanishes on $\Gamma_0$. The basis for $Z^0_{N,M}$ is the basis for $Z_{N,M}$ with appropriate deletions.

Definition. Let $u \in V_N$ and

(2.16a)    $u(x) = \sum_{k=0}^{N}u_k\psi_k(x).$

Then $(I_Nu) \in P_N$, the polynomial interpolant of $u$, is given by

(2.16b)    $(I_Nu)(x) = \sum_{k=0}^{N}u_k\varphi_k(x) \in P_N.$

Clearly, this map is 1 to 1 and onto. The inverse map is denoted by $J_N$. That is, if $f \in P_N$ and

(2.17a)    $f(x) = \sum f_k\varphi_k(x),$

then

(2.17b)    $(J_Nf)(x) = \sum f_k\psi_k(x) \in V_N.$

In a similar fashion we define $I_{N,M}$ and $J_{N,M}$.
That is, if

(2.18a)    $u(x,y) = \sum u_{kj}\psi_{kj}(x,y) \in V_{N,M}$

then

(2.18b)    $(I_{N,M}u)(x,y) = \sum u_{kj}\varphi_{kj}(x,y) \in P_{N,M}.$

And, if

(2.19a)    $f(x,y) = \sum f_{kj}\varphi_{kj}(x,y) \in P_{N,M}$

then

(2.19b)    $(J_{N,M}f)(x,y) = \sum f_{kj}\psi_{kj}(x,y) \in V_{N,M}.$

The interpolation operator $K_{N,M}$ takes functions $u \in V^0_{N,M}$ into functions $w \in Z^0_{N,M}$ which agree at the LGL points. Specifically, if $K$ is the rectangle described above with vertices $(x_k, y_j), (x_{k+1}, y_j), (x_{k+1}, y_{j+1}), (x_k, y_{j+1})$ and $u(x,y) \in V^0_{N,M}$ is given on $K$ by

(2.20)    $u|_K = a + b(x - x_k) + c(y - y_j) + d(x - x_k)(y - y_j),$

then

(2.21)    $K_{N,M}u|_K = w|_K = a + b(x - x_k) + c(y - y_j)$ on $T_1$,
           $K_{N,M}u|_K = w|_K = \alpha + \beta(x - x_{k+1}) + \gamma(y - y_{j+1})$ on $T_2$,

where $T_1$ is the triangle with vertices $(x_k, y_j), (x_{k+1}, y_j), (x_k, y_{j+1})$ and $T_2$ is the triangle with vertices $(x_{k+1}, y_j), (x_{k+1}, y_{j+1}), (x_k, y_{j+1})$ and

(2.22a)    $\alpha = a + b(x_{k+1} - x_k) + c(y_{j+1} - y_j) + d(x_{k+1} - x_k)(y_{j+1} - y_j),$

(2.22b)    $\beta = b + d(y_{j+1} - y_j),$

(2.22c)    $\gamma = c + d(x_{k+1} - x_k).$

(These formulas are checked numerically in the short sketch at the end of this section.)

A consequence of the fact that our bases are interpolatory at the same points $(x_k)$ or $(x_k, y_j)$ is that we can interpret coefficient vectors as representing $u \in V_N$ or $I_Nu \in P_N$, etc. Thus, if we have (2.16a) we may interpret $U = (u_0, u_1, \dots, u_N)^T$ as the representor of $u(x) \in V_N$. Or, if we so desire, we may also interpret $U$ as the representor of $(I_Nu)(x) \in P_N$. Similar remarks apply to the two-dimensional vectors $(u_1, u_2, \dots, u_d)^T$, which may be interpreted as representing $u(x,y) \in V^0_{N,M}$, $(I_{N,M}u) \in P^0_{N,M}$ or $(K_{N,M}u) \in Z^0_{N,M}$.

If $f, g \in P_N$ and

(2.23a)    $F = (f_0, f_1, \dots, f_N)^T, \quad G = (g_0, g_1, \dots, g_N)^T$

are the vectors of coefficients then

(2.23b)    $\langle f, g\rangle_N = (W_NF, G)_{\ell^2},$

where

(2.24)    $W_N = \operatorname{diag}(\omega_0, \omega_1, \dots, \omega_N).$

Similarly, if $f, g \in P_{N,M}$ and

(2.25a)    $F = \{f(x_i, y_j)\}, \quad G = \{g(x_i, y_j)\}$

are the "vectors" of coefficients then

(2.25b)    $\langle f, g\rangle_{N,M} = (W_{N,M}F, G)_{\ell^2},$

where

(2.25c)    $W_{N,M} = \operatorname{diag}(\omega_k\hat\omega_j).$

A more precise description of $W_{N,M}$ will be given later.

There are many spaces, $P^0_N$, $P^0_{N,M}$, $V^0_N$, etc., and there are operators and matrices associated with them. We adopt the following notational conventions:

(1.) Operators mapping $P^0_N \to P^0_N$ or $P^0_{N,M} \to P^0_{N,M}$ are denoted by capital Roman letters with subscripts. For example $B_N : P^0_N \to P^0_N$, $A_{N,M} : P^0_{N,M} \to P^0_{N,M}$.

(2.) The matrix representations of these operators in the Lagrange basis are denoted by the same capital Roman letters with subscripts and a "hat". For example, the matrix representation of $B_N$ is $\hat B_N$.

(3.) Operators mapping $V^0_N \to V^0_N$ or $V^0_{N,M} \to V^0_{N,M}$ are denoted by lower case Greek letters with subscripts. For example $\beta_N : V^0_N \to V^0_N$, $\beta_{N,M} : V^0_{N,M} \to V^0_{N,M}$.

(4.) The matrix representations of these operators in the $\{\psi_i(x)\}$ or $\{\psi_{ij}(x,y)\}$ basis are denoted by the same lower case Greek letters with subscripts and a "hat". For example, the matrix representation of $\beta_N$ is $\hat\beta_N$.

(5.) When $\beta_N$ (or $\beta_{N,M}$) is the mapping associated with the finite element discretization of a differential operator, the stiffness matrix associated with $\beta_N$ (or $\beta_{N,M}$) in the Lagrange basis is denoted by the same lower case Greek letter with subscripts and a "tilde". That is, $\tilde\beta_N$ or $\tilde\beta_{N,M}$.

In Section 8 we consider the finite element discretization of $B$ in the space $Z^0_{N,M}$. For that Section only, $\tilde\beta_{N,M}$ denotes the stiffness matrix of that finite element discretization.
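As a quick check on the interpolation formulas (2.20)-(2.22), the following small sketch (ours) evaluates a bilinear $u$ and the piecewise linear function built from (2.21)-(2.22) on one rectangle $K$ and confirms that the two agree at the four vertices; all names are illustrative.

```python
import numpy as np

def bilinear(a, b, c, d, xk, yj):
    """The bilinear function (2.20) on the rectangle K."""
    return lambda x, y: a + b*(x - xk) + c*(y - yj) + d*(x - xk)*(y - yj)

def knm_interpolant(a, b, c, d, xk, yj, xk1, yj1):
    """Piecewise linear interpolant (2.21) of the bilinear function (2.20)."""
    m, h = xk1 - xk, yj1 - yj
    alpha = a + b*m + c*h + d*m*h                # (2.22a)
    beta  = b + d*h                              # (2.22b)
    gamma = c + d*m                              # (2.22c)
    def w(x, y):
        on_T1 = (x - xk)/m + (y - yj)/h <= 1.0   # T1 lies below the diagonal
        return np.where(on_T1,
                        a + b*(x - xk) + c*(y - yj),
                        alpha + beta*(x - xk1) + gamma*(y - yj1))
    return w

a, b, c, d = 0.3, -1.2, 0.7, 2.0
xk, yj, xk1, yj1 = 0.1, -0.4, 0.35, -0.1         # a typical LGL cell (hypothetical numbers)
u = bilinear(a, b, c, d, xk, yj)
w = knm_interpolant(a, b, c, d, xk, yj, xk1, yj1)
X = np.array([xk, xk1, xk1, xk])
Y = np.array([yj, yj, yj1, yj1])
print(np.max(np.abs(u(X, Y) - w(X, Y))))         # 0.0: u and K_{N,M}u agree at the vertices
```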
3. One Dimensional Estimates

In this section we collect some basic estimates relating $u \in V_N$ and $(I_Nu) \in P_N$. We first recall results of [CQ], [M] and [BM].

Theorem CQ. For every $f \in P_N$ we have

(3.1)    $\|f\|^2_{L^2} \simeq \langle f, f\rangle_N.$

For every $f \in P_{N,M}$ we have

(3.2)    $\|f\|^2_{L^2} \simeq \langle f, f\rangle_{N,M}.$

Proof. See [CQ, page 83].

Theorem M. There is a constant $C$, independent of $N$, such that

(3.3)    $\|I_Nu\|_1 \le C\|u\|_1, \quad \forall u \in V_N.$

Proof. The results of [M] are more general. However, this is the result we require in the sequel.

Lemma 3.1. Let $\omega_k$ and $x_k$ be the LGL weights and the LGL points respectively. Then

(3.4a)    $\omega_k \simeq (x_{k+1} - x_{k-1}), \quad k = 1, 2, \dots, N-1,$

(3.4b)    $\omega_0 \simeq (x_1 - x_0) = 1 + x_1,$

(3.4c)    $\omega_N \simeq (x_N - x_{N-1}) = 1 - x_{N-1}.$

Proof. For $k = 0$ and $N$ we have

$\omega_0 = \omega_N = \dfrac{2}{N(N+1)}.$

Thus the estimates (3.4b), (3.4c) follow immediately from the estimates on the distribution of the zeros of $L_N'(x)$ given in [S, Theorem 6.21.3]. The Legendre polynomials satisfy the differential equation

$\dfrac{d}{dx}\left[(1 - x^2)\dfrac{d}{dx}L_N\right] = -N(N+1)L_N.$

Hence, the functions $L_N'(x)$ are the orthogonal polynomials of degree $(N-1)$ associated with the weight function $(1 - x^2)$. We use the notation of [N] and denote the quadrature weights (or Cotes numbers) of the Gauss quadrature with weight $w = (1 - x^2)$ by $\lambda_{N-1}(w, \xi_{j,N-1})$. Of course $\xi_{j,N-1} = x_j$, $j = 1, 2, \dots, (N-1)$. It is immediate that

(3.5)    $\omega_j = \dfrac{\lambda_{N-1}(w, \xi_{j,N-1})}{1 - (\xi_{j,N-1})^2}.$

Using [N, Theorem 6.3.28, page 120] we see that

$\omega_j \simeq \dfrac{\sqrt{1 - x_j^2}}{N}, \quad j = 1, 2, \dots, N-1.$

However, it follows from [N, Theorem 9.22, page 166] that

(3.6)    $\dfrac{\sqrt{1 - x_j^2}}{N} \simeq (x_{j+1} - x_{j-1}).$

Thus, the Lemma is proven.

Theorem 3.1. For all $u \in V_N$ we have

(3.7a)    $\|u\|_{L^2} \simeq \|I_Nu\|_{L^2},$

(3.7b)    $\|u\|_{L^2} \simeq \|I_Nu\|_N.$

Proof. The statements (3.7a) and (3.7b) are equivalent by (3.1) of Theorem CQ. We shall prove (3.7b). A direct computation shows that

(3.8)    $\int_{x_j}^{x_{j+1}}u^2(t)\,dt = \dfrac{x_{j+1} - x_j}{3}\left[u(x_j)^2 + u(x_j)u(x_{j+1}) + u(x_{j+1})^2\right].$

Let

$T := (1 + x_1)u(-1)^2 + (1 - x_{N-1})u(1)^2 + \sum_{k=1}^{N-1}(x_{k+1} - x_{k-1})u(x_k)^2.$

Then (3.8) shows that

$\tfrac{1}{6}T \le \|u\|^2_{L^2} \le \tfrac{1}{2}T.$

The theorem now follows from Lemma 3.1.

Using this result we can strengthen Theorem M.

Theorem 3.2. For all $u \in V_N$ we have

(3.9)    $\|u\|_1 \simeq \|I_Nu\|_1.$

Proof. In view of Theorem M and Theorem 3.1 we need only prove

(3.10)    $\int_{-1}^{1}|u'|^2\,dt \le \int_{-1}^{1}|(I_Nu)'|^2\,dt.$

Observe that

(3.11a)    $\int_{x_k}^{x_{k+1}}|u'(t)|^2\,dt = \dfrac{[u(x_{k+1}) - u(x_k)]^2}{x_{k+1} - x_k}$

and that

(3.11b)    $[u(x_{k+1}) - u(x_k)]^2 = \left(\int_{x_k}^{x_{k+1}}(I_Nu)'\,dt\right)^2.$

The Schwarz inequality yields

(3.11c)    $[u(x_{k+1}) - u(x_k)]^2 \le \left(\int_{x_k}^{x_{k+1}}[(I_Nu)']^2\,dt\right)(x_{k+1} - x_k).$

Thus (3.9) follows from (a) dividing (3.11c) by $(x_{k+1} - x_k)$ and (b) summing on $k$.
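The equivalence asserted in Theorem 3.1 is easy to observe numerically: for arbitrary nodal values one can compare the exact $L^2$ norm of the piecewise linear function $u$, computed element by element from the linear-segment formula (3.8), with the discrete norm $\|I_Nu\|_N^2 = \sum_k\omega_ku(x_k)^2$. The sketch below is ours and assumes the `lgl_nodes_weights` helper from the sketch in Section 2.

```python
import numpy as np
# assumes lgl_nodes_weights(N) from the earlier sketch is in scope

def pw_linear_L2_sq(x, u):
    """Exact ||u||_{L^2}^2 for the piecewise linear u with nodal values u[k] at x[k]."""
    h = np.diff(x)
    return np.sum(h * (u[:-1]**2 + u[:-1]*u[1:] + u[1:]**2) / 3.0)

rng = np.random.default_rng(1)
for N in (8, 16, 32, 64, 128):
    x, w = lgl_nodes_weights(N)
    u = rng.standard_normal(N + 1)                      # nodal values of u in V_N
    ratio = pw_linear_L2_sq(x, u) / np.dot(w, u**2)     # ||u||^2 / ||I_N u||_N^2
    print(N, ratio)                                     # stays bounded away from 0 and infinity
```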
4. The Basic 1D Preconditioning

We consider two positive definite self-adjoint differential operators $B^1, B^2$ defined on $C^2[-1,1]$. Then we describe the spectral collocation discretization of $B^1$ in $P^0_N$ and the finite element discretization of $B^2$ in $V^0_N$. Finally, we discuss the preconditioned matrix

(4.1)    $Q := (\tilde\beta^2_N)^{-1}W_N\hat B^1_N,$

where $\tilde\beta^2_N$ is the "stiffness" matrix associated with the finite element discretization of $B^2$ and $\hat B^1_N$ is the matrix representation of the spectral collocation discretization of $B^1$. Actually, it would be sufficient for our purposes to consider the case of one operator $B$ and deal with the preconditioned matrix

$Q^1 := (\tilde\beta_N)^{-1}W_N\hat B_N,$

where $\tilde\beta_N$ is the associated stiffness matrix and $\hat B_N$ is the matrix representation of the spectral collocation of $B$. However, we consider the more general case to illustrate the basic ideas and emphasize the fact that it is not necessary that these operators have the same boundary conditions. Indeed, it is necessary and sufficient only that the points at which one imposes the "essential" boundary condition, $u = 0$, be the same for both operators.

Consider the differential operators given by

(4.2a)    $B^ku := -u'' + b_{k,0}u, \quad -1 < x < 1, \quad k = 1, 2,$

with boundary conditions

(4.2b)    $u = 0$ on $\Gamma_0$,  $u' = d_ku$ on $\Gamma_1$,

where

(4.3a)    $\Gamma_0 \cup \Gamma_1 = \{-1, 1\}.$

The coefficients $b_{k,0}$ are non-negative constants and

(4.3b)    $d_k(-1) \ge 0, \quad d_k(1) \le 0.$

Both operators are to be positive definite. Hence, if $\Gamma_0 = \emptyset$ then either $b_{k,0} > 0$ or $|d_k(-1)| + |d_k(1)| > 0$. For simplicity and definiteness we will assume $\Gamma_0 = \{-1\}$, $\Gamma_1 = \{1\}$. The reader will observe that the argument is completely general. Associated with these operators are bilinear forms $b_k(\cdot,\cdot)$, which are the basis of both the finite element method and the variational form of the spectral collocation approximation of the operators $B^1, B^2$. Let

(4.4)    $V := \{u \in H^1[-1,1] : u(-1) = 0\}.$

Then $b_k : V \times V \to \mathbb{R}$, $k = 1, 2$, are defined by

(4.5a)    $b_k(u,v) := \int_{-1}^{1}[u'v' + b_{k,0}uv]\,dx - d_k(1)u(1)v(1).$

Each of these bilinear forms is an inner product on $V$ and the associated norms given by

(4.5b)    $\|u\|_{1,k} = [b_k(u,u)]^{1/2}, \quad k = 1, 2,$

are equivalent to the $H^1$ norm on $V$. Thus, we have

Lemma 4.1. For all $u \in V$ we have

(4.6a)    $\|u\|^2_{1,1} = b_1(u,u) \simeq b_2(u,u) = \|u\|^2_{1,2},$

(4.6b)    $b_1(u,u) \simeq \|u\|^2_1.$

We set

(4.7a)    $P^0_N := \{f \in P_N : f(-1) = 0\},$

(4.7b)    $V^0_N := \{u \in V_N : u(-1) = 0\}.$

Let $b_{1,N}(\cdot,\cdot)$ be defined on $P^0_N \times P^0_N$ by

(4.8a)    $b_{1,N}(f,g) := \langle f', g'\rangle_N + b_{1,0}\langle f, g\rangle_N - d_1(1)f(1)g(1),$

and let $b_{2,N}(\cdot,\cdot)$ be defined on $V^0_N \times V^0_N$ by

(4.8b)    $b_{2,N}(u,v) := b_2(u,v), \quad \forall u, v \in V^0_N.$

These bilinear forms induce operators $B^1_N, \beta^2_N$ which are given by

(4.9a)    $B^1_N : P^0_N \to P^0_N, \quad \beta^2_N : V^0_N \to V^0_N,$

(4.9b)    $\langle B^1_Nf, g\rangle_N = b_{1,N}(f,g), \quad \forall f, g \in P^0_N,$

(4.9c)    $(\beta^2_Nu, v)_{L^2} = b_{2,N}(u,v), \quad \forall u, v \in V^0_N.$

Let $\{\varphi_k(x);\ k = 1, 2, \dots, N\}$ be the Lagrange basis of $P^0_N$. Let $\hat B^1_N$ be the matrix representation of $B^1_N$ in this basis. Then (see [QZ])

(4.10)    $(\hat B^1_N)_{ij} = (B^1\varphi_j)(x_i) + \dfrac{1}{\omega_N}\left[\varphi_j'(x_N) - d_1(1)\varphi_j(x_i)\right]\delta_{i,N}.$

Let $\{\psi_k(x);\ k = 1, 2, \dots, N\}$ be the basis of $V^0_N$. Let $\tilde\beta^2_N$ be the stiffness matrix associated with the finite element treatment of this problem and let $M_N$ be the mass matrix. That is,

(4.11a)    $(\tilde\beta^2_N)_{ij} = b_{2,N}(\psi_i, \psi_j),$

(4.11b)    $(M_N)_{ij} = (\psi_i, \psi_j)_{L^2}.$

Let $\hat\beta^2_N$ be the matrix representation of $\beta^2_N$. Then

(4.12)    $\hat\beta^2_N = M_N^{-1}\tilde\beta^2_N.$

Lemma 4.2. For all $u \in V^0_N$ and all $f \in P^0_N$ we have

(4.13a)    $b_{1,N}(f,f) \simeq \|f\|^2_1,$

(4.13b)    $b_{2,N}(u,u) \simeq \|u\|^2_1.$

Proof. The estimates (4.13b) follow directly from Lemma 4.1. The estimates (4.13a) follow from Theorem CQ and the fact that

$\langle f', g'\rangle_N = \int_{-1}^{1}f'g'\,dx.$

The operator $B^1_N$ is symmetric and positive definite in the $\langle\cdot,\cdot\rangle_N$ inner product. Hence the matrix $\tilde B^1_N = W_N\hat B^1_N$ is symmetric and positive definite in the $\ell^2$ inner product. Thus we have

Lemma 4.3. Let

(4.14a)    $f(x) = \sum f_j\varphi_j(x) \in P^0_N,$

(4.14b)    $u(x) = \sum u_j\psi_j(x) \in V^0_N,$

and let

(4.15)    $F = (f_1, f_2, \dots, f_N)^T, \quad U = (u_1, u_2, \dots, u_N)^T$

be the coefficient vectors. Then

(4.16a)    $(W_N\hat B^1_NF, F)_{\ell^2} = b_{1,N}(f,f) \simeq \|f\|^2_1,$

(4.16b)    $(\tilde\beta^2_NU, U)_{\ell^2} = b_2(u,u) \simeq \|u\|^2_1.$

Proof. Immediate from the definitions.

Theorem 4.1. For every $U = (u_1, \dots, u_N)^T$ we have

(4.17)    $(\tilde\beta^2_NU, U)_{\ell^2} \simeq (W_N\hat B^1_NU, U)_{\ell^2}.$

In matrix theory language, there are constants $0 < \alpha_0 < \beta_0$, independent of $N$, such that the eigenvalues $q_j$ of the matrix

(4.18a)    $Q = (\tilde\beta^2_N)^{-1}W_N\hat B^1_N$

satisfy

(4.18b)    $\alpha_0 \le q_j \le \beta_0.$

Proof. Let $u(x) \in V^0_N$ be defined by

(4.19a)    $u(x) = \sum_{j=1}^{N}u_j\psi_j(x).$

Then

(4.19b)    $(I_Nu)(x) = \sum_{j=1}^{N}u_j\varphi_j(x).$

Thus, the vector $U$ represents $u(x)$ and also represents $(I_Nu)$. That is,

(4.20a)    $(\tilde\beta^2_NU, U)_{\ell^2} = b_2(u,u) \simeq \|u\|^2_1$

and

(4.20b)    $(W_N\hat B^1_NU, U)_{\ell^2} = b_{1,N}(I_Nu, I_Nu) \simeq \|I_Nu\|^2_1.$

The theorem now follows from Theorem 3.2.
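For concreteness, the stiffness and mass matrices in (4.11) are assembled from the usual two-point element contributions on the (nonuniform) LGL grid. The sketch below is ours, written for the model case $B^2u = -u'' + b_{2,0}u$ with $u(-1) = 0$ and $d_2 = 0$ (homogeneous Neumann data at $x = 1$); it again assumes the `lgl_nodes_weights` helper from Section 2.

```python
import numpy as np
# assumes lgl_nodes_weights(N) from the earlier sketch is in scope

def fem_stiffness_mass(x, b0):
    """Hat-function stiffness (for -u'' + b0*u) and mass matrices on the grid x,
    with the essential condition u(x[0]) = 0 imposed by deleting the first row/column."""
    n = len(x)
    K = np.zeros((n, n))
    M = np.zeros((n, n))
    for k in range(n - 1):
        h = x[k + 1] - x[k]
        Ke = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])   # element stiffness
        Me = (h / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])     # element mass
        idx = [k, k + 1]
        K[np.ix_(idx, idx)] += Ke + b0 * Me                     # b_2(psi_i, psi_j)
        M[np.ix_(idx, idx)] += Me
    return K[1:, 1:], M[1:, 1:]                  # drop the node at x = -1 (u(-1) = 0)

N, b0 = 16, 1.0
x, w = lgl_nodes_weights(N)
beta2_tilde, M_N = fem_stiffness_mass(x, b0)     # cf. (4.11a), (4.11b)
print(beta2_tilde.shape, np.allclose(beta2_tilde, beta2_tilde.T))
```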
5. The Basic 2D Preconditioning

Consider the elliptic differential operator

(5.1a)    $Bv := -[v_{xx} + v_{yy}] + v, \quad (x,y) \in \Omega,$

with boundary conditions

(5.1b)    $v = 0$ on $\Gamma_0$,  $\dfrac{\partial v}{\partial\nu} = 0$ on $\Gamma_1$.

In this section we describe both the spectral collocation approximation to $B$ in $P^0_{N,M}$ and the finite element approximation to $B$ in $V^0_{N,M}$. Our concern is the preconditioned matrix

(5.2)    $Q_{N,M} := (\tilde\beta_{N,M})^{-1}W_{N,M}\hat B_{N,M},$

where $\tilde\beta_{N,M}$ is the stiffness matrix of the finite element method and $\hat B_{N,M}$ is the matrix representation of the spectral collocation discretization of $B$. Let

$V := \{u \in H^1(\Omega) : u = 0 \text{ on } \Gamma_0\},$

where the boundary condition on $\Gamma_0$ is taken in the sense of the trace theorem. Associated with the operator $B$ is the bilinear form $b(\cdot,\cdot)$ defined on $V \times V$ given by

(5.3)    $b(u,v) = (\nabla u, \nabla v)_{L^2} + (u,v)_{L^2}, \quad \forall u, v \in V.$

It is well known that

(5.4)    $b(u,u) \simeq \|u\|^2_1.$

Let

(5.5a)    $P^0_{N,M} := \{f \in P_{N,M} : f = 0 \text{ on } \Gamma_0\},$

(5.5b)    $V^0_{N,M} := \{u \in V_{N,M} : u = 0 \text{ on } \Gamma_0\}.$

Let $b_{N,M}(\cdot,\cdot)$ be defined on $P^0_{N,M} \times P^0_{N,M}$ by

(5.6)    $b_{N,M}(f,g) = \langle\nabla f, \nabla g\rangle_{N,M} + \langle f, g\rangle_{N,M}, \quad \forall f, g \in P^0_{N,M}.$

This bilinear form defines the operator

(5.7a)    $B_{N,M} : P^0_{N,M} \to P^0_{N,M}$

via the equation

(5.7b)    $b_{N,M}(f,g) = \langle B_{N,M}f, g\rangle_{N,M}, \quad \forall f, g \in P^0_{N,M}.$

This operator $B_{N,M}$ is the spectral collocation discretization of the operator $B$.

Lemma 5.1. For every $f \in P^0_{N,M}$ we have

(5.8)    $\|f\|^2_1 \simeq b_{N,M}(f,f).$

Proof. We have

(5.9)    $b_{N,M}(f,f) = \langle\nabla f, \nabla f\rangle_{N,M} + \langle f, f\rangle_{N,M}.$

The basic result of [CQ, page 83] shows that

$\|f\|^2_{L^2} \simeq \langle f, f\rangle_{N,M}.$

Hence we need only deal with the first term. Consider the expression $\langle f_x, f_x\rangle_{N,M}$. We have

(5.10)    $\langle f_x, f_x\rangle_{N,M} = \sum_{j=0}^{M}\hat\omega_j\int_{-1}^{1}f_x^2(t, y_j)\,dt.$

Consider the function

(5.11)    $g(y) := \int_{-1}^{1}f_x^2(t, y)\,dt.$

This function is a polynomial of even order of degree $2M$ or less which is non-negative for $-\infty < y < \infty$. Therefore by [S, Theorem 1.21.2, page 5]

$g(y) = A(y)^2 + B(y)^2,$

where $A(y)^2 \in P_{2M}$, $B(y)^2 \in P_{2M}$. Thus by Theorem CQ we have

$\langle f_x, f_x\rangle_{N,M} \simeq \int_{-1}^{1}g(y)\,dy = \int_{-1}^{1}\int_{-1}^{1}f_x^2(t,y)\,dt\,dy.$

Thus, the Lemma is proven.

The finite element discretization, $\beta_{N,M}$, of the operator $B$ is defined by

(5.12a)    $\beta_{N,M} : V^0_{N,M} \to V^0_{N,M}$

and

(5.12b)    $b(u,v) = (\beta_{N,M}u, v)_{L^2}, \quad \forall u, v \in V^0_{N,M}.$

Lemma 5.2. For every $u \in V^0_{N,M}$ we have

(5.13)    $b(u,u) = (\beta_{N,M}u, u)_{L^2} \simeq \|u\|^2_1.$

Proof. The Lemma follows immediately from (5.4).

We now turn to the matrix representations of these operators and their properties. Let us order the LGL points by horizontal lines. For example, if $\Gamma_1 = \partial\Omega$, so that all the LGL points appear in our computation, and we list the LGL points as $P_1, P_2, \dots, P_{(N+1)(M+1)}$, then

(5.14a)    $(x_k, y_j) = P_\nu,$

where

(5.14b)    $\nu = (k+1) + (N+1)j.$

We order the basis vectors $\varphi_{ij}(x,y) \in P^0_{N,M}$, $\psi_{ij}(x,y) \in V^0_{N,M}$, with $k = 0, 1, \dots, N$ and $j = 0, 1, \dots, M$, in the same order. We define

(5.15a)    $\psi_\nu(x,y) = \psi_{kj}(x,y) = \psi_k(x)\hat\psi_j(y),$

(5.15b)    $\varphi_\nu(x,y) = \varphi_{kj}(x,y) = \varphi_k(x)\hat\varphi_j(y).$

Because of the multiplicative structure of these basis functions it is easy to see that these matrix representations have a tensor product structure. For example, let $\tilde\beta_{N,M}$ be the stiffness matrix of the finite element discretization and let $M_{N,M}$ be the mass matrix. Then

$(\tilde\beta_{N,M})_{\nu,\mu} = b(\psi_\nu, \psi_\mu).$

Given $\nu$ one can compute $k$ and $j$ from (5.15b) by observing that $0 \le k \le N$ and $0 \le j \le M$. Hence

$k + 1 = \nu \bmod (N+1), \quad j = \max\left\{\dfrac{\nu - (k+1)}{N+1},\ 0\right\}.$

Let $B^x$ and $B^y$ be the ordinary differential operators

(5.16a)    $B^xu = -u'' + \tfrac{1}{2}u, \quad -1 \le x \le 1,$

with boundary conditions

(5.16b)    $u = 0$ on $\Gamma_0(x)$,  $u' = 0$ on $\Gamma_1(x)$,

and

(5.17a)    $B^yu = -u'' + \tfrac{1}{2}u, \quad -1 \le y \le 1,$

with boundary conditions

(5.17b)    $u = 0$ on $\Gamma_0(y)$,  $u' = 0$ on $\Gamma_1(y)$.

Here

(5.18)    $\Gamma_0(x) \cup \Gamma_1(x) = \{-1, 1\}, \quad \Gamma_0(y) \cup \Gamma_1(y) = \{-1, 1\},$

and these decompositions are related to $\Gamma_0$ and $\Gamma_1$. For example, if the side $s_4 \subset \Gamma_0$ then $(-1) \in \Gamma_0(x)$, etc.

Theorem 5.1. Let $\tilde\beta^x_N$ be the stiffness matrix associated with the finite element discretization of $B^x$ in the finite element space $V^0_N$. Let $\tilde\beta^y_M$ be the stiffness matrix associated with the finite element discretization of $B^y$ in the finite element space $V^0_M$.
Let $M_N$ and $M_M$ be the corresponding mass matrices. Then

$\tilde\beta_{N,M} = M_M \otimes \tilde\beta^x_N + \tilde\beta^y_M \otimes M_N.$

Proof. Computation.

Theorem 5.2. Let $\hat B^x_N$ be the matrix representation of the spectral collocation discretization of $B^x$ in $P^0_N$ [as discussed in Section 4, e.g., equation (4.10)] and let $\hat B^y_M$ be the matrix representation of the spectral collocation discretization of $B^y$ in $P^0_M$. Then $\hat B_{N,M}$, the matrix representation of the spectral collocation discretization of the operator $B$ in $P^0_{N,M}$, is given by

(5.19)    $\hat B_{N,M} = (\mathrm{id})_M \otimes \hat B^x_N + \hat B^y_M \otimes (\mathrm{id})_N,$

where $(\mathrm{id})_M$ and $(\mathrm{id})_N$ are the identity matrices. Let

(5.20a)    $\tilde B^x_N = W_N\hat B^x_N, \quad \tilde B^y_M = W_M\hat B^y_M,$

(5.20b)    $\tilde B_{N,M} = W_{N,M}\hat B_{N,M}.$

Then

(5.21)    $\tilde B_{N,M} = W_M \otimes \tilde B^x_N + \tilde B^y_M \otimes W_N.$

Proof. Computation.

Remark: As stated in the Introduction, the matrix $\hat B_{N,M}$ is not the matrix used in [QZ]. These matrices differ in the rows relating to points of $\Gamma_1$.

Remark: The matrix $\tilde B_{N,M}$ is the symmetric matrix of the spectral collocation equations (see [QZ]); the matrices $\tilde B^x_N, \tilde B^y_M$ are symmetric (see Section 4). Hence $\tilde B_{N,M}$ is symmetric.

Lemma 5.3. Let

(5.22a)    $f(x,y) = \sum f_\nu\varphi_\nu(x,y) \in P^0_{N,M},$

(5.22b)    $u(x,y) = \sum u_\nu\psi_\nu(x,y) \in V^0_{N,M}.$

Let

(5.23a)    $F = (f_1, f_2, \dots, f_{(N+1)(M+1)})^T,$

(5.23b)    $U = (u_1, u_2, \dots, u_{(N+1)(M+1)})^T.$

(The number of $f_\nu$ and $u_\nu$ will in general be less than $(N+1)(M+1)$.) Then

(5.24a)    $(\tilde\beta_{N,M}U, U)_{\ell^2} = b(u,u) \simeq \|u\|^2_1$

and

(5.24b)    $(\tilde B_{N,M}F, F)_{\ell^2} = b_{N,M}(f,f) \simeq \|f\|^2_1.$

Proof. The equality of (5.24a) is a well-known fact of finite element theory and follows from (5.12b). The equivalence statement in (5.24a) follows from (5.13). The equality of (5.24b) follows from (5.7b) and (2.25b). The equivalence statement of (5.24b) follows from (5.8).

Lemma 5.4. For every vector $U = (u_1, \dots, u_{(N+1)(M+1)})^T$ we have

(5.25a)    $((W_M \otimes \tilde B^x_N)U, U)_{\ell^2} \simeq ((M_M \otimes \tilde\beta^x_N)U, U)_{\ell^2}$

and

(5.25b)    $((\tilde B^y_M \otimes W_N)U, U)_{\ell^2} \simeq ((\tilde\beta^y_M \otimes M_N)U, U)_{\ell^2}.$

Proof. For every $U^N = (u_0, u_1, \dots, u_N)^T$ and $V^M = (v_0, v_1, \dots, v_M)^T$ Theorem 3.1 implies that

(5.26a)    $(W_NU^N, U^N)_{\ell^2} \simeq (M_NU^N, U^N)_{\ell^2},$

(5.26b)    $(W_MV^M, V^M)_{\ell^2} \simeq (M_MV^M, V^M)_{\ell^2}.$

Theorem 4.1 asserts that

(5.27a)    $(\tilde B^x_NU^N, U^N)_{\ell^2} \simeq (\tilde\beta^x_NU^N, U^N)_{\ell^2}$

and

(5.27b)    $(\tilde B^y_MV^M, V^M)_{\ell^2} \simeq (\tilde\beta^y_MV^M, V^M)_{\ell^2}.$

Since all the matrices involved are symmetric and positive definite these equivalence statements represent bounds on certain eigenvalues. Consider the eigenvalue problems

(5.28a)    $W_NU^N = \mu M_NU^N,$

(5.28b)    $\tilde B^y_MV^M = \lambda\tilde\beta^y_MV^M.$

Each has a complete set of eigenvectors

$U^N(s),\ s = 1, 2, \dots, (N+1); \qquad V^M(t),\ t = 1, 2, \dots, (M+1).$

Therefore, the vectors and eigenvalues

$Z_{st} = V^M(t) \otimes U^N(s), \qquad \lambda_t\mu_s = \eta_{st}$

are a complete set of eigenvectors and eigenvalues of the eigenvalue problem

$(\tilde B^y_M \otimes W_N)U = \eta(\tilde\beta^y_M \otimes M_N)U.$

The equivalences (5.26a) and (5.27b) yield the precise statement: there are constants $C_1, C_2, D_1, D_2$, all positive and independent of $N, M$, such that

$0 < C_1 \le \lambda_t \le C_2, \quad 0 < D_1 \le \mu_s \le D_2.$

Thus,

$C_1D_1 \le \eta_{st} \le C_2D_2,$

and (5.25b) follows. The same argument yields (5.25a).

Theorem 5.3. For every $U = (u_1, u_2, \dots, u_{(N+1)(M+1)})^T$ we have

(5.29a)    $(\tilde\beta_{N,M}U, U)_{\ell^2} \simeq (\tilde B_{N,M}U, U)_{\ell^2}.$

This statement is equivalent to: let $0 < q_1 \le q_2 \le \dots \le q_{(N+1)(M+1)}$ be the eigenvalues of the matrix $Q_{N,M}$ given by (5.2). Then there are constants $0 < \alpha_0 < \beta_0$ such that

(5.29b)    $0 < \alpha_0 \le q_j \le \beta_0.$

Proof. The theorem follows immediately from Lemma 5.4. The same argument also yields

Theorem 5.4. For every $U = (u_1, u_2, \dots, u_{(N+1)(M+1)})^T$ we have

$(W_{N,M}U, U)_{\ell^2} \simeq (M_{N,M}U, U)_{\ell^2}.$

Theorems 5.3 and 5.4 can be interpreted as extensions of Theorems 3.1 and 3.2.
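The tensor product identities (5.19)-(5.21) are purely algebraic and can be checked mechanically. The sketch below (ours) verifies (5.21) with random matrices standing in for $\hat B^x_N$, $\hat B^y_M$ and random positive diagonals standing in for $W_N$, $W_M$; with the ordering (5.14b) one has $W_{N,M} = W_M \otimes W_N$.

```python
import numpy as np

rng = np.random.default_rng(2)
nN, nM = 5, 4                                   # (N+1) and (M+1) in the paper's notation

Bx_hat = rng.standard_normal((nN, nN))          # stands for \hat B^x_N
By_hat = rng.standard_normal((nM, nM))          # stands for \hat B^y_M
WN = np.diag(rng.uniform(0.1, 1.0, nN))         # stands for W_N
WM = np.diag(rng.uniform(0.1, 1.0, nM))         # stands for W_M

# (5.19): 2D collocation matrix;  W_{N,M} = WM kron WN for the ordering (5.14b)
B_hat = np.kron(np.eye(nM), Bx_hat) + np.kron(By_hat, np.eye(nN))
W_NM = np.kron(WM, WN)

# (5.20)-(5.21): W_{N,M} \hat B_{N,M} = WM kron (WN Bx_hat) + (WM By_hat) kron WN
lhs = W_NM @ B_hat
rhs = np.kron(WM, WN @ Bx_hat) + np.kron(WM @ By_hat, WN)
print(np.allclose(lhs, rhs))                    # True
```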
Theorem 5.5. For all $u \in V^0_{N,M}$ we have

(5.30a)    $\|u\|_1 \simeq \|I_{N,M}u\|_1,$

(5.30b)    $\|u\|_{L^2} \simeq \|I_{N,M}u\|_{L^2}.$

Proof. Let

$u(x,y) = \sum u_\nu\psi_\nu(x,y) \in V^0_{N,M}, \qquad (I_{N,M}u)(x,y) = \sum u_\nu\varphi_\nu(x,y) \in P^0_{N,M}.$

The vector $U = (u_1, u_2, \dots, u_{(N+1)(M+1)})^T$ is the coefficient vector of both functions. The general result of CQ yields

$\|I_{N,M}u\|^2_{L^2} \simeq (W_{N,M}U, U)_{\ell^2}.$

Standard finite element theory gives

$\|u\|^2_{L^2} = (M_{N,M}U, U)_{\ell^2}.$

Thus (5.30b) follows from Theorem 5.4. Similarly, (5.30a) follows from (5.29a) and Lemma 5.3.

6. Preconditioning Within $P^0_{N,M}$

The spectral collocation discretization, $A_{N,M}$, of the operator $A$ given by (1.1) is defined by the bilinear form

(6.1)    $a_{N,M}(f,g) = \langle\nabla f, \nabla g\rangle_{N,M} + a_1\langle f_x, g\rangle_{N,M} + a_2\langle f_y, g\rangle_{N,M} + a_0\langle f, g\rangle_{N,M} - \gamma[f, g]_{N,M,\Gamma_1}.$

As in (5.7b) we have

(6.2)    $a_{N,M}(f,g) = \langle A_{N,M}f, g\rangle_{N,M}, \quad \forall f, g \in P^0_{N,M}.$

In this section we are concerned with the operator

(6.3)    $S_{N,M} := B_{N,M}^{-1}A_{N,M} : P^0_{N,M} \to P^0_{N,M}.$

Our results are essentially the results of [MP, Theorem 3.2] and [PW, Theorem 14].

Lemma 6.1. There is a positive integer $N_0 > 0$ and a constant $K_1(A) > 0$ such that if

(6.4a)    $\min(N,M) \ge N_0,$

then $A_{N,M}^{-1}$ exists and

(6.4b)    $\|A_{N,M}^{-1}\|_{N,M} \le K_1(A).$

Proof. This result is immediate from the fact that the eigenvalues and eigenfunctions of $A_{N,M}$ converge to those of $A$ as $\min(N,M) \to \infty$. A direct proof is easily obtained from the arguments of [P1].

Throughout the rest of this paper we assume

(6.5)    $\min(N,M) \ge N_0.$

Using Lemma 5.1 and the arguments of [MP, Lemma 3.5, Theorem 3.27], which apply word for word, we obtain

Theorem 6.1. Assume (6.5). For every $f \in P^0_{N,M}$ let

(6.6a)    $g = B_{N,M}^{-1}A_{N,M}f = S_{N,M}f.$

Then

(6.6b)    $\langle B_{N,M}g, g\rangle_{N,M} \simeq \langle B_{N,M}f, f\rangle_{N,M},$

(6.6c)    $\|S_{N,M}f\|_1 \simeq \|f\|_1.$

Translating this result into a statement about matrices and using Theorem 5.3 we obtain

Theorem 6.2. For every $U = (u_1, \dots, u_{(N+1)(M+1)})^T$ let

(6.7a)    $Z = \hat B_{N,M}^{-1}\hat A_{N,M}U.$

Then

(6.7b)    $(\tilde B_{N,M}Z, Z)_{\ell^2} \simeq (\tilde B_{N,M}U, U)_{\ell^2},$

(6.7c)    $(\tilde\beta_{N,M}Z, Z)_{\ell^2} \simeq (\tilde\beta_{N,M}U, U)_{\ell^2}.$

The theorems above are essentially the restatement of the results of [MP] in this context. We now turn to the clustering results of [PW] and [G].

Lemma 6.2. Let

(6.8)    $S = B^{-1}A : V \to V.$

Then $\{S_{N,M}\}$ is consistent with $S$. That is, for every $f \in V$ and every sequence $f_{N,M} \in P^0_{N,M}$ such that

(6.9a)    $\|f_{N,M} - f\|_1 \to 0,$

we have

(6.9b)    $\|S_{N,M}f_{N,M} - Sf\|_1 \to 0.$

Proof. $S$ is a bounded operator, see [MP]. The family of operators $\{S_{N,M}\}$ is uniformly bounded. Hence, it is sufficient to prove (6.9b) for $f$ in a dense set of $V$ and $\{f_{N,M}\}$ a particular sequence which satisfies (6.9a). Let $f \in C^4(\bar\Omega)$. Let

(6.10a)    $g = Af$

and let $f_{N,M}$ be the "elliptic interpolant", i.e.

(6.10b)    $A_{N,M}f_{N,M} = g = Af$  (at the collocation points).

Let

(6.10c)    $S_{N,M}f_{N,M} = B_{N,M}^{-1}A_{N,M}f_{N,M} = u_{N,M}, \quad \text{and} \quad v = B^{-1}Af = B^{-1}g = Sf.$

Then $Bv = Af = g$. Since the convergence theory of spectral collocation shows that

$\|u_{N,M} - v\|_1 \to 0,$

we see that (6.9b) holds.

Observe that since $B_{N,M}$ is self-adjoint in the $\langle\cdot,\cdot\rangle_{N,M}$ inner product we have

(6.11)    $\langle B_{N,M}^{-1}A_{N,M}f, g\rangle_{N,M} = \langle A_{N,M}f, B_{N,M}^{-1}g\rangle_{N,M}.$

That is,

(6.12a)    $\langle B_{N,M}^{-1}A_{N,M}f, g\rangle_{N,M} = a_{N,M}(f, q),$

where

(6.12b)    $q := B_{N,M}^{-1}g.$

Let

(6.13)    $\Phi_{N,M}(f, q) = -a_1\langle f, q_x\rangle_{N,M} - a_2\langle f, q_y\rangle_{N,M} + (a_0 - 1)\langle f, q\rangle_{N,M} - \gamma[f, q]_{N,M,\Gamma_1}$
              $\qquad\qquad + a_1\{[f, q]_{N,M,2} - [f, q]_{N,M,4}\} + a_2\{[f, q]_{N,M,3} - [f, q]_{N,M,1}\}.$

Lemma 6.3. Let $T_{N,M} : P^0_{N,M} \to P^0_{N,M}$ be defined by

(6.14a)    $S_{N,M} = (\mathrm{id}) + T_{N,M},$

where $(\mathrm{id})$ denotes the identity operator in $P^0_{N,M}$. Let

(6.14b)    $T = S - (\mathrm{id}).$

$T$ is a bounded operator taking $V \to V$. Then $T_{N,M}$ is consistent with $T$. Moreover, $T_{N,M}$ is determined by the bilinear form

(6.14c)    $\tau_{N,M}(f,g) = \langle T_{N,M}f, g\rangle_{N,M} = \Phi_{N,M}(f, [B_{N,M}^{-1}g]).$

Proof.
We have

(6.15)    $a_{N,M}(f, q) = b_{N,M}(f, q) + a_1\langle f_x, q\rangle_{N,M} + a_2\langle f_y, q\rangle_{N,M} + (a_0 - 1)\langle f, q\rangle_{N,M} - \gamma[f, q]_{N,M,\Gamma_1}.$

Using (6.12a) we see that

(6.16a)    $\langle S_{N,M}f, g\rangle_{N,M} = b_{N,M}(f, q) + a_1\langle f_x, q\rangle_{N,M} + a_2\langle f_y, q\rangle_{N,M} + (a_0 - 1)\langle f, q\rangle_{N,M} - \gamma[f, q]_{N,M,\Gamma_1},$

with

(6.16b)    $q = B_{N,M}^{-1}g.$

Note: $b_{N,M}(\cdot,\cdot)$ is defined in (5.6). Since

$b_{N,M}(f, B_{N,M}^{-1}g) = \langle B_{N,M}f, B_{N,M}^{-1}g\rangle_{N,M},$

we see that

$\langle T_{N,M}f, g\rangle_{N,M} = a_1\langle f_x, q\rangle_{N,M} + a_2\langle f_y, q\rangle_{N,M} + (a_0 - 1)\langle f, q\rangle_{N,M} - \gamma[f, q]_{N,M,\Gamma_1}.$

The final form (6.14c) comes from integration by parts.

Lemma 6.4. Let $\min(N,M) \ge N_0$. There is a constant $K_T$, depending only on $T$, such that

$\|T_{N,M}f\|_1 \le K_T\|f\|_{3/4}.$

Proof. Let

(6.17)    $T_{N,M}f = v.$

Because of (6.14a) and (6.6c) we see that there is a constant $c_0 > 0$ such that

(6.18)    $\|v\|_1 \le c_0\|f\|_1.$

Also we see that

$B_{N,M}v = A_{N,M}f - B_{N,M}f.$

Hence

(6.19)    $b_{N,M}(v, g) = \Phi_{N,M}(f, g), \quad \forall g \in P^0_{N,M}.$

In particular,

$b_{N,M}(v, v) = \Phi_{N,M}(f, v).$

Then, using the form of $\Phi_{N,M}$ and the equivalence of norms discussed earlier, we see that there is a constant $c_1 > 0$ such that

(6.20)    $\|v\|^2_1 = \|\nabla v\|^2_{L^2} + \|v\|^2_{L^2} \le c_1\left[\|f\|_{L^2}\|v\|_1 + \left(\int_{\Gamma_1}|f|^2\,d\sigma\right)^{1/2}\left(\int_{\Gamma_1}|v|^2\,d\sigma\right)^{1/2}\right].$

Consider the boundary integrals. It is well known that there is a constant $c_2$, depending only on $\Omega$, such that

(6.21a)    $\int_{\Gamma_1}|v|^2\,d\sigma \le c_2\|v\|^2_1,$

(6.21b)    $\int_{\Gamma_1}|f|^2\,d\sigma \le c_2\|f\|^2_{3/4}.$

Since

(6.21c)    $\|f\|_{L^2} \le \|f\|_{3/4},$

we have

(6.22)    $\|v\|^2_1 \le c_1\|f\|_{3/4}\|v\|_1 + c_1c_2\|f\|_{3/4}\|v\|_1.$

We recall the inequality

(6.23)    $|ab| \le \dfrac{\varepsilon}{2}|a|^2 + \dfrac{1}{2\varepsilon}|b|^2.$

Applying the inequality to (6.22) we find that there is a constant $c_3$, depending only on $c_1$ and $c_2$, such that

$\|v\|^2_1 \le \tfrac{1}{2}\|v\|^2_1 + c_3\|f\|^2_{3/4},$

or

$\|v\|^2_1 \le 2c_3\|f\|^2_{3/4},$

which proves the Lemma.

The results of Lemma 6.2 and Lemma 6.4 enable one to prove the collective compactness [A] of the family $\{T_{N,M}\}$. The argument which follows is to be found in [W2, appendix]. The operator $T = S - (\mathrm{id})$ is a bounded compact operator taking $V \to V$.

Lemma 6.5. The operators $\{T_{N,M} : \min(N,M) \ge N_0\}$ are collectively compact. That is, let $f_{N,M} \in P^0_{N,M}$ and

(6.24a)    $\|f_{N,M}\|_1 \le C.$

Then, there is a subsequence $(N_k, M_k)$ and a function $g \in V$ such that

(6.24b)    $\|T_{N_k,M_k}f_{N_k,M_k} - g\|_1 \to 0.$

Proof. Let $\Pi_{N,M}$ denote the $H^1$ projection onto $P^0_{N,M}$ based on $b(\cdot,\cdot)$. That is, for every $u \in V$, we have

(6.25a)    $\Pi_{N,M}u \in P^0_{N,M}$

and

(6.25b)    $b(\Pi_{N,M}u, g_{N,M}) = b(u, g_{N,M}), \quad \forall g_{N,M} \in P^0_{N,M}.$

The functions $\{f_{N,M}\}$ which satisfy (6.24a) have a subsequence, which we again denote by $f_{N,M}$, which is weakly convergent in $V$, using the $b(\cdot,\cdot)$ inner product, to a function $f$. That is, for every $u \in V$,

(6.26a)    $b(f_{N,M}, u) \to b(f, u).$

Moreover, the functions $f_{N,M}$ converge strongly to $f$ in $H^{3/4}$. That is,

(6.26b)    $\|f_{N,M} - f\|_{3/4} \to 0.$

We will show that $T_{N,M}f_{N,M}$ is a Cauchy sequence in $H^1$. Let $\min(N,M) \to \infty$ and $\min(N',M') \to \infty$. Consider

(6.27a)    $R := \|T_{N,M}f_{N,M} - T_{N',M'}f_{N',M'}\|_1,$

(6.27b)    $R \le \|T_{N,M}f_{N,M} - T_{N,M}\Pi_{N,M}f\|_1 + \|T_{N,M}\Pi_{N,M}f - T_{N',M'}\Pi_{N',M'}f\|_1 + \|T_{N',M'}\Pi_{N',M'}f - T_{N',M'}f_{N',M'}\|_1.$

The middle term tends to zero because $T_{N,M}$ is consistent with $T$. Using Lemma 6.4 we have

$R \le K_T\left[\|f_{N,M} - \Pi_{N,M}f\|_{3/4} + \|f_{N',M'} - \Pi_{N',M'}f\|_{3/4}\right] + \text{middle term}.$

Since

$\|f_{N,M} - \Pi_{N,M}f\|_{3/4} \le \|f_{N,M} - f\|_{3/4} + \|f - \Pi_{N,M}f\|_1,$

we see that (6.27b) implies that $R \to 0$ as

$\min[\min(N,M), \min(N',M')] \to \infty.$

The arguments of [A], [PW], [G], [W2], [GMP] finally yield our final result.

Theorem 6.3. Let

(6.28)    $\hat S_{N,M} = \hat B_{N,M}^{-1}\hat A_{N,M}.$

Let $\hat\sigma_j(N,M)^+ \ge \hat\sigma_{j+1}(N,M)^+ \ge \dots$ denote the $\tilde B_{N,M}$-singular values of $\hat S_{N,M}$ which satisfy

(6.29a)    $\hat\sigma_j(N,M)^+ \ge 1,$

and let $\hat\sigma_j(N,M)^- \le \hat\sigma_{j+1}(N,M)^- \le \dots$ denote the $\tilde B_{N,M}$-singular values of $\hat S_{N,M}$ which satisfy

(6.29b)    $0 < \hat\sigma_j(N,M)^- < 1.$

Then these values "cluster" about $\mu = 1$. Specifically, let $\varepsilon > 0$ be given.
There is an $N_1 \ge N_0$, depending on $\varepsilon$, and an integer $J$ such that, for $\min(N,M) \ge N_1$, we have

(6.30a)    $\hat\sigma_j(N,M)^+ - 1 < \varepsilon, \quad j \ge J,$

(6.30b)    $1 - \hat\sigma_j(N,M)^- < \varepsilon, \quad j \ge J.$

These quantities are given by

(6.31a)    $[\hat\sigma_j(N,M)^+]^2 = \max_{\dim s = j}\ \min_{0 \ne F \in s}\ \dfrac{(\tilde B_{N,M}\hat S_{N,M}F, \hat S_{N,M}F)_{\ell^2}}{(\tilde B_{N,M}F, F)_{\ell^2}},$

(6.31b)    $[\hat\sigma_j(N,M)^-]^2 = \min_{\dim s = j}\ \max_{0 \ne F \in s}\ \dfrac{(\tilde B_{N,M}\hat S_{N,M}F, \hat S_{N,M}F)_{\ell^2}}{(\tilde B_{N,M}F, F)_{\ell^2}}.$

We close this section with a heuristic discussion of what one might expect of the few $\tilde B_{N,M}$-singular values $\hat\sigma_j(N,M)$ which are not part of the cluster about $\mu = 1$. Specifically, let

(6.32a)    $Au = Bu - k^2u,$

where the constant $k^2$ satisfies

(6.32b)    $\lambda_{j_0} < k^2 < \lambda_{j_0+1},$

where $\lambda_{j_0}$ and $\lambda_{j_0+1}$ are consecutive eigenvalues of the operator $B$. The preconditioner is taken to be $B_\tau$ where

(6.33a)    $B_\tau u = Bu + \tau u$

with

(6.33b)    $\tau \ge 0.$

In [GP] the authors consider the problem of choosing $\tau$ so as to minimize the condition number of $[B_\tau(N,M)]^{-1}A_{N,M}$. They show that $\tau = k^2$ is optimal. It is easy to see that the $B_\tau$-singular values of $B_\tau^{-1}A$ are given by

$\dfrac{|\lambda_j - k^2|}{\tau + \lambda_j}.$

Hence, for $\min(N,M) \ge N_0$ we expect

(6.34)    $\min_j\hat\sigma_j(N,M) \approx \min\left\{\dfrac{|\lambda_{j_0} - k^2|}{\tau + \lambda_{j_0}},\ \dfrac{|\lambda_{j_0+1} - k^2|}{\tau + \lambda_{j_0+1}}\right\}.$

Thus there is one relatively small $\tilde B_{N,M}$-singular value. Moreover, as $k^2$ gets very large, this minimal singular value gets quite small. The largest $\tilde B_{N,M}$-singular value will be associated with the smallest eigenvalue $\lambda_1$ of $B_\tau$. Thus

(6.35)    $\max_j\sigma_j(N,M)^+ \approx \dfrac{k^2 - \lambda_1}{\tau + \lambda_1}.$

Hence, a large $\tau$ will make the minimal $\tilde B_{N,M}$-singular value small while a smaller $\tau > 0$ increases the maximum $\tilde B_{N,M}$-singular value.

7. General Preconditioning

In this section we discuss the matrix

(7.1)    $L_{N,M} = [\tilde\beta_{N,M}]^{-1}W_{N,M}\hat A_{N,M},$

its $\tilde\beta_{N,M}$ condition number and the distribution of its $\tilde\beta_{N,M}$-singular values. Let $Q_{N,M}$ be the matrix defined by (5.2). Since $W_{N,M}\hat B_{N,M} = \tilde B_{N,M}$ is symmetric, the matrix $Q_{N,M}$ is self-adjoint in the $\tilde\beta_{N,M}$-inner product. Hence Theorem 5.3 gives

Lemma 7.1. For every $U = (u_1, u_2, \dots, u_{(N+1)(M+1)})^T$ we have

(7.2)    $(\tilde\beta_{N,M}Q_{N,M}U, Q_{N,M}U)_{\ell^2} \simeq (\tilde\beta_{N,M}U, U)_{\ell^2}.$

Theorem 7.1. Let $\min(N,M) \ge N_0$. For every $U = (u_1, u_2, \dots, u_{(N+1)(M+1)})^T$ we have

(7.3)    $(\tilde\beta_{N,M}[L_{N,M}U], [L_{N,M}U])_{\ell^2} \simeq (\tilde\beta_{N,M}U, U)_{\ell^2}.$

The statement (7.3) asserts that there are positive constants, $0 < \alpha < \beta$, independent of $(N,M)$, such that the $\tilde\beta_{N,M}$-singular values of $L_{N,M}$, denoted by $\sigma_j(N,M)$, satisfy

$0 < \alpha \le \sigma_j(N,M) \le \beta.$

In other words, in the $B$-norm (the $\tilde\beta_{N,M}$-norm in each finite dimensional space) the condition number of $L_{N,M}$ is uniformly bounded.

Proof. We write

(7.4a)    $L_{N,M} = [\tilde\beta_{N,M}]^{-1}[W_{N,M}\hat B_{N,M}][\hat B_{N,M}^{-1}\hat A_{N,M}].$

That is,

(7.4b)    $L_{N,M} = Q_{N,M}\hat S_{N,M},$

where $\hat S_{N,M}$ is the matrix representation of the operator $S_{N,M}$ defined in (6.3). Let

(7.5a)    $Z = \hat S_{N,M}U.$

Then

(7.5b)    $(\tilde\beta_{N,M}[L_{N,M}U], [L_{N,M}U])_{\ell^2} = (\tilde\beta_{N,M}[Q_{N,M}Z], [Q_{N,M}Z])_{\ell^2}.$

Using (7.2) we see that

(7.6a)    $(\tilde\beta_{N,M}[L_{N,M}U], [L_{N,M}U])_{\ell^2} \simeq (\tilde\beta_{N,M}Z, Z)_{\ell^2},$

(7.6b)    $(\tilde\beta_{N,M}[L_{N,M}U], [L_{N,M}U])_{\ell^2} \simeq (\tilde\beta_{N,M}[\hat S_{N,M}U], [\hat S_{N,M}U])_{\ell^2}.$

Using (6.7c) of Theorem 6.2 we have

$(\tilde\beta_{N,M}Z, Z)_{\ell^2} \simeq (\tilde\beta_{N,M}U, U)_{\ell^2},$

and hence

(7.6c)    $(\tilde\beta_{N,M}[L_{N,M}U], [L_{N,M}U])_{\ell^2} \simeq (\tilde\beta_{N,M}U, U)_{\ell^2}.$

The theorem is proven. Finally, we discuss the clustering of the $\sigma_j(N,M)$.
Theorem 7.2. Let $\min(N,M) \ge N_0$. Let $\alpha_0, \beta_0$ be the constants of Theorem 5.3 given in (5.29b). Then the $\tilde\beta_{N,M}$-singular values of $L_{N,M}$, the $\sigma_j(N,M)$, cluster in the interval

$\left[\left(\dfrac{\alpha_0^3}{\beta_0}\right)^{1/2}, \left(\dfrac{\beta_0^3}{\alpha_0}\right)^{1/2}\right].$

Note: This interval is independent of the operator $A$.

Proof. The $[\sigma_j(N,M)]^2$, the squares of the $\tilde\beta_{N,M}$-singular values of $L_{N,M}$, are given by the min-max principle applied to the Rayleigh quotient

(7.7a)    $\Lambda(U) := \dfrac{(\tilde\beta_{N,M}[L_{N,M}U], [L_{N,M}U])_{\ell^2}}{(\tilde\beta_{N,M}U, U)_{\ell^2}}.$

That is, in the notation introduced in the proof of Theorem 7.1,

(7.7b)    $\Lambda(U) = \dfrac{(\tilde\beta_{N,M}[Q_{N,M}Z], [Q_{N,M}Z])_{\ell^2}}{(\tilde\beta_{N,M}U, U)_{\ell^2}}.$

Using (5.29b) we have

(7.8)    $\alpha_0^2\,\dfrac{(\tilde\beta_{N,M}Z, Z)_{\ell^2}}{(\tilde\beta_{N,M}U, U)_{\ell^2}} \le \Lambda(U) \le \beta_0^2\,\dfrac{(\tilde\beta_{N,M}Z, Z)_{\ell^2}}{(\tilde\beta_{N,M}U, U)_{\ell^2}}.$

Using Theorem 5.3 again we note that

$\alpha_0(\tilde\beta_{N,M}Z, Z)_{\ell^2} \le (\tilde B_{N,M}Z, Z)_{\ell^2} \le \beta_0(\tilde\beta_{N,M}Z, Z)_{\ell^2}.$

Hence, using (7.8) we have

$\dfrac{\alpha_0^2}{\beta_0}\,\dfrac{(\tilde B_{N,M}Z, Z)_{\ell^2}}{(\tilde\beta_{N,M}U, U)_{\ell^2}} \le \Lambda(U) \le \dfrac{\beta_0^2}{\alpha_0}\,\dfrac{(\tilde B_{N,M}Z, Z)_{\ell^2}}{(\tilde\beta_{N,M}U, U)_{\ell^2}},$

$\dfrac{\alpha_0^3}{\beta_0}\,\dfrac{(\tilde B_{N,M}Z, Z)_{\ell^2}}{(\tilde B_{N,M}U, U)_{\ell^2}} \le \Lambda(U) \le \dfrac{\beta_0^3}{\alpha_0}\,\dfrac{(\tilde B_{N,M}Z, Z)_{\ell^2}}{(\tilde B_{N,M}U, U)_{\ell^2}}.$

However, the min-max properties of the Rayleigh quotient

$\hat\Lambda := \dfrac{(\tilde B_{N,M}Z, Z)_{\ell^2}}{(\tilde B_{N,M}U, U)_{\ell^2}} = \dfrac{(\tilde B_{N,M}\hat S_{N,M}U, \hat S_{N,M}U)_{\ell^2}}{(\tilde B_{N,M}U, U)_{\ell^2}}$

are discussed in Theorem 6.3. Its min-max values are the quantities $[\hat\sigma_j(N,M)^\pm]^2$, which cluster about $\mu = 1$. That is, the $\sigma_j(N,M)$ satisfy

(7.9)    $\dfrac{\alpha_0^3}{\beta_0}\left[\hat\sigma_j^-(N,M)\right]^2 \le [\sigma_j(N,M)]^2 \le \dfrac{\beta_0^3}{\alpha_0}\left[\hat\sigma_j^+(N,M)\right]^2,$

where the $\hat\sigma_j^\pm(N,M)$ are the $\tilde B_{N,M}$-singular values of $\hat S_{N,M}$. Thus, the Theorem is proven.

8. Piecewise-linear Preconditioning

The main goal of this section is to establish the estimate (1.14). Then, as indicated in the Introduction, the arguments of Section 7 apply word-for-word to obtain bounds and distribution estimates on the $\tilde\beta_{N,M}$-singular values of

(8.1)    $L_{N,M} = \tilde\beta_{N,M}^{-1}\tilde A_{N,M} = \tilde\beta_{N,M}^{-1}W_{N,M}\hat A_{N,M}.$

In this case $\tilde\beta_{N,M}$ is the stiffness matrix associated with the finite element discretization of the operator $B$ in the space $Z^0_{N,M}$ with the interpolatory basis. We collect some basic facts which are the result of straightforward, but somewhat tedious, calculations.

Let $K$ be the basic rectangle with the four LGL vertices $(x_k, y_j), (x_{k+1}, y_j), (x_{k+1}, y_{j+1}), (x_k, y_{j+1})$. Let $u(x,y) \in V^0_{N,M}$ and on $K$ let

(8.2)    $u(x,y) = a + b(x - x_k) + c(y - y_j) + d(x - x_k)(y - y_j)$ on $K$.

Let

(8.3)    $h := (y_{j+1} - y_j), \quad m := (x_{k+1} - x_k).$

Then, a direct computation shows that

(8.4a)    $a = u(x_k, y_j),$

(8.4b)    $b = [u(x_{k+1}, y_j) - u(x_k, y_j)]/m,$

(8.4c)    $c = [u(x_k, y_{j+1}) - u(x_k, y_j)]/h,$

(8.4d)    $d = \{[u(x_{k+1}, y_{j+1}) + u(x_k, y_j)] - [u(x_k, y_{j+1}) + u(x_{k+1}, y_j)]\}/(mh).$

Let $K_{N,M}$ be the interpolation operator defined by (2.21). Let $w(x,y) = (K_{N,M}u)(x,y)$. A straightforward computation shows that, on $K$, we have

(8.5)    $w(x,y) = a + b(x - x_k) + c(y - y_j)$, $(x,y) \in T_1$;
          $w(x,y) = \alpha + \beta(x - x_{k+1}) + \gamma(y - y_{j+1})$, $(x,y) \in T_2$,

where, as in Section 2, $T_1$ is the triangle with vertices $(x_k, y_j), (x_{k+1}, y_j), (x_k, y_{j+1})$ and $T_2$ is the triangle with vertices $(x_{k+1}, y_j), (x_{k+1}, y_{j+1}), (x_k, y_{j+1})$, and

(8.6a)    $\alpha = a + bm + ch + dmh,$

(8.6b)    $\beta = b + dh,$

(8.6c)    $\gamma = c + dm.$

Completing the integrals we find that

(8.7)    $\iint_K u^2\,dx\,dy = \dfrac{A(K)}{9}\,\xi_K^T[M(u)]\,\xi_K,$

where

(8.8a)    $A(K) = mh$

is the area of $K$ and

(8.8b)    $\xi_K := (u(x_k, y_j), u(x_{k+1}, y_j), u(x_{k+1}, y_{j+1}), u(x_k, y_{j+1}))^T$

is the vector of the four function values which determine $u(x,y)$ on $K$ as well as $w(x,y)$ on $K$, and

(8.8c)    M(u) := [ 1    1/2  1/4  1/2
                    1/2  1    1/2  1/4
                    1/4  1/2  1    1/2
                    1/2  1/4  1/2  1   ].

Another integration yields

(8.9)    $\iint_K w^2\,dx\,dy = \dfrac{A(K)}{12}\,\xi_K^T[M(w)]\,\xi_K,$

where

(8.10)    M(w) := [ 1    1/2  0    1/2
                    1/2  2    1/2  1
                    0    1/2  1    1/2
                    1/2  1    1/2  2   ].
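The spectra of the two 4x4 matrices just displayed are what drive Lemma 8.1 below, and they can be verified in a few lines. The paper reports a Matlab computation; the following equivalent numpy sketch is ours.

```python
import numpy as np

M_u = np.array([[1.00, 0.50, 0.25, 0.50],
                [0.50, 1.00, 0.50, 0.25],
                [0.25, 0.50, 1.00, 0.50],
                [0.50, 0.25, 0.50, 1.00]])    # (8.8c)

M_w = np.array([[1.0, 0.5, 0.0, 0.5],
                [0.5, 2.0, 0.5, 1.0],
                [0.0, 0.5, 1.0, 0.5],
                [0.5, 1.0, 0.5, 2.0]])        # (8.10)

print(np.linalg.eigvalsh(M_u))   # [0.25, 0.75, 0.75, 2.25]                 cf. (8.13)
print(np.linalg.eigvalsh(M_w))   # [0.5857..., 1.0, 1.0, 3.4142...]         cf. (8.14a)
```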
Lemma 8.1. There are two positive constants, $0 < \gamma < \delta$, such that, for each $u(x,y) \in V^0_{N,M}$, we have

(8.11)    $\gamma\|u\|^2_{L^2} \le \|K_{N,M}u\|^2_{L^2} \le \delta\|u\|^2_{L^2}.$

The constants $\gamma$ and $\delta$ may be taken to be

(8.12a)    $\gamma_0 = 0.19,$

(8.12b)    $\delta_0 = 10.5.$

Proof. A calculation (using Matlab) gives the eigenvalues of $M(u)$ and $M(w)$. These are: the eigenvalues of $M(u)$ are

(8.13)    $1/4,\ 3/4,\ 3/4,\ 9/4$  [for $M(u)$];

the eigenvalues of $M(w)$ are

(8.14a)    $\mu_1 \approx 0.5858,\ 1,\ 1,\ \mu_4 \approx 3.4142$  [for $M(w)$].

Thus

(8.14b)    $\mu_1 \ge 0.58, \quad \mu_4 \le 3.5.$

Using (8.7) and (8.9) we see that, for each rectangle $K$, we have

$\dfrac{9}{12}\cdot\dfrac{0.5858}{9/4}\iint_K u^2\,dx\,dy \le \iint_K w^2\,dx\,dy.$

That is,

(8.15a)    $\gamma_0\iint_K u^2\,dx\,dy \le \iint_K|K_{N,M}u|^2\,dx\,dy.$

Similarly,

$\iint_K w^2\,dx\,dy \le \dfrac{3.5}{1/4}\cdot\dfrac{9}{12}\iint_K u^2\,dx\,dy.$

That is,

(8.15b)    $\iint_K|K_{N,M}u|^2\,dx\,dy \le \delta_0\iint_K u^2\,dx\,dy.$

The final result follows upon summation over the rectangles $K$.

In dealing with the integrals

$\iint_K|\nabla u|^2\,dx\,dy, \qquad \iint_K|\nabla w|^2\,dx\,dy,$

we find it convenient to use the vector

(8.16)    $\zeta_K := (b, c, d)^T.$

Using these variables rather than $\xi_K$ enables us to bypass the difficulties caused by the constant functions. Further elementary, but tedious, calculations show that

(8.17a)    $\iint_K|\nabla u|^2\,dx\,dy = A(K)\,\zeta_K^TS_K(u)\,\zeta_K,$

where

(8.17b)    S_K(u) := [ 1    0    h/2
                       0    1    m/2
                       h/2  m/2  (m^2 + h^2)/3 ],

and

(8.18a)    $\iint_K|\nabla w|^2\,dx\,dy = A(K)\,\zeta_K^TS_K(w)\,\zeta_K,$

where

(8.18b)    S_K(w) = [ 1    0    h/2
                      0    1    m/2
                      h/2  m/2  (h^2 + m^2)/2 ].

Lemma 8.2. Let $u \in V^0_{N,M}$. Then

(8.19)    $\iint_\Omega|\nabla u|^2\,dx\,dy \le \iint_\Omega|\nabla(K_{N,M}u)|^2\,dx\,dy \le 3\iint_\Omega|\nabla u|^2\,dx\,dy.$

Proof. Once more, it suffices to establish these inequalities for each rectangle $K$. Clearly

$S_K(w) = S_K(u) + \dfrac{1}{6}\operatorname{diag}(0,\ 0,\ h^2 + m^2).$

Hence, using (8.17) and (8.18), we have

(8.20)    $\iint_K|\nabla u|^2\,dx\,dy \le \iint_K|\nabla(K_{N,M}u)|^2\,dx\,dy.$

Similarly, we verify that

(8.21a)    $S_K(w) = 3S_K(u) - 2\Lambda_K,$

where

(8.21b)    Lambda_K := [ 1    0    h/2
                         0    1    m/2
                         h/2  m/2  (h^2 + m^2)/4 ].

A simple computation shows that $\det\Lambda_K = 0$. Further computation shows that $\Lambda_K$ is positive semi-definite. Hence

(8.22)    $\iint_K|\nabla(K_{N,M}u)|^2\,dx\,dy \le 3\iint_K|\nabla u|^2\,dx\,dy,$

and the Lemma is proven.

Theorem 8.1. Let $\gamma_0$ and $\delta_0$ be the constants of Lemma 8.1. Then, for all $u \in V^0_{N,M}$, we have

(8.23)    $(\gamma_0)^{1/2}\|u\|_1 \le \|K_{N,M}u\|_1 \le (\delta_0)^{1/2}\|u\|_1.$

Proof. The Theorem follows immediately from the two preceding Lemmas.

Having established this result we immediately obtain the $Z^0_{N,M}$ version of Theorem 5.3, Theorem 7.1 and Theorem 7.2.

Theorem 8.2. Let $\tilde\beta_{N,M}$ be the stiffness matrix of the finite element discretization of the operator $B$ in the space $Z^0_{N,M}$ with the interpolatory basis (i.e., the degrees of freedom are the values of $w(x,y) \in Z^0_{N,M}$ at the two-dimensional LGL points). For every $U = (u_1, u_2, \dots, u_{(N+1)(M+1)})^T$ we have

(8.24)    $(\tilde\beta_{N,M}U, U)_{\ell^2} \simeq (\tilde B_{N,M}U, U)_{\ell^2}.$

This statement is equivalent to: let $0 < q_1 \le \dots \le q_{(N+1)(M+1)}$ be the eigenvalues of the matrix

(8.25)    $Q_{N,M} = \tilde\beta_{N,M}^{-1}W_{N,M}\hat B_{N,M} = \tilde\beta_{N,M}^{-1}\tilde B_{N,M}.$

There are constants $0 < \alpha_1 < \beta_1$ such that

(8.26)    $0 < \alpha_1 \le q_j \le \beta_1.$

Theorem 8.3. Let $\tilde\beta_{N,M}$ be as in Theorem 8.2. Let $\min(N,M) \ge N_0$. For every $U = (u_1, u_2, \dots, u_{(N+1)(M+1)})^T$, we have

(8.27)    $(\tilde\beta_{N,M}[L_{N,M}U], [L_{N,M}U])_{\ell^2} \simeq (\tilde\beta_{N,M}U, U)_{\ell^2}.$

The statement (8.27) asserts that there are positive constants $0 < \alpha < \beta$, independent of $(N,M)$, such that the $\tilde\beta_{N,M}$-singular values of $L_{N,M}$, denoted by $\sigma_j(N,M)$, satisfy $0 < \alpha \le \sigma_j(N,M) \le \beta$. In other words, in the $B$-norm, the condition number of $L_{N,M}$ is uniformly bounded.

Theorem 8.4. Let $\min(N,M) \ge N_0$. Let $\alpha_1, \beta_1$ be the constants of (8.26). Then the $\tilde\beta_{N,M}$-singular values of $L_{N,M}$, the $\sigma_j(N,M)$, cluster in the interval

$\left[\left(\dfrac{\alpha_1^3}{\beta_1}\right)^{1/2}, \left(\dfrac{\beta_1^3}{\alpha_1}\right)^{1/2}\right].$

9. Computational Results

In this section we discuss numerical experiments on both one- and two-dimensional problems.
First, let us consider the following second-order elliptic problem in one dimension:

(9.1a)    $Au := -u_{xx} + su_x + bu$ for $x \in (-1, 1)$

with boundary conditions

(9.1b)    $u_x(-1) - u(-1) = 0 = u(1).$

Let $B$ be the operator defined by

(9.2a)    $Bv = -v_{xx} + |b|v$ for $x \in (-1, 1)$

with boundary conditions

(9.2b)    $v_x(-1) = 0 = v(1).$

We let $A_N$ be the pseudospectral discretization based on the Legendre-Gauss-Lobatto (LGL) points representation of the operator $A$. We consider the weak formulation, which leads to the following treatment of the Robin boundary condition:

(9.3a)    $\omega_0\left(-u^N_{xx}(-1) + su^N_x(-1) + bu^N(-1)\right) - \left[u^N_x(-1) - u^N(-1)\right] = \omega_0f(-1),$

(9.3b)    $-u^N_{xx}(x_j) + su^N_x(x_j) + bu^N(x_j) = f(x_j), \quad j = 1, 2, \dots, N-1,$

(9.3c)    $u^N(1) = 0,$

where $\omega_0 = 2/(N(N+1))$. Consider the linear equation

(9.4)    $\hat A_Nu = f$

which arises in the numerical solution of the boundary value problem $Au = f$. We let $\tilde\beta_N$ be the finite element stiffness matrix associated with the operator $B$ in the finite element space $V^0_N$. We precondition $\hat A_N$ with $\tilde\beta_N^{-1}W_N$, where $W_N$ is a diagonal matrix whose entries are the Legendre-Gauss-Lobatto quadrature weights. Finally, let $L_N$ denote the preconditioned matrix given by

(9.5)    $L_N = \tilde\beta_N^{-1}W_N\hat A_N.$

We present numerical results for two problems: one in which $A$ has all positive eigenvalues, the other in which $A$ has a few negative eigenvalues. The problems presented here are representative of similar problems for which we did numerical experiments. The eigenvalues listed are actually (for the indefinite cases) the absolute values of the eigenvalues. The $\tilde\beta_N$-singular values of $L_N$ are defined, as in Section 2, as the square roots of the eigenvalues of the matrix $[\tilde\beta_N^{-1}(W_N\hat A_N)^T][\tilde\beta_N^{-1}(W_N\hat A_N)]$. The singular values of $L_N$ are, as usual, the square roots of the eigenvalues of the matrix $L_N^TL_N$. One can see from these tables that the eigenvalues, singular values, and $\tilde\beta$-singular values are clustered and produce small condition numbers.

For the case in which $A$ is positive definite, we take $s = 5$, $b = (\tfrac{5\pi}{4})^2$. We report the eigenvalues, singular values, and $\tilde\beta$-singular values for various values of $N$ along with the associated condition numbers in Tables 9.1-9.3.

Table 9.1  Extreme eigenvalues with spectral condition numbers: $L_N$, $b = (\tfrac{5\pi}{4})^2$, $s = 5$.

   N      min. eig.        max. eig.        condition no.
   8      0.1167964E+01    0.1847266E+01    0.1581612E+01
   16     0.1130649E+01    0.1991085E+01    0.1761010E+01
   32     0.1107615E+01    0.2185080E+01    0.1972780E+01
   64     0.1064111E+01    0.2315478E+01    0.2175973E+01
   128    0.1033620E+01    0.2388818E+01    0.2311117E+01
   256    0.1017152E+01    0.2427464E+01    0.2386530E+01
   512    0.1008628E+01    0.2447273E+01    0.2426340E+01
   1024   0.1004330E+01    0.2457297E+01    0.2446703E+01

Table 9.2  Extreme singular values with condition numbers: $L_N$, $b = (\tfrac{5\pi}{4})^2$, $s = 5$.

   N      min. sing.       max. sing.       condition no.
   8      0.4483755E+00    0.2268380E+01    0.5059109E+01
   16     0.3372913E+00    0.2314083E+01    0.6860783E+01
   32     0.2626978E+00    0.2657845E+01    0.1011750E+02
   64     0.2001321E+00    0.3439328E+01    0.1718529E+02
   128    0.1482101E+00    0.4630036E+01    0.3123968E+02
   256    0.1075456E+00    0.6377109E+01    0.5929680E+02
   512    0.7709549E-01    0.8895498E+01    0.1153829E+03
   1024   0.5490092E-01    0.1249217E+02    0.2275402E+03

Table 9.3  Extreme $\tilde\beta$-singular values with condition numbers: $L_N$, $b = (\tfrac{5\pi}{4})^2$, $s = 5$.

   N      min. $\tilde\beta$-sing.   max. $\tilde\beta$-sing.   condition no.
   8      0.7090087E+00    0.2202798E+01    0.3106870E+01
   16     0.7162984E+00    0.2254583E+01    0.3147547E+01
   32     0.7177499E+00    0.2337237E+01    0.3256339E+01
   64     0.7180715E+00    0.2396285E+01    0.3337112E+01
   128    0.7181492E+00    0.2430340E+01    0.3384172E+01
   256    0.7181685E+00    0.2448497E+01    0.3409363E+01
   512    0.7181734E+00    0.2457856E+01    0.3422371E+01
   1024   0.7181746E+00    0.2462605E+01    0.3428979E+01

For the case of the indefinite problem we take $s = 5$, $b = -(\tfrac{5\pi}{4})^2$. One can find by elementary methods that the operator $A$ has 2 negative eigenvalues in this case. We also computed the number of negative eigenvalues of $A_N$, and it too had 2 negative eigenvalues (at least for $N = 8, 16, \dots, 1024$). We report the extreme eigenvalues, singular values and $\tilde\beta$-singular values along with the associated condition numbers for these values of $N$ in Tables 9.4-9.6.

Table 9.4  Extreme eigenvalues with spectral condition numbers: $L_N$, $b = -(\tfrac{5\pi}{4})^2$, $s = 5$.

   N      min. eig.        max. eig.        condition no.
   8      0.1998642E+00    0.1129986E+01    0.5653770E+01
   16     0.2034377E+00    0.1789578E+01    0.8796686E+01
   32     0.2080575E+00    0.2133634E+01    0.1025502E+02
   64     0.2096394E+00    0.2302579E+01    0.1098352E+02
   128    0.2100719E+00    0.2385592E+01    0.1135607E+02
   256    0.2101830E+00    0.2426658E+01    0.1154545E+02
   512    0.2102110E+00    0.2447071E+01    0.1164102E+02
   1024   0.2102181E+00    0.2457247E+01    0.1168904E+02

Table 9.5  Extreme singular values with condition numbers: $L_N$, $b = -(\tfrac{5\pi}{4})^2$, $s = 5$.

   N      min. sing.       max. sing.       condition no.
   8      0.9485507E-02    0.1691159E+01    0.1782887E+03
   16     0.7362417E-02    0.2126797E+01    0.2888722E+03
   32     0.5552403E-02    0.2333357E+01    0.4202428E+03
   64     0.4062287E-02    0.3130882E+01    0.7707192E+03
   128    0.2921448E-02    0.4398256E+01    0.1505506E+04
   256    0.2083042E-02    0.6208273E+01    0.2980388E+04
   512    0.1479009E-02    0.8774315E+01    0.5932563E+04
   1024   0.1047958E-02    0.1240583E+02    0.1183809E+05

Table 9.6  Extreme $\tilde\beta$-singular values with condition numbers: $L_N$, $b = -(\tfrac{5\pi}{4})^2$, $s = 5$.

   N      min. $\tilde\beta$-sing.   max. $\tilde\beta$-sing.   condition no.
   8      0.7998722E-02    0.1533053E+01    0.1916622E+03
   16     0.7616712E-02    0.2053065E+01    0.2695474E+03
   32     0.7514645E-02    0.2283756E+01    0.3039074E+03
   64     0.7488212E-02    0.2382606E+01    0.3181809E+03
   128    0.7481484E-02    0.2426887E+01    0.3243858E+03
   256    0.7479787E-02    0.2447630E+01    0.3272325E+03
   512    0.7479360E-02    0.2457639E+01    0.3285894E+03
   1024   0.7479250E-02    0.2462551E+01    0.3292510E+03

It may appear from these tables that the preconditioning is not very effective. However, this is not the case in light of Theorem 7.2, which guarantees that the $\tilde\beta$-singular values will be nicely clustered. This suggests that conjugate gradient can be applied to the normal equations using the $\tilde\beta$-inner product. Numerical evidence presented in Table 9.7 confirms this. Also, the numerical results on the singular values presented in Table 9.8 suggest that the singular values are nicely clustered. At present we do not have a proof of this. However, this empirical evidence suggests that one can successfully apply conjugate gradient using the $\ell^2$-inner product. In fact, in both cases ($\tilde\beta$-singular and singular values), only one value is outside the appropriate interval (for all $N = 8, 16, \dots, 1024$). Condition numbers in Tables 9.7 and 9.8 are based on all $\tilde\beta$-singular and singular values excluding the largest and smallest. Actually, one can obtain almost identical results by excluding only the smallest values.

Table 9.7  2nd extreme singular values with condition numbers: $L_N$, $b = -(\tfrac{5\pi}{4})^2$, $s = 5$.

   N      2nd min. sing.   2nd max. sing.   condition no.
For the case in which A is positive definite, we take s = 5 and b = (5π/4)^2. We report the eigenvalues, singular values, and β̃_N-singular values for various values of N, along with the associated condition numbers, in Tables 9.1-9.3.

Table 9.1  Extreme eigenvalues with spectral condition numbers: L_N, b = (5π/4)^2, s = 5.

    N       min. eig.        max. eig.        condition no.
    8       0.1167964E+01    0.1847266E+01    0.1581612E+01
    16      0.1130649E+01    0.1991085E+01    0.1761010E+01
    32      0.1107615E+01    0.2185080E+01    0.1972780E+01
    64      0.1064111E+01    0.2315478E+01    0.2175973E+01
    128     0.1033620E+01    0.2388818E+01    0.2311117E+01
    256     0.1017152E+01    0.2427464E+01    0.2386530E+01
    512     0.1008628E+01    0.2447273E+01    0.2426340E+01
    1024    0.1004330E+01    0.2457297E+01    0.2446703E+01

Table 9.2  Extreme singular values with condition numbers: L_N, b = (5π/4)^2, s = 5.

    N       min. sing.       max. sing.       condition no.
    8       0.4483755E+00    0.2268380E+01    0.5059109E+01
    16      0.3372913E+00    0.2314083E+01    0.6860783E+01
    32      0.2626978E+00    0.2657845E+01    0.1011750E+02
    64      0.2001321E+00    0.3439328E+01    0.1718529E+02
    128     0.1482101E+00    0.4630036E+01    0.3123968E+02
    256     0.1075456E+00    0.6377109E+01    0.5929680E+02
    512     0.7709549E-01    0.8895498E+01    0.1153829E+03
    1024    0.5490092E-01    0.1249217E+02    0.2275402E+03

Table 9.3  Extreme β̃_N-singular values with condition numbers: L_N, b = (5π/4)^2, s = 5.

    N       min. β̃-sing.     max. β̃-sing.     condition no.
    8       0.7090087E+00    0.2202798E+01    0.3106870E+01
    16      0.7162984E+00    0.2254583E+01    0.3147547E+01
    32      0.7177499E+00    0.2337237E+01    0.3256339E+01
    64      0.7180715E+00    0.2396285E+01    0.3337112E+01
    128     0.7181492E+00    0.2430340E+01    0.3384172E+01
    256     0.7181685E+00    0.2448497E+01    0.3409363E+01
    512     0.7181734E+00    0.2457856E+01    0.3422371E+01
    1024    0.7181746E+00    0.2462605E+01    0.3428979E+01

For the indefinite problem we take s = 5 and b = -(5π/4)^2. One can find by elementary methods that the operator A has 2 negative eigenvalues in this case. We also computed the number of negative eigenvalues of A_N, and it too had 2 negative eigenvalues (at least for N = 8, 16, ..., 1024). We report the extreme eigenvalues, singular values, and β̃_N-singular values, along with the associated condition numbers, for these values of N in Tables 9.4-9.6.

Table 9.4  Extreme eigenvalues with spectral condition numbers: L_N, b = -(5π/4)^2, s = 5.

    N       min. eig.        max. eig.        condition no.
    8       0.1998642E+00    0.1129986E+01    0.5653770E+01
    16      0.2034377E+00    0.1789578E+01    0.8796686E+01
    32      0.2080575E+00    0.2133634E+01    0.1025502E+02
    64      0.2096394E+00    0.2302579E+01    0.1098352E+02
    128     0.2100719E+00    0.2385592E+01    0.1135607E+02
    256     0.2101830E+00    0.2426658E+01    0.1154545E+02
    512     0.2102110E+00    0.2447071E+01    0.1164102E+02
    1024    0.2102181E+00    0.2457247E+01    0.1168904E+02

Table 9.5  Extreme singular values with condition numbers: L_N, b = -(5π/4)^2, s = 5.

    N       min. sing.       max. sing.       condition no.
    8       0.9485507E-02    0.1691159E+01    0.1782887E+03
    16      0.7362417E-02    0.2126797E+01    0.2888722E+03
    32      0.5552403E-02    0.2333357E+01    0.4202428E+03
    64      0.4062287E-02    0.3130882E+01    0.7707192E+03
    128     0.2921448E-02    0.4398256E+01    0.1505506E+04
    256     0.2083042E-02    0.6208273E+01    0.2980388E+04
    512     0.1479009E-02    0.8774315E+01    0.5932563E+04
    1024    0.1047958E-02    0.1240583E+02    0.1183809E+05

Table 9.6  Extreme β̃_N-singular values with condition numbers: L_N, b = -(5π/4)^2, s = 5.

    N       min. β̃-sing.     max. β̃-sing.     condition no.
    8       0.7998722E-02    0.1533053E+01    0.1916622E+03
    16      0.7616712E-02    0.2053065E+01    0.2695474E+03
    32      0.7514645E-02    0.2283756E+01    0.3039074E+03
    64      0.7488212E-02    0.2382606E+01    0.3181809E+03
    128     0.7481484E-02    0.2426887E+01    0.3243858E+03
    256     0.7479787E-02    0.2447630E+01    0.3272325E+03
    512     0.7479360E-02    0.2457639E+01    0.3285894E+03
    1024    0.7479250E-02    0.2462551E+01    0.3292510E+03

It may appear from these tables that the preconditioning is not very effective. However, this is not the case in light of Theorem 7.2, which guarantees that the β̃_N-singular values will be nicely clustered. This suggests that conjugate gradient can be applied to the normal equations using the β̃_N-inner product; a sketch of such an iteration is given below. Numerical evidence presented in Table 9.7 confirms this. Also, the numerical results on the singular values presented in Table 9.8 suggest that the singular values are nicely clustered. At present we do not have a proof of this. However, this empirical evidence suggests that one can successfully apply conjugate gradient using the l2-inner product. In fact, in both cases (β̃_N-singular and singular values), only one value is outside the appropriate interval (for all N = 8, 16, ..., 1024). The condition numbers in Tables 9.7 and 9.8 are based on all β̃_N-singular and singular values excluding the largest and smallest. Actually, one can obtain almost identical results by excluding only the smallest values.
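The iteration suggested above can be realized as conjugate gradient applied to the normal equations L_N^♯ L_N u = L_N^♯ g, with g = β̃_N^{-1} W_N f, carrying out every inner product in the β̃_N inner product. Since L_N^♯ = β̃_N^{-1} Â_N^T W_N, each step needs one multiplication by Â_N, one by Â_N^T, and two solves with β̃_N. The following is a minimal sketch under those assumptions; it is not the author's implementation, and the names are ours.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def cg_normal_beta(Ahat, W, Btil, f, tol=1e-12, maxit=500):
    """CG for the normal equations L^# L u = L^# g in the Btil inner product,
    where L = Btil^{-1} W Ahat, g = Btil^{-1} W f and L^# = Btil^{-1} Ahat^T W."""
    chol = cho_factor(Btil)                    # Btil is symmetric positive definite
    Bsolve = lambda y: cho_solve(chol, y)
    L  = lambda v: Bsolve(W @ (Ahat @ v))      # v -> L v
    Ls = lambda v: Bsolve(Ahat.T @ (W @ v))    # v -> L^# v  (Btil-adjoint of L)
    ip = lambda x, y: x @ (Btil @ y)           # <x, y> in the Btil inner product
    g = Bsolve(W @ f)
    u = np.zeros_like(g)
    r = Ls(g - L(u))                           # residual of the normal equations
    p = r.copy()
    rho = rho0 = ip(r, r)
    for _ in range(maxit):
        q = Ls(L(p))                           # (L^# L) p
        alpha = rho / ip(p, q)
        u += alpha * p
        r -= alpha * q
        rho_new = ip(r, r)
        if np.sqrt(rho_new) < tol * np.sqrt(rho0):
            break
        p = r + (rho_new / rho) * p
        rho = rho_new
    return u
```

Because L_N^♯ L_N is self-adjoint and positive definite in the β̃_N inner product, the usual CG theory applies, and the clustering of the β̃_N-singular values guaranteed by Theorem 7.2 is what makes the iteration effective despite the large ordinary condition numbers in Tables 9.5 and 9.6.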
Table 9.7  Second extreme singular values with condition numbers: L_N, b = -(5π/4)^2, s = 5.

    N       2nd min. sing.   2nd max. sing.   condition no.
    8       0.8036099E+00    0.1433090E+01    0.1783315E+01
    16      0.6747487E+00    0.1906044E+01    0.2824820E+01
    32      0.6100785E+00    0.2258877E+01    0.3702600E+01
    64      0.5818087E+00    0.2395103E+01    0.4116651E+01
    128     0.5688084E+00    0.2433109E+01    0.4277554E+01
    256     0.5626150E+00    0.2450706E+01    0.4355920E+01
    512     0.5595988E+00    0.2459166E+01    0.4394517E+01
    1024    0.5581114E+00    0.2463312E+01    0.4413656E+01

Table 9.8  Second extreme β̃_N-singular values with condition numbers: L_N, b = -(5π/4)^2, s = 5.

    N       2nd min. β̃-sing. 2nd max. β̃-sing. condition no.
    8       0.8516556E+00    0.1486775E+01    0.1745747E+01
    16      0.7354986E+00    0.1919885E+01    0.2610318E+01
    32      0.7002824E+00    0.2177841E+01    0.3109946E+01
    64      0.6906184E+00    0.2318123E+01    0.3356590E+01
    128     0.6881112E+00    0.2391556E+01    0.3475537E+01
    256     0.6874753E+00    0.2429167E+01    0.3533460E+01
    512     0.6873154E+00    0.2448205E+01    0.3561981E+01
    1024    0.6872753E+00    0.2457783E+01    0.3576126E+01

These problems are representative of other problems we tested, including problems with pure Dirichlet boundary conditions.

We now turn our attention to two-dimensional problems. Let Ω be the square [-1, 1] x [-1, 1] and consider a uniformly elliptic operator given by

(9.6a)    Au := -[u_{xx} + u_{yy}] + a_1(u_x + u_y) + a_0 u    in Ω,

with boundary conditions

(9.6b)    u = 0 on Γ_0, where ∂Ω = Γ_0,

and a_0 < 0. We assume that A is an invertible operator, but not necessarily definite. Let {A_{N,M}} be a family of spectral collocation discretizations of the operator A based on the Legendre-Gauss-Lobatto (LGL) points. Consider the systems of linear equations

(9.7)    Â_{N,M} U = F,

which arise in the numerical solution of the boundary value problem Au = f using these spectral collocation discretizations and the Lagrange basis {φ_{ij}(x, y)} of the polynomial space P^0_{N,M} (see Section 2 for a complete discussion of notation, etc.). Let β̃_{N,M} be the stiffness matrix of the finite element discretization. We precondition (9.7) as

(9.8)    β̃_{N,M}^{-1} W_{N,M} Â_{N,M} U = β̃_{N,M}^{-1} W_{N,M} F,

where W_{N,M} is the diagonal matrix of the quadrature weights ω_k ω̂_j associated with the Gauss-Lobatto quadrature. Here β̃_{N,M} is the stiffness matrix of a symmetric positive definite operator of the form

(9.9a)    Bv := -[v_{xx} + v_{yy}] + b v    in Ω,

with boundary conditions

(9.9b)    v = 0 on Γ_0, where ∂Ω = Γ_0.

The finite element space employed here is the space of continuous piecewise bilinear functions, V^0_{N,M}, with the basis being the tensor product of the one-dimensional "hat" functions.

One can show that the eigenvalues of the operator in (9.6) are given by

(9.10)    λ_{k,j} + a_0,

where

(9.11)    λ_{k,j} = (1/4) ( 2 a_1^2 + π^2 (j^2 + k^2) ),

and k, j = 1, 2, .... Clearly, A has negative eigenvalues for those values of k, j such that

(9.12)    |λ_{k,j}| < |a_0|.

We report computational results for two cases which are representative of those which have negative eigenvalues (i.e., are indefinite) but are nonsingular. For the case a_0 = -34.05 and a_1 = 6, the operator A has 3 negative eigenvalues, and its eigenvalue of smallest magnitude is 3.68920880217871527. For a_0 = -22.1725 and a_1 = 3.5, A has 3 negative eigenvalues, and its eigenvalue of smallest magnitude is 3.69170880217871655. For comparison, we show the number of negative eigenvalues and the minimum eigenvalue (in absolute value) of A_{N,N}, the pseudospectral approximation to A, in Tables 9.9 and 9.10. (In our numerical examples, we take N = M.) This comparison serves as a measure of how good an approximation A_{N,N} is to A; a small check of the counts for the continuous operator appears in the sketch below.
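The eigenvalue counts quoted above for the continuous operator follow directly from (9.10)-(9.11). The small script below is illustrative only (the truncation kmax is ours); it reproduces the three negative eigenvalues and the eigenvalues of smallest magnitude for both parameter sets.

```python
import numpy as np

def negative_count_and_min_abs(a0, a1, kmax=50):
    """Eigenvalues lambda_{k,j} + a0 of the operator in (9.6), with
    lambda_{k,j} = (1/4)*(2*a1**2 + pi**2*(j**2 + k**2)) as in (9.10)-(9.11)."""
    k = np.arange(1, kmax + 1)
    lam = 0.25 * (2.0 * a1**2 + np.pi**2 * (k[:, None]**2 + k[None, :]**2))
    eigs = (lam + a0).ravel()
    return int(np.sum(eigs < 0)), float(np.min(np.abs(eigs)))

print(negative_count_and_min_abs(-34.05, 6.0))    # (3, 3.6892088021787...)
print(negative_count_and_min_abs(-22.1725, 3.5))  # (3, 3.6917088021787...)
```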
Table 9.9  Number of negative eigenvalues and minimum eigenvalue of A_{N,N}, with a_0 = -34.05, a_1 = 6.

    (N-1)^2    No. neg. eig.    Min. eig.
    49         3                0.3696983E+01
    81         3                0.3689242E+01
    121        3                0.3689209E+01
    169        3                0.3689209E+01
    225        3                0.3689209E+01
    289        3                0.3689209E+01
    361        3                0.3689209E+01
    441        3                0.3689209E+01
    529        3                0.3689209E+01
    625        3                0.3689209E+01
    729        3                0.3689209E+01
    841        3                0.3689209E+01
    961        3                0.3689209E+01
    1089       3                0.3689209E+01
    1225       3                0.3689209E+01
    1369       3                0.3689209E+01
    1521       3                0.3689209E+01
    2401       3                0.3689209E+01

Table 9.10  Number of negative eigenvalues and minimum eigenvalue of A_{N,N}, with a_0 = -22.1725, a_1 = 3.5.

    (N-1)^2    No. neg. eig.    Min. eig.
    49         3                0.3691910E+01
    81         3                0.3691710E+01
    121        3                0.3691709E+01
    169        3                0.3691709E+01
    225        3                0.3691709E+01
    289        3                0.3691709E+01
    361        3                0.3691709E+01
    441        3                0.3691709E+01
    529        3                0.3691709E+01
    625        3                0.3691709E+01
    729        3                0.3691709E+01
    841        3                0.3691709E+01
    961        3                0.3691709E+01
    1089       3                0.3691709E+01
    1225       3                0.3691709E+01
    1369       3                0.3691709E+01
    1521       3                0.3691709E+01
    2401       3                0.3691709E+01

The eigenvalues, singular values, and β̃_{N,N}-singular values of the preconditioned matrices L_{N,N} for these two cases are exhibited in Tables 9.11-9.16.

Table 9.11  Extreme eigenvalues with spectral condition numbers: L_{N,N}, a_0 = -34.05, a_1 = 6.

    N     min. eig.        max. eig.        condition no.
    8     0.1013053E+00    0.3208008E+01    0.3166672E+02
    10    0.1004212E+00    0.3951367E+01    0.3934793E+02
    12    0.1015879E+00    0.4495625E+01    0.4425354E+02
    14    0.1029258E+00    0.4901123E+01    0.4761803E+02
    16    0.1040905E+00    0.5211775E+01    0.5006967E+02
    18    0.1050387E+00    0.5456174E+01    0.5194440E+02
    20    0.1057977E+00    0.5652912E+01    0.5343133E+02
    22    0.1064051E+00    0.5814396E+01    0.5464395E+02
    24    0.1068945E+00    0.5949156E+01    0.5565446E+02
    26    0.1072923E+00    0.6063216E+01    0.5651118E+02
    28    0.1076189E+00    0.6160944E+01    0.5724778E+02
    30    0.1078896E+00    0.6245570E+01    0.5788851E+02
    32    0.1081161E+00    0.6319536E+01    0.5845137E+02
    34    0.1083073E+00    0.6384718E+01    0.5895003E+02
    36    0.1084699E+00    0.6442578E+01    0.5939506E+02
    38    0.1086094E+00    0.6494275E+01    0.5979480E+02
    40    0.1087297E+00    0.6540736E+01    0.6015592E+02
    50    0.1091398E+00    0.6716600E+01    0.6154125E+02

Table 9.12  Extreme singular values with condition numbers: L_{N,N}, a_0 = -34.05, a_1 = 6.

    N     min. sing.       max. sing.       condition no.
    8     0.1271260E-03    0.4391974E+01    0.3454819E+05
    10    0.1190179E-03    0.5107123E+01    0.4291056E+05
    12    0.1138260E-03    0.5566465E+01    0.4890330E+05
    14    0.1107207E-03    0.5883391E+01    0.5313724E+05
    16    0.1087129E-03    0.6113238E+01    0.5623286E+05
    18    0.1073385E-03    0.6286398E+01    0.5856611E+05
    20    0.1063557E-03    0.6420884E+01    0.6037182E+05
    22    0.1056283E-03    0.6527970E+01    0.6180136E+05
    24    0.1050747E-03    0.6615018E+01    0.6295541E+05
    26    0.1046435E-03    0.6687020E+01    0.6390288E+05
    28    0.1043011E-03    0.6747467E+01    0.6469222E+05
    30    0.1040246E-03    0.6798866E+01    0.6535827E+05
    32    0.1037981E-03    0.6843058E+01    0.6592663E+05
    34    0.1036102E-03    0.6881427E+01    0.6641649E+05
    36    0.1034527E-03    0.6915028E+01    0.6684243E+05
    38    0.1033192E-03    0.6944680E+01    0.6721575E+05
    40    0.1032052E-03    0.6971026E+01    0.6754528E+05
    50    0.1028247E-03    0.7068096E+01    0.6873930E+05

Table 9.13  Extreme β̃_{N,N}-singular values with condition numbers: L_{N,N}, a_0 = -34.05, a_1 = 6.

    N     min. β̃-sing.     max. β̃-sing.     condition no.
    8     0.1278167E-03    0.4386774E+01    0.3432082E+05
    10    0.1198081E-03    0.5093922E+01    0.4251733E+05
    12    0.1146689E-03    0.5545806E+01    0.4836364E+05
    14    0.1115933E-03    0.5859062E+01    0.5250372E+05
    16    0.1096036E-03    0.6088021E+01    0.5554582E+05
    18    0.1082408E-03    0.6261710E+01    0.5784981E+05
    20    0.1072658E-03    0.6397332E+01    0.5964000E+05
    22    0.1065438E-03    0.6505755E+01    0.6106176E+05
    24    0.1059942E-03    0.6594157E+01    0.6221245E+05
    26    0.1055658E-03    0.6667451E+01    0.6315918E+05
    28    0.1052255E-03    0.6729095E+01    0.6394925E+05
    30    0.1049505E-03    0.6781588E+01    0.6461699E+05
    32    0.1047254E-03    0.6826776E+01    0.6518743E+05
    34    0.1045383E-03    0.6866049E+01    0.6567977E+05
    36    0.1043816E-03    0.6900471E+01    0.6610808E+05
    38    0.1042485E-03    0.6930868E+01    0.6648412E+05
    40    0.1041352E-03    0.6957894E+01    0.6681596E+05
    50    0.1037555E-03    0.7057599E+01    0.6802144E+05
Table 9.14  Extreme eigenvalues with spectral condition numbers: L_{N,N}, a_0 = -22.1725, a_1 = 3.5.

    N     min. eig.        max. eig.        condition no.
    8     0.1234644E+00    0.3862221E+01    0.3128205E+02
    10    0.1158532E+00    0.4573414E+01    0.3947592E+02
    12    0.1122258E+00    0.5040091E+01    0.4491026E+02
    14    0.1102098E+00    0.5374463E+01    0.4876575E+02
    16    0.1089700E+00    0.5626984E+01    0.5163792E+02
    18    0.1081510E+00    0.5824507E+01    0.5385533E+02
    20    0.1075804E+00    0.5983128E+01    0.5561539E+02
    22    0.1071663E+00    0.6113208E+01    0.5704411E+02
    24    0.1068559E+00    0.6221745E+01    0.5822557E+02
    26    0.1066169E+00    0.6313633E+01    0.5921794E+02
    28    0.1064289E+00    0.6392398E+01    0.6006264E+02
    30    0.1062782E+00    0.6460643E+01    0.6078993E+02
    32    0.1061555E+00    0.6520327E+01    0.6142243E+02
    34    0.1060542E+00    0.6572957E+01    0.6197732E+02
    36    0.1059697E+00    0.6619704E+01    0.6246791E+02
    38    0.1058983E+00    0.6661497E+01    0.6290467E+02
    40    0.1058375E+00    0.6699080E+01    0.6329591E+02
    50    0.1056357E+00    0.6841562E+01    0.6476561E+02

Table 9.15  Extreme singular values with condition numbers: L_{N,N}, a_0 = -22.1725, a_1 = 3.5.

    N     min. sing.       max. sing.       condition no.
    8     0.4261767E-02    0.4770060E+01    0.1119268E+04
    10    0.3974212E-02    0.5380431E+01    0.1353836E+04
    12    0.3818710E-02    0.5769969E+01    0.1510973E+04
    14    0.3724948E-02    0.6039779E+01    0.1621440E+04
    16    0.3663956E-02    0.6236708E+01    0.1702179E+04
    18    0.3622016E-02    0.6386079E+01    0.1763128E+04
    20    0.3591925E-02    0.6502862E+01    0.1810411E+04
    22    0.3569595E-02    0.6596437E+01    0.1847951E+04
    24    0.3552565E-02    0.6672956E+01    0.1878349E+04
    26    0.3539279E-02    0.6736602E+01    0.1903383E+04
    28    0.3528712E-02    0.6790313E+01    0.1924303E+04
    30    0.3520170E-02    0.6836207E+01    0.1942011E+04
    32    0.3513165E-02    0.6875847E+01    0.1957166E+04
    34    0.3507350E-02    0.6910412E+01    0.1970266E+04
    36    0.3502469E-02    0.6940802E+01    0.1981688E+04
    38    0.3498333E-02    0.6967722E+01    0.1991727E+04
    40    0.3494796E-02    0.6991726E+01    0.2000610E+04
    50    0.3482976E-02    0.7080935E+01    0.2033013E+04

Table 9.16  Extreme β̃_{N,N}-singular values with condition numbers: L_{N,N}, a_0 = -22.1725, a_1 = 3.5.

    N     min. β̃-sing.     max. β̃-sing.     condition no.
    8     0.4276759E-02    0.4766816E+01    0.1114586E+04
    10    0.3992227E-02    0.5373119E+01    0.1345895E+04
    12    0.3839581E-02    0.5759006E+01    0.1499905E+04
    14    0.3748074E-02    0.6026895E+01    0.1607998E+04
    16    0.3688794E-02    0.6223216E+01    0.1687060E+04
    18    0.3648152E-02    0.6372701E+01    0.1746830E+04
    20    0.3619053E-02    0.6489940E+01    0.1793270E+04
    22    0.3597493E-02    0.6584113E+01    0.1830195E+04
    24    0.3581070E-02    0.6661269E+01    0.1860134E+04
    26    0.3568268E-02    0.6725543E+01    0.1884820E+04
    28    0.3558094E-02    0.6779851E+01    0.1905473E+04
    30    0.3549873E-02    0.6826302E+01    0.1922970E+04
    32    0.3543136E-02    0.6866457E+01    0.1937961E+04
    34    0.3537544E-02    0.6901495E+01    0.1950929E+04
    36    0.3532852E-02    0.6932321E+01    0.1962245E+04
    38    0.3528876E-02    0.6959640E+01    0.1972197E+04
    40    0.3525478E-02    0.6984011E+01    0.1981011E+04
    50    0.3514122E-02    0.7074674E+01    0.2013212E+04

Additionally, we are concerned with the distribution of the β̃_{N,N}-singular values of the preconditioned matrix L_{N,N}; in particular, we are interested in their clustering according to Theorem 7.2. We computed the number of β̃_{N,N}-singular values, as well as the number of singular values, outside the interval given in Theorem 7.2 and report the results in Tables 9.17-9.20. The columns entitled "lower" and "upper" are the computed lower and upper bounds for the clustering interval given by Theorem 7.2. These values are computed by first computing the eigenvalues of β̃_{N,M}^{-1} W_{N,M} B̂_{N,M}. The numerical results on the β̃_{N,N}-singular values confirm the results of Theorem 7.2. Although we do not have a theoretical justification for it, the numerical results reported in these tables suggest that the singular values behave similarly to the β̃_{N,N}-singular values; a sketch of the counting procedure follows.
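The counts reported in Tables 9.17-9.20 can be reproduced along the following lines. The sketch below is only schematic: purely for illustration, it takes the endpoints of the clustering interval to be the extreme eigenvalues of β̃_{N,M}^{-1} W_{N,M} B̂_{N,M}, whereas the precise interval of Theorem 7.2 is the one defined in Section 7; the matrices Ahat, Bhat (the collocation matrix of B), W and Btil are assumed given.

```python
import numpy as np
from scipy.linalg import eigvals, svdvals, solve

def count_outside_interval(Ahat, Bhat, W, Btil):
    """Count the singular values of L = Btil^{-1} W Ahat that fall outside a
    clustering interval.  Here the interval endpoints are taken, purely for
    illustration, as the extreme eigenvalues of Btil^{-1} W Bhat."""
    mu = eigvals(solve(Btil, W @ Bhat)).real    # spectrum of Btil^{-1} W Bhat
    lower, upper = mu.min(), mu.max()
    sv = svdvals(solve(Btil, W @ Ahat))         # singular values of L_{N,M}
    outside = int(np.sum((sv < lower) | (sv > upper)))
    return outside, lower, upper
```

The same count with the β̃_{N,M}-singular values in place of the ordinary singular values gives the companion tables.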
Table 9.17  Number of β̃_{N,N}-singular values outside the interval: L_{N,N}, a_0 = -34.05, a_1 = 6.

    N     No. outside    lower            upper
    10    5              0.4282405E+00    0.1559045E+02
    12    5              0.4149416E+00    0.1614158E+02
    14    5              0.4062323E+00    0.1658141E+02
    16    5              0.4001191E+00    0.1693874E+02
    18    5              0.3956103E+00    0.1723368E+02
    20    5              0.3921580E+00    0.1748059E+02
    22    5              0.3894362E+00    0.1768995E+02
    24    6              0.3872393E+00    0.1786949E+02
    26    6              0.3854311E+00    0.1802501E+02
    28    6              0.3839187E+00    0.1816095E+02
    30    6              0.3826361E+00    0.1828072E+02
    32    6              0.3815353E+00    0.1838700E+02
    34    6              0.3805809E+00    0.1848192E+02
    36    6              0.3797459E+00    0.1856720E+02
    38    6              0.3790094E+00    0.1864421E+02
    40    6              0.3783552E+00    0.1871409E+02
    50    6              0.3759452E+00    0.1898452E+02

Table 9.18  Number of singular values outside the interval: L_{N,N}, a_0 = -34.05, a_1 = 6.

    N     No. outside    lower            upper
    10    5              0.4282405E+00    0.1559045E+02
    12    5              0.4149416E+00    0.1614158E+02
    14    5              0.4062323E+00    0.1658141E+02
    16    5              0.4001191E+00    0.1693874E+02
    18    6              0.3956103E+00    0.1723368E+02
    20    6              0.3921580E+00    0.1748059E+02
    22    6              0.3894362E+00    0.1768995E+02
    24    6              0.3872393E+00    0.1786949E+02
    26    6              0.3854311E+00    0.1802501E+02
    28    6              0.3839187E+00    0.1816095E+02
    30    6              0.3826361E+00    0.1828072E+02
    32    6              0.3815353E+00    0.1838700E+02
    34    6              0.3805809E+00    0.1848192E+02
    36    6              0.3797459E+00    0.1856720E+02
    38    6              0.3790094E+00    0.1864421E+02
    40    6              0.3783552E+00    0.1871409E+02
    50    6              0.3759452E+00    0.1898452E+02

Table 9.19  Number of β̃_{N,N}-singular values outside the interval: L_{N,N}, a_0 = -22.1725, a_1 = 3.5.

    N     No. outside    lower            upper
    10    4              0.4309650E+00    0.1531501E+02
    12    4              0.4167516E+00    0.1594577E+02
    14    4              0.4075116E+00    0.1643620E+02
    16    4              0.4010664E+00    0.1682733E+02
    18    4              0.3963373E+00    0.1714577E+02
    20    4              0.3927322E+00    0.1740963E+02
    22    4              0.3899004E+00    0.1763156E+02
    24    4              0.3876218E+00    0.1782067E+02
    26    4              0.3857515E+00    0.1798362E+02
    28    4              0.3841908E+00    0.1812544E+02
    30    4              0.3828698E+00    0.1824993E+02
    32    4              0.3817382E+00    0.1836007E+02
    34    4              0.3807586E+00    0.1845817E+02
    36    4              0.3799028E+00    0.1854610E+02
    38    4              0.3791489E+00    0.1862535E+02
    40    4              0.3784801E+00    0.1869713E+02
    50    4              0.3760224E+00    0.1897385E+02

Table 9.20  Number of singular values outside the interval: L_{N,N}, a_0 = -22.1725, a_1 = 3.5.

    N     No. outside    lower            upper
    10    4              0.4309650E+00    0.1531501E+02
    12    4              0.4167516E+00    0.1594577E+02
    14    4              0.4075116E+00    0.1643620E+02
    16    4              0.4010664E+00    0.1682733E+02
    18    5              0.3963373E+00    0.1714577E+02
    20    5              0.3927322E+00    0.1740963E+02
    22    5              0.3899004E+00    0.1763156E+02
    24    5              0.3876218E+00    0.1782067E+02
    26    5              0.3857515E+00    0.1798362E+02
    28    5              0.3841908E+00    0.1812544E+02
    30    5              0.3828698E+00    0.1824993E+02
    32    5              0.3817382E+00    0.1836007E+02
    34    5              0.3807586E+00    0.1845817E+02
    36    5              0.3799028E+00    0.1854610E+02
    38    5              0.3791489E+00    0.1862535E+02
    40    5              0.3784801E+00    0.1869713E+02
    50    5              0.3760224E+00    0.1897385E+02

The clustering of the β̃-singular values as well as of the singular values may be seen graphically in Figures 9.1 to 9.4.

[Figure 9.1  β̃-singular value distribution of L_{N,N}; a_0 = -22.1725, a_1 = 3.5. β̃-singular values plotted against N.]
[Figure 9.2  Singular value distribution of L_{N,N}; a_0 = -22.1725, a_1 = 3.5. Singular values plotted against N.]
[Figure 9.3  β̃-singular value distribution of L_{N,N}; a_0 = -34.05, a_1 = 6.]
[Figure 9.4  Singular value distribution of L_{N,N}; a_0 = -34.05, a_1 = 6.]
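The figures themselves are not reproduced here. A minimal matplotlib sketch of this kind of distribution plot (every β̃-singular or singular value of L_{N,N} scattered against N, on the 0-8 vertical range used in the original figures) is, for illustration:

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_distribution(Ns, values_by_N, ylabel="beta-singular values"):
    """Scatter all (beta-)singular values of L_{N,N} against N, as in
    Figures 9.1-9.4, to visualize the clustering as N grows."""
    fig, ax = plt.subplots()
    for N, vals in zip(Ns, values_by_N):
        ax.plot(np.full(len(vals), N), vals, "k.", markersize=3)
    ax.set_xlabel("N")
    ax.set_ylabel(ylabel)
    ax.set_ylim(0, 8)     # vertical range used in the published plots
    return fig
```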
In the case where a_0 > 0, the operator A has all positive eigenvalues, and so does its pseudospectral approximation A_{N,N} for all values of N ≥ 8 that we tested. The eigenvalues, singular values, and β̃_{N,N}-singular values of the preconditioned matrix L_{N,N} are exhibited in Tables 9.21-9.23. These results are representative of problems that we tested having all positive eigenvalues. The eigenvalues, singular values, and β̃_{N,N}-singular values are bounded independently of N and are clustered so that they yield quite small condition numbers.

Table 9.21  Extreme eigenvalues with spectral condition numbers: L_{N,N}, a_0 = (5π/4)^2, a_1 = 10.

    N     min. eig.        max. eig.        condition no.
    8     0.1338860E+01    0.5668794E+01    0.4234045E+01
    10    0.1342297E+01    0.5807957E+01    0.4326879E+01
    12    0.1355080E+01    0.5891385E+01    0.4347628E+01
    14    0.1369836E+01    0.5960662E+01    0.4351368E+01
    16    0.1383634E+01    0.6026450E+01    0.4355522E+01
    18    0.1395368E+01    0.6090501E+01    0.4364799E+01
    20    0.1401319E+01    0.6152401E+01    0.4390434E+01
    22    0.1373448E+01    0.6211488E+01    0.4522550E+01
    24    0.1340351E+01    0.6267309E+01    0.4675872E+01
    26    0.1312209E+01    0.6319664E+01    0.4816051E+01
    28    0.1288272E+01    0.6368538E+01    0.4943474E+01
    30    0.1267689E+01    0.6414039E+01    0.5059630E+01
    32    0.1249812E+01    0.6456343E+01    0.5165852E+01
    34    0.1234145E+01    0.6495663E+01    0.5263288E+01
    36    0.1220307E+01    0.6532220E+01    0.5352931E+01
    38    0.1207998E+01    0.6566236E+01    0.5435636E+01
    40    0.1196979E+01    0.6597924E+01    0.5512145E+01

Table 9.22  Extreme singular values with condition numbers: L_{N,N}, a_0 = (5π/4)^2, a_1 = 10.

    N     min. sing.       max. sing.       condition no.
    8     0.1096162E+01    0.6892226E+01    0.6287598E+01
    10    0.1062761E+01    0.6899788E+01    0.6492321E+01
    12    0.1037516E+01    0.6908768E+01    0.6658948E+01
    14    0.1007030E+01    0.6921852E+01    0.6873532E+01
    16    0.9755505E+00    0.6938240E+01    0.7112128E+01
    18    0.9505292E+00    0.6956526E+01    0.7318582E+01
    20    0.9313886E+00    0.6975512E+01    0.7489368E+01
    22    0.9166251E+00    0.6994384E+01    0.7630583E+01
    24    0.9050569E+00    0.7012650E+01    0.7748297E+01
    26    0.8958449E+00    0.7030044E+01    0.7847390E+01
    28    0.8883985E+00    0.7046444E+01    0.7931625E+01
    30    0.8822976E+00    0.7061816E+01    0.8003894E+01
    32    0.8772383E+00    0.7076175E+01    0.8066422E+01
    34    0.8729974E+00    0.7089566E+01    0.8120947E+01
    36    0.8694081E+00    0.7102047E+01    0.8168830E+01
    38    0.8663437E+00    0.7113682E+01    0.8211154E+01
    40    0.8637069E+00    0.7124535E+01    0.8248788E+01

Table 9.23  Extreme β̃_{N,N}-singular values with condition numbers: L_{N,N}, a_0 = (5π/4)^2, a_1 = 10.

    N     min. β̃-sing.     max. β̃-sing.     condition no.
    8     0.1106446E+01    0.6886236E+01    0.6223742E+01
    10    0.1075918E+01    0.6888207E+01    0.6402166E+01
    12    0.1058669E+01    0.6889533E+01    0.6507733E+01
    14    0.1047784E+01    0.6897600E+01    0.6583035E+01
    16    0.1040359E+01    0.6911276E+01    0.6643164E+01
    18    0.1034990E+01    0.6928357E+01    0.6694127E+01
    20    0.1030933E+01    0.6947070E+01    0.6738626E+01
    22    0.1027758E+01    0.6966227E+01    0.6778083E+01
    24    0.1025203E+01    0.6985104E+01    0.6813383E+01
    26    0.1023102E+01    0.7003288E+01    0.6845151E+01
    28    0.1021340E+01    0.7020567E+01    0.6873875E+01
    30    0.1019841E+01    0.7036852E+01    0.6899950E+01
    32    0.1018548E+01    0.7052124E+01    0.6923706E+01
    34    0.1017420E+01    0.7066408E+01    0.6945422E+01
    36    0.1016426E+01    0.7079751E+01    0.6965338E+01
    38    0.1015544E+01    0.7092210E+01    0.6983659E+01
    40    0.1014754E+01    0.7103847E+01    0.7000561E+01

10. References

[A]    P.M. Anselone, Collectively Compact Operator Approximation Theory and Applications to Integral Equations, Prentice Hall, Englewood Cliffs, N.J. (1971).
[BM]   C. Bernardi and Y. Maday, "Polynomial interpolation results in Sobolev spaces", Jour. Comp. Appl. Math. 43 (1992).
[BP]   J.H. Bramble and J.E. Pasciak, "Preconditioned iterative methods for nonselfadjoint or indefinite elliptic boundary value problems", in Unification of Finite Elements, H. Kardestuncer (ed.), Elsevier, North-Holland, Amsterdam, 167-184 (1984).
[CG]   C. Carlenzoli and P. Gervasio, "Effective numerical algorithms for the solution of algebraic systems arising in spectral methods", University of Minnesota Supercomputer Institute Research Report UMSI 91/137 (1991).
[CHQZ] C. Canuto, M.Y. Hussaini, A. Quarteroni, and T.A. Zang, Spectral Methods in Fluid Dynamics, Springer-Verlag, New York (1988).
[CQ]   C. Canuto and A. Quarteroni, "Approximation results for orthogonal polynomials in Sobolev spaces", Math. Comp. 38, 67-87 (1982).
[DM]   M. Deville and C. Mund, "Finite element preconditioning for pseudospectral solutions of elliptic problems", SIAM J. Stat. Comp. 2, 311-342 (1990).
[FMP]  V. Faber, T.A. Manteuffel, and S.V. Parter, "On the equivalences of operators and the implications to preconditioned iterative methods for elliptic problems", Advances in Applied Mathematics 11, 109-163 (1990).
[G]    C.I. Goldstein, "Spectral distribution of preconditioned elliptic operators and convergence estimates for iterative methods", Numer. Funct. Anal. and Optimiz. 14, 45-68 (1993).
[GMP]  C.I. Goldstein, T.A. Manteuffel, and S.V. Parter, "Preconditioning and boundary conditions without H2 estimates: L2 condition numbers and the distribution of the singular values", to appear in SIAM J. Num. Anal.
[J]    C. Johnson, Numerical Solution of Partial Differential Equations by the Finite Element Method, Cambridge University Press, New York (1987).
[M]    Y. Maday, "Resultats d'approximation optimaux pour les operateurs d'interpolation polynomials", C.R. Acad. Sci. Paris 312, serie 1, 705-710 (1991).
[MP]   T.A. Manteuffel and S.V. Parter, "Preconditioning and boundary conditions", SIAM J. Numer. Anal. 27, 656-694 (1990).
[N]    P.G. Nevai, Orthogonal Polynomials, Memoirs of the AMS, Amer. Math. Soc., Providence, R.I. (1979).
[Or]   S.A. Orszag, "Spectral methods for problems in complex geometries", J. Comp. Physics 37, 70-92 (1980).
[P1]   S.V. Parter, "On the eigenvalues of second order elliptic difference operators", SIAM J. Numer. Anal. 19, 518-530 (1982).
[PW]   S.V. Parter and S.-P. Wong, "Preconditioning second-order elliptic operators: Condition numbers and the distribution of the singular values", Journal of Scientific Computation 6, 129-157 (1991).
[QZ]   A. Quarteroni and E. Zampieri, "Finite element preconditioning for Legendre spectral collocation approximations to elliptic equations and systems", SIAM J. Num. Anal. 29, 917-936 (1992).
[S]    G. Szego, Orthogonal Polynomials, AMS Colloquium Publications XXII, fourth ed., Amer. Math. Soc. (1955).
[V]    H.A. Van der Vorst, "Bi-CGSTAB: A fast smoothly converging variant of Bi-CG for the solution of non-symmetric linear systems", to appear in SIAM J. Scient. and Stat. Comp.
[W1]   S.-P. Wong, "Preconditioning nonconforming finite element methods for treating Dirichlet boundary conditions. I", Numer. Math. 62, 391-411 (1992).
[W2]   S.-P. Wong, "Preconditioning nonconforming finite element methods for treating Dirichlet boundary conditions. II", Numer. Math. 62, 413-437 (1992).
