
Structure of the Spectral Transformation

The use of spectral transformations is closely related to the inverse power method (inverse iteration). A spectral transformation is typically designed to enhance convergence to eigenvalues near a selected point $\sigma$ in the complex plane. For example, if one were interested in computing the smallest eigenvalues of a symmetric positive definite matrix, then $\sigma = 0$, giving the operator ${\bf S} = {\bf A}^{-1}{\bf M}$, would be a reasonable choice.

A convenient way to provide such a spectral transformation is to note that
\begin{displaymath}
{\bf A}{\bf x} = {\bf M}{\bf x}\lambda
\quad \Longleftrightarrow \quad
({\bf A} - \sigma{\bf M}){\bf x} = (\lambda - \sigma){\bf M}{\bf x}.
\end{displaymath}
Thus
\begin{displaymath}
({\bf A} - \sigma{\bf M})^{-1}{\bf M}{\bf x} = {\bf x}\nu,
\quad \mbox{where } \nu = \frac{1}{\lambda - \sigma}.
\end{displaymath}
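The transformation just derived is easy to check numerically. The following sketch builds a small random symmetric-definite pencil (the matrices and the shift value are illustrative, not taken from the text), forms ${\bf S} = ({\bf A}-\sigma{\bf M})^{-1}{\bf M}$ densely, and compares its eigenvalues with $1/(\lambda - \sigma)$:

```python
import numpy as np
from scipy.linalg import eig, eigh, solve

rng = np.random.default_rng(0)

# Hypothetical test pencil (A, M): A symmetric, M symmetric positive definite.
n = 6
X = rng.standard_normal((n, n))
A = X + X.T
Y = rng.standard_normal((n, n))
M = Y @ Y.T + n * np.eye(n)

sigma = 0.5  # illustrative shift, not a value taken from the text

# Apply the shift-invert spectral transformation S = (A - sigma*M)^{-1} M.
S = solve(A - sigma * M, M)
nu = np.sort(eig(S)[0].real)   # eigenvalues of S (real for this pencil)

# Generalized eigenvalues of A x = lambda M x ...
lam = eigh(A, M, eigvals_only=True)

# ... map onto the spectrum of S via nu = 1/(lambda - sigma).
assert np.allclose(nu, np.sort(1.0 / (lam - sigma)))
```

Forming ${\bf S}$ densely is only sensible for a toy example; in practice one factors ${\bf A} - \sigma{\bf M}$ once and applies ${\bf S}$ to vectors.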
A moment's reflection will reveal the advantage of such a spectral transformation. Eigenvalues $\lambda$ that are near $\sigma$ will be transformed to eigenvalues $\nu$ at the extremes of the transformed spectrum, typically well separated from the rest. The corresponding eigenvectors remain unchanged. Perhaps more important is the fact that eigenvalues far from the shift $\sigma$ are mapped into a tight cluster in the interior of the transformed spectrum. We illustrate this by showing the transformed spectrum of the matrix ${\bf A}$ from Figure 4.8 with a shift $\sigma$. Again, we show the total filter polynomial $\psi$ that was constructed during an IRA iteration on the transformed operator ${\bf S}$. Here we compute the six eigenvalues of ${\bf S}$ of largest magnitude. These will transform back to the eigenvalues of ${\bf A}$ nearest to $\sigma$ through the formula $\lambda = \sigma + 1/\nu$. The surface shown in Figure 4.9 is again $\vert\psi\vert$, but plotted over a region containing the spectrum of ${\bf S}$. Here, $\psi$ is the product of all of the filter polynomials constructed during the course of the iteration. Since the extreme eigenvalues are well separated, the iteration converges much faster, and the degree of $\psi$ is only 45 in this case. Here, the ``+" signs are the eigenvalues of ${\bf S}$ in the complex plane and the contours are the level curves of $\vert\psi\vert$. The circled plus signs are the converged eigenvalues. The figure illustrates how much easier it is to isolate the desired eigenvalues after a spectral transformation.


  
Figure 4.9: Total Filter Polynomial with Spectral Transformation

If ${\bf A}$ is symmetric then one can maintain symmetry in the Arnoldi/Lanczos process by taking the inner product to be
\begin{displaymath}
\langle {\bf x},{\bf y}\rangle \equiv {\bf x}^T{\bf M}{\bf y}.
\end{displaymath}
It is easy to verify that the operator ${\bf S} = ({\bf A} - \sigma{\bf M})^{-1}{\bf M}$ is symmetric with respect to this inner product if ${\bf A}$ is symmetric. In the Arnoldi/Lanczos process the matrix-vector product ${\bf w}\leftarrow{\bf A}{\bf v}$ is replaced by ${\bf w}\leftarrow({\bf A} - \sigma{\bf M})^{-1}{\bf M}{\bf v}$, and the step ${\bf h}\leftarrow{\bf V}^T{\bf w}$ is replaced by ${\bf h}\leftarrow{\bf V}^T{\bf M}{\bf w}$. If ${\bf A}$ is symmetric then the projected matrix ${\bf H}$ is symmetric and tridiagonal. Moreover, this process is well defined even when ${\bf M}$ is singular, and this can have important consequences even if ${\bf A}$ is non-symmetric. We shall refer to this process as the ${\bf M}$-Arnoldi process.
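As a concrete illustration, here is a minimal sketch of the ${\bf M}$-Arnoldi process as just described: the only changes to standard Arnoldi are the shift-invert operator application and the use of ${\bf M}$-inner products. The function name and the test pencil are hypothetical, and a production code would also handle breakdown and deflation.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def m_arnoldi(A, M, sigma, v0, k):
    """Sketch of the M-Arnoldi process for S = (A - sigma*M)^{-1} M:
    builds V with V^T M V = I and the projected matrix H = V^T M S V."""
    n = len(v0)
    lu = lu_factor(A - sigma * M)        # factor once, reuse every step
    V = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    V[:, 0] = v0 / np.sqrt(v0 @ M @ v0)  # normalize in the M-norm
    for j in range(k):
        w = lu_solve(lu, M @ V[:, j])    # w <- (A - sigma*M)^{-1} M v
        h = V[:, :j + 1].T @ (M @ w)     # h <- V^T M w  (M-inner products)
        w -= V[:, :j + 1] @ h
        c = V[:, :j + 1].T @ (M @ w)     # one reorthogonalization pass
        w -= V[:, :j + 1] @ c
        h += c
        beta = np.sqrt(w @ M @ w)        # M-norm of the residual
        H[:j + 1, j] = h
        H[j + 1, j] = beta
        V[:, j + 1] = w / beta           # no breakdown handling (sketch only)
    return V, H

# Demo on a hypothetical symmetric-definite pencil: V comes out M-orthonormal
# and H (apart from its last row) is symmetric and tridiagonal.
rng = np.random.default_rng(1)
n = 20
X = rng.standard_normal((n, n)); A = X + X.T
Y = rng.standard_normal((n, n)); M = Y @ Y.T + n * np.eye(n)
V, H = m_arnoldi(A, M, 0.3, rng.standard_normal(n), 8)
```

Note that only solves with ${\bf A} - \sigma{\bf M}$ and products with ${\bf M}$ are needed; the sketch never forms ${\bf S}$ explicitly.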

If ${\bf M}$ is singular then the operator ${\bf S} = ({\bf A} - \sigma{\bf M})^{-1}{\bf M}$ has a non-trivial null space, the bilinear function $\langle {\bf x},{\bf y}\rangle = {\bf x}^T{\bf M}{\bf y}$ is a semi-inner product, and $\Vert{\bf x}\Vert_{\bf M} = \sqrt{{\bf x}^T{\bf M}{\bf x}}$ is a semi-norm. Since ${\bf A} - \sigma{\bf M}$ is assumed to be nonsingular, ${\rm Null}({\bf S}) = {\rm Null}({\bf M})$. Vectors in ${\rm Null}({\bf M})$ are generalized eigenvectors corresponding to infinite eigenvalues. Typically, one is only interested in the finite eigenvalues of the pencil $({\bf A},{\bf M})$, and these correspond to the non-zero eigenvalues of ${\bf S}$. The invariant subspace corresponding to these non-zero eigenvalues is easily corrupted by components of vectors from ${\rm Null}({\bf M})$ during the Arnoldi process. However, the ${\bf M}$-Arnoldi process with some refinements can provide a solution.

In order to better understand the situation, it is convenient to note that since ${\bf M}$ is positive semi-definite, there is an orthogonal matrix ${\bf Q}$ such that
\begin{displaymath}
{\bf Q}{\bf M}{\bf Q}^T =
\left[\begin{array}{cc} {\bf D} & 0 \\ 0 & 0 \end{array}\right],
\end{displaymath}
where ${\bf D}$ is a positive definite diagonal matrix of order n, say. Thus
\begin{displaymath}
\hat{\bf S} \equiv {\bf Q}{\bf S}{\bf Q}^T
= {\bf Q}({\bf A} - \sigma{\bf M})^{-1}{\bf Q}^T
\left[\begin{array}{cc} {\bf D} & 0 \\ 0 & 0 \end{array}\right]
= \left[\begin{array}{cc} {\bf S}_{11} & 0 \\ {\bf S}_{21} & 0 \end{array}\right],
\end{displaymath}

where ${\bf S}_{11}$ is a square matrix of order n and ${\bf S}_{21}$ is an $m \times n$ matrix, the original ${\bf S}$ being of order m+n. Observe now that a non-zero eigenvalue $\theta$ of $\hat{\bf S}$ satisfies $\hat{\bf S}\hat{\bf x} = \hat{\bf x}\theta$, i.e.
\begin{displaymath}
\left[\begin{array}{cc} {\bf S}_{11} & 0 \\ {\bf S}_{21} & 0 \end{array}\right]
\left[\begin{array}{c} {\bf x}_1 \\ {\bf x}_2 \end{array}\right]
= \left[\begin{array}{c} {\bf x}_1 \\ {\bf x}_2 \end{array}\right]\theta,
\end{displaymath}
so that ${\bf S}_{11}{\bf x}_1 = {\bf x}_1\theta$ and ${\bf x}_2 = \frac{1}{\theta}{\bf S}_{21}{\bf x}_1$ must hold. Note also that for any such eigenvector $\hat{\bf x}$, the leading vector ${\bf x}_1$ must be an eigenvector of ${\bf S}_{11}$. Since $\hat{\bf S}$ is block triangular, $\sigma(\hat{\bf S}) = \sigma({\bf S}_{11}) \cup \{0\}$, with zero repeated at least m times. Assuming ${\bf S}_{21}$ has full rank, it follows that if ${\bf S}_{11}$ has a zero eigenvalue then there is no corresponding eigenvector of $\hat{\bf S}$ (since ${\bf S}_{11}{\bf x}_1 = 0$ together with ${\bf S}_{21}{\bf x}_1 = 0$ would imply ${\bf x}_1 = 0$). Thus, if zero is an eigenvalue of ${\bf S}_{11}$ with algebraic multiplicity $m_0$, then zero is an eigenvalue of $\hat{\bf S}$ of algebraic multiplicity $m + m_0$ and with geometric multiplicity m. Of course, since $\hat{\bf S}$ is similar to ${\bf S}$, all of these statements hold for ${\bf S}$ as well.
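The block structure and the multiplicity count above can be verified numerically. This sketch (a hypothetical small pencil with a rank-deficient ${\bf M}$; all sizes and the shift are illustrative) builds ${\bf Q}$ from an eigendecomposition of ${\bf M}$, checks that the trailing columns of $\hat{\bf S} = {\bf Q}{\bf S}{\bf Q}^T$ vanish, and confirms that zero is an eigenvalue of ${\bf S}$ of multiplicity m:

```python
import numpy as np
from scipy.linalg import eig, eigh, solve

rng = np.random.default_rng(2)

# Hypothetical pencil of order n + m: M positive semi-definite of rank n,
# so Null(M) has dimension m (these sizes are illustrative only).
n, m = 4, 2
Z = rng.standard_normal((n + m, n))
M = Z @ Z.T
X = rng.standard_normal((n + m, n + m))
A = X + X.T
sigma = 0.2  # illustrative shift with A - sigma*M nonsingular

# Orthogonal Q with Q M Q^T = diag(D, 0), D positive definite of order n.
d, U = eigh(M)      # ascending eigenvalues: the m (near-)zeros come first
Q = U[:, ::-1].T    # reorder so the positive eigenvalues sit on top

S = solve(A - sigma * M, M)   # S = (A - sigma*M)^{-1} M
S_hat = Q @ S @ Q.T

# Trailing m columns vanish: S_hat = [[S11, 0], [S21, 0]].
assert np.allclose(S_hat[:, n:], 0, atol=1e-8)

# Zero is an eigenvalue of S (similar to S_hat) with multiplicity m, while
# rank(S_hat) = n, i.e. the geometric multiplicity of zero is m.
assert np.sum(np.abs(eig(S)[0]) < 1e-8) == m
assert np.linalg.matrix_rank(S_hat, tol=1e-8) == n
```

Here ${\bf S}_{11}$ is generically nonsingular, so the algebraic and geometric multiplicities of the zero eigenvalue coincide at m in this example.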


Chao Yang
11/7/1997