# On the Use of a Sparse Direct Solver in a Projection Method for Generalized Eigenvalue Problems Using Numerical Integration

Takamitsu Watanabe and Yusaku Yamamoto
Dept. of Computational Science & Engineering, Nagoya University

## Outline

- Background
- Objective of our study
- Projection method for generalized eigenvalue problems using numerical integration
- Application of the sparse direct solver
- Numerical results
- Conclusion
## Background

Generalized eigenvalue problems arise in quantum chemistry and structural engineering:

Given $A, B \in \mathbb{R}^{n \times n}$, find $\lambda \in \mathbb{R}$ and $x \neq 0$ such that $Ax = \lambda Bx$.

Problem characteristics:

- $A$ and $B$ are large and sparse.
- $A$ is real symmetric and $B$ is s.p.d., so all eigenvalues are real.
- Eigenvalues in a specified interval are often needed (e.g., around the HOMO and LUMO levels).

(Figure: eigenvalues on the real axis, with the specified interval around the HOMO/LUMO levels.)
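As a point of reference, a small dense instance of this problem can be solved directly. The sketch below uses made-up matrices (not from the applications above) and reduces $Ax = \lambda Bx$ to a standard symmetric problem via the Cholesky factor of $B$, confirming that the eigenvalues come out real:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
M = rng.standard_normal((n, n))
A = M + M.T                      # real symmetric
C = rng.standard_normal((n, n))
B = C @ C.T + n * np.eye(n)      # symmetric positive definite (s.p.d.)

# Reduce A x = lambda B x to a standard symmetric problem via B = L L^T:
# (L^-1 A L^-T) y = lambda y, with x = L^-T y, so all eigenvalues are real.
L = np.linalg.cholesky(B)
Linv = np.linalg.inv(L)
lam, Y = np.linalg.eigh(Linv @ A @ Linv.T)
X = Linv.T @ Y

# Residual check: A x = lambda B x for each eigenpair.
for k in range(n):
    r = A @ X[:, k] - lam[k] * (B @ X[:, k])
    assert np.linalg.norm(r) < 1e-8
```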
## Background (cont'd)

A projection method using numerical integration:

Sakurai and Sugiura, "A projection method for generalized eigenvalue problems", J. Comput. Appl. Math. (2003)

- Reduce the original problem to a small generalized eigenvalue problem whose eigenvalues lie in a specified region of the complex plane.
- By solving the small problem, the eigenvalues lying in the region are obtained.

(Figure: the original problem $Ax = \lambda Bx$ is mapped to a small generalized eigenvalue problem within the region.)

- The main part of the computation is solving multiple systems of linear simultaneous equations.
- The method is well suited for parallel computation.
## Objective of our study

Previous approach:

- Solve the linear simultaneous equations by an iterative method.
- The number of iterations needed for convergence differs from one system to another, decreasing parallel efficiency.

Our study:

- Solve the linear simultaneous equations by a sparse direct solver without pivoting.
- Load balance is improved, since the computational time is the same for all systems.
## Projection method for generalized eigenvalue problems using numerical integration

Suppose that $Ax = \lambda Bx$ has distinct eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_d$ and that we need $\lambda_1, \lambda_2, \ldots, \lambda_m$ ($m < d$) that lie inside a closed curve $\Gamma$.

Using two arbitrary complex vectors $u, v \in \mathbb{C}^n$, define the complex function

$$f(z) := u^{\mathrm{H}} (zB - A)^{-1} v.$$

Then $f(z)$ can be expanded as follows:

$$f(z) = \sum_{k=1}^{d} \frac{c_k}{z - \lambda_k} + g(z), \qquad c_k \in \mathbb{C},\quad g(z)\text{: polynomial in } z.$$

(Figure: $\lambda_1, \lambda_2, \ldots, \lambda_m$ inside $\Gamma$; $\lambda_{m+1}, \lambda_{m+2}, \ldots, \lambda_d$ outside.)
## Projection method for generalized eigenvalue problems using numerical integration (cont'd)

Further define the moments by

$$\mu_k := \frac{1}{2\pi i} \oint_{\Gamma} z^k f(z)\, dz, \qquad k = 0, 1, \ldots, 2m-1,$$

and two $m \times m$ Hankel matrices by

$$H_m := \left[\mu_{i+j-2}\right]_{i,j=1}^{m} =
\begin{pmatrix}
\mu_0 & \mu_1 & \cdots & \mu_{m-1} \\
\mu_1 & \mu_2 & \cdots & \mu_m \\
\vdots & \vdots & & \vdots \\
\mu_{m-1} & \mu_m & \cdots & \mu_{2m-2}
\end{pmatrix},$$

$$H_m^{<} := \left[\mu_{i+j-1}\right]_{i,j=1}^{m} =
\begin{pmatrix}
\mu_1 & \mu_2 & \cdots & \mu_m \\
\mu_2 & \mu_3 & \cdots & \mu_{m+1} \\
\vdots & \vdots & & \vdots \\
\mu_m & \mu_{m+1} & \cdots & \mu_{2m-1}
\end{pmatrix}.$$

Theorem: $\lambda_1, \lambda_2, \ldots, \lambda_m$ are the $m$ roots of $\det(H_m^{<} - \lambda H_m) = 0$.

The original problem $Ax = \lambda Bx$ has thus been reduced, through a contour integral, to the small generalized eigenvalue problem $H_m^{<} u = \lambda H_m u$.
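The reduction can be checked on a toy example. The sketch below uses made-up eigenvalues $\lambda_k$ and residues $c_k$ (and takes $g(z) = 0$), so the moments are exactly $\mu_k = \sum_j c_j \lambda_j^k$; forming the two Hankel matrices and solving the pencil recovers the $\lambda_k$:

```python
import numpy as np

# Made-up data: for f(z) = sum_j c_j / (z - lambda_j) with g(z) = 0,
# the k-th moment is mu_k = sum_j c_j * lambda_j**k.
lams = np.array([0.3, -0.5, 0.8])   # "wanted" eigenvalues (assumed)
c = np.array([1.0, 2.0, 0.5])       # nonzero residues (assumed)
m = len(lams)
mu = np.array([np.sum(c * lams**k) for k in range(2 * m)])

# Hankel matrices: H_m = [mu_{i+j}] and H_m^< = [mu_{i+j+1}] (0-based i, j).
H  = np.array([[mu[i + j]     for j in range(m)] for i in range(m)])
Hs = np.array([[mu[i + j + 1] for j in range(m)] for i in range(m)])

# Eigenvalues of the pencil H_m^< u = lambda H_m u reproduce lambda_1..lambda_m.
recovered = np.sort(np.linalg.eigvals(np.linalg.solve(H, Hs)).real)
assert np.allclose(recovered, np.sort(lams), atol=1e-8)
```

$H_m$ is nonsingular here because the $\lambda_j$ are distinct and the $c_j$ are nonzero (a Vandermonde-type factorization), which is what makes the pencil well defined.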
## Projection method for generalized eigenvalue problems using numerical integration (cont'd)

Computation of the moments $\mu_k$:

- Set the path of integration $\Gamma$ to a circle with center $\gamma$ and radius $r$.
- Approximate the integral by the $N$-point trapezoidal rule. With the quadrature points $\omega_j = \gamma + r e^{2\pi i j / N}$ ($j = 0, 1, \ldots, N-1$), we have $dz = i(z - \gamma)\,d\theta$ on the circle, so

$$\mu_k \approx \frac{1}{N} \sum_{j=0}^{N-1} (\omega_j - \gamma)\, \omega_j^{\,k}\, f(\omega_j).$$

(Figure: path of integration, a circle with center $\gamma$ and radius $r$, with quadrature points $\omega_j$ on it.)

The function values $f(\omega_j) = u^{\mathrm{H}} (\omega_j B - A)^{-1} v$ have to be computed for each $j$, so the solution of $N$ independent systems of linear simultaneous equations $(\omega_j B - A)\, y_j = v$ is necessary ($N = 64$–$128$).
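The quadrature step can be sketched for a scalar rational $f(z)$. The poles and residues below are made up, with one pole deliberately placed outside the contour to show that, by the residue theorem, it contributes (almost) nothing to the computed moments:

```python
import numpy as np

# Toy f(z) = sum_j c_j / (z - lambda_j); poles and residues are assumed,
# with the pole at 2.5 lying outside the unit circle.
lams = np.array([0.2 + 0.1j, -0.3 + 0.0j, 2.5 + 0.0j])
c = np.array([1.0, 0.7, 1.3])
def f(z):
    return np.sum(c / (z - lams))

gamma, r, N, m = 0.0, 1.0, 64, 2
omega = gamma + r * np.exp(2j * np.pi * np.arange(N) / N)

# Trapezoidal rule for mu_k = (1/(2*pi*i)) oint z^k f(z) dz on the circle:
# dz = i (z - gamma) dtheta, hence the (omega_j - gamma) factor below.
mu = np.array([np.mean([(w - gamma) * w**k * f(w) for w in omega])
               for k in range(2 * m)])

# Only the two poles inside the circle should contribute:
exact = np.array([np.sum(c[:2] * lams[:2]**k) for k in range(2 * m)])
assert np.allclose(mu, exact, atol=1e-6)
```

For a periodic analytic integrand like this one, the trapezoidal rule converges geometrically in $N$, which is why modest values such as $N = 64$ already give small quadrature errors.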
## Application of the sparse direct solver

$A$ and $B$ are sparse symmetric matrices and $\omega_j$ is a complex number, so the coefficient matrix $\omega_j B - A$ is a sparse complex symmetric matrix.

Application of the sparse direct solver:

- For a sparse s.p.d. matrix, the sparse direct solver provides an efficient way to solve the linear simultaneous equations.
- We adopt this approach by extending the sparse direct solver to deal with complex symmetric matrices.
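As an illustration only: SciPy's SuperLU-based `splu` (which, unlike the no-pivot solver of this work, does perform pivoting) can factor one shifted sparse complex system $(\omega_j B - A)\, y_j = v$. The matrices below are made up, not the test matrices of the experiments:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

# Made-up sparse symmetric A and s.p.d. B (diagonal for simplicity).
n = 200
rng = np.random.default_rng(1)
A = sp.random(n, n, density=0.02, random_state=1)
A = (A + A.T) * 0.5                        # sparse symmetric
B = sp.eye(n) + sp.diags(rng.random(n))    # diagonal s.p.d.

omega = 0.5 + 0.3j                         # one quadrature point on the circle
K = (omega * B - A).tocsc()                # sparse complex symmetric matrix
v = rng.standard_normal(n) + 0j

# One factorization per omega_j; here SuperLU stands in for the talk's
# no-pivot complex symmetric solver.
lu = splu(K)
y = lu.solve(v)
assert np.linalg.norm(K @ y - v) < 1e-10 * np.linalg.norm(v)
```

Since the $N$ coefficient matrices $\omega_j B - A$ differ only in the shift, the $N$ factorizations are independent and can be distributed across processors.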
## The sparse direct solver

Characteristics:

- It reduces the computational work and memory requirements of the Cholesky factorization by exploiting the sparsity of the matrix.
- Stability is guaranteed when the matrix is s.p.d.
- Efficient parallelization techniques are available.

The solver proceeds in four phases:

1. Ordering: find a permutation of rows/columns that reduces the computational work and memory requirements.
2. Symbolic factorization: estimate the computational work and memory requirements, and prepare the data structures to store the Cholesky factor.
3. Cholesky factorization.
4. Triangular solution.
## Extension of the sparse direct solver to complex symmetric matrices

Algorithm:

- The extension is straightforward: use the Cholesky factorization for complex symmetric matrices.
- Advantages such as reduced computational work, reduced memory requirements, and parallelizability carry over.

Accuracy and stability:

- Theoretically, pivoting is necessary when factorizing complex symmetric matrices.
- Since our algorithm does not incorporate pivoting, accuracy and stability are not guaranteed.
- We therefore examine the accuracy and stability experimentally by comparing the results with those obtained using Gaussian elimination with partial pivoting (GEPP).
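A minimal dense sketch of the extension, assuming the textbook $LL^{\mathrm{T}}$ algorithm with complex square roots in place of the solver's sparse implementation; without pivoting it can break down on a zero pivot, which is exactly the risk examined here:

```python
import numpy as np

def cholesky_complex_symmetric(A):
    """Unpivoted L L^T factorization of a complex *symmetric* (not Hermitian)
    matrix: A = L @ L.T with L lower triangular. Without pivoting this may
    break down (zero pivot) or be unstable -- the risk discussed above."""
    n = A.shape[0]
    L = np.zeros_like(A, dtype=complex)
    for j in range(n):
        d = A[j, j] - np.sum(L[j, :j] ** 2)   # complex squares, not |.|^2
        L[j, j] = np.sqrt(d)                  # complex sqrt; fails if d == 0
        L[j+1:, j] = (A[j+1:, j] - L[j+1:, :j] @ L[j, :j]) / L[j, j]
    return L

# Made-up complex symmetric test matrix of the form omega*B - A.
rng = np.random.default_rng(2)
M = rng.standard_normal((5, 5))
A_sym = M + M.T                               # real symmetric
B = np.diag(1.0 + rng.random(5))              # s.p.d. diagonal
K = (0.5 + 0.3j) * B - A_sym                  # complex symmetric
L = cholesky_complex_symmetric(K)
assert np.allclose(L @ L.T, K, atol=1e-10)
```

Note that for $K = \omega B - A$ with $\mathrm{Im}\,\omega \neq 0$, $A$ real symmetric, and $B$ s.p.d., every leading principal submatrix is nonsingular (its eigenvalue problem has only real eigenvalues), so the factorization exists; whether it stays *accurate* without pivoting is the experimental question of this work.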
## Numerical results

Matrices used in the experiments:

| matrix   | N     | NNZ     | description                        | source                 |
|----------|-------|---------|------------------------------------|------------------------|
| BCSSTK12 | 1,473 | 17,857  | Ore car, consistent mass           | Harwell-Boeing Library |
| BCSSTK13 | 2,003 | 42,943  | Fluid flow generalized eigenvalues | Harwell-Boeing Library |
| FMO      | 1,980 | 365,030 | Fragment molecular orbital method  |                        |

(Figure: nonzero patterns of BCSSTK12, BCSSTK13, and FMO.)

- For each matrix, we solve the equations with the sparse direct solver (with minimum degree (MD) and nested dissection (ND) ordering) and with GEPP.
- We compare the computational time and the accuracy of the eigenvalues.
## Computational time

Computational time (sec.) for one set of linear simultaneous equations, and speedup over GEPP (PowerPC G5, 2.0 GHz):

| matrix   | LAPACK (GEPP) | sparse solver (MD) | sparse solver (ND) |
|----------|---------------|--------------------|--------------------|
| BCSSTK12 | 2.44 (1x)     | 0.017 (144x)       | 0.021 (116x)       |
| BCSSTK13 | 6.12 (1x)     | 0.36 (17x)         | 0.43 (14x)         |
| FMO      | 5.86 (1x)     | 2.93 (2.0x)        | 3.51 (1.7x)        |

The sparse direct solver is two to over one hundred times faster than GEPP, depending on the nonzero structure.
## Accuracy of the eigenvalues (BCSSTK12)

Example of an interval containing 4 eigenvalues.

(Figure: distribution of the eigenvalues and the specified interval.)

Relative errors in the eigenvalues for each algorithm (N = 64):

| LAPACK (GEPP) | sparse solver (MD) | sparse solver (ND) |
|---------------|--------------------|--------------------|
| 1.1E-08       | 2.4E-09            | 4.5E-09            |
| 2.1E-10       | 9.8E-10            | 7.6E-10            |
| 2.8E-09       | 1.0E-08            | 2.9E-08            |
| 1.0E-08       | 1.3E-08            | 3.4E-08            |

- The errors were of the same order for all three solvers.
- Moreover, the growth factor for the sparse solver was O(1).
## Accuracy of the eigenvalues (BCSSTK13)

Example of an interval containing 3 eigenvalues.

(Figure: distribution of the eigenvalues and the specified interval.)

Relative errors in the eigenvalues for each algorithm (N = 64):

| LAPACK (GEPP) | sparse solver (MD) | sparse solver (ND) |
|---------------|--------------------|--------------------|
| 2.4E-11       | 4.9E-11            | 4.6E-11            |
| 4.5E-10       | 1.6E-10            | 2.5E-11            |
| 1.2E-10       | 5.4E-11            | 3.7E-11            |

The errors were of the same order for all three solvers.
## Accuracy of the eigenvalues (FMO)

Example of an interval containing 4 eigenvalues.

(Figure: distribution of the eigenvalues and the specified interval.)

Relative errors in the eigenvalues for each algorithm (N = 64):

| LAPACK (GEPP) | sparse solver (MD) | sparse solver (ND) |
|---------------|--------------------|--------------------|
| -5.0E-13      | -5.0E-13           | -5.0E-13           |
| -1.2E-10      | -8.5E-11           | -2.2E-11           |
| -1.7E-10      | -3.0E-10           | -1.4E-11           |
| -8.4E-12      | -3.5E-12           | -3.5E-12           |

The errors were of the same order for all three solvers.
## Conclusion

Summary of this study:

- We applied a complex symmetric version of the sparse direct solver to a projection method for generalized eigenvalue problems using numerical integration.
- The sparse solver solved the linear simultaneous equations stably and accurately, producing eigenvalues that are as accurate as those obtained by GEPP.

Future work:

- Apply our algorithm to larger matrices arising from quantum chemistry applications.
- Construct a hybrid method that switches to an iterative solver when the growth factor becomes too large.
- Parallelize the sparse solver itself, so that more than N processors can be used.
