2008 International Symposium on Nonlinear Theory and its Applications
NOLTA'08, Budapest, Hungary, September 7-10, 2008




Iterative Refinement for Ill-conditioned Linear Equations

Shin'ichi Oishi^{1,4}, Takeshi Ogita^{2,4} and Siegfried M. Rump^{3,1}

1 Department of Applied Mathematics, Faculty of Science and Engineering, Waseda University, 3-4-1 Okubo, Shinjuku-ku, Tokyo 169-8555, Japan
2 Department of Mathematics, College of Arts and Sciences, Tokyo Woman's Christian University, 2-6-1 Zempukuji, Suginami-ku, Tokyo 167-8585, Japan
3 Institute for Reliable Computing, Hamburg University of Technology, Schwarzenbergstr. 95, 21071 Hamburg, Germany
4 CREST, JST, Japan

Abstract—This paper treats a linear equation

        Av = b,

where A ∈ F^{n×n} and b ∈ F^n. Here, F is a set of floating point numbers. Let u be the unit round-off of the working precision and κ(A) = ∥A∥_∞ ∥A^{-1}∥_∞ be the condition number of the problem. In this paper, ill-conditioned problems with

        1 < uκ(A) < ∞

are considered, and an iterative refinement algorithm for such problems is proposed. The forward and backward stability of this iterative refinement algorithm will be shown.

1. Introduction

In this paper, we will consider the convergence of an iterative refinement for a linear equation

        Av = b,        (1)

where A ∈ F^{n×n} and b ∈ F^n. Here, F is a set of floating point numbers. Let u be the unit round-off of the working precision and κ(A) = ∥A∥_∞ ∥A^{-1}∥_∞ be the condition number of the problem. Here, ∥ · ∥_∞ is the maximum norm defined by

        ∥v∥_∞ = max_{1≤i≤n} |v_i| for v = (v_1, v_2, · · · , v_n)^T ∈ R^n        (2)

and

        ∥A∥_∞ = max_{1≤i≤n} Σ_{j=1}^n |A_{ij}| for A = (A_{ij}) ∈ R^{n×n}.        (3)

The superscript T denotes the transpose and R is the set of real numbers. For well-posed problems, i.e., in the case of uκ(A) < 1, it has been shown [1]-[5] that the iterative refinement improves the forward and backward errors of computed solutions, provided that the residuals are evaluated in extended precision, in which the unit round-off is, for example, of the order of u^2, before rounding back to the working precision. In this paper, we will treat ill-conditioned problems with

        1 < uκ(A) < ∞.        (4)

We can assume without loss of generality that for a certain positive integer k the following is satisfied:

        u^k κ(A) ≤ β < 1.        (5)

In Ref. [8], Rump has shown that for arbitrary ill-conditioned matrices A, we can have good approximate inverses R_{1:k} satisfying ∥R_{1:k} A − I∥_∞ ≤ α < 1. Here, R_{1:k} is obtained as

        R_{1:k} = R_1 + R_2 + · · · + R_k        (6)

with R_i ∈ F^{n×n}, and I is the n-dimensional unit matrix. In Ref. [6], we have partially clarified the mechanism of the convergence of Rump's method.

Let A, B, C ∈ F^{n×n}. We assume that we have an accurate matrix product calculation algorithm [AB − C]_k such that

        D_{1:k} = [AB − C]_k        (7)

satisfying

        ∥Σ_{i=1}^k D_i − (AB − C)∥_∞ ≤ c u^k ∥AB − C∥_∞.        (8)

Here, D_{1:k} is defined as

        D_{1:k} = D_1 + D_2 + · · · + D_k        (9)

with D_i ∈ F^{n×n}. Such algorithms have been proposed by the present authors with c < 2.1 (see [7], [9] and [10]).
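The accurate algorithms of [7], [9] and [10] are based on error-free floating-point transformations. As a rough stand-in for the residual case [Av − b]_k (this is not the authors' algorithm), the effect of evaluating a residual far more precisely than the working precision can be emulated with exact rational arithmetic, rounding only once at the end:

```python
from fractions import Fraction

def accurate_residual(A, v, b):
    """Residual A v - b computed exactly in rational arithmetic, then
    rounded once back to the working precision.  A simple stand-in for
    the accurate algorithms [Av - b]_k discussed in the text: it has the
    accuracy property (8) but none of their speed."""
    n = len(b)
    r = []
    for i in range(n):
        s = -Fraction(b[i])
        for j in range(n):
            s += Fraction(A[i][j]) * Fraction(v[j])  # exact products and sums
        r.append(float(s))  # single rounding to the working precision
    return r
```

Only the accuracy property matters for the refinement: the rounded residual agrees with Av − b up to one working-precision rounding, whereas a naive double-precision residual can lose all significant digits to cancellation.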



Now we propose the following iterative refinement algorithm:

        v′ = [v − R_{1:k} [Av − b]_k ]_1.        (10)

Put r_k = [Av − b]_k and let Φ(v) = [v − R_{1:k} r_k ]_1. Then, we can write

        v′ = Φ(v).        (11)

The following holds:

        v′ = v − R_{1:k} [(Av − b) + e_r ] + e_m,        (12)

where e_r = r_k − (Av − b) and e_m ∈ R^n satisfy

        ∥e_r∥_∞ ≤ c u^k ∥Av − b∥_∞        (13)

and

        ∥e_m∥_∞ ≤ c u ∥v − R_{1:k} r_k∥_∞.        (14)

In this paper, we will show the forward and backward stability of the iterative algorithm (10). Furthermore, numerical examples are given to illustrate the forward and backward stability of the iterative refinement algorithm (10). The forward stability of the algorithm guarantees that approximate solutions generated by the algorithm converge, while the backward stability means the stability of the algorithm against rounding errors.

2. Convergence Theorem: Forward Stability

Let us consider

        Av = b,        (15)

where A ∈ F^{n×n} and b ∈ F^n. Let

        1 < uκ(A) < ∞.        (16)

We assume that we have a good approximate inverse R_{1:k} satisfying

        ∥R_{1:k} A − I∥_∞ ≤ α < 1.        (17)

Here, R_{1:k} is defined as

        R_{1:k} = R_1 + R_2 + · · · + R_k        (18)

with R_i ∈ F^{n×n}. As mentioned in the previous section, Rump has proposed a method of calculating such approximate inverses in Ref. [8], and in Ref. [6] we have partially clarified the mechanism of the convergence of Rump's method. Further, we also assume that the following is satisfied:

        u^k κ(A) ≤ β < 1.        (19)

We propose the following iterative refinement algorithm:

        v_n = Φ(v_{n−1}), Φ(v) = [v − R_{1:k} r_k ]_1,
        r_k = [Av − b]_k (n = 1, 2, · · ·)        (20)

with any starting vector v_0 ∈ F^n. The aim of this section is to show the following theorem:

Theorem 1 Let v_n be generated from (20) with any starting vector v_0 ∈ F^n. We assume the assumptions (17) and (19). If

        γ = (α + cβ + cαβ)(1 + cu) < 1,        (21)

the relative forward error ∥v_n − v*∥_∞ /∥v*∥_∞ reduces until

        ∥v_n − v*∥_∞ /∥v*∥_∞ ≈ u + cu/(1 − γ).        (22)

Here, for real numbers a and b, a ≈ b means that a is approximately equal to b.

This implies the forward stability of the iterative refinement algorithm (20).

3. Backward Stability

In this section, we will show the backward stability of the iterative refinement algorithm (20).

A normwise backward error of an approximation v is defined by

        η(v) = min{ε : (A + ∆A)v = b + ∆b, ∥∆A∥_∞ ≤ ε∥A∥_∞, ∥∆b∥_∞ ≤ ε∥b∥_∞}.        (23)

It is known [13] that

        η(v) = ∥r∥_∞ / (∥A∥_∞ ∥v∥_∞ + ∥b∥_∞).        (24)

Here, r = Av − b.

The next theorem shows the backward stability of the iterative refinement algorithm (20):

Theorem 2 Let v_n be generated from (20) with any starting vector v_0 ∈ F^n. We assume the assumptions (17) and (19). If

        γ = (α + cβ + cαβ)(1 + cu) < 1,        (25)

the backward error η(v) reduces until

        η(v) ≲ c_2 u,        (26)

where c_2 is a certain constant. Here, for real numbers a and b, a ≲ b means that a is approximately equal to b or a is less than b.

This implies the backward stability of the iterative refinement algorithm (20).

4. Numerical Examples Illustrating Forward and Backward Stability

In this section, we will present numerical examples illustrating the forward and the backward stability of the iterative refinement algorithm (20).

We have used the IEEE 754 double precision floating point number system in these numerical calculations. Thus, in the following calculations, the unit round-off u is given as

        1.11 × 10^{-16} < u = 2^{-53} < 1.12 × 10^{-16}.        (27)
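In outline, the refinement iteration (20) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the summed approximate inverse R_{1:k} is applied here as a single precomputed matrix R, the accurate residual [Av − b]_k is passed in as a callable, and numpy's double-precision arithmetic supplies the rounding [·]_1:

```python
import numpy as np

def refine(A, b, R, v0, residual, n_iter=10):
    """Sketch of iteration (20): v_n = [v_{n-1} - R r_k]_1 with
    r_k = [A v_{n-1} - b]_k.  `R` stands in for R_{1:k} (applied as one
    matrix rather than as the unevaluated sum R_1 + ... + R_k used in
    the paper) and `residual` for the accurate residual [Av - b]_k."""
    v = np.array(v0, dtype=float)
    for _ in range(n_iter):
        r = np.asarray(residual(A, v, b), dtype=float)
        v = v - R @ r  # Phi(v); numpy arithmetic supplies the rounding [.]_1
    return v
```

For a well-conditioned A with R ≈ A^{-1}, even a working-precision residual suffices; the ill-conditioned cases of Section 4 additionally require the accurate residual and the summed inverse R_{1:k}.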



4.1. Hilbert Matrix

Let H be the n × n Hilbert matrix. Let further A = sH. Here, s is the least common multiple of 1, 2, 3, · · · , n − 1. Furthermore,

        b = Az,        (28)

where z ∈ F^n and z_i = 1. We have solved Av = b for n = 20. In this example, 1.92 × 10^16 < ∥A∥_∞ < 1.93 × 10^16, 1.92 × 10^16 < ∥b∥_∞ < 1.93 × 10^16 and 2.44 × 10^28 < κ(A) < 2.45 × 10^28.

In this case, a good approximate inverse can be constructed with k = 2 such that

        ∥RA − I∥_∞ < α = 4.16 × 10^{-4},        (29)

where

        R = R_1 + R_2        (30)

with suitable R_1, R_2 ∈ F^{n×n}. The iterative refinement algorithm (20) converges in 3 iterations. We finally have an approximate solution with a relative maximum error of about 1.91 × 10^{-16}. Furthermore, it is seen that

        β = u^2 κ(A) < (1.12 × 10^{-16})^2 × 2.45 × 10^28 < 3.08 × 10^{-4}.        (31)

Table 1 shows the relative errors

        ∥v* − v_i∥_∞ /∥v*∥_∞        (32)

and the backward errors η(v_i) of approximate solutions obtained by the iterative refinement calculations (20). These calculations are done by MATLAB on an Intel Core 2 Duo CPU.

        Table 1: Hilbert matrix (n = 20)

        i    ∥v* − v_i∥_∞ /∥v*∥_∞    η(v_i)
        0    3.50 × 10^{-4}          4.55 × 10^{-6}
        1    4.03 × 10^{-9}          4.12 × 10^{-11}
        2    5.10 × 10^{-14}         5.04 × 10^{-16}
        3    1.91 × 10^{-16}         1.77 × 10^{-18}
        4    1.91 × 10^{-16}         1.77 × 10^{-18}

4.2. Rump's Matrix (n = 100)

Let A be an n × n matrix generated by Rump's algorithm [12]. We choose n = 100 and b = (1, 1, · · · , 1)^T ∈ F^n. In this example, 1.04 × 10^16 < ∥A∥_∞ < 1.05 × 10^16, ∥b∥_∞ = 1 and 1.74 × 10^107 < κ(A) < 1.75 × 10^107.

In this case, a good approximate inverse can be constructed with k = 8 such that

        ∥RA − I∥_∞ < α = 1.86 × 10^{-4},        (33)

where

        R = R_1 + R_2 + · · · + R_8        (34)

with suitable R_1, R_2, · · · , R_8 ∈ F^{n×n}. The iterative refinement algorithm (20) converges in 3 iterations.

        Table 2: Rump's matrix (n = 100)

        i    ∥v* − v_i∥_∞ /∥v*∥_∞    η(v_i)
        0    7.51 × 10^{-6}          3.98 × 10^{-14}
        1    5.98 × 10^{-11}         5.61 × 10^{-19}
        2    4.88 × 10^{-16}         2.46 × 10^{-19}
        3    3.18 × 10^{-19}         6.58 × 10^{-19}
        4    3.18 × 10^{-19}         6.58 × 10^{-19}

Moreover, it is seen that

        β = u^8 κ(A) < (1.12 × 10^{-16})^8 × 1.75 × 10^107 < 4.34 × 10^{-21}.        (35)

Table 2 shows the relative errors and the backward errors of approximate solutions obtained by the iterative refinement calculations (20). The calculations are done in the same computational environment as that for the previous example.

4.3. Rump's Matrix (n = 300)

Let A be an n × n matrix generated by Rump's algorithm [12]. We choose n = 300 and b = (1, 1, · · · , 1)^T ∈ F^n. In this example, 3.10 × 10^8 < ∥A∥_∞ < 3.11 × 10^8, ∥b∥_∞ = 1 and 6.28 × 10^59 < κ(A) < 6.29 × 10^59.

In this case, a good approximate inverse can be constructed with k = 5 such that

        ∥RA − I∥_∞ < α = 1.16 × 10^{-9},        (36)

where

        R = R_1 + R_2 + · · · + R_5        (37)

with suitable R_1, R_2, · · · , R_5 ∈ F^{n×n}. The iterative refinement algorithm (20) converges in 1 iteration.

Moreover, it is seen that

        β = u^5 κ(A) < (1.12 × 10^{-16})^5 × 6.29 × 10^59 < 1.11 × 10^{-20}.        (38)

Table 3 shows the relative errors and the backward errors of approximate solutions obtained by the iterative refinement calculations (20). The calculations are done in the same computational environment as that for the previous example.
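The backward errors η(v_i) reported in the tables follow from formula (24). A minimal sketch (with the residual evaluated only in working precision for brevity; for reliable values on ill-conditioned problems the residual itself should be computed accurately):

```python
import numpy as np

def backward_error(A, v, b):
    """Normwise backward error of (24):
       eta(v) = ||Av - b||_inf / (||A||_inf ||v||_inf + ||b||_inf).
    The residual is computed in plain double precision here purely for
    illustration."""
    r = A @ v - b
    return np.linalg.norm(r, np.inf) / (
        np.linalg.norm(A, np.inf) * np.linalg.norm(v, np.inf)
        + np.linalg.norm(b, np.inf))
```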



        Table 3: Rump's matrix (n = 300)

        i    ∥v* − v_i∥_∞ /∥v*∥_∞    η(v_i)
        0    8.02 × 10^{-12}         1.42 × 10^{-17}
        1    8.10 × 10^{-23}         4.07 × 10^{-19}
        2    8.10 × 10^{-23}         4.07 × 10^{-19}

References

 [1] C. B. Moler, "Iterative refinement in floating point", J. Assoc. Comput. Mach., 14 (1967), 316-321.
 [2] R. D. Skeel, "Iterative refinement implies numerical stability for Gaussian elimination", Math. Comp., 35 (1980), 817-832.
 [3] N. J. Higham, "Iterative refinement for linear systems and LAPACK", IMA J. Numer. Anal., 17 (1997), 495-509.
 [4] M. Jankowski and H. Wozniakowski, "Iterative refinement implies numerical stability", BIT, 17 (1977), 303-311.
 [5] F. Tisseur, "Newton's method in floating point arithmetic and iterative refinement of generalized eigenvalue problems", SIAM J. Matrix Anal. Appl., 22:4 (2001), 1038-1057.
 [6] S. Oishi, K. Tanabe, T. Ogita and S. M. Rump, "Convergence of Rump's method for inverting arbitrarily ill-conditioned matrices", J. Comput. Appl. Math., 205 (2007), 533-544.
 [7] T. Ogita, S. M. Rump and S. Oishi, "Accurate Sum and Dot Product", SIAM J. Sci. Comput., 26:6 (2005), 1955-1988.
 [8] S. M. Rump, "Approximate inverses of almost singular matrices still contain useful information", Technical Report 90.1, Faculty of Information and Communication Sciences, Hamburg University of Technology (1990).
 [9] S. M. Rump, T. Ogita and S. Oishi, "Accurate Floating-point Summation I: Faithful Rounding", accepted for publication in SIAM J. Sci. Comput. Preprint available from http://www.ti3.tu-harburg.de/publications/rump.
[10] S. M. Rump, T. Ogita and S. Oishi, "Accurate Floating-point Summation II: Sign, K-fold Faithful and Rounding to Nearest", submitted for publication in SIAM J. Sci. Comput. Preprint available from http://www.ti3.tu-harburg.de/publications/rump.
[11] T. Ohta, T. Ogita, S. M. Rump and S. Oishi, "A Method of Verified Numerical Computation for Ill-conditioned Linear System of Equations", Journal of JSIAM, 15:3 (2005), 269-287 (in Japanese).
[12] S. M. Rump, "A class of arbitrarily ill-conditioned floating-point matrices", SIAM J. Matrix Anal. Appl., 12:4 (1991), 645-653.
[13] J. L. Rigal and J. Gaches, "On the compatibility of a given solution with the data of a linear system", J. Assoc. Comput. Mach., 14 (1967), 543-548.

								