   The Rössler Attractor
                            5. NUMERICAL METHODS


5.0     INTRODUCTION


One of the most important tasks in a study of dynamical systems is the numerical
calculation of the trajectories. Thus far we have considered the integration method to
be a black box into which we pop the system, initial conditions, method and time
range and out pops a plot of the trajectories. Although this approach is common in
courses on dynamical systems, it obscures many of the pitfalls of numerical
integration.


It is not possible at the present state of the art to choose a ‘best’ algorithm for the
calculation of trajectories. There are several types of numerical algorithm, each with
its own advantages and disadvantages. We shall consider a class of methods known
as discrete variable methods. These methods approximate the continuous time
dynamical system by a discrete time system. This means that we are not really
simulating the continuous system but a discrete system which may have different
dynamical properties. This is an extremely important point.


The discrete variable methods which we consider fall into two main types: Runge-
Kutta methods and Linear Multistep methods. Maple has implementations of both
types of method as well as a number of more sophisticated techniques designed to
overcome some of the pitfalls of numerical solution. The more sophisticated methods
still fall into the discrete variable category.




 5.1     TYPES OF METHOD


Although the dynamical systems which we are simulating are usually in more than
one dimension we can, without loss of generality, restrict our numerical analysis of the
methods to the single non-autonomous differential equation


                                x' = f(t, x)



subject to the initial condition x(t_0) = x_0. We shall usually refer to the differential
equation together with the initial condition as an initial value problem (IVP). Discrete
variable methods yield a series of approximations


                                X_n ≈ x(t_n)


on the set of points


                                    tn1  tn  h,         n  0,1 N


where h is the stepsize.


Taylor Series Method


These methods are based on the Taylor series expansion


          x(t_{n+1}) = x(t_n) + h x'(t_n) + (h^2/2!) x''(t_n) + (h^3/3!) x'''(t_n) + ...


If the series is truncated and x(t_n) is replaced by the approximation X_n we obtain the
Taylor Series Method of order p


          X_{n+1} = X_n + h X'_n + (h^2/2!) X''_n + (h^3/3!) X'''_n + ... + (h^p/p!) X_n^(p)




where X^(p) = d^p X / dt^p.


Although there is an implementation of this method in Maple it is not much used in
practice due to the necessity of computing the higher order derivatives of X. We shall
only use it as a reference method when discussing the accuracy of other methods.
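To see the difficulty, consider the IVP y' = -2xy^2 which is used in Worked Example 2 below. Already
the second order Taylor method needs the total derivative

                        y'' = d/dx(-2xy^2) = -2y^2 - 4xy y' = -2y^2 + 8x^2 y^3

and each further order requires another differentiation of an increasingly complicated expression.
(This is an illustrative hand calculation, not part of Maple's implementation.)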


Runge-Kutta Methods


These methods are based on the notion of finding a formula which agrees with the
Taylor series as closely as possible without involving derivatives. For example
consider the possibility of matching the second order Taylor series method


                        X_{n+1} = X_n + h X'_n + (h^2/2!) X''_n


by using a formula of the form


                                       X n1  X n  h (tn , X n , h)


where


              φ(t, x, h) = α f(t, x) + β f(t + ah, x + bh f(t, x))


The equivalent Taylor series expression to φ(t, x, h) is


              x'(t) + (h/2) x''(t) = f(t, x) + (h/2)[ f_t(t, x) + f_x(t, x) f(t, x) ]


Expanding φ(t, x, h) in a Taylor series up to terms of O(h) gives

              φ(t, x, h) = (α + β) f(t, x) + hβ[ a f_t(t, x) + b f_x(t, x) f(t, x) ] + O(h^2)



Comparing the two expressions we see that


                        α + β = 1,        βa = 1/2,        βb = 1/2




resulting in the family of solutions


                        α = 1 - γ,        β = γ,        a = b = 1/(2γ)


where γ ≠ 0 is a free parameter. For γ = 1/2 we obtain the improved Euler method



                  X n 1  X n 
                                   h
                                   2
                                                           b
                                     f (tn , X n )  f tn  h, X n  hf (tn , X n )   g

and for γ = 1 the modified Euler method


              X_{n+1} = X_n + h f(t_n + h/2, X_n + (h/2) f(t_n, X_n))
Unfortunately the terminology for naming second order Runge-Kutta methods is not
standardised and Maple calls the improved Euler method the Heun formula and the
modified Euler method the improved polygon method.


The procedure above can be extended to give higher order methods such as the
classical 4th order method


                        X_{n+1} = X_n + (h/6)( k_1 + 2k_2 + 2k_3 + k_4 )

                        k_1 = f(t_n, X_n)
                        k_2 = f(t_n + h/2, X_n + (h/2) k_1)
                        k_3 = f(t_n + h/2, X_n + (h/2) k_2)
                        k_4 = f(t_n + h, X_n + h k_3)




 Linear Multistep Methods


These methods are based on integration of an interpolating polynomial taking the
values f_n = f(t_n, X_n) on a set of points t_n at which X_n has already been computed.
By integrating


                                x' = f(t, x)


over the interval [t_n, t_{n+1}] we obtain



              x(t_{n+1}) = x(t_n) + ∫_{t_n}^{t_{n+1}} f(t, x) dt




Using linear interpolation in the interval we have



      x(t_{n+1}) = x(t_n) + ∫_{t_n}^{t_{n+1}} [ (t_{n+1} - t)/(t_{n+1} - t_n) f_n + (t - t_n)/(t_{n+1} - t_n) f_{n+1} ] dt
Integrating gives



              x(t_{n+1}) ≈ x(t_n) + (h/2)( f_n + f_{n+1} )


This leads to the Trapezoidal method



                        X_{n+1} = X_n + (h/2)( f_n + f_{n+1} )

The general form of these methods is


              Σ_{j=0}^{k} α_j X_{n+j} = h Σ_{j=0}^{k} β_j f_{n+j},        n = 0, 1, 2, ...




where the α_j and β_j are constants with α_k ≠ 0. This formula is called a linear k-step
method. In order to generate the sequence of approximations {X_n} it is first necessary
to obtain k starting values X_0, X_1, ..., X_{k-1}. If β_k = 0 the method is explicit. If β_k ≠ 0
then the method is implicit and leads to a non-linear equation for X_{n+k}.
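For example, one step of the (implicit) Trapezoidal method applied to x' = -2tx^2 requires the
solution of a non-linear equation for X_{n+1}. A minimal Maple sketch, with a purely illustrative
stepsize and starting value, is


> f := (t, x) -> -2*t*x^2:
> h := 0.1: t0 := 0: X0 := 1:
> X1 := fsolve(X = X0 + h/2*(f(t0, X0) + f(t0 + h, X)), X);   # one Trapezoidal step


In practice such equations are usually solved by a Newton-type iteration or by pairing the implicit
formula with an explicit predictor.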


The main methods of this type which we shall consider are:


Adams-Bashforth


These methods are explicit, the method of order k being a k-step method. The methods
of order one to three have the formulae


              X_{n+1} = X_n + h f(t_n, X_n)

              X_{n+1} = X_n + (h/2)[ 3 f(t_n, X_n) - f(t_{n-1}, X_{n-1}) ]

              X_{n+1} = X_n + (h/12)[ 23 f(t_n, X_n) - 16 f(t_{n-1}, X_{n-1}) + 5 f(t_{n-2}, X_{n-2}) ]


The first order method is more normally called the Euler method.


Adams-Moulton


These methods are implicit, the method of order k being a (k - 1)-step method. The
methods of order one to three have the formulae


              X_{n+1} = X_n + h f(t_{n+1}, X_{n+1})

              X_{n+1} = X_n + (h/2)[ f(t_{n+1}, X_{n+1}) + f(t_n, X_n) ]

              X_{n+1} = X_n + (h/12)[ 5 f(t_{n+1}, X_{n+1}) + 8 f(t_n, X_n) - f(t_{n-1}, X_{n-1}) ]


The first order method is called the backward Euler formula and the second order
method is the Trapezoidal method.




 Gear Methods


These methods are implicit, the method of order k being a k-step method. The methods
of order one to three have the formulae


              X_{n+1} = X_n + h f(t_{n+1}, X_{n+1})

              X_{n+1} = (1/3)[ 4 X_n - X_{n-1} ] + (2h/3) f(t_{n+1}, X_{n+1})

              X_{n+1} = (1/11)[ 18 X_n - 9 X_{n-1} + 2 X_{n-2} ] + (6h/11) f(t_{n+1}, X_{n+1})




The first order method is again the backward Euler formula.



5.2    MAPLE IMPLEMENTATION


Maple contains a large range of numerical procedures for integrating non-
autonomous dynamical systems of the type


                        x' = f(t, x),        x(t_0) = x_0




All of these procedures are invoked using the Maple dsolve command with the
numeric option. The syntax for the command is


                    dsolve(deqns, vars, type=numeric, options)


where deqns defines the system of differential equations and initial values, vars
defines the dependent and independent variables, type=numeric tells Maple to use a
numerical algorithm and options allows a choice of method, stepsize and other
options associated with the method. The default method is a Fehlberg fourth-fifth
order Runge-Kutta method.
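For example, a call selecting one of the classical methods described later in this chapter, with a
fixed stepsize, might look like the following sketch (the IVP here is purely illustrative):


> ivp := {diff(x(t),t) = -x(t), x(0) = 1}:
> p := dsolve(ivp, {x(t)}, type=numeric, method=classical[rk4], stepsize=0.1):
> p(1.0);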




    Worked Example 1 - Van der Pol Equation


As an example consider the solution of the Van der Pol equation written as the first
order system


                                xy
                                
                                           c h
                                y   x   1 x2  y
                                


> epsilon:=10:
> ivp := {diff(x(t),t)=y(t),diff(y(t),t)=-x(t)+epsilon*(1-x(t)^2)*y(t),
                 x(0)=0,y(0)=0.5}:
> fcns := {x(t), y(t)}:


This defines the initial value problem. Now we invoke dsolve


> p1:= dsolve(ivp,fcns,type=numeric);


                            p1 := proc(rkf45_x) ... end


Note that the output from the dsolve command is a procedure. In order to find the
numerical solution we need to evaluate the procedure at the appropriate value of t.


> p1(10);


                  [ t = 10, x(t) = -1.746862466305488, y(t) = .08476007025811474 ]


We can use the Maple odeplot command to graph the solution. This command is in the
plots package so this has to be loaded first.


> with(plots):




Now we can plot the solution in various ways. Firstly here is a plot of the solution
components against t over the range [0,20]. The numpoints option controls the number of
plotted points, which needs to be relatively high here to obtain a realistic plot.


> odeplot(p1,[[t,x(t)],[t,y(t)]],0..20, numpoints=500);




It is also possible to obtain a phase plot


> odeplot(p1,[[x(t),y(t)]],0..20, numpoints=500);




and a three dimensional plot


> odeplot(p1,[[t,x(t),y(t)]],0..20, numpoints=500);




Classical Methods


Maple contains a number of simple fixed-stepsize methods for the numerical solution of initial
value problems. These are referred to as classical methods and are invoked by
including the option method=classical[type] in the call to dsolve. Here type can be
one of


foreuler       the forward Euler method;
heunform       the Heun formula (also known as the trapezoidal rule, or the improved
               Euler method);
impoly         the improved polygon method (also known as the modified
               Euler method);
rk2            the second-order classical Runge-Kutta method;
rk3            the third-order classical Runge-Kutta method;
rk4            the fourth-order classical Runge-Kutta method;
adambash       the Adams-Bashforth method (a "predictor" method);
abmoulton      the Adams-Bashforth-Moulton method (a "predictor-
               corrector" method).


If no type is specified the forward Euler method is used.
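Internally the forward Euler option simply iterates X_{n+1} = X_n + h f(t_n, X_n). A hand-rolled
version, useful for comparison with the dsolve output, might look like this sketch (not Maple's own
implementation):


> euler := proc(f, t0, X0, h, N)
       local X, t, n;
       X := X0; t := t0;
       for n from 1 to N do
            X := X + h*f(t, X);    # one forward Euler step
            t := t + h;
       od;
       X;                          # approximation after N steps
  end:


Calling, for instance, euler((x,y) -> -2*x*y^2, 0, 1, 0.001, 400) gives an approximation at x = 0.4
which can be compared with the h = 0.001 column of the table in the next worked example.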




    Worked Example 2 - The Forward Euler Method


Consider the IVP


                                y' = -2xy^2,        y(0) = 1


We can define this in Maple as


> ivp:={diff(y(x),x)=-2*x*y(x)^2,y(0)=1}:


In this case Maple can find the exact solution using dsolve


> Exactsoln:=rhs(dsolve(ivp,y(x)));


                                 Exactsoln := 1/(x^2 + 1)


Now we use Euler's method to obtain the numerical solution. Note that this method,
like all the other methods of type classical, uses a fixed stepsize which we provide.


> es0:=dsolve(ivp,y(x),type=numeric,
                    method=classical[foreuler],stepsize=0.001):


Thus we can find the solution at x = 0.4


> es0(0.4);


                        [ x , y( x )
                             .4          .8623085097414066 ]


and plot the solution for a range of values of x


> odeplot(es0,[x,y(x)],0..6,labels=[x,y]);




Now often we want to investigate how the solution behaves for differing values of the
stepsize h. In order to do this we can define a different version of the function for the
numerical solution which uses the stepsize as one of the input parameters


> es1:=h->dsolve(ivp,y(x),type=numeric,
                      method=classical[foreuler],stepsize=h):


Thus we can obtain the solution for different stepsizes


> es1(0.001)(0.4);


                        [ x , y( x )
                             .4          .8623085097414066 ]

> es1(0.01)(0.4);


                        [ x , y( x )
                             .4          .8644849248890492 ]


We can compare the solution at different stepsizes by constructing a table of values as
follows:


Firstly define the output points


> x:=k->k*0.1:




 Now define a function which finds the approximate solution at a given output point


> EulerSoln:=(x,h)->rhs(es1(h)(x)[2]):


Define the exact solution


> ExactSoln:=x->1/(1+x^2);


                                 ExactSoln := x -> 1/(1 + x^2)


Construct an array whose elements compare the exact solution to the numerical
solution for three different stepsizes.


> mm:=array(1..8,1..5):
mm[1,1]:=`x(k)`:mm[1,2]:=`Exactsoln`:mm[1,3]:=`h=0.1`:
mm[1,4]:=`h=0.01`:mm[1,5]:=`h=0.001`:
for i from 2 to 8 do
      mm[i,1]:=0.1*(i-2):
      mm[i,2]:=evalf(ExactSoln(x(i-2)),5):
      for j from 3 to 5 do
            mm[i,j]:=evalf(EulerSoln(x(i-2),10^(-j+2)),5)
      od:
od:
> eval(mm);


                      x(k)   Exact soln   h=0.1     h=0.01    h=0.001
                      0      1.           1.        1.        1.
                      .1     .99010       1.        .99107    .99020
                      .2     .96154       .98000    .96330    .96171
                      .3     .91743       .94158    .91969    .91766
                      .4     .86207       .88839    .86448    .86231
                      .5     .80000       .82525    .80229    .80023
                      .6     .73529       .75715    .73727    .73549


    Another possibility is to compare the errors at each stepsize. Firstly define a function
giving the error


> err:=(x,h)->ExactSoln(x)-EulerSoln(x,h):


> tt:=array(1..8,1..4):
tt[1,1]:=`x(k)`:tt[1,2]:=`h=0.1`:tt[1,3]:=`h=0.01`:tt[1,4]:=`h=0.001`:
for i from 2 to 8 do
      tt[i,1]:=0.1*(i-2);
      for j from 2 to 4 do
            tt[i,j]:=evalf(err(x(i-2),10^(-j+1)),5);
      od:
od:


> eval(tt);


             x(k)    h=0.1      h=0.01     h=0.001
             0        0          0          0
             .1      -.00990    -.00097    -.00010
             .2      -.01846    -.00176    -.00017
             .3      -.02415    -.00226    -.00023
             .4      -.02632    -.00241    -.00024
             .5      -.02525    -.00229    -.00023
             .6      -.02186    -.00198    -.00020



     Worked Example 3 - The Classical Second Order Runge-Kutta Method


Consider the solution of the IVP


                                y' = -4y + 4x,        y(0) = 1


> IVP:={diff(y(x),x)=-4*y(x)+4*x,y(0)=1}:




 The exact solution is given by


> dsolve(IVP,y(x));


                                y(x) = x - 1/4 + 5/4 exp(-4 x)


Use the 2nd order classical Runge-Kutta method


> rk2:=h->dsolve(IVP,y(x),type=numeric,method=classical[rk2],stepsize=h):
> x:=k->k*0.5:
> RK2Soln:=(x,h)->rhs(rk2(h)(x)[2]):
> ExactSoln:=x->x-1/4+5/4*exp(-4*x);


                                ExactSoln := x -> x - 1/4 + 5/4 exp(-4 x)


> mm:=array(1..10,1..5):
mm[1,1]:=`x(k)`:mm[1,2]:=`Exactsoln`:mm[1,3]:=`h=0.25`:
mm[1,4]:=`h=0.5`:mm[1,5]:=`h=0.75`:
for i from 2 to 10 do
      mm[i,1]:=0.5*(i-2):
      mm[i,2]:=evalf(ExactSoln(x(i-2)),5):
      for j from 3 to 5 do mm[i,j]:=evalf(RK2Soln(x(i-2),0.25*(j-2)),5) od;
od:
> eval(mm);




   x(k)   Exact soln   h=0.25    h=0.5     h=0.75
   0      1.           1.        1.        1.
   .5     .41918       .56250    1.5000    1.5000
   1.0    .77290       .82813    2.        2.3125
   1.5    1.2531       1.2695    2.5000    9.0625
   2.0    1.7504       1.7549    3.        9.5625
   2.5    2.2501       2.2512    3.5000    12.016
   3.0    2.7500       2.7503    4.        51.578
   3.5    3.2500       3.2501    4.5000    52.078
   4.0    3.7500       3.7500    5.        64.785


Note that as the stepsize is increased the numerical solution fails to represent the exact
solution accurately. Indeed for a stepsize of 0.75 the numerical solution 'blows up'.
This is because the numerical method becomes unstable at this stepsize. We shall
consider this phenomenon next.




 Exercises 1


1.   Use the classical numerical methods foreuler, heunform, rk3, rk4 and adambash
     to attempt to obtain a numerical solution of the IVPs



     (a)                      dx/dt = -2tx^2,        x(0) = 1



     (b)                      dx/dt = 5x(1 - x),     x(0) = 0.5



     Use a range of stepsizes in the interval [0,1]. At what approximate value of the
     stepsize do the methods become unstable?


2.   Use each of the methods above to solve the systems of differential equations


     (a)                      x' = -x + xy,              x(0) = 0.5
                              y' = y - xy,               y(0) = 0.5


     (b)                      x' = y,                    x(0) = 0
                              y' = -x + ε(1 - x^2)y,     y(0) = 0.5


     where ε is a parameter. (Try ε = 0.5, 1, 5, 10.)


     (c)                      x' = -(y + z),             x(0) = 1
                              y' = x + 0.2y,             y(0) = 1
                              z' = 0.2 - 8z + xz,        z(0) = 1


     In each case use odeplot to obtain time series and phase plots.




 5.3     LOCAL AND GLOBAL ERRORS

The output of a discrete variable method is a set of points {t_n, X_n} and the output of
the dynamical system is a continuous trajectory x(t). For the numerical results to
provide a good approximation to the trajectory we require that the difference


                                | X_N - x(t_N) | < ε


where ε is some defined error tolerance, at each solution point. This difference is
called the global error and is the accumulated error over all solution steps.
Unfortunately it is extremely difficult to accomplish this and we have to confine
ourselves to controlling the local error


                                | X̃_n - x(t_n) |


at each step, where X̃_n is the numerical solution obtained on the assumption that the
numerical solution at the previous solution point is exact.


There are two sources of local error, the roundoff error and the truncation error.


Roundoff Error


The roundoff error is the error which arises from the fact that numerical methods are
implemented on digital computers which only calculate results to a fixed precision
which is dependent on the computer system used. Note that roundoff errors
depend only on the number and type of arithmetic operations per step and are thus
independent of the integration stepsize h.


Truncation Error


The truncation error of a numerical method results from the approximation of a
continuous dynamical system by a discrete one. The truncation error is machine
independent, depending only on the algorithm used and the stepsize h.




 An important concept in the analysis of the truncation error is that of consistency.
Basically consistency requires that the discrete variable method becomes an exact
representation of the dynamical system as the stepsize h → 0. Consistency conditions
can be derived for both Linear Multistep and Runge-Kutta methods.


Linear Multistep Methods


Consider the general linear multistep method


              Σ_{j=0}^{k} α_j X_{n+j} = h Σ_{j=0}^{k} β_j f_{n+j},        n = 0, 1, 2, ...



We can define the first characteristic polynomial by


                                ρ(θ) = Σ_{i=0}^{k} α_i θ^i




and the second characteristic polynomial by


                                σ(θ) = Σ_{i=0}^{k} β_i θ^i




We can show that consistency requires that


                               (1)  0,            (1)   (1)


Runge-Kutta Methods


The general pth order Runge-Kutta method can be written in the form




              Y_{n+1} = Y_n + h Σ_{r=1}^{p} c_r k_r

              k_1 = f(t_n, X_n)

              k_r = f(t_n + h a_r, X_n + h Σ_{s=1}^{p} b_{rs} k_s),        r = 2, 3, ..., p
Here we have


                                ρ(θ) = θ - 1


and it can be shown that consistency requires that


                                ρ(1) = 0,        Σ_{r=1}^{p} c_r = 1




   Worked Example 4


Examine the consistency of
(a) the classical 4th order Runge-Kutta method,
(b) the two-step Adams-Bashforth method.


(a)      ρ(θ) = θ - 1,        Σ_{r=1}^{4} c_r = 1/6 + 1/3 + 1/3 + 1/6 = 1,        σ(θ) = 1


thus


                        ρ(1) = 0,        σ(1) = 1 = Σ_{r=1}^{4} c_r




and hence the method is consistent.




(b)      ρ(θ) = θ^2 - θ,        σ(θ) = (3/2)θ - 1/2,        ρ'(θ) = 2θ - 1


thus


                                (1)  0           (1)  1   (1)


and hence the method is consistent.


The accuracy with which a consistent numerical method represents a dynamical
system is determined by the order of consistency. The method of determining this is
best illustrated by an example.


    Worked Example 5


Determine the order of consistency of the Trapezoidal method.


The order of consistency is determined by substituting the exact solution x(t_n) into the
formula of the numerical algorithm and expanding the difference between the two
sides of the formula in a Taylor series. The result is then normalised by multiplying by
the scaling factor 1/(h σ(1)).


                        ρ(θ) = θ - 1,        σ(θ) = (1/2)θ + 1/2,        ρ'(θ) = 1


thus


                                (1)  0           (1)  1   (1)


and the method is consistent. Now the truncation error is given by




              τ_n = (1/(h σ(1))) [ x_{n+1} - x_n - (h/2)( f_n + f_{n+1} ) ]

                  = (1/h) [ x_{n+1} - x_n - (h/2)( x'_n + x'_{n+1} ) ]

                  = (1/h) { x_n + h x'_n + (h^2/2) x''_n + (h^3/6) x'''_n + ...  -  x_n
                                - (h/2)[ x'_n + ( x'_n + h x''_n + (h^2/2) x'''_n + ... ) ] }

                  = -(h^2/12) x'''_n + ...


The order is given by the power of h in the leading term of τ_n. Hence the method is
consistent of order two.




 5.4      ZERO STABILITY


This is a problem peculiar to consistent linear k-step methods in which a first order
dynamical system is integrated using a kth order difference equation. This leads to the
possible existence of spurious solutions of the difference equation which can swamp
the desired solution. In order to avoid this occurring we have to restrict the roots of the
first characteristic polynomial ρ(θ) to satisfy the root condition.


Definition – Root Condition


We say that a linear k-step method satisfies the root condition if the roots of the
characteristic polynomial ρ(θ) all lie within or on the unit circle, those on the unit
circle being simple.


Note that the roots of ρ(θ) may be complex hence the necessity of considering the
unit circle rather than the interval [-1, 1] in the definition.


Theorem – Zero-Stability


A linear k-step method is zero stable if and only if it satisfies the root condition.


We can now state the fundamental theorem concerning convergence:




Theorem - Convergence


A discrete variable method is convergent if and only if it is both consistent and zero
stable.


Often it is desirable for the roots of ρ(θ) to satisfy the strong root condition.




    Definition – Strong Root Condition


A linear k-step method is said to satisfy the strong root condition if the characteristic
polynomial ρ(θ) has a simple root at θ_1 = 1 and all the remaining roots lie strictly within
the unit circle.


The roots θ_i, 1 ≤ i ≤ k, of ρ(θ) for a consistent method satisfying the root condition
can be categorized as


                   θ_1 = 1                                        principal root
                   θ_i,  |θ_i| ≤ 1,  2 ≤ i ≤ k                    spurious roots
                   θ_i,  |θ_i| = 1,  2 ≤ i ≤ m                    essential roots
                   θ_i,  |θ_i| < 1,  m + 1 ≤ i ≤ k                non-essential roots


     Worked Example 6


Show that the Gear method



                     X_{n+1} = (1/3)[ 4 X_n - X_{n-1} ] + (2h/3) f(t_{n+1}, X_{n+1})


is convergent.


Before determining the characteristic polynomials, write the method in the standard form



                             X k 2  4 X k 1  1 X k 
                                      3          3
                                                                  2h
                                                                   3
                                                                        b
                                                                     f t k 2 , X k 2   g

Then


                        ρ(θ) = θ^2 - (4/3)θ + 1/3,        σ(θ) = (2/3)θ^2




 Checking consistency


              ρ(1) = 1 - 4/3 + 1/3 = 0,        ρ'(1) = 2 - 4/3 = 2/3 = σ(1)


The roots of ρ(θ) are given by


                                θ^2 - (4/3)θ + 1/3 = 0

                                (θ - 1)(θ - 1/3) = 0

                                θ = 1, 1/3




and hence the method is zero-stable.


Combining these results we can conclude that the method is convergent.
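The root moduli can also be checked directly in Maple; a small sketch for this method (fsolve
returns the roots of the quadratic and map(abs, ...) their moduli):


> rho := theta -> theta^2 - 4/3*theta + 1/3:
> rts := [fsolve(rho(theta) = 0, theta)];
> map(abs, rts);


which returns moduli of approximately 0.333 and 1, confirming zero-stability.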




 Exercises 2


1. (a)     Show that the method



                 X_{n+2} - (1 - α) X_{n+1} - α X_n = (h/2)[ f_{n+2} + (1 + α) f_{n+1} + α f_n ]


           is consistent and determine the value of α which maximizes the order of
           the method.


     (b)   Find the range of values of α for which the method is zero stable.


2.         Show that Quade’s method



            X n4 
                       8
                          X n3  X n1   X n  6h  f n4  4 f n3  4 f n1  f n 
                      19                           19


           is both consistent and zero stable.


3.         Show that the 3-step Gear method



                        X_{n+3} = (1/11)[ 18 X_{n+2} - 9 X_{n+1} + 2 X_n ] + (6h/11) f_{n+3}


           is both consistent and zero stable.




 5.5     ABSOLUTE STABILITY


So far we have considered the behaviour of numerical methods in the limit as the
stepsize h → 0. However in practice we must deal with finite stepsizes. To illustrate
the problems that might arise consider the mid-point method


                                    X n1  X n 1  2hf n


This is a linear two-step method. In standard form the method is


                                    X n2  X n  2hf n1


thus


                        ρ(θ) = θ^2 - 1,        σ(θ) = 2θ
Checking consistency


              ρ(1) = 1 - 1 = 0,        ρ'(1) = 2 = σ(1)


The roots of ρ(θ) are given by θ = ±1, hence the method is both consistent and zero-
stable and hence convergent.

Now consider the solution of the initial value problem


                                x' = -2tx^2,        x(0) = 1


by the mid-point method using a stepsize h = 0.1. Using Maple we define the initial
value problem


> f:=(t,x)->-2*t*x(t)^2:x0:=1:
> IVP:={diff(x(t),t)=f(t,x),x(0)=x0}:
> FCN:={x(t)}:



We can find the exact solution


> Exact:=rhs(dsolve(IVP,FCN));


                                Exact := 1/(t^2 + 1)


The mid-point method requires a starting value which can be obtained from the
classical fourth order Runge-Kutta method


> sv:=h->dsolve(IVP,FCN,type=numeric,method=classical[rk4],stepsize=h):
> SV:=(t,h)->rhs(sv(h)(t)[2]):


Now define the mid-point method


> midpt:=proc(n,h) option remember;
                 if n=1 then SV(h,h) elif n=2 then x0+2*h*f(h,SV(h,h))
                 else midpt(n-2,h)+2*h*f((n-1)*h,midpt(n-1,h))fi;
       end:


Obtain a plot of the solution


> plotmidpt:=proc(N,h)
       local l,i;
        l:=[];
        for i from 1 to N do
                 l:=[op(l),i*h,midpt(i,h)];
        od;
        pointplot(l,connect=true);
        end:


Finally plot both the numerical and exact solutions




 > with(plots):
> p1:=plotmidpt(50,0.1):
> p2:=plot(Exact,t=0..5):
> display({p1,p2});




Notice that the numerical solution becomes increasingly inaccurate, oscillating about
the exact solution, as t increases. This happens because the behaviour of the
numerical solution does not mimic that of the exact solution. In this case the problem
arises because of a spurious solution of the difference equation corresponding to the
root θ = -1 of ρ(θ). However the problem can also arise in one-step methods which
have no spurious solutions.


Consider the Linear Multistep method.


              Σ_{i=0}^{k} α_i Y_{n+i} = h Σ_{i=0}^{k} β_i f_{n+i}




applied to the test equation


                                y' = λy


On substitution into the method we obtain




              Σ_{i=0}^{k} α_i Y_{n+i} = hλ Σ_{i=0}^{k} β_i Y_{n+i}




Thus


              Σ_{i=0}^{k} α_i Y_{n+i} - hλ Σ_{i=0}^{k} β_i Y_{n+i} = 0




Let Y_n = θ^n, then


              Σ_{i=0}^{k} α_i θ^i - hλ Σ_{i=0}^{k} β_i θ^i = 0




Hence


                                 ( ; h )  ( )  h ( )  0


 ( ; h ) is called the stability polynomial of the method. Now one of the roots
 1 (h ) will correspond to the true solution, the other roots will lead to spurious
solutions whose magnitude will have to be controlled to obtain stability.


Definition – Absolute Stability


A numerical method is said to be absolutely stable for a given λh if all the roots of
π(θ; λh) lie within the unit circle.


A region R_A of the complex plane is said to be a region of absolute stability if the
method is absolutely stable for all λh in R_A.




      Worked Example 7


Find and sketch the region of absolute stability for
(a) Euler's method,
(b) Trapezoidal method.


(a) For Euler's method


                                  ( ; h )    (1  h )


Thus


                                | 1 + λh | < 1


R_A is shown below; it is the interior of the circle of radius 1 centred at λh = -1, which
meets the real axis in the interval (-2, 0).

(b) For the Trapezoidal method


                                        F h I  F h I
                                        G 2J G 2J
                             ( ; h )  1 
                                        H KH K    1



Thus




                        | 1 + λh/2 | / | 1 - λh/2 | < 1


giving the region R_A shown below: the left half-plane Re(λh) < 0.




For Runge-Kutta methods the stability polynomial has the form


                        π(θ; λh) = θ - 1 - λh σ_0(λh)


where σ_0(λh) is a polynomial in λh for an explicit method and a rational function for an
implicit method.




      Worked Example 8


Find and sketch the absolute stability region for the second order Runge-Kutta method



                        Y_{n+1} = Y_n + (h/2)( k_1 + k_2 )



where


                        k_1 = f(x_n, Y_n)
                        k_2 = f(x_n + h, Y_n + h k_1)




Substituting into the test equation

                        k_1 = f(x_n, Y_n) = λ Y_n
                        k_2 = f(x_n + h, Y_n + h k_1) = λ Y_n + h λ^2 Y_n



Thus


                        Y_{n+1} = Y_n + hλ Y_n + (h^2 λ^2 / 2) Y_n

                                = ( 1 + λh + λ^2 h^2 / 2 ) Y_n


Hence the stability polynomial is given by


                        π(θ; λh) = θ - ( 1 + λh + λ^2 h^2 / 2 )
For absolute stability we require that


                        | 1 + λh + λ^2 h^2 / 2 | < 1




In order to draw the region of absolute stability consider the boundary ∂R_A of R_A.
The locus of this boundary will be the set of complex numbers z such that


                                | 1 + z + z^2/2 | = 1


Thus


                        1 + z + z^2/2 = e^{iφ},        0 ≤ φ ≤ 2π


In order to obtain the region we need to plot the roots of the quadratic equation


                                z^2 + 2z + 2 - 2e^{iφ} = 0


for φ in the range 0 ≤ φ ≤ 2π. This is best done on a computer. The resulting stability
region is shown below:




The method outlined above is an example of the boundary locus method which is
easily implemented for Linear Multistep methods as follows. The stability polynomial
is



                        π(θ; λh) = ρ(θ) - λh σ(θ) = 0


and hence


                                                          ( )
                                                  h 
                                                          ( )


but on ∂R_A


                        |θ| = 1,    thus    θ = e^{iφ},        0 ≤ φ ≤ 2π


Hence the locus of the boundary ∂R_A is given by the set of complex numbers z
satisfying


                                z = ρ(e^{iφ}) / σ(e^{iφ}),        0 ≤ φ ≤ 2π
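As a simple illustration, the boundary locus for the two-step Adams-Bashforth method of Worked
Example 4(b), for which ρ(θ) = θ^2 - θ and σ(θ) = (3θ - 1)/2, can be traced with a few lines of Maple
(a sketch in the same style as the Appendix 2 worksheets):


> with(plots):
> rho := theta -> theta^2 - theta:
> sigma := theta -> (3*theta - 1)/2:
> bl := subs(theta = exp(I*phi), rho(theta)/sigma(theta)):
> complexplot(bl, phi = 0..2*Pi, numpoints = 500);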




   Worked Example 9


Find the region of absolute stability for the Gear method



                     X_{n+1} = (1/3)[ 4 X_n - X_{n-1} ] + (2h/3) f(t_{n+1}, X_{n+1})




From Worked Example 6


                        ρ(θ) = θ^2 - (4/3)θ + 1/3,        σ(θ) = (2/3)θ^2




Thus the stability polynomial is given by


                           ( ; h )   2  4   1  2 h 2  0
                                              3     3   3




Substituting z = λh and solving


                                z = (3θ^2 - 4θ + 1) / (2θ^2)


Now substitute θ = e^{iφ} to obtain


              z = (3e^{2iφ} - 4e^{iφ} + 1) / (2e^{2iφ}) = 3/2 - 2e^{-iφ} + (1/2) e^{-2iφ}


which gives the plot




In order to determine whether R_A is the interior or exterior of the closed curve choose
a point inside the curve, say z = 1, and evaluate the roots of π(θ; λh):


                        θ^2 - (4/3)θ + 1/3 - (2/3)θ^2 = 0

                        θ^2 - 4θ + 1 = 0

                        θ = (4 ± √(4^2 - 4)) / 2 = 2 ± √3


and we see that one of the roots, 2 + √3, has modulus greater than one and hence R_A
must consist of the exterior of the closed curve.




    Worked Example 10


Show that the mid-point method


                        X_{n+1} = X_{n-1} + 2h f(t_n, X_n)


has an empty region of absolute stability.


From above


                        ρ(θ) = θ^2 - 1,        σ(θ) = 2θ


Thus the stability polynomial is given by


                              ( ; h )   2  1  2h  0


Substituting z = λh and solving


                                             2 1
                                        z
                                              2


Now substitute θ = e^{iφ} to obtain


              z = (e^{2iφ} - 1) / (2e^{iφ}) = (e^{iφ} - e^{-iφ}) / 2 = i sin φ


which does not bound any region of the complex plane. Hence R_A is empty.




 Exercises 3


1.        Show, using the boundary locus method, that Quade’s method



           X n4 
                      8
                         X n3  X n1   X n  6h  f n4  4 f n3  4 f n1  f n 
                     19                           19


          has no real region of absolute stability.


2.        Determine the regions of absolute stability of the methods



          (a)     X_{n+3} = X_{n+2} + (h/12)[ 23 f_{n+2} - 16 f_{n+1} + 5 f_n ]

          (b)     X_{n+2} = X_{n+1} + (h/12)[ 5 f_{n+2} + 8 f_{n+1} - f_n ]

          (c)     X_{n+3} = (1/11)[ 18 X_{n+2} - 9 X_{n+1} + 2 X_n ] + (6h/11) f_{n+3}

          (d)     X_{n+4} = (1/25)[ 48 X_{n+3} - 36 X_{n+2} + 16 X_{n+1} - 3 X_n ] + (12h/25) f_{n+4}




    APPENDIX 1




ANSWERS AND HINTS
FOR THE EXERCISES




 Methods

Exercises 1

1. Use the method outlined in section to obtain tables comparing the exact and
   numerical solutions for a variety of stepsizes.

2. Use the method outlined in section to obtain solutions and plots with the odeplot
   command.


Consistency and Zero Stability

Exercises 2

1. (a) Truncation error is -(1/12)(α + 1) x'''(t_n) h^2 - (1/24)(α + 3) x^(4)(t_n) h^3 + O(h^4), hence maximum
       order when α = -1.
   (b) Method is zero stable if -1 ≤ α < 1.

2.  ρ(θ) = θ^4 - (8/19)θ^3 + (8/19)θ - 1,   σ(θ) = (6/19)θ^4 + (24/19)θ^3 + (24/19)θ + 6/19,   hence
    ρ(1) = 0,   ρ'(1) = 60/19 = σ(1),   and the roots of ρ(θ) are 1, -1, 4/19 ± (√345/19) i, which all have
    absolute value one.

3.  ρ(θ) = θ^3 - (18/11)θ^2 + (9/11)θ - 2/11,   σ(θ) = (6/11)θ^3,   hence ρ(1) = 0,   ρ'(1) = 6/11 = σ(1),   and the
    roots of ρ(θ) are 1, 7/22 ± (√39/22) i, which all have absolute value less than or equal to
    one.


Absolute Stability

Exercises 3


1. Boundary locus gives z = (i/3) sin φ (19 cos φ - 4) / (2 cos^2 φ + 4 cos φ - 1).

2. (a) Region inside




 (b) Region inside




(c) Region outside




(d) Region outside




    APPENDIX 2




MAPLE WORK SHEETS




 Worked Example 5 - Determining The Order of a Numerical Method

> restart:

Define a function to represent the exact solution

> x:=t->x(t):

Use some aliases for the derivatives of x

> alias(x1=D(x),x2=(D@@2)(x),x3=(D@@3)(x),x4=(D@@4)(x),
    x5=(D@@5)(x),x6=(D@@6)(x),x7=(D@@7)(x),x8=(D@@8)(x)):

Use the taylor command to obtain a Taylor series expansion of x(tn+h)

> ts:=taylor(x(t),t=tn,9):

Replace t-tn by h in the expansion

> tsh:=subs(t-tn=h,ts):

Convert to a polynomial so that we can perform the algebra later

> p:=unapply(convert(tsh,polynom),h):

Repeat above for x'(tn+h)

> dts:=taylor(x1(t),t=tn,8):
> dtsh:=subs(t-tn=h,dts):
> dp:=unapply(convert(dtsh,polynom),h):

Euler's Method

> simplify(p(h)-x(tn)-h*x1(tn));

Trapezoidal Rule

> simplify(p(h)-x(tn)-h*(x1(tn)+dp(h))/2);

Quade's Method

> simplify(p(4*h)-8*(p(3*h)-p(h))/19-x(tn)-
    6*h*(dp(4*h)+4*dp(3*h)+4*dp(h)+x1(tn))/19);




 Worked Example 9


> restart:
> with(plots):


Define the characteristic polynomials


> rho:=theta->theta^2-4*theta/3+1/3;
> sigma:=theta->2/3*theta^2;


and the stability polynomial


> pi:=theta->rho(theta)-lambda*h*sigma(theta);


Substitute z = h*lambda


> piz:=theta->rho(theta)-z*sigma(theta);


Define the boundary of the region


> rat:=solve(piz(theta),z);
> rat1:=subs(theta=exp(I*phi),rat);


Plot the boundary


> complexplot(rat1,phi=0..2*Pi,numpoints=500);


Check whether region is inside or outside closed curve


> rs:=solve(subs(z=1,piz(theta))=0,theta);


Must be outside.




 Worked Example 10


> restart:
> with(plots):


Define the characteristic polynomials


> rho:=theta->theta^2-1;
> sigma:=theta->2*theta;


Find the stability polynomial


> pi:=theta->rho(theta)-lambda*h*sigma(theta);
> piz:=theta->rho(theta)-z*sigma(theta);


Solve for z


> rat:=solve(piz(theta),z);


Substitute theta = e^(i*phi);


> rat1:=subs(theta=exp(I*phi),rat);


Simplify the result


> simplify(rat1);


Thus empty region of absolute stability.



