Lecture 04: Gaussian Elimination, LU Decomposition, Gauss-Seidel Method
					ES 84 – Numerical Methods
Gaussian Elimination, LU Decomposition & Gauss-Seidel

Stephen H. Haim Computer Engineering Dept./EECE

Gaussian Elimination


Naïve Gaussian Elimination
One of the most popular techniques for solving simultaneous linear equations of the form $[A][X] = [C]$.

It consists of two steps:

1. Forward Elimination of Unknowns

2. Back Substitution

Forward Elimination
The goal of Forward Elimination is to transform the coefficient matrix into an upper triangular matrix. For example:

$$\begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \end{bmatrix} \rightarrow \begin{bmatrix} 25 & 5 & 1 \\ 0 & -4.8 & -1.56 \\ 0 & 0 & 0.7 \end{bmatrix}$$

Forward Elimination
Linear Equations
A set of n equations and n unknowns:

$$a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \dots + a_{1n}x_n = b_1$$
$$a_{21}x_1 + a_{22}x_2 + a_{23}x_3 + \dots + a_{2n}x_n = b_2$$
$$\vdots$$
$$a_{n1}x_1 + a_{n2}x_2 + a_{n3}x_3 + \dots + a_{nn}x_n = b_n$$

Forward Elimination
Transform to an Upper Triangular Matrix
Step 1: Eliminate x_1 in the 2nd equation using equation 1 as the pivot equation. Multiply equation 1 by $\frac{a_{21}}{a_{11}}$:

$$\text{Eqn 1} \times \frac{a_{21}}{a_{11}}$$

which will yield

$$a_{21}x_1 + \frac{a_{21}}{a_{11}}a_{12}x_2 + \dots + \frac{a_{21}}{a_{11}}a_{1n}x_n = \frac{a_{21}}{a_{11}}b_1$$

Forward Elimination
Zeroing out the coefficient of x_1 in the 2nd equation: subtract this equation from the 2nd equation to get

$$\left(a_{22} - \frac{a_{21}}{a_{11}}a_{12}\right)x_2 + \dots + \left(a_{2n} - \frac{a_{21}}{a_{11}}a_{1n}\right)x_n = b_2 - \frac{a_{21}}{a_{11}}b_1$$

or

$$a'_{22}x_2 + \dots + a'_{2n}x_n = b'_2$$

where

$$a'_{22} = a_{22} - \frac{a_{21}}{a_{11}}a_{12}, \quad \dots, \quad a'_{2n} = a_{2n} - \frac{a_{21}}{a_{11}}a_{1n}, \quad b'_2 = b_2 - \frac{a_{21}}{a_{11}}b_1$$

Forward Elimination
Repeat this procedure for the remaining equations to reduce the set of equations to

$$a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \dots + a_{1n}x_n = b_1$$
$$a'_{22}x_2 + a'_{23}x_3 + \dots + a'_{2n}x_n = b'_2$$
$$a'_{32}x_2 + a'_{33}x_3 + \dots + a'_{3n}x_n = b'_3$$
$$\vdots$$
$$a'_{n2}x_2 + a'_{n3}x_3 + \dots + a'_{nn}x_n = b'_n$$

Forward Elimination
Step 2: Eliminate x_2 in the 3rd equation. This is equivalent to eliminating x_1 in the 2nd equation, now using equation 2 as the pivot equation:

$$\text{Eqn 3} - \frac{a'_{32}}{a'_{22}} \times \text{Eqn 2}$$

Forward Elimination
This procedure is repeated for the remaining equations to reduce the set of equations to

$$a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \dots + a_{1n}x_n = b_1$$
$$a'_{22}x_2 + a'_{23}x_3 + \dots + a'_{2n}x_n = b'_2$$
$$a''_{33}x_3 + \dots + a''_{3n}x_n = b''_3$$
$$\vdots$$
$$a''_{n3}x_3 + \dots + a''_{nn}x_n = b''_n$$

Forward Elimination
Continue this procedure, using the third equation as the pivot equation, and so on. At the end of (n−1) Forward Elimination steps, the system of equations will look like:

$$a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \dots + a_{1n}x_n = b_1$$
$$a'_{22}x_2 + a'_{23}x_3 + \dots + a'_{2n}x_n = b'_2$$
$$a''_{33}x_3 + \dots + a''_{3n}x_n = b''_3$$
$$\vdots$$
$$a^{(n-1)}_{nn}x_n = b^{(n-1)}_n$$

Forward Elimination
At the end of the Forward Elimination steps

$$\begin{bmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\ 0 & a'_{22} & a'_{23} & \cdots & a'_{2n} \\ 0 & 0 & a''_{33} & \cdots & a''_{3n} \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & a^{(n-1)}_{nn} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} b_1 \\ b'_2 \\ b''_3 \\ \vdots \\ b^{(n-1)}_n \end{bmatrix}$$

Back Substitution
The goal of Back Substitution is to solve each of the equations using the upper triangular matrix.

a11 0  0 

a12 a22 0

a13   x1   b1    x   b  a23   2   2  a33   x 3  b3     

Example of a system of 3 equations

Back Substitution
Start with the last equation because it has only one unknown:

$$x_n = \frac{b_n^{(n-1)}}{a_{nn}^{(n-1)}}$$

Then solve the second-from-last ((n−1)th) equation using the value of x_n just found; this solves for x_{n−1}.

Back Substitution
Representing Back Substitution for all equations by formula:

$$x_i = \frac{b_i^{(i-1)} - \sum_{j=i+1}^{n} a_{ij}^{(i-1)} x_j}{a_{ii}^{(i-1)}} \quad \text{for } i = n-1, n-2, \dots, 1$$

and

$$x_n = \frac{b_n^{(n-1)}}{a_{nn}^{(n-1)}}$$
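The two steps above translate directly into a short program. Below is a minimal Python sketch of Naïve Gaussian Elimination, written to mirror the formulas on this slide; the function name and the use of plain nested lists are illustrative choices, not part of the lecture.

def naive_gauss(A, b):
    # Solve [A][X] = [C] by Naive Gaussian Elimination (no pivoting).
    # A (n x n) and b (length n) are modified in place.
    n = len(b)
    # Forward Elimination: zero out column k below the pivot row k
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]  # multiplier a_ik / a_kk; fails if the pivot is zero
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # Back Substitution: solve the upper triangular system from the bottom up
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

# The rocket example from the next slides; prints approximately [0.2905, 19.69, 1.086]
print(naive_gauss([[25.0, 5.0, 1.0], [64.0, 8.0, 1.0], [144.0, 12.0, 1.0]],
                  [106.8, 177.2, 279.2]))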

Example: Rocket Velocity
The upward velocity of a rocket is given at three different times:

Time, t (s)    Velocity, v (m/s)
5              106.8
8              177.2
12             279.2

The velocity data is approximated by a polynomial:

$$v(t) = a_1 t^2 + a_2 t + a_3, \quad 5 \le t \le 12$$

Find the velocity at t = 6, 7.5, 9, and 11 seconds.

Example: Rocket Velocity
Assume

$$v(t) = a_1 t^2 + a_2 t + a_3, \quad 5 \le t \le 12.$$

This results in a matrix template of the form:

$$\begin{bmatrix} t_1^2 & t_1 & 1 \\ t_2^2 & t_2 & 1 \\ t_3^2 & t_3 & 1 \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix}$$

Using data from the time/velocity table, the matrix becomes:

$$\begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} 106.8 \\ 177.2 \\ 279.2 \end{bmatrix}$$

Example: Rocket Velocity
Forward Elimination: Step 1

 Row1 Row2     (64 )   25 
Yields

5 1   a 1   106.81   25  0  4.8  1.56 a    96.21    2   144 12 1  a 3   279.2      

Example: Rocket Velocity
Forward Elimination: Step 1

 Row1 Row3     (144 )   25 
Yields

5 1   a 1   106.8  25  0  4.8  1.56  a     96.21    2    0  16.8  4.76 a 3   336.0     

Example: Rocket Velocity
Forward Elimination: Step 2

$$\text{Row 3} - \frac{-16.8}{-4.8} \times \text{Row 2}$$

yields

$$\begin{bmatrix} 25 & 5 & 1 \\ 0 & -4.8 & -1.56 \\ 0 & 0 & 0.7 \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} 106.8 \\ -96.21 \\ 0.735 \end{bmatrix}$$
This is now ready for Back Substitution

Example: Rocket Velocity
Back Substitution: Solve for a3 using the third equation

$$0.7 a_3 = 0.735$$
$$a_3 = \frac{0.735}{0.7} = 1.050$$

Example: Rocket Velocity
Back Substitution: Solve for a2 using the second equation

$$-4.8 a_2 - 1.56 a_3 = -96.21$$
$$a_2 = \frac{-96.21 + 1.56 a_3}{-4.8} = \frac{-96.21 + 1.56(1.050)}{-4.8} = 19.70$$

Example: Rocket Velocity
Back Substitution: Solve for a1 using the first equation

$$25 a_1 + 5 a_2 + a_3 = 106.8$$
$$a_1 = \frac{106.8 - 5 a_2 - a_3}{25} = \frac{106.8 - 5(19.70) - 1.050}{25} = 0.2900$$

Example: Rocket Velocity
Solution:
The solution vector is

$$\begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} 0.2900 \\ 19.70 \\ 1.050 \end{bmatrix}$$

The polynomial that passes through the three data points is then:

$$v(t) = a_1 t^2 + a_2 t + a_3 = 0.2900\,t^2 + 19.70\,t + 1.050, \quad 5 \le t \le 12$$

Example: Rocket Velocity
Solution:
Substitute each value of t to find the corresponding velocity:

$$v(6) = 0.2900(6)^2 + 19.70(6) + 1.050 = 129.69 \text{ m/s}$$
$$v(7.5) = 0.2900(7.5)^2 + 19.70(7.5) + 1.050 = 165.1 \text{ m/s}$$
$$v(9) = 0.2900(9)^2 + 19.70(9) + 1.050 = 201.8 \text{ m/s}$$
$$v(11) = 0.2900(11)^2 + 19.70(11) + 1.050 = 252.8 \text{ m/s}$$

Pitfalls
Two Potential Pitfalls

- Division by zero: may occur in the forward elimination steps. Consider the set of equations:

$$10 x_2 - 7 x_3 = 7$$
$$6 x_1 + 2.099 x_2 + 3 x_3 = 3.901$$
$$5 x_1 - x_2 + 5 x_3 = 6$$

Here the coefficient of x_1 in the first equation is zero (a_11 = 0), so the very first elimination step divides by zero.

- Round-off error: Gaussian Elimination is prone to round-off errors.

Pitfalls: Example
Consider the system of equations:

$$\begin{bmatrix} 10 & -7 & 0 \\ -3 & 2.099 & 6 \\ 5 & -1 & 5 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 7 \\ 3.901 \\ 6 \end{bmatrix}$$

Use five significant figures with chopping. At the end of Forward Elimination:

$$\begin{bmatrix} 10 & -7 & 0 \\ 0 & -0.001 & 6 \\ 0 & 0 & 15005 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 7 \\ 6.001 \\ 15004 \end{bmatrix}$$

Pitfalls: Example
Back Substitution
7 0   x1   7  10  0  0.001 6   x 2    6.001      0 0 15005  x3  15004     

x3 

15004  0.99993 15005

6.001  6 x3 x2   1.5  0.001

7  7 x 2  0 x3 x1   0.3500 10

Pitfalls: Example
Compare the calculated values with the exact solution:

$$[X]_{exact} = \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix} \qquad [X]_{calculated} = \begin{bmatrix} -0.35 \\ -1.5 \\ 0.99993 \end{bmatrix}$$

Improvements
Increase the number of significant digits:
- decreases round-off error
- does not avoid division by zero

Gaussian Elimination with Partial Pivoting:
- avoids division by zero
- reduces round-off error

Partial Pivoting
Gaussian Elimination with partial pivoting applies row switching to normal Gaussian Elimination.

How?
At the beginning of the kth step of forward elimination, find the maximum of

$$|a_{kk}|, \; |a_{k+1,k}|, \; \dots, \; |a_{nk}|$$

If the maximum of these values is $|a_{pk}|$ in the pth row, with $k \le p \le n$, then switch rows p and k.

Partial Pivoting
What does it Mean?
Gaussian Elimination with Partial Pivoting ensures that each step of Forward Elimination is performed with the pivot element $a_{kk}$ of largest possible absolute value.
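As a sketch of the rule just stated, the only change to the naive elimination loop shown earlier is a search for the largest pivot and a row swap at the start of each step k. This is a minimal Python illustration, not the lecture's own code.

def gauss_partial_pivot(A, b):
    # Gaussian Elimination with Partial Pivoting; A and b are modified in place.
    n = len(b)
    for k in range(n - 1):
        # Find the row p (k <= p < n) with the largest |a_pk| and swap it into row k
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        if p != k:
            A[k], A[p] = A[p], A[k]
            b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # Back Substitution, exactly as in the naive version
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x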

Partial Pivoting: Example
Consider the system of equations

10x1  7 x 2  7  3x1  2.099x 2  3x 3  3.901 5x 1  x 2  5x 3  6
In matrix form

7 0  x1   7   10 3.901  3 2.099 6  x      2 =   6  5  1 5  x3      
Solve using Gaussian Elimination with Partial Pivoting using five significant digits with chopping

Partial Pivoting: Example
Forward Elimination: Step 1
Examining the values of the first column, |10|, |−3|, and |5|, or 10, 3, and 5, the largest absolute value is 10. To follow the rules of Partial Pivoting, we switch row 1 with row 1 (i.e., no switch is needed).

Performing Forward Elimination:

$$\begin{bmatrix} 10 & -7 & 0 \\ -3 & 2.099 & 6 \\ 5 & -1 & 5 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 7 \\ 3.901 \\ 6 \end{bmatrix} \rightarrow \begin{bmatrix} 10 & -7 & 0 \\ 0 & -0.001 & 6 \\ 0 & 2.5 & 5 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 7 \\ 6.001 \\ 2.5 \end{bmatrix}$$

Partial Pivoting: Example
Forward Elimination: Step 2
Examining the values below the pivot in the second column, |−0.001| and |2.5|, or 0.001 and 2.5, the largest absolute value is 2.5, so row 2 is switched with row 3.

Performing the row swap:

$$\begin{bmatrix} 10 & -7 & 0 \\ 0 & -0.001 & 6 \\ 0 & 2.5 & 5 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 7 \\ 6.001 \\ 2.5 \end{bmatrix} \rightarrow \begin{bmatrix} 10 & -7 & 0 \\ 0 & 2.5 & 5 \\ 0 & -0.001 & 6 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 7 \\ 2.5 \\ 6.001 \end{bmatrix}$$

Partial Pivoting: Example
Forward Elimination: Step 2

Performing the Forward Elimination results in:

$$\begin{bmatrix} 10 & -7 & 0 \\ 0 & 2.5 & 5 \\ 0 & 0 & 6.002 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 7 \\ 2.5 \\ 6.002 \end{bmatrix}$$

Partial Pivoting: Example
Back Substitution
Solving the equations through back substitution:

$$\begin{bmatrix} 10 & -7 & 0 \\ 0 & 2.5 & 5 \\ 0 & 0 & 6.002 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 7 \\ 2.5 \\ 6.002 \end{bmatrix}$$

$$x_3 = \frac{6.002}{6.002} = 1$$
$$x_2 = \frac{2.5 - 5 x_3}{2.5} = -1$$
$$x_1 = \frac{7 + 7 x_2 - 0 x_3}{10} = 0$$

Partial Pivoting: Example
Compare the calculated and exact solutions. The fact that they are exactly equal is a coincidence, but it does illustrate the advantage of Partial Pivoting:

$$[X]_{calculated} = \begin{bmatrix} 0 \\ -1 \\ 1 \end{bmatrix} \qquad [X]_{exact} = \begin{bmatrix} 0 \\ -1 \\ 1 \end{bmatrix}$$

Summary
- Forward Elimination
- Back Substitution
- Pitfalls
- Improvements
- Partial Pivoting

LU Decomposition


LU Decomposition
LU Decomposition is another method to solve a set of simultaneous linear equations

Which is better, Gauss Elimination or LU Decomposition?

To answer this, a closer look at LU decomposition is needed.

LU Decomposition
Method
For any non-singular matrix [A] on which the forward elimination steps of Naïve Gauss Elimination can be conducted, one can always write

$$[A] = [L][U]$$

where

[L] = lower triangular matrix
[U] = upper triangular matrix

LU Decomposition
Proof
Suppose we are solving a set of linear equations $[A][X] = [C]$. If $[A] = [L][U]$, then

$$[L][U][X] = [C]$$

Multiplying by $[L]^{-1}$ gives

$$[L]^{-1}[L][U][X] = [L]^{-1}[C]$$

Remember that $[L]^{-1}[L] = [I]$, which leads to

$$[I][U][X] = [L]^{-1}[C]$$

and since $[I][U] = [U]$,

$$[U][X] = [L]^{-1}[C]$$

Now let $[L]^{-1}[C] = [Z]$, which ends with

$$[L][Z] = [C] \quad (1)$$
$$[U][X] = [Z] \quad (2)$$

LU Decomposition
How can this be used?
Given $[A][X] = [C]$:

1. Decompose [A] into [L] and [U]
2. Solve $[L][Z] = [C]$ for [Z]
3. Solve $[U][X] = [Z]$ for [X]

LU Decomposition
How is this better or faster than Gauss Elimination?
Let's look at computational time, with n = number of equations.

To decompose [A], the time is proportional to $\dfrac{n^3}{3}$.

To solve $[L][Z] = [C]$ and then $[U][X] = [Z]$, the time is proportional to $\dfrac{n^2}{2}$ each.

LU Decomposition
Therefore, the total computational time for LU Decomposition is proportional to

$$\frac{n^3}{3} + 2\left(\frac{n^2}{2}\right) = \frac{n^3}{3} + n^2$$

while the Gauss Elimination computation time is proportional to

$$\frac{n^3}{3} + \frac{n^2}{2}$$

How is this better?

LU Decomposition
What about a situation where the [C] vector changes?
The LU decomposition of [A] is independent of the [C] vector, so it only needs to be done once. Let m = the number of times the [C] vector changes. The computational times are proportional to

$$\text{LU Decomposition: } \frac{n^3}{3} + m\,n^2 \qquad \text{Gauss Elimination: } m\left(\frac{n^3}{3} + \frac{n^2}{2}\right)$$

Consider a set of 100 equations with 50 right-hand-side vectors:

LU Decomposition $= 8.33 \times 10^5$, Gauss Elimination $= 1.69 \times 10^7$
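These totals are easy to reproduce. A quick Python check for n = 100 and m = 50, taking the proportionality expressions on this slide at face value (constants of proportionality omitted, as in the slide):

n, m = 100, 50
lu = n**3 / 3 + m * n**2             # decompose once, then m pairs of triangular solves
gauss = m * (n**3 / 3 + n**2 / 2)    # full elimination repeated for every [C] vector
print(f"LU: {lu:.3g}, Gauss: {gauss:.3g}")   # LU: 8.33e+05, Gauss: 1.69e+07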

LU Decomposition
Another Advantage
Finding the Inverse of a Matrix

LU Decomposition: $\dfrac{n^3}{3} + n(n^2) = \dfrac{4n^3}{3}$

Gauss Elimination: $n\left(\dfrac{n^3}{3} + \dfrac{n^2}{2}\right) = \dfrac{n^4}{3} + \dfrac{n^3}{2}$

For large values of n,

$$\frac{n^4}{3} + \frac{n^3}{2} \gg \frac{4n^3}{3}$$

LU Decomposition
Method: [A] decomposes into [L] and [U]

$$[A] = [L][U] = \begin{bmatrix} 1 & 0 & 0 \\ \ell_{21} & 1 & 0 \\ \ell_{31} & \ell_{32} & 1 \end{bmatrix} \begin{bmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \end{bmatrix}$$

[U] is the same as the coefficient matrix at the end of the forward elimination step. [L] is obtained using the multipliers that were used in the forward elimination process
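This construction is compact in code: forward elimination produces [U], and each multiplier is stored into [L] as it is computed. The following is a minimal Python sketch (Doolittle form, no pivoting); the names are illustrative, not the lecture's own code.

def lu_decompose(A):
    # Returns (L, U) with L unit lower triangular and U upper triangular, A = L U.
    n = len(A)
    U = [row[:] for row in A]      # U starts as a copy of A
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i][k] = U[i][k] / U[k][k]       # store the elimination multiplier
            for j in range(k, n):
                U[i][j] -= L[i][k] * U[k][j]  # the usual forward elimination row operation
    return L, U

# For the matrix used on the next slides this reproduces
# L = [[1, 0, 0], [2.56, 1, 0], [5.76, 3.5, 1]] and
# U = [[25, 5, 1], [0, -4.8, -1.56], [0, 0, 0.7]]
L, U = lu_decompose([[25.0, 5.0, 1.0], [64.0, 8.0, 1.0], [144.0, 12.0, 1.0]])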

LU Decomposition
Finding the [U] matrix
Using the Forward Elimination Procedure of Gauss Elimination

 25 5 1  64 8 1   144 12 1   5 1   25  Row1 Row 2    (64)   0  4.8  1.56    25   144 12 1    5 1  25  Row1 Row 3    (144)   0  4.8  1.56     25    0  16.8  4.76  

LU Decomposition
Finding the [U] matrix
Using the Forward Elimination Procedure of Gauss Elimination
$$\begin{bmatrix} 25 & 5 & 1 \\ 0 & -4.8 & -1.56 \\ 0 & -16.8 & -4.76 \end{bmatrix} \xrightarrow{\text{Row 3} - \frac{-16.8}{-4.8}\text{Row 2}} \begin{bmatrix} 25 & 5 & 1 \\ 0 & -4.8 & -1.56 \\ 0 & 0 & 0.7 \end{bmatrix}$$

$$[U] = \begin{bmatrix} 25 & 5 & 1 \\ 0 & -4.8 & -1.56 \\ 0 & 0 & 0.7 \end{bmatrix}$$

LU Decomposition
Finding the [L] matrix
Using the multipliers used during the Forward Elimination Procedure
From the first step of forward elimination:

$$[L] = \begin{bmatrix} 1 & 0 & 0 \\ \ell_{21} & 1 & 0 \\ \ell_{31} & \ell_{32} & 1 \end{bmatrix}$$

$$\ell_{21} = \frac{a_{21}}{a_{11}} = \frac{64}{25} = 2.56$$
$$\ell_{31} = \frac{a_{31}}{a_{11}} = \frac{144}{25} = 5.76$$

From the second step of forward elimination:

$$\begin{bmatrix} 25 & 5 & 1 \\ 0 & -4.8 & -1.56 \\ 0 & -16.8 & -4.76 \end{bmatrix} \qquad \ell_{32} = \frac{a'_{32}}{a'_{22}} = \frac{-16.8}{-4.8} = 3.5$$

LU Decomposition
$$[L] = \begin{bmatrix} 1 & 0 & 0 \\ 2.56 & 1 & 0 \\ 5.76 & 3.5 & 1 \end{bmatrix}$$

Does $[L][U] = [A]$?

$$[L][U] = \begin{bmatrix} 1 & 0 & 0 \\ 2.56 & 1 & 0 \\ 5.76 & 3.5 & 1 \end{bmatrix} \begin{bmatrix} 25 & 5 & 1 \\ 0 & -4.8 & -1.56 \\ 0 & 0 & 0.7 \end{bmatrix} = \begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \end{bmatrix} = [A]$$

LU Decomposition
Example: Solving simultaneous linear equations using LU Decomposition
Solve the following set of linear equations using LU Decomposition:

$$\begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} 106.8 \\ 177.2 \\ 279.2 \end{bmatrix}$$

Using the procedure for finding the [L] and [U] matrices:

$$[A] = [L][U] = \begin{bmatrix} 1 & 0 & 0 \\ 2.56 & 1 & 0 \\ 5.76 & 3.5 & 1 \end{bmatrix} \begin{bmatrix} 25 & 5 & 1 \\ 0 & -4.8 & -1.56 \\ 0 & 0 & 0.7 \end{bmatrix}$$

LU Decomposition
Example: Solving simultaneous linear equations using LU Decomposition
Set $[L][Z] = [C]$:

$$\begin{bmatrix} 1 & 0 & 0 \\ 2.56 & 1 & 0 \\ 5.76 & 3.5 & 1 \end{bmatrix} \begin{bmatrix} z_1 \\ z_2 \\ z_3 \end{bmatrix} = \begin{bmatrix} 106.8 \\ 177.2 \\ 279.2 \end{bmatrix}$$

Solve for [Z]:

$$z_1 = 106.8$$
$$2.56 z_1 + z_2 = 177.2$$
$$5.76 z_1 + 3.5 z_2 + z_3 = 279.2$$

LU Decomposition
Example: Solving simultaneous linear equations using LU Decomposition
Complete the forward substitution to solve for [Z]:

$$z_1 = 106.8$$
$$z_2 = 177.2 - 2.56 z_1 = 177.2 - 2.56(106.8) = -96.21$$
$$z_3 = 279.2 - 5.76 z_1 - 3.5 z_2 = 279.2 - 5.76(106.8) - 3.5(-96.21) = 0.735$$

$$[Z] = \begin{bmatrix} z_1 \\ z_2 \\ z_3 \end{bmatrix} = \begin{bmatrix} 106.8 \\ -96.21 \\ 0.735 \end{bmatrix}$$

LU Decomposition
Example: Solving simultaneous linear equations using LU Decomposition
Set U X   Z 

5 1   a1   106.8  25  0  4.8  1.56 a   - 96.21    2   0 0 0.7   a3   0.735      
The 3 equations become

Solve for  X 

25a1  5a 2  a3  106.8  4.8a 2  1.56a3  96.21 0.7a3  0.735

LU Decomposition
Example: Solving simultaneous linear equations using LU Decomposition
From the 3rd equation:

$$0.7 a_3 = 0.735 \implies a_3 = \frac{0.735}{0.7} = 1.050$$

Substituting a_3 into the second equation:

$$-4.8 a_2 - 1.56 a_3 = -96.21 \implies a_2 = \frac{-96.21 + 1.56(1.050)}{-4.8} = 19.70$$

LU Decomposition
Example: Solving simultaneous linear equations using LU Decomposition
Substituting a_3 and a_2 into the first equation:

$$25 a_1 + 5 a_2 + a_3 = 106.8 \implies a_1 = \frac{106.8 - 5(19.70) - 1.050}{25} = 0.2900$$

Hence the solution vector is:

$$\begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} 0.2900 \\ 19.70 \\ 1.050 \end{bmatrix}$$

LU Decomposition
Finding the inverse of a square matrix

Remember the relative computational time comparison for finding the inverse: LU Decomposition is proportional to $\dfrac{4n^3}{3}$, while Gauss Elimination is proportional to $\dfrac{n^4}{3} + \dfrac{n^3}{2}$.

Review: the inverse [B] of a square matrix [A] is defined by

$$[A][B] = [I] = [B][A]$$

LU Decomposition
Finding the inverse of a square matrix
How can LU Decomposition be used to find the inverse?
Denote the first column of [B] by $[b_{11} \; b_{21} \; \dots \; b_{n1}]^T$. Using this and the definition of matrix multiplication:

First column of [B]:
$$[A]\begin{bmatrix} b_{11} \\ b_{21} \\ \vdots \\ b_{n1} \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix}$$

Second column of [B]:
$$[A]\begin{bmatrix} b_{12} \\ b_{22} \\ \vdots \\ b_{n2} \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \\ \vdots \\ 0 \end{bmatrix}$$

The remaining columns of [B] can be found in the same manner.
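A sketch of this column-by-column procedure: decompose once, then run one forward and one back substitution per unit vector. The helper names below are illustrative, and lu_decompose refers to the earlier sketch.

def forward_sub(L, c):
    # Solve [L][Z] = [C] for Z, with L unit lower triangular
    n = len(c)
    z = [0.0] * n
    for i in range(n):
        z[i] = c[i] - sum(L[i][j] * z[j] for j in range(i))
    return z

def back_sub(U, z):
    # Solve [U][X] = [Z] for X, with U upper triangular
    n = len(z)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (z[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

def inverse_via_lu(A):
    # Build the inverse one column at a time, reusing a single decomposition
    n = len(A)
    L, U = lu_decompose(A)
    cols = [back_sub(U, forward_sub(L, [1.0 if i == k else 0.0 for i in range(n)]))
            for k in range(n)]
    return [[cols[j][i] for j in range(n)] for i in range(n)]  # columns into a matrix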

LU Decomposition
Example: Finding the inverse of a square matrix

Find the inverse of

$$[A] = \begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \end{bmatrix}$$

Using the decomposition procedure, the [L] and [U] matrices are found to be

$$[A] = [L][U] = \begin{bmatrix} 1 & 0 & 0 \\ 2.56 & 1 & 0 \\ 5.76 & 3.5 & 1 \end{bmatrix} \begin{bmatrix} 25 & 5 & 1 \\ 0 & -4.8 & -1.56 \\ 0 & 0 & 0.7 \end{bmatrix}$$

LU Decomposition
Example: Finding the inverse of a square matrix
Solving for each column of [B] requires two steps:
1) Solve $[L][Z] = [C]$ for [Z]
2) Solve $[U][X] = [Z]$ for [X]

Step 1: $[L][Z] = [C]$

$$\begin{bmatrix} 1 & 0 & 0 \\ 2.56 & 1 & 0 \\ 5.76 & 3.5 & 1 \end{bmatrix} \begin{bmatrix} z_1 \\ z_2 \\ z_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}$$

This generates the equations:

$$z_1 = 1$$
$$2.56 z_1 + z_2 = 0$$
$$5.76 z_1 + 3.5 z_2 + z_3 = 0$$

LU Decomposition
Example: Finding the inverse of a square matrix
Solving for [Z]:

$$z_1 = 1$$
$$z_2 = 0 - 2.56 z_1 = -2.56$$
$$z_3 = 0 - 5.76 z_1 - 3.5 z_2 = -5.76(1) - 3.5(-2.56) = 3.2$$

$$[Z] = \begin{bmatrix} z_1 \\ z_2 \\ z_3 \end{bmatrix} = \begin{bmatrix} 1 \\ -2.56 \\ 3.2 \end{bmatrix}$$

LU Decomposition
Example: Finding the inverse of a square matrix
Solving $[U][X] = [Z]$ for [X]:

$$\begin{bmatrix} 25 & 5 & 1 \\ 0 & -4.8 & -1.56 \\ 0 & 0 & 0.7 \end{bmatrix} \begin{bmatrix} b_{11} \\ b_{21} \\ b_{31} \end{bmatrix} = \begin{bmatrix} 1 \\ -2.56 \\ 3.2 \end{bmatrix}$$

which gives the equations:

$$25 b_{11} + 5 b_{21} + b_{31} = 1$$
$$-4.8 b_{21} - 1.56 b_{31} = -2.56$$
$$0.7 b_{31} = 3.2$$

LU Decomposition
Example: Finding the inverse of a square matrix
Using Backward Substitution:

$$b_{31} = \frac{3.2}{0.7} = 4.571$$
$$b_{21} = \frac{-2.56 + 1.560(4.571)}{-4.8} = -0.9524$$
$$b_{11} = \frac{1 - 5(-0.9524) - 4.571}{25} = 0.04762$$

So the first column of the inverse of [A] is:

$$\begin{bmatrix} b_{11} \\ b_{21} \\ b_{31} \end{bmatrix} = \begin{bmatrix} 0.04762 \\ -0.9524 \\ 4.571 \end{bmatrix}$$

LU Decomposition
Example: Finding the inverse of a square matrix
Repeating for the second and third columns of the inverse:

Second column:
$$\begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \end{bmatrix} \begin{bmatrix} b_{12} \\ b_{22} \\ b_{32} \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} \implies \begin{bmatrix} b_{12} \\ b_{22} \\ b_{32} \end{bmatrix} = \begin{bmatrix} -0.08333 \\ 1.417 \\ -5.000 \end{bmatrix}$$

Third column:
$$\begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \end{bmatrix} \begin{bmatrix} b_{13} \\ b_{23} \\ b_{33} \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} \implies \begin{bmatrix} b_{13} \\ b_{23} \\ b_{33} \end{bmatrix} = \begin{bmatrix} 0.03571 \\ -0.4643 \\ 1.429 \end{bmatrix}$$

LU Decomposition
Example: Finding the inverse of a square matrix
The inverse of [A] is

$$[A]^{-1} = \begin{bmatrix} 0.04762 & -0.08333 & 0.03571 \\ -0.9524 & 1.417 & -0.4643 \\ 4.571 & -5.000 & 1.429 \end{bmatrix}$$

To check your work, verify that

$$[A][A]^{-1} = [I] = [A]^{-1}[A]$$

Gauss-Seidel Method


Gauss-Seidel Method
An iterative method. Basic procedure:

- Algebraically solve each linear equation for x_i
- Assume an initial guess solution array
- Solve for each x_i and repeat
- Use the absolute relative approximate error after each iteration to check whether the error is within a pre-specified tolerance

Gauss-Seidel Method
Why?
The Gauss-Seidel Method allows the user to control round-off error.

Elimination methods such as Gaussian Elimination and LU Decomposition are prone to round-off error.

Also: If the physics of the problem are understood, a close initial guess can be made, decreasing the number of iterations needed.

Gauss-Seidel Method
Algorithm
A set of n equations and n unknowns:

$$a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \dots + a_{1n}x_n = c_1$$
$$a_{21}x_1 + a_{22}x_2 + a_{23}x_3 + \dots + a_{2n}x_n = c_2$$
$$\vdots$$
$$a_{n1}x_1 + a_{n2}x_2 + a_{n3}x_3 + \dots + a_{nn}x_n = c_n$$

If the diagonal elements are non-zero, rewrite each equation, solving for the corresponding unknown: solve the first equation for x_1, the second equation for x_2, and so on.

Gauss-Seidel Method
Algorithm
Rewriting each equation:

From equation 1: $x_1 = \dfrac{c_1 - a_{12}x_2 - a_{13}x_3 - \dots - a_{1n}x_n}{a_{11}}$

From equation 2: $x_2 = \dfrac{c_2 - a_{21}x_1 - a_{23}x_3 - \dots - a_{2n}x_n}{a_{22}}$

$\vdots$

From equation n−1: $x_{n-1} = \dfrac{c_{n-1} - a_{n-1,1}x_1 - a_{n-1,2}x_2 - \dots - a_{n-1,n-2}x_{n-2} - a_{n-1,n}x_n}{a_{n-1,n-1}}$

From equation n: $x_n = \dfrac{c_n - a_{n1}x_1 - a_{n2}x_2 - \dots - a_{n,n-1}x_{n-1}}{a_{nn}}$

Gauss-Seidel Method
Algorithm
General form of each equation:

$$x_1 = \frac{c_1 - \sum_{j=1, j \ne 1}^{n} a_{1j}x_j}{a_{11}} \qquad x_2 = \frac{c_2 - \sum_{j=1, j \ne 2}^{n} a_{2j}x_j}{a_{22}}$$

$$x_{n-1} = \frac{c_{n-1} - \sum_{j=1, j \ne n-1}^{n} a_{n-1,j}x_j}{a_{n-1,n-1}} \qquad x_n = \frac{c_n - \sum_{j=1, j \ne n}^{n} a_{nj}x_j}{a_{nn}}$$

Gauss-Seidel Method
Algorithm
General form for any row i:

$$x_i = \frac{c_i - \sum_{j=1, j \ne i}^{n} a_{ij}x_j}{a_{ii}}, \quad i = 1, 2, \dots, n.$$

How or where can this equation be used?

Gauss-Seidel Method
Solve for the unknowns
Assume an initial guess for

$$[X] = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_{n-1} \\ x_n \end{bmatrix}$$

Use the rewritten equations to solve for each value of x_i. Important: remember to use the most recent value of x_i, which means applying the values calculated earlier in the current iteration to the calculations that remain in it.

Gauss-Seidel Method
Calculate the absolute relative approximate error:

$$|\epsilon_a|_i = \left| \frac{x_i^{\text{new}} - x_i^{\text{old}}}{x_i^{\text{new}}} \right| \times 100$$

So when has the answer been found? The iterations are stopped when the absolute relative approximate error is less than a pre-specified tolerance for all unknowns.
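Putting the update rule and the stopping test together gives a compact iteration. This Python sketch uses an assumed tolerance and iteration cap (the slides do not specify either), and the names are illustrative.

def gauss_seidel(A, c, x0, tol=1e-4, max_iter=100):
    # Iterate x_i = (c_i - sum_{j != i} a_ij x_j) / a_ii until every
    # absolute relative approximate error (in percent) is below tol.
    n = len(c)
    x = list(x0)
    for _ in range(max_iter):
        max_err = 0.0
        for i in range(n):
            old = x[i]
            # Use the most recent values of x, including ones updated this iteration
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (c[i] - s) / A[i][i]
            err = abs((x[i] - old) / x[i]) * 100 if x[i] != 0 else float("inf")
            max_err = max(max_err, err)
        if max_err < tol:
            break
    return x

# The diagonally dominant system of Example 2 below converges to about [1, 3, 4]
print(gauss_seidel([[12.0, 3.0, -5.0], [1.0, 5.0, 3.0], [3.0, 7.0, 13.0]],
                   [1.0, 28.0, 76.0], [1.0, 0.0, 1.0]))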

Gauss-Seidel Method: Example 1
The upward velocity of a rocket is given at three different times:

Time, t (s)    Velocity, v (m/s)
5              106.8
8              177.2
12             279.2

The velocity data is approximated by a polynomial:

$$v(t) = a_1 t^2 + a_2 t + a_3, \quad 5 \le t \le 12$$

Gauss-Seidel Method: Example 1
Using a matrix template of the form

$$\begin{bmatrix} t_1^2 & t_1 & 1 \\ t_2^2 & t_2 & 1 \\ t_3^2 & t_3 & 1 \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix}$$

the system of equations becomes

$$\begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} 106.8 \\ 177.2 \\ 279.2 \end{bmatrix}$$

Initial guess: assume

$$\begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 2 \\ 5 \end{bmatrix}$$

Gauss-Seidel Method: Example 1
Rewriting each equation:

$$a_1 = \frac{106.8 - 5 a_2 - a_3}{25}$$
$$a_2 = \frac{177.2 - 64 a_1 - a_3}{8}$$
$$a_3 = \frac{279.2 - 144 a_1 - 12 a_2}{1}$$

Gauss-Seidel Method: Example 1
Applying the initial guess $[a_1, a_2, a_3]^T = [1, 2, 5]^T$ and solving for each a_i:

$$a_1 = \frac{106.8 - 5(2) - 5}{25} = 3.6720$$
$$a_2 = \frac{177.2 - 64(3.6720) - 5}{8} = -7.8510$$
$$a_3 = \frac{279.2 - 144(3.6720) - 12(-7.8510)}{1} = -155.36$$

When solving for a_2, how many of the initial guess values were used?

Gauss-Seidel Method: Example 1
Finding the absolute relative approximate error

$$|\epsilon_a|_i = \left| \frac{x_i^{\text{new}} - x_i^{\text{old}}}{x_i^{\text{new}}} \right| \times 100$$

At the end of the first iteration,

$$\begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} 3.6720 \\ -7.8510 \\ -155.36 \end{bmatrix}$$

$$|\epsilon_a|_1 = \left| \frac{3.6720 - 1.0000}{3.6720} \right| \times 100 = 72.76\%$$
$$|\epsilon_a|_2 = \left| \frac{-7.8510 - 2.0000}{-7.8510} \right| \times 100 = 125.47\%$$
$$|\epsilon_a|_3 = \left| \frac{-155.36 - 5.0000}{-155.36} \right| \times 100 = 103.22\%$$

The maximum absolute relative approximate error is 125.47%.

Gauss-Seidel Method: Example 1
Iteration #2
Using the values from iteration #1,

$$\begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} 3.6720 \\ -7.8510 \\ -155.36 \end{bmatrix}$$

the values of a_i are found:

$$a_1 = \frac{106.8 - 5(-7.8510) - (-155.36)}{25} = 12.056$$
$$a_2 = \frac{177.2 - 64(12.056) - (-155.36)}{8} = -54.882$$
$$a_3 = \frac{279.2 - 144(12.056) - 12(-54.882)}{1} = -798.34$$

Gauss-Seidel Method: Example 1
Finding the absolute relative approximate error at the end of the second iteration,

$$\begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} 12.056 \\ -54.882 \\ -798.34 \end{bmatrix}$$

$$|\epsilon_a|_1 = \left| \frac{12.056 - 3.6720}{12.056} \right| \times 100 = 69.542\%$$
$$|\epsilon_a|_2 = \left| \frac{-54.882 - (-7.8510)}{-54.882} \right| \times 100 = 85.695\%$$
$$|\epsilon_a|_3 = \left| \frac{-798.34 - (-155.36)}{-798.34} \right| \times 100 = 80.54\%$$

The maximum absolute relative approximate error is 85.695%.

Gauss-Seidel Method: Example 1
Repeating more iterations, the following values are obtained:

Iteration   a1        |ϵa|1 %   a2        |ϵa|2 %   a3         |ϵa|3 %
1           3.672     72.767    -7.8510   125.47    -155.36    103.22
2           12.056    69.542    -54.882   85.695    -798.34    80.540
3           47.182    74.448    -255.51   78.521    -3448.9    76.852
4           193.33    75.595    -1093.4   76.632    -14440     76.116
5           800.53    75.850    -4577.2   76.112    -60072     75.962
6           3322.6    75.907    -19049    75.971    -249580    75.931

Notice that the relative errors are not decreasing at any significant rate, and the solution is not converging to the true solution of

$$\begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} 0.29048 \\ 19.690 \\ 1.0858 \end{bmatrix}$$

Gauss-Seidel Method: Pitfall
What went wrong?
Even though done correctly, the answer is not converging to the correct answer. This example illustrates a pitfall of the Gauss-Seidel method: not all systems of equations will converge.

Is there a fix?
One class of systems of equations always converges: systems with a diagonally dominant coefficient matrix.

Diagonally dominant: [A] in [A][X] = [C] is diagonally dominant if

$$|a_{ii}| \ge \sum_{j=1, j \ne i}^{n} |a_{ij}| \quad \text{for all } i$$

and

$$|a_{ii}| > \sum_{j=1, j \ne i}^{n} |a_{ij}| \quad \text{for at least one } i$$

Gauss-Seidel Method: Pitfall
Diagonally dominant: the absolute value of the diagonal coefficient in each row must be at least equal to the sum of the absolute values of the other coefficients in that row, and in at least one row the diagonal coefficient must be strictly greater than the sum of the other coefficients in that row.

Which coefficient matrix is diagonally dominant?

$$[A] = \begin{bmatrix} 2 & 5.81 & 34 \\ 45 & 43 & 1 \\ 123 & 16 & 1 \end{bmatrix} \qquad [B] = \begin{bmatrix} 124 & 34 & 56 \\ 23 & 53 & 5 \\ 96 & 34 & 129 \end{bmatrix}$$

Most physical systems do result in simultaneous linear equations that have diagonally dominant coefficient matrices.
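The two conditions are straightforward to test programmatically; here is a small Python sketch (the function name is illustrative):

def is_diagonally_dominant(A):
    # |a_ii| >= sum of |a_ij| over j != i for every row,
    # with strict inequality in at least one row
    strict = False
    for i, row in enumerate(A):
        off_diagonal = sum(abs(v) for j, v in enumerate(row) if j != i)
        if abs(row[i]) < off_diagonal:
            return False
        if abs(row[i]) > off_diagonal:
            strict = True
    return strict

print(is_diagonally_dominant([[12, 3, -5], [1, 5, 3], [3, 7, 13]]))  # True (Example 2)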

Gauss-Seidel Method: Example 2
Given the system of equations

$$12 x_1 + 3 x_2 - 5 x_3 = 1$$
$$x_1 + 5 x_2 + 3 x_3 = 28$$
$$3 x_1 + 7 x_2 + 13 x_3 = 76$$

the coefficient matrix is

$$[A] = \begin{bmatrix} 12 & 3 & -5 \\ 1 & 5 & 3 \\ 3 & 7 & 13 \end{bmatrix}$$

With an initial guess of

$$\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}$$

will the solution converge using the Gauss-Seidel method?

Gauss-Seidel Method: Example 2
Checking if the coefficient matrix is diagonally dominant:

$$[A] = \begin{bmatrix} 12 & 3 & -5 \\ 1 & 5 & 3 \\ 3 & 7 & 13 \end{bmatrix}$$

$$|a_{11}| = |12| = 12 \ge |a_{12}| + |a_{13}| = |3| + |{-5}| = 8$$
$$|a_{22}| = |5| = 5 \ge |a_{21}| + |a_{23}| = |1| + |3| = 4$$
$$|a_{33}| = |13| = 13 \ge |a_{31}| + |a_{32}| = |3| + |7| = 10$$

The inequalities are all true, and at least one row (in fact, every row) satisfies the strict inequality. Therefore, the solution should converge using the Gauss-Seidel method.

Gauss-Seidel Method: Example 2
Rewriting each equation:

$$\begin{bmatrix} 12 & 3 & -5 \\ 1 & 5 & 3 \\ 3 & 7 & 13 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 28 \\ 76 \end{bmatrix}$$

$$x_1 = \frac{1 - 3 x_2 + 5 x_3}{12} \qquad x_2 = \frac{28 - x_1 - 3 x_3}{5} \qquad x_3 = \frac{76 - 3 x_1 - 7 x_2}{13}$$

With an initial guess of $[x_1, x_2, x_3]^T = [1, 0, 1]^T$:

$$x_1 = \frac{1 - 3(0) + 5(1)}{12} = 0.50000$$
$$x_2 = \frac{28 - 0.50000 - 3(1)}{5} = 4.9000$$
$$x_3 = \frac{76 - 3(0.50000) - 7(4.9000)}{13} = 3.0923$$

Gauss-Seidel Method: Example 2
The absolute relative approximate errors:

$$|\epsilon_a|_1 = \left| \frac{0.50000 - 1.0000}{0.50000} \right| \times 100 = 100.00\%$$
$$|\epsilon_a|_2 = \left| \frac{4.9000 - 0}{4.9000} \right| \times 100 = 100.00\%$$
$$|\epsilon_a|_3 = \left| \frac{3.0923 - 1.0000}{3.0923} \right| \times 100 = 67.662\%$$

The maximum absolute relative error after the first iteration is 100%.

Gauss-Seidel Method: Example 2
After iteration #1:

$$\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 0.50000 \\ 4.9000 \\ 3.0923 \end{bmatrix}$$

Substituting the x values into the equations:

$$x_1 = \frac{1 - 3(4.9000) + 5(3.0923)}{12} = 0.14679$$
$$x_2 = \frac{28 - 0.14679 - 3(3.0923)}{5} = 3.7153$$
$$x_3 = \frac{76 - 3(0.14679) - 7(3.7153)}{13} = 3.8118$$

After iteration #2:

$$\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 0.14679 \\ 3.7153 \\ 3.8118 \end{bmatrix}$$

Gauss-Seidel Method: Example 2
Iteration #2 absolute relative approximate errors:

$$|\epsilon_a|_1 = \left| \frac{0.14679 - 0.50000}{0.14679} \right| \times 100 = 240.62\%$$
$$|\epsilon_a|_2 = \left| \frac{3.7153 - 4.9000}{3.7153} \right| \times 100 = 31.887\%$$
$$|\epsilon_a|_3 = \left| \frac{3.8118 - 3.0923}{3.8118} \right| \times 100 = 18.876\%$$

The maximum absolute relative error after the second iteration is 240.62%. This is much larger than the maximum absolute relative error obtained in iteration #1. Is this a problem?

Gauss-Seidel Method: Example 2
Repeating more iterations, the following values are obtained:

Iteration   x1        |ϵa|1 %   x2       |ϵa|2 %   x3       |ϵa|3 %
1           0.50000   100.00    4.9000   100.00    3.0923   67.662
2           0.14679   240.62    3.7153   31.887    3.8118   18.876
3           0.74275   80.23     3.1644   17.409    3.9708   4.0042
4           0.94675   21.547    3.0281   4.5012    3.9971   0.65798
5           0.99177   4.5394    3.0034   0.82240   4.0001   0.07499
6           0.99919   0.74260   3.0001   0.11000   4.0001   0.00000

The solution obtained,

$$\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 0.99919 \\ 3.0001 \\ 4.0001 \end{bmatrix}$$

is close to the exact solution of

$$\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 3 \\ 4 \end{bmatrix}$$

Gauss-Seidel Method: Example 3
Given the system of equations

$$3 x_1 + 7 x_2 + 13 x_3 = 76$$
$$x_1 + 5 x_2 + 3 x_3 = 28$$
$$12 x_1 + 3 x_2 - 5 x_3 = 1$$

rewriting the equations gives

$$x_1 = \frac{76 - 7 x_2 - 13 x_3}{3}$$
$$x_2 = \frac{28 - x_1 - 3 x_3}{5}$$
$$x_3 = \frac{1 - 12 x_1 - 3 x_2}{-5}$$

With an initial guess of

$$\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}$$

Gauss-Seidel Method: Example 3
Conducting six iterations:

Iteration   x1            |ϵa|1 %   x2            |ϵa|2 %   x3            |ϵa|3 %
1           21.000        95.238    0.80000       100.00    50.680        98.027
2           -196.15       110.71    14.421        94.453    -462.30       110.96
3           1995.0        109.83    -116.02       112.43    4718.1        109.80
4           -20149        109.90    1204.6        109.63    -47636        109.90
5           2.0364×10^5   109.89    -12140        109.92    4.8144×10^5   109.89
6           -2.0579×10^6  109.90    1.2272×10^5   109.89    -4.8653×10^6  109.89

The values are not converging.

Does this mean that the Gauss-Seidel method cannot be used?

Gauss-Seidel Method
The Gauss-Seidel Method can still be used; here the coefficient matrix is not diagonally dominant:

$$[A] = \begin{bmatrix} 3 & 7 & 13 \\ 1 & 5 & 3 \\ 12 & 3 & -5 \end{bmatrix}$$

But this is the same set of equations used in Example 2, rearranged:

$$[A] = \begin{bmatrix} 12 & 3 & -5 \\ 1 & 5 & 3 \\ 3 & 7 & 13 \end{bmatrix}$$

and in that arrangement it did converge. If a system of linear equations is not diagonally dominant, check to see if rearranging the equations can form a diagonally dominant matrix.

Gauss-Seidel Method
Not every system of equations can be rearranged to have a diagonally dominant coefficient matrix. Observe the set of equations:

$$x_1 + x_2 + x_3 = 3$$
$$2 x_1 + 3 x_2 + 4 x_3 = 9$$
$$x_1 + 7 x_2 + x_3 = 9$$

Which equation(s) prevent this set of equations from having a diagonally dominant coefficient matrix?

Gauss-Seidel Method
Summary
- Advantages of the Gauss-Seidel Method
- Algorithm for the Gauss-Seidel Method
- Pitfalls of the Gauss-Seidel Method

Slides are from: http://numericalmethods.eng.usf.edu


				