
Part B. Linear Algebra, Vector Calculus

Chap. 6: Linear Algebra; Matrices, Vectors, Determinants, Linear Systems of Equations

 - Theory and application of linear systems of equations, linear transformations, and
   eigenvalue problems.

 - Vectors, matrices, determinants, …

 6.1. Basic Concepts
 - Basic concepts and rules of matrix and vector algebra
 - Matrix: rectangular array of numbers (or functions), called its elements or entries

 Reference: "Matrix Computations" by G.H. Golub, C.F. Van Loan (1996)

   Ex.)  A 2 x 3 matrix (2 rows, 3 columns):

             | 2  0.4  8 |
             | 5  -32  0 |

   Ex. 1)  5x - 2y + z = 0
           3x      + 4z = 0

           Coefficient matrix:  | 5  -2  1 |
                                | 3   0  4 |

Ch. 6: Linear Algebra                                          (화공수학: Chemical Engineering Mathematics)
A system of n equations in n unknowns:

      f1(x1, x2, ..., xn) = 0            a11 x1 + a12 x2 + ... + a1n xn = b1
      f2(x1, x2, ..., xn) = 0            a21 x1 + a22 x2 + ... + a2n xn = b2
      ...                                ...
      fn(x1, x2, ..., xn) = 0            an1 x1 + an2 x2 + ... + ann xn = bn

In matrix form:

      | a11 a12 ... a1n | | x1 |   | b1 |
      | a21 a22 ... a2n | | x2 | = | b2 |         Ax = b
      | ...             | | .. |   | .. |
      | an1 an2 ... ann | | xn |   | bn |
   General Notations and Concepts
    Matrix: A, B;   Vector: a, b

                 | a11 a12 ... a1n |
    A = [ajk] =  | a21 a22 ... a2n |       an m x n matrix
                 | ...             |
                 | am1 am2 ... amn |

    - Each row corresponds to an equation; each column corresponds to a variable
      (important for simultaneous linear equations).
    - m = n: square matrix; the entries a11, a22, ..., ann form the principal (main)
      diagonal of the matrix.
    - m != n: rectangular matrix (a matrix that is not square).
 Vectors
    Row vector:     b = [b1, b2, ..., bm]

                          | c1 |
    Column vector:    c = | .. |
                          | cm |

    Transpose: b^T is the column vector with the components b1, ..., bm.
 Transposition
    A (m x n)  ->  A^T (n x m)

                   | a11 a21 ... am1 |
    A^T = [akj] =  | a12 a22 ... am2 |
                   | ...             |
                   | a1n a2n ... amn |

    Square matrices:
      Symmetric matrices:       A^T = A    (akj = ajk)
      Skew-symmetric matrices:  A^T = -A   (akj = -ajk)

    Properties of transpose:

          (A^T)^T = A,     (A + B)^T = A^T + B^T,     (AB)^T = B^T A^T,

          (kA)^T = k A^T,     (ABC)^T = C^T B^T A^T

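These rules are easy to spot-check numerically. Below is a minimal pure-Python sketch (the helper names `transpose` and `matmul` are illustrative, not from the text; any matrix library would do the same job):

```python
def transpose(A):
    """Return A^T: entry (j, k) of the result is entry (k, j) of A."""
    return [[A[k][j] for k in range(len(A))] for j in range(len(A[0]))]

def matmul(A, B):
    """Standard matrix product: rows of A times columns of B."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2, 0],
     [3, -1, 4]]          # 2 x 3
B = [[2, 1],
     [0, 5],
     [1, -2]]             # 3 x 2

# (AB)^T = B^T A^T  (note the reversed order of the factors)
assert transpose(matmul(A, B)) == matmul(transpose(B), transpose(A))
# (A^T)^T = A
assert transpose(transpose(A)) == A
```

Note that the example deliberately uses a non-square A, so the reversed order in (AB)^T = B^T A^T is forced by the shapes alone.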



 Equality of Matrices:  A = B  <=>  aij = bij    (same size)

 Matrix Addition:  C = A + B,  cij = aij + bij   (same size)

 Scalar Multiplication:  (-k)A = -(kA)

      A + B = B + A                      c (A + B) = cA + cB
      (U + V) + W = U + (V + W)          (c + k) A = cA + kA
      A + 0 = A                          c (kA) = (ck) A
      A + (-A) = 0



 6.2. Matrix Multiplication
 - Multiplication of matrices by matrices:

      A (n x m)  B (m x l)  =  C (n x l),    cij = Σ (k=1..m) aik bkj

   (number of columns of 1st factor A = number of rows of 2nd factor B)



  Difference from Multiplication of Numbers
  - Matrix multiplication is not commutative in general: AB != BA
  - AB = 0 does not necessarily imply A = 0 or B = 0 or BA = 0
  - AC = AD does not necessarily imply C = D   (even when A != 0)

     k (AB) = (kA) B = A (kB)
     A (BC) = (AB) C
     (A + B) C = AC + BC
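A concrete instance of the warnings above, as a small sketch (the 2 x 2 matrices are chosen for illustration, not taken from the text):

```python
def matmul(A, B):
    """Standard matrix product on nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 1],
     [2, 2]]
B = [[-1, 1],
     [1, -1]]

AB = matmul(A, B)
BA = matmul(B, A)

assert AB == [[0, 0], [0, 0]]    # AB = 0 although A != 0 and B != 0
assert BA == [[1, 1], [-1, -1]]  # ... and BA != 0, so AB != BA here
```

So a single pair of nonzero matrices violates both commutativity and the zero-product rule of ordinary numbers.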
  Special Matrices
  - Upper triangular matrix: all entries below the main diagonal are zero
        | a11 a12 ... a1n |
        |  0  a22 ... a2n |
        |          ...    |
        |  0   0  ... ann |
  - Lower triangular matrix: all entries above the main diagonal are zero
  - Banded matrix: nonzero entries only in a band about the main diagonal
    (especially tridiagonal); such matrices result from finite difference
    solutions of PDEs or ODEs.
  - A matrix can be decomposed into lower & upper triangular factors:
     LU decomposition

  Special Matrices:
  - Diagonal matrix: aii != 0 possible, aij = 0 (i != j)
  - Symmetric matrix: aij = aji
  - Identity (or unit) matrix I: Iii = 1, Iij = 0 (i != j)

  Inner Product of Vectors

                                  | b1 |
      a . b = [a1 a2 ... an]      | .. |  =  Σ (k=1..n) ak bk
                                  | bn |
  Product in Terms of Row and Column Vectors:  A (m x n) B (n x p) = C (m x p)
     cjk = aj . bk    (jth row of A) . (kth column of B)

          | a1.b1  a1.b2  ...  a1.bp |
     AB = | a2.b1  a2.b2  ...  a2.bp |
          | ...                      |
          | am.b1  am.b2  ...  am.bp |

   Linear Transformations
     y = Ax,  x = Bw
     =>  y = ABw = Cw

   6.3. Linear Systems of Equations: Gauss Elimination
   - The most important practical use of matrices: the solution of linear systems of equations
   Linear System, Coefficient Matrix, Augmented Matrix
      a11 x1 + a12 x2 + ... + a1n xn = b1       | a11 a12 ... a1n | | x1 |   | b1 |
      a21 x1 + a22 x2 + ... + a2n xn = b2       | a21 a22 ... a2n | | x2 | = | b2 |
      ...                                       | ...             | | .. |   | .. |
      am1 x1 + am2 x2 + ... + amn xn = bm       | am1 am2 ... amn | | xn |   | bm |

        Ax = b       b = 0:  homogeneous system (this has at least one solution, i.e., x = 0)
                     b != 0: nonhomogeneous system

                                            | a11 a12 ... a1n  b1 |
                       Augmented matrix à = | a21 a22 ... a2n  b2 |
                                            | ...                 |
                                            | am1 am2 ... amn  bm |
  Ex. 1) Geometric interpretation (two lines in the x1-x2 plane): existence of solutions
  - Lines intersect: precisely one solution
  - Same slope, different intercept (parallel lines): no solution (singular)
  - Same line: infinitely many solutions (singular)
  - Very close to singular: ill-conditioned
      - difficult to identify the solution
      - extremely sensitive to round-off error


Gauss Elimination
- Standard method for solving linear systems
- The elimination of unknowns can be extended to large sets of equations.
  (Forward elimination of unknowns & solution through back substitution)
     | 1   a'12 a'13 a'14 | | x1 |   | b'1 |       | 1  a'12 a'13 a'14 | | x1 |   | b'1 |
     | a21 a22  a23  a24  | | x2 | = | b2  |  ->   | 0  a'22 a'23 a'24 | | x2 | = | b'2 |
     | a31 a32  a33  a34  | | x3 |   | b3  |       | 0  a'32 a'33 a'34 | | x3 |   | b'3 |
     | a41 a42  a43  a44  | | x4 |   | b4  |       | 0  a'42 a'43 a'44 | | x4 |   | b'4 |

      During these operations the first row is the pivot row (a11: pivot element);
      then the second row becomes the pivot row (a'22: pivot element), and so on.

     | 1  a'12 a'13 a'14 | | x1 |   | b'1 |
     | 0   1   a'23 a'24 | | x2 | = | b'2 |        Upper triangular form
     | 0   0    1   a'34 | | x3 |   | b'3 |
     | 0   0    0    1   | | x4 |   | b'4 |

      Back substitution, repeated moving upward:

      x4 = b'4,   x3 = b'3 - a'34 x4,   ...,   xi = b'i - Σ (j=i+1..N) a'ij xj
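The two phases above can be sketched in a few lines of Python (a minimal version without pivoting or row swaps, so it assumes every pivot it meets is nonzero; the function name and the small example system are illustrative):

```python
def gauss_solve(A, b):
    """Solve Ax = b by forward elimination + back substitution (no pivoting)."""
    n = len(A)
    A = [row[:] for row in A]   # work on copies, keep the caller's data intact
    b = b[:]
    # forward elimination: zero out the entries below each pivot A[k][k]
    for k in range(n - 1):
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= f * A[k][j]
            b[i] -= f * b[k]
    # back substitution, moving upward from x_n
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

# example: 2 x1 + x2 = 5,  x1 + 3 x2 = 10  ->  x1 = 1, x2 = 3
print(gauss_solve([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0]))  # [1.0, 3.0]
```

A production solver would add partial pivoting (row swaps), which the elementary row operations on the next slide justify.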



Elementary Row Operation: Row-Equivalent Systems
- Interchange of two rows
- Addition of a constant multiple of one row to another row
- Multiplication of a row by a nonzero constant c

   Overdetermined: equations > unknowns
   Determined:      equations = unknowns
   Underdetermined: equations < unknowns

   Consistent: system has at least one solution.
   Inconsistent: system has no solutions at all.

   Homogeneous solution xh:      A xh = 0
                                                A (xh + xp) = A xh + A xp = 0 + b = b
   Nonhomogeneous solution xp:   A xp = b
    => xp + xh is also a solution of the nonhomogeneous system.

   Homogeneous systems: always consistent (why? the trivial solution x = 0 exists)
   Theorem: A homogeneous system possesses nontrivial solutions if the number m of
            equations is less than the number n of unknowns (m < n).



Echelon Form: Information Resulting from It

    | 3   2    1  |         | 3   2    1  |  3 |
    | 0 -1/3  1/3 |   and   | 0 -1/3  1/3 | -2 |
    | 0   0    0  |         | 0   0    0  | 12 |

The reduced system (r nonzero rows):

    a11 x1 + a12 x2 + ... + a1n xn = b1
            c22 x2 + ... + c2n xn = b~2
                      ...
            crr xr + ... + crn xn = b~r
                                0 = b~r+1
                              ...
                                0 = b~m

(a) No solution: if r < m and one of the numbers b~r+1, ..., b~m is not zero.
(b) Precisely one solution: if r = n and b~r+1, ..., b~m, if present, are zero.
(c) Infinitely many solutions: if r < n and b~r+1, ..., b~m, if present, are zero.



                   Existence and uniqueness of the solutions -> next topic

    Gauss-Jordan elimination: particularly well suited to digital computation
    | 1  a'12 a'13 a'14 | | x1 |   | b'1 |
    | 0   1   a'23 a'24 | | x2 | = | b'2 |      Multiplying the second row by a'12 and
    | 0   0    1   a'34 | | x3 |   | b'3 |      subtracting it from the first row:
    | 0   0    0    1   | | x4 |   | b'4 |

    | 1  0  a''13 a''14 | | x1 |   | b''1 |        | 1 0 0 0 | | x1 |   | b''1 |      x1 = b''1
    | 0  1  a'23  a'24  | | x2 | = | b'2  |  ...   | 0 1 0 0 | | x2 | = | b''2 |      x2 = b''2
    | 0  0   1    a'34  | | x3 |   | b'3  |        | 0 0 1 0 | | x3 |   | b''3 |      x3 = b''3
    | 0  0   0     1    | | x4 |   | b'4  |        | 0 0 0 1 | | x4 |   | b''4 |      x4 = b''4

- The most efficient approach is to eliminate all elements both above and below the pivot
  element in order to clear to zero the entire column containing the pivot element, except of
  course for the 1 in the pivot position on the main diagonal.
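A minimal sketch of that procedure (no pivoting, nonzero pivots assumed; the function name and example system are illustrative):

```python
def gauss_jordan_solve(A, b):
    """Solve Ax = b by Gauss-Jordan: normalize each pivot row, then clear
    the pivot column both above and below the pivot."""
    n = len(A)
    M = [A[i][:] + [b[i]] for i in range(n)]     # augmented matrix [A | b]
    for k in range(n):
        p = M[k][k]
        M[k] = [v / p for v in M[k]]             # make the pivot 1
        for i in range(n):
            if i != k:                           # clear column k everywhere else
                f = M[i][k]
                M[i] = [a - f * c for a, c in zip(M[i], M[k])]
    return [M[i][n] for i in range(n)]           # A is now I; the rhs column is x

# same example system as before: x1 = 1, x2 = 3
print(gauss_jordan_solve([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0]))  # [1.0, 3.0]
```

Compared with plain Gauss elimination, the back-substitution phase disappears, at the cost of extra work during elimination.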




    LU-Decomposition

Gauss elimination:
 - Forward elimination + back substitution
   (the forward elimination accounts for most of the computational effort)
 - Inefficient when solving equations with the same coefficient matrix A but with different rhs constants B

LU Decomposition:
 - Well suited for those situations where many rhs vectors B must be evaluated for a single A:
   the elimination step can be formulated so that it involves only operations on the matrix of coefficients A.
 - Provides an efficient means to compute the matrix inverse.

(1) LU Decomposition        Ax = B
- LU decomposition separates the time-consuming elimination of A from the manipulations of the rhs B.
   Once A has been "decomposed", multiple rhs vectors can be evaluated efficiently.

• Overview of LU Decomposition
    Ax - B = 0

- Upper triangular form:    | u11 u12 u13 | | x1 |   | d1 |
                            |  0  u22 u23 | | x2 | = | d2 |        Ux - D = 0
                            |  0   0  u33 | | x3 |   | d3 |

- Assume a lower triangular matrix with 1's on the diagonal:
          | 1   0   0 |
      L = | l21 1   0 |
          | l31 l32 1 |
  So that    L(Ux - D) = Ax - B    =>    LU = A,  LD = B

 - Two-step strategy for obtaining solutions:
     LU decomposition step: A -> L and U
     Substitution step: D from LD = B
                        x from Ux = D

 • LU Decomposition Version of Gauss Elimination
 a. Gauss Elimination:     | a11 a12 a13 | | x1 |   | b1 |
                           | a21 a22 a23 | | x2 | = | b2 |
                           | a31 a32 a33 | | x3 |   | b3 |

 Use f21 = a21/a11 to eliminate a21      ->  upper triangular matrix form!
     f31 = a31/a11 to eliminate a31
     f32 = a'32/a'22 to eliminate a32

 Store the f's:

    | a11 a12  a13   |                       | a11 a12  a13  |        | 1   0   0 |
    | f21 a'22 a'23  |     A = LU with   U = |  0  a'22 a'23 |,   L = | f21 1   0 |
    | f31 f32  a''33 |                       |  0   0  a''33 |        | f31 f32 1 |




 b. Forward-substitution step:   LD = B

     d1 = b1,    di = bi - Σ (j=1..i-1) aij dj      for i = 2, 3, ..., n

     (the aij below the diagonal are the stored factors fij)

    Back-substitution step: (identical to the back-substitution phase of conventional Gauss elimination)

     xn = dn / ann,    xi = ( di - Σ (j=i+1..n) aij xj ) / aii      for i = n-1, n-2, ..., 1
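The whole scheme, decomposition plus the two substitution sweeps, can be sketched as follows (a minimal version: factors stored in a separate L rather than in place, no pivoting, nonzero pivots assumed; names are illustrative):

```python
def lu_decompose(A):
    """Doolittle-style LU: U from forward elimination, factors f_ij in L."""
    n = len(A)
    U = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n - 1):
        for i in range(k + 1, n):
            f = U[i][k] / U[k][k]
            L[i][k] = f                      # store the elimination factor
            for j in range(k, n):
                U[i][j] -= f * U[k][j]
    return L, U

def lu_solve(L, U, b):
    """Solve L d = b (forward), then U x = d (backward)."""
    n = len(b)
    d = [0.0] * n
    for i in range(n):
        d[i] = b[i] - sum(L[i][j] * d[j] for j in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (d[i] - s) / U[i][i]
    return x

L, U = lu_decompose([[2.0, 1.0], [1.0, 3.0]])
print(lu_solve(L, U, [5.0, 10.0]))   # [1.0, 3.0]; L, U are reusable for other rhs b
```

The point of the factorization shows in the last two lines: `lu_decompose` runs once, and only the cheap `lu_solve` is repeated for each new right-hand side.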




6.4. Rank of a Matrix: Linear Dependence. Vector Space
- Key concepts for existence and uniqueness of solutions
Linear Independence and Dependence of Vectors
Given a set of m vectors: a1, ..., am (all of the same size)
Linear combination:  c1 a1 + c2 a2 + ... + cm am   (cj: any scalars)

    c1 a1 + c2 a2 + ... + cm am = 0

- Conditions satisfying the above relation:
(1) Only all-zero cj's: a1, ..., am are linearly independent.
(2) The relation holds with scalars not all zero => linearly dependent;
    e.g., if c1 != 0,   a1 = k2 a2 + ... + km am   (where kj = -cj / c1)

Ex. 1) Linear independence and dependence
       a1 = [ 3    0   2    2 ]
       a2 = [ -6  42  24   54 ]              6 a1 - 0.5 a2 - a3 = 0
       a3 = [ 21 -21   0  -15 ]              => the three vectors are linearly dependent;
                                                a1 and a2 (two of them) are linearly
                                                independent.

- Vectors can be expressed in terms of a linearly independent subset.
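The dependence relation of Ex. 1 is quick to verify numerically (entry signs as reconstructed above):

```python
# vectors from Ex. 1
a1 = [3, 0, 2, 2]
a2 = [-6, 42, 24, 54]
a3 = [21, -21, 0, -15]

# form the linear combination 6 a1 - 0.5 a2 - a3 componentwise
combo = [6 * x - 0.5 * y - z for x, y, z in zip(a1, a2, a3)]
assert combo == [0.0, 0.0, 0.0, 0.0]   # 6 a1 - 0.5 a2 - a3 = 0
```

Since the coefficients (6, -0.5, -1) are not all zero, the three vectors are linearly dependent.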


Rank of a Matrix
- Maximum number of linearly independent row vectors of a matrix A=[ajk]: rank A

Ex. 3) Rank
             | 3    0   2    2 |
         A = | -6  42  24   54 |           rank A = 2
             | 21 -21   0  -15 |

- rank A = 0 iff A = 0
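The definition suggests a direct computation: reduce the matrix to echelon form and count the nonzero rows. A small sketch (with row swaps as pivoting and a tolerance against float round-off; the function name is illustrative):

```python
def rank(A, tol=1e-12):
    """Rank = number of pivot rows after reduction to echelon form."""
    M = [[float(v) for v in row] for row in A]
    m, n = len(M), len(M[0])
    r = 0                                    # index of the next pivot row
    for c in range(n):
        # find a row at or below r with a usable pivot in column c
        piv = next((i for i in range(r, m) if abs(M[i][c]) > tol), None)
        if piv is None:
            continue                         # no pivot in this column
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, m):            # eliminate below the pivot
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A = [[3, 0, 2, 2],
     [-6, 42, 24, 54],
     [21, -21, 0, -15]]
print(rank(A))   # 2, as in Ex. 3
```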

Theorem 1: (rank in terms of column vectors)
The rank of a matrix A equals the maximum number of linearly independent column
vectors of A.  Hence A and A^T have the same rank.

- The maximum number of linearly independent row vectors of A (= r) cannot exceed
  the maximum number of linearly independent column vectors of A.

Ex. 4) Expressing the remaining columns in terms of the first two:

    |  2 |       |  3 |       |  0 |        |  2 |       |  3 |         |  0 |
    | 24 | = 2/3 | -6 | + 2/3 | 42 | ,      | 54 | = 2/3 | -6 | + 29/21 | 42 |
    |  0 |       | 21 |       |-21 |        |-15 |       | 21 |         |-21 |
Vector Space, Dimension, Basis
- Vector space: a set V of vectors such that with any two vectors a and b in V all
                their linear combinations (including a + b) are elements of V.

Let V be a set of elements on which two operations called vector addition and scalar
multiplication are defined. Then V is said to be a vector space if the following ten
properties are satisfied.

Axioms for vector addition
(i) If x and y are in V, then x+y is in V.
(ii) For all x, y in V, x+y = y+x
(iii) For all x, y, z in V, x+(y+z) = (x+y) +z
(iv) There is a unique vector 0 in V such that 0 + x = x + 0 = x
(v) For each x in V, there exists a vector –x such that x+(-x) = (-x)+x=0

Axioms for scalar multiplications
(vi) If k is any scalar and x is in V, then kx is in V
(vii) k(x+y) = kx + ky
(viii) (k1+k2)x = k1x + k2x
(ix) k1(k2x) = (k1k2)x
(x) 1x = x


Vector Space, Dimension, Basis
- Subspace: If a subset W of a vector space V is itself a vector space under the
            operations of vector addition and scalar multiplication defined on V, then
             W is called a subspace of V.
               (i) If x and y are in W, then x+y is in W.
               (ii) If x is in W and k is any scalar, then kx is in W.
- Dimension: (dim V) the maximum number of linearly independent vectors in V.
- Basis for V: a linearly independent set in V consisting of a maximum possible
               number of vectors in V.
               (number of vectors of a basis for V = dim V)
- Span: the set of linear combinations of given vectors a1, …, ap with the same number
         of components (Span is a vector space)


Ex. 5) Consider the matrix A formed from the vectors of Ex. 1: its row space is a
       vector space of dim 2, with basis a1, a2 or a1, a3.
- Row space of A: span of the row vectors of a matrix A
  Column space of A: span of the column vectors of a matrix A

Theorem 2: The row space and the column space of a matrix A have the same
           dimension, equal to rank A.

Invariance of Rank under Elementary Row Operations
Theorem 3: Row-equivalent matrices
 Row-equivalent matrices have the same rank.
 (Echelon form of A: no change of rank property)

Ex. 6)       | 3    0   2    2 |       | 3   0   2   2 |
         A = | -6  42  24   54 |   ~   | 0  42  28  58 |           rank A = 2
             | 21 -21   0  -15 |       | 0   0   0   0 |
 Practical application of rank in connection with the linear independence and
  dependence of vectors:

Theorem 4: p vectors x1, ..., xp (with n components each) are linearly independent
           if the matrix with row vectors x1, ..., xp has rank p; they are linearly
           dependent if that rank is less than p.

Theorem 5: p vectors with n < p components are always linearly dependent.

Theorem 6: The vector space Rn consisting of all vectors with n components has
           dimension n.


6.5. Solutions of Linear Systems: Existence, Uniqueness, General Form
Theorem 1: Fundamental theorem for linear systems
(a) Existence: m equations in n unknowns,

                 a11 x1 + a12 x2 + ... + a1n xn = b1
                 a21 x1 + a22 x2 + ... + a2n xn = b2
                 ...
                 am1 x1 + am2 x2 + ... + amn xn = bm

  has solutions iff the coefficient matrix A and the augmented matrix à have the
  same rank.

(b) Uniqueness: the above system has precisely one solution iff this common rank r
                of A and à equals n.

(c) Infinitely many solutions: If rank of A = r < n, system has infinitely many solutions.

(d) Gauss elimination: If solutions exist, they can all be obtained by the Gauss
                       elimination.


The Homogeneous Linear System
Theorem 2: Homogeneous system
- A homogeneous linear system always has the trivial solution, x1 = 0, ..., xn = 0:
      a11 x1 + a12 x2 + ... + a1n xn = 0
      a21 x1 + a22 x2 + ... + a2n xn = 0
      ...
      am1 x1 + am2 x2 + ... + amn xn = 0

- Nontrivial solutions exist iff rank A<n.
- If rank A = r < n, these solutions, together with x = 0, form a vector space of
  dimension n - r, called the solution space of the above system.
- If x1 and x2 are solution vectors, then x = c1 x1 + c2 x2 is also a solution vector.

Theorem 3: A homogeneous linear system with fewer equations than unknowns
           always has nontrivial solutions.

The Nonhomogeneous Linear System
Theorem 4: If a nonhomogeneous linear system of equations of the form Ax = b has
           solutions, then all these solutions are of the form
   x = x0 + xh   (x0: any fixed solution of Ax = b; xh: a solution of the
                  homogeneous system)
6.6. Determinants. Cramer’s Rule
- Impractical in computations, but important in engineering applications (eigenvalues,
   DEs, vector algebra, etc.)
- A determinant is associated with an n x n square matrix.

Second-order Determinants

                 | a11 a12 |
    D = det A =  |         |  =  a11 a22 - a12 a21
                 | a21 a22 |

Ex. 2) Cramer's rule:

    a11 x1 + a12 x2 = b1         | b1 a12 |
                            x1 = | b2 a22 | / D  =  (b1 a22 - a12 b2) / D
    a21 x1 + a22 x2 = b2
                                 | a11 b1 |
                            x2 = | a21 b2 | / D  =  (a11 b2 - a21 b1) / D

Third-order Determinants

        | a11 a12 a13 |
                               | a22 a23 |       | a12 a13 |       | a12 a13 |
    D = | a21 a22 a23 |  = a11 |         | - a21 |         | + a31 |         |
                               | a32 a33 |       | a32 a33 |       | a22 a23 |
        | a31 a32 a33 |

Ex. 3) Cramer's rule:

    a11 x1 + a12 x2 + a13 x3 = b1                                 | b1 a12 a13 |
    a21 x1 + a22 x2 + a23 x3 = b2    x1 = D1/D, ...,  where  D1 = | b2 a22 a23 |
    a31 x1 + a32 x2 + a33 x3 = b3                                 | b3 a32 a33 |
Determinant of Any Order n

                 | a11 a12 ... a1n |
    D = det A =  | a21 a22 ... a2n |
                 | ...             |
                 | an1 an2 ... ann |

    D = aj1 Cj1 + aj2 Cj2 + ... + ajn Cjn     (j = 1, 2, ..., n)
    D = a1k C1k + a2k C2k + ... + ank Cnk     (k = 1, 2, ..., n)

    Cjk = (-1)^(j+k) Mjk
      Mjk: minor of ajk in D          D = Σ (k=1..n) (-1)^(j+k) ajk Mjk   (j = 1, 2, ..., n)
      Cjk: cofactor of ajk in D       D = Σ (j=1..n) (-1)^(j+k) ajk Mjk   (k = 1, 2, ..., n)

  Ex. 4) Third-order determinant:

            M21 = | a12 a13 | ,    C21 = -M21
                  | a32 a33 |
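The cofactor expansion translates directly into a recursive function, shown here expanded along the first row (a sketch; fine for small n, but the cost grows like n!, which is why elimination is preferred in practice):

```python
def det(A):
    """Determinant by cofactor expansion along the first row:
    D = sum_k (-1)^(1+k) a_1k M_1k."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for k in range(n):
        # minor M_1k: delete row 1 and column k
        minor = [row[:k] + row[k + 1:] for row in A[1:]]
        total += (-1) ** k * A[0][k] * det(minor)
    return total

# 2 x 2 case: a11 a22 - a12 a21
assert det([[1, 2], [3, 4]]) == -2
# the 4 x 4 determinant of Ex. 7 below (entries as reconstructed there)
assert det([[2, 0, -4, 6],
            [4, 5, 1, 0],
            [0, 2, 6, -1],
            [-3, 8, 9, 1]]) == 1134
```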


General Properties of Determinants
Theorem 1: (a) Interchange of two rows multiplies the value of the determinant by –1.
(b) Addition of a multiple of a row to another row does not alter the value of the
determinant.
(c) Multiplication of a row by c multiplies the value of the determinant by c.

Ex. 7) Determinant by reduction to triangular form

        |  2  0  -4    6 |     | 2  0  -4     6   |
        |  4  5   1    0 |     | 0  5   9   -12   |
    D = |  0  2   6   -1 |  =  | 0  0   2.4   3.8 |  =  2 * 5 * 2.4 * 47.25  =  1134
        | -3  8   9    1 |     | 0  0   0   47.25 |

Theorem 2: (d) Transposition leaves the value of a determinant unaltered.
(e) A zero row or column renders the value of a determinant zero.
(f) Proportional rows or columns render the value of a determinant zero. In particular,
a determinant with two identical rows or columns has the value zero.




- As a consequence of (c), for an n x n matrix A:   det(kA) = k^n det A

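The reduction used in Ex. 7 can be sketched as a short routine; `det_by_elimination` is a hypothetical helper name, and partial pivoting is added here (each row swap flips the sign of the result, per Theorem 1(a)):

```python
def det_by_elimination(a):
    """Determinant via reduction to upper triangular form.
    Row additions leave the value unchanged (Theorem 1(b));
    each row swap multiplies it by -1 (Theorem 1(a))."""
    a = [row[:] for row in a]   # work on a copy
    n = len(a)
    sign = 1.0
    for k in range(n):
        # partial pivoting: bring the largest entry in column k to the diagonal
        p = max(range(k, n), key=lambda i: abs(a[i][k]))
        if a[p][k] == 0:
            return 0.0          # zero column below the diagonal => det = 0
        if p != k:
            a[k], a[p] = a[p], a[k]
            sign = -sign
        for i in range(k + 1, n):
            m = a[i][k] / a[k][k]
            for j in range(k, n):
                a[i][j] -= m * a[k][j]
    d = sign
    for k in range(n):
        d *= a[k][k]            # product of the diagonal of the triangular form
    return d

A = [[2, 0, -4, 6], [4, 5, 1, 0], [0, 2, 6, -1], [-3, 8, 9, 1]]
print(det_by_elimination(A))    # ~ 1134 (cf. 2 * 5 * 2.4 * 47.25 in Ex. 7)
```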
Rank in Terms of Determinants
- The rank of a matrix can be characterized in terms of determinants.

Theorem 3: An m x n matrix A=[ajk] has rank r ≥ 1 iff A has an r x r submatrix with
           nonzero determinant, whereas the determinant of every square submatrix
           with r+1 or more rows that A contains is zero.
        - In particular, if A is square, n x n, it has rank n iff det A ≠ 0.

Cramer's Rule
- Not practical in computations, but theoretically interesting in DEs and elsewhere

Theorem 4: (a) Consider a linear system of n equations in the same number of unknowns x1, ..., xn:

                   a11 x1 + a12 x2 + ... + a1n xn = b1
                   a21 x1 + a22 x2 + ... + a2n xn = b2
                   ..........
                   an1 x1 + an2 x2 + ... + ann xn = bn

                If this system has a nonzero coefficient determinant D = det A, it has
                precisely one solution, given by

                        x1 = D1/D,   x2 = D2/D,   ...,   xn = Dn/D

                where Dk is the determinant obtained from D by replacing the kth column
                of D by the column with the entries b1, ..., bn.
   (b) If the system is homogeneous and D ≠ 0, it has only the trivial solution x = 0. If D = 0,
        the homogeneous system also has nontrivial solutions.
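A direct transcription of Theorem 4, as a sketch for small n only (`det` and `cramer` are hypothetical helper names; the determinant is computed by cofactor expansion, which is exactly the kind of cost that makes the rule impractical for large systems):

```python
def det(m):
    # cofactor (Laplace) expansion along the first row; fine for small n
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def cramer(A, b):
    """Solve Ax = b by Cramer's rule: x_k = D_k / D, where D_k is D with
    its kth column replaced by b.  Requires D = det A != 0."""
    D = det(A)
    if D == 0:
        raise ValueError("D = 0: no unique solution")
    n = len(A)
    x = []
    for k in range(n):
        Ak = [row[:k] + [b[i]] + row[k + 1:] for i, row in enumerate(A)]
        x.append(det(Ak) / D)
    return x

# e.g.  2x + y = 5,  x + 3y = 10
print(cramer([[2, 1], [1, 3]], [5, 10]))   # [1.0, 3.0]
```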

6.7. Inverse of a Matrix: Gauss-Jordan Elimination

       AA^-1 = A^-1 A = I

- If A has an inverse, then A is a nonsingular matrix (and the inverse is unique!);
  if A has no inverse, it is called a singular matrix.

Theorem 1: Existence of the inverse
 The inverse A^-1 of an n x n matrix A exists iff rank A = n, hence iff det A ≠ 0. Hence A is
 nonsingular if rank A = n, and singular if rank A < n.

Determination of the Inverse
- Augment the n x n matrix A with the n x n identity matrix I; when Gauss-Jordan
  elimination has reduced the left half to I, the right half is A^-1.

              | a11 a12 ... a1n | 1 0 ... 0 |
      (A I) = | a21 a22 ... a2n | 0 1 ... 0 |          (A I)  →  (I A^-1)
              | ...             | ...       |
              | an1 an2 ... ann | 0 0 ... 1 |
      Matrix Inverse

       AA^-1 = A^-1 A = I
  a. Calculating the inverse
  - The inverse can be computed in a column-by-column fashion
    by generating solutions with unit vectors as the rhs constants:

      b = (1, 0, 0)^T  →  the resulting solution is the first column of the matrix inverse
      b = (0, 1, 0)^T  →  the second column of the matrix inverse, and so on.

  - Best way: use the LU decomposition algorithm, which factors A only once and then
    efficiently evaluates the multiple rhs vectors.
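A minimal sketch of this column-by-column approach, assuming a Doolittle factorization without pivoting and nonzero pivots (`lu_decompose`, `lu_solve`, and `inverse_by_lu` are hypothetical names):

```python
def lu_decompose(A):
    """Doolittle LU factorization A = LU (no pivoting; assumes nonzero pivots)."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        L[i][i] = 1.0
        for j in range(i + 1, n):
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

def lu_solve(L, U, b):
    n = len(b)
    y = [0.0] * n
    for i in range(n):                    # forward substitution: Ly = b
        y[i] = b[i] - sum(L[i][k] * y[k] for k in range(i))
    x = [0.0] * n
    for i in reversed(range(n)):          # back substitution: Ux = y
        x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x

def inverse_by_lu(A):
    """Factor A once; each unit vector e_j yields column j of A^-1."""
    n = len(A)
    L, U = lu_decompose(A)
    cols = [lu_solve(L, U, [1.0 if i == j else 0.0 for i in range(n)])
            for j in range(n)]
    return [[cols[j][i] for j in range(n)] for i in range(n)]  # columns -> rows

print(inverse_by_lu([[4.0, 3.0], [6.0, 3.0]]))  # ~ [[-0.5, 0.5], [1.0, -0.667]]
```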



  b. Matrix inversion by Gauss-Jordan elimination
  The square matrix A^-1 assumes the role of the column vector of unknowns x, while the
  square matrix I assumes the role of the rhs column vector b.

   | 2 1 1 | 1 0 0 |      | 1 1/2 1/2 |  1/2 0 0 |      | 1 0 0 |  3/4 -1/4 -1/4 |
   | 1 2 1 | 0 1 0 |  →   | 0 3/2 1/2 | -1/2 1 0 |  →   | 0 1 0 | -1/4  3/4 -1/4 |
   | 1 1 2 | 0 0 1 |      | 0 1/2 3/2 | -1/2 0 1 |      | 0 0 1 | -1/4 -1/4  3/4 |
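The same elimination, sketched as a routine that reproduces the worked example above (`gauss_jordan_inverse` is a hypothetical helper name; partial pivoting is added to guard against a zero pivot):

```python
def gauss_jordan_inverse(A):
    """Invert A by Gauss-Jordan elimination on the augmented matrix (A | I):
    when the left half has become I, the right half is A^-1."""
    n = len(A)
    # build the augmented matrix (A | I)
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(A)]
    for k in range(n):
        # partial pivoting: largest entry in column k becomes the pivot
        p = max(range(k, n), key=lambda i: abs(aug[i][k]))
        if aug[p][k] == 0:
            raise ValueError("matrix is singular")
        aug[k], aug[p] = aug[p], aug[k]
        piv = aug[k][k]
        aug[k] = [v / piv for v in aug[k]]        # scale pivot row so pivot = 1
        for i in range(n):
            if i != k:                            # clear column k in ALL other rows
                m = aug[i][k]
                aug[i] = [v - m * w for v, w in zip(aug[i], aug[k])]
    return [row[n:] for row in aug]

Ainv = gauss_jordan_inverse([[2.0, 1.0, 1.0], [1.0, 2.0, 1.0], [1.0, 1.0, 2.0]])
# ~ [[0.75, -0.25, -0.25], [-0.25, 0.75, -0.25], [-0.25, -0.25, 0.75]]
```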
 Ch. 6: Linear Algebra                                                                         화공수학
Some Useful Formulas for Inverses
Theorem 2: The inverse of a nonsingular n x n matrix A=[ajk] is given by

                     1                 1    | A11 A21 ... An1 |
          A^-1  =  ----- [Ajk]^T  =  ----- | A12 A22 ... An2 |
                   det A             det A | ...         ... |
                                           | A1n A2n ... Ann |

          where Ajk is the cofactor of ajk in det A. (Note the transposition: Ajk
          occupies row k, column j of A^-1.)

          In particular, for a 2 x 2 matrix:

               | a11 a12 |             1    |  a22  -a12 |
          A =  | a21 a22 | ,  A^-1 = ----- |  -a21  a11 |
                                     det A
Ex. 3) For a 3 x 3 matrix

- Diagonal matrices A have an inverse iff all ajj ≠ 0. Then A^-1 is diagonal with entries
  1/a11, ..., 1/ann.

Ex. 4) Inverse of a diagonal matrix

-   (AC)^-1 = C^-1 A^-1
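The 2 x 2 case of Theorem 2 in code (`inverse_2x2` is a hypothetical helper name):

```python
def inverse_2x2(a11, a12, a21, a22):
    """Theorem 2 specialized to 2 x 2: swap the diagonal entries,
    negate the off-diagonal entries, divide by det A."""
    d = a11 * a22 - a12 * a21
    if d == 0:
        raise ValueError("singular matrix: det A = 0")
    return [[a22 / d, -a12 / d],
            [-a21 / d, a11 / d]]

print(inverse_2x2(3, 1, 2, 1))   # [[1.0, -1.0], [-2.0, 3.0]]
```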

 Vanishing of Matrix Products. Cancellation Law
 - Matrix multiplication is not commutative in general: AB ≠ BA
 - AB = 0 does not necessarily imply A = 0 or B = 0 (or BA = 0)
 - AC = AD does not necessarily imply C = D (even when A ≠ 0)


 Theorem 3: Cancellation law
   Let A, B, C be n x n matrices. Then
 (a) If rank A = n and AB = AC, then B = C.
 (b) If rank A = n, then AB = 0 implies B = 0. Hence if AB = 0 but A ≠ 0 as well as B ≠ 0,
      then rank A < n and rank B < n.
 (c) If A is singular, so are BA and AB.

 Determinants of Matrix Products
 Theorem 4: For any n x n matrices A and B,

                  det(AB) = det(BA) = det A det B
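Both points, the vanishing product of two nonzero matrices and the product rule for determinants, can be checked on a small 2 x 2 example (`matmul` and `det2` are hypothetical helper names):

```python
def matmul(A, B):
    # plain triple-loop matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[1, 1], [2, 2]]     # A != 0, but singular (rank 1)
B = [[-1, 1], [1, -1]]   # B != 0, but singular (rank 1)

print(matmul(A, B))                             # [[0, 0], [0, 0]]: AB = 0
print(det2(matmul(A, B)), det2(A) * det2(B))    # 0 0: det(AB) = det A det B
```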




