					Searching for the Minimal Bézout Number




                          Lin Zhenjiang, Allen
                        zjlin@cse.cuhk.edu.hk
                         Dept. of CSE, CUHK
                                  3-Oct-2005
1.   Polynomial system problem

2.   Homotopy method

3.   Bézout theory and minimal Bézout number

4.   Problems

5.   Tabu search method for minimal Bézout number searching

6.   Monte Carlo method for Bézout number calculating

7.   Conclusion
1.   Polynomial system problem

$$
P(X):\quad
\begin{cases}
p_1(x_1, \ldots, x_n) = 0\\
p_2(x_1, \ldots, x_n) = 0\\
\qquad\vdots\\
p_n(x_1, \ldots, x_n) = 0
\end{cases}
\qquad (1.1)
$$

where $X = \{x_1, \ldots, x_n\}$.

Mission: Find all solutions of P(X) = 0.
Applications: very common in many engineering fields:

•   formula construction,
•   geometric intersection problems,
•   computation of equilibrium states,
•   etc.
2.    Homotopy method

Homotopy equation:   $H(x, t) = (1 - t)\,Q(x) + t\,P(x)$

Construct Q(x) that satisfies the following conditions:

1.   The solutions of Q(x) = 0 are either known or easy to obtain;
2.   For 0 ≤ t ≤ 1, the solution set of H(x, t) = 0 consists of a finite
     number of curves parameterized by t;
3.   Each solution of H(x, 1) = P(x) = 0 can be obtained by tracing a
     curve starting from t = 0.
Figure 1: Illustration of the homotopy method. Solution curves of
H(x, t) = 0 run from the known solutions of H(x, 0) = Q(x) = 0 at
t = 0 to the target solutions of H(x, 1) = P(x) = 0 at t = 1.


Mission: Construct Q(X) with a minimal number of solutions.
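As a minimal one-variable illustration (an assumed example, not tied to a particular application): for $p(x) = x^2 - 3x + 2$ one may take $q(x) = x^2 - 4$, whose roots $\pm 2$ are known. Then

$$
H(x, t) = (1 - t)(x^2 - 4) + t(x^2 - 3x + 2) = x^2 - 3tx + (6t - 4),
$$

whose roots are $x = 2$ and $x = 3t - 2$ for every $t \in [0, 1]$; tracing these two curves from $t = 0$ to $t = 1$ carries the start solutions $2$ and $-2$ to the target solutions $2$ and $1$ of $p(x) = 0$.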
3.    Minimal Bézout number
For a polynomial system
                    P(X) = 0,
where P = (p1, p2, …, pn), X = (x1, x2, …, xn),

Bézout theory: by dividing the n variables x1, x2, …, xn into
several groups (called a partition strategy), we can obtain the
corresponding Q(X) and an upper bound on its number of solutions,
called the Bézout number.

Mission: Find the partition strategy that corresponds to the
         minimal Bézout number.
In more detail:

Divide X = (x1, x2, …, xn) into m groups:
          X = (X(1), X(2), …, X(m)),
then we get
a)   the degree matrix D = (dij), where dij is the degree of pi in
     the variable group X(j);

b)   the partition vector K = (kj)^T, where kj is the number of
     variables that X(j) contains.
Example 1 (n = 3):
$$
P(X):\quad
\begin{cases}
x_1 x_2^2 + x_1 x_3 + x_2 = 0\\
x_1 x_3 + x_2^2 + x_3 = 0\\
x_1 + x_2 + x_3^2 = 0
\end{cases}
$$
If X = (x1, x2, x3) is divided into 2 groups:
     X = ( { x1, x2 }, { x3 } ), or ( { 1, 2 }, { 3 } ),
then we have

              3 1
                 
                                        K   2, 1 
                                                           T
          D  2 1           and
              1 2
                 
The formula for the Bézout number B(D, K) is
$$
B(D, K) = \frac{1}{k_1!\,k_2!\cdots k_m!}\,\mathrm{Per}(D^{*}),
$$
where
$$
D^{*} =
\begin{pmatrix}
\underbrace{d_{11} \cdots d_{11}}_{k_1} & \underbrace{d_{12} \cdots d_{12}}_{k_2} & \cdots & \underbrace{d_{1m} \cdots d_{1m}}_{k_m}\\
\vdots & \vdots & & \vdots\\
\underbrace{d_{n1} \cdots d_{n1}}_{k_1} & \underbrace{d_{n2} \cdots d_{n2}}_{k_2} & \cdots & \underbrace{d_{nm} \cdots d_{nm}}_{k_m}
\end{pmatrix}_{n \times n}
$$
(column j of D is repeated k_j times), and Per(D*) is the permanent of the matrix D*.
                       3 1
                            
                   D   2 1  and K   2, 1 
                                               T
In Example 1,
                       1 2
                            
So the Bézout number is
$$
B(D, K) = \frac{1}{2!\,1!}\,\mathrm{Per}(D^{*}) = \frac{34}{2} = 17,
$$
where
$$
D^{*} = \begin{pmatrix} 3 & 3 & 1\\ 2 & 2 & 1\\ 1 & 1 & 2 \end{pmatrix}.
$$
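To make the computation above concrete, here is a small brute-force sketch (the function names `permanent` and `bezout_number` are illustrative, not from the original slides); it reproduces Per(D*) = 34 and B(D, K) = 17 for Example 1:

from itertools import permutations
from math import factorial

def permanent(M):
    """Brute-force permanent: sum over all n! permutation products."""
    n = len(M)
    total = 0
    for sigma in permutations(range(n)):
        term = 1
        for i in range(n):
            term *= M[i][sigma[i]]
        total += term
    return total

def bezout_number(D, K):
    """B(D, K) = Per(D*) / (k_1! ... k_m!), where D* repeats column j of
    the degree matrix D exactly K[j] times (the division is exact)."""
    D_star = [[row[j] for j in range(len(K)) for _ in range(K[j])] for row in D]
    denom = 1
    for k in K:
        denom *= factorial(k)
    return permanent(D_star) // denom

# Example 1 with the partition ({x1, x2}, {x3}):
D = [[3, 1], [2, 1], [1, 2]]
K = [2, 1]
print(bezout_number(D, K))   # -> 17  (Per(D*) = 34, divided by 2!·1!)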
4.      Problems
•    Searching for the optimal partition among all possible partition
     strategies.
     Model: how many ways are there to put n balls into m
     (1 ≤ m ≤ n) boxes? The total count is the Bell number B(n),
     which satisfies the estimate
            $(n/2)^{n/2} < B(n) < n!$

•    Computing the Bézout number (i.e., a permanent)
     The best-known exact algorithm is Ryser's, with complexity O(n·2^n).
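For reference, a direct (not Gray-code optimized) sketch of Ryser's inclusion-exclusion formula, $\mathrm{Per}(A) = (-1)^n \sum_{S \subseteq \{1,\ldots,n\}} (-1)^{|S|} \prod_{i} \sum_{j \in S} a_{ij}$; the function name is illustrative:

from itertools import combinations

def ryser_permanent(A):
    """Ryser's inclusion-exclusion formula for the permanent (exponential time)."""
    n = len(A)
    total = 0
    for r in range(1, n + 1):                       # non-empty column subsets S
        for cols in combinations(range(n), r):
            prod = 1
            for i in range(n):
                prod *= sum(A[i][j] for j in cols)  # row sums restricted to S
            total += (-1) ** r * prod
    return (-1) ** n * total

print(ryser_permanent([[3, 3, 1], [2, 2, 1], [1, 1, 2]]))   # -> 34 (D* of Example 1)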
5.   Tabu search method for minimal Bézout
     number searching

Main idea:
Construct a neighborhood relation between partition strategies
(partitions), and apply the tabu (taboo) search method to search
for the optimal partition.
Two kinds of neighborhood relations

split:   { 1, 3, 6 } { 5 } { 2, 4 }
              →
         { 1, 3 } { 6 } { 5 } { 2, 4 }

merge:   { 1, 3, 6 } { 5 } { 2, 4 }
              →
         { 1, 3, 6 } { 5, 2, 4 }

A partition has O(n²) neighbors.
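A sketch of the two moves in Python (reading "split" as detaching one variable into its own group, which is one way to keep the neighborhood size O(n²); the function names are illustrative):

from itertools import combinations

def split_neighbors(partition):
    """Split: detach one variable from a group of size >= 2 into a new singleton group."""
    neighbors = []
    for g, group in enumerate(partition):
        if len(group) < 2:
            continue
        rest = partition[:g] + partition[g + 1:]
        for x in group:
            neighbors.append(rest + [group - {x}, frozenset({x})])
    return neighbors

def merge_neighbors(partition):
    """Merge: replace two groups by their union."""
    neighbors = []
    for i, j in combinations(range(len(partition)), 2):
        rest = [grp for k, grp in enumerate(partition) if k not in (i, j)]
        neighbors.append(rest + [partition[i] | partition[j]])
    return neighbors

# The example partition from the slides: {1, 3, 6} {5} {2, 4}
P = [frozenset({1, 3, 6}), frozenset({5}), frozenset({2, 4})]
print(len(split_neighbors(P) + merge_neighbors(P)))   # 8 candidate moves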
Evaluation function

Bézout number, right?


But how can we calculate it?


That’s our next problem.
6.     Monte Carlo method for Bézout number
       calculating


•    Bézout number and permanent

$$
B(D, K) = \frac{1}{k_1!\,k_2!\cdots k_m!}\,\mathrm{Per}(D^{*}) \qquad (6.1)
$$

•    Permanent

$$
\mathrm{Per}(A) = \sum_{\omega \in S_n}\ \prod_{i=1}^{n} a_{i,\,\omega(i)} \qquad (6.2)
$$

where A is an n×n matrix, and S_n is the set of all permutations of
the numbers 1, 2, …, n.

Example 2.

         2 3
         4 5 , Per( A)  2  5  4  3  22
      A     
             
More about the permanent
•   The computation of permanents has been studied fairly
    extensively in algebraic complexity theory.

•   The complexity of the best-known exact algorithms grows
    exponentially with the matrix size.

•   Applications – counting problems:
    the number of perfect matchings  -- 0-1 permanent
    the number of Latin squares      -- general permanent
From the definition of the permanent we can see that
a. each permutation of 1, 2, …, n, denoted by ω,
   corresponds to one product term g(ω);
b. there are n! product terms in total.

Let S_n be the sample space Ω. Then we have
$$
\mathrm{Per}(A) = \theta \cdot |\Omega| = \theta \cdot n! \qquad (6.3)
$$
where θ = E(g(ω)) is the expectation of g(ω).
MC (Monte Carlo) Method

$$
\mathrm{Per}(A) \approx \hat{\theta} \cdot n! \qquad (6.4)
$$

where

$$
\hat{\theta} = \frac{1}{N} \sum_{i=1}^{N} g(\omega_i) \qquad (6.5)
$$

is the approximation of θ obtained by sampling the ω_i uniformly
from the sample space Ω.
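A minimal sketch of this estimator (the function name `mc_permanent` and the sample size are illustrative):

import math
import random

def mc_permanent(A, N=100000):
    """Plain Monte Carlo estimate of Per(A): average g(omega) over N uniform
    random permutations omega and scale by n!  (formulas (6.4)-(6.5))."""
    n = len(A)
    rows = list(range(n))
    total = 0.0
    for _ in range(N):
        sigma = rows[:]
        random.shuffle(sigma)              # uniform random permutation omega
        term = 1.0
        for i in range(n):
            term *= A[i][sigma[i]]         # g(omega) = prod_i a_{i, omega(i)}
        total += term
    theta_hat = total / N                  # (6.5)
    return theta_hat * math.factorial(n)   # (6.4)

print(mc_permanent([[2, 3], [4, 5]]))      # ≈ 22, the exact value from Example 2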
Disadvantage of the MC method
There are too many zero-value product terms when the matrix A is
sparse, i.e., for an n×n matrix with sparsity p,

$$
p_n \le p^{\,n} \to 0, \qquad n \to \infty,
$$

where p_n is the probability of sampling a non-zero product term.
Applying the simple Monte Carlo approach to our problem is
therefore not very helpful.
MC(Ω+) algorithm
Let
$$
\Omega^{+} = \{\, \omega \in \Omega : g(\omega) \neq 0 \,\}
$$
be the sample space. Then we have
$$
\mathrm{Per}(A) \approx \hat{\theta}^{+} \cdot |\Omega^{+}| \qquad (6.6)
$$
where
$$
\hat{\theta}^{+} = \frac{1}{N} \sum_{i=1}^{N} g(\omega_i^{+}) \qquad (6.7)
$$
and the ω_i^+ are sampled uniformly from Ω^+.

Advantage:   |Ω^+| << |Ω|
Question:    How can we get $\hat{\theta}^{+}$ and |Ω^+|?
How to get |Ω^+|?

Let I_A be the matrix that has the same structure as A except
that every non-zero entry is replaced by 1.

Obviously, we have
                 |Ω^+| = Per(I_A).
Thus we can calculate |Ω^+| with 0-1 permanent algorithms.
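For small matrices this can be checked by brute force (a sketch; the function name is illustrative, and a 0-1 Ryser or other exact permanent routine would replace it in practice):

from itertools import permutations

def omega_plus_size(A):
    """|Omega+| = Per(I_A): count the permutations whose product term is non-zero."""
    n = len(A)
    return sum(
        1
        for sigma in permutations(range(n))
        if all(A[i][sigma[i]] != 0 for i in range(n))
    )

print(omega_plus_size([[3, 6, 1], [2, 4, 0], [0, 1, 5]]))   # -> 3 (the matrix used in the expansion example below)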
How to get $\hat{\theta}^{+}$ or $g(\omega^{+})$?
The equivalent question is

How can we choose a non-zero product term
uniformly?
How to get $\hat{\theta}^{+}$ or $g(\omega^{+})$?



Expand the permanent on the first column.
For any n×n matrix A = (a_ij), we have

$$
\mathrm{Per}(A) = \sum_{i=1}^{n} a_{i1}\,\mathrm{Per}\!\left(A^{c}(a_{i1})\right),
$$

where A^c(a_i1) is the complementary sub-matrix of A with respect
to a_i1 (delete row i and column 1).
Remember the Laplace expansion of a determinant?
How to get $\hat{\theta}^{+}$ or $g(\omega^{+})$?


Example:
$$
A = \begin{pmatrix} 3 & 6 & 1\\ 2 & 4 & 0\\ 0 & 1 & 5 \end{pmatrix},
$$
then
$$
\mathrm{Per}(A)
= 3 \cdot \mathrm{Per}\begin{pmatrix} 4 & 0\\ 1 & 5 \end{pmatrix}
+ 2 \cdot \mathrm{Per}\begin{pmatrix} 6 & 1\\ 1 & 5 \end{pmatrix}
+ 0 \cdot \mathrm{Per}\begin{pmatrix} 6 & 1\\ 4 & 0 \end{pmatrix}.
$$
Divide product terms into 3 groups!
How to get $\hat{\theta}^{+}$ or $g(\omega^{+})$?


                   4 0            6 1             6 1
                   1 5   2  Per 1 5   0  Per 4 0 
 Per( A)  3  Per                                   
                                                     
                  ↓              ↓               ↓
           1 product term 2 terms                0 term
Choose “3” (group 1) with probability 1/3, “2” (group 2) with
probability 2/3, and “0” (group 3) with probability 0.
By iterating this procedure, we can sample a non-zero product
term uniformly.
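A recursive sketch of this sampling procedure (function names are illustrative; each branch is weighted by its non-zero term count, i.e., the 0-1 permanent of the complementary sub-matrix):

import random

def count_nonzero_terms(A):
    """Number of non-zero product terms of A, i.e., Per(I_A)."""
    n = len(A)
    if n == 0:
        return 1
    total = 0
    for i in range(n):
        if A[i][0] != 0:
            sub = [row[1:] for k, row in enumerate(A) if k != i]
            total += count_nonzero_terms(sub)
    return total

def sample_nonzero_term(A):
    """Return one non-zero product term g(omega), chosen uniformly from Omega+,
    by expanding on the first column and weighting each branch by its count.
    Assumes Per(I_A) > 0."""
    n = len(A)
    if n == 0:
        return 1
    weights = []
    for i in range(n):
        if A[i][0] == 0:
            weights.append(0)
        else:
            sub = [row[1:] for k, row in enumerate(A) if k != i]
            weights.append(count_nonzero_terms(sub))
    r = random.randrange(sum(weights))          # pick a branch proportionally
    i = 0
    while r >= weights[i]:
        r -= weights[i]
        i += 1
    sub = [row[1:] for k, row in enumerate(A) if k != i]
    return A[i][0] * sample_nonzero_term(sub)

A = [[3, 6, 1], [2, 4, 0], [0, 1, 5]]
print(sample_nonzero_term(A))   # one of the 3 non-zero terms, each with probability 1/3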
Layered MC(Ω+) algorithm
Basic Idea - Importance sampling
Divide the sample space Ω^+ into several sub-spaces within which
the sample values are closer to each other:

$$
\mathrm{Per}(A)
= \sum_{i=1}^{n^2} a_i\,\mathrm{Per}\!\left(A_i^{c}(a_i)\right)
= \sum_{i=1}^{n^2}\ \sum_{\omega \in \Omega_i(A)} g(\omega),
$$

where a_i is the i-th largest entry of A and A_i^c(a_i) is the
complementary sub-matrix of A_i with respect to a_i.
A_i – keep the i largest entries of A and set the others to zero.
Product terms in sub-space Ω_i(A) are likely to be larger than
those in sub-space Ω_{i+1}(A).
Ω^+ is divided into n² sub-spaces according to the values of the
product terms.
Layered MC(Ω+) algorithm

How should the number of samples be assigned to the sub-spaces?

Based on
•   the dimensions of the sub-spaces;
•   the sums of product terms that have already been estimated
    in the sub-spaces.

This leads to two algorithms: M1 and M2.
Numerical results
7.     Conclusion

•    Polynomial systems

•    Homotopy method & Bézout theory

•    Searching for minimal Bézout number

•    Tabu search

•    Computing Bézout number

•    Monte Carlo method

				