Different Local Search Algorithms in STAGE for
Solving the Bin Packing Problem

Gholamreza Haffari
Sharif University of Technology
haffari@ce.sharif.edu
Overview
   Combinatorial Optimization Problems and State Spaces
   STAGE Algorithm
   Local Search Algorithms
   Results
   Conclusions and Future Work
Optimization Problems
   Objective function: F(x1, x2, …, xn)

   Find the vector X = (x1, x2, …, xn) that
   minimizes (or maximizes) F

   Constraints:
     g1(X) ≤ 0
     g2(X) ≤ 0
     ...
     gm(X) ≤ 0
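
To make the formulation concrete, here is a minimal Python sketch that minimizes a toy objective under one inequality constraint by coarse grid search; the specific F, g1, and search box are invented for illustration and are not from the slides.

```python
from itertools import product

def F(x1, x2):                      # toy objective (invented)
    return (x1 - 1) ** 2 + (x2 - 2) ** 2

def g1(x1, x2):                     # toy constraint: feasible iff g1(X) <= 0
    return x1 + x2 - 2

grid = [i / 10 for i in range(-30, 31)]   # coarse box [-3, 3]^2
best = min((p for p in product(grid, grid) if g1(*p) <= 0),
           key=lambda p: F(*p))
print(best, F(*best))               # -> (0.5, 1.5) 0.5
```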
Combinatorial Optimization Problems (COP)

   A special kind of optimization problem in which
   the variables are discrete

   Most COPs are NP-hard, i.e., no polynomial-time
   algorithm is known for solving them.
Satisfiability
   SAT: Given a formula f(x1, x2, …, xn) in
   propositional calculus, is there an assignment
   to its variables making it true?

   The problem is NP-complete (Cook 1971)
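
A brute-force decision procedure makes the definition concrete, and its exponential running time illustrates why the problem is hard. This is a minimal sketch assuming the formula is given in CNF as a list of clauses of signed variable indices; the example formula is invented.

```python
from itertools import product

def is_satisfiable(clauses, n):
    """Try all 2^n assignments; exponential, as expected for NP-complete SAT."""
    for bits in product([False, True], repeat=n):
        # A clause holds if any literal is true; literal k > 0 means x_k,
        # k < 0 means (not x_k).
        if all(any(bits[abs(k) - 1] == (k > 0) for k in clause)
               for clause in clauses):
            return True
    return False

# f = (x1 or not x2) and (x2 or x3)
print(is_satisfiable([[1, -2], [2, 3]], n=3))  # True
```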
Bin Packing Problem (BPP)

   Given a list (a1, a2, …) of items, each of which
   has a size s(ai) > 0, and a bin capacity C, what
   is the minimum number of bins needed to pack all
   the items?

   The problem is NP-complete (Garey and Johnson 1979)
An Example of BPP

   [Figure: items a1, a2, a3, a4 packed into bins b1, b2, b3, b4]

   Objects list: a1, a2, …, an
   Objective function: m, the number of bins used
   Each bin bj has capacity C: s(ai) ≤ C, ai ∈ bj, 1 ≤ j ≤ m
Definition of State in BPP
   A particular permutation of the items in the
   object list is called a state.

   [Figure: a greedy algorithm maps the permutation
   a1, a2, a3, a4 to a packing into bins b1, …, b4]
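
The slides do not spell out which greedy rule decodes a permutation into bins; the sketch below assumes the standard first-fit rule, and the item sizes and capacity are invented for illustration.

```python
def first_fit(sizes, perm, C):
    """Pack items in the order given by the state (a permutation)."""
    bins = []                          # remaining capacity of each open bin
    for i in perm:
        for b in range(len(bins)):
            if sizes[i] <= bins[b]:    # first bin with enough room
                bins[b] -= sizes[i]
                break
        else:
            bins.append(C - sizes[i])  # no bin fits: open a new one
    return len(bins)                   # objective function m

sizes = [4, 3, 3, 2, 2]                           # s(a_i), invented
print(first_fit(sizes, [0, 1, 2, 3, 4], C=6))     # -> 3 bins
```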
State Space of BPP

   [Figure: a tree of states rooted at the permutation
   a1, a2, a3, a4, with neighboring permutations such as
   a1, a2, a4, a3 and a1, a4, a2, a3 as children]
A Local Search Algorithm
1) s0 : a random start state

2) for i = 0 to +∞

     - generate a new solution set S from the
       current solution si

     - decide whether si+1 = s' ∈ S or si+1 = si

     - if a stopping condition is satisfied,
       return the best solution found
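
A minimal Python rendering of the loop above, assuming problem-specific callables random_state, neighbors, and cost; the acceptance rule shown is plain hill climbing (accept only improving neighbors), one of several choices the pseudocode leaves open.

```python
def local_search(random_state, neighbors, cost, max_steps=1000):
    s = random_state()               # s0: a random start state
    best = s
    for _ in range(max_steps):       # stands in for "i = 0 to +inf"
        S = list(neighbors(s))       # new solution set from current s_i
        s_new = min(S, key=cost)     # candidate s' in S
        if cost(s_new) >= cost(s):   # no improvement: local optimum reached
            break                    # stopping condition
        s = s_new
        if cost(s) < cost(best):
            best = s
    return best                      # the best solution found
```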
Local Optimum Solutions
   The quality of the local optimum produced by a
   local search run depends on the starting state.
Multi-Start LSA
   Runs the base local search algorithm from
   different starting states and returns the best
   result found.

   Is it possible to choose a promising new
   starting state?
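
A minimal sketch of the multi-start wrapper, assuming run_once is a zero-argument wrapper around a base local search such as the previous sketch.

```python
def multi_start(run_once, cost, n_restarts=20):
    # Keep the best local optimum over independent random restarts.
    return min((run_once() for _ in range(n_restarts)), key=cost)

# e.g. run_once = lambda: local_search(random_state, neighbors, cost)
```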
Other Features of a State
   Other features of a state can help guide the
   search process.

   [Figure omitted] (Boyan 1998)
Previous Experiences
   The local optima of a COP are related to one
   another, so previously found local optima can
   help locate more promising start states.
Core Ideas
   Use an evaluation function (EF) to predict the
   eventual outcome of running a local search from
   a given state.

   The EF is a function of some features of the state.

   The EF is retrained gradually as search proceeds.
STAGE Algorithm

   Execution phase:
     Uses the evaluation function to locate a good
     start state, then runs the local search.

   Learning phase:
     Retrains the EF on the newly generated search
     trajectory.
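
A minimal sketch of this alternation, assuming problem-specific callables local_search (returning the full search trajectory), hill_climb_on, features (returning a NumPy vector), and cost; linear least-squares stands in for the EF regressor, which is an assumption rather than the exact learner of the original work.

```python
import numpy as np

def stage(start, local_search, hill_climb_on, features, cost, rounds=10):
    X, y = [], []                     # accumulated training data for the EF
    s, best = start, start
    for _ in range(rounds):
        traj = local_search(s)        # execution phase: run the base LSA
        outcome = cost(traj[-1])      # quality of the local optimum reached
        for state in traj:            # learning phase: label every visited
            X.append(features(state)) # state with the outcome the search
            y.append(outcome)         # eventually reached from it
        w, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)
        ef = lambda st: features(st) @ w   # retrained evaluation function
        s = hill_climb_on(ef, traj[-1])    # locate a promising new start
        if cost(traj[-1]) < cost(best):
            best = traj[-1]
    return best
```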
Evaluation Function
   The EF can be used by another local search
   algorithm to find a good new starting point.

   Applying the EF to a state:

   State → Features → EF → Prediction
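
A minimal sketch of this pipeline for BPP; the two features (bins used, mean slack) and the linear form of the EF are hypothetical examples, not necessarily those used in the original work.

```python
import numpy as np

def bpp_features(bins, C):
    """bins: list of bins, each a list of item sizes packed together."""
    slack = [C - sum(b) for b in bins]            # unused capacity per bin
    return np.array([len(bins), float(np.mean(slack)), 1.0])  # 1.0 = bias

def ef_predict(w, bins, C):
    # Predicted eventual outcome of a local search started from this state.
    return bpp_features(bins, C) @ w
```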
Diagram of STAGE

   [Figure omitted] (Boyan 98)
Analysis of STAGE
   What is the effect of using different local search
    algorithms?

   Local search algorithms:
       Best Improvement Hill Climbing (BIHC)
       First Improvement Hill Climbing (FIHC)
       Stochastic Hill Climbing (STHC)
Best Improvement HC
   Generates all of the neighboring states, then
   selects the best one.

   [Figure: the current state and its neighbors;
   BIHC evaluates every neighbor and moves to the best]
First Improvement HC
   Generates neighboring states systematically, and
   selects the first improving one.

   [Figure: FIHC stops scanning at the first neighbor
   that improves on the current state]
Stochastic HC
   Stochastically generates some of the neighboring
   states, then selects the best one.

   The size of the sampled neighbor set is called
   PATIENCE. (All three variants are sketched below.)
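
The three acceptance rules just described can be contrasted in a few lines. Here neighbors(s) (all neighbors, in a fixed order), sample(s) (one random neighbor), and cost are assumed problem-specific callables, and the PATIENCE default matches the value 350 used in the results below.

```python
def bihc_step(s, neighbors, cost):
    # Best Improvement: evaluate every neighbor, move to the best one.
    return min(neighbors(s), key=cost)

def fihc_step(s, neighbors, cost):
    # First Improvement: scan systematically, take the first improving one.
    for s_new in neighbors(s):
        if cost(s_new) < cost(s):
            return s_new
    return s  # no improving neighbor: local optimum

def sthc_step(s, sample, cost, patience=350):
    # Stochastic: draw PATIENCE random neighbors, move to the best of them.
    return min((sample(s) for _ in range(patience)), key=cost)
```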
Different LSAs

   [Plot: different LSAs solving the U250_00 instance;
   benchmark from http://www.ms.ic.ac.uk/info.html]
Different LSAs, bounded steps

   [Plot omitted]
Some Results
   The higher the accuracy in choosing the next
   state, the better the quality of the final
   solution (compare STHC1, PATIENCE1 = 350, with
   STHC2, PATIENCE2 = 700).

   Deeper moves yield higher-quality solutions
   faster (compare BIHC with the others).
Different LSAs, bounded moves

   [Plot omitted]
Some Results
   It is better to explore the solution space
   randomly rather than systematically (compare
   STHC with the others).
Future Work
   Using other learning structures in STAGE
   Verifying these results on other problems
   (for example, Graph Coloring)
   Using other LSAs, such as Simulated Annealing
Questions

				