Chaos Driven Evolutionary Algorithm for the Traveling Salesman Problem

Donald Davendra1,∗, Ivan Zelinka1, Roman Senkerik2 and Magdalena Bialic-Davendra3

1 Department of Informatics, Faculty of Electrical Engineering and Computing Science, Technical University of Ostrava, Tr. 17. Listopadu 15, Ostrava
2 Department of Informatics and Artificial Intelligence, Faculty of Informatics, Tomas Bata University in Zlin, Nad Stranemi 4511, Zlin 76001
3 Department of Finance and Accounting, Faculty of Management and Economics, Mostni 5139, Zlin 76001
Czech Republic
∗ donald.davendra@vsb.cz


1. Introduction
One of the most studied problems in operations research and management science during the
past few decades has been the Traveling Salesman Problem (TSP). The TSP has a rather simple
formulation, but what has garnered it so much attention is the fact that it belongs to the
class of problems labelled “non-deterministic polynomial”, or NP. This implies that no
algorithm is currently known which can find the exact solution in polynomial time. A number
of current engineering applications, such as cryptography, are built around this premise.
The TSP captures the imagination of theoretical computer scientists and mathematicians
because it illustrates the complexity of NP-completeness so simply. A number of resources
exist on the internet, such as TSPLIB, which allow even a novice user to understand and
work with TSP instances.
Since no exact polynomial-time algorithm exists for the TSP, a specific branch of research,
namely evolutionary computation, has been applied rather effectively to find solutions.
Evolutionary computation itself spans many approaches, but the most effective ones have
been deterministic approaches and random approaches. Deterministic approaches like
Branch and Bound (Land & Doig, 1960) and Lin-Kernighan local search (Lin & Kernighan,
1973) have proven very effective over the years. Random approaches, incorporated into
heuristics, have generally provided a guided search pattern. The most effective algorithms
have therefore been hybrids of the two approaches.
This research introduces another approach, based on a chaotic map (Davendra &
Zelinka, 2010). A chaotic system is one which displays chaotic behavior and is based on
a function which is itself a dynamical system. What is of interest is that the map iterates
across the functional space in discrete steps, each one a unique footprint, which implies
that the same position is not visited twice. This provides a great advantage, as each number

generated is unique, and when input into an evolutionary algorithm it provides a unique
mapping schema. The question that remains to be answered is whether this system
improves on a generic random number generator.
This chapter is divided into the following sections. Section 2 introduces the TSP problem
formulation. Section 3 describes the algorithm used in this research, Differential Evolution
(DE), and Section 4 outlines its permutative variant, EDE. The chaotic maps used in this
research are described in Section 5, whereas the experimentation is given in Section 6. The
chapter is concluded in Section 7.

2. Traveling salesman problem
A TSP is a classical combinatorial optimization problem. Simply stated, the objective of a
traveling salesman is to move from city to city, visiting each city only once and returning
to the starting city. This is called a tour of the salesman. In mathematical formulation, there
is a group of distinct cities {C_1, C_2, C_3, ..., C_N}, and for each pair of cities (C_i, C_j)
a distance d(C_i, C_j) is given. The objective then is to find an ordering π of the cities such
that the total length of the tour is minimized. The lowest possible value is termed the optimal
tour length. The objective function is given as:

    ∑_{i=1}^{N−1} d(C_π(i), C_π(i+1)) + d(C_π(N), C_π(1))                         (1)
This quantity is known as the tour length. Two branches of this problem exist: symmetric and
asymmetric. A symmetric problem is one where the distance between two cities is identical in
both directions, given as d(C_i, C_j) = d(C_j, C_i) for 1 ≤ i, j ≤ N; in the asymmetric case the
distances are not equal. An asymmetric problem is generally more difficult to solve.
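To make the formulation concrete, the tour length of Equation 1 can be computed directly. The following is a minimal sketch in Python; the coordinate list and the Euclidean metric are illustrative assumptions, since TSPLIB instances may define other distance functions.

    import math

    def tour_length(cities, tour):
        # cities: list of (x, y) coordinates; tour: permutation of their indices.
        # Sums d(C_pi(i), C_pi(i+1)) for i = 1..N-1 and closes the cycle with
        # d(C_pi(N), C_pi(1)), as in Equation 1.
        total = 0.0
        n = len(tour)
        for i in range(n):
            x1, y1 = cities[tour[i]]
            x2, y2 = cities[tour[(i + 1) % n]]  # wraps back to the starting city
            total += math.hypot(x2 - x1, y2 - y1)
        return total

For a symmetric instance, tour_length returns the same value for a tour and its reversal; for an asymmetric instance the distance function, and hence the code, must distinguish the direction of travel.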
The TSP has many real-world applications, from VLSI fabrication (Korte, 1988) to X-ray
crystallography (Bland & Shallcross, 1989). Another consideration is that the TSP is NP-Hard, as
shown by Garey (1979), so any algorithm guaranteed to find optimal tours must have a worst-case
running time that grows faster than any polynomial (assuming the widely believed conjecture
that P ≠ NP).
The TSP has been studied to such an extent that traditional heuristics are able to find good
solutions within a small percentage of the optimum. The simple 3-Opt heuristic typically gets
within 3-4% of the optimal tour, and the variable-opt algorithm of Lin & Kernighan (1973)
typically gets within 1-2%.
The objective for new and emerging evolutionary systems is to provide a guided approach to the
TSP and leave simple local search heuristics to refine the better local regions, as is the case
in this chapter.

3. Differential evolution algorithm
Differential evolution (DE) is one of the evolutionary optimization methods, proposed by Price
(1999) to solve the Chebyshev polynomial fitting problem. DE is a population-based,
stochastic global optimizer and has proven to be a robust technique for global optimization.
In order to describe DE, a schematic is given in Figure 1.
There are essentially five sections to the code. Section 1 describes the input to the heuristic.






Canonical Differential Evolution Algorithm

1. Input: D, Gmax, NP ≥ 4, F ∈ (0, 1+), CR ∈ [0, 1], and initial bounds: x^(lo), x^(hi).
2. Initialize:
       ∀i ≤ NP ∧ ∀j ≤ D: x_i,j,G=0 = x_j^(lo) + rand_j[0, 1] · (x_j^(hi) − x_j^(lo))
       i = {1, 2, ..., NP}, j = {1, 2, ..., D}, G = 0, rand_j[0, 1] ∈ [0, 1]
3. While G < Gmax, ∀i ≤ NP:
   4. Mutate and recombine:
      4.1 r1, r2, r3 ∈ {1, 2, ..., NP}, randomly selected, except: r1 ≠ r2 ≠ r3 ≠ i
      4.2 jrand ∈ {1, 2, ..., D}, randomly selected once for each i
      4.3 ∀j ≤ D:
          u_j,i,G+1 = x_j,r3,G + F · (x_j,r1,G − x_j,r2,G)   if rand_j[0, 1] < CR ∨ j = jrand
          u_j,i,G+1 = x_j,i,G                                 otherwise
   5. Select:
          x_i,G+1 = u_i,G+1   if f(u_i,G+1) ≤ f(x_i,G)
          x_i,G+1 = x_i,G     otherwise
   G = G + 1

Fig. 1. Canonical Differential Evolution Algorithm

D is the size of the problem, Gmax is the maximum number of generations, NP is the total
number of solutions, F is the scaling factor and CR is the crossover factor. Together, F
and CR make up the internal tuning parameters of the heuristic.
Section 2 outlines the initialization of the heuristic. Each solution x_i,j,G=0 is created randomly
between the two bounds x^(lo) and x^(hi). The parameter j indexes the values within a solution
and i indexes the solutions within the population. So, to illustrate, x_4,2,0
represents the second value of the fourth solution at the initial generation.
After initialization, the population is subjected to repeated iterations in section 3.
Section 4 describes the conversion routines of DE. First, in 4.1, three random indices r1, r2, r3
are selected, unique to each other and to the current solution index i in the population.
Next, a new index jrand is selected; jrand points to a value that is always modified in the
solution, as given in 4.2. In 4.3, the values x_j,r1,G and x_j,r2,G, selected through the
indices r1 and r2, are subtracted, and the difference is multiplied by F, the predefined
scaling factor. The result is added to the value indexed by r3.
However, this value is not accepted unconditionally. A new random number is generated,
and only if this random number is less than CR (or j = jrand) does the new value replace the
old value in the current solution. Once all the values of the trial solution are obtained, the
new solution is evaluated for its fitness, and if this improves on the fitness of the previous
solution, the new solution replaces the previous solution in the population. Hence the
competition is only between the new child solution and its parent solution.
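This whole loop condenses into a short program. The sketch below implements the canonical DE/rand/1/bin scheme of Figure 1 for a continuous objective function; it is a minimal illustration, not the authors' implementation, and the function and parameter names are ours.

    import random

    def differential_evolution(f, bounds, NP=20, F=0.5, CR=0.8, Gmax=100):
        D = len(bounds)
        # Initialization: random solutions between the bounds (Section 2)
        pop = [[lo + random.random() * (hi - lo) for lo, hi in bounds]
               for _ in range(NP)]
        for _ in range(Gmax):
            for i in range(NP):
                # 4.1: three mutually distinct indices, all different from i
                r1, r2, r3 = random.sample([k for k in range(NP) if k != i], 3)
                j_rand = random.randrange(D)  # 4.2: forced crossover index
                trial = []
                for j in range(D):  # 4.3: mutate and recombine
                    if random.random() < CR or j == j_rand:
                        trial.append(pop[r3][j] + F * (pop[r1][j] - pop[r2][j]))
                    else:
                        trial.append(pop[i][j])
                # 5: one-to-one selection between child and parent
                if f(trial) <= f(pop[i]):
                    pop[i] = trial
        return min(pop, key=f)

    # Example: minimize the 3-dimensional sphere function
    best = differential_evolution(lambda x: sum(v * v for v in x),
                                  bounds=[(-5.0, 5.0)] * 3)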
Price (1999) has suggested ten different working strategies; which strategy to choose depends
mainly on the problem at hand. The strategies vary in the solution to be perturbed, the
number of difference solutions considered for perturbation, and the type of crossover used.
The different strategies are listed below.

Strategy 1: DE/best/1/exp:       u_i,G+1 = x_best,G + F · (x_r1,G − x_r2,G)
Strategy 2: DE/rand/1/exp:       u_i,G+1 = x_r1,G + F · (x_r2,G − x_r3,G)
Strategy 3: DE/rand-best/1/exp:  u_i,G+1 = x_i,G + λ · (x_best,G − x_i,G) + F · (x_r1,G − x_r2,G)
Strategy 4: DE/best/2/exp:       u_i,G+1 = x_best,G + F · (x_r1,G + x_r2,G − x_r3,G − x_r4,G)
Strategy 5: DE/rand/2/exp:       u_i,G+1 = x_r5,G + F · (x_r1,G + x_r2,G − x_r3,G − x_r4,G)
Strategy 6: DE/best/1/bin:       u_i,G+1 = x_best,G + F · (x_r1,G − x_r2,G)
Strategy 7: DE/rand/1/bin:       u_i,G+1 = x_r1,G + F · (x_r2,G − x_r3,G)
Strategy 8: DE/rand-best/1/bin:  u_i,G+1 = x_i,G + λ · (x_best,G − x_i,G) + F · (x_r1,G − x_r2,G)
Strategy 9: DE/best/2/bin:       u_i,G+1 = x_best,G + F · (x_r1,G + x_r2,G − x_r3,G − x_r4,G)
Strategy 10: DE/rand/2/bin:      u_i,G+1 = x_r5,G + F · (x_r1,G + x_r2,G − x_r3,G − x_r4,G)

The convention shown is DE/x/y/z, where DE stands for Differential Evolution, x denotes the
solution to be perturbed, y is the number of difference solutions considered for perturbation
of x, and z is the type of crossover being used (exp: exponential; bin: binomial).
DE has two main crossover schemes: binomial and exponential. Generally, each parameter of the
child solution u_i,G+1 is taken either from the parent solution x_i,G or from a mutated donor
solution v_i,G+1, formed as: u_j,i,G+1 = v_j,i,G+1 = x_j,r3,G + F · (x_j,r1,G − x_j,r2,G).
The frequency with which the donor solution v_i,G+1 is chosen over the parent solution
x_i,G as the source of the child solution is controlled by the crossover scheme. This is
achieved through a user-defined constant CR, which is held constant throughout the
execution of the heuristic.
The binomial scheme takes a parameter from the donor solution every time the generated
random number is less than CR, as given by rand_j[0, 1] < CR; otherwise the parameter comes
from the parent solution x_i,G.
The exponential scheme takes parameters from the donor solution in a contiguous run, stopping
the first time the generated random number is not less than CR; the remaining parameters come
from the parent solution x_i,G.
To ensure that each child solution differs from the parent solution, both the exponential and
binomial schemes take at least one value from the mutated donor solution v_i,G+1.
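The two schemes differ only in how the donor parameters are grouped. A sketch of both, under the usual textbook definitions (the helper names are ours):

    import random

    def binomial_crossover(parent, donor, CR):
        # Each parameter comes from the donor independently with probability CR;
        # the index j_rand guarantees at least one donor parameter.
        D = len(parent)
        j_rand = random.randrange(D)
        return [donor[j] if (random.random() < CR or j == j_rand) else parent[j]
                for j in range(D)]

    def exponential_crossover(parent, donor, CR):
        # Copies a contiguous run of donor parameters starting at a random
        # position; the run ends the first time the random number is >= CR.
        D = len(parent)
        child = list(parent)
        j = random.randrange(D)
        copied = 0
        while True:
            child[j] = donor[j]  # at least one donor parameter is always taken
            j = (j + 1) % D
            copied += 1
            if copied >= D or random.random() >= CR:
                break
        return child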

3.1 Tuning parameters
Outlining an absolute value for CR is difficult. It is largely problem dependent. However a few
guidelines have been laid down by Price (1999). When using binomial scheme, intermediate
values of CR produce good results. If the objective function is known to be separable, then
CR = 0 in conjunction with binomial scheme is recommended. The recommended value of CR
should be close to or equal to 1, since the possibility or crossover occurring is high. The higher
the value of CR, the greater the possibility of the random number generated being less than
the value of CR, and thus initiating the crossover.
The general description of F is that it should be at least above 0.5, in order to provide sufficient
scaling of the produced value.
The tuning parameters and their guidelines are given in Table 1.

4. Enhanced differential evolution algorithm
The Enhanced Differential Evolution (EDE) heuristic (Davendra, 2001; Davendra & Onwubolu,
2007a; Onwubolu & Davendra, 2006; 2009) is an extension of the Discrete Differential
Evolution (DDE) variant of DE (Davendra & Onwubolu, 2007b).






    Control Variable               Lo      Hi       Best?         Comments
    F: Scaling Factor              0       1.0+     0.3 – 0.9     F ≥ 0.5
    CR: Crossover probability      0       1        0.8 – 1.0     CR = 0: separable
                                                                  CR = 1: epistatic

Table 1. Guide to choosing the best initial control variables

One of the major drawbacks of the DDE algorithm was the high frequency of in-feasible solutions
created after evaluation. However, since DDE showed much promise, the next logical step was to
devise a method which would repair the in-feasible solutions and hence add viability to the
heuristic. To this effect, three different repairment strategies were developed, each of which
used a different index to repair the solution. After repairment, three different enhancement
features were added in order to give the heuristic more depth for solving permutative problems.
The enhancement routines are standard mutation, insertion and local search. The basic outline is
given in Figure 2.

4.1 Permutative population
The first part of the heuristic generates the permutative population. A permutative solution
is one where each value within the solution is unique and systematic.

   1. Initial Phase
        (a) Population Generation: An initial number of discrete trial solutions are generated
            for the initial population.
   2. Conversion
        (a) Discrete to Floating Conversion: This conversion schema transforms the parent
            solution into the required continuous solution.
        (b) DE Strategy: The DE strategy transforms the parent solution into the child solution
            using its inbuilt crossover and mutation schemas.
        (c) Floating to Discrete Conversion: This conversion schema transforms the continuous
            child solution into a discrete solution.
   3. Mutation
        (a) Relative Mutation Schema: Formulates the child solution into the discrete solution
            of unique values.
   4. Improvement Strategy
        (a) Mutation: Standard mutation is applied to obtain a better solution.
        (b) Insertion: Uses a two-point cascade to obtain a better solution.
   5. Local Search
        (a) Local Search: 2 Opt local search is used to explore the neighborhood of the solution.


Fig. 2. EDE outline





A basic description is given in Equation 2.

    P_G = {x_1,G, x_2,G, ..., x_NP,G},   x_i,G = (x_j,i,G)
    x_j,i,G=0 = (int) ( rand_j[0, 1] · ( x_j^(hi) + 1 − x_j^(lo) ) + x_j^(lo) )
        if x_j,i ∉ { x_0,i, x_1,i, ..., x_j−1,i }
    i = {1, 2, 3, ..., NP},  j = {1, 2, 3, ..., D}                              (2)

where P_G represents the population, x_j,i,G=0 represents each solution within the population,
and x_j^(lo) and x_j^(hi) represent the bounds. The index i references the solutions from 1 to NP,
and j references the values within a solution.
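Because Equation 2 rejects any value already present in the partial solution, each completed solution is simply a random permutation. A minimal sketch (shuffling a fixed range is an equivalent shortcut, assumed here for brevity):

    import random

    def init_permutative_population(NP, D):
        # Each solution is a random permutation of 1..D, so every value is
        # unique and within the bounds, as Equation 2 requires.
        population = []
        for _ in range(NP):
            solution = list(range(1, D + 1))
            random.shuffle(solution)
            population.append(solution)
        return population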

4.2 Forward transformation
The transformation schema represents the most integral part of the DDE problem. Onwubolu
(2005) developed an effective routine for the conversion.
Let a set of integer numbers be represented as in Equation 3:

    x_i ∈ x_i,G                                                                (3)

which belongs to the solution x_j,i,G=0. The equivalent continuous value for x_i is given by the
construct 5 • 10^2. The domain of the variable x_i has a length of 5, as shown by the leading 5
in 5 • 10^2. The precision of the value to be generated is set to two decimal places (2 d.p.),
as given by the superscript two (2) in 10^2. The range of the variable x_i is between 1 and 10^3.
The lower bound is 1, whereas the upper bound of 10^3 was obtained after extensive
experimentation; it provides optimal filtering of values which are generated close together
(Davendra & Onwubolu, 2007b).
The formulation of the forward transformation is given as:

    x′_i = −1 + (x_i · f · 5) / (10^3 − 1)                                     (4)

Broken down, Equation 4 shows the value x_i multiplied by the length 5 and a scaling factor f,
divided by the upper bound minus one (1), with the computed value then decremented by one (1).
The value for the scaling factor f was established after extensive experimentation: with f set
to 100 there was a tight grouping of the values, while the optimal filtering of values was
retained. The subsequent formulation is given as:

    x′_i = −1 + (x_i · f · 5) / (10^3 − 1) = −1 + (500 · x_i) / (10^3 − 1)     (5)

4.3 Backward transformation
The backward transformation, the reverse operation to the forward transformation, converts the
real value back into an integer, as given in Equation 6, assuming x_i to be the real value
obtained from Equation 5.

    int[x_i] = ((1 + x_i) · (10^3 − 1)) / (5 · f) = ((1 + x_i) · (10^3 − 1)) / 500     (6)





The value xi is rounded to the nearest integer.
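The two transformations are exact inverses over the integer range 1 to 999, which is what makes the repeated continuous-discrete round trips of the heuristic safe. A sketch with f = 100, as established above (the function names are ours):

    F_SCALE = 100  # scaling factor f established experimentally in the text

    def forward(x):
        # Integer -> continuous, Equation 5
        return -1.0 + (x * F_SCALE * 5) / (10**3 - 1)

    def backward(x_prime):
        # Continuous -> integer, Equation 6, rounded to the nearest integer
        return round((1 + x_prime) * (10**3 - 1) / (5 * F_SCALE))

    # The round trip recovers every integer in the working range
    assert all(backward(forward(x)) == x for x in range(1, 1000))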

4.4 Recursive mutation
Once the solution is obtained after transformation, it is checked for feasibility. Feasibility
refers to whether the solutions are within the bounds and unique in the solution.
    x_i,G+1 = u_i,G+1   if u_j,i,G+1 ∉ { u_1,i,G+1, ..., u_j−1,i,G+1 }
                           and x^(lo) ≤ u_j,i,G+1 ≤ x^(hi)                     (7)
    x_i,G+1 = x_i,G     otherwise

Recursive mutation refers to the fact that if a solution is deemed in-feasible, it is discarded and
the parent solution is retained in the population as given in Equation 7.

4.5 Repairment
In order to repair the solutions, each solution is initially vetted. Vetting involves two
checks: firstly for any bound-offending values, and secondly for repeating values in the
solution. If a value is detected to have violated a bound, it is dragged to the offending
boundary:

    u_j,i,G+1 = x^(lo)   if u_j,i,G+1 < x^(lo)
    u_j,i,G+1 = x^(hi)   if u_j,i,G+1 > x^(hi)                                 (8)
Each value which is replicated is tagged by its value and index. Only those values which are
deemed replicated are repaired; the rest of the values are not manipulated. A second sequence
is then calculated of the values which are not present in the solution: it stands to reason
that if there are replicated values, then some feasible values are missing. The pseudocode is
given in Figure 3.
Three unique repairment strategies were developed to repair the replicated values: front
mutation, back mutation and random mutation, named after the indexing used for each particular
one.

Algorithm for Replication Detection

Assume a problem of size n, and a schedule given as X = { x1 , .., xn }. Create a random solution
schedule ∃!xi : R( X ) := { x1 , .., xi .., xn }; i ∈ Z + , where each value is unique and between the
bounds x (lo) and x (hi) .
   1. Create a partial empty schedule P ( X ) := {}
   2. For k = 1, 2, ...., n do the following:
        (a) Check if xk ∈ P ( X ).
        (b) IF xk ∉ P ( X )
                Insert xk → P ( Xk )
            ELSE
                P ( Xk ) = ∅
   3. Generate a missing subset M ( X ) := R ( X ) \ P ( X ).


Fig. 3. Pseudocode for replication detection






Algorithm for Random Mutation

Assume a problem of size n, and a schedule given as X = { x1 , .., xn }. Assume the missing
subset M( X ) and partial subset P( X ) from Figure 3.
     1. For k = 1, 2, ...., n do the following:
          (a) IF P ( Xk ) = ∅
                  Randomly select a value from the M ( X ) and insert it in P ( Xk ) given as
                  M ( XRnd ) → P ( Xk )
          (b) Remove the used value from the M ( X ).
     2. Output P ( X ) as the obtained complete schedule.

Fig. 4. Pseudocode for random mutation

4.5.1 Random mutation
The most complex repairment schema is the random mutation routine. Each replicated value is
selected randomly and replaced by a randomly chosen value from the missing-value array, as
given in Figure 4.
Since each value is randomly selected, the value has to be removed from the array after
selection in order to avoid duplication. Experimentation showed random mutation to be the most
effective strategy for solution repairment.
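A compact rendering of Figures 3 and 4 combined is sketched below; bound violations are assumed to have been clamped beforehand, as in Equation 8, and the function name is ours.

    import random

    def repair_random_mutation(solution, D):
        # Missing subset M(X): feasible values absent from the solution
        missing = list(set(range(1, D + 1)) - set(solution))
        random.shuffle(missing)
        seen = set()
        repaired = []
        for value in solution:
            if value in seen:  # replicated value: replace from M(X)
                repaired.append(missing.pop())  # remove the used value from M(X)
            else:
                seen.add(value)
                repaired.append(value)
        return repaired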

4.6 Improvement strategies
Improvement strategies were included in order to improve the quality of the solutions.
Three improvement strategies were embedded in the heuristic, all applied on a one-time basis:
once a solution is created, each strategy is applied to it only once. If improvement is shown,
the result is accepted as the new solution; otherwise the original solution is carried into
the next population.

4.6.1 Standard mutation
Standard mutation is used as an improvement technique to explore random regions of space in
the hope of finding a better solution. Standard mutation is simply the exchange of two values
in a single solution.
Two unique random indices r1, r2 ∈ rand[1, D] are selected, where r1 ≠ r2. The values indexed
by them are exchanged, Solution_r1 ↔ Solution_r2, and the solution is evaluated. If the fitness
improves, then the new solution is accepted into the population. The routine is shown in
Figure 5.
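A sketch of this routine (the fitness callable is an assumed interface):

    import random

    def standard_mutation(solution, fitness):
        # Swap two distinct random positions; keep the result only if it improves
        r1, r2 = random.sample(range(len(solution)), 2)
        candidate = list(solution)
        candidate[r1], candidate[r2] = candidate[r2], candidate[r1]
        return candidate if fitness(candidate) < fitness(solution) else solution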

4.6.2 Insertion
Insertion is a more complicated form of mutation; however, it is seen as providing greater
diversity to the solution than standard mutation.
As with standard mutation, two unique random indices r1, r2 ∈ rand[1, D] are selected. The
value indexed by the lower random number, Solution_r1, is removed, and the values from that
position up to the position indexed by the other random number are shifted one index down. The
removed value is then inserted in the vacated slot at the higher index, Solution_r2, as given
in Figure 6.
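A sketch of insertion in the same style (again assuming a fitness callable):

    import random

    def insertion(solution, fitness):
        # r1 < r2: remove at the lower index, shift the middle values down,
        # and reinsert the removed value at the higher index
        r1, r2 = sorted(random.sample(range(len(solution)), 2))
        candidate = list(solution)
        value = candidate.pop(r1)
        candidate.insert(r2, value)
        return candidate if fitness(candidate) < fitness(solution) else solution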






Algorithm for Standard Mutation

Assume a schedule given as X = { x1 , .., xn }.
   1. Obtain two random numbers r1 and r2, where r1 = rnd[x^(lo), x^(hi)] and
      r2 = rnd[x^(lo), x^(hi)], the constraint being r1 ≠ r2.

        (a) Swap the two indexed values in the solution:
                i. x_r1 ↔ x_r2.
        (b) Evaluate the new schedule X ′ for its objective given as f ( X ′ ).
        (c) IF f ( X ′ ) < f ( X )
                i. Set the old schedule X to the new improved schedule X ′ as X = X ′ .
   2. Output X as the new schedule.

Fig. 5. Pseudocode for standard mutation

4.7 Local search
There is always a possibility of stagnation in evolutionary algorithms, and DE is no exception
to this phenomenon.
Stagnation is the state where there is no improvement in the population over a period of
generations; the search is unable to find new regions of the space in which to look for globally
optimal solutions. The length of stagnation is not precisely defined: sometimes a period of
twenty generations does not constitute stagnation, and care has to be taken not to confuse a
local optimum with stagnation, since sometimes better search space simply does not exist. In
EDE, a period of five generations without improvement of the optimal solution is classified as
stagnation. Five generations is chosen in light of the fact that EDE usually operates on an
average of a hundred generations, which yields a maximum of twenty stagnation phases within one
run of the heuristic.

Algorithm for Insertion

Assume a schedule given as X = { x1 , .., xn }.
   1. Obtain two random numbers r1 and r2, where r1 = rnd[x^(lo), x^(hi)] and
      r2 = rnd[x^(lo), x^(hi)], the constraints being r1 ≠ r2 and r1 < r2.

        (a) Remove the value indexed by r1 in the schedule X.
        (b) For k = r1, ..., r2 − 1, do the following:
                i. x_k = x_{k+1}.
        (c) Insert the removed value at the higher index r2 as: X_r2 = X_r1.
   2. Output X as the new schedule.

Fig. 6. Pseudocode for Insertion





To move away from the point of stagnation, a feasible operation is a neighborhood or local
search, which can be applied to a solution to find a better feasible solution in its local
neighborhood. Local search is an improvement strategy. It is usually independent of the main
search heuristic and considered a plug-in to it. The point of note is that local search is very
expensive in terms of time and memory, and can sometimes be considered a brute-force method of
exploring the search space. These constraints make the insertion and operation of local search
delicate to implement. The route that EDE has adopted is to check only the optimal solution in
the population for stagnation, instead of the whole population; as mentioned earlier, five (5)
non-improving generations constitute stagnation. The point of insertion of local search is also
critical: the local search is inserted at the termination of the improvement module in the EDE
heuristic.
Local search is an approximation algorithm, or heuristic, that works on a neighborhood. A
complete neighborhood of a solution is defined as the set of all solutions that can be arrived
at by a move, where the word solution should be explicitly defined to reflect the problem being
solved. The variant of the local search routine used here is described in Onwubolu (2002) and
is generally known as a 2-opt local search.
The basic outline of the local search technique is given in Figure 7. An index set α is
initialised empty (α = ∅). The search iterates through the entire solution, choosing each
progressive value. On each iteration, a random number i is chosen between the lower (1) and
upper (n) bounds, not already contained in α. A second set β starts at {i}, and the inner loop
iterates until the end of the solution; in this second iteration another random number j is
chosen between the lower and upper bounds, not already contained in β. The values in the
solution indexed by i and j are swapped, the objective function of the new solution is
calculated, and the new solution is accepted only if there is an improvement, given as
∆(x, i, j) < 0.
The complete template of EDE is given in Figure 8.

Algorithm for Local Search

Assume a schedule given as X = {x_1, ..., x_n}, and two index sets α and β. The size of the
schedule is given as n. Set α = ∅.
   1. While |α| < n
        (a) Obtain a random number i = rand[1, n] between the bounds, under the
            constraint i ∉ α.
        (b) Set β = {i}
              i. While |β| < n
                 A. Obtain another random number j = rand[1, n] under the constraint j ∉ β.
                 B. IF ∆(x, i, j) < 0: swap x_i ↔ x_j.
                 C. β = β ∪ {j}
             ii. α = α ∪ {i}


Fig. 7. Pseudocode for 2 Opt Local Search
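A compact deterministic rendering of Figure 7 is sketched below; it scans all index pairs instead of sampling them at random, but it accepts a swap under the same improvement test ∆(x, i, j) < 0. The function name is ours.

    def pairwise_swap_search(solution, fitness):
        # Keep swapping improving pairs until no swap improves the objective
        best = list(solution)
        improved = True
        while improved:
            improved = False
            for i in range(len(best) - 1):
                for j in range(i + 1, len(best)):
                    candidate = list(best)
                    candidate[i], candidate[j] = candidate[j], candidate[i]
                    if fitness(candidate) < fitness(best):  # Delta(x, i, j) < 0
                        best = candidate
                        improved = True
        return best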






Enhanced Differential Evolution Template

Input: D, Gmax, NP ≥ 4, F ∈ (0, 1+), CR ∈ [0, 1], and bounds: x^(lo), x^(hi).

Initialize:
    ∀i ≤ NP ∧ ∀j ≤ D: x_i,j,G=0 = x_j^(lo) + rand_j[0, 1] · (x_j^(hi) − x_j^(lo)),
        if x_j,i ∉ {x_0,i, x_1,i, ..., x_j−1,i}
    i = {1, 2, ..., NP}, j = {1, 2, ..., D}, G = 0, rand_j[0, 1] ∈ [0, 1]

Cost: ∀i ≤ NP: f(x_i,G=0)

While G < Gmax:
    ∀i ≤ NP:
        Mutate and recombine:
            r1, r2, r3 ∈ {1, 2, ..., NP}, randomly selected, except: r1 ≠ r2 ≠ r3 ≠ i
            jrand ∈ {1, 2, ..., D}, randomly selected once for each i
            Forward Transformation: γ_j,r1,G ← x_j,r1,G ; γ_j,r2,G ← x_j,r2,G ; γ_j,r3,G ← x_j,r3,G
            ∀j ≤ D:
                u_j,i,G+1 = γ_j,r3,G + F · (γ_j,r1,G − γ_j,r2,G)   if rand_j[0, 1] < CR ∨ j = jrand
                u_j,i,G+1 = γ_j,i,G ← x_j,i,G                      otherwise
        Repair to a feasible permutation u′_i,G+1:
            Backward Transformation: ρ_j,i,G+1 ← u_j,i,G+1
            Mutate Schema: u′_j,i,G+1 ← mutate(ρ_j,i,G+1),
                if u′_j,i,G+1 ∉ {u_0,i,G+1, u_1,i,G+1, ..., u_j−1,i,G+1}
        Standard Mutation: u_j,i,G+1 ← u′_i,G+1
        Insertion: u_j,i,G+1 ← u′_i,G+1
        Select:
            x_i,G+1 = u_i,G+1   if f(u_i,G+1) ≤ f(x_i,G)
            x_i,G+1 = x_i,G     otherwise
    G = G + 1
    Local Search: x_best = ∆(x_best, i, j) if stagnation

Fig. 8. EDE Template

5. Chaotic systems
Chaos theory manifests itself in the study of dynamical systems whose behavior is highly
sensitive to perturbations of their initial conditions. A number of such systems have been
discovered, and this branch of mathematics has been vigorously researched over the last few
decades.
The area of interest for this chapter is the embedding of chaotic systems, in the form of a
chaos-based number generator, in an evolutionary algorithm.
The systems of interest are discrete dissipative systems. The two common systems of the Lozi
map and the Delayed Logistic (DL) map were selected as mutation generators for the DE heuristic.





5.1 Lozi map
The Lozi map is a two-dimensional piecewise-linear map whose dynamics are similar to those
of the better-known Hénon map (Hénon, 1976), and it admits strange attractors.
The advantage of the Lozi map is that, due to its linearity, one can compute every relevant
parameter exactly, and successful control can be demonstrated rigorously.
The Lozi map equations are given in Equations 9 and 10.

                              y1 (t + 1) = 1 − a |y1 (t)| + y2 (t)                           (9)

                                     y2 (t + 1) = by1 (t)                                   (10)
The parameters used in this work are a = 1.7 and b = 0.5 as suggested in Caponetto et al.
(2003). The Lozi map is given in Figure 9.

5.2 Delayed logistic map
The Delayed Logistic (DL) map equations are given in Equations 11 and 12.

                                       y1 (t + 1) = y2 (t)                                   (11)

                                  y2 (t + 1) = a · y2 (t) · (1 − y1 (t))                     (12)
The parameter used in this work is a = 2.27. The DL map is given in Figure 10.
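Both maps can be iterated in a few lines and rescaled to stand in for rand_j[0, 1] in the heuristic. The rescaling interval and the seed below are our assumptions for illustration; the chapter does not specify the normalization used.

    def lozi(y1, y2, a=1.7, b=0.5):
        # One iteration of the Lozi map, Equations 9 and 10
        return 1 - a * abs(y1) + y2, b * y1

    def delayed_logistic(y1, y2, a=2.27):
        # One iteration of the Delayed Logistic map, Equations 11 and 12
        return y2, a * y2 * (1 - y1)

    def chaotic_numbers(step, y1=0.1, y2=0.1, n=10):
        # Iterate a map and clamp-rescale the first coordinate to [0, 1]
        lo, hi = -1.0, 1.0  # assumed working range of the attractor
        out = []
        for _ in range(n):
            y1, y2 = step(y1, y2)
            out.append(min(max((y1 - lo) / (hi - lo), 0.0), 1.0))
        return out

    print(chaotic_numbers(lozi))
    print(chaotic_numbers(delayed_logistic))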

Fig. 9. Lozi map

Fig. 10. Delayed Logistic map

6. Experimentation
The experimentation has been performed on a few representative instances of both symmetric
and asymmetric TSP problems. The chaotic maps are embedded in place of the random number
generator in the EDE algorithm, and the new algorithm is termed EDEC. Each instance is run
five times with each of the two chaos-embedded algorithms, and the average results of all ten
experiments are compared with EDE and with published results from the literature.

6.1 Symmetric TSP
A symmetric TSP problem is one where the distance between two cities is the same in both
directions. This is considered the easiest branch of the TSP.
The operational parameters for the TSP experiments are given in Table 2.
Experimentation was conducted on the City problem instances. These instances are of 50
cities, and the results are presented in Table 3. Comparison was done with Ant Colony
System (ACS) (Dorigo & Gambardella, 1997), Simulated Annealing (SA) (Lin et al., 1993), Elastic
Net (EN) (Durbin & Willshaw, 1987), Self Organising Map (SOM) (Kara et al., 2003) and the EDE
of Davendra & Onwubolu (2007a). The time values are presented alongside.
In this comparison, ACS is the best performing heuristic for TSP, and EDEC is the second best;
on average, EDEC is only 0.02 above the values obtained by ACS.

                                      Parameter      Value
                                       Strategy         5
                                          CR          0.8
                                           F          0.5
                                          NP          100
                                      Generation       50

Table 2. EDEC TSP operational values






     Instance      ACS         SA          EN           SOM        EDE          EDEC
     City set 1    5.88        5.88        5.98         6.06       5.98         5.89
     City set 2    6.05        6.01        6.03         6.25       6.04         6.02
     City set 3    5.58        5.65        5.70         5.83       5.69         5.61
     City set 4    5.74        5.81        5.86         5.87       5.81         5.78
     City set 5    6.18        6.33        6.49         6.70       6.48         6.21
     Average       5.88        5.93        6.01         6.14       6.00         5.90

Table 3. Symmetric TSP comparison

It must be noted that all execution times for EDEC were under 10 seconds; extended simulation
would possibly lead to better results.

6.2 Asymmetric TSP
An asymmetric TSP is one where the distance between two cities depends on the direction of
travel. Five different instances were evaluated and compared with Ant Colony System (ACS) with
local search (Dorigo & Gambardella, 1997) and the EDE of Davendra & Onwubolu (2007a). The
results are given in Table 4.

     Instance     Optimal       ACS 3-OPT         ACS 3-OPT     EDE           EDEC
                                best              average       average       average
     p43          5620          5620              5620          5639          5620
     ry48p        14422         14422             14422         15074         14525
     ft70         38673         38673             38679.8       40285         39841
     kro124p      36230         36230             36230         41180         39574
     ftv170       2755          2755              2755          6902          4578

Table 4. Asymmetric TSP comparison

The ACS heuristic performs very well, obtaining the optimal value, whereas EDE has only an
average performance. EDEC significantly improves on the performance of EDE. One of the core
differences is that ACS employs a 3-Opt local search on its best solution in each generation,
whereas EDEC invokes its 2-Opt routine only upon stagnation in a local optimum.

7. Conclusion
The chaotic maps used in this research are of dissipative systems and, through experimentation,
have proven very effective. The results clearly validate that the chaotic maps provide a better
alternative to random number generators in the task of sampling the fitness landscape.
This chapter has only introduced the concept of chaos driven evolutionary algorithms. Much work
remains: the correct mapping structure has to be investigated, as does the effectiveness of
this approach on other combinatorial optimization problems.





8. Acknowledgement
The following two grants are acknowledged for the financial support provided for this
research.
1. Grant Agency of the Czech Republic - GACR 102/09/1680
2. Grant of the Czech Ministry of Education - MSM 7088352102

9. References
Bland, G. & Shallcross, D. (1989). Large traveling salesman problems arising from experiments
          in X-ray crystallography: A preliminary report on computation, Operations Research
          Letters Vol. 8: 125–128.
Caponetto, R., Fortuna, L., Fazzino, S. & Xibilia, M. (2003). Chaotic sequences to improve
          the performance of evolutionary algorithms, IEEE Transactions on Evolutionary
          Computation Vol. 7: 289–304.
Davendra, D. (2001). Differential evolution algorithm for flow shop scheduling, Master’s thesis,
          University of the South Pacific.
Davendra, D. & Onwubolu, G. (2007a). Enhanced differential evolution hybrid scatter search
          for discrete optimisation, Proc. of the IEEE Congress on Evolutionary Computation,
          Singapore, pp. 1156–1162.
Davendra, D. & Onwubolu, G. (2007b). Flow shop scheduling using enhanced differential
          evolution, Proc. 21st European Conference on Modelling and Simulation, Prague, Czech
          Republic, pp. 259–264.
Davendra, D. & Zelinka, I. (2010). Controller parameters optimization on a representative
          set of systems using deterministic-chaotic-mutation evolutionary algorithms, in
          I. Zelinka, S. Celikovsky, H. Richter & G. Chen (eds), Evolutionary Algorithms and
          Chaotic Systems, Springer-Verlag, Germany.
Dorigo, M. & Gambardella, L. (1997). Ant colony system: A co-operative learning approach
          to the traveling salesman problem, IEEE Transactions on Evolutionary Computation Vol.
          1: 53–65.
Durbin, R. & Willshaw, D. (1987). An analogue approach to the travelling salesman problem
          using the elastic net method, Nature Vol. 326: 689–691.
Garey, M. (1979). Computers and Intractability: A Guide to the Theory of NP-Completeness, W. H.
          Freeman, San Francisco.
Hénon, M. (1976). A two-dimensional mapping with a strange attractor, Communications in
          Mathematical Physics Vol. 50: 69–77.
Kara, L., Atkar, P. & Conner, D. (2003). Traveling salesperson problem (TSP) using stochastic
          search, Advanced AI assignment, Carnegie Mellon University, Pittsburgh.
Korte, B. (1988). Applications of combinatorial optimization, 13th International Mathematical
          Programming Symposium, Tokyo.
Land, A. & Doig, A. (1960). An automatic method of solving discrete programming problems,
          Econometrica Vol. 28(3): 497–520.
Lin, F., Kao, C. & Hsu, C. (1993). Applying the genetic approach to simulated annealing in
          solving NP-hard problems, IEEE Transactions on Systems, Man, and Cybernetics - Part B
          Vol. 23: 1752–1767.
Lin, S. & Kernighan, B. (1973). An effective heuristic algorithm for the traveling-salesman
          problem, Operations Research Vol. 21(2): 498–516.





Onwubolu, G. (2002). Emerging Optimization Techniques in Production Planning and Control,
          Imperial College Press, London, England.
Onwubolu, G. (2005). Optimization using differential evolution, Technical Report TR-2001-05,
          IAS, USP, Fiji.
Onwubolu, G. & Davendra, D. (2006). Scheduling flow shops using differential evolution
          algorithm, European Journal of Operations Research 171: 674–679.
Onwubolu, G. & Davendra, D. (2009).             Differential Evolution: A Handbook for Global
          Permutation-Based Combinatorial Optimization, Springer, Germany.
Price, K. (1999). An introduction to differential evolution, in D. Corne, M. Dorigo &
          F. Glover (eds), New Ideas in Optimization, McGraw-Hill, London.



