# Improvements On Hybrid Genetic Algorithm for Quick Convergence


Jiang Hua¹    Qin Lian Cheng²

1. China University of Geosciences, Wuhan, 430074, P.R. China.
2. Guilin Institute of Electronic Technology, Guilin, 541004, P.R. China.

Abstract: An improved hybrid genetic algorithm with quick convergence is presented in this paper. When solving for the multi-objective Pareto solution set, the global search ability and the convergence behaviour are reinforced by self-adaptive adjustment of the mutation probability during the evolution of the offspring. Compared with the typical hybrid genetic algorithm, the improved algorithm achieves more effective convergence when solving optimization problems. Numerical simulations on several typical examples demonstrate the effectiveness of the proposed method.

Key words: Hybrid genetic algorithm, Pareto solution, multi-objective optimization

## 1 Introduction

In practical engineering applications, multi-objective optimization problems often appear. The different objective functions cannot be compared directly and may even conflict with each other, so the optimization process has to consider several objective functions simultaneously. The genetic algorithm is a global optimization technique developed in recent years; it is widely used in all kinds of complicated optimization problems because of its implicit parallelism, randomness, and high robustness. At present, genetic-algorithm methods for multi-objective optimization are mainly divided into three types: converting the multiple objectives into a single objective, non-Pareto methods, and Pareto methods. The traditional method of converting the multiple objectives into a single objective is simple in algorithm design and high in computational efficiency, but it can only find one efficient solution. Non-Pareto methods can find many efficient solutions, but these solutions usually cluster at the edge of the efficient frontier. Optimization methods based on the Pareto approach map the objective values directly to the fitness function; by comparing the dominance relations of the objective values, the set of efficient solutions is found. This type of method can obtain many solutions in one computation, but the algorithm is complicated and its efficiency is low for larger population dimensions. In this paper, a class of multi-objective optimization methods based on a hybrid genetic algorithm is proposed to obtain Pareto solutions along the Pareto frontier.

## 2 Description of the optimization problem

Consider a general multi-objective minimization problem:

    min f(X) = (f_1(X), f_2(X), …, f_m(X))        (1)
    s.t.  g_i(X) ≤ 0,   i = 1, …, p

where X = (x_1, x_2, …, x_n)^T ∈ R^n is the n-dimensional decision vector, g_i(X) are the constraint conditions, and f_i(X) are the sub-objective functions. The set of decision vectors permitted by the constraints is called the "feasible region" Ω.

In the study of multi-objective optimization, the Pareto-optimal solution (or efficient solution) is the most basic concept. If X* ∈ Ω and there exists no X ∈ Ω such that f_i(X) ≤ f_i(X*) for every objective, with f_i(X) < f_i(X*) for at least one objective, then X* is a Pareto-optimal (efficient) solution of problem (1).

## 3 Hybrid Genetic Algorithm and its Improvement

The simple genetic algorithm is prone to premature convergence in the early stages of evolution.
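The Pareto-dominance test defined in Section 2 can be sketched in Python (a minimal illustration; the paper's own software was written in Matlab 6.0, and the function names here are ours):

```python
def dominates(fa, fb):
    """Return True if objective vector fa Pareto-dominates fb:
    no worse in every objective and strictly better in at least one
    (minimization convention, matching problem (1))."""
    return (all(a <= b for a, b in zip(fa, fb))
            and any(a < b for a, b in zip(fa, fb)))

def pareto_front(points):
    """Filter a list of objective vectors down to the non-dominated set."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Example: (1, 5) and (3, 2) trade off against each other,
# while (4, 6) is dominated by both and is removed.
front = pareto_front([(1, 5), (3, 2), (4, 6)])
```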
Moreover, the decline of population diversity makes convergence poor in the later stages of evolution. Many improvements for these limitations already exist, such as the hierarchical genetic algorithm, the self-adaptive genetic algorithm, genetic algorithms based on micro-environment (niching) techniques, and hybrid genetic algorithms that merge another optimization method into the evolution process. The hybrid genetic algorithm exploits mutual complementarity not only in the algorithm construction but also in the search mechanism and the evolutionary strategy, and it provides a good method for solving high-dimensional complicated optimization problems.

This paper adopts the hybrid strategy of the genetic algorithm and the simplex method (GASM) and introduces a simulated-annealing mechanism that makes the acceptance probability change with the evolution process. The genetic algorithm has global search ability, but its convergence near the optimum point is usually poor and only approximate. The simplex method has a very strong local search ability and high search efficiency, but it easily falls into a local optimum because of its sensitivity to the initial value. Therefore the genetic algorithm is used to search for an approximate optimum over the whole range; the approximate optimum then acts as the initial value for a local search with the simplex method. Combining the two methods can greatly increase the convergence speed. The simulated-annealing algorithm has the characteristic of probabilistic jumps: during the evolution it accepts not only individuals with good fitness, but also, with a definite probability, individuals with bad fitness. Introducing the simulated-annealing mechanism prevents the search from being trapped in a local optimum and enhances the reliability of the global optimal solution.

The improvements of the GASM algorithm are as follows.

(1) Search mechanism. Each time the genetic algorithm finds an extremum point, a local search starts with that point as the initial value of the simplex method. If a better extremum point is found, a new propagation population is created with the new extremum point as the reference point; otherwise the population is kept unchanged.

(2) Crossover operation. The propagating population is divided into two parents f_1 and f_2, and floating-point crossover creates children equal in number to the parents:

    child_1 = (1 - α) f_1 + α f_2
    child_2 = α f_1 + (1 - α) f_2

where α is uniformly distributed in [0, 1].

(3) Mutation operation. The children then undergo mutation with probability

    P_m = min{1, exp(-(f_m - f_avg) / f_m)}

where f_m is the fitness of the best individual among the children and f_avg is their average fitness. The closer f_m is to f_avg, the more mature the population is, and premature convergence is easily induced. Premature convergence is avoided by enlarging the mutation probability P_m and thereby producing new individuals: the mutation probability and the population maturity vary in opposite directions, which realizes a self-adaptive adjustment of the population. The mutation rule is

    X' = X + η δ

where X is the individual selected for mutation, δ is a random perturbation obeying a Cauchy distribution, and η adjusts the mutation amplitude.
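The crossover and mutation rules above can be sketched as follows (a Python illustration with our own naming; `eta` stands for the amplitude factor η, and the Cauchy perturbation is drawn with the standard library):

```python
import math
import random

def crossover(f1, f2):
    """Floating-point (arithmetic) crossover: two children that are
    convex combinations of the parents, with alpha uniform on [0, 1]."""
    alpha = random.random()
    child1 = [(1 - alpha) * a + alpha * b for a, b in zip(f1, f2)]
    child2 = [alpha * a + (1 - alpha) * b for a, b in zip(f1, f2)]
    return child1, child2

def mutation_probability(f_best, f_avg):
    """Self-adaptive mutation probability
    P_m = min{1, exp(-(f_best - f_avg) / f_best)}:
    it approaches 1 as the best fitness approaches the average,
    i.e. as the population matures."""
    return min(1.0, math.exp(-(f_best - f_avg) / f_best))

def mutate(x, eta=0.5):
    """Cauchy mutation X' = X + eta * delta, where delta obeys a
    standard Cauchy distribution (tangent of a uniform angle)."""
    delta = math.tan(math.pi * (random.random() - 0.5))
    return x + eta * delta
```

Note how `mutation_probability` encodes the self-adaptation: a mature population (best fitness close to average) mutates with probability near 1, while a diverse one mutates less.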
(4) Selection operation. In order to enhance population diversity, the parent population, the crossover children, and the mutation children together form a pool popnew, on which selection is performed. To ensure that good individuals pass to the next generation, every individual whose fitness is higher than the average is reproduced into the next generation directly. The other individuals, whose fitness is lower than the average, are reproduced into the next generation with probability P_S, defined as

    P_S = min{1, exp((f_i - f_avg) / T_k)}        (4)

where f_i is the fitness of the individual, f_avg is the average fitness, and T_k is the annealing temperature. The lower the annealing temperature, the smaller the probability of accepting such a low-fitness individual, so the non-convergence that derives from enlarging P_m can be avoided. After the selection operation, if the next population is larger than the former population, the individuals with the lowest fitness values are discarded; otherwise new individuals are generated randomly and added to the new population.

(5) Cyclic acceleration operation. For multi-dimensional complicated problems the method above still needs many generations and a large population to converge to the optimum. In order to reduce the number of cycles, two optimization processes are used: the result of the first serves as the initial value of the second, so that the number of generations of the two processes can be reduced efficiently.

## 4 Evolutionary Multi-objective Optimization Algorithm and its Realization

The method described above is an effective and rapid method for solving single-objective optimization problems. Here a method is presented for finding Pareto-preferred optimal solutions of the multi-objective minimization problem by combining the hybrid algorithm above with the min-max method of multi-objective optimization. The procedure is as follows.

(1) Obtain the initial temperature T_0, the population size popsize, and the number of evolution generations generation; set k = 0; and choose m weights w_1, w_2, …, w_m with w_i ∈ [0, 1] and Σ w_i = 1. Using

    max_i { w_i · f̃_i }

as the fitness function, the multi-objective minimization problem is transformed into the single-objective minimization problem

    min max_i { w_i · f̃_i }

where f̃_i is the value of the i-th objective function after an empirical normalization.

(2) The point of minimum fitness in the propagation population is used as the initial point of the simplex algorithm for the local search. Let X_min be the point of minimum fitness of the propagation population, with fitness f̃_min, and let X_opt be the optimum point found by the simplex search, with fitness f̃_opt. If f̃_min > f̃_opt, then X_opt replaces X_min and f̃_opt replaces f̃_min. A new propagation population is produced with X_opt as the reference point: X_new = X_opt + γ, where γ is uniformly distributed on [-1, 1].

(3) The crossover, mutation, and selection operations are carried out on the propagation population according to the improved hybrid genetic algorithm described above.

(4) Judge the convergence condition T ≤ T_stop or k ≥ generation. If it is satisfied, the program ends and X_opt and each objective-function value are output. Otherwise set k = k + 1, lower the temperature by T = log(T_0 / k), and return to step (2).

In the above algorithm, because the weight values in [0, 1] are chosen at random, each run of the repeated computation can obtain an efficient solution on the Pareto boundary in a different direction.
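Step (1)'s min-max scalarization and the annealed selection probability of equation (4) can be sketched together (a Python illustration; the weight draw and all function names are our own):

```python
import math
import random

def random_weights(m):
    """Draw m weights w_i in [0, 1] and normalize so that sum(w_i) = 1."""
    raw = [random.random() for _ in range(m)]
    total = sum(raw)
    return [r / total for r in raw]

def minmax_fitness(f_values, weights):
    """Weighted min-max scalarization: the single-objective fitness is
    max_i { w_i * f_i }, so minimizing it pushes all weighted
    objectives down together."""
    return max(w * f for w, f in zip(weights, f_values))

def accept_probability(f_i, f_avg, temperature):
    """Annealed selection probability of equation (4):
    P_S = min{1, exp((f_i - f_avg) / T_k)} for a below-average
    individual; it shrinks toward 0 as the temperature T_k falls."""
    return min(1.0, math.exp((f_i - f_avg) / temperature))
```

Because `random_weights` redraws the weights on every run, repeated runs of the scalarized problem land on different points of the Pareto boundary, as the paragraph above describes.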
## 5 Examples

Two typical examples are presented to demonstrate the effectiveness of the proposed method. The software is implemented in Matlab 6.0.

Problem 1:

    min { f_1 = x², f_2 = (x - 2)² }
    s.t. x ∈ [-3, 3]

Problem 2:

    min { f_1 = 1 / (x_1² + x_2² + 1), f_2 = x_1² + 3 x_2² + 1 }
    s.t. x_1 ∈ [-3, 3], x_2 ∈ [-5, 5]

Problem 1 is a simple single-variable bi-objective optimization problem. Its Pareto solutions lie in the interval [0, 2], as shown in Figure 1. Using the new algorithm, the population size is 10, the number of evolution generations is 20, the initial temperature T_0 is 100, and the number of runs is 50. The first ten results are shown in Table 1; the distribution of the solutions is shown in Figure 2.

For Problem 2, the population size is 10, the number of evolution generations is 20, the initial temperature is 200, and the number of runs is 100. The first ten results are shown in Table 2.

TABLE 1  THE RESULTS OF PROBLEM 1

| k | x | f_1 | f_2 |
|---|--------|--------|--------|
| 1 | 1.5520 | 2.4087 | 0.2007 |
| 2 | 0.7757 | 0.6017 | 1.4990 |
| 3 | 1.9922 | 3.9690 | 0.0002 |
| 4 | 0.1298 | 0.0169 | 3.4978 |
| 5 | 1.1435 | 1.3077 | 1.3956 |
| 6 | 0.8197 | 0.6689 | 3.3834 |
| 7 | 0.1599 | 0.0256 | 0.4067 |
| 8 | 1.3604 | 1.8507 | 0.5678 |
| 9 | 1.1287 | 1.5593 | 3.3503 |
| 10 | 0.1673 | 0.0287 | 2.0486 |

TABLE 2  THE RESULTS OF PROBLEM 2

| k | x_1 | x_2 | f_1 | f_2 |
|---|---------|---------|--------|--------|
| 1 | -0.0000 | -0.0000 | 1.0000 | 1.0000 |
| 2 | -0.3088 | -0.0000 | 0.9655 | 1.0955 |
| 3 | 0.8976 | 0.0000 | 0.5566 | 1.7980 |
| 4 | -1.1900 | 0.0000 | 0.4328 | 2.4152 |
| 5 | -0.4533 | 0.0000 | 0.8370 | 1.2056 |
| 6 | 0.7456 | 0.0000 | 0.6443 | 1.5643 |
| 7 | 0.6242 | -0.0000 | 0.7197 | 1.3890 |
| 8 | -1.3408 | 0.0000 | 0.3577 | 2.7892 |
| 9 | 0.9477 | -0.0000 | 0.5256 | 1.8945 |
| 10 | -1.5555 | -0.0000 | 0.2944 | 3.4171 |

Based on the above results, Figure 2 and Figure 3 indicate that the algorithm obtains a group of efficient solutions along the Pareto frontier in different directions. The computing time is related to the population size, the number of evolution generations, and the initial temperature. Taking Problem 2 as an example, each run takes no more than 0.1 second when the population size is 10, the number of generations is 20, and the initial temperature is 100.

## 6 Conclusion

An improved hybrid genetic algorithm with quick convergence has been presented in this paper. When solving for the multi-objective Pareto solution set, the global search ability and the convergence behaviour are reinforced by self-adaptive adjustment of the mutation probability during the evolution of the offspring. The design of the algorithm is simple and does not need to consider the distribution of the sub-objective values. Compared with the typical hybrid genetic algorithm, the improved algorithm achieves more effective convergence when solving optimization problems. Numerical simulation results on several typical examples demonstrate the effectiveness of the proposed method. It provides a decision-making basis for the designer, so the new algorithm can also be used to solve other engineering optimization problems.

## 7 Acknowledgements

The authors would like to acknowledge the support of the National Natural Science Foundation of China (Grant number: 50175070) during the course of this work.
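As a check on Problem 1, a brute-force sketch (our own Python code, not the paper's Matlab program) evaluates both objectives on a grid over [-3, 3] and filters out dominated points; the surviving x values all fall in [0, 2], matching the Pareto set shown in Figure 1:

```python
def objectives(x):
    """Problem 1: f1 = x^2, f2 = (x - 2)^2, both to be minimized."""
    return (x * x, (x - 2) ** 2)

def dominates(fa, fb):
    """fa dominates fb: no worse everywhere, strictly better somewhere."""
    return (all(a <= b for a, b in zip(fa, fb))
            and any(a < b for a, b in zip(fa, fb)))

# Grid over the feasible interval [-3, 3] with step 0.01.
grid = [k / 100.0 for k in range(-300, 301)]
points = [(x, objectives(x)) for x in grid]

# Keep only the non-dominated x values.
pareto_x = [x for x, f in points
            if not any(dominates(g, f) for _, g in points)]
```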

Jiang Hua: Associate professor in the Department of Computer Science, Guilin University of Electronic Technology (GUET). His research interests focus on software engineering, optimization theory, and spatial information technology. E-mail: tom6619@263.net  Tel: 86-0773-2926700

