Multiobjective Optimization Using Genetic Algorithm by editorijettcs

VIEWS: 26 PAGES: 6

More Info
									       Web Site: www.ijettcs.org Email: editor@ijettcs.org, editorijettcs@gmail.com
Volume 1, Issue 3, September – October 2012                                    ISSN 2278-6856




          Multiobjective Optimization Using Genetic
                          Algorithm
               Md. Saddam Hossain Mukta1, T.M. Rezwanul Islam2 and Sadat Maruf Hasnayen3
                 1,2,3
                         Department of Computer Science and Information Technology, Islamic University of Technology,
                                                Board Bazaar, Gazipur, Bangladesh


Abstract: In case of Multi-objective optimization problems
(MOP), objective vector can be scalarized into a single             2. OBJECTIVE
objective and the yielded objective is highly sensitive to the
objective weight vectors and it asks the user to have               Most optimization problems naturally have several
knowledge about the underlying problems. Moreover in the            objectives to be achieved (normally conflicting with each
case of Multi-objective optimization problems, one may              other). These problems with several objectives, are called
require a set of Pareto-Optimal points in the search space,         “Multi-objective” or “vector” optimization problems, and
instead of a single point. Since Genetic Algorithm (GA) works       were originally studied in the context of economics and
with a set of individual solutions called population, it is         operation research. However scientists and engineers soon
natural to adopt GA schemes for Multi-objective Optimization        realized that such problems naturally arise in all areas of
problems so that one can capture a number of solutions              knowledge.
simultaneously. Although many techniques           have been        Over the years, the work of a considerable amount of
developed to solve these types of problems, namely VEGA,
                                                                    operational researcher has produced a important number
MOGA, NPGA, NSGA etc, all of them have some
shortcomings. This project proposal explains a new approach         of techniques to deal with Multi-objective optimization
to solve these types of problems by subdividing the population      problems (Miettinen, 1998). However, it was until
with respect to each overlapping pair of objective functions        relatively recent that researchers realize the potential of
and their merging through genetic operations.                       evolutionary algorithms (EA) in this area.
Keywords:        Genetic          algorithm,       Evolutionary     The most recent developments of such schemes are
Computation.                                                        VEGA, MOGA, NPGA, NSGA and NSGA-II. The fact is
                                                                    that most of them are successful to many test suites for
1. INTRODUCTION                                                     Evolutionary Multi Objective Optimization (EMOO).
                                                                    However they also encounter with some difficulties and
Most optimization problems naturally have several                   recent research trends are mainly heading for devising
objectives to be achieved (normally conflicting with each           new approach to handle with Pareto-Optimal Solutions.
other). These problems with several objectives, are called          This research proposal mainly concentrates on a new
“Multi-objective” or “vector” optimization problems, and            approach to handle this concern.
were originally studied in the context of economics and
                                                                      2.1 Multi objective Optimization Problem
operation research. However scientists and engineers soon
realized that such problems naturally arise in all areas of         Most optimization problems naturally have several
knowledge.                                                          objectives to be achieved and normally they conflict with
Over the years, the work of a considerable amount of                each other. These problems with several objectives are
                                                                    called “multi objective” or “vector” optimization
operational researcher has produced a important number
                                                                    problems.
of techniques to deal with Multi-objective optimization
                                                                    Over the years, the work of considerable amount of
problems (Miettinen, 1998). However, it was until
                                                                    operational researchers has produced an important
relatively recent that researchers realize the potential of         number of techniques to deal with multi objective
evolutionary algorithms (EA) in this area.                          optimization problems ( Miettinen, 1998). We are
The most recent developments of such schemes are                    interested in solving multi objective optimization (MOPs)
VEGA, MOGA, NPGA, NSGA and NSGA-II. The fact is                     of the form:
that most of them are successful to many test suites for
Evolutionary Multi Objective Optimization (EMOO).                              Opt [ f1 (x), f2(x), ...... , fk(x) ]T
However they also encounter with some difficulties and
recent research trends are mainly heading for devising              Subject to the m inequality constraint:
new approach to handle with Pareto-Optimal Solutions.                          gi(x)
This research proposal mainly concentrates on a new                 And the p equality constraints:
approach to handle this concern.
                                                                                hi(x)=0         i = 1,2,…p

Volume 1, Issue 3, September – October 2012                                                                             Page 255
       Web Site: www.ijettcs.org Email: editor@ijettcs.org, editorijettcs@gmail.com
Volume 1, Issue 3, September – October 2012                                    ISSN 2278-6856


                                                               conditional heteroskedastic (GARCH) model, in which
Where k is the number of objective functions fi: Rn R.         the log-likelihood function is the objective to maximize.
We call x=[x1,x2,………,xn]T the vector of decision               In each case, the unknowns may be thought of as a
variables. We wish to determine from among the set F of        parameter vector, V, and the objective function, z = f(V),
all numbers which satisfy (1.2) and (1.3) the particular set   as a transformation of a vector-valued input to a scalar-
x1*, x2 *,……….,xn* which yields the optimum values of          valued performance metric z.
all objective functions.                                       Optimization may take the form of a minimization or
                                                               maximization procedure. Throughout this article,
   2.2 Genetic Algorithm
                                                               optimization will refer to maximization without loss of
The past decade has witnessed a flurry of interest within      generality, because maximizing f(V) is the same as
the financial industry regarding artificial intelligence       minimizing -f(V). My preference for maximization is
technologies, including neural networks, fuzzy systems,        simply intuitive: Genetic algorithms are based on
and genetic algorithms. In many ways, genetic                  evolutionary processes and Darwin's concept of natural
algorithms, and the extension of genetic programming,          selection. In a GA context, the objective function is
offer an outstanding combination of flexibility,               usually referred to as a fitness function, and the phrase
robustness, and simplicity.                                    survival of the fittest implies a maximization procedure.
"Genetic algorithms are based on a biological metaphor:           2.3 Sharing on MOO
They view learning as a competition among a population         Most experimental MOEAs incorporate phenotypic-based
of evolving candidate problem solutions. A 'fitness'           sharing using the “distance” between objective vectors for
function evaluates each solution to decide whether it will     consistency.     A sharing function[2] determines the
contribute to the next generation of solutions. Then,          degradation of an individual’s fitness due to a neighbor at
through operations analogous to gene transfer in sexual        some distance dist. A sharing function 'sh' was defined as
reproduction, the algorithm creates a new population of        a function of the distance with the following properties:
candidate solutions." Genetic algorithms are created when
computers evaluate and improve a population of possible             0 <= sh(dist) <= 1, for all distance dist
solutions to a problem in a stepwise fashion. The new               sh(0) = 1, and
program evolves by letting good solutions produce                   limdist- = 0;
offspring as bad solutions die out. Over time, the
individual solutions in the population become better and       there are many sharing functions which satisfy the above
better, producing a final, best solution. The method uses      condition. One approach can be,
terms derived from biology, such as generation,
inheritance and mutation, to describe the particular                1-(dist/ sh)             sh                 sh
program manipulation the computer uses at each step of              sh(dist) = 0                   ,otherwise
improvement, hence the name genetic algorithm.
The genetic algorithm is a probabilistic search algorithm               sh and
that iteratively transforms a set (called a population) of     of an individual x is given by:
mathematical objects (typically fixed-length binary
character strings), each with an associated fitness value,          eval'(x) = eval(x)/m(x),
into a new population of offspring objects using the           where m(x) returns the niche count for a particular
Darwinian principle of natural selection and using             individual x:
operations that are patterned after naturally occurring
genetic operations, such as crossover (sexual                       m(x) =    y   sh(dist(x,y)).
recombination) and mutation.
Virtually every technical discipline, from science and         In the above formula the sum over all y in the population
engineering to finance and economics, frequently               includes the string x itself ; consequently, if string x is all
encounters problems of optimization. Although                  by itself in its own niche, it fitness value does not
optimization techniques abound, such techniques often          decrease(m(x)=1). Otherwise , the fitness function is
involve identifying, in some fashion, the values of a          decreased proportionally to the number and closeness of
sequence of explanatory parameters associated with the         neighboring points. It means, that when many individuals
best performance of an underlying dependent, scalar-           are in the same neighborhood they contribute to ones
valued objective function. For example, in simple 3-D          another’s share count, thus derating one another’s fitness
space, this amounts to finding the (x,y) point associated      values. As a result this techniques limits the uncontrolled
with the optimal z value above or below the x-y plane,         growth of particular species within a population.
where the scalar-valued z is a surface identified by the       Sharing occurs only if both solutions are dominated or
objective function f(x,y). Or it may involve estimating a      non dominated with respect to the comparison set. A
large number of parameters of a more elaborate                 value is used , however, the associated niche count is
econometric model. For example, we might wish to               simply the number of vectors within in phenotypic space
estimate the coefficients of a generalized auto-regressive     rather than a degradation value applied against unshared

Volume 1, Issue 3, September – October 2012                                                                          Page 256
       Web Site: www.ijettcs.org Email: editor@ijettcs.org, editorijettcs@gmail.com
Volume 1, Issue 3, September – October 2012                                    ISSN 2278-6856


fitness. The solution with the smaller niche count is                been proved well suited for some problems but it still
selected for inclusion in the next generation.                       suffers from middling effect.
Represents how “close” two individuals must be in order
                                                                       MOGA
to decrease each other’s fitness. This value commonly
depends on the number of optima in the search space. As              Fonseca and Fleming (1993) proposed the Multi-objective
this number is generally unknown and because P Ftrue’s               Genetic Algorithm (MOGA). This approach consists of a
shape within objective space is also unknown , share’s               scheme in which the rank of a certain individual
value is assigned using Fonseca’s suggested                          corresponds to the number of individuals in the current
method(Fonseca and Fleming, 1998a):                                  population by which it is dominated. All non dominated
                                                                     individuals are assigned rank 1, while dominated ones are
                   k
                    i=1   (   i       )-
                                  share
                                           k
                                            i=1   i
                                                                     penalized according to the population density of the
     N =                                                             corresponding region of the trade-off surface. Its main
                      k                                              weakness is its dependence on the sharing factor (how to
                     s hare
                                                                     maintain the diversity is the main issue when dealing
Where N is the number of individuals in the populations,             with Evolutionary Multi-objective Optimization).
‘s is the difference between the maximum and the                        NPGA
minimum objective values in dimension I, and k is the                Horn et. al. (1994) proposed the Niched Pareto Genetic
number if distinct MOP objectives . As all variables but             Algorithm, which uses a tournament selection scheme
one are known, can be easily computed. For example , if              based on Pareto dominance. Two individuals are
k=2, 1= 2 =1 and N=50, the above equation simplifies                 compared against a set of members of the population
to:                                                                  (typically 10% of the population size). When both
                                                                     competitors are either dominated or non dominated (i.e.
        share   = ( 1+ 2)/N-1= 0.041                                 whether there is a tie), the result of the tournament is
                                                                     decided through fitness sharing in the objective domain (a
   2.4 Pareto Optimality                                             technique called equivalent class sharing was used in this
We normally look for “trade-offs”, rather than single                case) (Horn et. al., 1994). However its main weakness is
solutions when dealing with multi objective optimization             that besides requiring a sharing factor, this approach also
problems. The notion of optimum is therefore, different.             requires an additional parameter: the size of the
In the multi-objective optimization the notion of                    tournament.
optimality is to interrelate the relative values of the
different criteria- if we want compare apple with orange-              NSGA
then we must come up with a different definition of                  The Non-dominated Sorting Genetic Algorithm (NSGA)
optimality.                                                          was proposed by Srinivas and Deb (1994), and is based on
The most commonly adopted notion is that originally was              several layers of classifications of the individuals. Before
proposed by Vilferdo Pareto and we will use the term:                selection is performed, the population is ranked on the
Pareto optimum.                                                      basis of domination (using Pareto ranking): all non-
                                                                     dominated individuals are classified into one category
                                                                     (with a dummy fitness value, which proportional to the
fi(x) fi(x*) for all i = 1,………..,k and fj(x)          j   (x*) for   population size). To maintain the diversity of the
at least one j.                                                      population, these classified individuals are shared (in
                                                                     decision variable space) with their dummy fitness values.
3. REVIEW OF MOO APPROACHES                                          Then this group of classified individuals is removed from
                                                                     the population and another layer of non-dominated
  VEGA                                                               individuals is considered (i.e. the reminder of the
David Schaffer (1985) proposed an approach called as                 subpopulation is re-classified). The process continues
Vector Evaluated Genetic Algorithm (VEGA), and that                  until all individuals in the population are classified. Since
differed of the simple genetic algorithm (GA) only in the            individuals in the first front have the maximum fitness
way in which the selection was performed. This operator              value, they always get more copies than the rest of the
was modified so that at each generation a number of                  population.
subpopulations     were generated       by performing                However some researchers have reported that NSGA has
proportional selection according to each objective                   lower overall performance than MOGA, and it seems to
function in turn. Thus a problem with k objectives and a             be also more sensitive to the value of the sharing factor
population with size of M, k subpopulations of size M/k              than MOGA (Coello, 1996; Veldhuizen, 1999). However
each would be generated. These subpopulations would be               another approach of NSGA, NSGA-II is also proposed by
shuffled together to obtain a new population of size M, on           Deb et.al. It is more efficient than NSGA. Recent
which GA would apply the crossover and mutation                      Approaches: Recently, several new Evolutionary Multi
operators in the usual way. However this approach had                objective Optimization approaches have been developed,
                                                                     namely PAES and SPEA. The Pareto Archived Evolution
Volume 1, Issue 3, September – October 2012                                                                           Page 257
       Web Site: www.ijettcs.org Email: editor@ijettcs.org, editorijettcs@gmail.com
Volume 1, Issue 3, September – October 2012                                    ISSN 2278-6856


Strategy (PAES) was introduced y Knowles and Corne
(2000a). This approach is very simple: it uses a (1+1)
evolution strategy (i.e. a single parent that generates a
single offspring) together with a historical archive that
records all the non-dominated solutions previously found
(such archive is used as a comparison set in way
analogous to the tournament competition in NPGA). The
Strength Pareto Evolutionary Algorithm (SPEA) was
introduced by Ziztler and Thiele (1999). This approach
was conceived as a way of integrating different
Evolutionary Multi objective Optimization techniques.

4. PROPOSED APPROACH
                                                                                        Figure 1:
   4.1 Proposed New Approach
                                                                   4.2 Underlying Philosophy
The main idea behind the proposed approach is taken
from VEGA and NSGA. In the case of VEGA, first the              In this new approach, at the first step, we select a
initial population of size M is divided into k                  subpopulation that thrives with respect to f1 and f2; the
subpopulations (each of size M/k), and each                     next subpopulation will perform best for f2 and f3. If we
subpopulation is based on k separate objective                  take the elite individuals from these two subpopulations,
performance where total number of objective function is         and apply crossover, it will be natural that the offspring
k. In our approach, the population is divided in to M/k – 1     may achieve good performance with respect to f1, f2 and
subpopulations where M and k stands for same as VEGA.           f3. After the overall iteration, the newly generated
Suppose the objective functions are, f1, f2, f3… fk and the     population (with size M) may have good performance
first subpopulation will be created with respect to the         with all k objectives. After ranking and fitness sharing
performance of f1 and f2, the second will be created with       (according to non-domination), the last generation may
respect to f2 and f3, in the same way k – 1 th                  contain Pareto-optimal points that is the goal of our
subpopulation will be created from fk – 1 and fk. Then          search.
every subpopulation will be ranked and their fitness will          4.3 Disadvantages with some prior approach
be shared (analogous notion to NSGA) to ensure the              In VEGA we have k objective functions and M
maintenance of population diversity and non-dominated           population. The size of each subpopulation will be M/k.
individuals. Now let we enumerate the subpopulations as         Next step we shuffle and then use
s11, s12, s13 … s1k-1 and each of size M/k – 1. Now in
next step, we will create k – 2 subpopulations from s11,
s12, s13 … s1k-1. 1st subpopulation (enumerated s21) is
derived from elite members (non-dominated solutions
with respect to f1, f2 and f2, f3 pairs) of subpopulation
s11 and s12. We will take two individuals (elite member)
from s11 and s12 respectively and apply crossover. The
procedure will be iterated until M/k – 2 numbers of
individuals fills up the s21 subpopulation. In the same
way, rest of the s22, s23 … s2k-2 subpopulation will be
created. Now in next step, k – 3 subpopulations will be
generated (each of size M/k – 3). At every step fitness will
be shared among the individuals in every subpopulation
and non dominated one will get the relatively high                                    Fig: VEGA
fitness. It will become evident that this iteration (ranking,
fitness sharing, crossover and merging) will stop when          Operator on them. After shuffling we never get the more
they all merge to a population of size M and this iteration     fit value separately rather in VEGA we are almost
will continue up to k – 1 times if the total number of          averaging them. But our aim is to gradually get more fit
objective is k.                                                 value which is strictly followed in our technique. This
The overall process will be apparent form below figure          type of problem arise in VEGA is called “Middling
(fig : 1)                                                       performance”.
                                                                2) NSGA has a lower overall performance and it seems to
                                                                be more sensitive to the value of the sharing factor.
                                                                   4.4 Strengths
                                                                This approach include some computational strength:-


Volume 1, Issue 3, September – October 2012                                                                    Page 258
       Web Site: www.ijettcs.org Email: editor@ijettcs.org, editorijettcs@gmail.com
Volume 1, Issue 3, September – October 2012                                    ISSN 2278-6856


  i. Fitness measure is done during the first step and it
will be done using Min-Max formulation or Distance
Function. (Some non-linear fitness measuring scheme
should be accounted)
  ii. After first iteration, the procedure as NSGA can be
applied to achieve more accurate result (i.e. it can be
embedded into the classification phase of NSGA).
  iii. Some parameters, such as population number (M)
and generation count (k – 1) can be predicted.
  iv. For future implementations, niche method and
crowding can easily be applied.
  v. Parallel implementation is also possible

5. IMPLEMENTATION
   5.1 Algorithm to implement
Initialize the population with random values
    For i=1 to MAXGENS
        Evaluate each subpopulation based on objective
functions.
        assign shared fitness among subpopulation
        Rank on subpopulation based on shared fitness
value(Best fit =highest rank )
        2 point Crossover between two consecutive
subpopulation
        Merge step by step up to getting one final
population.
    End Loop;
 End;

  5.2 Test Functions
Among the many of the known MOEA test functions, we
implemented our approach on the following problem:

            F= (f1(x,y), f2(x,y)), where -5<=x, y<=10       From the obtained result it is evident that this method
            f1(x,y)= x2+y2 ,                                allows the function to converge very quickly.
            f2(x,y)=(x-5) 2 + (y-5) 2


   5.3 Results
Snapshots of different generations are given below.
Function1 and Function 2 represents f1 and f2
respectively.




                                                            6. CONCLUSION

                                                            Even though there exists a number of classical Multi
                                                            objective optimization techniques, they require some a
                                                            priori problem information. Since genetic algorithm use a
                                                            population of points, they may be able to find multiple
                                                            Pareto-Optimal solutions simultaneously. Schaffer’s
                                                            Vector Evaluated Algorithm (VEGA) and Deb’s Non
                                                            dominated Sorting Genetic Algorithm (NSGA) show

Volume 1, Issue 3, September – October 2012                                                               Page 259
       Web Site: www.ijettcs.org Email: editor@ijettcs.org, editorijettcs@gmail.com
Volume 1, Issue 3, September – October 2012                                    ISSN 2278-6856


excellent results in many test cases, but still they are not   Bangladesh in 2006. His research interest is mainly focused on
free from some short comings. This new approach shows          Semantic web, Social computing, Software Engineering, HCI,
a new approach to solve Mult iobjective optimization           Image processing, Web Mining and Data & knowledge
problem. However we hope that this research will be a          management. Currently he is a Lecturer in the Dept. of
                                                               Computer Science and Engineering (CSE), Bangladesh
great success if carried out.                                  University of Business and Technology (BUBT).

7. FUTURE PLAN                                                                       T.M. Rezwanul Islam obtained his BSc
                                                                                     degree in Computer Science and
Our future plan will be measured the performance on the                              Information Technology from Islamic
basis of tests like Implementation of Different Statistical                          University of Technology (IUT), Gazipur,
Testing, Error Ratio (ER), Two set coverage (CS),                                    Bangladesh in 2011. He received the OIC
Generational Distance (GD), Maximum Pareto Front                                     (Organization of the Islamic Conference)
Error (ME), Average Pareto Front Error (AE), Hyperarea                               scholarship for three years during his BSc
and Ratio (H,HR) etc.                                                                studies. His research interest is mainly
                                                               focused on AI, Evolutionary Computation, Software
                                                               Engineering, HCI, Image processing, Web Mining, Ubiquitous
REFERENCES                                                     Computing and Cognitive and Computational Neuroscience.
[1] David E. Goldberg, "Genetic Algorithms in search,          Currently he is a Lecturer in the Dept. of Computer Science and
     optimization and machine learning", Pearson               Engineering (CSE), Bangladesh University of Business and
                                                               Technology (BUBT).
     Education Asia Ltd, New Delhi, 2000.
[2] Michalewicz, Z.,"Genetic Algorithms + Data
                                                                                     Sadat Maruf Hasnayen obtained his
    Structures = Evolution Programs", 3rd edn.
                                                                                     BSc degree in Computer Science and
    Springer-Verlag, Berlin Heidelberg New York
                                                                                     Information Technology from Islamic
    (1996).                                                                          University of Technology (IUT),
[3] Carlos A. Coello Coello, David A. Van Veldhuizen,                                Gazipur, Bangladesh in 2011. He
    Gary B. Lamont, " Evolutionary Algorithms for                                    received the OIC (Organization of the
    Solving Multi-Objective Problems", Kluwer                                        Islamic Conference) scholarship for
    Academic Publishers; ISBN: 0306467623, May                                       three years during his BSc studies. His
    2002.                                                      research interest is mainly focused on AI, Evolutionary
[4] R. Sarker, M. Mohammadian and X. Yao,                      Computation, Software Engineering.
    "Evolutionary Optimization, Management and
    Operation" Research Series, Kluwer Academic
    Publishers.
[5] N.Srinivas and K.Deb, “Multiobjective Optimization
    using Non-Dominated Sorting Genetic Algorithm”,
    Kanpur Genetic Algorithm Laboratory (KanGAL),
    Indian Institute of Technology (IIT), Kanpur, India.
[6] Deb.K(2001),"Genetic           Algorithms         for
    Optimization", KanGAL Report Number 2001002.
[7] K.Deb, “Single and Multi-Objective Optimization
    using Evolutionary Computation”, Department of
    Mechanical       Engineering,     Kanpur     Genetic
    Algorithm Laboratory (KanGAL), KanGAL report
    No. 2004002,Institute of Technology (IIT), Kanpur,
    India.
[8] Shukla, P. K. and Deb, K. (August, 2005). On
    Finding Multiple Pareto-Optimal Solutions Using
    Classical      and      Evolutionary     Generating
    Methods. KanGAL Report No. 2005006.

AUTHORS

                Md. Saddam Hossain Mukta obtained his
                M.sc degree in Computer science from
                University of Trento, Italy where he was
                receiving Opera Universita Scholarship and
                earned a B.Sc degree in Computer Science and
                Information    Technology    from    Islamic
                University of Technology (IUT), Gazipur,

Volume 1, Issue 3, September – October 2012                                                                        Page 260

								
To top