
Approximation and Visualization of Interactive Decision Maps
Short course of lectures

Alexander V. Lotov

Dorodnicyn Computing Center of Russian Academy of Sciences
and Lomonosov Moscow State University

Lecture 2. Classes of methods for multi-objective (multi-criteria) optimization. A posteriori preference methods
Plan of the lecture
1. Old simple-minded approaches
2. Main types of modern methods for multi-criteria optimization (MCO):
   2a. No-preference methods
   2b. A priori preference methods
   2c. Interactive methods and possible functions of criteria
   2d. A posteriori preference methods (generating the Pareto frontier)
3. Stability of the Pareto frontier and a posteriori preference methods for constructing a list of Pareto-optimal points
Old simple MCO approaches

A priori restrictions: the DM selects the most important criterion $i^*$ and specifies restrictions for the others:

$y_{i^*} \to \min$ subject to $y = f(x),\ x \in X,\ y_i \le \tilde y_i,\ i = 1, 2, \dots, m;\ i \ne i^*$

A priori weights: the DM specifies weights for all criteria and solves the problem:

$\sum_{i=1}^{m} w_i y_i \to \max$ subject to $y = f(x),\ x \in X$
Classification of modern methods according to the role of the DM

MCDA methods fall into four groups:
- No-preference methods
- A priori preference methods
- Interactive methods
- A posteriori methods
No-preference methods

The opinions of the decision maker are not taken into account. The problem is solved by an expert using some relatively simple method (say, single-criterion optimization of some function of the criteria with "objectively" specified parameters). The solution is presented to the decision maker, who can accept or reject it.
An example of no-preference methods

For example, the following optimization problem can be solved:

minimize $h(y) = \max\{\lambda_i (y_i - y_i^*) : i = 1, 2, \dots, m\}$ over $Y$,

where $\lambda_i = 1/(y_i^{**} - y_i^*)$, $i = 1, 2, \dots, m$, and $y^{**}$ is the "worst" possible criterion point.

For $y^{**}$, one can take the criterion point whose coordinates are the worst feasible values of the particular criteria, or the worst values of the particular criteria over the Pareto frontier, or any other "bad" criterion point.

Clearly, the resulting decision depends to a great extent on the "worst" point: by modifying it, one can get any point of the Pareto frontier.
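
A minimal sketch of this scalarization on a hypothetical discrete set of alternatives (both criteria are minimized; $y^*$ is the ideal point and $y^{**}$ collects the worst feasible values):

```python
import numpy as np

# Hypothetical alternatives in criterion space; both criteria are minimized.
Y = np.array([[2.0, 9.0], [3.0, 6.0], [5.0, 4.0], [8.0, 2.5], [9.0, 2.0]])

y_star = Y.min(axis=0)           # ideal point y*: best value of each criterion
y_worst = Y.max(axis=0)          # "worst" point y**: worst feasible values
lam = 1.0 / (y_worst - y_star)   # lambda_i = 1 / (y_i** - y_i*)

# Minimize h(y) = max_i lambda_i (y_i - y_i*) over Y.
h = (lam * (Y - y_star)).max(axis=1)
print("compromise point:", Y[h.argmin()])
```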
A priori preference methods
(constructing the decision rule)

• Methods based on multi-attribute utility theory (MAUT)
• Methods based on direct weighting of criteria
• Methods based on complicated weighting procedures (Analytic Hierarchy Process)
• Outranking methods (ELECTRE, etc.)
• Methods based on heuristic concepts (say, goal identification)
The simplest method (by Keeney and Raiffa) for approximation of indifference curves of value functions based on multi-attribute utility theory

The case of an additive value function $v(y_1, y_2) = v_1(y_1) + v_2(y_2)$.

[Figure: indifference curves of the additive value function in the $(y_1, y_2)$ plane, built starting from $v = 0$ through the marked levels $v_1(y_1) = 1, 2$ and $v_2(y_2) = 1, 2$.]
Methods based on direct weighting of criteria

The DM specifies weights for all criteria, and the following problem is solved:

$U(y) = \sum_{i=1}^{m} w_i y_i \to \min$ subject to $y = f(x),\ x \in X$

An important disadvantage of linear functions: a loss in the value of one criterion can be compensated by the value of another criterion.
Complicated weighting procedures (say, the Analytic Hierarchy Process)

Such procedures consist of weighting and subsequent single-criterion optimization. The AHP method helps to develop the weights: the DM has to answer m(m-1)/2 questions concerning the relative importance of pairs of criteria. The AHP method helps to study quantitative criteria, too.
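
A minimal sketch of the weighting step, assuming a hypothetical pairwise-comparison matrix for m = 3 criteria and Saaty's principal-eigenvector prescription:

```python
import numpy as np

# Hypothetical answers to the m(m-1)/2 questions: A[i, j] says how many
# times criterion i is more important than criterion j, A[j, i] = 1/A[i, j].
A = np.array([[1.0, 3.0, 5.0],
              [1/3., 1.0, 2.0],
              [1/5., 1/2., 1.0]])

# The weights are the principal eigenvector of A, normalized to sum to 1.
vals, vecs = np.linalg.eig(A)
w = np.real(vecs[:, np.argmax(np.real(vals))])
w = w / w.sum()
print("criterion weights:", w)
```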
ELECTRE method

The French school (led by Prof. Bernard Roy) proposed interesting methods for constructing an outranking relation; these methods are effective in the case of a small number of alternatives.
Goal identification - 1

• The DM has to identify the goal without information on the set Y = f(X).

[Figure: the goal point chosen by the DM in the (y1, y2) criterion plane.]
Goal identification - 2

Then, by using some distance function, the closest point y0 of the set Y = f(X) is found.

[Figure: the goal point, the set Y = f(X), and the closest point y0 in the (y1, y2) criterion plane.]
Goal identification - 3

Goal programming is the most frequently used MCDA technique. However, if the goal is distant from the feasible criterion set Y = f(X), the solution y0 depends mainly on the distance function, not on the goal. Qualified experts feel the feasibility of criterion values and manage to identify appropriate goals, which are close to the feasible criterion set.
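
A minimal sketch of this effect on hypothetical data: for a goal far from Y, different distance functions pick different "closest" points:

```python
import numpy as np

# Hypothetical feasible criterion points (both criteria minimized).
Y = np.array([[1.0, 8.0], [2.0, 5.0], [4.0, 3.0], [7.0, 1.5], [9.0, 1.0]])
goal = np.array([-5.0, -5.0])    # an over-optimistic, infeasible goal

# The nearest feasible point depends on the chosen norm.
for name, ord_ in [("L1", 1), ("L2", 2), ("Linf", np.inf)]:
    d = np.linalg.norm(Y - goal, ord=ord_, axis=1)
    print(name, "-> closest point:", Y[d.argmin()])
```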
Interactive (iterative) methods

These methods are based on interaction between the decision maker and the computer and consist of a finite number of iterations.
At the first stage of an iteration, the decision maker specifies parameters of a function of the criteria.
At the second stage, the computer solves the single-criterion optimization problem with the criterion function specified by the decision maker.
General scheme of an interactive method

After the k-th iteration, the vector $y^{(k)} = f(x^{(k)})$ and some other auxiliary information must be provided to the decision maker.
• Stage 1. The DM explores the information obtained at the k-th iteration and, maybe, at previous iterations. The DM specifies parameters $\lambda^{(k+1)}$ of a new optimization problem: $h(f(x), \lambda^{(k+1)}) \to \max$ while $x \in X$.
• Stage 2. The computer solves the problem; the new decision and criterion vector are computed: $x^{(k+1)}$, $y^{(k+1)} = f(x^{(k+1)})$.
The simplest interactive method

The method is based on application of the linear function h(y) = <c, y>. An iteration of the method consists of two steps:
a) the computer finds the decision x0 from the set X that provides the maximum of the linear function h(f(x)) = <c, f(x)> over X with some given vector of parameters c < 0;
b) the DM studies the optimal decision x0 and the criterion point y0 = f(x0). If the DM is not satisfied with the decision, he/she changes the values of the parameters c, and the method goes to the next iteration.
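
A minimal sketch of this method, with a scripted stand-in for the DM and a hypothetical discrete set of alternatives (criteria minimized, so the components of c are negative):

```python
import numpy as np

# Hypothetical alternatives in criterion space; both criteria are minimized.
Y = np.array([[2.0, 9.0], [3.0, 6.0], [5.0, 4.0], [8.0, 2.5], [9.0, 2.0]])

def iterate(c):
    """Stage 2: maximize h(y) = <c, y> over Y for the DM-given c < 0."""
    return Y[(Y @ c).argmax()]

# Stage 1 is played here by a scripted "DM" who gradually shifts the
# emphasis from the first criterion to the second one.
for c in [np.array([-1.0, -0.2]), np.array([-1.0, -1.0]), np.array([-0.2, -1.0])]:
    print("c =", c, "-> optimal criterion point:", iterate(c))
```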
What scalar functions of the criteria can be used?

Let h(y) be a scalar function of the criteria, and let y0 be the point of maximum of h(y) over Y.
1) Does y0 belong to the Pareto frontier?
Answer: if the fact that $y''$ dominates $y'$ implies $h(y'') > h(y')$ (the scalar function is increasing with respect to Pareto domination), then y0 belongs to the Pareto frontier.
2) But what about the opposite: can any point of the Pareto frontier be found by optimization of some such function h(y) over Y?
Well-known examples of scalar functions of criteria

1) The linear function h(y) = <c, y>, where c < 0 (note that we consider the minimization problem!), is an increasing function with respect to Pareto domination.
2) The Tchebycheff distance from the ideal point y*,
$h(y) = \max\{\lambda_i (y_i - y_i^*) : i = 1, 2, \dots, m\}$,
where $\lambda_i$, i = 1, 2, ..., m, are some non-negative coefficients, is a decreasing function with respect to Slater domination.
Properties of the linear function

1) The maximum over Y of the linear function h(y) = <c, y>, where c < 0, belongs to the Pareto frontier.
However, the opposite is true only in the case of a convex EPH: any point of the Slater frontier can be the maximum over Y of the linear function h(y) = <c, y> with some non-positive c only if the EPH is convex.
Example

[Figure: a non-convex set f(X) with Pareto frontier P(Y) and a direction vector c in the (y1, y2) plane, illustrating that linear functions cannot reach the Pareto points in the non-convex part.]
Properties of the Tchebycheff distance

2) The point y0 of the minimum over Y of the Tchebycheff distance
$h(y) = \max\{\lambda_i (y_i - y_i^*) : i = 1, 2, \dots, m\}$,
where $\lambda_i$, i = 1, 2, ..., m, are some non-negative coefficients, belongs to the Slater frontier.
Moreover, any point of the Slater frontier can be the minimum over Y of the Tchebycheff distance with some non-negative coefficients.
Example

[Figure: the same non-convex set f(X) with Pareto frontier P(Y) in the (y1, y2) plane; level sets of the Tchebycheff distance from the ideal point y* reach any point of the frontier.]
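
Both properties can be checked on a tiny hypothetical example with a non-convex frontier: the point in the "dent" is unreachable by linear functions with c < 0 but is found by the Tchebycheff distance:

```python
import numpy as np

# Hypothetical data: three Pareto points; (5, 7) sits in a non-convex "dent".
Y = np.array([[2.0, 9.0], [5.0, 7.0], [9.0, 2.0]])
y_star = Y.min(axis=0)                       # ideal point y* = (2, 2)

# Linear scalarization: sweep many directions c < 0; (5, 7) is never optimal.
reachable = {tuple(Y[(Y @ np.array([-t, -(1.0 - t)])).argmax()])
             for t in np.linspace(0.01, 0.99, 99)}
print("reachable by linear functions:", reachable)

# Tchebycheff distance with lambda = (0.5, 0.5) reaches the "dent" point.
lam = np.array([0.5, 0.5])
h = (lam * (Y - y_star)).max(axis=1)
print("Tchebycheff minimizer:", Y[h.argmin()])
```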
A posteriori preference methods

A posteriori preference methods are based on approximating the Pareto frontier and informing the decision maker about it.
A posteriori methods inform the DM about the Pareto-optimal set without asking for his/her preferences. The DM has to specify a preferred Pareto point, i.e., a non-dominated combination of criterion values, only after completing the exploration of the Pareto frontier. Thus, the single-shot specification of the preferred Pareto-optimal objective point may be separated in time from the exploration phase.
Two main problems must be solved by the a posteriori preference methods:
• how to approximate the Pareto frontier;
• how to inform the DM about the Pareto frontier.

In the case of two criteria, informing the DM is usually based on a graphical display of the Pareto frontier. In the case of more than two criteria, a list of objective points is usually provided to the DM.
Question: is the problem of approximating the Pareto frontier stated correctly?
Stability of the Pareto frontier - 1

Example: Slater (weak Pareto) frontier S(Y) and Pareto frontier P(Y) for the non-disturbed feasible set Y in criterion space.

[Figure: the set Y with corner points A, B, C in the (y1, y2) plane; the Pareto frontier P(Y) and the larger Slater frontier S(Y) are marked.]
Stability of the Pareto frontier - 2

P(Y) for the disturbed feasible set in criterion space.

[Figure: the disturbed set Y with points A, B, C; the Pareto frontier P(Y) of the slightly disturbed set differs substantially from the non-disturbed one.]
Stability of the Pareto frontier - 3

If some natural requirements hold, the condition
S(Y) = P(Y),
where Y is the non-disturbed feasible set in criterion space, is a necessary and sufficient condition of stability of P(Y) with respect to disturbances of the parameters (Sawaragi Y., Nakayama H., Tanino T., 1985).
Stability of the Edgeworth-Pareto Hull - 1

Edgeworth-Pareto Hull (EPH) Yp for the non-disturbed feasible set Y in criterion space. For minimized criteria, the EPH is $Y_p = Y + R^m_+$, i.e., the set Y broadened by all dominated criterion points; it has the same Pareto frontier as Y.

[Figure: the set Y with points A, B, C and its Edgeworth-Pareto Hull Yp in the (y1, y2) plane.]
Stability of the Edgeworth-Pareto Hull - 2

Edgeworth-Pareto Hull (EPH) Yp for the disturbed feasible set Y in criterion space.

[Figure: under the same small disturbance of Y, the EPH Yp changes only slightly.]
Stability of the Edgeworth-Pareto Hull - 3

If some natural requirements hold, the Edgeworth-Pareto Hull is stable with respect to disturbances of the parameters of the problem.
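
For minimized criteria and a finite approximation, the EPH is simply $Y + R^m_+$, so membership can be tested directly; a minimal sketch on hypothetical data:

```python
import numpy as np

# Hypothetical finite approximation of the Pareto frontier (minimized criteria).
Y = np.array([[2.0, 9.0], [5.0, 5.0], [9.0, 2.0]])

def in_eph(y, Y):
    """True iff y is in Y + R^m_+, i.e. some row of Y is <= y componentwise."""
    return bool(np.any(np.all(Y <= y, axis=1)))

print(in_eph(np.array([6.0, 6.0]), Y))   # True: dominated by (5, 5)
print(in_eph(np.array([3.0, 3.0]), Y))   # False: better than every listed point
```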
The first a posteriori preference method

The first a posteriori preference method was a method for approximation of the set P(Y) in linear bi-criterion problems. It was introduced by S. Gass and T. Saaty in 1955 and is based on parametric linear programming.
Parametric LP problem for two criteria

$\gamma y_1 + (1 - \gamma) y_2 \to \min$ subject to $y = Cx,\ Ax \le b$,

where the parameter $\gamma$ changes from 0 to 1. The problem is solved by using a method for solving parametric LP problems.
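
A minimal sketch of the parametric sweep on a hypothetical LP (here y = x, i.e., C is the identity), sampling γ on a grid and using scipy's linprog instead of a true parametric LP solver:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical LP: minimize g*y1 + (1-g)*y2 with y = x, subject to
# x1 + 2*x2 >= 6 and 2*x1 + x2 >= 6, x >= 0 (">=" rows are negated
# to fit linprog's "A_ub @ x <= b_ub" form).
A_ub = -np.array([[1.0, 2.0], [2.0, 1.0]])
b_ub = -np.array([6.0, 6.0])

points = set()
for g in np.linspace(0.01, 0.99, 49):
    res = linprog(c=[g, 1.0 - g], A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
    points.add(tuple(np.round(res.x, 6)))
print("Pareto vertices found:", sorted(points))
```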
In addition to the list of objective points, a picture was provided!

[Figure: the bi-criterion Pareto frontier displayed graphically in the (y1, y2) plane.]
Different methods

Restrictions-based method

[Figure: points of the Pareto frontier in the (y1, y2) plane obtained by the restrictions-based method.]
Restrictions-based method: formal description

$y_{i^*} \to \min$ subject to $y = f(x),\ x \in X,\ y_i \le l_i^p,\ i \ne i^*,\ p = 0, 1, \dots, P_i$

The result: a large list of points of the Pareto frontier.
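
A minimal sketch on a hypothetical discrete set: y1 is minimized under a grid of upper bounds l^p imposed on y2:

```python
import numpy as np

# Hypothetical alternatives in criterion space; both criteria are minimized.
Y = np.array([[2.0, 9.0], [3.0, 6.0], [5.0, 4.0], [8.0, 2.5], [9.0, 2.0]])

pareto_list = []
for l in np.linspace(Y[:, 1].min(), Y[:, 1].max(), 8):   # bounds l^p on y2
    feasible = Y[Y[:, 1] <= l]
    if feasible.size:
        pareto_list.append(tuple(feasible[feasible[:, 0].argmin()]))
print("list of Pareto points:", sorted(set(pareto_list)))
```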
Weighted Tchebycheff metric as the distance from the ideal point

[Figure: level sets of the weighted Tchebycheff distance from the ideal point y* in the (y1, y2) plane.]
Formal description

The problems

$h(y) = \max\{\lambda_i (y_i - y_i^*) : i = 1, 2, \dots, m\} \to \min$ subject to $y = f(x),\ x \in X$

are solved for a large number of parameters satisfying

$\lambda_i \ge 0, \quad \sum_{i=1}^{m} \lambda_i = 1$.

Result: a large list of Pareto points.
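
A minimal sketch on a hypothetical one-parameter problem with frontier $f(x) = (x, 1 - \sqrt{x})$, $x \in [0, 1]$: the weight simplex is sampled on a grid and each min-max problem is solved numerically:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical bi-criterion problem: x in [0, 1], both criteria minimized,
# ideal point y* = (0, 0).
def f(x):
    return np.array([x, 1.0 - np.sqrt(x)])

pareto_list = []
for lam1 in np.linspace(0.1, 0.9, 9):            # lambda_1 + lambda_2 = 1
    lam = np.array([lam1, 1.0 - lam1])
    res = minimize_scalar(lambda x: (lam * f(x)).max(),
                          bounds=(0.0, 1.0), method="bounded")
    pareto_list.append(np.round(f(res.x), 3))
print("list of Pareto points:\n", np.array(pareto_list))
```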
Parametric LP methods for linear problems with m > 2

A direct development of the idea of Gass and Saaty: parametric LP methods for m > 2 construct all Pareto vertices of a linear multi-objective problem by moving from one vertex to an adjacent one (see R.L. Steuer. Multiple Criteria Optimization. NY: John Wiley, 1986). A very large list of vertices is provided to the DM (sometimes along with the efficient faces of the set Y).
Evolutionary (including genetic) multiple-criteria optimization

[Figure: a cloud of population points in the (y1, y2) plane approaching the Pareto frontier.]

Result: a large list of quasi-Pareto points.
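
Whatever the evolutionary machinery, the resulting population is passed through a Pareto filter that keeps only non-dominated points; a minimal sketch of such a filter (criteria minimized, hypothetical data):

```python
import numpy as np

def pareto_filter(Y):
    """Keep the points of Y not dominated by any other point (minimization)."""
    keep = []
    for i, y in enumerate(Y):
        dominated = np.any(np.all(Y <= y, axis=1) & np.any(Y < y, axis=1))
        if not dominated:
            keep.append(i)
    return Y[keep]

population = np.array([[2.0, 9.0], [3.0, 6.0], [4.0, 6.5], [5.0, 4.0],
                       [6.0, 6.0], [9.0, 2.0]])
print(pareto_filter(population))   # drops the dominated points (4, 6.5), (6, 6)
```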
Approximation of the bi-criterion Pareto frontier by linear segments: NISE (Cohon, 1978)

[Figure: the Pareto frontier approximated by inscribed linear segments in the (y1, y2) plane; the picture is provided to the DM!]

The preferred point of an approximation is identified by the DM.
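
A minimal sketch of the NISE idea on a hypothetical discrete set: between two known Pareto points a and b, minimize the linear function whose level lines are parallel to the segment [a, b]; if a point falls noticeably below the segment, add it and refine both sub-segments:

```python
import numpy as np

# Hypothetical alternatives in criterion space; both criteria are minimized.
Y = np.array([[1.0, 10.0], [2.0, 6.0], [4.0, 3.5], [7.0, 2.0], [10.0, 1.0]])

def nise(a, b, tol=1e-9):
    w = np.array([a[1] - b[1], b[0] - a[0]])     # normal to [a, b], w > 0
    y = Y[(Y @ w).argmin()]                      # best point in direction w
    if (Y @ w).min() < w @ a - tol:              # improvement below the segment
        return nise(a, y) + [y] + nise(y, b)
    return []

a, b = Y[Y[:, 0].argmin()], Y[Y[:, 1].argmin()]  # the two extreme Pareto points
print("approximation:", [tuple(p) for p in [a] + nise(a, b) + [b]])
```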
Lessons learned from bi-objective problems

According to Bernard Roy, "In a general bi-criterion case, it makes sense to display all efficient decisions by computing and depicting the associated criterion points; then, the DM can be invited to specify the best point on the compromise curve."
It is extremely important that, in bi-objective MOO problems, the graphs provide, along with Pareto-optimal objective points, information about the objective tradeoffs. Tradeoff information helps to identify the most preferred point on the tradeoff curve.

The question is: how can this experience be applied in the case of m > 2?
