CAPACITY BUILDING WORKSHOP
ON ORGANISATION OF THE CREATIVE SYSTEM MODELLING WORKSHOP
(Bhutan, 24-25 October 2007)

NetSyMoD
Network Analysis – Creative System Modelling – Decision Support


Background paper
Introduction to selected methods to elicit weights
Index
1. Introduction
2. Methods to elicit weights
   Rank weighting
   Direct weighting method
   SMARTER
   Allocation of points
   Fixed budget allocation
   SMART (Simple Multi-Attribute Rating Technique)
   SWING
   SIMOS Procedure
   Pairwise comparison
   Importance Weights
3. Aggregating weights
References
    1. Introduction
Decision problems involve criteria of varying importance to decision makers. The criterion weights
usually provide the information about the relative importance of the criteria considered. There are
many techniques commonly used for assessing criterion weights, such as ranking and rating
methods, pairwise comparison, and trade-off methods.
There are several reasons why you may want to elicit the relative importance of attributes1, and
why you may want to do it in a participatory manner.
First of all, you may want to investigate the trade-offs among different aspects of a response policy,
or several dimensions of a management problem. In this case, weight elicitation can be a direct
input in decision making: multiattribute value theory (MAVT) is a decision approach in which
the overall value of alternatives is given by ratings of the alternatives with respect to each attribute
or assessment criterion, and their weights. It is easy to understand why the process of assigning
weights is important: different weights, as well as different aggregation procedures, may lead to
different results in the final decision. It is also clear that the selection of actors whose weights are to
be included in the final decision process will affect the results.
Another reason why you may want to elicit the importance that actors assign to different factors is
the identification of the most important issues on which you should be focusing, or of the indicators
needed to monitor the situation with respect to a particular problem. You can therefore seek the
support of experts or stakeholders both in identifying the set of issues or criteria in the first place,
and then in ranking them according to their relative importance.
In both cases, through various aggregation procedures, you can then arrive at an overall ranking of
the attributes. The weight attributed to each attribute is effectively a scaling factor.
For instance, in WP6 of the Brahmatwinn project, you are asked to validate with local experts a
series of indicators that project partners have identified as important in assessing vulnerability to
climate change and monitoring it. The idea is that "objective" and "scientific" indicators should be
integrated with the perspective of the local population, who face a set of constraints and issues which
are not easily seen from a more traditional modelling (hydrogeological, climatological, but also
socio-economic) perspective. Similarly, in WP8 you will be asked to use participatory weighting
techniques to arrive at an estimate of the likelihood of different "what if?" scenarios, based on the
modelling exercise and monitoring indicators.
Before turning to the description of a selection of participatory weight elicitation methods, it is
important to point out that there are different ways to elicit (i.e. make explicit) decision preferences
and to aggregate them. In theory, if a participant is consistent in his or her weighting, the final ranking
should be the same under any procedure. Yet, different weight assignment methods and
different aggregation procedures may lead to different overall rankings. For instance, there is a
tendency to assign weights in multiples of 10, or to consider only the ranking of the

1
 We use the term "attribute" to indicate anything whose weight or relative importance is to be elicited. Thus, the term
attribute may indicate assessment criteria, indicators, problematic issues, or anything of relevance in the specific
context.
attributes rather than the strength of the preferences. Another important limitation of certain
methods is that the ranges of the impacts are not sufficiently considered when the weights are elicited,
so the resulting prioritizations reflect the participant's general values in life rather than acceptable
trade-offs. However, it is possible to alleviate these problems if the analyst and decision makers are
conscious of them. This highlights the importance of the education of analysts and decision makers.
Each method is based on a different underlying rationale, and the results include subjective
elements, as the analysis is based on individual evaluations. Thus, different techniques have different
strengths and weaknesses, as they all stress some aspects of the weighting process while
excluding others. The application of different methods and the comparison of the results
obtained are recommended, as they add information and enhance the weighting exercise.
In general, the exercise is of intrinsic value, as it allows you to ensure that all aspects of the problem
are identified and that the numbers are used to help the group think hard about these aspects, so that a
more robust and transparent selection can be arrived at. Furthermore, the exercise allows you to
identify attributes over which wide disagreements exist, as well as areas of critical importance where
there is wide consensus.
In the case of Brahmatwinn, the validation of the relative importance of “modelling indicators” by
local experts and stakeholders is clearly of value, as it will ensure that the models developed by the
“scientific” community include a wider range of aspects which are also relevant to the local
conditions and respond to actual problems encountered.


In the next section, a selection of procedures to elicit weights in a group setting is presented. We
will also look at how the results of the weighting exercise can be elaborated upon, and presented
back to the group.

    2.   Methods to elicit weights
Rank weighting
The simplest elicitation methods are rank-based methods. In rank-based methods, each
respondent is asked to rank a list of attributes in order of preference, and the weights of the
attributes depend on the position of each attribute in the ranking. Weights are calculated using
mathematical formulae that preserve this order.

Direct weighting method
For instance, in the direct weighting method, participants are asked to assign a vote to each attribute
in a list, independently of one another. Traditionally, votes are assigned on a 5-point Likert2
scale, which is considered to be the most intuitive and the least prone to biases.




2
  A Likert scale is the most widely used scale in survey research. When responding to a Likert item, respondents specify
their level of agreement with a statement (Likert, R., 1932, "A Technique for the Measurement of Attitudes", Archives of
Psychology, 140, 1-55).
These methods are simple and do not require much from the participants; they are thus ideal for a
preliminary screening of the alternatives. However, only information on the ranking order of
the attributes is used, and there are likely to be several weightings implying the same order.
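
As an illustration, here is a minimal sketch in Python of one common way to turn direct votes into normalised weights; the attribute names and votes are hypothetical examples, not part of the method description.

    # Direct weighting: each attribute receives an independent vote on a
    # 5-point scale; one option is to normalise the votes into weights.
    # Attribute names and votes are hypothetical.
    votes = {"cost": 5, "wildlife impact": 3, "distributional aspect": 2}

    total = sum(votes.values())
    weights = {attr: v / total for attr, v in votes.items()}
    print(weights)  # {'cost': 0.5, 'wildlife impact': 0.3, 'distributional aspect': 0.2}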
Reference document:

       Word file:     1a. Direct weighting Matrix.doc

       Excel file:    1b. Direct weigthing.xls



SMARTER
The SMARTER procedure builds upon the SMART method. Respondents are asked to rank all the
attributes in decreasing order of importance. The weights are then calculated as
$w_i = N + 1 - R_i$, where $R_i$ is the rank assigned to attribute $i$ and $N$ is the number of attributes. The weights are then normalised.

Easy-to-use approaches such as SMART are nowadays the common basis for many applied multi-
criteria decision analysis studies (Belton & Stewart, 2001). In these techniques, the preference
comparisons are made with respect to a single attribute only. The SMART method is therefore
widely used, easy to understand, and the data is easy to process. However, in cases in which the
exercise is aimed at assessing alternative options against set criteria, the comparison of the relative
importance of attributes is meaningless if it does not also reflect the effectiveness of the attributes.
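
A minimal sketch in Python of the rank-sum calculation described above; the attribute names and ranking are hypothetical.

    # SMARTER rank-sum weights: w_i = (N + 1 - R_i), then normalised.
    # Hypothetical ranking, 1 = most important.
    ranks = {"cost": 1, "wildlife impact": 2, "distributional aspect": 3}

    n = len(ranks)
    raw = {attr: n + 1 - r for attr, r in ranks.items()}
    total = sum(raw.values())                      # 3 + 2 + 1 = 6
    weights = {attr: w / total for attr, w in raw.items()}
    print(weights)  # cost: 0.5, wildlife impact: 0.33..., distributional aspect: 0.16...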


Reference document:

       Word file:     2a. SMARTER Matrix.doc

       Excel file:    2b. Smarter.xls



Allocation of points

Fixed budget allocation
A variant of rank weighting that reduces some of the potential biases of the simplest methods is
the allocation of points, whereby participants are asked to allocate a fixed number (budget) of points
(100, for instance) amongst the attributes to be weighted. These points, once normalised,
are interpreted as the weights of the attributes.
The allocation of points method mitigates to some extent the tendency not to give significant
consideration to the trade-offs among attributes. This method is also very quick to explain and easy to
implement. The processing of data is also straightforward, both in determining individual rankings
and in aggregating them across participants.
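
A minimal sketch in Python of the budget normalisation, using a hypothetical allocation.

    # Fixed budget allocation: 100 points distributed across the attributes;
    # normalised points are read directly as weights. Hypothetical allocation.
    points = {"cost": 50, "wildlife impact": 30, "distributional aspect": 20}
    assert sum(points.values()) == 100             # the whole budget must be allocated

    weights = {attr: p / 100 for attr, p in points.items()}
    print(weights)  # {'cost': 0.5, 'wildlife impact': 0.3, 'distributional aspect': 0.2}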
Reference document:

           Word file:        3a. Allocation of points Matrix.doc

           Excel file:       3b. Allocation of point.xls




SMART (Simple Multi-Attribute Rating Technique)
The participants are asked to rank the attributes in order of importance. They then assign 10 points
to the least important attribute, and an increasing number of points (without an explicit upper limit)
to the other attributes, to express their importance relative to the least important attribute.
The weights are calculated by normalising the sum of the points to one:

$w_i = \frac{p_i}{\sum_{j=1}^{N} p_j}$

where $p_i$ corresponds to the points given to attribute $i$ and $N$ is the total number of attributes.
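
A minimal sketch in Python of this normalisation; the attribute names and point assignments are hypothetical.

    # SMART: the least important attribute gets 10 points, the others more,
    # with no upper limit; weights are the points normalised to sum to one.
    # Hypothetical point assignments.
    points = {"distributional aspect": 10, "wildlife impact": 25, "cost": 40}

    total = sum(points.values())                   # 75
    weights = {attr: p / total for attr, p in points.items()}
    print(weights)  # distributional aspect: 0.13..., wildlife impact: 0.33..., cost: 0.53...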


A variant of this method allows interval ranking, that is, any attribute can serve as the reference point.
This is useful because it is sometimes easier to use an easily measurable attribute, such as money,
making the weight elicitation process simpler and more precise. So, for instance, if a respondent
has to rank three attributes of a policy – cost of the measure, impact on wildlife, and distributional
aspect – it may be easier to ask him or her to compare "impact on wildlife" and "distributional
aspect" to "cost of the measure", which is easier to quantify. Thus, the remaining two attributes
may receive a positive or negative score, depending on whether they are more or less important than
"cost of the measure". However, whenever participants cannot identify an attribute measured with
little imprecision to be used as a reference, it may still be better to use the least preferred attribute
as the reference.


Reference document:

           Word file:        4a. SMART Matrix.doc

           Excel file:       3b. Allocation of point.xls

SWING
One of the problems of the weighting methods described so far is that they are insensitive to the
scale of the attributes being compared. This shortcoming is particularly important when weights are
elicited as part of an assessment process.
Assume that the respondent is asked to evaluate two policy interventions: Option A costs €2,000
and limits the concentration of the pollutant to 80 mg/m2; Option B costs €2,001, but it limits the
concentration to 20 mg/m2. Using a scale-insensitive ranking method, the respondent could assign
the same importance to the two criteria, which would imply that a €1 reduction in the cost of the
option is worth the same as passing from a pollutant concentration of 80 mg/m2 to 20 mg/m2.
Yet, this is unlikely to be the case.
A better method to elicit weights would then be to ask the respondent which of the two criteria –
cost of the option or concentration of the pollutant – she would rather improve from its worst possible
outcome.
In SWING, respondents are asked to imagine a hypothetical scenario in which all attributes are at
their worst possible state – that is, they are all assigned a value of 0, where the states of the attributes
are normalised to a 0–100 scale.
As a second step, respondents are asked to imagine that they are only allowed to increase one
attribute to its most preferred (maximum) level, and 100 points are assigned to the most important
attribute.
Thirdly, respondents are asked to imagine a situation in which the most preferred attribute scores
100, and all the remaining ones are at 0. They are then tasked with selecting a second attribute,
which could be raised to its maximum level. This procedure continues until all attributes are ranked
in decreasing order of importance.
Finally, respondents are asked to compare each criterion in turn with the most highly ranked
criterion. In particular, they have to assess the increase in overall value resulting from an increase
from 0 to 100 on the selected criterion as a percentage of the increase in overall value resulting from
an increase from a score of 0 to 100 on the most highly ranked criterion.
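
A minimal sketch in Python of how the elicited swing points could be converted into weights; the attribute names and scores are hypothetical.

    # SWING: 100 points go to the most important worst-to-best swing; each other
    # swing is scored as a percentage of it, and the points are normalised.
    # Hypothetical swing scores.
    swing_points = {"pollutant concentration": 100, "cost": 40, "distributional aspect": 20}

    total = sum(swing_points.values())             # 160
    weights = {attr: s / total for attr, s in swing_points.items()}
    print(weights)  # pollutant concentration: 0.625, cost: 0.25, distributional aspect: 0.125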


Reference documents

       Word file:      5.a Swing Matrix

       Excel file:     3.b Allocation of points



SIMOS Procedure
The Simos procedure (Simos, 1990a, 1990b) was selected as it provides a simple and effective
approach for weight elicitation. It is based on a set of coloured cards, one per criterion, provided to
each participant. The participants are asked to rank these cards (or criteria) from the least important
to the most important. The rank order of a criterion expresses the importance a single participant
wants to ascribe to that criterion: the first criterion in the ranking is the least important and the last
criterion in the ranking is the most important. If two criteria are found to be equally important,
they are given the same rank position. In order to allow participants to express strong preferences
between criteria, another set of cards (white cards) is introduced. The participants are asked to
insert white cards between two successive coloured cards, where the number of white cards is
proportional to the difference in importance between the criteria considered. Subsequently, the
criteria weights are calculated using the rank positions attributed in the previous step: the rank
positions are simply divided by the total sum of the positions of the considered criteria, thus
providing a vector of weights to be applied to the evaluation criteria, in the form of real values
summing up to 1.
You can either distribute a matrix to participants and ask them to imagine the white cards, or you can
distribute the cards themselves, whichever you think more appropriate.
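
A minimal sketch in Python of the rank-position calculation described above; the card ordering is hypothetical.

    # SIMOS: cards ordered from least to most important; "white" cards widen the
    # gap between successive criteria but carry no weight themselves. Weights are
    # the rank positions of the coloured cards, normalised by their sum.
    # Hypothetical card ordering.
    cards = ["distributional aspect", "white", "wildlife impact", "cost"]

    positions = {c: pos for pos, c in enumerate(cards, start=1) if c != "white"}
    total = sum(positions.values())                # 1 + 3 + 4 = 8
    weights = {crit: pos / total for crit, pos in positions.items()}
    print(weights)  # distributional aspect: 0.125, wildlife impact: 0.375, cost: 0.5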


Reference documents:
       Word file:     6a. SIMOS Matrix.doc
                      6c. How to assign SIMOS ranks.doc
       Excel file:    6b. SIMOS RANKING.xls



Pairwise comparison
The method calls for a pairwise rating of attributes. Various scales can be used; for instance, a
scale of 0 (equal importance) to 3 (absolutely more important) is commonly adopted. The weights
are averaged for each attribute (Saaty 1990, 1994).
Participants will be presented with a worksheet and asked to compare the attribute in the row
with the one in the column. For each cell, they will be required to mark down the letter of the more
important attribute, and then score the difference in importance from 0 (no difference) to
3 (major difference).
The results are consolidated by adding up the scores obtained by each attribute when preferred to
the attribute it is compared with. The results are then normalised to a total of 100.
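
A minimal sketch in Python of this consolidation step; the attributes and judgments are hypothetical.

    # Pairwise comparison: for each pair, the preferred attribute collects the
    # 0-3 difference score; per-attribute totals are normalised to 100.
    # Hypothetical judgments: (preferred attribute, difference score) per pair.
    from itertools import combinations

    attributes = ["cost", "wildlife impact", "distributional aspect"]
    judgments = {
        ("cost", "wildlife impact"): ("cost", 2),
        ("cost", "distributional aspect"): ("cost", 3),
        ("wildlife impact", "distributional aspect"): ("wildlife impact", 1),
    }

    scores = {a: 0 for a in attributes}
    for pair in combinations(attributes, 2):
        preferred, diff = judgments[pair]
        scores[preferred] += diff

    total = sum(scores.values())                   # 6
    weights = {a: 100 * s / total for a, s in scores.items()}
    print(weights)  # cost: 83.3..., wildlife impact: 16.6..., distributional aspect: 0.0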
The tool provides a framework for comparing each course of action against all others, and helps to
show the difference in importance between factors. Note, however, that this method does not allow
you to check the consistency of respondents' preferences – in particular, their transitivity. You
should therefore examine the result matrices of each respondent to check for major problems.
The AHP method described below uses more complex algorithms to derive the final weights from a
pairwise comparison matrix, which makes it possible to check for consistency in judgment.


Reference documents:
       Word file:     7a. PAIRWISE Matrix.doc
       Excel file:    7b. PAIRED COMPARISON.xls



Importance Weights
Related to the pairwise comparison method is the elicitation of importance weights. Importance
weights are used for priority setting, and are assigned using a pairwise comparison method
introduced by the Analytic Hierarchy Process (AHP). The AHP concept basically allows
the user to evaluate the importance of each of the selected parameters (criteria) when compared to
each of the other parameters, on a scale from 1 to 9 ranging from "same importance" (1) to
"absolutely more important" (9).
The pairwise comparison methodology is considered to be the most user-transparent and scientifically
sound methodology for assigning weights representing the relative importance of criteria. The
resulting weights represent a "best fit" set of weights, derived from the eigenvector of the square
reciprocal matrix used to compare all possible pairs of criteria.
Thus, each pair of attributes is compared, and the preference ratios are stored in a comparison
matrix. Because individual judgments will never agree perfectly, the degree of consistency achieved
in the pairwise comparisons is measured by a Consistency Ratio (CR), indicating whether the
comparisons made were sound.
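
A minimal sketch in Python of the eigenvector calculation and the Consistency Ratio, assuming numpy is available; the 3x3 comparison matrix is a hypothetical example on Saaty's 1-9 scale.

    # AHP importance weights: the weight vector is the principal eigenvector of
    # the reciprocal comparison matrix; the Consistency Ratio (CR) flags
    # incoherent judgments.
    import numpy as np

    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 3.0],
                  [1/5, 1/3, 1.0]])

    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)                    # index of the principal eigenvalue
    weights = np.abs(eigvecs[:, k].real)
    weights /= weights.sum()                       # approx. [0.64, 0.26, 0.10]

    n = A.shape[0]
    CI = (eigvals.real[k] - n) / (n - 1)           # consistency index
    RI = {3: 0.58, 4: 0.90, 5: 1.12}[n]            # Saaty's random index
    CR = CI / RI                                   # CR < 0.1 is usually considered acceptable
    print(weights, CR)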


Reference documents:
       Word file:      8a. Importance weight.doc
       Excel file:     8b. AHP consistency.xls


Please note that you may need to install an additional Excel component for the formulae that
calculate the importance weights to work. You will find the additional component in the
MATRIX XLA folder. To install it, open your Excel application and select
<Add-Ins...> from the <Tools> menu. Once in the Add-Ins Manager, search for and select the
"matrix.xla" file, stored on the CD-ROM of the course.

   3. Aggregating weights

Once you have elicited the relative importance that each individual respondent places on your list of
attributes, you may need to construct an aggregated weighting list, which integrates, in one score for
each attribute, the opinions of all respondents. For instance, if you have a list of monitoring
indicators but limited resources, you may need to select the most important ones on which
monitoring is to be prioritised. Alternatively, if you have a list of issues or problem areas which you
have asked respondents to rank, you may need to select only the most important ones for further
research or action.
Of course, the aggregation procedure you choose will to some extent determine the final ranking of
the attributes that you obtain. However, for the purpose of selecting from a list of attributes the most
important ones on which to concentrate, simple additive weighting (SAW) may suffice. This is the
simplest aggregation form as it assumes additive aggregation of decision outcomes. Thus, the
overall score for attribute $i$ is given by:

$S_i^{SAW} = \sum_{e \in E} w_{i,e}$

where $w_{i,e}$ is the weight that participant $e$ assigns to attribute $i$ and $E$ is the set of participants.
Individual rankings are, by default, assumed to have the same weight. Note, however, that you may
wish to give participants' votes different relative importance if, for instance, a specific category is
over-represented. Assume, for instance, that you have 5 participants with an ecological background,
but only 1 economist. Then, if you were to give the same importance to the ranks of all participants,
you would likely find that your results are biased towards giving more importance to ecological-type
attributes.
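
A minimal sketch in Python of this weighted aggregation; the participants, their category weights, and their attribute weights are hypothetical.

    # Simple additive weighting across participants: the aggregate score of each
    # attribute is the sum of individual weights, optionally scaled by a
    # participant weight to correct for over-represented groups.
    individual = {
        "ecologist_1": {"cost": 0.2, "wildlife impact": 0.8},
        "ecologist_2": {"cost": 0.3, "wildlife impact": 0.7},
        "economist":   {"cost": 0.7, "wildlife impact": 0.3},
    }
    # Down-weight the over-represented group so both disciplines count equally.
    participant_weight = {"ecologist_1": 0.25, "ecologist_2": 0.25, "economist": 0.5}

    aggregate = {
        attr: sum(participant_weight[e] * w[attr] for e, w in individual.items())
        for attr in ["cost", "wildlife impact"]
    }
    print(aggregate)  # {'cost': 0.475, 'wildlife impact': 0.525}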
References
1000minds: When alternatives matter: http://www.1000minds.com/Show.aspx?page=47
Barron, F.H., Barrett, B.E. (1996), Decision quality using ranked attribute weights, Management
   Science, 42(11), 1515-1523.
Belton, V., Stewart, T.J. (2001), Multiple criteria decision analysis: an integrated approach, Kluwer
   Academic, Boston.
Edwards, W. (1977), How to use multi-attribute utility measurement for social decision making,
   IEEE Transactions on Systems, Man and Cybernetics, 7, 326-340.
Edwards, W., Barron, F.H. (1994), SMARTS and SMARTER: improved simple methods for
   multiattribute utility measurement, Organizational Behavior and Human Decision Processes,
   60, 306-325.
Ferligoj, A., Hlebec, V. (1999), Evaluation of social network measurement instruments, Social
   Networks, 21, 111-130.
Mustajoki, J., Hämäläinen, R.P., Salo, A. (2005), Decision support by interval SMART/SWING –
   incorporating imprecision in the SMART and SWING methods, Decision Sciences, 36(2),
   317-339.
Phillips, L.D., Phillips, M. (1993), Facilitated work groups: theory and practice, Journal of the
   Operational Research Society, 44(6), 533-549.
Saaty, T.L. (1990), How to make a decision: the Analytic Hierarchy Process, European Journal
   of Operational Research, 48(1), 9-26.
Saaty, T.L. (1994), Fundamentals of the Analytic Hierarchy Process, RWS Publications,
   Pittsburgh.
Von Winterfeldt, D., Edwards, W. (1986), Decision analysis and behavioural research, Cambridge
   University Press.

				