					BAMS 517 – 2011
Decision Analysis – III
Utility

Martin L. Puterman
UBC Sauder School of Business
Winter Term 2011

                            1
A curious gamble
   Gamble A: Flip a coin – Heads win $1; Tails lose $1
   Gamble B: Flip a coin – Heads win $1 million; Tails lose $1 million
   The expected value for each is $0.
   Which would you choose?




                                                      2
Another curious gamble
   Consider the following game: flip a coin until you get a head.
   Payoff – head the first time $2, head the second time $4, head the third time
    $8, …
   What is the expected value of this game?

                                 E[X] = Σ_{n=1}^∞ 2^n (1/2)^n = Σ_{n=1}^∞ 1 = ∞

   This is called the St. Petersburg Paradox
      The paradox is named after Daniel Bernoulli's presentation of the problem and his
       solution, published in 1738 in the Commentaries of the Imperial Academy of
       Science of Saint Petersburg. However, the problem was invented by Daniel's
       cousin Nicolas Bernoulli, who first stated it in a letter to Pierre Raymond de
       Montmort on 9 September 1713.
      Of it, Daniel Bernoulli said
       “The determination of the value of an item must not be based on the price, but
       rather on the utility it yields…. There is no doubt that a gain of one
       thousand ducats is more significant to the pauper than to a rich man though both
       gain the same amount.”
      Gabriel Cramer, a Swiss mathematician, said:
       “mathematicians estimate money in proportion to its quantity, and men of good
       sense in proportion to the usage that they may make of it”.
                                                                                    3
Basic idea
   To avoid these situations we can work backwards through a
    decision tree by replacing each gamble by our personal
    assessment of its value
   Shortcomings of this approach
       It is tedious
       May not be accurate
       May be inconsistent
       Sometimes difficult; especially if gamble has many possible
        outcomes.

   Another approach – replace outcomes by their utility.

                                                                      4
When is utility useful?
   Monetary outcomes
       To systematically incorporate personal attitudes towards
        consequences of decisions and risk
            Large losses may be catastrophic
       To evaluate complex gambles systematically
            Win $207 with probability .17 and lose $114 with probability .83.
   To evaluate decisions involving non-monetary outcomes
       Health outcomes – years of life vs. quality
   To evaluate decisions with multiple dimensions to
    outcomes
       Wealth, happiness, …




                                                                                 5
What is utility? Some preliminary definitions
  Let {x, y, z, w} be possible outcomes of a decision
  problem. Then
 1. x > y means "x is preferred to y" (also known as strict
    preference)
 2. x ~ y means "x is viewed indifferently relative to y"
 3. x >~ y means "x is either preferred or viewed indifferently
    relative to y" (also known as weak preference)
 4. (x,p,y) means a gamble (an uncertain outcome, or a lottery)
    in which outcome x will be received with probability p, and
    outcome y will be received with probability 1-p.

 Example: x is $150, y is a ticket to a Canucks game; z is a 50-
    50 lottery which either wins $200 or a 20 year old PC and w
    is one week of good health


                                                              6
What is utility? Some consistency Axioms for outcomes
and preferences

For any outcomes x,y,z,w and numbers p,q between 0 and 1
    the following hold:
   1. Weak ordering.
       (a) x >~ x. (Reflexivity)
       (b) x >~ y or y >~ x. (Connectivity)
       (c) x >~ y and y >~ z imply x >~ z. (Transitivity)
   2. Reducibility. ((x,p,y),q,y) ~ (x,pq,y).
   3. Independence. If (x,p,z) ~ (y,p,z), then (x,p,w) ~ (y,p,w).
   4. Betweenness. If x > y, then x > (x,p,y) > y.
   5. Solvability. If x > y > z, then there exists p such that y ~ (x,p,z).

   Example: x is $150, y is a ticket to a Canucks playoff game, z is a
      50-50 lottery which either wins $200 or a 20 year old PC, and w
      is one week of good health
                                                                              7
    What is utility? A key theoretical result and interpretation
   Theorem: (J. von Neumann and O. Morgenstern, Theory of Games and Economic Behavior,
    1944; also attributable to Ramsey (1931))
    If Axioms 1-5 are satisfied for all outcomes, then there exists a real-valued utility
    function u(s) defined on outcomes, with the properties that:
        1.   x > y if and only if u(x) > u(y), and x ~ y if and only if u(x) = u(y);
        2.   u(x,p,y) = pu(x)+(1-p)u(y);
        3.   u is unique up to order preserving affine transformations; that is, if v is any other
             function satisfying 1. and 2. then there exist real numbers b, and a>0, such that v(x)
             = au(x)+b.
   This means that if we believe the consistency axioms:
       There is a function u(s) that captures our preferences for outcomes; the higher the utility
        the more we prefer the outcome.
       The utility of a lottery is the expected utility.
       The relative differences between utilities measure our relative preferences for
        outcomes
   The consequence of all this is that in a decision problem
       We value all outcomes by their utility
       We replace a lottery by its expected utility
       We choose decisions which maximize utility                                               8
  Using Utility: U(x) = ((x+2000)/10000)^0.5

   Choice A: a gamble that pays $5000 (utility .8367) with probability .5 or
    loses $1000 (utility .3163) with probability .5
   Choice B: receive $1500 for certain (utility .5916)

 Choice A has EMV = $2000 and Expected Utility = .5764; so under EMV you prefer A and under Expected Utility you prefer B.

 At what value for Choice B would you be indifferent? $1322.87, which is the certainty equivalent of A.
                                                                                                                     9
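A minimal Python sketch (the helper names and structure are mine, not from the slides) that reproduces the numbers above: the expected utility of Choice A and its certainty equivalent under U(x) = ((x+2000)/10000)^0.5.

```python
# Utility function from the slide: U(x) = ((x + 2000) / 10000) ** 0.5
def u(x):
    return ((x + 2000) / 10000) ** 0.5

def u_inverse(utility):
    # Invert U analytically: x = 10000 * utility**2 - 2000
    return 10000 * utility ** 2 - 2000

# Choice A: a 50-50 gamble between $5000 and -$1000
outcomes = [5000, -1000]
probs = [0.5, 0.5]

emv = sum(p * x for p, x in zip(probs, outcomes))                  # $2000
expected_utility = sum(p * u(x) for p, x in zip(probs, outcomes))  # about .5764
certainty_equivalent = u_inverse(expected_utility)                 # about $1322.87

print(emv, expected_utility, certainty_equivalent)
print(u(1500))   # utility of Choice B, about .5916
```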
Alberta Exploration Revisited
   Suppose we revisit Alberta Exploration but use
    expected utility instead of expected monetary
    value as our optimality criterion
     How might our decisions change?
   For simplicity we assume a power utility
    function u(x) = x^a normalized so that 0 ≤ u(x) ≤ 1.
     That is, u(x) = ((x-b)/c)^a
   See Alberta Utility



                                                     10
St. Petersburg Paradox Revisited
   Bernoulli suggested using a utility function u(x) = ln(x)
     Thus the utility of receiving 2^n is ln(2^n) = n ln(2), so that
      E(u(X)) = Σ_{n=1}^∞ (1/2)^n n ln(2) = 2 ln(2) = 1.3863
     Hence the certainty equivalent of this gamble would
      be e^1.3863 = 4.




                                                              11
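A short check of Bernoulli's calculation, truncating the infinite series after 100 terms (the cutoff is an arbitrary choice; the series converges quickly):

```python
import math

# Bernoulli's log utility applied to the St. Petersburg game: the payoff 2^n has
# utility ln(2^n) = n*ln(2) and is received with probability (1/2)^n.
expected_utility = sum((0.5 ** n) * n * math.log(2) for n in range(1, 101))
certainty_equivalent = math.exp(expected_utility)   # invert u(x) = ln(x)

print(expected_utility)      # converges to 2*ln(2) ≈ 1.3863
print(certainty_equivalent)  # converges to 4
```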
Assessing a decision maker's utility

 [Decision tree: Choice A is a reference lottery that yields the Best outcome with
  probability p and the Worst outcome with probability 1-p; Choice B yields a
  specified intermediate outcome for certain.]

                                               12
Assessing a decision maker's utility
   We can use a similar approach to that used for
    assessing probabilities.
       Take a reference lottery for which the utilities of the two outcomes
        are known and the probability of receiving the better one is p.
            We start by assigning utility of 1 to best outcome B and 0 to worst
             outcome W.
            We compare it to a decision with no randomness and a fixed payoff
             C.
       Two approaches;
            Fix p and vary C
            Fix C and vary p (using spinner)
       In the first case we find the value C that has utility p
       In the second case we find the utility p of receiving C for sure.
       Which is easier?

                                                                               13
Assessing a decision maker's utility
   Iterative approach;
       Set p = .5 and find C1 so that u(C1)=.5
       Now consider a 50-50 lottery between C1 and B. Assign utility
        .75 to the equivalent value C2.
       Now consider a 50-50 lottery between C1 and W. Assign utility
        .25 to the equivalent value C3.
       Continue this process
   Check for consistency and whether it agrees with our
    attitude towards risk.
   Plot and smooth utility curve.
   Considerable behavioral research on doing this to avoid
    bias
   Exercise
       Find your utility curve for a decision problem with outcomes
        ranging from -5000 to 20000.

                                                                        14
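A rough sketch of turning assessed points into a usable utility curve. The assessed values below are hypothetical answers to the 50-50 questions above, and the piecewise-linear interpolation (via numpy) stands in for the hand-smoothing described on this slide:

```python
import numpy as np

# Hypothetical answers to the 50-50 assessment questions, for a problem with
# outcomes ranging from -5000 (utility 0) to 20000 (utility 1):
# C1 ~ 50-50 between W and B, C2 ~ 50-50 between C1 and B, C3 ~ 50-50 between W and C1
assessed = {
    -5000: 0.00,   # worst outcome W
    -1500: 0.25,   # C3
     4000: 0.50,   # C1
    10000: 0.75,   # C2
    20000: 1.00,   # best outcome B
}

xs = np.array(sorted(assessed))
us = np.array([assessed[x] for x in xs])

# Consistency check: more money should never have lower utility
assert np.all(np.diff(us) > 0)

def u(x):
    # Piecewise-linear interpolation through the assessed points; in practice
    # you would plot these and smooth them by hand or fit a parametric curve
    return np.interp(x, xs, us)

print(u(0), u(7500))
```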
Another interpretation of utility
   Assume u(s) is normalized so its value falls between 0
    and 1.
   Then suppose we have a fixed outcome with utility q.
   This fixed outcome is equivalent to a lottery with
    outcomes B with probability q and W with probability 1-q.
   Thus we can reduce a decision problem to one in which
    every endpoint is a lottery between B and W.
       Of course our preference is for lotteries with higher probabilities
        of receiving B.
   Thus we can think of the utility as the probability of
    receiving the best outcome.


                                                                          15
Shape of Utility Functions
   A utility function u(•) is concave if for any a satisfying 0 ≤ a ≤ 1
                       u(ax + (1-a)y) ≥ au(x) + (1-a)u(y)
    for any x and y in S.
        A twice differentiable concave utility function has a non-positive second derivative (u'' ≤ 0)
        Examples
              u(x) = ln(x)
              u(x) = √x
              u(x) = 1 - e^(-ax)
   An individual who possesses a strictly concave utility function is said to be risk
    averse.
        By strictly concave, I mean it is not linear on any interval.
        A key property of a strictly concave utility function is decreasing marginal utility. That means
         as x increases the slope, or value of an additional unit, u'(x), decreases.
   An individual with a linear utility function is said to be risk neutral.
        This is appropriate when you
            Repeat the gamble many times
            Consequences are small (utility curve is approximately linear over small increments)

   An individual with a convex utility function is risk seeking.
        Example u(x) = x^2
   Comments
        Often we normalize the utility function so that its maximum value is 1 and its minimum value
         is 0.
        In decision problems we often define our utility for our total wealth, not just the outcome
         of the gamble.
                                                                                                          16
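A quick numerical illustration (the gamble and the coefficient in the exponential utility are arbitrary choices, not from the slides) that the concave utilities listed above give an expected utility below the utility of the expected value, while the convex example does the opposite:

```python
import math

# A 50-50 gamble between $100 and $900 (illustrative numbers)
outcomes, probs = [100, 900], [0.5, 0.5]
ev = sum(p * x for p, x in zip(probs, outcomes))   # expected value = 500

utilities = {
    "ln(x)":          math.log,
    "sqrt(x)":        math.sqrt,
    "1 - e^(-x/500)": lambda x: 1 - math.exp(-x / 500),
    "x^2 (convex)":   lambda x: x ** 2,
}

for name, u in utilities.items():
    eu = sum(p * u(x) for p, x in zip(probs, outcomes))   # expected utility
    # Concave u: eu <= u(ev) (risk averse); convex u: eu >= u(ev) (risk seeking)
    print(f"{name:16s}  E[u(X)] = {eu:10.4f}   u(E[X]) = {u(ev):10.4f}")
```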
    Decreasing marginal utility
   It is not always reasonable to assume that you will value an additional $100
    equally regardless of how much money you already have
       You would probably value an extra $100 less if you were very wealthy than
        if you were very poor
   This phenomenon of valuing an additional dollar less, the more money you
    already have, is known as decreasing marginal utility
       Decreasing marginal utility implies that the utility function has a concave
        shape – see graph
       The horizontal lines in the graph show the additional (marginal) utility of
        each additional $20
       Note that the derivative of u is decreasing as the monetary value increases
   [Figure: a concave utility curve plotted against money, with the marginal
    utilities u($20), u($40), u($60) and u($80) shrinking for each additional $20.]
   Would this argument apply to an extra day of life?

                                                                          17
    Risk-aversion: a graphical view
    Consider the following options:
     1.   A chance p of $50 and (1-p) of $10
     2.   A sure outcome of $[50p + 10(1-p)]
    Line AB represents the expected utility of option 1 for any value of p
         Note this line lies under the curve
         Its equation is pu(50) + (1-p)u(10)
    For p = 0.5, the expected monetary value of either option is $30
         The blue line represents the expected utility of option 1
         The red line to point C represents the utility of option 2
    The sure outcome will have a higher expected utility whenever the
     decision maker is risk averse.
    [Figure: a concave utility curve over money from $0 to $100; the chord from
     u(10) to u(50) lies below the curve, and point C marks the utility of the
     sure $30.]

                                                                      18
    Certainty equivalence of gambles
    Consider now a gamble (decision) that can either result in having a dollars
     with probability p, or b dollars with probability 1-p, where a > b
    A is the expected utility of the gamble
    C is a dollar value such that you would be indifferent between having $C for
     certain and accepting the gamble
         The utility of C for certain is equal to the expected utility of the gamble
         u(C) = pu(a) + (1-p)u(b)
    The value C is called the certainty equivalent value of the gamble
    C = u^(-1)(A)
         Certainty equivalent = inverse of u applied to expected utility of gamble
    B = pa + (1-p)b is the expected monetary outcome of the gamble, which
     exceeds C.
         Why?
    [Figure: a concave utility curve over money; A is marked on the utility axis,
     and the certainty equivalent C and the expected monetary value B lie between
     b and a on the money axis, with C < B.]

                                              19
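A small sketch of computing C = u^(-1)(A) numerically when the inverse of u is not available in closed form; the bisection helper and the sample gamble are illustrative assumptions, not from the slides:

```python
import math

def certainty_equivalent(u, outcomes, probs, lo, hi, tol=1e-6):
    """Find C with u(C) = expected utility, by bisection on [lo, hi].

    Assumes u is increasing and that lo, hi bracket the certainty equivalent."""
    target = sum(p * u(x) for p, x in zip(probs, outcomes))  # A, the expected utility
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if u(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Example: the 50-50 gamble between $10 and $50 from the previous slide, with u(x) = sqrt(x)
ce = certainty_equivalent(math.sqrt, [10, 50], [0.5, 0.5], lo=10, hi=50)
print(ce)   # about 26.18, below the expected monetary value B = 30
```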
    An important decision problem; Insurance
    Suppose you want to buy insurance against the theft of your car,
     which you value at $20,000. You currently have $40,000 in total
     assets
    You assess the likelihood of your car being stolen in the next year at
     0.5%
        If you don't buy insurance, you will have $40,000 with 99.5% probability,
         and $20,000 with probability 0.5%. The expected monetary outcome is
         therefore .995($40,000) + .005($20,000) = $39,900
        If you buy insurance at a price c, then you will have $40,000 – c,
         regardless of whether your car is stolen or not
    You would buy insurance at price c if
          u(40,000 – c) > 0.995u(40,000) + .005u(20,000)
    If you are risk-neutral, you would be willing to spend up to $40,000 –
     $39,900 = $100 on car insurance
    If you are risk-averse, would you be willing to spend more or less than
     $100?

                                                                                20
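To make the comparison concrete, here is a sketch of the premium calculation under an assumed utility u(x) = sqrt(x) (the slide leaves the utility function unspecified); it solves u(40,000 - c) = 0.995u(40,000) + 0.005u(20,000):

```python
import math

# Assumed utility for illustration: u(x) = sqrt(x)
u = math.sqrt

wealth, loss, p_theft = 40_000, 20_000, 0.005

# Expected utility of going uninsured
eu_no_insurance = (1 - p_theft) * u(wealth) + p_theft * u(wealth - loss)

# Largest premium c with u(wealth - c) >= eu_no_insurance; sqrt inverts directly
c_max = wealth - eu_no_insurance ** 2

print(eu_no_insurance)   # about 199.71
print(c_max)             # about $117 -- more than the $100 a risk-neutral buyer would pay
```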
    Insurance
    We plot the insurance-buying problem in the graph at right, assuming
     risk-aversion (not to scale)
    B represents the expected monetary value of not buying insurance = $39,900
    A represents the expected utility of the decision not to buy insurance
     = 0.995u(40,000) + .005u(20,000)
    C represents the monetary value at which u(C) = A (certainty equivalence)
    You would pay up to $40,000 – C for insurance with this utility fn.
    You would pay up to $40,000 – B if you were risk neutral
    If you are risk-averse, you would be willing to pay more than the expected
     value for insurance
    [Figure: a concave utility curve over money from $20K to $40K, with A on the
     utility axis and C and B marked on the money axis, C < B < $40K.]
    Why would anyone sell you insurance?

                                                                        21
    Risk premiums
    The previous examples show that for a risk-averse decision-maker, the value of
     a gamble (its certainty equivalent) will be less than its 'fair value' (expected
     monetary outcome)
        This follows from the concavity of the utility function
        Risk-averse people would refuse a 'fair' bet
    The difference between the 'fair value' and the certainty equivalent value of the
     gamble (between points B and C in the previous graphs) is known as the risk
     premium
    If the risk premium is negative you would be risk seeking
        If this holds for all outcomes you would have a convex utility function.
        Recall roulette and horse race betting are “unfair games” and have negative risk
         premiums
    Note people may have a utility function that is risk seeking for gambles with
     positive payoffs and risk averse for negative consequences.
    What would such a utility function look like?

                                                                                        22
    Risk premiums
    The size of the risk premium can be used to measure the degree of
     risk-aversion to a particular gamble
    Consider the two utility curves pictured at left. The red curve is
     'more concave' in the region between the two outcomes and
     corresponds to a more risk-averse decision maker
    The point D is the certainty-equivalent value for this more
     risk-averse decision-maker
    The point C is the certainty-equivalent for the less risk averse
     decision maker
    The risk premium B – D is greater than B – C for the more risk-averse
     decision maker.
    [Figure: two concave utility curves between outcomes $b and $a; the
     certainty equivalents D and C and the expected value B are marked on
     the money axis, with D < C < B.]
    (Note: The two utility curves coincide at the points a and b in this
     graph only for convenience of exposition.)
                                                                        23
    The value of a gamble
    Suppose that your utility function for total wealth $x is u(x) = x^(1/2)
        For simplicity, we won't bother here to normalize the utilities between 0 and 1
    Consider a gamble that pays $5000 with probability 0.5 and -$5000 with
     probability 0.5 (i.e., you have to pay $5000 if you lose)
    The following table shows that your risk premium depends on your wealth:

      Initial Wealth x   Exp. Utility U                          Certainty Equiv.   Risk Premium
                         = .5(x+5000)^(1/2) + .5(x-5000)^(1/2)   CE = U^2           EV - CE = x - CE
      $ 5000             50                                      $ 2500             $2500
      $15000             120.7                                   $14571             $ 429
      $25000             157.3                                   $24747             $ 253
    Thus with this utility function, the more money you already have, the less
     money you would require in compensation for accepting the gamble
        That is, your risk premium is decreasing with your wealth.

                                                                                           24
The value of a gamble
   The fact that the risk premium can vary with initial wealth means
    that we can't consider the value of a gamble in isolation from our
    total wealth (this is often called framing).
   It may be inconvenient if we constantly have to refer to our
    current level of wealth in order to judge the value of a gamble
   There are certain utility functions, however, for which the value of a
    gamble can be judged apart from total wealth
   For example, consider the utility function u(x) = 1 – exp(-x/10000).
    The following table lists the risk premiums at several levels of initial wealth:

      Initial Wealth x   Exp. Utility U                          Certainty Equiv.        Risk Premium
                         = .5(1 - e^(-(x+5000)/10000))           CE = -10000 ln(1 - U)   EMV - CE = x - CE
                           + .5(1 - e^(-(x-5000)/10000))
      $ 5000             .316                                    $ 3799                  $1201
      $15000             .748                                    $13799                  $1201
      $25000             .907                                    $23799                  $1201
                                                                                    25
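A short script that regenerates both risk-premium tables (the square-root utility from the previous slide and the exponential utility above), making the contrast between wealth-dependent and constant risk premiums explicit:

```python
import math

def risk_premium(u, u_inv, wealth, stake=5000, p=0.5):
    # Expected utility of total wealth after the +/- stake gamble
    eu = p * u(wealth + stake) + (1 - p) * u(wealth - stake)
    ce = u_inv(eu)                 # certainty equivalent of total wealth
    return wealth - ce             # EMV of the gamble is 0, so premium = x - CE

sqrt_u  = (math.sqrt, lambda U: U ** 2)
expon_u = (lambda x: 1 - math.exp(-x / 10000),
           lambda U: -10000 * math.log(1 - U))

for wealth in (5_000, 15_000, 25_000):
    print(wealth,
          round(risk_premium(*sqrt_u, wealth)),    # 2500, 429, 253: depends on wealth
          round(risk_premium(*expon_u, wealth)))   # 1201 every time: delta property
```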
    Utility functions with the delta property
    Utility functions for which the risk premium of a gamble does not
     depend on the initial level of wealth are said to have the delta
     property
    It can be shown that all such utility functions are either of the form
     (up to a linear scaling)
                  u(x) = x, or
                  u(x) = 1 – exp(-cx),
     for some c > 0
    The parameter c is known as the risk aversion constant: the higher the
     value of c, the more risk-averse one is (see graph at right)
    Risk-neutral utility functions have the delta property
    Is this a desirable property of a utility function?
    [Figure: exponential utility curves over total wealth from 0 to 30,000
     for c = 1/5000, c = 1/10000 and c = 1/20000; the larger c, the more
     concave (more risk-averse) the curve.]

                                                                        26
Example: the jellybean game
   Suppose half of the jellybeans in a jar are red, and your utility
    function for total wealth $x is u(x) = 1 – exp(-x/10). You win $10 if
    you draw a red jellybean and $0 otherwise
   The fair value of this game is $5 – if you were risk-neutral, you
    would pay up to this amount to play the game
   We now compute the certain equivalent of the game. Because the
    utility function has the delta property, you can ignore total wealth –
    just assume x = 0.
   Expected utility is then U = .5(1 – e^(-1)) + .5(1 – e^0) = .316
   The certain equivalent is then u^(-1)(.316) = -10·ln(1 - .316) = 3.80
   You would pay up to $3.80 to play the game (regardless of your
    current wealth)
   Using utilities that have the delta property is convenient. You can
    use such a utility function if you assume that your aversion to risk is
    constant, regardless of your current wealth

                                                                        27
More on risk aversion
   The linear (risk-neutral) and exponential utility functions we have
    discussed so far are convenient functions to use
       Both of these functions presume that your level of risk aversion is constant,
        regardless of your level of wealth
            This is what it means to have the delta-property
            Another measure of the risk-aversion associated with a utility function u(•) at a level
             of wealth x is the value r(x) = –u’’(x) / u’(x).
                  For linear utility functions, r(x) = 0.
                  For u(x) = a – bexp(-kx), r(x) = k
             Constant risk-aversion is not always a reasonable assumption – as
              you grow richer, you may become more tolerant of risk – your risk-
              aversion should decrease
   Other commonly used utility functions are of the form u(x) = x^k, for 0 < k <
    1, or u(x) = ln(x)
       The risk-aversion associated with these utility functions decreases as wealth
        grows
       For example for u(x) = ln(x), r(x) = 1/x.
   Ultimately, for any major decision, you will have to think hard about how
    much you value the outcomes – you don't want to assume a utility function
    merely because it has a convenient form
                                                                                                   28
Measuring risk aversion
    We say that utility curve u is more risk averse
     than utility curve v if for all outcomes x and y and
     every lottery (x,p,y) the certainty equivalent under u is
     less than the certainty equivalent under v.
    Theorem (Pratt)
     This is true if and only if –u''(x)/u'(x) > –v''(x)/v'(x)
     for all outcomes x.
    Proof
    The quantity –u''(x)/u'(x) is called the Arrow-Pratt
     measure of absolute risk aversion
    Example: u(x) = x, v(x) = x^2 for x > 0.
                                                           29
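A small numerical check of Pratt's characterization for the example above, using an arbitrary illustrative lottery; the certainty equivalents are inverted in closed form:

```python
# Compare u(x) = x and v(x) = x**2 on x > 0 for a 50-50 lottery
def expected(f, outcomes, probs):
    return sum(p * f(x) for p, x in zip(probs, outcomes))

outcomes, probs = [4.0, 16.0], [0.5, 0.5]   # an arbitrary illustrative lottery

# Certainty equivalents, inverting each utility explicitly
ce_u = expected(lambda x: x, outcomes, probs)               # linear: CE = EV = 10
ce_v = expected(lambda x: x ** 2, outcomes, probs) ** 0.5   # convex: CE = sqrt(E[x^2]) ≈ 11.66

print(ce_u < ce_v)   # True: u is more risk averse than v, as Pratt's theorem predicts

# Arrow-Pratt measures: r_u(x) = 0 for u(x) = x; r_v(x) = -2/(2x) = -1/x for v(x) = x^2
x = 10.0
print(0.0 > -1.0 / x)   # r_u(x) > r_v(x) for all x > 0
```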
    Expected value of perfect information under utility

   If your utility function has the delta property, then the difference in the
    certain equivalents of these decisions represents the amount that you
    should be willing to pay to acquire information
        If you are risk-neutral, then this is simply the difference in the expected
         monetary value of the two decisions
   If your utility function does not have the delta property, then computing
    how much you should be willing to pay to acquire information is more
    difficult. The following approach will work.
        Suppose you must pay x for perfect information.
        If you pay x, then each endpoint on the perfect information decision tree is reduced by x.
        Then vary x so that the expected utility of the perfect information decision tree equals
         that of the original decision tree (without the information).




                                                                                       30
Buying vs. selling
   Suppose there is a lottery such that with
    probability .5 you get 1000 and with probability
    .5 you lose 200.
   Would you pay the same price to buy it as you
    would to sell it?
   Yes, if you use expected value
   Maybe, if you are risk averse
     Yes, if your utility has the delta property
     No otherwise; e.g., u(x) = sqrt(500 + x)

   What if you are risk seeking?
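A sketch of the buying-versus-selling comparison for this lottery under u(x) = sqrt(500 + x), assuming initial wealth 0 and the standard definitions: the selling price s solves u(s) = E[u(X)], and the buying price b solves E[u(X - b)] = u(0). Both choices are assumptions for illustration:

```python
import math

u = lambda x: math.sqrt(500 + x)
outcomes, probs = [1000, -200], [0.5, 0.5]   # the lottery from this slide

def eu(extra):
    # Expected utility of holding the lottery after a cash adjustment `extra`
    return sum(p * u(extra + x) for p, x in zip(probs, outcomes))

def solve(f, lo, hi, tol=1e-7):
    # Bisection for the root of f, assuming f is increasing and changes sign on [lo, hi]
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return (lo + hi) / 2

selling_price = solve(lambda s: u(s) - eu(0.0), 0, 1000)   # about $285
buying_price  = solve(lambda b: u(0) - eu(-b), 0, 300)     # about $220

print(round(selling_price, 2), round(buying_price, 2))  # different: no delta property here
```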
Summary so far
   Utility provides a way of valuing outcomes that takes
    your risk attitude into account.
   To use it in the context of decision analysis replace all
    endpoints by their utilities and do backward induction as
    in the expected value case.
   Utility functions differ between decision makers.
   You can assess a decision maker's utility function by
    asking him/her to evaluate gambles
   The certainty equivalent gives the monetary value of a
    decision problem.
   For risk averse decision makers, the utility function is
    concave and the risk premium is positive.
   But behavioral studies suggest that utility theory might
    not describe how people really behave.
    We will not cover assessing multi-attribute utilities or
    utilities for health outcomes.

				