Partially specified probabilities (PSP)

Ehud Lehrer
Tel Aviv University
Tel Aviv, Israel

www.tau.ac.il/∼lehrer

RUD2007, Tel Aviv
[Overview diagram: A-A (vN-M) is the fully Bayesian benchmark; partially-specified probabilities, given either by the probabilities of events in a sub-algebra or by the expectations of random variables, are partially Bayesian; the related models are Choquet EUM (Schmeidler 1989) and Multiple prior (Gilboa-Schmeidler 1989).]
                    Ellsberg urn
• An urn contains 90 balls of three colors

• 30 are Red and the other 60 are either White or Black

• One ball is randomly drawn from the urn

• A decision maker (DM) is asked to choose between two
lotteries:

• X: obtain $100 if Red.
  Y: obtain $100 if White

• W: obtain $100 if Red or Black.
  Z: obtain $100 if White or Black

• If you prefer X over Y and Z over W, you are not an expected utility maximizer
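A quick check of this last claim (a minimal sketch in Python, not part of the original slides): under any single prior with P(Red) = 1/3, the comparisons X vs. Y and W vs. Z are both decided by the sign of P(Red) - P(White), so the pattern X preferred to Y together with Z preferred to W cannot arise for an expected utility maximizer.

    # Sketch: no single prior rationalizes X > Y together with Z > W,
    # because both comparisons reduce to P(Red) versus P(White).
    def expected_payoffs(p_red, p_white, p_black, prize=100):
        X = prize * p_red                      # pays if Red
        Y = prize * p_white                    # pays if White
        W = prize * (p_red + p_black)          # pays if Red or Black
        Z = prize * (p_white + p_black)        # pays if White or Black
        return X, Y, W, Z

    for k in range(0, 61):                     # scan priors with 60 non-Red balls
        p_w, p_b = k / 90.0, (60 - k) / 90.0
        X, Y, W, Z = expected_payoffs(30 / 90.0, p_w, p_b)
        assert not (X > Y and Z > W)
    print("No single prior yields X > Y together with Z > W")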
             Ellsberg urn - cont.
• The DM has to make decisions while having only partial information about the real distribution

• From the information she obtained, the DM can infer that P(Red) = 1/3 and P(White or Black) = 2/3
                   Dynamic urn
• Suppose that on day 2 the number of White balls is
doubled

• What is now the distribution of colors?

• No non-trivial event has a known probability anymore: the prob. of Red is no longer 1/3, and that of White or Black is no longer 2/3

• So, after all, what does the DM know?

• The DM knows the expectation of some, but not all, random variables
Dynamic urn - cont.

• Consider the random variable (r.v.) X = [1(R); 1/6(W); 0(B)]

• We'll see that E(X) = 1/3

• Denote by ni the number of color-i balls on day one: nR = 30, and nW, nB satisfy nW + nB = 60, so nR/(nR + nW + nB) = 30/90 = 1/3

• On day 2 there are 2nW White balls

• E(X) = (1·nR + (1/6)·2nW + 0·nB)/(nR + 2nW + nB) = (nR + (1/3)nW)/(nR + nW + nB + nW) = 1/3
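A quick numerical check of this identity (a sketch in plain Python, not part of the original slides): for every admissible day-1 composition of the urn, the day-2 expectation of X equals 1/3.

    # Sketch: E(X) = 1/3 on day 2 for every admissible urn composition.
    # X pays 1 on Red, 1/6 on White, 0 on Black; on day 2 the White balls are doubled.
    from fractions import Fraction

    n_R = 30
    for n_W in range(0, 61):                      # n_W + n_B = 60
        n_B = 60 - n_W
        total_day2 = n_R + 2 * n_W + n_B          # 90 + n_W balls on day 2
        exp_X = (n_R + Fraction(1, 6) * 2 * n_W) / total_day2
        assert exp_X == Fraction(1, 3), (n_W, exp_X)
    print("E(X) = 1/3 on day 2, whatever n_W is")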
Dynamic urn - conclusion

• On day 2 the expectation of two random variables is known:

• X = [1(R); 1/6(W); 0(B)] (expectation = 1/3) and [1(R); 1(W); 1(B)] (expectation = 1)

• The expectation of any r.v. in the algebra they generate is also known
Noisy signals

• There are three states of nature: x, y and z

• There are two possible payoffs, 5 and 7, which depend stochastically on the state:

  x: [1(5); 0(7)]    y: [1/2(5); 1/2(7)]    z: [0(5); 1(7)]

• Suppose the situation is repeated many times and the distribution over x, y and z is uniform and i.i.d.

• The DM observes the payoff 5 half of the time and the payoff 7 half of the time

• Asymptotically, all the DM knows is that the expectation of [1(x), 0(y), -1(z)] is 0

• In an interactive model where players get only noisy signals about the others, players face a partially-specified probability (PSP)
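To see why only this one expectation is pinned down, one can scan the distributions over {x, y, z} that reproduce the observed 50/50 payoff frequencies. A minimal sketch (plain Python, not from the slides):

    # Sketch: every distribution over (x, y, z) that generates payoff 5 with
    # frequency 1/2 satisfies P(x) = P(z), i.e. E[1(x), 0(y), -1(z)] = 0,
    # while P(x) itself is not identified.
    from fractions import Fraction

    consistent = []
    for i in range(0, 101):                 # P(x) = i/100
        for j in range(0, 101 - i):         # P(y) = j/100, P(z) = the rest
            px, py = Fraction(i, 100), Fraction(j, 100)
            pz = 1 - px - py
            if px + Fraction(1, 2) * py == Fraction(1, 2):   # payoff 5 seen half the time
                consistent.append((px, py, pz))

    assert all(px == pz for px, py, pz in consistent)        # the identified moment
    print(min(p[0] for p in consistent), max(p[0] for p in consistent))  # 0 and 1/2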
       Example by Machina (2007)
• An urn contains 20 balls of four different colors a,b,c, and d.
10 are either a or b and the other 10 are either c or d.

• A decision maker (DM) chooses an act, then a ball is
randomly drawn and a reward (utility) is given. The following
table summarizes the rewards related to four acts.
                           a     b     c    d
                     f1    0    200   100   100
                     f2    0    100   200   100
                     f3   100   200   100   0
                     f4   100   100   200   0


• Notice: f3 is a mirror image of f2, and f4 of f1

• We expect (as long as symmetry is kept) that if f2 ≻ f1 then f3 ≻ f4, but this is inconsistent with Choquet EUM
Example by Machina (2007) - cont.

                           a     b     c    d
                     f1    0    200   100   100
                     f2    0    100   200   100
                     f3   100   200   100   0
                     f4   100   100   200   0

• It is known that P(a, b) = P(c, d) = 1/2

• Equivalently, X1 = [1, 1, 0, 0] with E(X1) = 1/2, and X2 = [0, 0, 1, 1] with E(X2) = 1/2

• Belief: P(b, c) is positive. Since there are 20 balls, it implies E([0, 1, 1, 0]) ≥ 1/20

• Let's assume that E([0, 1, 1, 0]) = 1/20

• Now the expectations of X1, X2 and [0, 1, 1, 0] are known
Partially-specified probability (PSP)

• A partially-specified probability is a pair (P, Y)

• P – a probability

• Y – a set of random variables

• The DM is informed only of E_P(Y), Y ∈ Y (the expectation of Y under P)
How one can use PSP?

• Let (P, Y) be a PSP (partially-specified probability)

• Let X be a random variable

• Define the expectation of X w.r.t. (P, Y) by

  E_(P,Y)(X) = max{ Σ_{Y∈Y} α_Y E_P(Y) : Σ_{Y∈Y} α_Y Y ≤ X and α_Y ∈ R }
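When S is finite this maximum is a finite linear program. Below is a minimal sketch (Python with scipy; the function and variable names are mine, not from the slides) that computes E_(P,Y)(X) by maximizing Σ α_Y E_P(Y) subject to Σ α_Y Y ≤ X pointwise:

    # Sketch: the PSP expectation as an LP.
    #   max  sum_j alpha_j * E_P(Y_j)   s.t.  sum_j alpha_j * Y_j(s) <= X(s) for all s,
    # with unrestricted alpha_j (linprog minimizes, so the objective is negated).
    import numpy as np
    from scipy.optimize import linprog

    def psp_expectation(X, Ys, EYs):
        """X: payoff vector over states; Ys: list of specified random variables
        (vectors over the same states); EYs: their known expectations under P."""
        res = linprog(c=-np.array(EYs, dtype=float),
                      A_ub=np.array(Ys).T,              # row s, column j: Y_j(s)
                      b_ub=np.array(X, dtype=float),
                      bounds=[(None, None)] * len(Ys),  # alpha_j is free
                      method="highs")
        if not res.success:    # e.g. infeasible if no combination of the Y's lies below X
            return None
        return -res.fun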
Evaluating acts with PSP

• Suppose that (P, Y) is a PSP over S (the state space)

• and u is an affine utility function defined over ∆(L)

• Note that if f is an act, then u ◦ f is a random variable defined over S

• A PSP and u induce a complete order over acts. Let f and g be acts; then

  f ≽ g iff E_(P,Y)(u ◦ f) ≥ E_(P,Y)(u ◦ g)
Back to Machina’s Example

                           a     b     c    d
                     f1    0    200   100   100
                     f2    0    100   200   100
                     f3   100   200   100   0
                     f4   100   100   200   0

• Recall that E(X1) = E(X2) = 1/2 and E(X3) = 1/20, where X3 = [0, 1, 1, 0]. This defines a PSP (P, Y)

• f1 corresponds to Y = [0, 200, 100, 100] and f2 to Z = [0, 100, 200, 100] (u is the identity – the rewards are in utile terms)

• Z = [0, 100, 100, 0] + [0, 0, 100, 100], so E(Z) = 100·E(X3) + 100·E(X2) = 100/20 + 100/2 = 55

• The best approximation of Y from below by linear combinations of X1, X2 and X3 is 100X2, whose expectation is 50

• Thus E_(P,Y)(Z) > E_(P,Y)(Y), which implies f2 ≻ f1

• For similar reasons, f3 ≻ f4
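These values can be reproduced with the LP sketched above. The following self-contained check (Python with scipy; my own code, not from the slides) evaluates all four acts under the specified expectations 1/2, 1/2 and 1/20:

    # Sketch: PSP values of Machina's four acts (states ordered a, b, c, d; u = identity).
    import numpy as np
    from scipy.optimize import linprog

    Ys  = [[1, 1, 0, 0], [0, 0, 1, 1], [0, 1, 1, 0]]   # X1, X2, X3
    EYs = [0.5, 0.5, 0.05]                              # their specified expectations

    def psp_value(act):
        res = linprog(c=-np.array(EYs), A_ub=np.array(Ys).T, b_ub=np.array(act, float),
                      bounds=[(None, None)] * len(Ys), method="highs")
        return -res.fun

    acts = {"f1": [0, 200, 100, 100], "f2": [0, 100, 200, 100],
            "f3": [100, 200, 100, 0], "f4": [100, 100, 200, 0]}
    values = {name: psp_value(a) for name, a in acts.items()}
    print(values)                                       # f1: 50, f2: 55, f3: 55, f4: 50
    assert values["f2"] > values["f1"] and values["f3"] > values["f4"]

The ranking f2 ≻ f1 and f3 ≻ f4 matches the symmetry argument that motivated the example.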
Axiomatization

• Pretty much like Anscombe-Aumann, with a different version of the independence axiom

• Definition: An act f is fat free (FaF) if, for every act g,

  f(s) ≽ g(s) for every s ∈ S,

  with at least one strict preference, implies f ≻ g

• Definition: An act f is strongly fat free (SFaF) if for every constant act c and every α ∈ (0, 1), αf + (1 − α)c is fat free

• SFaF independence: if f and g are acts such that f ≻ g and h is SFaF, then αf + (1 − α)h ≻ αg + (1 − α)h for every α ∈ (0, 1]
Relation to Choquet EUM (Schmeidler 1989)

• If in a PSP Y consists of the indicators of the events in a sub-algebra, then a convex capacity can be defined

• On convex capacities the concave integral (Lehrer, 2005) coincides with the Choquet integral

• Thus, when Y consists of the indicators of the events in a sub-algebra, the PSP model is strictly between A-A and Choquet EUM
Relation to the Multiple Prior Model (Gilboa-Schmeidler 1989)

• From duality: estimating the expectation of a random variable from below by members of Y is equivalent to minimizing with respect to the probability distributions that agree with P on Y

• Thus, (P, Y) induces a set of probability distributions that are consistent with the available information. The DM minimizes over these distributions

• This set is an affine set of distributions (the intersection of an affine space with the simplex of distributions)

• The PSP model is strictly between A-A and the Multiple Prior model

• The PSP model is information-based: the set of priors is determined by the information structure and the real distribution

• This fact is essential when considering interactive models. Players play actual strategies; the beliefs are determined by the information players get about them
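On the Machina example this duality can be checked numerically: minimizing E_q(X) over the priors q that agree with P on Y returns the same values (55 and 50) as the maximization over linear combinations used above. A sketch (Python with scipy; my own code, not from the slides):

    # Sketch: the multiple-prior (dual) form of the PSP expectation on Machina's urn.
    # Minimize q.X over probability vectors q with q.X1 = 1/2, q.X2 = 1/2, q.X3 = 1/20.
    import numpy as np
    from scipy.optimize import linprog

    Ys  = np.array([[1, 1, 0, 0], [0, 0, 1, 1], [0, 1, 1, 0]], dtype=float)
    EYs = np.array([0.5, 0.5, 0.05])

    def lower_expectation(X):
        # q >= 0 together with q.X1 + q.X2 = 1 already forces q to be a probability vector
        res = linprog(c=np.array(X, float), A_eq=Ys, b_eq=EYs,
                      bounds=[(0, None)] * 4, method="highs")
        return res.fun

    print(lower_expectation([0, 100, 200, 100]))   # 55.0, as in the primal computation
    print(lower_expectation([0, 200, 100, 100]))   # 50.0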
Non-cooperative games

• A game consists of:

• N – a finite set of players

• Ai – a finite set of player i's actions

• ui – player i's utility function, ui : ×Ai → R
     Partially-specified equilibrium
• Each player plays a pure or a mixed strategy

• Each player obtains partial information about other
players’ strategies

• Each player maximizes her payoff against the worst
(uncoordinated/independent) strategy consistent with her
information




                           Example
• Consider the coordination game
                                L    M     R
                           T   3,3   0,0   0,0
                          C    0,0   2,2   0,0
                           B   0,0   0,0   1,1


• There are three equilibria: a. p = (2/5, 3/5, 0), q = (2/5, 3/5, 0); b. p = (0, 0, 1), q = (0, 0, 1); and c. p = (2/11, 3/11, 6/11), q = (2/11, 3/11, 6/11)
Best response

• Suppose that player i is informed only of the expectations of the variables over A−i in Yi

• p1 is a best response to p2 (w.r.t. Y1) if p1 maximizes E_(p2,Y1)(payoff of player 1)

• Definition: (p1, p2) is a partially-specified equilibrium w.r.t. the information Y1 and Y2 if pi is a best response to p−i (w.r.t. Yi)

• In n-player games: pi is a best response to the independent p−i (w.r.t. Yi)
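As a toy illustration of this best-response notion (my own construction, with an information structure I chose for the example, not from the slides): in the coordination game above, suppose player 1 learns only the probability that player 2 plays L, i.e. Y1 contains the indicator of L and the constant 1. Each row's worst-case payoff against the strategies consistent with that information can be computed with a small LP, using the worst-case (multiple-prior) form of the PSP expectation from the earlier slides.

    # Toy illustration: player 1's PSP best response in the 3x3 coordination game
    # when she only learns P(column = L).
    import numpy as np
    from scipy.optimize import linprog

    U1 = np.array([[3, 0, 0],          # rows T, C, B; columns L, M, R
                   [0, 2, 0],
                   [0, 0, 1]], dtype=float)
    q2 = np.array([2/5, 3/5, 0.0])     # player 2's actual mixed strategy
    moments = np.array([[1.0, 0, 0],   # indicator of L
                        [1.0, 1, 1]])  # the constant 1 (q is a distribution)
    moment_values = moments @ q2       # what player 1 is told: [2/5, 1]

    def worst_case_payoff(p1):
        # min over q consistent with the observed moments of the expected payoff p1' U1 q
        res = linprog(c=p1 @ U1, A_eq=moments, b_eq=moment_values,
                      bounds=[(0, None)] * 3, method="highs")
        return res.fun

    for name, p1 in [("T", [1, 0, 0]), ("C", [0, 1, 0]), ("B", [0, 0, 1])]:
        print(name, worst_case_payoff(np.array(p1, dtype=float)))
    # T guarantees 3 * 2/5 = 1.2 while C and B guarantee 0, so T is player 1's
    # best response to q2 under this information structure.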
               Information-based
• The notion of partially-specified equilibrium is
information-based

• The information structure, namely, the information
available to each player, determines the set of equilibria

• No player has a prior belief about other players’
strategies. The information structure is exogenous

• The belief a player has about other players is determined
by the actual strategies, as well as by the partial information
a player obtains about them




PS correlated equilibrium (PSCE)

• A mediator selects (p1, ..., pn) according to a distribution Q

• Players don't know Q

• Each player gets a recommendation to play pi and a partially-specified probability about Q(p−i | pi)

• And he plays a best response
Learning to play PSCE

• Recall the example with the noisy signals

• The game is played repeatedly

• Players observe noisy signals of past actions

• Each player plays a conditional regret-free strategy ("I have no regret for playing p because this is the best I could do in response to the worst strategy of the others that is consistent with the signals I received during the times I was playing p.")

• A recent paper with Eilon Solan ("Learning to play partially-specified equilibrium") shows that conditional regret-free strategies exist and that the empirical frequency of the mixed strategies played converges to a PSCE
[Closing slide: the same overview diagram relating A-A (vN-M, fully Bayesian), partially-specified probabilities (probability of events in a sub-algebra; expectation of random variables – partially Bayesian), Choquet EUM (Schmeidler 1989) and Multiple prior (Gilboa-Schmeidler 1989).]