# Randomized Algorithms


呂學一 (Hsueh-I Lu)
http://www.iis.sinica.edu.tw/~hil/
2004/5/12, Randomized Algorithms, Lecture 10
## Today

- Fingerprinting techniques
  - #3SAT ∈ IP.
- Randomized balanced search tree
## Definition of IP

A pair of interactive (randomized) algorithms P and V forms an interactive proof for a language L if

- [completeness 勿縱, "let no member be rejected"]: ∀x ∈ L, Pr[⟨P,V⟩(x) = accept] ≥ 2/3.
- [soundness 勿枉, "let no non-member be accepted"]: ∀x ∉ L, ∀P*, Pr[⟨P*,V⟩(x) = accept] ≤ 1/3.
- [tractability 效率, "efficiency"]: V runs in polynomial time.
## Comment

- The honest prover P and the malicious prover P* are granted "infinite" computational power.
- That is, it is OK even if the tasks executed by P and P* require exponential time.
## Graph Non-Isomorphism

- Input: G0 and G1.
- Output:
  - accept, if G0 and G1 are non-isomorphic;
  - reject, if G0 and G1 are isomorphic.
## An Interactive Proof

- V1:
  - Select b from {0,1} uniformly at random.
  - Select a random permutation π over {1, 2, …, n}.
  - Send G ≡ π(G_b) to P.
- P1: compute a bit a ∈ {0,1} with G_a ≅ G.
  - Comment: this step may take exponential time, but that is OK.
- V2: if a = b, then accept; otherwise reject.
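One round of this protocol can be simulated directly. The sketch below is ours, not from the lecture: graphs are edge sets on vertices 0..n−1, and the unbounded prover is played by a brute-force isomorphism search over all n! permutations.

```python
import random
from itertools import permutations

def apply_perm(edges, pi):
    """Relabel an undirected edge set {(u, v), ...} through the permutation pi."""
    return {tuple(sorted((pi[u], pi[v]))) for (u, v) in edges}

def isomorphic(g, h, n):
    """Brute-force isomorphism test: the prover is allowed exponential time."""
    return any(apply_perm(g, pi) == h for pi in permutations(range(n)))

def one_round(G0, G1, n, rng):
    b = rng.randrange(2)                   # V: secret random bit
    pi = rng.sample(range(n), n)           # V: random permutation of the vertices
    G = apply_perm((G0, G1)[b], pi)        # V sends G = pi(G_b) to P
    a = 0 if isomorphic(G0, G, n) else 1   # P: which input is G isomorphic to?
    return a == b                          # V accepts iff a = b

# Non-isomorphic pair: a path and a triangle on 3 vertices.
path = {(0, 1), (1, 2)}
tri = {(0, 1), (1, 2), (0, 2)}
rng = random.Random(0)
print(all(one_round(path, tri, 3, rng) for _ in range(50)))  # True: completeness
```

For a non-isomorphic pair, G is isomorphic to exactly one input, so the brute-force prover always recovers b and every round accepts.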
## For non-isomorphic G0, G1

- Since G0 and G1 are non-isomorphic, G is isomorphic to exactly one of them, so the bit a computed by P must equal the bit b.
- Therefore, Pr[⟨P,V⟩(G0,G1) = accept] = 1.
## For isomorphic G0 and G1

- Note that the graph G sent out by V is isomorphic to both G0 and G1.
- Any malicious prover P* cannot tell from G whether b is 0 or 1, so its answer agrees with b with probability at most ½.
- So, Pr[⟨P*,V⟩(G0,G1) = accept] ≤ ½.
## Is this an interactive proof system?

- Tractability? Yes.
- Completeness?
  - If G0 and G1 aren't isomorphic, then Pr[⟨P,V⟩(G0,G1) = accept] = 1.
- Soundness?
  - If G0 ≅ G1, then ∀P*, we have Pr[⟨P*,V⟩(G0,G1) = accept] ≤ ½. How can we make this smaller than 1/3?
## Same process twice

- Modification: run the same process twice.
  - Both rounds "accept" → accept.
  - At most one "accept" → reject.
- Completeness?
  - If G0 and G1 aren't isomorphic, then Pr[⟨P,V⟩(G0,G1) = accept] = 1.
- Soundness?
  - If G0 ≅ G1, then ∀P*, we have Pr[⟨P*,V⟩(G0,G1) = accept] ≤ ¼.
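The same amplification works for any target error. A small arithmetic sketch (the helper name is ours): a cheating prover wins each independent round with probability at most 1/2, so k rounds push the error down to 2^−k.

```python
import math

def repetitions_needed(eps):
    """Rounds of the protocol (accept only if every round accepts) so that a
    cheating prover, winning each round with probability at most 1/2,
    succeeds with probability at most eps."""
    return math.ceil(math.log2(1 / eps))

print(repetitions_needed(1 / 3))  # 2, matching the slide: (1/2)**2 = 1/4 <= 1/3
```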
## Illustration

[Diagram: GNI lies in co-NP; both NP and co-NP are contained in IP ≡ PSPACE.]
## Goal: #3SAT belongs to IP

[Diagram: #3SAT lies in #P; we show that it also lies in IP ≡ PSPACE.]
## #3SAT

- Input
  - A set C of m clauses on n variables, where each clause has at most three literals.
  - For brevity, let us assume that m is polynomial in n.
- Output
  - The number of distinct truth assignments that satisfy C.
## We will give P and V s.t.

- If the number of satisfying truth assignments for C is indeed s, then
  - Pr[⟨P,V⟩(C, s) = accept] = 1.
- Otherwise, for any malicious P*,
  - Pr[⟨P*,V⟩(C, s) = accept] = o(1).
## Trick: Arithmetization

Turning an n-variable clause set into an n-variable polynomial.
## Clause set → polynomial

Let x_i = 1 represent that x_i is assigned true, and x_i = 0 that x_i is assigned false.

Use the polynomial

(1 − (1 − x1)·x2·x3) · (1 − (1 − x1)(1 − x2))

to represent the clause set

{x1, ¬x2, ¬x3}, {x1, x2}.
## Why?

The clause set {x1, ¬x2, ¬x3}, {x1, x2} represents

(x1 ∨ ¬x2 ∨ ¬x3) ∧ (x1 ∨ x2) ≡ (¬(¬x1 ∧ x2 ∧ x3)) ∧ (¬(¬x1 ∧ ¬x2)),

which clearly corresponds to

(1 − (1 − x1)·x2·x3) · (1 − (1 − x1)(1 − x2)).
## Observations

Let Q denote the polynomial corresponding to the input set C of clauses.

- Q is an n-variable polynomial of degree at most 3m.
- Each truth assignment for C corresponds in a 1-1 manner to one of the 2^n binary assignments to the variables of Q.
- Moreover, for any truth assignment x for C (or, equivalently, binary assignment x for Q), we have

  Q(x) = 1 ⟺ C(x) = true,
  Q(x) = 0 ⟺ C(x) = false.
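This correspondence can be checked mechanically for the running example. A minimal sketch, assuming our own clause encoding (a literal is a pair of variable index and negation flag):

```python
from itertools import product

# The example clause set {x1, ~x2, ~x3}, {x1, x2}.
CLAUSES = [[(1, False), (2, True), (3, True)],
           [(1, False), (2, False)]]

def C(xs):
    """Boolean semantics: every clause contains at least one satisfied literal."""
    return all(any(xs[i] != neg for (i, neg) in cl) for cl in CLAUSES)

def Q(xs):
    """Arithmetization (1-(1-x1)x2x3)(1-(1-x1)(1-x2)), computed clause by clause."""
    val = 1
    for cl in CLAUSES:
        falsified = 1                         # product that is 0 iff some literal holds
        for (i, neg) in cl:
            falsified *= xs[i] if neg else 1 - xs[i]
        val *= 1 - falsified
    return val

# On 0/1 inputs, Q(x) = 1 exactly when C(x) is true, and 0 otherwise.
for bits in product((0, 1), repeat=3):
    xs = dict(zip((1, 2, 3), bits))
    assert Q(xs) == (1 if C(xs) else 0)
print(sum(Q(dict(zip((1, 2, 3), bits))) for bits in product((0, 1), repeat=3)))  # 5
```

The final sum over all 2^3 binary assignments is exactly the number of satisfying truth assignments of the example clause set.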
## Therefore, …

The number of satisfying assignments for C is exactly

#Q ≡ Σ_{x1=0}^{1} Σ_{x2=0}^{1} ⋯ Σ_{xn=0}^{1} Q(x1, x2, …, xn).
## The goal

Although #Q is a summation of 2^n terms, we give an interactive proof system for proving that its value is equal to some integer s.

How is this possible? Use the fingerprinting technique!
## In other words,

we have reduced the problem of counting satisfying truth assignments to a polynomial-evaluation problem.
## The new problem

- Input:
  - An n-variable polynomial Q of degree at most 3m.
  - An integer s.
- Output:
  - Determine whether #Q = s.
## The new goal

Construct a pair of P and V such that the following holds.

- If #Q = s, then Pr[⟨P,V⟩(Q, s) = accept] = 1.
- If #Q ≠ s, then for any malicious P*, we have Pr[⟨P*,V⟩(Q, s) = accept] ≤ 3mn/2^n = o(1).
## A helping polynomial

Let ♭Q be the polynomial defined as follows:

♭Q(x1) = Σ_{x2=0}^{1} Σ_{x3=0}^{1} ⋯ Σ_{xn=0}^{1} Q(x1, x2, …, xn).

♭Q is a one-variable polynomial. V does not know the explicit representation of ♭Q, but clearly

#Q = ♭Q(0) + ♭Q(1).
## How does the helping polynomial ♭Q help V to catch a cheating prover P*?
## V asks P to prove #Q = s

- P sends the explicit representation R(x1) of ♭Q to V.
- V evaluates R(0) and R(1) in polynomial time. If R(0) + R(1) ≠ s, then V rejects. Otherwise,
  - V randomly chooses a number r from 1 to 2^n.
  - V recursively asks P to prove #Q̂ = ŝ, where Q̂ is the (n − 1)-variable polynomial Q(r, x2, x3, …, xn) and ŝ = R(r).
  - If the requested proof for #Q̂ = ŝ is accepted by V, then V accepts the proof for #Q = s; otherwise, V rejects.
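The whole recursion can be sketched end to end on the running example. This is our simplification, not the lecture's protocol verbatim: the honest prover is an oracle that evaluates the partial sum ♭Q directly instead of sending an explicit coefficient list, and there is no cheating prover in the simulation.

```python
import random
from itertools import product

# Clause set {x1, ~x2, ~x3}, {x1, x2}; a literal is (variable index, negated).
CLAUSES = [[(0, False), (1, True), (2, True)], [(0, False), (1, False)]]
N = 3

def Q(xs):
    """Arithmetized clause set: a polynomial we may evaluate at arbitrary integers."""
    val = 1
    for cl in CLAUSES:
        falsified = 1
        for (i, neg) in cl:
            falsified *= xs[i] if neg else 1 - xs[i]
        val *= 1 - falsified
    return val

def flat_Q(prefix):
    """The helping polynomial: sum Q over all 0/1 settings of the unfixed variables."""
    free = N - len(prefix)
    return sum(Q(list(prefix) + list(tail)) for tail in product((0, 1), repeat=free))

def verify(s, prefix=(), rng=random.Random(0)):
    """One round per variable: check R(0) + R(1) = s, then recurse at a random r."""
    if len(prefix) == N:
        return Q(list(prefix)) == s          # base case: direct evaluation
    R = lambda r: flat_Q(prefix + (r,))      # honest prover's answer polynomial
    if R(0) + R(1) != s:
        return False
    r = rng.randrange(1, 2 ** N + 1)         # fingerprint: random challenge point
    return verify(R(r), prefix + (r,), rng)

print(flat_Q(()))    # 5 = #Q, the number of satisfying assignments
print(verify(5))     # True: the honest prover convinces V
print(verify(4))     # False: a wrong count fails the R(0) + R(1) check immediately
```

With an honest oracle, a wrong claim s ≠ #Q is caught deterministically in the first round; the fingerprinting bound is only needed against a prover who sends a false polynomial R.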
## The idea behind

If #Q = s, then the honest prover P can come up with the explicit representation R of ♭Q. Then, no matter what r is chosen by V, we have

#Q̂ = ♭Q(r) = R(r) = ŝ.

Therefore, the honest proof for #Q̂ = ŝ recursively provided by P will definitely be accepted by V.
## What if P* is malicious?

If #Q ≠ s, consider the case that a malicious prover P* tries to mislead V into accepting the cheating proof.

V relies on the helping polynomial ♭Q to catch a cheating proof.
## Fingerprinting

Suppose that P* comes up with a polynomial R ≠ ♭Q with R(0) + R(1) = s. (Since #Q ≠ s, either R ≠ ♭Q or R(0) + R(1) ≠ s.)

Then V is likely to find out the inconsistency between R and ♭Q using the fingerprinting technique. Specifically, since R − ♭Q is a nonzero polynomial of degree at most 3m, it has at most 3m roots, so

Pr[ ŝ ≡ R(r) = ♭Q(r) ≡ #Q̂ | R ≠ ♭Q ] ≤ 3m/2^n.
## The error probability

We have that

Pr[V accepts the proof for #Q = s | #Q ≠ s]
  ≤ 3m/2^n + Pr[V accepts the proof for #Q̂ = ŝ | #Q̂ ≠ ŝ].

Therefore, unrolling the recursion over all n variables,

Pr[V accepts the proof for #Q = s | #Q ≠ s] ≤ 3mn/2^n = o(1).
## Part 2: TREAP

Randomized balanced search tree
## Search tree

[Diagram: a node with key k; keys < k in its left subtree, keys > k in its right subtree.]
## Deterministic balanced search trees

- Maintain depth O(log n) via rotations:
  - AVL trees
  - Red-black trees
  - …
- Drawbacks:
  - Usually complicated to implement.
  - May need Ω(log n) rotations for some updates (insertion/deletion).
## TREAP = TRee + hEAP

- [Aragon and Seidel, FOCS 1989].
  - Randomized balanced search tree.
  - Expected depth = O(log n).
  - Expected number of rotations per update < 2.
  - Easy to implement.
## Node x = (key(x), priority(x))

- For brevity, we assume that all nodes have distinct keys and priorities.
- For any two nodes x and y, we use
  - x < y to denote key(x) < key(y); and
  - x ◄ y to denote priority(x) < priority(y).
- A tree of nodes is a TREAP if each node x satisfies the
  - search tree property
  - heap order property
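The two properties translate directly into a short implementation. A minimal sketch, assuming random real-valued priorities and our own helper names; insertion is the usual BST leaf insertion followed by rotations that restore the heap order:

```python
import random

class Node:
    def __init__(self, key):
        self.key = key
        self.priority = random.random()    # random priority, drawn once per node
        self.left = self.right = None

def rotate_right(y):                       # y.left becomes the new subtree root
    x = y.left
    y.left, x.right = x.right, y
    return x

def rotate_left(x):                        # x.right becomes the new subtree root
    y = x.right
    x.right, y.left = y.left, x
    return y

def insert(root, key):
    """BST insertion at a leaf, then rotate up while the heap order is violated."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
        if root.left.priority > root.priority:   # child outranks parent: rotate
            root = rotate_right(root)
    else:
        root.right = insert(root.right, key)
        if root.right.priority > root.priority:
            root = rotate_left(root)
    return root

def keys_inorder(t):
    return [] if t is None else keys_inorder(t.left) + [t.key] + keys_inorder(t.right)

random.seed(0)
t = None
for k in random.sample(range(100), 100):
    t = insert(t, k)
print(keys_inorder(t) == list(range(100)))   # True: search tree property holds
```

Rotations preserve the inorder sequence, so the search tree property survives every rebalancing step while the heap order is being repaired.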
## Search tree property

[Diagram: keys in x's left subtree are < x; keys in its right subtree are > x.]
## Heap order property

[Diagram: every node in x's subtree is ◄ x, i.e., every descendant has lower priority than x.]
## Observation

- For any set S of nodes with distinct keys and priorities, there is a unique TREAP for S.
- Why?
## A constructive proof

[Diagram: choose the node x with the highest priority to be the root; the nodes with keys < x form its left subtree and those with keys > x its right subtree; recurse on both sides.]
## For example,

[Diagram: an example treap built from sample (key, priority) pairs.]
## The idea

- Each node is associated with a randomly chosen priority.
- Then it becomes hard for the adversary to construct a set of keys that yields an unbalanced treap on average.
## Goal

- We will show that, as long as the priorities are assigned so that the order among all the priorities is uniformly random (independent of the order among their keys),
  - the expected depth of each node is O(log n), and
  - the expected number of rotations required per update is less than 2.
- Math tool: Mulmuley's games [FOCS 1988].
## Game A(n)

- A set C of cards 1, 2, …, n.
- Sample the cards in C without replacement until all cards are drawn.
- Let X denote the number of cards that, when sampled, are larger than all previously sampled cards.
- Example: for the order 1 3 2 5 4 6 7, X = 5 (the cards 1, 3, 5, 6, 7).
- Define A(n) = E[X].
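Game A(n) is easy to simulate, and the empirical mean already hints at the H_n answer proved below. The helper names and simulation parameters are ours:

```python
import random

def count_records(cards):
    """Number of cards that are larger than every card sampled before them."""
    best, x = 0, 0
    for c in cards:
        if c > best:
            best, x = c, x + 1
    return x

def estimate_A(n, trials=20000, seed=1):
    """Monte Carlo estimate of A(n) = E[X] over uniformly random orders."""
    rng = random.Random(seed)
    cards = list(range(1, n + 1))
    total = 0
    for _ in range(trials):
        rng.shuffle(cards)
        total += count_records(cards)
    return total / trials

H = lambda n: sum(1 / i for i in range(1, n + 1))
print(count_records([1, 3, 2, 5, 4, 6, 7]))  # 5, as in the slide's example
print(abs(estimate_A(7) - H(7)) < 0.05)      # True: close to H(7) ~ 2.593
```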
## Game B(n,m)

- A set C of regular cards 1, 2, …, n.
- A set D of m trigger cards.
- Sample the union of C and D without replacement until all cards in C are drawn.
- X is counted as in Game A(n), except that the counting starts only after one of the m trigger cards has been sampled.
- Define B(n,m) = E[X].
## Mulmuley's Theorem

A(n) = H_n (= 1 + 1/2 + 1/3 + ⋯ + 1/n),
B(n,m) = H_n + H_m − H_{m+n}.
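The B(n,m) formula can also be sanity-checked by simulation. In the sketch below (our encoding), trigger cards are `None`; counting starts after the first trigger, but every regular card raises the running maximum even before that, since counted cards must beat all previously sampled cards:

```python
import random

def play_B(n, m, rng):
    """One play of Game B(n, m); returns the count X."""
    deck = list(range(1, n + 1)) + [None] * m
    rng.shuffle(deck)
    best, x, triggered = 0, 0, False
    for c in deck:
        if c is None:
            triggered = True          # counting is now enabled
        elif c > best:
            best = c                  # a new record among regular cards
            if triggered:
                x += 1                # counted only after the first trigger
    return x

def estimate_B(n, m, trials=20000, seed=2):
    rng = random.Random(seed)
    return sum(play_B(n, m, rng) for _ in range(trials)) / trials

H = lambda n: sum(1 / i for i in range(1, n + 1))
closed_form = H(7) + H(2) - H(9)
print(round(closed_form, 3))                       # 1.264
print(abs(estimate_B(7, 2) - closed_form) < 0.05)  # True
```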
## Sketch of the proof (1)

If the first card is i, then the expected value of X is exactly 1 + A(n − i). (Why? The first card is always counted, and afterwards only the n − i larger cards can be counted; their relative order is again uniformly random.)

We thus have the following recurrence relation, where the 1/n term is the case i = n with X = 1 + A(0) = 1:

A(n) = 1/n + Σ_{i=1}^{n−1} (1 + A(n − i))/n
     = 1/n + Σ_{i=1}^{n−1} (1 + A(i))/n.

It is a good exercise to show that A(n) = H_n is the solution to the above recurrence relation. (See Concrete Mathematics for help.)
## Sketch of the proof (2)

If the first card is a regular card i, then the expected value of X is exactly B(n − i, m). If the first card is a trigger card, then the expected value of X is exactly A(n) = H_n.

We thus have the following recurrence relation:

B(n, m) = m·H_n/(m + n) + (1/(m + n)) · Σ_{i=1}^{n−1} B(n − i, m)
        = m·H_n/(m + n) + (1/(m + n)) · Σ_{i=1}^{n−1} B(i, m).

Good exercise: show that B(n, m) = H_n + H_m − H_{m+n}.
## Theorem 1

Let x be a node in an n-node treap T. If the key of x is the k-th smallest, then the expected depth of x in T is precisely

A(k) + A(n − k + 1) − 1,

which is O(log n) by Mulmuley's Theorem.
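Theorem 1 can be sanity-checked numerically by building the unique treap over random priorities (using the constructive proof of uniqueness: the maximum-priority node is the root). The construction, parameters, and the convention that the root has depth 1 are ours:

```python
import random

def depths(items, d=1, out=None):
    """Depth of every key in the unique treap on (key, priority) pairs."""
    if out is None:
        out = {}
    if items:
        root = max(items, key=lambda kp: kp[1])          # highest priority on top
        out[root[0]] = d
        depths([kp for kp in items if kp[0] < root[0]], d + 1, out)
        depths([kp for kp in items if kp[0] > root[0]], d + 1, out)
    return out

def avg_depth(n, k, trials=3000, seed=3):
    """Average depth of the node with the k-th smallest key, over random priorities."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        items = [(key, rng.random()) for key in range(1, n + 1)]
        total += depths(items)[k]
    return total / trials

H = lambda n: sum(1 / i for i in range(1, n + 1))
n, k = 15, 8
predicted = H(k) + H(n - k + 1) - 1            # A(k) + A(n - k + 1) - 1
print(round(predicted, 3))                     # 4.436
print(abs(avg_depth(n, k) - predicted) < 0.2)  # True
```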
## Proof strategy

[Diagram: every ancestor of x in T, including x itself, has key ≤ key(x) or key ≥ key(x). The expected number of ancestors with keys ≤ key(x) is A(k); by the symmetric argument, the expected number with keys ≥ key(x) is A(n − k + 1). Node x is counted in both, hence the −1.]
## Constructing the treap

- We construct T by letting T be empty initially and then inserting the n nodes into T in decreasing order of their priority.
  - Note that the insertion order does not matter, since the treap is unique.
  - This simplifies the analysis, since each inserted node becomes a leaf, so no rotation is needed throughout the whole process.
## What are these blue + white nodes?

[Diagram: the ancestors of x with keys ≤ key(x): the proper ancestors are shown in blue, and x itself in white.]
## The key observation

- Node y is blue or white with respect to node x if and only if each node z (including x) with
  - key(y) < key(z) ≤ key(x)
  satisfies
  - priority(y) > priority(z).
- The statement about the white node is obvious.
- Focus on the case that y is blue.
## Node z has to be in the right subtree of y

[Diagram: since y is an ancestor of x with key(y) < key(z) ≤ key(x), both z and x lie in the right subtree of y; by the heap order, priority(y) > priority(z).]
## The key observation (continued)

- Node y is blue or white with respect to node x if and only if each node z (including x) with key(y) < key(z) ≤ key(x) satisfies priority(y) > priority(z).
- Therefore, y is exactly a node with key(y) ≤ key(x) such that, when y is inserted, it has the largest key among all such nodes already in the treap.
## Theorem 1 (restated)

Let x be a node in an n-node treap T. If the key of x is the k-th smallest, then the expected depth of x in T is precisely A(k) + A(n − k + 1) − 1 = O(log n) by Mulmuley's Theorem.
## Left spine and right spine

[Diagram: R(x) denotes the right spine of x's left subtree, and L(x) the left spine of x's right subtree.]
## Exercise

- Deleting node x from treap T requires |R(x)| + |L(x)| rotations.
- Let T be the resulting treap after inserting x. Then the number of rotations performed in that insertion is exactly |R(x)| + |L(x)|.
## Theorem 2

- Let x be a node in an n-node treap T. If the key of x is the k-th smallest, then
  - the expected value of |R(x)| is 1 − 1/k, and
  - the expected value of |L(x)| is 1 − 1/(n − k + 1).
- Therefore, by the exercise, we know that the expected number of rotations for a deletion is less than 2.
## Proof strategy

- Since the two statements are symmetric to each other, it suffices to prove that E[|R(x)|] is exactly 1 − 1/k.
- We prove this by showing that the nodes in R(x) correspond to the cards counted in Mulmuley's Game B(k − 1, 1), i.e., k − 1 regular cards and 1 trigger card.
  - B(k − 1, 1) = H_{k−1} + H_1 − H_k = 1 − 1/k.
## The key observation

- Node y has the largest key that is less than key(x).
- Each blue node z is like the blue node in Theorem 1 with respect to y. The difference is that here they also have to have lower priority than x.
## Therefore…

- The blue nodes inserted during the process of constructing treap T are like the counted cards when drawing k − 1 regular cards (i.e., those with keys less than that of x) together with one trigger card (i.e., x itself).
## Theorem 2 (restated)

Let x be a node in an n-node treap T whose key is the k-th smallest. Then E[|R(x)|] = 1 − 1/k and E[|L(x)|] = 1 − 1/(n − k + 1), so the expected number of rotations for a deletion is less than 2.
