Submodular Set Function Maximization via the Multilinear Relaxation & Dependent Rounding

Submodular Set Function Maximization
via the Multilinear Relaxation & Dependent Rounding



                    Chandra Chekuri
           Univ. of Illinois, Urbana-Champaign
    Max weight independent set

• N a finite ground set

• w : N → R+ weights on N

•    I ⊆ 2^N is an independence family of subsets
    • I is downward closed: A ∈ I and B ⊆ A ⇒ B ∈ I


                        max w(S)
                        s.t. S ∈ I
     Independence families

• stable sets in graphs

• matchings in graphs and hypergraphs

• matroids and intersection of matroids

• packing problems: feasible {0,1} solutions to
      Ax ≤ b where A is a non-negative matrix
    Max weight independent set
                           max w(S)
                           s.t. S ∈ I
•   max weight stable set in graphs
•   max weight matchings
•   max weight independent set in a matroid
•   max weight independent set in intersection of two matroids
•   max profit knapsack
•   etc
                   This talk
                      max f(S)
                      s.t. S ∈ I
f is a non-negative submodular set function on N

Motivation:
  • several applications
  • mathematical interest
 Submodular Set Functions

A function f : 2^N → R+ is submodular if

f(A+j) – f(A) ≥ f(B+j) – f(B) for all A ⊆ B, j ∈ N\B

[Figure: nested sets A ⊆ B with an element j outside B]

f(A+j) – f(A) ≥ f(A+i+j) – f(A+i) for all A ⊆ N, i, j ∈ N\A
Equivalently: f(A) + f(B) ≥ f(A∪B) + f(A∩B) for all A, B ⊆ N
    Cut functions in graphs

• G=(V,E) undirected graph

• f : 2^V → R+ where f(S) = |δ(S)|


[Figure: a vertex set S and the edges δ(S) leaving it]
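For concreteness, a minimal Python sketch of this cut function on a small example graph (the graph and names below are illustrative, not from the talk):

    # cut(S) = |δ(S)|: number of edges with exactly one endpoint in S
    edges = [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]

    def cut(S):
        return sum(1 for (u, v) in edges if (u in S) != (v in S))

    print(cut({1, 2}))   # edges (2,3), (4,1), (1,3) cross the cut, so this prints 3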
      Coverage in Set Systems

• X1, X2, ..., Xn subsets of set U

• f : 2^{1,2,...,n} → R+ where f(A) = |∪_{i ∈ A} X_i|


[Figure: sets X1, ..., X5 over a universe U, illustrating the coverage f(A) = |∪_{i ∈ A} X_i|]
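A small Python sketch of the coverage function, together with a spot-check of the diminishing-returns inequality from the definition slide; the concrete sets X_i below are made up for illustration:

    # f(A) = |union of X_i over i in A|, and a check of
    # f(A+j) - f(A) >= f(B+j) - f(B) for A ⊆ B and j outside B
    X = {1: {"a", "b", "c"}, 2: {"b", "d"}, 3: {"d", "e"}, 4: {"a", "e", "f"}}

    def f_cover(A):
        return len(set().union(*(X[i] for i in A))) if A else 0

    A, B, j = {1}, {1, 2, 3}, 4            # A ⊆ B, j not in B
    assert f_cover(A | {j}) - f_cover(A) >= f_cover(B | {j}) - f_cover(B)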
 Submodular Set Functions

• Non-negative submodular set functions

  f(A) ≥ 0 ∀ A ⇒ f(A) + f(B) ≥ f(A∪B) (sub-additive)

• Monotone submodular set functions

  f(∅) = 0 and f(A) ≤ f(B) for all A ⊆ B

• Symmetric submodular set functions
  f(A) = f(N\A) for all A
            Other examples

• Cut functions in hypergraphs (symmetric non-negative)

• Cut functions in directed graphs (non-negative)

• Rank functions of matroids (monotone)

• Generalizations of coverage in set systems (monotone)

• Entropy/mutual information of a set of random variables

• ...
         Example: Max-Cut
                     max f(S)
                     s.t. S ∈ I

• f is cut function of a given graph G=(V,E)

•   I = 2^V : unconstrained

• NP-Hard
    Example: Max k-Coverage
                      max f(S)
                      s.t. S ∈ I

• X1,X2,...,Xn subsets of U and integer k
• N = {1,2,...,n}
• f is the set coverage function (monotone)
•    I = { A ⊆ N : |A| ≤ k } (cardinality constraint)
• NP-Hard
Approximation Algorithms

A is an approx. alg. for a maximization problem:

•   A runs in polynomial time

•    for all instances I of the problem, A(I) ≥ α·OPT(I)
     α (≤ 1) is the worst-case approximation ratio of A
                 Techniques
                     max f(S)
                     s.t. S ∈ I
f is a non-negative submodular set function on N

• Greedy

• Local Search

• Multilinear relaxation and rounding
  Greedy and Local-Search

[Nemhauser-Wolsey-Fisher’78, Fisher-Nemhauser-Wolsey’78]

• Work well for “combinatorial” constraints: matroids,
  intersection of matroids and generalizations

• Recent work shows applicability to non-monotone
  functions [Feige-Mirrokni-Vondrak'07] [Lee-Mirrokni-Nagarajan-Sviridenko'08]
  [Lee-Sviridenko-Vondrak'09] [Gupta et al.'10]
 Motivation for mathematical
  programming approach
• Quest for optimal results

• Greedy/local search not so easy to adapt for packing
  constraints of the form Ax ≤ b

• Known advantages of geometric and continuous
  optimization methods and the polyhedral approach
   Math. Programming approach

max w(S)                max w·x                 (x_i ∈ [0,1] is the indicator variable for i)

s.t. S ∈ I              s.t. x ∈ P(I)


 Exact algorithm: P(I) = convexhull( {1_S : S ∈ I} )
   Math. Programming approach

max w(S)                max w·x                 Round x* ∈ P(I) to S* ∈ I

s.t. S ∈ I              s.t. x ∈ P(I)

 Exact algorithm: P(I) = convexhull( {1_S : S ∈ I} )

 Approx. algorithm: P(I) ⊇ convexhull( {1_S : S ∈ I} )

 P(I) solvable: can do linear optimization over it
   Math. Programming approach

max f(S)                max F(x)                Round x* ∈ P(I) to S* ∈ I

s.t. S ∈ I              s.t. x ∈ P(I)

 P(I) ⊇ convexhull( {1_S : S ∈ I} ) and solvable
   Math. Programming approach

max f(S)                max F(x)                Round x* ∈ P(I) to S* ∈ I

s.t. S ∈ I              s.t. x ∈ P(I)

   • What is the continuous extension F ?

   • How to optimize with objective F ?

   • How do we round ?
                Some results

[Calinescu-C-Pal-Vondrak’07]+[Vondrak’08]=[CCPV’09]

Theorem: There is a randomized (1-1/e) ≈ 0.632
  approximation for maximizing a monotone f subject
  to any matroid constraint.
[C-Vondrak-Zenklusen’09]

Theorem: (1-1/e-ε)-approximation for monotone f
  subject to a matroid and a constant number of
  packing/knapsack constraints.
  What is special about 1-1/e?

Greedy gives a (1-1/e)-approximation for the problem
max { f(S) | |S| ≤ k } when f is monotone [NWF'78]
  • Obtaining a (1-1/e + ε)-approximation requires
    exponentially many value queries to f [FNW'78]
  • Unless P=NP, no (1-1/e + ε)-approximation for the special
    case of Max k-Coverage [Feige'98]

New results give (1-1/e) for any matroid constraint,
improving on the previous ½. Moreover, the algorithm is
interesting and the techniques have been quite useful.
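For reference, the greedy rule from [NWF'78] mentioned above can be written as a short Python sketch for max { f(S) : |S| ≤ k }; here f is any set-function oracle and ground_set a Python set (the names are illustrative):

    def greedy_cardinality(f, ground_set, k):
        # repeatedly add the element with the largest marginal gain;
        # for monotone submodular f this is a (1-1/e)-approximation [NWF'78]
        S = set()
        for _ in range(min(k, len(ground_set))):
            gains = {i: f(S | {i}) - f(S) for i in ground_set - S}
            S.add(max(gains, key=gains.get))
        return S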
 Submodular Welfare Problem

• n items/goods (N) to be allocated to k players

• each player i has a submodular utility function f_i
  (f_i(A_i) is the utility to player i if A_i is allocated to i)

• Goal: maximize the welfare of the allocation, Σ_i f_i(A_i)



Can be reduced to a single f and a (partition) matroid
constraint, and hence admits a (1-1/e)-approximation
          Some more results

[C-Vondrak-Zenklusen’11]

• Extend approach to non-monotone f

• Rounding framework via contention resolution schemes

• Several results from framework including the ability to
  handle intersection of different types of constraints
   Math. Programming approach

max f(S)                max F(x)                Round x* ∈ P(I) to S* ∈ I

s.t. S ∈ I              s.t. x ∈ P(I)

   • What is the continuous extension F ?

   • How to optimize with objective F ?

   • How do we round ?
  Multilinear extension of f

[CCPV’07] inspired by [Ageev-Sviridenko]

For f : 2^N → R+ define F : [0,1]^N → R+ as

x = (x1, x2, ..., xn) ∈ [0,1]^N

R: random set, include i independently with prob. xi

F(x) = E[ f(R) ] = Σ_{S ⊆ N} f(S) · Π_{i ∈ S} x_i · Π_{i ∈ N\S} (1-x_i)
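Since F(x) = E[f(R)], it can be estimated by sampling R, as noted on the later "Properties of F" slide; a minimal Python sketch (the function name and sample count are illustrative):

    import random

    def estimate_F(f, x, samples=1000):
        # include each i independently with probability x[i]; average f over samples
        n, total = len(x), 0.0
        for _ in range(samples):
            R = {i for i in range(n) if random.random() < x[i]}
            total += f(R)
        return total / samples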
Why multilinear extension?

• Ideally a concave extension to maximize

• Could choose (“standard”) concave closure f+ of f

• Evaluating f+(x) is NP-Hard!
             Properties of F

• F(x) can be evaluated (approximately) by random
  sampling

• F is a smooth submodular function
  • 2F/xixj ≤ 0 for all i,j.
    Recall f(A+j) – f(A) ≥ f(A+i+j) – f(A+i) for all A, i, j

  • F is concave along any non-negative direction vector

  • F/xi ≥ 0 for all i if f is monotone
   Math. Programming approach

max f(S)                max F(x)                Round x* ∈ P(I) to S* ∈ I

s.t. S ∈ I              s.t. x ∈ P(I)

   • What is the continuous extension F ? ✔

   • How to optimize with objective F ?

   • How do we round ?
              Maximizing F

max { F(x) | Σ_i x_i ≤ k, x_i ∈ [0,1] } is NP-Hard
 Approximately maximizing F

[Vondrak’08]

Theorem: For any monotone f, there is a (1-1/e)
  approximation for the problem max { F(x) | x ∈ P }
  where P ⊆ [0,1]^N is any solvable polytope.



Algorithm: Continuous-Greedy
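The slides do not spell the algorithm out, so here is only a rough sketch of the continuous-greedy idea, specialized (as an assumption for illustration) to the cardinality polytope P = { x ∈ [0,1]^N : Σ_i x_i ≤ k }, where the inner linear maximization simply picks the k coordinates with the largest estimated partial derivatives:

    import random

    def continuous_greedy_cardinality(f, n, k, steps=100, samples=100):
        x = [0.0] * n
        for _ in range(steps):
            # sample-average estimate of the partial derivatives
            # dF/dx_i = E[ f(R ∪ {i}) - f(R \ {i}) ] with R drawn from x
            grad = [0.0] * n
            for _ in range(samples):
                R = {i for i in range(n) if random.random() < x[i]}
                for i in range(n):
                    grad[i] += (f(R | {i}) - f(R - {i})) / samples
            # step of size 1/steps toward the best vertex of P for this
            # linear objective: the k coordinates with largest derivatives
            for i in sorted(range(n), key=lambda j: grad[j], reverse=True)[:k]:
                x[i] = min(1.0, x[i] + 1.0 / steps)
        return x  # fractional point aiming for F(x) ≥ (1-1/e)·OPT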
 Approximately maximizing F

[C-Vondrak-Zenklusen’11]

Theorem: For any non-negative f, there is a ¼
  approximation for the problem max { F(x) | x ∈ P }
  where P ⊆ [0,1]^N is any down-closed solvable
  polytope.

Remark: 0.325-approximation can be obtained

Algorithm: Local-Search variants
 Local-Search based algorithm

Problem: max { F(x) | x ∈ P }, P is down-monotone

            x* = a local optimum of F in P

            Q = { z ∈ P | z ≤ 1-x* }

            y* = a local optimum of F in Q

            Output better of x* and y*

Theorem: Above algorithm gives a ¼ approximation.
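A toy Python rendering of the two-phase procedure above, specialized (as an assumption) to the simplest down-monotone polytope P = [0,1]^N, so that a fractional local optimum can be reached by coordinate moves; F is evaluated exactly by enumeration, which keeps n small:

    from itertools import product

    def F_exact(f, n, x):
        # multilinear extension by summing over all 2^n subsets
        total = 0.0
        for bits in product([0, 1], repeat=n):
            p = 1.0
            for i, b in enumerate(bits):
                p *= x[i] if b else 1 - x[i]
            total += p * f({i for i, b in enumerate(bits) if b})
        return total

    def coord_local_opt(f, n, upper):
        # F is linear in each coordinate, so push x_i to the better endpoint of [0, upper[i]]
        x = [0.0] * n
        improved = True
        while improved:
            improved = False
            for i in range(n):
                for v in (0.0, upper[i]):
                    y = x[:]; y[i] = v
                    if F_exact(f, n, y) > F_exact(f, n, x) + 1e-12:
                        x, improved = y, True
        return x

    def two_phase_local_search(f, n):
        x_star = coord_local_opt(f, n, [1.0] * n)                  # local opt in P
        y_star = coord_local_opt(f, n, [1 - xi for xi in x_star])  # local opt in Q = {z ≤ 1-x*}
        return max((x_star, y_star), key=lambda z: F_exact(f, n, z))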
   Math. Programming approach

max f(S)                max F(x)                Round x* ∈ P(I) to S* ∈ I

s.t. S ∈ I              s.t. x ∈ P(I)

   • What is the continuous extension F ? ✔

   • How to optimize with objective F ? ✔

   • How do we round ?
                 Rounding

Rounding and approximation depend on I and P(I)

Two results:

• For matroid polytope a special rounding

• A general approach via contention resolution
  schemes
     Rounding in Matroids

Matroid M = (N, I)

Independence polytope: P(M) = convhull({1_S | S ∈ I}),

given by the following system [Edmonds]:

   Σ_{i ∈ S} x_i ≤ rank_M(S)   ∀ S ⊆ N

  x ∈ [0,1]^N
     Rounding in Matroids

[Calinescu-C-Pal-Vondrak’07]
Theorem: Given any point x in P(M), there is a randomized
  polynomial time algorithm to round x to a vertex x*
  (hence an indep set of M) such that
  • E[x*] = x
  • F(x*) ≥ F(x)


[C-Vondrak-Zenklusen’09]
A different rounding with additional properties and applications.
                  Rounding
         max F(x)              Round x* ∈ P(I) to S* ∈ I

         s.t. x ∈ P(I)

F(x*) = E[f(R)] where R is obtained by independently
rounding each i with probability x*_i

R unlikely to be in I

Obtain R’ µ R s.t. R’ 2 I and E[f(R’)] ¸ c f(R)
                     A simple question?

[Figure: a graph whose edges carry fractional values x_e (0.4, 0.6, 0.3, 0.7, 0.9, 1, ...), a point in the spanning tree polytope]

      • x is a convex combination of spanning trees

      • R: pick each e ∈ E independently with probability x_e

      Question: what is the expected size of a maximal forest
      in R? (n - # of connected components)

Answer: ≥ (1-1/e)(n-1)
           Related question

• x is a convex combination of spanning trees of G

• R: pick each e ∈ E independently with probability x_e

Want a (random) forest R' ⊆ R s.t. for every edge e

       Pr[e ∈ R' | e ∈ R] ≥ c
⇒ E[|R'|] = Σ_e Pr[e ∈ R'] ≥ c·Σ_e x_e = c(n-1), i.e., in expectation R contains a forest of size c(n-1)
Theorem: c = (1-1/e) is achievable & optimal [CVZ’11]

(true for any matroid)
    Contention Resolution Schemes

•    I an independence family on N

• P(I) a relaxation for I and x ∈ P(I)

• R: random set from independent rounding of x

CR scheme for P(I): given x and R, outputs R' ⊆ R s.t.
    1. R' ∈ I
    2. for all i, Pr[i ∈ R' | i ∈ R] ≥ c
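As a toy illustration (not from the talk), consider the simplest constraint I = { S : |S| ≤ 1 } and a point x with Σ_i x_i ≤ 1. The scheme "keep one uniformly random element of R" is a monotone CR scheme; the Python simulation below spot-checks Pr[i ∈ R' | i ∈ R] against the c = 1-1/e factor achievable for matroids mentioned earlier:

    import math, random

    x = [0.4, 0.3, 0.3]                        # a point with sum of x_i <= 1
    kept, appeared = [0] * len(x), [0] * len(x)
    for _ in range(100000):
        R = [i for i in range(len(x)) if random.random() < x[i]]
        Rp = [random.choice(R)] if R else []   # R' ∈ I: at most one element of R
        for i in R:
            appeared[i] += 1
            kept[i] += (i in Rp)

    for i in range(len(x)):
        print(i, kept[i] / max(appeared[i], 1), ">=", round(1 - 1 / math.e, 3))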
Rounding and CR schemes
        max F(x)              Round x* ∈ P(I) to S* ∈ I

        s.t. x ∈ P(I)

Theorem: A monotone CR scheme for P(I) can be used
to round s.t.

       E[f(S*)] ≥ c·F(x*)

Via FKG inequality
                  Remarks

[CVZ’11]

• Several existing rounding schemes are CR schemes

• CR schemes for different constraints can be
  combined for their intersection

• CR schemes through correlation gap and LP duality
   Math. Programming approach

max f(S)                max F(x)                Round x* ∈ P(I) to S* ∈ I

s.t. S ∈ I              s.t. x ∈ P(I)

   Problem reduced to finding a good relaxation P(I) and
   a contention resolution scheme for P(I)
      Concluding Remarks

• Substantial progress on submodular function
  maximization problems in the last few years

• New tools and connections including a general
  framework via the multilinear relaxation

• Increased awareness and more applications

• Several open problems still remain
Thanks!

				