# Lec-Eigenvalue-Applications


A FEW APPLICATIONS

Csci 5304: 21. appl1

◮ Idea is to put order into the web by ranking pages by their importance.
◮ Tells you how important a page is.
◮ Google uses this for searches.
◮ Updated regularly.
◮ Still a lot of mystery in what is in it.

Page-rank - explained

Main point: a page is important if it is pointed to by other important pages.
◮ The importance of your page (its PageRank) is determined by summing the PageRanks of all pages which point to it.
◮ Weighting: if a page points to several other pages, then its weight should be distributed proportionally among them.
◮ Imagine many tokens doing a random walk on this graph:
  • (δ/n) chance to follow one of the n links on a page;
  • what is the chance a token will land on each page?
◮ If www.cs.umn.edu/~boley points to 10 pages including yours, then you will get 1/10 of the credit of my page.
Page-Rank

If T1, ..., Tn point to page A, then

ρ(A) = (1 − δ) + δ [w1 ρ(T1) + w2 ρ(T2) + ... + wn ρ(Tn)],   with wi = 1/c(Ti)

◮ c(Ti) = count of links going out of page Ti, so each Ti passes on an equal share 1/c(Ti) of its rank.
◮ δ is a 'damping' parameter, e.g. δ = 0.85.
◮ This defines a (possibly huge) hyperlink matrix H with entries

hij = 1/c(i) if page i points to page j,   hij = 0 otherwise.

Example: 4 Nodes
A points to B, C, D
B points to C, D
C points to A, D
D points to A, C
What is the H matrix?
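As a sanity check, H for this example can be assembled directly from the link lists. A minimal NumPy sketch (the mapping A, B, C, D → 0..3 is an assumption of the sketch):

```python
import numpy as np

# Pages A, B, C, D mapped to indices 0..3 (an assumption of this sketch).
links = {0: [1, 2, 3],   # A -> B, C, D
         1: [2, 3],      # B -> C, D
         2: [0, 3],      # C -> A, D
         3: [0, 2]}      # D -> A, C

n = 4
H = np.zeros((n, n))
for i, outs in links.items():
    for j in outs:
        H[i, j] = 1.0 / len(outs)   # h_ij = 1/c(i) when i points to j

print(H)   # rows: [0 1/3 1/3 1/3], [0 0 1/2 1/2], [1/2 0 0 1/2], [1/2 0 1/2 0]
```

Each row sums to 1 because a page's outgoing weight is shared equally over its c(i) links.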

A few properties:

◮ Row-sums of H are all equal to 1, so He = e.
◮ The sum of all PageRanks will be one: Σ_{all pages A} ρ(A) = 1.

Algorithm (PageRank)
1. Select an initial row vector v (v > 0, ve = 1)
2. For i = 1 : maxitr
3.     v := (1 − δ) eT/n + δ vH
4. End
◮ This is a row iteration.
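The row iteration can be sketched as follows, assuming the normalized update (1 − δ)eT/n + δvH so that the entries of v keep summing to 1; δ = 0.85 and H come from the 4-node example:

```python
import numpy as np

delta = 0.85                       # damping parameter, as on the slide
n = 4
# Hyperlink matrix of the 4-node example (A->B,C,D; B->C,D; C->A,D; D->A,C).
H = np.array([[0, 1/3, 1/3, 1/3],
              [0, 0,   1/2, 1/2],
              [1/2, 0, 0,   1/2],
              [1/2, 0, 1/2, 0]])

v = np.full(n, 1.0 / n)            # initial row vector: v > 0, sums to 1
for _ in range(100):               # maxitr
    v = (1 - delta) * np.ones(n) / n + delta * (v @ H)

print(v.round(4), v.sum())         # entries stay positive and sum to 1
```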

Properties:

1. v remains ≥ 0 [each step combines non-negative vectors].
2. If the initial v sums to 1, then each v in the sequence sums to one: ve = 1.
3. The more general iteration is of the form

   v := vG,   G = (1 − δ)E + δH,   with E = ezT,

   where z is a probability vector (eT z = 1) [Ex.: z = (1/n) e].
4. This is a variant of the power method.
5. e is a right eigenvector of G associated with λ = 1; we are interested in the corresponding left eigenvector.
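These properties can be checked numerically on the same small example (a sketch with z = e/n and δ = 0.85; G denotes the combined matrix (1 − δ)E + δH):

```python
import numpy as np

delta, n = 0.85, 4
H = np.array([[0, 1/3, 1/3, 1/3],
              [0, 0,   1/2, 1/2],
              [1/2, 0, 0,   1/2],
              [1/2, 0, 1/2, 0]])
e = np.ones(n)
z = e / n                                  # probability vector: e^T z = 1
G = (1 - delta) * np.outer(e, z) + delta * H

print(np.allclose(G @ e, e))               # True: e is a right eigenvector, lambda = 1

# The PageRank vector is the matching left eigenvector of G.
w, V = np.linalg.eig(G.T)
v = np.real(V[:, np.argmax(np.real(w))])
v = v / v.sum()                            # scale so that v e = 1
print(np.allclose(v @ G, v))               # True
```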

Kleinberg’s Hubs and Authorities

◮ Idea is to put order into the web by ranking pages by their degree of "Authority" or "Hubness".
◮ An Authority is a page pointed to by many important pages:
  • Authority weight = sum of hub weights from in-links.
◮ A Hub is a page that points to many important pages:
  • Hub weight = sum of authority weights from out-links.
◮ Source: http://www.cs.cornell.edu/home/kleinber/auth.pdf

Computation of Hubs and Authorities

◮ Simplify the computation by forcing the sum of squares of the weights to be 1.
◮ Authj = xj = Σ_{i : (i,j) ∈ Edges} Hubi
◮ Hubi = yi = Σ_{j : (i,j) ∈ Edges} Authj
◮ Let A = the adjacency matrix: aij = 1 if (i, j) ∈ Edges, 0 otherwise.
◮ Then y = Ax and x = AT y.
◮ Iterating converges to the leading eigenvectors of AT A and AAT.
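A minimal sketch of this iteration on the 4-node example from the PageRank slides (the adjacency matrix below encodes those same links and is an assumption of the sketch; normalization uses the Euclidean norm so the sum of squares of the weights is 1):

```python
import numpy as np

# Adjacency matrix of the 4-node example: a_ij = 1 if (i, j) is an edge
# (A->B,C,D; B->C,D; C->A,D; D->A,C).
A = np.array([[0, 1, 1, 1],
              [0, 0, 1, 1],
              [1, 0, 0, 1],
              [1, 0, 1, 0]], dtype=float)

n = A.shape[0]
x = np.ones(n)                     # authority weights
y = np.ones(n)                     # hub weights
for _ in range(100):
    x = A.T @ y                    # Auth_j = sum of hub weights over in-links
    y = A @ x                      # Hub_i  = sum of authority weights over out-links
    x /= np.linalg.norm(x)         # keep sum of squares equal to 1
    y /= np.linalg.norm(y)

print(x.round(4), y.round(4))
```

The iterates converge (up to sign) to the leading eigenvectors of AT A and AAT, as the slide states.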

Spectral Graph Partitioning

◮ Let N be the incidence matrix: Nij = ±1 if the i-th edge is incident on the j-th vertex.
◮ For example, the undirected graph A↔C,D; B↔D; C↔A; D↔A,B (edges A–C, A–D, B–D) has

N = [  1   0  −1   0
       1   0   0  −1
       0  −1   0   1 ]

yielding the Laplacian = diagonal matrix of degrees − adjacency matrix:

NT N = L = [  2   0  −1  −1
              0   1   0  −1
             −1   0   1   0
             −1  −1   0   2 ]
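The factorization NT N = L can be verified numerically (a sketch; vertex columns ordered A, B, C, D as above):

```python
import numpy as np

# Incidence matrix for edges A-C, A-D, B-D (columns ordered A, B, C, D).
N = np.array([[1,  0, -1,  0],
              [1,  0,  0, -1],
              [0, -1,  0,  1]], dtype=float)

L = N.T @ N                        # graph Laplacian
deg = np.diag([2.0, 1.0, 1.0, 2.0])
adj = np.array([[0, 0, 1, 1],
                [0, 0, 0, 1],
                [1, 0, 0, 0],
                [1, 1, 0, 0]], dtype=float)

print(np.allclose(L, deg - adj))   # True: L = degrees - adjacency
print(L @ np.ones(4))              # the all-ones vector is in the null space
```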
Normalized Graph Cuts

Mark a partitioning of the vertices, e.g. n− = 1, n+ = 3, encoded as

v = [1, 1, 1, −3]T / √3 = [n−, n−, n−, −n+]T / √(n− n+).

Then

vT L v / vT v = |cut| · (1/n− + 1/n+)

and

vT e = 0, where e = [1, 1, 1, 1]T is an eigenvector of L.

◮ Approximately minimize this with an eigenvector of L (eigenvalue, then eigenvector components):

−1.E−15   ( .500000   .500000   .500000   .500000)   ← 'null' vector
 .585786   (−.27059   .653281  −.65328   .270598)   ← 'Fiedler' vector
 2.00000   ( .500000  −.50000  −.50000   .500000)
 3.41421   ( .653281   .270598  −.27059  −.65328)
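The eigenvalue table can be reproduced with a dense eigensolver (a sketch; numpy.linalg.eigh returns eigenvalues in ascending order, so column 1 of the eigenvector matrix is the Fiedler vector):

```python
import numpy as np

# Laplacian of the example graph (edges A-C, A-D, B-D).
L = np.array([[ 2,  0, -1, -1],
              [ 0,  1,  0, -1],
              [-1,  0,  1,  0],
              [-1, -1,  0,  2]], dtype=float)

w, V = np.linalg.eigh(L)           # eigenvalues in ascending order
print(w.round(6))                  # approx [0, 0.585786, 2, 3.414214]

fiedler = V[:, 1]                  # eigenvector of the 2nd smallest eigenvalue
partition = np.sign(fiedler)       # split vertices by sign: {A, C} vs {B, D}
print(partition)
```

The signs of the Fiedler vector put A and C on one side and B and D on the other, cutting only the edge A–D.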

