# Computing Relevance, Similarity: The Vector Space Model

Based on Larson and Hearst's slides at UC Berkeley:
http://www.sims.berkeley.edu/courses/is202/f00/

Database Management Systems, R. Ramakrishnan

## Document Vectors

- Documents are represented as "bags of words"
- They are represented as vectors when used computationally
  - A vector is like an array of floating-point numbers
  - It has direction and magnitude
  - Each vector holds a place for every term in the collection
  - Therefore, most vectors are sparse
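The bullets above can be sketched in a few lines of plain Python (the two tiny "documents" here are invented for illustration):

```python
# Sketch: build term-count vectors over a shared vocabulary.
docs = {
    "A": "nova nova galaxy heat nova galaxy".split(),
    "B": "galaxy galaxy nova".split(),
}

# One vector slot for every term in the collection, in a fixed order.
vocab = sorted({term for words in docs.values() for term in words})

def to_vector(words):
    """Count occurrences of each vocabulary term (0 where absent)."""
    return [words.count(term) for term in vocab]

vectors = {doc_id: to_vector(words) for doc_id, words in docs.items()}
print(vocab)         # ['galaxy', 'heat', 'nova']
print(vectors["A"])  # [2, 1, 3]
```

In a real collection the vocabulary is large and most slots are zero, which is why sparse representations are used in practice.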
## Document Vectors: One Location for Each Word

| docs | nova | galaxy | heat | h'wood | film | role | diet | fur |
|------|------|--------|------|--------|------|------|------|-----|
| A    | 10   | 5      | 3    |        |      |      |      |     |
| B    | 5    | 10     |      |        |      |      |      |     |
| C    |      |        |      | 10     | 8    | 7    |      |     |
| D    |      |        |      | 9      | 10   | 5    |      |     |
| E    |      |        |      |        |      |      | 10   | 10  |
| F    |      |        |      |        |      |      | 9    | 10  |
| G    | 5    | 7      |      |        | 9    |      |      |     |
| H    |      | 6      | 10   | 2      | 8    |      |      |     |
| I    |      |        |      | 7      | 5    |      | 1    | 3   |

"Nova" occurs 10 times in text A, "Galaxy" occurs 5 times in text A, and "Heat" occurs 3 times in text A. (A blank means 0 occurrences.)

## Document Vectors

Document IDs label the rows; each column is a term.

| docs | nova | galaxy | heat | h'wood | film | role | diet | fur |
|------|------|--------|------|--------|------|------|------|-----|
| A    | 10   | 5      | 3    |        |      |      |      |     |
| B    | 5    | 10     |      |        |      |      |      |     |
| C    |      |        |      | 10     | 8    | 7    |      |     |
| D    |      |        |      | 9      | 10   | 5    |      |     |
| E    |      |        |      |        |      |      | 10   | 10  |
| F    |      |        |      |        |      |      | 9    | 10  |
| G    | 5    | 7      |      |        | 9    |      |      |     |
| H    |      | 6      | 10   | 2      | 8    |      |      |     |
| I    |      |        |      | 7      | 5    |      | 1    | 3   |
## We Can Plot the Vectors

*(Figure: document vectors plotted in a 2-D space with axes "Star" and "Diet".)*

Assumption: documents that are "close" in space are similar.

## Vector Space Model

- Documents are represented as vectors in term space
  - Terms are usually stems
  - Documents are represented by binary vectors of terms
- Queries are represented the same way as documents
- A vector distance measure between the query and the documents is used to rank retrieved documents
  - Query and document similarity is based on the length and direction of their vectors
  - Vector operations can capture Boolean query conditions
  - Terms in a vector can be "weighted" in many ways
## Vector Space Documents and Queries

| docs | t1 | t2 | t3 | RSV = Q·Di |
|------|----|----|----|------------|
| D1   | 1  | 0  | 1  | 4          |
| D2   | 1  | 0  | 0  | 1          |
| D3   | 0  | 1  | 1  | 5          |
| D4   | 1  | 0  | 0  | 1          |
| D5   | 1  | 1  | 1  | 6          |
| D6   | 1  | 1  | 0  | 3          |
| D7   | 0  | 1  | 0  | 2          |
| D8   | 0  | 1  | 0  | 2          |
| D9   | 0  | 0  | 1  | 3          |
| D10  | 0  | 1  | 1  | 5          |
| D11  | 1  | 0  | 1  | 4          |
| Q    | 1  | 2  | 3  |            |

Q is a query (q1, q2, q3), also represented as a vector; vector operations capture Boolean term combinations.

*(Figure: documents D1–D11 and the query plotted in the 3-D term space t1, t2, t3.)*
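The RSV column is just the inner product of the query vector with each binary document vector; a small check of the table's values (data copied from the slide):

```python
# Binary document vectors over terms (t1, t2, t3).
docs = {
    "D1": (1, 0, 1), "D2": (1, 0, 0), "D3": (0, 1, 1), "D4": (1, 0, 0),
    "D5": (1, 1, 1), "D6": (1, 1, 0), "D7": (0, 1, 0), "D8": (0, 1, 0),
    "D9": (0, 0, 1), "D10": (0, 1, 1), "D11": (1, 0, 1),
}
q = (1, 2, 3)  # query weights for t1, t2, t3

def rsv(query, doc):
    """Retrieval status value: the inner product of query and document."""
    return sum(qi * di for qi, di in zip(query, doc))

ranked = sorted(docs, key=lambda d: rsv(q, docs[d]), reverse=True)
print(rsv(q, docs["D5"]))  # 6 -- D5 matches all three terms
print(ranked[0])           # D5
```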

## Assigning Weights to Terms

- Binary weights
- Raw term frequency
- tf x idf
  - Recall the Zipf distribution
  - We want to weight terms highly if they are:
    - frequent in relevant documents … BUT
    - infrequent in the collection as a whole
## Binary Weights

Only the presence (1) or absence (0) of a term is included in the vector.

| docs | t1 | t2 | t3 |
|------|----|----|----|
| D1   | 1  | 0  | 1  |
| D2   | 1  | 0  | 0  |
| D3   | 0  | 1  | 1  |
| D4   | 1  | 0  | 0  |
| D5   | 1  | 1  | 1  |
| D6   | 1  | 1  | 0  |
| D7   | 0  | 1  | 0  |
| D8   | 0  | 1  | 0  |
| D9   | 0  | 0  | 1  |
| D10  | 0  | 1  | 1  |
| D11  | 1  | 0  | 1  |

## Raw Term Weights

The frequency of occurrence of the term in each document is included in the vector.

| docs | t1 | t2 | t3 |
|------|----|----|----|
| D1   | 2  | 0  | 3  |
| D2   | 1  | 0  | 0  |
| D3   | 0  | 4  | 7  |
| D4   | 3  | 0  | 0  |
| D5   | 1  | 6  | 3  |
| D6   | 3  | 5  | 0  |
| D7   | 0  | 8  | 0  |
| D8   | 0  | 10 | 0  |
| D9   | 0  | 0  | 1  |
| D10  | 0  | 3  | 5  |
| D11  | 4  | 0  | 1  |
## TF x IDF Weights

- The tf x idf measure combines:
  - Term Frequency (tf)
  - Inverse Document Frequency (idf): a way to deal with the problems of the Zipf distribution
- Goal: assign a tf x idf weight to each term in each document

## TF x IDF Calculation

$$w_{ik} = tf_{ik} \times \log(N / n_k)$$

where:

- $T_k$ = term $k$ in document $D_i$
- $tf_{ik}$ = frequency of term $T_k$ in document $D_i$
- $idf_k$ = inverse document frequency of term $T_k$ in collection $C$
- $N$ = total number of documents in the collection $C$
- $n_k$ = the number of documents in $C$ that contain $T_k$

$$idf_k = \log\left(\frac{N}{n_k}\right)$$
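A direct rendering of the formula, using a tiny invented collection (the document names and counts are ours; base-10 logs, as in the examples that follow):

```python
import math

# Toy collection: term -> raw frequency per document (invented data).
collection = {
    "D1": {"nova": 3, "galaxy": 1},
    "D2": {"galaxy": 2, "heat": 4},
    "D3": {"heat": 1},
}
N = len(collection)  # total number of documents

def idf(term):
    """idf_k = log(N / n_k), where n_k = documents containing the term."""
    n_k = sum(1 for counts in collection.values() if term in counts)
    return math.log10(N / n_k)

def tf_idf(doc_id, term):
    """w_ik = tf_ik * log(N / n_k)."""
    return collection[doc_id].get(term, 0) * idf(term)

print(round(tf_idf("D1", "nova"), 3))  # 3 * log10(3/1)
```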
## Inverse Document Frequency

IDF provides high values for rare words and low values for common words.

For a collection of 10,000 documents:

$$\log\left(\frac{10000}{10000}\right) = 0 \qquad \log\left(\frac{10000}{5000}\right) = 0.301$$

$$\log\left(\frac{10000}{20}\right) = 2.699 \qquad \log\left(\frac{10000}{1}\right) = 4$$
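These values are easy to check directly (note that log₁₀(10000/20) = log₁₀(500) ≈ 2.699):

```python
import math

# idf values for a collection of N = 10,000 documents.
N = 10_000
for n_k in (10_000, 5_000, 20, 1):
    print(n_k, round(math.log10(N / n_k), 3))
```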

## TF x IDF Normalization

- Normalize the term weights so longer documents are not unfairly given more weight.
  - This usually means forcing all values to fall within a certain range, typically between 0 and 1, inclusive.

$$w_{ik} = \frac{tf_{ik} \log(N / n_k)}{\sqrt{\sum_{k=1}^{t} (tf_{ik})^2 \, [\log(N / n_k)]^2}}$$
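A sketch of the normalization step, assuming the tf x idf products for one document have already been computed:

```python
import math

def normalize(weights):
    """Divide by the vector's Euclidean length, giving a unit vector."""
    norm = math.sqrt(sum(w * w for w in weights))
    return [w / norm for w in weights] if norm else list(weights)

w = normalize([2.0, 1.0, 2.0])   # length of (2, 1, 2) is 3
print([round(x, 3) for x in w])  # [0.667, 0.333, 0.667]
```

After the division every document vector has length 1, and all components fall between 0 and 1.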
## Pair-wise Document Similarity

| docs | nova | galaxy | heat | h'wood | film | role | diet | fur |
|------|------|--------|------|--------|------|------|------|-----|
| A    | 1    | 3      | 1    |        |      |      |      |     |
| B    | 5    | 2      |      |        |      |      |      |     |
| C    |      |        |      | 2      | 1    | 5    |      |     |
| D    |      |        |      | 4      | 1    |      |      |     |

How do we compute document similarity?

## Pair-wise Document Similarity

$$D_1 = w_{11}, w_{12}, \ldots, w_{1t} \qquad D_2 = w_{21}, w_{22}, \ldots, w_{2t}$$

$$sim(D_1, D_2) = \sum_{i=1}^{t} w_{1i} \times w_{2i}$$

$$sim(A, B) = (1 \times 5) + (3 \times 2) = 11$$

$$sim(A, C) = 0 \qquad sim(A, D) = 0 \qquad sim(B, C) = 0 \qquad sim(B, D) = 0$$

$$sim(C, D) = (2 \times 4) + (1 \times 1) = 9$$

| docs | nova | galaxy | heat | h'wood | film | role | diet | fur |
|------|------|--------|------|--------|------|------|------|-----|
| A    | 1    | 3      | 1    |        |      |      |      |     |
| B    | 5    | 2      |      |        |      |      |      |     |
| C    |      |        |      | 2      | 1    | 5    |      |     |
| D    |      |        |      | 4      | 1    |      |      |     |
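A quick check of these similarities (weight vectors copied from the table, in the column order nova … fur):

```python
# Weight vectors from the table, in term order nova..fur.
docs = {
    "A": (1, 3, 1, 0, 0, 0, 0, 0),
    "B": (5, 2, 0, 0, 0, 0, 0, 0),
    "C": (0, 0, 0, 2, 1, 5, 0, 0),
    "D": (0, 0, 0, 4, 1, 0, 0, 0),
}

def sim(d1, d2):
    """Unnormalized similarity: the sum of pairwise weight products."""
    return sum(w1 * w2 for w1, w2 in zip(docs[d1], docs[d2]))

print(sim("A", "B"))  # (1*5) + (3*2) = 11
print(sim("C", "D"))  # (2*4) + (1*1) = 9
print(sim("A", "C"))  # 0 -- no terms in common
```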
## Pair-wise Document Similarity (Cosine Normalization)

$$D_1 = w_{11}, w_{12}, \ldots, w_{1t} \qquad D_2 = w_{21}, w_{22}, \ldots, w_{2t}$$

Unnormalized:

$$sim(D_1, D_2) = \sum_{i=1}^{t} w_{1i} \times w_{2i}$$

Cosine normalized:

$$sim(D_1, D_2) = \frac{\sum_{i=1}^{t} w_{1i} \times w_{2i}}{\sqrt{\sum_{i=1}^{t} (w_{1i})^2} \times \sqrt{\sum_{i=1}^{t} (w_{2i})^2}}$$
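The cosine-normalized measure translates directly into code (a sketch; the helper name is ours):

```python
import math

def cosine_sim(v1, v2):
    """Dot product divided by the product of the vector lengths."""
    dot = sum(a * b for a, b in zip(v1, v2))
    norms = math.sqrt(sum(a * a for a in v1)) * math.sqrt(sum(b * b for b in v2))
    return dot / norms if norms else 0.0

# Normalization makes the score insensitive to document length:
# a vector and a doubled copy of it are maximally similar.
a = (1.0, 3.0, 1.0)
print(round(cosine_sim(a, a), 3))                        # 1.0
print(round(cosine_sim(a, tuple(2 * x for x in a)), 3))  # 1.0
```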

## Vector Space "Relevance" Measure

$$D_i = w_{d_{i1}}, w_{d_{i2}}, \ldots, w_{d_{it}} \qquad Q = w_{q_1}, w_{q_2}, \ldots, w_{q_t}$$

(w = 0 if a term is absent.)

If term weights are normalized:

$$sim(Q, D_i) = \sum_{j=1}^{t} w_{qj} \times w_{d_{ij}}$$

Otherwise, normalize in the similarity comparison:

$$sim(Q, D_i) = \frac{\sum_{j=1}^{t} w_{qj} \times w_{d_{ij}}}{\sqrt{\sum_{j=1}^{t} (w_{qj})^2} \times \sqrt{\sum_{j=1}^{t} (w_{d_{ij}})^2}}$$
## Computing Relevance Scores

Say we have the query vector Q = (0.4, 0.8) and the document D2 = (0.2, 0.7). What does their similarity comparison yield?

$$sim(Q, D_2) = \frac{(0.4 \times 0.2) + (0.8 \times 0.7)}{\sqrt{[(0.4)^2 + (0.8)^2] \times [(0.2)^2 + (0.7)^2]}} = \frac{0.64}{\sqrt{0.424}} \approx 0.98$$
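Checking the arithmetic of the worked example:

```python
import math

# Worked example from the slide: Q = (0.4, 0.8), D2 = (0.2, 0.7).
q, d2 = (0.4, 0.8), (0.2, 0.7)

dot = sum(a * b for a, b in zip(q, d2))
denom = math.sqrt(sum(a * a for a in q) * sum(b * b for b in d2))
print(round(dot, 2))          # 0.64
print(round(dot / denom, 2))  # 0.98
```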

## Vector Space with Term Weights and Cosine Matching

$$D_i = (d_{i1}, w_{d_{i1}};\ d_{i2}, w_{d_{i2}};\ \ldots;\ d_{it}, w_{d_{it}})$$

$$Q = (q_{i1}, w_{q_{i1}};\ q_{i2}, w_{q_{i2}};\ \ldots;\ q_{it}, w_{q_{it}})$$

$$sim(Q, D_i) = \frac{\sum_{j=1}^{t} w_{q_j} w_{d_{ij}}}{\sqrt{\sum_{j=1}^{t} (w_{q_j})^2 \sum_{j=1}^{t} (w_{d_{ij}})^2}}$$

With Q = (0.4, 0.8), D1 = (0.8, 0.3), and D2 = (0.2, 0.7):

$$sim(Q, D_2) = \frac{(0.4 \cdot 0.2) + (0.8 \cdot 0.7)}{\sqrt{[(0.4)^2 + (0.8)^2] \cdot [(0.2)^2 + (0.7)^2]}} = \frac{0.64}{\sqrt{0.424}} \approx 0.98$$

$$sim(Q, D_1) = \frac{(0.4 \cdot 0.8) + (0.8 \cdot 0.3)}{\sqrt{[(0.4)^2 + (0.8)^2] \cdot [(0.8)^2 + (0.3)^2]}} = \frac{0.56}{\sqrt{0.584}} \approx 0.73$$

*(Figure: Q, D1, and D2 plotted in the 2-D space of Term A and Term B, with angles α1 and α2 between the query and each document.)*
## Similarity Measures

- Simple matching (coordination level match): $|Q \cap D|$
- Dice's Coefficient: $\dfrac{2\,|Q \cap D|}{|Q| + |D|}$
- Jaccard's Coefficient: $\dfrac{|Q \cap D|}{|Q \cup D|}$
- Cosine Coefficient: $\dfrac{|Q \cap D|}{|Q|^{1/2} \times |D|^{1/2}}$
- Overlap Coefficient: $\dfrac{|Q \cap D|}{\min(|Q|, |D|)}$
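With Q and D viewed as sets of terms, the coefficients translate directly (function names and the sample sets are ours):

```python
# Set-based similarity coefficients; q and d are sets of terms.
def simple_match(q, d): return len(q & d)
def dice(q, d):         return 2 * len(q & d) / (len(q) + len(d))
def jaccard(q, d):      return len(q & d) / len(q | d)
def cosine(q, d):       return len(q & d) / (len(q) ** 0.5 * len(d) ** 0.5)
def overlap(q, d):      return len(q & d) / min(len(q), len(d))

q = {"nova", "galaxy"}
d = {"galaxy", "heat", "film", "role"}
print(simple_match(q, d))        # 1 shared term
print(round(dice(q, d), 3))      # 2*1 / (2+4)
print(round(jaccard(q, d), 3))   # 1/5
print(round(overlap(q, d), 3))   # 1/2
```

Note how the coefficients disagree: overlap rewards covering the smaller set, while Jaccard penalizes the size of the union.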

## Text Clustering

- Finds overall similarities among groups of documents
- Finds overall similarities among groups of tokens
- Picks out some themes, ignores others
## Text Clustering

Clustering is "the art of finding groups in data" (Kaufman and Rousseeuw).

*(Figure: document clusters plotted in the 2-D space of Term 1 and Term 2.)*

## Problems with Vector Space

- There is no real theoretical basis for the assumption of a term space
  - It is more useful for visualization than as a formal model
  - Most similarity measures work about the same
- Terms are not really orthogonal dimensions
  - Terms are not independent of all other terms; remember our discussion of correlated terms in text
## Probabilistic Models

- A rigorous formal model that attempts to predict the probability that a given document will be relevant to a given query
- Ranks retrieved documents according to this probability of relevance (the Probability Ranking Principle)
- Relies on accurate estimates of probabilities

## Probability Ranking Principle

> If a reference retrieval system's response to each request is a ranking of the documents in the collections in the order of decreasing probability of usefulness to the user who submitted the request, where the probabilities are estimated as accurately as possible on the basis of whatever data has been made available to the system for this purpose, then the overall effectiveness of the system to its users will be the best that is obtainable on the basis of that data.
>
> Stephen E. Robertson, J. Documentation, 1977
## Iterative Query Refinement

## Query Modification

- Problem: how can we reformulate the query to help a user who is trying several searches to get at the same information?
  - Thesaurus expansion: suggest terms similar to query terms
  - Relevance feedback: suggest terms (and documents) similar to retrieved documents that have been judged to be relevant
## Relevance Feedback

- Main idea:
  - Modify the existing query based on relevance judgements
  - Extract terms from relevant documents and add them to the query, and/or re-weight the terms already in the query
- There are many variations:
  - Usually positive weights for terms from relevant docs
  - Sometimes negative weights for terms from non-relevant docs
- Users, or the system, guide this process by selecting terms from an automatically generated list.

## Rocchio Method

- Rocchio automatically:
  - Re-weights terms
  - Adds in new terms (from relevant docs)
  - Have to be careful when using negative terms
- Rocchio is not a machine learning algorithm
## Rocchio Method

$$Q_1 = \alpha Q_0 + \frac{\beta}{n_1} \sum_{i=1}^{n_1} R_i \;-\; \frac{\gamma}{n_2} \sum_{i=1}^{n_2} S_i$$

where:

- $Q_0$ = the vector for the initial query
- $R_i$ = the vector for relevant document $i$
- $S_i$ = the vector for non-relevant document $i$
- $n_1$ = the number of relevant documents chosen
- $n_2$ = the number of non-relevant documents chosen
- $\alpha$, $\beta$, and $\gamma$ tune the importance of relevant and non-relevant terms (in some studies it is best to set $\beta$ to 0.75 and $\gamma$ to 0.25)

## Rocchio/Vector Illustration

- Q0 = "retrieval of information" = (0.7, 0.3)
- D1 = "information science" = (0.2, 0.8)
- D2 = "retrieval systems" = (0.9, 0.1)

$$Q' = \tfrac{1}{2} Q_0 + \tfrac{1}{2} D_1 = (0.45, 0.55)$$

$$Q'' = \tfrac{1}{2} Q_0 + \tfrac{1}{2} D_2 = (0.80, 0.20)$$

*(Figure: Q0, D1, D2, Q′, and Q″ plotted in the 2-D space with axes "Retrieval" and "Information".)*
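The illustration is just the Rocchio update with α = β = ½ and no non-relevant documents; a minimal sketch (the function is ours, with defaults taken from the β = 0.75, γ = 0.25 suggestion above):

```python
# Rocchio update: Q1 = alpha*Q0 + (beta/n1)*sum(Ri) - (gamma/n2)*sum(Si).
def rocchio(q0, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.25):
    def centroid(vectors):
        """Mean of the vectors; the zero vector if none were chosen."""
        if not vectors:
            return [0.0] * len(q0)
        return [sum(v[j] for v in vectors) / len(vectors) for j in range(len(q0))]
    r, s = centroid(relevant), centroid(nonrelevant)
    return [alpha * q0[j] + beta * r[j] - gamma * s[j] for j in range(len(q0))]

q0 = (0.7, 0.3)  # "retrieval of information"
d1 = (0.2, 0.8)  # "information science"
d2 = (0.9, 0.1)  # "retrieval systems"

q_prime = rocchio(q0, [d1], [], alpha=0.5, beta=0.5, gamma=0.0)
q_double = rocchio(q0, [d2], [], alpha=0.5, beta=0.5, gamma=0.0)
print([round(x, 2) for x in q_prime])   # [0.45, 0.55]
print([round(x, 2) for x in q_double])  # [0.8, 0.2]
```

Feeding back D1 pulls the query toward "information"; feeding back D2 pulls it toward "retrieval".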
