					Information Retrieval Models




                               1
              Retrieval Models
• A retrieval model specifies the details of:
  1. Document representation
  2. Query representation
  3. Retrieval function

• Determines a notion of relevance.
• The notion of relevance can be binary or
  continuous (i.e. ranked retrieval).


                                                2
      Classes of Retrieval Models
• Boolean models (set theoretic)
• Statistical (Probabilistic) models
• Vector space models (statistical/algebraic)
  – Generalized VS
• Extended Boolean Models



                                                3
       Other Model Dimensions
• Logical View of Documents
  – Index terms
  – Full text
  – Full text + Structure (e.g. hypertext)


• User Task
  – Retrieval
  – Browsing

                                             4
                 Retrieval Tasks
• Ad hoc retrieval: Fixed document corpus, varied
  queries.

• Filtering: Fixed query, continuous document
  stream.
   – Binary decision of relevant/not-relevant.

• Routing: Same as filtering, but ranked lists are
  continuously supplied rather than binary decisions.



                                                       5
      Common Preprocessing Steps
• Strip unwanted characters/markup (e.g. HTML tags,
  punctuation, numbers, etc.).

• Break into tokens (keywords) on whitespace.

• Stem tokens to “root” words
   – computational → comput

• Remove common stopwords (e.g. a, the, it, etc.).

• Detect common phrases (possibly using a domain specific
  dictionary).

• Build inverted index (keyword → list of docs containing it).
                                                                 6
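
A minimal sketch of this pipeline in Python (the stopword list is abbreviated and the suffix-stripping "stemmer" is a toy stand-in for a real stemmer such as Porter's; everything here is illustrative, not a production indexer):

```python
import re
from collections import defaultdict

STOPWORDS = {"a", "the", "it", "is", "of", "and", "to"}   # abbreviated list

def preprocess(text):
    text = re.sub(r"<[^>]+>", " ", text)        # strip HTML-style markup
    text = re.sub(r"[^A-Za-z\s]", " ", text)    # strip punctuation/numbers
    tokens = text.lower().split()               # tokenize on whitespace
    tokens = [t for t in tokens if t not in STOPWORDS]
    stems = []
    for t in tokens:                            # toy stemmer: strip suffixes
        for suffix in ("ational", "ation", "ing", "ed", "s"):
            if t.endswith(suffix) and len(t) - len(suffix) >= 4:
                t = t[: -len(suffix)]
                break
        stems.append(t)
    return stems

def build_inverted_index(docs):
    index = defaultdict(set)                    # keyword -> docs containing it
    for doc_id, text in docs.items():
        for term in preprocess(text):
            index[term].add(doc_id)
    return index

docs = {1: "<p>Computational models!</p>", 2: "Computing is fun."}
print(dict(build_inverted_index(docs)))
# {'comput': {1, 2}, 'model': {1}, 'fun': {2}}
```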
                 Boolean Model
• A document is represented as a set of keywords.

• Queries are Boolean expressions of keywords,
  connected by AND, OR, and NOT, including the use
  of brackets to indicate scope.
   – ((Rio AND Brazil) OR (Hilo AND Hawaii)) AND hotel
     AND NOT Hilton


• Output: Document is relevant or not. No partial
  matches or ranking.

                                                         7
     Exact Match - Boolean Search

• You retrieve exactly what you ask for in the
  query:
   – all documents that have the term(s) with logical
     connection(s), as stated in the query
   – exactly: nothing less, nothing more
• Based on matching that follows the rules of Boolean
  algebra, or the algebra of sets
   – a ‘new algebra’
   – represented by circles in Venn diagrams


                                                        8
                  Boolean Algebra
• Operates on sets
  – e.g. set of documents
• Has four operations (like in algebra):
  1. A: retrieve set A
     • I want documents that have the term programming
  2. A AND B: retrieve the set that has both A and B
     • often called intersection and labeled A ∩ B
     • I want documents that have both terms programming
       and language somewhere within

                                                         9
               Boolean algebra
3. A OR B: retrieve the set that has either A or B
  • often called union and labeled A ∪ B
  • I want documents that have either the term
    programming or the term language somewhere
    within
4. A NOT B: retrieve set A but not B
  • often called negation and labeled A − B
  • I want documents that have the term programming, but
    not if they also have the term language


                                                         10
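
These four operations map directly onto Python set operations; here is a minimal sketch over a made-up inverted index (terms and document ids are purely illustrative):

```python
# Toy postings lists: term -> set of document ids (illustrative)
index = {
    "programming": {1, 2, 4, 5},
    "language":    {2, 3, 5},
}

A = index["programming"]
B = index["language"]

print(A)        # A alone:                 {1, 2, 4, 5}
print(A & B)    # A AND B (intersection):  {2, 5}
print(A | B)    # A OR B  (union):         {1, 2, 3, 4, 5}
print(A - B)    # A NOT B (negation):      {1, 4}
```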
               Potential problems

• But beware:
  – programming AND language will retrieve documents
    that have programming language (together as a
    phrase), but also documents that have language in the
    first paragraph and programming in the third section,
    5 pages later, and that do not deal with programming
    languages at all

  – thus in Google you would ask for “programming
    language” and in DIALOG for programming language
    to retrieve the exact phrase programming language
                                                        11
          Potential problems
– programming NOT language will retrieve
  documents that have programming and suppress
  those that, along with programming, also have
  language; but some of the suppressed documents
  may very well be relevant. Thus, NOT is also
  known as the “dangerous operator”




                                                   12
   Boolean algebra depicted in Venn diagrams
Four basic operations, e.g. A = programming, B = language.
In each two-circle diagram, region 1 is A only, region 2 is
A ∩ B, and region 3 is B only.

   • A alone: shade regions 1 and 2
     (programming)
   • A AND B: shade region 2
     (programming AND language)
   • A OR B: shade regions 1, 2, and 3
     (programming OR language)
   • A NOT B: shade region 1
     (programming NOT language)
                                                                       13
                Venn diagrams … cont.
Complex statements are allowed, e.g. (A OR B) AND C.

[Figure: three overlapping circles A, B, C with the seven
regions numbered 1–7.]

   • (A OR B) AND C: shade regions 4, 5, 6
     (programming OR language) AND visual
   • (A OR B) NOT C: shade what?
     (programming OR language) NOT visual
                                                         14
        Boolean Retrieval Model

• Popular retrieval model because:
   – Easy to understand for simple queries.
   – Clean formalism.

• Reasonably efficient implementations possible for
  normal queries.




                                                      15
      Boolean Models: Problems
• Very rigid: AND means all; OR means any.
• Difficult to express complex user requests.
• Difficult to control the number of documents
  retrieved.
   – All matched documents will be returned.
• Difficult to rank output.
   – All matched documents logically satisfy the query.
• Difficult to perform relevance feedback.
   – If a document is identified by the user as relevant or
     irrelevant, how should the query be modified?

                                                              16
                Statistical Models
• A document is typically represented by a bag of
  words (unordered words with frequencies).
• Bag = set that allows multiple occurrences of the
  same element.
• User specifies a set of desired terms with optional
  weights:
   – Weighted query terms:
     Q = < database 0.5; text 0.8; information 0.2 >
   – Unweighted query terms:
     Q = < database; text; information >
   – No Boolean conditions specified in the query.

                                                        17
           Statistical Retrieval
• Retrieval based on similarity between query
  and documents.
• Output documents are ranked according to
  similarity to query.
• Similarity based on occurrence frequencies
  of keywords in query and document.
• Automatic relevance feedback can be supported.

                                                   18
       Issues for Vector Space Model
• How to determine important words in a document?
   – Word sense?
   – Word n-grams (and phrases, …) → terms

• How to determine the degree of importance of a term
  within a document and within the entire collection?

• How to determine the degree of similarity between a
  document and the query?

• In the case of the web, what is a collection and what
  are the effects of links, formatting information, etc.?
                                                            19
        The Vector-Space Model
• Assume t distinct terms remain after preprocessing;
  call them index terms or the vocabulary.
• These “orthogonal” terms form a vector space.
                Dimension = t = |vocabulary|
• Each term, i, in a document or query, j, is given a
  real-valued weight, wij.
• Both documents and queries are expressed as
  t-dimensional vectors:
                    dj = (w1j, w2j, …, wtj)


                                                        20
              Graphic Representation
Example:
  D1 = 2T1 + 3T2 + 5T3
  D2 = 3T1 + 7T2 + 1T3
  Q  = 0T1 + 0T2 + 2T3

[Figure: D1, D2, and Q drawn as vectors in the 3-D term
space with axes T1, T2, T3.]

   • Is D1 or D2 more similar to Q?
   • How to measure the degree of
     similarity? Distance? Angle?
     Projection?
                                                                          21
             Document Collection
• A collection of n documents can be represented in the
  vector space model by a Term-Document Matrix.
• An entry in the matrix corresponds to the “weight” of a
  term in the document; zero means the term has no
  significance in the document or it simply doesn’t exist in
  the document.
                          T1   T2   …    Tt
                     D1   w11  w21  …   wt1
                     D2   w12  w22  …   wt2
                     :     :    :         :
                     Dn   w1n  w2n  …   wtn


                                                               22
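
As a sketch, such a matrix can be built from raw term counts (here the counts themselves stand in for the weights wij; the terms and documents are illustrative):

```python
from collections import Counter

vocab = ["T1", "T2", "T3"]                      # index terms (illustrative)
docs = {                                        # preprocessed token lists
    "D1": ["T1", "T1", "T2", "T2", "T2", "T3"],
    "D2": ["T1", "T2", "T3", "T3"],
}

# One row per document, one column per term; 0 = term absent.
matrix = {d: [Counter(tokens).get(t, 0) for t in vocab]
          for d, tokens in docs.items()}
print(matrix)   # {'D1': [2, 3, 1], 'D2': [1, 1, 2]}
```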
   Term Weights: Term Frequency
• More frequent terms in a document are more
  important, i.e. more indicative of the topic.
      fij = frequency of term i in document j



• May want to normalize term frequency (tf) across
  the entire document (divide by the frequency of the
  most frequent term in document j):
                  tfij = fij / maxi{fij}


                                                     23
Term Weights: Inverse Document Frequency
• Terms that appear in many different documents
  are less indicative of overall topic.
   df i = document frequency of term i
        = number of documents containing term i
   idfi = inverse document frequency of term i,
        = log2 (N/ df i)
         (N: total number of documents)
• An indication of a term’s discrimination power.
• Log used to dampen the effect relative to tf.

                                                    24
             TF-IDF Weighting
• A typical combined term importance indicator is
  tf-idf weighting:
            wij = tfij · idfi = tfij · log2 (N/dfi)
• A term occurring frequently in the document but
  rarely in the rest of the collection is given high
  weight.
• Many other ways of determining term weights
  have been proposed.
• Experimentally, tf-idf has been found to work well.

                                                        25
     Computing TF-IDF -- An Example
- Given a document containing terms with the following
  frequencies:
    A(3), B(2), C(2), D(1)
- Assume the collection contains 10,000 documents, and the
  document frequencies of these terms are:
    A(50), B(1300), C(250), D(20)
- Then:
  A: tf = 3/3; idf = log2(10000/50)   = 7.6; tf-idf = 7.6
  B: tf = 2/3; idf = log2(10000/1300) = 2.9; tf-idf = 1.9
  C: tf = 2/3; idf = log2(10000/250)  = 5.3; tf-idf = 3.5
  D: tf = 1/3; idf = log2(10000/20)   = 8.9; tf-idf = 2.9


                                                             26
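
A short sketch that reproduces these numbers (base-2 logarithm, tf normalized by the most frequent term; the small differences from the slide are rounding artifacts, since the slide rounds idf down before multiplying):

```python
import math

N = 10_000                                   # documents in the collection
freq = {"A": 3, "B": 2, "C": 2, "D": 1}      # term frequencies in the document
df   = {"A": 50, "B": 1300, "C": 250, "D": 20}   # document frequencies

max_f = max(freq.values())                   # frequency of most frequent term
for term in freq:
    tf  = freq[term] / max_f                 # normalized term frequency
    idf = math.log2(N / df[term])            # inverse document frequency
    print(f"{term}: tf={tf:.2f} idf={idf:.1f} tf-idf={tf * idf:.1f}")
# A: tf=1.00 idf=7.6 tf-idf=7.6
# B: tf=0.67 idf=2.9 tf-idf=2.0
# C: tf=0.67 idf=5.3 tf-idf=3.5
# D: tf=0.33 idf=9.0 tf-idf=3.0
```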
               Query Vector
• Query vector is typically treated as a
  document and also tf-idf weighted.

• Alternative is for the user to supply weights
  for the given query terms.




                                                  27
               Similarity Measure
• A similarity measure is a function that computes
  the degree of similarity between two vectors.

• Using a similarity measure between the query and
  each document:
   – It is possible to rank the retrieved documents in the
     order of presumed relevance.

   – It is possible to enforce a certain threshold so that the
     size of the retrieved set can be controlled.



                                                                 28
 Similarity Measure - Inner Product
• Similarity between vectors for the document dj and query q
  can be computed as the vector inner product:

         sim(dj, q) = dj • q = Σi=1..t (wij · wiq)

     where wij is the weight of term i in document j, and wiq is the weight
     of term i in the query

• For binary vectors, the inner product is the number of
  matched query terms in the document (size of intersection).
• For weighted term vectors, it is the sum of the products of
  the weights of the matched terms.

                                                                             29
       Inner Product -- Examples

Binary:
   – D = 1, 1,   1, 0, 1,   1,   0
                                     Size of vector = size of vocabulary = 7
   – Q = 1, 0 , 1, 0, 0,    1,   1   0 means corresponding term not found in
                                       document or query
   sim(D, Q) = 3

Weighted:
     D1 = 2T1 + 3T2 + 5T3        D2 = 3T1 + 7T2 + 1T3
     Q = 0T1 + 0T2 + 2T3
       sim(D1 , Q) = 2*0 + 3*0 + 5*2 = 10
       sim(D2 , Q) = 3*0 + 7*0 + 1*2 = 2


                                                                           30
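
Both cases reduce to a plain dot product; a minimal check of the numbers above:

```python
def inner_product(d, q):
    """Dot product of two equal-length weight vectors."""
    return sum(wd * wq for wd, wq in zip(d, q))

# Binary example (7-term vocabulary)
D = [1, 1, 1, 0, 1, 1, 0]
Q = [1, 0, 1, 0, 0, 1, 1]
print(inner_product(D, Q))       # 3 matched query terms

# Weighted example (terms T1, T2, T3)
D1, D2, Qw = [2, 3, 5], [3, 7, 1], [0, 0, 2]
print(inner_product(D1, Qw))     # 10
print(inner_product(D2, Qw))     # 2
```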
      Properties of Inner Product

• The inner product is unbounded.

• Favors long documents with a large number of
  unique terms.

• Measures how many terms are matched but
  NOT how many terms are not matched.


                                                 31
          Cosine Similarity Measure
• Cosine similarity measures the cosine of
  the angle between two vectors.
• Inner product normalized by the vector
  lengths.

  CosSim(dj, q) = (dj • q) / (|dj| · |q|)
                = Σi=1..t (wij · wiq) / (√(Σi=1..t wij²) · √(Σi=1..t wiq²))

[Figure: D1, D2, and Q in term space; θ1 is the angle
between D1 and Q, θ2 the angle between D2 and Q.]

D1 = 2T1 + 3T2 + 5T3   CosSim(D1, Q) = 10 / √((4+9+25)·(0+0+4)) = 0.81
D2 = 3T1 + 7T2 + 1T3   CosSim(D2, Q) = 2 / √((9+49+1)·(0+0+4)) = 0.13
Q  = 0T1 + 0T2 + 2T3

D1 is 6 times better than D2 using cosine similarity, but only 5 times
better using the inner product.

                                                                                    32
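
The same computation with length normalization; a minimal sketch reproducing the example:

```python
import math

def cos_sim(d, q):
    """Cosine of the angle between weight vectors d and q."""
    dot = sum(wd * wq for wd, wq in zip(d, q))
    return dot / (math.sqrt(sum(w * w for w in d)) *
                  math.sqrt(sum(w * w for w in q)))

D1, D2, Q = [2, 3, 5], [3, 7, 1], [0, 0, 2]
print(round(cos_sim(D1, Q), 2))   # 0.81
print(round(cos_sim(D2, Q), 2))   # 0.13
```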
          Naïve Implementation
- Convert all documents in collection D to tf-idf
  weighted vectors, dj, over keyword vocabulary V.
- Convert the query to a tf-idf weighted vector q.
- For each dj in D:
    compute score sj = cosSim(dj, q)
- Sort documents by decreasing score.
- Present the top-ranked documents to the user.

Time complexity: O(|V|·|D|), bad for large V and D!
|V| = 10,000; |D| = 100,000; |V|·|D| = 1,000,000,000

                                                       33
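
The same loop as a runnable sketch (cos_sim repeated from the previous slide; the document vectors are assumed to be tf-idf weighted already):

```python
import math

def cos_sim(d, q):
    dot = sum(wd * wq for wd, wq in zip(d, q))
    return dot / (math.sqrt(sum(w * w for w in d)) *
                  math.sqrt(sum(w * w for w in q)))

def rank(doc_vectors, q):
    """doc_vectors: {doc_id: weight vector}; returns (doc_id, score) pairs."""
    scores = [(doc_id, cos_sim(d, q)) for doc_id, d in doc_vectors.items()]
    scores.sort(key=lambda pair: pair[1], reverse=True)   # decreasing score
    return scores

doc_vectors = {"D1": [2, 3, 5], "D2": [3, 7, 1]}   # assumed already weighted
print(rank(doc_vectors, [0, 0, 2]))   # [('D1', 0.811...), ('D2', 0.130...)]
```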
Comments on Vector Space Models
• Simple, mathematically based approach.
• Considers both local (tf) and global (idf) word
  occurrence frequencies.
• Provides partial matching and ranked results.
• Tends to work quite well in practice despite
  obvious weaknesses.
• Allows efficient implementation for large
  document collections.

                                                    34
  Problems with Vector Space Model
• Missing semantic information (e.g. word sense).
• Missing syntactic information (e.g. phrase structure,
  word order, proximity information).
• Assumption of term independence (e.g. ignores
  synonymy).
• Lacks the control of a Boolean model (e.g., requiring
  a term to appear in a document).
   – Given a two-term query “A B”, it may prefer a document
     containing A frequently but not B over a document that
     contains both A and B, but each less frequently.

                                                               35
    Extended Boolean Model

•   Disadvantages of the Boolean model:
•   No term weights are used.
•   Counterexample: for the query q = Kx AND Ky, a document
    containing just one of the terms, e.g. Kx, is considered
    as irrelevant as a document containing neither term.




                                                                  36
 Extended Boolean Model:

• The Extended Boolean model was introduced in
  1983 by Salton, Fox, and Wu.
• The idea is to make use of term weights, as in the
  vector space model.
• Strategy: combine Boolean queries with the vector
  space model.
• Advantage: it remains easy for the user to formulate
  a query.




                                                      37
 Extended Boolean Model

• Each document is represented by a vector (similar to the
  vector space model):

      wx,j = tfnorm(x,j) · idfnorm(x)

  where tfnorm(x,j) = tfx,j / maxx{tfx,j}
  and   idfnorm(x) = idfx / maxx{idfx}

• The query is given as a Boolean formula.




                                                                      38
The Idea:   qand = kx AND ky;  x = wxj and y = wyj

[Figure: document dj plotted at the point (x, y) in the
two-dimensional kx–ky term space; the ideal point (1,1) is
the upper-right corner.]

We want a document to be as close as possible to (1,1).
                                                               39
AND query
•   For the query q = Kx AND Ky, (1,1) is the most desirable
    point. We rank the documents by

        sim(qand, d) = 1 − √( ((1−x)² + (1−y)²) / 2 )

•   The bigger, the better.
                                                                40
The Idea:   qor = kx OR ky;  x = wxj and y = wyj

[Figure: document dj plotted at the point (x, y) in the
two-dimensional kx–ky term space; the origin (0,0) is the
lower-left corner.]

We want a document to be as far as possible from (0,0).
                                                                  41
    OR query
•    For the query q = Kx OR Ky, (0,0) is the point we try to
     avoid. We rank the documents by

         sim(qor, d) = √( (x² + y²) / 2 )

•    The bigger, the better.
                                                          42
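
The two geometric scores as functions (a minimal sketch; the weights x and y are assumed to be normalized into [0, 1]):

```python
import math

def sim_and(x, y):
    # 1 minus the normalized distance from the ideal point (1, 1)
    return 1 - math.sqrt(((1 - x) ** 2 + (1 - y) ** 2) / 2)

def sim_or(x, y):
    # normalized distance from the undesirable point (0, 0)
    return math.sqrt((x ** 2 + y ** 2) / 2)

# A document matching only one of the two terms now gets partial
# credit instead of the Boolean model's all-or-nothing score:
print(round(sim_and(1, 0), 2))   # 0.29
print(round(sim_or(1, 0), 2))    # 0.71
```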
   Weights and Euclidean Distances

[Figure: the maximum Euclidean distance, dmax = √2 ≈ 1.41,
in a two-dimensional term space.]
                                                                     43
AND query

[Figure: the “AND” Euclidean distance, from the document at
(x, y) to the ideal point (1,1), in a two-dimensional term
space.]
                                                               44
                        OR query

[Figure: the “OR” Euclidean distance, from the origin (0,0)
to the document at (x, y), in a two-dimensional term space.]

The Euclidean distance of a document at (x, y) is at most
√2 ≈ 1.41.
                                                                  45
       Normalized Similarity Scores
• To compare similarity scores for a variety of
  scenarios we need to normalize all distances by
  dividing by dmax = √2. Thus, for OR and AND queries
  we obtain

      sim(qor, d)  = √( (x² + y²) / 2 )
      sim(qand, d) = 1 − √( ((1−x)² + (1−y)²) / 2 )
                                                    46
              Extend the idea to m terms

• qor = k1 ∨ k2 ∨ … ∨ km

      sim(qor, dj) = ( (x1^p + x2^p + … + xm^p) / m )^(1/p)

• qand = k1 ∧ k2 ∧ … ∧ km

      sim(qand, dj) = 1 − ( ((1−x1)^p + (1−x2)^p + … + (1−xm)^p) / m )^(1/p)
                                                                 47
                          Example:

• For instance, consider the query q = (k1 AND k2) OR k3. The
  similarity sim(q, dj) between a document dj and this query is
  then computed as

    sim(q, dj) = ( ( (1 − ( ((1−x1)^p + (1−x2)^p) / 2 )^(1/p) )^p + x3^p ) / 2 )^(1/p)

• Any Boolean query can be expressed as such a numerical formula.


                                                                 48
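
A sketch of the general p-norm scores, evaluated bottom-up for a composite query like (k1 AND k2) OR k3 (the value of p and the term weights below are illustrative):

```python
def p_or(xs, p):
    return (sum(x ** p for x in xs) / len(xs)) ** (1 / p)

def p_and(xs, p):
    return 1 - (sum((1 - x) ** p for x in xs) / len(xs)) ** (1 / p)

def sim_q(x1, x2, x3, p=2):
    # q = (k1 AND k2) OR k3: inner AND first, then OR with k3
    return p_or([p_and([x1, x2], p), x3], p)

print(round(sim_q(0.5, 0.7, 0.2), 3))   # 0.439 with p = 2
```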
Exercise
• Rank the following by decreasing cosine
  similarity:
   – Two documents that have only frequent words (the, a,
     an, of) in common.
   – Two documents that have no words in common.
   – Two documents that have many rare words in common
     (wingspan, tailfin).




                                                            49
Exercise
 • Consider three documents: Austen's Sense and
   Sensibility (SaS) and Pride and Prejudice (PaP), and
   Brontë's Wuthering Heights (WH).

   Term frequencies:
                    SaS     PaP     WH
       affection    115      58     20
       jealous       10       7     11
       gossip         2       0      6

   Length-normalized (unit) vectors:
                    SaS     PaP      WH
       affection   0.996   0.993   0.847
       jealous     0.087   0.120   0.466
       gossip      0.017   0.000   0.254

 •   Calculate cos(SaS, PaP) and cos(SaS, WH).
                                                        50
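
A quick way to check your answer, reusing the normalized vectors from the second table (since the vectors are already unit length, cosine similarity reduces to a dot product):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Components ordered (affection, jealous, gossip), from the table above
SaS = [0.996, 0.087, 0.017]
PaP = [0.993, 0.120, 0.000]
WH  = [0.847, 0.466, 0.254]

print(round(dot(SaS, PaP), 3))   # cos(SaS, PaP)
print(round(dot(SaS, WH), 3))    # cos(SaS, WH)
```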
  Exercise:

1. Give the numerical formula, under the extended Boolean
   model, for the query
   q = (k1 OR k2 OR k3) AND (NOT k4 OR k5)
   (assume there are 5 terms in total).

2. Assume the document is represented by the vector
   (0.8, 0.1, 0.0, 0.0, 1.0).
   What is sim(q, d) under the extended Boolean model?




                                                                 51

				