
Statistical NLP: Lecture 2
Introduction to Statistical NLP (Ch. 1)



Rational versus Empiricist Approaches to Language (I)

• Question: What prior knowledge should be built into our models of NLP?
• Rationalist Answer: A significant part of the knowledge in the human mind is not derived by the senses but is fixed in advance, presumably by genetic inheritance (Chomsky: poverty of the stimulus).
• Empiricist Answer: The brain is able to perform association, pattern recognition, and generalization, and thus the structures of Natural Language can be learned.




Rational versus Empiricist Approaches to Language (II)

• Chomskyan/generative linguists seek to describe the language module of the human mind (the I-language), for which data such as text (the E-language) provide only indirect evidence, which can be supplemented by native speakers' intuitions.
• Empiricist approaches are interested in describing the E-language as it actually occurs.
• Chomskyans make a distinction between linguistic competence and linguistic performance. They believe that linguistic competence can be described in isolation, while Empiricists reject this notion.



Today's Approach to NLP

• From roughly 1970 to 1989, people were concerned with the science of the mind and built small (toy) systems that attempted to behave intelligently.
• Recently, there has been more interest in engineering practical solutions using automatic learning (knowledge induction).
• While Chomskyans tend to concentrate on categorical judgements about very rare types of sentences, statistical NLP practitioners concentrate on common types of sentences.




Why is NLP Difficult?

• NLP is difficult because Natural Language is highly ambiguous.
• Example: "The company is training workers" has two or more parse trees (i.e., syntactic analyses); see the toy-grammar sketch below.
• "List the sales of the products produced in 1973 with the products produced in 1972" has 455 parses.
• Therefore, a practical NLP system must be good at making disambiguation decisions of word sense, word category, syntactic structure, and semantic scope.
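
A minimal sketch of this ambiguity using NLTK and a made-up toy grammar (the grammar is illustrative, not from the lecture): "training" can be parsed as the main verb of "is training workers", or "training workers" can be parsed as a noun phrase (workers who train).

    import nltk

    # Toy grammar (illustrative) in which the example sentence is ambiguous.
    grammar = nltk.CFG.fromstring("""
        S  -> NP VP
        NP -> Det N | N N | N
        VP -> Aux VP | Aux NP | V NP
        Det -> 'The'
        N  -> 'company' | 'training' | 'workers'
        Aux -> 'is'
        V  -> 'training'
    """)

    parser = nltk.ChartParser(grammar)
    # Prints two trees: "is [training workers]" as VP vs. as NP.
    for tree in parser.parse("The company is training workers".split()):
        print(tree)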
Methods that don't work well

• Maximizing coverage while minimizing ambiguity is inconsistent with symbolic NLP.
• Furthermore, hand-coded syntactic constraints and preference rules are time-consuming to build, do not scale up well, and are brittle in the face of the extensive use of metaphor in language.
• Example: if we code the selectional restriction
    animate being --> swallow --> physical object
  the rule fails on sentences such as:
    I swallowed his story, hook, line, and sinker.
    The supernova swallowed the planet.




What Statistical NLP can do for us

• Disambiguation strategies that rely on hand-coding produce a knowledge acquisition bottleneck and perform poorly on naturally occurring text.
• A Statistical NLP approach seeks to solve these problems by automatically learning lexical and structural preferences from corpora. In particular, Statistical NLP recognizes that there is a lot of information in the relationships between words.
• The use of statistics offers a good solution to the ambiguity problem: statistical models are robust, generalize well, and behave gracefully in the presence of errors and new data.



Corpora

• Brown Corpus – 1 million words (loaded in the sketch below)
• British National Corpus – 100 million words
• American National Corpus – 10 million words, with a target of 100 million
• Penn Treebank – parsed Wall Street Journal (WSJ) text
• Canadian Hansard – parallel corpus (bilingual)

Dictionaries:
• Longman Dictionary of Contemporary English
• WordNet (a hierarchy of synsets)
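
As a quick sketch (assuming NLTK is installed; the corpus data is fetched on first use), the Brown Corpus can be loaded and sized like this:

    import nltk
    nltk.download("brown", quiet=True)   # fetch the corpus data on first use
    from nltk.corpus import brown

    tokens = brown.words()               # about 1.16 million word tokens
    types = {w.lower() for w in tokens}  # distinct word types
    print(len(tokens), len(types))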




Things that can be done with Text Corpora (I)
Word Counts

• Word Counts to find out (see the sketch below):
   – What are the most common words in the text.
   – How many words are in the text (word tokens and word types).
   – What the average frequency of each word in the text is.
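
A minimal sketch of these three counts (the sample text is made up):

    from collections import Counter

    # Sample text (made up for the example).
    tokens = "the cat sat on the mat and the dog sat on the log".split()
    counts = Counter(tokens)                 # frequency of each word type

    print(counts.most_common(3))             # most common words
    print(len(tokens), len(counts))          # word tokens vs. word types
    print(len(tokens) / len(counts))         # average frequency per type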
• Limitation of word counts: Most words appear very infrequently, and it is hard to predict much about the behavior of words that do not occur often in a corpus. ==> Zipf's Law.



Things that can be done with Text Corpora (II)
Zipf's Law

• If we count up how often each word type of a language occurs in a large corpus and then list the words in order of their frequency of occurrence, we can explore the relationship between the frequency of a word, f, and its position in the list, known as its rank, r.
• Zipf's Law says that: f ∝ 1/r
• Significance of Zipf's Law: For most words, our data about their use will be exceedingly sparse. Only for a few words will we have a lot of examples.
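
A sketch of checking the law on a corpus: if f ∝ 1/r holds, the product f·r stays roughly constant down the ranked list (the file name corpus.txt is hypothetical):

    from collections import Counter

    # 'corpus.txt' is a hypothetical file of raw text.
    tokens = open("corpus.txt").read().lower().split()
    ranked = Counter(tokens).most_common()   # word types sorted by frequency f

    for r, (word, f) in enumerate(ranked[:20], start=1):   # r = rank
        print(f"{r:>4}  {word:<15} f={f:<8} f*r={f * r}")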




Things that can be done with Text Corpora (III)
Collocations

• A collocation is any turn of phrase or accepted usage where somehow the whole is perceived as having an existence beyond the sum of its parts (e.g., disk drive, make up, bacon and eggs).
• Collocations are important for machine translation.
• Collocations can be extracted from a text (for example, by extracting the most common bigrams). However, since these bigrams are often insignificant (e.g., "at the", "of a"), they can be filtered, as in the sketch below.
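
A minimal sketch of bigram extraction with stop-word filtering (the sample text and stop-word list are made up):

    from collections import Counter

    # Sample text and stop-word list (made up for the example).
    stopwords = {"the", "of", "a", "at", "in", "and", "is"}
    tokens = "the disk drive of the new machine is a fast disk drive".split()

    bigrams = Counter(zip(tokens, tokens[1:]))
    # Keep only bigrams in which both words are content words.
    collocations = [(pair, n) for pair, n in bigrams.most_common()
                    if pair[0] not in stopwords and pair[1] not in stopwords]
    print(collocations)   # ('disk', 'drive') survives; ('of', 'the') is filtered out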
Things that can be done with Text Corpora (IV)
Concordances

• Finding concordances corresponds to finding the different contexts in which a given word occurs.
• One can use a Key Word In Context (KWIC) concordancing program; a minimal one is sketched below.
• Concordances are useful both for building dictionaries for learners of foreign languages and for guiding statistical parsers.
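
A minimal KWIC sketch: print each occurrence of a keyword with a fixed-width window of context on either side (the sample text is made up):

    # Print each occurrence of a keyword with `window` tokens of context.
    def kwic(tokens, keyword, window=3):
        for i, tok in enumerate(tokens):
            if tok.lower() == keyword.lower():
                left = " ".join(tokens[max(0, i - window):i])
                right = " ".join(tokens[i + 1:i + 1 + window])
                print(f"{left:>30}  [{tok}]  {right}")

    # Sample text (made up for the example).
    kwic("He swallowed the pill . The supernova swallowed the planet .".split(),
         "swallowed")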



