I256: Applied Natural Language Processing

Preslav Nakov and Marti Hearst, October 16, 2006
(Many slides originally by Barbara Rosario, modified here)

1

Today
Classification
Text categorization (and other applications)

Various issues regarding classification
Clustering vs. classification, binary vs. multi-way, flat vs. hierarchical classification…

Introduce the steps necessary for a classification task
Define classes
Label text
Features
Training and evaluation of a classifier

2

Classification
Goal: Assign 'objects' from a universe to two or more classes or categories

Examples:
Problem                    Object      Categories
Tagging                    Word        POS
Sense Disambiguation       Word        The word's senses
Information retrieval      Document    Relevant / not relevant
Sentiment classification   Document    Positive / negative
Author identification      Document    Authors

From: Foundations of Statistical Natural Language Processing, Manning and Schütze

3

Text Categorization Applications
Web pages organized into category hierarchies
Journal articles indexed by subject categories (e.g., the Library of Congress, MEDLINE, etc.)
Census Bureau survey responses coded by occupation
Patents archived using the International Patent Classification
Patient records coded using international insurance categories
E-mail message filtering
News events tracked and filtered by topics
Spam vs. non-spam

Slide adapted from Paul Bennett

4

Yahoo News Categories

5

Why not a semi-automatic text categorization tool?
Humans can encode knowledge of what constitutes membership in a category.
This encoding can then be automatically applied by a machine to categorize new examples. For example...

Slide adapted from Paul Bennett

6

Expert System (late 1980s)

Slide adapted from Paul Bennett

7

Rule-based Approach to Text Categorization
Text in a Web Page
“Saeco revolutionized espresso brewing a decade ago by introducing Saeco SuperAutomatic machines, which go from bean to coffee at the touch of a button. The all-new Saeco Vienna Super-Automatic home coffee and cappucino machine combines top quality with low price!”

Rules
Rule 1. (espresso or coffee or cappucino) and machine* → Coffee Maker
Rule 2. automat* and answering and machine* → Phone
Rule ...
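A minimal Python sketch of how such hand-written rules might be executed; the regex rendering and the categorize helper are illustrative assumptions, not the actual expert system:

import re

# Each rule: a list of term patterns that must ALL occur, plus the
# category it assigns; "machine*" becomes the prefix pattern r"machine\w*"
RULES = [
    ([r"\b(espresso|coffee|cappucino)\b", r"\bmachine\w*"], "Coffee Maker"),
    ([r"\bautomat\w*", r"\banswering\b", r"\bmachine\w*"], "Phone"),
]

def categorize(text):
    # return the category of the first rule whose terms all match, else None
    for patterns, category in RULES:
        if all(re.search(p, text, re.IGNORECASE) for p in patterns):
            return category
    return None

page = ("Saeco revolutionized espresso brewing a decade ago by introducing "
        "Saeco SuperAutomatic machines ...")
print(categorize(page))  # the page mentions "espresso" and "machines",
                         # so Rule 1 fires -> "Coffee Maker"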

Slide adapted from Paul Bennett

8

Defining Rules By Hand
This is fine for low-stakes applications
Google and Yahoo alerts allow users to automatically receive news articles containing certain keywords
Called “filtering” or “routing”
Works fine when it’s OK to miss some things

But when high accuracy is required, experience has shown manual rule writing to be:
too time consuming
too difficult
prone to inconsistency (as the rule set gets large)
9

Slide adapted from Paul Bennett

Replace Knowledge Engineering with a Statistical Learner

Slide adapted from Paul Bennett

10

Cost of Manual Text Categorization
Yahoo!
200 (?) people for manual labeling of Web pages
using a hierarchy of 500,000 categories

MEDLINE (National Library of Medicine)
$2 million/year for manual indexing of journal articles
using Medical Subject Headings (MeSH; 18,000 categories)

Mayo Clinic
$1.4 million annually for coding patient-record events
using the International Classification of Diseases (ICD) for billing insurance companies

US Census Bureau decennial census (1990: 22 million responses)
232 industry categories and 504 occupation categories
$15 million if fully done by hand

Slide adapted from Paul Bennett

11

Knowledge Engineering vs. Statistical Learning

For US Census Bureau Decennial Census 1990
232 industry categories and 504 occupation categories
$15 million if fully done by hand

Define classification rules manually:
Expert system AIOCS
Development time: 192 person-months (2 people, 8 years)
Accuracy = 47%

Learn a classification function:
Nearest-neighbor classification (Creecy ’92: 1-NN)
Development time: 4 person-months (Thinking Machines)
Accuracy = 60%

Slide adapted from Paul Bennett

12

Text Topic categorization
Topic categorization: classify the document into semantic topics

"The U.S. swept into the Davis Cup final on Saturday when twins Bob and Mike Bryan defeated Belarus's Max Mirnyi and Vladimir Voltchkov to give the Americans an unsurmountable 3-0 lead in the best-of-five semi-final tie." (topic: sports)

"One of the strangest, most relentless hurricane seasons on record reached new bizarre heights yesterday as the plodding approach of Hurricane Jeanne prompted evacuation orders for hundreds of thousands of Floridians and high wind warnings that stretched 350 miles from the swamp towns south of Miami to the historic city of St. Augustine." (topic: weather)
13

The Reuters collection
A gold standard:
Collection of 21,578 newswire documents
A standard text collection for research, used to compare systems and algorithms
135 valid topic categories

14

Reuters
Top topics in Reuters

15

Reuters Document Example
<REUTERS TOPICS="YES" LEWISSPLIT="TRAIN" CGISPLIT="TRAINING-SET" OLDID="12981" NEWID="798">
<DATE> 2-MAR-1987 16:51:43.42</DATE>
<TOPICS><D>livestock</D><D>hog</D></TOPICS>
<TEXT>
<TITLE>AMERICAN PORK CONGRESS KICKS OFF TOMORROW</TITLE>
<DATELINE> CHICAGO, March 2 - </DATELINE><BODY>The American Pork Congress kicks off tomorrow, March 3, in Indianapolis with 160 of the nations pork producers from 44 member states determining industry positions on a number of issues, according to the National Pork Producers Council, NPPC. Delegates to the three day Congress will be considering 26 resolutions concerning various issues, including the future direction of farm policy and the tax law as it applies to the agriculture sector. The delegates will also debate whether to endorse concepts of a national PRV (pseudorabies virus) control and eradication program, the NPPC said. A large trade show, in conjunction with the congress, will feature the latest in technology in all areas of the industry, the NPPC added. Reuter
&#3;</BODY></TEXT></REUTERS>
16

Classification vs. Clustering
Classification assumes labeled data: we know how many classes there are, and we have labeled examples for each class. Classification is supervised.
In clustering we have no labeled data; we just assume there is a natural division in the data, and we may not know how many divisions (clusters) there are. Clustering is unsupervised.
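The difference shows up directly in the data: classification starts from labeled (text, class) pairs, clustering from bare texts. A toy sketch:

# classification: labeled examples, classes known in advance
labeled = [("Bryan twins win Davis Cup tie", "sport"),
           ("Hurricane Jeanne nears Florida", "weather")]

# clustering: unlabeled examples; the groupings must emerge from the data
unlabeled = ["Bryan twins win Davis Cup tie",
             "Hurricane Jeanne nears Florida",
             "Schumacher takes pole in Shanghai"]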

17

Classification
(Figure slides 18-21: data points being assigned to two known classes, Class1 and Class2)

Clustering
(Figure slides 22-26: unlabeled data points grouped into clusters that emerge from the data)

Categories (Labels, Classes)
Labeling data involves two problems:
1. Deciding on the possible classes (which ones, how many)
   Domain and application dependent
2. Labeling the text
   Difficult, time consuming; inconsistency between annotators

27

Reuters Example, revisited
Why not topic = policy?
<REUTERS TOPICS="YES" LEWISSPLIT="TRAIN" CGISPLIT="TRAINING-SET" OLDID="12981" NEWID="798">
<DATE> 2-MAR-1987 16:51:43.42</DATE>
<TOPICS><D>livestock</D><D>hog</D></TOPICS>
<TEXT>
<TITLE>AMERICAN PORK CONGRESS KICKS OFF TOMORROW</TITLE>
<DATELINE> CHICAGO, March 2 - </DATELINE><BODY>The American Pork Congress kicks off tomorrow, March 3, in Indianapolis with 160 of the nations pork producers from 44 member states determining industry positions on a number of issues, according to the National Pork Producers Council, NPPC. Delegates to the three day Congress will be considering 26 resolutions concerning various issues, including the future direction of farm policy and the tax law as it applies to the agriculture sector. The delegates will also debate whether to endorse concepts of a national PRV (pseudorabies virus) control and eradication program, the NPPC said. A large trade show, in conjunction with the congress, will feature the latest in technology in all areas of the industry, the NPPC added. Reuter
&#3;</BODY></TEXT></REUTERS>
28

Binary vs. multi-way classification
Binary classification: two classes
Multi-way classification: more than two classes

Sometimes it can be convenient to treat a multi-way problem as a set of binary ones: one class versus all the others, for each class.
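A minimal sketch of this one-versus-rest decomposition (the helper and toy labels below are illustrative):

def one_vs_rest_labels(labels, target):
    # relabel a multi-way dataset for a single binary task:
    # 1 for the target class, 0 for everything else
    return [1 if y == target else 0 for y in labels]

labels = ["sport", "weather", "sport", "politics"]
for cls in sorted(set(labels)):
    binary = one_vs_rest_labels(labels, cls)
    # train one binary classifier per class on `binary`; at prediction
    # time, pick the class whose classifier is most confident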

29

Flat vs. Hierarchical classification
Flat classification: relations between the classes are undetermined
Hierarchical classification: the classes form a hierarchy in which each node is a subclass of its parent node

30

Single- vs. multi-category classification
In single-category text classification, each text belongs to exactly one category
In multi-category text classification, each text can have zero or more categories
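One toy way to represent the two settings (document names invented; the multi-category labels echo the Reuters example):

single_category = {"doc1": "sport"}              # exactly one label per text

multi_category = {"doc2": {"livestock", "hog"},  # one or more labels...
                  "doc3": set()}                 # ...or none at all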

31

Features
>>> text = "Seven-time Formula One champion Michael Schumacher took on the Shanghai circuit Saturday in qualifying for the first Chinese Grand Prix."
>>> label = "sport"
>>> labeled_text = LabeledText(text, label)

Here the classifier takes the whole string as input. What's the problem with that? What features could be useful for this example?
32

Feature terminology
Feature: An aspect of the text that is relevant to the task
Some typical features:
Words present in the text
Frequency of words
Capitalization
Are there named entities (NEs)?
WordNet
Others?

33

Feature terminology
Feature: An aspect of the text that is relevant to the task
Feature value: the realization of the feature in the text
Words present in text: Kerry, Schumacher, China…
Frequency of a word: Kerry (10), Schumacher (1)…
Are there dates? Yes/no
Are there PERSONs? Yes/no
Are there ORGANIZATIONs? Yes/no
WordNet: holonyms (China is part of Asia), synonyms (China, People's Republic of China, mainland China)

34

Feature Types
Boolean (or Binary) Features Features that generate boolean (binary) values. Boolean features are the simplest and the most common type of feature.
f1(text) = 1 if the text contains "Kerry", 0 otherwise
f2(text) = 1 if the text contains a PERSON, 0 otherwise
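In code, these two features might look as follows (a sketch; the PERSON test really requires a named-entity tagger, which is approximated here with a hypothetical name list):

def f1(text):
    # 1 if the text contains the token "Kerry", 0 otherwise
    return 1 if "Kerry" in text.split() else 0

PERSON_NAMES = {"Kerry", "Schumacher"}  # stand-in for a real NE tagger

def f2(text):
    # 1 if the text contains a PERSON, 0 otherwise
    return 1 if any(tok in PERSON_NAMES for tok in text.split()) else 0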

35

Feature Types
Integer Features Features that generate integer values. Integer features can be used to give classifiers access to more precise information about the text.
f1(text) = number of times the text contains "Kerry"
f2(text) = number of times the text contains a PERSON
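And the integer versions of the same two features (same caveat: the PERSON count stands in for a real named-entity tagger):

def f1(text):
    # number of times the text contains "Kerry"
    return text.split().count("Kerry")

PERSON_NAMES = {"Kerry", "Schumacher"}  # stand-in for a real NE tagger

def f2(text):
    # number of PERSON mentions, approximated with the name list
    return sum(1 for tok in text.split() if tok in PERSON_NAMES)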

36

Feature selection
How do we choose the “right” features? A future lecture

37

Classification
Define classes
Label text
Extract features
Choose a classifier
   The Naive Bayes classifier
   NN (perceptron)
   SVM
   …
Train it (and test it)
Use it to classify new examples:
>>> my_classifier.classify(token)
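Put together, with the current NLTK API this might look as follows (a sketch: LabeledText above comes from a much older NLTK release, and the two-document training set is purely illustrative):

import nltk

def features(text):
    # boolean bag-of-words features
    return {word.lower(): True for word in text.split()}

train = [(features("Schumacher wins the Chinese Grand Prix"), "sport"),
         (features("Hurricane Jeanne prompted evacuation orders"), "weather")]

my_classifier = nltk.NaiveBayesClassifier.train(train)
print(my_classifier.classify(features("Grand Prix qualifying on Saturday")))
# most likely prints "sport"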
38

Training
• Usually the classifier is defined by a set of parameters
• Training is the procedure for finding a “good” set of parameters
• Goodness is determined by an optimization criterion such as misclassification rate
• Some classifiers are guaranteed to find the optimal set of parameters

39

Testing, evaluation of the classifier
After choosing the parameters of the classifier (i.e., after training it), we need to test how well it is doing on a test set (not included in the training set): calculate the misclassification rate on the test set.

40

Evaluating classifiers
Contingency table for the evaluation of a binary classifier
                     GREEN is correct   RED is correct
GREEN was assigned          a                 b
RED was assigned            c                 d

Accuracy = (a+d)/(a+b+c+d)
Precision: P_GREEN = a/(a+b), P_RED = d/(c+d)
Recall: R_GREEN = a/(a+c), R_RED = d/(b+d)
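These formulas transcribe directly into code:

def evaluate(a, b, c, d):
    # a: GREEN assigned, GREEN correct   b: GREEN assigned, RED correct
    # c: RED assigned, GREEN correct     d: RED assigned, RED correct
    accuracy = (a + d) / (a + b + c + d)
    p_green, r_green = a / (a + b), a / (a + c)
    p_red, r_red = d / (c + d), d / (b + d)
    return accuracy, (p_green, r_green), (p_red, r_red)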
41

Training size
The more the better! (usually)
Results for text classification*

*From: Improving the Performance of Naive Bayes for Text Classification, Shen and Yang

42

Training size
(Figure slides 43-44: further text-classification results from Shen and Yang)

Training Size
Author identification

From: Authorship Attribution: A Comparison of Three Methods, Matthew Care

45

Upcoming
Classifiers
Feature selection algorithms

46

