Applied Natural Language Processing
Barbara Rosario
Word Senses
Words have multiple distinct meanings, or senses:
Plant: living plant, manufacturing plant, ...
Title: name of a work, ownership document, form of address, material at the start of a film, ...
Maybe it's just text categorization: each word sense represents a topic
Parsing
For PP attachment, for example
Information retrieval
To return documents with the right sense of bank
Adapted from Dan Klein's CS 288 slides
Resources
WordNet
Hand-built (but large) hierarchy of word senses; basically a hierarchical thesaurus
SensEval
A WSD competition. Training and test sets for a wide range of words, difficulties, and parts of speech. A bake-off where lots of labs tried lots of competing approaches.
SemCor
A big chunk of the Brown corpus annotated with WordNet senses
Other Resources
The Open Mind Word Expert
Parallel texts
Taken from Dan Klein's CS 288 slides
Features
Bag-of-words (use the surrounding words, ignoring order)
The manufacturing plant which had previously sustained the town's economy shut down after an extended labor strike. Bag of words = {after, manufacturing, which, labor, ...}
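For concreteness, here is a minimal sketch (not from the slides) of extracting such bag-of-words context features in Python; the whitespace tokenizer and the window size are simplifying assumptions.

from collections import Counter

def bag_of_words(sentence, target, window=5):
    # Unordered counts of the words within +/- `window` tokens of `target`.
    tokens = [t.strip(".,").lower() for t in sentence.split()]
    features = Counter()
    for i, tok in enumerate(tokens):
        if tok == target:
            left = tokens[max(0, i - window):i]
            right = tokens[i + 1:i + 1 + window]
            features.update(left + right)
    return features

example = ("The manufacturing plant which had previously sustained the "
           "town's economy shut down after an extended labor strike.")
print(bag_of_words(example, "plant"))
# Counter({'the': 2, 'manufacturing': 1, 'which': 1, 'had': 1, ...})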
Verb WSD
Why are verbs harder?
Verbal senses are less topical
More sensitive to structure and argument choice
Better disambiguated by their arguments (subject, object): importance of local information
For nouns, a wider context is more likely to be useful
Better features
There are smarter features:
Argument selectional preference:
serve NP[meals] vs. serve NP[papers] vs. serve NP[country]
Subcategorization:
[function] serve PP[as]
[enable] serve VP[to]
[tennis] serve <intransitive>
[food] serve NP {PP[to]}
These can be captured poorly (but robustly) with local windows, but we can also use a parser and get these features explicitly (see the sketch below).
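As an illustration only (not from the slides), parser-derived features like these might be encoded as follows; the (head, relation, dependent) triples are an assumed stand-in for real dependency-parser output, and the relation names are invented.

def verb_features(triples, verb):
    # Collect argument and subcategorization features for `verb`
    # from hypothetical dependency triples (head, relation, dependent).
    feats = set()
    for head, rel, dep in triples:
        if head == verb:
            feats.add("subcat=" + rel)            # e.g. takes a direct object, a PP[to], ...
            feats.add("arg:" + rel + "=" + dep)   # e.g. dobj=meals vs. dobj=papers
    return feats

# "She served meals to the guests."  (hand-written triples for illustration)
parse = [("served", "nsubj", "she"),
         ("served", "dobj", "meals"),
         ("served", "prep_to", "guests")]
print(verb_features(parse, "served"))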
Supervised learning
When we know the truth (the true senses); not always available, or easy
Classification task
Most systems do some kind of supervised learning
Many competing classification technologies perform about the same (it's all about the knowledge sources you tap)
Problem: training data available for only a few words
Example: Bayesian classification
Naïve Bayes (simplest example of graphical models)
Today
Introduction to probability theory
Introduction to graphical models
Probability theory plus graph theory
Why Probability?
Statistical NLP aims to do statistical inference for the field of NLP. Statistical inference consists of taking some data (generated in accordance with some unknown probability distribution) and then making some inference about this distribution.
Why Probability?
Examples of statistical inference are WSD, language modeling (e.g., how to predict the next word given the previous words), topic classification, etc. In order to do this, we need a model of the language. Probability theory helps us find such a model.
Probability Theory
How likely it is that something will happen. The sample space is the set of all possible outcomes of an experiment.
The sample space can be continuous or discrete. For language applications it is discrete (i.e., words).
[Probability basics figures from http://ai.stanford.edu/~paskin/gm-short-course/lec1.pdf]
Prior Probability
Prior probability: the probability before we consider any additional knowledge
P(A)
Conditional probability
Sometimes we have partial knowledge about the outcome of an experiment.
Conditional (or posterior) probability: suppose we know that event B is true; the probability that A is true given the knowledge about B is expressed by P(A | B).
P(A | B) = P(A, B) / P(B)
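As a quick numeric illustration with invented values, the conditional probability can be read off a joint distribution:

# Toy illustration of P(A | B) = P(A, B) / P(B); the numbers are made up.
joint = {(True, True): 0.20, (True, False): 0.30,
         (False, True): 0.10, (False, False): 0.40}   # P(A = a, B = b)

p_b = sum(p for (a, b), p in joint.items() if b)       # P(B) = 0.30
p_a_and_b = joint[(True, True)]                        # P(A, B) = 0.20
print(p_a_and_b / p_b)                                 # P(A | B) = 0.666...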
P(A, B) = P(A | B) P(B) = P(B | A) P(A)
Note: P(A, B) = P(A ∩ B)
Chain rule:
P(A, B) = P(A | B) P(B): the probability that A and B both happen is the probability that B happens times the probability that A happens given that B has occurred.
P(A, B) = P(B | A) P(A): the probability that A and B both happen is the probability that A happens times the probability that B happens given that A has occurred.
The joint distribution can be thought of as a multi-dimensional table with a value in every cell giving the probability of that specific state occurring.
Chain Rule
P(A,B) = P(A|B)P(B) = P(B|A)P(A)
P(A, B, C, D, ...) = P(A) P(B | A) P(C | A, B) P(D | A, B, C) ...
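For example, applied to a word sequence (the situation in language modeling), the chain rule gives:
P(w1, w2, ..., wn) = P(w1) P(w2 | w1) P(w3 | w1, w2) ... P(wn | w1, ..., wn-1)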
Bayes' rule
Useful when one quantity is easier to calculate than another; a trivial consequence of the definitions we saw, but extremely useful.
Bayes' rule
P(A | B) = P(B | A) P(A) / P(B)
Example
S: stiff neck, M: meningitis
P(S | M) = 0.5, P(M) = 1/50,000, P(S) = 1/20
I have a stiff neck, should I worry? That is, what is P(M | S)?
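Plugging the numbers into Bayes' rule:
P(M | S) = P(S | M) P(M) / P(S) = (0.5 × 1/50,000) / (1/20) = 0.0002
So, given a stiff neck, the probability of meningitis is only about 1 in 5,000.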
(Conditional) independence
Two events A and B are independent of each other if P(A) = P(A | B)
Two events A and B are conditionally independent of each other given C if P(A | C) = P(A | B, C)
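Equivalently, conditional independence given C means the joint factors as
P(A, B | C) = P(A | C) P(B | C),
which follows from the chain rule: P(A, B | C) = P(A | B, C) P(B | C) = P(A | C) P(B | C).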
Back to language
Statistical NLP aims to do statistical inference for the field of NLP
Topic classification: P(topic | document)
Language models: P(word | previous words) (a minimal bigram sketch follows this list)
WSD: P(sense | word)
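A minimal bigram sketch of the language-modeling case (toy corpus, maximum-likelihood counts, no smoothing; the corpus is invented):

from collections import Counter, defaultdict

corpus = "the plant shut down the plant reopened the strike ended".split()

# Count bigrams (previous word, current word)
bigram_counts = defaultdict(Counter)
for prev, curr in zip(corpus, corpus[1:]):
    bigram_counts[prev][curr] += 1

def p_next(word, prev):
    # MLE estimate of P(word | prev); 0.0 for an unseen history
    total = sum(bigram_counts[prev].values())
    return bigram_counts[prev][word] / total if total else 0.0

print(p_next("plant", "the"))   # 2/3 in this toy corpus
print(p_next("strike", "the"))  # 1/3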
Estimation of P
Frequentist statistics
Parametric
Non-parametric (distribution-free)
Bayesian statistics
Bayesian statistics measures degrees of belief. Degrees are calculated by starting with prior beliefs and updating them in the face of evidence, using Bayes' theorem.
Inference
The central problem of computational probability theory is the inference problem: given a set of random variables X1, ..., Xk and their joint density P(X1, ..., Xk), compute one or more conditional densities given observations.
Compute
P(X1 | X2, ..., Xk)
P(X3 | X1)
P(X1, X2 | X3, X4)
etc.
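As a toy example with made-up numbers, one such conditional can be computed from the joint by summing out the unobserved variables and renormalizing:

from collections import defaultdict

# Joint density P(X1, X2, X3) over three binary variables (values invented)
joint = {(0, 0, 0): 0.10, (0, 0, 1): 0.05, (0, 1, 0): 0.15, (0, 1, 1): 0.10,
         (1, 0, 0): 0.20, (1, 0, 1): 0.05, (1, 1, 0): 0.25, (1, 1, 1): 0.10}

def p_x1_given_x2(x2_obs):
    # P(X1 | X2 = x2_obs): marginalize out X3, then renormalize by P(X2 = x2_obs)
    unnorm = defaultdict(float)
    for (x1, x2, x3), p in joint.items():
        if x2 == x2_obs:
            unnorm[x1] += p
    z = sum(unnorm.values())
    return {x1: p / z for x1, p in unnorm.items()}

print(p_x1_given_x2(1))   # {0: 0.416..., 1: 0.583...}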
For WSD, choose the sense s' = argmax_sk P(sk | c), where c is the context of the ambiguous word.
s' = argmax_sk P(sk | c) = argmax_sk P(c | sk) P(sk) / P(c) = argmax_sk P(c | sk) P(sk)   (by Bayes' rule; P(c) is constant across senses)
Naïve Bayes classifier, widely used in machine learning
Estimate P(c | sk) and P(sk)
Naïve Bayes assumption: P(c | sk) = Π_{vj in c} P(vj | sk), so s' = argmax_sk P(sk) Π_{vj in c} P(vj | sk)
Two consequences:
All the structure and linear ordering of words within the context is ignored (bag-of-words model)
The presence of one word in the context is independent of the others
Not true, but the model is easier and very efficient ('easier' and 'efficient' mean something specific in the probabilistic framework).
We'll see this later (briefly: easier to estimate the parameters and more efficient inference).
The Naïve Bayes assumption is inappropriate if there are strong dependencies, but it often does very well (partly because the decision may be optimal even if the assumption is not correct).
Estimation
P(vj | sk) = C(vj, sk) / C(sk)   (count of vj in contexts with sense sk, divided by the count of sk)
P(sk) = C(sk) / C(w)   (prior probability of sk; C(w) is the total count of the ambiguous word w)
Training (pseudocode):
for all senses sk of w do
    for all words vj in the vocabulary do
        P(vj | sk) = C(vj, sk) / C(sk)
    end
end
for all senses sk of w do
    P(sk) = C(sk) / C(w)
end
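Putting the pieces together, here is a minimal Naïve Bayes disambiguator sketch in Python; the training contexts are invented, and add-one smoothing is used so unseen context words do not zero out a sense (the pseudocode above uses plain maximum-likelihood counts).

import math
from collections import Counter, defaultdict

# Invented (context words, sense) training pairs for the ambiguous word "plant"
training = [
    (["workers", "shut", "down", "manufacturing", "strike"], "factory"),
    (["factory", "labor", "production", "shut"], "factory"),
    (["water", "sunlight", "grows", "leaves"], "living"),
    (["garden", "soil", "flower", "grows"], "living"),
]

sense_counts = Counter()               # C(sk)
word_counts = defaultdict(Counter)     # C(vj, sk)
vocab = set()
for context, sense in training:
    sense_counts[sense] += 1
    for w in context:
        word_counts[sense][w] += 1
        vocab.add(w)

def p_word_given_sense(w, sk):
    # Add-one smoothing so unseen words get a small non-zero probability
    return (word_counts[sk][w] + 1) / (sum(word_counts[sk].values()) + len(vocab))

def disambiguate(context):
    total = sum(sense_counts.values())
    scores = {}
    for sk in sense_counts:
        score = math.log(sense_counts[sk] / total)        # log P(sk)
        for w in context:
            score += math.log(p_word_given_sense(w, sk))  # + sum of log P(vj | sk)
        scores[sk] = score
    return max(scores, key=scores.get)

print(disambiguate(["union", "strike", "shut", "down"]))  # -> "factory"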
Next week
Introduction to Graphical Models Part of speech tagging Readings:
Chapter 5 of the NLTK book; Chapter 10 of Foundations of Statistical NLP