
The University of Jordan

N-Gram Language Model


Chapter 3
Based on Speech and Language Processing, Daniel Jurafsky & James H. Martin, 2023

Introduction
• Let’s predict what word is likely to follow:
• Please turn your homework ….
• Most likely in; possibly over, but surely not the
• It will be very helpful if we can assign a probability to each possible next word
• Models that assign probabilities to upcoming words, or to sequences of words in general, are called language models or LMs.
• The large language models (LLMs) that revolutionized modern NLP are trained just by predicting words

Introduction
• Language models can also assign a probability to an entire sentence
• For example, they can predict that the following sequence has a much higher probability of appearing in a text:
• all of a sudden I notice three guys standing on the sidewalk
• than does this same set of words in a different order:
• on guys all I of notice sidewalk three a sudden standing the

• Why does it matter what the probability of a sentence is, or how probable the next word is?

Why calculate probabilities?

• Correcting spelling errors
• (Their are two midterm) or (There are two midterms)
• Correcting grammar
• (Everything has improve) or (Everything has improved)
• Speech recognition
• (I will be back soonish) or (I will be bassoon dish)

N-gram
• An n-gram is a sequence of n words:
• A 2-gram or bigram is a two-word sequence of words like “please turn”, “turn your”, or “your homework”
• A 3-gram or trigram is a three-word sequence of words like “please turn your” or “turn your homework”
• We use the term n-gram to mean a probabilistic model that can estimate the probability of a word given the n−1 previous words, and thereby also assign probabilities to entire sequences.
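
A minimal sketch (added here, not from the slides) of extracting n-grams from a tokenized word sequence; the whitespace tokenization is a simplifying assumption:

def ngrams(tokens, n):
    """Return the list of n-grams (as tuples) in a token sequence."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = "please turn your homework".split()   # naive whitespace tokenization
print(ngrams(tokens, 2))   # [('please', 'turn'), ('turn', 'your'), ('your', 'homework')]
print(ngrams(tokens, 3))   # [('please', 'turn', 'your'), ('turn', 'your', 'homework')]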

N-Grams
• Compute the probability of a word w given some history h: P(w|h)
• Suppose h is “its water is so transparent that”
• Suppose the word w is “the”
• We want to know P(the | its water is so transparent that)
• How?

N-Grams – Relative frequency count


• Relative frequency count:
• take a very large corpus
• count the number of times we see its water is so transparent that
• count the number of times this is followed by the
• This answers the question “Out of the times we saw the history h, how many times was it followed by the word w?”, as follows:

  P(the | its water is so transparent that) = C(its water is so transparent that the) / C(its water is so transparent that)
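
As an illustrative sketch (an addition, not part of the slides), the relative-frequency estimate P(w | h) = C(h w) / C(h) could be computed over a tokenized corpus like this; corpus_tokens is a hypothetical token list:

def count_sequence(tokens, seq):
    """Count occurrences of the token sequence `seq` inside `tokens`."""
    n = len(seq)
    return sum(1 for i in range(len(tokens) - n + 1) if tokens[i:i + n] == seq)

def relative_frequency(tokens, history, word):
    """Estimate P(word | history) as C(history word) / C(history)."""
    h = history.split()
    c_h = count_sequence(tokens, h)
    c_hw = count_sequence(tokens, h + [word])
    return c_hw / c_h if c_h else 0.0

# Hypothetical usage over a large tokenized corpus:
# relative_frequency(corpus_tokens, "its water is so transparent that", "the")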

N-Grams – Relative frequency count


• Where do we get the corpus? Is the web enough?
• New sentences are created all the time, and we won’t always be able
to count entire sentences
• If we want to know joint probability of an entire sequence of words
like its water is so transparent, we could do it by asking “out of all
possible sequences of five words, how many of them are its water is
so transparent?”
• We would have to get the count of its water is so transparent and
divide by the sum of the counts of all possible five-word sequences.
• Too much to count, right?

N-Gram – Chain rule of probability


• To represent the probability of a particular random variable Xi taking on the value “the”, we write P(Xi = “the”), or simply P(the).
• We represent a sequence of n words as w1 … wn, or w1:n
• For the joint probability of each word in a sequence having a particular value, P(X1 = w1, X2 = w2, X3 = w3, ..., Xn = wn), we’ll use P(w1, w2, ..., wn)
• How can we compute probabilities of entire sequences like P(w1, w2, ..., wn)?

N-Gram – Chain rule of probability


• Decompose this probability using the chain rule of probability:

  P(X1 ... Xn) = P(X1) P(X2 | X1) P(X3 | X1:2) ... P(Xn | X1:n−1)

• Applying the chain rule to words:

  P(w1:n) = P(w1) P(w2 | w1) P(w3 | w1:2) ... P(wn | w1:n−1) = ∏k=1..n P(wk | w1:k−1)
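
As a concrete illustration (a worked example added here, using the running phrase from the N-Grams slide), the chain rule expands a five-word sequence as:

  P(its water is so transparent) = P(its) × P(water | its) × P(is | its water) × P(so | its water is) × P(transparent | its water is so)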

N-Gram – Chain rule of probability


• We don’t know any way to compute the exact probability of a word
given a long sequence of preceding words, P(wn|w1:n−1).
• We can’t just estimate by counting the number of times every word
occurs following every long string, because language is creative and
any particular context might have never occurred before!
• What can we do?

Bigram model
• The bigram model approximates the probability of a word given all the previous words, P(wn | w1:n−1), by using only the conditional probability of the preceding word, P(wn | wn−1)
• Instead of computing this: P(wn | w1:n−1)
• we approximate it with: P(wn | wn−1), i.e.

  P(wn | w1:n−1) ≈ P(wn | wn−1)

N-Grams Model
• We can extend to trigrams, 4-grams, 5-grams
• In general, this is an insufficient model of language
• because language has long-distance dependencies:
“The computer which I had just put into the machine room on the fifth floor
crashed.”

• But we can often get away with N-gram models



Bigram model
• This approximation is called the Markov assumption
• Markov models are the class of probabilistic models that assume we can predict the probability of some future unit without looking too far into the past.
• Generalizing from bigram to trigram to n-gram:

  P(wn | w1:n−1) ≈ P(wn | wn−N+1:n−1)

• N = 2 means bigram, N = 3 means trigram.

• How do we estimate these bigram or n-gram probabilities?

Maximum likelihood estimation (MLE)


• Get the counts from a corpus
• Normalize the counts so that they lie between 0 and 1
• For example, to compute a particular bigram probability of a word wn given a previous word wn−1, we compute the count of the bigram C(wn−1 wn) and normalize by the sum of all the bigrams that share the same first word wn−1:

  P(wn | wn−1) = C(wn−1 wn) / Σw C(wn−1 w) = C(wn−1 wn) / C(wn−1)
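
A small sketch (added for illustration; the toy corpus and function names are hypothetical) of computing unsmoothed bigram MLE probabilities with sentence-boundary markers:

from collections import Counter

def train_bigram_mle(sentences):
    """Estimate unsmoothed bigram probabilities P(w_n | w_{n-1}) by MLE.

    `sentences` is a list of token lists; <s> and </s> mark sentence boundaries.
    """
    unigram_counts = Counter()
    bigram_counts = Counter()
    for sent in sentences:
        tokens = ["<s>"] + sent + ["</s>"]
        unigram_counts.update(tokens[:-1])             # denominators C(w_{n-1})
        bigram_counts.update(zip(tokens, tokens[1:]))  # numerators C(w_{n-1} w_n)
    return {(prev, w): c / unigram_counts[prev]
            for (prev, w), c in bigram_counts.items()}

# Hypothetical toy corpus (not the slide's example):
corpus = [["i", "am", "sam"], ["sam", "i", "am"]]
probs = train_bigram_mle(corpus)
print(probs[("<s>", "i")])   # C(<s> i) / C(<s>) = 1/2 = 0.5
print(probs[("i", "am")])    # C(i am) / C(i) = 2/2 = 1.0

The denominator C(wn−1) is taken over every position that has a following word, which is why </s> is excluded from the unigram counts.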

Example
• Consider a mini corpus of three sentences:

• Calculate:

Example
Berkeley Restaurant Project sentences
• can you tell me about any good cantonese restaurants close by
• mid priced thai food is what i’m looking for
• tell me about chez panisse
• can you give me a listing of the kinds of food that are available
• i’m looking for a good place to eat breakfast
• when is caffe venezia open during the day

Raw bigram counts


• Out of 9222 sentences

Raw bigram probabilities


• Normalize by unigrams:

• Result:

Bigram estimates of sentence probabilities


Here are a few other useful probabilities:
P(i|<s>) = 0.25
P(english|want) = 0.0011
P(food|english) = 0.5
P(</s>|food) = 0.68

Now we can compute the probability of sentences like "I want English food" or "I want Chinese food" by simply multiplying the appropriate bigram probabilities together, as follows:

P(<s> I want english food </s>)
  = P(I|<s>) × P(want|I) × P(english|want) × P(food|english) × P(</s>|food)
  = 0.25 × 0.33 × 0.0011 × 0.5 × 0.68
  = .000031
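
A short sketch (an addition; the dictionary below simply restates the bigram probabilities on this slide) of scoring a sentence by multiplying bigram probabilities:

bigram_p = {
    ("<s>", "i"): 0.25,
    ("i", "want"): 0.33,
    ("want", "english"): 0.0011,
    ("english", "food"): 0.5,
    ("food", "</s>"): 0.68,
}

def sentence_probability(tokens, bigram_p):
    """Multiply bigram probabilities; returns 0.0 if any bigram is unseen."""
    tokens = ["<s>"] + tokens + ["</s>"]
    p = 1.0
    for bigram in zip(tokens, tokens[1:]):
        p *= bigram_p.get(bigram, 0.0)
    return p

print(sentence_probability("i want english food".split(), bigram_p))  # ≈ 3.1e-05

In practice we sum log probabilities instead of multiplying raw probabilities, since products of many small numbers underflow.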

Training and Test Sets


• The probabilities of an N-gram model come from the corpus it is
trained on.
• The parameters of a statistical model are trained on some set of data,
and then we apply the models to some new data in some task (such
as speech recognition) and see how well they work.
• Training set and a test set (or a training corpus and a test corpus).
• This training-and-testing paradigm can also be used to evaluate
different N-gram architectures.

Extrinsic evaluation of N-gram models


• Best evaluation for comparing models A and B
• Put each model in a task
• spelling corrector, speech recognizer, MT system
• Run the task, get an accuracy for A and for B
• How many misspelled words corrected properly
• How many words translated correctly
• Compare accuracy for A and B

Difficulty of extrinsic (in-vivo) evaluation of N-gram models
• Extrinsic evaluation
• Time-consuming; can take days or weeks
• So
• Sometimes use intrinsic evaluation: perplexity
• Bad approximation
• unless the test data looks just like the training data
• So generally only useful in pilot experiments
• But is helpful to think about.

Perplexity
• The perplexity (sometimes abbreviated as PP or PPL) of a language model on a test set is the inverse probability of the test set
• one over the probability of the test set, normalized by the number of words.
• For a test set W = w1 w2 ... wN:

  PP(W) = P(w1 w2 ... wN)^(−1/N)

• Or we can use the chain rule to expand the probability:

  PP(W) = ( ∏i=1..N 1 / P(wi | w1 ... wi−1) )^(1/N)

Perplexity
• The best language model is one that best predicts an unseen test set
  • It gives the highest P(sentence)

• Perplexity is the inverse probability of the test set, normalized by the number of words:

  PP(W) = P(w1 w2 ... wN)^(−1/N) = ( 1 / P(w1 w2 ... wN) )^(1/N)

• Chain rule:

  PP(W) = ( ∏i=1..N 1 / P(wi | w1 ... wi−1) )^(1/N)

• For bigrams:

  PP(W) = ( ∏i=1..N 1 / P(wi | wi−1) )^(1/N)

• Minimizing perplexity is the same as maximizing probability
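
A minimal sketch (an addition, assuming the hypothetical bigram_p dictionary from the earlier example) of computing perplexity in log space:

import math

def perplexity(test_tokens, bigram_p):
    """PP(W) = P(w1 ... wN)^(-1/N), computed in log space to avoid underflow."""
    tokens = ["<s>"] + test_tokens + ["</s>"]
    log_p, n = 0.0, 0
    for bigram in zip(tokens, tokens[1:]):
        p = bigram_p.get(bigram, 0.0)
        if p == 0.0:
            return float("inf")   # any zero-probability bigram makes PP infinite
        log_p += math.log(p)
        n += 1
    return math.exp(-log_p / n)

# perplexity("i want english food".split(), bigram_p)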



Intuition of Perplexity
• The Shannon Game: How well can we predict the next word?
  • I always order pizza with cheese and ____  (mushrooms 0.1, pepperoni 0.1, anchovies 0.01, …, fried rice 0.0001, …, and 1e-100)
  • The 33rd President of the US was ____
  • I saw a ____
• Unigrams are terrible at this game. (Why?)
• A better model of a text is one which assigns a higher probability to the word that actually occurs

Perplexity as branching factor


• Let’s consider a sentence consisting of random digits
• What is the perplexity of this sentence according to a model that assigns P = 1/10 to each digit?
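
A short worked answer (added for completeness): for a sentence of N random digits, each assigned probability 1/10,

  PP(W) = P(w1 w2 ... wN)^(−1/N) = ( (1/10)^N )^(−1/N) = 10

so the perplexity equals the branching factor of 10.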

Lower perplexity = better model

• Training 38 million words, test 1.5 million words, WSJ

N-gram order:   Unigram   Bigram   Trigram
Perplexity:     962       170      109



The perils of overfitting


• N-grams only work well for word prediction if the test corpus looks
like the training corpus
• In real life, it often doesn’t
• We need to train robust models that generalize!
• One kind of generalization: Zeros!
• Things that don’t ever occur in the training set
• But occur in the test set

Zeros
• Training set:                      • Test set:
  … denied the allegations            … denied the offer
  … denied the reports                … denied the loan
  … denied the claims
  … denied the request

  P(“offer” | denied the) = 0

• For these reasons, we want to modify the maximum likelihood estimates for computing N-gram probabilities, focusing on the N-gram events that we incorrectly assumed had zero probability.

Zero probability bigrams


• Bigrams with zero probability
• mean that we will assign 0 probability to the test set!
• And hence we cannot compute perplexity (can’t divide by 0)!

The intuition of smoothing (from Dan Klein)


• When we have sparse statistics:
  P(w | denied the): 3 allegations, 2 reports, 1 claims, 1 request (7 total)
  (other context words such as “outcome”, “attack”, and “man” never occur after “denied the”, so they get zero counts)
• Steal probability mass to generalize better:
  P(w | denied the): 2.5 allegations, 1.5 reports, 0.5 claims, 0.5 request, 2 other (7 total)

Add-one estimation
• Also called Laplace smoothing
• Pretend we saw each word one more time than we did
• Just add one to all the counts!
• MLE estimate:    P_MLE(wi | wi−1) = c(wi−1, wi) / c(wi−1)

• Add-1 estimate:  P_Add-1(wi | wi−1) = (c(wi−1, wi) + 1) / (c(wi−1) + V)
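
A minimal sketch (an addition; the toy sentences are hypothetical) of Laplace-smoothed bigram estimation, where V is the vocabulary size:

from collections import Counter

def add_one_bigram(sentences):
    """Add-1 (Laplace) smoothed bigram model:
    P(w_n | w_{n-1}) = (C(w_{n-1} w_n) + 1) / (C(w_{n-1}) + V)."""
    unigram_counts, bigram_counts, vocab = Counter(), Counter(), set()
    for sent in sentences:
        tokens = ["<s>"] + sent + ["</s>"]
        vocab.update(tokens)          # counting <s>/</s> in V is a convention choice
        unigram_counts.update(tokens[:-1])
        bigram_counts.update(zip(tokens, tokens[1:]))
    V = len(vocab)
    def prob(prev, word):
        return (bigram_counts[(prev, word)] + 1) / (unigram_counts[prev] + V)
    return prob

p = add_one_bigram([["i", "want", "food"], ["i", "want", "tea"]])
print(p("want", "food"))   # seen bigram
print(p("want", "milk"))   # unseen bigram still gets probability > 0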

Maximum Likelihood Estimates


• The maximum likelihood estimate
• of some parameter of a model M from a training set T
• maximizes the likelihood of the training set T given the model M
• Suppose the word “bagel” occurs 400 times in a corpus of a
million words
• What is the probability that a random word from some other text
will be “bagel”?
• MLE estimate is 400/1,000,000 = .0004
• This may be a bad estimate for some other corpus
• But it is the estimate that makes it most likely that “bagel” will occur 400
times in a million word corpus.

Berkeley Restaurant Corpus: Laplace-smoothed bigram counts

Laplace-smoothed bigrams

Reconstituted counts

Compare with raw bigram counts



Add-1 estimation is a blunt instrument


• So add-1 isn’t used for N-grams:
• We’ll see better methods
• But add-1 is used to smooth other NLP models
• For text classification
• In domains where the number of zeros isn’t so huge.

Good-Turing Discounting
• Re-estimate the amount of probability mass to assign to N-grams with zero counts by looking at the number of N-grams that occurred one time.
• A word or N-gram (or any event) that occurs once is called a singleton, or a hapax legomenon.
• The Good-Turing intuition is to use the frequency of singletons as a re-estimate of the frequency of zero-count bigrams.

Good-Turing Discounting
• The Good-Turing algorithm is based on computing Nc, the number of
N-grams that occur c times.
• We refer to the number of N-grams that occur c times as the
frequency of frequency c.
• So applying the idea to smoothing the joint probability of bigrams,
N0 is the number of bigrams with count 0, N1 the number of bigrams
with count 1 (singletons), and so on.

Good-Turing Discounting
• We can think of each of the Nc as a bin which stores the number of different N-
grams that occur in the training set with that frequency c:

• to re-estimate the smoothed count c for N0, we use the following equation for
the probability P*GT for things that had zero count N0, or what we might call the
missing mass:
• P*GT (things with frequency zero in training) = N1/N

• See example in the book


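
An illustrative sketch (an addition; the example counts are made up) of the Good-Turing "missing mass" estimate, P*_GT(unseen) = N1 / N:

from collections import Counter

def good_turing_missing_mass(bigram_counts):
    """Total probability mass re-assigned to unseen bigrams: N1 / N,
    where N1 = number of bigrams seen exactly once, N = total bigram tokens."""
    freq_of_freq = Counter(bigram_counts.values())   # Nc: how many bigrams occur c times
    n1 = freq_of_freq[1]
    n_total = sum(bigram_counts.values())
    return n1 / n_total if n_total else 0.0

counts = Counter({("i", "want"): 5, ("want", "food"): 3,
                  ("want", "tea"): 1, ("food", "</s>"): 1})
print(good_turing_missing_mass(counts))   # 2 singletons / 10 bigram tokens = 0.2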

Interpolation
• If we are trying to compute P(wn|wn−1wn−2), but we have no examples of a
particular trigram wn−2wn−1wn, we can instead estimate its probability by using
the bigram probability P(wn|wn−1).
• Similarly, if we don’t have counts to compute P(wn|wn−1), we can look to the
unigram P(wn)
• There are two ways to use this N-gram “hierarchy”, backoff and interpolation.
• In backoff, if we have non-zero trigram counts, we rely solely on the trigram
counts. We only “back off” to a lower order N-gram if we have zero evidence
for a higher-order N-gram.
• In interpolation, we always mix the probability estimates from all the N-gram
estimators, i.e., we do a weighted interpolation of trigram, bigram, and
unigram counts.

Linear Interpolation
• Simple interpolation:

  P̂(wn | wn−2 wn−1) = λ1 P(wn) + λ2 P(wn | wn−1) + λ3 P(wn | wn−2 wn−1),   where Σi λi = 1

• Lambdas conditional on context:

  P̂(wn | wn−2 wn−1) = λ1(wn−2:n−1) P(wn) + λ2(wn−2:n−1) P(wn | wn−1) + λ3(wn−2:n−1) P(wn | wn−2 wn−1)
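
A minimal sketch (an addition; the lambda values and probability tables are hypothetical placeholders) of simple linear interpolation of unigram, bigram, and trigram estimates:

def interpolated_trigram_prob(w, w1, w2, unigram_p, bigram_p, trigram_p,
                              lambdas=(0.1, 0.3, 0.6)):
    """P_hat(w | w1 w2) = l1*P(w) + l2*P(w | w2) + l3*P(w | w1 w2), with l1 + l2 + l3 = 1."""
    l1, l2, l3 = lambdas
    return (l1 * unigram_p.get(w, 0.0)
            + l2 * bigram_p.get((w2, w), 0.0)
            + l3 * trigram_p.get((w1, w2, w), 0.0))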

How to set the lambdas?


• Use a held-out corpus: a held-out corpus is an additional training corpus that we use not to set the N-gram counts, but to set other parameters.

  Training Data | Held-Out Data | Test Data

• Choose λs to maximize the probability of the held-out data:
  • Fix the N-gram probabilities (on the training data)
  • Then search for the λs that give the largest probability to the held-out set:

  log P(w1 ... wn | M(λ1 ... λk)) = Σi log P_M(λ1...λk)(wi | wi−1)
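
A rough sketch (an addition; shown for the two-weight unigram + bigram case to keep it short) of picking the λs by grid search so that they maximize the log probability of a held-out set:

import math

def held_out_log_prob(heldout_bigrams, unigram_p, bigram_p, l1, l2):
    """Sum of log interpolated probabilities over the held-out bigrams."""
    total = 0.0
    for prev, w in heldout_bigrams:
        p = l1 * unigram_p.get(w, 0.0) + l2 * bigram_p.get((prev, w), 0.0)
        if p == 0.0:
            return float("-inf")
        total += math.log(p)
    return total

def choose_lambdas(heldout_bigrams, unigram_p, bigram_p, step=0.05):
    """Grid search over l1 + l2 = 1 for the weights with the highest held-out score."""
    best_score, best_lams = float("-inf"), (0.5, 0.5)
    n_steps = int(round(1 / step))
    for i in range(n_steps + 1):
        l1 = i * step
        score = held_out_log_prob(heldout_bigrams, unigram_p, bigram_p, l1, 1.0 - l1)
        if score > best_score:
            best_score, best_lams = score, (l1, 1.0 - l1)
    return best_lams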

H.W 3
• Write a program to compute unsmoothed unigrams, bigrams, and trigrams.
• Run your N-gram program on two different small corpora (use the links below). Now compare the statistics of the two corpora. What are the differences in the most common unigrams between the two? How about interesting differences in bigrams and trigrams?
• http://ar.wikipedia.org/wiki/%D8%A5%D9%86%D8%AA%D8%B1%D9%86%D8%AA
• http://arz.wikipedia.org/wiki/%D8%A7%D9%86%D8%AA%D8%B1%D9%86%D8%AA

(Bonus)
• Add an option to your program to generate random sentences.
• Add an option to your program to do Good-Turing discounting.
