Probabilistic Information Retrieval
If we use the definition of conditional probability:
$$P(A \mid B) = \frac{P(A \cap B)}{P(B)} = \frac{28}{62}, \qquad P(B \mid C) = \frac{P(B \cap C)}{P(C)} = \frac{24}{32}$$
The probability P(A) of any event A defined on a sample space S can be expressed in terms of conditional probabilities. Suppose we are given N mutually exclusive events $B_n$, n = 1, 2, …, N, whose union equals S, as illustrated in the figure below.

[Figure: Venn diagram of the sample space S partitioned into B1, B2, B3, …, BN, with the event A overlapping the partition cells.]

$$A = A \cap S = A \cap \bigcup_{n=1}^{N} B_n = \bigcup_{n=1}^{N} (A \cap B_n)$$

Since the events $A \cap B_n$ are mutually exclusive, the total probability of A is
$$P(A) = \sum_{n=1}^{N} P(A \cap B_n) = \sum_{n=1}^{N} P(A \mid B_n)\, P(B_n)$$
The definition of conditional probability applies to any two events. In particular, let $B_n$ be one of the events defined above in the subsection on total probability. Then
$$P(B_n \mid A) = \frac{P(B_n \cap A)}{P(A)}$$
if P(A) ≠ 0, or, alternatively,
$$P(A \mid B_n) = \frac{P(A \cap B_n)}{P(B_n)}$$
if P(B_n) ≠ 0. One form of Bayes' theorem is obtained by equating these two expressions (via $P(A \cap B_n) = P(B_n \cap A)$):
$$P(B_n \mid A) = \frac{P(A \mid B_n)\, P(B_n)}{P(A)}$$
Another form derives from a substitution of P(A) as given by the total probability theorem:
$$P(B_n \mid A) = \frac{P(A \mid B_n)\, P(B_n)}{P(A \mid B_1)\, P(B_1) + \cdots + P(A \mid B_N)\, P(B_N)}$$
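As a quick worked example with made-up numbers: suppose N = 2, $P(B_1) = 0.6$, $P(B_2) = 0.4$, $P(A \mid B_1) = 0.5$, and $P(A \mid B_2) = 0.25$. Then
$$P(B_1 \mid A) = \frac{0.5 \cdot 0.6}{0.5 \cdot 0.6 + 0.25 \cdot 0.4} = \frac{0.30}{0.40} = 0.75$$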
The first attempts to develop a probabilistic theory of retrieval were made over
30 years ago [Maron and Kuhns 1960; Miller 1971], and since then there has been
a steady development of the approach. There are already several operational IR
systems based upon probabilistic or semiprobabilistic models.
One major obstacle in probabilistic or semiprobabilistic IR models is finding
methods for estimating the probabilities used to evaluate the probability of
relevance that are both theoretically sound and computationally efficient.
The first models to be based upon such independence assumptions were the "binary independence indexing model" and the "binary independence retrieval model".
One area of recent research investigates the use of an explicit network
representation of dependencies. The networks are processed by means of
Bayesian inference or belief theory, using evidential reasoning techniques such
as those described by Pearl [1988]. This approach is an extension of the earliest
probabilistic models, taking into account the conditional dependencies present in
a real environment.
[Figure: the classic matching picture. A user's Information Need is turned into a Query Representation; Documents are turned into Document Representations; retrieval must match the two. Understanding of the user's need is uncertain, and whether a document has relevant content is an uncertain guess. How to match?]
$$p(R \mid x) = \frac{p(x \mid R)\, p(R)}{p(x)}, \qquad p(NR \mid x) = \frac{p(x \mid NR)\, p(NR)}{p(x)}, \qquad p(R \mid x) + p(NR \mid x) = 1$$
p(x|R), p(x|NR) - probability that if a relevant (non-relevant) document is
retrieved, it is x.
Bayes’ Optimal Decision Rule
x is relevant iff p(R|x) > p(NR|x)
PRP in action: Rank all documents by p(R|x)
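A minimal sketch of the PRP as code, assuming a hypothetical estimator `prob_relevant(doc)` that returns the model's estimate of p(R|x):

```python
# Probability Ranking Principle: return documents in order of decreasing
# estimated probability of relevance p(R|x). `prob_relevant` stands in for
# whatever model (e.g., BIR below) supplies the estimates.

def rank_by_prp(docs, prob_relevant):
    return sorted(docs, key=prob_relevant, reverse=True)
```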
More complex case: retrieval costs.
Let d be a document
C - cost of retrieval of relevant document
C’ - cost of retrieval of non-relevant document
Probability Ranking Principle: if
$$C \cdot p(R \mid d) + C' \cdot (1 - p(R \mid d)) \;\leq\; C \cdot p(R \mid d') + C' \cdot (1 - p(R \mid d'))$$
for all d’ not yet retrieved, then d is the next
document to be retrieved
We won’t further consider loss/utility from
now on
How do we compute all those probabilities?
Do not know exact probabilities, have to use
estimates
Binary Independence Retrieval (BIR) – which we
discuss later today – is the simplest model
Questionable assumptions
“Relevance” of each document is independent of
relevance of other documents.
▪ Really, it’s bad to keep on returning duplicates
Boolean model of relevance
Estimate how terms contribute to relevance
How do things like tf, df, and document length influence your judgments about relevance?
▪ One answer is the Okapi formulae (S. Robertson)
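A sketch of the best-known Okapi formula, BM25, in its common textbook form (k1 and b are tuning constants; the exact variant and parameter values here are illustrative, not fixed by this text):

```python
import math

def bm25_term_weight(tf, df, doc_len, avg_doc_len, N, k1=1.5, b=0.75):
    """BM25 weight of one query term in one document.

    tf: term frequency in the document, df: document frequency of the term,
    N: collection size. Sum over all query terms to score the document.
    """
    idf = math.log((N - df + 0.5) / (df + 0.5))        # RSJ-style idf
    norm = k1 * ((1 - b) + b * doc_len / avg_doc_len)  # length normalization
    return idf * tf * (k1 + 1) / (tf + norm)
```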
For the Binary Independence Model, the Retrieval Status Value (RSV) is
$$RSV = \log \prod_{x_i = q_i = 1} \frac{p_i (1 - r_i)}{r_i (1 - p_i)} = \sum_{x_i = q_i = 1} \log \frac{p_i (1 - r_i)}{r_i (1 - p_i)}$$
Each summand $\log \frac{p_i (1 - r_i)}{r_i (1 - p_i)}$ is a constant $c_i$ for a given query, but needs estimation.
• Estimating RSV coefficients.
• For each term i, look at this table of document counts:

              Relevant     Non-relevant      Total
  x_i = 1     s            n - s             n
  x_i = 0     S - s        N - n - S + s     N - n
  Total       S            N - S             N

• Estimates:
$$p_i = \frac{s}{S}, \qquad r_i = \frac{n - s}{N - S}$$
$$c_i = K(N, n, S, s) = \log \frac{s/(S - s)}{(n - s)/(N - n - S + s)}$$
(For now, assume no zero cells.)
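A direct transcription of $c_i$ as a Python helper (in practice each cell count is usually smoothed, e.g. by adding 0.5, to avoid the zero cells assumed away above):

```python
import math

def c_i(N, n, S, s):
    """Term weight c_i = K(N, n, S, s) from the contingency table above.

    N: collection size, n: docs containing term i,
    S: relevant docs, s: relevant docs containing term i.
    Assumes no zero cells.
    """
    return math.log((s / (S - s)) / ((n - s) / (N - n - S + s)))
```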
If non-relevant documents are approximated by the whole
collection, then ri (prob. of occurrence in non-relevant
documents for query) is n/N and
log (1 − r_i)/r_i = log (N − n)/n ≈ log N/n = IDF!
pi (probability of occurrence in relevant documents) can be
estimated in various ways:
from relevant documents, if we know some
▪ Relevance weighting can be used in feedback loop
constant (Croft and Harper combination match) – then just get idf
weighting of terms
proportional to prob. of occurrence in collection
▪ more accurately, to log of this (Greiff, SIGIR 1998)
1. Assume that p_i is constant over all x_i in the query
pi = 0.5 (even odds) for any given doc
2. Determine guess of relevant document set:
V is fixed size set of highest ranked documents on
this model (note: now a bit like tf.idf!)
3. We need to improve our guesses for pi and ri, so
Use distribution of xi in docs in V. Let Vi be set of
documents containing xi
▪ pi = |Vi| / |V|
Assume if not retrieved then not relevant
▪ ri = (ni – |Vi|) / (N – |V|)
4. Go to step 2 until the estimates converge, then return the ranking (a sketch follows below)
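A rough sketch of this loop, assuming a hypothetical `rank(p, r)` that returns all documents sorted by RSV under the current estimates, `doc_terms` mapping each document to its term set, and `n` giving each term's document frequency:

```python
def estimate_without_relevance(terms, n, N, doc_terms, rank, V_size=20, iters=5):
    """Iteratively re-estimate p_i and r_i with no relevance information."""
    p = {i: 0.5 for i in terms}              # step 1: even odds
    r = {i: n[i] / N for i in terms}         # approximate r_i from collection
    for _ in range(iters):                   # step 4: iterate toward convergence
        V = rank(p, r)[:V_size]              # step 2: top docs as guess of R
        for i in terms:                      # step 3: improve p_i and r_i
            Vi = sum(1 for d in V if i in doc_terms[d])
            p[i] = Vi / len(V)
            r[i] = (n[i] - Vi) / (N - len(V))
    return rank(p, r)
```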
1. Guess a preliminary probabilistic description of R
and use it to retrieve a first set of documents V,
as above.
2. Interact with the user to refine the description:
learn some definite members of R and NR
3. Re-estimate p_i and r_i on the basis of these
Or we can combine the new information with the original guess (use a Bayesian prior):
$$p_i^{(2)} = \frac{|V_i| + \kappa\, p_i^{(1)}}{|V| + \kappa}$$
where κ is the prior weight.
4. Repeat, thus generating a succession of
approximations to R.
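A one-line helper for the smoothed update in step 3 (κ = 5 is an arbitrary illustrative choice):

```python
def smoothed_p(Vi, V, p_prev, kappa=5.0):
    """Bayesian-prior update: p_i(2) = (|V_i| + kappa * p_i(1)) / (|V| + kappa)."""
    return (Vi + kappa * p_prev) / (V + kappa)
```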
Getting reasonable approximations of
probabilities is possible.
Requires restrictive assumptions:
term independence
terms not in query don’t affect the outcome
boolean representation of documents/queries/relevance
document relevance values are independent
Some of these assumptions can be removed
Problem: these approaches either require partial relevance information or
can only derive somewhat inferior term weights
In general, index terms aren’t
independent
Dependencies can be complex
van Rijsbergen (1979) proposed
model of simple tree
dependencies
Exactly Friedman and Goldszmidt's Tree Augmented Naive Bayes (AAAI 1996)
Each term dependent on one
other
In 1970s, estimation problems
held back success of this model
What is a Bayesian network?
A directed acyclic graph
Nodes
▪ Events or Variables
▪ Assume values.
▪ For our purposes, all Boolean
Links
▪ model direct dependencies between nodes
• Bayesian networks model causal
relations between events
[Figure: a three-node net. Roots a and b, with priors p(a) and p(b), are both parents of c; the conditional dependence is given by p(c|ab) for all values of a, b, c.]
• Inference in Bayesian nets: given probability distributions for the roots and the conditional probabilities, we can compute the a priori probability of any instance.
• Fixing assumptions (e.g., b was observed) will cause recomputation of probabilities.
For more information see:
R.G. Cowell, A.P. Dawid, S.L. Lauritzen, and D.J. Spiegelhalter.
1999. Probabilistic Networks and Expert Systems. Springer Verlag.
J. Pearl. 1988. Probabilistic Reasoning in Intelligent Systems:
Networks of Plausible Inference. Morgan-Kaufman.
[Figure: toy network. Finals (f) and Project Due (d) are roots; Finals → No Sleep (n); Finals and Project Due → Gloom (g); Gloom → Triple Latte (t).]

Conditional probability tables:

  P(f) = 0.3, P(¬f) = 0.7          P(d) = 0.4, P(¬d) = 0.6
  No Sleep (n):      P(n|f) = 0.9,  P(n|¬f) = 0.3
  Gloom (g):         P(g|f,d) = 0.99,  P(g|f,¬d) = 0.9,  P(g|¬f,d) = 0.8,  P(g|¬f,¬d) = 0.3
  Triple Latte (t):  P(t|g) = 0.99,  P(t|¬g) = 0.1

(Complement rows, e.g. P(¬n|f) = 0.1, follow by subtraction from 1.)
[Figure: the same network, nodes Finals (f), Project Due (d), No Sleep (n), Gloom (g), Triple Latte (t).]
• Independence assumption: P(t|g, f) = P(t|g)
• Joint probability:
  P(f, d, n, g, t) = P(f) P(d) P(n|f) P(g|f, d) P(t|g)
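A small runnable version of that factorization, using the CPTs from the slide above:

```python
# Joint probability of the toy network:
# P(f,d,n,g,t) = P(f) P(d) P(n|f) P(g|f,d) P(t|g)

P_f, P_d = 0.3, 0.4
P_n_given_f = {True: 0.9, False: 0.3}
P_g_given_fd = {(True, True): 0.99, (True, False): 0.9,
                (False, True): 0.8, (False, False): 0.3}
P_t_given_g = {True: 0.99, False: 0.1}

def pm(p, v):
    """Probability that a Boolean variable with P(True)=p takes value v."""
    return p if v else 1 - p

def joint(f, d, n, g, t):
    return (pm(P_f, f) * pm(P_d, d) * pm(P_n_given_f[f], n)
            * pm(P_g_given_fd[(f, d)], g) * pm(P_t_given_g[g], t))

print(joint(True, True, True, True, True))  # 0.3*0.4*0.9*0.99*0.99 ≈ 0.1059
```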
Goal
Given a user’s information need (evidence), find
probability a doc satisfies need
Retrieval model
Model docs in a document network
Model information need in a query network
Document Network (large, but computed once per document collection):
  d_i - documents: d1, d2, …, dn
  t_i - document representations: t1, t2, …, tn
  r_i - "concepts": r1, r2, r3, …, rk
Query Network (small, computed once for every query):
  c_i - query concepts: c1, c2, …, cm
  q_i - high-level concepts: q1, q2
  I - goal node
Construct Document Network (once!)
For each query
Construct best Query Network
Attach it to Document Network
Find subset of di’s which maximizes the
probability value of node I (best subset).
Retrieve these di’s as the answer to the query.
[Figure: example network. Document network: documents d1, d2 link to terms/concepts r1, r2, r3. Query network: concepts c1, c2, c3 feed query operators q1, q2 (AND/OR/NOT), which feed the information need node i.]
Prior doc probability P(d) = 1/n
P(r|d): within-document term frequency; tf × idf based
P(c|r): 1-to-1 thesaurus
P(q|c): canonical forms of query operators (AND/OR/NOT); always use canonical forms – never store a full CPT (conditional probability table)
i: the user query / information need
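A sketch of why canonical operator forms are cheap: given the parents' probabilities of being true, AND/OR/NOT beliefs have closed forms, so no full CPT over all parent configurations is needed (these particular formulas are the standard choices in inference-network IR, not quoted from this text):

```python
from math import prod

def bel_and(parent_probs):   # true only if all parents are true
    return prod(parent_probs)

def bel_or(parent_probs):    # true if at least one parent is true
    return 1 - prod(1 - p for p in parent_probs)

def bel_not(p):              # negation of a single parent
    return 1 - p

# e.g. a query (t1 AND t2) OR (NOT t3) with parent beliefs 0.8, 0.5, 0.9:
print(bel_or([bel_and([0.8, 0.5]), bel_not(0.9)]))  # 0.46
```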
Prior probs don’t have to be 1/n.
“User information need” doesn’t have to be a
query - can be words typed, in docs read, any
combination …
Phrases, inter-document links
Link matrices can be modified over time.
User feedback.
The promise of “personalization”
Document network built at indexing time
Query network built/scored at query time
Representation:
Link matrices from docs to any single term are like
the postings entry for that term
Canonical link matrices are efficient to store and
compute
Attach evidence only at roots of network
Can do single pass from roots to leaves