Introduction To Probability Theory: A Short Course On Graphical Models
[Figure: Venn diagrams of basic set operations on events, e.g., A \ B.]
For simplicity, we will work (mostly) with finite sets. The extension to countably infinite sets is not difficult. The extension to uncountably infinite sets requires Measure Theory.
Probability spaces
A probability space represents our uncertainty regarding an experiment. It has two parts: 1. the sample space Ω, which is a set of outcomes; and 2. the probability measure P, which is a real function of the subsets of Ω.
A set of outcomes A is called an event. P(A) represents how likely it is that the experiment's actual outcome will be a member of A.
The probability measure P must satisfy three axioms:
1. Non-negativity: P(A) ≥ 0 for every event A.
2. Normalization: P(Ω) = 1.
3. Additivity: if A ∩ B = ∅, then P(A ∪ B) = P(A) + P(B).
Some useful consequences of the axioms:
P(A) = 1 − P(Ω \ A)
P(∅) = 0
If A ⊆ B then P(A) ≤ P(B)
P(A ∪ B) = P(A) + P(B) − P(A ∩ B)
P(A ∪ B) ≤ P(A) + P(B)
...
Example
One easy way to define our probability measure P is to assign a probability to each outcome ω ∈ Ω:

             smoke     no smoke
  fire       0.002     0.001
  no fire    0.003     0.994
These probabilities must be non-negative and they must sum to one. Then the probabilities of all other events are determined by the axioms: P({(fire, smoke), (no fire, smoke)}) = P({(fire, smoke)}) + P({(no fire, smoke)}) = 0.002 + 0.003 = 0.005
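A minimal Python sketch of this construction (an addition, not from the original notes; the dictionary p_outcome and the helper prob are illustrative names):

```python
# Assign a probability to each outcome, then compute the probability of any
# event by summing the probabilities of the outcomes it contains.
p_outcome = {
    ("fire", "smoke"): 0.002,
    ("fire", "no smoke"): 0.001,
    ("no fire", "smoke"): 0.003,
    ("no fire", "no smoke"): 0.994,
}
assert abs(sum(p_outcome.values()) - 1.0) < 1e-9  # probabilities sum to one

def prob(event):
    """P(A): sum the probabilities of the outcomes in the event A."""
    return sum(p_outcome[outcome] for outcome in event)

smoke_detected = {("fire", "smoke"), ("no fire", "smoke")}
print(prob(smoke_detected))  # ~0.005, as computed above
```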
Conditional probability
Conditional probability allows us to reason with partial information. When P(B) > 0, the conditional probability of A given B is defined as

P(A | B) = P(A ∩ B) / P(B)
This is the probability that A occurs, given we have observed B, i.e., that we know the experiment's actual outcome will be in B. It is the fraction of probability mass in B that also belongs to A. P(A) is called the a priori (or prior) probability of A and P(A | B) is called the a posteriori probability of A given B.
[Figure: Venn diagram of events A and B; P(A | B) is the fraction P(A ∩ B) / P(B) of B's probability mass that also lies in A.]
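As a quick check using the fire/smoke table above: P(smoke) = 0.002 + 0.003 = 0.005 and P(fire ∩ smoke) = 0.002, so P(fire | smoke) = 0.002 / 0.005 = 0.4.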
Product rule
Start with the definition of conditional probability and multiply by P(A):

P(A ∩ B) = P(A) P(B | A)

The probability that A and B both happen is the probability that A happens times the probability that B happens, given that A has occurred.
Chain rule
Applying the product rule repeatedly gives the chain rule:

P(A1 ∩ ⋯ ∩ Ak) = P(A1) P(A2 | A1) ⋯ P(Ak | A1 ∩ ⋯ ∩ Ak−1)

The chain rule will become important later when we discuss conditional independence in Bayesian networks.
Bayes rule
Use the product rule both ways with P(A ∩ B) and divide by P(B):

P(A | B) = P(B | A) P(A) / P(B)
Bayes rule translates causal knowledge into diagnostic knowledge. For example, if A is the event that a patient has a disease, and B is the event that she displays a symptom, then P (B | A) describes a causal relationship, and P (A | B) describes a diagnostic one (that is usually hard to assess). If P (B | A), P (A) and P (B) can be assessed easily, then we get P (A | B) for free.
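To illustrate with the fire/smoke example above: P(fire) = 0.002 + 0.001 = 0.003, P(smoke) = 0.005, and P(smoke | fire) = 0.002 / 0.003 = 2/3; Bayes rule then gives P(fire | smoke) = (2/3 × 0.003) / 0.005 = 0.4, agreeing with the direct computation from the joint table.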
Random variables
It is often useful to pick out aspects of the experiment's outcomes. A random variable X is a function on the sample space Ω: it assigns a value X(ω) to each outcome ω ∈ Ω.
Random variables can define events, e.g., {ω : X(ω) = true}. One will often see expressions like P{X = 1, Y = 2} or P(X = 1, Y = 2). These both mean P({ω : X(ω) = 1, Y(ω) = 2}).
Densities
Let X be a finite random variable, i.e., one whose set of possible values is finite. The function pX is the density of X if for every value x:

pX(x) = P({ω : X(ω) = x})

When the set of possible values is infinite (e.g., the reals), pX is the density of X if for every set A of values:

∫_A pX(x) dx = P({ω : X(ω) ∈ A})
Joint densities
If X and Y are two finite random variables, then pXY is their joint density if for all values x and y:

pXY(x, y) = P({ω : X(ω) = x, Y(ω) = y})

When X or Y has infinitely many possible values, pXY is the joint density of X and Y if for all sets of values A and B:

∫_A ∫_B pXY(x, y) dy dx = P({ω : X(ω) ∈ A, Y(ω) ∈ B})
[Figure: plot of a joint density pXY(x, y).]
Marginal densities
Given the joint density pXY(x, y) for X and Y, we can compute the marginal density of X by

pX(x) = Σ_y pXY(x, y)

when Y has finitely many possible values, or by

pX(x) = ∫ pXY(x, y) dy

when its set of possible values is infinite. This process of summing (or integrating) over the unwanted variables is called marginalization.
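A small Python sketch of marginalizing a tabular joint density (an illustration added here, not part of the original notes; the dictionary p_SF and the helper marginal_S are made-up names, with probabilities taken from the fire/smoke table):

```python
# Marginalization: sum the joint density over the unwanted variable.
# Keys are (smoke, fire) pairs; values are the joint probabilities.
p_SF = {
    (True, True): 0.002,    # smoke and fire
    (True, False): 0.003,   # smoke, no fire
    (False, True): 0.001,   # no smoke, fire
    (False, False): 0.994,  # no smoke, no fire
}

def marginal_S(p_joint):
    """Compute pS(s) = sum over f of pSF(s, f) by summing out fire."""
    p_S = {}
    for (s, f), p in p_joint.items():
        p_S[s] = p_S.get(s, 0.0) + p
    return p_S

print(marginal_S(p_SF))  # {True: ~0.005, False: ~0.995}
```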
Conditional densities
pX|Y(x, y) is the conditional density of X given Y = y if

pX|Y(x, y) = P({ω : X(ω) = x} | {ω : Y(ω) = y})

for all x when X has finitely many possible values, or if

∫_A pX|Y(x, y) dx = P({ω : X(ω) ∈ A} | {ω : Y(ω) = y})

for all sets of values A when X has infinitely many possible values. Given the joint density pXY(x, y), we can compute pX|Y as follows:

pX|Y(x, y) = pXY(x, y) / Σ_x′ pXY(x′, y)   or   pX|Y(x, y) = pXY(x, y) / ∫ pXY(x′, y) dx′
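Along the same lines, a sketch of conditioning a tabular joint density (again an added illustration with made-up names):

```python
# Conditioning: pF|S(f, s) = pSF(s, f) / sum over f' of pSF(s, f').
p_SF = {
    (True, True): 0.002, (True, False): 0.003,
    (False, True): 0.001, (False, False): 0.994,
}  # keys are (smoke, fire)

def conditional_F_given_S(p_joint, s):
    """Return the density of fire given the observation smoke = s."""
    norm = sum(p for (s_, _), p in p_joint.items() if s_ == s)
    return {f: p_joint[(s, f)] / norm for f in (True, False)}

print(conditional_F_given_S(p_SF, True))  # {True: ~0.4, False: ~0.6}
```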
The product, chain, and Bayes rules can all be restated in terms of densities:

Product rule:  pXY(x, y) = pX(x) pY|X(y, x)

Chain rule:  pX1···Xk(x1, ..., xk) = pX1(x1) pX2|X1(x2, x1) ⋯ pXk|X1···Xk−1(xk, x1, ..., xk−1)

Bayes rule:  pY|X(y, x) = pX|Y(x, y) pY(y) / pX(x)
Inference
The central problem of computational Probability Theory is the inference problem: given a set of random variables X1, ..., Xk and their joint density, compute one or more conditional densities given observations. Many problems can be formulated in these terms. Examples: in our fire alarm example, the probability that there is a fire given that smoke has been detected is pF|S(true, true); in a tracking problem, we can compute the expected position of a target given some measurements of it, or the variance of that position, which are the parameters of a Gaussian posterior. Inference requires manipulating densities; how will we represent them?
Table densities
The density of a set of finite-valued random variables can be represented as a table of real numbers. In our fire alarm example, if S is the Boolean random variable indicating that smoke has been detected, its density is given by

pS(s) = 0.005 if s = true
pS(s) = 0.995 if s = false
If F is the Boolean random variable indicating a fire, then the joint density pSF is represented by

pSF(s, f)     s = true    s = false
f = true      0.002       0.001
f = false     0.003       0.994
Note that the size of the table is exponential in the number of variables.
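A brief sketch (added here, assuming numpy is available) of the same table as an array, and of how quickly such tables grow:

```python
import numpy as np

# The joint density pSF as a 2x2 table: rows index f in (true, false),
# columns index s in (true, false), matching the table above.
p_SF = np.array([[0.002, 0.001],
                 [0.003, 0.994]])
assert np.isclose(p_SF.sum(), 1.0)

# A table over n Boolean variables has 2**n cells, which grows quickly:
n = 30
print(f"{n} Boolean variables -> {2**n:,} cells")
```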
The Gaussian density
One of the simplest densities for a real random variable. It can be represented by two real numbers: the mean μ and the variance σ². Its density is

pX(x) = (2πσ²)^(−1/2) exp(−(x − μ)² / (2σ²))
[Figure: plot of a Gaussian density.]
The Gaussian density is the only density for real random variables that is closed under marginalization and multiplication. Also: a linear (or affine) function of a Gaussian random variable is Gaussian; and a sum of Gaussian random variables is Gaussian. For these reasons, the algorithms we will discuss will be tractable only for finite random variables or Gaussian random variables. When we encounter non-Gaussian variables or non-linear functions in practice, we will approximate them using our discrete and Gaussian tools. (This often works quite well.)
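For reference, and as an addition to the original text, these closure properties can be written explicitly for a jointly Gaussian pair of (possibly vector-valued) variables X and Y with means μX, μY and covariance blocks ΣXX, ΣXY, ΣYX, ΣYY:

```latex
% Marginal and conditional of a jointly Gaussian (X, Y):
X \sim \mathcal{N}(\mu_X,\, \Sigma_{XX}),
\qquad
X \mid Y = y \;\sim\; \mathcal{N}\!\left(
  \mu_X + \Sigma_{XY}\Sigma_{YY}^{-1}(y - \mu_Y),\;
  \Sigma_{XX} - \Sigma_{XY}\Sigma_{YY}^{-1}\Sigma_{YX}
\right)
```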
Looking ahead...
Inference by enumeration: compute the conditional densities using the definitions. In the tabular case, this requires summing over exponentially many table cells. In the Gaussian case, this requires inverting large matrices. For large systems of finite random variables, representing the joint density is impossible, let alone inference by enumeration. Next time: sparse representations of joint densities, and Variable Elimination, our first efficient inference algorithm.
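As an illustration (not the algorithm as it will be presented later), here is a minimal Python sketch of inference by enumeration on a tabular joint density; the function name enumerate_inference and its interface are hypothetical:

```python
def enumerate_inference(p_joint, variables, query, evidence):
    """Inference by enumeration on a tabular joint density.
    p_joint: dict mapping full assignments (tuples) to probabilities.
    variables: tuple of variable names, in the order used by the assignments.
    query: the variable whose conditional density we want.
    evidence: dict of observed variable name -> observed value."""
    idx = {v: i for i, v in enumerate(variables)}
    scores = {}
    for assignment, p in p_joint.items():
        if all(assignment[idx[v]] == val for v, val in evidence.items()):
            q_val = assignment[idx[query]]
            scores[q_val] = scores.get(q_val, 0.0) + p
    total = sum(scores.values())  # = probability of the evidence
    return {q_val: s / total for q_val, s in scores.items()}

# Fire/smoke example: pF|S(true, true) should come out to about 0.4.
p_FS = {(True, True): 0.002, (True, False): 0.001,
        (False, True): 0.003, (False, False): 0.994}  # keys are (fire, smoke)
print(enumerate_inference(p_FS, ("fire", "smoke"), "fire", {"smoke": True}))
```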
Summary
A probability space describes our uncertainty regarding an experiment; it consists of a sample space of possible outcomes, and a probability measure that quantifies how likely each outcome is. An event is a set of outcomes of the experiment. A probability measure must obey three axioms: non-negativity, normalization, and additivity of disjoint events. Conditional probability allows us to reason with partial information. Three important rules follow easily from the definitions: the product rule, the chain rule, and Bayes rule.
Summary (II)
A random variable picks out some aspect of the experiment's outcome. A density describes how likely a random variable is to take on a value. We usually work with a set of random variables and their joint density; the probability space is implicit. The two types of densities suitable for computation are table densities (for finite-valued variables) and the (multivariate) Gaussian (for real-valued variables). Using a joint density, we can compute marginal and conditional densities over subsets of variables. Inference is the problem of computing one or more conditional densities given observations.