Lec18 HMMs
Probability Recap
§ Conditional probability
§ Marginal probability
§ Product rule
§ Chain rule
§ Example: P(X) ∝ f(X)

  X     f(X)    P(X)
  x1    0.4     0.4 / (0.4 + 0.2)
  x2    0.2     0.2 / (0.4 + 0.2)
Markov Models
§ Value of X at a given time is called the state
[Diagram: Markov chain X1 → X2 → X3 → X4]
§ CPT P(Xt | Xt-1): Two new ways of representing the same CPT
[Diagram: chain X1 → X2 → X3 → X4 → … → Xt; two equivalent representations of the transition CPT]
$P(x_t) = \sum_{x_{t-1}} P(x_{t-1}, x_t) = \sum_{x_{t-1}} P(x_t \mid x_{t-1})\, P(x_{t-1})$
Forward simulation
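A minimal sketch of forward simulation in Python; the transition numbers are borrowed from the Weather HMM table later in this lecture (relabelled sun/rain for readability), not separately specified here:

# Mini-forward update: P(x_t) = sum over x_{t-1} of P(x_t | x_{t-1}) P(x_{t-1})
T = {
    "rain": {"rain": 0.7, "sun": 0.3},   # row: P(X_t | X_{t-1} = rain)
    "sun":  {"rain": 0.3, "sun": 0.7},   # row: P(X_t | X_{t-1} = sun)
}

P = {"sun": 1.0, "rain": 0.0}            # from initial observation of sun
for t in range(2, 7):
    P = {x: sum(T[xp][x] * P[xp] for xp in P) for x in P}
    print(f"P(X{t}) = {P}")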
Example Run of Mini-Forward Algorithm
§ From initial observation of sun
…
P(X1) P(X∞) [Demo: L13D1,2,3]
Video of Demo Ghostbusters Basic Dynamics
Video of Demo Ghostbusters Circular Dynamics
Video of Demo Ghostbusters Whirlpool Dynamics
Stationary Distributions
§ Stationary distribution
§ Will spend more time on highly reachable pages
§ E.g. many ways to get to the Acrobat Reader download page
§ Somewhat robust to link spam
§ Google 1.0 returned the set of pages containing all your keywords in decreasing rank; now all search engines use link analysis along with many other factors (rank actually getting less important over time)
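The stationary distribution can be found numerically by running the mini-forward update until the belief stops changing; a power-iteration sketch on the toy weather chain from above (not an actual PageRank matrix):

T = {
    "rain": {"rain": 0.7, "sun": 0.3},
    "sun":  {"rain": 0.3, "sun": 0.7},
}

P = {"sun": 1.0, "rain": 0.0}
while True:
    P_next = {x: sum(T[xp][x] * P[xp] for xp in P) for x in P}
    if max(abs(P_next[x] - P[x]) for x in P) < 1e-12:
        break                            # fixed point: P(x) = sum_x' P(x | x') P(x')
    P = P_next
print(P)                                 # this symmetric chain converges to 0.5 / 0.5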
Hidden Markov Models
Pacman – Sonar
Hidden Markov Models
§ Markov chains not so useful for most agents
[Diagram: HMM — hidden states X1 → X2 → X3 → X4 → X5, each emitting evidence E1 … E5]
Example: Weather HMM
§ An HMM is defined by:
  § Initial distribution: P(X1)
  § Transitions: P(Xt | Xt-1)
  § Emissions: P(Et | Xt)

Transitions P(Rt | Rt-1):
  Rt-1   Rt   P(Rt | Rt-1)
  +r     +r   0.7
  +r     -r   0.3
  -r     +r   0.3
  -r     -r   0.7

Emissions P(Ut | Rt):
  Rt     Ut   P(Ut | Rt)
  +r     +u   0.9
  +r     -u   0.1
  -r     +u   0.2
  -r     -u   0.8
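These CPTs translate directly into code; a minimal encoding of the weather HMM (the initial distribution is an assumption, since the slide leaves P(X1) unspecified):

init = {"+r": 0.5, "-r": 0.5}            # P(X1), assumed uniform

trans = {                                # P(Rt | Rt-1)
    "+r": {"+r": 0.7, "-r": 0.3},
    "-r": {"+r": 0.3, "-r": 0.7},
}

emit = {                                 # P(Ut | Rt), u = umbrella observed
    "+r": {"+u": 0.9, "-u": 0.1},
    "-r": {"+u": 0.2, "-u": 0.8},
}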
Example: Ghostbusters HMM
[Diagram: Ghostbusters HMM — hidden ghost positions X1 → X2 → X3 → X4 → X5 with noisy readings E1 … E5; grid showing the transition distribution P(X' | X = <1,2>)]
§ Robot tracking:
  § Observations are range readings (continuous)
  § States are positions on a map (continuous)
Filtering / Monitoring
[Diagram: evidence E1, E2, E3, E4, …, Et is observed at each time step]
Example: Robot Localization
t=0
[Figure: belief over grid positions; probability scale from 0 (dark) to 1 (bright)]
Sensor model: can read in which directions there is a wall, never more than 1 mistake (see the sketch below)
Motion model: may not execute action with small prob.
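One hedged way to encode this sensor model in Python: a reading and the true walls are 4-tuples of booleans (N, E, S, W), at most one bit may be wrong, and the per-direction error probability p_err is an illustrative assumption, not a number from the slides:

def reading_likelihood(reading, true_walls, p_err=0.1):
    # Count directions where the sensor disagrees with the map
    mistakes = sum(r != w for r, w in zip(reading, true_walls))
    if mistakes > 1:
        return 0.0                       # "never more than 1 mistake"
    return (1 - p_err) ** (4 - mistakes) * p_err ** mistakes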
Example: Robot Localization
t=1
Lighter grey: was possible to get the reading, but less likely because it required 1 mistake
Example: Robot Localization
t=2
Example: Robot Localization
t=3
Example: Robot Localization
t=4
Example: Robot Localization
t=5
HMM Inference: Find State Given Evidence
$B_t(X) = P(X_t \mid e_{1:t})$
§ Idea: start with $P(X_1)$ and derive $B_t(X)$ in terms of $B_{t-1}(X)$
§ Two steps: Passage of Time & Observation
$B'_{t+1}(X) = P(X_{t+1} \mid e_{1:t})$ (belief after time passes, before observing $e_{t+1}$)
[Diagram: HMM X1 → X2 → X3 → X4 with evidence E1 … E4]
[Diagrams: the two base cases — observing E1 at X1, and time passing from X1 to X2]
Passage of Time: Base Case
[Diagram: time passes from X1 to X2, with no new evidence]
Passage of Time:

$B'(X_{t+1}) = \sum_{x_t} P(X_{t+1} \mid x_t)\, B(x_t)$
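In code, the passage-of-time update is one weighted sum per successor state; a sketch using the dict-of-dicts transition model from the weather example:

def elapse_time(B, trans):
    # B'(x_{t+1}) = sum over x_t of P(x_{t+1} | x_t) B(x_t)
    states = list(trans)
    return {x1: sum(trans[x0][x1] * B[x0] for x0 in states) for x1 in states}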
Observation:

[Diagram: HMM X1 → X2 → X3 → X4 → X5 with evidence E1 … E5]

$B(X_{t+1}) \propto P(e_{t+1} \mid X_{t+1})\, B'(X_{t+1})$
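The observation update reweights the predicted belief by the evidence likelihood and renormalizes. Chaining both steps gives the full filter; this sketch reuses elapse_time and the init/trans/emit tables defined above:

def observe(Bp, emit, e):
    # B(x_{t+1}) ∝ P(e_{t+1} | x_{t+1}) B'(x_{t+1}), then renormalize
    weighted = {x: emit[x][e] * Bp[x] for x in Bp}
    Z = sum(weighted.values())           # Z = P(e_{t+1} | e_{1:t})
    return {x: w / Z for x, w in weighted.items()}

def forward_filter(evidence, init, trans, emit):
    B = dict(init)                       # belief before any evidence
    for e in evidence:                   # each step: time passes, then observe
        B = observe(elapse_time(B, trans), emit, e)
    return B

# Two umbrella sightings push belief strongly toward rain:
# forward_filter(["+u", "+u"], init, trans, emit)  ->  {+r: ~0.88, -r: ~0.12}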
Pacman – Sonar