Unit 4 Full PPT (ML)


UNIT IV GRAPHICAL MODELS

Bayesian Networks – Conditional Independence – Markov Random Fields – Learning – Naive Bayes Classifiers – Markov Model – Hidden Markov Model.


GRAPHICAL MODELS

• Graphical models, also known as probabilistic graphical models (PGMs), use a graph to represent the probabilistic relationships among a set of variables.

• These models can be used to analyze the relationships between variables, estimate
the probability distribution of a set of variables, and make predictions about future
events.

• There are two main types of graphical models: directed and undirected.

• Directed graphical models indicate causal relationships between variables.

• Undirected graphical models represent dependencies between variables without assuming a causal relationship.
Bayesian network
 "A Bayesian network is a probabilistic graphical model which represents a set of variables and
their conditional dependencies using a directed acyclic graph."

 It is also called a Bayes network, belief network, decision network, or Bayesian model.

 Bayesian networks are probabilistic because they are built from a probability distribution and use probability theory for prediction and anomaly detection.

 A Bayesian network can be used for building models from data and expert opinions, and it consists of two parts (a minimal code sketch follows the list):

 Directed Acyclic Graph

 Table of conditional probabilities.
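A minimal sketch of how these two parts can be encoded, using a toy two-node network (Rain → WetGrass); the dictionary layout and all numbers here are illustrative assumptions, not taken from the slides:

```python
# Part 1: the directed acyclic graph, stored as node -> list of parents.
dag = {"Rain": [], "WetGrass": ["Rain"]}

# Part 2: conditional probability tables, stored as
# node -> {tuple of parent values: P(node = True | parent values)}.
# All numbers are illustrative.
cpt = {
    "Rain": {(): 0.2},
    "WetGrass": {(True,): 0.9, (False,): 0.1},
}

def prob(node, value, parents=()):
    """Return P(node = value | parents = parents)."""
    p_true = cpt[node][tuple(parents)]
    return p_true if value else 1.0 - p_true

# The joint distribution factorizes over the DAG:
# P(Rain = r, WetGrass = w) = P(Rain = r) * P(WetGrass = w | Rain = r)
p = prob("Rain", True) * prob("WetGrass", True, (True,))
print(p)  # ~0.18
```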


Bayesian belief network

• Directed Acyclic Graph

• The Bayesian network graph does not contain any cycles; hence it is known as a directed acyclic graph (DAG).
Bayesian belief network

• Satheesh installed a new burglar alarm at his home to detect burglary. The alarm responds to a burglary but also responds to minor earthquakes. Satheesh has two neighbors, Kumar and Devi, who have taken responsibility for informing Satheesh at work when they hear the alarm. Kumar always calls Satheesh when he hears the alarm, but sometimes he confuses the phone ringing with the alarm and calls then too. Devi, on the other hand, likes to listen to loud music, so sometimes she misses the alarm. Here we would like to compute the probability of a burglary alarm.

• Calculate the probability that the alarm has sounded, but neither a burglary nor an earthquake has occurred, and both Kumar and Devi called Satheesh.
Bayesian belief network (figures: network diagram and conditional probability tables for the burglary example)
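The conditional probability tables appear only in the figures above, so the sketch below uses CPT values commonly quoted for this textbook example; treat every number as an assumption that may differ from the slides. It shows how the query factorizes over the network as P(K, D, A, ¬B, ¬E) = P(K|A) · P(D|A) · P(A|¬B, ¬E) · P(¬B) · P(¬E):

```python
# Query sketch with assumed CPT values; the slides' tables may differ.
p_b = 0.001   # P(Burglary)            -- assumed
p_e = 0.002   # P(Earthquake)          -- assumed
p_a = 0.001   # P(Alarm | ~B, ~E)      -- assumed
p_k = 0.90    # P(Kumar calls | Alarm) -- assumed
p_d = 0.70    # P(Devi calls | Alarm)  -- assumed

# P(K, D, A, ~B, ~E) = P(K|A) * P(D|A) * P(A|~B,~E) * P(~B) * P(~E)
answer = p_k * p_d * p_a * (1 - p_b) * (1 - p_e)
print(answer)  # ~0.000628 with these assumed numbers
```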
MARKOV MODEL

• It is a mathematical framework used to model stochastic processes.

• The probability of a future event depends only on the current state and not on the history (past) of the system (see the sketch below).

• Example: Google predicts the next word in your sentence based on your previous entries within Gmail.
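A minimal sketch of the Markov property using a two-state weather chain; the states and transition probabilities are invented for illustration:

```python
import random

# transition[s] is P(next state | current state s): the next step
# depends only on the current state, never on earlier history.
# States and numbers are invented for illustration.
transition = {
    "sunny":  {"sunny": 0.8, "cloudy": 0.2},
    "cloudy": {"sunny": 0.4, "cloudy": 0.6},
}

def next_state(current):
    states = list(transition[current])
    weights = [transition[current][s] for s in states]
    return random.choices(states, weights=weights)[0]

# Simulate five steps of the chain.
state = "sunny"
for _ in range(5):
    state = next_state(state)
    print(state)
```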
TYPES OF MARKOV MODEL

 Discrete-time Markov Chain (DTMC)

 Continuous-time Markov Chain (CTMC)

 Hidden Markov Model (HMM)

 Markov Decision Process (MDP)


Hidden Markov Model

• Hidden Markov Models (HMMs) are a class of probabilistic graphical models that allow us to predict a sequence of unknown (hidden) variables from a set of observed variables.

• Example: predicting the weather (hidden variable) based on the type of clothes that someone wears (observed).
Hidden Markov Model – Transitions (figure: state-transition diagram)

Hidden Markov Model – Computation of Joint Probability

• Transition data — the probability of transitioning to a new state conditioned on the present state.

• Emission data — the probability of transitioning to an observed state conditioned on a hidden state.

• Initial state information — the initial probability of transitioning to a hidden state. This can also be viewed as the prior probability.
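A minimal sketch of these three components as plain arrays, using the sunny/cloudy and happy/sad states from the worked example later in this unit. The stated entries (initial 0.509, transitions 0.3 and 0.4, emissions 0.8, 0.4, 0.2) come from that example; the complementary entries are filled in so each distribution sums to 1:

```python
hidden_states = ["sunny", "cloudy"]
observations  = ["happy", "sad"]

# Transition data: A[i][j] = P(next hidden state j | current hidden state i)
A = [[0.7, 0.3],    # from sunny
     [0.4, 0.6]]    # from cloudy

# Emission data: B[i][k] = P(observation k | hidden state i)
B = [[0.8, 0.2],    # sunny  -> P(happy), P(sad)
     [0.4, 0.6]]    # cloudy -> P(happy), P(sad)

# Initial state information (the prior): pi[i] = P(first hidden state i)
pi = [0.509, 0.491]
```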
Hidden Markov Model – Example (figure)

Hidden Markov Model – Probability Distribution (figures)

Hidden Markov Model – Steps

• Compute the joint probability of a set of hidden states given a set of observed states.

• The hidden states are also referred to as latent states.

• From the joint probabilities of the possible sequences of hidden states, we determine the best possible sequence.

• The sequence with the highest probability is chosen as the best sequence of hidden states (see the code sketch after the worked example below).
• The states of the Markov chain are hidden or unknown.

• We can observe some variables that depend on the hidden states.

• Such a model is called a Hidden Markov Model: HMM = hidden Markov chain + observed variables.

• Look at three consecutive days: Sunny+Happy, Cloudy+Happy, Sunny+Sad.

• This is the same as the joint probability P(Y = happy-happy-sad, X = sunny-cloudy-sunny).

By using the Markov property,

P(Y = happy-happy-sad, X = sunny-cloudy-sunny)
= P(X1 = sunny) × P(Y1 = happy | X1 = sunny) × P(X2 = cloudy | X1 = sunny) × P(Y2 = happy | X2 = cloudy) × P(X3 = sunny | X2 = cloudy) × P(Y3 = sad | X3 = sunny)
= 0.509 × 0.8 × 0.3 × 0.4 × 0.4 × 0.2
= 0.00391
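A sketch of this computation in code, using the probabilities from the example above; the complementary transition and emission entries (e.g. P(sunny → sunny) = 0.7) are assumptions filled in so each distribution sums to 1. It also brute-forces the best hidden sequence, as described on the Steps slide:

```python
from itertools import product

# Parameters from the worked example; unstated complementary entries
# are filled in so each row sums to 1 (those values are assumptions).
init  = {"sunny": 0.509, "cloudy": 0.491}
trans = {"sunny":  {"sunny": 0.7, "cloudy": 0.3},
         "cloudy": {"sunny": 0.4, "cloudy": 0.6}}
emit  = {"sunny":  {"happy": 0.8, "sad": 0.2},
         "cloudy": {"happy": 0.4, "sad": 0.6}}

def joint(hidden, observed):
    """P(observed sequence, hidden sequence) via the Markov property."""
    p = init[hidden[0]] * emit[hidden[0]][observed[0]]
    for prev, cur, obs in zip(hidden, hidden[1:], observed[1:]):
        p *= trans[prev][cur] * emit[cur][obs]
    return p

observed = ("happy", "happy", "sad")
print(joint(("sunny", "cloudy", "sunny"), observed))  # ~0.00391

# Steps slide: enumerate every hidden sequence and keep the one with
# the highest joint probability.
best = max(product(init, repeat=3), key=lambda h: joint(h, observed))
print(best, joint(best, observed))
```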
CONDITIONAL INDEPENDENCE

• In machine learning, conditional independence can be used to simplify models by breaking down complex relationships between variables into simpler, more manageable parts.

• By identifying conditional independence relationships between variables, we can reduce the number of parameters needed to model the data and improve the efficiency of our algorithms.
• For example, in Bayesian networks, conditional independence relationships are
used to represent the probabilistic relationships between variables in a graphical
model. In a Bayesian network, nodes represent variables, and edges represent
conditional dependencies between the variables. By identifying conditional
independence relationships between variables, we can reduce the number of edges
in the network, which can significantly reduce the computational complexity of
inference algorithms.
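Formally, X and Y are conditionally independent given Z when

P(X, Y | Z) = P(X | Z) × P(Y | Z), or equivalently, P(X | Y, Z) = P(X | Z).

In a Bayesian network, a missing edge between two variables corresponds to a conditional independence statement of this form.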
CONDITIONAL INDEPENDENCE

• Conditional independence can also be used in feature selection, where we try to identify the subset of features that are most relevant for a given task. By identifying conditional independence relationships between features and the target variable, we can eliminate irrelevant or redundant features and improve the accuracy and efficiency of our models.

• Overall, conditional independence is an important concept in machine learning, as it can help us to build more efficient and accurate models by simplifying the relationships between variables and reducing the number of parameters needed to model the data.
CONDITIONAL INDEPENDENCE-EXAMPLE

• Let blue, green, and red be three classes of objects with prior probabilities given by P(blue) = 1/4, P(green) = 1/2, P(red) = 1/4. There are three types of objects: pencils, pens, and paper. Let the class-conditional probabilities of these objects be given as follows.

• Use the Bayes classifier to classify pencil, pen, and paper.

• P(pencil | green) = 1/3, P(pen | green) = 1/2, P(paper | green) = 1/6
• P(pencil | blue) = 1/2, P(pen | blue) = 1/6, P(paper | blue) = 1/3
• P(pencil | red) = 1/6, P(pen | red) = 1/3, P(paper | red) = 1/2
Example – solution (figures: the pencil, pen, and paper cases are worked out on the slides)
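The worked solutions are figures on the slides; the sketch below reproduces the computation from the priors and class-conditional probabilities given above. For each object x, the Bayes classifier picks the class c maximizing P(x | c) × P(c); the evidence P(x) is a common factor, so it does not affect the argmax:

```python
from fractions import Fraction as F

# Priors and class-conditional probabilities from the example statement.
prior = {"blue": F(1, 4), "green": F(1, 2), "red": F(1, 4)}
likelihood = {
    "green": {"pencil": F(1, 3), "pen": F(1, 2), "paper": F(1, 6)},
    "blue":  {"pencil": F(1, 2), "pen": F(1, 6), "paper": F(1, 3)},
    "red":   {"pencil": F(1, 6), "pen": F(1, 3), "paper": F(1, 2)},
}

for obj in ["pencil", "pen", "paper"]:
    # Unnormalized posteriors P(x | c) * P(c) for each class c.
    scores = {c: likelihood[c][obj] * prior[c] for c in prior}
    best = max(scores, key=scores.get)
    print(obj, "->", best, scores)
```

With these numbers, pencil (1/6 vs 1/8 vs 1/24) and pen (1/4 vs 1/24 vs 1/12) are assigned to class green, and paper (1/12, 1/12, 1/8) to class red.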
