
Image Segmentation

• Image segmentation is a fundamental technique in digital image
processing and computer vision.
• It involves partitioning a digital image into multiple segments
(regions or objects) in order to simplify analysis by separating
the image into meaningful components, which makes image processing
more efficient by focusing on specific regions of interest. A
typical image segmentation task goes through the following steps
(made concrete in the K-means sketch later on):
1. Group pixels in the image based on shared characteristics such
as colour, intensity, or texture.
2. Assign a label to each pixel, indicating the segment or object
it belongs to.
3. Produce a segmented image as output, often visualized as a mask
or overlay highlighting the different segments.
Visual Characteristics of a Pixel
Visual characteristics can be based on the following (a small
per-pixel feature sketch follows this list):
• Brightness
• Color
• Position
• Depth
• Motion
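Several of these cues can be combined into a per-pixel feature vector, which is the representation the clustering algorithms on the following slides operate on. A minimal sketch, assuming NumPy and a hypothetical pixel_features helper name (not from the slides), stacking color and position:

import numpy as np

def pixel_features(image, position_weight=1.0):
    # image: H x W x 3 array (e.g. RGB values scaled to [0, 1]).
    # position_weight: illustrative knob trading off color vs. position.
    # Returns an (H*W) x 5 array of [R, G, B, row, col] per pixel.
    h, w, _ = image.shape
    rows, cols = np.mgrid[0:h, 0:w]
    # Normalize positions to [0, 1] so they are comparable to colors.
    pos = np.stack([rows / h, cols / w], axis=-1) * position_weight
    feats = np.concatenate([image, pos], axis=-1)
    return feats.reshape(-1, 5)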
Pixels in Euclidean Space
Pixel Similarity
Clustering similar pixels
K-means Clustering Algorithm
K-means is a clustering algorithm.
• Clustering algorithms are unsupervised algorithms, which means
that no labelled data is available.
• K-means identifies different classes or clusters in the given
data based on how similar the data points are.
• Data points in the same group are more similar to other data
points in that group than to those in other groups.
• Applied to pixels, this yields a segmentation; a minimal sketch
follows.
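A minimal K-means segmentation sketch, assuming scikit-learn is available and reusing the hypothetical pixel_features helper from above; it clusters the pixel feature vectors and reshapes the hard labels back into a segmentation mask:

from sklearn.cluster import KMeans

def kmeans_segment(image, k=3):
    # Cluster pixels into k segments; returns an H x W label mask.
    h, w, _ = image.shape
    feats = pixel_features(image)           # (H*W) x 5 feature matrix
    km = KMeans(n_clusters=k, n_init=10).fit(feats)
    return km.labels_.reshape(h, w)         # one hard label per pixel

Varying k changes the granularity of the result, as in the k = 2, 8, and 16 slides that follow.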
3-means Clustering Example
K-means Clustering
K-means Initialization Method
K-means clustering results (k=2)
K-means clustering results (k=8)
K-means clustering results (k=16)
Concept of Mean Shift
Mean Shift Algorithm
Measuring Affinity
Graph Cut
Graph Cut Segmentation
Measure of Subgraph Size
Normalized Cut (N-Cut)
Gaussian Mixture Model

• A Gaussian mixture model is a soft clustering technique used in
unsupervised learning to determine the probability that a given
data point belongs to a cluster. It is composed of several
Gaussians, each identified by k ∈ {1, …, K}, where K is the number
of clusters in the data set; each Gaussian is described by the
parameters listed on the next slide.
• K-means, by contrast, assigns each data point to exactly one
cluster based on the closest centroid. It is a hard clustering
method, meaning each point belongs to only one cluster with no
uncertainty.
How Do Gaussian Mixture Models Work?

• GMM uses a probabilistic approach:
1. Multiple Gaussians (clusters): Each cluster is represented by a
Gaussian distribution, and data points are assigned probabilities
of belonging to different clusters based on their distance from
each Gaussian.
2. Parameters of a Gaussian: The core of GMM is made up of three
main parameters for each Gaussian (a fitting sketch follows this
list):
   a. Mean (μ): The center of the Gaussian distribution.
   b. Covariance (Σ): Describes the spread or shape of the cluster.
   c. Mixing Probability (π): Determines how dominant or likely
   each cluster is in the data.
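A minimal sketch of fitting a GMM with scikit-learn (an assumed library choice; the slides do not name one), contrasting K-means' hard labels with GMM's soft membership probabilities:

import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

X = np.random.rand(500, 2)                      # toy 2-D data

hard = KMeans(n_clusters=3, n_init=10).fit_predict(X)  # one label per point

gmm = GaussianMixture(n_components=3).fit(X)
soft = gmm.predict_proba(X)                     # (500, 3) membership probabilities

# The fitted parameters of each Gaussian:
print(gmm.means_)        # μ: cluster centers
print(gmm.covariances_)  # Σ: spread/shape of each cluster
print(gmm.weights_)      # π: mixing probabilities, summing to 1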
How are Gaussian Mixture Models different
from other clustering methods?
1. Unlike K-means and DBSCAN, Gaussian mixture models are capable
of soft clustering, assigning probabilities of membership to each
cluster. This is particularly useful when data points share
properties with multiple clusters.
2. GMMs can model clusters with different shapes, sizes, and
orientations, which makes them more flexible in capturing the true
structure of the data.
3. GMMs can handle complex datasets where features are correlated.
4. GMMs can also be used for anomaly detection (see the sketch
below).
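A small sketch of point 4, assuming anomalies are points with low likelihood under the fitted mixture; the 1% threshold is an illustrative choice, not from the slides:

import numpy as np
from sklearn.mixture import GaussianMixture

X = np.random.randn(1000, 2)               # toy training data
gmm = GaussianMixture(n_components=3).fit(X)

scores = gmm.score_samples(X)              # per-point log-likelihood
threshold = np.percentile(scores, 1)       # flag the lowest 1% as anomalies
anomalies = X[scores < threshold]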
• What is a Gaussian Mixture Model?
• A Gaussian mixture model is a model whose probability density is
given by a mixture of Gaussian distributions:

p(x) = Σₖ wₖ N(x; μₖ, Σₖ),  k = 1, …, K

where:
• x is a d-dimensional vector.
• μₖ is the mean vector of the k-th Gaussian component.
• Σₖ is the covariance matrix of the k-th Gaussian component.
• wₖ is the mixing weight of the k-th component, where 0 ≤ wₖ ≤ 1
and the weights sum to 1; wₖ is also referred to as the prior
probability of component k.
• N(x; μₖ, Σₖ) is the normal density function of the k-th component.
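A small sketch, assuming SciPy is available, that evaluates this mixture density directly from the definition above:

import numpy as np
from scipy.stats import multivariate_normal

def mixture_density(x, weights, means, covs):
    # p(x) = Σₖ wₖ N(x; μₖ, Σₖ) for one d-dimensional point x.
    return sum(w * multivariate_normal.pdf(x, mean=m, cov=c)
               for w, m, c in zip(weights, means, covs))

# Two 2-D components with weights summing to 1 (illustrative values):
w = [0.6, 0.4]
mu = [np.zeros(2), np.ones(2) * 3]
sigma = [np.eye(2), np.eye(2) * 0.5]
print(mixture_density(np.array([1.0, 1.0]), w, mu, sigma))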
The Expectation-Maximization (EM) Algorithm

• To fit a Gaussian mixture model to the data, we use the
Expectation-Maximization (EM) algorithm, an iterative method that
optimizes the parameters of the Gaussian distributions (means,
covariances, and mixing coefficients). It works in two main steps:
1. Expectation step (E-step): The algorithm calculates the
probability that each data point belongs to each cluster, based on
the current parameter estimates (means, covariances, and mixing
coefficients).
2. Maximization step (M-step): After estimating the probabilities,
the algorithm updates the parameters (means, covariances, and
mixing coefficients) to better fit the data.
• These two steps are repeated until the model converges, meaning
the parameters no longer change significantly between iterations.
1. Initialization: Start with initial guesses for the means,
covariances, and mixing coefficients of each Gaussian distribution.
2. E-step: For each data point, calculate the probability that it
belongs to each Gaussian distribution (cluster).
3. M-step: Update the parameters (means, covariances, and mixing
coefficients) using the probabilities calculated in the E-step.
4. Repeat: Continue alternating between the E-step and M-step until
the log-likelihood of the data (a measure of how well the model
fits the data) converges. A compact EM sketch follows this list.
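A compact NumPy/SciPy sketch of these four steps (an illustration, not the slides' implementation); it runs a fixed number of iterations, whereas a full implementation would also stop early once the log-likelihood converges:

import numpy as np
from scipy.stats import multivariate_normal

def em_gmm(X, K, n_iter=100, seed=0):
    # Minimal EM for a Gaussian mixture. X: (n, d) data, K components.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # 1. Initialization: random data points as means, shared covariance,
    # equal mixing coefficients.
    means = X[rng.choice(n, K, replace=False)]
    covs = np.array([np.cov(X.T) + 1e-6 * np.eye(d) for _ in range(K)])
    w = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        # 2. E-step: responsibility of each component for each point.
        dens = np.column_stack(
            [w[k] * multivariate_normal.pdf(X, means[k], covs[k])
             for k in range(K)])                     # (n, K)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # 3. M-step: re-estimate mixing coefficients, means, covariances.
        Nk = resp.sum(axis=0)                        # effective cluster sizes
        w = Nk / n
        means = (resp.T @ X) / Nk[:, None]
        for k in range(K):
            diff = X - means[k]
            covs[k] = ((resp[:, k, None] * diff).T @ diff / Nk[k]
                       + 1e-6 * np.eye(d))           # small ridge for stability
    return w, means, covs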
Example
