Kernel - Opto Engineering
Template matching
Template matching is a technique for recognizing which parts of an image match a template
that represents a model image.
The simplest approach is a pixel-by-pixel comparison, for example through the sum of squared differences:

$$D(r, c) = \sum_{r', c'} \left[ I(r + r',\, c + c') - T(r', c') \right]^2$$

where $r$ and $c$ are the row and column coordinates and the sum runs over the template coordinates $r', c'$, i.e. between 0 and the number of template rows/columns. The closer this value is to zero, the more likely it is that the analysed portion matches the template.
This approach is greatly affected by the absolute pixel values. The robustness of the search can be improved with a simple normalization based on the average pixel values of the template and of the image.
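As a minimal sketch of this idea (the function name and the toy image are ours, not from the article), the mean-normalized sum of squared differences can be computed with NumPy by sliding the template over the image:

```python
import numpy as np

def match_template_ssd(image, template):
    """For each offset, subtract the mean of the template and of the
    current image patch (the simple normalization described above),
    then accumulate the sum of squared differences (SSD)."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    scores = np.empty((ih - th + 1, iw - tw + 1))
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            patch = image[r:r + th, c:c + tw]
            d = (patch - patch.mean()) - t
            scores[r, c] = np.sum(d * d)  # closer to 0 = better match
    return scores

# Toy check: the template is an exact crop of the image, so the SSD
# is exactly zero at its true location (row 1, column 1).
img = np.array([[1, 2, 3, 4],
                [5, 6, 7, 8],
                [9, 8, 7, 6]], dtype=float)
tmpl = img[1:3, 1:3].copy()
scores = match_template_ssd(img, tmpl)
best = np.unravel_index(np.argmin(scores), scores.shape)  # → (1, 1)
```

The double Python loop keeps the sketch readable; a production implementation would vectorize it or use an FFT-based correlation.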
An alternative approach is to compare template features with image features. One example is shape matching, which compares the gradient vectors along the image outlines.
Shape matching
Some points of the outline (red dots) are extracted from the template; the positions of these points are stored relative to a reference point (blue dot), which in our case has coordinates (0, 0), together with the gradient vector at each of these points.
The gradient vectors of the template and of the image are then compared while sliding the reference point across the entire image, building a score matrix whose values are:
$$M(r, c) = \frac{1}{n} \sum_{i=1}^{n} \frac{\langle G_I^i \mid G_T^i \rangle}{\lvert G_I^i \rvert\, \lvert G_T^i \rvert}$$
where the sum runs over the subset of selected template points: the gradient vector $G_I^i$ of the image point with coordinates $(u, v) = (r, c) + (x_i, y_i)$ – where $(r, c)$ is the current offset and $(x_i, y_i)$ the position of the point relative to the reference point of the template – is compared with the gradient vector $G_T^i$ of the template point with coordinates $(x_i, y_i)$.
Thanks to the normalization, these values always lie between −1 and 1. If the orientation (sign) of the gradient is irrelevant and only its direction matters, the formula can be modified:
$$M(r, c) = \frac{1}{n} \sum_{i=1}^{n} \frac{\left\lvert \langle G_I^i \mid G_T^i \rangle \right\rvert}{\lvert G_I^i \rvert\, \lvert G_T^i \rvert}$$
The closer the value is to 1, the more likely it is for the image to contain the required
template.
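A sketch of this scoring in NumPy, assuming the gradients are computed with `np.gradient` as a stand-in for a proper edge filter (all names here are illustrative, not from the article):

```python
import numpy as np

def shape_match_score(image, points, tmpl_grads, r, c):
    """Evaluate M(r, c): the mean, over the selected template points,
    of the normalized absolute dot product between the image gradient
    at (r, c) + (x_i, y_i) and the stored template gradient.
    `points` holds the (x_i, y_i) offsets from the reference point."""
    gy, gx = np.gradient(image.astype(float))  # d/dy, d/dx
    total = 0.0
    for (xi, yi), (tgx, tgy) in zip(points, tmpl_grads):
        gi = np.array([gx[r + yi, c + xi], gy[r + yi, c + xi]])
        gt = np.array([tgx, tgy])
        ni, nt = np.linalg.norm(gi), np.linalg.norm(gt)
        if ni == 0.0 or nt == 0.0:
            continue  # gradient undefined here: skip this point
        total += abs(gi @ gt) / (ni * nt)
    return total / len(points)

# Toy check: the template gradients are sampled from the image itself,
# so the score at the true offset (0, 0) should be 1.
image = np.fromfunction(lambda y, x: (x - y) ** 2, (8, 8))
gy, gx = np.gradient(image)
points = [(1, 3), (4, 2), (6, 1)]  # (x_i, y_i) offsets
tmpl_grads = [(gx[y, x], gy[y, x]) for (x, y) in points]
score = shape_match_score(image, points, tmpl_grads, 0, 0)
```

In practice this score would be evaluated for every offset $(r, c)$ and the maxima kept as match candidates.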
The methods described above are not invariant to scale and rotation, but they can be modified and adapted for this purpose.
Contour analysis
An image or a contour (binary image) can be analysed by moments.
A moment $M$ of order $(p, q)$ is defined as follows:

$$M_{pq} = \iint x^p\, y^q\, I(x, y)\, dx\, dy$$

with the double integral running over the whole domain of $x$ and $y$ (the whole image or a ROI). As digital images represent a discrete subspace, we can replace the double integral with a double summation:

$$M_{pq} = \sum_{x} \sum_{y} x^p\, y^q\, I(x, y)$$
Simple moments:
- If we calculate $M_{00}$ of the pixel intensity function $I(x, y)$, we obtain the sum of the pixel values (for monochrome images).
- If we calculate $M_{00}$ of the indicator function of non-zero pixels (value 1 for each non-zero pixel, 0 otherwise), we obtain the contour area.
The image centroid coordinates can be calculated as follows:
$$\bar{x} = \frac{M_{10}}{M_{00}}, \qquad \bar{y} = \frac{M_{01}}{M_{00}}$$
The central moments (referred to the centroid coordinates) can be calculated from the previous moments:

$$\mu_{pq} = \sum_{x} \sum_{y} (x - \bar{x})^p\, (y - \bar{y})^q\, I(x, y)$$

These have the property of being invariant with respect to translations (since the centroid coordinates are derived from the $M$ moments).
The normalized central moments, which are in addition invariant to scale, are obtained as:

$$\eta_{pq} = \frac{\mu_{pq}}{\mu_{00}^{\,1 + \frac{p + q}{2}}}$$
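These definitions translate almost verbatim to code. The following NumPy sketch (function names ours) computes raw, central, and normalized moments and checks the translation invariance of the central moments:

```python
import numpy as np

def raw_moment(img, p, q):
    """M_pq = sum over all pixels of x^p * y^q * I(x, y)."""
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return float(np.sum((x ** p) * (y ** q) * img))

def central_moment(img, p, q):
    """mu_pq: like M_pq, but referred to the centroid coordinates."""
    m00 = raw_moment(img, 0, 0)
    xbar = raw_moment(img, 1, 0) / m00
    ybar = raw_moment(img, 0, 1) / m00
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return float(np.sum(((x - xbar) ** p) * ((y - ybar) ** q) * img))

def normalized_moment(img, p, q):
    """eta_pq = mu_pq / mu_00^(1 + (p + q) / 2)."""
    return central_moment(img, p, q) / central_moment(img, 0, 0) ** (1 + (p + q) / 2)

# Toy check: a 3x4 rectangle of ones (an indicator function), and the
# same rectangle translated elsewhere in the image.
blob = np.zeros((10, 10)); blob[2:5, 3:7] = 1.0
shifted = np.zeros((10, 10)); shifted[5:8, 1:5] = 1.0
area = raw_moment(blob, 0, 0)         # → 12.0 (contour area)
xbar = raw_moment(blob, 1, 0) / area  # → 4.5 (centroid x)
mu20 = central_moment(blob, 2, 0)     # identical for both blobs
```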
Kernel
The kernel is a small mask used to apply filters to an image. These masks have the shape of a
square matrix, which is why they are also called convolution matrices.
Let’s consider matrix A, which contains the grey values of all the pixels in the original image, and matrix B, the kernel matrix. Now let’s superimpose matrix B on matrix A, so that the centre of matrix B corresponds to the pixel of matrix A to be processed.
The value of the corresponding pixel in the target image (matrix C) is calculated as the sum of all the elements of the Hadamard (element-wise) product between matrix B and the overlapped portion of matrix A.
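A direct, unoptimized sketch of this operation in NumPy (names and border policy are ours; here border pixels, where the kernel would overhang the image, are simply copied unchanged):

```python
import numpy as np

def apply_kernel(image, kernel):
    """Centre the kernel on each pixel, take the Hadamard (element-wise)
    product with the underlying neighbourhood, and sum the result.
    Border pixels, where the kernel would overhang, are copied as-is."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    out = image.astype(float).copy()
    for r in range(ph, image.shape[0] - ph):
        for c in range(pw, image.shape[1] - pw):
            patch = image[r - ph:r + ph + 1, c - pw:c + pw + 1]
            out[r, c] = np.sum(patch * kernel)
    return out

# Toy check: a 3x3 averaging (box blur) kernel spreads a single bright
# pixel evenly over its neighbourhood.
img = np.zeros((5, 5)); img[2, 2] = 9.0
box = np.ones((3, 3)) / 9.0
res = apply_kernel(img, box)
```

Note that, as described in the text, the kernel is applied without flipping, i.e. as a correlation; for symmetric kernels such as the box blur the distinction makes no difference.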
Particularly useful kernels are derivative filters. Let’s analyse two Sobel filters:
$$G_x = \begin{bmatrix} 1 & 0 & -1 \\ 2 & 0 & -2 \\ 1 & 0 & -1 \end{bmatrix} \qquad G_y = \begin{bmatrix} 1 & 2 & 1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{bmatrix}$$

These two filters represent, respectively, the derivatives (gradients) along the abscissa, $G_x$, and along the ordinate, $G_y$, of the image. From them an additional matrix representing the gradient magnitude can be calculated:

$$\sqrt{G_x^2 + G_y^2}$$
Clearly, this process represents the preliminary step to extract the edges of the image.
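The whole pipeline, applying both Sobel kernels and combining their responses into the gradient magnitude, can be sketched as follows (toy image and function name ours):

```python
import numpy as np

SOBEL_X = np.array([[1, 0, -1],
                    [2, 0, -2],
                    [1, 0, -1]], dtype=float)
SOBEL_Y = np.array([[ 1,  2,  1],
                    [ 0,  0,  0],
                    [-1, -2, -1]], dtype=float)

def gradient_magnitude(image):
    """Apply both Sobel kernels to every interior pixel and combine
    the responses as sqrt(Gx^2 + Gy^2)."""
    h, w = image.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            patch = image[r - 1:r + 2, c - 1:c + 2]
            gx[r, c] = np.sum(patch * SOBEL_X)
            gy[r, c] = np.sum(patch * SOBEL_Y)
    return np.sqrt(gx ** 2 + gy ** 2)

# Toy check: a vertical step edge between columns 2 and 3 produces a
# strong response along the edge and none in the flat regions.
img = np.zeros((5, 6)); img[:, 3:] = 10.0
mag = gradient_magnitude(img)  # mag[2, 2] is 40.0, mag[2, 4] is 0.0
```

Thresholding this magnitude map is then the simplest way to turn the gradient response into a binary edge image.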