
Unit-4: Digital Imaging

Digital imaging involves creating digital images through capturing physical scenes or object interiors, and often includes image processing, compression, storage, display and printing. An image consists of pixels arranged in a grid, with resolution measuring pixels per inch. Color images use multiple bits per pixel to represent colors.

What is Digital Imaging?

Digital imaging or digital image acquisition is the creation of digital images, such as of a
physical scene or of the interior structure of an object. The term is often assumed to imply or
include the processing, compression, storage, printing, and display of such images.

An image consists of a rectangular array of dots called pixels. The size of the image is specified as width × height, in numbers of pixels. The physical size of the image, in inches or centimeters, depends on the resolution of the device on which the image is displayed. Resolution is usually measured in DPI (dots per inch). An image will therefore appear smaller on a device with a higher resolution than on one with a lower resolution. For color images, one needs enough bits per pixel to represent all the colors in the image. The number of bits per pixel is called the depth of the image.
https://www.slideshare.net/abshinde/image-processing-fundamentals-47399733
https://www.slideshare.net/PreetiGupta6/image-processing1-introduction (26-35 Key Stages in
DIP)
Digital Image Processing Basics
Digital Image Processing means processing digital images by means of a digital computer. We can also say that it is the use of computer algorithms to enhance an image or to extract useful information from it.

Image processing mainly includes the following steps:

1. Importing the image via image acquisition tools;
2. Analysing and manipulating the image;
3. Producing output, which can be an altered image or a report based on the analysis of that image.
What is an image?
An image is defined as a two-dimensional function, F(x, y), where x and y are spatial coordinates, and the amplitude of F at any pair of coordinates (x, y) is called the intensity of the image at that point. When x, y, and the amplitude values of F are all finite, we call it a digital image.
In other words, an image can be defined by a two-dimensional array arranged in rows and columns.
A digital image is composed of a finite number of elements, each of which has a particular value at a particular location. These elements are referred to as picture elements, image elements, or pixels; pixel is the term most widely used.
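The row-and-column view above can be made concrete with a small array. The sketch below uses NumPy (an assumption; any array library would do) to build a tiny grayscale image and read back one pixel:

```python
import numpy as np

# A tiny 3x4 grayscale "image": each element is one pixel,
# arranged in rows (y) and columns (x).
image = np.array([
    [ 0,  64, 128, 255],
    [32,  96, 160, 224],
    [16,  80, 144, 208],
], dtype=np.uint8)

height, width = image.shape     # size is width x height = 4 x 3
intensity = image[1, 2]         # intensity F at (row 1, column 2) -> 160
print(width, height, intensity)
```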
Types of an image
1. BINARY IMAGE– The binary image, as its name suggests, contains only two pixel values, 0 and 1, where 0 refers to black and 1 refers to white. This image is also known as a monochrome image.
2. BLACK AND WHITE IMAGE– An image which consists of only black and white pixels is called a black and white image.
3. 8-bit COLOR FORMAT– This is the most common image format. It has 256 different shades and is commonly known as a grayscale image. In this format, 0 stands for black, 255 stands for white, and 127 stands for gray.
4. 16-bit COLOR FORMAT– This is a color image format with 65,536 different colors, also known as High Color format. In this format the distribution of color is not the same as in a grayscale image; a 16-bit value is divided into three parts for Red, Green and Blue, the famous RGB format.

Image as a Matrix
As we know, images are represented in rows and columns; we have the following syntax in which images are represented:

The right side of this equation is a digital image by definition. Every element of this matrix is called an image element, picture element, or pixel.

DIGITAL IMAGE REPRESENTATION IN MATLAB:

In MATLAB, indexing starts from 1 instead of 0. Therefore, MATLAB's f(1,1) corresponds to f(0,0) in the mathematical notation.

Hence the two representations of the image are identical, except for the shift in origin.
In MATLAB, matrices are stored in variables such as X, x, input_image, and so on. As in other programming languages, variable names must begin with a letter.

PHASES OF IMAGE PROCESSING:


1. ACQUISITION– It could be as simple as being given an image that is already in digital form. The main work involves:
a) Scaling
b) Color conversion (RGB to gray or vice versa)
2. IMAGE ENHANCEMENT– It is among the simplest and most appealing areas of image processing. It is used to bring out details that are obscured in an image, and it is subjective.
3. IMAGE RESTORATION– It also deals with improving the appearance of an image, but it is objective (restoration is based on mathematical or probabilistic models of image degradation).
4. COLOR IMAGE PROCESSING– It deals with pseudocolor and full-color image processing; color models are applicable to digital image processing.
5. WAVELETS AND MULTI-RESOLUTION PROCESSING– It is the foundation for representing images at various degrees of resolution.
6. IMAGE COMPRESSION– It involves developing functions to reduce the amount of data needed to represent an image. It mainly deals with image size or resolution.
7. MORPHOLOGICAL PROCESSING– It deals with tools for extracting image components that are useful in the representation and description of shape.
8. SEGMENTATION PROCEDURE– It includes partitioning an image into its constituent parts or objects. Autonomous segmentation is the most difficult task in image processing.
9. REPRESENTATION & DESCRIPTION– It follows the output of the segmentation stage; choosing a representation is only part of the solution for transforming raw data into processed data.
10. OBJECT DETECTION AND RECOGNITION– It is the process that assigns a label to an object based on its descriptors.
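As a small illustration of the color-conversion work in the acquisition phase (RGB to gray), here is a sketch in Python/NumPy using the common ITU-R BT.601 luma weights; the function name and test image are made up for the example:

```python
import numpy as np

def rgb_to_gray(rgb):
    """Convert an H x W x 3 RGB image to grayscale using the
    ITU-R BT.601 luma weights (0.299 R + 0.587 G + 0.114 B)."""
    weights = np.array([0.299, 0.587, 0.114])
    return np.rint(rgb @ weights).astype(np.uint8)

# A 1x2 RGB image: one pure-red pixel, one pure-white pixel.
rgb = np.array([[[255, 0, 0], [255, 255, 255]]], dtype=np.uint8)
print(rgb_to_gray(rgb))   # red -> 76, white -> 255
```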

OVERLAPPING FIELDS WITH IMAGE PROCESSING

According to block 1, if the input is an image and we get an image as output, then it is termed Digital Image Processing.
According to block 2, if the input is an image and we get some kind of information or description as output, then it is termed Computer Vision.
According to block 3, if the input is some description or code and we get an image as output, then it is termed Computer Graphics.
According to block 4, if the input is a description, keywords, or code and we get a description or keywords as output, then it is termed Artificial Intelligence.

Image Edge Detection Operators in Digital Image Processing


Edges are significant local changes of intensity in a digital image. An edge can be defined as a
set of connected pixels that forms a boundary between two disjoint regions. There are three types
of edges:
 Horizontal edges
 Vertical edges
 Diagonal edges
Edge detection is a method of segmenting an image into regions of discontinuity. It is a widely used technique in digital image processing tasks such as

 pattern recognition
 image morphology
 feature extraction
Edge detection allows users to observe those features of an image where there is a significant change in the gray level. Such a change indicates the end of one region in the image and the beginning of another. Edge detection reduces the amount of data in an image while preserving its structural properties.
Edge detection operators are of two types:
 Gradient-based operators, which compute first-order derivatives in a digital image, e.g. the Sobel operator, Prewitt operator, and Roberts operator
 Gaussian-based operators, which compute second-order derivatives in a digital image, e.g. the Canny edge detector and the Laplacian of Gaussian
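A minimal sketch of a gradient-based operator, assuming Python with NumPy: the Sobel kernels below estimate the first-order derivatives, and the hand-rolled convolution is for illustration only (real code would use a library routine):

```python
import numpy as np

def convolve2d(img, kernel):
    """Minimal 'valid' 2-D convolution (kernel flipped), no padding."""
    k = np.flipud(np.fliplr(kernel))
    kh, kw = k.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(img[y:y+kh, x:x+kw] * k)
    return out

# Sobel kernels: first-order derivative estimates in x and y.
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
sobel_y = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]])

# Synthetic image with a sharp vertical edge (dark left, bright right).
img = np.zeros((5, 6))
img[:, 3:] = 255

gx = convolve2d(img, sobel_x)
gy = convolve2d(img, sobel_y)
magnitude = np.hypot(gx, gy)   # gradient magnitude: strong at the edge
print(magnitude)
```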

Image data types

https://slideplayer.com/slide/7102350/ (Go through this ppt and theory is mentioned below)

Images can be created using different techniques of data representation, called data types, such as monochrome and colored images. A monochrome image is created using a single color, whereas a colored image is created using multiple colors. Some important image data types are the following:
 1-bit images– An image is a set of pixels, where a pixel is a picture element in a digital image. In 1-bit images, each pixel is stored as a single bit (0 or 1). A bit has only two states: on or off, white or black, true or false. Therefore, such an image is also referred to as a binary image. A 1-bit image is also known as a 1-bit monochrome image because it contains one color: black for the off state and white for the on state.
A 1-bit image with resolution 640 × 480 needs a storage space of 640 × 480 bits:
640 × 480 bits = (640 × 480) / 8 bytes = (640 × 480) / (8 × 1024) KB = 37.5 KB.
The clarity or quality of a 1-bit image is very low.
 8-bit gray-level images– Each pixel of an 8-bit gray-level image is represented by a single byte (8 bits). Therefore each pixel of such an image can hold one of 2^8 = 256 values between 0 and 255, giving it a brightness value on a scale from black (0, no brightness or intensity) to white (255, full brightness or intensity). For example, a dark pixel might have a value of 15 and a bright one might be 240.
A grayscale digital image is an image in which the value of each pixel is a single sample carrying intensity information. Such images are composed exclusively of shades of gray, varying from black at the weakest intensity to white at the strongest. Grayscale images are also called monochromatic, denoting the presence of only one (mono) color (chrome). An image is represented by a bitmap: a simple matrix of the tiny dots (pixels) that form an image as displayed on a computer screen or printed.
An 8-bit image with resolution 640 × 480 needs a storage space of 640 × 480 bytes = (640 × 480) / 1024 KB = 300 KB. Therefore an 8-bit image needs 8 times more storage space than a 1-bit image.
 24-bit color images– In a 24-bit color image, each pixel is represented by three bytes, usually representing RGB (Red, Green and Blue). True color is usually defined to mean 256 shades each of red, green and blue, for a total of 16,777,216 color variations. It provides a method of representing and storing graphical image information in an RGB color space such that a large number of colors, shades and hues can be displayed in an image, as in high-quality photographic images or complex graphics.
Many 24-bit color images are stored as 32-bit images, with the extra byte per pixel used to store an alpha value representing special-effect information.
A 24-bit color image with resolution 640 × 480 needs a storage space of 640 × 480 × 3 bytes = (640 × 480 × 3) / 1024 KB = 900 KB without any compression. Likewise, a 32-bit color image with resolution 640 × 480 needs a storage space of 640 × 480 × 4 bytes = 1200 KB without any compression.

Disadvantages

o Require large storage space


o Many monitors can display only 256 different colors at any one time. Therefore,
in this case it is wasteful to store more than 256 different colors in an image.
 8-bit color images– 8-bit color graphics is a method of storing image information in a computer's memory or in an image file in which one byte (8 bits) represents each pixel. The maximum number of colors that can be displayed at once is 256. 8-bit color graphics come in two forms. In the first form, the image stores, for each pixel, not a color but an 8-bit index into a color map, instead of the full 24-bit color value. Such 8-bit image formats therefore consist of two parts: a color map describing the colors present in the image, and the array of index values for each pixel. In most color maps, each color is chosen from a palette of 16,777,216 colors (24 bits: 8 red, 8 green, 8 blue).
In the other form, the 8 bits use 3 bits for red, 3 bits for green and 2 bits for blue. This second form is often called 8-bit true color, as it does not use a palette at all. When a 24-bit full-color image is converted into an 8-bit indexed image, some of the colors have to be eliminated, a process known as color quantization.
An 8-bit color image with resolution 640 × 480 needs a storage space of 640 × 480 bytes = (640 × 480) / 1024 KB = 300 KB without any compression.
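The storage figures worked out above for 640 × 480 images all follow one formula; a small Python helper (hypothetical, for illustration) reproduces them:

```python
# Storage needed for a width x height image at a given bit depth,
# reproducing the figures worked out above for 640 x 480 images.
def storage_kb(width, height, bits_per_pixel):
    return width * height * bits_per_pixel / 8 / 1024

for bpp, name in [(1, "1-bit"), (8, "8-bit"), (24, "24-bit"), (32, "32-bit")]:
    print(f"{name}: {storage_kb(640, 480, bpp):g} KB")
# 1-bit: 37.5 KB, 8-bit: 300 KB, 24-bit: 900 KB, 32-bit: 1200 KB
```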

Color lookup tables

A color look-up table (LUT) is a mechanism used to transform a range of input colors into another range of colors. A color look-up table converts the logical color numbers stored in each pixel of video memory into physical colors, represented as RGB triplets, which can be displayed on a computer monitor. Each pixel of the image stores only an index value, or logical color number. For example, if a pixel stores the value 30, the meaning is to go to row 30 in the color look-up table. The LUT is often called a palette.
Characteristics of a LUT are the following:
 The number of entries in the palette determines the maximum number of colors which can appear on screen simultaneously.
 The width of each entry in the palette determines the number of colors which the full palette can represent.
A common example is a palette of 256 colors: the number of entries is 256, so each entry is addressed by an 8-bit pixel value. Each color can be chosen from a full palette of 16.7 million colors; each entry is 24 bits wide, with 8 bits per channel, giving 256 levels for each of the red, green and blue components and 256 × 256 × 256 = 16,777,216 colors in total.
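The LUT step itself is just an index-to-triplet mapping. A sketch in Python/NumPy, with a made-up 4-entry palette standing in for a full 256-entry one:

```python
import numpy as np

# A hypothetical 4-entry palette (index -> RGB triplet); real palettes
# typically have 256 entries of 24 bits each.
palette = np.array([
    [  0,   0,   0],   # 0: black
    [255,   0,   0],   # 1: red
    [  0, 255,   0],   # 2: green
    [255, 255, 255],   # 3: white
], dtype=np.uint8)

# Indexed image: each pixel stores only a logical color number.
indexed = np.array([[0, 1], [2, 3]], dtype=np.uint8)

# The LUT step: fancy indexing maps every index to its RGB triplet.
rgb = palette[indexed]          # shape (2, 2, 3)
print(rgb[0, 1])                # pixel storing index 1 -> [255 0 0]
```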

Common image file formats include JPEG, GIF, PNG, etc.

What is Video Indexing?

1. Video indexing is the process of giving viewers a way to access and navigate content easily, similar to indexing a book.

2. It involves the selection of indexes, derived from the content of the video, to help organize video data, together with metadata that represents the original video stream.

Transformation

Transformation is a function: a function that maps one set to another set after performing some operations.
Digital Image Processing system
We have already seen in the introductory tutorials that in digital image processing we develop a system whose input is an image and whose output is an image too. The system performs some processing on the input image and gives the processed image as its output. It is shown below.

The function applied inside this digital system, which processes an image and converts it into the output, is called the transformation function.
It shows the transformation, or relation, by which image1 is converted to image2.

Image transformation.

Consider this equation:

g(x, y) = T{ f(x, y) }
In this equation,
f(x, y) = the input image on which the transformation function is applied,
g(x, y) = the output or processed image,
T = the transformation function.
This relation between the input image and the processed output image can also be represented as
s = T(r)
where r is the pixel value (gray-level intensity) of f(x, y) at any point, and s is the pixel value (gray-level intensity) of g(x, y) at the same point.
Basic gray-level transformations have been discussed in our tutorial of basic gray-level transformations.
Now we are going to discuss some very basic transformation functions.
Examples
Consider this transformation function.
Let us take the point r to be 256, and the point p to be 127. Consider the output to be a one-bpp image, which means we have only two levels of intensity, 0 and 1. In this case the transformation shown by the graph can be explained as follows.

All the pixel intensity values below 127 (point p) map to 0, meaning black, and all the pixel intensity values greater than 127 map to 1, meaning white. But at the exact point 127 there is a sudden jump in the transformation, so we cannot tell whether at that exact point the value would be 0 or 1.
Mathematically this transformation function can be denoted as:

Consider another transformation like this


Now if you look at this particular graph, you will see a straight transition line between the input image and the output image.
It shows that for each pixel or intensity value of the input image, the output image has the same intensity value. That means the output image is an exact replica of the input image.
It can be mathematically represented as:
g(x, y) = f(x, y)
The input and output images in this case are shown below.
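Both point transformations s = T(r) discussed above, the threshold at p = 127 and the identity, can be sketched as follows (Python/NumPy assumed; mapping the exact point 127 to 0 is a choice here, since the text leaves it undefined):

```python
import numpy as np

def threshold(r, p=127):
    """Point transformation s = T(r): 0 at or below the threshold p,
    1 above it (p = 127 follows the worked example; the value at
    exactly p is undefined in the text, here it maps to 0)."""
    return np.where(r > p, 1, 0)

def identity(r):
    """s = T(r) = r: the output image is an exact replica of the input."""
    return r

pixels = np.array([0, 50, 127, 128, 200, 255])
print(threshold(pixels))   # -> [0 0 0 1 1 1]
print(identity(pixels))    # unchanged
```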

What is digital design?

While graphic design focuses mostly on static designs, digital design involves movement, such as animation, interactive pages, and 2D or 3D modeling. Digital designers create images and elements that will end up on a screen, whether that is a computer screen, a phone screen, a dashboard, or any other digital format. The design may also include audio and sound effects to complement the visuals.

The commonality between digital design and graphic design is they both create forms of visual
communication, often related to an idea, image, or brand. In both instances, the designer’s goal
is to transmit information and ideas to the audience using symbols and imagery. This visual
communication creates meaning for the viewer. Visuals help to evoke emotions in your
audience, which can in turn help with brand awareness and affinity.

Interactivity in digital design

Another key characteristic of digital design is interactivity. Beyond being aesthetically pleasing, your design needs to be usable, with a focus on how your user will interact with it. This is important no matter what medium you're designing for, from web design to mobile app design and other mediums like wearables. As technology advances, digital designers face the challenge of creating for a variety of screen sizes, all with various uses and contexts.
Analytics in digital design

Data and analytics also play a role in defining digital design, as they allow a designer to gauge
the product’s performance. It is difficult to track the performance of a physical flyer or
brochure, but a digital design does not face the same problem. By reviewing performance
indicators such as likes, shares, downloads, and pageviews, you’ll get a better understanding of
how effective your design is.

The introduction of analytics and data also empowers the digital designer to make data-informed design decisions throughout the design process. For example, digital designers often run A/B tests to collect quantitative data about two (or more) designs. This added layer of analytics in digital design is a key differentiator between digital and graphic design.

In experiential graphic design, digital technology refers to the means of creating, storing,
processing, and displaying of electronic communications, stories, moving content, effects and
added functionality in a built environment. Generally, media created using digital technology
are displayed on computer screens or LED or LCD displays. However, evolving digital
interfaces and projection technologies are increasing the array of surfaces on which moving
images can appear.

Digital design focuses on the design of digitally mediated environments and experiences,
including websites, web applications, exhibition experiences, and gaming. Digital signage,
another application of digital technology, is made possible by the centralized distribution and
playback of digital content on networks of displays. Digital signage often appears in retail
applications and, in addition, is increasingly a component of comprehensive wayfinding
systems designed for transportation and healthcare environments.

The seven basic elements of graphic design are line, shape, color, texture, type, space and
image.

How is technology used in graphic design?


Technology has had more positive than negative impacts on graphic design. Indeed, technology has promoted the work of designers through printers, the internet, scanners and design programs. Technology has positively affected the design industry by helping designers improve their skills and work.

What is TWAIN?

TWAIN - What Is It And Do You Need It? - YouTube

HP Futuresmart MFPs TWAIN Scanning | HP - YouTube (How to download TWAIN)


https://www.slideserve.com/deanna/introduction-to-scanners

 TWAIN is a widely used program that lets you scan an image (using a scanner) directly into the application (such as Photoshop) where you want to work with the image.
 Without TWAIN, you would have to close the application that was open, open a special application to receive the image, and then move the image to the application where you wanted to work with it.
 The TWAIN driver runs between an application and the scanner hardware. TWAIN usually comes as part of the software package you get when you buy a scanner. It is also integrated into Photoshop and similar image-manipulation programs.
 The software was developed by a working group from major scanner manufacturers and scanning-software developers and is now an industry standard.
 In several accounts, TWAIN is playfully expanded as an acronym for "technology without an interesting name." In fact the working group derived it from the saying "Ne'er the twain shall meet," because the program sits between the driver and the application; the name is not intended to be an acronym.

 TWAIN is a scanning protocol that was initially used on the Microsoft Windows and Apple Macintosh operating systems; Linux/Unix support was added in version 2.0. The first release was in 1992. It was designed as an interface between image-processing software and scanners or digital cameras. TWAIN is the most commonly used protocol and the standard for document scanners. In most cases, users can get a free TWAIN driver or easily find one on the manufacturer's website for their scanners (Canon, HP, Epson, Kodak, Xerox, etc.).

The software has three key elements:

 The Application
 The Source Manager
 The Data Source

The Source Manager Interface provided by TWAIN allows your application to control data
sources, such as scanners and digital cameras, and acquire images, as shown in the figure
below.

Although nearly all scanners come with a TWAIN driver that complies with the TWAIN standard, the implementation of each TWAIN scanner driver may vary slightly in terms of the scanner settings dialog, custom capabilities, and other features. This is good if you want to use features specific to a particular scanner model, but if you want your application's scanning behavior to be consistent across different scanners, you need to be wary of customized code.

The TWAIN standard is now evolving to the next generation, called TWAIN Direct. The TWAIN Working Group, of which Dynamsoft is an associate member, claims that with TWAIN Direct, vendor-specific drivers will no longer be needed: the application will be able to communicate directly with scanning devices. The best of TWAIN Direct is still to come.

Image Subtraction

This is mainly used to enhance the difference between images. It is used for background subtraction to detect moving objects, and in medical science to detect blockages in veins, a field known as mask-mode radiography. Here we take two images, one before injecting a contrast medium and one after. We then subtract the two images to see how the medium propagated and whether there is any blockage.

Image Multiplication

This can be used to extract Region of interest (ROI) from an image. We simply create a mask
and multiply the image with the mask to get the area of interest.
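Both arithmetic operations can be sketched in a few lines of Python/NumPy; the tiny 2 × 2 frames and mask below are made up for illustration:

```python
import numpy as np

# Image subtraction: the difference of two frames highlights change.
before = np.array([[10, 10], [10, 10]], dtype=np.int16)
after  = np.array([[10, 10], [10, 90]], dtype=np.int16)  # one pixel changed
diff = np.abs(after - before)
print(diff)            # non-zero only where something moved

# Image multiplication for ROI extraction: multiply by a binary mask
# to keep only the region of interest.
image = np.array([[5, 6], [7, 8]])
mask  = np.array([[0, 0], [1, 1]])   # keep only the bottom row
print(image * mask)    # -> [[0 0] [7 8]]
```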

Affine Transformation
In Euclidean geometry, an affine transformation, or an affinity (from the Latin, affinis,
"connected with"), is a geometric transformation that preserves lines and parallelism (but not
necessarily distances and angles).
More generally, an affine transformation is an automorphism of an affine space (Euclidean
spaces are specific affine spaces), that is, a function which maps an affine space onto itself while
preserving both the dimension of any affine subspaces (meaning that it sends points to points,
lines to lines, planes to planes, and so on) and the ratios of the lengths of parallel line segments.
Consequently, sets of parallel affine subspaces remain parallel after an affine transformation. An
affine transformation does not necessarily preserve angles between lines or distances between
points, though it does preserve ratios of distances between points lying on a straight line.
Linear mapping method using affine transformation
Affine transformation is a linear mapping method that preserves points, straight lines, and planes.
Sets of parallel lines remain parallel after an affine transformation.

The affine transformation technique is typically used to correct for geometric distortions or deformations that occur with non-ideal camera angles. For example, affine transformations are applied to satellite imagery to correct for wide-angle lens distortion, and they are used in panorama stitching and image registration.

Transforming and fusing the images to a large, flat coordinate system is desirable to eliminate
distortion. This enables easier interactions and calculations that don’t require accounting for
image distortion.

The following table illustrates the different affine transformations: translation, scale, shear, and
rotation.

Affine Transform | Transformation Matrix              | Description
Translation      | [1 0 0; 0 1 0; tx ty 1]            | tx specifies the displacement along the x axis; ty specifies the displacement along the y axis.
Scale            | [sx 0 0; 0 sy 0; 0 0 1]            | sx specifies the scale factor along the x axis; sy specifies the scale factor along the y axis.
Shear            | [1 shy 0; shx 1 0; 0 0 1]          | shx specifies the shear factor along the x axis; shy specifies the shear factor along the y axis.
Rotation         | [cos(q) sin(q) 0; -sin(q) cos(q) 0; 0 0 1] | q specifies the angle of rotation.

(Matrices are written row by row; points are treated as row vectors [x y 1] multiplied on the left of the matrix.)
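Following the row-vector convention of the matrices in the table above, a short Python/NumPy sketch applies translation, scale, and rotation to two sample points (the point values are arbitrary):

```python
import numpy as np

# Points as rows of [x, y, 1]; transform with p' = p @ A, matching the
# row-vector convention of the matrices in the table above.
points = np.array([[1.0, 0.0, 1.0],
                   [0.0, 1.0, 1.0]])

tx, ty = 2.0, 3.0
translate = np.array([[1, 0, 0],
                      [0, 1, 0],
                      [tx, ty, 1]])

sx, sy = 2.0, 0.5
scale = np.array([[sx, 0, 0],
                  [0, sy, 0],
                  [0,  0, 1]])

q = np.pi / 2     # 90-degree rotation
rotate = np.array([[ np.cos(q), np.sin(q), 0],
                   [-np.sin(q), np.cos(q), 0],
                   [ 0,         0,         1]])

print(points @ translate)   # (1,0) -> (3,3), (0,1) -> (2,4)
print(points @ scale)       # (1,0) -> (2,0), (0,1) -> (0,0.5)
print(points @ rotate)      # (1,0) -> (0,1), (0,1) -> (-1,0)
```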

Morphological operations are a broad set of image processing operations that process digital images based on their shapes. In a morphological operation, each output pixel is computed from the values of the pixels in the neighborhood of the corresponding input pixel.
By choosing the shape and size of the neighborhood, you can construct a morphological operation that is sensitive to specific shapes in the input image.
Morphological operations apply a structuring element (called a strel in MATLAB) to an input image, creating an output image of the same size.
Types of Morphological operations:

 Dilation: Dilation adds pixels on the object boundaries.


 Erosion: Erosion removes pixels on object boundaries.
 Open: The opening operation erodes an image and then dilates the eroded image, using the
same structuring element for both operations.
 Close: The closing operation dilates an image and then erodes the dilated image, using the
same structuring element for both operations.
Dilation:
 Dilation expands the image pixels, i.e. it is used for expanding an element A by using a structuring element B.
 Dilation adds pixels to object boundaries.
 The value of the output pixel is the maximum value of all the pixels in the neighborhood. A pixel is set to 1 if any of the neighboring pixels has the value 1.

Input:

Figure: Input image

Figure: Output image
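Dilation as described above, the neighborhood maximum, can be sketched with plain NumPy (a hand-rolled loop for clarity; MATLAB's imdilate or a library morphology routine would be used in practice):

```python
import numpy as np

def dilate(img, strel):
    """Binary dilation: an output pixel is 1 if any neighbour under the
    structuring element is 1 (i.e. the maximum over the neighbourhood)."""
    sh, sw = strel.shape
    padded = np.pad(img, ((sh // 2,), (sw // 2,)))
    out = np.zeros_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = np.max(padded[y:y+sh, x:x+sw] * strel)
    return out

strel = np.ones((3, 3), dtype=np.uint8)   # 3x3 square structuring element
img = np.zeros((5, 5), dtype=np.uint8)
img[2, 2] = 1                             # a single foreground pixel
print(dilate(img, strel))                 # grows into a 3x3 block
```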

Edge detection is an image processing technique used to identify points in a digital image with discontinuities, that is, sharp changes in image brightness. The points where the image brightness varies sharply are called the edges (or boundaries) of the image. It is one of the basic steps in image processing, pattern recognition in images, and computer vision. When we process very high-resolution digital images, convolution techniques come to our rescue.

The histogram of a digital image with gray levels in the range [0, L-1] is a discrete function
h(r_k) = n_k,
where r_k is the k-th gray level and n_k is the number of pixels in the image with gray level r_k.
Points about Histogram:

 The histogram of an image provides a global description of the appearance of the image.
 The information obtained from the histogram is global in nature and useful for judging the overall appearance of the image.
 The histogram of an image represents the relative frequency of occurrence of the various gray levels in the image.
Let us assume that an image matrix is given as:
This image matrix contains the pixel values at each (i, j) position in the x-y plane, i.e. a 2D image with gray levels.

There are two ways to plot the histogram of an image:

Method 1: In this method, the x-axis has the gray levels/intensity values and the y-axis has the number of pixels at each gray level. The histogram representation of the above image is:
Explanation: The above image has 1, 2, 3, 4, 5, 6, and 8 as its intensity values, and the occurrences of these intensity values in the image matrix are 2, 1, 3, 2, 2, 3 and 3 respectively, so we map each intensity value and its number of occurrences onto the graph.
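The counts in the explanation above can be reproduced in Python/NumPy. The matrix below is assumed for illustration (the original matrix figure is not reproduced in the text, so this one is merely consistent with the stated counts):

```python
import numpy as np

# A 4x4 matrix chosen to match the counts described above:
# intensity values 1..6 and 8 occurring 2, 1, 3, 2, 2, 3, 3 times.
img = np.array([[1, 1, 2, 3],
                [3, 3, 4, 4],
                [5, 5, 6, 6],
                [6, 8, 8, 8]])

# h(r_k) = n_k: each gray level paired with its pixel count.
levels, counts = np.unique(img, return_counts=True)
for r_k, n_k in zip(levels, counts):
    print(f"gray level {r_k}: {n_k} pixels")
```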

The histogram originated outside of Lean and Six Sigma, but it has many applications in this area, especially in organizations that rely heavily on statistical analysis. It is a flexible tool that can be put to various uses, and when used correctly it can shed a lot of light on the way your operations are running. As a good leader, it is important to familiarize yourself with the possibilities offered by the histogram and to use it whenever appropriate.

1. Identifying the most common process outcome

A quick look at a histogram can immediately reveal what the most common outcome of a
process with varying outcomes is. By simply collecting all data related to the final state of the
process and organizing it in a histogram, any special trends will quickly become apparent. Of
course, things can get complicated when you have a large number of possible outcomes,
especially when some of them are grouped up and appear alongside one another commonly.

2. Identifying data symmetry

Sometimes, you will spot trends that lean in two directions simultaneously. A histogram can
make it very easy to identify those occurrences and know when your processes are prone to
producing symmetrical results in some circumstances. Occasionally, this can prove very useful
for optimizing certain types of processes. On the other hand, it can also help you identify
possible issues, as sometimes symmetry is not what you expect to see in your results.

3. Spotting deviations

Likewise, a histogram can make it quite obvious when your results are deviating from the
expected values. As long as you’ve collected all data points and arranged them appropriately for
quick reviewing, you can rely on a histogram to tell you when the results are not moving in the
right direction. In fact, if you’re working with a small number of data points, the histogram is
easily the most useful tool for spotting oddities and identifying worrying trends. Keeping a list of
histograms that have been produced in the course of your work and referring back to it can
further make things easy to analyze, as you will additionally know when a deviation is
potentially caused by old issues, or by a recent change in your operations.

4. Verifying equal distribution

In some cases, symmetry is exactly what you’re looking for, especially in a process prone to
random deviations. If you want to make sure your outputs have maximal coverage, a histogram is
a very simple way to check. If one of the data points falls below its expected norm, this becomes
apparent very quickly, and you can take appropriate measures to correct the situation. Combined
with the historical analysis described above, this gives you full control over the variation of your
outputs.

5. Spotting areas that require little effort

Last but definitely not least, a histogram can be helpful in determining when you’re wasting too
much effort or resources on a specific task. Sometimes, a certain part of your process will not
require as much attention as you think it does, and a histogram depicting the current resource
allocation can immediately reveal that. Just make sure that you have an adequate overview of
how your resources are being allocated and utilized to build the data sets for your histogram,
otherwise you might not see the full picture.

Binary Image Operations


Binary image operations are a collection of digital filters, image arithmetic, and image-processing
techniques. Simple modifications such as Open and Fill Holes can be applied, for example, to
count the number of cells, while more complex workflows can use multiple binary layers to
differentiate nuclei inside cells.
Erode
Shrinks both the interior and exterior boundaries of objects in the displayed binary image by one
pixel for each pass applied. Erode can be used to shrink small objects until they disappear so that
they will not be measured or counted later. Erode can have multiple passes applied, although the
shape of objects after many passes may change.

Full Erode
Repeatedly Erodes each object, stopping one pass before the object would be extinguished. Full
Erode is sometimes called "Ultimate Erosion". Each object is considered separately, so differently
sized objects may have a different number of passes applied.
This technique can be used in the separation process to erode all objects to their centers before
growing back to the intermediate boundary.

Objects may have more than one pixel or group of pixels remaining after Full Erode,
depending on the original shape of the object.
Dilate
Grows the interior and exterior boundaries of objects by one pixel for each pass applied. Dilate
can be useful to grow parts of an object together that were separated by incorrect thresholding in
Identify. Dilate can have multiple passes applied, although the shape of objects after many passes
may change. This operation can be combined with Erode in the Open and Close operators.
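As a rough sketch of how Erode and Dilate behave, here is one pass of each in pure Python, assuming a 3×3 square structuring element and treating out-of-bounds pixels as background (real imaging packages use optimized kernels and configurable structuring elements):

```python
def erode(img):
    """One Erode pass: a pixel survives only if it and all 8 of its
    3x3 neighbours are set; out-of-bounds counts as background."""
    h, w = len(img), len(img[0])
    return [[int(all(
        0 <= y + dy < h and 0 <= x + dx < w and img[y + dy][x + dx]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)))
        for x in range(w)] for y in range(h)]

def dilate(img):
    """One Dilate pass: a pixel becomes set if any 3x3 neighbour is set."""
    h, w = len(img), len(img[0])
    return [[int(any(
        0 <= y + dy < h and 0 <= x + dx < w and img[y + dy][x + dx]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)))
        for x in range(w)] for y in range(h)]

# A 3x3 block of ones in a 5x5 image erodes to its single centre pixel.
block = [[int(1 <= y <= 3 and 1 <= x <= 3) for x in range(5)] for y in range(5)]
```

Applying `erode(block)` leaves only the centre pixel set; applying `dilate` to that result grows it back to the original 3×3 block, which is exactly the Erode-then-Dilate sequence the Open operator performs.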

Open
Performs first binary image Erosion and then binary image Dilation on the displayed binary
image. The effect of Open is to enlarge gaps between objects that have incorrect tenuous
connections, making them separate. Open can have multiple passes applied, although the shape
of objects after many passes may change. An identical number of passes are applied first for
Erode and then for Dilate. This encourages the possibility that the Eroded objects will Dilate
back to their original size and shape. As the number of passes increases, smaller objects may be
completely Eroded, and will not be Dilated back. This method is sometimes used to remove
unwanted small objects or noise pixels.

Close
Close is equal to performing a Dilate and then an Erode. Its effect is to bridge small gaps between
objects that have incorrect breaks, joining them up. Close can have multiple passes applied, although the
shape of objects after many passes may change. An identical number of passes are applied first
for Dilate and then for Erode. This encourages the possibility that the Dilated objects will Erode
back to their original size and shape. As the number of passes increases, smaller holes within
objects may be completely Closed. This method is sometimes used to remove unwanted small
holes or noise pixels.
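As a rough sketch (3×3 square structuring element, out-of-bounds treated as background, hypothetical helper names), Open and Close reduce to composing erosion and dilation passes:

```python
def _pass(img, combine):
    """One 3x3 morphology pass: combine=all gives Erode, combine=any gives Dilate."""
    h, w = len(img), len(img[0])
    return [[int(combine(
        0 <= y + dy < h and 0 <= x + dx < w and bool(img[y + dy][x + dx])
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)))
        for x in range(w)] for y in range(h)]

def binary_open(img, passes=1):
    """Erode n times, then Dilate n times: removes small objects and noise."""
    for _ in range(passes):
        img = _pass(img, all)
    for _ in range(passes):
        img = _pass(img, any)
    return img

def binary_close(img, passes=1):
    """Dilate n times, then Erode n times: fills small holes and gaps."""
    for _ in range(passes):
        img = _pass(img, any)
    for _ in range(passes):
        img = _pass(img, all)
    return img
```

With one pass, `binary_open` deletes a lone noise pixel (erosion removes it, so there is nothing left to dilate back), while `binary_close` fills a one-pixel interior hole (dilation covers it, and the following erosion restores the interior).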

Thin
Performs binary image Thinning on the displayed binary image. Can have multiple passes
applied, although the shape of objects after many passes may change.
The effect of the Thin operator is similar to an Erode of the interior and exterior boundary of
each object by one pixel for each pass applied, but prevents the objects from disappearing by
avoiding further object modifying operations when the object is just one pixel thick. Use on
samples which have linear objects.

What is RGB Color Model?

The RGB color model is an additive color model in which red, green, and blue light are mixed
together in various proportions to form a wide array of colors. The name comes from the first
letters of the three primary colors: red, green, and blue. In this model, colors are produced by
adding components, with white containing all colors and black the absence of any. The RGB
color model is used in various digital displays such as TVs and video displays, computer
monitors, digital cameras, and other light-based display devices.

Understanding RGB Color Model


A color model is a system for creating a range of colors from a few primary colors. There are two
types of color model: additive and subtractive. In an additive color model, light is used to display
colors, while in a subtractive color model, printing inks are used to produce color. The most
common additive color model is the RGB color model; the CMYK color model is used for
printing.


The RGB color model is the additive color model using red, green, and blue. Its main use is
displaying images on electronic devices. In this model, if the three colors are superimposed with
the least intensity, black is formed, and if they are added at full intensity of light, white is formed.
To make a wide array of colors, these primary colors are superimposed at different intensities.
With 8 bits per channel, the intensity of each primary colour can vary from 0 to 255, giving
256³ = 16,777,216 possible colors.
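The 16.7-million figure follows directly from the arithmetic of 8 bits per channel:

```python
levels = 2 ** 8          # 8 bits per channel -> intensities 0..255
colors = levels ** 3     # one independent choice for each of R, G and B
print(colors)            # 16777216, i.e. "24-bit color"
```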

Working of the RGB Color Model


As discussed above, the basic principle behind the RGB color model is additive color mixing: the
three primary colors red, green, and blue are mixed together in different proportions to make
other colors. Each primary color can take 256 different levels, so by combining the three channels
we can produce over 16 million different colors. Cone cells, the photoreceptors in the human eye
responsible for color perception, are stimulated in different combinations by these mixtures,
which is how the combined primaries create the different colors we perceive.


As shown in the figure above, the addition of red, green, and blue light causes us to perceive
different colors. For example, combining blue and green light in some proportion produces cyan,
and combining red and green light produces yellow.
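These mixtures can be sketched with a toy saturating-add model. This is a simplification: it treats channel values as linear light and ignores gamma, and the `mix` helper is a name chosen here for illustration:

```python
def mix(*lights):
    """Additive mixing of RGB light sources: channels add, clipped at the 8-bit maximum."""
    return tuple(min(255, sum(c)) for c in zip(*lights))

RED, GREEN, BLUE = (255, 0, 0), (0, 255, 0), (0, 0, 255)
print(mix(GREEN, BLUE))       # (0, 255, 255) -> cyan
print(mix(RED, GREEN))        # (255, 255, 0) -> yellow
print(mix(RED, GREEN, BLUE))  # (255, 255, 255) -> white
```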

Uses Of RGB Color Model


Some common uses of the RGB color model are as follows:

1. RGB in Display
The main application of the RGB color model is displaying digital images. It is used in cathode
ray tubes, LCD displays, and LED displays such as televisions, computer monitors, and large
screens. Each pixel on these displays is built from three small, closely spaced RGB light sources.
At a common viewing distance the three sources cannot be distinguished separately and are
perceived as a single solid color.

RGB is also used in component video signals, which consist of three signals, red, green, and blue,
carried on three separate pins or cables. RGB video is the best-quality signal that can be carried
on the standard SCART connector.

2. RGB In Cameras
Digital cameras that use a CMOS or CCD image sensor mostly work with some type of RGB
color model. Current digital cameras are equipped with an RGB sensor that helps evaluate light
intensity, resulting in an optimal exposure value for each image.

3. RGB In Scanner
An image scanner is a device that scans a physical document, converts it to digital form, and
transfers it to a computer. There are different types of scanner, and most of them work on the
RGB color model, using a charge-coupled device (CCD) or contact image sensor as the image
sensor. Color scanners typically read data as RGB values, which are then processed by an
algorithm to convert them to other color representations.

Advantages

 No transformation is required to display data on the screen.
 It is considered the base color space for various applications.
 It is a computationally practical system.
 Thanks to its additive property, it is well suited to video displays.
 It maps simply to CRT applications.
 The model is very easy to implement.

Disadvantages

 RGB values are generally not transferable between devices.
 It is not perceptually uniform.
 It is not ideal for identifying colors.
 Specific colors are difficult to determine.
 The difference between colors is not linear.

Examples of the RGB Color Model


Below are some examples of the RGB model:

1. Photography
Experiments with RGB in color photography began in the early 1860s, leading to the process of
combining three separate color-filtered exposures. Most standard cameras capture the same RGB
bands, so the images they create look almost exactly like what our eyes see.

2. Computer graphics
The RGB color model is one of the main color representation methods used in computer graphics.
It provides a color coordinate system with three primary colors.

3. Television
The world’s first television with RGB color transmission was developed in 1928, and in 1940 the
Columbia Broadcasting System began experiments on an RGB field-sequential color system.
Modern CRT displays use RGB shadow-mask technology.

The Commission Internationale de l’Éclairage (CIE) is a 100-year-old organization that creates
international standards related to light and color.

CIE 1931 is a color matching system. Color matching does not attempt to describe how colors
appear to humans; it tells us how to numerically specify a measured color, and then later
accurately reproduce that measured color (e.g. in print or on digital displays).
To reiterate the point, color matching systems are not focused on describing color with qualities
like hue or saturation; they just tell us which combinations of light appear to be the same color to
most people (they “match”). Color matching gives us a basic framework for color reproduction.

sRGB color space

The sRGB color space, or color profile, is based on the RGB color model, which is based on
three colors: red, green, and blue. When these three colors are combined, they create variations
of other colors. The sRGB color space is composed of a specific amount of color information;
this data is used to optimize and streamline colors between devices and technical platforms, such
as computer screens, printers, and web browsers.

Each color within the sRGB color space provides the possibility of variations of that color. The
image below displays a larger RGB color scale, and the sRGB color space lies close to the
circle’s white center where different colors are closer together in range.

sRGB is the world’s default color space. Most consumer applications, devices, printers, and web
browsers default to sRGB and read color information accordingly when dealing with images;
when it comes to workflow efficiency, sRGB is king.
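Part of the "specific amount of color information" that defines sRGB is its standardized transfer (gamma) curve, specified in IEC 61966-2-1. A direct transcription of that curve for a single channel:

```python
def srgb_encode(linear):
    """Map a linear-light intensity in [0, 1] to the non-linear sRGB
    value in [0, 1] that is actually stored in image files."""
    if linear <= 0.0031308:
        return 12.92 * linear                     # linear toe near black
    return 1.055 * linear ** (1 / 2.4) - 0.055    # gamma segment

print(round(srgb_encode(0.5), 3))  # mid grey stores as roughly 0.735
```

This is why half of linear light intensity does not correspond to pixel value 128: the curve is tuned to human brightness perception, not to physical light.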

IMAGE file formats:

1. JPEG (or JPG) - Joint Photographic Experts Group

JPEGs might be the most common file type you run across on the web, and more than likely the
kind of image that is in your company's MS Word version of its letterhead. JPEGs are known for
their "lossy" compression, meaning that the quality of the image decreases as the file size
decreases.
You can use JPEGs for projects on the web, in Microsoft Office documents, or for projects that
require printing at a high resolution. Paying attention to the resolution and file size with JPEGs is
essential in order to produce a nice-looking project.

JPG vs JPEG

There is no difference between the .jpg and .jpeg filename extensions. Regardless of how you
name your file, it is still the same format and will behave the same way.

The only reason that the two extensions exist for the same format is because .jpeg was shortened
to .jpg to accommodate the three-character limit in early versions of Windows. While there is no
such requirement today, .jpg remains the standard and default on many image software
programs.

2. PNG - Portable Network Graphics

PNGs are well suited to interactive documents such as web pages but are not ideal for print.
While PNGs are "lossless," meaning you can edit them without losing quality, they are typically
saved at screen resolution rather than print resolution.

The reason PNGs are used in most web projects is that you can save your image with more
colors on a transparent background. This makes for a much sharper, web-quality image.

3. GIF - Graphics Interchange Format

GIFs are most common in their animated form, which are all the rage on Tumblr pages and in
banner ads. It seems like every day we see pop culture GIF references from Giphy in the
comments of social media posts. In their more basic form, GIFs are formed from up to 256
colors in the RGB colorspace. Due to the limited number of colors, the file size is drastically
reduced.
This is a common file type for web projects where an image needs to load very quickly, as
opposed to one that needs to retain a higher level of quality.

4. TIFF - Tagged Image File

A TIF is a large raster file that doesn't lose quality. This file type is known for using "lossless
compression," meaning the original image data is maintained regardless of how often you might
copy, re-save, or compress the original file.

Despite TIFF images' ability to retain their quality after manipulation, you should avoid using
this file type on the web: the large files can take a long time to load and will severely impact
website performance. TIFF files are commonly used when saving photographs for print.
5. PSD - Photoshop Document

PSDs are files that are created and saved in Adobe Photoshop, the most popular graphics editing
software ever. This type of file contains "layers" that make modifying the image much easier to
handle. This is also the program that generates the raster file types mentioned above.
The largest disadvantage to PSDs is that Photoshop works with raster images as opposed to
vector images.

6. PDF - Portable Document Format

PDFs were invented by Adobe with the goal of capturing and reviewing rich information from
any application, on any computer, with anyone, anywhere. I'd say they've been pretty successful
so far.

If a designer saves your vector logo in PDF format, you can view it without any design editing
software (as long as you have downloaded the free Acrobat Reader software), and they have the
ability to use this file to make further manipulations. This is by far the best universal tool for
sharing graphics.

7. EPS - Encapsulated Postscript

EPS is a file in vector format that has been designed to produce high-resolution graphics for
print. Almost any kind of design software can create an EPS.
The EPS extension is more of a universal file type (much like the PDF) that can be used to open
vector-based artwork in any design editor, not just the more common Adobe products. This
safeguards file transfers to designers that are not yet utilizing Adobe products, but may be using
Corel Draw or Quark.

8. AI - Adobe Illustrator Document

AI is, by far, the image format most preferred by designers and the most reliable type of file
format for using images in all types of projects from web to print, etc.

Adobe Illustrator is the industry standard for creating artwork from scratch and therefore more
than likely the program in which your logo was originally rendered. Illustrator produces vector
artwork, the easiest type of file to manipulate. It can also create all of the aforementioned file
types. Pretty cool stuff! It is by far the best tool in any designer's arsenal.

9. INDD - Adobe InDesign Document

INDDs (InDesign Document) are files that are created and saved in Adobe InDesign. InDesign is
commonly used to create larger publications, such as newspapers, magazines and eBooks.

Files from both Adobe Photoshop and Illustrator can be combined in InDesign to produce
content rich designs that feature advanced typography, embedded graphics, page content,
formatting information and other sophisticated layout-related options.

10. RAW - Raw Image Formats


A RAW image is the least-processed image type on this list -- it's often the first format a picture
inherits when it's created. When you snap a photo with your camera, it's saved immediately in a
raw file format. Only when you upload your media to a new device and edit it using image
software is it saved using one of the image extensions explained above.

RAW images are valuable because they capture every element of a photo without processing and
losing small visual details. Eventually, however, you'll want to package them into a raster or
vector file type so they can be transferred and resized for various purposes.

As you can see from the icons above, there are multiple raw image files in which you can create
images -- many of them native to certain cameras (and there are still dozens more formats not
shown above). Here's a brief description of those four raw files above:

 CR2: This image extension stands for Canon RAW 2, and was created by Canon for photos
taken using its own digital cameras. They're actually based on the TIFF file type, making them
inherently high in quality.
 CRW: This image extension was also created by Canon, preceding the existence of the CR2.
