Unit-4 Digital Imaging
Digital imaging or digital image acquisition is the creation of digital images, such as of a
physical scene or of the interior structure of an object. The term is often assumed to imply or
include the processing, compression, storage, printing, and display of such images.
An image consists of a rectangular array of dots called pixels. The size of the image is specified
in terms of width X height, in numbers of the pixels. The physical size of the image, in inches
or centimeters, depends on the resolution of the device on which the image is displayed. The
resolution is usually measured in DPI (Dots Per Inch). An image will appear smaller on a device
with a higher resolution than on one with a lower resolution. For color images, one needs
enough bits per pixel to represent all the colors in the image. The number of the bits per pixel is
called the depth of the image.
Digital Image Processing Basics
Digital Image Processing means processing a digital image by means of a digital computer. We can also say that it is the use of computer algorithms to enhance an image or to extract some useful information from it.
Image as a Matrix
As we know, images are represented in rows and columns; a digital image can therefore be written as the matrix
$$f(x,y) = \begin{bmatrix} f(0,0) & f(0,1) & \cdots & f(0,N-1) \\ f(1,0) & f(1,1) & \cdots & f(1,N-1) \\ \vdots & \vdots & \ddots & \vdots \\ f(M-1,0) & f(M-1,1) & \cdots & f(M-1,N-1) \end{bmatrix}$$
The right side of this equation is a digital image by definition. Every element of this matrix is called an image element, picture element, or pixel.
According to block 1, if the input is an image and we get an image as output, it is termed Digital Image Processing.
According to block 2, if the input is an image and we get some kind of information or description as output, it is termed Computer Vision.
According to block 3, if the input is some description or code and we get an image as output, it is termed Computer Graphics.
According to block 4, if the input is a description, some keywords, or some code and we get a description or keywords as output, it is termed Artificial Intelligence.
Related tasks in digital image processing include pattern recognition, image morphology, and feature extraction.
Edge detection allows users to observe the features of an image where there is a significant change in the gray level. Such a change indicates the end of one region in the image and the beginning of another. Edge detection reduces the amount of data in an image while preserving its structural properties.
Edge detection operators are of two types:
Gradient-based operators, which compute first-order derivatives of a digital image, e.g. the Sobel operator, Prewitt operator, and Roberts operator.
Gaussian-based operators, which compute second-order derivatives of a digital image, e.g. the Canny edge detector and Laplacian of Gaussian.
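As a concrete illustration of a gradient-based operator, here is a minimal sketch of Sobel edge detection in Python. It assumes NumPy and SciPy are available; the function name sobel_edges is illustrative, not a library API.

```python
import numpy as np
from scipy.signal import convolve2d

# Sobel kernels: first-order derivative estimates along x and y.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def sobel_edges(gray):
    """Return the gradient magnitude of a 2-D grayscale array."""
    gx = convolve2d(gray, SOBEL_X, mode="same", boundary="symm")
    gy = convolve2d(gray, SOBEL_Y, mode="same", boundary="symm")
    return np.hypot(gx, gy)  # large values mark likely edges
```

Thresholding the returned magnitude then gives a binary edge map.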
Images can be created using different techniques of data representation, called data types, such as monochrome and colored images. A monochrome image is created using a single color, whereas a colored image is created using multiple colors. Some important data types of images are the following:
1-bit images - An image is a set of pixels (a pixel is a picture element in a digital image). In 1-bit images, each pixel is stored as a single bit (0 or 1). A bit has only two states: on or off, white or black, true or false. Therefore, such an image is also referred to as a binary image, since only two states are available. A 1-bit image is also known as a 1-bit monochrome image because it contains one color: black for the off state and white for the on state.
A 1-bit image with resolution 640 x 480 needs a storage space of 640 x 480 bits:
640 x 480 bits = (640 x 480) / 8 bytes = (640 x 480) / (8 x 1024) KB = 37.5 KB.
The clarity or quality of a 1-bit image is very low.
8-bit gray-level images - Each pixel of an 8-bit gray-level image is represented by a single byte (8 bits). Therefore each pixel of such an image can hold one of 2^8 = 256 values between 0 and 255, a brightness value on a scale from black (0, no brightness or intensity) to white (255, full brightness or intensity). For example, a dark pixel might have a value of 15 and a bright one might be 240.
A grayscale digital image is an image in which the value of each pixel is a single sample that carries intensity information. Such images are composed exclusively of shades of gray, varying from black at the weakest intensity to white at the strongest. Grayscale images are also called monochromatic, denoting the presence of only one (mono) color (chrome). An image is represented by a bitmap: a simple matrix of the tiny dots (pixels) that form an image displayed on a computer screen or printed.
An 8-bit image with resolution 640 x 480 needs a storage space of 640 x 480 bytes = (640 x 480) / 1024 KB = 300 KB. Therefore an 8-bit image needs 8 times more storage space than a 1-bit image.
24-bit color images - In a 24-bit color image, each pixel is represented by three bytes, usually representing RGB (red, green, and blue). True color is usually defined to mean 256 shades each of red, green, and blue, for a total of 16,777,216 color variations. This provides a method of representing and storing graphical image information in an RGB color space such that a very large number of colors, shades, and hues can be displayed, as in high-quality photographic images or complex graphics. Many 24-bit color images are stored as 32-bit images, with the extra byte per pixel used to store an alpha value representing special-effect information.
A 24-bit color image with resolution 640 x 480 needs a storage space of 640 x 480 x 3 bytes = (640 x 480 x 3) / 1024 KB = 900 KB without any compression. Similarly, a 32-bit color image with resolution 640 x 480 needs a storage space of 640 x 480 x 4 bytes = 1200 KB without any compression.
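The storage figures above are easy to verify; the following sketch just wraps the arithmetic from the text in a small (hypothetical) helper:

```python
def storage_kb(width, height, bits_per_pixel):
    """Uncompressed image size in kilobytes (1 KB = 1024 bytes)."""
    return width * height * bits_per_pixel / 8 / 1024

for bpp in (1, 8, 24, 32):
    print(f"{bpp:>2}-bit 640 x 480 image: {storage_kb(640, 480, bpp):g} KB")
# ->  1-bit: 37.5 KB,  8-bit: 300 KB,  24-bit: 900 KB,  32-bit: 1200 KB
```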
A color look-up table (LUT) is a mechanism used to transform a range of input colors into another range of colors. A color look-up table converts the logical color numbers stored in each pixel of video memory into physical colors, represented as RGB triplets, which can be displayed on a computer monitor. Each pixel of the image stores only an index value, or logical color number. For example, if a pixel stores the value 30, the meaning is to go to row 30 in the color look-up table (LUT). The LUT is often called a palette.
Characteristics of a LUT are the following:
The number of entries in the palette determines the maximum number of colors that can appear on screen simultaneously.
The width of each entry in the palette determines the number of colors that the full palette can represent.
A common example is a palette of 256 colors: the number of entries is 256, so each entry is addressed by an 8-bit pixel value. Each color can be chosen from a full palette of 16.7 million colors, since each entry is 24 bits wide and 8 bits per channel gives 256 levels for each of red, green, and blue: 256 x 256 x 256 = 16,777,216 colors.
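A minimal sketch of the look-up itself, assuming NumPy; the palette contents and indexed image here are random placeholders:

```python
import numpy as np

# Hypothetical 256-entry palette: each row is a 24-bit RGB triplet.
palette = np.random.randint(0, 256, size=(256, 3), dtype=np.uint8)

# An indexed image stores one 8-bit logical color number per pixel.
indexed = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)

# Fancy indexing performs the look-up for every pixel at once,
# expanding the indexed image into a full (480, 640, 3) RGB image.
rgb = palette[indexed]
```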
1. Video indexing is the process of giving viewers a way to access and navigate content easily, similar to book indexing.
2. Indexes derived from the content of the video are selected to help organize the video data and the metadata that represents the original video stream.
Transformation
A transformation is a function that maps one set to another set after performing some operations.
Digital Image Processing system
We have already seen in the introductory tutorials that in digital image processing we develop a system whose input is an image and whose output is an image too. The system performs some processing on the input image and gives the processed image as output.
The function applied inside this digital system that processes an image and converts it into the output can be called the transformation function. It shows how image1 is converted to image2.
Image transformation
As an example, consider a threshold transformation: all pixel intensity values below 127 (point p) become 0, meaning black, and all pixel intensity values greater than 127 become 1, meaning white. But at the exact point 127 there is a sudden jump in the transformation, so we cannot say whether the value at that exact point should be 0 or 1.
Mathematically this transformation function can be denoted as:
$$s = T(r) = \begin{cases} 0, & r < 127 \\ 1, & r > 127 \end{cases}$$
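A sketch of this threshold transformation using NumPy. Note the code has to pick a convention at exactly 127 (here it maps to 0), which the text leaves undefined:

```python
import numpy as np

def threshold(gray, p=127):
    """Intensities above p become 1 (white); the rest become 0 (black)."""
    return (gray > p).astype(np.uint8)

img = np.array([[10, 127, 200]], dtype=np.uint8)
print(threshold(img))  # [[0 0 1]]
```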
While graphic design focuses mostly on static designs, digital design involves movement, such
as animation, interactive pages, and 2D or 3D modeling. Digital designers create images and
elements that will end up on a screen, whether that’s a computer screen, a phone screen, a
dashboard, or any other digital format. The design may also include audio and sound effects to complement the visuals.
The commonality between digital design and graphic design is they both create forms of visual
communication, often related to an idea, image, or brand. In both instances, the designer’s goal
is to transmit information and ideas to the audience using symbols and imagery. This visual
communication creates meaning for the viewer. Visuals help to evoke emotions in your
audience, which can in turn help with brand awareness and affinity.
Data and analytics also play a role in defining digital design, as they allow a designer to gauge
the product’s performance. It is difficult to track the performance of a physical flyer or
brochure, but a digital design does not face the same problem. By reviewing performance
indicators such as likes, shares, downloads, and pageviews, you’ll get a better understanding of
how effective your design is.
The introduction of analytics and data also empowers the digital designer to make data-
informed design decisions throughout the design process. For example, digital designers often
run A/B tests to collect qualitative data about two (or more) designs. This added layer of
analytics in digital design is a key differentiator between digital and graphic design.
In experiential graphic design, digital technology refers to the means of creating, storing,
processing, and displaying of electronic communications, stories, moving content, effects and
added functionality in a built environment. Generally, media created using digital technology
are displayed on computer screens or LED or LCD displays. However, evolving digital
interfaces and projection technologies are increasing the array of surfaces on which moving
images can appear.
Digital design focuses on the design of digitally mediated environments and experiences,
including websites, web applications, exhibition experiences, and gaming. Digital signage,
another application of digital technology, is made possible by the centralized distribution and
playback of digital content on networks of displays. Digital signage often appears in retail
applications and, in addition, is increasingly a component of comprehensive wayfinding
systems designed for transportation and healthcare environments.
The seven basic elements of graphic design are line, shape, color, texture, type, space and
image.
What is TWAIN?
TWAIN is a widely used program that lets you scan an image (using a scanner) directly into the application (such as Photoshop) where you want to work with the image. Without TWAIN, you would have to close the application that was open, open a special application to receive the image, and then move the image to the application where you wanted to work with it.
The TWAIN driver runs between an application and the scanner hardware. TWAIN usually comes as part of the software package you get when you buy a scanner. It's also integrated into Photoshop and similar image-manipulation programs.
The software was developed by a work group from major scanner manufacturers and
scanning software developers and is now an industry standard.
In several accounts, TWAIN was an acronym developed playfully from "technology without an interesting name." In fact, the name derives from the saying "Ne'er the twain shall meet," because the program sits between the driver and the application; the name is not intended to be an acronym.
TWAIN is a scanning protocol that was initially used on Microsoft Windows and Apple Macintosh operating systems; Linux/UNIX support was added in version 2.0. The first release was in 1992. It was designed as an interface between image-processing software and scanners or digital cameras. TWAIN is the most commonly used protocol and the standard for document scanners. In most cases, users should be able to get a free TWAIN driver or easily find one (from the manufacturer's website) for scanners from Canon, HP, Epson, Kodak, Xerox, etc.
A TWAIN session involves three elements:
The Application
The Source Manager
The Data Source
The Source Manager interface provided by TWAIN allows your application to control data sources, such as scanners and digital cameras, and acquire images.
Although nearly all scanners ship with a TWAIN driver that complies with the TWAIN standard, the implementation of each TWAIN scanner driver may vary slightly in terms of the scanner settings dialog, custom capabilities, and other features. That is fine if you want to use features specific to a particular scanner model, but if you want your application's scanning behavior to be consistent across different scanners, you need to be wary of customized code.
The TWAIN standard is now evolving to the next generation, called TWAIN Direct. The TWAIN Working Group, of which Dynamsoft is an associate member, claims that with TWAIN Direct, vendor-specific drivers will no longer be needed: the application will be able to communicate directly with scanning devices. The best of TWAIN Direct is still to come.
Image Subtraction
Image subtraction is mainly used to enhance the difference between images. It is used for background subtraction to detect moving objects, and in medical science to detect blockages in veins, a field known as mask-mode radiography. Here we take two images, one before injecting a contrast medium and the other after injecting it, and then subtract the two images to see how the medium propagated and whether there is any blockage.
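A minimal sketch of image subtraction with NumPy, in the spirit of mask-mode radiography; the function name is illustrative:

```python
import numpy as np

def difference_image(before, after):
    """Absolute pixel-wise difference of two same-sized grayscale frames."""
    # Cast to a signed type first to avoid uint8 wrap-around on subtraction.
    diff = after.astype(np.int16) - before.astype(np.int16)
    return np.abs(diff).astype(np.uint8)
```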
Image Multiplication
Multiplication can be used to extract a region of interest (ROI) from an image. We simply create a mask and multiply the image with the mask to get the area of interest.
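A sketch of ROI extraction by multiplication, assuming NumPy; the image and the rectangular ROI coordinates are placeholders:

```python
import numpy as np

image = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)

# Hypothetical rectangular ROI: 1 inside the region, 0 outside.
mask = np.zeros_like(image)
mask[100:200, 150:300] = 1

roi = image * mask  # multiplication zeroes out everything outside the ROI
```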
Affine Transformation
In Euclidean geometry, an affine transformation, or an affinity (from the Latin, affinis,
"connected with"), is a geometric transformation that preserves lines and parallelism (but not
necessarily distances and angles).
More generally, an affine transformation is an automorphism of an affine space (Euclidean
spaces are specific affine spaces), that is, a function which maps an affine space onto itself while
preserving both the dimension of any affine subspaces (meaning that it sends points to points,
lines to lines, planes to planes, and so on) and the ratios of the lengths of parallel line segments.
Consequently, sets of parallel affine subspaces remain parallel after an affine transformation. An
affine transformation does not necessarily preserve angles between lines or distances between
points, though it does preserve ratios of distances between points lying on a straight line.
Linear mapping method using affine transformation
Affine transformation is a linear mapping method that preserves points, straight lines, and planes.
Sets of parallel lines remain parallel after an affine transformation.
The affine transformation technique is typically used to correct for geometric distortions or
deformations that occur with non-ideal camera angles. For example, satellite imagery uses affine
transformations to correct for wide angle lens distortion, panorama stitching, and image
registration.
Transforming and fusing the images to a large, flat coordinate system is desirable to eliminate
distortion. This enables easier interactions and calculations that don’t require accounting for
image distortion.
The following table illustrates the basic affine transformations: translation, scale, shear, and rotation, written here for the row-vector convention $[x \; y \; 1]\,T$.

Translation:
$$T = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ t_x & t_y & 1 \end{bmatrix}$$
where $t_x$ specifies the displacement along the $x$ axis and $t_y$ specifies the displacement along the $y$ axis.

Scale:
$$T = \begin{bmatrix} s_x & 0 & 0 \\ 0 & s_y & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
where $s_x$ specifies the scale factor along the $x$ axis and $s_y$ specifies the scale factor along the $y$ axis.

Shear:
$$T = \begin{bmatrix} 1 & sh_y & 0 \\ sh_x & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
where $sh_x$ and $sh_y$ specify the shear factors along the $x$ and $y$ axes.

Rotation:
$$T = \begin{bmatrix} \cos\theta & \sin\theta & 0 \\ -\sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
where $\theta$ specifies the angle of rotation about the origin.
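A sketch of applying the translation matrix above to a set of points in NumPy, using the same row-vector convention $[x \; y \; 1]\,T$ as the table; the function name is illustrative:

```python
import numpy as np

def translate(points, tx, ty):
    """Translate an (N, 2) array of points by (tx, ty)."""
    T = np.array([[1,  0, 0],
                  [0,  1, 0],
                  [tx, ty, 1]], dtype=float)
    homo = np.hstack([points, np.ones((len(points), 1))])  # -> [x y 1] rows
    return (homo @ T)[:, :2]

print(translate(np.array([[0.0, 0.0], [2.0, 3.0]]), tx=5, ty=-1))
# [[ 5. -1.]
#  [ 7.  2.]]
```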
Morphological operations are a broad set of image processing operations that process digital images based on their shapes. In a morphological operation, the value of each pixel in the output image is based on a comparison of the corresponding pixel in the input image with its neighbors.
By choosing the size and shape of the neighborhood, you can construct a morphological operation that is sensitive to specific shapes in the input image.
Morphological operations apply a structuring element (called a strel in MATLAB) to an input image, creating an output image of the same size.
The two basic morphological operations are dilation and erosion; combining them yields further operations such as opening and closing, described later in this unit.
Edge detection is an image processing technique used to identify points in a digital image with discontinuities, that is, sharp changes in image brightness. The points where the image brightness varies sharply are called the edges (or boundaries) of the image. It is one of the basic steps in image processing, pattern recognition in images, and computer vision. When we process very high-resolution digital images, convolution techniques come to our rescue.
The histogram of a digital image with gray levels in the range [0, L-1] is the discrete function
$$h(r_k) = n_k,$$
where $r_k$ is the $k$-th gray level and $n_k$ is the number of pixels in the image having gray level $r_k$.
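A sketch of the histogram function in NumPy; np.bincount counts the pixels at each gray level directly:

```python
import numpy as np

def gray_histogram(image, L=256):
    """h(r_k) = n_k: the number of pixels at each gray level r_k."""
    return np.bincount(image.ravel(), minlength=L)

img = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)
h = gray_histogram(img)
assert h.sum() == img.size  # every pixel is counted exactly once
```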
The histogram originated outside of lean and Six Sigma, but it has many applications in this area,
especially in organizations that rely heavily on statistical analysis. It’s a flexible tool that can be
put to various uses, and when used correctly, it can shed a lot of light on the way your operations
are running. It’s important as a good leader to familiarize yourself with the possibilities offered
by the histogram, and use it whenever appropriate.
1. Identifying the most common outcome
A quick look at a histogram can immediately reveal what the most common outcome of a
process with varying outcomes is. By simply collecting all data related to the final state of the
process and organizing it in a histogram, any special trends will quickly become apparent. Of
course, things can get complicated when you have a large number of possible outcomes,
especially when some of them are grouped up and appear alongside one another commonly.
2. Spotting symmetry
Sometimes, you will spot trends that lean in two directions simultaneously. A histogram can
make it very easy to identify those occurrences and know when your processes are prone to
producing symmetrical results in some circumstances. Occasionally, this can prove very useful
for optimizing certain types of processes. On the other hand, it can also help you identify
possible issues, as sometimes symmetry is not what you expect to see in your results.
3. Spotting deviations
Likewise, a histogram can make it quite obvious when your results are deviating from the
expected values. As long as you’ve collected all data points and arranged them appropriately for
quick reviewing, you can rely on a histogram to tell you when the results are not moving in the
right direction. In fact, if you’re working with a small number of data points, the histogram is
easily the most useful tool for spotting oddities and identifying worrying trends. Keeping a list of
histograms that have been produced in the course of your work and referring back to it can
further make things easy to analyze, as you will additionally know when a deviation is
potentially caused by old issues, or by a recent change in your operations.
4. Checking the variation of your outputs
In some cases, symmetry is exactly what you're looking for, especially in a process prone to
random deviations. If you’re looking to make sure that you have a maximized coverage of your
outputs, the histogram can be a very simple tool to go about that. If one of the data points is
below its standard norms, this will become apparent very quickly, and you can take appropriate
measures to correct the situation. When combined with a historical analysis as we described
above, you can gain full control over the variation of your outputs.
5. Spotting areas that require little effort
Last but definitely not least, a histogram can be helpful in determining when you’re wasting too
much effort or resources on a specific task. Sometimes, a certain part of your process will not
require as much attention as you think it does, and a histogram depicting the current resource
allocation can immediately reveal that. Just make sure that you have an adequate overview of
how your resources are being allocated and utilized to build the data sets for your histogram,
otherwise you might not see the full picture.
Full Erode
Recursively erodes each object until the last pass before the object would be extinguished. Full Erode is sometimes called "Ultimate Erosion". Each object is considered separately, and so different-sized objects may have a different number of passes applied.
This technique can be used in the separation process to erode all objects to their centers before
growing back to the intermediate boundary.
Objects may have more than one pixel or group of pixels remaining after Full Erode,
depending on the original shape of the object.
Dilate
Grows the interior and exterior boundaries of objects by one pixel for each pass applied. Dilate
can be useful to grow parts of an object together that were separated by incorrect thresholding in
Identify. Dilate can have multiple passes applied, although the shape of objects after many passes
may change. This operation can be combined with Erode in the Open and Close operators.
Open
Performs first binary image Erosion and then binary image Dilation on the displayed binary
image. The effect of Open is to enlarge gaps between objects that have incorrect tenuous
connections, making them separate. Open can have multiple passes applied, although the shape
of objects after many passes may change. An identical number of passes are applied first for
Erode and then for Dilate. This encourages the possibility that the Eroded objects will Dilate
back to their original size and shape. As the number of passes increases, smaller objects may be
completely Eroded, and will not be Dilated back. This method is sometimes used to remove
unwanted small objects or noise pixels.
Close
Close is equal to performing a Dilate and then an Erode. It crosses small gaps between objects that have incorrect breaks, joining them up. Close can have multiple passes applied, although the shape of objects after many passes may change. An identical number of passes are applied first
for Dilate and then for Erode. This encourages the possibility that the Dilated objects will Erode
back to their original size and shape. As the number of passes increases, smaller holes within
objects may be completely Closed. This method is sometimes used to remove unwanted small
holes or noise pixels.
Thin
Performs binary image Thinning on the displayed binary image. Can have multiple passes
applied, although the shape of objects after many passes may change.
The effect of the Thin operator is similar to an Erode of the interior and exterior boundary of
each object by one pixel for each pass applied, but prevents the objects from disappearing by
avoiding further object modifying operations when the object is just one pixel thick. Use on
samples which have linear objects.
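A sketch of the basic operators above using SciPy's ndimage module (SciPy has no built-in thinning; scikit-image's skimage.morphology.thin can be used for that). The test image and 3 x 3 structuring element are placeholders:

```python
import numpy as np
from scipy import ndimage

binary = np.random.rand(64, 64) > 0.7        # placeholder binary image
selem = np.ones((3, 3), dtype=bool)          # 3x3 structuring element

eroded  = ndimage.binary_erosion(binary, structure=selem)   # shrink objects
dilated = ndimage.binary_dilation(binary, structure=selem)  # grow objects
opened  = ndimage.binary_opening(binary, structure=selem)   # erode, then dilate
closed  = ndimage.binary_closing(binary, structure=selem)   # dilate, then erode
```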
The RGB color model is an additive color model in which red, green, and blue are mixed together in various proportions to form a different array of colors. The name comes from the first letters of the three primary colors: red, green, and blue. In this model, colors are prepared by adding components, with white having all colors in it and black being the absence of any color. The RGB color model is used in various digital displays like TVs, video displays, and computer monitors.
There are two types of color models: the additive color model and the subtractive color model. In the additive color model, light is used to display colors, while in the subtractive color model, printing inks are used to produce color. The most common additive color model is the RGB color model, which is used for displaying images on electronic devices. In the RGB color model, if the three colors are superimposed with the least intensity, black is formed, and if they are added with the full intensity of light, white is formed. To make a different array of colors, the primary colors are superimposed at different intensities; the intensity of each primary color can vary from 0 to 255. Additive color mixing is the process of mixing the three primary colors: red, green, and blue.
For each primary color, it is possible to take 256 different shades. So by combining 256 shades of the three primary colors, we can produce over 16 million different colors. Cone cells, or photoreceptors, are the part of the human eye responsible for color perception. In the RGB color model, combinations of the primary colors create the different colors we perceive. For example, combining blue and green light in some proportion results in the formation of cyan, and combining red and green light results in yellow light.
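A tiny sketch of additive mixing on the 0..255 scale; the mix helper is illustrative:

```python
import numpy as np

# Full-intensity primaries as (R, G, B) triplets on the 0..255 scale.
RED, GREEN, BLUE = (255, 0, 0), (0, 255, 0), (0, 0, 255)

def mix(*colors):
    """Additive mixing: channel-wise sum, clipped to the 0..255 range."""
    total = np.clip(np.sum(colors, axis=0), 0, 255)
    return tuple(int(c) for c in total)

print(mix(BLUE, GREEN))        # (0, 255, 255) -> cyan
print(mix(RED, GREEN))         # (255, 255, 0) -> yellow
print(mix(RED, GREEN, BLUE))   # (255, 255, 255) -> white
```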
1. RGB in Displays
The main application of the RGB color model is to display digital images. It is used in cathode ray tubes, LCDs, and LED displays such as televisions, computer monitors, and large screens. Each pixel on these displays is built from three small, closely spaced RGB light sources. At a common viewing distance the separate sources cannot be distinguished, and the eye perceives a single solid color.
RGB is also used in component video display signals. These consist of three signals, red, green, and blue, carried on three separate pins or cables, and they provide the best quality.
2. RGB in Cameras
Digital cameras for photography that use a CMOS or CCD image sensor mostly operate with some type of RGB color model. Current digital cameras are equipped with an RGB sensor that helps evaluate light intensity, and this results in optimum image quality.
3. RGB in Scanners
An image scanner is a device that scans a physical document, converts it to digital form, and transfers it to a computer. There are different types of such scanners, and most of them work based on the RGB color model. These scanners use a charge-coupled device or a contact image sensor as the image sensor. Color scanners typically read data as RGB values, which are then processed and stored.
1. Photography
Experiments with RGB in color photography started in the early 1860s, using the process of combining three color-filtered separate takes. Most standard cameras capture the same RGB bands, so the images they create look almost exactly like what our eyes see.
2. Computer graphics
The RGB color model is one of the main color representation methods used in computer graphics.
3. Television
The world's first television with RGB color transmission was developed in 1928, and in 1940 experiments on the RGB field-sequential color system were started by the Columbia Broadcasting System. In the modern era, RGB shadow-mask technology is used in CRT displays.
The Commission internationale de l'éclairage (CIE) is a 100-year-old organization that creates international standards related to light and color.
CIE 1931 is a color matching system. Color matching does not attempt to describe how colors appear to humans; instead, it tells us how to numerically specify a measured color and then later accurately reproduce that measured color (e.g., in print or on digital displays).
To reiterate that point, color matching systems are not focused on describing color with qualities like hue or saturation; they just tell us which combinations of light appear to be the same color to most people (they "match"). Color matching gives us a basic framework for color reproduction.
The sRGB color space, or color profile, is based on the RGB color model, which is based on
three colors: red, green, and blue. When these three colors are combined, they create variations
of other colors. The sRGB color space is composed of a specific amount of color information;
this data is used to optimize and streamline colors between devices and technical platforms, such
as computer screens, printers, and web browsers.
Each color within the sRGB color space provides the possibility of variations of that color. On a larger RGB color scale, the sRGB color space lies close to the white center of the circle, where different colors are closer together in range.
sRGB is the world's default color space. Most consumer applications, devices, printers, and web browsers default to sRGB and read color information accordingly when dealing with images, and when it comes to workflow efficiency, sRGB is king.
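For reference, here is a sketch of the standard sRGB transfer (gamma) curve, which maps linear light values in [0, 1] to sRGB-encoded values; the piecewise constants come from the sRGB specification:

```python
import numpy as np

def linear_to_srgb(linear):
    """Encode linear light (0..1) with the sRGB transfer curve."""
    l = np.asarray(linear, dtype=float)
    return np.where(l <= 0.0031308,
                    12.92 * l,                        # linear toe near black
                    1.055 * np.power(l, 1 / 2.4) - 0.055)

print(linear_to_srgb([0.0, 0.18, 1.0]))  # mid-gray 0.18 encodes to about 0.46
```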
1. JPEG - Joint Photographic Experts Group
JPEGs might be the most common file type you run across on the web, and more than likely the
kind of image that is in your company's MS Word version of its letterhead. JPEGs are known for
their "lossy" compression, meaning that the quality of the image decreases as the file size
decreases.
You can use JPEGs for projects on the web, in Microsoft Office documents, or for projects that
require printing at a high resolution. Paying attention to the resolution and file size with JPEGs is
essential in order to produce a nice-looking project.
JPG vs JPEG
There is no difference between the .jpg and .jpeg filename extensions. Regardless of how you
name your file, it is still the same format and will behave the same way.
The only reason that the two extensions exist for the same format is because .jpeg was shortened
to .jpg to accommodate the three-character limit in early versions of Windows. While there is no
such requirement today, .jpg remains the standard and default on many image software
programs.
2. PNG - Portable Network Graphics
PNGs are amazing for interactive documents such as web pages but are not suitable for print.
While PNGs are "lossless," meaning you can edit them and not lose quality, they are still low
resolution.
The reason PNGs are used in most web projects is that you can save your image with more
colors on a transparent background. This makes for a much sharper, web-quality image.
3. GIF - Graphics Interchange Format
GIFs are most common in their animated form, which are all the rage on Tumblr pages and in
banner ads. It seems like every day we see pop culture GIF references from Giphy in the
comments of social media posts. In their more basic form, GIFs are formed from up to 256
colors in the RGB colorspace. Due to the limited number of colors, the file size is drastically
reduced.
This is a common file type for web projects where an image needs to load very quickly, as
opposed to one that needs to retain a higher level of quality.
4. TIF - Tagged Image File
A TIF is a large raster file that doesn't lose quality. This file type is known for using "lossless
compression," meaning the original image data is maintained regardless of how often you might
copy, re-save, or compress the original file.
Despite TIFF images' ability to recover their quality after manipulation, you should avoid using
this file type on the web. Since it can take forever to load, it'll severely impact website
performance. TIFF files are also commonly used when saving photographs for print.
5. PSD - Photoshop Document
PSDs are files that are created and saved in Adobe Photoshop, the most popular graphics editing
software ever. This type of file contains "layers" that make modifying the image much easier to
handle. This is also the program that generates the raster file types mentioned above.
The largest disadvantage to PSDs is that Photoshop works with raster images as opposed to
vector images.
6. PDF - Portable Document Format
PDFs were invented by Adobe with the goal of capturing and reviewing rich information from
any application, on any computer, with anyone, anywhere. I'd say they've been pretty successful
so far.
If a designer saves your vector logo in PDF format, you can view it without any design editing
software (as long as you have downloaded the free Acrobat Reader software), and they have the
ability to use this file to make further manipulations. This is by far the best universal tool for
sharing graphics.
7. EPS - Encapsulated PostScript
EPS is a file in vector format that has been designed to produce high-resolution graphics for
print. Almost any kind of design software can create an EPS.
The EPS extension is more of a universal file type (much like the PDF) that can be used to open
vector-based artwork in any design editor, not just the more common Adobe products. This
safeguards file transfers to designers that are not yet utilizing Adobe products, but may be using
Corel Draw or Quark.
8. AI - Adobe Illustrator Document
AI is, by far, the image format most preferred by designers and the most reliable type of file
format for using images in all types of projects from web to print, etc.
Adobe Illustrator is the industry standard for creating artwork from scratch and therefore more
than likely the program in which your logo was originally rendered. Illustrator produces vector
artwork, the easiest type of file to manipulate. It can also create all of the aforementioned file
types. Pretty cool stuff! It is by far the best tool in any designer's arsenal.
9. INDD - Adobe InDesign Document
INDDs (InDesign Document) are files that are created and saved in Adobe InDesign. InDesign is
commonly used to create larger publications, such as newspapers, magazines and eBooks.
Files from both Adobe Photoshop and Illustrator can be combined in InDesign to produce
content rich designs that feature advanced typography, embedded graphics, page content,
formatting information and other sophisticated layout-related options.
10. RAW - Raw Image Formats
RAW images are valuable because they capture every element of a photo without processing and
losing small visual details. Eventually, however, you'll want to package them into a raster or
vector file type so they can be transferred and resized for various purposes.
There are multiple raw image formats in which you can create images, many of them native to certain cameras, and there are still dozens more formats beyond those covered here. Here's a brief description of some of these raw files:
CR2: This image extension stands for Canon RAW 2, and was created by Canon for photos
taken using its own digital cameras. They're actually based on the TIFF file type, making them
inherently high in quality.
CRW: This image extension was also created by Canon, preceding the existence of the CR2.