The document provides an overview of remote sensing, including what it is, the key components of a remote sensing system (sensors and platforms), and examples of different sensor and platform types. Specifically, it defines remote sensing as collecting information about Earth's surface without direct contact using reflected or emitted energy. It describes how sensors on platforms like satellites, aircraft, and drones measure this energy across the electromagnetic spectrum to extract information and create imagery.

Getting Started with Imagery and Remote Sensing

What is remote sensing?


Remote sensing is the process of collecting information about objects and features, most
commonly on Earth's surface, without making direct contact with the object or feature.
Information can be extracted about Earth's surface because various objects and materials reflect,
absorb, or emit energy differently. That energy can be derived from the sun, as indicated in the
following graphic, or other energy sources. Note in the graphic that some of the sun's energy is
absorbed by or scattered through the atmosphere. Only a fraction of the total energy from the sun
is returned to a remote sensing system. Atmospheric conditions impact remote sensing and the
ability to collect information.

Remote sensing consists of two elements: a sensor and a platform (or collection platform). Sensors
measure and record energy that is either reflected by or emitted from objects on the ground, and then store
and display the data as a special type of raster called an image. A platform is the vehicle—such as an
unmanned aerial vehicle (UAV), an airplane, or a satellite—that transports or "carries" a sensor on board.
The sensor-platform combination, referred to as the remote sensing system, will affect the type and
quality of the imagery collected.

The process of remote sensing, or collecting reflected or emitted energy from objects on Earth's surface, is depicted.
The yellow arrows moving toward Earth's surface represent energy coming from the sun, and the yellow and orange
arrows represent that energy reflecting or emitting back to the sensor on the satellite platform.

Collecting information: Sensor types


There are two types of remote sensing sensors: passive and active. The energy leveraged in
remote sensing is either generated passively by a source such as the sun or actively by the sensor
emitting its own energy. The following slides share more information about passive and active
sensors.

 Passive remote sensing


Passive sensors collect and measure reflected energy from a source other than the sensor. In most
cases, it is the sun that provides the energy that gets reflected from objects on the ground.
Passive sensors can also collect emitted thermal energy.

 Active remote sensing


Active sensors transmit a signal from the sensor, and the same sensor then collects the returned
signals. Active sensors can operate at night or with cloud cover. The most common types of
active sensors are radar (transmitting radio waves), lidar (transmitting light), and sonar
(transmitting sound waves).

Eye in the sky: Platforms


Recall that remote sensing requires two elements: a sensor and a platform. A platform transports
the sensor as the sensor collects information. Platforms can be satellites, airplanes, helicopters,
drones, hot air balloons, and sometimes even kites. The following slides describe three of the
most common platforms.

 Satellites
A satellite platform is placed in Earth's orbit. The sensor can collect information across large
geographic areas and circle back to the same areas on a consistent cadence. Sun-synchronous
satellite orbits are the most common orbit used for Earth observation remote sensing. Sun-
synchronous satellites are typically 400 to 600 miles from Earth, orbiting around the globe from
pole to pole and collecting information for the entire planet. This recurring collection of
information enables analysts to track change over time and potentially make predictions about
the future. Time between collection will vary for different satellites; however, it is important to
note that these satellites are not geostationary, or fixed to collect information from a single
location. Imagery collected with satellite platforms such as Landsat is often open-source and
easily accessible. Satellites are an ideal platform for global monitoring and environmental
science applications.

 Aerial
Aircraft and aerial platforms are more targeted than satellites, typically collecting imagery over a
smaller geographic area or field of view. The way that aerial imagery is collected varies
depending on the aircraft type and the sensor type: active or passive. A high-altitude aircraft will
carry the sensor, collecting imagery 30,000 to 90,000 feet from Earth's surface. This is a
favorable platform for intelligence and can sometimes be piloted remotely. General aviation
aircraft will carry a sensor 100 to 10,000 feet above Earth's surface. During flight, the sensor will
capture consecutive overlapping images with a smaller field of view; these images are processed
into imagery datasets, as depicted in the following graphic. Because they fly at a lower altitude
and slower speed, general aviation aircraft often provide higher-quality imagery than high-
altitude aircraft and are therefore a more popular aerial platform choice. The flight path may be
consistent from season to season or from year to year, providing the ability to compare imagery
datasets. General aviation aircraft provide imagery for numerous applications, including crop
monitoring, changing land use, and landcover.

 Unmanned aerial vehicles (UAV) 


UAV or drone platforms are growing in popularity. Due to their accessibility and their ability to
fly low, hover, and be remotely controlled, they offer attractive advantages over other aerial
platforms. Drones that are not used in high altitude will typically carry the sensor 100 to 500 feet
above Earth's surface. As indicated in the graphic, drones' field of view is more flexible than that
of other platforms because of their custom flight path, collection angles, and ability to be much
closer to the objects being observed. Applications are far-reaching and include inspections,
surveying, monitoring, and even public safety.

Exploring the electromagnetic spectrum


To understand various types of sensors and how they are able to discriminate features, you must
first understand energy, light, and the electromagnetic spectrum (EMS). Electromagnetic
radiation is energy that moves through space at the speed of light as different wavelengths of
electric and magnetic fields. Types of electromagnetic radiation are divided into regions of
energy—including gamma (γ) rays, X-rays, ultraviolet, visible light, infrared, microwave, and
radio waves—that are based on specific, or agreed upon, wavelength ranges.

Some remote sensing sensors collect, measure, and record electromagnetic energy across various
regions of the EMS. This information is then processed and ultimately viewed as imagery.
Because features on Earth's surface reflect, absorb, or emit energy differently, imagery provides
the ability to identify features or extract information about what is happening on Earth's surface.
If the features within the landscape have high reflectance of a specific region of the EMS, the
image will appear brighter. Conversely, if the features absorb more of the energy, the imagery
will appear darker.

Terrestrial (or earth) remote sensing utilizes several specific regions of the EMS, but remote
sensing technology in general has broad applications beyond GIS. The following graphics
explore the regions of the EMS in more detail.

 Visible light
Visible light is the portion of the EMS that people can see with the naked eye. You see this as the
full range of colors from violet through blue, green, yellow, orange, and finally red. Images
collected in this portion of the EMS appear true to color, exactly as you would expect to
see them; in other words, blue features will reflect blue to a sensor, green features will reflect
green to a sensor, and so on.

 Near-infrared (NIR)

As the wavelengths on the EMS increase, you move into the infrared wavelength range. The
first region of the infrared range closest to visible light is the near-infrared (NIR) energy. Images
collected in this wavelength range of the EMS are often used to discern vegetation health.
Healthy vegetation appears very bright, because in this region of the EMS, high chlorophyll
levels reflect very strongly. NIR can also help distinguish subtle differences in vegetation
structure (for instance, deciduous versus coniferous trees, or even different species of plants).
Another characteristic of this region of the EMS is that water absorbs NIR energy. Because of
this absorption property, water features will reflect less NIR energy back to a sensor and appear
dark in the imagery.

 Thermal infrared

The thermal region spans a very wide swath of the EMS. Images collected with thermal infrared
could be used to monitor geothermal activity or identify the location of a toxic gas plume. It is
important to note that thermal infrared energy is directly associated with the sensation of heat.
Most of the thermal energy collected and recorded by a sensor in this range of the EMS is
emitted either by Earth's surface or an object on the ground (for example, a tin roof on a hot day).

 Microwaves
The microwave region of the EMS is very broad and is sometimes included as a subset of the
larger radio wave region of the EMS. Radar, an active remote sensing sensor, will emit and
record returned microwaves. An image collected within the microwave region will be non-literal,
as it displays motion and position, such as the wake of a ship. The image will often look different
from an image collected in shorter-wavelength regions (such as visible or infrared). Microwaves
can penetrate portions of the atmosphere in nearly all types of conditions: haze, clouds, light rain,
and even smoke.

 Radio (FM/AM) and long radio waves

As wavelength continues to increase, the next region of the EMS is radio waves. There is little
application for earth remote sensing in this region of the spectrum; however, you are probably
familiar with some widespread applications in this portion of the EMS, such as FM and AM
radio, mobile phone frequencies, and over-air television broadcast ranges.

 Ultraviolet (UV)
Moving along the other side of visible light, the wavelength decreases. The first region of the
EMS adjacent to violet/blue light is ultraviolet (UV) radiation. This portion of the EMS has
very limited utility for earth remote sensing, and most of this energy is absorbed by ozone in the
atmosphere. However, some sensors may capture a small portion of this range to penetrate the
surface of water.

 Gamma (γ) and X-rays

Similar to UV radiation, the next two regions of decreasing wavelength, Gamma (γ) and X-rays,
have limited use in earth remote sensing. Earth's magnetosphere absorbs most of the naturally
occurring radiation in this portion of the EMS.

1.
Which type of platform would most likely meet the following criteria? 
 Open-source
 Consistent cadence
 Collects information around the world

Aerial

Kite

Satellite

UAV or drone
2.
The electromagnetic spectrum (EMS) helps users understand which information?

Highly accurate measurement recordings from pulsed light energy returning from the surface of
the earth to the active sensor

Particular wavelength ranges where remote sensing sensors collect, measure, and record
responses from various regions of energy

Using precise wavelength measurements to determine the geographic coordinate system in which
the imagery dataset should be visualized

The spatial resolution or the area of the ground represented in each individual pixel in the
imagery dataset
3.
Which two use cases or applications are common for imagery use? 
Choose two.

Model subterranean features

Forest management

Trace networks

Agricultural monitoring
4.
Which statement describes active remote sensing?

Information is captured as wavelengths, measured in nanometers, along the EMS.

The sensor collects information on a consistent cycle while being transported by a platform. 

The sensor transmits a signal and measures the reflection back from Earth's surface.

The vehicle transports or carries the sensor as it captures information about Earth's surface.
5.
Which description is accurate for the remote sensing process?

Collecting information about an object or feature without direct contact


Recording how much sunlight is absorbed into Earth's atmosphere

Tracking a sensor on a platform as it travels across a geographic area

Measuring wavelengths in nanometers in the visible portion of the EMS


Seeing beyond the visible

Spectral profiles
Recall that all features on Earth's surface reflect, absorb, or emit energy differently across the
EMS. A feature's unique reflectance pattern along the different regions of the EMS is represented
as a spectral signature. Everything on Earth has a unique spectral signature, which provides the
ability to identify features or extract information about what is happening on Earth's surface.

In ArcGIS Pro, spectral signatures can be displayed in a chart format known as a spectral profile.
Spectral profiles can add information and distinguish details about features on the ground by
comparing spectral signatures. By selecting various features in an image and viewing the spectral
profiles, differences in reflection and absorption properties can be observed.

The following graphic is an example of a spectral profile. The x-axis represents the wavelength
range of the EMS. The y-axis is the reflected energy values. Reflectance values for specific
features in the imagery, such as water, are plotted along the range of the EMS. The unique
reflectance patterns help you identify and understand how various features absorb and reflect
energy in different regions of the EMS.

This example of a spectral profile represents how different features would appear, as intensities, across the EMS.
The instrument used to collect this information renders smooth lines, allowing for easy discrimination. (Source:
"Imagery and GIS" by Green, Congalton, and Tukman. Courtesy: Esri Press.)
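The idea of a spectral profile can be sketched in code. The following Python snippet uses made-up reflectance values (not measurements from any real sensor) to represent two spectral signatures as wavelength-to-reflectance mappings and find the region where each feature reflects most strongly:

```python
# Illustrative spectral profiles: reflectance (0-1) at a few sample
# wavelengths in nanometers. All values are invented for demonstration.
profiles = {
    "water":      {450: 0.08, 550: 0.06, 650: 0.04, 850: 0.01},
    "vegetation": {450: 0.04, 550: 0.10, 650: 0.05, 850: 0.50},
}

def brightest_band(profile):
    """Return the wavelength at which a feature reflects most strongly."""
    return max(profile, key=profile.get)

print(brightest_band(profiles["vegetation"]))  # 850
print(brightest_band(profiles["water"]))       # 450
```

In this toy profile, vegetation peaks in the NIR (850 nm) while water is brightest in blue (450 nm) and nearly dark in the NIR, matching the reflection and absorption behavior that a real spectral profile chart would reveal.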

How sensors measure and collect energy into bands

In imagery, the reflected energy measured in various regions of the EMS is stored and referred to
as bands. For example, reflectance collected in the red region of the EMS would be stored in the
Red band. Different sensors can collect reflectance from different regions of the EMS; therefore,
the number of bands may vary between sensors.
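As a rough sketch of how bands are organized, multiband imagery is commonly handled as a multidimensional array in which each band is one 2-D layer. The band order and pixel values below are placeholders chosen only for illustration:

```python
import numpy as np

# A notional 4-band image stored as (bands, rows, cols).
# Pixel values are random placeholders, not real reflectance data.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(4, 100, 100), dtype=np.uint8)

# Assumed band order for this sketch: Blue, Green, Red, NIR.
BANDS = {"blue": 0, "green": 1, "red": 2, "nir": 3}

red_band = image[BANDS["red"]]   # one band is a single 2-D raster
print(image.shape[0])            # number of bands -> 4
print(red_band.shape)            # (100, 100)
```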

Most sensors collect reflectance values beyond the visible portion of light that humans are able
to see. Just as different objects and materials reflect, emit, or absorb visible light differently—
allowing humans to see variations in color—objects reflect, emit, or absorb energy differently in
regions outside the visible portion of the EMS. Generally, bands will capture the visible, NIR,
and shortwave infrared (SWIR) regions, because these regions tend to contain more useful
information in Earth observation remote sensing.

In the following graphic, well-known sensors are listed with their associated bands displayed
along the EMS. Some bands appear wider than others, as they collect a greater wavelength range
of the EMS, such as WorldView-3's Panchromatic band.

Sensors and wavelength collections in bands

Bands capturing the wavelengths between 400 and 700 nm are denoted in different colors to
represent the visible spectrum. These bands capture what our eyes can see in true color. Beyond
700 nm, all the bands are the same color, as they fall outside the visible spectrum. Although our
eyes cannot detect these bands, the bands can be manipulated to extract information about Earth's
surface.
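The ordering of regions by wavelength can be sketched as a simple lookup. The cutoff wavelengths below are approximate and vary by convention; they are included only to illustrate how a wavelength maps to a coarse EMS region:

```python
# Approximate EMS region boundaries in nanometers. Boundaries differ
# between sources; these are rough, illustrative values only.
REGIONS = [
    (10, "gamma/X-ray"),
    (400, "ultraviolet"),
    (700, "visible"),
    (1_100, "near-infrared"),
    (3_000, "shortwave infrared"),
    (14_000, "thermal infrared"),
    (1_000_000_000, "microwave"),
]

def ems_region(wavelength_nm):
    """Classify a wavelength (nm) into a coarse EMS region."""
    for upper, name in REGIONS:
        if wavelength_nm <= upper:
            return name
    return "radio"

print(ems_region(550))   # visible
print(ems_region(850))   # near-infrared
```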

Image characteristics: Four types of resolutions


Previously, you learned that an image stores spectral information collected by the sensor in
bands. These bands represent the spectral property of an image. Imagery uses common
characteristics, or properties, to describe the raster data, referred to as resolutions. There are four
types of resolutions to consider when working with imagery: spectral, spatial, radiometric, and
temporal. Understanding the resolutions of imagery will enable better visualization,
interpretation, and analysis of the data. Click through the following items to learn more about the
different types of imagery resolutions.

 Spectral resolution 
Spectral resolution refers to the number of bands, as well as the bands' wavelength ranges in the
imagery. This spectral information can be used to identify features or specific phenomena in the
image. The higher the spectral resolution, the more bands are available to provide spectral
information, meaning more information about features in the imagery. For example, the left
graphic would have a higher spectral resolution than the right graphic, because it contains more
bands and thus more spectral information. 

The three-band image example represents three bands collected in the red, green, and blue portions of the EMS. The
single-band image example contains one band collected across a greater wavelength range of the EMS. Because the
three-band image example contains more distinct spectral information, it has a higher spectral resolution than the
single-band image example.

 Spatial resolution
Spatial resolution refers to the size of the cell, or pixel. The cell size is the area represented on
the ground in the image. For example, if the spatial resolution of an image is 30 meters, the cell
size represents 30 meters by 30 meters on the ground. Only objects larger than 30 meters in the
image can be seen with any fidelity. Objects smaller than 30 meters become blurry and are
difficult to discern and identify. The smaller the pixels, the more detail you can see in an image.
A high spatial resolution indicates that the image can resolve smaller features with greater detail
because there are more cells per unit area.

The feature on the left can be represented in a raster dataset of different spatial resolution. Examples of low-,
medium-, and high-resolution rasters (from left to right) show how well or poorly the original feature will be
represented.
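The relationship between cell size and ground coverage is straightforward arithmetic; a minimal sketch:

```python
def ground_area_per_pixel(resolution_m):
    """Area on the ground represented by one pixel, in square meters."""
    return resolution_m ** 2

def pixels_to_cover(feature_size_m, resolution_m):
    """Roughly how many pixels span a feature of the given width."""
    return feature_size_m / resolution_m

print(ground_area_per_pixel(30))   # 900 square meters per 30 m pixel
print(pixels_to_cover(10, 30))     # a 10 m object is sub-pixel at 30 m
print(pixels_to_cover(10, 1))      # 10 pixels across at 1 m resolution
```

At 30-meter resolution, a 10-meter object occupies only a third of a pixel and cannot be resolved, which is exactly why smaller features blur at coarser spatial resolutions.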

 Radiometric resolution
Radiometric resolution, often referred to as bit depth, is the range of possible data values that can
be stored in each band. It describes the sensor's sensitivity, or the range of brightness values
available to distinguish features viewed in the same region of the EMS. Visually, this range corresponds to the grayscale values
observable in each band. The larger the range of data values, the higher the radiometric
resolution. This increases the ability to distinguish features in the imagery with greater detail. For
example, the graphic on the right has a higher radiometric resolution because of its ability to
store a larger range of data values in each cell.

Pictured is a notional comparison between an Unsigned 8-bit grayscale image (left) and an Unsigned 16-bit
grayscale image (right). The Unsigned 16-bit image has a higher radiometric resolution because it allows for more
possible data values.
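Bit depth maps directly to the number of storable values: an unsigned n-bit band can hold 2^n distinct brightness values. A minimal sketch:

```python
def value_range(bit_depth):
    """Distinct brightness values an unsigned n-bit band can store."""
    return 2 ** bit_depth

print(value_range(8))    # 256
print(value_range(11))   # 2048
print(value_range(16))   # 65536
```

An unsigned 16-bit band can therefore distinguish 65,536 brightness levels versus 256 for an 8-bit band, which is why it offers a much higher radiometric resolution.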

 Temporal resolution 
Temporal resolution refers to when the image was collected and the frequency of collection for
the same geographic area. This could include the time of day, date, and time between capture.
Collection consistencies allow analysts to look back in time, perform change detection, identify
trends, and make predictions about the future. Having more imagery of the same geographic
location across time can provide a better understanding of the landscape. A sensor that collects
data once every week has a higher temporal resolution than a sensor that collects data once a
month. Temporal resolution is determined primarily by the platform type. For example, if the
imagery is collected with a satellite, it is more likely to be collected on a consistent cadence,
revisiting the same location as the satellite orbits Earth.
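Revisit frequency translates directly into how many looks at a location accumulate per year; a minimal sketch with two hypothetical sensors:

```python
# Revisit frequency in days for two hypothetical sensors.
sensors = {"sensor_a": 7, "sensor_b": 30}

def collections_per_year(revisit_days):
    """Approximate number of images of the same location per year."""
    return 365 // revisit_days

print(collections_per_year(sensors["sensor_a"]))  # 52
print(collections_per_year(sensors["sensor_b"]))  # 12
```

The weekly collector accumulates 52 looks per year against 12 for the monthly one, giving it the higher temporal resolution and a much denser record for change detection.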

1.
What advantage does a spectral profile provide when evaluating features?

Provides a representation of each individual region of the EMS that will be used in Earth remote
sensing
Provides a graphical representation of the reflectance behaviors of different landscapes and
materials across the EMS

Provides a graphical representation comparing passive and active remote sensing sensors that can
be used in Earth remote sensing

Provides a graphical representation to determine the metadata that will need to be applied to the
imagery
2.
How can geographic features be identified using a spectral profile in relation to the EMS?

Features in the imagery will have a unique reflectance pattern across the EMS. 

The spectral profile displays a chart of each feature's sensitivity to the NIR region of the EMS.

Geographic features are identified in a spectral profile by comparing the visible reflectance in the
imagery.

The EMS provides a spectral profile to look up the reflectance of the features in the imagery.
3.
An analyst must deliver a crop health assessment for the contiguous United States for a national
agency. The assessment needs to include the current crop health and indicate how the current
crop health compares over the duration of the entire growing season. The analyst needs to
determine which imagery and raster resolutions will be the most valuable for the assessment.

Sentinel-2
 Spatial Resolution: 10 meters
 Spectral Resolution: Blue, Green, Red, and Near-Infrared bands
 Radiometric Resolution: 11-bit
 Temporal Resolution: Revisit frequency is every 10 days

Aerial imagery collected on behalf of the agency
 Spatial Resolution: 1 meter
 Spectral Resolution: Red, Green, Blue, and Near-Infrared bands
 Radiometric Resolution: Unsigned 16-bit
 Temporal Resolution: This was a one-time collect for the National Agency.
Which imagery and raster resolutions would be most valuable to the analyst?

Aerial imagery collected on behalf of the agency: Spatial Resolution, Temporal Resolution

Sentinel-2: Spatial Resolution, Temporal Resolution

Aerial imagery collected on behalf of the agency: Spectral Resolution, Temporal Resolution

Sentinel-2: Spectral Resolution, Temporal Resolution


4.
How do sensors discriminate and organize information from different portions of the EMS?

Creating raster datasets for each portion of the EMS

Collecting and storing this information as bands

Utilizing the cell size or spatial resolution of the imagery

Collecting and storing this information as bit depth


5.
Match the statement that best describes the specific raster resolution.

Statement and matching raster resolution:

 The more consistent the collection of images, the more insight is available on change over time. -> Temporal resolution

 The larger the number of bands within an image, the more information can be gained from features in the imagery. -> Spectral resolution

 The smaller the cell size in an image, the more detail can be observed in the imagery. -> Spatial resolution

 The larger the range of data or brightness values in the image, the more subtle the differences that can be distinguished. -> Radiometric resolution

Applying image band combinations


With multispectral imagery, different band combinations, or composites, can be manipulated to
observe unique characteristics about surface features in images. By rendering these different
bands through red, green, and blue display channels in ArcGIS Pro, imagery can be visualized in
several ways. For instance, a Natural Color rendering will have the Red band displayed through
the red channel, the Green band in the green channel, and the Blue band through the blue
channel.

However, alternative band combinations have been standardized over the years. These
combinations are often referred to as false-color composites, or false-color infrared (IR)
composites, because they do not render a natural color appearance; instead, they render different
band combinations using the infrared bands in varying display channels.

Understanding the relationships of these different bands and the corresponding reflectance and
absorption characteristics of features in different regions of the EMS allows you to identify and
differentiate features.

Sensors collect in various regions, or bands, of the EMS. ArcGIS Pro allows you to view these bands through
specific color display channels in your map.
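Rendering a composite amounts to choosing which band feeds each display channel. The sketch below (placeholder pixel values, with an assumed band order of Blue, Green, Red, NIR) builds a natural color and a color infrared composite with NumPy:

```python
import numpy as np

# A notional 4-band image: shape (bands, rows, cols). Pixel values
# are random placeholders; band order is assumed for this sketch.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(4, 50, 50), dtype=np.uint8)
BLUE, GREEN, RED, NIR = 0, 1, 2, 3

def composite(image, r, g, b):
    """Stack three bands into the red, green, blue display channels."""
    return np.stack([image[r], image[g], image[b]], axis=-1)

natural_color = composite(image, RED, GREEN, BLUE)   # true color
color_infrared = composite(image, NIR, RED, GREEN)   # NIR in red channel
print(natural_color.shape)   # (50, 50, 3)
```

Swapping the NIR band into the red display channel is all it takes to turn a natural color rendering into a color infrared one, which is why healthy, NIR-bright vegetation appears red in that composite.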
Using the same Landsat 8 image of the Great Sandy Desert in western Australia, click through
the following slides to see how different band composites impact the display of geographic
features on the landscape.


 Natural color composite

A natural color band combination is the closest rendition to what is visible with the human eye.
In this type of composite rendering, imagery appears the way that you would see features if
flying over an area. For instance, the sand and rock features appear as shades of brown, red-
brown, or even white in regions of more exposed and weathered rock. The stream or river
channel that runs north to south in the left of the image is visible, but the surrounding area is
unremarkable and looks very similar to other features near it.

 Color infrared composite

A color infrared composite is a type of false-color IR that helps distinguish between vegetation,
urban, and water features. Because the NIR band is displayed in the red color channel, healthy
vegetation will appear as shades of red. The vegetation along the river and stream channels
(riparian vegetation) is clearly visible as red and magenta colors. Other features, such as different
rock types and sand formations, appear with variable colors to help with discrimination. Unlike
the natural color composite, where these features are various shades of brown or white, these
same features appear as shades of yellow, green, pink, and even blue, allowing you to observe
subtle differences in areas not apparent in the visible region of the EMS.

 Vegetation analysis composite

The vegetation analysis composite is another type of preset false-color IR band combination that
you can render in ArcGIS Pro. This band combination highlights vegetation based on its water
content and cell structure. Because the NIR band is displayed in the green color channel, healthy
vegetation appears as vibrant shades of green. As a corollary to vegetation analysis, this
composite can help identify the lack of vegetation. In this band combination, the SWIR-1 band is
displayed in the red channel. This region of the EMS is very sensitive to moisture content in soil,
as well as in vegetation. The reflectance of features decreases as water, or moisture, content
increases. In other words, the drier a feature, the more red it will appear. This characteristic
can help identify burn scars on the landscape. For instance, this image shows various past
wildfire scars throughout the image. The darker or deeper the red of a feature, the more recent
the fire.

 Custom composite

In ArcGIS Pro, you are not limited to the preselected band combinations. You can select bands to
display in whichever color channel you desire. For instance, the band combination above is
SWIR-2, Red, Blue (R,G,B). The SWIR-2 band is useful in soil and mineral discrimination;
displaying this band with two of the visible bands helps highlight various features in the
landscape. Another characteristic of this band is its strong water absorption property. Its water
absorption is stronger than that of the SWIR-1 band. Compare this band combination to the other
three, and observe the variability of features in different areas of the image.

1.
In ArcGIS Pro, an image analyst changed the band combination to visualize the image as a color
infrared composite. Why would the analyst modify the bands of the image in this way?

To display healthy vegetation as vibrant shades of green

To visualize the imagery as it appears to the human eye

To highlight water features with a strong water absorption property

To distinguish between vegetation, urban, and water features


2.
Which band combination helps users identify healthy vegetation?

Red, Green, Coastal

Panchromatic

Near infrared, Red, Green

Red, Green, Blue


3.
How can modifying band combinations assist with interpreting different phenomena and features
within an image?

Because modifying band combinations provides a work-around to visualize multiple single-band
images at one time, phenomena and features can be easily interpreted.

Because modifying band combinations creates a copy of the image, the features and phenomena
in the images can be compared side by side in the software.

Because spatial resolution determines which phenomena and features are visible, certain band
combinations can be used to provide further clarity of the image.

Because different features absorb and reflect energy differently, specific band combinations can
highlight or exaggerate phenomena and features within an image.

Imagery in ArcGIS Pro


Raster data models for imagery management
Understanding raster data models in ArcGIS Pro is necessary to use imagery effectively.
Proper management of imagery helps with organizing data, applying the correct metadata
and display settings, and sharing imagery within your organization. It is important to
remember that your specific imagery data and business needs will drive your raster
management strategy. There are often many considerations (such as data size, accessibility,
and intended use), and strategies vary between organizations. That said, there are three
primary data models everyone should be familiar with to work with imagery in ArcGIS:
the raster dataset, the mosaic dataset, and the image service.

A raster dataset is an individual unit of raster data stored on disk. A raster dataset defines how its pixels are
stored, including the number of rows and columns, the number of bands, the actual pixel values, and other specific
properties of the data. Raster datasets can be stored in any of the many formats supported in ArcGIS Pro, often
referred to as raster file formats.
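The core properties a raster dataset defines can be sketched as a small data model. This is illustrative only; real raster formats such as GeoTIFF store far more, including the spatial reference, cell size, and compression settings.

```python
from dataclasses import dataclass, field

@dataclass
class RasterDataset:
    """Minimal model of the properties a raster dataset defines.

    Illustrative only: real formats also carry a spatial reference,
    cell size, no-data values, compression, and more.
    """
    rows: int
    cols: int
    band_count: int
    pixels: list = field(default_factory=list)  # band-major pixel values

    def size(self):
        """Total number of pixel values across all bands."""
        return self.rows * self.cols * self.band_count

# A tiny 2 x 2 raster with 3 bands holds 12 pixel values.
tiny = RasterDataset(rows=2, cols=2, band_count=3, pixels=[0] * 12)
assert tiny.size() == len(tiny.pixels)
```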

Collections of raster datasets can be managed, visualized, analyzed, and shared with a mosaic dataset. The mosaic
dataset is not raster data itself; it is a framework that catalogs imagery. Therefore, the mosaic dataset does not result
in the creation of new raster datasets; it catalogs existing raster datasets. Created in a geodatabase (file or enterprise),
the framework dynamically stitches raster datasets together to be visualized as one image without altering or
converting the original imagery.
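The key idea above, that a mosaic dataset catalogs references to rasters rather than copying their pixels, can be sketched as follows. This is a toy model, not how ArcGIS implements it internally (ArcGIS stores the catalog as tables and footprint geometries in a geodatabase); the file names and rectangular footprints are hypothetical.

```python
class MosaicCatalog:
    """Toy model of a mosaic dataset: a catalog of references to
    existing rasters, not a copy of their pixel data."""

    def __init__(self):
        self.catalog = []  # each entry: a path plus a footprint rectangle

    def add_raster(self, path, footprint):
        # Cataloging records where the data lives and what area it covers;
        # the source raster on disk is never altered or converted.
        self.catalog.append({"path": path, "footprint": footprint})

    def rasters_for_extent(self, extent):
        """Return the source rasters whose footprints intersect the
        requested extent -- the items that would be stitched on the fly."""
        x0, y0, x1, y1 = extent
        return [
            item["path"] for item in self.catalog
            if not (item["footprint"][2] < x0 or item["footprint"][0] > x1
                    or item["footprint"][3] < y0 or item["footprint"][1] > y1)
        ]

mosaic = MosaicCatalog()
mosaic.add_raster("tile_a.tif", (0, 0, 10, 10))
mosaic.add_raster("tile_b.tif", (10, 0, 20, 10))
print(mosaic.rasters_for_extent((8, 2, 12, 8)))  # ['tile_a.tif', 'tile_b.tif']
```

Because only references and footprints are stored, adding a raster is cheap, and the "stitched" view is assembled dynamically at display time from whichever source tiles intersect the requested extent.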

Individual raster datasets, or collections of raster datasets managed in a mosaic dataset, can be shared as an image
service. Image services provide access to raster data through a URL and are accessible in ArcGIS Pro as well as in
web and mobile applications. Sharing raster data as an image service often lets a broader audience interact with the
imagery, because users are not limited to the ArcGIS Pro desktop client.

1.
An image analyst was provided with updated aerial imagery for their county. To manage the
collection of images, the analyst chooses to use the mosaic dataset model. To begin, the analyst
takes note of the storage location and the properties of the images. Next, they create a new
mosaic dataset in a file geodatabase. What should the analyst do next?

Apply the appropriate raster type.

Define and build overviews.

Add rasters to the mosaic dataset.

Apply the product definition.


2.
Why would you want to create and use a mosaic dataset? 

To copy individual images into a comprehensive data model for further visualization

To share imagery with an audience that may not have access to ArcGIS Pro

To apply visualization and analysis across multiple imagery datasets at once

To investigate metadata and raster resolutions for an individual image


3.
How do raster products and raster types provide optimal display of imagery?

By creating an attribute table with additional information

By adding raster datasets to mosaic datasets

By changing the default stretch render to RGB

By providing easy-to-apply processing templates


4.
Which component of the mosaic dataset must be referenced to catalog and display imagery? 

The collection of raster datasets

The Footprint feature class


A link to the processing template

The Boundary feature class
