Chapter 3, GIS
Active sensors, on the other hand, provide their own energy source for illumination. The
sensor emits radiation which is directed toward the target to be investigated (Figure 3.2). The
radiation reflected from that target is detected and measured by the sensor.
Advantages for active sensors include the ability to obtain measurements anytime, regardless of
the time of day or season. Active sensors can be used for examining wavelengths that are not
sufficiently provided by the sun, such as microwaves, or to better control the way a target is
illuminated. However, active systems require the generation of a fairly large amount of energy to
adequately illuminate targets. Some examples of active sensors are radar (radio detection and ranging), lidar (light detection and ranging), sonar (sound navigation and ranging) and laser fluorosensors.
In order for a sensor to collect and record energy reflected or emitted from a target or surface, it must reside on a
stable platform removed from the target or surface being observed. Platforms for remote
sensors can be situated at heights ranging from just a few centimeters, using field equipment,
up to orbits in space as far away as 36,000 km (geostationary orbits) and beyond. Very often
the sensor is mounted on a moving vehicle, which we call the platform, such as aircraft and
satellites. Occasionally, static platforms are used. For example, by using a multispectral sensor
mounted to a pole, the changing reflectance characteristics of a specific crop during the day or
season can be assessed. As shown in Figure 3.7 platforms for remote sensors may be situated on
the ground, on an aircraft or balloon (or some other platform within the Earth's atmosphere), or
on a spacecraft or satellite outside of the Earth's atmosphere.
Ground-based sensors are often used to record detailed information about the surface, which is
compared with information collected from aircraft or satellite sensors. In some cases, this can be
used to better characterize the target which is being imaged by these other sensors, making it
possible to better understand the information in the imagery. Sensors may be placed on a ladder,
tall building, cherry picker, crane, etc.
Airborne observations are carried out using aircraft with specific modifications to carry sensors.
An aircraft needs a hole in the floor or a special remote sensing pod for the aerial camera or a
scanner. Sometimes Ultra-Light Vehicles (ULVs), balloons, helicopters, airships or kites are
used for airborne remote sensing. Depending on the platform and sensor, airborne observations
are possible at altitudes ranging from less than 100 m up to 40 km.
The navigation of an aircraft is one of the most crucial parts of airborne remote sensing. The
availability of satellite navigation technology has significantly improved the quality of flight
execution as well as the positional accuracy of the processed data. A recent development is the
use of Unmanned Aerial Vehicles (UAVs) for remote sensing.
For space borne remote sensing, satellites and space stations are used. Satellites are launched
into space with rockets. Satellites for Earth Observation are typically positioned in orbits
between 150 km and 36,000 km altitude. The choice of the specific orbit depends on the
objectives of the mission, e.g. continuous observation of large areas or detailed observation of
smaller areas.
3.3 Orbit
Space borne remote sensing is carried out using sensors that are mounted on satellites, space
vehicles and space stations. The monitoring capabilities of the sensor are to a large extent
determined by the parameters of the satellite's orbit. In general, an orbit is the (near-)circular path followed by a satellite in its revolution around the Earth. Satellite orbits are matched to the
capability and objective of the sensor(s) they carry. Orbit selection can vary in terms of
altitude (their height above the Earth's surface) and their orientation and rotation relative
to the Earth. Hence, different types of orbits are required to achieve continuous monitoring
(meteorology), global mapping (land cover mapping), or selective imaging (urban areas). For
remote sensing purposes, the following orbit characteristics are relevant:
Orbital altitude, which is the distance (in km) from the satellite to the surface of the
Earth. Typically, remote sensing satellites orbit either between 150 km and 1000 km (low-earth
orbit, or LEO) or at 36,000 km (geostationary orbit or GEO) distance from the Earth's
surface. This altitude influences to a large extent the area that can be viewed (coverage) and
the details that can be observed (resolution).
Orbital inclination angle, which is the angle (in degrees) between the orbital plane and
the equatorial plane. The inclination angle of the orbit determines, together with the field of
view of the sensor, the latitudes up to which the Earth can be observed. If the inclination is
60°, then the satellite flies over the Earth between the latitudes 60° north and 60° south.
If the satellite is in a low-earth orbit with an inclination of 60°, then it cannot observe parts of
the Earth at latitudes above 60° north and below 60° south, which means it cannot be used
for observations of the polar regions of the Earth. Many of these satellite orbits are also
sun-synchronous such that they cover each area of the world at a constant local time of
day called local sun time.
Orbital period, which is the time (in minutes) required to complete one full orbit. The orbital period and the mean distance to the center of the Earth are interrelated (Kepler's third law); a short worked example follows this list.
Repeat cycle, which is the time (in days) between two successive identical orbits. The
revisit time, the time between two subsequent images of the same area, is determined by
the repeat cycle together with the pointing capability of the sensor. Pointing capability
refers to the possibility of the sensor-platform to look to the side, or fore and behind.
The sensors mounted on SPOT, IRS and Ikonos have such a capability.
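To illustrate the relationship expressed by Kepler's third law, the following minimal sketch (in Python) computes the period of a circular orbit from its altitude. The constants and the 705 km "Landsat-like" altitude are illustrative assumptions rather than values quoted from the text above.

    import math

    MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
    R_EARTH = 6_371_000.0       # mean Earth radius, m

    def orbital_period_minutes(altitude_km: float) -> float:
        """Period of a circular orbit at the given altitude (Kepler's third law)."""
        a = R_EARTH + altitude_km * 1000.0                 # semi-major axis, m
        period_s = 2.0 * math.pi * math.sqrt(a**3 / MU_EARTH)
        return period_s / 60.0

    # Illustrative altitudes: a Landsat-like low-earth orbit and a geostationary orbit.
    for alt in (705, 36_000):
        print(f"{alt:>6} km altitude -> period ~ {orbital_period_minutes(alt):.1f} minutes")

Run as written, this gives roughly 99 minutes for the low-earth case and about 24 hours for the 36,000 km case, which is exactly why that altitude yields a geostationary orbit.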
The following orbit types are most common for remote sensing missions:
Polar orbit: an orbit with an inclination angle between 80° and 100°. An orbit having an inclination larger than 90° means that the satellite's motion is in a westward direction.
(Launching a satellite in an eastward direction requires less energy, because of the eastward rotation of the Earth.) Such a polar orbit enables observation of the whole globe, including the regions near the
poles. The satellite is typically placed in orbit at 600 km to 1000 km altitude. Most of the
remote sensing satellite platforms today are in near-polar orbits, which means that the
satellite travels northwards on one side of the Earth and then toward the southern pole on the
second half of its orbit (Figure 3.9). These are called ascending and descending passes,
respectively. If the orbit is sun synchronous the ascending pass is most likely on the
shadowed side of the Earth while the descending pass is on the sunlit side. Sensors
recording reflected solar energy only image the surface on a descending pass, when solar
illumination is available. Active sensors which provide their own illumination or passive
sensors that record emitted (e.g. thermal) radiation can also image the surface on ascending passes.
Figure 3.9: Polar orbit
Sun-synchronous orbit: this is a near-polar orbit chosen in such a way that the satellite always passes over a given location at the same local solar time. Most sun-synchronous orbits cross the equator at mid-morning, at around 10:30 local solar time. At that moment the Sun angle is low and the resultant shadows reveal terrain relief. In addition to day-time images, a sun-synchronous orbit also allows the satellite to record night-time images (thermal or radar) during the ascending phase of the orbit, on the dark side of the Earth. Examples of polar orbiting,
sun-synchronous satellites are Landsat, SPOT and IRS.
Geostationary orbit: this refers to orbits in which the satellite is placed above the equator
(inclination angle: 0°) at an altitude of approximately 36,000 km. Satellites at very high
altitudes, which view the same portion of the Earth's surface at all times, have geostationary
orbits (Figure 3.10). These geostationary satellites, at altitudes of approximately 36,000
kilometers, revolve at speeds which match the rotation of the Earth so they seem
stationary, relative to the Earth's surface. This allows the satellites to observe and collect
information continuously over specific areas. Weather and communications satellites
commonly have these types of orbits. Due to their high altitude, some geostationary weather
satellites can monitor weather and cloud patterns covering an entire hemisphere of the Earth.
Figure 3.10: Geostationary orbit
Today's meteorological satellite systems use a combination of geostationary satellites and polar orbiters. The geostationary satellites offer a continuous hemispherical view of almost half
the Earth (45%), while polar orbiters offer a higher spatial resolution.
As a satellite revolves around the Earth, the sensor "sees" a certain portion of the Earth's surface.
The area imaged on the surface is referred to as the swath. Imaging swaths for space borne sensors generally vary between tens and hundreds of kilometers in width. The rotation of the Earth beneath the satellite allows the swath to cover a new area with each consecutive pass, so the satellite's orbit and the rotation of the Earth work together to allow complete coverage of the Earth's surface once the satellite has completed one full cycle of orbits.
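A rough sense of how the orbit and the Earth's rotation combine to give full coverage can be sketched numerically. The 100-minute period and 185 km swath below are illustrative assumptions only, and the calculation ignores swath overlap and the effect of orbital inclination away from the equator.

    EARTH_CIRCUMFERENCE_KM = 40_075.0   # equatorial circumference
    SIDEREAL_DAY_MIN = 1436.0           # minutes for one full Earth rotation

    def ground_track_shift_km(orbital_period_min: float) -> float:
        """Westward shift of the ground track at the equator after one orbit,
        caused by the Earth rotating underneath the satellite."""
        fraction_of_rotation = orbital_period_min / SIDEREAL_DAY_MIN
        return fraction_of_rotation * EARTH_CIRCUMFERENCE_KM

    # Illustrative numbers: ~100-minute low-earth orbit, 185 km swath.
    shift = ground_track_shift_km(100.0)
    swath_km = 185.0
    print(f"Equatorial shift per orbit: ~{shift:.0f} km")
    print(f"Orbits needed for the swaths to tile the equator: ~{EARTH_CIRCUMFERENCE_KM / swath_km:.0f}")

With these assumed values the ground track shifts by roughly 2,800 km per orbit, and on the order of two hundred orbits are needed before the narrow swaths have tiled the whole equator, which is why repeat cycles of low-earth orbiters are measured in days to weeks.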
Resolution
Spatial resolution
Spatial resolution refers to the smallest unit area measured and indicates the minimum detail of objects that can be distinguished. The detail discernible in an image depends on the spatial resolution of the
sensor and refers to the size of the smallest possible feature that can be detected. Images where only
large features are visible are said to have coarse or low resolution. In fine or high resolution
images, small objects can be detected. Military sensors for example, are designed to view as much
detail as possible, and therefore have very fine resolution. Commercial satellites provide imagery
with resolutions varying from a few meters to several kilometers. Generally speaking, the finer the
resolution, the less total ground area can be seen.
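The trade-off between detail and coverage can be made concrete with a small sketch: for a sensor with a fixed number of detector elements across-track, the swath width is simply the pixel count times the ground sample distance. The 6000-element detector width assumed below is purely illustrative.

    # Resolution/coverage trade-off for an assumed 6000-pixel across-track detector.
    PIXELS_ACROSS_TRACK = 6000

    for gsd_m in (0.5, 2, 10, 30, 1000):
        swath_km = PIXELS_ACROSS_TRACK * gsd_m / 1000.0   # swath = pixels x ground sample distance
        print(f"GSD {gsd_m:>6} m  ->  swath ~ {swath_km:>7.1f} km")

The finer the ground sample distance, the narrower the strip of ground covered in a single pass, consistent with the statement above.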
Spectral resolution
This is related to the widths of the spectral wavelength bands that the sensor is sensitive to. The
finer the spectral resolution, the narrower the wavelength ranges for a particular channel
or band. Black and white film records wavelengths extending over much, or all, of the visible
portion of the electromagnetic spectrum. Its spectral resolution is fairly coarse, as the various
wavelengths of the visible spectrum are not individually distinguished and the overall reflectance
in the entire visible portion is recorded. Colour film is also sensitive to the reflected energy over
the visible portion of the spectrum, but has higher spectral resolution, as it is individually
sensitive to the reflected energy at the blue, green, and red wavelengths of the spectrum (Figure
3.16). Thus, it can represent features of various colors based on their reflectance in each of these
distinct wavelength ranges.
Many remote sensing systems record energy over several separate wavelength ranges at various
spectral resolutions. These are referred to as multi-spectral sensors. Advanced multi-spectral
sensors called hyperspectral sensors, detect hundreds of very narrow spectral bands throughout
the visible, near-infrared, and mid-infrared portions of the electromagnetic spectrum.
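The idea that finer spectral resolution means narrower bands can be illustrated with two hypothetical band layouts; the band edges below are assumptions for illustration only, not the bands of any particular sensor.

    import numpy as np

    # Hypothetical band layouts: a broad-band "multispectral" layout and a
    # narrow-band "hyperspectral" layout covering the 0.4-2.5 um range.
    multispectral_edges = np.array([0.45, 0.52, 0.60, 0.69, 0.76, 0.90])  # um
    hyperspectral_edges = np.arange(0.4, 2.5001, 0.01)                    # ~10 nm bands

    for name, edges in [("multispectral", multispectral_edges),
                        ("hyperspectral", hyperspectral_edges)]:
        widths = np.diff(edges) * 1000.0   # band widths in nanometres
        print(f"{name}: {len(widths)} bands, mean width ~{widths.mean():.0f} nm")

The hyperspectral layout has hundreds of bands only about 10 nm wide, whereas the multispectral layout has a handful of bands roughly ten times wider.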
Radiometric resolution
This refers to the smallest differences in levels of energy that can be distinguished by the
sensor. It describes the actual information content in an image. Every time an image is acquired
on film or by a sensor, its sensitivity to the magnitude of the electromagnetic energy determines
the radiometric resolution. The radiometric resolution of an imaging system describes its
ability to discriminate very slight differences in energy. The finer the radiometric resolution of a
sensor the more sensitive it is to detecting small differences in reflected or emitted energy.
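Radiometric resolution is usually expressed as the number of bits used to quantise the recorded energy. The short sketch below simply compares how many grey levels different (illustrative) bit depths can distinguish.

    # Radiometric resolution as bit depth: more bits -> more distinguishable levels.
    for bits in (6, 8, 11, 16):
        levels = 2 ** bits
        print(f"{bits:>2}-bit data -> {levels} distinguishable grey levels")

An 8-bit sensor can separate 256 levels of brightness, while an 11-bit sensor can separate 2048, so the latter is sensitive to much smaller differences in reflected or emitted energy.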
Temporal resolution
Revisit time, sometimes called temporal resolution, is the (minimum) time between two successive image acquisitions over the same location on Earth. For a satellite whose sensor cannot be pointed, it equals the length of time the satellite takes to complete one entire orbit (repeat) cycle. The ability to collect imagery of the same area of the Earth's surface at different periods of time is one of the most important elements for applying remote sensing data. For example, during the growing season, most species of vegetation are in a continual state of change and our ability to monitor those subtle changes using remote sensing is dependent on when and how frequently we collect imagery.
Radiometric Correction
Radiometric corrections may be necessary due to variations in scene illumination and viewing
geometry, atmospheric conditions, and sensor noise and response. Radiometric corrections
include correcting the data for sensor irregularities and unwanted sensor or atmospheric noise,
and converting the data so they accurately represent the reflected or emitted radiation measured
by the sensor. A common correction for atmospheric scattering examines the recorded values over features such as deep, clear water or dense shadow. It is based on the assumption that the reflectance from these features, if the atmosphere is clear, should be very small, if not zero. If we observe values much greater than zero, they are considered to have resulted from atmospheric scattering, and that offset can be subtracted from the whole band.
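A minimal sketch of this kind of scattering ("dark-object") subtraction is given below, assuming the band is already held as a NumPy array of digital numbers; the sample values are invented for illustration. The darkest observed value is attributed to the atmosphere and subtracted from every pixel.

    import numpy as np

    def dark_object_subtraction(band: np.ndarray) -> np.ndarray:
        """First-order haze correction: treat the darkest observed value
        in the band as atmospheric scattering and subtract it everywhere."""
        haze_dn = int(band.min())                    # offset attributed to scattering
        corrected = band.astype(np.int32) - haze_dn
        return np.clip(corrected, 0, None)           # keep digital numbers non-negative

    # Illustrative 8-bit band whose truly dark targets should be near zero.
    band = np.array([[12, 40,  75],
                     [13, 90, 200],
                     [15, 60, 110]], dtype=np.uint8)
    print(dark_object_subtraction(band))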
Noise in an image may be due to irregularities or errors that occur in the sensor response and/or
data recording and transmission. Common forms of noise include systematic striping or
banding and dropped lines. Both of these effects should be corrected before further
enhancement or classification is performed.
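As an illustration of how a dropped line might be repaired cosmetically, the sketch below replaces a zero-valued scan line with the average of its neighbouring lines; the tiny sample image is an assumption for demonstration only, and real systems use more careful schemes.

    import numpy as np

    def repair_dropped_line(image: np.ndarray, bad_row: int) -> np.ndarray:
        """Replace a dropped scan line with the average of the lines
        immediately above and below it."""
        fixed = image.astype(np.float32)
        fixed[bad_row] = (fixed[bad_row - 1] + fixed[bad_row + 1]) / 2.0
        return fixed.astype(image.dtype)

    # Illustrative image in which row 1 was dropped (recorded as all zeros).
    img = np.array([[50, 52, 54],
                    [ 0,  0,  0],
                    [60, 62, 64]], dtype=np.uint8)
    print(repair_dropped_line(img, bad_row=1))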
Image Enhancement
The goal of image enhancements is to improve the visual interpretability of an image by
increasing the apparent distinction between the features in the scene. Although radiometric
corrections for illumination, atmospheric influences, and sensor characteristics may be done
prior to distribution of data to the user, the image may still not be optimized for visual
interpretation. Remote sensing devices, particularly those operated from satellite platforms, must
be designed to cope with levels of target/background energy which are typical of all conditions
likely to be encountered in routine use. With large variations in spectral response from a diverse
range of targets (e.g. forest, deserts, snowfields, water, etc.) no generic radiometric correction
could optimally account for and display the optimum brightness range and contrast for all
targets. The most commonly applied digital enhancement techniques are contrast manipulation, spatial filtering, and convolution.
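Contrast manipulation is the simplest of these. The sketch below shows a plain linear contrast stretch, assuming an 8-bit band held as a NumPy array; the low-contrast sample values are invented for illustration.

    import numpy as np

    def linear_contrast_stretch(band: np.ndarray) -> np.ndarray:
        """Stretch the occupied range of digital numbers to the full 0-255
        range, increasing apparent contrast without reordering the data."""
        lo, hi = float(band.min()), float(band.max())
        stretched = (band.astype(np.float32) - lo) / (hi - lo) * 255.0
        return stretched.astype(np.uint8)

    # Illustrative low-contrast band occupying only digital numbers 84-153.
    band = np.array([[ 84, 100, 120],
                     [110, 130, 153],
                     [ 95, 140, 125]], dtype=np.uint8)
    print(linear_contrast_stretch(band))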
Image Transformations
Image transformations typically involve the manipulation of multiple bands of data, whether
from a single multispectral image or from two or more images of the same area acquired at
different times (i.e. multi-temporal image data). Either way, image transformations generate
"new" images from two or more sources which highlight particular features or properties of
interest, better than the original input images. Basic image transformations apply simple
arithmetic operations to the image data. Image subtraction is often used to identify changes that
have occurred between images collected on different dates.
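A minimal sketch of such a subtraction, assuming two co-registered 8-bit bands from different dates (the sample values are invented), might look like this:

    import numpy as np

    def change_image(date1: np.ndarray, date2: np.ndarray) -> np.ndarray:
        """Difference two co-registered bands from different dates; values
        far from zero indicate change between the acquisitions."""
        return date2.astype(np.int32) - date1.astype(np.int32)

    # Illustrative bands: only the bottom-right pixel changes markedly.
    before = np.array([[100, 102], [101,  60]], dtype=np.uint8)
    after  = np.array([[101, 103], [ 99, 180]], dtype=np.uint8)
    print(change_image(before, after))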
Image division or spectral ratioing is one of the most common transforms applied to image data. Image ratioing serves to highlight subtle variations in the spectral responses of various surface covers. By ratioing the data from two different spectral bands, the resultant image enhances variations in the slopes of the spectral reflectance curves between the two different spectral ranges that may otherwise be masked by the pixel brightness variations in each of the bands. For example, healthy vegetation reflects strongly in the near-infrared portion of the
spectrum while absorbing strongly in the visible red. Other surface types, such as soil and water,
show near equal reflectance in both the near-infrared and red portions. Thus, a ratio image of
Landsat TM Band 4 (near-infrared, 0.8 to 1.1 µm) divided by Band 3 (red, 0.6 to 0.7 µm)
would result in ratios much greater than 1.0 for vegetation, and ratios around 1.0 for soil and
water. Thus the discrimination of vegetation from other surface cover types is significantly
enhanced. One widely used image transform is the Normalized Difference Vegetation Index
(NDVI) which has been used to monitor vegetation conditions on continental and global scales
using the Advanced Very High Resolution Radiometer (AVHRR) sensor onboard the NOAA
series of satellites.
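A sketch of the simple near-infrared/red ratio and of NDVI is given below; the three sample pixels (vegetation, soil, water) are invented values chosen only to reproduce the behaviour described above.

    import numpy as np

    def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
        """Normalized Difference Vegetation Index: (NIR - red) / (NIR + red)."""
        nir = nir.astype(np.float32)
        red = red.astype(np.float32)
        return (nir - red) / (nir + red + 1e-6)   # tiny epsilon avoids divide-by-zero

    # Illustrative pixels: vegetation (high NIR, low red), bare soil, and water.
    nir = np.array([200.0, 90.0, 10.0])
    red = np.array([ 30.0, 80.0, 12.0])
    print("simple ratio NIR/red:", nir / red)
    print("NDVI:", ndvi(nir, red))

As expected, the vegetation pixel produces a ratio well above 1 and an NDVI near 0.7, while soil sits near 1 (NDVI near 0) and water falls below 1 (NDVI slightly negative).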
A human analyst attempting to classify features in an image uses the elements of visual
interpretation to identify homogeneous groups of pixels which represent various features or land
cover classes of interest. Digital image classification uses the spectral information represented by
the digital numbers in one or more spectral bands, and attempts to classify each individual pixel
based on this spectral information. This type of classification is termed spectral pattern
recognition. In either case, the objective is to assign all pixels in the image to particular classes
or themes (e.g. water, coniferous forest, deciduous forest, corn, wheat, etc.). The resulting
classified image is composed of a mosaic of pixels, each of which belongs to a particular theme,
and is essentially a thematic "map" of the original image.
Spatial pattern recognition involves the categorization of image pixels on the basis of their
spatial relationship with pixels surrounding them. Spatial classifiers might consider such aspects
as image texture, pixel proximity, feature size, shape, directionality, repetition, and context.
These types of classifiers attempt to replicate the kind of spatial synthesis done by the human
analyst during the visual interpretation process. Accordingly, they tend to be much more
complex and computationally intensive than spectral pattern recognition procedures. Common
classification procedures can be broken down into two broad subdivisions based on the method
used: supervised classification and unsupervised classification.
Supervised Classification
In a supervised classification, the analyst identifies in the imagery homogeneous representative
samples of the different surface cover types (information classes) of interest. These samples are
referred to as training areas. The selection of appropriate training areas is based on the analyst's
familiarity with the geographical area and their knowledge of the actual surface cover types
present in the image. Thus, the analyst is "supervising" the categorization of a set of specific
classes.
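One simple way to realise a supervised classification is a minimum-distance-to-means classifier: the mean spectrum of each training area is computed, and each unknown pixel is assigned to the class whose mean is closest. The sketch below, with invented two-band training samples, is only an illustration of the idea, not of any particular software package.

    import numpy as np

    def minimum_distance_classify(pixels: np.ndarray, training: dict) -> list:
        """Assign each pixel to the class whose training-area mean spectrum
        is closest in Euclidean distance (a simple supervised classifier)."""
        class_means = {name: samples.mean(axis=0) for name, samples in training.items()}
        labels = []
        for px in pixels:
            distances = {name: np.linalg.norm(px - mean) for name, mean in class_means.items()}
            labels.append(min(distances, key=distances.get))
        return labels

    # Illustrative two-band (e.g. red, near-infrared) training samples.
    training = {
        "water":      np.array([[15,  10], [18,  12], [14,   9]], dtype=float),
        "vegetation": np.array([[30, 180], [28, 170], [35, 190]], dtype=float),
        "bare soil":  np.array([[90, 110], [95, 105], [85, 115]], dtype=float),
    }
    unknown = np.array([[32, 175], [16, 11], [88, 108]], dtype=float)
    print(minimum_distance_classify(unknown, training))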
Unsupervised classification
An unsupervised classifier does not use training data as the basis for classification. Unsupervised
classification in essence reverses the supervised classification process. Spectral classes are
grouped first, based solely on the numerical information in the data, and are then matched by the
analyst to information classes (if possible). Programs, called clustering algorithms, are used to
determine the natural (statistical) groupings or structures in the data. Usually, the analyst
specifies how many groups or clusters are to be looked for in the data. In addition to specifying
the desired number of classes, the analyst may also specify parameters related to the separation
distance among the clusters and the variation within each cluster.
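A minimal sketch of such a clustering algorithm is a small k-means loop: the analyst specifies only the number of clusters, and the procedure groups pixels purely on their spectral values. The two-band pixel values below are invented for illustration; real implementations add convergence checks and the separation/variation parameters mentioned above.

    import numpy as np

    def kmeans(pixels: np.ndarray, k: int, iterations: int = 20, seed: int = 0) -> np.ndarray:
        """Tiny k-means clusterer: group pixels into k spectral clusters using
        only the numerical structure of the data (no training areas)."""
        rng = np.random.default_rng(seed)
        centres = pixels[rng.choice(len(pixels), size=k, replace=False)].astype(float)
        for _ in range(iterations):
            # assign each pixel to its nearest cluster centre
            dists = np.linalg.norm(pixels[:, None, :] - centres[None, :, :], axis=2)
            labels = dists.argmin(axis=1)
            # move each centre to the mean of its assigned pixels
            for c in range(k):
                if np.any(labels == c):
                    centres[c] = pixels[labels == c].mean(axis=0)
        return labels

    # Illustrative two-band pixels forming three loose spectral groups.
    pixels = np.array([[15, 10], [18, 12], [30, 180], [28, 172],
                       [90, 110], [95, 105], [14, 9], [33, 185]], dtype=float)
    print(kmeans(pixels, k=3))

The output is only a set of numeric cluster labels; as described above, the analyst must still match each spectral cluster to an information class.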