
Course Title

EQ-538 Geoinformatics

Remote Sensing Studies


 Remote sensing is the science of acquiring information about the Earth's
surface without actually being in contact with it.
 This is done by sensing and recording reflected or emitted energy and then
processing, analyzing, and applying that information.
 The remote sensing process involves an interaction between incident
radiation and the targets of interest.
 This is exemplified by the use of imaging systems in which the following
seven elements are involved.

With the advent of, and tremendous technological progress in, remote sensing
techniques, it has become possible to study many processes taking place on the
Earth's surface. Earth-observing satellites produce a synoptic view of the Earth and
can generate a wealth of information. Remote sensing offers this perspective and
allows a researcher to examine reference and ancillary data simultaneously and
synergistically. The nature and pattern of deformation that the Earth has undergone
are clearly displayed in satellite images, enabling us to study them in detail.

Deformation mapping is the identification and characterization of structural expression.


Structures include faults, folds (synclines and anticlines), and lineaments. Understanding
structures is the key to interpreting crustal movements that have shaped the present
terrain. Structures can indicate potential locations of oil and gas reserves by
characterizing both the underlying subsurface geometry of rock units and the amount of
crustal deformation and stress experienced in a certain locale. Structures are also
examined for clues to crustal movement and potential hazards, such as earthquakes,
landslides, and volcanic activity. Identification of fault lines can facilitate land use
planning by limiting construction over potentially dangerous zones of seismic activity.

Certain remote sensing devices offer unique information regarding structures, such as
the relief expression provided by radar sensors. A benefit of side-looking radar is that the
illumination conditions can be controlled, and the most appropriate geometry used for the
type of terrain being examined.

1. Energy Source or Illumination (A) – the first requirement for remote sensing is to
have an energy source which illuminates or provides electromagnetic energy to the
target of interest.
2. Radiation and the Atmosphere (B) – as the energy travels from its source to the
target, it will come in contact with and interact with the atmosphere it passes through.
This interaction may take place a second time as the energy travels from the target to
the sensor.

3. Interaction with the Target (C) - once the energy makes its way to the target through
the atmosphere, it interacts with the target depending on the properties of both the target
and the radiation.

4. Recording of Energy by the Sensor (D) - after the energy has been scattered by, or
emitted from the target, we require a sensor (remote - not in contact with the target) to
collect and record the electromagnetic radiation.

5. Transmission, Reception, and Processing (E) - the energy recorded by the sensor
has to be transmitted, often in electronic form, to a receiving and processing station
where the data are processed into an image (hardcopy and/or digital).

6. Interpretation and Analysis (F) - the processed image is interpreted, visually and/or
digitally or electronically, to extract information about the target which was illuminated.

7. Application (G) - the final element of the remote sensing process is achieved when
we apply the information we have been able to extract from the imagery about the target
in order to better understand it, reveal some new information, or assist in solving a
particular problem.

Electromagnetic Radiation : The first requirement for remote sensing is to
have an energy source to illuminate the target (unless the sensed energy is
being emitted by the target).

All electromagnetic radiation has fundamental properties and behaves in
predictable ways according to the basics of wave theory. Two characteristics of
electromagnetic radiation are particularly important for understanding remote
sensing. These are the wavelength and frequency.

Electromagnetic radiation consists of an electrical field (E) which varies in
magnitude in a direction perpendicular to the direction in which the radiation is
traveling, and a magnetic field (M) oriented at right angles to the electrical field.
Both these fields travel at the speed of light (c).

Wavelength is measured in metres (m) or some factor of metres such as
nanometres (nm, 10⁻⁹ m), micrometres (µm, 10⁻⁶ m) or centimetres (cm, 10⁻² m).
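Wavelength and frequency are related through the speed of light, c = λν. The short Python sketch below (an illustration only; the wavelengths are nominal values chosen for the example) converts a few representative wavelengths to frequency:

```python
# Wavelength-frequency relationship: c = wavelength * frequency
C = 299_792_458.0  # speed of light in m/s

# Nominal wavelengths in metres: visible blue, thermal IR, and a microwave band
wavelengths_m = {
    "blue light (0.45 um)": 0.45e-6,
    "thermal IR (10 um)": 10e-6,
    "microwave (5.6 cm)": 5.6e-2,
}

for name, lam in wavelengths_m.items():
    frequency_hz = C / lam
    print(f"{name}: {frequency_hz:.3e} Hz")
```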

Electromagnetic Spectrum:

For most remote sensing purposes, the ultraviolet or UV portion of the spectrum has the
shortest useful wavelengths. The light which our eyes - our "remote sensors" - can
detect is part of the visible spectrum. The visible wavelengths cover a range
from approximately 0.4 to 0.7 µm. The longest visible wavelength is red and the
shortest is violet. Common wavelengths of what we perceive as particular colours
in the visible portion of the spectrum are listed below. It is important to note
that this is the only portion of the spectrum we can associate with the concept of
colours.

Violet: 0.400 - 0.446 µm
Blue: 0.446 - 0.500 µm
Green: 0.500 - 0.578 µm
Yellow: 0.578 - 0.592 µm
Orange: 0.592 - 0.620 µm
Red: 0.620 - 0.700 µm

The next portion of the spectrum of interest is the infrared (IR) region which can
be divided into two categories based on their radiation properties - the reflected
IR, and the emitted or thermal IR. Radiation in the reflected IR region is used for
remote sensing purposes in ways very similar to radiation in the visible portion.

The portion of the spectrum of more recent interest to remote sensing is the
microwave region from about 1 mm to 1 m. This covers the longest
wavelengths used for remote sensing. The shorter wavelengths have properties
similar to the thermal infrared region while the longer wavelengths approach the
wavelengths used for radio broadcasts.

Interactions with the Atmosphere : Before radiation (used for remote sensing)
reaches the Earth's surface it has to travel through some distance of the Earth's
atmosphere. Particles and gases in the atmosphere can affect the incoming light
and radiation. These effects are caused by the mechanisms of scattering and
absorption.

Rayleigh scattering occurs when particles are very small compared to the
wavelength of the radiation. These could be particles such as small specks of
dust or nitrogen and oxygen molecules. Rayleigh scattering causes shorter
wavelengths of energy to be scattered much more than longer wavelengths.
Rayleigh scattering is the dominant scattering mechanism in the upper
atmosphere.
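Because Rayleigh scattering varies roughly as 1/λ⁴, blue light is scattered several times more strongly than red light, which is why the sky appears blue. A minimal sketch of the relative effect (the blue and red wavelengths below are nominal values):

```python
# Relative Rayleigh scattering strength ~ 1 / wavelength**4 (proportionality only)
blue_um = 0.45   # nominal blue wavelength, micrometres
red_um = 0.65    # nominal red wavelength, micrometres

ratio = (red_um / blue_um) ** 4
print(f"Blue light is Rayleigh-scattered about {ratio:.1f} times more than red light")
```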

Mie scattering occurs when the particles are just about the same size as the
wavelength of the radiation. Dust, pollen, smoke and water vapour are common
causes of Mie scattering which tends to affect longer wavelengths than those
affected by Rayleigh scattering. Mie scattering occurs mostly in the lower
portions of the atmosphere where larger particles are more abundant, and
dominates when cloud conditions are overcast.

Nonselective scattering: The final scattering mechanism of importance is called
nonselective scattering. This occurs when the particles are much larger than
the wavelength of the radiation. Water droplets and large dust particles can
cause this type of scattering. Nonselective scattering gets its name from the fact
that all wavelengths are scattered about equally.

Absorption is the other main mechanism at work when electromagnetic
radiation interacts with the atmosphere. In contrast to scattering, this
phenomenon causes molecules in the atmosphere to absorb energy at various
wavelengths. Ozone, carbon dioxide, and water vapour are the three main
atmospheric constituents which absorb radiation. Carbon dioxide is referred to as
a greenhouse gas because it tends to absorb radiation strongly in the far
infrared portion of the spectrum - the region associated with thermal heating -
which serves to trap this heat inside the atmosphere. Water vapour in the
atmosphere absorbs much of the incoming longwave infrared and shortwave
microwave radiation (between 22 µm and 1 m).

Atmospheric Window

Those areas of the spectrum which are not severely influenced by atmospheric
absorption, and are therefore useful to remote sensors, are called atmospheric windows.

The visible portion of the spectrum, to which our eyes are most sensitive,
corresponds to both an atmospheric window and the peak energy level of the
sun. Energy emitted by the Earth corresponds to a window around 10 µm in the
thermal IR portion of the spectrum. The large window at wavelengths beyond 1
mm is associated with the microwave region.

Radiation - Target Interactions

There are three (3) forms of interaction that can take place when energy strikes,
or is incident (I) upon the surface. These are:

Absorption (A), Transmission (T), and Reflection (R).

The total incident energy will interact with the surface in one or more of these
three ways. The proportions of each will depend on the wavelength of the energy
and the material and condition of the feature.

In remote sensing, we are most interested in measuring the radiation reflected
from targets. We refer to two types of reflection, which represent the two extreme
ends of the way in which energy is reflected from a target: specular reflection
and diffuse reflection.

When a surface is smooth we get specular or mirror-like reflection where all (or
almost all) of the energy is directed away from the surface in a single direction.

Diffuse reflection occurs when the surface is rough and the energy is reflected
almost uniformly in all directions.

If the wavelengths are much smaller than the surface variations or the particle
sizes that make up the surface, diffuse reflection will dominate. Most earth
surface features lie somewhere between perfectly specular or perfectly diffuse
reflectors.

Leaves strongly absorb radiation in the red and blue wavelengths but reflect
green wavelengths, producing their green appearance. Water absorbs longer-
wavelength radiation more strongly than shorter visible wavelengths; thus water
typically looks blue or blue-green due to stronger reflectance at these shorter wavelengths.

Passive vs. Active Sensing

Passive Sensors : Remote sensing systems which measure energy that is
naturally available are called passive sensors. For all reflected energy, this can
only take place during the time when the sun is illuminating the Earth. Energy
that is naturally emitted (such as thermal infrared) can be detected day or night,
as long as the amount of energy is large enough to be recorded.

Active sensors, on the other hand, provide their own energy source for
illumination. Some examples of active sensors are a laser and a synthetic
aperture radar (SAR).

Satellites Orbits and Swaths

Geostationary Orbits : The orbit of a satellite is elliptical in shape, but remote
sensing satellites are usually put in orbits that are very close approximations to
a circle. Satellites at very high altitudes, which view the same portion of the
Earth's surface at all times, have geostationary orbits. These geostationary
satellites, at altitudes of approximately 36,000 kilometres, revolve at speeds
which match the rotation of the Earth, so they appear stationary relative to the
Earth's surface. This allows the satellites to observe and collect information
continuously over specific areas. Weather and communications satellites
commonly have these types of orbits.

Geostationary Satellite
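The ~36,000 km figure can be checked with Kepler's third law: a satellite whose orbital period equals one sidereal day sits at a radius of about 42,164 km, i.e. roughly 35,786 km above the equator. A back-of-the-envelope sketch using standard constants, for illustration only:

```python
import math

# Kepler's third law: T^2 = 4 * pi^2 * a^3 / GM  =>  a = (GM * T^2 / (4 * pi^2))**(1/3)
GM_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3 s^-2
SIDEREAL_DAY_S = 86164.1    # one rotation of the Earth, seconds
EARTH_RADIUS_KM = 6378.0    # equatorial radius, km

semi_major_axis_m = (GM_EARTH * SIDEREAL_DAY_S ** 2 / (4 * math.pi ** 2)) ** (1 / 3)
altitude_km = semi_major_axis_m / 1000.0 - EARTH_RADIUS_KM
print(f"Geostationary altitude is roughly {altitude_km:.0f} km")  # about 35,786 km
```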

Nearpolar Orbits : Many remote sensing platforms are designed to follow an
orbit (basically north-south) which, in conjunction with the Earth's rotation (west-
east), allows them to cover most of the Earth's surface over a certain period of
time. These are nearpolar orbits, so named for the inclination of the orbit relative
to a line running between the North and South poles.

Sun-synchronous orbits: Many of these satellite orbits are also sun-
synchronous such that they cover each area of the world at a constant local time
of day called local sun time. At any given latitude, the position of the sun in the
sky as the satellite passes overhead will be the same within the same season.

Swath : As a satellite revolves around the Earth, the sensor "sees" a certain
portion of the Earth's surface. The area imaged on the surface is referred to as
the swath. Imaging swaths for spaceborne sensors generally vary between tens
and hundreds of kilometres wide.

Sensor Technology

Most remote sensing instruments (sensors) are designed to measure photons.
The fundamental principle underlying sensor operation centers on the concept of
the photoelectric effect. The magnitude of the electric current produced (number
of photoelectrons per unit time) is directly proportional to the light intensity. Thus,
changes in the electric current can be used to measure changes in the photons
(numbers; intensity) that strike the detector during a given time interval.

CCD Detector

An individual CCD is an extremely small silicon (micro) detector, which is light-
sensitive. Many individual detectors are placed on a chip side by side, either in a
single row as a linear array or in stacked rows of linear arrays in X-Y (two-
dimensional) space. When photons strike a CCD detector, electronic charges
develop whose magnitudes are proportional to the intensity of the impinging
radiation during a short time interval (exposure time). The number of elements
per unit length, along with the optics, determines the spatial resolution of the
instrument. Using integrated circuits, each linear array is sampled very rapidly in
sequence, producing an electrical signal that varies with the radiation striking the
array. Each individual CCD corresponds to a "pixel". The size of the CCD is
one factor in setting spatial resolution (smaller sizes represent smaller areas on
the target surface); another factor is the height of the observing platform (satellite
or aircraft); a third factor can be the use of a telescopic lens.

 In the brief flickering instant that the shutter is open, each photosite
records the intensity or brightness of the light that falls on it by
accumulating a charge; the more light, the higher the charge.
 The brightness recorded by each photosite is then stored as a set of
numbers that can then be used to set the color and brightness of dots on
the screen or ink on the printed page to reconstruct the image.

 A typical RGB CCD layout.


 The cells are situated in columns of alternating colors such that red,
green, red, green is in one column and blue, green, blue, green is in the one next
to it before the column patterns are repeated.
 This may be confusing at first, as there are twice as many green cells as
red or blue cells in the system; however, this excess of green is advantageous,
as our own eyes are much more sensitive to the color green than they are
to blue and red.

 The colors can be manipulated as much as is desired to make the colors
appear correct.
 Once the CCD array is read by the hardware in the camera, software in
the camera runs it through a set of algorithms in order to merge the
intensity data from the CCD's pixels into color information that is then
saved into a typical digital format, such as JPG or TIFF.
 Typically, one pixel in a JPG or TIFF file is composed of four cells (one
red, one blue, and two green) from a CCD array.

Filters over photosites

Here the full color of a green pixel is about to be interpolated from the eight pixels
that surround it.
 By combining these two interpolated colors with the color measured by the
site directly, the full color of the pixel can be calculated.
 "I'm bright red and the green and blue pixels around me are also bright so
that must mean I'm really a white pixel."
 It's like a painter creating a color by mixing varying amounts of other
colors on his palette.
 This step is computationally intensive, since comparisons with as many as eight
neighboring pixels are required to perform this process properly.
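A very simplified sketch of this neighbour-averaging idea is shown below; it is not the algorithm used by any particular camera, just an illustration of interpolating the missing colours of one photosite from the surrounding sites (the pattern and values are made up for the example):

```python
import numpy as np

def average_neighbours(raw, mask, row, col):
    """Average the raw values of the 8 surrounding photosites where `mask` is True."""
    values = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            r, c = row + dr, col + dc
            if 0 <= r < raw.shape[0] and 0 <= c < raw.shape[1] and mask[r, c]:
                values.append(raw[r, c])
    return float(np.mean(values)) if values else 0.0

# Toy 4x4 mosaic: each photosite records only the colour of the filter above it
pattern = np.array([["R", "G", "R", "G"],
                    ["G", "B", "G", "B"],
                    ["R", "G", "R", "G"],
                    ["G", "B", "G", "B"]])
raw = np.random.randint(0, 256, size=pattern.shape).astype(float)  # recorded intensities

row, col = 1, 2  # a green photosite: red and blue must be interpolated from neighbours
red = average_neighbours(raw, pattern == "R", row, col)
blue = average_neighbours(raw, pattern == "B", row, col)
print("interpolated full colour (R, G, B):", (red, raw[row, col], blue))
```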

 Three separate image sensors can be used, each with its own filter. This
way each image sensor captures the image in a single color.
 Three separate exposures can be made, changing the filter for each one.
In this way, the three colors are "painted" onto the sensor, one at a time.
 Filters can be placed over individual photosites so each can capture only
one of the three colors. In this way, one-third of the photo is captured in
red light, one-third in blue, and one-third in green.

Color Composite Images


• In displaying a colour composite image, three primary colours (red, green
and blue) are used. When these three colours are combined in various
proportions, they produce different colours in the visible spectrum.
Associating each spectral band (not necessarily a visible band) to a
separate primary colour results in a colour composite image.

Many colours can be formed by combining the three primary colours (Red, Green, Blue) in various
proportions.

True or Real Color Composite

• If a multispectral image consists of the three visual primary colour bands
(red, green, blue), the three bands may be combined to produce a "true
colour" image.
• For example, bands 3 (red), 2 (green) and 1 (blue) of a multispectral
image can be assigned respectively to the R, G, and B colours for display.
In this way, the colours of the resulting colour composite image closely
resemble what would be observed by the human eye.

• Real Color composite
• RED band on RED
• GREEN band on GREEN
• BLUE band on BLUE

False Color Composite

• The display colour assignment for any band of a multispectral image can
be done in an entirely arbitrary manner.
• In this case, the colour of a target in the displayed image does not have
any resemblance to its actual colour.
• The resulting product is known as a false color composite image.
• There are many possible schemes of producing false colour composite
images.
• However, some schemes may be more suitable than others for detecting
certain objects in the image.

• A very common false color composite scheme for displaying a
multispectral image is shown below:
• NIR band = R
Red band = G
Green band = B
• This false color composite scheme allows vegetation to be detected
readily in the image.
• In this type of false colour composite images, vegetation appears in
different shades of red depending on the types and conditions of the
vegetation, since it has a high reflectance in the NIR band.
• Clear water appears dark-bluish (higher green band reflectance), while
turbid water appears cyan (higher red reflectance due to sediments)
compared to clear water.
• Bare soils, roads and buildings may appear in various shades of blue,
yellow or grey, depending on their composition.

'False Colour' composite


NIR band on RED
RED band on GREEN
GREEN band on BLUE
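In software, a composite is simply a stack of three bands in the red, green and blue display channels. The sketch below builds both a true-colour and the standard false-colour composite; the band arrays are random placeholders standing in for, e.g., Landsat bands 1-4:

```python
import numpy as np

rows, cols = 100, 100
# Placeholder band data (reflectance or digital numbers would be used in practice)
band_blue  = np.random.rand(rows, cols)
band_green = np.random.rand(rows, cols)
band_red   = np.random.rand(rows, cols)
band_nir   = np.random.rand(rows, cols)

# True (real) colour composite: red band -> R, green band -> G, blue band -> B
true_colour = np.dstack([band_red, band_green, band_blue])

# Standard false colour composite: NIR -> R, red -> G, green -> B
false_colour = np.dstack([band_nir, band_red, band_green])

# Each result is a (rows, cols, 3) array ready for display,
# e.g. matplotlib.pyplot.imshow(false_colour)
```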

Types of Imaging System


Depending on how the sensor acquires and records the incoming signal, imaging
systems can be divided into three general categories : framing cameras,
scanning systems, and pushbroom imagers.

A framing camera takes a snapshot of an area of the surface, which is then
projected by the camera optics on a film or a two-dimensional array of detectors
located in the camera focal plane. Framing cameras have the major advantage
that excellent geometric fidelity can be achieved because the entire image is
acquired at once.

Scanning systems use a scanning mirror that projects the image of one surface
resolution element on a single detector. To make an image, across-track
scanning is used to cover the image swath across the track. The platform
motion carries the imaged swath along the track. The major disadvantage of
such a system is the presence of moving parts and the low detection or dwell
time for each pixel. In addition, images acquired with scanning systems typically
have poorer geometric fidelity than those acquired with framing cameras.
Examples of scanning systems are the Landsat instruments such as the
Multispectral Scanner (MSS) and Thematic Mapper (TM) and the Enhanced
Thematic Mapper Plus (ETM+).

Pushbroom imagers eliminate the scanning mechanism and use a linear array of
detectors to cover all the pixels in the across-track dimension at the same time.
This allows a much longer detector dwell time on each surface pixel, thus
allowing much higher sensitivity and a narrower bandwidth of observation.
Examples of such systems are the SPOT and the ASTER cameras. The fixed
geometry allowed by the detector arrays results in high geometric accuracies in
the line direction, which will simplify the image reconstruction and processing
tasks.
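The dwell-time difference between the two designs can be made concrete with rough numbers; the pixel count and line time below are assumptions chosen only to illustrate the order of magnitude:

```python
# Illustrative comparison of detector dwell time per ground pixel
pixels_per_line = 6000      # assumed number of pixels across the swath
line_time_s = 4.5e-3        # assumed time available to image one line, seconds

# Across-track scanner: one detector sweeps the whole line,
# so each pixel receives only a fraction of the line time.
scanner_dwell_s = line_time_s / pixels_per_line

# Pushbroom imager: the linear array views all pixels in the line at once,
# so each detector integrates for the full line time.
pushbroom_dwell_s = line_time_s

print(f"Scanner dwell time per pixel:   {scanner_dwell_s:.2e} s")
print(f"Pushbroom dwell time per pixel: {pushbroom_dwell_s:.2e} s")
print(f"Pushbroom dwell advantage: about {pushbroom_dwell_s / scanner_dwell_s:.0f}x")
```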

Comparison of Different Imaging Systems

Film framing camera
  Advantages: Large image format; high information density; cartographic accuracy
  Disadvantages: Transmission of film; potential image smearing; wide field-of-view optics

Electronic framing camera
  Advantages: Broad spectral range; data in digital format; simultaneous sampling of image, good geometric fidelity
  Disadvantages: Difficulty in getting large arrays or sensitive surfaces; wide field-of-view optics

Scanning system
  Advantages: Simple detector; narrow field-of-view optics; wide sweep capability; easy to use with multiple wavelengths
  Disadvantages: Low detector dwell time; moving parts; difficult to achieve good image geometric fidelity

Pushbroom imagers
  Advantages: Long dwell time for each detector; across-track geometric fidelity
  Disadvantages: Wide field-of-view optics

Resolution
Spatial Resolution
Spectral Resolution
Radiometric Resolution
Temporal Resolution

Spatial Resolution : Spatial resolution is the capability of distinguishing closely
spaced objects on an image or a photograph. Image resolution is determined by
the size and number of picture elements, or pixels, used to form an image. The
smaller the pixel size, the greater the resolution. In photography, resolution is
limited primarily by the film grain size, but lenses and other technical
considerations also play important roles.

Spatial resolution of passive sensors depends primarily on their Instantaneous
Field of View (IFOV). The IFOV is the angular cone of visibility of the sensor (A)
and determines the area on the Earth's surface which is "seen" from a given
altitude at one particular moment in time (B). The size of the area viewed is
determined by multiplying the IFOV by the distance from the ground to the
sensor (C). This area on the ground is called the resolution cell and determines
a sensor's maximum spatial resolution.
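The relation "resolution cell = IFOV x distance to the ground" can be tried with nominal numbers; the IFOV and altitude below are assumptions for illustration only (roughly the class of a 30 m sensor):

```python
ifov_mrad = 0.0425      # assumed instantaneous field of view, milliradians
altitude_km = 705.0     # assumed platform altitude, kilometres

# Ground resolution cell size = IFOV (in radians) * distance from ground to sensor
cell_size_m = (ifov_mrad * 1e-3) * (altitude_km * 1e3)
print(f"Resolution cell is roughly {cell_size_m:.0f} m on a side")  # ~30 m here
```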

Spectral Resolution : Spectral resolution describes the ability of a sensor to
define fine wavelength intervals. The finer the spectral resolution, the narrower
the wavelength range for a particular channel or band. Black and white film
records wavelengths extending over much, or all, of the visible portion of the
electromagnetic spectrum. Colour film has higher spectral resolution, as it is
individually sensitive to the reflected energy at the blue, green, and red
wavelengths of the spectrum. Many remote sensing systems record energy over
several separate wavelength ranges at various spectral resolutions.

Radiometric Resolution : Radiometric characteristics describe the actual
information content in an image. Every time an image is acquired on film or by a
sensor, its sensitivity to the magnitude of the electromagnetic energy determines
the radiometric resolution. The radiometric resolution of an imaging system
describes its ability to discriminate very slight differences in energy. The finer the
radiometric resolution of a sensor, the more sensitive it is to detecting small
differences in reflected or emitted energy. Imagery data are represented by
positive digital numbers which vary from 0 to (one less than) a selected power of
2. This range corresponds to the number of bits used for coding numbers in
binary format. Each bit records an exponent of a power of 2 (e.g. 1 bit = 2¹ = 2).
The maximum number of brightness levels available depends on the number of
bits used in representing the energy recorded. Thus, if a sensor used 8 bits to
record the data, there would be 2⁸ = 256 digital values available, ranging from 0 to 255.
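The count of available brightness levels follows directly from the number of bits (levels = 2^bits), as the short sketch below shows:

```python
# Number of brightness levels for a given radiometric resolution: levels = 2 ** bits
for bits in (1, 6, 8, 11, 16):
    levels = 2 ** bits
    print(f"{bits:2d}-bit data: {levels:6d} levels (digital numbers 0 to {levels - 1})")
```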

Temporal Resolution : Temporal resolution is also important to consider in a
remote sensing system; it refers to the length of time it takes for a satellite to
complete one entire orbit cycle. The revisit period of a satellite sensor is usually
several days. Therefore the absolute temporal resolution of a remote sensing
system - the time to image the exact same area at the same viewing angle a
second time - is equal to this period.

Thermal Imaging
All matter on the Earth radiates energy at thermal infrared wavelengths (3 µm to
15 µm) both day and night. Thermal sensors use photo detectors sensitive to
the direct contact of photons on their surface to detect emitted thermal radiation.
The detectors are cooled to temperatures close to absolute zero (0 K) in order to
limit their own thermal emissions.

Thermal sensors essentially measure the surface temperature and thermal
properties of targets. Thermal IR images generally record broad spectral bands,
typically 8.0 µm to 14.0 µm for images from aircraft and 10.5 µm to 12.5 µm for
images from satellites. To interpret thermal IR images, one must understand the
basic physical processes that control the interactions between thermal energy
and matter, as well as the thermal properties of matter that determine the rate
and intensity of the interactions.
Heat, Temperature & Radiant Flux

Kinetic heat is the energy of particles of matter in random motion. The random
motion causes particles to collide, resulting in changes of energy state and the
emission of electromagnetic radiation from the surface of materials. The internal,
or kinetic, heat energy of matter is thus converted into radiant energy. The
amount of heat is measured in calories.

Temperature is a measure of the concentration of heat (commonly expressed in
degrees centigrade). On the Kelvin, or absolute, temperature scale, 0 K is
absolute zero, the point at which all molecular motion ceases. The Kelvin and
Celsius scales correlate as follows: 0 °C = 273 K and 100 °C = 373 K.

Heat energy is transferred from one place to another by three means:

Conduction transfers heat through a material by molecular contact, for example
the transfer of heat from a pan to vegetables during cooking.

Convection transfers heat through the physical movement of heated matter. The
circulation of heated water and air are examples of convection.

Radiation transfers heat in the form of electromagnetic waves. Heat from the
sun reaches the earth by radiation. Radiation can transfer heat through vacuum.

Materials at the surface of the earth receive thermal energy primarily in the form
of radiation from the sun. To a much lesser extent, heat from the interior of the
earth also reaches the surface, primarily by conduction.

The atmosphere does not transmit all wavelengths of thermal IR radiation
uniformly. Carbon dioxide, ozone, and water vapor absorb energy in certain
wavelength regions, called absorption bands. The atmosphere transmits
wavelengths of 3 to 5 µm and 8 to 14 µm; these bands are called atmospheric
windows.

Radiant Energy

For an object at a constant kinetic temperature, the radiant energy, or flux,
varies as a function of wavelength. As temperature increases, the total amount
of radiant energy increases and the radiant energy peak shifts to shorter
wavelengths. This shift, or displacement, to shorter wavelengths with increasing
temperature is described by Wien's displacement law, which states that

λmax = 2897 µm·K / Trad

where Trad is the radiant temperature in kelvin and 2897 µm·K is a physical
constant. The wavelength of the radiant peak of an object may be determined
by substituting the value of Trad.
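Substituting typical radiant temperatures into Wien's law shows why the thermal IR window around 10 µm matters for Earth observation, while the Sun's emission peaks in the visible. A minimal sketch:

```python
WIEN_CONSTANT_UM_K = 2897.0   # micrometre-kelvin

def peak_wavelength_um(radiant_temperature_k):
    """Wavelength of peak radiant emittance (micrometres), from Wien's displacement law."""
    return WIEN_CONSTANT_UM_K / radiant_temperature_k

print(peak_wavelength_um(300.0))    # Earth's surface near 300 K -> about 9.7 um
print(peak_wavelength_um(6000.0))   # Sun's photosphere near 6000 K -> about 0.48 um
```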

Thermal Properties of Materials


Radiant energy striking the surface of a material is partly reflected, partly
absorbed, and partly transmitted through the material. Therefore,

Reflectivity + Absorptivity + Transmissivity = 1

Reflectivity, absorptivity, and transmissivity are determined by the properties of
the matter and also vary with the wavelength of the incident radiant energy and
with the temperature of the surface. Reflectivity is expressed as albedo, which is
the ratio of reflected energy to incident energy. The absorbed energy causes an
increase in the kinetic temperature of the material.

Blackbody concept, Emissivity, and Radiant Temperature


The concept of a blackbody is fundamental to understanding heat radiation. A blackbody
is a theoretical material that absorbs all the radiant energy that strikes it.

Planck’s Radiation Law


Planck's radiation law relates the spectral characteristics and magnitude of
emission to the temperature of the emitting body. For a perfect emitter
(blackbody), all absorbed energy is emitted:

Wλ = C1 / { λ⁵ [ exp(C2 / λT) − 1 ] }

where Wλ is the spectral radiant emittance in W m⁻² µm⁻¹, C1 and C2 are the
first and second radiation constants, λ is wavelength, and T is the absolute
temperature of the emitting body.

This equation shows that for any given wavelength, the total energy emitted by a
blackbody increases with increasing temperature.
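A small sketch evaluating Planck's law, using the standard values of the radiation constants for wavelength expressed in micrometres (the test temperatures are illustrative):

```python
import math

C1 = 3.7418e8   # first radiation constant, W um^4 m^-2
C2 = 1.4388e4   # second radiation constant, um K

def spectral_radiant_emittance(wavelength_um, temperature_k):
    """Blackbody spectral radiant emittance W_lambda in W m^-2 um^-1 (Planck's law)."""
    lam = wavelength_um
    return C1 / (lam ** 5 * (math.exp(C2 / (lam * temperature_k)) - 1.0))

# A 300 K surface emits far more strongly near 10 um than near 4 um
print(spectral_radiant_emittance(10.0, 300.0))   # roughly 31 W m^-2 um^-1
print(spectral_radiant_emittance(4.0, 300.0))    # roughly 2 W m^-2 um^-1
```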

Stefan-Boltzmann Law
According to the Stefan-Boltzmann law, the radiant flux of a blackbody (Fb) at a
kinetic temperature of Tkin is

Fb = σ · Tkin⁴

where σ is the Stefan-Boltzmann constant (5.67 × 10⁻¹² W cm⁻² K⁻⁴).

This equation shows that the total energy emitted (i.e., the area under the curve)
from a blackbody, over all wavelengths, is directly proportional to the 4th power
of its absolute temperature.
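Using the constant quoted above, the total flux from a blackbody at an ordinary terrestrial temperature can be estimated directly:

```python
SIGMA = 5.67e-12   # Stefan-Boltzmann constant, W cm^-2 K^-4 (value quoted above)

def blackbody_radiant_flux(t_kin_kelvin):
    """Total radiant flux F_b = sigma * T_kin**4, in W cm^-2."""
    return SIGMA * t_kin_kelvin ** 4

print(blackbody_radiant_flux(300.0))   # about 0.046 W cm^-2 (roughly 460 W m^-2) at 300 K
```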

Kirchhoff's Radiation Law

For a perfect emitter,
α = ε = 1
The emissivity of a blackbody is therefore 1.

In other words, "good absorbers are good emitters and good reflectors are poor
emitters". Natural bodies are gray bodies, i.e., imperfect absorbers and emitters,
and their emissivities are always less than 1.

Emissivity (ε)

For real materials a property called emissivity (ε) has been defined as
ε = Fr / Fb
where Fr is the radiant flux from a real material and Fb is the radiant flux from a
blackbody at the same temperature. The emissivity of a blackbody is 1, but for
all real materials it is less than 1. Emissivity is wavelength dependent, which
means that the emissivity of a real material will be different when measured at
different wavelengths of radiant energy.

 The temperature of an object measured remotely is known as its radiant or
apparent temperature.

 Radiant temperature is the blackbody or kinetic temperature reduced by the
effect of emissivity (a numeric sketch follows this list).

 Remotely sensed thermal IR radiances are a composite of emitted energy,
emissivity, and atmospheric and sensor effects.

 On most thermal IR images, the brightest tones represent the warmest
radiant temperatures, and the darkest tones represent the coolest ones.

 Clouds typically show a patchy warm-and-cool pattern.

 The thermal inertia of water is similar to that of soils and rocks, but in
daytime, water bodies have a cooler surface temperature than soils and
rocks. At night the relative surface temperatures are reversed, so that
water is warmer than soils and rocks.

 If water bodies have warm signatures relative to the adjacent terrain, the
image was acquired at night, whereas relatively cool water bodies indicate
daytime imagery. Damp soil is cooler than dry soil, both day and night.

 Green deciduous vegetation has a cool signature on daytime images and
a warm signature on nighttime images. During the day, transpiration of
water vapor lowers leaf temperature, causing vegetation to have a cool
signature relative to the surrounding soil. At night the insulating effect of
leafy foliage and the high water content retain heat, which results in warm
nighttime temperatures.
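As the numeric sketch promised after the emissivity bullet above: combining the Stefan-Boltzmann law with the definition of emissivity gives Trad = ε^(1/4) · Tkin, so a real surface always appears slightly cooler than its kinetic temperature. The emissivity value below is an assumed example:

```python
def radiant_temperature(kinetic_temperature_k, emissivity):
    """Radiant (apparent) temperature of a grey body: T_rad = emissivity**0.25 * T_kin."""
    return emissivity ** 0.25 * kinetic_temperature_k

# A surface at 300 K with an assumed emissivity of 0.95 appears about 4 K cooler
print(radiant_temperature(300.0, 0.95))   # roughly 296 K
```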

Applications of Thermal IR Sensing

 Land cover classification and mapping
 Estimate sea surface temperatures
 Estimate soil moisture
 Monitor plant stress
 Detect ground water and geological structures and materials
 Detect and map thermal discharges
 Measure heat loss of buildings
 Assess urban heat island effects
 Map forest fires
 Monitor volcanic activity
