2. Microwaves:
o Shorter wavelength, higher frequency than radio waves
3. Infrared Radiation:
o Shorter wavelength, higher frequency than microwaves
4. Visible Light:
o Narrow band of wavelengths detectable by the human eye
5. Ultraviolet (UV) Radiation:
o Shorter wavelength, higher frequency than visible light
6. X-rays:
o Shorter wavelength, higher frequency than UV radiation
7. Gamma Rays:
o Shortest wavelength, highest frequency
4. Emission
Definition: Emission is the process by which electromagnetic radiation is
released from a material.
Mechanism: When a material is heated, its atoms or molecules gain energy
and vibrate more rapidly. This increased vibration causes the material to emit
electromagnetic radiation.
Types of emission:
o Black-body radiation: The radiation emitted by an ideal black body,
which absorbs all incident radiation and emits radiation at all
wavelengths.
o Incandescent emission: The emission of light from a heated object,
such as a light bulb filament.
o Luminescence: The emission of light by a material that has absorbed
energy from another source, such as a fluorescent lamp.
Factors affecting emission:
o Temperature of the material: Hotter objects emit more radiation
and at shorter wavelengths (see the sketch after this list).
o Material properties: The emissivity of a material, a measure of how
effectively it emits radiation, varies from one material to another.
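As a hedged illustration of the temperature factor, the following minimal Python sketch applies the Stefan-Boltzmann and Wien's displacement laws for an ideal black body; the two example temperatures are illustrative assumptions, not values from the text.

# Hotter bodies emit more total radiation (Stefan-Boltzmann law)
# and their emission peaks at shorter wavelengths (Wien's law).
sigma = 5.670374419e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
wien_b = 2.897771955e-3     # Wien's displacement constant, m K

for T in (300.0, 6000.0):   # roughly Earth's surface vs. the Sun's photosphere (assumed examples)
    total_emitted = sigma * T ** 4          # W per square metre (ideal black body)
    peak_wavelength = wien_b / T            # metres
    print(T, total_emitted, peak_wavelength * 1e6)  # peak wavelength printed in micrometres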
Spectral signatures:
A spectral signature is a function of the wavelength and is defined as the ratio
of reflected radiation energy [Er(λ)] to incident radiation energy [Et(λ)] on an object.
All matter on the Earth's surface has distinct spectral
reflectance characteristics. The reflectance is directly related to the object's color
and tone in an image (Jensen, 2009). The color of an object can be described based
on the wavelength reflected by it. The spectral reflectance of an object, averaged
over defined wavelength intervals, helps to differentiate it from other objects.
The reflectance value ρ(λ) varies with the wavelength and with terrain features. It can
be expressed mathematically as:
ρ(λ) = [E_r(λ) / E_t(λ)] × 100
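For example, if an object reflects 30 units of energy for every 100 units of incident energy at a given wavelength, its reflectance at that wavelength is ρ(λ) = (30/100) × 100 = 30%.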
Spectral signature is the variation of reflectance or emittance of a material with
respect to wavelengths (i.e., reflectance/emittance as a function of wavelength).
The spectral signature of stars indicates the composition of the stellar
atmosphere. The spectral signature of an object is a function of the incident
EM wavelength and the material's interaction with that section of the electromagnetic
spectrum.
Measurements can be made with various instruments, including a task-specific
spectrometer, although the most common method is separation of the red, green, blue
and near-infrared portions of the EM spectrum as acquired by digital cameras.
Calibration spectral signatures, collected under specific illumination conditions, are
used to apply corrections to airborne or satellite digital images.
The user of one kind of spectroscope looks through it at a tube of ionized gas. The
user sees specific lines of colour falling on a graduated scale. Each substance will
have its own unique pattern of spectral lines.
Most remote sensing applications process digital images to extract the spectral
signature at each pixel and use these signatures to divide the image into groups of
similar pixels (segmentation) using different approaches. As a final step, a class is
assigned to each group (classification) by comparison with known spectral signatures.
Depending on pixel resolution, a single pixel can represent many spectral signatures
"mixed" together, which is why much remote sensing analysis is devoted to "unmixing"
these mixtures. Ultimately, correctly matching the spectral signature recorded by an
image pixel with the spectral signature of a known material leads to accurate
classification in remote sensing.
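As a minimal sketch of this matching step, the following Python example assigns each pixel to the reference class with the closest spectral signature. The band order, class names and reflectance values are illustrative assumptions, not measured signatures.

import numpy as np

# Hypothetical reference signatures: mean reflectance in (blue, green, red, NIR).
reference_signatures = {
    "water":      np.array([0.08, 0.06, 0.04, 0.02]),
    "vegetation": np.array([0.04, 0.08, 0.05, 0.45]),
    "bare_soil":  np.array([0.12, 0.18, 0.22, 0.30]),
}

def classify_pixels(image):
    """Assign each pixel to the reference class with the closest (Euclidean) signature."""
    classes = list(reference_signatures)
    refs = np.stack([reference_signatures[c] for c in classes])    # (n_classes, n_bands)
    pixels = image.reshape(-1, image.shape[-1])                    # (n_pixels, n_bands)
    dists = np.linalg.norm(pixels[:, None, :] - refs[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    return np.array(classes)[labels].reshape(image.shape[:2])

# Example with a random 100 x 100 x 4 array standing in for real reflectance data.
image = np.random.rand(100, 100, 4)
label_map = classify_pixels(image)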
Shadow:
Shadows cast by objects can reveal details about their shape, height, and
orientation. They can also help identify features that might be obscured in
direct sunlight.
Pattern:
Pattern refers to the repetitive arrangement of objects or features in an
image. It can be used to identify specific land use types or geological
formations. For example, agricultural fields often exhibit a grid-like pattern.
Site and Association:
Site refers to the location of an object in relation to its surroundings. It can
provide clues about the object's function or purpose. For example, a school
building is often located near residential areas.
Association involves recognizing the relationship between different objects in
an image. For instance, a road may be associated with a nearby town or a
river.
By carefully considering these elements, we can derive valuable insights from visual
data. Whether analyzing satellite images for environmental monitoring or
interpreting medical scans for diagnosis, a solid understanding of visual
interpretation principles is essential.
Supervised Classification:
Supervised classification is a powerful technique in remote sensing that involves
training a computer algorithm to recognize and categorize different land cover
types or objects within an image. This process requires human intervention to guide
the algorithm, making it a supervised learning approach.
The Process:
1. Training Data Selection: The analyst identifies and selects representative
samples, known as training sites, for each land cover class of interest. These
training sites should be spectrally distinct and cover a range of variability
within each class.
2. Algorithm Training: The selected training sites are used to train the
classification algorithm. The algorithm learns the spectral characteristics
associated with each class, such as the distribution of pixel values in different
spectral bands.
3. Classification: Once the algorithm is trained, it is applied to the entire
image, assigning each pixel to the class it most closely resembles based on
its spectral signature and the learned patterns from the training data.
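The following is a minimal sketch of this workflow using a random forest classifier from scikit-learn. The array shapes, class codes and random stand-in data are illustrative assumptions; in practice the training pixels would be extracted from real training sites.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Training data: n_samples x n_bands spectral values with known class labels.
X_train = np.random.rand(300, 4)                 # stand-in for pixels sampled from training sites
y_train = np.random.randint(0, 3, 300)           # e.g. 0 = water, 1 = vegetation, 2 = urban (assumed codes)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)                        # algorithm training step

# Classification step: apply the trained classifier to every pixel of a 4-band image.
image = np.random.rand(200, 200, 4)
predicted = clf.predict(image.reshape(-1, 4)).reshape(200, 200)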
Unsupervised Classification:
Unsupervised classification, also known as clustering, is a powerful technique in
remote sensing for automatically grouping pixels of a satellite image based on their
spectral similarities. Unlike supervised classification, which requires labeled training
data, unsupervised classification operates without prior knowledge of the land cover
types present in the image.
While unsupervised classification is advantageous for its simplicity and lack of
reliance on training data, it also presents challenges. The accuracy of the
classification heavily depends on the choice of algorithm and the number of
specified classes. Additionally, the resulting classes may not directly correspond to
meaningful land cover categories, requiring further interpretation and potential
post-processing.
Despite these limitations, unsupervised classification remains a valuable tool for
initial exploration of remote sensing data, especially when labeled training data is
unavailable or limited. It can provide insights into the underlying patterns and
variations within the image, aiding in subsequent analysis and decision-making.
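As a hedged illustration of the clustering described above, the following Python sketch groups pixels with k-means; the number of clusters and the random stand-in image are illustrative assumptions.

import numpy as np
from sklearn.cluster import KMeans

image = np.random.rand(200, 200, 4)              # stand-in for a 4-band satellite image
pixels = image.reshape(-1, 4)

kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(pixels)
cluster_map = kmeans.labels_.reshape(200, 200)
# Each cluster must then be interpreted and assigned a land cover meaning by the analyst.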
Introduction to advanced classification techniques of satellite imageries:
Satellite imagery, a powerful tool in remote sensing, provides vast amounts of data
that can be analyzed to extract valuable information about our planet. Advanced
classification techniques play a pivotal role in unlocking the potential of this data.
These techniques are designed to automatically categorize pixels in a satellite
image into meaningful classes, such as land cover types (forest, urban, water) or
thematic categories (agriculture, mining, deforestation).
Advanced Classification Techniques
To address the limitations of traditional pixel-based classification, a range of
advanced classification techniques has emerged:
Object-Based Image Analysis (OBIA): OBIA treats image objects
(segments) as the fundamental unit of analysis rather than individual pixels.
This approach considers both spectral and spatial information, making it
effective for complex landscapes with heterogeneous features.
Machine Learning: Machine learning algorithms, such as support vector
machines (SVMs), random forests, and neural networks, have revolutionized
satellite image classification. These algorithms learn complex patterns from
training data and can accurately classify pixels, even in challenging
conditions.
Deep Learning: A subset of machine learning, deep learning employs
artificial neural networks with multiple layers to extract hierarchical features
from satellite images. Convolutional neural networks (CNNs) are particularly
well-suited for image classification tasks, as they can automatically learn
relevant features without explicit feature engineering (a minimal sketch follows this list).
Hybrid Approaches: Combining multiple techniques can often lead to
improved classification accuracy. For example, combining object-based
segmentation with machine learning or deep learning can leverage the
strengths of both approaches.
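As a hedged illustration of the deep-learning approach named above, the following sketch builds a small convolutional neural network with TensorFlow/Keras. The chip size, band count, number of classes and the random placeholder training data are all illustrative assumptions, not a prescribed architecture.

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

num_classes = 5                                  # assumed number of land cover classes
model = models.Sequential([
    layers.Input(shape=(32, 32, 4)),             # 32 x 32 image chips with 4 bands (assumed)
    layers.Conv2D(16, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Placeholder data standing in for labelled image chips.
x_train = np.random.rand(100, 32, 32, 4).astype("float32")
y_train = np.random.randint(0, num_classes, 100)
model.fit(x_train, y_train, epochs=2, batch_size=16, verbose=0)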
Types of data:
Vector Data
Vector data represents geographic features as discrete objects with defined shapes
and locations. It consists of three fundamental elements:
Points: Points represent specific locations with no spatial extent. They are
used to depict features like wells, fire hydrants, or individual trees.
Lines: Lines represent linear features with length but negligible width. They
are used to depict features like roads, rivers, or power lines.
Polygons: Polygons represent areas with defined boundaries. They are used to
depict features like countries, states, lakes, or land parcels.
Vector data is well-suited for representing discrete features with sharp boundaries
and is ideal for storing and analyzing attribute data associated with each feature.
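A minimal sketch of the three vector primitives, using the shapely library in Python; the coordinates and feature names are illustrative assumptions.

from shapely.geometry import Point, LineString, Polygon

well = Point(77.20, 28.61)                                  # a point feature (e.g. a well)
road = LineString([(0.0, 0.0), (1.0, 1.5), (2.0, 1.5)])     # a line feature (e.g. a road)
parcel = Polygon([(0, 0), (4, 0), (4, 3), (0, 3)])          # a polygon feature (e.g. a land parcel)

print(road.length, parcel.area)                             # basic geometric measurements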
Raster Data
Raster data represents geographic information as a grid of cells, where each cell
contains a value representing a specific attribute, such as elevation, temperature,
or land cover. Raster data is often derived from remote sensing imagery, such as
satellite or aerial photographs.
Raster data is well-suited for representing continuous phenomena, such as elevation
changes or temperature variations, and is ideal for analyzing spatial patterns and
trends.
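A minimal sketch of a raster as a grid of cells, using NumPy; the grid size, cell size and elevation values are illustrative assumptions.

import numpy as np

elevation = np.random.uniform(200, 800, size=(100, 100))    # 100 x 100 cells of elevation in metres
cell_size = 30.0                                             # assumed ground cell size in metres

print(elevation.mean(), elevation.max())                     # simple summaries of the continuous surface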
Overlaying of data:
Overlaying data in GIS, also known as spatial overlay or map overlay, is a powerful
technique that involves combining multiple layers of spatial data to create new
information and insights. It allows us to analyze the relationships between different
geographic features and their attributes, revealing patterns and trends that might
not be apparent when looking at individual layers.
There are two main types of overlay operations: vector overlay and raster overlay.
Vector overlay deals with data represented as points, lines, and polygons, while
raster overlay works with data organized into a grid of cells, each with a specific
value. Both types of overlay are essential for various applications in fields like urban
planning, environmental science, and resource management.
Vector overlay involves combining the geometries and attributes of two or more
vector layers to create a new layer. This can be done through operations like
intersection, union, and clip. Intersection creates a new layer containing only the
areas where the input layers overlap, while union combines all features from both
layers. Clip extracts the features of one layer that fall within the boundaries of
another layer.
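The sketch below illustrates a vector intersection overlay using geopandas; the two layers, their attributes and geometries are hypothetical examples built inline for illustration.

import geopandas as gpd
from shapely.geometry import Polygon

# Two hypothetical polygon layers.
land_use = gpd.GeoDataFrame(
    {"use": ["farmland", "urban"]},
    geometry=[Polygon([(0, 0), (4, 0), (4, 4), (0, 4)]),
              Polygon([(4, 0), (8, 0), (8, 4), (4, 4)])],
)
flood_zone = gpd.GeoDataFrame(
    {"risk": ["high"]},
    geometry=[Polygon([(2, 0), (6, 0), (6, 4), (2, 4)])],
)

# Intersection keeps only the overlapping areas, carrying attributes from both layers.
flooded = gpd.overlay(land_use, flood_zone, how="intersection")
print(flooded[["use", "risk"]])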
Raster overlay, on the other hand, involves combining the values of corresponding
cells in two or more raster layers. This can be done using operations like addition,
subtraction, multiplication, and division. For example, we can overlay a layer of soil
type with a layer of slope to identify areas suitable for agriculture.
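Following the soil and slope example above, a minimal raster overlay can be sketched as a cell-by-cell operation with NumPy; the class codes and slope threshold are illustrative assumptions.

import numpy as np

soil = np.random.randint(1, 4, size=(100, 100))          # 1 = sandy, 2 = loam, 3 = clay (assumed codes)
slope = np.random.uniform(0, 30, size=(100, 100))        # slope in degrees

# Cell-by-cell overlay: suitable for agriculture where the soil is loam and the slope is gentle.
suitable = (soil == 2) & (slope < 10)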
Overlay analysis has numerous applications. In urban planning, it can be used to
identify suitable locations for new development by overlaying layers of land use,
zoning, and infrastructure. In environmental science, it can help assess the impact
of climate change by overlaying layers of temperature, precipitation, and
vegetation. In resource management, it can assist in optimizing resource allocation
by overlaying layers of resource availability and demand.
In conclusion, overlaying data in GIS is a valuable tool for understanding complex
spatial relationships and making informed decisions. By combining different layers
of information, we can gain deeper insights into the world around us and address a
wide range of challenges.
Querying of data:
GIS, or Geographic Information Systems, empowers us to ask and answer complex
questions about our world by analyzing spatial data. A fundamental aspect of GIS is
the ability to query data, which involves extracting specific information from a
dataset based on defined criteria. This process allows us to gain valuable insights
and make informed decisions.
Types of Queries in GIS
1. Attribute Queries: These queries focus on the non-spatial attributes
associated with geographic features. By specifying conditions on these
attributes, we can identify and select features that meet specific criteria.
2. Spatial Queries: Spatial queries, on the other hand, involve analyzing the
spatial relationships between geographic features. These queries allow us to
identify features based on their location, proximity, or overlap with other
features.
Querying Techniques
GIS software provides various techniques for querying data, including:
SQL (Structured Query Language): SQL is a powerful language for
querying databases, and many GIS systems support SQL for complex data
extraction and analysis.
Graphical User Interface (GUI): GIS software often includes user-friendly
GUIs that allow users to construct queries visually. This approach is
particularly useful for non-technical users.
Scripting: Scripting languages like Python can be used to automate complex
queries and analysis tasks, providing flexibility and efficiency.
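As a minimal illustration of the scripting approach, the following Python sketch runs an attribute query and a spatial query with geopandas; the parcel layer, its attributes and the point of interest are hypothetical examples built inline.

import geopandas as gpd
from shapely.geometry import Point

# A hypothetical parcel layer.
parcels = gpd.GeoDataFrame(
    {"owner": ["A", "B", "C"], "area_ha": [0.5, 2.0, 3.5]},
    geometry=[Point(1, 1).buffer(0.5), Point(5, 5).buffer(1.0), Point(9, 1).buffer(1.2)],
)

# Attribute query: parcels larger than 1 hectare.
large = parcels[parcels["area_ha"] > 1.0]

# Spatial query: parcels within 3 units of a point of interest.
poi = Point(4, 4)
nearby = parcels[parcels.distance(poi) < 3.0]
print(len(large), len(nearby))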
Remote Sensing
Remote sensing involves acquiring information about an object or phenomenon
without direct contact. In atmospheric studies, RS utilizes satellites and aircraft
equipped with specialized sensors to capture data on various atmospheric
parameters. These sensors measure electromagnetic radiation emitted or reflected
by the Earth's surface and atmosphere, providing information on temperature,
humidity, pressure, wind speed, and the concentration of greenhouse gases and
pollutants.
GIS
Geographic Information Systems (GIS) provide a framework for capturing, storing,
analyzing, and visualizing spatial data. In atmospheric studies, GIS enables the
integration of RS data with other relevant information, such as topography, land
use, and population density. This integration allows for the creation of detailed maps
and models that help scientists understand the spatial distribution of atmospheric
phenomena and their impact on the environment and human society.
Combined Power of RS and GIS
The combination of RS and GIS offers numerous benefits for atmospheric studies:
Global Monitoring: RS allows for continuous monitoring of atmospheric
conditions on a global scale, providing valuable data for climate change
research and weather forecasting.
Spatiotemporal Analysis: GIS enables the analysis of atmospheric data
over time and space, revealing patterns and trends that may not be apparent
from single-point measurements.
Visualization: GIS tools provide powerful visualization capabilities, allowing
scientists to create maps, charts, and animations that effectively
communicate complex atmospheric processes to the public and
policymakers.
Model Development and Validation: RS and GIS data can be used to
develop and validate atmospheric models, which are essential for predicting
future climate scenarios and understanding the impact of human activities on
the atmosphere.
Decision Support: The integration of RS and GIS data can provide valuable
information for decision-making in areas such as air quality management,
disaster response, and sustainable development.
By harnessing the power of RS and GIS, scientists can gain a deeper understanding
of the Earth's atmosphere and its role in shaping our planet's climate and weather
patterns. These technologies are essential for addressing pressing environmental
challenges and ensuring a sustainable future for generations to come.