The information contained in this document is the exclusive property of Leica Geosystems GIS & Mapping, LLC. This work is protected under
United States copyright law and other international copyright treaties and conventions. No part of this work may be reproduced or transmitted in
any form or by any means, electronic or mechanical, including photocopying and recording, or by any information storage or retrieval system, except
as expressly permitted in writing by Leica Geosystems GIS & Mapping, LLC. All requests should be sent to the attention of Manager of Technical
Documentation, Leica Geosystems GIS & Mapping, LLC, 2801 Buford Highway NE, Suite 400, Atlanta, GA, 30329-2137, USA.
Government Reserved Rights. MrSID technology incorporated in the Software was developed in part through a project at the Los Alamos National
Laboratory, funded by the U.S. Government, managed under contract by the University of California (University), and is under exclusive
commercial license to LizardTech, Inc. It is used under license from LizardTech. MrSID is protected by U.S. Patent No. 5,710,835. Foreign patents
pending. The U.S. Government and the University have reserved rights in MrSID technology, including without limitation: (a) The U.S. Government
has a non-exclusive, nontransferable, irrevocable, paid-up license to practice or have practiced throughout the world, for or on behalf of the United
States, inventions covered by U.S. Patent No. 5,710,835 and has other rights under 35 U.S.C. § 200-212 and applicable implementing regulations;
(b) If LizardTech's rights in the MrSID Technology terminate during the term of this Agreement, you may continue to use the Software. Any
provisions of this license which could reasonably be deemed to do so would then protect the University and/or the U.S. Government; and (c) The
University has no obligation to furnish any know-how, technical assistance, or technical data to users of MrSID software and makes no warranty or
representation as to the validity of U.S. Patent 5,710,835 nor that the MrSID Software will not infringe any patent or other proprietary right. For
further information about these provisions, contact LizardTech, 1008 Western Ave., Suite 200, Seattle, WA 98104.
ERDAS, ERDAS IMAGINE, IMAGINE OrthoBASE, Stereo Analyst and IMAGINE VirtualGIS are registered trademarks; IMAGINE OrthoBASE
Pro is a trademark of Leica Geosystems GIS & Mapping, LLC.
Other companies and products mentioned herein are trademarks or registered trademarks of their respective owners.
Table of Contents
Table of Contents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iii
List of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxv
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxv
Conventions Used in this Book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxv
Chapter 1
Raster Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Image Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Bands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Coordinate Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Remote Sensing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Absorption / Reflection Spectra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Resolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Spectral Resolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Spatial Resolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Radiometric Resolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Temporal Resolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Data Correction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
Line Dropout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
Striping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
Data Storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
Storage Formats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
Storage Media . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Calculating Disk Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
ERDAS IMAGINE Format (.img) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Image File Organization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Consistent Naming Convention . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Keeping Track of Image Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Geocoded Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Using Image Data in GIS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Subsetting and Mosaicking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
Enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
Multispectral Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Editing Raster Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Editing Continuous (Athematic) Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Interpolation Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Chapter 2
Vector Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Lines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Polygons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Vertex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Vector Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Vector Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Attribute Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Displaying Vector Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
Color Schemes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
Symbolization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
Vector Data Sources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Digitizing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Tablet Digitizing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Screen Digitizing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Imported Vector Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Raster to Vector Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Other Vector Data Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Shapefile Vector Format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
SDE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
SDTS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
ArcGIS Integration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Chapter 3
Raster and Vector Data Sources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Importing and Exporting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Raster Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Raster Data Sources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
Annotation Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
Generic Binary Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
Vector Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
Satellite Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Satellite System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Satellite Characteristics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
IKONOS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
IRS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
Landsat 1-5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
Landsat 7 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
NLAPS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
NOAA Polar Orbiter Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
OrbView-3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
SeaWiFS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
SPOT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
SPOT4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
Radar Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Advantages of Using Radar Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Radar Sensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
Speckle Noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Applications for Radar Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Current Radar Sensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Future Radar Sensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Image Data from Aircraft . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
AIRSAR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
AVIRIS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
Daedalus TMS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
Image Data from Scanning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
Photogrammetric Scanners . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
Desktop Scanners . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
Aerial Photography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
DOQs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
ADRG Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
ARC System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
ADRG File Format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
.OVR (overview) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
.IMG (scanned image data) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
.Lxx (legend data) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
ADRG File Naming Convention . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
ADRI Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
.OVR (overview) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
.IMG (scanned image data) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
ADRI File Naming Convention . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
Raster Product Format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
CIB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
CADRG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Topographic Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
DEM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
DTED . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Using Topographic Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
GPS Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
Satellite Position . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
Differential Correction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
Applications of GPS Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
Ordering Raster Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
Addresses to Contact . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
Raster Data from Other Software Vendors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
ERDAS Ver. 7.X . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
GRID and GRID Stacks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
JFIF (JPEG) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
MrSID . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
SDTS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
SUN Raster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
TIFF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
GeoTIFF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
Vector Data from Other Software Vendors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
ARCGEN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
AutoCAD (DXF) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
DLG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
ETAK . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
IGES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
TIGER . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
Chapter 4
Image Display . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
Display Memory Size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
Pixel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
Colors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
Colormap and Colorcells . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
Display Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
8-bit PseudoColor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
24-bit DirectColor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
24-bit TrueColor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
PC Displays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
Displaying Raster Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
Continuous Raster Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
Thematic Raster Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
Using the Viewer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
Pyramid Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
Dithering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
Viewing Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
Viewing Multiple Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
Linking Viewers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
Zoom and Roam . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
Geographic Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
Enhancing Continuous Raster Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
Creating New Image Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
Chapter 5
Mosaic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
Input Image Mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
Exclude Areas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
Image Dodging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
Color Balancing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
Histogram Matching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
Intersection Mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
Chapter 6
Enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
Display vs. File Enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
Spatial Modeling Enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
Correcting Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
Radiometric Correction: Visible/Infrared Imagery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
Atmospheric Effects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
Geometric Correction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
Radiometric Enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
Contrast Stretching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
Histogram Equalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
Histogram Matching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
Brightness Inversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
Spatial Enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
Convolution Filtering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
Crisp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
Resolution Merge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
Adaptive Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
Wavelet Resolution Merge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
Wavelet Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
Algorithm Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
Prerequisites and Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
Spectral Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
Spectral Enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
Principal Components Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
Decorrelation Stretch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
Tasseled Cap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
RGB to IHS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
IHS to RGB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
Indices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
Hyperspectral Image Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
Normalize . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
IAR Reflectance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
Log Residuals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
Rescale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
Processing Sequence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
Spectrum Average . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
Signal to Noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
Mean per Pixel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
Profile Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
Chapter 7
Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
The Classification Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
Pattern Recognition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
Training . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
Signatures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
Decision Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
Classification Tips . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
Classification Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
Iterative Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
Supervised vs. Unsupervised Training . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
Classifying Enhanced Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
Dimensionality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
Supervised Training . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
Training Samples and Feature Space Objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
Selecting Training Samples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
Evaluating Training Samples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
Selecting Feature Space Objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
Unsupervised Training . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
ISODATA Clustering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
RGB Clustering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
Signature Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
Evaluating Signatures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
Alarm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
Ellipse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
Contingency Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
Separability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
Signature Manipulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
Classification Decision Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
Nonparametric Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
Parametric Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
Parallelepiped . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
Feature Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
Minimum Distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
Mahalanobis Distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
Maximum Likelihood/Bayesian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
Fuzzy Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
Fuzzy Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
Fuzzy Convolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
Expert Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
Knowledge Engineer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
Knowledge Classifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
Evaluating Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
Thresholding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
Accuracy Assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
Output File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262
Chapter 8
Photogrammetric Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
What is Photogrammetry? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
Types of Photographs and Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
Why use Photogrammetry? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
Photogrammetry vs. Conventional Geometric Correction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
Single Frame Orthorectification vs. Block Triangulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
Image and Data Acquisition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
Photogrammetric Scanners . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
Desktop Scanners . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
Scanning Resolutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
Coordinate Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
Terrestrial Photography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
Interior Orientation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
Principal Point and Focal Length . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
Fiducial Marks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
Lens Distortion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
Exterior Orientation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
The Collinearity Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
Photogrammetric Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
Space Resection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
Space Forward Intersection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
Bundle Block Adjustment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
Least Squares Adjustment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
Self-calibrating Bundle Adjustment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
Automatic Gross Error Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
GCPs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
GCP Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
Processing Multiple Strips of Imagery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
Tie Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
Automatic Tie Point Collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292
Image Matching Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
Area Based Matching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
Feature Based Matching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
Relation Based Matching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
Image Pyramid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
Satellite Photogrammetry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
SPOT Interior Orientation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
SPOT Exterior Orientation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
Collinearity Equations and Satellite Block Triangulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
Chapter 9
Radar Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
IMAGINE OrthoRadar Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
Parameters Required for Orthorectification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
Algorithm Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
IMAGINE StereoSAR DEM Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
Input . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
Subset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
Despeckle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
Degrade . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
Register . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
Constrain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
Match . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
Degrade . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
Height . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
IMAGINE IFSAR DEM Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
Electromagnetic Wave Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
The Interferometric Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326
Image Registration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
Phase Noise Reduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332
Phase Flattening . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
Phase Unwrapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 338
Chapter 10
Rectification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
Registration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
Georeferencing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
Latitude/Longitude . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
Orthorectification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
When to Rectify . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
When to Georeference Only . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
Disadvantages of Rectification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
Rectification Steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
Ground Control Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
GCPs in ERDAS IMAGINE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
Entering GCPs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
GCP Prediction and Matching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
Polynomial Transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
Linear Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
Nonlinear Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
Effects of Order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
Minimum Number of GCPs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
Rubber Sheeting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
Triangle-Based Finite Element Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
Triangulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
Triangle-based rectification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
Linear transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
Nonlinear transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
Check Point Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
RMS Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
Residuals and RMS Error Per GCP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
Total RMS Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
Error Contribution by Point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
Tolerance of RMS Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
Evaluating RMS Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 358
Resampling Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
Rectifying to Lat/Lon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360
Nearest Neighbor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360
Bilinear Interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
Cubic Convolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
Bicubic Spline Interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366
Map-to-Map Coordinate Conversions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368
Conversion Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
Vector Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
Chapter 11
Terrain Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
Topographic Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372
Slope Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372
Aspect Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375
Shaded Relief . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 376
Topographic Normalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 377
Lambertian Reflectance Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 378
Non-Lambertian Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 378
Chapter 12
Geographic Information Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
Information vs. Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
Data Input . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
Continuous Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384
Thematic Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 385
Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
Vector Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
Attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 387
Raster Attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 387
Vector Attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388
Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389
ERDAS IMAGINE Analysis Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389
Analysis Procedures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
Proximity Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
Contiguity Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
Neighborhood Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
Recoding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
Overlaying . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
Indexing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396
Matrix Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398
Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398
Graphical Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399
Objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403
Data Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
Output Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
Using Attributes in Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
Script Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
Statements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407
Data Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 408
Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409
Vector Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409
Editing Vector Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409
Constructing Topology (Coverages Only) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 410
Building and Cleaning Coverages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
Chapter 13
Cartography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413
Types of Maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413
Thematic Maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 415
Annotation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
Scale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 417
Legends . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 422
Neatlines, Tick Marks, and Grid Lines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 422
Symbols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423
Labels and Descriptive Text . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 424
Typography and Lettering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 425
Map Projections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
Properties of Map Projections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
Projection Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 429
Geographical and Planar Coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 431
Available Map Projections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 432
Map Projection Uses in a GIS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
Deciding Factors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
Guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
Spheroids . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440
Map Composition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 443
Learning Map Composition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 443
Plan the Map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 443
Map Accuracy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 445
US National Map Accuracy Standard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 445
USGS Land Use and Land Cover Map Guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 445
USDA SCS Soils Maps Guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 445
Digitized Hardcopy Maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 446
Chapter 14
Hardcopy Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 447
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 447
Printing Maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 447
Scaled Maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 447
Printing Large Maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 447
Scale and Resolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 448
Map Scaling Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 449
Mechanics of Printing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 451
Halftone Printing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 451
Continuous Tone Printing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 451
Contrast and Color Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 452
RGB to CMY Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 452
Appendix A
Math Topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 455
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 455
Summation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 455
Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 455
Histogram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 455
Bin Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 456
Mean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 458
Normal Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 458
Variance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 460
Standard Deviation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 460
Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 461
Covariance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 461
Covariance Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 462
Dimensionality of Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 462
Measurement Vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
Mean Vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
Feature Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
Feature Space Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
n-Dimensional Histogram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 465
Spectral Distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 466
Polynomials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 466
Order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 466
Transformation Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 467
Matrix Algebra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 467
Matrix Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 467
Matrix Multiplication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 468
Transposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 469
Appendix B
Map Projections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 471
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 471
Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 593
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 641
Works Cited . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 641
Related Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 652
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 655
List of Figures
Figure 1-1: Pixels and Bands in a Raster Image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2
Figure 1-2: Typical File Coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4
Figure 1-3: Electromagnetic Spectrum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5
Figure 1-4: Sun Illumination Spectral Irradiance at the Earth’s Surface . . . . . . . . . . . . . . . . . . . . . .6
Figure 1-5: Factors Affecting Radiation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7
Figure 1-6: Reflectance Spectra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .9
Figure 1-7: Laboratory Spectra of Clay Minerals in the Infrared Region . . . . . . . . . . . . . . . . . . . . . 11
Figure 1-8: IFOV . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Figure 1-9: Brightness Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Figure 1-10: Landsat TM—Band 2 (Four Types of Resolution) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Figure 1-11: Band Interleaved by Line (BIL) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Figure 1-12: Band Sequential (BSQ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Figure 1-13: Image Files Store Raster Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Figure 1-14: Example of a Thematic Raster Layer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Figure 1-15: Examples of Continuous Raster Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
Figure 2-1: Vector Elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Figure 2-2: Vertices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Figure 2-3: Workspace Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Figure 2-4: Attribute CellArray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Figure 2-5: Symbolization Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Figure 2-6: Digitizing Tablet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Figure 2-7: Raster Format Converted to Vector Format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Figure 3-1: Multispectral Imagery Comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Figure 3-2: Landsat MSS vs. Landsat TM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
Figure 3-3: SPOT Panchromatic vs. SPOT XS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
Figure 3-4: SLAR Radar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
Figure 3-5: Received Radar Signal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
Figure 3-6: Radar Reflection from Different Sources and Distances . . . . . . . . . . . . . . . . . . . . . . . . 70
Figure 3-7: ADRG Overview File Displayed in a Viewer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
Figure 3-8: Subset Area with Overlapping ZDRs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
Figure 3-9: Seamless Nine Image DR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
Figure 3-10: ADRI Overview File Displayed in a Viewer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
Figure 3-11: Arc/second Format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
Figure 3-12: Common Uses of GPS Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
Figure 4-1: Example of One Seat with One Display and Two Screens . . . . . . . . . . . . . . . . . . . . . . 109
Figure 4-2: Transforming Data File Values to a Colorcell Value . . . . . . . . . . . . . . . . . . . . . . . . . . 113
Figure 4-3: Transforming Data File Values to a Colorcell Value . . . . . . . . . . . . . . . . . . . . . . . . . . 114
Figure 4-4: Transforming Data File Values to Screen Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
Figure 4-5: Contrast Stretch and Colorcell Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
Figure 4-6: Stretching by Min/Max vs. Standard Deviation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
Figure 4-7: Continuous Raster Layer Display Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
Figure 4-8: Thematic Raster Layer Display Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
Figure 4-9: Pyramid Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
Figure 4-10: Example of Dithering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
Figure 4-11: Example of Color Patches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
Figure 4-12: Linked Viewers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
Figure 6-1: Histograms of Radiometrically Enhanced Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
Figure 6-2: Graph of a Lookup Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
List of Tables
Table 1-1: Bandwidths Used in Remote Sensing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
Table 2-1: Description of File Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Table 3-1: Raster Data Formats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Table 3-2: Annotation Data Formats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
Table 3-3: Vector Data Formats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Table 3-4: IKONOS Bands and Wavelengths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
Table 3-5: LISS-III Bands and Wavelengths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
Table 3-6: Panchromatic Band and Wavelength . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
Table 3-7: WiFS Bands and Wavelengths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
Table 3-8: MSS Bands and Wavelengths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
Table 3-9: TM Bands and Wavelengths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Table 3-10: Landsat 7 Characteristics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
Table 3-11: AVHRR Bands and Wavelengths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
Table 3-12: OrbView-3 Bands and Spectral Ranges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Table 3-13: SeaWiFS Bands and Wavelengths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Table 3-14: SPOT XS Bands and Wavelengths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Table 3-15: SPOT4 Bands and Wavelengths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Table 3-16: Commonly Used Bands for Radar Imaging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Table 3-17: Current Radar Sensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Table 3-18: JERS-1 Bands and Wavelengths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Table 3-19: RADARSAT Beam Mode Resolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Table 3-20: SIR-C/X-SAR Bands and Frequencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Table 3-21: Daedalus TMS Bands and Wavelengths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
Table 3-22: ARC System Chart Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
Table 3-23: Legend Files for the ARC System Chart Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
Table 3-24: Common Raster Data Products . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
Table 3-25: File Types Created by Screendump . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
Table 3-26: The Most Common TIFF Format Elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
Table 3-27: Conversion of DXF Entries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
Table 3-28: Conversion of IGES Entities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
Table 4-1: Colorcell Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
Table 4-2: Commonly Used RGB Colors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
Table 4-3: Overview of Zoom Ratio . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
Table 6-1: Description of Modeling Functions Available for Enhancement . . . . . . . . . . . . . . . . . . . 143
Table 6-2: Theoretical Coefficient of Variation Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
Table 6-3: Parameters for Sigma Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
Table 6-4: Pre-Classification Sequence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
Table 7-1: Training Sample Comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
Table 7-2: Feature Space Signatures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
Table 7-3: ISODATA Clustering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
Table 7-4: RGB Clustering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
Table 7-5: Parallelepiped Decision Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
Table 7-6: Feature Space Decision Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
Table 7-7: Minimum Distance Decision Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
Table 7-8: Mahalanobis Decision Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
Table 7-9: Maximum Likelihood/Bayesian Decision Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
Table 8-1: Scanning Resolutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
Table 9-1: SAR Parameters Required for Orthorectification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
Preface
Introduction The purpose of the ERDAS Field Guide™ is to provide background information on why one
might use particular geographic information system (GIS) and image processing functions and
how the software is manipulating the data, rather than what buttons to push to actually perform
those functions. This book is also aimed at a diverse audience: from those who are new to
geoprocessing to those savvy users who have been in this industry for years. For the novice, the
ERDAS Field Guide provides a brief history of the field, an extensive glossary of terms, and
notes about applications for the different processes described. For the experienced user, the
ERDAS Field Guide includes the formulas and algorithms that are used in the code, so that he
or she can see exactly how each operation works.
Although the ERDAS Field Guide is primarily a reference to basic image processing and GIS
concepts, it is geared toward ERDAS IMAGINE® users and the functions within ERDAS
IMAGINE software, such as GIS analysis, image processing, cartography and map projections,
graphics display hardware, statistics, and remote sensing. However, in some cases, processes
and functions are described that may not be in the current version of the software, but planned
for a future release. There may also be functions described that are not available on your system,
due to the actual package that you are using.
The enthusiasm with which the first four editions of the ERDAS Field Guide were received has
been extremely gratifying, both to the authors and to Leica Geosystems GIS & Mapping, LLC
as a whole. First conceived as a helpful manual for users, the ERDAS Field Guide is now being
used as a textbook, lab manual, and training guide throughout the world.
The ERDAS Field Guide will continue to expand and improve to keep pace with the profession.
Suggestions and ideas for future editions are always welcome, and should be addressed to the
Technical Writing department of Engineering at Leica Geosystems, in Atlanta, Georgia.
Conventions Used in this Book The following paragraphs are used throughout the ERDAS Field Guide and other ERDAS IMAGINE documentation.
These paragraphs direct you to the ERDAS IMAGINE software function that
accomplishes the described task.
These paragraphs lead you to other chapters in the ERDAS Field Guide or other manuals
for additional information.
Chapter 1
Raster Data
Introduction The ERDAS IMAGINE system incorporates the functions of both image processing and GIS.
These functions include importing, viewing, altering, and analyzing raster and vector data sets.
This chapter is an introduction to raster data, including:
• remote sensing
• radiometric correction
• geocoded data
Image Data In general terms, an image is a digital picture or representation of an object. Remotely sensed
image data are digital representations of the Earth. Image data are stored in data files, also called
image files, on magnetic tapes, computer disks, or other media. The data consist only of
numbers. These representations form images when they are displayed on a screen or are output
to hardcopy.
Each number in an image file is a data file value. Data file values are sometimes referred to as
pixels. The term pixel is abbreviated from picture element. A pixel is the smallest part of a
picture (the area being scanned) with a single value. The data file value is the measured
brightness value of the pixel at a specific wavelength.
Raster image data are laid out in a grid similar to the squares on a checkerboard. Each cell of
the grid is represented by a pixel, also known as a grid cell.
In remotely sensed image data, each pixel represents an area of the Earth at a specific location.
The data file value assigned to that pixel is the record of reflected radiation or emitted heat from
the Earth’s surface at that location.
Data file values may also represent elevation, as in digital elevation models (DEMs).
NOTE: DEMs are not remotely sensed image data, but are currently being produced from stereo
points in radar imagery.
The terms pixel and data file value are not interchangeable in ERDAS IMAGINE. Pixel is
used as a broad term with many meanings, one of which is data file value. One pixel in a
file may consist of many data file values. When an image is displayed or printed, other
types of values are represented by a pixel.
See Chapter 4 “Image Display” for more information on how images are displayed.
Bands Image data may include several bands of information. Each band is a set of data file values for
a specific portion of the electromagnetic spectrum of reflected light or emitted heat (red, green,
blue, near-infrared, infrared, thermal, etc.) or some other user-defined information created by
combining or enhancing the original bands, or creating new bands from other sources.
ERDAS IMAGINE programs can handle an unlimited number of bands of image data in a single
file.
Figure 1-1: Pixels and Bands in a Raster Image (three bands; one pixel has a data file value in each band)
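To make this concrete, here is a minimal sketch (hypothetical NumPy code, not ERDAS IMAGINE itself) of a multiband raster held as an array, with one data file value per band for each pixel.

import numpy as np

# A hypothetical 3-band raster: 4 rows x 5 columns of 8-bit data file values.
rows, cols, bands = 4, 5, 3
image = np.random.randint(0, 256, size=(rows, cols, bands), dtype=np.uint8)

# One pixel holds one data file value per band.
row, col = 1, 3
pixel_values = image[row, col, :]
print(f"Pixel at row {row}, column {col} has {bands} data file values:", pixel_values)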
Numeral Types
The range and the type of numbers used in a raster layer determine how the layer is displayed
and processed. For example, a layer of elevation data with values ranging from -51.257 to
553.401 would be treated differently from a layer using only two values to show land and water.
The data file values in raster layers generally fall into these categories:
• Nominal data file values are simply categorized and named. The actual value used for each
category has no inherent meaning—it is simply a class value. An example of a nominal
raster layer would be a thematic layer showing tree species.
• Ordinal data are similar to nominal data, except that the file values put the classes in a rank
or order. For example, a layer with classes numbered and named
1 - Good, 2 - Moderate, and 3 - Poor is an ordinal system.
• Interval data file values have an order, but the intervals between the values are also
meaningful. Interval data measure some characteristic, such as elevation or degrees
Fahrenheit, which does not necessarily have an absolute zero. (The difference between two
values in interval data is meaningful.)
• Ratio data measure a condition that has a natural zero, such as electromagnetic radiation (as
in most remotely sensed data), rainfall, or slope.
Nominal and ordinal data lend themselves to applications in which categories, or themes, are
used. Therefore, these layers are sometimes called categorical or thematic.
Likewise, interval and ratio layers are more likely to measure a condition, causing the file values
to represent continuous gradations across the layer. Such layers are called continuous.
Coordinate Systems The location of a pixel in a file or on a displayed or printed image is expressed using a
coordinate system. In two-dimensional coordinate systems, locations are organized in a grid of
columns and rows. Each location on the grid is expressed as a pair of coordinates known as X
and Y. The X coordinate specifies the column of the grid, and the Y coordinate specifies the
row. Image data organized into such a grid are known as raster data.
There are two basic coordinate systems used in ERDAS IMAGINE:
• file coordinates—indicate the location of a pixel within the image (data file)
• map coordinates—indicate the location of a pixel in a map
File Coordinates
File coordinates refer to the location of the pixels within the image (data) file. File coordinates
for the pixel in the upper left corner of the image always begin at 0, 0.
Figure 1-2: Typical File Coordinates (columns are counted along x, rows along y; the highlighted pixel is at file coordinate 3, 1)
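As a small illustration of the file coordinate convention (again a hypothetical NumPy sketch, not an ERDAS IMAGINE data file), x selects the column and y selects the row, with the origin at the upper left pixel:

import numpy as np

# Hypothetical single-band image: 3 rows x 4 columns of data file values.
data = np.arange(12, dtype=np.uint8).reshape(3, 4)

x, y = 3, 1                # file coordinates: x = column, y = row, origin at (0, 0)
value = data[y, x]         # arrays are indexed [row, column], so y comes first
print(f"Data file value at file coordinate ({x}, {y}) is {value}")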
Map Coordinates
Map coordinates may be expressed in one of a number of map coordinate or projection systems.
The type of map coordinates used by a data file depends on the method used to create the file
(remote sensing, scanning an existing map, etc.). In ERDAS IMAGINE, a data file can be
converted from one map coordinate system to another.
For more information on map coordinates and projection systems, see Chapter 13
“Cartography” or Appendix B “Map Projections”. See Chapter 10 “Rectification” for
more information on changing the map coordinate system of a data file.
Remote Sensing Remote sensing is the acquisition of data about an object or scene by a sensor that is far from
the object (Colwell, 1983). Aerial photography, satellite imagery, and radar are all forms of
remotely sensed data.
Usually, remotely sensed data refer to data of the Earth collected from sensors on satellites or
aircraft. Most of the images used as input to the ERDAS IMAGINE system are remotely sensed.
However, you are not limited to remotely sensed data.
This section is a brief introduction to remote sensing. There are many books available for
more detailed information, including Colwell, 1983; Swain and Davis, 1978; and Slater,
1980 (see “Bibliography”).
All types of land cover (rock types, water bodies, etc.) absorb a portion of the electromagnetic
spectrum, giving a distinguishable signature of electromagnetic radiation. Armed with the
knowledge of which wavelengths are absorbed by certain features and the intensity of the
reflectance, you can analyze a remotely sensed image and make fairly accurate assumptions
about the scene. Figure 1-3 illustrates the electromagnetic spectrum (Suits, 1983; Star and Estes,
1990).
Figure 1-3: Electromagnetic Spectrum (wavelengths in micrometers, µm): ultraviolet; visible (0.4 - 0.7), comprising blue (0.4 - 0.5), green (0.5 - 0.6), and red (0.6 - 0.7); near-infrared (0.7 - 2.0); middle-infrared (2.0 - 5.0); far-infrared (8.0 - 15.0); the SWIR, LWIR, and radar regions are also indicated.
Absorption / Reflection Spectra When radiation interacts with matter, some wavelengths are absorbed and others are reflected. To enhance features in image data, it is necessary to understand how vegetation, soils, water, and other land covers reflect and absorb radiation. The study of the absorption and reflection of EMR waves is called spectroscopy.
Spectroscopy
Most commercial sensors, with the exception of imaging radar sensors, are passive solar
imaging sensors. Passive solar imaging sensors can only receive radiation waves; they cannot
transmit radiation. (Imaging radar sensors are active sensors that emit a burst of microwave
radiation and receive the backscattered radiation.)
The use of passive solar imaging sensors to characterize or identify a material of interest is
based on the principles of spectroscopy. Therefore, to fully utilize a visible/infrared (VIS/IR)
multispectral data set and properly apply enhancement algorithms, it is necessary to understand
these basic principles. Spectroscopy reveals the:
• reflection spectra—the EMR wavelengths that are reflected by specific materials of interest
• absorption spectra—the EMR wavelengths that are absorbed by specific materials of interest
Absorption Spectra
Absorption is based on the molecular bonds in the (surface) material. Which wavelengths are
absorbed depends upon the chemical composition and crystalline structure of the material. For
pure compounds, these absorption bands are so specific that the SWIR region is often called an
infrared fingerprint.
Atmospheric Absorption
In remote sensing, the sun is the radiation source for passive sensors. However, the sun does not
emit the same amount of radiation at all wavelengths. Figure 1-4 shows the solar irradiation
curve, which is far from linear.
Figure 1-4: Sun Illumination Spectral Irradiance at the Earth's Surface (spectral irradiance plotted against wavelength from 0.0 to 3.0 µm, spanning the UV, visible, and infrared regions)
As this radiation travels through the atmosphere, it is affected by several factors, including:
• absorption—the amount of radiation absorbed by the atmosphere
• scattering—the amount of radiation scattered away from the field of view by the atmosphere
• emission source—radiation re-emitted after absorption
Figure 1-5: Factors Affecting Radiation
Reflectance Spectra
After rigorously defining the incident radiation (solar irradiation at target), it is possible to study
the interaction of the radiation with the target material. When an electromagnetic wave (solar
illumination in this case) strikes a target surface, three interactions are possible (Elachi, 1987):
• reflection
• transmission
• scattering
It is the reflected radiation, generally modeled as bidirectional reflectance (Clark and Roush,
1984), that is measured by the remote sensor.
Remotely sensed data are made up of reflectance values. The resulting reflectance values
translate into discrete digital numbers (or values) recorded by the sensing device. These gray
scale values fit within a certain bit range (such as 0 to 255, which is 8-bit data) depending on
the characteristics of the sensor.
Each satellite sensor detector is designed to record a specific portion of the electromagnetic
spectrum. For example, Landsat Thematic Mapper (TM) band 1 records the 0.45 to 0.52 µm
portion of the spectrum and is designed for water body penetration, making it useful for coastal
water mapping. It is also useful for soil/vegetation discriminations, forest type mapping, and
cultural features identification (Lillesand and Kiefer, 1987).
The characteristics of each sensor provide the first level of constraints on how to approach the
task of enhancing specific features, such as vegetation or urban areas. Therefore, when choosing
an enhancement technique, one should pay close attention to the characteristics of the land cover
types within the constraints imposed by the individual sensors.
The use of VIS/IR imagery for target discrimination, whether the target is mineral, vegetation,
man-made, or even the atmosphere itself, is based on the reflectance spectrum of the material
of interest (see Figure 1-6). Every material has a characteristic spectrum based on the chemical
composition of the material. When sunlight (the illumination source for VIS/IR imagery) strikes
a target, certain wavelengths are absorbed by the chemical bonds; the rest are reflected back to
the sensor. It is, in fact, the wavelengths that are not returned to the sensor that provide
information about the imaged area.
Specific wavelengths are also absorbed by gases in the atmosphere (H2O vapor, CO2, O2, etc.).
If the atmosphere absorbs a large percentage of the radiation, it becomes difficult or impossible
to use that particular wavelength(s) to study the Earth. For the present Landsat and Systeme
Pour l’observation de la Terre (SPOT) sensors, only the water vapor bands are considered strong
enough to exclude the use of their spectral absorption region. Figure 1-6 shows how Landsat
TM bands 5 and 7 were carefully placed to avoid these regions. Absorption by other
atmospheric gases was not extensive enough to eliminate the use of the spectral region for
present day broad band sensors.
Figure 1-6: Reflectance Spectra (percent reflectance plotted against wavelength from 0.4 to 2.4 µm for kaolinite, green vegetation, and silt loam, with the positions of Landsat TM bands 1-5 and 7 and the atmospheric absorption bands indicated)
NOTE: This chart is for comparison purposes only. It is not meant to show actual values. The
spectra are offset to better display the lines.
An inspection of the spectra reveals the theoretical basis of some of the indices in the ERDAS
IMAGINE Image Interpreter. Consider the vegetation index TM4/TM3. It is readily apparent
that for vegetation this value could be very large. For soils, the value could be much smaller,
and for clay minerals, the value could be near zero. Conversely, when the clay ratio TM5/TM7
is considered, the opposite applies.
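The same reasoning can be sketched numerically. The following hypothetical example (illustrative band values, not Image Interpreter code) computes the TM4/TM3 vegetation index and the TM5/TM7 clay ratio from band arrays:

import numpy as np

# Hypothetical data file values for Landsat TM bands 3, 4, 5, and 7.
tm3 = np.array([[30.0, 28.0], [32.0, 29.0]])    # red
tm4 = np.array([[120.0, 115.0], [20.0, 22.0]])  # near-infrared
tm5 = np.array([[60.0, 58.0], [61.0, 57.0]])    # SWIR
tm7 = np.array([[25.0, 24.0], [60.0, 59.0]])    # SWIR (clay absorption region)

eps = 1e-6                               # guard against division by zero
vegetation_index = tm4 / (tm3 + eps)     # large over vegetation, smaller over soil and clay
clay_ratio = tm5 / (tm7 + eps)           # large where clay minerals absorb in band 7
print(vegetation_index)
print(clay_ratio)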
Hyperspectral Data
As remote sensing moves toward the use of more and narrower bands (for example, AVIRIS
with 224 bands, each only 10 nm wide), absorption by specific atmospheric gases must be considered.
These multiband sensors are called hyperspectral sensors. As more and more of the incident
radiation is absorbed by the atmosphere, the digital number (DN) values of that band get lower,
eventually becoming useless—unless one is studying the atmosphere. Someone wanting to
measure the atmospheric content of a specific gas could utilize the bands of specific absorption.
Figure 1-6 shows the spectral bandwidths of the channels for the Landsat sensors plotted above
the absorption spectra of some common natural materials (kaolin clay, silty loam soil, and green
vegetation). Note that while the spectra are continuous, the Landsat channels are segmented or
discontinuous. We can still use the spectra in interpreting the Landsat data. For example, a
Normalized Difference Vegetation Index (NDVI) ratio for the three would be very different
and, therefore, could be used to discriminate between the three materials. Similarly, the ratio
TM5/TM7 is commonly used to measure the concentration of clay minerals. Evaluation of the
spectra shows why.
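For reference, here is a minimal sketch of the NDVI calculation mentioned above, using hypothetical band 3 (red) and band 4 (near-infrared) values; it is an illustration, not the Image Interpreter implementation:

import numpy as np

red = np.array([[30.0, 28.0], [45.0, 47.0]])    # hypothetical TM band 3 values
nir = np.array([[120.0, 115.0], [50.0, 52.0]])  # hypothetical TM band 4 values

# NDVI = (NIR - red) / (NIR + red); values near +1 indicate dense green vegetation.
ndvi = (nir - red) / (nir + red + 1e-6)
print(ndvi)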
Figure 1-7 shows detail of the absorption spectra of three clay minerals. Because of the wide
bandpass (2080 to 2350 nm) of TM band 7, it is not possible to discern between these three
minerals with the Landsat sensor. As mentioned, the AVIRIS hyperspectral sensor has a large
number of approximately 10 nm wide bands. With the proper selection of band ratios, mineral
identification becomes possible. With this data set, it would be possible to discriminate between
these three clay minerals, again using band ratios. For example, a color composite image
prepared from RGB = 2160nm/2190nm, 2220nm/2250nm, 2350nm/2488nm could produce a
color-coded clay mineral image-map.
The commercial airborne multispectral scanners are used in a similar fashion. The Airborne
Imaging Spectrometer from the Geophysical & Environmental Research Corp. (GER) has 79
bands in the UV, visible, SWIR, and thermal-infrared regions. The Airborne Multispectral
Scanner Mk2 by Geoscan Pty, Ltd., has up to 52 bands in the visible, SWIR, and thermal-
infrared regions. To properly utilize these hyperspectral sensors, you must understand the
phenomenon involved and have some idea of the target materials being sought.
Figure 1-7: Laboratory Spectra of Clay Minerals in the Infrared Region (reflectance spectra of kaolinite, montmorillonite, and illite; Landsat TM band 7 spans 2080 to 2350 nm)
The characteristics of Landsat, AVIRIS, and other data types are discussed in Chapter 3
“Raster and Vector Data Sources”. See Chapter 6 “Enhancement” for more information
on the NDVI ratio.
It is the active sensors, termed imaging radar, that are introducing a new generation of satellite
imagery to remote sensing. To produce an image, these satellites emit a directed beam of
microwave energy at the target, and then collect the backscattered (reflected) radiation from the
target scene. Because they must emit a powerful burst of energy, these satellites require large
solar collectors and storage batteries. For this reason, they cannot operate continuously; some
satellites are limited to 10 minutes of operation per hour.
The microwave energy emitted by an active radar sensor is coherent and defined by a narrow
bandwidth. Table 1-1 summarizes the bandwidths used in remote sensing.
A key element of a radar sensor is the antenna. For a given position in space, the resolution of
the resultant image is a function of the antenna size. This is termed a real-aperture radar (RAR).
At some point, it becomes impossible to make a large enough antenna to create the desired
spatial resolution. To get around this problem, processing techniques have been developed
which combine the signals received by the sensor as it travels over the target. Thus, the antenna
is perceived to be as long as the sensor path during backscatter reception. This is termed a
synthetic aperture and the sensor a synthetic aperture radar (SAR).
The received signal is termed a phase history or echo hologram. It contains a time history of the
radar signal over all the targets in the scene, and is itself a low resolution RAR image. In order
to produce a high resolution image, this phase history is processed through a hardware/software
system called an SAR processor. The SAR processor software requires operator input
parameters, such as information about the sensor flight path and the radar sensor's
characteristics, to process the raw signal data into an image. These input parameters depend on
the desired result or intended application of the output imagery.
One of the most valuable advantages of imaging radar is that it creates images from its own
energy source and therefore is not dependent on sunlight. Thus one can record uniform imagery
any time of the day or night. In addition, the microwave frequencies at which imaging radars
operate are largely unaffected by the atmosphere. This allows image collection through cloud
cover or rain storms. However, the backscattered signal can be affected. Radar images collected
during heavy rainfall are often seriously attenuated, which decreases the signal-to-noise ratio
(SNR). In addition, the atmosphere does cause perturbations in the signal phase, which
decreases resolution of output products, such as the SAR image or generated DEMs.
Resolution Resolution is a broad term, and general definitions of it are inadequate when describing remotely sensed data. Four distinct types of resolution must be considered:
• spectral—the specific wavelength intervals in the electromagnetic spectrum that a sensor can record
• spatial—the area on the ground represented by each pixel
• radiometric—the number of possible data file values in each band (indicated by the number of bits into which the recorded energy is divided)
• temporal—how often a sensor obtains imagery of a particular area
These four domains contain separate information that can be extracted from the raw data.
Spectral Resolution Spectral resolution refers to the specific wavelength intervals in the electromagnetic spectrum
that a sensor can record (Simonett et al, 1983). For example, band 1 of the Landsat TM sensor
records energy between 0.45 and 0.52 µm in the visible part of the spectrum.
Wide intervals in the electromagnetic spectrum are referred to as coarse spectral resolution, and
narrow intervals are referred to as fine spectral resolution. For example, the SPOT panchromatic
sensor is considered to have coarse spectral resolution because it records EMR between 0.51
and 0.73 µm. On the other hand, band 3 of the Landsat TM sensor has fine spectral resolution
because it records EMR between 0.63 and 0.69 µm (Jensen, 1996).
NOTE: The spectral resolution does not indicate how many levels the signal is broken into.
Spatial Resolution Spatial resolution is a measure of the smallest object that can be resolved by the sensor, or the
area on the ground represented by each pixel (Simonett et al, 1983). The finer the resolution, the
lower the number. For instance, a spatial resolution of 79 meters is coarser than a spatial
resolution of 10 meters.
Scale
The terms large-scale imagery and small-scale imagery often refer to spatial resolution. Scale is
the ratio of distance on a map as related to the true distance on the ground (Star and Estes, 1990).
Large-scale in remote sensing refers to imagery in which each pixel represents a small area on
the ground, such as SPOT data, with a spatial resolution of 10 m or 20 m. Small scale refers to
imagery in which each pixel represents a large area on the ground, such as Advanced Very High
Resolution Radiometer (AVHRR) data, with a spatial resolution of 1.1 km.
This terminology is derived from the fraction used to represent the scale of the map, such as
1:50,000. Small-scale imagery is represented by a small fraction (one over a very large number).
Large-scale imagery is represented by a larger fraction (one over a smaller number). Generally,
anything smaller than 1:250,000 is considered small-scale imagery.
NOTE: Scale and spatial resolution are not always the same thing. An image always has the
same spatial resolution, but it can be presented at different scales (Simonett et al, 1983).
Figure 1-8: IFOV (a 20 m × 20 m instantaneous field of view containing a house)
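The scale arithmetic can be illustrated with a short sketch (hypothetical values):

# Scale is the ratio of map distance to ground distance, e.g. 1:50,000.
scale_denominator = 50_000

map_distance_cm = 2.0
ground_distance_m = map_distance_cm * scale_denominator / 100.0
print(f"{map_distance_cm} cm on a 1:{scale_denominator} map is {ground_distance_m} m on the ground")
# 2.0 cm on a 1:50,000 map is 1000.0 m on the ground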
Radiometric Resolution Radiometric resolution refers to the dynamic range, or number of possible data file values in each band. This is referred to by the number of bits into which the recorded energy is divided.
For instance, in 8-bit data, the data file values range from 0 to 255 for each pixel, but in 7-bit
data, the data file values for each pixel range from 0 to 127.
In Figure 1-9, 8-bit and 7-bit data are illustrated. The sensor measures the EMR in its range. The
total intensity of the energy from 0 to the maximum amount the sensor measures is broken down
into 256 brightness values for 8-bit data, and 128 brightness values for 7-bit data.
Figure 1-9: Brightness Values (the 0 to maximum intensity range recorded by the sensor is divided into 256 brightness values for 8-bit data and 128 brightness values for 7-bit data)
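The relationship between the number of bits and the number of possible data file values can be sketched as follows (a simple illustration, not ERDAS IMAGINE code):

# Number of possible data file values is 2 to the power of the number of bits.
for bits in (1, 7, 8, 16):
    levels = 2 ** bits
    print(f"{bits}-bit data: {levels} possible values, ranging from 0 to {levels - 1}")
# e.g. 8-bit data: 256 possible values, ranging from 0 to 255
#      7-bit data: 128 possible values, ranging from 0 to 127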
Temporal Resolution Temporal resolution refers to how often a sensor obtains imagery of a particular area. For
example, the Landsat satellite can view the same area of the globe once every 16 days. SPOT,
on the other hand, can revisit the same area every three days.
Figure 1-10: Landsat TM—Band 2 (Four Types of Resolution): spectral resolution 0.52 - 0.60 µm; temporal resolution: the same area viewed every 16 days (Day 1, Day 17, Day 31). Source: EOSAT
Data Correction There are several types of errors that can be manifested in remotely sensed data. Among these
are line dropout and striping. These errors can be corrected to an extent in GIS by radiometric
and geometric correction functions.
NOTE: Radiometric errors are usually already corrected in data from EOSAT or SPOT.
Line Dropout Line dropout occurs when a detector either completely fails to function or becomes temporarily
saturated during a scan (like the effect of a camera flash on a human retina). The result is a line
or partial line of data with higher data file values, creating a horizontal streak until the
detector(s) recovers, if it recovers.
Line dropout is usually corrected by replacing the bad line with a line of estimated data file
values. The estimated line is based on the lines above and below it.
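A minimal sketch of that idea, assuming a NumPy array and a hypothetical dropped line (this is an illustration, not the Radar Speckle Suppression or Convolution implementation):

import numpy as np

band = np.random.randint(0, 256, size=(10, 12)).astype(np.float64)
bad_row = 5                    # hypothetical line affected by detector dropout
band[bad_row, :] = 255         # saturated / bad data file values

# Estimate the bad line from the lines directly above and below it.
band[bad_row, :] = (band[bad_row - 1, :] + band[bad_row + 1, :]) / 2.0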
You can correct line dropout using the 5 × 5 Median Filter from the Radar Speckle
Suppression function. The Convolution and Focal Analysis functions in the ERDAS
IMAGINE Image Interpreter also correct for line dropout.
Striping Striping or banding occurs if a detector goes out of adjustment—that is, it provides readings
consistently greater than or less than the other detectors for the same band over the same ground
cover.
Use ERDAS IMAGINE Image Interpreter or ERDAS IMAGINE Spatial Modeler for
implementing algorithms to eliminate striping. The ERDAS IMAGINE Spatial Modeler
editing capabilities allow you to adapt the algorithms to best address the data.
Data Storage Image data can be stored on a variety of media—tapes, CD-ROMs, or floppy diskettes, for
example—but how the data are stored (e.g., structure) is more important than on what they are
stored.
All computer data are in binary format. The basic unit of binary data is a bit. A bit can have two
possible values—0 and 1, or “off” and “on” respectively. A set of bits, however, can have many
more values, depending on the number of bits used. The number of values that can be expressed
by a set of bits is 2 to the power of the number of bits used.
A byte is 8 bits of data. Generally, file size and disk space are referred to by number of bytes.
For example, a PC may have 640 kilobytes (1,024 bytes = 1 kilobyte) of RAM (random access
memory), or a file may need 55,698 bytes of disk space. A megabyte (MB) is about one million bytes. A gigabyte (GB) is about one billion bytes.
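A short sketch of this arithmetic (hypothetical image dimensions) shows how the number of possible values and the uncompressed file size follow from the bits and bytes involved:

bits = 8
values = 2 ** bits                     # 256 possible values for 8-bit data

rows, cols, bands = 7000, 6000, 7      # hypothetical scene dimensions
bytes_per_value = 1                    # 8-bit data uses one byte per data file value
file_bytes = rows * cols * bands * bytes_per_value
print(f"{values} possible values; about {file_bytes / 1e6:.0f} MB uncompressed")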
Storage Formats Image data can be arranged in several ways on a tape or other media. The most common storage
formats are:
• BIL (band interleaved by line)
• BSQ (band sequential)
• BIP (band interleaved by pixel)
For a single band of data, all formats (BIL, BIP, and BSQ) are identical, as long as the data are
not blocked.
BIL
In BIL (band interleaved by line) format, each record in the file contains a scan line (row) of
data for one band (Slater, 1980). All bands of data for a given line are stored consecutively
within the file as shown in Figure 1-11.
Figure 1-11: Band Interleaved by Line (BIL)
Header
Line 1, Band 1; Line 1, Band 2; ... ; Line 1, Band x
Line 2, Band 1; Line 2, Band 2; ... ; Line 2, Band x
...
Line n, Band 1; Line n, Band 2; ... ; Line n, Band x
Trailer
NOTE: Although a header and trailer file are shown in this diagram, not all BIL data contain
header and trailer files.
BSQ
In BSQ (band sequential) format, each entire band is stored consecutively in the same file
(Slater, 1980). This format is advantageous, in that:
• one band can be read and viewed easily, and
• multiple bands can be loaded in any order.
Figure 1-12: Band Sequential (BSQ)
Header File(s)
Image File, Band 1: Line 1, Band 1; Line 2, Band 1; Line 3, Band 1; ... ; Line n, Band 1; end-of-file
Image File, Band 2: Line 1, Band 2; Line 2, Band 2; Line 3, Band 2; ... ; Line n, Band 2; end-of-file
Image File, Band x: Line 1, Band x; Line 2, Band x; Line 3, Band x; ... ; Line n, Band x; end-of-file
Trailer File(s)
Landsat TM data are stored in a type of BSQ format known as fast format. Fast format data have
the following characteristics:
• Files are not split between tapes. If a band starts on the first tape, it ends on the first tape.
• An end-of-volume marker marks the end of each volume (tape). An end-of-volume marker
consists of three end-of-file markers.
• Regular products (not geocoded) are normally unblocked. Geocoded products are normally
blocked (EOSAT).
ERDAS IMAGINE imports all of the header and image file information.
BIP
In BIP (band interleaved by pixel) format, the values for each band are ordered within a given
pixel. The pixels are arranged sequentially on the tape (Slater, 1980). The sequence for BIP
format is:
Pixel 1, Band 1
Pixel 1, Band 2
Pixel 1, Band 3
.
.
.
Pixel 2, Band 1
Pixel 2, Band 2
Pixel 2, Band 3
.
.
.
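For illustration only, the following sketch computes where the value for a given row, column, and band would fall in an unblocked file with no header under each interleaving; the function names and zero-based indexing are assumptions made for this example, not part of any standard:

```python
# Illustrative sketch: byte offset of the value at (row, col, band) in an
# unblocked file with no header, for each storage format.
def offset_bil(row, col, band, ncols, nbands, bytes_per_value=1):
    # BIL: one scan line of one band per record
    return ((row * nbands + band) * ncols + col) * bytes_per_value

def offset_bip(row, col, band, ncols, nbands, bytes_per_value=1):
    # BIP: all band values for a pixel are stored together
    return ((row * ncols + col) * nbands + band) * bytes_per_value

def offset_bsq(row, col, band, ncols, nrows, bytes_per_value=1):
    # BSQ: each band is stored as one contiguous block
    return ((band * nrows + row) * ncols + col) * bytes_per_value
```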
Storage Media Today, most raster data are available on a variety of storage media to meet the needs of users,
depending on the system hardware and devices available. When ordering data, it is sometimes
possible to select the type of media preferred. The most common forms of storage media are
discussed in the following section:
• 9-track tape
• 4 mm tape
• 8 mm tape
• CD-ROM/optical disk
• videotape
Tape
The data on a tape can be divided into logical records and physical records. A record is the basic
storage unit on a tape.
• A logical record is a series of bytes that form a unit. For example, all the data for one line
of an image may form a logical record.
• A physical record is a consecutive series of bytes on the tape, followed by a gap, or blank space, on the tape.
Blocked Data
For reasons of efficiency, data can be blocked to fit more on a tape. Blocked data are sequenced
so that there are more logical records in each physical record. The number of logical records in
each physical record is the blocking factor. For instance, a physical record may contain 28,000 bytes but
represent only 4,000 columns per line, due to a blocking factor of 7 (seven 4,000-byte logical records per physical record).
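The arithmetic behind that example, as a small illustrative sketch:

```python
# Illustrative sketch of the example above: a blocking factor of 7 means
# seven 4,000-byte logical records per physical record.
blocking_factor = 7
logical_record_bytes = 4000
physical_record_bytes = blocking_factor * logical_record_bytes
print(physical_record_bytes)   # 28000
```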
Tape Contents
Tapes are available in a variety of sizes and storage capacities. To obtain information about the
data on a particular tape, read the tape label or box, or read the header file. Often, there is limited
information on the outside of the tape. Therefore, it may be necessary to read the header files
on each tape for specific information, such as:
• number of bands
• blocking factor
4 mm Tapes
The 4 mm tape is a relative newcomer in the world of GIS. This tape is a mere 2” × .75” in size,
but it can hold up to 2 Gb of data. This petite cassette offers an obvious shipping and storage
advantage because of its size.
8 mm Tapes
The 8 mm tape offers the advantage of storing vast amounts of data. Tapes are available in 5
and 10 Gb storage capacities (although some tape drives cannot handle the 10 Gb size). The 8
mm tape is a 2.5” × 4” cassette, which makes it easy to ship and handle.
9-Track Tapes
A 9-track tape is an older format that was the standard for two decades. It is a large circular tape
approximately 10” in diameter. It requires a 9-track tape drive as a peripheral device for
retrieving data. The size and storage capability make 9-track less convenient than 8 mm or 1/4”
tapes. However, 9-track tapes are still widely used.
A single 9-track tape may be referred to as a volume. The complete set of tapes that contains
one image is referred to as a volume set.
The storage format of a 9-track tape in binary format is described by the number of bits per inch,
bpi, on the tape. The tapes most commonly used have either 1600 or 6250 bpi. The number of
bits per inch on a tape is also referred to as the tape density. Depending on the length of the tape,
9-track tapes can store between 120 and 150 Mb of data.
CD-ROM
Data such as ADRG and Digital Line Graphs (DLG) are most often available on CD-ROM,
although many types of data can be requested in CD-ROM format. A CD-ROM is an optical
read-only storage device which can be read with a CD player. CD-ROMs offer the advantage
of storing large amounts of data in a small, compact device. Up to 644 Mb can be stored on a
CD-ROM. Also, since this device is read-only, it protects the data from accidentally being
overwritten, erased, or changed from its original integrity. This is the most stable of the current
media storage types and data stored on CD-ROM are expected to last for decades without
degradation.
Calculating Disk Space To calculate the amount of disk space a raster file requires on an ERDAS IMAGINE system,
use the following formula:
file size = (y × x × b × n) × 1.4
Where:
y = rows
x = columns
b = number of bytes per pixel
n = number of bands
The factor 1.4 adds about 30% to the file size for pyramid layers and 10% for miscellaneous
adjustments, such as histograms, lookup tables, etc.
For example, to load a 3-band, 16-bit file with 500 rows and 500 columns, about 2,100,000 bytes
of disk space are needed.
NOTE: On the PC, disk space is shown in bytes. On the workstation, disk space is shown as
kilobytes (1,024 bytes).
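The estimate above can be written as a small sketch (illustrative only; ERDAS IMAGINE performs this calculation itself):

```python
# Illustrative sketch of the disk-space estimate above.
def img_file_size(rows, cols, bytes_per_pixel, bands):
    """Approximate file size in bytes: (y * x * b * n) * 1.4."""
    return rows * cols * bytes_per_pixel * bands * 1.4

print(img_file_size(500, 500, 2, 3))   # 2100000.0 bytes for the example above
```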
ERDAS IMAGINE Format (.img) In ERDAS IMAGINE, file name extensions identify the file type. When data are imported into
ERDAS IMAGINE, they are converted to the ERDAS IMAGINE file format and stored in
image files. ERDAS IMAGINE image files (.img) can contain two types of raster layers:
• thematic
• continuous
An image file can store a combination of thematic and continuous layers, or just one type.
Thematic layers contain qualitative, categorical information about an area, such as:
• soils
• land use
• land cover
• roads
• hydrology
See Chapter 4 “Image Display” for information on displaying thematic raster layers.
Continuous layers contain quantitative, continuous data values. Examples of continuous raster layers include:
• Landsat
• SPOT
• DEM
• slope
• temperature
NOTE: Continuous raster layers can be displayed as either a gray scale raster layer or a true
color raster layer.
Tiled Data
Data in the .img format are tiled data. Tiled data are stored in tiles that can be set to any size.
In addition to the raster layers, the image file contains other information, such as:
• statistics
• lookup tables
• map coordinates
• map projection
This additional information can be viewed using the Image Information function located
on the Viewer’s tool bar.
Statistics
In ERDAS IMAGINE, the file statistics are generated from the data file values in the layer and
incorporated into the image file. This statistical information is used to create many program
defaults, and helps you make processing decisions.
Pyramid Layers
Sometimes a large image takes longer than normal to display in the Viewer. The pyramid layer
option enables you to display large images faster. Pyramid layers are image layers which are
successively reduced by powers of 2 and resampled.
The Pyramid Layer option is available in the Image Information function located on the
Viewer’s tool bar and, from the Import function.
See Chapter 4 “Image Display” for more information on pyramid layers. See the On-Line
Help for detailed information on ERDAS IMAGINE file formats.
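The following sketch illustrates how successive pyramid levels shrink by powers of 2; the 64-pixel stopping size is an assumption made for this example, not an ERDAS IMAGINE setting:

```python
# Illustrative sketch: each pyramid level has half the rows and columns of
# the level below it.
def pyramid_levels(rows, cols, min_size=64):
    levels = []
    while rows > min_size and cols > min_size:
        rows, cols = rows // 2, cols // 2
        levels.append((rows, cols))
    return levels

print(pyramid_levels(4096, 4096))
# [(2048, 2048), (1024, 1024), (512, 512), (256, 256), (128, 128), (64, 64)]
```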
Image File Organization Data are easy to locate if the data files are well organized. Well organized files also make data
more accessible to anyone who uses the system. Using consistent naming conventions and the
ERDAS IMAGINE Image Catalog helps keep image files well organized and accessible.
Consistent Naming Convention Many processes create an output file, and every time a file is created, it is necessary to assign a
file name. The name that is used can either cause confusion about the process that has taken
place, or it can clarify and give direction. For example, if the name of the output file is
image.img, it is difficult to determine the contents of the file. On the other hand, if a standard
nomenclature is developed in which the file name refers to a process or contents of the file, it is
possible to determine the progress of a project and contents of a file by examining the directory.
Develop a naming convention that is based on the contents of the file. This helps everyone
involved know what the file contains. For example, in a project to create a map composition for
Lake Lanier, a directory for the files may look similar to the one below:
lanierTM.img
lanierSPOT.img
lanierSymbols.ovr
lanierlegends.map.ovr
lanierScalebars.map.ovr
lanier.map
lanier.plt
lanier.gcc
lanierUTM.img
From this listing, one can make some educated guesses about the contents of each file based on
naming conventions used. For example, lanierTM.img is probably a Landsat TM scene of Lake
Lanier. The file lanier.map is probably a map composition that has map frames with
lanierTM.img and lanierSPOT.img data in them. The file lanierUTM.img was probably created
when lanierTM.img was rectified to a UTM map projection.
Keeping Track of Image Files Using a database to store information about images enables you to track image files (.img)
without having to know the name or location of the file. The database can be queried for specific
parameters (e.g., size, type, map projection) and the database returns a list of image files that
match the search criteria. This file information helps to quickly determine which image(s) to
use, where it is located, and its ancillary data. An image database is especially helpful when
there are many image files and even many on-going projects. For example, you could use the
database to search for all of the image files of Georgia that have a UTM map projection.
Use the ERDAS IMAGINE Image Catalog to track and store information for image files
(.img) that are imported and created in ERDAS IMAGINE.
NOTE: All information in the Image Catalog database, except archive information, is extracted
from the image file header. Therefore, if this information is modified in the Image Information
utility, it is necessary to recatalog the image in order to update the information in the Image
Catalog database.
Geocoded Data Geocoding, also known as georeferencing, is the geographical registration or coding of the
pixels in an image. Geocoded data are images that have been rectified to a particular map
projection and pixel size.
Raw, remotely-sensed image data are gathered by a sensor on a platform, such as an aircraft or
satellite. In this raw form, the image data are not referenced to a map projection. Rectification
is the process of projecting the data onto a plane and making them conform to a map projection
system.
It is possible to geocode raw image data with the ERDAS IMAGINE rectification tools.
Geocoded data are also available from Space Imaging EOSAT and SPOT.
See Appendix B “Map Projections” for detailed information on the different projections
available. See Chapter 10 “Rectification” for information on geocoding raw imagery with
ERDAS IMAGINE.
Using Image Data in GIS ERDAS IMAGINE provides many tools designed to extract the necessary information from the
images in a database. The following chapters in this book describe many of these processes.
This section briefly describes some basic image file techniques that may be useful for any
application.
Subsetting and Mosaicking Within ERDAS IMAGINE, there are options available to make additional image files from
those acquired from EOSAT, SPOT, etc. These options involve combining files, mosaicking,
and subsetting.
ERDAS IMAGINE programs allow image data with an unlimited number of bands, but the most
common satellite data types—Landsat and SPOT—have seven or fewer bands. Image files can
be created with more than seven bands.
It may be useful to combine data from two different dates into one file. This is called
multitemporal imagery. For example, a user may want to combine Landsat TM from one date
with TM data from a later date, then perform a classification based on the combined data. This
is particularly useful for change detection studies.
You can also incorporate elevation data into an existing image file as another band, or create
new bands through various enhancement techniques.
To combine two or more image files, each file must be georeferenced to the same
coordinate system, or to each other. See Chapter 10 “Rectification” for information on
georeferencing images.
Subset
Subsetting refers to breaking out a portion of a large file into one or more smaller files. Often,
image files contain areas much larger than a particular study area. In these cases, it is helpful to
reduce the size of the image file to include only the area of interest (AOI). This not only
eliminates the extraneous data in the file, but it speeds up processing due to the smaller amount
of data to process. This can be important when dealing with multiband data.
The ERDAS IMAGINE Import option often lets you define a subset area of an image to
preview or import. You can also use the Subset option from ERDAS IMAGINE Image
Interpreter to define a subset area.
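Conceptually, a subset simply keeps the rows and columns that fall inside the AOI. A minimal sketch, using plain row/column indices rather than map coordinates:

```python
# Illustrative sketch: a subset keeps only the rows and columns inside the
# area of interest.
def subset(image, first_row, last_row, first_col, last_col):
    return [row[first_col:last_col + 1] for row in image[first_row:last_row + 1]]
```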
Mosaic
On the other hand, the study area in which you are interested may span several image files. In
this case, it is necessary to combine the images to create one large file. This is called
mosaicking.
To create a mosaicked image, use the Mosaic Images option from the Data Preparation
menu.
Enhancement Image enhancement is the process of making an image more interpretable for a particular
application (Faust, 1989). Enhancement can make important features of raw, remotely sensed
data and aerial photographs more interpretable to the human eye. Enhancement techniques are
often used instead of classification for extracting useful information from images.
There are many enhancement techniques available. They range in complexity from a simple
contrast stretch, where the original data file values are stretched to fit the range of the display
device, to principal components analysis, where the number of image file bands can be reduced
and new bands created to account for the most variance in the data.
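A minimal sketch of the simple contrast stretch mentioned above, assuming an 8-bit display range of 0 to 255 (illustrative only; the ERDAS IMAGINE enhancement tools are described in Chapter 6):

```python
# Illustrative sketch of a simple contrast stretch: data file values are
# linearly rescaled to the 0-255 range of a typical 8-bit display.
def linear_stretch(values, out_min=0, out_max=255):
    lo, hi = min(values), max(values)
    if hi == lo:
        return [out_min for _ in values]
    scale = (out_max - out_min) / (hi - lo)
    return [round((v - lo) * scale) + out_min for v in values]

print(linear_stretch([30, 45, 60, 90]))   # [0, 64, 128, 255]
```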
Multispectral Classification Image data are often used to create thematic files through multispectral classification. This
entails using spectral pattern recognition to identify groups of pixels that represent a common
characteristic of the scene, such as soil type or vegetation.
Editing Raster Data ERDAS IMAGINE provides raster editing tools for editing the data values of thematic and
continuous raster data. This is primarily a correction mechanism that enables you to correct bad
data values which produce noise, such as spikes and holes in imagery. The raster editing
functions can be applied to the entire image or a user-selected area of interest (AOI).
With raster editing, data values in thematic data can also be recoded according to class.
Recoding is a function that reassigns data values to a region or to an entire class of pixels.
See Chapter 12 “Geographic Information Systems” for information about recoding data.
See Chapter 6 “Enhancement” for information about reducing data noise using spatial
filtering.
The ERDAS IMAGINE raster editing functions allow the use of focal and global spatial
modeling functions for computing the values to replace noisy pixels or areas in continuous or
thematic data.
Focal operations are filters that calculate the replacement value based on a window
(3 × 3, 5 × 5, etc.), and replace the pixel of interest with the replacement value. Therefore this
function affects one pixel at a time, and the number of surrounding pixels that influence the
value is determined by the size of the moving window.
Global operations calculate the replacement value for an entire area rather than affecting one
pixel at a time. These functions, specifically the Majority option, are more applicable to
thematic data.
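As an illustration of a focal operation (not the ERDAS IMAGINE implementation), the following sketch replaces each interior pixel with the median of a 3 × 3 moving window:

```python
# Illustrative sketch of a focal operation: replace each interior pixel with
# the median of a 3 x 3 moving window (edge pixels are left unchanged here).
import statistics

def focal_median(image):
    rows, cols = len(image), len(image[0])
    out = [row[:] for row in image]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            window = [image[r + dr][c + dc]
                      for dr in (-1, 0, 1) for dc in (-1, 0, 1)]
            out[r][c] = statistics.median(window)
    return out
```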
See the ERDAS IMAGINE On-Line Help for information about using and selecting AOIs.
Editing Continuous (Athematic) Data
Editing DEMs
DEMs occasionally contain spurious pixels or bad data. These spikes, holes, and other noise
caused by automatic DEM extraction can be corrected by editing the raster data values and
replacing them with meaningful values. This discussion of raster editing focuses on DEM
editing.
The ERDAS IMAGINE Raster Editing functionality was originally designed to edit DEMs,
but it can also be used with images of other continuous data sources, such as radar, SPOT,
Landsat, and digitized photographs.
When editing continuous raster data, you can modify or replace original pixel values with the
following:
• the average of the buffering pixels—replace the original pixel value with the average of the
pixels in a specified buffer area around the AOI. This is used where the constant values of
the AOI are not known, but the area is flat or homogeneous with little variation (for
example, a lake).
• the original data value plus a constant value—add a negative constant value to the original
data values to compensate for the height of trees and other vertical features in the DEM.
This technique is commonly used in forested areas.
• spatial filtering—filter data values to eliminate noise such as spikes or holes in the data.
Interpolation Techniques While the previously listed raster editing techniques are perfectly suitable for some
applications, the following interpolation techniques provide the best methods for raster editing:
• 2-D polynomial
• multisurface functions
• distance weighting
Each pixel’s data value is interpolated from the reference points in the data file. These
interpolation techniques are described below:
2-D Polynomial
This interpolation technique provides faster interpolation calculations than distance weighting
and multisurface functions. The following equation is used:
V = a0 + a1x + a2y + a3x² + a4xy + a5y² + . . .
Field Guide 29
Raster Data
Where:
V = data value (elevation value for DEM)
a = polynomial coefficients
x = x coordinate
y = y coordinate
Multisurface Functions
The multisurface technique provides the most accurate results for editing DEMs that have been
created through automatic extraction. The following equation is used:
V = ∑ Wi Qi
Where:
V = output data value (elevation value for DEM)
Wi = coefficients derived by the least squares method
Qi = distance-related kernels that are interpretable as continuous, single-value surfaces
Source: Wang, Z., 1990
Distance Weighting
The weighting function determines how the output data values are interpolated from a set of
reference data points. For each pixel, the values of all reference points are weighted by a value
corresponding with the distance between each point and the pixel.
The weighting function used in ERDAS IMAGINE is:
W = (S / D − 1)²
Where:
S = normalization factor
D = distance between the output data point and the reference point
The value for any given pixel is calculated by taking the sum of weighting factors for all
reference points multiplied by the data values of those points, and dividing by the sum of the
weighting factors:
V = ( Σ Wi × Vi ) / ( Σ Wi ), with both sums taken over i = 1 to n
Where:
V = output data value (elevation value for DEM)
i = ith reference point
Wi = weighting factor of point i
Vi = data value of point i
n = number of reference points
Source: Wang, Z., 1990
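A minimal sketch of the distance weighting calculation defined above; the normalization factor s and the reference points are placeholders you would supply:

```python
# Illustrative sketch of the distance weighting interpolation defined above.
import math

def weight(distance, s):
    """W = (S / D - 1)^2 for a reference point at the given distance."""
    return (s / distance - 1.0) ** 2

def interpolate(x, y, reference_points, s):
    """reference_points is a list of (xi, yi, value) tuples."""
    num = den = 0.0
    for xi, yi, value in reference_points:
        d = math.hypot(x - xi, y - yi)
        if d == 0:
            return value          # the pixel coincides with a reference point
        w = weight(d, s)
        num += w * value
        den += w
    return num / den
```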
Chapter 2
Vector Layers
Introduction ERDAS IMAGINE is designed to integrate two data types, raster and vector, into one system.
While the previous chapter explored the characteristics of raster data, this chapter is focused on
vector data. The vector data structure in ERDAS IMAGINE is based on the ArcInfo data model
(developed by ESRI, Inc.). This chapter describes vector data, attribute information, and
symbolization.
You do not need ArcInfo software or an ArcInfo license to use the vector capabilities in
ERDAS IMAGINE. Since the ArcInfo data model is used in ERDAS IMAGINE, you can
use ArcInfo coverages directly without importing them.
Vector data consist of:
• points
• lines
• polygons
(Figure: points, lines, and polygons, with their vertices, nodes, and label points.)
Field Guide 33
Vector Layers
Points A point is represented by a single x, y coordinate pair. Points can represent the location of a
geographic feature or a point that has no area, such as a mountain peak. Label points are also
used to identify polygons (see Figure 2-2).
Lines A line (polyline) is a set of line segments and represents a linear geographic feature, such as a
river, road, or utility line. Lines can also represent nongeographical boundaries, such as voting
districts, school zones, contour lines, etc.
Polygons A polygon is a closed line or closed set of lines defining a homogeneous area, such as soil type,
land use, or water body. Polygons can also be used to represent nongeographical features, such
as wildlife habitats, state borders, commercial districts, etc. Polygons also contain label points
that identify the polygon. The label point links the polygon to its attributes.
Vertex The points that define a line are vertices. A vertex is a point that defines an element, such as the
endpoint of a line segment or a location in a polygon where the line segment defining the
polygon changes direction. The ending points of a line are called nodes. Each line has two
nodes: a from-node and a to-node. The from-node is the first vertex in a line. The to-node is the
last vertex in a line. Lines join other lines only at nodes. A series of lines in which the from-
node of the first line joins the to-node of the last line is a polygon.
(Figure 2-2: a line and a polygon, showing their vertices and the polygon’s label point.)
In Figure 2-2, the line and the polygon are each defined by three vertices.
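As an illustration of these terms (not the ArcInfo or ERDAS IMAGINE data model itself), a line can be thought of as an ordered list of vertices whose ends are its nodes, and a polygon as a closed line with a label point tying it to its attributes:

```python
# Illustrative sketch only; not the ArcInfo/ERDAS IMAGINE data model.
from dataclasses import dataclass, field

@dataclass
class Line:
    vertices: list                      # ordered [(x, y), ...] pairs

    @property
    def from_node(self):                # first vertex of the line
        return self.vertices[0]

    @property
    def to_node(self):                  # last vertex of the line
        return self.vertices[-1]

@dataclass
class Polygon:
    boundary: Line                      # closed line: from_node == to_node
    label_point: tuple                  # (x, y) point identifying the polygon
    attributes: dict = field(default_factory=dict)

road = Line(vertices=[(0, 0), (5, 2), (9, 4)])
print(road.from_node, road.to_node)     # (0, 0) (9, 4)
```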
Coordinates Vector data are expressed by the coordinates of vertices. The vertices that define each element
are referenced with x, y, or Cartesian, coordinates. In some instances, those coordinates may be
inches [as in some computer-aided design (CAD) applications], but often the coordinates are
map coordinates, such as State Plane, Universal Transverse Mercator (UTM), or Lambert
Conformal Conic. Vector data digitized from an ungeoreferenced image are expressed in file
coordinates.
Tics
Vector layers are referenced to coordinates or a map projection system using tic files that
contain geographic control points for the layer. Every vector layer must have a tic file. Tics are
not topologically linked to other features in the layer and do not have descriptive data associated
with them.
Vector Layers Although it is possible to have points, lines, and polygons in a single layer, a layer typically
consists of one type of feature. It is possible to have one vector layer for streams (lines) and
another layer for parcels (polygons). A vector layer is defined as a set of features where each
feature has a location (defined by coordinates and topological pointers to other features) and,
possibly attributes (defined as a set of named items or variables) (ESRI 1989). Vector layers
contain both the vector features (points, lines, polygons) and the attribute information.
Usually, vector layers are also divided by the type of information they represent. This enables
the user to isolate data into themes, similar to the themes used in raster layers. Political districts
and soil types would probably be in separate layers, even though both are represented with
polygons. If the project requires that the coincidence of features in two or more layers be
studied, the user can overlay them or create a new layer.
See Chapter 12 “Geographic Information Systems” for more information about analyzing
vector layers.
Topology The spatial relationships between features in a vector layer are defined using topology. In
topological vector data, a mathematical procedure is used to define connections between
features, identify adjacent polygons, and define a feature as a set of other features (e.g., a
polygon is made of connecting lines) (Environmental Systems Research Institute, 1990).
Topology is not automatically created when a vector layer is created. It must be added later
using specific functions. Topology must also be updated after a layer is edited.
“Digitizing” describes how topology is created for a new or edited vector layer.
Vector Files As mentioned above, the ERDAS IMAGINE vector structure is based on the ArcInfo data
model used for ARC coverages. This georelational data model is actually a set of files using the
computer’s operating system for file management and input/output. An ERDAS IMAGINE
vector layer is stored in subdirectories on the disk. Vector data are represented by a set of logical
tables of information, stored as files within the subdirectory. These files may serve the
following purposes:
• define features
A workspace is a location that contains one or more vector layers. Workspaces provide a
convenient means for organizing layers into related groups. They also provide a place for the
storage of tabular data not directly tied to a particular layer. Each workspace is completely
independent. It is possible to have an unlimited number of workspaces and an unlimited number
of vector layers in a workspace. Table 2-1 summarizes the types of files that are used to make
up vector layers.
Figure 2-3 illustrates how a typical vector workspace is set up (Environmental Systems
Research Institute, 1992).
(Figure 2-3: a workspace directory named georgia, containing the vector layer subdirectories parcels and testdata.)
Because vector layers are stored in directories rather than in simple files, you MUST use
the utilities provided in ERDAS IMAGINE to copy and rename them. A utility is also
provided to update path names that are no longer correct due to the use of regular system
commands on vector layers.
See the ESRI documentation for more detailed information about the different vector files.
Attribute Information Along with points, lines, and polygons, a vector layer can have a wealth of associated
descriptive, or attribute, information. Attribute information is displayed in
CellArrays. This is the same information that is stored in the INFO database of ArcInfo. Some
attributes are automatically generated when the layer is created. Custom fields can be added to
each attribute table. Attribute fields can contain numerical or character data.
The attributes for a roads layer may look similar to the example in Figure 2-4. You can select
features in the layer based on the attribute information. Likewise, when a row is selected in the
attribute CellArray, that feature is highlighted in the Viewer.
To utilize all of this attribute information, the INFO files can be merged into the PAT and AAT
files. Once this attribute information has been merged, it can be viewed in CellArrays and edited
as desired. This new information can then be exported back to its original format.
The complete path of the file must be specified when establishing an INFO file name in a
Viewer application, such as exporting attributes or merging attributes, as shown in the following
example:
/georgia/parcels/info!arc!parcels.pcode
Use the Attributes option in the Viewer to view and manipulate vector attribute data,
including merging and exporting. (The Raster Attribute Editor is for raster attributes only
and cannot be used to edit vector attributes.)
See the ERDAS IMAGINE On-Line Help for more information about using CellArrays.
Displaying Vector Data Vector data are displayed in Viewers, as are other data types in ERDAS IMAGINE. You can
display a single vector layer, overlay several layers in one Viewer, or display a vector layer(s)
over a raster layer(s).
In layers that contain more than one feature (a combination of points, lines, and polygons), you
can select which features to display. For example, if you are studying parcels, you may want to
display only the polygons in a layer that also contains street centerlines (lines).
Color Schemes Vector data are usually assigned class values in the same manner as the pixels in a thematic
raster file. These class values correspond to different colors on the display screen. As with a
pseudo color image, you can assign a color scheme for displaying the vector classes.
See Chapter 4 “Image Display” for a thorough discussion of how images are displayed.
Symbolization Vector layers can be displayed with symbolization, meaning that the attributes can be used to
determine how points, lines, and polygons are rendered. Points, lines, polygons, and nodes are
symbolized using styles and symbols similar to annotation. For example, if a point layer
represents cities and towns, the appropriate symbol could be used at each point based on the
population of that area.
Points
Point symbolization options include symbol, size, and color. The symbols available are the
same symbols available for annotation.
Lines
Lines can be symbolized with varying line patterns, composition, width, and color. The line
styles available are the same as those available for annotation.
Polygons
Polygons can be symbolized as lines or as filled polygons. Polygons symbolized as lines can
have varying line styles (see “Lines”). For filled polygons, either a solid fill color or a repeated
symbol can be selected. When symbols are used, you select the symbol to use, the symbol size,
symbol color, background color, and the x- and y-separation between symbols. Figure 2-5
illustrates a pattern fill.
See the ERDAS IMAGINE Tour Guides or On-Line Help for information about selecting
features and using CellArrays.
Vector Data Sources Vector layers can be created from several sources, including:
• screen digitizing—create new vector layers by using the mouse to digitize on the screen
• using other software packages—many external vector data types can be converted to
ERDAS IMAGINE vector layers
Digitizing In the broadest sense, digitizing refers to any process that converts nondigital data into numbers.
However, in ERDAS IMAGINE, the digitizing of vectors refers to the creation of vector data
from hardcopy materials or raster images that are traced using a digitizer keypad on a digitizing
tablet or a mouse on a displayed image.
Any image not already in digital format must be digitized before it can be read by the computer
and incorporated into the database. Most Landsat, SPOT, or other satellite data are already in
digital format upon receipt, so it is not necessary to digitize them. However, you may also have
maps, photographs, or other nondigital data that contain information you want to incorporate
into the study. Or, you may want to extract certain features from a digital image to include in a
vector layer. Tablet digitizing and screen digitizing enable you to digitize certain features of a
map or photograph, such as roads, bodies of water, voting districts, and so forth.
Tablet Digitizing Tablet digitizing involves the use of a digitizing tablet to transfer nondigital data such as maps
or photographs to vector format. The digitizing tablet contains an internal electronic grid that
transmits data to ERDAS IMAGINE on cue from a digitizer keypad operated by you.
Digitizer Setup
The map or photograph to be digitized is secured on the tablet, and a coordinate system is
established with a setup procedure.
Digitizer Operation
The handheld digitizer keypad features a small window with a crosshair and keypad buttons.
Position the intersection of the crosshair directly over the point to be digitized. Depending on
the type of equipment and the program being used, one of the input buttons is pushed to tell the
system which function to perform, such as:
Move the puck along the desired polygon boundaries or lines, digitizing points at appropriate
intervals (where lines curve or change direction), until all the points are collected.
Newly created vector layers do not contain topological data. You must create topology
using the Build or Clean options. This is discussed further in Chapter 12 “Geographic
Information Systems”.
Digitizing Modes
There are two modes used in digitizing:
• point mode—one point is digitized each time a keypad button is pressed
• stream mode—points are generated continuously at specified intervals, while the puck is in
proximity to the surface of the digitizing tablet
You can create a new vector layer from the Viewer. Select the Tablet Input function from
the Viewer to use a digitizing tablet to enter new information into that layer.
Measurement
The digitizing tablet can also be used to measure both linear and areal distances on a map or
photograph. The digitizer puck is used to outline the areas to measure. You can measure both linear distances (lengths and perimeters) and areas.
Measurements can be saved to a file, printed, and copied. These operations can also be
performed with screen digitizing.
Select the Measure function from the Viewer or click on the Ruler tool in the Viewer tool
bar to enable tablet or screen measurement.
Screen Digitizing In screen digitizing, vector data are drawn with a mouse in the Viewer using the displayed image
as a reference. These data are then written to a vector layer.
Screen digitizing is used for the same purposes as tablet digitizing, such as:
Imported Vector Data Many types of vector data from other software packages can be incorporated into the ERDAS
IMAGINE system. These data formats include:
• Vector Product Format (VPF) files from the Defense Mapping Agency
See Chapter 3 “Raster and Vector Data Sources” for more information on these data.
Raster to Vector Conversion A raster layer can be converted to a vector layer and used as another layer in a vector database.
For example, a thematic file in raster format can be converted to vector format.
Most commonly, thematic raster data rather than continuous data are converted to vector format,
since converting continuous layers may create more vector features than are practical or even
manageable.
Convert vector data to raster data, and vice versa, using IMAGINE Vector™.
Other Vector Data Types While this chapter has focused mainly on the ArcInfo coverage format, there are other types of
vector formats that you can use in ERDAS IMAGINE. The two primary types are:
• shapefile
• SDE
Shapefile Vector Format The shapefile vector format was designed by ESRI. You can use the shapefile format
(extension .shp) in ERDAS IMAGINE. You can:
• display shapefiles
• create shapefiles
• edit shapefiles
• attribute shapefiles
• symbolize shapefiles
• print shapefiles
SDE Like the shapefile format, the Spatial Database Engine (SDE) is a vector format designed by
ESRI. The data layers are stored in a relational database management system (RDBMS) such as
Oracle or SQL Server. Some of the features of SDE include:
• powerful and flexible query capabilities using the SQL where clause
ERDAS IMAGINE can act as a client to access SDE vector layers stored in a database. To do
this, it uses a wizard interface to connect ERDAS IMAGINE to an SDE database and select one
of the vector layers. Additionally, it can join business tables with the vector layer, and generate
a subset of features by imposing attribute constraints (e.g., an SQL where clause).
The definition of the vector layer as extracted from an SDE database is stored in a
<layername>.sdv file, and can be loaded as a regular ERDAS IMAGINE data file. ERDAS
IMAGINE supports the SDE projection systems. Currently, ERDAS IMAGINE’s SDE
capability is read-only; features can be queried and AOIs can be created, but features cannot be
edited.
SDTS SDTS stands for Spatial Data Transfer Standard. SDTS is used to transfer spatial data between
computer systems. Such data include attribute information, georeferencing, data quality reports,
data dictionaries, and supporting metadata.
According to the USGS, the
implementation of SDTS is of significant interest to users and producers of digital spatial
data because of the potential for increased access to and sharing of spatial data, the
reduction of information loss in data exchange, the elimination of the duplication of data
acquisition, and the increase in the quality and integrity of spatial data (United States
Geological Survey, 1999c).
The components of SDTS are broken down into six parts. The first three parts are related, but
independent, and are concerned with the transfer of spatial data. The last three parts provide
definitions for rules and formats for applying SDTS to the exchange of data. The parts of SDTS
are as follows:
ArcGIS Integration ArcGIS Integration is the method you use to access the data in a geodatabase. The term
geodatabase is the short form of geographic database. The geodatabase is hosted inside a
relational database management system that provides services for managing geographic data.
The services include validation rules, relationships, and topological associations. ERDAS
IMAGINE has always supported ESRI data formats such as coverages and shapefiles, and now,
using ArcGIS Vector Integration, ERDAS IMAGINE can also access CAD and VPF data on
the internet.
There are two types of geodatabases: personal and enterprise. The personal geodatabases are
for use by an individual or small group, and the enterprise geodatabases are for use by large
groups. Industrial strength host systems such as Oracle support the organizational structure of
enterprise geodatabases. The organization of both personal and enterprise geodatabases starts
with a workspace that contains both spatial and non-spatial datasets such as feature classes,
raster datasets, and tables. An example of a feature dataset would be U.S. Agriculture. Within
the datasets are feature classes. An example of a feature class would be U.S. Hydrology. Within
every feature class are particular features like wells and lakes. Each feature class will be
symbolized by only one type of geometry such as points symbolizing wells or polygons
symbolizing lakes.
It is important to remember that when you delete a personal database connection, the entire database
is deleted from disk. When you delete a database connection on an enterprise database, only the
connection is broken, and nothing in the geodatabase is deleted.
Chapter 3
Raster and Vector Data Sources
Introduction This chapter is an introduction to the most common raster and vector data types that can be used
with the ERDAS IMAGINE software package. The raster data types covered include:
• radar imagery
Importing and Exporting
Raster Data There is an abundance of data available for use in GIS today. In addition to satellite and airborne
imagery, raster data sources include digital x-rays, sonar, microscopic imagery, video digitized
data, and many other sources.
Because of the wide variety of data formats, ERDAS IMAGINE provides two options for
importing data:
Import
Table 3-1 lists some of the raster data formats that can be imported to, exported from, directly
read from, and directly written to ERDAS IMAGINE.
There is a distinct difference between import and direct read. Import means that the data
is converted from its original format into another format (e.g. IMG, TIFF, or GRID Stack),
which can be read directly by ERDAS IMAGINE. Direct read formats are those formats
which the Viewer and many of its associated tools can read immediately without any
conversion process.
Data Type    Import    Export    Direct Read    Direct Write
ADRG • •
ADRI •
ARCGEN • •
Arc Coverage • •
ArcInfo & Space Imaging BIL, BIP, BSQ • • •
Arc Interchange • •
ASCII •
ASRP • •
ASTER (EOS HDF Format) •
AVHRR (NOAA) •
AVHRR (Dundee Format) •
AVHRR (Sharp) •
BIL, BIP, BSQ (Generic Binary) a • • • b
CADRG (Compressed ADRG) • • •
CIB (Controlled Image Base) • • •
DAEDALUS •
USGS DEM • •
DOQ • •
DOQ (JPEG) • •
DTED • • •
ER Mapper •
ERS (I-PAF CEOS) •
ERS (Conae-PAF CEOS) •
ERS (Tel Aviv-PAF CEOS) •
ERS (D-PAF CEOS) •
ERS (UK-PAF CEOS) •
FIT •
Generic Binary (BIL, BIP, BSQ) a • • • b
GeoTIFF • • • •
GIS (Erdas 7.x) • • •
GRASS • •
GRID • • •
GRID Stack • • • •
GRID Stack 7.x • • •
GRD (Surfer: ASCII/Binary) • •
IRS-1C/1D (EOSAT Fast Format C) •
IRS-1C/1D (EUROMAP Fast Format C) •
IRS-1C/1D (Super Structured Format) •
JFIF (JPEG) • • •
Landsat-7 Fast-L7A ACRES •
Landsat-7 Fast-L7A EROS •
Landsat-7 Fast-L7A Eurimage •
LAN (Erdas 7.x) • • •
MODIS (EOS HDF Format) •
MrSID • •
MSS Landsat •
NLAPS Data Format (NDF) •
NASDA CEOS •
PCX • • •
RADARSAT (Vancouver CEOS) •
RADARSAT (Acres CEOS) •
RADARSAT (West Freugh CEOS) •
Raster Product Format • • •
SDE • •
SDTS • •
SeaWiFS L1B and L2A (OrbView) •
Shapefile • • • •
SPOT •
SPOT CCRS •
SPOT (GeoSpot) •
SPOT SICORP MetroView •
SUN Raster • •
TIFF • • • •
TM Landsat Acres Fast Format •
TM Landsat Acres Standard Format •
TM Landsat EOSAT Fast Format •
TM Landsat EOSAT Standard Format •
TM Landsat ESA Fast Format •
TM Landsat ESA Standard Format •
TM Landsat-7 Eurimage CEOS (Multispectral) •
TM Landsat-7 Eurimage CEOS (Panchromatic) •
TM Landsat-7 HDF Format •
TM Landsat IRS Fast Format •
TM Landsat IRS Standard Format •
TM Landsat-7 Fast-L7A ACRES •
TM Landsat-7 Fast-L7A EROS •
TM Landsat-7 Fast-L7A Eurimage •
TM Landsat Radarsat Fast Format •
TM Landsat Radarsat Standard Format •
USRP • •
b Direct read of generic binary data requires an accompanying header file in the ESRI ArcInfo, Space Imaging, or
ERDAS IMAGINE formats.
The import function converts raster data to the ERDAS IMAGINE file format (.img), or other
formats directly writable by ERDAS IMAGINE. The import function imports the data file
values that make up the raster image, as well as the ephemeris or additional data inherent to the
data structure. For example, when the user imports Landsat data, ERDAS IMAGINE also
imports the georeferencing data for the image.
Raster data formats cannot be exported as vector data formats unless they are converted
with the Vector utilities.
Each direct function is programmed specifically for that type of data and cannot be used
to import other data types.
NITFS
NITFS stands for the National Imagery Transmission Format Standard. NITFS is designed to
pack numerous image compositions with complete annotation, text attachments, and imagery-
associated metadata.
According to Jordan and Beck,
NITFS is an unclassified format that is based on ISO/IEC 12087-5, Basic Image
Interchange Format (BIIF). The NITFS implementation of BIIF is documented in U.S.
Military Standard 2500B, establishing a standard data format for digital imagery and
imagery-related products.
NITFS was first introduced in 1990 for use by the government and intelligence
agencies. NITFS is now the standard for military organizations as well as commercial
industries.
Jordan and Beck list the following attributes of NITF files:
• provide a common basis for storage and digital interchange of images and associated data
among existing and future systems
• minimize formatting overhead, particularly for those users transmitting only a small
amount of data or with limited bandwidth
• multiple images
• annotation on images
The process of translating NITFS files is a cross-translation process. One system’s internal
representation for the files and their associated data is processed and put into the NITF format.
The receiving system reformats the NITF file and converts it to the receiving system’s internal
representation of the files and associated data.
In ERDAS IMAGINE, the IMAGINE NITF™ software accepts such information and assembles
it into one file in the standard NITF format.
Annotation Data Annotation data can also be imported directly. Table 3-2 lists the Annotation formats.
There is a distinct difference between import and direct read. Import means that the data is
converted from its original format into another format (e.g. IMG, TIFF, or GRID Stack), which
can be read directly by ERDAS IMAGINE. Direct read formats are those formats which the
Viewer and many of its associated tools can read immediately without any conversion process.
Data Type    Import    Export    Direct Read    Direct Write
ANT (Erdas 7.x) • •
ASCII To Point Annotation •
DXF To Annotation •
Generic Binary Data The Generic Binary import option is a flexible program which enables the user to define the data
structure for ERDAS IMAGINE. This program allows the import of BIL, BIP, and BSQ data
that are stored in left to right, top to bottom row order. Data formats from unsigned 1-bit up to
64-bit floating point can be imported. This program imports only the data file values—it does
not import ephemeris data, such as georeferencing information. However, this ephemeris data
can be viewed using the Data View option (from the Utility menu or the Import dialog).
Complex data cannot be imported using this program; however, they can be imported as two
real images and then combined into one complex image using the Spatial Modeler.
You cannot import tiled or compressed data using the Generic Binary import option.
Vector Data Vector layers can be created within ERDAS IMAGINE by digitizing points, lines, and polygons
using a digitizing tablet or the computer screen. Several vector data types, which are available
from a variety of government agencies and private companies, can also be imported. Table 3-3
lists some of the vector data formats that can be imported to, and exported from, ERDAS
IMAGINE:
There is a distinct difference between import and direct read. Import means that the data
is converted from its original format into another format (e.g. IMG, TIFF, or GRID Stack),
which can be read directly by ERDAS IMAGINE. Direct read formats are those formats
which the Viewer and many of its associated tools can read immediately without any
conversion process.
Data Type    Import    Export    Direct Read    Direct Write
ARCGEN • •
Arc Interchange • •
Arc_Interchange to Coverage •
Arc_Interchange to Grid •
ASCII To Point Coverage •
DFAD • •
DGN (Intergraph IGDS) •
DIG Files (Erdas 7.x) •
DLG • •
DXF to Annotation •
DXF to Coverage •
ETAK •
IGDS (Intergraph .dgn File) •
IGES • •
MIF/MID (MapInfo) to Coverage •
SDE • •
SDTS • •
Shapefile • •
Terramodel •
TIGER • •
VPF • •
Once imported, the vector data are automatically converted to ERDAS IMAGINE vector layers.
These vector formats are discussed in more detail in “Vector Data from Other Software
Vendors”. See Chapter 2 “Vector Layers” for more information on ERDAS IMAGINE
vector layers.
Import and export vector data with the Import/Export function. You can also convert
vector layers to raster format, and vice versa, with the IMAGINE Vector utilities.
Satellite Data There are several data acquisition options available including photography, aerial sensors, and
sophisticated satellite scanners. However, a satellite system offers these advantages:
• Digital data gathered by a satellite sensor can be transmitted over radio or microwave
communications links and stored on magnetic tapes, so they are easily processed and
analyzed by a computer.
• Many satellites orbit the Earth, so the same area can be covered on a regular basis for
change detection.
• Once the satellite is launched, the cost for data acquisition is less than that for aircraft data.
• Satellites have very stable geometry, meaning that there is less chance for distortion or
skew in the final image.
Satellite System A satellite system is composed of a scanner with sensors and a satellite platform. The sensors
are made up of detectors.
• The scanner is the entire data acquisition system, such as the Landsat TM scanner or the
SPOT panchromatic scanner (Lillesand and Kiefer, 1987). It includes the sensor and the
detectors.
• A sensor is a device that gathers energy, converts it to a signal, and presents it in a form
suitable for obtaining information about the environment (Colwell, 1983).
• A detector is the device in a sensor system that records electromagnetic radiation. For
example, in the sensor system on the Landsat TM scanner there are 16 detectors for each
wavelength band (except band 6, which has 4 detectors).
In a satellite system, the total width of the area on the ground covered by the scanner is called
the swath width, or width of the total field of view (FOV). FOV differs from IFOV in that the
IFOV is a measure of the field of view of each detector. The FOV is a measure of the field of
view of all the detectors combined.
Satellite Characteristics The U.S. Landsat and the French SPOT satellites are two important data acquisition satellites.
These systems provide the majority of remotely-sensed digital images in use today. The Landsat
and SPOT satellites have several characteristics in common:
• Both scanners can produce nadir views. Nadir is the area on the ground directly beneath the
scanner’s detectors.
• They have sun-synchronous orbits, meaning that they rotate around the Earth at the same
rate as the Earth rotates on its axis, so data are always collected at the same local time of
day over the same region.
• They both record electromagnetic radiation in one or more bands. Multiband data are
referred to as multispectral imagery. Single band, or monochrome, imagery is called
panchromatic.
NOTE: The current SPOT system has the ability to collect off-nadir stereo imagery.
(Figure: spectral ranges of sensor bands between approximately 2.0 and 13.0 µm, labeling Bands 3 through 7.)
NOTE: NOAA AVHRR band 5 is not on the NOAA 10 satellite, but is on NOAA 11.
IKONOS The IKONOS satellite was launched in September of 1999 by the Athena II rocket.
The resolution of the panchromatic sensor is 1 m. The resolution of the multispectral scanner is
4 m. The swath width is 13 km at nadir. The accuracy without ground control is 12 m
horizontally and 10 m vertically; with ground control it is 2 m horizontally and 3 m vertically.
IKONOS orbits at an altitude of 423 miles, or 681 kilometers. The revisit time is 2.9 days at 1
m resolution, and 1.5 days at 1.5 m resolution.
Source: Space Imaging, 1999a; Center for Health Applications of Aerospace Related
Technologies, 2000a
IRS
IRS-1C
The IRS-1C sensor was launched in December of 1995.
The repeat coverage of IRS-1C is every 24 days. The sensor has a 744 km swath width.
The IRS-1C satellite has three sensors on board with which to capture images of the Earth.
Those sensors are as follows:
LISS-III
LISS-III has a spatial resolution of 23 m, with the exception of the SW Infrared band, which is
70 m. Bands 2, 3, and 4 have a swath width of 142 kilometers; band 5 has a swath width of 148
km. Repeat coverage occurs every 24 days at the Equator.
Panchromatic Sensor
The panchromatic sensor has 5.8 m spatial resolution, as well as stereo capability. Its swath
width is 70 km. Repeat coverage is every 24 days at the Equator. The revisit time is every five
days, with ± 26° off-nadir viewing.
Source: Space Imaging, 1999b; Center for Health Applications of Aerospace Related
Technologies, 1998
IRS-1D
IRS-1D was launched in September of 1997. It collects imagery at a spatial resolution of 5.8 m.
IRS-1D’s sensors are copies of those on IRS-1C, which was launched in December 1995.
Imagery collected by IRS-1D is distributed in black and white format. The panchromatic
imagery “reveals objects on the Earth’s surface (such) as transportation networks, large ships,
parks and open space, and built-up urban areas” (Space Imaging, 1999b). This information can
be used to classify land cover in applications such as urban planning and agriculture. The Space
Imaging facility located in Norman, Oklahoma has been obtaining IRS-1D data since 1997.
Landsat 1-5 In 1972, the National Aeronautics and Space Administration (NASA) initiated the first civilian
program specializing in the acquisition of remotely sensed digital satellite data. The first system
was called ERTS (Earth Resources Technology Satellites), and was later renamed Landsat. There
have been several Landsat satellites launched since 1972. Landsats 1, 2, and 3 are no longer
operating, but Landsats 4 and 5 are still in orbit gathering data.
Landsats 1, 2, and 3 gathered Multispectral Scanner (MSS) data and Landsats 4 and 5 collect
MSS and TM data. MSS and TM are discussed in more detail in the following sections.
NOTE: Landsat data are available through the Earth Observation Satellite Company (EOSAT)
or the Earth Resources Observation Systems (EROS) Data Center. See “Ordering Raster Data”
for more information.
MSS
The MSS has a swath width of approximately 185 × 170 km, from a height of approximately
900 km for Landsats 1, 2, and 3, and 705 km for Landsats 4 and 5. MSS data
are widely used for general geologic studies as well as vegetation inventories.
The spatial resolution of MSS data is 56 × 79 m, with a 79 × 79 m IFOV. A typical scene
contains approximately 2340 rows and 3240 columns. The radiometric resolution is 6-bit, but it
is stored as 8-bit (Lillesand and Kiefer, 1987).
Detectors record electromagnetic radiation (EMR) in four bands:
• Bands 1 and 2 are in the visible portion of the spectrum and are useful in detecting cultural
features, such as roads. These bands also show detail in water.
• Bands 3 and 4 are in the near-infrared portion of the spectrum and can be used in land/water
and vegetation discrimination.
Band    Wavelength (microns)    Comments
1, Green 0.50 to 0.60 µm This band scans the region between the blue and red chlorophyll
absorption bands. It corresponds to the green reflectance of healthy
vegetation, and it is also useful for mapping water bodies.
2, Red 0.60 to 0.70 µm This is the red chlorophyll absorption band of healthy green
vegetation and represents one of the most important bands for
vegetation discrimination. It is also useful for determining soil
boundary and geological boundary delineations and cultural
features.
3, Red, NIR 0.70 to 0.80 µm This band is especially responsive to the amount of vegetation
biomass present in a scene. It is useful for crop identification and
emphasizes soil/crop and land/water contrasts.
4, NIR 0.80 to 1.10 µm This band is useful for vegetation surveys and for penetrating haze
(Jensen, 1996).
TM
The TM scanner is a multispectral scanning system much like the MSS, except that the TM
sensor records reflected/emitted electromagnetic energy from the visible, reflective-infrared,
middle-infrared, and thermal-infrared regions of the spectrum. TM has higher spatial, spectral,
and radiometric resolution than MSS.
TM has a swath width of approximately 185 km from a height of approximately 705 km. It is
useful for vegetation type and health determination, soil moisture, snow and cloud
differentiation, rock type discrimination, etc.
The spatial resolution of TM is 28.5 × 28.5 m for all bands except the thermal (band 6), which
has a spatial resolution of 120 × 120 m. The larger pixel size of this band is necessary for
adequate signal strength. However, the thermal band is resampled to 28.5 × 28.5 m to match the
other bands. The radiometric resolution is 8-bit, meaning that each pixel has a possible range of
data values from 0 to 255.
Detectors record EMR in seven bands:
• Bands 1, 2, and 3 are in the visible portion of the spectrum and are useful in detecting
cultural features such as roads. These bands also show detail in water.
• Bands 4, 5, and 7 are in the reflective-infrared portion of the spectrum and can be used in
land/water discrimination.
• Band 6 is in the thermal portion of the spectrum and is used for thermal mapping (Jensen,
1996; Lillesand and Kiefer, 1987).
Band    Wavelength (microns)    Comments
1, Blue 0.45 to 0.52 µm This band is useful for mapping coastal water areas, differentiating
between soil and vegetation, forest type mapping, and detecting
cultural features.
2, Green 0.52 to 0.60 µm This band corresponds to the green reflectance of healthy
vegetation. Also useful for cultural feature identification.
3, Red 0.63 to 0.69 µm This band is useful for discriminating between many plant species.
It is also useful for determining soil boundary and geological
boundary delineations as well as cultural features.
4, NIR 0.76 to 0.90 µm This band is especially responsive to the amount of vegetation
biomass present in a scene. It is useful for crop identification and
emphasizes soil/crop and land/water contrasts.
5, MIR 1.55 to 1.75 µm This band is sensitive to the amount of water in plants, which is
useful in crop drought studies and in plant health analyses. This is
also one of the few bands that can be used to discriminate between
clouds, snow, and ice.
6, TIR 10.40 to 12.50 µm This band is useful for vegetation and crop stress detection, heat
intensity, insecticide applications, and for locating thermal
pollution. It can also be used to locate geothermal activity.
7, MIR 2.08 to 2.35 µm This band is important for the discrimination of geologic rock type
and soil boundaries, as well as soil and vegetation moisture
content.
Figure: comparison of Landsat MSS and TM pixels. MSS has 4 bands, 57 x 79 m pixels, and a radiometric resolution of 0-127; TM has 7 bands, 30 x 30 m pixels, and a radiometric resolution of 0-255.
NOTE: The order of the bands corresponds to the Red, Green, and Blue (RGB) color guns of
the monitor.
• Bands 3, 2, 1 create a true color composite. True color means that objects look as they
would to the naked eye—similar to a color photograph.
• Bands 4, 3, 2 create a false color composite. False color composites appear similar to an
infrared photograph where objects do not have the same colors or contrasts as they would
naturally. For instance, in an infrared image, vegetation appears red, water appears navy or
black, etc.
• Bands 5, 4, 2 create a pseudo color composite. (A thematic image is also a pseudo color
image.) In pseudo color, the colors do not reflect the features in natural colors. For instance,
roads may be red, water yellow, and vegetation blue.
Different color schemes can be used to bring out or enhance the features under study. These are
by no means all of the useful combinations of these seven bands. The bands to be used are
determined by the particular application.
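To make the band-to-color-gun assignment concrete, the sketch below builds a 4, 3, 2 false color composite from three TM bands assumed to be already loaded as 8-bit arrays; the random arrays and the 2%-98% stretch are stand-ins for real data handling.

    import numpy as np

    # Stand-ins for TM bands 2, 3, and 4 already read into 2D uint8 arrays.
    band2, band3, band4 = (np.random.randint(0, 256, (512, 512), dtype=np.uint8)
                           for _ in range(3))

    def stretch(band):
        # Simple 2%-98% linear contrast stretch to 0-255 for display.
        lo, hi = np.percentile(band, (2, 98))
        scaled = (band.astype(np.float32) - lo) / max(hi - lo, 1) * 255.0
        return np.clip(scaled, 0, 255).astype(np.uint8)

    # Bands 4, 3, 2 feed the red, green, and blue guns respectively, giving a
    # false color composite in which vegetation appears red; bands 3, 2, 1 in
    # the same arrangement would give a true color composite.
    false_color = np.dstack([stretch(band4), stretch(band3), stretch(band2)])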
See Chapter 4 “Image Display” for more information on how images are displayed,
Chapter 6 “Enhancement” for more information on how images can be enhanced, and
“Ordering Raster Data” for information on types of Landsat data available.
Landsat 7 The Landsat 7 satellite, launched in 1999, uses the Enhanced Thematic Mapper Plus (ETM+) sensor to observe the Earth, and adds several capabilities over earlier Landsat missions.
The primary receiving station for Landsat 7 data is located in Sioux Falls, South Dakota at the
USGS EROS Data Center (EDC). ETM+ data is transmitted using X-band direct downlink at a
rate of 150 Mbps. Landsat 7 is capable of capturing scenes without cloud obstruction, and the
receiving stations can obtain this data in real time using the X-band. Stations located around the
globe, however, are only able to receive data for the portion of the ETM+ ground track where
the satellite can be seen by the receiving station.
Landsat 7 Specifications
Information about the spectral range and ground resolution of the bands of the Landsat 7
satellite is provided in the following table:
Landsat 7 has a swath width of 185 kilometers. The repeat coverage interval is 16 days, or 233
orbits. The satellite orbits the Earth at 705 kilometers.
Source: National Aeronautics and Space Administration, 1998; National Aeronautics and
Space Administration, 2001
NLAPS The National Landsat Archive Production System (NLAPS) is the Landsat processing system
used by EROS. The NLAPS system is able to “produce systematically-corrected, and terrain
corrected products. . .” (United States Geological Survey, n.d.).
Landsat data received from satellites are generated into TM corrected data using the NLAPS by:
• correcting and validating the mirror scan and payload correction data
According to the USGS, the products provided by NLAPS include the following:
• processing procedure, which contains information describing the process by which the
image data were produced
• DEM data and the metadata describing them (available only with terrain corrected
products)
For information about the Landsat data processed by NLAPS, see “Landsat 1-5” and
“Landsat 7”.
NOAA Polar Orbiter Data NOAA has sponsored several polar orbiting satellites to collect data of the Earth. These satellites were originally designed for meteorological applications, but the data gathered have been used in many fields—from agronomy to oceanography (Needham, 1986).
The first of these satellites to be launched was the TIROS-N in 1978. Since the TIROS-N, five
additional NOAA satellites have been launched. Of these, the last three are still in orbit
gathering data.
AVHRR
The NOAA AVHRR data are small-scale data and often cover an entire country. The swath
width is 2700 km and the satellites orbit at a height of approximately 833 km (Kidwell, 1988;
Needham, 1986).
The AVHRR system allows for direct transmission in real-time of data called High Resolution
Picture Transmission (HRPT). It also allows for about ten minutes of data to be recorded over
any portion of the world on two recorders on board the satellite. These recorded data are called
Local Area Coverage (LAC). LAC and HRPT have identical formats; the only difference is that
HRPT are transmitted directly and LAC are recorded.
There are three basic formats for AVHRR data which can be imported into ERDAS IMAGINE:
• LAC—data recorded on board the sensor with a spatial resolution of approximately 1.1 ×
1.1 km,
• HRPT—direct transmission of AVHRR data in real-time with the same resolution as LAC,
and
• GAC—data produced from LAC data by using only 1 out of every 3 scan lines. GAC data
have a spatial resolution of approximately 4 × 4 km.
AVHRR data are available in 10-bit packed and 16-bit unpacked format. The term packed refers
to the way in which the data are written to the tape. Packed data are compressed to fit more data
on each tape (Kidwell, 1988).
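The difference between packed and unpacked data can be sketched in code. The layout below assumes the common packing of three 10-bit samples into each 32-bit word, with the two high bits unused; real Level 1B files also carry header and calibration records that are ignored here.

    import numpy as np

    def unpack_10bit(words):
        # Unpack three 10-bit samples from each 32-bit word (top 2 bits unused).
        words = np.asarray(words, dtype=np.uint32)
        s1 = (words >> 20) & 0x3FF
        s2 = (words >> 10) & 0x3FF
        s3 = words & 0x3FF
        return np.column_stack([s1, s2, s3]).ravel()

    packed = np.array([0x3FFFFFFF, 0x000FFC00], dtype=np.uint32)  # toy words
    print(unpack_10bit(packed))          # [1023 1023 1023    0 1023    0]

    # GAC-style subsampling of LAC data: keep only one of every three scan lines.
    lac = np.zeros((300, 2048), dtype=np.uint16)
    gac_lines = lac[::3, :]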
AVHRR images are useful for snow cover mapping, flood monitoring, vegetation mapping,
regional soil moisture analysis, wildfire fuel mapping, fire detection, dust and sandstorm
monitoring, and various geologic applications (Lillesand and Kiefer, 1987). The entire globe
can be viewed in 14.5 days. There may be four or five bands, depending on when the data were
acquired.
Band    Wavelength (microns)    Comments
1, Visible 0.58 to 0.68 µm This band corresponds to the green reflectance of healthy
vegetation and is important for vegetation discrimination.
2, NIR 0.725 to 1.10 µm This band is especially responsive to the amount of vegetation
biomass present in a scene. It is useful for crop identification and
emphasizes soil/crop and land/water contrasts.
3, TIR 3.55 to 3.93 µm This is a thermal band that can be used for snow and ice
discrimination. It is also useful for detecting fires.
4, TIR 10.50 to 11.50 µm (NOAA 6, 8, 10); 10.30 to 11.30 µm (NOAA 7, 9, 11) This band is useful for vegetation and crop stress detection. It can also be used to locate geothermal activity.
5, TIR 10.50 to 11.50 µm (NOAA 6, 8, 10); 11.50 to 12.50 µm (NOAA 7, 9, 11) See Band 4, above.
AVHRR data have a radiometric resolution of 10-bits, meaning that each pixel has a possible
data file value between 0 and 1023. AVHRR scenes may contain one band, a combination of
bands, or all bands. All bands are referred to as a full set, and selected bands are referred to as
an extract.
See “Ordering Raster Data” for information on the types of NOAA data available.
OrbView-3 OrbView-3 is a high-resolution satellite scheduled for launch by ORBIMAGE in the year 2000.
The OrbView-3 satellite will provide both 1 m panchromatic imagery and 4 m multispectral
imagery. “One-meter imagery will enable the viewing of houses, automobiles and aircraft, and
will make it possible to create highly precise digital maps and three-dimensional fly-through
scenes. Four-meter multispectral imagery will provide color and infrared information to further
characterize cities, rural areas and undeveloped land from space” (ORBIMAGE, 1999).
Specific applications include telecommunications and utilities, agriculture and forestry.
OrbView-3’s swath width is 8 km, with an image area of 64 km². The revisit time is less than 3
days. OrbView-3 orbits the Earth at an altitude of 470 km.
SeaWiFS The Sea-viewing Wide Field-of-View Sensor (SeaWiFS) instrument is on-board the SeaStar
spacecraft, which was launched in 1997. The SeaStar spacecraft’s orbit is circular, at an altitude
of 705 km. The satellite uses an attitude control system (ACS), which maintains orbit, as well as
performs solar and lunar calibration maneuvers. The ACS also provides attitude information
within one SeaWiFS pixel.
The SeaWiFS instrument is made up of an optical scanner and an electronics module. The swath
width is 2,801 km LAC/HRPT (58.3 degrees) and 1,502 km GAC (45 degrees). The spatial
resolution is 1.1 km LAC and 4.5 km GAC. The revisit time is one day.
Source: National Aeronautics and Space Administration, 1999; Center for Health Applications
of Aerospace Related Technologies, 1998
SPOT The first SPOT satellite, developed by the French Centre National d’Etudes Spatiales (CNES),
was launched in early 1986. The second SPOT satellite was launched in 1990 and the third was
launched in 1993. The sensors operate in two modes, multispectral and panchromatic. SPOT is
commonly referred to as a pushbroom scanner meaning that all scanning parts are fixed, and
scanning is accomplished by the forward motion of the scanner. SPOT pushes an array of 3,000 sensors (6,000 in panchromatic mode) along its orbit. This is different from Landsat, which scans with 16 detectors
perpendicular to its orbit.
The SPOT satellite can observe the same area on the globe once every 26 days. The SPOT
scanner normally produces nadir views, but it does have off-nadir viewing capability. Off-nadir
refers to any point that is not directly beneath the detectors, but off to an angle. Using this off-
nadir capability, one area on the Earth can be viewed as often as every 3 days.
This off-nadir viewing can be programmed from the ground control station, and is quite useful
for collecting data in a region not directly in the path of the scanner or in the event of a natural
or man-made disaster, where timeliness of data acquisition is crucial. It is also very useful in
collecting stereo data from which elevation data can be extracted.
The width of the swath observed varies between 60 km for nadir viewing and 80 km for off-
nadir viewing at a height of 832 km (Jensen, 1996).
Panchromatic
SPOT Panchromatic (meaning sensitive to all visible colors) has 10 × 10 m spatial resolution,
contains 1 band—0.51 to 0.73 µm—and is similar to a black and white photograph. It has a
radiometric resolution of 8 bits (Jensen, 1996).
XS
SPOT XS, or multispectral, has 20 × 20 m spatial resolution, 8-bit radiometric resolution, and
contains 3 bands (Jensen, 1996).
Band    Wavelength (microns)    Comments
1, Green 0.50 to 0.59 µm This band corresponds to the green reflectance of healthy
vegetation.
2, Red 0.61 to 0.68 µm This band is useful for discriminating between plant species. It
is also useful for soil boundary and geological boundary
delineations.
3, Reflective 0.79 to 0.89 µm This band is especially responsive to the amount of vegetation
IR biomass present in a scene. It is useful for crop identification
and emphasizes soil/crop and land/water contrasts.
Figure: comparison of SPOT Panchromatic and XS pixels. Panchromatic has 1 band and 10 x 10 m pixels; XS has 3 bands and 20 x 20 m pixels; both have a radiometric resolution of 0-255.
See “Ordering Raster Data” for information on the types of SPOT data available.
Stereoscopic Pairs
Two observations can be made by the panchromatic scanner on successive days, so that the two
images are acquired at angles on either side of the vertical, resulting in stereoscopic imagery.
Stereoscopic imagery can also be achieved by using one vertical scene and one off-nadir scene.
This type of imagery can be used to produce a single image, or topographic and planimetric
maps (Jensen, 1996).
Topographic maps indicate elevation. Planimetric maps correctly represent horizontal distances
between objects (Star and Estes, 1990).
See “Topographic Data” and Chapter 11 “Terrain Analysis” for more information about
topographic data and how SPOT stereopairs and aerial photographs can be used to create
elevation data and orthographic images.
SPOT4 The SPOT4 satellite was launched in 1998. SPOT4 carries High Resolution Visible Infrared
(HR VIR) instruments that obtain information in the visible and near-infrared spectral bands.
The SPOT4 satellite orbits the Earth at 822 km at the Equator. The SPOT4 satellite has two
sensors on board: a multispectral sensor, and a panchromatic sensor. The multispectral scanner
has a pixel size of 20 × 20 m, and a swath width of 60 km. The panchromatic scanner has a pixel
size of 10 × 10 m, and a swath width of 60 km.
Band Wavelength
1, Green 0.50 to 0.59 µm
2, Red 0.61 to 0.68 µm
3, (near-IR) 0.78 to 0.89 µm
4, (mid-IR) 1.58 to 1.75 µm
Panchromatic 0.61 to 0.68 µm
Source: SPOT Image, 1998; SPOT Image, 1999; Center for Health Applications of Aerospace
Related Technologies, 2000c.
• the backscattered radiation is detected by the radar system’s receiving antenna, which is
tuned to the frequency of the transmitted waves.
While there is a specific importer for data from RADARSAT and others, most types of
radar image data can be imported into ERDAS IMAGINE with the Generic import option
of Import/Export. The Generic SAR Node of the IMAGINE Radar Mapping Suite™ can be
used to create or edit the radar ephemeris.
A radar system can be airborne, spaceborne, or ground-based. Airborne radar systems have
typically been mounted on civilian and military aircraft, but in 1978, the radar satellite Seasat-
1 was launched. The radar data from that mission and subsequent spaceborne radar systems
have been a valuable addition to the data available for use in GIS processing. Researchers are
finding that a combination of the characteristics of radar data and visible/infrared data is
providing a more complete picture of the Earth. In the last decade, the importance and
applications of radar have grown rapidly.
Advantages of Using Radar Data Radar data have several advantages over other types of remotely sensed imagery:
• Radar microwaves can penetrate the atmosphere day or night under virtually all weather
conditions, providing data even in the presence of haze, light rain, snow, clouds, or smoke.
• Under certain circumstances, radar can partially penetrate arid and hyperarid surfaces,
revealing subsurface features of the Earth.
• Although radar does not penetrate standing water, it can reflect the surface action of oceans,
lakes, and other bodies of water. Surface eddies, swells, and waves are greatly affected by
the bottom features of the water body, and a careful study of surface action can provide
accurate details about the bottom features.
Radar Sensors Radar images are generated by two different types of sensors:
• SLAR (Side-Looking Airborne Radar) uses a real-aperture antenna fixed below an aircraft and pointed to the side to transmit and receive the radar signal.
• SAR—uses a side-looking, fixed antenna to create a synthetic aperture. SAR sensors are mounted on satellites and the NASA Space Shuttle. The sensor transmits and receives as it is moving. The signals received over a time interval are combined to create the image.
Both SLAR and SAR systems use side-looking geometry. Figure 3-4 shows a representation of
an airborne SLAR system.
Figure 3-4: airborne SLAR system, showing the sensor height, beam width at nadir, range and azimuth directions, azimuth resolution, and previously imaged lines.
Figure: strength of the radar return signal (DN) plotted against time for terrain such as trees, a hill and its shadow, and a valley.
Radar waves are transmitted in phase.
Band    Frequency Range     Wavelength Range    Radar System
X       5.20-10.90 GHz      5.77-2.75 cm        USGS SLAR
C       3.9-6.2 GHz         3.8-7.6 cm          ERS-1, RADARSAT
L       0.39-1.55 GHz       76.9-19.3 cm        SIR-A,B, Almaz, FUYO-1 (JERS-1)
P       0.225-0.391 GHz     40.0-76.9 cm        AIRSAR
More information about these radar systems is given later in this chapter.
Radar bands were named arbitrarily when radar was first developed by the military. The letter
designations have no special meaning.
NOTE: The C band overlaps the X band. Wavelength ranges may vary slightly between sensors.
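The frequency and wavelength columns in the table above are tied together by the relation wavelength = c / frequency; the short check below recomputes the X band wavelengths from its frequency limits.

    C_LIGHT = 2.998e8  # speed of light in m/s

    def wavelength_cm(frequency_ghz):
        # Wavelength in centimetres for a radar frequency given in GHz.
        return C_LIGHT / (frequency_ghz * 1e9) * 100.0

    # X band, 5.20-10.90 GHz, matches the tabulated 5.77-2.75 cm range.
    print(round(wavelength_cm(5.20), 2), round(wavelength_cm(10.90), 2))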
Speckle Noise Once out of phase, the radar waves can interfere constructively or destructively to produce light
and dark pixels known as speckle noise. Speckle noise in radar data must be reduced before the
data can be utilized. However, the radar image processing programs used to reduce speckle
noise also produce changes to the image. This consideration, combined with the fact that
different applications and sensor outputs necessitate different speckle removal models, has led
ERDAS to offer several speckle reduction algorithms.
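As a generic illustration of speckle reduction (not one of the ERDAS algorithms referred to above), the sketch below applies a simple Lee-style adaptive filter that smooths more where the local variance is low.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def simple_lee_filter(image, size=7):
        # Basic Lee-style filter: blend toward the local mean where variance is low.
        image = image.astype(np.float64)
        local_mean = uniform_filter(image, size)
        local_sq_mean = uniform_filter(image * image, size)
        local_var = local_sq_mean - local_mean ** 2
        noise_var = image.var()                 # crude global noise estimate
        weight = local_var / (local_var + noise_var + 1e-12)
        return local_mean + weight * (image - local_mean)

    # As the note below stresses, filtering like this would be done before any
    # rectification, ground-range correction, or other resampling of the pixels.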
When processing radar data, the order in which the image processing programs are
implemented is crucial. This is especially true when considering the removal of speckle
noise. Since any image processing done before removal of the speckle results in the noise
being incorporated into and degrading the image, do not rectify, correct to ground range,
or in any way resample the pixel values before removing speckle noise. A rotation using
nearest neighbor might be permissible.
• import radar data into the GIS as a stand-alone source or as an additional layer with other
imagery sources
• enhance edges
The IMAGINE IFSAR DEM™ module allows you to generate DEMs from SAR data using
interferometric techniques.
The IMAGINE StereoSAR DEM™ module allows you to generate DEMs from SAR data using
stereoscopic techniques.
See Chapter 6 “Enhancement” and Chapter 9 “Radar Concepts” for more information on
radar imagery enhancement.
Applications for Radar Data Radar data can be used independently in GIS applications or combined with other satellite data, such as Landsat, SPOT, or AVHRR. Possible GIS applications for radar data include:
• Geology—radar’s ability to partially penetrate land cover and sensitivity to micro relief
makes radar data useful in geologic mapping, mineral exploration, and archaeology.
• Glaciology—the ability to provide imagery of ocean and ice phenomena makes radar an
important tool for monitoring climatic change through polar ice variation.
• Oceanography—radar is used for wind and wave measurement, sea-state and weather
forecasting, and monitoring ocean circulation, tides, and polar oceans.
• Hydrology—radar data are proving useful for measuring soil moisture content and
mapping snow distribution and water content.
• Offshore oil activities—radar data are used to provide ice updates for offshore drilling rigs,
determining weather and sea conditions for drilling and installation operations, and
detecting oil spills.
• Pollution monitoring—radar can detect oil on the surface of water and can be used to track
the spread of an oil spill.
Current Radar Sensors Table 3-17 gives a brief description of currently available radar sensors. This is not a complete list of such sensors, but it does represent the ones most useful for GIS applications.
Almaz-1
Almaz was launched by the Soviet Union in 1987. Its single-frequency SAR was attached to the spacecraft. Almaz-1 provided optically-processed data. The Almaz mission was largely kept secret.
Almaz-1 was launched in 1991, and provides S-band information. It also includes a “single
polarization SAR as well as a sounding radiometric scanner (RMS) system and several infrared
bands” (Atlantis Scientific, Inc., 1997).
The swath width of Almaz-1 is 20-45 km, the range resolution is 15-30 m, and the azimuth
resolution is 15 m.
Source: National Aeronautics and Space Administration, 1996; Atlantis Scientific, Inc., 1997
ERS-1
ERS-1, a radar satellite, was launched by ESA in July of 1991. One of its primary instruments
is the Along-Track Scanning Radiometer (ATSR). The ATSR monitors changes in vegetation
of the Earth’s surface.
The instruments aboard ERS-1 include: SAR Image Mode, SAR Wave Mode, Wind
Scatterometer, Radar Altimeter, and Along Track Scanning Radiometer-1 (European Space
Agency, 1997).
ERS-1 receiving stations are located all over the world, in countries such as Sweden, Norway,
and Canada.
Some of the information that is obtained from the ERS-1 (as well as ERS-2, to follow) includes:
According to ESA,
. . .ERS-1 provides both global and regional views of the Earth, regardless of cloud
coverage and sunlight conditions. An operational near-real-time capability for data
acquisition, processing and dissemination, offering global data sets within three hours of
observation, has allowed the development of time-critical applications particularly in
weather, marine and ice forecasting, which are of great importance for many industrial
activities (European Space Agency, 1995).
ERS-2
ERS-2, a radar satellite, was launched by ESA in April of 1995. It has an instrument called
GOME, which stands for Global Ozone Monitoring Experiment. This instrument is designed to
evaluate atmospheric chemistry. ERS-2, like ERS-1, makes use of the ATSR.
The instruments aboard ERS-2 include: SAR Image Mode, SAR Wave Mode, Wind
Scatterometer, Radar Altimeter, Along Track Scanning Radiometer-2, and the Global Ozone
Monitoring Experiment.
ERS-2 receiving stations are located all over the world. Facilities that process and archive ERS-
2 data are also located around the globe.
One of the benefits of the ERS-2 satellite is that, along with ERS-1, it can provide data from the
exact same type of synthetic aperture radar (SAR).
ERS-2 provides many different types of information. See “ERS-1” for some of the most
common types. Data obtained from ERS-2 used in conjunction with that from ERS-1 enables
you to perform interferometric tasks. Using the data from the two sensors, DEMs can be created.
JERS-1
JERS stands for Japanese Earth Resources Satellite. The JERS-1 satellite was launched in
February of 1992, with an SAR instrument and a 4-band optical sensor aboard. The SAR
sensor’s ground resolution is 18 m, and the optical sensor’s ground resolution is roughly 18 m
across-track and 24 m along-track. The revisit time of the satellite is every 44 days. The satellite
travels at an altitude of 568 km, at an inclination of 97.67°.
Band Wavelength
1 0.52 to 0.60 µm
2 0.63 to 0.69 µm
3 0.76 to 0.86 µm
4¹ 0.76 to 0.86 µm
5 1.60 to 1.71 µm
6 2.01 to 2.12 µm
7 2.13 to 2.25 µm
8 2.27 to 2.40 µm
¹ Viewing 15.3° forward
JERS-1 data comes in two different formats: European and Worldwide. The European data
format consists mainly of coverage for Europe and Antarctica. The Worldwide data format has
images that were acquired from stations around the globe. According to NASA, “a reduction in
transmitter power has limited the use of JERS-1 data” (National Aeronautics and Space
Administration, 1996).
RADARSAT
RADARSAT satellites carry SARs, which are capable of transmitting signals that can be
received through clouds and during nighttime hours. RADARSAT satellites have multiple
imaging modes for collecting data, which include Fine, Standard, Wide, ScanSAR Narrow,
ScanSAR Wide, Extended (H), and Extended (L). The resolution and swath width varies with
each one of these modes, but in general, Fine offers the best resolution: 8 m.
The types of RADARSAT image products include: Single Data, Single Look Complex, Path
Image, Path Image Plus, Map Image, Precision Map Image, and Orthorectified. You can obtain
this data in forms ranging from CD-ROM to print.
The RADARSAT satellite uses a single frequency, C-band. The altitude of the satellite is 496
miles, or 798 km. The satellite is able to image the entire Earth, and its path is repeated every
24 days. The swath width is 500 km. Daily coverage is available of the Arctic, and any area of
Canada can be obtained within three days.
SIR-A
SIR stands for Spaceborne Imaging Radar. SIR-A was launched and began collecting data in
1981. The SIR-A mission built on the Seasat SAR mission that preceded it by increasing the
incidence angle with which it captured images. The primary goal of the SIR-A mission was to
collect geological information. This information did not have as pronounced a layover effect as
previous imagery.
An important achievement of SIR-A data is that it is capable of penetrating surfaces to obtain
information. For example, NASA says that the L-band capability of SIR-A enabled the
discovery of dry river beds in the Sahara Desert.
SIR-A uses L-band, has a swath width of 50 km, a range resolution of 40 m, and an azimuth
resolution of 40 m (Atlantis Scientific, Inc., 1997).
For information on the ERDAS IMAGINE software that reduces layover effect, IMAGINE
OrthoRadar, see “IMAGINE OrthoRadar Theory”.
Source: National Aeronautics and Space Administration, 1995a; National Aeronautics and
Space Administration, 1996; Atlantis Scientific, Inc., 1997.
SIR-B
SIR-B was launched and began collecting data in 1984. SIR-B improved over SIR-A by using
an articulating antenna. This antenna allowed the incidence angle to range between 15 and 60
degrees. This enabled the mapping of surface
Topographic Data
USGS DEMs
There are two types of DEMs that are most commonly available from USGS:
• 1:24,000 scale, also called 7.5-minute DEM, is usually referenced to the UTM coordinate system. It has a spatial resolution of 30 × 30 m.
• 1:250,000 scale, also called 1-degree DEM, is referenced to the geographic (latitude/longitude) coordinate system. It has a spatial resolution of 3 × 3 arc seconds.
Both types have a 16-bit range of elevation values, meaning each pixel can have a possible
elevation of -32,768 to 32,767.
DEM data are stored in ASCII format. The data file values in ASCII format are stored as ASCII
characters rather than as zeros and ones like the data file values in binary data.
DEM data files from USGS are initially oriented so that North is on the right side of the image
instead of at the top. ERDAS IMAGINE rotates the data 90° counterclockwise as part of the
Import process so that coordinates read with any ERDAS IMAGINE program are correct.
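A toy sketch of the orientation fix described above, assuming the elevations have already been parsed out of the ASCII records into a plain grid (the real USGS DEM record layout is considerably more involved):

    import numpy as np

    # Toy 16-bit elevation grid standing in for values parsed from the ASCII
    # records; np.loadtxt or a custom parser would produce something similar.
    rng = np.random.default_rng(0)
    dem_north_right = rng.integers(-100, 4000, size=(4, 6), dtype=np.int16)

    # As delivered, North is on the right-hand side of the grid. Rotating 90
    # degrees counterclockwise puts North at the top, mirroring what the
    # ERDAS IMAGINE Import process does automatically.
    dem_north_up = np.rot90(dem_north_right, k=1)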
DTED DTED data are produced by the National Imagery and Mapping Agency (NIMA) and are
available only to US government agencies and their contractors. DTED data are distributed on
9-track tapes and on CD-ROM.
There are two types of DTED data available:
• DTED 1, with a post spacing of 3 × 3 arc seconds
• DTED 2, with a post spacing of 1 × 1 arc seconds
Both are in Arc/second format and are distributed in cells. A cell is a 1° × 1° area of coverage.
Both have a 16-bit range of elevation values.
Like DEMs, DTED data files are also oriented so that North is on the right side of the image
instead of at the top. ERDAS IMAGINE rotates the data 90° counterclockwise as part of the
Import process so that coordinates read with any ERDAS IMAGINE program are correct.
Using Topographic Data Topographic data have many uses in a GIS. For example, topographic data can be used in conjunction with other data to:
• calculate the shortest and most navigable path over a mountain range
See Chapter 11 “Terrain Analysis” for more information about using topographic and
elevation data.
GPS Data
Introduction Global Positioning System (GPS) data have been in existence since the launch of the first satellite in the US Navigation System with Time and Ranging (NAVSTAR) system on February 22, 1978; a full constellation of satellites has been available since 1994. Initially, the system was
available to US military personnel only, but from 1993 onwards the system started to be used
(in a degraded mode) by the general public. There is also a Russian GPS system called
GLONASS with similar capabilities.
The US NAVSTAR GPS consists of a constellation of 24 satellites orbiting the Earth,
broadcasting data that allows a GPS receiver to calculate its spatial position.
Satellite Position Positions are determined through the traditional ranging technique. The satellites orbit the Earth
(at an altitude of 20,200 km) in such a manner that several are always visible at any location on
the Earth's surface. A GPS receiver with line of sight to a GPS satellite can determine how long
the signal broadcast by the satellite has taken to reach its location, and therefore can determine
the distance to the satellite. Thus, if the GPS receiver can see three or more satellites and
determine the distance to each, the GPS receiver can calculate its own position based on the
known positions of the satellites (i.e., the intersection of the spheres of distance from the
satellite locations). Theoretically, only three satellites should be required to find the 3D position
of the receiver, but various inaccuracies (largely based on the quality of the clock within the
GPS receiver that is used to time the arrival of the signal) mean that at least four satellites are
generally required to determine a three-dimensional (3D) x, y, z position.
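The role of the fourth satellite, absorbing the receiver clock error, can be seen in a small synthetic solve; every position, time, and bias below is a made-up number used only to exercise the geometry.

    import numpy as np
    from scipy.optimize import least_squares

    true_receiver = np.array([1.1e6, -4.9e6, 3.9e6])   # made-up ECEF position, m
    clock_bias_m = 3_000.0                              # receiver clock error, in metres

    # Four made-up satellite positions roughly 20,200 km above the surface.
    sats = np.array([[15.6e6,   7.5e6, 19.9e6],
                     [18.9e6,  -9.8e6, 14.9e6],
                     [ 1.1e6, -17.0e6, 20.3e6],
                     [-13.0e6, -18.0e6, 13.3e6]])

    # Simulated pseudoranges: true geometric range plus the unknown clock bias.
    pseudoranges = np.linalg.norm(sats - true_receiver, axis=1) + clock_bias_m

    def residuals(state):
        pos, bias = state[:3], state[3]
        return np.linalg.norm(sats - pos, axis=1) + bias - pseudoranges

    fit = least_squares(residuals, x0=[0.0, 0.0, 6.4e6, 0.0])
    print(fit.x[:3])    # recovers the receiver position to within numerical error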
The explanation above is an over-simplification of the technique used, but does show the
concept behind the use of the GPS system for determining position. The accuracy of that
position is affected by several factors, including the number of satellites that can be seen by a
receiver, but especially for commercial users by Selective Availability. Each satellite actually
sends two signals at different frequencies. One is for civilian use and one for military use. The
signal used for commercial receivers has an error introduced to it called Selective Availability.
Selective Availability introduces a positional inaccuracy of up to 100m to commercial GPS
receivers. This is mainly intended to deny highly accurate GPS positioning to hostile users, but the errors can be ameliorated through various techniques, such as keeping the GPS receiver stationary, thereby allowing it to average out the errors, or through more advanced
techniques discussed in the following sections.
Differential Correction Differential Correction (or Differential GPS - DGPS) can be used to remove the majority of the
effects of Selective Availability. The technique works by using a second GPS unit (or base
station) that is stationary at a precisely known position. As this GPS knows where it actually is,
it can compare this location with the position it calculates from GPS satellites at any particular
time and calculate an error vector for that time (i.e., the distance and direction that the GPS
reading is in error from the real position). A log of such error vectors can then be compared with
GPS readings taken from the first, mobile unit (the field unit that is actually taking GPS location
readings of features). Under the assumption that the field unit had line of sight to the same GPS
satellites to acquire its position as the base station, each field-read position (with an appropriate
time stamp) can be compared to the error vector for that time and the position corrected using
the inverse of the vector. This is generally performed using specialist differential correction
software.
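A minimal sketch of applying logged base-station error vectors to field readings, matched on time stamp as described above; the data structures and values are invented for illustration.

    # Error vectors logged at the base station, keyed by time stamp:
    # (easting error, northing error) in metres at that instant.
    base_errors = {
        "10:15:30": (12.4, -8.1),
        "10:15:31": (12.1, -7.9),
    }

    # Raw field readings: (time stamp, easting, northing).
    field_readings = [
        ("10:15:30", 482_113.6, 3_761_204.2),
        ("10:15:31", 482_120.9, 3_761_198.7),
    ]

    corrected = []
    for stamp, east, north in field_readings:
        err_e, err_n = base_errors[stamp]
        # Apply the inverse of the error vector recorded for the same instant.
        corrected.append((stamp, east - err_e, north - err_n))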
Real Time Differential GPS (RDGPS) takes this technique one step further by having the base
station communicate the error vector via radio to the field unit in real time. The field unit can
then automatically update its own location in real time. The main disadvantage of this
technique is that the range that a GPS base station can broadcast over is generally limited,
thereby restricting the range the mobile unit can be used away from the base station. One of the
biggest uses of this technique is for ocean navigation in coastal areas, where base stations have
been set up along coastlines and around ports so that the GPS systems on board ships can get
accurate real time positional information to help in shallow-water navigation.
Applications of GPS Data GPS data find many uses in remote sensing and GIS applications, such as:
• Collection of ground truth data, even spectral properties of real-world conditions at known
geographic positions, for use in image classification and validation. The user in the field
identifies a homogeneous area of identifiable land cover or use on the ground and records
its location using the GPS receiver. These locations can then be plotted over an image to
either train a supervised classifier or to test the validity of a classification.
• Moving map applications take the concept of relating the GPS positional information to
your geographic data layers one step further by having the GPS position displayed in real
time over the geographical data layers. Thus you take a computer out into the field and
connect the GPS receiver to the computer, usually via the serial port. Remote sensing and
GIS data layers are then displayed on the computer and the positional signal from the GPS
receiver is plotted on top of them.
• GPS receivers can be used for the collection of positional information for known point
features on the ground. If these can be identified in an image, the positional data can be used
as Ground Control Points (GCPs) for geocorrecting the imagery to a map projection
system. If the imagery is of high resolution, this generally requires differential correction
of the positional data.
• DGPS data can be used to directly capture GIS data and survey data for direct use in a GIS
or CAD system. In this regard the GPS receiver can be compared to using a digitizing tablet
to collect data, but instead of pointing and clicking at features on a paper document, you
are pointing and clicking on the real features to capture the information.
• Precision agriculture uses GPS extensively in conjunction with Variable Rate Technology
(VRT). VRT relies on the use of a VRT controller box connected to a GPS and the pumping
mechanism for a tank full of fertilizers/pesticides/seeds/water/etc. A digital polygon map
(often derived from remotely sensed data) in the controller specifies a predefined amount
to dispense for each polygonal region. As the tractor pulls the tank around the field the GPS
logs the position that is compared to the map position in memory. The correct amount is
then dispensed at that location. The aim of this process is to maximize yields without
causing any environmental damage.
• GPS is often used in conjunction with airborne surveys. The aircraft, as well as carrying a
camera or scanner, has on board one or more GPS receivers tied to an inertial navigation
system. As each frame is exposed precise information is captured (or calculated in post
processing) on the x, y, z and roll, pitch, yaw of the aircraft. Each image in the aerial survey
block thus has initial exterior orientation parameters, which minimizes the need
for control in a block triangulation process.
Ordering Raster Data Table 3-24 describes the different Landsat, SPOT, AVHRR, and DEM products that can be ordered. Information in this chart does not reflect all the products that are available, but only the most common types that can be imported into ERDAS IMAGINE.
Table 3-24 lists, for each product: the data type, ground covered, pixel size, number of bands, format, and whether geocoded data are available.
Addresses to Contact For more information about these and related products, contact the following agencies:
• SPOT data:
SPOT Image Corporation
1897 Preston White Dr.
Reston, VA 22091-4368 USA
Telephone: 703-620-2200
Fax: 703-648-1813
Internet: www.spot.com
• Cartographic data including maps, airphotos, space images, DEMs, planimetric data, and
related information from federal, state, and private agencies:
National Mapping Division
U.S. Geological Survey, National Center
12201 Sunrise Valley Drive
Reston, VA 20192 USA
Telephone: 703/648-4000
Internet: mapping.usgs.gov
• Landsat data:
Customer Services
U.S. Geological Survey
EROS Data Center
47914 252nd Street
Sioux Falls, SD 57198 USA
Telephone: 800/252-4547
Fax: 605/594-6589
Internet: edcwww.cr.usgs.gov/eros-home.html
• RADARSAT data:
RADARSAT International, Inc.
265 Carling Ave., Suite 204
Ottawa, Ontario
Canada K1S 2E1
Telephone: 613-238-5424
Fax: 613-238-5425
Internet: www.rsi.ca
Raster Data from Other Software Vendors ERDAS IMAGINE also enables you to import data created by other software vendors. This way, if another type of digital data system is currently in use, or if data is received from another system, it easily converts to the ERDAS IMAGINE file format for use in ERDAS IMAGINE.
Data from other vendors may come in that specific vendor’s format, or in a standard format
which can be used by several vendors. The Import and/or Direct Read function handles these
raster data types from other software systems:
• JFIF (JPEG)
• MrSID
• SDTS
• Sun Raster
Other data types might be imported using the Generic Binary import option.
Convert a vector layer to a raster layer, or vice versa, by using IMAGINE Vector.
ERDAS Ver. 7.X The ERDAS Ver. 7.X series was the predecessor of ERDAS IMAGINE software. The two basic
types of ERDAS Ver. 7.X data files are indicated by the file name extensions:
• .LAN—a multiband continuous image file (the name is derived from the Landsat satellite)
• .GIS—a single-band thematic data file in which pixels are divided into discrete categories
(the name is derived from geographic information system)
.LAN and .GIS image files are stored in the same format. The image data are arranged in a BIL
format and can be 4-bit, 8-bit, or 16-bit. The ERDAS Ver. 7.X file structure includes:
When you import a .GIS file, it becomes an image file with one thematic raster layer. When
you import a .LAN file, each band becomes a continuous raster layer within the image file.
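A hedged sketch of how 8-bit BIL-interleaved pixels could be pulled into per-band arrays; the header size, dimensions, and in-memory stand-in for the file are placeholders and do not reflect the actual Ver. 7.X header layout.

    import numpy as np

    ROWS, COLS, BANDS = 4, 6, 3          # toy dimensions for illustration
    HEADER_BYTES = 128                   # placeholder header size, not the real layout

    # Synthetic stand-in for a .LAN file: a header followed by 8-bit BIL pixel data.
    raw_file = bytes(HEADER_BYTES) + bytes(range(ROWS * BANDS * COLS))

    pixels = np.frombuffer(raw_file[HEADER_BYTES:], dtype=np.uint8)

    # Band-interleaved-by-line: each image row stores one line of band 1,
    # then one line of band 2, and so on.
    bil = pixels.reshape(ROWS, BANDS, COLS)
    band_1 = bil[:, 0, :]                # continuous raster layer for band 1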
GRID and GRID Stacks GRID is a raster geoprocessing program distributed by Environmental Systems Research
Institute, Inc. (ESRI) in Redlands, California. GRID is designed to complement the vector data
model system of ArcInfo, a well-known vector GIS that is also distributed by ESRI. The name
GRID is taken from the raster data format of presenting information in a grid of cells.
The data format for GRID is a compressed tiled raster data structure. Like ArcInfo Coverages,
a GRID is stored as a set of files in a directory, including files to keep the attributes of the GRID.
Each GRID represents a single layer of continuous or thematic imagery, but it is also possible
to combine GRID files into a multilayer image. A GRID Stack (.stk) file names multiple
GRIDs to be treated as a multilayer image. Starting with ArcInfo version 7.0, ESRI introduced
the STK format, referred to in ERDAS software as GRID Stack 7.x, which contains multiple
GRIDs. The GRID Stack 7.x format keeps attribute tables for the entire stack in a separate
directory, in a manner similar to that of GRIDs and Coverages.
JFIF (JPEG) JPEG is a set of compression techniques established by the Joint Photographic Experts Group
(JPEG). The most commonly used form of JPEG involves Discrete Cosine Transformation
(DCT), thresholding, followed by Huffman encoding. Since the output image is not exactly the
same as the input image, this form of JPEG is considered to be lossy. JPEG can compress
monochrome imagery, but achieves compression ratios of 20:1 or higher with color (RGB)
imagery by taking advantage of the fact that the data being compressed is a visible image. The
integrity of the source image is preserved by focusing its compression on aspects of the image
that are less noticeable to the human eye. JPEG cannot be used on thematic imagery, due to the
change in pixel values.
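The DCT-plus-thresholding idea behind lossy JPEG can be seen in toy form: transform an 8 x 8 block, zero out the small coefficients, and invert. This is only a schematic of the lossy step, not the JPEG quantization tables or the Huffman stage.

    import numpy as np
    from scipy.fft import dctn, idctn

    block = np.random.randint(0, 256, (8, 8)).astype(np.float64)  # one 8x8 block

    coeffs = dctn(block, norm="ortho")            # forward 2D DCT
    coeffs[np.abs(coeffs) < 10.0] = 0.0           # crude thresholding of small terms

    # The inverse transform is close to, but not exactly, the original block,
    # which is why this form of compression is lossy.
    reconstructed = idctn(coeffs, norm="ortho")
    print(np.abs(reconstructed - block).max())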
There is a lossless form of JPEG compression that uses DCT followed by nonlossy encoding,
but it is not frequently used since it only yields an approximate compression ratio of 2:1.
ERDAS IMAGINE only handles the lossy form of JPEG.
While JPEG compression is used by other file formats, including TIFF, the JPEG File
Interchange Format (JFIF) is a standard file format used to store JPEG-compressed imagery.
The ISO JPEG committee is currently working on a new enhancement to the JPEG standard
known as JPEG 2000, which will incorporate wavelet compression techniques and more
flexibility in JPEG compression.
MrSID Multiresolution Seamless Image Database (MrSID, pronounced Mister Sid) is a wavelet
transform-based compression algorithm designed by LizardTech, Inc. in Seattle, Washington
(http://www.lizardtech.com). The novel developments in MrSID include a memory efficient
implementation and automatic inclusion of pyramid layers in every data set, both of which make
MrSID well-suited to provide efficient storage and retrieval of very large digital images.
The underlying wavelet-based compression methodology used in MrSID yields high
compression ratios while satisfying stringent image quality requirements. The compression
technique used in MrSID is lossy (i.e., the compression-decompression process does not
reproduce the source data pixel-for-pixel). Lossy compression is not appropriate for thematic
imagery, but is essential for large continuous images since it allows much higher compression
ratios than lossless methods (e.g., the Lempel-Ziv-Welch, LZW, algorithm used in the GIF and
TIFF image formats). At standard compression ratios, MrSID encoded imagery is visually
lossless. On typical remotely sensed imagery, lossless methods provide compression ratios of
perhaps 2:1, whereas MrSID provides excellent image quality at compression ratios of 30:1 or
more.
SDTS The Spatial Data Transfer Standard (SDTS) was developed by the USGS to promote and
facilitate the transfer of georeferenced data and its associated metadata between dissimilar
computer systems without loss of fidelity. To achieve these goals, SDTS uses a flexible, self-
describing method of encoding data, which has enough structure to permit interoperability.
For metadata, SDTS requires a number of statements regarding data accuracy. In addition to the
standard metadata, the producer may supply detailed attribute data correlated to any image
feature.
SDTS Profiles
The SDTS standard is organized into profiles. Profiles identify a restricted subset of the standard
needed to solve a certain problem domain. Two subsets of interest to ERDAS IMAGINE users
are:
• Topological Vector Profile (TVP), which covers attributed vector data. This is imported via
the SDTS (Vector) title.
• SDTS Raster Profile and Extensions (SRPE), which covers gridded raster data. This is
imported as SDTS Raster.
SUN Raster A SUN Raster file is an image captured from a monitor display. In addition to GIS, SUN Raster
files can be used in desktop publishing applications or any application where a screen capture
would be useful.
There are two basic ways to create a SUN Raster file on a SUN workstation:
Both methods read the contents of a frame buffer and write the display data to a user-specified
file. Depending on the display hardware and options chosen, screendump can create any of the
file types listed in Table 3-25.
TIFF TIFF was developed by Aldus Corp. (Seattle, Washington) in 1986 in conjunction with major
scanner vendors who needed an easily portable file format for raster image data. Today, the
TIFF format is a widely supported format used in video, fax transmission, medical imaging,
satellite imaging, document storage and retrieval, and desktop publishing applications. In
addition, the GeoTIFF extensions permit TIFF files to be geocoded.
The TIFF format’s main appeal is its flexibility. It handles black and white line images, as well
as gray scale and color images, which can be easily transported between different operating
systems and computers.
Any TIFF file that contains an unsupported value for one of these elements may not be
compatible with ERDAS IMAGINE.
The supported values include:
• Byte order: Motorola (MSB/LSB)
• Image type: gray scale, inverted gray scale, color palette, RGB (3-band)
• Configuration: BIP, BSQ
• Compression: none, Packbits, LZW, LZW with horizontal differencing
All bands must contain the same number of bits (i.e., 4, 4, 4 or 8, 8, 8); multiband data with bit depths differing per band cannot be imported into ERDAS IMAGINE. LZW is governed by patents and is not supported by the basic version of ERDAS IMAGINE.
GeoTIFF According to the GeoTIFF Format Specification, Revision 1.0, "The GeoTIFF spec defines a
set of TIFF tags provided to describe all 'Cartographic' information associated with TIFF
imagery that originates from satellite imaging systems, scanned aerial photography, scanned
maps, digital elevation models, or as a result of geographic analysis" (Ritter and Ruth, 1995).
The GeoTIFF format separates cartographic information into two parts: georeferencing and
geocoding.
Georeferencing
Georeferencing is the process of linking the raster space of an image to a model space (i.e., a
map system). Raster space defines how the coordinate system grid lines are placed relative to
the centers of the pixels of the image. In ERDAS IMAGINE, the grid lines of the coordinate
system always intersect at the center of a pixel. GeoTIFF allows the raster space to be defined
either as having grid lines intersecting at the centers of the pixels (PixelIsPoint) or as having
grid lines intersecting at the upper left corner of the pixels (PixelIsArea). ERDAS IMAGINE
converts the georeferencing values for PixelIsArea images so that they conform to its raster
space definition.
GeoTIFF allows georeferencing via a scale and an offset, a full affine transformation, or a set
of tie points. ERDAS IMAGINE currently ignores GeoTIFF georeferencing in the form of
multiple tie points.
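A sketch of the scale-and-offset georeferencing model together with the PixelIsArea/PixelIsPoint distinction described above; the function name and sample values are illustrative, not part of any GeoTIFF library.

    def pixel_to_map(col, row, origin_x, origin_y, x_size, y_size, pixel_is_area=True):
        # Map coordinate of a pixel center under a simple scale-and-offset model.
        # For PixelIsArea the origin refers to the upper left corner of the upper
        # left pixel, so half a pixel is added to reach the center, matching the
        # pixel-center raster space described above for ERDAS IMAGINE.
        shift = 0.5 if pixel_is_area else 0.0
        map_x = origin_x + (col + shift) * x_size
        map_y = origin_y - (row + shift) * y_size   # y decreases down the image
        return map_x, map_y

    # Example: 30 m pixels with an upper left corner at (500000, 4200000).
    print(pixel_to_map(0, 0, 500000.0, 4200000.0, 30.0, 30.0))  # (500015.0, 4199985.0)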
Geocoding
Geocoding is the process of linking coordinates in model space to the Earth’s surface.
Geocoding allows for the specification of projection, datum, ellipsoid, etc. ERDAS IMAGINE
interprets the GeoTIFF geocoding to determine the latitude and longitude of the map
coordinates for GeoTIFF images. This interpretation also allows the GeoTIFF image to be
reprojected.
In GeoTIFF, the units of the map coordinates are obtained from the geocoding, not from the
georeferencing. In addition, GeoTIFF defines a set of standard projected coordinate systems.
The use of a standard projected coordinate system in GeoTIFF constrains the units that can be
used with that standard system. Therefore, if the units used with a projection in ERDAS
IMAGINE are not equal to the implied units of an equivalent GeoTIFF geocoding, ERDAS
IMAGINE transforms the georeferencing to conform to the implied units so that the standard
projected coordinate system code can be used. The alternative (preserving the georeferencing
as is and producing a nonstandard projected coordinate system) is regarded as less
interoperable.
Vector Data from Other Software Vendors It is possible to directly import several common vector formats into ERDAS IMAGINE. These files become vector layers when imported. These data can then be used for the analyses and, in most cases, exported back to their original format (if desired).
Although data can be converted from one type to another by importing a file into ERDAS
IMAGINE and then exporting the ERDAS IMAGINE file into another format, the import and
export routines were designed to work together. For example, if you have information in
AutoCAD that you would like to use in the GIS, you can import a Drawing Interchange File
(DXF) into ERDAS IMAGINE, do the analysis, and then export the data back to DXF format.
In most cases, attribute data are also imported into ERDAS IMAGINE. Each of the following
sections lists the types of attribute data that are imported.
Use Import/Export to import vector data from other software vendors into ERDAS
IMAGINE vector layers. These routines are based on ArcInfo data conversion routines.
See Chapter 2 “Vector Layers” for more information on ERDAS IMAGINE vector layers.
See Chapter 12 “Geographic Information Systems” for more information about using
vector data in a GIS.
ARCGEN ARCGEN files are ASCII files created with the ArcInfo UNGENERATE command. The
import ARCGEN program is used to import features to a new layer. Topology is not created or
maintained; therefore, the coverage must be built or cleaned after it is imported into ERDAS
IMAGINE.
ARCGEN files must be properly prepared before they are imported into ERDAS
IMAGINE. If there is a syntax error in the data file, the import process may not work. If
this happens, you must kill the process, correct the data file, and then try importing again.
See the ArcInfo documentation for more information about these files.
AutoCAD (DXF) AutoCAD is a vector software package distributed by Autodesk, Inc. (Sausalito, California).
AutoCAD is a computer-aided design program that enables the user to draw two- and three-
dimensional models. This software is frequently used in architecture, engineering, urban
planning, and many other applications.
AutoCAD DXF is the standard interchange format used by most CAD systems. The AutoCAD
program DXFOUT creates a DXF file that can be converted to an ERDAS IMAGINE vector
layer. AutoCAD files can also be output to IGES format using the AutoCAD program
IGESOUT.
DXF files can be converted in either ASCII or binary format. The binary format is an optional
format for AutoCAD Releases 10 and 11. It is structured just like the ASCII format, only the
data are in binary format.
DXF files are composed of a series of related layers. Each layer contains one or more drawing
elements or entities. An entity is a drawing element that can be placed into an AutoCAD
drawing with a single command. When converted to an ERDAS IMAGINE vector layer, each
entity becomes a single feature. Table 3-27 describes how various DXF entities are converted
to ERDAS IMAGINE.
DXF Entity              ERDAS IMAGINE Feature   Comments
Line, 3DLine            Line        These entities become two point lines. The initial Z value of 3D entities is stored.
Trace, Solid, 3DFace    Line        These entities become four or five point lines. The initial Z value of 3D entities is stored.
Circle, Arc             Line        These entities form lines. Circles are composed of 361 points—one vertex for each degree. The first and last point is at the same location.
Polyline                Line        These entities can be grouped to form a single line having many vertices.
Point, Shape            Point       These entities become point features in a layer.
The ERDAS IMAGINE import process also imports line and point attribute data (if they exist)
and creates an INFO directory with the appropriate ACODE (arc attributes) and XCODE (point
attributes) files. If an imported DXF file is exported back to DXF format, this information is
also exported.
Refer to an AutoCAD manual for more information about the format of DXF files.
DLG DLGs are furnished by the U.S. Geological Survey and provide planimetric base map
information, such as transportation, hydrography, contours, and public land survey boundaries.
DLG files are available for the following USGS map series:
• 7.5-minute (1:24,000-scale) quadrangles
• 1:100,000-scale quadrangles
• 1:2,000,000-scale maps
DLGs are topological files that contain nodes, lines, and areas (similar to the points, lines, and
polygons in an ERDAS IMAGINE vector layer). DLGs also store attribute information in the
form of major and minor code pairs. Code pairs are encoded in two integer fields, each
containing six digits. The major code describes the class of the feature (road, stream, etc.) and
the minor code stores more specific information about the feature.
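As a rough illustration of the major/minor pairing (two six-digit integer fields per attribute code), the helper below splits a twelve-character field; the sample value and field layout are invented, not taken from the DLG specification.

    def split_code_pair(field):
        # Split a 12-character attribute field into (major, minor) integer codes.
        major = int(field[:6])
        minor = int(field[6:])
        return major, minor

    # Hypothetical code pair: major code for the feature class, minor for detail.
    print(split_code_pair("050000000412"))   # (50000, 412)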
DLGs can be imported in standard format (144 bytes per record) and optional format (80 bytes
per record). You can export to DLG-3 optional format. Most DLGs are in the Universal
Transverse Mercator (UTM) map projection. However, the 1:2,000,000 scale series is in
geographic coordinates.
104 ERDAS
Topographic Data
USGS DEMs
There are two types of DEMs that are most commonly available from USGS:
• 1:24,000 scale, also called 7.5-minute DEM, is usually referenced to the UTM coordinate
system. It has a spatial resolution of 30 × 30 m.
Both types have a 16-bit range of elevation values, meaning each pixel can have a possible
elevation of -32,768 to 32,767.
DEM data are stored in ASCII format. The data file values in ASCII format are stored as ASCII
characters rather than as zeros and ones like the data file values in binary data.
DEM data files from USGS are initially oriented so that North is on the right side of the image
instead of at the top. ERDAS IMAGINE rotates the data 90° counterclockwise as part of the
Import process so that coordinates read with any ERDAS IMAGINE program are correct.
DTED DTED data are produced by the National Imagery and Mapping Agency (NIMA) and are
available only to US government agencies and their contractors. DTED data are distributed on
9-track tapes and on CD-ROM.
There are two types of DTED data available:
Both are in Arc/second format and are distributed in cells. A cell is a 1° × 1° area of coverage.
Both have a 16-bit range of elevation values.
Like DEMs, DTED data files are also oriented so that North is on the right side of the image
instead of at the top. ERDAS IMAGINE rotates the data 90° counterclockwise as part of the
Import process so that coordinates read with any ERDAS IMAGINE program are correct.
Using Topographic Topographic data have many uses in a GIS. For example, topographic data can be used in
Data conjunction with other data to:
• calculate the shortest and most navigable path over a mountain range
Field Guide 91
Raster and Vector Data Sources
See Chapter 11 “Terrain Analysis” for more information about using topographic and
elevation data.
GPS Data
Introduction Global Positioning System (GPS) data has been in existence since the launch of the first satellite
in the US Navigation System with Time and Ranging (NAVSTAR) system on February 22,
1978, and the availability of a full constellation of satellites since 1994. Initially, the system was
available to US military personnel only, but from 1993 onwards the system started to be used
(in a degraded mode) by the general public. There is also a Russian GPS system called
GLONASS with similar capabilities.
The US NAVSTAR GPS consists of a constellation of 24 satellites orbiting the Earth,
broadcasting data that allows a GPS receiver to calculate its spatial position.
Satellite Position Positions are determined through the traditional ranging technique. The satellites orbit the Earth
(at an altitude of 20,200 km) in such a manner that several are always visible at any location on
the Earth's surface. A GPS receiver with line of site to a GPS satellite can determine how long
the signal broadcast by the satellite has taken to reach its location, and therefore can determine
the distance to the satellite. Thus, if the GPS receiver can see three or more satellites and
determine the distance to each, the GPS receiver can calculate its own position based on the
known positions of the satellites (i.e., the intersection of the spheres of distance from the
satellite locations). Theoretically, only three satellites should be required to find the 3D position
of the receiver, but various inaccuracies (largely based on the quality of the clock within the
GPS receiver that is used to time the arrival of the signal) mean that at least four satellites are
generally required to determine a three-dimensional (3D) x, y, z position.
The explanation above is an over-simplification of the technique used, but does show the
concept behind the use of the GPS system for determining position. The accuracy of that
position is affected by several factors, including the number of satellites that can be seen by a
receiver, but especially for commercial users by Selective Availability. Each satellite actually
sends two signals at different frequencies. One is for civilian use and one for military use. The
signal used for commercial receivers has an error introduced to it called Selective Availability.
Selective Availability introduces a positional inaccuracy of up to 100m to commercial GPS
receivers. This is mainly intended to limit the use of highly accurate GPS positioning to hostile
users, but the errors can be ameliorated through various techniques, such as keeping the GPS
receiver stationary; thereby allowing it to average out the errors, or through more advanced
techniques discussed in the following sections.
92 ERDAS
GPS Data
Differential Correction Differential Correction (or Differential GPS - DGPS) can be used to remove the majority of the
effects of Selective Availability. The technique works by using a second GPS unit (or base
station) that is stationary at a precisely known position. As this GPS knows where it actually is,
it can compare this location with the position it calculates from GPS satellites at any particular
time and calculate an error vector for that time (i.e., the distance and direction that the GPS
reading is in error from the real position). A log of such error vectors can then be compared with
GPS readings taken from the first, mobile unit (the field unit that is actually taking GPS location
readings of features). Under the assumption that the field unit had line of site to the same GPS
satellites to acquire its position as the base station, each field-read position (with an appropriate
time stamp) can be compared to the error vector for that time and the position corrected using
the inverse of the vector. This is generally performed using specialist differential correction
software.
Real Time Differential GPS (RDGPS) takes this technique one step further by having the base
station communicate the error vector via radio to the field unit in real time. The field unit can
then automatically update its own location in real time. The main disadvantage of this
technique is that the range that a GPS base station can broadcast over is generally limited,
thereby restricting the range the mobile unit can be used away from the base station. One of the
biggest uses of this technique is for ocean navigation in coastal areas, where base stations have
been set up along coastlines and around ports so that the GPS systems on board ships can get
accurate real time positional information to help in shallow-water navigation.
Applications of GPS Data    GPS data finds many uses in remote sensing and GIS applications, such as:
• Collection of ground truth data, such as the spectral properties of real-world conditions at known
geographic positions, for use in image classification and validation. The user in the field
identifies a homogeneous area of identifiable land cover or use on the ground and records
its location using the GPS receiver. These locations can then be plotted over an image to
either train a supervised classifier or to test the validity of a classification.
• Moving map applications take the concept of relating the GPS positional information to
your geographic data layers one step further by having the GPS position displayed in real
time over the geographical data layers. Thus you take a computer out into the field and
connect the GPS receiver to the computer, usually via the serial port. Remote sensing and
GIS data layers are then displayed on the computer and the positional signal from the GPS
receiver is plotted on top of them.
• GPS receivers can be used for the collection of positional information for known point
features on the ground. If these can be identified in an image, the positional data can be used
as Ground Control Points (GCPs) for geocorrecting the imagery to a map projection
system. If the imagery is of high resolution, this generally requires differential correction
of the positional data.
• DGPS data can be used to directly capture GIS data and survey data for direct use in a GIS
or CAD system. In this regard the GPS receiver can be compared to using a digitizing tablet
to collect data, but instead of pointing and clicking at features on a paper document, you
are pointing and clicking on the real features to capture the information.
• Precision agriculture uses GPS extensively in conjunction with Variable Rate Technology
(VRT). VRT relies on the use of a VRT controller box connected to a GPS and the pumping
mechanism for a tank full of fertilizers/pesticides/seeds/water/etc. A digital polygon map
(often derived from remotely sensed data) in the controller specifies a predefined amount
to dispense for each polygonal region. As the tractor pulls the tank around the field the GPS
logs the position that is compared to the map position in memory. The correct amount is
then dispensed at that location. The aim of this process is to maximize yields without
causing any environmental damage.
• GPS is often used in conjunction with airborne surveys. The aircraft, as well as carrying a
camera or scanner, has on board one or more GPS receivers tied to an inertial navigation
system. As each frame is exposed precise information is captured (or calculated in post
processing) on the x, y, z and roll, pitch, yaw of the aircraft. Each image in the aerial survey
block thus has initial exterior orientation parameters, which minimizes the need
for control in a block triangulation process.
Ordering Raster Data    Table 3-24 describes the different Landsat, SPOT, AVHRR, and DEM products that can be
ordered. Information in this chart does not reflect all the products that are available, but only the
most common types that can be imported into ERDAS IMAGINE.
(Table 3-24 lists, for each data type, the ground covered, pixel size, number of bands, format, and availability of geocoded products.)
Addresses to Contact For more information about these and related products, contact the following agencies:
• SPOT data:
SPOT Image Corporation
1897 Preston White Dr.
Reston, VA 22091-4368 USA
Telephone: 703-620-2200
Fax: 703-648-1813
Internet: www.spot.com
• Cartographic data, including maps, airphotos, space images, DEMs, planimetric data, and
related information from federal, state, and private agencies:
National Mapping Division
U.S. Geological Survey, National Center
12201 Sunrise Valley Drive
Reston, VA 20192 USA
Telephone: 703/648-4000
Internet: mapping.usgs.gov
• Landsat data:
Customer Services
U.S. Geological Survey
EROS Data Center
47914 252nd Street
Sioux Falls, SD 57198 USA
Telephone: 800/252-4547
Fax: 605/594-6589
Internet: edcwww.cr.usgs.gov/eros-home.html
• RADARSAT data:
RADARSAT International, Inc.
265 Carling Ave., Suite 204
Ottawa, Ontario
Canada K1S 2E1
Telephone: 613-238-5424
Fax: 613-238-5425
Internet: www.rsi.ca
Raster Data from Other Software Vendors    ERDAS IMAGINE also enables you to import data created by other
software vendors. This way, if another type of digital data system is currently in use, or if data
is received from another system, it easily converts to the ERDAS IMAGINE file format for use
in ERDAS IMAGINE.
Data from other vendors may come in that specific vendor’s format, or in a standard format
which can be used by several vendors. The Import and/or Direct Read function handles these
raster data types from other software systems:
• JFIF (JPEG)
• MrSID
• SDTS
• Sun Raster
Other data types might be imported using the Generic Binary import option.
Convert a vector layer to a raster layer, or vice versa, by using IMAGINE Vector.
ERDAS Ver. 7.X The ERDAS Ver. 7.X series was the predecessor of ERDAS IMAGINE software. The two basic
types of ERDAS Ver. 7.X data files are indicated by the file name extensions:
• .LAN—a multiband continuous image file (the name is derived from the Landsat satellite)
• .GIS—a single-band thematic data file in which pixels are divided into discrete categories
(the name is derived from geographic information system)
.LAN and .GIS image files are stored in the same format. The image data are arranged in a BIL
format and can be 4-bit, 8-bit, or 16-bit. The ERDAS Ver. 7.X file structure includes:
When you import a .GIS file, it becomes an image file with one thematic raster layer. When
you import a .LAN file, each band becomes a continuous raster layer within the image file.
GRID and GRID Stacks GRID is a raster geoprocessing program distributed by Environmental Systems Research
Institute, Inc. (ESRI) in Redlands, California. GRID is designed to complement the vector data
model of ArcInfo, a well-known vector GIS that is also distributed by ESRI. The name GRID is
taken from the raster data format, which presents information in a grid of cells.
The data format for GRID is a compressed tiled raster data structure. Like ArcInfo Coverages,
a GRID is stored as a set of files in a directory, including files to keep the attributes of the GRID.
Each GRID represents a single layer of continuous or thematic imagery, but it is also possible
to combine GRID files into a multilayer image. A GRID Stack (.stk) file names multiple
GRIDs to be treated as a multilayer image. Starting with ArcInfo version 7.0, ESRI introduced
the STK format, referred to in ERDAS software as GRID Stack 7.x, which contains multiple
GRIDs. The GRID Stack 7.x format keeps attribute tables for the entire stack in a separate
directory, in a manner similar to that of GRIDs and Coverages.
JFIF (JPEG) JPEG is a set of compression techniques established by the Joint Photographic Experts Group
(JPEG). The most commonly used form of JPEG involves Discrete Cosine Transformation
(DCT), thresholding, followed by Huffman encoding. Since the output image is not exactly the
same as the input image, this form of JPEG is considered to be lossy. JPEG can compress
monochrome imagery, but achieves compression ratios of 20:1 or higher with color (RGB)
imagery by taking advantage of the fact that the data being compressed is a visible image. The
integrity of the source image is preserved by focusing the compression on aspects of the image
that are less noticeable to the human eye. JPEG should not be used on thematic imagery,
because it changes the pixel values.
There is a lossless form of JPEG compression that uses predictive encoding rather than the
DCT, but it is not frequently used since it yields a compression ratio of only approximately 2:1.
ERDAS IMAGINE only handles the lossy form of JPEG.
While JPEG compression is used by other file formats, including TIFF, the JPEG File
Interchange Format (JFIF) is a standard file format used to store JPEG-compressed imagery.
The ISO JPEG committee is currently working on a new enhancement to the JPEG standard
known as JPEG 2000, which will incorporate wavelet compression techniques and more
flexibility in JPEG compression.
MrSID Multiresolution Seamless Image Database (MrSID, pronounced Mister Sid) is a wavelet
transform-based compression algorithm designed by LizardTech, Inc. in Seattle, Washington
(http://www.lizardtech.com). The novel developments in MrSID include a memory efficient
implementation and automatic inclusion of pyramid layers in every data set, both of which make
MrSID well-suited to provide efficient storage and retrieval of very large digital images.
The underlying wavelet-based compression methodology used in MrSID yields high
compression ratios while satisfying stringent image quality requirements. The compression
technique used in MrSID is lossy (i.e., the compression-decompression process does not
reproduce the source data pixel-for-pixel). Lossy compression is not appropriate for thematic
imagery, but is essential for large continuous images since it allows much higher compression
ratios than lossless methods (e.g., the Lempel-Ziv-Welch, LZW, algorithm used in the GIF and
TIFF image formats). At standard compression ratios, MrSID encoded imagery is visually
lossless. On typical remotely sensed imagery, lossless methods provide compression ratios of
perhaps 2:1, whereas MrSID provides excellent image quality at compression ratios of 30:1 or
more.
SDTS The Spatial Data Transfer Standard (SDTS) was developed by the USGS to promote and
facilitate the transfer of georeferenced data and its associated metadata between dissimilar
computer systems without loss of fidelity. To achieve these goals, SDTS uses a flexible, self-
describing method of encoding data, which has enough structure to permit interoperability.
For metadata, SDTS requires a number of statements regarding data accuracy. In addition to the
standard metadata, the producer may supply detailed attribute data correlated to any image
feature.
SDTS Profiles
The SDTS standard is organized into profiles. Profiles identify a restricted subset of the standard
needed to solve a certain problem domain. Two subsets of interest to ERDAS IMAGINE users
are:
• Topological Vector Profile (TVP), which covers attributed vector data. This is imported via
the SDTS (Vector) title.
• SDTS Raster Profile and Extensions (SRPE), which covers gridded raster data. This is
imported as SDTS Raster.
SUN Raster A SUN Raster file is an image captured from a monitor display. In addition to GIS, SUN Raster
files can be used in desktop publishing applications or any application where a screen capture
would be useful.
There are two basic ways to create a SUN Raster file on a SUN workstation:
Both methods read the contents of a frame buffer and write the display data to a user-specified
file. Depending on the display hardware and options chosen, screendump can create any of the
file types listed in Table 3-25.
TIFF TIFF was developed by Aldus Corp. (Seattle, Washington) in 1986 in conjunction with major
scanner vendors who needed an easily portable file format for raster image data. Today, the
TIFF format is a widely supported format used in video, fax transmission, medical imaging,
satellite imaging, document storage and retrieval, and desktop publishing applications. In
addition, the GeoTIFF extensions permit TIFF files to be geocoded.
The TIFF format’s main appeal is its flexibility. It handles black and white line images, as well
as gray scale and color images, which can be easily transported between different operating
systems and computers.
Any TIFF file that contains an unsupported value for one of these elements may not be
compatible with ERDAS IMAGINE. The supported values include:
• Byte order: Motorola (MSB/LSB)
• Image type: gray scale, inverted gray scale, color palette, RGB (3-band)
• Configuration: BIP, BSQ
• Compression: none, Packbits, LZW, LZW with horizontal differencing
NOTE: All bands must contain the same number of bits (i.e., 4, 4, 4 or 8, 8, 8). Multiband data with bit
depths differing per band cannot be imported into ERDAS IMAGINE.
NOTE: LZW is governed by patents and is not supported by the basic version of ERDAS IMAGINE.
GeoTIFF According to the GeoTIFF Format Specification, Revision 1.0, "The GeoTIFF spec defines a
set of TIFF tags provided to describe all ’Cartographic’ information associated with TIFF
imagery that originates from satellite imaging systems, scanned aerial photography, scanned
maps, digital elevation models, or as a result of geographic analysis" (Ritter and Ruth, 1995).
The GeoTIFF format separates cartographic information into two parts: georeferencing and
geocoding.
Georeferencing
Georeferencing is the process of linking the raster space of an image to a model space (i.e., a
map system). Raster space defines how the coordinate system grid lines are placed relative to
the centers of the pixels of the image. In ERDAS IMAGINE, the grid lines of the coordinate
system always intersect at the center of a pixel. GeoTIFF allows the raster space to be defined
either as having grid lines intersecting at the centers of the pixels (PixelIsPoint) or as having
grid lines intersecting at the upper left corner of the pixels (PixelIsArea). ERDAS IMAGINE
converts the georeferencing values for PixelIsArea images so that they conform to its raster
space definition.
GeoTIFF allows georeferencing via a scale and an offset, a full affine transformation, or a set
of tie points. ERDAS IMAGINE currently ignores GeoTIFF georeferencing in the form of
multiple tie points.
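As an illustration of the simplest case, a scale and an offset, the following Python sketch converts a raster-space location (column, row) into model-space map coordinates. The pixel size and tie point used here are invented for the example and are not taken from a real GeoTIFF file.

    def raster_to_model(col, row, scale_x, scale_y, origin_x, origin_y):
        """Map a pixel location to map coordinates using a scale and an offset."""
        x = origin_x + col * scale_x      # eastings increase with column
        y = origin_y - row * scale_y      # northings decrease as rows go down the image
        return x, y

    # e.g., a 30 m pixel grid whose upper-left tie point is at (500000 E, 4200000 N):
    print(raster_to_model(10, 20, 30.0, 30.0, 500000.0, 4200000.0))   # (500300.0, 4199400.0)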
Geocoding
Geocoding is the process of linking coordinates in model space to the Earth’s surface.
Geocoding allows for the specification of projection, datum, ellipsoid, etc. ERDAS IMAGINE
interprets the GeoTIFF geocoding to determine the latitude and longitude of the map
coordinates for GeoTIFF images. This interpretation also allows the GeoTIFF image to be
reprojected.
In GeoTIFF, the units of the map coordinates are obtained from the geocoding, not from the
georeferencing. In addition, GeoTIFF defines a set of standard projected coordinate systems.
The use of a standard projected coordinate system in GeoTIFF constrains the units that can be
used with that standard system. Therefore, if the units used with a projection in ERDAS
IMAGINE are not equal to the implied units of an equivalent GeoTIFF geocoding, ERDAS
IMAGINE transforms the georeferencing to conform to the implied units so that the standard
projected coordinate system code can be used. The alternative (preserving the georeferencing
as is and producing a nonstandard projected coordinate system) is regarded as less
interoperable.
Vector Data from Other Software Vendors    It is possible to directly import several common vector formats
into ERDAS IMAGINE. These files become vector layers when imported. These data can then
be used for the analyses and, in most cases, exported back to their original format (if desired).
Although data can be converted from one type to another by importing a file into ERDAS
IMAGINE and then exporting the ERDAS IMAGINE file into another format, the import and
export routines were designed to work together. For example, if you have information in
AutoCAD that you would like to use in the GIS, you can import a Drawing Interchange File
(DXF) into ERDAS IMAGINE, do the analysis, and then export the data back to DXF format.
In most cases, attribute data are also imported into ERDAS IMAGINE. Each of the following
sections lists the types of attribute data that are imported.
Use Import/Export to import vector data from other software vendors into ERDAS
IMAGINE vector layers. These routines are based on ArcInfo data conversion routines.
See Chapter 2 “Vector Layers” for more information on ERDAS IMAGINE vector layers.
See Chapter 12 “Geographic Information Systems” for more information about using
vector data in a GIS.
ARCGEN ARCGEN files are ASCII files created with the ArcInfo UNGENERATE command. The
import ARCGEN program is used to import features to a new layer. Topology is not created or
maintained; therefore, the coverage must be built or cleaned after it is imported into ERDAS
IMAGINE.
ARCGEN files must be properly prepared before they are imported into ERDAS
IMAGINE. If there is a syntax error in the data file, the import process may not work. If
this happens, you must kill the process, correct the data file, and then try importing again.
See the ArcInfo documentation for more information about these files.
AutoCAD (DXF) AutoCAD is a vector software package distributed by Autodesk, Inc. (Sausalito, California).
AutoCAD is a computer-aided design program that enables the user to draw two- and three-
dimensional models. This software is frequently used in architecture, engineering, urban
planning, and many other applications.
AutoCAD DXF is the standard interchange format used by most CAD systems. The AutoCAD
program DXFOUT creates a DXF file that can be converted to an ERDAS IMAGINE vector
layer. AutoCAD files can also be output to IGES format using the AutoCAD program
IGESOUT.
DXF files can be converted from either the ASCII or the binary format. The binary format is an optional
format for AutoCAD Releases 10 and 11. It is structured just like the ASCII format, only the
data are in binary format.
DXF files are composed of a series of related layers. Each layer contains one or more drawing
elements or entities. An entity is a drawing element that can be placed into an AutoCAD
drawing with a single command. When converted to an ERDAS IMAGINE vector layer, each
entity becomes a single feature. Table 3-27 describes how various DXF entities are converted
to ERDAS IMAGINE.
• Line, 3DLine (ERDAS IMAGINE feature: Line): these entities become two-point lines. The
initial Z value of 3D entities is stored.
• Trace, Solid, 3DFace (ERDAS IMAGINE feature: Line): these entities become four or five
point lines. The initial Z value of 3D entities is stored.
• Circle, Arc (ERDAS IMAGINE feature: Line): these entities form lines. Circles are composed
of 361 points, one vertex for each degree; the first and last points are at the same location.
• Polyline (ERDAS IMAGINE feature: Line): these entities can be grouped to form a single line
having many vertices.
• Point, Shape (ERDAS IMAGINE feature: Point): these entities become point features in a layer.
The ERDAS IMAGINE import process also imports line and point attribute data (if they exist)
and creates an INFO directory with the appropriate ACODE (arc attributes) and XCODE (point
attributes) files. If an imported DXF file is exported back to DXF format, this information is
also exported.
Refer to an AutoCAD manual for more information about the format of DXF files.
DLG DLGs are furnished by the U.S. Geological Survey and provide planimetric base map
information, such as transportation, hydrography, contours, and public land survey boundaries.
DLG files are available for the following USGS map series:
• 1:100,000-scale quadrangles
DLGs are topological files that contain nodes, lines, and areas (similar to the points, lines, and
polygons in an ERDAS IMAGINE vector layer). DLGs also store attribute information in the
form of major and minor code pairs. Code pairs are encoded in two integer fields, each
containing six digits. The major code describes the class of the feature (road, stream, etc.) and
the minor code stores more specific information about the feature.
DLGs can be imported in standard format (144 bytes per record) and optional format (80 bytes
per record). You can export to DLG-3 optional format. Most DLGs are in the Universal
Transverse Mercator (UTM) map projection. However, the 1:2,000,000 scale series is in
geographic coordinates.
The ERDAS IMAGINE import process also imports point, line, and polygon attribute data (if
they exist) and creates an INFO directory with the appropriate ACODE, PCODE (polygon
attributes), and XCODE files. If an imported DLG file is exported back to DLG format, this
information is also exported.
To maintain the topology of a vector layer created from a DLG file, you must Build or
Clean it. See Chapter 12 “Geographic Information Systems” for information on this
process.
ETAK ETAK’s MapBase is an ASCII digital street centerline map product available from ETAK, Inc.
(Menlo Park, California). ETAK files are similar in content to the Dual Independent Map
Encoding (DIME) format used by the U.S. Census Bureau. Each record represents a single
linear feature with address and political, census, and ZIP code boundary information. ETAK has
also included road class designations and, in some areas, major landmark features.
There are four possible types of ETAK features:
• DIME or D types—if the feature type is D, a line is created along with a corresponding
ACODE (arc attribute) record. The coordinates are stored in Lat/Lon decimal degrees.
• Alternate address or A types—each record contains an alternate address record for a line.
These records are written to the attribute file, and are useful for building address coverages.
• Shape features or S types—shape records are used to add vertices to the lines. The
coordinates for these features are in Lat/Lon decimal degrees.
• Landmark or L types—if the feature type is L and you opt to output a landmark layer, then
a point feature is created along with an associated PCODE record.
IGES IGES files are often used to transfer CAD data between systems. IGES Version 3.0 format,
published by the U.S. Department of Commerce, is in uncompressed ASCII format only.
IGES files can be produced in AutoCAD using the IGESOUT command. The following IGES
entities can be converted:
The ERDAS IMAGINE import process also imports line and point attribute data (if they exist)
and creates an INFO directory with the appropriate ACODE and XCODE files. If an imported
IGES file is exported back to IGES format, this information is also exported.
TIGER TIGER files are line network products of the U.S. Census Bureau. The Census Bureau is using
the TIGER system to create and maintain a digital cartographic database that covers the United
States, Puerto Rico, Guam, the Virgin Islands, American Samoa, and the Trust Territories of the
Pacific.
TIGER/Line is the line network product of the TIGER system. The cartographic base is taken
from Geographic Base File/Dual Independent Map Encoding (GBF/DIME), where available,
and from the USGS 1:100,000-scale national map series, SPOT imagery, and a variety of other
sources in all other areas, in order to have continuous coverage for the entire United States. In
addition to line segments, TIGER files contain census geographic codes and, in metropolitan
areas, address ranges for the left and right sides of each segment. TIGER files are available in
ASCII format on both CD-ROM and tape media. All released versions after April 1989 are
supported.
There is a great deal of attribute information provided with TIGER/Line files. Line and point
attribute information can be converted into ERDAS IMAGINE format. The ERDAS IMAGINE
import process creates an INFO directory with the appropriate ACODE and XCODE files. If an
imported TIGER file is exported back to TIGER format, this information is also exported.
TIGER attributes include the following:
• Permanent record numbers—each line segment is assigned a permanent record number that
is maintained throughout all versions of TIGER/Line files.
• Source codes—each line and landmark point feature is assigned a code to specify the
original source.
• Census feature class codes—line segments representing physical features are coded based
on the USGS classification codes in DLG-3 files.
• Legal and statistical area attributes—legal areas include states, counties, townships, towns,
incorporated cities, Indian reservations, and national parks. Statistical areas are areas used
during the census-taking, where legal areas are not adequate for reporting statistics.
TIGER files for major metropolitan areas outside of the United States (e.g., Puerto Rico,
Guam) do not have address ranges.
The information presented in this section, “Vector Data from Other Software Vendors”,
was obtained from the Data Conversion and the 6.0 ARC Command References manuals,
both published by ESRI, Inc., 1992.
Chapter 4
Image Display
Introduction This section defines some important terms that are relevant to image display. Most of the
terminology and definitions used in this chapter are based on the X Window System
(Massachusetts Institute of Technology) terminology. This may differ from other systems, such
as Microsoft Windows NT.
A seat is a combination of an X-server and a host workstation.
• A display may consist of multiple screens. These screens work together, making it possible
to move the mouse from one screen to the next.
• The display hardware contains the memory that is used to produce the image. This
hardware determines which types of displays are available (e.g., true color or pseudo color)
and the pixel depth (e.g., 8-bit or 24-bit).
Figure 4-1: Example of One Seat with One Display and Two Screens
Display Memory Size The size of memory varies for different displays. It is expressed in terms of:
• the number of bits for each pixel or pixel depth, as explained below.
Displays are referred to in terms of a number of bits, such as 8-bit or 24-bit. These bits are used
to determine the number of possible brightness values. For example, in a 24-bit display, 24 bits
per pixel breaks down to eight bits for each of the three color guns per pixel. The number of
possible values that can be expressed by eight bits is 2⁸, or 256. Therefore, on a 24-bit display,
each color gun of a pixel can have any one of 256 possible brightness values, expressed by the
range of values 0 to 255.
The combination of the three color guns, each with 256 possible brightness values, yields 256³
(or 2²⁴ for the 24-bit image display), or 16,777,216 possible colors for each pixel on a 24-bit
display. If the display being used is not 24-bit, the same calculation gives the number of
possible brightness values and colors that can be displayed.
Pixel The term pixel is abbreviated from picture element. As an element, a pixel is the smallest part
of a digital picture (image). Raster image data are divided by a grid, in which each cell of the
grid is represented by a pixel. A pixel is also called a grid cell.
Pixel is a broad term that is used for both:
• the data file value(s) for one data unit in an image (file pixels), or
• the grid cell displayed on the screen or printed on paper (display pixels).
Usually, one pixel in a file corresponds to one pixel in a display or printout. However, an image
can be magnified or reduced so that one file pixel no longer corresponds to one pixel in the
display or printout. For example, if an image is displayed with a magnification factor of 2, then
one file pixel takes up 4 (2 × 2) grid cells on the display screen.
To display an image, a file pixel that consists of one or more numbers must be transformed into
a display pixel with properties that can be seen, such as brightness and color. Whereas the file
pixel has values that are relevant to data (such as wavelength of reflected light), the displayed
pixel must have a particular color or gray level that represents these data file values.
Colors Human perception of color comes from the relative amounts of red, green, and blue light that
are measured by the cones (sensors) in the eye. Red, green, and blue light can be added together
to produce a wide variety of colors—a wider variety than can be formed from the combinations
of any three other colors. Red, green, and blue are therefore the additive primary colors.
A nearly infinite number of shades can be produced when red, green, and blue light are
combined. On a display, different colors (combinations of red, green, and blue) allow you to
perceive changes across an image. Color displays that are available today yield 2²⁴, or
16,777,216, colors. Each color has a possible 256 different values (2⁸).
Color Guns
On a display, color guns direct electron beams that fall on red, green, and blue phosphors. The
phosphors glow at certain frequencies to produce different colors. Color monitors are often
called RGB monitors, referring to the primary colors.
The red, green, and blue phosphors on the picture tube appear as tiny colored dots on the display
screen. The human eye integrates these dots together, and combinations of red, green, and blue
are perceived. Each pixel is represented by an equal number of red, green, and blue phosphors.
Brightness Values
Brightness values (or intensity values) are the quantities of each primary color to be output to
each displayed pixel. When an image is displayed, brightness values are calculated for all three
color guns, for every pixel.
All of the colors that can be output to a display can be expressed with three brightness values—
one for each color gun.
Colormap and Colorcells    A color on the screen is created by a combination of red, green, and blue values, where
each of these components is represented as an 8-bit value. Therefore, 24 bits are needed to represent a
color. Since many systems have only an 8-bit display, a colormap is used to translate the 8-bit
value into a color. A colormap is an ordered set of colorcells, which is used to perform a function
on a set of input values. To display or print an image, the colormap translates data file values in
memory into brightness values for each color gun. Colormaps are not limited to 8-bit displays.
Colorcells
There is a colorcell in the colormap for each data file value. The red, green, and blue values
assigned to the colorcell control the brightness of the color guns for the displayed pixel (Nye
1990). The number of colorcells in a colormap is determined by the number of bits in the display
(e.g., 8-bit, 24-bit).
For example, if a pixel with a data file value of 40 was assigned a display value (colorcell value)
of 24, then this pixel uses the brightness values for the 24th colorcell in the colormap. In the
colormap below (Table 4-1), this pixel is displayed as blue.
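The lookup itself amounts to two table accesses, as in the following Python sketch. The colormap contents here are invented for illustration; only the chain from data file value to colorcell to brightness values follows the description above.

    colormap = {                      # colorcell index -> (red, green, blue) brightness values
        23: (255, 255, 0),            # yellow
        24: (0, 0, 255),              # blue
        25: (255, 0, 0),              # red
    }
    display_lut = {40: 24}            # data file value 40 is assigned colorcell 24

    data_file_value = 40
    colorcell = display_lut[data_file_value]
    red, green, blue = colormap[colorcell]
    print(red, green, blue)           # 0 0 255 -> the pixel is displayed as blue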
The colormap is controlled by the X Window System. There are 256 colorcells in a colormap
with an 8-bit display. This means that 256 colors can be displayed simultaneously on the
display. With a 24-bit display, there are 256 colorcells for each color: red, green, and blue. This
offers 256 × 256 × 256, or 16,777,216 different colors.
When an application requests a color, the server specifies which colorcell contains that color
and returns the color. Colorcells can be read-only or read/write.
Read-only Colorcells
The color assigned to a read-only colorcell can be shared by other application windows, but it
cannot be changed once it is set. To change the color of a pixel on the display, the color of the
corresponding colorcell cannot simply be changed; instead, the pixel value would have to be
changed and the image redisplayed. For this reason, it is not possible to use auto-update
operations in ERDAS IMAGINE with read-only colorcells.
Read/Write Colorcells
The color assigned to a read/write colorcell can be changed, but it cannot be shared by other
application windows. An application can easily change the color of displayed pixels by
changing the color for the colorcell that corresponds to the pixel value. This allows applications
to use auto update operations. However, this colorcell cannot be shared by other application
windows, and all of the colorcells in the colormap could quickly be used up.
Changeable Colormaps
Some colormaps can have both read-only and read/write colorcells. This type of colormap
allows applications to use whichever type of colorcell best suits each task.
Display Types The possible range of different colors is determined by the display type. ERDAS IMAGINE
supports the following types of displays:
• 8-bit PseudoColor
• 24-bit DirectColor
• 24-bit TrueColor
A display may offer more than one visual type and pixel depth. See the ERDAS IMAGINE
Configuration Guide for more information on specific display hardware.
32-bit Displays
A 32-bit display is a combination of an 8-bit PseudoColor and 24-bit DirectColor, or TrueColor
display. Whether or not it is DirectColor or TrueColor depends on the display hardware.
8-bit PseudoColor An 8-bit PseudoColor display has a colormap with 256 colorcells. Each cell has a red, green,
and blue brightness value, giving 256 combinations of red, green, and blue. The data file value
for the pixel is transformed into a colorcell value. The brightness values for the colorcell that is
specified by this colorcell value are used to define the color to be displayed.
(Figure 4-2 shows the red, green, and blue band values for a pixel being combined into a single
colorcell value of 4; the fourth colorcell holds the brightness values 0, 0, 255, so the pixel is
displayed as blue.)
In Figure 4-2, the data file values for a pixel of three continuous raster layers (bands) are transformed
to a colorcell value. Since the colorcell value is four, the pixel is displayed with the brightness
values of the fourth colorcell (blue).
This display grants a small number of colors to ERDAS IMAGINE. It works well with thematic
raster layers containing less than 200 colors and with gray scale continuous raster layers. For
image files with three continuous raster layers (bands), the colors are severely limited because,
under ideal conditions, 256 colors are available on an 8-bit display, while 8-bit, 3-band image
files can contain over 16,000,000 different colors.
Auto Update
An 8-bit PseudoColor display has read-only and read/write colorcells, allowing ERDAS
IMAGINE to perform near real-time color modifications using Auto Update and Auto Apply
options.
24-bit DirectColor A 24-bit DirectColor display enables you to view up to three bands of data at one time, creating
displayed pixels that represent the relationships between the bands by their colors. Since this is
a 24-bit display, it offers up to 256 shades of red, 256 shades of green, and 256 shades of blue,
which is approximately 16 million different colors (256³). The data file values for each band are
transformed into colorcell values. The colorcell that is specified by these values is used to define
the color to be displayed.
(Figure 4-3 shows the data file values for the red, green, and blue bands being transformed into
separate colorcell values of 1, 2, and 6; the corresponding colormap entries supply brightness
values of 0, 90, and 200, so the pixel is displayed as a blue-green color, 0, 90, 200 RGB.)
In Figure 4-3, data file values for a pixel of three continuous raster layers (bands) are
transformed to separate colorcell values for each band. Since the colorcell value is 1 for the red
band, 2 for the green band, and 6 for the blue band, the RGB brightness values are 0, 90, 200.
This displays the pixel as a blue-green color.
This type of display grants a very large number of colors to ERDAS IMAGINE and it works
well with all types of data.
Auto Update
A 24-bit DirectColor display has read-only and read/write colorcells, allowing ERDAS
IMAGINE to perform real-time color modifications using the Auto Update and Auto Apply
options.
24-bit TrueColor A 24-bit TrueColor display enables you to view up to three continuous raster layers (bands) of
data at one time, creating displayed pixels that represent the relationships between the bands by
their colors. The data file values for the pixels are transformed into screen values and the colors
are based on these values. Therefore, the color for the pixel is calculated without querying the
server and the colormap. The colormap for a 24-bit TrueColor display is not available for
ERDAS IMAGINE applications. Once a color is assigned to a screen value, it cannot be
changed, but the color can be shared by other applications.
The screen values are used as the brightness values for the red, green, and blue color guns. Since
this is a 24-bit display, it offers 256 shades of red, 256 shades of green, and 256 shades of blue,
which is approximately 16 million different colors (256³).
In Figure 4-4, data file values for a pixel of three continuous raster layers (bands) are
transformed to separate screen values for each band. Since the screen value is 0 for the red band,
90 for the green band, and 200 for the blue band, the RGB brightness values are 0, 90, and 200.
This displays the pixel as a blue-green color.
Auto Update
The 24-bit TrueColor display does not use the colormap in ERDAS IMAGINE, and thus does
not provide ERDAS IMAGINE with any real-time color changing capability. Each time a color
is changed, the screen values must be calculated and the image must be redrawn.
Color Quality
The 24-bit TrueColor visual provides the best color quality possible with standard equipment.
There is no color degradation under any circumstances with this display.
PC Displays ERDAS IMAGINE for Microsoft Windows NT supports the following visual type and pixel
depths:
• 8-bit PseudoColor
• 15-bit HiColor
• 24-bit TrueColor
8-bit PseudoColor
An 8-bit PseudoColor display for the PC uses the same type of colormap as the X Windows 8-
bit PseudoColor display, except that each colorcell has a range of 0 to 63 on most video display
adapters, instead of 0 to 255. Therefore, each colorcell has a red, green, and blue brightness
value, giving 64 different combinations of red, green, and blue. The colormap, however, is the
same as the X Windows 8-bit PseudoColor display. It has 256 colorcells allowing 256 different
colors to be displayed simultaneously.
15-bit HiColor
A 15-bit HiColor display for the PC assigns colors the same way as the X Windows 24-bit
TrueColor display, except that it offers 32 shades of red, 32 shades of green, and 32 shades of
blue, for a total of 32,768 possible color combinations. Some video display adapters allocate 6
bits to the green color gun, allowing 65,536 colors. These adapters use a 16-bit color scheme.
24-bit TrueColor
A 24-bit TrueColor display for the PC assigns colors the same way as the X Windows 24-bit
TrueColor display.
Displaying Raster Layers    Image files (.img) are raster files in the ERDAS IMAGINE format. There are two types of
raster layers:
• continuous
• thematic
Thematic raster layers require a different display process than continuous raster layers. This
section explains how each raster layer type is displayed.
Continuous Raster Layers    An image file (.img) can contain several continuous raster layers; therefore, each pixel
can have multiple data file values. When displaying an image file with continuous raster layers, it is
possible to assign which layers (bands) are to be displayed with each of the three color guns.
The data file values in each layer are input to the assigned color gun. The most useful color
assignments are those that allow for an easy interpretation of the displayed image. For example:
• A natural-color image approximates the colors that would appear to a human observer of
the scene.
• A color-infrared image shows the scene as it would appear on color-infrared film, which is
familiar to many analysts.
Band assignments are often expressed in R,G,B order. For example, the assignment 4, 2, 1
means that band 4 is assigned to red, band 2 to green, and band 1 to blue. Below are some widely
used band to color gun assignments (Faust, 1989):
• Landsat TM—color-infrared: 4, 3, 2
This is infrared because band 4 = infrared.
• SPOT Multispectral—color-infrared: 3, 2, 1
This is infrared because band 3 = infrared.
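The following Python sketch shows the band-to-color-gun assignment on a synthetic multiband array, using the 1-based band numbering of the assignments listed above. It is an illustration only and does not reproduce how ERDAS IMAGINE performs the assignment internally.

    import numpy as np

    bands, rows, cols = 7, 4, 4
    image = np.random.randint(0, 256, size=(bands, rows, cols), dtype=np.uint8)   # synthetic image

    def assign_bands(img, red_band, green_band, blue_band):
        """Stack the chosen bands into a (rows, cols, 3) RGB display array."""
        return np.dstack([img[red_band - 1], img[green_band - 1], img[blue_band - 1]])

    color_infrared = assign_bands(image, 4, 3, 2)   # band 4 -> red, 3 -> green, 2 -> blue
    print(color_infrared.shape)                     # (4, 4, 3)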
Contrast Table
When an image is displayed, ERDAS IMAGINE automatically creates a contrast table for
continuous raster layers. The red, green, and blue brightness values for each band are stored in
this table.
Since the data file values in continuous raster layers are quantitative and related, the brightness
values in the colormap are also quantitative and related. The screen pixels represent the
relationships between the values of the file pixels by their colors. For example, a screen pixel
that is bright red has a high brightness value in the red color gun, and a high data file value in
the layer assigned to red, relative to other data file values in that layer.
The brightness values often differ from the data file values, but they usually remain in the same
order of lowest to highest. Some meaningful relationships between the values are usually
maintained.
Contrast Stretch
Different displays have different ranges of possible brightness values. The range of most
displays is 0 to 255 for each color gun.
Since the data file values in a continuous raster layer often represent raw data (such as elevation
or an amount of reflected light), the range of data file values is often not the same as the range
of brightness values of the display. Therefore, a contrast stretch is usually performed, which
stretches the range of the values to fit the range of the display.
For example, Figure 4-5 shows a layer that has data file values from 30 to 40. When these values
are used as brightness values, the contrast of the displayed image is poor. A contrast stretch
simply stretches the range between the lower and higher data file values, so that the contrast of
the displayed image is higher—that is, lower data file values are displayed with the lowest
brightness values, and higher data file values are displayed with the highest brightness values.
The colormap stretches the range of colorcell values from 30 to 40, to the range 0 to 255.
Because the output values are incremented at regular intervals, this stretch is a linear contrast
stretch. (The numbers in Figure 4-5 are approximations and do not show an exact linear
relationship.)
(Figure 4-5 shows data file values of 30, 31, 32, and so on being stretched to brightness values
of approximately 0, 25, 51, and so on, across the display range of 0 to 255.)
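A minimal Python sketch of such a linear stretch, using the same 30 to 40 input range as the example above, is shown below. It is illustrative only and is not the ERDAS IMAGINE implementation.

    def linear_stretch(value, in_min=30, in_max=40, out_min=0, out_max=255):
        """Linearly rescale a data file value into a display brightness value."""
        value = min(max(value, in_min), in_max)              # clip to the input range
        scale = (out_max - out_min) / (in_max - in_min)
        return round(out_min + (value - in_min) * scale)

    print([linear_stretch(v) for v in range(30, 41)])
    # [0, 26, 51, 76, 102, 128, 153, 178, 204, 230, 255] -- brightness values spread evenly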
See Chapter 6 “Enhancement” for more information about contrast stretching. Contrast
stretching is performed the same way for display purposes as it is for permanent image
enhancement.
A two standard deviation linear contrast stretch is applied to stretch pixel values of all
image files from 0 to 255 before they are displayed in the Viewer, unless a saved contrast
stretch exists (the file is not changed). This often improves the initial appearance of the
data in the Viewer.
Statistics Files
To perform a contrast stretch, certain statistics are necessary, such as the mean and the standard
deviation of the data file values in each layer.
Use the Image Information utility to create and view statistics for a raster layer.
Usually, not all of the data file values are used in the contrast stretch calculations. The minimum
and maximum data file values of each band are often too extreme to produce good results. When
the minimum and maximum are extreme in relation to the rest of the data, then the majority of
data file values are not stretched across a very wide range, and the displayed image has low
contrast.
(The accompanying figure shows the original histogram of stored data file values, with the mean
and the -2σ and +2σ points marked, and the stretched histograms: the range from -2σ to +2σ is
spread across 0 to 255, and stretched values that fall below 0 or above 255 are not displayed.)
The mean and standard deviation of the data file values for each band are used to locate the
majority of the data file values. The number of standard deviations above and below the mean
can be entered, which determines the range of data used in the stretch.
See Appendix A “Math Topics” for more information on mean and standard deviation.
Use the Contrast Tools dialog, which is accessible from the Lookup Table Modification
dialog, to enter the number of standard deviations to be used in the contrast stretch.
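The following Python sketch illustrates the idea on synthetic values: the stretch range is taken as the mean plus or minus two standard deviations, so a handful of extreme values cannot flatten the contrast of the majority of the data. It is an illustration only, not the ERDAS IMAGINE implementation.

    import numpy as np

    data = np.random.normal(loc=120, scale=15, size=10_000)   # synthetic band values
    data[:5] = [0, 0, 1020, 1400, 2000]                       # a few extreme outliers

    n_std = 2.0
    low = data.mean() - n_std * data.std()                    # lower end of the stretch range
    high = data.mean() + n_std * data.std()                   # upper end of the stretch range

    stretched = (np.clip(data, low, high) - low) / (high - low) * 255
    print(stretched.min(), stretched.max())                   # 0.0 255.0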
(The accompanying figure shows, for each of the three bands: the histogram, the range of data
file values to be displayed, the colormap that maps data file values in (0 to 255) to brightness
values out (0 to 255), the brightness values in each color gun, and the resulting color display.)
Thematic Raster Layers    A thematic raster layer generally contains pixels that have been classified, or put into
distinct categories. Each data file value is a class value, which is simply a number for a particular
category. A thematic raster layer is stored in an image (.img) file. Only one data file value—the
class value—is stored for each pixel.
Since these class values are not necessarily related, the gradations that are possible in true color
mode are not usually useful in pseudo color. The class system gives the thematic layer a discrete
look, in which each class can have its own color.
Color Table
When a thematic raster layer is displayed, ERDAS IMAGINE automatically creates a color
table. The red, green, and blue brightness values for each class are stored in this table.
RGB Colors
Individual color schemes can be created by combining red, green, and blue in different
combinations, and assigning colors to the classes of a thematic layer.
Colors can be expressed numerically, as the brightness values for each color gun. Brightness
values of a display generally range from 0 to 255; however, ERDAS IMAGINE translates the
values to a range of 0 to 1. The maximum brightness value for the display device is scaled to 1.
colors listed in Table 4-2 are based on the range that is used to assign brightness values in
ERDAS IMAGINE.
Table 4-2 contains only a partial listing of commonly used colors. Over 16 million colors are
possible on a 24-bit display.
NOTE: Black is the absence of all color (0,0,0) and white is created from the highest values of
all three colors (1, 1, 1). To lighten a color, increase all three brightness values. To darken a
color, decrease all three brightness values.
Use the Raster Attribute Editor to create your own color scheme.
(The accompanying figure shows a thematic layer whose class values are
    1 2 3
    4 3 5
    2 1 4
displayed with the color scheme below, so that the display shows
    R O Y
    V Y G
    O R V
Color scheme (red, green, blue brightness values): Class 1: Red (255, 0, 0); Class 2: Orange
(255, 128, 0); Class 3: Yellow (255, 255, 0); Class 4: Violet (128, 0, 255); Class 5: Green (0, 255, 0).)
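Applying such a color scheme is a simple lookup from class value to brightness values, as in the following Python sketch. The class grid and colors repeat the example above; the code is illustrative only and is not the ERDAS IMAGINE implementation.

    class_colors = {                 # class value -> (red, green, blue) brightness values
        1: (255, 0, 0),              # red
        2: (255, 128, 0),            # orange
        3: (255, 255, 0),            # yellow
        4: (128, 0, 255),            # violet
        5: (0, 255, 0),              # green
    }

    thematic_layer = [               # class values for a 3 x 3 block of pixels
        [1, 2, 3],
        [4, 3, 5],
        [2, 1, 4],
    ]

    displayed = [[class_colors[c] for c in row] for row in thematic_layer]
    print(displayed[0][0])           # (255, 0, 0) -- a class 1 pixel is displayed as red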
Using the Viewer The Viewer is a window for displaying raster, vector, and annotation layers. You can open as
many Viewer windows as your window manager supports.
NOTE: The more Viewers that are opened simultaneously, the more RAM is required.
The Viewer not only makes digital images visible quickly, but it can also be used as a tool for
image processing and raster GIS modeling. The uses of the Viewer are listed briefly in this
section, and described in greater detail in other chapters of the ERDAS Field Guide.
Colormap
ERDAS IMAGINE does not use the entire colormap because there are other applications that
also need to use it, including the window manager, terminal windows, ArcView, or a clock.
Therefore, there are some limitations to the number of colors that the Viewer can display
simultaneously, and flickering may occur as well.
Color Flickering
If an application requests a new color that does not exist in the colormap, the server assigns that
color to an empty colorcell. However, if there are not any available colorcells and the
application requires a private colorcell, then a private colormap is created for the application
window. Since this is a private colormap, when the cursor is moved out of the window, the
server uses the main colormap and the brightness values assigned to the colorcells. Therefore,
the colors in the private colormap are not applied and the screen flickers. Once the cursor is
moved into the application window, the correct colors are applied for that window.
Resampling
When a raster layer(s) is displayed, the file pixels may be resampled for display on the screen.
Resampling is used to calculate pixel values when one raster grid must be fitted to another. In
this case, the raster grid defined by the file must be fit to the grid of screen pixels in the Viewer.
All Viewer operations are file-based. So, any time an image is resampled in the Viewer, the
Viewer uses the file as its source. If the raster layer is magnified or reduced, the Viewer refits
the file grid to the new screen grid.
The resampling methods available are:
• Nearest Neighbor—uses the value of the closest pixel to assign to the output pixel value.
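A minimal Python sketch of Nearest Neighbor resampling on a small array is shown below; it illustrates the principle only and is not the resampling code used by the Viewer.

    import numpy as np

    def nearest_neighbor(src, out_rows, out_cols):
        """Resample a 2D array to a new grid by picking the closest source pixel."""
        in_rows, in_cols = src.shape
        rows = np.floor((np.arange(out_rows) + 0.5) * in_rows / out_rows).astype(int)
        cols = np.floor((np.arange(out_cols) + 0.5) * in_cols / out_cols).astype(int)
        return src[np.ix_(rows, cols)]

    src = np.arange(16).reshape(4, 4)
    print(nearest_neighbor(src, 8, 8))   # each file pixel now covers a 2 x 2 block of output pixels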
Preference Editor
The Preference Editor enables you to set parameters for the Viewer that affect the way the
Viewer operates.
See the ERDAS IMAGINE On-Line Help for the Preference Editor for information on how
to set preferences for the Viewer.
Pyramid Layers Sometimes a large image file may take a long time to display in the Viewer or to be resampled
by an application. The Pyramid Layer option enables you to display large images faster and
allows certain applications to rapidly access the resampled data. Pyramid layers are image
layers that are copies of the original layer, each successively reduced by a power of 2 and then
resampled. If the raster layer is thematic, then it is resampled using the Nearest Neighbor
method. If the raster layer is continuous, it is resampled by a method that is similar to Cubic
Convolution. The data file values for sixteen pixels in a 4 × 4 window are used to calculate an
output data file value with a filter function.
The number of pyramid layers created depends on the size of the original image. A larger image
produces more pyramid layers. When the Create Pyramid Layer option is selected, ERDAS
IMAGINE automatically creates successively reduced layers until the final pyramid layer can
be contained in one block. The default block size is 64 × 64 pixels.
Pyramid layers are added as additional layers in the image file. However, these layers cannot be
accessed for display. The file size is increased by approximately one-third when pyramid layers
are created. The actual increase in file size can be determined by multiplying the layer size by
the following formula:

    \sum_{i=0}^{n} \frac{1}{4^{i}}
Where:
n = number of pyramid layers
NOTE: This equation is applicable to all types of pyramid layers: internal and external.
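The following Python sketch (illustrative only) computes both quantities for a 4K × 4K layer with the default 64 × 64 block size: the pyramid layer sizes, and the relative file size given by the summation above.

    def pyramid_sizes(rows, cols, block=64):
        """Successively halve the layer until it fits within one block."""
        sizes = []
        while rows > block or cols > block:
            rows, cols = max(rows // 2, 1), max(cols // 2, 1)
            sizes.append((rows, cols))
        return sizes

    levels = pyramid_sizes(4096, 4096)
    print(levels)                          # [(2048, 2048), (1024, 1024), ..., (64, 64)]

    n = len(levels)                        # number of pyramid layers
    size_factor = sum(1 / 4**i for i in range(0, n + 1))
    print(round(size_factor, 3))           # about 1.333 -- roughly one-third larger than the original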
Pyramid layers do not appear as layers which can be processed: they are for viewing purposes
only. Therefore, they do not appear as layers in other parts of the ERDAS IMAGINE system
(e.g., the Arrange Layers dialog).
The Image Files (General) section of the Preference Editor contains a preference for the
Initial Pyramid Layer Number. By default, the value is set to 2. This means that the first
pyramid layer generated is discarded. In Figure 4-9 below, the 2K × 2K layer is
discarded. If you wish to keep that layer, then set the Initial Pyramid Layer Number to 1.
Pyramid layers can be deleted through the Image Information utility. However, when
pyramid layers are deleted, they are not deleted from the image file; therefore, the image
file size does not change, but ERDAS IMAGINE utilizes this file space, if necessary.
Pyramid layers are deleted from viewing and resampling access only - that is, they can no
longer be viewed or used in an application.
(Figure 4-9 shows a 4K × 4K original image stored in an image file together with its pyramid
layers, down to a 128 × 128 and a 64 × 64 layer; ERDAS IMAGINE selects the pyramid layer
that displays the fastest in the Viewer.)
For example, a file that is 4K × 4K pixels could take a long time to display when the image is
fit to the Viewer. The Compute Pyramid Layers option creates additional layers successively
reduced from 4K × 4K to 2K × 2K, 1K × 1K, 512 × 512, 256 × 256, 128 × 128, and down to 64 × 64. ERDAS
IMAGINE then selects the pyramid layer size most appropriate for display in the Viewer
window when the image is displayed.
The Compute Pyramid Layers option is available from Import and the Image Information
utility.
For more information about the .img format, see Chapter 1 “Raster Data” and the On-
Line Help.
Dithering    A display is capable of showing only a limited number of colors simultaneously. For example,
an 8-bit display has a colormap with 256 colorcells; therefore, a maximum of 256 colors can be
displayed at the same time. If some colors are being used for auto update color adjustment while
other colors are still being used for other imagery, the color quality degrades.
Dithering lets a smaller set of colors appear to be a larger set of colors. If the desired display
color is not available, a dithering algorithm mixes available colors to provide something that
looks like the desired color.
For a simple example, assume the system can display only two colors: black and white, and you
want to display gray. This can be accomplished by alternating the display of black and white
pixels.
In Figure 4-10, dithering is used between a black pixel and a white pixel to obtain a gray pixel.
The colors that the Viewer dithers between are similar to each other, and are dithered on the
pixel level. Using similar colors and dithering on the pixel level makes the image appear
smooth.
Color Patches
When the Viewer performs dithering, it uses patches of 2 × 2 pixels. If the desired color has an
exact match, then all of the values in the patch match it. If the desired color is halfway between
two of the usable colors, the patch contains two pixels of each of the surrounding usable colors.
If it is 3/4 of the way between two usable colors, the patch contains 3 pixels of the color it is
closest to, and 1 pixel of the color that is second closest. Figure 4-11 shows what the color
patches would look like if the usable colors were black and white and the desired color was gray.
If the desired color is not an even multiple of 1/4 of the way between two allowable colors, it is
rounded to the nearest 1/4. The Viewer separately dithers the red, green, and blue components
of a desired color.
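For a single color component, the patch construction can be sketched in Python as follows; this is illustrative only, not the Viewer's dithering code.

    def dither_patch(desired, low, high):
        """Return a 2 x 2 patch of low/high values approximating the desired value."""
        quarters = round(4 * (desired - low) / (high - low))   # 0, 1, 2, 3, or 4
        cells = [high] * quarters + [low] * (4 - quarters)     # how many pixels get each usable color
        return [cells[0:2], cells[2:4]]

    # The display can only show black (0) and white (255); the desired color is a mid gray (128):
    for row in dither_patch(128, 0, 255):
        print(row)        # two pixels of each -- the eye integrates them to a gray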
Color Artifacts
Since the Viewer requires 2 × 2 pixel patches to represent a color, and actual images typically
have a different color for each pixel, artifacts may appear in an image that has been dithered.
Usually, the difference in color resolution is insignificant, because adjacent pixels are normally
similar to each other. Similarity between adjacent pixels usually smooths out artifacts that
appear.
Viewing Layers The Viewer displays layers as one of the following types of view layers:
• annotation
• vector
• pseudo color
• gray scale
• true color
Viewing Multiple Layers    It is possible to view as many layers of all types (with the exception of vector layers,
which have a limit of 10) at one time in a single Viewer.
To overlay multiple layers in one Viewer, they must all be referenced to the same map
coordinate system. The layers are positioned geographically within the window, and resampled
to the same scale as previously displayed layers. Therefore, raster layers in one Viewer can have
different cell sizes.
When multiple layers are magnified or reduced, raster layers are resampled from the file to fit
to the new scale.
Display multiple layers from the Viewer. Be sure to turn off the Clear Display check box
when you open subsequent layers.
Overlapping Layers
When layers overlap, the order in which the layers are opened is very important. The last layer
that is opened always appears to be on top of the previously opened layers.
In a raster layer, it is possible to make values of zero transparent in the Viewer, meaning that
they have no opacity. Thus, if a raster layer with zeros is displayed over other layers, the areas
with zero values allow the underlying layers to show through.
Opacity is a measure of how opaque, or solid, a color is displayed in a raster layer. Opacity is a
component of the color scheme of categorical data displayed in pseudo color.
• 100% opacity means that a color is completely opaque, and cannot be seen through.
• 50% opacity lets some color show, and lets some of the underlying layers show through.
The effect is like looking at the underlying layers through a colored fog.
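The effect of opacity on a single pixel can be sketched in Python as follows; the colors and the blending arithmetic are illustrative only and are not meant to reproduce the Viewer's rendering.

    def display_pixel(data_value, layer_color, under_rgb, opacity):
        """Return the displayed color for one pixel of a raster layer drawn over other layers."""
        if data_value == 0:
            return under_rgb                       # zero values are transparent and show the layers below
        return tuple(round(opacity * c + (1 - opacity) * u)
                     for c, u in zip(layer_color, under_rgb))

    under = (0, 200, 0)                            # what the underlying layers show (green)
    print(display_pixel(5, (255, 0, 0), under, 1.0))   # (255, 0, 0)   -- 100% opacity
    print(display_pixel(5, (255, 0, 0), under, 0.5))   # (128, 100, 0) -- 50% opacity, a red "fog"
    print(display_pixel(0, (255, 0, 0), under, 0.5))   # (0, 200, 0)   -- zero value shows through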
By manipulating opacity, you can compare two or more layers of raster data that are
displayed in a Viewer. Opacity can be set at any value in the range of 0% to 100%. Use
the Arrange Layers dialog to restack layers in a Viewer so that they overlap in a different
order, if needed.
Non-Overlapping Layers
Multiple layers that are opened in the same Viewer do not have to overlap. Layers that cover
distinct geographic areas can be opened in the same Viewer. The layers are automatically
positioned in the Viewer window according to their map coordinates, and are positioned relative
to one another geographically. The map coordinate systems for the layers must be the same.
Linking Viewers Linking Viewers is appropriate when two Viewers cover the same geographic area (at least
partially), and are referenced to the same map units. When two Viewers are linked:
• Either the same geographic point is displayed in the centers of both Viewers, or a box
shows where one view fits inside the other.
• You can manipulate the zoom ratio of one Viewer from another.
• Any inquire cursors in one Viewer appear in the other, for multiple-Viewer pixel inquiry.
• The auto-zoom is enabled, if the Viewers have the same zoom ratio and nearly the same
window size.
It is often helpful to display a wide view of a scene in one Viewer, and then a close-up of a
particular area in another Viewer. When two such Viewers are linked, a box opens in the wide
view window to show where the close-up view lies.
Any image that is displayed at a magnification (higher zoom ratio) of another image in a linked
Viewer is represented in the other Viewer by a box. If several Viewers are linked together, there
may be multiple boxes in that Viewer.
Figure 4-12 shows how one view fits inside the other linked Viewer. The link box shows the
extent of the larger-scale view.
Zoom and Roam Zooming enlarges an image on the display. When an image is zoomed, it can be roamed
(scrolled) so that the desired portion of the image appears on the display screen. Any image that
does not fit entirely in the Viewer can be roamed and/or zoomed. Roaming and zooming have
no effect on how the image is stored in the file.
The zoom ratio describes the size of the image on the screen in terms of the number of file pixels
used to store the image. It is the ratio of the number of screen pixels in the X or Y dimension to
the number that are used to display the corresponding file pixels.
A zoom ratio greater than 1 is a magnification, which makes the image features appear larger in
the Viewer. A zoom ratio less than 1 is a reduction, which makes the image features appear
smaller in the Viewer.
A zoom ratio of 1 means... each file pixel is displayed with 1 screen pixel in the Viewer.
A zoom ratio of 2 means... each file pixel is displayed with a block of 2 × 2 screen pixels. Effectively, the image is displayed at 200%.
A zoom ratio of 0.5 means... each block of 2 × 2 file pixels is displayed with 1 screen pixel. Effectively, the image is displayed at 50%.
NOTE: ERDAS IMAGINE allows floating point zoom ratios, so that images can be zoomed at
virtually any scale (i.e., continuous fractional zoom). Resampling is necessary whenever an
image is displayed with a new pixel grid. The resampling method used when an image is zoomed
is the same one used when the image is displayed, as specified in the Open Raster Layer dialog.
The default resampling method is Nearest Neighbor.
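As an illustration of how a zoom ratio maps file pixels to screen pixels under Nearest Neighbor resampling, consider the following Python sketch. It is not the Viewer's code; the function name and the array-based layout are assumptions.

import numpy as np

def zoom_nearest(data, ratio):
    """Display-style zoom: each screen pixel takes the value of the nearest file pixel.

    ratio > 1 magnifies (e.g. 2 -> each file pixel covers a 2 x 2 screen block);
    ratio < 1 reduces (e.g. 0.5 -> each 2 x 2 file block maps to one screen pixel).
    """
    rows, cols = data.shape
    out_rows, out_cols = int(rows * ratio), int(cols * ratio)
    r_idx = np.minimum((np.arange(out_rows) / ratio).astype(int), rows - 1)
    c_idx = np.minimum((np.arange(out_cols) / ratio).astype(int), cols - 1)
    return data[np.ix_(r_idx, c_idx)]

img = np.arange(16).reshape(4, 4)
print(zoom_nearest(img, 2).shape)    # (8, 8) -- displayed at 200%
print(zoom_nearest(img, 0.5).shape)  # (2, 2) -- displayed at 50%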
Zoom the data in the Viewer via the Viewer menu bar, the Viewer tool bar, or the Quick
View right-button menu.
Geographic Information
To prepare to run many programs, it may be necessary to determine the data file coordinates, map coordinates, or data file values for a particular pixel or a group of pixels. By displaying the image in the Viewer and then selecting the pixel(s) of interest, important information about the pixel(s) can be viewed.
The Quick View right-button menu gives you options to view information about a specific
pixel. Use the Raster Attribute Editor to access information about classes in a thematic
layer.
See Chapter 12 “Geographic Information Systems” for information about attribute data.
Enhancing Continuous Raster Layers
Working with the brightness values in the colormap is useful for image enhancement. Often, a trial and error approach is needed to produce an image that has the right contrast and highlights the right features. By using the tools in the Viewer, it is possible to quickly view the effects of different enhancement techniques, undo enhancements that are not helpful, and then save the best results to disk.
Use the Raster options from the Viewer to enhance continuous raster layers.
Creating New Image Files
It is easy to create a new image file (.img) from the layer(s) displayed in the Viewer. The new image file contains three continuous raster layers (RGB), regardless of how many layers are currently displayed. The Image Information utility must be used to create statistics for the new image file before the file is enhanced.
Annotation layers can be converted to raster format, and written to an image file. Or, vector data
can be gridded into an image, overwriting the values of the pixels in the image plane, and
incorporated into the same band as the image.
Use the Viewer to .img function to create a new image file from the currently displayed
raster layers.
Chapter 5
Mosaic
Introduction The Mosaic process offers you the capability to stitch images together so that one large, cohesive image of an area can be created. The Mosaic Tool also lets you smooth, color balance, or histogram-match the input images before mosaicking them, so that the final mosaic presents a better overall picture. The images must contain map and projection information, but they do not need to be in the same projection or have the same cell sizes. The input images must have the same number of layers.
There are a number of features included with the Mosaic Tool to aid you in creating a better
mosaicked image from many separate images. In this chapter, the following features will be
discussed as part of the Mosaic Tool input image options. In Input Image Mode:
• Exclude Areas
• Image Dodging
• Color Balancing
• Histogram Matching
The features available in Intersection Mode and Output Image Mode are discussed later in this chapter.
Image Dodging The Image Dodging feature of the Mosaic Tool applies a filter and global statistics across each
image you are mosaicking in order to smooth out light imbalance over the image. The outcome
of Image Dodging is very similar to that of Color Balancing, but if you wish to perform both
functions on your images before mosaicking, you need to do Image Dodging first. Unlike Color
Balancing, Image Dodging uses blocks instead of pixels to balance the image.
When you bring up the Image Dodging dialog you have several different sections. Options for
Current Image, Options for All Images, and Display Setting are all above the viewer area
showing the image and a place for previewing the dodged image. If you want to skip dodging
for a certain image, you can check the Don’t do dodging on this image box and skip to the next
image you want to mosaic.
In the area titled Statistics Collection, you can change the Grid Size, Skip Factor X, and Skip
Factor Y. If you want a specific number to apply to all of your images, you can click that button
so you don’t have to reenter the information with each new image.
In Options For All Images, you can first choose whether the image should be dodged by each
band or as one. You then decide if you want the dodging performed across all of the images you
intend to mosaic or just one image. This is helpful if you have a set of images that all look
smooth except for one that may show a shadow or bright spot in it. If you click Edit Correction
Settings, you will get a prompt to Compute Settings first. If you want to, go ahead and compute
the settings you have stipulated in the dialog. After the settings are computed, you will see a
dialog titled Set Dodging Correction Parameters. In this dialog you are able to change and reset
the brightness and contrast and the constraints of the image.
Use Display Setting to choose either an RGB image or a Single Band image. If using an RGB
image, you can change those bands to whatever combination you wish. After you compute the
settings a final time, preview the dodged image in the dialog viewer so you will know if you
need to do anything further to it before mosaicking.
Color Balancing When you click Use Color Balancing, you are given the option of Automatic Color Balancing.
If you choose this option, the method will be chosen for you. If you want to manually choose
the surface method and display options, choose Manual Color Manipulation in the Set Color
Balancing dialog.
Mosaic Color Balancing gives you several options to balance any color disparities in your
images before mosaicking them together into one large image. When you choose to use Color
Balancing in the Color Corrections dialog, you will be asked if you want to color balance your
images automatically or manually. For more control over how the images are color balanced,
you should choose the manual color balancing option. Once you choose this option, you will
have access to the Mosaic Color Balancing tool where you can choose different surface
methods, display options, and surface settings for color balancing your images.
Surface Methods
When choosing a surface method you should concentrate on how the light abnormality in your
image is dispersed. Depending on the shape of the bright or shadowed area you want to correct,
you should choose one of the following:
• Parabolic - The color difference is elliptical and does not darken at an equal rate on all sides.
• Conic - The color difference will peak in brightness in the center and darken at an equal
rate on all sides.
• Exponential - The color difference is very bright in the center and slowly, but not always
evenly, darkens on all sides.
It may be necessary to experiment when deciding which surface method to use, since it can be difficult to tell the difference between parabolic, conic, and exponential right away. Conic is usually best for hot spots found in aerial photography, although the linear method may be needed in those situations to correct flight line variations. The linear method is also useful for images with a large falloff in illumination along the look direction, especially SAR images and images from off-nadir viewing sensors.
In the same area, you will see a check box for Common center for all layers. If you check this
option, all layers in the current image will have their center points set to that of the current layer.
Whenever the selector is moved, the text box updated, or the reset button clicked, all of the
layers will be updated. If you move the center point, and you wish to bring it back to the middle
of the image, you can click Reset Center Point in the Surface Method area.
Display Setting
The Display Setting area of the Mosaic Color Balancing tool lets you choose between RGB
images and Single Band images. You can also alter which layer in an RGB image is the red,
green, or blue.
Surface Settings
When you choose a Surface Method, the Surface Settings become the parameters used in that
method’s formula. The parameters define the surface, and the surface will then be used to flatten
the brightness variation throughout the image. You can change the following Surface Settings:
• Offset
• Scale
• Center X
• Center Y
• Axis Ratio
As you change the settings, you can see the Image Profile graph change as well. If you want to
preview the color balanced image before accepting it, you can click Preview at the bottom of
the Mosaic Color Balancing tool. This is helpful because you can change any disparities that
still exist in the image.
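The way such a surface can flatten a brightness falloff is illustrated by the Python sketch below. The exact formula used by Mosaic Color Balancing is not documented here, so the elliptical-paraboloid form, the function names, and the normalization step are assumptions; only the parameter names (offset, scale, center, axis ratio) follow the dialog.

import numpy as np

def parabolic_surface(shape, offset, scale, center_x, center_y, axis_ratio):
    """Evaluate an elliptical-paraboloid brightness surface over the image grid.

    Illustrative form only: brightness rises quadratically with elliptical
    distance from (center_x, center_y).
    """
    rows, cols = shape
    y, x = np.mgrid[0:rows, 0:cols]
    d2 = (x - center_x) ** 2 + (axis_ratio * (y - center_y)) ** 2
    return offset + scale * d2

def flatten(image, surface):
    # Divide out the modelled brightness variation, then rescale to the mean.
    corrected = image / np.maximum(surface, 1e-6)
    return corrected * surface.mean()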
Histogram Matching Histogram Matching is used in other facets of IMAGINE, but it is particularly useful in the mosaicking process. Use the Histogram Matching option to match data of the same or adjacent scenes that were captured on different days, or data that differ slightly because of sun angle or atmospheric effects.
By choosing Histogram Matching through the Color Corrections dialog in Mosaic Tool, you
have the options of choosing the Matching Method, the Histogram Type, and whether or not to
use an external reference file. When choosing a Matching Method, decide if you want your
images to be matched according to all the other images you want to mosaic or just matched to
the overlapping areas between the images. For Histogram Type you can choose to match images
band by band or by the intensity (RGB) of the images.
If you check Use external reference, you will get the choice of using an image file or parameters
as your Histogram Source. If you have an image that contains the characteristics you would like
to see in the image you are running through Histogram Matching, then you should use it.
Intersection Mode When you mosaic images, you will have overlapping areas. For those overlapping areas, you
can specify a cutline so that the pixels on one side of a particular cutline take the value of one
overlapping image, while the pixels on the other side of the cutline take the value of another
overlapping image. The cutlines can be generated manually or automatically.
When you choose the Set Mode for Intersection button on the Mosaic Tool toolbar, you have
several different options for handling the overlapping of your images. The features for dealing
with image overlap include:
• Automatic clipping, extending, and merging of cutlines that cross multiple image
intersections
• Loading images and calibration information from triangulated block files as well as setting
the elevation source
• Selecting mosaic output areas with ASCII files containing corner coordinates of sheets that
may be rotated. The ASCII import tool is used to try to parse ASCII files that do not
conform to a predetermined format.
• Loading clip boundary output regions from AOI or vector files. This boundary applies to
all output regions. Pixels outside the clip boundary will be set to the background color.
Set Overlap Function When you are using more than one image, you need to define how the images should overlap. If no cutline exists, Set Overlap Function lets you choose how to handle the overlap of the images; if a cutline does exist, it lets you choose the smoothing or feathering options to apply along the cutline.
No Cutline Exists
When no cutline exists between overlapping images, you will need to choose how to handle the
overlap. You are given the following choices:
• Overlay
• Average
• Minimum
• Maximum
• Feather
Cutline Exists
When a cutline does exist between images, you will need to decide on smoothing and feathering
options to cover the overlap area in the vicinity of the cutline. The Smoothing Options area
allows you to choose both the Distance and the Smoothing Filter. The Feathering Options given
are No Feathering, Feathering, and Feathering by Distance. If you choose Feathering by
Distance, you will be able to enter a specific distance.
Automatically Generate Cutlines For Intersection
The current implementation of Automatic Cutline Generation is geometry-based. The method uses the centerlines of the overlapping polygons as cutlines. While this is a very straightforward approach, it is not recommended for images containing buildings, bridges, rivers, and so on
because of the possibility the method would make the mosaicked images look obviously
inaccurate near the cutline area. For example, if the cutline crosses a bridge, the bridge may look
broken at the point where the cutline crosses it.
Geometry-based Cutline Generation
Geometry-based Cutline Generation is simpler because it is based only on the geometry of the overlapping region between images. Pixel values of the involved images are not
used. For an overlapping region that only involves two images, the geometry-based cutline can
be seen as a center line of the overlapping area that cuts the region into two equal halves. One
half is closer to the center of the first image, and the other half is closer to the center of the
second image. Geometry-based Cutline Generation runs very quickly compared to Weighted
Cutline Generation. Geometry-based generation does not have to factor in pixels from the
images. Use the geometry-based method when your images contain homogenous areas like
grasses or lakes, but use Weighted Cutline Generation for images where the cutline cannot break
such as buildings, roads, rivers, and urban areas.
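Conceptually, the centerline cutline for a two-image overlap can be traced by labeling each overlap pixel with the nearer image center, as in the Python sketch below. This is only an illustration of the idea, not the Mosaic Tool's implementation; the function name and the squared-distance test are assumptions.

import numpy as np

def geometry_based_assignment(shape, center_a, center_b):
    """Label each pixel of the overlap region by the nearer image center.

    shape is the (rows, cols) of the overlap; centers are (row, col) of the
    two image centers in the same coordinate space. The boundary between the
    two labels is the geometry-based cutline.
    """
    y, x = np.mgrid[0:shape[0], 0:shape[1]]
    da = (y - center_a[0]) ** 2 + (x - center_a[1]) ** 2
    db = (y - center_b[0]) ** 2 + (x - center_b[1]) ** 2
    return np.where(da <= db, 0, 1)   # 0 -> take image A, 1 -> take image B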
Output Image Mode
After you have chosen the images to be mosaicked; gone through any color balancing, histogram matching, or image dodging; and checked overlapping images for possible cutline needs, you are ready to output the images to an actual mosaic file. When you select the Set Mode for Output portion of the Mosaic Tool, the first feature you will want to use is Output Image Options. After choosing those options, you can preview the mosaic and then run it to disk.
Output Image Options This dialog lets you define your output map areas and change output map projection if you wish.
You will be given the choice of using Union of All Inputs, User-defined AOI, Map Series File,
USGS Maps Database, or ASCII Sheet File as your defining feature for an output map area. The
default is Union of All Inputs.
Different choices yield different options to further modify the output image. For instance, if you
select User-defined AOI, then you are given the choice of outputting multiple AOI objects to
either multiple files or a single file. If you choose Map Series File, you will be able to enter the
filename you want to use and choose whether to treat the map extent as pixel centers or pixel
edges.
If you choose ASCII Sheet File to define the Output Map Area, you will need to supply a text
file. If you need to create an ASCII file, you should do so according to the following definitions:
ASCII Sheet File Definition:
The ASCII Sheet File may have one or more records in the following format. Fields are white space delimited.
• Field 1: The name of the sheet (optional).
• Field 2: One of UL, UR, LL, or LR to identify which coordinate the following fields represent.
• Field 3: X coordinate
• Field 4: Y coordinate
Fields 2-4 may be repeated for any two of the coordinates or for all four. If all four coordinates
are present, the sheet will be treated as a rotated orthoimage. Otherwise, it will be treated as a
north-up orthoimage.
Each line represents one sheet. A line consists of eight floating-point values representing four
corners of the output sheet. The name of the sheet may also be present.
Examples:
0 0 10 10
OR
some_name 0 0 10 10
OR
0 0 3 0 3 3 0 3
OR
some_name 0 0 3 0 3 3 0 3
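A minimal parser for records like the examples above might look like the following Python sketch. It assumes whitespace-delimited tokens with an optional leading sheet name followed by four or eight coordinate values; it does not handle the UL/UR/LL/LR labels, and the function name is an assumption.

def parse_sheet_line(line):
    """Parse one ASCII Sheet File record into (name, list_of_xy_corners)."""
    tokens = line.split()
    if not tokens:
        return None
    # An optional sheet name may lead the record.
    name = None
    try:
        float(tokens[0])
    except ValueError:
        name, tokens = tokens[0], tokens[1:]
    values = [float(t) for t in tokens]
    if len(values) not in (4, 8):
        raise ValueError("expected 2 or 4 corner coordinates")
    corners = list(zip(values[0::2], values[1::2]))
    # Four corners -> rotated orthoimage; two corners -> north-up orthoimage.
    return name, corners

print(parse_sheet_line("some_name 0 0 3 0 3 3 0 3"))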
Also part of Output Image Options is the choice of a Clip Boundary. If you choose Clip Boundary, any area outside of the Clip Boundary will be designated as the background value in your output image. This differs from the User-defined AOI because Clip Boundary applies
to all output images. You can also click Change Output Map Projection to bring up the
Projection Chooser. The Projection Chooser lets you choose a particular projection to use from
categories and projections around the world. If you want to choose a customized map
projection, you can do that as well.
You are also given the options of changing the Output Cell Size from the default of 8.0, and you
can choose a particular Output Data Type from a dropdown list instead of the default Unsigned
8 bit.
When you are done selecting Output Image Options, you can preview the mosaicked image
before saving it as a file.
Run Mosaic To Disk When you are ready to process the mosaicked image to disk, you can click this icon and open the Output File Name dialog. From this dialog, browse to the directory where you want to store
your mosaicked image, and enter the file name for the image. There are several options on the
Output Options tab such as Output to a Common Look Up Table, Ignore Input Values, Output
Background Value, and Create Output in Batch mode. You can choose from any of these
according to your desired outcome.
Chapter 6
Enhancement
Introduction Image enhancement is the process of making an image more interpretable for a particular
application (Faust, 1989). Enhancement makes important features of raw, remotely sensed data
more interpretable to the human eye. Enhancement techniques are often used instead of
classification techniques for feature extraction—studying and locating areas and objects on the
ground and deriving useful information from images.
The techniques to be used in image enhancement depend upon:
• Your data—the different bands of Landsat, SPOT, and other imaging sensors are selected
to detect certain features. You must know the parameters of the bands being used before
performing any enhancement. (See Chapter 1 “Raster Data” for more details.)
• Your objective—for example, sharpening an image to identify features that can be used for
training samples requires a different set of enhancement techniques than reducing the
number of bands in the study. You must have a clear idea of the final product desired before
enhancement is performed.
This chapter discusses the enhancement techniques available with ERDAS IMAGINE.
See “Bibliography” to find current literature that provides a more detailed discussion of
image processing enhancement techniques.
Display vs. File Enhancement
With ERDAS IMAGINE, image enhancement may be performed:
• temporarily, upon the image that is displayed in the Viewer (by manipulating the function and display memories), or
• permanently, upon the image data in the data file.
Enhancing a displayed image is much faster than enhancing an image on disk. If one is looking
for certain visual effects, it may be beneficial to perform some trial and error enhancement
techniques on the display. Then, when the desired results are obtained, the values that are stored
in the display device memory can be used to make the same changes to the data file.
For more information about displayed images and the memory of the display device, see
Chapter 4 “Image Display”.
Spatial Modeling Enhancements
Two types of models for enhancement can be created in ERDAS IMAGINE:
• Graphical models—use Model Maker (Spatial Modeler) to easily, and with great flexibility, construct models that can be used to enhance the data.
• Script models—for even greater flexibility, use the Spatial Modeler Language (SML) to construct models in script form. SML enables you to write scripts that can be edited and run from the Spatial Modeler component or directly from the command line.
You can edit models created with Model Maker using SML or Model Maker.
Although a graphical model and a script model look different, they produce the same results
when applied.
Image Interpreter
ERDAS IMAGINE supplies many algorithms constructed as models, which are ready to be
applied with user-input parameters at the touch of a button. These graphical models, created
with Model Maker, are listed as menu functions in the Image Interpreter. These functions are
mentioned throughout this chapter. Just remember, these are modeling functions which can be
edited and adapted as needed with Model Maker or the SML.
The modeling functions available for enhancement in Image Interpreter are briefly described in
Table 6-1.
• Adaptive Filter - Varies the contrast stretch for each pixel depending upon the DN values in the surrounding moving window.
• Statistical Filter - Produces the pixel output DN by averaging pixels within a moving window that fall within a statistically defined range.
• Resolution Merge - Merges imagery of differing spatial resolutions.
• LUT (Lookup Table) Stretch - Creates an output image that contains the data values as modified by a lookup table.
• Histogram Equalization - Redistributes pixel values with a nonlinear contrast stretch so that there are approximately the same number of pixels with each value within a range.
• Histogram Match - Mathematically determines a lookup table that converts the histogram of one image to resemble the histogram of another.
• Brightness Inversion - Allows both linear and nonlinear reversal of the image intensity range.
• Haze Reduction* - Dehazes Landsat 4 and 5 TM data and panchromatic data.
• Destripe TM Data - Removes striping from a raw TM4 or TM5 data file.
NOTE: There are other Image Interpreter functions that do not necessarily apply to image
enhancement.
Correcting Data Each generation of sensors shows improved data acquisition and image quality over previous
generations. However, some anomalies still exist that are inherent to certain sensors and can be
corrected by applying mathematical formulas derived from the distortions (Lillesand and
Kiefer, 1987). In addition, the natural distortion that results from the curvature and rotation of
the Earth in relation to the sensor platform produces distortions in the image data, which can
also be corrected.
Radiometric Correction
Generally, there are two types of data correction: radiometric and geometric. Radiometric
correction addresses variations in the pixel intensities (DNs) that are not caused by the object or
scene being scanned. These variations include:
• topographic effects
• atmospheric effects
Geometric Correction
Geometric correction addresses errors in the relative positions of pixels. These errors are
induced by:
• terrain variations
Radiometric Correction: Visible/Infrared Imagery
Striping
Striping or banding occurs if a detector goes out of adjustment—that is, it provides readings
consistently greater than or less than the other detectors for the same band over the same ground
cover.
Some Landsat 1, 2, and 3 data have striping every sixth line, because of improper calibration of
some of the 24 detectors that were used by the MSS. The stripes are not constant data values,
nor is there a constant error factor or bias. The differing response of the errant detector is a
complex function of the data value sensed.
This problem has been largely eliminated in the newer sensors. Various algorithms have been
advanced in current literature to help correct this problem in the older data. Among these
algorithms are simple along-line convolution, high-pass filtering, and forward and reverse
principal component transformations (Crippen, 1989a).
Data from airborne multispectral or hyperspectral imaging scanners also shows a pronounced
striping pattern due to varying offsets in the multielement detectors. This effect can be further
exacerbated by unfavorable sun angle. These artifacts can be minimized by correcting each scan
line to a scene-derived average (Kruse, 1988).
Use the Image Interpreter or the Spatial Modeler to implement algorithms to eliminate striping. The Spatial Modeler editing capabilities allow you to adapt the algorithms to best
address the data. The IMAGINE Radar Interpreter Adjust Brightness function also
corrects some of these problems.
Line Dropout
Another common remote sensing device error is line dropout. Line dropout occurs when a
detector either completely fails to function, or becomes temporarily saturated during a scan (like
the effect of a camera flash on the retina). The result is a line or partial line of data with higher
data file values, creating a horizontal streak until the detector(s) recovers, if it recovers.
Line dropout is usually corrected by replacing the bad line with a line of estimated data file
values, which is based on the lines above and below it.
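A minimal sketch of that correction, assuming the band is held in a NumPy array and the bad scan lines are already known, is shown below; the function name is an assumption.

import numpy as np

def repair_line_dropout(band, bad_rows):
    """Replace dropped scan lines with the average of the lines above and below."""
    fixed = band.astype(float).copy()
    for r in bad_rows:
        above = fixed[r - 1] if r > 0 else fixed[r + 1]
        below = fixed[r + 1] if r < band.shape[0] - 1 else fixed[r - 1]
        fixed[r] = (above + below) / 2.0
    return fixed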
Atmospheric Effects The effects of the atmosphere upon remotely-sensed data are not considered errors, since they
are part of the signal received by the sensing device (Bernstein, 1983). However, it is often
important to remove atmospheric effects, especially for scene matching and change detection
analysis.
Over the past 30 years, a number of algorithms have been developed to correct for variations in
atmospheric transmission. Four categories are mentioned here:
• linear regressions
• atmospheric modeling
Use the Spatial Modeler to construct the algorithms for these operations.
Linear Regressions
A number of methods using linear regressions have been tried. These techniques use bispectral
plots and assume that the position of any pixel along that plot is strictly a result of illumination.
The slope then equals the relative reflectivities for the two spectral bands. At an illumination of
zero, the regression plots should pass through the bispectral origin. Offsets from this represent
the additive extraneous components, due to atmosphere effects (Crippen, 1987).
Atmospheric Modeling
Atmospheric modeling is computationally complex and requires either assumptions or inputs
concerning the atmosphere at the time of imaging. The atmospheric model used to define the
computations is frequently Lowtran or Modtran (Kneizys et al, 1988). This model requires
inputs such as atmospheric profile (e.g., pressure, temperature, water vapor, ozone), aerosol
type, elevation, solar zenith angle, and sensor viewing angle.
Accurate atmospheric modeling is essential in preprocessing hyperspectral data sets where
bandwidths are typically 10 nm or less. These narrow bandwidth corrections can then be
combined to simulate the much wider bandwidths of Landsat or SPOT sensors (Richter, 1990).
Geometric Correction As previously noted, geometric correction is applied to raw sensor data to correct errors of
perspective due to the Earth’s curvature and sensor motion. Today, some of these errors are
commonly removed at the sensor’s data processing center. In the past, some data from Landsat
MSS 1, 2, and 3 were not corrected before distribution.
Many visible/infrared sensors are not nadir-viewing: they look to the side. For some
applications, such as stereo viewing or DEM generation, this is an advantage. For other
applications, it is a complicating factor.
In addition, even a nadir-viewing sensor is viewing only the scene center at true nadir. Other
pixels, especially those on the view periphery, are viewed off-nadir. For scenes covering very
large geographic areas (such as AVHRR), this can be a significant problem.
This and other factors, such as Earth curvature, result in geometric imperfections in the sensor
image. Terrain variations have the same distorting effect, but on a smaller (pixel-by-pixel) scale.
These factors can be addressed by rectifying the image to a map.
A more rigorous geometric correction utilizes a DEM and sensor position information to correct
these distortions. This is orthocorrection.
Radiometric Enhancement
Radiometric enhancement deals with the individual values of the pixels in the image. It differs from spatial enhancement (discussed in "Spatial Enhancement"), which takes into account the
values of neighboring pixels.
Depending on the points and the bands in which they appear, radiometric enhancements that are
applied to one band may not be appropriate for other bands. Therefore, the radiometric
enhancement of a multiband image can usually be considered as a series of independent, single-
band enhancements (Faust, 1989).
Radiometric enhancement usually does not bring out the contrast of every pixel in an image.
Contrast can be lost between some pixels, while gained on others.
[Figure 6-1: histograms of the original and enhanced data, plotting frequency against data file values 0 to 255, with the range between j and k marked]
In Figure 6-1, the range between j and k in the histogram of the original data is about one third
of the total range of the data. When the same data are radiometrically enhanced, the range
between j and k can be widened. Therefore, the pixels between j and k gain contrast—it is easier
to distinguish different brightness values in these pixels.
However, the pixels outside the range between j and k are more grouped together than in the
original histogram to compensate for the stretch between j and k. Contrast among these pixels
is lost.
Contrast Stretching When radiometric enhancements are performed on the display device, the transformation of
data file values into brightness values is illustrated by the graph of a lookup table.
For example, Figure 6-2 shows the graph of a lookup table that increases the contrast of data file
values in the middle range of the input data (the range within the brackets). Note that the input
range within the bracket is narrow, but the output brightness values for the same pixels are
stretched over a wider range. This process is called contrast stretching.
[Figure: lookup table graphs (linear, nonlinear, and piecewise linear), plotting output brightness values from 0 to 255 against input data file values from 0 to 255]
Notice that the graph line with the steepest (highest) slope brings out the most contrast by stretching output values farther apart.
In ERDAS IMAGINE, the Piecewise Linear Contrast function is set up so that there are
always pixels in each data file value from 0 to 255. You can manipulate the percentage of
pixels in a particular range, but you cannot eliminate a range of data file values.
[Figure: piecewise linear contrast stretch, plotting LUT value (0 to 100%) for the Low, Middle, and High input ranges]
The contrast value for each range represents the percent of the available output range that particular range occupies. The brightness value for each range represents the middle of the total range of brightness values occupied by that range. Because the three ranges together must occupy the full output range, changing the contrast or brightness of one range may affect the contrast and brightness of the other ranges. For example, if the contrast of the low range increases, it forces the contrast of the middle to decrease.
In ERDAS IMAGINE, you can permanently change the data file values to the lookup table
values. Use the Image Interpreter LUT Stretch function to create an .img output file with
the same data values as the displayed contrast stretched image.
See Chapter 1 “Raster Data” for more information on the data contained in image files.
The statistics in the image file contain the mean, standard deviation, and other statistics on each
band of data. The mean and standard deviation are used to determine the range of data file values
to be translated into brightness values or new data file values. You can specify the number of
standard deviations from the mean that are to be used in the contrast stretch. Usually the data
file values that are two standard deviations above and below the mean are used. If the data have
a normal distribution, then this range represents approximately 95 percent of the data.
The mean and standard deviation are used instead of the minimum and maximum data file
values because the minimum and maximum data file values are usually not representative of
most of the data. A notable exception occurs when the feature being sought is in shadow. The
shadow pixels are usually at the low extreme of the data file values, outside the range of two
standard deviations from the mean.
The use of these statistics in contrast stretching is discussed and illustrated in Chapter 4
“Image Display”. Statistical terms are discussed in Appendix A “Math Topics”.
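A minimal sketch of a two-standard-deviation linear stretch to 8-bit brightness values, assuming a NumPy band array, is shown below; the function name is an assumption.

import numpy as np

def std_stretch(band, n_std=2.0):
    """Linearly stretch mean +/- n_std standard deviations to 0-255, clipping the tails."""
    mean, std = band.mean(), band.std()
    low, high = mean - n_std * std, mean + n_std * std
    stretched = (band - low) / (high - low) * 255.0
    return np.clip(stretched, 0, 255).astype(np.uint8)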
Figure 6-6: Contrast Stretch Using Lookup Tables, and Effect on Histogram
Each panel plots output brightness values against input data file values (0 to 255), with the input histogram overlaid:
1. Linear stretch. Values are clipped at 255.
2. A breakpoint is added to the linear function, redistributing the contrast.
3. Another breakpoint added. Contrast at the peak of the histogram continues to increase.
4. The breakpoint at the top of the function is moved so that values are not clipped.
Histogram Equalization
Histogram equalization is a nonlinear stretch that redistributes pixel values so that there is approximately the same number of pixels with each value within a range. The result
approximates a flat histogram. Therefore, contrast is increased at the peaks of the histogram and
lessened at the tails.
Histogram equalization can also separate pixels into distinct groups if there are few output
values over a wide range. This can have the visual effect of a crude classification.
[Figure: effect of histogram equalization. Pixels at the tails of the histogram are grouped together, so contrast there is lost; pixels at the peak are spread apart, so contrast there is gained.]
To perform a histogram equalization, the pixel values of an image (either data file values or
brightness values) are reassigned to a certain number of bins, which are simply numbered sets
of pixels. The pixels are then given new values, based upon the bins to which they are assigned.
The following parameters are entered:
• N - the number of bins to which pixel values can be assigned. If there are many bins or many
pixels with the same value(s), some bins may be empty.
• M - the maximum of the range of the output values. The range of the output values is from
0 to M.
The total number of pixels is divided by the number of bins, equaling the number of pixels per
bin, as shown in the following equation:
A = T / N
Where:
N = the number of bins
T = the total number of pixels in the image
A = the equalized number of pixels per bin
The pixels of each input value are assigned to bins, so that the number of pixels in each bin is
as close to A as possible. Consider Figure 6-8:
[Figure 6-8: example histogram of 240 pixels over data file values 0 to 9, with A = 24 pixels per bin]
There are 240 pixels represented by this histogram. To equalize this histogram to 10 bins, there would be 240 pixels / 10 bins = 24 pixels per bin, so A = 24. The following equation determines the bin to which the pixels with value i are assigned:
Bi = int [ ( H1 + H2 + ... + Hi-1 + Hi / 2 ) / A ]
Where:
A = equalized number of pixels per bin (see above)
Hi = the number of values with the value i (histogram)
int = integer function (truncating real numbers to integer)
Bi = bin number for pixels with value i
Source: Modified from Gonzalez and Wintz, 1977
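The bin-assignment equation above can be written directly in code. The following Python sketch assumes the input histogram is given as an array of counts H; the function name is an assumption, and the rescaling of the bins to the output range 0 to M (described next) is omitted.

import numpy as np

def equalize_bins(histogram, n_bins):
    """Assign each input value i to bin B_i = int((H_1 + ... + H_{i-1} + H_i/2) / A)."""
    H = np.asarray(histogram, dtype=float)
    A = H.sum() / n_bins                                       # equalized pixels per bin
    cumulative = np.concatenate(([0.0], np.cumsum(H)[:-1]))    # sum of H_k for k < i
    bins = ((cumulative + H / 2.0) / A).astype(int)            # int truncates
    return np.minimum(bins, n_bins - 1)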
The 10 bins are rescaled to the range 0 to M. In this example, M = 9, because the input values
ranged from 0 to 9, so that the equalized histogram can be compared to the original. The output
histogram of this equalized image looks like Figure 6-9:
[Figure 6-9: output histogram of the equalized image, showing the number of pixels at each output data file value from 0 to 9, with A = 24]
Effect on Contrast
By comparing the original histogram of the example data with the one above, you can see that
the enhanced image gains contrast in the peaks of the original histogram. For example, the input
range of 3 to 7 is stretched to the range 1 to 8. However, data values at the tails of the original
histogram are grouped together. Input values 0 through 2 all have the output value of 0. So,
contrast among the tail pixels, which usually make up the darkest and brightest regions of the
input image, is lost.
The resulting histogram is not exactly flat, since the pixels can rarely be grouped together into
bins with an equal number of pixels. Sets of pixels with the same value are never split up to form
equal bins.
Level Slice
A level slice is similar to a histogram equalization in that it divides the data into equal amounts.
A level slice on a true color display creates a stair-stepped lookup table. The effect on the data
is that input file values are grouped together at regular intervals into a discrete number of levels,
each with one output brightness value.
To perform a true color level slice, you must specify a range for the output brightness values
and a number of output levels. The lookup table is then stair-stepped so that there is an equal
number of input pixels in each of the output levels.
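A minimal sketch of building such a stair-stepped lookup table for an 8-bit band is shown below; the function name and the use of the cumulative histogram to divide the pixels evenly among the levels are assumptions about one reasonable way to do it.

import numpy as np

def level_slice_lut(band, n_levels, out_min=0, out_max=255):
    """Build a 256-entry stair-stepped LUT with roughly equal pixel counts per output level."""
    hist, _ = np.histogram(band, bins=256, range=(0, 256))
    cdf = np.cumsum(hist) / hist.sum()                    # fraction of pixels <= each value
    level = np.minimum((cdf * n_levels).astype(int), n_levels - 1)
    out_values = np.linspace(out_min, out_max, n_levels)  # one brightness value per level
    return out_values[level].astype(np.uint8)

# Apply with: sliced = level_slice_lut(band, 8)[band] for an 8-bit input band.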
Histogram Matching Histogram matching is the process of determining a lookup table that converts the histogram of
one image to resemble the histogram of another. Histogram matching is useful for matching data
of the same or adjacent scenes that were scanned on separate days, or are slightly different
because of sun angle or atmospheric effects. This is especially useful for mosaicking or change
detection.
To achieve good results with histogram matching, the two input images should have similar
characteristics:
• Relative dark and light features in the image should be the same.
• For some applications, the spatial resolution of the data should be the same.
• The relative distributions of land covers should be about the same, even when matching
scenes that are not of the same area. If one image has clouds and the other does not, then
the clouds should be removed before matching the histograms. This can be done using the
AOI function. The AOI function is available from the Viewer menu bar.
To match the histograms, a lookup table is mathematically derived, which serves as a function
for converting one histogram to the other, as illustrated in Figure 6-10.
[Figure 6-10: histogram matching. The source histogram (a), mapped through the derived lookup table (b), approximates the reference histogram.]
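The derivation of the lookup table can be sketched by matching cumulative histograms, as in the following Python fragment; it assumes 8-bit inputs, and the function name is an assumption rather than the IMAGINE implementation.

import numpy as np

def match_histogram_lut(source, reference):
    """Derive a 256-entry LUT that makes the source histogram resemble the reference."""
    src_hist, _ = np.histogram(source, bins=256, range=(0, 256))
    ref_hist, _ = np.histogram(reference, bins=256, range=(0, 256))
    src_cdf = np.cumsum(src_hist) / src_hist.sum()
    ref_cdf = np.cumsum(ref_hist) / ref_hist.sum()
    # For each source value, find the reference value with the nearest cumulative frequency.
    lut = np.searchsorted(ref_cdf, src_cdf).clip(0, 255)
    return lut.astype(np.uint8)

# matched = match_histogram_lut(image_a, image_b)[image_a]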
Brightness Inversion The brightness inversion functions produce images that have the opposite contrast of the
original image. Dark detail becomes light, and light detail becomes dark. This can also be used
to invert a negative image that has been scanned to produce a positive image.
Brightness inversion has two options: inverse and reverse. Both options convert the input data
range (commonly 0 - 255) to 0 - 1.0. A min-max remapping is used to simultaneously stretch
the image and handle any input bit format. The output image is in floating point format, so a
min-max stretch is used to convert the output image into 8-bit format.
Inverse is useful for emphasizing detail that would otherwise be lost in the darkness of the low DN pixels. This function applies the following algorithm:
DNout = 0.1 / DNin   if 0.1 < DNin < 1
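A minimal sketch of the inverse and reverse options on data remapped to 0 - 1 is shown below. The clamp to 1.0 for input values at or below 0.1 is an assumption added to keep the output bounded; the function name is also an assumption.

import numpy as np

def brightness_inversion(band, mode="inverse"):
    """Invert image brightness on data remapped to the 0-1 range.

    'inverse' emphasizes detail in dark (low DN) pixels; 'reverse' is a
    simple linear reversal. The output is rescaled back to 8 bits.
    """
    dn = (band - band.min()) / float(band.max() - band.min())   # min-max remap to 0-1
    if mode == "inverse":
        # DNout = 0.1 / DNin for DNin > 0.1; clamp to 1.0 below that (assumed)
        out = np.where(dn > 0.1, 0.1 / np.maximum(dn, 1e-6), 1.0)
    else:
        out = 1.0 - dn                                           # reverse
    out = (out - out.min()) / (out.max() - out.min())            # min-max stretch
    return (out * 255).astype(np.uint8)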
Spatial Enhancement
While radiometric enhancements operate on each pixel individually, spatial enhancement modifies pixel values based on the values of surrounding pixels. Spatial enhancement deals
largely with spatial frequency, which is the difference between the highest and lowest values of
a contiguous set of pixels. Jensen (Jensen, 1986) defines spatial frequency as “the number of
changes in brightness value per unit distance for any particular part of an image.”
Consider the examples in Figure 6-11:
• zero spatial frequency—a flat image, in which every pixel has the same value
• highest spatial frequency—an image consisting of a checkerboard of black and white pixels
The spatial enhancement techniques discussed in this section include convolution filtering, the Crisp filter, resolution merging, and adaptive filtering.
See "Radar Imagery Enhancement" for a discussion of Edge Detection and Texture Analysis. These spatial enhancement techniques can be applied to any type of data.
Convolution Filtering Convolution filtering is the process of averaging small sets of pixels across an image.
Convolution filtering is used to change the spatial frequency characteristics of an image
(Jensen, 1996).
A convolution kernel is a matrix of numbers that is used to average the value of each pixel with
the values of surrounding pixels in a particular way. The numbers in the matrix serve to weight
this average toward particular pixels. These numbers are often called coefficients, because they
are used as such in the mathematical equations.
In ERDAS IMAGINE, there are four ways you can apply convolution filtering to an image:
1) The kernel filtering option in the Viewer
2) The Convolution function in Image Interpreter
3) The IMAGINE Radar Interpreter Edge Enhancement function
4) The Convolution function in Model Maker
Filtering is a broad term, which refers to the altering of spatial or spectral features for image
enhancement (Jensen, 1996). Convolution filtering is one method of spatial filtering. Some texts
may use the terms synonymously.
Convolution Example
To understand how one pixel is convolved, imagine that the convolution kernel is overlaid on
the data file values of the image (in one band), so that the pixel to be convolved is in the center
of the window.
Data (5 × 5):          Kernel (3 × 3):
2 8 6 6 6              -1 -1 -1
2 8 6 6 6              -1 16 -1
2 2 8 6 6              -1 -1 -1
2 2 2 8 6
2 2 2 2 8
Figure 6-12 shows a 3 × 3 convolution kernel being applied to the pixel in the third column,
third row of the sample data (the pixel that corresponds to the center of the kernel).
To compute the output value for this pixel, each value in the convolution kernel is multiplied by
the image pixel value that corresponds to it. These products are summed, and the total is divided
by the sum of the values in the kernel, as shown here:
integer [((-1 × 8) + (-1 × 6) + (-1 × 6) + (-1 × 2) + (16 × 8) + (-1 × 6) +
(-1 × 2) + (-1 × 2) + (-1 × 8)) ÷ (-1 + -1 + -1 + -1 + 16 + -1 + -1 + -1 + -1)]
= integer [88 ÷ 8] = 11
When the 2 × 2 set of pixels near the center of this 5 × 5 image is convolved, the output values are (rows and columns numbered 1 to 5):
       col: 1  2  3  4  5
row 1:      2  8  6  6  6
row 2:      2 11  5  6  6
row 3:      2  0 11  6  6
row 4:      2  2  2  8  6
row 5:      2  2  2  2  8
The kernel used in this example is a high frequency kernel, as explained below. It is important
to note that the relatively lower values become lower, and the higher values become higher, thus
increasing the spatial frequency of the image.
Convolution Formula
The following formula is used to derive an output data file value for the pixel being convolved
(in the center):
V = ( Σi=1..q Σj=1..q ( fij × dij ) ) / F
Where:
fij = the coefficient of a convolution kernel at position i,j (in the kernel)
dij = the data value of the pixel that corresponds to fij
q = the dimension of the kernel, assuming a square kernel (if q=3, the kernel is 3 ×
3)
F = either the sum of the coefficients of the kernel, or 1 if the sum of coefficients is
0
V = the output pixel value
In cases where V is less than 0, V is clipped to 0.
Source: Modified from Jensen, 1996; Schowengerdt, 1983
The sum of the coefficients (F) is used as the denominator of the equation above, so that the
output values are in relatively the same range as the input values. Since F cannot equal zero
(division by zero is not defined), F is set to 1 if the sum is zero.
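The formula, including the zero-sum rule (F set to 1) and the clipping of negative output values, can be sketched as follows in Python using the example data and kernel from Figure 6-12. Leaving the edge pixels unchanged and the function name are assumptions made for this sketch.

import numpy as np

def convolve(data, kernel):
    """Apply a square convolution kernel as defined above (edge pixels left unchanged)."""
    q = kernel.shape[0]
    pad = q // 2
    F = kernel.sum()
    if F == 0:                      # zero-sum kernel: no division
        F = 1
    out = data.astype(float).copy()
    rows, cols = data.shape
    for r in range(pad, rows - pad):
        for c in range(pad, cols - pad):
            window = data[r - pad:r + pad + 1, c - pad:c + pad + 1]
            v = (kernel * window).sum() / F
            out[r, c] = max(int(v), 0)   # clip V < 0 to 0
    return out

data = np.array([[2, 8, 6, 6, 6],
                 [2, 8, 6, 6, 6],
                 [2, 2, 8, 6, 6],
                 [2, 2, 2, 8, 6],
                 [2, 2, 2, 2, 8]])
kernel = np.array([[-1, -1, -1],
                   [-1, 16, -1],
                   [-1, -1, -1]])
print(convolve(data, kernel))   # the center pixel becomes 11, as computed above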
Zero-Sum Kernels
Zero-sum kernels are kernels in which the sum of all coefficients in the kernel equals zero.
When a zero-sum kernel is used, then the sum of the coefficients is not used in the convolution
equation, as above. In this case, no division is performed (F = 1), since division by zero is not
defined.
This generally causes the output values to be:
• zero in areas where all input values are equal (no edges)
• extreme in areas of high spatial frequency (high values become much higher, low values
become much lower)
Therefore, a zero-sum kernel is an edge detector, which usually smooths out or zeros out areas
of low spatial frequency and creates a sharp contrast where spatial frequency is high, which is
at the edges between homogeneous (homogeneity is low spatial frequency) groups of pixels.
The resulting image often consists of only edges and zeros.
Zero-sum kernels can be biased to detect edges in a particular direction. For example, this
3 × 3 kernel is biased to the south (Jensen, 1996).
-1 -1 -1
1 -2 1
1 1 1
High-Frequency Kernels
A high-frequency kernel, or high-pass kernel, has the effect of increasing spatial frequency.
High-frequency kernels serve as edge enhancers, since they bring out the edges between
homogeneous groups of pixels. Unlike edge detectors (such as zero-sum kernels), they highlight
edges and do not necessarily eliminate other features.
-1 -1 -1
-1 16 -1
-1 -1 -1
When this kernel is used on a set of pixels in which a relatively low value is surrounded by higher values, the low value gets lower.
Inversely, when the kernel is used on a set of pixels in which a relatively high value is surrounded by lower values...
BEFORE           AFTER
64  60  57       64  60  57
61 125  69       61 187  69
58  60  70       58  60  70
...the high value becomes higher. In either case, spatial frequency is increased by this kernel.
Low-Frequency Kernels
Below is an example of a low-frequency kernel, or low-pass kernel, which decreases spatial
frequency.
1 1 1
1 1 1
1 1 1
This kernel simply averages the values of the pixels, causing them to be more homogeneous.
The resulting image looks either more smooth or more blurred.
Crisp The Crisp filter sharpens the overall scene luminance without distorting the interband variance
content of the image. This is a useful enhancement if the image is blurred due to atmospheric
haze, rapid sensor motion, or a broad point spread function of the sensor.
The algorithm used for this function is:
1) Calculate principal components of multiband input image.
2) Convolve PC-1 with summary filter.
3) Retransform to RGB space.
The logic of the algorithm is that the first principal component (PC-1) of an image is assumed
to contain the overall scene luminance. The other PCs represent intra-scene variance. Thus, you
can sharpen only PC-1 and then reverse the principal components calculation to reconstruct the
original image. Luminance is sharpened, but variance is retained.
Resolution Merge The resolution of a specific sensor can refer to radiometric, spatial, spectral, or temporal
resolution.
Landsat TM sensors have seven bands with a spatial resolution of 28.5 m. SPOT panchromatic
has one broad band with very good spatial resolution—10 m. Combining these two images to
yield a seven-band data set with 10 m resolution provides the best characteristics of both
sensors.
A number of models have been suggested to achieve this image merge. Welch and Ehlers
(Welch and Ehlers, 1987) used forward-reverse RGB to IHS transforms, replacing I (from
transformed TM data) with the SPOT panchromatic image. However, this technique is limited
to three bands (R, G, B).
Chavez (Chavez et al, 1991), among others, uses the forward-reverse principal components
transforms with the SPOT image, replacing PC-1.
In the above two techniques, it is assumed that the intensity component (PC-1 or I) is spectrally
equivalent to the SPOT panchromatic image, and that all the spectral information is contained
in the other PCs or in H and S. Since SPOT data do not cover the full spectral range that TM
data do, this assumption does not strictly hold. It is unacceptable to resample the thermal band
(TM6) based on the visible (SPOT panchromatic) image.
Another technique (Schowengerdt, 1980) combines a high frequency image derived from the
high spatial resolution data (i.e., SPOT panchromatic) additively with the high spectral
resolution Landsat TM image.
The Resolution Merge function offers several options for resampling low spatial resolution data to a higher spatial resolution while retaining spectral information: a principal components method, a multiplicative method, and the Brovey Transform. The principal components method assumes that PC-1 contains only overall scene luminance and that all interband variation is contained in the other PCs.
With the above assumptions, the forward transform into PCs is made. PC-1 is removed and its
numerical range (min to max) is determined. The high spatial resolution image is then remapped
so that its histogram shape is kept constant, but it is in the same numerical range as PC-1. It is
then substituted for PC-1 and the reverse transform is applied. This remapping is done so that
the mathematics of the reverse transform do not distort the thematic information (Welch and
Ehlers, 1987).
Multiplicative
The second technique in the Image Interpreter uses a simple multiplicative algorithm:
DNTM1 × DNSPOT = DNnew TM1
The algorithm is derived from the four component technique of Crippen (Crippen, 1989a). In
this paper, it is argued that of the four possible arithmetic methods to incorporate an intensity
image into a chromatic image (addition, subtraction, division, and multiplication), only
multiplication is unlikely to distort the color.
However, in his study Crippen first removed the intensity component via band ratios, spectral
indices, or PC transform. The algorithm shown above operates on the original image. The result
is an increased presence of the intensity component. For many applications, this is desirable.
People involved in urban or suburban studies, city planning, and utilities routing often want
roads and cultural features (which tend toward high reflection) to be pronounced in the image.
Brovey Transform
In the Brovey Transform method, three bands are used according to the following formula:
[DNB1 / (DNB1 + DNB2 + DNB3)] × [DNhigh res. image] = DNB1_new
[DNB2 / (DNB1 + DNB2 + DNB3)] × [DNhigh res. image] = DNB2_new
[DNB3 / (DNB1 + DNB2 + DNB3)] × [DNhigh res. image] = DNB3_new
Where:
B = band
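A minimal sketch of the Brovey formula, assuming the three multispectral bands have already been resampled to the grid of the high-resolution band, is shown below; the small epsilon that avoids division by zero and the function name are assumptions.

import numpy as np

def brovey_merge(b1, b2, b3, high_res):
    """Merge three multispectral bands with a co-registered high-resolution band."""
    total = b1.astype(float) + b2 + b3 + 1e-6   # epsilon avoids division by zero
    return (b1 / total * high_res,
            b2 / total * high_res,
            b3 / total * high_res)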
The Brovey Transform was developed to visually increase contrast in the low and high ends of
an image’s histogram (i.e., to provide contrast in shadows, water and high reflectance areas such
as urban features). Consequently, the Brovey Transform should not be used if preserving the
original scene radiometry is important. However, it is good for producing RGB images with a
higher degree of contrast in the low and high ends of the image histogram and for producing
visually appealing images.
Since the Brovey Transform is intended to produce RGB images, only three bands at a time
should be merged from the input multispectral scene, such as bands 3, 2, 1 from a SPOT or
Landsat TM image or 4, 3, 2 from a Landsat TM image. The resulting merged image should
then be displayed with bands 1, 2, 3 to RGB.
Adaptive Filter Contrast enhancement (image stretching) is a widely applicable standard image processing
technique. However, even adjustable stretches like the piecewise linear stretch act on the scene
globally. There are many circumstances where this is not the optimum approach. For example,
coastal studies where much of the water detail is spread through a very low DN range and the
land detail is spread through a much higher DN range would be such a circumstance. In these
cases, a filter that adapts the stretch to the region of interest (the area within the moving window)
would produce a better enhancement. Adaptive filters attempt to achieve this (Fahnestock and
Schowengerdt, 1983; Peli and Lim, 1982; Schwartz and Soha, 1977).
ERDAS IMAGINE supplies two adaptive filters with user-adjustable parameters. The
Adaptive Filter function in Image Interpreter can be applied to undegraded images, such
as SPOT, Landsat, and digitized photographs. The Image Enhancement function in
IMAGINE Radar Interpreter is better for degraded or difficult images.
Scenes to be adaptively filtered can be divided into three broad and overlapping categories:
• Undegraded—these scenes have good and uniform illumination overall. Given a choice,
these are the scenes one would prefer to obtain from imagery sources such as Space
Imaging or SPOT.
• Low luminance—these scenes have an overall or regional less than optimum intensity. An
underexposed photograph (scanned) or shadowed areas would be in this category. These
scenes need an increase in both contrast and overall scene luminance.
No single filter with fixed parameters can address this wide variety of conditions. In addition,
multiband images may require different parameters for each band. Without the use of adaptive
filters, the different bands would have to be separated into one-band files, enhanced, and then
recombined.
For this function, the image is separated into high and low frequency component images. The
low frequency image is considered to be overall scene luminance. These two component parts
are then recombined in various relative amounts using multipliers derived from LUTs. These
LUTs are driven by the overall scene luminance:
DNout = K(DNHi) + DNLL
Where:
K = user-selected contrast multiplier
Hi = high luminance (derives from the LUT)
LL = local luminance (derives from the LUT)
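A rough sketch of the split-and-recombine idea is shown below. It stands in a simple moving-average low-pass filter for the low frequency image and a fixed intercept for the local-luminance LUT, so it is only an illustration of DNout = K(DNHi) + DNLL, not the IMAGINE adaptive filter; the function name, window size, and use of SciPy are assumptions.

import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_enhance(band, window=15, K=1.5, intercept=20):
    """Split the band into low and high frequency parts and recombine them.

    DNout = K * DN_high + DN_local_luminance, where the local luminance here
    is simply the low-pass image shifted by an intercept (a stand-in for the LUT).
    """
    low = uniform_filter(band.astype(float), size=window)   # overall scene luminance
    high = band - low                                       # local detail
    out = K * high + (low + intercept)
    return np.clip(out, 0, 255).astype(np.uint8)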
[Figure 6-14: the local luminance intercept (I) on the output luminance axis (0 to 255)]
Figure 6-14 shows the local luminance intercept, which is the output luminance value that an
input luminance value of 0 would be assigned.
Wavelet Resolution Merge
The ERDAS IMAGINE Wavelet Resolution Merge allows multispectral images of relatively low spatial resolution to be sharpened using a co-registered panchromatic image of relatively
higher resolution. A primary intended target dataset is Landsat 7 ETM+. Increasing the spatial
resolution of multispectral imagery in this fashion is, in fact, the rationale behind the Landsat 7
sensor design.
The ERDAS IMAGINE algorithm is a modification of the work of King and Wang (King et al,
2001) with extensive input from Lemeshewsky (Lemeshewsky, 1999, Lemeshewsky, 2002a,
Lemeshewsky, 2002b). Aside from traditional Pan-Multispectral image sharpening, this
algorithm can be used to merge any two images, for example, radar with SPOT Pan.
Fusing information from several sensors into one composite image can take place on four levels;
signal, pixel, feature, and symbolic. This algorithm works at the pixel level. The results of pixel-
level fusion are primarily for presentation to a human observer/analyst (Rockinger and Fechner,
1998). However, in the case of pan/multispectral image sharpening, it must be considered that
computer-based analysis (e.g., supervised classification) could be a logical follow-on. Thus, it
is vital that the algorithm preserve the spectral fidelity of the input dataset.
Wavelet Theory
Wavelet-based image reduction is similar to Fourier transform analysis. In the Fourier
transform, long continuous (sine and cosine) waves are used as the basis. The wavelet transform
uses short, discrete “wavelets” instead of a long wave. Thus the new transform is much more
local (Strang et al, 1997). In image processing terms, the wavelet can be parameterized as a
finite size moving window.
A key element of using wavelets is selection of the base waveform to be used; the “mother
wavelet” or “basis”. The “basis” is the basic waveform to be used to represent the image. The
input signal (image) is broken down into successively smaller multiples of this basis.
Wavelets are derived waveforms that have a lot of mathematically useful characteristics that
make them preferable to simple sine or cosine functions. For example, wavelets are discrete;
that is, they have a finite length as opposed to sine waves which are continuous and infinite in
length. Once the basis waveform is mathematically defined, a family of multiples can be created
with incrementally increasing frequency. For example, related wavelets of twice the frequency,
three times the frequency, four times the frequency, etc. can be created.
Once the waveform family is defined, the image can be decomposed by applying coefficients
to each of the waveforms. Given a sufficient number of waveforms in the family, all the detail
in the image can be defined by coefficient multiples of the ever-finer waveforms.
In practice, the coefficients of the discrete high-pass filter are of more interest than the wavelets
themselves. The wavelets are rarely even calculated (Shensa, 1992). In image processing, we
do not want to get deeply involved in mathematical waveform decomposition; we want
relatively rapid processing kernels (moving windows). Thus, we use the above theory to derive
moving window, high-pass kernels which approximate the waveform decomposition.
For image processing, orthogonal and biorthogonal transforms are of interest. With orthogonal
transforms, the new axes are mutually perpendicular and the output signal has the same length
as the input signal. The matrices are unitary and the transform is lossless. The same filters are
used for analysis and reconstruction.
In general, biorthogonal (and symmetrical) wavelets are more appropriate than orthogonal
wavelets for image processing applications (Strang et al, 1997, p. 362-363). Biorthogonal
wavelets are ideal for image processing applications because of their symmetry and perfect
reconstruction properties. Each biorthogonal wavelet has a reconstruction order and a
decomposition order associated with it. For example, biorthogonal 3.3 denotes a biorthogonal
wavelet with reconstruction order 3 and decomposition order 3. For biorthogonal transforms,
the lengths of and angles between the new axes may change. The new axes are not necessarily
perpendicular. The analysis and reconstruction filters are not required to be the same. They are,
however, mathematically constrained so that no information is lost, perfect reconstruction is
possible and the matrices are invertible.
The signal processing properties of the Discrete Wavelet Transform (DWT) are strongly
determined by the choice of high-pass (bandpass) filter (Shensa, 1992). Although biorthogonal
wavelets are phase linear, they are shift variant due to the decimation process, which saves only
even-numbered averages and differences. This means that the resultant subimage changes if the
starting point is shifted (translated) by one pixel. For the commonly used, fast (Mallat, 1989)
discrete wavelet decomposition algorithm, a shift of the input image can produce large changes
in the values of the wavelet decomposition coefficients. One way to overcome this is to use an
average of each average and difference pair.
Once selected, the wavelets are applied to the input image recursively via a pyramid algorithm
or filter bank. This is commonly implemented as a cascading series of highpass and lowpass
filters, based on the mother wavelet, applied sequentially to the low-pass image of the previous
recursion. After filtering at any level, the low-pass image (commonly termed the
“approximation” image) is passed to the next finer filtering in the filter bank. The high-pass
images (termed “horizontal”, “vertical”, and “diagonal”) are retained for later image
reconstruction. In practice, three or four recursions are sufficient.
Each level of the decomposition yields four coefficient subimages:
• approximation coefficients, Wϕ
• horizontal coefficients, WψH
• vertical coefficients, WψV
• diagonal coefficients, WψD – variations along the diagonals (Gonzalez and Woods, 2001)
[Figure: one level of discrete wavelet decomposition. The input image is passed through the low-pass (hϕ) and high-pass (hψ) filters along the rows with column decimation, then along the columns with row decimation, yielding the subimages Wϕ, WψH, WψV, and WψD.]
Symbols hϕ and hψ are, respectively, the low-pass and high-pass wavelet filters used for
decomposition. The rows of the image are convolved with the low-pass and high-pass filters and
the result is downsampled along the columns. This yields two subimages whose horizontal
resolutions are reduced by a factor of 2. The high-pass or detailed coefficients characterize the
image’s high frequency information with vertical orientation while the low-pass component
contains its low frequency, vertical information. Both subimages are again filtered columnwise
with the same low-pass and high-pass filters and downsampled along rows.
Thus, for each input image, we have four subimages, each reduced by a factor of 4 compared to
the original image: Wϕ, WψH, WψV, and WψD.
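As an illustration of the decomposition just described, the following sketch uses the open-source PyWavelets (pywt) package, which is an assumption for the example and not part of ERDAS IMAGINE; the random array stands in for a single image band, and the biorthogonal 3.3 wavelet is chosen only because that wavelet is named above.

import numpy as np
import pywt

image = np.random.rand(512, 512)        # stand-in for one image band

# One level of 2D discrete wavelet decomposition: dwt2 returns the
# approximation subimage and the horizontal, vertical, and diagonal
# detail subimages, each roughly half the size of the input.
approx, (horiz, vert, diag) = pywt.dwt2(image, 'bior3.3')

# Perfect reconstruction: the inverse transform recovers the original band.
reconstructed = pywt.idwt2((approx, (horiz, vert, diag)), 'bior3.3')
print(np.allclose(image, reconstructed[:image.shape[0], :image.shape[1]]))  # True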
[Figure: wavelet reconstruction filter bank. The subimages Wϕ, WψH, WψV, and WψD are upsampled (padded) along rows, filtered columnwise with the reconstruction filters h̃ϕ and h̃ψ, combined, upsampled along columns, filtered rowwise, and combined to yield the output image.]
The sequence of steps is the opposite of that in the DWT: the subimages are upsampled along
rows (since the last step in the DWT was downsampling along rows) and convolved with the
low-pass and high-pass filters columnwise (in the DWT we filtered along the columns last).
These intermediate outputs are combined, upsampled along columns, filtered rowwise, and
finally combined to yield the original image.
Algorithm Theory
The basic theory of the decomposition is that an image can be separated into high-frequency and
low-frequency components. For example, a low-pass filter can be used to create a low-
frequency image. Subtracting this low-frequency image from the original image would create
the corresponding high-frequency image. These two images contain all of the information in the
original image. If they were added together the result would be the original image.
The same could be done by high-pass filtering an image and then deriving the corresponding low-
frequency image. Again, adding the two together would yield the original
image. Any image can be broken into various high- and low-frequency components using
various high- and low-pass filters. The wavelet family can be thought of as a high-pass filter.
Thus wavelet-based high- and low-frequency images can be created from any input image. By
definition, the low-frequency image is of lower resolution and the high-frequency image
contains the detail of the image.
This process can be repeated recursively. The created low-frequency image could be again
processed with the kernels to create new images with even lower resolution. Thus, starting with
a 5-meter image, a 10-meter low-pass image and the corresponding high-pass image could be
created. A second iteration would create 20-meter low- and high-pass images, a third recursion
40-meter low- and high-frequency images, and so on.
Consider two images taken on the same day of the same area: one a 5-meter panchromatic, the
other 40-meter multispectral. The 5-meter has better spatial resolution, but the 40-meter has
better spectral resolution. It would be desirable to take the high-pass information from the 5-
meter image and combine it with the 40-meter multispectral image yielding a 5-meter
multispectral image.
Using wavelets, one can decompose the 5-meter image through several iterations until a 40-
meter low-pass image is generated plus all the corresponding high-pass images derived during
the recursive decomposition. This 40-meter low-pass image, derived from the original 5-meter
pan image, can be replaced with the 40-meter multispectral image and the whole wavelet
decomposition process reversed, using the high-pass images derived during the decomposition,
to reconstruct a 5-meter resolution multispectral image. The approximation component of the
high spectral resolution image and the horizontal, vertical, and diagonal components of the high
spatial resolution image are fused into a new output image.
If all of the above calculations are done in a mathematically rigorous way (histogram matching
and resampling before substitution, etc.), one can derive a multispectral image that has the
high-pass (high-frequency) details from the 5-meter image.
In the above scenario, it should be noted that the high-resolution image (panchromatic, perhaps)
is a single band and so the substitution image, from the multispectral image, must also be a
single band. There are tools available to compress the multispectral image into a single band for
substitution using the IHS transform or PC transform. Alternately, single bands can be
processed sequentially.
[Figure: wavelet resolution merge flow. The high spatial resolution image is decomposed (DWT) into approximation (a) and horizontal, vertical, and diagonal (h, v, d) subimages; the high spectral resolution image is resampled and histogram matched, substituted for the approximation, and the inverse DWT yields the fused image.]
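To make the substitution step concrete, the following is a minimal sketch built on the PyWavelets package (an assumption for the example; the ERDAS IMAGINE implementation is not reproduced here). It merges a single multispectral band and assumes that band has already been co-registered, resampled to the size of the pan approximation at the chosen level, and histogram matched; the function and parameter names are illustrative.

import pywt

def wavelet_merge_band(pan, ms_band, wavelet='bior3.3', levels=3):
    """Sketch of a wavelet resolution merge for one multispectral band.

    pan     -- high spatial resolution band (2D array)
    ms_band -- low spatial resolution band, already resampled to the shape of
               the level-`levels` approximation of `pan` and histogram matched
    """
    # Decompose the pan image: [approximation, (h, v, d) per level, finest last].
    coeffs = pywt.wavedec2(pan, wavelet, level=levels)

    # Substitute the multispectral band for the pan approximation, keeping the
    # pan-derived high-pass (detail) subimages.
    coeffs[0] = ms_band

    # Reverse the decomposition to obtain a sharpened multispectral band.
    return pywt.waverec2(coeffs, wavelet)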
Prerequisites and Limitations
Precise Coregistration
A first prerequisite is that the two images be precisely co-registered. For some sensors (e.g.,
Landsat 7 ETM+) this co-registration is inherent in the dataset. If this is not the case, a greatly
over-defined 2nd order polynomial transform should be used to coregister one image to the
other. By over-defining the transform (that is, by having far more than the minimum number of
tie points), it is possible to reduce the random RMS error to the subpixel level. This is easily
accomplished by using the Point Prediction option in the GCP Tool. In practice, well-distributed
tie points are collected until the predicted point consistently falls exactly where it should. At that
time, the transform must be correct. This may require 30-60 tie points for a typical Landsat
TM—SPOT Pan co-registration.
When doing the coregistration, it is generally preferable to register the lower resolution image
to the higher resolution image, i.e., the high resolution image is used as the Reference Image.
This will allow the greatest accuracy of registration. However, if the lowest resolution image
has georeferencing that is to be retained, it may be desirable to use it as the Reference Image. A
larger number of tie points and more attention to precise work would then be required to attain
the same registration accuracy. Evaluation of the X- and Y-Residual and the RMS Error
columns in the ERDAS IMAGINE GCP Tool will indicate the accuracy of registration.
It is preferable to store the high and low resolution images as separate image files rather than
Layerstacking them into a single image file. In ERDAS IMAGINE, stacked image layers are
resampled to a common pixel size. Since the Wavelet Resolution Merge algorithm does the
pixel resampling at an optimal stage in the calculation, this avoids multiple resamplings.
After creating the coregistered images, they should be codisplayed in an ERDAS IMAGINE
Viewer. Then the Fade, Flicker, and Swipe Tools can be used to visually evaluate the precision
of the coregistration.
Temporal Considerations
A trivial corollary is that the two images must have no temporally-induced differences. If a crop
has been harvested, trees have dropped their foliage, lakes have grown or shrunk, etc., then
merging of the two images in that area is inappropriate. If the areas of change are small, the
merge can proceed and those areas removed from evaluation. If, however, the areas of change
are large, the histogram matching step may introduce data distortions.
Theoretical Limitations
As described in the discussion of the discrete wavelet transform, the algorithm downsamples the
high spatial resolution input image by a factor of two with each iteration. This produces
approximation (a) images whose pixel size doubles (and whose dimensions halve) with each
iteration. The low (spatial) resolution image will substitute exactly for the “a” image only if the
input images have relative pixel sizes differing by a power of 2. Any other pixel size ratio will require
resampling of the low (spatial) resolution image prior to substitution. Certain ratios can result
in a degradation of the substitution image that may not be fully overcome by the subsequent
wavelet sharpening. This will result in a less than optimal enhancement. For the most common
scenarios, Landsat ETM+, IKONOS and QuickBird, this is not a problem.
Although the mathematics of the algorithm are precise for any pixel size ratio, a resolution
increase of greater than two or three becomes theoretically questionable. For example, all
images are degraded due to atmospheric refraction and scattering of the returning signal. This
is termed “point spread”. Thus, both images in a resolution merge operation have, to some
(unknown) extent, been “smeared”. Thus, both images in a resolution merge operation have, to
an unknown extent, already been degraded. It is not reasonable to assume that each
multispectral pixel can be precisely devolved into nine or more subpixels.
Spectral Transform
Three merge scenarios are possible. The simplest is when the input low (spatial) resolution
image is only one band; a single band of a multispectral image, for example. In this case, the
only option is to select which band to use. If the low resolution image to be processed is a
multispectral image, two methods are offered for creating the grayscale representation of the
multispectral image intensity: IHS and PC.
The IHS method accepts only 3 input bands. It has been suggested that this technique produces
an output image that is the best for visual interpretation. Thus, this technique would be
appropriate when producing a final output product for map production. Since a visual product
is likely to be only an R, G, B image, the 3-band limitation on this method is not a distinct
limitation. Clearly, if one wished to sharpen more data layers, the bands could be done as
separate groups of 3 and then the whole dataset layerstacked back together.
Lemeshewsky (Lemeshewsky, 2002b) discusses some theoretical limitations on IHS
sharpening that suggest that sharpening of the bands individually (as discussed above) may be
preferable. Yocky (Yocky, 1995) demonstrates that the IHS transform can distort colors,
particularly red, and discusses theoretical explanations.
The PC Method will accept any number of input data layers. It has been suggested
(Lemeshewsky, 2002a) that this technique produces an output image that better preserves the
spectral integrity of the input dataset. Thus, this method would be most appropriate if further
processing of the data is intended; for example, if the next step was a classification operation.
Note, however, that Zhang (Zhang, 1999) has found equivocal results with the PC versus IHS
approaches.
The wavelet, IHS, and PC calculations produce single precision floating point output.
Consequently, the resultant image must undergo a data compression to get it back to 8 bit
format.
Spectral Enhancement
The enhancement techniques that follow require more than one band of data. They can be used to:
• extract new bands of data that are more interpretable to the eye
• display a wider variety of information in the three available color guns (R, G, B)
Some of these enhancements can be used to prepare data for classification. However, this
is a risky practice unless you are very familiar with your data and the changes that you
are making to it. Anytime you alter values, you risk losing some information.
Principal Components Analysis
Principal components analysis (PCA) is often used as a method of data compression. It allows
redundant data to be compacted into fewer bands—that is, the dimensionality of the data is
reduced. The bands of PCA data are noncorrelated and independent, and are often more
interpretable than the source data (Jensen, 1996; Faust, 1989).
The process is easily explained graphically with an example of data in two bands. Below is an
example of a two-band scatterplot, which shows the relationships of data file values in two
bands. The values of one band are plotted against those of the other. If both bands have normal
distributions, an ellipse shape results.
[Figure: two-band scatterplot forming an ellipse. Band A data file values are plotted against Band B data file values, with the histogram of each band shown along its axis.]
In an n-dimensional histogram, an ellipse (2 dimensions), ellipsoid (3 dimensions), or
hyperellipsoid (more than 3 dimensions) is formed if the distributions of each input band are
normal or near normal. (The term ellipse is used for general purposes here.)
To perform PCA, the axes of the spectral space are rotated, changing the coordinates of each
pixel in spectral space, as well as the data file values. The new axes are parallel to the axes of
the ellipse.
[Figure: the first principal component defines a new axis through the long dimension of the ellipse]
The first principal component shows the direction and length of the widest transect of the
ellipse. Therefore, as an axis in spectral space, it measures the highest variation within the data.
In Figure 6-20 it is easy to see that the first eigenvalue is always greater than the ranges of the
input bands, just as the hypotenuse of a right triangle must always be longer than the legs.
[Figure 6-20: the range of PC 1 compared with the ranges of Band A and Band B in data file values]

[Figure: PC 2, orthogonal (at a 90° angle) to PC 1]

Each successive principal component:
• is the widest transect of the ellipse that is orthogonal to the previous components in the n-
dimensional space of the scatterplot (Faust, 1989), and
• accounts for a decreasing amount of the variation in the data which is not already accounted
for by previous principal components (Taylor, 1977).
Although there are n output bands in a PCA, the first few bands account for a high proportion
of the variance in the data—in some cases, almost 100%. Therefore, PCA is useful for
compressing data into fewer bands.
In other applications, useful information can be gathered from the principal component bands
with the least variance. These bands can show subtle details in the image that were obscured by
higher contrast in the original image. These bands may also show regular noise in the data (for
example, the striping in old MSS data) (Faust, 1989).
The eigenvectors and eigenvalues are computed from the covariance matrix of the input bands:

E Cov Eᵀ = V,  with  V = diag(v1, v2, ..., vn)
Where:
Cov = the covariance matrix
E = the matrix of eigenvectors
T = the transposition function
V = a diagonal matrix of eigenvalues, in which all nondiagonal elements are zeros
V is computed so that its nonzero elements are ordered from greatest to least, so that
v1 > v2 > v3... > vn
Source: Faust, 1989
A full explanation of this computation can be found in Gonzalez and Wintz, 1977.
The matrix V is the covariance matrix of the output principal component file. The zeros
represent the covariance between bands (there is none), and the eigenvalues are the variance
values for each band. Because the eigenvalues are ordered from v1 to vn, the first eigenvalue is
the largest and represents the most variance in the data.
Each column of the resulting eigenvector matrix, E, describes a unit-length vector in spectral
space, which shows the direction of the principal component (the ellipse axis). The numbers are
used as coefficients in the following equation, to transform the original data file values into the
principal component values.
Pe = Σ dk Eke    (summed over k = 1 to n)
Where:
e = the number of the principal component (first, second)
Pe = the output principal component value for principal component band e
k = a particular input band
n = the total number of bands
dk = an input data file value in band k
E = the eigenvector matrix, such that Eke = the element of that matrix at row k,
column e
Source: Modified from Gonzalez and Wintz, 1977
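As a worked illustration of the eigenvector computation and the summation for Pe above, the numpy sketch below derives the principal components of a multiband image; the array layout and function name are assumptions for the example, not the ERDAS IMAGINE implementation.

import numpy as np

def principal_components(image):
    """Compute the PCA transform described above.

    image -- array of shape (rows, cols, n_bands) of data file values
    """
    rows, cols, n_bands = image.shape
    pixels = image.reshape(-1, n_bands).astype(float)   # d_k values, one row per pixel

    # Covariance matrix and its eigen-decomposition: E Cov E^T = V.
    cov = np.cov(pixels, rowvar=False)
    eigenvalues, eigenvectors = np.linalg.eigh(cov)

    # Order eigenvalues (and matching eigenvectors) from greatest to least.
    order = np.argsort(eigenvalues)[::-1]
    eigenvalues = eigenvalues[order]
    E = eigenvectors[:, order]                           # column e = e-th component

    # Pe = sum over k of d_k * E_ke, i.e. a matrix product of the pixels with E.
    pc = pixels @ E
    return pc.reshape(rows, cols, n_bands), eigenvalues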
Decorrelation Stretch
A contrast stretch alters the distribution of the image DN values within the 0 - 255 range of the
display device. The decorrelation stretch applies such a stretch to the principal components of
an image, not to the original image.
A principal components transform converts a multiband image into a set of mutually orthogonal
images portraying inter-band variance. Depending on the DN ranges and the variance of the
individual input bands, these new images (PCs) occupy only a portion of the possible 0 - 255
data range.
Each PC is separately stretched to fully utilize the data range. The new stretched PC composite
image is then retransformed to the original data areas.
Either the original PCs or the stretched PCs may be saved as a permanent image file for viewing
after the stretch.
NOTE: Storage of PCs as floating point, single precision is probably appropriate in this case.
Tasseled Cap
The different bands in a multispectral image can be visualized as defining an N-dimensional
space where N is the number of bands. Each pixel, positioned according to its DN value in each
band, lies within the N-dimensional space. This pixel distribution is determined by the
absorption/reflection spectra of the imaged material. This clustering of the pixels is termed the
data structure (Crist and Kauth, 1986).
See Chapter 1 “Raster Data” for more information on absorption/reflection spectra. See
the discussion on “Principal Components Analysis”.
The data structure can be considered a multidimensional hyperellipsoid. The principal axes of
this data structure are not necessarily aligned with the axes of the data space (defined as the
bands of the input image). They are more directly related to the absorption spectra. For viewing
purposes, it is advantageous to rotate the N-dimensional space such that one or two of the data
structure axes are aligned with the Viewer X and Y axes. In particular, you could view the axes
that are largest for the data structure produced by the absorption peaks of special interest for the
application.
For example, a geologist and a botanist are interested in different absorption features. They
would want to view different data structures and therefore, different data structure axes. Both
would benefit from viewing the data in a way that would maximize visibility of the data
structure of interest.
The Tasseled Cap transformation offers a way to optimize data viewing for vegetation studies.
Research has produced three data structure axes that define the vegetation information content
(Crist et al, 1986, Crist and Kauth, 1986):
• Brightness—a weighted sum of all bands, defined in the direction of the principal variation
in soil reflectance.
• Greenness—orthogonal to brightness, a contrast between the near-infrared and the visible
bands, strongly related to the amount of green vegetation in the scene.
• Wetness—related to canopy and soil moisture.
A simple calculation (linear combination) then rotates the data space to present any of these axes
to you.
These rotations are sensor-dependent, but once defined for a particular sensor (say Landsat 4
TM), the same rotation works for any scene taken by that sensor. The increased dimensionality
(number of bands) of TM vs. MSS allowed Crist et al (Crist et al, 1986) to define three
additional axes, termed Haze, Fifth, and Sixth. Lavreau (Lavreau, 1991) has used this haze
parameter to devise an algorithm to dehaze Landsat imagery.
The Tasseled Cap algorithm implemented in the Image Interpreter provides the correct
coefficients for MSS, TM4, and TM5 imagery.
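Because the published coefficient tables are not reproduced here, the sketch below only illustrates the mechanics of the rotation: a per-pixel linear combination (matrix multiply) of the input bands. The coefficient values are placeholders, not the Crist et al (1986) values, and the six-band layout is an assumption.

import numpy as np

# Placeholder coefficient matrix: rows = output axes (Brightness, Greenness,
# Wetness), columns = input bands. These are NOT the published sensor values.
COEFFS = np.array([
    [ 0.3,  0.3,  0.3,  0.3,  0.3,  0.3],   # Brightness: weighted sum of all bands
    [-0.2, -0.2, -0.4,  0.8,  0.1, -0.1],   # Greenness (illustrative only)
    [ 0.1,  0.2,  0.3,  0.3, -0.6, -0.5],   # Wetness (illustrative only)
])

def tasseled_cap(image):
    """Rotate a (rows, cols, 6) image into the Tasseled Cap axes."""
    return image @ COEFFS.T                  # result has shape (rows, cols, 3)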
RGB to IHS
The color monitors used for image display on image processing systems have three color guns.
These correspond to red, green, and blue (R,G,B), the additive primary colors. When displaying
three bands of a multiband data set, the viewed image is said to be in R,G,B space.
However, it is possible to define an alternate color space that uses intensity (I), hue (H), and
saturation (S) as the three parameters (in lieu of R, G, and B). This system is
advantageous in that it presents colors more nearly as perceived by the human eye.
• Intensity is the overall brightness of the scene (like PC-1) and varies from 0 (black) to 1
(white).
• Saturation represents the purity of color and also varies linearly from 0 to 1.
• Hue is representative of the color or dominant wavelength of the pixel. It varies from 0 at
the red midpoint through green and blue back to the red midpoint at 360. It is a circular
dimension (see Figure 6-22). In Figure 6-22, 0 to 255 is the selected range; it could be
defined as any data range. However, hue must vary from 0 to 360 to define the entire sphere
(Buchanan, 1979).
[Figure 6-22: the Intensity, Hue, Saturation color coordinate system. Hue is a circular dimension (red, green, blue around the circumference), saturation varies from 0 to 255 outward from the axis, and intensity varies along the vertical axis.]
To use the RGB to IHS transform, use the RGB to IHS function from Image Interpreter.
The algorithm used in the Image Interpreter RGB to IHS transform is (Conrac Corporation,
1980):
R = (M – r) / (M – m)

G = (M – g) / (M – m)

B = (M – b) / (M – m)
Where:
R,G,B are each in the range of 0 to 1.0.
r, g, b are each in the range of 0 to 1.0.
M = largest value, r, g, or b
m = least value, r, g, or b
NOTE: At least one of the R, G, or B values is 0, corresponding to the color with the largest
value, and at least one of the R, G, or B values is 1, corresponding to the color with the least
value.
I = (M + m) / 2

If I ≤ 0.5, S = (M – m) / (M + m)

If I > 0.5, S = (M – m) / (2 – M – m)
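The intensity and saturation equations translate directly into code. The sketch below (plain numpy, illustrative function name) computes I and S only, since the hue equations are not reproduced in this section; the small guards against division by zero are additions.

import numpy as np

def rgb_to_is(r, g, b):
    """Intensity and saturation per the equations above (hue omitted).

    r, g, b -- arrays of values scaled to the range 0 to 1.
    """
    M = np.maximum.reduce([r, g, b])          # largest of r, g, b
    m = np.minimum.reduce([r, g, b])          # least of r, g, b

    intensity = (M + m) / 2.0
    saturation = np.where(
        intensity <= 0.5,
        (M - m) / np.maximum(M + m, 1e-12),
        (M - m) / np.maximum(2.0 - M - m, 1e-12),
    )
    return intensity, saturation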
IHS to RGB
The family of IHS to RGB transforms is intended as a complement to the standard RGB to IHS
transform. The values for hue (H), a circular dimension, range from 0 to 360; intensity (I) and
saturation (S) range from 0 to 1. However, depending on the dynamic range of the DN values of
the input image, it is possible that I or S or both occupy only a part of the 0 to 1 range. In this
model, a min-max stretch is applied to either I, S, or both, so that they more fully utilize the 0 to
1 value range. After stretching, the full IHS image is retransformed back to the original RGB
space. Since the parameter hue, which largely defines what we perceive as color, is not modified,
the resultant image looks very much like the input image.
It is not essential that the input parameters (IHS) to this transform be derived from an RGB to
IHS transform. You could define I and/or S as other parameters, set Hue at 0 to 360, and then
transform to RGB space. This is a method of color coding other data sets.
In another approach (Daily, 1983), H and I are replaced by low- and high-frequency radar
imagery. You can also replace I with radar intensity before the IHS to RGB transform (Croft
(Holcomb), 1993). Chavez evaluates the use of the IHS to RGB transform to resolution merge
Landsat TM with SPOT panchromatic imagery (Chavez et al, 1991).
See the previous section on RGB to IHS transform for more information.
The algorithm used by ERDAS IMAGINE for the IHS to RGB function is (Conrac Corporation,
1980):
Given: H in the range of 0 to 360; I and S in the range of 0 to 1.0
If I ≤ 0.5, M = I (1 + S)

m = 2I – M

Equations for calculating G in the range of 0 to 1.0:

If H < 120, G = m
If 120 ≤ H < 180, G = m + (M – m) × (H – 120) / 60
If 180 ≤ H < 300, G = M
If 300 ≤ H ≤ 360, G = m + (M – m) × (360 – H) / 60
Equations for calculating B in the range of 0 to 1.0:
If H < 60, B = M
Indices
Indices are used to create output images by mathematically combining the DN values of
different bands. These may be simplistic:

(Band X – Band Y)

or more complex:

(Band X – Band Y) / (Band X + Band Y)
Indices may also be simple band ratios:

Band X / Band Y
These ratio images are derived from the absorption/reflection spectra of the material of interest.
The absorption is based on the molecular bonds in the (surface) material. Thus, the ratio often
gives information on the chemical composition of the target.
See Chapter 1 “Raster Data” for more information on the absorption/reflection spectra.
Applications
• Indices are used extensively in mineral exploration and vegetation analysis to bring out
small differences between various rock types and vegetation classes. In many cases,
judiciously chosen indices can highlight and enhance differences that cannot be observed
in the display of the original color bands.
• Indices can also be used to minimize shadow effects in satellite and aircraft multispectral
images. Black and white images of individual indices or a color combination of three ratios
may be generated.
Index Examples
The following are examples of indices that have been preprogrammed in the Image Interpreter
in ERDAS IMAGINE:
• IR/R (infrared/red)
• SQRT (IR/R)
• (IR – R) / (IR + R)
• (IR – R) / (IR + R) + 0.5
The following table shows the infrared (IR) and red (R) band for some common sensors
(Tucker, 1979; Jensen, 1996):

Sensor         IR band   R band
Landsat MSS       7         5
SPOT XS           3         2
Landsat TM        4         3
NOAA AVHRR        2         1
Image Algebra
Image algebra is a general term used to describe operations that combine the pixels of two or
more raster layers in mathematical combinations. For example, the calculation:
(infrared band) - (red band)
DNir - DNred
yields a simple, yet very useful, measure of the presence of vegetation. At the other extreme is
the Tasseled Cap calculation (described in the following pages), which uses a more complicated
mathematical combination of as many as six bands to define vegetation.
Band ratios, such as:

TM5 / TM7 = clay minerals
are also commonly used. These are derived from the absorption spectra of the material of
interest. The numerator is a baseline of background absorption and the denominator is an
absorption peak.
NDVI = (IR – R) / (IR + R)
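As a minimal illustration, NDVI can be computed per pixel from the IR and red bands listed in the table above; the numpy function below is a sketch, and the guard against division by zero is an addition not discussed in the text.

import numpy as np

def ndvi(ir, red):
    """NDVI = (IR - R) / (IR + R), computed per pixel."""
    ir = ir.astype(float)
    red = red.astype(float)
    return (ir - red) / np.maximum(ir + red, 1e-12)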
Hyperspectral Image Processing
Hyperspectral image processing is, in many respects, simply an extension of the techniques used
for multispectral data sets; indeed, there is no set number of bands beyond which a data set is
hyperspectral. Thus, many of the techniques or algorithms currently used for multispectral data
sets are logically applicable, regardless of the number of bands in the data set (see the discussion
of Figure 1-7 of this manual). What is of relevance in evaluating these data sets is not the
number of bands per se, but the spectral bandwidth of the bands (channels). As the bandwidths
get smaller, it becomes possible to view the data set as an absorption spectrum rather than a
collection of discontinuous bands. Analysis of the data in this fashion is termed imaging
spectrometry.
A hyperspectral image data set is recognized as a three-dimensional pixel array. As in a
traditional raster image, the x-axis is the column indicator and the y-axis is the row indicator.
The z-axis is the band number or, more correctly, the wavelength of that band (channel). A
hyperspectral image can be visualized as shown in Figure 6-23.
[Figure 6-23: a hyperspectral image visualized as a three-dimensional data cube (x = columns, y = rows, z = bands/wavelengths)]
A data set with narrow contiguous bands can be plotted as a continuous spectrum and compared
to a library of known spectra using full profile spectral pattern fitting algorithms. A serious
complication in using this approach is assuring that all spectra are corrected to the same
background.
At present, it is possible to obtain spectral libraries of common materials. The JPL and USGS
mineral spectra libraries are included in ERDAS IMAGINE. These are laboratory-measured
reflectance spectra of reference minerals, often of high purity and defined particle size. The
spectrometer is commonly purged with pure nitrogen to avoid absorbance by atmospheric
gases. Conversely, the remote sensor records an image after the sunlight has passed through the
atmosphere (twice) with variable and unknown amounts of water vapor, CO2, and other gases.
(This atmospheric absorbance curve is shown in Figure 1-4.) The unknown atmospheric
absorbances superimposed upon the Earth’s surface reflectances make comparison to laboratory
spectra or spectra taken with a different atmosphere inexact. Indeed, it has been shown that atmospheric
composition can vary within a single scene. This complicates the use of spectral signatures even
within one scene. Atmospheric absorption and scattering is discussed in “Atmospheric
Absorption”.
A number of approaches have been advanced to help compensate for this atmospheric
contamination of the spectra. These are introduced briefly in “Atmospheric Effects” for the
general case. Two specific techniques, Internal Average Relative Reflectance (IARR) and Log
Residuals, are implemented in ERDAS IMAGINE. These have the advantage of not requiring
auxiliary input information; the correction parameters are scene-derived. The disadvantage is
that they produce relative reflectances (i.e., they can be compared to reference spectra in a semi-
quantitative manner only).
Normalize
Pixel albedo is affected by sensor look angle and local topographic effects. For airborne sensors,
this look angle effect can be large across a scene. It is less pronounced for satellite sensors.
Some scanners look to both sides of the aircraft. For these data sets, the difference in average
scene luminance between the two half-scenes can be large. To help minimize these effects, an equal
area normalization algorithm can be applied (Zamudio and Atkinson, 1990). This calculation
shifts each (pixel) spectrum to the same overall average brightness. This enhancement must be
used with a consideration of whether this assumption is valid for the scene. For an image that
contains two (or more) distinctly different regions (e.g., half ocean and half forest), this may not
be a valid assumption. Correctly applied, this normalization algorithm helps remove albedo
variations and topographic effects.
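A minimal sketch of this idea follows, assuming the normalization is implemented as a per-pixel scaling that gives every spectrum the same mean brightness; the exact Zamudio and Atkinson formulation may differ.

import numpy as np

def equal_area_normalize(cube):
    """Scale each pixel spectrum to the same overall average brightness.

    cube -- hyperspectral array of shape (rows, cols, bands)
    """
    cube = cube.astype(float)
    pixel_mean = cube.mean(axis=2, keepdims=True)   # per-pixel brightness
    scene_mean = cube.mean()                        # overall scene brightness
    return cube * (scene_mean / np.maximum(pixel_mean, 1e-12))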
IAR Reflectance
As discussed above, it is desired to convert the spectra recorded by the sensor into a form that
can be compared to known reference spectra. This technique calculates a relative reflectance by
dividing each spectrum (pixel) by the scene average spectrum (Kruse, 1988). The algorithm is
based on the assumption that this scene average spectrum is largely composed of the
atmospheric contribution and that the atmosphere is uniform across the scene. However, these
assumptions are not always valid. In particular, the average spectrum could contain absorption
features related to target materials of interest. The algorithm could then overcompensate for
(i.e., remove) these absorbance features. The average spectrum should be visually inspected to
check for this possibility. Properly applied, this technique can remove the majority of
atmospheric effects.
Log Residuals
The Log Residuals technique was originally described by Green and Craig (Green and Craig,
1985), but has been variously modified by researchers. The version implemented here is similar
to the approach of Lyon (Lyon, 1987). The algorithm can be conceptualized as:
Output Spectrum = (input spectrum) - (average spectrum) -
(pixel brightness) + (image brightness)
All parameters in the above equation are in logarithmic space, hence the name.
This algorithm corrects the image for atmospheric absorption, systemic instrumental variation,
and illuminance differences between pixels.
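One straightforward reading of that relationship, with each term taken as a mean in log space, is sketched below; the exact Lyon/ERDAS implementation may differ in detail.

import numpy as np

def log_residuals(cube):
    """Log Residuals per the relationship above, computed in log space.

    cube -- hyperspectral array of shape (rows, cols, bands), values > 0
    """
    log_cube = np.log(cube.astype(float))
    average_spectrum = log_cube.mean(axis=(0, 1))            # per-band scene average
    pixel_brightness = log_cube.mean(axis=2, keepdims=True)  # per-pixel mean over bands
    image_brightness = log_cube.mean()                       # overall mean

    residual = log_cube - average_spectrum - pixel_brightness + image_brightness
    return np.exp(residual)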
Rescale
Many hyperspectral scanners record the data in a format larger than 8-bit. In addition, many of
the calculations used to correct the data are performed with a floating point format to preserve
precision. At some point, it is advantageous to compress the data back into an 8-bit range for
effective storage and/or display. However, when rescaling data to be used for imaging
spectrometry analysis, it is necessary to consider all data values within the data cube, not just
within the layer of interest. This algorithm is designed to maintain the 3-dimensional integrity
of the data values. Any bit format can be input. The output image is always 8-bit.
When rescaling a data cube, a decision must be made as to which bands to include in the
rescaling. Clearly, a bad band (i.e., a low S/N layer) should be excluded. Some sensors image
in different regions of the electromagnetic (EM) spectrum (e.g., reflective and thermal infrared
or long- and short-wave reflective infrared). When rescaling these data sets, it may be
appropriate to rescale each EM region separately. These can be input using the Select Layer
option in the Viewer.
NOTE: Bands 26 through 28, and 46 through 55, have been deleted from the calculation. The
deleted bands are still rescaled, but they are not factored into the rescale calculation.
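A minimal sketch of such a cube-wide rescale follows, assuming one global linear stretch computed over the kept bands and applied to every band, including any excluded ones, so the relationships between bands are preserved.

import numpy as np

def rescale_cube_to_8bit(cube, exclude_bands=()):
    """Rescale a data cube to 8-bit using one global min/max over the kept bands.

    cube          -- array of shape (rows, cols, bands)
    exclude_bands -- indices of bad (low S/N) bands left out of the min/max
                     calculation; they are still rescaled with the same factors
    """
    cube = cube.astype(float)
    keep = [b for b in range(cube.shape[2]) if b not in set(exclude_bands)]
    lo = cube[:, :, keep].min()
    hi = cube[:, :, keep].max()
    scaled = (cube - lo) / max(hi - lo, 1e-12) * 255.0
    return np.clip(scaled, 0, 255).astype(np.uint8)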
Processing Sequence
The above (and other) processing steps are utilized to convert the raw image into a form that is
easier to interpret. This interpretation often involves comparing the imagery, either visually or
automatically, to laboratory spectra or other known end-member spectra. At present there is no
widely accepted standard processing sequence to achieve this, although some have been
advanced in the scientific literature (Zamudio and Atkinson, 1990; Kruse, 1988; Green and
Craig, 1985; Lyon, 1987). Two common processing sequences have been programmed as single
automatic enhancements, as follows:
Spectrum Average
In some instances, it may be desirable to average together several pixels. This is mentioned
above under “IAR Reflectance” as a test for applicability. In preparing reference spectra for
classification, or to save in the Spectral Library, an average spectrum may be more
representative than a single pixel. Note that to implement this function it is necessary to define
which pixels to average using the AOI tools. This enables you to average any set of pixels that
is defined; the pixels do not need to be contiguous and there is no limit on the number of pixels
averaged. Note that the output from this program is a single pixel with the same number of input
bands as the original image.
[Figure: an AOI polygon defines the set of pixels whose spectra are averaged]
Signal to Noise
The signal-to-noise (S/N) ratio is commonly used to evaluate the usefulness or validity of a
particular band. In this implementation, S/N is defined as Mean/Std.Dev. in a 3 × 3 moving
window. After running this function on a data set, each layer in the output image should be
visually inspected to evaluate suitability for inclusion into the analysis. Layers deemed
unacceptable can be excluded from the processing by using the Select Layers option of the
various Graphical User Interfaces (GUIs). This can be used as a sensor evaluation tool.
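A sketch of this per-pixel S/N calculation is shown below, using scipy's uniform filter for the 3 × 3 local mean and standard deviation; scipy is an assumption here, not part of the package described.

import numpy as np
from scipy.ndimage import uniform_filter

def snr_image(band):
    """Signal-to-noise per pixel: local mean / local std. dev. in a 3 x 3 window."""
    band = band.astype(float)
    local_mean = uniform_filter(band, size=3)
    local_sq_mean = uniform_filter(band * band, size=3)
    local_std = np.sqrt(np.maximum(local_sq_mean - local_mean ** 2, 0.0))
    return local_mean / np.maximum(local_std, 1e-12)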
Mean per Pixel
This algorithm outputs a single band, regardless of the number of input bands. By visually
inspecting this output image, it is possible to see if particular pixels are outside the norm. While
this does not mean that these pixels are incorrect, they should be evaluated in this context. For
example, a CCD detector could have several sites (pixels) that are dead or have an anomalous
response; these would be revealed in the mean-per-pixel image. This can be used as a sensor
evaluation tool.
Profile Tools
To aid in visualizing this three-dimensional data cube, three basic tools have been designed:
• Spectral Profile—a display that plots the reflectance spectrum of a designated pixel, as
shown in Figure 6-26.
• Spatial Profile—a display that plots spectral information along a user-defined polyline. The
data can be displayed two-dimensionally for a single band, as in Figure 6-27.
The data can also be displayed three-dimensionally for multiple bands, as in Figure 6-28.
• Surface Profile—a display that allows you to designate an x,y area and view any selected
layer, z.
Wavelength Axis
Data tapes containing hyperspectral imagery commonly designate the bands as a simple
numerical sequence. When plotted using the profile tools, this yields an x-axis labeled as 1, 2,
3, 4, etc. Elsewhere on the tape or in the accompanying documentation is a file that lists the
center frequency and width of each band. This information should be linked to the image
intensity values for accurate analysis or comparison to other spectra, such as the Spectra
Libraries.
Spectral Library
Two spectral libraries are presently included in the software package (JPL and USGS). In
addition, it is possible to extract spectra (pixels) from a data set or prepare average spectra from
an image and save these in a user-derived spectral library. This library can then be used for
visual comparison with other image spectra, or it can be used as input signatures in a
classification.
Classification
The advent of data sets with very large numbers of bands has pressed the limits of the traditional
classifiers such as Isodata, Maximum Likelihood, and Minimum Distance, but has not obviated
their usefulness. Much research has been directed toward the use of Artificial Neural Networks
(ANN) to more fully utilize the information content of hyperspectral images (Merenyi et al,
1996). To date, however, these advanced techniques have proven to be only marginally better
at a considerable cost in complexity and computation. For certain applications, both Maximum
Likelihood (Benediktsson et al, 1990) and Minimum Distance (Merenyi et al, 1996) have
proven to be appropriate. Chapter 7 “Classification” contains a detailed discussion of these
classification techniques.
A second category of classification techniques utilizes the imaging spectroscopy model for
approaching hyperspectral data sets. This approach requires a library of possible end-member
materials. These can be from laboratory measurements using a scanning spectrometer and
reference standards (Clark et al, 1990). The JPL and USGS libraries are compiled this way. The
reference spectra (signatures) can also be scene-derived from either the scene under study or
another similar scene (Adams et al, 1989).
System Requirements
Because of the large number of bands, a hyperspectral data set can be surprisingly large. For
example, an AVIRIS scene is only 512 × 614 pixels in dimension, which seems small.
However, when multiplied by 224 bands (channels) and 16 bits, it requires over 140 megabytes
of data storage space. Processing this scene requires correspondingly large swap and temp space.
In practice, it has been found that a 48 Mb memory board and 100 Mb of swap space is a
minimum requirement for efficient processing. Temporary file space requirements depend upon
the process being run.
Fourier Analysis
Image enhancement techniques can be divided into two basic categories: point and
neighborhood. Point techniques enhance the pixel based only on its value, with no concern for
the values of neighboring pixels. These techniques include contrast stretches (nonadaptive),
classification, and level slices. Neighborhood techniques enhance a pixel based on the values of
surrounding pixels. As a result, these techniques require the processing of a possibly large
number of pixels for each output pixel. The most common way of implementing these
enhancements is via a moving window convolution. However, as the size of the moving window
increases, the number of requisite calculations becomes enormous. An enhancement that
requires a convolution operation in the spatial domain can be implemented as a simple
multiplication in frequency space—a much faster calculation.
In ERDAS IMAGINE, the FFT is used to convert a raster image from the spatial (normal)
domain into a frequency domain image. The FFT calculation converts the image into a series of
two-dimensional sine waves of various frequencies. The Fourier image itself cannot be easily
viewed, but the magnitude of the image can be calculated, which can then be displayed either
in the Viewer or in the FFT Editor. Analysts can edit the Fourier image to reduce noise or
remove periodic features, such as striping. Once the Fourier image is edited, it is then
transformed back into the spatial domain by using an IFFT. The result is an enhanced version
of the original image.
This section focuses on the Fourier editing techniques available in the FFT Editor. Some rules
and guidelines for using these tools are presented in this document. Also included are some
examples of techniques that generally work for specific applications, such as striping.
NOTE: You may also want to refer to the works cited at the end of this section for more
information.
The basic premise behind a Fourier transform is that any one-dimensional function, f(x) (which
might be a row of pixels), can be represented by a Fourier series consisting of some combination
of sine and cosine terms and their associated coefficients. For example, a line of pixels with a
high spatial frequency gray scale pattern might be represented in terms of a single coefficient
multiplied by a sin(x) function. High spatial frequencies are those that represent frequent gray
scale changes in a short pixel distance. Low spatial frequencies represent infrequent gray scale
changes that occur gradually over a relatively large number of pixel distances. A more
complicated function, f(x), might have to be represented by many sine and cosine terms with
their associated coefficients.
[Figure 6-30: sine and cosine basis functions plotted over 0 to 2π, and the Fourier representation of a function f(x)]
Figure 6-30 shows how a function f(x) can be represented as a linear combination of sine and
cosine. The Fourier transform of that same function is also shown.
A Fourier transform is a linear transformation that allows calculation of the coefficients
necessary for the sine and cosine terms to adequately represent the image. This theory is used
extensively in electronics and signal processing, where electrical signals are continuous and not
discrete; for discrete data such as digital images, the discrete Fourier transform (DFT) has been
developed. Because of the computational load in calculating the values for all the sine and
cosine terms along with the coefficient multiplications, a highly efficient version of the DFT
was developed and called the FFT.
Applications
Fourier transformations are typically used for the removal of noise such as striping, spots, or
vibration in imagery by identifying periodicities (areas of high spatial frequency). Fourier
editing can be used to remove regular errors in data such as those caused by sensor anomalies
(e.g., striping). This analysis technique can also be used across bands as another form of
pattern/feature recognition.
F(u, v) = Σ(x = 0 to M–1) Σ(y = 0 to N–1) [ f(x, y) e^(–j2πux/M) e^(–j2πvy/N) ]
Where:
M = the number of pixels horizontally
N = the number of pixels vertically
u,v = spatial frequency variables
e ≈ 2.71828, the natural logarithm base
j = the imaginary component of a complex number
The number of pixels horizontally and vertically must each be a power of two. If the dimensions
of the input image are not a power of two, they are padded up to the next highest power of two.
There is more information about this later in this section.
Source: Modified from Oppenheim and Schafer, 1975; Press et al, 1988.
Images computed by this algorithm are saved with an .fft file extension.
You should run a Fourier Magnitude transform on an .fft file before viewing it in the
Viewer. The FFT Editor automatically displays the magnitude without further processing.
Fourier Magnitude
The raster image generated by the FFT calculation is not an optimum image for viewing or
editing. Each pixel of a Fourier image is a complex number (i.e., it has two components: real and
imaginary). For display as a single image, these components are combined in a root-sum-of-
squares operation. Also, since the dynamic range of Fourier spectra vastly exceeds the range of
a typical display device, the Fourier Magnitude calculation involves a logarithmic function.
Finally, a Fourier image is symmetric about the origin (u, v = 0, 0). If the origin is plotted at the
upper left corner, the symmetry is more difficult to see than if the origin is at the center of the
image. Therefore, in the Fourier magnitude image, the origin is shifted to the center of the raster
array.
In this transformation, each .fft layer is processed twice. First, the maximum magnitude, |X|max,
is computed. Then, the following computation is performed for each FFT element magnitude x:
y(x) = 255.0 × ln[ (x / |x|max) × (e – 1) + 1 ]
Where:
x = input FFT element
y = the normalized log magnitude of the FFT element
|x|max = the maximum magnitude
e ≈ 2.71828, the natural logarithm base
|| = the magnitude operator
This function was chosen so that y would be proportional to the logarithm of a linear function
of x, with y(0)=0 and y (|x|max) = 255.
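Putting the pieces together, the magnitude display can be sketched with numpy's FFT routines as follows; the centering, root-sum-of-squares magnitude, and scaling function y(x) follow the description above, while the function name is illustrative.

import numpy as np

def fourier_magnitude(band):
    """Log-scaled, centered Fourier magnitude image as described above."""
    spectrum = np.fft.fft2(band.astype(float))
    spectrum = np.fft.fftshift(spectrum)        # shift the origin to the center
    magnitude = np.abs(spectrum)                # root-sum-of-squares of real and imaginary
    max_mag = magnitude.max()
    # y(x) = 255 ln[(x/|x|max)(e - 1) + 1], so y(0) = 0 and y(|x|max) = 255.
    return 255.0 * np.log(magnitude / max_mag * (np.e - 1.0) + 1.0)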
In Figure 6-31, Image A is one band of a badly striped Landsat TM scene. Image B is the Fourier
Magnitude image derived from the Landsat image.
Note that, although Image A has been transformed into Image B, these raster images are very
different symmetrically. The origin of Image A is at (x, y) = (0, 0) in the upper left corner. In
Image B, the origin (u, v) = (0, 0) is in the center of the raster. The low frequencies are plotted
near this origin while the higher frequencies are plotted further out. Generally, the majority of
the information in an image is in the low frequencies. This is indicated by the bright area at the
center (origin) of the Fourier image.
It is important to realize that a position in a Fourier image, designated as (u, v), does not always
represent the same frequency, because it depends on the size of the input raster image. A large
spatial domain image contains components of lower frequency than a small spatial domain
image. As mentioned, these lower frequencies are plotted nearer to the center (u, v = 0, 0) of the
Fourier image. Note that the units of spatial frequency are inverse length, e.g., m-1.
The sampling increments in the spatial and frequency domain are related by:
Δu = 1 / (M Δx)

Δv = 1 / (N Δy)
Where:
M = horizontal image size in pixels
N = vertical image size in pixels
∆x = pixel size
∆y = pixel size
For example, converting a 512 × 512 Landsat TM image (pixel size = 28.5 m) into a Fourier
image gives:

Δu = Δv = 1 / (512 × 28.5) = 6.85 × 10⁻⁵ m⁻¹

u or v    Frequency
0         0
1         6.85 × 10⁻⁵
2         1.37 × 10⁻⁴

For a 1024 × 1024 image with the same pixel size:

Δu = Δv = 1 / (1024 × 28.5) = 3.42 × 10⁻⁵ m⁻¹

u or v    Frequency
0         0
1         3.42 × 10⁻⁵
2         6.85 × 10⁻⁵
So, as noted above, the frequency represented by a (u, v) position depends on the size of the
input image.
For the above calculation, the sample images are 512 × 512 and 1024 × 1024 (powers of two).
These were selected because the FFT calculation requires that the height and width of the input
image be a power of two (although the image need not be square). In practice, input images
usually do not meet this criterion. Three possible solutions are available in ERDAS IMAGINE:
• Pad the image—the input raster is increased in size to the next power of two by imbedding
it in a field of the mean value of the entire input image.
• Resample the image so that its height and width are powers of two.
• Subset the image using a power of two.

[Figure: a 300 × 300 input image padded to 512 × 512]
The padding technique is automatically performed by the FFT program. It produces a minimum
of artifacts in the output Fourier image. If the image is subset using a power of two (i.e., 64 ×
64, 128 × 128, 64 × 128), no padding is used.
IFFT
The IFFT computes the inverse two-dimensional FFT of the spectrum stored.
• The input file must be in the compressed .fft format described earlier (i.e., output from the
FFT or FFT Editor).
• If the original image was padded by the FFT program, the padding is automatically
removed by IFFT.
• This program creates (and deletes, upon normal termination) a temporary file large enough
to contain one entire band of .fft data.

The specific expression calculated by this program is:

f(x, y) = [1 / (MN)] Σ(u = 0 to M–1) Σ(v = 0 to N–1) [ F(u, v) e^(j2πux/M) e^(j2πvy/N) ],
for 0 ≤ x ≤ M – 1, 0 ≤ y ≤ N – 1
Where:
M = the number of pixels horizontally
N = the number of pixels vertically
u, v = spatial frequency variables
e ≈ 2.71828, the natural logarithm base
Source: Modified from Oppenheim and Schafer, 1975 and Press et al, 1988.
Images computed by this algorithm are saved with an .ifft.img file extension by default.
Filtering
Operations performed in the frequency (Fourier) domain can be visualized in the context of the
familiar convolution function. The mathematical basis of this interrelationship is the
convolution theorem, which states that a convolution operation in the spatial domain is
equivalent to a multiplication operation in the frequency domain:
g(x, y) = h(x, y) ∗ f(x, y) ≡ G(u, v) = H(u, v) × F(u, v)
Where:
f(x, y) = input image
h(x, y) = position invariant operation (convolution kernel)
g(x, y) = output image
G, F, H = Fourier transforms of g, f, h
The names high-pass, low-pass, high-frequency indicate that these convolution functions derive
from the frequency domain.
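A small numerical check of the convolution theorem is sketched below, assuming circular (wraparound) convolution so that the FFT product and the direct spatial convolution agree exactly; the 3 × 3 averaging kernel mirrors the low-pass example discussed next.

import numpy as np

rng = np.random.default_rng(0)
f = rng.random((64, 64))               # input image f(x, y)
h = np.zeros((64, 64))
h[:3, :3] = 1.0 / 9.0                  # 3 x 3 averaging (low-pass) kernel, zero-padded

# Frequency domain: multiply the transforms, G = H x F, then transform back.
g_freq = np.real(np.fft.ifft2(np.fft.fft2(h) * np.fft.fft2(f)))

# Spatial domain: direct circular convolution, g = h * f.
g_spat = np.zeros_like(f)
for a in range(3):
    for b in range(3):
        g_spat += np.roll(f, (a, b), axis=(0, 1)) / 9.0

print(np.allclose(g_freq, g_spat))     # True: the two results agree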
Low-Pass Filtering
The simplest example of this relationship is the low-pass kernel. The name, low-pass kernel, is
derived from a filter that would pass low frequencies and block (filter out) high frequencies. In
practice, this is easily achieved in the spatial domain by the M = N = 3 kernel:
1 1 1
1 1 1
1 1 1
Obviously, as the size of the image and, particularly, the size of the low-pass kernel increases,
the calculation becomes more time-consuming. Depending on the size of the input image and
the size of the kernel, it can be faster to generate a low-pass image via Fourier processing.
Figure 6-33 compares Direct and Fourier domain processing for finite area convolution.
In the Fourier domain, the low-pass operation is implemented by attenuating the pixels’
frequencies that satisfy:
u² + v² > D0²
Image size     Window radius (D0)     Equivalent low-pass kernel size
64 × 64               50                        3
64 × 64               30                        3.5
64 × 64               20                        5
64 × 64               10                        9
64 × 64                5                        14
128 × 128             20                        13
128 × 128             10                        22
256 × 256             20                        25
256 × 256             10                        42
This table shows that using a window on a 64 × 64 Fourier image with a radius of 50 as the
cutoff is the same as using a 3 × 3 low-pass kernel on a 64 × 64 spatial domain image.
High-Pass Filtering
Just as images can be smoothed (blurred) by attenuating the high-frequency components of an
image using low-pass filters, images can be sharpened and edge-enhanced by attenuating the
low-frequency components using high-pass filters. In the Fourier domain, the high-pass
operation is implemented by attenuating the pixels’ frequencies that satisfy:
u² + v² < D0²
Windows
The attenuation discussed above can be done in many different ways. In ERDAS IMAGINE
Fourier processing, five window functions are provided to achieve different types of
attenuation:
• Ideal
• Bartlett (triangular)
• Butterworth
• Gaussian
• Hanning (cosine)
Each of these windows must be defined when a frequency domain process is used. This
application is perhaps easiest understood in the context of the high-pass and low-pass filter
operations. Each window is discussed in more detail:
Ideal
The simplest low-pass filtering is accomplished using the ideal window, so named because its
cutoff point is absolute. Note that in Figure 6-34 the cross section is ideal.
[Figure 6-34: ideal low-pass window cross section. The gain H(u,v) is 1 out to the cutoff D0 and 0 beyond it.]

H(u, v) = 1   if D(u, v) ≤ D0
H(u, v) = 0   if D(u, v) > D0
All frequencies inside a circle of a radius D0 are retained completely (passed), and all
frequencies outside the radius are completely attenuated. The point D0 is termed the cutoff
frequency.
High-pass filtering using the ideal window looks like the following illustration:
[Figure: ideal high-pass window cross section. The gain H(u,v) is 0 out to the cutoff D0 and 1 beyond it.]

H(u, v) = 0   if D(u, v) ≤ D0
H(u, v) = 1   if D(u, v) > D0
All frequencies inside a circle of a radius D0 are completely attenuated, and all frequencies
outside the radius are retained completely (passed).
A major disadvantage of the ideal filter is that it can cause ringing artifacts, particularly if the
radius (r) is small. The smoother functions (e.g., Butterworth and Hanning) minimize this
effect.
Bartlett
Filtering using the Bartlett window is a triangular function, as shown in the following low- and
high-pass cross sections:
[Figure: Bartlett (triangular) window, low-pass and high-pass cross sections of gain H(u,v) versus D(u,v)]
[Figure: Butterworth window, low-pass and high-pass cross sections; the gain is 0.5 at the cutoff D0]
NOTE: The Butterworth window approaches its window center gain asymptotically.
Fourier Noise Removal
Occasionally, images are corrupted by noise that is periodic in nature. An example of this is the
scan lines that are present in some TM images. When these images are transformed into Fourier
space, the periodic line pattern becomes a radial line. The Fourier Analysis functions provide
two main tools for reducing noise in images:
• editing
• automatic removal of periodic noise
Editing
In practice, it has been found that radial lines centered at the Fourier origin (u, v = 0, 0) are best
removed using back-to-back wedges centered at (0, 0). It is possible to remove these lines using
very narrow wedges with the Ideal window. However, the sudden transitions resulting from
zeroing-out sections of a Fourier image cause ringing in the image when it is transformed
back into the spatial domain. This effect can be lessened by using a less abrupt window, such as
Butterworth.
Other types of noise can produce artifacts, such as lines not centered at u,v = 0,0 or circular spots
in the Fourier image. These can be removed using the tools provided in the FFT Editor. As these
artifacts are always symmetrical in the Fourier magnitude image, editing tools operate on both
components simultaneously. The FFT Editor contains tools that enable you to attenuate a
circular or rectangular region anywhere on the image.
Select the Periodic Noise Removal option from Image Interpreter to use this function.
Homomorphic Filtering
Homomorphic filtering is based upon the principle that an image may be modeled as the product of illumination and reflectance components:
I(x, y) = i(x, y) × r(x, y)
Where:
I(x, y) = image intensity (DN) at pixel x, y
i(x, y) = illumination of pixel x, y
r(x, y) = reflectance at pixel x, y
The illumination image is a function of lighting conditions and shadows. The reflectance image
is a function of the object being imaged. A log function can be used to separate the two
components (i and r) of the image:
ln I(x, y) = ln i(x, y) + ln r(x, y)
This transforms the image from multiplicative to additive superposition. With the two
component images separated, any linear operation can be performed. In this application, the
image is now transformed into Fourier space. Because the illumination component usually
dominates the low frequencies, while the reflectance component dominates the higher
frequencies, the image may be effectively manipulated in the Fourier domain.
By using a filter on the Fourier image, which increases the high-frequency components, the
reflectance image (related to the target material) may be enhanced, while the illumination image
(related to the scene illumination) is de-emphasized.
Select the Homomorphic Filter option from Image Interpreter to use this function.
(Figure: in the homomorphic filtering sequence, the illumination component i is decreased while the reflectance component r is increased.)
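The chain above can be sketched in a few lines of numpy. This is a hedged illustration only, not the ERDAS IMAGINE Homomorphic Filter: the Butterworth-style gain curve, the cutoff, and the low and high gain values are illustrative assumptions, and the input is assumed to be a strictly positive single-band array.

import numpy as np

# Sketch of homomorphic filtering: ln -> FFT -> high-frequency emphasis -> IFFT -> exp.
# Gains, cutoff, and order are illustrative values, not recommended settings.
def homomorphic_filter(image, d0=30.0, low_gain=0.5, high_gain=2.0, order=2):
    rows, cols = image.shape
    log_image = np.log(image)                                # i * r becomes ln i + ln r
    spectrum = np.fft.fftshift(np.fft.fft2(log_image))
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    dist = np.maximum(np.sqrt(u[:, None] ** 2 + v[None, :] ** 2), 1e-6)
    high_pass = 1.0 / (1.0 + (d0 / dist) ** (2 * order))     # Butterworth-style high-pass curve
    gain = low_gain + (high_gain - low_gain) * high_pass     # de-emphasize i, emphasize r
    filtered = np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * gain)))
    return np.exp(filtered)                                  # back out of log space

image = np.random.rand(128, 128) + 0.1                       # keep values positive for the log
enhanced = homomorphic_filter(image)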
As mentioned earlier, if an input image is not a power of two, the ERDAS IMAGINE Fourier
analysis software automatically pads the image to the next largest size to make it a power of
two. For manual editing, this causes no problems. However, in automatic processing, such as
the homomorphic filter, the artifacts induced by the padding may have a deleterious effect on
the output image. For this reason, it is recommended that images that are not a power of two be
subset before being used in an automatic process.
A detailed description of the theory behind Fourier series and Fourier transforms is given in Gonzalez and Wintz (Gonzalez and Wintz, 1977). See also Oppenheim (Oppenheim and Schafer, 1975) and Press (Press et al, 1988).
Radar Imagery Enhancement
The nature of the surface phenomena involved in radar imaging is inherently different from that of visible/infrared (VIS/IR) images. When VIS/IR radiation strikes a surface it is either
absorbed, reflected, or transmitted. The absorption is based on the molecular bonds in the
(surface) material. Thus, this imagery provides information on the chemical composition of the
target.
When radar microwaves strike a surface, they are reflected according to the physical and
electrical properties of the surface, rather than the chemical composition. The strength of radar
return is affected by slope, roughness, and vegetation cover. The conductivity of a target area is
related to the porosity of the soil and its water content. Consequently, radar and VIS/IR data are
complementary; they provide different information about the target area. An image in which
these two data types are intelligently combined can present much more information than either
image by itself.
See Chapter 1 “Raster Data” and Chapter 3 “Raster and Vector Data Sources” for more
information on radar data.
This section describes enhancement techniques that are particularly useful for radar imagery.
While these techniques can be applied to other types of image data, this discussion focuses on
the special requirements of radar imagery enhancement. The ERDAS IMAGINE Radar
Interpreter provides a sophisticated set of image processing tools designed specifically for use
with radar imagery. This section describes the functions of the ERDAS IMAGINE Radar
Interpreter.
For information on the Radar Image Enhancement function, see the section on
“Radiometric Enhancement”.
Speckle Noise
Speckle noise is commonly observed in radar (microwave or millimeter wave) sensing systems, although it may appear in any type of remotely sensed image utilizing coherent radiation. An
active radar sensor gives off a burst of coherent radiation that reflects from the target, unlike a
passive microwave sensor that simply receives the low-level radiation naturally emitted by
targets.
Like the light from a laser, the waves emitted by active sensors travel in phase and interact
minimally on their way to the target area. After interaction with the target area, these waves are
no longer in phase. This is because of the different distances they travel from targets, or single
versus multiple bounce scattering.
Once out of phase, radar waves can interact to produce light and dark pixels known as speckle
noise. Speckle noise must be reduced before the data can be effectively utilized. However, the
image processing programs used to reduce speckle noise produce changes in the image.
Because any image processing done before removal of the speckle results in the noise
being incorporated into and degrading the image, you should not rectify, correct to
ground range, or in any way resample, enhance, or classify the pixel values before
removing speckle noise. Functions using Nearest Neighbor are technically permissible,
but not advisable.
Since different applications and different sensors necessitate different speckle removal models,
ERDAS IMAGINE Radar Interpreter includes several speckle reduction algorithms:
• Mean filter
• Median filter
• Lee-Sigma filter
• Lee filter
• Frost filter
• Gamma-MAP filter
NOTE: Speckle noise in radar images cannot be completely removed. However, it can be
reduced significantly.
Mean Filter
The Mean filter is a simple calculation. The pixel of interest (center of window) is replaced by
the arithmetic average of all values within the window. This filter does not remove the aberrant
(speckle) value; it averages it into the data.
In theory, a bright and a dark pixel within the same window would cancel each other out. This
consideration would argue in favor of a large window size (e.g., 7 × 7). However, averaging
results in a loss of detail, which argues for a small window size.
In general, this is the least satisfactory method of speckle reduction. It is useful for applications
where loss of resolution is not a problem.
Median Filter
A better way to reduce speckle, but still simplistic, is the Median filter. This filter operates by
arranging all DN values in sequential order within the window that you define. The pixel of
interest is replaced by the value in the center of this distribution. A Median filter is useful for
removing pulse or spike noise. Pulse functions of less than one-half of the moving window
width are suppressed or eliminated. In addition, step functions or ramp functions are retained.
The effect of Mean and Median filters on various signals is shown (for one dimension) in Figure
6-39.
(Figure 6-39: one-dimensional step, ramp, single pulse, and double pulse signals, showing how Mean and Median filtering affect each.)
The Median filter is useful for noise suppression in any image. It does not affect step or ramp
functions; it is an edge preserving filter (Pratt, 1991). It is also applicable in removing pulse
function noise, which results from the inherent pulsing of microwaves. An example of the
application of the Median filter is the removal of dead-detector striping, as found in Landsat 4
TM data (Crippen, 1989a).
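For readers working outside ERDAS IMAGINE, the Mean and Median operations described above can be sketched with scipy.ndimage; the window sizes below are illustrative choices, not recommendations.

import numpy as np
from scipy import ndimage

radar = np.random.rand(256, 256)                          # stand-in for a speckled radar band
mean_filtered = ndimage.uniform_filter(radar, size=7)     # replaces each pixel with the window average
median_filtered = ndimage.median_filter(radar, size=3)    # suppresses pulse/spike noise while
                                                          # preserving step and ramp functions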
(Figure: the moving window divided into regions surrounding the pixel of interest, for example North, NE, and SW regions.)

Variance = Σ (DNx,y – Mean)² / (n – 1)
Source: Nagao and Matsuyama, 1978
The algorithm compares the variance values of the regions surrounding the pixel of interest. The
pixel of interest is replaced by the mean of all DN values within the region with the lowest
variance (i.e., the most uniform region). A region with low variance is assumed to have pixels
minimally affected by wave interference, yet very similar to the pixel of interest. A region of
low variance is probably such for several surrounding pixels.
The result is that the output image is composed of numerous uniform areas, the size of which is
determined by the moving window size. In practice, this filter can be utilized sequentially 2 or
3 times, increasing the window size. The resultant output image is an appropriate input to a
classification application.
Table 6-2 gives theoretical coefficient of variation values for various look-average radar scenes:
The Lee filters are based on the assumption that the mean and variance of the pixel of interest
are equal to the local mean and variance of all pixels within the moving window you select.
The actual calculation used for the Lee filter is:
DNout = [Mean] + K[DNin - Mean]
Where:
Mean = average of pixels in a moving window
K = Var(x) / ([Mean]² σ² + Var(x))

Var(x) = ([Variance within window] + [Mean within window]²) / ([Sigma]² + 1) – [Mean within window]²
Source: Lee, 1981
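The Lee formula above can be sketched with moving-window statistics computed from uniform filters. This is a hedged illustration, not the ERDAS IMAGINE Lee filter: sigma stands in for the coefficient of variation you would supply, the window size is illustrative, and a small constant guards the division.

import numpy as np
from scipy import ndimage

# Sketch of the Lee filter: DNout = Mean + K (DNin - Mean), with K and Var(x) as defined above.
def lee_filter(image, size=7, sigma=0.25):
    mean = ndimage.uniform_filter(image, size)                        # Mean within window
    mean_sq = ndimage.uniform_filter(image * image, size)
    window_var = mean_sq - mean ** 2                                  # Variance within window
    var_x = (window_var + mean ** 2) / (sigma ** 2 + 1) - mean ** 2
    var_x = np.maximum(var_x, 0.0)                                    # clamp negative estimates
    k = var_x / (mean ** 2 * sigma ** 2 + var_x + 1e-12)
    return mean + k * (image - mean)

speckled = np.random.rand(256, 256)
despeckled = lee_filter(speckled, size=7, sigma=0.25)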
The Sigma filter is based on the probability of a Gaussian distribution. It is assumed that 95.5%
of random samples are within a 2 standard deviation (2 sigma) range. This noise suppression
filter replaces the pixel of interest with the average of all DN values within the moving window
that fall within the designated range.
As with all the radar speckle filters, you must specify a moving window size. The center pixel
of the moving window is the pixel of interest.
As with the Statistics filter, a coefficient of variation specific to the data set must be input.
Finally, you must specify how many standard deviations to use (2, 1, or 0.5) to define the
accepted range.
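A simplified sketch of the two-sigma averaging idea is shown below. It assumes multiplicative speckle, so the accepted range is approximated as the coefficient of variation times the center pixel value times the chosen number of standard deviations; the names, window size, and values are illustrative and do not reproduce the ERDAS IMAGINE Lee-Sigma implementation.

import numpy as np
from scipy import ndimage

# Replace the pixel of interest with the average of window values inside the accepted range.
def sigma_average(values, cv=0.25, n_sigma=2.0):
    center = values[values.size // 2]                   # pixel of interest (window center)
    spread = n_sigma * cv * center                      # approximate accepted range for this pixel
    accepted = values[np.abs(values - center) <= spread]
    return accepted.mean() if accepted.size else center

speckled = np.random.rand(128, 128)
filtered = ndimage.generic_filter(speckled, sigma_average, size=5,
                                  extra_keywords={"cv": 0.25, "n_sigma": 2.0})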
The statistical filters (Sigma and Statistics) are logically applicable to any data set for
preprocessing. Any sensor system has various sources of noise, resulting in a few erratic pixels.
In VIS/IR imagery, most natural scenes are found to follow a normal distribution of DN values,
thus filtering at 2 standard deviations should remove this noise. This is particularly true of
experimental sensor systems that frequently have significant noise problems.
These speckle filters can be used iteratively. You must view and evaluate the resultant image
after each pass (the data histogram is useful for this), and then decide if another pass is
appropriate and what parameters to use on the next pass. For example, three passes of the Sigma
filter with the following parameters are very effective when used with any type of data:
Similarly, there is no reason why successive passes must be of the same filter. The following
sequence is useful prior to a classification:
With all speckle reduction filters there is a playoff between noise reduction and loss of
resolution. Each data set and each application have a different acceptable balance
between these two factors. The ERDAS IMAGINE filters have been designed to be
versatile and gentle in reducing noise (and resolution).
Frost Filter
The Frost filter is a minimum mean square error algorithm that adapts to the local statistics of
the image. The local statistics serve as weighting parameters for the impulse response of the
filter (moving window). This algorithm assumes that noise is multiplicative with stationary
statistics.
The formula used is:
DN = Σ(n×n) K α e^(–α|t|)

Where:
K = normalization constant
Ī = local mean
σ = local variance
σ̄ = image coefficient of variation value
|t| = |X – X0| + |Y – Y0|
n = moving window size

And:

α = (4 / (n σ̄²)) (σ² / Ī²)
Gamma-MAP Filter
The Maximum A Posteriori (MAP) filter attempts to estimate the original pixel DN, which is
assumed to lie between the local average and the degraded (actual) pixel DN. MAP logic
maximizes the a posteriori probability density function with respect to the original image.
Many speckle reduction filters (e.g., Lee, Lee-Sigma, Frost) assume a Gaussian distribution for
the speckle noise. Recent work has shown this to be an invalid assumption. Natural vegetated
areas have been shown to be more properly modeled as having a Gamma distributed cross
section. This algorithm incorporates this assumption. The exact formula used is the cubic
equation:
Î³ – Ī Î² + σ (Î – DN) = 0

Where:
Î = sought value
Ī = local mean
DN = input value
σ = original image variance
Source: Frost et al, 1982
Edge Detection
Edge and line detection are important operations in digital image processing. For example, geologists are often interested in mapping lineaments, which may be fault lines or bedding structures. For this purpose, edge and line detection are major enhancement techniques.
In selecting an algorithm, it is first necessary to understand the nature of what is being enhanced.
Edge detection could imply amplifying an edge, a line, or a spot (see Figure 6-41).
(Figure 6-41: ideal models of a step edge, a ramp edge, a line, and a roof edge, plotted as DN value against x or y, with the DN change, slope, and slope midpoint marked.)
• Ramp edge—an edge modeled as a ramp, increasing in DN value from a low to a high level,
or vice versa. Distinguished by DN change, slope, and slope midpoint.
• Line—a region bounded on each end by an edge; width must be less than the moving
window size.
The models in Figure 6-41 represent ideal theoretical edges. However, real data values vary to
produce a more distorted edge due to sensor noise or vibration (see Figure 6-42). There are no
perfect edges in raster data, hence the need for edge detection algorithms.
Edge detection algorithms can be broken down into 1st-order derivative and 2nd-order
derivative operations. Figure 6-43 shows ideal one-dimensional edge and line intensity curves
with the associated 1st-order and 2nd-order derivatives.
(Figure 6-43: ideal one-dimensional edge and line intensity curves g(x), with the corresponding 1st derivative ∂g/∂x and 2nd derivative ∂²g/∂x² plotted beneath each.)
The 1st-order derivative kernel(s) derives from the simple Prewitt kernel:

          1   1   1                  1   0  –1
∂/∂x =    0   0   0       ∂/∂y =     1   0  –1
         –1  –1  –1                  1   0  –1

and the 2nd-order derivative kernels are:

           –1   2  –1                 –1  –1  –1
∂²/∂x² =   –1   2  –1      ∂²/∂y² =    2   2   2
           –1   2  –1                 –1  –1  –1
Larger 5 × 5 gradient templates, for example:

1   1   0  –1  –1
1   1   0  –1  –1
1   1   0  –1  –1
1   1   0  –1  –1
1   1   0  –1  –1

2   1   0  –1  –2        4   2   0  –2  –4
2   1   0  –1  –2        4   2   0  –2  –4
2   1   0  –1  –2   or   4   2   0  –2  –4
2   1   0  –1  –2        4   2   0  –2  –4
2   1   0  –1  –2        4   2   0  –2  –4
Larger template arrays provide greater noise immunity, but are computationally more demanding.
Zero-Sum Filters
A common type of edge detection kernel is a zero-sum filter. For this type of filter, the
coefficients are designed to add up to zero. Following are examples of two zero-sum filters:
           –1  –2  –1                  1   0  –1
Sobel =     0   0   0   (horizontal)   2   0  –2   (vertical)
            1   2   1                  1   0  –1

            –1  –1  –1                 1   0  –1
Prewitt =    0   0   0   (horizontal)  1   0  –1   (vertical)
             1   1   1                 1   0  –1
Prior to edge enhancement, you should reduce speckle noise by using the ERDAS
IMAGINE Radar Interpreter Speckle Suppression function.
Unweighted line:
–1 2 –1
–1 2 –1
–1 2 –1
Weighted line:
–1 2 –1
–2 4 –2
–1 2 –1
Some researchers have found that a combination of 1st- and 2nd-order derivative images
produces the best output. See Eberlein and Weszka (Eberlein and Weszka, 1975) for
information about subtracting the 2nd-order derivative (Laplacian) image from the 1st-
order derivative image (gradient).
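As a simple illustration of applying zero-sum kernels, the sketch below convolves an image with the Sobel horizontal and vertical kernels listed above and combines the responses into a gradient magnitude; the input array is a stand-in and should already be speckle-suppressed.

import numpy as np
from scipy.ndimage import convolve

sobel_horizontal = np.array([[-1, -2, -1],
                             [ 0,  0,  0],
                             [ 1,  2,  1]], dtype=float)
sobel_vertical = np.array([[1, 0, -1],
                           [2, 0, -2],
                           [1, 0, -1]], dtype=float)

image = np.random.rand(256, 256)            # stand-in for a speckle-suppressed band
gx = convolve(image, sobel_horizontal)      # response to horizontal edges
gy = convolve(image, sobel_vertical)        # response to vertical edges
edges = np.hypot(gx, gy)                    # gradient magnitude edge image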
Texture
According to Pratt (Pratt, 1991), “Many portions of images of natural scenes are devoid of sharp edges over large areas. In these areas the scene can often be characterized as exhibiting a consistent structure analogous to the texture of cloth. Image texture measurements can be used to segment an image and classify its segments.”
As an enhancement, texture is particularly applicable to radar data, although it may be applied
to any type of data with varying results. For example, it has been shown (Blom and Daily, 1982)
that a three-layer variance image using 15 × 15, 31 × 31, and 61 × 61 windows can be combined
into a three-color RGB (red, green, blue) image that is useful for geologic discrimination. The
same could apply to a vegetation classification.
You could also prepare a three-color image using three different functions operating through the
same (or different) size moving window(s). However, each data set and application would need
different moving window sizes and/or texture measures to maximize the discrimination.
The texture transforms can be used in several ways to enhance the use of radar imagery. Adding
the radar intensity image as an additional layer in a (vegetation) classification is fairly
straightforward and may be useful. However, the proper texture image (function and window
size) can greatly increase the discrimination. Using known test sites, one can experiment to
discern which texture image best aids the classification. For example, the texture image could
then be added as an additional layer to the TM bands.
As radar data come into wider use, other mathematical texture definitions may prove
useful and will be added to the ERDAS IMAGINE Radar Interpreter. In practice, you
interactively decide which algorithm and window size is best for your data and
application.
The texture measures available include:
• mean Euclidean distance (1st-order)
• variance (2nd-order)
• skewness (3rd-order)
• kurtosis (4th-order)
Mean Euclidean Distance = Σ [ Σλ (xcλ – xijλ)² ]^(1/2) / (n – 1)
Where:
xijλ = DN value for spectral band λ and pixel (i,j) of a multispectral image
xcλ = DN value for spectral band λ of a window’s center pixel
n = number of pixels in a window
Variance
Variance = Σ (xij – M)² / (n – 1)
Where:
xij = DN value of pixel (i,j)
n = number of pixels in a window
M = Mean of the moving window, where:
Mean = Σ xij / n
Skewness
Skew = Σ (xij – M)³ / ((n – 1) V^(3/2))
Where:
xij = DN value of pixel (i,j)
n = number of pixels in a window
M = Mean of the moving window (see above)
V = Variance (see above)
Kurtosis
Kurtosis = Σ (xij – M)⁴ / ((n – 1) V²)
Where:
xij = DN value of pixel (i,j)
n = number of pixels in a window
M = Mean of the moving window (see above)
V = Variance (see above)
Texture analysis is available from the Texture function in Image Interpreter and from the
ERDAS IMAGINE Radar Interpreter Texture Analysis function.
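A minimal sketch of the moving-window measures defined above, using scipy.ndimage.generic_filter, is shown below; the 5 × 5 window is an illustrative choice and the small constants only guard against division by zero in flat windows.

import numpy as np
from scipy import ndimage

def window_variance(values):
    n, m = values.size, values.mean()
    return np.sum((values - m) ** 2) / (n - 1)

def window_skewness(values):
    n, m = values.size, values.mean()
    v = np.sum((values - m) ** 2) / (n - 1)
    return np.sum((values - m) ** 3) / ((n - 1) * v ** 1.5 + 1e-12)

def window_kurtosis(values):
    n, m = values.size, values.mean()
    v = np.sum((values - m) ** 2) / (n - 1)
    return np.sum((values - m) ** 4) / ((n - 1) * v ** 2 + 1e-12)

band = np.random.rand(128, 128)                     # stand-in for a radar band
variance_img = ndimage.generic_filter(band, window_variance, size=5)
skewness_img = ndimage.generic_filter(band, window_skewness, size=5)
kurtosis_img = ndimage.generic_filter(band, window_kurtosis, size=5)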
Radiometric Correction: Radar Imagery
The raw radar image frequently contains radiometric errors due to:
• imperfections in the transmit and receive pattern of the radar antenna
• the inherently stronger signal from a near range (closest to the sensor flight path) than a far
range (farthest from the sensor flight path) target
Many imaging radar systems use a single antenna that transmits the coherent radar burst and
receives the return echo. However, no antenna is perfect; it may have various lobes, dead spots,
and imperfections. This causes the received signal to be slightly distorted radiometrically. In
addition, range fall-off causes far range targets to be darker (less return signal).
These two problems can be addressed by adjusting the average brightness of each range line to
a constant—usually the average overall scene brightness (Chavez and Berlin, 1986). This
requires that each line of constant range be long enough to reasonably approximate the overall
scene brightness (see Figure 6-44). This approach is generic; it is not specific to any particular
radar sensor.
The Adjust Brightness function in ERDAS IMAGINE works by correcting each range line
average. For this to be a valid approach, the number of data values must be large enough
to provide good average values. Be careful not to use too small an image. This depends
upon the character of the scene itself.
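The approach reads, in outline, like the following numpy sketch, which assumes that lines of constant range run down the columns of the array; it illustrates the idea and is not the ERDAS IMAGINE Adjust Brightness function.

import numpy as np

radar = np.random.rand(512, 512) * 100.0                 # stand-in for a radar image
overall_average = radar.mean()                           # overall scene brightness
line_averages = radar.mean(axis=0)                       # average of each range line (column)
calibration = overall_average / line_averages            # calibration coefficient per line
adjusted = radar * calibration[np.newaxis, :]            # scale each line to the scene average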
(Figure 6-44: each range line is scaled by a calibration coefficient equal to the overall scene average divided by ax, the average brightness of line x.)
• Lines of constant range—lines that are parallel to the flight of the sensor
Because radiometric errors are a function of the imaging geometry, the image must be correctly
oriented during the correction process. For the algorithm to correctly address the data set, you
must tell ERDAS IMAGINE whether the lines of constant range are in columns or rows in the
displayed image.
Figure 6-45 shows the lines of constant range in columns, parallel to the sides of the display
screen:
(Figure 6-45: lines of constant range displayed as image columns, with the range direction running across them.)
Slant-to-Ground Range Correction
Radar images also require slant-to-ground range correction, which is similar in concept to orthocorrecting a VIS/IR image. By design, an imaging radar is always side-looking. In practice, the depression angle is usually 75° at most. In operation, the radar sensor determines the range (distance to) each target, as shown in Figure 6-46.
(Figure 6-46: slant range (Dists) versus ground range (Distg) imaging geometry, with depression angle θ.)
Where:
Dists = slant range distance
Distg = ground range distance
cos θ = Dists / Distg
This has the effect of compressing the near range areas more than the far range areas. For many
applications, this may not be important. However, to geocode the scene or to register radar to
infrared or visible imagery, the scene must be corrected to a ground range format. To do this,
the following parameters relating to the imaging geometry are needed:
• Depression angle (θ)—angular distance between sensor horizon and scene center
• Sensor height (H)—elevation of sensor (in meters) above its nadir point
• Beam width—angular distance between near range and far range for entire scene
This information is usually found in the header file of data. Use the Data View option to
view this information. If it is not contained in the header file, you must obtain this
information from the data supplier.
Once the scene is range-format corrected, pixel size can be changed for coregistration with other
data sets.
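As a rough sketch of the correction, the code below converts one slant-range line to ground range assuming flat terrain and a known sensor height, then resamples it onto an even ground-range grid; the values and names are illustrative and do not reflect any particular sensor geometry.

import numpy as np

# Recover ground range from each slant-range sample (equivalent to applying the local
# relationship Dists = Distg × cos θ), then resample the line to even ground spacing.
def slant_to_ground_line(line, slant_near, slant_spacing, sensor_height, ground_spacing):
    slant = slant_near + slant_spacing * np.arange(line.size)    # slant range of each sample
    ground = np.sqrt(slant ** 2 - sensor_height ** 2)            # ground range of each sample
    ground_grid = np.arange(ground[0], ground[-1], ground_spacing)
    return np.interp(ground_grid, ground, line)

line = np.random.rand(1000)                                      # stand-in for one image line
corrected = slant_to_ground_line(line, slant_near=12000.0, slant_spacing=10.0,
                                 sensor_height=8000.0, ground_spacing=10.0)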
Merging Radar with VIS/IR Imagery
As mentioned earlier, the phenomena involved in radar imaging are quite different from those in VIS/IR imaging. Because these two sensor types give different information about the same target (chemical vs. physical), they are complementary data sets. If the two images are correctly combined, the resultant image conveys both chemical and physical information and could prove more useful than either image alone.
The methods for merging radar and VIS/IR data are still experimental and open for exploration.
The following methods are suggested for experimentation:
• Codisplaying in a Viewer
• RGB to IHS transforms
• Principal components transform
• Multiplicative
Codisplaying
The simplest and most frequently used method of combining radar with VIS/IR imagery is
codisplaying on an RGB color monitor. In this technique, the radar image is displayed with one
(typically the red) gun, while the green and blue guns display VIS/IR bands or band ratios. This
technique follows from no logical model and does not truly merge the two data sets.
Use the Viewer with the Clear Display option disabled for this type of merge. Select the
color guns to display the different layers.
Multiplicative
A final method to consider is the multiplicative technique. This requires several chromatic
components and a multiplicative component, which is assigned to the image intensity. In
practice, the chromatic components are usually band ratios or PCs; the radar image is input
multiplicatively as intensity (Croft (Holcomb), 1993).
The two sensor merge models using transforms to integrate the two data sets (PC and RGB to
IHS) are based on the assumption that the radar intensity correlates with the intensity that the
transform derives from the data inputs. However, the logic of mathematically merging radar
with VIS/IR data sets is inherently different from the logic of the SPOT/TM merges (as
discussed in “Resolution Merge”). It cannot be assumed that the radar intensity is a surrogate
for, or equivalent to, the VIS/IR intensity. The acceptability of this assumption depends on the
specific case.
For example, Landsat TM imagery is often used to aid in mineral exploration. A common
display for this purpose is RGB = TM5/TM7, TM5/TM4, TM3/TM1; the logic being that if all
three ratios are high, the sites suited for mineral exploration are bright overall. If the target area
is accompanied by silicification, which results in an area of dense angular rock, this should be
the case. However, if the alteration zone is basaltic rock altered to kaolinite/alunite, then the radar
return could be weaker than the surrounding rock. In this case, radar would not correlate with
high 5/7, 5/4, 3/1 intensity and the substitution would not produce the desired results (Croft
(Holcomb), 1993).
Chapter 7
Classification
Introduction
Multispectral classification is the process of sorting pixels into a finite number of individual classes, or categories of data, based on their data file values. If a pixel satisfies a certain set of criteria, the pixel is assigned to the class that corresponds to those criteria. This process is also referred to as image segmentation.
Depending on the type of information you want to extract from the original data, classes may
be associated with known features on the ground or may simply represent areas that look
different to the computer. An example of a classified image is a land cover map, showing
vegetation, bare land, pasture, urban, etc.
The Classification Process
Pattern Recognition
Pattern recognition is the science—and art—of finding meaningful patterns in data, which can be extracted through classification. By spatially and spectrally enhancing an image, pattern recognition can be performed with the human eye; the human brain automatically sorts certain textures and colors into categories.
In a computer system, spectral pattern recognition can be more scientific. Statistics are derived
from the spectral characteristics of all pixels in an image. Then, the pixels are sorted based on
mathematical criteria. The classification process breaks down into two parts: training and
classifying (using a decision rule).
Training
First, the computer system must be trained to recognize patterns in the data. Training is the process of defining the criteria by which these patterns are recognized (Hord, 1982). Training can be performed with either a supervised or an unsupervised method, as explained below.
Supervised Training
Supervised training is closely controlled by the analyst. In this process, you select pixels that
represent patterns or land cover features that you recognize, or that you can identify with help
from other sources, such as aerial photos, ground truth data, or maps. Knowledge of the data,
and of the classes desired, is required before classification.
By identifying patterns, you can instruct the computer system to identify pixels with similar
characteristics. If the classification is accurate, the resulting classes represent the categories
within the data that you originally identified.
Unsupervised Training
Unsupervised training is more computer-automated. It enables you to specify some parameters
that the computer uses to uncover statistical patterns that are inherent in the data. These patterns
do not necessarily correspond to directly meaningful characteristics of the scene, such as
contiguous, easily recognized areas of a particular soil type or land use. They are simply clusters
of pixels with similar spectral characteristics. In some cases, it may be more important to
identify groups of pixels with similar spectral characteristics than it is to sort pixels into
recognizable categories.
Unsupervised training is dependent upon the data itself for the definition of classes. This method
is usually used when less is known about the data before classification. It is then the analyst’s
responsibility, after classification, to attach meaning to the resulting classes (Jensen, 1996).
Unsupervised classification is useful only if the classes can be appropriately interpreted.
Signatures
The result of training is a set of signatures that defines a training sample or cluster. Each signature corresponds to a class, and is used with a decision rule (explained below) to assign the pixels in the image file to a class. Signatures in ERDAS IMAGINE can be parametric or nonparametric.
A parametric signature is based on statistical parameters (e.g., mean and covariance matrix) of
the pixels that are in the training sample or cluster. Supervised and unsupervised training can
generate parametric signatures. A set of parametric signatures can be used to train a statistically-
based classifier (e.g., maximum likelihood) to define the classes.
A nonparametric signature is not based on statistics, but on discrete objects (polygons or
rectangles) in a feature space image. These feature space objects are used to define the
boundaries for the classes. A nonparametric classifier uses a set of nonparametric signatures to
assign pixels to a class based on their location either inside or outside the area in the feature
space image. Supervised training is used to generate nonparametric signatures (Kloer, 1994).
ERDAS IMAGINE enables you to generate statistics for a nonparametric signature. This
function allows a feature space object to be used to create a parametric signature from the image
being classified. However, since a parametric classifier requires a normal distribution of data,
the only feature space object for which this would be mathematically valid would be an ellipse
(Kloer, 1994).
When both parametric and nonparametric signatures are used to classify an image, you are more
able to analyze and visualize the class definitions than either type of signature provides
independently (Kloer, 1994).
See Appendix A “Math Topics” for information on feature space images and how they are
created.
Decision Rule
After the signatures are defined, the pixels of the image are sorted into classes based on the signatures by use of a classification decision rule. The decision rule is a mathematical algorithm that, using data contained in the signature, performs the actual sorting of pixels into distinct class values.
Classification Tips
Classification Scheme
Usually, classification is performed with a set of target classes in mind. Such a set is called a classification scheme (or classification system). The purpose of such a scheme is to provide a framework for organizing and categorizing the information that can be extracted from the data (Jensen et al, 1983). The proper classification scheme includes classes that are both important to the study and discernible from the data on hand. Most schemes have a hierarchical structure, which can describe a study area in several levels of detail.
A number of classification schemes have been developed by specialists who have inventoried a
geographic region. Some references for professionally-developed schemes are listed below:
• Anderson, J.R., et al. 1976. “A Land Use and Land Cover Classification System for Use
with Remote Sensor Data.” U.S. Geological Survey Professional Paper 964.
• Cowardin, Lewis M., et al. 1979. Classification of Wetlands and Deepwater Habitats of the
United States. Washington, D.C.: U.S. Fish and Wildlife Service.
• Florida Topographic Bureau, Thematic Mapping Section. 1985. Florida Land Use, Cover
and Forms Classification System. Florida Department of Transportation, Procedure No.
550-010-001-a.
• Michigan Land Use Classification and Reference Committee. 1975. Michigan Land
Cover/Use Classification System. Lansing, Michigan: State of Michigan Office of Land
Use.
Other states or government agencies may also have specialized land use/cover studies.
It is recommended that you begin the classification process by defining a classification scheme for the application, using previously developed schemes, such as those above, as a general framework.
Iterative Classification
A process is iterative when it repeats an action. The objective of the ERDAS IMAGINE system is to enable you to iteratively create and refine signatures and classified image files to arrive at a desired final classification. The ERDAS IMAGINE classification utilities are tools to be used as needed, not a numbered list of steps that must always be followed in order.
The total classification can be achieved with either the supervised or unsupervised methods, or
a combination of both. Some examples are below:
• Signatures created from both supervised and unsupervised training can be merged and
appended together.
• Signature evaluation tools can be used to indicate which signatures are spectrally similar.
This helps to determine which signatures should be merged or deleted. These tools also
help define optimum band combinations for classification. Using the optimum band
combination may reduce the time required to run a classification process.
Supervised vs. Unsupervised Training
In supervised training, it is important to have a set of desired classes in mind, and then create the appropriate signatures from the data. You must also have some way of recognizing pixels that represent the classes that you want to extract.
Supervised classification is usually appropriate when you want to identify relatively few
classes, when you have selected training sites that can be verified with ground truth data, or
when you can identify distinct, homogeneous regions that represent each class.
On the other hand, if you want the classes to be determined by spectral distinctions that are
inherent in the data so that you can define the classes later, then the application is better suited
to unsupervised training. Unsupervised training enables you to define many classes easily, and
identify classes that are not in contiguous, easily recognized regions.
NOTE: Supervised classification also includes using a set of classes that is generated from an
unsupervised classification. Using a combination of supervised and unsupervised classification
may yield optimum results, especially with large data sets (e.g., multiple Landsat scenes). For
example, unsupervised classification may be useful for generating a basic set of classes, then
supervised classification can be used for further definition of the classes.
Classifying Enhanced Data
For many specialized applications, classifying data that have been merged, spectrally merged or enhanced—with principal components, image algebra, or other transformations—can produce very specific and meaningful results. However, unless you understand the data and the enhancements used, it is recommended that only the original, remotely-sensed data be classified.
Dimensionality
Dimensionality refers to the number of layers being classified. For example, a data file with 3 layers is said to be 3-dimensional, since 3-dimensional feature space is plotted to analyze the data.
Adding Dimensions
Using programs in ERDAS IMAGINE, you can add layers to existing image files. Therefore,
you can incorporate data (called ancillary data) other than remotely-sensed data into the
classification. Using ancillary data enables you to incorporate variables into the classification
from, for example, vector layers, previously classified data, or elevation data. The data file
values of the ancillary data become an additional feature of each pixel, thus influencing the
classification (Jensen, 1996).
Limiting Dimensions
Although ERDAS IMAGINE allows an unlimited number of layers of data to be used for one
classification, it is usually wise to reduce the dimensionality of the data as much as possible.
Often, certain layers of data are redundant or extraneous to the task at hand. Unnecessary data
take up valuable disk space, and cause the computer system to perform more arduous
calculations, which slows down processing.
Use the Signature Editor to evaluate separability to calculate the best subset of layer
combinations. Use the Image Interpreter functions to merge or subset layers. Use the
Image Information tool (on the Viewer’s tool bar) to delete a layer(s).
Supervised Training
Supervised training requires a priori (already known) information about the data, such as:
• What type of classes need to be extracted? Soil type? Land use? Vegetation?
• What classes are most likely to be present in the data? That is, which types of land cover,
soil, or vegetation (or whatever) are represented by the data?
In supervised training, you rely on your own pattern recognition skills and a priori knowledge
of the data to help the system determine the statistical criteria (signatures) for data classification.
To select reliable samples, you should know some information—either spatial or spectral—
about the pixels that you want to classify.
The location of a specific characteristic, such as a land cover type, may be known through
ground truthing. Ground truthing refers to the acquisition of knowledge about the study area
from field work, analysis of aerial photography, personal experience, etc. Ground truth data are
considered to be the most accurate (true) data available about the area of study. They should be
collected at the same time as the remotely sensed data, so that the data correspond as much as
possible (Star and Estes, 1990). However, some ground data may not be very accurate due to a
number of errors and inaccuracies.
Training Samples and Feature Space Objects
Training samples (also called samples) are sets of pixels that represent what is recognized as a discernible pattern, or potential class. The system calculates statistics from the sample pixels to create a parametric signature for the class.
The following terms are sometimes used interchangeably in reference to training samples. For
clarity, they are used in this documentation as follows:
• Training sample, or sample, is a set of pixels selected to represent a potential class. The data
file values for these pixels are used to generate a parametric signature.
• Training field, or training site, is the geographical AOI in the image represented by the
pixels in a sample. Usually, it is previously identified with the use of ground truth data.
Feature space objects are user-defined AOIs in a feature space image. The feature space
signature is based on these objects.
Selecting Training Samples
It is important that training samples be representative of the class that you are trying to identify. This does not necessarily mean that they must contain a large number of pixels or be dispersed across a wide region of the data. The selection of training samples depends largely upon your knowledge of the data, of the study area, and of the classes that you want to extract.
ERDAS IMAGINE enables you to identify training samples using one or more of the following
methods:
• identifying a training sample of contiguous pixels within a certain area, with or without
similar spectral characteristics
• using a class from a thematic raster layer from an image file of the same area (i.e., the result
of an unsupervised classification)
Digitized Polygon
Training samples can be identified by their geographical location (training sites, using maps,
ground truth data). The locations of the training sites can be digitized from maps with the
ERDAS IMAGINE Vector or AOI tools. Polygons representing these areas are then stored as
vector layers. The vector layers can then be used as input to the AOI tools and used as training
samples to create signatures.
Use the Vector and AOI tools to digitize training samples from a map. Use the Signature
Editor to create signatures from training samples that are identified with digitized
polygons.
User-defined Polygon
Using your pattern recognition skills (with or without supplemental ground truth information),
you can identify samples by examining a displayed image of the data and drawing a polygon
around the training site(s) of interest. For example, if it is known that oak trees reflect certain
frequencies of green and infrared light according to ground truth data, you may be able to base
your sample selections on the data (taking atmospheric conditions, sun angle, time, date, and
other variations into account). The area within the polygon(s) would be used to create a
signature.
Use the AOI tools to define the polygon(s) to be used as the training sample. Use the
Signature Editor to create signatures from training samples that are identified with the
polygons.
Select the Seed Properties option in the Viewer to identify training samples with a seed
pixel.
Vector layers (polygons or lines) can be displayed as the top layer in the Viewer, and the
boundaries can then be used as an AOI for training samples defined under Seed
Properties.
NOTE: The thematic raster layer must have the same coordinate system as the image file being
classified.
Evaluating Training Samples
Selecting training samples is often an iterative process. To generate signatures that accurately represent the classes to be identified, you may have to repeatedly select training samples,
evaluate the signatures that are generated from the samples, and then either take new samples
or manipulate the signatures as necessary. Signature manipulation may involve merging,
deleting, or appending from one file to another. It is also possible to perform a classification
using the known signatures, then mask out areas that are not classified to use in gathering more
signatures.
See “Evaluating Signatures” for methods of determining the accuracy of the signatures
created from your training samples.
Selecting Feature Space Objects
The ERDAS IMAGINE Feature Space tools enable you to interactively define feature space objects (AOIs) in the feature space image(s). A feature space image is simply a graph of the data
file values of one band of data against the values of another band (often called a scatterplot). In
ERDAS IMAGINE, a feature space image has the same data structure as a raster image;
therefore, feature space images can be used with other ERDAS IMAGINE utilities, including
zoom, color level slicing, virtual roam, Spatial Modeler, and Map Composer.
The transformation of a multilayer raster image into a feature space image is done by mapping
the input pixel values to a position in the feature space image. This transformation defines only
the pixel position in the feature space image. It does not define the pixel’s value.
The pixel values in the feature space image can be the accumulated frequency, which is
calculated when the feature space image is defined. The pixel values can also be provided by a
thematic raster layer of the same geometry as the source multilayer image. Mapping a thematic
layer into a feature space image can be useful for evaluating the validity of the parametric and
nonparametric decision boundaries of a classification (Kloer, 1994).
When you display a feature space image file (.fsp.img) in a Viewer, the colors reflect the
density of points for both bands. The bright tones represent a high density and the dark
tones represent a low density.
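For illustration, a feature space image for two 8-bit bands can be approximated as a two-dimensional histogram, with the accumulated frequency as the pixel value; this stand-in sketch does not produce an ERDAS IMAGINE .fsp.img file.

import numpy as np

band1 = np.random.randint(0, 256, size=(512, 512))      # stand-ins for two 8-bit bands
band2 = np.random.randint(0, 256, size=(512, 512))

# Each input pixel maps to position (band 1 value, band 2 value); the feature space pixel
# value is the number of image pixels that landed there (accumulated frequency).
feature_space, _, _ = np.histogram2d(band1.ravel(), band2.ravel(),
                                     bins=256, range=[[0, 256], [0, 256]])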
(Figure: create a feature space image from the image file being classified, layer 1 vs. layer 2.)
Use the Feature Space tools in the Signature Editor to create a feature space image and
mask the signature. Use the AOI tools to draw polygons.
Advantages:
• Provide an accurate way to classify a class with a nonnormal distribution (e.g., residential and urban).
• Certain features may be more visually identifiable in a feature space image.
• The classification decision process is fast.

Disadvantages:
• The classification decision process allows overlap and unclassified pixels.
• The feature space image may be difficult to interpret.
Unsupervised Training
Unsupervised training requires only minimal initial input from you. However, you have the task of interpreting the classes that are created by the unsupervised training algorithm.
Unsupervised training is also called clustering, because it is based on the natural groupings of
pixels in image data when they are plotted in feature space. According to the specified
parameters, these groups can later be merged, disregarded, otherwise manipulated, or used as
the basis of a signature.
Clusters
Clusters are defined with a clustering algorithm, which often uses all or many of the pixels in
the input data file for its analysis. The clustering algorithm has no regard for the contiguity of
the pixels that define each cluster.
• The Iterative Self-Organizing Data Analysis Technique (ISODATA) (Tou and Gonzalez,
1974) clustering method uses spectral distance as in the sequential method, but iteratively
classifies the pixels, redefines the criteria for each class, and classifies again, so that the
spectral distance patterns in the data gradually emerge.
• The RGB clustering method is more specialized than the ISODATA method. It applies to
three-band, 8-bit data. RGB clustering plots pixels in three-dimensional feature space, and
divides that space into sections that are used to define clusters.
Each of these methods is explained below, along with its advantages and disadvantages.
Some of the statistics terms used in this section are explained in Appendix A “Math
Topics”.
ISODATA Clustering
ISODATA is iterative in that it repeatedly performs an entire classification (outputting a thematic raster layer) and recalculates statistics. Self-Organizing refers to the way in which it locates clusters with minimum user input.
The ISODATA method uses minimum spectral distance to assign a cluster for each candidate
pixel. The process begins with a specified number of arbitrary cluster means or the means of
existing signatures, and then it processes repetitively, so that those means shift to the means of
the clusters in the data.
Because the ISODATA method is iterative, it is not biased to the top of the data file, as are the
one-pass clustering algorithms.
Use the Unsupervised Classification utility in the Signature Editor to perform ISODATA
clustering.
To perform ISODATA clustering, you specify parameters such as:
• N - the maximum number of clusters to be considered. Since each cluster is the basis for a
class, this number becomes the maximum number of classes to be formed. The ISODATA
process begins by determining N arbitrary cluster means. Some clusters with too few pixels
can be eliminated, leaving less than N clusters.
• T - a convergence threshold, which is the maximum percentage of pixels whose class values
are allowed to be unchanged between iterations.
(Figure: arbitrary initial cluster means plotted in Band A versus Band B feature space (data file values), spanning the range from µ – σ to µ + σ in each band.)
Pixel Analysis
Pixels are analyzed beginning with the upper left corner of the image and going left to right,
block by block.
The spectral distance between the candidate pixel and each cluster mean is calculated. The pixel
is assigned to the cluster whose mean is the closest. The ISODATA function creates an output
image file with a thematic raster layer and/or a signature file (.sig) as a result of the clustering.
At the end of each iteration, an image file exists that shows the assignments of the pixels to the
clusters.
Considering the regular, arbitrary assignment of the initial cluster means, the first iteration of
the ISODATA algorithm always gives results similar to those in Figure 7-4.
(Figure 7-4: pixel assignments to clusters in Band A versus Band B feature space (data file values) after the first ISODATA iteration.)
For the second iteration, the means of all clusters are recalculated, causing them to shift in
feature space. The entire process is repeated—each candidate pixel is compared to the new
cluster means and assigned to the closest cluster mean.
Percentage Unchanged
After each iteration, the normalized percentage of pixels whose assignments are unchanged
since the last iteration is displayed in the dialog. When this number reaches T (the convergence
threshold), the program terminates.
It is possible for the percentage of unchanged pixels to never converge or reach T (the
convergence threshold). Therefore, it may be beneficial to monitor the percentage, or specify a
reasonable maximum number of iterations, M, so that the program does not run indefinitely.
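The assign-and-recompute loop described above can be sketched compactly as follows. This illustration omits ISODATA's cluster elimination and other refinements; the starting means, parameter values, and names are assumptions for the example only.

import numpy as np

# Assign each pixel to the nearest cluster mean, recompute the means, and stop when the
# fraction of unchanged pixels reaches the convergence threshold T or after M iterations.
def isodata_sketch(pixels, n_clusters=6, T=0.95, M=20):
    lo, hi = pixels.min(axis=0), pixels.max(axis=0)
    means = lo + (hi - lo) * np.linspace(0, 1, n_clusters)[:, None]   # arbitrary starting means
    labels = np.full(len(pixels), -1)
    for _ in range(M):
        dists = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
        new_labels = dists.argmin(axis=1)                 # minimum spectral distance assignment
        unchanged = np.mean(new_labels == labels)         # fraction of pixels that did not move
        labels = new_labels
        for k in range(n_clusters):                       # shift each mean to its cluster's mean
            if np.any(labels == k):
                means[k] = pixels[labels == k].mean(axis=0)
        if unchanged >= T:
            break
    return labels, means

data = np.random.rand(5000, 3)                            # 5000 pixels, 3 bands
labels, means = isodata_sketch(data)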
Advantages:
• Because it is iterative, clustering is not geographically biased to the top or bottom pixels of the data file.
• This algorithm is highly successful at finding the spectral clusters that are inherent in the data. It does not matter where the initial arbitrary cluster means are located, as long as enough iterations are allowed.
• A preliminary thematic raster layer is created, which gives results similar to using a minimum distance classifier (as explained below) on the signatures that are created. This thematic raster layer can be used for analyzing and manipulating the signatures before actual classification takes place.

Disadvantages:
• The clustering process is time-consuming, because it can repeat many times.
• Does not account for pixel spatial homogeneity.
The resulting bands are noncorrelated and independent. You may find these bands more
interpretable than the source data. PCA can be performed on up to 256 bands with ERDAS
IMAGINE. As a type of spectral enhancement, you are required to specify the number of
components you want output from the original data.
Use the Merge and Delete options in the Signature Editor to manipulate signatures.
Use the Unsupervised Classification utility in the Signature Editor to perform ISODATA
clustering, generate signatures, and classify the resulting signatures.
RGB Clustering
The RGB Clustering and Advanced RGB Clustering functions in Image Interpreter create
a thematic raster layer. However, no signature file is created and no other classification
decision rule is used. In practice, RGB Clustering differs greatly from the other clustering
methods, but it does employ a clustering algorithm.
RGB clustering is a simple classification and data compression technique for three bands of
data. It is a fast and simple algorithm that quickly compresses a three-band image into a single
band pseudocolor image, without necessarily classifying any particular features.
The algorithm plots all pixels in 3-dimensional feature space and then partitions this space into
clusters on a grid. In the more simplistic version of this function, each of these clusters becomes
a class in the output thematic raster layer.
The advanced version requires that a minimum threshold on the clusters be set so that only
clusters at least as large as the threshold become output classes. This allows for more color
variation in the output file. Pixels that do not fall into any of the remaining clusters are assigned
to the cluster with the smallest city-block distance from the pixel. In this case, the city-block
distance is calculated as the sum of the distances in the red, green, and blue directions in 3-
dimensional space.
Along each axis of the three-dimensional scatterplot, each input histogram is scaled so that the
partitions divide the histograms between specified limits—either a specified number of standard
deviations above and below the mean, or between the minimum and maximum data values for
each band.
The default number of divisions per band is listed below:
(Figure 7-6: frequency histograms of the R, G, and B bands, each divided into sections by the partitioning; for example, one cluster could contain all pixels with values between 16 and 34 in red, between 35 and 55 in green, and between 0 and 16 in blue.)
Partitioning Parameters
It is necessary to specify the number of R, G, and B sections in each dimension of the
3-dimensional scatterplot. The number of sections should vary according to the histograms of
each band. Broad histograms should be divided into more sections, and narrow histograms
should be divided into fewer sections (see Figure 7-6).
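A toy sketch of the simple version follows: each band is scaled between its minimum and maximum, divided into a fixed number of sections, and the three section indices are combined into a single class value. The section counts echo the starting values suggested in the Tips below, the advanced version's minimum-threshold step is omitted, and all names are illustrative.

import numpy as np

def rgb_cluster_sketch(red, green, blue, r_sections=7, g_sections=6, b_sections=6):
    def section_index(band, sections):
        lo, hi = float(band.min()), float(band.max())
        span = (hi - lo) or 1.0                              # avoid division by zero
        scaled = (band - lo) / span                          # scale band values to 0..1
        return np.minimum((scaled * sections).astype(int), sections - 1)
    r_idx = section_index(red, r_sections)
    g_idx = section_index(green, g_sections)
    b_idx = section_index(blue, b_sections)
    # One output class per cell of the r x g x b grid (7 x 6 x 6 = 252 classes here)
    return r_idx * (g_sections * b_sections) + g_idx * b_sections + b_idx

r = np.random.randint(0, 256, (256, 256))                    # stand-ins for three 8-bit bands
g = np.random.randint(0, 256, (256, 256))
b = np.random.randint(0, 256, (256, 256))
classes = rgb_cluster_sketch(r, g, b)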
Advantages:
• The fastest classification method. It is designed to provide a fast, simple classification for applications that do not require specific classes.
• Not biased to the top or bottom of the data file. The order in which the pixels are examined does not influence the outcome.
• (Advanced version only) A highly interactive function, allowing an iterative adjustment of the parameters until the number of clusters and the thresholds are satisfactory for analysis.

Disadvantages:
• Exactly three bands must be input, which is not suitable for all applications.
• Does not always create thematic classes that can be analyzed for informational purposes.
Tips
Some starting values that usually produce good results with the simple RGB clustering are:
R = 7
G = 6
B = 6
which results in 7 × 6 × 6 = 252 classes.
To decrease the number of output colors/classes or to darken the output, decrease these values.
For the Advanced RGB clustering function, start with higher values for R, G, and B. Adjust by
raising the threshold parameter and/or decreasing the R, G, and B parameter values until the
desired number of output classes is obtained.
Signature Files
A signature is a set of data that defines a training sample, feature space object (AOI), or cluster. The signature is used in a classification process. Each classification decision rule (algorithm) requires some signature attributes as input—these are stored in the signature file (.sig). Signatures in ERDAS IMAGINE can be parametric or nonparametric.
The following attributes are standard for all signatures (parametric and nonparametric):
• name—identifies the signature and is used as the class name in the output thematic raster
layer. The default signature name is Class <number>.
• color—the color for the signature and the color for the class in the output thematic raster
layer. This color is also used with other signature visualization functions, such as alarms,
masking, ellipses, etc.
• value—the output class value for the signature. The output class value does not necessarily
need to be the class number of the signature. This value should be a positive integer.
• order—the order to process the signatures for order-dependent processes, such as signature
alarms and parallelepiped classifications.
Parametric Signature
A parametric signature is based on statistical parameters (e.g., mean and covariance matrix) of
the pixels that are in the training sample or cluster. A parametric signature includes the
following attributes in addition to the standard attributes for signatures:
• the number of bands in the input image (as processed by the training program)
• the minimum and maximum data file value in each band for each sample or cluster
(minimum vector and maximum vector)
• the mean data file value in each band for each sample or cluster (mean vector)
Nonparametric Signature
A nonparametric signature is based on an AOI that you define in the feature space image for the
image file being classified. A nonparametric classifier uses a set of nonparametric signatures to
assign pixels to a class based on their location, either inside or outside the area in the feature
space image.
The format of the .sig file is described in the On-Line Help. Information on these statistics
can be found in Appendix A “Math Topics”.
Evaluating Signatures
Once signatures are created, they can be evaluated, deleted, renamed, and merged with signatures from other files. Merging signatures enables you to perform complex classifications with signatures that are derived from more than one training method (supervised and/or unsupervised, parametric and/or nonparametric).
Use the Signature Editor to view the contents of each signature, manipulate signatures,
and perform your own mathematical tests on the statistics.
The following methods are available for evaluating signatures:
• Alarm—using your own pattern recognition ability, you view the estimated classified area
for a signature (using the parallelepiped decision rule) against a display of the original
image.
• Ellipse—view ellipse diagrams and scatterplots of data file values for every pair of bands.
NOTE: If the signature is nonparametric (i.e., a feature space signature), you can use only the
alarm evaluation method.
After analyzing the signatures, it would be beneficial to merge or delete them, eliminate
redundant bands from the data, add new bands of data, or perform any other operations to
improve the classification.
Alarm
The alarm evaluation enables you to compare an estimated classification of one or more signatures against the original data, as it appears in the Viewer. According to the parallelepiped decision rule, the pixels that fit the classification criteria are highlighted in the displayed image. You also have the option to indicate an overlap by having it appear in a different color.
With this test, you can use your own pattern recognition skills, or some ground truth data, to
determine the accuracy of a signature.
Use the Signature Alarm utility in the Signature Editor to perform n-dimensional alarms
on the image in the Viewer, using the parallelepiped decision rule. The alarm utility
creates a functional layer, and the Viewer allows you to toggle between the image layer
and the functional layer.
Ellipse In this evaluation, ellipses of concentration are calculated with the means and standard
deviations stored in the signature file. It is also possible to generate parallelepiped rectangles,
means, and labels.
In this evaluation, the mean and the standard deviation of every signature are used to represent
the ellipse in 2-dimensional feature space. The ellipse is displayed in a feature space image.
Ellipses are explained and illustrated in Appendix A “Math Topics” under the discussion
of Scatterplots.
When the ellipses in the feature space image show extensive overlap, then the spectral
characteristics of the pixels represented by the signatures cannot be distinguished in the two
bands that are graphed. In the best case, there is no overlap. Some overlap, however, is expected.
Figure 7-7 shows how ellipses are plotted and how they can overlap. The first graph shows how
the ellipses are plotted based on the range of 2 standard deviations from the mean. This range
can be altered, changing the ellipse plots. Analyzing the plots with differing numbers of
standard deviations is useful for determining the limits of a parallelepiped classification.
[Figure 7-7: ellipses of concentration for two signatures, plotted at ±2 standard deviations about their means for two band pairs (Band A vs. Band B, and Band C vs. Band D), with data file values on both axes; µA2 = mean in Band A for signature 2, µB2 = mean in Band B for signature 2, and so on.]
By analyzing the ellipse graphs for all band pairs, you can determine which signatures and
which bands provide accurate classification results.
Use the Signature Editor to create a feature space image and to view an ellipse(s) of
signature data.
Contingency Matrix NOTE: This evaluation classifies all of the pixels in the selected AOIs and compares the results
to the pixels of a training sample.
The pixels of each training sample are not always so homogeneous that every pixel in a sample
is actually classified to its corresponding class. Each sample pixel only weights the statistics that
determine the classes. However, if the signature statistics for each sample are distinct from those
of the other samples, then a high percentage of each sample’s pixels is classified as expected.
In this evaluation, a quick classification of the sample pixels is performed using the minimum
distance, maximum likelihood, or Mahalanobis distance decision rule. Then, a contingency
matrix is presented, which contains the number and percentages of pixels that are classified as
expected.
Separability Signature separability is a statistical measure of distance between two signatures. Separability
can be calculated for any combination of bands that is used in the classification, enabling you
to rule out any bands that are not useful in the results of the classification.
For the distance (Euclidean) evaluation, the spectral distance between the mean vectors of each
pair of signatures is computed. If the spectral distance between two samples is not significant
for any pair of bands, then they may not be distinct enough to produce a successful
classification.
The spectral distance is also the basis of the minimum distance classification (as explained
below). Therefore, computing the distances between signatures helps you predict the results of
a minimum distance classification.
Use the Signature Editor to compute signature separability and distance and
automatically generate the report.
The formulas used to calculate separability are related to the maximum likelihood decision rule.
Therefore, evaluating signature separability helps you predict the results of a maximum
likelihood classification. The maximum likelihood decision rule is explained below.
There are three options for calculating the separability. All of these formulas take into account
the covariances of the signatures in the bands being compared, as well as the mean vectors of
the signatures.
Refer to Appendix A “Math Topics” for information on the mean vector and covariance
matrix.
Divergence
The formula for computing Divergence (Dij) is as follows:
Dij = (1/2) tr((Ci - Cj)(Cj^-1 - Ci^-1)) + (1/2) tr((Ci^-1 + Cj^-1)(µi - µj)(µi - µj)^T)
Where:
i and j = the two signatures (classes) being compared
Ci = the covariance matrix of signature i
µi = the mean vector of signature i
tr = the trace function (matrix algebra)
T = the transposition function
Source: Swain and Davis, 1978
Transformed Divergence
The formula for computing Transformed Divergence (TD) is as follows:
Dij = (1/2) tr((Ci - Cj)(Cj^-1 - Ci^-1)) + (1/2) tr((Ci^-1 + Cj^-1)(µi - µj)(µi - µj)^T)

TDij = 2000 (1 - exp(-Dij / 8))
Where:
i and j = the two signatures (classes) being compared
Ci = the covariance matrix of signature i
µi = the mean vector of signature i
tr = the trace function (matrix algebra)
T = the transposition function
Source: Swain and Davis, 1978
Jeffries-Matusita Distance
The formula for computing Jeffries-Matusita Distance (JM) is as follows:
α = (1/8) (µi - µj)^T ((Ci + Cj)/2)^-1 (µi - µj) + (1/2) ln( |(Ci + Cj)/2| / sqrt(|Ci| × |Cj|) )

JMij = sqrt( 2 (1 - e^-α) )
Where:
i and j = the two signatures (classes) being compared
Ci = the covariance matrix of signature i
µi = the mean vector of signature i
ln = the natural logarithm function
|Ci | = the determinant of Ci (matrix algebra)
Source: Swain and Davis, 1978
According to Jensen, “The JM distance has a saturating behavior with increasing class
separation like transformed divergence. However, it is not as computationally efficient as
transformed divergence” (Jensen, 1996).
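As a minimal sketch (not the ERDAS IMAGINE implementation), the transformed divergence and Jeffries-Matusita formulas above can be evaluated with NumPy, assuming each signature is summarized by its mean vector and covariance matrix:

import numpy as np

def transformed_divergence(mu_i, mu_j, C_i, C_j):
    """Transformed divergence TDij (upper bound 2000) between two signatures."""
    Ci_inv, Cj_inv = np.linalg.inv(C_i), np.linalg.inv(C_j)
    d = mu_i - mu_j
    D = (0.5 * np.trace((C_i - C_j) @ (Cj_inv - Ci_inv))
         + 0.5 * np.trace((Ci_inv + Cj_inv) @ np.outer(d, d)))
    return 2000.0 * (1.0 - np.exp(-D / 8.0))

def jeffries_matusita(mu_i, mu_j, C_i, C_j):
    """Jeffries-Matusita distance JMij between two signatures."""
    d = mu_i - mu_j
    C_avg = (C_i + C_j) / 2.0
    alpha = (0.125 * d @ np.linalg.inv(C_avg) @ d
             + 0.5 * np.log(np.linalg.det(C_avg)
                            / np.sqrt(np.linalg.det(C_i) * np.linalg.det(C_j))))
    return np.sqrt(2.0 * (1.0 - np.exp(-alpha)))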
Both transformed divergence and Jeffries-Matusita distance have upper and lower bounds. If
the calculated divergence is equal to the appropriate upper bound, then the signatures can be
said to be totally separable in the bands being studied. A calculated divergence of zero means
that the signatures are inseparable.
A separability listing is a report of the computed divergence for every class pair and one band
combination. The listing contains every divergence value for the bands studied for every
possible pair of signatures.
The separability listing also contains the average divergence and the minimum divergence for
the band set. These numbers can be compared to other separability listings (for other band
combinations), to determine which set of bands is the most useful for classification.
Weight Factors
As with the Bayesian classifier (explained below with maximum likelihood), weight factors
may be specified for each signature. These weight factors are based on a priori probabilities that
any given pixel is assigned to each class. For example, if you know that twice as many pixels
should be assigned to Class A as to Class B, then Class A should receive a weight factor that is
twice that of Class B.
NOTE: The weight factors do not influence the divergence equations (for TD or JM), but they
do influence the report of the best average and best minimum separability.
The weight factors for each signature are used to compute a weighted divergence with the
following calculation:
Wij = [ Σ(i=1 to c-1) Σ(j=i+1 to c) fi fj Uij ] / [ (1/2) ( (Σ(i=1 to c) fi)^2 - Σ(i=1 to c) fi^2 ) ]
Where:
i and j = the two signatures (classes) being compared
Uij = the unweighted divergence between i and j
Wij = the weighted divergence between i and j
c = the number of signatures (classes)
fi = the weight factor for signature i
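A short sketch of the weighted divergence calculation above, assuming the pairwise (unweighted) divergences and the weight factors are already available:

import numpy as np

def weighted_divergence(U, f):
    """Weighted average divergence Wij over all signature pairs.

    U : (c, c) array of unweighted divergences, U[i, j] for classes i and j
    f : length-c sequence of weight factors, one per signature
    """
    f = np.asarray(f, dtype=float)
    c = len(f)
    num = sum(f[i] * f[j] * U[i, j]
              for i in range(c - 1) for j in range(i + 1, c))
    den = 0.5 * (np.sum(f) ** 2 - np.sum(f ** 2))
    return num / den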
Probability of Error
The Jeffries-Matusita distance is related to the pairwise probability of error, which is the
probability that a pixel assigned to class i is actually in class j. Within a range, this probability
can be estimated according to the expression below:
(1/16) (2 - JMij^2)^2 ≤ Pe ≤ 1 - (1/2) (1 + (1/2) JMij^2)
Where:
i and j = the signatures (classes) being compared
JMij = the Jeffries-Matusita distance between i and j
Pe = the probability that a pixel is misclassified from i to j
Source: Swain and Davis, 1978
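These bounds translate directly into code; a small sketch, using the JM distance as given above:

def error_probability_bounds(jm):
    """Lower and upper bounds on the pairwise probability of misclassification Pe."""
    lower = (2.0 - jm ** 2) ** 2 / 16.0
    upper = 1.0 - 0.5 * (1.0 + 0.5 * jm ** 2)
    return lower, upper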
Signature Manipulation In many cases, training must be repeated several times before the desired signatures are produced. Signatures can be gathered from different sources—different training samples,
feature space images, and different clustering programs—all using different techniques. After
each signature file is evaluated, you may merge, delete, or create new signatures. The desired
signatures can finally be moved to one signature file to be used in the classification.
The following operations upon signatures and signature files are possible with ERDAS
IMAGINE:
• View histograms of the samples or clusters that were used to derive the signatures
• Merge signatures together, so that they form one larger class when classified
• Append signatures from other files. You can combine signatures that are derived from
different training methods for use in one classification.
Use the Signature Editor to view statistics and histogram listings and to delete, merge,
append, and rename signatures within a signature file.
Classification Decision Rules Once a set of reliable signatures has been created and evaluated, the next step is to perform a classification of the data. Each pixel is analyzed independently. The measurement vector for
each pixel is compared to each signature, according to a decision rule, or algorithm. Pixels that
pass the criteria that are established by the decision rule are then assigned to the class for that
signature. ERDAS IMAGINE enables you to classify the data both parametrically with
statistical representation, and nonparametrically as objects in feature space. Figure 7-8 shows
the flow of an image pixel through the classification decision making process in ERDAS
IMAGINE (Kloer, 1994).
If a nonparametric rule is not set, then the pixel is classified using only the parametric rule. All
of the parametric signatures are tested. If a nonparametric rule is set, the pixel is tested against
all of the signatures with nonparametric definitions. The nonparametric test results in one of the following conditions:
• If the nonparametric test results in one unique class, the pixel is assigned to that class.
• If the nonparametric test results in zero classes (i.e., the pixel lies outside all the
nonparametric decision boundaries), then the unclassified rule is applied. With this rule, the
pixel is either classified by the parametric rule or left unclassified.
• If the pixel falls into more than one class as a result of the nonparametric test, the overlap
rule is applied. With this rule, the pixel is either classified by the parametric rule, processing
order, or left unclassified.
Nonparametric Rules ERDAS IMAGINE provides these decision rules for nonparametric signatures:
• parallelepiped
• feature space
Unclassified Options
ERDAS IMAGINE provides these options if the pixel is not classified by the nonparametric
rule:
• parametric rule
• unclassified
Overlap Options
ERDAS IMAGINE provides these options if the pixel falls into more than one feature space
object:
• parametric rule
• by order
• unclassified
Parametric Rules ERDAS IMAGINE provides these commonly used decision rules for parametric signatures:
• minimum distance
• Mahalanobis distance
• maximum likelihood (with the Bayesian variation)
[Figure 7-8: Classification Flow Diagram—a candidate pixel is first tested with the nonparametric rule, if one is set. If it falls within exactly one nonparametric signature, it is assigned to that class; if it falls within none, the unclassified option (parametric rule or unclassified) is applied; if it falls within more than one, the overlap option (parametric rule, by order, or unclassified) is applied.]
Parallelepiped In the parallelepiped decision rule, the data file values of the candidate pixel are compared to
upper and lower limits. These limits can be either:
• the minimum and maximum data file values of each band in the signature,
• the mean of each band, plus and minus a number of standard deviations, or
• any limits that you specify, based on your knowledge of the data and signatures. This
knowledge may come from the signature evaluation techniques discussed above.
These limits can be set using the Parallelepiped Limits utility in the Signature Editor.
There are high and low limits for every signature in every band. When a pixel’s data file values
are between the limits for every band in a signature, then the pixel is assigned to that signature’s
class. Figure 7-9 is a two-dimensional example of a parallelepiped classification.
[Figure 7-9: two-dimensional example of a parallelepiped classification plotted against Band A and Band B data file values, with limits at the class means ± 2 standard deviations; ● = pixels in class 1, ▲ = pixels in class 2, ◆ = pixels in class 3, ? = unclassified pixels.]
The large rectangles in Figure 7-9 are called parallelepipeds. They are the regions within the
limits for each signature.
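A minimal sketch of the parallelepiped test, assuming the limits are the per-band minimum and maximum of each signature (the first option listed above); handling of the overlap and unclassified options is left out:

import numpy as np

def parallelepiped_candidates(pixel, low, high):
    """Return the indices of all signatures whose limits contain the pixel.

    pixel     : (n_bands,) measurement vector of the candidate pixel
    low, high : (n_classes, n_bands) lower and upper limits per signature
    An empty result means unclassified; more than one index means overlap.
    """
    inside = np.all((pixel >= low) & (pixel <= high), axis=1)
    return np.nonzero(inside)[0]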
Overlap Region
In cases where a pixel may fall into the overlap region of two or more parallelepipeds, you must
define how the pixel can be classified.
• The pixel can be classified by the order of the signatures. If one of the signatures is first and
the other signature is fourth, the pixel is assigned to the first signature’s class. This order
can be set in the Signature Editor.
• The pixel can be classified by the defined parametric decision rule. The pixel is tested
against the overlapping signatures only. If neither of these signatures is parametric, then the
pixel is left unclassified. If only one of the signatures is parametric, then the pixel is
automatically assigned to that signature’s class.
• The pixel can be classified by the defined parametric decision rule. The pixel is tested
against all of the parametric signatures. If none of the signatures is parametric, then the
pixel is left unclassified.
Advantages:
• Fast and simple, since the data file values are compared to limits that remain constant for each band in each signature.
• Often useful for a first-pass, broad classification; this decision rule quickly narrows down the number of possible classes to which each pixel can be assigned before the more time-consuming calculations (e.g., minimum distance, Mahalanobis distance, or maximum likelihood) are made, thus cutting processing time.
• Not dependent on normal distributions.
Disadvantages:
• Since parallelepipeds have corners, pixels that are actually quite far, spectrally, from the mean of the signature may be classified. An example of this is shown in Figure 7-10.
[Figure 7-10: a candidate pixel that falls inside a corner of the parallelepiped boundary, far from the signature ellipse, plotted against Band A and Band B data file values; the pixel is classified even though it is spectrally distant from the signature mean.]
Feature Space The feature space decision rule determines whether or not a candidate pixel lies within the
nonparametric signature in the feature space image. When a pixel’s data file values are in the
feature space signature, then the pixel is assigned to that signature’s class. Figure 7-11 is a two-
dimensional example of a feature space classification. The polygons in this figure are AOIs used
to define the feature space signatures.
[Figure 7-11: two-dimensional example of a feature space classification plotted against Band A and Band B data file values; AOI polygons define the feature space signatures for classes 1, 2, and 3 (● = pixels in class 1, ▲ = pixels in class 2, ◆ = pixels in class 3, ? = unclassified pixels).]
Overlap Region
In cases where a pixel may fall into the overlap region of two or more AOIs, you must define
how the pixel can be classified.
• The pixel can be classified by the order of the feature space signatures. If one of the
signatures is first and the other signature is fourth, the pixel is assigned to the first
signature’s class. This order can be set in the Signature Editor.
• The pixel can be classified by the defined parametric decision rule. The pixel is tested
against the overlapping signatures only. If neither of these feature space signatures is
parametric, then the pixel is left unclassified. If only one of the signatures is parametric,
then the pixel is assigned automatically to that signature’s class.
• The pixel can be classified by the defined parametric decision rule. The pixel is tested
against all of the parametric signatures. If none of the signatures is parametric, then the
pixel is left unclassified.
Advantages:
• Often useful for a first-pass, broad classification.
• Provides an accurate way to classify a class with a nonnormal distribution (e.g., residential and urban).
• Certain features may be more visually identifiable, which can help discriminate between classes that are spectrally similar and hard to differentiate with parametric information.
• The feature space method is fast.
Disadvantages:
• The feature space decision rule allows overlap and unclassified pixels.
• The feature space image may be difficult to interpret.
Use the Decision Rules utility in the Signature Editor to perform a feature space
classification.
Minimum Distance The minimum distance decision rule (also called spectral distance) calculates the spectral
distance between the measurement vector for the candidate pixel and the mean vector for each
signature.
[Figure 7-12: spectral distances from a candidate pixel to the mean vectors µ1, µ2, and µ3 of three signatures, plotted against Band A and Band B data file values.]
In Figure 7-12, spectral distance is illustrated by the lines from the candidate pixel to the means
of the three signatures. The candidate pixel is assigned to the class with the closest mean.
The equation for classifying by spectral distance is based on the equation for Euclidean
distance:
SDxyc = sqrt( Σ(i=1 to n) (µci - Xxyi)^2 )
Where:
n = number of bands (dimensions)
i = a particular band
c = a particular class
Xxyi = data file value of pixel x,y in band i
µci = mean of data file values in band i for the sample for class c
SDxyc = spectral distance from pixel x,y to the mean of class c
Source: Swain and Davis, 1978
When spectral distance is computed for all possible values of c (all possible classes), the class
of the candidate pixel is assigned to the class for which SD is the lowest.
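A minimal NumPy sketch of this rule for a single pixel (not the ERDAS IMAGINE implementation):

import numpy as np

def minimum_distance_classify(pixel, means):
    """Assign a pixel to the class whose mean vector is spectrally closest.

    pixel : (n_bands,) measurement vector X for pixel x,y
    means : (n_classes, n_bands) mean vectors, one row per class
    """
    sd = np.sqrt(np.sum((means - pixel) ** 2, axis=1))  # SDxyc for every class c
    return int(np.argmin(sd))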
Advantages:
• Since every pixel is spectrally closer to either one sample mean or another, there are no unclassified pixels.
• The fastest decision rule to compute, except for parallelepiped.
Disadvantages:
• Pixels that should be unclassified (i.e., they are not spectrally close to the mean of any sample, within limits that are reasonable to you) become classified. However, this problem is alleviated by thresholding out the pixels that are farthest from the means of their classes. (See the discussion on “Thresholding”.)
• Does not consider class variability. For example, a class like an urban land cover class is made up of pixels with a high variance, which may tend to be farther from the mean of the signature. Using this decision rule, outlying urban pixels may be improperly classified. Inversely, a class with less variance, like water, may tend to overclassify (that is, classify more pixels than are appropriate to the class), because the pixels that belong to the class are usually spectrally closer to their mean than those of other classes to their means.
Mahalanobis Distance
The Mahalanobis distance algorithm assumes that the histograms of the bands have
normal distributions. If this is not the case, you may have better results with the
parallelepiped or minimum distance decision rule, or by performing a first-pass
parallelepiped classification.
Mahalanobis distance is similar to minimum distance, except that the covariance matrix is used
in the equation. Variance and covariance are figured in so that clusters that are highly varied
lead to similarly varied classes, and vice versa. For example, when classifying urban areas—
typically a class whose pixels vary widely—correctly classified pixels may be farther from the
mean than those of a class for water, which is usually not a highly varied class (Swain and Davis,
1978).
The equation for the Mahalanobis distance classifier is as follows:
D = (X-Mc)T (Covc-1) (X-Mc)
Where:
D = Mahalanobis distance
c = a particular class
X = the measurement vector of the candidate pixel
Mc = the mean vector of the signature of class c
Covc = the covariance matrix of the pixels in the signature of class c
Covc-1 = inverse of Covc
T = transposition function
The pixel is assigned to the class, c, for which D is the lowest.
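A corresponding sketch for the Mahalanobis distance rule, again assuming one mean vector and covariance matrix per signature:

import numpy as np

def mahalanobis_classify(pixel, means, covariances):
    """Assign a pixel to the class c with the smallest Mahalanobis distance D."""
    distances = []
    for Mc, Covc in zip(means, covariances):
        d = pixel - Mc
        distances.append(d @ np.linalg.inv(Covc) @ d)  # (X-Mc)T Covc-1 (X-Mc)
    return int(np.argmin(distances))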
Advantages:
• Takes the variability of classes into account, unlike minimum distance or parallelepiped.
• May be more useful than minimum distance in cases where statistical criteria (as expressed in the covariance matrix) must be taken into account, but the weighting factors that are available with the maximum likelihood/Bayesian option are not needed.
Disadvantages:
• Tends to overclassify signatures with relatively large values in the covariance matrix. If there is a large dispersion of the pixels in a cluster or training sample, then the covariance matrix of that signature contains large values.
• Slower to compute than parallelepiped or minimum distance.
• Mahalanobis distance is parametric, meaning that it relies heavily on a normal distribution of the data in each input band.
Maximum Likelihood/Bayesian The maximum likelihood algorithm assumes that the histograms of the bands of data have
normal distributions. If this is not the case, you may have better results with the
parallelepiped or minimum distance decision rule, or by performing a first-pass
parallelepiped classification.
The maximum likelihood decision rule is based on the probability that a pixel belongs to a
particular class. The basic equation assumes that these probabilities are equal for all classes, and
that the input bands have normal distributions.
Bayesian Classifier
If you have a priori knowledge that the probabilities are not equal for all classes, you can specify
weight factors for particular classes. This variation of the maximum likelihood decision rule is
known as the Bayesian decision rule (Hord, 1982). Unless you have a priori knowledge of the
probabilities, it is recommended that they not be specified. In this case, these weights default to
1.0 in the equation.
The equation for the maximum likelihood/Bayesian classifier is as follows:
D = ln(ac) - [0.5 ln(|Covc|)] - [0.5 (X-Mc)T (Covc-1) (X-Mc)]
Where:
D = weighted distance (likelihood)
c = a particular class
X = the measurement vector of the candidate pixel
Mc = the mean vector of the sample of class c
ac = percent probability that any candidate pixel is a member of class c (defaults to
1.0, or is entered from a priori knowledge)
Covc = the covariance matrix of the pixels in the sample of class c
|Covc| = determinant of Covc (matrix algebra)
Covc-1 = inverse of Covc (matrix algebra)
ln = natural logarithm function
T = transposition function (matrix algebra)
The inverse and determinant of a matrix, along with the difference and transposition of vectors, are explained in textbooks on matrix algebra.
The pixel is assigned to the class, c, for which the weighted likelihood D is greatest (that is, the class with the smallest corresponding weighted distance).
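A sketch of the maximum likelihood/Bayesian rule under the same assumptions; the weighted likelihood D from the equation above is computed per class, and the largest value wins (equal weights of 1.0 are used when no a priori probabilities are given):

import numpy as np

def maximum_likelihood_classify(pixel, means, covariances, priors=None):
    """Assign a pixel to the class c with the greatest weighted likelihood D."""
    if priors is None:
        priors = [1.0] * len(means)        # ac defaults to 1.0 for every class
    scores = []
    for Mc, Covc, ac in zip(means, covariances, priors):
        d = pixel - Mc
        D = (np.log(ac)
             - 0.5 * np.log(np.linalg.det(Covc))
             - 0.5 * d @ np.linalg.inv(Covc) @ d)
        scores.append(D)
    return int(np.argmax(scores))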
Advantages:
• The most accurate of the classifiers in the ERDAS IMAGINE system (if the input samples/clusters have a normal distribution), because it takes the most variables into consideration.
• Takes the variability of classes into account by using the covariance matrix, as does Mahalanobis distance.
Disadvantages:
• An extensive equation that takes a long time to compute. The computation time increases with the number of input bands.
• Maximum likelihood is parametric, meaning that it relies heavily on a normal distribution of the data in each input band.
• Tends to overclassify signatures with relatively large values in the covariance matrix. If there is a large dispersion of the pixels in a cluster or training sample, then the covariance matrix of that signature contains large values.
Fuzzy Methodology
Fuzzy Classification The Fuzzy Classification method takes into account that there are pixels of mixed make-up, that
is, a pixel cannot be definitively assigned to one category. Jensen notes that, “Clearly, there
needs to be a way to make the classification algorithms more sensitive to the imprecise (fuzzy)
nature of the real world” (Jensen, 1996).
Fuzzy classification is designed to help you work with data that may not fall into exactly one
category or another. Fuzzy classification works using a membership function, wherein a pixel’s
value is determined by whether it is closer to one class than another. A fuzzy classification does
not have definite boundaries, and each pixel can belong to several different classes (Jensen,
1996).
Like traditional classification, fuzzy classification still uses training, but the biggest difference
is that “it is also possible to obtain information on the various constituent classes found in a
mixed pixel. . .” (Jensen, 1996). Jensen goes on to explain that the process of collecting training
sites in a fuzzy classification is not as strict as a traditional classification. In the fuzzy method,
the training sites do not have to have pixels that are exactly the same.
Once you have a fuzzy classification, the fuzzy convolution utility allows you to perform a
moving window convolution on a fuzzy classification with multiple output class assignments.
Using the multilayer classification and distance file, the convolution creates a new single class
output file by computing a total weighted distance for all classes in the window.
Fuzzy Convolution The Fuzzy Convolution operation creates a single classification layer by calculating the total
weighted inverse distance of all the classes in a window of pixels. Then, it assigns the center
pixel in the class with the largest total inverse distance summed over the entire set of fuzzy
classification layers.
This has the effect of creating a context-based classification to reduce the speckle or salt and
pepper in the classification. Classes with a very small distance value remain unchanged while
classes with higher distance values may change to a neighboring value if there is a sufficient
number of neighboring pixels with class values and small corresponding distance values. The
following equation is used in the calculation:
T[k] = Σ(i=0 to s) Σ(j=0 to s) Σ(l=0 to n) ( wij / Dijl[k] )
Where:
i = row index of window
j = column index of window
s = size of window (3, 5, or 7)
l = layer index of fuzzy set
n = number of fuzzy layers used
W = weight table for window
k = class value
D[k] = distance file value for class k
T[k] = total weighted distance of window for class k
The center pixel is assigned the class with the maximum T[k].
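A sketch of the fuzzy convolution for one window position, assuming the multilayer classification produced n class layers with matching distance layers; the actual utility reads these from files and uses a weight table, which is passed in here as a simple array:

import numpy as np

def fuzzy_convolution_center(class_win, dist_win, weights):
    """Assign the window's center pixel to the class with the maximum T[k].

    class_win : (n_layers, s, s) class values from the fuzzy classification
    dist_win  : (n_layers, s, s) corresponding distance file values
    weights   : (s, s) weight table W for the window
    """
    totals = {}
    n_layers, s, _ = class_win.shape
    for l in range(n_layers):
        for i in range(s):
            for j in range(s):
                k = int(class_win[l, i, j])
                d = dist_win[l, i, j]
                if d > 0:                          # skip zero distances
                    totals[k] = totals.get(k, 0.0) + weights[i, j] / d
    return max(totals, key=totals.get)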
Expert Classification Expert classification can be performed using the IMAGINE Expert Classifier™. The expert classification software provides a rules-based approach to multispectral image classification,
post-classification refinement, and GIS modeling. In essence, an expert classification system is
a hierarchy of rules, or a decision tree, that describes the conditions under which a set of low
level constituent information gets abstracted into a set of high level informational classes. The
constituent information consists of user-defined variables and includes raster imagery, vector
coverages, spatial models, external programs, and simple scalars.
A rule is a conditional statement, or list of conditional statements, about the variable’s data values and/or attributes that determines an informational component or hypothesis. Multiple
rules and hypotheses can be linked together into a hierarchy that ultimately describes a final set
of target informational classes or terminal hypotheses. Confidence values associated with each
condition are also combined to provide a confidence image corresponding to the final output
classified image.
The IMAGINE Expert Classifier is composed of two parts: the Knowledge Engineer and the
Knowledge Classifier. The Knowledge Engineer provides the interface for an expert with first-
hand knowledge of the data and the application to identify the variables, rules, and output
classes of interest and create the hierarchical decision tree. The Knowledge Classifier provides
an interface for a nonexpert to apply the knowledge base and create the output classification.
Knowledge Engineer With the Knowledge Engineer, you can open knowledge bases, which are presented as decision
trees in editing windows.
In Figure 7-13, the upper left corner of the editing window is an overview of the entire decision
tree with a green box indicating the position within the knowledge base of the currently
displayed portion of the decision tree. This box can be dragged to change the view of the
decision tree graphic in the display window on the right. The branch containing the currently
selected hypotheses, rule, or condition is highlighted in the overview.
The decision tree grows in depth when the hypothesis of one rule is referred to by a condition
of another rule. The terminal hypotheses of the decision tree represent the final classes of
interest. Intermediate hypotheses may also be flagged as being a class of interest. This may
occur when there is an association between classes.
Figure 7-14 represents a single branch of a decision tree depicting a hypothesis, its rule, and
conditions.
[Figure 7-14: a single branch of a decision tree—the hypothesis (Good Location) is determined by a rule (Gentle Southern Slope), whose conditions (for example, Slope > 0) appear to the right of the rule.]
In this example, the rule, which is Gentle Southern Slope, determines the hypothesis, Good
Location. The rule has four conditions depicted on the right side, all of which must be satisfied
for the rule to be true.
However, the rule may be split so that either a Southern slope or a Gentle slope alone defines the Good Location hypothesis. While all of the conditions within a rule must still be true to fire that rule, only one of the rules must be true to satisfy the hypothesis.
[Figure: the split rule—for example, a separate Gentle Slope rule containing the conditions Slope < 12 and Slope > 0.]
Variable Editor
The Knowledge Engineer also makes use of a Variable Editor when classifying images. The
Variable editor provides for the definition of the variable objects to be used in the rules
conditions.
The two types of variables are raster and scalar. Raster variables may be defined by imagery,
feature layers (including vector layers), graphic spatial models, or by running other programs.
Scalar variables may be defined with an explicit value, or defined as the output from a model or
external program.
Knowledge Classifier The Knowledge Classifier is composed of two parts: an application with a user interface, and a
command line executable. The user interface application allows you to input a limited set of
parameters to control the use of the knowledge base. The user interface is designed as a wizard
to lead you through pages of input parameters.
After selecting a knowledge base, you are prompted to select classes. The following is an
example classes dialog:
After you select the input data for classification, the classification output options, output files,
output area, output cell size, and output map projection, the Knowledge Classifier process can
begin. An inference engine then evaluates all hypotheses at each location (calculating variable
values, if required), and assigns the hypothesis with the highest confidence. The output of the
Knowledge Classifier is a thematic image, and optionally, a confidence image.
Evaluating Classification After a classification is performed, two methods are available for testing the accuracy of the classification: thresholding and accuracy assessment, described below.
Thresholding Thresholding is the process of identifying the pixels in a classified image that are the most likely
to be classified incorrectly. These pixels are put into another class (usually class 0). These pixels
are identified statistically, based upon the distance measures that were used in the classification
decision rule.
Distance File
When a minimum distance, Mahalanobis distance, or maximum likelihood classification is
performed, a distance image file can be produced in addition to the output thematic raster layer.
A distance image file is a one-band, 32-bit offset continuous raster layer in which each data file
value represents the result of a spectral distance equation, depending upon the decision rule
used.
• In a minimum distance classification, each distance value is the Euclidean spectral distance
between the measurement vector of the pixel and the mean vector of the pixel’s class.
The brighter pixels (with the higher distance file values) are spectrally farther from the signature means for the classes to which they are assigned. They are more likely to be misclassified.
The darker pixels are spectrally nearer, and more likely to be classified correctly. If supervised
training was used, the darkest pixels are usually the training samples.
[Figure 7-17: histogram of a distance image—number of pixels plotted against distance value.]
Figure 7-17 shows how the histogram of the distance image usually appears. This distribution
is called a chi-square distribution, as opposed to a normal distribution, which is a symmetrical
bell curve.
Threshold
The pixels that are the most likely to be misclassified have the higher distance file values at the
tail of this histogram. At some point that you define—either mathematically or visually—the
tail of this histogram is cut off. The cutoff point is the threshold.
To determine the threshold:
• interactively change the threshold with the mouse, when a distance histogram is displayed
while using the threshold function. This option enables you to select a chi-square value by
selecting the cut-off value in the distance histogram, or
• input a chi-square parameter or distance measurement, so that the threshold is calculated statistically.
In both cases, thresholding has the effect of cutting the tail off of the histogram of the distance
image file, representing the pixels with the highest distance values.
Figure 7-18 shows some example distance histograms. With each example is an explanation of
what the curve might mean, and how to threshold it.
Chi-square Statistics
If the minimum distance classifier is used, then the threshold is simply a certain spectral
distance. However, if Mahalanobis or maximum likelihood are used, then chi-square statistics
are used to compare probabilities (Swain and Davis, 1978).
When statistics are used to calculate the threshold, the threshold is more clearly defined as
follows:
T is the distance value at which C% of the pixels in a class have a distance value greater than or
equal to T.
Where:
T = the threshold for a class
C% = the percentage of pixels that are believed to be misclassified, known as the
confidence level
T is related to the distance values by means of chi-square statistics. The value X2 (chi-squared)
is used in the equation. X2 is a function of:
• the number of bands of data used—known in chi-square statistics as the number of degrees of freedom
• the confidence level (C%)
When classifying an image in ERDAS IMAGINE, the classified image automatically has the
degrees of freedom (i.e., number of bands) used for the classification. The chi-square table is
built into the threshold application.
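A small sketch of this statistical threshold using SciPy's chi-square quantile function; it assumes the distance image values follow a chi-square distribution with one degree of freedom per band, as described above:

from scipy.stats import chi2

def distance_threshold(confidence_percent, n_bands):
    """Distance threshold T for a class.

    confidence_percent : C%, the percentage of pixels believed to be misclassified
    n_bands            : degrees of freedom (number of bands used in classification)
    C% of the pixels in the class are expected to have distance values >= T.
    """
    return chi2.ppf(1.0 - confidence_percent / 100.0, df=n_bands)

# Example: a 5% confidence level with 7 bands
# print(distance_threshold(5, 7))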
Accuracy Assessment Accuracy assessment is a general term for comparing the classification to geographical data that
are assumed to be true, in order to determine the accuracy of the classification process. Usually,
the assumed-true data are derived from ground truth data.
It is usually not practical to ground truth or otherwise test every pixel of a classified image.
Therefore, a set of reference pixels is usually used. Reference pixels are points on the classified
image for which actual data are (or will be) known. The reference pixels are randomly selected
(Congalton, 1991).
NOTE: You can use the ERDAS IMAGINE Accuracy Assessment utility to perform an accuracy
assessment for any thematic layer. This layer does not have to be classified by ERDAS
IMAGINE (e.g., you can run an accuracy assessment on a thematic layer that was classified in
ERDAS Version 7.5 and imported into ERDAS IMAGINE).
Use the Accuracy Assessment CellArray to enter reference pixels for the class values.
Error Reports
From the Accuracy Assessment CellArray, two kinds of reports can be derived.
• The error matrix simply compares the reference points to the classified points in a c × c
matrix, where c is the number of classes (including class 0).
• The accuracy report calculates statistics of the percentages of accuracy, based upon the
results of the error matrix.
When interpreting the reports, it is important to observe the percentage of correctly classified pixels and to determine the nature of the producer’s and the user’s errors.
Use the Accuracy Assessment utility to generate the error matrix and accuracy reports.
Kappa Coefficient
The Kappa coefficient expresses the proportionate reduction in error generated by a
classification process compared with the error of a completely random classification. For
example, a value of .82 implies that the classification process is avoiding 82 percent of the errors
that a completely random classification generates (Congalton, 1991).
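A sketch of the error matrix and Kappa coefficient computed from the reference and classified values of a set of reference pixels; class values are assumed to be integers in the range 0 to c-1:

import numpy as np

def error_matrix_and_kappa(reference, classified, n_classes):
    """Build a c x c error matrix and compute the Kappa coefficient."""
    m = np.zeros((n_classes, n_classes), dtype=int)
    for r, k in zip(reference, classified):
        m[r, k] += 1
    n = m.sum()
    observed = np.trace(m) / n                                   # overall accuracy
    chance = np.sum(m.sum(axis=0) * m.sum(axis=1)) / n ** 2      # expected chance agreement
    kappa = (observed - chance) / (1.0 - chance)
    return m, kappa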
Output File When classifying an image file, the output file is an image file with a thematic raster layer. This
file automatically contains the following data:
• class values
• class names
• color table
• statistics
• histogram
The image file also contains any signature attributes that were selected in the ERDAS
IMAGINE Supervised Classification utility.
The class names, values, and colors can be set with the Signature Editor or the Raster
Attribute Editor.
Chapter 8
Photogrammetric Concepts
Introduction
What is Photogrammetry? Photogrammetry is the "art, science and technology of obtaining reliable information about physical objects and the environment through the process of recording, measuring and
interpreting photographic images and patterns of electromagnetic radiant imagery and other
phenomena" (American Society of Photogrammetry, 1980).
Photogrammetry was invented in 1851 by Laussedat, and has continued to develop over the last
140 years. Over time, the development of photogrammetry has passed through the phases of
plane table photogrammetry, analog photogrammetry, analytical photogrammetry, and has now
entered the phase of digital photogrammetry (Konecny, 1994).
The traditional, and largest, application of photogrammetry is to extract topographic
information (e.g., topographic maps) from aerial images. However, photogrammetric
techniques have also been applied to process satellite images and close range images in order to
acquire topographic or nontopographic information of photographed objects.
Prior to the invention of the airplane, photographs taken on the ground were used to extract the
relationships between objects using geometric principles. This was during the phase of plane
table photogrammetry.
In analog photogrammetry, starting with stereomeasurement in 1901, optical or mechanical
instruments were used to reconstruct three-dimensional geometry from two overlapping
photographs. The main product during this phase was topographic maps.
In analytical photogrammetry, the computer replaces some expensive optical and mechanical
components. The resulting devices were analog/digital hybrids. Analytical aerotriangulation,
analytical plotters, and orthophoto projectors were the main developments during this phase.
Outputs of analytical photogrammetry can be topographic maps, but can also be digital
products, such as digital maps and DEMs.
Digital photogrammetry is photogrammetry as applied to digital images that are stored and
processed on a computer. Digital images can be scanned from photographs or can be directly
captured by digital cameras. Many photogrammetric tasks can be highly automated in digital
photogrammetry (e.g., automatic DEM extraction and digital orthophoto generation). Digital
photogrammetry is sometimes called softcopy photogrammetry. The output products are in
digital form, such as digital maps, DEMs, and digital orthophotos saved on computer storage
media. Therefore, they can be easily stored, managed, and applied by you. With the
development of digital photogrammetry, photogrammetric techniques are more closely
integrated into remote sensing and GIS.
Digital photogrammetric systems employ sophisticated software to automate the tasks
associated with conventional photogrammetry, thereby minimizing the extent of manual
interaction required to perform photogrammetric operations. IMAGINE OrthoBASE® is such
a photogrammetric system.
Photogrammetry can be used to measure and interpret information from hardcopy photographs
or images. Sometimes the process of measuring information from photography and satellite
imagery is considered metric photogrammetry, such as creating DEMs. Interpreting information
from photography and imagery is considered interpretative photogrammetry, such as
identifying and discriminating between various tree types as represented on a photograph or
image (Wolf, 1983).
Types of Photographs and Images The types of photographs and images that can be processed within IMAGINE OrthoBASE include aerial, terrestrial, close range, and oblique. Aerial or vertical (near vertical) photographs
and images are taken from a high vantage point above the Earth’s surface. The camera axis of
aerial or vertical photography is commonly directed vertically (or near vertically) down. Aerial
photographs and images are commonly used for topographic and planimetric mapping projects.
Aerial photographs and images are commonly captured from an aircraft or satellite.
Terrestrial or ground-based photographs and images are taken with the camera stationed on or
close to the Earth’s surface. Terrestrial and close range photographs and images are commonly
used for applications involved with archeology, geomorphology, civil engineering, architecture,
industry, etc.
Oblique photographs and images are similar to aerial photographs and images, except the
camera axis is intentionally inclined at an angle with the vertical. Oblique photographs and
images are commonly used for reconnaissance and corridor mapping applications.
Digital photogrammetric systems use digitized photographs or digital images as the primary
source of input. Digital imagery can be obtained from various sources. These include:
• Using sensors on board satellites such as Landsat and SPOT to record imagery
This document uses the term imagery in reference to photography and imagery obtained
from various sources. This includes aerial and terrestrial photography, digital and video
camera imagery, 35 mm photography, medium to large format photography, scanned
photography, and satellite imagery.
Why use Photogrammetry? As stated in the previous section, raw aerial photography and satellite imagery have large geometric distortion that is caused by various systematic and nonsystematic factors. The
photogrammetric modeling based on collinearity equations eliminates these errors most
efficiently, and creates the most reliable orthoimages from the raw imagery. It is unique in terms
of considering the image-forming geometry, utilizing information between overlapping images,
and explicitly dealing with the third dimension: elevation.
Photogrammetry vs. Conventional Geometric Correction Conventional techniques of geometric correction such as polynomial transformation are based on general functions not directly related to the specific distortion or error sources. They have been successful in the field of remote sensing and GIS applications, especially when dealing
with low resolution and narrow field of view satellite imagery such as Landsat and SPOT data
(Yang, 1997). General functions have the advantage of simplicity. They can provide a
reasonable geometric modeling alternative when little is known about the geometric nature of
the image data.
However, conventional techniques generally process the images one at a time. They cannot
provide an integrated solution for multiple images or photographs simultaneously and
efficiently. It is very difficult, if not impossible, for conventional techniques to achieve a
reasonable accuracy without a great number of GCPs when dealing with large-scale imagery,
images having severe systematic and/or nonsystematic errors, and images covering rough
terrain. Misalignment is more likely to occur when mosaicking separately rectified images. This
misalignment could result in inaccurate geographic information being collected from the
rectified images. Furthermore, it is impossible for a conventional technique to create a three-
dimensional stereo model or to extract the elevation information from two overlapping images.
There is no way for conventional techniques to accurately derive geometric information about
the sensor that captured the imagery.
Photogrammetric techniques overcome all the problems mentioned above by using least squares
bundle block adjustment. This solution is integrated and accurate.
IMAGINE OrthoBASE can process hundreds of images or photographs with very few GCPs,
while at the same time eliminating the misalignment problem associated with creating image
mosaics. In short, less time, less money, less manual effort, but more geographic fidelity can be
realized using the photogrammetric solution.
Single Frame Orthorectification vs. Block Triangulation Single frame orthorectification techniques orthorectify one image at a time using a technique known as space resection. In this respect, a minimum of three GCPs is required for each image. For example, in order to orthorectify 50 aerial photographs, a minimum of 150 GCPs is
required. This includes manually identifying and measuring each GCP for each image
individually. Once the GCPs are measured, space resection techniques compute the
camera/sensor position and orientation as it existed at the time of data capture. This information,
along with a DEM, is used to account for the negative impacts associated with geometric errors.
Additional variables associated with systematic error are not considered.
Single frame orthorectification techniques do not utilize the internal relationship between
adjacent images in a block to minimize and distribute the errors commonly associated with
GCPs, image measurements, DEMs, and camera/sensor information. Therefore, during the
mosaic procedure, misalignment between adjacent images is common since error has not been
minimized and distributed throughout the block.
Block triangulation, by contrast, processes an entire block of images and their ground points in one solution. It is used:
• To determine the position and orientation for each image in a project as they existed at the
time of photographic or image exposure. The resulting parameters are referred to as exterior
orientation parameters. In order to estimate the exterior orientation parameters, a minimum
of three GCPs is required for the entire block, regardless of how many images are contained
within the project.
• To determine the ground coordinates of any tie points manually or automatically measured
on the overlap areas of multiple images. The highly precise ground point determination of
tie points is useful for generating control points from imagery in lieu of ground surveying
techniques. Additionally, if a large number of ground points is generated, then a DEM can
be interpolated using the Create Surface tool in ERDAS IMAGINE.
• To minimize and distribute the errors associated with the imagery, image measurements,
GCPs, and so forth. The bundle block adjustment processes information from an entire
block of imagery in one simultaneous solution (i.e., a bundle) using statistical techniques
(i.e., adjustment component) to automatically identify, distribute, and remove error.
Because the images are processed in one step, the misalignment issues associated with creating
mosaics are resolved.
Image and Data Acquisition During photographic or image collection, overlapping images are exposed along a direction of flight. Most photogrammetric applications involve the use of overlapping images. In using more
than one image, the geometry associated with the camera/sensor, image, and ground can be
defined to greater accuracies and precision.
During the collection of imagery, each point in the flight path at which the camera exposes the
film, or the sensor captures the imagery, is called an exposure station (see Figure 8-1).
[Figure 8-1: the flight path of the airplane, flight lines 1 through 3, and the exposure stations along each line.]
Each photograph or image that is exposed has a corresponding image scale associated with it.
The image scale expresses the average ratio between a distance in the image and the same
distance on the ground. It is computed as focal length divided by the flying height above the
mean ground elevation. For example, with a flying height of 1000 m and a focal length of 15
cm, the image scale (SI) would be 1:6667.
NOTE: The flying height above ground is used, versus the altitude above sea level.
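The scale in the example above is easy to verify; a one-line check (focal length and flying height must be in the same units):

def image_scale_denominator(focal_length_m, flying_height_m):
    """Return N for an average image scale of 1:N."""
    return round(flying_height_m / focal_length_m)

print(image_scale_denominator(0.15, 1000.0))   # 6667, i.e. a scale of 1:6667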
A strip of photographs consists of images captured along a flight line, normally with an overlap
of 60%. All photos in the strip are assumed to be taken at approximately the same flying height
and with a constant distance between exposure stations. Camera tilt relative to the vertical is
assumed to be minimal.
The photographs from several flight paths can be combined to form a block of photographs. A
block of photographs consists of a number of parallel strips, normally with a sidelap of 20-30%.
Block triangulation techniques are used to transform all of the images in a block and ground
points into a homologous coordinate system.
A regular block of photos is a rectangular block in which the number of photos in each strip is
the same. Figure 8-2 shows a block of 5 × 2 photographs.
[Figure 8-2: a regular photographic block of 5 × 2 photos—two parallel strips with 20-30% sidelap between them.]
Photogrammetric Scanners Photogrammetric quality scanners are special devices capable of high image quality and excellent positional accuracy. Use of this type of scanner results in geometric accuracies similar
to traditional analog and analytical photogrammetric instruments. These scanners are necessary
for digital photogrammetric applications that have high accuracy requirements.
These units usually scan only film because film is superior to paper, both in terms of image
detail and geometry. These units usually have a Root Mean Square Error (RMSE) positional
accuracy of 4 microns or less, and are capable of scanning at a maximum resolution of 5 to 10
microns (5 microns is equivalent to approximately 5,000 pixels per inch).
The required pixel resolution varies depending on the application. Aerial triangulation and
feature collection applications often scan in the 10- to 15-micron range. Orthophoto
applications often use 15- to 30-micron pixels. Color film is less sharp than panchromatic,
therefore color ortho applications often use 20- to 40-micron pixels.
Desktop Scanners Desktop scanners are general purpose devices. They lack the image detail and geometric
accuracy of photogrammetric quality units, but they are much less expensive. When using a
desktop scanner, you should make sure that the active area is at least 9 × 9 inches (i.e., A3 type
scanners), enabling you to capture the entire photo frame.
Desktop scanners are appropriate for less rigorous uses, such as digital photogrammetry in
support of GIS or remote sensing applications. Calibrating these units improves geometric
accuracy, but the results are still inferior to photogrammetric units. The image correlation
techniques that are necessary for automatic tie point collection and elevation extraction are often
sensitive to scan quality. Therefore, errors can be introduced into the photogrammetric solution
that are attributable to scanning errors. IMAGINE OrthoBASE accounts for systematic errors
attributed to scanning errors.
Scanning Resolutions One of the primary factors contributing to the overall accuracy of block triangulation and
orthorectification is the resolution of the imagery being used. Image resolution is commonly
determined by the scanning resolution (if film photography is being used), or by the pixel
resolution of the sensor. In order to optimize the attainable accuracy of a solution, the scanning
resolution must be considered. The appropriate scanning resolution is determined by balancing
the accuracy requirements versus the size of the mapping project and the time required to
process the project. Table 8-1 lists the scanning resolutions associated with various scales of
photography and image file size.
The ground coverage column refers to the ground coverage per pixel. Thus, a 1:40000 scale
photograph scanned at 25 microns [1016 dots per inch (dpi)] has a ground coverage per pixel of
1 m × 1 m. The resulting file size is approximately 85 MB, assuming a square 9 × 9 inch
photograph.
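These numbers can be reproduced with a short calculation, assuming an 8-bit (1 byte per pixel) scan of a square 9 × 9 inch frame:

scale = 40000                    # photo scale 1:40000
pixel_size = 25e-6               # scanning resolution: 25 microns
ground_coverage = scale * pixel_size                 # 1.0 m per pixel
pixels_per_side = round(9 * 0.0254 / pixel_size)     # 9 inch frame -> 9144 pixels
file_size_mb = pixels_per_side ** 2 / 1e6            # about 84 MB at 1 byte per pixel
print(ground_coverage, pixels_per_side, round(file_size_mb))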
Coordinate Systems Conceptually, photogrammetry involves establishing the relationship between the camera or
sensor used to capture imagery, the imagery itself, and the ground. In order to understand and
define this relationship, each of the three variables associated with the relationship must be
defined with respect to a coordinate space and coordinate system.
[Figure: the pixel coordinate system and the image coordinate system, showing the origin of each.]
[Figure: the image space coordinate system (x, y, z) with origin S, and the ground space coordinate system (X, Y, Z, Height) containing a ground point A.]
Terrestrial Photography Photogrammetric applications associated with terrestrial or ground-based images utilize slightly different image and ground space coordinate systems. Figure 8-5 illustrates the two coordinate systems associated with image space and ground space.
[Figure 8-5: an image point a’ (xa’, ya’) in the image space coordinate system (x, y, z), the perspective center at (XL, YL, ZL) with rotation angles ω’, ϕ’, and κ’, and a ground point A at (XA, YA, ZA) in the ground space coordinate system (XG, YG, ZG) with rotation angles ω, ϕ, and κ.]
The image and ground space coordinate systems are right-handed coordinate systems. Most
terrestrial applications use a ground space coordinate system that was defined using a localized
Cartesian coordinate system.
The image space coordinate system directs the z-axis toward the imaged object and the y-axis upward. The image x-axis is similar to that used in aerial applications. The XL, YL, and
ZL coordinates define the position of the perspective center as it existed at the time of image
capture. The ground coordinates of ground point A (XA, YA, and ZA) are defined within the
ground space coordinate system (XG, YG, and ZG). With this definition, the rotation angles ω, ϕ,
and κ are still defined as in the aerial photography conventions. In IMAGINE OrthoBASE, you
can also use the ground (X, Y, Z) coordinate system to directly define GCPs. Thus, GCPs do
not need to be transformed. In that case, the definitions of the rotation angles ω’, ϕ’, and κ’ are different, as shown in Figure 8-5.
Interior Orientation Interior orientation defines the internal geometry of a camera or sensor as it existed at the time
of data capture. The variables associated with image space are defined during the process of
interior orientation. Interior orientation is primarily used to transform the image pixel
coordinate system or other image coordinate measurement system to the image space coordinate
system.
Figure 8-6 illustrates the variables associated with the internal geometry of an image captured
from an aerial camera, where o represents the principal point and a represents an image point.
[Figure 8-6: the perspective center, the image plane with origin O, the principal point (xo, yo), and an image point a at (xa’, ya’), referenced to the image space coordinate system (x, y, z).]
The internal geometry of an image is described by the following variables:
• Principal point
• Focal length
• Fiducial marks
• Lens distortion
Principal Point and Focal Length The principal point is mathematically defined as the intersection of the perpendicular line through the perspective center with the image plane. The length from the principal point to the perspective center is called the focal length (Wang, Z., 1990).
The image plane is commonly referred to as the focal plane. For wide-angle aerial cameras, the
focal length is approximately 152 mm, or 6 inches. For some digital cameras, the focal length
is 28 mm. Prior to conducting photogrammetric projects, the focal length of a metric camera is
accurately determined or calibrated in a laboratory environment.
This mathematical definition is the basis of triangulation, but is difficult to determine optically.
The optical definition of the principal point is the image position where the optical axis intersects
the image plane. In the laboratory, this is calibrated in two forms: the principal point of
autocollimation and the principal point of symmetry, both of which are given in the camera calibration
report. Most applications prefer to use the principal point of symmetry since it best
compensates for the lens distortion.
Fiducial Marks As stated previously, one of the steps associated with interior orientation involves determining
the image position of the principal point for each image in the project. Therefore, the image
positions of the fiducial marks are measured on the image, and subsequently compared to the
calibrated coordinates of each fiducial mark.
Since the image space coordinate system has not yet been defined for each image, the measured
image coordinates of the fiducial marks are referenced to a pixel or file coordinate system. The
pixel coordinate system has an x coordinate (column) and a y coordinate (row). The origin of
the pixel coordinate system is the upper left corner of the image having a row and column value
of 0 and 0, respectively. Figure 8-7 illustrates the difference between the pixel coordinate
system and the image space coordinate system.
Figure 8-7: Pixel Coordinate System vs. Image Space Coordinate System
Using a two-dimensional affine transformation, the relationship between the pixel coordinate
system and the image space coordinate system is defined. The following two-dimensional affine
transformation equations can be used to determine the coefficients required to transform pixel
coordinate measurements to the image coordinates:
x = a1 + a2 X + a3 Y
y = b1 + b2 X + b3 Y
The x and y image coordinates associated with the calibrated fiducial marks and the X and Y
pixel coordinates of the measured fiducial marks are used to determine six affine transformation
coefficients. The resulting six coefficients can then be used to transform each set of row (y) and
column (x) pixel coordinates to image coordinates.
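As an illustration of this step, the six coefficients can be estimated by least squares from the measured pixel positions and the calibrated image coordinates of the fiducial marks. The following Python sketch (using NumPy) assumes four or more fiducial marks; all names are illustrative only.

    import numpy as np

    def fit_affine(pixel_xy, image_xy):
        """Fit x = a1 + a2*X + a3*Y and y = b1 + b2*X + b3*Y by least squares.
        pixel_xy : (n, 2) measured fiducial positions in pixel (file) coordinates (X, Y)
        image_xy : (n, 2) calibrated fiducial positions in image coordinates (x, y)
        Returns the coefficient arrays (a1, a2, a3) and (b1, b2, b3)."""
        pixel_xy = np.asarray(pixel_xy, dtype=float)
        image_xy = np.asarray(image_xy, dtype=float)
        X, Y = pixel_xy[:, 0], pixel_xy[:, 1]
        design = np.column_stack([np.ones_like(X), X, Y])
        a, *_ = np.linalg.lstsq(design, image_xy[:, 0], rcond=None)
        b, *_ = np.linalg.lstsq(design, image_xy[:, 1], rcond=None)
        return a, b

    def pixel_to_image(pixel_xy, a, b):
        """Transform pixel coordinates to image coordinates with the fitted coefficients."""
        pixel_xy = np.asarray(pixel_xy, dtype=float)
        X, Y = pixel_xy[:, 0], pixel_xy[:, 1]
        return np.column_stack([a[0] + a[1] * X + a[2] * Y,
                                b[0] + b[1] * X + b[2] * Y])

The residuals of this fit at the fiducial marks give the RMS error discussed below.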
The quality of the two-dimensional affine transformation is represented using a root mean
square (RMS) error. The RMS error represents the degree of correspondence between the
calibrated fiducial mark coordinates and their respective measured image coordinate values.
Large RMS errors indicate poor correspondence. This can be attributed to film deformation,
poor scanning quality, out-of-date calibration information, or image mismeasurement.
The affine transformation also defines the translation between the origin of the pixel coordinate
system and the image coordinate system (xo-file and yo-file). Additionally, the affine
transformation takes into consideration rotation of the image coordinate system by considering
angle Θ (see Figure 8-7). A scanned image of an aerial photograph is normally rotated due to
the scanning procedure.
The degree of variation between the x- and y-axis is referred to as nonorthogonality. The two-
dimensional affine transformation also considers the extent of nonorthogonality. The scale
difference between the x-axis and the y-axis is also considered using the affine transformation.
Lens Distortion Lens distortion deteriorates the positional accuracy of image points located on the image plane.
Two types of lens distortion exist: radial and tangential lens distortion. Lens distortion
occurs when light rays passing through the lens are bent, thereby changing directions and
intersecting the image plane at positions deviant from the norm. Figure 8-8 illustrates the
difference between radial and tangential lens distortion.
[Figure 8-8: radial (∆r) and tangential (∆t) lens distortion at a radial distance r from the principal point o]
Radial lens distortion causes imaged points to be distorted along radial lines from the principal
point o. The effect of radial lens distortion is represented as ∆r. Radial lens distortion is also
commonly referred to as symmetric lens distortion. Tangential lens distortion occurs at right
angles to the radial lines from the principal point. The effect of tangential lens distortion is
represented as ∆t. Since tangential lens distortion is much smaller in magnitude than radial lens
distortion, it is considered negligible.
The effects of lens distortion are commonly determined in a laboratory during the camera
calibration procedure.
The effects of radial lens distortion throughout an image can be approximated using a
polynomial. The following polynomial is used to determine coefficients associated with radial
lens distortion:
∆r = k0r + k1r³ + k2r⁵
∆r represents the radial distortion along a radial distance r from the principal point (Wolf,
1983). In most camera calibration reports, the lens distortion value is provided as a function of
radial distance from the principal point or field angle. IMAGINE OrthoBASE accommodates
radial lens distortion parameters in both scenarios.
Three coefficients (k0, k1, and k2) are computed using statistical techniques. Once the
coefficients are computed, each measurement taken on an image is corrected for radial lens
distortion.
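For illustration, a correction of this kind might be applied as in the following Python sketch. It assumes the coordinates have already been reduced to the principal point and that k0, k1, and k2 come from the camera calibration report; the function name and the first-order correction (subtracting ∆r along the radial direction) are illustrative only.

    import numpy as np

    def correct_radial_distortion(x, y, k0, k1, k2):
        """Apply a first-order radial lens distortion correction to image coordinates."""
        r = np.hypot(x, y)                       # radial distance from the principal point
        dr = k0 * r + k1 * r**3 + k2 * r**5      # radial distortion at distance r
        safe_r = np.where(r > 0, r, 1.0)         # avoid division by zero at the principal point
        scale = np.where(r > 0, (r - dr) / safe_r, 1.0)
        return x * scale, y * scale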
Exterior Exterior orientation defines the position and angular orientation associated with an image. The
Orientation variables defining the position and orientation of an image are referred to as the elements of
exterior orientation. The elements of exterior orientation define the characteristics associated
with an image at the time of exposure or capture. The positional elements of exterior orientation
include Xo, Yo, and Zo. They define the position of the perspective center (O) with respect to
the ground space coordinate system (X, Y, and Z). Zo is commonly referred to as the height of
the camera above sea level, which is commonly defined by a datum.
The angular or rotational elements of exterior orientation describe the relationship between the
ground space coordinate system (X, Y, and Z) and the image space coordinate system (x, y, and
z). Three rotation angles are commonly used to define angular orientation. They are omega (ω),
phi (ϕ), and kappa (κ). Figure 8-9 illustrates the elements of exterior orientation.
[Figure 8-9: the elements of exterior orientation, showing the perspective center O, focal length f, principal point o, image point p (xp, yp), rotation angles ω, ϕ, κ, the auxiliary axes x’, y’, the exposure station position (Xo, Yo, Zo), and ground point P (Xp, Yp, Zp)]
Omega is a rotation about the photographic x-axis, phi is a rotation about the photographic y-
axis, and kappa is a rotation about the photographic z-axis, which are defined as being positive
if they are counterclockwise when viewed from the positive end of their respective axis.
Different conventions are used to define the order and direction of the three rotation angles
(Wang, Z., 1990). The ISPRS recommends the use of the ω, ϕ, κ convention. The photographic
z-axis is equivalent to the optical axis (focal length). The x’, y’, and z’ coordinates are parallel
to the ground space coordinate system.
Using the three rotation angles, the relationship between the image space coordinate system (x,
y, and z) and ground space coordinate system (X, Y, and Z or x’, y’, and z’) can be determined.
A 3 × 3 matrix defining the relationship between the two systems is used. This is referred to as
the orientation or rotation matrix, M. The rotation matrix can be defined as follows:
m 11 m 12 m 13
M = m 21 m 22 m 23
m 31 m 32 m 33
The rotation matrix is derived by applying a sequential rotation of omega about the x-axis, phi
about the y-axis, and kappa about the z-axis.
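As an illustration, one common way of forming M is sketched below in Python. The element signs depend on the rotation convention in use, so this sketch should be checked against the convention of the system at hand.

    import numpy as np

    def rotation_matrix(omega, phi, kappa):
        """Build the orientation matrix M from sequential rotations about the
        x-axis (omega), y-axis (phi), and z-axis (kappa). Angles in radians."""
        co, so = np.cos(omega), np.sin(omega)
        cp, sp = np.cos(phi), np.sin(phi)
        ck, sk = np.cos(kappa), np.sin(kappa)
        Mx = np.array([[1, 0, 0], [0, co, so], [0, -so, co]])    # rotation about x
        My = np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])    # rotation about y
        Mz = np.array([[ck, sk, 0], [-sk, ck, 0], [0, 0, 1]])    # rotation about z
        return Mz @ My @ Mx   # sequential rotation: omega, then phi, then kappa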
The Collinearity The following section defines the relationship between the camera/sensor, the image, and the
Equation ground. Most photogrammetric tools utilize the following formulations in one form or another.
With reference to Figure 8-9, an image vector a can be defined as the vector from the exposure
station O to the image point p. A ground space or object space vector A can be defined as the
vector from the exposure station O to the ground point P. The image vector and ground vector
are collinear, meaning that a single straight line extends from the exposure station through the
image point to the ground point.
The image vector and ground vector are only collinear if one is a scalar multiple of the other.
Therefore, the following statement can be made:
a = kA
Where k is a scalar multiple. The image and ground vectors must be within the same coordinate
system. Therefore, image vector a is comprised of the following components:
a = [ xp – xo, yp – yo, –f ]ᵀ
A = [ Xp – Xo, Yp – Yo, Zp – Zo ]ᵀ
In order for the image and ground vectors to be within the same coordinate system, the ground
vector must be multiplied by the rotation matrix M. The following equation can be formulated:
a = kMA
Where:
[ xp – xo, yp – yo, –f ]ᵀ = kM [ Xp – Xo, Yp – Yo, Zp – Zo ]ᵀ
The above equation defines the relationship between the perspective center of the camera/sensor
exposure station and ground point P appearing on an image with an image point location of p.
This equation forms the basis of the collinearity condition that is used in most photogrammetric
operations. The collinearity condition specifies that the exposure station, ground point, and its
corresponding image point location must all lie along a straight line, thereby being collinear.
Two equations comprise the collinearity condition.
xp – xo = –f · [ m11(Xp – Xo1) + m12(Yp – Yo1) + m13(Zp – Zo1) ] / [ m31(Xp – Xo1) + m32(Yp – Yo1) + m33(Zp – Zo1) ]

yp – yo = –f · [ m21(Xp – Xo1) + m22(Yp – Yo1) + m23(Zp – Zo1) ] / [ m31(Xp – Xo1) + m32(Yp – Yo1) + m33(Zp – Zo1) ]
One set of equations can be formulated for each ground point appearing on an image. The
collinearity condition is commonly used to define the relationship between the camera/sensor,
the image, and the ground.
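As an illustration, once the exterior orientation and focal length of an image are known, the collinearity equations can be used to project a ground point into image space. The following Python sketch assumes the rotation matrix M has already been formed (for example with the rotation_matrix sketch above); all names are illustrative.

    import numpy as np

    def ground_to_image(ground_pt, exposure_station, M, f, principal_pt=(0.0, 0.0)):
        """Project ground point (Xp, Yp, Zp) into image coordinates (xp, yp) using
        the collinearity condition. M is the 3x3 orientation matrix, f the focal
        length, exposure_station the perspective center (Xo, Yo, Zo)."""
        d = M @ (np.asarray(ground_pt, dtype=float) - np.asarray(exposure_station, dtype=float))
        xp = principal_pt[0] - f * d[0] / d[2]
        yp = principal_pt[1] - f * d[1] / d[2]
        return xp, yp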
Photogrammetric As stated previously, digital photogrammetry is used for many applications, ranging from
Solutions orthorectification and automated elevation extraction to stereopair creation, feature collection,
highly accurate point determination, and control point extension.
For any of the aforementioned tasks to be undertaken, a relationship between the camera/sensor,
the image(s) in a project, and the ground must be defined. This relationship is defined using the
interior orientation and exterior orientation parameters described previously.
Well-known obstacles in photogrammetry include defining the interior and exterior orientation
parameters for each image in a project using a minimum number of GCPs. Due to the costs and
labor intensive procedures associated with collecting ground control, most photogrammetric
applications do not have an abundant number of GCPs. Additionally, the exterior orientation
parameters associated with an image are normally unknown.
Depending on the input data provided, photogrammetric techniques such as space resection,
space forward intersection, and bundle block adjustment are used to define the variables
required to perform orthorectification, automated DEM extraction, stereopair creation, highly
accurate point determination, and control point extension.
Space Resection Space resection is a technique that is commonly used to determine the exterior orientation
parameters associated with one image or many images based on known GCPs. Space resection
is based on the collinearity condition. Space resection using the collinearity condition specifies
that, for any image, the exposure station, the ground point, and its corresponding image point
must lie along a straight line.
If a minimum number of three GCPs is known in the X, Y, and Z direction, space resection
techniques can be used to determine the six exterior orientation parameters associated with an
image. Space resection assumes that camera information is available.
Space resection is commonly used to perform single frame orthorectification, where one image
is processed at a time. If multiple images are being used, space resection techniques require that
a minimum of three GCPs be located on each image being processed.
Using the collinearity condition, the positions of the exterior orientation parameters are
computed. Light rays originating from at least three GCPs pass through the image plane at the
image positions of the GCPs and resect at the perspective center of the camera or
sensor. Using least squares adjustment techniques, the most probable positions of exterior
orientation can be computed. Space resection techniques can be applied to one image or
multiple images.
Space Forward Space forward intersection is a technique that is commonly used to determine the ground
Intersection coordinates X, Y, and Z of points that appear in the overlapping areas of two or more images
based on known interior orientation and known exterior orientation parameters. The collinearity
condition is enforced, stating that the corresponding light rays from the two exposure stations
pass through the corresponding image points on the two images and intersect at the same ground
point. Figure 8-10 illustrates the concept associated with space forward intersection.
[Figure 8-10: space forward intersection, showing exposure stations O1 (Xo1, Yo1, Zo1) and O2 (Xo2, Yo2, Zo2), image points p1 and p2, and the ground point coordinates Xp, Yp, Zp]
Space forward intersection techniques assume that the exterior orientation parameters
associated with the images are known. Using the collinearity equations, the exterior orientation
parameters along with the image coordinate measurements of point p on Image 1 and Image 2
are input to compute the Xp, Yp, and Zp coordinates of ground point p.
Space forward intersection techniques can be used for applications associated with collecting
GCPs, cadastral mapping using airborne surveying techniques, and highly accurate point
determination.
Bundle Block For mapping projects having more than two images, the use of space intersection and space
Adjustment resection techniques is limited. This can be attributed to the lack of information required to
perform these tasks. For example, it is fairly uncommon for the exterior orientation parameters
to be highly accurate for each photograph or image in a project, since these values are generated
photogrammetrically. Airborne GPS and INS techniques normally provide initial
approximations to exterior orientation, but the final values for these parameters must be
adjusted to attain higher accuracies.
Similarly, rarely are there enough accurate GCPs for a project of 30 or more images to perform
space resection (i.e., a minimum of 90 is required). Even when enough GCPs are available, the
time required to identify and measure all of the points makes the approach costly.
The costs associated with block triangulation and orthorectification are largely dependent on the
number of GCPs used. To minimize the costs of a mapping project, fewer GCPs are collected
and used. To ensure that high accuracies are attained, an approach known as bundle block
adjustment is used.
A bundle block adjustment is best defined by examining the individual words in the term. A
bundled solution is computed including the exterior orientation parameters of each image in a
block and the X, Y, and Z coordinates of tie points and adjusted GCPs. A block of images
contained in a project is simultaneously processed in one solution. A statistical technique known
as least squares adjustment is used to estimate the bundled solution for the entire block while
also minimizing and distributing error.
Block triangulation is the process of defining the mathematical relationship between the images
contained within a block, the camera or sensor model, and the ground. Once the relationship has
been defined, accurate imagery and information concerning the Earth’s surface can be created.
When processing frame camera, digital camera, videography, and nonmetric camera imagery,
block triangulation is commonly referred to as aerial triangulation (AT). When processing
imagery collected with a pushbroom sensor, block triangulation is commonly referred to as
triangulation.
There are several models for block triangulation. The common models used in photogrammetry
are block triangulation with the strip method, the independent model method, and the bundle
method. Of these, bundle block adjustment is the most rigorous, considering the minimization
and distribution of errors. Bundle block adjustment uses the
collinearity condition as the basis for formulating the relationship between image space and
ground space. IMAGINE OrthoBASE uses bundle block adjustment techniques.
In order to understand the concepts associated with bundle block adjustment, consider an example
comprising two images and three GCPs whose X, Y, and Z coordinates are known.
Additionally, six tie points are available. Figure 8-11 illustrates the photogrammetric
configuration.
[Figure 8-11: photogrammetric configuration of two overlapping images with three GCPs and six tie points]
For a ground point A measured on the overlap area of both images, four collinearity (observation) equations can be formulated:

xa1 – xo = –f · [ m11(XA – Xo1) + m12(YA – Yo1) + m13(ZA – Zo1) ] / [ m31(XA – Xo1) + m32(YA – Yo1) + m33(ZA – Zo1) ]

ya1 – yo = –f · [ m21(XA – Xo1) + m22(YA – Yo1) + m23(ZA – Zo1) ] / [ m31(XA – Xo1) + m32(YA – Yo1) + m33(ZA – Zo1) ]

xa2 – xo = –f · [ m′11(XA – Xo2) + m′12(YA – Yo2) + m′13(ZA – Zo2) ] / [ m′31(XA – Xo2) + m′32(YA – Yo2) + m′33(ZA – Zo2) ]

ya2 – yo = –f · [ m′21(XA – Xo2) + m′22(YA – Yo2) + m′23(ZA – Zo2) ] / [ m′31(XA – Xo2) + m′32(YA – Yo2) + m′33(ZA – Zo2) ]

Where:
xa1, ya1 = the image coordinates of point A measured on the first image
xa2, ya2 = the image coordinates of point A measured on the second image
Xo1, Yo1, Zo1 = the perspective center coordinates of the first image
Xo2, Yo2, Zo2 = the perspective center coordinates of the second image
m, m′ = the rotation matrix elements of the first and second image, respectively
If three GCPs have been measured on the overlap areas of two images, twelve equations can be
formulated, which includes four equations for each GCP (refer to Figure 8-11).
Additionally, if six tie points have been measured on the overlap areas of the two images,
twenty-four equations can be formulated, which includes four for each tie point. This is a total
of 36 observation equations (refer to Figure 8-11).
The previous example has the following unknowns:
• Six exterior orientation parameters for the left image (i.e., X, Y, Z, omega, phi, kappa)
• Six exterior orientation parameters for the right image (i.e., X, Y, Z, omega, phi and kappa),
and
• X, Y, and Z coordinates of the tie points. Thus, for six tie points, this includes eighteen
unknowns (six tie points times three X, Y, Z coordinates).
The total number of unknowns is 30. The overall quality of a bundle block adjustment is largely
a function of the quality and redundancy in the input data. In this scenario, the redundancy in
the project can be computed by subtracting the number of unknowns, 30, from the number of
observations, 36. The resulting redundancy is six. This term is commonly referred to as the degrees
of freedom in a solution.
Once each observation equation is formulated, the collinearity condition can be solved using an
approach referred to as least squares adjustment.
Least Squares Least squares adjustment is a statistical technique that is used to estimate the unknown
Adjustment parameters associated with a solution while also minimizing error within the solution. With
respect to block triangulation, least squares adjustment techniques are used to estimate or adjust
the exterior orientation of each image, estimate the X, Y, and Z coordinates of tie points, and
minimize and distribute data error through the network of observations.
Data error is attributed to the inaccuracy associated with the input GCP coordinates, measured
tie point and GCP image positions, camera information, and systematic errors.
The least squares approach requires iterative processing until a solution is attained. A solution
is obtained when the residuals associated with the input data are minimized.
The least squares approach involves determining the corrections to the unknown parameters
based on the criteria of minimizing input measurement residuals. The residuals are derived from
the difference between the measured (i.e., user input) and computed value for any particular
measurement in a project. In the block triangulation process, a functional model can be formed
based upon the collinearity equations.
The functional model refers to the specification of an equation that can be used to relate
measurements to parameters. In the context of photogrammetry, measurements include the
image locations of GCPs and GCP coordinates, while the exterior orientations of all the images
are important parameters estimated by the block triangulation process.
The residuals, which are minimized, include the image coordinates of the GCPs and tie points
along with the known ground coordinates of the GCPs. A simplified version of the least squares
condition can be broken down into a formulation as follows:
V = AX – L (including a statistical weight matrix P)
Where:
V = the matrix containing the image coordinate residuals
A = the matrix containing the partial derivatives with respect to the unknown
parameters, including exterior orientation, interior orientation, XYZ tie
point, and GCP coordinates
X = the matrix containing the corrections to the unknown parameters
L = the matrix containing the input observations (i.e., image coordinates and
GCP coordinates)
The components of the least squares condition are directly related to the functional model based
on collinearity equations. The A matrix is formed by differentiating the functional model, which
is based on collinearity equations, with respect to the unknown parameters such as exterior
orientation, etc. The L matrix is formed by subtracting the initial results obtained from the
functional model with newly estimated results determined from a new iteration of processing.
The X matrix contains the corrections to the unknown exterior orientation parameters. The X
matrix is calculated in the following manner:
X = (AᵀPA)⁻¹ AᵀPL
Where:
X = the matrix containing the corrections to the unknown parameters
A = the matrix containing the partial derivatives with respect to the unknown parameters
ᵀ = the matrix transpose
P = the matrix containing the weights of the observations
L = the matrix containing the observations
Once a least squares iteration of processing is completed, the corrections to the unknown
parameters are added to the initial estimates. For example, if initial approximations to exterior
orientation are provided from Airborne GPS and INS information, the estimated corrections
computed from the least squares adjustment are added to the initial value to compute the
updated exterior orientation values. This iterative process of least squares adjustment continues
until the corrections to the unknown parameters are less than a user-specified threshold
(commonly referred to as a convergence value).
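As an illustration, a single least squares iteration of this form might look like the following Python sketch. The functions that build the A and L matrices are placeholders for the linearized collinearity model; all names and the convergence test are illustrative only.

    import numpy as np

    def adjust(initial_params, observations, build_A, build_L, P, tol=1e-6, max_iter=10):
        """Iterative least squares update X = (A^T P A)^-1 A^T P L.
        build_A and build_L are user-supplied functions returning the partial
        derivative matrix and the observation (misclosure) vector for the
        current parameter estimates."""
        params = np.array(initial_params, dtype=float)
        for _ in range(max_iter):
            A = build_A(params, observations)
            L = build_L(params, observations)
            X = np.linalg.solve(A.T @ P @ A, A.T @ P @ L)   # corrections to the unknowns
            V = A @ X - L                                    # residuals for this iteration
            params = params + X                              # add corrections to estimates
            if np.max(np.abs(X)) < tol:                      # convergence value reached
                break
        return params, V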
The V residual matrix is computed at the end of each iteration of processing. Once an iteration
is completed, the new estimates for the unknown parameters are used to recompute the input
observations such as the image coordinate values. The difference between the initial
measurements and the new estimates is obtained to provide the residuals. Residuals provide
preliminary indications of the accuracy of a solution. The residual values indicate the degree to
which a particular observation (input) fits with the functional model. For example, the image
residuals reflect the accuracy with which GCPs were measured and collected in the field. After each successive
iteration of processing, the residuals become smaller until they are satisfactorily minimized.
Once the least squares adjustment is completed, the block triangulation results include:
• Final exterior orientation parameters of each image in a block and their accuracy
• Final interior orientation parameters of each image in a block and their accuracy
The results from the block triangulation are then used as the primary input for the following
tasks:
• Feature collection
• DEM extraction
• Orthorectification
Self-calibrating Normally, there are systematic errors related to the imaging and processing system,
Bundle Adjustment such as lens distortion, film distortion, atmospheric refraction, scanner errors, etc. These errors
reduce the accuracy of triangulation results, especially when dealing with large-scale imagery and
high accuracy triangulation. There are several ways to reduce the influence of these systematic
errors, such as a posteriori compensation, test-field calibration, and the most common approach:
self-calibration (Konecny and Lehmann, 1984; Wang, Z., 1990).
The self-calibrating methods use additional parameters in the triangulation process to eliminate
the systematic errors. How well it works depends on many factors such as the strength of the
block (overlap amount, crossing flight lines), the GCP and tie point distribution and amount, the
size of systematic errors versus random errors, the significance of the additional parameters, the
correlation between additional parameters, and other unknowns.
There was intensive research and development for additional parameter models in
photogrammetry in the 70s and the 80s, and many research results are available (e.g., Bauer and
Müller, 1972; Brown 1975; Ebner, 1976; Grün, 1978; Jacobsen, 1980; Jacobsen, 1982; Li,
1985; Wang, Y., 1988a, Stojic et al, 1998). Based on these scientific reports, IMAGINE
OrthoBASE provides four groups of additional parameters for you to choose for different
triangulation circumstances. In addition, IMAGINE OrthoBASE allows the interior orientation
parameters to be analytically calibrated within its self-calibrating bundle block adjustment
capability.
Automatic Gross Error Normal random errors are subject to statistical normal distribution. In contrast, gross errors refer
Detection to errors that are large and are not subject to normal distribution. The gross errors among the
input data for triangulation can lead to unreliable results. Research during the 80s in the
photogrammetric community resulted in significant achievements in automatic gross error
detection in the triangulation process (e.g., Kubik, 1982; Li, 1983; Li, 1985; Jacobsen, 1984; El-
Hakim and Ziemann, 1984; Wang, Y., 1988a).
Methods for gross error detection began with residual checking using data-snooping and were
later extended to robust estimation (Wang, Z., 1990). The most common robust estimation
method is the iteration with selective weight functions. Based on the scientific research results
from the photogrammetric community, IMAGINE OrthoBASE offers two robust error
detection methods within the triangulation process.
It is worth mentioning that the effectiveness of automatic error detection depends not only on the
mathematical model, but also on the redundancy in the block. Therefore, more tie
points in more overlap areas contribute to better gross error detection. In addition, inaccurate
GCPs can distribute their errors to otherwise correct tie points; therefore, the ground and image
coordinates of GCPs should have better accuracy than tie points when comparing them within the
same scale space.
GCPs The instrumental component of establishing an accurate relationship between the images in a
project, the camera/sensor, and the ground is GCPs. GCPs are identifiable features located on
the Earth’s surface that have known ground coordinates in X, Y, and Z. A full GCP has X, Y,
and Z (elevation of the point) coordinates associated with it. Horizontal control only specifies
the X and Y coordinates, while vertical control only specifies the Z. The following features on the Earth’s
surface are commonly used as GCPs:
• Intersection of roads
• Survey benchmarks
Depending on the type of mapping project, GCPs can be collected from the following sources:
• Planimetric and topographic maps (accuracy varies as a function of map scale; approximate
accuracy ranges from several meters to 40 meters or more)
• DEMs (for the collection of vertical GCPs having Z coordinates associated with them,
where accuracy is dependent on the resolution of the DEM and the accuracy of the input
DEM)
When imagery or photography is exposed, GCPs are recorded and subsequently displayed on
the photography or imagery. During GCP measurement in IMAGINE OrthoBASE, the image
positions of GCPs appearing on an image or on the overlap areas of the images are collected.
It is highly recommended that a greater number of GCPs be available than are actually used in
the block triangulation. Additional GCPs can be used as check points to independently verify
the overall quality and accuracy of the block triangulation solution. A check point analysis
compares the photogrammetrically computed ground coordinates of the check points to the
original values. The result of the analysis is an RMSE that defines the degree of correspondence
between the computed values and the original values. Lower RMSE values indicate better
results.
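For illustration, a check point RMSE of this kind can be computed as in the following Python sketch; the function name is illustrative only.

    import numpy as np

    def check_point_rmse(computed_xyz, known_xyz):
        """RMSE between photogrammetrically computed check point coordinates
        and their known (surveyed) coordinates, reported per axis."""
        diff = np.asarray(computed_xyz, dtype=float) - np.asarray(known_xyz, dtype=float)
        return np.sqrt(np.mean(diff**2, axis=0))   # (RMSE_X, RMSE_Y, RMSE_Z)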
GCP Requirements The minimum GCP requirements for an accurate mapping project vary with respect to the size
of the project. With respect to establishing a relationship between image space and ground
space, the theoretical minimum number of GCPs is two GCPs having X, Y, and Z coordinates
and one GCP having a Z coordinate associated with it. This is a total of seven observations.
In establishing the mathematical relationship between image space and object space, seven
parameters defining the relationship must be determined. The seven parameters include a scale
factor (describing the scale difference between image space and ground space); X, Y, Z
(defining the positional differences between image space and object space); and three rotation
angles (omega, phi, and kappa) that define the rotational relationship between image space and
ground space.
In order to compute a unique solution, at least seven known parameters must be available. In
using the two X, Y, Z GCPs and one vertical (Z) GCP, the relationship can be defined. However,
to increase the accuracy of a mapping project, using more GCPs is highly recommended.
The following descriptions are provided for various projects:
Processing Multiple Figure 8-13 depicts the standard GCP configuration for a block of images, comprising four
Strips of Imagery strips of images, each containing eight overlapping images.
In this case, the GCPs form a strong geometric network of observations. As a general rule, it is
advantageous to have at least one GCP on every third image of a block. Additionally, whenever
possible, locate GCPs that lie on multiple images, around the outside edges of a block, and at
certain distances from one another within the block.
Tie Points A tie point is a point that has ground coordinates that are not known, but is visually recognizable
in the overlap area between two or more images. The corresponding image positions of tie
points appearing on the overlap areas of multiple images are identified and measured. Ground
coordinates for tie points are computed during block triangulation. Tie points can be measured
both manually and automatically.
Tie points should be visually well-defined in all images. Ideally, they should show good contrast
in two directions, like the corner of a building or a road intersection. Tie points should also be
well distributed over the area of the block. Typically, nine tie points in each image are adequate
for block triangulation. Figure 8-14 depicts the placement of tie points.
[Figure 8-14: distribution of nine tie points in a single image]
In a block of images with 60% overlap and 25-30% sidelap, nine points are sufficient to tie
together the block as well as individual strips (see Figure 8-15).
[Figure 8-15: tie points in a block of images]
Automatic Tie Point Selecting and measuring tie points is very time-consuming and costly. Therefore, in recent
Collection years, a major focus of research and development in photogrammetry has been automated
triangulation, in which automatic tie point collection is the main issue.
The other part of automated triangulation is automatic control point identification, which
remains unsolved due to the complexity of the scene content. There are several valuable research
results available for automated triangulation (e.g., Agouris and Schenk, 1996; Heipke, 1996;
Krzystek, 1998; Mayr, 1995; Schenk, 1997; Tang et al, 1997; Tsingas, 1995; Wang, Y., 1998b).
After investigating the advantages and the weaknesses of the existing methods, IMAGINE
OrthoBASE was designed to incorporate an advanced method for automatic tie point collection.
It is designed to work with a variety of digital images such as aerial images, satellite images,
digital camera images, and close range images. It also supports the processing of multiple strips
including adjacent, diagonal, and cross-strips.
Automatic tie point collection within IMAGINE OrthoBASE successfully performs the
following tasks:
• Automatic tie point extraction. The feature point extraction algorithms are used here to
extract the candidates of tie points.
• Point transfer. Feature points appearing on multiple images are automatically matched and
identified.
• Gross error detection. Erroneous points are automatically identified and removed from the
solution.
• Tie point selection. The intended number of tie points defined by you is automatically
selected as the final number of tie points.
The image matching strategies incorporated in IMAGINE OrthoBASE for automatic tie point
collection include coarse-to-fine matching; feature-based matching with geometric and
topological constraints, which is simplified from the structural matching algorithm (Wang, Y.,
1998b); and least squares matching for high tie point accuracy.
Image Matching Image matching refers to the automatic identification and measurement of corresponding image
Techniques points that are located on the overlapping areas of multiple images. The various image matching
methods can be divided into three categories: area based matching, feature based matching, and relation based matching.
Area Based Matching Area based matching is also called signal based matching. This method determines the
correspondence between two image areas according to the similarity of their gray level values.
The cross correlation and least squares correlation techniques are well-known methods for area
based matching.
Correlation Windows
Area based matching uses correlation windows. These windows consist of a local neighborhood
of pixels. One example of correlation windows is square neighborhoods (e.g., 3 × 3, 5 × 5, 7 ×
7 pixels). In practice, the windows vary in shape and dimension based on the matching
technique. Area correlation uses the characteristics of these windows to match ground feature
locations in one image to ground features on the other.
A reference window is the source window on the first image, which remains at a constant
location. Its dimensions are usually square in size (e.g., 3 × 3, 5 × 5, etc.). Search windows are
candidate windows on the second image that are evaluated relative to the reference window.
During correlation, many different search windows are examined until a location is found that
best matches the reference window.
Correlation Calculations
Two correlation calculations are described below: cross correlation and least squares
correlation. Most area based matching calculations, including these methods, normalize the
correlation windows. Therefore, it is not necessary to balance the contrast or brightness prior to
running correlation. Cross correlation is more robust in that it requires a less accurate a priori
position than least squares. However, its precision is limited to one pixel. Least squares
correlation can achieve precision levels of one-tenth of a pixel, but requires an a priori position
that is accurate to about two pixels. In practice, cross correlation is often followed by least
squares for high accuracy.
Cross Correlation
Cross correlation computes the correlation coefficient of the gray values between the template
window and the search window according to the following equation:
ρ = Σi,j [ g1(c1,r1) – ḡ1 ][ g2(c2,r2) – ḡ2 ] / sqrt( Σi,j [ g1(c1,r1) – ḡ1 ]² · Σi,j [ g2(c2,r2) – ḡ2 ]² )

with

ḡ1 = (1/n) Σi,j g1(c1,r1)      ḡ2 = (1/n) Σi,j g2(c2,r2)
Where:
ρ = the correlation coefficient
g(c,r) = the gray value of the pixel (c,r)
c1,r1 = the pixel coordinates on the left image
c2,r2 = the pixel coordinates on the right image
n = the total number of pixels in the window
i, j = pixel index into the correlation window
When using the area based cross correlation, it is necessary to have a good initial position for
the two correlation windows. If the exterior orientation parameters of the images being matched
are known, a good initial position can be determined. Also, if the contrast in the windows is very
poor, the correlation can fail.
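For illustration, the correlation coefficient for one pair of equally sized windows can be computed as in the following Python sketch. In practice the search window is shifted across a search area and the position with the highest ρ is retained; the function name is illustrative only.

    import numpy as np

    def cross_correlation(reference_window, search_window):
        """Normalized cross correlation coefficient between a reference (template)
        window and an equally sized candidate search window."""
        g1 = reference_window.astype(float) - reference_window.mean()
        g2 = search_window.astype(float) - search_window.mean()
        denom = np.sqrt((g1**2).sum() * (g2**2).sum())
        return (g1 * g2).sum() / denom if denom > 0 else 0.0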
Least squares correlation is iterative. The parameters calculated during the initial pass are used
in the calculation of the second pass and so on, until an optimum solution is determined. Least
squares matching can result in high positional accuracy (about 0.1 pixels). However, it is
sensitive to initial approximations. The initial coordinates for the search window prior to
correlation must be accurate to about two pixels or better.
When least squares correlation fits a search window to the reference window, both radiometric
(pixel gray values) and geometric (location, size, and shape of the search window)
transformations are calculated.
For example, suppose the change in gray values between two correlation windows is
represented as a linear relationship. Also assume that the change in the window’s geometry is
represented by an affine transformation.
g 2 ( c 2, r 2 ) = h 0 + h 1 g 1 ( c 1, r 1 )
c2 = a0 + a1 c1 + a2 r1
r2 = b0 + b1 c1 + b2 r1
Where:
c1,r1 = the pixel coordinate in the reference window
c2,r2 = the pixel coordinate in the search window
g1(c1,r1) = the gray value of pixel (c1,r1)
g2(c2,r2) = the gray value of pixel (c2,r2)
h 0 , h1 = linear gray value transformation parameters
a0, a1, a2 = affine geometric transformation parameters
b 0 , b1 , b 2 = affine geometric transformation parameters
Based on this assumption, the error equation for each pixel is derived, as shown in the following
equation:
v = (a0 + a1c1 + a2r1)gc + (b0 + b1c1 + b2r1)gr – h0 – h1g1(c1,r1) + ∆g
with ∆g = g2(c2,r2) – g1(c1,r1)
Where:
gc and gr are the gradients of g2 (c2,r2).
Feature Based Feature based matching determines the correspondence between two image features. Most
Matching feature based techniques match extracted point features (this is called feature point matching),
as opposed to other features, such as lines or complex objects. The feature points are also
commonly referred to as interest points. Poor contrast areas can be avoided with feature based
matching.
In order to implement feature based matching, the image features must initially be extracted.
There are several well-known operators for feature point extraction. Examples include the
Moravec Operator, the Dreschler Operator, and the Förstner Operator (Förstner and Gülch,
1987; Lü, 1988).
After the features are extracted, the attributes of the features are compared between two images.
The feature pair having the attributes with the best fit is recognized as a match. IMAGINE
OrthoBASE utilizes the Förstner interest operator to extract feature points.
Relation Based Relation based matching is also called structural matching (Vosselman and Haala, 1992; Wang,
Matching Y., 1994; and Wang, Y., 1995). This kind of matching technique uses the image features and
the relationship between the features. With relation based matching, the corresponding image
structures can be recognized automatically, without any a priori information. However, the
process is time-consuming since it deals with varying types of information. Relation based
matching can also be applied for the automatic recognition of control points.
Image Pyramid Because of the large amount of image data, an image pyramid is usually adopted during
image matching to reduce the computation time and to increase the matching
reliability. The pyramid is a data structure consisting of the same image represented several
times, at a decreasing spatial resolution each time. Each level of the pyramid contains the image
at a particular resolution.
The matching process is performed at each level of resolution. The search is first performed at
the lowest resolution level and subsequently at each higher level of resolution. Figure 8-16
shows a four-level image pyramid.
[Figure 8-16: a four-level image pyramid (e.g., Level 2: 256 × 256 pixels at 1:2 resolution; Level 3: 128 × 128 pixels at 1:4 resolution)]
There are different resampling methods available for generating an image pyramid. Theoretical
and practical investigations show that resampling based on the Gaussian filter,
approximated by a binomial filter, has superior properties with respect to preserving
image content and reducing computation time (Wang, Y., 1994). Therefore, IMAGINE
OrthoBASE uses this kind of pyramid layer instead of those currently available under ERDAS
IMAGINE, which are overwritten automatically by IMAGINE OrthoBASE.
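As an illustration of the principle, a pyramid of this kind can be generated with a separable 3 × 3 binomial filter (a simple approximation of a Gaussian), as in the following Python sketch. This is not the exact filter used by the software, only an example; all names are illustrative.

    import numpy as np

    def binomial_reduce(image):
        """Reduce an image by a factor of two: smooth with a separable 1-2-1
        binomial filter, then subsample every other pixel."""
        kernel = np.array([1.0, 2.0, 1.0]) / 4.0
        padded = np.pad(image.astype(float), 1, mode="reflect")
        # Vertical pass of the 1-2-1 filter.
        rows = (kernel[0] * padded[:-2, 1:-1] + kernel[1] * padded[1:-1, 1:-1]
                + kernel[2] * padded[2:, 1:-1])
        # Horizontal pass of the 1-2-1 filter.
        cols = np.pad(rows, ((0, 0), (1, 1)), mode="reflect")
        smoothed = (kernel[0] * cols[:, :-2] + kernel[1] * cols[:, 1:-1]
                    + kernel[2] * cols[:, 2:])
        return smoothed[::2, ::2]

    def build_pyramid(image, levels=4):
        """Return [full resolution, 1:2, 1:4, ...] pyramid levels."""
        pyramid = [image]
        for _ in range(levels - 1):
            pyramid.append(binomial_reduce(pyramid[-1]))
        return pyramid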
Satellite Photogrammetry
The SPOT satellite carries two high resolution visible (HRV) sensors, each of which is a
pushbroom scanner that takes a sequence of line images while the satellite circles the Earth. The
focal length of the camera optic is 1084 mm, which is very large relative to the length of the
camera (78 mm). The field of view is 4.1 degrees. The satellite orbit is circular, North-South
and South-North, about 830 km above the Earth, and sun-synchronous. A sun-synchronous
orbit is one that passes over a given point on the Earth’s surface at the same local solar time on each pass.
The Indian Remote Sensing (IRS-1C) satellite utilizes a pushbroom sensor consisting of three
individual CCDs. The ground resolution of the imagery ranges between 5 to 6 meters. The focal
length of the optic is approximately 982 mm. The pixel size of the CCD is 7 microns. The
images captured from the three CCDs are processed independently or merged into one image
and system corrected to account for the systematic error associated with the sensor.
Both the SPOT and IRS-1C satellites collect imagery by scanning along a line. This line is
referred to as the scan line. For each line scanned within the SPOT and IRS-1C sensors, there
is a unique perspective center and a unique set of rotation angles. The location of the perspective
center relative to the line scanner is constant for each line (interior orientation and focal length).
Since the motion of the satellite is smooth and practically linear over the length of a scene, the
perspective centers of all scan lines of a scene are assumed to lie along a smooth line. Figure 8-
17 illustrates the scanning technique.
[Figure 8-17: pushbroom scanning, showing the scan lines on the image and the corresponding lines on the ground]
The satellite exposure station is defined as the perspective center in ground coordinates for the
center scan line. The image captured by the satellite is called a scene. For example, a SPOT Pan
1A scene is composed of 6000 lines. For SPOT Pan 1A imagery, each of these lines consists of
6000 pixels. Each line is exposed for 1.5 milliseconds, so it takes 9 seconds to scan the entire
scene. (A scene from SPOT XS 1A is composed of only 3000 lines and 3000 columns and has
20-meter pixels, while Pan has 10-meter pixels.)
NOTE: The following section addresses only the 10 meter SPOT Pan scenario.
A pixel in the SPOT image records the light detected by one of the 6000 light sensitive elements
in the camera. Each pixel is defined by file coordinates (column and row numbers). The physical
dimension of a single, light-sensitive element is 13 × 13 microns. This is the pixel size in image
coordinates. The center of the scene is the center pixel of the center scan line. It is the origin of
the image coordinate system. Figure 8-18 depicts image coordinates in a satellite scene:
[Figure 8-18: image coordinates in a satellite scene of 6000 lines (rows), showing the file coordinate origin A with axes XF and YF, and the image coordinate origin C at the scene center with axes x and y]
Where:
A = origin of file coordinates
A-XF, A-YF = file coordinate axes
C = origin of image coordinates (center of scene)
C-x, C-y = image coordinate axes
SPOT Interior Figure 8-19 shows the interior orientation of a satellite scene. The transformation between file
Orientation coordinates and image coordinates is constant.
[Figure 8-19: interior orientation of a SPOT scene, showing the perspective centers O1 … Ok … On aligned along the orbit (N —> S), the focal length f, the scan lines of the image plane, the principal points PP1 … PPk … PPn, image points P1 … Pk … Pn with x coordinates x1 … xk … xn, and the bundles of light rays l1 … lk … ln]
For each scan line, a separate bundle of light rays is defined, where:
Pk = image point
xk = x value of image coordinates for scan line k
f = focal length of the camera
Ok = perspective center for scan line k, aligned along the orbit
PPk = principal point for scan line k
lk = light rays for scan line, bundled at perspective center Ok
SPOT Exterior SPOT satellite geometry is stable and the sensor parameters, such as focal length, are well-
Orientation known. However, the triangulation of SPOT scenes is somewhat unstable because of the
narrow, almost parallel bundles of light rays.
Ephemeris data for the orbit are available in the header file of SPOT scenes, providing information
about the recording of the data and the satellite orbit. They give the
satellite’s position in three-dimensional, geocentric coordinates at 60-second increments. The
velocity vector and some rotational velocities relating to the attitude of the camera are given, as
well as the exact time of the center scan line of the scene.
Ephemeris data that can be used in satellite triangulation include:
• Position of the satellite in geocentric coordinates (with the origin at the center of the Earth)
to the nearest second
The geocentric coordinates included with the ephemeris data are converted to a local ground
system for use in triangulation. The center of a satellite scene is interpolated from the header
data.
Light rays in a bundle defined by the SPOT sensor are almost parallel, lessening the importance
of the satellite’s position. Instead, the inclination angles (incidence angles) of the cameras on
board the satellite become the critical data.
The scanner can produce a nadir view. Nadir is the point directly below the camera. SPOT has
off-nadir viewing capability. Off-nadir refers to any point that is not directly beneath the
satellite, but is off to an angle (i.e., East or West of the nadir).
A stereo scene is achieved when two images of the same area are acquired on different days
from different orbits, one taken East of the other. For this to occur, there must be significant
differences in the inclination angles.
Inclination is the angle between a vertical on the ground at the center of the scene and a light
ray from the exposure station. This angle defines the degree of off-nadir viewing when the scene
was recorded. The cameras can be tilted in increments of a minimum of 0.6 to a maximum of
27 degrees to the East (negative inclination) or West (positive inclination). Figure 8-20
illustrates the inclination.
[Figure 8-20: inclination of a satellite stereo scene, showing the sensors at exposure stations O1 and O2, the eastward inclination I- and westward inclination I+ relative to the scene center C]
Where:
C = center of the scene
I- = eastward inclination
I+ = westward inclination
O1,O2 = exposure stations (perspective centers of imagery)
The orientation angle of a satellite scene is the angle between a perpendicular to the center scan
line and the North direction. The spatial motion of the satellite is described by the velocity
vector. The real motion of the satellite above the ground is further distorted by the Earth’s
rotation.
The velocity vector of a satellite is the satellite’s velocity if measured as a vector through a point
on the spheroid. It provides a technique to represent the satellite’s speed as if the imaged area
were flat instead of being a curved surface (see Figure 8-21).
[Figure 8-21: velocity vector V and orientation angle O of a single scene along the orbital path]
Where:
O = orientation angle
C = center of the scene
V = velocity vector
Satellite block triangulation provides a model for calculating the spatial relationship between a
satellite sensor and the ground coordinate system for each line of data. This relationship is
expressed as the exterior orientation, which consists of
• the perspective center of the center scan line (i.e., X, Y, and Z),
• the three rotations of the center scan line (i.e., omega, phi, and kappa), and
• the changes of the perspective center location and rotation angles across the scene.
In addition to fitting the bundle of light rays to the known points, satellite block triangulation
also accounts for the motion of the satellite by determining the relationship of the perspective
centers and rotation angles of the scan lines. It is assumed that the satellite travels in a smooth
motion as a scene is being scanned. Therefore, once the exterior orientation of the center scan
line is determined, the exterior orientation of any other scan line is calculated based on the
distance of that scan line from the center, and the changes of the perspective center location and
rotation angles.
Bundle adjustment for triangulating a satellite scene is similar to the bundle adjustment used for
aerial images. A least squares adjustment is used to derive a set of parameters that comes the
closest to fitting the control points to their known ground coordinates, and to intersecting tie
points.
The resulting parameters of satellite bundle adjustment are:
• The exterior orientation parameters (perspective center and rotation angles) of the center scan line, and
• Coefficients, from which the perspective center and rotation angles of all other scan lines
are calculated
Collinearity Equations Modified collinearity equations are used to compute the exterior orientation parameters
and Satellite Block associated with the respective scan lines in the satellite scenes. Each scan line has a unique
Triangulation perspective center and individual rotation angles. When the satellite moves from one scan line
to the next, these parameters change. Due to the smooth motion of the satellite in orbit, the
changes are small and can be modeled by low order polynomial functions.
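For illustration, the exterior orientation of an arbitrary scan line can be evaluated from the center scan line values plus low order polynomial terms in the line's distance from the center, as in the following Python sketch. The quadratic form and all names are assumptions for illustration only; the actual functional form used by a given triangulation package may differ.

    def scanline_exterior_orientation(line, center_line, center_params, poly_coeffs):
        """Evaluate exterior orientation parameters of a scan line from the center
        scan line values plus low order polynomial corrections.
        center_params : dict of center line values, e.g. {'X': ..., 'omega': ...}
        poly_coeffs   : dict mapping each parameter name to (linear, quadratic) terms."""
        d = line - center_line   # distance (in lines) from the center scan line
        return {name: value + poly_coeffs[name][0] * d + poly_coeffs[name][1] * d**2
                for name, value in center_params.items()}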
Figure 8-22: Ideal Point Distribution Over a Satellite Scene for Triangulation
Orthorectification As stated previously, orthorectification is the process of removing geometric errors inherent
within photography and imagery. The variables contributing to geometric errors include, but are
not limited to:
• Camera and sensor orientation
• Systematic errors associated with the camera or sensor
• Topographic relief displacement
• Earth curvature
By performing block triangulation or single frame resection, the parameters associated with
camera and sensor orientation are defined. Utilizing least squares adjustment techniques during
block triangulation minimizes the errors associated with camera or sensor instability.
Additionally, the use of self-calibrating bundle adjustment (SCBA) techniques along with
Additional Parameter (AP) modeling accounts for the systematic errors associated with camera
interior geometry. The effects of the Earth’s curvature are significant if a large photo block or
satellite imagery is involved. They are accounted for during the block triangulation procedure
by setting the relevant option. The effects of topographic relief displacement are accounted for
by utilizing a DEM during the orthorectification procedure.
The orthorectification process takes the raw digital imagery and applies a DEM and
triangulation results to create an orthorectified image. Once an orthorectified image is created,
each pixel within the image possesses geometric fidelity. Thus, measurements taken off an
orthorectified image represent the corresponding measurements as if they were taken on the
Earth’s surface (see Figure 8-23).
[Figure 8-23: orthorectification of raw imagery using a DEM to produce an orthorectified image]
An image or photograph with an orthographic projection is one for which every point looks as
if an observer were looking straight down at it, along a line of sight that is orthogonal
(perpendicular) to the Earth. The resulting orthorectified image is known as a digital orthoimage
(see Figure 8-24).
Relief displacement is corrected by taking each pixel of a DEM and finding the equivalent
position in the satellite or aerial image. A brightness value is determined for this location based
on resampling of the surrounding pixels. The brightness value, elevation, and exterior
orientation information are used to calculate the equivalent location in the orthoimage file.
[Figure 8-24: digital orthoimage generation, showing image point P1, the perspective center O, the focal length f, ground point P with coordinates X and Z from the DTM, and the orthoimage gray values]
Where:
P = ground point
P1 = image point
O = perspective center (origin)
X,Z = ground coordinates (in DTM file)
f = focal length
In contrast to conventional rectification techniques, orthorectification relies on the digital
elevation data, unless the terrain is flat. Various sources of elevation data exist, such as the
USGS DEM and a DEM automatically created from stereo image pairs. They are subject to data
uncertainty, due in part to the generalization or imperfections in the creation process. The
quality of the digital orthoimage is significantly affected by this uncertainty. For different image
data, different accuracy levels of DEMs are required to keep the uncertainty-related errors
within acceptable limits. While a near-vertical viewing SPOT scene can use very coarse
DEMs, images with large incidence angles need better elevation data such as USGS level-1
DEMs. For aerial photographs with a scale larger than 1:60000, elevation data accurate to 1
meter is recommended. The 1 meter accuracy reflects the accuracy of the Z coordinates in the
DEM, not the DEM resolution or posting.
Detailed discussion of DEM requirements for orthorectification can be found in Yang and
Williams (Yang and Williams, 1997). See “Bibliography”.
Resampling methods used are nearest neighbor, bilinear interpolation, and cubic convolution.
Generally, when the cell sizes of orthoimage pixels are selected, they should be similar to or larger
than the cell sizes of the original image. For example, if the image was scanned at 25 microns
(1016 dpi) producing an image of 9K × 9K pixels, one pixel would represent 0.025 mm on the
image. Assuming that the image scale of this photo is 1:40000, then the cell size on the ground
is about 1 m. For the orthoimage, it is appropriate to choose a pixel spacing of 1 m or larger.
Choosing a smaller pixel size oversamples the original image.
For SPOT Pan images, a cell size of 10 meters is appropriate. Any further enlargement from the
original scene to the orthophoto does not improve the image detail. For IRS-1C images, a cell
size of 6 meters is appropriate.
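For illustration, the ground cell size implied by the scanning resolution and photo scale can be derived as in the following Python sketch; the orthoimage pixel spacing would normally be chosen as this value or larger. The function name is illustrative only.

    def ortho_cell_size(scan_microns, photo_scale):
        """Ground cell size (meters) implied by the scanning resolution and photo scale."""
        return scan_microns * 1e-6 * photo_scale

    print(ortho_cell_size(25, 40000))   # -> 1.0 m, matching the 1:40000 example above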
Chapter 9
Radar Concepts
Introduction Radar images are quite different from other remotely sensed imagery you might use with
ERDAS IMAGINE software. For example, radar images may have speckle noise. Radar
images do, however, contain a great deal of information. ERDAS IMAGINE has many radar
packages, including IMAGINE Radar Interpreter, IMAGINE OrthoRadar, IMAGINE
StereoSAR DEM, IMAGINE IFSAR DEM, and the Generic SAR Node, with which you can
analyze your radar imagery. You have already learned about the various methods of speckle
suppression—those are IMAGINE Radar Interpreter functions.
This chapter tells you about the advanced radar processing packages that ERDAS IMAGINE
has to offer. The following sections go into detail about the geometry and functionality of those
modules of the IMAGINE Radar Mapping Suite.
IMAGINE OrthoRadar Theory
Parameters Required SAR image orthorectification requires certain information about the sensor and the SAR image.
for Orthorectification Different sensors (RADARSAT, ERS, etc.) express these parameters in different ways and in
different units. To simplify the design of our SAR tools and easily support future sensors, all
SAR images and sensors are described using our Generic SAR model. The sensor-specific
parameters are converted to a Generic SAR model on import.
The following table lists the parameters of the Generic SAR model and their units. These
parameters can be viewed in the SAR Parameters tab on the main Generic SAR Model
Properties (IMAGINE OrthoRadar) dialog.
Algorithm Description
Overview
The orthorectification process consists of several steps:
Ephemeris Modeling
The platform ephemeris is described by three or more platform locations and velocities. To
predict the platform position and velocity at some time (t):
Rs,x = a1 + a2t + a3t²
Rs,y = b1 + b2t + b3t²
Rs,z = c1 + c2t + c3t²
Vs,x = d1 + d2t + d3t²
Vs,y = e1 + e2t + e3t²
Vs,z = f1 + f2t + f3t²
Where Rs is the sensor position and Vs is the sensor velocity. To solve for the coefficients,
form the matrix A:

    A = | 1.0  t1  t1² |
        | 1.0  t2  t2² |
        | 1.0  t3  t3² |
Where t1, t2, and t3 are the times associated with each platform position. Select t such that t =
0.0 corresponds to the time of the second position point. Form vector b:
    b = [ Rs,x(1)  Rs,x(2)  Rs,x(3) ]ᵀ
Where Rs,x(i) is the x-coordinate of the i-th platform position (i = 1, 2, 3). We wish to solve
Ax = b, where:

    x = [ a1  a2  a3 ]ᵀ

To do so, use LU decomposition. The process is repeated for Rs,y, Rs,z, Vs,x, Vs,y, and Vs,z.
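A minimal sketch of this polynomial fit, assuming the three ephemeris samples are already available as arrays (the numeric values are illustrative only; NumPy's solver performs an LU factorization internally):

import numpy as np

# Times of the three ephemeris points, chosen so that t = 0.0 is the second point.
t = np.array([-10.0, 0.0, 10.0])                 # seconds (illustrative values)

# x-coordinates of the three platform positions Rs,x(1..3), in meters (illustrative).
rsx = np.array([7000100.0, 7000000.0, 6999850.0])

# Build A and b exactly as in the text, then solve A x = b for [a1, a2, a3].
A = np.column_stack([np.ones(3), t, t**2])
a1, a2, a3 = np.linalg.solve(A, rsx)             # np.linalg.solve uses an LU factorization

def rs_x(time):
    """Predicted platform x-position at an arbitrary time."""
    return a1 + a2 * time + a3 * time**2

# The same fit is repeated for Rs,y, Rs,z, Vs,x, Vs,y, and Vs,z.
print(rs_x(5.0))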
For each range line and range pixel in the SAR image, the corresponding target location (Rt) is
determined. The target location can be described as (lat, lon, elev) or (x, y, z) in ECS. The target
can either lie on a smooth Earth ellipsoid or on a smooth Earth ellipsoid plus an elevation model.
In either case, the location of Rt is determined by finding the intersection of the Doppler cone,
range sphere, and Earth model. In order to do this, first find the Doppler centroid and slant range
for a given SAR image pixel.
Let i = range pixel and j = range line.
Time
Time T(j) is thus:

    T(j) = T(0) + ((j − 1) / (Na − 1)) · tdur
Where T(0) is the image start time, Na is the number of range lines, and tdur is the image
duration time.
Doppler Centroid
The computation of the Doppler centroid fd to use with the SAR imaging model depends on how
the data was processed. If the data was deskewed, this value is always 0. If the data is skewed,
then this value may be a nonzero constant or may vary with i.
Slant Range
The computation of the slant range to the pixel i depends on the projection of the image. If the
data is in a slant range projection, then the computation of slant range is straightforward:
    Rsl(i) = rsl + (i − 1)·∆rsr
Where Rsl(i) is the slant range to pixel i, rsl is the near slant range, and ∆rsr is the slant range
pixel spacing.
If the projection is a ground range projection, then this computation is potentially more
complicated and depends on how the data was originally projected into a ground range
projection by the SAR processor.
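For the simple slant range projection case, the two look-ups above reduce to a few lines. All numeric values in this sketch are placeholders rather than parameters of any particular scene:

# Range-line time and slant range for a pixel (i, j), slant range projection only.
# All numeric values below are placeholders, not taken from any real scene.

T0 = 0.0          # image start time (s)
t_dur = 15.0      # image duration time (s)
Na = 8000         # number of range lines
r_sl = 850000.0   # near slant range (m)
dr_sr = 8.0       # slant range pixel spacing (m)

def line_time(j):
    """T(j): time at range line j (1-based), per the equation above."""
    return T0 + (j - 1) / (Na - 1) * t_dur

def slant_range(i):
    """Rsl(i): slant range to range pixel i (1-based), per the equation above."""
    return r_sl + (i - 1) * dr_sr

print(line_time(4000), slant_range(2500))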
The Doppler cone, range sphere, and Earth model are given by:

    fD = (2 / (λ·Rsl)) · (Rs − Rt) ⋅ (Vs − Vt),   fD > 0 for forward squint

    Rsl = | Rs − Rt |

    (Rt(x)² + Rt(y)²) / (Re + htarg)² + Rt(z)² / (Rm + htarg)² = 1
Where Rs and Vs are the platform position and velocity respectively, Vt is the target velocity
( = 0, in this coordinate system), Re is the Earth semimajor axis, and Rm is the Earth semiminor
axis. The platform position and velocity vectors Rs and Vs can be found as a function of time
T(j) using the ephemeris equations developed previously.
Figure 9-1 graphically illustrates the solution for the target location given the sensor ephemeris,
Doppler cone, range sphere, and flat Earth model.
Ephemeris Adjustment
There are three possible adjustments that can be made: along track, cross track, and radial. In
IMAGINE OrthoRadar, the along track adjustment is performed separately. The cross track and
radial adjustments are made simultaneously. These adjustments are made using residuals
associated with GCPs. Each GCP has a map coordinate (such as lat, lon) and an elevation. Also,
an SAR image range line and range pixel must be given. The SAR image range line and range
pixel are converted to Rt using the method described previously (substituting htarg = elevation
of GCP above ellipsoid used in SAR processing).
The along track adjustment is computed first, followed by the cross track and radial
adjustments. The two adjustment steps are then repeated.
For more information, consult SAR Geocoding: Data and Systems, Gunter Schreier, Ed.
Orthorectification
The ultimate goal in orthorectification is to determine, for a given target location on the ground,
the associated range line and range pixel from the input SAR image, including the effects of
terrain.
To do this, there are several steps. First, take the target location and locate the associated range
line and range pixel from the input SAR image assuming smooth terrain. This places you in
approximately the correct range line. Next, look up the elevation at the target from the input
DEM. The elevation, in combination with the known slant range to the target, is used to
determine the correct range pixel. The data can now be interpolated from the input SAR image.
Output Formation
For each point in the output grid, there is an associated Rt. This target should fall on the surface
of the Earth model used for SAR processing, thus a conversion is made between the Earth model
used for the output grid and the Earth model used during SAR processing.
The process of orthorectification starts with a location on the ground. The line and pixel location
of the pixel to this map location can be determined from the map location and the sparse
mapping grid. The value at this pixel location is then assigned to the map location. Figure 9-2
illustrates this process.
IMAGINE StereoSAR DEM Theory
Introduction This chapter details the theory that supports IMAGINE StereoSAR DEM processing.
To understand the way IMAGINE StereoSAR DEM works to create DEMs, it is first helpful to
look at the process from beginning to end. Figure 9-3 shows a stylized process for basic
operation of the IMAGINE StereoSAR DEM module.
[Figure 9-3: IMAGINE StereoSAR DEM process flow: Import; Affine Registration with Tie Points; Automatic Image Correlation of the coregistered images; Parallax File; Range/Doppler Stereo Intersection; Sensor-based DEM; Resample and Reproject; final Digital Elevation Model]
Input There are many elements to consider in the Input step. These include beam mode selection,
importing files, orbit correction, and ephemeris data.
The two initial calculation sequences have disparate beam mode demands. Automatic
correlation works best with images acquired with as little angular divergence as possible. This
is because different imaging angles produce different-looking images, and the automatic
correlator is looking for image similarity. The requirement of image similarity is the same
reason images acquired at different times can be hard to correlate. For example, images taken
of agricultural areas during different seasons can be extremely different and, therefore, difficult
or impossible for the automatic correlator to process successfully.
Conversely, the triangulation calculation is most accurate when there is a large intersection
angle between the two images (see Figure 9-4). This results in images that are truly different
due to geometric distortion. The ERDAS IMAGINE automatic image correlator has proven
sufficiently robust to match images with significant distortion if the proper correlator
parameters are used.
[Figure 9-4: Stereo intersection geometry showing the incidence angles and the elevation of a point]
NOTE: IMAGINE StereoSAR DEM has built-in checks that assure the sensor associated with
the Reference image is closer to the imaged area than the sensor associated with the Match
image.
A third factor, cost effectiveness, must also often be evaluated. First, select either Fine or
Standard Beam modes. Fine Beam images with a pixel size of six meters would seem, at first
glance, to offer a much better DEM than Standard Beam with 12.5-meter pixels. However, a
Fine Beam image covers only one-fourth the area of a Standard Beam image and produces a
DEM only minimally better.
Various Standard Beam combinations, such as an S3/S6 or an S3/S7, cover a larger area per
scene, but the usable stereo coverage is limited to the overlap area, which might be only
three-quarters of the scene area.
Testing at both ERDAS and RADARSAT has indicated that a stereopair consisting of a Wide
Beam mode 2 image and a Standard Beam mode 7 image produces the most cost-effective DEM
at a resolution consistent with the resolution of the instrument and the technique.
Import
The imagery required for the IMAGINE StereoSAR DEM module can be imported using the
ERDAS IMAGINE radar-specific importers for either RADARSAT or ESA (ERS-1, ERS-2).
These importers automatically extract data from the image header files and store it in an Hfa file
attached to the image. In addition, they abstract key parameters necessary for sensor modeling
and attach these to the image as a Generic SAR Node Hfa file. Other radar imagery (e.g., SIR-
C) can be imported using the Generic Binary Importer. The Generic SAR Node can then be used
to attach the Generic SAR Node Hfa file.
Orbit Correction
Extensive testing of both the IMAGINE OrthoRadar and IMAGINE StereoSAR DEM modules
has indicated that the ephemeris data from the RADARSAT and the ESA radar satellites is very
accurate (see appended accuracy reports). However, the accuracy does vary with each image,
and there is no a priori way to determine the accuracy of a particular data set.
The modules of the IMAGINE Radar Mapping Suite: IMAGINE OrthoRadar, IMAGINE
StereoSAR DEM, and IMAGINE IFSAR DEM, allow for correction of the sensor model using
GCPs. Since the supplied orbit ephemeris is very accurate, orbit correction should only be
attempted if you have very good GCPs. In practice, it has been found that GCPs from 1:24 000
scale maps or a handheld GPS are the minimum acceptable accuracy. In some instances, a single
accurate GCP has been found to result in a significant increase in accuracy.
As with image warping, a uniform distribution of GCPs results in a better overall result and a
lower RMS error. Again, accurate GCPs are an essential requirement. If your GCPs are
questionable, you are probably better off not using them. Similarly, the GCP must be
recognizable in the radar imagery to within plus or minus one to two pixels. Road intersections,
reservoir dams, airports, or similar man-made features are usually best. Lacking one very
accurate and locatable GCP, it would be best to utilize several good GCPs dispersed throughout
the image as would be done for a rectification.
Ellipsoid vs. Geoid Heights
The IMAGINE Radar Mapping Suite is based on the World Geodetic System (WGS) 84 Earth
ellipsoid. The sensor model uses this ellipsoid for the sensor geometry. For maximum accuracy,
all GCPs used to refine the sensor model for all IMAGINE Radar Mapping Suite modules
(IMAGINE OrthoRadar, IMAGINE StereoSAR DEM, or IMAGINE IFSAR DEM) should be
converted to this ellipsoid in all three dimensions: latitude, longitude, and elevation.
Note that, while ERDAS IMAGINE reprojection converts latitude and longitude to UTM WGS
84 for many input projections, it does not modify the elevation values. To do this, it is necessary
to determine the elevation offset between WGS 84 and the datum of the input GCPs. For some
input datums this can be accomplished using the Web site:
www.ngs.noaa.gov/GEOID/geoid.html. This offset must then be added to, or subtracted from,
the input GCP. Many handheld GPS units can be set to output in WGS 84 coordinates.
One elegant feature of the IMAGINE StereoSAR DEM module is that orbit refinement using
GCPs can be applied at any time in the process flow without losing the processing work done
up to that stage. The stereopair can even be processed all the way through to a final DEM, and
then you can go back and refine the orbit. This refined orbit is transferred through all the
intermediate files (Subset, Despeckle, etc.). Only the final step, Height, would need to be rerun
using the new refined orbit model.
Refined Ephemeris
The ephemeris normally received with RADARSAT, ERS-1, or ERS-2 imagery is based on
an extrapolation of the sensor orbit from previous positions. If the satellite received an orbit
correction command, this effect might not be reflected in the previous position extrapolation.
The receiving stations for both satellites also do ephemeris calculations that include post image
acquisition sensor positions. These are generally more accurate. They are not, unfortunately,
easy to acquire and attach to the imagery.
Subset Use of the Subset option is straightforward. It is not necessary that the two subsets define
exactly the same area: an approximation is acceptable. This option is normally used in two
circumstances. First, it can be used to define a small subset for testing correlation parameters
prior to running a full scene. Also, it would be used to constrain the two input images to only
the overlap area. Constraining the input images is useful for saving data space, but is not
necessary for functioning of IMAGINE StereoSAR DEM — it is purely optional.
Despeckle The functions to despeckle the images prior to automatic correlation are optional. The rationale
for despeckling at this time is twofold. First, image speckle noise is not correlated between the
two images: it is randomly distributed in both. Thus, it only serves to confuse the automatic
correlation calculation. Presence of speckle noise could contribute to false positives during the
correlation process.
Second, as discussed under Beam Mode Selection, the two images the software is trying to
match are different due to viewing geometry differences. The slight low-pass character of the
despeckle algorithm may actually move both images toward a more uniform appearance, which
aids automatic correlation.
Functionally, the despeckling algorithms presented here are identical to those available in the
IMAGINE Radar Interpreter. In practice, a 3 × 3 or 5 × 5 kernel has been found to work
acceptably. Note that all ERDAS IMAGINE speckle reduction algorithms allow the kernel to
be tuned to the image being processed via the Coefficient of Variation. Calculation of this
parameter is accessed through the IMAGINE Radar Interpreter Speckle Suppression interface.
See the IMAGINE Radar Interpreter tour guide in the ERDAS IMAGINE Tour Guides.
Degrade The Degrade option offered at this step in the processing is commonly used for two purposes.
If the input imagery is Single Look Complex (SLC), the pixels are not square (this is shown as
the Range and Azimuth pixel spacing sizes). It may be desirable at this time to adjust the Y scale
factor to produce pixels that are more square. This is purely an option; the software accurately
processes undegraded SLC imagery.
Secondly, if data space or processing time is limited, it may be useful to reduce the overall size
of the image file while still processing the full images. Under those circumstances, a reduction
of two or three in both X and Y might be appropriate. Note that the processing flow
recommended for maximum accuracy processes the full resolution scenes and correlates for
every pixel. Degrade is used subsequent to Match to lower DEM variance (LE90) and increase
pixel size to approximately the desired output posting.
Rescale
This operation converts the input imagery bit format, commonly unsigned 16-bit, to unsigned
8-bit using a two standard deviations stretch. This is done to reduce the overall data file sizes.
Testing has not shown any advantage to retaining the original 16-bit format, and use of this
option is routinely recommended.
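A minimal sketch of a two standard deviations stretch with NumPy is shown below; the exact scaling applied by ERDAS IMAGINE may differ in detail, so treat this as an illustration of the idea only:

import numpy as np

def rescale_2sd(img_u16):
    """Linearly stretch mean +/- 2 standard deviations of a 16-bit image into 0-255."""
    img = img_u16.astype(np.float64)
    mean, sd = img.mean(), img.std()
    lo, hi = mean - 2.0 * sd, mean + 2.0 * sd
    out = (img - lo) / (hi - lo) * 255.0
    return np.clip(out, 0, 255).astype(np.uint8)

# Example with synthetic data standing in for an unsigned 16-bit SAR image.
sar = np.random.gamma(shape=2.0, scale=300.0, size=(512, 512)).astype(np.uint16)
sar_8bit = rescale_2sd(sar)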
Register Register is the first of the Process Steps (other than Input) that must be done. This operation
serves two important functions: proper user input at this processing level affects the speed of
subsequent processing and may affect the accuracy of the final output DEM.
The registration operation uses an affine transformation to rotate the Match image so that it
more closely aligns with the Reference image. The purpose is to adjust the images so that the
elevation-induced pixel offset (parallax) is mostly in the range (x-axis) direction (i.e., the
images are nearly epipolar). Doing this greatly reduces the required size of the search window
in the Match step.
One output of this step is the minimum and maximum parallax offsets, in pixels, in both the x-
and y-axis directions. These values must be recorded by the operator and are used in the Match
step to tune the IMAGINE StereoSAR DEM correlator parameter file (.ssc). These values are
critical to this tuning operation and, therefore, must be correctly extracted from the Register
step.
Two basic guidelines define the selection process for the tie points used for the registration.
First, as with any image-to-image registration, a better result is obtained if the tie points are
uniformly distributed throughout the images. Second, since you want the calculation to output
the minimum and maximum parallax offsets in both the x- and y-axis directions, the tie points
selected must be those that have the minimum and maximum parallax.
In practice, the following procedure has been found successful. First, select a fairly uniform grid
of about eight tie points that defines the lowest elevation within the image. Coastlines, river
flood plains, roads, and agricultural fields commonly meet these criteria. Use of the Solve
Geometric Model icon on the StereoSAR Registration Tool should yield values in the -5 to +5
range at this time. Next, identify and select three or four of the highest elevations within the
image. After selecting each tie point, click the Solve Geometric Model icon and note the effect
of each tie point on the minimum and maximum parallax values. When you feel you have
quantified these values, write them down and apply the resultant transform to the image.
Constrain This option is intended to allow you to define areas where it is not necessary to search the entire
search window area. A region of lakes would be such an area. This reduces processing time and
also minimizes the likelihood of finding false positives. This option is not implemented at
present.
Match An essential component, and the major time-saver of the IMAGINE StereoSAR DEM software
is automatic image correlation.
In automatic image correlation, a small subset (image chip) of the Reference image termed the
template (see Figure 9-5), is compared to various regions of the Match image’s search area
(Figure 9-6) to find the best Match point. The center pixel of the template is then said to be
correlated with the center pixel of the Match region. The software then proceeds to the next
pixel of interest, which becomes the center pixel of the new template.
Figure 9-5 shows the upper left (UL) corner of the Reference image. An 11 × 11 pixel template
is shown centered on the pixel of interest: X = 8, Y = 8.
Figure 9-6 shows the UL corner of the Match image. The 11 × 11 pixel template is shown
centered on the initial estimated correlation pixel X = 8, Y = 8. The 15 × 7 pixel search area is
shown in a dashed line. Since most of the parallax shift is in the range direction (x-axis), the
search area should always be a rectangle to minimize search time.
The ERDAS IMAGINE automatic image correlator works on the hierarchical pyramid
technique. This means that the image is successively reduced in resolution to provide a
coregistered set of images of increasing pixel size (see Figure 9-7). The automatic correlation
software starts at the top of the resolution pyramid with the lowest resolution image being
processed first. The results of this process are filtered and interpolated before being passed to
the next highest resolution layer as the initial estimated correlation point. From this estimated
point, the search is performed on this higher resolution layer.
[Figure 9-7: Resolution pyramid. Level 3: 128 × 128 pixels, resolution 1:4; Level 2: 256 × 256 pixels, resolution 1:2; Level 1: 512 × 512 pixels, full resolution (1:1). Matching ends on Level 1.]
Template Size
The size of the template directly affects computation time: a larger image chip takes more time.
However, too small of a template could contain insufficient image detail to allow accurate
matching. A balance must be struck between these two competing criteria, and is somewhat
image-dependent. A suitable template for a suburban area with roads, fields, and other features
could be much smaller than the required template for a vast region of uniform ground cover.
Because of viewing
geometry-induced differences in the Reference and Match images, the template from the
Reference image is never identical to any area of the Match image. The template must be large
enough to minimize this effect.
The IMAGINE StereoSAR DEM correlator parameters shown in Table 9-2 are for the library
file Std_LP_HD.ssc. These parameters are appropriate for a RADARSAT Standard Beam mode
(Std) stereopair with low parallax (LP) and high density of detail (HD). The low parallax
parameters are appropriate for images of low to moderate topography. The high density of detail
(HD) parameters are appropriate for the suburban area discussed above.
Note that the size of the template (Size X and Size Y) increases as you go up the resolution
pyramid. This size is the effective size if it were on the bottom of the pyramid (i.e., the full
resolution image). Since they are actually on reduced resolution levels of the pyramid, they are
functionally smaller. Thus, the 220 × 220 template on Level 6 is actually only 36 × 36 during
the actual search. By stating the template size relative to the full resolution image, it is easy to
display a box of approximate size on the input image to evaluate the amount of detail available
to the correlator, and thus optimize the template sizes.
Search Area
Considerable computer time is expended in searching the Match image for the exact Match
point. Thus, this search area should be minimized. (In addition, searching too large of an area
increases the possibility of a false match.) For this reason, the software first requires that the two
images be registered. This gives the software a rough idea of where the Match point might be.
In stereo DEM generation, you are looking for the offset of a point in the Match image from its
corresponding point in the Reference image (parallax). The minimum and maximum
displacement is quantified in the Register step and is used to restrain the search area.
In Figure 9-6, the search area is defined by four parameters: -X, +X, -Y, and +Y. Most of the
displacement in radar imagery is a function of the look angle and is in the range or x-axis
direction. Thus, the search area is always a rectangle emphasizing the x-axis. Because the total
search area (and, therefore, the total time) is X times Y, it is important to keep these values to a
minimum. Careful use at the Register step easily achieves this.
Step Size
Because a radar stereopair typically contains millions of pixels, it is not desirable to correlate
every pixel at every level of the hierarchical pyramid, nor is this even necessary to achieve an
accurate result. The density at which the automatic correlator is to operate at each level in the
resolution pyramid is determined by the step size (posting). The approach used is to keep
posting tighter (smaller step size) as the correlator works down the resolution pyramid. For
maximum accuracy, it is recommended to correlate every pixel at the full resolution level. This
result is then compressed by the Degrade step to the desired DEM cell size.
Threshold
The degree of similarity between the Reference template and each possible Match region within
the search area must be quantified by a mathematical metric. IMAGINE StereoSAR DEM uses
the widely accepted normalized correlation coefficient. The range of possible values extends
from -1 to +1, with +1 being an identical match. The algorithm uses the maximum value within
the search area as the correlation point.
The threshold in Table 9-2 is the minimum numerical value of the normalized correlation
coefficient that is accepted as a correlation point. If no value within the entire search area attains
this minimum, there is not a Match point for that level of the resolution pyramid. In this case,
the initial estimated position, passed from the previous level of the resolution pyramid, is
retained as the Match point.
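The sketch below illustrates the search described above, computing the normalized correlation coefficient over a rectangular search area and keeping the initial estimate when no score reaches the threshold. The template size, search limits, and threshold are placeholders, not values from the correlator library files:

import numpy as np

def ncc(template, window):
    """Normalized correlation coefficient between two same-sized arrays (-1 to +1)."""
    t = template - template.mean()
    w = window - window.mean()
    denom = np.sqrt((t * t).sum() * (w * w).sum())
    return (t * w).sum() / denom if denom > 0 else 0.0

def match_point(ref, match, row, col, half=5, dx=(7, 7), dy=(3, 3), threshold=0.3):
    """Search a rectangular area of the Match image for the best NCC match to the
    template centered at (row, col) in the Reference image. Returns
    (best_row, best_col, best_score); falls back to the initial estimate if no
    score reaches the threshold."""
    template = ref[row - half:row + half + 1, col - half:col + half + 1]
    best = (row, col, -1.0)
    for r in range(row - dy[0], row + dy[1] + 1):
        for c in range(col - dx[0], col + dx[1] + 1):
            window = match[r - half:r + half + 1, c - half:c + half + 1]
            if window.shape != template.shape:
                continue
            score = ncc(template, window)
            if score > best[2]:
                best = (r, c, score)
    if best[2] < threshold:
        return (row, col, best[2])   # keep the initial estimate for this level
    return best

# Synthetic Reference and Match images standing in for coregistered SAR scenes.
rng = np.random.default_rng(2)
ref = rng.random((200, 200))
match = np.roll(ref, shift=(1, 3), axis=(0, 1))   # Match shifted by a known parallax
print(match_point(ref, match, row=100, col=100))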
Correlator Library
To aid both the novice and the expert in rapidly selecting and refining an IMAGINE StereoSAR
DEM correlator parameter file for a specific image pair, a library of tested parameter files has
been assembled and is included with the software. These files are labeled using the following
syntax: (RADARSAT Beam mode)_(Magnitude of Parallax)_(Density of Detail).
Magnitude of Parallax
The magnitude of the parallax is divided into high parallax (_HP) and low parallax (_LP)
options. This determination is based upon the elevation changes and slopes within the images
and is somewhat subjective. This parameter determines the size of the search area.
Density of Detail
The level of detail within each template is divided into high density (_HD) and low density
(_LD) options. The density of detail for a suburban area with roads, fields, and other features
would be much higher than the density of detail for a vast region of uniform ground cover. This
parameter, in conjunction with beam mode, determines the required template sizes.
Quick Tests
It is often advantageous to quickly produce a low resolution DEM to verify that the automatic
image correlator is optimum before correlating on every pixel to produce the final DEM.
For this purpose, a Quick Test (_QT) correlator parameter file has been provided for each of the
full resolution correlator parameter files in the .ssc library. These correlators process the image
only through resolution pyramid Level 3. Processing time up to this level has been found to be
acceptably fast, and testing has shown that if the image is successfully processed to this level,
the correlator parameter file is probably appropriate.
Evaluation of the parallax files produced by the Quick Test correlators and subsequent
modification of the correlator parameter file is discussed in "IMAGINE StereoSAR DEM
Application" in the IMAGINE Radar Mapping Suite Tour Guide.
Degrade The second Degrade step compresses the final parallax image file (Level 1). While not strictly
necessary, it is logical and has proven advantageous to reduce the pixel size at this time to
approximately the intended posting of the final output DEM. Doing so at this time decreases the
variance (LE90) of the final DEM through averaging.
Height This step combines the information from the above processing steps to derive surface
elevations. The sensor models of the two input images are combined to derive the stereo
intersection geometry. The parallax values for each pixel are processed through this geometric
relationship to derive a DEM in sensor (pixel) coordinates.
Comprehensive testing of the IMAGINE StereoSAR DEM module has indicated that, with
reasonable data sets and careful work, the output DEM falls between DTED Level I and DTED
Level II. This corresponds to between USGS 30-meter and USGS 90-meter DEMs. Thus, an
output pixel size of 40 to 50 meters is consistent with this expected precision.
The final step is to resample and reproject this sensor DEM into the desired final output DEM.
The entire ERDAS IMAGINE reprojection package is accessed within the IMAGINE
StereoSAR DEM module.
IMAGINE IFSAR DEM Theory
Introduction Terrain height extraction is one of the most important applications for SAR images. There are
two basic techniques for extracting height from SAR images: stereo and interferometry. Stereo
height extraction is much like the optical process and is discussed in “IMAGINE StereoSAR
DEM Theory”. The subject of this section is SAR interferometry (IFSAR).
Height extraction from IFSAR takes advantage of one of the unique qualities of SAR images:
distance information from the sensor to the ground is recorded for every pixel in the SAR image.
Unlike optical and IR images, which contain only the intensity of the energy received at the
sensor, SAR images contain distance information in the form of phase. This distance is simply
the number of wavelengths of the source radiation from the sensor to a given point on the
ground. SAR sensors can record this information because, unlike optical and IR sensors, their
radiation source is active and coherent.
Unfortunately, this distance phase information in a single SAR image is mixed with phase noise
from the ground and other effects. For this reason, it is impossible to extract just the distance
phase from the total phase in a single SAR image. However, if two SAR images are available
that cover the same area from slightly different vantage points, the phase of one can be
subtracted from the phase of the other to produce the distance difference of the two SAR images
(hence the term interferometry). This is because the other phase effects for the two images are
approximately equal and cancel out each other when subtracted. What is left is a measure of the
distance difference from one image to the other. From this difference and the orbit information,
the height of every pixel can be calculated.
This chapter covers basic concepts and processing steps needed to extract terrain height from a
pair of interferometric SAR images.
Electromagnetic Wave Background
In order to understand the SAR interferometric process, one must have a general understanding
of electromagnetic waves and how they propagate. An electromagnetic wave is a changing
electric field that produces a changing magnetic field that produces a changing electric field, and
so on. As this process repeats, energy is propagated through empty space at the speed of light.
Figure 9-8 gives a description of the type of electromagnetic wave that we are interested in. In
this diagram, E indicates the electric field and H represents the magnetic field. The directions
of E and H are mutually perpendicular everywhere. In a uniform plane wave, E and H lie in a
plane and have the same value everywhere in that plane.
A wave of this type with both E and H transverse to the direction of propagation is called a
Transverse ElectroMagnetic (TEM) wave. If the electric field E has only a component in the y
direction and the magnetic field H has only a component in the z direction, then the wave is said
to be polarized in the y direction (vertically polarized). Polarization is generally defined as the
direction of the electric field component with the understanding that the magnetic field is
perpendicular to it.
[Figure 9-8: A vertically polarized TEM wave: electric field Ey, magnetic field Hz, and the direction of propagation]
The electromagnetic wave described above is the type that is sent and received by an SAR. The
SAR, like most equipment that uses electromagnetic waves, is only sensitive to the electric field
component of the wave; therefore, we restrict our discussion to it. The electric field of the wave
has two main properties that we must understand in order to understand SAR and
interferometry. These are the magnitude and phase of the wave. Figure 9-9 shows that the
electric field varies with time.
[Figure 9-9: Electric field versus distance at t = 0, t = T/4, and t = T/2; λ is the wavelength and P is a point of constant phase]
The figure shows how the wave phase varies with time at three different moments. In the figure
λ is the wavelength and T is the time required for the wave to travel one full wavelength. P is
a point of constant phase and moves to the right as time progresses. The wave has a specific
phase value at any given moment in time and at a specific point along its direction of travel. The
wave can be expressed in the form of Equation 1.
Equation 1

    Ey = cos(ωt + βx)

Where:

    ω = 2π / T   and   β = 2π / λ
Equation 1 is expressed in Cartesian coordinates and assumes that the maximum magnitude of
Ey is unity. It is more useful to express this equation in exponential form and include a
maximum term as in Equation 2.

Equation 2

    Ey = E0 · e^(j(ωt ± βx))
So far we have described the definition and behavior of the electromagnetic wave phase as a
function of time and distance. It is also important to understand how the strength or magnitude
behaves with time and distance from the transmitter. As the wave moves away from the
transmitter, its total energy stays the same but is spread over a larger distance. This means that
the energy at any one point (or its energy density) decreases with time and distance as shown in
Figure 9-10.
The magnitude of the wave decreases exponentially as the distance from the transmitter
increases. Equation 2 represents the general form of the electromagnetic wave that we are
interested in for SAR and IFSAR applications. Later, we further simplify this expression given
certain restrictions of an SAR sensor.
The Interferometric Model
Most uses of SAR imagery involve a display of the magnitude of the image reflectivity and
discard the phase when the complex image is magnitude-detected. The phase of an image pixel
representing a single scatterer is deterministic; however, the phase of an image pixel
representing multiple scatterers (in the same resolution cell) is made up of both a deterministic
and a nondeterministic, statistical part. For this reason, pixel phase in a single SAR image is
generally not useful. However, with proper selection of an imaging geometry, two SAR images
can be collected that have nearly identical nondeterministic phase components. These two SAR
images can be subtracted, leaving only a useful deterministic phase difference of the two images.
Figure 9-11 provides the basic geometric model for an interferometric SAR system.
[Figure 9-11: Geometric model for an interferometric SAR system]
Where:
A1 = antenna 1
A2 = antenna 2
Bi = baseline
R1 = vector from antenna 1 to point of interest
R2 = vector from antenna 2 to point of interest
Ψ = angle between R1 and baseline vectors (depression angle)
Zac = antenna 1 height
A rigid baseline B i separates two antennas, A1 and A2. This separation causes the two antennas
to illuminate the scene at slightly different depression angles relative to the baseline. Here, ψ
is the nominal depression angle from A1 to the scatterer relative to the baseline. The model
assumes that the platform travels at constant velocity in the X direction while the baseline
remains parallel to the Y axis at a constant height Z ac above the XY plane.
The electromagnetic wave Equation 2 describes the signal data collected by each antenna. The
two sets of signal data differ primarily because of the small differences in the data collection
geometry. Complex images are generated from the signal data received by each antenna.
As stated earlier, the phase of an image pixel represents the phase of multiple scatters in the
same resolution cell and consists of both deterministic and unknown random components. A
data collection for SAR interferometry adheres to special conditions to ensure that the random
component of the phase is nearly identical in the two images. The deterministic phase in a single
image is due to the two-way propagation path between the associated antenna and the target.
From our previously derived equation for an electromagnetic wave, and assuming the standard
SAR configuration in which the perpendicular distance from the SAR to the target does not
change, we can write the complex quantities representing a corresponding pair of image pixels,
P1 and P2, from image 1 and image 2 as Equation 3 and Equation 4.
Equation 3

    P1 = a1 · e^(j(θ1 + Φ1))

and

Equation 4

    P2 = a2 · e^(j(θ2 + Φ2))
The quantities a 1 and a 2 represent the magnitudes of each image pixel. Generally, these
magnitudes are approximately equal. The quantities θ 1 and θ2 are the random components of
pixel phase. They represent the vector summations of returns from all unresolved scatterers
within the resolution cell and include contributions from receiver noise. With proper system
design and collection geometry, they are nearly equal. The quantities Φ 1 and Φ 2 are the
deterministic contribution to the phase of the image pixel. The desired function of the
interferometer is to provide a measure of the phase difference, Φ 1 – Φ 2 .
Next, we must relate the phase value to the distance vector from each antenna to the point of
interest. This is done by recognizing that phase and the wavelength of the electromagnetic wave
represent distance in number of wavelengths. Equation 5 relates phase to distance and
wavelength.
Equation 5

    Φi = 4πRi / λ
Multiplication of one image and the complex conjugate of the second image on a pixel-by-pixel
basis yields the phase difference between corresponding pixels in the two images. This complex
product produces the interferogram I with
Equation 6

    I = P1 · P2′
Where ’ denotes the complex conjugate operation. With θ 1 and θ 2 nearly equal and a 1 and a 2
nearly equal, the two images differ primarily in how the slight difference in collection
depression angles affects Φ 1 and Φ 2 . Ideally then, each pixel in the interferogram has the form:
Equation 7

    I = a² · e^(−j(4π/λ)(R1 − R2)) = a² · e^(jφ12)

using a1 = a2 = a. The amplitude a² of the interferogram corresponds to image intensity. The
phase φ12 of the interferogram becomes
Equation 8

    φ12 = 4π(R2 − R1) / λ
which is the quantity used to derive the depression angle to the point of interest relative to the
baseline and, eventually, information about the scatterer height relative to the XY plane. Using
the following approximation allows us to arrive at an equation relating the interferogram phase
to the nominal depression angle.
Equation 9
R 2 – R 1 ≈ B i cos ( ψ )
Equation 10
4πB i cos ( ψ )
φ 12 ≈ ------------------------------
λ
In Equation 9 and Equation 10, ψ is the nominal depression angle from the center of the
baseline to the scatterer relative to the baseline. No phase difference indicates that ψ = 90
degrees and the scatterer is in the plane through the center of and orthogonal to the baseline. The
interferometric phase involves many radians of phase for scatterers at other depression angles
since the range difference R 2 – R 1 is many wavelengths. In practice, however, an
interferometric system does not measure the total pixel phase difference. Rather, it measures
only the phase difference that remains after subtracting all full 2π intervals present (modulo 2π).
To estimate the actual depression angle to a particular scatterer, the interferometer must
measure the total pixel phase difference of many cycles. This information is available, for
instance, by unwrapping the raw interferometric phase measurements beginning at a known
scene location. Phase unwrapping is discussed in further detail in “Phase Unwrapping”.
Because of the ambiguity imposed by the wrapped phase problem, it is necessary to seek the
relative depression angle and relative height among scatterers within a scene rather than their
absolute depression angle and height. The differential of Equation 10 with respect to ψ provides
this relative measure. This differential is
Equation 11

    ∆φ12 = −(4πBi / λ) · sin(ψ) · ∆ψ

or

Equation 12

    ∆ψ = −(λ / (4πBi · sin(ψ))) · ∆φ12
This result indicates that two pixels in the interferogram that differ in phase by ∆φ12 represent
scatterers differing in depression angle by ∆ψ . Figure 9-12 shows the differential collection
geometry.
[Figure 9-12: Differential collection geometry relating a change ∆ψ in depression angle to a change ∆h in height at the same range from mid-baseline]
From this geometry, a change ∆ψ in depression angle is related to a change ∆h in height (at the
same range from mid-baseline) by Equation 13.
Equation 13

    Zac − ∆h = Zac · sin(ψ − ∆ψ) / sin(ψ)
Equation 14

    ∆h ≈ Zac · cot(ψ) · ∆ψ

    ∆h = −(λ · Zac · cot(ψ) / (4πBi · sin(ψ))) · ∆φ12
Note that, because we are calculating differential height, we need at least one known height
value in order to calculate absolute height. This translates into a need for at least one GCP in
order to calculate absolute heights from the IMAGINE IFSAR DEM process.
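A toy numerical illustration of Equation 14 follows. Every parameter value is a placeholder rather than a value from a real collection, and a single known height (the GCP) ties the relative heights to absolute elevation:

import numpy as np

# Placeholder collection parameters (not from any real sensor configuration).
wavelength = 0.0566      # C-band wavelength in meters
Zac = 790000.0           # antenna height above the XY plane (m)
Bi = 150.0               # baseline length (m)
psi = np.radians(45.0)   # nominal depression angle

# Relative height per radian of unwrapped, flattened phase difference (Equation 14).
dh_per_radian = -(wavelength * Zac / np.tan(psi)) / (4.0 * np.pi * Bi * np.sin(psi))

# Unwrapped phase differences (radians) for a few pixels, relative to a reference pixel.
dphi = np.array([0.0, 1.2, -0.8, 3.5])

# One GCP of known height ties the relative heights to absolute elevation.
gcp_height = 250.0                       # known elevation of the reference pixel (m)
heights = gcp_height + dh_per_radian * dphi
print(heights)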
In this section, we have derived the mathematical model needed to calculate height from
interferometric phase information. In order to put this model into practice, there are several
important processes that must be performed. These processes are image registration, phase
noise reduction, phase flattening, and phase unwrapping. These processes are discussed in the
following sections.
Image Registration In the discussion of the interferometric model of the last section, we assumed that the pixels had
been identified in each image that contained the phase information for the scatterer of interest.
Aligning the images from the two antennas is the purpose of the image registration step. For
interferometric systems that employ two antennas attached by a fixed boom and collect data
simultaneously, this registration is simple and deterministic. Given the collection geometry, the
registration can be calculated without referring to the data. For repeat pass systems, the
registration is not quite so simple. Since the collection geometry cannot be precisely known, we
must use the data to help us achieve image registration.
The registration process for repeat pass interferometric systems is generally broken into two
steps: pixel and sub-pixel registration. Pixel registration involves using the magnitude (visible)
part of each image to remove the image misregistration down to around a pixel. This means that,
after pixel registration, the two images are registered to within one or two pixels of each other
in both the range and azimuth directions.
Pixel registration is best accomplished using a standard window correlator to compare the
magnitudes of the two images over a specified window. You usually specify a starting point in
the two images, a window size, and a search range for the correlator to search over. The process
identifies the pixel offset that produces the highest match between the two images, and therefore
the best interferogram. One offset is enough to pixel register the two images.
Pixel registration, in general, produces a reasonable interferogram, but not the best possible.
This is because of the nature of the phase function for each of the images. In order to form an
image from the original signal data collected for each image, it is required that the phase
functions in range and azimuth be Nyquist sampled.
Nyquist sampling simply means that the original continuous function can be reconstructed from
the sampled data. This means that, while the magnitude resolution is limited to the pixel sizes
(often less than that), the phase function can be reconstructed to much higher resolutions.
Because it is the phase functions that ultimately provide the height information, it is important
to register them as closely as possible. This fine registration of the phase functions is the goal
of the sub-pixel registration step.
Sub-pixel registration is achieved by starting at the pixel registration offset and searching over
upsampled versions of the phase functions for the best possible interferogram. When this best
interferogram is found, the sub-pixel offset has been identified. In order to accomplish this, we
must construct higher resolution phase functions from the data. In general this is done using the
relation from signal processing theory shown in Equation 15.
Equation 15

    i(r + ∆r, a + ∆a) = ζ⁻¹[ I(u, v) · e^(−j(u∆r + v∆a)) ]
Where:
r = range independent variable
a = azimuth independent variable
i(r, a) = interferogram in spatial domain
I(u, v) = interferogram in frequency domain
∆r = sub-pixel range offset (e.g., 0.25)
∆a = sub-pixel azimuth offset (e.g., 0.75)
ζ⁻¹ = inverse Fourier transform
Applying this relation directly requires two-dimensional (2D) Fourier transforms and inverse
Fourier transforms for each window tested. This is impractical given the computing
requirements of Fourier transforms. Fortunately, we can achieve the upsampled phase functions
we need using 2D sinc interpolation, which involves convolving a 2D sinc function of a given
size over our search region. Equation 16 defines the sinc function for one dimension.

Equation 16

    sin(nπ) / (nπ)
Using sinc interpolation is a fast and efficient method of reconstructing parts of the phase
functions which are at sub-pixel locations.
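The sketch below applies Equation 15 directly with FFTs to shift a small complex patch by a sub-pixel amount. As noted above, production code would more likely convolve a small 2D sinc kernel over the search region; the frequency-domain form is shown only because it follows Equation 15 most literally:

import numpy as np

def subpixel_shift(patch, dr, da):
    """Shift a complex 2D patch by (dr, da) pixels using the Fourier shift theorem
    (Equation 15). dr and da may be fractional, e.g., 0.25 and 0.75."""
    rows, cols = patch.shape
    u = 2.0 * np.pi * np.fft.fftfreq(rows)[:, None]   # range frequencies (rad/sample)
    v = 2.0 * np.pi * np.fft.fftfreq(cols)[None, :]   # azimuth frequencies (rad/sample)
    phase_ramp = np.exp(-1j * (u * dr + v * da))
    return np.fft.ifft2(np.fft.fft2(patch) * phase_ramp)

# Synthetic complex patch standing in for a small region of one SAR image.
rng = np.random.default_rng(0)
patch = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
shifted = subpixel_shift(patch, dr=0.25, da=0.75)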
In general, one sub-pixel offset is not enough to sub-pixel register two SAR images over the
entire collection. Unlike the pixel registration, sub-pixel registration is dependent on the pixel
location, especially the range location. For this reason, it is important to generate a sub-pixel
offset function that varies with range position. Two sub-pixel offsets, one at the near range and
one at the far range, are enough to generate this function. This sub-pixel register function
provides the weights for the sinc interpolator needed to register one image to the other during
the formation of the interferogram.
Phase Noise Reduction
We mentioned in “The Interferometric Model” that it is necessary to unwrap the phase of the
interferogram before it can be used to calculate heights. From a practical and implementational
point of view, the phase unwrapping step is the most difficult. We discuss phase unwrapping
more in “Phase Unwrapping”.
Before unwrapping, we can do a few things to the data that make the phase unwrapping easier.
The first of these is to reduce the noise in the interferometric phase function. Phase noise is
introduced by radar system noise, image misregistration, and speckle effects caused by the
complex nature of the imagery. Reducing this noise is done by applying a coherent average filter
of a given window size over the entire interferogram. This filter is similar to the more familiar
averaging filter, except that it operates on the complex function instead of just the magnitudes.
The form of this filter is given in Equation 17.
Equation 17

    î(r, a) = [ Σ (i = 0 to N) Σ (j = 0 to M) Re[i(r + i, a + j)] + j·Im[i(r + i, a + j)] ] / (M + N)
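A direct, if slow, NumPy sketch of this coherent averaging follows. It sums the complex values over the window and normalizes by the number of samples in the window, which is the usual practice; the window size is an assumption, not a recommended setting:

import numpy as np

def coherent_average(ifgram, win=5):
    """Coherently average a complex interferogram over a win x win window.
    Real and imaginary parts are averaged together; the filtered phase can
    then be taken with np.angle()."""
    pad = win // 2
    padded = np.pad(ifgram, pad, mode="edge")
    out = np.zeros_like(ifgram)
    rows, cols = ifgram.shape
    for r in range(rows):
        for c in range(cols):
            out[r, c] = padded[r:r + win, c:c + win].mean()
    return out

# Synthetic interferogram: a smooth phase ramp plus noise.
rng = np.random.default_rng(1)
phase = np.linspace(0, 6 * np.pi, 200)[None, :] * np.ones((200, 1))
noisy = np.exp(1j * (phase + 0.8 * rng.standard_normal((200, 200))))
filtered = coherent_average(noisy, win=5)
filtered_phase = np.angle(filtered)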
Figure 9-13 shows an interferometric phase image without filtering; Figure 9-14 shows the
same phase image with filtering.
The sharp ridges that look like contour lines in Figure 9-13 and Figure 9-14 show where the
phase functions wrap. The goal of the phase unwrapping step is to make this one continuous
function. This is discussed in greater detail in “Phase Unwrapping”. Notice how the filtered
image of Figure 9-14 is much cleaner than that of Figure 9-13. This filtering makes the phase
unwrapping much easier.
Phase Flattening The phase function of Figure 9-14 is fairly well behaved and is ready to be unwrapped. There
are relatively few wrap lines and they are distinct. Notice that in the areas where the elevation is
changing more rapidly (mountainous regions), the frequency of the wrapping increases. In general,
the higher the wrapping frequency, the more difficult the area is to unwrap. Once the wrapping
frequency exceeds the spatial sampling of the phase image, information is lost. An important
technique in reducing this wrapping frequency is phase flattening.
Phase flattening involves removing high frequency phase wrapping caused by the collection
geometry. This high frequency wrapping is mainly in the range direction and is caused by the
range separation of the antennas during the collection. Recall that it is this range separation that
gives the phase difference and therefore the height information. The phase function of Figure 9-
14 has already had phase flattening applied to it. Figure 9-15 shows this same phase function
without phase flattening applied.
Phase flattening is achieved by removing, from the actual phase function recorded in the
interferogram, the phase function that would result if the imaging area were flat. It is possible,
using the equations derived in “The Interferometric Model”, to calculate this flat Earth phase
function and subtract it from the data phase function.
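When the precise collection geometry is not at hand, a common practical stand-in, sketched below, is to estimate the dominant range fringe frequency from the interferogram itself and remove the corresponding ramp. This is only an approximation of the geometric flat Earth phase removal described above, not the IMAGINE IFSAR DEM implementation:

import numpy as np

def flatten_range_ramp(ifgram):
    """Remove the dominant range-direction phase ramp (an estimate of the
    flat Earth fringes) from a complex interferogram."""
    rows, cols = ifgram.shape
    # Average the range spectra over all azimuth lines and find the peak frequency.
    spectrum = np.abs(np.fft.fft(ifgram, axis=1)).mean(axis=0)
    f_peak = np.fft.fftfreq(cols)[np.argmax(spectrum)]
    # Multiply by the conjugate ramp, leaving mostly topographic phase.
    ramp = np.exp(-2j * np.pi * f_peak * np.arange(cols))[None, :]
    return ifgram * ramp

# Synthetic example: steep flat Earth fringes in range plus a gentle topographic phase.
cols_idx = np.arange(512)
flat_earth = np.exp(1j * 2 * np.pi * 0.12 * cols_idx)[None, :]
topo = np.exp(1j * np.linspace(0, 4 * np.pi, 512))[None, :]
ifgram = np.ones((128, 1)) * (flat_earth * topo)
flattened = flatten_range_ramp(ifgram)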
It should be obvious that the phase function in Figure 9-14 is easier to unwrap than the phase
function of Figure 9-15.
Phase Unwrapping We stated in “The Interferometric Model” that we must unwrap the interferometric phase before
we can use it to calculate height values. In “Phase Noise Reduction” and “Phase Flattening”, we
develop methods of making the phase unwrapping job easier. This section further defines the
phase unwrapping problem and describes how to solve it.
As an electromagnetic wave travels through space, it cycles through its maximum and minimum
phase values many times as shown in Figure 9-16.
[Figure 9-16: A traveling wave cycling through phase values π to 7π, with two points marked at φ1 = 3π/2 and φ2 = 11π/2]

Equation 18

    φ2 − φ1 = 11π/2 − 3π/2 = 4π
Recall from Equation 8 that finding the phase difference at two points is the key to extracting
height from interferometric phase. Unfortunately, an interferometric system does not measure
the total pixel phase difference. Rather, it measures only the phase difference that remains after
subtracting all full 2π intervals present (modulo 2π). This results in the following value for the
phase difference of Equation 18.
Equation 19

    φ2 − φ1 = (mod 2π)(11π/2) − (mod 2π)(3π/2) = 3π/2 − 3π/2 = 0
Figure 9-17 further illustrates the difference between a one-dimensional continuous and
wrapped phase function. Notice that when the phase value of the continuous function reaches
2π , the wrapped phase function returns to 0 and continues from there. The job of the phase
unwrapping is to take a wrapped phase function and reconstruct the continuous function from it.
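For the one-dimensional case, NumPy's unwrap function performs exactly this reconstruction; the two-dimensional interferogram case is far harder, as discussed below, but the idea is the same:

import numpy as np

# A continuous 1D phase function, e.g., increasing steadily to 10*pi.
continuous = np.linspace(0.0, 10.0 * np.pi, 500)

# What the interferometer actually measures: phase wrapped into [0, 2*pi).
wrapped = np.mod(continuous, 2.0 * np.pi)

# Phase unwrapping adds back the 2*pi jumps to recover the continuous function.
unwrapped = np.unwrap(wrapped)

print(np.allclose(unwrapped, continuous))   # True for this noise-free example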
[Figure 9-17: A one-dimensional continuous phase function (rising to 10π) compared to its wrapped counterpart]
There has been much research and many different methods derived for unwrapping the 2D
phase function of an interferometric SAR phase image. A detailed discussion of all or any one
of these methods is beyond the scope of this chapter. The most successful approaches employ
algorithms which unwrap the easy or good areas first and then move on to more difficult areas.
Good areas are regions in which the phase function is relatively flat and the correlation is high.
This prevents errors in the tough areas from corrupting good regions. Figure 9-18 shows a
sequence of unwrapped phase images for the phase function of Figure 9-14.
Figure 9-19 shows the wrapped phase compared to the unwrapped phase image.
The unwrapped phase values can now be combined with the collection position information to
calculate height values for each pixel in the interferogram.
Conclusions SAR interferometry uses the unique properties of SAR images to extract height information
from SAR interferometric image pairs. Given a good image pair and good information about the
collection geometry, IMAGINE IFSAR DEM can produce very high quality results. The best
IMAGINE IFSAR DEM results are acquired with dual antenna systems that collect both images
at once. It is also possible to do IFSAR processing on repeat pass systems. These systems have
the advantage of only requiring one antenna, and therefore are cheaper to build. However, the
quality of repeat pass IFSAR is very sensitive to the collection conditions because the images
were not collected at the same time. Weather and terrain changes that occur
between the collection of the two images can greatly degrade the coherence of the image pair.
This reduction in coherence makes each part of the IMAGINE IFSAR DEM process more
difficult.
Chapter 10
Rectification
Introduction Raw, remotely sensed image data gathered by a satellite or aircraft are representations of the
irregular surface of the Earth. Even images of seemingly flat areas are distorted by both the
curvature of the Earth and the sensor being used. This chapter covers the processes of
geometrically correcting an image so that it can be represented on a planar surface, conform to
other images, and have the integrity of a map.
A map projection system is any system designed to represent the surface of a sphere or spheroid
(such as the Earth) on a plane. There are a number of different map projection methods. Since
flattening a sphere to a plane causes distortions to the surface, each map projection system
compromises accuracy between certain properties, such as conservation of distance, angle, or
area. For example, in equal area map projections, a circle of a specified diameter drawn at any
location on the map represents the same total area. This is useful for comparing land use area,
density, and many other applications. However, to maintain equal area, the shapes, angles, and
scale in parts of the map may be distorted (Jensen, 1996).
There are a number of map coordinate systems for determining location on an image. These
coordinate systems conform to a grid, and are expressed as X,Y (column, row) pairs of numbers.
Each map projection system is associated with a map coordinate system.
Rectification is the process of transforming the data from one grid system into another grid
system using a geometric transformation. While polynomial transformation and triangle-based
methods are described in this chapter, discussion about various rectification techniques can be
found in Yang (Yang, 1997). Since the pixels of the new grid may not align with the pixels of
the original grid, the pixels must be resampled. Resampling is the process of extrapolating data
values for the pixels on the new grid from the values of the source pixels.
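As a highly simplified sketch of these two ideas, the code below maps each output grid cell back to source pixel coordinates with a placeholder first-order transformation and fills it by nearest neighbor resampling. In practice the transformation coefficients come from a least squares fit to GCPs, and the other resampling methods discussed later in this chapter could be substituted:

import numpy as np

def map_to_source(x, y):
    """Placeholder first-order mapping: 30 m source cells with the upper left
    source pixel at map coordinate (500000, 4100000). Real coefficients come
    from a least squares fit to GCPs."""
    col = (x - 500000.0) / 30.0
    row = (4100000.0 - y) / 30.0
    return col, row

def rectify_nearest(src, xs, ys, fill=0):
    """Build an output grid over map coordinates xs, ys and fill it by
    nearest neighbor resampling from the source image."""
    out = np.full((len(ys), len(xs)), fill, dtype=src.dtype)
    for r, y in enumerate(ys):
        for c, x in enumerate(xs):
            sc, sr = map_to_source(x, y)
            sc, sr = int(round(sc)), int(round(sr))
            if 0 <= sr < src.shape[0] and 0 <= sc < src.shape[1]:
                out[r, c] = src[sr, sc]
    return out

# Synthetic source image and a 200 x 200 output grid of 30 m cells.
src = np.arange(1000 * 1000, dtype=np.uint16).reshape(1000, 1000)
xs = np.arange(500000.0, 500000.0 + 30.0 * 200, 30.0)
ys = np.arange(4100000.0, 4100000.0 - 30.0 * 200, -30.0)
rectified = rectify_nearest(src, xs, ys)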
Registration In many cases, images of one area that are collected from different sources must be used
together. To be able to compare separate images pixel by pixel, the pixel grids of each image
must conform to the other images in the data base. The tools for rectifying image data are used
to transform disparate images to the same coordinate system.
Registration is the process of making an image conform to another image. A map coordinate
system is not necessarily involved. For example, if image A is not rectified and it is being used
with image B, then image B must be registered to image A so that they conform to each other.
In this example, image A is not rectified to a particular map projection, so there is no need to
rectify image B to a map projection.
Georeferencing Georeferencing refers to the process of assigning map coordinates to image data. The image
data may already be projected onto the desired plane, but not yet referenced to the proper
coordinate system. Rectification, by definition, involves georeferencing, since all map
projection systems are associated with map coordinates. Image-to-image registration involves
georeferencing only if the reference image is already georeferenced. Georeferencing, by itself,
involves changing only the map coordinate information in the image file. The grid of the image
does not change.
Geocoded data are images that have been rectified to a particular map projection and pixel size,
and usually have had radiometric corrections applied. It is possible to purchase image data that
is already geocoded. Geocoded data should be rectified only if they must conform to a different
projection system or be registered to other rectified data.
Latitude/Longitude Lat/Lon is a spherical coordinate system that is not associated with a map projection. Lat/Lon
expresses locations in terms of a spheroid, not a plane. Therefore, an image is not usually
rectified to Lat/Lon, although it is possible to convert images to Lat/Lon, and some tips for
doing so are included in this chapter.
You can view map projection information for a particular file using the Image Information
utility. Image Information allows you to modify map information that is incorrect.
However, you cannot rectify data using Image Information. You must use the Rectification
tools described in this chapter.
The properties of map projections and of particular map projection systems are discussed
in Chapter 13 “Cartography” and Appendix B “Map Projections”.
Orthorectification Orthorectification is a form of rectification that corrects for terrain displacement and can be
used if there is a DEM of the study area. It is based on collinearity equations, which can be
derived by using 3D GCPs. In relatively flat areas, orthorectification is not necessary, but in
mountainous areas (or on aerial photographs of buildings), where a high degree of accuracy is
required, orthorectification is recommended.
When to Rectify Rectification is necessary in cases where the pixel grid of the image must be changed to fit a
map projection system or a reference image. There are several reasons for rectifying image data:
• comparing pixels scene to scene in applications, such as change detection or thermal inertia
mapping (day and night comparison)
• mosaicking images
Before rectifying the data, you must determine the appropriate coordinate system for the data
base. To select the optimum map projection and coordinate system, the primary use for the data
base must be considered.
If you are doing a government project, the projection may be predetermined. A commonly used
projection in the United States government is State Plane. Use an equal area projection for
thematic or distribution maps and conformal or equal area projections for presentation maps.
Before selecting a map projection, consider the following:
• How large or small is the area to be mapped? Different projections are intended for areas of
different sizes.
• Where on the globe is the study area? Polar regions and equatorial regions require different
projections for maximum accuracy.
• What is the extent of the study area? Circular, north-south, east-west, and oblique areas
may all require different projection systems (Environmental Systems Research Institute,
1992).
When to Georeference Only    Rectification is not necessary if there is no distortion in the image. For example, if an image file
is produced by scanning or digitizing a paper map that is in the desired projection system, then
that image is already planar and does not require rectification unless there is some skew or
rotation of the image. Scanning and digitizing produce images that are planar, but do not contain
any map coordinate information. These images need only to be georeferenced, which is a much
simpler process than rectification. In many cases, the image header can simply be updated with
new map coordinate information. This involves redefining:
• the map coordinate of the upper left corner of the image
• the cell size (the area represented by each pixel)
This information is usually the same for each layer of an image file, although it could be
different. For example, the cell size of band 6 of Landsat TM data is different from the cell size
of the other bands.
Use the Image Information utility to modify image file header information that is
incorrect.
Disadvantages of Rectification    During rectification, the data file values of rectified pixels must be resampled to fit into a new
grid of pixel rows and columns. Although some of the algorithms for calculating these values
are highly reliable, some spectral integrity of the data can be lost during rectification. If map
coordinates or map units are not needed in the application, then it may be wiser not to rectify
the image. An unrectified image is more spectrally correct than a rectified image.
Classification
Some analysts recommend classification before rectification, since the classification is then
based on the original data values. Another benefit is that a thematic file has only one band to
rectify instead of the multiple bands of a continuous file. On the other hand, it may be beneficial
to rectify the data first, especially when using GPS data for the GCPs. Since these data are very
accurate, the classification may be more accurate if the new coordinates help to locate better
training samples.
Thematic Files
Nearest neighbor is the only appropriate resampling method for thematic files, which may be a
drawback in some applications. The available resampling methods are discussed in detail later
in this chapter.
Rectification Steps NOTE: Registration and rectification involve similar sets of procedures. Throughout this
documentation, many references to rectification also apply to image-to-image registration.
Usually, rectification is the conversion of data file coordinates to some other grid and coordinate
system, called a reference system. Rectifying or registering image data on disk involves the
following general steps, regardless of the application:
1. Locate GCPs.
2. Compute and test a transformation.
3. Create an output image file with the new coordinate information in the header. The pixels must
be resampled to conform to the new grid.
Images can be rectified on the display (in a Viewer) or on the disk. Display rectification is
temporary, but disk rectification is permanent, because a new file is created. Disk rectification
involves:
• rearranging the pixels of the image onto a new grid, which conforms to a plane in the new
map projection and coordinate system
• inserting new information to the header of the file, such as the upper left corner map
coordinates and the area represented by each pixel
Ground Control Points    GCPs are specific pixels in an image for which the output map coordinates (or other output
coordinates) are known. GCPs consist of two X,Y pairs of coordinates:
• source coordinates—usually data file coordinates in the image being rectified
• reference coordinates—the coordinates of the map or reference image to which the source
image is being registered
The term map coordinates is sometimes used loosely to apply to reference coordinates and
rectified coordinates. These coordinates are not limited to map coordinates. For example, in
image-to-image registration, map coordinates are not necessary.
GCPs in ERDAS IMAGINE    Any ERDAS IMAGINE image can have one GCP set associated with it. The GCP set is stored
in the image file along with the raster layers. If a GCP set exists for the top file that is displayed
in the Viewer, then those GCPs can be displayed when the GCP Tool is opened.
In the CellArray of GCP data that displays in the GCP Tool, one column shows the point ID of
each GCP. The point ID is a name given to GCPs in separate files that represent the same
geographic location. Such GCPs are called corresponding GCPs.
A default point ID string is provided (such as GCP #1), but you can enter your own unique ID
strings to set up corresponding GCPs as needed. Even though only one set of GCPs is associated
with an image file, one GCP set can include GCPs for a number of rectifications by changing
the point IDs for different groups of corresponding GCPs.
Entering GCPs Accurate GCPs are essential for an accurate rectification. From the GCPs, the rectified
coordinates for all other points in the image are extrapolated. Select many GCPs throughout the
scene. The more dispersed the GCPs are, the more reliable the rectification is. GCPs for large-
scale imagery might include the intersection of two roads, airport runways, utility corridors,
towers, or buildings. For small-scale imagery, larger features such as urban areas or geologic
features may be used. Landmarks that can vary (e.g., the edges of lakes or other water bodies,
vegetation, etc.) should not be used.
The source and reference coordinates of the GCPs can be entered in the following ways:
• Use the mouse to select a pixel from an image in the Viewer. With both the source and
destination Viewers open, enter source coordinates and reference coordinates for image-to-
image registration.
Information on the use and setup of a digitizing tablet is discussed in Chapter 2 “Vector
Layers”.
Mouse Option
When entering GCPs with the mouse, you should try to match coarser resolution imagery to
finer resolution imagery (e.g., Landsat TM to SPOT), and avoid stretching resolution spans
greater than a cubic convolution radius (a 4 × 4 area). In other words, you should not try to
match Landsat MSS to SPOT or Landsat TM to an aerial photograph.
GCP Prediction and Matching    Automated GCP prediction enables you to pick a GCP in either coordinate system and
automatically locate that point in the other coordinate system based on the current
transformation parameters.
Automated GCP matching is a step beyond GCP prediction. For image-to-image rectification,
a GCP selected in one image is precisely matched to its counterpart in the other image using the
spectral characteristics of the data and the geometric transformation. GCP matching enables you
to fine tune a rectification for highly accurate results.
Both of these methods require an existing transformation which consists of a set of coefficients
used to convert the coordinates from one system to another.
GCP Prediction
GCP prediction is a useful technique to help determine if enough GCPs have been gathered.
After selecting several GCPs, select a point in either the source or the destination image, then
use GCP prediction to locate the corresponding GCP on the other image (map). This point is
determined based on the current transformation derived from existing GCPs. Examine the
automatically generated point and see how accurate it is. If it is within an acceptable range of
accuracy, then there may be enough GCPs to perform an accurate rectification (depending upon
how evenly dispersed the GCPs are). If the automatically generated point is not accurate, then
more GCPs should be gathered before rectifying the image.
GCP prediction can also be used when applying an existing transformation to another image in
a data set. This saves time in selecting another set of GCPs by hand. Once the GCPs are
automatically selected, those that do not meet an acceptable level of error can be edited.
GCP Matching
In GCP matching, you can select which layers from the source and destination images to use.
Since the matching process is based on the reflectance values, select layers that have similar
spectral wavelengths, such as two visible bands or two infrared bands. You can perform
histogram matching to ensure that there is no offset between the images. You can also select the
radius from the predicted GCP within which the matching operation searches for spectrally
similar pixels. The search window can be any odd size between 5 × 5 and 21 × 21.
A correlation threshold is used to accept or discard points. The correlation ranges from -1.000
to +1.000. The threshold is an absolute value threshold ranging from 0.000 to 1.000. A value of
0.000 indicates a bad match and a value of 1.000 indicates an exact match. Values above 0.8000
or 0.9000 are recommended. If a match cannot be made because the absolute value of the
correlation is less than the threshold, you have the option to discard points.
Polynomial Transformation    Polynomial equations are used to convert source file coordinates to rectified map coordinates.
Depending upon the distortion in the imagery, the number of GCPs used, and their locations
relative to one another, complex polynomial equations may be required to express the needed
transformation. The degree of complexity of the polynomial is expressed as the order of the
polynomial. The order is simply the highest exponent used in the polynomial.
The order of transformation is the order of the polynomial used in the transformation. ERDAS
IMAGINE allows 1st- through nth-order transformations. Usually, 1st-order or 2nd-order
transformations are used.
You can specify the order of the transformation you want to use in the Transform Editor.
Transformation Matrix
A transformation matrix is computed from the GCPs. The matrix consists of coefficients that
are used in polynomial equations to convert the coordinates. The size of the matrix depends
upon the order of transformation. The goal in calculating the coefficients of the transformation
matrix is to derive the polynomial equations for which there is the least possible amount of error
when they are used to transform the reference coordinates of the GCPs into the source
coordinates. It is not always possible to derive coefficients that produce no error. For example,
in Figure 10-1, GCPs are plotted on a graph and compared to the curve that is expressed by a
polynomial.
Figure 10-1: GCPs plotted against a polynomial curve (source X coordinate vs. reference X coordinate)
Every GCP influences the coefficients, even if there is not a perfect fit of each GCP to the
polynomial that the coefficients represent. The distance between the GCP reference coordinate
and the curve is called RMS error, which is discussed later in this chapter. The least squares
regression method is used to calculate the transformation matrix from the GCPs. This common
method is discussed in statistics textbooks.
A 1st-order transformation is a linear transformation. It can change:
• location in X and/or Y
• scale in X and/or Y
• skew in X and/or Y
• rotation
First-order transformations can be used to project raw imagery to a planar map projection,
convert a planar map projection to another planar map projection, and when rectifying relatively
small image areas. You can perform simple linear transformations to an image displayed in a
Viewer or to the transformation matrix itself. Linear transformations may be required before
collecting GCPs on the displayed image. You can reorient skewed Landsat TM data, rotate
scanned quad sheets according to the angle of declination stated in the legend, and rotate
descending data so that north is up.
A 1st-order transformation can also be used for data that are already projected onto a plane. For
example, SPOT and Landsat Level 1B data are already transformed to a plane, but may not be
rectified to the desired map projection. When doing this type of rectification, it is not advisable
to increase the order of transformation if at first a high RMS error occurs. Examine other factors
first, such as the GCP source and distribution, and look for systematic errors.
ERDAS IMAGINE provides the following options for 1st-order transformations:
• scale
• offset
• rotate
• reflect
Scale
Scale is the same as the zoom option in the Viewer, except that you can specify different scaling
factors for X and Y.
If you are scaling an image in the Viewer, the zoom option undoes any scale changes that
you make, and vice versa.
Offset
Offset moves the image by a user-specified number of pixels in the X and Y directions.
Rotation
For rotation, you can specify any positive or negative number of degrees for clockwise and
counterclockwise rotation. Rotation occurs around the center pixel of the image.
Reflection
Reflection options enable you to perform the following operations:
Linear adjustments are available from the Viewer or from the Transform Editor. You can
perform linear transformations in the Viewer and then load that transformation to the
Transform Editor, or you can perform the linear transformations directly on the
transformation matrix.
Figure 10-2 illustrates how the data are changed in linear transformations.
The transformation matrix for a 1st-order transformation consists of six coefficients—three for
each coordinate (X and Y):

a0  a1  a2
b0  b1  b2

which are used in a 1st-order polynomial as follows:

xo = a0 + a1x + a2y
yo = b0 + b1x + b2y

Where:
x and y are source coordinates (input)
xo and yo are rectified coordinates (output)
the coefficients of the transformation matrix are as above
The position of the coefficients in the matrix and the assignment of the coefficients in the
polynomial is an ERDAS IMAGINE convention. Other representations of a 1st-order
transformation matrix may take a different form.
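For illustration only, and not as the ERDAS IMAGINE implementation, a 1st-order transformation of this form can be applied to a source coordinate as in the following Python sketch; the coefficient values are hypothetical.

# Sketch: apply a 1st-order (affine) polynomial transformation.
def first_order_transform(x, y, a, b):
    """a = (a0, a1, a2), b = (b0, b1, b2); returns the rectified (xo, yo)."""
    xo = a[0] + a[1] * x + a[2] * y
    yo = b[0] + b[1] * x + b[2] * y
    return xo, yo

# Hypothetical coefficients: scale by 2 in X and 3 in Y, then offset by (10, 20).
xo, yo = first_order_transform(5.0, 5.0, a=(10.0, 2.0, 0.0), b=(20.0, 0.0, 3.0))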
Second-order transformations can be used to convert Lat/Lon data to a planar projection, for
data covering a large area (to account for the Earth’s curvature), and with distorted data (for
example, due to camera lens distortion). Third-order transformations are used with distorted
aerial photographs, on scans of warped maps and with radar imagery. Fourth-order
transformations can be used on very distorted aerial photographs.
The transformation matrix for a transformation of order t contains this number of coefficients:

2 × Σ i    (summed from i = 1 to t + 1)

It is multiplied by two for the two sets of coefficients—one set for X, one for Y.
An easier way to arrive at the same number is:

( t + 1 ) × ( t + 2 )

Clearly, the size of the transformation matrix increases with the order of the transformation.
The coefficients are used in polynomial equations of the following form:

xo = Σ (i = 0 to t) Σ (j = 0 to i) ak × x^(i−j) × y^j

yo = Σ (i = 0 to t) Σ (j = 0 to i) bk × x^(i−j) × y^j

Where:
t is the order of the polynomial
ak and bk are coefficients
the subscript k in ak and bk is determined by:

k = ( i × ( i + 1 ) ) ⁄ 2 + j

For example, the transformation matrix for a 3rd-order transformation contains
( 3 + 1 ) × ( 3 + 2 ) = 20 coefficients.
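The relationship between the double summation and the coefficient subscript k can be sketched as follows. This assumes the coefficients have already been computed (for example, by least squares regression) and simply evaluates the polynomial of order t; it is an illustration of the indexing convention above, not the ERDAS IMAGINE code.

# Sketch: evaluate a polynomial transformation of order t, given coefficient
# lists a and b, each of length (t + 1)(t + 2) / 2.
def poly_transform(x, y, a, b, t):
    xo, yo = 0.0, 0.0
    for i in range(t + 1):
        for j in range(i + 1):
            k = i * (i + 1) // 2 + j          # coefficient index for the x^(i-j) * y^j term
            xo += a[k] * x ** (i - j) * y ** j
            yo += b[k] * x ** (i - j) * y ** j
    return xo, yo

# A 1st-order call reproduces xo = a0 + a1x + a2y and yo = b0 + b1x + b2y.
xo, yo = poly_transform(2.0, 3.0, a=[10.0, 2.0, 0.0], b=[20.0, 0.0, 3.0], t=1)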
Effects of Order The computation and output of a higher-order polynomial equation are more complex than that
of a lower-order polynomial equation. Therefore, higher-order polynomials are used to perform
more complicated image rectifications. To understand the effects of different orders of
transformation in image rectification, it is helpful to see the output of various orders of
polynomials.
The following example uses only one coordinate (X), instead of two (X,Y), which are used in
the polynomials for rectification. This enables you to draw two-dimensional graphs that
illustrate the way that higher orders of transformation affect the output image.
NOTE: Because only the X coordinate is used in these examples, the number of GCPs used is
less than the number required to actually perform the different orders of transformation.
Coefficients like those presented in this example would generally be calculated by the least
squares regression method. Suppose GCPs are entered with these X coordinates:
These GCPs allow a 1st-order transformation of the X coordinates, which is satisfied by this
equation (the coefficients are in parentheses):
xr = ( 25 ) + ( −8 )xi
Where:
xr = the reference X coordinate
xi = the source X coordinate
This equation takes on the same format as the equation of a line (y = mx + b). In mathematical
terms, a 1st-order polynomial is linear. Therefore, a 1st-order transformation is also known as
a linear transformation. This equation is graphed in Figure 10-4.
Figure 10-4: Graph of the 1st-order transformation xr = (25) + (−8)xi (source X coordinate vs. reference X coordinate)
Figure 10-5: Additional GCPs plotted on the same axes; the points no longer fall on a single straight line
A line cannot connect these points, which illustrates that they cannot be expressed by a 1st-order
polynomial, like the one above. In this case, a 2nd-order polynomial equation expresses these
points:
xr = ( 31 ) + ( −16 )xi + ( 2 )xi²
Polynomials of the 2nd-order or higher are nonlinear. The graph of this curve is drawn in Figure
10-6.
Figure 10-6: Graph of the 2nd-order transformation xr = (31) + (−16)xi + (2)xi² (source X coordinate vs. reference X coordinate)
Figure 10-7: A fourth GCP at (4, 5) plotted with the previous GCPs (source X coordinate vs. reference X coordinate)
As illustrated in Figure 10-7, this fourth GCP does not fit on the curve of the 2nd-order
polynomial equation. To ensure that all of the GCPs fit, the order of the transformation could
be increased to 3rd-order. The equation and graph in Figure 10-8 could then result.
Figure 10-8: Graph of the 3rd-order transformation xr = (25) + (−5)xi + (−4)xi² + (1)xi³ (source X coordinate vs. reference X coordinate)
Figure 10-8 illustrates a 3rd-order transformation. However, this equation may be unnecessarily
complex. Performing a coordinate transformation with this equation may cause unwanted
distortions in the output image for the sake of a perfect fit for all the GCPs. In this example, a
3rd-order transformation probably would be too high, because the output pixels would be
arranged in a different order than the input pixels, in the X direction.
Figure 10-9: Effect of the 3rd-order transformation above: the input pixels at X = 1, 2, 3, 4 are transformed to output X positions 17, 7, 1, and 5, so they appear in the output in the order 3, 4, 2, 1.
In this case, a higher order of transformation would probably not produce the desired results.
Minimum Number of GCPs    Higher orders of transformation can be used to correct more complicated types of distortion.
However, to use a higher order of transformation, more GCPs are needed. For instance, three
points define a plane. Therefore, to perform a 1st-order transformation, which is expressed by
the equation of a plane, at least three GCPs are needed. Similarly, the equation used in a 2nd-
order transformation is the equation of a paraboloid. Six points are required to define a
paraboloid. Therefore, at least six GCPs are required to perform a 2nd-order transformation.
The minimum number of points required to perform a transformation of order t equals:
( ( t + 1 ) × ( t + 2 ) ) ⁄ 2
Use more than the minimum number of GCPs whenever possible. Although it is possible to get
a perfect fit, it is rare, no matter how many GCPs are used.
For 1st- through 10th-order transformations, the minimum number of GCPs required to perform
a transformation is listed in the following table:

Order of Transformation    Minimum GCPs Required
1                          3
2                          6
3                          10
4                          15
5                          21
6                          28
7                          36
8                          45
9                          55
10                         66
For the best rectification results, you should always use more than the minimum number
of GCPs, and they should be well-distributed.
Rubber Sheeting
Triangle-Based Finite Element Analysis    Finite element analysis is a powerful tool for solving complicated computational problems
that can be approached through small, simpler pieces. It has been widely used as a local
interpolation technique in geographic applications. For image rectification, the known control
points can be triangulated into many triangles. Each triangle has three control points as its
vertices. Then, the polynomial transformation can be used to establish mathematical
relationships between source and destination systems for each triangle. Because the
transformation passes exactly through each control point and is not uniform across the image, finite
element analysis is also called rubber sheeting. It can also be called triangle-based
rectification because the transformation and resampling for image rectification are performed
on a triangle-by-triangle basis.
This triangle-based technique should be used when other rectification methods such as
polynomial transformation and photogrammetric modeling cannot produce acceptable results.
Triangulation    To perform the triangle-based rectification, it is necessary to triangulate the control points into
a mesh of triangles. Watson (Watson, 1992) summarized four kinds of triangulation: the
arbitrary, optimal, greedy, and Delaunay triangulations. Of the four kinds, the
Delaunay triangulation is the most widely used and is adopted here because of the smaller angle
variations of the resulting triangles.
The Delaunay triangulation can be constructed by the empty circumcircle criterion: the
circumcircle formed from the three points of any triangle does not contain any other control point. The
triangles defined this way are the most equiangular possible.
Figure 10-10 shows an example of the triangle network formed by 13 control points.
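As a sketch only, and not the ERDAS IMAGINE implementation, a Delaunay triangulation of control points can be generated with a library such as SciPy, assuming it is installed; the point coordinates below are hypothetical.

# Sketch: Delaunay triangulation of control points (assumes SciPy and NumPy).
import numpy as np
from scipy.spatial import Delaunay

points = np.array([[0, 0], [4, 1], [2, 3], [5, 4], [1, 5], [3, 6]])   # hypothetical control points
tri = Delaunay(points)

# tri.simplices lists the three control-point indices forming each triangle;
# each triangle can then carry its own transformation for rubber sheeting.
for triangle in tri.simplices:
    print(points[triangle])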
Triangle-based rectification    Once the triangle mesh has been generated and the spatial order of the control points is
available, the geometric rectification can be done on a triangle-by-triangle basis. This triangle-
based method is appealing because it breaks the entire region into smaller subsets. If the
geometric problem of the entire region is very complicated, the geometry of each subset can be
much simpler and modeled through simple transformation.
For each triangle, the polynomials can be used as the general transformation form between
source and destination systems.
Linear transformation    The easiest and fastest approach is a linear transformation using first-order polynomials:
xo = a0 + a1x + a2y
yo = b0 + b1x + b2y
There is no need for extra information because there are three known conditions in each triangle
and three unknown coefficients for each polynomial.
Nonlinear transformation    Even though the linear transformation is easy and fast, it has one disadvantage: the transitions
between triangles are not always smooth. This phenomenon is obvious when shaded relief or
contour lines are derived from a DEM that has been generated by linear rubber sheeting. It is
caused by the abrupt change in slope of the control data at the triangle edges and vertices.
In order to distribute the slope change smoothly across triangles, a nonlinear transformation
with polynomial order greater than one is used, taking the gradient information into account.
The fifth-order, or quintic, polynomial transformation is used here as the nonlinear rubber
sheeting technique. It is a smooth function: the transformation function and
its first-order partial derivatives are continuous, and it is not difficult to construct (Akima, 1978). The
formulation is as follows:

xo = Σ (i = 0 to 5) Σ (j = 0 to i) ak × x^(i−j) × y^j

yo = Σ (i = 0 to 5) Σ (j = 0 to i) bk × x^(i−j) × y^j
It has 21 coefficients for each polynomial to be determined. For solving these unknowns, 21
conditions should be available. For each vertex of the triangle, one point value is given, and two
first order and three second order partial derivatives can be easily derived by establishing a
second order polynomial using vertices in the neighborhood of the vertex. Then the total 18
conditions are ready to be used. Three more conditions can be obtained by assuming that the
normal partial derivative on each edge of the triangle is a cubic polynomial, which means that
the sum of the polynomial items beyond the third order in the normal partial derivative has a
value zero.
Check Point Analysis It should be emphasized that independent check point analysis is critical for determining the
accuracy of rubber sheeting modeling. For an exact modeling method like rubber sheeting, the
ground control points that are used in the modeling process retain little geometric
residual. To evaluate the geometric transformation between source and destination
coordinate systems, an accuracy assessment using independent check points is recommended.
RMS Error RMS error is the distance between the input (source) location of a GCP and the retransformed
location for the same GCP. In other words, it is the difference between the desired output
coordinate for a GCP and the actual output coordinate for the same point, when the point is
transformed with the geometric transformation.
RMS error = √( ( xr − xi )² + ( yr − yi )² )
Where:
xi and yi are the input source coordinates
xr and yr are the retransformed coordinates
RMS error is expressed as a distance in the source coordinate system. If data file coordinates
are the source coordinates, then the RMS error is a distance in pixel widths. For example, an
RMS error of 2 means that the reference pixel is 2 pixels away from the retransformed pixel.
Residuals and RMS Error Per GCP    The GCP Tool contains columns for the X and Y residuals. Residuals are the distances between
the source and retransformed coordinates in one direction. They are shown for each GCP. The
X residual is the distance between the source X coordinate and the retransformed X coordinate.
The Y residual is the distance between the source Y coordinate and the retransformed Y
coordinate.
If the GCPs are consistently off in either the X or the Y direction, more points should be added
in that direction. This is a common problem in off-nadir data.
Ri = √( XRi² + YRi² )
Where:
Ri = the RMS error for GCPi
XRi = the X residual for GCPi
YRi = the Y residual for GCPi
Figure 10-11 illustrates the relationship between the residuals and the RMS error per point.
Total RMS Error From the residuals, the following calculations are made to determine the total RMS error, the X
RMS error, and the Y RMS error:
Rx = √( (1 ⁄ n) Σ XRi² )

Ry = √( (1 ⁄ n) Σ YRi² )

T = √( Rx² + Ry² )    or    T = √( (1 ⁄ n) Σ ( XRi² + YRi² ) )

(each sum is taken over i = 1 to n)
Where:
Rx = X RMS error
Ry = Y RMS error
T = total RMS error
n = the number of GCPs
i = GCP number
XRi = the X residual for GCPi
YRi = the Y residual for GCPi
Error Contribution by Point    A normalized value representing each point’s RMS error in relation to the total RMS error is
also reported. This value is listed in the Contribution column of the GCP Tool.
Ei = Ri ⁄ T
Where:
Ei = error contribution of GCPi
Ri = the RMS error for GCPi
T = total RMS error
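The residuals, RMS error per GCP, total RMS error, and error contribution defined above can be computed directly from the source and retransformed GCP coordinates. The following is a minimal sketch, not the GCP Tool itself.

# Sketch: residuals, RMS error per GCP, total RMS error, and contribution.
import math

def rms_report(source, retransformed):
    """source and retransformed are lists of (x, y) coordinates for the same GCPs."""
    xres = [xr - xs for (xs, _), (xr, _) in zip(source, retransformed)]
    yres = [yr - ys for (_, ys), (_, yr) in zip(source, retransformed)]
    per_point = [math.hypot(xr_i, yr_i) for xr_i, yr_i in zip(xres, yres)]   # Ri
    n = len(per_point)
    rx = math.sqrt(sum(v * v for v in xres) / n)                             # X RMS error
    ry = math.sqrt(sum(v * v for v in yres) / n)                             # Y RMS error
    total = math.hypot(rx, ry)                                               # T
    contribution = [r / total for r in per_point]                            # Ei
    return per_point, rx, ry, total, contribution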
Tolerance of RMS Error    In most cases, it is advantageous to tolerate a certain amount of error rather than take a more
complex transformation. The amount of RMS error that is tolerated can be thought of as a
window around each source coordinate, inside which a retransformed coordinate is considered
to be correct (that is, close enough to use). For example, if the RMS error tolerance is 2, then
the retransformed pixel can be 2 pixels away from the source pixel and still be considered
accurate.
Figure 10-12: RMS error tolerance: retransformed coordinates that fall within the tolerance window around each source coordinate are considered correct
Acceptable RMS error is determined by the end use of the data base, the type of data being used,
and the accuracy of the GCPs and ancillary data being used. For example, GCPs acquired from
GPS should have an accuracy of about 10 m, but GCPs from 1:24,000-scale maps should have
an accuracy of about 20 m.
It is important to remember that RMS error is reported in pixels. Therefore, if you are rectifying
Landsat TM data and want the rectification to be accurate to within 30 meters, the RMS error
should not exceed 1.00. Acceptable accuracy depends on the image area and the particular
project.
Evaluating RMS Error To determine the order of polynomial transformation, you can assess the relative distortion in
going from image to map or map to map. One should start with a 1st-order transformation unless
it is known that it does not work. It is possible to repeatedly compute transformation matrices
until an acceptable RMS error is reached.
Most rectifications are either 1st-order or 2nd-order. The danger of using higher order
rectifications is that the more complicated the equation for the transformation, the less
regular and predictable the results are. To fit all of the GCPs, there may be very high
distortion in the image.
After each computation of a transformation and RMS error, there are four options:
• Throw out the GCP with the highest RMS error, assuming that this GCP is the least
accurate. Another transformation can then be computed from the remaining GCPs. A closer
fit should be possible. However, if this is the only GCP in a particular region of the image,
it may cause greater error to remove it.
• Tolerate a higher amount of RMS error.
• Increase the complexity of the transformation, creating more complex geometric alterations
in the image. A transformation can then be computed that accommodates the GCPs with
less error.
• Select only the points for which you have the most confidence.
Resampling Methods    The next step in the rectification/registration process is to create the output file. Since the grid
of pixels in the source image rarely matches the grid for the reference image, the pixels are
resampled so that new data file values for the output file can be calculated.
The following resampling methods are discussed in this section:
• Nearest neighbor—uses the value of the closest pixel to assign to the output pixel value.
• Bilinear interpolation—uses the data file values of four pixels in a 2 × 2 window to calculate
an output value with a bilinear function.
• Cubic convolution—uses the data file values of sixteen pixels in a 4 × 4 window to calculate
an output value with a cubic function.
• Bicubic spline interpolation—fits a cubic spline surface through the current block of points
to calculate the output value.
In all methods, the number of rows and columns of pixels in the output is calculated from the
dimensions of the output map, which is determined by the geometric transformation and the cell
size. The output corners (upper left and lower right) of the output file can be specified. The
default values are calculated so that the entire source file is resampled to the destination file.
If an image-to-image rectification is being performed, it may be beneficial to specify the output
corners relative to the reference file system, so that the images are coregistered. In this case, the
upper left X and upper left Y coordinate are 0,0 and not the defaults.
If the output units are pixels, then the origin of the image is the upper left corner.
Otherwise, the origin is the lower left corner.
Rectifying to Lat/Lon You can specify the nominal cell size if the output coordinate system is Lat/Lon. The output cell
size for a geographic projection (i.e., Lat/Lon) is always in angular units of decimal degrees.
However, if you want the cell to be a specific size in meters, you can enter meters and calculate
the equivalent size in decimal degrees. For example, if you want the output file cell size to be
30 × 30 meters, then the program would calculate what this size would be in decimal degrees
and automatically update the output cell size. Since the transformation between angular
(decimal degrees) and nominal (meters) measurements varies across the image, the
transformation is based on the center of the output file.
Enter the nominal cell size in the Nominal Cell Size dialog.
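As a rough sketch of the meters-to-decimal-degrees conversion (the program's own calculation may differ), the nominal cell size can be approximated from the latitude of the output file's center, using an average figure of roughly 111,320 meters per degree.

# Sketch: approximate a nominal cell size in decimal degrees from meters,
# based on the latitude of the image center (simple spherical approximation).
import math

def nominal_cell_size_degrees(cell_meters, center_lat_degrees):
    meters_per_degree_lat = 111320.0                                   # approximate
    meters_per_degree_lon = 111320.0 * math.cos(math.radians(center_lat_degrees))
    return (cell_meters / meters_per_degree_lon,                       # X (longitude) cell size
            cell_meters / meters_per_degree_lat)                       # Y (latitude) cell size

# Example: a 30-meter cell at 34 degrees north latitude.
dx_deg, dy_deg = nominal_cell_size_degrees(30.0, 34.0)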
Nearest Neighbor To determine an output pixel’s nearest neighbor, the rectified coordinates (xo, yo) of the pixel
are retransformed back to the source coordinate system using the inverse of the transformation.
The retransformed coordinates (xr, yr) are used in bilinear interpolation and cubic convolution
as well. The pixel that is closest to the retransformed coordinates (xr, yr) is the nearest neighbor.
The data file value(s) for that pixel become the data file value(s) of the pixel in the output image.
Advantages:
• Transfers original data values without averaging them as the other methods do; therefore, the
extremes and subtleties of the data values are not lost. This is an important consideration when
discriminating between vegetation types, locating an edge associated with a lineament, or
determining different levels of turbidity or temperatures in a lake (Jensen, 1996).
• Suitable for use before classification.
• The easiest of the three methods to compute and the fastest to use.
• Appropriate for thematic files, which can have data file values based on a qualitative (nominal
or ordinal) system or a quantitative (interval or ratio) system. The averaging that is performed
with bilinear interpolation and cubic convolution is not suited to a qualitative class value system.
Disadvantages:
• When this method is used to resample from a larger to a smaller grid size, there is usually a
stair-stepped effect around diagonal lines and curves.
• Data values may be dropped, while other values may be duplicated.
• Use on linear thematic data (e.g., roads, streams) may result in breaks or gaps in a network of
linear data.
Bilinear Interpolation In bilinear interpolation, the data file value of the rectified pixel is based upon the distances
between the retransformed coordinate location (xr, yr) and the four closest pixels in the input
(source) image (see Figure 10-15). In this example, the neighbor pixels are numbered 1, 2, 3,
and 4. Given the data file values of these four pixels on a grid, the task is to calculate a data file
value for r (Vr).
Figure 10-15: Bilinear interpolation: the retransformed coordinate (xr, yr) falls at point r among the four closest input pixels (1, 2, 3, 4); m and n lie between pixels 1 and 3 and pixels 2 and 4, dx and dy are the offsets of r from pixel 1 in X and Y, and D is the distance between adjacent pixels
To calculate Vr, the data file values Vm and Vn at points m and n are first estimated by
linear interpolation, which is a simple process to illustrate. If the data file values are plotted in
a graph relative to their distances from one another, then a visual linear interpolation is apparent.
The data file value of m (Vm) is a function of the change in the data file value between pixels 3
and 1 (that is, V3 − V1).
Figure 10-16: Linear interpolation of Vm from V1 and V3, plotted against the Y data file coordinates Y1, Ym, and Y3

Vm = ( V3 − V1 ) ⁄ D × dy + V1
Where:
Yi = the Y coordinate for pixel i
Vi = the data file value for pixel i
dy = the distance between Y1 and Ym in the source coordinate system
D = the distance between Y1 and Y3 in the source coordinate system
If one considers that ( V3 − V1 ) ⁄ D is the slope of the line in the graph above, then this equation
translates to the equation of a line in y = mx + b form.
Similarly, the equation for calculating the data file value for n (Vn) in the pixel grid is:
Vn = ( V4 − V2 ) ⁄ D × dy + V2
From Vn and Vm, the data file value for r, which is at the retransformed coordinate location
(xr, yr), can be calculated in the same manner:

Vr = ( Vn − Vm ) ⁄ D × dx + Vm
The following is attained by plugging in the equations for Vm and Vn to this final equation for Vr :

Vr = [ ( ( V4 − V2 ) ⁄ D × dy + V2 ) − ( ( V3 − V1 ) ⁄ D × dy + V1 ) ] ⁄ D × dx + ( V3 − V1 ) ⁄ D × dy + V1

Vr = [ V1 ( D − dx )( D − dy ) + V2 ( dx )( D − dy ) + V3 ( D − dx )( dy ) + V4 ( dx )( dy ) ] ⁄ D²
In most cases D = 1, since data file coordinates are used as the source coordinates and data file
coordinates increment by 1.
Some equations for bilinear interpolation express the output data file value as:
Vr = ∑ wi Vi
Where:
wi is a weighting factor
The equation above could be expressed in a similar format, in which the calculation of wi is
apparent:
Vr = Σ (i = 1 to 4) [ ( D − ∆xi ) ( D − ∆yi ) ⁄ D² ] × Vi
Where:
∆xi = the change in the X direction between (xr,yr) and the data file coordinate of
pixel i
∆yi = the change in the Y direction between (xr,yr) and the data file coordinate of
pixel i
Vi = the data file value for pixel i
D = the distance between pixels (in X or Y) in the source coordinate system
For each of the four pixels, the data file value is weighted more if the pixel is closer to (xr, yr).
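A minimal sketch of this weighting for the common case D = 1 (data file coordinates) follows; the pixel numbering matches Figure 10-15 and the example values are hypothetical.

# Sketch: bilinear interpolation for D = 1 (data file coordinates).
import math

def bilinear(values, xr, yr):
    """values maps integer (column, row) coordinates to data file values."""
    x0, y0 = math.floor(xr), math.floor(yr)
    dx, dy = xr - x0, yr - y0
    return (values[(x0,     y0    )] * (1 - dx) * (1 - dy) +    # pixel 1
            values[(x0 + 1, y0    )] * dx       * (1 - dy) +    # pixel 2
            values[(x0,     y0 + 1)] * (1 - dx) * dy +          # pixel 3
            values[(x0 + 1, y0 + 1)] * dx       * dy)           # pixel 4

pixels = {(0, 0): 10, (1, 0): 20, (0, 1): 30, (1, 1): 40}       # hypothetical values
vr = bilinear(pixels, 0.25, 0.5)                                 # 22.5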
Advantages:
• Results in output images that are smoother, without the stair-stepped effect that is possible with
nearest neighbor.
• More spatially accurate than nearest neighbor.
• This method is often used when changing the cell size of the data, such as in SPOT/TM merges
within the 2 × 2 resampling matrix limit.
Disadvantages:
• Since pixels are averaged, bilinear interpolation has the effect of a low-frequency convolution.
Edges are smoothed, and some extremes of the data file values are lost.
Cubic Convolution    Cubic convolution is similar to bilinear interpolation, except that:
• a set of 16 pixels, in a 4 × 4 array, are averaged to determine the output data file value, and
• a cubic function (rather than a linear function) is used to weight the 16 input pixels.
To identify the 16 pixels in relation to the retransformed coordinate (xr,yr), the pixel (i,j) is used,
such that:
i = int (xr)
j = int (yr)
This assumes that (xr,yr) is expressed in data file coordinates (pixels). The pixels around (i,j)
make up a 4 × 4 grid of input pixels, as illustrated in Figure 10-17.
Since a cubic, rather than a linear, function is used to weight the 16 input pixels, the pixels
farther from (xr, yr) have exponentially less weight than those closer to (xr, yr).
Several versions of the cubic convolution equation are used in the field. Different equations
have different effects upon the output data file values. Some convolutions may have more of the
effect of a low-frequency filter (like bilinear interpolation), serving to average and smooth the
values. Others may tend to sharpen the image, like a high-frequency filter. The cubic
convolution used in ERDAS IMAGINE is a compromise between low-frequency and high-
frequency. The general effect of the cubic convolution depends upon the data.
The formula used in ERDAS IMAGINE is:
Vr = Σ (n = 1 to 4) [ V( i−1, j+n−2 ) × f( d( i−1, j+n−2 ) + 1 )
                   + V( i,   j+n−2 ) × f( d( i,   j+n−2 ) )
                   + V( i+1, j+n−2 ) × f( d( i+1, j+n−2 ) − 1 )
                   + V( i+2, j+n−2 ) × f( d( i+2, j+n−2 ) − 2 ) ]

Where:
i = int (xr)
j = int (yr)
d(i,j) = the distance between a pixel with coordinates (i,j) and (xr, yr)
V(i,j) = the data file value of pixel (i,j)
Vr = the output data file value
a = −0.5 (a constant which differs in other applications of cubic convolution)
f(x) = the following function:

f(x) = (a + 2)|x|³ − (a + 3)|x|² + 1    if |x| < 1
f(x) = a|x|³ − 5a|x|² + 8a|x| − 4a      if 1 ≤ |x| < 2
f(x) = 0                                otherwise
In most cases, a value for a of -0.5 tends to produce output layers with a mean and standard
deviation closer to that of the original data (Atkinson, 1985).
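A sketch of the weighting function f(x) with a = −0.5 is shown below; the loop that applies it over the 4 × 4 window is omitted.

# Sketch: the cubic convolution weighting function with a = -0.5.
def cubic_weight(x, a=-0.5):
    x = abs(x)
    if x < 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    elif x < 2:
        return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
    return 0.0

# Example: weights for four columns when the retransformed coordinate lies
# 0.3 pixels from the nearest column; the weights sum to 1.
weights = [cubic_weight(d) for d in (-1.3, -0.3, 0.7, 1.7)]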
Advantages:
• Uses 4 × 4 resampling. In most cases, the mean and standard deviation of the output pixels
match the mean and standard deviation of the input pixels more closely than any other
resampling method.
• The effect of the cubic curve weighting can both sharpen the image and smooth out noise
(Atkinson, 1985). The actual effects depend upon the data being used.
• This method is recommended when you are dramatically changing the cell size of the data, such
as in TM/aerial photo merges (i.e., matches the 4 × 4 window more closely than the 2 × 2 window).
Disadvantages:
• Data values may be altered.
• This method is extremely slow.
Bicubic Spline Interpolation    Bicubic Spline Interpolation is based on fitting a cubic spline surface through the current block
of points. The output value is derived from the fitting surface, which retains the values of the
known points. This algorithm is much slower than other methods of interpolation, but it has the
advantage of giving a more exact fit to the curve without the oscillations that other interpolation
methods can create. Bicubic Spline Interpolation is so similar to Bilinear Interpolation that,
unless you need to maximize surface smoothness, you should use Bilinear
Interpolation.
Data Points
The known data points form an m × n raster array, with value Vi,j at each grid position (xi, yj),
for i = 1, 2, …, m and j = 1, 2, …, n, where:
xi+1 = xi + d
yj+1 = yj + d
Where:
1 ≤ i < m
1 ≤ j < n
d is the cell size of the raster
Vi,j is the cell value at (xi, yj)
Equations
A bicubic polynomial function V(x, y) is constructed as follows:

V(x, y) = Σ (p = 0 to 3) Σ (q = 0 to 3) ap,q(i,j) × ( x − xi )^p × ( y − yj )^q

in each cell

Rij = { (x, y) : xi ≤ x ≤ xi+1, yj ≤ y ≤ yj+1 },  i = 1, 2, …, m; j = 1, 2, …, n

subject to the following conditions:
• The functions and their first and second derivatives must be continuous across the interval and
equal at the endpoints, and the fourth derivatives of the equations should be zero.
• V( xi, yj ) = Vi,j for i = 1, 2, …, m; j = 1, 2, …, n; i.e., the spline must interpolate all data points.
• The coefficients ap,q(i,j) can be obtained by solving the equations for the known points,
together with the selected type of boundary condition. Please refer to Shikin and Plis (Shikin and Plis,
1995) for the boundary conditions and the mathematical details for solving the equations.
IMAGINE uses the first type of boundary condition. Because in IMAGINE the input raster
grid has been expanded by two cells around the boundary, the boundary condition has no
significant effect on the resampling.
For an output pixel whose retransformed coordinate (xr, yr) falls in cell (ir, jr), the interpolated value is:

V( xr, yr ) = Σ (p = 0 to 3) Σ (q = 0 to 3) ap,q(ir,jr) × ( xr − xir )^p × ( yr − yjr )^q

The value is determined by the 16 coefficients ap,q(ir,jr), together with xir, yjr, and xr, yr. Because the coefficients
are resolved by using all other known points, all other points contribute to the value. The nearer
points contribute more whereas the farther points contribute less.
Advantages:
• Results in the smoothest output images.
• More spatially accurate than nearest neighbor.
• This method is often used when upsampling.
Disadvantages:
• The most computationally intensive resampling method, and therefore the slowest.
Map-to-Map Coordinate Conversions    There are many instances when you may need to change a map that is already registered to a
planar projection to another projection. Some examples of when this is required are as follows
(Environmental Systems Research Institute, 1992):
• When the projection used for the files in the data base does not produce the desired
properties of a map.
• When it is necessary to combine data from more than one zone of a projection, such as
UTM or State Plane.
A change in the projection is a geometric change—distances, areas, and scale are represented
differently. Therefore, the conversion process requires that pixels be resampled.
Resampling causes some of the spectral integrity of the data to be lost (see the disadvantages of
the resampling methods explained previously). So, it is not usually wise to resample data that
have already been resampled if the accuracy of data file values is important to the application.
If the original unrectified data are available, it is usually wiser to rectify that data to a second
map projection system than to lose a generation by converting rectified data and resampling it
a second time.
Conversion Process    To convert the map coordinate system of any georeferenced image, ERDAS IMAGINE
provides a shortcut to the rectification process. In this procedure, GCPs are generated
automatically along the intersections of a grid that you specify. The program calculates the
reference coordinates for the GCPs with the appropriate conversion formula and computes a
transformation that can be used in the regular rectification process.
Vector Data Converting the map coordinates of vector data is much easier than converting raster data. Since
vector data are stored by the coordinates of nodes, each coordinate is simply converted using
the appropriate conversion formula. There are no coordinates between nodes to extrapolate.
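As a sketch only (ERDAS IMAGINE performs this conversion internally), point coordinates can be converted between projections with a third-party library such as pyproj, assuming it is installed; the EPSG codes and node coordinates below are hypothetical.

# Sketch: convert point coordinates from one map projection to another (assumes pyproj).
from pyproj import Transformer

# Hypothetical example: UTM zone 16N (EPSG:32616) to geographic Lat/Lon (EPSG:4326).
transformer = Transformer.from_crs("EPSG:32616", "EPSG:4326", always_xy=True)

nodes = [(740000.0, 3735000.0), (741500.0, 3736200.0)]          # hypothetical vector nodes
converted = [transformer.transform(x, y) for x, y in nodes]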
Chapter 11
Terrain Analysis
Introduction Terrain analysis involves the processing and graphic simulation of elevation data. Terrain
analysis software functions usually work with topographic data (also called terrain data or
elevation data), in which an elevation (or Z value) is recorded at each X,Y location. However,
terrain analysis functions are not restricted to topographic data. Any series of values, such as
population densities, ground water pressure values, magnetic and gravity measurements, and
chemical concentrations, may be used.
Topographic data are essential for studies of trafficability, route design, nonpoint source
pollution, intervisibility, siting of recreation areas, etc. (Welch, 1990). Especially useful are
products derived from topographic data. These include:
• slope images—illustrate changes in elevation over distance. Slope images are usually
color-coded according to the steepness of the terrain at each pixel.
• aspect images—illustrate the prevailing direction that the slope faces at each pixel.
Topographic data and its derivative products have many applications, including:
• calculating the shortest and most navigable path over a mountain range for constructing a
road or routing a transmission line
• determining rates of snow melt based on variations in sun shadow, which is influenced by
slope, aspect, and elevation
Terrain data are often used as a component in complex GIS modeling or classification routines.
They can, for example, be a key to identifying wildlife habitats that are associated with specific
elevations. Slope and aspect images are often an important factor in assessing the suitability of
a site for a proposed use. Terrain data can also be used for vegetation classification based on
species that are terrain-sensitive (e.g., Alpine vegetation).
Although this chapter mainly discusses the use of topographic data, the ERDAS IMAGINE
terrain analysis functions can be used on data types other than topographic data.
See Chapter 12 “Geographic Information Systems” for more information about GIS
modeling.
Topographic Data Topographic data are usually expressed as a series of points with X,Y, and Z values. When
topographic data are collected in the field, they are surveyed at a series of points including the
extreme high and low points of the terrain along features of interest that define the topography
such as streams and ridge lines, and at various points in between.
DEM and DTED are expressed as regularly spaced points. To create DEM and DTED files, a
regular grid is overlaid on the topographic contours. Elevations are read at each grid intersection
point, as shown in Figure 11-1.
Figure 11-1: Regular grid overlaid on topographic contours, with an elevation value read at each grid intersection
Elevation data are derived from ground surveys and through manual photogrammetric methods.
Elevation points can also be generated through digital orthographic methods.
See Chapter 3 “Raster and Vector Data Sources” for more details on DEM and DTED
data. See Chapter 8 “Photogrammetric Concepts” for more information on the digital
orthographic process.
DEMs can be edited with the Raster Editing capabilities of ERDAS IMAGINE. See
Chapter 1 “Raster Data” for more information.
Slope Images Slope is expressed as the change in elevation over a certain distance. In this case, the certain
distance is the size of the pixel. Slope is most often expressed as a percentage, but can also be
calculated in degrees.
In ERDAS IMAGINE, the relationship between percentage and degree expressions of slope is
as follows:
• slopes between 0° and 45° are expressed as 0 - 100% slopes
• slopes between 45° and 90° are expressed as 100 - 200% slopes
A 3 × 3 pixel window is used to calculate the slope at each pixel. For a pixel at location X,Y,
the elevations around it are used to calculate the slope as shown in Figure 11-2. In Figure 11-2,
each pixel has a ground resolution of 30 × 30 meters.
Figure 11-2: The 3 × 3 window used to calculate the slope at pixel X,Y (the center pixel, elevation e), with the example elevations:
a = 10 m    b = 20 m    c = 25 m
d = 22 m    e = 30 m    f = 25 m
g = 20 m    h = 24 m    i = 18 m
First, the average elevation changes per unit of distance in the x and y direction (∆x and ∆y) are
calculated as:
∆x1 = c − a        ∆y1 = a − g
∆x2 = f − d        ∆y2 = b − h
∆x3 = i − g        ∆y3 = c − i
∆x = ( ∆x1 + ∆x2 + ∆x3 ) ⁄ ( 3 × xs )
∆y = ( ∆y1 + ∆y2 + ∆y3 ) ⁄ ( 3 × ys )
Where:
a...i = elevation values of pixels in a 3 × 3 window, as shown above
xs = x pixel size = 30 meters
ys = y pixel size = 30 meters
The slope at pixel x,y is calculated as:
s = √( (∆x)² + (∆y)² ) ⁄ 2
if s ≤ 1:    percent slope = s × 100
if s > 1:    percent slope = 200 − 100 ⁄ s
slope in degrees = tan⁻¹( s ) × ( 180 ⁄ π )
Example
Slope images are often used in road planning. For example, if the Department of Transportation
specifies a maximum of 15% slope on any road, it would be possible to recode all slope values
that are greater than 15% as unsuitable for road building.
A hypothetical example is given in Figure 11-3, which shows how the slope is calculated for a
single pixel.
Figure 11-3: Slope calculation example, using the 3 × 3 window of elevations shown in Figure 11-2
∆x = ( 15 + 3 − 2 ) ⁄ ( 30 × 3 ) = 0.177
∆y = ( −10 − 4 + 7 ) ⁄ ( 30 × 3 ) = −0.078
s = √( (0.177)² + (−0.078)² ) ⁄ 2 = 0.0967
slope in degrees = tan⁻¹( 0.0967 ) × 57.30 = 5.54
percent slope = 0.0967 × 100 = 9.67%
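A minimal sketch of the slope calculation for a single 3 × 3 window follows; it is an illustration rather than the Image Interpreter code, and run on the example window above it reproduces the values shown, within rounding.

# Sketch: percent and degree slope for one 3 x 3 elevation window.
import math

def slope(a, b, c, d, e, f, g, h, i, xs=30.0, ys=30.0):
    dx = ((c - a) + (f - d) + (i - g)) / (3 * xs)
    dy = ((a - g) + (b - h) + (c - i)) / (3 * ys)
    s = math.sqrt(dx * dx + dy * dy) / 2
    percent = s * 100 if s <= 1 else 200 - 100 / s
    degrees = math.degrees(math.atan(s))
    return percent, degrees

# Example window from Figure 11-2: approximately 9.7% and 5.5 degrees.
percent, degrees = slope(10, 20, 25, 22, 30, 25, 20, 24, 18)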
Aspect Images An aspect image is an image file that is gray scale coded according to the prevailing direction
of the slope at each pixel. Aspect is expressed in degrees from north, clockwise, from 0 to 360.
Due north is 0 degrees. A value of 90 degrees is due east, 180 degrees is due south, and 270
degrees is due west. A value of 361 degrees is used to identify flat surfaces such as water bodies.
As with slope calculations, aspect uses a 3 × 3 window around each pixel to calculate the
prevailing direction it faces. For pixel x,y with the following elevation values around it, the
average changes in elevation in both x and y directions are calculated first. Each pixel is 30 ×
30 meters in the following example:
a b c
d e f
g h i
∆x1 = c − a        ∆y1 = a − g
∆x2 = f − d        ∆y2 = b − h
∆x3 = i − g        ∆y3 = c − i
Where:
a...i = elevation values of pixels in a 3 × 3 window as shown above
∆x = ( ∆x1 + ∆x2 + ∆x3 ) ⁄ 3
∆y = ( ∆y1 + ∆y2 + ∆y3 ) ⁄ 3
If ∆x = 0 and ∆y = 0, then the aspect is flat (coded to 361 degrees). Otherwise, θ is calculated as:
θ = tan⁻¹( ∆x ⁄ ∆y )
Example
Aspect files are used in many of the same applications as slope files. In transportation planning,
for example, north facing slopes are often avoided. Especially in northern climates, these would
be exposed to the most severe weather and would hold snow and ice the longest. It would be
possible to recode all pixels with north facing aspects as undesirable for road building.
A hypothetical example is given in Figure 11-5, which shows how the aspect is calculated for a
single pixel.
Figure 11-5: Aspect calculation example, using the same 3 × 3 window of elevations as in Figure 11-2
∆x = ( 15 + 3 − 2 ) ⁄ 3 = 5.33
∆y = ( −10 − 4 + 7 ) ⁄ 3 = −2.33
θ = tan⁻¹( 5.33 ⁄ −2.33 ) = 1.98
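A sketch of the aspect calculation for a single 3 × 3 window follows, using atan2 as a quadrant-aware form of tan⁻¹( ∆x ⁄ ∆y ); the final conversion to a 0 to 360 degree aspect is an assumption about the convention, not the exact ERDAS IMAGINE code.

# Sketch: aspect for one 3 x 3 elevation window.
import math

def aspect(a, b, c, d, e, f, g, h, i):
    dx = ((c - a) + (f - d) + (i - g)) / 3
    dy = ((a - g) + (b - h) + (c - i)) / 3
    if dx == 0 and dy == 0:
        return 361.0                             # flat surface code
    theta = math.atan2(dx, dy)                   # quadrant-aware tan^-1(dx / dy)
    return math.degrees(theta) % 360

# Example window from Figures 11-2 and 11-5: theta is about 1.98 radians (114 degrees).
az = aspect(10, 20, 25, 22, 30, 25, 20, 24, 18)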
Shaded Relief A shaded relief image provides an illustration of variations in elevation. Based on a user-
specified position of the sun, areas that would be in sunlight are highlighted and areas that would
be in shadow are shaded. Shaded relief images are generated from an elevation surface, alone
or in combination with an image file draped over the terrain.
It is important to note that the relief program identifies shadowed areas—i.e., those that are not
in direct sun. It does not calculate the shadow that is cast by topographic features onto the
surrounding surface.
For example, a high mountain with sunlight coming from the northwest would be symbolized
as follows in shaded relief. Only the portions of the mountain that would be in shadow from a
northwest light would be shaded. The software would not simulate a shadow that the mountain
would cast on the southeast side.
Shaded relief images are an effective graphic tool. They can also be used in analysis, e.g., snow
melt over an area spanned by an elevation surface. A series of relief images can be generated to
simulate the movement of the sun over the landscape. Snow melt rates can then be estimated for
each pixel based on the amount of time it spends in sun or shadow. Shaded relief images can
also be used to enhance subtle detail in gray scale images such as aeromagnetic, radar, gravity
maps, etc.
Use the Shaded Relief function in Image Interpreter to generate a relief image.
In calculating relief, the software compares the user-specified sun position and angle with the
angle each pixel faces. Each pixel is assigned a value between -1 and +1 to indicate the amount
of light reflectance at that pixel.
• Positive numbers represent sunny areas, with +1 assigned to the areas of highest
reflectance.
The reflectance values are then applied to the original pixel values to get the final result. All
negative values are set to 0 or to the minimum light level specified by you. These indicate
shadowed areas. Light reflectance in sunny areas falls within a range of values depending on
whether the pixel is directly facing the sun or not. (In the example above, pixels facing
northwest would be the brightest. Pixels facing north-northwest and west-northwest would not
be quite as bright.)
In a relief file, which is a DEM that shows surface relief, the surface reflectance values are
multiplied by the color lookup values for the image file.
Topographic Normalization    Digital imagery from mountainous regions often contains a radiometric distortion known as
topographic effect. Topographic effect results from the differences in illumination due to the
angle of the sun and the angle of the terrain. This causes a variation in the image brightness
values. Topographic effect is a combination of:
• incident illumination —the orientation of the surface with respect to the rays of the sun
One way to reduce topographic effect in digital imagery is by applying transformations based
on the Lambertian or Non-Lambertian reflectance models. These models normalize the
imagery, which makes it appear as if it were a flat surface.
When using the Topographic Normalization model, the following information is needed:
• DEM file
Lambertian Reflectance Model    The Lambertian Reflectance model assumes that the surface reflects incident solar energy
uniformly in all directions, and that variations in reflectance are due to the amount of incident
radiation.
The following equation produces normalized brightness values (Colby, 1991; Smith et al,
1980):
BVnormal λ = BV observed λ / cos i
Where:
BVnormal λ = normalized brightness values
BVobserved λ = observed brightness values
cos i = cosine of the incidence angle
Incidence Angle
The incidence angle is defined from:
cos i = cos (90 - θs) cos θn + sin (90 - θs) sin θn cos (φs - φn)
Where:
i = the angle between the solar rays and the normal to the surface
θs = the elevation of the sun
φs = the azimuth of the sun
θn = the slope of each surface element
φn = the aspect of each surface element
If the surface has a slope of 0 degrees, then aspect is undefined and i is simply 90 - θs.
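A sketch of the Lambertian correction for a single pixel value, following the equations above (this is an illustration, not the Topographic Normalization model itself); all angles are in degrees.

# Sketch: Lambertian topographic normalization for one pixel value.
import math

def lambertian_normalize(bv_observed, sun_elev, sun_azimuth, slope, aspect):
    zen = math.radians(90 - sun_elev)            # solar zenith angle, 90 - theta_s
    sl = math.radians(slope)
    rel_az = math.radians(sun_azimuth - aspect)  # phi_s - phi_n
    cos_i = math.cos(zen) * math.cos(sl) + math.sin(zen) * math.sin(sl) * math.cos(rel_az)
    if cos_i <= 0:
        return bv_observed                       # pixel faces away from the sun; left uncorrected here
    return bv_observed / cos_i

# Hypothetical pixel: sun at 45 degrees elevation and 135 degrees azimuth.
bv = lambertian_normalize(120.0, sun_elev=45.0, sun_azimuth=135.0, slope=20.0, aspect=180.0)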
Non-Lambertian Model Minnaert (Minnaert and Szeicz, 1961) proposed that the observed surface does not reflect
incident solar energy uniformly in all directions. Instead, he formulated the Non-Lambertian
model, which takes into account variations in the terrain. This model, although more
computationally demanding than the Lambertian model, may present more accurate results.
Minnaert Constant
The Minnaert constant (k) may be found by regressing a set of observed brightness values from
the remotely sensed imagery with known slope and aspect values, provided that all the
observations in this set are the same type of land cover. The k value is the slope of the regression
line (Hodgson and Shelley, 1994):
log ( BVobserved λ × cos e ) = log BVnormal λ + k log ( cos i × cos e )
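The Minnaert constant can then be estimated as the slope of that regression. The following sketch uses a least squares fit (assuming NumPy is available); the sample values are hypothetical and must all come from a single land cover type.

# Sketch: estimate the Minnaert constant k as the slope of the regression of
# log(BV_observed * cos e) against log(cos i * cos e) for one land cover type.
import numpy as np

def minnaert_k(bv_observed, cos_i, cos_e):
    x = np.log(np.asarray(cos_i) * np.asarray(cos_e))
    y = np.log(np.asarray(bv_observed) * np.asarray(cos_e))
    k, intercept = np.polyfit(x, y, 1)           # slope is k; intercept is log(BV_normal)
    return k

# Hypothetical samples of one cover type.
k = minnaert_k([95.0, 88.0, 102.0], [0.82, 0.71, 0.90], [0.97, 0.95, 0.98])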
Use the Spatial Modeler to create a model based on the Non-Lambertian model.
NOTE: The Non-Lambertian model does not detect surfaces that are shadowed by intervening
topographic features between each pixel and the sun. For these areas, a line-of-sight algorithm
can identify such shadowed pixels.
Chapter 12
Introduction The dawning of GIS can legitimately be traced back to the beginning of the human race. The
earliest known map dates back to 2500 B.C., but there were probably maps before that time.
Since then, humans have been continually improving the methods of conveying spatial
information. The mid-eighteenth century brought the use of map overlays to show troop
movements in the Revolutionary War. This could be considered an early GIS. The first British
census in 1825 led to the science of demography, another application for GIS. During the 1800s,
many different cartographers and scientists were all discovering the power of overlays to
convey multiple levels of information about an area (Star and Estes, 1990).
Frederick Law Olmstead has long been considered the father of Landscape Architecture for his
pioneering work in the early 20th century. Many of the methods Olmstead used in Landscape
Architecture also involved the use of hand-drawn overlays. This type of analysis was beginning
to be used for a much wider range of applications, such as change detection, urban planning,
and resource management (Rado, 1992).
The first system to be called a GIS was the Canadian Geographic Information System,
developed in 1962 by Roger Tomlinson of the Canada Land Inventory. Unlike earlier systems
that were developed for a specific application, this system was designed to store digitized map
data and land-based attributes in an easily accessible format for all of Canada. This system is
still in operation today (Parent and Church, 1987).
In 1969, Ian McHarg’s influential work, Design with Nature, was published. This work on land
suitability/capability analysis (SCA), a system designed to analyze many data layers to produce
a plan map, discussed the use of overlays of spatially referenced data layers for resource
planning and management (Star and Estes, 1990).
The era of modern GIS really started in the 1970s, as analysts began to program computers to
automate some of the manual processes. Software companies like ESRI and ERDAS developed
software packages that could input, display, and manipulate geographic data to create new
layers of information. The steady advances in features and power of the hardware over the last
ten years—and the decrease in hardware costs—have made GIS technology accessible to a wide
range of users. The growth rate of the GIS industry in the last several years has exceeded even
the most optimistic projections.
Today, a GIS is a unique system designed to input, store, retrieve, manipulate, and analyze
layers of geographic data to produce interpretable information. A GIS should also be able to
create reports and maps (Marble, 1990). The GIS database may include computer images,
hardcopy maps, statistical data, or any other data that is needed in a study. Although the term
GIS is commonly used to describe software packages, a true GIS includes knowledgeable staff,
a training program, budgets, marketing, hardware, data, and software (Walker and Miller,
1990). GIS technology can be used in almost any geography-related discipline, from Landscape
Architecture to natural resource management to transportation routing.
The central purpose of a GIS is to turn geographic data into useful information—the answers to
real-life questions—questions such as:
• How can we monitor the influence of global climatic changes on the Earth’s resources?
• Where is the best place for a shopping center that is most convenient to shoppers and least
harmful to the local ecology?
• How can communities be better prepared to face natural disasters, such as earthquakes,
tornadoes, hurricanes, and floods?
Information vs. Data Information, as opposed to data, is independently meaningful. It is relevant to a particular
problem or question:
• “The land cover at coordinate N875250, E757261 has a data file value 8,” is data.
• “Land cover with a value of 8 is on slopes too steep for development,” is information.
You can input data into a GIS and output information. The information you wish to derive
determines the type of data that must be input. For example, if you are looking for a suitable
refuge for bald eagles, zip code data is probably not needed, while land cover data may be
useful.
For this reason, the first step in any GIS project is usually an assessment of the scope and goals
of the study. Once the project is defined, you can begin the process of building the database.
Although software and data are commercially available, a custom database must be created for
the particular project and study area. The database must be designed to meet the needs of the
organization and objectives. ERDAS IMAGINE provides tools required to build and
manipulate a GIS database.
Successful GIS implementation typically includes two major steps:
• data input
• analysis
Data input involves collecting the necessary data layers into a GIS database. In the analysis
phase, these data layers are combined and manipulated in order to create new layers and to
extract meaningful information from them. This chapter discusses these steps in detail.
Data Input Acquiring the appropriate data for a project involves creating a database of layers that
encompasses the study area. A database created with ERDAS IMAGINE can consist of:
Raster Data                 Vector Data
Landsat TM                  Roads
SPOT panchromatic           Census data
Aerial photograph           Ownership parcels
Soils data                  Political boundaries
Land cover                  Landmarks
Raster data may be better suited for these applications:
• site selection
• petroleum exploration
• mission planning
• change detection
On the other hand, vector data may be better suited for these applications:
• urban planning
• traffic engineering
• facilities management
The advantage of an integrated raster and vector system such as ERDAS IMAGINE is that one
data structure does not have to be chosen over the other. Both data formats can be used and the
functions of both types of systems can be accessed. Depending upon the project, only raster or
vector data may be needed, but most applications benefit from using both.
Continuous Layers Continuous raster layers are quantitative (measuring a characteristic) and have related,
continuous values. Continuous raster layers can be multiband (e.g., Landsat TM) or single band
(e.g., SPOT panchromatic).
Satellite images, aerial photographs, elevation data, scanned maps, and other continuous raster
layers can be incorporated into a database and provide a wealth of information that is not
available in thematic layers or vector layers. In fact, these layers often form the foundation of
the database. Extremely accurate base maps can be created from rectified satellite images or
aerial photographs. Then, all other layers that are added to the database can be registered to this
base map.
Once used only for image processing, continuous data are now being incorporated into GIS
databases and used in combination with thematic data to influence processing algorithms or as
backdrop imagery on which to display the results of analyses. Current satellite data and aerial
photographs are also effective in updating outdated vector data. The vectors can be overlaid on
the raster backdrop and updated dynamically to reflect new or changed features, such as roads,
utility lines, or land use. This chapter explores the many uses of continuous data in a GIS.
Thematic Layers Thematic data are typically represented as single layers of information stored as image files and
containing discrete classes. Classes are simply categories of pixels which represent the same
condition. An example of a thematic layer is a vegetation classification with discrete classes
representing coniferous forest, deciduous forest, wetlands, agriculture, urban, etc.
A thematic layer is sometimes called a variable, because it represents one of many
characteristics about the study area. Since thematic layers usually have only one band, they are
usually displayed in pseudo color mode, where particular colors are often assigned to help
visualize the information. For example, blues are usually used for water features, greens for
healthy vegetation, etc.
A thematic layer can use any of the following class numbering systems:
• Nominal classes represent categories with no particular order. Usually, these are
characteristics that are not associated with quantities (e.g., soil type or political area).
• Ordinal classes are those that have a sequence, such as poor, good, better, and best. An
ordinal class numbering system is often created from a nominal system, in which classes
have been ranked by some criteria. In the case of the recreation department database used
in the previous example, the final layer may rank the proposed park sites according to their
overall suitability.
• Interval classes also have a natural sequence, but the distance between each value is
meaningful as well. This numbering system might be used for temperature data.
• Ratio classes differ from interval classes only in that ratio classes have a natural zero point,
such as rainfall amounts.
The variable being analyzed, and the way that it contributes to the final product, determines the
class numbering system used in the thematic layers. Layers that have one numbering system can
easily be recoded to a new system. This is discussed in detail under “Recoding”.
Classification
Thematic layers can be generated from remotely sensed data (e.g., Landsat TM, SPOT) by using
the ERDAS IMAGINE Image Interpreter, Classification, and Spatial Modeler tools. A frequent
and popular application is the creation of land cover classification schemes through the use of
both supervised (user-assisted) and unsupervised (automatic) pattern-recognition algorithms
contained within ERDAS IMAGINE. The output is a single thematic layer that represents
specific classes based on the approach selected.
Use the Vector Utilities menu from the Vector icon in the ERDAS IMAGINE icon panel to
convert vector layers to raster format, or use the vector layers directly in Spatial Modeler.
Other sources of raster data are discussed in Chapter 3 “Raster and Vector Data
Sources”.
Statistics Both continuous and thematic layers include statistical information. Thematic layers contain the
following information:
• a histogram of the data values, which is the total number of pixels in each class
• a color table, stored as brightness values in red, green, and blue, which make up the colors
of each class when the layer is displayed
For thematic data, these statistics are called attributes and may be accompanied by many other
types of information, as described in “Attributes”.
Use the Image Information option on the Viewer’s tool bar to generate or update statistics
for image files.
See Chapter 1 “Raster Data” for more information about the statistics stored with
continuous layers.
Vector Layers The vector layers used in ERDAS IMAGINE are based on the ArcInfo data model and consist
of points, lines, and polygons. These layers are topologically complete, meaning that the spatial
relationships between features are maintained. Vector layers can be used to represent
transportation routes, utility corridors, communication lines, tax parcels, school zones, voting
districts, landmarks, population density, etc. Vector layers can be analyzed independently or in
combination with continuous and thematic raster layers.
In ERDAS IMAGINE, vector layers may also be shapefiles based on the ArcView data model.
Vector data can be acquired from several private and governmental agencies. Vector data can
also be created in ERDAS IMAGINE by digitizing on the screen, using a digitizing tablet, or
converting other data types to vector format.
See Chapter 2 “Vector Layers” for more information on the characteristics of vector data.
Attributes Text and numerical data that are associated with the classes of a thematic layer or the features
in a vector layer are called attributes. This information can take the form of character strings,
integer numbers, or floating point numbers. Attributes work much like the data that are handled
by database management software. You may define fields, which are categories of information
about each class. A record is the set of all attribute data for one class. Each record is like an index
card, containing information about one class or feature in a file of many index cards, which
contain similar information for the other classes or features.
Attribute information for raster layers is stored in the image file. Vector attribute information is
stored in either an INFO file, dbf file, or SDE database. In both cases, there are fields that are
automatically generated by the software, but more fields can be added as needed to fully
describe the data. Both are viewed in CellArrays, which allow you to display and manipulate
the information. However, raster and vector attributes are handled slightly differently, so a
separate section on each follows.
Raster Attributes In ERDAS IMAGINE, raster attributes for image files are accessible from the Raster Attribute
Editor. The Raster Attribute Editor contains a CellArray, which is similar to a table or
spreadsheet that not only displays the information, but also includes options for importing,
exporting, copying, editing, and other operations.
Figure 12-2 shows the attributes for a land cover classification layer.
These attributes include fields such as:
• Class Name
• Class Value
• Opacity percentage
As many additional attribute fields as needed can be defined for each class.
See Chapter 7 “Classification” for more information about the attribute information that
is automatically generated when new thematic layers are created in the classification
process.
In the Raster Attribute Editor, you can:
• cut, copy, and paste individual cells, rows, or columns to and from the same Raster
Attribute Editor or among several Raster Attribute Editors
• generate reports that include all or a subset of the information in the Raster Attribute Editor
The Raster Attribute Editor in ERDAS IMAGINE also includes a color cell column, so that
class (object) colors can be viewed or changed. In addition to direct manipulation, attributes can
be changed by other programs. For example, some of the Image Interpreter functions calculate
statistics that are automatically added to the Raster Attribute Editor. Models that read and/or
modify attribute information can also be written.
See Chapter 6 “Enhancement” for more information on the Image Interpreter. There is
more information on GIS modeling in “Graphical Modeling”.
Vector Attributes Vector attributes are stored in the Vector Attributes CellArrays. You can simply view attributes
or use them to:
• label features
See Chapter 2 “Vector Layers” for more information about vector attributes.
Analysis
ERDAS IMAGINE Analysis Tools   In ERDAS IMAGINE, GIS analysis functions and algorithms are accessible through three main tools:
Model Maker
Model Maker is essentially SML linked to a graphical interface. This enables you to create
graphical models using a palette of easy-to-use tools. Graphical models can be run, edited, saved
in libraries, or converted to script form and edited further, using SML.
NOTE: References to the Spatial Modeler in this chapter mean that the named procedure can
be accomplished using both Model Maker and SML.
Image Interpreter
The Image Interpreter houses a set of common functions that were all created using either Model
Maker or SML. They have been given a dialog interface to match the other processes in ERDAS
IMAGINE. In most cases, these processes can be run from a single dialog. However, the actual
models are also provided with the software to enable customized processing.
Many of the functions described in the following sections can be accomplished using any of
these tools. Model Maker is also easy to use and utilizes many of the same steps that would be
performed when drawing a flow chart of an analysis. SML is intended for more advanced
analyses, and has been designed using natural language commands and simple syntax rules.
Some applications may require a combination of these tools.
See the ERDAS IMAGINE On-Line Help for more information about EML and the
IMAGINE Developers’ Toolkit.
Analysis Procedures Once the database (layers and attribute data) is assembled, the layers can be analyzed and new
information extracted. Some information can be extracted simply by looking at the layers and
visually comparing them to other layers. However, new information can be retrieved by
combining and comparing layers using the following procedures:
• Contiguity analysis—enables you to identify regions of pixels in the same class and to filter
out small regions.
• Neighborhood analysis —any image processing technique that takes surrounding pixels
into consideration, such as convolution filtering and scanning. This is similar to the
convolution filtering performed on continuous data. Several types of analyses can be
performed, such as boundary, density, mean, sum, etc.
• Recoding—enables you to assign new class values to all or a subset of the classes in a layer.
• Overlaying—creates a new file with either the maximum or minimum value of the input
layers.
• Script modeling—offers all of the capabilities of graphical modeling with the ability to
perform more complex functions, such as conditional looping.
Proximity Analysis Many applications require some measurement of distance or proximity. For example, a real
estate developer would be concerned with the distance between a potential site for a shopping
center and an interchange to a major highway.
Proximity analysis determines which pixels of a layer are located at specified distances from
pixels in a certain class or classes. A new thematic layer (image file) is created, which is
categorized by the distance of each pixel from specified classes of the input layer. This new file
then becomes a new layer of the database and provides a buffer zone around the specified
class(es). In further analysis, it may be beneficial to weight other factors, based on whether they
fall inside or outside the buffer zone.
Figure 12-4 shows a layer containing lakes and streams and the resulting layer after a proximity
analysis is run to create a buffer zone around all of the water features.
Use the Search (GIS Analysis) function in Image Interpreter or Spatial Modeler to perform
a proximity analysis.
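As a minimal sketch of the general idea, the following Python example uses a Euclidean distance transform (SciPy) rather than the ERDAS IMAGINE Search function; the function name and class coding are illustrative only.

import numpy as np
from scipy import ndimage

def proximity_buffer(layer, target_classes, buffer_distance, pixel_size=10.0):
    """Create a buffer-zone layer: 2 for the target classes, 1 inside the
    buffer around them, 0 elsewhere."""
    target = np.isin(layer, target_classes)
    # Distance (in layer units) from every pixel to the nearest target pixel.
    distance = ndimage.distance_transform_edt(~target, sampling=pixel_size)
    out = np.zeros(layer.shape, dtype=np.uint8)
    out[distance <= buffer_distance] = 1
    out[target] = 2
    return out

# Hypothetical water layer: 1 = lake, 2 = stream, 0 = background.
water = np.array([[1, 0, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 2]])
print(proximity_buffer(water, target_classes=[1, 2], buffer_distance=10.0))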
Contiguity Analysis   Contiguity analysis is concerned with the ways in which pixels of a class are grouped together.
Groups of contiguous pixels in the same class, called raster regions, or clumps, can be identified
by their sizes and manipulated. One application of this tool would be an analysis for locating
helicopter landing zones that require at least 250 contiguous pixels at 10-meter resolution.
Contiguity analysis can be used to: 1) divide a large class into separate raster regions, or 2)
eliminate raster regions that are too small to be considered for an application.
Filtering Clumps
In cases where very small clumps are not useful, they can be filtered out according to their sizes.
This is sometimes referred to as eliminating the salt and pepper effects, or sieving. In Figure 12-
5, all of the small clumps in the original (clumped) layer are eliminated.
Use the Clump and Sieve (GIS Analysis) function in Image Interpreter or Spatial Modeler
to perform contiguity analysis.
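The following Python sketch illustrates the clump-and-sieve idea using SciPy connected-component labeling; it is not the ERDAS IMAGINE implementation, and the class value and size threshold are hypothetical.

import numpy as np
from scipy import ndimage

def clump_and_sieve(class_layer, class_value, min_pixels):
    """Identify contiguous raster regions (clumps) of one class and remove
    (sieve) those smaller than min_pixels."""
    mask = class_layer == class_value
    clumps, n_clumps = ndimage.label(mask)                     # number each contiguous region
    sizes = ndimage.sum(mask, clumps, range(1, n_clumps + 1))  # pixel count per clump
    keep = np.zeros(n_clumps + 1, dtype=bool)
    keep[1:] = sizes >= min_pixels                             # clumps large enough to keep
    return np.where(keep[clumps], clumps, 0)                   # small clumps become background (0)

# e.g., landing zones needing at least 250 contiguous pixels at 10-meter resolution:
# zones = clump_and_sieve(land_cover, class_value=7, min_pixels=250)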
Neighborhood Analysis   With a process similar to the convolution filtering of continuous raster layers, thematic raster
layers can also be filtered. The GIS filtering process is sometimes referred to as scanning, but
is not to be confused with data capture via a digital camera. Neighborhood analysis is based on
local or neighborhood characteristics of the data (Star and Estes, 1990).
Every pixel is analyzed spatially, according to the pixels that surround it. The number and the
location of the surrounding pixels is determined by a scanning window, which is defined by you.
These operations are known as focal operations. The scanning window can be of any size in
SML. In Model Maker, it has the following constraints:
• rectangular, up to 512 × 512 pixels, with the option to mask-out certain pixels
Use the Neighborhood (GIS Analysis) function in Image Interpreter or Spatial Modeler to
perform neighborhood analysis. The scanning window used in Image Interpreter can be
3 × 3, 5 × 5, or 7 × 7. The scanning window in Model Maker is defined by you and can be
up to 512 × 512. The scanning window in SML can be of any size.
The area of the file to be scanned can be defined in any of the following ways:
• Specify a rectangular portion of the file to scan. The output layer contains only the specified
area.
• Specify an area that is defined by an existing AOI layer, an annotation overlay, or a vector
layer. The area(s) within the polygon are scanned, and the other areas remain the same. The
output layer is the same size as the input layer or the selected rectangular portion.
• Specify a class or classes in another thematic layer to be used as a mask. The pixels in the
scanned layer that correspond to the pixels of the selected class or classes in the mask layer
are scanned, while the other pixels remain the same.
In Figure 12-6, class 2 in the mask layer was selected for the mask. Only the corresponding
(shaded) pixels in the target layer are scanned—the other values remain unchanged.
Neighborhood analysis creates a new thematic layer. There are several types of analysis that can
be performed upon each window of pixels, as described below:
• Boundary—detects boundaries between classes. The output layer contains only boundary
pixels. This is useful for creating boundary or edge lines from classes, such as a land/water
interface.
• Density—outputs the number of pixels that have the same class value as the center
(analyzed) pixel. This is also a measure of homogeneity (sameness), based upon the
analyzed pixel. This is often useful in assessing vegetation crown closure.
• Diversity—outputs the number of class values that are present within the window.
Diversity is also a measure of heterogeneity (difference).
• Majority—outputs the class value that represents the majority of the class values in the
window. The value is defined by you. This option operates like a low-frequency filter to
clean up a salt and pepper layer.
• Maximum—outputs the greatest class value within the window. This can be used to
emphasize classes with the higher class values or to eliminate linear features or boundaries.
• Mean—averages the class values. If class values represent quantitative data, then this
option can work like a convolution filter. This is mostly used on ordinal or interval data.
• Median—outputs the statistical median of the class values in the window. This option may
be useful if class values represent quantitative data.
• Minimum—outputs the least or smallest class value within the window. The value is
defined by you. This can be used to emphasize classes with the low class values.
• Minority—outputs the least common of the class values that are within the window. This
option can be used to identify the least common classes. It can also be used to highlight
disconnected linear features.
• Rank—outputs the number of pixels in the scan window whose value is less than the center
pixel.
• Sum—totals the class values. In a file where class values are ranked, totaling enables you
to further rank pixels based on their proximity to high-ranking pixels.
Output of one iteration of the sum operation: 8 + 6 + 6 + 2 + 8 + 6 + 2 + 2 + 8 = 48
The analyzed pixel is always the center pixel of the scanning window. In this example, only
the pixel in the third column and third row of the file is summed.
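The sum example above can be reproduced with a generic focal filter. The following Python sketch (not the ERDAS IMAGINE implementation) applies a 3 × 3 sum window to the sample values; the boundary mode is an assumption and does not affect the interior pixel shown.

import numpy as np
from scipy import ndimage

layer = np.array([[2, 8, 6, 6, 6],
                  [2, 8, 6, 6, 6],
                  [2, 2, 8, 6, 6],
                  [2, 2, 2, 8, 6],
                  [2, 2, 2, 2, 8]])

# Focal (scan) sum over a 3 x 3 window: each output pixel is the total of the
# class values in the window centered on the analyzed pixel.
focal_sum = ndimage.generic_filter(layer, np.sum, size=3, mode="nearest")

# The pixel in the third row and third column:
# 8 + 6 + 6 + 2 + 8 + 6 + 2 + 2 + 8 = 48
print(focal_sum[2, 2])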
Recoding Class values can be recoded to new values. Recoding involves the assignment of new values to
one or more classes. Recoding is used to:
• combine classes
When an ordinal, ratio, or interval class numbering system is used, recoding can be used to
assign classes to appropriate values. Recoding is often performed to make later steps easier. For
example, in creating a model that outputs good, better, and best areas, it may be beneficial to
recode the input layers so all of the best classes have the highest class values.
In the following example (Table 12-1), a land cover layer is recoded so that the most
environmentally sensitive areas (Riparian and Wetlands) have higher class values.
Use the Recode (GIS Analysis) function in Image Interpreter or Spatial Modeler to recode
layers.
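A recode is essentially a lookup table applied to every pixel. The following Python sketch illustrates the idea with a hypothetical recode table (the Table 12-1 values are not reproduced here); the class names in the comments are illustrative only.

import numpy as np

# Hypothetical recode table: original land cover class -> new (ranked) class value,
# giving environmentally sensitive classes the highest values.
recode_table = {0: 0,   # background
                1: 1,   # urban
                2: 4,   # riparian
                3: 2,   # agriculture
                4: 5}   # wetlands

def recode(layer, table):
    lookup = np.zeros(max(table) + 1, dtype=layer.dtype)
    for old_value, new_value in table.items():
        lookup[old_value] = new_value
    return lookup[layer]          # every pixel is mapped through the lookup table

land_cover = np.array([[1, 2, 2], [3, 4, 0]])
print(recode(land_cover, recode_table))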
Overlaying Thematic data layers can be overlaid to create a composite layer. The output layer contains
either the minimum or the maximum class values of the input layers. For example, if an area
was in class 5 in one layer, and in class 3 in another, and the maximum class value dominated,
then the same area would be coded to class 5 in the output layer, as shown in Figure 12-8.
Figure 12-8 class values: 1 = commercial, 2 = residential, 3 = forest, 4 = industrial, 5 = wetlands, 9 = steep slopes (land use masked).
The application example in Figure 12-8 shows the result of combining two layers—slope and
land use. The slope layer is first recoded to combine all steep slopes into one value. When
overlaid with the land use layer, the highest data file values (the steep slopes) dominate in the
output layer.
Use the Overlay (GIS Analysis) function in Image Interpreter or Spatial Modeler to
overlay layers.
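A maximum overlay can be expressed as a pixel-by-pixel maximum of the two input layers. The sketch below uses the class coding from the Figure 12-8 example; the array values themselves are illustrative.

import numpy as np

# Recoded slope layer: 9 = steep slopes, 0 = all other slopes (hypothetical coding).
slope = np.array([[9, 9, 0],
                  [0, 9, 0]])

# Land use layer: 1 = commercial, 2 = residential, 3 = forest, 4 = industrial, 5 = wetlands.
land_use = np.array([[1, 4, 2],
                     [5, 3, 3]])

# Maximum overlay: the larger class value dominates, so steep slopes (9) mask land use.
composite = np.maximum(slope, land_use)
print(composite)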
Indexing Thematic layers can be indexed (added) to create a composite layer. The output layer contains
the sums of the input layer values. For example, the intersection of class 3 in one layer and class
5 in another would result in class 8 in the output layer, as shown in Figure 12-9.
The application example in Figure 12-9 shows the result of indexing. In this example, you want
to develop a new subdivision, and the most likely sites are where there is the best combination
(highest value) of good soils, good slope, and good access. Because good slope is a more critical
factor to you than good soils or good access, a weighting factor is applied to the slope layer. A
weighting factor has the effect of multiplying all input values by some constant. In this example,
slope is given a weight of 2.
Use the Index (GIS Analysis) function in the Image Interpreter or Spatial Modeler to index
layers.
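Indexing with a weighting factor amounts to a weighted sum of the input layers. The following sketch uses hypothetical ranked layers (higher value = better) for the subdivision example, with slope weighted by 2.

import numpy as np

soils  = np.array([[1, 3], [2, 3]])
slope  = np.array([[3, 3], [1, 2]])
access = np.array([[2, 1], [3, 3]])

# Slope is the most critical factor, so it is weighted by 2 before the layers are summed.
suitability = soils + 2 * slope + access
print(suitability)    # the highest values mark the most likely subdivision sites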
Matrix Analysis Matrix analysis produces a thematic layer that contains a separate class for every coincidence
of classes in two layers. The output is best described with a matrix diagram.
                          input layer 2 data values (columns)
                           0    1    2    3    4    5
input layer 1         0    0    0    0    0    0    0
data values           1    0    1    2    3    4    5
(rows)                2    0    6    7    8    9   10
                      3    0   11   12   13   14   15
In this diagram, the classes of the two input layers represent the rows and columns of the matrix.
The output classes are assigned according to the coincidence of any two input classes.
All combinations of 0 and any other class are coded to 0, because 0 is usually the
background class, representing an area that is not being studied.
Unlike overlaying or indexing, the resulting class values of a matrix operation are unique for
each coincidence of two input class values. In this example, the output class value at column 1,
row 3 is 11, and the output class at column 3, row 1 is 3. If these files were indexed (summed)
instead of matrixed, both combinations would be coded to class 4.
Use the Matrix (GIS Analysis) function in Image Interpreter or Spatial Modeler to matrix
layers.
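One way to reproduce the coding scheme in the diagram above is shown in the following sketch; the input arrays are illustrative, and the formula assumes five non-zero classes in the second input layer (the column dimension of the matrix).

import numpy as np

layer1 = np.array([[0, 1, 3], [2, 1, 3]])   # rows of the matrix (0-3)
layer2 = np.array([[5, 3, 0], [2, 1, 5]])   # columns of the matrix (0-5)

n_columns = 5   # highest non-zero class value in layer 2

# A unique output class for every coincidence of two non-zero input classes;
# any combination involving class 0 (background) stays 0.
matrixed = np.where((layer1 == 0) | (layer2 == 0),
                    0,
                    (layer1 - 1) * n_columns + layer2)
print(matrixed)   # e.g., row 3 / column 1 -> 11, row 1 / column 3 -> 3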
Modeling Modeling is a powerful and flexible analysis tool. Modeling is the process of creating new
layers from combining or operating upon existing layers. Modeling enables you to create a small
set of layers—perhaps even a single layer—which, at a glance, contains many types of
information about the study area.
For example, if you want to find the best areas for a bird sanctuary, taking into account
vegetation, availability of water, climate, and distance from highly developed areas, you would
create a thematic layer for each of these criteria. Then, each of these layers would be input to a
model. The modeling process would create one thematic layer, showing only the best areas for
the sanctuary.
The set of procedures that define the criteria is called a model. In ERDAS IMAGINE, models
can be created graphically and resemble a flow chart of steps, or they can be created using a
script language. Although these two types of models look different, they are essentially the
same—input files are defined, functions and/or operators are specified, and outputs are defined.
The model is run and a new output layer(s) is created. Models can utilize analysis functions that
have been previously defined, or new functions can be created by you.
Use the Model Maker function in Spatial Modeler to create graphical models and SML to
create script models.
Data Layers
In modeling, the concept of layers is especially important. Before computers were used for
modeling, the most widely used approach was to overlay registered maps on paper or
transparencies, with each map corresponding to a separate theme. Today, digital files replace
these hardcopy layers and allow much more flexibility for recoloring, recoding, and
reproducing geographical information (Steinitz et al, 1976).
In a model, the corresponding pixels at the same coordinates in all input layers are addressed as
if they were physically overlaid like hardcopy maps.
Graphical Modeling   Graphical modeling enables you to draw models using a palette of tools that defines inputs,
functions, and outputs. This type of modeling is very similar to drawing flowcharts, in that you
identify a logical flow of steps needed to perform the desired action. Through the extensive
functions and operators available in the ERDAS IMAGINE graphical modeling program, you
can analyze many layers of data in very few steps without creating intermediate files that occupy
extra disk space. Modeling is performed using a graphical editor that eliminates the need to
learn a programming language. Complex models can be developed easily and then quickly
edited and re-run on another data set.
Use the Model Maker function in Spatial Modeler to create graphical models.
See the ERDAS IMAGINE Tour Guides manual for step-by-step instructions on creating
the environmental sensitivity model in Figure 12-10. Descriptions of all of the graphical
models delivered with ERDAS IMAGINE are available in the On-Line Help.
Model Structure
A model created with Model Maker is essentially a flow chart that defines the input data, the function(s) to be performed, and the output data.
The graphical models created in Model Maker all have the same basic structure: input, function,
output. The number of inputs, functions, and outputs can vary, but the overall form remains
constant. All components must be connected to one another before the model can be executed.
The model on the left in Figure 12-11 is the most basic form. The model on the right is more
complex, but it retains the same input/function/output flow.
Graphical models are stored in ASCII files with the .gmd extension. There are several sample
graphical models delivered with ERDAS IMAGINE that can be used as is or edited for more
customized processing.
Model Maker Functions   The functions available in Model Maker are divided into the following categories:
Category          Description
Analysis Includes convolution filtering, histogram matching, contrast stretch, principal
components, and more.
Arithmetic Perform basic arithmetic functions including addition, subtraction,
multiplication, division, factorial, and modulus.
Bitwise Use bitwise and, or, exclusive or, and not.
Boolean Perform logical functions including and, or, and not.
Color Manipulate colors to and from RGB (red, green, blue) and IHS (intensity, hue,
saturation).
Conditional Run logical tests using conditional statements and either...if...or...otherwise.
Data Generation Create raster layers from map coordinates, column numbers, or row numbers.
Create a matrix or table from a list of scalars.
Descriptor Read attribute information and map a raster through an attribute column.
Distance Perform distance functions, including proximity analysis.
Exponential Use exponential operators, including natural and common logarithmic, power,
and square root.
Focal (Scan) Perform neighborhood analysis functions, including boundary, density, diversity,
majority, mean, minority, rank, standard deviation, sum, and others.
Focal Use Opts Constraints on which pixel values to include in calculations for the Focal (Scan)
function.
Focal Apply Opts Constraints on which pixel values to apply the results of calculations for the
Focal (Scan) function.
Global Analyze an entire layer and output one value, such as diversity, maximum, mean,
minimum, standard deviation, sum, and more.
Matrix Multiply, divide, and transpose matrices, as well as convert a matrix to a table
and vice versa.
Other Includes over 20 miscellaneous functions for data type conversion, various tests,
and other utilities.
Relational Includes equality, inequality, greater than, less than, greater than or equal, less
than or equal, and others.
Size Measure cell X and Y size, layer width and height, number of rows and columns,
etc.
Stack Statistics Perform operations over a stack of layers including diversity, majority, max,
mean, median, min, minority, standard deviation, and sum.
Statistical Includes density, diversity, majority, mean, rank, standard deviation, and more.
String Manipulate character strings.
Surface Calculate aspect and degree/percent slope and produce shaded relief.
Trigonometric Use common trigonometric functions, including sine/arcsine, cosine/arccosine,
tangent/arctangent, hyperbolic arcsine, arccosine, cosine, sine, and tangent.
Zonal Perform zonal operations including summary, diversity, majority, max, mean,
min, range, and standard deviation.
See the ERDAS IMAGINE Tour Guides and the On-Line SML manual for complete
instructions on using Model Maker, and more detailed information about the available
functions and operators.
Objects Within Model Maker, an object is an input to or output from a function. The five basic object
types used in Model Maker are:
• raster
• vector
• matrix
• table
• scalar
Raster
A raster object is a single layer or multilayer array of pixel data. Rasters are typically used to
specify and manipulate data from image files.
Vector
Vector data in either a vector coverage, shapefile, or annotation layer can be read directly into
the Model Maker, converted from vector to raster, then processed similarly to raster data. Model
Maker cannot write to coverages, shapefiles, or annotation layers.
Matrix
A matrix object is a set of numbers arranged in a two-dimensional array. A matrix has a fixed
number of rows and columns. Matrices may be used to store convolution kernels or the
neighborhood definition used in neighborhood functions. They can also be used to store
covariance matrices, eigenvector matrices, or matrices of linear combination coefficients.
Table
A table object is a series of numeric values, colors, or character strings. A table has one column
and a fixed number of rows. Tables are typically used to store columns from the Raster Attribute
Editor or a list of values that pertains to the individual layers of a set of layers. For example, a
table with four rows could be used to store the maximum value from each layer of a four layer
image file. A table may consist of up to 32,767 rows. Information in the table can be attributes,
calculated (e.g., histograms), or defined by you.
Scalar
A scalar object is a single numeric value, color, or character string. Scalars are often used as
weighting factors.
The graphics used in Model Maker to represent each of these objects are shown in Figure 12-12.
Data Types The five object types described above may be any of the following data types:
Input and output data types do not have to be the same. Using SML, you can change the data
type of input files before they are processed.
Output Parameters Since it is possible to have several inputs in one model, you can optionally define the working
window and the pixel cell size of the output data along with the output map projection.
Working Window
Raster layers of differing areas can be input into one model. However, the image area, or
working window, must be specified in order to use it in the model calculations. Either of the
following options can be selected:
• Union—the model operates on the union of all input rasters. (This is the default.)
• Intersection—the model uses only the area of the rasters that is common to all input rasters.
Pixel Cell Size
The pixel cell size of the output data can also be specified:
• Minimum—the minimum cell size of the input layers is used. (This is the default.)
Map Projection
The output map projection defaults to be the same as the first input, or projection may be
selected to be the same as a chosen input. The output projection may also be selected from a
projection library.
Using Attributes in Models   With the criteria function in Model Maker, attribute data can be used to determine output values.
The criteria function simplifies the process of creating a conditional statement. The criteria
function can be used to build a table of conditions that must be satisfied to output a particular
row value for an attribute (or cell value) associated with the selected raster.
The inputs to a criteria function are rasters or vectors. The columns of the criteria table represent
either attributes associated with a raster layer or the layer itself, if the cell values are of direct
interest. Criteria which must be met for each output column are entered in a cell in that column
(e.g., >5). Multiple sets of criteria may be entered in multiple rows. The output raster contains
the first row number of a set of criteria that were met for a raster cell.
Example
For example, consider the sample thematic layer, parks.img, that contains the following
attribute information:
Class Name        Histogram   Acres    Path Condition   Turf Condition   Car Spaces
Grant Park        2456        403.45   Fair             Good             127
Piedmont Park     5167        547.88   Good             Fair             94
Candler Park      763         128.90   Excellent        Excellent        65
Springdale Park   548         46.33    None             Excellent        0
A simple model could create one output layer that shows only the parks in need of repairs. The
following logic would therefore be coded into the model:
“If Turf Condition is not Good or Excellent, and if Path Condition is not Good or Excellent,
then the output class value is 1. Otherwise, the output class value is 2.”
More than one input layer can also be used. For example, a model could be created, using the
input layers parks.img and soils.img, that shows the soil types for parks with either fair or poor
turf condition. Attributes can be used from every input file.
The following is a slightly more complex example:
If you have a land cover file and you want to create a file of pine forests larger than 10 acres,
the criteria function could be used to output values only for areas that satisfy the conditions of
being both pine forest and larger than 10 acres. The output file would have two classes: pine
forests larger than 10 acres and background. If you want the output file to show varying sizes
of pine forest, you would simply add more conditions to the criteria table.
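The pine forest example reduces to a simple test applied to each attribute record; the criteria function builds this kind of test from the criteria table. The following Python sketch uses a hypothetical attribute table to show the logic only.

# Hypothetical attribute table for a land cover layer: class value -> (cover type, acres).
land_cover_attributes = {
    1: ("pine forest", 26.4),
    2: ("pine forest", 6.1),
    3: ("deciduous forest", 54.0),
    4: ("agriculture", 112.7),
}

# Criteria: output 1 where the class is pine forest AND larger than 10 acres,
# otherwise output 0 (background), as in the pine forest example above.
recode_table = {value: int(cover == "pine forest" and acres > 10)
                for value, (cover, acres) in land_cover_attributes.items()}
print(recode_table)   # {1: 1, 2: 0, 3: 0, 4: 0}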
Comparisons of attributes can also be combined with mathematical and logical functions on the
class values of the input file(s). With these capabilities, highly complex models can be created.
See the ERDAS IMAGINE Tour Guides or the On-Line Help for specific instructions on
using the criteria function.
Script Modeling   SML is a script language used internally by Model Maker to execute the operations specified in
the graphical models that are created. SML can also be used to write models directly. It includes
all of the functions available in Model Maker, plus additional capabilities, such as conditional branching and looping.
Graphical models created with Model Maker can be output to a script file (text only) in SML.
These scripts can then be edited with a text editor using SML syntax and rerun or saved in a
library. Script models can also be written from scratch in the text editor. They are stored in
ASCII .mdl files.
The Text Editor is available from the Tools menu located on the ERDAS IMAGINE menu
bar and from the Model Librarian (Spatial Modeler).
In Figure 12-13, both the graphical and script models are shown for a tasseled cap
transformation. Notice how even the annotation on the graphical model is included in the
automatically generated script model. Generating script models from graphical models may aid
in learning SML.
Figure 12-13: Graphical and Script Models For Tasseled Cap Transformation
Convert graphical models to scripts using Model Maker. Open existing script models from
the Model Librarian (Spatial Modeler).
Statements A script model consists primarily of one or more statements. Each statement falls into one of
the following categories:
• Declaration—defines the objects (variables) to be used in the model
• Assignment—assigns a value to an object
• Show and View—enables you to see and interpret results from the model
• Set—defines the scope of the model or establishes default values used by the Modeler
SML also includes flow control structures so that you can utilize conditional branching and
looping in the models and statement block structures, which cause a set of statements to be
executed as a group.
Declaration Example
In the script model in Figure 12-13, the following lines form the declaration portion of the
model:
INTEGER RASTER n1_tm_lanier FILE OLD NEAREST NEIGHBOR
"/usr/imagine/examples/tm_lanier.img";
FLOAT MATRIX n2_Custom_Matrix;
FLOAT RASTER n4_lntassel FILE NEW ATHEMATIC FLOAT SINGLE
"/usr/imagine/examples/lntassel.img";
Set Example
The following set statements are used:
SET CELLSIZE MIN;
SET WINDOW UNION;
Assignment Example
The following assignment statements are used:
n2_Custom_Matrix = MATRIX(3, 7:
    0.331830, 0.331210, 0.551770, 0.425140, 0.480870, 0.000000, 0.252520,
    -0.247170, -0.162630, -0.406390, 0.854680, 0.054930, 0.000000, -0.117490,
    0.139290, 0.224900, 0.403590, 0.251780, -0.701330, 0.000000, -0.457320);

n4_lntassel = LINEARCOMB ($n1_tm_lanier, $n2_Custom_Matrix);
Data Types In addition to the data types utilized by Graphical Modeling, script model objects can store data
in the following data types:
• Color—three floating point numbers in the range of 0.0 to 1.0, representing intensity of red,
green, and blue
Variables Variables are objects in the Modeler that have been associated with a name using a declaration
statement. The declaration statement defines the data type and object type of the variable. The
declaration may also associate a raster variable with certain layers of an image file or a table
variable with an attribute table. Assignment statements are used to set or change the value of a
variable.
For script model syntax rules, descriptions of all available functions and operators, and
sample models, see the On-Line SML manual.
Vector Analysis Most of the operations discussed in the previous pages of this chapter focus on raster data.
However, in a complete GIS database, both raster and vector layers are present. One of the most
common applications involving the combination of raster and vector data is the updating of
vector layers using current raster imagery as a backdrop for vector editing. For example, if a
vector database is more than one or two years old, then there are probably errors due to changes
in the area (new roads, moved roads, new development, etc.). When displaying existing vector
layers over a raster layer, you can dynamically update the vector layer by digitizing new or
changed features on the screen.
Vector layers can also be used to indicate an AOI for further processing. Assume you want to
run a site suitability model on only areas designated for commercial development in the zoning
ordinances. By selecting these zones in a vector polygon layer, you could restrict the model to
only those areas in the raster input files.
Vector layers can also be used as inputs to models. Updated or new attributes may also be
written to vector layers in models.
Editing Vector Layers Editable features are polygons (as lines), lines, label points, and nodes. There can be multiple
features selected with a mixture of any and all feature types. Editing operations and commands
can be performed on multiple or single selections. In addition to the basic editing operations
(e.g., cut, paste, copy, delete), you can also perform the following operations on the line features
in multiple or single selections:
• spline—smooths or generalizes all currently selected lines using a specified grain tolerance
• split/unsplit—makes two lines from one by adding a node or joins two lines by removing a
node
• reshape (for single lines only)—enables you to move the vertices of a line
Reshaping (adding, deleting, or moving a vertex or node) can be done on a single selected line.
Table 12-4 details general editing operations and the feature types that support each of those
operations.
The Undo utility may be applied to any edits. The software stores all edits in sequential order,
so that continually pressing Undo reverses the editing.
For more information on vectors, see Chapter 3 “Raster and Vector Data Sources”.
Constructing Topology (Coverages Only)   Either the Build or Clean option can be used to construct topology. To create spatial
relationships between features in a vector layer, it is necessary to create topology. After a vector
layer is edited, the topology must be constructed to maintain the topological relationships
between features. When topology is constructed, each feature is assigned an internal number.
These numbers are then used to determine line connectivity and polygon contiguity. Once
calculated, these values are recorded and stored in that layer’s associated attribute table.
You must also reconstruct the topology of vector layers imported into ERDAS IMAGINE.
When topology is constructed, feature attribute tables are created with several automatically
created fields. Different fields are stored for the different types of layers. The automatically
generated fields for a line layer are:
• LPOLY#—the internal number for the polygon to the left of the line (zero for layers
containing only lines and no polygons)
• RPOLY#—the internal number for the polygon to the right of the line (zero for layers
containing only lines and no polygons)
• AREA—area of each polygon, measured in layer units (zero for layers containing only
points and no polygons)
• PERIMETER—length of each polygon boundary, measured in layer units (zero for layers
containing only points and no polygons)
Building and Cleaning Coverages   The Build option processes points, lines, and polygons, but the Clean option processes only
lines and polygons. Build recognizes only existing intersections (nodes), whereas Clean creates
intersections (nodes) wherever lines cross one another. The differences in these two options are
summarized in Table 12-5 (Environmental Systems Research Institute, 1990).
                 Build    Clean
Processes:
  Polygons       Yes      Yes
  Lines          Yes      Yes
  Points         Yes      No
Errors
Constructing topology also helps to identify errors in the layer. Some of the common errors
found are dangling nodes, pseudo nodes, and polygons without label points. Constructing
topology can identify these errors. When topology is constructed,
line intersections are created, the lines that make up each polygon are identified, and a label
point is associated with each polygon. Until topology is constructed, no polygons exist and lines
that cross each other are not connected at a node, because there is no intersection.
Construct topology using the Vector Utilities menu from the Vector icon in the ERDAS
IMAGINE icon panel.
You should not build or clean a layer that is displayed in a Viewer, nor should you try to
display a layer that is being built or cleaned.
When the Build or Clean options are used to construct the topology of a vector layer, potential
node errors are marked with special symbols. These symbols are listed below (Environmental
Systems Research Institute, 1990).
Pseudo nodes, drawn with a diamond symbol, occur where a single line connects with itself
(an island) or where only two lines intersect. Pseudo nodes do not necessarily indicate an error
or a problem. Acceptable pseudo nodes may represent an island (a spatial pseudo node) or the
point where a road changes from pavement to gravel (an attribute pseudo node).
Errors detected in a layer can be corrected by changing the tolerances set for that layer and
building or cleaning again, or by editing the layer manually, then running Build or Clean.
Refer to the ERDAS IMAGINE Tour Guides manual for step-by-step instructions on
editing vector layers.
Chapter 13
Cartography
Introduction Maps and mapping are the subject of the art and science known as cartography—creating two-
dimensional representations of our three-dimensional Earth. These representations were once
hand-drawn with paper and pen. But now, map production is largely automated—and the final
output is not always paper. The capabilities of a computer system are invaluable to map users,
who often need to know much more about an area than can be reproduced on paper, no matter
how large that piece of paper is or how small the annotation is. Maps stored on a computer can
be queried, analyzed, and updated quickly.
As the veteran GIS and image processing authority, Roger F. Tomlinson, said: “Mapped and
related statistical data do form the greatest storehouse of knowledge about the condition of the
living space of mankind.” With this thought in mind, it only makes sense that maps be created
as accurately as possible and be as accessible as possible.
In the past, map making was carried out by mapping agencies that took information from
analysts (surveyors, photogrammetrists, or draftsmen) and created a map to illustrate that
information. Today, in many cases, the analyst is the cartographer and can design maps
to best suit the data and the end user.
This chapter defines some basic cartographic terms and explains how maps are created within
the ERDAS IMAGINE environment.
Use the Map Composer to create hardcopy and softcopy maps and presentation graphics.
This chapter concentrates on the production of digital maps. See Chapter 14 “Hardcopy
Output” for information about printing hardcopy maps.
Types of Maps A map is a graphic representation of spatial relationships on the Earth or other planets. Maps
can take on many forms and sizes, depending on the intended use of the map. Maps no longer
refer only to hardcopy output. In this manual, the maps discussed begin as digital files and may
be printed later as desired.
Some of the different types of maps are defined in Table 13-1.
Map Purpose
Aspect A map that shows the prevailing direction that a slope faces at each pixel. Aspect maps are
often color-coded to show the eight major compass directions, or any of 360 degrees.
Base A map portraying background reference information onto which other information is placed.
Base maps usually show the location and extent of natural Earth surface features and permanent
human-made objects. Raster imagery, orthophotos, and orthoimages are often used as base
maps.
Bathymetric A map portraying the shape of a water body or reservoir using isobaths (depth contours).
Cadastral A map showing the boundaries of the subdivisions of land for purposes of describing and
recording ownership or taxation.
Choropleth A map portraying properties of a surface using area symbols. Area symbols usually represent
categorized classes of the mapped phenomenon.
Composite A map on which the combined information from different thematic maps is presented.
Contour A map in which lines are used to connect points of equal elevation. Lines are often spaced in
increments of ten or twenty feet or meters.
Derivative A map created by altering, combining, or analyzing other maps.
Index A reference map that outlines the mapped area, identifies all of the component maps for the area
if several map sheets are required, and identifies all adjacent map sheets.
Inset A map that is an enlargement of some congested area of a smaller scale map, and that is usually
placed on the same sheet with the smaller scale main map.
Isarithmic A map that uses isarithms (lines connecting points of the same value for any of the
characteristics used in the representation of surfaces) to represent a statistical surface. Also
called an isometric map.
Isopleth A map on which isopleths (lines representing quantities that cannot exist at a point, such as
population density) are used to represent some selected quantity.
Morphometric A map representing morphological features of the Earth’s surface.
Outline A map showing the limits of a specific set of mapping entities, such as counties, NTS quads,
etc. Outline maps usually contain a very small number of details over the desired boundaries
with their descriptive codes.
Planimetric A map showing only the horizontal position of geographic objects, without topographic
features or elevation contours.
Relief Any map that appears to be, or is, three-dimensional. Also called a shaded relief map.
Slope A map that shows changes in elevation over distance. Slope maps are usually color-coded
according to the steepness of the terrain at each pixel.
Thematic A map illustrating the class characterizations of a particular spatial variable (e.g., soils, land
cover, hydrology, etc.)
Topographic A map depicting terrain relief.
Viewshed A map showing only those areas visible (or invisible) from a specified point(s). Also called a
line-of-sight map or a visibility map.
In ERDAS IMAGINE, maps are stored as a map file with a .map extension.
Thematic Maps Thematic maps comprise a large portion of the maps that many organizations create. For this
reason, this map type is explored in more detail.
Thematic maps may be subdivided into two groups:
• qualitative
• quantitative
A qualitative map shows the spatial distribution or location of a kind of nominal data. For
example, a map showing corn fields in the United States would be a qualitative map. It would
not show how much corn is produced in each location, or production relative to the other areas.
A quantitative map displays the spatial aspects of numerical data. A map showing corn
production (volume) in each area would be a quantitative map. Quantitative maps show ordinal
(less than/greater than) and interval/ratio (difference) scale data (Dent, 1985).
You can create thematic data layers from continuous data (aerial photography and
satellite images) using the ERDAS IMAGINE classification capabilities. See Chapter 7
“Classification” for more information.
Base Information
Thematic maps should include a base of information so that the reader can easily relate the
thematic data to the real world. This base may be as simple as an outline of counties, states, or
countries, to something more complex, such as an aerial photograph or satellite image. In the
past, it was difficult and expensive to produce maps that included both thematic and continuous
data, but technological advances have made this easy.
For example, in a thematic map showing flood plains in the Mississippi River valley, you could
overlay the thematic data onto a line coverage of state borders or a satellite image of the area.
The satellite image can provide more detail about the areas bordering the flood plains. This may
be valuable information when planning emergency response and resource management efforts
for the area. Satellite images can also provide very current information about an area, and can
assist you in assessing the accuracy of a thematic image.
In ERDAS IMAGINE, you can include multiple layers in a single map composition. See
“Map Composition” for more information about creating maps.
Color Selection
The colors used in thematic maps may or may not have anything to do with the class or category
of information shown. Cartographers usually try to use a color scheme that highlights the
primary purpose of the map. The map reader’s perception of colors also plays an important role.
Most people are more sensitive to red, followed by green, yellow, blue, and purple. Although
color selection is left entirely up to the map designer, some guidelines have been established
(Robinson and Sale, 1969).
• When mapping interval or ordinal data, the higher ranks and greater amounts are generally
represented by darker colors.
• When mapping elevation data, start with blues for water, greens in the lowlands, ranging
up through yellows and browns to reds in the higher elevations. This progression should
not be used for series other than elevation.
• In temperature mapping, use red, orange, and yellow for warm temperatures and blue,
green, and gray for cool temperatures.
• In land cover mapping, use yellows and tans for dryness and sparse vegetation and greens
for lush vegetation.
Use the Raster Attributes option in the Viewer to select and modify class colors.
Annotation A map is more than just an image(s) on a background. Since a map is a form of communication,
it must convey information that may not be obvious by looking at the image. Therefore, maps
usually contain several annotation elements to explain the map. Annotation is any explanatory
material that accompanies a map to denote graphical features on the map. This annotation may
take the form of:
• scale bars
• legends
The annotation listed above is made up of single elements. The basic annotation elements in
ERDAS IMAGINE include:
• text
These elements can be used to create more complex annotation, such as legends, scale bars, etc.
These annotation components are actually groups of the basic elements and can be ungrouped
and edited like any other graphic. You can also create your own groups to form symbols that are
not in the ERDAS IMAGINE symbol library. (Symbols are discussed in more detail under
“Symbols”.)
Create annotation using the Annotation tool palette in the Viewer or in a map composition.
Scale
Map scale is a statement that relates distance on a map to distance on the Earth’s surface. It is
perhaps the most important information on a map, since the level of detail and map accuracy are
both factors of the map scale. Scale is directly related to the map extent, or the area of the Earth’s
surface to be mapped. If a relatively small area is to be mapped, such as a neighborhood or
subdivision, then the scale can be larger. If a large area is to be mapped, such as an entire
continent, the scale must be smaller. Generally, the smaller the scale, the less detailed the map
can be. As a rule, anything smaller than 1:250,000 is considered small-scale.
Scale can be reported in several ways, including:
• representative fraction
• verbal statement
• scale bar
Representative Fraction
Map scale is often noted as a simple ratio or fraction called a representative fraction. A map in
which one inch on the map equals 24,000 inches on the ground could be described as having a
scale of 1:24,000 or 1/24,000. The units on both sides of the ratio must be the same.
Verbal Statement
A verbal statement of scale relates the distance on the map to the distance on the ground. A
verbal statement describing a scale of 1:1,000,000 is approximately 1 inch to 16 miles. The units
on the map and on the ground do not have to be the same in a verbal statement. One-inch and
6-inch maps of the British Ordnance Survey are often referred to by this method (1 inch to 1
mile, 6 inches to 1 mile) (Robinson and Sale, 1969).
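The arithmetic behind such a statement is simple; the sketch below (an illustrative calculation, not an ERDAS IMAGINE function) converts a representative fraction to an approximate inches-to-miles statement.

```python
# Convert a representative fraction (1:denominator) into an approximate
# verbal statement of the form "1 inch to N miles". Illustrative only.
INCHES_PER_MILE = 63360  # 5,280 ft x 12 in/ft

def verbal_statement(denominator):
    miles_per_inch = denominator / INCHES_PER_MILE
    return f"1 inch to about {miles_per_inch:.1f} miles"

print(verbal_statement(1_000_000))  # 1 inch to about 15.8 miles (roughly 16 miles)
print(verbal_statement(63_360))     # 1 inch to about 1.0 miles
```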
Scale Bars
A scale bar is a graphic annotation element that describes map scale. It shows the distance on paper that represents a geographical distance on the ground. Maps often include more than one
scale bar to indicate various measurement systems, such as kilometers and miles.
[Figure: sample scale bar graduated in miles]
Use the Scale Bar tool in the Annotation tool palette to automatically create representative
fractions and scale bars. Use the Text tool to create a verbal statement.
The following table relates common map scales to ground distances. Columns, in order: Map Scale; 1/40 inch represents; 1 inch represents; 1 centimeter represents; 1 mile is represented by; 1 kilometer is represented by.
1:2,000 4.200 ft 56.000 yd 20.000 m 31.680 in 50.00 cm
1:5,000 10.425 ft 139.000 yd 50.000 m 12.670 in 20.00 cm
1:10,000 6.952 yd 0.158 mi 0.100 km 6.340 in 10.00 cm
1:15,840 11.000 yd 0.250 mi 0.156 km 4.000 in 6.25 cm
1:20,000 13.904 yd 0.316 mi 0.200 km 3.170 in 5.00 cm
1:24,000 16.676 yd 0.379 mi 0.240 km 2.640 in 4.17 cm
1:25,000 17.380 yd 0.395 mi 0.250 km 2.530 in 4.00 cm
1:31,680 22.000 yd 0.500 mi 0.317 km 2.000 in 3.16 cm
1:50,000 34.716 yd 0.789 mi 0.500 km 1.270 in 2.00 cm
1:62,500 43.384 yd 0.986 mi 0.625 km 1.014 in 1.60 cm
1:63,360 0.025 mi 1.000 mi 0.634 km 1.000 in 1.58 cm
1:75,000 0.030 mi 1.180 mi 0.750 km 0.845 in 1.33 cm
1:80,000 0.032 mi 1.260 mi 0.800 km 0.792 in 1.25 cm
1:100,000 0.040 mi 1.580 mi 1.000 km 0.634 in 1.00 cm
1:125,000 0.050 mi 1.970 mi 1.250 km 0.507 in 8.00 mm
1:250,000 0.099 mi 3.950 mi 2.500 km 0.253 in 4.00 mm
1:500,000 0.197 mi 7.890 mi 5.000 km 0.127 in 2.00 mm
1:1,000,000 0.395 mi 15.780 mi 10.000 km 0.063 in 1.00 mm
Table 13-2 shows the number of pixels per inch for selected scales and pixel sizes.
Columns, in order: Pixel Size (m); then pixels per inch at each scale: 1”=100’ (1:1200), 1”=200’ (1:2400), 1”=500’ (1:6000), 1”=1000’ (1:12000), 1”=1500’ (1:18000), 1”=2000’ (1:24000), 1”=4167’ (1:50000), 1”=1 mile (1:63360).
1 30.49 60.96 152.40 304.80 457.20 609.60 1270.00 1609.35
2 15.24 30.48 76.20 152.40 228.60 304.80 635.00 804.67
2.5 12.19 24.38 60.96 121.92 182.88 243.84 508.00 643.74
5 6.10 12.19 30.48 60.96 91.44 121.92 254.00 321.87
10 3.05 6.10 15.24 30.48 45.72 60.96 127.00 160.93
15 2.03 4.06 10.16 20.32 30.48 40.64 84.67 107.29
20 1.52 3.05 7.62 15.24 22.86 30.48 63.50 80.47
25 1.22 2.44 6.10 12.19 18.29 24.38 50.80 64.37
30 1.02 2.03 5.08 10.16 15.24 20.32 42.33 53.64
35 .87 1.74 4.35 8.71 13.08 17.42 36.29 45.98
40 .76 1.52 3.81 7.62 11.43 15.24 31.75 40.23
45 .68 1.35 3.39 6.77 10.16 13.55 28.22 35.76
50 .61 1.22 3.05 6.10 9.14 12.19 25.40 32.19
75 .41 .81 2.03 4.06 6.10 8.13 16.93 21.46
100 .30 .61 1.52 3.05 4.57 6.10 12.70 16.09
150 .20 .41 1.02 2.03 3.05 4.06 8.47 10.73
200 .15 .30 .76 1.52 2.29 3.05 6.35 8.05
250 .12 .24 .61 1.22 1.83 2.44 5.08 6.44
300 .10 .20 .51 1.02 1.52 2.03 4.23 5.36
350 .09 .17 .44 .87 1.31 1.74 3.63 4.60
400 .08 .15 .38 .76 1.14 1.52 3.18 4.02
450 .07 .14 .34 .68 1.02 1.35 2.82 3.58
500 .06 .12 .30 .61 .91 1.22 2.54 3.22
600 .05 .10 .25 .51 .76 1.02 2.12 2.69
700 .04 .09 .22 .44 .65 .87 1.81 2.30
800 .04 .08 .19 .38 .57 .76 1.59 2.01
900 .03 .07 .17 .34 .51 .68 1.41 1.79
1000 .03 .06 .15 .30 .46 .61 1.27 1.61
Table 13-3 lists the number of acres and hectares per pixel for various pixel sizes.
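The pixels-per-inch values in Table 13-2 and the per-pixel areas referred to in Table 13-3 follow from simple unit conversions. The following sketch (an illustrative calculation only, not part of ERDAS IMAGINE) reproduces both.

```python
# Pixels per inch for a given map scale and ground pixel size (Table 13-2),
# and ground area per pixel in hectares and acres (Table 13-3).
METERS_PER_INCH = 0.0254
SQ_METERS_PER_HECTARE = 10_000
ACRES_PER_HECTARE = 2.471

def pixels_per_inch(scale_denominator, pixel_size_m):
    ground_meters_per_map_inch = scale_denominator * METERS_PER_INCH
    return ground_meters_per_map_inch / pixel_size_m

def hectares_per_pixel(pixel_size_m):
    return pixel_size_m ** 2 / SQ_METERS_PER_HECTARE

def acres_per_pixel(pixel_size_m):
    return hectares_per_pixel(pixel_size_m) * ACRES_PER_HECTARE

print(round(pixels_per_inch(24000, 30), 2))  # 20.32, matching the 30 m row of Table 13-2
print(round(hectares_per_pixel(30), 2))      # 0.09 hectares per 30 m pixel
print(round(acres_per_pixel(30), 3))         # about 0.222 acres per 30 m pixel
```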
Legends
A legend is a key to the colors, symbols, and line styles that are used in a map. Legends are
especially useful for maps of categorical data displayed in pseudo color, where each color
represents a different feature or category. A legend can also be created for a single layer of
continuous data, displayed in gray scale. Legends are likewise used to describe all unknown or
unique symbols utilized. Symbols in legends should appear exactly the same size and color as
they appear on the map (Robinson and Sale, 1969).
[Figure: sample legend with classes forest, swamp, and developed]
Use the Legend tool in the Annotation tool palette to automatically create color legends.
Symbol legends are not created automatically, but can be created manually.
Neatlines, Tick Marks, and Grid Lines
Neatlines, tick marks, and grid lines serve to provide a georeferencing system for map detail and are based on the map projection of the image shown.
• A neatline is a rectangular border around the image area of a map. It differs from the map
border in that the border usually encloses the entire map, not just the image area.
• Tick marks are small lines along the edge of the image area or neatline that indicate regular
intervals of distance.
• Grid lines are intersecting lines that indicate regular intervals of distance, based on a
coordinate system. Usually, they are an extension of tick marks. It is often helpful to place
grid lines over the image area of a map. This is becoming less common on thematic maps,
but is really up to the map designer. If the grid lines help readers understand the content of
the map, they should be used.
[Figure: map image area showing a neatline, grid lines, and tick marks]
Use the Grid/Tick tool in the Annotation tool palette to create neatlines, tick marks, and
grid lines. Tick marks and grid lines can also be created over images displayed in a
Viewer. See the On-Line Help for instructions.
Symbols
Since maps are a greatly reduced version of the real world, objects cannot be depicted in their
true shape or size. Therefore, a set of symbols is devised to represent real-world objects. There
are two major classes of symbols:
• replicative
• abstract
Replicative symbols are designed to look like their real-world counterparts; they represent
tangible objects, such as coastlines, trees, railroads, and houses. Abstract symbols usually take
the form of geometric shapes, such as circles, squares, and triangles. They are traditionally used
to represent amounts that vary from place to place, such as population density, amount of
rainfall, etc. (Dent, 1985).
Both replicative and abstract symbols are composed of one or more of the following annotation
elements:
• point
• line
• area
Symbol Types
These basic elements can be combined to create three different types of replicative symbols:
• plan—formed after the basic outline of the object it represents. For example, the symbol for
a house might be a square, because most houses are rectangular.
• profile—formed like the profile of an object. Profile symbols generally represent vertical
objects, such as trees, windmills, oil wells, etc.
• function—formed after the activity that a symbol represents. For example, on a map of a
state park, a symbol of a tent would indicate the location of a camping area.
Symbols can have different sizes, colors, and patterns to indicate different meanings within a
map. The use of size, color, and pattern generally shows qualitative or quantitative differences
among areas marked. For example, if a circle is used to show cities and towns, larger circles
would be used to show areas with higher population. A specific color could be used to indicate
county seats. Since symbols are not drawn to scale, their placement is crucial to effective
communication.
Use the Symbol tool in the Annotation tool palette and the symbol library to place symbols
in maps.
Labels and Descriptive Text
Place names and other labels convey important information to the reader about the features on the map. Any features that help orient the reader or are important to the content of the map
should be labeled. Descriptive text on a map can include the map title and subtitle, copyright
information, captions, credits, production notes, or other explanatory material.
Title
The map title usually draws attention by virtue of its size. It focuses the reader’s attention on the
primary purpose of the map. The title may be omitted, however, if captions are provided outside
of the image area (Dent, 1985).
Credits
Map credits (or source information) can include the data source and acquisition date, accuracy
information, and other details that are required or helpful to readers. For example, if you include
data that you do not own in a map, you must give credit to the owner.
Use the Text tool in the Annotation tool palette to add labels and descriptive text to maps.
Typography and Lettering
The choice of type fonts and styles and how names are lettered can make the difference between a clear and attractive map and a jumble of imagery and text. As with many other aspects of map
design, this is a very subjective area and many organizations already have guidelines to use.
This section is intended as an introduction to the concepts involved and to convey traditional
guidelines, where available.
If your organization does not have a set of guidelines for the appearance of maps and you plan
to produce many in the future, it would be beneficial to develop a style guide specifically for
mapping. This ensures that all of the maps produced follow the same conventions, regardless of
who actually makes the map.
ERDAS IMAGINE enables you to make map templates to facilitate the development of map
standards within your organization.
Type Styles
Type style refers to the appearance of the text and may include font, size, and style (bold, italic,
underline, etc.). Although the type styles used in maps are purely a matter of the designer’s taste,
the following techniques help to make maps more legible (Robinson and Sale, 1969; Dent,
1985).
• Do not use too many different typefaces in a single map. Generally, one or two styles are
enough when also using the variations of those type faces (e.g., bold, italic, underline, etc.).
When using two typefaces, use a serif and a sans serif, rather than two different serif fonts
or two different sans serif fonts [e.g., Sans (sans serif) and Roman (serif) could be used
together in one map].
• Exercise caution in using very thin letters that may not reproduce well. On the other hand,
using letters that are too bold may obscure important information in the image.
• Use different sizes of type for showing varying levels of importance. For example, on a
map with city and town labels, city names are usually in a larger type size than the town
names. Use no more than four to six different type sizes.
• Put more important labels, titles, and names in all capital letters and less important text in lowercase with initial capitals. This is a matter of personal preference, although
names in which the letters must be spread out across a large area are better in all capital
letters. (Studies have found that capital letters are more difficult to read, therefore
lowercase letters might improve the legibility of the map.)
• In the past, hydrology, landform, and other natural features were labeled in italic. However,
this is not strictly adhered to by map makers today, although water features are still nearly
always labeled in italic.
Figure 13-5: Sample Sans Serif and Serif Typefaces with Various Styles Applied (for example, Sans 10 pt regular and Roman 10 pt regular)
Lettering
Lettering refers to the way in which place names and other labels are added to a map. Letter
spacing, orientation, and position are the three most important factors in lettering. Here again,
there are no set rules for how lettering is to appear. Much is determined by the purpose of the
map and the end user. Many organizations have developed their own rules for lettering. Here is
a list of guidelines that have been used by cartographers in the past (Robinson and Sale, 1969;
Dent, 1985).
• Lettering should generally be oriented to match the orientation structure of the map. In
large-scale maps this means parallel with the upper and lower edges, and in small-scale
maps, this means in line with the parallels of latitude.
• Type should not be curved (i.e., different from preceding bullet) unless it is necessary to do
so.
• If lettering must be disoriented, it should never be set in a straight line, but should always
have a slight curve.
• Names should be letter spaced (i.e., space between individual letters, or kerning) as little as
necessary.
• Where the continuity of names and other map data, such as lines and tones, conflicts with
the lettering, the data, but not the names, should be interrupted.
• Lettering that refers to point locations should be placed above or below the point,
preferably above and to the right.
• The letters identifying linear features (roads, rivers, railroads, etc.) should not be spaced.
The word(s) should be repeated along the feature as often as necessary to facilitate
identification. These labels should be placed above the feature and river names should slant
in the direction of the river flow (if the label is italic).
• For geographical names, use the native language of the intended map user. For an English-
speaking audience, the name Germany should be used, rather than Deutschland.
[Figure: examples of label placement and letter spacing using the names Atlanta, Georgia, and Savannah]
Text Color
Many cartographers argue that all lettering on a map should be black. However, the map may
be well-served by incorporating color into its design. In fact, studies have shown that coding
labels by color can improve a reader’s ability to find information (Dent, 1985).
Map Projections
This section is adapted from “Map Projections for Use with the Geographic Information
System” by Lee and Walsh (Lee and Walsh, 1984).
A map projection is the manner in which the spherical surface of the Earth is represented on a
flat (two-dimensional) surface. This can be accomplished by direct geometric projection or by
a mathematically derived transformation. There are many kinds of projections, but all involve
transfer of the distinctive global patterns of parallels of latitude and meridians of longitude onto
an easily flattened surface, or developable surface.
The three most common developable surfaces are the cylinder, cone, and plane (Figure 13-7).
A plane is already flat, while a cylinder or cone may be cut and laid out flat, without stretching.
Thus, map projections may be classified into three general families: cylindrical, conical, and
azimuthal or planar.
Map projections are selected in the Projection Chooser. For more information about the
Projection Chooser, see the ERDAS IMAGINE On-Line Help.
Properties of Map Projections
Regardless of what type of projection is used, it is inevitable that some error or distortion occurs in transforming a spherical surface into a flat surface. Ideally, a distortion-free map has four
valuable properties:
• conformality
• equivalence
• equidistance
• true direction
Each of these properties is explained below. No map projection can be true in all of these
properties. Therefore, each projection is devised to be true in selected properties, or most often,
a compromise among selected properties. Projections that compromise in this manner are
known as compromise projections.
Conformality is the characteristic of true shape, wherein a projection preserves the shape of any
small geographical area. This is accomplished by exact transformation of angles around points.
One necessary condition is the perpendicular intersection of grid lines as on the globe. The
property of conformality is important in maps which are used for analyzing, guiding, or
recording motion, as in navigation. A conformal map or projection is one that has the property
of true shape.
Equivalence is the characteristic of equal area, meaning that areas on one portion of a map are
in scale with areas in any other portion. Preservation of equivalence involves inexact
transformation of angles around points and thus, is mutually exclusive with conformality except
along one or two selected lines. The property of equivalence is important in maps that are used
for comparing density and distribution data, as in populations.
Equidistance is the characteristic of true distance measuring. The scale of distance is constant
over the entire map. This property can be fulfilled on any given map from one, or at most two,
points in any direction or along certain lines. Equidistance is important in maps that are used for
analyzing measurements (i.e., road distances). Typically, reference lines such as the equator or
a meridian are chosen to have equidistance and are termed standard parallels or standard
meridians.
True direction is characterized by a direction line between two points that crosses reference
lines (e.g., meridians) at a constant angle or azimuth. An azimuth is an angle measured
clockwise from a meridian, going north to east. The line of constant or equal direction is termed
a rhumb line.
The property of constant direction makes it comparatively easy to chart a navigational course.
However, on a spherical surface, the shortest surface distance between two points is not a rhumb
line, but a great circle, being an arc of a circle whose center is the center of the Earth. Along a
great circle, azimuths constantly change (unless the great circle is the equator or a meridian).
Thus, a more desirable property than true direction may be where great circles are represented
by straight lines. This characteristic is most important in aviation. Note that all meridians are
great circles, but the only parallel that is a great circle is the equator.
[Figure: developable surfaces, including regular cylindrical, regular conic, oblique azimuthal (planar), and oblique cylindrical projections]
Projection Types
Although a great number of projections have been devised, the majority of them are geometric
or mathematical variants of the basic direct geometric projection families described below.
Choice of the projection to be used depends upon the true property or combination of properties
desired for effective cartographic analysis.
Azimuthal Projections
Azimuthal projections, also called planar projections, are accomplished by drawing lines from
a given perspective point through the globe onto a tangent plane. This is conceptually equivalent
to tracing a shadow of a figure cast by a light source. A tangent plane intersects the global
surface at only one point and is perpendicular to a line passing through the center of the sphere.
Thus, these projections are symmetrical around a chosen center or central meridian. Choice of
the projection center determines the aspect, or orientation, of the projection surface.
Azimuthal projections may be centered on the poles (polar aspect), on a point along the equator (equatorial aspect), or on any other point (oblique aspect).
The origin of the projection lines—that is, the perspective point—may also assume various positions. For example, it may be at the center of the Earth (gnomonic), on the surface of the Earth opposite the projection plane (stereographic), or at an infinite distance from the Earth (orthographic).
Conical Projections
Conical projections are accomplished by intersecting, or touching, a cone with the global
surface and mathematically projecting lines onto this developable surface.
A tangent cone intersects the global surface to form a circle. Along this line of intersection, the
map is error-free and possesses equidistance. Usually, this line is a parallel, termed the standard
parallel.
Cones may also be secant, and intersect the global surface, forming two circles that possess
equidistance. In this case, the cone slices underneath the global surface, between the standard
parallels. Note that the use of the word secant, in this instance, is only conceptual and not
geometrically accurate. Conceptually, the conical aspect may be polar, equatorial, or oblique.
Only polar conical projections are supported in ERDAS IMAGINE.
[Figure: tangent cone with one standard parallel and secant cone with two standard parallels]
Cylindrical Projections
Cylindrical projections are accomplished by intersecting, or touching, a cylinder with the global
surface. The surface is mathematically projected onto the cylinder, which is then cut and
unrolled.
A tangent cylinder intersects the global surface on only one line to form a circle, as with a
tangent cone. This central line of the projection is commonly the equator and possesses
equidistance.
If the cylinder is rotated 90 degrees from the vertical (i.e., the long axis becomes horizontal),
then the aspect becomes transverse, wherein the central line of the projection becomes a chosen
standard meridian as opposed to a standard parallel. A secant cylinder, one slightly less in
diameter than the globe, has two lines possessing equidistance.
[Figure: tangent cylinder with one standard parallel and secant cylinder with two standard parallels]
Perhaps the most famous cylindrical projection is the Mercator, which became the standard
navigational map. Mercator possesses true direction and conformality.
Other Projections
The projections discussed so far are projections that are created by projecting from a sphere (the
Earth) onto a plane, cone, or cylinder. Many other projections cannot be created so easily.
Modified projections are modified versions of another projection. For example, the Space
Oblique Mercator projection is a modification of the Mercator projection. These modifications
are made to reduce distortion, often by including additional standard lines or a different pattern
of distortion.
Pseudo projections have only some of the characteristics of another class of projection. For
example, the Sinusoidal is called a pseudocylindrical projection because all lines of latitude are
straight and parallel, and all meridians are equally spaced. However, it cannot truly be a
cylindrical projection, because all meridians except the central meridian are curved. This results
in the Earth appearing oval instead of rectangular (Environmental Systems Research Institute,
1991).
Geographical and Planar Coordinates
Map projections require a point of reference on the Earth’s surface. Most often this is the center, or origin, of the projection. This point is defined in two coordinate systems:
• geographical
• planar
Geographical
Geographical, or spherical, coordinates are based on the network of latitude and longitude
(Lat/Lon) lines that make up the graticule of the Earth. Within the graticule, lines of longitude
are called meridians, which run north/south, with the prime meridian at 0° (Greenwich,
England). Meridians are designated as 0° to 180°, east or west of the prime meridian. The 180°
meridian (opposite the prime meridian) is the International Dateline.
Lines of latitude are called parallels, which run east/west. Parallels are designated as 0° at the
equator to 90° at the poles. The equator is the largest parallel. Latitude and longitude are defined
with respect to an origin located at the intersection of the equator and the prime meridian.
Lat/Lon coordinates are reported in degrees, minutes, and seconds. Map projections are various
arrangements of the Earth’s latitude and longitude lines onto a plane.
Planar
Planar, or Cartesian, coordinates are defined by a column and row position on a planar grid
(X,Y). The origin of a planar coordinate system is typically located south and west of the origin
of the projection. Coordinates increase from 0,0 going east and north. The origin of the
projection, being a false origin, is defined by values of false easting and false northing. Grid
references always contain an even number of digits; the first half refers to the easting and the second half to the northing.
In practice, this eliminates negative coordinate values and allows locations on a map projection
to be defined by positive coordinate pairs. Values of false easting are read first and may be in
meters or feet.
Available Map Projections
In ERDAS IMAGINE, map projection information appears in the Projection Chooser, which is used to georeference images and to convert map coordinates from one type of projection to another. The Projection Chooser provides the following projections:
USGS Projections
• Alaska Conformal
• Azimuthal Equidistant
• Behrmann
• Bonne
• Cassini
• Conic Equidistant
• Eckert I
• Eckert II
• Eckert III
• Eckert IV
• Eckert V
• Eckert VI
• EOSAT SOM
• Equidistant Conic
• Equidistant Cylindrical
• Equirectangular
• Gall Stereographic
• Gauss Kruger
• Geographic (Lat/Lon)
• Gnomonic
• Hammer
• Interrupted Mollweide
• Loximuthal
• Mercator
• Miller Cylindrical
• Mollweide
• Orthographic
• Plate Carrée
• Polar Stereographic
• Polyconic
• Quartic Authalic
• Robinson
• RSO
• Sinusoidal
• State Plane
• Stereographic
• Stereographic (Extended)
• Transverse Mercator
• UTM
• Wagner IV
• Wagner VII
• Winkel I
• Winkel II
External Projections
• Azimuthal Equidistant
• Cassini-Soldner
• Conic Equidistant
• Mercator
• Modified Polyconic
• Modified Stereographic
• Oblique Mercator
• Orthographic
• Plate Carrée
• Regular Polyconic
• Robinson Pseudocylindrical
• Sinusoidal
• Stereographic
• Swiss Cylindrical
• Stereographic (Oblique)
• Transverse Mercator
• Winkel’s Tripel
Choice of the projection to be used depends upon the desired major property and the region to
be mapped (see Table 13-4). After choosing the desired map projection, several parameters are
required for its definition (see Table 13-5). These parameters fall into three general classes: (1)
definition of the spheroid, (2) definition of the surface viewing window, and (3) definition of
scale.
For each map projection, a menu of spheroids displays, along with appropriate prompts that
enable you to specify these parameters.
Units
Use the units of measure that are appropriate for the map projection type.
• Lat/Lon coordinates are expressed in decimal degrees. When prompted, you can use the
DD function to convert coordinates in degrees, minutes, seconds format to decimal. For
example, for 30°51’12’’:
dd(30,51,12) = 30.85333
-dd(30,51,12) = -30.85333
or
30:51:12 = 30.85333
You can also enter Lat/Lon coordinates in radians.
Note also that values for longitude west of Greenwich, England, and values for latitude
south of the equator are to be entered as negatives.
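As an illustration of the degrees-minutes-seconds conversion shown above, here is a minimal sketch; the dd name simply mirrors the prompt syntax and is not the ERDAS IMAGINE implementation.

```python
# Convert degrees, minutes, seconds to decimal degrees, mirroring the
# dd(deg, min, sec) prompt syntax shown above. Illustrative sketch only.
def dd(degrees, minutes, seconds):
    sign = -1 if degrees < 0 else 1
    return sign * (abs(degrees) + minutes / 60 + seconds / 3600)

print(round(dd(30, 51, 12), 5))   # 30.85333
print(round(dd(-30, 51, 12), 5))  # -30.85333 (west longitude or south latitude)
```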
The table groups the parameters required to define each map projection; in the original table, an X indicates that a parameter applies to a given projection, with projection numbers corresponding to those used in Table 13-4.

Definition of Spheroid:
• spheroid selection

Definition of Surface Viewing Window:
• false easting and false northing
• longitude of central meridian
• latitude of origin of projection
• longitude of center of projection
• latitude of center of projection
• latitude of first standard parallel
• latitude of second standard parallel
• latitude of true scale
• longitude below pole

Definition of Scale:
• scale factor at central meridian
• height of perspective point above sphere
• scale factor at center of projection

a. Numbers are used for reference only and correspond to the numbers used in Table 13-4. Parameters for definition of map projection types 0-2 are not applicable and are described in the text.
b. Additional parameters required for definition of the map projection are described in the text of Appendix B “Map Projections”.
Choosing a Map Projection
Map Projection Uses in a GIS
Selecting a map projection for the GIS database enables you to (Maling, 1992):
• decide how to best display the area of interest or illustrate the results of analysis
• test the accuracy of the information and perform measurements on the data
Deciding Factors
Depending on your applications and the uses for the maps created, one or several map projections may be used. Many factors must be weighed when selecting a projection, including:
• type of map
• map accuracy
• scale
If you are mapping a relatively small area, virtually any map projection is acceptable. In
mapping large areas (entire countries, continents, and the world), the choice of map projection
becomes more critical. In small areas, the amount of distortion in a particular projection is
barely, if at all, noticeable. In large areas, there may be little or no distortion in the center of the
map, but distortion increases outward toward the edges of the map.
Guidelines
Since the sixteenth century, there have been three fundamental rules regarding map projection use (Maling, 1992):
• if the country to be mapped lies in the tropics, use a cylindrical projection
• if the country to be mapped lies in the temperate latitudes, use a conical projection
• if the map is required to show one of the polar regions, use an azimuthal projection
These rules are no longer held so strongly. There are too many factors to consider in map
projection selection for broad generalizations to be effective today. The purpose of a particular
map and the merits of the individual projections must be examined before an educated choice
can be made. However, there are some guidelines that may help you select a projection
(Pearson, 1990):
• Statistical data should be displayed using an equal area projection to maintain proper
proportions (although shape may be sacrificed).
Spheroids
The previous discussion of direct geometric map projections assumes that the Earth is a sphere,
and for many maps this is satisfactory. However, due to rotation of the Earth around its axis, the
planet bulges slightly at the equator. This flattening of the sphere makes it an oblate spheroid,
which is an ellipse rotated around its shorter axis.
[Figure: ellipse showing the semi-major and semi-minor axes]
The flattening of the ellipse, f, is defined as:
f = (a - b) / a
Where:
a = the equatorial radius (semi-major axis)
b = the polar radius (semi-minor axis)
Most map projections use eccentricity (e²) rather than flattening. The relationship is:
e² = 2f - f²
The flattening of the Earth is about 1 part in 300, and becomes significant in map accuracy at a
scale of 1:100,000 or larger.
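As a quick check of these relationships, the following sketch computes the flattening and eccentricity squared of the Clarke 1866 spheroid from the axes listed in Table 13-6 (an illustrative calculation only).

```python
# Flattening f = (a - b) / a and eccentricity squared e^2 = 2f - f^2
# for the Clarke 1866 spheroid (axes from Table 13-6). Illustrative only.
def flattening(a, b):
    return (a - b) / a

def eccentricity_squared(a, b):
    f = flattening(a, b)
    return 2 * f - f ** 2

a_clarke, b_clarke = 6378206.4, 6356583.8
print(round(1 / flattening(a_clarke, b_clarke)))           # about 295, roughly 1 part in 300
print(round(eccentricity_squared(a_clarke, b_clarke), 6))  # about 0.006769
```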
Calculation of a map projection requires definition of the spheroid (or ellipsoid) in terms of the
length of axes and eccentricity squared (or radius of the reference sphere). Several principal
spheroids are in use by one or more countries. Differences are due primarily to calculation of
the spheroid for a particular region of the Earth’s surface. Only recently have satellite tracking
data provided spheroid determinations for the entire Earth. However, these spheroids may not
give the best fit for a particular region. In North America, the spheroid in use is the Clarke 1866
for NAD27 and GRS 1980 for NAD83 (State Plane).
If other regions are to be mapped, different spheroids should be used. Upon choosing a desired
projection type, you have the option to choose from the following list of spheroids:
• Airy
• Australian National
• Bessel
• Clarke 1866
• Clarke 1880
• Everest
• GRS 1980
• Helmert
• Hough
• International 1909
• Krasovsky
• Mercury 1960
• Modified Airy
• Modified Everest
• Southeast Asia
• Walbeck
• WGS 66
• WGS 72
• WGS 84
The spheroids listed above are the most commonly used. There are many other spheroids
available, and they are listed in the Projection Chooser. These additional spheroids are
not documented in this manual. You can use the IMAGINE Developers’ Toolkit to add
your own map projections and spheroids to ERDAS IMAGINE.
The semi-major and semi-minor axes of all supported spheroids are listed in Table 13-6, as well
as the principal uses of these spheroids.
Columns, in order: Spheroid; Semi-Major Axis; Semi-Minor Axis; Use
165 6378165.0 6356783.0 Global
Airy (1940) 6377563.0 6356256.91 England
Airy Modified (1849) Ireland
Australian National (1965) 6378160.0 6356774.719 Australia
Bessel (1841) 6377397.155 6356078.96284 Central Europe, Chile, and Indonesia
Bessell (Namibia) 6377483.865 6356165.383 Namibia
Clarke 1858 6378293.0 6356619.0 Global
Clarke 1866 6378206.4 6356583.8 North America and the Philippines
Clarke 1880 6378249.145 6356514.86955 France and Africa
Clarke 1880 IGN 6378249.2 6356515.0 Global
Everest (1830) 6377276.3452 6356075.4133 India, Burma, and Pakistan
Everest (1956) 6377301.243 6356100.2284 India, Nepal
Everest (1969) 6377295.664 6356094.6679 Global
Everest (Malaysia & Singapore) 6377304.063 6356103.038993 Global
Everest (Pakistan) 6377309.613 6356108.570542 Pakistan
Everest (Sabah & Sarawak) 6377298.556 6356097.5503 Brunei, East Malaysia
Fischer (1960) 6378166.0 6356784.2836 Global
Fischer (1968) 6378150.0 6356768.3372 Global
GRS 1980 (Geodetic Reference System) 6378137.0 6356752.31414 Adopted in North America for 1983
Earth-centered coordinate system
(satellite)
Hayford 6378388.0 6356911.946128 Global
Helmert 6378200.0 6356818.16962789092 Egypt
Hough 6378270.0 6356794.343479 As International 1909 above, with
modification of ellipse axes
IAU 1965 6378160.0 6356775.0 Global
Indonesian 1974 6378160.0 6356774.504086 Global
International 1909 (= Hayford) 6378388.0 6356911.94613 Remaining parts of the world not listed
here
IUGG 1967 6378160.0 6356774.516 Hungary
Krasovsky (1940) 6378245.0 6356863.0188 Former Soviet Union and some East
European countries
Mercury 1960 6378166.0 6356794.283666 Early satellite, rarely used
Modified Airy 6377341.89 6356036.143 As Airy above, more recent version
Modified Everest 6377304.063 6356103.039 As Everest above, more recent version
Modified Mercury 1968 6378150.0 6356768.337303 As Mercury 1960 above, more recent
calculation
Modified Fischer (1960) 6378155.0 6356773.3205 Singapore
New International 1967 6378157.5 6356772.2 As International 1909 below, more recent
calculation
SGS 85 (Soviet Geodetic System 1985) 6378136.0 6356751.3016 Soviet Union
South American (1969) 6378160.0 6356774.7192 South America
Southeast Asia 6378155.0 6356773.3205 As named
Sphere 6371000.0 6371000.0 Global
Sphere of Nominal Radius of Earth 6370997.0 6370997.0 A perfect sphere
Sphere of Radius 6370997 m 6370997.0 6370997.0 A perfect sphere with the same surface
area as the Clarke 1866 spheroid
Walbeck (1819) 6376896.0 6355834.8467 Soviet Union, up to 1910
WGS 60 (World Geodetic System 1960) 6378165.0 6356783.287 Global
WGS 66 (World Geodetic System 1966) 6378145.0 6356759.769356 As WGS 72 above, older version
WGS 72 (World Geodetic System 1972) 6378135.0 6356750.519915 NASA (satellite)
WGS 84 (World Geodetic System 1984) 6378137.0 6356752.31424517929 As WGS 72, more recent calculation
Map Composition
Learning Map Composition
Cartography and map composition may seem like an entirely new discipline to many GIS and image processing analysts—and that is partly true. But, by learning the basics of map design,
the results of your analyses can be communicated much more effectively. Map composition is
also much easier than in the past when maps were hand drawn. Many GIS analysts may already
know more about cartography than they realize, simply because they have access to map-
making software. Perhaps the first maps you made were imitations of existing maps, but that is
how we learn. This chapter is certainly not a textbook on cartography; it is merely an overview
of some of the issues involved in creating cartographically-correct products.
Plan the Map
After your analysis is complete, you can begin map composition. The first step in creating a map
is to plan its contents and layout. The following questions may aid in the planning process:
• Who is the intended audience? What is the level of their knowledge about the subject
matter?
• Will it remain in digital form and be viewed on the computer screen or will it be printed?
• If it is going to be printed, how big will it be? Will it be printed in color or black and white?
The answers to these questions can help to determine the type of information that must go into
the composition and the layout of that information. For example, suppose you are going to do a
series of maps about global deforestation for presentation to Congress, and you are going to
print these maps in color on an inkjet printer. This scenario might lead to the following
conclusions:
• A format (layout) should be developed for the series, so that all the maps produced have the
same style.
• The colors used should be chosen carefully, since the maps are printed in color.
• Political boundaries might need to be included, since they influence the types of actions that
can be taken in each deforested area.
• The typeface sizes used for titles, captions, and labels have to be larger than those used for maps printed on 8.5” × 11.0” sheets. The type styles selected should be the same for all
maps.
• Select symbols that are widely recognized, and make sure they are all explained in a legend.
• Cultural features (roads, urban centers, etc.) may be added for locational reference.
• Include a statement about the accuracy of each map, since these maps may be used in very
high-level decisions.
Once this information is in hand, you can actually begin sketching the look of the map on a sheet
of paper. It is helpful for you to know how you want the map to look before starting the ERDAS
IMAGINE Map Composer. Doing so ensures that all of the necessary data layers are available,
and makes the composition phase go quickly.
See the tour guide about Map Composer in the ERDAS IMAGINE Tour Guides for step-
by-step instructions on creating a map. Refer to the On-Line Help for details about how
Map Composer works.
Map Accuracy
Maps are often used to influence legislation, promote a cause, or enlighten a particular group
before decisions are made. In these cases, especially, map accuracy is of the utmost importance.
There are many factors that influence map accuracy: the projection used, scale, base data,
generalization, etc. The analyst/cartographer must be aware of these factors before map
production begins. The accuracy of the map, in a large part, determines its usefulness. It is
usually up to individual organizations to perform accuracy assessment and decide how those
findings are reflected in the products they produce. However, several agencies have established
guidelines for map makers.
US National Map Accuracy Standard
The United States Bureau of the Budget has developed the US National Map Accuracy Standard in an effort to standardize accuracy reporting on maps. These guidelines are summarized below
(Fisher, 1991):
• On scales smaller than 1:20,000, not more than 10 percent of points tested should be more
than 1/50 inch in horizontal error, where points refer only to points that can be well-defined
on the ground.
• On maps with scales larger than 1:20,000, the corresponding error term is 1/30 inch.
• Not more than 10 percent of the elevations tested may be in error by more than one half of the contour interval.
• Accuracy should be tested by comparison of actual map data with survey data of higher
accuracy (not necessarily with ground truth).
• If maps have been tested and do meet these standards, a statement should be made to that
effect in the legend.
• Maps that have been tested but fail to meet the requirements should omit all mention of the
standards on the legend.
USGS Land Use and Land Cover Map Guidelines
The USGS has set standards of their own for land use and land cover maps (Fisher, 1991):
• The minimum level of accuracy in identifying land use and land cover categories is 85%.
• The several categories shown should have about the same accuracy.
USDA SCS Soils Maps Guidelines
The United States Department of Agriculture (USDA) has set standards for Soil Conservation Service (SCS) soils maps (Fisher, 1991):
• Up to 25% of the pedons may be of other soil types than those named if they do not present
a major hindrance to land management.
• Up to only 10% of pedons may be of other soil types than those named if they do present a
major hindrance to land management.
• No single included soil type may occupy more than 10% of the area of the map unit.
Digitized Hardcopy Maps
Another method of expanding the database is by digitizing existing hardcopy maps. Although this may seem like an easy way to gather more information, care must be taken in pursuing this
avenue if it is necessary to maintain a particular level of accuracy. If the hardcopy maps that are
digitized are outdated, or were not produced using the same accuracy standards that are
currently in use, the digitized map may negatively influence the overall accuracy of the
database.
Chapter 14
Hardcopy Output
Introduction
Hardcopy output refers to any output of image data to paper. These topics are covered in this chapter:
• printing maps
• mechanics of printing
For additional information, see the chapter about Windows printing in the ERDAS
IMAGINE Configuration Guide.
Printing Maps
ERDAS IMAGINE enables you to create and output a variety of types of hardcopy maps, with
several referencing features.
Scaled Maps
A scaled map is a georeferenced map that has been projected to a map projection, and is accurately laid out and referenced to represent distances and locations. A scaled map usually has a legend that includes a scale, such as 1 inch = 1000 feet. The scale is often expressed as a
ratio, like 1:12,000, where 1 inch on the map represents 12,000 inches on the ground.
Printing Large Maps
Some scaled maps do not fit on the paper that is used by the printer. These methods are used to print and store large maps:
• A book map is laid out like the pages of a book. Each page fits on the paper used by the
printer. There is a border, but no tick marks on every page.
• A paneled map is designed to be spliced together into a large paper map; therefore, borders
and tick marks appear on the outer edges of the large map.
[Figure: book map and paneled map layouts, showing neatlines and tick marks]
Scale and Resolution
The following scales and resolutions are noticeable during the process of creating a map composition and sending the composition to a hardcopy device:
• spatial resolution
• display scale
• map scale
• device resolution
Spatial Resolution
Spatial resolution is the area on the ground represented by each raw image data pixel.
Display Scale
Display scale is the distance on the screen as related to one unit on paper. For example, if the
map composition is 24 inches by 36 inches, it would not be possible to view the entire
composition on the screen. Therefore, the scale could be set to 1:0.25 so that the entire map
composition would be in view.
Map Scale
The map scale is the distance on a map as related to the true distance on the ground, or the area
that one pixel represents measured in map units. The map scale is defined when you create an
image area in the map composition. One map composition can have multiple image areas set at
different scales. These areas may need to be shown at different scales for different applications.
Device Resolution
The number of dots that are printed per unit—for example, 300 dots per inch (DPI).
Use the ERDAS IMAGINE Map Composer to define the above scales and resolutions.
Map Scaling Examples
The ERDAS IMAGINE Map Composer enables you to define a map size, as well as the size and scale for the image area within the map composition.
the relationship between these factors and the output file created by Map Composer for the
specific hardcopy device or file format. Figure 14-2 is the map composition that is used in the
examples. This composition was originally created using the ERDAS IMAGINE Map
Composer at a size of 22” × 34”, and the hardcopy output must be in two different formats.
• A TIFF file must be created and sent to a film recorder having a 1,000 dpi resolution.
The vertical direction is the most limiting; therefore, the map composition to paper scale would
be set for 0.23.
If the specified size of the map (width and height) is greater than the printable area for the
printer, the output hardcopy map is paneled. See the hardware manual of the hardcopy
device for information about the printable area of the device.
Use the Print Map Composition dialog to output a map composition to a PostScript
printer.
Output to TIFF
The limiting factor in this example is not page size, but disk space (600 MB total). A three-band
image file must be created in order to convert the map composition to .tif file. Due to the three
bands and the high resolution, the image file could be very large. The .tif file is output to a film
recorder with a 1,000 DPI device resolution.
To determine the number of megabytes for the map composition, the X and Y dimensions need to be calculated:
• X = 22 × 1,000 = 22,000
• Y = 34 × 1,000 = 34,000
At one byte per band per pixel, the three-band file is approximately 22,000 × 34,000 × 3 bytes, or about 2,244 MB.
Although this appears to be an unmanageable file size, it is possible to reduce the file size with
little image degradation. Because the total disk space is only 600 megabytes, the image file created from the map composition must be less than half of that space, leaving room for the .tif file as well. Dividing the map composition by three in both X and Y directions (2,244 MB ÷ 3 ÷ 3) results in approximately
a 250 megabyte file. This file size is small enough to process and leaves enough room for the
image to TIFF conversion. This division is accomplished by specifying a 1/3 or 0.333 map
composition to paper scale when outputting the map composition to an image file.
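The arithmetic in this example can be captured in a few lines. The sketch below (an illustrative calculation, not Map Composer itself) computes the raw file size and the reduced size for a given map composition to paper scale.

```python
# Image file size for a map composition rendered at a given device resolution,
# before and after applying a composition-to-paper scale. Follows the
# 22" x 34", 1,000 DPI, three-band example above; one byte per band per pixel.
def file_size_mb(width_in, height_in, dpi, bands=3, scale=1.0):
    x_pixels = width_in * dpi * scale
    y_pixels = height_in * dpi * scale
    return x_pixels * y_pixels * bands / 1e6

print(round(file_size_mb(22, 34, 1000)))               # about 2244 MB at full size
print(round(file_size_mb(22, 34, 1000, scale=1 / 3)))  # about 249 MB at a 1/3 scale
```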
Once the image file is created and exported to TIFF format, it can be sent to a film recorder that
accepts .tif files. Remember, the file must be enlarged three times to compensate for the
reduction during the image file creation.
See the hardware manual of the hardcopy device for information about the DPI device
resolution.
Use the ERDAS IMAGINE Print Map Composition dialog to output a map composition to
an image file.
Mechanics of Printing
This section describes the mechanics of transferring an image or map composition from a data file to a hardcopy map.
Halftone Printing
Halftoning is the process of converting a continuous tone image into a pattern of dots. A
newspaper photograph is a common example of halftoning.
To make a color illustration, halftones in the primary colors (cyan, magenta, and yellow), plus
black, are overlaid. The halftone dots of different colors, in close proximity, create the effect of
blended colors in much the same way that phosphorescent dots on a color computer monitor
combine red, green, and blue to create other colors. By using different patterns of dots, colors
can have different intensities. The dots for halftoning are a fixed density—either a dot is there
or it is not there.
For scaled maps, each output pixel may contain one or more dot patterns. If a very large image
file is being printed onto a small piece of paper, data file pixels are skipped to accommodate the
reduction.
Hardcopy Devices
The following hardcopy devices use halftoning to output an image or map composition:
See the user’s manual for the hardcopy device for more information about halftone
printing.
Continuous Tone Printing
Continuous tone printing enables you to output color imagery using the four process colors (cyan, magenta, yellow, and black). By using varying percentages of these colors, it is possible
to create a wide range of colors. The printer converts digital data from the host computer into a
continuous tone image. The quality of the output picture is similar to a photograph. The output
is smoother than halftoning because the dots for continuous tone printing can vary in density.
Example
There are different processes by which continuous tone printers generate a map. One example
is a process called thermal dye transfer. The entire image or map composition is loaded into the
printer’s memory. While the paper moves through the printer, heat is used to transfer the dye
from a ribbon, which has the dyes for all of the four process colors, to the paper. The density of
the dot depends on the amount of heat applied by the printer to transfer the dye. The amount of
heat applied is determined by the brightness values of the input image. This allows the printer
to control the amount of dye that is transferred to the paper to create a continuous tone image.
Hardcopy Devices
The following hardcopy device uses continuous toning to output an image or map composition:
• Tektronix Phaser II SD
NOTE: The above printers do not necessarily use the thermal dye transfer process to generate
a map.
See the user’s manual for the hardcopy device for more information about continuous tone
printing.
Contrast and Color Tables
ERDAS IMAGINE contrast and color tables are used for some printing processes, just as they are used in displaying an image. For continuous raster layers, they are loaded from the ERDAS IMAGINE contrast table. For thematic layers, they are loaded from the color table. The
translation of data file values to brightness values is performed entirely by the software
program.
RGB to CMY Conversion
Colors
Since a printer uses ink instead of light to create a visual image, the primary colors of pigment
(cyan, magenta, and yellow) are used in printing, instead of the primary colors of light (red,
green, and blue). Cyan, magenta, and yellow can be combined to make black through a
subtractive process, whereas the primary colors of light are additive—red, green, and blue
combine to make white (Gonzalez and Wintz, 1977).
The data file values that are sent to the printer and the contrast and color tables that accompany
the data file are all in the RGB color scheme. The RGB brightness values in the contrast and
color tables must be converted to cyan, magenta, and yellow (CMY) values.
The RGB primary colors are the opposites of the CMY colors—meaning, for example, that the
presence of cyan in a color means an equal lack of red. To convert the values, each RGB
brightness value is subtracted from the maximum brightness value to produce the brightness
value for the opposite color. The following equation shows this relationship:
C = MAX - R
M = MAX - G
Y = MAX - B
Where:
MAX = the maximum brightness value
R = red value from lookup table
G = green value from lookup table
B = blue value from lookup table
C = calculated cyan value
M = calculated magenta value
Y = calculated yellow value
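A minimal sketch of this subtraction, assuming 8-bit lookup table values with a maximum brightness of 255 (not the ERDAS IMAGINE code itself):

```python
# Convert RGB lookup table brightness values to CMY by subtracting each value
# from the maximum brightness (255 for 8-bit data). Illustrative sketch only.
MAX_BRIGHTNESS = 255

def rgb_to_cmy(r, g, b):
    return (MAX_BRIGHTNESS - r,   # cyan
            MAX_BRIGHTNESS - g,   # magenta
            MAX_BRIGHTNESS - b)   # yellow

print(rgb_to_cmy(255, 0, 0))  # pure red -> (0, 255, 255): no cyan, full magenta and yellow
print(rgb_to_cmy(0, 0, 0))    # black -> (255, 255, 255): full ink in all three colors
```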
Black Ink
Although, theoretically, cyan, magenta, and yellow combine to create black ink, the color that
results is often a dark, muddy brown. Many printers also use black ink for a truer black.
NOTE: Black ink may not be available on all printers. Consult the user’s manual for your
printer.
Images often appear darker when printed than they do when displayed on the display device.
Therefore, it may be beneficial to improve the contrast and brightness of an image before it is
printed.
Appendix A
Math Topics
Introduction
This appendix is a cursory overview of some of the basic mathematical concepts that are
applicable to image processing. Its purpose is to educate the novice reader, and to put these
formulas and concepts into the context of image processing and remote sensing applications.
Summation
A commonly used notation throughout this and other discussions is the Sigma (Σ), used to
denote a summation of values.
For example, the notation
\sum_{i=1}^{10} i
is the sum of all values of i from 1 to 10. Similarly, for a set of four values Q_i,
\sum_{i=1}^{4} Q_i = 3 + 5 + 7 + 2 = 17
Where:
Q1 = 3
Q2 = 5
Q3 = 7
Q4 = 2
Statistics
Histogram
In ERDAS IMAGINE image data files, each data file value (defined by its row, column, and
band) is a variable. ERDAS IMAGINE supports the following data types:
• 1, 2, and 4-bit
Distribution, as used in statistics, is the set of frequencies with which an event occurs, or that a
variable has a particular value.
A histogram is a graph of data frequency or distribution. For a single band of data, the horizontal
axis of a histogram is the range of all possible data file values. The vertical axis is the number
of pixels that have each data value.
[Figure: histogram with data file values (0 to 255) on the horizontal axis and number of pixels on the vertical axis; the curve peaks at 300 pixels for the value 100]
Figure 14-3 shows the histogram for a band of data in which Y pixels have data value X. For
example, in this graph, 300 pixels (y) have the data file value of 100 (x).
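For a single 8-bit band, a histogram is simply a count of pixels per data file value. A minimal sketch, assuming NumPy for the array handling (not the ERDAS IMAGINE implementation):

```python
# Histogram of an 8-bit band: count how many pixels hold each data file value
# from 0 to 255. The random array stands in for a real image band.
import numpy as np

band = np.random.randint(0, 256, size=(512, 512), dtype=np.uint8)
counts, _ = np.histogram(band, bins=256, range=(0, 256))

value = 100
print(f"{counts[value]} pixels have the data file value {value}")
```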
Bin Functions
Bins are used to group ranges of data values together for better manageability. Histograms and
other descriptor columns for 1, 2, 4, and 8-bit data are easy to handle since they contain a
maximum of 256 rows. However, to have a row in a descriptor table for every possible data
value in floating point, complex, and 32-bit integer data would yield an enormous amount of
information. Therefore, the bin function is provided to serve as a data reduction tool.
Then, for example, row 23 of the histogram table would contain the number of pixels in the layer
whose value fell between .023 and .024.
• DIRECT—one bin per integer value. Used by default for 1, 2, 4, and 8-bit integer data, but
may be used for other data types as well. The direct bin function may include an offset for
negative data or data in which the minimum value is greater than zero.
For example, a direct bin with 900 bins and an offset of -601 would look like the following:
• LINEAR—establishes a linear mapping between data values and bin numbers, as in our
first example, mapping the data range 0.0 to 1.0 to bin numbers 0 to 99.
• LOG—establishes a logarithmic mapping between data values and bin numbers. The bin
number is computed by:
• EXPLICIT—explicitly defines mapping between each bin number and data range.
Mean
The mean (µ) of a set of values is its statistical average, such that, if Q_i represents a set of k values:
\mu = \frac{Q_1 + Q_2 + Q_3 + \ldots + Q_k}{k}
or
\mu = \sum_{i=1}^{k} \frac{Q_i}{k}
The mean of data with a normal distribution is the value at the peak of the curve—the point
where the distribution balances.
Normal Distribution
Our general ideas about an average, whether it be average age, average test score, or the average
amount of spectral reflectance from oak trees in the spring, are made visible in the graph of a
normal distribution, or bell curve.
[Figure: bell curve with data file values (0 to 255) on the horizontal axis and number of pixels on the vertical axis]
Average usually refers to a central value on a bell curve, although all distributions have
averages. In a normal distribution, most values are at or near the middle, as shown by the peak
of the bell curve. Values that are more extreme are more rare, as shown by the tails at the ends
of the curve.
The Normal Distributions are a family of bell shaped distributions that turn up frequently under
certain special circumstances. For example, a normal distribution would occur if you were to
compare the bands in a desert image. The bands would be very similar, but would vary slightly.
Each Normal Distribution uses just two parameters, σ and µ, to control the shape and location
of the resulting probability graph through the equation:
f(x) = \frac{1}{\sigma \sqrt{2\pi}} \, e^{-\frac{(x - \mu)^2}{2\sigma^2}}
Where:
x = the quantity’s distribution that is being approximated
π and e = famous mathematical constants
The parameter µ controls how much the bell is shifted horizontally so that its average matches
the average of the distribution of x, while σ adjusts the width of the bell to try to encompass the
spread of the given distribution. In choosing to approximate a distribution by the nearest of the
Normal Distributions, we describe the many values in the bin function of its distribution with
just two parameters. It is a significant simplification that can greatly ease the computational
burden of many operations, but like all simplifications, it reduces the accuracy of the
conclusions we can draw.
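A small sketch that evaluates this probability density for a given µ and σ (illustrative only):

```python
# Evaluate the normal probability density f(x) for a given mean (mu) and
# standard deviation (sigma). Illustrative sketch only.
import math

def normal_pdf(x, mu, sigma):
    exponent = -((x - mu) ** 2) / (2 * sigma ** 2)
    return math.exp(exponent) / (sigma * math.sqrt(2 * math.pi))

# The curve peaks at the mean and falls off symmetrically on either side.
print(round(normal_pdf(100, mu=100, sigma=15), 4))  # about 0.0266 at the peak
print(round(normal_pdf(130, mu=100, sigma=15), 4))  # about 0.0036 two standard deviations out
```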
The normal distribution is the most widely encountered model for probability. Many natural
phenomena can be predicted or estimated according to the law of averages that is implied by the
bell curve (Larsen and Marx, 1981).
A normal distribution in remotely sensed data is meaningful—it is a sign that some
characteristic of an object can be measured by the average amount of electromagnetic radiation
that the object reflects. This relationship between the data and a physical scene or object is what
makes image processing applicable to various types of land analysis.
The mean and standard deviation are often used by computer programs that process and analyze
image data.
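As a quick numerical illustration of the equation above, the short Python sketch below (not ERDAS IMAGINE code; the mean and standard deviation are made-up values) evaluates the normal probability density and shows that it peaks at the mean:

    import numpy as np

    def normal_pdf(x, mu, sigma):
        # f(x) = exp(-(x - mu)^2 / (2 * sigma^2)) / (sigma * sqrt(2 * pi))
        return np.exp(-(x - mu) ** 2 / (2.0 * sigma ** 2)) / (sigma * np.sqrt(2.0 * np.pi))

    # A layer with mean 127 and standard deviation 20.
    print(normal_pdf(np.array([87.0, 127.0, 167.0]), mu=127.0, sigma=20.0))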
Variance The mean of a set of values locates only the average value—it does not adequately describe the
set of values by itself. It is helpful to know how much the data varies from its mean. However,
a simple average of the differences between each value and the mean equals zero in every case,
by definition of the mean. Therefore, the squares of these differences are averaged so that a
meaningful number results (Larsen and Marx, 1981).
In theory, the variance is calculated as follows:
\mathrm{Var}\,Q = E\langle (Q - \mu_Q)^2 \rangle
Where:
E = expected value (weighted average)
2 = squared to make the distance a positive number
In practice, the use of this equation for variance does not usually reflect the exact nature of the
values that are used in the equation. These values are usually only samples of a large data set,
and therefore, the mean and variance of the entire data set are estimated, not known.
The equation used in practice follows. This is called the minimum variance unbiased estimator
of the variance, or the sample variance (notated σ2).
\sigma_Q^2 \approx \frac{\sum_{i=1}^{k} (Q_i - \mu_Q)^2}{k - 1}
Where:
i = a particular pixel
k = the number of pixels (the higher the number, the better the approximation)
The theory behind this equation is discussed in chapters on point estimates and sufficient
statistics, and covered in most statistics texts.
NOTE: The variance is expressed in units squared (e.g., square inches, square data values, etc.),
so it may result in a number that is much higher than any of the original values.
Standard Deviation Since the variance is expressed in units squared, a more useful value is the square root of the
variance, which is expressed in units and can be related back to the original values (Larsen and
Marx, 1981). The square root of the variance is the standard deviation.
Based on the equation for sample variance (s2), the sample standard deviation (sQ) for a set of
values Q is computed as follows:
s_Q = \sqrt{\frac{\sum_{i=1}^{k} (Q_i - \mu_Q)^2}{k - 1}}
In a normal distribution:
• approximately 68% of the values are within one standard deviation of µ, that is, between µ-s and µ+s
• approximately 95% of the values are between µ-2s and µ+2s
• approximately 99.7% of the values are between µ-3s and µ+3s
An example of a simple application of these rules is seen in the ERDAS IMAGINE Viewer.
When 8-bit data are displayed in the Viewer, ERDAS IMAGINE automatically applies a 2
standard deviation stretch that remaps all data file values between µ-2s and µ+2s (more than 1/2
of the data) to the range of possible brightness values on the display device.
Standard deviations are used because the lowest and highest data file values may be much
farther from the mean than 2s.
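The definitions above translate directly into code. The Python sketch below (hypothetical data file values, not an ERDAS IMAGINE routine) computes the sample mean, variance, and standard deviation, then applies a 2 standard deviation stretch of the kind described for the Viewer:

    import numpy as np

    band = np.array([100, 102, 98, 130, 75, 110, 95, 105], dtype=np.float64)

    k = band.size
    mu = band.sum() / k                         # mean
    var = ((band - mu) ** 2).sum() / (k - 1)    # sample variance (divides by k - 1)
    s = np.sqrt(var)                            # sample standard deviation

    # 2 standard deviation stretch: remap values in [mu - 2s, mu + 2s] to 0..255.
    stretched = (band - (mu - 2 * s)) / (4 * s) * 255.0
    stretched = np.clip(np.round(stretched), 0, 255).astype(np.uint8)

    print(mu, var, s)
    print(stretched)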
Parameters As described above, the standard deviation describes how a fixed percentage of the data varies
from the mean. The mean and standard deviation are known as parameters, which are sufficient
to describe a normal curve (Johnston, 1980).
When the mean and standard deviation are known, they can be used to estimate other
calculations about the data. In computer programs, it is much more convenient to estimate
calculations with a mean and standard deviation than it is to repeatedly sample the actual data.
Algorithms that use parameters are parametric. The closer that the distribution of the data
resembles a normal curve, the more accurate the parametric estimates of the data are. ERDAS
IMAGINE classification algorithms that use signature files (.sig) are parametric, since the mean
and standard deviation of each sample or cluster are stored in the file to represent the
distribution of the values.
Covariance In many image processing procedures, the relationships between two bands of data are
important. Covariance measures the tendencies of data file values in the same pixel, but in
different bands, to vary with each other, in relation to the means of their respective bands. The relationship that covariance measures between the bands is a linear one.
Theoretically speaking, whereas variance is the average square of the differences between
values and their mean in one band, covariance is the average product of the differences of
corresponding values in two different bands from their respective means. Compare the
following equation for covariance to the previous one for variance:
\mathrm{Cov}_{QR} = E\langle (Q - \mu_Q)(R - \mu_R) \rangle
Where:
Q and R = data file values in two bands
E = expected value
In practice, the sample covariance is computed with this equation:
C_{QR} \approx \frac{\sum_{i=1}^{k} (Q_i - \mu_Q)(R_i - \mu_R)}{k - 1}
Where:
i = a particular pixel
k = the number of pixels
Like variance, covariance is expressed in units squared.
Covariance Matrix The covariance matrix is an n × n matrix that contains all of the variances and covariances
within n bands of data. Below is an example of a covariance matrix for four bands of data:

C = \begin{bmatrix} C_{1,1} & C_{1,2} & C_{1,3} & C_{1,4} \\ C_{2,1} & C_{2,2} & C_{2,3} & C_{2,4} \\ C_{3,1} & C_{3,2} & C_{3,3} & C_{3,4} \\ C_{4,1} & C_{4,2} & C_{4,3} & C_{4,4} \end{bmatrix}

where C_{Q,R} is the covariance of band Q with band R. The covariance of a band with itself is simply the variance of that band:

C_{QQ} = \frac{\sum_{i=1}^{k} (Q_i - \mu_Q)(Q_i - \mu_Q)}{k - 1} = \frac{\sum_{i=1}^{k} (Q_i - \mu_Q)^2}{k - 1}
Therefore, the diagonal of the covariance matrix consists of the band variances.
The covariance matrix is an organized format for storing variance and covariance information
on a computer system, so that it needs to be computed only once. Also, the matrix itself can be
used in matrix equations, as in principal components analysis.
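A covariance matrix for a small image can be built directly from the definitions above. In the Python sketch below (the band values are hypothetical), the diagonal of the result reproduces the band variances, and NumPy's built-in routine gives the same matrix:

    import numpy as np

    # Three bands of the same five pixels (rows = bands, columns = pixels).
    bands = np.array([[10.0, 12.0, 11.0, 14.0, 13.0],
                      [20.0, 19.0, 22.0, 25.0, 24.0],
                      [ 5.0,  7.0,  6.0,  9.0,  8.0]])

    k = bands.shape[1]
    means = bands.mean(axis=1, keepdims=True)   # one mean per band
    centered = bands - means

    # C[q, r] = sum over pixels of (Q_i - mu_Q)(R_i - mu_R), divided by k - 1
    cov = centered @ centered.T / (k - 1)

    print(cov)
    print(np.diag(cov))    # the band variances
    print(np.cov(bands))   # NumPy's covariance matrix matches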
Dimensionality of Data Spectral dimensionality is determined by the number of sets of values being used in a process. In image processing, each band of data is a set of values. An image with four bands of data is said to be four-dimensional (Jensen, 1996).
NOTE: The letter n is used consistently in this documentation to stand for the number of
dimensions (bands) of image data.
Measurement Vector The measurement vector of a pixel is the set of data file values for one pixel in all n bands.
Although image data files are stored band-by-band, it is often necessary to extract the
measurement vectors for individual pixels.
(Figure: one pixel in an image with n = 3 bands; its data file values V1, V2, and V3 in Band 1, Band 2, and Band 3 form its measurement vector.)
Mean Vector When the measurement vectors of several pixels are analyzed, a mean vector is often calculated.
This is the vector of the means of the data file values in each band. It has n elements.
(Figure: the mean vector of a set of pixels in three bands is the vector of the band means µ1, µ2, and µ3.)
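Because image files are stored band-by-band, extracting a measurement vector means gathering one pixel's value from every band. The Python sketch below (a hypothetical 3-band array, not ERDAS IMAGINE code) pulls the measurement vector for one pixel and the mean vector for all pixels:

    import numpy as np

    # A tiny 3-band image stored band-by-band: shape (bands, rows, columns).
    image = np.arange(3 * 2 * 2, dtype=np.float64).reshape(3, 2, 2)

    row, col = 1, 0
    measurement_vector = image[:, row, col]           # n values, one per band
    mean_vector = image.reshape(3, -1).mean(axis=1)   # n band means

    print(measurement_vector)
    print(mean_vector)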
Feature Space Many algorithms in image processing compare the values of two or more bands of data. The
programs that perform these functions abstractly plot the data file values of the bands being
studied against each other. An example of such a plot in two dimensions (two bands) is
illustrated in Figure 14-7.
(Figure 14-7: two bands plotted against each other in feature space; Band A data file values run along the horizontal axis from 0 to 255, a second band along the vertical axis, and one pixel is plotted at (180, 85).)
NOTE: Although Figure 14-7 is 2-dimensional, feature space is not limited to two dimensions; it has one dimension for each band being compared.
In Figure 14-7, the pixel that is plotted has a measurement vector of:
\begin{bmatrix} 180 \\ 85 \end{bmatrix}
The graph above implies physical dimensions for the sake of illustration. Actually, these
dimensions are based on spectral characteristics represented by the digital image data. As
opposed to physical space, the pixel above is plotted in feature space. Feature space is an
abstract space that is defined by spectral units, such as an amount of electromagnetic radiation.
Feature Space Images Several techniques for the processing of multiband data make use of a two-dimensional
histogram, or feature space image. This is simply a graph of the data file values of one band of
data against the values of another band.
NOTE: In this documentation, 2-dimensional examples are used to illustrate concepts that
apply to any number of dimensions of data. The 2-dimensional examples are best suited for
creating illustrations to be printed.
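A feature space image can be approximated with a two-dimensional histogram. The sketch below (illustrative Python using randomly generated 8-bit bands rather than real image data) counts how many pixels fall at each pair of data file values:

    import numpy as np

    rng = np.random.default_rng(0)
    band_a = rng.integers(0, 256, size=10000)
    band_b = rng.integers(0, 256, size=10000)

    # 256 x 256 grid of counts: cell (a, b) holds the number of pixels whose
    # data file value is a in band A and b in band B.
    feature_space, _, _ = np.histogram2d(band_a, band_b, bins=256,
                                         range=[[0, 256], [0, 256]])

    print(feature_space.shape)   # (256, 256)
    print(feature_space.sum())   # 10000.0 -- every pixel falls in exactly one cell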
Spectral Distance Euclidean Spectral distance is distance in n-dimensional spectral space. It is a number that
allows two measurement vectors to be compared for similarity. The spectral distance between
two pixels can be calculated as follows:
D = \sqrt{\sum_{i=1}^{n} (d_i - e_i)^2}
Where:
D = spectral distance
n = number of bands (dimensions)
i = a particular band
di = data file value of pixel d in band i
ei = data file value of pixel e in band i
This is the equation for Euclidean distance—in two dimensions (when n = 2), it can be
simplified to the Pythagorean Theorem (c^2 = a^2 + b^2), or in this case:

D^2 = (d_i - e_i)^2 + (d_j - e_j)^2
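The Euclidean spectral distance translates directly into code. The Python sketch below compares two hypothetical measurement vectors across n = 3 bands:

    import numpy as np

    d = np.array([180.0, 85.0, 45.0])   # measurement vector of pixel d
    e = np.array([150.0, 90.0, 60.0])   # measurement vector of pixel e

    distance = np.sqrt(((d - e) ** 2).sum())
    print(distance)                      # same result as np.linalg.norm(d - e)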
Order The variables in polynomial expressions can be raised to exponents. The highest exponent in a
polynomial determines the order of the polynomial.
A polynomial with one variable, x, takes this form:
A + Bx + Cx^2 + Dx^3 + \ldots + \Omega x^t
Where:
A, B, C, D ... Ω = coefficients
t = the order of the polynomial
NOTE: If one or all of A, B, C, D ... are 0, then the nature, but not the complexity, of the
transformation is changed. Mathematically, Ω cannot be 0.
For a transformation of order t, the output coordinates (x_o, y_o) are computed from the source coordinates (x, y) by:

x_o = \sum_{i=0}^{t} \sum_{j=0}^{i} a_k \, x^{i-j} \, y^{j}

y_o = \sum_{i=0}^{t} \sum_{j=0}^{i} b_k \, x^{i-j} \, y^{j}
Where:
t is the order of the polynomial
ak and bk are coefficients
the subscript k in ak and bk is determined by:
k = \frac{i \cdot i + i}{2} + j
Polynomial equations are used in image rectification to transform the coordinates of an input
file to the coordinates of another system. The order of the polynomial used in this process is the
order of transformation.
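A transformation of order t can be evaluated directly from the double sums above. The Python sketch below uses made-up coefficients and indexes them with the expression for k given earlier (the lists are zero-based, so a[0], a[1], a[2] play the roles of a1, a2, a3); a first-order case reproduces the familiar form xo = a1 + a2x + a3y:

    def poly_transform(x, y, a, b, t):
        # xo = sum over i = 0..t, j = 0..i of a_k * x^(i-j) * y^j, and likewise
        # for yo with the b coefficients, where k = (i*i + i)/2 + j.
        xo = yo = 0.0
        for i in range(t + 1):
            for j in range(i + 1):
                k = (i * i + i) // 2 + j
                term = x ** (i - j) * y ** j
                xo += a[k] * term
                yo += b[k] * term
        return xo, yo

    a = [10.0, 0.5, 0.0]    # a1, a2, a3
    b = [20.0, 0.0, 0.5]    # b1, b2, b3
    print(poly_transform(100.0, 40.0, a, b, t=1))   # (60.0, 40.0)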
Transformation Matrix In the case of first order image rectification, the variables in the polynomials (x and y) are the
source coordinates of a GCP. The coefficients are computed from the GCPs and stored as a
transformation matrix.
Matrix Algebra A matrix is a set of numbers or values arranged in a rectangular array. If a matrix has i rows and
j columns, it is said to be an i by j matrix.
A one-dimensional matrix, having one column (i by 1) is one of many kinds of vectors. For
example, the measurement vector of a pixel is an n-element vector of the data file values of the
pixel, where n is equal to the number of bands.
Matrix Notation Matrices and vectors are usually designated with a single capital letter, such as M. For example:
M = \begin{bmatrix} 2.2 & 4.6 \\ 6.1 & 8.3 \\ 10.0 & 12.4 \end{bmatrix}
One value in the matrix M would be specified by its position, which is its row and column (in
that order) in the matrix. One element of the array (one value) is designated with a lower case
letter and its position:
m3,2 = 12.4
With column vectors, it is simpler to use only one number to designate the position:
G = \begin{bmatrix} 2.8 \\ 6.5 \\ 10.1 \end{bmatrix}

G_2 = 6.5
Matrix Multiplication A simple example of the application of matrix multiplication is a 1st-order transformation
matrix. The coefficients are stored in a 2 × 3 matrix:
C = \begin{bmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \end{bmatrix}
Then:

x_o = a_1 + a_2 x_i + a_3 y_i
y_o = b_1 + b_2 x_i + b_3 y_i

Where:
x_i and y_i = source coordinates
x_o and y_o = rectified coordinates

The coefficients of the transformation matrix are as above.
The above could be expressed by a matrix equation:
\begin{bmatrix} x_o \\ y_o \end{bmatrix} = \begin{bmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \end{bmatrix} \begin{bmatrix} 1 \\ x_i \\ y_i \end{bmatrix}

or

R = CS
Where:
S = a matrix of the source coordinates (3 by 1)
C = the transformation matrix (2 by 3)
R = the matrix of rectified coordinates (2 by 1)
The sizes of the matrices are shown above to demonstrate a rule of matrix multiplication. To
multiply two matrices, the first matrix must have the same number of columns as the second
matrix has rows. For example, if the first matrix is a by b, and the second matrix is m by n, then
b must equal m, and the product matrix has the size a by n.
The formula for multiplying two matrices is:
(fg)_{ij} = \sum_{k=1}^{b} f_{ik} \, g_{kj}
Where:
i = a row in the product matrix
j = a column in the product matrix
f = an (a by b) matrix
g = an (m by n) matrix (b must equal m)
fg is an a by n matrix.
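The matrix form of the first-order transformation can be checked with a few lines of Python (hypothetical coefficients; the @ operator performs the matrix multiplication defined above):

    import numpy as np

    # 2 by 3 transformation matrix C and 3 by 1 source-coordinate matrix S.
    C = np.array([[10.0, 0.5, 0.0],
                  [20.0, 0.0, 0.5]])
    S = np.array([[1.0],
                  [100.0],    # xi
                  [40.0]])    # yi

    R = C @ S                 # 2 by 1 matrix of rectified coordinates
    print(R.ravel())          # [60. 40.]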
Transposition The transposition of a matrix is derived by interchanging its rows and columns. Transposition
is denoted by T, as in the following example (Cullen, 1972).
G = \begin{bmatrix} 2 & 3 \\ 6 & 4 \\ 10 & 12 \end{bmatrix}

G^T = \begin{bmatrix} 2 & 6 & 10 \\ 3 & 4 & 12 \end{bmatrix}
Appendix B
Map Projections
Introduction This appendix is an alphabetical listing of the map projections supported in ERDAS IMAGINE.
It is divided into two sections:
• USGS Projections
• External Projections
The external projections were implemented outside of ERDAS IMAGINE so that you could add
to these using the IMAGINE Developers’ Toolkit. The projections in each section are presented
in alphabetical order.
The information in this appendix is adapted from:
• Map Projections for Use with the Geographic Information System (Lee and Walsh, 1984)
For general information about map projection types, refer to Chapter 13 “Cartography”.
Rectify an image to a particular map projection using the ERDAS IMAGINE Rectification
tools. View, add, or change projection information using the Image Information option.
NOTE: You cannot rectify to a new map projection using the Image Information option. You
should change map projection information using Image Information only if you know the
information to be incorrect. Use the rectification tools to actually georeference an image to a
new map projection system.
USGS Projections The following USGS map projections are supported in ERDAS IMAGINE and are described in
this section:
• Alaska Conformal
• Azimuthal Equidistant
• Behrmann
• Bonne
• Cassini
• Conic Equidistant
• Eckert I
• Eckert II
• Eckert III
• Eckert IV
• Eckert V
• Eckert VI
• EOSAT SOM
• Equidistant Cylindrical
• Equirectangular
• Gall Stereographic
• Gauss Kruger
• Geographic (Lat/Lon)
• Gnomonic
• Hammer
• Interrupted Mollweide
• Loximuthal
• Mercator
• Miller Cylindrical
• Mollweide
• Orthographic
• Plate Carrée
• Polar Stereographic
• Polyconic
• Quartic Authalic
• Robinson
• RSO
• Sinusoidal
• State Plane
• Stereographic
• Stereographic (Extended)
• Transverse Mercator
• UTM
• Wagner IV
• Wagner VII
• Winkel I
• Winkel II
Alaska Conformal
Property Conformal
Meridians N/A
Parallels N/A
Use of this projection results in a conformal map of Alaska. It has little scale distortion as
compared to other conformal projections. The method of projection is “modified planar. [It is]
a sixth-order-equation modification of an oblique Stereographic conformal projection on the
Clarke 1866 spheroid. The origin is at 64° N, 152° W” (Environmental Systems Research
Institute, 1997).
Source: Environmental Systems Research Institute, 1997
Prompts
The following prompts display in the Projection Chooser once Alaska Conformal is selected.
Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
False easting
False northing
Enter values of false easting and false northing corresponding to the desired center of the
projection. These values must be in meters. It is often convenient to make them large enough so
that no negative coordinates occur within the region of the map projection. That is, the origin of
the rectangular coordinate system should fall outside of the map projection to the south and
west.
Albers Conical Equal Area
Construction Cone
Property Equal-area
Meridians Meridians are straight lines converging on the polar axis, but not at the pole.
The Albers Conical Equal Area projection is mathematically based on a cone that is
conceptually secant on two parallels. There is no areal deformation. The North or South Pole is
represented by an arc. It retains its properties at various scales, and individual sheets can be
joined along their edges.
This projection produces very accurate area and distance measurements in the middle latitudes
(Figure B-1). Thus, Albers Conical Equal Area is well-suited to countries or continents where
north-south depth is about 3/5 the breadth of east-west. When this projection is used for the
continental US, the two standard parallels are 29.5° and 45.5° North.
This projection possesses the property of equal-area, and the standard parallels are correct in
scale and in every direction. Thus, there is no angular distortion (i.e., meridians intersect
parallels at right angles), and conformality exists along the standard parallels. Like other conics,
Albers Conical Equal Area has concentric arcs for parallels and equally spaced radii for
meridians. Parallels are not equally spaced, but are farthest apart between the standard parallels
and closer together on the north and south edges.
Albers Conical Equal Area is the projection exclusively used by the USGS for sectional maps
of all 50 states of the US in the National Atlas of 1970.
Prompts
The following prompts display in the Projection Chooser once Albers Conical Equal Area is
selected. Respond to the prompts as described.
Spheroid Name
Datum Name
In Figure B-1, the standard parallels are 20°N and 60°N. Note the change in spacing of the
parallels.
Azimuthal Equidistant
Construction Plane
Property Equidistant
Meridians Polar aspect: the meridians are straight lines radiating from the point of tangency. Oblique aspect: the meridians are complex curves concave toward the point of tangency.
The Azimuthal Equidistant projection is mathematically based on a plane tangent to the Earth.
The entire Earth can be represented, but generally less than one hemisphere is portrayed; the other hemisphere can be shown, but it is greatly distorted.
distance scaling from the point of tangency.
This projection is used mostly for polar projections because latitude rings divide meridians at
equal intervals with a polar aspect (Figure B-2). Linear scale distortion is moderate and
increases toward the periphery. Meridians are equally spaced, and all distances and directions
are shown accurately from the central point.
This projection can also be used to center on any point on the Earth (e.g., a city) and distance
measurements are true from that central point. Distances are not correct or true along parallels,
and the projection is neither equal-area nor conformal. Also, straight lines radiating from the
center of this projection represent great circles.
Prompts
The following prompts display in the Projection Chooser if Azimuthal Equidistant is selected.
Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Define the center of the map projection in both spherical and rectangular coordinates.
Longitude of center of projection
Latitude of center of projection
Enter values for the longitude and latitude of the desired center of the projection.
False easting
False northing
Enter values of false easting and false northing corresponding to the center of the projection.
These values must be in meters. It is often convenient to make them large enough so that no
negative coordinates occur within the region of the map projection. That is, the origin of the
rectangular coordinate system should fall outside of the map projection to the south and west.
Behrmann
Construction Cylindrical
Property Equal-area
Meridians Straight parallel lines that are equally spaced and 0.42 the length of the
Equator.
Parallels Straight lines that are unequally spaced and farthest apart near the
Equator, perpendicular to meridians.
Graticule spacing See Meridians and Parallels. Poles are straight lines the same length as the
Equator. Symmetry is present about any meridian or the Equator.
Linear scale Scale is true along latitudes 30° N and S.
With the exception of compression in the horizontal direction and expansion in the vertical
direction, the Behrmann projection is the same as the Lambert Cylindrical Equal-area
projection. These changes prevent distortion at latitudes 30° N and S instead of at the Equator.
Source: Snyder and Voxland, 1989
Prompts
The following prompts display in the Projection Chooser once Behrmann is selected. Respond
to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Enter values of false easting and false northing corresponding to the desired center of the
projection. These values must be in meters. It is often convenient to make them large enough so
that no negative coordinates occur within the region of the map projection. That is, the origin of
the rectangular coordinate system should fall outside of the map projection to the south and
west.
Bonne
Construction Pseudocone
Property Equal-area
Meridians N/A
Linear scale Scale is true along the central meridian and all parallels.
Uses This projection is best used on maps of continents and small areas. There is some distortion.
The Bonne projection is an equal-area projection. True scale is achievable along the central
meridian and all parallels. Although it was used in the 1800s and early 1900s, Bonne was
replaced by Lambert Azimuthal Equal Area (see “Lambert Azimuthal Equal Area”) by the
mapping companies Rand McNally & Co. and Hammond, Inc.
Source: Environmental Systems Research Institute, 1997
Prompts
The following prompts display in the Projection Chooser once Bonne is selected. Respond to
the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Enter values of false easting and false northing corresponding to the desired center of the
projection. These values must be in meters. It is often convenient to make them large enough so
that no negative coordinates occur within the region of the map projection. That is, the origin of
the rectangular coordinate system should fall outside of the map projection to the south and
west.
Cassini
Construction Cylinder
Property Compromise
Meridians N/A
Parallels N/A
Prompts
The following prompts display in the Projection Chooser once Cassini is selected. Respond to
the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Scale Factor
Enter the scale factor.
Longitude of central meridian
Latitude of origin of projection
Enter the values for longitude of central meridian and latitude of origin of projection.
False easting
False northing
Enter values of false easting and false northing corresponding to the desired center of the
projection. These values must be in meters. It is often convenient to make them large enough so
that no negative coordinates occur within the region of the map projection. That is, the origin of
the rectangular coordinate system should fall outside of the map projection to the south and
west.
Eckert I
Construction Pseudocylinder
Meridians Meridians are converging straight lines that are equally spaced and
broken at the Equator.
Parallels Parallels are perpendicular to the central meridian, equally spaced straight
parallel lines.
Graticule spacing See Meridians and Parallels. Poles are lines one half the length of the
Equator. Symmetry exists about the central meridian or the Equator.
Linear scale Scale is true along latitudes 47° 10’ N and S. Scale is constant at any
latitude (and latitude of opposite sign) and any meridian.
Uses This projection is used as a novelty to show a straight-line graticule.
A great amount of distortion at the Equator is due to the break at the Equator.
Source: Snyder and Voxland, 1989
Prompts
The following prompts display in the Projection Chooser once Eckert I is selected. Respond to
the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Enter values of false easting and false northing corresponding to the desired center of the
projection. These values must be in meters. It is often convenient to make them large enough so
that no negative coordinates occur within the region of the map projection. That is, the origin of
the rectangular coordinate system should fall outside of the map projection to the south and
west.
Eckert II
Construction Pseudocylinder
Property Equal-area
Meridians Meridians are straight lines that are equally spaced and broken at the
Equator. Central meridian is one half as long as the Equator.
Parallels Parallels are straight parallel lines that are unequally spaced. The greatest
separation is close to the Equator. Parallels are perpendicular to the
central meridian.
Graticule spacing See Meridians and Parallels. Pole lines are half the length of the Equator.
Symmetry exists at the central meridian or the Equator.
Linear scale Scale is true along altitudes 55° 10’ N, and S. Scale is constant along any
latitude.
Uses This projection is used as a novelty to show straight-line equal-area
graticule.
The break at the Equator creates a great amount of distortion there. Eckert II is similar to the
Eckert I projection. The Eckert I projection has meridians positioned identically to Eckert II, but
the Eckert I projection has equidistant parallels.
Source: Snyder and Voxland, 1989
Prompts
The following prompts display in the Projection Chooser once Eckert II is selected. Respond to
the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Enter values of false easting and false northing corresponding to the desired center of the
projection. These values must be in meters. It is often convenient to make them large enough so
that no negative coordinates occur within the region of the map projection. That is, the origin of
the rectangular coordinate system should fall outside of the map projection to the south and
west.
Eckert III
Construction Pseudocylinder
Meridians Meridians are equally spaced elliptical curves. The meridians +/- 180° from the central meridian are semicircles. The poles and the central meridian are straight lines one half the length of the Equator.
Parallels Parallels are equally spaced straight lines.
Graticule spacing See Meridians and Parallels. Pole lines are half the length of the Equator.
Symmetry exists at the central meridian or the Equator.
Linear scale Scale is correct only along 37° 55’ N and S. Features close to poles are compressed in the north-south direction.
Uses Used for mapping the world.
In the Eckert III projection, “no point is free of all scale distortion, but the Equator is free of
angular distortion” (Snyder and Voxland, 1989).
Prompts
The following prompts display in the Projection Chooser once Eckert III is selected. Respond
to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Enter values of false easting and false northing corresponding to the desired center of the
projection. These values must be in meters. It is often convenient to make them large enough so
that no negative coordinates occur within the region of the map projection. That is, the origin of
the rectangular coordinate system should fall outside of the map projection to the south and
west.
Eckert IV
Construction Pseudocylinder
Property Equal-area
Parallels Parallels are straight lines that are unequally spaced and closer together at the poles.
Graticule spacing See Meridians and Parallels. The poles and the central meridian are straight lines one half the length of the Equator.
Linear scale “Scale is distorted north-south 40 percent along the Equator relative to the east-west dimension. This distortion decreases to zero at 40° 30’ N and S and at the central meridian. Scale is correct only along these parallels. Nearer the poles, features are compressed in the north-south direction” (Environmental Systems Research Institute, 1997).
The Eckert IV projection is best used for thematic maps of the globe. An example of a thematic
map is one depicting land cover.
Source: Environmental Systems Research Institute, 1997
Prompts
The following prompts display in the Projection Chooser once Eckert IV is selected. Respond
to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Enter values of false easting and false northing corresponding to the desired center of the
projection. These values must be in meters. It is often convenient to make them large enough so
that no negative coordinates occur within the region of the map projection. That is, the origin of
the rectangular coordinate system should fall outside of the map projection to the south and
west.
Eckert V
Construction Pseudocylinder
Meridians Meridians are sinusoidal curves that are equally spaced. The poles and the
central meridian are straight lines one half as long as the Equator.
Parallels Parallels are straight lines that are equally spaced.
Linear scale Scale is correct only along 37° 55’ N and S. Features near the poles are
compressed in the north-south direction.
Uses This projection is best used for thematic world maps.
The Eckert V projection is only supported on a sphere. Like Eckert III, “no point is free of all scale distortion, but the Equator is free of angular distortion” (Snyder and Voxland, 1989).
Source: Environmental Systems Research Institute, 1997
Prompts
The following prompts display in the Projection Chooser once Eckert V is selected. Respond to
the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Enter values of false easting and false northing corresponding to the desired center of the
projection. These values must be in meters. It is often convenient to make them large enough so
that no negative coordinates occur within the region of the map projection. That is, the origin of
the rectangular coordinate system should fall outside of the map projection to the south and
west.
Eckert VI
Construction Pseudocylinder
Property Equal-area
Parallels Parallels are unequally spaced straight lines, closer together at the poles.
Graticule spacing See Meridians and Parallels. The poles and the central meridian are straight lines one half the length of the Equator.
Linear scale “Scale is distorted north-south 29 percent along the Equator relative to the east-west dimension. This distortion decreases to zero at 49° 16’ N and S and at the central meridian. Scale is correct only along these parallels. Nearer the poles, features are compressed in the north-south direction” (Environmental Systems Research Institute, 1997).
The Eckert VI projection is best used for thematic maps. An example of a thematic map is one
depicting land cover.
Source: Environmental Systems Research Institute, 1997
Prompts
The following prompts display in the Projection Chooser once Eckert VI is selected. Respond
to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Enter values of false easting and false northing corresponding to the desired center of the
projection. These values must be in meters. It is often convenient to make them large enough so
that no negative coordinates occur within the region of the map projection. That is, the origin of
the rectangular coordinate system should fall outside of the map projection to the south and
west.
EOSAT SOM The EOSAT SOM projection is similar to the Space Oblique Mercator projection. The main
exception to the similarity is that the EOSAT SOM projection’s X and Y coordinates are
switched.
Prompts
The following prompts display in the Projection Chooser once EOSAT SOM is selected.
Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Equidistant Conic
Construction Cone
Property Equidistant
Meridians Meridians are straight lines converging on a polar axis but not at the pole.
With Equidistant Conic (Simple Conic) projections, correct distance is achieved along the
line(s) of contact with the cone, and parallels are equidistantly spaced. It can be used with either
one (A) or two (B) standard parallels. This projection is neither conformal nor equal-area, but
the north-south scale along meridians is correct. The North or South Pole is represented by an
arc. Because scale distortion increases with increasing distance from the line(s) of contact, the
Equidistant Conic is used mostly for mapping regions predominantly east-west in extent. The
USGS uses the Equidistant Conic in an approximate form for a map of Alaska.
Prompts
The following prompts display in the Projection Chooser if Equidistant Conic is selected.
Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Define the origin of the projection in both spherical and rectangular coordinates.
Longitude of central meridian
Latitude of origin of projection
Enter values for the longitude of the desired central meridian and the latitude of the origin of
projection.
False easting
False northing
Enter values of false easting and false northing corresponding to the intersection of the central
meridian and the latitude of the origin of projection. These values must be in meters. It is often
convenient to make them large enough so that no negative coordinates occur within the region
of the map projection. That is, the origin of the rectangular coordinate system should fall outside
of the map projection to the south and west.
One or two standard parallels?
Latitude of standard parallel
Enter one or two values for the desired control line(s) of the projection, i.e., the standard
parallel(s). Note that if two standard parallels are used, the first is the southernmost.
Equidistant Cylindrical The Equidistant Cylindrical projection is similar to the Equirectangular projection.
Prompts
The following prompts display in the Projection Chooser if Equidistant Cylindrical is selected.
Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Equirectangular (Plate Carrée)
Construction Cylinder
Property Compromise
Graticule spacing Equally spaced parallel meridians and latitude lines cross at right angles.
Linear scale The scale is correct along all meridians and along the standard parallels (Environmental Systems Research Institute, 1992).
Uses Best used for city maps, or other small areas with map scales small enough to reduce the obvious distortion. Used for simple portrayals of the world or regions with minimal geographic data, such as index maps (Environmental Systems Research Institute, 1992).
Prompts
The following prompts display in the Projection Chooser if Equirectangular is selected.
Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Enter a value for longitude of the desired central meridian to center the projection and the
latitude of true scale.
False easting
False northing
Enter values of false easting and false northing corresponding to the desired center of the
projection. These values must be in meters. It is often convenient to make them large enough so
that no negative coordinates occur within the region of the map projection. That is, the origin of
the rectangular coordinate system should fall outside of the map projection to the south and
west.
Gall Stereographic
Construction Cylinder
Property Compromise
Parallels Parallels are straight lines whose spacing increases with distance from the Equator.
The Gall Stereographic projection was created in 1855. The two standard parallels are located
at 45° N and 45° S. This projection is used for world maps.
Source: Environmental Systems Research Institute, 1997
Prompts
The following prompts display in the Projection Chooser once Gall Stereographic is selected.
Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Gauss Kruger The Gauss Kruger projection is the same as the Transverse Mercator projection, with the
exception that Gauss Kruger uses a fixed scale factor of 1. Gauss Kruger is available only in
ellipsoidal form.
Many countries such as China and Germany use Gauss Kruger in 3-degree zones instead of 6-
degree zones for UTM.
Prompts
The following prompts display in the Projection Chooser once Gauss Kruger is selected.
Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Scale factor
Designate the desired scale factor. This parameter is used to modify scale distortion. A value of
one indicates true scale only along the central meridian. It may be desirable to have true scale
along two lines equidistant from and parallel to the central meridian, or to lessen scale distortion
away from the central meridian. A factor of less than, but close to one is often used.
Longitude of central meridian
Enter a value for the longitude of the desired central meridian to center the projection.
Latitude of origin of projection
Enter the value for the latitude of origin of projection.
False easting
False northing
Enter values of false easting and false northing corresponding to the desired center of the
projection. These values must be in meters. It is often convenient to make them large enough so
that no negative coordinates occur within the region of the map projection. That is, the origin of
the rectangular coordinate system should fall outside of the map projection to the south and
west.
General Vertical Near-side Perspective
Construction Plane
Property Compromise
Meridians The central meridian is a straight line in all aspects. In the polar aspect all meridians are straight. In the equatorial aspect the Equator is straight (Environmental Systems Research Institute, 1992).
Parallels Parallels on vertical polar aspects are concentric circles. Nearly all other parallels are elliptical arcs, except that certain angles of tilt may cause some parallels to be shown as parabolas or hyperbolas.
Graticule spacing Polar aspect: parallels are concentric circles that are not evenly spaced. Meridians are evenly spaced and spacing increases from the center of the projection. Equatorial and oblique aspects: parallels are elliptical arcs that are not evenly spaced. Meridians are elliptical arcs that are not evenly spaced, except for the central meridian, which is a straight line.
Linear scale Radial scale decreases from true scale at the center to zero on the projection edge. The scale perpendicular to the radii decreases, but not as rapidly (Environmental Systems Research Institute, 1992).
Uses Often used to show the Earth or other planets and satellites as seen from space. Used as an aesthetic presentation, rather than for technical applications (Environmental Systems Research Institute, 1992).
General Vertical Near-side Perspective presents a picture of the Earth as if a photograph were
taken at some distance less than infinity. The map user simply identifies area of coverage,
distance of view, and angle of view. It is a variation of the General Perspective projection in
which the “camera” precisely faces the center of the Earth.
Central meridian and a particular parallel (if shown) are straight lines. Other meridians and
parallels are usually arcs of circles or ellipses, but some may be parabolas or hyperbolas. Like
all perspective projections, General Vertical Near-side Perspective cannot illustrate the entire
globe on one map—it can represent only part of one hemisphere.
Prompts
The following prompts display in the Projection Chooser if General Vertical Near-side
Perspective is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Enter a value for the desired height of the perspective point above the sphere in the same units
as the radius.
Then, define the center of the map projection in both spherical and rectangular coordinates.
Longitude of center of projection
Latitude of center of projection
Enter values for the longitude and latitude of the desired center of the projection.
False easting
False northing
Enter values of false easting and false northing corresponding to the center of the projection.
These values must be in meters. It is often convenient to make them large enough so that no
negative coordinates occur within the region of the map projection. That is, the origin of the
rectangular coordinate system should fall outside of the map projection to the south and west.
Geographic (Lat/Lon) The Geographic is a spherical coordinate system composed of parallels of latitude (Lat) and
meridians of longitude (Lon) (Figure B-13). Both divide the circumference of the Earth into 360
degrees. Degrees are further subdivided into minutes and seconds (60 sec = 1 minute, 60 min =
1 degree).
Because the Earth spins on an axis between the North and South Poles, concentric, parallel circles can be constructed, with a reference line exactly at the north-south center, termed the
Equator. The series of circles north of the Equator is termed north latitudes and runs from 0°
latitude (the Equator) to 90° North latitude (the North Pole), and similarly southward. Position
in an east-west direction is determined from lines of longitude. These lines are not parallel, and
they converge at the poles. However, they intersect lines of latitude perpendicularly.
Unlike the Equator in the latitude system, there is no natural zero meridian. In 1884, it was
finally agreed that the meridian of the Royal Observatory in Greenwich, England, would be the
prime meridian. Thus, the origin of the geographic coordinate system is the intersection of the
Equator and the prime meridian. Note that the 180° meridian is the international date line.
If you choose Geographic from the Projection Chooser, the following prompts display:
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Note that in responding to prompts for other projections, values for longitude are negative
west of Greenwich and values for latitude are negative south of the Equator.
Figure B-13 shows the graticule of meridians and parallels on the global surface.
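When responding to the prompts for the other projections, coordinates given in degrees, minutes, and seconds must be converted to signed decimal degrees (negative west of Greenwich, negative south of the Equator). A small Python sketch of that conversion (illustrative only, not an ERDAS IMAGINE utility) follows:

    def dms_to_decimal(degrees, minutes, seconds, direction):
        # 60 seconds = 1 minute, 60 minutes = 1 degree; W and S are negative.
        value = abs(degrees) + minutes / 60.0 + seconds / 3600.0
        return -value if direction in ("W", "S") else value

    # 84 degrees 23 minutes 30 seconds West of Greenwich:
    print(dms_to_decimal(84, 23, 30, "W"))   # about -84.3917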
Gnomonic
Construction Plane
Property Compromise
Meridians Polar aspect: the meridians are straight lines radiating from the point of tangency. Oblique and equatorial aspects: the meridians are straight lines.
Parallels Polar aspect: the parallels are concentric circles.
Gnomonic is a perspective projection that projects onto a tangent plane from a position in the
center of the Earth. Because of the close perspective, this projection is limited to less than a
hemisphere. However, it is the only projection which shows all great circles as straight lines.
With a polar aspect, the latitude intervals increase rapidly from the center outwards.
With an equatorial or oblique aspect, the Equator is straight. Meridians are straight and parallel,
while intervals between parallels increase rapidly from the center and parallels are convex to the
Equator.
Because great circles are straight, this projection is useful for air and sea navigation. Rhumb
lines are curved, which is the opposite of the Mercator projection.
Prompts
The following prompts display in the Projection Chooser if Gnomonic is selected. Respond to
the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Define the center of the map projection in both spherical and rectangular coordinates.
Longitude of center of projection
Latitude of center of projection
Enter values for the longitude and latitude of the desired center of the projection.
False easting
False northing
Enter values of false easting and false northing corresponding to the center of the projection.
These values must be in meters. It is often convenient to make them large enough to prevent
negative coordinates within the region of the map projection. That is, the origin of the
rectangular coordinate system should fall outside of the map projection to the south and west.
Hammer
Property Equal-area
Meridians The central meridian is half as long as the Equator and a straight line. Others are curved and concave toward the central meridian and unequally spaced.
Parallels With the exception of the Equator, all parallels are complex curves that have a concave shape toward the nearest pole.
Graticule spacing Only the Equator and central meridian are straight lines.
Linear scale Scale decreases along the Equator and central meridian with distance from the origin.
The Hammer projection is useful for mapping the world. In particular, the Hammer projection
is suited for thematic maps of the world, such as land cover.
Source: Environmental Systems Research Institute, 1997
Prompts
The following prompts display in the Projection Chooser once Hammer is selected. Respond to
the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Enter values of false easting and false northing corresponding to the desired center of the
projection. These values must be in meters. It is often convenient to make them large enough so
that no negative coordinates occur within the region of the map projection. That is, the origin of
the rectangular coordinate system should fall outside of the map projection to the south and
west.
Interrupted Goode
Homolosine
Construction Pseudocylindrical
Property Equal-area
Meridians “In the interrupted form, there are six central meridians, each a straight
line 0.22 as long as the Equator but not crossing the Equator. Other
meridians are equally spaced sinusoidal curves between latitudes 40° 44’
N and S. and elliptical arcs elsewhere, all concave toward the central
meridian. There is a slight bend in meridians at the 40° 44’ latitudes”
(Snyder and Voxland, 1989).
Parallels Parallels are straight parallel lines, which are perpendicular to the central
meridians. Between latitudes 40° 44’ N and S, they are equally spaced.
Parallels gradually get closer together closer to the poles.
Graticule spacing See Meridians and Parallels. Poles are points. Symmetry is nonexistent in
the interrupted form.
Linear scale Scale is true at each latitude between 40° 44’ N and S along the central
meridian within the same latitude range. Scale varies with increased
latitudes.
Uses This projection is useful for world maps.
Prompts
The following prompts display in the Projection Chooser once Interrupted Goode Homolosine
is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Interrupted Mollweide The interrupted Mollweide projection reduces the distortion of the Mollweide projection. It is
interrupted into six regions with fixed parameters for each region.
Source: Snyder and Voxland, 1989
Prompts
The following prompts display in the Projection Chooser once Interrupted Mollweide is
selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Lambert Azimuthal
Equal Area
Construction Plane
Property Equal-area
Meridians Polar aspect: the meridians are straight lines radiating from the point of tangency. Oblique and equatorial aspects: meridians are complex curves concave toward a straight central meridian, except the outer meridian of a hemisphere, which is a circle.
Parallels Polar aspect: parallels are concentric circles. Oblique and equatorial aspects: the parallels are complex curves. The Equator on the equatorial aspect is a straight line.
Graticule spacing Polar aspect: the meridian spacing is equal and increases, and the parallel spacing is unequal and decreases toward the periphery of the projection. The graticule spacing, in all aspects, retains the property of equivalence of area.
Linear scale Linear scale is better than most azimuthals, but not as good as the equidistant. Angular deformation increases toward the periphery of the projection. Scale decreases radially toward the periphery of the map projection. Scale increases perpendicular to the radii toward the periphery.
Uses The polar aspect is used by the USGS in the National Atlas. The polar, oblique, and equatorial aspects are used by the USGS for the Circum-Pacific Map.
The Lambert Azimuthal Equal Area projection is mathematically based on a plane tangent to
the Earth. It is the only projection that can accurately represent both area and true direction from
the center of the projection (Figure B-17). This central point can be located anywhere.
Concentric circles are closer together toward the edge of the map, and the scale distorts
accordingly. This projection is well-suited to square or round land masses. This projection
generally represents only one hemisphere.
In the polar aspect, latitude rings decrease their intervals from the center outwards. In the
equatorial aspect, parallels are curves flattened in the middle. Meridians are also curved, except
for the central meridian, and spacing decreases toward the edges.
Prompts
The following prompts display in the Projection Chooser if Lambert Azimuthal Equal Area is
selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Define the center of the map projection in both spherical and rectangular coordinates.
Longitude of center of projection
Latitude of center of projection
Enter values for the longitude and latitude of the desired center of the projection.
False easting
False northing
Enter values of false easting and false northing corresponding to the center of the projection.
These values must be in meters. It is often convenient to make them large enough to prevent
negative coordinates within the region of the map projection. That is, the origin of the
rectangular coordinate system should fall outside of the map projection to the south and west.
In Figure B-17, three views of the Lambert Azimuthal Equal Area projection are shown: A)
Polar aspect, showing one hemisphere; B) Equatorial aspect, frequently used in old atlases for
maps of the eastern and western hemispheres; C) Oblique aspect, centered on 40°N.
Lambert Conformal
Conic
Construction Cone
Property Conformal
Parallels Parallels are arcs of concentric circles concave toward a pole and centered at a pole.
Graticule spacing Meridian spacing is true on the standard parallels and decreases toward the pole. Parallel spacing increases away from the standard parallels and decreases between them. Meridians and parallels intersect each other at right angles. The graticule spacing retains the property of conformality. The graticule is symmetrical.
Linear scale Linear scale is true on standard parallels. Maximum scale error is 2.5% on a map of the United States (48 states) with standard parallels at 33°N and 45°N.
Uses Used for large countries in the mid-latitudes having an east-west orientation. The United States (50 states) Base Map uses standard parallels at 37°N and 65°N. Some of the National Topographic Map Series 7.5-minute and 15-minute quadrangles, and the State Base Map series are constructed on this projection. The latter series uses standard parallels of 33°N and 45°N. Aeronautical charts for Alaska use standard parallels at 55°N and 65°N. The National Atlas of Canada uses standard parallels at 49°N and 77°N.
This projection is very similar to Albers Conical Equal Area, described previously. It is
mathematically based on a cone that is tangent at one parallel or, more often, that is conceptually
secant on two parallels (Figure B-18). Areal distortion is minimal, but increases away from the
standard parallels. North or South Pole is represented by a point—the other pole cannot be
shown. Great circle lines are approximately straight. It retains its properties at various scales,
and sheets can be joined along their edges. This projection, like Albers, is most valuable in
middle latitudes, especially in a country sprawling east to west like the US. The standard
parallels for the US are 33° and 45°N.
The major property of this projection is its conformality. At all coordinates, meridians and
parallels cross at right angles. The correct angles produce correct shapes. Also, great circles are
approximately straight. The conformal property of Lambert Conformal Conic, and the
straightness of great circles makes it valuable for landmark flying.
Lambert Conformal Conic is the State Plane coordinate system projection for states of
predominant east-west expanse. Since 1962, Lambert Conformal Conic has been used for the
International Map of the World between 84°N and 80°S.
In comparison with Albers Conical Equal Area, Lambert Conformal Conic possesses true shape
of small areas, whereas Albers possesses equal-area. Unlike Albers, parallels of Lambert
Conformal Conic are spaced at increasing intervals the farther north or south they are from the
standard parallels.
Prompts
The following prompts display in the Projection Chooser if Lambert Conformal Conic is
selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
If you only have one standard parallel you should enter that same value into all three
latitude fields.
Enter values for longitude of the desired central meridian and latitude of the origin of projection.
False easting at central meridian
False northing at origin
Enter values of false easting and false northing corresponding to the intersection of the central
meridian, and the latitude of the origin of projection. These values must be in meters. It is often
convenient to make them large enough to ensure that there are no negative coordinates within
the region of the map projection. That is, the origin of the rectangular coordinate system should
fall outside of the map projection to the south and west.
In Figure B-18, the standard parallels are 20°N and 60°N. Note the change in spacing of the
parallels.
Loximuthal
Construction Pseudocylindrical
Meridians The “central meridian is a straight line generally over half as long as the
Equator, depending on the central latitude. If the central latitude is the
Equator, the ratio is 0.5; if it is 40° N or S, the ratio is 0.65. Other
meridians are equally spaced complex curves intersecting at the poles and
concave toward the central meridian” (Snyder and Voxland, 1989).
Parallels Parallels are straight parallel lines that are equally spaced. They are
perpendicular to the central meridian.
Graticule spacing See Meridians and Parallels. Poles are points. Symmetry exists about the
central meridian. Symmetry also exists at the Equator if it is designated as
the central latitude.
Linear scale Scale is true along the central meridian. Scale is also constant along any
given latitude, but different for the latitude of opposite sign.
Uses Used for world maps where loxodromes (rhumb lines) are emphasized.
The distortion of the Loximuthal projection is average to pronounced. Distortion is not present
at the central latitude on the central meridian. What is most noteworthy about the loximuthal
projection is the loxodromes that are “straight, true to scale, and correct in azimuth from the
center” (Snyder and Voxland, 1989).
Source: Snyder and Voxland, 1989
Prompts
The following prompts display in the Projection Chooser if Loximuthal is selected. Respond to
the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Enter values of false easting and false northing corresponding to the desired center of the
projection. These values must be in meters. It is often convenient to make them large enough so
that no negative coordinates occur within the region of the map projection. That is, the origin of
the rectangular coordinate system should fall outside of the map projection to the south and
west.
Mercator
Construction Cylinder
Property Conformal
Graticule spacing Meridian spacing is equal and the parallel spacing increases away from the Equator. The graticule spacing retains the property of conformality. The graticule is symmetrical. Meridians intersect parallels at right angles.
Linear scale Linear scale is true along the Equator only (line of tangency), or along two parallels equidistant from the Equator (the secant form). Scale can be determined by measuring one degree of latitude, which equals 60 nautical miles, 69 statute miles, or 111 kilometers.
Uses An excellent projection for equatorial regions. Otherwise, the Mercator is a special-purpose map projection best suited for navigation. Secant constructions are used for large-scale coastal charts. The use of the Mercator map projection as the base for nautical charts is universal. Examples are the charts published by the National Ocean Survey, US Dept. of Commerce.
This famous cylindrical projection was originally designed by Flemish map maker Gerhardus
Mercator in 1569 to aid navigation (Figure B-20). Meridians and parallels are straight lines and
cross at 90° angles. Angular relationships are preserved. However, to preserve conformality,
parallels are placed increasingly farther apart with increasing distance from the Equator. Due to
extreme scale distortion in high latitudes, the projection is rarely extended beyond 80°N or S
unless the latitude of true scale is other than the Equator. Distance scales are usually furnished
for several latitudes.
This projection can be thought of as being mathematically based on a cylinder tangent at the
Equator. Any straight line is a constant-azimuth (rhumb) line. Areal enlargement is extreme
away from the Equator; poles cannot be represented. Shape is true only within any small area.
It is a reasonably accurate projection within a 15° band along the line of tangency.
Rhumb lines, which show constant direction, are straight. For this reason, a Mercator map was
very valuable to sea navigators. However, rhumb lines are not the shortest path—great circles
are the shortest path. Most great circles appear as long arcs when drawn on a Mercator map.
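For reference, the widely published sphere-based forward equations of the Mercator projection can be sketched as follows; this is illustrative only, since the Projection Chooser works with the selected spheroid and datum, and the radius used here is an assumed mean Earth radius.

import math

def mercator_forward(lon_deg, lat_deg, lon0_deg=0.0, radius=6371000.0):
    # Spherical Mercator: x = R*(lon - lon0); y = R*ln(tan(pi/4 + lat/2)).
    lon, lat, lon0 = (math.radians(v) for v in (lon_deg, lat_deg, lon0_deg))
    x = radius * (lon - lon0)
    y = radius * math.log(math.tan(math.pi / 4.0 + lat / 2.0))
    return x, y

# Parallel spacing grows with latitude, which is why the projection is rarely
# extended beyond 80 degrees N or S.
print(mercator_forward(10.0, 30.0))
print(mercator_forward(10.0, 60.0))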
Prompts
The following prompts display in the Projection Chooser if Mercator is selected. Respond to the
prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Define the origin of the map projection in both spherical and rectangular coordinates.
Longitude of central meridian
Latitude of true scale
Enter values for longitude of the desired central meridian and latitude at which true scale is
desired. Selection of a parameter other than the Equator can be useful for making maps in
extreme north or south latitudes.
False easting at central meridian
False northing at origin
Enter values of false easting and false northing corresponding to the intersection of the central
meridian and the latitude of true scale. These values must be in meters. It is often convenient to
make them large enough so that no negative coordinates occur within the region of the map
projection. That is, the origin of the rectangular coordinate system should fall outside of the map
projection to the south and west.
In Figure B-20, all angles are shown correctly, therefore small shapes are true (i.e., the map is
conformal). Rhumb lines are straight, which makes it useful for navigation.
Miller Cylindrical
Construction Cylinder
Property Compromise
Graticule spacing Meridians are parallel and equally spaced, the lines of latitude are parallel, and the distance between them increases toward the poles. Both poles are represented as straight lines. Meridians and parallels intersect at right angles (Environmental Systems Research Institute, 1992).
Linear scale While the standard parallels, or lines that are true to scale and free of distortion, are at latitudes 45°N and S, only the Equator is standard.
Miller Cylindrical is a modification of the Mercator projection (Figure B-21). It is similar to the
Mercator from the Equator to 45°, but latitude line intervals are modified so that the distance
between them increases less rapidly. Thus, beyond 45°, Miller Cylindrical lessens the extreme
exaggeration of the Mercator. Miller Cylindrical also includes the poles as straight lines
whereas the Mercator does not.
Meridians and parallels are straight lines intersecting at right angles. Meridians are equidistant,
while parallels are spaced farther apart the farther they are from the Equator. Miller Cylindrical
is not equal-area, equidistant, or conformal. Miller Cylindrical is used for world maps and in
several atlases.
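The sphere-based forward equations below show how Miller Cylindrical damps the Mercator's poleward stretching by scaling the latitude before and after the logarithmic term; this is a sketch for illustration only, with an assumed mean Earth radius, not the software's ellipsoidal computation.

import math

def miller_forward(lon_deg, lat_deg, lon0_deg=0.0, radius=6371000.0):
    # Miller Cylindrical (sphere): x = R*(lon - lon0); y = 1.25 * R * ln(tan(pi/4 + 0.4*lat)).
    lon, lat, lon0 = (math.radians(v) for v in (lon_deg, lat_deg, lon0_deg))
    x = radius * (lon - lon0)
    y = 1.25 * radius * math.log(math.tan(math.pi / 4.0 + 0.4 * lat))
    return x, y

# Unlike the Mercator, the poles project to finite y values (straight lines).
print(miller_forward(0.0, 90.0))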
Prompts
The following prompts display in the Projection Chooser if Miller Cylindrical is selected.
Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Enter values of false easting and false northing corresponding to the desired center of the
projection. These values must be in meters. It is often convenient to make them large enough to
prevent negative coordinates within the region of the map projection. That is, the origin of the
rectangular coordinate system should fall outside of the map projection to the south and west.
This projection resembles the Mercator, but has less distortion in polar regions. Miller
Cylindrical is neither conformal nor equal-area.
Modified Transverse
Mercator
Construction Cone
Property Equidistant
In 1972, the USGS devised a projection specifically for the revision of a 1954 map of Alaska
which, like its predecessors, was based on the Polyconic projection. This projection was drawn
to a scale of 1:2,000,000 and published at 1:2,500,000 (map “E”) and 1:1,584,000 (map “B”).
Graphically prepared by adapting coordinates for the UTM projection, it is identified as the
Modified Transverse Mercator projection. It resembles the Transverse Mercator in a very
limited manner and cannot be considered a cylindrical projection. It resembles the Equidistant
Conic projection for the ellipsoid in actual construction. The projection was also used in 1974
for a base map of the Aleutian-Bering Sea Region published at 1:2,500,000 scale.
It is found to be most closely equivalent to the Equidistant Conic for the Clarke 1866 ellipsoid,
with the scale along the meridians reduced to 0.9992 of true scale, and the standard parallels at
latitudes 66.09°N and 53.50°N.
Prompts
The following prompts display in the Projection Chooser if Modified Transverse Mercator is
selected. Respond to the prompts as described.
False easting
False northing
Enter values of false easting and false northing corresponding to the desired center of the
projection. These values must be in meters. It is often convenient to make them large enough to
prevent negative coordinates within the region of the map projection. That is, the origin of the
rectangular coordinate system should fall outside of the map projection to the south and west.
Mollweide
Construction Pseudocylinder
Property Equal-area
Meridians Meridians are elliptical arcs that are equally spaced. The exception is the central meridian, which is a straight line.
Graticule spacing The Equator and the central meridian are linear graticules.
Linear scale Scale is accurate along latitudes 40° 44’ N and S at the central meridian. Distortion becomes more pronounced farther from these lines, and is severe at the extremes of the projection.
Prompts
The following prompts display in the Projection Chooser once Mollweide is selected. Respond
to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Enter values of false easting and false northing corresponding to the desired center of the
projection. These values must be in meters. It is often convenient to make them large enough so
that no negative coordinates occur within the region of the map projection. That is, the origin of
the rectangular coordinate system should fall outside of the map projection to the south and
west.
New Zealand Map Grid
Property Conformal
Meridians N/A
Parallels N/A
Linear scale Scale is within 0.02 percent of actual scale for the country of New
Zealand.
Uses This projection is useful only for maps of New Zealand.
Prompts
The following prompts display in the Projection Chooser once New Zealand Map Grid is
selected. Respond to the prompts as described.
Spheroid Name
Datum Name
The Spheroid Name defaults to International 1909. The Datum Name defaults to Geodetic
Datum 1949. These fields are not editable.
Easting Shift
Northing Shift
The Easting and Northing shifts are reported in meters.
Oblated Equal Area
Prompts
The following prompts display in the Projection Chooser once Oblated Equal Area is selected.
Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Parameter M
Parameter N
Enter the oblated equal area oval shape parameters M and N.
Longitude of center of projection
Latitude of center of projection
Enter the longitude of the center of the projection and the latitude of the center of the projection.
Rotation angle
Enter the oblated equal area oval rotation angle.
False easting
False northing
Enter values of false easting and false northing corresponding to the desired center of the
projection. These values must be in meters. It is often convenient to make them large enough so
that no negative coordinates occur within the region of the map projection. That is, the origin of
the rectangular coordinate system should fall outside of the map projection to the south and
west.
Oblique Mercator
(Hotine)
Construction Cylinder
Property Conformal
Parallels Parallels are complex curves concave toward the nearest pole.
Graticule spacing Graticule spacing increases away from the line of tangency and retains the property of conformality.
Linear scale Linear scale is true along the line of tangency, or along two lines equidistant from and parallel to the line of tangency.
Uses Useful for plotting linear configurations that are situated along a line oblique to the Earth’s Equator. Examples are: NASA Surveyor Satellite tracking charts, ERTS flight indexes, strip charts for navigation, and the National Geographic Society’s maps “West Indies,” “Countries of the Caribbean,” “Hawaii,” and “New Zealand.”
Oblique Mercator is a cylindrical, conformal projection that intersects the global surface along
a great circle. It is equivalent to a Mercator projection that has been altered by rotating the
cylinder so that the central line of the projection is a great circle path instead of the Equator.
Shape is true only within any small area. Areal enlargement increases away from the line of
tangency. Projection is reasonably accurate within a 15° band along the line of tangency.
The USGS uses the Hotine version of Oblique Mercator. The Hotine version is based on a study
of conformal projections published by British geodesist Martin Hotine in 1946-47. Prior to the
implementation of the Space Oblique Mercator, the Hotine version was used for mapping
Landsat satellite imagery.
Prompts
The following prompts display in the Projection Chooser if Oblique Mercator (Hotine) is
selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Designate the desired scale factor along the central line of the projection. This parameter may
be used to modify scale distortion away from this central line. A value of 1.0 indicates true scale
only along the central line. A value of less than, but close to one is often used to lessen scale
distortion away from the central line.
Latitude of point of origin
False easting
False northing
The center of the projection is defined by rectangular coordinates of false easting and false
northing. The origin of rectangular coordinates on this projection occurs at the nearest
intersection of the central line with the Earth’s Equator. To shift the origin to the intersection of
the latitude of the origin entered above and the central line of the projection, compute
coordinates of the latter point with zero false eastings and northings, reverse the signs of the
coordinates obtained, and use these for false eastings and northings. These values must be in
meters.
It is often convenient to add additional values so that no negative coordinates occur within the
region of the map projection. That is, the origin of the rectangular coordinate system should fall
outside of the map projection to the south and west.
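The sign-reversal procedure described above can be sketched as follows; hotine_forward is a hypothetical stand-in for the Hotine forward projection evaluated with zero false easting and northing, not an actual ERDAS IMAGINE function.

def false_origin_from_latitude_of_origin(hotine_forward, lon_on_central_line, lat_of_origin):
    # Project the point where the latitude of origin meets the central line,
    # using zero false easting/northing, then reverse the signs of the result.
    x, y = hotine_forward(lon_on_central_line, lat_of_origin)
    return -x, -y  # use these as the false easting and false northing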
Do you want to enter either:
Format A
For format A, the additional prompts are:
Azimuth east of north for central line
Longitude of point of origin
Format A defines the central line of the projection by the angle east of north to the desired great
circle path and by the latitude and longitude of the point along the great circle path from which
the angle is measured. Appropriate values should be entered.
Format B
For format B, the additional prompts are:
Longitude of 1st point
Latitude of 1st point
Longitude of 2nd point
Latitude of 2nd point
Format B defines the central line of the projection by the latitude of a point on the central line
which has the desired scale factor entered previously and by the longitude and latitude of two
points along the desired great circle path. Appropriate values should be entered.
Orthographic
Construction Plane
Property Compromise
Meridians Polar aspect: the meridians are straight lines radiating from the point of tangency. Oblique aspect: the meridians are ellipses, concave toward the center of the projection. Equatorial aspect: the meridians are ellipses concave toward the straight central meridian.
Parallels Polar aspect: the parallels are concentric circles. Oblique aspect: the parallels are ellipses concave toward the poles.
Uses USGS uses the Orthographic map projection in the National Atlas.
The Orthographic projection is geometrically based on a plane tangent to the Earth, and the
point of projection is at infinity (Figure B-24). The Earth appears as it would from outer space.
Light rays that cast the projection are parallel and intersect the tangent plane at right angles. This
projection is a truly graphic representation of the Earth, and is a projection in which distortion
becomes a visual aid. It is the most familiar of the azimuthal map projections. Directions from
the center of the projection are true.
This projection is limited to one hemisphere and shrinks those areas toward the periphery. In the
polar aspect, latitude ring intervals decrease from the center outwards at a much greater rate than
with Lambert Azimuthal. In the equatorial aspect, the central meridian and parallels are straight,
with spaces closing up toward the outer edge.
The Orthographic projection seldom appears in atlases. Its utility is more pictorial than
technical. Orthographic has been used as a basis for maps by Rand McNally and the USGS.
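The geometry described above reduces, for a sphere, to the forward equations sketched below; lon0 and lat0 are the assumed center of projection, the radius is an assumed mean Earth radius, and points on the far hemisphere are rejected because the projection is limited to one hemisphere. This is an illustration, not the software's ellipsoidal computation.

import math

def orthographic_forward(lon_deg, lat_deg, lon0_deg, lat0_deg, radius=6371000.0):
    # Orthographic (sphere), oblique aspect centered at (lon0, lat0).
    lon, lat, lon0, lat0 = (math.radians(v) for v in (lon_deg, lat_deg, lon0_deg, lat0_deg))
    cos_c = math.sin(lat0) * math.sin(lat) + math.cos(lat0) * math.cos(lat) * math.cos(lon - lon0)
    if cos_c < 0.0:
        raise ValueError("point is on the far hemisphere and cannot be shown")
    x = radius * math.cos(lat) * math.sin(lon - lon0)
    y = radius * (math.cos(lat0) * math.sin(lat) - math.sin(lat0) * math.cos(lat) * math.cos(lon - lon0))
    return x, y

print(orthographic_forward(-90.0, 30.0, -100.0, 40.0))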
Prompts
The following prompts display in the Projection Chooser if Orthographic is selected. Respond
to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Define the center of the map projection in both spherical and rectangular coordinates.
Longitude of center of projection
Latitude of center of projection
Enter values for the longitude and latitude of the desired center of the projection.
False easting
False northing
Enter values of false easting and false northing corresponding to the center of the projection.
These values must be in meters. It is often convenient to make them large enough so that no
negative coordinates occur within the region of the map projection. That is, the origin of the
rectangular coordinate system should fall outside of the map projection to the south and
west. Three views of the Orthographic projection are shown in Figure B-24: A) Polar aspect; B)
Equatorial aspect; C) Oblique aspect, centered at 40°N and showing the classic globe-like view.
Plate Carrée
The parameters for the Plate Carrée projection are identical to those of the Equirectangular projection.
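Both projections share the same simple sphere-based forward equations; the Plate Carrée is the special case of the Equirectangular projection whose standard parallel is the Equator. The sketch below is illustrative only, with an assumed mean Earth radius.

import math

def equirectangular_forward(lon_deg, lat_deg, lon0_deg=0.0, lat1_deg=0.0, radius=6371000.0):
    # Equirectangular (sphere): x = R*(lon - lon0)*cos(lat1); y = R*lat.
    # Plate Carrée is the case lat1 = 0, so x = R*(lon - lon0).
    lon, lat, lon0, lat1 = (math.radians(v) for v in (lon_deg, lat_deg, lon0_deg, lat1_deg))
    return radius * (lon - lon0) * math.cos(lat1), radius * lat

print(equirectangular_forward(30.0, 45.0))                   # Plate Carrée
print(equirectangular_forward(30.0, 45.0, lat1_deg=30.0))    # Equirectangular, standard parallel 30°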
Prompts
The following prompts display in the Projection Chooser if Plate Carrée is selected. Respond to
the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Polar Stereographic
Construction Plane
Property Conformal
Graticule spacing The distance between parallels increases with distance from the central pole.
Linear scale The scale increases with distance from the center. If a standard parallel is chosen rather than one of the poles, this latitude represents the true scale, and the scale nearer the pole is reduced.
Uses Polar regions (conformal). In the Universal Polar Stereographic (UPS) system, the scale factor at the pole is made 0.994, thus making the standard parallel (latitude of true scale) approximately 81°07’N or S.
The Polar Stereographic may be used to accommodate all regions not included in the UTM
coordinate system, that is, regions north of 84°N and south of 80°S. This form is called Universal Polar
Stereographic (UPS). The projection is equivalent to the polar aspect of the Stereographic
projection on a spheroid. The central point is either the North Pole or the South Pole. Of all the
polar aspect planar projections, this is the only one that is conformal.
The point of tangency is a single point—either the North Pole or the South Pole. If the plane is
secant instead of tangent, the point of global contact is a line of latitude (Environmental Systems
Research Institute, 1992).
Polar Stereographic is an azimuthal projection obtained by projecting from the opposite pole
(Figure B-26). All of either the northern or southern hemispheres can be shown, but not both.
This projection produces a circular map with one of the poles at the center.
Polar Stereographic stretches areas toward the periphery, and scale increases for areas farther
from the central pole. Meridians are straight and radiating; parallels are concentric circles. Even
though scale and area are not constant with Polar Stereographic, this projection, like all
stereographic projections, possesses the property of conformality.
The Astrogeology Center of the Geological Survey at Flagstaff, Arizona, has been using the
Polar Stereographic projection for the mapping of polar areas of every planet and satellite for
which there is sufficient information.
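For the north polar aspect on a sphere, the forward equations take the form sketched below; k0 is the scale factor at the pole (0.994 in the UPS system, as noted above), and the radius is an assumed mean Earth radius. This is illustrative only; polar mapping normally uses the ellipsoidal form.

import math

def polar_stereographic_north_forward(lon_deg, lat_deg, lon0_deg=0.0, k0=0.994, radius=6371000.0):
    # North polar Stereographic (sphere): rho = 2*R*k0*tan(pi/4 - lat/2),
    # x = rho*sin(lon - lon0), y = -rho*cos(lon - lon0).
    lon, lat, lon0 = (math.radians(v) for v in (lon_deg, lat_deg, lon0_deg))
    rho = 2.0 * radius * k0 * math.tan(math.pi / 4.0 - lat / 2.0)
    return rho * math.sin(lon - lon0), -rho * math.cos(lon - lon0)

# The pole itself maps to (0, 0); scale grows with distance from the pole.
print(polar_stereographic_north_forward(45.0, 85.0))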
Prompts
The following prompts display in the Projection Chooser if Polar Stereographic is selected.
Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Define the origin of the map projection in both spherical and rectangular coordinates. Ellipsoid
projections of the polar regions normally use the International 1909 spheroid (Environmental
Systems Research Institute, 1992).
Longitude below pole of map
Enter a value for longitude directed straight down below the pole for a north polar aspect, or
straight up from the pole for a south polar aspect. This is equivalent to centering the map with
a desired meridian.
Latitude of true scale
Enter a value for latitude at which true scale is desired. For secant projections, specify the
latitude of true scale as any line of latitude other than 90°N or S. For tangential projections,
specify the latitude of true scale as the North Pole, 90 00 00, or the South Pole, -90 00 00
(Environmental Systems Research Institute, 1992).
False easting
False northing
Enter values of false easting and false northing corresponding to the pole. These values must be
in meters. It is often convenient to make them large enough to prevent negative coordinates
within the region of the map projection. That is, the origin of the rectangular coordinate system
should fall outside of the map projection to the south and west. This projection is conformal and is the most scientific projection for polar regions.
Polyconic
Construction Cone
Property Compromise
Meridians The central meridian is a straight line, but all other meridians are complex curves.
Parallels Parallels (except the Equator) are nonconcentric circular arcs. The Equator is a straight line.
Graticule spacing All parallels are arcs of circles, but not concentric. All meridians, except the central meridian, are concave toward the central meridian. Parallels cross the central meridian at equal intervals but get farther apart at the east and west peripheries.
Linear scale The scale along each parallel and along the central meridian of the projection is accurate. It increases along the meridians as the distance from the central meridian increases (Environmental Systems Research Institute, 1992).
Uses Used for 7.5-minute and 15-minute topographic USGS quad sheets, from 1886 to about 1957 (Environmental Systems Research Institute, 1992). Used almost exclusively in slightly modified form for large-scale mapping in the United States until the 1950s.
Polyconic was developed in 1820 by Ferdinand Hassler specifically for mapping the eastern
coast of the US (Figure B-27). Polyconic projections are made up of an infinite number of conic
projections tangent to an infinite number of parallels. These conic projections are placed in
relation to a central meridian. Polyconic projections compromise properties such as equal-area
and conformality, although the central meridian is held true to scale.
This projection is used mostly for north-south oriented maps. Distortion increases greatly the
farther east and west an area is from the central meridian.
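The construction from an infinite number of tangent cones leads, for a sphere, to the forward equations sketched below; lat0 is the latitude of the origin of the projection, the radius is an assumed mean Earth radius, and the example is illustrative only.

import math

def polyconic_forward(lon_deg, lat_deg, lon0_deg, lat0_deg, radius=6371000.0):
    # Polyconic (sphere). The Equator (lat = 0) is a special, straight-line case.
    lon, lat, lon0, lat0 = (math.radians(v) for v in (lon_deg, lat_deg, lon0_deg, lat0_deg))
    if lat == 0.0:
        return radius * (lon - lon0), -radius * lat0
    e = (lon - lon0) * math.sin(lat)
    cot = 1.0 / math.tan(lat)
    x = radius * cot * math.sin(e)
    y = radius * (lat - lat0 + cot * (1.0 - math.cos(e)))
    return x, y

print(polyconic_forward(-96.0, 40.0, -100.0, 30.0))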
Prompts
The following prompts display in the Projection Chooser if Polyconic is selected. Respond to
the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Define the origin of the map projection in both spherical and rectangular coordinates.
Longitude of central meridian
Latitude of origin of projection
Enter values for longitude of the desired central meridian and latitude of the origin of projection.
False easting at central meridian
False northing at origin
Enter values of false easting and false northing corresponding to the intersection of the central
meridian and the latitude of the origin of projection. These values must be in meters. It is often
convenient to make them large enough so that no negative coordinates occur within the region
of the map projection. That is, the origin of the rectangular coordinate system should fall outside
of the map projection to the south and west.
In Figure B-27, the central meridian is 100°W. This projection is used by the USGS for
topographic quadrangle maps.
Quartic Authalic
Construction Pseudocylindrical
Property Equal-area
Meridians The central meridian is a straight line, and is 0.45 times as long as the Equator. The other meridians are equally spaced curves that fit a “fourth-order (quartic) equation” and are concave toward the central meridian (Snyder and Voxland, 1989).
Parallels Parallels are straight parallel lines that are unequally spaced. The parallels are spaced farthest apart near the Equator. Parallel spacing changes slowly, and parallels are perpendicular to the central meridian.
Graticule spacing See Meridians and Parallels. Poles are points. Symmetry exists about the
central meridian or the Equator.
Linear scale Scale is accurate along the Equator. Scale is constant along each latitude,
and is the same for the latitude of opposite sign.
Uses The McBryde-Thomas Flat-Polar Quartic projection uses Quartic
Authalic as its base (Snyder and Voxland, 1989). Used for world maps.
Outer meridians at high latitudes have great distortion. If the Quartic Authalic projection is
interrupted, distortion can be reduced.
Source: Snyder and Voxland, 1989
Prompts
The following prompts display in the Projection Chooser once Quartic Authalic is selected.
Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Enter values of false easting and false northing corresponding to the desired center of the
projection. These values must be in meters. It is often convenient to make them large enough so
that no negative coordinates occur within the region of the map projection. That is, the origin of
the rectangular coordinate system should fall outside of the map projection to the south and
west.
Robinson
Construction Pseudocylinder
Meridians Meridians are equally spaced, concave toward the central meridian, and resemble elliptical arcs (Environmental Systems Research Institute, 1997).
Parallels Parallels are equally spaced straight lines between 38° N and S.
Graticule spacing The central meridian and all parallels are linear.
Linear scale Scale is true along latitudes 38° N and S. Scale is constant along any
specific latitude, and for the latitude of opposite sign.
Uses Useful for thematic and common world maps.
According to ESRI, the Robinson “central meridian is a straight line 0.51 times the length of the
Equator. Parallels are equally spaced straight lines between 38° N and S; spacing decreases
beyond these limits. The poles are 0.53 times the length of the Equator. The projection is based
upon tabular coordinates instead of mathematical formulas” (Environmental Systems Research
Institute, 1997).
This projection has been used both by Rand McNally and the National Geographic Society.
Source: Environmental Systems Research Institute, 1997
Prompts
The following prompts display in the Projection Chooser once Robinson is selected. Respond
to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Enter values of false easting and false northing corresponding to the desired center of the
projection. These values must be in meters. It is often convenient to make them large enough so
that no negative coordinates occur within the region of the map projection. That is, the origin of
the rectangular coordinate system should fall outside of the map projection to the south and
west.
RSO
Construction Cylinder
Property Conformal
Parallels N/A
Linear scale “A line of true scale is drawn at an angle to the central meridian”
(Environmental Systems Research Institute, 1997).
Uses This projection should be used to map areas of Brunei and Malaysia.
The acronym RSO stands for Rectified Skewed Orthomorphic. This projection is used to map
areas of Brunei and Malaysia, and is each country’s national projection.
Source: Environmental Systems Research Institute, 1997
Prompts
The following prompts display in the Projection Chooser once RSO is selected. Respond to the
prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
False easting
False northing
Enter values of false easting and false northing corresponding to the desired center of the
projection. These values must be in meters. It is often convenient to make them large enough so
that no negative coordinates occur within the region of the map projection. That is, the origin of
the rectangular coordinate system should fall outside of the map projection to the south and
west.
RSO Type
Select the RSO Type. You can choose from Borneo or Malaysia.
Sinusoidal
Construction Pseudocylinder
Property Equal-area
Linear scale Linear scale is true on the parallels and the central meridian.
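The Sinusoidal projection has one of the simplest equal-area constructions; for a sphere, the forward equations can be sketched as below. This is an illustration with an assumed mean Earth radius, not the software's computation.

import math

def sinusoidal_forward(lon_deg, lat_deg, lon0_deg=0.0, radius=6371000.0):
    # Sinusoidal (sphere): x = R*(lon - lon0)*cos(lat); y = R*lat.
    lon, lat, lon0 = (math.radians(v) for v in (lon_deg, lat_deg, lon0_deg))
    return radius * (lon - lon0) * math.cos(lat), radius * lat

# Meridians converge toward the poles because x shrinks with cos(lat).
print(sinusoidal_forward(30.0, 0.0))
print(sinusoidal_forward(30.0, 60.0))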
Prompts
The following prompts display in the Projection Chooser if Sinusoidal is selected. Respond to
the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Space Oblique
Mercator
Construction Cylinder
Property Conformal
Meridians All meridians are curved lines except for the meridian crossed by the groundtrack at each polar approach.
The Space Oblique Mercator (SOM) projection is nearly conformal and has little scale
distortion within the sensing range of an orbiting mapping satellite such as Landsat. It is the first
projection to incorporate the Earth’s rotation with respect to the orbiting satellite.
The method of projection used is the modified cylindrical, for which the central line is curved and defined by the groundtrack of the orbit of the satellite. The line of tangency is conceptual and there are no graticules.
The SOM projection is defined by USGS. According to USGS, the X axis passes through the
descending node for each daytime scene. The Y axis is perpendicular to the X axis, to form a
Cartesian coordinate system. The direction of the X axis in a daytime Landsat scene is in the
direction of the satellite motion—south. The Y axis is directed east. For SOM projections used
by EOSAT, the axes are switched; the X axis is directed east and the Y axis is directed south.
The SOM projection is specifically designed to minimize distortion within sensing range of a
mapping satellite as it orbits the Earth. It can be used for the rectification of, and continuous
mapping from, satellite imagery. It is the standard format for data from Landsats 4 and 5. Plots
for adjacent paths do not match without transformation (Environmental Systems Research
Institute, 1991).
Prompts
The following prompts display in the Projection Chooser if Space Oblique Mercator is selected.
Respond to the prompts as described.
Spheroid Name
Datum Name
Landsat vehicle ID (1-5)
Specify whether the data are from Landsat 1, 2, 3, 4, or 5.
Orbital path number (1-251 or 1-233)
For Landsats 1, 2, and 3, the path range is from 1 to 251. For Landsats 4 and 5, the path range
is from 1 to 233.
False easting
False northing
Enter values of false easting and false northing corresponding to the desired center of the
projection. These values must be in meters. It is often convenient to make them large enough to
prevent negative coordinates within the region of the map projection. That is, the origin of the
rectangular coordinate system should fall outside of the map projection to the south and west.
Space Oblique Mercator (Formats A & B)
The Space Oblique Mercator (Formats A & B) projection is similar to the Space Oblique Mercator projection.
Prompts
The following prompts display in the Projection Chooser once Space Oblique Mercator
(Formats A & B) is selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
False easting
False northing
Enter values of false easting and false northing corresponding to the desired center of the
projection. These values must be in meters. It is often convenient to make them large enough so
that no negative coordinates occur within the region of the map projection. That is, the origin of
the rectangular coordinate system should fall outside of the map projection to the south and
west.
Format A (Generic Satellite)
Inclination of orbit at ascending node
Period of satellite revolution in minutes
Longitude of ascending orbit at equator
Landsat path flag
If you select Format A of the Space Oblique Mercator projection, you need to supply the
information listed above.
Format B (Landsat)
Landsat vehicle ID (1-5)
Specify whether the data are from Landsat 1, 2, 3, 4, or 5.
Path number (1-251 or 1-233)
For Landsats 1, 2, and 3, the path range is from 1 to 251. For Landsats 4 and 5, the path range
is from 1 to 233.
State Plane
The State Plane is an X,Y coordinate system (not a map projection); its zones divide the US into over 130 sections, each with its own projection surface and grid network (Figure B-32). With
the exception of very narrow states, such as Delaware, New Jersey, and New Hampshire, most
states are divided into between two and ten zones. The Lambert Conformal projection is used
for zones extending mostly in an east-west direction. The Transverse Mercator projection is
used for zones extending mostly in a north-south direction. Alaska, Florida, and New York use
either Transverse Mercator or Lambert Conformal for different areas. The panhandle of Alaska is prepared on the Oblique Mercator projection.
Zone boundaries follow state and county lines, and, because each zone is small, distortion is less
than one in 10,000. Each zone has a centrally located origin and a central meridian that passes
through this origin. Two zone numbering systems are currently in use—the USGS code system
and the National Ocean Service (NOS) code system (Table B-37, “NAD27 State Plane
Coordinate System for the United States,” on page 552 and Table B-38, “NAD83 State Plane
Coordinate System for the United States,” on page 557), but other numbering systems exist.
Prompts
The following prompts appear in the Projection Chooser if State Plane is selected. Respond to
the prompts as described.
State Plane Zone
Enter either the USGS zone code number as a positive value, or the NOS zone code number as
a negative value.
NAD27 or NAD83 or HARN
Either North America Datum 1927 (NAD27), North America Datum 1983 (NAD83), or High
Accuracy Reference Network (HARN) may be used to perform the State Plane calculations.
• NAD27 is based on the Clarke 1866 spheroid.
• NAD83 and HARN are based on the GRS 1980 spheroid. Some zone numbers have been changed or deleted from NAD27.
Tables for both NAD27 and NAD83 zone numbers follow (Table B-37, “NAD27 State Plane
Coordinate System for the United States,” on page 552 and Table B-38, “NAD83 State Plane
Coordinate System for the United States,” on page 557). These tables include both USGS and
NOS code systems.
The following abbreviations are used in Table B-37, “NAD27 State Plane Coordinate System
for the United States,” on page 552 and Table B-38, “NAD83 State Plane Coordinate System
for the United States,” on page 557:
Tr Merc = Transverse Mercator
Lambert = Lambert Conformal Conic
Oblique = Oblique Mercator (Hotine)
Polycon = Polyconic
Table B-37: NAD27 State Plane Coordinate System for the United States
Table B-38: NAD83 State Plane Coordinate System for the United States
Stereographic
Construction Plane
Property Conformal
Polar aspect: the meridians are straight lines radiating from the point of
tangency.
Meridians Oblique and equatorial aspects: the meridians are arcs of circles concave
toward a straight central meridian. In the equatorial aspect, the outer
meridian of the hemisphere is a circle centered at the projection center.
Polar aspect: the parallels are concentric circles.
Stereographic is a perspective projection in which points are projected from a position on the
opposite side of the globe onto a plane tangent to the Earth (Figure B-33 on page 562). All of
one hemisphere can easily be shown, but it is impossible to show both hemispheres in their
entirety from one center. It is the only azimuthal projection that preserves truth of angles and
local shape. Scale increases and parallels become more widely spaced farther from the center.
In the equatorial aspect, all parallels except the Equator are circular arcs. In the polar aspect,
latitude rings are spaced farther apart, with increasing distance from the pole.
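For a sphere, the oblique and equatorial forward equations can be sketched as below; lon0 and lat0 are the assumed center of projection, k0 an optional scale factor at the center, and the radius an assumed mean Earth radius. This is an illustration only.

import math

def stereographic_forward(lon_deg, lat_deg, lon0_deg, lat0_deg, k0=1.0, radius=6371000.0):
    # Stereographic (sphere), centered at (lon0, lat0).
    lon, lat, lon0, lat0 = (math.radians(v) for v in (lon_deg, lat_deg, lon0_deg, lat0_deg))
    k = 2.0 * k0 / (1.0 + math.sin(lat0) * math.sin(lat)
                    + math.cos(lat0) * math.cos(lat) * math.cos(lon - lon0))
    x = radius * k * math.cos(lat) * math.sin(lon - lon0)
    y = radius * k * (math.cos(lat0) * math.sin(lat) - math.sin(lat0) * math.cos(lat) * math.cos(lon - lon0))
    return x, y

print(stereographic_forward(10.0, 50.0, 0.0, 40.0))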
Prompts
The following prompts display in the Projection Chooser if Stereographic is selected. Respond
to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Define the center of the map projection in both spherical and rectangular coordinates.
Longitude of center of projection
Latitude of center of projection
Enter values for the longitude and latitude of the desired center of the projection.
False easting
False northing
Enter values of false easting and false northing corresponding to the center of the projection.
These values must be in meters. It is often convenient to make them large enough so that no
negative coordinates occur within the region of the map projection. That is, the origin of the
rectangular coordinate system should fall outside of the map projection to the south and west.
The Stereographic is the only azimuthal projection which is conformal. Figure B-33 shows two
views: A) Equatorial aspect, often used in the 16th and 17th centuries for maps of hemispheres;
and B) Oblique aspect, centered on 40°N.
Stereographic (Extended)
The Stereographic (Extended) projection has the same attributes as the Stereographic projection, with the exception of the ability to define scale factors.
Prompts
The following prompts display in the Projection Chooser once Stereographic (Extended) is
selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Scale factor
Designate the desired scale factor. This parameter is used to modify scale distortion. A value of
one indicates true scale only along the central meridian. It may be desirable to have true scale
along two lines equidistant from and parallel to the central meridian, or to lessen scale distortion
away from the central meridian. A factor of less than, but close to one is often used.
Longitude of origin of projection
Latitude of origin of projection
Enter the values for longitude of origin of projection and latitude of origin of projection.
False easting
False northing
Enter values of false easting and false northing corresponding to the desired center of the
projection. These values must be in meters. It is often convenient to make them large enough so
that no negative coordinates occur within the region of the map projection. That is, the origin of
the rectangular coordinate system should fall outside of the map projection to the south and
west.
Transverse Mercator
Construction Cylinder
Property Conformal
Transverse Mercator is similar to the Mercator projection except that the axis of the projection
cylinder is rotated 90° from the vertical (polar) axis. The contact line is then a chosen meridian
instead of the Equator, and this central meridian runs from pole to pole. It loses the properties
of straight meridians and straight parallels of the standard Mercator projection (except for the
central meridian, the two meridians 90° away, and the Equator).
Transverse Mercator also loses the straight rhumb lines of the Mercator map, but it is a
conformal projection. Scale is true along the central meridian or along two straight lines
equidistant from, and parallel to, the central meridian. It cannot be edge-joined in an east-west
direction if each sheet has its own central meridian.
In the United States, Transverse Mercator is the projection used in the State Plane coordinate
system for states with predominant north-south extent. The entire Earth from 84°N to 80°S is
mapped with a system of projections called the Universal Transverse Mercator.
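For a sphere, the forward equations of the Transverse Mercator can be sketched as below; k0 is the scale factor at the central meridian (the UTM system, described later, uses 0.9996), lat0 is the latitude of the origin, and the radius is an assumed mean Earth radius. The ellipsoidal series actually used for mapping is considerably longer, so this is illustrative only.

import math

def transverse_mercator_forward(lon_deg, lat_deg, lon0_deg, lat0_deg=0.0, k0=0.9996, radius=6371000.0):
    # Transverse Mercator (sphere), central meridian lon0.
    lon, lat, lon0, lat0 = (math.radians(v) for v in (lon_deg, lat_deg, lon0_deg, lat0_deg))
    b = math.cos(lat) * math.sin(lon - lon0)
    x = 0.5 * radius * k0 * math.log((1.0 + b) / (1.0 - b))
    y = radius * k0 * (math.atan2(math.tan(lat), math.cos(lon - lon0)) - lat0)
    return x, y

print(transverse_mercator_forward(-87.0, 42.0, -87.0))  # on the central meridian, x = 0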
Prompts
The following prompts display in the Projection Chooser if Transverse Mercator is selected.
Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Designate the desired scale factor at the central meridian. This parameter is used to modify scale
distortion. A value of one indicates true scale only along the central meridian. It may be
desirable to have true scale along two lines equidistant from and parallel to the central meridian,
or to lessen scale distortion away from the central meridian. A factor of less than, but close to
one is often used.
Finally, define the origin of the map projection in both spherical and rectangular coordinates.
Longitude of central meridian
Latitude of origin of projection
Enter values for longitude of the desired central meridian and latitude of the origin of projection.
False easting
False northing
Enter values of false easting and false northing corresponding to the intersection of the central
meridian and the latitude of the origin of projection. These values must be in meters. It is often
convenient to make them large enough so that there are no negative coordinates within the
region of the map projection. That is, origin of the rectangular coordinate system should fall
outside of the map projection to the south and west.
Two Point Equidistant
Property Compromise
Meridians N/A
Parallels N/A
Uses The Two Point Equidistant projection “does not represent great circle paths” (Environmental Systems Research Institute, 1997). There is little distortion if the two chosen points are within 45 degrees of each other.
The Two Point Equidistant projection is used to show the distance from “either of two chosen
points to any other point on a map” (Environmental Systems Research Institute, 1997). Note
that the first point has to be west of the second point. This projection has been used by the
National Geographic Society to map areas of Asia.
Source: Environmental Systems Research Institute, 1997
Prompts
The following prompts display in the Projection Chooser once Two Point Equidistant is
selected. Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
False easting
False northing
Enter values of false easting and false northing corresponding to the desired center of the
projection. These values must be in meters. It is often convenient to make them large enough so
that no negative coordinates occur within the region of the map projection. That is, the origin of
the rectangular coordinate system should fall outside of the map projection to the south and
west.
Longitude of 1st point
Latitude of 1st point
Enter the longitude and latitude values of the first point.
Longitude of 2nd point
Latitude of 2nd point
UTM
UTM is an international plane (rectangular) coordinate system developed by the US Army that extends around the world from 84°N to 80°S. The world is divided into 60 zones each covering
six degrees longitude. Each zone extends three degrees eastward and three degrees westward
from its central meridian. Zones are numbered consecutively west to east from the 180°
meridian (Figure B-35, Table B-42, “UTM Zones, Central Meridians, and Longitude Ranges,”
on page 569).
The Transverse Mercator projection is then applied to each UTM zone. Transverse Mercator is
a transverse form of the Mercator cylindrical projection. The projection cylinder is rotated 90°
from the vertical (polar) axis and can then be placed to intersect at a chosen central meridian.
The UTM system specifies the central meridian of each zone. With a separate projection for
each UTM zone, a high degree of accuracy is possible (one part in 1000 maximum distortion
within each zone). If the map to be projected extends beyond the border of the UTM zone, the entire map may be projected for any UTM zone that you specify.
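The relationship between longitude, zone number, and central meridian described above can be expressed directly; the helper below is an illustration of that bookkeeping, not an ERDAS IMAGINE function.

def utm_zone_and_central_meridian(lon_deg):
    # Zones are numbered 1-60 west to east starting at 180 degrees W,
    # each spanning 6 degrees of longitude.
    zone = int((lon_deg + 180.0) // 6) + 1
    zone = min(max(zone, 1), 60)           # keep lon = 180 in zone 60
    central_meridian = 6 * zone - 183      # degrees east (negative = west)
    return zone, central_meridian

print(utm_zone_and_central_meridian(-87.6))  # zone 16, central meridian 87W
print(utm_zone_and_central_meridian(3.0))    # zone 31, central meridian 3E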
Prompts
The following prompts display in the Projection Chooser if UTM is chosen.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
UTM Zone
North or South
Figure B-35: Zones of the Universal Transverse Mercator Grid in the United States
All values in Table B-42 are in full degrees east (E) or west (W) of the Greenwich prime
meridian (0).
Table B-42: UTM Zones, Central Meridians, and Longitude Ranges
Zone  Central Meridian  Range            Zone  Central Meridian  Range
1 177W 180W-174W 31 3E 0-6E
2 171W 174W-168W 32 9E 6E-12E
3 165W 168W-162W 33 15E 12E-18E
4 159W 162W-156W 34 21E 18E-24E
5 153W 156W-150W 35 27E 24E-30E
6 147W 150W-144W 36 33E 30E-36E
7 141W 144W-138W 37 39E 36E-42E
8 135W 138W-132W 38 45E 42E-48E
9 129W 132W-126W 39 51E 48E-54E
10 123W 126W-120W 40 57E 54E-60E
11 117W 120W-114W 41 63E 60E-66E
12 111W 114W-108W 42 69E 66E-72E
13 105W 108W-102W 43 75E 72E-78E
14 99W 102W-96W 44 81E 78E-84E
15 93W 96W-90W 45 87E 84E-90E
16 87W 90W-84W 46 93E 90E-96E
17 81W 84W-78W 47 99E 96E-102E
18 75W 78W-72W 48 105E 102E-108E
19 69W 72W-66W 49 111E 108E-114E
20 63W 66W-60W 50 117E 114E-120E
21 57W 60W-54W 51 123E 120E-126E
22 51W 54W-48W 52 129E 126E-132E
23 45W 48W-42W 53 135E 132E-138E
24 39W 42W-36W 54 141E 138E-144E
25 33W 36W-30W 55 147E 144E-150E
26 27W 30W-24W 56 153E 150E-156E
27 21W 24W-18W 57 159E 156E-162E
28 15W 18W-12W 58 165E 162E-168E
29 9W 12W-6W 59 171E 168E-174E
30 3W 6W-0 60 177E 174E-180E
Van der Grinten I
Construction Miscellaneous
Property Compromise
Meridians Meridians are circular arcs concave toward a straight central meridian.
Parallels Parallels are circular arcs concave toward the poles, except for a straight Equator.
Graticule spacing Meridian spacing is equal at the Equator. The parallels are spaced farther apart toward the poles. The central meridian and Equator are straight lines. The poles are commonly not represented. The graticule spacing results in a compromise of all properties.
Linear scale Linear scale is true along the Equator. Scale increases rapidly toward the poles.
Uses The Van der Grinten projection is used by the National Geographic Society for world maps. Used by the USGS to show distribution of mineral resources on the sea floor.
The Van der Grinten I projection produces a map that is neither conformal nor equal-area
(Figure B-36 on page 571). It compromises all properties, and represents the Earth within a
circle.
All lines are curved except the central meridian and the Equator. Parallels are spaced farther
apart toward the poles. Meridian spacing is equal at the Equator. Scale is true along the Equator,
but increases rapidly toward the poles, which are usually not represented.
Van der Grinten I avoids the excessive stretching of the Mercator and the shape distortion of
many of the equal-area projections. It has been used to show distribution of mineral resources
on the ocean floor.
Prompts
The following prompts display in the Projection Chooser if Van der Grinten I is selected.
Respond to the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Enter values of false easting and false northing corresponding to the center of the projection.
These values must be in meters. It is often convenient to make them large enough to prevent
negative coordinates within the region of the map projection. That is, the origin of the
rectangular coordinate system should fall outside of the map projection to the south and west.
The Van der Grinten I projection resembles the Mercator, but it is not conformal.
Wagner IV
Construction Pseudocylinder
Property Equal-area
Meridians The central meridian is a straight line one half as long as the Equator. The
other meridians are portions of ellipses that are equally spaced. They are
concave towards the central meridian. The meridians at 103° 55’ E and W
of the central meridian are circular arcs.
Parallels Parallels are unequally spaced. Parallels have the widest space between
them at the Equator, and are perpendicular to the central meridian.
Graticule spacing See Meridians and Parallels. Poles are lines one half as long as the
Equator. Symmetry exists around the central meridian or the Equator.
Linear scale Scale is accurate along latitudes 42° 59’ N and S. Scale is constant along
any specific latitude as well as the latitude of opposite sign.
Uses Useful for world maps.
Prompts
The following prompts display in the Projection Chooser if Wagner IV is selected. Respond to
the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Enter values of false easting and false northing corresponding to the center of the projection.
These values must be in meters. It is often convenient to make them large enough to prevent
negative coordinates within the region of the map projection. That is, the origin of the
rectangular coordinate system should fall outside of the map projection to the south and west.
Wagner VII
Property Equal-area
Meridians Central meridian is straight and half the Equator’s length. Other meridians
are unequally spaced curves. They are concave toward the central
meridian.
Parallels The Equator is straight; the other parallels are unequally spaced curves, which are concave toward the closest pole.
Graticule spacing See Meridians and Parallels. Poles are curved lines. Symmetry exists
about the central meridian or the Equator.
Linear scale Scale decreases along the central meridian and the Equator with distance from the center of the Wagner VII projection.
Uses Used for world maps.
Distortion is prevalent in polar areas. The Wagner VII projection is modified based on the
Hammer projection. “The poles correspond to the 65th parallels on the Hammer [projection],
and meridians are repositioned” (Snyder and Voxland, 1989).
Source: Snyder and Voxland, 1989
Prompts
The following prompts display in the Projection Chooser if Wagner VII is selected. Respond to
the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Enter values of false easting and false northing corresponding to the center of the projection.
These values must be in meters. It is often convenient to make them large enough to prevent
negative coordinates within the region of the map projection. That is, the origin of the
rectangular coordinate system should fall outside of the map projection to the south and west.
Winkel I
Construction Pseudocylinder
Meridians Central meridian is a straight line 0.61 the length of the Equator. The
other meridians are sinusoidal curves that are equally spaced and concave
toward the central meridian.
Parallels Parallels are equally spaced.
Graticule spacing See Meridians and Parallels. Pole lines are 0.61 the length of the Equator.
Symmetry exists about the central meridian or the Equator.
Linear scale Scale is true along latitudes 50° 28’ N and S. Scale is constant along any
given latitude as well as the latitude of the opposite sign.
Uses Used for world maps.
The Winkel I projection is “not free of distortion at any point” (Snyder and Voxland, 1989).
Source: Snyder and Voxland, 1989
Prompts
The following prompts display in the Projection Chooser once Winkel I is selected. Respond to
the prompts as described.
Spheroid Name
Datum Name
Select the spheroid and datum to use.
Enter values of false easting and false northing corresponding to the desired center of the
projection. These values must be in meters. It is often convenient to make them large enough so
that no negative coordinates occur within the region of the map projection. That is, the origin of
the rectangular coordinate system should fall outside of the map projection to the south and
west.
External Projections
The following external projections are supported in ERDAS IMAGINE and are described in this section. Some of these projections were discussed in the previous section. Those descriptions are not repeated here. Simply refer to the page number in parentheses for more information.
NOTE: ERDAS IMAGINE does not support datum shifts for these external projections.
• Cassini-Soldner
• Modified Polyconic
• Modified Stereographic
• Swiss Cylindrical
• Winkel’s Tripel
Bipolar Oblique Conic Conformal
Construction Cone
Property Conformal
Parallels Parallels are complex curves concave toward the nearest pole.
Graticule spacing Graticule spacing increases away from the lines of true scale and retains the property of conformality.
Linear scale Linear scale is true along two lines that do not lie along any meridian or parallel. Scale is compressed between these lines and expanded beyond them. Linear scale is generally good, but there is as much as a 10% error at the edge of the projection as used.
Uses Used to represent one or both of the American continents. Examples are the Basement map of North America and the Tectonic map of North America.
The Bipolar Oblique Conic Conformal projection was developed by O.M. Miller and William
A. Briesemeister in 1941 specifically for mapping North and South America, and maintains
conformality for these regions. It is based upon the Lambert Conformal Conic, using two
oblique conic projections side-by-side. The two oblique conics are joined with the poles 104°
apart. A great circle arc 104° long begins at 20°S and 110°W, cuts through Central America,
and terminates at 45°N and approximately 19°59’36”W. The scale of the map is then increased
by approximately 3.5%. The origin of the coordinates is made 17°15’N, 73°02’W.
Prompts
The following prompts display in the Projection Chooser if Bipolar Oblique Conic Conformal
is selected.
Projection Name
Spheroid Type
Datum Name
Cassini-Soldner
Construction Cylinder
Property Compromise
Meridians Central meridian, each meridian 90° from the central meridian, and the Equator are straight lines. Other meridians are complex curves.
Graticule spacing Complex curves for all meridians and parallels, except for the Equator, the central meridian, and each meridian 90° away from the central meridian, all of which are straight.
Linear scale Scale is true along the central meridian, and along lines perpendicular to the central meridian. Scale is constant but not true along lines parallel to the central meridian on the spherical form, and nearly so for the ellipsoid.
Uses Used for topographic mapping, formerly in England and currently in a few other countries, such as Denmark, Germany, and Malaysia.
The Cassini projection was devised by C. F. Cassini de Thury in 1745 for the survey of France.
Mathematical analysis by J. G. von Soldner in the early 19th century led to more accurate
ellipsoidal formulas. Today, it has largely been replaced by the Transverse Mercator projection,
although it is still in limited use outside of the United States. It was one of the major topographic
mapping projections until the early 20th century.
The spherical form of the projection bears the same relation to the Equidistant Cylindrical, or
Plate Carrée, projection that the spherical Transverse Mercator bears to the regular Mercator.
Instead of having the straight meridians and parallels of the Equidistant Cylindrical, the Cassini
has complex curves for each, except for the Equator, the central meridian, and each meridian
90° away from the central meridian, all of which are straight.
There is no distortion along the central meridian if it is maintained at true scale, which is the usual case. If it is given a reduced scale factor, the lines of true scale are two straight lines on the map, parallel to, and equidistant from, the central meridian; in that case, there is no distortion along those two lines instead.
The scale is correct along the central meridian, and also along any straight line perpendicular to
the central meridian. It gradually increases in a direction parallel to the central meridian as the
distance from that meridian increases, but the scale is constant along any straight line on the map
that is parallel to the central meridian. Therefore, Cassini-Soldner is more suitable for regions
that are predominantly north-south in extent, such as Great Britain, than regions extending in
other directions. The projection is neither equal-area nor conformal, but it has a compromise of
both features.
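For a sphere, the forward equations can be sketched as below; lon0 and lat0 are the assumed central meridian and latitude of origin, the radius is an assumed mean Earth radius, and the example is illustrative only.

import math

def cassini_forward(lon_deg, lat_deg, lon0_deg, lat0_deg, radius=6371000.0):
    # Cassini-Soldner (sphere): x grows perpendicular to the central meridian,
    # y along it, which is why scale is true along lines perpendicular to that meridian.
    lon, lat, lon0, lat0 = (math.radians(v) for v in (lon_deg, lat_deg, lon0_deg, lat0_deg))
    b = math.cos(lat) * math.sin(lon - lon0)
    x = radius * math.asin(b)
    y = radius * (math.atan2(math.tan(lat), math.cos(lon - lon0)) - lat0)
    return x, y

print(cassini_forward(-1.0, 52.0, -2.0, 49.0))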
The Cassini-Soldner projection was adopted by the Ordnance Survey for the official survey of
Great Britain during the second half of the 19th century. A system equivalent to the oblique
Cassini-Soldner projection was used in early coordinate transformations for ERTS (now
Landsat) satellite imagery, but it was changed to Oblique Mercator (Hotine) in 1978, and to the
Space Oblique Mercator in 1982.
Prompts
The following prompts display in the Projection Chooser if Cassini-Soldner is selected.
Projection Name
Spheroid Type
Datum Name
Laborde Oblique Mercator
In 1928, Laborde combined a conformal sphere with a complex-algebra transformation of the Oblique Mercator projection for the topographic mapping of Madagascar. This variation is now known as the Laborde Oblique Mercator. The central line is a great circle arc.
Prompts
The following prompts display in the Projection Chooser if Laborde Oblique Mercator is
selected.
Projection Name
Spheroid Type
Datum Name
Minimum Error Conformal
The Minimum Error Conformal projection is the same as the New Zealand Map Grid projection.
Modified Polyconic
Construction Cone
Property Compromise
Parallels Parallels are circular arcs. The top and bottom parallels of each sheet are nonconcentric circular arcs.
Graticule spacing The top and bottom parallels of each sheet are nonconcentric circular arcs. The two parallels are spaced from each other according to the true scale along the central meridian, which is slightly reduced.
Linear scale Scale is true along each parallel and along two meridians, but no parallel is standard.
Uses Used for the International Map of the World (IMW) series until 1962.
The Modified Polyconic projection was devised by Lallemand of France, and in 1909 it was
adopted by the International Map Committee (IMC) in London as the basis for the 1:1,000,000-
scale International Map of the World (IMW) series.
The projection differs from the ordinary Polyconic in two principal features: all meridians are
straight, and there are two meridians that are made true to scale. Adjacent sheets fit together
exactly not only north to south, but east to west. There is still a gap when mosaicking in all directions: a gap remains between each diagonal sheet and one or the other of its adjacent sheets.
In 1962, a U.N. conference on the IMW adopted the Lambert Conformal Conic and the Polar
Stereographic projections to replace the Modified Polyconic.
Prompts
The following prompts display in the Projection Chooser if Modified Polyconic is selected.
Projection Name
Spheroid Type
Datum Name
Modified Stereographic
Construction: Plane
Property: Conformal
Graticule spacing: The graticule is normally not symmetrical about any axis or point.
Linear scale: Scale is true along irregular lines, but the map is usually designed to minimize scale variation throughout a selected region.
Uses: Used for maps of continents in the Eastern Hemisphere, for the Pacific Ocean, and for maps of Alaska and the 50 United States.
The meridians and parallels of the Modified Stereographic projection are generally curved, and
there is usually no symmetry about any point or line. There are limitations to these
transformations. Most of them can only be used within a limited range. As the distance from the
projection center increases, the meridians, parallels, and shorelines begin to exhibit loops,
overlapping, and other undesirable curves. A world map using the GS50 (50-State) projection
is almost illegible with meridians and parallels intertwined like wild vines.
Prompts
The following prompts display in the Projection Chooser if Modified Stereographic is selected.
Projection Name
Spheroid Type
Datum Name
Mollweide Equal Area
Construction: Pseudocylinder
Property: Equal-area
Meridians: All of the meridians are ellipses. The central meridian is a straight line, and 90° meridians are circular arcs (Pearson, 1990).
Parallels: The Equator and parallels are straight lines perpendicular to the central meridian, but they are not equally spaced.
Graticule spacing: Linear graticules include the central meridian and the Equator (Environmental Systems Research Institute, 1992). Meridians are equally spaced along the Equator and along all other parallels. The parallels are straight parallel lines, but they are not equally spaced. The poles are points.
Linear scale: Scale is true along latitudes 40°44’N and S. Distortion increases with distance from these lines and becomes severe at the edges of the projection (Environmental Systems Research Institute, 1992).
Uses: Often used for world maps (Pearson, 1990). Suitable for thematic or distribution mapping of the entire world, frequently in interrupted form (Environmental Systems Research Institute, 1992).
The second oldest pseudocylindrical projection that is still in use (after the Sinusoidal) was
presented by Carl B. Mollweide (1774-1825) of Halle, Germany, in 1805. It is an equal-area
projection of the Earth within an ellipse. It has had a profound effect on world map projections
in the 20th century, especially as an inspiration for other important projections, such as the Van
der Grinten.
The Mollweide is normally used for world maps and occasionally for a very large region, such
as the Pacific Ocean. This is because only two points on the Mollweide are completely free of
distortion unless the projection is interrupted. These are the points at latitudes 40°44’12”N and
S on the central meridian(s).
The world is shown in an ellipse with the Equator, its major axis, twice as long as the central
meridian, its minor axis. The meridians 90° east and west of the central meridian form a
complete circle. All other meridians are elliptical arcs which, with their opposite numbers on
the other side of the central meridian, form complete ellipses that meet at the poles.
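As an illustration of how this geometry is computed, the following Python sketch (not the ERDAS IMAGINE implementation) evaluates the spherical Mollweide forward equations, iterating for the auxiliary angle θ that satisfies 2θ + sin 2θ = π sin φ:

import math

def mollweide_forward(lat, lon, lon0=0.0, radius=6370997.0, tol=1e-10):
    """Spherical Mollweide forward projection; all angles in radians."""
    if abs(lat) >= math.pi / 2.0 - 1e-12:
        theta = math.copysign(math.pi / 2.0, lat)    # the poles map to single points
    else:
        theta = lat
        # Newton-Raphson iteration for 2*theta + sin(2*theta) = pi*sin(lat)
        for _ in range(50):
            delta = -(2.0 * theta + math.sin(2.0 * theta) - math.pi * math.sin(lat)) / (
                2.0 + 2.0 * math.cos(2.0 * theta))
            theta += delta
            if abs(delta) < tol:
                break
    x = radius * (2.0 * math.sqrt(2.0) / math.pi) * (lon - lon0) * math.cos(theta)
    y = radius * math.sqrt(2.0) * math.sin(theta)
    return x, y

# The Equator maps to a line twice as long as the central meridian, as described above:
print(mollweide_forward(0.0, math.pi))           # half the Equator
print(mollweide_forward(math.pi / 2.0, 0.0))     # the North Pole on the central meridian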
Prompts
The following prompts display in the Projection Chooser if Mollweide Equal Area is selected.
Projection Name
Spheroid Type
Datum Name
Rectified Skew Orthomorphic
Martin Hotine (1898-1968) called the Oblique Mercator projection the Rectified Skew Orthomorphic projection.
Prompts
The following prompts display in the Projection Chooser if Rectified Skew Orthomorphic is
selected.
Projection Name
Spheroid Type
Datum Name
Robinson Pseudocylindrical
Construction: Pseudocylinder
Property: Compromise
Meridians: Meridians are elliptical arcs, equally spaced, and concave toward the central meridian.
Graticule spacing: Parallels are straight lines and are parallel. The individual parallels are evenly divided by the meridians (Pearson, 1990).
Linear scale: Generally, scale is made true along latitudes 38°N and S. Scale is constant along any given latitude, and for the latitude of opposite sign (Environmental Systems Research Institute, 1992).
Uses: Developed for use in general and thematic world maps. Used by Rand McNally since the 1960s and by the National Geographic Society since 1988 for general and thematic world maps (Environmental Systems Research Institute, 1992).
The Robinson Pseudocylindrical projection provides a means of showing the entire Earth in an
uninterrupted form. The continents appear as units and are in relatively correct size and location.
Poles are represented as lines.
Meridians are equally spaced and resemble elliptical arcs, concave toward the central meridian.
The central meridian is a straight line 0.51 times the length of the Equator. Parallels are equally
spaced straight lines between 38°N and S, and then the spacing decreases beyond these limits.
The poles are 0.53 times the length of the Equator. The projection is based upon tabular
coordinates instead of mathematical formulas (Environmental Systems Research Institute,
1992).
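Because the projection is tabular, implementations interpolate multipliers for each latitude from the published table rather than evaluating a formula. The Python sketch below only illustrates the interpolation idea; the two table rows are made-up placeholder values, not Robinson's published coefficients:

# Placeholder rows (NOT Robinson's published table): for each latitude, an X multiplier
# (parallel length relative to the Equator) and a Y multiplier (distance from the Equator).
TABLE = {
    35.0: (0.94, 0.43),
    40.0: (0.92, 0.50),
}

def interpolate(lat, table):
    """Linearly interpolate the (X, Y) multipliers for a latitude between table rows."""
    lats = sorted(table)
    lower = max(l for l in lats if l <= lat)
    upper = min(l for l in lats if l >= lat)
    if lower == upper:
        return table[lower]
    t = (lat - lower) / (upper - lower)
    (x0, y0), (x1, y1) = table[lower], table[upper]
    return (x0 + t * (x1 - x0), y0 + t * (y1 - y0))

print(interpolate(38.0, TABLE))   # multipliers for latitude 38 degrees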
Prompts
The following prompts display in the Projection Chooser if Robinson Pseudocylindrical is
selected.
Projection Name
Spheroid Type
Datum Name
Southern Orientated Gauss Conformal
Southern Orientated Gauss Conformal is another name for the Transverse Mercator projection, after mathematician Friedrich Gauss (1777-1855). It is also called the Gauss-Krüger projection.
Prompts
The following prompts display in the Projection Chooser if Southern Orientated Gauss
Conformal is selected.
Projection Name
Spheroid Type
Datum Name
Swiss Cylindrical
The Swiss Cylindrical projection, used by the Swiss Landestopographie, is a form of the Oblique Mercator projection.
Winkel’s Tripel
Meridians: Central meridian is straight. Other meridians are curved, are equally spaced along the Equator, and are concave toward the central meridian.
Parallels: Equidistant spacing of parallels. Equator and the poles are straight. Other parallels are curved and concave toward the nearest pole.
Graticule spacing: Symmetry is maintained along the central meridian or the Equator.
Linear scale: Scale is true along the central meridian and constant along the Equator.
Prompts
The following prompts display in the Projection Chooser if Winkel’s Tripel is selected.
Projection Name
Spheroid Type
Datum Name
Glossary
Numerics 2D—two-dimensional.
3D—three-dimensional.
B band—a set of data file values for a specific portion of the electromagnetic spectrum of
reflected light or emitted heat (red, green, blue, near-infrared, infrared, thermal, etc.), or
some other user-defined information created by combining or enhancing the original
bands, or creating new bands from other sources. Sometimes called channel.
banding—see striping.
base map—a map portraying background reference information onto which other information
is placed. Base maps usually show the location and extent of natural surface features and
permanent human-made features.
Basic Image Interchange Format—(BIIF) the basis for the NITFS format.
batch file—a file that is created in the Batch mode of ERDAS IMAGINE. All steps are recorded
for a later run. This file can be edited.
batch mode—a mode of operating ERDAS IMAGINE in which steps are recorded for later use.
bathymetric map—a map portraying the shape of a water body or reservoir using isobaths
(depth contours).
Bayesian—a variation of the maximum likelihood classifier, based on the Bayes Law of
probability. The Bayesian classifier allows the application of a priori weighting factors,
representing the probabilities that pixels are assigned to each class.
BIIF—see Basic Image Interchange Format.
BIL—band interleaved by line. A form of data storage in which each record in the file contains
a scan line (row) of data for one band. All bands of data for a given line are stored
consecutively within the file.
bilinear interpolation—a resampling method that uses the data file values of four pixels in a 2
× 2 window to calculate an output data file value by computing a weighted average of the
input data file values with a bilinear function.
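A minimal Python sketch of the weighted average described above (illustrative; not the ERDAS IMAGINE resampling code), where dx and dy are the fractional offsets of the output location within the 2 × 2 window:

def bilinear(window, dx, dy):
    """Bilinear interpolation of a 2 x 2 window of data file values.
    window = ((upper_left, upper_right), (lower_left, lower_right)); 0 <= dx, dy <= 1."""
    (ul, ur), (ll, lr) = window
    top = ul + dx * (ur - ul)          # interpolate along the top edge
    bottom = ll + dx * (lr - ll)       # interpolate along the bottom edge
    return top + dy * (bottom - top)   # interpolate between the two edges

print(bilinear(((10, 20), (30, 40)), 0.25, 0.5))   # 22.5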
bin function—a mathematical function that establishes the relationship between data file values
and rows in a descriptor table.
bins—ordered sets of pixels. Pixels are sorted into a specified number of bins. The pixels are
then given new values based upon the bins to which they are assigned.
BIP—band interleaved by pixel. A form of data storage in which the values for each band are
ordered within a given pixel. The pixels are arranged sequentially on the tape.
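The practical difference between BIL and BIP is how the offset of a single sample is computed. The Python sketch below assumes uncompressed data with no header and one byte per value (the function names are illustrative, not ERDAS IMAGINE routines):

def bil_offset(row, band, col, ncols, nbands):
    """Offset of a sample in band-interleaved-by-line order (band lines grouped within a row)."""
    return (row * nbands + band) * ncols + col

def bip_offset(row, band, col, ncols, nbands):
    """Offset of a sample in band-interleaved-by-pixel order (band values grouped within a pixel)."""
    return (row * ncols + col) * nbands + band

# The same sample of a 4-band, 1000-column image lands at different offsets in each format:
print(bil_offset(row=2, band=1, col=10, ncols=1000, nbands=4))   # 9010
print(bip_offset(row=2, band=1, col=10, ncols=1000, nbands=4))   # 8041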
bit—a binary digit, meaning a number that can have two possible values 0 and 1, or off and on.
A set of bits, however, can have many more values, depending upon the number of bits
used. The number of values that can be expressed by a set of bits is 2 to the power of the
number of bits used. For example, the number of values that can be expressed by 3 bits
is 8 (2³ = 8).
block of photographs—formed by the combined exposures of a flight. The block consists of a
number of parallel strips with a sidelap of 20-30%.
blocked—a method of storing data on 9-track tapes so that there are more logical records in each
physical record.
blocking factor—the number of logical records in each physical record. For instance, a record
may contain 28,000 bytes, but only 4,000 columns due to a blocking factor of 7.
book map—a map laid out like the pages of a book. Each page fits on the paper used by the
printer. There are neatlines and tick marks on all sides of every page.
Boolean—logical, based upon, or reducible to a true or false condition.
border—on a map, a line that usually encloses the entire map, not just the image area as does a
neatline.
boundary—a neighborhood analysis technique that is used to detect boundaries between
thematic classes.
bpi—bits per inch. A measure of data storage density for magnetic tapes.
breakline—an elevation polyline in which each vertex has its own X, Y, Z value.
brightness value—the quantity of a primary color (red, green, blue) to be output to a pixel on
the display device. Also called intensity value, function memory value, pixel value,
display value, and screen value.
BSQ—band sequential. A data storage format in which each band is contained in a separate file.
buffer zone—a specific area around a feature that is isolated for or from further analysis. For
example, buffer zones are often generated around streams in site assessment studies so
that further analyses exclude these areas that are often unsuitable for development.
build—the process of constructing the topology of a vector layer by processing points, lines,
and polygons. See clean.
bundle—the unit of photogrammetric triangulation after each point measured in an image is
connected with the perspective center by a straight light ray. There is one bundle of light
rays for each image.
bundle attitude—defined by a spatial rotation matrix consisting of three angles
(κ, ω, ϕ).
bundle location—defined by the perspective center, expressed in units of the specified map
projection.
byte—8 bits of data.
check point analysis—the act of using check points to independently verify the degree of
accuracy of a triangulation.
chi-square distribution—a nonsymmetrical data distribution: its curve is characterized by a tail
that represents the highest and least frequent data values. In classification thresholding,
the tail represents the pixels that are most likely to be classified incorrectly.
choropleth map—a map portraying properties of a surface using area symbols. Area symbols
usually represent categorized classes of the mapped phenomenon.
CIB—see Controlled Image Base.
city-block distance—the physical or spectral distance that is measured as the sum of distances
that are perpendicular to one another.
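For example, the following Python sketch (illustrative) computes the city-block distance between two measurement vectors as the sum of the absolute band-by-band differences:

def city_block_distance(a, b):
    """Sum of the band-by-band absolute differences between two measurement vectors."""
    return sum(abs(x - y) for x, y in zip(a, b))

print(city_block_distance((10, 20, 30), (13, 18, 25)))   # 3 + 2 + 5 = 10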
class—a set of pixels in a GIS file that represents areas that share some condition. Classes are
usually formed through classification of a continuous raster layer.
class value—a data file value of a thematic file that identifies a pixel as belonging to a particular
class.
classification—the process of assigning the pixels of a continuous raster image to discrete
categories.
classification accuracy table—for accuracy assessment, a list of known values of reference
pixels, supported by some ground truth or other a priori knowledge of the true class, and
a list of the classified values of the same pixels, from a classified file to be tested.
classification scheme—(or classification system) a set of target classes. The purpose of such a
scheme is to provide a framework for organizing and categorizing the information that
can be extracted from the data.
clean—the process of constructing the topology of a vector layer by processing lines and
polygons. See build.
client—on a computer on a network, a program that accesses a server utility that is on another
machine on the network.
clump—a contiguous group of pixels in one class. Also called raster region.
clustering—unsupervised training; the process of generating signatures based on the natural
groupings of pixels in image data when they are plotted in spectral space.
clusters—the natural groupings of pixels when plotted in spectral space.
CMY—cyan, magenta, yellow. Primary colors of pigment used by printers, whereas display
devices use RGB.
CNES—Centre National d’Etudes Spatiales. The corporation was founded in 1961. It provides
support for ESA. CNES suggests and executes programs (Centre National D’Etudes
Spatiales, 1998).
coefficient—one number in a matrix, or a constant in a polynomial expression.
coefficient of variation—a scene-derived parameter that is used as input to the Sigma and Local
Statistics radar enhancement filters.
continuous—a term used to describe raster data layers that contain quantitative and related
values. See continuous data.
continuous data—a type of raster data that are quantitative (measuring a characteristic) and
have related, continuous values, such as remotely sensed images (e.g., Landsat, SPOT,
etc.).
contour map—a map in which a series of lines connects points of equal elevation.
contrast stretch—the process of reassigning a range of values to another range, usually
according to a linear function. Contrast stretching is often used in displaying continuous
raster layers, since the range of data file values is usually much narrower than the range
of brightness values on the display device.
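A minimal Python sketch of a linear contrast stretch (illustrative only), mapping an input data range onto the 0 to 255 brightness range:

def linear_stretch(value, in_min, in_max, out_min=0, out_max=255):
    """Linearly reassign a data file value to a brightness value, clipping out-of-range input."""
    if in_max == in_min:
        return out_min
    scaled = (value - in_min) / (in_max - in_min)
    scaled = min(max(scaled, 0.0), 1.0)
    return round(out_min + scaled * (out_max - out_min))

print(linear_stretch(90, in_min=40, in_max=140))   # the middle of the input range maps to about 128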
control point—a point with known coordinates in the ground coordinate system, expressed in
the units of the specified map projection.
Controlled Image Base—(CIB) a military data product based upon the general RPF
specification.
convolution filtering—the process of averaging small sets of pixels across an image. Used to
change the spatial frequency characteristics of an image.
convolution kernel—a matrix of numbers that is used to average the value of each pixel with
the values of surrounding pixels in a particular way. The numbers in the matrix serve to
weight this average toward particular pixels.
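A short Python sketch (illustrative) of the weighted average for one pixel using a 3 × 3 kernel; zero-sum kernels, such as edge detectors, are left unnormalized:

def convolve_pixel(image, row, col, kernel):
    """Weighted average of a pixel and its 3 x 3 neighborhood using the kernel weights."""
    ksum = sum(sum(r) for r in kernel)
    total = 0.0
    for i in range(-1, 2):
        for j in range(-1, 2):
            total += kernel[i + 1][j + 1] * image[row + i][col + j]
    return total / ksum if ksum != 0 else total

low_pass = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]              # simple averaging kernel
image = [[10, 10, 10], [10, 50, 10], [10, 10, 10]]
print(convolve_pixel(image, 1, 1, low_pass))              # smooths the bright center pixel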
coordinate system—a method of expressing location. In two-dimensional coordinate systems,
locations are expressed by a column and row, also called x and y.
correlation threshold—a value used in rectification to determine whether to accept or discard
GCPs. The threshold is an absolute value threshold ranging from 0.000 to 1.000.
correlation windows—windows that consist of a local neighborhood of pixels. One example is
square neighborhoods (e.g., 3 × 3, 5 × 5, 7 × 7 pixels).
corresponding GCPs—the GCPs that are located in the same geographic location as the
selected GCPs, but are selected in different files.
covariance—measures the tendencies of data file values for the same pixel, but in different
bands, to vary with each other in relation to the means of their respective bands. These
bands must be linear. Covariance is defined as the average product of the differences
between the data file values in each band and the mean of each band.
covariance matrix—a square matrix that contains all of the variances and covariances within
the bands in a data file.
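A small Python sketch (illustrative) that follows the definition above, computing the covariance of two bands and assembling a covariance matrix:

def covariance(band_a, band_b):
    """Average product of the deviations of two bands from their respective means."""
    n = len(band_a)
    mean_a = sum(band_a) / n
    mean_b = sum(band_b) / n
    return sum((a - mean_a) * (b - mean_b) for a, b in zip(band_a, band_b)) / n

def covariance_matrix(bands):
    """Square matrix of variances (diagonal) and covariances (off-diagonal) for all bands."""
    return [[covariance(bi, bj) for bj in bands] for bi in bands]

bands = [[10, 12, 14, 16], [20, 23, 27, 30]]   # two bands, four pixels
print(covariance_matrix(bands))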
CPU— see central processing unit.
credits—on maps, the text that can include the data source and acquisition date, accuracy
information, and other details that are required for or helpful to readers.
CRG—see Compressed Raster Graphics.
crisp filter—a filter used to sharpen the overall scene luminance without distorting the
interband variance content of the image.
cross correlation—a calculation that computes the correlation coefficient of the gray values
between the template window and the search window.
cubic convolution—a method of resampling that uses the data file values of sixteen pixels in a
4 × 4 window to calculate an output data file value with a cubic function.
current directory—also called default directory, it is the directory that you are in. It is the
default path.
cylindrical projection—a map projection that is created from projecting the surface of the Earth
to the surface of a cylinder.
D dangling node—a line that does not close to form a polygon, or that extends past an
intersection.
data—1. in the context of remote sensing, a computer file containing numbers that represent a
remotely sensed image, and can be processed to display that image. 2. a collection of
numbers, strings, or facts that requires some processing before it is meaningful.
database (one word)—a relational data structure usually used to store tabular information.
Examples of popular databases include SYBASE, dBase, Oracle, INFO, etc.
data base (two words)—in ERDAS IMAGINE, a set of continuous and thematic raster layers,
vector layers, attribute information, and other kinds of data that represent one area of
interest. A data base is usually part of a GIS.
data file—a computer file that contains numbers that represent an image.
data file value—each number in an image file. Also called file value, image file value, DN,
brightness value, pixel.
datum—see reference plane.
DCT—see Discrete Cosine Transformation.
decision rule—an equation or algorithm that is used to classify image data after signatures have
been created. The decision rule is used to process the data file values based upon the
signature statistics.
decorrelation stretch—a technique used to stretch the principal components of an image, not
the original image.
default directory—see current directory.
Defense Mapping Agency—(DMA) agency that supplies VPF, ARC digital raster, DRG,
ADRG, and DTED files.
degrees of freedom—when chi-square statistics are used in thresholding, the number of bands
in the classified file.
DEM—see digital elevation model.
densify—the process of adding vertices to selected lines at a user-specified tolerance.
density—1. the number of bits per inch on a magnetic tape. 9-track tapes are commonly stored
at 1600 and 6250 bpi. 2. a neighborhood analysis technique that outputs the number of
pixels that have the same value as the analyzed pixel in a user-specified window.
derivative map—a map created by altering, combining, or analyzing other maps.
descriptor—see attribute.
desktop scanners—general purpose devices that lack the image detail and geometric accuracy
of photogrammetric quality units, but are much less expensive.
detector—the device in a sensor system that records electromagnetic radiation.
developable surface—a flat surface, or a surface that can be easily flattened by being cut and
unrolled, such as the surface of a cone or a cylinder.
DFT—see Discrete Fourier Transform.
DGPS—see Differential Correction.
Differential Correction—(DGPS) can be used to remove the majority of the effects of Selective
Availability.
digital elevation model—(DEM) continuous raster layers in which data file values represent
elevation. DEMs are available from the USGS at 1:24,000 and 1:250,000 scale, and can
be produced with terrain analysis programs, IMAGINE IFSAR DEM, IMAGINE
OrthoMAX™, and IMAGINE StereoSAR DEM.
Digital Number—(DN) variation in pixel intensity due to the composition of what it represents. For
example, the DN of water is different from that of land. DN is expressed as a value, typically
from 0 to 255.
digital orthophoto—an aerial photo or satellite scene that has been transformed by the
orthogonal projection, yielding a map that is free of most significant geometric
distortions.
digital orthophoto quadrangle—(DOQ) a computer-generated image of an aerial photo (United
States Geological Survey, 1999b).
digital photogrammetry—photogrammetry as applied to digital images that are stored and
processed on a computer. Digital images can be scanned from photographs or can be
directly captured by digital cameras.
Digital Line Graph—(DLG) a vector data format created by the USGS.
Digital Terrain Elevation Data—(DTED) data produced by the DMA. DTED data comes in
two types, both in Arc/second format: DTED 1—a 1° × 1° area of coverage, and DTED
2—a 1° × 1° or less area of coverage.
digital terrain model—(DTM) a discrete expression of topography in a data array, consisting
of a group of planimetric coordinates (X,Y) and the elevations of the ground points and
breaklines.
digitized raster graphic—(DRG) a digital replica of DMA hardcopy graphic products. See also
ADRG.
digitizing—any process that converts nondigital data into numeric data, usually to be stored on
a computer. In ERDAS IMAGINE, digitizing refers to the creation of vector data from
hardcopy materials or raster images that are traced using a digitizer keypad on a
digitizing tablet, or a mouse on a display device.
DIME—see Dual Independent Map Encoding.
dimensionality—a term referring to the number of bands being classified. For example, a data
file with three bands is said to be three-dimensional, since three-dimensional spectral
space is plotted to analyze the data.
directory—an area of a computer disk that is designated to hold a set of files. Usually,
directories are arranged in a tree structure, in which directories can also contain many
levels of subdirectories.
Discrete Cosine Transformation—(DCT) an element of a commonly used form of JPEG,
which is a compression technique.
Discrete Fourier Transform—(DFT) method of removing striping and other noise from radar
images. See also Fast Fourier Transform.
displacement—the degree of geometric distortion for a point that is not on the nadir line.
display device—the computer hardware consisting of a memory board and a monitor. It displays
a visible image from a data file or from some user operation.
display driver—the ERDAS IMAGINE utility that interfaces between the computer running
ERDAS IMAGINE software and the display device.
display memory—the subset of image memory that is actually viewed on the display screen.
display pixel—one grid location on a display device or printout.
display resolution—the number of pixels that can be viewed on the display device monitor,
horizontally and vertically (i.e., 512 × 512 or 1024 × 1024).
distance—see Euclidean distance, spectral distance.
distance image file—a one-band, 16-bit file that can be created in the classification process, in
which each data file value represents the result of the distance equation used in the
program. Distance image files generally have a chi-square distribution.
distribution—the set of frequencies with which an event occurs, or the set of probabilities that
a variable has a particular value.
distribution rectangles—(DR) the geographic data sets into which ADRG data are divided.
dithering—a display technique that is used in ERDAS IMAGINE to allow a smaller set of colors
to appear to be a larger set of colors.
divergence—a statistical measure of distance between two or more signatures. Divergence can
be calculated for any combination of bands used in the classification; bands that diminish
the results of the classification can be ruled out.
diversity—a neighborhood analysis technique that outputs the number of different values within
a user-specified window.
DLG—see Digital Line Graph.
E Earth Observation Satellite Company—(EOSAT) a private company that directs the Landsat
satellites and distributes Landsat imagery.
Earth Resources Observation Systems—(EROS) a division of the USGS National Mapping
Division. EROS is involved with managing data and creating systems, as well as research
(USGS, 1999a).
Earth Resources Technology Satellites—(ERTS) in 1972, NASA’s first civilian program to
acquire remotely sensed digital satellite data, later renamed to Landsat.
EDC—see EROS Data Center.
edge detector—a convolution kernel, which is usually a zero-sum kernel, that smooths out or
zeros out areas of low spatial frequency and creates a sharp contrast where spatial
frequency is high. High spatial frequency is at the edges between homogeneous groups
of pixels.
edge enhancer—a high-frequency convolution kernel that brings out the edges between
homogeneous groups of pixels. Unlike an edge detector, it only highlights edges; it does not
necessarily eliminate other features.
European Space Agency—(ESA) company with two satellites, ERS-1 and ERS-2, that collect
radar data. For more information, visit the ESA web site at http://www.esa.int.
extract—selected bands of a complete set of NOAA AVHRR data.
F false color—a color scheme in which features have expected colors. For instance, vegetation is
green, water is blue, etc. These are not necessarily the true colors of these features.
false easting—an offset between the x-origin of a map projection and the x-origin of a map.
Typically used so that no x-coordinates are negative.
false northing—an offset between the y-origin of a map projection and the y-origin of a map.
Typically used so that no y-coordinates are negative.
fast format—a type of BSQ format used by EOSAT to store Landsat TM data.
Fast Fourier Transform—(FFT) a type of Fourier Transform faster than the DFT. Designed to
remove noise and periodic features from radar images. It converts a raster image from the
spatial domain into a frequency domain image.
feature based matching—an image matching technique that determines the correspondence
between two image features.
feature collection—the process of identifying, delineating, and labeling various types of natural
and human-made phenomena from remotely-sensed images.
feature extraction—the process of studying and locating areas and objects on the ground and
deriving useful information from images.
feature space—an abstract space that is defined by spectral units (such as an amount of
electromagnetic radiation).
feature space area of interest—a user-selected area of interest (AOI) that is selected from a
feature space image.
feature space image—a graph of the data file values of one band of data against the values of
another band (often called a scatterplot).
FFT—see Fast Fourier Transform.
fiducial center—the center of an aerial photo.
fiducials—four or eight reference markers fixed on the frame of an aerial metric camera and
visible in each exposure. Fiducials are used to compute the transformation from data file
to image coordinates.
field—in an attribute database, a category of information about each class or feature, such as
Class name and Histogram.
field of view—(FOV) in perspective views, an angle that defines how far the view is generated
to each side of the line of sight.
file coordinates—the location of a pixel within the file in x,y coordinates. The upper left file
coordinate is usually 0,0.
file pixel—the data file value for one data unit in an image file.
file specification or filespec—the complete file name, including the drive and path, if necessary.
If a drive or path is not specified, the file is assumed to be in the current drive and
directory.
filled—referring to polygons; a filled polygon is solid or has a pattern, but is not transparent. An
unfilled polygon is simply a closed vector that outlines the area of the polygon.
filtering—the removal of spatial or spectral features for data enhancement. Convolution
filtering is one method of spatial filtering. Some texts may use the terms filtering and
spatial filtering synonymously.
flip—the process of reversing the from-to direction of selected lines or links.
focal length—the orthogonal distance from the perspective center to the image plane.
focal operations—filters that use a moving window to calculate new values for each pixel in
the image based on the values of the surrounding pixels.
focal plane—the plane of the film or scanner used in obtaining an aerial photo.
Fourier analysis—an image enhancement technique that was derived from signal processing.
FOV—see field of view.
from-node—the first vertex in a line.
full set—all bands of a NOAA AVHRR data set.
function memories—areas of the display device memory that store the lookup tables, which
translate image memory values into brightness values.
function symbol—an annotation symbol that represents an activity. For example, on a map of
a state park, a symbol of a tent would indicate the location of a camping area.
Fuyo 1 (JERS-1)—the Japanese radar satellite launched in February 1992.
geocoded data—an image(s) that has been rectified to a particular map projection and cell size
and has had radiometric corrections applied.
Geographic Base File—(GBF) along with DIME, sometimes provides the cartographic base
for TIGER/Line files, which cover the US, Puerto Rico, Guam, the Virgin Islands,
American Samoa, and the Trust Territories of the Pacific.
geographic information system—(GIS) a unique system designed for a particular application
that stores, enhances, combines, and analyzes layers of geographic data to produce
interpretable information. A GIS may include computer images, hardcopy maps,
statistical data, and any other data needed for a study, as well as computer software and
human knowledge. GISs are used for solving complex geographic planning and
management problems.
geographical coordinates—a coordinate system for explaining the surface of the Earth.
Geographical coordinates are defined by latitude and by longitude (Lat/Lon), with
respect to an origin located at the intersection of the equator and the prime (Greenwich)
meridian.
geometric correction—the correction of errors of skew, rotation, and perspective in raw,
remotely sensed data.
georeferencing—the process of assigning map coordinates to image data and resampling the
pixels of the image to conform to the map projection grid.
GeoTIFF—TIFF files that are geocoded.
gigabyte—(Gb) about one billion bytes.
GIS—see geographic information system.
GIS file—a single-band ERDAS Ver. 7.X data file in which pixels are divided into discrete
categories.
global area coverage—(GAC) a type of NOAA AVHRR data with a spatial resolution of 4 × 4
km.
global operations—functions that calculate a single value for an entire area, rather than for each
pixel like focal functions.
GLObal NAvigation Satellite System—(GLONASS) a satellite-based navigation system
produced by the Russian Space Forces. It provides three-dimensional locations, velocity,
and time measurements for both civilian and military applications. GLONASS started its
mission in 1993 (Magellan Corporation, 1999).
Global Ozone Monitoring Experiment—(GOME) instrument aboard ESA’s ERS-2 satellite,
which studies atmospheric chemistry (European Space Agency, 1995).
Global Positioning System—(GPS) system used for the collection of GCPs, which uses
orbiting satellites to pinpoint precise locations on the Earth’s surface.
GLONASS—see GLObal NAvigation Satellite System.
.gmd file—the ERDAS IMAGINE graphical model file created with Model Maker (Spatial
Modeler).
gnomonic—an azimuthal projection obtained from a perspective at the center of the Earth.
H halftoning—the process of using dots of varying size or arrangements (rather than varying
intensity) to form varying degrees of a color.
hardcopy output—any output of digital computer (softcopy) data to paper.
HARN—see High Accuracy Reference Network.
header file—a file usually found before the actual image data on tapes or CD-ROMs that
contains information about the data, such as number of bands, upper left coordinates,
map projection, etc.
header record—the first part of an image file that contains general information about the data
in the file, such as the number of columns and rows, number of bands, database
coordinates of the upper left corner, and the pixel depth. The contents of header records
vary depending on the type of data.
HFA—see Hierarchal File Architecture System.
Hierarchal File Architecture System—(HFA) a format that allows different types of
information about a file to be stored in a tree-structured fashion. The tree is made of
nodes that contain information such as ephemeris data.
High Accuracy Reference Network—(HARN) HARN is based on the GRS 1980 spheroid, and
can be used to perform State Plane calculations.
high-frequency kernel—a convolution kernel that increases the spatial frequency of an image.
Also called high-pass kernel.
High Resolution Picture Transmission—(HRPT) the direct transmission of AVHRR data in
real-time with the same resolution as LAC.
High Resolution Visible Infrared—(HR VIR) a pushbroom scanner on the SPOT 4 satellite,
which captures information in the visible and near-infrared bands (SPOT Image, 1999).
High Resolution Visible sensor—(HRV) a pushbroom scanner on a SPOT satellite that takes a
sequence of line images while the satellite circles the Earth.
histogram—a graph of data distribution, or a chart of the number of pixels that have each
possible data file value. For a single band of data, the horizontal axis of a histogram graph
is the range of all possible data file values. The vertical axis is a measure of pixels that
have each data value.
histogram equalization—the process of redistributing pixel values so that there are
approximately the same number of pixels with each value within a range. The result is a
nearly flat histogram.
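A compact Python sketch (illustrative; simplified relative to production implementations) that builds the equalizing lookup table from the cumulative distribution of data file values:

from collections import Counter

def equalize(values, levels=256):
    """Redistribute values so the output histogram is approximately flat."""
    hist = Counter(values)
    total = len(values)
    lut, cumulative = {}, 0
    for value in sorted(hist):
        cumulative += hist[value]
        lut[value] = round((cumulative / total) * (levels - 1))
    return [lut[v] for v in values]

print(equalize([0, 0, 0, 1, 1, 2, 3, 3, 3, 3]))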
histogram matching—the process of determining a lookup table that converts the histogram of
one band of an image or one color gun to resemble another histogram.
horizontal control—the horizontal distribution of GCPs in aerial triangulation
(x,y - planimetry).
host workstation—a CPU, keyboard, mouse, and a display.
HRPT—see High Resolution Picture Transmission.
HRV—see High Resolution Visible sensor.
HR VIR—see High Resolution Visible Infrared.
hue—a component of IHS (intensity, hue, saturation) that is representative of the color or
dominant wavelength of the pixel. It varies from 0 to 360. Blue = 0 (and 360), magenta
= 60, red = 120, yellow = 180, green = 240, and cyan = 300.
hyperspectral sensors—the imaging sensors that record multiple bands of data, such as the
AVIRIS with 224 bands.
International Map of the World—(IMW) a series of maps produced by the International Map
Committee. Maps are in 1:1,000,000 scale.
intersection—the area or set that is common to two or more input areas or sets.
interval data—a type of data in which thematic class values have a natural sequence, and in
which the distances between values are meaningful.
Inverse Fast Fourier Transform—(IFFT) used after the Fast Fourier Transform to transform a
Fourier image back into the spatial domain. See also Fast Fourier Transform.
IR—infrared portion of the electromagnetic spectrum. See also electromagnetic spectrum.
IRS—see Indian Remote Sensing Satellite.
isarithmic map—a map that uses isarithms (lines connecting points of the same value for any
of the characteristics used in the representation of surfaces) to represent a statistical
surface. Also called an isometric map.
ISODATA clustering—see Iterative Self-Organizing Data Analysis Technique.
island—a single line that connects with itself.
isopleth map—a map on which isopleths (lines representing quantities that cannot exist at a
point, such as population density) are used to represent some selected quantity.
iterative—a term used to describe a process in which some operation is performed repeatedly.
Iterative Self-Organizing Data Analysis Technique—(ISODATA clustering) a method of
clustering that uses spectral distance as in the sequential method, but iteratively classifies
the pixels, redefines the criteria for each class, and classifies again, so that the spectral
distance patterns in the data gradually emerge.
K Kappa coefficient—a number that expresses the proportionate reduction in error generated by
a classification process compared with the error of a completely random classification.
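A brief Python sketch (illustrative) computing the Kappa coefficient from an error matrix, comparing observed agreement with the agreement expected by chance:

def kappa(matrix):
    """Kappa coefficient from a square error matrix (rows = classified, columns = reference)."""
    total = sum(sum(row) for row in matrix)
    observed = sum(matrix[i][i] for i in range(len(matrix))) / total
    row_totals = [sum(row) for row in matrix]
    col_totals = [sum(col) for col in zip(*matrix)]
    chance = sum(r * c for r, c in zip(row_totals, col_totals)) / (total * total)
    return (observed - chance) / (1.0 - chance)

print(kappa([[45, 5], [10, 40]]))   # 0.7 for this two-class example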
kernel—see convolution kernel.
L label—in annotation, the text that conveys important information to the reader about map
features.
label point—a point within a polygon that defines that polygon.
LAC—see local area coverage.
.LAN files—multiband ERDAS Ver. 7.X image files (the name originally derived from the
Landsat satellite). LAN files usually contain raw or enhanced remotely sensed data.
land cover map—a map of the visible ground features of a scene, such as vegetation, bare land,
pasture, urban areas, etc.
Landsat—a series of Earth-orbiting satellites that gather MSS and TM imagery, operated by
EOSAT.
large-scale—a description used to represent a map or data file having a large ratio between the
area on the map (such as inches or pixels), and the area that is represented (such as feet).
In large-scale image data, each pixel represents a small area on the ground, such as SPOT
data, with a spatial resolution of 10 or 20 meters.
Lat/Lon—Latitude/Longitude, a map coordinate system.
layer—1. a band or channel of data. 2. a single band or set of three bands displayed using the
red, green, and blue color guns of the ERDAS IMAGINE Viewer. A layer could be a
remotely sensed image, an aerial photograph, an annotation layer, a vector layer, an area
of interest layer, etc. 3. a component of a GIS data base that contains all of the data for
one theme. A layer consists of a thematic image file, and may also include attributes.
least squares correlation—uses the least squares estimation to derive parameters that best fit a
search window to a reference window.
least squares regression—the method used to calculate the transformation matrix from the
GCPs. This method is discussed in statistics textbooks.
legend—the reference that lists the colors, symbols, line patterns, shadings, and other
annotation that is used on a map, and their meanings. The legend often includes the map’s
title, scale, origin, and other information.
lettering—the manner in which place names and other labels are added to a map, including
letter spacing, orientation, and position.
level 1A (SPOT)—an image that corresponds to raw sensor data to which only radiometric
corrections have been applied.
level 1B (SPOT)—an image that has been corrected for the Earth’s rotation and to make all
pixels 10 × 10 meters on the ground. Pixels are resampled from the level 1A sensor data by cubic
polynomials.
level slice—the process of applying a color scheme by equally dividing the input values (image
memory values) into a certain number of bins, and applying the same color to all pixels
in each bin. Usually, a ROYGBIV (red, orange, yellow, green, blue, indigo, violet) color
scheme is used.
line—1. a vector data element consisting of a line (the set of pixels directly between two points),
or an unclosed set of lines. 2. a row of pixels in a data file.
line dropout—a data error that occurs when a detector in a satellite either completely fails to
function or becomes temporarily overloaded during a scan. The result is a line, or partial
line of data with incorrect data file values creating a horizontal streak until the detector(s)
recovers, if it recovers.
linear—a description of a function that can be graphed as a straight line or a series of lines.
Linear equations (transformations) can generally be expressed in the form of the equation
of a line or plane. Also called 1st-order.
linear contrast stretch—an enhancement technique that outputs new values at regular intervals.
linear transformation—a 1st-order rectification. A linear transformation can change location
in X and/or Y, scale in X and/or Y, skew in X and/or Y, and rotation.
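A minimal Python sketch (illustrative) of a 1st-order transformation, in which six coefficients combine shift, scale, skew, and rotation:

import math

def affine(x, y, a0, a1, a2, b0, b1, b2):
    """1st-order transformation: x' = a0 + a1*x + a2*y and y' = b0 + b1*x + b2*y."""
    return a0 + a1 * x + a2 * y, b0 + b1 * x + b2 * y

# A 30-degree rotation combined with a shift of (100, 50):
angle = math.radians(30.0)
print(affine(10.0, 0.0,
             100.0, math.cos(angle), -math.sin(angle),
             50.0, math.sin(angle), math.cos(angle)))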
line of sight—in perspective views, the point(s) and direction from which the viewer is looking
into the image.
local area coverage—(LAC) a type of NOAA AVHRR data with a spatial resolution of 1.1 ×
1.1 km.
logical record—a series of bytes that form a unit on a 9-track tape. For example, all the data for
one line of an image may form a logical record. One or more logical records make up a
physical record on a tape.
long wave infrared region—(LWIR) the thermal or far-infrared region of the electromagnetic
spectrum.
lookup table—(LUT) an ordered set of numbers that is used to perform a function on a set of
input values. To display or print an image, lookup tables translate data file values into
brightness values.
lossy—“a term describing a data compression algorithm which actually reduces the amount of
information in the data, rather than just the number of bits used to represent that
information” (Free On-Line Dictionary of Computing, 1999c).
low-frequency kernel—a convolution kernel that decreases spatial frequency. Also called low-
pass kernel.
LUT—see lookup table.
LWIR—see long wave infrared region.
M Machine Independent Format—(MIF) a format designed to store data in a way that it can be
read by a number of different machines.
magnify—the process of displaying one file pixel over a block of display pixels. For example,
if the magnification factor is 3, then each file pixel takes up a block of
3 × 3 display pixels. Magnification differs from zooming in that the magnified image is
loaded directly to image memory.
magnitude—an element of an electromagnetic wave. Magnitude of a wave decreases
exponentially as the distance from the transmitter increases.
Mahalanobis distance—a classification decision rule that is similar to the minimum distance
decision rule, except that a covariance matrix is used in the equation.
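A short Python sketch (illustrative) of the squared Mahalanobis distance for a two-band case, D² = (x − μ)ᵀ Σ⁻¹ (x − μ), written with plain lists to stay self-contained:

def mahalanobis_sq(pixel, mean, cov):
    """Squared Mahalanobis distance of a two-band measurement vector from a class mean."""
    dx = pixel[0] - mean[0]
    dy = pixel[1] - mean[1]
    det = cov[0][0] * cov[1][1] - cov[0][1] * cov[1][0]
    inv = [[cov[1][1] / det, -cov[0][1] / det],        # inverse of the 2 x 2 covariance matrix
           [-cov[1][0] / det, cov[0][0] / det]]
    return (dx * (inv[0][0] * dx + inv[0][1] * dy) +
            dy * (inv[1][0] * dx + inv[1][1] * dy))

print(mahalanobis_sq(pixel=(120, 80), mean=(100, 90), cov=[[25.0, 5.0], [5.0, 16.0]]))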
majority—a neighborhood analysis technique that outputs the most common value of the data
file values in a user-specified window.
MAP—see Maximum A Posteriori.
map—a graphic representation of spatial relationships on the Earth or other planets.
map coordinates—a system of expressing locations on the Earth’s surface using a particular
map projection, such as UTM, State Plane, or Polyconic.
map frame—an annotation element that indicates where an image is placed in a map
composition.
map projection—a method of representing the three-dimensional spherical surface of a planet
on a two-dimensional map surface. All map projections involve the transfer of latitude
and longitude onto an easily flattened surface.
matrix—a set of numbers arranged in a rectangular array. If a matrix has i rows and j columns,
it is said to be an i × j matrix.
matrix analysis—a method of combining two thematic layers in which the output layer contains
a separate class for every combination of two input classes.
matrix object—in Model Maker (Spatial Modeler), a set of numbers in a two-dimensional array.
maximum—a neighborhood analysis technique that outputs the greatest value of the data file
values in a user-specified window.
Maximum A Posteriori—(MAP) a filter (Gamma-MAP) that is designed to estimate the
original DN value of a pixel, which it assumes is between the local average and the
degraded DN.
maximum likelihood—a classification decision rule based on the probability that a pixel
belongs to a particular class. The basic equation assumes that these probabilities are
equal for all classes, and that the input bands have normal distributions.
.mdl file—an ERDAS IMAGINE script model created with the Spatial Modeler Language.
mean—1. the statistical average; the sum of a set of values divided by the number of values in
the set. 2. a neighborhood analysis technique that outputs the mean value of the data file
values in a user-specified window.
mean vector—an ordered set of means for a set of variables (bands). For a data file, the mean
vector is the set of means for all bands in the file.
measurement vector—the set of data file values for one pixel in all bands of a data file.
median—1. the central value in a set of data such that an equal number of values are greater
than and less than the median. 2. a neighborhood analysis technique that outputs the
median value of the data file values in a user-specified window.
megabyte—(Mb) about one million bytes.
memory resident—a term referring to the occupation of a part of a computer’s RAM (random
access memory), so that a program is available for use without being loaded into memory
from disk.
mensuration—the measurement of linear or areal distance.
meridian—a line of longitude, going north and south. See geographical coordinates.
MIF—see Machine Independent Format.
minimum—a neighborhood analysis technique that outputs the least value of the data file
values in a user-specified window.
minimum distance—a classification decision rule that calculates the spectral distance between
the measurement vector for each candidate pixel and the mean vector for each signature.
Also called spectral distance.
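A minimal Python sketch (illustrative) of the minimum distance decision rule, assigning a pixel to the signature whose mean vector is closest in spectral space:

import math

def classify_minimum_distance(pixel, signatures):
    """Assign a measurement vector to the class with the nearest mean vector."""
    def distance(mean):
        return math.sqrt(sum((p - m) ** 2 for p, m in zip(pixel, mean)))
    return min(signatures, key=lambda name: distance(signatures[name]))

signatures = {"water": (20, 15, 10), "forest": (40, 60, 35), "urban": (90, 85, 80)}
print(classify_minimum_distance((38, 55, 30), signatures))   # "forest"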
minority—a neighborhood analysis technique that outputs the least common value of the data
file values in a user-specified window.
mode—the most commonly-occurring value in a set of data. In a histogram, the mode is the
peak of the curve.
model—in a GIS, the set of expressions, or steps, that defines your criteria and creates an output
layer.
modeling—the process of creating new layers from combining or operating upon existing
layers. Modeling allows the creation of new classes from existing classes and the creation
of a small set of images—perhaps even a single image—which, at a glance, contains
many types of information about a scene.
modified projection—a map projection that is a modified version of another projection. For
example, the Space Oblique Mercator projection is a modification of the Mercator
projection.
monochrome image—an image produced from one band or layer, or contained in one color gun
of the display device.
morphometric map—a map representing morphological features of the Earth’s surface.
mosaicking—the process of piecing together images side by side, to create a larger image.
MrSID—see Multiresolution Seamless Image Database.
MSS—see multispectral scanner.
Multiresolution Seamless Image Database—(MrSID) a wavelet transform-based compression
algorithm designed by LizardTech, Inc.
multispectral classification—the process of sorting pixels into a finite number of individual
classes, or categories of data, based on data file values in multiple bands. See also
classification.
O object—in models, an input to or output from a function. See matrix object, raster object, scalar
object, table object.
oblique aspect—a map projection that is not oriented around a pole or the Equator.
observation—in photogrammetric triangulation, a grouping of the image coordinates for a
GCP.
off-nadir—any point that is not directly beneath a scanner’s detectors, but off to an angle. The
SPOT scanner allows off-nadir viewing.
1:24,000—1:24,000 scale data, also called 7.5-minute DEM, available from USGS. It is usually
referenced to the UTM coordinate system and has a spatial resolution of 30 × 30 meters.
1:250,000—1:250,000 scale DEM data available from USGS. Available only in arc/second
format.
opacity—a measure of how opaque, or solid, a color is displayed in a raster layer.
operating system—(OS) the most basic means of communicating with the computer. It
manages the storage of information in files and directories, input from devices such as
the keyboard and mouse, and output to devices such as the monitor.
orbit—a circular, north-south and south-north path that a satellite travels above the Earth.
order—the complexity of a function, polynomial expression, or curve. In a polynomial
expression, the order is simply the highest exponent used in the polynomial. See also
linear, nonlinear.
ordinal data—a type of data that includes discrete lists of classes with an inherent order, such
as classes of streams—first order, second order, third order, etc.
orientation angle—the angle between a perpendicular to the center scan line and the North
direction in a satellite scene.
orthographic—an azimuthal projection with an infinite perspective.
orthocorrection—see orthorectification.
orthoimage—see digital orthophoto.
orthomap—an image map product produced from orthoimages, or orthoimage mosaics, that is
similar to a standard map in that it usually includes additional information, such as map
coordinate grids, scale bars, north arrows, and other marginalia.
orthorectification—a form of rectification that corrects for terrain displacement and can be
used if a DEM of the study area is available.
OS—see operating system.
outline map—a map showing the limits of a specific set of mapping entities such as counties.
Outline maps usually contain a very small number of details over the desired boundaries
with their descriptive codes.
overlay—1. a function that creates a composite file containing either the minimum or the
maximum class values of the input files. Overlay sometimes refers generically to a
combination of layers. 2. the process of displaying a classified file over the original
image to inspect the classification.
overlay file—an ERDAS IMAGINE annotation file (.ovr extension).
.ovr file—an ERDAS IMAGINE annotation file.
pseudo projection—a map projection that has only some of the characteristics of another
projection.
pushbroom—a scanner in which all scanning parts are fixed, and scanning is accomplished by
the forward motion of the scanner, such as the SPOT scanner.
pyramid layers—image layers which are successively reduced by the power of 2 and resampled.
Pyramid layers enable large images to display faster.
Q quadrangle—1. any of the hardcopy maps distributed by USGS such as the 7.5-minute
quadrangle or the 15-minute quadrangle. 2. one quarter of a full Landsat TM scene.
Commonly called a quad.
qualitative map—a map that shows the spatial distribution or location of a kind of nominal data.
For example, a map showing corn fields in the US would be a qualitative map. It would
not show how much corn is produced in each location, or production relative to other
areas.
quantitative map—a map that displays the spatial aspects of numerical data. A map showing
corn production (volume) in each area would be a quantitative map.
R radar data—the remotely sensed data that are produced when a radar transmitter emits a beam
of micro or millimeter waves, the waves reflect from the surfaces they strike, and the
backscattered radiation is detected by the radar system’s receiving antenna, which is
tuned to the frequency of the transmitted waves.
RADARSAT—a Canadian radar satellite.
radiative transfer equations—the mathematical models that attempt to quantify the total
atmospheric effect of solar illumination.
radiometric correction—the correction of variations in data that are not caused by the object or
scene being scanned, such as scanner malfunction and atmospheric interference.
radiometric enhancement—an enhancement technique that deals with the individual values of
pixels in an image.
radiometric resolution—the dynamic range, or number of possible data file values, in each
band. This is referred to by the number of bits into which the recorded energy is divided.
See pixel depth.
RAM—see random-access memory.
random-access memory—(RAM) memory used for applications and data storage on a CPU
(Free On-Line Dictionary of Computing, 1999d).
rank—a neighborhood analysis technique that outputs the number of values in a user-specified
window that are less than the analyzed value.
RAR—see Real-Aperture Radar.
raster data—data that are organized in a grid of columns and rows. Raster data usually represent
a planar graph or geographical area. Raster data in ERDAS IMAGINE are stored in
image files.
raster object—in Model Maker (Spatial Modeler), a single raster layer or set of layers.
Raster Product Format—(RPF) Data from NIMA, used primarily for military purposes.
Organized in 1536 × 1536 frames, with an internal tile size of 256 × 256 pixels.
raster region—a contiguous group of pixels in one GIS class. Also called clump.
ratio data—a data type in which thematic class values have the same properties as interval
values, except that ratio values have a natural zero or starting point.
RDBMS—see relational database management system.
RDGPS—see Real Time Differential GPS.
Real-Aperture Radar—(RAR) a radar sensor that uses its side-looking, fixed antenna to
transmit and receive the radar impulse. For a given position in space, the resolution of
the resultant image is a function of the antenna size. The signal is processed
independently of subsequent return signals.
Real Time Differential GPS—(RDGPS) takes the Differential Correction technique one step
further by having the base station communicate the error vector via radio to the field unit
in real time.
recoding—the assignment of new values to one or more classes.
record—1. the set of all attribute data for one class of feature. 2. the basic storage unit on a 9-
track tape.
rectification—the process of making image data conform to a map projection system. In many
cases, the image must also be oriented so that the north direction corresponds to the top
of the image.
rectified coordinates—the coordinates of a pixel in a file that has been rectified, which are
extrapolated from the GCPs. Ideally, the rectified coordinates for the GCPs are exactly
equal to the reference coordinates. Because there is often some error tolerated in the
rectification, this is not always the case.
reduce—the process of skipping file pixels when displaying an image so that a larger area can
be represented on the display screen. For example, a reduction factor of 3 would cause
only the pixel at every third row and column to be displayed, so that each displayed pixel
represents a 3 × 3 block of file pixels.
reference coordinates—the coordinates of the map or reference image to which a source (input)
image is being registered. GCPs consist of both input coordinates and reference
coordinates for each point.
reference pixels—in classification accuracy assessment, pixels for which the correct GIS class
is known from ground truth or other data. The reference pixels can be selected by you, or
randomly selected.
reference plane—In a topocentric coordinate system, the tangential plane at the center of the
image on the Earth ellipsoid, on which the three perpendicular coordinate axes are
defined.
reference system—the map coordinate system to which an image is registered.
reference window—the source window on the first image of an image pair, which remains at a
constant location. See also correlation windows and search windows.
reflection spectra—the electromagnetic radiation wavelengths that are reflected by specific
materials of interest.
registration—the process of making image data conform to another image. A map coordinate
system is not necessarily involved.
regular block of photos—a rectangular block in which the number of photos in each strip is the
same; this includes a single strip or a single stereopair.
relational database management system—(RDBMS) system that stores SDE database layers.
relation based matching—an image matching technique that uses the image features and the
relation among the features to automatically recognize the corresponding image
structures without any a priori information.
relief map—a map that appears to be or is three-dimensional.
remote sensing—the measurement or acquisition of data about an object or scene by a satellite
or other instrument above or far from the object. Aerial photography, satellite imagery,
and radar are all forms of remote sensing.
replicative symbol—an annotation symbol that is designed to look like its real-world
counterpart. These symbols are often used to represent trees, railroads, houses, etc.
representative fraction—the ratio or fraction used to denote map scale.
resampling—the process of extrapolating data file values for the pixels in a new grid when data
have been rectified or registered to another image.
rescaling—the process of compressing data from one format to another. In ERDAS IMAGINE,
this typically means compressing a 16-bit file to an 8-bit file.
reshape—the process of redigitizing a portion of a line.
residuals—in rectification, the distances between the source and retransformed coordinates in
one direction. In ERDAS IMAGINE, they are shown for each GCP. The X residual is the
distance between the source X coordinate and the retransformed X coordinate. The Y
residual is the distance between the source Y coordinate and the retransformed Y
coordinate.
resolution—a level of precision in data. For specific types of resolution see display resolution,
radiometric resolution, spatial resolution, spectral resolution, and temporal resolution.
resolution merging—the process of sharpening a lower-resolution multiband image by merging
it with a higher-resolution monochrome image.
scale bar—a graphic annotation element that describes map scale. It shows the distance on
paper that represents a geographical distance on the map.
scalar object—in Model Maker (Spatial Modeler), a single numeric value.
scaled map—a georeferenced map that is accurately arranged and referenced to represent
distances and locations. A scaled map usually has a legend that includes a scale, such as
1 inch = 1000 feet. The scale is often expressed as a ratio like 1:12,000 where 1 inch on
the map equals 12,000 inches on the ground.
scanner—the entire data acquisition system, such as the Landsat TM scanner or the SPOT
panchromatic scanner.
scanning—1. the transfer of analog data, such as photographs, maps, or another viewable
image, into a digital (raster) format. 2. a process similar to convolution filtering that uses
a kernel for specialized neighborhood analyses, such as total, average, minimum,
maximum, boundary, and majority.
scatterplot—a graph, usually in two dimensions, in which the data file values of one band are
plotted against the data file values of another band.
scene—the image captured by a satellite.
screen coordinates—the location of a pixel on the display screen, beginning with 0,0 in the
upper left corner.
screen digitizing—the process of drawing vector graphics on the display screen with a mouse.
A displayed image can be used as a reference.
script modeling—the technique of combining data layers in an unlimited number of ways.
Script modeling offers all of the capabilities of graphical modeling with the ability to
perform more complex functions, such as conditional looping.
script model—a model that is comprised of text only and is created with the SML. Script models
are stored in .mdl files.
SCS—see Soil Conservation Service.
SD—see standard deviation.
SDE—see Spatial Database Engine.
SDTS—see spatial data transfer standard.
SDTS Raster Profile and Extensions—(SRPE) an SDTS profile that covers gridded raster data.
search radius—in surfacing routines, the distance around each pixel within which the software
searches for terrain data points.
search windows—candidate windows on the second image of an image pair that are evaluated
relative to the reference window.
seat—a combination of an X-server and a host workstation.
Sea-viewing Wide Field-of-View Sensor—(SeaWiFS) a sensor aboard ORBIMAGE’s OrbView-2 satellite
(also known as NASA’s SeaStar).
SeaWiFS—see Sea-viewing Wide Field-of-View Sensor.
secant—an intersection at two points or lines. In the case of conic or cylindrical map
projections, a secant cone or cylinder intersects the surface of a globe at two circles.
Selective Availability—an intentional degradation of the GPS signal that introduces a positional
inaccuracy of up to 100 m in commercial GPS receivers.
sensor—a device that gathers energy, converts it to a digital value, and presents it in a form
suitable for obtaining information about the environment.
separability—a statistical measure of distance between two signatures.
separability listing—a report of signature divergence which lists the computed divergence for
every class pair and one band combination. The listing contains every divergence value
for the bands studied for every possible pair of signatures.
sequential clustering—a method of clustering that analyzes pixels of an image line by line and
groups them by spectral distance. Clusters are determined based on relative spectral
distance and the number of pixels per cluster.
server—on a computer in a network, a utility that makes some resource or service available to
the other machines on the network (such as access to a tape drive).
shaded relief image—a thematic raster image that shows variations in elevation based on a
user-specified position of the sun. Areas that would be in sunlight are highlighted and
areas that would be in shadow are shaded.
shaded relief map—a map of variations in elevation based on a user-specified position of the
sun. Areas that would be in sunlight are highlighted and areas that would be in shadow
are shaded.
shapefile—an ESRI vector format that contains spatial data. Shapefiles have the .shp extension.
short wave infrared region—(SWIR) the near-infrared and middle-infrared regions of the
electromagnetic spectrum.
SI—see image scale.
Side-looking Airborne Radar—(SLAR) a radar sensor that uses an antenna which is fixed
below an aircraft and pointed to the side to transmit and receive the radar signal.
signal based matching—see area based matching.
Signal-to-Noise ratio—(S/N) in hyperspectral image processing, a ratio used to evaluate the
usefulness or validity of a particular band of data.
signature—a set of statistics that defines a training sample or cluster. The signature is used in
a classification process. Each signature corresponds to a GIS class that is created from
the signatures with a classification decision rule.
skew—a condition in satellite data, caused by the rotation of the Earth eastward, which causes
the position of the satellite relative to the Earth to move westward. Therefore, each line
of data represents terrain that is slightly west of the data in the previous line.
SLAR—see Side-looking Airborne Radar.
slope—the change in elevation over a certain distance. Slope can be reported as a percentage or
in degrees.
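As a quick illustration of the two reporting conventions, the following sketch converts a rise-over-run slope to both percent and degrees; the rise and run values are invented:

    import math

    rise, run = 15.0, 100.0                               # change in elevation over a horizontal distance
    slope_percent = (rise / run) * 100                    # 15 percent
    slope_degrees = math.degrees(math.atan(rise / run))   # about 8.5 degrees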
slope image—a thematic raster image that shows changes in elevation over distance. Slope
images are usually color coded to show the steepness of the terrain at each pixel.
slope map—a map that is color coded to show changes in elevation over distance.
small-scale—for a map or data file, having a small ratio between the area of the imagery (such
as inches or pixels) and the area that is represented (such as feet). In small-scale image
data, each pixel represents a large area on the ground, such as NOAA AVHRR data, with
a spatial resolution of 1.1 km.
SML—see Spatial Modeler Language.
S/N—see Signal-to-Noise ratio.
softcopy photogrammetry—see digital photogrammetry.
Soil Conservation Service—(SCS) an organization that produces soil maps (Fisher, 1991) with
guidelines provided by the USDA.
SOM—see Space Oblique Mercator.
source coordinates—in the rectification process, the input coordinates.
Spaceborne Imaging Radar—(SIR-A, SIR-B, and SIR-C) the radar sensors that fly aboard
NASA space shuttles. SIR-A flew aboard the 1981 NASA Space Shuttle Columbia. That
data and SIR-B data from a later Space Shuttle mission are still valuable sources of radar
data. The SIR-C sensor was launched in 1994.
Space Oblique Mercator—(SOM) a projection available in ERDAS IMAGINE that is nearly
conformal and has little scale distortion within the sensing range of an orbiting mapping
satellite such as Landsat.
spatial data transfer standard—(SDTS) “a robust way of transferring Earth-referenced spatial
data between dissimilar computer systems with the potential for no information loss”
(United States Geological Survey, 1999c).
Spatial Database Engine—(SDE) An ESRI vector format that manages a database theme. SDE
allows you to access databases that may contain large amounts of information
(Environmental Systems Research Institute, 1996).
spatial enhancement—the process of modifying the values of pixels in an image relative to the
pixels that surround them.
spatial frequency—the difference between the highest and lowest values of a contiguous set of
pixels.
Spatial Modeler Language—(SML) a script language used internally by Model Maker (Spatial
Modeler) to execute the operations specified in the graphical models you create. SML
can also be used to write application-specific models.
spatial resolution—a measure of the smallest object that can be resolved by the sensor, or the
area on the ground represented by each pixel.
speckle noise—the light and dark pixel noise that appears in radar data.
spectral distance—the distance in spectral space, computed as Euclidean distance in n dimensions,
where n is the number of bands.
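A minimal sketch of the computation, assuming two invented pixel measurement vectors from a four-band image (n = 4):

    import numpy as np

    d = np.array([105.0, 88.0, 64.0, 31.0])             # data file values of one pixel, one per band
    e = np.array([99.0, 91.0, 70.0, 25.0])              # data file values of another pixel
    spectral_distance = np.sqrt(np.sum((d - e) ** 2))   # Euclidean distance in n dimensions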
spectral enhancement—the process of modifying the pixels of an image based on the original
values of each pixel, independent of the values of surrounding pixels.
spectral resolution—the specific wavelength intervals in the electromagnetic spectrum that a
sensor can record.
spectral space—an abstract space that is defined by spectral units (such as an amount of
electromagnetic radiation). The notion of spectral space is used to describe enhancement
and classification techniques that compute the spectral distance between n-dimensional
vectors, where n is the number of bands in the data.
spectroscopy—the study of the absorption and reflection of electromagnetic radiation (EMR)
waves.
spliced map—a map that is printed on separate pages, but intended to be joined together into
one large map. Neatlines and tick marks appear only on the pages which make up the
outer edges of the whole map.
spline—the process of smoothing or generalizing all currently selected lines using a specified
grain tolerance during vector editing.
split—the process of making two lines from one by adding a node.
SPOT—a series of Earth-orbiting satellites operated by the Centre National d’Etudes Spatiales
(CNES) of France.
SRPE—see SDTS Raster Profile and Extensions.
STA—see statistics file.
standard deviation—(SD) 1. the square root of the variance of a set of values which is used as
a measurement of the spread of the values. 2. a neighborhood analysis technique that
outputs the standard deviation of the data file values of a user-specified window.
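The sketch below illustrates both senses of the term; the sample values, the 3 × 3 window, and the use of NumPy and SciPy are assumptions made for the example, not the ERDAS IMAGINE implementation:

    import numpy as np
    from scipy import ndimage

    values = np.array([12.0, 15.0, 11.0, 18.0, 14.0])
    sd = np.sqrt(np.var(values))                # sense 1: square root of the variance

    image = np.random.rand(50, 50)              # a hypothetical one-band layer
    sd_layer = ndimage.generic_filter(image, np.std, size=3)   # sense 2: SD within each 3 x 3 window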
standard meridian—see standard parallel.
standard parallel—the line of latitude where the surface of a globe conceptually intersects with
the surface of the projection cylinder or cone.
statement—in script models, properly formatted lines that perform a specific task in a model.
Statements fall into the following categories: declaration, assignment, show, view, set,
macro definition, and quit.
statistical clustering—a clustering method that tests 3 × 3 sets of pixels for homogeneity, and
builds clusters only from the statistics of the homogeneous sets of pixels.
statistics file—(STA) an ERDAS IMAGINE Ver. 7.X trailer file for LAN data that contains
statistics about the data.
stereographic—1. the process of projecting onto a tangent plane from the opposite side of the
Earth. 2. the process of acquiring images at angles on either side of the vertical.
stereopair—a set of two remotely-sensed images that overlap, providing two views of the
terrain in the overlap area.
stereo-scene—achieved when two images of the same area are acquired on different days from
different orbits, one taken east of nadir and the other taken west of nadir.
stream mode—a digitizing mode in which vertices are generated continuously while the
digitizer keypad is in proximity to the surface of the digitizing tablet.
string—a line of text. A string usually has a fixed length (number of characters).
strip of photographs—consists of images captured along a flight-line, normally with an overlap
of 60% for stereo coverage. All photos in the strip are assumed to be taken at
approximately the same flying height and with a constant distance between exposure
stations. Camera tilt relative to the vertical is assumed to be minimal.
striping—a data error that occurs if a detector on a scanning system goes out of adjustment—
that is, it provides readings consistently greater than or less than the other detectors for
the same band over the same ground cover. Also called banding.
structure based matching—see relation based matching.
subsetting—the process of breaking out a portion of a large image file into one or more smaller
files.
suitability/capability analysis—(SCA) a system designed to analyze many data layers to
produce a plan map. Discussed in McHarg’s book Design with Nature (Star and Estes,
1990).
sum—a neighborhood analysis technique that outputs the total of the data file values in a user-
specified window.
Sun raster data—imagery captured from a Sun monitor display.
sun-synchronous—a term used to describe Earth-orbiting satellites whose orbits keep pace with the
Earth’s motion around the sun, so that the satellite crosses the equator at approximately the
same local solar time on every pass.
supervised training—any method of generating signatures for classification, in which the
analyst is directly involved in the pattern recognition process. Usually, supervised
training requires the analyst to select training samples from the data that represent
patterns to be classified.
surface—a one-band file in which the value of each pixel is a specific elevation value.
swath width—in a satellite system, the total width of the area on the ground covered by the
scanner.
SWIR—see short wave infrared region.
symbol—an annotation element that consists of other elements (sub-elements). See plan
symbol, profile symbol, and function symbol.
symbolization—a method of displaying vector data in which attribute information is used to
determine how features are rendered. For example, points indicating cities and towns can
appear differently based on the population field stored in the attribute database for each
of those areas.
Synthetic Aperture Radar—(SAR) a radar sensor that uses its side-looking, fixed antenna to
create a synthetic aperture. SAR sensors are mounted on satellites, aircraft, and the
NASA Space Shuttle. The sensor transmits and receives as it is moving. The signals
received over a time interval are combined to create the image.
T
table object—in Model Maker (Spatial Modeler), a series of numeric values or character strings.
tablet digitizing—the process of using a digitizing tablet to transfer nondigital data such as maps
or photographs to vector format.
Tagged Image File Format—see TIFF data.
tangent—an intersection at one point or line. In the case of conic or cylindrical map projections,
a tangent cone or cylinder intersects the surface of a globe in a circle.
Tasseled Cap transformation—an image enhancement technique that optimizes data viewing
for vegetation studies.
TEM—see transverse electromagnetic wave.
temporal resolution—the frequency with which a sensor obtains imagery of a particular area.
terrain analysis—the processing and graphic simulation of elevation data.
terrain data—elevation data expressed as a series of x, y, and z values that are either regularly
or irregularly spaced.
text printer—a device used to print characters onto paper, usually used for lists, documents, and
reports. If a color printer is not necessary or is unavailable, images can be printed using
a text printer. Also called a line printer.
thematic data—raster data that are qualitative and categorical. Thematic layers often contain
classes of related information, such as land cover, soil type, slope, etc. In ERDAS
IMAGINE, thematic data are stored in image files.
thematic layer—see thematic data.
thematic map—a map illustrating the class characterizations of a particular spatial variable such
as soils, land cover, hydrology, etc.
Thematic Mapper—(TM) Landsat data acquired in seven bands with a spatial resolution of 30
× 30 meters.
thematic mapper simulator—(TMS) an instrument “designed to simulate spectral, spatial, and
radiometric characteristics of the Thematic Mapper sensor on the Landsat-4 and 5
spacecraft” (National Aeronautics and Space Administration, 1995b).
theme—a particular type of information, such as soil type or land use, that is represented in a
layer.
3D perspective view—a simulated three-dimensional view of terrain.
threshold—a limit, or cutoff point, usually a maximum allowable amount of error in an
analysis. In classification, thresholding is the process of identifying a maximum distance
between a pixel and the mean of the signature to which it was classified.
tick marks—small lines along the edge of the image area or neatline that indicate regular
intervals of distance.
tie point—a point whose ground coordinates are not known, but which can be recognized visually in
the overlap or sidelap area between two images.
TIFF data—Tagged Image File Format data is a raster file format developed by Aldus, Corp.
(Seattle, Washington), in 1986 for the easy transportation of data.
TIGER—see Topologically Integrated Geographic Encoding and Referencing System.
tiled data—the storage format of ERDAS IMAGINE image files.
TIN—see triangulated irregular network.
TM—see Thematic Mapper.
TMS—see thematic mapper simulator.
TNDVI—see Transformed Normalized Distribution Vegetative Index.
to-node—the last vertex in a line.
topocentric coordinate system—a coordinate system that has its origin at the center of the
image on the Earth ellipsoid. The three perpendicular coordinate axes are defined on a
tangential plane at this center point. The x-axis is oriented eastward, the y-axis
northward, and the z-axis is vertical to the reference plane (up).
topographic—a term indicating elevation.
topographic data—a type of raster data in which pixel values represent elevation.
topographic effect—a distortion found in imagery from mountainous regions that results from
the differences in illumination due to the angle of the sun and the angle of the terrain.
topographic map—a map depicting terrain relief.
Topologically Integrated Geographic Encoding and Referencing System—(TIGER) files are
line network products of the US Census Bureau.
Topological Vector Profile—(TVP) a profile of SDTS that covers attributed vector data.
topology—a term that defines the spatial relationships between features in a vector layer.
total RMS error—the total root mean square (RMS) error for an entire image. Total RMS error
takes into account the RMS error of each GCP.
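One common formulation, sketched below with invented GCP coordinates, squares each GCP’s X and Y residuals, averages them over all GCPs, and takes the square root; this is an illustrative sketch rather than the ERDAS IMAGINE code:

    import numpy as np

    # Hypothetical source and retransformed coordinates (x, y) for three GCPs.
    source = np.array([[10.0, 20.0], [55.0, 42.0], [98.0, 77.0]])
    retransformed = np.array([[10.4, 19.7], [54.6, 42.5], [98.2, 76.8]])

    x_res = retransformed[:, 0] - source[:, 0]          # X residual of each GCP
    y_res = retransformed[:, 1] - source[:, 1]          # Y residual of each GCP
    total_rms = np.sqrt(np.mean(x_res**2 + y_res**2))   # total RMS error over all GCPs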
trailer file—1. an ERDAS IMAGINE Ver. 7.X file with a .TRL extension that accompanies a
GIS file and contains information about the GIS classes. 2. a file following the image data
on a 9-track tape.
training—the process of defining the criteria by which patterns in image data are recognized
for the purpose of classification.
training field—the geographical area represented by the pixels in a training sample. Usually, it
is previously identified with the use of ground truth data or aerial photography. Also
called training site.
training sample—a set of pixels selected to represent a potential class. Also called sample.
transformation matrix—a set of coefficients that is computed from GCPs, and used in
polynomial equations to convert coordinates from one system to another. The size of the
matrix depends upon the order of the transformation.
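As an illustration, a first-order (affine) transformation matrix can be estimated from GCPs by least squares; the GCP coordinates below are invented, and the NumPy routine is only one way to solve the system:

    import numpy as np

    # Hypothetical GCPs: source (file) coordinates and reference (map) coordinates.
    src = np.array([[10.0, 12.0], [200.0, 15.0], [190.0, 210.0], [20.0, 220.0]])
    ref = np.array([[500000.0, 4100000.0], [505700.0, 4100100.0],
                    [505400.0, 4094200.0], [500300.0, 4094100.0]])

    # First-order polynomials: x' = a0 + a1*x + a2*y   and   y' = b0 + b1*x + b2*y
    A = np.column_stack([np.ones(len(src)), src[:, 0], src[:, 1]])
    a = np.linalg.lstsq(A, ref[:, 0], rcond=None)[0]
    b = np.linalg.lstsq(A, ref[:, 1], rcond=None)[0]
    transformation_matrix = np.vstack([a, b])   # 2 x 3 matrix of polynomial coefficients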
U
union—the area or set that is the combination of two or more input areas or sets without
repetition.
United States Department of Agriculture—(USDA) an organization regulating the agriculture
of the US. For more information, visit the web site www.usda.gov.
United States Geological Survey—(USGS) an organization dealing with biology, geology,
mapping, and water. For more information, visit the web site www.usgs.gov.
Universal Polar Stereographic—(UPS) a mapping system used in conjunction with the Polar
Stereographic projection that makes the scale factor at the pole 0.994.
Universal Transverse Mercator—(UTM) UTM is an international plane (rectangular)
coordinate system developed by the US Army that extends around the world from 84°N
to 80°S. The world is divided into 60 zones each covering six degrees longitude. Each
zone extends three degrees eastward and three degrees westward from its central
meridian. Zones are numbered consecutively west to east from the 180° meridian.
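The zone arithmetic can be sketched as follows; the helper function names are invented for the example:

    def utm_zone(longitude_deg):
        # Zones are 6 degrees wide and numbered 1-60 eastward from the 180 degree meridian.
        return int(((longitude_deg + 180.0) % 360.0) // 6) + 1

    def central_meridian_deg(zone):
        # The central meridian lies 3 degrees east of the zone's western edge.
        return zone * 6 - 183

    print(utm_zone(-84.39), central_meridian_deg(utm_zone(-84.39)))   # zone 16, central meridian -87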
unscaled map—a hardcopy map that is not referenced to any particular scale, in which one file
pixel is equal to one printed pixel.
unsplit—the process of joining two lines by removing a node.
unsupervised training—a computer-automated method of pattern recognition in which some
parameters are specified by the user and are used to uncover statistical patterns that are
inherent in the data.
UPS—see Universal Polar Stereographic.
V
variable—1. a numeric value that is changeable, usually represented with a letter. 2. a thematic
layer. 3. one band of a multiband image. 4. in models, objects which have been associated
with a name using a declaration statement.
variable rate technology—(VRT) in precision agriculture, used with GPS data. VRT relies on
the use of a VRT controller box connected to a GPS and the pumping mechanism for a
tank full of fertilizers/pesticides/seeds/water/etc.
variance—a measure of the spread (dispersion) of a set of values about their mean; the square of
the standard deviation.
vector—1. a line element. 2. a one-dimensional matrix, having either one row (1 by j), or one
column (i by 1). See also mean vector, measurement vector.
vector data—data that represent physical forms (elements) such as points, lines, and polygons.
Only the vertices of vector data are stored, instead of every point that makes up the
element. ERDAS IMAGINE vector data are based on the ArcInfo data model and are
stored in directories, rather than individual files. See workspace.
vector layer—a set of vector features and their associated attributes.
Vector Quantization—(VQ) used to compress frames of RPF data.
velocity vector—the satellite’s velocity measured as a vector through a point on the spheroid.
verbal statement—a statement that relates the distance on the map to the distance on the
ground. A verbal statement describing a scale of 1:1,000,000 is approximately 1 inch to
16 miles. The units on the map and on the ground do not have to be the same in a verbal
statement.
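The 1:1,000,000 example can be checked with simple arithmetic; the sketch below uses 63,360 (5,280 feet times 12 inches) as the number of inches in a statute mile:

    scale_denominator = 1_000_000            # map scale 1:1,000,000
    inches_per_mile = 5280 * 12              # 63,360 inches in one mile
    miles_per_map_inch = scale_denominator / inches_per_mile
    print(round(miles_per_map_inch, 1))      # about 15.8, roughly 1 inch to 16 miles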
vertex—a point that defines an element, such as a point where a line changes direction.
vertical control—the vertical distribution of GCPs in aerial triangulation (z, or elevation).
vertices—plural of vertex.
viewshed analysis—the calculation of all areas that can be seen from a particular viewing point
or path.
viewshed map—a map showing only those areas visible (or invisible) from a specified point(s).
VIS/IR—see visible/infrared imagery.
visible/infrared imagery—(VIS/IR) a type of multispectral data set that is based on the
reflectance spectrum of the material of interest.
volume—a medium for data storage, such as a magnetic disk or a tape.
volume set—the complete set of tapes that contains one image.
VPF—see vector product format.
W
wavelet—“a waveform that is bounded in both frequency and duration” (Free On-Line
Dictionary of Computing, 1999e).
weight—the number of values in a set; particularly, in clustering algorithms, the weight of a
cluster is the number of pixels that have been averaged into it.
weighting factor—a parameter that increases the importance of an input variable. For example,
in GIS indexing, one input layer can be assigned a weighting factor that multiplies the
class values in that layer by that factor, causing that layer to have more importance in the
output file.
weighting function—in surfacing routines, a function applied to elevation values for
determining new output values.
WGS—see World Geodetic System.
Wide Field Sensor—(WiFS) a sensor aboard IRS-1C with 188 m spatial resolution.
WiFS—see Wide Field Sensor.
working window—the image area to be used in a model. This can be set to either the union or
intersection of the input layers.
workspace—a location that contains one or more vector layers. A workspace is made up of
several directories.
World Geodetic System—(WGS) an Earth ellipsoid (spheroid) with multiple versions, including
WGS 66, WGS 72, and WGS 84.
write ring—a protection device that allows data to be written to a 9-track tape when the ring is
in place, but not when it is removed.
X
X residual—in RMS error reports, the distance between the source X coordinate and the
retransformed X coordinate.
X RMS error—the root mean square (RMS) error in the X direction.
Y
Y residual—in RMS error reports, the distance between the source Y coordinate and the
retransformed Y coordinate.
Y RMS error—the root mean square (RMS) error in the Y direction.
Z
zone distribution rectangles—(ZDRs) the images into which each distribution rectangle (DR) is
divided in ADRG data.
zoom—the process of expanding displayed pixels on an image so they can be more closely
studied. Zooming is similar to magnification, except that it changes the display only
temporarily, leaving image memory the same.
Bibliography
Works Cited
Ackermann, 1983
Ackermann, F., 1983. High precision digital image correlation. Paper presented at 39th Photogrammetric Week, Institute of
Photogrammetry, University of Stuttgart, 231-243.
Adams et al, 1989
Adams, J.B., M. O. Smith, and A. R. Gillespie. 1989. Simple Models for Complex Natural Surfaces: A Strategy for the
Hyperspectral Era of Remote Sensing. Paper presented at Institute of Electrical and Electronics Engineers, Inc. (IEEE)
International Geosciences and Remote Sensing (IGARSS) 12th Canadian Symposium on Remote Sensing, Vancouver,
British Columbia, Canada, July 1989, I:16-21.
Agouris and Schenk, 1996
Agouris, P., and T. Schenk. 1996. Automated Aerotriangulation Using Multiple Image Multipoint Matching. Photogrammetric
Engineering and Remote Sensing 62 (6): 703-710.
Akima, 1978
Akima, H. 1978. A Method of Bivariate Interpolation and Smooth Surface Fitting for Irregularly Distributed Data Points.
Association for Computing Machinery (ACM) Transactions on Mathematical Software 4 (2): 148-159.
American Society of Photogrammetry, 1980
American Society of Photogrammetry (ASP). 1980. Photogrammetric Engineering and Remote Sensing XLVI:10:1249.
Atkinson, 1985
Atkinson, P. 1985. Preliminary Results of the Effect of Resampling on Thematic Mapper Imagery. 1985 ACSM-ASPRS Fall
Convention Technical Papers. Falls Church, Virginia: American Society for Photogrammetry and Remote Sensing and
American Congress on Surveying and Mapping.
Atlantis Scientific, Inc., 1997
Atlantis Scientific, Inc. 1997. Sources of SAR Data. Retrieved October 2, 1999, from
http://www.atlsci.com/library/sar_sources.html
Bauer and Müller, 1972
Bauer, H., and J. Müller. 1972. Height accuracy of blocks and bundle block adjustment with additional parameters.
International Society for Photogrammetry and Remote Sensing (ISPRS) 12th Congress, Ottawa.
Benediktsson et al, 1990
Benediktsson, J.A., P. H. Swain, O. K. Ersoy, and D. Hong 1990. Neural Network Approaches Versus Statistical Methods in
Classification of Multisource Remote Sensing Data. Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Transactions on Geoscience and Remote Sensing 28 (4): 540-551.
Berk et al, 1989
Berk, A., L. S. Bernstein, and D. C. Robertson. 1989. MODTRAN: A Moderate Resolution Model for LOWTRAN 7. Airforce
Geophysics Laboratory Technical Report GL-TR-89-0122, Hanscom AFB, MA.
Bernstein, 1983
Bernstein, R. 1983. Image Geometry and Rectification. Chapter 21 in Manual of Remote Sensing. Ed. R. N. Colwell. Falls
Church, Virginia: American Society of Photogrammetry.
Blom and Daily, 1982
Blom, R. G., and M. Daily. 1982. Radar Image Processing for Rock-Type Discrimination. Institute of Electrical and Electronics
Engineers, Inc. (IEEE) Transactions on Geoscience and Remote Sensing 20 (3).
Buchanan, 1979
Buchanan, M. D. 1979. Effective Utilization of Color in Multidimensional Data Presentation. Paper presented at the Society of
Photo-Optical Engineers, 199:9-19.
Cannon, 1983
Cannon, T. M. 1983. Background Pattern Removal by Power Spectral Filtering. Applied Optics 22 (6): 777-779.
Center for Health Applications of Aerospace Related Technologies (CHAART), The. 1998. Sensor Specifications: SeaWiFS.
Retrieved December 28, 2001, from http://geo.arc.nasa.gov/sge/health/sensor/sensors/seastar.html
Center for Health Applications of Aerospace Related Technologies, 2000a
———. 2000a. Sensor Specifications: Ikonos. Retrieved December 28, 2001, from
http://geo.arc.nasa.gov/sge/health/sensor/sensors/ikonos.html
Center for Health Applications of Aerospace Related Technologies, 2000b
———. 2000b. Sensor Specifications: Landsat. Retrieved December 31, 2001, from
http://geo.arc.nasa.gov/sge/health/sensor/sensors/landsat.html
Center for Health Applications of Aerospace Related Technologies, 2000c
———. 2000c. Sensor Specifications: SPOT. Retrieved December 31, 2001, from
http://geo.arc.nasa.gov/sge/health/sensor/sensors/spot.html
Centre National D’Etudes Spatiales, 1998
Centre National D’Etudes Spatiales (CNES). 1998. CNES: Centre National D’Etudes Spatiales. Retrieved October 25, 1999,
from http://sads.cnes.fr/ceos/cdrom-98/ceos1/cnes/gb/lecnes.htm
Chahine et al, 1983
Chahine, M. T., D. J. McCleese, P. W. Rosenkranz, and D. H. Staelin. 1983. Interaction Mechanisms within the Atmosphere.
Chapter 5 in Manual of Remote Sensing. Ed. R. N. Colwell. Falls Church, Virginia: American Society of
Photogrammetry.
Chavez et al, 1977
Chavez, P. S., Jr., G. L. Berlin, and W. B. Mitchell. 1977. Computer Enhancement Techniques of Landsat MSS Digital Images
for Land Use/Land Cover Assessments. Remote Sensing Earth Resource. 6:259.
Chavez and Berlin, 1986
Chavez, P. S., Jr., and G. L. Berlin. 1986. Restoration Techniques for SIR-B Digital Radar Images. Paper presented at the Fifth
Thematic Conference: Remote Sensing for Exploration Geology, Reno, Nevada, September/October 1986.
Chavez et al, 1991
Chavez, P. S., Jr., S. C. Sides, and J. A. Anderson. 1991. Comparison of Three Different Methods to Merge Multiresolution and
Multispectral Data: Landsat TM and SPOT Panchromatic. Photogrammetric Engineering & Remote Sensing 57 (3): 295-
303.
Clark and Roush, 1984
Clark, R. N., and T. L. Roush. 1984. Reflectance Spectroscopy: Quantitative Analysis Techniques for Remote Sensing
Applications. Journal of Geophysical Research 89 (B7): 6329-6340.
Clark et al, 1990
Clark, R. N., A. J. Gallagher, and G. A. Swayze. 1990. “Material Absorption Band Depth Mapping of Imaging Spectrometer
Data Using a Complete Band Shape Least-Squares Fit with Library Reference Spectra”. Paper presented at the Second
Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) Conference, Pasadena, California, June 1990. Jet Propulsion
Laboratory Publication 90-54:176-186.
Colby, 1991
Colby, J. D. 1991. Topographic Normalization in Rugged Terrain. Photogrammetric Engineering & Remote Sensing 57 (5):
531-537.
Colwell, 1983
Colwell, R. N., ed. 1983. Manual of Remote Sensing. 2d ed. Falls Church, Virginia: American Society of Photogrammetry.
Congalton, 1991
Congalton, R. 1991. A Review of Assessing the Accuracy of Classifications of Remotely Sensed Data. Remote Sensing of
Environment 37: 35-46.
Conrac Corporation, 1980
Conrac Corporation. 1980. Raster Graphics Handbook. New York: Van Nostrand Reinhold.
Crane, 1971
Crane, R. B. 1971. “Preprocessing Techniques to Reduce Atmospheric and Sensor Variability in Multispectral Scanner Data.”
Proceedings of the 7th International Symposium on Remote Sensing of Environment. Ann Arbor, Michigan, p. 1345.
Crippen, 1987
Crippen, R. E. 1987. The Regression Intersection Method of Adjusting Image Data for Band Ratioing. International Journal of
Remote Sensing 8 (2): 137-155.
Crippen, 1989a
———. 1989a. A Simple Spatial Filtering Routine for the Cosmetic Removal of Scan-Line Noise from Landsat TM P-Tape
Imagery. Photogrammetric Engineering & Remote Sensing 55 (3): 327-331.
Crippen, 1989b
———. 1989b. Development of Remote Sensing Techniques for the Investigation of Neotectonic Activity, Eastern Transverse
Ranges and Vicinity, Southern California. Ph.D. diss., University of California, Santa Barbara.
Crist et al, 1986
Crist, E. P., R. Laurin, and R. C. Cicone. 1986. Vegetation and Soils Information Contained in Transformed Thematic Mapper
Data. Paper presented at the International Geosciences and Remote Sensing Symposium (IGARSS) ’86, ESA
Publications Division, ESA SP-254.
Crist and Kauth, 1986
Crist, E. P., and R. J. Kauth. 1986. The Tasseled Cap De-Mystified. Photogrammetric Engineering & Remote Sensing 52 (1):
81-86.
Croft (Holcomb), 1993
Croft, F. C., N. L. Faust, and D. W. Holcomb. 1993. Merging Radar and VIS/IR Imagery. Paper presented at the Ninth Thematic
Conference on Geologic Remote Sensing, Pasadena, California, February 1993.
Cullen, 1972
Cullen, C. G. 1972. Matrices and Linear Transformations. 2d ed. Reading, Massachusetts: Addison-Wesley Publishing
Company.
Daily, 1983
Daily, M. 1983. Hue-Saturation-Intensity Split-Spectrum Processing of Seasat Radar Imagery. Photogrammetric Engineering
& Remote Sensing 49 (3): 349-355.
Dent, 1985
Dent, B. D. 1985. Principles of Thematic Map Design. Reading, Massachusetts: Addison-Wesley Publishing Company.
Earth Remote Sensing Data Analysis Center, 2000
Earth Remote Sensing Data Analysis Center (ERSDAC). 2000. JERS-1 OPS. Retrieved December 28, 2001, from
http://www.ersdac.or.jp/Projects/JERS1/JOPS/JOPS_E.html
Eberlein and Weszka, 1975
Eberlein, R. B., and J. S. Weszka. 1975. Mixtures of Derivative Operators as Edge Detectors. Computer Graphics and Image
Processing 4: 180-183.
Ebner, 1976
Ebner, H. 1976. Self-calibrating block adjustment. Bildmessung und Luftbildwesen 44: 128-139.
Elachi, 1987
Elachi, C. 1987. Introduction to the Physics and Techniques of Remote Sensing. New York: John Wiley & Sons.
El-Hakim and Ziemann, 1984
El-Hakim, S.F. and H. Ziemann. 1984. A Step-by-Step Strategy for Gross-Error Detection. Photogrammetric Engineering &
Remote Sensing 50 (6): 713-718.
Environmental Systems Research Institute, 1990
Environmental Systems Research Institute, Inc. 1990. Understanding GIS: The ArcInfo Method. Redlands, California: ESRI,
Incorporated.
Environmental Systems Research Institute, 1992
———. 1992. ARC Command References 6.0. Redlands. California: ESRI, Incorporated.
Environmental Systems Research Institute, 1992
———. 1992. Data Conversion: Supported Data Translators. Redlands, California: ESRI, Incorporated.
Environmental Systems Research Institute, 1992
———. 1992. Map Projections & Coordinate Management: Concepts and Procedures. Redlands, California: ESRI,
Incorporated.
Environmental Systems Research Institute, 1996
———. 1997. ArcInfo. Version 7.2.1. ArcInfo HELP. Redlands, California: ESRI, Incorporated.
European Space Agency, 1995
European Space Agency (ESA). 1995. ERS-2: A Continuation of the ERS-1 Success, by G. Duchossois and R. Zobl. Retrieved
October 1, 1999, from http://esapub.esrin.esa.it/bulletin/bullet83/ducho83.htm
European Space Agency, 1997
———. 1997. SAR Mission Planning for ERS-1 and ERS-2, by S. D’Elia and S. Jutz. Retrieved October 1, 1999, from
http://esapub.esrin.esa.it/bulletin/bullet90/b90delia.htm
Fahnestock and Schowengerdt, 1983
Fahnestock, J. D., and R. A. Schowengerdt. 1983. Spatially Variant Contrast Enhancement Using Local Range Modification.
Optical Engineering 22 (3): 378-381.
Faust, 1989
Faust, N. L. 1989. Image Enhancement. Volume 20, Supplement 5 of Encyclopedia of Computer Science and Technology. Ed.
A. Kent and J. G. Williams. New York: Marcel Dekker, Inc.
Faust et al, 1991
Faust, N. L., W. Sharp, D. W. Holcomb, P. Geladi, and K. Esbenson. 1991. Application of Multivariate Image Analysis (MIA)
to Analysis of TM and Hyperspectral Image Data for Mineral Exploration. Paper presented at the Eighth Thematic
Conference on Geologic Remote Sensing, Denver, Colorado, April/May 1991.
Fisher, 1991
Fisher, P. F. 1991. Spatial Data Sources and Data Problems. In Geographical Information Systems: Principles and Applications.
Ed. D. J. Maguire, M. F. Goodchild, and D. W. Rhind. New York: Longman Scientific & Technical.
Flaschka, 1969
Flaschka, H. A. 1969. Quantitative Analytical Chemistry: Vol 1. New York: Barnes & Noble, Inc.
Förstner and Gülch, 1987
Förstner, W. and E. Gülch. 1987. A fast operator for detection and precise location of distinct points, corners and centers of
circular features. Paper presented at the Intercommission Conference on Fast Processing of Photogrammetric Data,
Interlaken, Switzerland, June 1987, 281-305.
Fraser, 1986
Fraser, S. J., et al. 1986. “Targeting Epithermal Alteration and Gossans in Weathered and Vegetated Terrains Using Aircraft
Scanners: Successful Australian Case Histories.” Paper presented at the fifth Thematic Conference: Remote Sensing for
Exploration Geology, Reno, Nevada.
Free On-Line Dictionary of Computing, 1999a
Free On-Line Dictionary Of Computing. 1999a. American Standard Code for Information Interchange. Retrieved October 25,
1999, from http://foldoc.doc.ic.ac.uk/foldoc
Free On-Line Dictionary of Computing, 1999b
———. 1999b. central processing unit. Retrieved October 25, 1999, from http://foldoc.doc.ic.ac.uk/foldoc
Free On-Line Dictionary of Computing, 1999c
———. 1999d. random-access memory. Retrieved November 11, 1999, from http://foldoc.doc.ic.ac.uk/foldoc
Free On-Line Dictionary of Computing, 1999e
Frost, V. S., J. A. Stiles, K. S. Shanmugan, and J. C. Holtzman. 1982. A Model for Radar Images and Its Application to Adaptive
Digital Filtering of Multiplicative Noise. Institute of Electrical and Electronics Engineers, Inc. (IEEE) Transactions on
Pattern Analysis and Machine Intelligence PAMI-4 (2): 157-166.
Gonzalez and Wintz, 1977
Gonzalez, R. C., and P. Wintz. 1977. Digital Image Processing. Reading, Massachusetts: Addison-Wesley Publishing
Company.
Gonzalez and Woods, 2001
Gonzalez, R. and Woods, R., Digital Image Processing. Prentice Hall, NJ, 2001.
Green and Craig, 1985
Green, A. A., and M. D. Craig. 1985. Analysis of Aircraft Spectrometer Data with Logarithmic Residuals. Paper presented at
the AIS Data Analysis Workshop, Pasadena, California, April 1985. Jet Propulsion Laboratory (JPL) Publication 85
(41): 111-119.
Grün, 1978
Grün, A., 1978. Experiences with self calibrating bundle adjustment. Paper presented at the American Congress on Surveying
and Mapping/American Society of Photogrammetry (ACSM-ASP) Convention, Washington, D.C., February/March
1978.
Grün and Baltsavias, 1988
Grün, A., and E. P. Baltsavias. 1988. Geometrically constrained multiphoto matching. Photogrammetric Engineering and
Remote Sensing 54 (5): 633-641.
Haralick, 1979
Haralick, R. M. 1979. Statistical and Structural Approaches to Texture. Paper presented at meeting of the Institute of Electrical
and Electronics Engineers, Inc. (IEEE), Seattle, Washington, May 1979, 67 (5): 786-804.
Heipke, 1996
Heipke, C. 1996. Automation of interior, relative and absolute orientation. International Archives of Photogrammetry and
Remote Sensing 31(B3): 297-311.
Helava, 1988
Helava, U.V. 1988. Object space least square correlation. International Archives of Photogrammetry and Remote Sensing 27
(B3): 321-331.
Hodgson and Shelley, 1994
Hodgson, M. E., and B. M. Shelley. 1994. Removing the Topographic Effect in Remotely Sensed Imagery. ERDAS Monitor, 6
(1): 4-6.
Hord, 1982
Hord, R. M. 1982. Digital Image Processing of Remotely Sensed Data. New York: Academic Press.
Iron and Petersen, 1981
Iron, J. R., and G. W. Petersen. 1981. Texture Transforms of Remote Sensing Data. Remote Sensing of Environment 11:359-
370.
Jacobsen, 1980
Jacobsen, K. 1980. Vorschläge zur Konzeption und zur Bearbeitung von Bündelblockausgleichungen. Ph.D. dissertation,
wissenschaftliche Arbeiten der Fachrichtung Vermessungswesen der Universität Hannover, No. 102.
Jacobsen, 1982
———. 1982. Programmgesteuerte Auswahl zusäetzlicher Parameter. Bildmessung und Luftbildwesen, p. 213-217.
Jacobsen, 1984
———. 1984. Experiences in blunder detection for Aerial Triangulation. Paper presented at International Society for
Photogrammetry and Remote Sensing (ISPRS) 15th Congress, Rio de Janeiro, Brazil, June 1984.
Jensen, 1986
Jensen, J. R. 1986. Introductory Digital Image Processing: A Remote Sensing Perspective. Englewood Cliffs, New Jersey:
Prentice-Hall.
Jensen, 1996
Jensen, J. R. 1996. Introductory Digital Image Processing: A Remote Sensing Perspective. 2d ed. Englewood Cliffs, New
Jersey: Prentice-Hall.
Jensen, J. R., et al. 1983. Urban/Suburban Land Use Analysis. Chapter 30 in Manual of Remote Sensing. Ed. R. N. Colwell.
Falls Church, Virginia: American Society of Photogrammetry.
Johnston, 1980
Johnston, R. J. 1980. Multivariate Statistical Analysis in Geography: A Primer on the General Linear Model. Essex, England:
Longman Group Ltd.
Jordan and Beck, 1999
Jordan, L. E., III, and L. Beck. 1999. NITFS—The National Imagery Transmission Format Standard. Atlanta, Georgia: ERDAS,
Inc.
Kidwell, 1988
Kidwell, K. B., ed. 1988. NOAA Polar Orbiter Data (TIROS-N, NOAA-6, NOAA-7, NOAA-8, NOAA-9, NOAA-10, and NOAA-
11) Users Guide. Washington, DC: National Oceanic and Atmospheric Administration.
King et al, 2001
King, Roger and Wang, Jianwen, “A Wavelet Based Algorithm for Pan Sharpening Landsat 7 Imagery”, 2001.
Kloer, 1994
Kloer, B. R. 1994. Hybrid Parametric/Non-Parametric Image Classification. Paper presented at the ACSM-ASPRS Annual
Convention, Reno, Nevada, April 1994.
Kneizys et al, 1988
Kneizys, F. X., E. P. Shettle, L. W. Abreu, J. H. Chettwynd, G. P. Anderson, W. O. Gallery, J. E. A. Selby, and S. A. Clough.
1988. Users Guide to LOWTRAN 7. Hanscom AFB, Massachusetts: Air Force Geophysics Laboratory. AFGL-TR-88-
0177.
Konecny, 1994
Konecny, G. 1994. New Trends in Technology, and their Application: Photogrammetry and Remote Sensing—From Analog to
Digital. Paper presented at the Thirteenth United Nations Regional Cartographic Conference for Asia and the Pacific,
Beijing, China, May 1994.
Konecny and Lehmann, 1984
Konecny, G., and G. Lehmann. 1984. Photogrammetrie. Walter de Gruyter Verlag, Berlin.
Kruse, 1988
Kruse, F. A. 1988. Use of Airborne Imaging Spectrometer Data to Map Minerals Associated with Hydrothermally Altered
Rocks in the Northern Grapevine Mountains, Nevada and California. Remote Sensing of the Environment 24 (1): 31-51.
Krzystek, 1998
Krzystek, P. 1998. On the use of matching techniques for automatic aerial triangulation. Paper presented at meeting of the
International Society for Photogrammetry and Remote Sensing (ISPRS) Commission III Conference, Columbus, Ohio,
July 1998.
Kubik, 1982
Kubik, K. 1982. An error theory for the Danish method. Paper presented at International Society for Photogrammetry and
Remote Sensing (ISPRS) Commission III Symposium, Helsinki, Finland, June 1982.
Larsen and Marx, 1981
Larsen, R. J., and M. L. Marx. 1981. An Introduction to Mathematical Statistics and Its Applications. Englewood Cliffs, New
Jersey: Prentice-Hall, Inc.
Lavreau, 1991
Lavreau, J. 1991. De-Hazing Landsat Thematic Mapper Images. Photogrammetric Engineering & Remote Sensing 57 (10):
1297-1302.
Leberl, 1990
Leberl, F. W. 1990. Radargrammetric Image Processing. Norwood, Massachusetts: Artech House, Inc.
Lee and Walsh, 1984
Lee, J. E., and J. M. Walsh. 1984. Map Projections for Use with the Geographic Information System. U.S. Fish and Wildlife
Service, FWS/OBS-84/17.
Lee, 1981
Lee, J. S. 1981. “Speckle Analysis and Smoothing of Synthetic Aperture Radar Images.” Computer Graphics and Image
Processing 17 (1): 24-32.
Leick, 1990
Leick, A. 1990. GPS Satellite Surveying. New York, New York: John Wiley & Sons.
Lemeshewsky, 1999
Lemeshewsky, George P, “Multispectral multisensor image fusion using wavelet transforms”, in Visual Image Processing VIII,
S. K. Park and R. Juday, Ed., Proc SPIE 3716, pp214-222, 1999.
Lemeshewsky, 2002a
Lemeshewsky, George P, “Multispectral Image sharpening Using a Shift-Invariant Wavelet Transform and Adaptive
Processing of Multiresolution Edges” in Visual Information Processing XI, Z. Rahman and R.A. Schowengerdt, Eds.,
Proc SPIE, v. 4736, 2002b.
Li, 1983
Li, D. 1983. Ein Verfahren zur Aufdeckung grober Fehler mit Hilfe der a posteriori-Varianzschätzung. Bildmessung und
Luftbildwesen 5.
Li, 1985
———. 1985. Theorie und Untersuchung der Trennbarkeit von groben Paßpunktfehlern und systematischen Bildfehlern bei
der photogrammetrischen punktbestimmung. Ph.D. dissertation, Deutsche Geodätische Kommission, Reihe C, No. 324.
Lillesand and Kiefer, 1987
Lillesand, T. M., and R. W. Kiefer. 1987. Remote Sensing and Image Interpretation. New York: John Wiley & Sons, Inc.
Lopes et al, 1990
Lopes, A., E. Nezry, R. Touzi, and H. Laur. 1990. Maximum A Posteriori Speckle Filtering and First Order Textural Models in
SAR Images. Paper presented at the International Geoscience and Remote Sensing Symposium (IGARSS), College Park,
Maryland, May 1990, 3:2409-2412.
Lü, 1988
Lü, Y. 1988. Interest operator and fast implementation. IASPRS 27 (B2), Kyoto, 1988.
Lyon, 1987
Lyon, R. J. P. 1987. Evaluation of AIS-2 Data over Hydrothermally Altered Granitoid Rocks. Proceedings of the Third AIS
Data Analysis Workshop. JPL Pub. 87-30:107-119.
Magellan Corporation, 1999
Magellan Corporation. 1999. GLONASS and the GPS+GLONASS Advantage. Retrieved October 25, 1999, from
http://www.magellangps.com/geninfo/glonass.htm
Maling, 1992
Maling, D. H. 1992. Coordinate Systems and Map Projections. 2d ed. New York: Pergamon Press.
Mallat, 1989
Mallat S.G., "A Theory for Multiresolution Signal Decomposition: The Wavelet Representation", IEEE Transactions on Pattern
Analysis and Machine Intelligence, Volume 11. No 7., 1989.
Marble, 1990
Marble, D. F. 1990. Geographic Information Systems: An Overview. In Introductory Readings in Geographic Information
Systems. Ed. D. J. Peuquet and D. F. Marble. Bristol, Pennsylvania: Taylor & Francis, Inc.
Mayr, 1995
Mayr, W. 1995. Aspects of automatic aerotriangulation. Paper presented at the 45th Photogrammetric Week, Wichmann Verlag,
Karlsruhe, September 1995, 225-234.
Mendenhall and Scheaffer, 1973
Mendenhall, W., and R. L. Scheaffer. 1973. Mathematical Statistics with Applications. North Scituate, Massachusetts: Duxbury
Press.
Merenyi et al, 1996
Merenyi, E., J. V. Taranik, T. Monor, and W. Farrand. March 1996. Quantitative Comparison of Neural Network and
Conventional Classifiers for Hyperspectral Imagery. Paper presented at the Sixth AVIRIS Conference. JPL Pub.
Minnaert and Szeicz, 1961
Minnaert, J. L., and G. Szeicz. 1961. The Reciprocity Principle in Lunar Photometry. Astrophysics Journal 93:403-410.
Nagao, M., and T. Matsuyama. 1978. Edge Preserving Smoothing. Computer Graphics and Image Processing 9:394-407.
National Aeronautics and Space Administration, 1995a
National Aeronautics and Space Administration (NASA). 1995a. Mission Overview. Retrieved October 2, 1999, from
http://southport.jpl.nasa.gov/science/missiono.html
National Aeronautics and Space Administration, 1995b
———. 1995b. Thematic Mapper Simulators (TMS). Retrieved October 2, 1999, from
http://geo.arc.nasa.gov/esdstaff/jskiles/top-down/OTTER/OTTER_docs/DAEDALUS.html
National Aeronautics and Space Administration, 1996
———. 1999. An Overview of SeaWiFS and the SeaStar Spacecraft. Retrieved September 30, 1999, from
http://seawifs.gsfc.nasa.gov/SEAWIFS/SEASTAR/SPACECRAFT.html
National Aeronautics and Space Administration, 2001
———. 2001. Landsat 7 Mission Specifications. Retrieved December 28, 2001, from
http://landsat.gsfc.nasa.gov/project/L7_Specifications.html
National Imagery and Mapping Agency, 1998
National Imagery and Mapping Agency (NIMA). 1998. The National Imagery and Mapping Agency Fact Sheet. Retrieved
November 11, 1999, from http://164.214.2.59/general/factsheets/nimafs.html
National Remote Sensing Agency, 1998
National Remote Sensing Agency, Department of Space, Government of India. 1998. Table 3. Specifications of IRS-ID LISS-
III camera. Retrieved December 28, 2001 from http://202.54.32.164/interface/inter/v8n4/v8n4t_3.html
Needham, 1986
Needham, B. H. 1986. Availability of Remotely Sensed Data and Information from the U.S. National Oceanic and Atmospheric
Administration’s Satellite Data Services Division. Chapter 9 in Satellite Remote Sensing for Resources Development,
edited by Karl-Heinz Szekielda. Gaithersburg, Maryland: Graham & Trotman, Inc.
Oppenheim and Schafer, 1975
Oppenheim, A. V., and R. W. Schafer. 1975. Digital Signal Processing. Englewood Cliffs, New Jersey: Prentice-Hall, Inc.
ORBIMAGE, 1999
ORBIMAGE. 1999. OrbView-3: High-Resolution Imagery in Real-Time. Retrieved October 1, 1999, from
http://www.orbimage.com/satellite/orbview3/orbview3.html
ORBIMAGE, 2000
———. 2000. OrbView-3: High-Resolution Imagery in Real-Time. Retrieved December 31, 2000, from
http://www.orbimage.com/corp/orbimage_system/ov3/
Parent and Church, 1987
Parent, P., and R. Church. 1987. Evolution of Geographic Information Systems as Decision Making Tools. Fundamentals of
Geographic Information Systems: A Compendium. Ed. W. J. Ripple. Bethesda, Maryland: American Society for
Photogrammetry and Remote Sensing and American Congress on Surveying and Mapping.
Pearson, 1990
Pearson, F. 1990. Map Projections: Theory and Applications. Boca Raton, Florida: CRC Press, Inc.
Peli and Lim, 1982
Peli, T., and J. S. Lim. 1982. Adaptive Filtering for Image Enhancement. Optical Engineering 21 (1): 108-112.
Pratt, 1991
Pratt, W. K. 1991. Digital Image Processing. 2d ed. New York: John Wiley & Sons, Inc.
Press, W. H., B. P. Flannery, S. A. Teukolsky, and W. T. Vetterling. 1988. Numerical Recipes in C. New York, New York:
Cambridge University Press.
Prewitt, 1970
Prewitt, J. M. S. 1970. Object Enhancement and Extraction. In Picture Processing and Psychopictorics. Ed. B. S. Lipkin and
A. Resenfeld. New York: Academic Press.
RADARSAT, 1999
RADARSAT. 1999. RADARSAT Specifications. Retrieved September 14, 1999 from http://radarsat.space.gc.ca/
Rado, 1992
Rado, B. Q. 1992. An Historical Analysis of GIS. Mapping Tomorrow’s Resources. Logan, Utah: Utah State University.
Richter, 1990
Richter, R. 1990. A Fast Atmospheric Correction Algorithm Applied to Landsat TM Images. International Journal of Remote
Sensing 11 (1): 159-166.
Ritter and Ruth, 1995
Ritter, N., and M. Ruth. 1995. GeoTIFF Format Specification Rev. 1.0. Retrieved October 4, 1999, from
http://www.remotesensing.org/geotiff/spec/geotiffhome.html
Robinson and Sale, 1969
Robinson, A. H., and R. D. Sale. 1969. Elements of Cartography. 3d ed. New York: John Wiley & Sons, Inc.
Rockinger and Fechner, 1998
Rockinger, O., and Fechner, T., “Pixel-Level Image Fusion”, in Signal Processing, Sensor Fusion and Target Recognition, I.
Kadar, Ed., Proc SPIE 3374, pp378-388, 1998.
Sabins, 1987
Sabins, F. F., Jr. 1987. Remote Sensing Principles and Interpretation. 2d ed. New York: W. H. Freeman & Co.
Schenk, 1997
Schenk, T., 1997. Towards automatic aerial triangulation. International Society for Photogrammetry and Remote Sensing
(ISPRS) Journal of Photogrammetry and Remote Sensing 52 (3): 110-121.
Schowengerdt, 1980
Schowengerdt, R. A. 1980. Reconstruction of Multispatial, Multispectral Image Data Using Spatial Frequency Content.
Photogrammetric Engineering & Remote Sensing 46 (10): 1325-1334.
Schowengerdt, 1983
———. 1983. Techniques for Image Processing and Classification in Remote Sensing. New York: Academic Press.
Schwartz and Soha, 1977
Schwartz, A. A., and J. M. Soha. 1977. Variable Threshold Zonal Filtering. Applied Optics 16 (7).
Shensa, 1992
Shensa, M., “The discrete wavelet transform”, IEEE Trans Sig Proc, v. 40, n. 10, pp. 2464-2482, 1992.
Shikin and Plis, 1995
Shikin, E. V., and A. I. Plis. 1995. Handbook on Splines for the User. Boca Raton: CRC Press, LLC.
Simonett et al, 1983
Simonett, D. S., et al. 1983. The Development and Principles of Remote Sensing. Chapter 1 in Manual of Remote Sensing. Ed.
R. N. Colwell. Falls Church, Virginia: American Society of Photogrammetry.
Slater, 1980
Slater, P. N. 1980. Remote Sensing: Optics and Optical Systems. Reading, Massachusetts: Addison-Wesley Publishing
Company, Inc.
Smith et al, 1980
Smith, J. A., T. L. Lin, and K. J. Ranson. 1980. The Lambertian Assumption and Landsat Data. Photogrammetric Engineering
& Remote Sensing 46 (9): 1183-1189.
Snyder, 1987
Snyder, J. P. 1987. Map Projections--A Working Manual. Geological Survey Professional Paper 1395. Washington, DC: United
States Government Printing Office.
Snyder, J. P., and P. M. Voxland. 1989. An Album of Map Projections. U.S. Geological Survey Professional Paper 1453.
Washington, DC: United States Government Printing Office.
Space Imaging, 1998
Space Imaging. 1998. IRS-ID Satellite Imagery Available for Sale Worldwide. Retrieved October 1, 1999, from
http://www.spaceimage.com/newsroom/releases/1998/IRS1Dworldwide.html
Space Imaging, 1999a
———. 1999b. IRS (Indian Remote Sensing Satellite). Retrieved September 17, 1999, from
http://www.spaceimage.com/aboutus/satellites/IRS/IRS.html
Space Imaging, 1999c
SPOT Image. 1998. SPOT 4—In Service! Retrieved September 30, 1999 from
http://www.spot.com/spot/home/news/press/Commish.htm
SPOT Image, 1999
———. 1999. SPOT System Technical Data. Retrieved September 30, 1999, from
http://www.spot.com/spot/home/system/introsat/seltec/seltec.htm
Srinivasan et al, 1988
Srinivasan, R., M. Cannon, and J. White. 1988. Landsat Destriping Using Power Spectral Filtering. Optical Engineering 27
(11): 939-943.
Star and Estes, 1990
Star, J., and J. Estes. 1990. Geographic Information Systems: An Introduction. Englewood Cliffs, New Jersey: Prentice-Hall.
Steinitz et al, 1976
Steinitz, C., P. Parker, and L. E. Jordan, III. 1976. Hand Drawn Overlays: Their History and Perspective Uses. Landscape
Architecture 66:444-445.
Stojic et al, 1998
Stojic, M., J. Chandler, P. Ashmore, and J. Luce. 1998. The assessment of sediment transport rates by automated digital
photogrammetry.
Strang, Gilbert and Nguyen, Truong, Wavelets and Filter Banks, Wellesley-Cambridge Press, 1997.
Suits, 1983
Suits, G. H. 1983. The Nature of Electromagnetic Radiation. Chapter 2 in Manual of Remote Sensing. Ed. R. N. Colwell. Falls
Church, Virginia: American Society of Photogrammetry.
Swain, 1973
Swain, P. H. 1973. Pattern Recognition: A Basis for Remote Sensing Data Analysis (LARS Information Note 111572). West
Lafayette, Indiana: The Laboratory for Applications of Remote Sensing, Purdue University.
Swain and Davis, 1978
Swain, P. H., and S. M. Davis. 1978. Remote Sensing: The Quantitative Approach. New York: McGraw Hill Book Company.
Tang et al, 1997
Tang, L., J. Braun, and R. Debitsch. 1997. Automatic Aerotriangulation - Concept, Realization and Results. Photogrammetry
& Remote Sensing 52 (3): 122-131.
Taylor, 1977
Taylor, P. J. 1977. Quantitative Methods in Geography: An Introduction to Spatial Analysis. Boston, Massachusetts: Houghton
Mifflin Company.
Tou and Gonzalez, 1974
Tou, J. T., and R. C. Gonzalez. 1974. Pattern Recognition Principles. Reading, Massachusetts: Addison-Wesley Publishing
Company.
Tsingas, 1995
Tsingas, V. 1995. Operational use and empirical results of automatic aerial triangulation. Paper presented at the 45th
Photogrammetric Week, Wichmann Verlag, Karlsruhe, September 1995, 207-214.
Tucker, 1979
Tucker, C. J. 1979. Red and Photographic Infrared Linear Combinations for Monitoring Vegetation. Remote Sensing of
Environment 8:127-150.
United States Geological Survey, 1999a
United States Geological Survey (USGS). 1999a. About the EROS Data Center. Retrieved October 25, 1999, from
http://edcwww.cr.usgs.gov/content_about.html
United States Geological Survey, 1999b
———. n.d. National Landsat Archive Production System (NLAPS). Retrieved September 30, 1999, from
http://edc.usgs.gov/glis/hyper/guide/nlaps.html
Vosselman and Haala, 1992
Vosselman, G., and N. Haala. 1992. Erkennung topographischer Paßpunkte durch relationale Zuordnung. Zeitschrift für
Photogrammetrie und Fernerkundung 60 (6): 170-176.
Walker and Miller, 1990
Walker, T. C., and R. K. Miller. 1990. Geographic Information Systems: An Assessment of Technology, Applications, and
Products. Madison, Georgia: SEAI Technical Publications.
Wang, Y., 1988a
Wang, Y. 1988a. A combined adjustment program system for close range photogrammetry. Journal of Wuhan Technical
University of Surveying and Mapping 12 (2).
Wang, Y., 1998b
———. 1998b. Principles and applications of structural image matching. International Society for Photogrammetry and
Remote Sensing (ISPRS) Journal of Photogrammetry and Remote Sensing 53:154-165.
Wang, Y., 1994
———. 1995. A New Method for Automatic Relative Orientation of Digital Images. Zeitschrift fuer Photogrammetrie und
Fernerkundung (ZPF) 3: 122-130.
Wang, Z., 1990
Wang, Z. 1990. Principles of Photogrammetry (with Remote Sensing). Beijing, China: Press of Wuhan Technical University of
Surveying and Mapping, and Publishing House of Surveying and Mapping.
Watson, 1992
Watson, D. 1992. Contouring: A Guide to the Analysis and Display of Spatial Data. Tarrytown, New York: Elsevier Science.
Welch, 1990
Welch, R. 1990. 3-D Terrain Modeling for GIS Applications. GIS World 3 (5): 26-30.
Welch and Ehlers, 1987
Welch, R., and W. Ehlers. 1987. Merging Multiresolution SPOT HRV and Landsat TM Data. Photogrammetric Engineering
& Remote Sensing 53 (3): 301-303.
Wolf, 1983
Yang, 1997
Yang, X. 1997. Georeferencing CAMS Data: Polynomial Rectification and Beyond. Ph.D. dissertation, University of South
Carolina.
Yang and Williams, 1997
Yang, X., and D. Williams. 1997. The Effect of DEM Data Uncertainty on the Quality of Orthoimage Generation. Paper
presented at Geographic Information Systems/Land Information Systems (GIS/LIS) ’97, Cincinnati, Ohio, October
1997, 365-371.
Yocky, 1995
Yocky, D. A. 1995. Image merging and data fusion by means of the two-dimensional wavelet transform. Journal of the
Optical Society of America 12 (9): 1834-1845.
Zamudio and Atkinson, 1990
Zamudio, J. A., and W. W. Atkinson. 1990. Analysis of AVIRIS data for Spectral Discrimination of Geologic Materials in the
Dolly Varden Mountains. Paper presented at the Second Airborne Visible Infrared Imaging Spectrometer (AVIRIS)
Conference, Pasadena, California, June 1990, Jet Propulsion Laboratories (JPL) Publication 90-54:162-66.
Zhang, 1999
Zhang, Y. 1999. A New Merging Method and its Spectral and Spatial Effects. International Journal of Remote Sensing
20 (10): 2003-2014.
Related Reading
Battrick, B., and L. Proud, eds. 1992. ERS-1 User Handbook. Noordwijk, The Netherlands: European Space Agency
Publications Division, c/o ESTEC.
Billingsley, F. C., et al. 1983. “Data Processing and Reprocessing.” Chapter 17 in Manual of Remote Sensing, edited by Robert
N. Colwell. Falls Church, Virginia: American Society of Photogrammetry.
Burrus, C. S., and T. W. Parks. 1985. DFT/FFT and Convolution Algorithms: Theory and Implementation. New York: John
Wiley & Sons, Inc.
Carter, J. R. 1989. On Defining the Geographic Information System. Fundamentals of Geographic Information Systems: A
Compendium. Ed. W. J. Ripple. Bethesda, Maryland: American Society for Photogrammetric Engineering and Remote
Sensing and the American Congress on Surveying and Mapping.
Center for Health Applications of Aerospace Related Technologies (CHAART), The. 1998. Sensor Specifications: IRS-P3.
Retrieved December 28, 2001, from http://geo.arc.nasa.gov/sge/health/sensor/sensors/irsp3.html
Dangermond, J. 1989. A Review of Digital Data Commonly Available and Some of the Practical Problems of Entering Them
into a GIS. Fundamentals of Geographic Information Systems: A Compendium. Ed. W. J. Ripple. Bethesda, Maryland:
American Society for Photogrammetry and Remote Sensing and American Congress on Surveying and Mapping.
Defense Mapping Agency Aerospace Center. 1989. Defense Mapping Agency Product Specifications for ARC Digitized Raster
Graphics (ADRG). St. Louis, Missouri: Defense Mapping Agency Aerospace Center.
Duda, R. O., and P. E. Hart. 1973. Pattern Classification and Scene Analysis. New York: John Wiley & Sons, Inc.
Elachi, C. 1992. “Radar Images of the Earth from Space.” Exploring Space.
Elachi, C. 1988. Spaceborne Radar Remote Sensing: Applications and Techniques. New York: Institute of Electrical and
Electronics Engineers, Inc. (IEEE) Press.
Elassal, A. A., and V. M. Caruso. 1983. USGS Digital Cartographic Data Standards: Digital Elevation Models. Circular 895-
B. Reston, Virginia: U.S. Geological Survey.
Federal Geographic Data Committee (FGDC). 1997. Content Standards for Digital Orthoimagery. Federal Geographic Data
Committee, Washington, DC.
Freden, S. C., and F. Gordon, Jr. 1983. Landsat Satellites. Chapter 12 in Manual of Remote Sensing. Ed. R. N. Colwell. Falls
Church, Virginia: American Society of Photogrammetry.
Geological Remote Sensing Group. 1992. Geological Remote Sensing Group Newsletter 5. Wallingford, United Kingdom:
Institute of Hydrology.
Gonzalez, R. C., and R. E. Woods. 1992. Digital Image Processing. Reading, Massachusetts: Addison-Wesley Publishing
Company.
Guptill, S. C., ed. 1988. A Process for Evaluating Geographic Information Systems. U.S. Geological Survey Open-File Report
88-105.
Jacobsen, K. 1994. Combined Block Adjustment with Precise Differential GPS Data. International Archives of
Photogrammetry and Remote Sensing 30 (B3): 422.
Jordan, L. E., III, B. Q. Rado, and S. L. Sperry. 1992. Meeting the Needs of the GIS and Image Processing Industry in the 1990s.
Photogrammetric Engineering & Remote Sensing 58 (8): 1249-1251.
Keates, J. S. 1973. Cartographic Design and Production. London: Longman Group Ltd.
Kennedy, M. 1996. The Global Positioning System and GIS: An Introduction. Chelsea, Michigan: Ann Arbor Press, Inc.
Knuth, D. E. 1987. Digital Halftones by Dot Diffusion. Association for Computing Machinery Transactions on Graphics 6:245-
273.
Lue, Y., and K. Novak. 1991. Recursive Grid - Dynamic Window Matching for Automatic DEM Generation. 1991 ACSM-
ASPRS Fall Convention Technical Papers.
Menon, S., P. Gao, and C. Zhan. 1991. GRID: A Data Model and Functional Map Algebra for Raster Geo-processing. Paper
presented at Geographic Information Systems/Land Information Systems (GIS/LIS) ’91, Atlanta, Georgia, October
1991, 2:551-561.
Moffitt, F. H., and E. M. Mikhail. 1980. Photogrammetry. 3d ed. New York: Harper & Row Publishers.
Nichols, D., J. Frew et al. 1983. Digital Hardware. Chapter 20 in Manual of Remote Sensing. Ed. R. N. Colwell. Falls Church,
Virginia: American Society of Photogrammetry.
Sader, S. A., and J. C. Winne. 1992. RGB-NDVI Colour Composites For Visualizing Forest Change Dynamics. International
Journal of Remote Sensing 13 (16): 3055-3067.
Short, N. M., Jr. 1982. The Landsat Tutorial Workbook: Basics of Satellite Remote Sensing. Washington, DC: National
Aeronautics and Space Administration.
Space Imaging. 1999. LANDSAT TM. Retrieved September 17, 1999, from
http://www.spaceimage.com/aboutus/satellites/Landsat/landsat.html
Stimson, G. W. 1983. Introduction to Airborne Radar. El Segundo, California: Hughes Aircraft Company.
United States Geological Survey (USGS). 1999. Landsat Thematic Mapper Data. Retrieved September 30, 1999, from
http://edc.usgs.gov/glis/hyper/guide/landsat_tm
Wolberg, G. 1990. Digital Image Warping. Los Alamitos, California: Institute of Electrical and Electronics Engineers, Inc.
(IEEE) Computer Society Press.
Wolf, P. R. 1980. Definitions of Terms and Symbols used in Photogrammetry. Manual of Photogrammetry. Ed. C. C. Slama.
Falls Church, Virginia: American Society of Photogrammetry.
Wong, K. W. 1980. Basic Mathematics of Photogrammetry. Chapter II in Manual of Photogrammetry. Ed. C. C. Slama. Falls
Church, Virginia: American Society of Photogrammetry.
Yang, X., R. Robinson, H. Lin, and A. Zusmanis. 1993. Digital Ortho Corrections Using Pre-transformation Distortion
Adjustment. 1993 ASPRS Technical Papers 3:425-434.
Index
Symbols
.OV1 (overview image) 88
.OVR (overview image) 88
.stk (GRID Stack file) 98

Numerics
1:24,000 scale 91
1:250,000 scale 91
2D affine transformation 276
4 mm tape 19, 20
7.5-minute DEM 91
8 mm tape 19, 20
9-track tape 19, 21

A
a priori 225, 343
Absorption 6, 8
   spectra 5, 10
Absorption spectra 181
Accuracy assessment 258, 261
Accuracy report 262
Active sensor 5
Adaptive filter 163
Additional Parameter modeling (AP) 304
ADRG 21, 48, 80
   file naming convention 84
   ordering 96
ADRI 48, 85
   file naming convention 87
   ordering 96
Aerial photography 79
Aerial photos 4, 23, 266
Aerial triangulation (AT) 268, 284
Airborne GPS 283
Airborne imagery 47
Airborne Imaging Spectrometer 10
Airborne Multispectral Scanner Mk2 10
AIRSAR 71, 77
Aitoff 592
Albers Conical Equal Area 476, 517
Almaz 71
Almaz 1-B 72
Almaz-1 72
Analog photogrammetry 265
Analytical photogrammetry 265
Annotation 52, 123, 127, 416
   element 416
   in script models 406
   layer 417
ANT (Erdas 7.x) (annotation) 52
AP 304
Arc Coverage 48
Arc Interchange (raster) 48
Arc Interchange (vector) 53
ARC system 80, 88
Arc/second format 90
Arc_Interchange to Coverage 53
Arc_Interchange to Grid 53
ARCGEN 48, 53, 103
ArcInfo 47, 48, 98, 103, 340, 383, 386
   coverages 33
   data model 33, 35
   UNGENERATE 103
ArcInfo GENERATE 42
ArcInfo INTERCHANGE 42
ArcView 42
Area based matching 293
Area of interest 28, 156, 391
ASCII 91, 388
ASCII Raster 48
ASCII To Point Annotation (annotation) 52
ASCII To Point Coverage (vector) 53
Aspect 371, 375, 429
   calculating 375
   equatorial 430
   oblique 430
   polar 429
   transverse 431
ASTER 48
AT 284
Atmospheric correction 155
Atmospheric effect 146
Atmospheric modeling 147
Attribute
   imported 37
   in model 405
   information 33, 35, 37
   raster 387
   thematic 386, 387
   vector 387, 388
   viewing 388
Auto update 113, 114, 115
AutoCAD 42, 47, 103
Automatic image correlation 318
Geoid 316
Geology 72
Georeference 339, 447
Georeferencing
   GeoTIFF 102
GeoTIFF 49, 101
   geocoding 102
   georeferencing 102
Gigabyte 16
GIS 1
   database 382
   defined 381
   history 381
GIS (Erdas 7.x) 49
Glaciology 72
Global operation 28
GLONASS 92
Gnomonic 509
GOME 74
GPS data 92
GPS data applications 93
GPS satellite position 92
Gradient kernel 210
Graphical model 142, 389
   convert to script 406
   create 400
Graphical modeling 391, 399
GRASS 49
Graticule 423, 431
Gray scale 372
Gray values 305
GRD 49
Great circle 428
GRID 49, 97, 98
GRID (Surfer ASCII/Binary) 49
Grid cell 1, 110
Grid line 422
GRID Stack 49
GRID Stack7x 49
GRID Stacks 98
Ground Control Point
   see GCP
Ground coordinate system 273
Ground space 271
Ground truth 221, 225, 226, 239
Ground-based photographs 266
Ground truth data 93

H
Halftone 451
Hammer 511
Hardcopy 447
Hardware 109
HD 320
Header
   file 18, 20
   record 18
HFA file 316
Hierarchical pyramid technique 319
High density (HD) 320
High Resolution Visible sensors (HRV) 297
Histogram 148, 235, 386, 387, 455, 456, 464, 465
   breakpoint 152
   signature 239, 244
Histogram equalization
   formula 154
Histogram match 155
Homomorphic filtering 200
Host workstation 109
Hotine 531
HRPT (High Resolution Picture Transmission) 63
HRV 297
Hue 177
Huffman encoding 99
Hydrology 72
Hyperspectral data 9
Hyperspectral image processing 141

I
.img file 2, 22
Ideal window 197
IFOV (instantaneous field of view) 14
IGDS 53
IGES 42, 47, 53, 105
IHS to RGB 179
IKONOS 56
   bands/frequencies 56
Image 1, 110, 141
   airborne 47
   complex 52
   digital 40
   microscopic 47
   pseudo color 38
   radar 47
   raster 40
   ungeoreferenced 34
Image algebra 182, 224
Landsat 8, 15, 23, 40, 49, 54, 55, 141, 147, 548
   description 54, 58
   history 58
   MSS 14, 58, 59, 145, 147, 182
   ordering 94
   TM 8, 13, 18, 59, 162, 179, 182, 204, 219, 341, 343
      displaying 116
Landsat 7 61
   characteristics 62
   data types 61
Laplacian operator 210, 211
Latitude/Longitude 90, 340, 432, 508
   rectifying 360
Layer 2, 384, 399
Least squares adjustment 286
Least squares condition 286
Least squares correlation 294
Least squares regression 345, 349
Lee filter 203, 205
Lee-Sigma filter 203
Legend 422
Lens distortion 277
Level 1B data 346
Level slice 155
Light SAR 77
Line 34, 38, 52, 104
Line detection 208
Line dropout 16, 146
Linear regression 147
Linear transformation 345, 350
Lines of constant range 216
LISS-III 56
   bands/frequencies 56
Local region filter 203, 204
Long wave infrared region 5
Lookup table 111, 148
   display 152
Low parallax (LP) 320
Lowtran 7, 147
Loximuthal 520
LP 320

M
.mdl file 406
Magnification 110, 129, 130
Magnitude of parallax 322
Mahalanobis distance 259
Map 413
   accuracy 439, 445
   aspect 414
   base 414
   bathymetric 414
   book 447
   cadastral 414
   choropleth 414
   colors in 415
   composite 414
   composition 443
   contour 414
   credit 424
   derivative 414
   hardcopy 446
   index 414
   inset 414
   isarithmic 414
   isopleth 414
   label 424
   land cover 221
   lettering 426
   morphometric 414
   outline 414
   output to TIFF 450
   paneled 447
   planimetric 67, 414
   printing 447
      continuous tone 451
      with black ink 453
   qualitative 415
   quantitative 415
   relief 414
   scale 439, 448
   scaled 447, 451
   shaded relief 414
   slope 414
   thematic 414, 415
   title 424
   topographic 67, 414
   typography 425
   viewshed 414
Map Composer 413, 449
Map coordinate 3, 4, 341, 342
   conversion 368
Map projection 339, 341, 427, 471
   azimuthal 427, 429, 439
   compromise 428
   conformal 439
   conical 427, 430, 439
   cylindrical 427, 430, 439
Y
Y residual 356
Y RMS error 356
Z
Zero-sum filter 160, 211
Zone 80, 88
Zone distribution rectangle (ZDR) 81
Zoom 129, 130