Geophysical Data Processing, An Introduction

Bruce W. Bevan, Geosight
19 July 2010
Data processing is a step between measuring geophysical data and plotting it. During
this step, some errors in the measurements can be corrected. Also, different aspects of the
data may be extracted. For example, one may accentuate broad or small patterns in a
geophysical map; anomalies that extend in one direction may also be amplified or muted.
This introduction illustrates the processing of geophysical data from several different
techniques, including magnetic and conductivity surveys. While the measurements of these
surveys were made for archaeological applications, this report is intended to show general
principles of data processing that also apply to other geophysical surveys. The specialized
processing that is needed for a radar survey is not discussed here.
Four examples of data processing are described; each example is more complex than
the prior one. The captions for the figures have been written with enough detail that most of
the information in this report is included there.
Blue text in this report indicates hyperlinks, primarily to the figures that are at the end of the report. The data that create the figures here are included with the digital version of this report; the bottom line of each figure caption lists the data files that apply there.
Computer programs for data processing

Many general-purpose programs allow some processing of geophysical data; however, programs that have been designed for geophysical data can be more suitable, and the processing may be defined more completely in these geophysical programs.
Spreadsheet programs can aid data processing. To use one, write a computer program that converts an ASCII grid file to a comma-separated values (*.csv) file: Remove the header lines from the grid file, change the spaces between readings into commas, and put a carriage return at the end of each row of the matrix. The resulting file can also be entered into Surfer's spreadsheet as a matrix of Z readings, rather than with XYZ on each line.
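As an illustration, here is a minimal sketch of such a conversion program in Python; it assumes the grid is in Surfer's ASCII grid format (a DSAA header line followed by four lines of grid geometry), and the file names are only examples:

    import csv

    def grd_to_csv(grd_path, csv_path):
        # Read a Surfer ASCII grid; the first five lines are the header
        # (DSAA, nx ny, xmin xmax, ymin ymax, zmin zmax).
        with open(grd_path) as f:
            lines = f.read().splitlines()
        nx, ny = map(int, lines[1].split())
        # The Z values follow, wrapped over several physical lines per row.
        values = [float(v) for line in lines[5:] for v in line.split()]
        with open(csv_path, 'w', newline='') as out:
            writer = csv.writer(out)
            for row in range(ny):
                writer.writerow(values[row * nx:(row + 1) * nx])

    grd_to_csv('a.grd', 'a.csv')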
General-purpose data-processing programs for geophysical data are also available for
free. The best of these is a suite of programs from the US Geological Survey; these
programs operate within MS-DOS, rather than Windows. The programs are available
through the web site:
http://pubs.usgs.gov/of/1997/ofr-97-0725/pfofr.htm
This is Open-File Report 97-725, which is dated 1997 and which is titled “Potential field geophysical software for the PC, version 2.2”.
Surfer
The program Surfer has many good features. It is inexpensive. It can create a wide
range of different types of maps (contour line, wire-frame, shaded relief, and gray-scale).
The program allows detailed control of the appearance of those maps. The printed manual for this program is unusually good and complete. The manual describes the data-processing options of the program thoroughly and gives references for further information; never buy a data-processing program that does not have this detailed information. The program has no dongle or other security features that might accidentally lock the user out of the program. You can download a demonstration version of the Surfer program from the company's web site; this version allows all of the data processing that is described here, but the resulting maps cannot be printed, exported, or saved.
Golden Software frequently makes improvements to their programs; the version of the Surfer program that has been applied here is number 8. In 2010, version 9 is current, and known differences from version 8 will be mentioned here. Version 9 has added a handy capability for listing the coordinate and grid value at points on a map as the mouse cursor is moved across the map. Version 9 is also the first one that does not include a printed manual. While the program has an excellent, built-in help file, this provides little assistance if one does not know what question to ask, or the exact wording that will extract an answer from the help file. If you are moderately new to Surfer,
it will be important to read much of this help file; without that rather complete reading, you will
never discover many of the important options in the program. While the built-in help section
of the program is tedious to read, it is complete, and it has some benefits over a printed
manual, for it has colored figures, and also hyperlinks.
The Surfer program has a surprisingly good capability for editing the locations of labels
on contour lines; this allows better-looking maps to be prepared. However, if an unimportant
change is made to those contour lines (such as changing the width of the lines), the modified
locations for contour line labels are lost. There are other disadvantages to the program also.
The program would be much easier to use if the Dialog Defaults were selected by right-
clicking a parameter; the hierarchical list in Surfer is difficult to operate. Most of the graphics
conversions between Surfer and other programs will fail; this is a fault that I find with all of my
other graphical software also. However, by detailed testing of chains of different graphics formats, I have always found one chain that will usually convert correctly between my different programs (this chain often includes the Windows MetaFile *.wmf format). Additions
to the line-contouring capability would benefit the program. One addition could be equal-area
contours (the area between each adjacent pair of contour levels is the same). Another
addition could be the thinning of the density of contour lines where they are so thick that they
form a black mass; where contour lines approach too close, the interval between lines could
be increased in that small area.
While the Surfer program was not designed for all of the steps that are needed for the
processing of geophysical data, it does allow many important types of processing to be done,
and so this program is applied here. However, this report does not describe the simple and
basic operations of the Surfer program; those are clearly stated in the program’s manual.
This report does include further details about specific, and unusual, operations of the Surfer
program that can be applied to data processing.
Here is some information that might aid your use of the Surfer program; while much of
this might be in the manual, it can be difficult to locate. A set of similar geophysical maps is
usually prepared for one site or report. Rather than start each map from the default settings
of Surfer, it is easier to simply replace the grid data in the first map with the grid data of a
later map. To do this, select the Contour Properties window and go to the General tab; at the
Input Grid File section, click on the “open file” icon on the right and then select a new grid file.
The new map will be displayed with a click on the Okay button.
The gridded data are stored in the *.srf file, which contains all of the settings and data that are needed for creating the map. If you modify the grid file for the map, the map will not show the changes until you replace the old grid file with the new one, even when the two files have the same name. That is, you must go through the steps above in order to make your map use the new grid file.
Since the grid data is stored with the *.srf file, you can send just this file to a colleague
who also has the Surfer program; since there is just one file, this simplifies the transfer.
Should your colleague need the grid file separately (perhaps for further data processing), the
Surfer program by itself does not allow this grid file to be extracted from the *.srf file.
However, it is possible to extract this grid file using a Scripter program that is freely available
on the Golden Software web site; this Scripter file is called Srf2grd.
Four examples of data processing are described next:
Example 1: Correcting a few isolated errors

The readings of a magnetometer can be subject to electrical interference. This interference is primarily from nearby electrical wires, but lightning also creates electrical noise.
lightning also creates electrical noise.
Figure 1, Figure 2
The magnetic map in Figure 1 and Figure 2 illustrates the noise that was caused by a
buried electrical cable; this cable is about 15 ft (5 m) north of the north edge of the survey
area. Almost all of the small-area oval anomalies in the map are caused by interference from
that cable. While the source must be the cable, the cause is otherwise unknown; rather than
the 60-Hz current, the cause is most likely power spikes from electrical machinery in a factory
that is about 1 km distant. The noise was quantified by making a series of measurements
with the magnetometer while its sensor was stationary, and stepping the location of the
sensor toward the cable; there was a sharp rise in the variability of the readings as the cable
was approached.
A magnetic map of the shallowest-possible object is plotted at the top of Figure 1; any
object that is underground will cause an anomaly that is broader (relative to its peak
amplitude). The map below shows a number of broad anomalies, and examples of a few that
must be caused by buried objects are marked with the letter G. Some of the anomalies
whose width is about the same as that caused by an at-surface object are indicated with a
question mark. In principle, some of these anomalies might be caused by iron artifacts, but
since all of them are magnetic lows, they are probably the result of interference also. Some
of the anomalies that are almost-certainly caused by electrical interference are indicated with
the letter B in Figure 1.
Figure 3
If a small number of readings are in error, a simple process for changing them one-by-
one is described in Figure 3. Each bad reading can either be removed or it can be replaced
by an estimate of the correct reading. While this correction will make a map look better, it is not necessary, and it is unlikely that any new findings will be apparent when it has been completed.
After this geophysical survey, an archaeological excavation that was directed by Mark
Kostro and Marley Brown III (Colonial Williamsburg) found a dense concentration of brick
rubble around E53 N177 in Figure 1; this appears to be the remains of the Prior Wright house
that was sought in this area. While this rubble was not revealed by this magnetic map, the
rubble was clearly detected with a conductivity meter as high readings.
Example 2: Window smoothing for clarifying large patterns

If the readings along a curve are averaged, the averaging will remove much of the irregularity or jaggedness of that curve. If this averaging is done in two
dimensions, rather than one, the smoothing that is provided by averaging is even more
pronounced. Figure 5 illustrates the procedure of this two-dimensional averaging. Simply
calculate the average of the readings that surround a point (including the reading at that
point). As the area that is averaged becomes broader, the smoothing of the patterns
becomes more pronounced. Note that each calculated average is placed in a new grid; the
process would be very confusing if this average was placed back in the original grid.
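A minimal sketch of this two-dimensional averaging in Python (assuming the readings are held in a NumPy array, with NaN marking points that have no measurement):

    import numpy as np

    def window_average(grid, size=7):
        # Replace each reading with the mean of the size-by-size window
        # centered on it; the window is clipped at the edges of the grid,
        # so fewer readings are averaged there.
        ny, nx = grid.shape
        half = size // 2
        out = np.empty_like(grid, dtype=float)
        for i in range(ny):
            for j in range(nx):
                window = grid[max(0, i - half):i + half + 1,
                              max(0, j - half):j + half + 1]
                out[i, j] = np.nanmean(window)  # ignore blanked (NaN) points
        return out

The new averages are written to a separate output array, just as the text recommends placing them in a new grid.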
Figure 6
Figure 6 shows how the Surfer program can be used for this smoothing. The process
is started by selecting Grid from the main menu, and Filter from the drop-down menu that
follows. In Figure 6, the two Edge Effects can often be left at their default values. If the Edge
of Grid setting is changed from Ignore to Blank, then no averages will be calculated where
the averaging window crosses the edge of the grid, and the values there will be blanked; this
setting of Blank will be appropriate if every average must have the complete number of
readings (for the same smoothing). If the setting for Blanked Nodes is left at the default value
of Leave, then holes in the map are not filled by this smoothing; this is generally best. If this
parameter is changed from Leave to Ignore, then holes in the map will be filled by the
smoothing.
Figure 7
When the window filtering shown in Figure 6 is applied to the original data of Figure 4,
the clarified map of Figure 7 results. If the size of the averaging window is reduced from 7 to
5 or 3, the pattern of the low readings is much less distinct. By carefully examining the
original map in Figure 4, one can conclude that the readings are indeed typically lower in the
middle of the area, and therefore the data processing has not created a pattern that was not
in the original data.
Example 3: Data processing for a conductivity survey

Data review
The first step in data processing is just a review of the measurements; you can do this
in Surfer (see Figure 8). Should your geophysical instrument export its measurements to an
unusual or binary format, you will need to convert them to a format that Surfer can read.
Each measurement should be on a single line of data that also has the coordinate of the
reading (East and North) and the reading itself; Figure 8 shows an example.
Figure 8
The data that illustrate this processing were measured with a constant spacing between readings; Figure 9 describes how the parameters of gridding can be set for data like these.
Data plot
Figure 10
The first plot of the geophysical measurements, Figure 10, has been drawn with line
contours. It is probably best that data be plotted with line contours during the data-
processing stage. When the number of measurements to be plotted exceeds about 50,000, it
can be better to plot data with gray-scale or shaded relief views, for small details in large
maps may be more apparent with those methods. However, gray scale and shaded relief
images are analog views, while a contour map might be called a digital view. The lines in a
contour map aid the detection of faults in data and also aid the correction of the data.
Level shift
Figure 10 reveals an abrupt shift in the readings near line N80. This pattern might be
found in nature or in a human-modified landscape. However, the lines of measurement for
this survey were made in an east-west direction; this immediately makes it likely that the shift
is an error in the readings. If there was a change in the survey when the lines near N80 were
measured, that would make it almost certain that the change in the readings is just a fault in
the data. This change during the survey could have been a break for lunch, for overnight, or
for a rainstorm. Perhaps the batteries in the instrument were replaced or the settings of the
instrument were modified.
If it is decided to correct the data for this shift, there are several ways of determining
the change in readings across the abrupt shift. Figure 10 indicates that the shift is two to three contour lines, which means that the values changed by 10 to 15 mS/m. In the Surfer program,
one can look at individual readings in a grid with the Grid Node Editor. At the Grid menu,
select the Grid Node Editor, and then select the grid to review. With the arrow keys on your
keyboard, check the Z values as you pass over the abrupt change at a few locations; this can
give you a good idea of the actual shift that is there.
Figure 11
Because of the natural variability of the readings, it can be difficult to determine an
accurate value for the shift. One can get a better estimate of an abrupt shift by calculating
the average reading along several lines of traverse; Figure 11 shows how Surfer allows you
to do this. With a conductivity survey, a small fraction of the readings can be extremely high or low, and for this type of data the median reveals the typical value along a line of measurement more accurately than the mean.
Figure 12, Figure 13, Figure 14
Even better, one can understand an artificial shift in one’s measurements by
calculating and plotting the median along every line of traverse. While Surfer does not allow
an easy method for this calculation, it can be done, and Figure 12 describes the procedure for creating a grid file of these medians. Then apply the procedure in Figure 13 to change this
grid file into one that you can plot with any curve-plotting program. The resulting file of
medians can be plotted as shown in Figure 14, and this allows one to make a good analysis
for unusual changes in one’s readings during a survey.
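Outside Surfer, the same medians can be computed directly from the data file; a sketch in Python, assuming s.dat holds comma-separated East, North, reading columns:

    import numpy as np

    data = np.loadtxt('s.dat', delimiter=',')
    north, reading = data[:, 1], data[:, 2]

    # Median of the readings along each line of traverse (constant North).
    for n in np.unique(north):
        median = np.median(reading[north == n])
        print(f'N{n:g}  median = {median:.2f} mS/m')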
Rather than applying the Grid / Extract process (from the pull-down menus) to create a
file of medians to plot, one may instead use the Grid / Slice operation to create a file with a
single line of data across a grid. In order to do this, it is necessary to create a simple Boundary Line file (*.bln) that has the coordinates of the end points of the desired line of data. For the example here, this Boundary Line file could look like this:
2 1
60 0
60 100
For this example, there are two coordinates (indicated by the 2), and their values are E60 N0
and E60 N100; this file can be prepared with NotePad or another text editor.
Figure 15
Once the shift in the data has been quantified, the readings can be adjusted using the
method in Figure 15. It is simple to adjust for a few abrupt shifts with this procedure.
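The same adjustment can be scripted; a sketch in Python, again assuming comma-separated East, North, reading columns:

    import numpy as np

    data = np.loadtxt('s.dat', delimiter=',')
    # Subtract the 10 mS/m level shift from all readings on the lines
    # between N80 and N100.
    on_shifted_lines = data[:, 1] >= 80
    data[on_shifted_lines, 2] -= 10.0
    np.savetxt('s1.dat', data, delimiter=',', fmt='%g')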
Locational error
Figure 16
The correction of the shift in the data makes a major improvement in the appearance
of the conductivity map; see Figure 16. This map also shows that there is another error in the
data; this is revealed by regular fluctuations along some contour lines. Since these
fluctuations are exactly synchronized with the lines of traverse (that were east-west and
spaced by 2.5 ft), they must be caused by another fault in the readings. The fault is found
with almost all high-speed geophysical instruments; it is primarily caused by the fact that the
reading at one instant of time is an average of values in an earlier interval. Unlike the prior
fault, which was an error in the amplitude of readings, this is a fault in the location of
readings.
This locational fault would be invisible if lines of traverse went in only one direction; it is the alternating directions that make the undulations apparent. However, it is wasteful of field
time to make unidirectional traverses during one’s measurements. Furthermore, if one made
these unidirectional traverses, the locations of the readings would still be slightly wrong, and
this might cause an archaeological excavation to be placed at an incorrect location. It is
better and faster to make bi-directional traverses during one’s survey and to correct for the
locational error in data processing (perhaps geophysical instruments will some day include a
correction for this fault).
Figure 17
It is easy to estimate the locational error in a map like that in Figure 16. Just draw a
straight line that follows the typical peaks (or left-going swings) on a group of undulating
contour lines; then draw a line that follows their “valleys” (or right-going swings). Find the
distance between these two lines and that distance is twice the locational error. Sometimes it
is easier to make special measurements in the field to quantify this error; Figure 17 shows
how this may be done. This locational error is typically rather constant between surveys, but
it is still good to check it frequently. I check it on most of my surveys; however, the data in
Figure 17 were measured at a different location from the map of Figure 16.
Figure 18
At first, it may seem that it will be difficult to correct for this locational error, for it is
found with every line and every measurement of a survey. In fact, it is very easy to make an
adequate correction, and Figure 18 shows how this may be done in Surfer.
This simple process can also be applied to correct a locational error in data that has
been measured with unidirectional traverses. If unidirectional traverses are made, it is even
possible to shift your data by a distance that is not a multiple of the spacing between
measurements: Just add a constant to the values on the columns of data that define
distances along each traverse.
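A sketch of the backwards shift in Python, assuming the file s1.dat holds comma-separated East, North, reading columns in the order of measurement:

    import numpy as np

    data = np.loadtxt('s1.dat', delimiter=',')
    # Delete the first two readings and shift the rest to fill the gap:
    # each reading takes the coordinates two positions earlier in the file.
    shifted = data[:-2].copy()
    shifted[:, 2] = data[2:, 2]
    np.savetxt('s2.dat', shifted, delimiter=',', fmt='%g')

The last two positions in the file are simply dropped here, rather than left with empty readings as in the Surfer procedure.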
A corrected map
Figure 19
After this locational correction, the patterns on the conductivity map are now simpler;
see Figure 19. If there was a shallow wire or metallic pipe that crossed your area of survey,
this locational correction should have converted the original zig-zag anomaly into a rather
straight pattern with a central low trough that has broad highs on either side. If your
conductivity map does not have linear features that will allow the determination of the
locational error, you must still do the correction. When you have done it, you may be
surprised at how much simpler the patterns of the anomalies have become. Without linear
features in your area of survey, you must make a bi-directional traverse over a feature with a
small-area and high-amplitude anomaly, or you must apply the correction you have
determined from an earlier survey.
Because of the backwards shift of the data, there are small gaps in data on every
other line; in Figure 19, these gaps are marked with vertical lines. If you do not want to have
these gaps, you can just extend your measurement traverses for a few readings past the end
of each line.
After these corrections to the data, the general geophysical map will be as good as it
will ever be. You will probably do further data processing in order to accentuate some
aspects of the patterns in the map. While this additional processing will indeed make some
patterns in the data easier to see, it can only reduce the information content of each modified
map. The original map, like that in Figure 19, must always be the primary reference for a
geophysical survey.
Window filtering
There are other ways of reducing or eliminating undulations on the contours of a
geophysical map; however, none of these alternatives is as good as the method already
shown. With many procedures of data processing, one replaces each original reading with
another number that is a combination (or weighted sum) of the group of nearby readings.
This procedure is called a linear convolution filter in Surfer; it may more simply (but less accurately) be called window averaging or filtering. What is commonly called data smoothing
is a typical example; each reading may perhaps be replaced by the average of that reading
and the four readings that are adjacent in the rectangular grid. This type of averaging is very
effective at reducing small-area fluctuations in geophysical measurements, as revealed in
example 2. These fluctuations are sometimes caused by electrical interference to the
geophysical instrument; random errors in the locations of readings can also cause this effect.
One-dimensional smoothing is often applied to simplify curves on a graph. Two-
dimensional smoothing, which may be applied to rectangular grids of data, works much
better, for there are then many more readings within a short distance of each measurement.
This allows an averaging to be done without making an extreme change to the general
patterns in the data.
Figure 20, Figure 21
Figure 20 illustrates how one type of two-dimensional filter may be set up in Surfer.
This filter is very good for eliminating line-to-line changes in a grid of data; consistent
fluctuations in the amplitudes or locations of the readings (such as the locational errors that
have just been discussed) can be removed from a map. The lower right corner of the figure
shows a matrix of numbers; these are the weightings that are multiplied by the readings that
are in a grid of measurements. For the striation filter that is set up in Figure 20, it is
necessary that the lines of 2's be in the direction of the traverses in the grid. The success of
this filter is illustrated by its output in Figure 21.
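A sketch of this striation filter in Python, using the 3x3 weighting matrix from Figure 20 (the traverses are assumed to run along the rows of the array):

    import numpy as np
    from scipy.ndimage import convolve

    # Weights of 2 along the central line (the direction of traverse)
    # and weights of 1 on the two adjacent lines; the result is divided
    # by the sum of the weights.
    kernel = np.array([[1.0, 1.0, 1.0],
                       [2.0, 2.0, 2.0],
                       [1.0, 1.0, 1.0]])
    kernel /= kernel.sum()

    def striation_filter(grid):
        # 'nearest' repeats edge values where the window crosses the
        # edge of the grid.
        return convolve(grid, kernel, mode='nearest')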
Figure 22, Figure 23
The setup of a simpler data-smoothing filter is shown in Figure 22; this is not nearly as
successful at reducing the undulations from line to line, and this is evident in Figure 23.
Figure 24, Figure 25
With conductivity surveys, it can be valuable to separate two types of patterns in the
maps that are created. Small-area anomalies are often caused by shallow metallic objects in
the soil; these anomalies can make it more difficult to see the larger (and perhaps more
interesting) patterns. One type of two-dimensional filter is excellent for removing small-area
anomalies in a map; this is the median filter. Figure 24 shows how this filter may be set up in
Surfer, and Figure 25 shows the result of applying this filter. The small, intense anomalies
are missing from the resulting map. This filter is like other window filters in that it has
broadened the anomalies in the geophysical map.
A median is determined by sorting the measurements by their amplitude; the median is
the value midway through this sorted list of measurements. This value is a bit like an
average; its main advantage is that a moderate number of extreme readings (high or low) do
not affect the median, but they could have a major effect on an average.
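A sketch of a median filter in Python (the window size here is an assumption; Figure 24 shows the size that was used in Surfer):

    import numpy as np
    from scipy.ndimage import median_filter

    def remove_small_anomalies(grid, size=5):
        # Replace each reading with the median of the size-by-size window
        # around it; a few extreme readings within the window do not
        # affect the median, so small intense anomalies are removed.
        return median_filter(grid, size=size, mode='nearest')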
Figure 26, Figure 27, Figure 28
A median filter will likely be the best for this process, but Figures 26 through 28
illustrate three other methods that may also be suitable. For some situations, one of these
other filters may be better than a median filter. If there are a moderately small number of
unusual low readings, they can simply be changed to blanks in the Grid Node Editor; these
blanks can later be replaced by the average of nearby readings.
Figure 29
Now that the largest patterns have been clarified in the geophysical data, it is good to
extract the small patterns to a separate map. This may be done with another filter, or simply
by calculating the difference between two maps that have already been prepared. Figure 29
describes these procedures. In Figure 29, the full range of the data has not been contoured;
this leaves oval blank areas inside some anomalies. The width of these blank areas is
proportional to the amplitude of the anomalies.
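In grid terms, this difference is a simple subtraction; a sketch in Python, assuming the smoothed map was made with a median filter as above:

    import numpy as np
    from scipy.ndimage import median_filter

    def extract_small_patterns(grid, size=5):
        # Subtract the smoothed (large-pattern) map from the original;
        # what remains are the small-area anomalies.
        smoothed = median_filter(grid, size=size, mode='nearest')
        return grid - smoothed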
As these examples show, there can be several different ways of processing
geophysical data. While it is interesting to test the many approaches, one can also spend a
large amount of time on data processing, for little improvement in the appearance of the
maps.
Example 4: Data processing after a magnetic survey

As in the conductivity example, the field time of a magnetic survey project will be reduced by making bi-directional traverses and correcting the data for the errors when back in the office.
If one-directional traverses are made, there will still be errors in the locations of the
readings; these errors may be less than 0.5 m. However, these errors will be invisible in the
magnetic map (at least until an excavation fails to find a small feature). The errors that will be
corrected here are typical of those that are probably found with all high-speed (that is,
modern) magnetometers.
If these locational errors are not important and if the area of survey is small (taking
less than half of a day), then perhaps the survey will be faster if unidirectional traverses are
made, for then no complicated data processing will be needed.
Temporal correction
The first step in the processing of this magnetic data was a correction for temporal
changes in the readings. If a gradiometer is used, these temporal effects will almost certainly
be too small to be important. A total-field magnetometer was applied to this survey because
of the amount of brush and tree branches in the area of work; for this survey, the magnetic
sensor was carried on a horizontal staff, and then the brush and branches caused little
difficulty. This total field survey also allows deeper features to be detected more clearly than
is possible with a gradiometer. Temporal changes in the Earth's magnetic field were
monitored with a separate magnetometer. These changes were corrected by subtracting the
Earth's field at the time of each of the measurements that is plotted in Figure 30. Since this
correction is difficult or impossible to do with Surfer, it is not described here. Manufacturers
of total field magnetometers may supply a computer program that allows this temporal
correction.
Initial plot
After temporal corrections have been made, the next step is that of making an initial
plot of the readings. During this magnetic survey, readings were made at slightly irregular
intervals along lines of traverse. The Grid / Data operation in Surfer converted these readings to estimated values at regular and fixed intervals of distance. The interpolation of
this gridding was done so that the actual readings were changed in the least possible
amount. In order to do this, the settings in Surfer were as follows:
Gridding Method = Inverse Distance to a Power (which was 2)
Advanced Options / Search settings:
Sectors = 1 (since the direction of the samples is not important)
Maximum number = 2 (so that at most two readings will affect the gridded value at
each point in the grid)
Radius 1 = Radius 2 = 0.4 (so that readings on adjacent lines of traverse do not affect
the gridded value)
Grid Line Geometry was set as follows:
X = -25 to 0 with spacing = 0.5 (the actual spacing between lines)
A plot of all of the readings as a single long curve can reveal isolated spikes along the curve. The locations (time or measurement number) of these
spikes could be noted so that only small parts of the data need to be examined in order to
delete these errors. Figure 2 shows a different method for plotting all readings as a set of
curves.
Position error is different from heading error, which is a fault in the amplitude of the readings; position error
is a fault in the coordinates of the readings. Position error may be primarily caused by a
delay during each measurement. Position error may also result from the operator not
knowing when the magnetic sensor is directly over a particular point in the area of survey.
Perhaps the magnetic sensor is carried behind the operator's back. For the survey here, the
sensor was carried toward the front on a horizontal pole. Because of what is called parallax
error, it may be difficult to tell when the sensor is over a particular point on the ground. While
positional error is usually a lag in the readings from their correct coordinates, it is possible
that the error might be in the opposite direction for some surveys.
Heading and position errors are apparent in a magnetic map only if bi-directional
traverses are made. Why not make measurements only while walking in one direction? The
reason is that it then takes extra time to retrace one's steps and start a new line. If
bi-directional traverses are made, the correction of errors during data processing does take
additional time, but that time is less than the extra time required for making readings on
unidirectional traverses. Remember also: Unidirectional traverses will probably still cause
position errors. (These points are so important that they have been repeated here.)
Both of these errors may change if the magnetometer is carried by a different operator,
and they may also change between days of survey. Even with a single operator on a single
day, there will be changes in these errors during a survey, but these one-day changes will
usually be small.
Both heading error and position error cause somewhat similar undulations on contour
lines. Unfortunately, very different procedures are needed during data processing to correct
these two types of errors. This can make the processing of magnetic data more complex than the processing of conductivity data.
For further information about heading and position errors in magnetic maps, see the
related publication "Understand Magnetic Maps".
While it may be possible to distinguish heading error from position error by looking at a
magnetic map, it is easier to separate these two effects with data processing. This method of
distinction is described next.
If a number is already an integer, the CEIL function in Surfer does not change it; for example, CEIL (2.0) results in the value 2.0. If a number has a fraction (that is, there are
non-zero digits after the decimal point), the CEIL function gives the next larger integer. For
example, CEIL (1.5) gives 2.0, while CEIL (-1.5) gives -1.0.
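Python's math.ceil function behaves in the same way, and it can be used to check these examples:

    import math

    print(math.ceil(2.0))    # 2: an integer is not changed
    print(math.ceil(1.5))    # 2: the next larger integer
    print(math.ceil(-1.5))   # -1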
Figure 37
The Transform equations in Figure 37 allow the readings to be copied to columns E or
F depending on the direction of traverse. This splitting of the data is possible because the traverse lines alternate between integer and fractional coordinate values; this alternation occurs because the line spacing was 0.5 m. What if the line spacing of a survey was 1 m? Since we need the line values to alternate between a fraction and an integer for CEIL to make its split, simply create a new column of data with the original line values (perhaps 0, 1, 2, 3, 4...) divided by two, so that they are then 0, 0.5, 1, 1.5, 2 and so on. If the actual spacing between
lines of survey was 0.25 m, instead create a new column in the worksheet with these line
numbers doubled, changing 0, 0.25, 0.5, 0.75, 1, 1.25, 1.5 and so on to 0, 0.5, 1, 1.5, 2, 2.5
and so on.
While the processing so far has been rather complicated, the steps are quickly going
to become simple and clear. We want to find the average faint anomaly that was measured
on north-going traverses and compare it to the similar average on south-going traverses.
The values marked with 10 in columns E and F of Figure 37 should not be included in these
averages. Therefore, shift them away from the desired numbers by sorting the numbers in
each of those two columns, putting the lowest numbers first, and the highest last. Use the
Data / Sort operation for this.
Figure 38
Figure 38 shows the next step after this sorting has been completed. Just calculate
the average of the readings, excluding the 10 values. The difference between these two
averages is the heading error; the average value in column F is 0.688 nT greater than the
average value in column E.
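The same split and comparison can be scripted; a sketch in Python, assuming a comma-separated East, North, reading file (the name m1.dat is only an example) with a 0.5 m line spacing, and with the large anomalies already removed from the readings:

    import numpy as np

    data = np.loadtxt('m1.dat', delimiter=',')   # East, North, reading
    north = data[:, 1]

    # Integer line coordinates were measured on south-going traverses
    # in this example; half-integer lines were measured going north.
    integer_line = np.isclose(north, np.round(north))
    mean_south = data[integer_line, 2].mean()
    mean_north = data[~integer_line, 2].mean()
    print(f'heading error = {mean_north - mean_south:.3f} nT')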
Figure 39
The readings in Column E are too small, and these readings were measured on lines
whose coordinate number was an integer (these lines were measured on south-going
traverses). The procedure for the correction of this heading error is described in Figure 39.
This modified data should be saved to a file with a new name; it will be called m2.dat here.
The correction needs to be applied to only half of the readings; it is not necessary to add a
constant to half of the readings and subtract another constant from the other half of the
readings. This simplification is possible because any constant can be added to all magnetic readings without making any change to the patterns or to their interpretation.
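Continuing the Python sketch above, the correction itself is one line; the 0.688 nT value is the difference found in Figure 38:

    # Add the heading error to the readings on the integer (south-going)
    # lines, whose values were too small.
    data[integer_line, 2] += 0.688
    np.savetxt('m2.dat', data, delimiter=',', fmt='%g')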
Figure 40
The data with this correction for heading error can be gridded and plotted as usual.
The improvement that the correction allows is very apparent in the map of Figure 40. Should
you find that the undulations on contour lines have increased, this just means that you
applied the correction wrong; you probably added the constant when you should have
subtracted it, or you added the constant to the readings on the wrong traverses.
Heading error is usually found to range between 0.5 and 5 nT. It will change with the
direction of traversing; for example, it will be different if lines go East-West as compared to
North-South.
If only an approximation of the heading error is needed, one can simply look at the
values in Figure 35 across a few lines of traverse. For example, one could plot a graph of the
readings along lines N2, N12, or N17, and find the amplitude of the typical fluctuation from
one line to the next. The detailed procedure that has been described here gives a more
accurate determination of heading error; this is because many more readings are compared.
As another possibility, one could force the average reading along each line of traverse
to be a constant, perhaps zero; this operation would perhaps be done after large-amplitude
anomalies were removed from the data. While this average-removal procedure is simple, it
does have the unwanted effect of attenuating or eliminating broad anomalies that extend in a
direction that is perpendicular to the lines of traverse. One general rule of data processing is
that no information should be removed from a geophysical map unless it is absolutely
necessary.
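For comparison, a sketch of this average-removal procedure in Python (continuing the arrays from the sketch above):

    # Force the mean reading along each line of traverse to zero.
    # Warning: this also attenuates real broad anomalies that extend
    # perpendicular to the lines of traverse.
    for n in np.unique(north):
        on_line = north == n
        data[on_line, 2] -= data[on_line, 2].mean()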
Conclusions and Further Work

The locational corrections that have been described here shift the data by a whole multiple of the interval between readings. It is possible that some surveys will be improved if the shift can be made a
distance that is a fractional part of the interval between readings. This shifting can be done
by adding a constant to the along-line coordinates of north-going traverses, and subtracting the same constant from the along-line coordinates of south-going traverses. The ideas here
may give you the functions that can allow this more complex processing.
The measurements for this survey were made at constant intervals of time (that
interval was 0.5 s). Since the speed of walking each line of traverse was very close to being
constant, the distance between readings was rather constant for the entire survey. If one's
speed varies widely during a survey, additional thought may be needed for the locational
correction. If the source of the locational error is a measurement lag in the magnetometer,
then the readings should be shifted by a constant interval of time (using a *.dat file that has
not been gridded). However, if the locational error is caused by the operator consistently
seeing along-traverse distances wrongly, then the readings should be shifted by a constant
interval of distance. If a shift of a constant distance is needed, gridded values (converted
back to a *.dat file) can be used for this.
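A sketch of such a fractional shift of distance in Python; the 0.7 ft value, the 2.5 ft line spacing (which follows the conductivity example), and the assignment of directions to even and odd lines are only examples:

    import numpy as np

    data = np.loadtxt('s2.dat', delimiter=',')   # East, North, reading
    line_index = np.round(data[:, 1] / 2.5).astype(int)

    # Add the constant to the along-line (East) coordinate on lines
    # traversed in one direction, and subtract it on the others.
    shift = np.where(line_index % 2 == 0, 0.7, -0.7)
    data[:, 0] += shift
    np.savetxt('s3.dat', data, delimiter=',', fmt='%g')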
Publication history
19 July 2010 = Made many changes suggested by Rinita Dalan (Minnesota State
University); these greatly improved this report. Added two simple examples. Moved the
appendix to the main body of the text. Added a note about grid to spreadsheet conversion.
11 February 2008 = Added supplement on processing magnetic data.
11 January 2008 = Original report
[Figure 1 image: a magnetic map of the survey area, contour interval = 20 nT, with annotations G, B, and ?, and a region of incomplete contouring; a small panel above the map shows the calculated magnetic anomaly of an object at the ground's surface. The axes are East distance, ft, and North distance, ft.]
Figure 1: Noisy measurements. Many small eye-shaped anomalies are apparent in the
upper half of this map. Almost all of these are caused by electrical interference from a buried
wire which is near the north edge of the area of this survey. The small square above the map
shows the smallest magnetic anomaly that is possible; most of the “noise” anomalies are
smaller than this, and therefore they cannot be caused by buried objects. Furthermore, all of
the small anomalies are magnetic lows; a lack of magnetic highs is almost impossible to be
found in a natural or cultural magnetic map. This map would be improved if these errors in
the measurements were removed.
Plot = a.srf Data = a.grd
[Figure 2 image: each north-going line of measurement is plotted as a separate curve; the view is toward the west, looking down at an oblique angle. The axes are East distance, ft, and North distance, ft.]
Figure 2: Another look at the measurements in Figure 1. The lines of measurement are
plotted separately here, and this may clarify the amplitudes of the many small-area low
readings that were made. Note that each of these small anomalies was found on only one
line, and no anomaly is apparent on two adjacent lines; this is further evidence that these low
readings are errors. The spacing between the lines of measurement was 2.5 ft (0.76 m).
This figure has been prepared as a “wireframe” map in Surfer; this is the easiest
method for plotting a large number of curves.
This survey was done at Appomattox Court House, Virginia, on 26 September 2000.
The magnetometer was a Gem Systems model GSM19FG Overhauser instrument. The
sensor height was 2.3 ft (0.7 m); the line spacing was 2.5 ft (0.76 m), traverses were made
toward grid north, and the measurement interval along lines was 1 ft (0.3 m) and 1 s.
Plot = a1.srf Data = a.grd
Figure 3: One of the measurement errors. The grid file (a.grd) in Figure 1 has been
examined with the Grid Node Editor (found at the bottom of the Grid menu) and the
southwestern quadrant of the map has been enlarged. The diamond-shaped anomaly at
E12.5 N118 has been selected by the mouse cursor, and the map value at that point is listed
as Z: 15.27. By moving the cursor to adjacent readings, the following pattern is found:
65.64891
W 78.87844 15.27 73.17875 E
70.57859
This shows that the value of 15.27 is almost certainly an error, for the reading is too small.
The error can be removed by placing the mouse cursor at E12.5 N118 and then selecting Options / Blank, or by pressing Ctrl-B on the keyboard. This will place a huge number at that point (1.70141x10^38) that will be interpreted as a point with no measurement.
That blanked point will leave a “hole” in the map. If this is not wished, the faulty
reading at E12.5 N118 could be replaced by the average of the four adjacent good readings.
It would be adequately accurate to just add the four numbers without the digits after the
decimal point, and divide the sum by 4. Place this interpolated value of 72 back into grid
a.grd by typing that value when the cursor is at the point. When you press the Enter key, the
contour map will change to show your modification. Be certain to save the new a.grd before
quitting the Grid Node Editor, or your changes will be lost.
Other one-point errors can be corrected in the same manner; it takes about a half
minute to change each error. File a2.grd and a2.srf (not printed here) show my correction of
31 bad readings in this map. A later example will show how this type of correction may be
done automatically.
Data = a.grd
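When many isolated errors must be corrected, the same replacement can be scripted; a sketch in Python, assuming the grid has been loaded into a NumPy array and (i, j) indexes a known bad reading away from the edge of the grid:

    def repair_point(grid, i, j):
        # Replace one bad reading with the average of its four
        # adjacent readings.
        grid[i, j] = (grid[i - 1, j] + grid[i + 1, j] +
                      grid[i, j - 1] + grid[i, j + 1]) / 4.0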
[Figure 4 image: a self-potential map, contour interval = 5 mV. The axes are East distance, ft, and North distance, ft.]
Figure 4: A complex pattern. The contour lines wander all over the map, without revealing
any clear pattern. These are measurements of the electrical voltage at the surface of the soil,
for this is an SP survey (where SP is an abbreviation for self-potential or spontaneous
polarization). The readings were made with Ag-AgCl electrodes that had a diameter of about
1 cm; it is likely that the irregular patterns in this map are caused by the small electrodes
touching small pockets or particles of differing types of soil. The following figures show how
this complex map may be simplified and clarified.
The survey was done on 26 October 1992 on the US Civil War (1864) battlefield at
Petersburg, Virginia. The spacing between the measurements was 2 ft (0.6 m) and the
negative reference point for the voltage readings was at E510 S130. The electrodes were set
at the bottom of cored holes that were about 13 cm deep.
Plot = b.srf Data = b.grd
Figure 5: Window smoothing. The small circles locate the readings that are plotted in Figure
4. The patterns in that map can be simplified by replacing each original reading by the
average reading in a square that is seven readings on a side. This averaging square is
shifted about the entire matrix of readings and a new grid is created. Normally, 49 readings
are averaged at each point, but when the averaging square moves partly outside the area of
the measurements, fewer readings are averaged.
For most applications of window smoothing, the averaging window will be much
smaller than this; it will often be only three measurements on a side. A large window is
needed for this map because the readings are so irregular.
Figure 6: The settings for window smoothing in Surfer. Follow the menu Grid / Filter and
select b.grd for input. Then click on the + at User Defined Filters to display the window
above. Set the Output Grid File to b1.grd and leave the Edge Effects as shown here. Then
set the Filter Size to 7 Rows and 7 Columns. The box below lists the weights of the filter for
each of the 49 positions of the averaging square; set each of these weights to 1. Note that
only about 12 of the values can be displayed in the small window. Starting with the cursor in
the upper left corner of this 7x7 matrix, alternately press 1 and then the Tab key 49 times,
and the matrix will be filled with ones. However, examine the matrix by moving the sliders to
be certain that all of the numbers are 1.
When these parameters are set, just click on OK. The file b1.grd will probably already
be on your computer (along with the b.grd file), and you can just overwrite it, or give the new
filtered data a different file name.
Input data = b.grd (Figure 4) Output data = b1.grd (Figure 7)
[Figure 7 image: the smoothed self-potential map, contour interval = 5 mV. The axes are East distance, ft, and North distance, ft.]
Figure 7: The improved map. This is the data in Figure 4 after the filter in Figure 6 has been
applied. The low readings in the middle of the map are now clear. These low readings reveal
a well that was dug next to Fort Morton at the Petersburg battlefield. This well was not visible
at the surface; however, later excavations (by David Orr, NPS) found that it contained a large
quantity of iron debris. This iron was slowly rusting, and this chemical reaction caused the iron to act like a large battery, creating the pattern that is shown here.
For more information on this survey, along with procedures for doing SP surveys, see
my report called “Geophysical Exploration for Archaeology”.
Plot = b1.srf Data = b1.grd (Figure 6)
Figure 8: The end of the data. The file s.dat has been opened in Surfer and then the
spreadsheet has been scrolled to the end of the file. This allows a quick look at the numbers
and it also reveals how many measurements have been made (4961 in this case). The data
on each line are: column A = East distance (feet), column B = North distance (feet), column
C = measurement of conductivity (mS/m). Each horizontal line of data (called XYZ) is in the
order of measurement (in this file). Since the East distance decreases with each
measurement, traverses were made to the west (on this final line); the spacing between
measurements was 1 ft (0.3 m). While it is not evident in this part of the data, lines of
measurement were made alternately to the east and west and the spacing between lines of
measurement during the survey was 2.5 ft (about 0.8 m).
Data: s.dat
Figure 9: Preparing a grid. In order to create an accurate map, it is important to set the
parameters of gridding to their correct values. With many geophysical surveys,
measurements are made at uniform intervals (the circles in the left panel); with other surveys,
the readings can have a somewhat irregular spacing along lines that are uniformly-spaced
(as shown in the panel on the right).
For both types of survey, the gridding method called Nearest Neighbor can be the most suitable. With this method, the measurement that is closest to a point in the final grid or
matrix of values is placed at or moved to that grid point. If the measurements are uniformly-
spaced, each grid point (marked by the X symbols above) is at a measurement (the circular
symbols). However, the Nearest Neighbor gridding in Surfer was designed for the more
general case of a somewhat irregular spacing between readings, as shown on the right.
During the gridding process, all of the readings in an elliptical area are examined to see which
is the closest to the grid point. In the example on the right, there are two readings within this
ellipse, and gridding will select the measurement that is located just above the central X,
since that is the closest. This measurement will be put at the grid point.
For the example on the left, one can set both search radii to a very small value; even a
distance of 1 mm would be fine. However, neither radius should be set to large values, for
this could put a grid value at a point where no measurement was near. An example is shown
on the bottom side of the left panel. There was a missing measurement there (no circle), and
it is important that the map and grid show a blank at that point (at least for the initial look and
interpretation of the data). By keeping the search radius small, the map will correctly show
that there was no measurement at that point. Also, if the search distance was too large, grid
values could be added outside the area of survey, and this would imply that there was data
where there was none.
In summary, the examples above illustrate good selections of the radii of the search
ellipse. In probably all cases, both radii can be set to the value of the smaller radius. When
both R1 and R2 are set to that value, the ellipse changes to a circle.
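A sketch of this nearest-neighbor gridding in Python; the grid geometry and the 0.4 ft search radius follow the conductivity example in the text, and NaN marks blanked nodes:

    import numpy as np
    from scipy.spatial import cKDTree

    def nearest_neighbor_grid(east, north, reading, radius=0.4):
        # Grid geometry: 1 ft spacing along lines, 2.5 ft between lines.
        gx, gy = np.meshgrid(np.arange(0, 121, 1.0), np.arange(0, 101, 2.5))
        nodes = np.column_stack([gx.ravel(), gy.ravel()])
        # For each grid node, find the closest measurement.
        tree = cKDTree(np.column_stack([east, north]))
        dist, idx = tree.query(nodes)
        # Nodes with no measurement within the search radius are blanked.
        z = np.where(dist <= radius, reading[idx], np.nan)
        return z.reshape(gx.shape)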
[Figure 10 image: the original conductivity data, contour interval = 5 mS/m, with an abrupt shift marked near N80; magnetic north is indicated at the top. The axes are East distance, ft, and North distance, ft.]
Figure 10: A first plot of the measurements. The interval between contour lines is 5 mS/m
(millisiemens per meter); high values are plotted with red colors while low readings are
plotted with contour lines having blue colors.
Data s.dat was gridded as follows: Gridding Method = Nearest Neighbor; X Direction
Spacing = 1; Y Direction Spacing = 2.5 ft. Under Advanced Options, Search Range 1 was
set at 0.4 and Search Range 2 was set at 0.4 also. If wished, the values of the two Search
Ranges could have been set to a small value (such as 0.01) or a large value (such as 100),
for the measurements of this survey were uniformly-spaced, the grid was set to exactly the
size and spacing of the measurements, and there were no missing readings within the
rectangular area of survey. Therefore, a measurement was made at each grid point, and
there were no other points in the grid.
Plot = s.srf Data = s.grd
Figure 11: Statistics on one line of measurement. Tests like this can quantify the abrupt
change in the readings near N80 in the plot of Figure 10. For this example, basic statistics of
the readings on line N77.5 have been determined. First, select the initial reading on this line:
Click with the mouse on the value 31.08 that is found where column A = 0 and column B =
77.5. Second, scroll down and then hold down the Shift key and click on the value 24.92 that
is found where column A = 120 and column B = 77.5. These first two steps have selected all
of the measurements on line N77.5, and this is indicated by the black background. Third,
select Data / Statistics and check the different types of statistics that you wish; you will want
the median and perhaps also the mean (average) and a few others. When you click on
Okay, the values shown in the small window are listed.
Using the same procedure, you can also calculate the statistics of the readings on line
N80 and find that the median has risen to about 34 on that line. Therefore, there is an abrupt
rise of about 10 between these two lines of measurement.
Data = s.dat
Figure 12: Calculate the median along every line of measurement. A special grid of the data
s.dat is created for this determination. The Gridding Method is Data Metrics and the Median
is to be calculated at each point in the grid. Unusual settings are also needed in the Search
option (the lower right hand corner above). Radius 1 is set at 121 so that medians are
always determined for the entire east-west width of the data; Radius 2 is set at 0.4 (or any
value less than the line spacing of 2.5) so that only a single line of measurements is selected
for each calculation of the median. Should you plot the resulting grid, you would find that its
contours are horizontal lines that will be dense where the median changes the most between
lines of survey.
Data input = s.dat Data output = s0.grd
Figure 13: Convert the grid to a simple data file. The values in this data file are plotted in
Figure 14. The easiest method for this conversion is: Grid / Extract. Under the option
Output File, click on the “open file” icon and enter the name of the file (s0.dat) to hold the
median data. Most importantly, at “Save as type”, select “ASCII XYZ (*.dat)”. You might also
change the last column to be extracted to 2 (rather than 121). This conversion of the grid
(probably with a binary format) to a simple ASCII (East, North, value) file will allow you to plot
the values with any graphing program. At each north distance, there will be between 2 and
121 values, each with the same median; your graphing program will plot all of these identical
values at the same point.
Should you be using version 9 of Surfer, the “Save as type” option that you need will
be listed as “DAT XYZ (*.dat)”, rather than “ASCII XYZ (*.dat)”.
Data input = s0.grd Data output = s0.dat
[Figure 14 image: a graph of the medians of the measurements along east-west rows of traverse, showing an abrupt shift from 23.51 mS/m at N77.5 to 34.02 mS/m at N80. The axes are North distance, ft, and median conductivity, mS/m.]
Figure 14: The location and amplitude of the abrupt rise in the readings. This is a plot of the
values that were calculated with the procedure in Figure 12. The jump in the median
between N77.5 and N80 reveals a fault in the measurements. The median rises by 10.5
mS/m in this span; since the median typically rises by 0.5 mS/m between lines of
measurement, the rise was actually only 10 mS/m higher than normal.
The data can therefore be corrected by subtracting 10 mS/m from all lines between
N80 and N100.
An abrupt change similar to this can be found with many different geophysical
instruments; with a conductivity meter, it is typically caused by a change in calibration
between days of survey, or by a large change in the temperature of the instrument between
lines of survey. In this case, I simply created a fault where there was none, for this
illustration. The fault that I created was actually a rise of only 7.2 mS/m, not 10.0; this shows
that one cannot make a perfect correction for abrupt shifts in the readings when there is a
large variability in the readings in the area of survey.
Data = s0.dat
Figure 15: Correcting the faulty readings. They are first selected with the normal Windows
procedure described with Figure 11: Click on the conductivity value at data row 3873 and
then Shift-click on the conductivity value at data row 4961; this selects all readings on the
lines between N80 and N100. Next select Data / Transform from the main menu and set the
options as illustrated here; the equation directs the program to subtract 10 from the values in
column C within the range of rows that is indicated. When the Okay button is clicked, this
equation is applied and the values are reduced by 10. Save the modified file as s1.dat.
Grid these modified values as before; the resulting plot is Figure 16.
Data input = s.dat Data output = s1.dat
[Figure 16 image: the conductivity map after 10 mS/m was subtracted from lines N80 - N100, contour interval = 5 mS/m; the abrupt shift is gone, but undulations remain along contour lines. The axes are East distance, ft, and North distance, ft.]
Figure 16: Elimination of the abrupt shift at N80. The former shift is no longer apparent in
the data. However, another fault remains; this is revealed by periodic undulations along
those contour lines that extend in a north-south direction (perpendicular to the lines of
traverse). There are two possible causes for these undulations: 1, the amplitudes of the
readings may alternate between high and low values from line to line; or 2, the
measurements have not been recorded at their correct locations, but instead are consistently
forward or backward from their proper locations along the lines of traverse. This first possible
cause cannot be the source of the fault, for Figure 14 shows no consistent change from high
to low median from one line to the next.
Therefore, the fault is of the second type, and there is a locational error in the
readings. The amplitude of the error can be estimated from the peak-to-peak oscillations of
the contour lines; simple measurements show that this is 3 - 4 ft. While this graphical
procedure works well, one can also test this locational error by measuring one line with a pair
of traverses that go in opposite directions. This procedure is described in Figure 17, using
data from another site.
Plot = s1.srf Data = s1.grd
[Figure 17 image: two graphs of apparent conductivity, mS/m, versus North distance, ft. The upper graph shows the original measurements of one line traversed to the south and to the north; the lower graph shows the same curves after a backwards shift of 2 ft.]
Figure 17: Testing for locational error. A single line of readings is measured twice, while
traversing in opposite directions. The upper plot shows the offset that can be found, and the
arrows indicate the directions of traverse for each line of readings. The abrupt low readings
near N55 (caused by a shallow metallic object) allow an easy quantification of the shift, which
can be measured to be about 4 ft.
The lower curves illustrate the corrected data. For this correction, the coordinate of
each measurement has been shifted backwards by a distance of 2 ft (half of the total two-way
shift) along the direction of traverse. There is now a much better match between the two
curves. The differences that remain between the pair of curves are caused by electrical
interference to the instrument and by small and random errors in the locations of the
measurements.
Figure 18: A simple correction for locational error. Just delete the first two readings, and shift
the remaining readings to fill the gap. Select the first two readings as usual (click and Shift-
click); then right-click when the cursor is over the selection. Choose Delete and the settings
that are indicated here. Save the result as s2.dat.
The final two places for readings in this file will now have no values except the
coordinates, but that will cause no difficulty for Surfer.
This simple correction will work only if the data are in the order of their survey. Also,
this shift can only be done for distances that are multiples of the measurement spacing along
lines of traverse. It is almost certain that any correction will require the backwards shift that is
illustrated here. Should a shift ever be needed in the opposite direction, it can be done in
Surfer by inserting one or two blank "readings" at the top of the column, so that the actual
values shift down to later positions (compare Figure 41).
Data input = s1.dat Data output = s2.dat
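The same deletion and shift can be sketched as a calculation, under the same assumptions as the earlier sketch: a plain three-column text file in survey order, shifted by two sample positions along the traverse.

    import numpy as np

    # Shift the readings backwards by two sample positions, as in
    # Figure 18: drop the first two values and slide the rest up.
    data = np.loadtxt("s1.dat")      # columns: East, North, reading
    data[:-2, 2] = data[2:, 2]       # each reading moves back two places
    data[-2:, 2] = np.nan            # the final two places hold no reading
    np.savetxt("s2.dat", data, fmt="%.3f")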
[Figure image: contour map "Shifted readings backwards by 2 ft, contour interval = 5 mS/m"; East distance (ft) by North distance (ft); gray scale from low to high; north arrow (magnetic); annotation: the undulations along the contour lines are gone.]
Figure 19: The improvement after a correction for locational error. Periodic undulations
along the contour lines are now gone. Because of the shift that was applied, some data are
missing for a span of 2 ft at the left and right ends of the map. The lines at the sides of the
map locate the gaps in the data.
This is the primary geophysical map of the measurements, and the major faults in the
data have now been corrected. While the backwards-shift procedure allows the best possible
geophysical map, some other procedures still allow moderately good maps to be prepared;
these procedures are described next.
Plot = s2.srf Data = s2.grd
Figure 20: Another method for eliminating undulations on contour lines. In Surfer, select Grid
/ Filter and then s1.grd. Set the parameters as shown here. This creates what might be
called a striation filter, for it reduces consistent line-to-line changes in a grid; these changes
may be errors in location or in amplitude. The key to this filter is the weightings shown in the
3x3 matrix: The three readings on parallel lines to the north and south of the central point
have a weighting of one, while the readings along the central line have a weighting of two.
This filter may also be described as follows: At each point where the filter is applied,
add the reading at that point to the readings just before and after it along the same line of
traverse; double this result and call it sum #1. Next add together the six readings that are
nearest and along immediately adjacent lines; call this sum #2. Finally, add the two sums
you have just calculated, and divide the result by 12 (the total of the weights). This final
value replaces the original reading.
The effect of this filter is apparent in the following figure.
Data input = s1.grd (Figure 16) Data output = s2a.grd (Figure 21)
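For those working outside of Surfer, this striation filter is an ordinary weighted 3 by 3 moving average. The sketch below assumes the grid has been exported to a plain text matrix with one East-West line of traverse per row; the file names are hypothetical, and the "nearest" edge mode is only an approximation of Surfer's edge handling.

    import numpy as np
    from scipy.ndimage import correlate

    # The striation filter of Figure 20: weight 2 along the central
    # line of traverse (a row of the grid), weight 1 on the two
    # adjacent lines; the weights total 12.
    kernel = np.array([[1.0, 1.0, 1.0],
                       [2.0, 2.0, 2.0],
                       [1.0, 1.0, 1.0]]) / 12.0
    grid = np.loadtxt("s1_grid.txt")   # hypothetical text export of s1.grd
    filtered = correlate(grid, kernel, mode="nearest")
    np.savetxt("s2a_grid.txt", filtered, fmt="%.3f")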
[Figure image: contour map "After 3x3 striation filter, contour interval = 2 mS/m"; East distance (ft) by North distance (ft); gray scale from low to high; north arrow (magnetic).]
Figure 21: The improvement of a simple striation filter. The undulations that are apparent on
the contour lines in Figure 16 have been eliminated. This is data s2a.grd; it is a result of
applying the filter in Figure 20 to data s1.grd (Figure 16). Broad-area patterns in the data are
clear here, while they were obscured in Figure 16. The disadvantage of this filter is that all
patterns have been somewhat broadened; another way of saying this is that the spatial
resolution of the map and the survey is now lower. Also, this filter has not eliminated the
intense low anomalies, and these small-area patterns still interfere with the large patterns.
Plot = s2a.srf Data = s2a.grd (Figure 20)
Figure 22: A simple way of filtering data that may give poor results. The advantage of this
filter is that it can be selected from those that are already available in Surfer. The only
difference from the striation filter in Figure 20 is the weighting of the values within the
averaging window; the weighting of each of the nine readings is always one for this simple
filter.
Select the two Edge Effects correctly. For “Edge of Grid”, choose the Ignore option to
allow the filtering to extend over the full span of the grid; without this, there will be blanks at
the edge of the grid. For “Blanked Nodes”, choose the Ignore option if you wish to fill small
blanks within the grid; if you do not wish this, select the default “Leave” option.
Data input = s1.grd (Figure 16) Data output = s2b.grd (Figure 23)
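Under the same assumptions as the previous sketch, this simple window filter is a uniform 3 by 3 running mean; again, the "nearest" edge mode only stands in for Surfer's Edge Effects options and is not an exact equivalent.

    import numpy as np
    from scipy.ndimage import uniform_filter

    # A plain 3x3 running mean: every weight is one, and the total is 9.
    grid = np.loadtxt("s1_grid.txt")   # hypothetical text export of s1.grd
    smoothed = uniform_filter(grid, size=3, mode="nearest")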
[Figure image: contour map "After 3x3 window filter, contour interval = 2 mS/m"; East distance (ft) by North distance (ft); gray scale from low to high; north arrow (magnetic).]
Figure 23: The result of applying simple smoothing to the data. This has reduced, but not
eliminated, the undulations on the contour lines. The filter described in Figure 22 has been
applied to the data in Figure 16 to get this result.
This simple window filter is most suitable for clarifying large-area patterns in data that
do not have the striations and the abrupt and extreme readings that are found with this
particular data.
Plot = s2b.srf Data = s2b.grd (Figure 22)
Figure 24: A filter for eliminating small-area anomalies. The reading at each point is
replaced by the median of the values that are found in a square area of 5 by 5 readings which
is centered at the point. Those 25 readings are simply sorted by amplitude into a list, and the
value in the middle of that list replaces the original reading.
This filter is particularly effective for conductivity data; this is because metallic objects
that are at a shallow depth typically cause extremely low readings in a small area near those
objects. If the filter size (window area) is large enough so that fewer than 50% of the
readings in each window have these extremely low values, then all of those low values can
be completely removed.
Data input = s2.grd (Figure 19) Data output = s3.grd (Figure 25)
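This median filter also has a one-line equivalent outside of Surfer; the sketch keeps the same hypothetical text-export assumption as the earlier grid examples.

    import numpy as np
    from scipy.ndimage import median_filter

    # 5x5 moving median, as in Figure 24: each node is replaced by the
    # middle value of the 25 readings that are centered on it.
    grid = np.loadtxt("s2_grid.txt")   # hypothetical text export of s2.grd
    cleaned = median_filter(grid, size=5, mode="nearest")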
[Figure image: contour map "After median filtering, contour interval = 2 mS/m"; East distance (ft) by North distance (ft); gray scale from low to high; north arrow (magnetic).]
Figure 25: The simplification of a median filter. Extreme readings are not just smoothed,
they have been eliminated. This is data s3; it is the result of applying the filter described in
Figure 24 to the data in Figure 19 (s2.grd). If smoother contour lines are desired, then the simple
window filter of Figure 23 can next be applied to this grid.
For this map, the size of the filtering window was 5 rows by 5 columns; if this window
size is reduced to 3 rows by 5 columns, not all of the extremely low readings will be
eliminated.
Plot = s3.srf Data = s3.grd (Figure 24)
[Figure image: contour map "After blanking and 3x3 window averaging, contour interval = 2 mS/m"; East distance (ft) by North distance (ft); gray scale from low to high; north arrow (magnetic).]
Figure 26: Another method for reducing the effects of extreme readings. For this procedure,
each value that is less than zero is first replaced with a blank (indicating that no reading is
there). Select Grid / Math and data s2.grd; then set the output grid to s3a.grd. Finally, enter
the following function:
C = IF(A>0, A, 1.70141e+38)
This just means the following: If the value in grid A (the input grid) is greater than 0, keep it;
otherwise replace it with a blank. Surfer uses the huge value of 1.70141e+38 for this blank,
for it is much too large to be a valid measurement; in scientific notation, rather than
programming notation, this blank value is written as 1.70141 × 10^38.
After this threshold blanking has been done, the simple window filter in Figure 22 is
applied to the data. It is important that the Blanked Nodes part of Edge Effects there has
the Ignore value, for this will allow each blank to be filled with the average of the nearby
readings.
Data input = s2.grd (Figure 19) Data output = s3a.grd Plot = s3a.srf
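The combination of blanking and averaging can be sketched as a normalized average: blanked nodes contribute nothing to either the sum or the count, so each output value is the mean of only the valid readings nearby. The grid file name is hypothetical, as before.

    import numpy as np
    from scipy.ndimage import uniform_filter

    # Blank the readings at or below zero, then take a 3x3 mean that
    # ignores the blanks (the counterpart of Ignore for Blanked Nodes).
    grid = np.loadtxt("s2_grid.txt")   # hypothetical text export of s2.grd
    valid = grid > 0.0
    work = np.where(valid, grid, 0.0)  # blanks add nothing to the sum
    mean_work = uniform_filter(work, size=3, mode="nearest")
    frac_valid = uniform_filter(valid.astype(float), size=3, mode="nearest")
    smoothed = mean_work / np.maximum(frac_valid, 1e-9)  # mean of valid readings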
[Figure image: contour map "After truncating and 3x3 window averaging, contour interval = 2 mS/m"; East distance (ft) by North distance (ft); gray scale from low to high; north arrow (magnetic).]
Figure 27: A poorer method for eliminating some extremely low readings. With this
procedure, all readings that are less than zero are changed to zero. The procedure is applied
by selecting Grid / Math, and then entering the function:
C = MAX (A, 0)
This function just replaces each value in the grid with either the original value or with zero,
whichever is greater.
This clearly does not work as well as the procedure in Figure 26, for more of the strong low
anomalies remain in this map. This poorer performance is caused by the
fact that a value of zero can still be quite different from the surrounding values, and therefore
an anomaly may still remain. With the procedure in Figure 26, the extreme lows are replaced
by the average of nearby values, and so the effect of those lows is mostly eliminated.
After the truncation above was applied to the readings, the simple window averaging
shown in Figure 22 was applied to smooth the data somewhat, and the result is s3b.
Data input = s2.grd (Figure 19) Data output = s3b.grd Plot = s3b.srf
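The truncation itself is a single clamping operation; a sketch, under the same assumptions as the earlier grid examples:

    import numpy as np

    # Clamp every negative reading to zero: the same effect as the
    # Surfer function C = MAX(A, 0).
    grid = np.loadtxt("s2_grid.txt")   # hypothetical text export of s2.grd
    truncated = np.maximum(grid, 0.0)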
[Figure image: contour map "After 3 row x 5 col threshold averaging, contour interval = 3 mS/m"; East distance (ft) by North distance (ft); gray scale from low to high; north arrow (magnetic).]
Figure 28: A final method for reducing extreme readings. For this operation, select Grid /
Filter, and s2.grd. Then, under Digital Filters, select Nonlinear Filters, then Other Nonlinear
Filters, and then finally Threshold Averaging. For the filtering above, the size of the window
was set at 3 rows and 5 columns, and the threshold was set at 2.
The operation of this filter is as follows: The average of the readings in the filtering
window is calculated (without the value at its midpoint). The value at the midpoint is then
compared to that average; if the magnitude of the difference (ignoring the polarity) is greater
than the threshold (which is 2 here), the value at the midpoint is replaced by the average.
This filter can completely remove the extreme values in the map if the size of the filter
is increased to 7 rows and 7 columns; however, the patterns that are then created are quite
broad, and therefore much spatial resolution has been lost.
The smoothing described in Figure 22 has been applied to create this map.
Data input = s2.grd (Figure 19) Data output = s3c.grd Plot = s3c.srf
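A sketch of this threshold averaging, under the same assumptions as the earlier grid examples: the mean of the 3-row by 5-column window is computed, the midpoint is removed from that mean, and the midpoint is replaced only where it differs from its neighbors by more than the threshold.

    import numpy as np
    from scipy.ndimage import uniform_filter

    grid = np.loadtxt("s2_grid.txt")     # hypothetical text export of s2.grd
    n = 3 * 5                            # readings in each window
    win_mean = uniform_filter(grid, size=(3, 5), mode="nearest")
    neigh_mean = (win_mean * n - grid) / (n - 1)  # mean without the midpoint
    threshold = 2.0
    out = np.where(np.abs(grid - neigh_mean) > threshold, neigh_mean, grid)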
[Figure image: contour map "After 5x5 median differencing, contour interval = 5 mS/m"; East distance (ft) by North distance (ft); gray scale from low to high; north arrow (magnetic).]
Figure 29: Extraction of the small-area patterns in the map. Most of the anomalies that are
apparent here are caused by shallow metallic objects. It can be informative to isolate these
patterns in a separate map; shallow metal may be very important to find, or it may simply be
a source of confusion and interference.
This filter is selected like that in Figure 28: From Grid / Filter, go to Nonlinear Filters,
then Other Nonlinear Filters, then Median Difference. The window size was set at 5 rows and
5 columns, just like the median filter that was applied in Figure 24.
This same map can also be calculated from the difference between the readings in two
maps that have already been shown, for this map is simply Figure 19 after Figure 25 has
been subtracted from it. With Grid / Math, choose s2.grd for file A, and s3.grd for file B, and
set the output C to s4.grd and the function to C = A - B.
When this difference map is plotted, it is good to exclude the contour line at zero; this
is because this zero line wanders around the entire map and adds little or no information. It
is differences that are larger or smaller than zero that are important.
Data input = s2.grd (Figure 19) Data output = s4.grd Plot = s4.srf
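Since this map is just the data minus its own median filtering, the calculation outside of Surfer is one subtraction; the sketch keeps the assumptions of the earlier grid examples.

    import numpy as np
    from scipy.ndimage import median_filter

    # Median differencing, as in Figure 29: the grid minus its 5x5
    # moving median leaves only the small-area anomalies.
    grid = np.loadtxt("s2_grid.txt")   # hypothetical text export of s2.grd
    small_area = grid - median_filter(grid, size=5, mode="nearest")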
[Figure image: contour map "Original data, contour interval = 2 nT"; East distance (m) by North distance (m); gray scale from low to high; north arrow (magnetic).]
Figure 30: The magnetic measurements. The undulations on the contour lines reveal a
problem with the data; the faults are bad enough that some patterns are difficult to see in this
map. Lines of measurement went alternately to the north and south; these bidirectional
traverses are the cause of the undulations. It is easy to correct the faults in this map, and
Figure 42 shows the improvement that is possible.
Plot = m.srf Data = m.grd
Figure 31: Another type of error in the measurements. The actual measurements are listed
here in Surfer's worksheet; they are in the order of their survey. Column A lists the East part
of the coordinate (all the numbers are negative, and therefore actually West); column B lists
the North part of the coordinate. The readings of magnetic anomaly are in column C; these
have the units of nanoteslas (nT) and the large value of the Earth's field (about 41,435 nT)
has already been subtracted. The height of the magnetic sensor was about 0.65 m, and the
spacing between readings was about 0.25 m. The abrupt drop in the reading that is marked
on data line 3173 must be an error; this error can simply be deleted. A dozen or so of these
errors have been eliminated in the map of Figure 30, and this causes some breaks in the
contour lines and blank areas in the map.
Data = m.dat
Figure 32: A simple filter for eliminating the undulations in the contour map of Figure 30.
This is almost the same filter as the one that is shown in Figure 20. For this survey, the lines
of traverse went alternately north and south, and so the filter is oriented this same way, with
the line of 2's vertical through the middle of the matrix of filter weights (at the lower right).
When this filter is applied to the data shown in Figure 30, the simplified map of Figure 33 is
the result.
For this filtering matrix, the sum of the weights along the central line (this is six) should
be the same as the sum of the weights along the two adjacent lines (this is also six). The
setting for Blanked Nodes is set to Ignore so that the blanks (points with deleted readings) in
Figure 30 will be filled with estimated values.
Data input = m.grd (Figure 30) Data output = m1.grd (Figure 33)
[Figure image: contour map "After 3x3 striation filter, contour interval = 1 nT"; East distance (m) by North distance (m); gray scale from low to high; north arrow (magnetic).]
Figure 33: Elimination of the striations along the contour lines. The filter shown in Figure 32
has completely eliminated the unwanted and complex patterns in Figure 30. Large-area
patterns in this map are now apparent, although small patterns have been attenuated or
eliminated. If only these large patterns are important, then this filtered map is all that is
needed. However, if small patterns should be preserved, then the following figures show how
this may be done.
Plot = m1.srf Data = m1.grd (Figure 32)
[Figure image: two panels of a model magnetic anomaly, titled "No errors" and "Both heading and position errors".]
Figure 34: The two problems that cause striations in a magnetic map. If there is heading
error, the readings along alternate lines change between high and low values; these errors in
amplitude change the large and simple anomaly into one that has been broken into separated
oval patterns. If there is a positional error, the measurements have been shifted to the north
or south from their correct locations; this causes a zig-zag pattern in the magnetic map. If, as
is usual, both errors are found, then both patterns are in the map, and the errors will probably
appear to be larger on one side of an anomaly than the other. This is because the two errors
can partially cancel one another out in some parts of the map.
[Figure image: contour map "Subtracted smoothed, contour interval = 2 nT"; East distance (m) by North distance (m); gray scale from low to high; north arrow (magnetic).]
Figure 35: An accentuation of the striations in the magnetic map. While this makes the map
look worse, it also allows a more accurate correction for the errors in the map. This map is
simply the original readings in Figure 30 after the smoothed readings in Figure 33 have been
subtracted. This subtraction has removed the broad-area patterns in the original data, and
therefore it clarifies the alternating high and low readings; these readings are about 0.5 nT
too high on half of the lines and about 0.5 nT too low on the others.
Plot = m1a.srf Data = m1a.grd
Figure 36: Elimination of all of the large-amplitude anomalies. The high anomalies that
remain in Figure 35 complicate the alternately high and low readings that are found along
columns there. First, convert that grid data (m1a.grd) to a simple XYZ ASCII file (m1a.dat),
with each line showing East distance, North distance, and reading; these numbers are in
columns A, B, and C. Next, select Data / Transform and enter the Transform equation shown
here; a new column is then created with all of the high readings replaced by 10 (column D
already shows the result of this transform). The equation says that if the absolute value
(ignoring any negative sign) of the reading in column C is less than one, that reading is put in
column D; otherwise a 10 is put in column D.
Make certain that the value for Last row in the Transform window is at least as large as
the total number of readings (data lines) in the file. While it is not apparent in this view of the
data, the conversion from the gridded file means that all of the coordinates here are at
constant intervals, unlike the irregular intervals that are seen in Figure 31.
Data input = m1a.dat
Figure 37: Separate the readings by the direction of traverse. The low-amplitude readings in
column D have already been copied to column E for those lines where the traverse direction
was to the south; for these lines, the value in column A is an integer (no fractional part). The
Transform equation for that operation was:
E = IF(CEIL(A) = A, D, 10)
In order to create column F, select Data / Transform and then the equation shown in the
small window. This Transform equation copies the readings from column D to column F if
the line coordinate has a fraction (such as -0.5, -1.5, ... -22.5, -23.5). Column F already
shows the result of this transform.
The next step will be to sort the values in column E and also in column F by increasing
size. For column E, select that column (click on the letter E at the top), and then select Data /
Sort and then Ascending order. Do the same for column F. This will put all the unwanted 10
values at the end of each column.
Data input = m1a.dat continued
Figure 38: Find the average readings, while ignoring the 10's. For column E, select all the
readings from the first until the start of the readings that are 10: Click on the value in column
E on line 1833 and then shift-click on the first value in column E. Then select Data / Statistics
and calculate the mean (also called the average). This value is shown here to be -0.337 nT.
Calculate an average the same way for column F; you will find that this average is 0.351 nT.
The difference between these two averages (0.337 + 0.351) is the heading error, which is
0.688 nT (0.7 nT would probably be sufficiently accurate).
Data input = m1a.dat continued
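The whole estimation in Figures 36 - 38 condenses to a few lines of calculation. The sketch assumes that m1a.dat is a plain three-column text file (East, North, residual reading), with integer East coordinates on the southbound lines, as described above.

    import numpy as np

    # Estimate the heading error: average the low-amplitude residuals
    # separately for the two traverse directions and difference them.
    data = np.loadtxt("m1a.dat")            # columns: East, North, residual
    east, resid = data[:, 0], data[:, 2]
    low = np.abs(resid) < 1.0               # exclude the strong anomalies
    south = np.ceil(east) == east           # integer East: traverse to south
    mean_s = resid[low & south].mean()      # about -0.337 nT here
    mean_n = resid[low & ~south].mean()     # about +0.351 nT here
    print(f"heading error = {mean_n - mean_s:.3f} nT")   # about 0.688 nT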
Figure 39: Remove the heading error from the original readings. Those original readings
(m.dat, that create Figure 30) are listed in column C for each East and North coordinate in
columns A and B. For each line of traverse that has an East distance that is an integer (W0,
W1, W2 ...), the value in column D is the original reading in column C with 0.688 (the heading
error) added to it. For all other traverse lines, the original reading is unchanged.
The Transform equation describes how column D is filled: Look at the number in
column A; if it is an integer (0, -1, -2, ...), then add 0.688 to the number in column C and put
the result in column D; otherwise, put the unchanged number from column C into column D.
In the equation, CEIL is short for ceiling; it means that if a number has a fractional part,
replace that number with the next higher integer. Therefore, a number such as -1.5 would
be replaced by -1 and a number such as 1.5 would be changed to 2, while numbers such as
-3 or 4 would remain unchanged.
After this transform has been completed, as shown by column D here, the original
readings in column C can be deleted (select the column and then Edit / Delete). The
corrected data can be saved to a file with a new name (m2.dat).
Data input = m.dat Data output = m2.dat
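The same correction, sketched as a calculation under the assumption that m.dat is a plain three-column text file (East, North, reading) with complete rows:

    import numpy as np

    # Add the 0.688 nT heading error to every reading on the
    # integer-coordinate (southbound) lines, as in Figure 39.
    data = np.loadtxt("m.dat")
    east = data[:, 0]
    south = np.ceil(east) == east
    data[:, 2] = np.where(south, data[:, 2] + 0.688, data[:, 2])
    np.savetxt("m2.dat", data, fmt="%.3f")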
[Figure image: contour map "After heading error correction, contour interval = 2 nT"; East distance (m) by North distance (m); gray scale from low to high; north arrow (magnetic).]
Figure 40: After a correction for the heading error. Most of the undulations in the original
map of Figure 30 have now been eliminated. The regular undulations that remain along the
contour lines in this map are almost entirely caused by the second error: The locations of the
readings are consistently and slightly wrong. These remaining location errors are also
revealed by the complex shapes of some anomalies, such as the two near W22 N12. The
zig-zag patterns there might be correct, but it is more likely that the anomalies should be
rather oval instead of the complex amoeba-like shapes that are seen there.
Plot = m2.srf Data = m2.grd
Figure 41: A simple correction for location error. One more change is made to the data
(m2.dat) in Figure 39: The first reading has been deleted and the following readings have
been shifted upward. For this illustration, column C shows the readings from Figure 39, while
column D shows them shifted upwards by one value: Select the first reading in column C
(click on it); then right-click on the selection and choose Delete (and Shift Cells Up).
Before saving this data to a file with a new name (m3.dat), it will be best to delete the
values in column C in order to make certain that your gridding selects the data that has the
locational correction.
You will need to check other shift distances to see which is best. For example, you will
also probably want to delete the first two readings, as in the example of Figure 18, and
compare the resulting magnetic maps (Choose the one whose patterns are most simple). It
is possible, although unlikely, that you will need to shift the readings downward by one or two
lines; this is easy to do by inserting one or two blank "readings" and shifting the actual
readings down.
Data input = m2.dat Data output = m3.dat
[Figure image: contour map "After position correction, contour interval = 2 nT"; East distance (m) by North distance (m); gray scale from low to high; north arrow (magnetic).]
Figure 42: The final and best magnetic map. This has a correction for heading error and
also for position error. All of the information of the original survey has been retained here,
and most of the unwanted errors have been eliminated. While undulations remain on some
contour lines, many of these may be caused by small and random errors in amplitude or in
the location of the readings; these cause little difficulty for the understanding of the patterns
that have been revealed here.
Plot = m3.srf Data = m3.grd