Spectrum Analysis Basics
Before we discuss these uncertainties, let’s look again at the block diagram of
an analog swept-tuned spectrum analyzer, shown in Figure 4-1, and see which
components contribute to the uncertainties. Later in this chapter, we will see how a
digital IF and various correction and calibration techniques can substantially reduce
measurement uncertainty.
Figure 4-1. Block diagram of an analog swept-tuned spectrum analyzer, showing the pre-selector (or low-pass filter), video filter, local oscillator, reference oscillator, sweep generator and display.
As an example, consider a spectrum analyzer with an input VSWR of 1.2 and a device
under test (DUT) with a VSWR of 1.4 at its output port. The resulting mismatch error
would be ±0.13 dB.
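The mismatch-error arithmetic above can be sketched numerically. This is a minimal illustration of the standard worst-case mismatch formula, 20 log(1 ± ρ₁ρ₂), where each ρ is derived from a VSWR:

```python
import math

def vswr_to_rho(vswr):
    """Reflection coefficient magnitude from a VSWR value."""
    return (vswr - 1.0) / (vswr + 1.0)

def mismatch_error_db(vswr_a, vswr_b):
    """Worst-case mismatch error limits (low, high) in dB for two interfaces."""
    rho = vswr_to_rho(vswr_a) * vswr_to_rho(vswr_b)
    return 20 * math.log10(1 - rho), 20 * math.log10(1 + rho)

# Analyzer input VSWR of 1.2 against a DUT output VSWR of 1.4
lo, hi = mismatch_error_db(1.2, 1.4)  # approximately -0.13 dB, +0.13 dB
```

The error limits are slightly asymmetric in general, which is why the function returns both bounds rather than a single ± value.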
Since the analyzer’s worst-case match occurs when its input attenuator is set to 0 dB,
we should avoid the 0 dB setting if we can. Alternatively, we can attach a well-matched
pad (attenuator) to the analyzer input and greatly reduce mismatch as a factor. Adding
attenuation is a technique that works well to reduce measurement uncertainty when the
signal we wish to measure is well above the noise. However, in cases where the signal-
to-noise ratio is small (typically ≤ 7 dB), adding attenuation will increase measurement
error because the noise power adds to the signal power, resulting in an erroneously
high reading.
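The error from noise power adding to signal power can be quantified. A short sketch, assuming the noise and signal powers add directly:

```python
import math

def noise_addition_error_db(snr_db):
    """Apparent amplitude error (dB) when noise power adds to signal power."""
    return 10 * math.log10(1 + 10 ** (-snr_db / 10))

err_at_7db = noise_addition_error_db(7)    # roughly 0.8 dB too high
err_at_20db = noise_addition_error_db(20)  # negligible
```

At a 7-dB signal-to-noise ratio the reading is high by nearly a dB, which is why adding attenuation (and thereby lowering SNR) is counterproductive for small signals.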
Let’s turn our attention to the input attenuator. Some relative measurements are made
with different attenuator settings. In these cases, we must consider the input attenuation
switching uncertainty. Because an RF input attenuator must operate over the entire
frequency range of the analyzer, its step accuracy varies with frequency. The attenuator
also contributes to the overall frequency response. At 1 GHz, we expect the attenuator
performance to be quite good; at 26 GHz, not as good.
Following the input filter are the mixer and the local oscillator, both of which add to the
frequency response uncertainty.
Figure 4-2 illustrates what the frequency response might look like in one frequency
band. Frequency response is usually specified as ± x dB relative to the midpoint
between the extremes. The frequency response of a spectrum analyzer represents
the overall system performance resulting from the flatness characteristics and
interactions of individual components in the signal path up to and including the first
mixer. Microwave spectrum analyzers use more than one frequency band to go above
3.6 GHz. This is done by using a higher harmonic of the local oscillator, which will be
discussed in detail in Chapter 7. When making relative measurements between signals
in different frequency bands, you must add the frequency response of each band to
determine the overall frequency response uncertainty. In addition, some spectrum
analyzers have a band switching uncertainty which must be added to the overall
measurement uncertainty.
Figure 4-2. Relative frequency response in one frequency band (BAND 1) for signals in the same harmonic band; specification: ± 0.5 dB.
After the input signal is converted to an IF, it passes through the IF gain amplifier and IF
attenuator, which are adjusted to compensate for changes in the RF attenuator setting
and mixer conversion loss. Input signal amplitudes are thus referenced to the top line of
the graticule on the display, known as the reference level.
The most common way to display signals on a spectrum analyzer is to use a logarithmic
amplitude scale, such as 10 dB per div or 1 dB per div. Therefore, the IF signal usually
passes through a log amplifier. The gain characteristic of the log amplifier approximates
a logarithmic curve. So any deviation from a perfect logarithmic response adds to the
amplitude uncertainty. Similarly, when the spectrum analyzer is in linear mode, the
linear amplifiers do not have a perfect linear response. This type of uncertainty is called
display scale fidelity.
Relative uncertainty
When we make relative measurements on an incoming signal, we use either some part
of the same signal or a different signal as a reference. For example, when we make
second harmonic distortion measurements, we use the fundamental of the signal as
our reference. Absolute values do not come into play; we are interested only in how the
second harmonic differs in amplitude from the fundamental.
In a worst-case relative measurement scenario, the fundamental of the signal may occur
at a point where the frequency response is highest, while the harmonic we wish to
measure occurs at the point where the frequency response is the lowest. The opposite
scenario is equally likely. Therefore, if our relative frequency response specification is
± 0.5 dB, as shown in Figure 4-2, then the total uncertainty would be twice that value,
or ± 1.0 dB.
Perhaps the two signals under test are in different frequency bands of the spectrum
analyzer. In that case, a rigorous analysis of the overall uncertainty must include the sum
of the flatness uncertainties of the two frequency bands.
It is best to consider all known uncertainties and then determine which ones can be
ignored when making a certain type of measurement. The range of values shown in
Table 4-1 represents the specifications of a variety of spectrum analyzers.
Before taking any data, we can step through a measurement to see if any controls can
be left unchanged. We might find that the measurement can be made without changing
the RF attenuator setting, resolution bandwidth or reference level. If so, all uncertainties
associated with changing these controls drop out. We may be able to trade off reference
level accuracy against display fidelity, using whichever is more accurate and eliminating
the other as an uncertainty factor. We can even get around frequency response if we are
willing to go to the trouble of characterizing our particular analyzer 1. You can accomplish
this by using a power meter and comparing the reading of the spectrum analyzer at the
desired frequencies with the reading of the power meter.
The same applies to the calibrator. If we have a more accurate calibrator, or one
closer to the frequency of interest, we may wish to use that in lieu of the built-in
calibrator. Finally, many analyzers available today have self-calibration routines. These
routines generate error coefficients (for example, amplitude changes versus resolution
bandwidth) that the analyzer later uses to correct measured data. As a result, these
self-calibration routines allow us to make good amplitude measurements with a
spectrum analyzer and give us more freedom to change controls during the course of
a measurement.
Typical performance does not include measurement uncertainty. During manufacture, all
instruments are tested for typical performance parameters.
When the signal-to-noise ratio is less than 10 dB, the degradations to accuracy of any
single measurement (in other words, without averaging) that come from a higher noise
floor are worse than the linearity problems solved by adding dither, so dither is best
turned off.
At higher frequencies, the uncertainties get larger. In this example, we want to measure
a 10-GHz signal with an amplitude of –10 dBm. In addition, we also want to measure its
second harmonic at 20 GHz.
Frequency accuracy
So far, we have focused almost exclusively on amplitude measurements. What about
frequency measurements? Again, we can classify two broad categories, absolute
and relative frequency measurements. Absolute measurements are used to measure
the frequencies of specific signals. For example, we might want to measure a radio
broadcast signal to verify it is operating at its assigned frequency.
Table 4-3. Absolute and relative amplitude accuracy comparison (8563EC and N9030A PXA)
Absolute measurements are also used to analyze undesired signals, such as when you
search for spurs. Relative measurements, on the other hand, are useful for discovering
the distance between spectral components or the modulation frequency.
Up until the late 1970s, absolute frequency uncertainty was measured in megahertz
because the first LO was a high-frequency oscillator operating above the RF range
of the analyzer, and there was no attempt to tie the LO to a more accurate reference
oscillator. Today’s LOs are synthesized to provide better accuracy. Absolute frequency
uncertainty is often described under the frequency readout accuracy specification and
refers to center frequency, start, stop and marker frequencies.
What we care about is the effect these changes have had on frequency accuracy (and
drift). A typical readout accuracy might be stated:
Note that we cannot determine an exact frequency error unless we know something
about the frequency reference. In most cases, we are given an annual aging rate, such
as ± 1 × 10⁻⁷ per year, though sometimes aging is given over a shorter period (for
example, ± 5 × 10⁻¹⁰ per day). In addition, we need to know when the oscillator was
last adjusted and how close it was set to its nominal frequency (usually 10 MHz). Other
factors that we often overlook when we think about frequency accuracy include how
long the reference oscillator has been operating. Many oscillators take 24 to 72 hours
to reach their specified drift rate. To minimize this effect, some spectrum analyzers
continue to provide power to the reference oscillator as long as the instrument is
plugged into the AC power line. In this case, the instrument is not really turned “off.”
It is more accurate to say it is on “standby.”
We also need to consider the temperature stability, as it can be worse than the drift
rate. In short, there are a number of factors to consider before we can determine
frequency uncertainty.
When you make relative measurements, span accuracy comes into play. For Keysight
analyzers, span accuracy generally means the uncertainty in the indicated separation
of any two spectral components on the display. For example, suppose span accuracy
is 0.5% of span and we have two signals separated by two divisions in a 1-MHz span
(100 kHz per division). The uncertainty of the signal separation would be 5 kHz. The
uncertainty would be the same if we used delta markers and the delta reading was
200 kHz. So we would measure 200 kHz ± 5 kHz.
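The span-accuracy example works out as follows; a trivial sketch of the percentage-of-span calculation:

```python
def span_uncertainty_hz(span_hz, accuracy_pct):
    """Uncertainty in indicated separation: a percentage of the selected span."""
    return span_hz * accuracy_pct / 100.0

# 0.5% of a 1-MHz span
u = span_uncertainty_hz(1e6, 0.5)  # 5000 Hz

# A 200-kHz delta-marker reading would thus be 200 kHz +/- 5 kHz
```

Note that the uncertainty depends on the span setting, not on the separation being measured.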
Most analyzers offer markers you can put on a signal to see amplitude and absolute frequency.
However, the indicated frequency of the marker is a function of the frequency calibration
of the display, the location of the marker on the display and the number of display points
selected. Also, to get the best frequency accuracy, we must be careful to place the
marker exactly at the peak of the response to a spectral component. If we place the
marker at some other point on the response, we will get a different frequency reading.
For the best accuracy, we may narrow the span and resolution bandwidth to minimize
their effects and to make it easier to place the marker at the peak of the response.
Many analyzers have marker modes that include internal counter schemes to eliminate
the effects of span and resolution bandwidth on frequency accuracy. The counter does
not count the input signal directly, but instead counts the IF signal and perhaps one
or more of the LOs, and the processor computes the frequency of the input signal. A
minimum signal-to-noise ratio is required to eliminate noise as a factor in the count.
Counting the signal in the IF also eliminates the need to place the marker at the exact
peak of the signal response on the display. If you are using this marker counter function,
placement anywhere near the peak of the signal sufficiently out of the noise will do.
Marker count accuracy might be stated as:
We must still deal with the frequency reference error, as we previously discussed.
Counter resolution refers to the least-significant digit in the counter readout, a factor
here just as with any simple digital counter. Some analyzers allow you to use the counter
mode with delta markers. In that case, the effects of counter resolution and the fixed
frequency would be doubled.
Sensitivity
One of the primary ways engineers use spectrum analyzers is for searching out
and measuring low-level signals. The limitation in these measurements is the noise
generated within the spectrum analyzer itself. This noise, generated by the random
electron motion in various circuit elements, is amplified by multiple gain stages in the
analyzer and appears on the display as a noise signal. On a spectrum analyzer, this
noise is commonly referred to as the displayed average noise level, or DANL1. The
noise power observed in the DANL is a combination of thermal noise and the noise
figure of the spectrum analyzer. While there are techniques to measure signals slightly
below the DANL, this noise power ultimately limits our ability to make measurements
of low-level signals.
The available thermal noise power is kTB, where

k = Boltzmann's constant, 1.38 × 10⁻²³ joule/K
T = temperature, in kelvin
B = bandwidth, in Hz
The total noise power is a function of measurement bandwidth, so the value is typically
normalized to a 1-Hz bandwidth. Therefore, at room temperature, the noise power
density is –174 dBm/Hz. When this noise reaches the first gain stage in the analyzer, the
amplifier boosts the noise, plus adds some of its own.
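The –174 dBm/Hz figure follows directly from kTB. A quick sketch, taking room temperature as 290 K:

```python
import math

K_BOLTZMANN = 1.380649e-23  # joule/K

def thermal_noise_dbm_per_hz(temp_k=290.0):
    """kTB noise power density in dBm for B = 1 Hz."""
    watts = K_BOLTZMANN * temp_k
    return 10 * math.log10(watts / 1e-3)  # convert W to mW before taking dBm

danl_floor = thermal_noise_dbm_per_hz()  # about -174 dBm/Hz at room temperature
```

This is the absolute floor against which the analyzer's own noise figure is referenced.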
As the noise signal passes on through the system, it is typically high enough in amplitude
that the noise generated in subsequent gain stages adds only a small amount to the
total noise power. The input attenuator and one or more mixers may be between the
input connector of a spectrum analyzer and the first stage of gain, and all of these
components generate noise. However, the noise they generate is at or near the absolute
minimum of –174 dBm/Hz, so they do not significantly affect the noise level input to the
first gain stage, and its amplification is typically insignificant.
1. Displayed average noise level is sometimes confused with the term “sensitivity.” While related,
these terms have different meanings. Sensitivity is a measure of the minimum signal level that
yields a defined signal-to-noise ratio (SNR) or bit error rate (BER). It is a common metric of
radio receiver performance. Spectrum analyzer specifications are always given in terms of
the DANL.
We can determine the DANL simply by noting the noise level indicated on the display
when the spectrum analyzer input is terminated with a 50-ohm load. This level is the
spectrum analyzer’s own noise floor. Signals below this level are masked by the noise
and cannot be seen. However, the DANL is not the actual noise level at the input, but
rather the effective noise level. An analyzer display is calibrated to reflect the level of
a signal at the analyzer input, so the displayed noise floor represents a fictitious or
effective noise floor at the input.
The actual noise level at the input is a function of the input signal. Indeed, noise is
sometimes the signal of interest. Like any discrete signal, a noise signal is much easier
to measure when it is well above the effective (displayed) noise floor. The effective input
noise floor includes the losses caused by the input attenuator, mixer conversion loss,
and other circuit elements prior to the first gain stage. We cannot do anything about the
conversion loss of the mixers, but we can change the RF input attenuator. This enables
us to control the input signal power to the first mixer and thus change the displayed
signal-to-noise floor ratio. Clearly, we get the lowest DANL by selecting minimum (zero)
RF attenuation.
Because the input attenuator has no effect on the actual noise generated in the system,
some early spectrum analyzers simply left the displayed noise at the same position
on the display regardless of the input attenuator setting. That is, the IF gain remained
constant. In this case, the input attenuator affected the location of a true input signal on
the display. As input attenuation was increased, further attenuating the input signal, the
location of the signal on the display went down while the noise remained stationary.
So if we change the resolution bandwidth by a factor of 10, the displayed noise level
changes by 10 dB, as shown in Figure 5-2. For continuous wave (CW) signals, we get
best signal-to-noise ratio, or best sensitivity, using the minimum resolution bandwidth
available in our spectrum analyzer 2.
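The 10 log(BW₂/BW₁) relationship for displayed noise is easy to verify; a minimal sketch:

```python
import math

def noise_level_change_db(bw2_hz, bw1_hz):
    """Change in displayed noise level when RBW changes from bw1 to bw2."""
    return 10 * math.log10(bw2_hz / bw1_hz)

delta = noise_level_change_db(100e3, 10e3)  # a factor-of-10 RBW change: 10 dB
```

Narrowing the RBW by a decade therefore lowers the displayed noise by 10 dB and improves CW sensitivity by the same amount.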
Figure 5-1. In modern signal analyzers, reference levels remain constant when you change input attenuation.
Figure 5-2. Displayed noise level changes as 10 log(BW2 /BW1).
2. Broadband, pulsed signals can exhibit the opposite behavior, where the SNR increases as the bandwidth gets larger.
In summary, we get best sensitivity for narrowband signals by selecting the minimum
resolution bandwidth and minimum input attenuation. These settings give us the best
signal-to-noise ratio. We can also select minimum video bandwidth to help us see a
signal at or close to the noise level 3. Of course, selecting narrow resolution and video
bandwidths does lengthen the sweep time.
3. For the effect of noise on accuracy, see "Dynamic range versus measurement
uncertainty" in Chapter 6.
Generally, if you can accurately identify the noise power contribution of an analyzer,
you can subtract this power from various kinds of spectrum measurements. Examples
include signal power or band power, ACPR, spurious, phase noise, harmonic
and intermodulation distortion. Noise subtraction techniques do not improve the
performance of vector analysis operations such as demodulation or time-domain
displays of signals.
Keysight has been demonstrating noise subtraction capability for some time, using
trace math in vector signal analyzers to remove analyzer noise from spectrum and
band power measurements. (Similar trace math is available in the Keysight X-Series
signal analyzers.)
Thus measurements of both discrete signals and the noise floor of signal sources
connected to high-performance X-Series signal analyzers are more accurately
measured with NFE enabled. NFE works with all spectrum measurements regardless
of RBW or VBW, and it also works with any type of detector or averaging.
F = (Si /Ni ) / (So /No )

where
Si = input signal power
Ni = true input noise power
So = output signal power
No = output noise power
We can simplify this expression for our spectrum analyzer. First of all, the output signal
is the input signal times the gain of the analyzer. Second, the gain of our analyzer is
unity because the signal level at the output (indicated on the display) is the same as the
level at the input (input connector). So our expression, after substitution, cancellation
and rearrangement, becomes:
F = No /Ni
This expression tells us that all we need to do to determine the noise figure is compare
the noise level as read on the display to the true (not the effective) noise level at the
input connector. Noise figure is usually expressed in terms of dB, or:

NF = 10 log(F) = 10 log(No ) – 10 log(Ni )
We use the true noise level at the input, rather than the effective noise level, because
our input signal-to-noise ratio was based on the true noise. As we saw earlier, when
the input is terminated in 50 ohms, the kTB noise level at room temperature in a 1-Hz
bandwidth is –174 dBm.
For example, if we measured –110 dBm in a 10-kHz resolution bandwidth, we would get:

NF = –110 dBm – 10 log(10,000/1) – (–174 dBm)
   = –110 – 40 + 174
   = 24 dB
The 24-dB noise figure in our example tells us that a sinusoidal signal must be 24 dB
above kTB to be equal to the displayed average noise level on this particular analyzer.
Thus we can use noise figure to determine the DANL for a given bandwidth or to
compare DANLs of different analyzers with the same bandwidth 5.
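The noise-figure calculation above can be expressed compactly. A sketch of that bookkeeping, with the room-temperature kTB value taken from the text:

```python
import math

KTB_DBM_PER_HZ = -174.0  # room-temperature thermal noise density

def analyzer_nf_db(measured_noise_dbm, rbw_hz):
    """Noise figure from the displayed noise level measured in a given RBW
    (sinusoidal-signal basis, as in the text)."""
    return measured_noise_dbm - 10 * math.log10(rbw_hz) - KTB_DBM_PER_HZ

nf = analyzer_nf_db(-110.0, 10e3)  # 24 dB, matching the worked example
```

The same function, run in reverse, predicts the DANL in any bandwidth once the noise figure is known.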
Preamplifiers
One reason for introducing noise figure is that it helps us determine how much benefit
we can derive from the use of a preamplifier. A 24-dB noise figure, while good for a
spectrum analyzer, is not so good for a dedicated receiver. However, by placing an
appropriate preamplifier in front of the spectrum analyzer, we can obtain a system
(preamplifier/spectrum analyzer) noise figure lower than that of the spectrum analyzer
alone. To the extent that we lower the noise figure, we also improve the system
sensitivity.
When we introduced noise figure in the previous discussion, we did so on the basis of
a sinusoidal input signal. We can examine the benefits of a preamplifier on the same
basis. However, a preamplifier also amplifies noise, and this output noise can be higher
than the analyzer's own noise floor.
4. This may not always be precisely true for a given analyzer because of the way resolution
bandwidth filter sections and gain are distributed in the IF chain.
5. The noise figure computed in this manner cannot be directly compared to that of a receiver
because the “measured noise” term in the equation understates the actual noise by 2.5 dB. See
the section titled “Noise as a signal” later in this chapter.
Rather than develop a lot of formulas to see what benefit we get from a preamplifier, let
us look at two extreme cases and see when each might apply. First, if the noise power
out of the preamplifier (in a bandwidth equal to that of the spectrum analyzer) is at least
15 dB higher than the DANL of the spectrum analyzer, then the sensitivity of the system
is approximately that of the preamplifier, less 2.5 dB. How can we tell if this is the case?
Simply connect the preamplifier to the analyzer and note what happens to the noise on
the display. If it goes up 15 dB or more, we have fulfilled this requirement.
On the other hand, if the noise power out of the preamplifier (again, in the same
bandwidth as that of the spectrum analyzer) is 10 dB or more lower than the displayed
average noise level on the analyzer, the noise figure of the system is that of the
spectrum analyzer less the gain of the preamplifier. Again we can test by inspection.
Connect the preamplifier to the analyzer; if the displayed noise does not change, we
have fulfilled the requirement.
Testing by experiment means we must have the equipment at hand. We do not need
to worry about numbers. We simply connect the preamplifier to the analyzer, note the
average displayed noise level and subtract the gain of the preamplifier. Then we have
the sensitivity of the system.
However, we really want to know ahead of time what a preamplifier will do for us. We
can state the two cases above as follows:

NFpre + Gpre ≥ NFSA + 15 dB

and

NFpre + Gpre ≤ NFSA – 10 dB
Using these expressions, we’ll see how a preamplifier affects our sensitivity. Assume
that our spectrum analyzer has a noise figure of 24 dB and the preamplifier has a gain
of 36 dB and a noise figure of 8 dB. All we need to do is to compare the gain plus noise
figure of the preamplifier to the noise figure of the spectrum analyzer.
The gain plus noise figure of the preamplifier is 44 dB, more than 15 dB higher than the
noise figure of the spectrum analyzer, so the sensitivity of the preamplifier/spectrum-
analyzer combination is that of the preamplifier, less 2.5 dB.
System sensitivity = kTB(B = 1 Hz) + 10 log(NBW/1 Hz) + NFsys + LogCorrectionFactor

In this expression, kTB = −174 dBm/Hz, so the kTB term for B = 1 Hz is −174 dBm. The
noise bandwidth (NBW) of typical digital RBWs is 0.2 dB wider than the RBW, thus
40.2 dB for a 10-kHz RBW. The noise
figure of the system is 8 dB. The LogCorrectionFactor is −2.5 dB. So the sensitivity is
−128.3 dBm. This is an improvement of 18.3 dB over the –110 dBm noise floor without
the preamplifier.
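The sensitivity calculation can be checked numerically. A small sketch, using the 0.2-dB NBW excess and –2.5-dB log-display correction quoted in the text:

```python
import math

def system_sensitivity_dbm(nf_sys_db, rbw_hz, nbw_excess_db=0.2, log_corr_db=-2.5):
    """Sensitivity = kTB(1 Hz) + 10 log(NBW) + system NF + log-display correction."""
    return -174.0 + 10 * math.log10(rbw_hz) + nbw_excess_db + nf_sys_db + log_corr_db

# 8-dB system noise figure (the preamplifier's) in a 10-kHz RBW
sens = system_sensitivity_dbm(nf_sys_db=8.0, rbw_hz=10e3)  # about -128.3 dBm
```

Compared with the –110 dBm floor of the analyzer alone, this is the 18.3-dB improvement the text describes.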
However, there might be a drawback to using this preamplifier, depending upon our
ultimate measurement objective. If we want the best sensitivity but no loss of measurement
range, this preamplifier is not the right choice. Figure 5-5 illustrates this point. A spectrum
analyzer with a 24-dB noise figure will have an average displayed noise level of –110 dBm in a
10-kHz resolution bandwidth. If the 1-dB compression point 6 for that analyzer is 0 dBm,
the measurement range is 110 dB. When we connect the preamplifier, we must reduce the
maximum input to the system by the gain of the preamplifier to –36 dBm. However, when we
connect the preamplifier, the displayed average noise level will rise by about 17.5 dB because
the noise power out of the preamplifier is that much higher than the analyzer’s own noise
floor, even after accounting for the 2.5 dB factor. It is from this higher noise level that we now
subtract the gain of the preamplifier. With the preamplifier in place, our measurement range is
92.5 dB, 17.5 dB less than without the preamplifier. The loss in measurement range equals the
change in the displayed noise when the preamplifier is connected.
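The measurement-range bookkeeping in this paragraph reduces to a couple of subtractions; a minimal sketch of the trade-off:

```python
def measurement_range_db(compression_dbm, danl_dbm):
    """Measurement range: 1-dB compression point minus displayed noise level."""
    return compression_dbm - danl_dbm

range_alone = measurement_range_db(0.0, -110.0)   # 110 dB for the analyzer alone

# With the preamplifier, displayed noise rises by 17.5 dB; the loss of range
# equals that rise in displayed noise.
noise_rise_db = 17.5
range_with_preamp = range_alone - noise_rise_db   # 92.5 dB
```

The preamplifier's 36-dB gain also lowers the system 1-dB compression point to –36 dBm, which is the other way of arriving at the same 92.5-dB figure.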
Figure 5-5. With the analyzer alone, a 0-dBm 1-dB compression point and a –110-dBm DANL give a 110-dB measurement range. Adding the preamplifier (gain Gpre) lowers the system 1-dB compression point to –36 dBm and the system sensitivity to about –128.5 dBm, leaving a 92.5-dB system range.
Interestingly enough, we can use the input attenuator of the spectrum analyzer to
effectively degrade the noise figure (or reduce the gain of the preamplifier, if you
prefer). For example, if we need slightly better sensitivity but cannot afford to give up
any measurement range, we can use the above preamplifier with 30 dB of RF input
attenuation on the spectrum analyzer.
This attenuation increases the noise figure of the analyzer from 24 to 54 dB. Now the
gain plus noise figure of the preamplifier (36 + 8) is 10 dB less than the noise figure of
the analyzer, and we have met the conditions of the second criterion above.
NFsys = NFSA – Gpre
      = 54 dB – 36 dB
      = 18 dB
This represents a 6-dB improvement over the noise figure of the analyzer alone with
0 dB of input attenuation. So we have improved sensitivity by 6 dB and given up virtually
no measurement range.
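The attenuator trade is simple arithmetic once the second criterion is met; a sketch:

```python
def system_nf_second_case(nf_sa_db, atten_db, gain_pre_db):
    """System NF when preamp gain + NF is at least 10 dB below the
    (attenuator-degraded) analyzer noise figure: NF = NFSA - Gpre."""
    return (nf_sa_db + atten_db) - gain_pre_db

# 24-dB analyzer NF, 30 dB of input attenuation, 36-dB preamp gain
nf_sys = system_nf_second_case(24.0, 30.0, 36.0)  # 18 dB
improvement = 24.0 - nf_sys                       # 6 dB better than analyzer alone
```

The attenuation raises the analyzer's noise figure dB-for-dB, which is what lets us "spend" part of the preamplifier's gain on sensitivity without losing range.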
Of course, there are preamplifiers that fall in between the extremes. Figure 5-6 enables
us to determine system noise figure from a knowledge of the noise figures of the
spectrum analyzer and preamplifier and the gain of the amplifier. We enter the graph of
Figure 5-6 by determining NFpre + Gpre – NFSA. If the value is less than zero, we find
the corresponding point on the dashed curve and read system noise figure as the left
ordinate in terms of dB above NFSA – Gpre. If NFpre + Gpre – NFSA is a positive value, we
find the corresponding point on the solid curve and read system noise figure as the right
ordinate in terms of dB above NFpre.
Figure 5-6. System noise figure (dB) versus NFpre + Gpre – NFSA over the range –10 to +10 dB. The dashed curve is labeled in dB above NFSA – Gpre (up to NFSA – Gpre + 1 dB); the solid curve is labeled in dB above NFpre (from NFpre + 1 dB down toward NFpre – 2.5 dB).
As NFpre + Gpre – NFSA becomes less than –10 dB, we find that system noise figure
asymptotically approaches NFSA – Gpre. As the value becomes greater than +15 dB,
system noise figure asymptotically approaches NFpre less 2.5 dB.
Next, let’s try two numerical examples. Above, we determined that the noise figure
of our analyzer is 24 dB. What would the system noise figure be if we add a Keysight
8447D amplifier, a preamplifier with a noise figure of about 8 dB and a gain of 26 dB?
First, NFpre + Gpre – NFSA is +10 dB. From the graph of Figure 5-6 we find a system
noise figure of about NFpre – 1.8 dB, or about 8 – 1.8 = 6.2 dB. The graph accounts
for the 2.5-dB factor. On the other hand, if the gain of the preamplifier is just 10 dB,
then NFpre + Gpre – NFSA is –6 dB. This time the graph indicates a system noise figure
of NFSA – Gpre + 0.6 dB, or 24 – 10 + 0.6 = 14.6 dB. (We did not introduce the 2.5-dB
factor previously when we determined the noise figure of the analyzer alone because
we read the measured noise directly from the display. The displayed noise included the
2.5-dB factor.)
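The in-between cases can also be approximated with the standard Friis cascade formula. Note that this sketch omits the 2.5-dB log-display factor that the graph of Figure 5-6 builds in, so its results read somewhat higher than the graph values quoted above:

```python
import math

def db_to_lin(db):
    return 10 ** (db / 10)

def cascade_nf_db(nf_pre_db, gain_pre_db, nf_sa_db):
    """Friis cascade noise figure for a preamplifier driving a spectrum analyzer
    (no 2.5-dB log-display correction applied)."""
    f_pre = db_to_lin(nf_pre_db)
    f_sa = db_to_lin(nf_sa_db)
    g_pre = db_to_lin(gain_pre_db)
    return 10 * math.log10(f_pre + (f_sa - 1) / g_pre)

nf_8447d = cascade_nf_db(8.0, 26.0, 24.0)    # ~8.4 dB by Friis (graph: ~6.2 dB)
nf_lowgain = cascade_nf_db(8.0, 10.0, 24.0)  # ~15.0 dB by Friis (graph: ~14.6 dB)
```

The difference between the Friis results and the graph readings is, to within the partial correction near the crossover region, the 2.5-dB display factor.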
Figure 5-7. Random noise has a Gaussian amplitude distribution
Figure 5-8. The envelope of band-limited Gaussian noise has a Rayleigh distribution
Let’s start with our analyzer in the linear display mode. The Gaussian noise at the input
is band limited as it passes through the IF chain, and its envelope takes on a Rayleigh
distribution (Figure 5-8). The noise we see on our analyzer display, the output of the
envelope detector, is the Rayleigh-distributed envelope of the input noise signal. To get
a steady value, the mean value, we use video filtering or averaging. The mean value of a
Rayleigh distribution is 1.253 σ.
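The 1.253 σ figure is the mean of a Rayleigh distribution, √(π/2) · σ. A quick empirical sketch: the magnitude of two independent Gaussian components has exactly this Rayleigh envelope.

```python
import math
import random

# Theoretical mean of a Rayleigh distribution with sigma = 1
rayleigh_mean_factor = math.sqrt(math.pi / 2)  # about 1.2533

# Empirical check: envelope of complex Gaussian noise (I and Q components)
random.seed(1)
samples = [math.hypot(random.gauss(0, 1), random.gauss(0, 1))
           for _ in range(200_000)]
empirical_mean = sum(samples) / len(samples)
```

Video filtering or averaging on the analyzer is doing the same thing as the empirical average here: converging on the mean of the Rayleigh-distributed envelope.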
In most spectrum analyzers, the display scale (log or linear in voltage) controls the
scale on which the noise distribution is averaged with either the VBW filter or with trace
averaging. Normally, we use our analyzer in the log display mode, and this mode adds
to the error in our noise measurement.
The gain of a log amplifier is a function of signal amplitude, so the higher noise values
are not amplified as much as the lower values. As a result, the output of the envelope
detector is a skewed Rayleigh distribution, and the mean value that we get from video
filtering or averaging is another 1.45 dB lower. In the log mode, then, the mean or
average noise is displayed 2.5 dB too low. Again, this error is not an ambiguity, and we
can correct for it 7.
This is the 2.5-dB factor we accounted for in the previous preamplifier discussion,
when the noise power out of the preamplifier was approximately equal to or greater than
the analyzer’s own noise.
Another factor that affects noise measurements is the bandwidth in which the
measurement is made. We have seen how changing resolution bandwidth affects the
displayed level of the analyzer’s internally generated noise. Bandwidth affects external
noise signals in the same way. To compare measurements made on different analyzers,
we must know the bandwidths used in each case.
7. In X-Series analyzers, the averaging can be set to video, voltage or power (rms), independent of
display scale. When using power averaging, no correction is needed, since the average rms level
is determined by the square of the magnitude of the signal, not by the log or envelope of the
voltage.
If we use 10 log(BW 2 /BW 1 ) to adjust the displayed noise level to what we would
have measured in a noise-power bandwidth of the same numeric value as our 3-dB
bandwidth, we find that the adjustment varies from:
10 log(10,000/10,500) = –0.21 dB
to
10 log(10,000/11,300) = –0.53 dB
In other words, if we subtract something between 0.21 and 0.53 dB from the indicated
noise level, we have the noise level in a noise-power bandwidth that is convenient
for computations. For the following examples, we will use 0.5 dB as a reasonable
compromise for the bandwidth correction 8.
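The two correction limits above come straight from 10 log(BW₂/BW₁); a minimal sketch:

```python
import math

def bandwidth_correction_db(rbw_3db_hz, noise_power_bw_hz):
    """Correction from the 3-dB RBW to the equivalent noise-power bandwidth."""
    return 10 * math.log10(rbw_3db_hz / noise_power_bw_hz)

# A 10-kHz 3-dB RBW with noise-power bandwidths of 10.5 kHz and 11.3 kHz
corr_narrow = bandwidth_correction_db(10_000, 10_500)  # about -0.21 dB
corr_wide = bandwidth_correction_db(10_000, 11_300)    # about -0.53 dB
```

Splitting the difference at 0.5 dB, as the text does, keeps the arithmetic simple at the cost of a few tenths of a dB.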
Let’s consider the various correction factors to calculate the total correction for each
averaging mode:
Log averaging:
8. The X-Series analyzers specify noise power bandwidth accuracy to within 0.5% (± 0.022 dB).
The analyzer does the hard part. It is easy to convert the noise-marker value to other
bandwidths. For example, if we want to know the total noise in a 4-MHz communication
channel, we add 10 log(4,000,000/1), or 66 dB, to the noise-marker value 10.
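The conversion from a 1-Hz noise-marker value to a channel power is one logarithm; a sketch, where the –130 dBm/Hz marker value is a hypothetical example, not a figure from the text:

```python
import math

def channel_noise_dbm(marker_dbm_per_hz, channel_bw_hz):
    """Total noise power in a channel, from a noise-marker value in dBm/Hz."""
    return marker_dbm_per_hz + 10 * math.log10(channel_bw_hz)

# Hypothetical -130 dBm/Hz marker reading in a 4-MHz channel
total = channel_noise_dbm(-130.0, 4e6)  # adds about 66 dB to the marker value
```

The channel power function mentioned in footnote 10 performs this same integration over the channel automatically.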
If we use the same noise floor we used previously, –110 dBm in a 10-kHz resolution
bandwidth, we get:

NFSA(N) = –110 dBm – 10 log(10,000/1) – (–174 dBm) + 2.5 dB = 26.5 dB
As was the case for a sinusoidal signal, NFSA(N) is independent of resolution bandwidth
and tells us how far above kTB a noise signal must be to be equal to the noise floor of
our analyzer.
9. For example, the X-Series analyzers compute the mean over half a division, regardless of the
number of display points.
10. Most modern spectrum analyzers make this calculation even easier with the channel power
function. You enter the integration bandwidth of the channel and center the signal on the analyzer
display. The channel power function then calculates the total signal power in the channel.
Figure 5-9. System noise figure (dB) for noise signals, with curves labeled in dB relative to NFSA – Gpre and NFpre (compare Figure 5-6).
When we add a preamplifier to our analyzer, the system noise figure and sensitivity
improve. However, we have accounted for the 2.5-dB factor in our definition of NFSA(N),
so the graph of system noise figure becomes that of Figure 5-9. We determine system
noise figure for noise the same way that we did previously for a sinusoidal signal.