
W236: THE DISCRETE HILBERT TRANSFORM: A BRIEF TUTORIAL

Introduction

OK. You've read about the discrete Hilbert transform in the DSP literature. You've
plodded through the mathematical descriptions of analytic functions, with the constraints on
their z-transforms in their regions of convergence. It's likely that you've also encountered
the Cauchy integral theorem used in the definition of the Hilbert transform. Unfortunately,
the author(s) did not supply a "Hilbert transform psychic hotline" phone number you can call
to help make sense of it all. That's where this DSP Workshop paper comes in.

Here we'll gently introduce the Hilbert transform (HT) from a practical standpoint, and
explain some of the mathematics behind its description. In addition to providing some of
the algebraic steps that are "missing" from many textbooks, we'll illustrate the time and
frequency domain characteristics of the transform with an emphasis on the physical
meaning of the quadrature (complex) signals associated with HT applications. With that
said, let's get started.

Hilbert Transform Definition

The HT is a mathematical process performed on a real signal xr(t), yielding a new real
signal xht (t), as shown in Figure 1.

[Figure: block diagram with input xr(t), Xr(ω) feeding a box labeled "Hilbert transform, h(t), H(ω)", whose output is xht(t), Xht(ω)]
Figure 1. The notation used to define the Hilbert transform.

Our goal here is to ensure that xht (t) is a 90 o phase-shifted version of xr(t). So, before
we carry on, let's make sure we understand the notation used in Figure 1. The above
variables are defined as:

xr(t) - a real continuous time-domain input signal,
h(t) - the time-domain impulse response of a Hilbert transformer,
xht(t) - the HT of xr(t) (xht(t) is also a real time-domain signal),
Xr(ω) - the Fourier transform of input xr(t),
H(ω) - the frequency response (complex) of a Hilbert transformer,
Xht(ω) - the Fourier transform of output xht(t),
ω - continuous frequency measured in radians/second, and
t - continuous time measured in seconds.

We'll clarify that xht(t) = h(t)*xr(t), where the "*" symbol means convolution. In
addition, we can define the spectrum of xht(t) as Xht(ω) = H(ω)·Xr(ω). (These relationships
sure make the HT look like a filter, don't they? We'll cogitate on this notion again later in
this paper.)

Describing how that new xht (t) signal, the "HT of xr(t)", differs from the original xr(t) is
most succinctly done by relating their Fourier transforms, X r(ω) and Xht (ω). In words, we
can say that all of xht (t)'s positive frequency components are equal to xr(t)'s positive
frequency components shifted in phase by -90 o . Also, all of xht (t)'s negative frequency
components are equal to xr(t)'s negative frequency components shifted in phase by +90o .
Mathematically, we recall that:

Xht(ω) = H(ω)·Xr(ω) .   (1)

where, again, H(ω) = -j over the positive frequency range, and H(ω) = +j over the negative
frequency range. We show the non-zero imaginary part of H(ω) in Figure 2(a).

[Figure: (a) the imaginary part of H(ω) versus frequency, equal to +1 for negative frequencies and -1 for positive frequencies; (b) the same H(ω) drawn in three dimensions with real, imaginary, and frequency axes]

Figure 2. The complex frequency response of H(ω).

To fully depict the complex H(ω), we show it as "floating" in a 3-dimensional space in
Figure 2(b). The bold curve in the "3D-box" of Figure 2(b) is our complex H(ω). On the
right side of Figure 2(b) is an upright "plane" on which we can project the imaginary part of
H(ω). At the bottom of Figure 2(b) is a flat plane on which we can project the real part of
H(ω). Using the axis conventions of Figure 2(b), we see that H(ω) = 0 +j1 for negative
frequencies and H(ω) = 0 -j1 for positive frequencies. (We introduce the 3D axes of
Figure 2(b) now because we'll be using it to look at other complex frequency-domain
functions later in this discussion. While, admittedly, 3D drawings are a bit "cluttered", at
least here we won't have to wear those silly red-green cardboard glasses.)

To show a simple example of a HT, and to reinforce our graphical notation, the 3-
dimensional diagrams in Figure 3 show that the HT of a real cosine wave cos(ωt) is a
sinewave sin(ωt).

[Figure: left panels show the real cosine and sine waves versus time; right panels show their complex spectra versus frequency]
Figure 3. The Hilbert transform of cos(ωt) is sin(ωt).

The complex spectra on the right side of Figure 3 show that the HT rotates the cosine
wave's positive frequency spectral component by -j, and the cosine wave's negative
frequency spectral component by +j. You can see on the right side of Figure 3 that our
definition of the +j multiplication operation is a +90 o rotation of a complex value (phasor)
counterclockwise about the Frequency axis. (The length of those phasors is half the peak
amplitude of the original cosine wave.) We're assuming those sinusoids on the left in
Figure 3 exist for all time, and that's what allows us to show their spectra as infinitely
narrow "impulses" in the frequency domain. (If you're unaccustomed to thinking about
negative frequencies, or you don't know why they're necessary when we discuss complex
signals, the discussion in Reference [1] may prove helpful.)
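
To make the Figure 3 example concrete, here is a minimal NumPy sketch (my own illustration, not part of the original paper) that builds H(ω) = -j·sign(ω) on FFT bins and verifies that it maps a cosine into a sine; the record length N and the tone's 8 cycles are arbitrary choices.

# A minimal NumPy sketch (not from the paper) of the Figure 3 example: multiply the
# spectrum of a cosine by H(omega) = -j*sign(omega) and confirm the result is a sine.
import numpy as np

N = 256                                # samples in one record (assumed value)
n = np.arange(N)
cycles = 8                             # integer number of cycles so the DFT bins line up
x_cos = np.cos(2 * np.pi * cycles * n / N)

X = np.fft.fft(x_cos)
freqs = np.fft.fftfreq(N)              # positive and negative bin frequencies
H = -1j * np.sign(freqs)               # -j for positive freqs, +j for negative freqs, 0 at DC
x_ht = np.fft.ifft(H * X).real         # Hilbert transform of the cosine

print(np.allclose(x_ht, np.sin(2 * np.pi * cycles * n / N)))   # True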

Now that we have this frequency response of the HT defined, it's reasonable for the
beginner to ask: "Why would anyone want a process whose frequency domain response is
that weird H(ω) in Figure 2(b)?"

Why Care About the Hilbert Transform?

The answer is: we need to understand the HT because it's useful in so many complex-
signal processing applications. A brief "search" on the Internet reveals HT-related signal
processing techniques being used in the following applications:

- quadrature modulation and demodulation (communications),


- analysis of 2-D and 3-D complex signals,
- medical imaging, seismic data, and ocean wave analysis,
- radar signal processing,
- time-domain signal analysis using wavelets,
- time difference of arrival (TDOA) measurements,
- high definition television (HDTV) receivers,
- loudspeaker and room acoustics analysis,
- mechanical vibration analysis,
- color image compression,
- nonlinear & nonstationary system analysis,
- whitens your teeth...No no! Just jokin'. You get the idea.

All of these applications employ the HT to either generate, or measure, complex time-
domain signals, and that's where the HT's power lies. The HT delivers to us, literally,
another dimension of signal processing capability as we move from 2-dimensional
real signals to 3-dimensional complex signals. Here's how.

Let's consider a few mathematical definitions. If we start with a real time-domain signal
xr(t), we can associate with it a complex signal xc (t), defined as:

xc (t) = xr(t) + jxi(t) . (2)

The complex signal xc (t) is known as an "analytic" signal, and the key here is that its
imaginary part, xi(t), is the HT of the original xr(t) as shown in Figure 4.

[Figure: xr(t) passes straight through and also drives a Hilbert transformer, h(t), producing xi(t); together they form xc(t) = xr(t) + jxi(t)]

Figure 4. Functional relationship between the xc (t) and xr(t) signals.

As we'll see shortly, in many "real world" signal analysis situations xc (t) is easier, or
more meaningful, to process than the original xr(t). Before we see why that is true, we'll
explore xc(t) further to give it some physical meaning. Consider a real
x1 r(t) = cos(ω1t) signal that's simply four cycles of a cosine wave and its HT x1i(t) sinewave
as shown in Figure 5. The x1c (t) analytic signal is the bold "corkscrew" function.

As if our 3D plots aren't cluttered enough, we've also shown on the left upright wall the
locus of points (dots) of the complex x1 c (t). We can think of that surface as the standard
complex plane, with its real and imaginary axes. (That locus of points looks like an ellipse
due to the unequal real and imaginary axes scaling in Figure 5. That locus of points is, of
course, a unit-radius circle.)

We can describe x1c(t) as a complex exponential by using one of Euler's equations. That is:

x1c(t) = x1r(t) + jx1i(t) = cos(ω1t) + jsin(ω1t) = e^(jω1t) .   (3)

If you're accustomed to thinking of the complex exponential e^(jω1t) as a point rotating
around the origin on the complex plane, then the locus of that point's path is the circle on
the left side of Figure 5. The spectra of the signals in Eq. (3) are shown in Figure 6.

[Figure: 3-D plot with time, real, and imaginary axes showing x1r(t) (the cosine), its Hilbert transform x1i(t) (the sine), and the bold corkscrew analytic signal x1c(t), with the circular locus of points projected on the left wall]
Figure 5. The Hilbert transform and the analytic signal of cos(ωt).

Notice three things in Figure 6. First, following the relationships in Eq. (3), if we rotate
X1 i(ω) by +90 o counterclockwise (+j) and add that to X1r(ω), we get X1c (ω). Second, note
that the magnitude of the component in X1c (ω) is double the magnitudes in X1r(ω). Third,
notice how X1c(ω) is zero over the negative frequency range. This property of zero
spectral content over negative frequencies is why x1c(t) is called an analytic signal. Some
people call X1c(ω) a "one-sided" spectrum.

[Figure: spectra X1r(ω) and X1i(ω) each have components at ±ω1, while X1c(ω) has a single component at +ω1 only]
Figure 6. The spectrum of cos(ω1t), its Hilbert transform sin(ω1t), and the analytic signal e^(jω1t).

To appreciate the physical meaning of our discussion here, let's remember that x1c (t) is
not just a mathematical abstraction. We can generate x1c (t) in our laboratory and transmit it
to the lab down the hall. All we need is two sinusoidal signal generators, set to the same
frequency. (However, somehow we have to synchronize those two hardware generators so
that their relative phase is fixed at 90o .) Next we connect coax cables to the generators'
output connectors and run those two cables, labeled "x1r" and "x1i", down the hall to their
destination.

Now for a two-question "pop quiz". In the other lab, what would be seen on the screen of
an oscilloscope if the continuous x1 r and x1i signals were connected to the horizontal and
vertical input channels, respectively, of the scope? (Remembering, of course, to set the
scope's Horizontal Sweep control to the "External" position.) Next, what would be seen on
the scope's display if the cables were mislabeled and the two signals were inadvertently
"swapped"? The answer to the first question is that we'd see a "spot" rotating
counterclockwise in a circle on the scope's display. That's the "circle" in Figure 5. If the
cables were swapped, we'd see another circle, but this time it would be orbiting in the
clockwise direction. This would be a "neat" little demo for students if we set the signal
generators' frequencies to, say, 1 Hz.
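
For readers who'd like to try the "pop quiz" numerically, here is a small sketch (my own, with an assumed 1-second record of a 1 Hz tone) that checks the rotation direction of the scope spot for correct and swapped cabling.

# A small numerical sketch (assumed setup, not from the paper) of the X-Y oscilloscope
# "pop quiz": with cos on the horizontal channel and sin on the vertical channel the spot
# orbits counterclockwise; swapping the two cables reverses the direction of rotation.
import numpy as np

t = np.linspace(0.0, 1.0, 1000, endpoint=False)    # one second of "lab" time (arbitrary)
f0 = 1.0                                            # 1 Hz, as suggested in the text
x1r = np.cos(2 * np.pi * f0 * t)                    # horizontal channel
x1i = np.sin(2 * np.pi * f0 * t)                    # vertical channel (the HT of x1r)

def rotation_direction(horiz, vert):
    """Return the spot's orbit direction from its unwrapped angle."""
    angle = np.unwrap(np.arctan2(vert, horiz))
    return "counterclockwise" if angle[-1] > angle[0] else "clockwise"

print(rotation_direction(x1r, x1i))   # counterclockwise (correct cabling)
print(rotation_direction(x1i, x1r))   # clockwise (cables swapped)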

To illustrate the utility of this analytic signal xc (t) notion, let's now say that analytic
signals are very useful in measuring instantaneous characteristics of a time domain signal.
That is, measuring the magnitude, phase, or frequency of a signal at some given instant in
time. This idea of instantaneous measurements doesn't seem so profound when we think of
characterizing, say, a pure sinewave. But if we think of a more complicated signal, like a
modulated sinewave, instantaneous measurements can be very meaningful. If a real
sinewave, xr(t), is amplitude modulated so that its "envelope" contains information, from an
analytic version of the signal we can measure the instantaneous envelope Env(t) value using:

Env(t) = |xc(t)| = √( xr(t)² + xi(t)² ) .   (4)

That is, the envelope of the signal is equal to the magnitude of xc (t). If the real xr(t)
were an unmodulated sinewave, the envelope (ideally) would be a constant equal to the
sinewave's peak amplitude.

Suppose, on the other hand, that some real xr(t) sinewave is phase modulated. We can
generate and measure xc(t)'s instantaneous phase φ(t), using:

φ(t) = tan⁻¹( xi(t) / xr(t) ) .   (5)

Calculating φ(t) is equivalent to phase demodulation of xr(t). Likewise (and more often
implemented), should a real sinewave carrier be frequency modulated, we can measure its
instantaneous frequency freq(t) by calculating the instantaneous time rate of change of
xc(t)'s instantaneous phase using:

freq(t) = d/dt [φ(t)] = d/dt [ tan⁻¹( xi(t) / xr(t) ) ] .   (6)

Calculating freq(t) is equivalent to frequency demodulation of xr(t). By the way, if φ(t) is
measured in radians, then freq(t) in Eq. (6) is measured in radians/second. Dividing
freq(t) by 2π will give it dimensions of Hz.
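
As a hedged illustration of Eqs. (4) through (6) (my own example, not from the paper), the sketch below uses scipy.signal.hilbert to form the analytic signal of an amplitude-modulated tone and then extracts the envelope, instantaneous phase, and instantaneous frequency; the sample rate, carrier, and modulation values are arbitrary.

# A sketch of Eqs. (4)-(6) using scipy.signal.hilbert, which returns the analytic
# signal xc(n) = xr(n) + j*xi(n). All numbers below are assumed test values.
import numpy as np
from scipy.signal import hilbert

fs = 1000.0                                    # sample rate, Hz (assumed)
t = np.arange(0.0, 1.0, 1.0 / fs)
carrier = 50.0                                 # carrier frequency, Hz (assumed)
envelope_true = 1.0 + 0.5 * np.cos(2 * np.pi * 3.0 * t)    # slow AM envelope
xr = envelope_true * np.cos(2 * np.pi * carrier * t)

xc = hilbert(xr)                               # analytic signal
env = np.abs(xc)                               # Eq. (4): instantaneous envelope
phase = np.unwrap(np.angle(xc))                # Eq. (5), unwrapped to avoid 2*pi jumps
freq_hz = np.diff(phase) / (2 * np.pi) * fs    # Eq. (6), converted from rad/s to Hz

print(np.max(np.abs(env[50:-50] - envelope_true[50:-50])))   # small: envelope recovered
print(np.mean(freq_hz[50:-50]))                               # close to the 50 Hz carrier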

Now that you're sufficiently convinced of the utility of the HT, let's obtain the HT's
time-domain impulse response and then see how discrete Hilbert transformers are built.

The Impulse Response of the Hilbert Transform

Instead of following tradition and just "plopping down" the HT's discrete time-domain
impulse response equation, at no extra charge we'll show how to arrive at that expression.
To determine the HT's impulse response expression, we take the inverse Fourier transform
of the HT's frequency response H(ω). The garden-variety continuous inverse Fourier
transform of an arbitrary frequency function X(f) is defined as:

x(t) = ∫_{-∞}^{+∞} X(f)·e^(j2πft) df ,   (7)

where f is frequency measured in cycles/second (Hertz). We'll make three changes to
Eq. (7). First, in terms of our original frequency variable ω = 2πf radians/second, and
because df = dω/2π, we substitute dω/2π in for the df term. Second, because we know our
discrete frequency response will be periodic with a repetition interval of the sampling
frequency ωs , we'll evaluate Eq. (7) over the frequency limits of -ωs /2 to +ωs /2. Third, we
partition the original integral into two separate integrals. That algebraic massaging gives us
the following:

h(t) = (1/2π) ∫_{-ωs/2}^{+ωs/2} H(ω)e^(jωt) dω
     = (1/2π) [ ∫_{-ωs/2}^{0} je^(jωt) dω + ∫_{0}^{+ωs/2} (-j)e^(jωt) dω ]
     = (1/2πt) ( [e^(jωt)]_{-ωs/2}^{0} - [e^(jωt)]_{0}^{+ωs/2} )
     = (1/2πt) (e^(j0) - e^(-jωs·t/2) - e^(jωs·t/2) + e^(j0)) = (1/2πt)·(2 - 2cos(ωs·t/2))
     = (1/πt)·(1 - cos(ωs·t/2))   (8)

Whew! OK, we're going to plot this impulse response shortly, but first we have one
hurdle to overcome. Heartache occurs when we plug t = 0 into Eq. (8) because we end up
with the indeterminate ratio 0/0. Well, hardcore mathematics to the rescue here. We
merely pull out the Marquis de L'Hopital's Rule, take the derivatives of the numerator and
denominator of Eq. (8), and then set t = 0 to determine h(0). Following through on this:

h(0) = [d/dt (1 - cos(ωs·t/2))] / [d/dt (πt)] evaluated as t → 0
     = [(ωs/2)·sin(ωs·t/2)] / π evaluated as t → 0 = 0 .   (9)

So now we know that h(0) = 0. Let's find the discrete version of Eq. (8) because that's
the form we can model in software and actually use in our DSP work. We can "go digital"
by substituting the discrete time variable nts in for the continuous time variable t in Eq. (8).
Using the following definitions:

n - discrete time-domain integer index (..., -3, -2, -1, 0, 1, 2, 3, ...),
fs - sample rate measured in samples/second,
ts - time between samples, measured in seconds (ts = 1/fs), and
ωs - the radian sample rate, equal to 2πfs,

we can rewrite Eq. (8) in discrete form as:

h(n) = (1/(π·n·ts))·(1 - cos(ωs·n·ts/2))   (10)

Substituting 2πf s for ωs , and 1/fs for t s , we have:

h(n) = (1/(π·n·ts))·(1 - cos(2π·fs·n·(1/fs)/2))
     = (fs/(πn))·(1 - cos(πn)) , for n ≠ 0 ,
[h(n) = 0, for n = 0]   (11)

Finally, we plot the HT's h(n) impulse response in Figure 7.

That fs term in Eq. (11) is simply a scale factor. Its value does not affect the shape of
h(n). An informed reader might, at this time, jump up and say, "Wait a minute. Eq. (11)
doesn't look at all like the equation for the HT's impulse response that's in my DSP
textbook. What gives?" That reader would be correct because a popular expression in the
literature for h(n) is:

alternate form:  h(n) = 2sin²(πn/2) / (πn) .   (12)

There's no discrepancy here. (Watch how I wiggle out of this one.) First, the derivation
of Eq. (12) is based on the assumption that the fs sampling rate is normalized to unity.
Next, if you blow the dust off your old mathematical reference book, inside you'll find a
trigonometric "power relations" identity that states that sin2(α) = (1 - cos(2α)/2). If in Eq.
(11) we substitute 1 for f s , and substitute 2sin2(πn/2) for (1 - cos(πn)), we see that Eqs.
(11) and (12) are equivalent.
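
As a quick numerical sanity check (my own sketch, not from the paper), the code below evaluates both Eq. (11) with fs = 1 and Eq. (12) over n = -15, ..., 15 and confirms they agree, along with the anti-symmetry and the zeros at even n visible in Figure 7.

# Verify Eq. (11) (fs = 1) against Eq. (12), plus the anti-symmetry and even-n zeros.
import numpy as np

n = np.arange(-15, 16)
nz = n != 0

h_eq11 = np.zeros(n.size)
h_eq11[nz] = (1.0 - np.cos(np.pi * n[nz])) / (np.pi * n[nz])            # Eq. (11), fs = 1

h_eq12 = np.zeros(n.size)
h_eq12[nz] = 2.0 * np.sin(np.pi * n[nz] / 2.0) ** 2 / (np.pi * n[nz])   # Eq. (12)

print(np.allclose(h_eq11, h_eq12))          # True: the two forms are equivalent
print(np.allclose(h_eq11, -h_eq11[::-1]))   # True: h(n) is anti-symmetric
print(np.allclose(h_eq11[n % 2 == 0], 0))   # True: even-n samples are zero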

[Figure: plot of h(n) versus n for n = -15 to 15; the samples are non-zero only at odd n, with amplitudes 2/(πn)]

Figure 7. The Hilbert transform's discrete impulse response when fs = 1.

Looking again at Figure 7, we can give ourselves a warm fuzzy feeling about the validity
of our h(n) derivation. Notice how for n > 0 the values of h(n) are non-zero only when n is
odd. In addition, the amplitudes of those non-zero values decrease by factors of 1/1, 1/3,
1/5, 1/7, etc. Of what does that remind you? That's right, the Fourier series of a periodic
squarewave! That makes sense because our h(n) is the inverse Fourier transform of the
squarewave-like H(ω) in Figure 2. Furthermore, our h(n) is anti-symmetric, and this is
consistent with an H(ω) that is purely imaginary. (If we were to make h(n) symmetrical by
inverting its values for all n < 0, that new sequence would be proportional to the Fourier
series of a periodic real squarewave.)

Now that we have the expression for the HT's impulse response h(n), next we'll see how
it's used to build a discrete Hilbert transformer.
Designing a Discrete Hilbert Transformer

Discrete Hilbert transformations can be implemented in either the time or frequency
domains. Let's look at time-domain Hilbert transformers first.

Time Domain Hilbert Transformation - FIR Implementation

Looking back at Figure 4, and having h(n) available, we want to know how to generate the
discrete x i(n). Recalling the frequency-domain product in Eq. (1), we can say that xi(n) is
the convolution of xr(n) and h(k). Mathematically, this is:
xi(n) = ∑_{k=-∞}^{+∞} h(k)·xr(n-k) .   (13)

So this means we can implement a Hilbert transformer as a discrete finite impulse
response (FIR) filter structure similar to that shown in Figure 8.

Designing a time-domain Hilbert transformer amounts to determining those h(k) values
so the functional block diagram in Figure 4 can be implemented. Our first thought is to
merely take the h(n) coefficient values from Eq. (11), or Figure 7, and use them for the
h(k)'s in Figure 8. That's almost the right answer. Unfortunately, the Figure 7 h(n)
sequence is infinite in length, so we have to truncate that sequence. Figuring out what the
truncated h(n) should be is where the true design activity takes place.
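
As one possible illustration of that design activity (a sketch under my own assumptions: 15 taps, a Hamming window, and fs normalized to 1; the paper doesn't prescribe this exact recipe), we can sample the ideal h(n) of Eq. (11)/(12) symmetrically about n = 0 and window it.

# Truncate the ideal impulse response to an odd number of taps and window it.
import numpy as np

def hilbert_taps(num_taps):
    """Truncate the ideal h(n) of Eq. (11)/(12) to num_taps (odd) samples and window it."""
    if num_taps % 2 == 0:
        raise ValueError("this simple sketch assumes an odd (Type III) tap count")
    n = np.arange(num_taps) - (num_taps - 1) // 2     # ..., -2, -1, 0, 1, 2, ...
    h = np.zeros(num_taps)
    odd = n % 2 != 0
    h[odd] = 2.0 / (np.pi * n[odd])                   # non-zero only at odd n, per Figure 7
    return h * np.hamming(num_taps)                   # window to tame the truncation ripple

h1 = hilbert_taps(15)
print(np.round(h1, 4))     # anti-symmetric taps; every other coefficient is zero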

[Figure: tapped-delay-line FIR structure; xr(n) passes through a chain of delay elements, each tap is weighted by a coefficient h(k), and the weighted taps are summed to form xi(n)]
Figure 8. FIR implementation of a Hilbert transform.

To start with, we have to decide if our truncated h(n) sequence will have an odd or even
length. We make that decision by recalling that FIR implementations having anti-symmetric
coefficients and an odd, or even, number of taps are called a Type III, or a Type IV, system
respectively. [2-5] These two anti-symmetric filter types have the following unavoidable
restrictions with respect to their frequency magnitude responses |H(ω)|:

h(n) length:     Odd (Type III)     Even (Type IV)
|H(0)|           = 0                = 0
|H(ωs/2)|        = 0                no restriction
What this little table tells us is that odd-tap Hilbert transformers always have a zero
magnitude response at both zero Hz and at half the sample rate. Even-tap Hilbert
transformers always have a zero magnitude response at zero Hz. Let's look at some
examples.

Figure 9 shows the frequency response of a 15-tap (Type III, odd-tap) Hilbert
transformer whose coefficients are designated as h1(k). These plots have much to teach us:

a. For example, an odd-tap FIR implementation does indeed have a zero magnitude
response at 0 Hz and at ±half the sample rate (±fs/2 Hz). This means that odd-tap
(Type III) FIR implementations turn out to be bandpass in performance.

b. There's ripple in the H1(ω) passband. We should have expected that because we
were unable to use an infinite number of h(k) coefficients. Here, just as it does
when we're designing standard lowpass FIR filters, truncating the length of the
time-domain coefficients causes ripples in the frequency domain. (When we
abruptly truncate a function in one domain, Mother Nature pays us back by
invoking the Gibbs' phenomenon, resulting in ripples in the other domain.) You
guessed it. We can reduce the ripple in |H1(ω)| by "windowing" the truncated h(k)
sequence. However, windowing the coefficients will narrow the bandwidth of
H1(ω) somewhat, so using more coefficients may be necessary after windowing is
applied. You'll find windowing the truncated h(k) sequence to be to your advantage.

c. The phase response of H1(ω) is linear, as it should be when the coefficients'
absolute values are symmetrical. The slope of the phase curve (which is constant
in our case) is proportional to the time delay that a signal sequence experiences
traversing the FIR filter. More on this in a moment. That discontinuity in the
phase response at 0 Hz corresponds to π radians as Figure 2 tells us it should.
Whew, good thing. That's what we were after in the first place!
[Figure: three panels; the 15-tap h1(k) coefficient values, the magnitude of H1(ω) in dB, and the phase of H1(ω) in radians, each plotted over -fs/2 to fs/2]

Figure 9. H1(ω) frequency response of h1(k), a 15-tap Hilbert transformer.

In our "relentless pursuit" of correct results, we're forced to compensate for that linear
phase shift of H 1(ω)—that constant time value equal to the group delay of the filter—when
we generate our analytic xc (n). We do this by delaying, in time, the original xr(n) by an
amount equal to the group delay of the h1(k) FIR Hilbert transformer. Recall that the group
delay G of a K-tap FIR filter, measured in samples, is G = (K-1)/2 samples. So our block
diagram for generating a complex xc (n) signal, using a FIR structure, is that in Figure 10.
There we delay xr(n) by G = (15-1)/2 = 7 samples, generating the delayed sequence x'r(n).
This delayed sequence now aligns properly in time with xi(n).

If you're building your odd-tap FIR Hilbert transform in hardware, an easy way to obtain
x'r(n) is to "tap off" the original xr(n) sequence at the center tap of the FIR structure. If
you're implementing Figure 10 in software, the x'r(n) sequence can be had by inserting
G = 7 zeros at the beginning of the original xr(n) sequence.

[Figure: xr(n) drives both a 7-sample delay, producing x'r(n), and a 15-tap FIR Hilbert transformer, producing xi(n); the output is xc(n) = x'r(n) + jxi(n)]
Figure 10. Generating an xc (n) sequence when h(k) is a 15-tap FIR.
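
Here is a sketch of the Figure 10 arrangement (my own test setup; the fs/4 test tone and the Hamming-windowed 15-tap design are assumptions, not the paper's numbers), showing the G = 7 sample delay that time-aligns x'r(n) with xi(n).

# Build a 15-tap Hilbert transformer, delay xr(n) by G = 7 samples, and form xc(n).
import numpy as np

K, G = 15, 7                                       # taps and group delay (K-1)/2
k = np.arange(K) - G                               # tap indices ..., -1, 0, 1, ...
h1 = np.zeros(K)
odd = k % 2 != 0
h1[odd] = 2.0 / (np.pi * k[odd])                   # ideal Type III taps, Eq. (12), fs = 1
h1 *= np.hamming(K)                                # windowed to reduce passband ripple

xr = np.cos(2 * np.pi * 0.25 * np.arange(200))     # test tone at fs/4 (arbitrary choice)
xi = np.convolve(xr, h1)[:len(xr)]                 # causal FIR output (includes the delay)
xr_delayed = np.concatenate((np.zeros(G), xr))[:len(xr)]   # x'r(n): G zeros prepended

xc = xr_delayed + 1j * xi                          # xc(n) = x'r(n) + j*xi(n)
print(np.round(np.abs(xc[4 * G:8 * G]), 2))        # close to 1 once the filter has filled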

We can, for example, implement a Hilbert transformer using a Type IV FIR structure,
with its even number of taps. Figure 11 shows this notion where the coefficients are, say,
h2(k). See that the frequency magnitude response is non-zero at ±half the sample rate (±fs /2
Hz). Thus this even-tap filter approximates an ideal Hilbert transformer a little better than
an odd-tap implementation.

[Figure: two panels; the 14-tap h2(k) coefficient values and the magnitude of H2(ω) in dB, plotted over -fs/2 to fs/2]

Figure 11. H2(ω) frequency response of h2(k), a 14-tap Hilbert transformer.

Although not shown here, the negative slope of the phase response of H2(ω)
corresponds to a filter group delay of G = (14-1)/2 = 6.5 samples. This causes us trouble
because we then have to delay the original xr(n) sequence by a non-integer (fractional)
number of samples in order to achieve time alignment with xi(n). Fractional time-delay
filtering is far beyond the scope of this paper, but Reference [6] is a source for further
information on this topic.

Let's again remember that every other coefficient of a Type III (odd-tap) FIR Hilbert
transformer is zero. This makes the odd-tap Hilbert transformer more attractive than an
even-tap version from a computational workload standpoint. With so many of the odd-tap coefficients being zero,
almost half of the multiplications in Figure 8 can be eliminated for a Type III FIR Hilbert
transformer. You hardware designers might even be able to further reduce the number of
multiplications by a factor of two by using that "folded" FIR structure that's possible with
symmetric coefficients (keeping in mind that half the coefficients are negative).

Danger Will Robinson. Here’s a mistake that sometimes even the “pros” make. When
we design standard linear-phase FIR filters, we calculate the coefficients and then use them
in our hardware or software designs. As Jim Thomas, DSP engineer at BittWare Inc. likes
to remind us, sometimes we forget to “flip” the coefficients before we use them in a FIR
filter. This forgetfulness usually doesn’t hurt us because typical FIR coefficients are
symmetrical. Not so with HT FIR filters, so please don’t forget to reverse the order of your
HT coefficients before you use them for convolutional filtering.
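
As a small check of that coefficient-ordering point (my own sketch): library convolution routines such as np.convolve already compute y(n) = Σ h(k)·x(n-k), so no manual flip is needed there; the flip matters when you fill a buffer in chronological order (oldest sample first) and take a dot product yourself, as many hardware MAC loops do.

# Show that a chronological dot product needs reversed ("flipped") coefficients to
# match textbook convolution.
import numpy as np

rng = np.random.default_rng(0)
h = rng.standard_normal(15)          # stand-in coefficients (any values work here)
x = rng.standard_normal(100)
n = 40                               # some output index past the filter's start-up

frame = x[n - len(h) + 1 : n + 1]    # chronological window: x(n-14), ..., x(n)
y_dot = np.dot(h[::-1], frame)       # reversed coefficients against the chronological frame
y_ref = np.convolve(x, h)[n]         # textbook convolution at the same index

print(np.isclose(y_dot, y_ref))      # True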
Time Domain Hilbert Transformation - Mixing Technique

Many practitioners now use a time-domain mixing technique to achieve Hilbert
transformation when dealing with real bandpass signals. This scheme is sweet in its
simplicity, and avoids some of the pitfalls of the process in Figure 10. In this mixing
technique we create two separate FIR filters with essentially equal magnitude responses, but
whose phase responses differ by exactly 90 o as shown in Figure 12.

[Figure: spectra Xr(ω), |HBP(ω)|, and Xc(ω), each drawn near ±ωc; below, a block diagram where xr(n) drives hcos(k) (the real parts of hBP(k)) to produce xI(n) and hsin(k) (the imaginary parts of hBP(k)) to produce xQ(n), with xc(n) = xI(n) + jxQ(n)]
Figure 12. Generating an xc (n) sequence with two real FIR filters.

Here's how it's done. A standard K-tap FIR lowpass filter is designed, using your
favorite FIR design software, to have a two-sided bandwidth that's slightly wider than the
original real bandpass signal of interest. The real coefficients of the lowpass filter, hLP(k),
are then multiplied by the complex exponential e^(jωc·nts), using the following definitions:

ωc - the center frequency, in radians/second, of the original bandpass signal (ωc = 2πfc),
fc - the center frequency, in Hz,
n - the time index of the lowpass filter coefficients (n = 0, 1, 2, ..., K-1),
ts - the time between samples, measured in seconds (ts = 1/fs), and
fs - the sample rate of the original bandpass signal sequence.

The result of this process is a set of complex coefficients, hBP(k) = hcos(k) + jhsin(k), for a
complex bandpass filter centered at fc Hz. Next we use the real and imaginary parts of that
filter's h BP(k) coefficients in two separate real-valued coefficient FIR filters as shown in
Figure 12.
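
The sketch below walks through that coefficient "mixing" step (my own example; the sample rate, center frequency, bandwidth, and 31-tap lowpass are assumed values, and scipy.signal.firwin stands in for "your favorite FIR design software").

# Design a real lowpass hLP(k), mix it up to fc, and split it into the two Figure 12 filters.
import numpy as np
from scipy.signal import firwin

fs = 1000.0                       # sample rate, Hz (assumed)
fc = 250.0                        # bandpass center frequency, Hz (assumed; fs/4 here)
bw = 100.0                        # two-sided bandwidth of the signal of interest (assumed)
K = 31                            # lowpass tap count (assumed)

h_lp = firwin(K, (bw / 2) * 1.1, fs=fs)                    # lowpass slightly wider than bw/2
n = np.arange(K)
h_bp = 2.0 * h_lp * np.exp(1j * 2 * np.pi * fc * n / fs)   # the factor of 2 restores the gain
h_cos, h_sin = h_bp.real, h_bp.imag                        # the two real FIR filters of Figure 12

# I and Q channels for a real input xr(n):
xr = np.cos(2 * np.pi * fc * np.arange(2000) / fs)         # test tone at fc (arbitrary)
xI = np.convolve(xr, h_cos, mode="same")
xQ = np.convolve(xr, h_sin, mode="same")
xc = xI + 1j * xQ                                          # approximately analytic, centered at fc

With fc set to fs/4 as above, hcos(k) keeps only the even-indexed coefficients and hsin(k) only the odd-indexed ones, which is the workload saving noted in item (d) below.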

In DSP parlance, the filter producing the xI(n) sequence is called the “I channel” for in-
phase, and the filter generating xQ(n) is called the “Q-channel” for quadrature phase. There
are several interesting aspects of this mixing HT scheme in Figure 12:

a. The mixing of the hLP(k) coefficients by e^(jωc·nts) induces a loss, by a factor of 2, in
the frequency magnitude responses of the two real FIR filters. Doubling the hLP(k)
values before mixing will eliminate that loss.

b. We can window the hLP(k) coefficients before mixing to reduce the passband ripple
in, and to minimize differences between, the magnitude responses of the two final
real filters. Again, windowing will degrade the filter's rolloff somewhat, so using
more coefficients may be necessary after windowing is applied.

c. Odd or even-tap lowpass filters can be used, with equal ease, in this technique.

d. If the original bandpass signal's, and the complex passband filter's, center frequency
is one fourth the sample rate, f s /4, count your blessings. That's because half the
coefficients of each real filter are zero, reducing our FIR computational workload
by a factor of two.

e. For hardware applications, both of the two real FIR filters in Figure 12 must be
implemented. If your Hilbert transformations are strictly a high-level software
language exercise, such as MatLab, the language being used may allow hBP(k) to be
implemented as a single complex filter.

f. The standard FIR implementation of Hilbert transformers, in Figure 10, has
magnitude rolloff in the Q channel (Figure 9) but not in the I channel. The mixing
Hilbert scheme, however, can achieve almost identical I and Q channel magnitude
responses, making it a better approximation to "true quadrature processing".

Frequency Domain Hilbert Transformation

Here's a frequency-domain Hilbert processing scheme that deserves mention because
the HT of xr(n), and the analytic xc(n) sequence, can be generated simultaneously. We
merely take an N-point FFT of an N-length xr(n) signal sequence and zero out the negative-
frequency FFT bins, leaving us with a "one-sided" Xos(m) spectrum. Next we double the
magnitudes of the remaining positive-frequency bins of Xos(m). (See Figure 6.) Finally, we
perform an N-point inverse FFT on the new Xos(m), the result being the analytic xc(n)
sequence. The imaginary part of xc(n) is the HT of the original xr(n). Done!
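
Here is a minimal FFT-based sketch of that scheme (my own implementation; note that it leaves the DC and Nyquist bins unscaled, a bookkeeping detail the description above glosses over, so that the real part of the result reproduces xr(n) exactly).

# FFT-based analytic signal: zero the negative-frequency bins, double the positive ones.
import numpy as np

def analytic_from_fft(xr):
    N = len(xr)
    X = np.fft.fft(xr)
    gain = np.zeros(N)
    gain[0] = 1.0                          # DC bin left untouched
    if N % 2 == 0:
        gain[N // 2] = 1.0                 # Nyquist bin left untouched (even N)
        gain[1:N // 2] = 2.0               # double the positive-frequency bins
    else:
        gain[1:(N + 1) // 2] = 2.0
    return np.fft.ifft(gain * X)           # negative-frequency bins are zeroed by gain = 0

xr = np.cos(2 * np.pi * 12 * np.arange(128) / 128)   # arbitrary test tone, 12 cycles in N = 128
xc = analytic_from_fft(xr)
print(np.allclose(xc.real, xr))                                               # True
print(np.allclose(xc.imag, np.sin(2 * np.pi * 12 * np.arange(128) / 128)))    # True: the HT of xr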

If your HT application is amenable to FFT processing, this technique is sure worth
thinking about because there's none of the heartache associated with time-domain FIR
implementations to worry about.

Conclusions

This ends our HT tutorial. We learned that the HT can be used to build 90o "phase
shifters" in order to generate complex "analytic" signals. The HT has straightforward
definitions in both the time and frequency domains. We saw that Hilbert transformation
performed in the time domain is essentially an exercise in lowpass filter design. As such,
an ideal discrete FIR Hilbert transformer, like an ideal lowpass FIR filter, cannot be
achieved in practice. Fortunately, we can usually ensure that the passband of our Hilbert
transformer covers the bandwidth of the original signal we're phase shifting by ±90o. We
saw that using more filter taps improves the transformer’s performance, and that choosing
odd or even taps depends on whether the gain at the folding frequency should be zero or not,
and whether an integer-sample delay is needed. We also introduced the topic of frequency-
domain Hilbert transformation with its simplicity over time-domain implementations.

Now that you know the fundamental characteristics of the HT, and because commercial
Hilbert transformer design software is so readily available, you too can start taking
advantage of enhanced signal processing schemes that use the HT and quadrature signals.

References

A quick note: here are readily available books that may be helpful in your pursuit of
further information on complex signals and the Hilbert transform. Reference [1] does not
cover the Hilbert transform, but it does have a very good discussion of complex signals and
the notion of negative frequency. References [2-5] each discuss the Hilbert transform, with
Reference [4] being the most comprehensive. I have not seen Reference [5], but it
reportedly contains much Hilbert transform information along with IIR filter
implementations. I also do not have a copy of Reference [7]. However, when I learned of its
existence, I checked Amazon.com and saw from its Table of Contents that it covered a wide
array of Hilbert transform topics. Reference [8] is a website offering high-performance,
but low cost, FIR and FIR Hilbert design software.

[1] R. G. Lyons, Understanding Digital Signal Processing, Addison Wesley, Reading,
Massachusetts, 1997, Appendix C.
[2] J. G. Proakis and D. G. Manolakis, Digital Signal Processing: Principles, Algorithms and
Applications, Prentice-Hall, Upper Saddle River, New Jersey, 1996. pp. 618 & 657.
[3] L. R. Rabiner and B. Gold, The Theory and Application of Digital Signal Processing,
Prentice-Hall, Englewood Cliffs, New Jersey, 1975. pp. 67 & 168.
[4] A. V. Oppenheim and R. W. Schafer, Discrete-Time Signal Processing, Prentice-Hall,
Englewood Cliffs, New Jersey, 1st Ed. 1989, 2nd Ed. 1999.
[5] P. A. Regalia, "Special Filter Designs," in: S. K. Mitra and J. F. Kaiser (ed.), Handbook
for Digital Signal Processing, John Wiley & Sons, Inc., New York, 1993.
[6] T. I. Laakso, et al., "Splitting the Unit Delay," IEEE Signal Processing Magazine,
January 1996.
[7] S. L. Hahn, Hilbert Transforms In Signal Processing, Artech House Publishing, 1997.
[8] Iowegian International, Inc., creators of ScopeDSP and ScopeFIR software for the PC.
Website: http://www.iowegian.com.
