Standard Handbook of Broadcast Engineering
Additional updates relating to broadcast engineering in general, and this book in particular, can be found at the
Standard Handbook of Broadcast Engineering web site:
www.tvhandbook.com
The tvhandbook.com web site supports the professional audio/video community with news, updates, and
product information relating to the broadcast, post production, and business/industrial applications of digital
video.
Check the site regularly for news, updated chapters, and special events related to audio/video engineering.
The technologies encompassed by the Standard Handbook of Broadcast Engineering are changing rapidly,
with new standards proposed and adopted each month. Changing market conditions and regulatory issues are
adding to the rapid flow of news and information in this area.
Specific services found at www.tvhandbook.com include:
• Audio/Video Technology News. News reports and technical articles on the latest developments in digital
radio and television, both in the U.S. and around the world. Check in at least once a month to see what's
happening in the fast-moving area of digital broadcasting.
• Resource Center. Check for the latest information on professional and broadcast audio/video systems. The
Resource Center provides updates on implementation and standardization efforts, plus links to related web
sites.
• tvhandbook.com Update Port. Updated material for the Standard Handbook of Broadcast Engineering is
posted on the site each month. Material available includes updated sections and chapters in areas of rapidly
advancing technologies.
In addition to the resources outlined above, detailed information is available on other books in the
McGraw-Hill Video/Audio Series.
Editors
McGraw-Hill
New York San Francisco Washington D.C. Auckland Bogotá Caracas Lisbon London
Madrid Mexico City Milan Montreal New Delhi San Juan Singapore Sydney Tokyo Toronto
ISBN 0-07-145100-5
The sponsoring editor for this book was Steve Chapman. The production supervisor was Sherri Souffrance. This book
was set in Times New Roman and Helvetica by Technical Press, Morgan Hill, CA. The art director for the cover was
Anthony Landi.
Printed and bound by RR Donnelley.
McGraw-Hill books are available at special quantity discounts to use as premiums and sales promotions, or for use in
corporate training programs. For more information, please write to the Director of Special Sales, McGraw-Hill
Professional, 2 Penn Plaza, New York, NY 10121-2298. Or contact your local bookstore.
Information contained in this work has been obtained by The McGraw-Hill Companies, Inc. (“McGraw-Hill”) from
sources believed to be reliable. However, neither McGraw-Hill nor its authors guarantee the accuracy or completeness of
any information published herein, and neither McGraw-Hill nor its authors shall be responsible for any errors, omissions,
or damages arising out of use of this information. This work is published with the understanding that McGraw-Hill and its
authors are supplying information but are not attempting to render engineering or other professional services. If such
services are required, the assistance of an appropriate professional should be sought.
This book is printed on recycled, acid-free paper containing a minimum of 50% recycled, de-inked fiber.
Contents
Contributors
Preface
Contributors
Some of the chapters authored by these contributors have been adapted from the McGraw-Hill Standard
Handbook of Video and Television Engineering, 3rd edition, and the Standard Handbook of Audio/Radio
Engineering, 2nd edition. Used with permission. All rights reserved.
Preface
The broadcast industry has embarked on the most significant transition of technologies and business models in
the history of radio and television. Dramatic advancements in computer systems, imaging, display, and
compression technologies have vastly reshaped the technical landscape, and these advances have affected
broadcasting. For television, the transition to digital television (DTV) is already well underway; and for radio,
digital audio broadcasting (DAB) is beginning to take hold. These changes give rise to a new book in the
McGraw-Hill Video/Audio Series, the Standard Handbook of Broadcast Engineering. This handbook
continues the rich tradition of past offerings, examining analog and digital broadcast technologies, including
AM radio, FM radio, DAB, NTSC, and DTV.
This book is a companion volume to the other landmark handbooks in the McGraw-Hill Video/Audio
Series:
• Standard Handbook of Video and Television Engineering, 4th ed., which focuses on video information
theory, production standards and equipment, and digital coding.
• Standard Handbook of Audio and Radio Engineering, 2nd ed., which focuses on audio capture, storage, and
reproduction systems.
The Standard Handbook of Broadcast Engineering picks up where these books leave off—covering in
considerable detail the transmission/reception aspects of broadcasting. In earlier editions of these cornerstone
books, the transmission elements of radio and television were included along with the production elements.
However, as the handbooks have grown, and as the scope of broadcasting has expanded to encompass both
analog and digital transmission, the practical limitations of page count for a printed book come into play.
Therefore, to maximize the amount of information that can be made available to readers, the radio and
television broadcast/reception elements have been separated into this stand-alone publication.
Structure
In any large handbook, finding the information that a reader needs can be a major undertaking. The sheer size
of a 1000-plus-page book makes finding a specific table, reference, or tutorial section a challenge. For this
reason, the Standard Handbook of Broadcast Engineering has been organized into—essentially—eleven
separate “books.” The section titles listed in the Table of Contents outline the scope of the handbook and
each section is self-contained. The section introductions include a detailed table of contents and a complete
listing of references cited in the section. It is the goal of this approach to make the book easier to use and
more useful on the job. In addition, a master subject index is provided at the end of the book.
Sources of Information
The editor has made every effort to cover the subject of broadcast technology in a comprehensive manner.
Extensive references are provided at the end of each chapter to direct readers to sources of additional
information.
Considerable detail has been included on the Advanced Television Systems Committee (ATSC) digital
television (DTV) system. These chapters are based on documents published by the ATSC, and the editor
gratefully acknowledges this contribution.
Within the limits of a practical page count, there are always more items that could be examined in greater
detail. Excellent books on the subject of broadcast engineering are available that cover areas that may not be
addressed in this handbook, notably the National Association of Broadcasters’ NAB Engineering Handbook,
9th edition, and the Proceedings of the NAB Broadcast Engineering Conference, published annually by NAB.
For more information, see the NAB Web site at http://www.nab.org.
The field of science encompassed by radio and television engineering is broad and exciting. It is an area of
growing importance to market segments of all types and—of course—to the public. It is the intent of the
Standard Handbook of Broadcast Engineering to bring these diverse concepts and technologies together in an
understandable form.
Jerry C. Whitaker
Editor-in-Chief
Section 1
Frequency Bands, Propagation, and Modulation
The usable spectrum of electromagnetic radiation frequencies extends over a range from below 100 Hz for
power distribution to 10^20 Hz for the shortest X rays. The lower frequencies are used primarily for terrestrial
broadcasting and communications. The higher frequencies include visible and near-visible infrared and
ultraviolet light, and X rays. The frequencies typically of interest to RF engineers range from 30 kHz to 30
GHz.
In This Section:
Bibliography
Chapter 1.2: Propagation
Introduction
Propagation in Free Space
Transmission Loss Between Antennas in Free Space
Propagation Over Plane Earth
Field Strengths Over Plane Earth
Transmission Loss Between Antennas Over Plane Earth
Propagation Over Smooth Spherical Earth
Propagation Beyond the Line of Sight
Effects of Hills, Buildings, Vegetation, and the Atmosphere
Effects of Hills
Effects of Buildings
Effects of Trees and Other Vegetation
Effects of the Lower Atmosphere (Troposphere)
Stratification and Ducts
Tropospheric Scatter
Atmospheric Fading
Effects of the Upper Atmosphere (Ionosphere)
References
Chapter 1.3: Frequency Sources and References
Introduction
Characteristics of Crystal Devices
Frequency Stabilization
Equivalent Circuit of a Quartz Resonator
Temperature Compensation
Stress Compensation
Aging Effects
Oscillators
Key Terms
Phase-Locked Loop Synthesizers
Practical PLL Circuits
Fractional-Division Synthesizers
Multiloop Synthesizers
Direct Digital Synthesis
References
Bibliography
Chapter 1.4: Modulation Systems and Characteristics
Introduction
Amplitude Modulation
Vestigial-Sideband Amplitude Modulation
Single-Sideband Amplitude Modulation
Quadrature Amplitude Modulation (QAM)
Frequency Modulation
Bibliography
Eckersley, T. L.: “Ultra-Short-Wave Refraction and Diffraction,” J. Inst. Elec. Engrs., pg. 286,
March 1937.
Epstein, J., and D. Peterson: “An Experimental Study of Wave Propagation at 850 Mc,” Proc.
IRE, pg. 595, May 1953.
Fink, D. G., (ed.): Television Engineering Handbook, McGraw-Hill, New York, N.Y., 1957.
Fink, D., and D. Christiansen (eds.): Electronics Engineers’ Handbook, 3rd ed., McGraw-Hill,
New York, N.Y., 1989.
Frerking, M. E.: Crystal Oscillator Design and Temperature Compensation, Van Nostrand Reinhold, New
York, N.Y., 1978.
Handbook of Physics, McGraw-Hill, New York, N.Y., 1958.
Hietala, Alexander W., and Duane C. Rabe: “Latched Accumulator Fractional-N Synthesis With
Residual Error Reduction,” United States Patent, Patent No. 5,093,632, March 3, 1992.
Jordan, Edward C. (ed.): Reference Data for Engineers: Radio, Electronics, Computer and Communications,
7th ed., Howard W. Sams, Indianapolis, IN, 1985.
Judd, D. B., and G. Wyszecki: Color in Business, Science and Industry, 3rd ed., John Wiley and
Sons, New York, N.Y.
Kaufman, Ed: IES Illumination Handbook, Illumination Engineering Society.
King, Nigel J. R.: “Phase Locked Loop Variable Frequency Generator,” United States Patent,
Patent No. 4,204,174, May 20, 1980.
Kubichek, Robert: “Amplitude Modulation,” in The Electronics Handbook, Jerry C. Whitaker
(ed.), CRC Press, Boca Raton, Fla., pp. 1175–1187, 1996.
Lapedes, D. N. (ed.): The McGraw-Hill Encyclopedia of Science & Technology, 2nd ed.,
McGraw-Hill, New York, N.Y.
Longley, A. G., and P. L. Rice: “Prediction of Tropospheric Radio Transmission over Irregular
Terrain—A Computer Method,” ESSA (Environmental Science Services Administration),
U.S. Dept. of Commerce, Report ERL (Environment Research Laboratories) 79-ITS 67,
July 1968.
McPetrie, J. S., and L. H. Ford: “An Experimental Investigation on the Propagation of Radio
Waves over Bare Ridges in the Wavelength Range 10 cm to 10 m,” J. Inst. Elec. Engrs., pt.
3, vol. 93, pg. 527, 1946.
Megaw, E. C. S.: “Some Effects of Obstacles on the Propagation of Very Short Radio Waves,” J.
Inst. Elec. Engrs., pt. 3, vol. 95, no. 34, pg. 97, March 1948.
National Bureau of Standards Circular 462, “Ionospheric Radio Propagation,” June 1948.
NIST: Manual of Regulations and Procedures for Federal Radio Frequency Management, September 1995
edition, revisions for September 1996, January and May 1997, NTIA, Washington, D.C., 1997.
Norgard, John: “Electromagnetic Spectrum,” NAB Engineering Handbook, 9th ed., Jerry C.
Whitaker (ed.), National Association of Broadcasters, Washington, D.C., 1999.
Wyszecki, G., and W. S. Stiles: Color Science, Concepts and Methods, Quantitative Data and
Formulae, 2nd ed., John Wiley and Sons, New York, N.Y.
Ziemer, Rodger E.: “Pulse Modulation,” in The Electronics Handbook, Jerry C. Whitaker (ed.),
CRC Press, Boca Raton, Fla., pp. 1201–1212, 1996.
Chapter 1.1
The Electromagnetic Spectrum
John Norgard
1.1.1 Introduction
The electromagnetic (EM) spectrum consists of all forms of EM radiation—EM waves (radiant
energy) propagating through space, from dc to light to gamma rays. The EM spectrum can be
arranged in order of frequency and/or wavelength into a number of regions, usually wide in
extent, within which the EM waves have some specified common characteristics, such as
characteristics relating to the production or detection of the radiation. A common example is the
spectrum of the radiant energy in white light, as dispersed by a prism, to produce a “rainbow” of
its constituent colors. Specific frequency ranges are often called bands; several contiguous
frequency bands are usually called spectrums; and sub-frequency ranges within a band are
sometimes called segments.
The EM spectrum can be displayed as a function of frequency (or wavelength). In air, frequency
and wavelength are inversely proportional, f = c/λ (where c ≈ 3 × 10^8 m/s, the speed of light in
a vacuum). The MKS unit of frequency is the hertz and the MKS unit of wavelength is the meter.
Frequency is also measured in the following sub-units:
• Kilohertz, 1 kHz = 10^3 Hz
• Megahertz, 1 MHz = 10^6 Hz
• Gigahertz, 1 GHz = 10^9 Hz
• Terahertz, 1 THz = 10^12 Hz
• Petahertz, 1 PHz = 10^15 Hz
• Exahertz, 1 EHz = 10^18 Hz
Or for very high frequencies, electron volts, 1 eV ~ 2.41 × 10^14 Hz. Wavelength is also measured
in the following sub-units:
• Centimeters, 1 cm = 10^-2 m
• Millimeters, 1 mm = 10^-3 m
• Micrometers, 1 μm = 10^-6 m (microns)
• Nanometers, 1 nm = 10^-9 m
• Ångstroms, 1 Å = 10^-10 m
• Picometers, 1 pm = 10^-12 m
• Femtometers, 1 fm = 10^-15 m
• Attometers, 1 am = 10^-18 m
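The inverse relation f = c/λ ties the two prefix tables above together; a short sketch (the function names are illustrative, and c is taken at its approximate free-space value) converts either way:

```python
# Convert between frequency and wavelength with f = c / lambda, using
# the approximate free-space speed of light quoted above.
C = 3.0e8  # speed of light, m/s (approximate)

def wavelength_m(freq_hz: float) -> float:
    """Wavelength in meters for a frequency in hertz."""
    return C / freq_hz

def frequency_hz(wl_m: float) -> float:
    """Frequency in hertz for a wavelength in meters."""
    return C / wl_m

print(wavelength_m(1e9))   # 0.3 -> 1 GHz corresponds to 30 cm
print(frequency_hz(1e-6))  # 3e14 -> 1 um corresponds to 300 THz
```

Running the same conversion against the prefix lists above (1 THz against 0.3 mm, 1 PHz against 300 nm) is a quick consistency check on any band table.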
• Orange
• Yellow
• Green, a primary color, peak intensity at 546.1 nm (549 THz)
• Cyan
• Blue, a primary color, peak intensity at 435.8 nm (688 THz)
• Indigo
• Violet
IR Band
The IR band is the region of the EM spectrum lying immediately below the visible light band.
The IR band consists of EM radiation with wavelengths extending between the longest visible
red (circa 0.7 μm) and the shortest microwaves (300 μm–1 mm); i.e., from circa 429 THz down
to 1 THz–300 GHz.
The IR band is further subdivided into the “near” (shortwave), “intermediate” (midwave), and
“far” (longwave) IR segments as follows:1
• Near IR segment, 0.7 μm up to 3 μm (429 THz down to 100 THz)
• Intermediate IR segment, 3 μm up to 7 μm (100 THz down to 42.9 THz)
• Far IR segment, 7 μm up to 300 μm (42.9 THz down to 1 THz)
• Sub-millimeter band, 100 μm up to 1 mm (3 THz down to 300 GHz). Note that the
sub-millimeter region of wavelengths is sometimes included in the very far region of the IR band.
EM radiation is produced by oscillating and rotating molecules and atoms. Therefore, all
objects at temperatures above absolute zero emit EM radiation by virtue of their thermal motion
(warmth) alone. Objects near room temperature emit most of their radiation in the IR band.
However, even relatively cool objects emit some IR radiation; hot objects, such as incandescent
filaments, emit strong IR radiation.
IR radiation is sometimes incorrectly called “radiant heat” because warm bodies emit IR
radiation and bodies that absorb IR radiation are warmed. However, IR radiation is not itself
“heat”. This radiant energy is called “black body” radiation. Such waves are emitted by all
material objects. For example, the background cosmic radiation (2.7 K) emits microwaves; room
temperature objects (293 K) emit IR rays; the Sun (6000 K) emits yellow light; the Solar Corona
(1 million K) emits X rays.
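The claim that room-temperature objects radiate mainly in the IR can be illustrated with Wien's displacement law, λ_max = b/T, which is not derived in this chapter; the sketch below assumes the standard value of the displacement constant:

```python
# Wien's displacement law, lambda_max = b / T, locates the peak
# wavelength of blackbody emission for a body at temperature T.
WIEN_B = 2.898e-3  # Wien's displacement constant, m*K

def peak_wavelength_m(temp_k: float) -> float:
    """Peak blackbody emission wavelength in meters for T in kelvin."""
    return WIEN_B / temp_k

print(peak_wavelength_m(293) * 1e6)   # ~9.9 (um): room temperature peaks in the IR
print(peak_wavelength_m(6000) * 1e9)  # ~483 (nm): the Sun peaks in the visible band
```

This matches the examples in the text: a 293 K object peaks near 10 μm (far IR), while a 6000 K surface peaks in the visible range.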
IR astronomy uses the 1 μm to 1 mm part of the IR band to study celestial objects by their IR
emissions. IR detectors are used in night vision systems, intruder alarm systems, weather
forecasting, and missile guidance systems. IR photography uses multilayered color film, with an
IR sensitive emulsion in the wavelengths between 700–900 nm, for medical and forensic
applications, and for aerial surveying.
1. Some reference texts use 2.5 μm (120 THz) as the breakpoint between the near and the
intermediate IR bands, and 10 μm (30 THz) as the breakpoint between the intermediate and the
far IR bands. Also, 15 μm (20 THz) is sometimes considered as the long wavelength end of the
far IR band.
UV Band
The UV band is the region of the EM spectrum lying immediately above the visible light band.
The UV band consists of EM radiation with wavelengths extending between the shortest visible
violet (circa 0.4 μm) and the longest X rays (circa 10 nm); i.e., from 750 THz (approximately 3
eV) up to circa 30 PHz (approximately 100 eV).2
The UV band is further subdivided into the “near” and the “far” UV segments as follows:
• Near UV segment, circa 0.4 μm down to 100 nm (circa 750 THz up to 3 PHz, approximately
3 eV up to 10 eV)
• Far UV segment, 100 nm down to circa 10 nm (3 PHz up to circa 30 PHz, approximately 10
eV up to 100 eV)
The far UV band is also referred to as the vacuum UV band, since air is opaque to all UV
radiation in this region.
UV radiation is produced by electron transitions in atoms and molecules, as in a mercury
discharge lamp. Radiation in the UV range is easily detected, can cause fluorescence in some
substances, and can produce photographic and ionizing effects.
In UV astronomy, the emissions of celestial bodies in the wavelength band between 50–320
nm are detected and analyzed to study the heavens. The hottest stars emit most of their radiation
in the UV band.
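The electron-volt figures quoted for the UV band edges can be checked with the conversion 1 eV ~ 2.418 × 10^14 Hz (consistent with the approximate figure quoted earlier in this chapter); the helper names below are illustrative:

```python
# Photon energy <-> frequency <-> wavelength, using the approximate
# conversion 1 eV ~ 2.418e14 Hz.
EV_TO_HZ = 2.418e14  # hertz per electron volt (approximate)
C = 3.0e8            # speed of light, m/s (approximate)

def ev_to_hz(energy_ev: float) -> float:
    """Photon frequency in hertz for an energy in electron volts."""
    return energy_ev * EV_TO_HZ

def ev_to_m(energy_ev: float) -> float:
    """Photon wavelength in meters for an energy in electron volts."""
    return C / ev_to_hz(energy_ev)

print(ev_to_m(3.1) * 1e9)   # ~400 (nm): the near-UV edge at roughly 3 eV
print(ev_to_m(100) * 1e9)   # ~12 (nm): 100 eV lands near the UV/X-ray boundary
```

The same conversion reproduces the X-ray figures later in the chapter (10 keV corresponds to roughly 100 pm).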
1.1.2b DC to Light
Below the IR band are the lower frequency (longer wavelength) regions of the EM spectrum,
subdivided generally into the following spectral bands (by frequency/wavelength):
• Microwave band, 300 GHz down to 300 MHz (1 mm up to 1 m). Some reference works define
the lower edge of the microwave spectrum at 1 GHz.
• Radio frequency band, 300 MHz down to 10 kHz (1 m up to 30 km)
• Power/telephony band, 10 kHz down to dc (30 km up to ∞)
These regions of the EM spectrum are usually described in terms of their frequencies.
Radiations whose wavelengths are of the order of millimeters and centimeters are called
microwaves, and those still longer are called radio frequency (RF) waves (or Hertzian waves).
Radiation from electronic devices produces EM waves in both the microwave and RF bands.
Power frequency energy is generated by rotating machinery. Direct current (dc) is produced by
batteries or rectified alternating current (ac).
Microwave Band
The microwave band is the region of wavelengths lying between the far IR/sub-millimeter region
and the conventional RF region. The boundaries of the microwave band have not been definitely
fixed, but it is commonly regarded as the region of the EM spectrum extending from about 1 mm
up to 1 m in wavelength, i.e., from 300 GHz down to 300 MHz. The microwave band is further
subdivided into the following segments:
• Millimeter waves, 300 GHz down to 30 GHz (1 mm up to 1 cm); the Extremely High
Frequency band. (Some references consider the top edge of the millimeter region to stop at 100
GHz.)
• Centimeter waves, 30 GHz down to 3 GHz (1 cm up to 10 cm); the Super High Frequency
band.
The microwave band usually includes the Ultra High Frequency band from 3 GHz down to 300
MHz (from 10 cm up to 1 m). Microwaves are used in radar, space communication, terrestrial
links spanning moderate distances, as radio carrier waves in television broadcasting, for
mechanical heating, and for cooking in microwave ovens.
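The band edges given above for the lower spectrum can be collected into a small lookup table; this is a sketch using this chapter's conventions (microwave taken as 300 MHz to 300 GHz), and other references draw the boundaries differently:

```python
# Classify a frequency into the lower-spectrum bands named above.
# Band edges follow this chapter's conventions; some references set
# the lower microwave edge at 1 GHz instead of 300 MHz.
BANDS = [
    (0.0, 1e4, "power/telephony"),
    (1e4, 3e8, "radio frequency"),
    (3e8, 3e11, "microwave"),
]

def band_name(freq_hz: float) -> str:
    """Return the band name for a frequency in hertz."""
    for lo, hi, name in BANDS:
        if lo <= freq_hz < hi:
            return name
    return "above microwave (IR and beyond)"

print(band_name(60))       # power/telephony (ac mains)
print(band_name(100e6))    # radio frequency (FM broadcast)
print(band_name(2.45e9))   # microwave (microwave ovens)
```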
Cosmic rays have energies as high as 10^20 eV. Cosmic rays have been traced to cataclysmic
astrophysical/cosmological events, such as exploding stars and black holes. Cosmic rays are
emitted by supernova remnants, pulsars, quasars, and radio galaxies. Cosmic rays that collide
with molecules in the Earth’s upper atmosphere produce secondary cosmic rays and gamma rays
of high energy that also contribute to natural background radiation. These gamma rays are
sometimes called cosmic or secondary gamma rays. Cosmic rays are a useful source of
high-energy particles for certain scientific experiments.
Radiation from atomic inner shell excitations produces EM waves in the X ray band. Radiation
from naturally radioactive nuclei produces EM waves in the gamma ray band.
Band Frequency
Longwave broadcasting band 150–290 kHz
AM broadcasting band 535–1705 kHz (1.640 MHz), 107 channels, 10 kHz separation
International broadcasting band 3–30 MHz
Shortwave broadcasting band 5.95–26.1 MHz (8 bands)
VHF TV (Channels 2 - 4) 54–72 MHz
VHF TV (Channels 5 - 6) 76–88 MHz
FM broadcasting band 88–108 MHz
VHF TV (Channels 7 - 13) 174–216 MHz
UHF TV (Channels 14 - 69) 512–806 MHz
Application Frequency
Aero Navigation 0.96–1.215 GHz
GPS Down Link 1.2276 GHz
Military COM/Radar 1.35–1.40 GHz
Miscellaneous COM/Radar 1.40–1.71 GHz
L-Band Telemetry 1.435–1.535 GHz
GPS Down Link 1.57 GHz
Military COM (Troposcatter/Telemetry) 1.71–1.85 GHz
Commercial COM & Private LOS 1.85–2.20 GHz
Microwave Ovens 2.45 GHz
Commercial COM/Radar 2.45–2.69 GHz
Instructional TV 2.50–2.69 GHz
Military Radar (Airport Surveillance) 2.70–2.90 GHz
Maritime Navigation Radar 2.90–3.10 GHz
Miscellaneous Radars 2.90–3.70 GHz
Commercial C-Band SAT COM Down Link 3.70–4.20 GHz
Radar Altimeter 4.20–4.40 GHz
Military COM (Troposcatter) 4.40–4.99 GHz
Commercial Microwave Landing System 5.00–5.25 GHz
Miscellaneous Radars 5.25–5.925 GHz
C-Band Weather Radar 5.35–5.47 GHz
Commercial C-Band SAT COM Up Link 5.925–6.425 GHz
Commercial COM 6.425–7.125 GHz
Mobile TV Links 6.875–7.125 GHz
Military LOS COM 7.125–7.25 GHz
Military SAT COM Down Link 7.25–7.75 GHz
Military LOS COM 7.75–7.9 GHz
Military SAT COM Up Link 7.90–8.40 GHz
Miscellaneous Radars 8.50–10.55 GHz
Precision Approach Radar 9.00–9.20 GHz
X-Band Weather Radar (& Maritime Navigation Radar) 9.30–9.50 GHz
Police Radar 10.525 GHz
Commercial Mobile COM (LOS & ENG) 10.55–10.68 GHz
Common Carrier LOS COM 10.70–11.70 GHz
Commercial COM 10.70–13.25 GHz
Commercial Ku-Band SAT COM Down Link 11.70–12.20 GHz
DBS Down Link & Private LOS COM 12.20–12.70 GHz
ENG & LOS COM 12.75–13.25 GHz
Miscellaneous Radars & SAT COM 13.25–14.00 GHz
Commercial Ku-Band SAT COM Up Link 14.00–14.50 GHz
Military COM (LOS, Mobile, & Tactical) 14.50–15.35 GHz
Aero Navigation 15.40–15.70 GHz
Miscellaneous Radars 15.70–17.70 GHz
DBS Up Link 17.30–17.80 GHz
Common Carrier LOS COM 17.70–19.70 GHz
Commercial COM (SAT COM & LOS) 17.70–20.20 GHz
Private LOS COM 18.36–19.04 GHz
Military SAT COM 20.20–21.20 GHz
Miscellaneous COM 21.20–24.00 GHz
Police Radar 24.15 GHz
Navigation Radar 24.25–25.25 GHz
Military COM 25.25–27.50 GHz
Commercial COM 27.50–30.00 GHz
Military SAT COM 30.00–31.00 GHz
Commercial COM 31.00–31.20 GHz
• Hard X rays, approximately 10 keV up to 1 MeV (100 pm down to circa 1 pm), 3 EHz up to
circa 300 EHz
Because the physical nature of these rays was at first unknown, this radiation was called “X
rays.” The designation continues to this day. The more powerful X rays are called hard X rays
and are of high frequencies and, therefore, are more energetic; less powerful X rays are called
soft X rays and have lower energies.
X rays are produced by transitions of electrons in the inner levels of excited atoms or by rapid
deceleration of charged particles (Bremsstrahlung, or braking radiation). An important source
of X rays is synchrotron radiation. X rays can also be produced when high energy electrons from
a heated filament cathode strike the surface of a target anode (usually tungsten) between which a
high alternating voltage (approximately 100 kV) is applied.
X rays are a highly penetrating form of EM radiation and applications of X rays are based on
their short wavelengths and their ability to easily pass through matter. X rays are very useful in
crystallography for determining crystalline structure and in medicine for photographing the
body. Because different parts of the body absorb X rays to a different extent, X rays passing
through the body provide a visual image of its interior structure when striking a photographic
plate. X rays are dangerous and can destroy living tissue. They can also cause severe skin burns.
X rays are useful in the diagnosis and non-destructive testing of products for defects.
1.1.3 Bibliography
Collocott, T. C., A. B. Dobson, and W. R. Chambers (eds.): Dictionary of Science & Technology.
Handbook of Physics, McGraw-Hill, New York, N.Y., 1958.
Judd, D. B., and G. Wyszecki: Color in Business, Science and Industry, 3rd ed., John Wiley and
Sons, New York, N.Y.
Kaufman, Ed: IES Illumination Handbook, Illumination Engineering Society.
Lapedes, D. N. (ed.): The McGraw-Hill Encyclopedia of Science & Technology, 2nd ed.,
McGraw-Hill, New York, N.Y.
Norgard, John: “Electromagnetic Spectrum,” NAB Engineering Handbook, 9th ed., Jerry C.
Whitaker (ed.), National Association of Broadcasters, Washington, D.C., 1999.
Norgard, John: “Electromagnetic Spectrum,” The Electronics Handbook, Jerry C. Whitaker
(ed.), CRC Press, Boca Raton, Fla., 1996.
Stemson, A: Photometry and Radiometry for Engineers, John Wiley and Sons, New York, N.Y.
The Cambridge Encyclopedia, Cambridge University Press, 1990.
The Columbia Encyclopedia, Columbia University Press, 1993.
Webster’s New World Encyclopedia, Prentice Hall, 1992.
Wyszecki, G., and W. S. Stiles: Color Science, Concepts and Methods, Quantitative Data and
Formulae, 2nd ed., John Wiley and Sons, New York, N.Y.
Chapter 1.2
Propagation
1.2.1 Introduction
The portion of the electromagnetic spectrum commonly used for radio transmissions lies
between approximately 10 kHz and 40 GHz. The influence on radio waves of the medium
through which they propagate is frequency-dependent. The lower frequencies are greatly
influenced by the characteristics of the earth’s surface and the ionosphere, while the highest
frequencies are greatly affected by the atmosphere, especially rain. There are no clear-cut
boundaries between frequency ranges but instead considerable overlap in propagation modes
and effects of the path medium.
In the U.S., those frequencies allocated for broadcast-related use include the following:
• 550–1640 kHz: AM radio
• 54–72 MHz: TV channels 2–4
• 76–88 MHz: TV channels 5–6
• 88–108 MHz: FM radio
• 174–216 MHz: TV channels 7–13
• 470–806 MHz: TV channels 14–69
• 0.9–12.2 GHz: nonexclusive TV terrestrial and satellite ancillary services
• 12.2–12.7 GHz: direct satellite broadcasting
• 12.7–40 GHz: nonexclusive direct satellite broadcasting
For the simplest case of propagation in space, namely that of uniform radiation in all directions
from a point source, or isotropic radiator, it is useful to consider the analogy to a point source of
light. The radiant energy passes with uniform intensity through all portions of an imaginary
spherical surface located at a radius r from the source. The area of such a surface is 4πr^2 and
the power flow per unit area W = Pt/4πr^2, where Pt is the total power radiated by the source
and W is expressed in W/m^2. In the engineering of broadcasting and of some other radio
services, it is conventional to measure the intensity of radiation in terms of the strength of the
electric field Eo rather than in terms of power density W. The power density is equal to the
square of the field strength divided by the impedance of the medium, so for free space
W = Eo^2 / (120π)    (1.2.1)
and
Pt = 4π r^2 Eo^2 / (120π)    (1.2.2)
or
Pt = r^2 Eo^2 / 30    (1.2.3)
Where:
Pt = watts radiated
Eo = the free space field in volts per meter
r = the radius in meters
A more conventional and useful form of this equation, which applies also to antennas other
than isotropic radiators, is
Eo = √(30 gt Pt) / r    (1.2.4)
where gt is the power gain of the antenna in the pertinent direction compared to an isotropic
radiator.
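Equation (1.2.4) is easy to evaluate numerically. The sketch below is an illustration (the function name is ours); the half-wave dipole gain of 1.64 used in the second call is the value given in this chapter:

```python
import math

def free_space_field_v_per_m(p_t_watts: float, gain: float, r_m: float) -> float:
    """Eo = sqrt(30 * gt * Pt) / r, per Equation (1.2.4)."""
    return math.sqrt(30.0 * gain * p_t_watts) / r_m

# 1 kW into an isotropic radiator (gain 1), observed at 1 km:
print(free_space_field_v_per_m(1e3, 1.0, 1e3))   # ~0.173 V/m
# The same power into a half-wave dipole (gain 1.64):
print(free_space_field_v_per_m(1e3, 1.64, 1e3))  # ~0.222 V/m
```

The dipole result agrees with the shortcut form Eo = 7√Pt / r of Equation (1.2.6), since √(30 × 1.64) ≈ 7.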
An isotropic antenna is useful as a reference for specifying the radiation patterns for more
complex antennas but does not in fact exist. The simplest forms of practical antennas are the
electric doublet and the magnetic doublet, the former a straight conductor that is short compared
with the wavelength and the latter a conducting loop of short radius compared with the
wavelength. For the doublet radiator, the gain is 1.5 and the field strength in the equatorial
plane is
Eo = √(45 Pt) / r    (1.2.5)
For a half-wave dipole, namely, a straight conductor one-half wave in length, the power gain is
1.64 and
Eo = 7 √Pt / r    (1.2.6)
From the foregoing equations it can be seen that for free space:
• The radiation intensity in watts per square meter is proportional to the radiated power and
inversely proportional to the square of the radius or distance from the radiator.
• The electric field strength is proportional to the square root of the radiated power and
inversely proportional to the distance from the radiator.
Pr = (E λ / (2π))^2 (gr / 120)  W    (1.2.7)

Where:
E = received field strength in volts per meter
λ = wavelength in meters, 300/F
F = frequency in MHz
gr = receiving antenna power gain over an isotropic radiator
This relationship between received power and the received field strength is shown by scales 2,
3, and 4 in Figure 1.2.1 for a half-wave dipole. For example, the maximum useful power at 100
MHz that can be delivered by a half-wave dipole in a field of 50 dB above 1 μV/m is 95 dB
below 1 W.
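The worked example above can be checked directly from Equation (1.2.7). This sketch is illustrative (the function name is ours); gr = 1.64 is the half-wave dipole gain given in the text:

```python
import math

def received_power_w(e_v_per_m: float, freq_mhz: float, g_r: float) -> float:
    """Pr = (E * lambda / (2*pi))**2 * gr / 120, per Equation (1.2.7)."""
    lam = 300.0 / freq_mhz  # wavelength in meters
    return (e_v_per_m * lam / (2.0 * math.pi)) ** 2 * g_r / 120.0

field = 1e-6 * 10 ** (50 / 20)  # 50 dB above 1 uV/m -> about 316 uV/m
p_r = received_power_w(field, 100.0, 1.64)
print(10 * math.log10(p_r))     # about -95 (dB relative to 1 W)
```

The result, roughly -95 dBW, matches the "95 dB below 1 W" figure quoted above.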
A general relation for the ratio of the received power to the radiated power obtained from
Equations (1.2.4) and (1.2.7) is
Pr / Pt = (λ / (4π r))^2 gt gr (E / Eo)^2    (1.2.8)

Pr / Pt = (1.64 λ / (4π r))^2 (E / Eo)^2 = (0.13 λ / r)^2 (E / Eo)^2    (1.2.9)
Figure 1.2.1 Free-space field intensity and received power between half-wave dipoles. (From [2].
Used with permission.)
Pr / Pt = (Bt Br / (λ r)^2) (E / Eo)^2    (1.2.10)
where Bt and Br are the effective areas of the transmitting and receiving antennas, respectively.
This relation is obtained from Equation (1.2.8) by substituting as follows
g = 4π B / λ^2    (1.2.11)
This is shown in Figure 1.2.2 for free-space transmission when Bt = Br. For example, the
free-space loss at 4000 MHz between two antennas of 10 ft^2 (0.93 m^2) effective area is about
72 dB for a distance of 30 mi (48 km).
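In free space E = Eo, so Equation (1.2.10) reduces to Pr/Pt = Bt·Br/(λr)^2, and the 4000-MHz example above can be verified numerically (a sketch; the function name is ours):

```python
import math

def free_space_loss_db(b_t_m2: float, b_r_m2: float,
                       freq_mhz: float, r_m: float) -> float:
    """Loss implied by Pr/Pt = Bt*Br/(lambda*r)**2 when E = Eo."""
    lam = 300.0 / freq_mhz  # wavelength in meters
    ratio = (b_t_m2 * b_r_m2) / (lam * r_m) ** 2
    return -10.0 * math.log10(ratio)

# 4000 MHz, two antennas of 0.93 m^2 effective area, 30 mi (~48.3 km):
print(free_space_loss_db(0.93, 0.93, 4000.0, 48.28e3))  # about 72 dB
```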
E = Eo (cos^3 θ1 + R cos^3 θ2 e^(jΔ))    (1.2.13)
Where:
Figure 1.2.2 Received power in free space between two antennas of equal effective areas. (From
[2]. Used with permission.)
Figure 1.2.3 Ray paths for antennas above plane earth. (From [2]. Used with permission.)
For distances such that θ is small and the differences between d and r1 and r2 can be
neglected, Equations (1.2.12) and (1.2.13) become
E = Eo (1 + R e^(jΔ))    (1.2.14)
When the angle θ is very small, R is approximately equal to –1. For the case of two antennas,
one or both of which may be relatively close to the earth, a surface-wave term must be added and
Equation (1.2.14) becomes [3, 6]
E = E_o [1 + R e^(jΔ) + (1 – R) A e^(jΔ)]    (1.2.15)
The quantity A is the surface-wave attenuation factor, which depends upon the frequency,
ground constants, and type of polarization. It is never greater than unity and decreases with
increasing distance and frequency, as indicated by the following approximate equation [1]
A ≅ –1 / [1 + j (2πd / λ) (sin θ + z)^2]    (1.2.16)
This approximate expression is sufficiently accurate as long as A < 0.1, and it gives the magni-
tude of A within about 2 dB for all values of A. However, as A approaches unity, the error in
phase approaches 180°. More accurate values are given by Norton [3] where, in his nomencla-
ture, A = f(P,B) e^(iφ).
Equation (1.2.15) for the absolute value of field strength has been developed from the
successive consideration of the various components that make up the ground wave, but the fol-
lowing equivalent expressions may be found more convenient for rapid calculation
E = E_o {2 sin (Δ/2) + j [(1 + R) + (1 – R) A] e^(jΔ/2)}    (1.2.17)
When the distance d between antennas is greater than about five times the sum of the two
antenna heights ht and hr, the phase difference angle Δ (rad) is
Δ = 4π h_t h_r / (λd)    (1.2.18)
Also, when the angle Δ is greater than about 0.5 rad, the terms inside the brackets of Equation
(1.2.17)—which include the surface wave—are usually negligible, and a sufficiently accurate
expression is given by
E = E_o [2 sin (2π h_t h_r / λd)]    (1.2.19)
In this case, the principal effect of the ground is to produce interference fringes or lobes, so that
the field strength oscillates about the free-space field as the distance between antennas or the
height of either antenna is varied.
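The lobing described above is easy to see numerically. A brief sketch of Equation (1.2.19); the antenna heights, frequency, and distances below are illustrative, not from the text:

```python
import math

def field_relative_to_free_space(h_t_m, h_r_m, freq_mhz, dist_m):
    """Ratio E/E_o over plane earth with R = -1, per Eq. (1.2.19):
    E = E_o * |2 sin(2*pi*h_t*h_r / (lambda*d))|."""
    lam = 300.0 / freq_mhz
    return abs(2 * math.sin(2 * math.pi * h_t_m * h_r_m / (lam * dist_m)))

# 100-m and 10-m antennas at 100 MHz: the field oscillates between 0 and
# twice the free-space value at short range, then falls off smoothly
for d_km in (0.6, 1.5, 3, 10):
    print(d_km, round(field_relative_to_free_space(100, 10, 100, d_km * 1e3), 2))
```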
When the angle Δ is less than about 0.5 rad, there is a region in which the surface wave may
be important but not controlling. In this region, sin Δ/2 is approximately equal to Δ/2 and
E = E_o (4π h_t′ h_r′ / λd)    (1.2.20)
In this equation h′ = h + jh_o, where h is the actual antenna height and h_o = λ/(2πz) has been
designated as the minimum effective antenna height. The magnitude of the minimum effective height
h_o is shown in Figure 1.2.4 for seawater and for “good” and “poor” soil. “Good” soil corresponds
roughly to clay, loam, marsh, or swamp, while “poor” soil means rocky or sandy ground [1].
The surface wave is controlling for antenna heights less than the minimum effective height,
and in this region the received field or power is not affected appreciably by changes in the
antenna height. For antenna heights that are greater than the minimum effective height, the
received field or power is increased approximately 6 dB every time the antenna height is dou-
bled, until free-space transmission is reached. It is ordinarily sufficiently accurate to assume that
h′ is equal to the actual antenna height or the minimum effective antenna height, whichever is
the larger.
When translated into terms of antenna heights in feet, distance in miles, effective power in
kilowatts radiated from a half-wave dipole, and frequency F in megahertz, Equation (1.2.20)
becomes the following very useful formula for the rapid calculation of approximate values of
field strength for purposes of prediction or for comparison with measured values
Figure 1.2.4 Minimum effective antenna height. (From [2]. Used with permission.)
E ≅ F h_t′ h_r′ √P_t / (3d^2)    (1.2.21)
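This rapid-calculation formula lends itself to a one-line function. A sketch, assuming the result is in microvolts per meter (the customary units for this formula; the text does not state them explicitly), with the example values chosen for illustration:

```python
import math

def field_uv_m(freq_mhz, h_t_ft, h_r_ft, p_kw, d_mi):
    """Approximate plane-earth field strength, Eq. (1.2.21):
    E ~= F * h_t' * h_r' * sqrt(P_t) / (3 * d^2),
    with heights in feet, distance in miles, power in kilowatts, F in MHz.
    Result assumed to be in microvolts per meter."""
    return freq_mhz * h_t_ft * h_r_ft * math.sqrt(p_kw) / (3 * d_mi ** 2)

# e.g., 100 MHz, 500-ft transmit and 30-ft receive antennas, 1 kW, 10 mi
print(field_uv_m(100, 500, 30, 1.0, 10))   # 5000.0 uV/m
```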
P_r / P_t = g_t g_r (λ / 4πd)^2 (4π h_t′ h_r′ / λd)^2 = g_t g_r (h_t′ h_r′ / d^2)^2    (1.2.22)
This relationship is independent of frequency, and is shown on Figure 1.2.5 for half-wave
dipoles (gt = gr = 1.64). A line through the two scales of antenna height determines a point on
the unlabeled scale between them, and a second line through this point and the distance scale
determines the received power for 1 W radiated. When the received field strength is desired, the
power indicated on Figure 1.2.5 can be transferred to scale 4 of Figure 1.2.1, and a line through
the frequency on scale 3 indicates the received field strength on scale 2. The results shown on
Figure 1.2.5 are valid as long as the value of received power indicated is lower than that shown
on Figure 1.2.1 for free-space transmission. When this condition is not met, it means that the
angle Δ is too large for Equation (1.2.20) to be accurate and that the received field strength or
power oscillates around the free-space value as indicated by Equation (1.2.19) [1].
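The frequency-independent height-gain relation of Equation (1.2.22) can be sketched the same way; the antenna heights and distance below are illustrative:

```python
import math

def power_ratio_db(g_t, g_r, h_t_m, h_r_m, d_m):
    """Plane-earth received-to-radiated power ratio, Eq. (1.2.22):
    P_r/P_t = g_t * g_r * (h_t' * h_r' / d^2)^2 -- note that frequency
    has cancelled out of the expression."""
    ratio = g_t * g_r * (h_t_m * h_r_m / d_m ** 2) ** 2
    return 10 * math.log10(ratio)

# Half-wave dipoles (g = 1.64), 100-m and 10-m antennas, 10-km path
print(round(power_ratio_db(1.64, 1.64, 100, 10, 10e3), 1))
```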
E = E_o (1 + D R′ e^(jΔ))    (1.2.23)
Similar substitutions of the values that correspond in Figures 1.2.3 and 1.2.6 can be made in
Equations (1.2.15) through (1.2.22). However, under practical conditions, it is generally satisfac-
tory to use the plane-earth formulas for the purpose of calculating smooth-earth values. An
exception to this is usually made in the preparation of standard reference curves, which are gen-
erally calculated by the use of the more exact formulas [1, 4–9].
Figure 1.2.5 Received power over plane earth between half-wave dipoles. Notes: (1) This chart is
not valid when the indicated received power is greater than the free space power shown in Figure
1.2.1. (2) Use the actual antenna height or the minimum effective height shown in Figure 1.2.4,
whichever is the larger. (From [2]. Used with permission.)
Figure 1.2.6 Ray paths for antennas above spherical earth. (From [2]. Used with permission.)
Figure 1.2.7 Loss beyond line of sight in decibels. (From [2]. Used with permission.)
ble variations in frequency, electrical characteristics of the earth, polarization, and antenna
height. Also, the values of field strength indicated by smooth-earth curves are subject to consid-
erable modification under actual conditions found in practice. For VHF and UHF broadcast pur-
poses, the smooth-earth curves have been to a great extent superseded by curves modified to
reflect average conditions of terrain.
Figure 1.2.7 is a nomogram to determine the additional loss caused by the curvature of the
earth [1]. This loss must be added to the free-space loss found from Figure 1.2.1. A scale is
included to provide for the effect of changes in the effective radius of the earth, caused by atmo-
spheric refraction. Figure 1.2.7 gives the loss relative to free space as a function of three distances; d1 is the distance to the horizon from the lower antenna, d2 is the distance to the horizon from the higher antenna, and d3 is the distance between the horizons. The total distance between antennas is d = d1 + d2 + d3.
Figure 1.2.8 Distance to the horizon. (From [2]. Used with permission.)
Figure 1.2.9 Ray paths for antennas over rough terrain. (From [2]. Used with permission.)
The horizon distances d1 and d2 for the respective antenna heights h1 and h2 and for any
assumed value of the earth’s radius factor k can be determined from Figure 1.2.8 [1].
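The horizon distances read from Figure 1.2.8 follow from the effective-earth geometry d = √(2kah). A minimal sketch, assuming k = 4/3 and a mean earth radius of 3959 mi:

```python
import math

def horizon_distance_mi(h_ft, k=4.0 / 3.0):
    """Distance to the radio horizon for an antenna h_ft feet high:
    d = sqrt(2*k*a*h). For k = 4/3 this reduces to the familiar
    rule of thumb d(mi) ~= sqrt(2 * h_ft)."""
    a_mi = 3959.0                                   # mean earth radius, mi
    return math.sqrt(2 * k * a_mi * h_ft / 5280.0)  # h converted ft -> mi

# Beyond-line-of-sight when the path length exceeds d1 + d2 (i.e., d3 > 0)
print(round(horizon_distance_mi(1000)))             # roughly 45 mi
```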
Effects of Hills
The profile of the earth between the transmitting and receiving points is taken from available
topographic maps and is plotted on a chart that provides for average air refraction by the use of a
4/3 earth radius, as shown in Figure 1.2.9. The vertical scale is greatly exaggerated for conve-
nience in displaying significant angles and path differences. Under these conditions, vertical
dimensions are measured along vertical parallel lines rather than along radii normal to the curved
surface, and the propagation paths appear as straight lines. The field to be expected at a low
receiving antenna at A from a high transmitting antenna at B can be predicted by plane-earth
methods, by drawing a tangent to the profile at the point at which reflection appears to occur
with equal incident and reflection angles. The heights of the transmitting and receiving antennas
above the tangent are used in conjunction with Figure 1.2.5 to compute the transmission loss, or
with Equation (1.2.21) to compute the field strength. A similar procedure can be used for more distantly spaced high antennas when the line of sight does not clear the profile by at least the first Fresnel zone [10].
Figure 1.2.10 Ray paths for antennas behind hills: (a–d), see text. (From [2]. Used with permission.)
Propagation over a sharp ridge, or over a hill when both the transmitting and receiving
antenna locations are distant from the hill, may be treated as diffraction over a knife edge, shown
schematically in Figure 1.2.10a [1, 9–14]. The height of the obstruction H is measured from the
line joining the centers of the two antennas to the top of the ridge. As shown in Figure 1.2.11, the
shadow loss approaches 6 dB as H approaches 0—grazing incidence—and it increases with
increasing positive values of H. When the direct ray clears the obstruction, H is negative, and the
shadow loss approaches 0 dB in an oscillatory manner as the clearance is increased. Thus, a sub-
stantial clearance is required over line-of-sight paths in order to obtain free-space transmission.
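The shadow-loss chart of Figure 1.2.11 can be approximated in closed form. The sketch below uses the widely tabulated ITU-R P.526 single knife-edge approximation rather than the book's nomogram, so its values may differ slightly from the chart:

```python
import math

def knife_edge_loss_db(h_m, d1_m, d2_m, freq_mhz):
    """Knife-edge shadow loss relative to free space (ITU-R P.526
    approximation). H > 0 means the ridge blocks the direct ray;
    H < 0 means the ray clears the ridge."""
    lam = 300.0 / freq_mhz
    v = h_m * math.sqrt(2 * (d1_m + d2_m) / (lam * d1_m * d2_m))
    if v <= -0.78:
        return 0.0   # ample clearance: essentially free-space
    return 6.9 + 20 * math.log10(math.sqrt((v - 0.1) ** 2 + 1) + v - 0.1)

# Grazing incidence (H = 0) gives the ~6 dB loss noted in the text
print(round(knife_edge_loss_db(0, 20e3, 20e3, 100), 1))
```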
There is an optimum clearance, called the first Fresnel-zone clearance, for which the transmission is theoretically 1.2 dB better than in free space. Physically, this clearance is of such magnitude that the phase shift along a line from the antenna to the top of the obstruction and from there to the second antenna is about one-half wavelength greater than the phase shift of the direct path between antennas.
Figure 1.2.11 Shadow loss relative to free space. (From [2]. Used with permission.)
The locations of the first three Fresnel zones are indicated on the right-hand scale on Figure
1.2.11, and by means of this chart the required clearances can be obtained. At 3000 MHz, for
example, the direct ray should clear all obstructions in the center of a 40 mi (64 km) path by
about 120 ft (36 m) to obtain full first-zone clearance, as shown at “C” in Figure 1.2.9. The cor-
responding clearance for a ridge 100 ft (30 m) in front of either antenna is 4 ft (1.2 m). The locus
of all points that satisfy this condition for all distances is an ellipsoid of revolution with foci at
the two antennas.
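The clearance figures above rest on the first Fresnel-zone radius, r_n = √(nλd1d2/(d1 + d2)). A sketch; note that the optimum clearances quoted from the chart are somewhat smaller than the exact midpath radius:

```python
import math

def fresnel_radius_m(d1_m, d2_m, freq_mhz, n=1):
    """Radius of the nth Fresnel zone at a point d1 from one antenna
    and d2 from the other: r_n = sqrt(n * lambda * d1 * d2 / (d1 + d2))."""
    lam = 300.0 / freq_mhz
    return math.sqrt(n * lam * d1_m * d2_m / (d1_m + d2_m))

# Midpoint of a 40-mi (64-km) path at 3000 MHz: exact formula gives ~40 m,
# versus the ~120 ft (36 m) read from the chart in the text
print(round(fresnel_radius_m(32e3, 32e3, 3000)), "m")
# 100 ft (30 m) in front of one antenna on the same path
print(round(fresnel_radius_m(30, 64e3 - 30, 3000), 1), "m")
```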
When there are two or more knife-edge obstructions or hills between the transmitting and
receiving antennas, an equivalent knife edge can be represented by drawing a line from each
antenna through the top of the peak that blocks the line of sight, as in Figure 1.2.10b.
Alternatively, the transmission loss can be computed by adding the losses incurred when
passing over each of the successive hills, as in Figure 1.2.10c. The height H1 is measured from
the top of hill 1 to the line connecting antenna 1 and the top of hill 2. Similarly, H2 is measured
from the top of hill 2 to the line connecting antenna 2 and the top of hill 1. The nomogram given
in Figure 1.2.11 is used for calculating the losses for terrain conditions represented by Figure
1.2.10a–c.
This procedure applies to conditions for which the earth-reflected wave can be neglected,
such as the presence of rough earth, trees, or structures at locations along the profile at points
where earth reflection would otherwise take place at the frequency under consideration; or where
first Fresnel-zone clearance is obtained in the foreground of each antenna and the geometry is
such that reflected components do not contribute to the field within the first Fresnel zone above
the obstruction. If conditions are favorable to earth reflection, the base line of the diffraction tri-
angle should not be drawn through the antennas, but through the points of earth reflection, as in
Figure 1.2.10d. H is measured vertically from this base line to the top of the hill, while d1 and d2
are measured to the antennas as before. In this case, Figure 1.2.12 is used to estimate the shadow
loss to be added to the plane-earth attenuation [1].
Under conditions where the earth-reflected components reinforce the direct components at
the transmitting and receiving antenna locations, paths may be found for which the transmission
loss over an obstacle is less than the loss over spherical earth. This effect may be useful in estab-
lishing VHF relay circuits where line-of-sight operation is not practical. Little utility, however,
can be expected for mobile or broadcast services [14].
An alternative method for predicting the median value for all measurements in a completely
shadowed area is as follows [15]:
1. The roughness of the terrain is assumed to be represented by height H, shown on the profile
at the top of Figure 1.2.13.
2. This height is the difference in elevation between the bottom of the valley and the elevation
necessary to obtain line of sight with the transmitting antenna.
3. The difference between the measured value of field intensity and the value to be expected
over plane earth is computed for each point of measurement within the shadowed area.
4. The median value for each of several such locations is plotted as a function of √(H/λ).
These empirical relationships are summarized in the nomogram shown in Figure 1.2.13. The
scales on the right-hand line indicate the median value of shadow loss, compared with plane-
earth values, and the difference in shadow loss to be expected between the median and the 90
percent values. For example, with variations in terrain of 500 ft (150 m), the estimated median
shadow loss at 4500 MHz is about 20 dB and the shadow loss exceeded in 90 percent of the pos-
sible locations is about 20 + 15 = 35 dB. This analysis is based on large-scale variations in field
intensity, and does not include the standing-wave effects that sometimes cause the field intensity
to vary considerably within a matter of a few feet.
Figure 1.2.12 Shadow loss relative to plane earth. (From [2]. Used with permission.)
Effects of Buildings
Built-up areas have little effect on radio transmission at frequencies below a few megahertz,
since the size of any obstruction is usually small compared with the wavelength, and the shadows
caused by steel buildings and bridges are not noticeable except immediately behind these
obstructions. However, at 30 MHz and above, the absorption of a radio wave in going through an
obstruction and the shadow loss in going over it are not negligible, and both types of losses tend
to increase as the frequency increases. The attenuation through a brick wall, for example, can vary from 2 to 5 dB at 30 MHz and from 10 to 40 dB at 3000 MHz, depending on whether the wall is dry or wet. Consequently, most buildings are rather opaque at frequencies of the order of thousands of megahertz.
Figure 1.2.13 Estimated distribution of shadow loss for random locations (referred to plane-earth values). (From [2]. Used with permission.)
For radio-relay purposes, it is the usual practice to select clear sites; but where this is not fea-
sible the expected fields behind large buildings can be predicted by the preceding diffraction
methods. In the engineering of mobile- and broadcast-radio systems it has not been found practi-
cal in general to relate measurements made in built-up areas to the particular geometry of build-
ings, so that it is conventional to treat them statistically. However, measurements have been
divided according to general categories into which buildings can readily be classified, namely,
the tall buildings typical of the centers of cities on the one hand, and typical two-story residential
areas on the other.
Buildings are more transparent to radio waves than the solid earth, and there is ordinarily
much more backscatter in the city than in the open country. Both of these factors tend to reduce
the shadow losses caused by the buildings. On the other hand, the angles of diffraction over or
around the buildings are usually greater than for natural terrain, and this factor tends to increase
the loss resulting from the presence of buildings. Quantitative data on the effects of buildings
indicate that in the range of 40 to 450 MHz there is no significant change with frequency, or at
least the variation with frequency is somewhat less than the square-root relationship noted in the
case of hills. The median field strength at street level for random locations in New York City is
about 25 dB below the corresponding plane-earth value. The corresponding values for the 10
percent and 90 percent points are about –15 and –35 dB, respectively [1, 15]. Measurements in
congested residential areas indicate somewhat less attenuation than among large buildings.
therefore with height, climate, and local meteorological conditions. An exponential model show-
ing a decrease with height to 37 to 43 mi (60 to 70 km) is generally accepted [18, 19]. For this
model, the variation of n is approximately linear for the first kilometer above the surface, where
most of the effect on radio waves traveling horizontally occurs. For average conditions, the effect
of the atmosphere can be included in the expression of earth diffraction around the smooth earth
without discarding the useful concept of straight-line propagation by multiplying the actual
earth’s radius by k to obtain an effective earth’s radius, where
k = 1 / [1 + a (dn/dh)]    (1.2.24)
Where:
a = the actual radius of the earth
dn/dh = the rate of change of the refractive index with height
Through the use of average annual values of the refractive index gradient, k is found to be 4/3 for
temperate climates.
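Equation (1.2.24) can be evaluated directly; a sketch assuming a mean earth radius of 6.37 × 10^6 m:

```python
def earth_radius_factor(dn_dh_per_m, a_m=6.37e6):
    """Effective earth's radius factor, Eq. (1.2.24):
    k = 1 / (1 + a * dn/dh)."""
    return 1.0 / (1.0 + a_m * dn_dh_per_m)

# Average refractivity gradient of about -39 N-units/km (-39e-9 per meter)
print(round(earth_radius_factor(-39e-9), 2))   # ~1.33, the familiar 4/3 earth
```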
N = 77.6 (P/T) + 3.73 × 10^5 (e/T^2)    (1.2.25)
Where:
P = atmospheric pressure, mbar
T = absolute temperature, K
e = water vapor pressure, mbar
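Equation (1.2.25) in code; the surface conditions below are illustrative, not from the text:

```python
def refractivity_n(p_mbar, t_kelvin, e_mbar):
    """Radio refractivity in N-units, Eq. (1.2.25):
    N = 77.6 * P/T + 3.73e5 * e / T^2."""
    return 77.6 * p_mbar / t_kelvin + 3.73e5 * e_mbar / t_kelvin ** 2

# Typical surface conditions: 1013 mbar, 288 K, 10 mbar water vapor pressure
print(round(refractivity_n(1013, 288, 10)))    # a few hundred N-units
```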
When the gradient of N is equal to –39 N-units per kilometer, normal propagation takes place,
corresponding to the effective earth’s radius ka, where k = 4/3.
When dN/dh is less than –39 N-units per kilometer, superrefraction occurs and the radio wave
is bent more strongly downward toward the earth.
When dN/dh is less than –157 N-units per kilometer, the radio energy may be bent downward
sufficiently to be reflected from the earth, after which the ray is again bent toward the earth, and
so on. The radio energy thus is trapped in a duct or waveguide. The wave also may be trapped
between two elevated layers, in which case energy is not lost at the ground reflection points and
even greater enhancement occurs. Radio waves thus trapped or ducted can produce fields
exceeding those for free-space propagation because the spread of energy in the vertical direction
is eliminated as opposed to the free-space case, where the energy spreads out in two directions
orthogonal to the direction of propagation. Ducting is responsible for abnormally high fields
beyond the radio horizon. These enhanced fields occur for significant periods of time on overwa-
ter paths in areas where meteorological conditions are favorable. Such conditions exist for signif-
icant periods of time and over significant horizontal extent in the coastal areas of southern
California and around the Gulf of Mexico. Over land, the effect is less pronounced because sur-
face features of the earth tend to limit the horizontal dimension of ducting layers [20].
Tropospheric Scatter
The most consistent long-term mode of propagation beyond the radio horizon is that of scatter-
ing by small-scale fluctuations in the refractive index resulting from turbulence. Energy is scat-
tered from multitudinous irregularities in the common volume, which consists of that portion of
the troposphere visible to both the transmitting and receiving sites. There are some empirical data
that show a correlation between the variations in the field beyond the horizon and ΔN, the differ-
ence between the refractivity on the ground and at a height of 1 km [21]. Procedures have been
developed for calculating scatter fields for beyond-the-horizon radio relay systems as a function
of frequency and distance [22, 23]. These procedures, however, require detailed knowledge of
path configuration and climate.
The effect of scatter propagation is incorporated in the statistical evaluation of propagation
(considered previously in this chapter), where the attenuation of fields beyond the diffraction
zone is based on empirical data and shows a linear decrease with distance of approximately 0.2
dB/mi (0.1 dB/km) for the VHF–UHF frequency band.
ers. The characteristics also differ with the seasons and with the intensity of the sun’s radiation,
as evidenced by the sunspot numbers, and the differences are generally more pronounced upon
the F2 than upon the F1 and E layers. There are also certain random effects that are associated
with solar and magnetic disturbances. Other effects that occur at or just below the E layer have
been established as being caused by meteors [24].
The greatest potential for television interference by way of the ionosphere is from sporadic E
ionization, which consists of occasional patches of intense ionization occurring 62 to 75 mi (100
to 120 km) above the earth’s surface and apparently formed by the interaction of winds in the
neutral atmosphere with the earth’s magnetic field. Sporadic E ionization can reflect VHF sig-
nals back to earth at levels capable of causing interference to analog television reception for peri-
ods lasting from 1 h or more, and in some cases totaling more than 100 h per year. In the U.S.,
VHF sporadic E propagation occurs a greater percentage of the time in the southern half of the
country and during the May to August period [25].
1.2.4 References
1. Bullington, K.: “Radio Propagation at Frequencies above 30 Mc,” Proc. IRE, pg. 1122,
October 1947.
2. Fink, D. G., (ed.): Television Engineering Handbook, McGraw-Hill, New York, N.Y., 1957.
3. Eckersley, T. L.: “Ultra-Short-Wave Refraction and Diffraction,” J. Inst. Elec. Engrs., pg.
286, March 1937.
4. Norton, K. A.: “Ground Wave Intensity over a Finitely Conducting Spherical Earth,” Proc.
IRE, pg. 622, December 1941.
5. Norton, K. A.: “The Propagation of Radio Waves over a Finitely Conducting Spherical
Earth,” Phil. Mag., June 1938.
6. van der Pol, Balth, and H. Bremmer: “The Diffraction of Electromagnetic Waves from an
Electrical Point Source Round a Finitely Conducting Sphere, with Applications to Radio-
telegraphy and to Theory of the Rainbow,” pt. 1, Phil. Mag., July, 1937; pt. 2, Phil. Mag.,
November 1937.
7. Burrows, C. R., and M. C. Gray: “The Effect of the Earth’s Curvature on Groundwave
Propagation,” Proc. IRE, pg. 16, January 1941.
8. “The Propagation of Radio Waves through the Standard Atmosphere,” Summary Technical
Report of the Committee on Propagation, vol. 3, National Defense Research Council,
Washington, D.C., 1946, published by Academic Press, New York, N.Y.
9. “Radio Wave Propagation,” Summary Technical Report of the Committee on Propagation
of the National Defense Research Committee, Academic Press, New York, N.Y., 1949.
10. de Lisle, E. W.: “Computations of VHF and UHF Propagation for Radio Relay Applica-
tions,” RCA, Report by International Division, New York, N.Y.
11. Selvidge, H.: “Diffraction Measurements at Ultra High Frequencies,” Proc. IRE, pg. 10,
January 1941.
12. McPetrie, J. S., and L. H. Ford: “An Experimental Investigation on the Propagation of
Radio Waves over Bare Ridges in the Wavelength Range 10 cm to 10 m,” J. Inst. Elec.
Engrs., pt. 3, vol. 93, pg. 527, 1946.
13. Megaw, E. C. S.: “Some Effects of Obstacles on the Propagation of Very Short Radio
Waves,” J. Inst. Elec. Engrs., pt. 3, vol. 95, no. 34, pg. 97, March 1948.
14. Dickson, F. H., J. J. Egli, J. W. Herbstreit, and G. S. Wickizer: “Large Reductions of VHF
Transmission Loss and Fading by the Presence of a Mountain Obstacle in Beyond-Line-of-
Sight Paths,” Proc. IRE, vol. 41, no. 8, pg. 96, August 1953.
15. Bullington, K.: “Radio Propagation Variations at VHF and UHF,” Proc. IRE, pg. 27, Janu-
ary 1950.
16. “Report of the Ad Hoc Committee, Federal Communications Commission,” vol. 1, May
1949; vol. 2, July 1950.
17. Epstein, J., and D. Peterson: “An Experimental Study of Wave Propagation at 850 Mc,”
Proc. IRE, pg. 595, May 1953.
18. “Documents of the XVth Plenary Assembly,” CCIR Report 563, vol. 5, Geneva, 1982.
19. Bean, B. R., and E. J. Dutton: “Radio Meteorology,” National Bureau of Standards Mono-
graph 92, March 1, 1966.
20. Dougherty, H. T., and E. J. Dutton: “The Role of Elevated Ducting for Radio Service and
Interference Fields,” NTIA Report 81–69, March 1981.
21. “Documents of the XVth Plenary Assembly,” CCIR Report 881, vol. 5, Geneva, 1982.
22. “Documents of the XVth Plenary Assembly,” CCIR Report 238, vol. 5, Geneva, 1982.
23. Longley, A. G., and P. L. Rice: “Prediction of Tropospheric Radio Transmission over Irreg-
ular Terrain—A Computer Method,” ESSA (Environmental Science Services Administra-
tion), U.S. Dept. of Commerce, Report ERL (Environment Research Laboratories) 79-ITS
67, July 1968.
24. National Bureau of Standards Circular 462, “Ionospheric Radio Propagation,” June 1948.
25. Smith, E. E., and E. W. Davis: “Wind-induced Ions Thwart TV Reception,” IEEE Spec-
trum, pp. 52–55, February 1981.
Chapter
1.3
Frequency Sources and References
Ulrich L. Rohde
1.3.1 Introduction1
A stable frequency reference is an important component of any transmission/reception system.
The quartz crystal is the classic device used for frequency generation and control. Quartz acts as
a stable high Q mechanical resonator. Crystal resonators are available for operation at frequen-
cies ranging from 1 kHz to 300 MHz and beyond.
1. Portions of this chapter are based on: Rohde, Ulrich L., and Jerry C. Whitaker: Communica-
tions Receivers: Principles and Design, 3rd ed., McGraw-Hill, New York, N.Y., 2000. Used
with permission.
1-46 Frequency Bands, Propagation, and Modulation
Figure 1.3.1 Cross section of a quartz crystal taken in the plane perpendicular to the optical axis:
(a) Y-cut plate, (b) X-cut plate.
crystal surfaces is reversed. Conversely, if electric charges are placed on the flat sides of the crys-
tal by applying a voltage across the faces, a mechanical stress is produced in the direction of the
Y-axis. This property by which mechanical and electrical properties are interconnected in a crys-
tal is known as the piezoelectric effect and is exhibited by all sections cut from a piezoelectric
crystal. Thus, if mechanical forces are applied across the faces of a crystal section having its flat
sides perpendicular to a Y-axis, as shown in Figure 1.3.1a, piezoelectric charges will be produced
because the forces and potentials developed in such a crystal have components across at least one
of the Y- or X-axes.
An alternating voltage applied across a quartz crystal will cause the crystal to vibrate and, if
the frequency of the applied alternating voltage approximates a frequency at which mechanical
resonance can exist in the crystal, the amplitude of the vibrations will be large. Any crystal has a num-
ber of such resonant frequencies that depend upon the crystal dimensions, the type of mechanical
oscillation involved, and the orientation of the plate cut from the natural crystal.
Crystals are temperature-sensitive, as shown in Figure 1.3.2. The extent to which a device is
affected by changes in temperature is determined by its cut and packaging. Crystals also exhibit
changes in frequency with time. Such aging is caused by one or both of the following:
• Mass transfer to or from the resonator surface
• Stress relief within the device itself
Crystal aging is most pronounced when the device is new. As stress within the internal structure
is relieved, the aging process slows.
Figure 1.3.2 Frequency deviation Δf/f (ppm) as a function of temperature (0 to 100°C) for two representative crystal cuts (curves 1 and 2).
Table 1.3.1 Typical Equivalent Circuit Parameter Values for Quartz Crystals (After [1].)
in order to increase the independence of the component adjustments required to improve stabil-
ity.
These problems can be alleviated through the use of some form of hybrid or digital compen-
sation. In this scheme, the crystal oscillator is connected to a varactor in series, as in the analog
case. This capacitor provides the coarse compensation (about ±4 parts in 10^7). The fine compen-
sation is provided by the digital network to further improve stability. The hybrid analog-digital
method is then capable of producing stability values of ±5 parts in 10^8 over the range –40°C to
+80°C.
1.3.3 Oscillators
Receivers are seldom single-channel devices, but more often cover wide frequency ranges. In the
superheterodyne receiver, this is accomplished by mixing the input signal with an LO signal. The
LO source must meet a variety of requirements:
• It must have high spectral purity.
• It must be agile so that it can move rapidly (jump) between frequencies in a time frame that
can be as short as a few microseconds.
• The increments in which frequencies can be selected must be small.
Frequency resolution between 1 and 100 Hz is generally adequate below 30 MHz; however, there
are a number of systems that provide 0.001-Hz steps. At higher frequencies, resolution is gener-
ally greater than 1 kHz.
In most modern receivers, such a frequency source is typically a synthesizer that generates all
individual frequencies as needed over the required frequency band. The modern synthesizer pro-
vides stable phase-coherent outputs. The frequencies are derived from a master standard, which
can be a high-precision crystal oscillator, a secondary atomic standard (such as a rubidium gas
cell), or a primary standard using a cesium atomic beam. The following characteristics must be
specified for the synthesizer:
• Frequency range
• Frequency resolution
• Frequency indication
• Maximum frequency error
• Settling time
• Reference frequency
• Output power
• Harmonic distortion
• SSB phase noise
• Discrete spurs (spurious frequencies)
• Wide-band noise
• Control interface
• Power consumption
• Mechanical size
• Environmental conditions
Free-running tunable oscillators, once used in radio receivers, have generally been replaced in
modern receivers because of their lack of precision and stability. Fixed-tuned crystal-controlled
oscillators are still used in second- and third-oscillator applications in multiconversion superhet-
erodyne receivers that do not require single-reference precision. Oscillators used in synthesizers
have variable tuning capability, which may be voltage-controlled, generally by varactor diodes.
Synthesizer designs have used mixing from multiple crystal sources and mixing of signals
derived from a single source through frequency multiplication and division. Synthesizers may be
“direct” and use the product of multiple mixing and filtering or “indirect” and use a phase-
locked-loop (PLL) slaved to the direct output to provide reduced spurious signals. There are a
number of publications describing these and other techniques, the classic texts being [4] through
[6]. Most modern receivers operating below 3 GHz use single- or multiple-loop digital PLL syn-
thesizers, although for some applications, direct digital waveform synthesis may be used.
• Harmonic output power. Harmonic output power is measured relative to the output power of
the oscillator. Typical values are 20 dB or more suppression relative to the fundamental. This
suppression can be improved by additional filtering.
• Output power. The output power of the oscillator, typically expressed in dBm, is measured
into a 50-Ω load. The output power is always combined with a specification for flatness or
variation. A typical spec would be 0 dBm ±1 dB.
• Output power as a function of temperature. All active circuits vary in performance as a
function of temperature. The output power of an oscillator over a temperature range should
vary less than a specified value, such as 1 dB.
• Post-tuning drift. After a voltage step is applied to the tuning diode input, the oscillator fre-
quency may continue to change until it settles to a final value. This post-tuning drift is one of
the parameters that limits the bandwidth of the VCO input.
• Power consumption. This characteristic conveys the dc power, usually specified in milliwatts
and sometimes qualified by operating voltage, required by the oscillator to function properly.
• Sensitivity to load changes. To keep manufacturing costs down, many wireless applications
use a VCO alone, without the buffering action of a high reverse-isolation amplifier stage. In
such applications, frequency pulling, the change of frequency resulting from partially reactive
loads, is an important oscillator characteristic. Pulling is commonly specified in terms of the
frequency shift that occurs when the oscillator is connected to a load that exhibits a non-unity
VSWR (such as 1.75:1, usually referenced to 50 Ω ), compared to the frequency that results
with a unity-VSWR load (usually 50 Ω ). Frequency pulling must be minimized, especially in
cases where power stages are close to the VCO unit and short pulses may affect the output frequency. Such feedback can make phase locking impossible.
• Spurious outputs. The spurious output specification of a VCO, expressed in decibels, char-
acterizes the strength of unwanted and nonharmonically related components relative to the
oscillator fundamental. Because a stable, properly designed oscillator is inherently clean,
such spurs are typically introduced only by external sources in the form of radiated or con-
ducted interference.
• Temperature drift. Although the synthesizer is responsible for locking and maintaining the
oscillator frequency, the VCO frequency change as a function of temperature is a critical
parameter and must be specified. This value varies from 10 kHz/ºC to several hundred kHz/ºC, depending on the center frequency and tuning range.
• Tuning characteristic. This specification shows the relationship, depicted as a graph,
between the VCO operating frequency and the tuning voltage applied. Ideally, the correspon-
dence between operating frequency and tuning voltage is linear.
• Tuning linearity. For stable synthesizers, a constant deviation of frequency versus tuning
voltage is desirable. It is also important to make sure that there are no breaks in the tuning
range—for example, that the oscillator does not stop operating with a tuning voltage of 0 V.
• Tuning sensitivity, tuning performance. This datum, typically expressed in megahertz per
volt (MHz/V), characterizes how much the frequency of a VCO changes per unit of tuning-
voltage change.
• Tuning speed. This characteristic is defined as the time necessary for the VCO to reach 90
percent of its final frequency upon the application of a tuning-voltage step. Tuning speed
depends on the internal components between the input pin and tuning diode—including,
among other things, the capacitance present at the input port. The input port’s parasitic ele-
ments determine the VCO’s maximum possible modulation bandwidth.
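The tuning characteristic and tuning sensitivity defined above can be estimated from a few measured frequency-versus-voltage points. The following sketch computes the sensitivity in MHz/V by central differences; the data points are hypothetical and serve only to illustrate the calculation. A spread in the results indicates tuning nonlinearity.

```python
# Estimate VCO tuning sensitivity (MHz/V) from a measured tuning
# characteristic using central finite differences. The frequency/voltage
# points below are illustrative, not from any particular device.

def tuning_sensitivity(volts, freqs_mhz):
    """Return per-point sensitivity estimates in MHz/V."""
    sens = []
    for i in range(1, len(volts) - 1):
        dv = volts[i + 1] - volts[i - 1]
        df = freqs_mhz[i + 1] - freqs_mhz[i - 1]
        sens.append(df / dv)
    return sens

volts = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
freqs = [800.0, 812.0, 826.0, 842.0, 860.0, 880.0]  # hypothetical VCO curve

k = tuning_sensitivity(volts, freqs)
# sensitivity rises across the band, i.e., the tuning is nonlinear
```

The rising values show a tuning curve that steepens with voltage, the kind of nonlinearity the tuning-linearity specification above is meant to bound.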
Figure 1.3.5 Block diagram of a PLL synthesizer driven by a frequency standard, direct digital synthesis (DDS), or fractional-N synthesizer for high resolution at the output. The last two approaches allow a relatively low division ratio and provide quasi-arbitrary resolution.
The filters shown in the table are single-ended, which means that they are driven from the out-
put of a CMOS switch in the phase/frequency detector. This type of configuration exhibits a
problem under lock conditions: If we assume that the CMOS switches are identical and have no
leakage, initially the output current charges the integrator and the system will go into lock. If it is
locked and stays locked, there is no need for a correction voltage, and therefore the CMOS
switches will not supply any output. The very moment a tiny drift occurs, a correction voltage is
required, and therefore there is a drastic change from no loop gain (closed condition) to a loop
gain (necessary for acquiring lock). This transition causes a number of nonlinear phenomena,
and therefore it is a better choice to either use a passive filter in a symmetrical configuration or a
symmetrical loop filter with an operational amplifier instead of the CMOS switches. Many of the modern PFDs have different outputs to accommodate this. An ill-conditioned filter, or a poor selection of filter values, frequently leads to a very low-frequency oscillation, often referred to as motorboating. This can be corrected only by adjusting the phase/frequency behavior of the filter.
The Bode diagram is a useful tool in designing the appropriate loop filter. The Bode diagram
shows the open-loop performance, both magnitude and phase, of the phase-locked loop.
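The open-loop response can also be evaluated numerically. The sketch below computes the open-loop gain and phase of a type-2 PLL with an active PI loop filter F(s) = (1 + sτ2)/(sτ1), locates the gain-crossover frequency, and reads the phase margin there — the two quantities a Bode diagram is used to check. All loop constants are illustrative assumptions, not design values from the text.

```python
import cmath
import math

# Open-loop Bode evaluation for a type-2 PLL. Kp (phase-detector gain,
# V/rad), Kv (VCO gain, rad/s per V), divider N, and the PI loop-filter
# time constants t1, t2 are illustrative values only.
Kp = 0.5
Kv = 2 * math.pi * 10e6
N = 1000.0
t1, t2 = 1e-3, 1e-4

def open_loop(f_hz):
    """G(j*2*pi*f) = Kp * Kv * F(s) / (N * s), with F(s) = (1 + s*t2)/(s*t1)."""
    s = complex(0.0, 2 * math.pi * f_hz)
    return Kp * Kv * (1 + s * t2) / (s * t1) / (N * s)

# sweep 10 Hz .. 1 MHz, find the gain crossover (|G| = 1), then the
# phase margin at that frequency
freqs = [10 ** (i / 100.0) for i in range(100, 601)]
fc = min(freqs, key=lambda f: abs(abs(open_loop(f)) - 1.0))
pm_deg = 180.0 + math.degrees(cmath.phase(open_loop(fc)))
```

Moving the stabilizing zero (lowering 1/τ2 relative to the crossover) increases the phase margin; a design with too little margin shows exactly the marginal, oscillatory behavior described above.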
Figure 1.3.7 A Type 2 high-order filter with a notch to suppress the discrete reference spurs.
Figure 1.3.8 Phase/frequency discriminator including an active loop filter capable of operating up
to 100 MHz.
Figure 1.3.9 Custom-built phase detector with a noise floor of better than –168 dBc/Hz. This
phase detector shows extremely low phase jitter.
f0 = (N + 1/M) fr  (1.3.1)
This expression shows that f0 can be varied in fractional increments of the reference frequency
by varying M. The technique is equivalent to constructing a fractional divider, but the fractional
part of the division is actually implemented using a phase accumulator. This method can be
expanded to frequencies much higher than 6 GHz using the appropriate synchronous dividers.
The phase accumulator approach is illustrated by the following example.
Consider the problem of generating 899.8 MHz using a fractional-N loop with a 50-MHz reference frequency; 899.8 MHz = 50 MHz × (N + K/F). The integral part of the division N is set to 17 and the fractional part K/F is 996/1000 (K/F is not an integer). The VCO output is divided by N + 1 = 18 on 996 of every 1,000 cycles. This can easily be implemented by adding the number 0.996 to the contents of an accumulator every cycle. Every time the accumulator overflows, the divider divides by 18 rather than by 17. Only the fractional value of the addition is
retained in the phase accumulator. If we move to the lower band or try to generate 850.2 MHz, N
remains 17 and K/F becomes 4/1000. This method of fractional division was first introduced by
using analog implementation and noise cancellation, but today it is implemented as a totally dig-
ital approach. The necessary resolution is obtained from dual-modulus prescaling, which allows
for a well-established method for achieving a high-performance frequency synthesizer operating
at UHF and higher frequencies. Dual-modulus prescaling avoids the loss of resolution in a system compared to a simple prescaler. It allows a VCO step equal to the value of the reference frequency to be obtained. This method needs an additional counter; the dual-modulus prescaler then divides by one of two values depending upon the state of its control line. The only drawback of prescalers is the resulting minimum division ratio, which is approximately N² for an N/(N + 1) prescaler.
The dual modulus divider is the key to implementing the fractional-N synthesizer principle.
Although the fractional-N technique appears to have a good potential of solving the resolution
limitation, it is not free of complications. Typically, an overflow from the phase accumulator,
which is the adder with output feedback to the input after being latched, is used to change the
instantaneous division ratio. Each overflow produces a jitter at the output frequency, caused by
the fractional division, and is limited to the fractional portion of the desired division ratio.
In our example, we had chosen a step size of 200 kHz, and yet the discrete side bands vary
from 200 kHz for K/F = 4/1000 to 49.8 MHz for K/F = 996/1000. It will become the task of the
loop filter to remove those discrete spurious elements. While in the past the removal of discrete
spurs was accomplished by analog techniques, various digital methods are now available. The
microprocessor has to solve the following equation:
N* = N + K/F = [N(F – K) + (N + 1)K] / F  (1.3.2)

N* = 850.2 MHz / 50 MHz = 17.004  (1.3.3)

Fout = 50 MHz × [16932 + 72] / 1000 = 846.6 MHz + 3.6 MHz = 850.2 MHz  (1.3.4)
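The arithmetic of Equations (1.3.2) through (1.3.4) can be confirmed by simulating the phase accumulator directly. The following sketch reproduces the 850.2-MHz example (N = 17, K/F = 4/1000, 50-MHz reference):

```python
# Fractional-N phase-accumulator simulation for the 850.2-MHz example:
# on each reference cycle the accumulator gains K; on overflow (>= F)
# the divider divides by N + 1 instead of N.

def fractional_n(N, K, F, cycles):
    """Average division ratio of a fractional-N divider over `cycles` cycles."""
    acc = 0
    total = 0
    for _ in range(cycles):
        acc += K
        if acc >= F:              # accumulator overflow
            acc -= F
            total += N + 1        # divide by N + 1 on this cycle
        else:
            total += N            # divide by N on this cycle
    return total / cycles

f_ref = 50e6
n_avg = fractional_n(17, 4, 1000, 1000)   # 996 cycles at 17, 4 cycles at 18
f_out = n_avg * f_ref                     # 17.004 * 50 MHz = 850.2 MHz
```

Running the same routine with K = 996 reproduces the 899.8-MHz case (average ratio 17.996), matching the worked example in the text.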
Figure 1.3.10 Block diagram of a multiloop synthesizer. (Courtesy of Rohde & Schwarz.)
The 69.2- to 69.3-MHz output frequency from the mixer, after filtering, is mixed with the
final VCO output frequency to produce a signal of 12.2 to 42.1 MHz, which after division by M
is used for comparison in the final PLL with the 100-kHz reference. The M value is used to
select the 0.1-MHz steps, while the value of N shifts the 70- to 80-MHz oscillator to provide 10-
Hz resolution over the 0.1-MHz band resulting from its division by 100. This synthesizer pro-
vides the first oscillator frequency for a receiver with an 81.4-MHz first IF, for a band of input
frequencies up to 30 MHz, with 10-Hz step resolution.
This multiloop synthesizer illustrates the most important principles found in general-purpose
synthesizers. A different auxiliary loop could be used to provide further resolution. For example,
by replacing the 10.7- to 10.8-MHz loop by a digital direct frequency synthesizer, quasi-infinite
resolution could be obtained by a microprocessor-controlled low-frequency synthesizer. Such a
synthesizer has a very fast lock-up time, comparable to the speed of the 100-kHz loop. In the
design indicated, the switching speed is determined by the 1-kHz fine resolution loop. For a loop
bandwidth of 50 Hz for this loop, we will obtain a settling time in the vicinity of 40 ms. The
longest time is needed when the resolution synthesizer is required to jump from 80 to 70 MHz.
During this frequency jump, the loop will go out of both phase and frequency lock and will need
complete reacquisition. Thus, each time a 100-kHz segment is passed through, there is such a
disturbance. The same occurs when the output frequency loop jumps over a large frequency seg-
ment. The VCO, operating from 81.4 to 111.4 MHz, is coarse-tuned by switching diodes, and
some of the jumps are objectionable. The noise sideband performance of this synthesizer is
determined inside the loop bandwidth by the reference noise multiplied to the output frequency,
and outside the loop bandwidth by the VCO noise. The latter noise can be kept low by building a
high-quality oscillator.
Another interesting multiloop synthesizer is shown in Figure 1.3.11. A 13- to 14-MHz single-loop synthesizer with 1-kHz steps is divided by 10 to provide 100-Hz steps and a 20-dB noise improvement. A synthesizer in this frequency range can be built with one oscillator in a single
chip. The output loop, operating from 69.5 to 98 MHz, is generated in a synthesizer loop with
50-kHz reference and 66.6- to 66.7-MHz frequency offset generated by translation of the fine
loop frequency. The reference frequencies for the two loops are generated from a 10.7 MHz tem-
perature-compensated crystal oscillator standard. An additional 57.3-MHz TCXO is used to
drive a second oscillator in the receiver as well as to offset the fine frequency loop of the synthe-
sizer. An increase in frequency of this oscillator changes the output frequency of the first LO
(synthesizer) in the opposite direction to the second LO, thereby canceling the drift error. This
drift-canceling technique is sometimes referred to as the Barlow-Wadley principle.
Yn = [2 cos(2π f T)] Y(n – 1) – Y(n – 2)
(1.3.5)
This is solved by Yn = cos(2π f nT). There are at least two problems with this method, however. The noise can increase until a limit cycle (nonlinear oscillation) occurs. Also, the finite word length used to represent 2 cos(2π f T) places a limitation on the frequency resolution. Another method of DDFS, direct table lookup, consists of storing the sinusoidal amplitude coefficients for successive phase increments in memory. The continuing reduction in the size and cost of ROM makes this the most frequently used technique.
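The recursion of Equation (1.3.5) is easily demonstrated. The sketch below (sample rate and output frequency are arbitrary choices) seeds the difference equation with two exact cosine samples and confirms that it keeps tracking cos(2πfnT):

```python
import math

f = 1000.0            # output frequency, Hz (illustrative)
T = 1.0 / 48000.0     # sampling interval (illustrative)
c = 2.0 * math.cos(2.0 * math.pi * f * T)   # the recursion coefficient

# seed with Y0 = cos(0) and Y1 = cos(2*pi*f*T), then iterate the recursion
y = [1.0, math.cos(2.0 * math.pi * f * T)]
for n in range(2, 200):
    y.append(c * y[-1] - y[-2])

# maximum deviation from the exact cosine over 200 samples
err = max(abs(y[n] - math.cos(2.0 * math.pi * f * n * T)) for n in range(200))
```

In a fixed-point implementation the finite word length of the coefficient c limits the attainable frequency resolution, and accumulated rounding can grow into the limit cycles noted in the text; in double precision the recursion stays essentially exact over this short run.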
One method of direct table lookup outputs the same N points for each cycle of the sine wave,
and changes the output frequency by adjusting the rate at which the points are computed. It is relatively difficult to obtain fine frequency resolution with this approach, so a modified table look-up method is generally used. It is this method that we describe here. The function cos(2π f t) is approximated by outputting the function cos(2π f nT) for n = 1, 2, 3, ..., where T is the interval
between conversions of digital words in the D/A converter and n represents the successive sam-
ple numbers. The sampling frequency, or rate, of the system is 1/T. The lowest output frequency
waveform contains N distinct points in its waveform, as illustrated in Figure 1.3.12. A waveform
of twice the frequency can be generated, using the same sampling rate, but outputting every other
data point. A waveform k times as fast is
obtained by outputting every kth point at the
same rate 1/T. The frequency resolution, then,
is the same as the lowest frequency fL.
The maximum output frequency is selected so that it is an integral multiple of fL; that is, fU = k fL. If P points are used in the waveform of the highest frequency, then N (= kP) points are used in the lowest-frequency waveform. The number N is limited by the available memory size. The minimum value that P can assume is usually taken to be four. With this small value of P, the output contains many harmonics of the desired frequency. These can be removed by the use of low-pass filtering at the D/A output. For P = 4, the period of the highest frequency is 4T, resulting in fU = 1/(4T). Thus, the highest attainable frequency is determined by the fastest sampling rate possible.

Figure 1.3.12 Synthesized waveform generated by direct digital synthesis.

Figure 1.3.13 Block diagram of a direct digital frequency synthesizer. (From [8]. Reprinted with permission.)
In the design of this type of DDFS, the following guidelines apply:
• The desired frequency resolution determines the lowest output frequency fL.
• The number of D/A conversions used to generate fL is N = 4k = 4fU/fL provided that four con-
versions are used to generate fU (P = 4).
• The maximum output frequency fU is limited by the maximum sampling rate of the DDFS: fU ≤ 1/(4T). Conversely, T ≤ 1/(4fU).
The architecture of the complete DDFS is shown in Figure 1.3.13. To generate nfL, the integer
n addresses the register, and each clock cycle kn is added to the content of the accumulator so
that the content of the memory address register is increased by kn. Each knth point of the mem-
ory is addressed, and the content of this memory location is transferred to the D/A converter to
produce the output sampled waveform.
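The address arithmetic described above can be sketched in a few lines. Here a 1024-point sine table stands in for the ROM, and the accumulator step k selects the output frequency k·fL, where fL = 1/(NT); the table length and step values are illustrative.

```python
import math

N = 1024                                        # points in the ROM table
table = [math.sin(2.0 * math.pi * i / N) for i in range(N)]

def ddfs(k, samples):
    """Read every k-th table entry per clock; output frequency is k * fL."""
    acc = 0
    out = []
    for _ in range(samples):
        out.append(table[acc])
        acc = (acc + k) % N                     # phase accumulator, modulo N
    return out

out = ddfs(4, 256)       # k = 4: one full sine period in N/4 = 256 clocks
```

Doubling k halves the number of clocks per output period, so the output frequency doubles at the same sampling rate — exactly the every-kth-point behavior described in the text.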
To complete the DDFS, the memory size and the length (number of bits) of the memory word must be determined. The word length is determined by system noise requirements. The amplitude of the D/A output is that of an exact sinusoid corrupted with the deterministic noise resulting from truncation caused by the finite length of the digital words (quantization noise). If an (n + 1)-bit word length (including one sign bit) is used and the output of the D/A converter varies between ±1, the mean noise from the quantization will be

σ² = (1/12)(1/2)^(2n) = (1/3)(1/2)^(2(n + 1))  (1.3.6)
The mean noise is averaged over all possible waveforms. For a worst-case waveform, the noise is a square wave with amplitude (1/2)(1/2)^n, and

σ² = (1/4)(1/2)^(2n)
For each bit added to the word length, the spectral purity improves by 6 dB.
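Equation (1.3.6) and the 6-dB-per-bit rule can be checked numerically. The sketch below quantizes a test tone to n magnitude bits over a ±1 range (the tone frequency is an arbitrary choice) and compares the measured noise power with (1/12)(1/2)^(2n):

```python
import math

def noise_power(n, samples=100000):
    """Mean quantization-noise power for n magnitude bits over a +/-1 range."""
    step = 2.0 ** -n                  # LSB size
    total = 0.0
    for i in range(samples):
        x = math.sin(2.0 * math.pi * 0.1234567 * i)   # arbitrary test tone
        q = round(x / step) * step    # uniform quantizer
        total += (x - q) ** 2
    return total / samples

p8 = noise_power(8)
p9 = noise_power(9)
predicted = (1.0 / 12.0) * 0.5 ** 16        # Eq. (1.3.6) with n = 8
gain_db = 10.0 * math.log10(p8 / p9)        # close to 6 dB per added bit
```

The measured power sits close to the predicted value because the quantization error of a tone with an irrational normalized frequency is approximately uniformly distributed over one step.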
The main drawback of the DDFS is that it is limited to relatively low frequencies. The upper
frequency is directly related to the maximum usable clock frequency. DDFS tends to be noisier
than other methods, but adequate spectral purity can be obtained if sufficient low-pass filtering
is used at the output. DDFS systems are easily constructed using readily available microproces-
sors. The combination of DDFS for fine frequency resolution plus other synthesis techniques to
obtain higher-frequency output can provide high resolution with very rapid settling time after a
frequency change. This is especially valuable for frequency-hopping spread-spectrum systems.
1.3.7 References
1. Tate, Jeffrey P., and Patricia F. Mead: “Crystal Oscillators,” in The Electronics Handbook,
Jerry C. Whitaker (ed.), CRC Press, Boca Raton, Fla., pp. 185–199, 1996.
2. Frerking, M. E.: Crystal Oscillator Design and Temperature Compensation, Van Nostrand
Reinhold, New York, N. Y., 1978.
3. Dye, D. W.: Proc. Phys. Soc., Vol. 38, pp. 399–457, 1926.
4. Hietala, Alexander W., and Duane C. Rabe: “Latched Accumulator Fractional-N Synthesis
With Residual Error Reduction,” United States Patent, Patent No. 5,093,632, March 3,
1992.
5. Riley, Thomas A. D.: “Frequency Synthesizers Having Dividing Ratio Controlled Sigma-
Delta Modulator,” United States Patent, Patent No. 4,965,531, October 23, 1990.
6. King, Nigel J. R.: “Phase Locked Loop Variable Frequency Generator,” United States
Patent, Patent No. 4,204,174, May 20, 1980.
7. Wells, John Norman: “Frequency Synthesizers,” European Patent, Patent No. 0125790B2, July 5, 1995.
8. Rohde, Ulrich L.: Digital PLL Frequency Synthesizers, Prentice-Hall, Englewood Cliffs,
N.J., 1983.
1.3.8 Bibliography
Rohde, Ulrich L.: Microwave and Wireless Synthesizers: Theory and Design, John Wiley &
Sons, New York, N.Y., 1997.
Chapter
1.4
Modulation Systems and Characteristics
1.4.1 Introduction
The primary purpose of most communications and signaling systems is to transfer information
from one location to another. The message signals used in communication and control systems
usually must be limited in frequency to provide for efficient transfer. This frequency may range
from a few hertz for control systems to a few megahertz for video signals to many megahertz for
multiplexed data signals. To facilitate efficient and controlled distribution of these components,
an encoder generally is required between the source and the transmission channel. The encoder
acts to modulate the signal, producing at its output the modulated waveform. Modulation is a
process whereby the characteristics of a wave (the carrier) are varied in accordance with a mes-
sage signal, the modulating waveform. Frequency translation is usually a by-product of this pro-
cess. Modulation may be continuous, where the modulated wave is always present, or pulsed,
where no signal is present between pulses.
There are a number of reasons for producing modulated waves, including:
• Frequency translation. The modulation process provides a vehicle to perform the necessary
frequency translation required for distribution of information. An input signal may be trans-
lated to its assigned frequency band for transmission or radiation.
• Signal processing. It is often easier to amplify or process a signal in one frequency range as
opposed to another.
• Antenna efficiency. Generally speaking, for an antenna to be efficient, it must be large com-
pared with the signal wavelength. Frequency translation provided by modulation allows
antenna gain and beamwidth to become part of the system design considerations. The use of
higher frequencies permits antenna structures of reasonable size and cost.
• Bandwidth modification. The modulation process permits the bandwidth of the input signal to
be increased or decreased as required by the application. Bandwidth reduction permits more
efficient use of the spectrum, at the cost of signal fidelity. Increased bandwidth, on the other
hand, provides increased immunity to transmission channel disturbances.
• Signal multiplexing. In a given transmission system, it may be necessary or desirable to com-
bine several different signals into one baseband waveform for distribution. Modulation pro-
vides the vehicle for such multiplexing. Various modulation schemes allow separate signals to
be combined at the transmission end and separated (demultiplexed) at the receiving end. Mul-
tiplexing may be accomplished by using, among other systems, frequency-domain multiplex-
ing (FDM) or time-domain multiplexing (TDM).
Modulation of a signal does not come without the possible introduction of undesirable attributes. Bandwidth restriction and the addition of noise or other disturbances are the two primary problems faced by the transmission system designer.
e(t) = A Ec cos(ωc t)  (1.4.1)

Where:
e(t) = instantaneous amplitude of carrier wave as a function of time (t)
A = a factor of amplitude modulation of the carrier wave
ωc = angular frequency of carrier wave (radians per second)
Ec = peak amplitude of carrier wave
If A is a constant, the peak amplitude of the carrier wave is constant, and no modulation
exists. Periodic modulation of the carrier wave results if the amplitude of A is caused to vary with
respect to time, as in the case of a sinusoidal wave
A = 1 + (Em / Ec) cos(ωm t)  (1.4.2)

e(t) = Ec [1 + (Em / Ec) cos(ωm t)] cos(ωc t)  (1.4.3)
This is the basic equation for periodic (sinusoidal) amplitude modulation. When all multiplica-
tions and a simple trigonometric identity are performed, the result is
e(t) = Ec cos(ωc t) + (M Ec / 2) cos[(ωc + ωm) t] + (M Ec / 2) cos[(ωc – ωm) t]  (1.4.4)
m = (Eavg – Emin) / Eavg  (1.4.5)

Where:
Eavg = average envelope amplitude
Emin = minimum envelope amplitude

Full (100 percent) modulation occurs when the peak value of the modulated envelope reaches twice the carrier amplitude and the minimum value of the envelope reaches zero.
mpp = (Emax – Eavg) / Eavg × 100  (1.4.6)

mnp = (Eavg – Emin) / Eavg × 100  (1.4.7)
Where:
mpp = positive peak modulation (percent)
Emax = peak value of modulation envelope
mnp = negative peak modulation (percent)
Eavg = average envelope amplitude
E min = minimum envelope amplitude
When modulation exceeds 100 percent on the negative swing of the carrier, spurious signals
are emitted. It is possible to modulate an AM carrier asymmetrically; that is, to restrict modula-
tion in the negative direction to 100 percent, but to allow modulation in the positive direction to
exceed 100 percent without a significant loss of fidelity. In fact, many modulating signals nor-
mally exhibit asymmetry, most notably human speech waveforms.
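Equations (1.4.6) and (1.4.7) can be applied directly to a sampled envelope. In the sketch below, an asymmetric modulating signal (swinging 120 percent positive but only 60 percent negative, purely for illustration) yields the expected positive- and negative-peak percentages; per the text, the carrier amplitude serves as the average envelope amplitude.

```python
import math

Ec = 1.0                          # carrier (average envelope) amplitude

def envelope(t):
    """Asymmetric AM envelope: positive swings twice the negative swings."""
    m = 1.2 * math.cos(2.0 * math.pi * t)
    return Ec * (1.0 + (m if m > 0 else 0.5 * m))

env = [envelope(i / 1000.0) for i in range(1000)]
e_max, e_min, e_avg = max(env), min(env), Ec

m_pp = (e_max - e_avg) / e_avg * 100.0   # positive peak modulation, percent
m_np = (e_avg - e_min) / e_avg * 100.0   # negative peak modulation, percent
```

Because the negative swing never reaches 100 percent, the envelope never pinches off, and no spurious emissions of the kind described above are produced.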
The carrier wave represents the average amplitude of the envelope and, because it is the same
regardless of the presence or absence of modulation, the carrier transmits no information. The
information is carried by the sideband frequencies. The amplitude of the modulated envelope
may be expressed as [1]
Where:
E = envelope amplitude
E0 = carrier wave crest value, V
Figure 1.4.4 Idealized amplitude characteristics of the FCC standard waveform for monochrome
and color TV transmission. (Adapted from FCC Rules, Sec. 73.699.)
low-power stage and then amplify the signal with a linear power amplifier to drive the antenna.
Linear amplifiers generally exhibit relatively low efficiency.
Figure 1.4.5 Types of suppressed carrier amplitude modulation: (a) the modulating signal, (b) double-sideband AM, (c) double-sideband suppressed carrier AM, (d) single-sideband suppressed carrier AM.
MI = Fd / Mf  (1.4.9)
Where:
MI = the modulation index
F d = frequency deviation
Mf = modulating frequency
The higher the MI, the more sidebands produced. It follows that the higher the modulating frequency for a given deviation, the fewer sidebands produced, but the greater their spacing.
To determine the frequency spectrum of a transmitted FM waveform, it is necessary to com-
pute a Fourier series or Fourier expansion to show the actual signal components involved. This
work is difficult for a waveform of this type, because the integrals that must be performed in the
Fourier expansion or Fourier series are not easily solved. The result, however, is that the integral
produces a particular class of solution that is identified as the Bessel function, illustrated in Fig-
ure 1.4.6.
The carrier amplitude and phase, plus the sidebands, can be expressed mathematically by
making the modulation index the argument of a simplified Bessel function. The general expres-
sion is given from the following equations:
Where:
A = the unmodulated carrier amplitude constant
J0 = modulated carrier amplitude
J1, J2, J3... Jn = amplitudes of the nth-order sidebands
M = modulation index
ω c = 2 π F c , the carrier frequency
ω m = 2 π F m , the modulating frequency
Further supporting mathematics will show that an FM signal using the modulation indices that
occur in a wideband system will have a multitude of sidebands. From the purist point of view, all
sidebands would have to be transmitted, received, and demodulated to reconstruct the modulating
signal with complete accuracy. In practice, however, the channel bandwidths permitted practical
FM systems usually are sufficient to reconstruct the modulating signal with little discernible loss
in fidelity, or at least an acceptable loss in fidelity.
Figure 1.4.7 illustrates the frequency components present for a modulation index of 5. Figure 1.4.8 shows the components for an index of 15. Note that the number of significant sideband components becomes quite large with a high MI. This simple representation of a single-tone frequency-modulated spectrum is useful for understanding the general nature of FM, and for making tests and measurements. When typical modulation signals are applied, however, many more sideband components are generated. These components vary to the extent that sideband energy becomes distributed over the entire occupied bandwidth, rather than appearing at discrete frequencies.

Figure 1.4.6 Plot of Bessel functions of the first kind as a function of modulation index.
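The sideband amplitudes plotted in Figures 1.4.7 and 1.4.8 follow directly from the Bessel functions. The sketch below evaluates Jn(MI) from its integral form using only the standard library (the 0.1 significance threshold is an arbitrary choice) and shows that, for MI = 5, sidebands remain significant out to roughly n = MI + 1, consistent with Carson's-rule bandwidth estimates.

```python
import math

def bessel_j(n, x, steps=2000):
    """Jn(x) = (1/pi) * integral over [0, pi] of cos(n*t - x*sin(t)) dt."""
    h = math.pi / steps
    total = 0.5 * (1.0 + math.cos(n * math.pi))   # trapezoid endpoints
    for i in range(1, steps):
        t = i * h
        total += math.cos(n * t - x * math.sin(t))
    return total * h / math.pi

MI = 5                                # modulation index, as in Figure 1.4.7
amps = [bessel_j(n, MI) for n in range(9)]
significant = [n for n, a in enumerate(amps) if abs(a) > 0.1]
```

Note that J2(5) happens to fall near a null, so the n = 2 sideband pair is small even though higher-order sidebands are not — a reminder that individual sideband amplitudes do not decay monotonically with n.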
Although complex modulation of an FM carrier greatly increases the number of frequency
components present in the frequency-modulated wave, it does not, in general, widen the fre-
quency band occupied by the energy of the wave. To a first approximation, this band is still
roughly twice the sum of the maximum frequency deviation at the peak of the modulation cycle
plus the highest modulating frequency involved.
FM is not a simple frequency translation, as with AM, but involves the generation of entirely
new frequency components. In general, the new spectrum is much wider than the original modu-
lating signal. This greater bandwidth may be used to improve the signal-to-noise ratio (S/N) of
the transmission system. FM thereby makes it possible to exchange bandwidth for S/N enhance-
ment.
The power in an FM system is constant throughout the modulation process. The output power
is increased in the amplitude modulation system by the modulation process, but the FM system
simply distributes the power throughout the various frequency components that are produced by
modulation. During modulation, a wideband FM system does not have a high amount of energy present in the carrier. Most of the energy will be found in the sum of the sidebands.

Figure 1.4.7 RF spectrum of a frequency-modulated signal with a modulation index of 5 and operating parameters as shown.
The constant-amplitude characteristic of FM greatly assists in capitalizing on the low noise
advantage of FM reception. Upon being received and amplified, the FM signal normally is
clipped to eliminate all amplitude variations above a certain threshold. This removes noise
picked up by the receiver as a result of man-made or atmospheric signals. It is not possible (gen-
erally speaking) for these random noise sources to change the frequency of the desired signal;
they can affect only its amplitude. The use of hard limiting in the receiver will strip off such
interference.
Δf = mp × fm  (1.4.10)

Where:
Δf = frequency deviation of the carrier
mp = phase shift of the carrier
fm = modulating frequency
In a phase-modulated wave, the phase shift mp is independent of the modulating frequency;
the frequency deviation Δ f is proportional to the modulating frequency. In contrast, with a fre-
quency-modulated wave, the frequency deviation is independent of modulating frequency.
Therefore, a frequency-modulated wave can be obtained from a phase modulator by making the
modulating voltage applied to the phase modulator inversely proportional to frequency. This can
be readily achieved in hardware.
To offset this problem, the input signal to the FM transmitter may be preemphasized to increase the amplitude of higher-frequency signal components in normal program material. FM receivers utilize complementary deemphasis to produce flat overall system frequency response.
instant of the sampling interval. This “single instant” may be the center or edge of the sam-
pling interval.
There are two common methods of generating a PAM signal:
• Variation of the amplitude of a pulse sequence about a fixed nonzero value (or pedestal). This
approach constitutes double-sideband amplitude modulation.
• Double-polarity modulated pulses with no pedestal. This approach constitutes double-side-
band suppressed carrier modulation.
Figure 1.4.9 Pulse amplitude modulation waveforms: (a) modulating signal; (b) square-topped sampling, bipolar pulse train; (c) top sampling, bipolar pulse train; (d) square-topped sampling, unipolar pulse train; (e) top sampling, unipolar pulse train.
Figure 1.4.10 Pulse time modulation waveforms: (a) modulating signal and sample-and-hold (S/
H) waveforms, (b) sawtooth waveform added to S/H, (c) leading-edge PTM, (d) trailing-edge PTM.
In the classic design of a PCM encoder, the quantization steps are equal. The quantization
error (or quantization noise) usually can be reduced, however, through the use of nonuniform
spacing of levels. Smaller quantization steps are provided for weaker signals, and larger steps are
provided near the peak of large signals. Quantization noise is reduced by providing an encoder
that is matched to the level distribution (probability density) of the input signal.
Nonuniform quantization typically is realized in an encoder through processing of the input
(analog) signal to compress it to match the desired nonuniformity. After compression, the signal
is fed to a uniform quantization stage.
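One common realization of this compress-then-quantize approach is μ-law companding. The sketch below (μ = 255 and 8-bit quantization are typical telephony values, used here only as an illustration) shows the reduced error on a weak signal compared with plain uniform quantization:

```python
import math

MU = 255.0

def compress(x):
    """mu-law compression of x in [-1, 1]."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def expand(y):
    """Inverse of compress()."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

def quantize(x, bits):
    """Uniform quantizer over [-1, 1]."""
    step = 2.0 / (2 ** bits)
    return round(x / step) * step

def companded(x, bits=8):
    # compress, quantize uniformly, then expand back
    return expand(quantize(compress(x), bits))

x = 0.01                                   # a weak input signal
err_companded = abs(companded(x) - x)      # nonuniform (mu-law) quantization
err_uniform = abs(quantize(x, 8) - x)      # plain uniform quantization
```

The compression expands weak signals before the uniform quantizer, giving them finer effective steps at the cost of coarser steps near the peaks of large signals — matching the level-distribution argument above.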
+
Signal amplitude
Time
4 Quantization levels
Amplitude
1
Sample
Time
concept. The clock rate is assumed to be constant. Transmitted pulses from the pulse generator
are positive if the signal is changing in a positive direction; they are negative if the signal is
changing in a negative direction.
As with the PCM encoding system, quantization noise is a parameter of concern for DM.
Quantization noise can be reduced by increasing the sampling frequency (the pulse generator fre-
quency). The DM system has no fixed maximum (or minimum) signal amplitude. The limiting
factor is the slope of the sampled signal, which must not change by more than one level or step
during each pulse interval.
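The delta modulation encoder and decoder described above can be sketched in a few lines. The step size and test signal are illustrative choices, not values from the text.

```python
import math

def delta_modulate(samples, step=0.1):
    """One-bit DM encoder: emit +1 if the input is above the running
    staircase approximation, otherwise -1; the staircase moves one
    step per clock interval."""
    approx = 0.0
    bits, staircase = [], []
    for s in samples:
        bit = 1 if s > approx else -1
        approx += bit * step
        bits.append(bit)
        staircase.append(approx)
    return bits, staircase

def delta_demodulate(bits, step=0.1):
    """Decoder: integrate the pulse train to rebuild the staircase."""
    approx, out = 0.0, []
    for b in bits:
        approx += b * step
        out.append(approx)
    return out

# A slow sine wave (max slope < one step per clock) is tracked without
# slope overload; a faster one would outrun the staircase.
signal = [math.sin(2 * math.pi * k / 64) for k in range(64)]
bits, stair = delta_modulate(signal)
rebuilt = delta_demodulate(bits)
```

Note that the decoder reproduces the encoder's staircase exactly; all quantization error is incurred at the encoder.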
Figure 1.4.13 Delta modulation waveforms: (a) modulating signal, (b) quantized modulating sig-
nal, (c) pulse train, (d) resulting delta modulation waveform.
age-controlled oscillator (VCO). The transmitted signals often are referred to as a mark
(binary digit 1) or a space (binary digit 0). Figure 1.4.14 illustrates the transmitted waveform
of a BFSK system.
• Binary phase-shift keying (BPSK), a modulating method in which the phase of the transmit-
ted wave is shifted 180° in synchronism with the input digital signal. The phase of the RF car-
rier is shifted by π/2 radians or –π/2 radians, depending upon whether the data bit is a 0 or a 1.
Figure 1.4.15 shows the BPSK transmitted waveform.
• Quadriphase-shift keying (QPSK), a modulation scheme similar to BPSK except that quater-
nary modulation is employed, rather than binary modulation. QPSK requires half the band-
width of BPSK for the same transmitted data rate.
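The bandwidth advantage of QPSK follows from carrying two bits per symbol. The mapping below is a sketch: the 0/π BPSK phases and the Gray-coded QPSK constellation are common conventions (the text describes the equivalent ±π/2 BPSK pair), and the function names are mine.

```python
import math

def bpsk_symbols(bits):
    """BPSK: one bit per symbol, carrier phase 0 or pi (180 deg apart)."""
    return [math.pi if b else 0.0 for b in bits]

def qpsk_symbols(bits):
    """QPSK: two bits per symbol, Gray-coded onto four phase quadrants.
    Half as many symbols per second for the same bit rate, hence half
    the occupied bandwidth."""
    gray = {(0, 0): math.pi / 4, (0, 1): 3 * math.pi / 4,
            (1, 1): 5 * math.pi / 4, (1, 0): 7 * math.pi / 4}
    return [gray[p] for p in zip(bits[0::2], bits[1::2])]

data = [0, 1, 1, 0, 1, 1, 0, 0]
assert len(bpsk_symbols(data)) == 8   # one symbol per bit
assert len(qpsk_symbols(data)) == 4   # half the symbol rate
```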
1.4.6 References
1. Terman, F. E.: Radio Engineering, 3rd ed., McGraw-Hill, New York, N.Y., pg. 468, 1947.
1.4.7 Bibliography
Benson, K. B., and Jerry C. Whitaker: Television and Audio Handbook for Technicians and
Engineers, McGraw-Hill, New York, N.Y., 1989.
Crutchfield, E. B. (ed.): NAB Engineering Handbook, 8th ed., National Association of Broad-
casters, Washington, D.C., 1991.
Fink, D., and D. Christiansen (eds.): Electronics Engineers’ Handbook, 3rd ed., McGraw-Hill,
New York, N.Y., 1989.
Jordan, Edward C. (ed.): Reference Data for Engineers: Radio, Electronics, Computer and Com-
munications, 7th ed., Howard W. Sams, Indianapolis, IN, 1985.
Kubichek, Robert: “Amplitude Modulation,” in The Electronics Handbook, Jerry C. Whitaker
(ed.), CRC Press, Boca Raton, Fla., pp. 1175–1187, 1996.
Seymour, Ken: “Frequency Modulation,” in The Electronics Handbook, Jerry C. Whitaker (ed.),
CRC Press, Boca Raton, Fla., pp. 1188–1200, 1996.
Whitaker, Jerry C.: Radio Frequency Transmission Systems: Design and Operation, McGraw-
Hill, New York, N.Y., 1991.
Ziemer, Rodger E.: “Pulse Modulation,” in The Electronics Handbook, Jerry C. Whitaker (ed.),
CRC Press, Boca Raton, Fla., pp. 1201–1212, 1996.
Chapter
2.1
Radio Broadcast Systems
2.1.1 Introduction
Standard broadcasting refers to the transmission of voice and music received by the general pub-
lic in the 535–1705 kHz frequency band. Amplitude modulation is used to provide service rang-
ing from that needed for small communities to higher powered broadcast stations needed for
larger regional areas. The primary service area is defined as the area in which the ground or sur-
face-wave signal is not subject to objectionable interference or objectionable fading. The second-
ary service area refers to an area serviced by skywaves and not subject to objectionable
interference. Intermittent service area refers to an area receiving service from either a surface
wave or a skywave but beyond the primary service area and subject to some interference and fad-
ing.
mitter powers during daytime, but nighttime operation—if permitted at all—must be at low
power (less than 0.25 kW) with no protection from interference.
When a station is already limited by interference from other stations to a contour of higher
value than that normally protected for its class, this higher-value contour is the established pro-
tection standard for the station.
Although Class A stations cover large areas at night (approximately a 1000 km radius), the
nighttime coverage of Class B, C, and D stations is limited by interference from other stations,
electrical devices, and atmospheric conditions to a relatively small area. Class C stations, for
example, have an interference-free nighttime coverage radius of approximately 8 to 16 km. As a
result there may be large differences in the area that the station covers daytime vs. nighttime.
With thousands of AM stations licensed for operation by the FCC, interference—both day and
night—is a factor that significantly limits the service stations can provide. In the absence of
interference, a daytime signal strength of 2 mV/m is required for reception in populated towns
and cities, whereas a signal of 0.5 mV/m is generally acceptable in rural areas without large
amounts of man-made interference present. Secondary nighttime service is provided in areas
receiving a 0.5-mV/m signal 50 percent or more of the time without objectionable interference.
However, it should be noted that these limits apply to new stations and modifications to existing
stations. Nearly every station on the air was allocated prior to the implementation of these rules
with interference criteria that were less restrictive.
2.1.2d Propagation
One of the major factors in the determination of field strength is the propagation characteristic,
described as the change in electric field intensity with an increase in distance from the broadcast
station antenna. This variation depends on a number of factors including frequency, distance,
surface dielectric constant, surface loss tangent, polarization, local topography, and time of day.
Generally speaking, surface-wave propagation occurs over shorter ranges both during day and
night periods. Skywave propagation in the AM broadcast band permits longer ranges and occurs
during night periods, and thus some stations must either reduce power or cease to operate at night
to avoid causing interference. Skywave propagation is much less predictable than surface-wave
propagation, because it necessarily involves some fading and less predictable field intensities,
and is most appropriately described in terms of statistics or the percentage of time a particular
field strength level is found.
Figure 2.1.3 Basic configuration for high-level amplitude modulation in the standard broadcast
band.
2.1.2e Transmitters
Standard AM broadcast transmitters range in power output from 5 W up to 50 kW units. Modern
transmitters utilize low-voltage, high-current metal-oxide-semiconductor field-effect transistor
(MOSFET) devices to generate the RF power. However, in a high-power transmitter, it may take
hundreds of transistors to generate the rated power. These transistors are combined into modules,
the outputs of which are combined to produce the output signal. This design approach has the
added benefit that in the event of a module failure, the transmitter usually continues to operate,
but at slightly reduced power.
High-Level AM Modulation
High-level anode modulation is the oldest and simplest way of generating a high power AM sig-
nal. In this system, the modulating signal is amplified and combined with the dc supply source to
the anode of the final RF amplifier stage. The RF amplifier is normally operated class C. The
final stage of the modulator usually consists of a pair of tubes operating class B. A basic modula-
tor of this type is shown in Figure 2.1.3.
The RF signal is normally generated in a low-level oscillator. It is then amplified by one or
more solid-state or vacuum-tube stages to provide final RF drive at the appropriate frequency to
the grid of the final class C amplifier. The audio input is applied to an intermediate power ampli-
fier (usually solid state) and used to drive two class B (or class AB) push-pull output stages. The
final amplifiers provide the necessary modulating power to drive the final RF stage. For 100 per-
cent modulation, this modulating power is 50 percent of the actual carrier power.
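The 50 percent figure follows from the sideband power of a sinusoidally modulated AM carrier, and can be checked in a few lines (a sketch; the function name is mine):

```python
def modulator_power(carrier_w, m):
    """Audio power a high-level AM modulator must deliver: the total
    sideband power of a sinusoidally modulated carrier is
    (m**2 / 2) * carrier power, where m is the modulation index."""
    return (m ** 2 / 2.0) * carrier_w

# At 100 percent modulation (m = 1), a 50 kW carrier requires 25 kW
# of audio power: 50 percent of the carrier power, as stated above.
audio_w = modulator_power(50_000, 1.0)
```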
The modulation transformer shown in Figure 2.1.3 does not usually carry the dc supply cur-
rent for the final RF amplifier. The modulation reactor and capacitor shown provide a means to
combine the audio signal voltage from the modulator with the dc supply to the final RF ampli-
fier. This arrangement eliminates the necessity of having dc current flow through the secondary
of the modulation transformer, which would result in magnetic losses and saturation effects. In
some transmitter designs, the modulation reactor is eliminated from the system, thanks to
improvements in transformer technology.
The RF amplifier normally operates class C with grid current drawn during positive peaks of
the cycle. Typical stage efficiency is 75 to 83 percent.
This type of system was popular in AM broadcasting for many years, primarily because of its
simplicity. The primary drawback is low overall system efficiency. The class B modulator tubes
cannot operate with greater than 50 percent efficiency. Still, with inexpensive electricity, this was
not considered to be a significant problem. As energy costs increased, however, more efficient
methods of generating high-power AM signals were developed. Increased efficiency normally
came at the expense of added technical complexity.
Pulse-Width Modulation
Pulse-width (also known as pulse-duration) modulation is one of the most popular systems
developed for modern vacuum-tube AM transmitters. Figure 2.1.4 shows a scheme for pulse-
width modulation identified as PDM (as patented by the Harris Corporation). The PDM system
works by utilizing a square wave switching system, illustrated in Figure 2.1.5.
The PDM process begins with a signal generator (Figure 2.1.6). A 75-kHz sine wave is pro-
duced by an oscillator and used to drive a square wave generator, resulting in a simple 75-kHz
square wave. The square wave is then integrated, resulting in a triangular waveform, which is
mixed with the input audio in a summing circuit. The result is a triangular waveform that effec-
tively rides on the incoming audio (as shown in Figure 2.1.5). This composite signal is then
applied to a threshold amplifier, which functions as a switch that is turned on whenever the value
of the input signal exceeds a certain limit. The result is a string of pulses in which the width of
each pulse is proportional to the period of time the triangular waveform exceeds the threshold. The
pulse output is applied to an amplifier to obtain the necessary power to drive subsequent stages.
A filter eliminates whatever transients exist after the switching process is complete.
The PDM scheme is—in effect—a digital modulation system with the audio information
being sampled at a 75-kHz rate. The width of the pulses contains all the audio information. The
pulse-width-modulated signal is applied to a switch or modulator tube. The tube is simply turned
on, to a fully saturated state, or off in accordance with the instantaneous value of the pulse. When
the pulse goes positive, the modulator tube is turned on and the voltage across the tube drops to a
minimum. When the pulse returns to its minimum value, the modulator tube turns off.
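The triangle-plus-threshold chain described above can be sketched numerically. The resolution and threshold values below are illustrative, not taken from the Harris design.

```python
def pdm_duty_cycles(audio, steps_per_period=100, threshold=0.0):
    """Naive PDM sketch: a triangular wave rides on each audio sample;
    the comparator output is on while triangle + audio exceeds the
    threshold, so pulse width tracks the audio amplitude."""
    duties = []
    for sample in audio:
        width = 0
        for k in range(steps_per_period):
            phase = k / steps_per_period
            tri = 4 * abs(phase - 0.5) - 1   # triangle sweeping -1..+1
            if tri + sample > threshold:
                width += 1
        duties.append(width / steps_per_period)  # duty cycle, 0..1
    return duties

# Larger audio samples hold the comparator on longer, widening the pulse.
duty = pdm_duty_cycles([-0.5, 0.0, 0.5])
```

The dc offset mentioned in the text corresponds to shifting `threshold`, which sets the no-modulation (carrier) duty cycle.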
This PDM signal becomes the power supply to the final RF amplifier tube. When the modula-
tor is switched on, the final amplifier will have current flow and RF will be generated. When the
switch or modulator tube goes off, the final amplifier current will cease. This system, in effect,
causes the final amplifier to operate in a highly efficient class D switching mode. A dc offset
voltage to the summing amplifier of Figure 2.1.6 is used to set the carrier (no modulation) level
of the transmitter.
A high degree of third-harmonic energy will exist at the output of the final amplifier because
of the switching-mode operation. This energy is eliminated by a third-harmonic trap. The result
is a stable amplifier system that normally operates in excess of 90 percent efficiency. The power
consumed by the modulator and its driver is usually a fraction of that consumed by a full class B amplifier stage.
The damping diode shown in Figure 2.1.4 is used to prevent potentially damaging transient
overvoltages during the switching process. When the switching tube turns off the supply current
during a period when the final amplifier is conducting, the high current through the inductors
contained in the PDM filters could cause a large transient voltage to be generated. The energy in
the PDM filter is returned to the power supply by the damping diode. If no alternative route were
established, the energy would return by arcing through the modulator tube itself.
The pulse-width-modulation system makes it possible to completely eliminate audio fre-
quency transformers in the transmitter. The result is wide frequency response and low distortion.
It should be noted that variations on this amplifier and modulation scheme have been used by
other manufacturers for both standard broadcast and short-wave service.
Solid-State Transmitters
Solid-state transmitters make use of various schemes employing pulse modulation techniques.
One system is shown in simplified form in Figure 2.1.7a. This method of generating a modulated
RF signal employs a solid-state switch to swing back and forth between two voltage levels at the
carrier frequency. The result is a square-wave signal that is filtered to eliminate all components
except the fundamental frequency itself. A push-pull version of the circuit is shown in Figure
2.1.7b.
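Filtering the switch output down to the carrier works because a symmetrical square wave contains only odd harmonics. A quick check of the standard Fourier amplitudes (a sketch; the function name is mine):

```python
import math

def square_wave_harmonic(n, v=1.0):
    """Fourier amplitude of the nth harmonic of a symmetrical +/-v
    square wave: 4v / (n * pi) for odd n, zero for even n."""
    return 4 * v / (n * math.pi) if n % 2 else 0.0

fundamental = square_wave_harmonic(1)   # about 1.27 * v
third = square_wave_harmonic(3)         # largest component the filter must reject
assert square_wave_harmonic(2) == 0.0   # no even harmonics to remove
```

The dominant unwanted component is the third harmonic at one-third of the fundamental amplitude, which is why the PDM design discussed earlier uses a third-harmonic trap.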
Figure 2.1.8 illustrates a class D switching system utilizing bipolar transistors that permits the
generation of a modulated carrier. Modern solid-state transmitters use field-effect-transistor
(FET) power devices as RF amplifiers. Basically, the dc supply to the RF amplifier stages is
switched on and off by an electronic switch in series with a filter. Operating in the class D mode
results in a composite signal similar to that generated by the vacuum-tube class D amplifier in
the PDM system (at much higher power). The individual solid-state amplifiers are typically com-
bined through a toroidal filter/combiner. The result is a group of low-power amplifiers operating
in parallel and combined to generate the required energy.
Transmitters well above 50 kW have been constructed using this design philosophy. They
have proved to be reliable and efficient. The parallel design provides users with automatic redun-
dancy and has ushered in the era of “graceful degradation” failures. In almost all solid-state
transmitters, the failure of a single power device will reduce the overall power output or modula-
tion capability of the system but will not take the transmitter off the air. This feature allows repair
of defective modules or components at times convenient to the service engineer. The negative
effects on system performance are usually negligible.
The audio performance of current technology solid-state transmitters is usually far superior to
vacuum-tube designs of comparable power levels. Frequency response is typically within ±1 dB
from 50 Hz to 10 kHz. Distortion is usually less than 1 percent at 95 percent modulation.
(a)
(b)
Figure 2.1.7 Solid-state switching RF amplifier: (a) basic system, (b) push-pull configuration.
2.1.3 FM Broadcasting
Frequency modulation utilizes the audio modulating signal to vary the frequency of the RF car-
rier. The greater the amplitude of the modulating frequency, the greater the frequency deviation
from the center carrier frequency. The rate of the frequency variation is a direct function of the
frequency of the audio modulating signal. In FM modulation, multiple pairs of sidebands are
produced. The actual number of sidebands that make up the modulated wave is determined by
the modulation index (MI) of the system. The modulation index is a function of the frequency
deviation of the system and the applied modulating signal.
As the MI increases there are more sidebands produced. As the modulating frequency
increases for a given maximum deviation, there will be a smaller number of sidebands spaced at
wider intervals. Unlike amplitude modulation, which has a percentage of modulation directly
proportional to the carrier power, the percentage of modulation in FM is generally referenced to
the maximum allowable occupied bandwidth set by regulation. For example, U.S. FM broadcast
stations are required to restrict frequency deviation to ±75 kHz from the main carrier. This is
referred to as 100 percent modulation for FM broadcast stations.
To determine the frequency spectrum of a transmitted FM waveform, it is necessary to com-
pute a Fourier series or Fourier expansion to show the actual signal components involved. This
work is difficult for a waveform of this type, as the integrals that must be performed in the Fou-
rier expansion or Fourier series are difficult to solve. The actual result is that the integral pro-
duces a particular class of solution that is identified as the Bessel function.
Figure 2.1.8 Bipolar transistor RF stage operating in a class D switching mode for AM service.

Supporting mathematics will show that an FM signal using the modulation indices that occur
in a broadcast system will have a multitude of sidebands. To achieve zero distortion, all sidebands
would have to be transmitted, received, and demodulated. However, in reality, the ±75-kHz
maximum deviation for FM broadcasting allows the transmission and reception of audio with
negligible distortion.
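The Bessel-function sideband structure can be checked numerically. The power series and the 1 percent significance floor below are illustrative choices; with 75-kHz deviation and a 15-kHz modulating tone, the modulation index is 5.

```python
import math

def bessel_j(n, x, terms=30):
    """Bessel function of the first kind, J_n(x), via its power series."""
    return sum((-1) ** k * (x / 2) ** (2 * k + n)
               / (math.factorial(k) * math.factorial(k + n))
               for k in range(terms))

def significant_pairs(mi, floor=0.01):
    """Count sideband pairs whose amplitude J_n(MI) exceeds `floor`
    relative to the unmodulated carrier amplitude."""
    n = 1
    while abs(bessel_j(n, mi)) > floor:
        n += 1
    return n - 1

# Modulation index = deviation / modulating frequency.
mi = 75e3 / 15e3           # 75-kHz deviation, 15-kHz tone -> MI = 5
pairs = significant_pairs(mi)   # each pair sits a further 15 kHz out
```

Even at this modest index, many sideband pairs carry meaningful energy, which is why the occupied spectrum must be truncated in practice.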
The power emitted by an FM transmitter is virtually constant, regardless of the modulating
signal. Additional noise or distortion of the amplitude of the waveform has virtually no effect on
the quality of the received audio. Thus, FM transmitters may utilize Class C type amplifiers,
which cause amplitude distortion but are inherently more efficient than Class A or Class B
amplifiers. In addition, atmospheric and man-made noise has little effect, since the receiver clips
all amplitude variations off the signal prior to demodulation.
index. There is significant signal-to-noise improvement at the receiver, which is equipped with a
matching de-emphasis circuit.
Modulation Circuits
Early FM transmitters used reactance modulators that operated at a low frequency. The output of
the modulator was then multiplied to reach the desired output frequency. This approach was
acceptable for monaural FM transmission but not for modern stereo systems or other applica-
tions that utilize subcarriers on the FM broadcast signal. Modern FM systems utilize direct mod-
ulation. That is, the frequency modulation occurs in a modulated oscillator that operates on a
center frequency equal to the desired transmitter output frequency. In stereo broadcast systems, a
composite FM signal is applied to the FM modulator. The basic parameters of this composite
signal are shown in Figure 2.1.9.
Various techniques have been developed to generate the direct-FM signal. One of the most
popular uses a variable-capacity diode as the reactive element in the oscillator. The modulating
signal is applied to the diode, which causes the capacitance of the device to vary as a function of
the magnitude of the modulating signal. Variations in the capacitance cause the frequency of the
oscillator to vary. Again, the magnitude of the frequency shift is proportional to the amplitude of
the modulating signal, and the rate of frequency shift is equal to the frequency of the modulating
signal.
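Direct FM can be sketched as a phase accumulator whose instantaneous frequency follows the modulating amplitude. The carrier and sample rate below are scaled down for illustration (the broadcast band is 88 to 108 MHz), and the parameter names are mine.

```python
import math

def direct_fm(audio, f_carrier, k_dev, f_sample):
    """Direct FM sketch: instantaneous frequency is the carrier plus a
    deviation proportional to the modulating signal amplitude, as with
    a varactor-tuned oscillator."""
    phase, out = 0.0, []
    for a in audio:
        freq = f_carrier + k_dev * a        # Hz per unit input amplitude
        phase += 2 * math.pi * freq / f_sample
        out.append(math.cos(phase))
    return out

# A 1-kHz tone deviating a (scaled-down) 100-kHz carrier by +/-5 kHz.
fs = 1_000_000
tone = [math.sin(2 * math.pi * 1_000 * k / fs) for k in range(1000)]
rf = direct_fm(tone, f_carrier=100_000, k_dev=5_000, f_sample=fs)
```

The deviation magnitude tracks the audio amplitude and the deviation rate tracks the audio frequency, matching the description above.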
The direct-FM modulator is one element of an FM transmitter exciter, which generates the
composite FM waveform. A block diagram of a complete FM exciter is shown in Figure 2.1.10.
Audio inputs of various types (stereo left and right signals, plus subcarrier programming, if used)
are buffered, filtered, and preemphasized before being summed to feed the modulated oscillator.
It should be noted that the oscillator is not normally coupled directly to a crystal but is a free-run-
ning oscillator adjusted as near as practical to the carrier frequency of the transmitter. The final
operating frequency is carefully maintained by an automatic frequency control system employ-
ing a phase-locked loop (PLL) tied to a reference crystal oscillator or frequency synthesizer.
A solid-state class C amplifier follows the modulated oscillator and increases the operating
power of the FM signal to 20–30 W. One or more subsequent amplifiers in the transmitter raise
the signal power to several hundred watts for application to the final power amplifier (PA) stage.
Nearly all current high-power FM transmitters utilize solid-state amplifiers up to the final RF
stage, which is generally a vacuum tube for operating powers of 15 kW and above. All stages
operate in the class C mode.
2.1.3e Transmitters
FM broadcast transmitters typically range in power output from 10 W to 50 kW. Digital technol-
ogy is being used extensively in the processing and exciter stages, allowing for precision modula-
tion control, whereas solid-state devices are predominantly used for RF drivers and amplifiers up
to several kilowatts. High-power transmitters still rely on tubes in the final stages for the genera-
tion of RF power, but manufacturers are developing new devices and technologies that have
made high-power solid-state transmitters cost effective and reliable.
FM Power Amplifiers
The majority of high-power FM transmitters employ cavity designs. The 1/4-wavelength cavity
is the most common. The design is simple and straightforward. A number of variations can be
found in different transmitters, but the underlying theory of operation is the same.
The goal of any cavity amplifier is to simulate a resonant tank circuit at the operating fre-
quency and provide a means to couple the energy in the cavity to the transmission line. Because
of the operating frequencies involved (88 to 108 MHz), the elements of the “tank” take on unfa-
miliar forms.
A typical 1/4-wave cavity is shown in Figure 2.1.11. The plate of the tube connects directly to
the inner section (tube) of the plate-blocking capacitor. The blocking capacitor can be formed in
one of several ways. In at least one design, it is made by wrapping the outside surface of the inner
tube conductor with multiple layers of 8-in-wide and 0.005-in-thick polyimide (Kapton) film.
The exhaust chimney-inner conductor forms the other element of the blocking capacitor. The
cavity walls form the outer conductor of the 1/4-wave transmission line circuit.

Figure 2.1.11 Physical layout of a common type of 1/4-wave PA cavity for FM broadcast service.

The dc plate voltage is applied to the PA tube by a cable routed inside the exhaust chimney and
inner tube conductor.
In the design shown in Figure 2.1.11, the screen-contact fingerstock ring mounts on a metal
plate that is insulated from the grounded-cavity deck by a Kapton blocker. This hardware makes
up the screen-blocker assembly. The dc screen voltage feeds to the fingerstock ring from under-
neath the cavity deck through an insulated feedthrough.
Some transmitters that employ the 1/4-wave cavity design use a grounded screen configura-
tion in which the screen contact fingerstock ring is connected directly to the grounded cavity
deck. The PA cathode then operates at below ground potential (in other words, at a negative volt-
age), establishing the required screen voltage for the tube.
The cavity design shown in Figure 2.1.11 is set up to be slightly shorter than a full 1/4-wave-
length at the operating frequency. This makes the load inductive and resonates the tube's output
capacity. Thus, the physically foreshortened shorted transmission line is resonated and electri-
cally lengthened to 1/4-wavelength.
shunts the inner conductor to the outer conductor and can vary the electrical length and resonant
frequency of the cavity.
The THD test is sensitive to the noise floor of the system under test. If the system has a sig-
nal-to-noise ratio of 60 dB, the distortion analyzer's best possible reading will be greater than 0.1
percent (60 dB = 0.001 = 0.1 percent).
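The parenthetical conversion generalizes as follows (a sketch; the function name is mine):

```python
def residual_percent(snr_db):
    """Best-case distortion-analyzer reading implied by a given noise
    floor: a 60-dB S/N corresponds to 10**(-60/20) = 0.001 = 0.1%."""
    return 10 ** (-snr_db / 20) * 100.0

floor_60 = residual_percent(60)   # about 0.1 percent
floor_70 = residual_percent(70)   # about 0.03 percent
```

A 10-dB S/N improvement lowers the achievable THD reading floor by a factor of roughly three.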
Intermodulation distortion (IMD) is the creation by a nonlinear device of spurious signals not
harmonically related to the audio waveform. These distortion components are sum-and-differ-
ence (beat notes) mixing products that research has shown are more objectionable to listeners
than even-harmonic distortion products. The IMD measurement is relatively impervious to the
noise floor of the system under test. IMD performance targets for AM and FM transmitters are
the same as the THD targets mentioned previously.
Signal-to-noise ratio (S/N) is the amplitude difference, expressed in decibels, between a refer-
ence level audio signal and the system's residual noise and hum. In many cases, system noise is
the most difficult parameter to bring under control. An FM performance target of –70 dB per ste-
reo channel reflects reasonable exciter-transmitter performance. Most AM transmitters are capa-
ble of –60 dB S/N or better.
Separation is a specialized definition for signal crosstalk between the left and right channels
of a stereo system. The separation test is performed by feeding a test tone into one channel while
measuring leakage into the other channel (whose input is terminated with a 600-Ω wirewound
resistor, or other appropriate value). Typical performance targets for an FM station are –40 dB or
better from 50 Hz to 15 kHz.
Synchronous AM in FM Systems
The output spectrum of an FM transmitter contains many sideband frequency components, theo-
retically an infinite number. The practical considerations of transmitter design and frequency
allocation make it necessary to restrict the bandwidth of all FM broadcast signals. Bandwidth
restriction brings with it the undesirable side-effects of phase shifts through the transmission
chain, the generation of synchronous AM components, and distortion in the demodulated output
of a receiver.
In most medium- and high-power FM transmitters, the primary out-of-band filtering is per-
formed in the output cavity of the final stage. Previous stages in the transmitter (the exciter and
IPA) are designed to be broadband, or at least more broadband than the PA.
As the bandwidth of an FM transmission system is reduced, synchronous amplitude modula-
tion increases for a given carrier deviation (modulation). Synchronous AM is generated as tuned
circuits with finite bandwidth are swept by the frequency of modulation. The amount of synchro-
nous AM generated is dependent on tuning, which determines (to a large extent) the bandwidth
of the system. Figure 2.1.12 illustrates how synchronous AM is generated through modulation of
the FM carrier.
As a general rule, because incidental phase modulation (IPM) is a direct result of the modulation
process, it can be generated in any stage that is influenced by modulation. The most common cause of IPM in plate-
modulated and pulse-modulated transmitters is improper neutralization of the final RF amplifier.
Adjusting the transmitter for minimum IPM is an accurate way of achieving proper neutraliza-
tion. The reverse is not always true, however, because some neutralization methods will not nec-
essarily result in the lowest amount of IPM.
Improper tuning of the IPA stage is the second most common cause of IPM. As modulation
changes the driver loading to the PA grid, the driver output also may change. The circuits that
feed the driver stage usually are isolated enough from the PA that they do not produce IPM. An
exception can be found when the power supply for the IPA is influenced by modulation. Such a
problem could be caused by a loss of capacitance in the high-voltage power supply itself.
2.1.5 Bibliography
Bordonaro, Dominic: “Reducing IPM in AM Transmitters,” Broadcast Engineering, Intertec
Publishing, Overland Park, Kan., May 1988.
Crutchfield, E. B. (ed.): NAB Engineering Handbook, 8th ed., National Association of Broadcast-
ers, Washington, D.C., 1992.
Chapter
2.2
Radio STL Systems
2.2.1 Introduction
One of the major concerns in the design and operation of a radio broadcasting facility is the
means by which the program audio from the studio is conveyed to the transmitter site. As illus-
trated in Figure 2.2.1, this link represents an important element in the overall reliability of the
transmission chain. Furthermore, as digital technology continues to move into daily radio station
operation, the studio to transmitter link (STL) must become as transparent as possible. An infe-
rior link will impose an unacceptable limit on overall audio quality. The requirements for reli-
ability and transparent program relay have led to the development of new STL systems based on
digital technology.
Changes in FCC broadcast ownership rules and the popularity of local marketing agreements
(LMAs) have reshaped radio broadcasting. The need for high-quality audio programming is one
outgrowth of new competition and new alliances. STL systems are an important component of
these audio improvement efforts. Furthermore, increasing numbers of stations are using intercity
relay (ICR) facilities to share programming. Unfortunately, in many areas of the United States,
the demand for 950 MHz STL channels has far outstripped the available supply. The Part 74
bandwidth allocations for STL systems, therefore, necessitate highly efficient designs to meet
the needs of radio broadcasters today.
Market demand for STL systems is not limited to North America. The commercialization of
radio broadcasting in Europe and elsewhere has led to increased use of radio links to relay pro-
gramming from one studio to the next, and from the studio to the transmitter site. In some areas,
repeated use of the same frequency places stringent demands on system design.
Figure 2.2.2 Discrete channel landline digital audio STL system. (After [1].)
sion, is rapidly becoming the system of choice for many radio stations [1]. The decision to use
some form of digital landline technology may be based on several factors, including:
• Necessity. The station has no line-of-sight to the transmitter, or suitable frequencies are
unavailable.
• Sound quality. A digital landline STL can sound better than even the best analog systems.
• Cost. A single leased data line can cost less than multiple leased analog lines.
Whatever the reasons, a broadcaster who decides to use digital transmission must then choose
what type of system to implement. Digital STL systems can be designed either to transmit dis-
crete left and right channel stereo, as shown in Figure 2.2.2, or to transmit a composite stereo sig-
nal, illustrated in Figure 2.2.3.
One of the advantages to a broadcaster in using a digital landline STL is the ability to multi-
plex several signals together. SCAs, SAP channels, transmitter remote control signals, data, and
voice can all be combined with the broadcast audio signal for transmission on a single circuit.
The duplex nature of data links permits use of the same system for both STL and transmitter-to-
studio (TSL) functions as well.
Figure 2.2.3 Composite landline digital audio STL system. (After [1].)
Figure 2.2.4 Comparison of STL systems: (a) composite transmitter-receiver system, (b) dual
monaural transmitter-receiver system. (After [2].)

Figure 2.2.5 Baseband spectrum of STL systems: (a) monaural, (b) composite. The program
channel extends to a 15-kHz high-frequency limit, with a subcarrier centered at 39 kHz. (After [2].)
It is fair to point out that digitization of the input audio signal always brings with it a measure
of degradation (quantization errors), but with the high sampling rates typically used for profes-
sional audio applications, such degradation is minor and almost always inaudible.
The process of quantization is illustrated in Figure 2.2.7. The sampling rate and quality of the
sampling circuit determine, in large part, the overall quality of the digital system. A properly
operating transmission channel can be assumed to provide error-free throughput. This being the
case, the digital signal can be regenerated at the receiving point as an exact duplicate of the input
waveform. Figure 2.2.8 shows a general representation of a digital communications channel. In
the case of a radio link, such as an STL, the transmission medium is analog in nature (FM). The
circuits used to excite the FM modulator, however, are essentially identical to those used for an
all-digital link, such as fiber optic cable.
The functions of the encoder and decoder, shown in Figure 2.2.8a, usually are formed into a
single device, or set of devices (a chip set), known as a codec (coding and decoding device). At
the transmission end, the codec provides the necessary filtering to band-limit the analog signal to
avoid aliasing, thereby preventing analog-to-digital (A/D) conversion errors. At the receiver, the
codec performs the reciprocal digital-to-analog (D/A) conversion and interpolates (smooths) the
resulting analog waveform.
Figure 2.2.6 The benefits of digital vs. analog STL systems in terms of S/N and received RF level.
(After [2].)
Digital operation permits broadcasters to extend the fade margin of an existing analog link by 20 dB or more. Furthermore,
audio signal-to-noise (S/N) improvements of at least 10 dB can be expected for a given RF signal
strength. Alternatively, for the same S/N, the maximum possible path distance of a given com-
posite STL transmitter and receiver can be extended. These features could, in some cases, make
the difference between a one-hop system and a two-hop system.
The spectrum efficiency of a digital STL is of great importance today in highly congested
markets. The system may, for example, be capable of relaying four program channels and two
voice-grade channels. The use of digital coding also makes the signals more tolerant of co-chan-
nel interference than a comparable analog STL.
Coding System
Several approaches may be used to digitize or encode the input audio signals. The complexity of
the method used is a function of the availability of processing power and encoder memory, and of
the resulting delay incurred during the encoding/decoding process. For a real-time function such
as an STL, significant encoding/decoding delay is unacceptable. Pulse code modulation (PCM)
is a common scheme that meets the requirements for speed and accuracy. In the PCM process,
the sampled analog values of the input waveform are coded into unique and discrete values. This
quantization may be uniform, as illustrated in Figure 2.2.7, or nonuniform. With nonuniform
quantization, compression at the coder and subsequent expansion at the decoder is performed.
By using larger quantization steps for high energy signals and smaller steps for low energy sig-
nals, efficient use is made of the data bits, while maintaining a specified signal-to-quantization
noise level. This process is known as companding (compression and expansion).
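The companding idea described above can be sketched in code. The handbook does not tie STL companding to a particular law, so as an illustration the following uses the μ-law characteristic familiar from telephony (an assumption here, not a statement about any specific STL product):

```python
import math

MU = 255  # mu-law constant used in North American telephony

def compress(x: float) -> float:
    """Map a sample in [-1, 1] to a companded value in [-1, 1].
    Low-level signals occupy a larger share of the coding range, so
    uniform quantization of the output gives finer effective steps
    near zero -- the behavior described in the text."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def expand(y: float) -> float:
    """Inverse of compress(): recover the original sample value."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

# A quiet signal is boosted well above its linear value before coding...
print(compress(0.01))
# ...and compress/expand round-trips back to the original sample.
print(expand(compress(0.3)))
```

In a real codec the companded value would then be quantized to a fixed number of bits; the round-trip above is exact only because no quantizer is included.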
(Figure content: a 3-bit quantizer maps the analog input to discrete output codes 0000 through
0111; the lower panel plots the resulting error signal amplitude against time.)
Figure 2.2.7 Quantization of an input signal: (a) quantization steps, (b) quantization error signal.
(After [2].)
PCM encoding, in a simple real-time system, provides a high-speed string of discrete digital
values that represent the input audio waveform. Each value is independent of all previous sam-
ples. No encoder memory is required. This approach, while simple and fast, is not particularly
efficient insofar as the transmission channel is concerned. There are many redundancies in any
given input signal. By eliminating the redundancies, and taking advantage of the masking effects
of human hearing, greater transmission efficiency can be realized. Viewed from another perspec-
tive, for a given radio transmission bandwidth, more information can be transferred by using a
compression system that removes nonessential data bits.
2-28 Radio Transmission Systems
Figure 2.2.8 Digital transmission system: (a) coding/decoding functions, (b) overall communica-
tions link. (After [2].)
Figure 2.2.9 Analog composite STL link for FM radio applications. (After [2].)
Table 2.2.1 Specifications for a Typical Composite STL System (After [2].)
Parameter Specification
Power output 6 to 8 W
Frequency stability ± 0.0002%, 0 to 50° C
Spurious emissions 60 dB below maximum carrier power
Baseband frequency response ± 0.1 dB or less, 30 Hz to 75 kHz
Stereo separation Greater than 55 dB at 1 kHz
Total harmonic distortion 0.02%, 75 μs deemphasis
S/N 85 dB below ± 75 kHz deviation, 75 μs deemphasis
Nonlinear crosstalk 50 dB or less
Subchannel-to-main crosstalk 60 dB or less
A block diagram of the companion composite STL receiver is shown in Figure 2.2.12. Like
the transmitter, the receiver is user-programmable in 12.5 kHz steps through the use of internal
DIP switches. The front-end uses cascaded high-Q cavity filters and surface acoustic wave
(SAW) IF filters to provide high selectivity and phase linearity. Triple conversion IF is used to
feed a pulse-counting discriminator for linear baseband demodulation.
(Figure 2.2.12: the RF input feeds a 3 MHz IF amplifier, FM demodulator, and composite
baseband processor, which provides the composite signal outputs; metering circuitry monitors
the receiver.)
amplifier can be used. For convenience in manipulating figures, the transmitter power output
should be converted to decibels above a 1 mW reference (dBm). (See Table 2.2.2.)
When choosing an STL receiver, specifications should be carefully analyzed, particularly
receiver sensitivity. This figure, necessary in STL path calculations, is usually specified as a sig-
nal level required for a specified S/N. This value should be converted to dBm. (See Table 2.2.3.)
For example, a receiver may require 100 μV for 60 dB S/N. This is equivalent to –66.9 dBm. In
receiver design, sensitivity, S/N, selectivity, and the method of demodulation are determining
factors of receiver quality. The use of SAW filters provides sharper selectivity and more linear
phase response. These attributes yield better stereo separation and lower distortion. A pulse-
counting discriminator also provides low distortion and accurate demodulation of the received
signal. Phase-linear lowpass filtering is critical for best stereo separation.
A low-noise RF preamplifier may be added to the system when the received signal level is
low. For best performance, the preamplifier should be mounted directly at the receive antenna.
Care must be taken, however, to prevent overloading the receiver front-end by unwanted, and
often strong, interfering signals.
In areas of frequency congestion, narrowband receivers are important in preventing interfer-
ence from other transmitters. STL manufacturers have responded to the needs of broadcasters in
congested RF environments by providing suitable narrowband STL systems. Such receivers typ-
ically incorporate bandpass cavity filters, helical resonators, or mechanical coaxial filters. SAW
filters and ceramic filters in IF stages also may be included to improve selectivity.
Transmission Lines
Figure 2.2.14 shows the primary hardware elements required for an aural STL. Transmission line
sections, connections, and strain-relief provisions are important for long-term reliability. The
main criteria in the selection of transmission line include the following:
• Amount of signal attenuation
• Physical parameters (dielectric material and size)
• Purchase and installation cost
In general, the larger the diameter of the transmission line, the lower the loss, and the greater
the cost of the line. Loss is also affected by the type of dielectric material used. The most com-
mon types of dielectric are air and foam. Air dielectric cable typically requires pressurization and
is, therefore, seldom used for 950 MHz installations. For the purpose of gain/loss calculations,
cable loss for a particular installation can be determined from the transmission line
manufacturer's specification table, given in decibels per unit of measure. Attenuation for
common types of coaxial line is given in Table 2.2.4.
Other electrical and mechanical specifications (mainly cable size and impedance) must be
compatible with the transmitter, receiver, and antennas to be used. Connector loss must also be
considered. It is important to minimize each potential source of signal loss when assembling the
system. There are no “minor details” in the installation of an STL.
Strain relief must be provided on each end of the cable run (at the transmitter and at the
receiver). So-called pigtail or jumper cables are commonly used for this purpose. They permit
movement without straining cable and chassis connections. Because the pigtails commonly are
terminated with N-type male connectors on both ends, the main transmission line must be con-
figured with female N-type connectors on both ends if a pair of pigtails are used.
Antenna System
At the frequencies commonly used for STL operation, antennas can readily focus the signal into
a narrow beam. This focusing, required for point-to-point communications, provides high gain at
both the transmitter and the receiver. Several types of antennas are available, including parabolic
and parabolic section. Parabolic antennas are available in solid or grid styles, while parabolic
section antennas usually are grid-type in design. Antenna models differ in a number of respects.
(Figure 2.2.14: the STL transmitter and receiver are each connected to their antennas through a
pigtail jumper, the main transmission line run, and a second pigtail at the antenna.)
For path analysis calculations of system gains and losses, dBi is used.
Figure 2.2.15 plots the response of a common type of STL antenna. Note the high directivity
provided by the device and the difference in gain between operation in the 450 MHz band and in
the 950 MHz band.
Figure 2.2.15 Radiating plots for a parabolic section antenna at 450 MHz and 950 MHz: (a) hori-
zontal response, (b) vertical response. (After [2].)
Hardware Considerations
Depending on the complexity of the STL system, additional hardware may be required. Where
more than one transmitter or receiver is used with a single antenna, a combiner, splitter, or
duplexer will be needed. These items contribute additional loss and must be accounted for in
path calculations.
Certain installations may require the installation of an external power amplifier in cases
where the output of a standard STL transmitter is insufficient for a particular path. In path calcu-
lations, the power output of the external power amplifier (converted to dBm) is substituted for
the output of the STL transmitter. In general practice, most engineers choose to use an external
power amplifier only as a last resort. Higher gain antennas usually are a more attractive alterna-
tive.
Spectrum Considerations
In view of the serious spectrum congestion problems that exist today in many areas of the U.S.
and elsewhere, an STL system should be designed to be as spectrum-efficient as possible and—
equally important—to be as immune to undesired transmissions as possible. Even if the system
will be operated in an area that currently does not have a spectrum congestion problem, there is
no guarantee that such a situation will not surface in the near future. In any event, a well-engi-
neered system is also a spectrum-efficient system.
The first rule of spectrum efficiency is to use only the effective radiated power (ERP)
necessary to do the job. There is no justification for putting 15 W into the air when 5 W will provide
the required receiver quieting and fade margin.
A simple and sometimes effective spectrum coordination tool is cross-polarization. Two sta-
tions on nearby frequencies may achieve as much as 25 dB RF isolation through the use of differ-
ent polarizations of transmit antennas, matched by like polarization at their respective receive
antennas. Cross-polarization results in varying degrees of success, depending upon the frequency
of operation and the surrounding terrain.
Line of Sight
Because microwave frequencies are used for STL systems, the signal path is theoretically limited
to the line-of-sight between the studio and transmitter locations. In reality, the radio horizon is
frequently situated beyond the visual horizon. This is the result of the gradual decrease in the
refractive index of the atmosphere with increasing altitude above the earth. This effect bends
radio waves downward. The degree of bending is characterized by the K factor, which is the ratio
of the effective earth radius to the true earth radius. A typical value for K is 4/3, or 1.33, valid
over 90 percent of the time in most parts of the world. For long paths, consult a map showing the
minimum K factor occurring in the specific area so that proper antenna heights can be planned.
Figure 2.2.16 plots an example path on 4/3 earth graph paper.
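The effect of the K factor on usable path length can be illustrated with the standard radio-horizon approximation, d = √(2·K·R·h), where R is the true earth radius and h the antenna height. This is a textbook rule of thumb, not a formula given in this chapter; the sketch below uses statute miles and feet:

```python
import math

EARTH_RADIUS_MI = 3959.0   # true earth radius, statute miles
FT_PER_MI = 5280.0

def radio_horizon_mi(antenna_height_ft: float, k: float = 4/3) -> float:
    """Distance to the radio horizon for an antenna at the given height,
    using the effective-earth-radius model: d = sqrt(2 * (K*R) * h)."""
    r_eff = k * EARTH_RADIUS_MI
    return math.sqrt(2.0 * r_eff * antenna_height_ft / FT_PER_MI)

# With K = 4/3 the radio horizon is about 15% farther than the visual horizon:
print(radio_horizon_mi(200, k=1.0))   # visual (true-earth) horizon
print(radio_horizon_mi(200))          # radio horizon, K = 4/3
```

For a 200 ft antenna the K = 4/3 horizon works out to roughly 20 miles, which is why the text treats paths much beyond that length as candidates for careful profiling or repeaters.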
Figure 2.2.16 An aural STL path profile drawn on true-earth radius graph paper. (After [2].)
consideration. Figure 2.2.17a illustrates a poor path. Generally speaking, a good path is literally
line-of-sight, with no obstructions to block the signal, and no other terrestrial or atmospheric
conditions that would compromise the path (Figure 2.2.17b).
Terrain Considerations
One of the major tasks required to engineer an STL system is the path analysis between the STL
transmitter at the studio and the STL receiver location. To determine what constitutes a clear
path, the concept of Fresnel zones for optical theory is applied to radio waves. Most of the elec-
tromagnetic energy at a receiving point is concentrated in an elliptical volume that is a function
of the distance between the transmit and receive points and the wavelength. The energy outside
this volume either cancels or reinforces the energy within the volume, depending on whether the
distance that the energy travels to the receive point is longer by an even or odd number of
half-wavelengths. Even multiples result in radio wave cancellation; odd multiples result in radio
wave reinforcement. (See Figure 2.2.18.) The radius of the first Fresnel zone, which defines the
boundary of the elliptical volume, is given by
F1 = 72.1 √( d1 d2 / ( f D ) )    (2.2.2)
Figure 2.2.17 Path considerations in planning an STL: (a) unacceptable path because of obstruc-
tions, (b) usable path. (After [2].)
Where:
F 1 = first Fresnel zone radius in feet
d1 = distance from the transmitting antenna to the obstruction in miles
D = total path length in miles
d2 = D – d1 in miles
f = frequency in GHz
H = distance from the top of the obstruction to the radio path
For reliable operation, obstructions should not project into the area thus defined. Empirical
studies, however, have shown that performance is substantially the same as long as H is greater
than 0.6 F1.
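Equation (2.2.2) and the 0.6 F1 clearance rule lend themselves to a short calculation. The path values in the example below are illustrative, not taken from a real installation:

```python
import math

def first_fresnel_radius_ft(d1_mi: float, d2_mi: float, f_ghz: float) -> float:
    """First Fresnel zone radius in feet (Eq. 2.2.2):
    F1 = 72.1 * sqrt(d1 * d2 / (f * D)), distances in miles, f in GHz."""
    d_total = d1_mi + d2_mi
    return 72.1 * math.sqrt(d1_mi * d2_mi / (f_ghz * d_total))

def clearance_ok(h_ft: float, d1_mi: float, d2_mi: float, f_ghz: float) -> bool:
    """Empirical rule from the text: performance is essentially unaffected
    as long as the clearance H over the obstruction exceeds 0.6 * F1."""
    return h_ft > 0.6 * first_fresnel_radius_ft(d1_mi, d2_mi, f_ghz)

# Mid-path obstruction on a 10 mi, 0.95 GHz path:
f1 = first_fresnel_radius_ft(5.0, 5.0, 0.95)
print(f1)                              # about 117 ft at mid-path
print(clearance_ok(30.0, 5.0, 5.0, 0.95))   # 30 ft clearance is not enough
print(clearance_ok(80.0, 5.0, 5.0, 0.95))   # 80 ft exceeds 0.6 * F1
```

Note that the Fresnel radius is largest at mid-path and shrinks toward either end, so an obstruction near one antenna needs far less clearance than one near the path midpoint.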
The first step in evaluating the path is to make a subjective check of the planned route. First,
determine that a reasonable possibility of success exists. Next, draw a best-case route for the
path. If obstructions are found, consider accomplishing the path by using a repeater system, or by
shifting the location of the STL transmit antenna. Although a detailed path analysis may not be
required in cases where line-of-sight is clearly present and the distance is less than about 10
miles, it is still good engineering practice to review some of the basic elements of the evaluation.
Obtain an accurate map showing the topography between the STL transmitter and receiver
locations. After determining the transmitter and receiver sites on the map, connect them with a
straight line showing the proposed path. After the path has been drawn, a protractor can be used
Figure 2.2.18 Fresnel zone clearance for an STL path. (After [2].)
to determine the direction (azimuth) of the path in degrees from true North. This data will later
assist in antenna setup, and is necessary for filling out the appropriate FCC application.
Using the scale of miles on the map, measure the distance between the transmitter and
receiver sites. Determine the altitude of the proposed transmit antenna location from the contour
lines on the map, and add to that the height of the building or other structure on which the
antenna will be mounted. Make a similar determination for the receive antenna location. Adjust
the heights to ensure that all obstructions are cleared. Depending on the path length and the
height of the antennas, it may be necessary to take the curvature of the earth into account, and
use earth radius graph paper to plot a cross-section of the path.
Study the map to see what terrain features are between the path points. Prior to making any
other evaluations, conduct a visual field survey. Check for any structures or features not listed on
the map. Anticipate any possible new construction or tree growth that may cause problems in the
future.
The terrain from the transmitting antenna to the receiving antenna must be examined not only
for obstructions, but for reflection possibilities as well. A large body of water will often cause
problems for an STL system. If the water is an even number of Fresnel zones from the direct
path, signal attenuation will likely occur at the receiver. Temperature changes and tidal condi-
tions will also have an effect. Likewise, thick vegetation or forested areas can be reflective to RF
signals when wet, creating a similar (but not so troublesome) problem. Generally, the solution to
reflection conditions is to change either the transmitting or receiving antenna height. In extreme
cases, a diversity reception system may also be used.
Figure 2.2.19 Path analysis plotting gains vs. losses for an STL. (After [2].)
Path Reliability
The long-term reliability of an STL path is determined in large part by the initial engineering
work done before the system is installed. Figure 2.2.19 charts gains and losses through a typical
system. The most important factors are free-space loss and allowance for fade margin.
Thereafter, effects such as diffraction, reflection, refraction, absorption, scattering, and terrain loss
must be considered.
A gain and loss balance sheet should be computed to determine the fade margin of the
planned STL system. An adequate fade margin is vital to reliable performance because a link
that is operating at the edge of minimum acceptable receiver quieting will encounter problems
down the road. Normal component aging in the receiver or transmitter can cause a loss
in received signal level and, thus, degrade system performance. Atmospheric conditions, such as
severe weather in the area or ice on the transmitting or receiving antennas, can also cause sharp
fading and even a complete loss of signal if an adequate fade margin above minimum receiver
quieting is not provided. The STL fade margin can be computed using the following equations:

Gs = Gt + Gta + Gra    (2.2.3)
Where:
Gs = total system gain in decibels
Gt = transmitter power output in dBm
Gta = transmit antenna gain in dBi
Gra = receive antenna gain in dBi
The values for Gta and Gra are gathered from the antenna manufacturer's literature.
The value for Gt is given by
Gt = 30 + 10 log Po (2.2.4)
Figure 2.2.20 Loss vs. frequency for 1/2-inch foam dielectric transmission line. (After [2].)
Where:
Gt = transmitter power output in dBm
Po = transmitter power output in watts
Ls = Lp + Ll + Lc + Lm (2.2.5)
Where:
Ls = total system losses in decibels
Lp = path loss in dB
Ll = transmission line loss in dB
Lc = connector losses in dB
Lm = miscellaneous losses in dB
The values for Ll and Lc can be determined from manufacturer's literature. Figure 2.2.20
shows typical loss values for 1/2-in foam-filled transmission line. A reasonable value for connec-
tor loss with components normally used in 1/2-in coax installations is 0.5 dB. The value for Lp
can be determined by using the formula

Lp = 36.6 + 20 log F + 20 log D    (2.2.6)
Where:
Lp = free space attenuation loss between two isotropic radiators (in dB)
F = frequency of operation in megahertz
D = distance between the antennas in statute miles
Free space loss can also be found using a table of approximate values, as given in Table 2.2.6.
With the foregoing information, the fade margin can be calculated per
Mf = Gs – Ls – Rm    (2.2.7)
Where:
Mf = fade margin (in dB)
Gs = total system gain (dB)
Ls = total system losses (dB)
Rm = minimum signal strength required for the target S/N in dBm (a negative number)
Gs and Ls are determined by the equations given previously. Rm (receiver sensitivity) is deter-
mined from the receiver manufacturer's specifications. If the manufacturer gives a receiver sensi-
tivity figure in microvolts, the following formula can be used to convert to dBm:
Rm = 20 log ( Vr × 10^–6 / 0.2236 )    (2.2.8)

Where:
Rm = minimum required signal strength (in dBm)
Vr = receiver sensitivity (in μV)
0.2236 = the voltage (in V) corresponding to 1 mW in a 50 Ω system
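Converting a microvolt sensitivity figure to dBm assumes a known input impedance; for the 50 Ω inputs typical of STL receivers, 1 mW corresponds to 0.2236 V. A short sketch of the conversion:

```python
import math

def sensitivity_dbm(microvolts: float, impedance_ohms: float = 50.0) -> float:
    """Convert a receiver sensitivity in microvolts to dBm.
    The reference voltage is that which dissipates 1 mW in the input
    impedance: V_ref = sqrt(0.001 * Z), i.e. 0.2236 V for 50 ohms."""
    v_ref = math.sqrt(0.001 * impedance_ohms)
    return 20.0 * math.log10(microvolts * 1e-6 / v_ref)

# The example from the text: 100 uV for 60 dB S/N is about -67 dBm.
print(sensitivity_dbm(100.0))
```

The –66.9 dBm figure quoted earlier in the chapter for a 100 μV receiver follows directly from this 50 Ω reference.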
In order to predict accurately the performance of the STL radio link, the value of Rm must be
determined carefully. For maximum system performance and reliability, the fade margin
determination should be made based upon the signal level required to provide the minimum
acceptable receiver S/N performance. Longer paths require greater margins for fade.
The primary cause of signal fade in an STL system below 1.0 GHz is changes in the refractive
indexes of the atmosphere along the signal path. These fluctuations affect the path of reflected
or refracted signals differently from the direct, line-of-sight signal. When the interfering
signals combine with the direct signal, the level at the receiver increases or decreases
depending upon the degree of reinforcement or cancellation. Because atmospheric conditions are
seldom stable, some fade margin is always necessary.

Table 2.2.7 Recommended Fade Margin as a Function of Path Length

Path Length    Fade Margin
5 mi           5 dB
10 mi          7 dB
15 mi          15 dB
20 mi          22 dB
25 mi          27 dB
30 mi          30 dB
Another cause of signal fade is earth bulge (or inverse beam) fading, where the overall refrac-
tive index in the vicinity of the signal path decreases, thus hindering the full signal from reaching
the receive antenna. Again, allowance for signal fade will minimize degradation. Precipitation is
another potential cause of signal fading, although it is not generally considered significant at fre-
quencies below 1.0 GHz.
Fade margin also can be determined approximately from Table 2.2.7. The relationship
between system reliability and fade margin is detailed in Table 2.2.8. A sample STL path, show-
ing gain and loss elements, is given in Figure 2.2.21.
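The gain and loss balance sheet of Equations (2.2.3) through (2.2.7) can be collected into a single routine. The free-space term below uses the standard 36.6 + 20 log F + 20 log D form for F in MHz and D in statute miles, and all of the figures in the example are illustrative rather than drawn from a real path:

```python
import math

def fade_margin_db(p_out_w: float, g_tx_dbi: float, g_rx_dbi: float,
                   path_mi: float, f_mhz: float,
                   line_loss_db: float, connector_loss_db: float,
                   misc_loss_db: float, r_min_dbm: float) -> float:
    """Gain/loss balance sheet for an STL path:
    Gs = Gt + Gta + Gra;  Ls = Lp + Ll + Lc + Lm;  Mf = Gs - Ls - Rm,
    where Rm (receiver sensitivity) is a negative dBm figure."""
    gt_dbm = 30.0 + 10.0 * math.log10(p_out_w)      # Eq. (2.2.4)
    gs = gt_dbm + g_tx_dbi + g_rx_dbi               # total system gain, dB
    lp = 36.6 + 20.0 * math.log10(f_mhz) + 20.0 * math.log10(path_mi)
    ls = lp + line_loss_db + connector_loss_db + misc_loss_db
    return gs - ls - r_min_dbm                      # Eq. (2.2.7)

# Illustrative 10 mi, 950 MHz path: 8 W transmitter, 20.15 dBi antennas,
# 3 dB line loss, 0.5 dB connector loss, receiver usable down to -90 dBm.
mf = fade_margin_db(8.0, 20.15, 20.15, 10.0, 950.0, 3.0, 0.5, 0.0, -90.0)
print(mf)
```

Comparing the computed margin against the recommended values in Table 2.2.7 shows immediately whether the planned path has adequate headroom or needs higher-gain antennas.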
Table 2.2.8 Relationship Between Fade Margin, Reliability, and Outage Time for Rayleigh Distributed Paths

Fade Margin (dB)  Path Reliability/Availability (%)  Outage Hours/Year  Outage Minutes/Month  Outage Seconds/Day
10                90.4837                            834.20             4170.98               8222.05
20                99.0050                            87.22              436.12                859.69
21                99.2088                            69.35              346.77                683.58
22                99.3710                            55.14              275.68                543.43
23                99.5001                            43.82              219.12                431.94
24                99.6027                            34.83              174.14                343.28
25                99.6843                            27.68              138.38                272.79
26                99.7491                            21.99              109.96                216.75
27                99.8007                            17.47              87.37                 172.22
28                99.8416                            13.88              69.41                 136.83
29                99.8742                            11.03              55.14                 108.70
30                99.9000                            8.76               43.81                 86.36
31                99.9206                            6.96               34.80                 68.60
32                99.9369                            5.53               27.65                 54.50
33                99.9499                            4.39               21.96                 43.29
34                99.9602                            3.49               17.45                 34.39
35                99.9684                            2.77               13.86                 27.32
36                99.9749                            2.20               11.01                 21.70
37                99.9800                            1.75               8.74                  17.24
38                99.9842                            1.39               6.95                  13.69
39                99.9874                            1.10               5.52                  10.88
40                99.9900                            0.88               4.38                  8.64
41                99.9921                            0.70               3.48                  6.86
42                99.9937                            0.55               2.77                  5.45
43                99.9950                            0.44               2.20                  4.33
44                99.9960                            0.35               1.74                  3.44
45                99.9968                            0.28               1.39                  2.73
50                99.9990                            0.09               0.44                  0.86
55                99.9997                            0.03               0.14                  0.27
60                99.9999                            0.01               0.04                  0.09
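The reliability figures for Rayleigh-distributed paths appear consistent with the deep-fade outage model, in which the probability of fading below the margin M is 1 − exp(−10^(−M/10)). A sketch that reproduces the tabulated availability and outage-hour columns:

```python
import math

HOURS_PER_YEAR = 8766.0   # 365.25 days

def availability_pct(fade_margin_db: float) -> float:
    """Path availability for a Rayleigh-fading channel:
    outage probability = 1 - exp(-10**(-M/10))."""
    outage = 1.0 - math.exp(-10.0 ** (-fade_margin_db / 10.0))
    return 100.0 * (1.0 - outage)

def outage_hours_per_year(fade_margin_db: float) -> float:
    """Annual outage implied by the availability figure."""
    return HOURS_PER_YEAR * (1.0 - availability_pct(fade_margin_db) / 100.0)

for m in (10, 20, 30, 40):
    print(m, availability_pct(m), outage_hours_per_year(m))
```

The familiar rule of thumb falls out of the exponent: every additional 10 dB of fade margin cuts the outage time by roughly a factor of ten.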
transmitter and a second receiver. An automatic changeover unit is included for switching pur-
poses. The changeover system monitors each transmitter and receiver pair to sense critical
parameters. If a failure occurs, the changeover controller switches from the faulty transmitter to
the standby transmitter, or from the faulty receiver to the standby receiver. The changeover units
work independently.
This system can be simplified to include only hot-standby provisions at the transmitter site, or
at the receiver site. If this approach is taken, a case can be made for either backup system. One
argument states that the transmitter is more likely to fail because it contains high-power stages,
which typically are more prone to failures than low-power circuits. On the other hand, the trans-
mitter is almost always more accessible for repair—being at the studio site—than the receiver.
(Figure 2.2.21: a sample STL path with a clear path between the studio and transmitter sites,
20.15 dBi gain antennas at each end, and a 250 ft transmission line run.)
Figure 2.2.22 Applications requiring the use of a multi-hop STL system: (a) excessively long path,
(b) path obstructions, (c) interference possibility. (After [2].)
Figure 2.2.23 Repeater link using conventional demodulation and remodulation of the composite
signal. (After [2].)
should be checked before adding the encoder to the system. In this way, any problems detected
can be readily pinpointed.
An STL using one or more subcarriers for program relay, closed-circuit communications, or
remote control functions can be bench tested in a similar manner. Having all hardware on one
bench and easily accessible makes adjustment of levels and other parameters much easier than
when the units are separated by many miles.
Nearly all STL systems checked in this manner pass specification testing with no problem.
Still, there may be instances where a unit was damaged during shipment. It is far easier to solve
such problems before the hardware is installed.
Antennas should be given a close visual inspection. Many antenna models used for STL work
are shipped in a partially disassembled state. Final assembly of the antennas should be completed
before the installation process is begun. Pay particular attention to mounting brackets; make sure
Figure 2.2.24 Repeater link using IF transfer of the composite signal. (After [2].)
the hardware will mate properly with the tower or structure on which the antennas will be
mounted. Check carefully for missing parts.
The transmission line and connectors require no pre-installation quality control; however, it is
suggested that as many connectors as possible be installed in the shop, rather than at the site.
Type N connectors require patience and skill to install. If possible, have the cable supplier attach
the proper connectors to both ends of the cable reel. In this way, only two connectors will need to
be installed in the field. Always place the manufacturer-installed connector ends at the antennas.
This provides the best assurance of trouble-free operation in an exposed environment. Consider
ordering a couple of extra connectors in case a part is lost or damaged during construction. The
engineer may also want to use one connector as a test case to become more familiar with the
required installation technique.
Figure 2.2.25 Equipment configuration for a hot-standby system: (a) studio site, (b) broadcast
transmitter site. (After [2].)
The test equipment required for pre-installation checkout is usually readily available in the
radio station shop. Basic items include the following:
• A high-quality 50 Ω dummy load capable of dissipating approximately 25 W.
• An in-line RF power output meter capable of reading forward and reverse power at 1.0 GHz.
• Audio frequency signal generator.
• Audio frequency distortion analyzer.
• Frequency counter accurate to 1.0 GHz.
If possible, a spectrum analyzer also should be available. Figure 2.2.26 shows a typical bench
test setup for an STL system.
Figure 2.2.26 Equipment setup for bench testing an STL system prior to installation. (After [2].)
2.2.3e Installation
The locations commonly used for broadcast transmitter sites are seldom ideal from an environ-
mental standpoint. The site may be difficult to reach during certain parts of the year, very hot
during the summer, and very cold during the winter. For these reasons, rugged equipment should
be chosen and properly installed. Temperature extremes can cause problems for frequency-deter-
mining elements, as well as accessories such as cavity filters, preselectors, and preamplifiers.
Environmental control at the broadcast transmitter site, therefore, is highly recommended.
The STL transmitter and receiver should be mounted in an equipment rack in a protected
location adjacent to the stereo generator at the studio site, and adjacent to the exciter at the
broadcast transmitter site. Keep all cable runs as short and direct as possible. Excessive cable
lengths between the stereo generator and the STL transmitter, or between the STL receiver and
Figure 2.2.27 Grounding practice for an STL transmission line on a tower. (After [2].)
the exciter, can degrade stereo separation and frequency response. Follow good grounding prac-
tices at all times.
The antenna presents probably the greatest installation challenge. Because of its directional
nature, the antenna must be properly oriented. Compass bearings are desirable along with what-
ever visual sightings are practical. The received signal strength at the broadcast transmitter site
can be used as an indication of proper orientation of the STL receive antenna. Ensure also that
the antenna is set to the proper angle relative to the horizon. Because the STL antennas may be
located high on a tower or building, a professional tower crew may be required to mount the
devices. Make certain that all cables on the tower are securely fastened to prevent stress and
wear. Seal the external connections with sealant made for that purpose and completely wrap the
connection joints with tape.
While it is desirable to mount the receive antenna high on a tower to ensure a good path, it is
also good engineering practice to keep the antenna as far as possible from radiating elements on
the structure, such as the station's main broadcast antenna. Other potential sources of RF prob-
lems include 2-way radio, cellular radio, and TSL systems. If the STL receive antenna is located
in a strong RF field, the front-end may be overloaded or desensitized. Placing the STL antenna
close to other antennas on the structure also can detune the antennas, degrading the performance
of both systems.
One of the most common problems encountered during installation is damage to the transmis-
sion line. Lines must not be twisted or bent beyond their minimum bending radius. To retain its
characteristic impedance, the line must be treated with care. A line damaged by carelessness can
cause poor performance of the system.
The transmission line must be properly grounded. As illustrated in Figure 2.2.27, grounding
typically is performed at the point where the line leaves the tower and at the point where it enters
the building. This procedure, obviously, applies only to grounded tower systems. Grounding kits
are available for this purpose.
In situations where the STL antenna is to be mounted on an AM antenna tower, which typi-
cally is insulated from ground, an isocoupler will need to be installed where the line begins its
run up the tower. The isocoupler will pass the STL signal efficiently, while providing a high
impedance to ground for the tower at the AM frequency. When this is done, the base impedance
of the tower can be expected to change slightly.
2.2.3g Troubleshooting
The most common problem associated with STL commissioning is high VSWR. If the indicated
VSWR is outside normal limits, the transmission line, a connector, or an antenna usually is at
fault. A dummy load may be used to determine where the fault lies. Begin at the first pigtail and
substitute the dummy load for the transmission line. At each point the load is substituted for the
line and antenna, check the displayed VSWR on the transmitter. It is necessary, of course, to
power-down the transmitter while the dummy load is moved from one point to the next.
Gradually changing readings can provide clues that will help prevent a failure. Many users will
wish to monitor forward power continuously. Reverse power also should be checked regularly to
monitor the condition of the line and antennas, as discussed previously. Compare the test data
taken after system commissioning with the operating parameters observed during routine checks.
They should closely agree.
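The forward/reverse power checks described above map directly onto VSWR and return loss. The following sketch (not from the handbook; function names are illustrative) shows the standard conversion from a transmitter's forward and reflected power readings:

```python
import math

def vswr_from_power(p_forward_w: float, p_reflected_w: float) -> float:
    """Compute VSWR from forward and reflected power readings (watts)."""
    if p_forward_w <= 0 or p_reflected_w < 0:
        raise ValueError("forward power must be positive, reflected non-negative")
    rho = math.sqrt(p_reflected_w / p_forward_w)  # reflection coefficient magnitude
    if rho >= 1.0:
        return float("inf")  # total reflection (e.g., open or shorted line)
    return (1 + rho) / (1 - rho)

def return_loss_db(p_forward_w: float, p_reflected_w: float) -> float:
    """Return loss in dB; higher numbers indicate a better match."""
    return 10 * math.log10(p_forward_w / p_reflected_w)

# Example: 10 W forward, 0.1 W reflected
print(round(vswr_from_power(10.0, 0.1), 2))  # 1.22
print(round(return_loss_db(10.0, 0.1), 1))   # 20.0
```

A slow rise in reflected power (falling return loss) over successive routine checks is exactly the kind of gradually changing reading that warns of line or antenna deterioration before outright failure.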
Operation of the receiver is best checked with its built-in multimeter. The RF level indication
should be carefully noted during initial installation, and subsequent observations should be com-
pared with this reference. Remember there will be some change in signal strength readings
because of weather conditions and temperature variations. Consider the fade margin conditions
used in the path analysis calculations when making judgments about the observed readings. Be
aware of unusual variations resulting from temperature inversions, if they are common in the
area. If trouble is experienced with the receiver, the possibility of interference from another STL
or another service should not be overlooked. A spectrum analyzer is invaluable for such trouble-
shooting.
For digital STL systems, error rate monitors are provided to assess overall performance. As
long as the status indicators on the front panel indicate proper operation, the system usually can
be considered to be transparent to the program material.
The most definitive overall check of the system will be the audio proof of performance. First
attention should be given to the noise measurement. If this is not satisfactory, it will be impossi-
ble to achieve meaningful distortion measurements, because noise will be indicated as distortion
by the analyzer. For dual monaural systems, the engineer will need to carefully check left and
right channel balance.
Any broadcast system is only as strong as its weakest component. Before placing blame for
poor performance on the STL, start at the beginning of the broadcast chain and follow the signal
step-by-step to the point where it deteriorates.
2.2.4 References
1. Rollins, William W., and Robert L. Band: “T1 Digital STL: Discrete vs. Composite Trans-
mission,” NAB 1996 Broadcast Engineering Conference Proceedings, National Association
of Broadcasters, Washington, D.C., pp. 356–359, 1996.
2. Whitaker, Jerry C., (ed.): A Primer: Digital Aural Studio to Transmitter Links, TFT, Santa
Clara, CA, 1994.
2.2.5 Bibliography
Hauptstuek, Jim: “Interconnecting the Digital Chain,” NAB 1996 Broadcast Engineering Confer-
ence Proceedings, National Association of Broadcasters, Washington, D.C., pp. 360–358,
1996.
McClanahan, M. E.: “Aural Broadcast Auxiliary Links,” in NAB Engineering Handbook, 8th ed.,
E. B. Crutchfield (ed.), National Association of Broadcasters, Washington, D.C., pp. 671–
678, 1992.
Parker, Darryl: “TFT DMM92 Meets STL Requirements,” Radio World, Falls Church, VA, Octo-
ber 21, 1992.
Salek, Stanley: “Analysis of FM Booster System Configurations,” Proceedings of the 1992 NAB
Broadcast Engineering Conference, National Association of Broadcasters, Washington,
DC, April 1992.
Whitaker, Jerry C., and Skip Pizzi: “Radio Electronic News Gathering and Field Production,” in
NAB Engineering Handbook, 8th ed., E. B. Crutchfield (ed.), National Association of
Broadcasters, Washington, D.C., pp. 1051–1072, 1992.
Chapter
2.3
Digital Radio Systems
Ken Pohlmann
2.3.1 Introduction
Following many years of work, digital audio radio (DAR), also known as digital audio broadcast-
ing (DAB), is a reality. Instead of using analog modulation methods such as AM or FM, DAR
transmits audio signals digitally. DAR is designed to eventually replace analog AM and FM
broadcasting, providing a signal that is robust against reception problems such as multipath inter-
ference, with fidelity comparable to that of the compact disc. In addition to audio data, DAR
supports auxiliary data transmission; for example, text, graphics, or still video images.
The evolution of a practical DAR standard was a complicated process because broadcasting is
regulated by governments, and swayed by economic concerns. Two principal DAR technologies
evolved: Eureka 147 DAB, and in-band on channel (IBOC) broadcasting. The route to DAR was
complex, with each country choosing one method or another. Unfortunately, there will not be a
single worldwide standard.
1. Portions of this chapter were adapted from: Pohlmann, Ken: Principles of Digital Audio,
McGraw-Hill, New York, N.Y., 2000. Used with permission.
ence. However, the S-band is suitable for satellite delivery. Portions of the VHF and UHF bands
have been allocated to DTV applications.
A worldwide allocation would assist manufacturers, and would ultimately lower costs for all
concerned, but such a consensus was impossible to obtain. The World Administrative Radio
Conference (WARC) allocated 40 MHz at 1500 MHz (L-band) for digital audio broadcasting via
satellite, but ultimately deferred selection for regional solutions. Similarly, the CCIR (Interna-
tional Radio Consultative Committee) proposed a worldwide 60-MHz band at 1500 MHz for
both terrestrial and satellite DAR; however, terrestrial broadcasting at 1500 MHz is prone to
absorption and obstruction, and satellite broadcasting requires repeaters. In the U.S., in 1995, the
FCC allocated the S-band (2310–2360 MHz) spectrum to establish satellite-delivered digital
audio broadcasting services. Canada and Mexico allocated space at 1500 MHz. In Europe, both
1500 MHz and 2600 MHz regions have been developed. Ideally, whether using adjacent or sepa-
rated bands, DAR would permit compatibility between terrestrial and satellite channels. In prac-
tice, there is not a mutually ideal band space, and any allocation involves compromises.
Alternatively, new DAR systems could cohabit spectral space with existing applications. Spe-
cifically, DAR could use a shared-spectrum technique to locate the digital signal in the FM and
AM bands. By using an “in-band” approach, power multiplexing can provide compatibility with
analog transmissions, with the digital broadcast signal coexisting with the analog carriers.
Because of its greater efficiency, the DAR signal transmits at lower power relative to the analog
station. An analog receiver rejects the weaker digital signal as noise, and DAR receivers pick up
both DAR and analog broadcasts. No matter how DAR is implemented, the eventual disposition
of AM and FM broadcasting is a concern. A transition period is required, lasting until conven-
tional AM and FM gradually disappear.
Figure 2.3.1 The effects of multipath on signal reception in a mobile environment. (From [1]. Used
with permission.)
Two types of multiplexing are used. The most common method is time division multiplexing
(TDM) in which multiple channels share a single carrier by time-interleaving their data streams
on a bit or word basis. Frequency-division multiplexing (FDM) divides a band into subbands,
and individual channels modulate individual carriers within the available bandwidth. A single
channel can be frequency multiplexed; this lowers the bit rate on each carrier and lowers bit
errors as well. Because different carriers are used, multipath interference is reduced, since only
one carrier frequency is affected at a time.
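As a toy illustration of the TDM approach (a simplification for clarity, not an actual DAR frame format; the function names are invented), two channel streams can be interleaved onto one serial stream on a word basis and then recovered:

```python
def tdm_interleave(channels: list, word_size: int = 1) -> bytes:
    """Time-division multiplex equal-length channel streams by
    interleaving fixed-size words onto a single serial stream."""
    n = len(channels[0])
    assert all(len(c) == n for c in channels), "channels must be equal length"
    out = bytearray()
    for i in range(0, n, word_size):
        for ch in channels:
            out += ch[i:i + word_size]
    return bytes(out)

def tdm_deinterleave(stream: bytes, n_channels: int, word_size: int = 1) -> list:
    """Recover the original channel streams from the multiplexed stream."""
    chans = [bytearray() for _ in range(n_channels)]
    frame = n_channels * word_size  # one word from each channel per frame
    for i in range(0, len(stream), frame):
        for c in range(n_channels):
            chans[c] += stream[i + c * word_size : i + (c + 1) * word_size]
    return [bytes(c) for c in chans]

left, right = b"LLLL", b"RRRR"
mux = tdm_interleave([left, right])
print(mux)                       # b'LRLRLRLR'
print(tdm_deinterleave(mux, 2))  # [b'LLLL', b'RRRR']
```

In FDM, by contrast, each channel would modulate its own carrier within the band rather than taking turns on a single carrier.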
Phase-shift keying (PSK) modulation methods are commonly used because they yield the
lowest BER for a given signal strength. In binary phase-shift keying (BPSK) two phase shifts
represent two binary states. For example, a binary 0 places the carrier in phase, and a binary 1
places it 180° out of phase, as shown in Figure 2.3.2a. This phase change codes the binary signal,
as shown in Figure 2.3.2b. The symbol rate equals the data rate.
In quadrature phase-shift keying (QPSK) four phase shifts are used; thus, two bits per symbol
are represented (Figure 2.3.2c). The symbol rate is half the transmission rate. QPSK is the most
widely used method, especially for data rates above 100 Mbits/s. Other modulation methods
include amplitude shift keying (ASK), in which different carrier powers represent binary values,
and frequency shift keying (FSK), in which the carrier frequency is varied.
Figure 2.3.2 Phase-shift keying modulation: (a) binary phase-shift keying phasor diagram, (b)
BPSK phase change coding, (c) quadrature phase-shift keying phasor diagram. (From [1]. Used
with permission.)
The bandwidth (BW) for an M-PSK signal is given by

D / log₂M < BW < 2D / log₂M    (2.3.1)
where D is the data rate in bits/s. For example, a QPSK signal transmitting a 400-kbits/s signal
would require a bandwidth of between 200 and 400 kHz. A 16-PSK signal could transmit the same
data rate in half the bandwidth, but would require 8 dB more power for a satisfactory BER.
Given the inherently high bandwidth of digital audio signals, data reduction is mandatory to con-
serve spectral space and provide low BER for a reasonable transmission power level.
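A quick numerical check of Equation (2.3.1) for the examples in the text (a minimal sketch; the function name is illustrative):

```python
import math

def mpsk_bandwidth_bounds(data_rate_bps: float, m: int):
    """Return (min, max) occupied bandwidth in Hz for M-PSK,
    per D/log2(M) < BW < 2D/log2(M)."""
    bits_per_symbol = math.log2(m)
    return (data_rate_bps / bits_per_symbol,
            2 * data_rate_bps / bits_per_symbol)

d = 400e3  # 400-kbits/s data rate
print(mpsk_bandwidth_bounds(d, 4))   # QPSK: (200000.0, 400000.0) -> 200-400 kHz
print(mpsk_bandwidth_bounds(d, 16))  # 16-PSK: (100000.0, 200000.0) -> half the bandwidth
```

The halved bandwidth of 16-PSK comes directly from doubling the bits per symbol (4 vs. 2); the 8-dB power penalty for equal BER is a property of the denser constellation, not of this formula.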
One of the great strengths of a digital radio system is its transmission power efficiency. This
can be seen by relating coverage area to the carrier-to-noise ratio (C/N) at the receiver. A digital
system might need a C/N of only 6 dB, but an FM receiver needs a C/N of 30 dB—a difference of
24 dB—to provide the same coverage area. The field strength for a DAR system can be esti-
mated from
E = Vi + NF + C/N – 20 log (96.5 / F_MHz)    (2.3.2)

where E = minimum acceptable field strength at the receiver in dBu, Vi = thermal noise of the
receiver into 300 Ω in dBu, and

Vi = 20 log [(kTRB)^(1/2) / 10^(–6)]    (2.3.3)

Where:
k = Boltzmann's constant, 1.38 × 10^(–23) J/K
T = temperature in kelvins (290 at room temperature)
R = input impedance of the receiver
B = bandwidth of the digital signal
NF = noise figure of the receiver
C/N = carrier-to-noise ratio for a given BER
F_MHz = transmission frequency in MHz
For example, if a DAR signal is broadcast at 100 MHz, with 200-kHz bandwidth, into a
receiver with 6-dB noise figure, and with a C/N of 6 dB, then E = 5.5 dBu. In contrast, an FM
receiver might require a field strength of 60 dBu for good reception, and about 30 dBu for noisy
reception.
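The example above can be reproduced directly from Equations (2.3.2) and (2.3.3). This sketch assumes the field-strength relation E = Vi + NF + C/N – 20 log(96.5/F_MHz); with the stated parameters it yields roughly 6 dBu, in the neighborhood of the value quoted above:

```python
import math

K_BOLTZMANN = 1.38e-23  # J/K

def thermal_noise_dbu(t_kelvin: float, r_ohms: float, bw_hz: float) -> float:
    """Thermal noise voltage at the receiver input in dBu
    (dB above 1 microvolt), per Eq. (2.3.3)."""
    v_rms = math.sqrt(K_BOLTZMANN * t_kelvin * r_ohms * bw_hz)  # volts
    return 20 * math.log10(v_rms / 1e-6)

def min_field_strength_dbu(vi_dbu, nf_db, cn_db, f_mhz):
    """Minimum acceptable field strength in dBu, assumed form of Eq. (2.3.2)."""
    return vi_dbu + nf_db + cn_db - 20 * math.log10(96.5 / f_mhz)

vi = thermal_noise_dbu(290, 300, 200e3)    # about -6.2 dBu
e = min_field_strength_dbu(vi, 6, 6, 100)  # about 6 dBu
print(round(vi, 1), round(e, 1))
```

The roughly 55-dB gap between this figure and the 60 dBu an FM receiver needs for good reception is the transmission power efficiency the paragraph describes.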
Figure 2.3.3 Eureka 147 DAB system: (a) transmitter, (b) receiver. (From [1]. Used with permis-
sion.)
tion of error correction profiles is used, optimized for the error characteristics of MPEG layer II
encoded data. The coders aim to provide graceful degradation as opposed to brick-wall failure.
Thus, stronger protection is given to data for which an error would yield muting or other obvious
effects, and weaker protection where errors would be less audible. Specifically, three levels of
protection are used within an MPEG frame; the frame header and bit allocation data are given the
strongest protection, followed by the scale factors, and subband audio samples respectively.
A block diagram of a Eureka 147 receiver is shown in Figure 2.3.3b. The DAB receiver uses
an analog tuner to select the desired DAB ensemble; it also performs down-conversion and filter-
ing. The signal is quadrature-demodulated and converted into digital form. FFT and differential
demodulation is performed, followed by time and frequency de-interleaving and error correction.
Final audio decoding completes the signal chain. A receiver may be designed to simultaneously
recover more than one service component from an ensemble signal.
The DAB standard defines three basic transmission mode options, allowing a range of trans-
mitting frequencies up to 3 GHz:
• Mode I with a frame duration of 96 ms, 1536 carriers, and nominal frequency range of less
than 375 MHz is suited for a terrestrial VHF network because it allows the greatest transmit-
ter separations.
• Mode II with a frame duration of 24 ms, 384 carriers, and nominal frequency range of less
than 1.5 GHz is suited for UHF and local radio applications.
• Mode III with a frame duration of 24 ms (as in Mode II), 192 carriers, and nominal fre-
quency range of less than 3 GHz is suited for cable, satellite, and hybrid (terrestrial gap filler)
applications.
• Other modes can and have been defined.
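The mode trade-off can be tabulated: fewer carriers mean wider carrier spacing, which better tolerates the Doppler and oscillator offsets encountered at higher frequencies. This small table is assembled from the figures above; the spacing calculation is illustrative, using the nominal 1.54-MHz signal bandwidth:

```python
# DAB transmission modes (frame duration, carrier count, nominal max
# frequency) as listed in the text; "use" summarizes the suited application.
DAB_MODES = {
    "I":   {"frame_ms": 96, "carriers": 1536, "max_freq_mhz": 375,  "use": "terrestrial VHF"},
    "II":  {"frame_ms": 24, "carriers": 384,  "max_freq_mhz": 1500, "use": "UHF / local radio"},
    "III": {"frame_ms": 24, "carriers": 192,  "max_freq_mhz": 3000, "use": "cable / satellite / gap filler"},
}

SIGNAL_BW_HZ = 1.54e6  # nominal DAB ensemble bandwidth (from the text)

for name, m in DAB_MODES.items():
    spacing_hz = SIGNAL_BW_HZ / m["carriers"]  # approximate OFDM carrier spacing
    print(f"Mode {name}: {m['carriers']} carriers, "
          f"~{spacing_hz:.0f} Hz spacing, up to {m['max_freq_mhz']} MHz ({m['use']})")
```

Mode I's narrow (~1 kHz) spacing implies long symbols and a long guard interval, which is what permits the large transmitter separations of a single-frequency VHF network.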
In all modes, the transmitted signal uses a frame structure with a fixed sequence of symbols. The
gross capacity of the main service channel is about 2.3 Mbits/s within a 1.54-MHz bandwidth
DAB signal. The net bit rate ranges from approximately 0.6 to 1.7 Mbits/s depending on the
error correction redundancy used.
Eureka 147 uses ISO/MPEG-1 Layer II bit rate reduction in its source coding to minimize
spectrum requirements. Bit rates may range from 32 to 384 kbits/s in 14 steps; nominally, a rate
of 128 kbits/s per channel is used. Stereo or surround-sound signals can be conveyed. Nominally, a
sampling frequency of 48 kHz is used; however, a 24-kHz sampling rate is optional.
Figure 2.3.4 RF emissions mask per FCC Rules: (a) FM broadcasting, (b) AM broadcasting.
(From [1]. Used with permission.)
while new digital radios can receive digital signals. At the end of a transition period, broadcasters
would turn off their analog transmitters and continue to broadcast in an all-digital mode. Such in-
band systems offer commercial advantages over a wideband system because broadcasters can
retain their existing listener base during a transition period, much of their current equipment can
be reused, existing spectral allocation can be used, and no new spectral space is needed.
In-band systems permit broadcasters to simultaneously transmit analog and digital programs.
Digital signals are relatively immune to interference; thus, a digital receiver is able to reject the
analog signals. However, it is more difficult for an analog receiver to reject the digital signal's
interference. Coexistence can be achieved if the digital signal is broadcast at much lower power;
because of the broadcast efficiency of DAR, a low-power signal can maintain existing coverage
areas for digital receivers and allow analog receivers to reject the interfering signal.
With an in-band on-channel (IBOC) system, DAR signals are superimposed on current FM
and AM transmission frequencies. In the U.S., FM radio stations have a nominal bandwidth of
400 kHz with approximately a 200-kHz signal spectrum. FM radio stations are spaced 200 kHz
apart, and there is a guard band of 400 kHz between co-located stations to minimize interference.
In-band systems fit within the same bandwidth constraints, and furthermore, efficiently use
the FCC RF mask in which the channel's spectrum widens as power decreases. Specifically, if a
DAR signal is 25 dB below the FM signal, it could occupy a 480-kHz bandwidth, as shown in
Figure 2.3.4a. In the case of AM, if the DAR signal is 25 dB below the AM signal, the band can
be 40 kHz wide, as shown in Figure 2.3.4b. Because the digital signal's power can be lower, it can
thus efficiently use the entire frequency mask area. In practice, different IBOC systems use the
mask in different ways, seeking to optimize data throughput and robustness. Clearly, because of
the wider FM bandwidth, an FM in-band system is considerably easier to implement; rates of
256 kbits/s can be achieved. The narrow AM channels can limit DAR data rates to no more than
half that rate. In addition, existing AM radios are not as amenable to DAR signals as FM receiv-
ers. On the other hand, AM broadcast is not hampered by multipath problems.
An in-band FM system might require spatial diversity antennas on cars to combat multipath
interference. Any DAR system must rely on perceptual coding to reduce the channel data rate to
128 kbits/s or so, to allow the high fidelity signal (along with nonaudio data) to be transmitted in
the narrow bands available.
The IBOC method is highly attractive because it fits within much of the existing regulatory
statutes and commercial interests. No modifications of existing analog AM and FM receivers are
required, and DAR sets receive both analog and digital signals. Moreover, because digital signals
are simply simulcast over existing equipment, start-up costs are low. An in-band system provides
improved frequency response, and lower noise and distortion within existing coverage areas.
Receivers can be designed so that if the digital signal is lost, the radio will automatically switch
to the analog signal.
FCC Actions
In a First Report and Order issued October 10, 2002, the FCC selected in-band, on-channel
(IBOC) as the technology to bring the benefits of digital audio broadcasting to AM and FM radio
broadcasters efficiently and rapidly. Also, the Commission announced notification procedures
that would allow AM (daytime operations only) and FM stations on a voluntary basis to begin
interim digital transmissions immediately using the IBOC systems developed by iBiquity Digital
Corporation. During the interim IBOC operations, stations would broadcast the same main chan-
nel program material in both analog and digital modes.
The Commission also announced its support for a public and open process to develop formal
IBOC AM and FM standards. It also deferred consideration of IBOC licensing and service rule
changes to a future Further Notice of Proposed Rulemaking.
2.3.5 References
1. Pohlmann, Ken: “Digital Radio and Television Broadcasting,” Principles of Digital Audio,
McGraw-Hill, New York, N.Y., 2000.
Chapter
2.4
IBOC AM Digital Radio System
2.4.1 Introduction1
The principal system analysis work on the in-band on-channel (IBOC) digital radio system for
AM broadcasting was performed by the DAB Subcommittee of the National Radio Systems
Committee (NRSC). The goals and objectives of the subcommittee were [1]:
• To study IBOC DAB systems and determine if they provide broadcasters and users with: 1) a
digital signal with significantly greater quality and durability than available from the analog
systems that presently exist in the U.S.; 2) a digital service area that is at least equivalent to
the host station's analog service area while simultaneously providing suitable protection in co-
channel and adjacent channel situations; 3) a smooth transition from analog to digital ser-
vices.
• To provide broadcasters and receiver manufacturers with the information they need to make
an informed decision on the future of digital audio broadcasting in the U.S., and if appropriate
to foster its implementation.
To meet its objectives, the subcommittee resolved to work towards achieving the following goals:
• To develop a technical record and, where applicable, draw conclusions that will be useful to
the NRSC in the evaluation of IBOC systems.
• Provide a direct comparison between IBOC DAB and existing analog broadcasting systems,
and between an IBOC signal and its host analog signal, over a wide variation of terrain and
under adverse propagation conditions that could be expected to be found throughout the U.S.
• Fully assess the impact of the IBOC DAB signal upon the existing analog broadcast signals
with which they must co-exist.
1. This chapter is based on the following document: NRSC, “DAB Subcommittee Evaluation of
the iBiquity Digital Corporation IBOC System, Part 2—AM IBOC,” National Radio Systems
Committee, Washington, D.C., April 6, 2002.
• Develop a testing process and measurement criteria that would produce conclusive, believ-
able, and acceptable results, and be of a streamlined nature so as not to impede rapid develop-
ment of this new technology.
• Work closely with IBOC system proponents in the development of their laboratory and field
test plans, which would be used to provide the basis for the comparisons.
• Indirectly participate in the test process, by assisting in the selection of (one or more) inde-
pendent testing agencies, or by closely observing proponent-conducted tests to insure that the
testing is executed in a thorough, fair, and impartial manner.
subjective testing Using human subjects to judge the performance of a system. Subjective test-
ing is especially useful when testing systems that include components such as perceptual
audio coders. Traditional audio measurement techniques, such as signal-to-noise and dis-
tortion measurements, are often not sensitive to the way perceptual audio coders work and
cannot characterize their performance in a manner that can be compared with other coders,
or with traditional analog systems.
WQP (weighted quasi-peak) Refers to a fast attack, slow-decay detector circuit that approxi-
mately responds to signal peaks, and that has varying attenuation as a function of frequency
so as to produce a measurement that approximates the human hearing system.
Figure 2.4.1 iBiquity AM IBOC system signal spectral power density. (From [1]. Used with permis-
sion.)
there were two first-adjacent channel signals, one on either side of the desired signal (hence
digital sidebands on both sides of the carrier were experiencing interference).
• Proximity of digital sidebands to second-adjacent channel signals. The digital sidebands of
the AM IBOC signal are located such that they could potentially interfere with (and receive
interference from) a second-adjacent AM signal’s digital sidebands (Figure 2.4.3). The NRSC
test procedures included tests that characterized this behavior, including tests of IBOC perfor-
mance when there were two second-adjacent channel signals, one on either side of the desired
signal (hence digital sidebands on both sides of the carrier were experiencing interference).
• Proximity of digital sidebands to third-adjacent channel signals. The digital sidebands of the
AM IBOC signal are located such that they could potentially interfere with (and receive inter-
ference from) a third-adjacent AM IBOC signal’s digital sidebands (Figure 2.4.4). The NRSC
test procedures included tests that characterized this behavior.
• Blend from enhanced to core. The audio coder in the iBiquity system creates two digital
audio streams—enhanced and core. When all sideband groups (primary, secondary, and ter-
tiary) are receivable, an AM IBOC receiver will decode these streams and provide enhanced
digital audio quality to the listener. As reception conditions degrade such that the secondary
and tertiary sideband groups experience errors but the primary sideband groups are still of
acceptable quality, the receiver audio will blend from enhanced quality to core quality.
• Blend-to-analog. The iBiquity AM IBOC system simulcasts a radio station’s main channel
audio signal using the analog AM carrier and IBOC digital sidebands, and under certain cir-
cumstances, the IBOC receiver will blend back and forth between these two signals. Conse-
quently, depending upon the reception environment, the listener will either hear digital audio
(transported over the IBOC digital sidebands) or analog audio (delivered on the AM-modu-
lated analog carrier), with the digital audio being the default condition.
Figure 2.4.2 Illustration of potential interference to/from first-adjacent analog signals by AM IBOC
digital sidebands. (From [1]. Used with permission.)
Figure 2.4.3 Illustration of potential interference between second-adjacent channel AM IBOC sig-
nals. (From [1]. Used with permission.)
The two main circumstances under which an IBOC receiver reverts to analog audio output are
during acquisition (i.e., when a radio station is first tuned in, an IBOC receiver acquires the ana-
log signal in milliseconds but takes a few seconds to begin decoding the audio on the digital side-
bands) or, when reception conditions deteriorate to the point where approximately 10 percent of
the data blocks sent in the digital sidebands are corrupted during transmission. Many of the tests
in the NRSC procedures were designed to determine the conditions that would cause blend-to-
analog to occur in this second circumstance, since at this point the IBOC system essentially
reverts to analog AM.
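The acquisition and blend behavior described above can be modeled as a simple decision rule. This is purely an illustrative sketch, not iBiquity's actual algorithm; the 10 percent corrupted-block threshold comes from the text, while the lower resume threshold (hysteresis) is an assumption added here:

```python
def blend_state(block_error_rate: float, digital_acquired: bool,
                currently_digital: bool,
                blend_threshold: float = 0.10,
                resume_threshold: float = 0.05) -> str:
    """Decide whether the receiver outputs 'digital' or 'analog' audio.

    Illustrative model: blend to analog when more than ~10 percent of
    data blocks are corrupted; the lower resume threshold (an assumption)
    adds hysteresis so the output does not flip rapidly between states.
    """
    if not digital_acquired:
        # During acquisition, analog is available in milliseconds while
        # digital decoding takes a few seconds to start.
        return "analog"
    if currently_digital:
        return "analog" if block_error_rate > blend_threshold else "digital"
    return "digital" if block_error_rate < resume_threshold else "analog"

print(blend_state(0.02, True, True))   # digital (default condition)
print(blend_state(0.15, True, True))   # analog (blend-to-analog)
print(blend_state(0.08, True, False))  # analog (hysteresis holds)
```

Many of the NRSC tests amount to mapping out which reception conditions push the error rate across the first threshold.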
Figure 2.4.4 Illustration of potential interference between third-adjacent AM IBOC signals. (From
[1]. Used with permission.)
quately respond to the perceptual aspects of the system. This is one of the reasons why the
NRSC’s test program included a comprehensive subjective evaluation component.
The NRSC’s initial AM IBOC test program did not include any tests specifically designed to
evaluate the digital performance or host compatibility of the IBOC system under skywave propa-
gation conditions, for example, as distant listeners of a clear-channel AM station might experi-
ence at night. In addition, there were no provisions in the initial test program for evaluating the
impact of a skywave IBOC interferer on either analog or IBOC desired signals. Such testing was
beyond the scope of the NRSC’s accelerated test program, given the highly variable and statisti-
cal nature of skywave propagation conditions that make the collection of statistically significant
test data extremely difficult and time-consuming.
the nighttime compatibility of hybrid AM IBOC. The NRSC therefore recommended that sta-
tions desiring to operate with AM IBOC do so during daytime hours only.
Analog Compatibility
For the conditions tested, the AM IBOC system was found to have little effect on the host analog
signal; the amount of interference to the host analog signal was receiver-dependent [1]. Narrow
bandwidth automobile receivers were found to be the least sensitive to the digital signal. Wider
bandwidth hi-fi and portable radios were found to be more sensitive to the digital signal. Each
receiver’s frequency and phase response symmetry plays a part in its host compatibility.
The test results suggested that, although the introduction of AM IBOC would be noticeable to
some listeners of the host analog station using certain analog receivers, these listeners are not
expected to find their audio quality sufficiently degraded to impact listening.
Other findings included the following:
• Co-channel compatibility. The introduction of AM IBOC was not expected to have any
impact on the level of co-channel interference due to the design of the AM IBOC system. Co-
channel compatibility was not tested by the NRSC.
• First adjacent compatibility. Overall conclusions about first-adjacent compatibility of the
AM IBOC system were that the interference caused by the introduction of the IBOC signal
was predominantly determined by the D/U ratio. FCC allocation rules permit 6 dB D/U ratios
at an AM station’s daytime protected contour. At the 10 dB D/U point, all AM radios tested,
when receiving speech programming, were unable to provide audio quality that would satisfy
at least half of all listeners whether or not an interfering first-adjacent station was broadcast-
ing the AM IBOC signal. At the 15 dB D/U point, automobile radios provided listenable
audio, and were not significantly affected by the introduction of IBOC; however, hi-fi receivers
provided listenable audio that became unlistenable with the introduction of
IBOC. At the same 15 dB D/U point, portable radios appeared to provide unlistenable audio
with or without IBOC. At the 30 dB D/U point, all radios appeared to provide listenable audio
with or without IBOC.
• Second adjacent compatibility. The data indicated that second-adjacent interference from
AM IBOC would be receiver- and D/U-dependent. At the D/U ratios tested, narrowband (typ-
ically automobile) receivers were not sensitive to AM IBOC interference, although hi-fi and
portable receivers (i.e., wideband receivers) experienced interference at the 0 dB D/U ratio,
and at negative D/U ratios. FCC allocation rules permit 0 dB D/U ratios at an AM station’s 5
mV/m groundwave contour.
• Third adjacent compatibility. AM IBOC was not expected to have an impact on the amount
of third-adjacent channel interference in the AM band, and the test results confirmed this.
2.4.4 References
1. NRSC: “DAB Subcommittee Evaluation of the iBiquity Digital Corporation IBOC System,
Part 2—AM IBOC,” National Radio Systems Committee, Washington, D.C., April 6, 2002.
Chapter
2.5
IBOC FM Digital Radio System
2.5.1 Introduction1
The principal system analysis work on the in-band on-channel (IBOC) digital radio system for
FM broadcasting was performed by the DAB Subcommittee of the National Radio Systems
Committee. The goals and objectives of the subcommittee were [1]:
• To study IBOC DAB systems and determine if they provide broadcasters and users with: 1) a
digital signal with significantly greater quality and durability than available from the analog
system that presently exists in the U.S.; 2) a digital service area that is at least equivalent to
the host station's analog service area while simultaneously providing suitable protection in co-
channel and adjacent channel situations; 3) a smooth transition from analog to digital ser-
vices.
• To provide broadcasters and receiver manufacturers with the information they need to make
an informed decision on the future of digital audio broadcasting in the U.S., and if appropriate
to foster its implementation.
To meet its objectives, the subcommittee resolved to work towards achieving the following goals:
• To develop a technical record and, where applicable, draw conclusions that will be useful to
the NRSC in the evaluation of IBOC systems.
• Provide a direct comparison between FM IBOC DAB and the existing analog broadcasting
system, and between an IBOC signal and its host analog signal, over a wide variation of ter-
rain and under adverse propagation conditions that could be expected to be found throughout
the U.S.
• Fully assess the impact of the IBOC DAB signal upon the existing analog broadcast signals
with which they must co-exist.
1. This chapter is based on the following document: NRSC, “DAB Subcommittee Evaluation of
the iBiquity Digital Corporation IBOC System, Part 1—FM IBOC,” National Radio Systems
Committee, Washington, D.C., November 29, 2001.
• Develop a testing process and measurement criteria that would produce conclusive, believable
and acceptable results, and be of a streamlined nature so as not to impede rapid development
of this technology.
• Work closely with IBOC system proponents in the development of laboratory and field test
plans, which would be used to provide the basis for future comparisons.
• Indirectly participate in the test process, by assisting in selection of (one or more) indepen-
dent testing agencies, or by closely observing proponent-conducted tests to insure that the
testing is executed in a thorough, fair, and impartial manner.
Perceptual Audio Coding Also known as audio compression or audio bit rate reduction, this is
the process of representing an audio signal with fewer bits while still preserving audio
quality. The coding schemes are based on the perceptual characteristics of the human ear.
Some examples of these coders are PAC, AAC, MPEG-2, and AC-3.
protected contour A representation of the theoretical signal strength of a radio station that
appears on a map as a closed polygon surrounding the station’s transmitter site. The FCC
defines a particular signal strength contour, such as 60 dBuV/m for certain classes of sta-
tion, as the protected contour. In allocating the facilities of other radio stations, the pro-
tected contour of an existing station may not be overlapped by certain interfering contours
of the other stations. The protected contour coarsely represents the primary coverage area
of a station, within which there is little likelihood that the signals of another station will
cause interference with its reception.
RDS (Radio Data System) The RDS signal is a low bit rate data stream transmitted on the 57
kHz subcarrier of an FM radio signal. Radio listeners know RDS mostly through its ability
to permit RDS radios to display call letters and search for stations based on their program-
ming format. Special traffic announcements can be transmitted to RDS radios, as well as
emergency alerts.
SDARS Satellite Digital Audio Radio Service, describes satellite-delivered digital audio systems
such as those from XM Radio and Sirius. The digital audio data rate in these systems is
specified as being 64 kbits/s.
subjective testing Using human subjects to judge the performance of a system. Subjective test-
ing is especially useful when testing systems that include components such as perceptual
audio coders. Traditional audio measurement techniques, such as signal-to-noise and dis-
tortion measurements, are often not compatible with the way perceptual audio coders work and
therefore cannot characterize their performance in a manner that can be compared with
other coders, or with traditional analog systems.
WQP (weighted quasi peak) Refers to a fast attack, slow-decay detector circuit that approxi-
mately responds to signal peaks, and that has varying attenuation as a function of frequency
so as to produce a measurement that approximates the human hearing system.
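As an illustrative sketch of such a detector (the attack and decay time constants below are placeholders, not the standardized weighting-curve values, and the frequency-weighting network that precedes a real WQP detector is omitted):

```python
import math

def quasi_peak_envelope(samples, fs, attack_ms=1.0, decay_ms=160.0):
    """Fast-attack, slow-decay envelope detector (quasi-peak style).
    Time constants are illustrative only."""
    a_attack = math.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_decay = math.exp(-1.0 / (fs * decay_ms / 1000.0))
    env, out = 0.0, []
    for x in samples:
        rect = abs(x)                                 # full-wave rectify
        coef = a_attack if rect > env else a_decay    # fast up, slow down
        env = coef * env + (1.0 - coef) * rect
        out.append(env)
    return out
```

Fed a short burst followed by silence, the reading rises to most of the peak value within a few milliseconds but takes many tens of milliseconds to fall, so brief peaks register strongly, much as they do to the ear.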
[Figure: spectral power density in dBc versus frequency from channel center (kHz); the analog FM signal sits within the FCC FM mask, with OFDM digital carriers in the lower and upper sidebands occupying approximately ±129 to ±198 kHz.]
Figure 2.5.1 iBiquity FM IBOC system signal spectral power density. (From [1]. Used with permission.)
[Figure: two channel spectra spaced 200 kHz apart; a digital sideband of the desired channel overlaps the analog signal of the first-adjacent channel.]
Figure 2.5.2 Illustration of potential interference to/from first-adjacent analog signals by FM IBOC digital sidebands. (From [1]. Used with permission.)
[Figure: desired channel and second-adjacent channel spectra spaced 400 kHz apart, with a 4 kHz guard band between the signals.]
The two main circumstances under which an IBOC receiver reverts to analog audio output are
during acquisition, i.e., when a radio station is first tuned in (an IBOC receiver acquires the ana-
log signal in milliseconds but takes a few seconds to begin decoding the audio on the digital side-
bands), and when reception conditions deteriorate to the point where approximately 10 percent of
the data blocks sent in the digital sidebands are corrupted during transmission. Many of the tests
in the NRSC procedures were designed to determine the conditions that would cause blend-to-
analog to occur.
• Performance tests. In the context of the NRSC test procedures, performance tests (some-
times called digital performance tests) were those used to establish the performance of the
IBOC digital radio system itself. Performance test results were obtained using an IBOC
receiver or through direct observation of the received signal.
• Compatibility tests. In the context of the NRSC IBOC evaluation, compatibility tests (some-
times referred to as analog compatibility tests) were designed to determine the effect that the
IBOC digital radio signal had on existing analog signals (main channel audio and subcarri-
ers). Compatibility testing involved observing performance with IBOC digital sidebands
alternately turned on and off; test results were obtained using either analog FM receivers or
FM subcarrier receivers (analog or digital) or through direct observation of the received sig-
nal.
For each of these, two basic types of measurements were made:
• Objective measurements, where a parameter such as signal power, signal to noise ratio, or
error rate was measured, typically by using test equipment designed specifically for that par-
ticular measurement.
• Subjective measurements, which involved human interpretation or opinion (not something
that can be simply measured with a device). In the NRSC test program, subjective measure-
ments involved determining the quality of audio recordings by having people listen to them
and rate them according to a pre-defined quality scale.
Subjective evaluation was especially important when trying to assess the quality of IBOC dig-
ital audio because the IBOC radio system relies upon perceptual audio coding for audio trans-
mission. The listening experience of audio that has passed through a perceptually coded
system is not accurately characterized by many of the normal objective audio quality measures,
such as signal-to-noise, distortion, or bandwidth. The instruments used to make such measure-
ments do not adequately respond to the perceptual aspects of the system.
Lab tests
Laboratory tests are fundamental to any characterization of a new broadcast system such as FM
IBOC [1]. The controlled and repeatable environment of a laboratory makes it possible to deter-
mine how the system behaves with respect to individual factors such as the presence or absence
of RF noise, multipath interference, or co- and adjacent-channel signals. These factors all exist in
the real world but because they exist simultaneously and are constantly changing, it is virtually
impossible to determine, in the real world, the effect each has on system operation.
Field tests
Field testing of a new broadcast system is necessary to determine performance in the real world
where all of the various factors which impact propagation and reception of radio signals exist to
varying degrees depending upon time of day, geographic location, and environmental factors [1].
The impact of the IBOC digital sidebands on the performance of existing main channel audio services was found to be
varied. Still, tests showed that listeners should not perceive an impact on the analog host signal,
nor on the analog signals on carriers that are either co-channel or second-adjacent channel with
respect to an IBOC signal. With respect to carriers that are located first-adjacent to an IBOC sig-
nal, listeners within the protected contour should not perceive an impact, but a limited number of
listeners may perceive an impact outside of the protected contour under certain conditions.
The NRSC also concluded that the tradeoffs necessary for the adoption of FM IBOC are rela-
tively minor. With respect to the main channel audio signal, evaluation of test data showed that a
small decrease in audio signal-to-noise ratio will be evident to some listeners in localized areas
where first-adjacent stations, operating with the FM IBOC system, overlap the coverage of a
desired station. However, listeners in these particular areas may also be subject to adjacent-chan-
nel analog interference which will tend to mask the IBOC-related interference, most appropri-
ately characterized as band-limited white noise, rendering it inaudible under normal listening
conditions. Also, the NRSC reported that all present-day mobile receivers include a stereo blend-
to-mono function dynamically active under conditions of varying signal strength and adjacent
channel interference. This characteristic of mobile receivers will also tend to mask IBOC-related
noise. The validity and effectiveness of these masking mechanisms is apparent from the rigorous
subjective evaluations performed on the data obtained during NRSC adjacent-channel testing.
Careful evaluation of test data showed that the digital SCA services tested (RDS and DARC)
should not be adversely impacted by IBOC. For the case of analog SCA services, some questions
remained as to the impact of IBOC on such services.
2.5.3 References
1. NRSC: “DAB Subcommittee Evaluation of the iBiquity Digital Corporation IBOC System,
Part 1—FM IBOC,” National Radio Systems Committee, Washington, D.C., November 29,
2001.
ATSC: “Guide to the Use of the Digital Television Standard,” Advanced Television Systems
Committee, Washington, D.C., Doc. A/54A, December 4, 2003.
ATSC: “Ghost Canceling Reference Signal for NTSC,” Advanced Television Systems Commit-
tee, Washington, D.C., Doc. A/49, 1993.
ATSC Standard: “Modulation And Coding Requirements For Digital TV (DTV) Applications
Over Satellite,” Doc. A/80, ATSC, Washington, D.C., July 17, 1999.
ATSC: “Synchronization Standard for Distributed Transmission,” Advanced Television Systems
Committee, Washington, D.C., Doc. A/110, July 14, 2004.
Carnt, P. S., and G. B. Townsend: Colour Television—Volume 1: NTSC; Volume 2: PAL and
SECAM, Iliffe Books Ltd. (Wireless World), London, 1969.
“CCIR Characteristics of Systems for Monochrome and Colour Television—Recommendations
and Reports,” Recommendations 470-1 (1974–1978) of the Fourteenth Plenary Assembly
of CCIR in Kyoto, Japan, 1978.
FCC Report and Order and Notice of Proposed Rule Making on Rules and Policies Affecting the
Conversion to DTV, MM Docket No. 00-39, Federal Communications Commission, Wash-
ington, D.C., January 19, 2001.
Eilers, C.: “The Zenith Multichannel TV Sound System,” Proc. 38th NAB Eng. Conf., National
Association of Broadcasters, Washington, D.C., 1984.
Electronic Industries Association: Multichannel Television Sound—BTSC System Recommended
Practices, EIA, Washington, D.C., EIA Television Systems Bulletin 5, July 1985.
Ericksen, Dane E.: “A Review of IOT Performance,” Broadcast Engineering, Intertec Publish-
ing, Overland Park, Kan., p. 36, July 1996.
ETS-300-421, “Digital Broadcasting Systems for Television, Sound, and Data Services; Framing
Structure, Channel Coding and Modulation for 11–12 GHz Satellite Services,” DVB
Project technical publication.
ETS-300-429, “Digital Broadcasting Systems for Television, Sound, and Data Services; Framing
Structure, Channel Coding and Modulation for Cable Systems,” DVB Project technical
publication.
ETS-300-468, “Digital Broadcasting Systems for Television, Sound, and Data Services; Specifi-
cation for Service Information (SI) in Digital Video Broadcasting (DVB) Systems,” DVB
Project technical publication.
ETS-300-472, “Digital Broadcasting Systems for Television, Sound, and Data Services; Specifi-
cation for Carrying ITU-R System B Teletext in Digital Video Broadcasting (DVB) Bit-
streams,” DVB Project technical publication.
ETS-300-473, “Digital Broadcasting Systems for Television, Sound, and Data Services; Satellite
Master Antenna Television (SMATV) Distribution Systems,” DVB Project technical publi-
cation.
European Telecommunications Standards Institute: “Digital Video Broadcasting; Framing Struc-
ture, Channel Coding and Modulation for Digital Terrestrial Television (DVB-T)”, March
1997.
FCC: “Memorandum Opinion and Order on Reconsideration of the Sixth Report and
Order,” Federal Communications Commission, Washington, D.C., February 17, 1998.
Federal Communications Commission: Multichannel Television Sound Transmission and Audio
Processing Requirements for the BTSC System, OST Bulletin 60, FCC, Washington, D.C.
Fink, Donald G. (ed.): Color Television Standards, McGraw-Hill, New York, N.Y., 1955.
Gilmore, A. S.: Microwave Tubes, Artech House, Dedham, Mass., pp. 196–200, 1986.
Herbstreit, J. W., and J. Pouliquen: “International Standards for Color Television,” IEEE Spec-
trum, IEEE, New York, N.Y., March 1967.
Hirsch, C. J.: “Color Television Standards for Region 2,” IEEE Spectrum, IEEE, New York, N.Y.,
February 1968.
Hulick, Timothy P.: “60 kW Diacrode UHF TV Transmitter Design, Performance and Field
Report,” Proceedings of the 1996 NAB Broadcast Engineering Conference, National Asso-
ciation of Broadcasters, Washington, D.C., p. 442, 1996.
Hulick, Timothy P.: “Very Simple Out-of-Band IMD Correctors for Adjacent Channel NTSC/
DTV Transmitters,” Proceedings of the Digital Television '98 Conference, Intertec Publish-
ing, Overland Park, Kan., 1998.
Lee, E. A., and D. G. Messerschmitt: Digital Communication, 2nd ed., Kluwer, Boston, Mass.,
1994.
Muschallik, C.: “Improving an OFDM Reception Using an Adaptive Nyquist Windowing,” IEEE
Trans. on Consumer Electronics, no. 03, 1996.
Ostroff, Nat S.: “A Unique Solution to the Design of an ATV Transmitter,” Proceedings of the
1996 NAB Broadcast Engineering Conference, National Association of Broadcasters,
Washington, D.C., p. 144, 1996.
Plonka, Robert J.: “Planning Your Digital Television Transmission System,” Proceedings of the
1997 NAB Broadcast Engineering Conference, National Association of Broadcasters,
Washington, D.C., p. 89, 1997.
Pollet, T., M. van Bladel, and M. Moeneclaey: “BER Sensitivity of OFDM Systems to Carrier
Frequency Offset and Wiener Phase Noise,” IEEE Trans. on Communications, vol. 43,
1995.
Priest, D. H., and M. B. Shrader: “The Klystrode—An Unusual Transmitting Tube with Potential
for UHF-TV,” Proc. IEEE, vol. 70, no. 11, pp. 1318–1325, November 1982.
Pritchard, D. H.: “U.S. Color Television Fundamentals—A Review,” SMPTE Journal, SMPTE,
White Plains, N.Y., vol. 86, pp. 819–828, November 1977.
Roizen, J.: “Universal Color Television: An Electronic Fantasia,” IEEE Spectrum, IEEE, New
York, N.Y., March 1967.
Rhodes, Charles W.: “Terrestrial High-Definition Television,” The Electronics Handbook, Jerry
C. Whitaker (ed.), CRC Press, Boca Raton, Fla., pp. 1599–1610, 1996.
Robertson, P., and S. Kaiser: “Analysis of the Effects of Phase-Noise in Orthogonal Frequency
Division Multiplex (OFDM) Systems,” ICC 1995, pp. 1652–1657, 1995.
Sari, H., G. Karam, and I. Jeanclaude: “Channel Equalization and Carrier Synchronization in
OFDM Systems,” IEEE Proc. 6th. Tirrenia Workshop on Digital Communications, Tirre-
nia, Italy, pp. 191–202, September 1993.
Symons, R., M. Boyle, J. Cipolla, H. Schult, and R. True: “The Constant Efficiency Amplifier—
A Progress Report,” Proceedings of the NAB Broadcast Engineering Conference, National
Association of Broadcasters, Washington, D.C., pp. 77–84, 1998.
Symons, Robert S.: “The Constant Efficiency Amplifier,” Proceedings of the NAB Broadcast
Engineering Conference, National Association of Broadcasters, Washington, D.C., pp.
523–530, 1997.
Tardy, Michel-Pierre: “The Experience of High-Power UHF Tetrodes,” Proceedings of the 1993
NAB Broadcast Engineering Conference, National Association of Broadcasters, Washing-
ton, D.C., p. 261, 1993.
van Klinken, N., and W. Renirie: “Receiving DVB: Technical Challenges,” Proceedings of the
International Broadcasting Convention, IBC, Amsterdam, September 2000.
Whitaker, Jerry C.: “Microwave Power Tubes,” Power Vacuum Tubes Handbook, Van Nostrand
Reinhold, New York, p. 259, 1994.
Whitaker, Jerry C.: “Microwave Power Tubes,” The Electronics Handbook, Jerry C. Whitaker
(ed.), CRC Press, Boca Raton, Fla., p. 413, 1996.
Whitaker, Jerry C.: “Solid State RF Devices,” Radio Frequency Transmission Systems: Design
and Operation, McGraw-Hill, New York, p. 101, 1990.
Chapter
3.1
Television Transmission Standards
3.1.1 Introduction
The performance of a motion picture system in one location of the world is generally the same as
in any other location. Thus, international exchange of film programming is relatively straightfor-
ward. This is not the case, however, with the conventional broadcast color television systems.
The lack of compatibility has its origins in many factors, such as constraints in communications
channel allocations and techniques, differences in local power source characteristics, network
requirements, pickup and display technologies, and political considerations relating to interna-
tional telecommunications agreements.
3.1.1a Definitions
Applicable governmental regulations for the various analog television transmission systems in
use around the world provide the basic framework and detailed specifications pertaining to those
standards. Some of the key parameters specified include the following:
• Amplitude modulation (AM). A system of modulation in which the envelope of the transmit-
ted wave contains a component similar to the waveform of the baseband signal to be transmit-
ted.
• Antenna height above average terrain. The average of the antenna heights above the terrain
from about 2 to 10 mi (3.2 to 16 km) from the antenna as determined for eight radial direc-
tions spaced at 45° intervals of azimuth. Where circular or elliptical polarization is employed,
the average antenna height is based upon the height of the radiation center of the antenna that
produces the horizontal component of radiation.
• Antenna power gain. The square of the ratio of the rms free space field intensity produced at
1 mi (1.6 km) in the horizontal plane, expressed in millivolts per meter for 1 kW antenna
input power to 137.6 mV/m. The ratio is expressed in decibels (dB).
• Aspect ratio. The ratio of picture width to picture height as transmitted. The standard is 4:3
for 525-line NTSC and 625 line-PAL and SECAM systems.
• Chrominance. The colorimetric difference between any color and a reference color of equal
luminance, the reference color having a specific chromaticity.
• Effective radiated power. The product of the antenna input power and the antenna power gain
expressed in kilowatts and in decibels above 1 kW (dBk). The licensed effective radiated
power is based on the average antenna power gain for each direction in the horizontal plane.
Where circular or elliptical polarization is employed, the effective radiated power is applied
separately to the vertical and horizontal components. For assignment purposes, only the effec-
tive radiated power for horizontal polarization is usually considered.
• Field. A scan of the picture area once in a predetermined pattern.
• Frame. One complete image. In the line-interlaced scanning pattern of 2/1, a frame consists of
two interlaced fields.
• Frequency modulation (FM). A system of modulation where the instantaneous radio fre-
quency varies in proportion to the instantaneous amplitude of the modulating signal, and the
instantaneous radio frequency is independent of the frequency of the modulating signal.
• Interlaced scanning. A scanning pattern where successively scanned lines are spaced an inte-
gral number of line widths, and in which the adjacent lines are scanned during successive
periods of the field rate.
• IRE standard scale. A linear scale for measuring, in arbitrary units, the relative amplitudes of
the various components of a television signal. (See Table 3.1.1.)
• Luminance. Luminance flux emitted, reflected, or transmitted per unit solid angle per unit
projected area of the source (the relative light intensity of a point in the scene).
• Negative transmission. Modulation of the radio-frequency visual carrier in such a way as to
cause an increase in the transmitted power with a decrease in light intensity.
• Polarization. The direction of the electric field vector as radiated from the antenna.
• Scanning. The process of analyzing successively, according to a predetermined method or
pattern, the light values of the picture elements constituting the picture area.
• Vestigial sideband transmission. A system of transmission wherein one of the modulation
sidebands is attenuated at the transmitter and radiated only in part.
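Two of the quantities defined above reduce to simple arithmetic. The following sketch computes antenna height above average terrain and effective radiated power in dBk; the radial elevations and gain figure used in the comments are invented example values:

```python
import math

def haat_m(radiation_center_elev_m, radial_terrain_avgs_m):
    """Height above average terrain: radiation-center elevation minus the
    mean of the eight radial terrain averages (each radial averaged from
    2 to 10 mi / 3.2 to 16 km, at 45-degree spacing)."""
    assert len(radial_terrain_avgs_m) == 8
    return radiation_center_elev_m - sum(radial_terrain_avgs_m) / 8.0

def erp_dbk(input_power_kw, antenna_power_gain_db):
    """Effective radiated power in dB above 1 kW (dBk):
    ERP(kW) = antenna input power x antenna power gain."""
    erp_kw = input_power_kw * 10.0 ** (antenna_power_gain_db / 10.0)
    return 10.0 * math.log10(erp_kw)
```

For example, 10 kW of input power into an antenna with 10 dB power gain radiates 100 kW effective, i.e., 20 dBk.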
Monochrome receivers must be able to produce high-quality black-and-white images from a color
broadcast and color receivers must produce high-quality black-and-white images from mono-
chrome broadcasts. The first such color television system to be placed into commercial broad-
cast service was developed in the U.S. On December 17, 1953, the Federal Communications
Commission (FCC) approved transmission standards and authorized broadcasters, as of January
23, 1954, to provide regular service to the public under these standards. This decision was the
culmination of the work of the NTSC (National Television System Committee) upon whose rec-
ommendation the FCC action was based [1]. Subsequently, this system, commonly referred to as
the NTSC system, was adopted by Canada, Japan, Mexico, and others.
That nearly 50 years later, this standard is still providing color television service of good qual-
ity testifies to the validity and applicability of the fundamental principles underlying the choice
of specific techniques and numerical standards.
The previous existence of a monochrome television standard was two-edged in that it pro-
vided a foundation upon which to build the necessary innovative techniques while simulta-
neously imposing the requirement of compatibility. Within this framework, an underlying
theme—that which the eye does not see does not need to be transmitted nor reproduced—set the
stage for a variety of fascinating developments in what has been characterized as an “economy of
representation” [1].
The countries of Europe delayed the adoption of a color television system, and in the years
between 1953 and 1967, a number of alternative systems that were compatible with the 625-line,
50-field existing monochrome systems were devised. The development of these systems was to
some extent influenced by the fact that the technology necessary to implement some of the
NTSC requirements was still in its infancy. Thus, many of the differences between NTSC and the
other systems are the result of technological rather than fundamental theoretical considerations.
Most of the basic techniques of NTSC are incorporated into the other system approaches. For
example, the use of wideband luminance and relatively narrowband chrominance, following the
principle of mixed highs, is involved in all systems. Similarly, the concept of providing horizontal
interlace for reducing the visibility of the color subcarrier(s) is followed in all approaches. This
feature is required to reduce the visibility of signals carrying color information that are contained
within the same frequency range as the coexisting monochrome signal, thus maintaining a high
order of compatibility.
An early system that received approval was one proposed by Henri de France of the Compag-
nie de Television of Paris. It was argued that if color could be relatively band-limited in the hori-
zontal direction, it could also be band-limited in the vertical direction. Thus, the two pieces of
coloring information (hue and saturation) that need to be added to the one piece of monochrome
information (brightness) could be transmitted as subcarrier modulation that is sequentially trans-
mitted on alternate lines—thereby avoiding the possibility of unwanted crosstalk between color
signal components. Thus, at the receiver, a one-line memory, commonly referred to as a 1-H
delay element, must be employed to store one line to then be concurrent with the following line.
Then a linear matrix of the red and blue signal components (R and B) is used to produce the third
green component (G). Of course, this necessitates the addition of a line-switching identification
technique. Such an approach, designated SECAM (Séquentiel Couleur à Mémoire, for
sequential color with memory), was developed and officially adopted by France and the USSR,
and broadcast service began in France in 1967.
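The line-sequential reconstruction described above can be sketched as follows. The alternation order and sample values here are hypothetical, and the FM subcarrier demodulation that precedes this step is omitted:

```python
def secam_decode(lines):
    """Each scan line carries only one color-difference signal, alternating
    D_B (blue) and D_R (red). A one-line (1-H) memory holds the previous
    line's component so that both are available on every line.
    `lines` is a list of (kind, value) pairs, kind being 'DB' or 'DR'."""
    db = dr = None
    out = []
    for kind, value in lines:
        if kind == 'DB':
            db = value                # current line carries blue difference
        else:
            dr = value                # current line carries red difference
        if db is not None and dr is not None:
            out.append((db, dr))      # current component + stored component
    return out
```

From D_B and D_R plus luminance, a linear matrix then recovers the third (green) component, as the text describes.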
The implementation technique of a 1-H delay element led to the development, largely through
the efforts of Walter Bruch of the Telefunken Company, of the Phase Alternation Line (PAL) sys-
tem. This approach was aimed at overcoming an implementation problem of NTSC that requires
a high order of phase and amplitude integrity (skew-symmetry) of the transmission path charac-
teristics about the color subcarrier to prevent color quadrature distortion. The line-by-line alter-
nation of the phase of one of the color signal components averages any colorimetric distortions
to the observer’s eye to that of the correct value. The system in its simplest form (simple PAL),
however, results in line flicker (Hanover bars). The use of a 1-H delay device in the receiver
greatly alleviates this problem (standard PAL). PAL systems also require a line identification
technique.
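The PAL averaging mechanism is easy to verify numerically. Modeling the chrominance vector as the complex number U + jV, a common differential phase error rotates both lines; averaging the current line with the sign-corrected alternate line leaves the hue intact and merely scales saturation by the cosine of the error:

```python
import cmath
import math

def pal_average(u, v, phase_error_deg):
    """Recover (U, V) by averaging a normal line and a V-switched line,
    both rotated by the same differential phase error."""
    err = cmath.exp(1j * math.radians(phase_error_deg))
    line_a = (u + 1j * v) * err      # line n, as demodulated
    line_b = (u - 1j * v) * err      # line n+1 (V inverted at the encoder)
    line_b = line_b.conjugate()      # receiver re-inverts the V component
    avg = (line_a + line_b) / 2.0
    return avg.real, avg.imag
```

With a 10° path error the recovered hue angle is exact and the magnitude shrinks by cos 10° ≈ 0.985: a slight desaturation rather than a visible hue shift.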
The standard PAL system was adopted by numerous countries in continental Europe, as well
as in the United Kingdom. Public broadcasting began in 1967 in Germany and the United King-
dom using two slightly different variants of the PAL system.
Table 3.1.2 Similarities and differences among the NTSC, PAL, and SECAM systems:

All systems:
• Use three primary additive colorimetric principles
• Use similar camera pickup and display technologies
• Employ wide-band luminance and narrow-band chrominance
• Are compatible with coexisting monochrome systems

First-order differences are therefore:
• Line and field rates
• Component bandwidths
• Frequency allocations

Major differences lie in color-encoding techniques:
• NTSC: Simultaneous amplitude and phase quadrature modulation of an interlaced, suppressed subcarrier
• PAL: Similar to NTSC but with line alternation of one color-modulation component
• SECAM: Frequency modulation of line-sequential color subcarrier(s)
Recognition that the scanning process, being equivalent to sampled-data techniques, produces
signal components largely concentrated in uniformly spaced groups across the channel width,
led to introduction of the concept of horizontal frequency interlace (dot interlace). The color
subcarrier frequency was so chosen as to be an odd multiple of one-half the line rate (in the case
of NTSC) such that the phase of the subcarrier is exactly opposite on successive scanning lines.
This substantially reduces the subjective visibility of the color signal “dot” pattern components.
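For NTSC the arithmetic works out as follows (455 is the odd multiple actually chosen; the line rate follows from the 4.5 MHz intercarrier sound offset divided by 286):

```python
# NTSC color subcarrier choice: an odd multiple (455) of one-half the
# horizontal line rate, so the subcarrier phase is opposite on
# successive scanning lines (frequency interlace).
f_h = 4_500_000 / 286          # line rate, Hz (about 15,734.27 Hz)
f_sc = 455 * f_h / 2           # color subcarrier, about 3.579545 MHz

# 227.5 cycles per line: the extra half cycle inverts the dot pattern
# from one line to the next, reducing its visibility.
cycles_per_line = f_sc / f_h
```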
Thus, the major differences among the three main systems of NTSC, PAL, and SECAM are in
the specific modulating processes used for encoding and transmitting the chrominance informa-
tion. The similarities and differences are briefly summarized in Table 3.1.2.
The signal of Equation (3.1.1) would be exactly equal to the output of a linear monochrome
sensor with ideal spectral sensitivity if the red, green, and blue elements were also linear devices
with theoretically correct spectral sensitivity curves. In actual practice, the red, green, and blue
primary signals are deliberately made nonlinear to accomplish gamma correction (adjustment of
3-14 Television Transmission Systems
the slope of the input/output transfer characteristic). The prime mark (′) is used to denote a
gamma-corrected signal.
Signals representative of the chromaticity information (hue and saturation) that relate to the
differences between the luminance signal and the basic red, green, and blue signals are generated
in a linear matrix. This set of signals is termed color-difference signals and is designated as R – Y,
G – Y, and B – Y. These signals modulate a subcarrier that is combined with the luminance com-
ponent and passed through a common communications channel. At the receiver, the color differ-
ence signals are detected, separated, and individually added to the luminance signal in three
separate paths to recreate the original R, G, and B signals according to the equations
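As a sketch of this matrixing (using the standard published NTSC luminance coefficients and omitting gamma correction for simplicity), the encode/decode roundtrip recovers R, G, and B exactly:

```python
def encode(r, g, b):
    """Form luminance plus the three color-difference signals."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return y, r - y, g - y, b - y

def decode(y, ry, gy, by):
    """Add each color-difference signal back to luminance."""
    return ry + y, gy + y, by + y
```

Note that because the luminance coefficients sum to 1, G – Y is derivable from the other two differences, G – Y = –(0.299(R – Y) + 0.114(B – Y))/0.587, which is why only two color-difference signals need be transmitted.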
In the specific case of NTSC, two other color-difference signals, designated as I and Q, are
formed at the encoder and are used to modulate the color subcarrier.
Another reason for the choice of signal values in the NTSC system is that the eye is more
responsive to spatial and temporal variations in luminance than it is to variations in chrominance.
Therefore, the visibility of luminosity changes resulting from random noise and interference
effects can be reduced by properly proportioning the relative chrominance gain and encoding
angle values with respect to the luminance values. Thus, the principle of constant luminance is
incorporated into the system standard [1, 2].
The chrominance signal components are defined by the following equations. For NTSC
B – Y = 0.493 (E′B – E′Y)   (3.1.7)

R – Y = 0.877 (E′R – E′Y)   (3.1.8)
For PAL
For SECAM
For the NTSC system, the total chroma signal expression is given by

CNTSC = [(B – Y)/2.03] sin ωSC t + [(R – Y)/1.14] cos ωSC t   (3.1.14)

and for PAL

CPAL = (U/2.03) sin ωSC t + (V/1.14) cos ωSC t   (3.1.15)
where U and V are substituted for B – Y and R – Y, respectively, and the V component is alter-
nated 180º on a line-by-line basis.
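A numerical sketch of Equation (3.1.14) and its recovery by synchronous detection follows; the sampling rate and test values are arbitrary, and four samples per subcarrier cycle are used so the sine and cosine products stay exactly orthogonal:

```python
import math

def chroma_roundtrip(by, ry, f_sc=3_579_545.0, fs=4 * 3_579_545.0, n=2000):
    """Quadrature-modulate (B-Y, R-Y) onto a subcarrier per Eq. (3.1.14),
    then recover them by synchronous detection (multiply by the reference
    sin/cos and average over whole cycles)."""
    acc_by = acc_ry = 0.0
    for k in range(n):
        t = k / fs
        s = math.sin(2 * math.pi * f_sc * t)
        c = math.cos(2 * math.pi * f_sc * t)
        chroma = (by / 2.03) * s + (ry / 1.14) * c   # Eq. (3.1.14)
        acc_by += chroma * s
        acc_ry += chroma * c
    # the average of sin^2 (or cos^2) over whole cycles is 1/2
    return 2.03 * 2 * acc_by / n, 1.14 * 2 * acc_ry / n
```

This also illustrates why the proper phase relationship between encoder and decoder matters: the recovery relies on the reference carriers at the detector matching the modulating carriers exactly.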
The voltage outputs from the three camera sensors are adjusted to be equal when a scene ref-
erence white or neutral gray object is being scanned for the color temperature of the scene ambi-
ent. Under this condition, the color subcarrier also automatically becomes zero. The colorimetric
values have been formulated by assuming that the reproducer will be adjusted for illuminant C,
representing the color of average daylight.
Figure 3.1.1 is a CIE chromaticity diagram showing the primary color coordinates for NTSC,
PAL, and SECAM. It is interesting to compare the available color gamut relative to that of color
paint, pigment, film, and dye processes.
In the NTSC color standard, the chrominance information is carried as simultaneous ampli-
tude and phase modulation of a subcarrier chosen to be in the high frequency portion of the 0–
4.2 MHz video band and specifically related to the scanning rates as an odd multiple of one-half
the horizontal line rate, as shown by the vector diagram in Figure 3.1.2. The hue information is
assigned to the instantaneous phase of the subcarrier. Saturation is determined by the ratio of the
instantaneous amplitude of the subcarrier to that of the corresponding luminance signal ampli-
tude value.
The choice of the I and Q color modulation components relates to the variation of color acuity
characteristics of human color vision as a function of the field of view and spatial dimensions of
objects in the scene. The color acuity of the eye decreases as the size of the viewed object is
decreased and thereby occupies a small part of the field of view. Small objects, represented by
frequencies above about 1.5 to 2.0 MHz, produce no color sensation (mixed-highs). Intermediate
spatial dimensions (approximately in the 0.5 to 1.5 MHz range) are viewed satisfactorily if repro-
duced along a preferred orange-cyan axis. Large objects (0–0.5 MHz) require full three-color
reproduction for subjectively-pleasing results. Thus, the I and Q bandwidths are chosen accord-
ingly, and the preferred colorimetric reproduction axis is obtained when only the I signal exists
by rotating the subcarrier modulation vectors by 33º. In this way, the principles of mixed-highs
and I, Q color-acuity axis operation are exploited.
At the encoder, the Q signal component is band-limited to about 0.6 MHz and is representa-
tive of the green-purple color axis information. The I signal component has a bandwidth of about
1.5 MHz and contains the orange-cyan color axis information. These two signals are then used to
individually modulate the color subcarrier in two balanced modulators operated in phase quadra-
ture. The sum products are selected and added to form the composite chromaticity subcarrier.
This signal—in turn—is added to the luminance signal along with the appropriate horizontal and
vertical synchronizing and blanking waveforms to include the color synchronization burst. The
result is the total composite color video signal.
Quadrature synchronous detection is used at the receiver to identify the individual color sig-
nal components. When individually recombined with the luminance signal, the desired R, G, and
B signals are recreated. The receiver designer is free to demodulate either at I or Q and matrix to
form B – Y, R – Y, and G – Y, or as in nearly all modern receivers, at B – Y and R – Y and maintain
500 kHz equiband color signals.
The chrominance information can be carried without loss of identity provided that the proper
phase relationship is maintained between the encoding and decoding processes. This is accom-
(simple PAL) or an electronic averaging technique such as the use of a 1-H delay element (stan-
dard PAL), produces cancellation of the phase (hue) error and provides the correct hue but with
somewhat reduced saturation; this error being subjectively much less visible.
Obviously, the PAL receiver must be provided with some means by which the V signal switch-
ing sequence can be identified. The technique employed is known as A B sync, PAL sync, or
swinging burst and consists of alternating the phase of the reference burst by ±45° at a line rate,
as shown in Figure 3.1.4. The burst is constituted from a fixed value of U phase and a switched
value of V phase. Because the sign of the V burst component is the same sign as the V picture
content, the necessary switching “sense” or identification information is available. At the same
time, the fixed-U component is used for reference carrier synchronization. Figure 3.1.5 shows
the degree to which horizontal frequency (dot) interlace of the color subcarrier components with
the luminance components is achieved in PAL.
To summarize, in NTSC, the Y components are spaced at fH intervals as a result of the hori-
zontal sampling (blanking) process. Thus, the choice of a color subcarrier whose harmonics are
also separated from each other by fH (as they are odd multiples of 1/2 fH) provides a half-line off-
set and results in a perfect “dot” interlace pattern that moves upward. Four complete field scans
are required to repeat a specific picture element “dot” position.
In PAL, the luminance components are also spaced at fH intervals. Because the V components
are switched symmetrically at half the line rate, only odd harmonics exist, with the result that the
Figure 3.1.5 NTSC and PAL frequency-interlace relationship: (a) NTSC 1/2-H interlace (four fields for a complete color frame); (b) PAL 1/4-H and 3/4-H offset (eight fields for a complete color frame).
V components are spaced at intervals of fH. They are spaced at half-line intervals from the U
components which, in turn have fH spacing intervals because of blanking. If half-line offset were
used, the U components would be perfectly interlaced but the V components would coincide with
Y and, thus, not be interlaced, creating vertical, stationary dot patterns.
For this reason, in PAL, a 1/4-line offset for the subcarrier frequency is used as shown in Fig-
ure 3.1.5. The expression for determining the PAL subcarrier specific frequency for 625-line/50-
field systems is given by
FSC = (1135/4) fH + (1/2) fV  (3.1.16)
The additional factor 1/2 fV = 25 Hz is introduced to provide motion to the color dot pattern,
thereby reducing its visibility. The degree to which interlace is achieved is, therefore, not perfect,
but is acceptable, and eight complete field scans must occur before a specific picture element
“dot” position is repeated.
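As a numerical check, Eq. (3.1.16) can be evaluated for the 625-line/50-field case. The line rate fH = 15,625 Hz and field rate fV = 50 Hz are assumed here, since they are not restated at this point in the text:

```python
# Evaluate the PAL subcarrier frequency of Eq. (3.1.16).
# Assumed standard 625/50 scanning values (not restated in this passage):
f_H = 15_625.0   # line frequency, Hz
f_V = 50.0       # field frequency, Hz

# Quarter-line offset plus the 25 Hz term that keeps the dot pattern moving.
f_SC = (1135 / 4) * f_H + 0.5 * f_V
print(f_SC)  # 4433618.75
```

This recovers the familiar 4.43361875 MHz PAL subcarrier frequency.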
One additional function must be accomplished in relation to PAL color synchronization. In all
systems, the burst signal is eliminated during the vertical synchronization pulse period. Because
in the case of PAL, the swinging burst phase is alternating line-by-line, some means must be pro-
vided for ensuring that the phase is the same for the first burst following vertical sync on a field-
by-field basis. Therefore, the burst reinsertion time is shifted by one line at the vertical field rate
Figure 3.1.6 Meander burst-blanking gate timing diagram for systems B, G, H, and PAL/I. (Source:
CCIR.)
by a pulse referred to as the meander gate. The timing of this pulse relative to the A versus B
burst phase is shown in Figure 3.1.6.
The transmitted signal specifications for PAL systems include the basic features discussed
previously. Although a description of the great variety of receiver decoding techniques in com-
mon use is outside the scope and intent of this chapter, it is appropriate to review—at least
briefly—the following major features:
• “Simple” PAL relies upon the eye to average the line-by-line color switching process and can
be plagued with line beats known as Hanover bars caused by the system nonlinearities intro-
ducing visible luminance changes at the line rate.
• “Standard” PAL employs a 1-H delay line element to separate U color signal components
from V color signal components in an averaging technique, coupled with summation and sub-
traction functions. Hanover bars can also occur in this approach if an imbalance of amplitude
or phase occurs between the delayed and direct paths.
• In a PAL system, vertical resolution in chrominance is reduced as a result of the line averag-
ing processes. The visibility of the reduced vertical color resolution as well as the vertical
time coincidence of luminance and chrominance transitions differs depending upon whether
the total system, transmitter through receiver, includes one or more averaging (comb filter)
processes.
Thus, PAL provides a similar system to NTSC and has gained favor in many areas of the world,
particularly for 625-line/50-field systems.
These frequencies represent zero color difference information (zero output from the FM dis-
criminator), or a neutral gray object in the televised scene.
As shown in Figure 3.1.7, the accepted convention for the direction of frequency change with respect to the polarity of the color difference signal is opposite for the D′B and D′R signals. A positive value of D′R means a decrease in frequency, whereas a positive value of D′B indicates an increase in frequency. This choice relates to the idea of keeping the frequencies representative of the most critical colors away from the upper edge of the available bandwidth to minimize distortions.
The deviation for D′R is 280 kHz and for D′B is 230 kHz. The maximum allowable deviation, including preemphasis, for D′R is –506 kHz and +350 kHz, while the values for D′B are –350 kHz and +506 kHz.
Figure 3.1.8 SECAM color signal low-frequency preemphasis. (CCIR Rep. 624-2.)
Two types of preemphasis are employed simultaneously in SECAM. First, as shown in Figure
3.1.8, a conventional type of preemphasis of the low-frequency color difference signals is intro-
duced. The characteristic is specified to have a reference level break-point at 85 kHz (f1) and a
maximum emphasis of 2.56 dB. The expression for the characteristic is given as
A = [1 + j(f/f1)] / [1 + j(f/3f1)]  (3.1.19)
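A quick way to see what Eq. (3.1.19) does is to evaluate its magnitude in dB at a few frequencies. This is an illustrative sketch only, using the f1 = 85 kHz break point given in the text:

```python
import math

f1 = 85e3  # reference break point from the text, Hz

def A_db(f):
    """Magnitude of the low-frequency preemphasis characteristic A, in dB."""
    num = complex(1.0, f / f1)
    den = complex(1.0, f / (3.0 * f1))
    return 20.0 * math.log10(abs(num / den))
```

At the break point itself the boost works out to about 2.55 dB, while well below f1 the characteristic is essentially flat, leaving low-frequency content nearly unaffected.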
Figure 3.1.10 Color versus line-and-field timing relationship for SECAM. Notes: (1) Two frames
(four fields) for picture completion; (2) subcarrier interlace is field-to-field and line-to-line of same
color.
G = M0 · [1 + j16(f/fc − fc/f)] / [1 + j1.26(f/fc − fc/f)]  (3.1.20)
where fc is the subcarrier rest ("bell") frequency, 4.286 MHz.
Figure 3.1.11 SECAM line-identification technique: (a) line identification (bottle) signal, (b) horizontal blanking interval. (Source: CCIR.)
This type of preemphasis is intended to further reduce the visibility of the frequency modu-
lated subcarriers in low luminance level color values and to improve the signal-to-noise ratio (S/
N) in high luminance and highly saturated colors. Thus, monochrome compatibility is better for
pastel, average-picture-level objects but is sacrificed somewhat in favor of S/N in saturated color areas.
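The shape of the "bell" characteristic of Eq. (3.1.20) can be sketched numerically. The subcarrier rest frequency fc = 4.286 MHz used below is the usual SECAM value, assumed here since it is not restated in this passage, and M0 is taken as 1:

```python
# Sketch of the HF "bell" preemphasis of Eq. (3.1.20), with M0 = 1.
# Assumption: subcarrier rest frequency fc = 4.286 MHz (usual SECAM value).
fc = 4.286e6

def G_mag(f):
    """Magnitude of the bell characteristic G/M0 at frequency f (Hz)."""
    F = f / fc - fc / f
    return abs(complex(1.0, 16.0 * F)) / abs(complex(1.0, 1.26 * F))
```

The gain is unity at fc and rises on either side of it, so the subcarrier is weakest for low-saturation colors sitting near the rest frequency, which is what reduces its visibility in monochrome.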
Of course, precise interlace of frequency modulated subcarriers for all values of color modu-
lation cannot occur. Nevertheless, the visibility of the interference represented by the existence
of the subcarriers can be reduced somewhat by the use of two separate carriers, as is done in
SECAM. Figure 3.1.10 illustrates the line switching sequence. In the undeviated “resting” fre-
quency mode, the 2:1 vertical interlace in relation to the continuous color difference line switch-
ing sequence produces adjacent line pairs of fOB and fOR. In order to further reduce the subcarrier
“dot” visibility, the phase of the subcarriers (phase carries no picture information in this case) is
reversed 180º on every third line and between each field. This, coupled with the “bell” preem-
phasis, produces a degree of monochrome compatibility considered subjectively adequate.
As in PAL, the SECAM system must provide some means for identifying the line switching
sequence between the encoding and decoding processes. This is accomplished, as shown in Fig-
ure 3.1.11, by introducing alternate DR and DB color identifying signals for nine lines during the
vertical blanking interval following the equalizing pulses after vertical sync. These “bottle”
shaped signals occupy a full line each and represent the frequency deviation in each time
sequence of DB and DR at zero luminance value. These signals can be thought of as a fictitious green color that is used at the decoder to determine the line-switching sequence.
During horizontal blanking, the subcarriers are blanked and a burst of fOB / fOR is inserted and
used as a gray level reference for the FM discriminators to establish their proper operation at the
beginning of each line.
Thus, the SECAM system is a line sequential color approach using frequency modulated sub-
carriers. A special identification signal is provided to identify the line switch sequence and is
especially adapted to the 625-line/50-field wideband systems available in France and the USSR.
It should be noted that SECAM, as practiced, employs amplitude modulation of the sound
carrier as opposed to the FM sound modulation in other systems.
CCIR documents [3] define recommended standards for worldwide color television systems
in terms of the three basic color approaches—NTSC, PAL, and SECAM. The variations—at
least 13 of them—are given alphabetical letter designations; some represent major differences
while others signify only very minor frequency allocation differences in channel spacings or the
differences between the VHF and UHF bands. The key to understanding the CCIR designations
lies in recognizing that the letters refer primarily to local monochrome standards for line and
field rates, video channel bandwidth, and audio carrier relative frequency. Further classification
in terms of the particular color system is then added for NTSC, PAL, or SECAM as appropriate.
For example, the letter “M” designates a 525-line/60-field, 4.2 MHz bandwidth, 4.5 MHz sound
carrier monochrome system. Thus, M(NTSC) describes a color system employing the NTSC
technique for introducing the chrominance information within the constraints of the basic mono-
chrome signal values. Likewise, M(PAL) would indicate the same line/field rates and band-
widths but employing the PAL color subcarrier modulation approach.
In another example, the letters “I” and “G” relate to specific 625-line/50-field, 5.0 or 5.5
MHz bandwidth, 5.5 or 6.0 MHz sound carrier monochrome standards. Thus, G(PAL) would describe a 625-line/50-field, 5.0 MHz bandwidth color system utilizing the PAL color subcarrier
modulation approach. The letter “L” refers to a 625-line/50-field, 6.0 MHz bandwidth system to
which the SECAM color modulation method has been added (often referred to as SECAM III).
System E is an 819-line/50-field, 10 MHz bandwidth, monochrome system. This channel was
used in France for early SECAM tests and for system E transmissions.
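The letter designations discussed above can be collected into a small table. The figures are those given in the text; the conventional G/I bandwidth pairing (G = 5.0 MHz, I = 5.5 MHz) and the 6.5 MHz sound spacing for system L are filled in here as assumptions:

```python
# CCIR monochrome-system letters mentioned in the text.
# video_mhz = video bandwidth; sound_mhz = video-to-sound carrier spacing.
ccir = {
    "M": dict(lines=525, fields=60, video_mhz=4.2, sound_mhz=4.5),
    "G": dict(lines=625, fields=50, video_mhz=5.0, sound_mhz=5.5),   # assumed pairing
    "I": dict(lines=625, fields=50, video_mhz=5.5, sound_mhz=6.0),   # assumed pairing
    "L": dict(lines=625, fields=50, video_mhz=6.0, sound_mhz=6.5),   # 6.5 MHz assumed
    "E": dict(lines=819, fields=50, video_mhz=10.0, sound_mhz=11.15),
}

def designation(letter, color=None):
    """Form a full designation such as 'M(NTSC)', or the bare letter."""
    return f"{letter}({color})" if color else letter

print(designation("M", "NTSC"))  # M(NTSC)
```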
Some general comparison statements can be made about the underlying monochrome systems
and existing color standards:
• There are three different scanning standards: 525-lines/60-fields, 625-lines/50-fields, and
819-lines/50-fields.
• There are six different spacings of video-to-sound carriers, namely 3.5, 4.5, 5.5, 6.0, 6.5, and
11.15 MHz.
• Some systems use FM and others use AM for the sound modulation.
• Some systems use positive polarity (luminance proportional to voltage) modulation of the
video carrier while others, such as the U.S. (M)NTSC system, use negative modulation.
• There are differences in the techniques of color subcarrier encoding represented by NTSC,
PAL, and SECAM, and of course, in each case there are many differences in the details of
various pulse widths, timing, and tolerance standards.
The signal in the M(NTSC) system occupies the least total channel width, which when the
vestigial sideband plus guard bands are included, requires a minimum radio frequency channel
spacing of 6 MHz. (See Figure 3.1.12.) The L(III) SECAM system signal occupies the greatest
channel space with a full 6 MHz luminance bandwidth. Signals from the two versions of PAL lie
in between and vary in vestigial sideband width as well as color and luminance bandwidths.
NTSC is the only system to incorporate the I, Q color acuity bandwidth variation. PAL mini-
mizes the color quadrature phase distortion effects by line-to-line averaging, and SECAM avoids
this problem by only transmitting the color components sequentially at a line-by-line rate.
Figure 3.1.12 Bandwidth comparison among NTSC, PAL, and SECAM systems.
Figure 3.1.14 Baseband frequency allocations: (a) stereophonic FM, (b) BTSC TV stereophonic system.
Compatibility with existing monophonic receivers was the reason behind the selection of 25-kHz
deviation for the main channel audio. The increased deviation of the subchannel and the use of
noise reduction was intended to maintain at least 50 dB S/N in outlying areas. With the noise
reduction compandor in the stereo subchannel and the 6-dB increase in level, the L and R S/N
should be dependent only on the main-channel S/N, which was found to be approximately 63 dB
in system tests. The choice of fH as the pilot frequency was made to minimize buzz-beat interference.
Figure 3.1.14 shows the FM-versus-TV stereo baseband spectrum. Table 3.1.3 shows the
aural carrier-modulation standards for the TV stereo system. Figure 3.1.15 is a block diagram of a typical TV stereo generator.
Many of the specifications for TV stereo generators are no different from those for other audio equipment, except that they cannot be verified without the use of a decoder and that the decoder's contribution to performance must be recognized.
There are at least two operational modes for TV stereo generators: the normal (noise reduc-
tion) operational mode and a test mode called 75-μs equivalent mode. In the 75-μs equivalent
mode, the noise reduction system in L – R is replaced with a 75-μs preemphasis network identi-
cal to that in L + R. This function was included to allow noise, distortion, and separation mea-
surements to be made without the level-dependent degradation caused by noise reduction.
Subchannel filters serve to limit the noise-reduction-control-line bandwidth and to control the
out-of-band energy created by the noise reduction circuit. When there is no audio input to the
noise reduction circuitry, the spectral compressor gain is at maximum, creating a high-level par-
abolic noise spectrum which needs to be bandwidth-limited to audio frequencies to prevent spec-
trum spillover. Because the pilot is separated from the subchannel by only 734 Hz, the filter
slope needs to be very sharp to provide pilot protection with flat response to 15,000 Hz.
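The 734 Hz figure quoted above follows directly from the NTSC line rate, which also serves as the pilot frequency:

```python
# Pilot-to-audio-band spacing in the BTSC stereo system.
f_H = 4.5e6 / 286       # NTSC line rate = pilot frequency, ~15734.27 Hz
audio_top = 15_000.0    # top of the flat audio passband, Hz

spacing = f_H - audio_top
print(round(spacing))   # 734
```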
Any out-of-band information on the noise-reduction-control line (such as fH) after the sub-
channel filter will cause the noise reduction circuitry to misencode, causing degradation of
received separation and frequency response when decoded. To ensure stereo separation greater
than 40 dB, the subchannel filters need to be matched to within 0.08-dB amplitude difference
and 1° phase difference. For good overall frequency response, the filter's response should be less
than ±0.2 dB from 50 to 15,000 Hz.
The stereo-generator frequency response should be less than ±1 dB from 50 to 15,000 Hz
(with noise reduction) and ±0.5 dB without noise reduction to help meet the total-system
response goals.
3.1.4 References
1. Herbstreit, J. W., and J. Pouliquen: “International Standards for Color Television,” IEEE
Spectrum, IEEE, New York, N.Y., March 1967.
2. Fink, Donald G. (ed.): Color Television Standards, McGraw-Hill, New York, N.Y., 1955.
3. “CCIR Characteristics of Systems for Monochrome and Colour Television—Recommen-
dations and Reports,” Recommendations 470-1 (1974–1978) of the Fourteenth Plenary
Assembly of CCIR in Kyoto, Japan, 1978.
4. Eilers, C.: “The Zenith Multichannel TV Sound System,” Proc. 38th NAB Eng. Conf.,
National Association of Broadcasters, Washington, D.C., 1984.
5. Federal Communications Commission: Multichannel Television Sound Transmission and
Audio Processing Requirements for the BTSC System, OST Bulletin 60, FCC, Washington,
D.C.
6. Electronic Industries Association: Multichannel Television Sound—BTSC System Recommended Practices, EIA Television Systems Bulletin 5, EIA, Washington, D.C., July 1985.
3.1.5 Bibliography
Carnt, P. S., and G. B. Townsend: Colour Television—Volume 1: NTSC; Volume 2: PAL and SECAM, ILIFFE Books Ltd. (Wireless World), London, 1969.
Hirsch, C. J.: “Color Television Standards for Region 2,” IEEE Spectrum, IEEE, New York, N.Y.,
February 1968.
Pritchard, D. H.: “U.S. Color Television Fundamentals—A Review,” SMPTE Journal, SMPTE,
White Plains, N.Y., vol. 86, pp. 819–828, November 1977.
Roizen, J.: “Universal Color Television: An Electronic Fantasia,” IEEE Spectrum, IEEE, New
York, N.Y., March 1967.
Chapter
3.2
Ghost Canceling Reference Signal
3.2.1 Introduction
The quality of a received analog television image depends on several factors. Some relate to the
transmission system while others relate to the reception equipment. Ghosting, which is caused by
reflections from buildings, aircraft, and other objects in the environment, falls somewhere in
between.
and its VBI location the official U.S. voluntary standard. The voluntary nature of the standard
relies on the cooperation of broadcasters and receiver manufacturers.
In the receiver, a digital signal processor (DSP) uses ghost-cancellation algorithms to calculate coefficients to be fed to digital adaptive filters that cancel the ghosts. This process occurs continuously, with
the coefficients of the filters being constantly updated to follow transient conditions, as mea-
sured from the received GCR.
Figure 3.2.2 shows a simplified block diagram of the GCR system. The ghosted received
GCR is captured, providing the computed settings for the ghost-canceling filter. The corrected
signal, with the ghosts removed, is converted back to analog for display.
The GCR acts as a test signal in characterizing the impulse or frequency response of the
“black box” representation of the ghosting transmission channel. The ATSC GCR signal resembles a swept-frequency chirp pulse. It is not such a pulse, however; it is carefully synthesized to have a flat frequency spectrum over the entire 4.2 MHz band of interest in NTSC applications.
(The spectrum of a linear FM chirp pulse is not flat.) This signal was preferred to the BTA GCR
because it provides more energy, permitting more reliable operation under noisier conditions.
Although the primary purpose of the GCR signal is for ghost cancellation, the resemblance
between the GCR and a linear FM chirp offers other uses. For example, broadcasters can use this
pulse as a qualitative indicator of the frequency response of their system.
1. Subject to reservation of line 19 by the FCC exclusively for the optional placement of the
GCR signal.
Figure 3.2.3 Spectrum of the GCR signal. (From [1]. Used with permission.)
Figure 3.2.4 GCR signal amplitude as a function of time. (From [1]. Used with permission.)
Figure 3.2.5 GCR signal on Line A. (From [1]. Used with permission.)
Figure 3.2.6 GCR signal on Line B. (From [1]. Used with permission.)
is 16.7 μs after the leading edge of horizontal sync. The GCR signal varies from –10 to +70 IRE
(the pedestal is the average of these extreme values).
Waveforms of the GCR signal on the pedestal are shown in Figures 3.2.5 and Figure 3.2.6,
and represent line A and line B, respectively. Line A and line B have the same 30 IRE pedestal
but the GCR polarity is inverted from line A to line B. The line A and line B signals are con-
tained in an 8-field sequence as follows:
Numerical values of the GCR signal as a function of time are given in ATSC Standard A/49. The
values were calculated from
f(t) = (A/2π) { ∫0^Ω [cos(bω²) + j sin(bω²)] W(ω) e^(jωt) dω + ∫–Ω^0 [cos(bω²) − j sin(bω²)] W(ω) e^(jωt) dω }
Where:
A = 9.0
b = 110.0
Ω = (4.3/7.16)π radians/s
W(ω) is the window function
W(ω) = ∫–π/c^π/c [1/2 + (1/2) cos(ct)] [(1/2π) ∫–Ω1^Ω1 e^(jγt) dγ] e^(−jωt) dt
Where:
c = π/40.0 radians/s
Ω1 = (4.15/7.16)π radians/s
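The structure of f(t) can be illustrated numerically. The sketch below is not the A/49 synthesis itself: it assumes W(ω) = 1 inside the band (ignoring the window) and a short length N = 512, builds a flat-magnitude spectrum carrying the ±bω² phase of the bracketed terms, and inverse-transforms it. The Hermitian symmetry makes the time-domain result real and chirp-like while its spectrum stays flat, as the text describes:

```python
import cmath
import math

# Illustrative only: flat-magnitude spectrum with +b*w^2 phase for positive
# frequencies and -b*w^2 for negative ones; W(w) taken as 1 in-band.
N = 512                          # assumed length for the demonstration
A, b = 9.0, 110.0
OMEGA = (4.3 / 7.16) * math.pi   # band edge in rad/sample at 14.32 MHz sampling

F = [0j] * N
for k in range(1, N // 2):
    w = 2 * math.pi * k / N      # positive frequency of bin k
    if w <= OMEGA:
        F[k] = A * cmath.exp(1j * b * w * w)
        F[N - k] = F[k].conjugate()   # Hermitian symmetry -> real f(t)

# Inverse DFT (O(N^2), acceptable at this size).
f_t = [sum(F[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
       for n in range(N)]

# Imaginary parts are only rounding noise: the synthesized signal is real.
print(max(abs(x.imag) for x in f_t) < 1e-9)  # True
```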
3.2.3 References
1. ATSC: “Ghost Canceling Reference Signal for NTSC,” Advanced Television Systems
Committee, Washington, D.C., document A/49, 1993.
Chapter
3.3
The ATSC DTV System
3.3.1 Introduction1
The Digital Television (DTV) Standard has ushered in a new era in television broadcasting [1].
The impact of DTV is more significant than simply moving from an analog system to a digital
system. Rather, DTV permits a level of flexibility wholly unattainable with analog broadcasting.
An important element of this flexibility is the ability to expand system functions by building
upon the technical foundations specified in ATSC standards A/53, “ATSC Digital Television
Standard,” and A/52, “Digital Audio Compression (AC-3) Standard.”
With NTSC, and its PAL and SECAM counterparts, the video, audio, and some limited data
information are conveyed by modulating an RF carrier in such a way that a receiver of relatively
simple design can decode and reassemble the various elements of the signal to produce a pro-
gram consisting of video and audio, and perhaps related data (e.g., closed captioning). As such, a
complete program is transmitted by the broadcaster that is essentially in finished form. In the
DTV system, however, additional levels of processing are required after the receiver demodu-
lates the RF signal. The receiver processes the digital bit stream extracted from the received sig-
nal to yield a collection of program elements (video, audio, and/or data) that match the service(s)
that the consumer has selected. This selection is made using system and service information that
is also transmitted. Audio and video are delivered in digitally compressed form and must be
decoded for presentation. Audio may be monophonic, stereo, or multi-channel. Data may supple-
ment the main video/audio program (e.g., closed captioning, descriptive text, or commentary) or
it may be a stand-alone service (e.g., a stock or news ticker).
1. This chapter is based on: ATSC, “Guide to the Use of the Digital Television Standard,”
Advanced Television Systems Committee, Washington, D.C., Doc. A/54A, December 4,
2003; and ATSC, “ATSC Digital Television Standard,” Advanced Television Systems Com-
mittee, Washington, D.C., Doc. A/53C, May 21, 2004. Used with permission.
Editor’s note: This chapter provides an overview of the ATSC DTV transmission system based
on ATSC A/53C, and A54A. For full details on this system, readers are encouraged to
download the source documents from the ATSC Web site (http://www.atsc.org). All ATSC
Standards, Recommended Practices, and Information Guides are available at no charge.
The nature of the DTV system is such that it is possible to provide new features that build
upon the infrastructure within the broadcast plant and the receiver. One of the major enabling
developments of digital television, in fact, is the integration of significant processing power in
the receiving device itself. Historically, in the design of any broadcast system—be it radio or
television—the goal has always been to concentrate technical sophistication (when needed) at the
transmission end and thereby facilitate simpler receivers. Because there are far more receivers
than transmitters, this approach has obvious business advantages. While this trend continues to
be true, the complexity of the transmitted bit stream and compression of the audio and video
components require a significant amount of processing power in the receiver, which is practical
because of the enormous advancements made in computing technology. Once a receiver reaches
a certain level of sophistication (and market success) additional processing power is essentially
“free.”
The Digital Television Standard describes a system designed to transmit high quality video
and audio and ancillary data over a single 6 MHz channel. The system can deliver about 19
Mbits/s in a 6 MHz terrestrial broadcasting channel and about 38 Mbits/s in a 6 MHz cable tele-
vision channel. This means that encoding HD video essence at 1.106 Gbits/s¹ (highest rate progressive input) or 1.244 Gbits/s² (highest rate interlaced picture input) requires a bit rate
reduction by about a factor of 50 (when the overhead numbers are added, the rates become
closer). To achieve this bit rate reduction, the system is designed to be efficient in utilizing avail-
able channel capacity by exploiting complex video and audio compression technology.
The compression scheme optimizes the throughput of the transmission channel by represent-
ing the video, audio, and data sources with as few bits as possible while preserving the level of
quality required for the given application.
The RF/transmission subsystems described in the Digital Television Standard are designed
specifically for terrestrial and cable applications. The structure is such that the video, audio, and
service multiplex/transport subsystems are useful in other applications.
1. 720 × 1280 × 60 × 2 × 10 = 1.105920 Gbits/s (the 2 represents the factor needed for 4:2:2
color subsampling, and the 10 is for 10-bit systems).
2. 1080 × 1920 × 30 × 2 × 10 = 1.244160 Gbits/s (the 2 represents the factor needed for 4:2:2
color subsampling, and the 10 is for 10-bit systems).
Figure 3.3.1 Digital terrestrial television broadcasting model. (From [2]. Used with permission.)
Figure 3.3.2 High-level view of the DTV encoding system. (From [2]. Used with permission.)
sent the transmitted signal. The modulation (or physical layer) uses the digital data-stream infor-
mation to modulate the transmitted signal. The modulation subsystem offers two modes:
• Terrestrial broadcast mode (8-VSB)
• High-data-rate mode (16-VSB).
Figure 3.3.2 gives a high-level view of the encoding equipment. This view is not intended to
be complete, but is used to illustrate the relationship of various clock frequencies within the
encoder. There are two domains within the encoder where a set of frequencies are related: the
source-coding domain and the channel-coding domain.
The source-coding domain, represented schematically by the video, audio, and transport
encoders, uses a family of frequencies that are based on a 27 MHz clock (f27MHz ). This clock is
used to generate a 42-bit sample of the frequency, which is partitioned into two elements defined
by the MPEG-2 specification:
• The 33-bit program clock reference base
• The 9-bit program clock reference extension
The 33-bit program clock reference base is equivalent to a sample of a 90 kHz clock that is
locked in frequency to the 27 MHz clock, and is used by the audio and video source encoders
when encoding the presentation time stamp (PTS) and the decode time stamp (DTS). The audio
and video sampling clocks, fa and fv , respectively, must be frequency-locked to the 27 MHz
clock. This condition can be expressed as the requirement that there exist two pairs of integers,
(na, ma) and (nV, mV), such that
fa = (na / ma) × 27 MHz  (3.3.1)
and
fv = (nv / mv) × 27 MHz  (3.3.2)
The channel-coding domain is represented by the forward error correction/sync insertion sub-
system and the vestigial sideband (VSB) modulator. The relevant frequencies in this domain are
the VSB symbol frequency ( fsym) and the frequency of the transport stream ( fTP), which is the
frequency of transmission of the encoded transport stream. These two frequencies must be
locked, having the relation
fTP = 2 × (188/208) × (312/313) × fsym  (3.3.3)
The signals in the two domains are not required to be frequency-locked to each other and, in
many implementations, will operate asynchronously. In such systems, the frequency drift can
necessitate the occasional insertion or deletion of a null packet from within the transport stream,
thereby accommodating the frequency disparity.
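Working Eq. (3.3.3) through numerically recovers the familiar ~19.39 Mbits/s transport rate. The terrestrial symbol rate used here, Sr = (4.5/286) × 684 MHz, is the value derived later in the chapter (Eq. 3.3.4):

```python
# Transport-stream rate from the VSB symbol rate, per Eq. (3.3.3).
f_sym = 4.5e6 * 684 / 286                     # ~10.762 Msymbols/s
f_TP = 2 * (188 / 208) * (312 / 313) * f_sym  # transport rate, bits/s

print(round(f_TP / 1e6, 2))  # 19.39
```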
A basic block diagram representation of the overall ATSC DTV system is shown in Figure
3.3.3.
3.3.2b Receiver
The ATSC receiver must recover the bits representing the original video, audio and other data
from the modulated signal [1]. In particular, the receiver must:
• Tune the selected 6 MHz channel
• Reject adjacent channels and other sources of interference
• Demodulate (equalize as necessary) the received signal, applying error correction to produce
a transport bit stream
• Identify the elements of the bit stream using a transport layer processor
• Select each desired element and send it to its appropriate processor
• Decode and synchronize each element
Figure 3.3.3 Sample organization of functionality in a transmitter-receiver pair for a single DTV program. (From [1]. Used with permission.) Note: PSIP refers to ATSC Standard A/65, which was developed subsequent to ATSC Standards A/52 and A/53 and to this Guide; for details regarding PSIP, refer to A/65.
Vertical Size  Horizontal Size  Aspect Ratio         Frame-Rate Code  Scanning Sequence
1080¹          1920             16:9, square pixels  1,2,4,5          Progressive
                                                     4,5              Interlaced
720            1280             16:9, square pixels  1,2,4,5,7,8      Progressive
480            704              4:3, 16:9            1,2,4,5,7,8      Progressive
                                                     4,5              Interlaced
480            640              4:3, square pixels   1,2,4,5,7,8      Progressive
                                                     4,5              Interlaced
Frame-rate code: 1 = 23.976 Hz, 2 = 24 Hz, 4 = 29.97 Hz, 5 = 30 Hz, 7 = 59.94 Hz, 8 = 60 Hz
¹ Note that 1088 lines actually are coded in order to satisfy the MPEG-2 requirement that the coded vertical size be a multiple of 16 (progressive scan) or 32 (interlaced scan).
Figure 3.3.4 Simplified block diagram of an example VSB transmitter. (From [2]. Used with permission.)
Figure 3.3.6 Nominal VSB channel occupancy: a flat region of 5.38 MHz with 0.31 MHz transition regions at each edge of the 6.0 MHz channel, and a pilot added at the suppressed-carrier frequency, 0.31 MHz from the lower band edge. (From [2]. Used with permission.)
The symbol rate is

Sr = (4.5 / 286) × 684 = 10.76 … MHz (3.3.4)

The data segment rate is

fseg = Sr / 832 = 12.94 … × 10^3 data segments/s (3.3.5)

The frame rate is

fframe = fseg / 626 = 20.66 … frames/s (3.3.6)
The symbol rate Sr and the transport rate Tr are locked to each other in frequency.
The 8-level symbols combined with the binary data segment sync and data field sync signals
are used to suppressed-carrier-modulate a single carrier. Before transmission, however, most of
the lower sideband is removed. The resulting spectrum is flat, except for the band edges where a
nominal square-root raised-cosine response results in 620 kHz transition regions. The nominal
VSB transmission spectrum is shown in Figure 3.3.6. It includes a small pilot signal at the sup-
pressed-carrier frequency, 310 kHz from the lower band edge.
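The rate relationships in Equations (3.3.4) through (3.3.6) can be checked numerically. The following is an illustrative sketch (plain Python, not part of the standard):

```python
# ATSC 8-VSB clock relationships derived from the NTSC 4.5 MHz reference.
F_REF = 4.5e6                 # Hz, NTSC aural-carrier offset reference

Sr = F_REF / 286 * 684        # symbol rate, Eq. (3.3.4)
f_seg = Sr / 832              # segment rate, Eq. (3.3.5): 832 symbols/segment
f_frame = f_seg / 626         # frame rate, Eq. (3.3.6): 626 segments/frame

print(round(Sr / 1e6, 2))     # → 10.76 (MHz)
print(round(f_seg, 2))        # → 12935.38 (segments/s)
print(round(f_frame, 2))      # → 20.66 (frames/s)
```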
this byte is used to individually XOR the corresponding input data bit. The randomizer-generator
polynomial and initialization are shown in Figure 3.3.7.
Although a thorough knowledge of the channel error protection and synchronization system
is not required for typical end-users, a familiarity with the basic principles of operation—as out-
lined in the following sections—is useful in understanding the important functions performed.
Reed-Solomon Encoder
The Reed-Solomon (RS) code used in the VSB transmission subsystem is a t = 10 (207,187)
code [2]. The RS data block size is 187 bytes, with 20 RS parity bytes added for error correction.
A total RS block size of 207 bytes is transmitted per data segment. In creating bytes from the
serial bit stream, the most significant bit (MSB) is the first serial bit. The 20 RS parity bytes are
sent at the end of the data segment. The parity-generator polynomial and the primitive-field-gen-
erator polynomial (with the fundamental supporting equations) are shown in Figure 3.3.8.
Reed-Solomon encoding/decoding is commonly expressed as the ratio of the total number of bytes in a transmitted packet to the actual application payload bytes, where the difference is the RS parity overhead (i.e., 207,187).
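The (207,187) parameters fix the error-correcting capacity directly; a small sanity check (illustrative Python, not an RS implementation):

```python
n, k = 207, 187                  # RS block: total bytes, payload bytes
parity = n - k                   # 20 parity bytes per data segment
t = parity // 2                  # up to t = 10 byte errors correctable per block
overhead = parity / n            # fraction of the segment spent on FEC

print(parity, t)                 # → 20 10
print(round(overhead * 100, 1))  # → 9.7 (percent)
```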
Interleaving
The interleaver employed in the VSB transmission system is a 52-data-segment (intersegment)
convolutional byte interleaver [2]. Interleaving is provided to a depth of about one-sixth of a data
field (4 ms deep). Only data bytes are interleaved. The interleaver is synchronized to the first
data byte of the data field. Intrasegment interleaving also is performed for the benefit of the trel-
lis coding process. The convolutional interleave stage is shown in Figure 3.3.9.
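A (B = 52, M = 4) convolutional byte interleaver can be modeled with one FIFO per commutator branch. This Python sketch illustrates the structure only (it is not the standard's reference implementation, and the delay bookkeeping is simplified to commutator cycles):

```python
from collections import deque

B, M = 52, 4  # branches; delay increment per branch, in bytes

class ConvolutionalInterleaver:
    """Branch i delays its bytes by i*M commutator cycles (branch 0: none)."""
    def __init__(self, branches=B, increment=M):
        self.fifos = [deque([0] * (i * increment)) for i in range(branches)]
        self.branch = 0
    def push(self, byte):
        fifo = self.fifos[self.branch]
        fifo.append(byte)             # newest byte in ...
        out = fifo.popleft()          # ... oldest byte out (pass-through on branch 0)
        self.branch = (self.branch + 1) % len(self.fifos)
        return out

il = ConvolutionalInterleaver()
out = [il.push(b) for b in range(1, 105)]
print(out[0], out[52])  # → 1 53  (branch 0 passes bytes straight through)
print(out[1])           # → 0    (branch 1 is still emitting its preload zeros)
```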
Trellis Coding
The 8-VSB transmission subsystem employs a 2/3-rate (R = 2/3) trellis code (with one unen-
coded bit that is precoded). Put another way, one input bit is encoded into two output bits using a
1/2-rate convolutional code while the other input bit is precoded [2]. The signaling waveform
used with the trellis code is an 8-level (3-bit), 1-dimensional constellation. Trellis code intraseg-
ment interleaving is used. This requires 12 identical trellis encoders and precoders operating on
interleaved data symbols. The code interleaving is accomplished by encoding symbols (0, 12, 24,
36 ...) as one group, symbols (1, 13, 25, 37 ...) as a second group, symbols (2, 14, 26, 38 ...) as a
third group, and so on for a total of 12 groups.
In creating serial bits from parallel bytes, the MSB is sent out first: (7, 6, 5, 4, 3, 2, 1, 0). The
MSB is precoded (7, 5, 3, 1) and the LSB (least significant bit) is feedback-convolutional
encoded (6, 4, 2, 0). Standard 4-state optimal Ungerboeck codes are used for the encoding. The
trellis code utilizes the 4-state feedback encoder shown in Figure 3.3.10. Also shown in the fig-
ure is the precoder and the symbol mapper. The trellis code and precoder intrasegment inter-
leaver, which feed the mapper shown in Figure 3.3.10, are illustrated in Figure 3.3.11. As shown
in the figure, data bytes are fed from the byte interleaver to the trellis coder and precoder; then
they are processed as whole bytes by each of the 12 encoders. Each byte produces four symbols
from a single encoder.
The output multiplexer shown in Figure 3.3.11 advances by four symbols on each segment
boundary. However, the state of the trellis encoder is not advanced. The data coming out of the
multiplexer follows normal ordering from encoders 0 through 11 for the first segment of the
Figure 3.3.7 Randomizer polynomial for the DTV transmission subsystem: generator polynomial G(16) = X16 + X13 + X12 + X11 + X7 + X6 + X3 + X + 1. The initialization (preload) to F180 hex occurs during the field sync interval. The generator is shifted with the byte clock, and one 8-bit byte of data is extracted per cycle. (From [2]. Used with permission.)
Figure 3.3.8 Reed-Solomon (207,187) t = 10 parity-generator polynomial; the product defining the generator is taken over i = 0 to 2t – 1. (From [2]. Used with permission.)
Figure 3.3.9 Convolutional interleaving scheme (byte shift register illustration): B = 52 branches carry bytes from the Reed-Solomon encoder to the precoder and trellis encoder, with delays of 0, M, 2M, … (B – 1)M bytes, where M = 4 bytes. (From [2]. Used with permission.)
Figure 3.3.10 An 8-VSB trellis encoder, precoder, and symbol mapper (D = 12 symbols delay). The mapper assigns the coder output bits Z2 Z1 Z0 to nominal levels R as follows: 000 → –7, 001 → –5, 010 → –3, 011 → –1, 100 → +1, 101 → +3, 110 → +5, 111 → +7. (From [2]. Used with permission.)
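The mapper portion of Figure 3.3.10 is a direct table lookup; treating Z2 Z1 Z0 as a 3-bit integer, the assignment reduces to a linear expression (illustrative Python sketch):

```python
def map_8vsb(z2, z1, z0):
    """Map trellis-coder output bits (Z2 Z1 Z0) to the nominal 8-VSB level R."""
    return 2 * (4 * z2 + 2 * z1 + z0) - 7   # 000 -> -7 ... 111 -> +7

levels = [map_8vsb((n >> 2) & 1, (n >> 1) & 1, n & 1) for n in range(8)]
print(levels)  # → [-7, -5, -3, -1, 1, 3, 5, 7]
```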
frame; on the second segment, the order changes, and symbols are read from encoders 4 through
11, and then 0 through 3. The third segment reads from encoder 8 through 11 and then 0 through
7. This 3-segment pattern repeats through the 312 data segments of the frame. Table 3.3.4 shows
the interleaving sequence for the first three data segments of the frame.
After the data segment sync is inserted, the ordering of the data symbols is such that symbols
from each encoder occur at a spacing of 12 symbols.
A complete conversion of parallel bytes to serial bits needs 828 bytes to produce 6624 bits.
Data symbols are created from 2 bits sent in MSB order, so a complete conversion operation
Figure 3.3.11 Trellis code interleaver: interleaved data are distributed to 12 parallel trellis encoder and precoder units (#0 through #11), whose trellis-encoded and precoded outputs feed the symbol mapper. (From [2]. Used with permission.)
yields 3312 data symbols, which corresponds to four segments of 828 data symbols. A total of
3312 data symbols divided by 12 trellis encoders gives 276 symbols per trellis encoder, and 276
symbols divided by 4 symbols per byte gives 69 bytes per trellis encoder.
The conversion starts with the first segment of the field and proceeds with groups of four seg-
ments until the end of the field. A total of 312 segments per field divided by 4 gives 78 conver-
sion operations per field.
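The byte-and-symbol bookkeeping above reduces to a few integer identities, which can be verified directly (illustrative Python):

```python
BYTES = 828                            # bytes per 4-segment conversion group
bits = BYTES * 8                       # serial bits
symbols = bits // 2                    # 2 bits per data symbol
per_encoder = symbols // 12            # symbols handled by each trellis encoder
bytes_per_encoder = per_encoder // 4   # 4 symbols per byte
conversions = 312 // 4                 # conversion operations per field

print(bits, symbols, per_encoder, bytes_per_encoder, conversions)
# → 6624 3312 276 69 78
```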
During segment sync, the input to four encoders is skipped, and the encoders cycle with no
input. The input is held until the next multiplex cycle, then is fed to the correct encoder.
3.3.3b Modulation
You will recall that Figure 3.3.10 showed the mapping of the outputs of the trellis encoder to the nominal signal levels (–7, –5, –3, –1, 1, 3, 5, 7). As detailed in Figure 3.3.12, the nominal levels of data segment sync and data field sync are –5 and +5. The value of 1.25 is added to all
Figure 3.3.12 The 8-VSB data segment: a 4-symbol data segment sync followed by 828 symbols (207 bytes) of data plus FEC, for a total of 832 symbols (208 bytes) per data segment. The levels shown (–7 through +7) are before pilot addition (pilot = 1.25). (From [2]. Used with permission.)
Figure 3.3.13 Nominal VSB system channel response (linear-phase raised-cosine Nyquist filter, R = 0.1152): flat over 5.38 MHz, with transition regions d = 0.31 MHz at each end of the 6 MHz channel. (From [2]. Used with permission.)
these nominal levels after the bit-to-symbol mapping function for the purpose of creating a small
pilot carrier [2]. The frequency of the pilot is the same as the suppressed-carrier frequency. The
in-phase pilot is 11.3 dB below the average data signal power.
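The 11.3 dB figure follows from the 1.25 pilot offset relative to the average power of the equiprobable 8-VSB symbol levels; a short verification (illustrative Python):

```python
import math

levels = [-7, -5, -3, -1, 1, 3, 5, 7]        # equiprobable 8-VSB symbol levels
avg_data_power = sum(v * v for v in levels) / len(levels)   # = 21.0
pilot_power = 1.25 ** 2                      # DC offset added to every symbol

ratio_db = 10 * math.log10(avg_data_power / pilot_power)
print(round(ratio_db, 1))  # → 11.3 (dB below average data power)
```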
The VSB modulator receives the 10.76 Msymbols/s, 8-level trellis-encoded composite data
signal with pilot and sync added. The DTV system performance is based on a linear-phase
raised-cosine Nyquist filter response in the concatenated transmitter and receiver, as illustrated
in Figure 3.3.13. The system filter response is essentially flat across the entire band, except for
the transition regions at each end of the band. Nominally, the rolloff in the transmitter has the
response of a linear-phase root raised-cosine filter.
Figure 3.3.14 Typical data segment for the 16-VSB mode: a 4-symbol data segment sync followed by 828 data symbols, for a total of 832 symbols per data segment. The levels shown (–15 through +15) are before pilot addition (pilot = 2.5). (From [2]. Used with permission.)
signal power; and the symbol, segment, and field signals and rates all are the same as well,
allowing either receiver type to lock up on the other’s transmitted signal. Also, the data frame
definitions are identical. The primary difference is the number of transmitted levels (8 vs. 16) and
the use of trellis coding and NTSC interference-rejection filtering in the terrestrial system.
The RF spectrum of the high data-rate modem transmitter looks identical to the terrestrial
system. Figure 3.3.14 illustrates a typical data segment, where the number of data levels is seen
to be 16 as a result of the doubled data rate. Each portion of 828 data symbols represents 187
data bytes and 20 Reed-Solomon bytes, followed by a second group of 187 data bytes and 20
Reed-Solomon bytes (before convolutional interleaving).
Figure 3.3.15 shows a functional block diagram of the high data-rate transmitter. It is identical
to the terrestrial VSB system, except that the trellis coding is replaced with a mapper that con-
verts data to multilevel symbols. The interleaver is a 26-data-segment intersegment convolu-
tional byte interleaver. Interleaving is provided to a depth of about one-twelfth of a data field (2
ms deep). Only data bytes are interleaved. Figure 3.3.16 shows the mapping of the outputs of the
interleaver to the nominal signal levels (–15, –13, –11, ..., 11, 13, 15). As shown in Figure 3.3.14,
the nominal levels of data segment sync and data field sync are –9 and +9. The value of 2.5 is
added to all these nominal levels after the bit-to-symbol mapping for the purpose of creating a
small pilot carrier. The frequency of the in-phase pilot is the same as the suppressed-carrier fre-
quency. The modulation method of the high data-rate mode is identical to the terrestrial mode
except that the number of transmitted levels is 16 instead of 8.
Figure 3.3.15 Functional block diagram of the 16-VSB transmitter: data randomizer, Reed-Solomon encoder, data interleaver, mapper, MUX (where segment sync and field sync are inserted), pilot insertion, VSB modulator, and RF up-converter. (From [2]. Used with permission.)
Figure 3.3.16 A 16-VSB mapping table. Each byte from the byte interleaver is split into two nibbles, MSB first (1st nibble: bits 7–4 as Xa1 Xb1 Xc1 Xd1; 2nd nibble: bits 3–0 as Xa2 Xb2 Xc2 Xd2), and each nibble Xa Xb Xc Xd is mapped to an output level as follows: 1111 → +15, 1110 → +13, 1101 → +11, 1100 → +9, 1011 → +7, 1010 → +5, 1001 → +3, 1000 → +1, 0111 → –1, 0110 → –3, 0101 → –5, 0100 → –7, 0011 → –9, 0010 → –11, 0001 → –13, 0000 → –15. (From [2]. Used with permission.)
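Treating each nibble in Figure 3.3.16 as a 4-bit integer, the mapping is again linear; an illustrative Python sketch:

```python
def map_16vsb(nibble):
    """Map a 4-bit nibble (Xa Xb Xc Xd, MSB first) to its nominal 16-VSB level."""
    return 2 * nibble - 15     # 0000 -> -15 ... 1111 -> +15

def byte_to_levels(byte):
    """Split a byte into MSB-first nibbles and map each (two symbols per byte)."""
    return [map_16vsb(byte >> 4), map_16vsb(byte & 0x0F)]

print(byte_to_levels(0xFF))  # → [15, 15]
print(byte_to_levels(0x08))  # → [-15, 1]
```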
(4.5 / 286) × 684 = 10.76 … million symbols/second (megabaud) (3.3.7)
• 4.5 MHz = the center frequency of the audio carrier offset in NTSC. This number was tradi-
tionally used in NTSC literature to derive the color subcarrier frequency and scanning rates.
In modern equipment, this may start with a precision 10 MHz reference, which is then multi-
plied by 9/20.
• 4.5 MHz/286 = the horizontal scan rate of NTSC, 15,734.2657… Hz (note that the color subcarrier is 455/2 times this, or 3,579,545 + 5/11 Hz).
• 684: this multiplier gives a symbol rate for an efficient use of bandwidth in 6 MHz. It requires
a filter with Nyquist roll-off that is a fairly sharp cutoff (11 percent excess bandwidth), which
is still realizable with a reasonable surface acoustic wave (SAW) filter or digital filter.
In the terrestrial broadcast mode, channel symbols carry three bits/symbol of trellis-coded data. The trellis code rate is 2/3, providing 2 bits/symbol of gross payload. Therefore the gross payload is

10.76 … × 2 = 21.52 … Mbps (3.3.8)
To find the net payload delivered to a decoder it is necessary to adjust Equation (3.3.8) for the
overhead of the data segment sync, data field sync, and Reed-Solomon FEC.
To get the net bit rate for an MPEG-2 stream carried by the system (and supplied to an MPEG
transport decoder), it is first noted that the MPEG sync bytes are removed from the data stream
input to the 8-VSB transmitter and replaced with segment sync, and later reconstituted at the
receiver. For throughput of MPEG packets (the only allowed transport mechanism) segment sync
is simply equivalent to transmitting the MPEG sync byte, and does not reduce the net data rate.
The net bit rate of an MPEG-2 stream carried by the system and delivered to the transport
decoder is accordingly reduced by the data field sync (one segment of every 313) and the Reed-
Solomon coding (20 bytes of every 208)
21.52 … × (312/313) × (188/208) = 19.39 … Mbps (3.3.9)
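Carrying the symbol rate and the two overhead factors through numerically reproduces the familiar 19.39 Mbps figure (illustrative Python sketch):

```python
Sr = 4.5e6 / 286 * 684        # symbol rate, ~10.76 Msym/s
gross = Sr * 2                # 2 payload bits/symbol after 2/3-rate trellis coding

# Field sync costs 1 segment in 313; RS coding carries 188 useful bytes per 208.
net = gross * (312 / 313) * (188 / 208)
print(round(net / 1e6, 2))    # → 19.39 (Mbps)
```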
The net bit rate supplied to the transport decoder for the high data rate mode, which carries 4 bits/symbol with no trellis overhead, is

43.05 … × (312/313) × (188/208) = 38.78 … Mbps (3.3.10)
Figure 3.3.17 Segment-error probability for 8-VSB with 4-state trellis coding; RS (207,187). Note: the threshold of visibility (TOV) has been measured to occur at a S/N of 14.9 dB, when there are 2.5 segment errors per second, which is a segment error rate (SER) of 1.93 × 10^-4. (From [1]. Used with permission.)
shows that 99.9 percent of the time the transient peak power is within 6.3 dB of the average
power.
Figure 3.3.18 The cumulative distribution function of 8-VSB peak-to-average power ratio in an
ideal linear system. (From [1]. Used with permission.)
can provide independent control of the outer portions of the spectrum (beyond the Nyquist
slopes).
The transmitter vestigial sideband filtering is sometimes implemented by sideband cancella-
tion, using the phasing method. In this method, the baseband data signal is supplied to digital fil-
tering that generates in-phase and quadrature-phase digital modulation signals for application to
respective D/A converters. This filtering process provides the root raised cosine Nyquist filtering
and provides compensation for the (sin x) / x frequency responses of the D/A converters, as well.
The baseband signals are converted to analog form. The in-phase signal modulates the amplitude
of the IF carrier at zero degrees phase, while the quadrature signal modulates a 90-degree shifted
version of the carrier. The amplitude-modulated quadrature IF carriers are added to create the
vestigial sideband IF signal, canceling the unwanted sideband and increasing the desired side-
band by 6 dB. The nominal frequency of the IF carrier (and small in-phase pilot) in the prototype
hardware was 46.69 MHz, which is equal to the IF center frequency (44.000 MHz) plus the sym-
bol rate divided by 4
10.762 / 4 = 2.6905 MHz (3.3.11)
Note that the frequency is expressed with respect to the lower adjacent analog video carrier,
rather than the nominal channel edge. This is because the beat frequency depends on this rela-
tionship, and therefore the DTV pilot frequency must track any offsets in the analog video carrier
frequency. The offset in the FCC rules is related to the particular horizontal scanning rate of
NTSC, and can easily be modified for PAL. The offset Of was obtained from
Of = 455 × (Fh / 2) + 191 × (Fh / 2) – 29.97 = 5,082,138 Hz (3.3.12)
Fpilot = Fvis(n) – 70.5 × Fseg = 338.0556 kHz (for no NTSC analog offset) (3.3.13)
Where:
F pilot = DTV pilot frequency above lower channel edge
Fvis(n) = NTSC visual carrier frequency above lower channel edge
= 1,250 kHz for no NTSC offset (as shown)
= 1,240 kHz for minus offset
= 1,260 kHz for plus offset
Fseg = ATSC data segment rate = symbol clock frequency / 832 = 12,935.381971 Hz
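Equations (3.3.12) and (3.3.13) can both be reproduced from the NTSC horizontal rate; a numeric sketch (illustrative Python):

```python
Fh = 4.5e6 / 286                       # NTSC horizontal rate, 15,734.2657... Hz

# Eq. (3.3.12): FCC co-channel offset relative to the analog visual carrier
Of = 455 * (Fh / 2) + 191 * (Fh / 2) - 29.97
print(round(Of))                       # → 5082138 (Hz)

# Eq. (3.3.13): DTV pilot above the lower channel edge, no NTSC offset
F_vis = 1250e3                         # NTSC visual carrier above lower edge, Hz
F_seg = (Fh * 684) / 832               # data segment rate, ~12,935.38 Hz
F_pilot = F_vis - 70.5 * F_seg
print(round(F_pilot / 1e3, 4))         # → 338.0556 (kHz)
```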
Figure 3.3.19 The error probability of the 16-VSB signal: symbol error rate, and segment error rate after Reed-Solomon FEC, as a function of S/N (RMS). (From [1]. Used with permission.)
The factor of 70.5 is chosen to provide the best overall comb filtering of analog color TV co-
channel interference. The use of a value equal to an integer +0.5 results in co-channel analog TV
interference being out-of-phase on successive data segment syncs.
Note that in this case the frequency tolerance is plus or minus one kHz. More precision is not
required. Also note that a different data segment rate would be used for calculating offsets for 7
or 8 MHz systems.
The offset applied between two co-channel DTV stations is

Foffset = 1.5 × Fseg = 19.403 … kHz

Where:
Foffset = offset to be added to one of the two DTV carriers
Fseg = 12,935.381971 Hz (as defined previously)
This results in a pilot carrier 328.84363 kHz above the lower band edge, provided neither
DTV station has any other offset.
Use of the factor 1.5 results in the best co-channel rejection, as determined experimentally
with prototype equipment. The use of an integer +0.5 results in co-channel interference alternat-
ing phase on successive segment syncs.
Figure 3.3.20 Cumulative distribution function of the 16-VSB peak-to-average power ratio. (From
[1]. Used with permission.)
3.3.4 References
1. ATSC, “Guide to the Use of the Digital Television Standard,” Advanced Television Systems
Committee, Washington, D.C., Doc. A/54A, December 4, 2003.
2. ATSC, “ATSC Digital Television Standard,” Advanced Television Systems Committee,
Washington, D.C., Doc. A/53C, May 21, 2004.
3. ITU-R Document TG11/3-2, “Outline of Work for Task Group 11/3, Digital Terrestrial
Television Broadcasting,” June 30, 1992.
4. Chairman, ITU-R Task Group 11/3, “Report of the Second Meeting of ITU-R Task Group
11/3, Geneva, Oct. 13-19, 1993,” p. 40, Jan. 5, 1994.
5. FCC: “Memorandum Opinion and Order on Reconsideration of the Sixth Report and Order,” Federal Communications Commission, Washington, D.C., February 17, 1998.
3.3.5 Bibliography
Rhodes, Charles W.: “Terrestrial High-Definition Television,” The Electronics Handbook, Jerry
C. Whitaker (ed.), CRC Press, Boca Raton, Fla., pp. 1599–1610, 1996.
Chapter
3.4
Longley-Rice Propagation Model
3.4.1 Introduction1
The Longley-Rice radio propagation model is used to make predictions of radio field strength at
specific geographic points based on the elevation profile of terrain between the transmitter and
each specific reception point [1]. A computer is needed to make these predictions because of the
large number of reception points that must be individually examined. Computer code for the
Longley-Rice point-to-point radio propagation model is published in [2].
This chapter describes the process used by the U.S. Federal Communications Commission
(FCC) in determining the digital television (DTV) channel plan using the Longley-Rice propaga-
tion model.
1. This chapter is based on: FCC: “OET Bulletin No. 69—Longley-Rice Methodology for Eval-
uating TV Coverage and Interference,” Federal Communications Commission, Washington,
D.C., July 2, 1997.
Table 3.4.1 Field Strengths Defining the Area Subject to Calculation for Analog Stations (After [1].)

Channels   Defining Field Strength, dBu, to be Predicted Using F(50, 50) Curves
2–6        47
7–13       56
14–69      64 – 20 log[615 / (channel mid-frequency)]

Table 3.4.2 Field Strengths Defining the Area Subject to Calculation for DTV Stations (After [1].)

Channels   Defining Field Strength, dBu, to be Predicted Using F(50, 90) Curves
2–6        28
7–13       36
14–69      41 – 20 log[615 / (channel mid-frequency)]
of FCC propagation curves for predicting field strength at 50 percent of locations 90 percent of the time is found from

F(50, 90) = F(50, 50) – [F(50, 10) – F(50, 50)]

That is, the F(50, 90) value is lower than F(50, 50) by the same amount that F(50, 10) exceeds F(50, 50).
The defining field strengths for DTV service, contained in Section 73.622 of the FCC rules,
are shown in Table 3.4.2. These values are determined from the DTV planning factors identified
in Table 3.4.3. They are used first to determine the area subject to calculation using FCC curves,
and subsequently to determine whether service is present at particular points within this area
using Longley-Rice terrain-dependent prediction.
For digital TV, three different situations arise:
• For DTV stations of the initial allotment plan located at the initial reference coordinates, the
area subject to calculation extends in each direction to the distance at which the field strength
predicted by FCC curves falls to the value identified in Table 3.4.2. The bounding contour is
identical, in most cases, to that of the analog station with which the initial allotment is paired.
The initial allotment plan and reference coordinates are set forth in [3].
• For new DTV stations, the area subject to calculation extends from the transmitter site to the
distance at which the field strength predicted by FCC curves falls to the value identified in
Table 3.4.2.
• In the case where a DTV station of the initial allotment plan has moved, the area subject to
calculation is the combination (logical union) of the area determined for the initial allotment
and the area inside the contour which would apply in the case of a new DTV station.
These planning factors determine the minimum field strength for DTV reception as a function of frequency band, and as a function of channel number in the UHF band.
The adjustment, Ka = 20 log[615/(channel mid-frequency)], is added to Kd to account for the
fact that field strength requirements are greater for UHF channels above the geometric mean fre-
quency of the UHF band and smaller for UHF channels below that frequency. The geometric
mean frequency, 615 MHz, is approximately the mid-frequency of channel 38.
The modified Grade B contour of analog UHF stations is determined by applying this same
adjustment factor to the Grade B field strength given in 47 CFR §73.683. With this dipole factor
modification, the field strength defining the Grade B of UHF channels becomes
in place of simply 64. Thus, the modified Grade B contour for channel 14 is determined by a
median field strength of 61.7 dBu, and the value for channel 51 is 66.3 dBu. The modified values
have been presented in Table 3.4.1. This modified Grade B contour bounds the area subject to
Longley-Rice calculations for analog stations.
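The channel 14 value quoted above can be reproduced directly from the dipole adjustment (illustrative Python sketch; the channel 14 mid-frequency is taken here as 473 MHz, the center of the 470–476 MHz channel):

```python
import math

def dipole_adjusted_grade_b(mid_freq_mhz):
    """Modified analog UHF Grade B field strength, dBu (64 dBu at 615 MHz)."""
    return 64 - 20 * math.log10(615 / mid_freq_mhz)

print(round(dipole_adjusted_grade_b(473), 1))  # → 61.7 (dBu, channel 14)
```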
The values appearing in Table 3.4.2 follow from the planning factors. They were derived from
Table 3.4.3 by solving the equation
For a new DTV station with a particular authorized set of facilities, the values given in Table
3.4.2 determine the contour within which the FCC makes all subsequent calculations of service
and interference.
necessary. This determination is made using information in the FCC Engineering Data Base of
April 3, 1997, including directional antenna data, and from terrain elevation data at points sepa-
rated by 3 arc-seconds of longitude and latitude. FCC curves (Section 73.699 of FCC rules) are
applied in the usual way, as described in Section 73.684 of the rules, to find this grade B contour
distance, with the exception that dipole factor considerations are applied to the field strength
contour for UHF.
Height above average terrain is determined every 45 degrees from terrain elevation data in
combination with the height of the transmitter radiation center above mean sea level, and by lin-
ear interpolation for compass directions in between. In cases where the TV Engineering Data
Base indicates that a directional antenna is employed, the ERP in each specific direction is deter-
mined through linear interpolation of the relative field values describing the directional pattern.
The directional pattern stored in the FCC Directional Antenna Data Base provides relative field
values at 10 degree intervals and may include additional values in special directions. The result
of linear interpolation of these relative field values is squared and multiplied by the overall max-
imum ERP listed for the station in the TV Engineering Data Base to find the ERP in a specific
direction.
The corresponding values of ERP for DTV signals in each direction are then calculated by a
further application of FCC curves, with noise-limited DTV coverage defined as the presence of
the field strengths identified in Table 3.4.2 at 50 percent of locations and 90 percent of the time.
These ERP values are computed for all 360 azimuths using the same radial-specific height above
average terrain as for the analog TV case, but now in conjunction with F(50, 90) curves.
Finally, the ERP for DTV is modified so that it does not exceed 1 megawatt and is not less
than 50 kilowatts. This is done by scaling the azimuthal power pattern rather than by truncation.
Thus, if replication by FCC curves as described above required an ERP of 2 megawatts, the
power pattern is reduced by a factor of 2 in all directions. The resulting ERP is the reference
value cited in Section 73.622 of the rules.
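The scale-rather-than-truncate rule described here might be sketched as follows (illustrative Python under the assumption that the limits apply to the pattern peak; the FCC's actual implementation is the Fortran code cited in [2]):

```python
ERP_MAX_KW, ERP_MIN_KW = 1000.0, 50.0   # 1 MW ceiling, 50 kW floor

def scale_dtv_pattern(erp_kw_by_azimuth):
    """Scale the whole azimuthal ERP pattern so its peak lands in [50, 1000] kW."""
    peak = max(erp_kw_by_azimuth)
    if peak > ERP_MAX_KW:
        factor = ERP_MAX_KW / peak      # e.g., a 2 MW peak halves all directions
    elif peak < ERP_MIN_KW:
        factor = ERP_MIN_KW / peak
    else:
        factor = 1.0
    return [e * factor for e in erp_kw_by_azimuth]

print(scale_dtv_pattern([2000.0, 1000.0]))  # → [1000.0, 500.0]
```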
Table 3.4.4 Parameter Values Used in FCC Implementation of the Longley-Rice Fortran Code
(After [1].)
Parameter Value Meaning/Comment
EPS 15.0 Relative permittivity of ground.
SGM 0.005 Ground conductivity, Siemens per meter.
ZSYS 0.0 Coordinated with setting of EN0. See page 72 of NTIA Report.
EN0 301.0 Surface refractivity in N-units (parts per million).
IPOL 0 Denotes horizontal polarization.
MDVAR 3 Code 3 sets broadcast mode of variability calculations.
KLIM 5 Climate code 5 for continental temperate.
HG(1) see text Height of the radiation center above ground.
HG(2) 10 m Height of TV receiving antenna above ground.
on a side were also expected to be consistent with the evaluations given in Appendix B of the
Sixth Report and Order.
Table 3.4.5a Interference Criteria for Co- and Adjacent Channels (After [1].)

Channel Offset        D/U Ratio, dB
                      Analog into Analog   DTV into Analog   Analog into DTV   DTV into DTV
–1 (lower adjacent)   –3                   –17               –48               –42
0 (co-channel)        +28                  +34               +2                +15
+1 (upper adjacent)   –13                  –12               –49               –43
Table 3.4.5b Interference Criteria for UHF Taboo Channels (After [1].)

Channel Offset Relative   D/U Ratio, dB
to Desired Channel N      Analog into Analog   DTV into Analog   Analog into DTV   DTV into DTV
N–8                       –32                  –32               NC                NC
N–7                       –30                  –35               NC                NC
N–4                       NC                   –34               NC                NC
N–3                       –33                  –30               NC                NC
N–2                       –26                  –24               NC                NC
N+2                       –29                  –28               NC                NC
N+3                       –34                  –34               NC                NC
N+4                       –23                  –25               NC                NC
N+7                       –33                  –34               NC                NC
N+8                       –41                  –43               NC                NC
N+14                      –25                  –33               NC                NC
N+15                      –9                   –31               NC                NC
(NC means not considered)
Table 3.4.6 Front-to-Back Ratios Assumed for Receiving Antennas (After [1].)
TV Service Front-to-Back Ratios, dB
Low VHF High VHF UHF
Analog 6 6 6
DTV 10 12 14
The discrimination, in relative volts, provided by the assumed receiving pattern is the fourth power of the cosine of the angle between the lines joining the desired and undesired stations to the reception point, but never more than the front-to-back ratios identified in Table 3.4.6. When both desired and undesired stations are dead ahead, the angle is 0.0, giving a cosine of unity, so there is no discrimination. When the undesired station is somewhat off-axis, the cosine is less than unity, bringing discrimination into play; and when the undesired station is far off axis, the maximum discrimination given by the front-to-back ratio is attained.
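This can be sketched as a small function (illustrative Python, under one plausible reading of the description: angles beyond 90 degrees are treated as fully off-axis and receive the full front-to-back discrimination):

```python
import math

def rx_discrimination_db(angle_deg, front_to_back_db):
    """Receiving-antenna discrimination in dB: cos^4 of the off-axis angle
    (in relative volts), limited to the front-to-back ratio."""
    if angle_deg > 90:
        return front_to_back_db           # far off axis: F/B limit attained
    c = math.cos(math.radians(angle_deg))
    if c <= 0:
        return front_to_back_db
    disc = 20 * math.log10(1.0 / c ** 4)  # fourth-power cosine, as voltage dB
    return min(disc, front_to_back_db)

print(rx_discrimination_db(0, 14))             # → 0.0 (dead ahead)
print(round(rx_discrimination_db(30, 14), 1))  # → 5.0
print(rx_discrimination_db(150, 14))           # → 14
```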
3.4.4 References
1. FCC: “OET Bulletin No. 69—Longley-Rice Methodology for Evaluating TV Coverage and
Interference,” Federal Communications Commission, Washington, D.C., July 2, 1997.
2. Hufford, G. A., A. G. Longley, and W. A. Kissick: A Guide to the Use of the ITS Irregular
Terrain Model in the Area Prediction Mode, U.S. Department of Commerce, Washington,
D.C., NTIA Report 82-100, April 1982. (Note: some modifications to the code were
described by G. A. Hufford in a memorandum to users of the model dated January 30,
1985. With these modifications, the code is referred to as Version 1.2.2 of the Longley-
Rice model.)
3. FCC: “Appendix B, Sixth Report and Order,” MM Docket 87-268, FCC 97-115, Federal
Communications Commission, Washington, D.C., April 3, 1997.
Chapter
3.5
Television Transmitters
John T. Wilner
3.5.1 Introduction
Any analog television transmitter consists of two basic subsystems:
• The visual section, which accepts the video input, amplitude-modulates an RF carrier, and
amplifies the signal to feed the antenna system
• The aural section, which accepts the audio input, frequency-modulates a separate RF carrier,
and amplifies the signal to feed the antenna system
The visual and aural signals usually are combined to feed a single radiating antenna. Different
transmitter manufacturers have different philosophies with regard to the design and construction
of a transmitter. Some generalizations are possible, however, with respect to basic system config-
urations. Transmitters can be divided into categories based on the following criteria:
• Output power
• Final-stage design
• Modulation system
Figure 3.5.1 Idealized picture transmission amplitude characteristics for VHF and UHF analog
systems.
low feedline losses. To reach the exact power level, minor adjustments are made to the power
output of the transmitter, usually by a front-panel power control.
NTSC UHF stations that want to achieve their maximum licensed power output are faced with
installing a very high power transmitter. Typical pairings in the U.S. include a transmitter rated
for 220 kW and an antenna with a gain of 25, or a 110 kW transmitter and a gain-of-50 antenna.
In the latter case, the antenna could pose a significant problem. UHF antennas with gains in the
region of 50 are possible, but not advisable for most installations because of coverage problems
that can result.
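Both transmitter/antenna pairings above target the same ERP; a quick check with a hypothetical `erp_kw` helper (illustrative Python; feedline losses are ignored here for simplicity, which is why the text speaks of short runs and low-loss line):

```python
def erp_kw(tx_power_kw, antenna_gain, line_efficiency=1.0):
    """Effective radiated power: transmitter output x antenna power gain x line efficiency."""
    return tx_power_kw * antenna_gain * line_efficiency

print(erp_kw(220, 25))   # → 5500.0 (kW)
print(erp_kw(110, 50))   # → 5500.0 (kW)
```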
The amount of output power required of the transmitter will have a fundamental effect on sys-
tem design. Power levels dictate the following parameters:
• Whether the unit will be of solid-state or vacuum tube design
• Whether air, water, or vapor cooling must be used
• The type of power supply required
• The sophistication of the high-voltage control and supervisory circuitry
• Whether common amplification of the visual and aural signals (rather than separate visual and
aural amplifiers) is practical
Tetrodes generally are used for VHF transmitters above 25 kW, and specialized tetrodes can
be found in UHF transmitters at the 15 kW power level and higher. As solid-state technology
advances, the power levels possible in a reasonable transmitter design steadily increase, making
solid-state systems more attractive options.
In the realm of UHF transmitters, the klystron (and its related devices) reigns supreme.
Klystrons use an electron-bunching technique to generate high power—55 kW from a single tube
is not uncommon—at ultrahigh frequencies. They are currently the first choice for high-power,
high-frequency service. Klystrons, however, are not particularly efficient. A stock klystron with
no special circuitry might be only 40 percent efficient. Various schemes have been devised to
improve klystron efficiency, the best known of which is beam pulsing. Two types of pulsing are
in common use:
• Mod-anode pulsing, a technique designed to reduce power consumption of the device during
the color burst and video portion of the NTSC signal (and thereby improve overall system
efficiency)
• Annular control electrode (ACE) pulsing, which accomplishes basically the same goal by
incorporating the pulsing signal into a low-voltage stage of the transmitter, rather than a high-
voltage stage (as with mod-anode pulsing)
Experience has shown the ACE approach—and other similar designs—to provide greater
improvement in operating efficiency than mod-anode pulsing, and better reliability as well.
Several newer technologies offer additional ways to improve UHF transmitter efficiency,
including:
• The inductive output tube (IOT), also known as the Klystrode. This device essentially com-
bines the cathode/grid structure of the tetrode with the drift tube/collector structure of the
klystron.
• The multistage depressed collector (MSDC) klystron, a device that achieves greater effi-
ciency through a redesign of the collector assembly. A multistage collector is used to recover
energy from the electron stream inside the klystron and return it to the beam power supply.
Improved tetrode devices, featuring higher operating power at UHF and greater efficiency, have
also been developed.
A number of approaches can be taken to amplitude modulation of the visual carrier. Current
technology systems utilize low-level intermediate-frequency (IF) modulation. This approach
allows superior distortion correction, more accurate vestigial sideband shaping, and significant
economic advantages to the transmitter manufacturer.
A TV transmitter can be divided into four major subsystems:
• The exciter
• Intermediate power amplifier (IPA)
• Power amplifier
• High-voltage power supply
Figure 3.5.2 shows the audio, video, and RF paths for a typical design.
Figure 3.5.2 Basic block diagram of a TV transmitter. The three major subassemblies are the
exciter, IPA, and PA. The power supply provides operating voltages to all sections, and high volt-
age to the PA stage.
requiring differential gain correction. The plate (anode) circuit of a tetrode PA usually is built
around a coaxial resonant cavity, providing a stable and reliable tank.
UHF transmitters using a klystron in the final output stage must operate class A, the most lin-
ear but also most inefficient operating mode for a vacuum tube. The basic efficiency of a non-
pulsed klystron is approximately 40 percent. Pulsing, which provides full available beam current
only when it is needed (during peak-of-sync), can improve device efficiency by as much as 25
percent, depending on the type of pulsing used.
Two types of klystrons are presently in service:
• Integral-cavity klystron
• External-cavity klystron
The basic theory of operation is identical for each tube, but the mechanical approach is radi-
cally different. In the integral-cavity klystron, the cavities are built into the klystron to form a
single unit. In the external-cavity klystron, the cavities are outside the vacuum envelope and
bolted around the tube when the klystron is installed in the transmitter.
A number of factors come into play in a discussion of the relative merits of integral- vs. exter-
nal-cavity designs. Primary considerations include operating efficiency, purchase price, and life
expectancy.
The PA stage includes a number of sensors that provide input to supervisory and control cir-
cuits. Because of the power levels present in the PA stage, sophisticated fault-detection circuits
are required to prevent damage to components in the event of a problem inside or outside the
transmitter. An RF sample, obtained from a directional coupler installed at the output of the
transmitter, is used to provide automatic power-level control.
The transmitter system discussed in this section assumes separate visual and aural PA stages.
This configuration is normally used for high-power NTSC transmitters. A combined mode also
may be used, however, in which the aural and visual signals are added prior to the PA. This
approach offers a simplified system, but at the cost of additional precorrection of the input video
signal.
PA stages often are configured so that the circuitry of the visual and aural amplifiers is identi-
cal, providing backup protection in the event of a visual PA failure. The aural PA can then be
reconfigured to amplify both the aural and the visual signals, at reduced power.
The aural output stage of a TV transmitter is similar in basic design to an FM broadcast trans-
mitter. Tetrode output devices generally operate class C, providing good efficiency. Klystron-
based aural PAs are used in UHF transmitters.
video information. The pulse waveform is developed through a pulse amplifier, rather than a
switch. This permits more accurate adjustments of operating conditions of the visual amplifier.
Although the current demand from the beam modulator is low, the bias is near cathode poten-
tial, which is at a high voltage relative to ground. The modulator, therefore, must be insulated
from the chassis. This is accomplished with optical transmitters and receivers connected via
fiber optic cables. The fiber optic lines carry supervisory, gain control, and modulating signals.
The four-cavity external klystrons will tune to any channel in the UHF-TV band. An adjust-
able beam perveance feature enables the effective electrical length of the device to be varied by
altering the beam voltage as a function of operating frequency. Electromagnetic focusing is used
on both tubes. The cavities, body, and gun areas of the klystrons are air-cooled. The collectors
are vapor-phase-cooled using an external heat exchanger system.
The outputs of the visual and aural klystrons are passed through harmonic filters to an RF
combiner before being applied to the antenna system.
Figure 3.5.5 Comparison of the DTV peak transmitter power rating and NTSC peak-of-sync.
(After [1].)
power gain of 24, and a transmission-line efficiency of 70 percent. The required DTV transmitter
power Tx will equal
Tx = (405 / 24) ÷ 0.7 = 24.1 kW (3.5.1)
Because the DTV peak-to-average ratio is 4 (6 dB), the actual DTV transmitter power rating
must be 96.4 kW (peak). This 4× factor is required to allow sufficient headroom for signal
peaks. Figure 3.5.5 illustrates the situation. The transmitter rating is a peak value because the RF
peak envelope excursions must traverse the linear operating range of the transmitter on a peak
basis to avoid high levels of IMD spectral spreading [1]. In this regard, the DTV peak envelope
power (PEP) is similar to the familiar NTSC peak-of-sync rating for setting the transmitter power
level. Note that NTSC linearizes the PEP envelope from sync tip to zero carrier for best perfor-
mance. Although many UHF transmitters use pulsed sync systems, where the major portion of
envelope linearization extends only from black level to maximum white, the DTV signal has no
peak repetitive portion of the signal to apply a pulsing system and, therefore, must be linearized
from the PEP value to zero carrier. Many analog transmitters also linearize over the full NTSC
amplitude range and, as a result, the comparison between NTSC and DTV peak RF envelope
power applies for setting the transmitter power. The DTV power, however, always is stated as
average (rms) because this is the only consistent parameter of an otherwise pseudorandom sig-
nal.
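The sizing arithmetic of Equation (3.5.1), together with the 4× (6 dB) peak headroom factor, can be sketched as follows. The function name is ours; the numbers are the example values from the text.

```python
def dtv_transmitter_power_kw(erp_kw, antenna_gain, line_efficiency,
                             peak_to_avg=4.0):
    """Average transmitter power = ERP / antenna gain / line efficiency;
    the peak rating adds headroom for the 4x (~6 dB) peak-to-average
    ratio of the DTV signal."""
    avg_kw = erp_kw / antenna_gain / line_efficiency
    return avg_kw, avg_kw * peak_to_avg

# Example values from the text: 405 kW ERP, antenna power gain of 24,
# 70 percent transmission-line efficiency.
avg, peak = dtv_transmitter_power_kw(405.0, 24.0, 0.7)
print(round(avg, 1), round(peak, 1))  # 24.1 96.4
```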
Figure 3.5.6 shows a DTV signal RF envelope operating through a transmitter IPA stage with
a spectral spread level of approximately –43 dB. Note the large peak circled on the plot at 9 dB.
This is significantly above the previously noted values. The plot also shows the average (rms)
level, the 6 dB peak level, and other sporadic peaks above the 6 dB dotted line. If the modulator
output were measured directly, where it can be assumed to be very linear, peak-to-average ratios
as high as 11 dB could be seen.
Figure 3.5.7 shows another DTV RF envelope, but in this case the power has been increased
to moderately compress the signal. Note that the high peaks above the 6 dB dotted line are nearly
gone. The peak-to-average ratio is 6 dB.
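The peak-to-average figures quoted for these envelope plots reduce to a simple measurement over sampled envelope power. The sketch below uses Gaussian noise as a stand-in for the pseudorandom DTV envelope; that substitution is an assumption, and real measurements would use modulator or envelope-detector samples.

```python
import math
import random

def peak_to_average_db(samples):
    """Peak-to-average power ratio of a sampled envelope, in dB."""
    powers = [s * s for s in samples]
    avg = sum(powers) / len(powers)
    return 10.0 * math.log10(max(powers) / avg)

# Gaussian noise as a stand-in for the noise-like DTV envelope.
random.seed(1)
noise = [random.gauss(0.0, 1.0) for _ in range(100_000)]
print(round(peak_to_average_db(noise), 1))  # well above the 6 dB design figure
```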
Figure 3.5.6 DTV RF envelope at a spectral spread of –43 dB. (After [1].)
Figure 3.5.7 DTV RF envelope at a spectral spread of –35 dB. (After [1].)
Figure 3.5.8 Schematic diagram of a 600 W VHF amplifier using eight FETs in a parallel device/
parallel module configuration.
The ac-to-RF efficiency of a solid-state transmitter may or may not be any better than a tube
transmitter of the same operating power and frequency. Much depends on the type of modulation
used and the frequency of operation. Fortunately, the lower average power and duty cycle of the DTV signal suggest that a high-efficiency solid-state solution may be possible [2]. The relatively constant signal power of the DTV waveform eliminates one of the biggest problems in
NTSC applications of class AB solid-state amplifiers: the continual level changes in the video
signal vary the temperature of the class AB amplifier junctions and, thus, their bias points. This,
in turn, varies all of the transistor parameters, including gain and linearity. Sophisticated adaptive
bias circuits are required for reduction or elimination of this limitation to class AB operation.
Solid-state amplifiers operating class A do not suffer from such linearity problems, but the
class A operation imposes a substantial efficiency penalty. Still, many designs use class A
because of its simplicity and excellent linearity.
The two primary frontiers for solid-state devices are 1) power dissipation and 2) improved
materials and processes [3]. With regard to power dissipation, the primary factor in determining
the amount of power a given device can handle is the size of the active junctions on the chip. The
same power output from a device also may be achieved through the use of several smaller chips
in parallel within a single package. This approach, however, can result in unequal currents and
uneven distribution of heat. At high power levels, heat management becomes a significant factor
in chip design. Specialized layout geometries have been developed to ensure even current distri-
bution throughout the device.
The second frontier—improved materials and processes—is being addressed with technolo-
gies such as LDMOS and silicon carbide (SiC). The bottom line with regard to solid-state is that,
from the standpoint of power-handling capabilities, there is a point of diminishing returns for a
given technology. The two basic semiconductor structures, bipolar and FET, have seen numerous
fabrication and implementation enhancements over the years that have steadily increased the
maximum operating power and switching speed. Power MOSFET, LDMOS, and SiC devices are
the by-products of this ongoing effort.
With any new device, or class of devices, economies always come into play. Until a device has
reached a stable production point, at which it can be mass-produced with few rejections, the per-
device cost is usually high, limiting its real-world applications. For example, if an SiC device that
can handle more than 4 times the power of a conventional silicon transistor is to be cost-effective
in a transmitter, the per-device cost must be less than 4 times that of the conventional silicon
product. It is fair to point out in this discussion that the costs of the support circuitry, chassis, and
heat sink are equally important. If, for example, the SiC device—though still at a cost disadvan-
tage relative to a conventional silicon transistor—requires fewer support elements, then a cost
advantage still can be realized.
If increasing the maximum operating power for a given frequency is the primary challenge for
solid-state devices, then using the device more efficiently ranks a close second. The only thing
better than being able to dissipate more power in a transistor is not generating the waste heat in
the first place. The real performance improvements in solid-state transmitter efficiency have not
come as a result of simply swapping out a single tube with 200 transistors, but from using the
transistors in creative ways so that higher efficiency is achieved and fewer devices are required.
This process has been illustrated dramatically in AM broadcast transmitters. Solid-state trans-
mitters have taken over that market at all power levels, not because of their intrinsic feature set
(graceful degradation capability when a module fails, no high voltages used in the system, sim-
plified cooling requirements, and other attributes), but because they lend themselves to enor-
mous improvements in operating efficiency as a result of the waveforms being amplified. For
Figure 3.5.9 Cutaway view of the tetrode (left ) and the Diacrode (right). Note that the RF current
peaks above and below the Diacrode center, but the tetrode has only one peak at the bottom.
(After [6].)
television, most notably UHF, the march to solid-state has been much slower. One of the prom-
ises of DTV is that clever amplifier design will lead to similar, albeit less dramatic, improve-
ments in intrinsic operating efficiency.
Figure 3.5.10 The elements of the Diacrode, including the upper cavity. Double the current, and consequently double the power, is achieved with the device because of the current peaks at the top and bottom of the tube, as shown. (After [6].)
input circuit. The pass-through effect, therefore, contributes to the overall operating efficiency of
the transmitter.
The expected lifetime of a tetrode in UHF service usually is shorter than that of a klystron of
the same power level. Typical lifetimes of 8000 to 15,000 hours have been reported. Intensive
work, however, has led to products that offer higher output powers and extended operating life-
times, while retaining the benefits inherent in tetrode devices. With regard to DTV application
possibilities, the linearity of the tetrode is excellent, a strong point for DTV consideration [6].
Minimal phase distortion and low intermodulation translate into reduced correction requirements
for the amplifier.
The Diacrode (Thomson) is an adaptation of the high-power UHF tetrode. The operating prin-
ciple of the Diacrode is basically the same as that of the tetrode. The anode current is modulated
by an RF drive voltage applied between the cathode and the power grid. The main difference is in
the position of the active zones of the tube in the resonant coaxial circuits, resulting in improved
reactive current distribution in the electrodes of the device.
Figure 3.5.9 compares the conventional tetrode with the Diacrode. The Diacrode includes an
electrical extension of the output circuit structure to an external cavity [6]. The small dc-blocked
cavity rests on top of the tube, as illustrated in Figure 3.5.10.
The cavity is a quarter-wave transmission line, as measured from the top of the cavity to the
vertical center of the tube. The cavity is short-circuited at the top, reflecting an open circuit (cur-
rent minimum) at the vertical center of the tube and a current maximum at the base of the tube,
like the conventional tetrode, and a second current maximum above the tube at the cavity short-
circuit. (Figure 3.5.9 helps to visualize this action.)
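The physical scale of such a quarter-wave cavity follows directly from the operating frequency. The sketch below gives the free-space quarter-wavelength; the actual cavity is foreshortened by its geometry and loading, and the example frequencies are illustrative UHF-TV carriers, not values from the text.

```python
C = 299_792_458.0  # speed of light, m/s

def quarter_wave_length_cm(freq_mhz):
    """Free-space quarter-wavelength at the given frequency; a real
    loaded cavity is somewhat shorter, so this is an upper bound."""
    wavelength_m = C / (freq_mhz * 1e6)
    return 100.0 * wavelength_m / 4.0

# Illustrative UHF-TV carrier frequencies:
for f in (470.0, 700.0):
    print(f, round(quarter_wave_length_cm(f), 1))  # 15.9 cm and 10.7 cm
```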
With two current maximums, the Diacrode has an RF power capability twice that of the
equivalent tetrode, while the element voltages remain the same. All other properties and aspects
of the Diacrode are basically identical to those of the TH563 high-power UHF tetrode (Thom-
son), upon which the Diacrode is patterned.
Figure 3.5.11 Mechanical design of the multistage depressed collector assembly. Note the “V”
shape of the 4-element system.
Some of the benefits of such a device, in addition to the robust power output available, are its
low high-voltage requirements (low relative to a klystron/IOT-based system, that is), small size,
and simple replacement procedures.
Mechanically, the klystron is relatively simple. It offers long life and requires a minimum of routine maintenance.
The klystron, however, is inefficient in its basic form. Efficiency improvements can be gained
for television applications through the use of beam pulsing; still, a tremendous amount of energy
must be dissipated as waste heat. Years of developmental research have produced two high-effi-
ciency devices for television use: the MSDC klystron and the IOT, also known as the Klystrode.
The MSDC device is essentially identical to a standard klystron, except for the collector
assembly. Beam reconditioning is achieved by including a transition region between the RF
interaction circuit and the collector under the influence of a magnetic field. From an electrical
standpoint, the more stages of a multistage depressed collector klystron, the better. Predictably,
the tradeoff is increased complexity and, therefore, increased cost for the product. Each stage
that is added to the depressed collector system is a step closer to the point of diminishing returns.
As stages are added above four, the resulting improvements in efficiency are proportionally
smaller. Because of these factors, a 4-stage device is common for television service. (See Figure
3.5.11.)
The IOT is a hybrid of a klystron and a tetrode. The high reliability and power-handling capa-
bility of the klystron is due, in part, to the fact that electron-beam dissipation takes place in the
collector electrode, quite separate from the RF circuitry. The electron dissipation in a tetrode is at
the anode and the screen grid, both of which are inherent parts of the RF circuit; therefore, they
must be physically small at UHF frequencies. An advantage of the tetrode, on the other hand, is
that modulation is produced directly at the cathode by a grid so that a long drift space is not
required to produce density modulation. The IOT has a similar advantage over the klystron—
high efficiency in a small package.
The IOT is shown schematically in Figure 3.5.12. The electron beam is formed at the cathode,
density-modulated with the input RF signals by a grid, then accelerated through the anode aper-
ture. In its bunched form, the beam drifts through a field-free region and interacts with the RF
field in the output cavity. Power is extracted from the beam in the same way it is extracted from a
klystron. The input circuit resembles a typical UHF power grid tube. The output circuit and col-
lector resemble a klystron.
Because the IOT provides beam power variation during sync pulses (as in a pulsed klystron)
as well as over the active modulating waveform, it is capable of high efficiency. The device thus
provides full-time beam modulation as a result of its inherent structure and class B operation.
For DTV service, the IOT is particularly attractive because of its good linearity characteris-
tics. The IOT provides –60 dB or better intermodulation performance in combined 10 dB aural/
visual service [7]. Tube life data varies depending upon the source, but one estimate puts the life
expectancy at more than 35,000 hours [8].
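To put the lifetime figures in perspective, continuous-service hours convert to calendar time as follows; this is only the unit conversion, since duty cycles in actual service vary.

```python
HOURS_PER_YEAR = 24 * 365  # 8760

def continuous_service_years(life_hours):
    """Tube life in years of uninterrupted (24/7) operation."""
    return life_hours / HOURS_PER_YEAR

# Lifetime figures reported in the text: tetrode 8,000-15,000 hours,
# IOT estimate of 35,000 hours.
for h in (8_000, 15_000, 35_000):
    print(h, round(continuous_service_years(h), 1))  # 0.9, 1.7, 4.0 years
```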
The maximum power output available from an IOT is on the order of 55 kW visual plus 5.5 kW
aural in common amplification [9]. Various improvements have been made over the years,
including changes to the input cavity to improve intermodulation performance of the device (Fig-
ure 3.5.13).
Figure 3.5.14 Schematic overview of the MSDC IOT or constant efficiency amplifier. (After [10].)
ing efficiency [10]. This had been considered by Priest and Shrader [11] and by Gilmore [12],
but the idea was rejected because of the complexity of the multistage depressed collector assem-
bly and because the IOT already exhibited fairly high efficiency. Subsequent development by
Symons [10, 13] led to a working device. An inductive output tube, modified by the addition of a
multistage depressed collector, has the interesting property of providing linear amplification
with (approximately) constant efficiency.
Figure 3.5.14 shows a schematic representation of the constant efficiency amplifier (CEA)
[10]. The cathode, control grid, anode and output gap, and external circuitry are essentially iden-
tical with those of the IOT amplifier. Drive power introduced into the input cavity produces an
electric field between the control grid and cathode, which draws current from the cathode during
positive half-cycles of the input RF signal. For operation as a linear amplifier, the peak value of
the current—or more accurately, the fundamental component of the current—is made (as nearly
as possible) proportional to the square root of the drive power, so that the product of this current
and the voltage it induces in the output cavity will be proportional to the drive power.
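The square-root relationship described above is what makes the amplifier linear in power: if the fundamental current component tracks the square root of the drive power, and the induced output voltage tracks that current, their product tracks the drive power itself. A numeric check, with purely illustrative constants:

```python
import math

def output_power(drive_power, k=2.0, r_load=1.0):
    """Toy model of the CEA linearity argument: fundamental current
    i1 ~ sqrt(drive power); induced output voltage ~ i1; so output
    power ~ i1^2 ~ drive power. k and r_load are illustrative."""
    i1 = k * math.sqrt(drive_power)
    return i1 * i1 * r_load

# Doubling the drive power doubles the output power:
p1 = output_power(10.0)
p2 = output_power(20.0)
print(round(p2 / p1, 6))  # 2.0
```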
Following the output cavity is a multistage depressed collector in which several typical elec-
tron trajectories are shown. These are identified by the letters a through e. The collector elec-
trodes are connected to progressively lower potentials between the anode potential and the
cathode potential so that more energetic electrons penetrate more deeply into the collector struc-
ture and are gathered on electrodes of progressively lower potentials.
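The energy-sorting action of the depressed collector can be caricatured in a few lines: an electron penetrates to the most depressed electrode its remaining kinetic energy can reach. The stage potentials and the single-pass landing rule below are our simplifying assumptions, not values from the text.

```python
def landing_electrode(electron_energy_ev, electrode_depressions_ev):
    """Index of the collector electrode that gathers an electron:
    the deepest (most depressed) electrode whose retarding potential,
    in eV relative to the anode, does not exceed the electron's
    remaining kinetic energy. Depressions must be listed in
    increasing order."""
    best = 0
    for i, depression in enumerate(electrode_depressions_ev):
        if depression <= electron_energy_ev:
            best = i
    return best

# Hypothetical 4-stage collector, depressions in eV:
stages = [0, 6_000, 12_000, 18_000]
print([landing_electrode(e, stages) for e in (3_000, 10_000, 20_000)])
# -> [0, 1, 3]: more energetic electrons reach deeper electrodes
```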
In considering the difference between an MSDC IOT and an MSDC klystron, it is important
to recognize that in a class B device, no current flows during the portion of the RF cycle when
the grid voltage is below cutoff and the output gap fields are accelerating. As a result, it is not
necessary to have any collector electrode at a potential equal to or below cathode potential. At
low output powers, when the RF output gap voltage is just equal to the difference in potential
between the lowest-potential collector electrode and the cathode, all the current will flow to that
electrode. Full class B efficiency is thus achieved under these conditions.
As the RF output gap voltage increases with increased drive power, some electrons will have
lost enough energy to the gap fields so they cannot reach the lowest potential collector, and so
current to the next-to-the-lowest potential electrode will start increasing. The efficiency will
drop slightly and then start increasing again until all the current is just barely collected by the
two lowest-potential collectors, and so forth.
Maximum output power is reached when the current delivered to the output gap is sufficient
to build up an electric field or voltage that will just stop a few electrons. At this output power, the
current is divided among all of the collector electrodes and the efficiency will be somewhat
higher than the efficiency of a single collector, class B amplifier [10].
The challenge of developing a multistage depressed collector for an IOT is not quite the same
as that of developing a collector for a conventional klystron [13]. It is different because the dc
component of beam current rises and falls in proportion to the square root of the output power of
the tube. The dc beam current is not constant as it is in a klystron. As a result, the energy spread
is low because the output cavity RF voltage is low at the same time that the RF and dc beam cur-
rents are low. Thus, there will be small space-charge forces, and the beam will not spread as
much as it travels deep into the collector toward electrodes having the lowest potential. For this
reason, the collector must be rather long and thin when compared to the multistage depressed
collector for a conventional klystron, as described previously.
Figure 3.5.15 An example 8-VSB sideband intermodulation distortion corrector functional block
diagram for the broadcast television IF band from 41 to 47 MHz for use with DTV television trans-
mitters. (From [14]. Used with permission.)
behaves precisely like two mixers not driven into switching. As with the analog pre-correction
process described previously, pre-correction for the digital waveform also is possible.
The corrector illustrated in Figure 3.5.15 mimics the amplifying device, but contains phase
shift, delay, and amplitude adjustment circuitry to properly add (subtract), causing cancellation
of the intermodulation components adjacent to the parent signal at the amplifier output. Because this intentional mixing is performed in a controlled way, it may be applied at the IF frequency
before the RF conversion process in the transmitter so that an equal and opposite phase signal is
generated and added to the parent signal at IF. It will then go through the same RF conversion
and amplification process as the parent signal that will spawn the real intermodulation products.
The result is intermodulation cancellation at the output of the amplifier.
Figure 3.5.16 shows the relative amplitude content of the correction signal spanning 6 MHz
above and below the desired signal. Because the level of the correction signal must match that of
the out-of-band signal to be suppressed, it must be about 43 dB below the in-band signal accord-
ing to the example. This level of in-band addition is insignificant to the desired signal, but just
enough to cause cancellation of the out-of-band signal.
Using such an approach that intentionally generates the correct anti-intermodulation compo-
nent and causes it to align in time, be opposite phase, and equal in amplitude allows for cancella-
tion of the unwanted component, at least in part. The degree of cancellation has everything to do
with the precise alignment of these three attributes. In practice, it has been demonstrated [14]
that only the left or right shoulder may be optimally canceled by up to about 3–5 dB. This
amount may not seem to be significant, but it must be remembered that 3 dB of improvement is
cutting the power in half.
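The decibel arithmetic behind these statements is worth making explicit: 3 dB of shoulder improvement halves the out-of-band power, and a correction signal at −43 dB is a tiny fraction of the in-band power.

```python
def db_to_power_ratio(db):
    """Convert a decibel figure to a linear power ratio."""
    return 10.0 ** (db / 10.0)

# 3 dB of cancellation cuts the out-of-band power roughly in half:
print(round(db_to_power_ratio(-3.0), 3))  # 0.501

# A -43 dB correction signal is about 5e-05 of the in-band power,
# which is why its in-band addition is insignificant:
print(db_to_power_ratio(-43.0))
```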
Figure 3.5.16 After amplification, the out-of-band signal is corrected or suppressed by an amount
determined by the ability of the correction circuit to precisely match the delay, phase, and ampli-
tude of the cancellation signal. (From [14]. Used with permission.)
3.5.1 References
1. Plonka, Robert J.: “Planning Your Digital Television Transmission System,” Proceedings of
the 1997 NAB Broadcast Engineering Conference, National Association of Broadcasters,
Washington, D.C., pg. 89, 1997.
2. Ostroff, Nat S.: “A Unique Solution to the Design of an ATV Transmitter,” Proceedings of
the 1996 NAB Broadcast Engineering Conference, National Association of Broadcasters,
Washington, D.C., pg. 144, 1996.
3. Whitaker, Jerry C.: “Solid State RF Devices,” Radio Frequency Transmission Systems:
Design and Operation, McGraw-Hill, New York, pg. 101, 1990.
4. Whitaker, Jerry C.: “Microwave Power Tubes,” The Electronics Handbook, Jerry C. Whi-
taker (ed.), CRC Press, Boca Raton, Fla., pg. 413, 1996.
5. Tardy, Michel-Pierre: “The Experience of High-Power UHF Tetrodes,” Proceedings of the
1993 NAB Broadcast Engineering Conference, National Association of Broadcasters,
Washington, D.C., pg. 261, 1993.
6. Hulick, Timothy P.: “60 kW Diacrode UHF TV Transmitter Design, Performance and Field
Report,” Proceedings of the 1996 NAB Broadcast Engineering Conference, National Asso-
ciation of Broadcasters, Washington, D.C., pg. 442, 1996.
7. Whitaker, Jerry C.: “Microwave Power Tubes,” Power Vacuum Tubes Handbook, Van Nos-
trand Reinhold, New York, pg. 259, 1994.
8. Ericksen, Dane E.: “A Review of IOT Performance,” Broadcast Engineering, Intertec Pub-
lishing, Overland Park, Kan., pg. 36, July 1996.
9. Aitken, S., D. Carr, G. Clayworth, R. Heppinstall, and A. Wheelhouse: “A New, Higher
Power, IOT System for Analogue and Digital UHF Television Transmission,” Proceedings
of the 1997 NAB Broadcast Engineering Conference, National Association of Broadcasters,
Washington, D.C., pg. 531, 1997.
10. Symons, Robert S.: “The Constant Efficiency Amplifier,” Proceedings of the NAB Broad-
cast Engineering Conference, National Association of Broadcasters, Washington, D.C., pp.
523–530, 1997.
11. Priest, D. H., and M. B. Shrader: “The Klystrode—An Unusual Transmitting Tube with
Potential for UHF-TV,” Proc. IEEE, vol. 70, no. 11, pp. 1318–1325, November 1982.
12. Gilmore, A. S.: Microwave Tubes, Artech House, Dedham, Mass., pp. 196–200, 1986.
13. Symons, R., M. Boyle, J. Cipolla, H. Schult, and R. True: “The Constant Efficiency Ampli-
fier—A Progress Report,” Proceedings of the NAB Broadcast Engineering Conference,
National Association of Broadcasters, Washington, D.C., pp. 77–84, 1998.
14. Hulick, Timothy P.: “Very Simple Out-of-Band IMD Correctors for Adjacent Channel
NTSC/DTV Transmitters,” Proceedings of the Digital Television '98 Conference, Intertec
Publishing, Overland Park, Kan., 1998.
Chapter
3.6
Multiple Transmitter Networks
3.6.1 Introduction1
Many of the challenges of RF transmission, especially as they apply to digital transmission, can
be addressed by using multiple transmitters to cover a service area [1]. Because of the limitations
in the spectrum available, many systems based on the use of multiple transmitters must operate
those transmitters all on the same frequency, hence the name single frequency network (SFN). At
the same time, use of SFNs leads to a range of additional complications that must be addressed
in the design of the network.
SFNs for single-carrier signals such as 8-VSB become possible because of the presence of
adaptive equalizers in receivers. When signals from multiple transmitters arrive at a receiver,
under the right conditions, the adaptive equalizer in that receiver can treat the several signals as
echoes of one another and extract the data they carry. The conditions are controlled by the capa-
bilities of the adaptive equalizer and will become less stringent as the technology of adaptive
equalizers improves over time.
1. This chapter is based on: ATSC, “ATSC Recommended Practice: Design Of Synchronized
Multiple Transmitter Networks,” Advanced Television Systems Committee, Washington,
D.C., Doc. A/111, September 3, 2004. Used with permission.
Editor’s note: This chapter provides an overview of multiple transmitter networks based on
ATSC A/111. Readers are encouraged to download the entire Recommended Practice from
the ATSC Web site (http://www.atsc.org). All ATSC Standards, Recommended Practices, and
Information Guides are available at no charge.
ability of service. These reductions, in turn, permit operation with less overall effective radiated
power (ERP) and/or antenna height.
When transmitters can be operated at lower power levels and/or elevations, the interference
they cause to their neighbors is reduced. Using multiple transmitters allows a station to provide
significantly higher signal levels near the edge of its service area without causing the level of
interference to its neighbor that would arise if the same signal levels were delivered from a sin-
gle, central transmitter. The interference reductions come from the significantly smaller interfer-
ence zones that surround transmitters that use relatively lower power and/or antenna heights.
With the use of multiple transmitters comes the ability to overcome terrain limitations by fill-
ing in areas that would otherwise receive insufficient signal level. When the terrain limitations
are caused by obstructions that isolate an area from another (perhaps the main) transmitter,
advantage may be taken of the obstructions in the design of the network. The obstructions can
serve to help isolate signals from different transmitters within the network, making it easier to
control interference between the network’s transmitters. When terrain obstructions are used in
this way, it may be possible to place transmitters farther apart than if such obstructions were not
utilized for isolation.
Where homes are illuminated by sufficiently strong signals from two or more transmitters, it
may be possible to take advantage of the multiple signals to provide more reliable indoor recep-
tion. When a single transmitter is used, standing waves within a home sheathed in metal likely
will result in areas within that home having signal levels too low to use. Signals arriving from
different directions will enter the resonant cavity of the home through different ports (windows)
and set up standing waves in different places. The result often may be that areas within the home
receiving low signal levels from one transmitter will receive adequate signal levels from another
transmitter, thereby making reliable reception possible in many more places within the home.
interference. Differential delays between signals above the echo interference threshold must fall
within the time window that the adaptive equalizer can correct if the signals are to be received.
Current receiver designs have a fixed time window inside which echoes can be equalized. The
amplitudes of correctable echoes also are a function of their time displacement from the main
signal. The closer together the signals are in time, the closer they can be in amplitude. The fur-
ther apart they are in time, the lower in level the echoes must be for the equalizer to work. These
relationships are improving dramatically in newer receiver front-end designs, and they can be
expected to continue improving at least over the next several generations of designs. As they
improve, limitations on SFN designs will be reduced.
Early receiver front-end designs lacked the capability to handle echoes at levels equal to the
main signal, but that capability is now recognized as necessary for receivers to work in many
situations that occur naturally, without even considering their generation in SFNs. The reason for this is
that any time there is no direct path or line-of-sight from the transmitter to the receiver, the
receiver will receive all of its input energy from reflections of one sort or another. When this hap-
pens, there may be a number of signals (echoes) arriving at the receiver that are about equal in
amplitude, and they may vary over time, with the strongest one changing from time-to-time. This
is called a Rayleigh channel when it occurs, and it is now recognized that Rayleigh channels are
more prevalent than once thought. For example, they often exist in city canyons and mid-rise
areas, they exist behind hills, and so on. They also exist in many indoor situations. If receivers
are to deal with these cases, adaptive equalizers will have to be designed to handle them. Thus,
SFNs will be able to take advantage of receiver capabilities that are needed in natural circum-
stances.
Radio frequency signals travel at a speed of about 3/16 mile per microsecond. Another way to
express the same relationship is that radio frequency signals travel a mile in about 5-1/3 μs. If a
pair of transmitters emits the same signal simultaneously and a receiver is located equidistant
from the two transmitters, the signals will arrive at the receiver simultaneously. If the receiver is
not equidistant from the transmitters, the arrival times at the receiver of the signals from the two
transmitters will differ by 5-1/3 μs for each mile of difference in path length. In designing the
network, the determination of the sizes of the cells and the related spacing of the transmitters will
be dependent on this relationship between time and distance and on the delay spread capability
of the receiver adaptive equalizer.
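This time-and-distance relationship is easy to work out numerically. The following sketch (illustrative coordinates; not part of A/111) computes the differential arrival time at a receiver from two synchronized transmitters using the rule of thumb given above:

```python
import math

# Rule of thumb from the text: RF travels ~3/16 mile per microsecond,
# i.e. ~5-1/3 us per mile of path length.
US_PER_MILE = 16.0 / 3.0

def differential_delay_us(rx, tx_a, tx_b):
    """Arrival-time difference (us) at rx between signals from tx_a and tx_b.

    Positive means the signal from tx_b arrives later. Coordinates in miles.
    """
    return (math.dist(rx, tx_b) - math.dist(rx, tx_a)) * US_PER_MILE

# A receiver 3 miles from one transmitter and 8 miles from the other sees
# 5 miles of path difference, i.e. about 26.7 us of apparent echo delay.
delay = differential_delay_us((0.0, 0.0), (3.0, 0.0), (8.0, 0.0))
```

A network designer would compare delays computed this way against the receiver equalizer's correctable window when sizing cells and choosing transmitter spacing.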
Because receivers have limited delay-spread capability, there is a corresponding limit on the
sizes of cells and spacing of transmitters in SFNs. As receiver front-end technology improves
over time, this limitation can be expected to be relaxed. As the limitation on cell sizes is relaxed
and cells become larger, it can be helpful to network design to adjust the relative emission times
of the transmitters in the network. This allows putting the locus of equidistant points from vari-
ous transmitters where needed to maximize the audience reached and to minimize internal inter-
ference within the network. When such time offsets are used, it becomes desirable to be able to
measure the arrival times at receiving locations of the signals from the transmitters in the net-
work. Such measurements can be difficult since the transmitters are intentionally transmitting
exactly the same signals in order to allow receivers to treat them as echoes of one another. Some-
how the transmitters have to be differentiated from one another if their respective contributions
to the aggregate signal received at any location are to be determined. To aid in the identification
of individual transmitters in a network, a buried spread spectrum pseudorandom “RF Water-
mark” signal is included in ATSC Standard A/110, “Synchronization Standard for Distributed
Transmission” [2].
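The identification principle can be illustrated in miniature: each transmitter's buried sequence is nearly orthogonal to the others, so correlating the received signal against each known sequence reveals which transmitters are contributing. The sequences, length, and injection level below are purely illustrative; A/110 defines the actual RF Watermark codes.

```python
import random

def pn_sequence(seed, length):
    """A +/-1 pseudorandom sequence per transmitter (illustrative, not A/110's)."""
    rng = random.Random(seed)
    return [rng.choice((-1.0, 1.0)) for _ in range(length)]

def correlate(received, sequence):
    """Normalized correlation of the received samples with a known sequence."""
    return sum(r * s for r, s in zip(received, sequence)) / len(sequence)

N = 65536
codes = {tx_id: pn_sequence(tx_id, N) for tx_id in (1, 2, 3)}

# Received samples: unit-power noise standing in for the broadcast signal,
# plus transmitter 2's watermark buried roughly 20 dB below it.
random.seed(1234)
received = [random.gauss(0.0, 1.0) + 0.1 * c for c in codes[2]]

# Correlating against each known code: only the matching transmitter's
# score rises well above the noise floor of the estimate.
scores = {tx_id: correlate(received, code) for tx_id, code in codes.items()}
best = max(scores, key=scores.get)
```

The same despreading principle lets field measurements attribute portions of the aggregate received signal to individual transmitters even though all transmitters emit identical symbols.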
Figure 3.6.1 Digital on-channel repeater (DOCR) generic block diagram. (From [1]. Used with per-
mission.)
loop back and re-enter the receiving antenna. This can also degrade the signal quality, causing
spectrum ripple and other distortions. The only way to limit or avoid the signal loopback in this
type of DOCR is to increase antenna isolation, which is determined by the site environment, or to
limit the DOCR output power. Usually, the RF processing DOCR transmitter output power is less
than 10 W, resulting in an effective radiated power (ERP) on the order of several dozen watts.
Conversion to an intermediate frequency (IF) for signal processing is the principal feature
that differentiates Figure 3.6.3 from the RF processing DOCR. In this arrangement, a local oscil-
lator and mixer are used to convert the incoming signal to the IF frequency, where it can be more
easily amplified and filtered. The same local oscillator used for the downconversion to IF in the
receiver can be used for upconversion in the transmitter, resulting in the signal being returned to
precisely the same frequency at which it originated (with some amount of the local oscillator
phase noise added to the signal). The delay time through the IF processing DOCR will be
decided mostly by the IF filter implemented. A SAW filter can have much sharper passband
edges, better control of envelope delay, greater attenuation in the stopband, and generally more
repeatable characteristics than most other kinds of filters. Transit delay time for the signal can be
on the order of 1–2 μs, so the delay through an IF processing DOCR, Figure 3.6.3, will be from a
fraction of a microsecond to about 2 μs—somewhat longer than the RF processing DOCR. The
IF processing DOCR has better first adjacent channel interference rejection capability than does
Figure 3.6.2 RF processing DOCR configuration. (From [1]. Used with permission.)
Figure 3.6.3 IF processing DOCR configuration. (From [1]. Used with permission.)
the RF processing DOCR, but it retains the signal loopback problem, which limits its output
power.
Figure 3.6.4 shows a receiver that demodulates the incoming signal all the way to a digital
baseband signal in which forward error correction (FEC) can be applied. This restores the bit
stream to perfect condition, correcting all errors and eliminating all effects of the analog channel
through which the signal passed in reaching the DOCR. The bit stream then is transmitted, start-
ing with formation of the bit stream into the symbols in an exciter, just as in a normal transmitter.
If no special steps are taken to set the correct trellis encoder states, the output of the DOCR of
Figure 3.6.4 would be incoherent with respect to its input. This would result in the signal from
such a repeater acting like noise when interfering with the signal from another transmitter in the
network rather than acting like an echo of the signal from that other transmitter. Thus, additional
data processing is required to establish the correct trellis states for retransmission. It should also
be noted that this form of DOCR has a very long delay through it, measured in milliseconds,
mostly caused by the de-interleaving process. This delay is well outside the ATSC receiver equal-
ization range. Although by regenerating the DTV signal it totally eliminates the adjacent channel
interference and signal loopback problems, this type of DOCR has very little practical use,
unless the intended DOCR coverage area is totally isolated from the main DTV transmitter.
Figure 3.6.4 Baseband decoding DOCR configuration. (From [1]. Used with permission.)
Figure 3.6.5 Baseband equalization DOCR configuration. (From [1]. Used with permission.)
A more practical intermediate method is the baseband equalization DOCR (or EDOCR) as
shown in Figure 3.6.5. It fits between the techniques of Figures 3.6.2 or 3.6.3 and 3.6.4. This
type of DOCR demodulates the received signal and applies adaptive equalization in order to
reduce or eliminate adjacent channel interference, signal loopback, and multipath distortion
occurring in the path from the main transmitter to the EDOCR. In determining the correct 3-bit
symbol levels, it also carries out symbol level slicing or trellis decoding, which can achieve sev-
eral dB of noise reduction and minimizes the impact of channel distortions. The baseband output
of the equalizer and slicer is re-modulated, filtered, frequency shifted, and amplified for re-transmission.
The delay of the baseband processing is on the order of a few dozen VSB symbol times.
The total EDOCR internal delay is on the order of a few microseconds once the time delays of the
adaptive equalizer and the pulse shaping (root raised cosine) filters (one each for receive and
transmit) are taken into account. This amount of delay could have an impact on ATSC legacy
receivers. The baseband equalization DOCR allows retransmission of a clean signal without the
lengthy delays inherent in the baseband decode/regeneration method of Figure 3.6.4. It can also
transmit at higher power than the RF and IF processing DOCRs. The shortcoming of this method
is that since there is not complete error correction, any errors that occur in interpretation of the
received data will be built into the retransmitted signal. This makes it important in designing the
EDOCR installation to include sufficient receiving antenna gain and received signal level that
errors are minimized in the absence of error correction. If a fairly clean received signal cannot be
obtained, it may be better to use the RF or IF processing DOCR and allow viewers’ receivers to
apply full error correction to the relayed signal rather than retransmitting processed signals that
contain data errors.
All of the forms of DOCR described have a number of limitations in common. Most of the
limitations arise from the facts that a DOCR receives and re-transmits on the same frequency and
that obtaining good isolation between transmitting and receiving antennas is a very difficult
proposition. The result is coupling from the DOCR output back to its input. This coupling leads
to feedback around the amplifiers in the DOCR. Such feedback can result in oscillation in the
cases of the RF and IF processing DOCR. Short of oscillation, it can result in signal distortions
in amplitude and group delay ripple similar to those suffered in a propagation channel. These two
designs will also suffer from the accumulation of noise along the cascade of transmitters and
propagation channels from the signal source to the ultimate receiver. The application of adaptive
equalizers to the feedback path around the DOCR holds some promise to mitigate the distor-
tions, but it cannot help with the noise accumulation.
The feedback around a DOCR puts a limitation on the power that can be transmitted by such a
device. A margin must be provided to keep the system well below the level of oscillation, and the
point of oscillation will be determined by the isolation between the transmitting and receiving
antennas. All of this tends to make high power level operation of DOCRs problematic.
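The stability margin can be budgeted with simple decibel arithmetic. In the sketch below, the isolation, margin, and input level are hypothetical numbers chosen for illustration:

```python
def max_docr_gain_db(isolation_db, margin_db=10.0):
    """Largest repeater gain (dB) keeping loop gain at or below -margin_db.

    Loop gain = repeater gain - antenna isolation; it must stay below 0 dB
    (with margin) or the DOCR will ring or oscillate.
    """
    return isolation_db - margin_db

def max_output_dbm(input_dbm, isolation_db, margin_db=10.0):
    """Output power ceiling implied by the gain limit."""
    return input_dbm + max_docr_gain_db(isolation_db, margin_db)

# Hypothetical site: -60 dBm received, 100 dB of antenna isolation, and a
# 10 dB stability margin cap the output near +30 dBm (1 W).
p_out = max_output_dbm(-60.0, 100.0)
```

Raising the output beyond this ceiling requires more antenna isolation, not more amplifier gain, which is why site environment dominates DOCR power planning.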
Similarly, the time delay through a DOCR is significant for network design. As one goes from
design-to-design from Figure 3.6.2 to 3.6.3, 3.6.5, and then 3.6.4, the time delay gets longer. The
time delay determines over what area the combination of signals from the transmitters in the net-
work stay within the capability of the receiver adaptive equalizer to correct the apparent echoes
caused by receiving signals from multiple transmitters. The geometry between the source trans-
mitter, the DOCR, and the receiver determine the delay spread actually seen by the receiver. To
this must be added the delay of the DOCR. Additional delay in the DOCR can only push the
delay spread in the wrong direction (extending pre-echo), further limiting the area in which a
receiver having a given adaptive equalizer capability will find the delay spread within its ability
to correct.
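As a sketch of the geometry just described, the apparent echo delay is the extra path length through the DOCR divided by the propagation speed, plus the repeater's internal delay. The coordinates and internal delay below are hypothetical:

```python
import math

C_KM_PER_US = 0.2998  # propagation speed, km per microsecond

def echo_delay_us(main_tx, docr, rx, docr_internal_us):
    """Delay of the DOCR's signal relative to the main transmitter's at rx.

    The relayed path is main_tx -> docr -> rx; its excess length over the
    direct path, divided by c, plus the repeater's internal delay.
    Coordinates in km.
    """
    direct = math.dist(main_tx, rx)
    relayed = math.dist(main_tx, docr) + math.dist(docr, rx)
    return (relayed - direct) / C_KM_PER_US + docr_internal_us

# On the line through the main transmitter (origin), a DOCR at 30 km, and a
# receiver 5 km beyond it, the path lengths are equal, so only the assumed
# 2 us internal delay (an IF processing DOCR figure) remains.
d = echo_delay_us((0.0, 0.0), (30.0, 0.0), (35.0, 0.0), 2.0)
```

Moving the receiver off this line increases the excess path, so the DOCR's internal delay always adds to, and never offsets, the geometric delay spread.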
The relative merits of different DOCR configurations are summarized in Table 3.6.2.
to be broadcast are distributed to the transmitters, and a method in which standard MPEG-2
transport stream packets are delivered to the transmitters together with the necessary information
to allow the various transmitters to emit their signals in synchronization with one another. It is
the latter method that has been selected for use in ATSC Standard A/110 [2].
In the method documented in A/110, the data streams delivered to the transmitters are the
same as now delivered over standard STLs for the single transmitters currently in use, with infor-
mation added to those streams to allow synchronizing the transmitters. This method utilizes a
very small portion of the capacity of the channel but allows continued use of the entire existing
infrastructure designed and built around the 19.39 Mbps data rate. This technique permits com-
plete flexibility in setting the power levels and relative emission timing of the transmitters in a
network while assuring that they emit the same symbols for the same data inputs. While origi-
nally intended for use in SFNs, the selected method also permits extension to multiple frequency
networks (MFNs), using a second broadcast channel as an STL to deliver the data stream to mul-
tiple distributed translators that themselves operate in an SFN. Various combinations of distrib-
uted transmitters and distributed translators are possible, and, in some cases, whether a given
configuration constitutes an SFN or an MFN will depend only upon whether there are viewers in
a position to be able to receive the signals that are also relaying the data stream to successive
transmitters in the network.
Micro-Cell System
Micro-cell systems use very low power transmitters to cover very small areas [2]. They may be
intermixed with either large cell or small cell networks to fill in coverage in places like city can-
yons, tunnels, or small valleys. Cities with tall buildings that emulate the effect of mountains and
valleys with river gorges (e.g., New York and Chicago) can benefit from the use of microcells.
Design of micro-cell systems will require the use of different design tools than typically used for
broadcast applications—for example, those used for design of cellular telephone networks as
opposed to the Longley-Rice methods used for longer range propagation modeling.
Ultimately, the application of distributed transmission networks can use any or all of these
concepts, integrated together in a master plan. This will allow evolution from a single, high
power, tall tower system to a hybrid system based upon combinations of these concepts and other
unique solutions created for each individual broadcast market.
Figure 3.6.6 Synchronized DTV transmitter block diagram. (From [2]. Used with permission.)
tion for slaving the pre-coders and trellis coders in the transmitters and carries command infor-
mation specifying the necessary time offset for each transmitter. In addition, the DTxA indicates
operating mode to the transmitters and provides information to be transmitted in the data field
sync data segment through a field rate side channel, which carries information updated regularly
at a data field rate. To accomplish these functions, the DTxA includes a data processing model
equivalent to the data processing subsection of an A/53 modulator to serve as a master reference
to which the slave synchronizers at the transmitters are slaved.
3.6.4a Translators
A translator is part of a multiple frequency network [1]. It receives an off-air signal on one chan-
nel and retransmits it on another channel.
Even in some relatively unpopulated areas, especially where NTSC translator systems are
already deployed, there are not enough additional channels to accommodate traditional translator
networks for ATSC signals during the transition phase. In these situations, use of distributed
translators allows ATSC translator systems to be built using fewer channels.
A distributed translator system applies distributed transmission technology to create a net-
work of synchronized transmitters on one channel, which retransmits a signal received off-the-air
from a main transmitter, distributed transmitter, another translator, or a translator network. The
advantages of a distributed translator system over conventional translators include signal regen-
eration and conservation of spectrum.
The distributed transmission system initially was designed to use STLs to convey MPEG-2
transport streams to slave transmitters in a network. This certainly could be done for dis-
tributed translator systems, but where an off-air signal is available, use of STLs would be costly
and redundant. The advantage of a distributed translator system over a distributed transmission
system is that STLs are not required—the signal may be taken off the air.
3.6.6 References
1. ATSC, “ATSC Recommended Practice: Design Of Synchronized Multiple Transmitter Net-
works,” Advanced Television Systems Committee, Washington, D.C., Doc. A/111, Septem-
ber 3, 2004.
2. ATSC: “Synchronization Standard for Distributed Transmission,” Advanced Television
Systems Committee, Washington, D.C., Doc. A/110, July 14, 2004.
Chapter
3.7
DTV Satellite Transmission
3.7.1 Introduction1
In recognition of the importance of satellite transmission in the distribution of digital video and
audio programming, the ATSC developed two standards optimized for use with the ATSC digital
television suite of standards. The first, document A/80, defines the parameters necessary to
transmit DTV signals (transport, video, audio, and data) over satellite to one or more production
and/or transmission centers. Although the ATSC had identified the particulars for terrestrial
transmission of DTV (using FEC-encoded 8VSB modulation), this method is inappropriate for
satellite transmission because of the many differences between the terrestrial and satellite trans-
mission environments. The second standard, document A/81, describes a direct-to-home satellite
transmission system intended for distribution of programming directly to consumers.
1. Editor’s note: This chapter provides an overview of ATSC DTV satellite services based on
ATSC A/80 and A/81. Readers are encouraged to download the entire standards from the
ATSC Web site (http://www.atsc.org). All ATSC Standards, Recommended Practices, and
Information Guides are available at no charge.
3-112 Television Transmission Systems
for the same data rate, progressively less bandwidth is consumed by QPSK, 8PSK, and 16QAM,
respectively, but the improved bandwidth efficiency is accompanied by an increase in power to
deliver the same level of signal quality.
A second parameter, coding, also influences the amount of bandwidth and power required for
transmission. Coding, or in this instance forward error correction (FEC), adds information to the
data stream that reduces the amount of power required for transmission and improves reconstruc-
tion of the data stream received at the demodulator. While the addition of more correction bits
improves the quality of the received signal, it also consumes more bandwidth in the process. So,
the selection of FEC serves as another tool to balance bandwidth and power in the satellite trans-
mission link. Other parameters exist as well, such as transmit filter shape factor (α), which have
an effect on the overall bandwidth and power efficiency of the system.
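The trade-off can be illustrated with the common approximation that occupied bandwidth is the symbol rate times (1 + α). The code rate and α below are example values, not requirements of A/80:

```python
BITS_PER_SYMBOL = {"QPSK": 2, "8PSK": 3, "16QAM": 4}

def occupied_bw_mhz(data_rate_mbps, modulation, code_rate, alpha=0.35):
    """Approximate occupied bandwidth: symbol rate times (1 + alpha)."""
    symbol_rate = data_rate_mbps / (BITS_PER_SYMBOL[modulation] * code_rate)
    return symbol_rate * (1.0 + alpha)

# Carrying 19.39 Mbps with an assumed rate-3/4 code and alpha = 0.35:
# bandwidth shrinks with denser modulation, while the power needed to hold
# the same signal quality grows.
bws = {mod: occupied_bw_mhz(19.39, mod, 3 / 4) for mod in BITS_PER_SYMBOL}
```

Running the numbers shows QPSK needing roughly twice the bandwidth of 16QAM for the same payload, which is exactly the bandwidth-versus-power lever the operator adjusts.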
System operators optimize the transmission parameters of a satellite link by carefully consid-
ering a number of trade-offs. In a typical scenario for a broadcast network, material is generated
at multiple locations and requires delivery to multiple destinations by transmitting one or more
carriers over satellite, as dictated by the application. Faced with various size antennas, available
satellite bandwidth, satellite power, and a number of other variables, the operator will tailor the
system to efficiently deliver the data payload. The important tools available to the operator for
dealing with this array of system variables include the selection of the modulation, FEC, and α
value for transmission.
Figure 3.7.1 Overall system block diagram of a digital satellite system. The ATSC standard
described in document A/80 covers the elements noted by the given reference points. (From [1].
Used with permission.)
that produce a digital data stream. This particular point, the accommodation of arbitrary data
streams, is a distinguishing feature of standard A/80.
The subject of the A/80 standard is the segment between the dashed lines designated by the
reference points on Figure 3.7.1; it includes the modulator and demodulator. Only the modula-
tion parameters are specified; the receive equipment is designed to recover the transmitted sig-
nal. The ATSC standard does not preclude combining equipment outside the dashed lines with
the modulator or demodulator, but it sets a logical demarcation between functions.
In the figure, the modulator accepts a data stream and operates upon it to generate an interme-
diate frequency (IF) carrier suitable for satellite transmission. The data are acted upon by for-
ward error correction (FEC), interleaving and mapping to QPSK, 8PSK or 16QAM, frequency
conversion, and other operations to generate the IF carrier. The selection of the modulation type
and FEC affects the bandwidth of the IF signal produced by the modulator. Selecting QPSK,
8PSK, or 16QAM consumes successively less bandwidth as the modulation type changes. It is
possible, then, to use less bandwidth for the same data rate or to increase the data rate through
the available bandwidth by altering the modulation type.
Coding or FEC has a similar impact on bandwidth. More powerful coding adds more infor-
mation to the data stream and increases the occupied bandwidth of the IF signal emitted by the
modulator. There are two types of coding applied in the modulator. An outer Reed-Solomon code
is concatenated with an inner convolutional/trellis code to produce error correction capability
exceeding the ability of either coding method used alone. The amount of coding is referred to as
the code rate, quantified by a dimensionless fraction (k/n) where n indicates the number of bits
out of the encoder given k input bits (e.g., rate 1/2 or rate 7/8). The Reed-Solomon code rate is
Figure 3.7.2 Block diagram of the baseband and modulator subsystem. (From [1]. Used with per-
mission.)
fixed at 188/204 (the RS(204,188) code shown in Figure 3.7.2), but the inner convolutional/trellis
code rate is selectable, offering the opportunity to modify the transmitted IF bandwidth.
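The net data rate through the concatenated coding follows from multiplying the gross rate by both code rates. The symbol rate and inner rate below are example values, not figures from A/80:

```python
from fractions import Fraction

RS_RATE = Fraction(188, 204)  # outer Reed-Solomon (204,188) code rate

def net_data_rate_mbps(symbol_rate_msps, bits_per_symbol, inner_rate):
    """Useful payload rate after both the inner and outer codes."""
    return symbol_rate_msps * bits_per_symbol * float(inner_rate * RS_RATE)

# Example: QPSK (2 bits/symbol) at an assumed 20 Msps with a rate-3/4
# inner code carries about 27.6 Mbps of useful data.
r = net_data_rate_mbps(20.0, 2, Fraction(3, 4))
```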
One consequence of selecting a more bandwidth-efficient modulation or a higher inner code
rate is an increase in the amount of power required to deliver the same level of performance. The
key measure of power is the Eb/No (energy per useful bit relative to the noise power per Hz),
and the key performance parameter is the bit error rate (BER) delivered at a particular Eb/No. For
digital video, a BER of about 10⁻¹⁰ is necessary to produce high-quality video. Thus, noting the
Eb/No required to produce a given BER provides a way of comparing modulation and coding
schemes. It also provides a relative measure of the power required from a satellite transponder, at
least for linear transponder operation.
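Eb/No relates to the more familiar carrier-to-noise ratio through the bit rate and noise bandwidth via the standard identity C/N = Eb/No + 10·log10(Rb/B). The link numbers below are illustrative only:

```python
import math

def cn_db(ebno_db, bit_rate_bps, noise_bw_hz):
    """Carrier-to-noise ratio (dB) implied by a per-bit Eb/No figure."""
    return ebno_db + 10.0 * math.log10(bit_rate_bps / noise_bw_hz)

# Illustrative link: 27.6 Mbps of useful data in a 20 MHz noise bandwidth
# with Eb/No = 7 dB implies a C/N near 8.4 dB.
cn = cn_db(7.0, 27.6e6, 20e6)
```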
The basic processes applied to the data stream are illustrated in Figure 3.7.2. Specifically,
• Packetizing and energy dispersal
• Reed-Solomon outer coding
• Interleaving
• Convolutional inner coding
• Baseband shaping for modulation
• Modulation
The input to the modulator is a data stream of specified characteristics. The physical and elec-
trical properties of the data interface, however, are outside the scope of this standard. The output
of the modulator is an IF signal that is modulated by the processed input data stream. This is the
signal delivered to RF equipment for transmission to the satellite. Table 3.7.1 lists the primary
system inputs and outputs.
The data stream is the digital input applied to the modulator. There are two types of packet
structures supported by the standard, as given in Table 3.7.2.
• Type 1: The packet structure shall be a constant rate MPEG-2 transport stream per ISO/IEC 13818-1 (188 or 204 bytes per packet, including the 0x47 sync byte, MSB first).
• Type 2: The input shall be an arbitrary constant rate data stream. In this case, the modulator takes successive 187-byte portions from the stream and prepends a 0x47 sync byte to each portion to create a 188-byte MPEG-2-like packet. (The demodulator will remove this packetization so as to deliver the original, arbitrary stream at the demodulator output.)
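The Type 2 behavior described above can be sketched directly (a minimal illustration, not a reference implementation):

```python
SYNC = b"\x47"
PAYLOAD = 187  # bytes of the arbitrary stream carried per packet

def packetize(stream: bytes):
    """Yield 188-byte MPEG-2-like packets from an arbitrary byte stream.

    Assumes len(stream) is a multiple of 187, as for a constant-rate feed.
    """
    for i in range(0, len(stream), PAYLOAD):
        yield SYNC + stream[i:i + PAYLOAD]

def depacketize(packets):
    """Demodulator side: strip each sync byte to recover the original stream."""
    return b"".join(p[1:] for p in packets)

data = bytes(range(187)) * 3        # stand-in arbitrary stream
pkts = list(packetize(data))        # three 188-byte packets, each led by 0x47
assert depacketize(pkts) == data    # the packetization is transparent
```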
requiring extensions. Transmission and conditional access subsystems are not defined, allowing
service providers to use existing subsystems.
The ATSC DTH satellite broadcast system consists of two major elements:
• The transmission system
• Integrated receiver decoder (IRD), commonly referred to as a set-top box (STB).
Figure 3.7.3 Overview of the DTH satellite transmission system. (From [2]. Used with permission.)
Each multi-program transport stream output from the emission multiplexer to a modulator must
conform with:
• The transport, audio, and video format extensions defined for satellite delivery in standard A/
81.
• System information with all the normative elements from the ATSC PSIP standard (A/65)
and satellite extensions, such as the satellite virtual channel table (defined in A/81).
Transport streams at the output of the emission multiplex may also carry additional informa-
tion to support delivery of system-specific data (such as DVB-SI, ATSC A/56, control data, EIA-
608B captions using ANSI/SCTE 20, and MPEG-1 Layer 2 audio).
Figure 3.7.4 Functional block diagram of an IRD system. (From [2]. Used with permission.)
3.7.3d PSIP
Some Program and System Information Protocol (PSIP) tables used in the DTH satellite broad-
cast standard are common with terrestrial broadcast and/or cable systems defined in ATSC docu-
ment A/65 [2].
The following tables must be included in all ATSC-compliant transport streams to be trans-
mitted via satellite broadcast:
• The satellite virtual channel table (SVCT) defining, at a minimum, the virtual channel struc-
ture for the collection of MPEG-2 programs embedded in the transport stream in which the
SVCT is carried.
• The master guide table (MGT) defining the type, packet identifiers, and versions for all of the
other satellite PSIP tables included in the transport stream, except for the system time table
(STT).
• The rating region table (RRT) defining the TV parental guideline system referenced by any
content advisory descriptor carried within the transport stream.
Table 3.7.3 Allowed Compression Formats Under ATSC DTH Satellite Standard (After [2].)
vertical_size_value  horizontal_size_value  aspect_ratio_information  frame_rate_code     Progressive/Interlaced
1080                 1280                   3                         1, 2, 4, 5, 7, 8    P
1080                 1280                   3                         4, 5, 7, 8          I
1080                 1920                   1, 3                      1, 2, 4, 5, 7, 8    P
1080                 1920                   1, 3                      4, 5, 7, 8          I
1080                 1440                   3                         1, 2, 4, 5, 7, 8    P
1080                 1440                   3                         4, 5, 7, 8          I
720                  1280                   1, 3                      1, 2, 4, 5, 7, 8    P
480                  720                    2, 3                      1, 2, 4, 5, 7, 8    P
480                  720                    2, 3                      4, 5                I
480                  704                    2, 3                      1, 2, 4, 5, 7, 8    P
480                  704                    2, 3                      4, 5                I
480                  640                    1, 2                      1, 2, 4, 5, 7, 8    P
480                  640                    1, 2                      4, 5                I
480                  544                    2                         1                   P
480                  544                    2                         4                   I
480                  480                    2                         4, 5                I
480                  528                    2                         1                   P
480                  528                    2                         4                   I
480                  352                    2                         1                   P
480                  352                    2                         4                   I
Legend for MPEG-2 Coded Values
aspect_ratio_information: 1 = square samples, 2 = 4:3 display aspect ratio, 3 = 16:9 display aspect ratio
frame_rate_code: 1 = 23.976 Hz, 2 = 24 Hz, 4 = 29.97 Hz, 5 = 30 Hz, 7 = 59.94 Hz, 8 = 60 Hz
Progressive/Interlaced: I = interlaced scan, P = progressive scan
• The system time table (STT), defining the current date and time of day and daylight savings
time transition timing.
• The first four aggregate event information tables (AEIT-0, AEIT-1, AEIT-2, and AEIT-3),
which deliver event title and schedule information that may be used to support an electronic
program guide application.
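For stream validation, the allowed-format table (Table 3.7.3) can be encoded as a simple lookup keyed on the coded values. The sketch below transcribes only a few of the table's rows:

```python
# (vertical, horizontal, aspect_ratio_information, frame_rate_code, scan)
# Partial transcription of Table 3.7.3 -- 1080- and 720-line rows only.
ALLOWED = set()
for ar in (1, 3):
    for fr in (1, 2, 4, 5, 7, 8):
        ALLOWED.add((1080, 1920, ar, fr, "P"))
        ALLOWED.add((720, 1280, ar, fr, "P"))
    for fr in (4, 5, 7, 8):
        ALLOWED.add((1080, 1920, ar, fr, "I"))

def is_allowed(vertical, horizontal, ar, fr, scan):
    """True if the parameter combination appears in the (partial) table."""
    return (vertical, horizontal, ar, fr, scan) in ALLOWED

ok = is_allowed(1080, 1920, 3, 4, "I")   # listed in Table 3.7.3
bad = is_allowed(720, 1280, 3, 4, "I")   # 720p has no interlaced entries
```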
3.7.4 References
1. ATSC Standard: “Modulation And Coding Requirements For Digital TV (DTV) Applica-
tions Over Satellite,” Doc. A/80, Advanced Television Systems Committee, Washington,
D.C., July 17, 1999.
2. ATSC Standard: “Direct-to-Home Satellite Broadcast Standard,” Doc. A/81, Advanced
Television Systems Committee, Washington, D.C., 2003.
3. ATSC Standard: “Standard for Coding 25/50 Hz Video,” Doc. A/63, Advanced Television
Systems Committee, Washington, D.C., May 2, 1997.
Chapter
3.8
The DVB Standard
3.8.1 Introduction
The roots of European digital television can be traced back to the early 1980s, when development
work began on multiplexed analog component (MAC) hardware. Designed from the outset as a
direct broadcast satellite (DBS) service, MAC went through a number of infant stages, including
A, B, C, and E; MAC finally reached maturity in its D and D2 form. High-definition versions of
the transmission formats also were developed with an eye toward European-based HDTV pro-
gramming.
to add to the MPEG transport stream the necessary elements to bring digital television to the
home through cable, satellite, and terrestrial broadcast systems. Interactive television also was
examined to see how DVB might fit into such a framework for new video services.
MPEG-2
MPEG-2 specifies a data-stream syntax, and the system designer is given a “toolbox” from
which to make up systems incorporating greater or lesser degrees of sophistication [1]. In this
way, services avoid being overengineered, yet are able to respond fully to market requirements
and are capable of evolution.
The sound-coding system specified for all DVB applications is the MPEG audio standard
MPEG Layer II (MUSICAM), which is an audio coding system used for many audio products
and services throughout the world. MPEG Layer II takes advantage of the fact that a given sound
element will have a masking effect on lower-level sounds (or on noise) at nearby frequencies.
This is used to facilitate the coding of audio at low data rates. Sound elements that are present,
but would not be heard even if reproduced faithfully, are not coded. The MPEG Layer II system
can achieve a sound quality that is, subjectively, very close to the compact disc. The system can
be used for mono, stereo, or multilingual sound, and (in later versions) surround sound.
The first users of DVB digital satellite and cable services planned to broadcast signals up to
and including MPEG-2 Main Profile at Main Level, thus forming the basis for first-generation
European DVB receivers. Service providers, thus, were able to offer programs giving up to “625-
line studio quality” (ITU-R Rec. 601), with either a 4:3, 16:9, or 20:9 aspect ratio.
Having chosen a given MPEG-2 compliance point, the service provider also must decide on
the operating bit rates (variable or constant) that will be used. In general, the higher the bit rate,
the greater the proportion of transmitted pictures that are free of coding artifacts. Nevertheless,
the law of diminishing returns applies, so the relationship of bit rate to picture quality was given
careful consideration.
To complicate the choice, MPEG-2 encoder design has a major impact on receiver picture
quality. In effect, the MPEG-2 specification describes only syntax laws, thus leaving room for
technical-quality improvements in the encoder. Early tests by DVB partners established the
approximate relationship between bit rate and picture quality for the Main Profile/Main Level,
on the basis of readily available encoding technology. These tests suggested the following:
• To comply with ITU-R Rec. 601 “studio quality” on all material, a bit rate of up to approxi-
mately 9 Mbits/s was required.
• To match current “NTSC/PAL/SECAM quality” on most television material, a bit rate of 2.5
to 6 Mbits/s was required, depending upon the program material.
• Film material (24/25 pictures/s) is easier to code than scenes shot with a video camera, and it
also looked good at lower bit rates.
synchronization information necessary for the decoder to produce a complete video signal.
MPEG-2 also allows a separate service information (SI) system to complement the PSI.
DVB-SI
As envisioned by the system planners of DVB, the viewer of the future will be capable of receiv-
ing a multitude (perhaps hundreds) of channels via the DVB integrated receiver decoder (IRD)
[1]. These services could range from interactive television to near video-on-demand to special-
ized programming. To sort out the available offerings, the DVB-SI provides the elements neces-
sary for the development of an electronic program guide (EPG)—a feature which, it was
believed, would become an important part of new digital television services.
Key data necessary for the DVB IRD to automatically configure itself is provided for in the
MPEG-2 PSI. DVB-SI adds information that enables DVB IRDs to automatically tune to partic-
ular services and allows services to be grouped into categories with relevant schedule informa-
tion. Other information provided includes:
• Program start time
• Name of the service provider
• Classification of the event (sports, news, entertainment, and so on)
DVB-SI is based on four tables, plus a series of optional tables. Each table contains descriptors
outlining the characteristics of the services and events being described. The tables are:
• Network information table (NIT). The NIT groups together services belonging to a particular
network provider. It contains all of the tuning information that might be used during the setup
of an IRD. It also is used to signal a change in the tuning information.
• Service description table (SDT). The SDT lists the names and other parameters associated
with each service in a particular MPEG multiplex.
• Event information table (EIT). The EIT is used to transmit information relating to all the
events that occur or will occur in the MPEG multiplex. The table contains information about
the current transport and optionally covers other transport streams that the IRD can receive.
• Time and date table (TDT). The TDT is used to update the IRD internal clock.
In addition, there are three optional SI tables:
• Bouquet association table (BAT). The BAT provides a means of grouping services that might
be used as one way an IRD presents the available services to the viewer. A particular service
can belong to one or more “bouquets.”
• Running status table (RST). The sections of the RST are used to rapidly update the running
status of one or more events. The running status sections are sent out only once—at the time
the status of an event changes. The other SI tables normally are repetitively transmitted.
• Stuffing table (ST). The ST may be used to replace or invalidate either a subtable or a com-
plete SI table.
With these tools, DVB-SI covers the range of practical scenarios, facilitating a seamless tran-
sition between satellite and cable networks, near video-on-demand, and other operational config-
urations.
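The relationship among these tables can be modeled as simple records. A minimal sketch in Python follows; the dataclass names, field names, and ID values are illustrative, not the section syntax defined in ETS 300 468:

```python
from dataclasses import dataclass, field

@dataclass
class Service:
    service_id: int
    provider_name: str
    service_name: str

@dataclass
class Event:
    event_id: int
    start_time: str          # start time of the event
    duration_s: int
    classification: str      # sports, news, entertainment, and so on

@dataclass
class ServiceDescriptionTable:
    """SDT: names and parameters of each service in one multiplex."""
    transport_stream_id: int
    services: list = field(default_factory=list)

@dataclass
class EventInformationTable:
    """EIT: event and schedule information for one service."""
    service_id: int
    events: list = field(default_factory=list)

# A toy EPG entry assembled from SDT and EIT data (values are made up):
sdt = ServiceDescriptionTable(1, [Service(0x0101, "Example TV", "Channel One")])
eit = EventInformationTable(0x0101, [Event(1, "2000-09-01T20:00Z", 3600, "news")])

def epg_line(sdt, eit):
    """Join service names from the SDT with event data from the EIT."""
    svc = {s.service_id: s for s in sdt.services}
    e = eit.events[0]
    return f"{svc[eit.service_id].service_name}: {e.classification} at {e.start_time}"
```

An EPG application would walk the EIT events for each SDT service in exactly this way, which is why the two tables share the service_id key.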
DVB-S
DVB-S is a satellite-based delivery system designed to operate within a range of transponder
bandwidths (26 to 72 MHz) accommodated by European satellites such as the Astra series,
Eutelsat series, Hispasat, Telecom series, Tele-X, Thor, TDF-1 and 2, and DFS [1].
DVB-S is a single carrier system, with the payload at its core. Surrounding this core are a
series of layers intended not only to make the signal less sensitive to errors, but also to arrange
the payload in a form suitable for broadcasting. The video, audio, and other data are inserted into
fixed-length MPEG transport-stream packets. This packetized data constitutes the payload. A
number of processing stages follow:
• The data is formed into a regular structure by inverting synchronization bytes every eighth
packet header.
• The contents are randomized.
• Reed-Solomon forward error correction (FEC) overhead is added to the packet data. This sys-
tem, which adds less than 12 percent overhead to the signal, is known as the outer code. All
delivery systems have a common outer code.
• Convolutional interleaving is applied to the packet contents.
• Another error-correction system, which uses a punctured convolutional code, is added. This
second error-correction system, the inner code, can be adjusted (in the amount of overhead)
to suit the needs of the service provider.
• The signal modulates the satellite broadcast carrier using quadrature phase-shift keying
(QPSK).
In essence, between the multiplexing and the physical transmission, the system is tailored to the
specific channel properties. The system is arranged to adapt to the error characteristics of the
channel. Burst errors are randomized, and two layers of forward error correction are added. The
second level (inner code) can be adjusted to suit the operational circumstances (power, dish size,
bit rate available, and other parameters).
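The randomizing (energy-dispersal) stage can be sketched as a pseudo-random binary sequence generator. DVB specifies a 15-stage shift register with generator polynomial 1 + x^14 + x^15 and initialization word 100101010000000; the sketch below omits the framing details (re-initialization every eight packets, sync bytes left unscrambled), so treat it as illustrative rather than a spec-exact implementation:

```python
def prbs_bits(n, taps=(14, 15), init="100101010000000"):
    """Generate n bits from a 15-stage LFSR of the 1 + x^14 + x^15 form."""
    reg = [int(b) for b in init]        # shift register, stage 1 first
    out = []
    for _ in range(n):
        fb = reg[taps[0] - 1] ^ reg[taps[1] - 1]   # stage 14 XOR stage 15
        out.append(fb)
        reg = [fb] + reg[:-1]                      # shift the feedback bit in
    return out

def scramble(data_bits, prbs):
    """Energy dispersal: XOR payload bits with the PRBS sequence."""
    return [d ^ p for d, p in zip(data_bits, prbs)]
```

Because scrambling is a plain XOR, applying the same sequence a second time in the receiver restores the original payload, and a maximal-length 15-stage register repeats only every 2^15 − 1 = 32,767 bits.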
DVB-C
The cable network system, known as DVB-C, has the same core properties as the satellite sys-
tem, but the modulation is based on quadrature amplitude modulation (QAM) rather than QPSK,
and no inner-code forward error correction is used [1]. The system is centered on 64-QAM, but
lower-level systems, such as 16-QAM and 32-QAM, also can be used. In each case, the data
capacity of the system is traded against robustness of the data. In terms of capacity, an 8 MHz
channel can accommodate a payload capacity of 38.5 Mbits/s if 64-QAM is used, without spill-
over into adjacent channels.
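The capacity trade can be quantified. Assuming the symbol rate commonly used in an 8 MHz DVB-C channel (6.952 Msymbols/s, an assumption here) and the RS(204,188) outer-code overhead, the net payload rate for each constellation is:

```python
def dvb_c_payload(bits_per_symbol, symbol_rate=6.952e6):
    """Net payload rate: gross channel bit rate reduced by RS(204,188) overhead."""
    gross = bits_per_symbol * symbol_rate       # bits/s on the channel
    return gross * 188 / 204                    # useful MPEG-2 TS bits/s

for m, name in ((4, "16-QAM"), (5, "32-QAM"), (6, "64-QAM")):
    print(f"{name}: {dvb_c_payload(m) / 1e6:.1f} Mbits/s")
```

64-QAM works out to roughly 38.4 Mbits/s under these assumptions, within rounding of the 38.5 Mbits/s figure quoted above, while 16-QAM drops to about two-thirds of that in exchange for greater robustness.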
DVB-MC
The DVB-MC digital multipoint distribution system uses microwave frequencies below approxi-
mately 10 GHz for direct distribution to viewers' homes [1]. Because DVB-MC is based on the
DVB-C cable delivery system, it enables a common receiver to be used for both cable and micro-
wave transmissions.
DVB-MS
The DVB-MS digital multipoint distribution system uses microwave frequencies above approxi-
mately 10 GHz for direct distribution to viewers' homes [1]. Because this system is based on the
DVB-S satellite delivery system, DVB-MS signals can be received by DVB-S satellite receivers.
The receiver must be equipped with a small microwave multipoint distribution system (MMDS)
frequency converter, rather than a satellite dish.
DVB-T
DVB-T is the system specification for the terrestrial broadcasting of digital television signals [1].
DVB-T was approved by the DVB Steering Board in December 1995. This work was based on a
set of user requirements produced by the Terrestrial Commercial Module of the DVB project.
DVB members contributed to the technical development of DVB-T through the DTTV-SA (Dig-
ital Terrestrial Television—Systems Aspects) of the Technical Module. The European Projects
SPECTRE, STERNE, HD-DIVINE, HDTVT, dTTb, and several other organizations developed
system hardware and produced test results that were fed back to DTTV-SA.
As with the other DVB standards, MPEG-2 audio and video coding forms the payload of
DVB-T. Other elements of the specification include:
• A transmission scheme based on orthogonal frequency-division multiplexing (OFDM), which
allows for the use of either 1705 carriers (usually known as 2k), or 6817 carriers (8k). Concat-
enated error correction is used. The 2k mode is suitable for single-transmitter operation and
for relatively small single-frequency networks with limited transmitter power. The 8k mode
can be used both for single-transmitter operation and for large-area single-frequency net-
works. The guard interval is selectable.
• Reed-Solomon outer coding and outer convolutional interleaving are used, as with the other
DVB standards.
• The inner coding (punctured convolutional code) is the same as that used for DVB-S.
• The data carriers in the coded orthogonal frequency-division multiplexing (COFDM) frame
can use QPSK and different levels of QAM modulation and code rates to trade bits for rug-
gedness.
• Two-level hierarchical channel coding and modulation can be used, but hierarchical source
coding is not used. The latter was deemed unnecessary by the DVB group because its benefits
did not justify the extra receiver complexity that was involved.
• The modulation system combines OFDM with QPSK/QAM. OFDM uses a large number of
carriers that spread the information content of the signal. Used successfully in DAB (digital
audio broadcasting), OFDM’s major advantage is its resistance to multipath.
Improved multipath immunity is obtained through the use of a guard interval, which is a por-
tion of the digital signal given away for echo resistance. This guard interval reduces the transmis-
sion capacity of OFDM systems. However, the greater the number of OFDM carriers provided,
for a given maximum echo time delay, the less transmission capacity is lost. But, certainly, a
tradeoff is involved. Simply increasing the number of carriers has a significant, detrimental
impact on receiver complexity and on phase-noise sensitivity.
Because of the favorable multipath performance of OFDM, it is possible to operate an over-
lapping network of transmitting stations with a single frequency. In the areas of overlap, the
weaker of the two received signals is similar to an echo signal. However, if the two transmitters
are far apart, causing a large time delay between the two signals, the system will require a large
guard interval.
The potential exists for three different operating environments for digital terrestrial television
in Europe:
• Broadcasting on a currently unused channel, such as an adjacent channel, or broadcasting on
a clear channel
• Broadcasting in a small-area single-frequency network (SFN)
• Broadcasting in a large-area SFN
One of the main challenges for DVB-T developers was that different operating environments
lead to somewhat different optimum OFDM systems. The common 2k/8k specification was
developed to offer solutions for most operating environments.
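The relationships among these parameters can be sketched numerically. For an 8 MHz channel, DVB-T uses an elementary period T = 7/64 µs, giving a useful symbol duration of 224 µs in 2k mode and 896 µs in 8k mode, with the carrier spacing equal to the reciprocal of the useful duration. The constants below are the commonly quoted values and are shown here for illustration:

```python
T = 7 / 64 * 1e-6          # elementary period for an 8 MHz channel, seconds

def ofdm_mode(fft_size, used_carriers):
    tu = fft_size * T                   # useful symbol duration, seconds
    spacing = 1 / tu                    # sub-carrier spacing, Hz
    bw = used_carriers * spacing        # occupied signal bandwidth, Hz
    return tu, spacing, bw

tu2k, sp2k, bw2k = ofdm_mode(2048, 1705)   # 2k mode: 1705 carriers
tu8k, sp8k, bw8k = ofdm_mode(8192, 6817)   # 8k mode: 6817 carriers

# A guard interval of Tu/4 sacrifices 1/5 of the total symbol time:
guard_loss = (tu8k / 4) / (tu8k + tu8k / 4)
```

The 8k mode packs four times as many carriers into the same bandwidth, so each symbol lasts four times longer and a given echo delay consumes proportionally less of the symbol, which is why 8k suits large-area single-frequency networks.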
Figure 3.8.1 Frequency domain representation of orthogonal carriers. (From [2]. Used with per-
mission.)
Figure 3.8.2 Positions of pilots in the OFDM symbol. (From [2]. Used with permission.)
bol duration. Only then, the unused time of the guard interval can be a modest fraction of the
useful time.
• Reference signals, called pilots, are inserted in the frequency domain. The position, phase,
and energy level of the pilots are pre-defined, as shown in Figure 3.8.2, enabling the receiver
to reconstruct the shape of the channel. Knowing the fading, amplification, and phase-shift of
all the individual sub-carriers, the receiver is able to equalize each subcarrier. This reverses
effects of the transmission channel.
An echo is a copy of the original signal delayed in time. Problems can occur when one OFDM
symbol overlaps with the next one. There is no correlation between two consecutive OFDM sym-
bols and therefore interference from one symbol with the other will result in a disturbed signal.
Because of the efficient spectrum usage, the interfering signal very much resembles white noise.
The DVB-T standard adds a cyclic copy of the last part of the OFDM symbol in front of the sym-
bol to overcome this problem. This guard-interval protects the OFDM symbol from being dis-
turbed by its predecessor, as illustrated in Figure 3.8.3.
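The guard-interval mechanism can be sketched numerically (using numpy; the symbol size, guard length, and echo parameters are arbitrary choices). The last quarter of the symbol is copied in front of it, so an echo shorter than the guard corrupts only the discarded prefix and each carrier sees a simple complex gain:

```python
import numpy as np

N, G = 64, 16                    # carriers and guard length (Tu/4)
rng = np.random.default_rng(0)
C = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], N)   # QPSK data per carrier
sym = np.fft.ifft(C)             # time-domain OFDM symbol

tx = np.concatenate([sym[-G:], sym])   # guard interval: cyclic copy of the tail

# Channel: main path plus one echo, delayed by less than the guard interval
delay, a = 5, 0.4
rx = tx.copy()
rx[delay:] = rx[delay:] + a * tx[:-delay]

core = rx[G:]                    # the receiver discards the guard interval
k = np.arange(N)
H = 1 + a * np.exp(-2j * np.pi * k * delay / N)   # known channel response
C_rx = np.fft.fft(core) / H      # one-tap equalization recovers the data
```

Because the prefix is a cyclic copy, the echo turns the linear delay into a circular shift over the useful part of the symbol, and the channel reduces to one complex multiplication per carrier.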
When ISI occurs, the resulting signal to noise ratio can be described as [2]
SNR = 10 · log10 [ (A · t) / (A′ · Tu) ] dB    (3.8.1)
Where:
A’ = the power of the echo
A = the power of the original signal
t = the duration of interference in complex samples
Tu = the duration of one OFDM symbol without guard interval
The receiver must find a window of duration Tu within the Tu plus Tg time frame that suffers
from minimal ISI.
The OFDM method uses N carriers. At least N complex discrete time samples are required to
represent the OFDM symbol. The N complex time domain samples (0…N – 1) resulting from a
single sub-carrier k modulated with Ck within an OFDM symbol are
Figure 3.8.3 Protection of the OFDM symbol against echoes through the use of a guard-interval.
(From [2]. Used with permission.)
sofdm_k[n] = (Ck / N) · e^(j·2π·k·n/N)    (3.8.2)
Where:
N = number of sub-carriers and time-domain samples used
n = time-domain sample index (0…N – 1)
k = sub-carrier index (0…N – 1)
Ck = complex phase and amplitude information to be transmitted
Both k and Ck are constant for a single sub-carrier during the period of an OFDM-symbol.
When examining Equation (3.8.2), it appears that the N complex samples for sub-carrier k rotate
exactly k circles in the complex plane during the useful period of an OFDM-symbol.
The complete time-domain symbol is constructed from the N subcarriers by super-imposing
these waves onto each other
sofdm[n] = Σ_(k=0…N−1) sofdm_k[n]    (3.8.3)
Inside the DVB-T receiver, the OFDM signal is analyzed by applying an FFT to the time-
domain signal. The originally transmitted information is reconstructed by comparing each sub-
carrier with a reference subcarrier of known phase and amplitude, and equal frequency
sref_k[n] = 1 · e^(j·2π·k·n/N)    (3.8.4)
As a result of the orthogonality of the N subcarriers, a zero result is obtained in the FFT for any
other subcarrier than the reference
(l ≠ k) ⇒ Σ_(n=0…N−1) sofdm_l[n] / sref_k[n] = 0    (3.8.5)
Therefore, to isolate the transmitted information for subcarrier k, the complete OFDM-symbol
can be analyzed without error using an FFT
Σ_(n=0…N−1) sofdm[n] / sref_k[n] = C′k    (3.8.6)
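The chain from Equation (3.8.2) to (3.8.6) is exactly an inverse DFT followed by a forward DFT, which a short numpy check confirms (N and the data values are arbitrary):

```python
import numpy as np

N = 16
rng = np.random.default_rng(1)
Ck = rng.normal(size=N) + 1j * rng.normal(size=N)   # data on each sub-carrier

# Equation (3.8.3): the symbol is the superposition of N rotating carriers.
# np.fft.ifft computes exactly (1/N) * sum_k Ck * exp(j*2*pi*k*n/N).
s_ofdm = np.fft.ifft(Ck)

# Equations (3.8.4)-(3.8.6): divide by each reference carrier and sum over n.
# Orthogonality makes every term except the k-th vanish.
n = np.arange(N)
C_rec = np.array([np.sum(s_ofdm / np.exp(2j * np.pi * k * n / N))
                  for k in range(N)])
```

In a practical receiver the per-carrier division and summation are of course performed in one step by the FFT, as the text notes.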
For a signal arriving delayed by Δ complex samples (an echo), the sub-carrier samples become

sofdm_k[n] = (Ck / N) · e^(j·2π·k·(n−Δ)/N)    (3.8.7)
where Δ = delay in units of complex samples. Then, this yields at the output of the FFT
Σ_(n=0…N−1) sofdm[n] / sref_k[n] = C′k · e^(j·2π·k·Δ/N)    (3.8.8)
Equation (3.8.8) shows that a delay in the input signal causes a rotation over the carriers in the
frequency domain. Adding this delayed signal to the original will result in fading and amplifica-
tion of different parts of the frequency domain. This effect is graphically shown in Figure 3.8.4.
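This rotation-and-fading behavior is easy to reproduce numerically. In the sketch below the delay, echo amplitude, and exponent sign follow numpy's FFT convention rather than the notation above:

```python
import numpy as np

N, delta, a = 64, 8, 0.5
rng = np.random.default_rng(2)
Ck = rng.choice([1 + 1j, -1 - 1j], N)   # data on the sub-carriers
s = np.fft.ifft(Ck)

received = s + a * np.roll(s, delta)    # main signal plus an echo at delta samples

k = np.arange(N)
C_rx = np.fft.fft(received)
# Each carrier is multiplied by H[k] = 1 + a*exp(-j*2*pi*k*delta/N): the
# magnitude of H swings between 1 - a and 1 + a, repeating every N/delta
# carriers -- the periodic fading pattern sketched in Figure 3.8.4.
H = 1 + a * np.exp(-2j * np.pi * k * delta / N)
```

The longer the echo delay, the faster the fading pattern ripples across the carriers, which is what ultimately limits the pilot-based interpolation discussed below.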
In order to reconstruct this distortion of the channel, the OFDM symbol contains reference
signals (pilots). They are all modulated with known phase and amplitude information (Ck ). After
collecting pilots from four symbols, channel information is available at every third sub-carrier.
The missing information for the two sub-carriers in between the reference signals is obtained
through interpolation. Of course, this interpolation sets a limit to the maximum frequency of the
distortion and therefore to the maximum delay (Δ) of the echo. Figure 3.8.5 shows this limitation
when using a standard interpolation technique.
Delays in the range of –N/6 and +N/6 can be resolved. The OFDM symbol is protected by the
guard interval from echoes up to a delay of N/4 [2]. Therefore, it makes sense to shift the interpo-
lation filter range to cope with delay in the range of 0 to N/3 (Figure 3.8.6).
Figure 3.8.4 Fading effect of an echo. (From [2]. Used with permission.)
Figure 3.8.5 Limitation caused by a small number of reference signals. (From [2]. Used with per-
mission.)
Figure 3.8.6 Shift of the interpolation filter towards 0 to N/3 distortion range. (From [2]. Used with
permission.)
Figure 3.8.8 Delay input signal of FFT to shift pre-echo into interpolation filters range. (From [2].
Used with permission.)
When the receiver locks to the strongest signal rather than to the first-arriving one in the case of a pre-echo, that pre-echo cannot be resolved by the shifted interpolation filter. The pre-echo at –Δ is aliased into the interpolation filter's range as an echo at N/3 – Δ, which in the case of a strong pre-echo is disastrous (see Figure 3.8.7). Additionally, when synchronized in this manner, the pre-echo causes ISI.
For the interpolation filter to reconstruct the distorted channel correctly, the input signal of
the FFT must be delayed in time. The receiver then synchronizes to the pre-echo that will yield
the channel distortion shown in Figure 3.8.8. The interpolation filter can resolve this distortion and,
therefore, the originally transmitted information can be retrieved without error.
In order to synchronize to a weak pre-echo, the receiver must be able to detect its presence.
Echoes can be detected by performing an inverse Fourier transform on the available reference
carriers. This yields the channel impulse response. The impulse response can be scanned for pre-
echoes.
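That detection step can be sketched as follows; for simplicity the channel response is assumed known on every carrier rather than only at the pilot positions:

```python
import numpy as np

N, delta, a = 64, 11, 0.3
k = np.arange(N)

# Frequency response of a main path plus one echo at delay `delta`
H = 1 + a * np.exp(-2j * np.pi * k * delta / N)

h = np.fft.ifft(H)               # channel impulse response
# |h| has a dominant tap at 0 (the main path) and a smaller tap at the
# echo delay; scanning the taps reveals the echo's position and strength.
echo_delay = int(np.argmax(np.abs(h[1:]))) + 1
echo_gain = float(np.abs(h[echo_delay]))
```

A real receiver performs the same inverse transform on the subsampled pilot grid, so the observable delay range is correspondingly smaller, as discussed above.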
3.8.4 References
1. Based on technical reports and background information provided by the DVB Consortium.
2. van Klinken, N., and W. Renirie: “Receiving DVB: Technical Challenges,” Proceedings of
the International Broadcasting Convention, IBC, Amsterdam, September 2000.
3.8.5 Bibliography
ETS-300-421, “Digital Broadcasting Systems for Television, Sound, and Data Services; Framing
Structure, Channel Coding and Modulation for 11–12 GHz Satellite Services,” DVB
Project technical publication.
ETS-300-429, “Digital Broadcasting Systems for Television, Sound, and Data Services; Framing
Structure, Channel Coding and Modulation for Cable Systems,” DVB Project technical
publication.
ETS-300-468, “Digital Broadcasting Systems for Television, Sound, and Data Services; Specifi-
cation for Service Information (SI) in Digital Video Broadcasting (DVB) Systems,” DVB
Project technical publication.
ETS-300-472, “Digital Broadcasting Systems for Television, Sound, and Data Services; Specifi-
cation for Carrying ITU-R System B Teletext in Digital Video Broadcasting (DVB) Bit-
streams,” DVB Project technical publication.
ETS-300-473, “Digital Broadcasting Systems for Television, Sound, and Data Services; Satellite
Master Antenna Television (SMATV) Distribution Systems,” DVB Project technical publi-
cation.
European Telecommunications Standards Institute: “Digital Video Broadcasting; Framing Struc-
ture, Channel Coding and Modulation for Digital Terrestrial Television (DVB-T)”, March
1997.
Lee, E. A., and D. G. Messerschmitt: Digital Communication, 2nd ed., Kluwer, Boston, Mass.,
1994.
Muschallik, C.: “Improving an OFDM Reception Using an Adaptive Nyquist Windowing,” IEEE
Trans. on Consumer Electronics, no. 03, 1996.
Pollet, T., M. van Bladel, and M. Moeneclaey: “BER Sensitivity of OFDM Systems to Carrier
Frequency Offset and Wiener Phase Noise,” IEEE Trans. on Communications, vol. 43,
1995.
Robertson, P., and S. Kaiser: “Analysis of the Effects of Phase-Noise in Orthogonal Frequency
Division Multiplex (OFDM) Systems,” ICC 1995, pp. 1652–1657, 1995.
Sari, H., G. Karam, and I. Jeanclaude: “Channel Equalization and Carrier Synchronization in
OFDM Systems,” IEEE Proc. 6th. Tirrenia Workshop on Digital Communications, Tirre-
nia, Italy, pp. 191–202, September 1993.
Section 4
RF Interconnection Devices and Systems
Skin Effect
The effective resistance offered by a given conductor to radio frequencies is considerably higher
than the ohmic resistance measured with direct current. This is because of an action known as the
skin effect, which causes the currents to be concentrated in certain parts of the conductor and
leaves the remainder of the cross section to contribute little or nothing toward carrying the
applied current.
When a conductor carries an alternating current, a magnetic field is produced that surrounds
the wire. This field continually expands and contracts as the ac wave increases from zero to its
maximum positive value and back to zero, then through its negative half-cycle. The changing
magnetic lines of force cutting the conductor induce a voltage in the conductor in a direction that
tends to retard the normal flow of current in the wire. This effect is more pronounced at the cen-
ter of the conductor. Thus, current within the conductor tends to flow more easily toward the sur-
face of the wire. The higher the frequency, the greater the tendency for current to flow at the
surface. The depth of current flow is a function of frequency, and it is determined from the equation d = 2.6 ⁄ √(μf), where d = depth of current in mils, μ = permeability (copper = 1, steel =
300), and f = frequency of the signal in MHz.
It can be calculated that at a frequency of 100 kHz, current flow penetrates a conductor by 8
mils. At 1 MHz, the skin effect causes current to travel in only the top 2.6 mils in copper, and
even less in almost all other conductors. Therefore, the series impedance of conductors at high
frequencies is significantly higher than at low frequencies.
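The quoted penetration depths follow directly from the relation d = 2.6 ⁄ √(μf), with f in MHz and d in mils:

```python
import math

def skin_depth_mils(f_mhz, mu=1.0):
    """Depth of current penetration: d = 2.6 / sqrt(mu * f), f in MHz."""
    return 2.6 / math.sqrt(mu * f_mhz)

d_100khz = skin_depth_mils(0.1)          # copper at 100 kHz: about 8 mils
d_1mhz = skin_depth_mils(1.0)            # copper at 1 MHz: 2.6 mils
d_steel = skin_depth_mils(1.0, mu=300)   # steel at 1 MHz penetrates far less
```

The steel case illustrates why high-permeability conductors are so lossy at radio frequencies: the current is confined to a layer more than an order of magnitude thinner than in copper.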
When a circuit is operating at high frequencies, the skin effect causes the current to be redis-
tributed over the conductor cross section in such a way as to make most of the current flow where
it is encircled by the smallest number of flux lines. This general principle controls the distribu-
tion of current regardless of the shape of the conductor involved. With a flat-strip conductor, the
current flows primarily along the edges, where it is surrounded by the smallest amount of flux.
The skin effect is one of the fundamental physical properties that influences all RF intercon-
nection devices and systems.
Reference Documents for This Section:
Andrew Corporation: “Circular Waveguide: System Planning, Installation and Tuning,” Techni-
cal Bulletin 1061H, Orland Park, Ill., 1980.
Ben-Dov, O., and C. Plummer: “Doubly Truncated Waveguide,” Broadcast Engineering, Intertec
Publishing, Overland Park, Kan., January 1989.
Benson, K. B., and J. C. Whitaker: Television and Audio Handbook for Technicians and Engi-
neers, McGraw-Hill, New York, N.Y., 1989.
Cablewave Systems: “Rigid Coaxial Transmission Lines,” Cablewave Systems Catalog 700,
North Haven, Conn., 1989.
Cablewave Systems: “The Broadcaster’s Guide to Transmission Line Systems,” Technical Bulle-
tin 21A, North Haven, Conn., 1976.
Crutchfield, E. B. (ed.), NAB Engineering Handbook, 8th ed., National Association of Broad-
casters, Washington, D.C., 1992.
DeComier, Bill: “Inside FM Multiplexer Systems,” Broadcast Engineering, Intertec Publishing,
Overland Park, Kan., May 1988.
Fink, D., and D. Christiansen (eds.), Electronics Engineers’ Handbook, 2nd ed., McGraw-Hill,
New York, N.Y., 1982.
Fink, D., and D. Christiansen (eds.): Electronics Engineers’ Handbook, 3rd ed., McGraw-Hill,
New York, N.Y., 1989.
Harrison, Cecil: “Passive Filters,” in The Electronics Handbook, Jerry C. Whitaker (ed.), CRC
Press, Boca Raton, Fla., pp. 279–290, 1996.
Heymans, Dennis: “Hot Switches and Combiners,” Broadcast Engineering, Overland Park,
Kan., December 1987.
Jordan, Edward C.: Reference Data for Engineers: Radio, Electronics, Computer and Communi-
cations, 7th ed., Howard W. Sams, Indianapolis, IN, 1985.
Krohe, Gary L.: “Using Circular Waveguide,” Broadcast Engineering, Intertec Publishing, Over-
land Park, Kan., May 1986.
Perelman, R., and T. Sullivan: “Selecting Flexible Coaxial Cable,” Broadcast Engineering, Inter-
tec Publishing, Overland Park, Kan., May 1988.
Stenberg, James T.: “Using Super Power Isolators in the Broadcast Plant,” Proceedings of the
Broadcast Engineering Conference, Society of Broadcast Engineers, Indianapolis, IN,
1988.
Surette, Robert A.: “Combiners and Combining Networks,” in The Electronics Handbook, Jerry
C. Whitaker (ed.), CRC Press, Boca Raton, Fla., pp. 1368–1381, 1996.
Terman, F. E.: Radio Engineering, 3rd ed., McGraw-Hill, New York, N.Y., 1947.
Whitaker, Jerry C., G. DeSantis, and C. Paulson: Interconnecting Electronic Systems, CRC
Press, Boca Raton, Fla., 1993.
Whitaker, Jerry C.: Radio Frequency Transmission Systems: Design and Operation, McGraw-
Hill, New York, N.Y., 1990.
Vaughan, T., and E. Pivit: “High Power Isolator for UHF Television,” Proceedings of the NAB
Engineering Conference, National Association of Broadcasters, Washington, D.C., 1989.
Chapter 4.1
Transmission Line
4.1.1 Introduction
Two types of coaxial transmission line are in common use today: rigid line and corrugated (semi-
flexible) line. Rigid coaxial cable is constructed of heavy-wall copper tubes with Teflon or
ceramic spacers. (Teflon is a registered trademark of DuPont.) Rigid line provides electrical per-
formance approaching an ideal transmission line, including:
• High power-handling capability
• Low loss
• Low VSWR (voltage standing wave ratio)
Rigid transmission line is, however, expensive to purchase and install.
The primary alternative to rigid coax is semiflexible transmission line made of corrugated
outer and inner conductor tubes with a spiral polyethylene (or Teflon) insulator. The internal con-
struction of a semiflexible line is shown in Figure 4.1.1. Semiflexible line has four primary ben-
efits:
• It is manufactured in a continuous length, rather than the 20-ft sections typically used for
rigid line.
• Because of the corrugated construction, the line may be shaped as required for routing from
the transmitter to the antenna.
• The corrugated construction permits differential expansion of the outer and inner conductors.
Each size of line has a minimum bending radius. For most installations, the flexible nature of
corrugated line permits the use of a single piece of cable from the transmitter to the antenna, with
no elbows or other transition elements. This speeds installation and provides for a more reliable
system.
4-6 RF Interconnection Devices and Systems
Figure 4.1.1 Semiflexible coaxial cable: (a) a section of cable showing the basic construction, (b)
cable with various terminations. (Courtesy of Andrew.)
Vp = 1 ⁄ √(L × C)    (4.1.1)
Where:
L = inductance in henrys per foot
C = capacitance in farads per foot
and
Vr = (Vp ⁄ c) × 100 percent    (4.1.2)
Where:
Vp = velocity of propagation
c = 9.842 × 10^8 feet per second (free-space velocity)
Vr = velocity of propagation as a percentage of free-space velocity
Fc = 7.50 × Vr ⁄ (Di + Do)    (4.1.3)
Where:
Fc = cutoff frequency in gigahertz
Vr = velocity (percent)
Di = inner diameter of outer conductor in inches
Do = outer diameter of inner conductor in inches
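As a numerical sketch of Equations (4.1.1) through (4.1.3), consider an illustrative air-dielectric 50-Ω line. The L, C, and diameter values below are assumed for the example (loosely modeled on a 1-5/8 in line), not taken from a manufacturer's catalog:

```python
import math

c_fps = 9.842e8                 # free-space velocity, feet per second

# Illustrative per-foot constants chosen so that Z0 = sqrt(L/C) = 50 ohms
L = 5.08e-8                     # inductance, henrys per foot (assumed)
C = 2.032e-11                   # capacitance, farads per foot (assumed)

Vp = 1 / math.sqrt(L * C)       # Equation (4.1.1): velocity of propagation
Vr = Vp / c_fps * 100           # Equation (4.1.2): percent of free-space velocity

# Equation (4.1.3), with Vr expressed here as a fraction of free-space velocity
Di = 1.527                      # inner diameter of outer conductor, inches (assumed)
Do = 0.664                      # outer diameter of inner conductor, inches (assumed)
Fc_ghz = 7.50 * (Vr / 100) / (Di + Do)
```

With these assumed values the line propagates at essentially free-space velocity and cuts off in the low-gigahertz region; a foam dielectric would raise C, lowering both Vr and the usable upper frequency.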
At dc, current in a conductor flows with uniform density over the cross section of the conduc-
tor. At high frequencies, the current is displaced to the conductor surface. The effective cross
section of the conductor decreases, and the conductor resistance increases because of the skin
effect.
Center conductors are made from copper-clad aluminum or high-purity copper and can be
solid, hollow tubular, or corrugated tubular. Solid center conductors are found on semiflexible
cable with 1/2 in or smaller diameter. Tubular conductors are found in 7/8 in or larger-diameter
cables. Although the tubular center conductor is used primarily to maintain flexibility, it also can
be used to pressurize an antenna through the feeder.
4.1.2b Dielectric
Coaxial lines use two types of dielectric construction to isolate the inner conductor from the
outer conductor. The first is an air dielectric, with the inner conductor supported by a dielectric
spacer and the remaining volume filled with air or nitrogen gas. The spacer, which may be con-
structed of spiral or discrete rings, typically is made of Teflon or polyethylene. Air-dielectric
cable offers lower attenuation and higher average power ratings than foam-filled cable but
requires pressurization to prevent moisture entry.
Foam-dielectric cables are ideal for use as feeders with antennas that do not require pressur-
ization. The center conductor is surrounded completely by foam-dielectric material, resulting in
a high dielectric breakdown level. The dielectric materials are polyethylene-based formulations,
which contain antioxidants to reduce dielectric deterioration at high temperatures.
4.1.2c Impedance
The expression transmission line impedance applied to a point on a transmission line signifies
the vector ratio of line voltage to line current at that particular point. This is the impedance that
would be obtained if the transmission line were cut at the point in question, and the impedance looking toward the receiver were measured.
Because the voltage and current distribution on a line are such that the current tends to be small when the voltage is large (and vice versa), as shown in Figure 4.1.2, the impedance will, in general, be oscillatory in the same manner as the voltage (large when the voltage is high and small when the voltage is low). Thus, in the case of a short-circuited receiver, the impedance will be high at distances from the receiving end that are odd multiples of 1/4 wavelength, and it will be low at distances that are even multiples of 1/4 wavelength.

Figure 4.1.2 Magnitude and power factor of line impedance with increasing distance from the load for the case of a short-circuited receiver and a line with moderate attenuation: (a) voltage distribution, (b) impedance magnitude, (c) impedance phase.

The extent to which the impedance fluctuates with distance depends on the standing wave ratio (ratio of reflected to incident waves), being less as the reflected wave is proportionally smaller than the incident wave. In the particular case where the load impedance equals the characteristic impedance, the impedance of the transmission line is equal to the characteristic impedance at all points along the line.

The power factor of the impedance of a transmission line varies according to the standing waves present. When the load impedance equals the characteristic impedance, there is no reflected wave, and the power factor of the impedance is equal to the power factor of the characteristic impedance. At radio frequencies, the power factor under these conditions is accordingly resistive. However, when a reflected wave is present, the power factor is unity (resistive) only at the points on the line where the voltage passes through a maximum or a minimum. At other points the power factor will be reactive, alternating from leading to lagging at intervals of 1/4 wavelength. When the line is short-circuited at the receiver, or when it has a resistive load less than the characteristic impedance so that the voltage distribution is of the short-circuit type, the power factor is inductive for lengths corresponding to less than the distance to the first voltage maxi-
mum. Thereafter, it alternates between capacitive and inductive at intervals of 1/4 wavelength.
Similarly, with an open-circuited receiver or with a resistive load greater than the characteristic
impedance so that the voltage distribution is of the open-circuit type, the power factor is capaci-
tive for lengths corresponding to less than the distance to the first voltage minimum. Thereafter,
the power factor alternates between capacitive and inductive at intervals of 1/4 wavelength, as in
the short-circuited case.
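The oscillatory impedance behavior described above can be sketched numerically for a lossless line (an illustrative simplification; the text considers a line with moderate attenuation, and the function name below is invented for the example):

```python
import math

def input_impedance(z0, zl, dist_wl):
    """Impedance seen looking toward a load zl from a point dist_wl
    wavelengths away on a lossless line of characteristic impedance z0:
    Zin = Z0 (ZL + j Z0 tan(beta*l)) / (Z0 + j ZL tan(beta*l))."""
    t = 1j * math.tan(2.0 * math.pi * dist_wl)
    return z0 * (zl + z0 * t) / (z0 + zl * t)

# Short-circuited receiver: the impedance magnitude peaks near odd
# multiples of 1/4 wavelength and dips near even multiples.
for d in (0.125, 0.25, 0.375, 0.5):
    print(d, abs(input_impedance(50.0, 0.0, d)))

# Matched load: the impedance equals Z0 at every point on the line.
print(input_impedance(50.0, 50.0, 0.3))
```

A matched termination returns exactly 50 Ω at any distance, which is the "no standing waves" case discussed in the text.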
Transmission Line 4-9

The critical frequency for a transmission line is given by

Fcr = (490.4 × n) / L    (4.1.4)

Where:
Fcr = the critical frequency, in MHz
n = any integer
L = the transmission line length, in feet

Table 4.1.1 Representative Specifications for Various Types of Flexible Air-Dielectric Coaxial Cable

Cable size (in) | Max. frequency (GHz) | Velocity (percent) | Peak power, 1 MHz (kW) | Avg. power, 1 MHz (kW) | Avg. power, 100 MHz (kW) | Attenuation, 1 MHz (dB, note 1) | Attenuation, 100 MHz (dB, note 1)
1 | 2.7 | 92.1 | 145 | 145 | 14.4 | 0.020 | 0.207
3 | 1.64 | 93.3 | 320 | 320 | 37 | 0.013 | 0.14
4 | 1.22 | 92 | 490 | 490 | 56 | 0.010 | 0.113
5 | 0.96 | 93.1 | 765 | 765 | 73 | 0.007 | 0.079

Note 1: Attenuation specified in dB/100 ft.
For most applications, the critical frequency for a chosen line length should not fall closer
than ±2 MHz of the passband at the operating frequency.
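As a sketch, Eq. (4.1.4) can be applied to list the critical frequencies that land near a given passband. The line length, passband center, and guard band below are example values, not figures from the text:

```python
def critical_frequencies(length_ft, f_low_mhz, f_high_mhz):
    """Critical frequencies per Eq. (4.1.4), Fcr = 490.4 * n / L,
    that fall inside the window [f_low_mhz, f_high_mhz]."""
    hits = []
    n = 1
    while True:
        fcr = 490.4 * n / length_ft
        if fcr > f_high_mhz:
            break
        if fcr >= f_low_mhz:
            hits.append((n, round(fcr, 2)))
        n += 1
    return hits

# A 500 ft run checked against a passband centered at 98.1 MHz with a
# +/- 2 MHz guard: n = 98 through 102 fall inside the window.
print(critical_frequencies(500.0, 96.1, 100.1))
```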
Attenuation is related to the construction of the cable itself and varies with frequency, product
dimensions, and dielectric constant. Larger-diameter cable exhibits lower attenuation than
smaller-diameter cable of similar construction when operated at the same frequency. It follows,
therefore, that larger-diameter cables should be used for long runs.
Air-dielectric coax exhibits less attenuation than comparable-size foam-dielectric cable. The
attenuation characteristic of a given cable also is affected by standing waves present on the line
resulting from an impedance mismatch. Table 4.1.1 shows a representative sampling of semiflex-
ible coaxial cable specifications for a variety of line sizes.
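As an illustrative cross-check of the attenuation figures in Table 4.1.1, conductor (skin-effect) loss grows roughly with the square root of frequency; the scaling below is an approximation that ignores dielectric loss, and the helper names are invented for the example:

```python
import math

def estimate_attenuation(db_per_100ft_at_1mhz, f_mhz):
    """Rough skin-effect scaling: attenuation grows approximately
    with the square root of frequency."""
    return db_per_100ft_at_1mhz * math.sqrt(f_mhz)

def run_loss_db(db_per_100ft, run_ft):
    """Total loss of a run, given attenuation per 100 ft."""
    return db_per_100ft * run_ft / 100.0

# 1-in cable from Table 4.1.1: 0.020 dB/100 ft at 1 MHz.
# Scaled to 100 MHz this predicts ~0.20 dB/100 ft; the table gives 0.207.
print(estimate_attenuation(0.020, 100.0))
# Loss of a 500 ft run of the same cable at 100 MHz:
print(run_loss_db(0.207, 500))
```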
4.1.3 Waveguide
As the operating frequency of a system reaches into the UHF band, waveguide-based transmis-
sion line systems become practical. From the mechanical standpoint, waveguide is simplicity
itself. There is no inner conductor; RF energy is launched into the structure and propagates to the
load. Several types of waveguide are available, including rectangular, square, circular, and ellipti-
cal. Waveguide offers several advantages over coax. First, unlike coax, waveguide can carry
more power as the operating frequency increases. Second, efficiency is significantly better with
waveguide at higher frequencies.
Rectangular waveguide commonly is used in high-power transmission systems. Circular
waveguide also may be used. The physical dimensions of the guide are selected to provide for
propagation in the dominant (lowest-order) mode.
Waveguide is not without its drawbacks, however. Rectangular or square guide constitutes a
large windload surface, which places significant structural demands on a tower. Because of the
physical configuration of rectangular and square guide, pressurization is limited, depending on
the type of waveguide used (0.5 psi is typical). Excessive pressure can deform the guide shape
and result in increased VSWR. Wind also may cause deformation and ensuing VSWR problems.
These considerations have led to the development of circular and elliptical waveguide.
The cutoff frequency for rectangular waveguide is given by

Fc = c / (2 × a)    (4.1.5)

Where:
Fc = waveguide cutoff frequency
c = 1.179 × 10^10 inches per second (the velocity of light)
a = the wide dimension of the guide

The cutoff frequency for circular waveguide is defined by

Fc = c / (3.41 × a′)    (4.1.6)

where a′ = the radius of the guide.
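Equations (4.1.5) and (4.1.6) are easy to apply directly. The guide dimensions below are illustrative values, not taken from the text:

```python
C_IN_PER_SEC = 1.179e10  # velocity of light, inches per second

def rect_cutoff_mhz(a_in):
    """Eq. (4.1.5): Fc = c / (2 a), a = wide dimension in inches."""
    return C_IN_PER_SEC / (2.0 * a_in) / 1e6

def circ_cutoff_mhz(radius_in):
    """Eq. (4.1.6): Fc = c / (3.41 a'), a' = guide radius in inches."""
    return C_IN_PER_SEC / (3.41 * radius_in) / 1e6

# An 18-in wide rectangular guide cuts off at 327.5 MHz:
print(rect_cutoff_mhz(18.0))
# A circular guide of 9-in radius cuts off near 384 MHz:
print(circ_cutoff_mhz(9.0))
```

Operation must be above cutoff but below the frequency at which higher-order modes begin to propagate, which is why guide dimensions are chosen for the dominant mode as noted earlier.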
There are four common propagation modes in waveguide:
• TE 0,1, the principal mode in rectangular waveguide.
• TE 1,0, also used in rectangular waveguide.
• TE 1,1, the principal mode in circular waveguide. TE 1,1 develops a complex propagation pat-
tern with electric vectors curving inside the guide. This mode exhibits the lowest cutoff fre-
quency of all modes, which allows a smaller guide diameter for a specified operating
frequency.
• TM 0,1, which has a slightly higher cutoff frequency than TE1,1 for the same size guide.
Developed as a result of discontinuities in the waveguide, such as flanges and transitions,
TM 0,1 energy is not coupled out by either dominant or cross-polar transitions. The parasitic
energy must be filtered out, or the waveguide diameter chosen carefully to reduce the
unwanted mode.
The field configuration for the dominant mode in rectangular waveguide is illustrated in Fig-
ure 4.1.3. Note that the electric field is vertical, with intensity maximum at the center of the
guide and dropping off sinusoidally to zero intensity at the edges. The magnetic field is in the
form of loops that lie in planes that are at right angles to the electric field (parallel to the top and
bottom of the guide).

Figure 4.1.3 Field configuration of the dominant or TE1,0 mode in a rectangular waveguide: (a)
side view, (b) end view, (c) top view.

The magnetic field distribution is the same for all planes perpendicular to
the Y-axis. In the X direction, the intensity of the component of magnetic field that is transverse
to the axis of the waveguide (the component in the direction of X) is at any point in the
waveguide directly proportional to the intensity of the electric field at that point. This entire con-
figuration of fields travels in the direction of the waveguide axis (the Z direction in Figure 4.1.3).
The field configuration for the TE1,1 mode in circular waveguide is illustrated in Figure 4.1.4.
The TE 1,1 mode has the longest cutoff wavelength and is, accordingly, the dominant mode. The
next higher mode is the TM0,1, followed by TE2,1.
4.1.3c Efficiency
Waveguide losses result from the following:
• Power dissipation in the waveguide walls and the dielectric material filling the enclosed space
• Leakage through the walls and transition connections of the guide
• Localized power absorption and heating at the connection points
The operating power of waveguide can be increased through pressurization. Sulfur hexafluo-
ride commonly is used as the pressurizing gas.
Figure 4.1.6 The effects of parasitic energy in circular waveguide: (a) trapped cross-polarization
energy, (b) delayed transmission of the trapped energy.
Circular waveguide also presents lower and more uniform windloading than rectangular waveguide,
reducing tower structural requirements.
The same physical properties of circular waveguide that give it good power handling and low
attenuation also result in electrical complexities. Circular waveguide has two potentially
unwanted modes of propagation: the cross-polarized TE 1,1 and TM 0,1 modes.
Circular waveguide, by definition, has no short or long dimension and, consequently, no
method to prevent the development of cross-polar or orthogonal energy. Cross-polar energy is
formed by small ellipticities in the waveguide. If the cross-polar energy is not trapped out, the
parasitic energy can recombine with the dominant-mode energy.
Parasitic Energy
Hollow circular waveguide works as a high-Q resonant cavity for some energy and as a transmis-
sion medium for the rest. The parasitic energy present in the cavity formed by the guide will
appear as increased VSWR if not disposed of. The polarization in the guide meanders and rotates
as it propagates from the source to the load. The end pieces of the guide, typically circular-to-
rectangular transitions, are polarization-sensitive. See Figure 4.1.6a. If the polarization of the
incidental energy is not matched to the transition, energy will be reflected.
(Figure: the waveguide as viewed from the flange, showing the outer covering of the waveguide.)
Several factors can cause this undesirable polarization. One cause is out-of-round guides that
result from nonstandard manufacturing tolerances. In Figure 4.1.6, the solid lines depict the situ-
ation at launching: perfectly circular guide with perpendicular polarization. However, certain
ellipticities cause polarization rotation into unwanted states, while others have no effect. A 0.2
percent change in diameter can produce a –40 dB cross-polarization component per wavelength.
This is roughly 0.03 in for 18 in of guide length.
Other sources of cross polarization include twisted and bent guides, out-of-roundness, offset
flanges, and transitions. Various methods are used to dispose of this energy trapped in the cavity,
including absorbing loads placed at the ground and/or antenna level.
Doubly truncated waveguide (DTW) exhibits about 3 percent higher windloading than an equivalent run of circular
waveguide (because of the transition section at the flange joints), and 32 percent lower loading
than comparable rectangular waveguide.
4.1.4 Bibliography
Andrew Corporation: “Broadcast Transmission Line Systems,” Technical Bulletin 1063H,
Orland Park, Ill., 1982.
Andrew Corporation: “Circular Waveguide: System Planning, Installation and Tuning,” Techni-
cal Bulletin 1061H, Orland Park, Ill., 1980.
Ben-Dov, O., and C. Plummer: “Doubly Truncated Waveguide,” Broadcast Engineering, Intertec
Publishing, Overland Park, Kan., January 1989.
Benson, K. B., and J. C. Whitaker: Television and Audio Handbook for Technicians and Engi-
neers, McGraw-Hill, New York, N.Y., 1989.
Cablewave Systems: “The Broadcaster’s Guide to Transmission Line Systems,” Technical Bulle-
tin 21A, North Haven, Conn., 1976.
Cablewave Systems: “Rigid Coaxial Transmission Lines,” Cablewave Systems Catalog 700,
North Haven, Conn., 1989.
Crutchfield, E. B. (ed.), NAB Engineering Handbook, 8th Ed., National Association of Broad-
casters, Washington, D.C., 1992.
Fink, D., and D. Christiansen (eds.), Electronics Engineers’ Handbook, 2nd ed., McGraw-Hill,
New York, N.Y., 1982.
Fink, D., and D. Christiansen (eds.): Electronics Engineers’ Handbook, 3rd ed., McGraw-Hill,
New York, N.Y., 1989.
Jordan, Edward C.: Reference Data for Engineers: Radio, Electronics, Computer and Communi-
cations, 7th ed., Howard W. Sams, Indianapolis, IN, 1985.
Krohe, Gary L.: “Using Circular Waveguide,” Broadcast Engineering, Intertec Publishing, Over-
land Park, Kan., May 1986.
Perelman, R., and T. Sullivan: “Selecting Flexible Coaxial Cable,” Broadcast Engineering, Inter-
tec Publishing, Overland Park, Kan., May 1988.
Terman, F. E.: Radio Engineering, 3rd ed., McGraw-Hill, New York, N.Y., 1947.
Whitaker, Jerry C., G. DeSantis, and C. Paulson: Interconnecting Electronic Systems, CRC
Press, Boca Raton, Fla., 1993.
Whitaker, Jerry C.: Radio Frequency Transmission Systems: Design and Operation, McGraw-
Hill, New York, N.Y., 1990.
Chapter 4.2
RF Combiner and Diplexer Systems
4.2.1 Introduction
The basic purpose of an RF combiner is to add two or more signals to produce an output signal
that is a composite of the inputs. The combiner performs this signal addition while providing iso-
lation between inputs. Combiners perform other functions as well, and can be found in a wide
variety of RF transmission equipment. Combiners are valuable devices because they permit mul-
tiple amplifiers to drive a single load. The isolation provided by the combiner permits tuning
adjustments to be made on one amplifier—including turning it on or off—without significantly
affecting the operation of the other amplifier. In a typical application, two amplifiers drive the
hybrid and provide two output signals:
• A combined output representing the sum of the two input signals, typically directed toward
the antenna.
• A difference output representing the difference in amplitude and phase between the two input
signals. The difference output typically is directed toward a dummy (reject) load.
For systems in which more than two amplifiers must be combined, two or more combiners are
cascaded.
Diplexers are similar in nature to combiners but permit the summing of output signals from
two or more amplifiers operating at different frequencies. This allows, for example, the outputs
of several transmitters operating on different frequencies to utilize a single broadband antenna.
Filters are constructed of passive elements (i.e., resistors, inductors, and capacitors). Filters
are generally categorized by the following general parameters:
• Type
• Alignment (or class)
• Order
Figure 4.2.1 Filter characteristics by type: (a) low-pass, (b) high-pass, (c) bandpass, (d) bandstop.
(From [1]. Used with permission.)
Each filter alignment has a frequency response with a characteristic shape, which provides
some particular advantage. (See Figure 4.2.2.) Filters with Butterworth, Chebyshev, or Bessel
alignment are called all-pole filters because their low-pass transfer functions have no zeros.
Table 4.2.1 summarizes the characteristics of the standard filter alignments.
Figure 4.2.2 Filter characteristics by alignment, third-order, all-pole filters: (a) magnitude, (b)
magnitude in decibels. (From [1]. Used with permission.)

The ultimate rolloff rate is 6n dB/octave, where n is the order of the filter (Figure 4.2.3). In the
vicinity of fc, both filter alignment and filter order determine rolloff.

Figure 4.2.3 The effects of filter order on rolloff (Butterworth alignment). (From [1]. Used with
permission.)
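The effect of order on rolloff can be illustrated with the Butterworth magnitude response, a standard closed form not given explicitly in the text:

```python
import math

def butterworth_mag_db(f, fc, n):
    """Magnitude response of an nth-order Butterworth low-pass filter:
    |H| = 1 / sqrt(1 + (f/fc)^(2n)), expressed in decibels."""
    return -10.0 * math.log10(1.0 + (f / fc) ** (2 * n))

# All orders are 3 dB down at the cutoff frequency, but one octave above
# cutoff the attenuation steepens with order (about 6n dB/octave):
for n in (1, 2, 3):
    print(n, round(butterworth_mag_db(2.0, 1.0, n), 2))
```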
The hybrid combiner permits two transmitters to feed a single load. The combiner accepts one RF source and splits it equally into
two parts. One part arrives at output port C with 0° phase (no phase delay; it is the reference
phase). The other part is delayed by 90° at port D. A second RF source connected to input port B,
but with a phase delay of 90°, also will split in two, but the signal arriving at port C now will be
in phase with source 1, and the signal arriving at port D will cancel, as shown in the figure.
Output port C, the summing point of the hybrid, is connected to the load. Output port D is
connected to a resistive load to absorb any residual power resulting from slight differences in
amplitude and/or phase between the two input sources. If one of the RF inputs fails, half of the
remaining transmitter output will be absorbed by the resistive load at port D.
The four-port hybrid works only when the two signals being mixed are identical in frequency
and amplitude, and when their relative phase is 90°.
Figure 4.2.5 Operating principles of a hybrid combiner. This circuit is used to add two identical sig-
nals at inputs A and B.
Operation of the hybrid can best be described by a scattering matrix in which vectors are used
to show how the device functions. Such a matrix is shown in Table 4.2.2. In a 3 dB hybrid, two
signals are fed to the inputs. An input signal at port 1 with 0° phase will arrive in phase at port 3,
and at port 4 with a 90° lag (–90°) referenced to port 1. If the signal at port 2 already contains a
90° lag (–90° referenced to port 1), both input signals will combine in phase at port 4. The signal
from port 2 also experiences another 90° change in the hybrid as it reaches port 3. Therefore, the
signals from ports 1 and 2 cancel each other at port 3.
If the signal arriving at port 2 leads by 90° (mode 1 in the table), the combined power from
ports 1 and 2 appears at port 4. If the two input signals are matched in phase (mode 4), the output
ports (3 and 4) contain one-half of the power from each of the inputs.
If one of the inputs is removed, which would occur in a transmitter failure, only one hybrid
input receives power (mode 5). Each output port then would receive one-half the input power of
the remaining transmitter, as shown.
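The mode behavior summarized above can be checked numerically with a small phasor model of the 3-dB hybrid. The sign convention below is an assumption consistent with the description in the text (direct paths with no shift, coupled paths lagging 90°), and the function name is invented for the example:

```python
import cmath
import math

def hybrid_outputs(a1, a2):
    """3-dB quadrature hybrid: port 1 -> 3 and port 2 -> 4 are direct;
    port 1 -> 4 and port 2 -> 3 each lag by 90 degrees."""
    b3 = (a1 - 1j * a2) / math.sqrt(2.0)
    b4 = (-1j * a1 + a2) / math.sqrt(2.0)
    return b3, b4

# Port 2 driven 90 degrees behind port 1: the inputs cancel at port 3
# and all of the power combines at port 4.
b3, b4 = hybrid_outputs(1.0, cmath.exp(-1j * math.pi / 2))
print(round(abs(b3) ** 2, 3), round(abs(b4) ** 2, 3))  # 0.0 and 2.0

# One input removed (transmitter failure): each output port receives
# one-half the power of the remaining source.
b3, b4 = hybrid_outputs(1.0, 0.0)
print(round(abs(b3) ** 2, 3), round(abs(b4) ** 2, 3))  # 0.5 and 0.5
```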
The input ports present a predictable load to each amplifier with a VSWR that is lower than
the VSWR at the output port of the combiner. This characteristic results from the action of the
difference port, typically connected to a dummy load. Reflected power coming into the output
port will be directed to the reject load, and only a portion will be fed back to the amplifiers. Fig-
ure 4.2.6 illustrates the effect of output port VSWR on input port VSWR, and on the isolation
between ports.
As noted previously, if the two inputs from the separate amplifiers are not equal in amplitude
and not exactly in phase quadrature, some power will be dissipated in the difference port reject
load. Figure 4.2.7 plots the effect of power imbalance, and Figure 4.2.8 plots the effects of phase
imbalance. The power lost in the reject load can be reduced to a negligible value by trimming the
amplitude and/or phase of one (or both) amplifiers.
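The reject-load loss for amplitude and phase imbalance (Figures 4.2.7 and 4.2.8) can be estimated with a short phasor sketch; the port and sign conventions here are assumptions, and the function is illustrative rather than a formula from the text:

```python
import cmath
import math

def reject_fraction(p_a, p_b, phase_err_deg=0.0):
    """Fraction of total input power dissipated in the difference-port
    reject load of a 3-dB hybrid, for input powers p_a and p_b and a
    given deviation from phase quadrature."""
    a1 = math.sqrt(p_a)
    a2 = math.sqrt(p_b) * cmath.exp(-1j * math.radians(90.0 - phase_err_deg))
    diff = (a1 - 1j * a2) / math.sqrt(2.0)  # difference (reject) port phasor
    return abs(diff) ** 2 / (p_a + p_b)

print(reject_fraction(1.0, 1.0))        # balanced inputs: no reject loss
print(reject_fraction(1.0, 0.5))        # 2:1 power imbalance: ~3 percent lost
print(reject_fraction(1.0, 1.0, 20.0))  # 20 degrees from quadrature
```

This mirrors the text's point that trimming amplitude and phase drives the reject-load power toward zero.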
Figure 4.2.6 The effects of load VSWR on input VSWR and isolation: (a) respective curves, (b)
coupler schematic.
Figure 4.2.7 The effects of power imbalance at the inputs of a hybrid coupler; the imbalance factor
is K = Pa / Pb.
Figure 4.2.8 The effects of phase imbalance (deviation from quadrature, in degrees) at the inputs
of a hybrid coupler.

This type of hybrid is attractive because of its bandwidth and amenability to various physical
implementations. Such a device is illustrated in Figure 4.2.9.

Figure 4.2.9 (figure: a quarter-wave dielectric coupling section joining ports 1 and 2 to ports 3
and 4.)
(Figure: switching configurations — an SPST switch connecting a transmitter to load A or load B,
and a DPDT transfer switch connecting transmitters 1 and 2 to loads A and B.)
Figure 4.2.11 Hybrid switching configurations: (a) phase set so that the combined energy is deliv-
ered to port 4, (b) phase set so that the combined energy is delivered to port 3.
Figure 4.2.12 Additional switching and combining functions enabled by adding a second hybrid
and another phase shifter to a hot switching combiner.
between the two input signals to the second hybrid provides the needed control to switch the
input signal between the two output ports.
If a continuous analog phase shifter is used, the transfer switch shown in Figure 4.2.12 also
can act as a hot switchless combiner where RF generators 1 and 2 can be combined and fed to
either output A or B. The switching or combining functions are accomplished by changing the
physical position of the phase shifter.
Note that it does not matter whether the phase shifter is in one or both legs of the system. It is
the phase difference (θ1 – θ2) between the two input legs of the second hybrid that is important.
With two phase shifters, dual drives are required; however, each phase shifter needs only two posi-
tions. In a single-phase-shifter design, only one drive is required, but the phase shifter must have
four fixed operating positions.
Figure 4.2.13 Basic characteristics of a circulator: (a) operational schematic, (b) distributed con-
stant circulator, (c) lump constant circulator. (From [2]. Used with permission.)
port 2. An important benefit of this one-way power transfer is that the input VSWR at port 1 is
dependent only on the VSWR of the load placed at port 3. In most applications, this load is a
resistive (dummy) load that presents a perfect load to the transmitter.
The unidirectional property of the isolator results from magnetization of a ferrite alloy inside
the device. Through correct polarization of the magnetic field of the ferrite, RF energy will travel
through the element in only one direction (port 1 to 2, port 2 to 3, and port 3 to 1). Reversing the
polarity of the magnetic field makes it possible for RF to flow in the opposite direction.
In the basic design, the ferrite is placed in the center of a Y-junction of three transmission
lines, either waveguide or coax. Sections of the material are bonded together to form a thin cylin-
der perpendicular to the electric field. Even though the insertion loss is low, the resulting power
dissipated in the cylinder can be as high as 2 percent of the forward power. Special provisions
must be made for heat removal. It is efficient heat-removal capability that makes high-power
operation possible.
The insertion loss of the ferrite must be kept low so that minimal heat is dissipated. Values of
ferrite loss on the order of 0.05 dB have been produced. This equates to an efficiency of 98.9
percent. Additional losses from the transmission line and matching structure contribute slightly
to loss. The overall loss is typically less than 0.1 dB, or 98 percent efficiency. The ferrite element
in a high-power system is usually water-cooled in a closed-loop path that uses an external radia-
tor.
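The dB-to-efficiency conversions quoted above follow directly from the definition of insertion loss:

```python
def efficiency_pct(loss_db):
    """Power transmission efficiency for a given insertion loss in dB:
    efficiency = 10^(-loss_dB / 10)."""
    return 100.0 * 10.0 ** (-loss_db / 10.0)

# Ferrite loss of 0.05 dB corresponds to ~98.9 percent efficiency;
# an overall loss of 0.1 dB corresponds to ~97.7 percent.
print(round(efficiency_pct(0.05), 1))
print(round(efficiency_pct(0.1), 1))
```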
The two basic circulator implementations are shown in Figures 4.2.13a and 4.2.13b. These
designs consist of Y-shaped conductors sandwiched between magnetized ferrite discs [2]. The
final shape, dimensions, and type of material varies according to frequency of operation, power
handling requirements, and the method of coupling. The distributed constant circulator is the
older design; it is a broad-band device, not quite as efficient in terms of insertion loss and leg-to-
leg isolation, and considerably more expensive to produce. It is useful, however, in applications
where broad-band isolation is required. More common is the lump constant circulator, a less
expensive and more efficient, but narrow-band, design.
At least one filter is always installed directly after an isolator, because the ferrite material of
the isolator generates harmonic signals. If an ordinary band-pass or band-reject filter is not to be
used, a harmonic filter will be needed.
4.2.4b Applications
The high-power isolator permits a transmitter to operate with high performance and reliability
despite a load that is less than optimum. The problems presented by ice formations on a transmit-
ting antenna provide a convenient example. Ice buildup will detune an antenna, resulting in
reflections back to the transmitter and high VSWR. If the VSWR is severe enough, transmitter
power will have to be reduced to keep the system on the air. An isolator, however, permits contin-
ued operation with no degradation in signal quality. Power output is affected only to the extent of
the reflected energy, which is dissipated in the resistive load.
A high-power isolator also can be used to provide a stable impedance for devices that are sen-
sitive to load variations, such as klystrons. This allows the device to be tuned for optimum per-
formance regardless of the stability of the RF components located after the isolator. Figure
4.2.14 shows the output of a wideband (6 MHz) klystron operating into a resistive load, and into
an antenna system. The power loss is the result of an impedance difference. The periodicity of
the ripple shown in the trace is a function of the distance of the reflections from the source.
Hot Switch
The circulator can be made to perform a switching function if a short circuit is placed at the out-
put port. Under this condition, all input power will be reflected back into the third port. The use
of a high-power stub on port 2, therefore, permits redirecting the output of an RF generator to
port 3.
Figure 4.2.14 Output of a klystron operating into different loads through a high-power isolator
(amplitude, 10 dB/div; frequency, 1 MHz/div): (a) resistive load, (b) an antenna system.
At odd 1/4-wave positions, the stub appears as a high impedance and has no effect on the out-
put port. At even 1/4-wave positions, the stub appears as a short circuit. Switching between the
antenna and a test load, for example, can be accomplished by moving the shorting element 1/4
wavelength.
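The sliding-short behavior follows from the input reactance of a short-circuited stub; a minimal sketch (lossless line assumed, function name invented for the example):

```python
import math

def shorted_stub_reactance(z0, length_wavelengths):
    """Input reactance of a short-circuited stub: X = Z0 * tan(2*pi*l/lambda).
    Very high at odd quarter-wave lengths, near zero at half-wave multiples,
    which is what lets a sliding short act as a switch."""
    return z0 * math.tan(2.0 * math.pi * length_wavelengths)

# Odd quarter-wave positions look like an open circuit; even quarter-wave
# (half-wave) positions look like a short.
for l in (0.25, 0.5, 0.75, 1.0):
    print(l, f"{abs(shorted_stub_reactance(50.0, l)):.3g}")
```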
Diplexer
An isolator can be configured to combine the aural and visual outputs of a TV transmitter into a
single output for the antenna. The approach is shown in Figure 4.2.15. A single notch cavity at
the aural frequency is placed on the visual transmitter output (circulator input), and the aural sig-
nal is added (as shown). The aural signal will be routed to the antenna in the same manner as it is
reflected (because of the hybrid action) in a conventional diplexer.
Figure 4.2.15 (figure: an aural notch filter on the visual line feeding the circulator, with the
combined output to the antenna.)

Figure 4.2.16 (figure: cascaded circulators forming a multiplexer; successive junctions combine
F1+F2, F1+F2+F3, and F1+F2+F3+F4 from inputs F2, F3, and F4.)
Multiplexer
A multiplexer can be formed by cascading multiple circulators, as illustrated in Figure 4.2.16.
Filters must be added, as shown. The primary drawback of this approach is the increased power
dissipation that occurs in circulators nearest the antenna.
4.2.5 References
1. Harrison, Cecil: “Passive Filters,” in The Electronics Handbook, Jerry C. Whitaker (ed.),
CRC Press, Boca Raton, Fla., pp. 279–290, 1996.
2. Surette, Robert A.: “Combiners and Combining Networks,” in The Electronics Handbook,
Jerry C. Whitaker (ed.), CRC Press, Boca Raton, Fla., pp. 1368–1381, 1996.
4.2.6 Bibliography
DeComier, Bill: “Inside FM Multiplexer Systems,” Broadcast Engineering, Intertec Publishing,
Overland Park, Kan., May 1988.
Heymans, Dennis: “Hot Switches and Combiners,” Broadcast Engineering, Overland Park,
Kan., December 1987.
Stenberg, James T.: “Using Super Power Isolators in the Broadcast Plant,” Proceedings of the
Broadcast Engineering Conference, Society of Broadcast Engineers, Indianapolis, IN,
1988.
Vaughan, T., and E. Pivit: “High Power Isolator for UHF Television,” Proceedings of the NAB
Engineering Conference, National Association of Broadcasters, Washington, D.C., 1989.
Section 5
the antenna may be increased by increasing the diameter of the elements, or by using cones or
cylinders rather than wires or rods. Such modifications also increase the impedance of the
antenna.
The dipole can be straight (in-line) or bent into a V-shape. The impedance of the V-dipole is a
function of the V angle. Changing the angle effectively tunes the antenna. The vertical radiation
pattern of the V-dipole antenna is similar to the straight dipole for angles of 120º or less.
These and other basic principles have led to the development of many different types of
antennas for broadcast applications. With the implementation of digital radio and television
broadcasting, antenna designs are being pushed to new limits as the unique demands of digital
services are identified and solutions implemented.
In This Section:
Heating 5-85
References 5-86
Bibliography 5-86
Carpenter, Roy, B.: “Improved Grounding Methods for Broadcasters,” Proceedings, SBE
National Convention, Society of Broadcast Engineers, Indianapolis, IN, 1987.
Chick, Elton B.: “Monitoring Directional Antennas,” Broadcast Engineering, Intertec Publish-
ing, Overland Park, Kan., July 1985.
Clark, R. N., and N. A. L. Davidson: “The V-Z Panel as a Side Mounted Antenna,” IEEE Trans.
Broadcasting, vol. BC-13, no. 1, pp. 3–136, January 1967.
Davis, Gary, and Ralph Jones: Sound Reinforcement Handbook, Yamaha Music Corporation, Hal
Leonard Publishing, Milwaukee, WI, 1987.
DeDad, John A., (ed.): “Basic Facility Requirements,” in Practical Guide to Power Distribution
for Information Technology Equipment, PRIMEDIA Intertec, Overland Park, Kan., pp. 24,
1997.
Defense Civil Preparedness Agency, EMP and Electric Power Systems, Publication TR-6l-D,
U.S. Department of Commerce, National Bureau of Standards, Washington, D.C., July
1973.
Defense Civil Preparedness Agency, “EMP Protection for AM Radio Stations,” Washington,
D.C., TR-61-C, May 1972.
Defense Civil Preparedness Agency, EMP Protection for Emergency Operating Centers, Federal
Information Processing Standards Publication no. 94, Guideline on Electrical Power for
ADP Installations, U.S. Department of Commerce, National Bureau of Standards, Wash-
ington, D.C., 1983.
DeVito, G: “Considerations on Antennas with no Null Radiation Pattern and Pre-established
Maximum-Minimum Shifts in the Vertical Plane,” Alta Frequenza, vol. XXXVIII, no. 6,
1969.
DeVito, G., and L. Mania: “Improved Dipole Panel for Circular Polarization,” IEEE Trans.
Broadcasting, vol. BC-28, no. 2, pp. 65–72, June 1982.
DeWitt, William E.: “Facility Grounding Practices,” in The Electronics Handbook, Jerry C. Whi-
taker (ed.), CRC Press, Boca Raton, Fla., pp. 2218–2228, 1996.
Drabkin, Mark, and Roy Carpenter, Jr., “Lightning Protection Devices: How Do They Com-
pare?,” Mobile Radio Technology, PRIMEDIA Intertec, Overland Park, Kan., October
1988.
Dudzinsky, S. J., Jr.: “Polarization Discrimination for Satellite Communications,” Proc. IEEE,
vol. 57, no. 12, pp. 2179–2180, December 1969.
Fardo, S., and D. Patrick: Electrical Power Systems Technology, Prentice-Hall, Englewood Cliffs,
N.J., 1985.
Fink, D., and D. Christiansen (eds.): Electronics Engineer's Handbook, 3rd ed., McGraw-Hill,
New York, N.Y., 1989.
Fisk, R. E., and J. A. Donovan: “A New CP Antenna for Television Broadcast Service,” IEEE
Trans. Broadcasting, vol. BC-22, no. 3, pp. 91–96, September 1976.
Fowler, A. D., and H. N. Christopher: “Effective Sum of Multiple Echoes in Television,” J.
SMPTE, SMPTE, White Plains, N. Y., vol. 58, June 1952.
Fumes, N., and K. N. Stokke: “Reflection Problems in Mountainous Areas: Tests with Circular
Polarization for Television and VHF/FM Broadcasting in Norway,” EBU Review, Technical
Part, no. 184, pp. 266–271, December 1980.
Heymans, Dennis: “Channel Combining in an NTSC/ATV Environment,” Proceedings of the
1996 NAB Broadcast Engineering Conference, National Association of Broadcasters,
Washington, D.C., pg. 165, 1996.
Hill, Mark: “Computer Power Protection,” Broadcast Engineering, PRIMEDIA Intertec, Over-
land Park, Kan., April 1987.
Hill, P. C. J.: “Measurements of Reradiation from Lattice Masts at VHF,” Proc. IEEE, vol. 111,
no. 12, pp. 1957–1968, December 1964.
Hill, P. C. J.: “Methods for Shaping Vertical Pattern of VHF and UHF Transmitting Aerials,”
Proc. IEEE, vol. 116, no. 8, pp. 1325–1337, August 1969.
IEEE Standard 100: Definitions of Electrical and Electronic Terms, IEEE, New York, N.Y.
IEEE Standard 142: “Recommended Practice for Grounding Industrial and Commercial Power
Systems,” IEEE, New York, N.Y., 1982.
IEEE Standard 1100: “Recommended Practice for Powering and Grounding Sensitive Electron-
ics Equipment,” IEEE, New York, N.Y., 1992.
Jay, Frank (ed.): IEEE Standard Dictionary of Electrical and Electronics Terms, 3rd ed., IEEE, New
York, N.Y., 1984.
Johns, M. R., and M. A. Ralston: “The First Candelabra for Circularly Polarized Broadcast
Antennas,” IEEE Trans. Broadcasting, vol. BC-27, no. 4, pp. 77–82, December 1981.
Johnson, R. C., and H. Jasik: Antenna Engineering Handbook, 2d ed., McGraw-Hill, New York,
N.Y., 1984.
Jordan, Edward C. (ed.): Reference Data for Engineers—Radio, Electronics, Computer and
Communications, 7th ed., Howard W. Sams, Indianapolis, Ind., 1985.
Kaiser, Bruce A., “Can You Really Fool Mother Nature?,” Cellular Business, PRIMEDIA Inter-
tec, Overland Park, Kan., March 1989.
Kaiser, Bruce A., “Straight Talk on Static Dissipation,” Proceedings of ENTELEC 1988, Energy
Telecommunications and Electrical Association, Dallas, Tex., 1988.
Key, Lt. Thomas, “The Effects of Power Disturbances on Computer Operation,” IEEE Industrial
and Commercial Power Systems Conference paper, Cincinnati, Ohio, June 7, 1978.
Knight, P.: “Reradiation from Masts and Similar Objects at Radio Frequencies,” Proc. IEEE, vol.
114, pp. 30–42, January 1967.
Kraus, J. D.: Antennas, McGraw-Hill, New York, N.Y., 1950.
Lanphere, John: “Establishing a Clean Ground,” Sound & Video Contractor, PRIMEDIA Inter-
tec, Overland Park, Kan., August 1987.
Lawrie, Robert: Electrical Systems for Computer Installations, McGraw-Hill, New York, N.Y.,
1988.
Lehtinen, Rick: “Hardening Towers,” Broadcast Engineering, Overland Park, Kan., pp. 94–106,
March 1990.
Lessman, A. M.: “The Subjective Effect of Echoes in 525-Line Monochrome and NTSC Color
Television and the Resulting Echo Time Weighting,” J. SMPTE, SMPTE, White Plains,
N.Y., vol. 1, December 1972.
Little, Richard: “Surge Tolerance: How Does Your Site Rate?” Mobile Radio Technology, Inter-
tec Publishing, Overland Park, Kan., June 1988.
Lobnitz, Edward A.: “Lightning Protection for Tower Structures,” in NAB Engineering Hand-
book, 9th ed., Jerry C. Whitaker (ed.), National Association of Broadcasters, Washington,
D.C., 1998.
Martzloff, F. D., “The Development of a Guide on Surge Voltages in Low-Voltage AC Power Cir-
cuits,” 14th Electrical/Electronics Insulation Conference, IEEE, Boston, Mass., October
1979.
Mertz, P.: “Influence of Echoes on Television Transmission,” J. SMPTE, SMPTE, White Plains,
N.Y., vol. 60, May 1953.
Midkiff, John: “Choosing the Right Coaxial Cable Hanger,” Mobile Radio Technology, Intertec
Publishing, Overland Park, Kan., April 1988.
Military Handbook 419A: “Grounding, Bonding, and Shielding for Electronic Systems,” U.S.
Government Printing Office, Philadelphia, PA, December 1987.
Moreno, T.: Microwave Transmission Design Data, Dover, New York, N.Y.
Mulherin, Nathan D.: “Atmospheric Icing on Communication Masts in New England.” CRREL
Report 86-17, U.S. Army Cold Regions Research and Engineering Laboratory, December
1986.
Mullaney, John H.: “The Folded Unipole Antenna,” Broadcast Engineering, Intertec Publishing,
Overland Park, Kan., July 1986.
Mullaney, John H.: “The Folded Unipole Antenna for AM Broadcast,” Broadcast Engineering,
Intertec Publishing, Overland Park, Kan., January 1960.
Mullinack, Howard G.: “Grounding for Safety and Performance,” Broadcast Engineering, PRI-
MEDIA Intertec, Overland Park, Kan., October 1986.
NFPA Standard 70: “The National Electrical Code,” National Fire Protection Association,
Quincy, Mass., 1993.
Nott, Ron, “The Sources of Atmospheric Energy,” Proceedings of the SBE National Convention
and Broadcast Engineering Conference, Society of Broadcast Engineers, Indianapolis, IN,
1987.
Perini, J.: “Echo Performance of TV Transmitting Systems,” IEEE Trans. Broadcasting, vol. BC-
16, no. 3, September 1970.
Perini, J.: “Improvement of Pattern Circularity of Panel Antenna Mounted on Large Towers,”
IEEE Trans. Broadcasting, vol. BC-14, no. 1, pp. 33–40, March 1968.
Perini, J., and M. H. Ideslis: “Radiation Pattern Synthesis for Broadcast Antennas,” IEEE Trans.
Broadcasting, vol. BC-18, no. 3, pg. 53, September 1972.
Plonka, Robert J.: “Can ATV Coverage Be Improved With Circular, Elliptical, or Vertical Polar-
ized Antennas?” Proceedings of the 1996 NAB Broadcast Engineering Conference,
National Association of Broadcasters, Washington, D.C., pg. 155, 1996.
Praba, K.: “Computer-aided Design of Vertical Patterns for TV Antenna Arrays,” RCA Engineer,
vol. 18-4, January–February 1973.
Praba, K.: “R. F. Pulse Measurement Techniques and Picture Quality,” IEEE Trans. Broadcast-
ing, vol. BC-23, no. 1, pp. 12–17, March 1976.
“Predicting Characteristics of Multiple Antenna Arrays,” RCA Broadcast News, vol. 97, pp. 63–
68, October 1957.
Sargent, D. W.: “A New Technique for Measuring FM and TV Antenna Systems,” IEEE Trans.
Broadcasting, vol. BC-27, no. 4, December 1981.
Schneider, John: “Surge Protection and Grounding Methods for AM Broadcast Transmitter
Sites,” Proceedings, SBE National Convention, Society of Broadcast Engineers, Indianapo-
lis, IN, 1987.
Schwarz, S. J.: “Analytical Expression for Resistance of Grounding Systems,” AIEE Transac-
tions, vol. 73, Part III-B, pp. 1011–1016, 1954.
Siukola, M. S.: “Size and Performance Trade Off Characteristics of Horizontally and Circularly
Polarized TV Antennas,” IEEE Trans. Broadcasting, vol. BC-23, no. 1, March 1976.
Siukola, M. S.: “The Traveling Wave VHF Television Transmitting Antenna,” IRE Trans. Broad-
casting, vol. BTR-3, no. 2, pp. 49-58, October 1957.
Siukola, M. S.: “TV Antenna Performance Evaluation with RF Pulse Techniques,” IEEE Trans.
Broadcasting, vol. BC-16, no. 3, September 1970.
Smith, Paul D.: “New Channel Combining Devices for DTV,” Proceedings of the 1997 NAB
Broadcast Engineering Conference, National Association of Broadcasters, Washington,
D.C., pg. 218, 1997.
Sullivan, Thomas: “How to Ground Coaxial Cable Feedlines,” Mobile Radio Technology, PRI-
MEDIA Intertec, Overland Park, Kan., April 1988.
Sunde, E. D.: Earth Conduction Effects in Transmission Systems, Van Nostrand Co., New York,
N.Y., 1949.
Technical Reports LEA-9-1, LEA-0-10, and LEA-1-8, Lightning Elimination Associates, Santa
Fe Springs, Calif.
Uman, Martin A., The Lightning Discharge, Academic Press, Orlando, Fla., 1987.
“WBAL, WJZ and WMAR Build World’s First Three-Antenna Candelabra,” RCA Broadcast
News, vol. 106, pp. 30–35, December 1959.
Webster's New Collegiate Dictionary.
Wescott, H. H.: “A Closer Look at the Sutro Tower Antenna Systems,” RCA Broadcast News,
vol. 152, pp. 35–41, February 1974.
g y
Westberg, J. M.: “Effect of 90° Stub on Medium Wave Antennas,” NAB Engineering Handbook,
7th ed., National Association of Broadcasters, Washington, D.C., 1985.
Whitaker, Jerry C., AC Power Systems Handbook, 2nd ed., CRC Press, Boca Raton, Fla., 1999.
Whitaker, Jerry C., Maintaining Electronic Systems, CRC Press, Boca Raton, Fla., 1991.
Whitaker, Jerry C., Radio Frequency Transmission Systems: Design and Operation,
McGraw-Hill, New York, N.Y., 1990.
Whythe, D. J.: “Specification of the Impedance of Transmitting Aerials for Monochrome and
Color Television Signals,” Tech. Rep. E-115, BBC, London, 1968.
Chapter
5.1
Radio Antenna Principles
5.1.1 Introduction
Transmission is accomplished by the emission of coherent electromagnetic waves in free space
from one or more radiating elements that are excited by RF currents. Although, by definition, the
radiated energy is composed of mutually dependent magnetic and electric vector fields, it is con-
ventional practice to measure and specify radiation characteristics in terms of the electric field
only.
The length of a radiating element, expressed in electrical degrees, is

$$H = \frac{H_t F_o}{2733} \qquad (5.1.1)$$

Where:
H = length of the radiating element in electrical degrees
Ht = length of the radiating element in feet
Fo = frequency of operation in kHz

When the radiating element is measured in meters

$$H = \frac{H_t F_o}{833.23} \qquad (5.1.2)$$
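As a quick numerical sketch, Equations (5.1.1) and (5.1.2) can be coded directly; `electrical_degrees` below is a hypothetical helper, not part of the original text:

```python
def electrical_degrees(height, freq_khz, unit="ft"):
    """Convert a radiator's physical height to electrical degrees.

    Implements Eq. (5.1.1) for feet (divisor 2733) and Eq. (5.1.2)
    for meters (divisor 833.23), with the frequency Fo in kHz.
    """
    divisor = 2733.0 if unit == "ft" else 833.23
    return height * freq_khz / divisor

# A quarter-wave (90-degree) tower at 1000 kHz is roughly 246 ft (75 m) tall.
print(round(electrical_degrees(246, 1000)))            # 90
print(round(electrical_degrees(75, 1000, unit="m")))   # 90
```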
The radiation resistance of an antenna is given by

$$R = \frac{P}{I^2} \qquad (5.1.3)$$
Where:
R = radiation resistance
P = power delivered to the antenna
I = driving current at the antenna base
5.1.2a Polarization
Polarization is the angle of the radiated electric field vector in the direction of maximum radia-
tion. Antennas may be designed to provide horizontal, vertical, or circular polarization. Horizon-
tal or vertical polarization is determined by the orientation of the radiating element with respect
to earth. If the plane of the radiated field is parallel to the ground, the signal is said to be horizon-
tally polarized. If it is at right angles to the ground, it is said to be vertically polarized. When the
receiving antenna is located in the same plane as the transmitting antenna, the received signal
strength will be maximum.
Circular polarization (CP) of the transmitted signal results when equal electrical fields in the
vertical and horizontal planes of radiation are out-of-phase by 90° and are rotating a full 360° in
one wavelength of the operating frequency. The rotation can be clockwise or counterclockwise,
depending on the antenna design. This continuously rotating field gives CP good signal penetra-
tion capabilities because it can be received efficiently by an antenna of any random orientation.
Figure 5.1.1 illustrates the principles of circular polarization.
Electrical beam tilt can also be designed into a high gain antenna. A conventional antenna
typically radiates more than half of its energy above the horizon. This energy is lost for practical
purposes in most applications. Electrical beam tilt, caused by delaying the RF current to the
lower elements of a multi-element antenna, can be used to provide more useful power to the ser-
vice area.
Pattern optimization is another method that may be used to maximize radiation to the
intended service area. The characteristics of the transmitting antenna are affected, sometimes
5.1.3a L Network
The L network is shown in Figure 5.1.2. Common equations are as follows:
$$Q = \sqrt{\frac{R_2}{R_1} - 1} = \frac{X_1}{R_1} = \frac{R_2}{X_2} \qquad (5.1.4)$$

$$X_2 = \frac{\pm R_2}{Q} \qquad (5.1.5)$$

$$X_1 = \frac{-R_1 R_2}{X_2} \qquad (5.1.6)$$

$$\theta = \tan^{-1}\left(\frac{R_2}{X_2}\right) \qquad (5.1.7)$$
Where:
R1 = L network input resistance, Ω
R2 = L network output resistance, Ω
X1 = series leg reactance, Ω
X2 = shunt leg reactance, Ω
Q = loaded Q of the L network
The loaded Q of the network is determined from Equation (5.1.4). Equation (5.1.5) defines
the shunt leg reactance, which is negative (capacitive) when θ is negative, and positive (induc-
tive) when θ is positive. The series leg reactance is found using Equation (5.1.6), the phase shift
via Equation (5.1.7), and the currents and voltages via Ohm's Law. Note that R2 (the resistance
on the shunt leg side of the L network) must always be greater than R1. An L network cannot be
used to match equal resistances, or to adjust phase independently of resistance.
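The complete L network solution can be sketched in a few lines. `l_network` below is a hypothetical helper, not from the text, implementing Equations (5.1.4) through (5.1.7) for the common case of a capacitive shunt leg:

```python
import math

def l_network(r1, r2):
    """Solve an L network per Eqs. (5.1.4)-(5.1.7).

    r1 is the input (series leg) resistance, r2 the output (shunt leg)
    resistance; per the text, r2 must exceed r1.
    """
    if r2 <= r1:
        raise ValueError("R2 must be greater than R1 for an L network")
    q = math.sqrt(r2 / r1 - 1)                 # Eq. (5.1.4)
    x2 = -r2 / q                               # Eq. (5.1.5), capacitive shunt
    x1 = -r1 * r2 / x2                         # Eq. (5.1.6), inductive series
    theta = math.degrees(math.atan(r2 / x2))   # Eq. (5.1.7), negative here
    return q, x1, x2, theta

q, x1, x2, theta = l_network(50, 200)
print(round(q, 3), round(x1, 1), round(x2, 1), round(theta, 1))
# 1.732 86.6 -115.5 -60.0
```

Note that the internal consistency checks of Equation (5.1.4) hold: X1/R1 and R2/|X2| both equal Q.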
5.1.3b Tee Network

The tee network can be considered as two L networks connected back-to-back, which permits the phase shift to be chosen independently of the resistance transformation. Note also that the Q of a tee network increases with increasing phase shift. Common
equations are as follows:
$$X_3 = \frac{\sqrt{R_1 R_2}}{\sin\theta} \qquad (5.1.8)$$

$$X_1 = \frac{R_1}{\tan\theta} - X_3 \qquad (5.1.9)$$

$$X_2 = \frac{R_2}{\tan\theta} - X_3 \qquad (5.1.10)$$

$$Q_1 = \frac{X_1}{R_1} \qquad (5.1.11)$$

$$Q_2 = \frac{X_2}{R_2} \qquad (5.1.12)$$

$$I_1 = \sqrt{\frac{P}{R_1}} \qquad (5.1.13)$$

$$I_2 = \sqrt{\frac{P}{R_2}} \qquad (5.1.14)$$

$$I_3 = \sqrt{I_1^2 + I_2^2 - 2 I_1 I_2 \cos\theta} \qquad (5.1.15)$$

$$R_3 = \left(Q_2^2 + 1\right) R_2 \qquad (5.1.16)$$

$$\theta = \tan^{-1}\left(\frac{X_1}{R_1}\right) \pm \tan^{-1}\left(\frac{X_2}{R_2}\right) \qquad (5.1.17)$$
Where:
R1 = tee network input resistance, Ω
R2 = tee network output resistance, Ω
R3 = midpoint resistance of the network, Ω
I1 = tee network input current, A
I2 = tee network output current, A
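The tee equations above can be exercised numerically. `tee_network` is a hypothetical helper, not from the text, and uses the magnitude of X2 when forming Q2:

```python
import math

def tee_network(r1, r2, theta_deg, power):
    """Evaluate the tee network of Eqs. (5.1.8)-(5.1.16)."""
    th = math.radians(theta_deg)
    x3 = math.sqrt(r1 * r2) / math.sin(th)                   # Eq. (5.1.8)
    x1 = r1 / math.tan(th) - x3                              # Eq. (5.1.9)
    x2 = r2 / math.tan(th) - x3                              # Eq. (5.1.10)
    q2 = abs(x2) / r2                                        # Eq. (5.1.12)
    i1 = math.sqrt(power / r1)                               # Eq. (5.1.13)
    i2 = math.sqrt(power / r2)                               # Eq. (5.1.14)
    i3 = math.sqrt(i1**2 + i2**2 - 2*i1*i2*math.cos(th))     # Eq. (5.1.15)
    r3 = (q2**2 + 1) * r2                                    # Eq. (5.1.16)
    return x1, x2, x3, i3, r3

# A 90-degree tee matching 50 ohms to 200 ohms at 5 kW input
x1, x2, x3, i3, r3 = tee_network(50, 200, 90, 5000)
print(round(x3, 1), round(r3, 1))   # 100.0 250.0
```

The midpoint resistance (250 Ω here) exceeds both terminating resistances, as expected for a tee.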
5.1.3c Pi Network
The pi network is shown in Figure 5.1.4. It can also be considered as two L networks back-to-
back and, therefore, the same comments regarding overall loaded Q apply. Common equations
are as follows:
$$Y_3 = \frac{-1}{\sin\theta\,\sqrt{R_1 R_2}} \qquad (5.1.18)$$

$$Y_1 = \frac{1}{R_1 \tan\theta} - Y_3 \qquad (5.1.19)$$

$$Y_2 = \frac{1}{R_2 \tan\theta} - Y_3 \qquad (5.1.20)$$

$$Q_1 = R_1 Y_1 \qquad (5.1.21)$$

$$Q_2 = R_2 Y_2 \qquad (5.1.22)$$

$$V_1 = \sqrt{R_1 P} \qquad (5.1.23)$$

$$V_2 = \sqrt{R_2 P} \qquad (5.1.24)$$

$$V_3 = \sqrt{V_1^2 + V_2^2 - 2 V_1 V_2 \cos\theta} \qquad (5.1.25)$$

$$R_3 = \frac{R_2}{Q_2^2 + 1} \qquad (5.1.26)$$
Where:
R1 = pi network input resistance, Ω
R2 = output resistance, Ω
V1 = input voltage, V
V2 = output voltage, V
V3 = voltage across series element, V
P = power input to pi network, W
Y1 = input shunt element susceptance, mhos
Y2 = output shunt element susceptance, mhos
Y3 = series element susceptance, mhos
Q1 = input loaded Q
Q2 = output loaded Q
Note that susceptances have been used in Equations (5.1.18) through (5.1.22) instead of reac-
tances in order to simplify calculations. The same conventions regarding tee network currents
apply to pi network voltages—Equations (5.1.23, 5.1.24, and 5.1.25). The mid-point resistance
of a pi network is always less than R1 or R2. A pi network is considered as having a negative or
lagging phase shift when Y3 is positive, and vice versa.
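The pi equations can be checked the same way; `pi_network` is a hypothetical helper, not from the text, and takes the magnitude when forming Q2:

```python
import math

def pi_network(r1, r2, theta_deg, power):
    """Evaluate the pi network of Eqs. (5.1.18)-(5.1.26) using susceptances."""
    th = math.radians(theta_deg)
    y3 = -1.0 / (math.sin(th) * math.sqrt(r1 * r2))          # Eq. (5.1.18)
    y1 = 1.0 / (r1 * math.tan(th)) - y3                      # Eq. (5.1.19)
    y2 = 1.0 / (r2 * math.tan(th)) - y3                      # Eq. (5.1.20)
    q2 = abs(r2 * y2)                                        # Eq. (5.1.22)
    v1 = math.sqrt(r1 * power)                               # Eq. (5.1.23)
    v2 = math.sqrt(r2 * power)                               # Eq. (5.1.24)
    v3 = math.sqrt(v1**2 + v2**2 - 2*v1*v2*math.cos(th))     # Eq. (5.1.25)
    r3 = r2 / (q2**2 + 1)                                    # Eq. (5.1.26)
    return y1, y2, y3, v3, r3

# A 90-degree pi matching 50 ohms to 200 ohms at 5 kW input
y1, y2, y3, v3, r3 = pi_network(50, 200, 90, 5000)
print(round(y3, 4), round(r3, 1))   # -0.01 40.0
```

Note the midpoint resistance (40 Ω here) comes out less than both R1 and R2, consistent with the statement above.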
Line Stretcher
A line stretcher makes a transmission line appear longer or shorter in order to produce sideband
impedance symmetry at the transmitter PA (see Figure 5.1.5). This is done to reduce audio dis-
tortion in an envelope detector—the type of detector that most monophonic AM receivers
employ. Symmetry is defined as equal sideband resistances, and equal—but opposite sign—side-
band reactances.
There are two possible points of symmetry, each 90° from the other. One produces sideband
resistances greater than the carrier resistance, and the other produces the opposite effect. One
side will create a pre-emphasis effect, and the other a de-emphasis effect.
Depending on the Q of the transmitter output network, one point of symmetry may yield
lower sideband VSWR at the PA than the other. This results from the Q of the output network
opposing the Q of the antenna in one direction, but aiding the antenna Q in the other direction.
Common line stretcher equations include:
$$R_p = \frac{R_s^2 + X_s^2}{R_s} \qquad (5.1.27)$$

$$X_p = \frac{R_s^2 + X_s^2}{X_s} \qquad (5.1.28)$$
Where:
Rs = series configuration resistance, Ω
Rp = parallel configuration resistance, Ω
Xs = series reactance, Ω
Xp = parallel reactance, Ω
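Equations (5.1.27) and (5.1.28) are the standard series-to-parallel impedance conversion. A minimal sketch (hypothetical helper, not from the text), with a check that the parallel form reproduces the series impedance:

```python
def series_to_parallel(rs, xs):
    """Series-to-parallel conversion per Eqs. (5.1.27) and (5.1.28)."""
    mag_sq = rs**2 + xs**2
    return mag_sq / rs, mag_sq / xs   # (Rp, Xp)

rp, xp = series_to_parallel(50.0, 50.0)
print(rp, xp)   # 100.0 100.0
# Sanity check: Rp in parallel with jXp equals the original Rs + jXs
z = (rp * 1j * xp) / (rp + 1j * xp)
```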
Figure 5.1.12 Block diagram of an AM directional antenna feeder system for a two tower array.
5.1.5b Bearing
The bearing or azimuth of the towers from
the reference tower or point is specified
clockwise in degrees from true north. The
distinction between true and magnetic
north is vital. The magnetic North Pole is
not at the true or geographic North Pole.
(In fact, it is in the vicinity of 74° north,
101° west, in the islands of northern Can-
ada.) The difference between magnetic
and true bearings is called variation or
magnetic declination. Declination, a term
generally used by surveyors, varies for dif-
ferent locations; it is not a constant. The
earth's magnetic field is subject to a number of changes in intensity and direction. These changes take place over daily, yearly, and long-term (or secular) periods. The secular changes result in a relatively constant increase or decrease in declination over a period of many years.

Figure 5.1.16 Directional pattern generated by the tower orientation shown in Figure 5.1.13, but with a new set of electrical parameters.
Figure 5.1.18 Three possible circuit configurations for phase sample pickup.
A shielded toroidal current transformer (TCT) can also be used as the phase/current pickup ele-
ment. Such devices offer several advantages over the sample loop, including greater stability and
reliability. Because they are located inside the tuning unit cabinet or house, TCTs are protected
from wind, rain, ice, and vandalism.
Unlike the rigid, fixed sample loop, toroidal current transformers are available in several sen-
sitivities, ranging from 0.25–1.0 V per ampere of tower current. Tower currents of up to 40 A or
more can be handled, providing a more usable range of voltages for the antenna monitor.
Figure 5.1.18 shows the various arrangements that can be used for phase/current sample
pickup elements.
Sample lines
The selection and installation of the sampling lines for a directional monitoring system are
important factors in the ultimate accuracy of the overall array.
With critical arrays (antennas requiring operation within tight limits specified in the station
license) all sample lines must be of equal electrical length and installed in such a manner that
corresponding lengths of all lines are exposed to equal environmental conditions.
While sample lines can be run above ground on supports (if protected and properly grounded)
the most desirable arrangement is direct burial using jacketed cable. Burial of sample line cable
is standard practice because proper burial offers good protection against physical damage and a
more stable temperature environment.
The power input to the antenna is determined from the common point current and resistance:

$$P = I^2 R \qquad (5.1.29)$$

Where:
P = power in W
I = the common point current in A
R = the common point resistance in Ω
Monitor Points
Routine monitoring of a directional antenna involves the measurement of field intensity at cer-
tain locations away from the antenna, called monitor points. These points are selected and estab-
lished during the initial tune-up of the antenna system. Measurements at the monitor points
should confirm that radiation in prescribed directions does not exceed a value that would cause
interference to other stations operating on the same or adjacent frequencies. The field intensity
limits at these points are normally specified in the station license. Measurements at the monitor
points may be required on a weekly or a monthly basis, depending on several factors and condi-
tions relating to the particular station. If the system is not a critical array, quarterly measurements
may be sufficient.
arms can be matched to resonance by mechanical adjustment of the end fittings. Shunt-feeding
(when properly adjusted) provides equal currents in all four arms.
Wideband panel antennas are a fourth common type of antenna used for FM broadcasting.
Panel designs share some of the characteristics listed previously, but are intended primarily for
specialized installations in which two or more stations will use the antenna simultaneously. Panel
antennas are larger and more complex than other FM antennas, but offer the possibility for
shared tower space among several stations and custom coverage patterns that would be difficult
or even impossible with more common designs.
The ideal combination of antenna gain and transmitter power for a particular installation
involves the analysis of a number of parameters. As shown in Table 5.1.2, a variety of pairings
can be made to achieve the same ERP.
Table 5.1.2 Various Combinations of Transmitter Power and Antenna Gain that will Produce 100
kW Effective Radiated Power (ERP) for an FM Station
5.1.7 Bibliography
Benson, B., and Whitaker, J.: Television and Audio Handbook for Technicians and Engineers,
McGraw-Hill, New York, N.Y., 1989.
Bingeman, Grant: “AM Tower Impedance Matching,” Broadcast Engineering, Intertec Publish-
ing, Overland Park, Kan., July 1985.
Bixby, Jeffrey: “AM DAs—Doing it Right,” Broadcast Engineering, Intertec Publishing, Over-
land Park, Kan., February 1984.
Chick, Elton B.: “Monitoring Directional Antennas,” Broadcast Engineering, Intertec Publish-
ing, Overland Park, Kan., July 1985.
Fink, D., and D. Christiansen (eds.): Electronics Engineer's Handbook, 3rd ed., McGraw-Hill,
New York, N.Y., 1989.
Jordan, Edward C. (ed.): Reference Data for Engineers—Radio, Electronics, Computer and
Communications, 7th ed., Howard W. Sams, Indianapolis, Ind., 1985.
Mullaney, John H.: “The Folded Unipole Antenna for AM Broadcast,” Broadcast Engineering,
Intertec Publishing, Overland Park, Kan., January 1960.
Mullaney, John H.: “The Folded Unipole Antenna,” Broadcast Engineering, Intertec Publishing,
Overland Park, Kan., July 1986.
Westberg, J. M.: “Effect of 90° Stub on Medium Wave Antennas,” NAB Engineering Handbook,
7th ed., National Association of Broadcasters, Washington, D.C., 1985.
Chapter
5.2
Television Antenna Principles
5.2.1 Introduction
A wide variety of options are available for television station antenna system implementations.
Classic designs, such as the turnstile, continue to be used while advanced systems, based on
panel or waveguide techniques, are implemented to solve specific design and/or coverage issues.
Regardless of its basic structure, any antenna design must focus on three key electrical consider-
ations:
• Adequate power handling capability
• Adequate signal strength over the coverage area
• Distortion-free emissions characteristics
The derating factor applied to the feed system to account for an expected reflection can be expressed in terms of VSWR as

$$\left(\frac{1}{1+|\Gamma|}\right)^2 = \left(\frac{\mathrm{VSWR}+1}{2\,\mathrm{VSWR}}\right)^2 \qquad (5.2.1)$$
where Γ = the expected reflection coefficient resulting from ice or a similar error condition. This
derating factor is in addition to the derating required because of the normally existing VSWR in
the antenna system feed line.

The relative power levels of the television signal components are as follows:

Parameter                                 Carrier Level (%)    Fraction of   Average
                                          Voltage    Power     Time (%)      Power (%)
Sync                                      100        100       8             8
Blanking                                  75         56        92            52
Visual black-signal power                                                    60
Aural power (percent of sync power)                                          20 (or 10)
Total transmitted power
  (percent of peak-of-sync)                                                  80 (or 70)
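The equivalence in Equation (5.2.1) is easy to confirm numerically; `vswr_derating` is a hypothetical helper, not from the text:

```python
def vswr_derating(vswr):
    """Power derating factor of Eq. (5.2.1) for a worst-case VSWR."""
    gamma = (vswr - 1.0) / (vswr + 1.0)   # reflection coefficient magnitude
    return (1.0 / (1.0 + gamma)) ** 2     # identical to ((vswr+1)/(2*vswr))**2

print(round(vswr_derating(1.5), 3))   # 0.694 — rate components about 31% down
```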
The manufacturer’s power rating for feed system components is based on a fixed ambient
temperature. This temperature is typically 40°C (104ºF). If the expected ambient temperature is
higher than the quoted value, a good rule of thumb is to lower the rating by the same percentage.
Hence, the television power rating (including 20 percent aural) of the feed system is given by

$$P_{TV} \approx \frac{P_{TL}}{0.8}\left(\frac{T_{TL}}{T}\right)\left(\frac{\mathrm{VSWR}+1}{2\,\mathrm{VSWR}}\right)^2 \qquad (5.2.2)$$

Where:
PTL = quoted average power rating of the transmission line components at VSWR = 1.0
TTL/T = ratio of the quoted to the expected ambient temperature
VSWR = worst-possible expected VSWR
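A sketch of Equation (5.2.2), using the rule-of-thumb temperature ratio described above (hypothetical helper, not from the text):

```python
def tv_power_rating(p_tl, t_quoted, t_expected, vswr):
    """Television power rating of the feed system per Eq. (5.2.2).

    Includes 20 percent aural (the 1/0.8 factor); the temperature ratio
    is the rule-of-thumb derating from the text, not an absolute-scale law.
    """
    derate = ((vswr + 1.0) / (2.0 * vswr)) ** 2
    return (p_tl / 0.8) * (t_quoted / t_expected) * derate

# 100 kW quoted line rating, 40 C quoted, 50 C expected, worst-case VSWR 1.2
print(round(tv_power_rating(100.0, 40.0, 50.0, 1.2), 1))   # 84.0
```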
Television antennas must also be designed to withstand voltage breakdown resulting from
high instantaneous peak power both inside the feed system and on the external surface of the
antenna. Improper air gaps or sharp edges on the antenna structure and insufficient safety factors
can lead to arcing and blooming. The potential problem resulting from instantaneous peak power
is aggravated when multiplexing two or more stations on the same antenna. In this latter case, if
all stations have the same input power, the maximum possible instantaneous peak power is pro-
portional to the number of the stations squared as derived below.
For a single channel, the maximum instantaneous voltage can occur when the visual and aural
peak voltages are in phase. Thus, with 20 percent aural, the worst-case peak voltage is

$$V_{peak} = 2.047\sqrt{P_{PS} Z_0} \qquad (5.2.3)$$

Where:
PPS = peak-of-sync input power
Z0 = characteristic impedance

and the equivalent peak power is

$$P_{peak} = \frac{V_{peak}^2}{Z_0} = 4.189\, P_{PS} \qquad (5.2.4)$$

For N stations multiplexed on the same antenna, the equivalent peak voltage is

$$V_{peak} = 2.047\sqrt{P_{PS1} Z_0} + 2.047\sqrt{P_{PS2} Z_0} + \dots + 2.047\sqrt{P_{PSN} Z_0} \qquad (5.2.5)$$

so that for N stations of equal input power the maximum instantaneous peak power is

$$P_{peak} = 4.189\, N^2 P_{PS} \qquad (5.2.6)$$
Experience has shown that the design peak power and the calculated peak power should be
related by a certain safety factor. This safety factor is made of two multipliers. The first value,
typically 3, is for the surfaces of pressurized components. This factor accounts for errors result-
ing from calculation and fabrication and/or design tolerances. The second value, typically 3,
accounts for humidity, salt, and pollutants on the external surfaces. The required peak power
capability, thus, is
$$P_s = 4.189 \times F_s \times N^2 \times P_{PS} \qquad (5.2.7)$$

Where:
Ps = safe peak power
Fs = 3 for internal pressurized surfaces
Fs = 9 for external surfaces
The input power to the antenna (Pinput) is the transmitter output power minus the losses in the
interconnection hardware between the transmitter output and the antenna input. This hardware
consists of the transmission lines and the filtering, switching, and combining complex.
The gain of an antenna (Gantenna) denotes its ability to concentrate the radiated energy toward
the desired direction rather than dispersing it uniformly in all directions. It is important to note
that higher gain does not necessarily imply a more effective antenna. It does mean a lower trans-
mitter power output to achieve the licensed ERP.
The visual ERP, which must not exceed the FCC specifications, depends on the television
channel frequency, the geographical zone, and the height of the antenna-radiation center above
average terrain. The FCC emission rules relate to either horizontal or vertical polarization of the
transmitted field. Thus, the total permissible ERP for circularly polarized transmission is dou-
bled.
The FCC-licensed ERP is based on average-gain calculations for omnidirectional antennas
and peak-gain calculations for antennas designed to emit a directional pattern.
5.2.2 Polarization
The polarization of an antenna is the orientation of the electric field as radiated from the device.
When the orientation is parallel to the ground in the radiation direction-of-interest, it is defined
as horizontal polarization. When the direction of the radiated electric field is perpendicular to
the ground, it is defined as vertical polarization. These two states are shown in Figure 5.2.1.
Therefore, a simple dipole arbitrarily can be oriented for either horizontal or vertical polariza-
tion, or any tilted polarization between these two states.
If a simple dipole is rotated at the picture carrier frequency, it will produce circular polariza-
tion (CP), since the orientation of the radiated electric field will be rotating either clockwise or
counterclockwise during propagation. This is shown in Figure 5.2.1. Instead of rotating the
antenna, the excitation of equal longitudinal and circumferential currents in phase quadrature
will also produce circular polarization. Because any state of polarization can be achieved by judi-
cious choice of vertical currents, horizontal currents, and their phase difference, it follows that
the reverse is also true. That is, any state of polarization can be described in terms of its vertical
and horizontal phase and amplitude components.
Perfectly uniform and symmetrical circular polarization in every direction is not possible in
practice. Circular polarization is a special case of the general elliptical polarization, which char-
acterizes practical antennas. Other special cases occur when the polarization ellipse degenerates
into linear polarization of arbitrary orientation.
Figure 5.2.2 Elliptical polarization as a combination of two circularly polarized signals. As shown,
the resultant ellipse equals the right-hand component plus the left-hand component.
It should be pointed out that the investment required in a circularly polarized transmission
facility is considerably greater than that required for horizontal polarization, mostly because of
the doubling of transmitter power. While doubling of antenna gain instead of transmitter power is
possible, the coverage of high-gain antennas with narrow beam-width may not be adequate
within a 2- to 3-mi (3- to 5-km) radius of the support tower unless a proper shaping of the eleva-
tion pattern is specified and achievable.
Practical antennas do not transmit ideally perfect circular polarization in all directions. A
common figure of merit of CP antennas is the axial ratio. The axial ratio is not the ratio of the
vertical to horizontal polarization field strength. The latter is called the polarization ratio. Practi-
cal antennas produce elliptical polarization; that is, the magnitude of the propagating electric
field prescribes an ellipse when viewed from either behind or in front of the wave. Every ellipti-
cally polarized wave can be broken into two circularly polarized waves of different magnitudes
and sense of rotation, as shown in Figure 5.2.2. For television broadcasting, usually a right-hand
(clockwise) rotation is specified when viewed from behind the outgoing wave, in the direction of
propagation.
Referring to Figure 5.2.2, the axial ratio of the elliptically polarized wave can be defined in
terms of the axes of the polarization ellipse or in terms of the right-hand and left-hand compo-
nents. Denoting the axial ratio as R, then
$$R = \frac{E_1}{E_2} = \frac{E_R + E_L}{E_R - E_L} \qquad (5.2.8)$$
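Equation (5.2.8) can be evaluated directly; `axial_ratio` is a hypothetical helper, not from the text:

```python
def axial_ratio(e_right, e_left):
    """Axial ratio R per Eq. (5.2.8) from the circular field components.

    R = 1 for pure circular polarization (no opposite-sense component);
    R grows without bound as the wave approaches linear polarization.
    """
    return (e_right + e_left) / (e_right - e_left)

print(axial_ratio(1.0, 0.0))              # 1.0 — pure circular polarization
print(round(axial_ratio(1.0, 0.2), 2))    # 1.5 — elliptical polarization
```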
When evaluating the transfer of energy between two CP antennas, the important performance
factors are the power-transfer coefficient and the rejection ratio of the unwanted signals (echoes)
to the desired signal. Both factors can be analyzed using the coupling-coefficient factor between
two antennas arbitrarily polarized. For two antennas whose axial ratios are R1 and R2, the coupling coefficient is

$$f = \frac{1}{2} + \frac{\pm 4 R_1 R_2 + \left(1 - R_1^2\right)\left(1 - R_2^2\right)\cos 2\alpha}{2\left(1 + R_1^2\right)\left(1 + R_2^2\right)} \qquad (5.2.9)$$

where α = the angle between the major axes of the individual ellipses of the antennas.
The plus sign in Equation (5.2.9) is used if the two antennas have the same sense of rotation
(either both right hand or left hand). The minus sign is used if the antennas have opposite senses
of polarization.
Two special cases are of importance when coupling between two elliptically polarized anten-
nas is considered. The first is when the major axes of the two ellipses are aligned (α = 0). The
second case is when the major axes are perpendicular to each other (α = ± π / 2).
For case 1, where the major axes of the polarization ellipses are aligned, the maximum power
transfer is

$$f = \frac{\left(1 \pm R_1 R_2\right)^2}{\left(1 + R_1^2\right)\left(1 + R_2^2\right)} \qquad (5.2.10)$$

and the ratio of minimum to maximum power transfer is

$$\frac{f_-}{f_+} = \frac{\left(1 - R_1 R_2\right)^2}{\left(1 + R_1 R_2\right)^2} \qquad (5.2.11)$$
For case 2, where the major axes of the two polarization ellipses are perpendicular, the maximum power transfer is

$$f = \frac{\left(R_1 \pm R_2\right)^2}{\left(1 + R_1^2\right)\left(1 + R_2^2\right)} \qquad (5.2.12)$$

and the corresponding ratio of minimum to maximum power transfer is

$$\frac{f_-}{f_+} = \frac{\left(R_1 - R_2\right)^2}{\left(R_1 + R_2\right)^2} \qquad (5.2.13)$$
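The general coupling coefficient reduces to the two special cases above, which makes it easy to check numerically; `coupling` is a hypothetical helper, not from the text:

```python
import math

def coupling(r1, r2, alpha, same_sense=True):
    """Coupling coefficient between two elliptically polarized antennas,
    per Eq. (5.2.9). alpha is the angle between major axes, in radians."""
    sign = 1.0 if same_sense else -1.0
    num = sign * 4 * r1 * r2 + (1 - r1**2) * (1 - r2**2) * math.cos(2 * alpha)
    return 0.5 + num / (2 * (1 + r1**2) * (1 + r2**2))

# Two ideal CP antennas (R1 = R2 = 1), same sense of rotation
print(coupling(1.0, 1.0, 0.0))                      # 1.0 — full transfer
print(coupling(1.0, 1.0, 0.0, same_sense=False))    # 0.0 — full rejection
```

Setting alpha = 0 reproduces Equation (5.2.10), and alpha = π/2 reproduces Equation (5.2.12).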
5.2.3 Gain
Antenna gain is a figure of merit that describes the antenna’s ability to radiate the input power
within specified sectors in the directions of interest. Broadcast antenna gains are defined relative
to the peak gain of a half-wavelength-long dipole. Gain is one of the most critical figures of
merit of the television broadcast antenna, as it determines the transmitter power required to
achieve a given ERP. Gain is related to the beamwidth of the elevation pattern, which in turn
affects the coverage and sets limits on the allowable windsway. It is related to height (windload)
of the antenna and to the noncircularity of the azimuthal pattern.
The gain of any antenna can be estimated quickly from its height and knowledge of its azi-
muthal pattern. It can be calculated precisely from measurements of the radiation patterns. It can
also be measured directly, although this is rarely done for a variety of practical reasons. The gain
of television antennas is always specified relative to a half-wavelength dipole. This practice dif-
fers from that used in nonbroadcast antennas.
Broadcast antenna gains are specified by elevation (vertical) gain, azimuthal (horizontal)
gain, and total gain. The total gain is specified at either its peak or average (rms) value. In the
U.S., the FCC allows average values for omnidirectional antennas but requires the peak-gain
specification for directional antennas. For circularly polarized antennas, the aforementioned
terms also are specified separately for the vertically and horizontally polarized fields.
For an explanation of elevation gain, consider the superimposed elevation patterns in Figure
5.2.3. The elevation pattern with the narrower beamwidth is obtained by distributing the total
input power equally among 10 vertical dipoles stacked vertically 1 wavelength apart. The wider-
beamwidth, lower peak-amplitude elevation pattern is obtained when the same input power is fed
into a single vertical dipole. Note that at depression angles below 5°, the lower-gain antenna
transmits a stronger signal. In the direction of the peak of the elevation patterns, the gain of the
10 dipoles relative to the single dipole is
$$g_e = \left(\frac{1.0}{0.316}\right)^2 = 10 \qquad (5.2.14)$$
It can be seen from Figure 5.2.3 that the elevation gain is proportional to the vertical aperture
of the antenna. The theoretical upper limit of the elevation gain for practical antennas is given by
g_e = 1.22 \, \eta \, \frac{A}{\lambda} \qquad (5.2.15)
Where:
η = feed system efficiency
A/λ = the vertical aperture of the antenna in wavelengths of the operating television channel
In practice, the elevation gain varies from η × 0.85 A/λ to η × 1.1 A/λ.
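As a quick numeric check, the practical range and theoretical ceiling of Eq. (5.2.15) can be sketched in Python (the function name and interface are illustrative, not from the handbook):

```python
def elevation_gain_bounds(aperture_wl, efficiency=1.0):
    """Practical range and theoretical ceiling (Eq. 5.2.15) of the
    elevation gain for a vertical aperture given in wavelengths."""
    practical_low = efficiency * 0.85 * aperture_wl
    practical_high = efficiency * 1.10 * aperture_wl
    theoretical_max = efficiency * 1.22 * aperture_wl
    return practical_low, practical_high, theoretical_max

# A lossless 10-wavelength aperture: gain between 8.5 and 11,
# with a theoretical ceiling of 12.2.
lo, hi, ceiling = elevation_gain_bounds(10.0)
```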
To completely describe the gain performance of an antenna, it is necessary to also consider
the azimuth radiation pattern. An example pattern is shown in Figure 5.2.4.
The total gain of the antenna in a given direction is

G = \frac{4\pi\eta}{1.64} \cdot \frac{|E_v|^2 + |E_h|^2}{\iint \left( |E_v|^2 + |E_h|^2 \right) \sin\theta \, d\theta \, d\phi} \qquad (5.2.16)
Where E_v and E_h equal the magnitudes of the electric fields of the vertical polarization and the
horizontal polarization, respectively, in the direction of interest. This direction usually is the peak
of the main beam.
Define

G_h = \frac{4\pi\eta}{1.64} \cdot \frac{|E_h|^2}{\iint |E_h|^2 \sin\theta \, d\theta \, d\phi} \qquad (5.2.17)
as the total (azimuthal and elevation) gain for the horizontal polarization component in the
absence of vertical polarization, and then let
G_v = \frac{4\pi\eta}{1.64} \cdot \frac{|E_v|^2}{\iint |E_v|^2 \sin\theta \, d\theta \, d\phi} \qquad (5.2.18)
be the total gain for the vertical polarization component in the absence of horizontal polarization.
Then
\frac{G}{G_h} = \frac{1 + |E_v / E_h|^2}{1 + (G_h / G_v) |E_v / E_h|^2} = \frac{1 + |P|^2}{1 + (G_h / G_v) |P|^2} \qquad (5.2.19)
and
\frac{G}{G_v} = \frac{1 + |E_h / E_v|^2}{1 + (G_v / G_h) |E_h / E_v|^2} = \frac{1 + |P|^2}{|P|^2 + (G_v / G_h)} \qquad (5.2.20)
where |P| is the magnitude of the polarization ratio, P = E_v / E_h. When the last two expressions
are added and rearranged, the total gain G of the antenna is obtained as
G = \left( 1 + |P|^2 \right) \left[ \frac{1}{(1/G_h) + |P|^2 / G_v} \right] \qquad (5.2.21)
The total gain can be broken into two components whose ratio is |P|^2. For horizontal polarization

G_H = \frac{G_h}{1 + (G_h / G_v) |P|^2} \qquad (5.2.22)

and for vertical polarization

G_V = \frac{G_v}{1 + (G_v / G_h) / |P|^2} \qquad (5.2.23)
The last three expressions specify completely any antenna provided G_h, G_v, and |P| are
known. From the definitions it can be seen that the first two can be obtained from measured-pattern
integration, and the magnitude of the polarization ratio is either known or can be measured.
When the antenna is designed for horizontal polarization, |P| = 0 and G = G_h. For circular
polarization, |P| = 1 in all directions, G_h = G_v, and the gain of each polarization is half of the
total antenna gain.
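These relations are easy to evaluate numerically. The sketch below (the function name is illustrative) implements Eq. (5.2.21) and checks the two limiting cases just described:

```python
def total_gain(gh, gv, p_mag):
    """Total gain from Gh, Gv, and the polarization ratio |P| (Eq. 5.2.21):
    G = (1 + |P|^2) / (1/Gh + |P|^2 / Gv)."""
    return (1.0 + p_mag ** 2) / (1.0 / gh + p_mag ** 2 / gv)

# Horizontal polarization (|P| = 0): G reduces to Gh.
# Circular polarization (|P| = 1, Gh = Gv): G = Gh, and each
# polarization component carries half of the total gain.
```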
For an array of panels stacked vertically, the elevation-pattern field is given by

E(\theta) = \sum_{i=1}^{N} A_i P_i(\theta) \exp(j \phi_i) \exp\!\left( j \frac{2\pi}{\lambda} d_i \sin\theta \right) \qquad (5.2.24)
Where:
P_i(θ) = vertical pattern of the ith panel
A_i = amplitude of the current in the ith panel
φ_i = phase of the current in the ith panel
d_i = location of the ith panel
In television applications, the normalized magnitude of the pattern is of primary interest. For
an array consisting of N identical radiators, spaced uniformly apart (d), and carrying identical
currents, the magnitude of the elevation, or vertical, radiation pattern, is given by
E(\theta) = \frac{\sin\!\left[ N (\pi d / \lambda) \sin\theta \right]}{N \sin\!\left[ (\pi d / \lambda) \sin\theta \right]} \, P(\theta) \qquad (5.2.25)
where the first part of the expression on the right is commonly termed the array factor. The ele-
vation pattern of a panel antenna comprising six radiators, each 6 wavelengths long, is given in
Figure 5.2.5. The elevation-pattern power gain ge of such an antenna can be determined by inte-
grating the power pattern, and is given by the expression
g_e \cong \frac{N d}{\lambda} \qquad (5.2.26)
Thus, for an antenna with N half-wave dipoles spaced 1 wavelength apart, the gain is essen-
tially equal to the number of radiators N. In practice, slightly higher gain can be achieved by the
use of radiators which have a gain greater than that of a half-wave dipole.
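A minimal sketch of Eq. (5.2.25), normalized so the array factor equals 1 at the beam peak (illustrative code, not from the handbook):

```python
import math

def array_factor(theta_deg, n, d_wl):
    """Normalized array factor of Eq. (5.2.25): N identical radiators,
    equal in-phase currents, spaced d_wl wavelengths apart."""
    psi = math.pi * d_wl * math.sin(math.radians(theta_deg))
    if abs(math.sin(psi)) < 1e-12:   # beam peak (or a grating lobe)
        return 1.0
    return abs(math.sin(n * psi) / (n * math.sin(psi)))

# Ten radiators spaced one wavelength apart: unity at theta = 0 and a
# first null where sin(theta) = 1/10 (about 5.7 degrees).
```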
The array factor becomes zero whenever the numerator becomes zero, except at θ = 0 when
its value equals 1. The nulls of the pattern can be easily determined from the equation
\frac{N \pi d}{\lambda} \sin\theta_m = m\pi \qquad (5.2.27)
or
\theta_m = \sin^{-1}\!\left( \frac{m}{g_e} \right) \qquad (5.2.28)
where m = 1, 2, 3, ... refers to the null number and θ_m is the depression angle, in radians, at
which a null is expected.
The approximate beamwidth corresponding to 70.7 percent of the field (50 percent of power),
or 3 dB below the maximum, can be determined from the array factor. The minimum beamwidth
is given by the expression
BW_{\min} = \frac{50.8}{N (d/\lambda)} = \frac{50.8}{g_e} \ \text{deg} \qquad (5.2.29)
It is interesting to note that the gain-beamwidth product is essentially a constant. The gain-
beamwidth product of practical antennas varies from 50.8 to 68, depending on the final shaping
of the elevation pattern.
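Equations (5.2.28) and (5.2.29) can be combined into a short helper (function names are illustrative):

```python
import math

def null_angles_deg(g_e, count):
    """Depression angles of the first `count` nulls, Eq. (5.2.28)."""
    return [math.degrees(math.asin(m / g_e)) for m in range(1, count + 1)]

def min_beamwidth_deg(g_e):
    """Minimum 3-dB beamwidth, Eq. (5.2.29)."""
    return 50.8 / g_e

# For g_e = 10: nulls near 5.74, 11.54, and 17.46 degrees, and a
# minimum beamwidth of 5.08 degrees.
```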
Figure 5.2.5 Elevation pattern of an antenna array of six radiators, each six wavelengths long.
All radiators are fed equally and in phase.
Although the previous discussion has been concerned with arrays of discrete elements, the
concept can be generalized to include antennas that are many wavelengths long in aperture and
have a continuous illumination. In such cases, the classic techniques of null filling by judicious
antenna aperture current distribution, such as cosine-shaped or exponential illuminations, are
employed.
Elevation-Pattern Derivation
The signal strength at any distance from an antenna is directly related to the maximum ERP of
the antenna and the value of the antenna elevation pattern toward that depression angle. For a sig-
nal strength of 100 mV/m, which is considered adequate for analog television applications, and
assuming the FCC (50,50) propagation curves, the desired elevation pattern of an antenna can be
derived. Such curves provide an important basis for antenna selection.
Beam Tilting
Any specified tilting of the beam below the horizontal is easily achieved by progressive phase
shifting of the currents I_i in each panel in an amount equal to

\phi_i = \frac{2\pi d_i}{\lambda} \sin\theta_T \qquad (5.2.30)

where θ_T is the required amount of beam tilt in degrees. Minor corrections, necessary to account
for the panel pattern, can be determined easily by iteration.
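As a sketch, the progressive phases can be computed from the standard array-steering relation φ_i = 360°(d_i/λ) sin θ_T; the function name is illustrative, and this is the usual steering formula rather than a quotation from the handbook:

```python
import math

def beam_tilt_phases_deg(panel_positions_wl, tilt_deg):
    """Progressive feed phases (degrees) that tilt the main beam down by
    tilt_deg for panels at the given heights (in wavelengths), using the
    standard relation phi_i = 360 * (d_i / lambda) * sin(theta_T)."""
    return [360.0 * d * math.sin(math.radians(tilt_deg))
            for d in panel_positions_wl]

# Four panels one wavelength apart, 0.75-degree tilt: the phase step
# between adjacent panels is about 4.7 degrees.
```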
In some cases, a progressive phase shifting of individual radiators may not be cost effective,
and several sections of the array can have the illuminating current of the same phase, thus reduc-
ing the number of different phasing lengths. In this case, the correct value of the phase angle for
each section can be computed.
When the specified beam tilt is comparable with the beamwidth of the elevation pattern, steering
the array reduces the gain relative to that of an equivalent array without beam tilt. To improve
the antenna gain, mechanical tilt of the individual panels, or of the entire antenna, is resorted
to in some cases. However, mechanically tilting the entire antenna results in variable
beam tilt around the azimuth.
Null-Fill
Another important antenna criterion is the null-fill requirement. If the antenna is near the center
of the coverage area, depending on the minimum gain, the nulls in the coverage area must be
filled in to provide a pattern that lies above the 100-mV/m curve for that particular height. For
low-gain antennas, this problem is not severe, especially when the antenna height is lower than
2000 ft (610 m) and only the first null has to be filled. But in the case of UHF antennas, with
gains greater than 20, the nulls usually occur close to the main beam and at least two nulls must
be filled.
When the nulls of a pattern are filled, the gain of the antenna is reduced. A typical gain loss of
1 to 2 dB generally is encountered.
The variables for pattern null-filling are the spacing of the radiators and the amplitudes and
phases of the feed currents. The spacings generally are chosen to provide the minimum number
of radiators necessary to achieve the required gain. Hence, the only variables are the 2(N – 1)
relative amplitudes and phases.
The distance from the antenna to the null can be approximated if the gain and the height of the
antenna above average terrain are known. Because the distance at any depression angle θ can be
approximated as
D = 0.0109 \, \frac{H}{\theta} \qquad (5.2.31)
with H equal to the antenna height in feet and θ the depression angle in degrees, and the
depression angle of the mth null equal to

\theta_m = 57.3 \sin^{-1}\!\left( \frac{m}{g_e} \right) \qquad (5.2.32)

where g_e is the elevation power gain of the antenna.
then
D_m = 0.00019 \, \frac{H g_e}{m} \ \text{miles}, \quad \text{for } \sin^{-1}(m / g_e) \approx \frac{m}{g_e} \qquad (5.2.33)
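Equation (5.2.33) reduces to a one-line helper (illustrative code, English units as in the text):

```python
def null_distance_miles(height_ft, g_e, m):
    """Approximate distance to the m-th null, Eq. (5.2.33):
    D_m = 0.00019 * H * g_e / m, valid while sin(x) ~ x."""
    return 0.00019 * height_ft * g_e / m

# A 1000-ft antenna with elevation gain 10: the first null falls
# about 1.9 mi out, the second about 0.95 mi out.
```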
Figure 5.2.6 High-gain UHF antenna (ERP = 5 MW, gain = 60, beam tilt = 0.75°).
cular patterns (see Figure 5.2.7). Practical single-radiator-element patterns do not have sharp
transitions, and the resultant azimuthal pattern is not a perfect circle. Furthermore, the interradi-
ator spacing becomes important, because for a given azimuth, the signals from all the radiators
add vectorially. The resultant vector, which depends on the space-phase difference of the individ-
ual vectors, varies with azimuth, and the circularity deteriorates with increased spacing.
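The vector addition described above can be sketched numerically. The model below is illustrative only: it assumes ideal cosine-pattern panels with no back lobes and ignores tower scattering, so it shows only the trend of circularity versus spacing.

```python
import cmath, math

def circularity_db(n_panels, radius_wl, samples=720):
    """Peak-to-minimum ratio (dB) of the azimuthal pattern of n ideal
    cosine-pattern panels, fed in phase, mounted on a circle of
    radius_wl wavelengths."""
    mags = []
    for k in range(samples):
        az = 2.0 * math.pi * k / samples
        total = 0j
        for i in range(n_panels):
            boresight = 2.0 * math.pi * i / n_panels
            delta = (az - boresight + math.pi) % (2.0 * math.pi) - math.pi
            if abs(delta) >= math.pi / 2.0:
                continue                      # panel radiates over +/- 90 deg
            element = math.cos(delta)
            # space phase from the panel's displacement toward its boresight
            total += element * cmath.exp(1j * 2.0 * math.pi * radius_wl
                                         * math.cos(delta))
        mags.append(abs(total))
    return 20.0 * math.log10(max(mags) / min(mags))

# Three panels on a triangular tower: circularity worsens as the
# effective tower radius (in wavelengths) grows.
```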
In the foregoing discussion, it is assumed that the radiators around the circular periphery are
fed in-phase. Similar omnidirectional patterns can be obtained with radial-firing panel antennas
when the panels differ in phase uniformly around the azimuth wherein the total phase change is a
multiple of 360°. The panels then are offset mechanically from the center lines as shown in Fig-
ure 5.2.8. The offset is approximately 0.19 wavelength for triangular-support towers and 0.18
wavelength for square-support towers. The essential advantage of such a phase rotation is that,
when the feedlines from all the radiators are combined into a common junction box, the first-
order reflections from the panels tend to cancel at the input to the junction box, resulting in a
considerable improvement in the input impedance.
The azimuthal pattern of a panel antenna is a cosine function of the azimuthal angle in the
front of the radiator, and its back lobe is small. The half-voltage width of the frontal main lobe
ranges from 90° for square-tower applications to 120° for triangular towers. The panels are
affixed on the tower faces. Generally, a 4-ft-wide tower face is small enough for U.S. channels 7
to 13. For channels 2 to 6, the tower-face size could be as large as 10 ft (3 m). Table 5.2.2 lists
typical circularity values.
In all the previous cases, the circular omnidirectional pattern of the panel antenna is achieved
by aligning the main beam of the panels along the radials. This is the radial-fire mode. Another
technique utilized in the case of towers with a large cross section (in wavelengths) is the tangen-
tial-fire mode. The panels are mounted in a skewed fashion around a triangular or square tower,
as shown in Figure 5.2.9. The main beam of the panel is directed along the tangent of the circum-
scribed circle as indicated in the figure. The optimum interpanel spacing is an integer number of
wavelengths when the panels are fed in-phase. When a rotating-phase feed is employed, correc-
tion is introduced by modifying the offset—for example, by adding 1/3 wavelength when the
rotating phase is at 120°. The table of Figure 5.2.9 provides the theoretical circularities for ideal
elements. Optimization is achieved in practice by model measurements to account for the back
lobes and the effect of the tower members.
A measured pattern of such a tangential-fire array is shown in Figure 5.2.10.
In the case of directional antennas, the desired pattern is obtained by choosing the proper
input-power division to the panels of each layer, adjusting the phase of the input signal, and/or
mechanically tilting the panels. In practice, the azimuthal-pattern synthesis of panel antennas is
done by superposition of the azimuthal phase and amplitude patterns of each radiator, while
adjusting the amplitudes and phases of the feed currents until a suitable configuration of the pan-
els on the tower yields the desired azimuthal pattern.
The turnstile antenna is a top-mounted omnidirectional pole antenna. Utilizing a phase rota-
tion of four 90° steps, the crossed pairs of radiators act as two dipoles in quadrature, resulting in
a fairly smooth pattern. The circularity improves with decreasing diameter of the turnstile sup-
port pole.
The turnstile antenna can be directionalized. As an example, a peanut-shaped azimuthal pat-
tern can be synthesized, either by power division between the pairs of radiators or by introducing
proper phasing between the pairs. The pattern obtained by phasing the pairs of radiators by 70°
instead of the 90° used for an omnidirectional pattern is shown in Figure 5.2.11.
A directional pattern of a panel antenna with unequal power division is shown in Figure
5.2.12. The panels are offset to compensate for the rotating phase employed to improve the input
impedance of the antenna.
Figure 5.2.9 Tangential-fire mode. Note that with back lobes and tower reflections, the circularities
tend to be worse by about 2 dB than those given here, especially for large tower sizes.
Figure 5.2.10 Measured azimuthal pattern of a “tangential-fire” array of three panels fed in-phase
around a triangular tower with D = 7.13 wavelengths.
the power in the incident wave can be reflected at a number of points in the line. Additional
reflection occurs at the line-to-antenna interface, because the antenna per se presents an imper-
fect match to the incident wave. These reflections set up a reflected wave that combines with the
incident wave to form a standing wave inside the line. The characteristic of the standing-wave
pattern is periodic maximums and minimums of the voltage and current along the line. The ratio
of the maximum to minimum at any plane is called the voltage standing-wave ratio (VSWR).
Because the VSWR is varying along the transmission line, the reference plane for the VSWR
measurement must be defined. When the reference plane is at the gas stop input in the transmit-
ter room, the measured value is system VSWR. When the reference plane is at the input to the
antenna on the tower, the measured value is the antenna VSWR. The system VSWR differs from
the antenna VSWR owing to the introduction of standing waves by the transmission line. When
two sources of standing waves S_1 and S_2 exist, then the maximum VSWR is

VSWR_{\max} = S_1 S_2 \qquad (5.2.34)

and the minimum VSWR is

VSWR_{\min} = \frac{S_2}{S_1} \quad \text{for } S_1 < S_2 \qquad (5.2.35)
More generally, the expected system VSWR for n such reflections is, for the maximum case,

VSWR_{\max} = S_1 S_2 S_3 \cdots S_n \qquad (5.2.36)

and, for the minimum case,

VSWR_{\min} = \frac{S_n}{S_1 S_2 S_3 \cdots S_{n-1}} \quad \text{for } S_1 < S_2 < \cdots < S_n \qquad (5.2.37)

Figure 5.2.12 Directional panel antenna implemented using power division and offsetting.
If the calculated minimum VSWR is less than 1.00, then the minimum VSWR = 1.00.
As an example, consider an antenna with an input VSWR of 1.05 at visual and 1.10 at aural
carrier frequencies. If the transmission line per se has a VSWR of 1.05 at the visual and 1.02 at
aural carriers, the system VSWR will be between
1.00 = \frac{1.05}{1.05} \le S_{vis} \le 1.05 \times 1.05 = 1.103 \qquad (5.2.38)
and

1.078 = \frac{1.10}{1.02} \le S_{aural} \le 1.10 \times 1.02 = 1.122 \qquad (5.2.39)
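A small helper (names are illustrative) that reproduces these bounds:

```python
def system_vswr_bounds(*sources):
    """Bounds on the system VSWR produced by several standing-wave
    sources S_i: the maximum is the product of all S_i; the minimum is
    the largest S_i divided by the product of the others, floored at 1.00."""
    product = 1.0
    for s in sources:
        product *= s
    largest = max(sources)
    vswr_min = max(1.0, largest / (product / largest))
    return vswr_min, product

# Visual carrier: antenna 1.05, line 1.05 -> 1.00 to about 1.10.
# Aural carrier: antenna 1.10, line 1.02 -> about 1.078 to 1.122.
```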
The VSWR at any frequency can be computed from the reflection coefficient as

S = \frac{1 + |\Gamma|}{1 - |\Gamma|} \qquad (5.2.40)
where |Γ| is the magnitude of the reflection coefficient at that frequency. For example, if 2.5
percent of the incident voltage is reflected at the visual carrier frequency, the VSWR at that
frequency is
S = \frac{1 + 0.025}{1 - 0.025} = 1.05 \qquad (5.2.41)
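Equations (5.2.40) and (5.2.41) in code form (illustrative helper functions):

```python
def vswr_from_reflection(gamma_mag):
    """Eq. (5.2.40): S = (1 + |Gamma|) / (1 - |Gamma|)."""
    return (1.0 + gamma_mag) / (1.0 - gamma_mag)

def reflection_from_vswr(s):
    """Inverse relation: |Gamma| = (S - 1) / (S + 1)."""
    return (s - 1.0) / (s + 1.0)

# 2.5 percent reflected voltage gives a VSWR of about 1.05.
```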
Figure 5.2.13 Example VSWR considerations for two antennas: (a) impedance, (b) reflected
pulse, (c) measured VSWR.
alone because it contains no information with respect to the phase of the reflection at each fre-
quency across the channel. Because the impedance representation contains the amplitude and
phase of the reflection coefficient, the pulse response can be calculated if the impedance is
known. The impedance representation is typically done on a Smith chart, as shown in Figure
5.2.13 for antennas A and B. Some attempts at relating various shapes of VSWR curves to
subjective picture quality have been made. A good rule of thumb is to minimize the VSWR in the
region from –0.25 to +0.75 MHz of the visual carrier frequency to a level below 1.05 and not
to exceed 1.08 at the color subcarrier.
The required slug length can be calculated from

\frac{L}{\lambda} = \frac{1}{2\pi} \tan^{-1}\!\left[ \frac{S - 1}{\sqrt{\left( S - R^2 \right)\left[ (1/R)^2 - S \right]}} \right] \qquad (5.2.42)
Where:
S = existing VSWR
R = slug characteristic impedance ÷ line characteristic impedance
L/λ = length of slug
A graphic representation of this expression is given in Figure 5.2.14. The effect of the fringe
capacitance, resulting from the ends of the slug, is not included in the design chart because it is
negligible for all television channels. After the characteristic impedance of the slug is known, its
diameter is determined from
Z_c = 138 \log_{10} \frac{D}{d} \qquad (5.2.43)
Where:
D = the inside diameter of the outer conductor
d = the outside diameter of the slug
The slug thus constructed is slowly slid over the inner conductor until the VSWR disappears.
This occurs within a line length of 1/2 wavelength.
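The slug design steps (Eqs. 5.2.42 and 5.2.43) are easy to script; the function names are illustrative, and the length formula assumes R² < S < 1/R²:

```python
import math

def slug_length_wl(s, r):
    """Slug length in wavelengths, Eq. (5.2.42), for an existing VSWR s
    and impedance ratio r = slug Zc / line Zc (requires r^2 < s < 1/r^2)."""
    return math.atan((s - 1.0) /
                     math.sqrt((s - r ** 2) * ((1.0 / r) ** 2 - s))) / (2.0 * math.pi)

def coax_zc(big_d, small_d):
    """Air-dielectric coaxial characteristic impedance, Eq. (5.2.43)."""
    return 138.0 * math.log10(big_d / small_d)

# Matching a VSWR of 1.25 with a slug at 80 percent of the line
# impedance calls for a slug a little under a tenth-wavelength long.
```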
There are two shortcomings in the single-slug technique:
• Access to the inner conductor is required
• The technique is not applicable if the VSWR at two frequencies must be minimized
The first shortcoming can be eliminated by using the variable transformer mentioned previously.
Although a variable transformer is more expensive, it avoids slug machining and sliding
adjustments during installation. The second shortcoming can be overcome with the double-slug
technique or, more conveniently, with two variable transformers.
The single- and double-slug techniques can have an undesirable effect if not properly applied.
The slugs should be placed as near as possible to the source of the undesirable VSWR. Failure to
do so can lead to higher VSWRs at other frequencies across the bandwidth of interest. Thus, if
both the system and the antenna VSWRs are high at the visual carrier, slugging at the transmitter
room will lower the system VSWR but will not eliminate the undesirable echo. Another effect of
slugging is the potential alteration of the power and voltage rating. This is of particular impor-
tance if the undesirable VSWR is high. Generally, the slugging should be limited to situations
where the VSWR does not exceed 1.25.
Figure 5.2.15 In-place azimuthal pattern of one antenna in the presence of another 35 ft away
from it (horizontal polarization). The self pattern is the dotted curve.
When towers are located at a spacing greater than 100 ft (30 m), both the problem of azi-
muthal pattern deterioration and the magnitude of the ghost have to be considered. The magni-
tude of the reflection from an opposing structure decreases as the spacing increases, but not
linearly. For example, if the spacing is quadrupled, the magnitude of the reflection is reduced by
only one-half. When the antennas are located more than several hundred feet from each other, the
magnitude of the reflection is usually small enough to be ignored, as far as pattern deterioration
is concerned. However, a reflection of even 3 percent is noticeable to a critical viewer. Thus,
large separations, usually more than a thousand feet, are necessary to reduce the echo visibility
to an acceptably low level.
In the previous analysis for the illumination of the interfering structure by the antenna, it was
assumed that only the portion of the interfering structure in the aperture of the antenna was of
importance. This is true for separation distances comparable with the antenna aperture. However,
as the separation distance from the antenna increases, the elevation pattern in any vertical plane
changes its shape from a near-field to a far-field pattern. As the elevation pattern changes, more
of the opposing structure is illuminated by the primary signal. This effect of distance from the
antenna on the elevation pattern is illustrated in Figure 5.2.18. Note that the elevation pattern
shown is plotted against the height of the opposing structure, rather than the elevation angle.
Figure 5.2.16 In-place azimuthal pattern of one antenna in the presence of another 35 ft away
from it (vertical polarization). The self pattern is the dotted curve.
Figure 5.2.18 Illustration of opposing tower illumination for various spacings, given in wavelengths
between the transmitting antenna and the opposing structure.
In practice, optimization of a multiple-tower system may require both a model study and com-
puter simulation. The characteristics of the reflections of the particular tower in question can best
be determined by model measurements and the effect of spacing determined by integration of the
induced illumination on the interfering structure.
5.2.6 References
1. Furnes, N., and K. N. Stokke: “Reflection Problems in Mountainous Areas: Tests with Circular
Polarization for Television and VHF/FM Broadcasting in Norway,” EBU Review, Technical
Part, no. 184, pp. 266–271, December 1980.
2. Hill, P. C. J.: “Methods for Shaping Vertical Pattern of VHF and UHF Transmitting Aeri-
als,” Proc. IEEE, vol. 116, no. 8, pp. 1325–1337, August 1969.
3. DeVito, G: “Considerations on Antennas with no Null Radiation Pattern and Pre-estab-
lished Maximum-Minimum Shifts in the Vertical Plane,” Alta Frequenza, vol. XXXVIII,
no. 6, 1969.
4. Perini, J., and M. H. Ideslis: “Radiation Pattern Synthesis for Broadcast Antennas,” IEEE
Trans. Broadcasting, vol. BC-18, no. 3, pg. 53, September 1972.
5. Praba, K.: “Computer-aided Design of Vertical Patterns for TV Antenna Arrays,” RCA
Engineer, vol. 18-4, January–February 1973.
5.2.7 Bibliography
Allnatt, J. W., and R. D. Prosser: “Subjective Quality of Television Pictures Impaired by Long
Delayed Echoes,” Proc. IEEE, vol. 112, no. 3, March 1965.
Bendov, O.: “Measurement of Circularly Polarized Broadcast Antennas,” IEEE Trans. Broad-
casting, vol. BC-19, no. 1, pp. 28–32, March 1972.
Dudzinsky, S. J., Jr.: “Polarization Discrimination for Satellite Communications,” Proc. IEEE,
vol. 57, no. 12, pp. 2179–2180, December 1969.
Fowler, A. D., and H. N. Christopher: “Effective Sum of Multiple Echoes in Television,” J.
SMPTE, SMPTE, White Plains, N. Y., vol. 58, June 1952.
Johnson, R. C., and H. Jasik: Antenna Engineering Handbook, 2d ed., McGraw-Hill, New York,
N.Y., 1984.
Kraus, J. D.: Antennas, McGraw-Hill, New York, N.Y., 1950.
Lessman, A. M.: “The Subjective Effect of Echoes in 525-Line Monochrome and NTSC Color
Television and the Resulting Echo Time Weighting,” J. SMPTE, SMPTE, White Plains,
N.Y., vol. 1, December 1972.
Mertz, P.: “Influence of Echoes on Television Transmission,” J. SMPTE, SMPTE, White Plains,
N.Y., vol. 60, May 1953.
Moreno, T.: Microwave Transmission Design Data, Dover, New York, N.Y.
Perini, J.: “Echo Performance of TV Transmitting Systems,” IEEE Trans. Broadcasting, vol. BC-
16, no. 3, September 1970.
Praba, K.: “R. F. Pulse Measurement Techniques and Picture Quality,” IEEE Trans. Broadcast-
ing, vol. BC-23, no. 1, pp. 12–17, March 1976.
Sargent, D. W.: “A New Technique for Measuring FM and TV Antenna Systems,” IEEE Trans.
Broadcasting, vol. BC-27, no. 4, December 1981.
Siukola, M. S.: “TV Antenna Performance Evaluation with RF Pulse Techniques,” IEEE Trans.
Broadcasting, vol. BC-16, no. 3, September 1970.
Whythe, D. J.: “Specification of the Impedance of Transmitting Aerials for Monochrome and
Color Television Signals,” Tech. Rep. E-115, BBC, London, 1968.
Chapter
5.3
Television Transmitting Antennas
5.3.1 Introduction
Broadcasting is accomplished by the emission of coherent electromagnetic waves in free space
from a single or group of radiating-antenna elements, which are excited by modulated electric
currents. Although, by definition, the radiated energy is composed of mutually dependent mag-
netic and electric vector fields, conventional practice in television engineering is to measure and
specify radiation characteristics in terms of the electric field [1–3].
The field vectors may be polarized, or oriented, linearly (horizontally or vertically) or circularly.
Linear polarization is used for some types of radio transmission. Television broadcasting has
used horizontal polarization for the majority of the system standards in the world since its incep-
tion. More recently, the advantages of circular polarization have resulted in an increase in the use
of this form of transmission, particularly for VHF channels.
Both horizontal and circular polarization designs are suitable for tower-top or tower-face
installations. The latter option is dictated generally by the existence of previously installed tower-
top antennas. On the other hand, in metropolitan areas where several antennas must be located on
the same structure, either a stacking or a candelabra-type arrangement is feasible. Figure 5.3.1
shows an example of antenna stacking.
The use of stacked antennas provides a number of operational benefits, not the least of which
is reduced cost. There are numerous examples of such installations, including the Sears Tower in
Chicago (Figure 5.3.2) and Mt. Sutro in San Francisco, where antennas are mounted on a cande-
labra assembly (Figure 5.3.3). The implementation of DTV operation has generated considerable
interest in such designs primarily because of their ability to accommodate a great number of
antennas on a given structure.
Another solution to the multichannel location problem, where space or structural limitations
prevail, is to diplex two stations on the same antenna. This approach, while economical from the
installation aspect, can result in transmission degradation because of the diplexing circuitry, and
antenna-pattern and impedance broadbanding usually is required.
Figure 5.3.2 Twin tower antenna array atop the Sears Tower in Chicago.
thermore, the gain and pattern characteristics must be designed to achieve the desired coverage
within acceptable tolerances, and operationally with a minimum of maintenance. Tower-top,
pole-type antennas designed to meet these requirements can be classified into two broad catego-
ries:
Figure 5.3.3 The triangular candelabra of antennas at Mt. Sutro in San Francisco.
5.3.2c UHF Antennas for Tower-Top Installation
Figure 5.3.4 Schematic of a multislot traveling-wave antenna.
The slotted-cylinder antenna, commonly referred to as the pylon antenna, is a popular
top-mounted antenna for UHF applications.
Horizontally polarized radiation is achieved using axial resonant slots on a cylinder to generate
circumferential current around the outer surface of the cylinder. An excellent omnidirectional
azimuthal pattern is achieved by exciting four columns of slots around the circumference of the
cylinder, which is a structurally rigid coaxial transmission line.
The slots along the pole are spaced approximately one wavelength apart per layer, and a suitable
number of layers is used to achieve the desired gain. Typical gains range from 20 to 40.
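As a rough sizing sketch based on g_e ≈ Nd/λ from Eq. (5.2.26) (illustrative estimate only; real pylon designs also budget gain for null fill and beam tilt):

```python
import math

def pylon_layers_for_gain(target_gain, layer_spacing_wl=1.0):
    """Rough number of slot layers needed for a target elevation gain,
    using g_e ~ N * d / lambda (Eq. 5.2.26). Illustrative estimate only."""
    return math.ceil(target_gain / layer_spacing_wl)

# Gains of 20 to 40 therefore imply on the order of 20 to 40 layers.
```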
By varying the number of slots around the periphery of the cylinder, directional azimuthal
patterns can be achieved. The use of fins along the edges of the slots has been found to provide
some control over the horizontal pattern.
The ability to shape the azimuthal field of the slotted cylinder is somewhat restricted. Instead,
arrays of panels have been utilized, such as the zigzag antenna [4]. In this antenna, the vertical
component of the current along the zigzag wire is mostly cancelled out, and the antenna can be
considered as an array of dipoles. Several such panels can be mounted around a polygonal
periphery, and the azimuthal pattern can be shaped by the proper selection of the feed currents to
the various panels.
two crossed dipoles or a pair of horizontal and vertical dipoles are used [6]. A variety of cavity-
backed crossed-dipole radiators are also utilized for circular polarization transmission.
The azimuthal pattern of each panel antenna is unidirectional, and three or four such panels
are mounted on the sides of a triangular or square tower to achieve omnidirectional azimuthal
patterns. The circularity of the azimuthal pattern is a function of the support tower size [7].
Calculated circularities of these antennas are shown in Table 5.3.1 for idealized panels. The
panels can be fed in-phase, with each one centered on the face of the tower, or fed in rotating
phase with proper mechanical offset on the tower face. In the latter case, the input impedance
match is far better.
Directionalization of the azimuthal pattern is realized by proper distribution of the feed cur-
rents to the individual panels in the same layer. Stacking of the layers provides gains comparable
with those of top-mounted antennas.
The main drawbacks of panel antennas are high wind-load, complex feed system inside the
antenna, and the restriction on the size of the tower face in order to achieve smooth omnidirec-
tional patterns. However, such devices provide an acceptable solution for vertical stacking of sev-
eral antennas or where installation considerations are paramount.
Figure 5.3.6 Various antenna designs for VHF and UHF broadcasting. Note that not all these
designs have found commercial use.
at each of the corners of the tower and oriented such that the main beam is along the normal to
the radius through the center of the tower. The resultant azimuthal pattern is usually acceptable
for horizontal polarization transmission.
Figure 5.3.6 summarizes the most common TV antenna technologies for VHF and UHF
applications.
Bandwidth
The ability to combine multiple channels in a single transmission system depends upon the band-
width capabilities of the antenna and waveguide or coax. The antenna must have the necessary
bandwidth in both pattern and impedance (VSWR). It is possible to design an antenna system for
low power applications using coaxial transmission line that provides whole-band capability. For
high power systems, waveguide bandwidth sets the limits of channel separation.
Figure 5.3.8 Measured antenna patterns for three types of panel configurations at various
operating frequencies: (a) 5 panels per bay, (b) 6 panels per bay, (c) 8 panels per bay.
Antenna pattern performance is not a significant limiting factor. As frequency increases, the
horizontal pattern circularity deteriorates, but this effect is generally acceptable, given the pri-
mary project objectives. Also, the electrical aperture increases with frequency, which narrows
the vertical pattern beamwidth. If a high gain antenna were used over a wide bandwidth, the
increase in electrical aperture might make the vertical pattern beamwidth unacceptably narrow.
This is, however, usually not a problem because of the channel limits set by the waveguide.
Horizontal Pattern
Because of the physical design of a broadband panel antenna, the cross-section is larger than the
typical narrowband pole antenna. Therefore, as the operating frequencies approach the high end
of the UHF band, the circularity (average circle to minimum or maximum ratio) of an omnidirec-
tional broadband antenna generally deteriorates.
Improved circularity is possible by arranging additional panels around the supporting struc-
ture. Previous installations have used 5, 6, and 8 panels per bay. These are illustrated in Figure
5.3.8 along with measured patterns at different operating channels. These approaches are often
required for power handling considerations, especially when three or four transmitting channels
are involved.
The flexibility of the panel antenna allows directional patterns of unlimited variety. Two of
the more common applications are shown in Figure 5.3.9. The peanut and cardioid types are
often constructed on square support spines (as indicated). A cardioid pattern can also be pro-
duced by side-mounting on a triangular tower. Different horizontal radiation patterns for each
channel can also be provided, as indicated in Figure 5.3.10. This is accomplished by changing the
power and/or phase to some of the panels in the antenna with frequency.
Most of these antenna configurations are also possible using a circularly-polarized panel. If
desired, the panel can be adjusted for elliptical polarization, with the vertical elements receiving
Figure 5.3.9 Common directional antenna patterns: (a) peanut, (b) cardioid.
• Constant impedance—designs that consist of two identical filters placed between two
hybrids.
• Starpoint—designs that consist of single bandpass filters phased into a common output tee.
• Resonant loop—types that utilize two coaxial lines placed between two hybrids; the coaxial
lines are of a calculated length.
• Common line—types that use a combination of band-stop filters matched into a common out-
put tee.
The dual-mode channel combiner is a device that permits a single transmission line on a
tower to feed two separate antennas. Dual-mode channel combining is the process by which two
channels are combined within the same waveguide, but in separate orthogonal modes of propa-
gation [14].
The device combines two different television channels from separate coaxial feedlines into a
common circular waveguide. Within the circular waveguide, one channel propagates in the TE11
mode while the other channel propagates in the TM01 mode. The dual-mode channel combiner is
reciprocal and, therefore, also may be used to efficiently separate two TE11/TM01 mode-isolated
channels that are propagating within the same circular waveguide into two separate coaxial lines.
This provides a convenient method for combining at the transmitters and splitting at the anten-
nas. The operating principles of the dual-mode channel combiner are described in [14].
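The mode isolation described above depends on the TE11 and TM01 modes having different cutoff frequencies in the same circular guide. As a sketch of that relationship (not taken from [14]; the guide radius is a hypothetical value), the cutoffs follow from the standard Bessel-function roots:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def cutoff_frequency_hz(radius_m: float, chi: float) -> float:
    """Cutoff frequency of a circular-waveguide mode whose Bessel root is chi."""
    return C * chi / (2.0 * math.pi * radius_m)

# Bessel-function roots that set the cutoffs of the two modes used
# by the dual-mode combiner.
CHI_TE11 = 1.8412  # first root of J1'(x) -> TE11 (dominant mode)
CHI_TM01 = 2.4048  # first root of J0(x)  -> TM01

radius = 0.30  # hypothetical guide radius, m
f_te11 = cutoff_frequency_hz(radius, CHI_TE11)
f_tm01 = cutoff_frequency_hz(radius, CHI_TM01)
print(f"TE11 cutoff: {f_te11 / 1e6:.1f} MHz")
print(f"TM01 cutoff: {f_tm01 / 1e6:.1f} MHz")
```

Both channels must lie above the higher (TM01) cutoff for the combiner to work, which is why the technique suits UHF channels carried in a guide of suitable diameter.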
5.3.5 References
1. Kraus, J. D.: Antennas, McGraw-Hill, New York, N.Y., 1950.
2. Moreno, T.: Microwave Transmission Design Data, Dover, New York, N.Y.
3. Johnson, R. C., and H. Jasik: Antenna Engineering Handbook, 2d ed., McGraw-Hill, New
York, N.Y., 1984.
4. Clark, R. N., and N. A. L. Davidson: “The V-Z Panel as a Side Mounted Antenna,” IEEE
Trans. Broadcasting, vol. BC-13, no. 1, pp. 3–136, January 1967.
5. Brawn, D. A., and B. F. Kellom: “Butterfly VHF Panel Antenna,” RCA Broadcast News,
vol. 138, pp. 8–12, March 1968.
6. DeVito, G. G., and L. Mania: “Improved Dipole Panel for Circular Polarization,” IEEE
Trans. Broadcasting, vol. BC-28, no. 2, pp. 65–72, June 1982.
7. Perini, J.: “Improvement of Pattern Circularity of Panel Antenna Mounted on Large Tow-
ers,” IEEE Trans. Broadcasting, vol. BC-14, no. 1, pp. 33–40, March 1968.
8. “Predicting Characteristics of Multiple Antenna Arrays,” RCA Broadcast News, vol. 97, pp.
63–68, October 1957.
9. Hill, P. C. J.: “Measurements of Reradiation from Lattice Masts at VHF,” Proc. IEEE, vol.
111, no. 12, pp. 1957–1968, December 1964.
10. “WBAL, WJZ and WMAR Build World’s First Three-Antenna Candelabra,” RCA Broad-
cast News, vol. 106, pp. 30–35, December 1959.
11. Knight, P.: “Reradiation from Masts and Similar Objects at Radio Frequencies,” Proc.
IEEE, vol. 114, pp. 30–42, January 1967.
12. Siukola, M. S.: “Size and Performance Trade Off Characteristics of Horizontally and Cir-
cularly Polarized TV Antennas,” IEEE Trans. Broadcasting, vol. BC-23, no. 1, March
1976.
13. Heymans, Dennis: “Channel Combining in an NTSC/ATV Environment,” Proceedings of
the 1996 NAB Broadcast Engineering Conference, National Association of Broadcasters,
Washington, D.C., pg. 165, 1996.
14. Smith, Paul D.: “New Channel Combining Devices for DTV,” Proceedings of the 1997 NAB
Broadcast Engineering Conference, National Association of Broadcasters, Washington,
D.C., pg. 218, 1997.
15. Bendov, Oded: “Coverage Contour Optimization of HDTV and NTSC Antennas,” Proceed-
ings of the 1996 NAB Broadcast Engineering Conference, National Association of Broad-
casters, Washington, D.C., pg. 69, 1996.
16. Plonka, Robert J.: “Can ATV Coverage Be Improved With Circular, Elliptical, or Vertical
Polarized Antennas?” Proceedings of the 1996 NAB Broadcast Engineering Conference,
National Association of Broadcasters, Washington, D.C., pg. 155, 1996.
5.3.6 Bibliography
Fisk, R. E., and J. A. Donovan: “A New CP Antenna for Television Broadcast Service,” IEEE
Trans. Broadcasting, vol. BC-22, no. 3, pp. 91–96, September 1976.
Johns, M. R., and M. A. Ralston: “The First Candelabra for Circularly Polarized Broadcast
Antennas,” IEEE Trans. Broadcasting, vol. BC-27, no. 4, pp. 77–82, December 1981.
Siukola, M. S.: “The Traveling Wave VHF Television Transmitting Antenna,” IRE Trans. Broad-
casting, vol. BTR-3, no. 2, pp. 49–58, October 1957.
Wescott, H. H.: “A Closer Look at the Sutro Tower Antenna Systems,” RCA Broadcast News,
vol. 152, pp. 35–41, February 1944.
Chapter
5.4
Tower Construction and Maintenance
5.4.1 Introduction
The transmitting tower is the most visible component of any broadcast transmission facility. It is
also the component most vulnerable to hostile elements. The basic task of the station engineer is
to keep the tower erect in the face of natural and man-made forces until the structure is no longer
needed. The most common reasons for tower failure are poor construction, poor maintenance,
overloading, icing, and accidents.
types of marking and lighting systems is desired. With respect to telecommunications towers, the
most common option approved by the FAA is the substitution of white flashing lights for a com-
bination of red lights and painting. Any preferences or requests for deviation from standards
must be submitted to the FAA regional office that services the area where the structure would be
located. The FAA regional office then conducts an aeronautical study of the safety of the struc-
ture and considers the proposed deviations or preferences in conducting its analysis.
Where the FAA approves the substitution of high intensity white lights for a combination of
red lights and painting, and the antenna tower is located in a residential neighborhood, the Com-
mission requires the applicant to prepare an environmental assessment [47 CFR § 1.1307(a)(8)].
The Commission, upon review of the environmental assessment, may determine that the pro-
posed substitution of high intensity white lights would not have a significant impact, and may
process the application without further review [47 CFR § 1.1308(d)]. If, however, based upon a
review of the environmental assessment, the Commission determines that the proposed high
intensity lights would have a significant environmental impact upon the human environment, the
Commission will inform the applicant. The applicant will have the opportunity to amend its
application to eliminate the environmental problem. If the problem is not eliminated, the Com-
mission will publish in the Federal Register a Notice of Intent that an Environmental Impact
Statement be prepared [47 CFR § 1.1308(c)]. The Commission may, to assist in the preparation
of an Environmental Impact Statement, request further information from the applicant, interested
persons, and other agencies or authorities. The Commission may also direct that objections to the
proposed lighting be raised with the appropriate state or local authorities [47 CFR § 1.1314(d)].
As part of its aeronautical study, the FAA may—if it considers it necessary—solicit comments from or convene a meeting of all interested persons for the purpose of gathering all facts
relevant to the effect of the proposed construction on the safe and efficient utilization of the nav-
igable airspace. (See 14 CFR §§ 77.35, 77.41–77.69.) The FAA regional office forwards its rec-
ommendation to FAA headquarters in Washington for final approval. The final FAA
determination also must be submitted to the FCC with any antenna construction permit applica-
tion that requires FAA notification. These structures are subject to inspection and enforcement of
marking and lighting requirements by the FCC.
Examples of various tower lighting options are given in Figure 5.4.1.
Figure 5.4.1 Common approaches to tower obstruction marking and lighting. (After [1].)
5.4.3 Ice
Tower owners have long known that atmospheric icing of transmission towers can cause prob-
lems ranging in severity from transmission pattern distortion to complete tower collapse. Ice
forming between antenna radiating elements can cause electrical shorting and equipment burn-
out. Ice can stretch guy lines. Also, towers near populated areas are subject to the added liability
of falling ice, which threatens lives and surrounding property.
There are two recognized sources of ice accretion. The first is in-cloud icing, in which super-
cooled water droplets float in the air and contact a surface because of air movement. The second
is precipitational icing, where the droplets are massive enough to fall from the atmosphere onto
the tower structure.
These two sources form three types of ice, as illustrated in Figure 5.4.2. Glaze ice is usually the
product of freezing rain or of airborne spray from nearby bodies of water. It forms at relatively
high temperatures (0°C to −3°C) as a tightly bonded, clear, dense, glass-like coating on exposed
surfaces. This type of icing is the most serious threat to structures because of its density and
the large additional loads it may impart.
Rime, a fluffy white ice, forms more frequently than glaze in mountainous areas. Rime ice
varies from “soft” to “hard” depending on its density, clarity, and crystal structure. Soft rime
forms at low temperatures (−5°C to −25°C) and low wind speeds. The impinging droplets freeze
Heating
The only totally effective anti-icing method commonly available is heating, and it is the method
of choice for most station owners. Given the large power demands, heating is—in general—used
only to prevent icing of the radiating elements of FM and TV antennas. Heating units are factory-built into antennas and must be activated before icing can begin. These low-wattage heaters
usually cannot keep up with the accretion rate if ice is allowed to accumulate appreciably before
the heaters are activated. Some station operators manually activate heaters based on the local
weather forecast or individual judgment. Others prefer the more cautious alternative of operating
de-icers for the entire season. A third alternative is to provide for automatic activation via ther-
mal, precipitation, and/or icing sensors.
5.4.4 References
1. Lehtinen, Rick: “Hardening Towers,” Broadcast Engineering, Overland Park, Kan., pp.
94–106, March 1990.
5.4.5 Bibliography
Bishop, Don: “How the FCC Decides Whether Your Tower is OK,” Mobile Radio Technology,
Intertec Publishing, Overland Park, Kan., April 1988.
Bishop, Don: “Tower Maintenance,” Mobile Radio Technology, Intertec Publishing, Overland
Park, Kan., April 1988.
Bishop, Don: “When to Use Strobes on Communications Towers,” Mobile Radio Technology,
Intertec Publishing, Overland Park, Kan., March 1989.
Mulherin, Nathan D.: “Atmospheric Icing on Communication Masts in New England.” CRREL
Report 86-17, U.S. Army Cold Regions Research and Engineering Laboratory, December
1986.
Chapter
5.5
Tower Grounding
5.5.1 Introduction
The attention given to the design and installation of a tower ground system is a key element in
the day-to-day reliability of the transmission plant. A well-designed and -installed ground net-
work is invisible to the engineering staff. A marginal ground system, however, will cause prob-
lems on a regular basis. Grounding schemes can range from simple to complex, but any system
serves three primary purposes:
• Provides for operator safety.
• Protects electronic equipment from damage caused by transient disturbances.
• Diverts stray radio frequency energy from sensitive audio, video, control, and computer
equipment.
Any ground system consists of two key elements: 1) the earth-to-grounding electrode inter-
face, and 2) the RF, ac power, and signal-wiring systems.
the earth. It is used for establishing and maintaining the potential of the earth (or of the conduct-
ing body), or approximately that potential, on conductors connected to it, and for conducting
ground current to and from the earth (or the conducting body).” [2] Based on this definition, the
reasons for grounding can be identified as:
• Personnel safety by limiting potentials between all non-current-carrying metal parts of an
electrical distribution system.
• Personnel safety and control of electrostatic discharge (ESD) by limiting potentials between
all non-current-carrying metal parts of an electrical distribution system and the earth.
• Fault isolation and equipment safety by providing a low-impedance fault return path to the
power source to facilitate the operation of overcurrent devices during a ground fault.
The IEEE definition makes an important distinction between ground and earth. Earth refers
to mother earth, and ground refers to the equipment grounding system, which includes equip-
ment grounding conductors, metallic raceways, cable armor, enclosures, cabinets, frames, build-
ing steel, and all other non-current-carrying metal parts of the electrical distribution system.
There are other reasons for grounding not implicit in the IEEE definition. Overvoltage con-
trol has long been a benefit of proper power system grounding, and is described in IEEE Stan-
dard 142, also known as the Green Book [3]. With the increasing use of electronic computer
systems, noise control has become associated with the subject of grounding, and is described in
IEEE Standard 1100, the Emerald Book [4].
a given volume of soil and measuring the resulting voltage drop. When soil resistivity is known,
the earth electrode resistance of any given configuration (single rod, multiple rods, or ground
ring) can be determined by using standard equations developed by Sunde [7], Schwarz [8], and
others.
Earth resistance values should be as low as practicable, but are a function of the application.
The NEC approves the use of a single made electrode if the earth resistance does not exceed 25
Ω. Methods of reducing earth resistance values include the use of multiple electrodes in parallel,
the use of ground rings, increased ground rod lengths, installation of ground rods to the perma-
nent water level, increased area of coverage of ground rings, and the use of concrete-encased
electrodes, ground wells, and electrolytic electrodes.
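As a rough illustration of how soil resistivity translates into earth electrode resistance, the following Python sketch uses the classic Dwight/Sunde approximation for a single vertical rod. The rod dimensions and the 100 Ω·m soil value are hypothetical, and the simple division for multiple rods assumes wide separation (it ignores mutual coupling between closely spaced rods):

```python
import math

def rod_resistance_ohms(rho_ohm_m: float, length_m: float, radius_m: float) -> float:
    """Approximate earth resistance of a single vertical ground rod
    (Dwight/Sunde formula): R = rho / (2*pi*L) * (ln(4L/a) - 1)."""
    return rho_ohm_m / (2.0 * math.pi * length_m) * (
        math.log(4.0 * length_m / radius_m) - 1.0)

# Hypothetical case: 10-ft (3.05 m), 5/8-in.-diameter rod in 100 ohm-m soil.
rho = 100.0   # soil resistivity, ohm-m
L = 3.05      # rod length, m
a = 0.0079    # rod radius, m
r_single = rod_resistance_ohms(rho, L, a)
print(f"Single rod: {r_single:.1f} ohms")

# Well-separated rods (spacing of at least one rod length) combine roughly
# in parallel; closer spacing yields a higher value than the ideal quotient.
n = 4
print(f"{n} well-separated rods: ~{r_single / n:.1f} ohms")
```

Note how a single rod in this moderate soil misses the 25 Ω NEC figure, while paralleling a few rods comfortably meets it, which is consistent with the mitigation methods listed above.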
Figure 5.5.3 Charted grounding resistance as a function of ground-rod length: (a) data demon-
strating that ground-rod length in excess of 10 ft produces diminishing returns (1-in.-diameter rod)
[10], (b) data demonstrating that ground system performance continues to improve as depth
increases. (Chart b from [1]. Used with permission.)
ground to the horizontal rod. Taken by itself, the horizontal ground rod provides an earth-interface resistance of approximately 308 Ω when buried at a depth of 36 in.
Ground rods come in many sizes and lengths. The more popular sizes are 1/2, 5/8, 3/4, and 1
in. The 1/2-in. size is available in steel with stainless-clad, galvanized, or copper-clad rods. All-
stainless-steel rods also are available. Ground rods can be purchased in unthreaded or threaded
(sectional) lengths. The sectional sizes are typically 9/16-in. or 1/2-in. rolled threads. Couplers
are made from the same materials as the rods. Couplers can be used to join 8- or 10-ft-length
rods together. A 40-ft ground rod, for example, is driven one 10-ft section at a time.
The type and size of ground rod used is determined by how many sections are to be connected
and how hard or rocky the soil is. Copper-clad 5/8-in. × 10-ft rods are probably the most popular.
Copper cladding is intended primarily to prevent rust, not to provide better conductivity. Although the copper certainly provides a better conductor interface to earth, the steel
Figure 5.5.4 The effectiveness of vertical ground rods compared with horizontal ground rods.
(After [10].)
that it covers is also an excellent conductor when compared with ground conductivity. The thick-
ness of the cladding is important only insofar as rust protection is concerned.
Wide variations in soil resistivity can be found within a given geographic area, as documented
in Table 5.5.1. The wide range of values shown results from differences in moisture content,
mineral content, and temperature.
Temperature is a major concern in shallow grounding systems because it has a significant
effect on soil resistivity [11]. During winter months, the ground system resistance can rise to
unacceptable levels because of the freezing of liquid water in the soil. The same shallow ground-
ing system can also suffer from high resistance in the summer as moisture is evaporated from
soil. It is advisable to determine the natural frost line and moisture profile for an area before
attempting design of a ground system.
Figure 5.5.5 describes a four-point method for in-place measurement of soil resistivity. Four
uniformly spaced probes are placed in a linear arrangement and connected to a ground resistance
test meter. An alternating current (at a frequency other than 60 Hz) is passed between the two
most distant probes, resulting in a potential difference between the center potential probes. The
meter display in ohms of resistance can then be applied to determine the average soil resistivity
in ohm-centimeters for the hemispherical area between the C1 and P2 probes.
Soil resistivity measurements should be repeated at a number of locations to establish a resis-
tivity profile for the site. The depth of measurement can be controlled by varying the spacing
between the probes. In no case should the probe length exceed 20 percent of the spacing between
probes.
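Under the common simplification that probe depth is small relative to probe spacing, the meter reading converts to average resistivity as ρ = 2πaR. A minimal Python sketch of the conversion (the 10-ft spacing and 5.2 Ω reading are hypothetical values, not data from the text):

```python
import math

def wenner_resistivity_ohm_cm(spacing_cm: float, meter_ohms: float) -> float:
    """Average soil resistivity from a four-point (Wenner) measurement,
    valid when probe depth is small compared with probe spacing:
    rho = 2 * pi * a * R."""
    return 2.0 * math.pi * spacing_cm * meter_ohms

# Hypothetical reading: probes 10 ft (305 cm) apart, meter indicates 5.2 ohms.
rho = wenner_resistivity_ohm_cm(305.0, 5.2)
print(f"Average soil resistivity: {rho:.0f} ohm-cm")
```

Repeating the calculation at several probe spacings gives the resistivity-versus-depth profile described above.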
Figure 5.5.5 The four-point method for soil resistivity measurement. (From [11]. Used with permis-
sion.)
After the soil resistivity for a site is known, calculations can be made to determine the effec-
tiveness of a variety of ground system configurations. Equations for several driven rod and radial
cable configurations are given in [11], which—after the soil resistivity is known—can be used to estimate total system resistance. Generally, driven rod systems are appropriate where soil resistivity continues to improve with depth or where temperature extremes indicate
seasonal frozen or dry soil conditions. Figure 5.5.6 shows a typical soil resistivity map for the
U.S.
Figure 5.5.6 Typical soil resistivity map for the U.S. (From [11]. Used with permission.)
Figure 5.5.7 The effect of soil salting on ground-rod resistance with time. The expected resalting
period, shown here as two years, varies depending on the local soil conditions and the amount of
moisture present. (After [10].)
Figure 5.5.8 An air-breathing chemically activated ground rod: (a) breather holes at the top of the
device permit moisture penetration into the chemical charge section of the rod, (b) a salt solution
seeps out of the bottom of the unit to form a conductive shell. (After [10].)
Figure 5.5.9 An alternative approach to the chemically activated ground rod. Multiple holes are
provided on the ground-rod assembly to increase the effective earth-to-electrode interface. Note
that chemical rods can be produced in a variety of configurations. (After [10].)
shows a counterpoise system made up of individual chemical ground rods interconnected with
radial wires buried below the surface.
Figure 5.5.11 Hub and spoke counterpoise ground system. (After [10].)
Figure 5.5.12 Tower grounding scheme using buried copper radials and chemical ground rods.
(After [10].)
The Ufer approach (named for its developer), however, must be designed into a new structure. It
cannot be added on later. The Ufer ground takes advantage of the natural chemical- and water-
retention properties of concrete to provide an earth ground. Concrete typically retains moisture
for 15 to 30 days after a rain. The material has a ready supply of ions to conduct current because
of its moisture-retention properties, mineral content, and inherent pH. The large mass of any
concrete foundation provides a good interface to ground.
A Ufer system, in its simplest form, is made by routing a solid-copper wire (no. 4 gauge or
larger) within the foundation footing forms before concrete is poured. Figure 5.5.13 shows one
such installation. The length of the conductor run within the concrete is important. Typically a
20-ft run (10 ft in each direction) provides a 5 Ω ground in 1000 Ω/m soil.
The last variable is a bit of a gamble. The median (50 percent occurrence) lightning strike current is perhaps 18 kA, but superstrikes approaching 100 to 200 kA can occur.
Before implementing a Ufer ground system, consult a qualified contractor. Because the Ufer
ground system will be the primary grounding element for the facility, it must be done correctly.
Figure 5.5.21 Top view of recommended guy-anchor grounding technique. (After [9].)
downward from the lower side of each guy wire after connecting to each wire. To ensure that no
arcing will occur through the turnbuckle, a connection from the anchor plate to the perimeter
ground circle is recommended (use no. 2 gauge copper wire). This helps minimize the unavoid-
able inductance created by the conductor being in the air. Interconnect leads that are suspended
in air must be dressed so that no bending radius is less than 8 in.
A perimeter ground—a circle of wire connected at several points to ground rods driven into
the earth—should be installed around each guy-anchor point. The perimeter system provides a
good ground for the anchor, and when tied together with the tower base radials, acts to rapidly
dissipate lightning energy in the event of a flash. Tower base radials are buried wires intercon-
nected with the tower base ground that extend away from the center point of the structure.
The required depth of the perimeter ground and the radials depends upon soil conductivity.
Generally speaking, however, about 8 in. below grade is sufficient. In soil with good conductiv-
ity, the perimeter wire may be as small as no. 10 gauge. Because no. 2 gauge is required for the
segment of conductor suspended in air, it may be easier to use no. 2 throughout. An added advan-
tage is that the same size Cadweld molds may be used for all bonds.
Figure 5.5.23 Interconnecting the metal structures of a facility to the ground system. (After [9].)
Figure 5.5.24 Protection methods for personnel at an exposed site. (From [11]. Used with permis-
sion.)
connection. Hazardous voltage may exist between the power-line neutral and any point at earth
potential.
Do not remove any existing earth ground connections to power-line neutral, particularly if
they are installed by the power company. To do so may violate the local electrical code. The goal
of this interconnection is to minimize noise that may be present on the neutral, and to conduct
this noise as directly as possible outside to earth ground.
vehicle outside the gate, unlock and open the gate, and move the vehicle into the inside yard. The
technician would then leave the vehicle, and enter the building.
The threat of a direct lightning strike to the technician has been minimized by establishing a
protective zone over the areas to be traversed. This zone is created by the tower and air terminals
mounted atop light poles.
Step potentials are minimized through the use of a ground mat buried just below the surface
of the area where the technician is expected to be outside the vehicle. Ground mats are commer-
cially available, fabricated in a 6-in. × 6-in. square pattern using no. 8 AWG bare copper wire.
Each intersection is welded, creating—for all practical purposes—an equipotential area that
short-circuits the step potential gradient in the area above the mat. The mat, as a whole, will rise
and fall in potential as a result of the lightning current discharges; however, there will be very lit-
tle difference in potential between the technician's feet. Mats should be covered with six inches
of crushed stone or pavement.
The threat of dangerous touch potentials is minimized by bonding the following elements to
the ground system:
• Personnel ground mat
• Fence at each side of the gate opening
• Door frame of the transmitter building
• Flexible bonding connection between the swing gate and its terminal post
Such bonding will ensure that the object being touched by the technician is at or near the
same potential as his or her feet.
Bonding both sides of the gate opening to the mat helps to ensure that the technician and both
sides of the gate are at approximately the same potential while the gate is being handled. The
flexible bond between the gate and its support post can be accomplished using a commercially
available kit or by Cadwelding a short length of flexible 2/0 AWG welding cable between the two
elements.
ences significant rainfall before a lightning flash, protection will be enhanced. The worst case,
however, must be assumed: an early strike under dry conditions.
The surge impedance, measured by a dynamic ground tester, should be 25 Ω or less. This
upper-limit number is chosen so that less stress will be placed on the equipment and its surge
protectors. With an 18 kA strike to a 25 Ω ground system, the entire system will rise 450 kV
above the rest of the world at peak current. This voltage has the potential to jump almost 15.75
in. (0.35 in./10 kV at standard atmospheric conditions of 25°C, 30 in. of mercury, and 50 percent
relative humidity).
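The arithmetic above can be checked with a short Python sketch; the 18 kA strike current, 25 Ω ground impedance, and 0.35 in./10 kV breakdown figure are the values given in the text:

```python
def ground_potential_rise_kv(strike_ka: float, ground_ohms: float) -> float:
    """Peak potential rise of the ground system above remote earth: V = I * R."""
    return strike_ka * ground_ohms

def arc_distance_in(rise_kv: float, in_per_10kv: float = 0.35) -> float:
    """Approximate air-gap flashover distance using the rule-of-thumb
    breakdown figure from the text (0.35 in. per 10 kV)."""
    return rise_kv / 10.0 * in_per_10kv

rise = ground_potential_rise_kv(18.0, 25.0)  # 18 kA into a 25-ohm ground
print(f"Potential rise: {rise:.0f} kV")      # 450 kV
print(f"Arc distance: {arc_distance_in(rise):.2f} in.")  # 15.75 in.
```

The same relations show why the 25 Ω upper limit matters: halving the ground impedance halves both the potential rise and the distance over which an arc can jump.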
For nonsoil conditions, tower anchor points should have their own radial systems or be encap-
sulated in concrete. Configure the encapsulation to provide at least 3 in. of concrete on all sides
around the embedded conductor. The length will depend on the size of the embedded conductor.
Rebar should extend as far as possible into the concrete. The dynamic ground impedance mea-
surements of the anchor grounds should each be less than 25 Ω.
The size of the bare conductor for each tower radial (or for an interconnecting wire) will vary,
depending on soil conditions. On rock, a bare no. 1/0 or larger wire is recommended. Flat, solid-
copper strap would be better, but may be blown or ripped if not covered with soil. If some
amount of soil is available, no. 6 cable should be sufficient. Make the interconnecting radial
wires continuous, and bury them as deep as possible; however, the first 6 to 10 in. will have the
most benefit. Going below 18 in. will not be cost-effective, unless in a dry, sandy soil where the
water table can be reached and ground-rod penetration is shallow. If only a small amount of soil
exists, use it to cover the radials to the extent possible. It is more important to cover radials in the
area near the tower than at greater distances. If, however, soil exists only at the outer distances
and cannot be transported to the inner locations, use the soil to cover the outer portions of the
radials.
If soil is present, install ground rods along the radial lengths. Spacing between ground rods is
affected by the depth that each rod is driven; the shallower the rod, the closer the allowed spac-
ing. Because the ultimate depth a rod can be driven cannot always be predicted by the first rod
driven, use a maximum spacing of 15 ft when selecting a location for each additional rod. Drive
rods at building corners first (within 24 in. but not closer than 6 in. to a concrete footer unless
that footer has an encapsulated Ufer ground), then fill in the space between the corners with
additional rods.
Drive the ground rods in place; do not auger a hole, set the rod in place, and then back-fill. The soil compactness is never as great around rods set in augered holes as it is around driven rods. The only exception is when a hole is augered or blasted for a ground rod or rebar and then back-filled with concrete.
Because concrete contains lime (alkali base) and is porous, it absorbs moisture readily, giving it
up slowly. Electron carriers are almost always present, making the substance a relatively good
conductor.
If a Ufer ground is not being implemented, the radials may be Cadwelded to a subterranean
ring, with the ring interconnected to the tower foot pad via a minimum of three no. 1/0 wires
spaced at 120° angles and Cadwelded to the radial ring.
Figure 5.5.25 Transmission-line mounting and grounding procedures for a communications site.
diameter pipe, submerged 4 to 5 ft in an 18-in.-diameter (augered) hole will provide a good start
for a Ufer-based ground system. It should be noted that an augered hole is preferred because dig-
ging and repacking the soil around the pier will create higher ground resistance. In areas of good
soil conductivity (100 Ω/m or less), this basic Ufer system may be adequate for the antenna
ground.
Figure 5.5.26 shows the preferred method: a hybrid Ufer/ground-rod and radial system. A
cable connects the mounting pipe (Ufer ground) to a separate driven ground rod. The cable then
is connected to the facility ground system. In areas of poor soil conductivity, additional ground
rods are driven at increments (2.2 times the rod length) between the satellite dish and the facility
ground system. Run all cables underground for best performance. Make the interconnecting cop-
per wire no. 10 size or larger; bury the wire at least 8 in. below finished grade. Figure 5.5.27
shows the addition of a lightning rod to the satellite dish.
Figure 5.5.27 Addition of a lightning rod to a satellite antenna ground system. (After [9].)
5.5.10 References
1. Webster's New Collegiate Dictionary.
2. IEEE Standard 100: Definitions of Electrical and Electronic Terms, IEEE, New York, N.Y.
3. IEEE Standard 142: “Recommended Practice for Grounding Industrial and Commercial
Power Systems,” IEEE, New York, N.Y., 1982.
4. IEEE Standard 1100: “Recommended Practice for Powering and Grounding Sensitive Elec-
tronics Equipment,” IEEE, New York, N.Y., 1992.
5. DeWitt, William E.: “Facility Grounding Practices,” in The Electronics Handbook, Jerry C.
Whitaker (ed.), CRC Press, Boca Raton, Fla., pp. 2218–2228, 1996.
6. NFPA Standard 70: “The National Electrical Code,” National Fire Protection Association,
Quincy, Mass., 1993.
7. Sunde, E. D.: Earth Conduction Effects in Transmission Systems, Van Nostrand Co., New
York, N.Y., 1949.
8. Schwarz, S. J.: “Analytical Expression for Resistance of Grounding Systems,” AIEE Trans-
actions, vol. 73, Part III-B, pp. 1011–1016, 1954.
9. Block, Roger: “The Grounds for Lightning and EMP Protection,” PolyPhaser Corporation,
Gardnerville, Nev., 1987.
10. Carpenter, Roy, B.: “Improved Grounding Methods for Broadcasters,” Proceedings, SBE
National Convention, Society of Broadcast Engineers, Indianapolis, IN, 1987.
11. Lobnitz, Edward A.: “Lightning Protection for Tower Structures,” in NAB Engineering
Handbook, 9th ed., Jerry C. Whitaker (ed.), National Association of Broadcasters, Wash-
ington, D.C., 1998.
12. DeDad, John A., (ed.): “Basic Facility Requirements,” in Practical Guide to Power Distri-
bution for Information Technology Equipment, PRIMEDIA Intertec, Overland Park, Kan.,
pg. 24, 1997.
13. Military Handbook 419A: “Grounding, Bonding, and Shielding for Electronic Systems,”
U.S. Government Printing Office, Philadelphia, PA, December 1987.
14. IEEE 142 (Green Book): “Grounding Practices for Electrical Systems,” IEEE, New York,
N.Y.
5.5.11 Bibliography
Benson, K. B., and Jerry C. Whitaker: Television and Audio Handbook for Engineers and Tech-
nicians, McGraw-Hill, New York, N.Y., 1989.
Block, Roger: “How to Ground Guy Anchors and Install Bulkhead Panels,” Mobile Radio Tech-
nology, PRIMEDIA Intertec, Overland Park, Kan., February 1986.
Davis, Gary, and Ralph Jones: Sound Reinforcement Handbook, Yamaha Music Corporation, Hal
Leonard Publishing, Milwaukee, WI, 1987.
Defense Civil Preparedness Agency: “EMP Protection for AM Radio Stations,” Washington,
D.C., TR-61-C, May 1972.
Fardo, S., and D. Patrick: Electrical Power Systems Technology, Prentice-Hall, Englewood Cliffs,
N.J., 1985.
Hill, Mark: “Computer Power Protection,” Broadcast Engineering, PRIMEDIA Intertec, Over-
land Park, Kan., April 1987.
Lanphere, John: “Establishing a Clean Ground,” Sound & Video Contractor, PRIMEDIA Inter-
tec, Overland Park, Kan., August 1987.
Lawrie, Robert: Electrical Systems for Computer Installations, McGraw-Hill, New York, N.Y.,
1988.
Little, Richard: “Surge Tolerance: How Does Your Site Rate?” Mobile Radio Technology, Inter-
tec Publishing, Overland Park, Kan., June 1988.
Midkiff, John: “Choosing the Right Coaxial Cable Hanger,” Mobile Radio Technology, Intertec
Publishing, Overland Park, Kan., April 1988.
Mullinack, Howard G.: “Grounding for Safety and Performance,” Broadcast Engineering, PRI-
MEDIA Intertec, Overland Park, Kan., October 1986.
Schneider, John: “Surge Protection and Grounding Methods for AM Broadcast Transmitter
Sites,” Proceedings, SBE National Convention, Society of Broadcast Engineers, Indianapo-
lis, IN, 1987.
Sullivan, Thomas: “How to Ground Coaxial Cable Feedlines,” Mobile Radio Technology, PRI-
MEDIA Intertec, Overland Park, Kan., April 1988.
Technical Reports LEA-9-1, LEA-0-10, and LEA-1-8, Lightning Elimination Associates, Santa
Fe Springs, Calif.
Chapter 5.6
Lightning Effects
5.6.1 Introduction
Natural phenomena of interest to facility managers consist mainly of lightning and related distur-
bances. The lightning effect can be compared to that of a capacitor, as shown in Figure 5.6.1. A
charged cloud above the earth will create an oppositely charged area below it of about the same
size and shape. When the voltage difference is sufficient to break down the dielectric (air), the
two “plates” of the “capacitor” will arc over and neutralize their respective charges. If the dielec-
tric spacing is reduced, as in the case of a conductive steel structure (such as a transmitting
tower), the arc-over will occur at a lower-than-normal potential, and will travel through the con-
ductive structure.
The typical duration of a lightning flash is approximately 0.5 s. A single flash is made up of
various discharge components, among which are typically three or four high-current pulses
called strokes. Each stroke lasts about 1 ms; the separation between strokes is typically
several tens of milliseconds. Lightning often appears to flicker because the human eye can just
resolve the individual light pulses that are produced by each stroke.
Figure 5.6.1 The lightning effect and how it can be compared to a more familiar mechanism, the
capacitor principle. Also shown are the parameters of a typical lightning strike.
• Solar wind: charged particles from the sun that continuously bombard the surface of the earth.
Because about half of the earth’s surface is always exposed to the sun, variations are experi-
enced from day to night. Solar wind particles travel at only 200 to 500 miles per second, com-
pared with cosmic particles that travel at near the speed of light. Because of their slower
speed, solar wind particles have less of an effect on air atoms and molecules.
• Natural radioactive decay: the natural disintegration of radioactive elements. In the process
of radioactive decay, air molecules are ionized near the surface of the earth. One of the results
is radon gas.
• Static electricity: energy generated by the interaction of moving air and the earth.
• Electromagnetic generation: energy generated by the movement of air molecules through the
magnetic field of the earth.
The combined effects of cosmic rays and solar wind account for most atmospheric electrical
energy.
Atmospheric energy is present at all times, even during clear weather conditions. This energy
takes the form of a voltage differential of 300 to 400 kV between the surface of the earth and the
ionosphere. The voltage gradient is nonlinear; near the surface it may be 150 V/m of elevation,
but it diminishes significantly at higher altitudes. Under normal conditions, the earth is negative
with respect to the ionosphere, and ions flow between the two entities. Because there are fewer
free ions near the earth than near the ionosphere, the voltage gradient is greater near the surface,
where the effective conductivity of the air is lower. This concept is illustrated in Figure 5.6.2.
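As a rough consistency check, the fair-weather figures quoted above can be tied together with a simple model. The exponential decay and its scale height below are illustrative assumptions, not values from the text:

```python
# If the fair-weather field decays exponentially with altitude,
# E(z) = E0 * exp(-z / H), the earth-ionosphere potential integrates to E0 * H.
e0 = 150.0       # V/m near the surface (from the text)
h_scale = 2.3e3  # m, assumed scale height (hypothetical fit, not from the text)
potential = e0 * h_scale   # volts; falls within the 300 to 400 kV range quoted
```

On these assumptions the integrated potential is about 345 kV, consistent with the 300 to 400 kV differential cited above.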
Thermodynamic activity in a developing storm cloud causes it to become a powerfully
charged cell, usually negatively charged on the bottom and positively charged on the top. (See
Figure 5.6.3.) This voltage difference causes a distortion in the voltage gradient and, in fact, the
polarity inverts, with the earth becoming positive with reference to the bottom of the cloud. This
voltage gradient increases to a high value, sometimes exceeding 10 kV/m of elevation. The overall
potential difference between the earth and the cloud may be on the order of 10 to 100 MV, or more. When
sufficient potential difference exists, a lightning flash may occur.
Figure 5.6.4 shows the flash waveform for a typical lightning discharge. The rise time is very
fast, in the microsecond range, as the lightning channel is established. The trailing edge exhibits
a slow decay; the decay curve is known as a reciprocal double exponential waveform. The trail-
ing edge is the result of the resistance of the ionized channel depleting energy from the cloud.
The path length for a lightning discharge is measured in kilometers. The most common source of
lightning is cumulonimbus cloud forms, although other types of clouds (such as nimbostratus)
occasionally can produce activity.
Although most lightning strikes are negative (the bottom of the cloud is negative with respect
to the earth), positive strikes also can occur. Such strikes have been estimated to carry as much as
10 times the current of a negative strike. A positive flash can carry 200 kiloamps (kA) or more of
discharge current. Such “hot strikes,” as they are called, can cause considerable damage. Hot
strikes can occur in the winter, and are often the after-effect of a particularly active storm. After a
number of discharges, the lower negative portion of the cloud will become depleted. When
charged, the lower portion may have functioned as a screen or shield between the earth and the
upper, positively charged portion of the cloud. When depleted, the shield is removed, exposing
the earth to the massive charge in the upper cloud containing potentials of perhaps 500 MV or
more.
The peak current in a severe flash can reach 100 kA, with a total charge as high as 100 coulombs (C). Although averages are difficult to
assess where lightning is concerned, a characteristic flash exhibits a 2 μs rise time, and a 10 to 40
μs decay to a 50 percent level. The peak current will average 18 kA for the first impulse, and
about half that for the second and third impulses. Three to four strokes per flash are common.
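These characteristic parameters can be explored with the double-exponential waveform described earlier. The time constants in this sketch are assumed values, chosen so that the model reproduces approximately the 2 μs rise time and a decay to 50 percent within the quoted 10 to 40 μs window:

```python
import math

def stroke_current(t, i_peak=18e3, tau_rise=0.5e-6, tau_decay=40e-6):
    """Double-exponential return-stroke model (assumed time constants)."""
    # Peak time of the unscaled waveform, found by setting its derivative to zero.
    t_pk = math.log(tau_decay / tau_rise) * tau_rise * tau_decay / (tau_decay - tau_rise)
    raw = lambda x: math.exp(-x / tau_decay) - math.exp(-x / tau_rise)
    return i_peak * raw(t) / raw(t_pk)   # scaled so the waveform peaks at i_peak

# Sample the waveform, then locate the peak and the 50 percent decay point.
ts = [n * 0.1e-6 for n in range(2000)]            # 0 to 200 us in 0.1-us steps
iv = [stroke_current(t) for t in ts]
t_peak = ts[iv.index(max(iv))]
t_half = next(t for t, i in zip(ts, iv) if t > t_peak and i <= 0.5 * 18e3)
```

With these assumed constants, the model peaks near 2 μs and falls to half value at roughly 30 μs, inside the ranges given in the text.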
A lightning flash is a constant-current source. Once ionization occurs, the air becomes a
luminous, conductive plasma reaching 60,000°F. The resistance of an object struck by
lightning is of small consequence except for the power dissipation in that object, which is
equivalent to I²R. Fifty percent of all strikes will have a first discharge of at least 18 kA, 10 percent
will exceed 60 kA, and only one percent will exceed 120 kA.
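The I²R relationship can be illustrated with a rough estimate that treats the stroke as a rectangular pulse. The object resistance here is a hypothetical value chosen for illustration:

```python
# Rectangular-pulse estimate of the energy dissipated in a struck object.
i_peak = 18e3    # A, median first-discharge current from the text
t_eff = 30e-6    # s, effective duration taken from the 10 to 40 us decay range
r_obj = 0.1      # ohm, hypothetical resistance of the struck object
energy = i_peak ** 2 * t_eff * r_obj   # joules: I^2 * R integrated over time
```

On these assumptions the median stroke deposits on the order of 1 kJ in a 0.1-Ω object, which is why low-resistance bonding matters at every joint in the discharge path.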
Four specific types of cloud-to-ground lightning have been identified. They are categorized in
terms of the direction of motion (upward or downward) and the sign of the electric charge (posi-
tive or negative) of the initiating leader. The categories, illustrated in Figure 5.6.5, are defined as
follows:
• Category 1: negative leader cloud-to-ground discharge. By far the most common form of
lightning, such discharges account for 90 percent or more of the cloud-to-ground flashes
worldwide. Such events are initiated by a downward-moving negatively charged leader.
• Category 2: positive leader ground-to-cloud discharge. This event begins with an upward-ini-
tiated flash from earth, generally from a mountaintop or tall steel structure. Category 2 dis-
charges are relatively rare.
• Category 3: positive leader cloud-to-ground discharge. Less than 10 percent of
cloud-to-ground lightning worldwide is of this type. Positive discharges are initiated by lead-
ers that do not exhibit the distinct steps of their negative counterparts. The largest recorded
peak currents are in the 200–300 kA range.
• Category 4: negative leader ground-to-cloud discharge. Relatively rare, this form of lightning
begins with an upward leader that exhibits a negative charge. Similar to Category 2 dis-
charges, Category 4 discharges occur primarily from a mountaintop or tall steel structure.
An idealized lightning flash is shown in Figure 5.6.6. The stepped leader initiates the first
return stroke in a negative cloud-to-ground flash by propagating downward in a series of discrete
steps, as shown. The breakdown process sets the stage for a negative charge to be lowered to the
ground. A fully developed leader lowers 10 C or more of negative cloud charge to near the
ground within a few tens of milliseconds. The average return leader current measures from 100
A to 1 kA. During its trip toward earth, the stepped leader branches in a downward direction,
producing the characteristic lightning discharge.
The electrical potential difference between the bottom of the negatively charged leader chan-
nel and the earth can exhibit a magnitude in excess of 100 MV. As the leader tip nears ground
level, the electric field at sharp objects on the ground increases until the breakdown strength of
the atmosphere is exceeded. At that point, one or more upward-moving discharges are initiated,
and the attachment process begins. The leader channel is discharged when the first return stroke
propagates up the previously ionized and charged leader path. This process will repeat if suffi-
cient potential exists after the initial stroke. The time between successive strokes in a flash is
usually several tens of milliseconds.
Examples of dissipators based on this theory are shown in Figure 5.6.8. Key design elements for such dissipa-
tors include:
• Radius of the dissipator electrode. The purpose of the dissipator is to create a high field inten-
sity surrounding the device. Theory states that the electric field intensity will increase as the
electrode radius is reduced. Dissipators, therefore, generally use the smallest-radius elec-
trodes possible, consistent with structural integrity. There is, however, disagreement among
certain dissipation-array manufacturers on this point. The “optimum wire size,” according to
available literature, varies from 0.005-in.- to 1/8-in.-thick tapered spikes.
• Dissipator construction material. Important qualities include conductivity and durability. The
dissipator should be a good conductor to provide: 1) the maximum discharge of current dur-
ing normal operation, and 2) an efficient path for current flow in the event of a direct strike.
• Number of dissipator electrodes. Calculating the number of dissipator points is, again, the
subject of some debate. However, because the goal of the system is to provide a low-resis-
tance path to the atmosphere, it generally is assumed that the more discharge points, the more
effective the system. Dissipator electrode requirements are determined by the type of struc-
ture being protected as well as the environmental features surrounding it.
• Density of dissipator electrodes. Experimentation by some manufacturers has shown that the
smaller the radius of the dissipator electrodes, the more closely they can be arranged without
reducing the overall efficiency of the dissipator. Although this convention seems reasonable,
disagreement exists among dissipation-array manufacturers. Some say the points should be
close together; others say they should be far apart.
• Configuration of the dissipator on the tower. Disagreement abounds on this point. One school
of thought supports the concept of a dedicated “umbrella-type” structure at the top of the
tower as the most efficient method of protecting against a lightning flash (Figure 5.6.9).
Another view holds that the dissipator need not be at the highest point, and that it may be
more effective if one or more dissipators are placed at natural dissipation points on the struc-
ture. Such points include side-mounted antennas and other sharp elements on the tower.
• Size and deployment of grounding electrodes. Some systems utilize an extensive ground sys-
tem, others do not. One manufacturer specifies a “collector” composed of wire radials
extending from the base of the tower and terminated by ground rods. Another manufacturer
does not require a ground connection to the dissipator.
Available literature indicates that from 10 μA to 10 mA flow through a properly designed dis-
sipative system into the surrounding air during a lightning storm. Figure 5.6.10 charts the dis-
charge current recorded during a period of storm activity at a protected site. Although a lightning
stroke can reach several hundreds of thousands of amperes, this energy flows for a very short
period of time. The concept of the dissipative array is to continuously bleed off space charge cur-
rent to prevent a direct hit.
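The bleed-off concept can be put in perspective with simple arithmetic; the storm duration used here is an assumption:

```python
# Comparing the charge a dissipator might bleed off with the charge in a flash.
bleed_current = 10e-3     # A, upper dissipator current cited in the text
storm_duration = 3600.0   # s, assumed one-hour storm (illustrative)
charge_bled = bleed_current * storm_duration   # coulombs bled over the storm
flash_charge = 100.0      # C, upper flash charge cited earlier in the chapter
```

On these assumptions the array bleeds off 36 C per hour, the same order of magnitude as the charge in a large flash; whether this actually prevents a strike remains a matter of debate.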
Proof that static dissipators work as intended is elusive and depends upon the definition of
“proof.” Empirical proof is difficult to obtain because successful performance of a static dissipa-
tor is evidenced by the absence of any results. Supporting evidence, both pro and con, is avail-
able from end-users of static dissipators, and from those who have studied this method of
reducing the incidence of lightning strikes to a structure.
Protection Area
The placement of a tall structure over low-profile structures tends to protect the facilities near the
ground from lightning flashes. The tall structure, typically a communications tower, is assumed
to shield the facility below it from hits. This cone of protection is determined by the following:
• Height of the tall structure
• Height of the storm cloud above the earth
Figure 5.6.12 Protection zone for a tall tower under the “rolling sphere” theory.
The higher the cloud, the larger the radius of the base of the protecting cone. The ratio of base
radius to height varies from approximately one to two, as illustrated in Figure 5.6.11.
Conventional wisdom has held that a tower, whether protected with a static dissipation array
or simply a tower-top lightning rod, provided a cone of protection stretching out on all sides of
the structure at an angle of about 45°. Although this theory held favor for many years, modifica-
tions have been proposed. One school of thought suggests that a smaller cone of perhaps 30°
from the tower is more realistic. Another suggests that the cone theory is basically flawed and,
instead, proposes a “rolling sphere” approach. This theory states that areas enclosed below a
150-ft rolling sphere will enjoy protection against lightning strikes. The concept is illustrated in
Figure 5.6.12. Note that the top of the tower shown in the figure is assumed to experience limited
protection. The concept, as it applies to side-mounted antennas, is shown in Figure 5.6.13. The
antenna is protected through the addition of two horizontally mounted lightning rods, one above
the antenna and one below.
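The rolling-sphere geometry lends itself to a short calculation. The helper below assumes the textbook construction of a sphere resting on the ground and against the mast top; it is a sketch of the method, not a substitute for a site analysis:

```python
import math

def ground_protection_radius_ft(mast_height_ft, sphere_radius_ft=150.0):
    """Ground-level protection radius under the rolling-sphere construction.

    The sphere rests on the ground and against the mast top; objects beneath
    the arc between the two contact points are considered protected.
    """
    h = min(mast_height_ft, sphere_radius_ft)   # taller masts add no extra reach
    return math.sqrt(h * (2.0 * sphere_radius_ft - h))
```

A 75-ft mast, for example, yields a ground-level protection radius of about 130 ft; a mast as tall as the 150-ft sphere radius protects a ground circle of 150-ft radius.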
Figure 5.6.15 The EMP effect and how it can induce damaging voltages onto utility company lines
and antenna structures. The expected parameters of an EMP event also are shown.
Table 5.6.1 Response of Various Systems to an EMP Event

Type of Conductor                                 Rise Time (s)     Peak Voltage (V)    Peak Current (A)
Long unshielded cable (power line, long antenna)  10^-8 to 10^-7    10^5 to 5 × 10^6    10^3 to 10^4
HF antenna system                                 10^-8 to 10^-7    10^4 to 10^6        500 to 10^4
VHF antenna system                                10^-9 to 10^-8    10^3 to 10^5        100 to 10^3
UHF antenna system                                10^-9 to 10^-8    100 to 10^4         10 to 100
Shielded cable                                    10^-6 to 10^-4    1 to 100            0.1 to 50
The amplitude and polarization of the field produced by a high-altitude detonation depends
on the altitude of the burst, the yield of the device, and the orientation of the burst with respect to
the receiving point. The EMP field creates a short but intense broadband radio frequency pulse
with significant energy up to 100 MHz. Most of the radiated energy, however, is concentrated
below 10 MHz. Figure 5.6.16 shows the distribution of energy as a function of frequency. The
electric field can be greater than 50 kV/m, with a rise time measured in the tens of nanoseconds.
Figure 5.6.17 illustrates the field of a simulated EMP discharge.
Many times, lightning and other natural occurrences cause problems not because they strike a
given site, but because they strike part of the utility power system and are brought into the facil-
ity via the ac lines. Likewise, damage that could result from EMP radiation would be most severe
to equipment connected to the primary power source, because it is generally the most exposed
part of any facility. Table 5.6.1 lists the response of various systems to an EMP event.
Given a sufficiently long line, substantial voltages can be coupled to the primary power system without a
direct hit. Likewise, the field created by EMP radiation can be coupled to the primary power
lines, but in this case at a much higher voltage (50 kV/m). Considering the layout of many parts
of the utility company power system—long exposed lines over mountaintops and the like—the
possibility of a direct lightning flash to one or more legs of the system is a distinct one.
Lightning is a point charge-injection process, with pulses moving away from the point of
injection. The amount of total energy (voltage and current) and the rise and decay times of the
energy seen at the load as a result of a lightning flash are functions of the distance between the
flash and the load and the physical characteristics of the power distribution system. Determining
factors include:
• Wire size
• Number and sharpness of bends
• Types of transformers
• Types of insulators
• Placement of lightning suppressors
The character of a lightning flash covers a wide range of voltage, current, and rise-time
parameters. Making an accurate estimate of the damage potential of a flash is difficult. A direct
hit to a utility power line causes a high-voltage, high-current wave to travel away from the point
of the hit in both directions along the power line. The waveshape is sawtooth in form, with a rise
time measured in microseconds or nanoseconds. The pulse travels at nearly the speed of light
until it encounters a significant change in line impedance. At this point, a portion of the wave is
reflected back down the line in the direction from which it came. This action creates a standing
wave containing the combined voltages of the two pulses. A high-energy wave of this type can
reach a potential sufficient to arc over to another parallel line, a distance of about 8 ft on a local
feeder (typically 12 kV) power pole.
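The reflection behavior described above follows standard transmission-line theory. In this sketch the line surge impedance is an assumed figure, and an open circuit is approximated by a very large terminating impedance:

```python
def reflection_coefficient(z_line_ohms, z_term_ohms):
    """Voltage reflection coefficient at an impedance discontinuity."""
    return (z_term_ohms - z_line_ohms) / (z_term_ohms + z_line_ohms)

# An incident surge arriving at a matched impedance is absorbed; at an open
# circuit it reflects almost completely, and the incident and reflected waves
# add, so the line voltage at that point approaches twice the surge voltage.
gamma_matched = reflection_coefficient(400.0, 400.0)   # zero: no reflection
gamma_open = reflection_coefficient(400.0, 1e9)        # approaches +1
```

This doubling at a high-impedance discontinuity is what allows a traveling surge to reach flashover potential at points well away from the original strike.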
5.6.7 Bibliography
Bishop, Don, “Lightning Devices Undergo Tests at Florida Airports,” Mobile Radio Technology,
PRIMEDIA Intertec, Overland Park, Kan., May 1990.
Bishop, Don, “Lightning Sparks Debate: Prevention or Protection?,” Mobile Radio Technology,
PRIMEDIA Intertec, Overland Park, Kan., January 1989.
Block, Roger, “Dissipation Arrays: Do They Work?,” Mobile Radio Technology, PRIMEDIA
Intertec, Overland Park, Kan., April 1988.
Block, Roger, “The Grounds for Lightning and EMP Protection,” PolyPhaser Corp., Gardnerv-
ille, Nev., 1987.
Defense Civil Preparedness Agency, EMP and Electric Power Systems, Publication TR-61-D,
U.S. Department of Commerce, National Bureau of Standards, Washington, D.C., July
1973.
Defense Civil Preparedness Agency, EMP Protection for Emergency Operating Centers, Federal
Information Processing Standards Publication no. 94, Guideline on Electrical Power for
Chapter 5.7
Transmitter Building Grounding Practices
5.7.1 Introduction
Proper grounding is basic to protection against ac line disturbances. This applies whether the
source of the disturbance is lightning, power-system switching activities, or faults in the distribu-
tion network. Proper grounding is also a key element in preventing radio frequency interference
in transmission or computer equipment. A facility with a poor ground system can experience
RFI problems on a regular basis. Implementing an effective ground network is not an easy task.
It requires planning, quality components, and skilled installers. It is not inexpensive. However,
proper grounding is an investment that will pay dividends for the life of the facility.
Figure 5.7.1 A facility ground system using the hub-and-spoke approach. The available real
estate at the site will dictate the exact configuration of the ground system. If a tower is located at
the site, the tower ground system is connected to the building ground as shown.
Figure 5.7.2 Facility ground using a perimeter ground-rod system. This approach works well for
buildings with limited available real estate.
Figure 5.7.3 A typical guy-anchor and tower-radial grounding scheme. The radial ground is no. 6
copper wire. The ground rods are 5/8 in. × 10 ft. (After [1].)
not be disconnected or moved. Do not remove any existing earth ground connections to the
power-line neutral connection. To do so may violate local electrical code.
Bury all elements of the ground system to reduce the inductance of the overall network. Do
not make sharp turns or bends in the interconnecting wires. Straight, direct wiring practices will
reduce the overall inductance of the system and increase its effectiveness in shunting fast-rise-
time surges to earth. Figure 5.7.3 illustrates the interconnection of a tower and building ground
system. In most areas, soil conductivity is high enough to permit rods to be connected with no. 6
bare-copper wire or larger. In areas of sandy soil, use copper strap. A wire buried in low-conduc-
tivity, sandy soil tends to be inductive and less effective in dealing with fast-rise-time current
surges. Make the width of the ground strap at least 1 percent of its overall length. Connect buried
elements of the system as shown in Figure 5.7.4.
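The 1 percent width rule for ground strap translates directly into a small helper. The function below is illustrative, not a code requirement:

```python
def min_strap_width_in(run_length_ft):
    """Width implied by the text's rule of thumb: strap width should be at
    least 1 percent of its overall length (length in feet, result in inches)."""
    return 0.01 * run_length_ft * 12.0
```

A 50-ft buried run, for example, calls for a strap at least 6 in. wide.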
For small installations with a low physical profile, a simplified grounding system can be
implemented, as shown in Figure 5.7.5. A grounding plate is buried below grade level, and a
ground wire ties the plate to the microwave tower mounted on the building.
Figure 5.7.4 Preferred bonding method for below-grade elements of the ground system. (After
[1].)
ing are referenced. A typical bulkhead installation for a small communications site is shown in
Figure 5.7.9.
Bulkhead Grounding
A properly installed bulkhead panel will exhibit lower impedance and resistance to ground than
any other equipment or cable grounding point at the facility [1]. Waveguide and coax line
grounding kits should be installed at the bulkhead panel as well as at the tower. Dress the kit tails
downward at a straight 45° angle using 3/8-in. stainless-steel hardware to the panel. Position the
Figure 5.7.6 The basic design of a bulkhead panel for a facility. The bulkhead establishes the
grounding reference point for the plant.
Figure 5.7.7 The addition of a subpanel to a bulkhead as a means of providing a mounting surface
for transient-suppression components. To ensure that the bulkhead is capable of handling high
surge currents, use the hardware shown. (After [1].)
Figure 5.7.10 The proper way to ground a bulkhead panel and provide a low-inductance path for
surge currents stripped from cables entering and leaving the facility. The panel extends along the
building exterior to below grade. It is silver-soldered to a no. 2/0 copper wire that interconnects
with the outside ground system. (After [1].)
stainless-steel lug at the tail end, flat against a cleaned spot on the panel. Joint compound will be
needed for aluminum and is recommended for copper panels.
Because the bulkhead panel will be used as the central grounding point for all the equipment
inside the building, the lower the inductance to the perimeter ground system, the better. The best
arrangement is to simply extend the bulkhead panel down the outside of the building, below
grade, to the perimeter ground system. This will give the lowest resistance and the smallest
inductive voltage drop. This approach is illustrated in Figure 5.7.10.
If cables are used to ground the bulkhead panel, secure the interconnection to the outside
ground system along the bottom section of the panel. Use multiple no. 1/0 or larger copper wire
or several solid-copper straps. If using strap, attach with stainless-steel hardware, and apply joint
compound for aluminum bulkhead panels. Clamp and Cadweld, or silver-solder for copper/brass
panels. If no. 1/0 or larger wire is used, employ crimp lug and stainless-steel hardware. Measure
the dc resistance. It should be less than 0.01 Ω between the ground system and the panel. Repeat
this measurement on an annual basis.
If the antenna feed lines do not enter the equipment building via a bulkhead panel, treat them
in the following manner:
• Mount a feed-line ground bar on the wall of the building approximately 4 in. below the feed-
line entry point.
• Connect the outer conductor of each feed line to the feed-line ground bar using an appropriate
grounding kit.
• Connect a no. 1/0 cable or 3- to 6-in.-wide copper strap between the feed-line ground bar and
the external ground system. Make the joint a Cadweld or silver-solder connection.
• Mount coaxial arrestors on the edge of the bar.
• Weatherproof all connections.
Typical Installation
Figure 5.7.11 illustrates a common grounding arrangement for a remotely located grounded-
tower (FM, TV, or microwave radio) transmitter plant. The tower and guy wires are grounded
using 10-ft-long copper-clad ground rods. The antenna is bonded to the tower, and the transmis-
sion line is bonded to the tower at the point where it leaves the structure and begins the horizontal
run into the transmitter building. Before entering the structure, the line is bonded to a ground rod
through a connecting cable. The transmitter itself is grounded to the transmission line and to the
ac power-distribution system ground. This, in turn, is bonded to a ground rod where the utility
feed enters the building. The goal of this arrangement is to strip all incoming lines of damaging
overvoltages before they enter the facility. One or more lightning rods are mounted at the top of
the tower structure. The rods extend at least 10 ft above the highest part of the antenna assembly.
Such a grounding configuration, however, has built-in problems that can make it impossible
to provide adequate transient protection to equipment at the site. Look again at the example. To
equipment inside the transmitter building, two grounds actually exist: the utility company ground
and the antenna ground. One ground will have a lower resistance to earth, and one will have a
lower inductance in the connecting cable or copper strap from the equipment to the ground sys-
tem.
Using the Figure 5.7.11 example, assume that a transient overvoltage enters the utility com-
pany meter panel from the ac service line. The overvoltage is clamped by a protection device at
Figure 5.7.11 A common, but not ideal, grounding arrangement for a transmission facility using a
grounded tower. A better configuration involves the use of a bulkhead panel through which all
cables pass into and out of the equipment building.
the meter panel, and the current surge is directed to ground. But which ground, the utility ground
or the antenna ground?
The utility ground surely will have a lower inductance to the current surge than the antenna
ground, but the antenna probably will exhibit a lower resistance to ground than the utility side of
the circuit. Therefore, the surge current will be divided between the two grounds, placing the
transmission equipment in series with the surge suppressor and the antenna ground system. A
transient of sufficient potential will damage the transmission equipment.
Transients generated on the antenna side because of a lightning discharge are no less trouble-
some. The tower is a conductor, and any conductor is also an inductor. A typical 150-ft self-sup-
porting tower may exhibit as much as 40 μH inductance. During a fast-rise-time lightning strike,
an instantaneous voltage drop of 360 kV between the top of the tower and the base is not
unlikely. If the coax shield is bonded to the tower 15 ft above the earth (as shown in the previous
figure), 10 percent of the tower voltage drop (36 kV) will exist at that point during a flash. Figure
5.7.12 illustrates the mechanisms involved.
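The tower voltage-drop figures can be reproduced from V = L(di/dt). The rate of rise used below is inferred from the text's own numbers (360 kV across 40 μH) and should be read as an assumption about the stroke, not a measured value:

```python
# Reproducing the text's figures for a fast-rise-time strike on a 150-ft tower.
l_tower = 40e-6     # H, tower inductance quoted in the text
di_dt = 9e9         # A/s (9 kA/us), inferred so that L * di/dt matches 360 kV
v_top_to_base = l_tower * di_dt               # volts between tower top and base
v_at_bond = v_top_to_base * (15.0 / 150.0)    # coax bond 15 ft up sees ~10%
```

Even a bond point only 15 ft above grade thus sits at tens of kilovolts during the flash, which is why the coax shield potential must be dealt with at a bulkhead rather than assumed to be at ground.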
The only way to ensure that damaging voltages are stripped off all incoming cables (coax, ac
power, and telephone lines) is to install a bulkhead entrance panel and tie all transient-suppres-
sion hardware to it. Configuring the system as shown in Figure 5.7.13 strips away all transient
voltages through the use of a single-point ground. The bulkhead panel is the ground reference for
Figure 5.7.12 The equivalent circuit of the facility shown in Figure 5.7.11. Note the discharge cur-
rent path through the electronic equipment.
Figure 5.7.13 The preferred grounding arrangement for a transmission facility using a bulkhead
panel. With this configuration, all damaging transient overvoltages are stripped off the coax,
power, and telephone lines before they can enter the equipment building.
Figure 5.7.14 The equivalent circuit of the facility shown in Figure 5.7.13. Discharge currents are
prevented from entering the equipment building.
the facility. With such a design, secondary surge current paths do not exist, as illustrated in Fig-
ure 5.7.14.
Protecting the building itself is another important element of lightning surge protection. Fig-
ure 5.7.15 shows a common system using multiple lightning rods. As specified by NFPA 78, the
dimensions given in the figure are as follows:
• A = 50-ft maximum spacing between air terminals
• B = 150-ft maximum length of a coursing conductor permitted without connection to a main
perimeter or downlead conductor
• C = 20- or 25-ft maximum spacing between air terminals along an edge
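For planning purposes, the air-terminal spacing limits suggest a simple count along a roof edge. This helper is hypothetical and not an NFPA-prescribed formula:

```python
import math

def air_terminals_on_edge(edge_length_ft, max_spacing_ft=20.0):
    """Minimum number of air terminals along one roof edge so that no gap
    exceeds max_spacing_ft, assuming terminals sit at both ends of the edge
    (hypothetical planning helper)."""
    return math.ceil(edge_length_ft / max_spacing_ft) + 1
```

A 100-ft edge at 20-ft maximum spacing, for example, requires at least six terminals.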
Figure 5.7.15 Conduction lightning-protection system for a large building. (From [12]. Used with
permission.)
Mating surfaces must be cleaned, any finish removed to bare metal, and surface preparation com-
pound applied (where necessary). Protect all connections from moisture by appropriate
means (sealing compound and heat sink tubing).
5.7.3 References
1. Block, Roger, “The Grounds for Lightning and EMP Protection,” PolyPhaser Corporation,
Gardnerville, Nev., 1987.
5.7.4 Bibliography
Benson, K. B., and Jerry C. Whitaker, Television and Audio Handbook for Engineers and Techni-
cians, McGraw-Hill, New York, N.Y., 1989.
Block, Roger, “How to Ground Guy Anchors and Install Bulkhead Panels,” Mobile Radio Tech-
nology, PRIMEDIA Intertec, Overland Park, Kan., February 1986.
Fardo, S., and D. Patrick, Electrical Power Systems Technology, Prentice-Hall, Englewood Cliffs,
N.J., 1985.
Little, Richard, “Surge Tolerance: How Does Your Site Rate?” Mobile Radio Technology, Inter-
tec Publishing, Overland Park, Kan., June 1988.
Schneider, John, “Surge Protection and Grounding Methods for AM Broadcast Transmitter
Sites,” Proceedings, SBE National Convention, Society of Broadcast Engineers, Indianapo-
lis, Ind., 1987.
Technical Reports LEA-9-1, LEA-0-10, and LEA-1-8, Lightning Elimination Associates, Santa
Fe Springs, Calif.
Whitaker, Jerry C., AC Power Systems Handbook, 2nd ed., CRC Press, Boca Raton, Fla., 1999.
Section 6
Radio Receivers
The development of radio transmission and reception is one of the major technical achievements
of the twentieth century. The impact of voice broadcasts to the public, whether by commercial
stations or government-run organizations, has expanded the horizons of everyday citizens in
virtually every country on earth. It is hard to overestimate the power of radio broadcasting.
Technology has dramatically reshaped the transmission side of AM and FM broadcasting.
Profound changes have also occurred in receiver technology. Up until 1960, radio broadcasting
was basically a stationary medium. The receivers of that time were physically large and heavy,
and required 120 V ac power to drive them. The so-called portable radios of the day relied on
bulky batteries that offered only a limited amount of listening time. Automobile radios incorpo-
rated vibrator-choppers to simulate ac current. All the receivers available for commercial use
during the 1940s and 1950s used vacuum tubes exclusively.
The first technical breakthrough for radio broadcasting was the transistor, available commer-
cially at reasonable prices during the early 1960s. The transistor brought with it greatly reduced
physical size and weight, and even more importantly, it eliminated the necessity of ac line current
to power the radio. The first truly portable AM radios began to appear during the early 1960s,
with AM-FM versions following by the middle of the decade.
Many of the early receiver designs were marginal from a performance standpoint. The really
good receivers were still built around vacuum tubes. As designers learned more about transistors,
and as better transistors became available, tube-based receivers began to disappear. By 1970,
transistorized radios, as they were called, commanded the consumer market.
The integrated circuit (IC) was the second technical breakthrough in consumer receiver
design. This advance, more than anything else, made high-quality portable radios possible. It
also accelerated the change in listening habits from AM to FM. IC-based receivers allowed
designers to put more sophisticated circuitry into a smaller package, permitting consumers to
enjoy the benefits of FM broadcasting without the penalties of the more complicated receiver
required for FM stereo.
The move to smaller, lighter, more power-efficient radios has led to fundamental changes in
the way radios are built and serviced. In the days of vacuum-tube and transistor-based receivers,
the designer would build a radio out of individual stages that were interconnected to provide a
working unit. The stages for a typical radio included:
• RF amplifier
• Local oscillator
• Intermediate frequency (IF) amplifier
• Detector and audio preamplifier
Today, however, large-scale integration (LSI) or even very large scale integration (VLSI)
techniques have permitted virtually all the active circuits of an AM-FM radio to be placed on a
single IC. Advanced circuitry has also permitted radio designers to incorporate all-electronic
tuning, eliminating troublesome and sometimes expensive mechanical components. Electroni-
cally tuned radios (ETRs) have made features such as “station scan” and “station seek” possible.
Some attempts were made to incorporate scan and seek features in mechanically tuned radios,
but the results were never very satisfactory.
The results of LSI-based receiver design have been twofold. First, radios based on advanced
chip technologies are much easier to build and are, therefore, usually less expensive to consum-
ers. Second, such radios are not serviceable. Most consumers today would not bother to have a
broken radio repaired. They would simply buy a new one and throw the old one away.
Still, however, it is important to know what makes a radio work. Although radios being built
with LSI and VLSI technology do not lend themselves to stage-by-stage troubleshooting as ear-
lier radios did, it is important to understand how each part of the system functions to make a
working unit. Regardless of the sophistication of a VLSI-based receiver, the basic principles of
operation are the same as a radio built of discrete stages.
Reference Documents for this Section:
Benson, K. Blair, and Jerry C. Whitaker: Television and Audio Handbook for Engineers and
Technicians, McGraw-Hill, New York, N.Y., 1990.
Engelson, M., and J. Herbert: “Effective Characterization of CDMA Signals,” Microwave Jour-
nal, pg. 90, January 1995.
Howald, R.: “Understand the Mathematics of Phase Noise,” Microwaves & RF, pg. 97, December
1993.
Johnson, J. B.: “Thermal Agitation of Electricity in Conductors,” Phys. Rev., vol. 32, pg. 97, July
1928.
Nyquist, H.: “Thermal Agitation of Electrical Charge in Conductors,” Phys. Rev., vol. 32, pg.
110, July 1928.
Pleasant, D.: “Practical Simulation of Bit Error Rates,” Applied Microwave and Wireless, pg. 65,
Spring 1995.
Rohde, Ulrich L.: Digital PLL Frequency Synthesizers, Prentice-Hall, Englewood Cliffs, N.J.,
1983.
Rohde, Ulrich L.: “Key Components of Modern Receiver Design—Part 1,” QST, pg. 29, May
1994.
Rohde, Ulrich L., and David P. Newkirk: RF/Microwave Circuit Design for Wireless
Applications, John Wiley & Sons, New York, N.Y., 2000.
Rohde, Ulrich L., and Jerry C. Whitaker: Communications Receivers, 3rd ed., McGraw-Hill,
New York, N.Y., 2000.
“Standards Testing: Bit Error Rate,” application note 3SW-8136-2, Tektronix, Beaverton, Ore.,
July 1993.
Using Vector Modulation Analysis in the Integration, Troubleshooting and Design of Digital RF
Communications Systems, Product Note HP89400-8, Hewlett-Packard, Palo Alto, Calif.,
1994.
Watson, R.: “Receiver Dynamic Range; Pt. 1, Guidelines for Receiver Analysis,” Microwaves &
RF, vol. 25, pg. 113, December 1986.
“Waveform Analysis: Noise and Jitter,” application note 3SW8142-2, Tektronix, Beaverton,
Ore., March 1993.
Wilson, E.: “Evaluate the Distortion of Modular Cascades,” Microwaves, vol. 20, March 1981.
Whitaker, Jerry C. (ed.): NAB Engineering Handbook, 9th ed., National Association of Broad-
casters, Washington, D.C., 1999.
Chapter 6.1
Receiver Characteristics
Ulrich L. Rohde
6.1.1 Introduction1
The superheterodyne receiver makes use of the heterodyne principle of mixing an incoming sig-
nal with a signal generated by a local oscillator (LO) in a nonlinear element (Figure 6.1.1). How-
ever, rather than synchronizing the frequencies, the superheterodyne receiver uses a LO
frequency offset by a fixed intermediate frequency (IF) from the desired signal. Because a non-
linear device generates identical difference frequencies if the signal frequency is either above or
below the LO frequency (and also a number of other spurious responses), it is necessary to pro-
vide sufficient filtering prior to the mixing circuit so that this undesired signal response is sub-
stantially suppressed. The frequency of the undesired signal is referred to as an image frequency,
and a signal at this frequency is referred to as an image. The image frequency is separated from
the desired signal frequency by a difference equal to twice the IF. The preselection filtering
required at the signal frequency is much broader than if the filtering of adjacent channel signals
were required. The channel filtering is accomplished at IF. This is a decided advantage when the
receiver must cover a wide frequency band, because it is much more difficult to maintain con-
stant bandwidth in a tunable filter than in a fixed one. Also, for receiving different signal types,
the bandwidth can be changed with relative ease at a fixed frequency by switching filters of dif-
ferent bandwidths. Because the IF at which channel selectivity is provided is often lower than the
signal band frequencies, it may be easier to provide selectivity at IF, even if wide-band RF tuning
is not required.
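The image relationship described above (image separated from the desired signal by twice the IF) can be made concrete with a short Python sketch. The 98.1-MHz station and the 10.7-MHz FM broadcast IF used in the example are illustrative assumptions, not values from the text.

```python
def image_frequency(f_signal_hz: float, f_if_hz: float, lo_above: bool = True) -> float:
    """Return the image frequency for the given signal and IF.

    The image is separated from the desired signal by twice the IF:
    with high-side LO injection it lies above the signal, with
    low-side injection below it.
    """
    offset = 2 * f_if_hz
    return f_signal_hz + offset if lo_above else f_signal_hz - offset

f_sig = 98.1e6   # desired FM station, Hz
f_if = 10.7e6    # common FM broadcast IF, Hz

f_lo = f_sig + f_if                    # high-side LO injection
f_img = image_frequency(f_sig, f_if)   # 98.1 MHz + 21.4 MHz
print(f"LO = {f_lo/1e6:.1f} MHz, image = {f_img/1e6:.1f} MHz")
```

A preselector need only reject a signal 21.4 MHz away here, while the channel filtering at IF handles the adjacent stations, which is exactly the division of labor the paragraph above describes.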
1. This chapter is based on: Rohde, Ulrich L., and Jerry C. Whitaker: Communications Receiv-
ers: Principles and Design, 3rd ed., McGraw-Hill, New York, N.Y., 2000. Used with permis-
sion.
can be carried out. Additional IF filtering, data demodulation, and error control coding can all be
performed by digital circuits or a microprocessor.
An alternative to IF sampling and A/D conversion is the conversion of the signal to baseband
in two separate coherent demodulators driven by quadrature LO signals at the IF. The two out-
puts are then sampled at the appropriate rate for the baseband by two A/D converters or a single
multiplexed A/D converter, providing the in-phase and quadrature samples of the baseband sig-
nal. Once digitized, these components can be processed digitally to provide filtering, frequency
changing, phase and timing recovery, data demodulation, and error control.
NF
Sensitivity measures depend upon specific signal characteristics. The NF measures the effects of
inherent receiver noise in a different manner. Essentially it compares the total receiver noise with
the noise that would be present if the receiver generated no noise. This ratio is sometimes called
the noise factor F, and when expressed in dB, the NF. F is also defined equivalently as the ratio of
the S/N of the receiver output to the S/N of the source. The source generally used to test receivers
is a signal generator at local room temperature. An antenna, which receives not only signals but
noises from the atmosphere, the galaxy, and man-made sources, is unsuitable to provide a mea-
sure of receiver NF. However, the NF required of the receiver from a system viewpoint depends
on the expected S/N from the antenna. The effects of external noise are sometimes expressed as
an equivalent antenna NF.
For the receiver, we are concerned with internal noise sources. Passive devices such as con-
ductors generate noise as a result of the continuous thermal motion of the free electrons. This
type of noise is referred to generally as thermal noise, and is sometimes called Johnson noise
after the person who first demonstrated it. Using the statistical theory of thermodynamics,
Nyquist showed that the mean-square thermal noise voltage generated by any impedance
between two frequencies f1 and f2 can be expressed as
$V_n^2 = 4kT \int_{f_1}^{f_2} R(f)\,df$
(6.1.1)
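For a frequency-independent resistance, Equation (6.1.1) reduces to the familiar $V_n^2 = 4kTRB$. The following Python sketch evaluates that special case; the 50-Ω source and 10-kHz bandwidth are illustrative values, not taken from the text.

```python
import math

K_BOLTZMANN = 1.380649e-23  # Boltzmann's constant, J/K

def thermal_noise_vrms(r_ohms: float, bandwidth_hz: float, temp_k: float = 290.0) -> float:
    """RMS thermal (Johnson) noise voltage of a resistor.

    Special case of Equation (6.1.1) for a frequency-independent
    resistance R: Vn^2 = 4 k T R B.
    """
    return math.sqrt(4 * K_BOLTZMANN * temp_k * r_ohms * bandwidth_hz)

# A 50-ohm source resistance in a 10-kHz noise bandwidth at 290 K:
vn = thermal_noise_vrms(50.0, 10e3)
print(f"Vn = {vn * 1e9:.1f} nV rms")  # roughly 89 nV rms
```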
Figure 6.1.2 Receiver sensitivity measurement: (a) test setup, (b) procedure.
Magnetic substances also produce noise, depending upon the residual magnetization and the
applied dc and RF voltages. This is referred to as the Barkhausen effect, or Barkhausen noise.
The greatest source of receiver noise, however, is generally that generated in semiconductors.
Like the older thermionic tubes, transistors and diodes also produce characteristic noise. Shot
noise resulting from the fluctuations in the carrier flow in semiconductor devices produces wide-
band noise, similar to thermal noise. Low-frequency noise or 1/f noise, also called flicker effect,
is roughly inversely proportional to frequency and is similar to the contact noise in contact resis-
tors. All of these noise sources contribute to the “excess noise” of the receiver, which causes the
NF to exceed 0 dB.
The NF is often measured in a setup similar to that of Figure 6.1.2, using a specially designed
and calibrated white-noise generator as the input. The receiver is tuned to the correct frequency
and bandwidth, and the output power meter is driven from a linear demodulator or the final IF
amplifier. The signal generator is set to produce no output, and the output power is observed. The
generator output is then increased until the output has risen 3 dB. The setting on the generator is
the NF in decibels.
The NF of an amplifier can also be calculated as the ratio of input to output S/N, per the equa-
tion
6-10 Radio Receivers
$NF = 10 \log \left[ \frac{(S/N)_1}{(S/N)_2} \right]$
(6.1.2)
where NF is the noise figure in dB and (S/N)1 and (S/N)2 are the amplifier input and output SNR,
respectively.
The NF for a noiseless amplifier or lossless passive device is 0 dB; it is always positive for
nonideal devices. The NF of a lossy passive device is numerically equal to the device insertion
loss. If the input of a nonideal amplifier of gain G (dB) and noise figure NF (dB) were connected
to a matched resistor, the amplifier output noise power PNo (dBm) would be

$P_{No} = 10 \log (kTB) + NF + G$
(6.1.3)

where k is Boltzmann’s constant (mW/Hz/°K), T is the resistor temperature in °K, and B is the noise
bandwidth in Hz.
When amplifiers are cascaded, the noise power rises toward the output as noise from succeed-
ing stages is added to the system. Under the assumption that noise powers add noncoherently, the
noise figure NFT of a cascade consisting of two stages of numerical gain A 1 and A2 and noise
factor N1 and N2, is given by Friis’ equation
$NF_T = 10 \log \left[ N_1 + \frac{N_2 - 1}{A_1} \right]$
(6.1.4)
where the noise factor is N = 10(NF/10) and the numerical gain is A = 10(G/10). The system NF,
therefore, is largely determined by the first stage NF when A1 is large enough to make
( N 2 – 1 ) ⁄ A 1 much smaller than N1.
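The cascade calculation generalizes to any number of stages, each later stage's excess noise being divided by the total gain ahead of it. A short Python sketch of Friis' equation follows; the two-stage line-up in the example (a 20-dB-gain, 2-dB-NF preamplifier into a 6-dB-loss, 8-dB-NF mixer) is an assumed illustration, not a design from the text.

```python
import math

def cascade_noise_factor(stages):
    """Total noise factor of cascaded stages via Friis' equation:
    F = F1 + (F2 - 1)/A1 + (F3 - 1)/(A1*A2) + ...

    `stages` is a list of (gain_db, nf_db) tuples, ordered input to output.
    """
    f_total = 1.0
    gain = 1.0
    for gain_db, nf_db in stages:
        f = 10 ** (nf_db / 10)        # noise factor N = 10^(NF/10)
        f_total += (f - 1.0) / gain   # each stage's excess noise, input-referred
        gain *= 10 ** (gain_db / 10)  # numerical gain A = 10^(G/10)
    return f_total

# Illustrative line-up: 20-dB-gain, 2-dB-NF preamplifier followed by a
# lossy mixer (6 dB loss, 8 dB NF). The first stage dominates:
nf_db = 10 * math.log10(cascade_noise_factor([(20, 2), (-6, 8)]))
print(f"cascade NF = {nf_db:.2f} dB")  # about 2.14 dB
```

Note how the 8-dB mixer NF adds only about 0.14 dB to the system NF once it sits behind 20 dB of low-noise gain, which is the practical content of the closing sentence above.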
$MDS = k T B_n F$
(6.1.5)
Receiver Characteristics 6-11
In dBm, $MDS = -174 + 10 \log B_n + NF$, where Bn is the noise bandwidth of the receiver in Hz. (0
dBm = decibels referenced to 1 mW.)
The available thermal noise power per hertz is –174 dBm at 290 K (63ºF), an arbitrary refer-
ence temperature near standard room temperatures. When any two of the quantities in the expres-
sion are known, the third may be calculated. As in the case of NF measurements, care is required
in measuring MDS, because a large portion of the power being measured is noise, which produces
the fluctuations typical of noise measurements.
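The dBm form of the MDS expression is easily evaluated numerically. The sketch below assumes an example receiver with a 10-kHz noise bandwidth and an 8-dB noise figure; both values are illustrative.

```python
import math

def mds_dbm(noise_bandwidth_hz: float, nf_db: float) -> float:
    """Minimum discernible signal in dBm:
    MDS = -174 + 10 log10(Bn) + NF,
    where -174 dBm/Hz is the available thermal noise power at 290 K.
    """
    return -174.0 + 10.0 * math.log10(noise_bandwidth_hz) + nf_db

# e.g., a receiver with a 10-kHz noise bandwidth and an 8-dB noise figure:
print(f"MDS = {mds_dbm(10e3, 8.0):.0f} dBm")  # MDS = -126 dBm
```

Given any two of the three quantities (MDS, Bn, NF), the third follows by rearranging the same expression.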
6.1.3 Selectivity
Selectivity is the property of a receiver that allows it to separate a signal or signals at one fre-
quency from those at all other frequencies. At least two characteristics must be considered simul-
taneously in establishing the required selectivity of a receiver. The selective circuits must be
sufficiently sharp to suppress the interference from adjacent channels and spurious responses.
On the other hand, they must be broad enough to pass the highest sideband frequencies with
acceptable distortion in amplitude and phase. Each class of signals to be received may require
different selectivity to handle the signal sidebands adequately while rejecting interfering trans-
missions having different channel assignment spacings. However, each class of signal requires
about the same selectivity throughout all the frequency bands allocated to that class of service.
Older receivers sometimes required broader selectivity at higher frequencies to compensate for
greater oscillator drift. This requirement has been greatly reduced by the introduction of synthe-
sizers for control of LOs and economical high-accuracy and high-stability crystal standards for
the reference frequency oscillator.
Quantitatively the definition of selectivity is the bandwidth for which a test signal x decibels
stronger than the minimum acceptable signal at a nominal frequency is reduced to the level of
that signal. This measurement is relatively simple for a single selective filter or single-frequency
amplifier, and a selectivity curve can be drawn showing the band offset both above and below the
nominal frequency as the selected attenuation level is varied. Ranges of 80 to 100 dB of attenua-
tion can be measured readily, and higher ranges—if required—can be achieved with special care.
A test setup similar to Figure 6.1.2 may be employed with the receiver replaced by the selective
element under test. Proper care must be taken to achieve proper input and output impedance ter-
mination for the particular unit under test. The power output meter need only be sufficiently sen-
sitive, have uniform response over the test bandwidth, and have monotonic response so that the
same output level is achieved at each point on the curve. A typical IF selectivity curve is shown
in Figure 6.1.3.
The measurement of overall receiver selectivity, using the test setup of Figure 6.1.2, presents
some difficulties. The total selectivity of the receiving system is divided among RF, IF, and base-
band selective elements. There are numerous amplifiers and frequency converters, and at least
one demodulator intervening between input and output. Hence, there is a high probability of non-
linearities in the nonpassive components affecting the resulting selectivity curves. Some of the
effects that occur include overload, modulation distortion, spurious signals, and spurious
responses. If there is an AGC, it must be disabled so that it cannot change the amplifier gain in
response to the changing signal levels in various stages of the receiver. If there is only an AM or
FM demodulator for use in the measurement, distortions occur because of the varying attenua-
tion and phase shift of the circuits across the sidebands.
$V_o = \sum_n a_n V_i^n$
(6.1.6)
where a1 is the voltage amplification of the device and the higher-order an cause distortion.
Because the desired signal and the undesired interference are generally narrow-band signals,
we may represent Vi as a sum of sinusoids of different amplitudes and frequencies. Generally,
the nth-order term operates on the two-tone input
$( A_1 \sin 2\pi f_1 t + A_2 \sin 2\pi f_2 t )^n$
to produce output components at the frequencies
$m f_1 \pm (n - m) f_2$
with m taking on all values from 0 to n. These intermodulation (IM) products may have the same
frequency as the desired signal for appropriate choices of f1 and f2. When n is even, the minimum
difference between the two frequencies for this to happen is the desired frequency itself. This
type of even IM interference can be reduced substantially by using selective filters.
When n is odd, however, the minimum difference can be very small. Because m and n – m can
differ by unity, and each can be close to the signal frequency, if the adjacent interferer is ξ f from
the desired signal, the second need be only
( 2ξ f ) ⁄ ( n – 1 )
further away for the product to fall at the desired frequency. Thus, odd-order IM products can be
caused by strong signals only a few channels removed from the desired signal. Selective filtering
capable of reducing such signals substantially is not available in most superheterodyne receivers
prior to the final IF. Consequently, odd-order IM products generally limit the dynamic range sig-
nificantly.
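The frequency relationship $m f_1 \pm (n - m) f_2$ can be enumerated directly. The Python sketch below lists the products of an nth-order nonlinearity driven by two tones; the 10.0-MHz channel and 100-kHz spacing in the example are assumed values chosen to show a third-order product landing in-channel.

```python
def im_product_frequencies(f1: float, f2: float, order: int) -> list:
    """Enumerate |m*f1 ± (order - m)*f2| for m = 0..order: the
    intermodulation product frequencies produced by an nth-order
    nonlinearity driven by two tones (negative results folded positive).
    """
    products = set()
    for m in range(order + 1):
        for sign in (1, -1):
            f = m * f1 + sign * (order - m) * f2
            products.add(abs(f))
    return sorted(products)

# Two strong interferers one and two channels (100 kHz each) above a
# desired 10.0-MHz channel:
f1, f2 = 10.1e6, 10.2e6
third_order = im_product_frequencies(f1, f2, 3)
# The odd-order product 2*f1 - f2 lands exactly on the desired channel:
print(10.0e6 in third_order)  # True
```

This is the mechanism described above: with m = 2 and n – m = 1, two interferers only one and two channels away produce a product squarely on the desired frequency, which no preselector can remove.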
Other effects of odd-order distortion are desensitization and cross modulation. For the case
where n is odd, the presence of the desired signal and a strong interfering signal results in a prod-
uct of the desired signal with an even order of the interfering signal. One of the resulting compo-
nents of an even power of a sinusoid is a constant, so the desired signal is multiplied by that
constant and an even power of the interferer’s signal strength. If the interferer is sufficiently
strong, the resulting product will subtract from the desired signal product from the first power
term, reducing the effective gain of the device. This is referred to as desensitization. If the inter-
ferer is amplitude-modulated, the desired signal component will also be amplitude-modulated by
the distorted modulation of the interferer. This is known as cross modulation of the desired signal
by the interfering signal.
This discussion provides a simple theory that can be applied in considering strong signal
effects. However, the receiver is far more complicated than the single device, and strong signal
performance of single devices by these techniques can become rapidly intractable as higher-
order terms must be considered. Another mechanism also limits the dynamic range. LO noise
sidebands at low levels can extend substantially from the oscillator frequency. A sufficiently
strong off-tune signal can beat with these noise sidebands in a mixer, producing additional noise
in the desired signal band. Other characteristics that affect the dynamic range are spurious sig-
nals and responses, and blocking.
6-14 Radio Receivers
The effects described here occur in receivers, and tests to measure them are essential to deter-
mining the dynamic range. Most of these measurements involving the dynamic range require
more than one signal input. They are conducted using two or three signal generators in a test
setup such as that indicated in Figure 6.1.4.
6.1.4a Desensitization
Desensitization measurements are related to the 1-dB compression point and general linearity of
the receiver. Two signal generators are used in the setup of Figure 6.1.4. The controls of the
receiver under test are set as specified, usually to one of the narrower bandwidths and with AGC
defeated or manual gain control (MGC) set so as to avoid effects of the AGC system. The signal
in the operating channel is modulated and set to a specified level, usually to produce an output S/
N measurement of a particular level, for example, 13 dB. The interfering signal is moved off the
operating frequency by a predetermined amount so that it does not affect the S/N measurement
because of beat notes and is then increased in level until the S/N measurement is reduced by a
specified amount, such as 3 dB. More complete information can be obtained by varying the fre-
quency offset and plotting a desensitization selectivity curve. In some cases, limits for this curve
are specified. The curve may be carried to a level of input where spurious responses, reciprocal
mixing, or other effects prevent an unambiguous measurement. Measurements to 120 dB above
sensitivity level can often be achieved.
If the degradation level at which desensitization is measured is set to 1 dB, and the desensitizing
signal is well within the passband of the preselector filters, the desensitization level corresponds
to 1 dB gain compression (GC), which is experienced by the system up to the first mixer.
(See the subsequent discussion of intermodulation and intercept points.) A gain compression (or
blocking) dynamic range can be defined by comparing the input signal level at 1-dB GC to the
MDS; i.e., dynamic range (dB) equals the GC (input dBm) minus the MDS (input dBm). This is
sometimes referred to as the single-tone dynamic range, because only a single interfering signal
is needed to produce GC.
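The single-tone dynamic range definition is a simple difference of two dBm quantities, as the following sketch shows; the -15 dBm compression point and -126 dBm MDS are assumed example figures.

```python
def single_tone_dr_db(gc_input_dbm: float, mds_dbm: float) -> float:
    """Gain-compression (blocking) dynamic range: the input level at
    1-dB gain compression minus the MDS, both in dBm."""
    return gc_input_dbm - mds_dbm

# e.g., 1-dB compression at -15 dBm input with a -126 dBm MDS:
print(f"single-tone DR = {single_tone_dr_db(-15.0, -126.0):.0f} dB")  # 111 dB
```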
Figure 6.1.4 Test setup for measuring the dynamic range properties of a receiver.
6.1.4c IM
As described in previous sections, IM produces sum and difference frequency products of many
orders that manifest themselves as interference. The measurement of the IM distortion perfor-
mance is one of the most important tests for a communications receiver. No matter how sensitive
a receiver may be, if it has poor immunity to strong signals, it will be of little use. Tests for even-
order products determine the effectiveness of filtering prior to the channel filter, while odd-order
products are negligibly affected by those filters. For this reason, odd-order products are generally
much more troublesome than even-order products and are tested for more frequently. The sec-
ond- and third-order products are generally the strongest and are the ones most frequently tested.
A two-signal generator test set is required for testing, depending on the details of the specified
test.
For IM measurements, the controls of the receiver under test are set to the specified band-
widths, operating frequency, and other settings as appropriate, and the gain control is set on man-
ual (or AGC disabled). One signal generator is set on the operating frequency, modulated and
adjusted to provide a specified S/N (that for sensitivity, for example). The modulation is dis-
abled, and the output level of this signal is measured. This must be done using the IF output, the
SSB output with the signal generator offset by a convenient audio frequency, or with the BFO on
and offset. Alternatively, the dc level at the AM demodulator can be measured, if accessible. The
signal generator is then turned off. It may be left off during the remainder of the test or retuned
and used to produce one of the interfering signals.
6-16 Radio Receivers
For second-order IM testing, two signal generators are now set to two frequencies differing
from each other by the operating frequency. These frequencies can be equally above and below
the carrier frequency at the start, and shifted on successive tests to assure that the preselection
filters do not have any weak regions. The signal with the frequency nearest to the operating fre-
quency must be separated far enough to assure adequate channel filter attenuation of the signal
(several channels). For third-order IM testing, the frequencies are selected in accordance with the
formula given previously so that the one further from the operating frequency is twice as far as
the one nearer to the operating frequency. For example, the nearer interferer might be three chan-
nels from the desired frequency and the further one, six channels in the same direction.
In either case, the voltage levels of the two interfering signal generators are set equal and are
gradually increased until an output equal to the original channel output is measured in the chan-
nel. One of several performance requirements may be specified. If the original level is the sensi-
tivity level, the ratio of the interfering generator level to the sensitivity level may have a specified
minimum. Alternatively, for any original level, an interfering generator level may be specified
that must not produce an output greater than the original level. Finally, an intercept point (IP)
may be specified.
The IP for the nth order of intermodulation occurs because the product is a result of the inter-
fering signal voltages being raised to the nth power. With equal voltages, as in the test, the result-
ant output level of the product increases as
$V_{dn} = c_n V^n$
(6.1.7)
where cn is a proportionality constant and V is the common input level of the two signals.
Because the output from a single signal input V at the operating frequency is GvV, there is a the-
oretical level at which the two outputs would be equal. This value VIPn is the nth IP, measured at
the input. It is usually specified in dBm. In practice the IPs are not reached because as the ampli-
fiers approach saturation, the voltage at each measured frequency becomes a combination of
components from various orders of n. Figure 6.1.5 indicates the input-output power relationships
in second- and third-order IPs.
In Equation (6.1.7) we note that at the IP

$c_n V_{IPn}^n = G_v V_{IPn}$
(6.1.8)

This leads to

$c_n = G_v ( V_{IPn} )^{1 - n}$  and  $V_{dn} = G_v V \left[ \frac{V}{V_{IPn}} \right]^{n - 1}$
(6.1.9)
Figure 6.1.5 Input/output power relationships for second- and third-order intercept points.
$R_{dn} = 20 \log \left[ \frac{G_v V}{V_{dn}} \right] = ( n - 1 ) [ 20 \log V_{IPn} - 20 \log V ]$
(6.1.10)
If the intercept level is expressed in dBm rather than voltage, then the output power represented
by V must be similarly expressed.
The IM products we have been discussing originate in the active devices of the receiver, so
that the various voltages or power levels are naturally measured at the device output. The IP is
thus naturally referred to the device output and is so specified in most data sheets. In the forego-
ing discussion, we have referred the IP to the voltage level at the device input. If the input power
is required, we subtract from the output intercept level in decibels, the amplifier power gain or
loss. The relationship between input and output voltage at the IP is given in Equation 6.1.8. Ref-
erence of the IP to the device input is somewhat unnatural but is technically useful because the
receiver system designer must deal with the IP generation in all stages and needs to know at what
antenna signal level the receiver will produce the maximum tolerable IM products.
6-18 Radio Receivers
Consider the input power (in each signal) that produces an output IM product equal to the
MDS. The ratio of this power to the MDS may be called the third-order IM dynamic range. It
also is sometimes referred to as the two-tone dynamic range. Expressing Equation 6.1.10 in
terms of input power and input IP measured in dBm, we have
$R_{dn} = ( n - 1 ) [ IP_n(in) - P(in) ]$
(6.1.11)
When we substitute MDS for the distortion and MDS + DR for P(in) we obtain
$DR = \frac{2 [ IP_3(in) - MDS ]}{3}$
(6.1.13)
A dynamic range could presumably be defined for other orders of IM, but it is not common to do
so. From the three different definitions of dynamic range described in this section, it should be
clear why it is important to be careful when comparing receiver specifications for this character-
istic.
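Equations (6.1.11) and (6.1.13) translate directly into code. The sketch below evaluates both; the +10-dBm input IP3 and -126-dBm MDS in the example are assumed values for illustration.

```python
def im_rejection_db(ip_in_dbm: float, p_in_dbm: float, order: int = 3) -> float:
    """Equation (6.1.11): Rdn = (n - 1) * [IPn(in) - P(in)], the amount by
    which the nth-order IM product falls below the (equal) input tones."""
    return (order - 1) * (ip_in_dbm - p_in_dbm)

def two_tone_dr_db(ip3_in_dbm: float, mds_dbm: float) -> float:
    """Equation (6.1.13): third-order (two-tone) IM dynamic range,
    DR = (2/3) * [IP3(in) - MDS]."""
    return 2.0 * (ip3_in_dbm - mds_dbm) / 3.0

# e.g., a receiver with a +10-dBm input IP3 and a -126-dBm MDS:
print(f"two-tone DR = {two_tone_dr_db(10.0, -126.0):.1f} dB")  # 90.7 dB
```

Comparing this figure with the single-tone (blocking) dynamic range and the IM rejection at a given tone level makes concrete why the three dynamic-range definitions in this section can yield quite different numbers for the same receiver.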
Figure 6.1.6 Phase noise is critical to digitally modulated communication systems because of
the modulation errors it can introduce. Inter-symbol interference (ISI), accompanied by a rise
in BER, results when state values become so badly error-blurred that they fall into the
regions of adjacent states. This drawing depicts only the results of phase errors introduced
by phase noise; in actual systems, thermal noise, AM-to-PM conversion, differential group delay,
propagation, and other factors may also contribute to the spreading of state amplitude and
phase values. (Courtesy of Rohde & Schwarz.)
Figure 6.1.7 Effect of gain imbalance between I and Q channels on data signal phase constellation. (Courtesy of Rohde & Schwarz.)
vide the demodulator with about 1 V on weak signals, would need the capability to handle thou-
sands of volts for strong signals without some form of gain control. Consequently, receivers
customarily provide means for changing the gain of the RF or IF amplifiers, or both.
For applications where the received signal is expected to remain always within narrow limits,
some form of manually selectable control can be used, which may be set on installation and sel-
dom adjusted. There are few such applications. Most receivers, however, even when an operator
is available, must receive signals that vary by tens of decibels over periods of fractions of seconds
to minutes. The level also changes when the frequency is reset to receive other signals that
may vary over similar ranges but with substantially different average levels. Consequently, an
AGC is very desirable.

Figure 6.1.8 Effect of quadrature offset on data signal phase constellation. (Courtesy of Rohde & Schwarz.)

Figure 6.1.9 Effect of LO-feedthrough-based IQ offset on data signal phase constellation. (Courtesy of Rohde & Schwarz.)
Some angle modulation receivers provide gain control by using amplifiers that limit on strong
signals. Because the information is in the angle of the carrier, the resulting amplitude distortion
is of little consequence. Receivers that must preserve AM or maintain very low angle modulation
distortion use amplifiers that can be varied in gain by an external control voltage. In some cases,
this has been accomplished by varying the operating points of the amplifying devices, but most
modern systems separate solid-state circuits or switched passive elements to obtain variable
attenuation between amplifier stages with minimum distortion. For manual control, provision
can be made to let an operator set the control voltage for these variable attenuators. For automatic
control, the output level from the IF amplifiers or the demodulator is monitored by the AGC cir-
Figure 6.1.11 Block diagram of a dual loop AGC system for a communications receiver.
To test for the AGC range and input-output curve, a single signal generator is used (as in Fig-
ure 6.1.2) in the AM mode with the receiver’s AGC actuated. The signal generator is set to sev-
eral hundred microvolts, and the baseband output level is adjusted to a convenient level for
output power measurement. The signal generator is then tuned to its minimum level and the out-
put level is noted. The signal is gradually increased in amplitude, and the output level is mea-
sured for each input level, up to a maximum specified level, such as 2 V. Figure 6.1.12 shows
some typical AGC curves. In most cases, there will be a low-input region where the signal out-
put, rising out of the noise, varies linearly with the input. At some point, the output curve bends
over and begins to rise very slowly. At some high level, the output may drop off because of satu-
ration effects in some of the amplifiers. The point at which the linear relationship ends is the
threshold of the AGC action. The point at which the output starts to decrease, if within a speci-
fied range, is considered the upper end of the AGC control range. The difference between these
two input levels is the AGC control range. If the curve remains monotonic to the maximum input
test level, that level is considered the upper limit of the range. A measure of AGC effectiveness is
the increase in output from a specified lower input voltage level to an upper input voltage level.
For example, a good design might have an AGC with a threshold below 1 μV that is monotonic
to a level of 1 V and has a 3-dB increase in output between 1 μV and 0.1 V.
The foregoing applies to a purely analog system. One advantage of a digital signal processor
(DSP)-based system is that the AGC is handled internally. The receiver still requires a dual-loop
AGC, of which the input-stage AGC remains analog.
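As a hedged sketch of the kind of first-order gain-control loop a DSP-based receiver might run internally, the Python fragment below nudges a gain value toward a target output level on each sample. The target level, loop coefficient, and convergence behavior are illustrative assumptions, not values from the text.

```python
def agc_step(gain: float, sample: float, target: float = 1.0, alpha: float = 0.01) -> float:
    """One AGC iteration: nudge the gain so that the scaled sample
    magnitude approaches the target level. Small alpha gives a slow,
    stable loop; larger alpha gives faster attack at the risk of pumping."""
    error = target - abs(sample) * gain
    return gain * (1.0 + alpha * error)

gain = 1.0
for _ in range(2000):          # steady tone at one quarter of the target level
    gain = agc_step(gain, 0.25)
print(round(gain, 2))  # settles near 4.0
```

A practical DSP implementation would operate on an envelope estimate rather than raw samples and would use separate attack and decay coefficients, but the feedback structure is the same.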
Digital radio systems rely on digital modulation techniques. Because a digital radio system is a
hybrid analog/digital device, many of the test procedures outlined previously for analog receivers
are useful and important in characterizing a digital radio system. Additional tests, primarily
involving the analysis of bit error rates (BER), must also be run to properly identify any weak
points in a particular receiver design or basic architecture.
BER performance is of particular interest for phase-based modulation schemes, such as binary
PSK and quadrature PSK. For a given statistical phase-error characteristic, BER is degraded
according to the percentage of time that the phase error causes the signal position in signal
space to cross a decision boundary.
6.1.7 Bibliography
Engelson, M., and J. Herbert: “Effective Characterization of CDMA Signals,” Microwave Jour-
nal, pg. 90, January 1995.
Howald, R.: “Understand the Mathematics of Phase Noise,” Microwaves & RF, pg. 97, December
1993.
Johnson, J. B:, “Thermal Agitation of Electricity in Conduction,” Phys. Rev., vol. 32, pg. 97, July
1928.
Nyquist, H.: “Thermal Agitation of Electrical Charge in Conductors,” Phys. Rev., vol. 32, pg.
110, July 1928.
Pleasant, D.: “Practical Simulation of Bit Error Rates,” Applied Microwave and Wireless, pg. 65,
Spring 1995.
Rohde, Ulrich L.: “Key Components of Modern Receiver Design—Part 1,” QST, pg. 29, May
1994.
Rohde, Ulrich L., and David P. Newkirk: RF/Microwave Circuit Design for Wireless
Applications, John Wiley & Sons, New York, N.Y., 2000.
“Standards Testing: Bit Error Rate,” application note 3SW-8136-2, Tektronix, Beaverton, Ore.,
July 1993.
Using Vector Modulation Analysis in the Integration, Troubleshooting and Design of Digital RF
Communications Systems, Product Note HP89400-8, Hewlett-Packard, Palo Alto, Calif.,
1994.
Watson, R.: “Receiver Dynamic Range; Pt. 1, Guidelines for Receiver Analysis,” Microwaves &
RF, vol. 25, pg. 113, December 1986.
“Waveform Analysis: Noise and Jitter,” application note 3SW8142-2, Tektronix, Beaverton,
Ore., March 1993.
Chapter
6.2
The Radio Channel
Ulrich L. Rohde
6.2.1 Introduction1
The transmission of information from a fixed station to one or more mobile stations is governed
by the characteristics of the radio channel [1]. The RF signal arrives at the receiving antenna not
only on the direct path but is normally reflected by natural and artificial obstacles in its way.
Consequently, the signal arrives at the receiver several times in the form of echoes that are super-
imposed on the direct signal, as illustrated in Figure 6.2.1. This superposition may be an advan-
tage as the energy received in this case is greater than in single-path reception. However, this
characteristic may be a disadvantage when the different waves cancel each other under unfavor-
able phase conditions. In conventional car radio reception this effect is known as fading. It is par-
ticularly annoying when the vehicle stops in an area where the field strength is reduced because
of fading (for example, at traffic lights). Additional difficulties arise when digital signals are
transmitted. If strong echo signals (compared to the directly received signal) arrive at the receiver
with a delay in the order of a symbol period or more, time-adjacent symbols interfere with each
other. In addition, the receive frequency may be falsified at high vehicle speeds because of the
Doppler effect so that the receiver may have problems estimating the instantaneous phase in the
case of angle-modulated carriers. Both effects lead to a high symbol error rate even if the field
strength is sufficiently high.
Radio broadcasting systems using conventional frequency modulation are not seriously
affected by these interfering effects in most cases. If an analog system is replaced by a digital one
that is expected to offer advantages over the previous system, the designer must ensure that the
expected advantages—for example, improved audio S/N and the possibility to offer supplemen-
tary services to the subscriber—are not achieved at the expense of reception in hilly terrain or at
high vehicle speeds because of extreme fading. For this reason, a modulation method combined
1. This chapter is based on: Rohde, Ulrich L., and Jerry C. Whitaker: Communications Receiv-
ers: Principles and Design, 3rd ed., McGraw-Hill, New York, N.Y., 2000. Used with permis-
sion.
Figure 6.2.1 Mobile receiver affected by fading. (Courtesy of Rohde & Schwarz.)
with suitable error protection must be found for mobile reception in a typical radio channel that
is immune to fading, echo, and Doppler effects.
Figure 6.2.2 Receive signal as a function of time or position. (Courtesy of Rohde & Schwarz.)
Figure 6.2.3 Rayleigh and Rice distribution. (Courtesy of Rohde & Schwarz.)
When the receiver is moving, the channel impulse response is a function of time and of the
delays τi; that is, it corresponds to

h(t, τ) = Σ (i = 1 to N) ai δ(t − τi)     (6.2.1)
Figure 6.2.4 Bit error rate in a Rayleigh channel. (Courtesy of Rohde & Schwarz.)
This shows that delta functions sent at different times t cause different reactions in the radio
channel.
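Equation (6.2.1) describes the channel as a tapped delay line. A minimal sketch of that model, with illustrative tap amplitudes and delays (not taken from the text), shows the echo superposition:

```python
import numpy as np

# Tapped-delay-line channel per Eq. (6.2.1): each echo i contributes an
# amplitude a_i at delay tau_i. Tap values are illustrative; delays are
# quantized to the sample period for this sketch.
fs = 1e6                                           # sample rate, Hz
taps = [(1.0, 0.0), (0.5, 3e-6), (0.25, 7e-6)]     # (a_i, tau_i in seconds)

def channel(x):
    """Superimpose delayed, scaled copies of x (direct path plus echoes)."""
    y = np.zeros(len(x))
    for a, tau in taps:
        d = int(round(tau * fs))
        y[d:] += a * x[:len(x) - d]
    return y

impulse = np.zeros(16)
impulse[0] = 1.0
h = channel(impulse)   # impulse response: spikes of 1.0, 0.5, 0.25
print(h[:8])           # nonzero at samples 0, 3 and 7 (0, 3 and 7 us)
```

Exciting the model with a delta function recovers the tap weights directly, which is exactly what Eq. (6.2.1) states.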
In many experimental investigations, different landscape models with typical echo profiles
were created. The most important are:
Ts > 10 Td     (6.2.2)
Figure 6.2.7 Artificial and natural echoes in the single-frequency network. (Courtesy of Rohde &
Schwarz.)
This has the consequence that relatively narrowband modulation methods have to be used. If this
is not possible, channel equalizing is required.
For channel equalizing, a continuous estimation of the radio channel is necessary. The estima-
tion is performed with the aid of a periodic transmission of data known to the receiver. In net-
works, a midamble consisting of, for example, 26 bits—the training sequence—can be
transmitted with every burst. The training sequence corresponds to a characteristic pattern of I/Q
signals that is held in a memory at the receiver. The baseband signals of every received training
sequence are correlated with the stored ones. From this correlation, the channel can be esti-
mated; the properties of the estimated channel will then be fed to the equalizer, as shown in Fig-
ure 6.2.8.
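The correlation step described above can be sketched numerically. A Barker-13 code stands in for the 26-bit midamble (an assumption made here because its low autocorrelation sidelobes make the echo structure easy to read off), and the simulated channel is a direct path plus one half-amplitude echo:

```python
import numpy as np

# Channel estimation by correlating the received burst against a stored
# training sequence. Barker-13 is used as a stand-in training sequence.
train = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)

# Simulated channel: direct path plus one echo, 2 symbols late, half amplitude
received = np.concatenate([train, np.zeros(4)])
received[2:2 + len(train)] += 0.5 * train

# Correlation peaks estimate the echo delays and relative amplitudes
corr = np.correlate(received, train, mode="full") / np.dot(train, train)
lag0 = len(train) - 1          # index corresponding to zero lag
print(corr[lag0])              # near 1.0 (direct path)
print(corr[lag0 + 2])          # near 0.5 (echo), offset by small sidelobes
```

The peak positions give the delay profile and the peak heights the tap amplitudes; these are the channel properties handed to the equalizer.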
The equalizer uses the Viterbi algorithm (maximum likelihood sequence estimation) for the
estimation of the phases that most likely have been sent at the sampling times. From these phases
the information bits are calculated (Figure 6.2.9). A well-designed equalizer will then superimpose
the energies of the individual echoes constructively, so that the results in an area where the echoes
are moderately delayed (delay times up to 16 μs at the receiver in this example) are better
than in an area with no significant echoes (Figure 6.2.10). The remaining bit errors are eliminated
by another Viterbi decoder operating on the convolutionally encoded data sequence from the
transmitter.

Figure 6.2.10 BERs after the channel equalizer in different areas. (Courtesy of Rohde &
Schwarz.)
The ability of a mobile receiver to work in a hostile environment such as the radio channel
with echoes must be proven. The test is performed with the aid of a fading simulator, which sim-
ulates various scenarios with different delay times and different Doppler profiles. A signal gener-
ator produces undistorted I/Q modulated RF signals that are downconverted into the baseband.
Next, the I/Q signals are digitized and split into different channels where they are delayed and
attenuated, and where Doppler effects are superimposed. After combination of these distorted
signals at the output of the baseband section of the simulator, the signals modulate the RF carrier,
which is the test signal for the receiver under test (Figure 6.2.11).
fd = (v / c) fc cos α     (6.2.3)
Where:
v = speed of vehicle
c = speed of light
fc = carrier frequency
α = angle between v and the line connecting transmitter and receiver
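Plugging illustrative numbers into Equation (6.2.3) shows the scale of the shift for a vehicle receiving an FM-band carrier (the speed and carrier frequency below are assumptions for the example):

```python
import math

# Doppler shift per Eq. (6.2.3): fd = (v / c) * fc * cos(alpha).
def doppler_shift(v_mps, fc_hz, alpha_rad, c=3.0e8):
    return (v_mps / c) * fc_hz * math.cos(alpha_rad)

v = 108 / 3.6                                # 108 km/h -> 30 m/s
print(doppler_shift(v, 100e6, 0.0))          # about 10 Hz, head-on worst case
print(doppler_shift(v, 100e6, math.pi / 2))  # about 0 Hz, broadside
```

Even the worst-case shift is small compared with the carrier, but as the text notes, it matters for the phase trajectory of angle-modulated signals.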
In the case of multipath reception, the signals on the individual paths arrive at the receiving
antenna with different Doppler shifts because of the different angles αi, and the receive spectrum
is spread. Assuming an equal distribution of the angles of incidence, the power density spectrum
can be calculated as follows
P( f ) = 1 / [π √( fd² − f² )]   for | f | < fd     (6.2.4)
(Δf )c = 1 / Td     (6.2.5)
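Equation (6.2.5) ties the correlation (coherence) bandwidth directly to the delay spread. Two illustrative delay spreads (values assumed here; 16 μs echoes the equalizer example quoted earlier):

```python
# Coherence bandwidth per Eq. (6.2.5): (delta_f)_c = 1 / Td, where Td is
# the delay spread of the channel. Delay-spread values are illustrative.
def coherence_bandwidth_hz(delay_spread_s):
    return 1.0 / delay_spread_s

print(coherence_bandwidth_hz(1e-6))    # about 1 MHz for a 1 us delay spread
print(coherence_bandwidth_hz(16e-6))   # about 62.5 kHz for a 16 us spread
```

A signal wider than this bandwidth sees frequency-selective fading, which is the situation Figure 6.2.13 illustrates.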
Figure 6.2.13 Effect of transfer function on modulated RF signals. (Courtesy of Rohde &
Schwarz.)
Frequencies spaced by less than the coherence bandwidth are subject to the same fading. If the
coherence bandwidth is narrow and the associated delay spread
wide, even very close adjacent frequencies are attenuated differently by the channel. The effect
on a broadband-modulated carrier with respect to the coherence bandwidth is obvious. The side-
bands important for the transmitted information are attenuated to a different degree. The result is
a considerable distortion of the receive signal combined with a high bit error rate even if the
received field strength is high. (See Figure 6.2.13).
Figure 6.2.14 Channel impulse response and transfer function as a function of time. (Courtesy of
Rohde & Schwarz.)
Figure 6.2.15 Phase uncertainty caused by Doppler effect. (Courtesy of Rohde & Schwarz.)
In flat terrain without reflecting objects, the channel impulse response and the transfer function
measured at different times are the same.
The effect on information transmission can be illustrated with a simple example. In the case
of MPSK modulation using hard keying, the transmitter holds the carrier phase for a certain
period of time; that is, for the symbol period T. In the case of soft keying with low-pass-filtered
baseband signals for limiting the modulated RF carrier, the nominal phase is reached at a spe-
cific time—the sampling time. In both cases the phase error ϕf = fd Ts is superimposed onto the
nominal phase angle, which yields a phase uncertainty of Δϕ = 2ϕf at the receiver. The longer the
symbol period, the greater the angle deviation (Figure 6.2.15). Considering this characteristic of
the transmission channel, a short symbol period of Ts << (Δt)c should be used. However, this
requires broadband modulation methods.
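The dependence of the phase uncertainty on symbol period can be put in numbers. Interpreting ϕf = fd Ts as a fraction of a carrier cycle is an assumption made for this sketch, as are the example values:

```python
# Phase uncertainty from Doppler, per the text: phi_f = fd * Ts, with a
# receiver-side uncertainty of delta_phi = 2 * phi_f. We treat fd * Ts as
# a fraction of a cycle (assumption) and convert to degrees.
def phase_uncertainty_deg(fd_hz, ts_s):
    phi_f = fd_hz * ts_s           # phase error accumulated over one symbol
    return 2.0 * phi_f * 360.0     # uncertainty delta_phi, in degrees

print(phase_uncertainty_deg(10.0, 1e-3))  # long symbol (1 ms): about 7.2 deg
print(phase_uncertainty_deg(10.0, 1e-5))  # short symbol (10 us): far smaller
```

The numbers reproduce the text's conclusion: the longer the symbol period, the greater the angle deviation, which argues for short symbols and hence broadband modulation.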
Figure 6.2.16 shows the field strength or power arriving at the mobile receiver if the vehicle
moves in a Rayleigh distribution channel. Because the phase depends on the vehicle position, the
receiver moves through positions of considerably differing field strength at different times (time-
dependence of radio channel). In the case of frequency-selective channels, this applies to one fre-
quency only; that is, to a receiver using a narrowband IF filter for narrowband emissions. As Fig-
ure 6.2.16 shows, this effect can be reduced by increasing the bandwidth of the emitted signal
and consequently the receiver bandwidth.
6.2.3 References
1. Rohde, Ulrich L., and David P. Newkirk: RF/Microwave Circuit Design for Wireless Appli-
cations, John Wiley & Sons, New York, N.Y., 2000.
Chapter
6.3
AM and FM Receivers
6.3.1 Introduction1
A simplex transmitter-receiver system is the simplest type of communication system possible. A
single source transmits to a single receiver. When the receiver can respond to a transmission
from the source and relay voice or other information back to the transmission site, a duplex
arrangement is realized. A two-way business radio system operating on different transmit and
receive frequencies employs a duplex arrangement. AM and FM broadcast systems utilize a sim-
plex star configuration in which one transmitter feeds many receivers. For greatest efficiency,
the receivers in a simplex star should be made as simple as possible, concentrating technical
complexity at the transmit site. Radio broadcasting today follows this basic rule.
AM and FM stations transmit at high power levels to facilitate simpler radio designs. AM sta-
tions in the U.S. can operate at up to 50 kW, while FM stations can operate at up to 100 kW. This
essentially eliminates the need for a sensitive antenna at the receiver. Stereo AM and stereo FM
systems are, likewise, designed to permit the more complex operations to be performed in the
encoding stage at the transmitter, rather than in the decoding circuits of the receiver.
1. Portions of this chapter were adapted from Rohde, Ulrich L., and Jerry C. Whitaker: Commu-
nications Receivers, 3rd ed., McGraw-Hill, New York, N.Y., 2000. Used with permission.
Figure 6.3.1 Simplified block diagram of a superheterodyne receiver. Depending on the sophisti-
cation of the radio, some of the stages shown may be combined into a single circuit. (From [1].
Used with permission.)
Channel filtering is accomplished by one or more fixed-frequency filters in the IF. This is a
decided advantage when the receiver must cover a wide frequency band, because it is more diffi-
cult to maintain constant bandwidth in a tunable filter than in a fixed one. An IF of 455 kHz is
standard for AM broadcast receivers, and 10.7 MHz for FM receivers.
As shown in Figure 6.3.1, the input signal is fed from the antenna to a preselector filter and
RF amplifier. The input circuit matches the antenna to the first amplifying device to achieve the
best sensitivity. It also provides the necessary selectivity to reduce the possibility of overload in
the first amplifier stage caused by strong, undesired signals. Because sufficient selectivity must
be provided to eliminate the image and other spurious signals prior to the mixer, preselection fil-
tering may be broken into two or more parts with intervening amplifiers (in premium receiver
designs) to minimize the effects of filter losses on the receiver noise figure (NF).
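The preselector's image-rejection task follows from simple arithmetic: with high-side LO injection (assumed here), the image lies 2 × IF above the desired signal. Using the standard broadcast IFs quoted above:

```python
# Image frequency for a superheterodyne with high-side LO injection:
# f_lo = f_rf + f_if, and the image at f_lo + f_if responds to the same IF.
def image_freq_hz(f_rf_hz, f_if_hz):
    f_lo = f_rf_hz + f_if_hz      # high-side LO (assumed)
    return f_lo + f_if_hz         # image = desired + 2 * IF

print(image_freq_hz(1000e3, 455e3))   # AM 1000 kHz -> image at 1910 kHz
print(image_freq_hz(98.1e6, 10.7e6))  # FM 98.1 MHz -> image at 119.5 MHz
```

The wide 2 × IF spacing of the 10.7-MHz FM IF is part of why modest preselection suffices there, while the 455-kHz AM IF places the image only 910 kHz away, inside the broadcast band itself.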
The LO provides a strong stable signal at the proper frequency to the mixer for conversion of
the desired signal to the IF. (The mixer stage may also be called the first detector, converter, or
frequency changer.) The operating frequency of the LO may be controlled by a variable capacitor
or varactor diode operating in a phase-locked-loop (PLL) system. Frequency synthesizer PLL
designs are the basis for electronically-tuned radios (ETRs).
The output of the mixer is applied to the IF amplifier, which raises the desired signal to a suit-
able power level for the demodulator. The demodulator derives from the IF signal the modulated
baseband waveform, which may be amplified by a baseband amplifier before being applied to a
decoder (for stereo broadcasts) or audio preamplifier (for monophonic broadcasts).
Figure 6.3.2 The causes of multipath in FM broadcasting. Distortion of the demodulated FM sig-
nal is related to both the direct-reflected signal ratio and the delay time of the reflected signal. Dis-
tortion increases as the signal ratio approaches 1:1, and as the secondary path delay increases.
FM is, however, affected by reflections of the transmitted signal that can mix at the receiver
and cause fading or distortion. This phenomenon is known as multipath (see Figure 6.3.2).
Reflections can be caused by mountains, steel-frame buildings, vehicles, and other objects. The
interference patterns set up as a result of multipath cause the signal strength to vary from one
location to another in an apparently random manner. It is often possible, for example, to improve
reception simply by moving the antenna 2 ft or so (about 1/4 wavelength at FM frequencies).
The noise level at VHF is low compared to the MF band. Artificial noise can produce impul-
sive interference, but the use of hard limiting in receivers will eliminate most amplitude-related
noise. Interference from other FM stations can, however, produce noise at the receiver that usu-
ally cannot be stripped off by limiting.
It is easier to design a receiver that can be priced upwards of $400 than it is to build the
inexpensive radio just described. The design of any AM-FM radio for use by consumers is an
exercise in compromise.
Most radios today have been reduced to just a handful of LSI chips, or even a single VLSI
device. This dramatic move toward miniaturization has eliminated the traditional stages that
technicians are familiar with. Still, each stage exists in one form or another in virtually every
radio produced today. The circuits may be hidden on a slab of silicon, but they are there just the
same.
Whip Antenna
Nearly all automobile and some portable radios use whip antennas of about 2 to 3 ft in length for
both AM and FM reception. Seldom does the mounting surface resemble a plane. The problem
of coupling a whip optimally to the first active circuit is a difficult one. The antenna presents a
complex impedance whose capacitive, inductive, and resistive components vary with the
frequency and the surrounding physical structures. Still, given the ability to switch coupling cir-
cuitry between AM and FM bands, coupling to the RF amplifier can be satisfactory for reasonable
efficiency and bandwidth.

Figure 6.3.3 Typical circuits used for coupling an antenna to a tuned resonant circuit. (From [1].
Used with permission.)
Loop Antenna
The loop antenna has been used in portable AM receivers for many years. Its response differs
from the monopole in that when the face of the loop is vertical, it responds to the magnetic field
rather than the electric field. Instead of being omnidirectional in azimuth (like a whip), the loop
responds to the cosine of the angle between its face and the direction of the desired transmission.
This yields the familiar figure-eight pattern, which makes the loop useful for direction finding
by providing a sharp null for waves arriving perpendicular to the face.
Loops used for AM broadcast reception incorporate a high-permeability (ferrite) core to
facilitate reduced size. Such a loop may be tuned by a capacitance and connected directly to the
input device of the receiver. Coupling is usually simple, as illustrated in Figure 6.3.4. If the loop
has an inductance lower than that required for proper input impedance, it may be connected in
series with an additional inductance for tuning (as shown). If the loop impedance is too high, the
receiver input may be tapped down on the circuit.

Figure 6.3.4 Examples of coupling circuits used for AM broadcast reception using a loop antenna.
(From [1]. Used with permission.)
Modern receivers use one of several filter technologies in the IF chain, including:
• LC filter
• Monolithic quartz filter
• Ceramic filter
The classical approach to radio filtering involved cascading single- or dual-resonator filters
separated by amplifier stages. Overall selectivity was provided by this combination of one- or
two-pole filters. This approach, however, had two distinct disadvantages: 1) the circuits were dif-
ficult to align properly, and 2) the system was susceptible to IM and overload (even in the early
IF stages) from out-of-band signals. The classic approach did have some advantages, though.
First, limiting from strong impulse noise would occur in early stages where the broad bandwidth
would reduce the noise energy more than after complete selectivity had been achieved. Second,
such designs were relatively inexpensive to manufacture.
Modern radios use multiresonator filters inserted as early as possible in the amplification
chain to reduce nonlinear distortion, simplify alignment, and permit easy attainment of a variety
of selectivity patterns. The simple single- or dual-resonator pairs are now used primarily for
impedance matching between stages or to reduce noise between broadband cascaded amplifiers.
LC Filter
Inductor-capacitor resonators are limited to Q values on the order of a few hundred for reason-
able sizes. In most cases, designers must be satisfied with rather low Q. The size of the filter
depends on the center frequency. Two separate LC filters can easily cover the AM and FM broad-
cast bands. Skirt selectivity depends on the number of resonators used. Ultimate filter rejection
can be made higher than 100 dB with careful design. Filter loss depends on the percentage band-
width required and the resonator Q. It can be as high as 1 dB per resonator at narrow bandwidths.
This type of filter does not generally suffer from nonlinearities. Frequency stability is limited
by the individual components and cannot be expected to achieve much better than 0.1 percent of
center frequency under extremes of temperature and aging. Except for front ends that require
broad bandwidth filters, LC filters have been largely superseded in modern radios by new filter
technologies.
Monolithic Quartz Filter
A monolithic quartz filter integrates several resonators on a single quartz substrate. Its
characteristics resemble those of a discrete-resonator crystal filter, except that the bandwidth is
limited to several tenths of a percent. Monolithic quartz filters are also smaller and lighter than
discrete resonator filters.
Ceramic Filter
Piezoelectric ceramics can be used to achieve some of the characteristics of a quartz filter, but at
a lower cost. Such filters are comparable in size to monolithic quartz filters but are available over
a limited center frequency range (100 to 700 kHz). This limits ceramic filters to IF applications
for AM reception (455 kHz). The cutoff rate, stability, and accuracy of a ceramic filter are not as
good as quartz but are adequate for many applications. Single- and double-resonator structures
are available. Multiple-resonator filters use electrical coupling between sections.
Figure 6.3.5 A gain-controlled RF amplifier integrated circuit suitable for use in AM-FM receivers.
The minimum useful frequency for a PIN diode attenuator varies inversely with the minority
carrier lifetime; for available diodes, the low end of the HF band is near this limit.
Figure 6.3.6 Gain-control curves of the CA3002 RF amplifier shown in Figure 6.3.5. The “Mode A”
and “Mode B” traces refer to different circuit configurations for the device.
6.3.3d Mixer
In the mixer circuit, the RF and LO signals are acted upon by the nonlinear properties of a device
(or devices) to produce a third frequency, the IF. At AM frequencies, receivers are usually built
without RF preamplifiers; the antenna is fed directly to the mixer stage. In this frequency range,
artificial and atmospheric noise is usually greater than the receiver NF. At FM frequencies, an
RF amplifier is typically used. The mixer is located in the signal chain prior to the narrow filter-
ing of the first IF.
Ideally, the mixer should accept the RF and LO inputs and produce an output having only one
frequency (sum or difference), with signal modulation precisely transferred to this IF. In actual
practice, however, mixers produce the desired IF but also many undesired components that must
be filtered. Any device with nonlinear transfer characteristics can act as a mixer. For the pur-
poses of this discussion, two classes will be discussed:
• Passive mixers, which use diodes as the mixing elements
• Active mixers, which employ gain devices (e.g., bipolar transistors or FETs)
Figure 6.3.7 Schematic of a PIN diode attenuator. (From [1]. Used with permission.)
Figure 6.3.8 Simplified block diagram of a receiver with AGC. (From [1]. Used with permission.)
Passive Mixer
Passive mixers have been built using germanium and silicon diodes. The development of hot car-
rier diodes, however, has resulted in a significant improvement in passive mixers.
A single diode can be used to build a mixer. The performance of such a circuit is poor,
though, because the RF and LO frequencies (as well as their harmonics and other odd and even
mixing products) all appear at the output. As a result, a large number of spurious components are
produced that are difficult to remove. Moreover, there is no isolation of the LO and its harmonics
from the input circuit, necessitating the use of an RF amplifier to prevent oscillator radiation
from the antenna.
Active Mixer
The simplest type of active mixer uses a FET or bipolar transistor with the LO and RF signals
applied to the gate-source or base-emitter junction. This unbalanced mixer has the same draw-
backs as the simple diode mixer and is not used for high-performance receivers.
An improved configuration uses a dual-gate FET or cascode bipolar circuit with the LO and
RF signals applied to different gates (bases). The balanced transistor arrangement of Figure 6.3.5
can also be used as a mixer with the LO applied to the base of Q3 and the signal applied to the
bases of Q1 and/or Q5.
Active mixers can be implemented using a wide variety of devices and configurations,
depending on the specifications and cost structure of the receiver. Figure 6.3.11 shows a push-
pull balanced FET mixer. The circuit uses two dual-gate FETs in a push-pull arrangement
between the RF input (applied to the first gates) and the IF output. The oscillator is injected in
parallel on the second gates.

Figure 6.3.10 Schematic of a double-balanced passive mixer. (From [1]. Used with permission.)

Figure 6.3.11 Schematic diagram of a push-pull dual gate FET balanced mixer.
Active mixers have gain and are sensitive to mismatch conditions. If operated at high levels,
the collector or drain voltage can become so high that the base-collector or gate-drain junction
can open during a cycle and cause severe distortion. Control of the RF input by filtering out-of-
band signals and AGC are important considerations for active mixer designs. Advantages of the
active mixer include lower LO drive requirements and the possible elimination of an RF pream-
plifier stage.
• Frequency adjustment accuracy to match the center carrier frequencies of AM and FM broad-
cast stations
Synthesizers may be categorized into two basic classes:
• Direct, in which the LO output is derived from the product of multiple mixing and filtering
• Indirect, in which the LO output is derived from a phase-locked loop that samples the direct
output to reduce spurious signals.
PLL Synthesizer
Most AM-FM receivers incorporating a frequency synthesized LO use a single-loop digital PLL
of the type shown in Figure 6.3.12. When describing frequency synthesizers mathematically, a
linearized model is generally used. Because most effects occurring in the phase detector are
highly nonlinear, however, only the so-called piecewise linear treatment allows adequate approx-
imation. The PLL is nonlinear because the phase detector is nonlinear. However, it can be accu-
rately approximated by a linear model when the loop is in lock.
Assume that the voltage-controlled oscillator (VCO) of Figure 6.3.12 is tunable over a range
of 88 to 108 MHz. The output is divided to the reference frequency in a programmable divider
stage whose output is fed to one of the inputs of the phase-frequency detector and compared with
the reference frequency (fed to the other input). The loop filter at the output of the phase detector
suppresses the reference frequency components, while also serving as an integrator. The dc con-
trol voltage output of the loop filter pulls the VCO until the divided frequency and phase equal
those of the reference. A fixed division of the frequency standard oscillator (not shown in Figure
6.3.12) produces the reference frequency of appropriate step size. The operating range of the
PLL is determined by the maximum operating frequency of the programmable divider, its divi-
sion range ratio, and the tuning range of the VCO.
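The lock condition described above reduces to f_VCO = N × f_ref, so the reference frequency sets the channel step. A sketch with illustrative FM-band numbers (the 100-kHz step and the neglect of the IF offset are assumptions made here):

```python
# Single-loop PLL arithmetic: when locked, the divided VCO frequency equals
# the reference, so f_vco = N * f_ref and f_ref is the tuning step size.
f_ref = 100e3                         # reference = channel step, Hz (assumed)

def vco_freq_hz(n):
    return n * f_ref

def divider_for(f_target_hz):
    return round(f_target_hz / f_ref)

print(divider_for(98.1e6))            # N = 981 for a 98.1 MHz carrier
print(vco_freq_hz(981) / 1e6)         # back to 98.1 MHz
```

This also makes the resolution trade-off concrete: halving the step size doubles N, and loop gain and phase-noise multiplication scale with the division ratio.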
There are various choices of loop filter types and response. Because the VCO by itself is an
integrator, a simple RC filter following the phase detector can be used. If the gain of the passive
loop is too low to provide adequate drift stability of the output phase (especially if a high division
ratio is used), an active amplifier may be used as an integrator. In most frequency synthesizers,
an active filter-integrator is preferred to a passive one. Figure 6.3.13 shows a passive RC filter,
and Figure 6.3.14 an active filter for a second-order PLL.
Frequency Divider
Frequency dividers are commonly built using transistor-transistor logic (TTL), complementary
MOS (CMOS), and low-power emitter-coupled logic (ECL) IC technologies. Dividers come in
two common categories: synchronous counters and asynchronous counters. The frequency range
of CMOS, depending on the process, is limited to 10 to 30 MHz. TTL operates successfully up to
100 MHz in a ripple counter configuration. In a synchronous counter configuration, TTL is
limited to perhaps 30 MHz.
Frequency extension is possible through the use of an ECL prescaler, available in variable-ratio
and fixed-ratio configurations. The term prescaling is generally used in the sense of a predivider
that is nonsynchronous with the rest of the chain. Fixed-ratio prescalers are used as ripple
counters preceding a synchronous counter. A single-loop synthesizer loses resolution by the
amount of prescaling.

Figure 6.3.13 Schematic diagram of a PLL passive RC filter.

Figure 6.3.14 Schematic diagram of an active filter for a second-order PLL.
Figure 6.3.15 shows a block diagram of the MC12012 (Motorola) variable-ratio dual-modulus
prescaler. Through external programming, this ECL divider can be made to divide in various
ratios. With proper clocking, the device can be considered a synchronous counter. With such a
system, it is possible to increase the maximum operating frequency to about 400 MHz without
losing resolution.
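The resolution-preserving property of a dual-modulus prescaler comes from its counting arithmetic: dividing by P + 1 for A cycles and by P for the remaining M − A cycles gives an overall division of N = P·M + A. A sketch (the 10/11 moduli match the MC12012; the counter values are illustrative):

```python
# Dual-modulus (pulse-swallow) prescaler arithmetic: the prescaler divides
# by P+1 for A output cycles and by P for the remaining M - A, so the
# overall division ratio is N = P*M + A. Requires M >= A.
def total_division(P, M, A):
    assert M >= A, "swallow count must not exceed main count"
    return (P + 1) * A + P * (M - A)   # algebraically equal to P*M + A

print(total_division(10, 98, 1))   # N = 981
print(total_division(10, 98, 0))   # N = 980, the adjacent channel
```

With M fixed and the swallow count A stepping 0, 1, 2, …, adjacent division ratios remain reachable, which is the sense in which a properly clocked dual-modulus counter avoids the resolution loss of simple fixed prescaling.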
Variable-Frequency Oscillator
The LO in the receiver must be capable of being tuned over a specified frequency range, offset
from the desired operating band(s) by the IF. Prior to the advent of the varactor diode and good
switching diodes, it was customary to tune an oscillator mechanically using a variable capacitor
with an air dielectric, or in some cases by moving a powdered iron core inside a coil to make a
variable inductor. Automobile radios typically used the variable-inductor method of tuning the
AM broadcast band. Figure 6.3.16 shows the classic VFO circuits commonly used in receivers.
Different configurations are used in different applications, depending on the range of tuning and
whether the tuning elements are completely independent or have a common element (such as the
rotor of a tuning capacitor).
Newer receivers control the oscillator frequency by electronic rather than mechanical means.
Tuning is accomplished by a voltage-sensitive capacitor (varactor diode). Oscillators that are
tuned by varying the input voltage are referred to as voltage-controlled oscillators (VCOs).
The capacitance versus voltage curves of a varactor diode depend on the physical composi-
tion of the diode junction. Maximum capacitance values range up to a few hundred picofarads,
and useful capacitance ratios range from about 5 to 15. Figure 6.3.17 shows three typical tuning circuits
incorporating varactor diodes. In all cases the voltage is applied through a large value resistor.
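Because the resonant frequency of an LC circuit is 1/(2π√(LC)), a varactor capacitance ratio of Cmax/Cmin tunes the circuit over a frequency ratio of √(Cmax/Cmin); the useful ratios of 5 to 15 quoted above thus give tuning ratios of roughly 2.2 to 3.9. A sketch with illustrative component values:

```python
import math

# Tuning range of a varactor-tuned LC resonator. Component values below
# are illustrative, not taken from the text.
def resonant_freq_hz(L, C):
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

L = 1e-7                             # 100 nH tank inductance (assumed)
f_hi = resonant_freq_hz(L, 20e-12)   # varactor near minimum capacitance
f_lo = resonant_freq_hz(L, 180e-12)  # varactor near maximum capacitance
print(f_lo / 1e6, f_hi / 1e6)        # band edges in MHz
print(f_hi / f_lo)                   # tuning ratio = sqrt(180/20) = 3
```

A 9:1 capacitance swing therefore covers a 3:1 frequency range, comfortably more than either broadcast band requires.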
Figure 6.3.15 Block diagram of a divide-by-10/11 dual-modulus prescaler IC, the Motorola
MC12012. (Courtesy of Motorola.)
Diode Switching
Because diodes have a low resistance when biased in one direction and a very high resistance
when biased in the other, they may be used to switch RF circuits. A sufficiently large bias voltage
may be applied to keep the diode on when it is carrying RF currents, or off when it is subjected to
RF voltages. It is important that, in the forward-biased condition, the diode add as little resis-
tance as possible to the circuit and that it be capable of handling the maximum RF current plus
the bias current. When the diode is reverse-biased, the breakdown voltage must be higher than
the combined bias and RF peak voltage in the circuit. Almost any type of diode can perform
switching, but at high frequencies, PIN diodes are especially useful. Figure 6.3.18 shows three
examples of diode switching in RF circuits.
The advantage of electronic tuning using varactor diodes is only fully realized when band
selection also takes place electronically. Diode switches are preferable to mechanical switches
because of their high reliability. Diode switches eliminate the need for a mechanical link between
front panel controls and the tuned circuits to be switched.
Crystal-Controlled Oscillator
Piezoelectric quartz crystals are the basis for most PLL reference oscillators. Quartz crystals
have resonances that are much more stable than the LC circuits discussed so far and also have
very high Q. Consequently, quartz crystal resonators are typically used for high-stability fixed-
frequency oscillators. A piezoelectric material is one that develops a voltage when it is under a
mechanical strain or is placed under strain by an applied voltage. A physical piece of such mate-
rial, depending upon its shape, can have a number of mechanical resonances. By appropriate
shaping and location of the electrodes, one or another resonant mode of vibration can be favored,
so that the resonance may be excited by an external voltage.
The crystal exhibits at its frequency of oscillation the equivalent electric circuit shown in Fig-
ure 6.3.19. The series resonant circuit represents the effect of the crystal vibrator, and the shunt
capacitance is the result of the coupling plates and of capacitance to surrounding metallic objects
(such as the metal case). The resonant circuit represents the particular vibrating mode that is
excited. If more than one mode can be excited, a more complex circuit would be required to
represent the crystal.
Figure 6.3.16 Schematic diagrams of common oscillator circuits using vacuum-tube, transistor, and FET active circuits. (From [1]. Used with permission.)
Figure 6.3.17 Typical tuning circuits using varactor diodes as the control element: (a) single diode in the circuit low side, (b) single diode in the circuit high side, (c) two diodes in a series back-to-back arrangement. (From [2]. Used with permission.)
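The two resonances implied by this equivalent circuit can be illustrated numerically. In the Python sketch below, the motional arm alone sets the series-resonant frequency, while the shunt capacitance produces a slightly higher parallel (antiresonant) frequency. The component values are illustrative assumptions for a 10-MHz fundamental crystal, not data from this handbook:

```python
import math

# Assumed (illustrative) parameters for a 10-MHz fundamental crystal;
# real values come from the crystal's data sheet.
L_m = 0.0253      # motional inductance, H
C_m = 1e-14       # motional capacitance, F (0.01 pF)
C_0 = 5e-12       # shunt (holder) capacitance, F

# Series resonance of the motional arm alone
f_s = 1 / (2 * math.pi * math.sqrt(L_m * C_m))

# Parallel (antiresonant) frequency, where the motional arm resonates
# with the shunt capacitance: f_p = f_s * sqrt(1 + C_m / C_0)
f_p = f_s * math.sqrt(1 + C_m / C_0)

print(f"series resonance   f_s = {f_s/1e6:.4f} MHz")
print(f"parallel resonance f_p = {f_p/1e6:.4f} MHz")
print(f"separation ~ {(f_p - f_s)/1e3:.1f} kHz")
```

Because the motional capacitance is tiny compared with the shunt capacitance, the two resonances are separated by only a small fraction of a percent, which is why a crystal oscillator can be pulled over only a very narrow range by external circuit capacitance.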
The most common type of circuit using a fundamental (AT) crystal is an aperiodic oscillator,
which has no selective circuits other than the crystal. Such oscillators, often referred to as paral-
lel resonant oscillators, use the familiar Pierce and Clapp configurations (see Figure 6.3.20).
Figure 6.3.18 Typical circuits using diodes for band switching: (a) series diode arrangement, (b)
shunt-diode arrangement, (c) use of both series and shunt diodes. (From [1]. Used with permis-
sion.)
Figure 6.3.19 The equivalent electric circuit of a crystal at resonance (spurious and overtone
modes not shown).
Figure 6.3.20 Common parallel resonant circuits used in fundamental crystal oscillators: (a)
Pierce circuit, (b) Clapp circuit, collector grounded, (c) Clapp circuit, base grounded. (From [2].
Used with permission.)
Figure 6.3.21 AM demodulators with idealized waveforms: (a) average demodulator and resulting
waveform, (b) envelope detector and resulting waveform. (From [1]. Used with permission.)
• Single-sideband (SSB) AM
• Vestigial-sideband (VSB) AM
• Phase modulation (PM)
• Frequency modulation (FM)
For the purposes of this discussion, we will concentrate on the two forms used for AM-FM
broadcast transmission.
AM Demodulation
An AM signal is made up of an RF sinusoid whose envelope varies at a relatively slow rate about
an average (carrier) level. Any sort of rectifier circuit will produce an output component at the
modulation frequency. Figure 6.3.21 illustrates two of the simple diode rectifier circuits that may
be used, along with idealized waveforms. The average output of the rectifier of Figure 6.3.21a is
proportional to the carrier plus the signal. The circuit exhibits, however, significant output
energy at the RF and its harmonics. A low-pass filter is necessary to eliminate these components.
If the selected filter incorporates a sufficiently large capacitor at its input, the effect is to produce
a peak rectifier, with the idealized waveform shown in Figure 6.3.21b. In this case the demodu-
lated output is increased from the average of a half a sine wave (0.637 peak) to the full peak, and
the RF components are substantially reduced. A peak rectifier used in this way is often referred
to as an envelope detector or demodulator. It is the circuit most frequently used for demodulating
AM broadcast signals.
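The charge-fast/discharge-slow action of the envelope detector can be sketched numerically. The Python fragment below is an idealized simulation with arbitrarily chosen carrier, modulation, and RC values; it rectifies an AM waveform and models the detector capacitor charging to each carrier peak and decaying between peaks:

```python
import numpy as np

fs = 2_000_000                     # simulation sample rate, Hz
fc, fm, m = 100_000, 1_000, 0.5    # carrier, modulation, modulation index
t = np.arange(0, 0.005, 1/fs)

envelope = 1 + m*np.cos(2*np.pi*fm*t)     # slowly varying envelope
am = envelope * np.cos(2*np.pi*fc*t)      # AM waveform

# Ideal-diode half-wave rectifier
rect = np.maximum(am, 0.0)

# Peak (envelope) detector: the capacitor charges instantly to a peak,
# then decays through R between carrier peaks.  tau is chosen to be
# long compared with 1/fc but short compared with 1/fm.
tau = 2e-4                               # RC time constant, s
alpha = np.exp(-1/(fs*tau))              # per-sample decay factor
out = np.empty_like(rect)
y = 0.0
for i, x in enumerate(rect):
    y = x if x > y else y*alpha          # charge fast, discharge via R
    out[i] = y

# The detector output should closely track the true envelope
half = len(out)//2
err = np.max(np.abs(out[half:] - envelope[half:]))
print(f"max envelope-tracking error: {err:.3f}")
```

The RC time constant must be long compared with the carrier period (to hold the peak between carrier cycles) yet short enough for the capacitor voltage to follow the fastest downward excursion of the envelope; too long a time constant produces diagonal clipping.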
AM signals may also be demodulated by using a coherent or synchronous demodulator. This
type of demodulator uses a mixer circuit, with an LO signal synchronized in frequency and phase
to the carrier of the AM input. Figure 6.3.22 illustrates three approaches to synchronous demodulation.
Figure 6.3.22 Three types of synchronous demodulators: (a) diode-based circuit, (b) dual-gate MOSFET-based circuit, (c) bipolar IC (CA3005/CA3006) basic circuit.
The synchronous component can be generated by an oscillator phase-locked to the carrier, as illustrated in Figure 6.3.23.
The synchronous demodulator translates the carrier and sidebands to baseband. As long as
the LO is locked to the carrier phase, baseband noise results only from the in-phase component
of the noise input. Consequently the noise increase and S/N reduction that occur at low levels in
the envelope demodulator are absent in the synchronous demodulator. The recovered carrier fil-
tering is narrow band, so that phase lock can be maintained at carrier-to-noise levels below useful
modulation output levels. This type of circuit, while better than an envelope demodulator, is not
Figure 6.3.23 Two approaches to AM carrier recovery: (a) system based on a filter, clipper, and
amplifier; (b) system based on a filter, clipper, and PLL. (From [1]. Used with permission.)
generally used for AM broadcast demodulation because of its complexity. Most stereo AM
receivers, however, incorporate synchronous demodulators as part of the decoding circuit.
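At baseband, the synchronous demodulator reduces to multiplying the AM input by a phase-locked LO and low-pass filtering the product. A minimal numerical sketch (Python; all frequencies are illustrative, and the low-pass filter is approximated by a one-carrier-cycle moving average):

```python
import numpy as np

fs, fc, fm, m = 1_000_000, 50_000, 1_000, 0.8
t = np.arange(0, 0.01, 1/fs)
msg = m*np.cos(2*np.pi*fm*t)
am = (1 + msg)*np.cos(2*np.pi*fc*t)

lo = np.cos(2*np.pi*fc*t)              # LO locked in frequency and phase
mixed = am*lo                          # product (mixer) output

# mixed = (1+msg)/2 + (1+msg)*cos(2*2*pi*fc*t)/2; averaging over one
# carrier cycle removes the 2*fc component, leaving (1+msg)/2
N = fs // fc
baseband = np.convolve(mixed, np.ones(N)/N, mode="same")

recovered = 2*baseband - 1
err = np.max(np.abs(recovered[N:-N] - msg[N:-N]))
print(f"max recovery error: {err:.4f}")
```

Note that only the in-phase product survives the filtering, which is why noise in quadrature with the carrier does not appear at the output of a synchronous demodulator.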
FM Demodulation
The most common technique for FM demodulation incorporates the use of linear circuits to con-
vert frequency variations to envelope variations, followed by an envelope detector. Another tech-
nique used with linear integrated circuits involves the conversion of frequency variations to
phase variations that are then applied to a phase demodulator. Still other FM demodulators
employ PLLs and frequency-locked loops (FM feedback circuits), or counter circuits whose out-
put is proportional to the rate of zero crossings of the wave. Frequency demodulators are often
referred to as discriminators or frequency detectors.
Resonant circuits are used in discriminators to provide adequate sensitivity to small-percent-
age frequency changes. To eliminate the dc components, two circuits can be used, one tuned
above and one tuned below the carrier frequency. The outputs are demodulated by envelope
demodulators and are then subtracted, eliminating the dc component. Voltage sensitivity is dou-
bled compared to the use of a single circuit. This balanced arrangement also eliminates even-
order distortion so that the first remaining distortion term is third-order. Figure 6.3.24 shows one
implementation of this scheme, known in the U.S. as the Travis discriminator. Because the circuit
depends on the different amplitude responses of two circuits, it has sometimes been called an
amplitude discriminator.
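Of the demodulator types listed above, the counter circuit is the most direct to sketch numerically: the number of zero crossings per unit time is proportional to the instantaneous frequency. The Python fragment below (all frequencies illustrative) generates an FM signal, counts zero crossings in short windows, and compares the counted frequency against the true instantaneous frequency:

```python
import numpy as np

fs = 2_000_000                       # sample rate, Hz
fc, dev, fm = 200_000, 50_000, 500   # carrier, peak deviation, modulation
t = np.arange(0, 0.02, 1/fs)
msg = np.sin(2*np.pi*fm*t)

# FM: instantaneous frequency fc + dev*msg; integrate to get phase
phase = 2*np.pi*np.cumsum(fc + dev*msg)/fs
sig = np.cos(phase)

# Counter-type discriminator: zero crossings per counting window
W = 200                                        # 0.1-ms counting window
crossings = np.abs(np.diff(np.signbit(sig).astype(int)))
n = len(crossings)//W*W
counts = crossings[:n].reshape(-1, W).sum(axis=1)
f_est = counts*fs/(2*W)                        # 2 crossings per cycle

# True instantaneous frequency, averaged over the same windows
f_true = (fc + dev*msg)[:n].reshape(-1, W).mean(axis=1)
corr = np.corrcoef(f_est, f_true)[0, 1]
print(f"correlation of counted vs. true frequency: {corr:.3f}")
```

The counting-window length sets the trade-off: longer windows give finer frequency resolution but smear rapid modulation, which is why practical counter discriminators are followed by audio-band filtering.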
Figure 6.3.24 Schematic diagram of the Travis discriminator. (From [3]. Used with permission.)
Figure 6.3.25 The Foster-Seeley FM discriminator circuit with tuned primary. (From [1]. Used with
permission.)
Figure 6.3.26 Schematic diagram of a basic ratio detector circuit. (From [1]. Used with permis-
sion.)
Amplitude Limiter
Amplitude limiting is essential for FM demodulators using analog circuits. Although solid-state
amplifiers tend to limit when the input signal becomes excessive, limiters that make use of this
characteristic often limit the envelope asymmetrically. For angle demodulation, symmetrical lim-
iting is desirable. AGC circuits, which can keep the signal output constant over wide ranges of
input signals, are unsuitable for limiting because they cannot be designed with sufficiently rapid
response to eliminate the envelope variations encountered in angle modulation interference. One
or more cascaded limiter stages are required for good FM demodulation.
Almost any amplifier circuit, when sufficiently driven, provides limiting. However, balanced
limiting circuits produce better results than those that are not balanced. In general, current cutoff
is more effective than current saturation in producing sharp limiting thresholds. Nonetheless,
overdriven amplifiers have been used in many FM systems to provide limiting. If the amplifier is
operated with low supply voltage and near cutoff, it becomes a more effective limiter.
Figure 6.3.27 Typical FM limiter circuits: (a) balanced-transistor amplifier, (b) shunt-diode limiter, (c) series diode limiter. (From [1]. Used with permission.)
The transistor differential amplifier shown in Figure 6.3.27a is an excellent limiter when the bias of the
emitter load transistor is adjusted to cause cutoff to occur at small base-emitter input levels.
The classic shunt-diode limiter is shown in Figure 6.3.27b. It is important that the off resis-
tance of the diodes be much higher than the driving and load impedances, and the on resistance
be much lower. Figure 6.3.27c shows the classic series diode limiter. In this example the diodes
are normally biased on, so that they permit current flow between the driver and the load. As the
RF input voltage rises, one diode is cut off—and as it falls, the other is cut off. The effectiveness
of limiting is determined by the difference in off and on resistances of the diode, compared to the
driving and load impedances.
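The effect of symmetrical limiting can be demonstrated numerically: clipping the waveform at a level well below the minimum envelope removes the envelope variation entirely, leaving only the zero-crossing (angle) information. A sketch in Python, with illustrative values:

```python
import numpy as np

fs, fc = 1_000_000, 100_000
t = np.arange(0, 0.005, 1/fs)

# Carrier with an unwanted 30% envelope ripple (e.g., from interference)
sig = (1 + 0.3*np.sin(2*np.pi*1_000*t))*np.cos(2*np.pi*fc*t)

clip = 0.2                          # symmetrical limiting level, well
                                    # below the minimum envelope (0.7)
limited = np.clip(sig, -clip, clip)

# Measure the per-carrier-cycle peak before and after limiting
N = fs // fc
cycles = lambda x: np.abs(x[:len(x)//N*N]).reshape(-1, N).max(axis=1)
ripple_before = (cycles(sig).max() - cycles(sig).min())/cycles(sig).mean()
ripple_after = (cycles(limited).max() - cycles(limited).min())/cycles(limited).mean()
print(f"envelope ripple: {ripple_before:.2f} -> {ripple_after:.3f}")
```

Because the clipping level lies below the smallest envelope excursion, every carrier cycle is limited to the same amplitude, and the following discriminator sees a constant-envelope signal.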
6.3.4a FM Stereo
The system devised for broadcasting stereo audio over FM has served the industry well. Key
requirements for the scheme were: 1) compatibility with monophonic receivers that existed at the
time the standard was developed, and 2) a robust signal that would not be degraded significantly
by multipath. Figure 6.3.28 shows the composite baseband that modulates the FM carrier for ste-
reophonic broadcasting. The two-channel baseband has a bandwidth of 53 kHz and is made up
of:
• A main channel (L + R) signal, which consists of the sum of left plus right audio signals (the
same signal broadcast by a monaural station). A fully modulated main channel will modulate
the FM transmitter to 45 percent when broadcasting stereo programming.
• A stereophonic subchannel (L – R), which consists of a double-sideband AM modulated car-
rier with a 38-kHz center frequency. The modulating signal is equal to the difference of the
left and right audio inputs. The subcarrier is suppressed to conserve modulation capability. As
a result, the AM sidebands have the same modulation potential as the main channel. A fully
modulated subchannel will modulate the FM transmitter to 45 percent when broadcasting ste-
reo programming.
• A 19-kHz subcarrier pilot, which is one-half the frequency of the stereophonic subcarrier and
in phase with it. The pilot supplies the reference signal needed by stereo receivers to reinsert
the 38-kHz carrier for demodulation of the double-sideband suppressed carrier transmission.
The pilot, in other words, is used to synchronize the decoder circuitry in the receiver to the
stereo generator at the transmitter. The frequency tolerance of the pilot is ±2 Hz. The pilot
modulates the transmitter 8 to 10 percent.
In a typical stereo generator, the left and right audio signals are
supplied to the matrix, which produces sum and difference components. The audio signals are
added to form the L + R main channel signal.
The difference signal is fed to a balanced modulator that generates the L – R subchannel.
Because a balanced modulator is used, the 38-kHz carrier is suppressed, leaving only the modu-
lated sidebands. The 19-kHz pilot signal is derived by dividing the 38-kHz oscillator by 2. The
main channel, stereophonic sub-channel, and pilot are then combined in the proper (45/45/10
percent) ratio to form the composite baseband.
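The composite assembly described above can be expressed directly. The Python sketch below builds the baseband from illustrative single-tone left and right signals at the 45/45/10 ratios; note how the interleaving of the main-channel and subchannel terms keeps the composite peak at or below 100 percent modulation:

```python
import numpy as np

fs = 400_000                        # baseband sample rate, Hz
t = np.arange(0, 0.01, 1/fs)
L = np.sin(2*np.pi*1000*t)          # left audio (full scale)
R = np.sin(2*np.pi*400*t)           # right audio (full scale)

pilot = np.sin(2*np.pi*19_000*t)    # 19-kHz pilot
sub = np.sin(2*np.pi*38_000*t)      # 38-kHz suppressed subcarrier,
                                    # in phase with the pilot

# 45% main channel + 45% DSB-SC subchannel + 10% pilot
composite = 0.45*(L + R) + 0.45*(L - R)*sub + 0.10*pilot

# Interleave property: |0.45(L+R) + 0.45(L-R)*sub| never exceeds
# 0.9*max(|L|,|R|), so main + subchannel + pilot peaks at about 1.0
peak = np.max(np.abs(composite))
print(f"peak composite level: {peak:.3f}")
```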
The TDM method of generating a stereo signal is shown in block diagram form in Figure
6.3.30. The L + R and L – R signals are generated by an electronic switch that is toggled at a 38-
kHz rate. The switch samples one audio channel and then the other. Considerable harmonic
energy is generated in this process, requiring the use of a low-pass filter. When the harmonics are
filtered out, the proper composite waveform results. This approach, while simple and stable, may
produce unwanted artifacts, most notably reduced stereo separation, because of the filtering
requirements.
An improvement to the basic TDM concept is shown in Figure 6.3.31. By using a “soft
switch” to sample the left and right channels, it is possible to eliminate the low-pass filter and its
side-effects. The variable element shown in the figure consists of an electronic attenuator that is
capable of swinging between its minimum and maximum attenuation values at a 38-kHz rate.
Like the fast-switching TDM system, the L + R and L – R channels are generated in one opera-
tion. No filter is required at the output of the generator as long as the 38-kHz sine wave is free
from harmonics and the variable attenuator has good linearity.
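The equivalence of the soft-switched TDM generator and the matrix (sum-and-difference) generator is a simple algebraic identity: an attenuator swinging sinusoidally between the two inputs produces exactly the main channel plus the DSB-SC subchannel. A numerical check in Python, with illustrative tones:

```python
import numpy as np

fs = 400_000
t = np.arange(0, 0.005, 1/fs)
L = np.sin(2*np.pi*700*t)           # illustrative left audio
R = np.sin(2*np.pi*300*t)           # illustrative right audio

# "Soft switch": an attenuator swinging sinusoidally between the inputs
a = 0.5*(1 + np.sin(2*np.pi*38_000*t))
tdm = a*L + (1 - a)*R

# Matrix form: main channel plus DSB-SC subchannel
matrix = 0.5*(L + R) + 0.5*(L - R)*np.sin(2*np.pi*38_000*t)

max_diff = np.max(np.abs(tdm - matrix))
print(f"max difference between the two generators: {max_diff:.2e}")
```

The identity shows why no output filter is needed when the 38-kHz drive is a clean sine wave: the switching process itself generates only the main channel and the subchannel, with no harmonic products.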
Figure 6.3.30 Functional block diagram of a time-division multiplexing (TDM) FM stereo generator.
Figure 6.3.31 Functional block diagram of a time-division multiplexing stereo generator using a variable electronic attenuator.
The composite signal from the demodulator is fed to a buffer amplifier and sampled by a PLL
within the decoder IC. A voltage controlled oscillator, typically running at 76 kHz (four times
the pilot frequency) is locked in phase with the pilot by the error output voltage of the PLL. The
oscillator signal is divided by 2, resulting in a square wave at 38 kHz with nearly perfect duty
cycle and fast rise and fall times. This signal drives the audio switcher (demultiplexer) to transfer
the composite baseband to the left and right audio outputs in synchronization with the station’s
stereo generator. A deemphasis circuit follows the matrix to complement the signal preemphasis
at the FM transmitter.
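The decoding process described above (regenerating the 38-kHz subcarrier, synchronously detecting the subchannel, and dematrixing) can be sketched numerically. In the Python fragment below, the regenerated subcarrier is simply assumed to be phase-locked, standing in for the PLL, and two cascaded one-pilot-cycle moving averages stand in for the low-pass filtering:

```python
import numpy as np

fs = 912_000                         # 48 samples per 19-kHz pilot cycle
t = np.arange(0, 0.01, 1/fs)
L = np.sin(2*np.pi*1000*t)
R = np.sin(2*np.pi*400*t)

comp = (0.45*(L + R)
        + 0.45*(L - R)*np.sin(2*np.pi*38_000*t)
        + 0.10*np.sin(2*np.pi*19_000*t))

# Regenerated 38-kHz subcarrier, assumed phase-locked to the pilot
sub = np.sin(2*np.pi*38_000*t)

# Low-pass stand-in: cascaded one-pilot-cycle moving averages, which
# null the pilot, the subcarrier, and their harmonics exactly
N = fs // 19_000
k = np.ones(N)/N
lp = lambda x: np.convolve(np.convolve(x, k, mode="same"), k, mode="same")

sum_ch = lp(comp)/0.45               # L + R
diff_ch = lp(comp*sub)*2/0.45        # L - R (synchronous detection)

left = 0.5*(sum_ch + diff_ch)        # dematrix
right = 0.5*(sum_ch - diff_ch)

s = slice(2*N, -2*N)                 # ignore filter edge transients
err = max(np.max(np.abs(left[s] - L[s])),
          np.max(np.abs(right[s] - R[s])))
print(f"max reconstruction error: {err:.3f}")
```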
6.3.4b AM Stereo
AM stereo operation was approved by the FCC in 1981 using the C-QUAM (Motorola) system.
A modification of simple quadrature modulation, C-QUAM was designed to maintain compati-
bility with monophonic transmissions.
Figure 6.3.32 Block diagram of a stereo decoder using PLL-controlled time-division multiplexing.
The C-QUAM encoder is shown in Figure 6.3.33. As in FM stereo broadcasting, sum and dif-
ference signals of the left and right audio inputs are produced. Pure quadrature is generated by
taking the L + R and L – R signals and modulating two balanced modulators fed with RF signals
that are out of phase by 90° (producing components referred to as I and Q). As shown in the figure, the 90° phase shift is derived by using a Johnson counter, which divides an input frequency
(4 times the station carrier frequency) by 4 and provides digital signals precisely 90° out of phase
for the balanced modulators. The carrier is inserted directly from the Johnson counter. At the
output of the summing network, the result is a pure quadrature AM stereo signal. From there it is
passed through a limiter that strips the incompatible AM components from the signal. The output
of the limiter is amplified and sent to the transmitter in place of the crystal oscillator.
The left and right audio signals are summed and sent as compatible L + R to the audio input
terminals of the transmitter.
shifted in phase to the I demodulator, which loses some of its I amplitude. The envelope detector
sees no difference in the AM because of the phase modulation. When the envelope detector and
the I demodulator are compared, there is an error signal. The error signal increases the input level
to the detector. This makes the input signal to the I and Q demodulators look like a pure quadra-
ture signal, and the audio output yields the L – R information. The demodulator output is com-
bined with the envelope-detector output in a matrix to reconstruct the left and right audio
channels.
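The encode/decode relationship described above can be verified with an idealized baseband model. In the Python sketch below (noise-free, illustrative audio levels), the transmitted signal is represented by its compatible envelope 1 + (L + R) and the phase of the pure quadrature signal; scaling the input by 1/cos(phase), which is the net effect of the error loop, restores pure quadrature so the Q demodulator yields L − R:

```python
import numpy as np

t = np.linspace(0, 0.002, 20_000)
L = 0.3*np.sin(2*np.pi*1000*t)      # modest levels keep 1 + L + R > 0
R = 0.3*np.sin(2*np.pi*600*t)

# Encoder: pure quadrature components; the limiter keeps only the phase
I = 1 + (L + R)                     # in-phase: carrier plus sum audio
Q = L - R                           # quadrature: difference audio
phase = np.arctan2(Q, I)
env = 1 + (L + R)                   # compatible envelope re-applied as AM

# A mono receiver's envelope detector recovers L + R directly
mono = env - 1

# Stereo decoder: the error loop scales the input until it looks like
# pure quadrature (divide by cos(phase)); the Q demodulator then
# recovers the difference signal
q_demod = (env/np.cos(phase))*np.sin(phase)   # = env*tan(phase) = L - R

left = 0.5*(mono + q_demod)
right = 0.5*(mono - q_demod)
stereo_err = max(np.max(np.abs(left - L)), np.max(np.abs(right - R)))
print(f"max stereo reconstruction error: {stereo_err:.2e}")
```

The model makes the compatibility argument explicit: a monophonic envelope detector sees only 1 + (L + R), while the stereo decoder recovers the difference channel from the phase.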
6.3.5 References
1. Rohde, Ulrich L., and Jerry C. Whitaker: Communications Receivers, 3rd ed., McGraw-
Hill, New York, N.Y., 2000.
2. Rohde, Ulrich L.: Digital PLL Frequency Synthesizers, Prentice-Hall, Englewood Cliffs,
N.J., 1983.
3. Amos, S. W.: “FM Detectors,” Wireless World, vol. 87, no. 1540, pg. 77, January 1981.
6.3.6 Bibliography
Benson, K. Blair, and Jerry C. Whitaker: Television and Audio Handbook for Engineers and
Technicians, McGraw-Hill, New York, N.Y., 1990.
Whitaker, Jerry C. (ed.): NAB Engineering Handbook, 9th ed., National Association of Broad-
casters, Washington, D.C., 1999.
Section 7
The familiar—and ubiquitous—television receiver has gone through significant and far-reaching
changes within the past few years. Once a relatively simple, single-purpose device intended for
viewing over-the-air broadcasts, the television set has become the focal point of entertainment
and information services in the home. What began as an all-in-one-box receiver has evolved into
a suite of devices intended to serve a viewing public that demands more selections, more flexibil-
ity, simpler control, and better pictures and sound. The TV set is—in fact—going through the
same metamorphosis that audio systems did in the 1970s. The “console stereo” of the 1960s
evolved from a multipurpose, albeit inflexible, aural entertainment system into the component
scenario that is universal today for high-end audio devices.
The move to such a component approach to television is important because it permits the con-
sumer to select the elements and features that suit his or her individual needs and tastes. Upgrade
options also are simplified. Separating the display device and its related circuitry from the
receiver makes it possible to upgrade, for example, from an NTSC receiver to an NTSC/DTV-
compliant receiver without sacrificing the display—where most of the cost is concentrated and
where most of the really significant advancements are likely to be seen in the coming years.
Cable and satellite systems also play into this component scenario, as new services are rolled out
to consumers.
While the arguments in favor of a component approach to video devices are compelling, the
tradeoff is complexity and—of course—cost. Consumers have clearly indicated that they want
entertainment devices to be simpler to operate and simpler to install, and price is always an issue.
Here again, the integration of audio devices points the way for video manufacturers. There is
ample evidence that consumers will pay a premium if a given device or system gives them what
they want.
Apart from consumer television, videoconferencing is a related area of receiver system devel-
opment. Although not strictly a receiver (in many cases, it is really a computer), emerging desk-
top videoconferencing systems promise to bring visual communication to businesses with the
ease of a phone call or e-mail. Long hamstrung by bandwidth bottlenecks, desktop videoconferencing
has—at last—become possible, and more importantly practical, in the era of high-speed
real-time networking.
Chapter
7.1
Television Reception Principles
K. Blair Benson
7.1.1 Introduction
Television receivers provide black-and-white or color reproduction of pictures and the accompa-
nying monaural or stereophonic sound from signals broadcast through the air or via cable distri-
bution systems. The broadcast channels in the U.S. are 6 MHz wide for transmission of
conventional 525-line NTSC signals and DTV signals.
The voltage developed across the receiver's 300-Ω antenna terminals by a given field strength is

e = 96.68E / √(f1 f2)    (7.1.1)

Where:
e = terminal voltage, μV, across 300 Ω
E = field strength, μV/m
f1 and f2 = band-edge frequencies, MHz
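As a worked example of Eq. (7.1.1), reading the band-edge product as appearing under a square root (that is, the geometric-mean channel frequency), the terminal voltage for an illustrative 1000-μV/m field on Channel 2 (54 to 60 MHz) can be computed:

```python
import math

def dipole_terminal_voltage(field_uV_m, f1_MHz, f2_MHz):
    """Terminal voltage (uV across 300 ohms) from field strength,
    per Eq. (7.1.1): e = 96.68*E/sqrt(f1*f2)."""
    return 96.68*field_uV_m/math.sqrt(f1_MHz*f2_MHz)

# Channel 2 (54-60 MHz), 1000-uV/m field
e = dipole_terminal_voltage(1000, 54, 60)
print(f"terminal voltage: {e:.0f} uV")
```

The constant 96.68 is close to 300/π, reflecting the effective length (λ/π) of a dipole cut for the channel; the geometric mean of the band edges serves as the channel's design frequency.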
Many sizes and form factors of receivers are manufactured. Portable personal types include
pocket-sized or hand-held models with picture sizes of 2 to 4 in (5 to 10 cm) diagonal for monochrome
and 5 to 6 in (13 to 15 cm) for color, powered by either batteries or ac. Conventional
cathode ray tubes (CRTs) for picture displays in portable sets have essentially been supplanted by
flat liquid-crystal displays and other flat-panel technologies.
Larger screen sizes are available in monochrome where low cost and light weight are prime
requirements. However, except where extreme portability is important, the vast majority of tele-
vision program viewing is in color. The 19-in (48-cm) and 27-in (69-cm) sizes dominate the mar-
ket, although the smaller 13-in (33-cm) size is popular as a second or semiportable set.
Television receiver functions can be broken down into several interconnected blocks. With the
rapidly increasing use of large-scale integrated circuits, the isolation of functions has become
more evident in the design and service of receivers, while at the same time the actual parts count
has dropped dramatically. The typical functional configuration of a receiver using a tri-gun pic-
ture tube, shown in Figure 7.1.1, will serve as a guide for the following description of receiver
design and operation. The discussions of each major block, in turn, are accompanied with more
detailed subblock diagrams.
Figure 7.1.1 Fundamental block diagram of a color receiver with a tri-gun picture tube display.
Modern tuner designs take the approach of combining the VHF and UHF circuits on a single printed circuit board in the same shielded
box with a common antenna connection, thus eliminating the need for an outrigger coupling unit.
Selectivity
The tuner bandpass generally is 10 MHz in order to ensure that the picture and sound signals of
the full 6 MHz television channel are amplified with no significant imbalance in levels or phase
distortion by the skirts of the bandpass filters. This bandpass characteristic usually is provided by
three tuned circuits:
• A single-tuned preselector between the antenna input and the RF amplifier stage
• A double-tuned interstage network between the RF and mixer stages
• A single-tuned coupling circuit at the mixer output.
The first two circuits are frequency-selective to the desired channel by varying either or both the
inductance and capacitance. The mixer output is tuned to approximately 44 MHz, the center fre-
quency of the IF channel.
The purpose of the RF selectivity function is to reduce all signals that are outside of the
selected television channel. For example, the input section of VHF tuners usually contains a
p p
high-pass filter and trap section to reject signals lower than Channel 2 (54 MHz), such as stan-
dard broadcast, amateur, and citizen's band (CB) emissions. In addition, a trap is provided to
reduce FM broadcast signals in the 88 to 108 MHz band. A list of the major interference prob-
lems is tabulated in Table 7.1.2 for VHF channels. In Table 7.1.3 for UHF channels, the formula
for calculation of the interfering channels is given in the second column, and the calculation for a
receiver tuned to Channel 30 is given in the third column.
VHF Tuner
A block diagram of a typical mechanical tuner is shown in Figure 7.1.2. The antenna is coupled
to a tunable RF stage through a bandpass filter to reduce spurious interference signals in the IF
band, or from FM broadcast stations and CB transmitters. Another bandpass filter is provided in
the UHF section for the same purpose. The typical responses of these filters are shown in Figures
7.1.3a and b.
The RF stage provides a gain of 15 to 20 dB (approximately a 10:1 voltage gain) with a band-
pass selectivity of about 10 MHz between the –3 dB points on the response curve. The response
between these points is relatively flat with a dip of only a decibel or so at the midpoint. There-
fore, the response over the narrower 6 MHz television channel bandwidth is essentially flat.
VHF tuners have a rotary shaft that switches a different set of three or four coils or coil taps
into the circuit at each VHF channel position (2 to 13). The circuits with these switched coils
include:
• RF input preselection
• RF input coupling (single-tuned for monochrome, double-tuned for color)
• RF-to-mixer interstage
p p
In the first switch position (Channel 1), the RF stage is disabled and the mixer stage becomes
an IF amplifier stage, centered on 44 MHz for the UHF tuner.
The mixer stage combines the RF signal with the output of a tunable local oscillator to produce
an IF of 45.75 MHz for the picture carrier signal and 41.25 MHz for the sound carrier signal.
The local oscillator signal thus is always 45.75 MHz above that of the selected incoming
picture signal. For example, the frequencies for Channel 2 are listed in Table 7.1.4.
These frequencies were chosen to minimize interference from one television receiver into
another by always having the local-oscillator signal above the VHF channels. Note that the oscil-
lator frequencies for the low VHF channels (2 to 6) are between Channels 6 (low VHF) and 7
(high VHF), and the oscillator frequencies for high VHF fall above these channels.
Figure 7.1.3 Filter response characteristics: (a) response of a tuner input FM bandstop filter, (b)
response of a tuner input with CB and IF traps.
The picture and sound signals of the full 6 MHz television channel are amplified with no sig-
nificant imbalance in levels or phase distortion by the skirts of the bandpass filters. This band-
pass characteristic usually is provided by three tuned circuits:
• A single-tuned preselector between the antenna input and the RF amplifier stage
• Double-tuned interstage network between the RF and mixer stages
• Single-tuned coupling circuit at the mixer output
The first two are tuned to the desired channel by varying the inductance, the capacitance, or
both. The mixer output is tuned to approximately 44 MHz, the center frequency of the IF
channel.
UHF Tuner
The UHF tuner contains a tunable input circuit to select the desired channel, followed by a diode
mixer. As in a VHF tuner, the local oscillator is operated at 45.75 MHz above the selected input
channel signal. The output of the UHF mixer is fed to the mixer of the accompanying VHF tuner,
which functions as an IF amplifier. Selection between UHF and VHF is made by applying power
to the appropriate tuner RF stage.
Mechanical UHF tuners have a shaft that when rotated moves one set of plates of variable air-
dielectric capacitors in three resonant circuits. The first two are a double-tuned preselector in the
amplifier-mixer coupling circuit, and the third is the tank circuit of the local oscillator. In order
to meet the discrete selection requirement of the FCC, a mechanical detent on the rotation of the
shaft and a channel-selector indicator are provided, as illustrated in Figure 7.1.4.
The inductor for each tuned circuit is a rigid metal strip, grounded at one end to the tuner
shield and connected at the other end to the fixed plate of a three-section variable capacitor with
the rotary plates grounded. The three tuned circuits are separated by two internal shields that
divide the tuner box into three compartments.
Figure 7.1.5 Tuner-to-IF section link coupling: (a) low-side capacitive coupling, (b) low-side induc-
tive coupling.
tion and in reducing the generation of spurious signals in the IF section. On the other hand, the
low-side inductance gives a better termination to the link cable and therefore reduces interstage
cable loss. The necessary bandpass characteristics can be obtained either by undercoupled stag-
ger tuning or by overcoupled synchronous tuning, as illustrated in Figure 7.1.5.
Varactor Tuner
The varactor diode forms the basis for electronic tuning, which is accomplished by a change in
capacitance with the applied dc voltage to the device. One diode is used in each tuned circuit.
Unlike variable air-dielectric capacitors, varicaps have a resistive component in addition to their
capacitance that lowers the Q and results in a degraded noise figure. Therefore, varactor UHF
tuners usually include an RF amplifier stage, making them functionally similar to VHF tuners.
(See Figure 7.1.6.)
The full UHF band can be covered by a single varicap in a tuned circuit because the ratio of
highest and lowest frequencies in the UHF bands is less than 2:1 (1.7). However, the ratio of the
highest to lowest frequencies in the two VHF bands is over twice (4.07) that of the UHF band.
This is beyond the range that typically can be covered by a tuned circuit using varicaps. This
problem is solved by the use of band switching between the low and high VHF channels. This is
accomplished rather simply by short-circuiting a part of the tuning coil in each resonant tank cir-
cuit to reduce its inductance. The short circuit is provided by a diode that has a low resistance in
the forward-biased condition and a low capacitance in the reverse-biased condition. A typical RF
input and oscillator circuit arrangement is shown in Figure 7.1.7. Applying a positive voltage to
VB switches the tuner to high VHF by causing the diodes to conduct and lower the inductance of
the tuning circuits.
Figure 7.1.7 VHF tuner band-switching and tuning circuits. +VB = high VHF (active tuning induc-
tors = L3 in parallel with L1 and L2, L10, L11, L21). –VB = low VHF (active tuning inductors = L1 + L2,
L12, L11 + L13, L14, L21 + L22).
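The tuning-range figures quoted above follow directly from the resonance formula. Since f = 1/(2π√(LC)), the frequency ratio a varicap must cover fixes the required capacitance ratio as its square. A quick sketch, using the U.S. band edges (the 470 to 806 MHz UHF span is the post-1983 allocation; the function name is illustrative):

```python
def cap_ratio(f_low_mhz, f_high_mhz):
    """Required C_max/C_min for a fixed-inductance resonant circuit:
    f = 1/(2*pi*sqrt(L*C)), so f_high/f_low = sqrt(C_max/C_min)."""
    return (f_high_mhz / f_low_mhz) ** 2

print(round(cap_ratio(470, 806), 2))   # UHF band: ~2.94, within varicap range
print(round(cap_ratio(54, 216), 1))    # full VHF span: 16.0, beyond one varicap
print(round(cap_ratio(54, 88), 2))     # low VHF after band switching: ~2.66
print(round(cap_ratio(174, 216), 2))   # high VHF after band switching: ~1.54
```

The 16:1 capacitance ratio needed to span all of VHF is what forces the band-switching diodes; after switching, each sub-band needs less than a 3:1 ratio.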
Tuning Systems
The purpose of the tuning system is to set the tuner, VHF or UHF, to the desired channel and to
fine-tune the local oscillator so that the video carrier from the mixer falls at the proper IF fre-
quency of 45.75 MHz. In mechanical tuners, this involves an adjustment of the rotary
selector switch and the capacitor knob on the switch shaft. In electronically tuned systems, the dc
tuning voltage can be supplied from the wiper arm of a potentiometer control connected to a
fixed voltage source as shown in Figure 7.1.8a.
Alternatively, multiples of this circuit, as shown in Figure 7.1.8b, can provide preset fine-tun-
ing for each channel. This arrangement most commonly is found in cable-channel selector boxes
supplied with an external cable processor.
In digital systems, such as that shown in Figure 7.1.8c, the tuning voltage can be read as a
digital word from the memory of a keyboard and display station (or remote control circuit). After
conversion from a digital code to an analog voltage, the tuning control voltage is sent to the
tuner.
Figure 7.1.8d shows a microprocessor system using a phase-locked loop to compare a
medium-frequency square-wave signal from the channel selector keyboard, corresponding to a
specific channel, with a signal divided down by 256 from the local oscillator. The error signal
generated by the difference in these two frequencies is filtered and used to correct the tuning
voltage being supplied to the tuner.
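The divide-by-256 arrangement can be sketched numerically. In the locked condition, the local oscillator divided by 256 must equal the reference from the channel-selector logic; the proportional-correction loop below is a deliberately crude illustration of the error-driven action (the 0.5 gain and function name are our assumptions, not values from the text):

```python
def pll_reference_hz(picture_carrier_mhz, divider=256):
    """Reference frequency the channel-selector logic must supply so that
    LO/divider equals it at lock (LO = picture carrier + 45.75 MHz)."""
    return (picture_carrier_mhz + 45.75) * 1e6 / divider

# Channel 2 (picture carrier 55.25 MHz, LO = 101 MHz)
f_ref = pll_reference_hz(55.25)
print(f_ref)   # 394531.25 Hz, a medium-frequency square wave

# Crude loop action: the comparator error between the divided-down LO and
# the reference nudges the LO (via the tuning voltage) until they agree
f_lo = 100.0e6                        # start mistuned by 1 MHz
for _ in range(200):
    error = f_ref - f_lo / 256        # error seen at the phase comparator
    f_lo += 0.5 * 256 * error         # proportional correction, illustrative
print(round(f_lo))                    # settles at 101000000
```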
Figure 7.1.8 Varactor-tuned systems: (a) simple potentiometer-controlled varactor-tuned system,
(b) multiple potentiometers providing n-channel selection, (c) simplified memory tuning, (d) micro-
processor-based PLL tuning system.
Figure 7.1.9 IF amplifier system: (a) typical IF amplifier strip block diagram and gain distribution,
(b) receiver AGC system.
• Local oscillator radiation does not interfere with another receiver on any channel or on any
channel image.
• No UHF signal falls on the image frequency of another station.
On the other hand, it should be pointed out that channels for certain public safety communica-
tions are allocated in the standard IF band. Because these transmitters radiate high power levels,
receivers require thorough shielding of the IF amplifier and IF signal rejection traps in the tuner
ahead of the mixer. In locations where severe cases of interference are encountered, the addition
of a rejection filter at the antenna input may be necessary.
Gain Characteristics
The output level of the picture and sound carriers from the mixer in the tuner is about 200 μV.
The IF section provides amplification to boost this level to about 2 V, which is required for linear
operation of the detector or demodulator stage. This is an overall gain of 80 dB. The gain distri-
bution in a typical IF amplifier using discrete gain stages is shown in Figure 7.1.9a.
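The 80 dB figure is simply the decibel form of the 200 μV to 2 V voltage ratio:

```python
import math

def voltage_gain_db(v_in, v_out):
    """Voltage gain in decibels: 20*log10(Vout/Vin)."""
    return 20 * math.log10(v_out / v_in)

# ~200 uV at the mixer output must become ~2 V at the detector
print(round(voltage_gain_db(200e-6, 2.0)))   # 80 dB overall IF gain
```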
Automatic gain control (AGC) in a closed feedback loop is used to prevent overload in the IF,
and the mixer stage as well, from strong signals (see Figure 7.1.9b). Input-signal levels may
range from a few microvolts to several hundred microvolts, thus emphasizing the need for AGC.
The AGC voltage is applied only to the IF for moderate signal levels so that the low-noise RF
amplifier stage in the tuner will operate at maximum gain for relatively weak tuner input signals.
A “delay” bias is applied to the tuner gain control to block application of the AGC voltage except
at very high antenna signal levels. As the antenna signal level increases, the AGC voltage is
applied first to the first and second IF stages. When the input signal reaches about 1 mV, the
AGC voltage is applied to the tuner, as well.
Response Characteristics
The bandpass of the IF amplifier must be wide enough to cover the picture and sound carriers of
the channel selected in the tuner while providing sharp rejection of adjacent channel signals.
Specifically, the upper adjacent-channel picture carrier and the lower adjacent-channel sound
carrier must be attenuated 40 and 50 dB, respectively, to eliminate visible interference patterns
in the picture. The
sound carrier at 4.5 MHz below the picture carrier must be of adequate level to feed either a sep-
arate sound IF channel or a 4.5 MHz intercarrier sound channel. Furthermore, because in the
vestigial sideband system of transmission the video carrier lower sideband is missing, the
response characteristic is modified from flat response to attenuate the picture carrier by 50 per-
cent (6 dB).
In addition, in color receivers the color subcarrier at 3.58 MHz below the picture carrier must
be amplified without distortion or time delay relative to the picture carrier. Ideally, this requires
the response shown in Figure 7.1.10. Notice that the color IF is wider and has greater attenuation
of the channel sound carrier in order to reduce the visibility of the 920 kHz beat between the
color subcarrier and the sound carrier.
These and other more stringent requirements for color reception are illustrated in Figure
7.1.11. Specifically:
• IF bandwidth must be extended on the high-frequency video side to accommodate the color
subcarrier modulation sidebands that extend to 41.25 MHz (as shown in Figure 7.1.11). The
response must be stable and, except in sets with automatic chroma control (ACC), the
response must not change with the input signal level (AGC) in order to maintain a constant
level of color saturation.
Figure 7.1.10 Ideal IF amplitude response for color and monochrome reception.
• More accurate tuning of the received signal must be accomplished in order to avoid shifting
the carriers on the tuner IF passband response. Deviation from their prescribed positions will
alter the ratio of luminance to chrominance (saturation). While this is corrected in receivers
with automatic fine tuning (AFT) and ACC, it can change the time relationship between color
and luminance that is apparent in the color picture as chroma being misplaced horizontally.
• Color subcarrier presence as a second signal dictates greater freedom from overload of ampli-
fier and detector circuits, which can result in spurious intermodulation signals visible as beat
patterns. These cannot be removed by subsequent filtering.
• Envelope delay (time delay) of the narrow-band chroma and wide-band luminance signals
must be equalized so that the horizontal position of the two signals match in the color picture.
Although quartz and other materials have been used as SAW filter substrates, lithium niobate
and lithium tantalate are typical for television applications. When one set of interdigital trans-
ducer fingers is driven by an electric signal voltage, an acoustic wave moves across the surface
to the other set of fingers, which are connected to the load. The transfer amplitude-frequency
response has a (sin x)/x shape (Figure 7.1.15).
A modification to the design in Figure 7.1.14 that yields an improved television bandpass
and trap response consists of varying the length of the fingers to form a diamond configuration,
as illustrated in Figure 7.1.16. This is equivalent to connecting in parallel several transducers
with slightly different resonant frequencies and bandwidths. Other modifications consist of
varying the aperture spacings, the distance between the transducers, and the passive coupler
strip patterns in the space between the transducers.
blocks that meets these objectives has a gain of nearly 20 dB and a gain-control range of 24 dB.
A direct-coupled cascade of three stages yields an overall gain of 57 dB and a gain-control range
of 64 dB. The gain-control system internal to this IC begins to reduce the gain of the third stage
at an IC input level of 100 μV of IF carrier. With increasing input signal level, the third-stage
gain falls to 0 dB; the second stage is then reduced to a similar level, followed by the first stage
in the same manner. By this means, a noise figure of 7 dB is held constant over an IF input signal
range of 40 dB. The need for a preamplifier ahead of the SAW IF becomes less important when
the IF amplifier noise figure is maintained constant by this cascaded control system.
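The sequential gain-reduction scheme can be modeled in a few lines. This is a simplified sketch using the stage numbers quoted above (19 dB per stage, 24 dB control range each, reduction starting at 100 μV); the real IC overlaps the control regions more gradually:

```python
import math

def stage_gains(input_uv, threshold_uv=100.0, stage_gain_db=19.0, stage_range_db=24.0):
    """Distribute AGC gain reduction across three cascaded IF stages.
    The third stage is reduced first, then the second, then the first,
    mirroring the control sequence described in the text."""
    if input_uv <= threshold_uv:
        needed_db = 0.0
    else:
        needed_db = 20.0 * math.log10(input_uv / threshold_uv)
    gains = []
    for _ in range(3):                       # stages 3, 2, 1 in control order
        cut = min(needed_db, stage_range_db)
        gains.append(stage_gain_db - cut)
        needed_db -= cut
    third, second, first = gains
    return first, second, third

print(stage_gains(100))    # (19.0, 19.0, 19.0): no reduction at threshold
print(stage_gains(1000))   # (19.0, 19.0, -1.0): 20 dB cut, all in stage 3
```

Because the early stages stay at full gain until the later ones are exhausted, the front of the chain dominates the noise figure over a wide input range, which is the point made above.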
The high gain and small size of an integrated-circuit IF amplifier places greater importance
on PC layout techniques and ground paths if stability is to be achieved under a wide range of
operating conditions. These considerations also carry over to the external circuits and compo-
nents.
Figure 7.1.17 Frequency response of SAW filter picture-carrier output. (After [3].)
Envelope Detector
Of the several types of demodulators, the envelope detector is the simplest. It consists of a diode
rectifier feeding a parallel load of a resistor and a capacitor (Figure 7.1.18). In other words, it is a
half-wave rectifier that charges the capacitor to the peak value of the modulation envelope.
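The charge-and-discharge action can be simulated with an idealized diode and a one-pole RC discharge. The carrier here is scaled down to 45.75 kHz (from the 45.75 MHz IF) purely so a short sample run suffices; the RC value and signal parameters are illustrative:

```python
import math

def envelope_detect(samples, fs_hz, rc_seconds):
    """Ideal-diode envelope detector: the capacitor charges instantly to any
    input peak above its own voltage and otherwise discharges through R."""
    decay = math.exp(-1.0 / (fs_hz * rc_seconds))
    out, v = [], 0.0
    for s in samples:
        v = max(s, v * decay)   # diode conducts only while input exceeds v
        out.append(v)
    return out

# AM test signal: 45.75 kHz "carrier" with a 1 kHz, 50 percent envelope
fs = 1_000_000
am = [(1 + 0.5 * math.sin(2 * math.pi * 1000 * n / fs))
      * math.sin(2 * math.pi * 45750 * n / fs) for n in range(2000)]
env = envelope_detect(am, fs, rc_seconds=200e-6)
print(round(max(env), 2))   # close to the 1.5 envelope peak
```

The RC choice is the classic compromise: long enough to bridge the carrier-period ripple, short enough for the capacitor to discharge as fast as the envelope falls.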
Because of the large loss in the diode, a high level of IF voltage is required to recover 1 or 2
volts of demodulated video. In addition, unless the circuit is operated at a high signal level, the
curvature of the diode impedance curve near cutoff results in undesirable compression of peak-
white signals. The requirement for large signal levels and the nonlinearity of detection result in
design problems and certain performance deficiencies, including the following:
• Beat signal products will occur between the color subcarrier (42.17 MHz), the sound carrier
(41.25 MHz), and high-amplitude components in the video signal. The most serious is a 920
kHz (color to sound) beat and 60 Hz buzz in sound from vertical sync and peak-white video
modulation.
Figure 7.1.19 Comparison of quadrature distortion in envelope and synchronous detectors: (a)
axis shift, (b) inverted and normal 2T-pulse response. (After [4].)
• Distortion of luminance toward black of as much as 10 percent and asymmetric transient
response. Referred to as quadrature distortion, this characteristic of the vestigial sideband is
aggravated by nonlinearity of the diode. (See Figure 7.1.19.)
• Radiation of the fourth harmonic of the video IF produced by the detection action directly
from the chassis, which can interfere with reception of VHF Channel 8 (180 to 186 MHz).
Even with these deficiencies, the diode envelope detector was used in the majority of
monochrome and color television receivers, from vacuum-tube designs up to the era of
discrete transistors.
Figure 7.1.20 Transistor detector: (a) schematic diagram, (b) detection characteristic.
Transistor Detector
A transistor biased near collector cutoff and driven with a modulated carrier at an amplitude
greater than the bias level (see Figure 7.1.20) provides a demodulator that can have a gain of 15
or 20 dB over that of a diode. Consequently, less gain is required in the IF amplifier; in fact, in
some receiver designs, the third IF stage has been eliminated. Unfortunately, this is offset by the
same deficiencies in signal detection as the diode envelope detector.
Synchronous Detector
The synchronous detector is basically a balanced rectifier in which the carrier is sampled by an
unmodulated carrier at the same frequency as the modulated carrier. The unmodulated reference
signal is generated in a separate high-Q limiting circuit that removes the modulation. An alterna-
tive system for generating the reference waveform is by means of a local oscillator phase-locked
to the IF signal carrier.
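The multiply-and-filter action described above can be sketched directly. The one-pole low-pass stands in for the actual post-detection filtering, and the ideal sine reference stands in for the limiter output; both are simplifying assumptions:

```python
import math

def synchronous_detect(samples, fs_hz, carrier_hz, rc_seconds):
    """Multiply the IF signal by a clean reference carrier, then one-pole
    low-pass to remove the 2x-carrier product; the factor of 2 restores
    the original envelope amplitude."""
    alpha = 1.0 - math.exp(-1.0 / (fs_hz * rc_seconds))
    out, y = [], 0.0
    for n, s in enumerate(samples):
        ref = math.sin(2 * math.pi * carrier_hz * n / fs_hz)  # limiter output
        y += alpha * (2.0 * s * ref - y)                      # low-pass filter
        out.append(y)
    return out

# Same scaled-down AM signal as before: 45.75 kHz carrier, 1 kHz envelope
fs = 1_000_000
am = [(1 + 0.5 * math.sin(2 * math.pi * 1000 * n / fs))
      * math.sin(2 * math.pi * 45750 * n / fs) for n in range(2000)]
video = synchronous_detect(am, fs, 45750, rc_seconds=50e-6)
print(round(sum(video[1000:]) / 1000, 2))   # mean recovered level, near 1.0
```

Because the product of the signal and reference is linear in the envelope, there is no diode-curvature compression and no quadrature term when the reference phase is correct.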
The advantages of synchronous demodulation are:
• Higher gain than a diode detector
• Low level input, which considerably reduces beat-signal generation
• Low level detection operation, which reduces IF harmonics by more than 20 dB
• Little or no quadrature distortion (see Figure 7.1.19), depending upon the lack of residual
phase modulation (purity) of the reference carrier used for detection
• Circuit easily formatted in an IC
• Input level does not overload the amplifier stages, which would cause a bias shift.
• Bias operates each functional component of the system at its optimum linearity point, that is,
the lowest third-order product for amplifiers, and highest practical second-order for mixers
and detectors.
• Spurious responses in the output are 50 dB below the desired signal.
The function of the automatic gain control system is to maintain signal levels in these stages
at the optimum value over a large range of receiver input levels. The control voltage to operate
the system usually is derived from the video detector or the first video amplifier stage. Common
implementation techniques include the following:
• Average AGC, which operates on the principle of keeping the carrier level constant in the IF.
Changes in modulation of the carrier will affect the gain control, and therefore it is used only
in low-cost receivers.
• Peak or sync-clamp AGC, which compares the video sync-tip level with a fixed dc level. If
the sync-tip amplitude exceeds the reference level, a control voltage is applied to the RF and
IF stages to reduce their gain and thus restore the sync-tip level to the reference level.
• Keyed or gated AGC, which is similar to sync-clamp AGC. The stage where the comparison
of sync-tip and reference signals takes place is activated only during the sync-pulse interval
by a horizontal flyback pulse. Because the AGC circuit is insensitive to spurious noise signals
between sync pulses, the noise immunity is considerably improved over the other two sys-
tems.
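The noise-immunity advantage of keyed AGC can be shown with a toy loop. Everything here (the gate width, reference level, proportional loop gain, and signal values) is illustrative, not from the text:

```python
def keyed_agc(video_lines, gate_indices, sync_ref, loop_gain=0.1):
    """Keyed AGC sketch: the sync tip is compared with a reference only
    during the gated interval, so noise between sync pulses is ignored."""
    gain = 1.0
    for line in video_lines:
        tip = max(line[i] * gain for i in gate_indices)   # gated sync tip
        gain -= loop_gain * (tip - sync_ref)              # restore tip to ref
    return gain

line = [2.0] * 5 + [0.3] * 45    # strong signal: sync tip at twice nominal
line[20] = 5.0                   # impulse noise between sync pulses, ignored
final = keyed_agc([line] * 100, range(5), sync_ref=1.0)
print(round(final, 3))           # 0.5: gain halved to restore the tip level
```

An average or peak AGC would have reacted to the 5.0 noise spike; the gated comparison never sees it.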
AGC Delay
For best receiver SNR, the tuner RF stage is operated at maximum gain for RF signals up to a
threshold level of 1 mV. In discrete amplifier chains, the AGC system begins to reduce the gain
of the second IF stage proportionately as the RF signal level increases from just above the sensi-
tivity level to the second-stage limit of gain reduction (20 to 25 dB). For increasing signals, the
first IF stage gain is reduced. Finally, above the control delay point of 1 mV, the tuner gain is
reduced. A plot of the relationships between receiver input RF level and the gain characteristics
of the tuner and IF are shown in Figure 7.1.21a and the noise figure is shown in Figure 7.1.21b.
System Analysis
The interconnection of the amplifier stages (RF, mixer, and IF), the detector, and the lowpass-fil-
tered control voltage is—in effect—a feedback system. The loop gain of the feedback system is
nonlinear, increasing with increasing signal level. There are two principal constraints that
designers must cope with:
• First, the loop gain should be large to maintain good regulation of the detector output over a
wide range of input signal levels. As the loop gain increases, the stability of the system will
decrease and oscillation can occur.
• Second, the ability of a receiver to reject impulse noise is inversely proportional to the loop
gain. Excessive impulse noise can saturate the detector and reduce the RF-IF gain, thereby
causing either a loss in picture contrast or a complete loss of picture. This problem can be
alleviated by bandwidth-limiting the video signal fed to the AGC detector, or the use of keyed
or gated AGC to block false input signals (except during the sync pulse time).
Figure 7.1.21 Automatic gain control principles: (a) gain control as a function of input level, (b)
noise figure of RF and IF stages with gain control and the resulting receiver SNR.
A good compromise between regulation of video level and noise immunity is realized with a
loop-gain factor of 20 to 30 dB.
The filter network and filter time constants play an important part in the effectiveness of AGC
operation. The filter removes the 15.75 kHz horizontal sync pulses and the equalizing pulses,
the latter occurring in blocks in the 60 Hz vertical sync interval. The filter time constants must
be chosen to
eliminate or minimize the following problems:
• Airplane flutter, a fluctuation in signal level caused by alternate cancellation and reinforce-
ment of the received signal by reflections from an airplane flying overhead in the path
between the transmitting and receiving antennas. The amplitude may vary as much as 2 to 1 at
rates from 50 to 150 Hz. If the time constants are too long, especially that of the control volt-
age to the RF stage, the gain will not change rapidly enough to track the fluctuating level of
the signal. The result will be a flutter in contrast and brightness of the picture.
• Vertical sync pulse sag, resulting from the AGC system speed of response being so fast that it
will follow changes in sync pulse energy during the vertical sync interval. Gain increases dur-
ing the initial half-width equalizing pulses, then decreases during the slotted vertical sync
pulse, increases again during the final equalizing pulses, and then returns to normal during
the end of vertical blanking and the next field of picture. Excessive sag can cause loss of
proper interlace and vertical jitter or bounce. Sag can be reduced by limiting the response of
the AGC control loop, or through the use of keyed AGC.
• Lock-out of the received signal during channel switching, caused by excessive speed of the
AGC system. This can result in as much as a 2:1 decrease in pull-in range for the AGC sys-
tem. In keyed AGC systems, if the timing of the horizontal gating pulses is incorrect, exces-
sive or insufficient sync can result at the sync separator, which—in turn—will upset the
operation of the AGC loop.

Table 7.1.5 Typical AFC Closed-Loop Characteristics

Parameter                                Value
Pull-in range                            ±750 kHz
Hold-in range                            ±1.5 MHz
Frequency error for ±500 kHz offset      <50 kHz

7.1.2f Automatic Frequency Control (AFC)
Also called automatic fine-tuning (AFT), the AFC circuit senses the frequency of the pic-
ture carrier in the IF section and sends a correction voltage to the local oscillator in the tuner if
the picture carrier is not on the standard frequency of 45.75 MHz.
Typical AFC systems consist of a frequency discriminator prior to the video detector, a low-
pass filter, and a varactor diode controlling the local oscillator. The frequency discriminator in
discrete transistor IF systems has typically been the familiar balanced-diode type used for FM
radio receivers with the components adjusted for wide-band operation centering on 45.75 MHz.
A small amount of unbalance is designed into the circuit to compensate for the unbalanced side-
band components of vestigial sideband signal characteristics. The characteristics of AFC closed
loops are shown in Table 7.1.5.
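The closed-loop numbers in Table 7.1.5 can be illustrated with a toy correction function. The 0.9 loop factor is an assumption chosen to match the <50 kHz residual quoted for a 500 kHz offset; a real discriminator S-curve is of course not piecewise linear:

```python
def afc_correction_khz(picture_if_mhz, pull_in_khz=750.0, loop_factor=0.9):
    """Illustrative AFC action: inside the pull-in range the loop cancels
    most of the tuning error; outside it, no correction is applied."""
    error_khz = (picture_if_mhz - 45.75) * 1000.0
    if abs(error_khz) > pull_in_khz:
        return 0.0                     # beyond pull-in: loop cannot acquire
    return -loop_factor * error_khz

offset_khz = 500.0
residual = offset_khz + afc_correction_khz(45.75 + offset_khz / 1000.0)
print(round(residual, 3))   # 50.0 kHz residual, at the Table 7.1.5 limit
```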
In early solid-state designs, the AFC block was a single IC with a few external components.
More current designs include the AFC circuit in the form of a synchronous demodulator on the
same IC die as the other functions of the IF section.
detector represents little improvement over an envelope diode detector unless a narrow-band fil-
ter is used in the reference channel [5].
Split-carrier sound processes the IF picture and sound carriers as shown in Figure 7.1.24.
Quasi-parallel sound utilizes a special filter such as the SAW filter of Figure 7.1.14 to elimi-
nate the Nyquist slope in the sound detection channel, thereby eliminating a major source of
ICPM generation in the receiver. The block diagram of this system is shown in Figure 7.1.25.
Nearly all sound channels in present-day television receivers are designed as a one- or two-IC
configuration. The single IC contains the functions of sound IF amplifier-limiter, FM detector,
volume control, and audio output. Two-chip systems usually incorporate stereo functionality or
audio power amplification.
Four types of detector circuits typically are used in ICs for demodulation of the FM sound
carrier:
• The quadrature detector, also known as the gated coincidence detector or analog multiplier,
measures the instantaneous phase shift across a reactive circuit as the carrier frequency shifts.
At center frequency (zero deviation), the LC phase network gives a 90° phase shift to V2 com-
pared with V1. As the carrier deviates, the phase shift changes in proportion to the amount and
direction of the deviation.
• The balanced peak detector, which utilizes two peak or envelope detectors, a differential
amplifier, and a frequency-selective circuit or piezoceramic discriminator.
• The differential peak detector, which operates at a low voltage level and does not require
square-wave switching pulses. Therefore, it creates less harmonic radiation than the quadra-
ture detector. In some designs, a low-pass filter is placed between the limiter and peak detec-
tor to further reduce harmonic radiation and increase AM rejection.
• The phase-locked-loop detector, which requires no frequency-selective LC network to
accomplish demodulation. In this system, the voltage-controlled oscillator (VCO) is phase-
locked by the feedback loop into following the deviation of the incoming FM signal. The low-
frequency error voltage that forces the VCO to track is—in fact—the demodulated output.
Figure 7.1.26 Contrast control circuits: (a) contrast control network in the emitter circuit, (b) equiv-
alent circuit at maximum contrast (maximum gain), (c) minimum contrast.
Picture Controls
A video gain or contrast control and a brightness or background control are provided to allow the
viewer to select the contrast ratio and overall brightness level that produce the most pleasing pic-
ture for a variety of scene material, transmission characteristics, and ambient lighting conditions.
The contrast control usually provides a 4:1 gain change. This is accomplished either by attenua-
tor action between the output of the video stage and the CRT or by changing the ac gain of the
video stage by means of an ac-coupled variable resistor in the emitter circuit. The brightness con-
trol shifts the dc bias level on the CRT to raise or lower the video signal with respect to the CRT
beam cutoff voltage level. (See Figure 7.1.26.)
AC and dc Coupling
For perfect picture transmission and reproduction, it is necessary that all shades of gray are
demodulated and reproduced accurately by the display device. This implies that the dc level
developed by the video demodulator, in response to the various levels of video carrier, must be
carried to the picture tube. Direct coupling or dc restoration is often used, especially in color
receivers where color saturation is directly dependent upon luminance level. (See Figure 7.1.27.)
Figure 7.1.27 CRT luminance drive circuit: (a) brightness control in CRT cathode circuit, (b)
brightness control in CRT grid circuit.
Many low cost monochrome designs utilize only ac coupling with no regard for the dc infor-
mation. This eases the high-voltage power supply design as well as simplifying the video cir-
cuitry. These sets will produce a picture in which the average value of luminance remains nearly
constant. For example, a night scene having a level of 15 to 20 IRE units and no peak-white
excursions will tend to brighten toward the luminance level of the typical daytime scene (50 IRE
units). Likewise, a full-raster white scene with few black excursions will tend to darken toward
the average luminance level. A compromise is achieved by the use of partial dc coupling, in
which a high-resistance path exists between the second detector and the CRT. This path usually
has a gain of one-half to one-fourth that of the ac signal path.
The transient response of the video amplifier is controlled by its amplitude and phase charac-
teristics. The low-frequency transient response, including the effects of dc restoration, if used, is
measured in terms of distortion to the vertical blanking pulse. Faithful reproduction requires that
the change in amplitude over the pulse duration, usually a decrease from initial value called sag
or tilt, be less than 5 percent. In general, there is no direct and simple relationship between the
sag and the lower 3 dB cutoff frequency. However, lowering the 3 dB cutoff frequency will
reduce the tilt, as illustrated in Figure 7.1.28.
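For the special case of a single RC coupling section, the sag has a simple closed form: the pulse top decays as exp(-t/τ). This is a sketch under that single-pole assumption (the text notes the general relationship is not this simple); the 1.33 ms vertical blanking duration is an approximation (21 lines at 63.5 μs):

```python
import math

def tilt_percent(pulse_ms, time_constant_ms):
    """Sag of a flat-topped pulse through a single RC high-pass section:
    the top decays as exp(-t/tau) over the pulse duration."""
    return 100.0 * (1.0 - math.exp(-pulse_ms / time_constant_ms))

# Vertical blanking pulse is roughly 1.33 ms; tau = R*C of the coupling
print(round(tilt_percent(1.33, 5.0), 1))    # ~23 percent: objectionable sag
print(round(tilt_percent(1.33, 27.0), 1))   # ~4.8 percent: meets the 5 percent criterion
```

A 27 ms time constant corresponds to a lower 3 dB cutoff near 6 Hz, which is why video coupling networks reach so far below the frame rate.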
Figure 7.1.28 Video stage low-frequency response: (a) square-wave output showing tilt, (b) RC
time constant circuits in the common-emitter stage that affect low-frequency response.
variety, substantially reduce tilt, and their effect must be considered in specifying the overall
response. Second, extended LF response makes the system more susceptible to instability and
low-frequency interference. Current coupling through a common power supply impedance can
produce the low-frequency oscillation known as “motorboating.” Motorboating is not usually a
problem in television receiver video amplifiers because they seldom employ the number of
stages required to produce regenerative feedback, but in multistage amplifiers the tendency
toward motorboating is reduced as the LF response is reduced.
A more commonly encountered problem is the effect of airplane flutter and line bounce.
Although a fast-acting AGC can substantially reduce the effects of low-frequency amplitude
variations produced by airplane reflections, the effect is so annoying visually as to warrant a sac-
rifice in LF response to bring about further reduction. A transient in line-voltage amplitude,
commonly called a line bounce, also can produce an annoying brightness transient that can simi-
larly be reduced through a sacrifice of LF response. Special circuit precautions against line
bounce include the longest possible power supply time constant, bypassing the picture tube elec-
trodes to the supply instead of ground, and the use of coupling networks to attenuate the response
sharply below the LF cutoff frequency. The overall receiver response is usually an empirically
determined compromise.
The high-frequency transient characteristic is usually expressed as the amplifier response to
an ideal input voltage or current step. This response is shown in Figure 7.1.29 and described in
the following terms:
• Rise time τR is the time required for the output pulse to rise from 10 to 90 percent of its final
(steady-state) value.
• Overshoot is the amplitude by which the transient rise exceeds its final value, expressed as a
percentage of the final value.
• Preshoot is the amplitude by which the transient oscillatory output waveform exceeds its ini-
tial value.
• Smear is an abnormally slow rise as the output wave approaches its final value.
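The rise-time and overshoot definitions above translate directly into measurements on a sampled step response. The response values below are hand-made illustrations, not data from the figure:

```python
def step_metrics(response, final_value, dt):
    """10-90 percent rise time and percent overshoot of a sampled step
    response, per the definitions above."""
    i10 = next(i for i, v in enumerate(response) if v >= 0.1 * final_value)
    i90 = next(i for i, v in enumerate(response) if v >= 0.9 * final_value)
    rise_time = (i90 - i10) * dt
    overshoot = 100.0 * (max(response) - final_value) / final_value
    return rise_time, overshoot

# Illustrative step response sampled every 0.1 us
resp = [0.0, 0.05, 0.2, 0.5, 0.85, 1.05, 1.10, 1.04, 0.98, 1.0, 1.0]
rt, ovs = step_metrics(resp, final_value=1.0, dt=0.1)
print(round(rt, 2), round(ovs, 1))   # 0.3 us rise time, 10 percent overshoot
```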
objectionable outlining can occur, especially to those transients in the white direction. Further-
more, the visibility of background noise is increased.
For several reasons—including possible variations in transient response of the transmitted
signal, distortion due to multipath, antennas, receiver tolerances, SNR, and viewer preference—
it is difficult to define a fixed response at the receiver that is optimum under all conditions.
Therefore, it is useful to make the amplitude response variable so it can be controlled to best suit
the individual situation. Over the range of adjustment, it is assumed that the overall group delay
shall remain reasonably flat across the video band. The exact shape of the amplitude response is
directly related to the desired time domain response (height and width of preshoot, overshoot,
and ringing) and chroma subcarrier sideband suppression.
Because the peaked signal later operates on the nonlinear CRT gun characteristics, large
white preshoots and overshoots can contribute to excessive beam currents, which can cause CRT
spot defocusing. To alleviate this, circuits have been developed that compress large excursions of
the peaking component in the white direction. For best operation, it is desirable that the signals
being processed have equal preshoot and overshoot.
Low level, high frequency noise in the luminance channel can be removed by a technique
called coring. One coring technique involves nonlinearly operating on the peaking or edge-
enhancement signal, discussed earlier in this section. The peaking signal is passed through an
amplifier having low or essentially no gain in the mid-amplitude range. When this modified
peaking signal is added to the direct video, the large transitions will be enhanced, but the small
ones (noise) will not be, giving the illusion that the picture sharpness has been increased while
the noise has been decreased.
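The coring operation can be sketched as a dead-zone nonlinearity applied to the peaking signal before it is summed with the direct video path. The threshold and signal values below are illustrative assumptions, not figures from the text:

```python
def core(peaking, threshold=0.05):
    """Dead-zone ('coring') nonlinearity: small excursions of the peaking
    (edge-enhancement) signal are treated as noise and suppressed; large
    transitions pass, shifted so the transfer curve remains continuous."""
    out = []
    for x in peaking:
        if abs(x) <= threshold:
            out.append(0.0)                # low-level noise removed
        elif x > 0.0:
            out.append(x - threshold)      # large white-going transition kept
        else:
            out.append(x + threshold)      # large black-going transition kept
    return out

def enhance(video, peaking, threshold=0.05):
    """Sum the cored peaking signal with the direct video path."""
    return [v + p for v, p in zip(video, core(peaking, threshold))]
```

Small excursions (noise) contribute nothing to the output, while large transitions are still enhanced.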
Burst Separation
Complete separation of the color synchronizing burst from video requires time gating. The gate
requirements are largely determined by the horizontal sync and burst specifications, illustrated in
Figure 7.1.30. It is essential that all video information be excluded. It is also desirable that both
the leading and trailing edges of burst be passed so that the complementary phase errors intro-
duced at these points by quadrature distortion average to zero. Widening the gate pulse to mini-
mize the required timing accuracy has a negligible effect on the noise performance of the
reference system and may be beneficial in the presence of echoes. The ≈ 2 μs spacing between
the trailing edges of burst and horizontal blanking determines the total permissible timing varia-
tion. Noise modulation of the gate timing should not permit noise excursions to encroach upon
the burst because the resulting cross modulation will have the effect of increasing the noise
power delivered to the reference system.
greater instantaneous inaccuracies can be tolerated in the presence of thermal noise so that an
rms phase error specification of 5 to 10° at an S/N of unity may be regarded as typical.
Reference Systems
Three types of reference synchronization systems have been used:
• Automatic phase control of a VCO
• Injection lock of a crystal
• Ringing of a crystal filter
Best performance can be achieved by the APC loop. In typical applications, the figure of
merit can be made much smaller (better) for the APC loop than for the other systems by making
the factor (1/y) + m have a value considerably less than 1, even as small as 0.1. The parts count
for each type of system, at one time much higher for the APC system, is no longer a consideration
because of IC implementations where the oscillator and phase detector are integrated and only
the resistors and capacitors of the filter network and oscillator crystal are external.
The APC circuit is a phase-actuated feedback system consisting of three functional compo-
nents:
• A phase detector
• Low-pass filter
• DC voltage-controlled oscillator
The overall system is illustrated in Figure 7.1.31. The characteristics of these three units
define both the dynamic and static loop characteristics and, hence, the overall system perfor-
mance.
The phase detector generates a dc output E whose polarity and amplitude are proportional to
the direction and magnitude of the relative phase difference dφ between the oscillator and syn-
chronizing (burst) signals.
The VCO is an IC implementation that requires only an external crystal and simple phase-
shift network. The oscillator can be shifted ± 45° by varying the phase-control voltage. This
leads to symmetrical pull-in and hold-in ranges.
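The dynamics of this loop can be illustrated with a minimal discrete-time simulation of the phase error through the three functional blocks (phase detector, low-pass filter, voltage-controlled oscillator). The gains, filter cutoff, and initial frequency offset are arbitrary illustrative values:

```python
import math

def apc_settle(f_offset=200.0, kd=1.0, kv=2000.0, fc=1000.0,
               fs=1.0e6, n=5000):
    """Simulate the phase error of an APC loop.
    f_offset: VCO free-running error from burst frequency, Hz
    kd: phase-detector gain, V/rad; kv: VCO gain, Hz/V; fc: filter cutoff, Hz."""
    dt = 1.0 / fs
    a = dt / (1.0 / (2 * math.pi * fc) + dt)   # first-order RC filter coefficient
    theta = 0.0                                # VCO phase minus burst phase, rad
    v = 0.0                                    # filter output (VCO control voltage)
    for _ in range(n):
        e = -kd * math.sin(theta)              # phase-detector output
        v += a * (e - v)                       # low-pass filter
        theta += 2 * math.pi * (f_offset + kv * v) * dt
    return theta, v
```

In lock the loop holds a static phase error of sin θ = Δf/(kd kv); for the values shown, θ ≈ 0.1 rad (about 5.7°).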
Chroma Demodulation
The chroma signal can be considered to be made up of two amplitude-modulated carriers having
a quadrature phase relationship. Each of these carriers can be individually recovered through the
use of a synchronous detector. The reference input to the demodulator is that phase which will
demodulate only the I signal. The output contains no Q signal.
Demodulation products of 7.16 MHz in the output signal can contribute to an optical interfer-
ence moiré pattern in the picture. This is related to the line geometry of the shadow mask. The
7.16 MHz output also can result in excessively high line-terminal radiation from the receiver. A
first-order low-pass filter with cutoff of 1 to 2 MHz usually provides sufficient attenuation. In
extreme cases, an LC trap may be required.
Demodulation Axis
Over the double sideband region (± 500 kHz around the subcarrier frequency) the chrominance
signal can be demodulated as pure quadrature AM signals along either the I and Q axes or the
R–Y and B–Y axes. The latter choice leads to a simpler matrix for obtaining the color drive signals R,
G, and B.
Modern practice has moved away from the classic demodulation angles for two main reasons:
• Receiver picture tube phosphors have been modified to yield greater light output and can no
longer produce the original NTSC primary colors. The chromaticity coordinates of the pri-
mary colors, as well as the RGB current ratios to produce white balance, vary from one CRT
manufacturer to another.
• The white-point setup has, over the years, moved from the cold 9300 K of monochrome tubes
to the warmer illuminant C and D65, which produce more vivid colors, more representative of
those that can be seen in natural sunlight.
The color-difference and luminance signals are matrixed at a level of a few volts and then amplified to a higher level (100 to 200 V) suit-
able for CRT cathode drive.
Direct current stability, frequency response, and linearity of the three stages, even if some-
what less than ideal, should be reasonably well matched to ensure overall gray-scale tracking.
Bias and gain adjustments should be independent in their operation, rather than interdependent,
and should minimally affect those characteristics listed previously in this section.
Figure 7.1.32 illustrates a simple example of one of the three CRT drivers. If the amplifier
black-level bias voltage equals the black level from the RGB decoder, drive adjustment will not
change the amplifier black-level output voltage level or affect CRT cutoff. Furthermore, if RB >>
RE, drive level will be independent of bias setting. Note also that frequency response-determin-
ing networks are configured to be unaffected by adjustments.
Frequently, the shunt peaking coil can be made common to all three channels, because differ-
ences between the channels are predominantly color-difference signals of relatively narrow band-
width. Although the frequency responses could be compensated to provide the widest possible
bandwidth, this is usually not necessary when the frequency response of preceding low-level
luminance processing (especially the peaking stage) is factored in. One exception, in which
output stage bandwidth must be increased to its maximum, is an application (television receiver
or video monitor) where direct RGB inputs are provided for auxiliary services such as comput-
ers or DVDs.
Figure 7.1.33 Color system filtering: (a) frequency spectrum of NTSC color system, showing inter-
leaving of signals, (b) simplified 1H delay comb filter block diagram, (c) chrominance VC and lumi-
nance VL outputs of the comb filter.
easily made, in principle, by delaying the composite video signal one horizontal scan period (63.555
μs in NTSC-M) and adding or subtracting to the undelayed composite video signal (Figure
7.1.33b and c).
In the sum channel, frequencies at the horizontal rate and all of its integral multiples reinforce
in phase, while the interleaved frequencies are out of phase and cancel; this channel can be used
as the luminance path. In the difference channel, the integral multiples cancel while the inter-
leaved frequencies reinforce; this channel can serve as the chrominance channel. The filter
characteristic and interleaving are shown in Figure 7.1.33c.
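The sum/difference comb of Figure 7.1.33b can be sketched directly. The example assumes sampling at four times the color subcarrier (910 samples per line, so the subcarrier inverts phase from one line to the next); the signal levels are arbitrary:

```python
import math

LINE = 910   # samples per line at 4x subcarrier: 227.5 cycles x 4

def comb_separate(video, line=LINE):
    """1H sum/difference comb: the sum cancels the interleaved (chroma)
    components, the difference cancels the line-repetitive (luma) components."""
    luma = [0.5 * (video[n] + video[n - line]) for n in range(line, len(video))]
    chroma = [0.5 * (video[n] - video[n - line]) for n in range(line, len(video))]
    return luma, chroma

# Two-line test signal: flat luminance plus a subcarrier that inverts each line
video = [0.5 + 0.3 * math.sin(math.pi * n / 2) for n in range(2 * LINE)]
luma, chroma = comb_separate(video)
```

The luminance output is the flat 0.5 level with the subcarrier cancelled, and the chrominance output recovers the 0.3 amplitude subcarrier.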
Automatic Circuits
The relative level of the chroma subcarrier in the incoming signal is highly sensitive to transmis-
sion path disorders, thereby introducing objectionable variations in saturation. These can be
observed between one received signal and another or over a period of time on the same channel
unless some adaptive correction is built into the system. The color burst reference, transmitted at
40 IRE units peak-to-peak, is representative of the same path distortions and is normally used as
a reference for automatic gain controlling the chroma channel. A balanced peak detector or syn-
chronous detector, having good noise rejection characteristics, detects the burst level and pro-
vides the control signal to the chroma gain-controlled stage.
Allowing the receiver chroma channel to operate during reception of a monochrome signal
will result in unnecessary cross color and colored noise, made worse by the ACC increasing the
chroma amplifier gain to the maximum. Most receivers, therefore, cut off the chroma channel
system when the received burst level goes below approximately 5 to 7 percent. Hysteresis has
been used to minimize the flutter or threshold problem with varying signal levels.
Burst-referenced ACC systems perform adequately when receiving correctly modulated sig-
nals with the appropriate burst-to-chroma ratio. Occasionally, however, burst level may not bear a
correct relation to its accompanying chroma signal, leading to incorrectly saturated color levels.
It has been determined that most viewers are more critical of excessive color levels than insuffi-
cient ones. Experience has shown that when peak chroma amplitude exceeds burst by greater
than 2/1, limiting of the chroma signal is helpful. This threshold nearly corresponds to the ampli-
tude of a 75 percent modulated color bar chart (2.2/1). At this level, negligible distortion is intro-
duced into correctly modulated signals. Only those that are nonstandard are affected
significantly. The output of the peak chroma detector also is sent to the chroma gain-controlled
stage.
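The ACC, color-killer, and peak-chroma-limiter rules described above can be collected into one gain computation. The 40 IRE burst reference, the roughly 6 percent killer threshold, and the 2:1 limit ratio come from the text; the function itself is an illustrative sketch, not a circuit description:

```python
def chroma_gain(burst_pp, chroma_pp, ref_burst=40.0,
                kill_fraction=0.06, limit_ratio=2.0):
    """Burst-referenced ACC with color killer and peak-chroma limiter.
    burst_pp: detected burst level, IRE p-p; chroma_pp: peak chroma, IRE p-p."""
    if burst_pp < kill_fraction * ref_burst:
        return 0.0                        # color killer: treat as monochrome
    gain = ref_burst / burst_pp           # restore burst to its nominal 40 IRE
    if chroma_pp * gain > limit_ratio * ref_burst:
        gain = limit_ratio * ref_burst / chroma_pp   # limit nonstandard chroma
    return gain
```

A half-amplitude burst doubles the chroma gain; a missing burst kills the channel; chroma exceeding twice the (corrected) burst is scaled back to the limit.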
Tint
One major objective in color television receiver design is to minimize the incidence of flesh-tone
reproduction with incorrect hue. Automatic hue-correcting systems can be categorized into two
classes:
• Static flesh-tone correction, achieved by selecting the chroma demodulating angles and gain
ratios to desensitize the resultant color-difference vector in the flesh-tone region (+I axis).
The demodulation parameters remain fixed, but the effective Q axis gain is reduced. This has
the disadvantage of distorting hues in all four quadrants.
• Dynamic flesh-tone corrective systems, which can adaptively confine correction to the region
within several degrees of the positive I axis, leaving all other hues relatively unaffected. This
is typically accomplished by detecting the phase of the incoming chroma signal and modulating
the phase angle of the demodulator reference signal to result in an effective phase shift of
10 to 15° toward the I axis for a chroma vector that lies within 30° of the I axis. This approach
produces no amplitude change in the chroma. In fact, for chroma saturation greater than 70
percent, the system is defeated on the theory that the color is not a flesh tone.

Figure 7.1.34 Vertical interval reference (VIR) signal. Note that the chrominance and the program
color burst are in phase.
A simplification in circuitry can be achieved if the effective correction area is increased to the
entire positive-I 180° sector. A conventional phase detector can be utilized and the maximum
correction of approximately 20° will occur for chroma signals having phase of ± 45° from the I
axis. Signals with phase greater or less than 45° will have increasingly lower correction values,
as illustrated in Figure 7.1.34.
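As an illustrative model (not a circuit from the text), the wide-sector behavior can be approximated by a sin 2φ law, which is zero on the I and Q axes and peaks near ±45°:

```python
import math

def tint_correction(phase_deg, max_corr_deg=20.0):
    """Hue correction (degrees) pulling a chroma vector toward the +I axis.
    Zero on the I axis, maximum near +/-45 deg, zero again at the Q axis."""
    if not -90.0 < phase_deg < 90.0:
        return 0.0                        # outside the positive-I sector
    corr = max_corr_deg * math.sin(math.radians(2.0 * abs(phase_deg)))
    return -corr if phase_deg > 0 else corr   # rotate toward the I axis
```

A vector at +45° receives the full 20° rotation back toward the I axis; vectors nearer either axis receive progressively less.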
The vertical interval reference (VIR) signal, as shown in Figure 7.1.34, provides references
for black level, luminance, and—in addition to burst—a reference for chroma amplitude and
phase. While originally developed to aid broadcasters, it has been employed in television receiv-
ers to correct for saturation and hue errors resulting from transmitter or path distortion errors.
Figure 7.1.35 Sync separator: (a) typical circuit, (b) sync waveform.
signals. For noninterlaced signals, the systems usually operate in a free-running mode with injec-
tion of the video-derived sync pulse causing lock.
A critical characteristic, and probably the most complex portion of any countdown system, is
the circuitry that properly adjusts for nonstandard sync waveforms. These can include simple
525-line noninterlaced fields, distorted vertical sync blocks, blocks having no horizontal serra-
tions, fields with excessive or insufficient lines, and combinations of the above.
Vertical Scanning
Class-B vertical circuits consist essentially of an audio frequency amplifier with current feed-
back. This approach maintains linearity without the need for an adjustable linearity control. Yoke
impedance changes caused by temperature variations will not affect the yoke current, thus, a
thermistor is not required. The current-sensing resistor must, of course, be temperature stable.
An amplifier that uses a single NPN and a single PNP transistor in the form of a complementary
output stage is given in Figure 7.1.41. Quasi-complementary, Darlington outputs and other com-
mon audio output stage configurations also can be used.
Establishing proper dc bias for the output stages is critical. Too little quiescent current will
result in crossover distortion that will impose a faint horizontal white line in the center of the pic-
ture even though the distortion may not be detectable on an oscilloscope presentation of the yoke
current waveform. Too much quiescent current results in excessive power dissipation in the out-
put transistors.
Figure 7.1.40 Gated phase detector in an IC implementation: (a) circuit diagram, (b) pulse-timing
waveforms.
voltage across the yoke goes positive, thus forcing QR into saturation. This places the cathode of
CR at the supply potential and the anode at a level of 1.5 to 2 times the supply, depending upon
the values of RR and CR.
Horizontal Scanning
The horizontal scan system has two primary functions:
• It provides a modified sawtooth-shaped current to the horizontal yoke coils to cause the elec-
tron beam to travel horizontally across the face of the CRT.
• It provides drive to the high-voltage or flyback transformer to create the voltage needed for
the CRT anode.
Frequently, low-voltage supplies also are derived from the flyback transformer. The major
components of the horizontal-scan section consist of a driver stage, horizontal output device
(either bipolar transistor or SCR), yoke current damper diode, retrace capacitor, yoke coil, and
flyback transformer, as illustrated in Figure 7.1.43.
During the scan, or trace, interval, the deflection yoke may be considered a pure inductance
with a dc voltage impressed across it. This creates a sawtooth waveform of current (see Figure
7.1.44). This current flows through the damper diode during the first half scan. It then reverses
direction and flows through the horizontal output transistor collector. This sawtooth-current
waveform deflects the electron beam across the face of the picture tube. A similarly shaped cur-
rent flows through the primary winding of the high-voltage output transformer.
At the beginning of the retrace interval, the transformer and yoke inductances transfer energy
to the retrace-tuning capacitor and the accompanying stray capacitances, thereby causing a half
sine wave of voltage to be generated. This high-energy pulse appears on the transistor collector
and is stepped up, via the flyback transformer, to become the high voltage for the picture tube
anode. Finally, at the end of the cycle, the damper diode conducts, and another horizontal scan is
started.
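Because the yoke behaves as an inductance with a nearly constant voltage across it during trace, the peak-to-peak sawtooth current follows from v = L di/dt. The voltage and inductance used in the example are typical illustrative values, not figures from the text:

```python
def yoke_current_pp(v_yoke, l_yoke, t_trace=52.0e-6):
    """Peak-to-peak sawtooth yoke current (A) built up during the trace
    interval: delta_i = v * t / L."""
    return v_yoke * t_trace / l_yoke
```

For example, 140 V across a 1.0 mH yoke builds about 7.3 A p-p over the 52 μs trace period.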
the HV winding to primary winding. The peak voltage across the primary during retrace is given
by
Vp(pk) = 0.8 Ein (1.79 + 1.57 Ttrace/Tretrace)    (7.1.2)

Where:
Ein = supply voltage (B+)
0.8 = accounts for the pulse shape factor with third harmonic tuning
Ttrace = trace period, ≈ 52.0 μs
Tretrace = retrace period, ≈ 11.5 μs
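Equation (7.1.2) is easily evaluated numerically; the 120 V supply in the example is arbitrary:

```python
def flyback_peak_primary(e_in, t_trace=52.0e-6, t_retrace=11.5e-6):
    """Peak primary voltage during retrace, per Equation (7.1.2)."""
    return e_in * 0.8 * (1.79 + 1.57 * t_trace / t_retrace)
```

With the nominal trace and retrace periods, the retrace pulse is roughly 7.1 times B+, or about 853 V for a 120 V supply.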
Integrated Flybacks
Most medium- and large-screen color receivers utilize an integrated flyback transformer in
which the HV winding is segmented into three or four parallel-wound sections. These sections
are series-connected with a diode between adjacent segments. These diodes are physically
mounted as part of the HV section. The transformer is then encapsulated in high-voltage polyes-
ter or epoxy.
Two HV winding construction configurations also have been used. One, the layer or solenoid-
wound type, has very tight coupling to the primary and operates well with no deliberate har-
monic tuning. Each winding (layer) must be designed to have balanced voltage and capacitance
with respect to the primary. The second, a bobbin or segmented-winding design, has high leak-
age inductance and usually requires tuning to an odd harmonic (e.g., the ninth). Regulation of
this construction is not quite as good as that of the solenoid-wound type.
Flyback-generated supplies provide a convenient means for isolation between different
ground systems, as required for an iso-hot chassis.
Power Supplies
Most receivers use the flyback pulse from the horizontal transformer as the source of power for
the various dc voltages required by the set. Using the pulse waveform at a duty cycle of 10 or 15
percent, by proper winding direction and grounding of either end of the winding, several differ-
ent voltage sources can be created.

Figure 7.1.45 Auxiliary power sources derived from the horizontal output transformer. (Courtesy
of Philips.)
Scan rectified supplies are operated at a duty cycle of approximately 80 percent and are thus
better able to furnish higher current loads. Also, the diodes used in such supplies must be capable
of blocking voltages that are nine to ten times larger than the level they are producing. Diodes
having fast-recovery characteristics are used to keep the power dissipation at a minimum during
the turn-off interval because of the presence of this high reverse voltage.
A typical receiver system containing the various auxiliary power supplies derived from fly-
back transformer windings is shown in Figure 7.1.45. Transistor Q452 switches the primary
winding at a horizontal frequency rate. The +12 V, +24 V, +25 V, and –27 V supplies are scan-
rectified. The +185 V, overvoltage sensing, focus voltage, and 25 kV anode voltage are derived
by retrace-rectified supplies. The CRT filament is used directly in its ac mode.
As noted in the previous section, flyback-generated supplies provide a convenient means for
isolation between different ground systems. Figure 7.1.46 shows the block diagram of such a
television receiver power supply system [6, 7].
Figure 7.1.46 Color receiver power supply using a switched-mode regulator system. (Courtesy of
Philips.)
7.1.3 References
1. FCC Regulations, 47 CFR, 15.65, Federal Communications Commission, Washington,
D.C.
2. DeVries, A., et al.: “Characteristics of Surface-Wave Integratable Filters (SWIFS),” IEEE
Trans., vol. BTR-17, no. 1, pg. 16.
3. Yamada and Uematsu: “New Color TV with Composite SAW IF Filter Separating Sound
and Picture Signals,” IEEE Trans., vol. CE-28, no. 3, pg. 193, 1982.
4. Neal, C. B., and S. Goyal: “Frequency and Amplitude Phase Effects in Television Broad-
cast Systems,” IEEE Trans., vol. CE-23, no. 3, pg. 241, August 1977.
5. Fockens, P., and C. G. Eilers: “Intercarrier Buzz Phenomena Analysis and Cures,” IEEE
Trans. Consumer Electronics, vol. CE-27, no. 3, pg. 381, August 1981.
6. IEEE Guide for Surge Withstand Capability (SWC) Tests, ANSI C37.90a-1974/IEEE Std.
472-1974, IEEE, New York, N.Y., 1974.
7. “Television Receivers and Video Products,” UL 1410, Sec. 71, Underwriters Laboratories,
Inc., New York, N.Y., 1981.
Chapter
7.2
ATSC DTV Receiver Systems
7.2.1 Introduction1
The introduction of a new television system must be viewed as a chain of elements that begins
with image and sound pickup and ends with image display and sound reproduction. The DTV
receiver is a vital link in this chain. By necessity, the ATSC system places considerable require-
ments upon the television receiver. The level of complexity of a DTV-compliant receiver is
unprecedented, and that complexity is made possible only through advancements in large-scale
integrated circuit design and fabrication.
The goal of any one-way broadcasting system, such as television, is to concentrate the hard-
ware requirements at the source as much as possible and to make the receivers—which greatly
outnumber the transmitters—as simple and inexpensive as possible. Despite the significant com-
plexity of a DTV receiver, this principle has been an important design objective from the start.
1. This chapter is based on: ATSC, “Guide to the Use of the Digital Television Standard,”
Advanced Television Systems Committee, Washington, D.C., Doc. A/54A, December 4,
2003. Used with permission.
Editor’s note: This chapter provides an overview of the ATSC DTV receiver system based on
ATSC A/54A. Readers are encouraged to download the entire Recommended Practice from
the ATSC Web site (http://www.atsc.org). All ATSC Standards, Recommended Practices, and
Information Guides are available at no charge.
Table 7.2.1 ATSC Receiver Planning Factors Used by the FCC (After [1].)
[Figure 7.2.1 block labels: tuner, IF filter and synchronous detector, NTSC rejection filter,
equalizer, phase tracker, trellis decoder, data de-interleaver, Reed-Solomon decoder, data
de-randomizer, and sync and timing.]
Figure 7.2.1 Simplified block diagram of the prototype VSB receiver. (From [2]. Used with permis-
sion.)
• Data de-interleaver
• Reed-Solomon decoder
• Data derandomizer
• Receiver loop acquisition sequencing
Descriptions of the major system elements of the prototype receiver are provided in the follow-
ing sections.
Current production designs usually differ somewhat from the prototype receiver with regard
to the order in which signal processing is performed [2]. Most recent designs digitize IF signals
and perform demodulation (such as synchronous detection) in the digital regime. In some
designs the arrangement of the NTSC rejection filter, equalizer, and phase-tracker (or symbol
synchronizer) portions of the receiver differ from those shown in Figure 7.2.1. Portions of the
sync and timing circuitry are likely to take their input signals after equalization, rather than
before, and bright-spectral-line symbol-clock-recovery circuits may respond to envelope detec-
tion of IF signals performed separately from synchronous detection. The cascading of trellis
decoder, data de-interleaver, Reed-Solomon decoder and data de-randomizer is characteristic of
many designs, however.
7.2.2a Tuner
The tuner as implemented in the prototype Grand Alliance system receives the 6 MHz signal
(UHF or VHF) from the antenna (Figure 7.2.2) [2]. The tuner is a high-side injection, double-
conversion type with a first IF frequency of 920 MHz. This places the image frequencies above 1
GHz, making them easy to reject by a fixed front-end filter. This first IF frequency was chosen
high enough that the input band-pass filter selectivity prevents first local oscillations (978–1723
MHz) leaking from the tuner front end and interfering with other UHF channels, but low enough
that second harmonics of UHF channels (470–806 MHz) fall above the first IF passband.
Although harmonics of cable-channel signals could occur in the first IF passband, they are not a
real problem because of the relatively flat spectrum (within 10 dB) and small signal levels (–28
dBm or less) used in cable systems.
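The frequency plan described above can be summarized in a short sketch (the 615 MHz channel used in the example is arbitrary):

```python
FIRST_IF = 920.0    # MHz, first IF of the double-conversion tuner
SECOND_LO = 876.0   # MHz, fixed low-side second local oscillator

def tuner_plan(rf_mhz):
    """First LO, first image, and second IF center for a selected channel
    (high-side first injection, low-side second injection)."""
    lo1 = rf_mhz + FIRST_IF          # first LO sits above the channel
    image1 = rf_mhz + 2 * FIRST_IF   # image lands above 1 GHz
    if2 = FIRST_IF - SECOND_LO       # 44 MHz second IF center
    return lo1, image1, if2
```

For a 615 MHz channel the first LO is 1535 MHz, the image falls at 2455 MHz (well above 1 GHz), and the second conversion lands at the 44 MHz IF center.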
Figure 7.2.2 Block diagram of the tuner subsystem in the prototype receiver. (From [2]. Used with
permission.)
The tuner input has a band-pass filter that limits the frequency range to 50–810 MHz, reject-
ing all other non-television signals that may fall within the image frequency range of the tuner
(beyond 920 MHz). In addition, a broadband tracking filter rejects other television signals, espe-
cially those much larger in signal power than the desired signal power. This tracking filter is not
narrow, nor is it critically tuned, and introduces very little channel tilt, if any. This contrasts with
the tracking filter used in some NTSC single-conversion tuner designs, in which the tracking fil-
ter is required to reject image signals only 90 MHz away from the desired channel.
A 10 dB gain, wideband RF amplifier increases the signal level into the first mixer, and is the
predominant factor determining the receiver noise figure (7–9 dB over the entire VHF, UHF, and
cable bands). The first mixer is a highly linear, double-balanced design to minimize even-har-
monic generation. The first mixer is a high-side-injection type, driven by a synthesized low-
phase-noise local oscillator (LO) operating at a frequency above those of the broadcast signal
selected for reception. Both the channel tuning (first LO) and broadband tracking filtering (input
band-pass filter) are controlled by microprocessor. The tuner is capable of tuning the entire VHF
and UHF broadcast bands as well as all standard, IRC, and HRC cable bands.
The mixer is followed by an LC filter in tandem with a narrow 920 MHz band-pass ceramic
resonator filter. The LC filter provides selectivity against the harmonic and sub-harmonic spuri-
ous responses of the ceramic resonators. The 920 MHz ceramic resonator band-pass filter has a
–1 dB bandwidth of about 6 MHz. A 920 MHz IF amplifier is placed between the two filters.
Delayed AGC of the first IF signal is applied immediately following the first LC filter. The 30-
dB-range AGC circuit protects the remaining active stages from large signal overload.
The second mixer is a low-side-injection type and is driven by the second LO, which is an 876
MHz voltage-controlled SAW oscillator. (Alternatively, the second mixer could be a high-side-
injection type. [3]) The second oscillator is controlled by the frequency-and-phase-lock-loop
(FPLL) synchronous detector. The second mixer, the output signal of which occupies a 41–47
MHz second IF frequency passband, drives a constant-gain 44 MHz amplifier. The output of the
tuner feeds the IF SAW filter and synchronous detection circuitry. The dual-conversion tuner
used in the Grand Alliance receiver is made out of standard consumer electronic components,
and is housed in a stamped metal enclosure.
Since the testing of the original Grand Alliance systems, alternative tuner designs have been
developed. Practical receivers with commercially acceptable performance are now manufactured
using both dual-conversion and single-conversion tuners.
Compared to an NTSC analog television receiver, a DTV receiver has two particularly impor-
tant additional design requirements in the front-end portion up to and including the demodula-
tor(s). One of these requirements is that the phase noise of the local oscillator(s) must be low
enough to permit digital demodulation that is reasonably free of symbol jitter. Symbol jitter gives
rise to intersymbol interference (ISI), which should be kept below levels likely to introduce data-
slicing errors. The Grand Alliance receiver could accommodate a total phase noise (transmitter
and receiver) of –78 dBc/Hz at 20 kHz offset from the carrier. (Note that all figures are measured at
20 kHz offset, and assume a noise-like phase distribution free of significant line frequencies that
may be caused by frequency synthesis.) By 2002, fully integrated demodulator designs typically
showed an improvement of 5 dB or so over the Grand Alliance hardware.
The other particularly important additional design requirement of the front-end portion of the
DTV receiver is that the immunity to interference must be better in general than in an analog TV
receiver. Table 7.2.2 shows that such better performance was assumed in making the present
channel assignments. During the interim transition period in which both NTSC and DTV signals
are broadcast, immunity to NTSC interference from channels other than the one selected for
reception will be particularly important. This is because variations in received signal levels
between adjacent-channel stations can be very large, with differences on the order of 40 dB not
uncommon. DTV-to-DTV variations can be of the same order of magnitude. DTV receivers can use digital
filtering methods for rejecting NTSC co-channel interference; such methods are inapplicable to
NTSC receivers.
The principal NTSC interference problem for DTV reception is cross-modulation of a strong
out-of-channel NTSC signal with a desired-channel DTV signal in the RF amplifier and first
mixer stages of the DTV receiver. Actually, the desired-channel DTV signal is less adversely
affected by cross-modulation with a strong out-of-channel signal than a desired-channel analog
NTSC signal is. However, during the transition period the DTV signal will be subject to a more
demanding interference environment. Because each broadcaster was provided with a DTV chan-
nel, DTV power had to be reduced to avoid loss of NTSC service area. Furthermore, DTV
assignments could not be protected from the UHF taboos nor by adjacent channel restrictions.
Thus, the generally accepted conclusion is that DTV receivers should be capable of signifi-
cantly better interference rejection than currently produced NTSC receivers.
[Figure 7.2.3 block labels: tuner, SAW filter, IF amplifier, channel-tuning synthesizer and first
LO, reference oscillator (third LO), AFC and APC low-pass filters, 90° phase shift, amplifier/
limiter, VCO, and I and Q multipliers.]
Figure 7.2.3 Block diagram of the tuner, IF amplifier, and FPLL system in the prototype receiver.
(From [2]. Used with permission.)
Figure 7.2.3 illustrates portions of the prototype receiver, including the frequency- and phase-
lock loop (FPLL) circuit used for carrier recovery. The first LO is synthesized by a phase-lock
loop (PLL) and controlled by a microprocessor. The third LO is a fixed-frequency reference
oscillator. An automatic-phase-and-frequency control (AFPC) signal for the second LO comes
from the FPLL synchronous detector, which responds to the small pilot carrier in the received
DTV signal. The FPLL provides a pull-in range of ±100 kHz despite the automatic-phase-con-
trol low-pass filter having a cut-off frequency less than 2 kHz. When the second LO is in phase
lock, the relatively low cut-off frequency of this APC low-pass filter constrains operation of the
FPLL to a narrow enough bandwidth to be unaffected by most of the modulation of the digital
carrier by data and synchronizing signals. However, the FPLL bandwidth remains wide enough
to track out any phase noise on the signal (and, hence, on the pilot) of frequencies up to about 2
kHz. Tracking out low-frequency phase noise (as well as low frequency FM components) allows
the phase-tracking loop to be more effective.
The pilot beat signal in the Q baseband signal is apt to be at a frequency above the 2 kHz cut-
off frequency of the APC low-pass filter, so it cannot appear directly in the AFPC signal applied
to the VCO. The automatic-frequency-control (AFC) low-pass filter has a higher cut-off fre-
quency and selectively responds to any pilot beat signal in the I baseband signal that is below 100
kHz in frequency. The response of the AFC filter exhibits a phase lag of the pilot beat signal
that increases with frequency, from 0º at zero frequency to 90º at frequencies well above the
100 kHz cut-off frequency. The response to the pilot beat signal drives the ampli-
fier/limiter well into clipping, to generate a constant amplitude (±A) square wave that is used to
multiply the Q baseband DTV signal, to generate a switched-polarity Q baseband DTV signal as
a product output signal from the multiplier.
Without the AFC filter, the I channel beat note and the Q channel beat note would always be
phased at 90º relative to each other, and the direct component of the product output signal from
the multiplier would be zero because each half cycle of the ±A square wave would include equal
negative and positive quarter-cycles of the sinusoidal Q beat wave. However, the AFC filter lag
delays the square wave polarity transitions, so there is less than a quarter wave of the Q baseband
DTV signal before its sinusoidal zero-axis crossing, and more than a quarter wave of the Q base-
band signal after its sinusoidal zero-axis crossing. This results in a net dc value for the product
output signal from the multiplier. The polarity of the Q baseband DTV signal relative to the
square wave depends on the sense of the frequency offset between the IF pilot and the VCO
oscillations. Owing to the variable phase shift with frequency of the AFC filter, the amount of
phase shift between the square wave and the Q baseband beat note depends on the frequency dif-
ference between the VCO oscillation and the incoming pilot. The amount of this phase shift
determines the average value of the multiplied Q baseband DTV signal. This zero-frequency
component passes through the APC low-pass filter to provide a frequency-control signal that
reduces the difference between the VCO frequency and the carrier frequency of the incoming IF
signal. (An extended frequency sweep of the response of the FPLL will exhibit the traditional
bipolar S-curve AFC characteristic.)
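The discriminator action described above can be sketched numerically. The snippet below is a toy model, not the prototype circuit: it assumes a first-order AFC low-pass, an ideal limiter, and a pure pilot beat with no data modulation.

```python
import numpy as np

def afc_dc_component(f_offset_hz, fs=10.76e6, fc=100e3, n=100_000):
    """DC term of the limited-I x Q product for a given pilot/VCO
    frequency offset (toy model of the FPLL discriminator)."""
    t = np.arange(n) / fs
    i_beat = np.cos(2 * np.pi * f_offset_hz * t)   # I-channel pilot beat
    q_beat = np.sin(2 * np.pi * f_offset_hz * t)   # Q-channel pilot beat

    # First-order AFC low-pass: the phase lag grows with beat frequency.
    alpha = 1.0 / (1.0 + fs / (2 * np.pi * fc))
    lagged, acc = np.empty(n), 0.0
    for k in range(n):
        acc += alpha * (i_beat[k] - acc)
        lagged[k] = acc

    square = np.sign(lagged)                 # amplifier/limiter output (+/-A)
    return float((square * q_beat)[n // 4:].mean())  # net dc value
```

The sign of the returned dc term follows the sense of the frequency offset; sweeping the offset traces out the bipolar S-curve discussed above.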
When the frequency difference comes close to zero, the phase shift in the AFC filter
approaches zero, and so the AFC control voltage also approaches zero. The APC loop takes over
and phase-locks the incoming IF signal to the third LO. This is a normal phase-locked loop cir-
cuit, except for being bi-phase stable. However, the correct phase-lock polarity is determined by
forcing the polarity of the pilot to be the same as the known transmitted positive polarity [5–7].
The bi-phase stability arises in the following ways. When the VCO is in phase-lock, the
detected pilot signal in the real component I of the complex baseband DTV signal is at zero fre-
quency, causing the AFC filter response to be at a constant direct value. This constant direct
value conditions the amplifier/limiter output signal either to be continually at +A value or to be
continually at –A value. The Q baseband DTV signal resulting from the quadrature-phase syn-
chronous detection is multiplied by this value, and the APC low-pass filter response to the result-
ing product controls the frequency of the oscillations supplied by the second LO in the tuner. So,
there is no longer a zero-frequency AFC component generated by heterodyning higher-frequency
demodulation artifacts from the pilot. When the loop settles near zero degrees, the limited I value
is +A, and the Q baseband is used as in an ordinary PLL, locking the Q channel at 90º and the I
channel at 0º. However, if the loop happens to settle near 180º, the limited I value is –A, causing
the multiplied Q channel signal to be reversed in sense of polarity, and therefore driving the loop
to equilibrium at 180º.
The prototype receiver can acquire a signal and maintain lock at a signal-to-noise ratio of 0
dB or less, even in the presence of heavy interference. Because of its 100 kHz cut-off frequency,
the AFC low-pass filter rejects most of the spectral energy in the I baseband DTV signal with
5.38 MHz bandwidth. This includes most of the spectral energy in white noise, in randomized
data, and in the PN sequences in the DFS signal. The AFC low-pass filter also rejects most of the
demodulation artifacts in the I baseband DTV signal that arise from co-channel NTSC interfer-
ence, except those arising from the vestigial sideband of the co-channel NTSC interference.
Therefore, most of the energy from white noise, DTV symbols, and NTSC co-channel interfer-
ence is removed from the amplifier/limiter input signal. This makes it extremely likely that the
beat signal generated by demodulating the pilot is the largest component of the amplifier/limiter
input signal so that this beat signal “captures” the limiting and generates a square wave essentially
unaffected by the other components. Any demodulation artifacts in the Q baseband DTV signal
that arise from co-channel NTSC interference that are down-converted to frequencies below the
2 kHz cut-off frequency of the APC low-pass filter when those artifacts are multiplied by the
amplifier/limiter output signal will be within 2 kHz of the digital carrier signal. Only a low-
energy portion of the NTSC vestigial sideband falls within this 4-kHz range of signal, so co-
channel NTSC interference will have little influence on AFC during pull-in of the second LO
towards phase-lock. When the second LO is phase-locked, the narrowband APC low-pass filter
Figure 7.2.4 Data segment sync waveforms. (From [2]. Used with permission.)
rejects most of the energy in white noise, in randomized data, in the DTV sync signals, and in co-
channel NTSC interference.
The synchronous detectors and the FPLL loop employed in the prototype VSB receiver were
constructed using analog electronic circuitry. Similar processing techniques can be employed in
an FPLL constructed in considerable portion using digital electronic circuitry, for use after syn-
chronous detectors constructed in digital circuitry and used for demodulating digitized IF DTV
signal.
Figure 7.2.5 Segment sync and symbol clock recovery with AGC. (From [2]. Used with permis-
sion.)
digitization, the data eyes in the digitized synchronous detection result cannot be observed by the
conventional oscilloscope procedure.
A phase-lock loop derives a clean 10.76 MHz symbol clock for the receiver. With the PLL
free-running, the data segment sync detector containing a 4-symbol sync correlator searches for
the two-level data segment sync (DSS) sequences occurring at the specified repetition rate. The
repetitive segment sync is detected while the random data is not, enabling the PLL to lock on the
sampled sync from the A/D converter, and achieve data symbol clock synchronization. When a
confidence counter reaches a predefined level of confidence that the segment sync has been
found, subsequent receiver loops are enabled.
Both the segment sync detection and the clock recovery can work reliably at signal-to-noise
ratios of 0 dB or less, and in the presence of heavy interference.
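The correlator-plus-confidence-counter idea can be sketched as follows. This is a simplified illustration; the correlation threshold and confidence limit are assumptions, not the prototype's actual parameters.

```python
import numpy as np

SEG_LEN = 832                    # symbols per data segment (4 sync + 828 data)
DSS = np.array([5, -5, -5, 5])   # two-level data segment sync sequence

def find_segment_sync(symbols, confidence_needed=16):
    """Locate the repetitive segment sync with a 4-symbol correlator
    and a confidence counter (sketch of the scheme described above)."""
    kernel = np.sign(DSS)[::-1]              # 4-symbol sync correlator
    corr = np.convolve(symbols, kernel, mode="valid")
    # Candidate phase: the position (mod SEG_LEN) with the largest summed
    # correlation -- random data averages out, the repetitive sync does not.
    scores = np.zeros(SEG_LEN)
    for k, c in enumerate(corr):
        scores[k % SEG_LEN] += c
    phase = int(np.argmax(scores))
    # Confidence counter: count segments whose correlation at the
    # candidate phase clears a threshold.
    confidence = 0
    for start in range(phase, len(corr), SEG_LEN):
        if corr[start] > 2 * len(DSS):       # threshold on the sync peak
            confidence += 1
        if confidence >= confidence_needed:
            return phase                     # sync found; enable later loops
    return None
```

Feeding this routine random 8-level symbols with the DSS sequence inserted once per 832 symbols returns the sync position after the confidence count is reached.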
Figure 7.2.6 The process of data field sync recovery in the prototype receiver. (From [2]. Used
with permission.)
When data segment syncs are detected, coherent AGC occurs using the measured segment
sync amplitudes [8]. The amplitude of the DSS sequences, relative to the discrete levels of the
random data, is determined in the transmitter. Once the DSS sequences are detected in the
receiver, they are compared to a reference value, with the difference (error) being integrated. The
integrator output then controls the IF and “delayed” RF gains, forcing them to whatever values
provide the correct DSS amplitudes.
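The AGC loop reduces to a simple error integrator. The sketch below assumes a linear gain model and an arbitrary integrator constant; it is an illustration, not the prototype's gain-distribution scheme.

```python
def agc_integrate(state, measured_dss, reference=5.0, k_i=0.05):
    """Integrate the error between the measured segment-sync amplitude
    and its known transmitted value (coherent AGC sketch)."""
    return state + k_i * (reference - measured_dss)

# Usage: a channel that halves the sync amplitude is pulled back to the
# reference as the integrator output raises the receiver gain.
channel_gain, state = 0.5, 0.0
for _ in range(200):
    measured = channel_gain * (1.0 + state) * 5.0
    state = agc_integrate(state, measured)
receiver_gain = 1.0 + state      # settles near 2.0, restoring the amplitude
```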
Figure 7.2.7 Receiver filter characteristics: (a) primary NTSC components, (b) comb filter
response, (c) filter band edge detail, (d) expanded view of band edge. (From [2]. Used with per-
mission.)
Figure 7.2.8 Block diagram of the NTSC interference-rejection filter in the prototype receiver.
(From [2]. Used with permission.)
• Audio carrier (A) located 4.5 MHz above the video carrier frequency
If there is an expectation that the reception area for a DTV broadcasting station will suffer co-
channel interference from an NTSC broadcasting station, the DTV broadcasting station will gen-
erally shift its carrier to a frequency 28.615 kHz further from the lower frequency edge of the
channel allocation. As Figure 7.2.7 shows, this places the data carrier (pilot) 338.056 kHz from
the lower edge of the channel allocation, rather than the nominal 309.44 kHz.
The NTSC interference rejection filter (comb) is a single-tap linear feedforward filter, as
shown in Figure 7.2.8. Figure 7.2.7b shows the frequency response of the comb filter, which pro-
vides periodic spectral nulls spaced 57 × fH apart. That is, the nulls are 896.853 kHz apart,
which is (10.762 MHz / 12). There are 7 nulls within the 6 MHz channel. The demodulation arti-
fact of the video carrier falls 15.091 kHz above the second null in the comb filter response; the
demodulation artifact of the chroma subcarrier falls 7.224 kHz above the sixth null in the comb
filter response; and the demodulation artifact of the audio carrier falls 30.826 kHz above the sev-
enth null in the comb filter response. Owing to the audio carrier being attenuated by the Nyquist
roll-off of the DTV signal response, the demodulation artifact of the audio carrier in the comb
filter response is smaller than the demodulation artifact of the video carrier.
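These null placements follow from simple arithmetic, sketched below. The carrier positions are the standard NTSC offsets above the lower channel edge, and the 338.056 kHz pilot offset is the co-channel value quoted above.

```python
F_H = 4.5e6 / 286                 # NTSC horizontal line rate, ~15.734 kHz
SYMBOL_RATE = 684 * F_H           # ATSC symbol rate, ~10.762 MHz
NULL_SPACING = SYMBOL_RATE / 12   # comb nulls every 57 * F_H ~ 896.853 kHz

PILOT = 338.056e3                 # pilot above lower channel edge (offset case)
carriers = {
    "video":  1.25e6,                 # visual carrier above channel edge
    "chroma": 1.25e6 + 3.579545e6,    # chroma subcarrier
    "audio":  1.25e6 + 4.5e6,         # aural carrier
}
# Distance of each demodulation artifact past the nearest lower comb null:
offsets = {name: (f - PILOT) % NULL_SPACING for name, f in carriers.items()}
# video ~ +15.091 kHz, chroma ~ +7.223 kHz, audio ~ +30.825 kHz past a null
```

The small differences from the figures quoted in the text (a few hertz) come from rounding the 338.056 kHz pilot offset.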
The comb filter, while providing rejection of steady-state signals located at the null frequen-
cies, has a finite response time of 12 symbols (1.115 μs). So, if the NTSC interfering signal has a
sudden step in carrier level (low-to-high, or high-to-low), one cycle of the zero-beat frequency
(offset) between the DTV and NTSC carrier frequencies will pass through the comb filter at an
amplitude proportional to the NTSC step size as instantaneous interference. Examples of such
steps of NTSC carrier are the leading and trailing edges of sync (40 IRE units). If the desired-to-
undesired (D/U) signal power ratio is small enough, data-slicing errors will occur. However,
interleaving will disperse the data-slicing errors caused by burst interference and will make it
easier for the Reed-Solomon code to correct them. (The Reed-Solomon error-correction cir-
cuitry employed in the Grand Alliance receiver locates as well as corrects byte errors and can
correct up to 10 byte errors/segment).
Although the comb filter reduces the NTSC interference, the data is also modified. The 7 data
eyes (8 levels) are converted to 14 data eyes (15 levels). The partial response process causes a
special case of intersymbol interference that doubles the number of like-amplitude eyes, but oth-
erwise leaves them open. The modified data signal can be properly decoded by the trellis
decoder. Note that, because of time sampling, only the maximum data eye value is seen after ana-
log-to-digital conversion.
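The level doubling is easy to check: the comb output y[n] = x[n] − x[n−12] is the difference of two 8-level symbols, which takes exactly 15 distinct values.

```python
levels_8 = [-7, -5, -3, -1, 1, 3, 5, 7]           # 8-VSB symbol levels
# Comb filter output y[n] = x[n] - x[n-12] pairs two such symbols:
levels_15 = sorted({a - b for a in levels_8 for b in levels_8})
# 15 equally spaced levels, -14 .. +14, hence 14 data eyes of like size.
```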
NTSC interference can be automatically rejected by the circuit shown in Figure 7.2.8, which
determines whether or not the NTSC rejection filter is effective in reducing interference-plus-noise
in its response compared to interference-plus-noise in its input signal. The amount of interfer-
ence-plus-noise accompanying the received signal during the data field sync interval is measured
by comparing the received signal with a stored reference of the field sync. The interference-plus-
noise accompanying the NTSC rejection filter response during the data field sync interval is
measured by comparing that response with a combed version of the internally stored reference
field sync. The two interference-plus-noise measurements are each squared and integrated over a
few data field sync intervals. After a predetermined level of confidence is achieved, a multi-
plexer is conditioned to select the signal with the lower interference-plus-noise power for supply-
ing data to the remainder of the receiver.
There is a reason not to leave the rejection comb filter switched in all the time. The comb filter,
while providing needed co-channel interference benefits, degrades white noise performance by 3
dB. This is because the filter response is the difference of two full-gain paths, and as white noise
is uncorrelated from symbol to symbol, the noise power doubles. There is an additional 0.3 dB
degradation due to the 12-symbol differential coding. If little or no NTSC interference is
present, the comb filter is automatically switched out of the data path. When NTSC broadcasting
is discontinued, the comb filter can be omitted from digital television receivers.
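The 3 dB figure can be confirmed with a quick numerical experiment (a sketch, not a receiver measurement): differencing white-noise samples 12 symbols apart doubles the noise variance.

```python
import numpy as np

rng = np.random.default_rng(1)
noise = rng.standard_normal(200_000)
combed = noise[12:] - noise[:-12]        # comb filter: y[n] = x[n] - x[n-12]
# Samples 12 apart are uncorrelated, so the variance (noise power) doubles:
penalty_db = 10 * np.log10(combed.var() / noise.var())   # ~3.01 dB
```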
The equalizer used in the prototype receiver was designed to achieve equalization by any of
three different methods:
• The equalizer can adapt on the prescribed binary training sequences in the DFS signals
• It can adapt on data symbols throughout the frame when the eyes are open
• Or it can adapt on data symbols throughout the frame when the eyes are closed (blind equal-
ization).
The principal differences among these three methods concern how the error estimate is gener-
ated.
The prototype receiver stores the prescribed binary training signal in read-only memory, and
the field sync recovery circuit determines the times that the prescribed binary training signal is
expected to be received. So, when adapting on the prescribed binary training sequences, the
receiver generates the exact reception error by subtracting the prescribed training sequence from
the equalizer response.
Tracking dynamic echoes requires tap adjustments more often than the training sequence is
transmitted. Therefore, the prototype Grand Alliance receiver was designed so that once equal-
ization is achieved, the equalizer switches to a decision-directed equalization mode that bases
adaptation on data symbols throughout the frame. In this decision-directed equalization mode,
reception errors are estimated by slicing the data with an 8-level slicer and subtracting it from the
equalizer response.
For fast dynamic echoes (e.g., airplane flutter) the prototype receiver used a blind equaliza-
tion mode to aid in acquisition of the signal. Blind equalization models the multi-level signal as
binary data signal plus noise, and the equalizer produces the error estimate by detecting the sign
of the output signal and subtracting a (scaled) binary signal from the output to generate the error
estimate.
To perform the LMS algorithm, the error estimate (produced using the training sequence, 8-
level slicer, or the binary slicer) is multiplied by delayed copies of the signal. The delay depends
upon which tap of the filter is being updated. This multiplication produces a cross-correlation
between the error signal and the data signal. The size of the correlation corresponds to the ampli-
tude of the residual echo present at the output of the equalizer and indicates how to adjust the tap
to reduce the error at the output.
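A minimal LMS tap update in training mode might look like this. The channel, step size, and equalizer length are hypothetical, and a simple linear transversal filter stands in for the prototype's full forward/feedback structure.

```python
import numpy as np

def lms_update(taps, delay_line, error, mu=1e-3):
    """Move each tap by the error/data cross-correlation (one LMS step);
    mu is the step-size parameter."""
    return taps - mu * error * delay_line

# Usage: learn to cancel a single 0.5-amplitude echo at 3 symbols delay.
rng = np.random.default_rng(2)
sent = rng.choice([-7., -5., -3., -1., 1., 3., 5., 7.], size=20_000)
received = sent + 0.5 * np.concatenate(([0., 0., 0.], sent[:-3]))

taps = np.zeros(8); taps[0] = 1.0        # start from a pass-through
dline = np.zeros(8)
for n in range(len(sent)):
    dline = np.roll(dline, 1); dline[0] = received[n]
    error = taps @ dline - sent[n]       # exact error: training data is known
    taps = lms_update(taps, dline, error)
# taps settle near the truncated inverse of the echo channel:
# approximately 1, 0, 0, -0.5, 0, 0, +0.2, 0
```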
A block diagram of the prototype system equalizer is shown in Figure 7.2.9. The dc bias of
the input signal is removed by subtraction as a preliminary step before equalization. The dc bias
is primarily the zero-frequency pilot component of the baseband DTV signal and is subject to
change when the receiver selects a different reception channel or when multipath conditions
change. The dc offset is tracked by measuring the dc value of the training signal.
The equalizer filter consists of two parts, a 64-tap feedforward transversal filter followed by a
192-tap decision-feedback filter. The equalizer operates at the 10.762 MHz symbol rate, as a T-
sampled (or synchronous) equalizer.
The output signals of the forward filter and feedback filter are summed to produce the equal-
izer response. The equalizer response is sliced by either an 8-level slicer (15-level slicer when the
comb filter is used) or by a binary slicer, depending upon whether the data eyes are open or
closed. (As pointed out in the previous section on interference filtering, the comb filter does not
close the data eyes, but generates in its response twice as many eyes of the same magnitude).
This sliced signal has the training signal and segment syncs reinserted. The resultant signal is fed
into the feedback filter, and subtracted from the output signal to produce the error estimate. The
Figure 7.2.9 Simplified block diagram of the prototype VSB receiver equalizer. (From [2]. Used
with permission.)
error estimate is correlated with the input signal (for the forward filter), or by the output signal
(for the feedback filter). This correlation is scaled by a step-size parameter, and used to adjust
the value of the tap. The delay setting of the adjustable delays is controlled according to the index
of the filter tap that is being adjusted.
The complete prototype receiver demonstrated in field and laboratory tests showed cancella-
tion of –1 dB amplitude echoes under otherwise low-noise conditions, and –3 dB amplitude echo
ensemble with noise. In the latter case, the signal was 2.25 dB above the noise-only reception
threshold of the receiver.
Figure 7.2.10 The phase-tracking loop portion of the phase-tracker. (From [2]. Used with permis-
sion.)
The phase-tracking loop suppresses high-frequency phase noise that the IF PLL cannot sup-
press because of being relatively narrow-band. The IF PLL is relatively narrow-band in order to
lock on the pilot carrier with minimum data-induced jitter. The phase tracker takes advantage of
decision feedback to determine the proper phase and amplitude of the symbols and corrects a
wider bandwidth of phase perturbations. Because of the decision feedback, the phase tracker
does not introduce data-induced jitter. However, like any circuit that operates on data, the phase
tracker begins to insert excess noise as the S/N ratio drops toward threshold and bad decisions
start to be made. Therefore, it may be desirable to vary the phase-tracker loop gain or to switch it
out near the threshold signal-to-noise ratio.
A block diagram of the phase-tracking loop is shown in Figure 7.2.10. The output of the real
equalizer operating on the I signal is first gain-controlled by a multiplier and then fed into a filter
that recreates an approximation of the Q signal. This is possible because of the VSB transmission
method, in which the I and Q components are related by a filter function that closely approxi-
mates a Hilbert transform. This filter is of simple construction, being a finite-impulse-response
(FIR) digital filter with fixed anti-symmetric coefficients and with every other coefficient equal
to zero. In addition, many filter coefficients are related by powers of two, thus simplifying the
hardware design.
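A filter with exactly these properties is the classic odd-length FIR Hilbert transformer. The sketch below (Hamming-windowed, with an arbitrarily chosen length of 31 taps) exhibits the anti-symmetry and the alternating zero coefficients the text describes.

```python
import numpy as np

N = 31                                    # odd length, type-III linear phase
n = np.arange(N) - N // 2                 # symmetric index -15 .. +15
ideal = np.where(n % 2 != 0, 2.0 / (np.pi * np.where(n == 0, 1, n)), 0.0)
h = ideal * np.hamming(N)                 # taper the truncated ideal response

# Anti-symmetric taps, every other coefficient zero -- the structure of
# the I-to-Q (approximate Hilbert) filter described above.
assert np.allclose(h, -h[::-1])
assert np.all(h[1::2] == 0.0)
```

Convolving a mid-band cosine with h yields, to a good approximation, the corresponding sine: a 90º phase shift.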
These I and Q signals are then supplied to a complex multiplier operated as a de-rotator for
suppressing the phase noise. The amount of de-rotation is controlled by decision feedback of the
data taken from the output of the de-rotator. As the phase tracker is operating on the 10.76
Msymbol/s data, the bandwidth of the phase tracking loop is fairly large, approximately 60 kHz.
The gain multiplier is also controlled with decision feedback. Because the phase tracker adjusts
amplitude as well as phase, it can react to small AGC and equalizer errors. The phase tracker
time constants are relatively short, as reflected in its 60 kHz bandwidth.
The phase tracker provides an amplitude-tracking loop and an offset tracking loop as well as a
phase-tracking loop. The phase tracker provides symbol synchronization that facilitates the use
of a synchronous equalizer with taps at symbol-epoch intervals. Greater need for the phase
tracker is evidenced in the 16-VSB mode than in the 8-VSB mode, being essential for 16-VSB
reception with some tuners. The phase tracker also removes effects of AM and FM hum that may
be introduced in cable systems.
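The de-rotator with decision feedback can be illustrated with a toy model. It assumes a constant phase offset, an ideal analytic (I + jQ) signal, and an arbitrary loop gain; the real phase tracker also tracks gain and offset.

```python
import numpy as np

LEVELS = np.array([-7., -5., -3., -1., 1., 3., 5., 7.])

def phase_track(i_sig, q_sig, k_p=0.05):
    """De-rotate by an accumulated phase estimate driven by decision
    feedback; returns the corrected I samples (sketch only)."""
    theta, out = 0.0, np.empty(len(i_sig))
    for n, z in enumerate(i_sig + 1j * q_sig):
        z *= np.exp(-1j * theta)                        # de-rotation
        decision = LEVELS[np.argmin(np.abs(LEVELS - z.real))]
        # Phase detector: the imaginary residual, signed by the decision,
        # indicates the remaining rotation.
        theta += k_p * np.sign(decision) * z.imag / 7
        out[n] = z.real
    return out

# Usage: symbols rotated by a fixed 0.2 rad are pulled back on level.
rng = np.random.default_rng(3)
x = rng.choice(LEVELS, size=5_000)
rot = x * np.exp(1j * 0.2)
corrected = phase_track(rot.real, rot.imag)
```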
Figure 7.2.11 Functional diagram of the trellis code de-interleaver. (From [2]. Used with permis-
sion.)
Figure 7.2.12 Segment sync removal in prototype 8 VSB receiver. (From [2]. Used with permis-
sion.)
NTSC rejection filter, which has memory, represents another state machine seen at the input of
the trellis decoder. In order to minimize the expansion of trellis states, two measures are taken: 1)
special design of the trellis code, and 2) twelve-to-one interleaving of the trellis encoding. The
interleaving, which corresponds exactly to the 12 symbol delay in the NTSC rejection filter,
makes it so that each trellis decoder only sees a one-symbol delay NTSC rejection filter. By min-
imizing the delay stages seen by each trellis decoder, the expansion of states is also minimized. A
3.5 dB penalty in white noise performance is paid as the price for having good NTSC co-channel
performance. The additional 0.5 dB noise-threshold degradation beyond the 3 dB attributable to
comb filtering is due to the 12-symbol differential coding.
Because segment sync is neither trellis encoded nor pre-coded, the presence of the segment sync
sequence in the data stream through the comb filter presents a complication that had to be dealt
with. Figure 7.2.12 shows the receiver processing that is performed when the comb filter is
present in the receiver. The multiplexer in the segment sync removal block is normally in the
upper position. This presents data that has been filtered by the comb to the trellis decoder. How-
ever, because of the presence of the sync character in the data stream, the multiplexer selects its
lower input during the four symbols that occur twelve symbols after the segment sync. The effect
of this sync removal is to present to the trellis decoder a signal that consists of only the difference
of two adjacent data symbols that come from the same trellis encoder, one transmitted before,
and one after the segment sync. The interference introduced by the segment sync symbol is
removed in this process, and the overall channel response seen by the trellis decoder is the single-
delay partial-response filter.
The complexity of the trellis decoder depends upon the number of states in the decoder trellis.
Since the trellis decoder operates on an 8-state decoder trellis when the comb filter is active, this
defines the amount of processing that is required of the trellis decoder. The decoder must per-
form an add-compare-select (ACS) operation for each state of the decoder. This means that the
decoder is performing 8 ACS operations per symbol time. When the comb filter is not activated,
the decoder operates on a 4-state trellis. The decoder hardware can be constructed such that the
same hardware that is decoding the 8-state comb-filter trellis can also decode the 4-state trellis
when the comb filter is disengaged, thus there is no need for separate decoders for the two
modes. The 8-state trellis decoder requires fewer than 5000 gates.
Figure 7.2.13 Trellis decoding with and without the NTSC rejection filter. (From [2]. Used with per-
mission.)
After the transition period, when NTSC is no longer being transmitted, the NTSC rejection
filter and the 8-state trellis decoder can be omitted from digital television receivers.
Figure 7.2.14 Functional diagram of the convolutional de-interleaver. (From [2]. Used with permis-
sion.)
Echoes are considered to be of substantial strength if their presence causes a readily observ-
able increase in the number of data-slicing errors. There have been reports of pre-echoes of sub-
stantial strength being observed in the field that are advanced as much as 30 μs with respect to
the principal signal. There have been a significant number of observations of post-echoes of sub-
stantial strength that are delayed on the order of 45 μs with respect to the principal signal.
Equalizers capable of suppressing pre-echoes advanced no more than 3 μs and delayed no
more than about 45 μs were the general rule in the 1995 to 2000 time frame. Since 2000 there has
been increased awareness that channel equalizers should have capability for suppressing further-
advanced pre-echoes and products with over 90 μs of total equalization range have been pro-
duced.
A detailed discussion of DTV receiver equalization is beyond the scope of this chapter. Inter-
ested readers should consult ATSC Informational Document T3-600, “DTV Signal Reception
and Processing Considerations” [18]. Readers are also encouraged to review ATSC A/74, “Rec-
ommended Practice: Receiver Performance Guidelines” [19].
7.2.4 References
1. “Receiver Planning Factors Applicable to All ATV Systems,” Final Report of PS/WP3,
Advanced Television Systems Committee, Washington, D.C., December 1, 1994.
2. ATSC, “Guide to the Use of the ATSC Digital Television Standard,” Advanced Television
Systems Committee, Washington, D.C., Doc. A/54A, December 4, 2003.
3. Limberg, A. L. R.: “Plural-Conversion TV Receiver Converting 1st IF to 2nd IF Using
Oscillations of Fixed Frequency Above 1st IF,” U. S. patent No. 6 307 595, 23 October
2001.
4. Citta, R.: “Automatic phase and frequency control system,” U. S. patent No. 4 072 909, 7
February 1978.
5. Krishnamurthy, G., T. G. Laud, and R. B. Lee: “Polarity Selection Circuit for Bi-phase Sta-
ble FPLL,” U. S. patent No. 5 621 483, 15 April 1997.
6. Sgrignoli, G. J.: “Pilot Recovery and Polarity Detection System,” U.S. patent No. 5 675
283, 7 October 1997.
7. Mycynek, V., and G. J. Sgrignoli: “FPLL With Third Multiplier in an AC Path in the
FPLL,” U.S. patent No. 5 745 004, 28 April 1998.
8. Citta, R. W., D. M. Mutzabaugh, and G. J. Sgrignoli: “Digital Television Synchronization
System and Method,” U.S. patent No. 5 416 524, 16 May 1995.
9. Citta, R. W., G. J. Sgrignoli, and R. Turner: “Digital Signal with Multilevel Signals and
Sync Recognition,” U.S. patent No. 5 598 220, 28 January 1997.
10. Widrow, B., and M. E. Hoff, Jr.: “Adaptive Switching Circuits,” IRE 1960 Wescon Conv.
Record, pt. 4, pp. 96–104.
11. Horwitz, T. P., R. B. Lee, and G. Krishnamurthy: “Error Tracking Loop,” U.S. patent No. 5
406 587, 11 April 1995.
12. Krishnamurthy, G., and R. B. Lee: “Error Tracking Loop Incorporating Simplified Cosine
Look-up Table,” U.S. patent No. 5 533 071, 2 July 1996.
13. Kim, J. G., K. S. Kim, and S. W. Jung: “Phase Error Corrector for HDTV Reception Sys-
tem,” U.S. patent No. 5 602 601, 11 February 1997.
14. Horwitz, T. P., R. B. Lee, and G. Krishnamurthy: “Error Tracking Loop,” U.S. patent No. 5
406 587, 11 April 1995.
15. Citta, R. W., and D. A. Wilming: “Receiver for a Trellis Coded Digital Television Signal,”
U.S. patent No. 5 636 251, 3 June 1997.
16. Wilming, D. A.: “Data Frame Structure and Synchronization System for Digital Television
Signal,” U.S. patent No. 5 629 958, 13 May 1997.
17. Fimoff, M., S. F. Halozan, and R. C. Hauge: “Convolutional Interleaver and Deinterleaver,”
U.S. patent No. 5 572 532, 5 November 1996.
18. ATSC: “Technology Group Report T3-600, DTV Signal Reception and Processing Consid-
erations,” Advanced Television Systems Committee, Washington, D.C., September 18,
2003.
19. ATSC: “Recommended Practice A/74, Receiver Performance Guidelines,” Advanced Tele-
vision Systems Committee, Washington, D.C., June 18, 2004.
7.2.5 Bibliography
ACATS: “Field Test Results of the Grand Alliance Transmission Subsystem,” Document SS/
WP2-1354, September 16, 1994.
ACATS: “Final Report of the Spectrum Utilization and Alternatives Working Party of the Plan-
ning Subcommittee of ACATS.”
ATSC: “ATSC Transmission System, Equipment and Future Directions: Report of the ATSC
Task Force on RF System Performance,” Advanced Television Systems Committee, Wash-
ington, D.C., April 12, 2001.
Bendov, O.: “On the Validity of the Longley-Rice (50,90/10) Propagation Model for HDTV Cov-
erage and Interference Analysis,” Proceedings of the Broadcast Engineering Conference,
National Association of Broadcasters, Washington, D.C., 1999.
Bendov, O., J. F. X. Brown, C. W. Rhodes, Y. Wu, and P. Bouchard: “DTV Coverage and Service
Prediction, Measurement and Performance Indices,” IEEE Trans. on Broadcasting, vol. 47,
no. 3, pg. 207, September 2001.
Ciciora, Walter, et al.: “A Tutorial on Ghost Canceling in Television Systems,” IEEE Transac-
tions on Consumer Electronics, IEEE, New York, N.Y., vol. CE-25, no. 1, pp. 9–44, Febru-
ary 1979.
Ghosh, M: “Blind Decision Feedback Equalization for Terrestrial Television Receivers,” Proc. of
the IEEE, vol. 86, no. 10, pp. 2070–2081, October, 1998.
Gibson, J. J., and R. M. Wilson: “The Mini-State—A Small Television Antenna,” RCA Engineer,
vol. 20, pp. 9–19, 1975.
Hufford, G. A: “A Characterization of the Multipath in the HDTV Channel,” IEEE Trans. on
Broadcasting, vol. 38, no. 4, December 1992.
Miyazawa, H.: “Evaluation and Measurement of Airplane Flutter Interference,” IEEE Trans. on
Broadcasting, vol. 35, no. 4, pp 362–367, December 1989.
O’Connor, Robert: “Understanding Television’s Grade A and Grade B Service Contours,” IEEE
Transactions on Broadcasting, vol. BC-14, no. 4, pp. 137–143, December 1968.
Plonka, Robert: “Can ATV Coverage be Improved with Circular, Elliptical or Vertical Polarized
Antennas?,” NAB Engineering Conference Proceedings, National Association of Broad-
casters, Washington, D.C., 1996.
Qureshi, Shahid U. H.: “Adaptive Equalization,” Proceedings of the IEEE, IEEE, New York,
N.Y., vol. 73, no. 9, pp. 1349–1387, September 1985.
Ungerboeck, Gottfried: “Fractional Tap-Spacing Equalizer and Consequences for Clock Recov-
ery in Data Modems,” IEEE Transactions on Communications, IEEE, New York, N.Y., vol.
COM-24, no. 8, pp. 856–864, August 1976.
Chapter
7.3
Receiver Antenna Systems
K. Blair Benson
7.3.1 Introduction
The antenna system is one of the most important circuit elements in the reception of television
signals. The ultimate quality of the picture and sound depends primarily on how well the antenna
system is able to capture the signal radiated by the transmitting antenna of the broadcast station
and to feed it, with minimum loss or distortion, through a transmission line to the tuner of the
receiver.
In urban and residential areas near the television station antenna, where strong signals are
present, a compact set-top telescoping rod or “rabbit ears” for VHF and a single loop for UHF
usually are quite adequate. The somewhat reduced signal strength in suburban and rural areas
generally requires a multielement roof-mounted antenna with a shielded 75Ω coaxial transmis-
sion line to feed the signal to the receiver.
Fringe areas, where the signal level is substantially lower, usually require a more complicated,
highly directional antenna, frequently on a tower, to produce an even marginal signal level. The
longer transmission line in such installations may dictate the use of an all-channel low-noise
preamplifier at the antenna to compensate for the loss of signal level in the line.
same as that of light in free space, which is 2.9983 × 10^10 cm/s, or for most calculations, 3 ×
10^10 cm/s (186,000 mi/s).
• Radiation—the emission of coherent modulated electromagnetic waves in free space from a
single or a group of radiating antenna elements. Although the radiated energy, by definition,
is composed of mutually dependent electric and magnetic field vectors having specific mag-
nitude and direction, conventional practice in television engineering is to measure and specify
radiation characteristics in terms of the electric field.
• Polarization—the angle of the radiated electrical field vector in the direction of maximum
radiation. If the plane of the field is parallel to the ground, the signal is horizontally polarized;
if at right angles to the ground, it is vertically polarized. When the receiving antenna is
located in the same plane as the transmitting antenna, the received signal strength will be
maximum. If the radiated signal is rotated at the operating frequency by electrical means in
feeding the transmitting antenna, the radiated signal is circularly polarized. Circularly polarized signals produce equal received signal levels with either horizontally or vertically polarized receiving antennas. (See Figure 7.3.1.)
• Beamwidth. In the plane of the antenna, beamwidth is the angular width of the directivity pat-
tern where the power level of the received signal is down by 50 percent (3 dB) from the max-
imum signal in the desired direction of reception. Using Ohm’s law (P = E²/R) to convert power to voltage, and assuming the same impedance R for both measurements, this is equal to a drop in voltage of 30 percent (Figure 7.3.2).
• Gain—the signal level produced (or radiated) relative to that of a standard reference dipole.
Gain is used frequently as a figure of merit. Gain is closely related to directivity, which in turn
is dependent upon the radiation pattern. High values of gain usually are obtained with a
reduction in beamwidth. Gain can be calculated only for simple antenna configurations. Con-
sequently, it usually is determined by measuring the performance of an antenna relative to a
reference dipole.
• Input impedance—the terminating resistance into which a receiving antenna will deliver max-
imum power. Similar to gain, input impedance can be calculated only for very simple formats,
and instead is usually determined by actual measurement.
• Radiation resistance. Defined in terms of transmission, using Ohm’s law, as the radiated
power P from an antenna divided by the square of the driving current I at the antenna termi-
nals.
• Bandwidth—a general classification of the frequency band over which an antenna is effective.
This requires a specification of tolerances on the uniformity of response over not only the
effective frequency spectrum but, in addition, over that of individual television channels. The
tolerances on each channel are important because they can have a significant effect on picture
quality and resolution, on color reproduction, and on sound quality. Thus, no single broad-
band response characteristic is an adequate definition because an antenna’s performance
depends to a large degree upon the individual channel requirements and gain tolerances, and
the deviation from flat response over any single channel. This further complicates the
antenna-design task because channel width relative to the channel frequency is greatly differ-
ent between low and high channels.
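The half-power conversion used in the beamwidth definition above can be checked with a short calculation; this sketch simply applies P = E²/R with the same R for both measurements:

```python
import math

# Half-power (-3 dB) point: received power is 50 percent of maximum.
power_ratio = 0.5

# With P = E^2 / R and the same impedance R for both measurements,
# the voltage ratio is the square root of the power ratio.
voltage_ratio = math.sqrt(power_ratio)            # ~0.707
voltage_drop_percent = (1 - voltage_ratio) * 100  # ~29.3, i.e. about 30 percent

print(voltage_ratio, voltage_drop_percent)
```

This confirms that a 50 percent (3 dB) drop in power corresponds to a voltage drop of roughly 30 percent.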
The foregoing problem is readily apparent by a comparison of low VHF and high UHF channel frequencies with the 6 MHz single-channel spectrum width specified in the FCC Rules. VHF Channel 2 occupies 11 percent of the spectrum up to that channel, while UHF Channel 69 occupies only 0.75 percent of the spectrum up to Channel 69. In other words, a VHF antenna must have a considerably greater broadband characteristic than that of a UHF antenna.

Figure 7.3.2 Normalized E-field pattern of λ/2 and short dipole antennas: (a) plot representation, (b) three-dimensional view of the radiation pattern.
Figure 7.3.4 Balun transformers: (a) balanced to unbalanced, 1/1, (b) balanced to unbalanced, 4/1, (c) balanced to unbalanced on symmetrical balun core, 4/1. (Illustrations a and b after [4].)
Folded Dipole
An increase in bandwidth and an increase in impedance can be accomplished through the use of
the two-element folded-dipole configuration shown in Figure 7.3.5. The impedance can be
increased by a factor of as much as 10 by using rods of different diameter and varying the spac-
ing. The typical folded dipole with elements of equal diameter has a radiation impedance of 290
Ω, four times that of a standard dipole and closely matching the impedance of 300 Ω twin lead.
The quarter-wave dipole elements connected to the closely coupled half-wave single element
act as a matching stub between the transmission line and the single-piece half-wave element.
This broadens the bandwidth of the folded-dipole antenna to span a frequency range of nearly 2 to 1. For example, the low VHF channels, 2 through 6 (54–88 MHz), can be covered with a single antenna.
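As a rough illustration of the element dimensions involved, this sketch computes the physical half-wave length across the low VHF band; the 0.95 shortening factor is a common rule of thumb for thin-rod elements, not a value from the text:

```python
C = 2.9983e8  # free-space velocity of propagation, m/s

def half_wave_length_m(freq_hz: float, k: float = 0.95) -> float:
    """Physical length of a half-wave element; k ~0.95 accounts for
    end effects in a practical thin-rod dipole (a typical rule of thumb)."""
    return k * C / (2 * freq_hz)

# Band edges and midband for the low VHF channels 2-6 (54-88 MHz):
for f_mhz in (54, 71, 88):
    print(f"{f_mhz} MHz: {half_wave_length_m(f_mhz * 1e6):.2f} m")
```

The element runs from roughly 2.6 m at 54 MHz to about 1.6 m at 88 MHz, so one folded dipole cut near midband can serve the whole 54–88 MHz range.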
V Dipole
The typical set-top rabbit-ears antenna on VHF Channel 2 is a “short dipole” with an impedance
of 3 or 4 Ω and a capacitive reactance of several hundred ohms. Bending the ears into a V in
effect tunes the antenna and will affect its response characteristic, as well as the polarization. A
change in polarization may be used in city areas to select or reject cross-polarized reflected sig-
nals to improve picture quality.
The fan dipole, based upon the V antenna, consists of two or more dipoles connected in paral-
lel at their driving point and spread at the ends, thus giving it a broadband characteristic. It can be
optimized for VHF reception by tilting the dipoles forward by 33° to reduce the beam splitting (i.e., improve the gain on the high VHF channels, where the length is longer than a half-wave).
A ground plane or flat reflecting sheet placed at a distance of 1/16 to 1/4 wave behind a dipole
antenna, as shown in Figure 7.3.6, can increase the gain in the forward direction by a factor of 2.
This design is often used for UHF antennas, e.g., a “bow-tie” or cylindrical dipole with reflector.
For outdoor applications, a screen or parallel rods can be used to reduce wind resistance. At 1/4-
wave to 1/2-wave spacing, the antenna impedance is 80 to 125 Ω, respectively.
Figure 7.3.6 The corner-reflector antenna: (a) mechanical configuration, (b) calculated pattern of a square corner-reflector antenna with antenna-to-corner spacing of 0.5 wavelength, (c) pattern at 1.0 wavelength, (d) pattern at 1.5 wavelengths. (Illustrations b–d from [5].)
Table 7.3.1 Radiation Resistance for a Single-Conductor Loop Having a Diameter of 17.5 cm (6.9 in) and a Wire Thickness of 0.2 mm (0.008 in)
channels. The gain of the Yagi over a standard dipole for up to 15 elements and beamwidth direc-
tivity are listed in Table 7.3.2.
Figure 7.3.9 Gain-vs.-bandwidth for a Yagi antenna: (a) measured gain of a five-element single-channel Yagi and a broadband Yagi, (b) measured gain of three twin-driven 10-element Yagi antennas and a single-channel 10-element Yagi. (From [6].)
errors that are too large for correction. For entertainment programming, absolute correction is less critical and gives way primarily to concealment techniques.
Phased Arrays

For some reception applications, the parabolic antenna is being superseded by a flat multielement antenna known as the planar array. Ku-band antenna assemblies measuring no more than a foot square have been found to provide acceptable DBS reception under a variety of weather conditions.

The design principle is based upon that of large phased arrays used for many years in radar applications. Rather than collecting the radiated signal by focusing a parabolic reflector on a single antenna, the electric signal from each antenna element is amplified and timed by a filter network so that all signals are in phase at the summing point. The basic approach is shown in Figure 7.3.11. A more sophisticated design employs a second, smaller antenna to provide an error signal that operates automatic circuits to align the antenna for maximum signal.

Printed circuit board manufacturing techniques are used to etch the antenna elements on a flat panel, along with the coupling amplifier and phasing network for each antenna.

Figure 7.3.10 Basic types of satellite receive antennas: (a) prime focus, (b) double reflector, (c) offset feed single reflector.
Figure 7.3.11 Phased array antenna topologies: (a) end collector, (b) center collector, (c) series phase shifter.
Coaxial shielded cable and several types of parallel line or balanced twin lead are commonly
used between the antenna and receiver. Figure 7.3.14 shows the insertion loss for five types of
line. Typically, the VSWR for the shielded coaxial lines is in the range of 1.3 to 1.5 in the VHF
band and 1.1 to 1.2 in the UHF band. The unshielded twin lead also has a low VSWR (1.2 to 1.5) in the VHF band, but it increases to 2.0 in the UHF band.
Shielded twin-lead VSWR has values up to 2.5 in the VHF band and up to 3 at UHF frequen-
cies.
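The VSWR figures above translate directly into reflected power at the line terminals; a minimal sketch:

```python
def mismatch(vswr: float):
    """Return (reflection coefficient, percent of power reflected) for a given VSWR."""
    gamma = (vswr - 1) / (vswr + 1)
    return gamma, gamma ** 2 * 100

# Representative values from the text.
for label, vswr in [("coax, VHF", 1.4),
                    ("unshielded twin lead, UHF", 2.0),
                    ("shielded twin lead, UHF", 3.0)]:
    gamma, pct = mismatch(vswr)
    print(f"{label}: VSWR {vswr} -> {pct:.1f} percent of power reflected")
```

A VSWR of 2.0 reflects about 11 percent of the incident power, and a VSWR of 3.0 reflects 25 percent, which is why the shielded twin-lead figures are significant at UHF.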
Although the twin-lead transmission line has lowest insertion loss under ideal conditions, the
presence of metal and moisture, as well as outdoor aging, will cause the electrical properties to
deteriorate, and—in fact—insertion loss may even exceed that of coaxial lines. Coaxial cable is
preferred in many installations because it can be routed along the antenna mast adjacent to
metallic structural members and directly through a hole in the house wall with no need for spac-
ers, which are required with twin lead.
The important performance parameters of an antenna-mounted preamplifier include the following:

• Gain, defined as power output to the transmission line divided by power available from the antenna source.
• Noise figure.
• VSWR caused by mismatch between the source (antenna) and preamplifier input.
• Distortion level or maximum undesired signal level capability, frequently stated as third-
order intercept point of the amplifier.
Figure 7.3.12 Nomograph for transmission and reflection of power at high voltage standing-wave ratios (VSWR). (From: Feinstein, J.: “Passive Microwave Components,” Electronic Engineers’ Handbook, D. Fink and D. Christiansen (eds.), McGraw-Hill, New York, N.Y., 1982.)
between N and E is achieved.

Figure 7.3.17 shows an application of EIA/CEA-909 in which a smart antenna preamplifier uses data provided by the interface standard to adjust a preselector filter to a specific channel (or group of channels) and to adjust the overall gain of the system, based upon input from the receiver.

Figure 7.3.17 Block diagram of a mast-mounted smart antenna preamplifier. (After [14].)

7.3.7 References
1. Elliot, R. S.: Antenna Theory and Design, Prentice-Hall, Englewood Cliffs, N.J., p. 64, 1981.
2. Lo, Y. T.: “TV Receiving Antennas,” in Antenna Engineering Handbook, H. Jasik (ed.),
McGraw-Hill, New York, N.Y., pp. 24–25, 1961.
3. Grossner, N.: Transformers for Electronic Circuits, 2nd ed., McGraw-Hill, New York, N.Y.,
pp. 344–358, 1983.
4. Radio Amateur’s Handbook, American Radio Relay League, Newington, Conn., 1983.
5. Kraus, J. D.: Antennas, McGraw-Hill, New York, N. Y., Chapter 12, 1950.
6. Jasik, H.: Antenna Engineering Handbook, McGraw-Hill, New York, N.Y., Chapter 24,
1961.
7. NAB: “Antenna Control Interface Standard and Implementation,” NAB TV TechCheck,
National Association of Broadcasters, Washington, D.C., April 22, 2002.
8. EIA/CEA-909, “Smart Antenna Interface,” Electronics Industries Alliance, Arlington, VA,
February 26, 2002.
Chapter
7.4
Cable Television Systems
K. Blair Benson
7.4.1 Introduction
Cable television systems use coaxial and fiber optic cable to distribute video, audio, and data sig-
nals to homes or other establishments that subscribe to the service. Systems with bidirectional
capability can also transmit signals from various points within the network to the central origi-
nating point (head end).
Cable distribution systems typically use leased space on utility poles owned by a telephone or
power distribution company. In areas with underground utilities, cable systems are normally
installed underground, either in conduits or buried directly, depending on local building codes
and soil conditions.
While current cable practice involves extensive use of fiber in new construction and
upgrades, it is important to understand cable techniques used prior to fiber’s introduction. This is
partly because a significant portion of cable systems have not yet upgraded to fiber and because
these older cable techniques illustrate important technical principles for the service.
Systems constructed in the 1980s typically had a capacity of 50 to 100 channels and included
bidirectional transmission capability. Interactive programming, information retrieval, home mon-
itoring, and point-to-point data transmission were possible, although not widely implemented.
Many systems included an institutional network in addition to one or two subscriber networks.
The two main drivers of current progress in the cable industry are the hybrid fiber/coax
(HFC) architecture and digital video. These technologies have changed cable from a stable, well
understood, even stale technology into something with massive possibilities and choices [1].
In the mid-1990s, the vision of what fiber in the cable plant could accomplish inspired the
cable industry to dramatically upgrade its physical facilities. The current vision driving progress
in the cable industry is digital video compression, which facilitates massive increases in the
amount of programming cable can deliver.
Interactive Service
While the main purpose of cable television is to deliver a greater variety of high-quality televi-
sion signals to subscribers, a growing interest is found in interactive communications, which
allow subscribers to interact with the program source and to request various types of informa-
tion. The growth in the capabilities of the Internet has led to renewed interest in this feature of
cable TV. An interactive system also can provide monitoring capability for special services such
as home security. Additional equipment is required in the subscriber’s home for such services.
Monitoring requires a home terminal, for example, whereas information retrieval requires a
modem (cable or conventional land-line) for data transmission.
Figure 7.4.1 Typical cable distribution network. (From [2]. Used with permission.)
Head End
The head end of a cable system is the origination point for all signals carried on the downstream
system. Signal sources include off-air stations, satellite services, and terrestrial microwave
relays. Programming may also originate at the head end. All signals are processed and then com-
bined for transmission via the cable system. In bidirectional systems, the head end also serves as
the collection (and turnaround) point for signals originated within the subscriber and business
networks.
The major elements of the head end are the antenna system, signal processing equipment,
pilot-carrier generators, combining networks, and equipment for any bidirectional and interactive
services. Figure 7.4.2 illustrates a typical analog system head end.
A cable television antenna system includes a tower and antennas for reception of local and
distant stations. The ideal antenna site is in an area of low ambient electrical noise, where it is
possible to receive the desired television channels with a minimum of interference and at a level
sufficient to obtain high-quality signals. For distant signals, tall towers and high-gain, directional
receiving antennas may be necessary to achieve sufficient gain of desired signals and to discrim-
inate against unwanted adjacent channel, co-channel, or reflected signals. For weak signals, low-
noise preamplifiers can be installed near the pickup antennas. Strong adjacent channels are atten-
uated with bandpass and bandstop filters.
Satellite earth receiving stations, or television receive only (TVRO) systems, are used by most
cable companies. Earth station sites are selected for minimum interference from terrestrial microwave transmissions and an unobstructed line-of-sight path to the desired satellites. Most earth stations use parabolic receiving antennas of various sizes, a low-noise preamplifier at the focal point of the antenna, and a waveguide to transmit the signal to receivers.

Figure 7.4.2 A typical cable system head end. (From [2]. Used with permission.)
Signal Processing
Several types of signal processing are performed at the head end, including:
• Regulation of the signal-to-noise ratio at the highest practical value
• Automatic control of the output level of the signal to a close tolerance
• Reduction of the aural carrier level of television signals to avoid interference with adjacent
cable channels
• Suppression of undesired out-of-band signals
Received signals are converted as necessary to different channel frequencies for cable use.
Such processing of off-air signals is accomplished with heterodyne processors or demodulator/
modulator pairs. Signals from satellites are processed within a receiver and placed on a vacant
channel by a modulator. Similarly, locally originated signals are converted to cable channels with
modulators.
Pilot-carrier generators provide precise reference levels for automatic level control in trunk-
system amplifiers. Generally, two reference pilots are used, one near each end of the cable spec-
trum. Combining networks group the signals from individual processors and modulators into a
single output for connection to the cable network.
Interactive services require one or more data receivers and transmitters at the head end. Poll-
ing of home terminals and data collection are controlled by a computer system (server). Where
cable networks are used for point-to-point data transmission, modems are required at each end
location, with RF-turnaround converters used to redirect incoming upstream signals back down-
stream. Most cable television systems use computerized switching systems to program one or
more video channels from multiple-program sources.
Business networks on cable systems require switching, processing, and turnaround equipment
at the head end. Video, data, and audio signals that originate within the network must be routed
back out over either the business or the subscriber network. One method of accomplishing this is
to demodulate the signals to the base band, route them through a switching network, and then
remodulate them onto the desired network. Another method is to convert the signals to a com-
mon intermediate frequency, route them through an RF switching network, and then up-convert
them to the desired frequency with heterodyne processors.
In larger systems, different areas of the network may be tied together with a central head end
using supertrunks or multichannel microwave. Supertrunks or microwave systems are also used
where the pickup point for distant over-the-air stations is located away from the central head end
of the system. In this method, called the hub system, supertrunks are high-quality, low-loss
trunks, which often use FM transmission of video signals or feed-forward amplifier techniques
to reduce distortion. Multichannel microwave transmission systems may use either amplitude or
frequency modulation.
For new construction, fiber optic lines are used almost exclusively for trunking. Fiber also
typically replaces cable during system retrofit upgrading, required to support new user features
such as additional channels, cable modems, and Internet telephony.
Heterodyne Processors
The heterodyne processor, shown in Figure 7.4.3, is the most common type of analog head-end
processor. The device converts the incoming signal to an intermediate frequency, where the aural
and visual carriers are separated. The signals are independently amplified and filtered with auto-
matic level control. The signals are then recombined and heterodyned to the desired output chan-
nel.
Prior to cable delivery systems, U.S. television licensing practices avoided placing two sta-
tions on adjacent channels. As a result, many television receivers were not designed to discrimi-
nate well between adjacent channel signals. When used with cable television, such receivers can
experience interference between the aural carrier of one channel and the visual carrier of the
adjacent channel. Channel processors alleviate this problem by changing the ratio between the
aural and visual carriers of each channel on the cable. The visual-to-aural carrier ratio is typi-
cally 15 to 16 dB on cable television systems. At one time, the aural carrier of broadcast stations
was only 10 dB lower than the visual carrier, but many stations now operate with the aural signal
20 dB below visual.
Channel conversion allows a signal received on one channel to be changed to a different chan-
nel for optimized transmission on the cable. Processors are generally modular, and with appro-
priate input and output modules, any input channel can be translated to any other channel of the
cable spectrum. Conversion is usually necessary for local broadcast stations carried on cable net-
works. If the local station were carried on its original carrier frequency, direct pickup of the sta-
tion off the air within the subscriber’s receiver would create interference with the signal from the
cable system.
The visual-signal intermediate-frequency (IF) passband of a typical heterodyne processor is
shown in Figure 7.4.4. The heterodyne processor is designed to reproduce the received signal
with a minimum of differential phase and amplitude distortion.

Figure 7.4.4 Typical idealized video-IF response curve of a heterodyne processor. Note that the visual carrier is located on the flat top portion of the curve in order to provide improved phase response compared to television receivers. (From [2]. Used with permission.)

Figure 7.4.5 Block diagram of demodulator portion of a demodulator/modulator pair. (From [2]. Used with permission.)

Figure 7.4.6 The idealized video-IF response curve of a demodulator. The video carrier is located 6 dB below the maximum response. (From [2]. Used with permission.)

This is best accomplished with a flat passband. Note that the visual carrier is positioned within the flatter portion of the passband
response curve. Television demodulators place the visual carrier at a point 6 dB down on the
response curve. For this reason, the heterodyne processor has better differential phase character-
istics than a modulator/demodulator pair.
An integral circuit of heterodyne processors is a standby carrier generator. If the incoming
signal fails for any reason, the standby signal is switched in to maintain a constant visual carrier
level, particularly on cable systems that use a television channel as a pilot carrier for trunk auto-
matic level control. Such standby carriers can include provisions for modulation from a local
source for emergency messages.
Demodulator/Modulator Pair
Demodulator/modulator pairs are commonly used to convert satellite or microwave signals to
cable channels. The demodulator is essentially a high-quality television receiver with base-band
video and audio outputs. The modulator is a low-power television transmitter. This approach to
channel processing provides increased selectivity, better level control, and more flexibility in the
switching of input and output signal paths, compared to other types of processors.
In the demodulator (Figure 7.4.5), an input amplifier, local oscillator, and mixer down-con-
vert the input signal to an intermediate frequency. Crystal control or phase-locked loops maintain
stable frequencies for the conversion. In the IF amplifier section, the aural and visual signals are
separated for detection. In comparing the IF response curve of a demodulator (Figure 7.4.6) with
that of the heterodyne processor (Figure 7.4.4), we find that the visual carrier is located on the
Figure 7.4.7 Block diagram of a conventional analog modulator. (From [2]. Used with permission.)
Figure 7.4.8 Block diagram of a single-channel amplifier with AGC. (From [2]. Used with permis-
sion.)
edge of the passband. The 4.5-MHz intercarrier sound is taken off before video detection, ampli-
fied, and limited to remove video components. The sound is often maintained on a 4.5-MHz
audio subcarrier for remodulation.
Demodulators are designed to minimize phase and amplitude distortion with linear detector
circuits. However, quadrature distortion is inherent in systems that use envelope detectors with
vestigial sideband signals. Such distortion can be corrected with video processing circuitry.
In the modulator (Figure 7.4.7), a composite input signal is applied to a separation section,
where video is separated from the sound subcarrier. The video is amplified, processed for dc res-
toration, and mixed with a carrier oscillator to an IF frequency. An amplifier boosts the signal to
the required power level, and a vestigial sideband filter is used to trim most of the lower sideband
for adjacent-channel operation.
Audio, if derived from the composite input, is separated as a 4.5 MHz signal from the visual
information, filtered, amplified, and limited before being mixed with the visual carrier. If audio
is provided separately to the modulator, an FM modulation circuit generates the 4.5 MHz aural
subcarrier to be combined with the visual carrier. After the two are combined, they are mixed
upward to the desired output channel frequency and applied to a bandpass filter to remove spuri-
ous products of the upconversion before being applied to the cable system.
Single-Channel Amplifiers
The single-channel or strip amplifier is designed to amplify one channel and include bandpass
and bandstop filters to reject signals from other channels. In simplest form, the unit consists of
an amplifier, filter, and power supply. More elaborate designs include automatic gain control
(AGC) and independent control of the aural and visual carrier levels. (See Figure 7.4.8.)
Single-channel amplifiers are useful where the desired signal levels are relatively high and
undesired signals are low or absent. Selectivity is low compared with other processor types, and
the units generally lack the independent control of carrier levels, the ratio between carriers, inde-
pendent AGC of the carriers, or limiting for the aural carrier. They cannot be used for channel
conversion, and they usually present difficulties in adjacent-channel applications because of their
limited selectivity.
transmission. Generally, these systems are bidirectional to allow interactive data transmission and signaling. Modems for cable system applications are available for data speeds to 1.5 Mbits/s and higher.

Figure 7.4.10 A basic trunk system. (From [3]. Used with permission.)
Amplification
Amplifiers in the cable system are used to overcome losses in the coaxial cable and other devices
of the system. From the output of a given amplifier, through the span of coaxial cable (and an
equalizer to linearize the cable losses), and to the output of the next amplifier, unity gain is
required so that the same signal level is maintained on all channels at the output of each ampli-
fier unit.
Standard cable system design places repeater amplifiers from 1400 to 2000 ft apart, depend-
ing on the size and type of cable used. This represents an electrical loss of about 20 dB at the
highest frequency carried. Systems with amplifier cascades to 64 units are possible, depending
on the number of channels carried, the performance specifications chosen, the modulation
scheme used, and the distortion-correction techniques used. Table 7.4.2 tabulates representative
figures of distortion versus the number of amplifiers in cascade.
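The unity-gain requirement above can be expressed as a simple budget; the 1 dB/100 ft loss figure below is an illustrative assumption, chosen to be consistent with the roughly 20 dB span loss quoted in the text:

```python
def span_loss_db(spacing_ft: float, loss_db_per_100ft: float) -> float:
    """Coaxial-cable loss across one amplifier span at the highest carried frequency."""
    return spacing_ft / 100 * loss_db_per_100ft

# Unity gain: amplifier gain plus equalizer must exactly offset the span loss,
# so the same signal level appears at the output of every amplifier.
spacing_ft = 2000
loss = span_loss_db(spacing_ft, 1.0)  # assume ~1 dB/100 ft at the top frequency
required_gain_db = loss               # ~20 dB, matching the figure in the text
print(required_gain_db)
```

With 1400-ft spacing and the same cable, the required gain would drop to about 14 dB; the equalizer absorbs the frequency-dependent part of the loss so that all channels arrive at the same level.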
Bridging Amplifier
Signals from a trunk system are fed to the distribution system and eventually to subscriber drops.
A wide-band directional coupler extracts a portion of the signal from the trunk for use in the
bridge amplifier. The bridge acts as a buffer, isolating the distribution system from the trunk,
while providing the signal level required to drive distribution lines.
Subscriber-Area Distribution
As shown in Figure 7.4.11, distribution lines are fed from a bridging station. Distribution lines
are routed through the subscriber area with amplifiers and tap-off devices to meet the subscriber
density of the area. The cable for distribution lines is identical to that used for trunks, except that
its diameter is commonly 1/2 or 5/8-in. with a resulting increase in losses, typically to 1.5 dB/
100 ft at 400 MHz.
As the signal proceeds on a distribution line, attenuation of the cable and insertion losses of
tap devices reduce its level to a point where line-extender amplifiers may be required. The gain
of line extenders is relatively high (25 to 30 dB), so generally no more than two are cascaded,
because the high level of operation creates a significant amount of cross-modulation and inter-
modulation distortion. These amplifiers typically do not use automatic level-control circuitry to
compensate for variations in cable attenuation caused by temperature changes. By limiting the
system to no more than two line extenders per distribution leg, the variations are not usually suf-
ficient to affect picture quality.
Figure 7.4.11 Distribution and feeder system. (From [3]. Used with permission.)
The splitter portion of a multitap is a hybrid design that introduces a substantial amount of
isolation from reflections or interference coming from a subscriber drop, both to the distribution
system and to other subscribers connected to the same tap.
The tap is the final service point immediately prior to a subscriber’s location, so it becomes a
convenient point where control of channels authorized to individuals can be applied. An addres-
sable tap, controlled from the system head end, can be used instead of the standard multitap to
enable access to basic cable services or to individual channels. When activated, the switch and
related components can enable or disable a channel, a group of channels, or the entire spectrum.
Figure 7.4.13 Functional block diagram of a trunk-AGC amplifier and its gain distribution. (From [2]. Used with permission.)
Gain Control
Because the attenuation characteristics of coaxial cable change with ambient temperature, trunk
amplifiers include automatic gain-control (AGC) circuits to compensate for the changes and to
maintain the output level at a predetermined standard. Automatic control depends on the use of
pilot carriers. One is at the lower end of the spectrum, while the other is at the upper limit. Spe-
cial-purpose pilot carriers also can be placed on the system at the head end. Some system
designs use video channels as pilots. AGC circuits sample the trunk amplifier output at the pilot
frequencies and feed a control voltage back to diode attenuators at an intermediate point in the
amplifier circuit. The upper pilot adjusts the gain of the amplifier, while the lower pilot controls
the slope of an equalizer. This method is effective in maintaining the entire spectrum at a con-
stant output level.
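The two-pilot control scheme can be sketched as follows; the control law and the level values are an illustrative model of the behavior described above, not a specific amplifier design:

```python
def agc_corrections(upper_pilot_dbmv: float, lower_pilot_dbmv: float,
                    upper_ref_dbmv: float, lower_ref_dbmv: float):
    """Derive gain and slope corrections from the two pilot-carrier levels.

    The upper pilot error sets the overall gain correction; the difference
    between the two pilot errors sets the cable-equalizer slope correction.
    """
    gain_error = upper_pilot_dbmv - upper_ref_dbmv
    slope_error = gain_error - (lower_pilot_dbmv - lower_ref_dbmv)
    return -gain_error, -slope_error  # corrections (dB) to apply

# Cable warmed up: attenuation rose more at the upper pilot than the lower one.
gain_corr, slope_corr = agc_corrections(30.0, 31.5, 32.0, 32.0)
print(gain_corr, slope_corr)
```

Here the amplifier adds 2 dB of flat gain and tilts the equalizer by 1.5 dB, restoring both pilots to their reference levels and, with them, the whole spectrum.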
A second design of trunk amplifier uses feed-forward distortion correction and permits car-
riage of a larger number of channels (for equivalent gain) than the standard trunk amplifier.
Cross-Modulation
The maximum output level permissible in cable television amplifiers depends primarily on
cross-modulation of the picture signals. Cross-modulation is most likely in the output stage of an
amplifier, where levels are high. The cross-modulation distortion products at the output of a typical amplifier (such as Figure 7.4.13) are 96 dB below the desired signal at an output level of +32 dBmV. In the system shown in Figure 7.4.13, the gains of the third and fourth stages are each 8 dB. With an amplifier output level of +32 dBmV, the output of the third stage would be +24 dBmV, and that of the second stage, +16 dBmV.
Figure 7.4.14 expresses the relationship between cross-modulation distortion and amplifier
output level. A 1 dB decrease in output level corresponds to a 2 dB decrease in cross-modulation
distortion.
Cross-modulation increases by 6 dB each time the number of amplifiers in cascade doubles.
The distortion also increases by 6 dB each time the number of channels on the system is doubled.
Experience has shown that cable subscribers find a cross-modulation level of –60 dB objection-
able. Therefore, based on a cross-modulation component of –96 dB relative to normal visual car-
rier levels in typical trunk amplifiers, 64 such amplifiers can be cascaded before cross-
modulation becomes objectionable.
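The cascade figures above follow directly from the 6 dB per doubling rule. A minimal sketch, using only the levels quoted in the text:

```python
import math

def cascade_xmod_db(single_amp_xmod_db, n_amps):
    """Rule of thumb from the text: cross-modulation worsens 6 dB
    each time the number of cascaded amplifiers doubles."""
    return single_amp_xmod_db + 6 * math.log2(n_amps)

# One trunk amplifier sits at -96 dB; viewers object at about -60 dB.
# The 36 dB of headroom at 6 dB per doubling allows 36/6 = 6 doublings.
max_cascade = 2 ** ((96 - 60) // 6)
print(max_cascade)                        # 64 amplifiers
print(cascade_xmod_db(-96, max_cascade))  # -60.0 dB, the objectionable level
```

The same 6 dB per doubling rule applies to channel count, so halving the channel load buys one more doubling of the cascade.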
Frequency Response
The most important design consideration of a cable amplifier is frequency response. A response
that is flat within ± 0.25 dB from 40 MHz to 450 MHz is required of an amplifier carrying 50 or
more 6 MHz television channels to permit a cascade of 20 or more amplifiers. Circuit designs
must pay special attention to high-frequency parameters of the transistors and associated compo-
nents, as well as circuit layout and packaging techniques.
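To see why the per-amplifier flatness specification must be so tight, note that response errors accumulate in dB through a cascade; in the worst case every amplifier's deviation lines up:

```python
def worst_case_ripple_db(per_amp_db, n_amps):
    """Worst case: every amplifier's response deviation has the same
    sign, so the end-to-end error is n times the per-amplifier spec."""
    return per_amp_db * n_amps

# +/-0.25 dB per amplifier, 20 amplifiers in cascade:
print(worst_case_ripple_db(0.25, 20))   # 5.0 dB end-to-end, worst case
```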
The circuit given in Figure 7.4.15 represents an amplifier designed for flat response. The
feedback network (C1, R1, L1) introduces sufficient negative feedback to maintain a nearly con-
stant output over a wide frequency range. Collector transformer T2 is bifilar-wound on a ferrite
core and presents a constant 75 Ω impedance over the entire frequency range. Splitting trans-
formers T1 and T3, of similar construction, have an additional function of introducing 180º phase
shift for the push-pull transistor pair Q1 and Q2, while maintaining 75 Ω input and output imped-
ances.
Second-Order Distortion
When cable television systems carried only 12 channels, those channels were spaced to avoid the
second harmonic of any channel from falling within any other channel. The expansion of systems
to carry channels covering more than one octave of spectrum causes the second-order beat distortion
characteristics of amplifiers to become important. Push-pull amplifier circuits, such as that
shown in Figure 7.4.15, largely cancel these even-order distortion products.
Noise Figure
The typical trunk amplifier hybrid module has a
noise figure of approximately 7 dB. Equalizers,
band-splitting filters, and other components pre-
ceding the module add to the overall station noise
figure, producing a total of approximately 10 dB.
A single amplifier produces an S/N of approx-
imately 60 dB for a typical signal input level.
Figure 7.4.15 High-performance, push-pull, wide-band amplifier stage for a trunk amplifier. (From
[2]. Used with permission.)
Subjective tests show that an S/N of approximately 43 dB is acceptable to most viewers.
Again, based on Table 7.4.2, noise increases 3 dB as the cascade of amplifiers is doubled. Therefore,
a cascade of 32 amplifiers, resulting in a signal-to-noise ratio of 45 dB, is a commonly used
cable television system design specification.
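The 32-amplifier design figure can be reproduced from the 3 dB per doubling rule quoted above:

```python
import math

def cascade_snr_db(single_amp_snr_db, n_amps):
    """Rule from the text: S/N falls 3 dB each time the number of
    cascaded amplifiers is doubled (the noise powers add)."""
    return single_amp_snr_db - 3 * math.log2(n_amps)

# 60 dB from a single amplifier, cascade of 32 = 2**5 amplifiers:
print(cascade_snr_db(60, 32))   # 45.0 dB, the common design specification
```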
Hum Modulation
Because power for the amplifiers is carried on coaxial cable, some hum modulation of the RF
signals occurs. This modulation results from power-supply filtering and saturation of the induc-
tors used to bypass ac current around active and passive devices within the system. A hum mod-
ulation specification of –40 dB relative to visual carriers has been found to be acceptable in most
applications.
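Reading the –40 dB hum specification as a voltage ratio relative to the visual carrier (the usual 20 log convention, assumed here) gives a feel for the allowable modulation depth:

```python
# -40 dB relative to the visual carrier, taken as 20*log10(m):
hum_fraction = 10 ** (-40 / 20)
print(hum_fraction)   # 0.01, i.e., about 1 percent hum modulation
```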
Subscriber-Premises Equipment
The output of the tap device feeds a 75 Ω coaxial drop cable into the home. The cable, generally
about 1/4-in. in diameter, is constructed with an outer polyethylene jacket, a shield of aluminum
the program request, and subsequently places an entry into the automated billing system for that
subscriber.
Off-Premises Systems
The off-premises approach is compatible with recent industry trends to become more consumer
electronics friendly and to remove security-sensitive electronics from the customer’s home [1].
This method controls the signals at the pole rather than at a decoder in the home. This increases
consumer electronics compatibility because authorized signals are present in a descrambled for-
mat on the customer’s drop. Customers with cable-compatible equipment can connect directly to
the cable drop without the need for converter/descramblers. This allows the use of all VCR and
TV features.
Signal security and control in the off-premises devices take different forms. Nearly all off-
premises devices are addressable. Specific channels or all channels are controlled remotely.
Interdiction technology is one approach to plant program security. In this format, the pay tele-
vision channels to be secured are transported through the cable plant in the clear (not scrambled).
The security is generated on the pole at the subscriber module by adding interference carrier(s)
to the unauthorized channels. An electronic switch is incorporated, allowing signals to be turned
off. In addition to being consumer electronics friendly, this method of security does not degrade
the picture quality on an authorized channel.
friendly (easily serviced) plant. HFC makes two-way service practical. The bandwidth of coaxial
cable has no sharp cutoff. In a coax-based system, it is the cascade of amplifiers that limits the
practical bandwidth. Twenty to forty amplifiers in cascade not only reduce bandwidth, but also
constitute a considerable reliability hazard. Overlaying low-loss fiber over the trunk portion of
the plant eliminates the trunk amplifiers. This, in turn, leaves only the distribution portion of the
plant with its relatively short distances and only two or three amplifiers. Wider bandwidth is thus
facilitated.
Two-way operation in a fiber system is quite practical for two reasons. First, the fiber itself is
not subject to ingress of interfering signals. Second, the cable system is broken up into a large
number of small cable systems, each isolated from the others by its own fiber link to the head-
end. If ingress should cause interference in one of these small cable systems, that interference
will not impair the performance of the other portions of the cable plant.
(Figure: DWDM transport equipment. A video server, CMTS, and HDT feed a DWDM multiplexer through primary and redundant EDFAs and an optical switch, with a DWDM demultiplexer at the receiving end.)
log signal and fed to 1310 nm transmitters, which in turn feed multiple or individual optical
nodes. Each optical node typically serves 500 to 1000 subscribers, but nodes serving as few as
100 homes are practical. The fewer homes served per node, the more dedicated bandwidth is
available per subscriber. Because each hub serves on the order of 50,000 subs, the hubs must be
relatively large in order to house all of the equipment and fiber connections.
Several drawbacks exist for operators deploying traditional double-hop architectures for high-
bandwidth subscribers. Aside from the difficulty and expense of locating and building a large
hub site, there are the functional problems of redundancy and the sheer numbers of fibers neces-
sary to feed 500 or so nodes from each hub. For services such as telephony, most operators
require both equipment and fiber path redundancy down to at least the 500 to 1000 home level.
This is difficult to achieve in the double-hop architecture unless there are redundant receivers in
each node fed by redundant fiber rings and transmitters. This arrangement provides redundancy
down to the 100 home level, which might be considered overkill, and is definitely expensive. A
star architecture from the hub to the nodes would mitigate the redundancy problem, but would be
prohibitively expensive in terms of fiber cable, since there would be a unique fiber cable to each
node.
DWDM Architectures
A “hubless” architecture is shown in Figure 7.4.18 [4]. In this case, most of the equipment for-
merly located in the hub (CMTS, HDT, and video servers) has been pulled back to the head end.
Because the forward path signals are passively transmitted through the hub, it is easy to replace
the large hub site with a small, inexpensive cabinet or vault. In Figure 7.4.18, four cabinets serve
the 50,000 subs formerly served by a single large hub. Placing all of the equipment at the head
end has the additional advantage of lowering operational costs and permitting less total equip-
ment to be deployed in the initial stages. In the traditional architecture, it is necessary to locate at
least one CMTS, HDT, and possibly a video server at each hub, regardless of how limited the
demand. With the CMTS and HDT pulled back to the head end, this equipment can be shared
among multiple cabinets/hubs when demand is low.
The traditional architecture cost increases rapidly with increasing bandwidth demand, but the
dense wavelength division multiplexing (DWDM) architecture only requires additional, and rela-
tively inexpensive, DWDM transmitters. Similar to SDH, the DWDM architecture also provides
path and equipment protection via redundant optical amplifiers and optical switches. However, it
does not provide all of SDH’s performance monitoring capabilities. Therefore, for certain high-
priority services, such as telephony, a hybrid approach is possible. The low bandwidth, high-pri-
ority services like telephony can be transmitted to the hub/cabinet via SDH, while the high band-
width, non-lifeline services such as VOD can be transmitted via DWDM.
Application Considerations
Although DWDM has been applied in cable systems, most implementations demultiplex the sig-
nal at the hub and feed the individual wavelengths to large nodes [4]. This configuration saves
fiber between the head end and hub, but only provides one dedicated narrowcast wavelength per
1000 homes. In the DWDM deep fiber architecture shown in Figure 7.4.18, DWDM is extended
one level further into the network to an optically scalable node (OSN) that may be either strand-
or cabinet-mounted. The signal at the OSN is further split to feed 100 home mini-nodes. Starting
at the headend, the narrowcast signals are transmitted via directly modulated DWDM transmit-
ters. The wavelengths are multiplexed together, optically amplified by an erbium doped fiber
amplifier (EDFA), passed through the hub site, and are demultiplexed at the OSN. In this ulti-
mate configuration, a dedicated wavelength serves each mini-node, providing approximately 300
MHz of narrowcasting 256-QAM channels (~2 Gbits/s or 20 Mbits/s per subscriber). Eventually,
as the broadcast analog spectrum is gradually reduced and the narrowcast spectrum increased,
the entire 50 to 860 MHz band can be utilized to deliver >50 Mbits/s per subscriber. Initially,
however, the system can be scaled such that each wavelength is shared among all the mini-nodes
fed by a particular OSN. A simple optical splitter is deployed in the OSN instead of a DWDM
demultiplexer. As bandwidth demand increases, additional DWDM transmitters are added at the
head end, and DWDM demultiplexers replace the optical splitters in the OSN.
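The per-subscriber figures quoted above can be reproduced from the channel arithmetic. The 38.8 Mbits/s payload per 6 MHz 256-QAM channel used below is an assumed typical value (roughly the ITU-T J.83 Annex B figure), not one stated in the text:

```python
PAYLOAD_PER_CHANNEL_MBPS = 38.8   # assumed 256-QAM payload per 6 MHz channel

def narrowcast_capacity(spectrum_mhz, homes_per_node):
    """Total and per-subscriber narrowcast throughput for a given
    slice of spectrum carved into 6 MHz channels."""
    channels = spectrum_mhz / 6
    total_mbps = channels * PAYLOAD_PER_CHANNEL_MBPS
    return total_mbps, total_mbps / homes_per_node

total, per_sub = narrowcast_capacity(300, 100)
print(total, per_sub)   # ~1940 Mbits/s total, ~19.4 Mbits/s per subscriber
```

These are close to the "~2 Gbits/s or 20 Mbits/s per subscriber" figures in the text; widening the slice toward the full 50 to 860 MHz band scales the result proportionally.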
The broadcast analog video is redundantly transmitted via an externally modulated 1550 nm
transmitter from the head end to a hub, where it is amplified and split to feed the OSNs. The
broadcast and demultiplexed narrowcast signals are combined and then transmitted over the
same fiber to a single receiver in the mini-node. The signals are combined via a simple 2:1 opti-
cal coupler, or by a 2:1 DWDM multiplexer, depending on the available loss budget. If a DWDM
mux is necessary because of link budget considerations, then another option is to leave the
broadcast and narrowcast signals separate, and transmit them over individual fibers to separate
receivers in the mini-node. The cost of a 2:1 DWDM mux and 10 km of fiber is greater than the
cost of an additional receiver. In addition, avoiding the 2:1 combiner more easily enables the
equipment associated with a 1:16 split to be packed into the OSN, as opposed to only serving 8
mini-nodes from each OSN. An additional advantage of this approach is that it eliminates poten-
tial problems associated with the CSO and CTB from the narrowcast signals interfering with the
broadcast signal when using a single receiver.
The plant costs of the deep fiber, DWDM architecture are greater than those of traditional
HFC architectures, due to the expanded number of nodes (one per 75 to 100 homes), but this cost
increase is mostly offset by the decrease in RF amplifiers and power supplies. The elimination or
minimization of SDH or ATM transport systems from the head end to hub actually makes the
hubless deep fiber system more cost-effective than the traditional network. The primary benefit
of these next generation architectures is the massively scalable bandwidth, along with the opera-
tional savings in power and RF plant and hub maintenance, and the increased network reliability
due to deeper redundant fiber and fewer total network elements. In addition, the hubless architec-
ture typically can be deployed much more quickly, an important advantage in today’s competitive
service provider environment.
(Figure: OpenCable architecture. Video, Internet, and other content sources feed head-end hardware and software, supported by security modules and operations systems, which deliver services to consumer OpenCable devices.)
In March 1972, new rules regarding cable television became effective. These rules required
cable television operators to obtain a certificate of compliance from the Commission prior to
operating a cable television system or adding a television broadcast signal. The rules applicable
to cable operators fell into several broad subject areas:
• Franchise standards
• Signal carriage
• Network program nonduplication and syndicated program exclusivity
• Nonbroadcast or cablecasting services
• Cross-ownership
• Equal employment opportunity
• Technical standards
Cable television operators who originated programming were subject to equal time, Fairness
Doctrine, sponsorship identification, and other provisions similar to rules applicable to broad-
casters. Cable operators were also required to maintain certain records and to file annual reports
with the Commission concerning general statistics, employment, and finances.
In succeeding years, the Commission modified or eliminated many of the rules. Among the
more significant actions, the Commission deleted most of the franchise standards in 1977, sub-
stituted a registration process for the certificate of compliance application process in 1978, and
eliminated the distant signal carriage restrictions and syndicated program exclusivity rules in
1980. In 1983, the Commission deleted its requirement that cable operators file financial infor-
mation. In addition, court actions led to the deletion of the pay cable programming rules in 1977.
• Ensure cable operators continue to expand their capacity and program offerings
• Ensure cable operators do not have undue market power
• Ensure consumer interests are protected in the receipt of cable service
The Commission has adopted various regulations to implement these goals.
Basic service is the lowest level of cable service a subscriber can buy. It includes, at a mini-
mum, all over-the-air television broadcast signals carried pursuant to the must-carry require-
ments of the Communications Act, and any public, educational, or government access channels
required by the system's franchise agreement. It may include additional signals chosen by the
operator. Basic service is generally regulated by the local franchising authority (the local or state
entity empowered by federal, state, or local law to grant a franchise to a cable company to oper-
ate in a given area).
Cable programming service includes all program channels on the cable system that are not
included in basic service, but are not separately offered as per-channel or per-program services.
Pursuant to a 1996 federal law, the rates charged for cable programming services tiers provided
after March 31, 1999 are not regulated. There may be one or more tiers of cable programming
service.
Per-channel or per-program service includes those cable services that are provided as single-
channel tiers by the cable operator, and individual programs for which the cable operator charges
a separate rate. Neither of these services is regulated by the local franchising authorities or the
Commission.
tem must obtain that station's consent prior to carrying or transmitting its signal. Except for
“superstations,” a cable system may not carry the signal of any television broadcast station that is
not located in the same market as the cable system without that broadcaster's consent. Supersta-
tions are transmitted via satellite, usually nationwide, and the cable system may carry such sta-
tions outside their local market without their consent. The negotiations between a television
station and a cable system are private agreements which may, but need not, include some form of
compensation to the television station such as money, advertising time, or additional channel
access.
Radio Programming
While the 1992 Cable Act's must-carry provisions only apply to local commercial and noncom-
mercial educational television stations, the Act's retransmission consent provisions apply to all
commercial broadcast stations. Many cable systems carry radio stations as an “all-band” offer-
ing, meaning that as with any standard radio receiver, all stations which deliver a signal to the
antenna are carried on the system. The Commission only requires consent from those radio sta-
tions within 57 miles of the cable system's receiving antenna. Thus, even though a cable opera-
tor's antenna may pick up a station's signal, operators are not required to obtain the consent of
stations outside of the 57 mile zone unless the station affirmatively seeks retransmission consent.
Manner of Carriage
Subject to the Commission's network nonduplication, syndicated exclusivity, and sports broad-
casting rules, cable systems must carry the entirety of the program schedule of every local televi-
sion station carried pursuant to the mandatory carriage provisions or the retransmission consent
provisions of the 1992 Cable Act. A broadcaster and a cable operator may negotiate for partial
carriage of the signal where the station is not eligible for must carry rights, either because of the
station's failure to meet the requisite definitions or because the cable system is outside the sta-
tion's market. In those situations where the carriage in the entirety rule applies, the primary video
and accompanying audio of all television broadcast stations must be carried in full, without alter-
ation or deletion of their content. Ancillary services such as closed captioning and program-
related material in the vertical blanking interval must be carried. However, other information
contained in the vertical blanking interval need not be carried.
7.4.6 References
1. Ciciora, Walter S.: “Cable Television,” in NAB Engineering Handbook, 9th ed., Jerry C.
Whitaker (ed.), National Association of Broadcasters, Washington, D.C., pp. 1339–1363,
1999.
2. Fink, D. G., and D. Christiansen (eds.): Electronic Engineer’s Handbook, 2nd ed.,
McGraw-Hill, New York, 1982.
3. Baldwin, T. F., and D. S. McVoy: Cable Communications, Prentice-Hall, Englewood Cliffs,
N.J., 1983.
4. Bonang, C., and C. Auvray-Kander: “Next Generation Broadband Networks for Interactive
Services,” in Proceedings of IBC 2000, International Broadcasting Convention, Amster-
dam, 2000.
5. “Cable Television Information Bulletin,” Federal Communications Commission, Washing-
ton, D.C., June 2000.
Chapter
7.5
Satellite Delivery Systems
7.5.1 Introduction
The first commercial satellite transmission occurred on July 10, 1962, when television pictures
were beamed across the Atlantic Ocean through Telstar 1. The launch vehicle, however, lacked
sufficient power to place the spacecraft into a stationary position. Three years later, after consid-
erable progress in the development of rocket motors, INTELSAT saw its initial craft, Early Bird
1, launched into a geostationary orbit, and a rapidly growing communications industry was born.
In the same year, the USSR inaugurated the Molniya series of satellites, which traveled in highly
elliptical orbits to better meet the needs of that nation and its more northerly position. The Molniya
satellites were placed in orbits inclined about 64° relative to the equator, with an orbital
period half that of the earth.
From these humble beginnings, satellite-based audio, video, and data communications have
emerged to become a powerful force in communications across all types of industries, from mil-
itary logistics and support to consumer television.
Antennas
The antenna structure of a communications satellite consists of several different antenna sec-
tions. One receives signals from earth; another transmits those signals back to earth. The trans-
mitting antenna may be constructed in several sections to carry more than one signal beam.
Finally, a receive-transmit beacon antenna provides communication with a ground station to con-
trol the operation of the satellite. The control functions include turning parts of the electronics on
and off, adjusting the radiation pattern, and maintaining the satellite in its proper position.
The design of the complex antenna system for a satellite relies heavily on the horizontal and
vertical polarizations of signals to keep incoming and outgoing information separated. Multiple
layers in carefully controlled thicknesses of reflecting materials, which are sensitive to signal
polarizations, can be used for such purposes. Also, multiple-feed horns can develop more beams
to earth. Antennas for different requirements may combine several antenna designs, but nearly
all are based on the parabolic reflector, because of two fundamental properties of the parabolic
curve:
• Rays received by the structure that are parallel to the feed axis, are reflected and converged at
the focus
• Rays emitted from the focal point are reflected and emerge parallel to the feed axis
Special cases may involve spherical and elliptical reflectors, but the parabolic is most common.
7.5.2b Transponders
From the antenna, a signal is directed to a chain of electronic equipment known as a transponder.
This unit contains the circuits for receiving, frequency conversion, and transmission. Some of the
first satellites launched for INTELSAT and other systems contained only one or two transponder
units. However, initial demand for satellite link services forced satellite builders to develop more
economical systems with multiple-transponder designs. Such requirements meant refinements in
circuitry that would allow each receiver-transmitter chain to operate more efficiently (less power
draw). Multiple-transponder electronics also meant a condensation of the physical volume
needed for each transponder with as small as possible an increase in the overall size and mass of
the satellite to accommodate the added equipment.
General-purpose C-band satellites typically carry 12 or 24 transponders, each
with a 36-MHz bandwidth. However, wide variations in the number of transponders and in their
operating bandwidths exist. Some Ku-band systems use fewer transponders with wider bandwidths,
while others have 40 or more transponders of 27 MHz bandwidth each. Identification of the
transponders is described by an agreed-upon frequency plan for the band.
transponders is described by an agreed-upon frequency plan for the band.
Users of satellite communication links are assigned to transponders generally on a lease basis.
Assignments usually leave one or more spare transponders aboard each craft, allowing for
standby capabilities in the event of a transponder failure.
By assigning transponders to users, the satellite operator simplifies the design of up-link and
down-link facilities. An earth station controller can be programmed for the transponders of inter-
est. For example, a television station may need access to several transponders from one satellite.
The operator enters only the transponder number (or carrier frequency) of interest. The receiver
handles retuning of the receiver and automatic switching of signals from the dual-polarity feed
horn on the antenna. Controllers can also be used to move an antenna from one satellite to
another automatically.
Each transponder has a fixed center frequency and a specific signal polarization according to
the frequency plan. For example, all odd-numbered transponders might use horizontal polariza-
tion, while even-numbered ones might be vertically polarized. Excessive deviation from the cen-
ter carrier frequency by one signal does not produce objectionable interference between
transponders and signals because of the isolation provided by cross-polarization. This concept
extends to satellites in adjacent parking spaces in the geosynchronous orbit. Center frequencies
for transponders on adjacent satellites are offset in frequency from those on the first craft. An
angular offset of polarization affords additional isolation. The even and odd transponder assign-
ments are offset by 90° from one another. As spacing is decreased between satellites, the polar-
ization offset must be increased to reduce the potential for interference.
While this discussion has centered on the down-link frequency plan for a C-band satellite, up-
link facilities follow much the same plan, except that up-link frequencies are centered around 6
GHz. A similar plan for Ku-band equipment uses up-linking centered around 14 GHz and down-
link operation around 12 GHz.
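A transponder's frequency conversion is simply a fixed translation from the up-link band to the down-link band. The 2225 MHz value below is the customary C-band translation frequency; it is an assumption for illustration, not a figure stated in the text:

```python
C_BAND_TRANSLATION_MHZ = 2225   # assumed: the customary C-band value

def downlink_mhz(uplink_mhz, translation_mhz=C_BAND_TRANSLATION_MHZ):
    """The transponder retransmits at the up-link frequency minus a
    fixed translation, moving 6 GHz carriers into the 4 GHz band."""
    return uplink_mhz - translation_mhz

print(downlink_mhz(5945))   # 3720 MHz, a typical transponder-1 center
```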
Two types of solar panels are in general use. One is a flat configuration arranged as a pair of
deployable panel wings that extend to either side of the satellite body. In the other type, the solar
cells are attached to the barrellike body of the satellite.
Each approach has advantages and disadvantages. On the winglike panels, approximately
18,000 cells are used to maintain a 1200 W output capability at end-of-life. The other configura-
tion uses approximately 80,000 cells mounted on a 2 m or greater diameter cylinder that is 5 m or
more in height. The wing approach is usually the more efficient of the two.
The wing keeps all cells illuminated at all times, producing more electric current. However,
all cells are subject to a higher rate of bombardment by space debris and by solar proton radia-
tion, and to a higher average temperature (approximately 60°C) because of constant illumination.
The rotating-drum design places a small portion of all cells perpendicular to the sun’s rays at
any one time. Cells that are not perpendicular to the light source produce power as long as they
are illuminated, but the amount is reduced and depends on the angle of light incidence. The indi-
vidual cells are not all exposed in one direction simultaneously and are less likely to be struck by
space debris. Also, because cells on the drum are not exposed to sunlight at all times, their aver-
age temperature is significantly less, approximately 25°C, which extends their operating life-
time.
The output current, voltage, and power from the solar panels undergo variation, depending on
the operating conditions. Without a means of controlling or regulating the power, the operation
of the electronics package would also vary. An unregulated power bus is simpler and takes less of
the allowable mass, or mass budget, of the craft. System electronics circuitry must, however,
include on-board regulators to maintain consistent RF levels. The alternative is to use bus regula-
tion with more complex circuitry, a somewhat lower output power, and reduced reliability
because of the additional components in the system.
Some variation can be accommodated by the storage batteries in either case. For a geosynchronous
satellite, the most serious need for batteries occurs on approximately 84 days per year,
when the earth passes between the sun and the satellite, causing an eclipse.
The blackout period can last as long as 70 min in a single day. During these periods a
rechargeable supply is essential, and nickel-cadmium batteries have played a major role in
powering such spacecraft.
Power to the electronics on the craft requires protective regulation to maintain consistent sig-
nal levels. Most of the equipment operates at low voltages, but the final stage of each transpon-
der chain ends in a high power amplifier. The HPA of C-band satellite channels may include a
traveling-wave tube (TWT) or a solid-state power amplifier (SSPA). Ku-band systems rely pri-
marily on TWT devices. Traveling-wave tubes, similar to their earth-bound counterparts,
klystrons, require multiple voltage levels. The filaments operate at 5 V, but beam-focus and elec-
tron-collection electrodes in the device require voltages in the hundreds and thousands of volts.
To develop such a range of voltages, the power supply must include voltage converters.
From these voltages, the TWT units produce output powers in the range of 8.5 to 20 W. Most
systems are operated at the lower end of the range for increased reliability and greater longevity
of the TWTs. In general, lifetime is assumed to be 7 years.
At the receiving end of the transponder, signals coming from the antenna are split into sepa-
rate bands through a channelizing network, which directs each input signal to its own receiver,
processing amplifier, and HPA. At the output, a combiner brings all the channels together again
into one signal to be fed to the transmitting antenna.
Figure 7.5.2 Attitude of the spacecraft is determined by pitch, roll, and yaw rotations around three
reference axes.
• The self-generated torques related to antenna displacements, the solar arrays, and changing
masses of on-board fuel supplies
The results of meteorite impacts on spacecraft are considered, but have not been experienced sig-
nificantly by any satellite to date.
The attitude of the spacecraft is defined as the combination of pitch, roll, and yaw. These are
motions that can occur without causing a change in the orbital position of the satellite. As illus-
trated in Figure 7.5.2:
• Pitch is the rotation of the craft about an axis perpendicular to the plane of the orbit
• Roll is rotation of the craft about an axis in the plane of the orbit
• Yaw is rotation about an axis that points directly toward the center of the earth
Satellite attitude is determined by the angular variation between satellite body axes and these
three reference axes.
Attitude control involves two functions. First, it is necessary to rotate that part of the satellite
that must point toward the earth around the pitch axis. Rotation must be precisely one revolution
per day or 0.25° per minute. Second, the satellite must be stabilized against torque from any other
source. To perform these functions, numerous detectors measure the orientation of the satellite with
respect to earth horizons, the sun, specified stars, gyroscopes, and radio frequency signals. Atti-
tude correction systems may involve a control loop with the ground control station or may be
designed as a closed loop by storing all required attitude reference information within the satel-
lite itself.
Relative to the first function listed, satellites can be spin-stabilized gyroscopically by rotating
the body of the satellite in the range of 30 to 120 times per minute. This effectively stabilizes the
craft with respect to the axis of the spin and is used with cylindrical-type satellites. Although a
relatively simple approach, it has a major drawback—the antenna would not remain pointed
toward earth if it were mounted on the rotating portion of the craft. The solution is to de-spin the
section holding the antenna at one revolution per day about the pitch axis to maintain a correct
antenna heading.
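The de-spin rate quoted above is just one revolution per day expressed in degrees per minute:

```python
# One revolution per day about the pitch axis:
deg_per_min = 360 / (24 * 60)
print(deg_per_min)   # 0.25 degrees per minute, as stated in the text
```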
Receiving antennas for commercial applications, such as radio/TV networks, CATV net-
works, and special services or teleconferencing centers, range from less than a meter to 7 m or
more in diameter. Antennas for consumer and business use are even smaller, depending on the
type of signal being received and the quality of the signal to be provided by the down-link instal-
lation. If the signals being received are strictly digital in nature, smaller sizes are usually suffi-
cient.
Antenna size for any application is determined primarily by the type of transmission, the band
of operation, the location of the receiving station, typical weather in the receiving station locale,
and the quality of the output signal required from the down-link. As indicated previously, digital
transmissions usually allow a smaller main reflector to be used because of the transmission char-
acteristics of digital systems. The data stream periodically includes information to check the
accuracy of the data and, if errors are found, to make corrections. If errors are too gross for error
correction, the digital circuitry may provide error concealment techniques to hide the errors.
Absolute correction is less critical for entertainment applications, such as television and audio
programming. The nature of the application also helps to determine if the antenna
must be strictly parabolic or if one of the spherical types will be sufficient.
Regarding the frequency band of operation, the lower the signal frequency, the larger the
antenna reflector must be for the same antenna gain. However, there are compensating factors
such as the required output signal quality, station location, and local environment. Generally, the
gain figure and directivity of larger reflectors are greater than those of smaller reflectors.
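The gain-versus-size tradeoff can be quantified with the standard parabolic-reflector gain relation, G = η(πD/λ)², where η is the aperture efficiency. The sketch below (Python; the 55 percent efficiency and the example frequencies are illustrative assumptions, not values from the text) shows why a lower-frequency band demands a larger reflector for the same gain:

```python
import math

def antenna_gain_db(diameter_m, freq_hz, efficiency=0.55):
    """Gain of a parabolic reflector: G = efficiency * (pi * D / wavelength)^2."""
    wavelength = 3e8 / freq_hz
    gain = efficiency * (math.pi * diameter_m / wavelength) ** 2
    return 10 * math.log10(gain)

# The same 2.4-m dish at C-band (4 GHz) and Ku-band (12 GHz):
print(f"4 GHz:  {antenna_gain_db(2.4, 4e9):.1f} dBi")
print(f"12 GHz: {antenna_gain_db(2.4, 12e9):.1f} dBi")
# The 12-GHz gain is about 9.5 dB higher; equivalently, matching the 12-GHz
# gain at 4 GHz would require a reflector three times the diameter.
```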
One of the most critical sections of the receiver is the low-noise amplifier (LNA) or low-noise
converter, the first component to process the signal after the antenna. The amplifier must add as
little noise as possible to the signal, because once added, noise cannot be removed. LNAs are
rated by their noise temperature; the cost of an LNA increases significantly as its noise
temperature goes down.
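Noise temperature and the more familiar noise figure are interchangeable ratings related through the standard 290 K reference temperature. A small conversion sketch (Python; the function names and the 40 K example value are illustrative):

```python
import math

T0 = 290.0  # standard reference temperature, kelvin

def noise_figure_to_temp(nf_db):
    """Convert a noise figure in dB to an equivalent noise temperature in K."""
    return T0 * (10 ** (nf_db / 10) - 1)

def temp_to_noise_figure(t_kelvin):
    """Convert a noise temperature in K to a noise figure in dB."""
    return 10 * math.log10(1 + t_kelvin / T0)

# The 3.0-dB receiver noise figure assumed in Figure 7.5.3 corresponds to:
print(f"{noise_figure_to_temp(3.0):.0f} K")   # about 289 K
# A premium 40 K LNA corresponds to a noise figure of well under 1 dB:
print(f"{temp_to_noise_figure(40):.2f} dB")
```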
Following the LNA, the receiver tuner converts signals to an intermediate frequency (IF). As
with the up-link equipment, an output at 70 MHz is useful for connecting to terrestrial microwave equipment if
desired. A second conversion takes video, audio, or data to baseband signals in a form most con-
venient for those using the communications link.
Figure 7.5.3 Carrier-to-noise ratio is determined by the spacecraft’s effective isotropic radiated
power (EIRP) and the home receiver figure of merit (G/T). For a fixed receiver quality, C/N can be
improved by increasing the EIRP. This may be accomplished by either increasing the satellite
transmitter power or reducing coverage area (increasing antenna gain). These curves assume an
operating frequency of 12 GHz. For a typical receiver noise figure of 3.0 dB, the antenna diame-
ters would range from 0.36 m for the 3 dB/K case to 2.5 m for the 20 dB/K case. (After [2].)
Because the picture quality is related to C/N, a common method of improving the picture is to
use higher figures of merit for the home terminal. This can be achieved with a larger antenna that
has a higher gain or with an LNA with lower noise-temperature characteristics.
The relationship between C/N and home-terminal antenna size is independent of carrier fre-
quency. It is independent because the antenna gain of the home terminal depends on the square
of the frequency in exactly the same way as the free-space loss. Therefore, in the equation for
transmission power, the two variables effectively cancel each other, all other things being equal.
On the other hand, for a given power level in a satellite and a given coverage area, the move to
higher carrier frequencies requires larger antennas to achieve the performance because of atmo-
spheric and rain losses. Thus, to maintain picture quality with atmospheric losses, high-power
signals have to be used in the satellite to accommodate the smaller home-receiver antennas.
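This cancellation is easy to verify numerically. In the sketch below (Python; the dish size, efficiency, and slant range are illustrative assumptions), the receive-antenna gain and the free-space path loss both scale as the square of frequency, so the net received level for a fixed dish size is unchanged when the carrier frequency moves, atmospheric effects aside:

```python
import math

C = 3e8              # speed of light, m/s
RANGE_M = 36_000e3   # approximate geostationary slant range, m

def free_space_loss_db(freq_hz, distance_m=RANGE_M):
    """Free-space path loss: 20 log10(4 * pi * d * f / c)."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

def dish_gain_db(diameter_m, freq_hz, efficiency=0.55):
    wavelength = C / freq_hz
    return 10 * math.log10(efficiency * (math.pi * diameter_m / wavelength) ** 2)

# Received level relative to EIRP for a 1-m dish at two carrier frequencies:
for f in (4e9, 12e9):
    net = dish_gain_db(1.0, f) - free_space_loss_db(f)
    print(f"{f/1e9:.0f} GHz: net = {net:.2f} dB")
# Both frequencies print the same net figure: the f^2 terms cancel.
```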
Figure 7.5.4 In the 12-GHz band, analog video performance will degrade during rain. This example shows performance for a system designed to have an S/N of 50 dB during clear weather. Climate types range from cold and dry areas such as Maine (B) to tropical areas such as Florida and
Louisiana (E). (After [2].)
Rain Attenuation
Attenuation caused by rain can be a difficult problem to deal with. Both the rate and volume of
rain vary extensively around the U.S. and elsewhere. A rain attenuation model, drawn up by the
National Aeronautics and Space Administration for the 12 GHz broadcast band, summarizes the
different climates in the U.S. The hardest to deal with is that of the southeast, particularly in Lou-
isiana and Florida.
To illustrate how the climate curves reflect the video S/N, these parameters are plotted in Fig-
ure 7.5.4 against a percentage of time for each climate type. The picture quality in the worst cli-
mate would be usable until the C/N dropped below the FM threshold. This would happen about
0.2 percent of the time in the southeastern U.S. This amounts to almost 18 h a year.
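The 18-hour figure follows directly from the availability percentage; a quick check:

```python
HOURS_PER_YEAR = 365.25 * 24      # about 8766 hours

outage_fraction = 0.002           # below FM threshold 0.2% of the time
outage_hours = outage_fraction * HOURS_PER_YEAR

print(f"Annual outage: {outage_hours:.1f} hours")  # about 17.5 h/year
```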
Figure 7.5.5 Receiving station G/T is a function of antenna diameter, receiver temperature, and
antenna temperature. In heavy rain, the antenna temperature increases from a clear-weather
value of perhaps 50K to a maximum value of 290K (ambient). (After [2].)
As mentioned previously, a critical characteristic for receiving high quality pictures from a
direct-broadcast satellite is the figure of merit (G/T) of the home terminal. At a particular fre-
quency, this figure simply depends on the size of the antenna and on the system noise tempera-
ture of the LNA/receiver system. The system noise temperature, for its part, is affected by rain.
Rain not only attenuates the signal but also increases the antenna temperature. This temperature
must be added to the LNA temperature to set the system noise temperature. Variation of the
antenna temperature is particularly distressing because the system temperature margin that can
account for a high G/T may be obliterated just when the margin is needed the most. System tem-
perature in clear weather and antenna size are, thus, not perfectly interchangeable.
For the same clear-weather figure of merit, a larger antenna is better than a lower system noise
temperature, because the increased gain of a larger antenna is not affected by rain.
A plot of the figure of merit versus antenna diameter for different low-noise amplifier tem-
peratures in both clear weather and heavy rain yields the results shown in Figure 7.5.5. A 1.8 m
antenna with a 480K temperature and a 1 m antenna with a 120K temperature both give the same
clear-weather figure of merit: 17 dB/K. However, in heavy rain the larger antenna G/T deterio-
rates to only 16 dB, whereas the smaller antenna system drops all the way to 13 dB.
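These figures can be reproduced from first principles. The sketch below (Python; it assumes a 12-GHz carrier, 55 percent aperture efficiency, and antenna temperatures of 50 K in clear weather and 290 K in heavy rain, per Figure 7.5.5) computes G/T for the two terminals discussed above:

```python
import math

FREQ = 12e9
WAVELENGTH = 3e8 / FREQ

def g_over_t(diameter_m, lna_temp_k, antenna_temp_k, efficiency=0.55):
    """Figure of merit G/T in dB/K for a simple antenna + LNA receive chain."""
    gain = efficiency * (math.pi * diameter_m / WAVELENGTH) ** 2
    system_temp = lna_temp_k + antenna_temp_k
    return 10 * math.log10(gain / system_temp)

for d, t_lna in ((1.8, 480.0), (1.0, 120.0)):
    clear = g_over_t(d, t_lna, antenna_temp_k=50.0)
    rain = g_over_t(d, t_lna, antenna_temp_k=290.0)
    print(f"{d} m dish, {t_lna:.0f} K LNA: "
          f"clear {clear:.1f} dB/K, rain {rain:.1f} dB/K")
# Both terminals give roughly 17 dB/K in clear weather, but in heavy rain the
# 1.8-m system falls only to about 16 dB/K while the 1-m system drops near 13 dB/K.
```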
7.5.5 References
1. Cook, James H., Jr., Gary Springer, Jorge B. Vespoli: “Satellite Earth Stations,” in NAB
Engineering Handbook, Jerry C. Whitaker (ed.), National Association of Broadcasters,
Washington, D.C., pp. 1285–1322, 1999.
2. Kase, C. A., and W. L. Pritchard: “Getting Set for Direct-Broadcast Satellites,” IEEE Spec-
trum, IEEE, New York, N.Y., vol. 18, no. 8, pp. 22–28, 1981.
Chapter
7.6
Content Distribution
7.6.1 Introduction
The Internet is not just ubiquitous; its use for conducting commerce has become pervasive. Great
strides have been made in providing widespread connectivity, increased capacity for
moving richer content, and adequate security for maintaining privacy. One of the largest remain-
ing challenges is scalability—particularly for distributing and delivering mixed media content to
large numbers of locations and users in a predictable and efficient manner. The Internet has
proven to be a great transaction network but falls woefully short when it comes to delivering ser-
vices to even moderately sized communities or subscriber bases. On the other hand, present-day
broadcast infrastructures, satellite, cable, and terrestrial networks possess built-in scalability and
have the means to deliver broadband services to communities of any size, spread over any geog-
raphy, very effectively.
With the proper hardware, television programs can be fairly successfully viewed on a personal
computer screen. On the other hand, putting a Web page from a computer browser output directly
onto a TV screen is generally not a satisfying experience. This is because Web content is typi-
cally viewed on a computer by a single user positioned at close proximity to the screen, while
television is usually watched at far greater distances and often by groups of viewers. This distinc-
tion is referred to as the “one-foot vs. ten-foot experience” or “lean forward vs. lean back” usage.
This implies that when viewing the typical Web page on a TV screen, fonts and graphics are gen-
erally too small to be comfortably viewed. Also, the selection of hyperlinks can be difficult with
an infrared remote control.
ITV systems have dealt with this issue in different ways. Some systems transcode the Web
page content in a specialized server so that it displays more appropriately on a TV screen. Other
systems perform this transcoding in the client receiver, or with a combination of client and
server-based steps. These transcodings typically involve font substitution to a larger typeface,
while retaining as much of the look and feel of the original page as possible. The overall success
of the process varies widely. Some systems do a better job than others, and some Web pages lend
themselves to such transcoding more readily than others.
In addition, because a mouse is generally not used with ITV receivers, Web-page links are
converted to another form of display. Common practice has been to highlight each link on the
screen one at a time, using the remote control navigation keys to sequence among the links. For
example, as a user pushes the “down arrow” key on the ITV remote, the next link down the page
will be highlighted. The user steps through the links on the page using the navigation keys until a
link that the user wishes to follow is highlighted. Then the user presses a “Go,” “OK” or “Enter”
key on the remote, and the display switches to the linked page, once again transcoded for TV dis-
play.
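The link-navigation behavior described above amounts to maintaining a highlight index over the page's list of links. A minimal sketch (Python; the class and page names are hypothetical):

```python
class LinkNavigator:
    """Tracks which hyperlink on a transcoded page is currently highlighted."""

    def __init__(self, links):
        self.links = links
        self.index = 0  # first link highlighted by default

    def down(self):     # "down arrow" key on the ITV remote
        self.index = min(self.index + 1, len(self.links) - 1)

    def up(self):       # "up arrow" key on the ITV remote
        self.index = max(self.index - 1, 0)

    def go(self):       # "Go"/"OK"/"Enter": follow the highlighted link
        return self.links[self.index]

nav = LinkNavigator(["news.html", "sports.html", "weather.html"])
nav.down()
nav.down()
print(nav.go())  # weather.html
```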
The ultimate solution to this problem involves the design of Web pages customized for TV
display by the content author. While this is rarely done at present, it may become standard prac-
tice as the ITV medium takes hold.
Although the Internet was designed for point-to-point exchanges between a client and a host server, new applications have emerged, such as online learning and live webcasts, that
require rich content to reach millions of users simultaneously. These applications require one-to-
many exchanges, and place considerable demands on the Internet infrastructure in terms of band-
width, scalability, and predictability of content delivery.
Several strategies have been devised to improve the response time of the Internet by moving content closer to the edge of the network, thereby requiring fewer hops from the user. These strategies include the following:
• Web caching, which can be used to pre-fetch commonly requested content and store it in loca-
tions that are closer to the user, thus eliminating the need for every transaction to be serviced
by a data center. Web caching can be used to effectively decongest data centers, but it only
works when the content being served is relatively static. In other words, if the content changes
frequently, then the caches have to be replenished frequently as well, and this can result in
stale caches and unpredictable delays.
• Content replication, an effective method for improving response time to transactions, particu-
larly when one can predict which content will be in demand and at what locations. Content
that changes frequently, such as real-time stock updates, news highlights, and live Web events
can be replicated to many servers at locations close to users. Content serving occurs locally,
and therefore delivery is more predictable.
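The stale-cache tradeoff described in the Web-caching bullet can be illustrated with a minimal time-to-live (TTL) cache sketch (Python; the class and URL names are hypothetical): content is served locally until its TTL expires, after which the next request must go back to the origin, so rapidly changing content forces frequent replenishment.

```python
import time

class TTLCache:
    """Minimal web-cache sketch: entries are served locally until they expire."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}  # url -> (content, time cached)

    def fetch(self, url, origin_fetch):
        entry = self.store.get(url)
        if entry is not None:
            content, cached_at = entry
            if time.monotonic() - cached_at < self.ttl:
                return content, "cache hit"       # served from the edge
        content = origin_fetch(url)               # round trip to the data center
        self.store[url] = (content, time.monotonic())
        return content, "cache miss"

cache = TTLCache(ttl_seconds=60.0)
origin = lambda url: f"<page for {url}>"

print(cache.fetch("/news", origin)[1])  # cache miss (first request)
print(cache.fetch("/news", origin)[1])  # cache hit (served locally)
```

With a TTL of zero (i.e., content that is never valid for reuse) every request becomes a miss, which is the degenerate case the text warns about for highly dynamic content.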
• Broadcast networks were built to deliver high-quality, synchronized audio and video content to a large population of listeners
and viewers. In addition, many of these networks have migrated from analog to digital trans-
mission systems, thus greatly enhancing their ability to carry new types of digital content,
including Internet content.
• Broadcast networks are inherently scalable. By virtue of their point-to-multipoint transmission capability, it takes no more resources, bandwidth or other provisions, to send content to a
million locations than to one, as long as all of the receiving locations are within the transmission footprint of the broadcast network. In contrast, with the traditional Internet, each
location that is targeted to receive the content adds to the overall resources required to
complete the transmission.
• Broadcast networks offer predictable performance. Again, by virtue of the point-to-multipoint
nature of transmission on broadcast networks, there are no variances in the propagation delay
of data throughout the network, regardless of where a receiver is located. This inherent capa-
bility assures a uniform experience to all users within the broadcast network.
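The scalability contrast drawn in the bullets above can be made concrete with a back-of-the-envelope comparison (Python; the 4-Mbps stream rate is an illustrative assumption): unicast delivery cost grows linearly with audience size, while broadcast cost is flat within the transmission footprint.

```python
def unicast_bandwidth(stream_mbps, receivers):
    """Point-to-point delivery: one stream must be sent per receiver."""
    return stream_mbps * receivers

def broadcast_bandwidth(stream_mbps, receivers):
    """Point-to-multipoint delivery: one transmission serves every receiver
    inside the footprint, regardless of audience size."""
    return stream_mbps

for n in (1, 1_000, 1_000_000):
    print(f"{n:>9} receivers: unicast {unicast_bandwidth(4, n):>9} Mbps, "
          f"broadcast {broadcast_bandwidth(4, n)} Mbps")
```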
By enabling broadcast networks to connect with the Internet, the Broadcast Internet offers a
cost-effective, reliable, and seamless path for delivering multimedia-rich content to large numbers of users and service providers simultaneously. Pan-continent broadcast networks, such as
digital satellite systems, can be used to distribute content to a very large, and potentially highly
dispersed, set of locations. Other types of broadcast networks with smaller footprints, such as
digital TV networks, can be used for local content distribution. The content is received and
cached locally at these locations, which will—in turn—serve the content to users that connect
either directly to that location or through a service provider that uses the location for content
serving. The content is served to the users using the traditional mechanisms employed on the
Internet. For example, instead of the ISPs replenishing their local caches on an on-demand basis,
they can adopt the strategy of pre-filling their caches based on analysis of the type of content
most frequently requested by the communities they serve. Content that is dynamic in nature,
such as audio/video streaming and stock tickers, will be updated on the local caches at the appro-
priate frequency, keeping the cached content current and providing better overall quality of ser-
vice to the users. Figure 7.6.1 illustrates the application of this principle to a cable-based delivery
system. Figure 7.6.2 expands the concept to a variety of distribution media.
The major design considerations for building Broadcast Internet-ready broadcast facilities
are:
• Multi-vendor interoperability. Any equipment that is added to the head-end to incorporate
Internet data must work seamlessly with all existing equipment and must fit within the opera-
tional framework used by the facility. This requires any new equipment to support standard
interfaces and protocols, and follow the guidelines set by standards organizations that cover
both the digital video and IP domains.
• Bandwidth optimization. Managing the available bandwidth in optimal fashion is a very
important design consideration. There is considerable variability in the use of bandwidth by
compression and multiplexing equipment, caused either by the inherent inefficiencies of the
equipment, the nature of the content, or the service mix. The head-end architecture must pro-
vide the ability to capitalize on this variability to maximize the use of the bandwidth in an
opportunistic manner. Two strategies are commonly applied for allocating bandwidth to data
services in an environment that includes legacy video services as well: 1) pre-allocated
bandwidth, where the data services are treated just like another “channel” or “bundle of
channels” and the bandwidth is reserved for the exclusive use of these services; 2) opportunistic
bandwidth, where the data services are treated as “trickle services” with no specific
time-of-delivery requirements. In this case, bandwidth can be optimized by not pre-allocating
bandwidth for the data services and taking advantage of the variability in bandwidth use by
the legacy video programs to “fill the blanks” with data.
Figure 7.6.1 A high-bandwidth data distribution system incorporating cable TV distribution to end-users.
• Flexibility. The nature and type of Internet services that will be required to be supported in
any head-end is likely to change rapidly, as is evidenced by the rapid rate of technological and
application advancement in the Internet in general. The head-end architecture must easily
adapt to new application requirements without having to undergo major design changes.
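The opportunistic-bandwidth strategy described above can be sketched as a simple multiplexer allocation step (Python; the 38.8-Mbps capacity and function names are illustrative assumptions): in each transmission interval, whatever capacity the variable-bit-rate video services leave unused is filled with queued data rather than null packets.

```python
MUX_CAPACITY_MBPS = 38.8  # e.g., a fixed transport-stream payload rate

def allocate(video_demand_mbps, data_queue_mbps):
    """Give video priority; fill the remainder opportunistically with data."""
    video = min(video_demand_mbps, MUX_CAPACITY_MBPS)
    spare = MUX_CAPACITY_MBPS - video
    data = min(data_queue_mbps, spare)
    return video, data, spare - data  # last value would be null packets

# Video demand varies scene by scene; the data "trickle service" soaks up the slack:
for demand in (30.0, 36.5, 38.8):
    video, data, nulls = allocate(demand, data_queue_mbps=10.0)
    print(f"video {video:.1f} + data {data:.1f} + null {nulls:.1f} Mbps")
```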
Currently, applications that incorporate data can be classified as follows:
• Pure data services. These services use Internet or private data as the sole content. There are
no legacy, television-oriented digital audio/video elements in the content. This type of content
is typically targeted towards personal computers and integrated set-top boxes that have an
operating environment similar to personal computers and are enabled for interactive services
on the television. The content preparation, packaging, scheduling, and delivery systems used
for these services can be independent of those used for the legacy video services.
• Loosely co-related data services. These services use Internet or private data to enhance or
extend a legacy, television-oriented program, or to provide a parallel co-related content
stream that is independent of the video stream, but scheduled for play-out simultaneously.
The parallel content stream can be viewed in real time or stored for later viewing. The content
preparation, packaging, scheduling, and delivery systems used for these services must have
the ability to interface and interact with those used for the legacy video services.
• Enhanced TV. These services use Internet or private data to enhance a legacy, television-ori-
ented program in a highly synchronized manner. The synchronization can be either presenta-
tion time-based or frame content-based. The content preparation, packaging, scheduling, and
delivery systems for these services must be fully integrated into those used for legacy video
services.
• Data distribution to Internet service providers. This model is suitable for local distribution
of content, particularly multicast services and streaming media, to Internet service providers
that then deliver that content to consumers over traditional land-based last-mile connections.
This model can be very effective for simulcast services—programs that are viewable on the
TV as well as on a PC, with enhancements specific to the PC.
• DTV as last mile. In this model, the DTV spectrum is used as a last mile in a content distri-
bution network. Content providers lease or acquire on demand last mile bandwidth for deliv-
ering Internet based services.
The DTV card must also support multi-sync monitors and common PC display resolutions such
as 1024 × 768, 800 × 600, and 640 × 480. If the board can support DVD playback in addition to
ATSC and NTSC, the value proposition for the consumer is significantly enhanced.
must be in place, and certain agreements must be reached among the many potential informa-
tion-providers.
A related challenge is the interface format itself, including the physical connecting device.
Table 7.6.4 summarizes the more common interfaces.
7.6.4a DVI
The Digital Visual Interface (DVI) grew out of a consortium of companies known as the Digital
Display Working Group, an open industry group led by Intel, Compaq, Fujitsu, Hewlett Packard, IBM, NEC, and Silicon Image [2]. The objective of the Group was to address industry
requirements for a digital connectivity specification for high-performance PCs and digital dis-
plays.
DVI has evolved to become a preferred interface between devices such as DVD players and
cable/satellite/terrestrial set-top DTV receivers. DVI is available in two basic flavors:
• DVI-I, which is a combination analog/digital interface for a certain degree of backward com-
patibility.
• DVI-D, which is 100 percent digital.
DVI accommodates two different bandwidths:
• Dual Link DVI, which supports 2 × 165 MHz channels of bandwidth.
• Single Link DVI, which supports a maximum bandwidth of 165 MHz on a single channel.
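Whether a given display format fits within these limits comes down to pixel clock. A rough feasibility check (Python; real DVI timings add blanking overhead, so the raw active-pixel rate below understates the true requirement):

```python
SINGLE_LINK_HZ = 165e6      # maximum single-link DVI pixel clock
DUAL_LINK_HZ = 2 * 165e6    # dual link doubles the available bandwidth

def pixel_rate(width, height, refresh_hz):
    """Raw (active-pixel) rate; actual DVI timings also include blanking."""
    return width * height * refresh_hz

for fmt in ((1280, 720, 60), (1920, 1080, 60), (2560, 1600, 60)):
    rate = pixel_rate(*fmt)
    link = "dual link" if rate > SINGLE_LINK_HZ else "single link"
    print(f"{fmt[0]}x{fmt[1]}@{fmt[2]}: {rate/1e6:.0f} Mpix/s -> {link}")
```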
The consumer world adds another layer of complexity by incorporating High-bandwidth
Digital Content Protection (HDCP), which encrypts each pixel as it moves from the player or
set-top box to the display, using a system that calls for periodic reauthorization from the source.
• Isochronous: data channels provide guaranteed data transport at a predetermined rate. This is
especially important for time-critical multimedia data where just-in-time delivery eliminates
the need for costly buffering.
Much like LANs and WANs, IEEE 1394 is defined by the high-level application interfaces
that use it, not by a single physical implementation. Therefore, as new silicon technologies allow
higher speeds, longer distances, and alternate media, IEEE 1394 can scale to enable new applica-
tions.
Perhaps most important for use as a digital interface for consumer electronics is that IEEE
1394 is a peer-to-peer interface. This allows not only dubbing from one camcorder to another
without a computer, for example, but also allows multiple computers to share a given camcorder
without any special support in the camcorders or computers. All of these features of IEEE 1394
are key reasons why it is a preferred audio/video digital interface.
HAVi Specifications
The Home Audio Video Interoperability Group (HAVi) was formed in 1999 by eight major con-
sumer electronics companies—Grundig, Hitachi, Matsushita, Philips, Sharp, Sony, Thomson,
and Toshiba. The goal of the group was to create common specifications for interconnecting
(networking) and providing interoperability between digital home entertainment products.
HAVi’s work focused on a multi-vendor architecture to manage home entertainment products
and to provide for seamless interoperability. Devices connected in a HAVi network can share
resources and functionality across the network. The standard is based on an underlying IEEE
1394 digital interface. In addition to the basic networking and control capabilities, HAVi provides a Java-based user interface that is optimized for television displays and further defines the
behavior of applications in the network.
7.6.4d PVRs
An important development in the television environment, and one that has strong resonance for
ITV applications, is the introduction of the personal video recorder (PVR). Because these
devices allow personalization of programming, they are attractive to consumers and offer a
fundamental form of interactivity with the content environment.
These devices allow a substantial amount of television programming to be stored on a hard
disk and played back in non-linear fashion. Because a non-linear storage device (a hard drive) is
being used, these devices can begin to replay a program while it is still being recorded. This
allows a powerful feature of the PVR called live pause, by which the device is used as a buffer
for live programming. This permits users to pause a program that they are watching as it is cur-
rently being broadcast, and return to it a few minutes later without interruption.
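Live pause works because the storage device acts as a time-shift buffer between two independent pointers: the recorder writes at the live edge while the player reads at an offset behind it. A minimal in-memory sketch of the idea (Python; the class name is hypothetical, and a real PVR streams frames to disk rather than holding them in RAM):

```python
from collections import deque

class TimeShiftBuffer:
    """Toy live-pause buffer: writing (recording) and reading (playback)
    proceed independently, so playback can lag the live broadcast."""

    def __init__(self):
        self.buffer = deque()
        self.paused = False

    def record(self, frame):           # called for every incoming live frame
        self.buffer.append(frame)

    def play(self):                    # called for every displayed frame
        if self.paused or not self.buffer:
            return None                # display holds the last frame
        return self.buffer.popleft()

pvr = TimeShiftBuffer()
pvr.record("frame-1")
pvr.paused = True                      # viewer steps away
pvr.record("frame-2")                  # broadcast keeps arriving
pvr.record("frame-3")
pvr.paused = False                     # viewer returns...
print([pvr.play() for _ in range(3)])  # ...and misses nothing
```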
PVRs also offer a range of program selection processes from simple searching of electronic
program guides to intelligent searching for keywords in program titles, subjects, content, talent
names, and so on. Advanced PVRs incorporate a browsing ability to seek and download stored
content from on-demand libraries, as well as access broadcast program schedules.
Some PVRs store ITV elements of a program so they can be selectively accessed upon later
playback. A PVR that can access both broadcast channels (via terrestrial, satellite, or cable) plus
the Internet via broadband connectivity will serve as a powerful media collection tool. When
interfaced to a home network system, such a storage device can act as the home media server.
PVRs can be standalone devices, part of a PC, a home server, or even a remote storage system
in which the actual physical storage is on a server at some service provider’s location, with
unique personal access by the user. In this respect, the PVR is essentially a user interface that
accesses storage locally or on various networks, with the actual storage location of a particular
program being transparent to the user. The ultimate vision of this process is the creation of vir-
tual channels, by which the user’s PVR builds the program stream that the user views from vari-
ous stored sources.
Table 3 video formats; 2) outputs all ATSC Table 3 formats in the form of NTSC output; and
3) receives and reproduces, and/or outputs Dolby Digital audio.
These industry standard definitions were intended to eliminate the confusion over product fea-
tures and capabilities of television sets and monitors intended for DTV applications. The agree-
ment promised to spur the sale of DTV-compliant sets by injecting a certain amount of logic into
the marketing efforts of TV set manufacturers.
7.6.6 Videoconferencing
With desktop computers nearly as ubiquitous in business these days as telephones, the time has
arrived for the next big push in telecommunications—interactive desktop videoconferencing.
Interaction via video has been used successfully for many years to permit groups of persons to
communicate from widely distant locations. Such efforts have usually required some degree of
advance planning and specialized equipment ranging from custom-built fiber or coax services to
satellite links. These types of applications will certainly continue to grow, as the need to commu-
nicate on matters of business expands. The real explosion in videoconferencing, however, will
come when three criteria are met:
• Little—if any—advance planning is needed
• No special communications links need be installed to participate in a conference
• Participants can do it from their offices
The real promise of videoconferencing is to make it as convenient and accessible as a telephone
call.
Quality is determined not simply by the hardware being used and how fast it is, but by the
sophistication of the algorithms that run on the hardware.
Figure 7.6.4 Primary videoconferencing modes: (a) point-to-point, (b) broadcast, (c) multicast.
• Point-to-point, with direct communication between two locations (Figure 7.6.4a)
• Broadcast, with a single origination point and multiple receiving points (Figure 7.6.4b)
• Multicast, where all (or at least some) of the participating individuals can communicate with
each other and/or with the group as a whole (Figure 7.6.4c).
7.6.7 References
1. Pizzi, Skip: “Internet and TV Convergence,” in Interactive TV Survival Guide, Jerry C.
Whitaker (ed.), McGraw-Hill, New York, N.Y., 2001.
2. Putman, Peter: “Square Pegs and Round Holes,” http://www.hdtvexpert.com.
3. Hoffman, Gary A.: “IEEE 1394: The A/V Digital Interface of Choice,” 1394 Technology
Association Technical Brief, 1394 Technology Association, Santa Clara, Calif., 1999.
4. NAB TV TechCheck: “CEA Establishes Definitions for Digital Television Products,”
National Association of Broadcasters, Washington, D.C., September 1, 2000.
7.6.8 Bibliography
NAB: “Consumer Electronic Consortium Publishes Updated Specifications for Home Audio
Video Interoperability,” TV TechCheck, National Association of Broadcasters, Washington,
D.C., May 21, 2001.
Section
8
RF System Maintenance
Radio frequency (RF) equipment is unfamiliar to many persons entering the electronics industry.
Colleges do not routinely teach high-power RF principles, favoring instead digital technology.
Unlike other types of products, however, RF equipment often must receive preventive mainte-
nance to achieve its expected reliability. Maintaining RF gear is a predictable, necessary expense
that facilities must include in their operating budgets. Tubes (if used in the system) will have to
be replaced from time to time no matter what the engineer does; components fail every now and
then; and time must be allocated for cleaning and adjustments. By planning for these expenses
each month, unnecessary downtime can be avoided.
Although the reason generally given for minimum RF maintenance is a lack of time and/or
money, the cost of such a policy can be deceptively high. Problems that could be solved for a few
dollars may, if left unattended, result in considerable damage to the system and a large repair bill.
A standby system often can be a lifesaver; however, its usefulness sometimes is overrated. The
best standby RF system is a main system in good working order.
In This Section:
Angevine, Eric: “Controlling Generator and UPS Noise,” Broadcast Engineering, PRIMEDIA
Intertec, Overland Park, Kan., March 1989.
Baietto, Ron: “How to Calculate the Proper Size of UPS Devices,” Microservice Management,
PRIMEDIA Intertec, Overland Park, Kan., March 1989.
Bowick, C.: RF Circuit Design, Howard W. Sams and Co., Indianapolis, IN, 1982.
Bryant, G. H.: Principles of Microwave Measurements, IEE Electrical Measurement Series,
Peter Peregrinus Ltd., London, 1988.
Buchmann, Isidor: “Batteries,” in The Electronics Handbook, Jerry C. Whitaker (ed.), pg. 1058,
CRC Press, Boca Raton, Fla., 1996.
“Cable Testing with Time Domain Reflectometry,” Application Note 67, Hewlett Packard, Palo
Alto, Calif., 1988.
Collin, R. E.: Foundations for Microwave Engineering, McGraw-Hill, New York, N.Y., 1966.
DeDad, John A.: “Auxiliary Power,” in Practical Guide to Power Distribution for Information
Technology Equipment, PRIMEDIA Intertec, Overland Park, Kan., pp. 31–39, 1997.
Federal Information Processing Standards Publication No. 94, Guideline on Electrical Power for
ADP Installations, U.S. Department of Commerce, National Bureau of Standards, Wash-
ington, D.C., 1983.
Gray, T. S.: Applied Electronics, Massachusetts Institute of Technology, 1954.
High Power Transmitting Tubes for Broadcasting and Research, Philips Technical Publication,
Eindhoven, The Netherlands, 1988.
Highnote, Ronnie L.: The IFM Handbook of Practical Energy Management, Institute for Man-
agement, Old Saybrook, Conn., 1979.
“Improving Time Domain Network Analysis Measurements,” Application Note 62, Hewlett
Packard, Palo Alto, Calif., 1988.
Kaufhold, Gerry: “The Smith Chart, Parts 1–4,” Broadcast Engineering, Intertec Publishing,
Overland Park, Kan., November 1989–March 1990.
Kennedy, George: Electronic Communication Systems, 3rd ed., McGraw-Hill, New York, N.Y.,
1985.
Kolbert, Don: “Testing Coaxial Lines,” Broadcast Engineering, Intertec Publishing, Overland
Park, Kan., November 1991.
Lawrie, Robert: Electrical Systems for Computer Installations, McGraw-Hill, New York, N.Y.,
1988.
Montgomery, C. G., R. H. Dicke, and E. M. Purcell: Principles of Microwave Circuits,
McGraw-Hill, New York, N.Y., 1948.
Power Grid Tubes for Radio Broadcasting, Thomson-CSF publication #DTE-115, Thomson-
CSF, Dover, N.J., 1986.
Silence, Neal C.: “The Smith Chart and its Usage in RF Design,” RF Design, Intertec Publish-
ing, Overland Park, Kan., pp. 85–88, April 1992.
Smith, Morgan: “Planning for Standby AC Power,” Broadcast Engineering, PRIMEDIA Inter-
tec, Overland Park, Kan., March 1989.
Smith, P. H.: Electronic Applications of the Smith Chart, McGraw-Hill, New York, N.Y., 1969.
Strickland, James A.: “Time Domain Reflectometry Measurements,” Measurement Concepts
Series, Tektronix, Beaverton, Ore., 1970.
Stuart, Bud: “Maintaining an Antenna Ground System,” Broadcast Engineering, PRIMEDIA
Intertec, Overland Park, Kan., October 1986.
Svet, Frank A.: “Factors Affecting On-Air Reliability of Solid State Transmitters,” Proceedings
of the SBE Broadcast Engineering Conference, Society of Broadcast Engineers, Indianapolis, Ind., October 1989.
“TDR Fundamentals,” Application Note 62, Hewlett Packard, Palo Alto, Calif., 1988.
The Care and Feeding of Power Grid Tubes, Varian EIMAC, San Carlos, Calif., 1984.
Whitaker, Jerry C.: Maintaining Electronic Systems, CRC Press, Boca Raton, Fla., 1991.
Whitaker, Jerry C.: RF Systems Handbook, CRC Press, Boca Raton, Fla., 2002.
Chapter
8.1
RF System Reliability Considerations
8.1.1 Introduction
Most RF system failures can be prevented through regular cleaning, inspection, and close
observation of operating parameters. The history of the unit is also important in a thorough
maintenance program so that trends can be identified and analyzed.
Figure 8.1.1 Example of a transmitter operating log that should be filled out regularly by maintenance personnel.
• Complete list of the components replaced or repaired, including the device schematic number
and part number.
• Total system downtime as a result of the failure.
• Name of the engineer who made the repairs.
The importance of regular, accurate logging can best be emphasized through the following examples:
Case Study #1
Improper neutralization is detected on an AM broadcast transmitter IPA (intermediate power
amplifier), shown in Figure 8.1.2. The neutralization adjustment is made by moving taps on a
coil, and none of the taps has been changed. The history of the transmitter (as recorded in the maintenance
record) reveals, however, that the PA grid tuning adjustment has, over the past two years, been
moving slowly into the higher readings. An examination of the schematic diagram leads to the
conclusion that C-601 is the problem.
The tuning change of the stage was so gradual that it was not thought significant until an
examination of the transmitter history revealed that continual retuning in one direction only was
necessary to achieve maximum PA grid drive. Without a record of the history of the unit, time
could have been wasted in substituting expensive capacitors in the circuit, one at a time. Worse
yet, the engineer might have changed the tap on coil L-601 to achieve neutralization, further hid-
ing the real cause of the problem.
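The one-direction drift that gave this problem away is easy to miss from day to day but simple to detect from logged readings. The sketch below is illustrative only; the function name and the sample log values are hypothetical, not taken from any actual transmitter record:

```python
def monotonic_drift(readings, min_len=5):
    """Return "rising", "falling", or None for a series of logged
    tuning readings. A sustained one-direction trend suggests a
    component drifting in value rather than normal day-to-day scatter."""
    if len(readings) < min_len:
        return None
    deltas = [b - a for a, b in zip(readings, readings[1:])]
    if all(d >= 0 for d in deltas) and any(d > 0 for d in deltas):
        return "rising"
    if all(d <= 0 for d in deltas) and any(d < 0 for d in deltas):
        return "falling"
    return None

# Hypothetical PA grid tuning log (arbitrary dial units over two years)
log = [42.0, 42.5, 42.5, 43.1, 43.8, 44.2, 45.0]
print(monotonic_drift(log))  # a steady rise flags C-601-style drift
```

Normal retuning scatters in both directions; only a log review (manual or automated) reveals the steady march in one direction that points at a drifting component.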
Figure 8.1.2 AM transmitter IPA/PA stage exhibiting neutralization problem. A history of IPA retun-
ing (through adjustment of L-601) helped determine that loss of neutralization was the result of C-
601 changing in value.
Case Study #2
A UHF broadcast transmitter is found to exhibit decreasing klystron body-current. The typical
reading with average picture content is 50 mA, but over a 4-week period, the reading dropped to
30 mA. No other parameters show deviation from normal. Yet, the decrease in the reading indi-
cates an alternate path (besides the normal body-current circuitry) by which electrons return to
the beam power supply. A schematic diagram of the system is shown in Figure 8.1.3. Several fac-
tors could cause the body-current variation, including water leakage into the body-to-collector
insulation of the klystron. In time, this water can corrode the klystron envelope, possibly leading
to a loss of vacuum and klystron failure.
Water leakage can also cause partial bypassing of the body-current circuitry, an important
protection system in the transmitter. It is essential that the circuit functions normally at all times
and at full sensitivity in order to detect change when a fault condition occurs. Regular logging of
transmitter parameters ensures that developing problems such as this one are caught early.
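A logged parameter that drifts well outside its established baseline, as the body current did here, can be flagged automatically. The following sketch is a hypothetical illustration; the 20 percent tolerance is an arbitrary example, not a manufacturer's limit:

```python
def check_parameter(name, baseline, reading, tolerance=0.2):
    """Flag a logged transmitter parameter that has drifted more than
    `tolerance` (fractional) from its established baseline value."""
    drift = (reading - baseline) / baseline
    if abs(drift) > tolerance:
        return f"{name}: {reading} vs baseline {baseline} ({drift:+.0%}) - investigate"
    return None

# Body current fell from a 50 mA baseline to 30 mA over four weeks
warning = check_parameter("klystron body current (mA)", 50.0, 30.0)
print(warning)
```

Note that the check flags movement in either direction: a reading that falls, as in this case, can be just as significant as one that rises.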
Figure 8.1.3 Simplified high-voltage schematic of a klystron amplifier showing the parallel leakage
path that can cause a reduction in protection sensitivity of the body-current circuit.
in the PA tube socket. Because the supply of cooling air is passed through the socket, airborne
contaminants can be deposited on various sections of the assembly. These can create a high-volt-
age arc path across the socket insulators. Perform any cleaning work around the PA socket with
extreme care. Do not use compressed air to clean out a power tube socket. Blowing compressed
air into the PA or IPA stage of a transmitter will merely move the dirt from places where you can
see it to places where you cannot see it. Use a vacuum instead. When cleaning the socket assembly, do not disturb any components in the PA circuit. Visually check the tube anode to see if dirt
is clogging any of the heat-radiating fins.
Cleaning is also important to proper cooling of solid-state components in the transmitter. A
layer of dust and dirt can create a thermal insulator effect and prevent proper heat exchange from
a device into the cabinet.
Special precautions must be taken with systems that receive ac power from two independent
feeds. Typically, one ac line provides 208 V 3-phase service for the high-voltage section of the
system, and a separate ac line provides 120 V power for low-voltage circuits. Older transmitters
or high-power transmitters often utilize this arrangement. Check to see that all ac is removed
before any maintenance work begins.
Consider the following preventive maintenance procedures.
former may cause flashover failures. Insulating compound or oil around the base of a trans-
former can indicate overheating and/or leakage.
Relay Mechanisms
• Inspect relay contacts, including high-voltage or high-power RF relays, for signs of pitting or
discoloration.
• Inspect the mechanical linkage to confirm proper operation. The contactor arm (if used)
should move freely, without undue mechanical resistance.
• Inspect vacuum contactors for free operation of the mechanical linkage (if appropriate), and
for indications of excessive dissipation at the contact points and metal-to-glass (or metal-to-
ceramic) seals.
Unless problems are experienced with an enclosed relay, do not attempt to clean it. More
harm than good can be done by disassembling properly working components for detailed inspec-
tion.
Connection Points
• Inspect connections and terminals that are subject to vibration. Tightness of connections is
critical to the proper operation of high-voltage and RF circuits.
• Inspect barrier strip and printed circuit board contacts for proper termination.
Although it is important that all connections are tight, be careful not to overtighten. The connection points on some components, such as doorknob capacitors, can be damaged by excessive
force. There is no section of an RF system where it is more important to keep connections tight
than in the power amplifier stage. Loose connections can result in arcing between components
and conductors that can lead to system failure. The cavity access door is a part of the outer con-
ductor of the coaxial transmission line circuit in FM and TV transmitters, and in many RF gener-
ators operating at VHF and above. High potential RF circulating currents flow along the inner
surface of the door, which must be fastened securely to prevent arcing.
larger. Yet, they are stable, provide high gain, and may be easily driven by solid state circuitry.
Klystrons are relatively simple to cool and are capable of long life with a minimum of mainte-
nance. Two different types of klystrons have found service in television applications:
• Integral cavity klystron, in which the resonant cavities are built into the body.
• External cavity klystron, in which the cavities are mechanically clamped onto the body and
are outside the vacuum envelope of the device.
This difference in construction requires different maintenance procedures. The klystron body
(the RF interaction region of the integral cavity device), is cooled by the same liquid that is fed to
the collector. Required maintenance involves checking for leaks and adequate coolant flow.
Although the cavities of the external cavity unit are air-cooled, the body may be water- or air-
cooled. Uncorrected leaks in a water-cooled body can lead to cavity and tuning mechanism dam-
age. Look inside the magnet frame with a flashlight once a week. Correct leaks immediately and
clean away coolant residues.
The air-cooled body requires only sufficient airflow. The proper supply of air can be moni-
tored with one or two adhesive temperature labels and a close visual inspection. Look for discol-
oration of metallic surfaces. The external cavities need a clean supply of cooling air. Dust
accumulation inside the cavities will cause RF arcing. Check air supply filters regularly. Some
cavities have a mesh filter at the inlet flange. Inspect this point as required.
It is possible to make a visual inspection of the cavities of an external-cavity device by remov-
ing the loading loops and/or air loops. This procedure is recommended only when unusual
behavior is experienced, and not as part of routine maintenance. Generally there is no need to
remove a klystron from its magnet frame and cavities during routine maintenance.
increases. Excessive dissipation is perhaps the single greatest cause of catastrophic failure in a
power tube. PA tubes used in broadcast, industrial, and research applications can be cooled using
one of three methods: forced-air, liquid, and vapor-phase cooling. In radio and VHF-TV trans-
mitters, forced-air cooling is by far the most common method used. Forced-air systems are sim-
ple to construct and easy to maintain.
The critical points of almost every PA tube type are the metal-to-ceramic junctions or seals.
At temperatures below 250°C these seals remain secure, but above that temperature, the bonding
in the seal may begin to disintegrate. Warping of grid structures also may occur at temperatures
above the maximum operating level of the tube. The result of prolonged overheating is shortened
tube life or catastrophic failure. Several precautions are usually taken to prevent damage to tube
seals under normal operating conditions. Air directors or sections of tubing may be used to pro-
vide spot-cooling to critical surface areas of the device. Airflow sensors prevent operation of the
system in the event of a cooling system failure.
Tubes that operate in the VHF and UHF bands are inherently subject to greater heating action
than devices operated at lower frequencies (such as AM service). This effect is the result of
larger RF charging currents into the tube capacitances, dielectric losses, and the tendency of
electrons to bombard parts of the tube structure other than the grid and plate in high-frequency
applications. Greater cooling is required at higher frequencies.
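The frequency dependence of the charging-current contribution can be made explicit. For a fixed tube capacitance C driven at RF voltage V, the capacitive charging current is, to first order (neglecting distributed effects):

```latex
I_c = 2\pi f C V
```

Because the associated heating scales as the square of this current, moving the same capacitance and voltage from the AM band (on the order of 1 MHz) to the FM band (on the order of 100 MHz) raises the charging current by roughly two orders of magnitude, which is one reason greater cooling is required at higher frequencies.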
The technical data sheet for a given power tube will specify cooling requirements. The end-
user is not normally concerned with this information; it is the domain of the transmitter manu-
facturer. The end-user, however, is responsible for proper maintenance of the cooling system.
Air-Handling System
All modern air-cooled PA tubes use an air-system socket and matching chimney for cooling.
Never operate a PA stage unless the air-handling system provided by the manufacturer is com-
plete and in place. For example, the chimney for a PA tube often can be removed for inspection
of other components in the circuit. Operation without the chimney, however, may significantly
reduce airflow through the tube and result in excessive dissipation of the device. It also is possi-
ble that operation without the proper chimney could damage other components in the circuit
because of excessive radiated heat. Normally the tube socket is mounted in a pressurized com-
partment so that cooling air passes through the socket and then is guided to the anode cooling
fins, as illustrated in Figure 8.1.4. Do not defeat any portion of the air-handling system provided
by the manufacturer.
Cooling of the socket assembly is important for proper cooling of the tube base, and for cool-
ing of the contact rings of the tube itself. The contact fingers used in the collet assembly of a
socket typically are made of beryllium copper. If subjected to temperatures above 150°C for an
extended period of time, the beryllium copper will lose its temper (springy characteristic) and
will no longer make good contact with the base rings of the device. In extreme cases, this type of
socket problem can lead to arcing, which can burn through the metal portion of the tube base
ring. Such an occurrence can ultimately lead to catastrophic failure of the device because of a
loss of the vacuum envelope. Other failure modes for a tube socket include arcing between the
collet and tube ring that can weld a part of the socket and tube together. The end result is failure
of both the tube and the socket.
Ambient Temperature
The temperature of the intake air supply is a parameter that is usually under the control of the
maintenance engineer. The preferred cooling air temperature is no higher than 75°F, and no
lower than the room dew point. The air temperature should not vary because of an oversized air
conditioning system or because of the operation of other pieces of equipment at the transmission
facility. Monitoring the PA exhaust stack temperature is an effective method of evaluating overall
RF system performance. This can be easily accomplished. It also provides valuable data on the
cooling system and final stage tuning.
Another convenient method for checking the efficiency of the transmitter cooling system over
a period of time involves documenting the back pressure that exists within the PA cavity. This
measurement is made with a manometer, a simple device that is available from most heating,
ventilation, and air-conditioning (HVAC) suppliers. The connection of a simplified manometer
to a transmitter PA input compartment is illustrated in Figure 8.1.5.
When using the manometer, be careful that the water in the device is not allowed to backflow
into the PA compartment. Do not leave the manometer connected to the PA compartment when
the transmitter is on the air. Make the necessary measurement of PA compartment back pressure
and then disconnect the device. Seal the connection point with a subminiature plumbing cap or
other appropriate hardware.
By charting the manometer readings, it is possible to accurately measure the performance of
the transmitter cooling system over time. Changes resulting from the build-up of small dust par-
ticles (microdust) may be too gradual to be detected except through back-pressure charting. Be
certain to take the manometer readings during periods of calm weather. Strong winds can result
in erroneous readings because of pressure or vacuum conditions at the transmitter air intake or
exhaust ports.
Figure 8.1.5 A manometer device used for measuring back pressure in the PA compartment of a
transmitter.
Deviations from the typical back-pressure value, either higher or lower, could signal a prob-
lem with the air-handling system. Decreased PA input compartment back pressure could indicate
a problem with the blower motor or a build-up of dust and dirt on the blades of the blower assem-
bly. Increased back pressure, on the other hand, could indicate dirty PA tube anode cooling fins
or a build-up of dirt on the PA exhaust ducting. Either condition is cause for concern. A system
suffering from reduced air pressure into the PA compartment must be serviced as soon as possi-
ble. Failure to restore the cooling system to proper operation may lead to premature failure of the
PA tube or other components in the input or output compartments. Cooling problems do not
improve. They always get worse.
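The interpretation rules above (low back pressure pointing at the blower, high back pressure at dirty anode fins or exhaust ducting) can be sketched as a simple classifier. The typical value and tolerance band below are hypothetical; use the figures documented for your own installation:

```python
def classify_back_pressure(reading, typical, band=0.1):
    """Interpret a PA compartment back-pressure reading (inches of
    water) against the documented typical value for the site. The
    10 percent band is illustrative, not a manufacturer's limit."""
    if reading < typical * (1 - band):
        return "low: check blower motor and blower blade cleanliness"
    if reading > typical * (1 + band):
        return "high: check anode cooling fins and exhaust ducting for dirt"
    return "normal"

# Hypothetical log entries against a typical reading of 2.0 in. of water
for r in (2.05, 1.6, 2.4):
    print(r, "->", classify_back_pressure(r, 2.0))
```

Charted over months, even a slow microdust build-up eventually pushes readings out of the band, which is exactly the gradual change the text warns is otherwise undetectable.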
Failure of the PA compartment air-interlock switch to close reliably may be an early indica-
tion of impending cooling system trouble. This situation could be caused by normal mechanical
wear or vibration of the switch assembly, or it may signal that the PA compartment air pressure
has dropped. In such a case, documentation of manometer readings will show whether the trou-
ble is caused by a failure of the air pressure switch or a decrease in the output of the air-handling
system.
Thermal Cycling
Most power grid tube manufacturers recommend a warm-up period between application of fila-
ment-on and plate-on commands. Most RF equipment manufacturers specify a warm-up period
of about five minutes. The minimum warm-up time is two minutes. Some RF generators include
a time delay relay to prevent the application of a plate-on command until a predetermined warm-up cycle is completed. Do not defeat these protective circuits. They are designed to extend PA
tube life. Most manufacturers also specify a recommended cool-down period between the application of the plate-off and filament-off commands. This cool-down, generally about 10 minutes, is
designed to prevent excessive temperatures on the PA tube surfaces when the cooling air is shut
off. Large vacuum tubes contain a significant mass of metal, which stores heat effectively.
Unless cooling air is maintained at the base of the tube and through the anode cooling fins,
excessive temperature rise can occur. Again, the result can be shortened tube life, or even cata-
strophic failure because of seal cracks caused by thermal stress.
Most tube manufacturers suggest that cooling air continue to be directed toward the tube base
and anode cooling fins after filament voltage has been removed to further cool the device.
Unfortunately, however, not all control circuits are configured to permit this mode of operation.
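The warm-up and cool-down delays described above amount to two simple interlock checks. The sketch below uses the five-minute and ten-minute figures cited in the text; the actual time-delay settings for a given transmitter are specified by its manufacturer:

```python
WARM_UP_S = 5 * 60     # filament-on to plate-on, per the text above
COOL_DOWN_S = 10 * 60  # plate-off to filament-off (and blower-off)

def plate_on_permitted(filament_on_elapsed_s):
    """A plate-on command is honored only after the warm-up delay."""
    return filament_on_elapsed_s >= WARM_UP_S

def filament_off_permitted(plate_off_elapsed_s):
    """Cooling air and filament stay on through the cool-down period."""
    return plate_off_elapsed_s >= COOL_DOWN_S

print(plate_on_permitted(120))      # 2 min after filament-on: too early
print(filament_off_permitted(660))  # 11 min after plate-off: permitted
```

A time delay relay in the control ladder performs exactly this gating in hardware, which is why the text warns against defeating it.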
Keep an accurate record of performance for each tube. Shorter than normal tube life could
point to a problem in the RF amplifier itself. The average life that may be expected from a power
grid tube is a function of many parameters, including:
• Filament voltage
• Ambient operating temperature
• RF power output
• Operating frequency
• Operating efficiency
The best estimate of life expectancy for a given system at a particular location comes from
on-site experience. As a general rule of thumb, however, at least 12 months of service can be
expected from most power tubes. Possible causes of short tube life include:
• Improper transmitter tuning.
• Inaccurate panel meters or external wattmeter, resulting in more demand from the tube than is
actually required.
• Poor filament voltage regulation.
• Insufficient cooling system airflow.
• Improper stage neutralization.
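A per-tube service record of the kind recommended above reduces to a simple life calculation. The dates and the 50 percent threshold below are hypothetical illustrations; the useful comparison is against your own site's history, with the 12-month rule of thumb as a floor:

```python
from datetime import date

def tube_life_months(installed, removed):
    """Approximate service life of a power tube in months."""
    return (removed - installed).days / 30.44

# Hypothetical tube history for one transmitter
history = [
    (date(2000, 1, 10), date(2001, 4, 2)),
    (date(2001, 4, 2), date(2002, 6, 15)),
]
lives = [tube_life_months(a, b) for a, b in history]
avg = sum(lives) / len(lives)
print(f"site average: {avg:.1f} months")
# A replacement tube falling well short of the site average may point
# at the amplifier (tuning, filament regulation, cooling) rather than
# the tube itself.
print("investigate amplifier" if min(lives) < 0.5 * avg else "within norms")
```
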
Filament Voltage
A true-reading RMS voltmeter is required to accurately measure filament voltage. Make the
measurement directly from the tube socket connections. Secure the voltmeter test leads to the
socket terminals and carefully route the cables outside the cabinet. Switch off the plate power
supply circuit breaker. Close all interlocks and apply a filament-on command. Do not apply the
high voltage during filament voltage tests. Serious equipment damage and/or injury to the main-
tenance engineer may result.
A true-reading RMS meter, instead of the more common average-responding meter, is
suggested because the true-reading meter can accurately measure a voltage despite an input
waveform that is not a pure sine wave. Some filament voltage regulators use silicon-controlled
rectifiers (SCRs) to regulate the output voltage. Do not put too much faith in the front-panel fil-
ament voltage meter. It is seldom a true-reading RMS device; most are average-responding
meters (unless otherwise specified).
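The error introduced by an average-responding meter is easy to quantify. The sketch below compares a true-RMS computation with an average-responding reading (rectified average scaled by the 1.11 sine-wave form factor) on a phase-controlled waveform of the kind an SCR regulator produces; the 90-degree firing angle is an arbitrary example:

```python
import math

def meter_readings(firing_angle_deg, peak=1.0, n=100_000):
    """Compare a true-RMS reading with an average-responding meter
    reading on one half-cycle of an SCR-chopped sine wave.
    Illustrative only; real meters have additional error sources."""
    cut = math.radians(firing_angle_deg)
    samples = []
    for i in range(n):
        theta = math.pi * i / n  # sample one half-cycle
        samples.append(peak * math.sin(theta) if theta >= cut else 0.0)
    true_rms = math.sqrt(sum(v * v for v in samples) / n)
    # Average-responding meters rectify, average, then scale by the
    # sine-wave form factor pi / (2 * sqrt(2)) ~ 1.1107.
    avg_responding = (sum(abs(v) for v in samples) / n) * (math.pi / (2 * math.sqrt(2)))
    return true_rms, avg_responding

rms, avg = meter_readings(90)  # hypothetical 90-degree firing angle
print(f"true RMS: {rms:.3f}  average-responding meter: {avg:.3f}")
```

For a pure sine wave the two readings agree; on the chopped waveform the average-responding meter under-reads substantially, which is why it cannot be trusted on an SCR-regulated filament supply.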
Long tube life requires filament voltage regulation. Many RF systems have regulators built
into the filament supply. Older units without such circuits often can be modified to provide a
well-regulated supply by adding a ferroresonant transformer or motor-driven auto-transformer to
the ac supply input. A tube whose filament voltage is allowed to vary along with the primary line
voltage will not achieve the life expectancy possible with a tightly regulated supply. This prob-
lem is particularly acute at mountain-top installations, where utility regulation is generally poor.
To extend tube life, some broadcast engineers leave the filaments on at all times, not shutting
down at sign-off. If the sign-off period is three hours or less, this practice can be beneficial. Fila-
ment voltage regulation is a must in such situations because the primary line voltages may vary
substantially from the carrier-on to carrier-off value. Do not leave voltage on the filaments of a
klystron for a period of more than two hours if no beam voltage is applied. The net rate of evapo-
ration of emissive material from the cathode surface of a klystron is greater without beam volt-
age. Subsequent condensation of the material on gun components may lead to voltage hold-off
problems and an increase in body current.
Figure 8.1.6 The effects of filament voltage management on the useful life of a thoriated tungsten
filament power tube. Note the dramatic increase in emission hours when filament voltage manage-
ment is practiced.
ment can be an excellent source for information about tuning a particular unit. Many times the
factory can provide pointers on how to simplify the tuning process, or what interaction of adjust-
ments may be expected. Whatever information is learned from such conversations, write it down.
ter, on the other hand, can significantly alter stage tuning. At high frequencies, normal tolerances
and variations in tube construction result in changes in element capacitance and inductance.
Likewise, replacing a component in the PA stage may cause tuning changes because of normal
device tolerances.
Stability is one of the primary objectives of transmitter tuning. Avoid tuning positions that do
not provide stable operation. Adjust for broad peaks or dips, as required. Tune so the system is
stable from a cold startup to normal operating temperature. Readings should not vary measurably
after the first minute of operation.
Adjust tuning not only for peak efficiency, but also for peak performance. These two ele-
ments of transmitter operations, unfortunately, do not always coincide. Trade-offs must some-
times be made in order to ensure proper operation of the system. For example, FM or TV aural
transmitter loading can be critical to wide system bandwidth and low synchronous AM. Loading
beyond the point required for peak efficiency must often be used to broaden cavity bandwidth.
Heavy loading lowers the PA plate impedance and cavity Q. A low Q also reduces RF circulating
currents in the cavity.
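The trade-off described above follows from the standard resonator relationship between loaded Q and bandwidth:

```latex
BW_{-3\,\mathrm{dB}} = \frac{f_0}{Q_L}
```

Heavier loading lowers the loaded Q, which widens the cavity bandwidth in direct proportion; the same reduction in Q lowers the circulating current and the associated losses in the cavity. The symbols here are the usual resonator quantities, not values from a specific transmitter.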
8.1.6 Bibliography
Gray, T. S.: Applied Electronics, Massachusetts Institute of Technology, 1954.
High Power Transmitting Tubes for Broadcasting and Research, Philips Technical Publication,
Eindhoven, The Netherlands, 1988.
Power Grid Tubes for Radio Broadcasting, Thomson-CSF publication #DTE-115, Thomson-
CSF, Dover, N.J., 1986.
Svet, Frank A.: “Factors Affecting On-Air Reliability of Solid State Transmitters,” Proceedings
of the SBE Broadcast Engineering Conference, Society of Broadcast Engineers, Indianapolis, Ind., October 1989.
The Care and Feeding of Power Grid Tubes, Varian EIMAC, San Carlos, Calif., 1984.
Whitaker, Jerry C.: Maintaining Electronic Systems, CRC Press, Boca Raton, Fla., 1991.
Whitaker, Jerry C.: RF Systems Handbook, CRC Press, Boca Raton, Fla., 2002.
Chapter
8.2
Preventing RF System Failures
8.2.1 Introduction
The reliability and operating costs over the lifetime of an RF system can be significantly
impacted by the effectiveness of the preventive maintenance program designed and implemented
by the engineering staff. When dealing with a critical-system unit such as a broadcast transmitter
that must operate on a daily basis, maintenance can have a major impact—either positive or neg-
ative—on downtime and bottom-line profitability of the facility. The sections of a transmitter
most vulnerable to failure are those exposed to the outside world: the ac-to-dc power supplies
and RF output stage. These circuits are subject to high energy surges from lightning and other
sources.
The reliability of any communications system may be compromised by an enabling event
phenomenon. An enabling event phenomenon is an event which, while not causing a failure by
itself, sets up (or enables) a second event that can lead to failure of the communications system.
This phenomenon is insidious because the enabling event is often not self-revealing. Examples
include:
• A warning system that has failed or been disabled for maintenance.
• One or more controls set incorrectly so that false readouts are provided for operations person-
nel.
• Redundant hardware that is out of service for maintenance.
• Remote metering that is out of calibration.
Figure 8.2.1 The measured performance of a single-channel FM antenna (tuned to 92.3 MHz).
The antenna provides a VSWR of less than 1.1:1 over a frequency range of 300 kHz.
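The VSWR figure quoted in the caption translates directly into reflection terms through the standard transmission-line relationships, as this short sketch shows:

```python
import math

def vswr_to_reflection(vswr):
    """Convert a VSWR value to reflection coefficient magnitude,
    return loss in dB, and the fraction of forward power reflected."""
    gamma = (vswr - 1) / (vswr + 1)
    return_loss_db = -20 * math.log10(gamma) if gamma > 0 else float("inf")
    reflected_fraction = gamma ** 2
    return gamma, return_loss_db, reflected_fraction

g, rl, pr = vswr_to_reflection(1.1)
print(f"|Gamma| = {g:.4f}, return loss = {rl:.1f} dB, "
      f"reflected power = {pr * 100:.2f}%")
```

For the 1.1:1 figure, the reflection coefficient is about 0.048, corresponding to roughly 26 dB return loss, with less than 0.25 percent of forward power reflected.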
• Check all current-carrying meter/overload shunt resistors (R1–R3) for signs of overheating.
• Carefully examine the wiring throughout the power supply for loose connections.
• Examine the condition of the filter capacitor series resistors (R4 and R5), if used, for indica-
tions of overheating. Excessive current through these resistors could point to a pending failure
in the associated filter capacitor.
• Examine the condition of the bleeder resistors (R6–R8). A failure in one of the bleeder resis-
tors could result in a potentially dangerous situation for maintenance personnel by leaving the
main power supply filter capacitor (C2) charged after the removal of ac input power.
• Examine the plate voltage meter multiplier assembly (A1) for signs of resistor overheating.
Replace any resistors that are discolored with the factory-specified type.
When changing components in the transmitter high-voltage power supply, be certain to use
parts that meet with the approval of the manufacturer. Do not settle for a close match of a
replacement part. Use the exact replacement part. This ensures that the component will work as
intended and will fit in the space provided in the cabinet.
Overload Sensor
The plate supply overload sensor in some transmitters is arranged as shown in Figure 8.2.2. An
adjustable resistor—either a fixed resistor with a movable tap or a potentiometer—is used to set
the sensitivity of the plate overload relay. Check potentiometer-type adjustments periodically.
Fixed-resistor-type adjustments rarely require additional attention. Most manufacturers have a
chart or mathematical formula that may be used to determine the proper setting of the adjustment
resistor (R9) by measuring the voltage across the overload relay coil (K1) and observing the
operating plate current value. Clean the overload relay contacts periodically to ensure proper
operation. If mechanical problems are encountered with a relay, replace it.
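The manufacturer's chart or formula is authoritative for setting R9; the sketch below shows only the kind of proportionality such a formula typically embodies. It assumes the voltage across the relay coil (K1) scales linearly with plate current for a given R9 setting; all numeric values are hypothetical:

```python
def trip_current(i_operating, v_coil_measured, v_pull_in):
    """Estimate the plate current at which the overload relay trips,
    assuming coil voltage is proportional to plate current for a
    fixed R9 setting. Illustrative only: use the manufacturer's
    chart or formula for a real transmitter."""
    return i_operating * (v_pull_in / v_coil_measured)

# Hypothetical numbers: 2.0 A operating plate current develops 1.6 V
# across K1, and the relay pulls in at 2.4 V.
print(f"relay trips at about {trip_current(2.0, 1.6, 2.4):.2f} A")
```

This mirrors the procedure in the text: measure the coil voltage at a known operating plate current, then scale to find (or set) the trip point.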
Transmitter control logic for a high power UHF system is usually configured for two states of
operation:
• An operational level, which requires all the “life-support” systems to be present before the
HV command is enabled.
• An overload level, which removes HV when one or more fault conditions occur.
Inspect the logic ladder for correct operation at least once a month. At longer intervals, per-
haps annually, check the speed of the trip circuits. (A storage oscilloscope is useful for this mea-
surement.) Most klystrons require an HV removal time of less than 100 ms from the occurrence
of an overload. If the trip time is longer, damage may result to the klystron. Pay particular atten-
tion to the body-current overload circuits. Occasionally check the body current without applied
drive to ensure that the dc value is stable. A relatively small increase in dc body current can lead
to overheating problems.
The RF arc detectors in a UHF transmitter also require periodic monitoring. External cavity
klystrons generally have one detector in each of the third and fourth cavities. Integral devices use
one detector at the output window. A number of factors can cause RF arcing, including:
• Overdriving the klystron
• Mistuning the cavities
• Poor cavity fit (external type only)
• Undercoupling of the output
• High VSWR
Regardless of the cause, arcing can destroy the vacuum seal if drive and/or HV are not
removed quickly. A lamp is usually included with each arc detector photocell for test purposes.
Figure 8.2.3 A protection circuit using relays for utility company ac phase-loss protection.
any failure inside the transmitter that might result in a single-phasing condition is taken into
account. Because 3-phase motors are particularly sensitive to single-phasing faults, the relay
interlock is tied into the filament circuit logic ladder. For AM transmitters utilizing PWM
schemes, the input of the phase loss protector is connected to the load side of the plate circuit
breaker. The phase-loss protector shown in the figure includes a sensitivity adjustment for vari-
ous nominal line voltages. The unit is small and relatively inexpensive. If your transmitter does
not have such a protection device, consider installing one. Contact the factory service depart-
ment for recommendations on the connection methods that should be used.
dirty air passes over the motor, the accumulation of dust and dirt must be blown out of the
device before the debris impairs cooling.
• Follow the manufacturer's recommendations for suggested frequency and type of lubrication.
Bearings and other moving parts may require periodic lubrication. Carefully follow any spe-
cial instructions on operation or maintenance of the cooling equipment.
• Inspect motor-mounting bolts periodically. Even well-balanced equipment experiences some
vibration, which can cause bolts to loosen over time.
• Inspect air filters weekly and replace or clean them as necessary. Replacement filters should
meet original specifications.
• Clean dampers and all ducting to avoid airflow restrictions. Lubricate movable and mechani-
cal linkages in dampers and other devices as recommended. Check actuating solenoids and
electromechanical components for proper operation. Movement of air throughout the trans-
mitter causes static electrical charges to develop. Static charges can result in a buildup of dust
and dirt in duct-work, dampers, and other components of the system. Filters should remove
the dust before it gets into the system, but no filter traps every dust particle.
• Check thermal sensors and temperature system control devices for proper operation.
Figure 8.2.5 A typical heating and cooling arrangement for a 20 kW FM transmitter installation.
Ducting of PA exhaust air should be arranged so that it offers minimum resistance to airflow.
controlling the room temperature to between 60°F and 70°F, tube and component life will be
improved substantially.
Case 1
A fully automatic building ventilation system (Figure 8.2.6) was installed to maintain room tem-
perature at 20°C during the fall, winter, and spring. During the summer, however, ambient room
temperature would increase to as much as 60°C. A field survey showed that the only building
exhaust route was through the transmitter. Therefore, air entering the room was heated by test
equipment, people, solar radiation on the building, and radiation from the transmitter itself.
Figure 8.2.6 Case study in which excessive summertime heating was eliminated through the addition of a 1 hp exhaust blower to the building. (Courtesy of Harris.)
Case 2
A simple remote installation was constructed with a heat recirculating feature for the winter
(Figure 8.2.7). Outside supply air was drawn by the transmitter cooling system blowers through
a bank of air filters, and hot air was exhausted through the roof. A small blower and damper
were installed near the roof exit point. The damper allowed hot exhaust air to blow back into
the room through a tee duct during winter months. For summer operation, the roof damper was
switched open and the room damper closed. For winter operation, the arrangement was
reversed. The facility, however, experienced short tube life during winter operation, even
though the ambient room temperature during winter was not excessive.
The solution involved moving the roof damper 12 ft. down to just above the tee. This
eliminated the stagnant “air cushion” above the bottom heating duct damper and significantly
improved airflow in the region. Cavity back pressure was, therefore, reduced. With this
relatively simple modification, the problem of short tube life disappeared.
Figure 8.2.7 Case study in which excessive back-pressure to the PA cavity occurred during winter months, when the rooftop damper was closed. The problem was eliminated by repositioning the damper as shown. (Courtesy of Harris.)
Case 3
An inconsistency regarding test data was discovered within a transmitter manufacturer's plant.
Units tested in the engineering lab typically ran cooler than those at the manufacturing test facil-
ity. Figure 8.2.8 shows the test station difference, a 4-foot exhaust stack that was used in the engi-
neering lab. The addition of the stack increased airflow by up to 20 percent because of reduced
air turbulence at the output port, resulting in a 20°C decrease in tube temperature.
These examples point out how easily a cooling problem can be caused during HVAC system
design. All power delivered to the transmitter is either converted to RF energy and sent to the
water and ethylene glycol mixture. Do not exceed a 50:50 mix by volume. The heat transfer of
the mixture is lower than that of pure water, requiring the flow to be increased, typically by 20–
25 percent. Greater coolant flow means higher pressure and suggests close observation of the
cooling system after adding the glycol. Allow the system to heat and cool several times. Then
check all plumbing fittings for tightness.
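As a rough illustration of why a 20–25 percent flow increase is needed, the heat carried away by the coolant is proportional to the product of density, specific heat, and volumetric flow. The following sketch estimates the flow multiplier required to remove the same heat load after switching to a 50:50 mix; the property values are illustrative assumptions, not vendor data, so consult the equipment manufacturer for actual figures.

```python
# Estimate the coolant flow increase needed when switching from pure
# water to a 50:50 water/ethylene glycol mix, assuming the same heat
# load and the same coolant temperature rise across the tube.
# Property values below are illustrative assumptions, not vendor data.

RHO_CP_WATER = 1000.0 * 4186.0       # density (kg/m^3) x specific heat (J/kg-K)
RHO_CP_GLYCOL_MIX = 1040.0 * 3300.0  # assumed for a 50:50 mix at operating temp

def flow_scale_factor(rho_cp_old, rho_cp_new):
    """Volumetric flow multiplier that keeps Q = rho * cp * flow * dT constant."""
    return rho_cp_old / rho_cp_new

scale = flow_scale_factor(RHO_CP_WATER, RHO_CP_GLYCOL_MIX)
print(f"Increase coolant flow by about {100 * (scale - 1):.0f} percent")
```

With these assumed properties the result lands in the 20–25 percent range quoted above; reduced film heat transfer from the more viscous mixture pushes the practical figure toward the high end.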
The action of heat and air on ethylene glycol causes the formation of acidic products. The
acidity of the coolant can be checked with litmus paper. Buffers can and should be added with
the glycol mixture. Buffers are alkaline salts that neutralize acid forms and prevent corrosion.
Because they are ionizable chemical salts, the buffers cause conductivity of the coolant to
increase. Measure the collector-to-ground resistance periodically. Coolant conductivity is typi-
cally acceptable if the resistance caused by the coolant is greater than 20 times the resistance of
the body-metering circuitry.
Experience has shown that a good way to ensure good coolant condition is to drain, flush, and
recharge the system every spring. The equipment manufacturer can provide advice on how this
procedure should be carried out and can recommend types of glycol to use. Maintain unrestricted
airflow over the heat exchanger coils and follow the manufacturer's instructions on pump and
motor maintenance.
8.2.7 Bibliography
Gray, T. S.: Applied Electronics, Massachusetts Institute of Technology, 1954.
High Power Transmitting Tubes for Broadcasting and Research, Philips Technical Publication,
Eindhoven, The Netherlands, 1988.
Power Grid Tubes for Radio Broadcasting, Thomson-CSF publication #DTE-115, Thomson-
CSF, Dover, N.J., 1986.
Svet, Frank A.: “Factors Affecting On-Air Reliability of Solid State Transmitters,” Proceedings
of the SBE Broadcast Engineering Conference, Society of Broadcast Engineers, Indianapo-
lis, IN, October 1989.
The Care and Feeding of Power Grid Tubes, Varian EIMAC, San Carlos, Calif., 1984.
Whitaker, Jerry C.: Maintaining Electronic Systems, CRC Press, Boca Raton, Fla., 1991.
Whitaker, Jerry C.: RF Systems Handbook, CRC Press, Boca Raton, Fla., 2002.
Chapter
8.3
Troubleshooting RF Equipment
8.3.1 Introduction
Problems will occur from time to time with any piece of equipment. The best way to prepare for
a transmitter failure is to know the equipment well. Study the transmitter design and layout.
Know the schematic diagram and what each component does. Examine the history of the trans-
mitter by reviewing old maintenance logs to see what components have failed in the past.
A failure in one semiconductor will often cause a failure in another, so check all semiconductors associated with one found to be
defective.
In high power transmitters, look for signs of arcing in the RF compartments. Loose connec-
tions and clamps can cause failures that are hard to locate. Never rush through a troubleshooting
job. A thorough knowledge of the theory of operation and history of the transmitter is a great aid
in locating problems in the RF sections. Do not overlook the possibility of tube failure when
troubleshooting a transmitter. Tubes can fail in unusual ways; substitution may be the only prac-
tical test for power tubes used in modern transmitters.
Study the control ladder of the transmitter to identify interlock or fail-safe system problems.
Most newer transmitters have troubleshooting aids built in to help locate problems in the control
ladder. Older transmitters, however, often require a moderate amount of investigation before
repairs can be accomplished.
• If the overload is based in the high voltage dc power supply, shut down the transmitter and
check the schematic diagram for the location in the circuit of the plate overload sensor relay
(or comparator circuit). This will indicate within what limits component checking will be
required. The plate overload sensor is usually found in one of two locations: the PA cathode
dc return, or the high voltage power supply negative connection to ground. Transmitters using
a cathode overload sensor generally have a separate high voltage dc overload sensor in the
plate power supply.
• A sensor in the cathode circuit will substantially reduce the area of component checking
required. A plate overload with no excitation in such an arrangement would almost certainly
indicate a PA tube failure, because of either an inter-electrode short circuit or a loss of vac-
uum. Do not operate the transmitter when the PA tube is out of its socket. This is not an
acceptable method of determining whether a problem exists with the PA tube. Substitute a
spare tube instead. Operating a transmitter with the PA tube removed can result in damage to
other tubes in the transmitter when the filaments are on, and damage to the driver tubes and
driver output/PA input circuit components when the high voltage is on.
• If circuit analysis indicates a problem in the high voltage power supply itself, use an ohmme-
ter to check for short circuits. Remove all power from the transmitter and discharge all filter
capacitors before beginning any troubleshooting work inside the unit. When checking for
short circuits with an ohmmeter, take into account the effects that bleeder resistors and high
voltage meter multiplier assemblies can have on resistance readings. Most access panels on
transmitters use an interlock system that will remove the high voltage and ground the high
voltage supplies when a panel is removed. For the purposes of ohmmeter tests, these inter-
locks may have to be temporarily defeated. Never defeat the interlocks unless all ac power has
been removed from the transmitter and all filter capacitors have been discharged using the
grounding stick supplied with the transmitter.
Following the preliminary ohmmeter tests, check the following components in the dc plate
supply:
• Oil-filled capacitors for signs of overheating or leakage.
• Feed-through capacitors for signs of arcing or other damage.
• The dc plate blocking capacitor for indications of insulation breakdown or arcing.
• All transformers and chokes for signs of overheating or winding failure.
• Transient suppression devices for indications of overheating or failure.
• Bleeder resistors for signs of overheating.
• Any surge-limiting resistors placed in series with filter capacitors in the power supply for
indications of overheating or failure. A series resistor that shows signs of overheating can be
an indication that the associated filter capacitor has failed.
If the plate overload trip-off occurs only at elevated voltage levels, ohmmeter checks will not
reveal the cause of the problem. It may be necessary, therefore, to troubleshoot the problem using
the process of elimination.
supply ripple that may damage other components in the transmitter. Consult the manufacturer to
be sure.
Perform any troubleshooting work on a transmitter with extreme care. Transmitter high volt-
ages can be lethal. Work inside the transmitter only after all ac power has been removed and after
all capacitors have been discharged using the grounding stick provided with the transmitter.
Remove primary power from the unit by tripping the appropriate power distribution circuit
breakers in the transmitter building. Do not rely on internal contactors or SCRs to remove all
dangerous ac. Do not defeat protective interlock circuits. Although defeating an access panel
interlock switch may save work time, the consequences can be tragic.
fault indicator is not lit, the load is not likely the cause of the problem. A definitive check of the
load can be made by switching the transmitter output to a dummy load and bringing up the high
voltage. The PA tube may be checked by substituting one of known quality. When the tube is
changed, carefully inspect the contact fingerstock for signs of overheating or arcing. Be careful
to protect the socket from damage when removing and inserting the PA tube. Do not change the
tube unless there is good reason to believe that it may be defective.
If problems with the PA stage persist, examine the grid circuit of the tube. Figure 8.3.2 shows
the input stage of a grounded screen, FM transmitter. A short circuit in any of the capacitors in
the grid circuit (C1–C5) will effectively ground the PA grid. This will cause a dramatic increase
in plate current, because the PA bias supply will be shorted to ground along with the RF signal
from the IPA stage.
The process of finding a defective capacitor in the grid circuit begins with a visual inspection
of the suspected components. Look for signs of discoloration because of overheating, loose con-
nections, and evidence of package rupture. The voltage and current levels found in a transmitter
PA stage are often sufficient to rupture a capacitor if an internal short circuit occurs. Check for
component overheating right after shutting the transmitter down. (As mentioned previously,
remove all ac power and discharge all capacitors first.) A defective capacitor will often overheat.
Such heating can also occur, however, because of improper tuning of the PA or IPA stage, or a
defective component elsewhere in the circuit.
Before replacing any components, study the transmitter schematic diagram to determine
which parts in the circuit could cause the failure condition that exists. By knowing how the trans-
mitter works, many hours can be saved in checking components that an examination of the fault
condition and the transmitter design would show to be an unlikely cause of the problem.
Check blocking capacitors C6 and C7. A breakdown in either component would have serious
consequences. The PA tube would be driven into full conduction, and could arc internally. The
working voltages of capacitors C1–C5 could also be exceeded, damaging one or more of the
components. Because most of the wiring in the grid circuit of a PA stage consists of wide metal
straps (required because of the skin effect), it is not possible to view stress points in the circuit to
narrow the scope of the troubleshooting work. Areas of the system that are interconnected using
components that have low power dissipation capabilities, however, should be closely examined.
For example, the grid bias decoupling components shown in Figure 8.3.2 (R1, L3, and C5)
include a low wattage (2 W) resistor and a small RF choke. Because of the limited power dissipa-
tion ability of these two devices, a failure in decoupling capacitor C5 would likely cause R1 and
possibly L3 to burn out. The failure of C5 in a short circuit would pull the PA grid to near ground
potential, causing the plate current to increase and trip off the transmitter high voltage. Depend-
ing on the sensitivity and speed of the plate overload sensor, L3 could be damaged or destroyed
by the increased current it would carry to C5, and therefore, to ground.
If L3 were able to survive the surge currents that resulted in PA plate overload, the choke
would continue to keep the plate supply off until C5 was replaced. Bias supply resistor R1, how-
ever, would likely burn out because the bias power supply is generally switched on with the
transmitter filament supply. Therefore, unless the PA bias power supply line fuse opened, R1
would overheat and probably fail.
Because of the close spacing of components in the input circuit of a PA stage, carefully check
for signs of arcing between components or sections of the tube socket. Keep all components and
the socket itself clean at all times. Inspect all interconnecting wiring for signs of damage, arcing
to ground, or loose connections.
Figure 8.3.3 An FM transmitter PA output stage built around a 1/4-wavelength cavity with capaci-
tive coupling to the load.
Figure 8.3.4 The equivalent electrical circuit of the PA stage shown in Figure 8.3.3.
sample lines provide two low-power RF outputs for a modulation monitor or other test instru-
ments. Neutralization inductors L3 and L4 consist of adjustable grounding bars on the screen
grid ring assembly. The combination of L2 and C6 prevents spurious oscillations within the cav-
ity.
Figure 8.3.4 shows the electrical equivalent of the PA cavity schematic diagram. The 1/4-
wavelength cavity acts as the resonant tank for the PA. Coarse tuning of the cavity is accom-
plished by adjustment of the shorting plane. Fine tuning is performed by the PA tuning control,
which acts as a variable capacitor to bring the cavity into resonance. The PA loading control con-
sists of a variable capacitor that matches the cavity to the load. There is one value of plate load-
ing that will yield optimum output power, efficiency, and PA tube dissipation. This value is
dictated by the cavity design and values of the various dc and RF voltages and currents supplied
to the stage.
Figure 8.3.5 The mechanical equivalent of the PA stage shown in Figure 8.3.4.
The logic of a PA stage often disappears when the maintenance engineer is confronted with
the actual physical design of the system. As illustrated in Figure 8.3.5, many of the components
take on an unfamiliar form. Blocking capacitor C4 is constructed of a roll of Kapton insulating
material sandwiched between two circular sections of aluminum. (Kapton is a registered trade-
mark of DuPont.) PA plate tuning control C5 consists of an aluminum plate of large surface area
that can be moved in or out of the cavity to reach resonance. PA loading control C7 is constructed
much the same as the PA tuning assembly, with a large-area paddle feeding the harmonic filter,
located external to the cavity. The loading paddle may be moved toward the PA tube or away
from it to achieve the required loading. The L2–C6 damper assembly actually consists of a 50 Ω
noninductive resistor mounted on the side of the cavity wall. Component L2 is formed by the
inductance of the connecting strap between the plate tuning paddle and the resistor. Component
C6 is the equivalent stray capacitance between the resistor and the surrounding cavity box.
From this example it can be seen that many of the troubleshooting techniques that work well
with low-frequency RF and dc do not necessarily apply in cavity stages. It is, therefore, critically
important to understand how the system operates and what each component does. Because many
of the cavity components (primarily inductors and capacitors) are mechanical elements more
than electrical ones, troubleshooting a cavity stage generally focuses on checking the mechanical
integrity of the box.
Most failures resulting from problems within a cavity are the result of poor mechanical con-
nections. All screws and connections must be kept tight. Every nut and bolt in a PA cavity was
included for a reason. There are no insignificant screws that do not need to be tight. Do not
overtighten them, however; stripped threads and broken component connection lugs will only
cause additional grief.
When a problem occurs in a PA cavity, it is usually difficult to determine which individual
element (neutralization inductor, plate tuning capacitor, loading capacitor, etc.) is defective from
the symptoms the failure will display. A fault within the cavity is usually a catastrophic event that
will take the transmitter off the air. It is often impossible to bring the transmitter up for even a
few seconds to assess the fault situation. The only way to get at the problem is to shut the trans-
mitter down and take a look inside.
Closely inspect every connection, using a trouble light and magnifying glass. Look for signs
of arcing or discoloration of components or metal connections. Check the mechanical integrity
of each element in the circuit. Be certain the tuning and loading adjustments are solid, without
excessive mechanical play. Look for signs of change in the cavity. Check areas of the cavity that
may not seem like vital parts of the output stage, such as the maintenance access door finger-
stock and screws. Any failure in the integrity of the cavity, whether at the base of the PA tube or
on part of the access door, will cause high circulating currents to develop and may prevent proper
operation of the stage. If a problem is found that involves damaged fingerstock, replace the
affected sections. Failure to do so will likely result in future problems because of the high cur-
rents that can develop at any discontinuity in the cavity inner or outer conductor.
5. If step 4 shows the VSWR overload is real, and not the result of faulty control circuitry,
check all connections in the output and coupling sections of the final stage. Look for signs
of arcing or loose hardware, particularly on any movable tuning components. Inspect high-
voltage capacitors for signs of overheating, which might indicate failure; and check coils
for signs of dust build-up, which might cause a flash-over. In some transmitters, VSWR
overloads can be caused by improper final stage tuning or loading. Consult the equipment
instruction book for this possibility. Also, certain transmitters include glass-enclosed
spark-gap lightning protection devices (gas-gaps) that can be disconnected for testing.
6. If VSWR overload conditions resulting from problems external to the transmitter are expe-
rienced at an AM radio station, check the following items:
• Component dielectric breakdown. If a normal (near zero) reflected power reading is indicated
at the transmitter under carrier-only conditions, but VSWR overloads occur during modula-
tion, component dielectric breakdown may be the problem. A voltage breakdown could be
occurring within one of the capacitors or inductors at the antenna tuning unit (ATU) or pha-
sor. Check all components for signs of damage. Clean insulators as required. Carefully check
any open-air coils or transformers for dust buildup or loose connections.
• Narrowband antenna. If the overload occurs with any modulating frequency, the probable
cause of the fault is dielectric breakdown. If, on the other hand, the overload seems particu-
larly sensitive to high-frequency modulation, then narrow antenna bandwidth is indicated.
Note the action of the transmitter forward/reflected power meter. An upward deflection of
reflected power with modulation is a symptom of limited antenna bandwidth. The greater the
upward deflection, the more limited the bandwidth. If these symptoms are observed, conduct
an antenna impedance sweep of the system.
• Static buildup. Tower static buildup is characterized by a gradual increase in reflected power
as shown on the transmitter front panel. The static buildup, which usually occurs prior to or
during thunderstorms and other bad weather conditions, continues until the tower base ball
gaps arc-over and neutralize the charge. The reflected power reading then falls to zero. A
static drain choke at the tower base to ground will generally prevent this problem.
• Guy wire arc-over. Static buildup on guy wires is similar to a nearby lightning strike in that no
charge is registered on the reflected power meter during the buildup of potential. Instead, the
static charge builds on the guys until it is of sufficient potential to arc across the insulators to
the tower. The charge is then removed by the static drain choke and/or ball gaps at the base of
the tower. Static buildup on guy wires can be prevented by placing RF chokes across the insu-
lators, or by using non-metallic guys. Arcing across the insulators may also be reduced or
eliminated by regular cleaning.
• Relay logic
Figure 8.3.7 A typical transmitter interlock circuit. Terminals A and B are test points used for trou-
bleshooting the system in the event of an interlock system failure.
to full value. If it does and the turn-on problem persists, the failure likely involves one or more of
the gating cards.
When troubleshooting a step-start fault in a transmitter employing the dual contactor arrange-
ment, begin with a close inspection of all contact points on both contactors. Pay careful attention
to the auxiliary relay contacts of the start contactor. If the contacts fail to properly close, the full
load of the high-voltage power supply will be carried through the resistors and start contactor.
These devices are normally sized only for intermittent duty. They are not intended to carry the
full load current for any length of time. Look for signs of arcing or overheating of the contact
pairs and current-carrying connector bars. Check the current-limiting resistors for excessive dis-
sipation and continuity.
the primary power control is adjusted to match the desired RF output from the transmitter. If one
of the high-voltage rectifier stacks of this system failed in a short circuit condition, the output
voltage (and RF output) would fall, causing the thyristor circuit to increase the conduction period
of the SCR pairs. Depending on the series resistance of the failed rectifier stack and the rating of
the primary side circuit breaker, the breaker may or may not trip. Remember that the circuit
breaker was chosen to allow operation at full transmitter power with the necessary headroom to
prevent random tripping. The primary power system, therefore, can dissipate a significant
amount of heat under reduced power conditions, such as those that would be experienced with a
drop in the high-voltage supply output. The difference between the maximum designed power
output of the supply (and, therefore, the transmitter) and the failure-induced power demand of the
system can be dissipated as heat without tripping the main breaker.
Operation under such fault conditions, even for 20 seconds or less, can cause considerable
damage to power-supply components, such as the power transformer, rectifier stack, thyristors,
or system wiring. Damage can range from additional component failures to a fire in the affected
section of the transmitter.
Chapter
8.4
Testing Coaxial Transmission Line
8.4.1 Introduction
Antenna and transmission line performance measurements are often neglected until a problem
occurs. Many facilities do not have the equipment necessary to perform useful measurements.
Experience is essential, because much of the knowledge obtained from such tests is derived by
interpreting the raw data. In general, transmission systems measurements should be made:
• Before and during installation of the antenna and transmission line. Barring unforeseen oper-
ational problems, this will be the only time that the antenna is at ground level. Ready access
to the antenna allows a variety of key measurements to be performed without climbing the
tower.
• During system troubleshooting when attempting to locate a problem. Following installation,
these measurements usually concern the transmission line itself.
• On a regular basis, to ensure that the transmission line and antenna system are operating
normally. A quick sweep of the line with a network analyzer and a time-domain reflectometer
(TDR) may disclose developing problems before they can cause a transmission line failure.
Ideally, the measurements should be used to confirm a good impedance match, which can be
interpreted as minimum VSWR or maximum return loss. Return loss is related to the level of
signal that is returned to the input connector after the signal has been applied to the transmission
line and reflected from the load. A line perfectly matched to the load would transfer all energy to
the load. No energy would be returned, resulting in an infinite return loss, or an ideal VSWR of
1:1. The benefits of matching the transmission line system for minimum VSWR include:
• Most efficient power transfer from the transmitter to the antenna system.
• Best performance with regard to overall bandwidth.
• Improved transmitter stability, with tuning that follows accepted procedures more closely.
• Minimum transmitted signal distortions.
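The relationship between VSWR and return loss described above can be computed directly. The short sketch below uses the standard conversion formulas (it is not tied to any particular analyzer) to turn a measured VSWR into a reflection coefficient magnitude and a return loss in dB:

```python
import math

def reflection_coefficient(vswr):
    """Magnitude of the reflection coefficient for a given VSWR."""
    return (vswr - 1.0) / (vswr + 1.0)

def return_loss_db(vswr):
    """Return loss in dB; approaches infinity as VSWR approaches the ideal 1:1."""
    gamma = reflection_coefficient(vswr)
    return -20.0 * math.log10(gamma) if gamma > 0 else float("inf")

for vswr in (1.05, 1.1, 1.5, 2.0):
    print(f"VSWR {vswr}:1  ->  return loss {return_loss_db(vswr):.1f} dB")
```

A perfectly matched line (VSWR 1:1) returns no energy, giving infinite return loss; a VSWR of 2:1 corresponds to a return loss of only about 9.5 dB.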
The network analyzer allows the maintenance engineer to perform a number of critical mea-
surements in a short period of time. The result is an antenna system that is tuned as close as prac-
Figure 8.4.1 Network analyzer plot of an FM broadcast antenna operating at 94.3 MHz.
tical for uniform impedance across the operating bandwidth. A well-matched system increases
operating efficiency by properly coupling the signal from the transmitter to the antenna. Figure
8.4.1 shows a network analyzer plot of an FM broadcast antenna.
unit at the top of the transmission line, measurements will accurately show antenna characteris-
tics without effects of the transmission line. Results are plotted on an X-Y plotter or defined and
stored for later printout.
One particularly desirable feature of a network analyzer is its capability to display either a
Smith chart or a more simple Cartesian X-Y presentation of return loss-versus-frequency. (Some
units may provide both displays simultaneously.) The Smith chart is useful, but interpretation can
be confusing. The Cartesian presentation, while technically not better, is usually easier to inter-
pret.
8.4.2a Calibration
Calibration methods vary for different instruments. For one method, a short circuit is placed
across the network analyzer terminals, producing a return loss of zero (the short reflects all sig-
nals applied to it). The instrument is then checked with a known termination. This step often
causes the inexperienced technician to go astray. The termination should have known character-
istics and full documentation. It is acceptable procedure to check the equipment by examining
more than one termination, where the operator knows the characteristics of the devices used. Sig-
nificant changes from the known characteristics suggest that additional tests should be per-
formed. After the test unit is operating correctly, check to ensure that the adapters and connectors
to be used in the measurement do not introduce errors of their own. An accepted practice for this
task involves the use of a piece of transmission line of known quality. A 20-ft section of line
should sufficiently separate the input and output connectors. The results of any adjustment at
either end will be noticeable on the analyzer. Also, the length allows adjustments to be made
fairly easily. The section of line used should include tuning stubs or tuners to permit the
connectors to be matched to the line across the operating channel.
The facility's dummy load must next be matched to the transmission line. Do not assume that
the dummy load is an appropriate termination by itself, or a station reference. The primary func-
tion of a dummy load is to dissipate power in a manner that allows easy measurement. It is nei-
ther a calibration standard nor a reference. Experience proves it is necessary to match dummy
and transmission line sections to maintain a good reference. The load is matched by looking into
the transmission line at the patch panel (or other appropriate point). Measurements are then taken
at locations progressively closer to the transmitter, until the last measurement is made at the out-
put connection of the transmitter. After the dummy load is checked, it serves as a termination.
that point on the line. This introduces a reflection into the line, the magnitude of which is a func-
tion of the size of the ring. The phase of the reflection is a function of the location of the ring
along the length of the center conductor.
Installing the ring is usually a cut-and-try process. It may be necessary to open, adjust, close,
and test the line several times. However, after a few cuts, the effect of the ring will become appar-
ent. It is not uncommon to need more than one ring on a given piece of transmission line for a
good match over the required bandwidth. When a match is obtained, the ring normally is sol-
dered into place.
Impedance-matching hardware also is available for use with waveguide. A piece of material is
placed into the waveguide and its location is adjusted to create the desired mismatch. For any
type of line, the goal is to create a mismatch equal in magnitude, but opposite in phase, to the
existing undesirable mismatch. The overall result is a minimum mismatch and minimum VSWR.
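The cancellation described above can be modeled by treating each discontinuity as a complex reflection coefficient; the ring (or waveguide element) is adjusted until its reflection is equal in magnitude and opposite in phase to the existing one. The values below are purely illustrative, and the first-order sketch ignores re-reflections between the two discontinuities:

```python
import cmath

# Existing undesirable mismatch, modeled as a complex reflection
# coefficient (illustrative value: |gamma| = 0.05 at 60 degrees).
existing = 0.05 * cmath.exp(1j * cmath.pi / 3)

# The matching element is sized and positioned to produce an
# equal-magnitude, opposite-phase (180-degree shifted) reflection.
correction = 0.05 * cmath.exp(1j * (cmath.pi / 3 + cmath.pi))

residual = existing + correction  # first-order sum; re-reflections ignored
vswr = (1 + abs(residual)) / (1 - abs(residual))
print(f"residual |gamma| = {abs(residual):.4f}, VSWR = {vswr:.3f}:1")
```

When the two reflections cancel, the residual reflection coefficient goes to zero and the VSWR approaches the ideal 1:1.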
A tuner alters the line characteristic impedance at a given point by changing the distance
between the center and outer conductors by effectively moving the outer conductor. In reality, it
increases the capacitance between the center and outer conductors to produce a change in the
impedance and introduce a reflection at that point.
Z0 = (138 / √k) × log10 (D/d)    (8.4.1)
Where
k = the dielectric constant of the insulating material
D = the inside diameter of the outer conductor
d = the outside diameter of the center conductor
In 2-conductor transmission lines, the surge impedance is determined by
Z0 = (276 / √k) × log10 (2S/s)    (8.4.2)
Where:
k = the dielectric constant of the insulating material
S = the spacing between the centers of the two conductors
s = the diameter of the conductors
(Diameters and spacings can be measured in either centimeters or inches.)
If the inductance and capacitance per foot (or meter) of the transmission lines are given by the
manufacturer, the surge impedance can be calculated by
Z0 = √(L/C)    (8.4.3)
Where:
L = quoted inductance per unit length
C = quoted capacitance per unit length
Note that the actual length of the line is not a factor in any of these formulas. The surge
impedance of the line is independent of cable length, and wholly dependent on cable type.
One common variety of 1/2-inch foam dielectric coaxial cable has an inductance per foot of
0.058 microhenries (μH) and a capacitance per foot of 23.1 picofarads (pF). Its surge impedance
is therefore
Z0 = √(0.058 × 10⁻⁶ / 23.1 × 10⁻¹²) = 50 Ω    (8.4.4)
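The three surge impedance formulas can be collected into a short calculation aid. The functions below simply restate Equations (8.4.1) through (8.4.3), with the 1/2-inch foam-dielectric cable example worked at the end:

```python
import math

def coax_z0(k, D, d):
    """Characteristic impedance of coaxial line, Eq. (8.4.1).
    k = dielectric constant, D = inner diameter of outer conductor,
    d = outer diameter of center conductor (same units)."""
    return (138.0 / math.sqrt(k)) * math.log10(D / d)

def two_wire_z0(k, S, s):
    """Characteristic impedance of a two-conductor line, Eq. (8.4.2).
    S = center-to-center spacing, s = conductor diameter (same units)."""
    return (276.0 / math.sqrt(k)) * math.log10(2.0 * S / s)

def z0_from_lc(L, C):
    """Surge impedance from per-unit-length inductance and capacitance, Eq. (8.4.3)."""
    return math.sqrt(L / C)

# The 1/2-inch foam dielectric example from Eq. (8.4.4):
print(f"{z0_from_lc(0.058e-6, 23.1e-12):.1f} ohms")  # approximately 50 ohms
```

Note that, as stated in the text, line length appears nowhere in these calculations.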
Multiple Mismatches
Unfortunately, if there is more than one discontinuity in the cable system, it becomes more diffi-
cult to determine the impedance of the second and subsequent mismatches.
Some TDR manufacturers have attempted to provide correction factors to account for multi-
ple mismatch errors, but they may not work for all situations. In general, the larger the first mis-
match, the greater the error in calculating the second mismatch. When the values of the
mismatched cable sections and/or loads are known, the technician can work backward and
calculate what the values should have been. When the cable values and/or loads are unknown, only the
first mismatch can be calculated accurately.
8.4.4 Bibliography
“Cable Testing with Time Domain Reflectometry,” Application Note 67, Hewlett Packard, Palo
Alto, Calif., 1988.
Kennedy, George: Electronic Communication Systems, 3rd ed., McGraw-Hill, New York, N.Y.,
1985.
Kolbert, Don: “Testing Coaxial Lines,” Broadcast Engineering, Intertec Publishing, Overland
Park, Kan., November 1991.
“Improving Time Domain Network Analysis Measurements,” Application Note 62, Hewlett
Packard, Palo Alto, Calif., 1988.
Strickland, James A.: “Time Domain Reflectometry Measurements,” Measurement Concepts
Series, Tektronix, Beaverton, Ore., 1970.
“TDR Fundamentals,” Application Note 62, Hewlett Packard, Palo Alto, Calif., 1988.
Chapter
8.5
The Smith Chart
8.5.1 Introduction
The Smith chart is generally acknowledged to be the most universal tool for RF design work. RF
problems can be represented by complicated mathematics, such as hyperbolic or quadratic func-
tions, or by polynomial functions and differential equations. These equations involve complex
numbers, and must be solved for each specific frequency at each characteristic impedance, a
time-consuming and potentially error-prone process. The Smith chart simplifies these equations
into a chart format.
Several complications must be overcome to achieve a general solution for RF design prob-
lems. First is the variation of basic parameters caused by frequency; each frequency reacts differ-
ently to a given impedance. Second is the wide range of possible fundamental impedances. For
example, no two antennas of different design have exactly the same characteristic impedance,
although most fall into the range between 30 Ω and 120 Ω. The Smith chart provides a method to
“normalize” the frequency and characteristic impedance of an RF problem to permit solution on
a standardized chart.
Two quantities must be determined before using a Smith Chart: normalized impedance and
propagation constant. In addition, the expected or measured characteristic impedance of at least
one part of the RF circuit must be defined.
• The capacitive susceptance of the wire/shield combination at the frequency of interest (C).
• The conductance per unit length of the transmission line (G).
For applications where the length of the transmission line is short, the resistance of the wire is
negligible and so is the conductance. Even for long lengths of transmission line, such as antenna
feed lines, the resistance can be factored into the solution after the rest of the graphical work has
been done. Thus, the general equation for characteristic impedance (Z) of a typical transmission
line is
Z = √(L ⁄ C) (8.5.1)
Because the inductive reactance and capacitive susceptance are calculated at the frequency of
circuit operation, the frequency-related terms drop out, simplifying the process. For RF applica-
tions, the frequency-dependent values of inductance and capacitance of a transmission line result
from:
• Physical dimensions of the cable, such as diameter and thickness of the inner conductor.
• Distance between the inner conductor and the outer shield.
• Dielectric constant between the inner and outer conductors.
For waveguide, the physical dimensions are the width and breadth of a cross-section of the
guide expressed in fractions of a wavelength of the operating frequency. Note that these parame-
ters relate to operation at a single frequency. Modulated signals present further complications.
Figure 8.5.1 Resistance circles of the Smith chart coordinate system. (After [1].)
reflected. The Smith chart can be used to identify the best way to provide maximum energy
transfer from the transmission line into the load. In addition to transmission line problems, Smith
chart techniques apply to RF amplifiers, attenuators, directional couplers, antennas, antenna-
matching networks, and signal distribution networks.
Figure 8.5.2 Reactance curves of the Smith chart coordinate system. (After [1].)
Figure 8.5.3 Complete Smith chart coordinate system, with sample values plotted. (After [1].)
One other family of useful circles, usually not printed on the chart, is plotted with a compass,
as needed. These are the standing wave ratio (SWR) circles. They occur as concentric circles
centered on prime center, as shown in Figure 8.5.4. In typical use, Smith charts are drawn with
the resistance component line horizontal. The SWR and attenuation scales are radial scales.
They are used by aligning a straightedge at right angles to the scale of interest, finding the
desired value, and tracing a line up the Smith chart. The bottom transmission coefficient scale is
a magnitude scale, which must be applied differently. A compass or divider is used to transfer
linear distances from portions of the Smith chart.
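Each SWR circle corresponds to a fixed reflection-coefficient magnitude |Γ|, and the standing wave ratio follows directly from it. A minimal sketch (the function names and the 75 Ω example load are illustrative):

```python
def gamma_from_impedance(z, z0=50.0):
    """Complex reflection coefficient of a load z on a line of impedance z0."""
    return (z - z0) / (z + z0)

def swr_from_gamma(gamma_mag):
    """Standing wave ratio for a reflection-coefficient magnitude |gamma|.

    All loads with the same |gamma| lie on one SWR circle centered
    on prime center of the Smith chart.
    """
    return (1 + gamma_mag) / (1 - gamma_mag)

# Example: a 75-ohm resistive load on a 50-ohm line
g = gamma_from_impedance(complex(75, 0))
print(round(swr_from_gamma(abs(g)), 2))  # 1.5
```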
The Smith chart can be used to efficiently design a filter and/or impedance-matching net-
work. Each part of each problem can be solved using the chart, taking advantage of the graphical
nature of the tool to obtain possible solutions quickly. Because power components for RF work
come in limited size and value ranges, impedance-matching problems often must be reworked to
obtain final component values that can be realized with off-the-shelf parts. The flexibility of the
Smith chart in solving a complex problem is a great help in this respect.
When using a Smith Chart, perform the following steps (in sequence):
• Write down the pertinent actual resistance and reactance values.
• Choose a convenient denominator that brings the normalized resistance of the characteristic
impedance close to 1.0. Normalize all other values by dividing by this number. Note which
values are impedances for series circuits and which are admittances for shunt circuits.
• Plot the normalized values onto the Smith chart.
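The normalization step above can be sketched in a few lines; the 50 Ω system and the load value are hypothetical examples:

```python
def normalize(values, z0):
    """Normalize a set of series impedances to a characteristic impedance z0.

    Dividing by z0 brings the characteristic impedance itself to 1.0,
    the prime center of the Smith chart.
    """
    return {name: z / z0 for name, z in values.items()}

# Hypothetical 50-ohm system with a complex load
points = normalize({"line": complex(50, 0), "load": complex(25, 35)}, 50.0)
print(points["line"])  # (1+0j), which plots at prime center
print(points["load"])  # (0.5+0.7j)
```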
Figure 8.5.5 Example of Smith chart for lumped element match. (From [1]. Used with permission.)
used, the Smith Chart’s graphical representation can add an intuitive insight that saves consider-
able design time.
8.5.4 References
1. Silence, Neal C.: “The Smith Chart and its Usage in RF Design,” RF Design, Intertec Pub-
lishing, Overland Park, Kan., pp. 85–88, April 1992.
8.5.5 Bibliography
Adam, S. F.: Microwave Theory and Applications, Prentice-Hall, New York, N.Y., 1969.
Bowick, C.: RF Circuit Design, Howard W. Sams and Co., Indianapolis, Ind., 1982.
Bryant, G. H.: Principles of Microwave Measurements, IEE Electrical Measurement Series,
Peter Peregrinus Ltd., London, 1988.
Collin, R. E.: Foundations for Microwave Engineering, McGraw-Hill, New York, N.Y., 1966.
Kaufhold, Gerry: “The Smith Chart, Parts 1–4,” Broadcast Engineering, Intertec Publishing,
Overland Park, Kan., November 1989–March 1990.
Montgomery, C. G., R. H. Dicke, and E. M. Purcell: Principles of Microwave Circuits,
McGraw-Hill, New York, N.Y., 1948.
Smith, P. H.: Electronic Applications of the Smith Chart, McGraw-Hill, New York, N.Y., 1969.
Chapter
8.6
Standby Power Systems
8.6.1 Introduction
When utility company power problems are discussed, most people immediately think of black-
outs. The lights go out, and everything stops. With the facility down and in the dark, there is
nothing to do but sit and wait until the utility company finds the problem and corrects it. This
process generally takes only a few minutes. There are times, however, when it can take hours. In
some remote locations, it can even take days.
Blackouts are, without a doubt, the most troublesome utility company problem that a facility
will have to deal with. Statistics show that power failures are, generally speaking, a rare occur-
rence in most areas of the country. They are also short in duration. Studies have shown that 50
percent of blackouts last 6 s or less, and 35 percent are less than 11 min long. These failure rates
usually are not cause for concern to commercial users, except where computer-based operations,
transportation control systems, medical facilities, and communications sites are concerned.
When continuity of operation is critical, redundancy must be carried throughout the system.
The site never should depend upon one critical path for ac power. For example, if the facility is
fed by a single step-down transformer, a lightning flash or other catastrophic event could result in
a transformer failure that would bring down the entire site. A replacement could take days or
even weeks.
Figure 8.6.1 The classic standby power system using an engine-generator set. This system pro-
tects a facility from prolonged utility company power failures.
The cost of standby power for a facility can be substantial, and an examination of the possible
alternatives should be conducted before any decision on equipment is made. Management must
clearly define the direct and indirect costs and weigh them appropriately. Include the following
items in the cost-vs.-risk analysis:
• Standby power-system equipment purchase and installation cost.
• Exposure of the system to utility company power failure.
• Alternative operating methods available to the facility.
• Direct and indirect costs of lost uptime because of blackout conditions.
A distinction must be made between emergency and standby power sources. Strictly speaking,
emergency systems supply circuits legally designated as being essential for safety to life and
property. Standby power systems are used to protect a facility against the loss of productivity
resulting from a utility company power outage.
Figure 8.6.3 The dual utility feeder system of ac power loss protection. An automatic transfer
switch changes the load from the main utility line to the standby line in the event of a power inter-
ruption.
Figure 8.6.5 The use of a diesel generator for standby power and peak power shaving applica-
tions.
The m-g set will smooth over the transition from the main utility feed to the standby, often making a
commercial power failure unnoticed by on-site personnel. A conventional m-g typically will give
up to 0.5 s of power fail ride-through, more than enough to accomplish a transfer from one utility
feed to the other. This standby power system is further refined in the application illustrated in
Figure 8.6.7, where a diesel generator has been added to the system. With the automatic overlap
transfer switch shown at the generator output, this arrangement also can be used for peak
demand power shaving.
Figure 8.6.8 shows a simplified schematic diagram of a 220 kW UPS system utilizing dual
utility company feed lines, a 750 kVA gas-engine generator, and five dc-driven motor-generator
sets with a 20-min battery supply at full load. The five m-g sets operate in parallel. Each is rated
for 100 kW output. Only three are needed to power the load, but four are on-line at any given
time. The fifth machine provides redundancy in the event of a failure or for scheduled mainte-
nance work. The batteries are always on-line under a slight charge across the 270 V dc bus. Two
separate natural-gas lines, buried along different land routes, supply the gas engine. Local gas
storage capacity also is provided.
Figure 8.6.6 A dual feeder standby power system using a motor-generator set to provide power
fail ride-through and transient-disturbance protection. Switching circuits allow the m-g set to be
bypassed, if necessary.
Figure 8.6.7 A premium power-supply backup and conditioning system using dual utility feeds, a
diesel generator, and a motor-generator set.
Figure 8.6.8 Simplified installation diagram of a high-reliability power system incorporating dual
utility feeds, a standby gas-engine generator, and five battery-backed dc m-g sets. (After [1].)
• Natural and liquefied petroleum gas. Advantages: quick starting after long shutdown periods,
long life, low maintenance. Disadvantage: availability of natural gas during area-wide power
failure subject to question.
• Gasoline. Advantages: rapid starting, low initial cost. Disadvantages: greater hazard associ-
ated with storing and handling gasoline, generally shorter mean time between overhaul.
• Gas turbine. Advantages: smaller and lighter than piston engines of comparable horsepower,
rooftop installations practical, rapid response to load changes. Disadvantages: longer time
required to start and reach operating speed, sensitive to high input air temperature.
The type of power plant chosen usually is determined primarily by the environment in which
the system will be operated and by the cost of ownership. For example, a standby generator
located in an urban area office complex may be best suited to the use of an engine powered by
natural gas, because of the problems inherent in storing large amounts of fuel. State or local
building codes can place expensive restrictions on fuel-storage tanks and make the use of a gaso-
line- or diesel-powered engine impractical. The use of propane usually is restricted to rural areas.
Figure 8.6.9 Typical configuration of an engine-generator set. (From [2]. Used with permission.)
The availability of propane during periods of bad weather (when most power failures occur) also
must be considered.
The generator rating for a standby power system should be chosen carefully and should take
into consideration the anticipated future growth of the plant. It is good practice to install a
standby power system rated for at least 25 percent greater output than the current peak facility
load. This headroom gives a margin of safety for the standby equipment and allows for future
expansion of the facility without overloading the system.
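The 25 percent headroom guideline reduces to a one-line calculation; the 200 kW peak load below is a hypothetical example:

```python
def standby_rating_kw(peak_load_kw, headroom=0.25):
    """Minimum standby generator rating with the recommended headroom.

    The guideline above calls for at least 25 percent above the current
    peak facility load, as a safety margin and to allow future growth.
    """
    return peak_load_kw * (1 + headroom)

print(standby_rating_kw(200))  # a 200 kW peak load calls for at least 250 kW
```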
An engine-driven standby generator typically incorporates automatic starting controls, a bat-
tery charger, and automatic transfer switch. (See Figure 8.6.9.) Control circuits monitor the util-
ity supply and start the engine when there is a failure or a sustained voltage drop on the ac
supply. The switch transfers the load as soon as the generator reaches operating voltage and fre-
quency. Upon restoration of the utility supply, the switch returns the load and initiates engine
shutdown. The automatic transfer switch must meet demanding requirements, including:
• Carrying the full rated current continuously
• Withstanding fault currents without contact separation
• Handling high inrush currents
• Withstanding many interruptions at full load without damage
The nature of most power outages requires a sophisticated monitoring system for the engine-
generator set. Most power failures occur during periods of bad weather. Most standby generators
are unattended. More often than not, the standby system will start, run, and shut down without
any human intervention or supervision. For reliable operation, the monitoring system must check
the status of the machine continually to ensure that all parameters are within normal limits.
Time-delay periods usually are provided by the controller that require an outage to last from 5 to
10 s before the generator is started and the load is transferred. This prevents false starts that
needlessly exercise the system. A time delay of 5 to 30 min usually is allowed between the resto-
ration of utility power and return of the load. This delay permits the utility ac lines to stabilize
before the load is reapplied.
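The time-delay logic described above can be sketched as a simple decision function. The names and the default delays (5 s before start, 5 min before return) are illustrative values within the ranges cited:

```python
def transfer_action(outage_s=None, restored_s=None,
                    start_delay_s=5, return_delay_s=300):
    """Decide the transfer-switch action from outage/restoration timers.

    A brief outage (shorter than start_delay_s) is ignored to prevent
    false starts; utility power must be stable for return_delay_s
    before the load is transferred back and the engine shut down.
    """
    if outage_s is not None and outage_s >= start_delay_s:
        return "start generator, transfer load"
    if restored_s is not None and restored_s >= return_delay_s:
        return "return load to utility, shut down engine"
    return "hold"

print(transfer_action(outage_s=2))      # hold -- outage too brief
print(transfer_action(outage_s=8))      # start generator, transfer load
print(transfer_action(restored_s=600))  # return load to utility, shut down engine
```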
The transfer of motor loads may require special consideration, depending upon the size and
type of motors used at a plant. If the residual voltage of the motor is out of phase with the power
source to which the motor is being transferred, serious damage can result to the motor. Excessive
current draw also may trip overcurrent protective devices. Motors above 50 hp with relatively
high load inertia in relation to torque requirements, such as flywheels and fans, may require spe-
cial controls. Restart time delays are a common solution.
Automatic starting and synchronizing controls are used for multiple-engine-generator instal-
lations. The output of two or three smaller units can be combined to feed the load. This capability
offers additional protection for the facility in the event of a failure in any one machine. As the
load at the facility increases, additional engine-generator systems can be installed on the standby
power bus.
Generator Types
Generators for standby power applications can be induction or synchronous machines. Most
engine-generator systems in use today are of the synchronous type because of the versatility, reli-
ability, and capability of operating independently that this approach provides [2]. Most modern
synchronous generators are of the revolving field alternator design. Essentially, this means that
the armature windings are held stationary and the field is rotated. Therefore, generated power
can be taken directly from the stationary armature windings. Revolving armature alternators are
less popular because the generated output power must be derived via slip rings and brushes.
The exact value of the ac voltage produced by a synchronous machine is controlled by vary-
ing the current in the dc field windings, while frequency is controlled by the speed of rotation.
Power output is controlled by the torque applied to the generator shaft by the driving engine. In
this manner, the synchronous generator offers precise control over the power it can produce.
Practically all modern synchronous generators use a brushless exciter. The exciter is a small
ac generator on the main shaft; the ac voltage produced is rectified by a 3-phase rotating rectifier
assembly also on the shaft. The dc voltage thus obtained is applied to the main generator field,
which is also on the main shaft. A voltage regulator is provided to control the exciter field cur-
rent, and in this manner, the field voltage can be precisely controlled, resulting in a stable output
voltage.
The frequency of the ac current produced is dependent on two factors: the number of poles
built into the machine, and the speed of rotation (rpm). Because the output frequency must nor-
mally be maintained within strict limits (60 Hz or 50 Hz), control of the generator speed is essen-
tial. This is accomplished by providing precise rpm control of the prime mover, which is
performed by a governor.
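The pole-count/speed relationship is the standard synchronous-machine formula f = poles × rpm ⁄ 120, which the governor must hold. A short sketch (function name illustrative):

```python
def output_frequency_hz(poles, rpm):
    """AC output frequency of a synchronous generator.

    f = poles * rpm / 120; a 4-pole machine must be governed to
    1800 rpm for 60 Hz operation (1500 rpm for 50 Hz).
    """
    return poles * rpm / 120

print(output_frequency_hz(4, 1800))  # 60.0
print(output_frequency_hz(4, 1500))  # 50.0
```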
There are many types of governors; however, for auxiliary power applications, the isochro-
nous governor is normally selected. The isochronous governor controls the speed of the engine
so that it remains constant from no-load to full load, assuring a constant ac power output fre-
quency from the generator. A modern system consists of two primary components: an electronic
speed control and an actuator that adjusts the speed of the engine. The electronic speed control
senses the speed of the machine and provides a feedback signal to the mechanical/hydraulic actu-
ator, which in turn positions the engine throttle or fuel control to maintain accurate engine rpm.
The National Electrical Code provides guidance for safe and proper installation of on-site
engine-generator systems. Local codes may vary and must be reviewed during early design
stages.
If the decision is made that building occupants can live with the noise of the generator, care
must be taken in scheduling the required testing and exercising of the unit. Whether testing
occurs monthly or weekly, it should be done on a regular schedule.
If it has been determined that the noise should be controlled, or at least minimized, the easiest
way to achieve this objective is to physically separate the machine from occupied areas. This may
be easier said than done. Because engine noise is predominantly low-frequency in character,
walls and floor/ceiling construction used to contain the noise must be massive. Lightweight con-
struction, even though it may involve several layers of resiliently mounted drywall, is ineffective
in reducing low-frequency noise. Exhaust noise is a major component of engine noise but, fortu-
nately, it is easier to control. When selecting an engine-generator set, select the highest-quality
exhaust muffler available. Such units often are identified as “hospital-grade” mufflers.
Engine-generator sets also produce significant vibration. The machine should be mounted
securely to a slab-on-grade or an isolated basement floor, or it should be installed on vibration
isolation mounts. Such mounts usually are specified by the manufacturer.
Because a UPS system or motor-generator set is a source of continuous power, it must run
continuously. Noise must be adequately controlled. Physical separation is the easiest and most
effective method of shielding occupied areas from noise. Enclosure of UPS equipment usually is
required, but noise control is significantly easier than for an engine-generator because of the
lower noise levels involved. Nevertheless, the low-frequency 120 Hz fundamental of a UPS sys-
tem is difficult to contain adequately; massive constructions may be necessary. Vibration control
also is required for most UPS and m-g gear.
8.6.3 Batteries
Batteries are the lifeblood of most UPS systems. Important characteristics include the following:
• Charge capacity—how long the battery will operate the UPS
• Weight
• Charging characteristics
• Durability/ruggedness
Additional features that add to the utility of the battery include:
• Built-in status/temperature/charge indicator and/or data output port
• Built-in over-temperature/over-current protection with auto-reset capabilities
• Environmental friendliness
The last point deserves some attention. Many battery types must be recycled or disposed of
through some prescribed means. Proper disposal of a battery at the end of its useful life is, thus,
an important consideration. Be sure to check the original packaging for disposal instructions.
Failure to follow the proper procedures could have serious consequences.
Research has brought about a number of different battery chemistries, each offering distinct
advantages. Today’s most common and promising rechargeable chemistries include the follow-
ing:
• Nickel cadmium (NiCd)—used for portable radios, cellular phones, video cameras, laptop
computers, and power tools. NiCds have good load characteristics, are economically priced,
and are simple to use.
• Lithium ion (Li-Ion)—now commonly available and typically used for video cameras and lap-
top computers. This battery promises to replace some NiCds for high energy-density applica-
tions.
• Sealed lead acid (SLA)—used for uninterruptible power systems, video cameras, and other
demanding applications where the energy-to-weight ratio is not critical and low battery cost is
desirable.
• Nickel metal hydride (NiMH)—used for cellular phones, video cameras, and laptop comput-
ers where high energy is of importance and cost is secondary.
• Lithium polymer (Li-Polymer)—this battery has the highest energy density and lowest self-
discharge of common battery types, but its load characteristics typically suit only low current
applications.
• Reusable alkaline—used for light duty applications. Because of its low self-discharge, this
battery is suitable for portable entertainment devices and other non-critical appliances that are
used occasionally.
No single battery offers all the answers; rather, each chemistry is based on a number of com-
promises.
A battery, of course, is only as good as its charger. Common attributes for the current genera-
tion of charging systems include quick-charge capability and automatic battery condition analy-
sis and subsequent intelligent charging.
8.6.3a Terms
The following terms are commonly used to specify and characterize batteries:
• Energy density. The storage capacity of a battery measured in watt-hours per kilogram (Wh/
kg).
• Cycle life. The typical number of charge-discharge cycles for a given battery before the
capacity decreases from the nominal 100 percent to approximately 80 percent, depending
upon the application.
• Fast-charge time. The time required to fully charge an empty battery.
• Self-discharge. The discharge rate when the battery is not in use.
• Cell voltage. The output voltage of the basic battery element. The cell voltage multiplied by
the number of cells provides the battery terminal voltage.
• Load current. The maximum recommended current the battery can provide.
• Current rate. The C-rate is a unit by which charge and discharge times are scaled. If dis-
charged at 1C, a 100 Ah battery provides a current of 100 A; if discharged at 0.5C, the avail-
able current is 50 A.
Figure 8.6.10 The charge states of an SLA battery. (From [3]. Used with permission.)
• Exercise requirement. This parameter indicates the frequency that the battery needs to be
exercised to achieve maximum service life.
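The C-rate scaling described under “Current rate” can be expressed directly (function name illustrative):

```python
def discharge_current_a(capacity_ah, c_rate):
    """Discharge current for a battery of a given capacity at a given C-rate.

    Charge and discharge currents scale with capacity: a 100 Ah battery
    delivers 100 A at 1C and 50 A at 0.5C.
    """
    return capacity_ah * c_rate

print(discharge_current_a(100, 1.0))  # 100.0 A at 1C
print(discharge_current_a(100, 0.5))  # 50.0 A at 0.5C
```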
The third stage is the float-charge that compensates for self-discharge after the battery has been
fully charged.
During the “constant current charge,” the SLA battery is charged at a high current, limited by
the charger itself. After the voltage limit is reached, the topping charge begins and the current
starts to gradually decrease. Full-charge is reached when the current drops to a preset level or
reaches a low-end plateau.
The proper setting of the cell voltage limit is critical and is related to the conditions under
which the battery is charged. A typical voltage limit range is from 2.30 V to 2.45 V. If a slow
charge is acceptable, or if the room temperature can exceed 30°C (86°F), the recommended volt-
age limit is 2.35 V/cell. If a faster charge is required and the room temperature remains below
30°C, 2.40 or 2.45 V/cell can be used.
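The per-cell limits translate to pack-level charger settings. A sketch assuming a 12 V (6-cell) SLA battery and the thresholds given above (function name illustrative):

```python
def pack_voltage_limit(cells, fast_charge=False):
    """Charger voltage limit for an SLA battery string.

    Per the guidance above: 2.35 V/cell when a slow charge is acceptable
    or the room may exceed 30 C; up to 2.45 V/cell for faster charging
    when the room stays below 30 C.
    """
    per_cell = 2.45 if fast_charge else 2.35
    return cells * per_cell

print(round(pack_voltage_limit(6), 2))                    # 14.1 V for a 12 V battery
print(round(pack_voltage_limit(6, fast_charge=True), 2))  # 14.7 V
```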
8.6.4 References
1. Lawrie, Robert: Electrical Systems for Computer Installations, McGraw-Hill, New York,
N.Y., 1988.
2. DeDad, John A.: “Auxiliary Power,” in Practical Guide to Power Distribution for Informa-
tion Technology Equipment, PRIMEDIA Intertec, Overland Park, Kan., pp. 31–39, 1997.
3. Buchmann, Isidor: “Batteries,” in The Electronics Handbook, Jerry C. Whitaker (ed.), pg.
1058, CRC Press, Boca Raton, Fla., 1996.
8.6.5 Bibliography
Angevine, Eric: “Controlling Generator and UPS Noise,” Broadcast Engineering, PRIMEDIA
Intertec, Overland Park, Kan., March 1989.
Baietto, Ron: “How to Calculate the Proper Size of UPS Devices,” Microservice Management,
PRIMEDIA Intertec, Overland Park, Kan., March 1989.
Federal Information Processing Standards Publication No. 94, Guideline on Electrical Power for
ADP Installations, U.S. Department of Commerce, National Bureau of Standards, Wash-
ington, D.C., 1983.
Highnote, Ronnie L.: The IFM Handbook of Practical Energy Management, Institute for Man-
agement, Old Saybrook, Conn., 1979.
Smith, Morgan: “Planning for Standby AC Power,” Broadcast Engineering, PRIMEDIA Intertec,
Overland Park, Kan., March 1989.
Stuart, Bud: “Maintaining an Antenna Ground System,” Broadcast Engineering, PRIMEDIA
Intertec, Overland Park, Kan., October 1986.
Section
Test Equipment
9
The audio/video business is one in which there is a constant flow of new products. This rapid
advancement of technology requires an equally rapid advancement in training technical person-
nel. Many large manufacturers offer entry-level maintenance training on their products, but these
programs may be restricted to authorized resellers of the product line. Hands-on training is
unquestionably the best way to learn how to service a given piece of equipment. When a student
is allowed to practice a new technique with the supervision of a skilled instructor, the highest
level of learning and retention occurs.
Modern computer-based test instruments provide the ability to rapidly transmit data from one
location to another. Instruments, such as oscilloscopes and spectrum analyzers, are available that
can output a waveform or other data to a modem for transmission to a central service facility for
analysis. Instead of a single person grappling with a difficult problem, the on-site technician can
call on the resources and experience of the service center. Field service, thus, becomes a team
effort that includes the technician in the field and the often more experienced service center
engineers and staff. Software programs are available to permit test equipment at a remote loca-
tion to be configured as required, capture data, and transmit that data to the service center for
analysis. Teleservicing also makes possible the creation of reference libraries of key waveforms
and data patterns. These data facilitate troubleshooting, and are useful in documenting the per-
formance characteristics of the equipment being maintained. Over time, such documentation can
become a valuable addition to the service record of a given piece of equipment.
Service sites are sometimes in less than ideal locations, such as a mountaintop or a remote
microwave relay site. Teleservicing in such cases provides numerous benefits. If the problem
involves intermittent failures, the test instrument can be set up in a “babysitting mode” and left to
capture a critical signal when it occurs.
In This Section:
Applications 9-47
On-Air Measurements 9-48
Spurious Harmonic Distortion 9-49
Selective-Tuned Filter Alignment 9-50
Small-Signal Troubleshooting 9-52
Defining Terms 9-52
References 9-57
Bibliography 9-58
Ferrara, K. C., S. J. Keene, and C. Lane: “Software Reliability from a System Perspective,” Pro-
ceedings IEEE Reliability and Maintainability Symposium, IEEE, New York, N.Y., 1988.
Fink, D., and D. Christiansen (eds.): Electronics Engineers' Handbook, 3rd ed., McGraw-Hill,
New York, N.Y., 1989.
Fortna, H., R. Zavada, and T. Warren: “An Integrated Analytic Approach for Reliability Improve-
ment,” Proceedings IEEE Reliability and Maintainability Symposium, IEEE, New York,
N.Y., 1990.
Gore, George: “Choosing a Hand-Held Test Instrument,” Electronic Servicing and Technology,
Intertec Publishing, Overland Park, Kan., December 1988.
Greenberg, Bob: “Repairing Microprocessor-Based Equipment,” Sound and Video Contractor,
Intertec Publishing, Overland Park, Kan., February 1988.
Griffin, P.: “Analysis of the F/A-18 Hornet Flight Control Computer Field Mean Time Between
Failure,” Proceedings IEEE Reliability and Maintainability Symposium, IEEE, New York,
N.Y., 1985.
Hall, F., R. A. Paul, and W. E. Snow: “R&M Engineering for Off-the-Shelf Critical Software,”
Proceedings IEEE Reliability and Maintainability Symposium, IEEE, New York, N.Y.,
1988.
Hansen, M. D., and R. L. Watts: “Software System Safety and Reliability,” Proceedings IEEE
Reliability and Maintainability Symposium, IEEE, New York, N.Y., 1988.
Harju, Rey: “Hands-Free Operation: A New Generation of DMMs,” Microservice Management,
Intertec Publishing, Overland Park, Kan., August 1988.
Harris, Brad: “The Digital Storage Oscilloscope: Providing the Competitive Edge,” Electronic
Servicing and Technology, Intertec Publishing, Overland Park, Kan., June 1988.
Harris, Brad: “Understanding DSO Accuracy and Measurement Performance,” Electronic Ser-
vicing and Technology, Intertec Publishing, Overland Park, Kan., April 1989.
Hermason, Sue E., Major, USAF: letter dated December 2, 1988. From Yates, W., and Shaller,
D.: “Reliability Engineering as Applied to Software,” Proceedings IEEE Reliability and
Maintainability Symposium, IEEE, New York, N.Y., 1990.
Hobbs, Gregg K.: “Development of Stress Screens,” Proceedings IEEE Reliability and Main-
tainability Symposium, IEEE, New York, N.Y., 1987.
Horn, R., and F. Hall: “Maintenance Centered Reliability,” Proceedings IEEE Reliability and
Maintainability Symposium, IEEE, New York, N.Y., 1983.
Hoyer, Mike: “Bandwidth and Rise Time: Two Keys to Selecting the Right Oscilloscope,” Elec-
tronic Servicing and Technology, Intertec Publishing, Overland Park, Kan., April 1990.
Irland, Edwin A.: “Assuring Quality and Reliability of Complex Electronic Systems: Hardware
and Software,” Proceedings of the IEEE, vol. 76, no. 1, IEEE, New York, N.Y., January
1988.
Kenett, R., and M. Pollak: “A Semi-Parametric Approach to Testing for Reliability Growth, with
Application to Software Systems,” IEEE Transactions on Reliability, IEEE, New York,
N.Y., August 1986.
Kinley, Harold: “Using Service Monitor/Spectrum Analyzer Combos,” Mobile Radio Technol-
ogy, Intertec Publishing, Overland Park, Kan., July 1987.
Maynard, Eqbert, OUSDRE Working Group Chairman: “VHSIC Technology Working Group
Report” (IDA/OSD R&M Study), Document D-42, Institute of Defense Analysis, Novem-
ber 1983.
Montgomery, Steve: “Advances in Digital Oscilloscopes,” Broadcast Engineering, Intertec Pub-
lishing, Overland Park, Kan., November 1989.
Neubauer, R. E., and W. C. Laird: “Impact of New Technology on Repair,” Proceedings IEEE
Reliability and Maintainability Symposium, IEEE, New York, N.Y., 1987.
Pepple, Carl: “How to Use a Spectrum Analyzer at the Cell Site,” Cellular Business, Intertec
Publishing, Overland Park, Kan., March 1989.
Persson, Conrad: “Oscilloscope Special Report,” Electronic Servicing and Technology, Intertec
Publishing, Overland Park, Kan., April 1990.
Persson, Conrad: “Oscilloscope: The Eyes of the Technician,” Electronic Servicing and Technol-
ogy, Intertec Publishing, Overland Park, Kan., April 1987.
Persson, Conrad: “Test Equipment for Personal Computers,” Electronic Servicing and Technol-
ogy, Intertec Publishing, Overland Park, Kan., July 1987.
Persson, Conrad: “The New Breed of Test Instruments,” Broadcast Engineering, Intertec Publishing, Overland Park, Kan., November 1989.
Ogden, Leonard: “Choosing the Best DMM for Computer Service,” Microservice Management, Intertec Publishing, Overland Park, Kan., August 1989.
Powell, Richard: “Temperature Cycling vs. Steady-State Burn-In,” Circuits Manufacturing, Benwill Publishing, September 1976.
Robinson, D., and S. Sauve: “Analysis of Failed Parts on Naval Avionic Systems,” Report No.
D180-22840-1, Boeing Company, Seattle, Wash., October 1977.
Siner, T. Ann: “Guided Probe Diagnosis: Affordable Automation,” Microservice Management,
Intertec Publishing, Overland Park, Kan., March 1987.
Smeltzer, Dennis: “Packing and Shipping Equipment Properly,” Microservice Management,
Intertec Publishing, Overland Park, Kan., April 1989.
Smith, A., R. Vasudevan, R. Matteson, and J. Gaertner: “Enhancing Plant Preventative Mainte-
nance via RCM,” Proceedings IEEE Reliability and Maintainability Symposium, IEEE,
New York, N.Y., 1986.
Smith, William B.: “Integrated Product and Process Design to Achieve High Reliability in Both
Early and Useful Life of the Product,” Proceedings IEEE Reliability and Maintainability
Symposium, IEEE, New York, N.Y., 1987.
Sokol, Frank: “Specialized Test Equipment,” Microservice Management, Intertec Publishing,
Overland Park, Kan., August 1989.
Spradlin, B. C.: “Reliability Growth Measurement Applied to ESS,” Proceedings IEEE Reliabil-
ity and Maintainability Symposium, IEEE, New York, N.Y., 1986.
Toorens, Hans: “Oscilloscopes: From Looking Glass to High-Tech,” Electronic Servicing and
Technology, Intertec Publishing, Overland Park, Kan., April 1990.
Tustin, Wayne: “Recipe for Reliability: Shake and Bake,” IEEE Spectrum, IEEE, New York,
N.Y., December 1986.
Ware, Peter: “Servicing Obsolete Equipment,” Microservice Management, Intertec Publishing,
Overland Park, Kan., March 1989.
Whitaker, Jerry C.: Electronic Systems Maintenance Handbook, CRC Press, Boca Raton, Fla.,
2001.
Wickstead, Mike: “Signature Analyzers,” Microservice Management, Intertec Publishing, Over-
land Park, Kan., October 1985.
Wilson, M. F., and M. L. Woodruff: “Economic Benefits of Parts Quality Knowledge,” Proceed-
ings IEEE Reliability and Maintainability Symposium, IEEE, New York, N.Y., 1985.
Wolf, Richard J.: “Spectrum Analyzer Uses for Two-Way Technicians,” Mobile Radio Technol-
ogy, Intertec Publishing, Overland Park, Kan., July 1987.
Wong, Kam L.: “Demonstrating Reliability and Reliability Growth with Environmental Stress
Screening Data,” Proceedings IEEE Reliability and Maintainability Symposium, IEEE,
New York, N.Y., 1990.
Wong, K. L., I. Quart, L. Kallis, and A. H. Burkhard: “Culprits Causing Avionics Equipment
Failures,” Proceedings IEEE Reliability and Maintainability Symposium, IEEE, New York,
N.Y., 1987.
Worm, Charles M.: “The Real World: A Maintainer's View,” Proceedings IEEE Reliability and
Maintainability Symposium, IEEE, New York, N.Y., 1987.
Yates, W., and D. Shaller: “Reliability Engineering as Applied to Software,” Proceedings IEEE
Reliability and Maintainability Symposium, IEEE, New York, N.Y., 1990.
Chapter
9.1
Troubleshooting Digital Systems
9.1.1 Introduction
Repair of computer-based hardware requires a different approach to maintenance than that used for conventional analog circuits. Successful troubleshooting depends on four key elements:
• Understanding how the system operates
• Using the right test equipment
• Performing troubleshooting steps in the right sequence
• Paying attention to what the unit-under-test (UUT) is telling you
machine cycle The shortest amount of time necessary for a microprocessor to complete an oper-
ation or a process.
memory map The assignment of various devices to portions of the microprocessor memory
space.
The microprocessor performs one operation at a time, usually consisting of data read, data write, or arithmetic functions. In response to instructions contained in the control program, the microprocessor orders data to be placed on the data
bus, reads it, performs calculations as required, and writes the resulting data to specific circuit
elements over the same bus.
The system clock determines the rate at which the microprocessor executes instructions. All
data movement within the kernel is synchronized with the master clock. A key function of the
clock is to make sure that data placed on the bus has time to stabilize before a read or write oper-
ation occurs. The clock pulses define valid data windows for the various buses. The clock also
provides refresh signals for the DRAM chips.
ROM contains built-in instructions for the microprocessor, written in the instruction set specific to that type of device. All microprocessor-based systems require ROM instructions to get them started when the system is first powered up (booting).
RAM consists of one or more banks of memory devices that serve as a temporary storage area
for data. Data contained in RAM is volatile; it will be lost when power is removed from the sys-
tem. Program instructions read from the storage media are stored in and executed from RAM.
Buses are communications paths that conduct information from one place to another. Three
types of buses are fundamental to computer systems:
• Data bus—carries information to and from the microprocessor, connecting it with every part
of the system that handles data. The number of data bus lines is equal to the number of bits
the microprocessor can handle at one time; each bit requires its own line. Data bus lines are
bidirectional.
• Address bus—carries information that identifies the location of a particular piece of data.
Each device in the system has a specific range of addresses unique to that device. The data
and address buses work together to respond to the read and write commands of the micropro-
cessor.
• Control and status bus—the control and status lines of the microprocessor are connected to
the other kernel devices to effect control of system operations. These lines allow the micro-
processor to specify whether it wants to read data from a particular device, or write data to it.
They also provide a means for the device to notify the microprocessor when data is available
for transmission.
Both the data and address buses employ buffers located at the outer boundary of the kernel to
isolate the buses from other circuits on the system board.
Address decoders on the address bus notify each remote device when the address placed on
the bus by the microprocessor is within the address range of that particular device. This notifica-
tion is performed by turning the chip select (CS) pin of the device on or off. A device can
respond to requests from the microprocessor only when the CS pin is enabled. This prevents the
wrong device from responding to commands from the microprocessor.
The direct memory access controller permits the movement of data to and from memory
without going through the microprocessor. The DMA controller is designed to perform data
transfers involving peripheral devices, such as mass storage disks. Like other devices in the com-
puter system, the DMA controller operates only in response to instructions from the micropro-
cessor. When executing instructions, the DMA circuit assumes control over the data, address,
and control lines from the microprocessor. This process is known as cycle stealing. For one or
more clock cycles, the DMA chip takes over the communications buses, and the microprocessor
restricts itself to internal functions (such as arithmetic calculations). Because the microprocessor
will not relinquish control over the buses for more than a few cycles at a time, the DMA control-
ler and microprocessor may pass control of the buses back and forth several times before the
DMA controller completes a given task.
I/O circuitry allows the microprocessor kernel to communicate with peripheral devices,
such as the keyboard, monitor, storage media, and communications ports. Unlike the components
of the microprocessor kernel, I/O devices are not always governed by the microprocessor clock.
Peripherals may operate asynchronously with the system clock. The I/O circuitry typically con-
tains buffers that serve to isolate the kernel from the peripherals, and function as holding points
for data on its way to or from the kernel. For data going to the kernel, the buffers hold the data
until the microprocessor is ready to accept it. For data that the microprocessor is sending to a
peripheral device, the buffers hold the data until the device is ready to accept it.
• Types of devices used. A PWB populated with DIP integrated circuits is much easier to
repair than one populated with surface-mounted ICs.
• Nature of the malfunction. Some failures can be diagnosed with little more than a DMM and
a logic probe. Other failures require complex and expensive test instruments.
• Mean time to repair (MTTR) considerations. Hardware used in critical applications where
downtime cannot be tolerated is best repaired using the board swapping technique.
Because of the complexity of computer equipment today, many service departments have
established a hierarchical approach to troubleshooting. This technique involves assigning one of
three levels to each service problem:
• First-tier problems—obvious failures that usually can be solved without extensive trouble-
shooting or expensive test equipment. First-tier problems are handled by technology general-
ists, often less experienced or entry-level personnel. In a sizable maintenance organization,
this group is the largest of the three. It handles the majority of service work.
• Second-tier problems—failures that are harder to diagnose than first-tier faults. Test equip-
ment is required to troubleshoot the system. Complex problems often call for sophisticated
instruments.
• Third-tier problems—the most difficult to troubleshoot failures. Complex and expensive test
equipment is required, as well as extensive experience on the unit being serviced. In a large
maintenance organization, this is the smallest of the three groups.
Such a tiered approach to service results in the most cost-effective use of technical talent. It
also offers fast turn-around of products to the customer.
The service technician, therefore, may have to practice some degree of reverse engineering to repair
the system. By definition, reverse engineering means working backward from a finished product
to develop schematics, parts lists, operating standards, and test procedures. Even if this data is
unavailable from the OEM, it may be available from other sources, including:
• The equipment owner. Large contract sales often specify that the buyer receives technical
documentation as part of the purchase.
• Parts suppliers to the OEM. A wealth of technical information is available from semiconduc-
tor manufacturers. By ordering application notes from the maker of the microprocessor chip,
for example, insight may be gained into how the system operates. Most newer computer-
based products are built around chip sets, and the chip set manufacturer may be able to supply
extensive documentation on how the products operate.
• Documentation from similar systems. Product lines tend to share many design concepts. If the
maintenance manual for a given piece of hardware is not available, but documentation for an
earlier or later system from the same manufacturer is available, sufficient information may be
gleaned from the available data to solve the problem at hand.
This is easily accomplished with the proper packing materials, such as bubble wrap and anti-static foam
sheets. Furthermore, all electronic assemblies should be shipped in anti-static bags to prevent
ESD damage.
If intermittent disruptions are noted in the computer system, check for noise or ripple on the
power supply output pins. Noise can often enter the processor and related devices through the
supply, causing erratic operation. Less than 0.1 V ripple should be observed on an oscilloscope.
Check the reference manual for a ripple specification. 60 Hz ac line ripple may occur from a
defective (open or marginal) filter capacitor or a shorted filter choke. High frequency noise may
also be observed as a result of insufficient filtering of the switching stage.
Most computer equipment requires +5 V, +12 V, and –12 V. A simplified block diagram of a
switching supply is shown in Figure 9.1.3. Major elements of the example system include:
• Input and energy storage—provides input line conditioning (RFI filtering and current limit-
ing) and ac-to-dc rectification and filtering.
• Startup and reference—supplies unregulated dc to the startup circuit, supplies a +5 V refer-
ence for output voltage control, and generates trigger signals to the drive pulse generator cir-
cuit (which triggers the power supply into operation).
• Control and drive—manages power pulse generation during normal operation and during cur-
rent-limiting conditions, and directs power control pulses to the drive transformer and into the
primary inverter circuit.
• Primary inverter—provides drive signals for the power switching transistors and supplies cur-
rent-level status information for the shut-down circuit.
• Protection—detects and limits high current on the primary winding of the primary inverter
transformer, directs shutdown in the event of an overvoltage condition on the +5 V output,
and directs shutdown in the event of excessive temperature within the supply chassis.
• Secondary output—provides rectification, filtering, and regulation of the required operating
supply voltages.
When troubleshooting a catastrophic power supply failure, first check the primary line fuse.
Consult the technical documentation for the location of internal, PWB-mounted fuses, if used.
Check rectifiers and input-side filter capacitors. Both are vulnerable to failure from transient dis-
turbances on the ac line. Check the switching transistors, which are subjected to large switching
currents during normal operation. Check the startup circuit for the presence of control pulses.
Because of the availability of inexpensive stock power supplies, replacement of a defective
unit is typically the best option.
9.1.5 Bibliography
Allen, Tom: “Components of Microprocessor-Based Systems,” Electronic Servicing and Tech-
nology, Intertec Publishing, Overland Park, Kan., January 1988.
Allen, Tom: “Troubleshooting Microprocessor-Based Circuits: Part 1,” Electronic Servicing and
Technology, Intertec Publishing, Overland Park, Kan., January 1988.
Allen, Tom: “Troubleshooting Microprocessor-Based Circuits: Part 2,” Electronic Servicing and
Technology, Intertec Publishing, Overland Park, Kan., February 1988.
Bychowski, Phil: “Reverse Engineering,” Microservice Management, Intertec Publishing, Over-
land Park, Kan., March 1989.
Carey, Gregory D.: “Isolating Microprocessor-Related Problems,” Electronic Servicing and
Technology, Intertec Publishing, Overland Park, Kan., June 1988.
Clodfelter, Jim: “Troubleshooting the Microprocessor,” Microservice Management, Intertec
Publishing, Overland Park, Kan., November 1985.
Chapter
9.2
Digital Test Instruments
9.2.1 Introduction
As the equipment used by consumers and industry becomes more complex, the requirements for highly skilled maintenance technicians also increase. Maintenance personnel today require
advanced test equipment and must think in a “systems mode” to troubleshoot much of the hard-
ware now in the field. New technologies and changing economic conditions have reshaped the
way maintenance professionals view their jobs. As technology drives equipment design forward,
maintenance difficulties will continue to increase. Such problems can be met only through
improved test equipment and increased technician training.
Servicing computer-based professional equipment typically involves isolating the problem to
the board level and then replacing the defective PWB. Taken on a case-by-case basis, this
approach seems efficient. The inefficiency of the approach, however, is readily apparent: the investment required to keep a stock of spare boards on hand. Furthermore, because of the complex interrelation of circuits today, a PWB that appears to be faulty may actually turn out to
be perfect. The ideal solution is to troubleshoot down to the component level and replace the
faulty device instead of swapping boards. In many cases, this approach requires sophisticated
and expensive test equipment. In other cases, however, simple test instruments will do the job.
Although the cost of most professional equipment has been going up in recent years, mainte-
nance technicians have seen a buyer's market in test instruments. The semiconductor revolution
has done more than give consumers low-cost computers and disposable devices. It has also
helped to spawn a broad variety of inexpensive test instruments with impressive measurement
capabilities.
Figure 9.2.1 Functional block diagram of a basic DMM. LSI chip technology has reduced such
systems to a single device.
(a)
(b)
Figure 9.2.2 DMM analog-to-digital conversion process: (a) functional block diagram, (b) graph of
Vc vs. time.
The capacitor is then discharged to zero. As shown in the figure, the discharge interval is directly
proportional to the maximum voltage, which in turn is proportional to the applied voltage.
At the same time that the integration interval ends and the discharge interval begins, a counter
in the meter begins counting pulses generated by a clock circuit. When the voltage reaches zero,
the counter stops. The number of pulses counted is, therefore, proportional to the discharge
period. This count is converted to a digital number and displayed as the measured voltage.
Although this method works well, it is somewhat slow, so many microcomputer-based meters use
a variation called multislope integration.
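The charge-and-count sequence just described (dual-slope conversion) can be modeled numerically. The sketch below is a simplified illustration, not any particular meter's implementation:

```python
def dual_slope_count(v_in, t_integrate=100, v_ref=1.0):
    """Idealized dual-slope conversion: integrate the unknown input for
    a fixed number of clock periods, then count clock pulses while the
    capacitor discharges at a constant reference rate."""
    v_cap = 0.0
    for _ in range(t_integrate):      # fixed integration window
        v_cap += v_in
    count = 0
    while v_cap > 0:                  # constant-rate discharge
        v_cap -= v_ref
        count += 1
    return count                      # proportional to v_in
```

Doubling the applied voltage doubles the final count, which is why the count can be displayed directly as the measured voltage.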
When a DMM is used in the resistance testing mode, it places a low voltage with a constant-
current characteristic across the test leads. After the leads are connected, the voltage across the
measurement points is determined, which provides the resistance value (voltage divided by the
known constant current = resistance). The meter converts the voltage reading into an equivalent
resistance for display. The voltage placed across the circuit under test is kept low to protect semi-
conductor devices that might be in the loop. The voltage at the ohmmeter probe is about 0.1 V or
less, too low to turn on silicon junctions.
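The ohms conversion itself is a direct application of Ohm's law to the meter's known test current. A brief sketch (the 100 μA test current is a hypothetical value for illustration):

```python
def measured_resistance(v_measured, i_test=100e-6):
    """Ohms mode: the meter forces a known constant current through the
    unknown and reads the resulting voltage; R = V / I."""
    return v_measured / i_test

# A 50 mV reading at a 100 microampere test current indicates 500 ohms,
# with the probe voltage far below the level that turns on a silicon junction.
```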
The ultimate performance of a DMM in the field is determined by its inherent accuracy. Key
specifications include:
• Accuracy—how closely the meter indicates the actual value of a measured signal, specified
in percent of error. Zero percent indicates a perfect meter.
• Frequency response—the range of frequencies that can be measured by the meter without
exceeding a specified amount of error.
• Input impedance—the combined ac and dc resistance at the input terminals of the multime-
ter. The impedance, if too low, can load critical circuits and result in measurement errors. An
input impedance of 10 MΩ or greater will prevent loading.
• Precision—the degree of resolution to which a measurement is carried out. As a general rule, the more digits a DMM can display, the more precise the measurement will be.
Logic Instruments
The simplest of all logic instruments is the logic probe. A basic logic probe tells the technician
whether the logic state of the point being checked is high, low, or pulsed. Most probes include a
pulse stretcher that will show the presence of a one-shot pulse. The indicators are usually LEDs. In
some cases, a single LED is used to indicate any of the conditions (high, low, or pulsing); gener-
ally, individual LEDs are used to indicate each state. Probes usually include a switch to match the
level sense circuitry of the unit to the type of logic being checked (TTL, CMOS, ECL, etc.). The
probe receives its power from the circuit under test. Connecting the probe in this manner sets the
approximate value of signal voltage that constitutes a logic low or high. For example, if the power supply voltage for a CMOS logic circuit is 18 V, a logic low would be about 30 percent of 18 V (5.4 V), and a logic high would be about 70 percent of 18 V (12.6 V).
Figure 9.2.3 Use of a logic pulser and current tracer to locate a short-circuit on a PWB.
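Expressed as code, the level classification a logic probe performs looks like the following sketch (the 30/70 percent thresholds are the CMOS figures given above):

```python
def classify_level(v, v_supply, low_frac=0.30, high_frac=0.70):
    """Classify a probed voltage against thresholds derived from the
    supply rail, as a probe powered from the circuit under test does."""
    if v <= low_frac * v_supply:
        return "low"
    if v >= high_frac * v_supply:
        return "high"
    return "indeterminate"
```

With an 18 V supply, a reading of 3 V classifies as low, 15 V as high, and anything between 5.4 V and 12.6 V as indeterminate.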
The logic pulser is the active counterpart to the logic probe. The logic pulser injects a single pulse, or a train of pulses, into a circuit to check the response.
A known-good reference IC is inserted into the comparator's “reference” socket. While the circuit containing the IC under test is operating, the comparator matches the responses of the reference IC to the test device.
Figure 9.2.6 Use of a delay period between the trigger point and the trace point of a timing ana-
lyzer.
Accurate sampling of data lines requires a trigger source to begin data acquisition. A fixed
delay may be inserted between the trigger point and the trace point to allow for bus settling. This
concept is illustrated in Figure 9.2.6. A variety of trigger modes are commonly available on a
timing analyzer, including:
• Level triggering—data acquisition and/or display begins when a logic high or low is detected.
• Edge triggering—data acquisition/display begins on the rising or falling edge of a selected
signal. Although many logic devices are level-dependent, clock and control signals are often
edge-sensitive.
• Bus state triggering—data acquisition/display is triggered when a specific code is detected
(specified in binary or hexadecimal).
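These trigger modes reduce to simple predicates over the sampled data. The sketch below illustrates edge and bus-state triggering (hypothetical helper functions, not a real analyzer's interface):

```python
def edge_trigger(samples, rising=True):
    """Index of the first rising (or falling) edge on a sampled 0/1
    line, or None if no such edge occurs."""
    a, b = (0, 1) if rising else (1, 0)
    for i in range(1, len(samples)):
        if samples[i - 1] == a and samples[i] == b:
            return i
    return None

def bus_state_trigger(bus_samples, code):
    """Index at which a sampled bus first carries a specific code."""
    for i, word in enumerate(bus_samples):
        if word == code:
            return i
    return None
```

Level triggering is the degenerate case of bus-state triggering applied to a single line.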
The timing analyzer is constantly taking data from the monitored bus. Triggering, and subse-
quent generation of the trace point, controls the data window displayed to the technician. It is
Figure 9.2.8 Use of a timing analyzer for detecting glitches on a monitored line.
possible, therefore, to configure the analyzer to display data that precedes the trace point. (See
Figure 9.2.7.) This feature can be a powerful troubleshooting and developmental tool.
The timing analyzer is perhaps the best method of detecting glitches in computer-based equipment. (A glitch is defined as any transition that crosses a logic threshold more than once between clock periods; see Figure 9.2.8.) The triggering input of the analyzer is set to the
bus line that is experiencing random glitches. When the analyzer detects a glitch, it displays the
bus state preceding, during, or after occurrence of the disturbance.
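The glitch definition used here (more than one threshold crossing between clock periods) translates directly into code. A sketch over idealized 0/1 samples:

```python
def has_glitch(samples):
    """True if consecutive 0/1 samples taken between two clock edges
    cross the logic threshold more than once."""
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if a != b)
    return crossings > 1
```

A clean transition produces exactly one crossing and is not flagged; a runt pulse that returns to its starting level is.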
The state analyzer is the second half of a logic analyzer. It is used most often to trace the exe-
cution of instructions through a microprocessor system. Data, address, and status codes are cap-
tured and displayed as they occur on the microprocessor bus. A state is a sample of the bus when
the data are valid. A state is usually displayed in a tabular format of hexadecimal, binary, octal,
or assembly language. Because some microprocessors multiplex data and addresses on the same
lines, the analyzer must be able to clock-in information at different clock rates. The analyzer, in
essence, acts as a demultiplexer to capture an address at the proper time, and then to capture data
present on the same bus at a different point in time. A state analyzer also gives the operator the
ability to qualify the data stored. Operation of the instrument may be triggered by a specific logic
pattern on the bus. State analyzers usually offer a sequence term feature that aids in triggering. A
sequence term allows the operator to qualify data storage more accurately than would be possible
with a single trigger point. A sequence term usually takes the following form:
• find xxxx
• then find yyyy
• start on zzzz
A sequence term is useful for probing a subroutine from a specific point in a program. It also
makes possible selective storage of data, as shown in Figure 9.2.9.
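The sequence term can be modeled as a small state machine. In the sketch below, the hex values stand in for the xxxx/yyyy/zzzz patterns above and are purely illustrative:

```python
def sequence_trigger(bus_samples, terms):
    """Walk a list of bus samples and return the index at which the
    final term of a sequence ('find A, then find B, start on C') is
    satisfied, or None. Each term must match at or after the point
    where the previous term matched."""
    stage = 0
    for i, word in enumerate(bus_samples):
        if word == terms[stage]:
            stage += 1
            if stage == len(terms):
                return i          # storage/trace starts here
    return None

trace = [0x11, 0xAA, 0x22, 0xBB, 0xAA, 0xCC]
```

For the trace above, the sequence "find 0xAA, then find 0xBB, start on 0xCC" triggers at the final sample, qualifying storage far more selectively than a single trigger word could.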
To make the acquired data easier to understand, most state analyzers include software that interprets the information. Such disassemblers (also known as inverse assemblers) translate hex, binary, or octal codes into assembly code (or some other format) to make them easier to read.
The logic analyzer is used routinely by design engineers to gain an in-depth look at signals
within digital circuits. A logic analyzer can operate at high speeds, making the instrument ideal
for detecting glitches resulting from timing problems. Such faults are usually associated with
design flaws, not with manufacturing defects or failures in the field.
While the logic analyzer has benefits for the service technician, its use is limited. Designers
require the ability to verify hardware and software implementations with test equipment; service
technicians simply need to quickly isolate a fault. It is difficult and costly to automate test proce-
dures for a given PWB using a logic analyzer. The technician must examine a long data stream
and decide if the data are good or bad. Writing programs to validate state analysis data is possi-
ble, but—again—is time-consuming.
The data stream is marched through a 16-bit linear feedback shift register. Whatever data is left over in the register after the specified measurement window has closed is converted into a four-digit hexadecimal readout. The feedback portion of the shift register causes even a single faulty bit in a digital bit stream to create an entirely different signature from the one expected in a properly operating system. This
tool allows the maintenance technician to identify a single bit error in a digital bit stream with a
high degree of certainty, even when picking up errors that are timing-related.
To function properly, the signature analyzer requires start and stop signals, a clock input, and
the data input. The start and stop inputs are derived from the circuit being tested and are used to
bracket the beginning and end of the measurement window. During this gate period, data is input
through the data probe. The clock input controls the sample rate of data entering the analyzer.
The clock is most often taken from the clock input pin of the microprocessor. The start, stop, and
clock inputs may be configured by the technician to trigger on the rising or falling edges of the
input signals.
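The compression scheme can be illustrated with a short model. The tap positions below are representative only; an actual signature analyzer fixes them by design, and its signatures will differ:

```python
def signature(bits, taps=(6, 8, 11, 15)):
    """March a bit stream through a 16-bit linear feedback shift
    register and return the residue as a four-digit hex signature.
    Tap positions are representative, not those of any real analyzer."""
    reg = 0
    for bit in bits:
        feedback = bit
        for t in taps:
            feedback ^= (reg >> t) & 1
        reg = ((reg << 1) | feedback) & 0xFFFF
    return f"{reg:04X}"

stream = [1, 0, 1, 1, 0, 0, 1, 0] * 4
corrupted = list(stream)
corrupted[5] ^= 1        # a single faulty bit in the stream
```

Because the outgoing register bit is folded back into the feedback, the state transition is invertible, so a one-bit difference in the input can never cancel out: the corrupted stream always yields a different signature than the good one.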
identify the cause of a fault condition. An emulative tester emulates the board's microprocessor while
verifying circuit operation. Testing the board “inside-out” allows easy access to all parts of the
circuit; synchronization with the various data and address cycles of the PWB is automatic. Test
procedures, such as read/write cycles from the microprocessor, are generic, so that high-quality functional tests can be quickly created for any board. Signature analysis is used to verify that circuits are operating correctly. Even with long streams of data, there is no maximum memory
depth.
Several different types of emulative testers are available. Some are designed to check only one
brand of computer, or only computers based on one type of microprocessor. Other instruments
can check a variety of computer systems using so-called personality modules that adapt the
instrument to the system being serviced.
Guided-fault isolation (GFI) is practical with an emulative tester because the instrument
maintains control over the entire system. Automated tests can isolate faults to the node level. All
board information and test procedures are resident within the emulative test instrument, includ-
ing prompts to the operator on what to do next. Using the microprocessor test connection com-
bined with movable probes or clips allows a closed loop test of virtually any part of a circuit.
Input/output capabilities of emulative testers range from single point probes to well over a hun-
dred I/O lines. These lines can provide stimulus as well as measurement capabilities.
The principal benefit of the guided probe over the manual probe is derived from the creation
of a topology database for the UUT. The database describes devices on the UUT, their internal
fault-propagating characteristics, and their inter-device connections. In this way, the tester can
guide the operator down the proper logic path to isolate the fault. A guided probe accomplishes
the same analysis as a conventional fault tree, but differs in that troubleshooting is based on a
generic algorithm that uses the circuit model as its input. First, the system compares measure-
ments taken from the failed UUT against the expected results. Next, it searches for a database
representation of the UUT to determine the next logical place to take a measurement. The guided
probe system automatically determines which nodes impact other nodes. The algorithm tracks its
way through the logic system of the board to the source of the fault. Programming of a guided
probe system requires the following steps:
• Development of a stimulus routine to exercise and verify the operating condition of the UUT
(go/no-go status).
• Acquisition and storage of measurements for each node affected by the stimulus routine. The
programmer divides the system or board into measurement sets (MSETs) of nodes having a
common timebase. In this way, the user can take maximum advantage of the time domain fea-
ture of the algorithm.
• Programming a representation of the UUT that depicts the interconnection between devices,
and the manner in which signals can propagate through the system. Common connectivity
libraries are developed and reused from one application to another.
• Programming a measurement set cross reference database. This allows the guided probe
instrument to cross MSET boundaries automatically without operator intervention.
• Implementation of a test program control routine that executes the stimulus routines. When a
failure is detected, the guided probe database is invoked to determine the next step in the trou-
bleshooting process.
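The back-tracing algorithm these steps support can be sketched as a walk over the topology database. Everything below (the database format, node names, and helper signature) is hypothetical:

```python
# Hypothetical topology database: for each node, the upstream nodes
# that can propagate a fault into it. Node names are illustrative.
DRIVES = {
    "U5.out": ["U3.out", "U4.out"],
    "U3.out": ["clk"],
    "U4.out": ["clk"],
    "clk": [],
}

def guided_probe(failed_node, measure, expected):
    """Back-trace from a failing node toward the fault source: stop at
    the first node whose inputs all measure good but whose output is bad."""
    node = failed_node
    while True:
        bad_inputs = [n for n in DRIVES[node] if measure(n) != expected[n]]
        if not bad_inputs:
            return node              # good in, bad out: fault is here
        node = bad_inputs[0]         # follow the failure upstream

# Example measurement set: U3's output is stuck low.
expected = {"U5.out": 1, "U3.out": 1, "U4.out": 0, "clk": 1}
actual = {"U5.out": 0, "U3.out": 0, "U4.out": 0, "clk": 1}
```

Starting from the failing node U5.out, the walk follows the bad input to U3.out, finds all of its inputs good, and guides the operator to probe U3 as the likely fault source.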
be slow. Test times for an average PWB may range from 8 to 20 minutes, versus one minute or less for a bed of nails.
Applications
The most common applications for computer-controlled testing are data gathering, product go/
no-go qualification, and troubleshooting. All depend on software to control the instruments.
Acquiring data can often be accomplished with a computer and a single instrument. The com-
puter collects dozens or hundreds of readings until the occurrence of some event. The event may
be a preset elapsed time or the occurrence of some condition at a test point, such as exceeding a
preset voltage or dropping below a preset voltage. Readings from a variety of test points are then
stored in the computer. Under computer direction, test instruments can also run checks that
might be difficult or time-consuming to perform manually. The computer may control several
test instruments such as power supplies, signal generators, and frequency counters. The data is
stored for later analysis.
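The collect-until-event procedure reduces to a polling loop. In the sketch below, the instrument-reading callable is a hypothetical stand-in for whatever instrument driver is in use:

```python
def acquire_until(read_instrument, limit, max_readings=1000):
    """Collect readings until one exceeds a preset limit (the 'event'),
    or until max_readings have been taken; return everything collected."""
    readings = []
    for _ in range(max_readings):
        v = read_instrument()
        readings.append(v)
        if v > limit:
            break
    return readings

# Canned data standing in for a live instrument driver:
samples = iter([4.9, 5.0, 5.1, 5.6, 4.8])
log = acquire_until(lambda: next(samples), limit=5.5)
```

The event condition can just as easily be elapsed time or a reading dropping below a floor; only the test inside the loop changes.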
9.2.6 Bibliography
Carey, Gregory D.: “Automated Test Instruments,” Broadcast Engineering, Intertec Publishing,
Overland Park, Kan., November 1989.
Detwiler, William L.: “Troubleshooting with a Digital Multimeter,” Mobile Radio Technology,
Intertec Publishing, Overland Park, Kan., September 1988.
Gore, George: “Choosing a Hand-Held Test Instrument,” Electronic Servicing and Technology,
Intertec Publishing, Overland Park, Kan., December 1988.
Harju, Rey: “Hands-Free Operation: A New Generation of DMMs,” Microservice Management,
Intertec Publishing, Overland Park, Kan., August 1988.
Ogden, Leonard: “Choosing the Best DMM for Computer Service,” Microservice Management,
Intertec Publishing, Overland Park, Kan., August 1989.
Persson, Conrad: “Test Equipment for Personal Computers,” Electronic Servicing and
Technology, Intertec Publishing, Overland Park, Kan., July 1987.
Persson, Conrad: “The New Breed of Test Instruments,” Broadcast Engineering, Intertec
Publishing, Overland Park, Kan., November 1989.
Siner, T. Ann: “Guided Probe Diagnosis: Affordable Automation,” Microservice Management,
Intertec Publishing, Overland Park, Kan., March 1987.
Sokol, Frank: “Specialized Test Equipment,” Microservice Management, Intertec Publishing,
Overland Park, Kan., August 1989.
Whitaker, Jerry C.: Electronic Systems Maintenance Handbook, CRC Press, Boca Raton, Fla.,
2001.
Wickstead, Mike: “Signature Analyzers,” Microservice Management, Intertec Publishing,
Overland Park, Kan., October 1985.
Chapter
9.3
Oscilloscopes
9.3.1 Introduction
The oscilloscope is one of the most general-purpose of all test instruments. A scope can be used
to measure voltages, examine waveshapes, check phase relationships, examine clock pulses, and
countless other functions. There has been considerable progress in scope technology within the past decade or so, most noticeably the move to portable digital systems. Some instruments offer logic-tracing functions, network connectivity, and hard-copy printout.
9.3.2 Specifications
A number of parameters are used to characterize the performance of an oscilloscope. Key param-
eters include bandwidth and risetime.
9.3.2a Bandwidth
Oscilloscope bandwidth is defined as the frequency at which a sinusoidal signal will be attenu-
ated by a factor of 0.707 (or reduced to 70.7 percent of its maximum value). This is referred to as
the –3 dB point. Bandwidth considerations for sine waves are straightforward. For square waves
and other complex waveforms, however, bandwidth considerations become substantially more
involved.
Square waves are made up of an infinite number of sine waves: the fundamental frequency
plus mostly odd harmonics. Fourier analysis shows that a square wave consists of the fundamen-
tal sine wave plus sine waves that are odd multiples of the fundamental. Figure 9.3.1 illustrates
the mechanisms involved. The fundamental frequency contributes about 81.7 percent of the
square wave. The third harmonic contributes about 9.02 percent, and the fifth harmonic about
3.24 percent. Higher harmonics contribute less to the shape of the square wave. As outlined here,
approximately 94 percent of the square wave is derived from the fundamental and the third and
fifth harmonics. Inaccuracies introduced by the instrument are typically 2 percent or less. The
user's ability to read a conventional CRT display may contribute another four percent error. Thus,
a total of six percent error may be introduced by the instrument and the operator. A 94 percent
9-35
Figure 9.3.1 The mechanisms involved in square waves: (a) individual waveforms involved; (b) waveshape resulting from combining the fundamental and the first odd harmonic (the third harmonic); (c) waveshape resulting from combining the fundamental and the third and fifth harmonics.
accurate reproduction of a square wave should, therefore, be sufficient for all but the most criti-
cal applications. It follows that scope bandwidth is important up to and including the fifth har-
monic of the desired signal.
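The harmonic contributions cited above can be checked with a short Python sketch that computes the fraction of a unit square wave's power carried by each harmonic; the small differences from the figures in the text reflect rounding:

```python
import math

def harmonic_power_fraction(n):
    """Fraction of an ideal (+/-1) square wave's total power carried by
    its nth harmonic. The Fourier amplitude of harmonic n (n odd) is
    4/(n*pi); the total power of a unit square wave is 1."""
    if n % 2 == 0:
        return 0.0               # even harmonics are absent
    amplitude = 4 / (n * math.pi)
    return amplitude ** 2 / 2    # mean power of a sine of that amplitude

for n in (1, 3, 5):
    print(f"harmonic {n}: {100 * harmonic_power_fraction(n):.2f}%")
# fundamental ~81%, third ~9%, fifth ~3.2%; together roughly 93-94%
```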
Bandwidth and rise time are related by

Tr = 0.35/BW    (9.3.1)

Where:
Tr = instrument rise time
BW = instrument bandwidth
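As a quick worked example of Eq. (9.3.1):

```python
def rise_time(bandwidth_hz):
    """Approximate 10-90 percent rise time of an instrument with a
    Gaussian response, per Eq. (9.3.1): Tr = 0.35/BW."""
    return 0.35 / bandwidth_hz

# A 100 MHz instrument, for example, has a rise time of about 3.5 ns.
print(rise_time(100e6))
```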
The VOLTS/DIV control determines the value of each increment on the Y-axis of the display. On a dual-channel scope, two controls are provided, one for each source. Typical vertical sensitivity ranges vary from 5 mV to 5 V per division.
The need for interpolation introduces inaccuracies in the measurement process. Determina-
tion of exact values is especially difficult in the microsecond ranges, where even small incre-
ments can make a big difference in the measurement. It is also difficult with a conventional
scope to make a close-up examination of specific portions of a waveform. For this purpose, an
oscilloscope with two independently adjustable sweep generators is recommended. A small sec-
tion of the waveform can be selected for closer examination by adjusting the B channel sweep-
time and delay-time controls while observing an intensified portion of the waveform. The inten-
sified section can then be expanded across the screen. This type of close-up examination is use-
ful for measuring pulse rise times and viewing details in complex signals, such as video
horizontal sync or color burst.
Although a DSO is specified by its maximum sampling rate, the actual rate used in acquiring
a given waveform is usually dependent on the time-per-division setting of the oscilloscope. The
record length (samples recorded over a given period of time) defines a finite number of sample
points available for a given acquisition. The DSO must, therefore, adjust its sampling rate to fill
a given record over the period set by the sweep control. To determine the sampling rate for a given sweep speed, the number of displayed points per division is divided by the time per division. Two additional features can modify the actual sampling rate:
• Use of an external clock for pacing the digitizing rate. With the internal digitizing clock dis-
abled, the digitizer will be paced at a rate defined by the operator.
• Use of a peak detection (or glitch capture) mode. Peak detection allows the digitizer to sam-
ple at the full digitizing rate of the DSO, regardless of the time base setting. The minimum
and maximum values found between each normal sample interval are retained in memory.
These minimum and maximum values are used to reconstruct the waveform display with the
help of an algorithm that recreates a smooth trace along with any captured glitches. Peak
detection allows the DSO to capture glitches even at its slowest sweep speed. For higher per-
formance, a technique known as peak-accumulation (or envelope mode) may be used. With
this approach, the instrument accumulates and displays the maximum and minimum excur-
sions of a waveform for a given point in time. This builds an envelope of activity that can
reveal infrequent noise spikes, long-term amplitude or time drift, and pulse jitter extremes.
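The rate selection and peak-detection behavior described above can be sketched as follows (illustrative Python, not any particular instrument's firmware):

```python
def dso_sample_rate(points_per_div, time_per_div_s, max_rate_hz):
    """Rate a DSO actually uses at a given sweep setting: the record
    must fill the screen, but the rate cannot exceed the digitizer
    maximum."""
    return min(points_per_div / time_per_div_s, max_rate_hz)

def peak_detect(full_rate_samples, decimation):
    """Glitch-capture mode: keep the min and max found in each normal
    sample interval instead of a single decimated point."""
    pairs = []
    for i in range(0, len(full_rate_samples), decimation):
        chunk = full_rate_samples[i:i + decimation]
        pairs.append((min(chunk), max(chunk)))
    return pairs
```

At slow sweeps the computed rate is far below the digitizer maximum, which is exactly when peak detection earns its keep: a narrow glitch falling between decimated samples still survives as a min/max pair.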
With repetitive (equivalent-time) sampling, a few samples are taken during each occurrence of the waveform. The next repetition of the waveform triggers the instrument again, and more samples are taken at different points on the waveform. Over many repetitions, the number of stored samples can be built up to the equivalent of a high sampling rate.
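The equivalent-time technique can be illustrated with a simple sketch; the `waveform` callable is hypothetical (real instruments interleave the sample positions in hardware):

```python
def equivalent_time_sample(waveform, samples_per_trigger, repetitions):
    """Build a densely sampled record from a repetitive waveform by
    taking a few samples per trigger, with each repetition offset by
    one sample position. `waveform(index)` returns the value at that
    point of the (repeating) signal."""
    record = {}
    for rep in range(repetitions):
        for k in range(samples_per_trigger):
            idx = rep + k * repetitions   # interleaved sample positions
            record[idx] = waveform(idx)
    return [record[i] for i in sorted(record)]
```

After `repetitions` triggers, every sample position has been visited once, giving an effective rate `repetitions` times the per-trigger rate, provided the signal repeats precisely from cycle to cycle.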
System operating speed also has an effect on the display update rate. Update performance is
critical when measuring waveform or voltage changes. If the display does not track the changes,
adjustment of the circuit may be difficult.
Figure 9.3.5 Block diagram of a DSO showing the conversion, I/O, memory, and display circuits. (Courtesy of Kikusui.)
With single-shot digitizing, the waveform is captured the first time it occurs, on the first trigger.
It can then be displayed immediately or held in memory for analysis at a later date.
Triggering
Basic triggering modes available on a digital oscilloscope permit the user to select the desired
source, its coupling, level, and slope. More advanced digital scopes contain triggering circuitry
similar to that found in a logic analyzer. These powerful features let the user trigger on elusive
conditions, such as pulse widths less than or greater than expected, intervals less than or greater
than expected, and specified logic conditions. The logic triggering can include digital pattern,
state qualified, and time/event qualified conditions. Many trigger modes are further enhanced by
allowing the user to hold-off the trigger by a selectable time or number of events. Hold-off is
especially useful when the input signal contains bursts of data or follows a repetitive pattern.
These features are summarized as follows:
• Pulse-width triggering lets the operator quickly check for pulses narrower than expected or
wider than expected. The pulse-width trigger circuit checks the time from the trigger source
transition of a given slope (typically the rising edge) to the next transition of opposite slope
(typically the falling edge). The operator can interactively set the pulse-width threshold for
the trigger. For example, a glitch can be considered any signal narrower than 1/2 of a clock
period. Conditions preceding the trigger can be displayed to show what events led up to the
glitch.
• Interval triggering lets the operator quickly check for intervals narrower than expected or
wider than expected. Typical applications include monitoring for transmission phase changes
in the output of a modem or for signal dropouts, such as missing bits in a transport stream.
• Pattern triggering lets the user trigger on the logic state (high, low, or either) of several inputs. The inputs can be external triggers or the input channels themselves. The trigger can be generated either upon entering or exiting the pattern. Applications include triggering on a particular address select or data bus condition. Once the pattern trigger is established, the operator can probe throughout the circuit, taking measurements synchronized with the trigger.
• State qualified triggering enables the oscilloscope to trigger on one source, such as the input
signal itself, only after the occurrence of a specified logic pattern. The pattern acts as an
enable or disable for the source.
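A pulse-width trigger of the kind described above can be modeled in software. This sketch scans a digitized record for positive pulses narrower than a set width; the function name and thresholds are illustrative, not from any instrument's documentation:

```python
def narrow_pulses(samples, threshold, min_width):
    """Find positive pulses narrower than expected: measure from each
    rising edge (crossing above threshold) to the next falling edge,
    and flag any width below min_width, as a pulse-width trigger
    circuit does."""
    hits = []
    rise = None
    prev = samples[0]
    for i, s in enumerate(samples[1:], start=1):
        if prev <= threshold < s:                         # rising edge
            rise = i
        elif prev > threshold >= s and rise is not None:  # falling edge
            width = i - rise
            if width < min_width:
                hits.append((rise, width))  # (position, width in samples)
            rise = None
        prev = s
    return hits
```

Setting `min_width` to half a clock period, for example, flags any glitch narrower than that, and the samples preceding each hit show what events led up to it.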
Advanced Features
Some digital oscilloscopes provide enhanced triggering modes that permit the user to select the
level and slope for each input. This flexibility makes it easy to look for odd pulse shapes in the
pulse-width trigger mode and for subtle dropouts in the interval trigger mode. It also simplifies
testing different logic types (TTL, CMOS, and ECL), and testing analog/digital combinational
circuits with dual logic triggering. Additional flexibility is available on multiple channel scopes.
Once a trigger has been sensed, multiple simultaneously sampled inputs permit the user to moni-
tor conditions at several places in the unit under test, with each channel synchronized to the trig-
ger. Additional useful trigger features for monitoring jitter or drift on a repetitive signal include:
• Enveloping
• Extremes
• Waveform delta
• Roof/floor
These functions are related, and in some cases describe the same general operating mode.
Various scope manufacturers use different nomenclature to describe proprietary triggering meth-
ods. Generally speaking, as the waveshape changes with respect to the trigger, the scope gener-
ates upper and lower traces. For every nth sample point with respect to the trigger, the maximum
and minimum values are saved. Thus, any jitter or drift is displayed in the envelope.
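The envelope behavior can be sketched as follows, with each acquisition being a list of samples taken at the same positions relative to the trigger (illustrative only):

```python
def accumulate_envelope(acquisitions):
    """Envelope (roof/floor) mode: for each sample position relative to
    the trigger, keep the maximum and minimum values seen over many
    acquisitions, so any jitter or drift shows up as a widening gap
    between the two traces."""
    roof = list(acquisitions[0])
    floor = list(acquisitions[0])
    for acq in acquisitions[1:]:
        for i, v in enumerate(acq):
            roof[i] = max(roof[i], v)
            floor[i] = min(floor[i], v)
    return roof, floor
```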
Advanced triggering features provide the greatest benefit in conjunction with single-shot
sampling. Repetitive sampling scopes can only capture and display signals that repeat precisely
from cycle to cycle. Several cycles of the waveform are required to create a digitally-recon-
structed representation of the input. If the signal varies from cycle to cycle, such scopes can be
less effective than a standard analog oscilloscope for accurately viewing a waveform.
9.3.4 Bibliography
Albright, John R.: “Waveform Analysis with Professional-Grade Oscilloscopes,” Electronic Ser-
vicing and Technology, Intertec Publishing, Overland Park, Kan., April 1989.
Breya, Marge: “New Scopes Make Faster Measurements,” Mobile Radio Technology Magazine,
Intertec Publishing, Overland Park, Kan., November 1988.
Harris, Brad: “The Digital Storage Oscilloscope: Providing the Competitive Edge,” Electronic
Servicing and Technology, Intertec Publishing, Overland Park, Kan., June 1988.
Harris, Brad: “Understanding DSO Accuracy and Measurement Performance,” Electronic Ser-
vicing and Technology, Intertec Publishing, Overland Park, Kan., April 1989.
Hoyer, Mike: “Bandwidth and Rise Time: Two Keys to Selecting the Right Oscilloscope,” Elec-
tronic Servicing and Technology, Intertec Publishing, Overland Park, Kan., April 1990.
Montgomery, Steve: “Advances in Digital Oscilloscopes,” Broadcast Engineering, Intertec
Publishing, Overland Park, Kan., November 1989.
Persson, Conrad: “Oscilloscope Special Report,” Electronic Servicing and Technology, Intertec
Publishing, Overland Park, Kan., April 1990.
Persson, Conrad: “Oscilloscope: The Eyes of the Technician,” Electronic Servicing and Technol-
ogy, Intertec Publishing, Overland Park, Kan., April 1987.
Toorens, Hans: “Oscilloscopes: From Looking Glass to High-Tech,” Electronic Servicing and
Technology, Intertec Publishing, Overland Park, Kan., April 1990.
Whitaker, Jerry C.: Electronic Systems Maintenance Handbook, CRC Press, Boca Raton, Fla.,
2001.
Chapter
9.4
Spectrum Analysis
9.4.1 Introduction
An oscilloscope-type instrument displays voltage levels referenced to time, while a spectrum analyzer displays signal levels referenced to frequency. The frequency components of the signal
applied to the input of the analyzer are detected and separated for display against a frequency-related time base. Spectrum analyzers are available in a variety of ranges, with some models designed for use at audio or video frequencies and others intended for use at radio frequencies.
The primary application of a spectrum analyzer is the measurement and identification of RF
signals. When connected to a small receiving antenna, the analyzer can measure carrier and side-
band power levels. By expanding the sweep width of the display, offset or multiple carriers can
be observed. By increasing the vertical sensitivity of the analyzer and adjusting the center fre-
quency and sweep width, it is possible to observe the occupied bandwidth of the RF signal. Con-
vention dictates that the vertical axis displays amplitude, and the horizontal axis displays
frequency. This frequency-domain presentation allows the user to glean more information about
the characteristics of an input signal than is possible from an oscilloscope. Figure 9.4.1 compares
the oscilloscope and spectrum analyzer display formats.
When using the spectrum analyzer, care must be taken not to overload the front end with a strong input signal. Overloading can cause “false” signals to appear on the display. These false
signals are the result of non-linear mixing in the front-end of the instrument. False signals may
be identified by changing the RF attenuator setting to a higher level. The amplitude of false sig-
nals (caused by overloading) will drop much more than the amount of increased attenuation.
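The attenuator test can be expressed as a simple rule: a genuine signal drops by roughly the attenuation added, while an internally generated mixing product drops by noticeably more. A hedged sketch (function name and margin are illustrative):

```python
def classify_response(level_before_db, level_after_db, added_atten_db,
                      margin_db=1.0):
    """Attenuator test for false signals: after adding input
    attenuation, a genuine signal drops by about the attenuation added,
    while a product of front-end overload drops by more."""
    drop = level_before_db - level_after_db
    if drop > added_atten_db + margin_db:
        return "overload product"
    return "real signal"
```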
The spectrum analyzer is useful in troubleshooting receivers as well as transmitters. As a
tuned signal tracer, it is well adapted to stage-gain measurements and other tests. There is one
serious drawback, however. The 50 Ω spectrum analyzer input can load many receiver circuits
too heavily, especially high impedance circuits such as FET amplifiers. Isolation probes are
available to overcome loading problems. Such probes, however, also attenuate the input signal,
and unless the spectrum analyzer has enough reserve gain to overcome the loss caused by the iso-
lation probe, the instrument will fail to provide useful readings. Isolation probes with 20 dB to
40 dB attenuation are typical. As a rule of thumb, probe impedance should be at least 10 times
the impedance of the circuit to which it is connected.
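The 10-times rule follows from a simple divider model of probe loading; this sketch (illustrative) shows that even a probe of 10 times the circuit impedance still reads about 9 percent low:

```python
def loading_error_percent(circuit_z_ohms, probe_z_ohms):
    """Fraction of the true voltage lost when a probe of the given
    impedance loads a node of the given source impedance, using a
    simple divider model: Vmeasured = Vtrue * Zp / (Zp + Zs)."""
    measured = probe_z_ohms / (probe_z_ohms + circuit_z_ohms)
    return 100 * (1 - measured)

# With a probe 10x the circuit impedance, the reading is low by ~9%:
print(loading_error_percent(1_000, 10_000))
```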
9.4.3 Applications
The primary application for a spectrum analyzer centers around measuring the occupied band-
width of an input signal. Harmonics and spurious signals can be checked and potential causes
investigated. Figure 9.4.3 shows a typical test setup for making transmitter measurements.
The spectrum analyzer is also well-suited to making accurate transmitter FM deviation mea-
surements. This is accomplished using the Bessel null method. The Bessel null is a mathematical
function that describes the relationship between spectral lines in frequency modulation. The
Bessel null technique is highly accurate; it forms the basis for modulation monitor calibration.
The concept behind the Bessel null method is to drive the carrier spectral line to zero by chang-
ing the modulating frequency. When the carrier amplitude is zero, the modulation index is given
by a Bessel function. Deviation may be calculated from
Δfc = MI × fm    (9.4.1)

Where:
Δfc = deviation frequency
MI = modulation index
fm = modulating frequency
The carrier frequency “disappears” at the Bessel null point, with all power remaining in the
FM sidebands.
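The carrier nulls occur at modulation indices equal to the zeros of the J0 Bessel function (approximately 2.405, 5.520, and 8.654). A short sketch of the deviation calculation per Eq. (9.4.1); the tone frequency in the example is illustrative:

```python
# First few zeros of the J0 Bessel function; at these modulation
# indices the FM carrier component nulls.
J0_ZEROS = (2.40483, 5.52008, 8.65373)

def deviation_at_null(modulating_freq_hz, null_number=1):
    """Deviation implied by observing the nth carrier null:
    deviation = modulation index x modulating frequency (Eq. 9.4.1)."""
    return J0_ZEROS[null_number - 1] * modulating_freq_hz

# Example: with a 2079.2 Hz tone, the first carrier null corresponds
# to a deviation of roughly 5 kHz (2.40483 x 2079.2 Hz).
print(deviation_at_null(2079.2))
```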
A tracking generator may be used in conjunction with the spectrum analyzer to check the
dynamic response of frequency-sensitive devices, such as transmitter isolators, cavities, ring
combiners, duplexers, and antenna systems. A tracking generator is a frequency source that is
locked in step with the spectrum analyzer horizontal trace rate. The resulting display shows the amplitude-versus-frequency response of the device under test. The spectrum
analyzer may also be used to perform gain-stage measurements. The combination of a spectrum
analyzer and a tracking generator makes filter passband measurements possible. As measure-
Figure 9.4.3 Test setup for measuring the harmonic and spurious output of a transmitter. The
notch filter is used to remove the fundamental frequency to prevent overdriving the spectrum ana-
lyzer input. (After [2].)
ments are made along the IF chain of a receiver, the filter passbands become increasingly narrow,
as illustrated in Figure 9.4.4.
Figure 9.4.5 Common test setup to measure transmitter harmonic distortion with a spectrum ana-
lyzer. (After [3].)
Figure 9.4.6 Typical spectrum analyzer display of a single-cavity filter. Bandwidth (BW) = 458 MHz to 465 MHz = 7 MHz. Filter quality factor (Q) = fCT / BW = 463 MHz / 7 MHz = 66. The trace shows 1 dB insertion loss. (After [3].)
Figure 9.4.7 Test setup for duplexer tuning using a spectrum analyzer. (After [3].)
The duplexer insertion loss of each port can be measured as the difference between a refer-
ence amplitude and the pass frequency amplitude. The reference signal is measured by shorting
the generator output cable to the analyzer input cable. This reference level nulls any cable losses
to prevent them from being included in duplexer insertion loss measurements.
average detection A detection scheme wherein the average (mean) amplitude of a signal is mea-
sured and displayed.
B-SAVE A (or B, C MINUS A) Waveform subtraction mode wherein a waveform in memory is
subtracted from a second, active waveform and the result displayed on screen.
band switching Technique for changing the total range of frequencies (band) to which a spec-
trum analyzer can be tuned.
baseband The lowest frequency band in which a signal normally exists (often the frequency
range of a receiver’s output or a modulator’s input); the band from dc to a designated fre-
quency.
baseline clipper A means of blanking the bright baseline portion of the analyzer display.
Bessel functions Solutions to a particular type of differential equation; they predict the amplitudes of FM signal components.
Bessel null method A technique most often used to calibrate FM deviation meters. A modulat-
ing frequency is chosen such that some frequency component of the FM signal nulls at a
specified peak deviation.
calibrator A signal generator producing a specified output used for calibration purposes.
carrier-to-noise ratio (C/N) The ratio of carrier signal power to average noise power in a given
bandwidth surrounding the carrier; usually expressed in decibels.
center frequency The frequency at the center of a given spectrum analyzer display.
coax bands The range of frequencies that can be satisfactorily passed via coaxial cable.
comb generator A source producing a fundamental frequency component and multiple compo-
nents at harmonics of the fundamental.
component In spectrum analysis, usually denotes one of the constituent sine waves making up
electrical signals.
decibel (dB) Ten times the logarithm of the ratio of one electrical power to another.
delta F (ΔF) A mode of operation on a spectrum analyzer wherein a difference in frequency may
be read out directly.
distortion Degradation of a signal, often a result of nonlinear operations, resulting in unwanted
signal components. Harmonic and intermodulation distortion are common types.
dynamic range The maximum ratio of two simultaneously present signals which can be mea-
sured to a specified accuracy.
emphasis Deliberate shaping of a signal spectrum or some portion thereof, often used as a
means of overcoming system noise. Preemphasis is often used before signal transmission
and deemphasis after reception.
envelope The limits of an electrical signal or its parameters. For instance, the modulation enve-
lope limits the amplitude of an AM carrier.
equivalent noise bandwidth The width of a rectangular filter that produces the same noise
power at its output as an actual filter when subjected to a spectrally flat input noise signal.
Real filters pass different noise power than implied by their nominal bandwidths because
their skirts are not infinitely sharp.
external mixers A mixer, often in a waveguide format, that is used external to a spectrum ana-
lyzer.
filter A circuit that separates electrical signals or signal components based on their frequencies.
filter loss The insertion loss of a filter: the minimum difference in dB between the input signal
level and the output level.
first mixer input level Signal amplitude at the input to the first mixer stage of a spectrum ana-
lyzer. An optimum value is usually specified by the manufacturer.
flatness Unwanted variations in signal amplitude over a specified bandwidth, usually expressed
in dB.
Fourier analysis A mathematical technique for transforming a signal from the time domain to
the frequency domain and vice versa.
frequency band A range of frequencies that can be covered without switching.
frequency deviation The maximum difference between the instantaneous frequency and the car-
rier frequency of an FM signal.
frequency domain representation The portrayal of a signal in the frequency domain; represent-
ing a signal by displaying its sine wave components; the signal spectrum.
frequency marker An intensified or otherwise distinguished spot on a spectrum analyzer dis-
play indicating a specified frequency point.
frequency range That range of frequencies over which the performance of the instrument is
specified.
fundamental frequency The basic rate at which a signal repeats itself.
grass Noise or a noise-like signal giving the ragged, hashy appearance of grass seen close-up at
eye level.
graticule The calibrated grid overlaying the display screen of spectrum analyzers, oscilloscopes,
and other test instruments.
harmonic distortion The distortion that results when a signal interacts with itself, often because of nonlinearities in the equipment, to produce sidebands at multiples, or harmonics, of the frequency components of the original signal.
harmonic mixing A technique wherein harmonics of the local oscillator signal are deliberately
mixed with the input signal to achieve a large total input bandwidth. Enables a spectrum
analyzer to function at higher frequencies than would otherwise be possible.
harmonics Frequency components of a signal occurring at multiples of the signal’s fundamental
frequency.
heterodyne spectrum analyzer A type of spectrum analyzer which scans the input signal by
sweeping the incoming frequency band past one of a set of fixed resolution bandwidth fil-
ters and measuring the signal level at the output of the filter.
intermediate frequency (IF) In a heterodyne process, the sum or difference frequency at the
output of a mixer stage which will be used for further signal processing.
IF gain The gain of an amplifier stage operating at the IF frequency.
instantaneous frequency The rate of change of the phase of a sinusoidal signal at a particular
instant.
intermodulation distortion The distortion that results when two or more signals interact, usu-
ally because of non-linearities in the equipment, to produce new signals.
linear scale A scale wherein each increment represents a fixed difference between signal levels.
LO output A port on a spectrum analyzer where a signal from the local oscillator is made avail-
able; used for tracking generators and external mixing.
local oscillator An oscillator that produces the internal signal that is mixed with an incoming
signal in a mixer to produce the IF signal.
logarithmic scale A scale wherein each scale increment represents a fixed ratio between signal
levels.
magnitude-only measurement A measurement that responds only to the magnitude of a signal
and is insensitive to its phase.
MAX HOLD A spectrum analyzer feature which captures the maximum signal amplitude at all
displayed frequencies over a series of sweeps.
max span The maximum frequency span that can be swept and displayed by a spectrum ana-
lyzer.
MAX/MIN A display mode on some spectrum analyzers that shows the maximum and mini-
mum signal levels at alternate frequency points; its advantage is its resemblance to an ana-
log display.
maximum input level The maximum input signal amplitude which can be safely handled by a
particular instrument.
MIN HOLD A spectrum analyzer feature which captures the minimum signal amplitude at all
displayed frequencies over a series of sweeps.
mixing The process whereby two or more signals are combined to produce sum and difference
frequencies of the signals and their harmonics.
noise Unwanted random disturbances superimposed on a signal which tend to obscure it.
noise bandwidth The frequency range of a noise-like signal. For white noise, the noise power is
directly proportional to the bandwidth of the noise.
noise floor The self-noise of an instrument or system that represents the minimum limit at which
input signals can be observed. The spectrum analyzer noise floor appears as a “grassy”
baseline in the display even when no signal is present.
noise sideband Undesired response caused by noise internal to the spectrum analyzer appearing
on the display immediately around a desired response, often having a pedestal-like appear-
ance.
peak/average cursor A manually controllable function which enables the user to set the thresh-
old at which the type of signal processing changes prior to display in a digital storage sys-
tem.
peak detection A detection scheme wherein the peak amplitude of a signal is measured and dis-
played. In spectrum analysis, 20 log (peak) is often displayed.
peaking The process of adjusting a circuit for maximum amplitude of a signal by aligning inter-
nal filters. In spectrum analysis, peaking is used to align preselectors.
period The time interval at which a process recurs; the inverse of the fundamental frequency.
phase lock The control of an oscillator or signal generator so as to operate at a constant phase
angle relative to a stable reference signal source. Used to ensure frequency stability in spec-
trum analyzers.
preselector A tracking filter located ahead of the first mixer which allows only a narrow band of
frequencies to pass into the mixer.
products Signal components resulting from mixing or from passing signals through other non-
linear operations such as modulation.
pulse stretcher Pulse shaper that produces an output pulse whose duration is greater than that of
the input pulse and whose amplitude is proportional to that of the peak amplitude of the
input pulse.
pulse repetition frequency (PRF) The frequency at which a pulsing signal recurs; equal to the
fundamental frequency of the pulse train.
reference level The signal level required to deflect the CRT display to the top graticule line.
reference level control The control used to vary the reference level on a spectrum analyzer.
resolution bandwidth (RBW) The width of the narrowest filter in the IF stages of a spectrum
analyzer. The RBW determines how well the analyzer can resolve or separate two or more
closely spaced signal components.
return loss The ratio of power sent to a system to that returned by the system. In the case of
antennas, the return loss can be used to find the SWR.
ring A transient response wherein a signal initially performs a damped oscillation about its
steady-state value.
SAVE function A feature of spectrum analyzers incorporating display storage which enables
them to store displayed spectra.
sensitivity Measure of a spectrum analyzer’s ability to display minimum level signals at a given
IF bandwidth, display mode, and any other influencing factors.
shape factor In spectrum analysis, the ratio of an RBW filter’s 60 dB bandwidth to its 3 dB or 6
dB width (depending on manufacturer).
sideband Signal components observable on either or both sides of a carrier as a result of modu-
lation or distortion processes.
sideband suppression An amplitude modulation technique wherein one of the AM sidebands is
deliberately suppressed, usually to conserve bandwidth.
single sweep Operating mode in which the sweep generator must be reset for each sweep. Espe-
cially useful for obtaining single examples of a signal spectrum.
span per division, Span/Div Frequency difference represented by each major horizontal divi-
sion of the graticule.
spectrum The frequency domain representation of a signal wherein it is represented by display-
ing its frequency distribution.
spurious response An undesired extraneous signal produced by mixing, amplification, or other
signal processing technique.
stability The property of retaining defined electrical characteristics for a prescribed time and in
specified environments.
starting frequency The frequency at the left hand edge of the spectrum analyzer display.
sweep speed, sweep rate The speed or rate, expressed in time per horizontal division, at which the electron beam of the CRT sweeps the screen.
time domain representation Representation of signals by displaying the signal amplitude as a
function of time; typical of oscilloscope and waveform monitor displays.
time-varying signal A signal whose amplitude changes with time.
total span The total width of the displayed spectrum; the span/div times the number of divisions.
tracking generator A signal generator whose output frequency is synchronized to the frequency
being analyzed by the spectrum analyzer.
ultimate rejection The ratio, in dB, between a filter’s passband response and its response beyond
the slopes of its skirts.
vertical scale factor, vertical display factor The number of dB, volts, etc., represented by one
vertical division of a spectrum analyzer display screen.
waveform memory Memory dedicated to storing a digital replica of a spectrum.
waveform subtraction A process wherein a saved waveform can be subtracted from a second,
active waveform.
zero hertz peak A fictitious signal peak occurring at zero Hertz that conveniently marks zero
frequency. The peak is present regardless of whether or not there is an input signal.
zero span A spectrum analyzer mode of operation in which the RBW filter is stationary at the
center frequency; the display is essentially a time domain representation of the signal prop-
agated through the RBW filter.
9.4.5 References
1. Kinley, Harold: “Using Service Monitor/Spectrum Analyzer Combos,” Mobile Radio Tech-
nology, Intertec Publishing, Overland Park, Kan., July 1987.
2. Pepple, Carl: “How to Use a Spectrum Analyzer at the Cell Site,” Cellular Business, Intertec Publishing, Overland Park, Kan., March 1989.
3. Wolf, Richard J.: “Spectrum Analyzer Uses for Two-Way Technicians,” Mobile Radio Tech-
nology, Intertec Publishing, Overland Park, Kan., July 1987.
9.4.6 Bibliography
Whitaker, Jerry C. (ed.): Electronic Systems Maintenance Handbook, 2nd ed., CRC Press, Boca
Raton, Fla., 2002.
Chapter
9.5
Reliability Engineering
9.5.1 Introduction
Before the advent of semiconductors, reliability of electronic equipment was an elusive goal.
Racks full of vacuum tubes did not lend themselves to long-term stability and dependable opera-
tion. In actual circuit complexity, the systems were generally simple compared to today's com-
puter-based hardware. However, the primary active devices that made the equipment of 30–40
years ago operate—tubes—were fragile components with a more-or-less limited lifetime.
Enter solid state electronics and the wonders that it has provided. We have been blessed with
increased reliability, increased performance, reduced space requirements, reduced heat genera-
tion, and practical affordable systems that were little more than dreams 30 years ago, or even 20
years ago. However, in our march ahead with technology, engineers have acquired new problems
in the areas of preventive maintenance and troubleshooting.
The electronics industry has grown up in a technical sense. And the point we have reached in
equipment sophistication requires new approaches to reliability and maintainability.
Reliability is the primary concern of any equipment user. The best computer system in the
world is useless if it does not work. The most powerful and sophisticated transmitter is of no
value if it will not stay on the air. Maintainability ranks right behind reliability in concern to pro-
fessional users. When equipment fails, the user needs to know that it can be returned to service
within a reasonable length of time.
The science of reliability and maintainability is not just an esoteric concept developed by the
aerospace industry to satisfy government dictates. It is a science that has fostered the continued
improvements in components and systems that we enjoy today.
9.5.1a Objectives
The ultimate goal of any maintenance department is zero downtime. This is an elusive goal, but
one that can be approximated by examining the vulnerable areas of plant operation and taking
steps to prevent a sequence of events that could result in system failure. In cases where failure
prevention is not practical, a reliability assessment should encompass the stocking of spare parts,
circuit boards, or even entire systems. A large facility can often cost-justify the purchase of backup gear that can be used as spares for the entire complex. Backup hardware is expensive, but so is downtime.
Failures can, and do, occur in electronic systems. The goal of product quality assurance at
every step in the manufacturing and operating chain is to ensure that failures do not produce a
systematic or repeatable pattern. The ideal is to eliminate failures altogether. Short of that, the
secondary goal is to end up with a random distribution of failure modes. This situation indicates
that the design of the system is fundamentally correct and that failures are caused by random
events that cannot be predicted. In an imperfect world, this is often the best we can hope for. Reli-
ability and maintainability must be built into products or systems at every step in the design, con-
struction, and maintenance process. It cannot be treated as an afterthought.
9.5.1b Terminology
In order to understand the principles of reliability engineering, the following basic terms must be
defined:
availability Probability of a system subject to repair operating satisfactorily on demand.
average life The mean value for a normal distribution of product or component lives; generally
applied to mechanical failures resulting from “wear-out.”
burn-in Initially high failure rate encountered when first placing a component on test. Burn-in
failures are usually associated with manufacturing defects and the debugging phase of early
service.
defect Any deviation of a unit or product from specified requirements. A unit or product may
contain more than one defect.
degradation failure A failure that results from a gradual change in performance characteristics
of a system or part with time.
downtime Time during which equipment is not capable of doing useful work because of mal-
function. This does not include preventive maintenance time. In other words, downtime is
measured from the occurrence of a malfunction to the correction of that malfunction.
failure A detected cessation of ability to perform a specified function or functions within previ-
ously established limits. It is beyond adjustment by the operator by means of controls nor-
mally accessible during routine operation of the system. (This requires that measurable
limits be established to define “satisfactory performance.”)
failure mode and effects analysis (FMEA) An iterative documented process performed to iden-
tify basic faults at the component level and determine their effects at higher levels of
assembly.
failure rate The rate at which failure occurs during an interval of time as a function of the total
interval length.
fault tree analysis (FTA) An iterative documented process of a systematic nature performed to
identify basic faults, determine their causes and effects, and establish their probabilities of
occurrence.
lot size A specific quantity of similar material or collection of similar units from a common
source; in inspection work, the quantity offered for inspection and acceptance at any one
y g g
time. It may be a collection of raw material, parts, subassemblies inspected during produc-
tion, or a consignment of finished products to be sent out for service.
maintainability The probability that a failure will be repaired within a specified time after a
failure occurs.
mean time to failure (MTTF) The measured operating time of a single piece of equipment
divided by the total number of failures during the measured period of time. This measure-
ment is normally made during that period between early life and wear-out failures.
mean time to repair (MTTR) The measured repair time divided by the total number of failures
of the equipment.
mode of failure The physical description of the manner in which a failure occurs and the operat-
ing condition of the equipment or part at the time of the failure.
part failure rate The rate at which a part fails to perform its intended function.
quality assurance (QA) All those activities, including surveillance, inspection, control, and doc-
umentation aimed at ensuring that the product will meet its performance specifications.
reliability The probability that an item will perform satisfactorily for a specified period of time
under a stated set of use conditions.
reliability growth Action taken to move a hardware item toward its reliability potential, during
development or subsequent manufacturing or operation.
reliability prediction Compiled failure rates for parts, components, subassemblies, assemblies,
and systems. These generic failure rates are used as basic data to predict a value for reli-
ability.
sample One or more units selected at random from a quantity of product to represent that prod-
uct for inspection purposes.
sequential sampling Sampling inspection in which, after each unit is inspected, the decision is
made to accept, reject, or inspect another unit. (Sequential sampling as defined here is
sometimes called unit sequential sampling or multiple sampling.)
system A combination of parts, assemblies and sets joined together to perform a specific opera-
tional function or functions.
test to failure Testing conducted on one or more items until a predetermined number of failures
have been observed. Failures are induced by increasing electrical, mechanical, and/or envi-
ronmental stress levels, usually in contrast to life tests in which failures occur after
extended exposure to predetermined stress levels. A life test can be considered a test-to-
failure using age as the stress.
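Several of the terms defined above (MTTF, MTTR, availability) are related numerically. As a sketch of those relationships, using invented figures rather than data from the text:

```python
# Hypothetical operating data for illustration; the hours and failure
# counts below are not taken from the handbook.
def mttf(operating_hours, failures):
    """Mean time to failure: measured operating time / number of failures."""
    return operating_hours / failures

def mttr(repair_hours, failures):
    """Mean time to repair: measured repair time / number of failures."""
    return repair_hours / failures

def availability(mttf_h, mttr_h):
    """Steady-state availability: fraction of time the system is operable."""
    return mttf_h / (mttf_h + mttr_h)

up = mttf(8760.0, 4)    # 2190 h of operation per failure
fix = mttr(16.0, 4)     # 4 h average repair time
print(round(availability(up, fix), 4))  # 0.9982
```

The same arithmetic underlies downtime budgeting: improving either MTTF or MTTR raises availability.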
state-of-the-art allow. There are trade-offs in this process. It would be unrealistic, for example, to
perform extensive testing to identify potential failures if the cost of that testing exceeded the cost
savings generated by not having to replace the devices later in the field.
The focus of any QA effort is quality and reliability. These terms are not synonymous. They
are related, but do not provide the same measure of a product:
• Quality is the measure of a product's performance relative to some established criteria.
• Reliability is the measure of a product's life expectancy.
Stated from a different perspective, quality answers the question of whether the product meets
applicable specifications now; reliability answers the question of how long the product will con-
tinue to meet its specifications.
ability is obtained. This prediction method depends on the availability of accurate evaluation
models that reflect the reliability of lower-level components. Various formal prediction proce-
dures are used, based on theoretical and statistical concepts, including:
• Parts-count method. The parts-count approach to reliability prediction provides an estimate
of reliability based on a count by part type (ICs, transistors, resistors, capacitors, and other
components). This method is useful during the early design stage of a product, when the
amount of available detail is limited. It involves counting the number of components of each
type, multiplying that number by a generic failure rate for each part type, and summing the
products to obtain the failure rate of each functional circuit, subassembly, assembly, and/or
block depicted in the system block diagram. This method is useful in the design phase
because it provides rapid estimates of reliability, permitting assessment of the feasibility of a
given concept.
• Stress-analysis method. The stress-analysis technique is similar to the parts-count method,
but utilizes a detailed parts model plus calculation of circuit stress values for each part before
determining the failure rate. Each part is evaluated in its electric-circuit and mechanical-
assembly application based on an electrical and thermal stress analysis. After part failure
rates have been established, a combined failure rate for each functional block is determined.
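The parts-count calculation described above can be sketched in a few lines. The generic failure rates below (failures per million hours) and the part mix are placeholders for illustration, not values from a reliability handbook:

```python
# Generic per-part-type failure rates in failures per 10^6 hours
# (illustrative values only, not handbook data).
generic_rates = {"ic": 0.1, "transistor": 0.05, "resistor": 0.002, "capacitor": 0.01}

def block_failure_rate(parts_count):
    """Sum (quantity x generic failure rate) over each part type in a block."""
    return sum(generic_rates[ptype] * qty for ptype, qty in parts_count.items())

# Hypothetical functional block from a system block diagram
amplifier = {"ic": 4, "transistor": 10, "resistor": 60, "capacitor": 25}
lam = block_failure_rate(amplifier)   # failures per 10^6 hours
print(round(lam, 3))                  # 1.27
mtbf_hours = 1e6 / lam                # corresponding predicted MTBF
```

The stress-analysis method follows the same summation, but each part's rate is first adjusted by electrical and thermal stress factors rather than taken as a single generic value.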
9.5.2d Standardization
Standardization and reliability go hand-in-hand. Standardization of electronic components began
with military applications in mind; the first recorded work was performed by the U. S. Navy with
vacuum tubes. The Navy recognized that some control at the component level was essential to
the successful incorporation of electronics into Naval systems. The scope of the increase in solid
state electronics within defense hardware is demonstrated dramatically in the evolution of the
Navy destroyer. In 1937, a destroyer carried less than 100 active devices, almost all of them vac-
uum tubes. A typical destroyer today carries 10 million or more active devices, most of which are
complex integrated circuits (ICs). Standardization is essential to controlling this level of com-
plexity.
Standardization and reliability are closely related, although there are many aspects of stan-
dardization whose reliability implications are subtle. The primary advantages of standardization
include:
• Total product interchangeability. Standardization ensures that products of the same part
number provide the same physical and electrical characteristics. There have been innumerable
instances of replacement devices bearing the same part number as failed devices, but not
functioning identically to them. In many cases, the differences in performance were so great
that the system would not function with the new device.
• Consistency and configuration control. Component manufacturers constantly redefine their
products in order to improve yields and performance. Consistency and configuration control
provide the user with assurance that product changes will not affect the interchangeability of
the part.
• Efficiencies of volume production. Standardization programs usually result in production
efficiencies that reduce the costs of parts, relative to components with the same level of reli-
ability screening and control.
• Effective spares management. Use of standardized components makes the stocking of spare
parts a much easier task. This aspect of standardization is not a minor consideration.
Figure 9.5.3 Distribution of component failures identified through burn-in testing: (a) steady-state
high-temperature burn-in, (b) temperature cycling. (After [2].)
Figure 9.5.3 compares failure distributions for steady-state high-temperature burn-in versus temperature cycling. Note that cycling screened out a significant number of IC
failures. The distribution of failures under temperature cycling usually resembles the distribution
of field failures. Temperature cycling more closely simulates real-world conditions than steady-
state burn-in. The goal of burn-in testing is to ensure that the component lot is advanced past the
infant mortality stage (T-1 on the bathtub curve). This process is used not only for individual
components, but for entire systems.
Such a systems approach to reliability is effective, but not foolproof. The burn-in period is a
function of statistical analysis; there are no absolute guarantees. The natural enemies of elec-
tronic parts are heat, vibration, and excessive voltage. Figure 9.5.4 documents failures-versus-
hours in the field for a piece of avionics equipment. The conclusion is that a burn-in period of 200 hours or more will eliminate 60 percent of the expected failures. However, the burn-in period
for another system using different components may well be another number of hours.
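The kind of conclusion drawn from Figure 9.5.4 amounts to counting what fraction of recorded failures fall inside a candidate burn-in window. A minimal sketch, using invented failure times rather than the avionics data:

```python
# Hypothetical recorded failure times (hours in service); not data
# from Figure 9.5.4.
failure_hours = [12, 30, 55, 80, 120, 150, 190, 400, 900, 2500]

def fraction_before(burn_in_hours, failures):
    """Share of observed failures occurring at or before the burn-in cutoff."""
    early = sum(1 for t in failures if t <= burn_in_hours)
    return early / len(failures)

print(fraction_before(200, failure_hours))  # 0.7
```

With a different component mix, the same computation would yield a different cutoff, which is exactly why the burn-in period must be re-derived per system.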
So, what does this mean to you, the end-user? Simply that infant mortality is a statistical fact
of life in the operation of any piece of hardware. Most engineers can relate to the problems of
“working the bugs out” of a new piece of equipment.
Because the goal of burn-in testing is to catch system problems and potential faults before the
device or unit leaves the manufacturer, the longer the burn-in period, the greater the likelihood of
catching additional failures. The problems involved with extended burn-in, however, are time
and money. Longer burn-in translates to longer delivery delays and additional costs for the
equipment manufacturer, which are likely to be passed on to the user. The point at which a prod-
uct is shipped is based largely on experience with similar components or systems and the finan-
cial requirement to move products to users.
Figure 9.5.4 The failure history of a piece of avionics equipment as a function of time. Note that 60
percent of the failures occurred within the first 200 hours of service. (After [3].)
Figure 9.5.6 The effects of environmental conditions on the roller-coaster hazard rate curve. (After
[4].)
Figure 9.5.7 System failure modes typically uncovered by environmental stress screening. (After
[5].)
Figure 9.5.8 The effects of environmental stress screening on the reliability bathtub curve. (After
[5].)
might take weeks or months to occur in the field. The end result is greater product reliability for
the user.
The ESS concept requires that every single product undergo qualification testing before
implementation into a larger system for shipment to an end-user. The flaws uncovered by ESS
vary from one unit to the next, but types of failures tend to respond to particular environmental
Table 9.5.1 Comparison of Conventional Reliability Testing and Environmental Stress Screening
(After [4].)
Figure 9.5.9 Integrated circuit screening process and potential results. (After [6].)
stresses. Figure 9.5.9 lists the effects of environmental stresses on integrated circuit elements and
parameters. These data clearly demonstrate that the burn-in screens must match the flaws sought,
or the flaws will probably not be found.
The concept of flaw-stimulus relationships can also be shown in Venn diagram form. Figure
9.5.10 shows a Venn diagram for a hypothetical, but specific product. The required screen would
be different for a different product. For clarity, not all stimuli are shown. Note that there are many
latent defects that will not be uncovered by any one stimulus. For example, a solder splash that is
just barely clinging to a circuit board would probably not be broken loose by high temperature
burn-in or voltage cycling, but vibration or thermal cycling would probably break the particle
loose. Remember also that the defect may only be observable during a stimulation and not
observable during a static bench test.
The levels of stress imposed on a product during ESS should be greater than the stress to
which the product will be subjected during its operational lifetime, while still below the maxi-
mum design parameters. This rule of thumb is pushed to the limits under an enhanced screening
process. Enhanced screening places the component or system at well above the expected field
Figure 9.5.10 Venn diagram representation of the relationship between flaw precipitation and
applied environmental stress. (After [6].)
environmental levels. This process has been found to be useful and cost effective on many pro-
grams and products. Enhanced screening, however, requires the development of screens that are
carefully identified during product design development so that the product can survive the qualification tests. Enhanced screening techniques are often required for cost-effective products on a cradle-to-grave basis. That is, early design changes for screenability save tremendous costs over
the lifetime of the product.
The types of products that can be economically checked through ESS break down into two
categories: high-dollar items and mass-produced items. Units that are physically large in size,
such as RF generators or mainframe computers, are difficult to test in the finished state. Still,
qualification tests using more primitive methods, such as cam-driven truck bed shakers, are practical. Because most large systems generate a large amount of heat, subjecting the equipment to temperature extremes may also be accomplished. Sophisticated ESS for large systems, however,
must rely on qualification testing at the sub-assembly stage.
The basic hardware complement for an ESS test station includes a thermal chamber, a shaker, and a controller/monitor. A typical test sequence includes 10 minutes of exposure to random vibra-
tion, followed by 10 cycles between temperature minimum and maximum. To save time, the two
tests may be performed simultaneously.
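The test sequence just described can be expressed as a simple control script. The `Chamber` class and its methods are hypothetical stand-ins for a real chamber/shaker controller, and the temperature limits are assumed values, not figures from the text:

```python
class Chamber:
    """Minimal stand-in for a thermal chamber/shaker controller (hypothetical API)."""
    def __init__(self):
        self.log = []
    def vibrate(self, profile, minutes):
        self.log.append(("vibrate", profile, minutes))
    def soak(self, temp_c):
        self.log.append(("soak", temp_c))

def run_ess(chamber, t_min_c=-40.0, t_max_c=85.0, cycles=10, vib_minutes=10):
    # 10 minutes of random vibration, then 10 min/max thermal cycles,
    # per the sequence described in the text
    chamber.vibrate(profile="random", minutes=vib_minutes)
    for _ in range(cycles):
        chamber.soak(t_min_c)
        chamber.soak(t_max_c)
    return chamber.log

log = run_ess(Chamber())
print(len(log))  # 21 steps: one vibration run plus 10 cycles x 2 soaks
```

Running the vibration and thermal legs concurrently, as the text notes, would simply interleave these steps rather than sequence them.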
U. S. Navy Research
Figure 9.5.12 The distribution of failure modes in the field for an avionics system. (After [8].)
In [8], research conducted by the U.S. Navy cited connectors and cabling as the main culprits in system failures (see Figure 9.5.12). The study concluded that:
• Approximately 60 percent of all avionics failures are the result of high-level components, pri-
marily connectors and cabling.
• 25 percent of all failures are caused by maintenance and test procedures.
• 13 percent are the result of overstress and abuse of system components.
• Less than 2 percent are the result of IC failures.
The high percentage of connector and cable related problems will come as no surprise to most
maintenance technicians. Wiring harnesses may be subjected to significant physical abuse under
the best operating conditions.
Conditions of extreme low or high temperatures, high humidity, and vibration during trans-
portation may have a significant impact on long-term reliability of the system. It is, for example,
possible—and even probable—that the vibration stress of the truck ride to a remote transmitting
site will represent the worst-case vibration exposure of the transmitter and all components within
during the lifetime of the product.
Manufacturers report that most of the significant vibration and shock problems for land-
based products arise from the shipping and handling environment. Shipping tends to be an order
of magnitude more severe than the operating environment with respect to vibration and shock.
Early testing for these problems involved simulation of actual shipping and handling events, such
as end-drops, truck trips, side impacts, and rolling over curbs and cobblestones. Although unso-
phisticated by today's standards, these tests were capable of improving product resistance to ship-
ping-induced damage. End-users must realize that the field is the final test environment and
burn-in chamber for the equipment that we use.
door permits checking of the ADG oil without having to remove the compartment panel. How-
ever, the oil level sight gauge requires line-of-sight reading. The way it is installed, the gauge
cannot be read through the access door, even with an inspection mirror. The entire compartment
panel, secured with 63 fasteners, must be removed just to see if oil servicing is needed.
It is clear that maintainability must play a larger role in the design formula of equipment.
With additional hardware complexity, designers must consider up-front the need to fix a piece of
equipment after it has left the factory.
Reliability-centered maintenance (RCM) is based on the premise that reliability is a design characteristic to be realized and pre-
served during the operational life of the system. This concept is further extended to assert that
efficient and cost-effective lifetime maintenance and logistic support programs can be developed
using decision logic that focuses on the consequences of failure. Such a program provides the
following benefits:
• The probability of failure is reduced.
• Incipient failures are detected and corrected either before they occur, or before they develop
into major defects.
• Greater cost-effectiveness of the maintenance program is achieved.
Maintenance Classifications
Under RCM, maintenance tasks are classified into the following areas:
• Hard-time maintenance—failure modes that require scheduled maintenance at predeter-
mined fixed intervals of age or usage.
• On-condition maintenance—failure modes that require scheduled inspections or tests
designed to measure deterioration of an item so that corrective maintenance can be per-
formed.
• Condition monitoring—failure modes that require unscheduled tests or inspection of com-
ponents where failure can be tolerated during operation of the system, or where impending
failure can be detected through routine monitoring during normal operations.
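The three task classes above are assigned by decision logic applied to each failure mode. A toy sketch of such logic follows; the failure-mode attributes and their names are illustrative assumptions, not terminology from an RCM standard:

```python
# Hypothetical attributes describing a failure mode; real RCM decision
# logic asks a longer sequence of questions about failure consequences.
def classify(mode):
    """Assign an RCM maintenance class to a failure-mode description."""
    if mode.get("age_related") and mode.get("fixed_interval_known"):
        return "hard-time"            # scheduled work at fixed age/usage
    if mode.get("deterioration_measurable"):
        return "on-condition"         # scheduled inspection or test
    return "condition monitoring"     # unscheduled; watch during operation

print(classify({"age_related": True, "fixed_interval_known": True}))  # hard-time
print(classify({"deterioration_measurable": True}))                   # on-condition
print(classify({}))                                                   # condition monitoring
```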
Each step in the RCM process is critical. When properly executed, RCM results in a preven-
tive maintenance program that ensures the system will remain in good working condition. RCM
analysis involves the following key steps:
• Determine the maintenance-important items in the system
• Acquire failure data on the system
• Develop fault-tree analysis data
• Apply decision logic to critical failure modes
• Compile and record maintenance classifications
• Implement the RCM program
• Apply sustaining-engineering based on field experience
The RCM process has a life-cycle perspective. The driving force is reduction of the scheduled
maintenance burden and support cost while maintaining the necessary readiness state. As with
other reliability tasks, the RCM process is reapplied as available data move from a predicted state
to operational experience.
Figure 9.5.18 The costs of correcting software errors at key points in the development cycle.
(After [16].)
differences arise from the fact that a software program is essentially an abstraction; it is immune
from mechanical defects, electrical noise, and physical degradation that affect hardware. As both
technologies advance, however, the distinctions between hardware and software are beginning to
lessen. For example, system designers freely trade hardware complexity for software simplicity,
and vice versa.
Hardware reliability is concerned with latent defects, physical design, and the operating environment. Software reliability is pure logic; it has no problems analogous to wear-out. Software
is also used to mask hardware failures. Fault-tolerant systems are charged with maintaining sys-
tem integrity in the face of a hardware malfunction. Software failures for fault-tolerant systems
can be categorized into one or more of the following categories:
• Failure to perform functions required under normal operating conditions.
• Failure to perform functions required under abnormal conditions. Such conditions could
include overload, invalid input, or another abnormal operating state.
• Failure to recover from a hardware failure.
• Failure to diagnose a hardware failure.
Fault-tolerant control systems with duplicate critical parts are expected to function despite
any single hardware problem. Enumeration of all possible faults is a staggering task. Systems
that operate in real time and are subject to quasi-random inputs can be driven into internal states
beyond classification. Software design and reliability testing for a fault-tolerant system is no
easy matter.
Because software failures come with greatly varying levels of importance, it is necessary to
grade them in steps ranging from fatal flaws to cosmetic problems. The density of software bugs
must also be related to the likely rate at which the defects will create actual failures under field
conditions. A quality standard for software balances the engineer's desire to remove all possible
defects with the need to deliver products within a reasonable timeframe, and at a reasonable price
point. Under conditions in which the test environment is a good model of the actual field envi-
ronment, and the software code is exercised completely, the “find-and-fix” process can be used
as a reliable guide to software quality. Under these conditions, the rate at which bugs are found is
roughly proportional to the density of bugs in the code.
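If the find rate is proportional to the remaining defect density, the cumulative count of bugs found follows an exponential-saturation curve, which is the idea behind Goel-Okumoto-style software reliability growth models. A sketch, with the initial defect count and detection constant chosen arbitrarily for illustration:

```python
import math

def bugs_found(t, n0=100, k=0.02):
    """Expected cumulative defects found after t hours of testing,
    assuming the find rate is proportional to defects remaining.
    n0 = assumed initial defect count, k = assumed detection rate constant."""
    return n0 * (1.0 - math.exp(-k * t))

def bugs_remaining(t, n0=100, k=0.02):
    return n0 - bugs_found(t, n0, k)

print(round(bugs_found(50), 1))  # ~63.2 of 100 defects after 50 h of testing
```

Fitting such a curve to actual find-and-fix data gives a rough estimate of residual defect density, subject to the caveat in the text that the test environment must resemble the field environment.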
Software run on critical systems is often checked by purposely overloading the system. Rea-
son tells us that problems can be expected to occur under the stresses that are present when
queues reach their limits, registers go into overflow, and transactions are delayed. Such abnormal
behavior is likely to exercise “contingency software” not used under normal conditions.
Distinguishing between hardware and software problems is often a difficult task. Problems
manifest themselves at the system level and sometimes may be remedied by changes in either the
software or the hardware. Because software is more flexible, the easiest and most economical
way to fix a problem is usually through a software change. With this point in mind, it is possible
that the perceived contribution of software to system unreliability may be artificially high.
Causal Analysis
In the general application, causal analysis attempts to determine the cause of some observed
phenomena. In software development, causal analysis seeks to explain how faults have been
injected into the program, and how they have led to system or software failures. Once a fault is
identified, programmers must then determine at what point in the process the fault was injected,
and how it can be prevented in future program development.
Attempts to predict software reliability have been made by mathematicians utilizing probabil-
ity and statistics tools. However, most models have fallen short of the goal of developing a soft-
ware equivalent of common hardware models. Still, the basic concepts employed in hardware
reliability assurance can successfully be applied to software. Reliability engineering concepts
and tools may, if applied within the software development process, directly improve the reliabil-
ity of the end-product.
Software Maintainability
The maintainability of software is determined by those characteristics of the code and computer
support resources that affect the ability of software programmers and analysts to change instruc-
tions. Such changes are made to:
• Correct program errors
• Expand system capabilities
• Add or remove specific features from the program
• Modify software to make it compatible with hardware changes
Modern software programs are usually constructed of modules that perform specific func-
tions. Modular construction is achieved by dividing total program functions into a series of ele-
ments that each have a single input (entry point) and a single output (exit point). The
development of a large system may permit the use of a given module a number of times, simplifying software design and debugging. Modules performing a specific function, such as I/O
management or display control, may also be used on other product lines.
The top-level modules of a software program provide functional control over the lower-level
modules on which they call. The top-level software, therefore, is exercised more frequently than
lower-level modules. Various estimates of run-time distributions show that just 4 to 10 percent of
the total software package operates for 50 to 90 percent of the time in a real-time program.
hazards. Even under the most thorough testing program, it is virtually impossible to foresee and test
every conceivable software condition in a major system. The goal of software safety, therefore, is
to identify all potential hazards and ensure that they have either been eliminated or reduced to an
acceptable level of risk.
9.5.6 References
1. Fink, D., and D. Christiansen (eds.): Electronics Engineers' Handbook, 3rd ed., McGraw-
Hill, New York, N.Y., 1989.
2. Powell, Richard: “Temperature Cycling vs. Steady-State Burn-In,” Circuits Manufacturing,
Benwill Publishing, September 1976.
3. Capitano, J., and J. Feinstein: “Environmental Stress Screening Demonstrates its Value in
the Field,” Proceedings IEEE Reliability and Maintainability Symposium, IEEE, New York,
N.Y., 1986.
4. Wong, Kam L.: “Demonstrating Reliability and Reliability Growth with Environmental
Stress Screening Data,” Proceedings IEEE Reliability and Maintainability Symposium,
IEEE, New York, N.Y., 1990.
5. Tustin, Wayne: “Recipe for Reliability: Shake and Bake,” IEEE Spectrum, IEEE, New
York, N.Y., December 1986.
6. Hobbs, Gregg K.: “Development of Stress Screens,” Proceedings IEEE Reliability and
Maintainability Symposium, IEEE, New York, N.Y., 1987.
7. Smith, William B.: “Integrated Product and Process Design to Achieve High Reliability in
Both Early and Useful Life of the Product,” Proceedings IEEE Reliability and Maintain-
ability Symposium, IEEE, New York, N.Y., 1987.
8. Clark, Richard J.: “Electronic Packaging and Interconnect Technology Working Group
Report (IDA/OSD R&M Study),” IDA Record Document D-39, August 1983.
9. Wilson, M. F., and M. L. Woodruff: “Economic Benefits of Parts Quality Knowledge,”
Proceedings IEEE Reliability and Maintainability Symposium, IEEE, New York, N.Y.,
1985.
10. Griffin, P.: “Analysis of the F/A-18 Hornet Flight Control Computer Field Mean Time
Between Failure,” Proceedings IEEE Reliability and Maintainability Symposium, IEEE,
New York, N.Y., 1985.
11. Wong, K. L., I. Quart, L. Kallis, and A. H. Burkhard: “Culprits Causing Avionics Equip-
ment Failures,” Proceedings IEEE Reliability and Maintainability Symposium, IEEE, New
York, N.Y., 1987.
12. Horn, R., and F. Hall: “Maintenance Centered Reliability,” Proceedings IEEE Reliability
and Maintainability Symposium, IEEE, New York, N.Y., 1983.
13. Maynard, Eqbert, OUSDRE Working Group Chairman: “VHSIC Technology Working
Group Report” (IDA/OSD R&M Study), Document D-42, Institute of Defense Analysis,
November 1983.
14. Robinson, D., and S. Sauve: “Analysis of Failed Parts on Naval Avionic Systems,” Report
No. D180-22840-1, Boeing Company, Seattle, Wash., October 1977.
15. Worm, Charles M.: “The Real World: A Maintainer's View,” Proceedings IEEE Reliability
and Maintainability Symposium, IEEE, New York, N.Y., 1987.
16. Babel, Philip S.: “Software Development Integrity Program.” Briefing paper for the Aero-
nautical Systems Division, Air Force Systems Command. From Yates, W., and Shaller, D.:
“Reliability Engineering as Applied to Software,” Proceedings IEEE Reliability and Main-
tainability Symposium, IEEE, New York, N.Y., 1990.
17. Hermason, Sue E., Major, USAF: letter dated December 2, 1988. From Yates, W., and
Shaller, D.: “Reliability Engineering as Applied to Software,” Proceedings IEEE Reliabil-
ity and Maintainability Symposium, IEEE, New York, N.Y., 1990.
9.5.7 Bibliography
Brauer, D. C., and G. D. Brauer: “Reliability-Centered Maintenance,” Proceedings IEEE Reli-
ability and Maintainability Symposium, IEEE, New York, N.Y., 1987.
Cameron, D., and R. Walker: “Run-In Strategy for Electronic Assemblies,” Proceedings IEEE
Reliability and Maintainability Symposium, IEEE, New York, N.Y., 1986.
DesPlas, Edward: “Reliability in the Manufacturing Cycle,” Proceedings IEEE Reliability and
Maintainability Symposium, IEEE, New York, N.Y., 1986.
Devaney, John: “Piece Parts ESS in Lieu of Destructive Physical Analysis,” Proceedings IEEE
Reliability and Maintainability Symposium, IEEE, New York, N.Y., 1986.
Doyle, Edgar, Jr.: “How Parts Fail,” IEEE Spectrum, IEEE, New York, N.Y., October 1981.
Ferrara, K. C., S. J. Keene, and C. Lane: “Software Reliability from a System Perspective,” Pro-
ceedings IEEE Reliability and Maintainability Symposium, IEEE, New York, N.Y., 1988.
Fortna, H., R. Zavada, and T. Warren: “An Integrated Analytic Approach for Reliability Improve-
ment,” Proceedings IEEE Reliability and Maintainability Symposium, IEEE, New York,
N.Y., 1990.
Hall, F., R. A. Paul, and W. E. Snow: “R&M Engineering for Off-the-Shelf Critical Software,”
Proceedings IEEE Reliability and Maintainability Symposium, IEEE, New York, N.Y.,
1988.
Hansen, M. D., and R. L. Watts: “Software System Safety and Reliability,” Proceedings IEEE
Reliability and Maintainability Symposium, IEEE, New York, N.Y., 1988.
Irland, Edwin A.: “Assuring Quality and Reliability of Complex Electronic Systems: Hardware
and Software,” Proceedings of the IEEE, vol. 76, no. 1, IEEE, New York, N.Y., January
1988.
Kenett, R., and M. Pollak: “A Semi-Parametric Approach to Testing for Reliability Growth, with
Application to Software Systems,” IEEE Transactions on Reliability, IEEE, New York,
N.Y., August 1986.
Neubauer, R. E., and W. C. Laird: “Impact of New Technology on Repair,” Proceedings IEEE
Reliability and Maintainability Symposium, IEEE, New York, N.Y., 1987.
Smith, A., R. Vasudevan, R. Matteson, and J. Gaertner: “Enhancing Plant Preventative Mainte-
nance via RCM,” Proceedings IEEE Reliability and Maintainability Symposium, IEEE,
New York, N.Y., 1986.
Spradlin, B. C.: “Reliability Growth Measurement Applied to ESS,” Proceedings IEEE Reliabil-
ity and Maintainability Symposium, IEEE, New York, N.Y., 1986.
Whitaker, Jerry C.: Maintaining Electronic Systems Handbook, CRC Press, Boca Raton, Fla.,
2001.
Yates, W., and D. Shaller: “Reliability Engineering as Applied to Software,” Proceedings IEEE
Reliability and Maintainability Symposium, IEEE, New York, N.Y., 1990.
Linkwitz, S.: “Narrow Band Testing of Acoustical Systems,” Audio Engineering Society pre-
print 1342, AES, New York, N.Y., May 1978.
Lipshitz, S., T. Scott, and J. Vanderkooy: “Increasing the Audio Measurement Capability of FET
Analyzers by Microcomputer Post-Processing,” Audio Engineering Society preprint 2050,
AES, New York, N.Y., October 1983.
Metzler, R. E.: “Automated Audio Testing,” Studio Sound, August 1985.
Metzler, R. E., and B. Hofer: “Wow and Flutter Measurements,” Audio Precision 1 Users' Man-
ual, Audio Precision, Beaverton, Ore., July 1986.
Moller, H.: “Electroacoustic Measurements,” B&K Application Note 16-035, B&K Instruments,
Naerum, Denmark.
Moller, H., and C. Thompsen: “Swept Electroacoustic Measurements of Harmonic Difference
Frequency and Intermodulation Distortion,” B&K Application Note 15-098, B&K Instru-
ments, Naerum, Denmark.
Otala, M., and E. Leinonen: “The Theory of Transient Intermodulation Distortion,” IEEE Trans.
Acoust. Speech Signal Process., ASSP-25(1), February 1977.
Preis, D.: “A Catalog of Frequency and Transient Responses,” J. Audio Eng. Soc., AES, New
York, N.Y., vol. 24, June 1976.
Ramirez, R.: “The FFT—Fundamentals and Concepts,” Tektronix, Inc., Beaverton, Ore., 1975.
Randall, R. B.: Application of B&K Equipment to Frequency Analysis, 2nd ed., B&K Instru-
ments, Naerum, Denmark, 1977.
Schrock, C.: “The Tektronix Cookbook of Standard Audio Measurements,” Tektronix Inc., Bea-
verton, Ore., 1975.
Skritek, P.: “Simplified Measurement of Squarewave/Sinewave and Related Distortion Test
Methods,” Audio Engineering Society preprint 2195, AES, New York, N.Y., 1985.
Small, R.: “Total Difference Frequency Distortion: Practical Measurements,” J. Audio Eng. Soc.,
AES, New York, N.Y., vol. 34, no. 6, pg. 427, June 1986.
Theile, A. N.: “Measurement of Nonlinear Distortion in a Bandlimited System,” J. Audio Eng.
Soc., AES, New York, N.Y., vol. 31, pp. 443–445, June 1983.
Tremaine, H. W.: Audio Cyclopedia, Howard W. Sams, Indianapolis, Ind., 1975.
Vanderkooy, J.: “Another Approach to Time Delay Spectrometry,” Audio Engineering Society
preprint 2285, AES, New York, N.Y., October 1985.
Chapter 10.1
Audio Measurement and Analysis
10.1.1 Introduction
Many parameters are important in audio devices and merit attention in the measurement process.
Some common audio measurements are frequency response, gain or loss, harmonic distortion,
intermodulation distortion, noise level, phase response, and transient response. There are other,
equally important tests too numerous to list here. This chapter will explain the basics of these
measurements, describe how they are made, and give some examples of their application.
Most measurements in audio (and other fields) are composed of measurements of fundamen-
tal parameters. These parameters include signal level, phase, and frequency. Most other measure-
ments consist of measuring these fundamental parameters and displaying the results in
combination by using some convenient format. For example, signal-to-noise ratio (S/N) is a pair
of level measurements made under different conditions expressed as a logarithmic, or decibel
(dB), ratio.
When characterizing an audio device, it is common to view it as a box with input terminals
and output terminals. In normal use an audio signal is applied to the input, and the audio signal,
modified in some way, appears at the output. In the case of an equalizer the modification to the
signal is an intentional change in the gain with frequency (frequency response). Often it is
desired to know or verify the details of this gain change. This is accomplished by measurement.
Real-world behavior being what it is, audio devices will also modify other parameters of the
audio signal which should have been left alone. To quantify these unintentional changes to the
signal we again turn to measurements. Using the earlier example of an equalizer, changes to the
amplitude-versus-frequency response of the signal inevitably bring changes in phase versus fre-
quency. Some measurements are what are known as one-port measurements such as impedance
or noise level. These are not concerned with both input and output signals, only with one or the
other.
Measurement of level is fundamental to most audio specifications. Level can be measured
either in absolute terms or in relative terms. Power output is an example of an absolute level mea-
surement; it does not require any reference. S/N and gain or loss are examples of relative, or ratio,
measurements; the result is expressed as a ratio of two measurements. Though it may not appear
so at first, frequency response is also a relative measurement. It expresses the gain of the device
under test as a function of frequency, with the midband gain as a reference.
Distortion measurements are a way of quantifying the amount of unwanted components
added to a signal by a piece of equipment. The most common technique is total harmonic distor-
tion (THD), but others are often used. Distortion measurements express the amount of unwanted
signal components relative to the desired signal, usually as a percentage or decibel value. This is
also an example of multiple level measurements that are combined to give a new measurement
figure.
The rms value of a signal made up of a fundamental and its harmonics is found by summing the component rms values on a power basis:

V_rms(total) = sqrt( V_rms1^2 + V_rms2^2 + … + V_rmsn^2 )     (10.1.1)
Note that the result is not dependent on the phase relationship of the signal and its harmonics.
The rms value is determined completely by the amplitude of the components. This mathematical
predictability is very powerful in practical applications of level measurement, enabling measure-
ments made at different places in a system to be correlated. It is also extremely important in cor-
relating measurements with theoretical calculations.
An interesting result for gaussian random noise is that the rms value equals the standard devi-
ation of the amplitude distribution. In fact, if the amplitude distribution of any signal is plotted,
the standard deviation of the distribution is, by definition, the rms value.
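Equation (10.1.1) and its independence from component phase are easy to verify numerically. The sketch below is illustrative Python (the helper names are hypothetical, not from the handbook), summing a fundamental and its second harmonic at several phase offsets:

```python
import math

def rms(samples):
    """Root-mean-square value of a sampled waveform."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def two_tone(a1, a2, phase2, n=10000):
    """One full cycle of a fundamental plus its second harmonic."""
    return [a1 * math.sin(2 * math.pi * k / n)
            + a2 * math.sin(2 * math.pi * 2 * k / n + phase2)
            for k in range(n)]

# rms of each sine component alone is its amplitude divided by sqrt(2)
v1, v2 = 1.0 / math.sqrt(2), 0.5 / math.sqrt(2)
predicted = math.sqrt(v1 ** 2 + v2 ** 2)          # Eq. (10.1.1)

# the measured rms matches the prediction regardless of harmonic phase
for phase in (0.0, math.pi / 3, math.pi / 2):
    assert abs(rms(two_tone(1.0, 0.5, phase)) - predicted) < 1e-6
```

Because the components are orthogonal over a full cycle, the measured rms agrees with the power-sum prediction to within numerical error at every phase offset.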
Figure 10.1.1 Root-mean-square (rms) measurements: (a) relationship of rms and average values, (b) the rms measurement circuit.
Figure 10.1.2 Average measurements: (a) illustration of average detection, (b) average-measurement circuit.
switch. Common peak detectors usually use a large resistor to discharge the capacitor gradually
after the user has had a chance to read the meter.
The ratio of the true peak value to the rms value is called the crest factor. As can be seen from
Figure 10.1.5, this is a measure of the peakedness of the signal. For any signal but an ideal square
wave the crest factor will be greater than 1. A comparison of crest-factor values for various
waveforms is shown in Figure 10.1.5. As the signals become more peaked, the crest factor will
increase. The q parameter in the gaussian-noise example is a measure of the percent of time dur-
ing which the signal is greater than the rms value. True gaussian noise has infinitely large peaks
and therefore an infinite crest factor.
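Crest factors for common waveforms can be computed directly from sampled data; a small illustrative sketch (plain Python, idealized noise-free waveforms, hypothetical function name):

```python
import math

def crest_factor(samples):
    """Ratio of true peak value to rms value."""
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return peak / rms

n = 100000
square   = [1.0 if k < n // 2 else -1.0 for k in range(n)]
sine     = [math.sin(2 * math.pi * k / n) for k in range(n)]
triangle = [1.0 - 4.0 * abs(k / n - 0.5) for k in range(n)]

# square wave: 1.0 (the only waveform with a crest factor of 1)
# sine wave:   sqrt(2), about 1.414
# triangle:    sqrt(3), about 1.732
```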
By introducing a controlled charge and discharge time with resistors, a quasi-peak detector is
achieved. These charge and discharge times are selected to simulate the ear's sensitivity to impul-
sive peaks. International standards define these response times and set requirements for reading
accuracy on pulses and sine-wave bursts of various durations. Quasi-peak detectors normally
have their gain adjusted so that they read the same as an rms detector for sine waves.
Another method of specifying signal amplitude is called peak-equivalent sine. It is the rms
level of a sine wave having the same peak-to-peak amplitude as the signal under consideration.
This is the peak value of the waveform divided by the factor 1.414, corresponding to
the peak-to-rms ratio of a sine wave. This is useful when specifying test levels of waveforms in
distortion measurements. If the distortion of a device is measured as a function of amplitude, a
point will be reached where the output level cannot increase any further. At this point the peaks
of the waveform will be clipped, and the distortion will rise rapidly with further increases in
level. If another signal is used for distortion testing on the same device, it is desirable that the
levels at which clipping is reached correspond. Signal generators are normally calibrated in this
way to allow changing between waveforms without clipping or readjusting levels.
Figure 10.1.3 Comparison of rms and average characteristics. (Courtesy of EDN, January 20, 1982.)
Figure 10.1.4 Peak measurements: (a) illustration of peak detection, (b) peak-measurement circuit.
analog meter can handle this job with ease. Another application for analog meters is monitoring
the results of an adjustment for a peak or a null. Some manufacturers have put both analog and
digital displays on the same instrument to enable the best of both worlds. The analog scale on
such meters typically does not have very fine graduations on it and is intended only for approxi-
mate measurements of rapidly changing signals. Other digital instruments provide a simulated
analog display using bar graphs or other means.
The bandwidth of a voltmeter can have a significant effect on the accuracy of the reading. For
a meter with a single-pole rolloff (i.e., one bandwidth-limiting component in the signal path),
significant errors can occur in measurements. For such a meter with a specified bandwidth of
100 kHz, there will be a 10 percent error in measurements of signal at 50 kHz. To obtain 1 per-
cent accurate measurements (with other error sources in the meter ignored), the signal frequency
must be less than 10 kHz.
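For an ideal single-pole rolloff the reading error follows directly from the filter's magnitude response, 1/sqrt(1 + (f/f_c)^2). A short sketch (hypothetical function name, assuming that ideal response):

```python
import math

def single_pole_error(f_signal, f_corner):
    """Fractional low-reading error of a meter with a one-pole rolloff."""
    return 1.0 - 1.0 / math.sqrt(1.0 + (f_signal / f_corner) ** 2)

# a 100-kHz meter measuring a 50-kHz tone reads about 10 percent low
err_50k = single_pole_error(50e3, 100e3)
# at 10 kHz the same meter is comfortably inside 1 percent
err_10k = single_pole_error(10e3, 100e3)
```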
Another problem with limited-bandwidth measuring devices is shown in Figure 10.1.6. Here, a
distorted sine wave is being measured by two meters with different bandwidths. The meter
with the narrower bandwidth does not respond to all the harmonics and gives a lower reading.
The severity of this effect varies with the frequency being measured and the bandwidth of the
meter; it can be especially severe when measuring wideband noise. Most audio requirements are
adequately served by a meter with a 500-kHz bandwidth. This allows reasonably accurate mea-
surement of signals to about 100 kHz. Peak measurements are even more sensitive to bandwidth
effects. Systems with restricted low-frequency bandwidth will produce tilt in a square wave, and
bumps in the high-frequency response will produce an overshoot. The effect of either situation
will be an increase in the peak reading.
Accuracy is a measure of how well a meter measures a signal at a midband frequency, usually
1 kHz. This sets a basic limit on the performance of the meter in establishing the absolute ampli-
tude of a signal. It is also important to look at the flatness specification to see how well this per-
formance is maintained with changes in frequency. The flatness specification describes how well
the measurements at any other frequency will track those at 1 kHz. If a meter has an accuracy of
2 percent at 1 kHz and a flatness of 1 dB (10 percent) from 20 Hz to 20 kHz, the inaccuracy can
be as wide as 12 percent at 20 kHz.
Meters often have a specification on accuracy that changes with voltage range, being most
accurate only in the range in which they were calibrated. A meter with 1 percent accuracy on the
2-V range and 1 percent accuracy per step would be 3 percent accurate on the 200-V scale. By
using the flatness specification given previously, the overall accuracy for a 100-V 20-kHz sine
wave is 14 percent. In many meters an additional accuracy derating is given for readings as a per-
centage of full scale, making readings at less than full scale less accurate.
However, the accuracy specification is not normally as important as the flatness. When per-
forming frequency response or gain measurements, the results are relative and are not affected by
the absolute voltage used. When measuring gain, however, the attenuator accuracy of the instru-
ment is a direct error source. Similar comments apply to the accuracy and flatness specifications
for signal generators. Most are specified in the same manner as voltmeters, with the inaccuracies
adding in much the same way.
Decibel values are computed from voltage or power ratios:

dB = 20 log (E1/E2)     (10.1.2)

dB = 10 log (P1/P2)     (10.1.3)
There is no difference between decibel values from power measurements and decibel values
from voltage measurements if the impedances are equal. In both equations the denominator vari-
able is usually a stated reference. This is illustrated with an example in Figure 10.1.7. Whether
the decibel value is computed from the
power-based equation or from the voltage-
based equation, the same result is obtained.
A doubling of voltage will yield a value
of 6.02 dB, while a doubling of power will
yield 3.01 dB. This is true because doubling
voltage results in a factor-of-4 increase in
power. Table 10.1.1 shows the decibel values
for some common voltage and power ratios.
These are handy to commit to memory, and
they make quick comparisons of readings especially easy.
Figure 10.1.7 Equivalence of voltage and power decibels.
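Both decibel equations can be exercised in a few lines; the sketch below (illustrative Python, hypothetical function names) confirms that the voltage-based and power-based forms agree for equal impedances and reproduces the 6.02-dB and 3.01-dB doubling values:

```python
import math

def db_voltage(e1, e2):
    """Eq. (10.1.2): decibels from a voltage ratio."""
    return 20.0 * math.log10(e1 / e2)

def db_power(p1, p2):
    """Eq. (10.1.3): decibels from a power ratio."""
    return 10.0 * math.log10(p1 / p2)

# with equal impedances the two equations agree: doubling the voltage
# across a fixed resistance quadruples the power
r = 600.0
assert abs(db_voltage(2.0, 1.0) - db_power(2.0 ** 2 / r, 1.0 ** 2 / r)) < 1e-9

assert round(db_voltage(2.0, 1.0), 2) == 6.02   # voltage doubling
assert round(db_power(2.0, 1.0), 2) == 3.01     # power doubling
```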
The previous example showed the decibel value obtained from two measured quantities. Often
audio engineers express the decibel value of a signal relative to some standard reference instead of
another signal. The reference for decibel measurements may be predefined as a power level, as
in dBm (decibels above 1 mW), or it may be a voltage reference. When measuring dBm or any
power-based decibel value, the reference impedance must be specified or understood. For exam-
ple, 0 dBm (600 Ω) would be the correct way to specify level. Both 600 and 150 Ω are common
reference impedances in audio work.

Table 10.1.1 Decibel Values for Common Voltage and Power Ratios

dB Value    Voltage Ratio    Power Ratio
0           1                1
+1          1.122            1.259
+2          1.259            1.585
+3          1.412            1.995
+6          1.995            3.981
+10         3.162            10
+20         10               100
+40         100              10,000
–1          0.891            0.794
–2          0.794            0.631
–3          0.707            0.501
–6          0.501            0.251
–10         0.3162           0.1
–20         0.1              0.01
–40         0.01             0.0001

The equations assume that the circuit being measured is terminated in the reference imped-
ance used in the decibel calculation. However, most voltmeters are high-impedance devices and
are calibrated in decibels relative to the voltage required to reach 1 mW in the reference imped-
ance. This voltage is 0.775 V in the 600-Ω case. Termination of the line in 600 Ω is left to the
user. If the line is not terminated, it is not correct to speak of a dBm measurement. The case of
decibels in an unloaded line is referred to as dBu (or sometimes dBv) to denote that it is refer-
enced to a 0.775-V level without regard to impedance.
Another common decibel reference in voltage measurements is 1 V. When using this refer-
ence, measurements are presented as dBV. Often it is desirable to specify levels in terms of a ref-
erence transmission level somewhere in the system under test. These measurements are
designated dBr where the reference point or level must be separately conveyed.
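The common references reduce to small conversion routines; a sketch with hypothetical function names, following the definitions above (note that the dBm form is meaningful only for a line terminated in the reference impedance):

```python
import math

def v_to_dbu(volts):
    """dB re 0.775 V (the voltage giving 1 mW in 600 ohms), impedance ignored."""
    return 20.0 * math.log10(volts / 0.775)

def v_to_dbv(volts):
    """dB re 1 V."""
    return 20.0 * math.log10(volts)

def v_to_dbm(volts, z_ref=600.0):
    """dB re 1 mW -- valid only if the line is terminated in z_ref."""
    return 10.0 * math.log10((volts ** 2 / z_ref) / 1e-3)

# 0.775 V on a terminated 600-ohm line: 0 dBu and (within rounding) 0 dBm;
# the dBu and dBV references differ by a fixed offset of about 2.21 dB
```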
level measurement on the right channel yields 10 mV of signal, the separation is defined to be 60
dB.
One possible problem with this procedure is that the 10 mV may not be 1 kHz leaking from
the other channel but may represent the noise floor of the system under test. If this is true, the
separation measurement is inaccurate. The solution is to use a bandpass filter tuned to the fre-
quency of the test tone, thereby rejecting system noise and other interfering components. If the
measurements are to be made as a function of frequency, the bandpass-filter frequency should be
slaved to the generator frequency. This approach is illustrated in Figure 10.1.8. One channel is
driven from a sine-wave generator at its nominal operating level and terminated in its normal
load impedance. The other channel input is terminated by the normal source impedance, and the
output is terminated by the normal load impedance. The level at the output of the driven channel
is measured with a voltmeter, and the output of the undriven channel is measured by a voltmeter
with a bandpass filter centered on the test-signal frequency. The level difference between these
measurements, expressed in decibels, is the separation. When the two channels have different
gains, it is common to correct the measurements for the gain difference so as to present the
crosstalk referred to the channel inputs.
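The arithmetic of the separation measurement, including the input-referral correction for unequal channel gains, can be sketched as follows (hypothetical function name; the gain-correction convention is the one described above):

```python
import math

def separation_db(driven_out_v, undriven_out_v,
                  driven_gain_db=0.0, undriven_gain_db=0.0):
    """Channel separation in decibels from the two output-level readings.

    The optional gain terms refer the result to the channel inputs when
    the two channels have unequal gains.
    """
    raw = 20.0 * math.log10(driven_out_v / undriven_out_v)
    return raw + (undriven_gain_db - driven_gain_db)

# 10 V at the driven output, 10 mV of filtered crosstalk at the other
assert abs(separation_db(10.0, 0.010) - 60.0) < 1e-6
# if the undriven channel had 6 dB more gain, the input-referred
# separation is 6 dB better than the raw output reading
assert abs(separation_db(10.0, 0.010, 0.0, 6.0) - 66.0) < 1e-6
```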
Crosstalk between two audio channels is sometimes nonlinear. The presence of a signal in one
channel will sometimes yield tones of other frequencies in the receiving channel. This is espe-
cially true in transmission systems where cross-modulation can occur between carriers. These
tones disappear when the source signal is removed, clearly indicating that they are due to the sus-
pected source. Measuring them can be tedious if sine waves are used because the frequency of
the received interference may not be easily predictable. There may also be some test frequencies
which cause the interference products to appear outside the channel bandwidth, hiding the effect
being tested. Therefore, this test is often performed with a random noise source so that all possi-
ble interference frequencies are tested.
without the high-pass filter, the effects of hum can be estimated. Subtracting the two decibel
measurements results in a level ratio measurement called the hum-to-hiss ratio. Good-quality
equipment will have hum-to-hiss ratios of less than 1 dB.
Even more information about the underlying sources of noise may be obtained with spectral
analysis. This may be accomplished with any of the common spectrum-analysis techniques,
including:
• Fast-Fourier-transform (FFT) analyzers
• Heterodyne analyzers
• Real-time analyzers (RTAs)
Each offers advantages for particular applications, but all provide considerable insight to the
measurement being made. Figure 10.1.12 shows the spectrum of output noise from a profes-
sional equalizer. The measuring equipment uses a sweeping one-third-octave filter and an rms
detector. Note the presence of power-line-related signals, both 120-Hz and 180-Hz components.
The 120-Hz product is due to asymmetrical charging currents in the power supply, while the 180-
Hz product is the result of transformer field leakage.
Another approach which is often useful in noise analysis is to view the noise on an oscillo-
scope and trigger the oscilloscope with an appropriate synchronization signal. The components
of the noise related to the synchronization signal will remain stationary on the screen, and the
unrelated energy will produce fuzz on the trace. For example, to investigate line-related compo-
nents, the oscilloscope should be triggered on the power line. If interference is suspected from a
nearby television signal, the oscilloscope can be triggered on the television vertical and/or hori-
zontal sync signals.
Noise may be expressed as an absolute level (usually in dBm or dBu) by simply measuring
the weighted voltage (proper termination being assumed in the case of dBm) at the desired point
in the system. However, this is often not very meaningful. A 1-mV noise voltage at the output of
a power amplifier may be quite good, while 1 mV of noise at the output of a microphone would
render it useless for anything but measuring jet planes. A better way to express noise perfor-
mance is the signal-to-noise ratio. S/N is a decibel measure of the noise level using the signal
level measured at the same point as a reference. This makes measurements at different points in a
system or in different systems directly comparable. A signal with a given S/N can be amplified
with a perfect amplifier or attenuated with no change in the S/N. Any degradation in S/N at later
points in the system is due to limitations of the equipment that follows.
Figure 10.1.12 Typical spectrum of noise and hum measured with a sweeping one-third-octave fil-
ter.
10.1.5 Bibliography
Bauman, P., S. Lipshitz, and J. Vanderkooy: “Cepstral Techniques for Transducer Measurement:
Part II,” Audio Engineering Society preprint 2302, AES, New York, N.Y., October 1985.
Berman, J. M., and L. R. Fincham: “The Application of Digital Techniques to the Measurement
of Loudspeakers,” J. Audio Eng. Soc., AES, New York, N.Y., vol. 25, June 1977.
Cabot, R. C.: “Measurement of Audio Signal Slew Rate,” Audio Engineering Society preprint
1414, AES, New York, N.Y., November 1978.
Lipshitz, S., T. Scott, and J. Vanderkooy: “Increasing the Audio Measurement Capability of FET
Analyzers by Microcomputer Post-Processing,” Audio Engineering Society preprint 2050,
AES, New York, N.Y., October 1983.
Metzler, R. E.: “Automated Audio Testing,” Studio Sound, August 1985.
Metzler, R. E., and B. Hofer: “Wow and Flutter Measurements,” Audio Precision 1 Users' Man-
ual, Audio Precision, Beaverton, Ore., July 1986.
Moller, H.: “Electroacoustic Measurements,” B&K Application Note 16-035, B&K Instruments,
Naerum, Denmark.
Moller, H., and C. Thompsen: “Swept Electroacoustic Measurements of Harmonic Difference
Frequency and Intermodulation Distortion,” B&K Application Note 15-098, B&K Instru-
ments, Naerum, Denmark.
Otala, M., and E. Leinonen: “The Theory of Transient Intermodulation Distortion,” IEEE Trans.
Acoust. Speech Signal Process., ASSP-25(1), February 1977.
Preis, D.: “A Catalog of Frequency and Transient Responses,” J. Audio Eng. Soc., AES, New
York, N.Y., vol. 24, June 1976.
Schrock, C.: “The Tektronix Cookbook of Standard Audio Measurements,” Tektronix Inc., Bea-
verton, Ore., 1975.
Vanderkooy, J.: “Another Approach to Time Delay Spectrometry,” Audio Engineering Society
preprint 2285, AES, New York, N.Y., October 1985.
Chapter 10.2
Audio Phase and Frequency Measurement
10.2.1 Introduction
When a signal is applied to the input of a device, the output will appear at a later point in time.
For a sine-wave excitation this delay between input and output may be expressed as a proportion
of the sine-wave cycle, usually in degrees. One cycle is 360°, one half-cycle is 180°, etc. This
measurement is illustrated in Figure 10.2.1. The phasemeter input signal no. 2 is delayed from, or
is said to be lagging, input no. 1 by 45°. Most audio measuring gear measures phase directly by
measuring the proportion of one signal cycle between zero crossings of the signals. This can be
done with an edge-triggered set-reset flip-flop as shown in Figure 10.2.1. The output of this flip-
flop will be a signal which goes high during the time between zero crossings of the signals. By
averaging the amplitude of this pulse over one cycle (i.e., measuring its duty cycle) a measure-
ment of phase results.
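The flip-flop duty-cycle technique can be mimicked in software by locating the positive-going zero crossings of the two sampled signals and expressing their offset as a fraction of one cycle. The sketch below uses idealized, noise-free signals and hypothetical names, and recovers the 45° lag of the example above:

```python
import math

def rising_zero_crossing(samples, dt):
    """Time of the first positive-going zero crossing, linearly interpolated."""
    for i in range(1, len(samples)):
        if samples[i - 1] < 0.0 <= samples[i]:
            frac = -samples[i - 1] / (samples[i] - samples[i - 1])
            return (i - 1 + frac) * dt
    raise ValueError("no rising zero crossing found")

f = 1000.0                 # test-tone frequency, Hz
dt = 1.0e-6                # 1-MHz sampling
n = 2000                   # two full cycles
ref = [math.sin(2 * math.pi * f * k * dt) for k in range(n)]
lag = [math.sin(2 * math.pi * f * k * dt - math.radians(45.0)) for k in range(n)]

# offset between crossings, expressed as a fraction of one cycle
delta_t = rising_zero_crossing(lag, dt) - rising_zero_crossing(ref, dt)
phase_deg = (360.0 * delta_t * f) % 360.0   # lag in degrees, ~45
```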
not be audible. There can be problems with time delay when the delayed signal will be used in
conjunction with an undelayed signal. This would be the case if one channel of a stereo signal
was delayed and the other was not. If we subtract out the absolute time delay from a phase plot,
the remainder will truly represent the audible portions of the phase response. In instances where
absolute delay is a concern, the original phase curve is more relevant.
Group delay is a measure of the relative delay of the spectral com-
ponents of a complex waveform. This describes the delay in the harmonics of a musical tone rel-
ative to the fundamental. If the group delay is flat, all components will arrive together. A peak or
rise in the group delay indicates that those components will arrive later by the amount of the peak
or rise. It is computed by taking the derivative of the phase response versus frequency. Mathematically

t_g = –dφ/dω

where φ is the phase response and ω = 2πf is the angular frequency.
This requires that phase be measured over a range of frequencies to give a curve that can be
differentiated. It also requires that the phase measurements be performed at frequencies which
are close enough together to provide a smooth and accurate derivative.
Note that the gate interval does not enter into the calculation and may be chosen on the basis
of the speed of measurements desired. Longer gate intervals and higher-frequency clocks will
result in higher-resolution measurements. However, the gate interval must be an integer multiple
of the input-signal period. This is easy to ensure with appropriate logic circuitry. For the fairly
typical case of a 10-Hz signal, a 0.1-s gate, and a 10-MHz clock, we would have a 1-cycle gate
and a count of 1,000,000,
giving a resolution of 6 digits. A 1-s gate would allow 10 cycles of input signal, giving a count of
10 million.
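The gate and count arithmetic can be sketched directly (a hypothetical reciprocal-counter model; the gate is forced to a whole number of signal periods, as described above):

```python
def reciprocal_count(f_signal_hz, gate_s, f_clock_hz):
    """Clock counts for a gate rounded to a whole number of signal periods."""
    cycles = max(1, round(gate_s * f_signal_hz))
    counts = round(cycles * f_clock_hz / f_signal_hz)
    return cycles, counts

# 10-Hz signal, 0.1-s gate, 10-MHz clock: a 1-cycle gate and 1,000,000 counts
assert reciprocal_count(10.0, 0.1, 10e6) == (1, 1_000_000)
# a 1-s gate allows 10 cycles of input signal, giving 10 million counts
assert reciprocal_count(10.0, 1.0, 10e6) == (10, 10_000_000)
```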
Figure 10.2.6 Error and delay (bias errors) in writing out peaks and valleys in a spectrum. (After
[1].)
small bandwidths. Means are normally provided for setting the IF bandwidth, allowing the reso-
lution of the spectrum analyzer to be adjusted. The frequency-analysis range, or span, of the ana-
lyzer is set by the tuning of the LO. A minimum bandwidth is required for any value of span and
sweep speed. This requirement allows the IF filter to settle to its steady-state response on the
input signal. If the sweep is too fast or the bandwidth too small, the filter output will give an
incorrect reading. The shape of the response seen on the analyzer screen for different sweep rates
is shown in Figure 10.2.6. As the sweep rate is increased above the optimum value, the peak will
start to drop and its frequency will shift in the direction of the sweep. The optimum bandwidth B
for a particular sweep time T and dispersion or total frequency sweep range D is given in Figure
10.2.7.
Another approach to spectrum analysis is using a real-time analyzer (RTA). A parallel bank of
bandpass filters is driven with the signal to be analyzed. The outputs of the filters are rectified
and displayed in bar-graph form on a cathode-ray tube (CRT) or other suitable display as shown
in Figure 10.2.8. The resulting display is shown in Figure 10.2.9. The filters are at fixed frequen-
cies, usually spaced every one-third octave or full octave from 20 Hz to 20 kHz. This results in
30 filters for the one-third-octave case and 10 filters for octave-band units. These frequencies
have been standardized by the IEC and are given in Table 10.2.1. Some units have been built with
12 filters per octave, or a total of 120 filters, for even higher resolution. Because RTAs are nor-
mally made with these fractional-octave filters, they are constant-percentage-bandwidth devices.
This means that the bandwidth of the filters is always a fixed percentage of the center frequency.
The advantage of parallel-filter analyzers is their instantaneous display and their ability to see
transient events. Since all filters are constantly monitoring the signal, all transients will be seen.
Disadvantages include the low-resolution display and the inability to trade resolution for fre-
quency range after the unit is manufactured.
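The constant-percentage-bandwidth property can be demonstrated by generating exact base-10 one-third-octave centers, which step by the fixed ratio 10^(1/10) ≈ 2^(1/3). (This is a sketch: the IEC tables round these exact values to nominal frequencies such as 31.5 Hz and 630 Hz.)

```python
# exact base-10 one-third-octave center frequencies, ~20 Hz to ~20 kHz
centers = [1000.0 * 10 ** (n / 10) for n in range(-17, 14)]

step = 10 ** 0.1                     # ratio between adjacent centers, ~1.259
assert all(abs(b / a - step) < 1e-9 for a, b in zip(centers, centers[1:]))

# band edges a sixth-octave either side of each center give a bandwidth
# that is always the same fraction (~23 percent) of the center frequency
rel_bw = step ** 0.5 - step ** -0.5  # ~0.231
assert all(abs((f * step ** 0.5 - f * step ** -0.5) / f - rel_bw) < 1e-9
           for f in centers)
```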
Figure 10.2.7 Optimum resolution setting for spectrum analyzers. Read B(optimum) for a given dispersion and sweep time. (After [2].)
RTAs are commonly used with a random-noise or multitone test signal to measure a device or
system response quickly. A random-noise signal is a signal whose instantaneous amplitude is a
random, usually gaussian variable. Random noise has a spectrum (amplitude versus frequency)
made up of all frequencies over the bandwidth of the noise. Various terms are used to describe
spectrum analyzer to examine these components. Advances in technology have made it possible
to implement directly Fourier's theory with hardware and software. Instruments that perform
Fourier analysis digitize the signal, sampling the waveform at a rate faster than the highest-fre-
quency input signal, and convert these samples into a numerical representation of the signal's
instantaneous value. Fourier series provides a way to convert these signal samples into samples
of the signal spectrum. This transforms the data from the time domain to the frequency domain.
The FFT is merely a technique for efficiently computing the Fourier series by eliminating redun-
dant mathematical operations.
The FFT operates on a piece of the signal that has been acquired and stored in memory for the
calculation. Take, for example, the section of a sine wave shown in Figure 10.2.10. This is a
piece of a sine wave which continues in time on both sides of the selected segment. The FFT
algorithm does not know anything about the waveform outside the piece that it is using for calcu-
lations. It therefore assumes that the signal repeats itself outside the “window” it has of the sig-
nal. This is important because an incorrectly selected piece of the signal may lead to very strange
results.
Consider the sine wave of Figure 10.2.10 and the possible pieces of it that have been selected for analysis. In the first example, the beginning and end of the window have been chosen to coincide with zero crossings of the signal. If the selected segment is repeated, an accurate representation of the signal is obtained. The FFT of this signal will give the correct spectrum, a single-frequency component. If the window is chosen incorrectly, as in the second example, there is a noninteger number of cycles in the waveform. When this segment is repeated, the resulting waveform will not look like the original sine wave. The computed spectrum will also be in error; it will be the spectrum of the discontinuous waveform. Transients that start at zero and decay to zero before the end of the sample segment will not suffer from any discontinuities. Continuous random or periodic signals will, however, be affected in a manner analogous to the effects on sine waves.
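The effect of window placement can be demonstrated numerically. The following sketch (assuming NumPy is available) compares the FFT of a sine wave captured with an integer and a noninteger number of cycles in the window:

```python
import numpy as np

N = 1024
n = np.arange(N)

good = np.sin(2 * np.pi * 16.0 * n / N)   # exactly 16 cycles in the window
bad = np.sin(2 * np.pi * 16.5 * n / N)    # 16.5 cycles: discontinuous extension

spec_good = np.abs(np.fft.rfft(good)) / (N / 2)
spec_bad = np.abs(np.fft.rfft(bad)) / (N / 2)

print(spec_good[16])    # ~1.0: all of the energy lands in bin 16
print(spec_good[20])    # ~0.0: no leakage into other bins
print(spec_bad[20])     # clearly nonzero: energy leaks into neighboring bins
```

With an integer cycle count the periodic extension is seamless and the spectrum is a single line; with 16.5 cycles the implied discontinuity spreads energy across the whole spectrum.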
Clearly, then, the choice of window for a signal that is to be transformed is critical to obtaining correct results. It is often difficult to select the correct end points of the window, and more difficult still without operator involvement. The window function may be thought of as a rectangular-shaped function that multiplies the signal. Intuitively, the sharp discontinuities introduced by the endpoints of the window appear to be at fault for the spurious components in the FFT. This can be proved theoretically but is beyond the scope of this discussion. A simple solution to the windowing problem is to use nonrectangular windows. Multiplying the signal by a window that decreases to zero gradually at its end points eliminates the discontinuities. Using these windows modifies the data and results in a widening of the spectral peaks in the frequency domain, as illustrated in Figure 10.2.11. This tradeoff is unavoidable and has resulted in the development of many different windowing functions, each striking a different balance between spectral widening and the rejection of spurious components. Perhaps the most common window function is the Hamming window, illustrated in Figure 10.2.12, which is a cosine function raised above zero. It rejects spurious signals by at least 42 dB while spreading the spectral peaks by only 40 percent.
The FFT algorithm always assumes that the signal being analyzed is continuous. If transient signals are being analyzed, the algorithm assumes that they repeat at the end of each window. If a transient can be guaranteed to have decayed to zero before the end of the window time, a rectangular window may be used. If not, a shaped window must be used. However, if there is significant transient energy at the end of the window, the computed spectrum will not include the frequency contribution of that data.
Figure 10.2.10 The FFT assumes that all signals are periodic and that they duplicate what is captured inside the FFT window: (a) periodic signal, integral number of cycles in the measurement window; (b) nonperiodic signal, transient; (c) nonperiodic signal, random. (After [3].)
Fourier-transform algorithms can be written for any number of data points. However, it is easiest if the number of points can be expressed as the product of two smaller numbers. In this case, the transform may be broken into the product of two smaller transforms, each of a length equal to one of the smaller numbers. The most convenient lengths for transforms, based on this scheme, are powers of 2. Common transform lengths are 512 points and 1024 points. These would ordinarily provide a spectrum with 256 and 512 components, respectively. However, because of errors at high frequencies due to aliasing and the rolloff introduced by windowing, only 200 or 400 lines are displayed.
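The composite-length decomposition described above can be sketched directly: a DFT of length N = p × q is computed from p-point and q-point sub-transforms plus "twiddle factor" corrections, which is the idea underlying the Cooley-Tukey FFT. The helper below is illustrative, not library code; NumPy's FFT is used for the sub-transforms.

```python
import numpy as np

def dft_composite(x, p, q):
    """Length p*q DFT built from p-point and q-point sub-transforms."""
    N = p * q
    a = np.asarray(x, dtype=complex).reshape(p, q)   # a[n1, n2] = x[q*n1 + n2]
    A = np.fft.fft(a, axis=0)                        # q sub-transforms of length p
    k1 = np.arange(p)[:, None]
    n2 = np.arange(q)[None, :]
    A = A * np.exp(-2j * np.pi * k1 * n2 / N)        # twiddle factors
    B = np.fft.fft(A, axis=1)                        # p sub-transforms of length q
    return B.T.reshape(N)                            # B[k1, k2] = X[k1 + p*k2]

rng = np.random.default_rng(0)
x = rng.standard_normal(512)
print(np.allclose(dft_composite(x, 8, 64), np.fft.fft(x)))   # True
```

Applying the same splitting recursively to power-of-2 lengths gives the familiar radix-2 FFT.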
The transformation from the time domain to the frequency domain by using the FFT may be
reversed to go from the frequency domain to the time domain. This allows signal spectra to be
analyzed and filtered and the resulting effect on the time-domain response to be assessed. Other
transformations may be applied to the data, yielding greater ability to separate the signal into its
components.
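A minimal round-trip sketch (NumPy assumed) illustrates this: transform, filter in the frequency domain, and invert back to the time domain.

```python
import numpy as np

fs = 1024                          # 1024 samples over 1 s: bin k is k Hz
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 300 * t)

X = np.fft.rfft(x)                 # time domain -> frequency domain
X[200:] = 0                        # brick-wall low-pass: discard bins >= 200 Hz
y = np.fft.irfft(X, n=fs)          # frequency domain -> time domain

# The 300-Hz component is removed; the 10-Hz component survives intact.
print(np.max(np.abs(y - np.sin(2 * np.pi * 10 * t))))   # ~0
```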
Figure 10.2.11 Window shapes trade off major-lobe bandwidth and side-lobe rejection: (a) an almost periodic waveform in the rectangular acquisition window; the FFT magnitude (expanded 4 times for detail) shows closely adjacent, nearly equal components, one with substantial leakage, and a small wrinkle at two divisions from center hints at a possible third component; (b) multiplying the waveform by a Hamming window reduces side-lobe leakage and reveals a third low-frequency component in the FFT magnitude (expanded 4 times for detail); (c) a Parzen window offers more side-lobe reduction, but the increased bandwidth of the major lobe causes the two nearly equal components to merge completely into each other. (After [3].)
Figure 10.2.12 Some common FFT data windows and their frequency-domain parameters. (After
[3].)
10.2.5 References
1. Randall, R. B.: Application of B&K Equipment to Frequency Analysis, 2nd ed., B&K
Instruments, Naerum, Denmark, 1977.
2. Engelson, M., and F. Telewski: Spectrum Analyzer Theory and Applications, Artech House,
Norwood, Mass., 1974.
3. Ramirez, R.: “The FFT—Fundamentals and Concepts,” Tektronix, Inc., Beaverton, Ore.,
1975.
10.2.6 Bibliography
Bauman, P., S. Lipshitz, and J. Vanderkooy: “Cepstral Techniques for Transducer Measurement:
Part II,” Audio Engineering Society preprint 2302, AES, New York, N.Y., October 1985.
Berman, J. M., and L. R. Fincham: “The Application of Digital Techniques to the Measurement
of Loudspeakers,” J. Audio Eng. Soc., AES, New York, N.Y., vol. 25, June 1977.
Cabot, R. C.: “Measurement of Audio Signal Slew Rate,” Audio Engineering Society preprint
1414, AES, New York, N.Y., November 1978.
Lipshitz, S., T. Scott, and J. Vanderkooy: “Increasing the Audio Measurement Capability of FFT
Analyzers by Microcomputer Post-Processing,” Audio Engineering Society preprint 2050,
AES, New York, N.Y., October 1983.
Metzler, R. E.: “Automated Audio Testing,” Studio Sound, August 1985.
Metzler, R. E., and B. Hofer: “Wow and Flutter Measurements,” Audio Precision 1 Users' Man-
ual, Audio Precision, Beaverton, Ore., July 1986.
Moller, H.: “Electroacoustic Measurements,” B&K Application Note 16-035, B&K Instruments,
Naerum, Denmark.
Moller, H., and C. Thompsen: “Swept Electroacoustic Measurements of Harmonic Difference
Frequency and Intermodulation Distortion,” B&K Application Note 15-098, B&K Instru-
ments, Naerum, Denmark.
Otala, M., and E. Leinonen: “The Theory of Transient Intermodulation Distortion,” IEEE Trans.
Acoust. Speech Signal Process., ASSP-25(1), February 1977.
Preis, D.: “A Catalog of Frequency and Transient Responses,” J. Audio Eng. Soc., AES, New
York, N.Y., vol. 24, June 1976.
Schrock, C.: “The Tektronix Cookbook of Standard Audio Measurements,” Tektronix Inc., Bea-
verton, Ore., 1975.
Vanderkooy, J.: “Another Approach to Time Delay Spectrometry,” Audio Engineering Society
preprint 2285, AES, New York, N.Y., October 1985.
Chapter
10.3
Nonlinear Audio Distortion
10.3.1 Introduction
Distortion is a measure of signal impurity. It is usually expressed as a percentage or decibel ratio of the undesired components to the desired components of a signal. Distortion of a device is measured by feeding it one or more sine waves of various amplitudes and frequencies. In simplistic terms, any frequencies at the output that were not present at the input are distortion. However, strictly speaking, components due to power-line interference or other spurious signals are not distortion. There are many methods of measuring distortion in common use: harmonic distortion and at least three different types of intermodulation distortion. These are different test procedures rather than different forms of distortion in the device under test.
10-34 Audio Test and Measurement
To measure harmonic distortion with a spectrum analyzer, the procedure illustrated in Figure 10.3.2 is used. The fundamental amplitude is adjusted to the 0-dB mark on the display. The amplitudes of the harmonics are then read and converted to linear scale. The rms sum of these values is taken and represents the THD. This procedure is time-consuming and difficult for an unskilled operator. Even skilled operators have trouble obtaining accuracies better than 2 dB in the final result because of equipment limitations and the problems inherent in reading numbers off a trace on the screen of an analyzer.
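The rms-summing step can be sketched as follows; the harmonic levels here are illustrative values, not readings from the figure.

```python
import math

# Harmonic levels read from the display, in dB relative to the fundamental
# (assumed values for illustration):
harmonics_db = [-46.0, -52.0, -60.0]          # 2nd, 3rd, 4th harmonics

linear = [10 ** (level / 20) for level in harmonics_db]
thd = math.sqrt(sum(v * v for v in linear))   # rms sum of the harmonic fractions
print(f"THD = {100 * thd:.3f}%")              # ~0.57% for these readings
```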
A simpler approach to the measurement of harmonic distortion is the notch-filter distortion analyzer. This device, commonly referred to as simply a distortion analyzer, removes the fundamental of the signal to be investigated and measures the remainder. A block diagram of such a unit is shown in Figure 10.3.3. The fundamental is removed with a notch filter, and its output is then measured with an ac voltmeter. Since distortion is normally presented as a percentage of the fundamental level, this level must be measured or set equal to a predetermined reference value. Additional circuitry (not shown) is required to set the level to the reference value for calibrated
measurements. Some analyzers use a series of step attenuators and a variable control for setting the input level to the reference value.
Figure 10.3.4 Conversion graph for indicated distortion and true distortion. (From [1]. Used with permission.)
More sophisticated units eliminate the variable control by using an electronic gain control. Others employ a second ac-to-dc converter to measure the input level and compute the percentage using a microprocessor. Completely automatic units also provide autoranging logic to set the attenuators and ranges. This provision significantly reduces the effort and skill required to make a measurement.
The correct method of representing percentage distortion is to express the level of the harmonics as a fraction of the fundamental level. However, commercial distortion analyzers use the total signal level as the reference voltage. For small amounts of distortion these two quantities are equivalent. At large values of distortion the total signal level will be greater than the fundamental level, making distortion readings on these units lower than the actual value. The relationship between the measured distortion and the true distortion is given in Figure 10.3.4. The errors are negligible below 10 percent measured distortion and do not become significant until 20 percent measured distortion.
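Assuming the indicated reading is the residual relative to the total rms signal, while the true distortion is the residual relative to the fundamental, the conversion plotted in Figure 10.3.4 reduces to m = d/sqrt(1 + d^2), i.e., d = m/sqrt(1 - m^2). A small sketch of this assumed relationship:

```python
import math

def true_distortion(indicated):
    """Invert m = d / sqrt(1 + d^2) to recover d from the indicated reading."""
    return indicated / math.sqrt(1.0 - indicated ** 2)

# Errors are negligible at 10% indicated and grow rapidly above 20%:
for m in (0.01, 0.10, 0.20, 0.50):
    print(f"indicated {100 * m:4.0f}%  ->  true {100 * true_distortion(m):6.2f}%")
```

At 10 percent indicated the true value is about 10.05 percent; at 20 percent it is about 20.4 percent, consistent with the text.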
The need to tune the notch filter to the correct frequency can also make this a very tedious measurement. Some manufacturers have circumvented this problem by including the measurement oscillator and analyzer in one package, placing the analyzer and oscillator frequency controls on the same knob or button. This eliminates the problem only when the signal source used for the test is the internal oscillator. A better approach, used by some manufacturers, is to measure the input frequency and tune the filter to the measured frequency. This eliminates any need to adjust the analyzer frequency.
Because of the notch-filter response, any signal other than the fundamental will influence the results, not just harmonics. Some of these interfering signals are illustrated in Figure 10.3.5. Any practical signal contains some hum and noise, and the distortion analyzer will include these in the reading. Because of these added components, the correct term for this measurement is total harmonic distortion and noise (THD + N). Although this factor does limit readings on equipment with very low distortion, it is not necessarily bad. Indeed, it can be argued that the ear hears all components present in the signal, not just the harmonics. Other interfering signals, such as the 19-kHz pilot tone used in frequency-modulation (FM) stereo, may be outside the range of audibility, making their inclusion in the measurement undesirable.
Additional filters are included on most distortion analyzers to reduce unwanted hum and noise, as illustrated in Figure 10.3.6. These usually consist of one or more high-pass filters (400 Hz is almost universal) and several low-pass filters. Common low-pass-filter frequencies are 22.4 kHz, 30 kHz, and 80 kHz. Better equipment includes filters at all these frequencies, easing the tradeoff between limiting bandwidth to reduce noise and losing reading accuracy by removing desired components of the signal. When used in conjunction with a good differential input on the analyzer, these filters can solve most practical measurement noise problems.
The use of a sine-wave test signal and a notch-type distortion analyzer has the distinct advantage of simplicity in both instrumentation and use. This simplicity has an additional benefit in ease of interpretation. The shape of the output waveform from a notch-type analyzer indicates the slope of the nonlinearity. Displaying the residual components on the vertical axis of an oscilloscope and the input signal on the horizontal gives a plot of the transfer characteristic's deviation from a best-fit straight line. This technique is diagrammed in Figure 10.3.7. The trace will be a horizontal line for a perfectly linear device. If the transfer characteristic curves upward on positive input voltages, the trace will bend upward at the right-hand side. Examination of the distortion components in real time on an oscilloscope will show such characteristics as oscillation on the peaks of the signal, crossover distortion, and clipping. This is a valuable tool in the design and development of audio circuits and one which no other distortion measurement method can fully match. Viewing the residual components in the frequency domain using a spectrum analyzer also yields considerable information about the distortion mechanism inside the device under test.
Both the frequency and the amplitude of the sine-wave stimulus are adjustable parameters in
harmonic-distortion testing. This often proves to be of great value in investigating the nature of a
distortion mechanism. By measuring at low frequencies, thermal distortion and related effects
may be examined in detail. Using frequencies near the power line frequency and looking for
beats in the distortion products can reveal power supply limitations and line-related interference.
Measurements at high frequencies can reveal the presence of nonlinear capacitances or slew-rate
limiting. By examining the slope of the distortion change with frequency, several mechanisms
which are active in the same frequency range may be isolated.
Limitations when measuring distortion at high frequencies are the major problem with THD testing, as illustrated in Figure 10.3.8. Because the components being measured are harmonics of the input frequency, they may fall outside the passband of the device under test.
Indeed, early analyzers used a filtered version of the power line for the low-frequency tone; hence the 60-Hz low-tone frequency in the SMPTE standard. It is important, however, that no harmonics of the low-frequency signal generator extend into the measurement range of the high-frequency tone, because the analyzer will be unable to distinguish these from sidebands. After the first stage of filtering in the analyzer, there is little low-frequency energy left to create IM in the analyzer, which considerably simplifies the remaining circuitry.
As shown in Figure 10.3.11, when this composite signal is applied to the test device, the out-
put waveform is distorted. As the high-frequency tone is moved along the transfer characteristic
Rather than measuring the 15-kHz signal directly, the rms amplitude of the total signal is measured and a correction factor is applied. Additional high-pass filtering at approximately 400 Hz may be used to eliminate the effects of hum on the measurement. A block diagram of a DIM distortion analyzer using this measurement approach is shown in Figure 10.3.15.
This has led to the definition of the quasi-peak meter for measuring telephone noise, as described previously.
Accuracy of most distortion analyzers is specified at better than 1 dB, but this can be misleading. Separate specifications are often put on the bandwidth and ranges, as is common for voltmeters. A more important specification for distortion measurements is the residual distortion of the measurement system. Manufacturers of distortion analyzers often specify the oscillator and the distortion analyzer separately. A system in which the oscillator and the analyzer are each specified at 0.002 percent THD can have a system residual distortion of 0.004 percent. If the noise of the analyzer and/or the oscillator is specified separately, this must be added to the residual specification to find the residual THD + N of the system. It is not uncommon to find noise limiting the system residual at most input voltages. For example, an analyzer specified at 0.002 percent distortion and 20-μV input noise will have a 0.003 percent residual at 1-V input and 0.02 percent at 0.1-V input. These voltages are common when measuring analog mixing consoles and preamplifiers, resulting in a serious practical limitation with some distortion analyzers.
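The worked example above can be checked directly by rms-summing the distortion residual with the noise expressed as a fraction of the input level:

```python
import math

def residual_thdn(distortion, noise_volts, level_volts):
    """rms sum of the distortion residual and noise, as a fraction of level."""
    return math.sqrt(distortion ** 2 + (noise_volts / level_volts) ** 2)

# 0.002 percent distortion plus 20 uV of input noise, at two input levels:
print(f"{100 * residual_thdn(0.00002, 20e-6, 1.0):.4f}%")   # ~0.003% at 1 V
print(f"{100 * residual_thdn(0.00002, 20e-6, 0.1):.4f}%")   # ~0.02% at 0.1 V
```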
Many commercial units specify the residual distortion at only one input voltage or at the full scale of one range. The performance may degrade by as much as 10 dB when the signal is at the bottom of an input range. This is because THD + N measurements are a ratio of the distortion components and noise to the signal level. At the full-scale input voltage, the voltage in the notch filter is a maximum and the filter's noise contribution will be minimized. As the level drops, the residual noise in the notch filter becomes a larger percentage of the reading. When the next input range is reached, the residuals improve again. This limitation is in addition to the input-noise problem discussed previously because it results from noise in a later portion of the instrument.
Figure 10.3.16 Addition of distortion: (a) addition of transfer-function nonlinearities, (b) addition of distortion components.
The distortion components produced by each nonlinearity can be seen to be in phase and will sum to a component of twice the magnitude. However, if the second device under test has a complementary transfer characteristic, as shown in Figure 10.3.17, we obtain quite a different result. When the devices are cascaded, the effects of the two curves will cancel, yielding a straight line for the transfer characteristic. The corresponding distortion products are out of phase with each other, resulting in no distortion components in the final output.
It is quite common for this to occur at low levels of distortion, especially between the test equipment and the device under test. For example, if the test equipment has a residual of 0.002 percent when connected to itself and readings of 0.001 percent are obtained from the circuit under test, cancellations are occurring. It is also possible for cancellations to occur in the test equipment itself, with the combined analyzer and signal generator system giving readings lower than the sum of their individual residuals. If the distortion is from an even-order (asymmetrical) nonlinearity, reversing the phase of the signal between the offending devices will change a cancellation to an addition. If the distortion is from an odd-order (symmetrical) nonlinearity, phase inversions will not affect the cancellation.
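This cancellation behavior can be demonstrated with simple polynomial nonlinearities (a sketch, NumPy assumed; the coefficient a is an arbitrary small value):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1000, endpoint=False)
x = np.sin(2 * np.pi * 5 * t)
a = 0.01                                   # arbitrary distortion coefficient

def even(s):   return s + a * s**2         # asymmetrical (even-order) curve
def even_c(s): return s - a * s**2         # complementary curve
def odd(s):    return s + a * s**3         # symmetrical (odd-order) curve
def odd_c(s):  return s - a * s**3

# Cascading complementary curves cancels the distortion (residual ~ a^2).
print(np.max(np.abs(even_c(even(x)) - x)))

# A polarity inversion between stages turns the even-order cancellation
# into an addition (residual ~ 2a, double one stage's distortion) ...
y_inv = even_c(-even(x))
print(np.max(np.abs(-y_inv - x)))

# ... but leaves an odd-order cancellation intact (residual still ~ a^2).
print(np.max(np.abs(-odd_c(-odd(x)) - x)))
```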
Figure 10.3.17 Cancellation of distortion: (a) cancellation of distortion waveform, (b) cancellation of transfer-characteristic nonlinearity.
This effect is illustrated in Figure 10.3.18 for a three-pole low-pass filter. If the frequency of the generator is off by 3 percent, the gain measurement will be off by 1 dB. A higher-order filter or a less accurate generator will produce more error.
Figure 10.3.19 shows the effect of generator distortion on the gain measurement of a multipole high-pass filter. The harmonics of the generator are not attenuated as much by the filter as is the fundamental. If the signal-source distortion is high, as with function generators, the gain measurement will be in error. This effect is most important when measuring notch filters in an equalizer, where the distortion will appear as inadequate notch depth.
Figure 10.3.19 suggests that the effect of these errors on distortion measurements is more
severe. The gain introduced by a filter on the harmonics of the generator can make them exceed
the distortion of the filter itself. A three-pole high-pass filter will introduce 18 dB of gain at the
second harmonic and 29 dB of gain at the third. Under these conditions an oscillator which has
0.001 percent second- and 0.001 percent third-harmonic distortion will read 0.03 percent when
measuring a distortion-free filter. These errors necessitate the use of an oscillator with very low
distortion.
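The arithmetic of this example can be verified by applying the filter's 18 dB/octave relative gain to each harmonic (a sketch; the computed third-harmonic gain is about 28.5 dB, which the text rounds to 29 dB):

```python
import math

slope = 18.0                                  # dB per octave, three-pole filter
h2_gain_db = slope * math.log2(2)             # 18 dB at the second harmonic
h3_gain_db = slope * math.log2(3)             # ~28.5 dB at the third harmonic

d = 0.00001                                   # 0.001 percent per harmonic
h2 = d * 10 ** (h2_gain_db / 20)
h3 = d * 10 ** (h3_gain_db / 20)
thd = math.sqrt(h2 ** 2 + h3 ** 2)            # rms sum of the boosted harmonics
print(f"apparent THD = {100 * thd:.3f}%")     # about 0.03 percent
```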
Another source of error in measurements is the output impedance of the generator. The amplitude and phase response of a device under test will often be affected by its input impedance interacting with the source impedance of the generator. These devices form a resistive divider in which the shunt leg is the nonconstant impedance of the device under test. This causes a variation of the voltage at the input to the test device, thus corrupting the measurements. Low-output-impedance generators will suffer less variation with load than high-impedance generators. However, if the system response is being measured, the generator impedance should be equal to the source impedance of the device normally driving that input. Transformer input stages often require a specific source impedance to provide correct damping for optimum high-frequency response. Too large a source impedance will cause excessive rolloff, while too low a source impedance will produce an underdamped or peaked response. For example, most microphone inputs are designed to be driven from a 150-Ω source, a value close to the typical microphone source impedance.
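The divider effect can be illustrated with assumed values (a hypothetical input whose impedance dips with frequency):

```python
def delivered_fraction(z_source, z_input):
    """Voltage divider: fraction of the generator EMF reaching the input."""
    return z_input / (z_source + z_input)

# A 600-ohm generator driving an input whose impedance dips from
# 10 kilohms to 2 kilohms over frequency (illustrative values):
print(delivered_fraction(600.0, 10e3))   # ~0.943
print(delivered_fraction(600.0, 2e3))    # ~0.769, roughly a 1.8-dB level shift
```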
10.3.6 References
1. Tremaine, H. M.: Audio Cyclopedia, Howard W. Sams, Indianapolis, Ind., 1975.
2. Ladegaard, P.: “Swept Distortion Measurements—An Effective Tool for Revealing TIM in Amplifiers with Good Correlation to Subjective Evaluation,” B&K Application Note 17-234, B&K Instruments, Naerum, Denmark, 1977.
3. Thiele, A. N.: “Measurement of Nonlinear Distortion in a Bandlimited System,” J. Audio Eng. Soc., AES, New York, N.Y., vol. 31, pp. 443–445, June 1983.
4. Small, R.: “Total Difference Frequency Distortion: Practical Measurements,” J. Audio Eng. Soc., AES, New York, N.Y., vol. 34, no. 6, p. 427, June 1986.
5. Leinonen, E., M. Otala, and J. Curl: “Method for Measuring Transient Intermodulation Distortion,” Audio Engineering Society preprint 1185, AES, New York, N.Y., October 1976.
6. Skritek, P.: “Simplified Measurement of Squarewave/Sinewave and Related Distortion Test
Methods,” Audio Engineering Society preprint 2195, AES, New York, N.Y., 1985.
7. Hofer, B.: “Practical Extended Range DIM Measurements,” Audio Engineering Society preprint 2334, AES, New York, N.Y., March 1986.
10.3.7 Bibliography
Bauman, P., S. Lipshitz, and J. Vanderkooy: “Cepstral Techniques for Transducer Measurement:
Part II,” Audio Engineering Society preprint 2302, AES, New York, N.Y., October 1985.
Berman, J. M., and L. R. Fincham: “The Application of Digital Techniques to the Measurement
of Loudspeakers,” J. Audio Eng. Soc., AES, New York, N.Y., vol. 25, June 1977.
Cabot, R. C.: “Measurement of Audio Signal Slew Rate,” Audio Engineering Society preprint
1414, AES, New York, N.Y., November 1978.
Lipshitz, S., T. Scott, and J. Vanderkooy: “Increasing the Audio Measurement Capability of FFT
Analyzers by Microcomputer Post-Processing,” Audio Engineering Society preprint 2050,
AES, New York, N.Y., October 1983.
Metzler, R. E.: “Automated Audio Testing,” Studio Sound, August 1985.
Metzler, R. E., and B. Hofer: “Wow and Flutter Measurements,” Audio Precision 1 Users' Man-
ual, Audio Precision, Beaverton, Ore., July 1986.
Moller, H.: “Electroacoustic Measurements,” B&K Application Note 16-035, B&K Instruments,
Naerum, Denmark.
Moller, H., and C. Thompsen: “Swept Electroacoustic Measurements of Harmonic Difference
Frequency and Intermodulation Distortion,” B&K Application Note 15-098, B&K Instru-
ments, Naerum, Denmark.
Otala, M., and E. Leinonen: “The Theory of Transient Intermodulation Distortion,” IEEE Trans.
Acoust. Speech Signal Process., ASSP-25(1), February 1977.
Preis, D.: “A Catalog of Frequency and Transient Responses,” J. Audio Eng. Soc., AES, New
York, N.Y., vol. 24, June 1976.
Schrock, C.: “The Tektronix Cookbook of Standard Audio Measurements,” Tektronix Inc., Bea-
verton, Ore., 1975.
Vanderkooy, J.: “Another Approach to Time Delay Spectrometry,” Audio Engineering Society
preprint 2285, AES, New York, N.Y., October 1985.
Chapter
10.4
Time Domain Audio Measurements
10.4.1 Introduction
In addition to characterizing the frequency-domain behavior of a device, it is also informative to examine the time-domain behavior of audio components. The most common signals for this purpose are sine waves, triangle waves, square waves, and tone bursts.
RT = 0.35 / BW (10.4.1)
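Equation (10.4.1) is the familiar 10-to-90-percent rise-time/bandwidth relation for a single-pole low-pass response; the constant 0.35 follows from ln 9 / (2 pi). A quick derivation check:

```python
import math

tau = 1.0                             # time constant of a single-pole low-pass, s
rise_time = tau * math.log(9)         # 10% -> 90% of the step response 1 - e^(-t/tau)
bandwidth = 1 / (2 * math.pi * tau)   # -3 dB bandwidth, Hz
print(rise_time * bandwidth)          # ~0.3497, hence RT = 0.35 / BW
```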
Figure 10.4.1 Effects of amplitude and phase response on square-wave characteristics. (From [1].
Used with permission.)
Figure 10.4.2 Definition of rise time, overshoot, ringing, and droop. (From [1]. Used with permission.)
Tone bursts are another technique for evaluating the response of audio devices to transients. They are created by gating a sine wave on and off at its zero crossings. A tone burst concentrates the energy of the waveform closer to a particular frequency, enabling evaluation of individual sections of the audio-frequency range. However, tone bursts still contain substantial high-frequency energy, as shown in Figure 10.4.3. The number of cycles on and off significantly affects the frequency spread of the energy. This frequency spreading can yield anomalous results, requiring extreme care in interpretation. Linkwitz [2] proposed the use of shaped bursts, employing a windowing function much the same as the Hamming window employed in FFT analysis. The shaping of the burst rise and fall reduces the spread in frequency, concentrating the energy near the frequency of the sine wave.
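The benefit of shaping can be illustrated by comparing the spectral energy concentration of a rectangular and a shaped burst (a sketch, NumPy assumed; a Hann shape stands in here for the shaped-burst window):

```python
import numpy as np

fs = 48000
f0 = 1000.0
n = int(fs * 5 / f0)                       # five cycles at 1 kHz
t = np.arange(n) / fs

burst = np.sin(2 * np.pi * f0 * t)         # rectangular five-cycle burst
shaped = burst * np.hanning(n)             # shaped (windowed) burst

def frac_energy_near(sig, lo=500.0, hi=1500.0, nfft=65536):
    """Fraction of total spectral energy within [lo, hi] Hz."""
    spec = np.abs(np.fft.rfft(sig, nfft)) ** 2
    freqs = np.fft.rfftfreq(nfft, 1 / fs)
    band = (freqs >= lo) & (freqs <= hi)
    return spec[band].sum() / spec.sum()

print(frac_energy_near(burst))             # noticeably less than 1
print(frac_energy_near(shaped))            # very nearly 1: energy stays near 1 kHz
```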
Tone-burst testing is common with loudspeakers, yielding qualitative information on the
damping characteristics of the drivers at a glance. As with square waves, the common parameters
specified in tone-burst measurements are overshoot and ringing. The overshoot or undershoot on
a tone burst is usually taken to be the amount by which the burst envelope goes above or below
the steady-state on level. Ringing on a tone burst refers to the tendency to continue oscillating
after the burst has stopped. This gives the appearance of a tail which continues after the body of
the burst.
Figure 10.4.3 Tone-burst spectra: (a) single-cycle sine-wave burst, (b) five-cycle sine-wave burst. (From [2]. Used with permission.)
Figure 10.4.4 Typical tone-burst response of a compressor, illustrating its time response. (From [3]. Used with permission.)
10.4.3 References
1. Tremaine, H. M.: Audio Cyclopedia, Howard W. Sams, Indianapolis, Ind., 1975.
2. Linkwitz, S.: “Narrow Band Testing of Acoustical Systems,” Audio Engineering Society
preprint 1342, AES, New York, N.Y., May 1978.
3. Cabot, R. C.: “Limiters, Compressors, and Expanders,” Sound & Video Contractor, Intertec Publishing, Overland Park, Kan., vol. 26, November 1985.
10.4.4 Bibliography
Bauman, P., S. Lipshitz, and J. Vanderkooy: “Cepstral Techniques for Transducer Measurement:
Part II,” Audio Engineering Society preprint 2302, AES, New York, N.Y., October 1985.
Berman, J. M., and L. R. Fincham: “The Application of Digital Techniques to the Measurement
of Loudspeakers,” J. Audio Eng. Soc., AES, New York, N.Y., vol. 25, June 1977.
Cabot, R. C.: “Measurement of Audio Signal Slew Rate,” Audio Engineering Society preprint
1414, AES, New York, N.Y., November 1978.
Lipshitz, S., T. Scott, and J. Vanderkooy: “Increasing the Audio Measurement Capability of FFT
Analyzers by Microcomputer Post-Processing,” Audio Engineering Society preprint 2050,
AES, New York, N.Y., October 1983.
Metzler, R. E.: “Automated Audio Testing,” Studio Sound, August 1985.
Metzler, R. E., and B. Hofer: “Wow and Flutter Measurements,” Audio Precision 1 Users' Man-
ual, Audio Precision, Beaverton, Ore., July 1986.
Moller, H.: “Electroacoustic Measurements,” B&K Application Note 16-035, B&K Instruments,
Naerum, Denmark.
Moller, H., and C. Thompsen: “Swept Electroacoustic Measurements of Harmonic Difference
Frequency and Intermodulation Distortion,” B&K Application Note 15-098, B&K Instru-
ments, Naerum, Denmark.
Otala, M., and E. Leinonen: “The Theory of Transient Intermodulation Distortion,” IEEE Trans.
Acoust. Speech Signal Process., ASSP-25(1), February 1977.
Preis, D.: “A Catalog of Frequency and Transient Responses,” J. Audio Eng. Soc., AES, New
York, N.Y., vol. 24, June 1976.
Schrock, C.: “The Tektronix Cookbook of Standard Audio Measurements,” Tektronix Inc., Bea-
verton, Ore., 1975.
Vanderkooy, J.: “Another Approach to Time Delay Spectrometry,” Audio Engineering Society
preprint 2285, AES, New York, N.Y., October 1985.
Section
11
Video Signal Measurement and Analysis
New test instruments are rising to the challenge posed by the new technologies being introduced to the video production process. As the equipment used by broadcasters and video professionals becomes more complex, the requirements for advanced, specialized maintenance tools also increase. These instruments range from simple go/no-go status indicators to automated test routines with preprogrammed pass/fail limits. Video quality control efforts must focus on the overall system, not just a particular island.
The attribute that makes a good test instrument is really quite straightforward: accurate measurement of the signal under test. The attributes important to the user, however, usually involve the following:
• Affordability
• Ease of use
• Performance
Depending upon the application, the order of these criteria may be inverted (performance, ease of use, then affordability). Suffice it to say, however, that all elements of these specifications combine to translate into the user's definition of the ideal instrument.
Computer-based video test instruments provide the maintenance engineer with the ability to
rapidly measure a number of parameters with exceptional accuracy. Automated instruments offer
a number of benefits, including reduced setup time, test repeatability, waveform storage and
transmission capability, and remote control of instrument/measurement functions.
The memory functions of the new breed of instruments provide important new capabilities,
including archiving test setups and reference waveforms for ongoing projects and comparative
tests. Hundreds of files can be saved for later use. With automatic measurement capabilities,
even a novice technician can perform detail-oriented measurements quickly and accurately.
In the rush to embrace advanced, specific-purpose test instruments, it is easy to overlook the grandparents of all video test devices: the waveform monitor and vectorscope. Just because they are not new to the scene does not mean that they have outlived their usefulness.
The waveform monitor and vectorscope still fill valuable roles in the test and measurement world. Both, of course, have their roots in the general-purpose oscilloscope. This heritage imparts some important benefits. The scope is the most universal of all instruments, combining the best abilities of the human user and the machine. Electronic instruments are well equipped to quickly and accurately measure a given amplitude, frequency, or phase difference; they perform calculation-based tasks with great speed. The human user, however, is far superior to any machine in interpreting and analyzing an image. The waveform monitor and vectorscope present to the user, in an instant, a wealth of information that allows rapid characterization and understanding of the signal under consideration.
Video Signal Measurement and Analysis 11-3
Hamada, T., S. Miyaji, and S. Matsumoto: “Picture Quality Assessment System by Three-Layered Bottom-Up Noise Weighting Considering Human Visual Perception,” SMPTE Journal, SMPTE, White Plains, N.Y., pp. 20–26, January 1999.
MacAdam, D. L.: “Visual Sensitivities to Color Differences in Daylight,” J. Opt. Soc. Am., vol.
32, pp. 247–274, 1942.
Mertz, P.: “Television and the Scanning Process,” Proc. IRE, vol. 29, pp. 529–537, October 1941.
Pank, Bob (ed.): The Digital Fact Book, 9th ed., Quantel Ltd, Newbury, England, 1998.
Reed-Nickerson, Linc: “Understanding and Testing the 8-VSB Signal,” Broadcast Engineering,
Intertec Publishing, Overland Park, Kan., pp. 62–69, November 1997.
Robin, M., and M. Poulin: Digital Television Fundamentals, 2nd ed., McGraw-Hill, New York,
N.Y., 2001.
SMPTE Engineering Guideline EG 1, “Alignment Color Bar Test Signal for Television Picture
Monitors,” SMPTE, White Plains, N.Y., 1990.
“SMPTE Standard: For Television—Color Reference Pattern,” SMPTE 303M, SMPTE, White
Plains, N.Y., 1999.
SMPTE Standard: SMPTE 259M, “Serial Digital Interface for 10-bit 4:2:2 Components and
4Fsc NTSC Composite Digital Signals,” SMPTE, White Plains, N.Y., 1997.
Standards and Definitions Committee, Society for Information Display.
Stremler, Ferrel G.: “Introduction to Communications Systems,” Addison-Wesley Series in Elec-
trical Engineering, Addison-Wesley, New York, December 1982.
Tannas, Lawrence E., Jr.: Flat Panel Displays and CRTs, Van Nostrand Reinhold, New York, pg.
18, 1985.
Uchida, Tadayuki, Yasuaki Nishida, and Yukihiro Nishida: “Picture Quality in Cascaded Video-
Compression Systems for Digital Broadcasting,” SMPTE Journal, SMPTE, White Plains,
N.Y., pp. 27–38, January 1999.
Verona, Robert: “Comparison of CRT Display Measurement Techniques,” Helmet-Mounted Dis-
plays III, Thomas M. Lippert (ed.), Proc. SPIE 1695, SPIE, Bellingham, Wash., pp. 117–
127, 1992.
Chapter
11.1
Video Information Concepts
11.1.1 Introduction1
For the purpose of this handbook, video information may be defined as data conveying a descrip-
tion of a picture that can be displayed by an appropriate picture-reproducing system through the
use of television signals. When the information is that possessed by the electrical signals occur-
ring at a given location in a television system, the character of the information can be interpreted
in terms of the picture which would be displayed by some reference reproducer when driven by
the given signals. For purposes of this handbook, the reference reproducer is here defined as a
television signal-processing and display system which correctly displays the full information
content of any set of signals that have been properly formed and have been properly inserted into
the signal-processing circuits of the system. All the signal-processing circuits are assumed to be
free of distortion, noise, and interference. The only nonlinear element in the reference reproducer
is taken to be its picture display device, whose photoelectric transfer characteristics are specified
to be the same as those of the program director’s studio monitor.
In conventional monochrome and color systems, the information in the associated pictures
and signals can be broadly classified as:
• Monochrome information: the information carried by the monochrome signal in both mono-
chrome and color systems
• Coloring information: the information carried by all signals other than the monochrome sig-
nal in a color television system
Monochrome information by itself describes a monochrome picture without reference to its
chromaticity. Monochrome and coloring information together describe a color picture.
When it is desired to discuss the types of information carried by the signals in the primary-
color signal channels of a color television transmitter or receiver, the following classifications
are pertinent:
• Red-primary information: the information carried by a red-primary signal
1. Portions of this chapter are adapted from: D. G. Fink (ed.): Television Engineering Handbook,
McGraw-Hill, New York, N.Y., 1957. Used with permission.
equal to the raster height may be designated as unity along both axes. Thus, raster coordinates of
a point in a raster with a 4:3 aspect ratio would range between zero and unity for the vertical
coordinate and between zero and 1.33 for the horizontal one.
A point image is the pattern of light formed on a television display in response to light
received by a television camera from a single object point in its field of view. The position of any
given point image on a reproducing raster is determined by the position of the corresponding
object point together with whatever geometric distortions (e.g., pin-cushioning) may exist in the
raster. Its position is also influenced by the line structure in the scanning raster unless its size is
large in comparison with the distance between adjacent scanning lines, as happens when the
object point lies appreciably outside the camera field of focus.
A line image is the pattern of light formed on a television display in response to light received
by a television camera from a uniformly illuminated ideally narrow line in its field of view. The
position of any given line image on a reproducing raster is determined by the position of the cor-
responding object line in a manner analogous to that for a point image. A line image oriented
perpendicular to the scanning path may be called a transverse line image, and one parallel to this
path may be called a longitudinal line image.
A picture element is the smallest area of a television picture capable of being delineated by an
electrical signal passed through the system or part thereof. The number of picture elements (pix-
els) in a complete picture, and their geometric characteristics of vertical height and horizontal
width, provide information on the total amount of detail which the raster can display and on the
sharpness of that detail, respectively.
The description of the information content of a picture through use of the above quantities
may be carried out in varying degrees of exactness. In the case of a monochrome picture, the
method used would be expected to lead to the specification of definite numerical values for the
displayed luminance level at points having given raster coordinates. If the monochrome picture
concerned is that resulting from withdrawal of the chrominance signal from a color display dur-
ing a color transmission, the observed luminance levels represent monochrome levels of the
color picture. In the case of a color picture, the method would lead to the specification of numer-
ical values for both luminance levels and chromaticity values at points having given raster coor-
dinates.
The most direct rigorous approach involves specification of numerical values at all points in
the picture. This requires associating the pertinent colorimetric values with the numerical values
of explicit mathematical functions of the raster coordinates. In general, these are continuous
functions because of the continuous nature of the distribution functions representing typical
aperture transmittances.
It is meaningful to specify only as much information as can be recognized to consist of sepa-
rate and independent elements. To carry out this type of description, the raster area can be subdi-
vided into picture elements, the total number of elements being determined by the spectrum
characteristics of the monochrome signal and by the percentage of the total frame scanning time
that is consigned to blanking intervals. The video signal waveform for one frame period can be
regarded as a portion of a repetitive wave corresponding to a stationary picture, and its spectrum
then turns out to have a finite number of discrete frequency components. The total information
capacity of this spectrum is twice the number of components, since each component can be
adjusted in both amplitude and phase.
The waveform for a single frame has the same total information capacity, and the waveform
for any fractional part of a frame has a corresponding fraction of this total information capacity.
Consequently, the portion of the waveform used for active scanning of the raster during one
frame period has an information capacity equal to twice the number of frequency components in
the spectrum for the signal multiplied by the number representing the fractional portion of the
frame period used in active scanning. This information capacity is a number that may be inter-
preted as the number of picture elements in the raster area. The elements may be regarded as
being uniformly distributed over the raster in the sense that their separations along the scanning
lines represent distances traversed by the scanning spot in equal times.
E0(t) = (1/π) ∫₀^∞ cos(2πft) df   (11.1.1)
∫₋∞^+∞ E(t) dt = 1   (11.1.2)
f and t here denote frequency and time, respectively. After band limiting, the frequency expan-
sion of the resulting signal becomes
E(t) = (1/π) ∫₀^fc a(f) cos[2πft – α(f)] df   (11.1.3)
in which a(f) and α(f) are the attenuation and phase shift, respectively, of the channel at frequency f, and fc is the frequency above which no appreciable transmission occurs.
The luminance distribution across the associated transverse line image of a signal in this system has the form of the impulse response of the signal channel, which, in turn, is a curve of the form sin x/x. Specifically, it is given by Equation (11.1.3) with a(f) constant and α(f) equal to zero. This integrates to
E(t) = (fc/π) [sin(2πfc t)/(2πfc t)]   (11.1.4)
Note that the function E(t) has its maximum value at t = 0 and that it extends over the time
range t = ±1/(4 fc). The time interval Δt = 1/(2 fc) is then representative of the maximum distance
through which the line image can be moved along the scanning path so that the luminance at a
given point on the raster reaches the value E(0) at an internal point in the range and is down to
2E(0)/π at either end point. Specifically, this distance is the distance traversed by the scanning
spot in time Δt. The length of the picture element is defined to be this distance.
The number of picture elements in the raster is twice the maximum number of frequency
components that the given signal can possess, multiplied by the factor representing the fractional
portion of the total frame scanning interval used in active raster scanning. The elements are spec-
ified to be distributed uniformly over the raster, the distance between centers of adjacent ele-
ments on a given scanning path being fixed. That this distance is precisely equal to the picture-
element length in the television system under discussion can be shown as follows.
The spectrum for a signal representing a stationary picture can be shown to contain only fre-
quencies that are integral multiples of the picture repetition frequency f. Thus, if the channel cut-
off frequency is fc, the signal can contain not more than fc / f frequency components. This
corresponds to 2 fc / f information elements, and these are to be conceived as distributed uni-
formly throughout the raster scanning and blanking intervals. Picture elements are simply those
information elements that occur during the raster scanning intervals. The time interval between
successive elements of either type is evidently given by the frame scanning interval 1/f divided
by the total number 2 fc / f of information elements occurring in that time. This value turns out to
be 1/(2 fc), which is precisely the time interval Δt found just previously to be indicative of the
picture-element length. It follows that in the system under discussion, adjacent picture elements on
a scanning path are in contact with each other, although they do not overlap. It is in order to bring
about this particular result that the picture-element length is defined in terms of luminance ratios
of 2/π.
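This 2/π relationship can be checked numerically. The following sketch evaluates the impulse response of Equation (11.1.4) at t = 0 and at t = 1/(4 fc); the cutoff frequency fc = 4.2 MHz is chosen here only as an illustrative NTSC-like value.

```python
import math

def impulse_response(t, fc):
    """Impulse response of an ideal sharp-cutoff channel, Equation (11.1.4):
    E(t) = (fc/pi) * sin(2*pi*fc*t) / (2*pi*fc*t)."""
    if t == 0.0:
        return fc / math.pi          # limit of the sinc function at t = 0
    x = 2.0 * math.pi * fc * t
    return (fc / math.pi) * math.sin(x) / x

fc = 4.2e6                            # illustrative NTSC-like cutoff, Hz
peak = impulse_response(0.0, fc)
edge = impulse_response(1.0 / (4.0 * fc), fc)

# At t = 1/(4*fc) the response is 2/pi of its peak, which is why the
# picture-element length is taken as the distance scanned in 1/(2*fc).
print(edge / peak)                    # ~0.6366, i.e., 2/pi
```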
The definition for picture-element width leads to a similar result. In the system under discus-
sion, displacing a longitudinal line image from exact coincidence with a given scanning path
results in an abrupt drop of the displayed luminance from the value characterizing the line image
to zero. Hence, the added condition appended to the definition becomes applicable and pre-
scribes that the picture-element width in this case is equal to the raster pitch distance.
It is thus seen that in a television system having a mathematical-point scanning aperture in the
camera, having sharp-cutoff signal transmission channels with distortionless passbands, and hav-
ing no nonlinear transducers, the picture elements exactly fill the raster area without overlapping.
Each such picture element may be called an ultimate picture element for the given television
system. It has the smallest possible dimensions that the picture element in a linear system with
given cutoff frequency can possess. Enlarging the camera scanning aperture from a point to a
configuration with finite dimensions immediately brings about an increase in the picture-ele-
ment length; a considerably enlarged aperture also causes the picture-element width to increase
so that overlap occurs both longitudinally and transversely. Replacement of the sharp-cutoff fre-
quency characteristic of the signal channel by a gradual one with the same cutoff frequency
changes the impulse response from that in Equation (11.1.4) to the more general form in Equa-
tion (11.1.3). The latter invariably yields a greater interval between points for which E(t) has a
value equal to a fraction 2/π of its maximum value than does the former. Lowering the cutoff fre-
quency of the channel causes the total number of picture elements to decrease and causes each
element to become longer.
In a nonlinear system, the actual picture element can be smaller than the ultimate element.
This can occur, for example, when the picture display device has a gamma exponent greater than
unity. This is significant since the gamma exponent is usually found to lie in the range γ = 2.2
± 0.2 for conventional cathode-ray picture tubes. In practice, the picture-element length is never-
theless usually larger than that of the ultimate element because of the effects of camera aperture
size and the gradual cutoff in the frequency characteristic of the signal channel. The width, how-
ever, is ordinarily that of the ultimate element.
The total number of ultimate picture elements in the raster of a given television system is the
product of the number 2 fc / f of information elements in the frame interval and the percentage of
the frame interval consumed in active scanning of the raster. Thus, if a percentage PH of each
line and a percentage PV of the total number of lines are used in active scanning of the raster,
then there are 2PH PV fc / f picture elements in the raster. Moreover, the number of picture ele-
ments on each line of the raster is given by this number divided by the number of lines in the ras-
ter. The number of lines in a frame is fH / f, where fH is the line scanning frequency, and the
number of lines in the raster is therefore PV fH / f. Consequently, there are 2PH fc / fH picture ele-
ments on each raster line.
A list of picture-element statistics for various television systems is given in Table 11.1.1.
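As a rough illustration of these counting formulas, the sketch below plugs NTSC-like values into the expressions 2PH PV fc / f (elements per raster) and 2PH fc / fH (elements per line). The parameter values are assumptions chosen for illustration; they are not taken from Table 11.1.1.

```python
def picture_elements(fc, f, fh, ph, pv):
    """Ultimate picture-element counts from the formulas in the text:
    total elements = 2*PH*PV*fc/f, elements per line = 2*PH*fc/fH."""
    total = 2.0 * ph * pv * fc / f
    per_line = 2.0 * ph * fc / fh
    return total, per_line

# Illustrative NTSC-like parameters (assumed values):
fc = 4.2e6          # channel cutoff frequency, Hz
f = 30000.0 / 1001  # frame (picture repetition) frequency, ~29.97 Hz
fh = 15734.264      # line-scanning frequency, Hz
ph = 52.5 / 63.556  # fraction of each line used in active scanning
pv = 480.0 / 525.0  # fraction of the lines used in active scanning

total, per_line = picture_elements(fc, f, fh, ph, pv)
print(round(total), round(per_line))  # roughly 2.1e5 total, ~440 per line
```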
element would also have this value. The resolution-element width, however, would be equal to
the raster pitch distance divided by the Kell factor, or to 1.4 times the raster pitch distance. At the
same time, the picture-element width would be exactly equal to the raster pitch distance. Just as
the picture element under this special condition may be called the ultimate picture element, the
resolution element under this same condition can be called the ultimate resolution element. Its
dimensions indicate the maximum resolving power of which the given television system is capa-
ble.
When the signal-channel cutoff frequency is unchanged although cutoff is made gradual
rather than sharp, both picture element and resolution element become longer. Their widths
remain unchanged as long as the camera scanning aperture remains a mathematical point. When
the camera aperture is enlarged beyond a diameter comparable with the raster pitch distance,
both picture-element and resolution-element width become greater than their ultimate values,
and it cannot be tacitly assumed that the ratio of their widths remains equal to the Kell factor.
Actual dimensions of the resolution element in a given television system cannot be calculated
on theoretical grounds but must be established by subjective experimental observation. Statistics
on the ultimate resolution element, however, are readily calculated on the basis of the specifica-
tions given previously. They are presented for various television systems in Table 11.1.2, a value
of 0.70 being assumed for the Kell factor.
Figure 11.1.1 Video signal spectra: (a) camera scanning spot, shown with a Gaussian distribution, passing over a luminance boundary on a scanning line; (b) corresponding camera output signal resulting from the convolution of the spot and luminance distributions.
Figure 11.1.2 Scanning patterns of interest in analyzing conventional video signals: (a), (b), (c) flat fields useful for determining color purity and transfer gradient (gamma); (d) horizontal half-field pattern for measuring low-frequency performance; (e) vertical half field for examining high-frequency transient performance; (f) display of oblique bars; (g) in monochrome, a tonal wedge for determining contrast and luminance transfer characteristics; in color, a display used for hue measurements and adjustments; (h) wedge for measuring horizontal resolution; (i) wedge for measuring vertical resolution.
• The maximum output signal frequency generated by the camera or other pickup/generating
device
• Maximum modulating frequency corresponding to: 1) the fully transmitted (radiated) side-
band, or 2) the system used to convey the video signal from the source to the display
• Maximum video frequency present at the picture-tube (display) control electrodes
The maximum camera frequency is determined by the design and implementation of the
imaging element. The maximum modulating frequency is determined by the extent of the video
channel reserved for the fully transmitted sideband. The channel width, in turn, is chosen to pro-
vide a value of horizontal resolution approximately equal to the vertical resolution implicit in the
scanning pattern. The maximum video frequency at the display is determined by the device and
support circuitry of the display system.
Hr = (Rh/α) × ι   (11.1.5)

Where:
Hr = horizontal resolution factor in lines per megahertz
Rh = lines of horizontal resolution per hertz of the video waveform
α = aspect ratio of the display
ι = active line period in microseconds

For NTSC, the horizontal resolution factor is:

78.8 = [2/(4/3)] × 52.5   (11.1.6)
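The arithmetic of Equation (11.1.6) can be reproduced directly:

```python
# Horizontal resolution factor, Equation (11.1.5): Hr = (Rh / alpha) * iota
rh = 2.0           # lines of horizontal resolution per hertz (two per cycle)
alpha = 4.0 / 3.0  # display aspect ratio
iota = 52.5        # NTSC active line period, microseconds

hr = (rh / alpha) * iota
print(hr)          # 78.75, quoted as 78.8 lines per MHz in Equation (11.1.6)
```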
• Field-frequency component
• Components of the line frequency and its harmonics

Surrounding each line-frequency harmonic is a cluster of components, each separated from the next by an interval equal to the field-scanning frequency.
It is possible for the clusters surrounding adjacent line-frequency harmonics to overlap one another. As shown in Figure 11.1.4, two patterns situated on adjacent vertical columns produce the same value of video frequency when scanned. Such “intercomponent confusion” of spectral energy is fundamental to the scanning process. Its effects are visible when a heavily striated pattern (such as that of a fabric with an accented weave) is scanned with the striations approximately parallel to the scanning lines. In the NTSC and PAL color systems, in which the luminance and chrominance signals occupy the same spectral region (one being interlaced in frequency with the other), such intercomponent confusion may produce prominent color fringes. Precise filters, which sharply separate the luminance and chrominance signals (comb filters), can remove this effect, except in the diagonal direction.

Figure 11.1.3 An array of image patterns corresponding to indicated values of m and n. (After [1].)
In static and slowly moving scenes, the clusters surrounding each line-frequency harmonic
are compact, seldom extending further than 1 or 2 kHz on either side of the line-harmonic fre-
quency. The space remaining in the signal spectrum is unoccupied and may be used to accommo-
date the spectral components of another signal having the same structure and frequency spacing.
For scenes in which the motion is sufficiently slow for the eye to perceive the detail of moving
objects, it may be safely assumed that less than half the spectral space between line-frequency
harmonics is occupied by energy of significant magnitude. It is on this principle that the NTSC-
and PAL-compatible color television systems are based. The SECAM system uses frequency-
modulated chrominance signals, which are not frequency interlaced with the luminance signal.
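The frequency-interleaving principle can be illustrated with the NTSC subcarrier choice, which places the chrominance carrier at an odd multiple (455) of half the line frequency so that it falls midway between two luminance line-frequency harmonics. The figures below are the standard NTSC values, used here only as a sketch of the idea.

```python
# Frequency interleaving: the NTSC color subcarrier sits at 455/2 times the
# line frequency, midway between luminance harmonics 227*fH and 228*fH.
fh = 4.5e6 / 286.0            # NTSC line-scanning frequency, ~15734.266 Hz
fsc = (455.0 / 2.0) * fh      # color subcarrier, ~3.579545 MHz

below = 227.0 * fh            # nearest luminance line harmonics
above = 228.0 * fh
print((fsc - below) / fh, (above - fsc) / fh)   # each ~0.5 line spacings
```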
Figure 11.1.4 The typical spectrum of a video signal, showing the harmonics of the line-scanning
frequency surrounded by clusters of components separated at intervals equal to the field-scanning
frequency. (After [1].)
11.1.4 References
1. Mertz, P.: “Television and the Scanning Process,” Proc. IRE, vol. 29, pp. 529–537, October
1941.
Chapter
11.2
Measurement of Color Displays
11.2.1 Introduction
The chromaticity and luminance of a portion of a color display device may be measured in sev-
eral ways. The most fundamental approach involves a complete spectroradiometric measurement
followed by computation using tables of color-matching functions. Spectroradiometers are avail-
able for this purpose. Another method, somewhat faster but less accurate, involves the use of a
photoelectric colorimeter. Because these devices have spectral sensitivities approximately equal
to the CIE color-matching functions, they provide direct readings of tristimulus values.
For setting up the reference white, it is often simplest to use a split-field visual comparator
and to adjust the display device until it matches the reference field (usually D65) of the compar-
ator. However, because a large spectral difference (large metamerism) usually exists between the
display and the reference, different observers may make different settings by this method. Conse-
quently, settings by one observer—or a group of observers—with normal color vision often are
used simply to provide a reference point for subsequent photoelectric measurements.
An alternative method of determining the luminance and chromaticity coordinates of any area
of a display involves measuring the output of each phosphor separately, then combining the mea-
surements using the center of gravity law, by which the total tristimulus output of each phosphor
is considered as an equivalent weight located at the chromaticity coordinates of the phosphor.
Consider the CIE chromaticity diagram shown in Figure 11.2.1 to be a uniform flat surface
positioned in a horizontal plane. For the case illustrated, the center of gravity of the three weights
(Tr , Tg, Tb), or the balance point, will be at the point Co. This point determines the chromaticity
of the mixture color. The luminance of the color Co will be the linear sum of the luminance out-
puts of the red, green, and blue phosphors. The chromaticity coordinates of the display primaries
may be obtained from the manufacturer. The total tristimulus output of one phosphor may be
determined by turning off the other two CRT guns, measuring the luminance of the specified
area, and dividing this value by the y chromaticity coordinate of the energized phosphor. This
procedure then is repeated for the other two phosphors. From this data, the color resulting from
given excitations of the three phosphors may be calculated as follows:
• Chromaticity coordinates of red phosphor = xr , yr
• Chromaticity coordinates of green phosphor = xg, yg
Figure 11.2.1 CIE chromaticity diagram illustrating the center of gravity law: weights Tr, Tg, and Tb located at the phosphor chromaticities R, G, and B, with mixture colors C1 and Co and moment distances d1 through d4 shown within the spectral locus (380–780 nm).
Total tristimulus value of red phosphor = Xr + Yr + Zr = Yr/yr = Tr   (11.2.1)

Total tristimulus value of green phosphor = Xg + Yg + Zg = Yg/yg = Tg   (11.2.2)

Total tristimulus value of blue phosphor = Xb + Yb + Zb = Yb/yb = Tb   (11.2.3)
Consider Tr as a weight located at the chromaticity coordinates of the red phosphor and Tg as a weight located at the chromaticity coordinates of the green phosphor. The location of the chromaticity coordinates of color C1 (blue gun of color CRT turned off) can be determined by taking moments along line RG to determine the center of gravity of weights Tr and Tg:
Tr × d1 = Tg × d2   (11.2.4)

Taking moments along line C1B will locate the chromaticity coordinates of the mixture color Co:

Tc1 × d3 = Tb × d4   (11.2.6)
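A minimal sketch of the full center of gravity computation follows. The phosphor chromaticities and luminances are hypothetical Rec. 709-like values, chosen so that the mixture should land near the D65 white point; they are illustrative only.

```python
def mixture_color(phosphors):
    """Center of gravity law: each phosphor contributes a 'weight'
    T = Y/y placed at its chromaticity (x, y); the mixture chromaticity
    is the balance point, and the luminances add linearly."""
    weights = [(y_lum / y, x, y, y_lum) for (x, y, y_lum) in phosphors]
    t_total = sum(t for t, _, _, _ in weights)
    x_mix = sum(t * x for t, x, _, _ in weights) / t_total
    y_mix = sum(t * y for t, _, y, _ in weights) / t_total
    luminance = sum(y_lum for _, _, _, y_lum in weights)
    return x_mix, y_mix, luminance

# Hypothetical phosphor set: (x, y, luminance), Rec. 709-like primaries
# driven so that the three luminances sum to 1 (illustrative values).
r = (0.640, 0.330, 0.2126)
g = (0.300, 0.600, 0.7152)
b = (0.150, 0.060, 0.0722)

x, y, lum = mixture_color([r, g, b])
print(round(x, 4), round(y, 4), lum)   # near D65 white (0.3127, 0.3290)
```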
• Preferred color reproduction: A departure from the preceding categories that recognizes the
preferences of the viewer. It is sometimes argued that corresponding color reproduction is not
the ultimate aim for some display systems, such as color television, and that it should be taken
into account that people prefer some colors to be different from their actual appearance. For
example, suntanned skin color is preferred to average real skin color, and sky is preferred
bluer and foliage greener than they really are.
Even if corresponding color reproduction is accepted as the target, some colors are more impor-
tant than others. For example, flesh tones must be acceptable—not obviously reddish, greenish,
purplish, or otherwise incorrectly rendered. Likewise, the sky must be blue and the clouds white,
within the viewer's range of acceptance. Similar conditions apply to other well-known colors of
common experience.
Studies have indicated that image context and image content are also factors that affect color
appearance. The use of highly chromatic backgrounds in a windowed display system is popular,
but it usually will affect the appearance of the colors in the foreground.
The contrast ratio specifies the observable difference between a pixel that is switched on and one that
is in its corresponding off state:
Cr = Lon/Loff   (11.2.8)
Where:
Cr = contrast ratio of the display
Lon = luminance of a pixel in the on state
Loff = luminance of a pixel in the off state
The area encompassed by the contrast ratio is an important parameter in assessing the perfor-
mance of a display. Two contrast ratio divisions typically are specified:
• Small area: comparison of the on and off states of a pixel-sized area
• Large area: comparison of the on and off states of a group of pixels
For most display applications, the small-area contrast ratio is the more critical parameter.
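Equation (11.2.8) amounts to a simple ratio; a minimal helper with hypothetical luminance readings might look like this:

```python
def contrast_ratio(l_on, l_off):
    """Contrast ratio, Equation (11.2.8): Cr = L_on / L_off."""
    if l_off <= 0:
        raise ValueError("off-state luminance must be positive")
    return l_on / l_off

# Hypothetical measurements in cd/m^2: a pixel driven to peak white
# measured against a 1-percent background level.
print(contrast_ratio(100.0, 1.0))   # 100.0
```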
• Line width
• TV limiting resolution
Predictably, subjective measurements tend to exhibit more variability than objective measure-
ments. Although they generally are not used for acceptance testing or quality control, subjective
CRT measurements provide a fast and relatively simple means of performance assessment.
Results usually are consistent when performed by the same observer. Results for different
observers often vary, however, because different observers use different visual criteria to make
their judgments.
The shrinking raster and line-width techniques are used to estimate the vertical dimension of
the display (CRT) beam spot size (footprint). Several underlying assumptions accompany this
approach:
• The spot is assumed to be symmetrical and Gaussian in the vertical and horizontal planes.
• The display modulation transfer function (MTF) calculated from the spot-size measurement
results in the best performance envelope that can be expected from the device.
• The modulating electronics are designed with sufficient bandwidth so that spot size is the
limiting performance parameter.
• The modulation contrast at low spatial frequencies approaches 100 percent.
Depending on the application, not all of these assumptions are valid:
• Assumption 1. Verona [5] has reported that the symmetry assumption is generally not true.
The vertical spot profile is only an approximation to the horizontal spot profile; most spot
profiles exhibit some degree of astigmatism. However, significant deviations from the sym-
metry and Gaussian assumptions result in only minor deviations from the projected perfor-
mance when the assumptions are correct.
• Assumption 2. The optimum performance envelope assumption infers that other types of
measurements will result in the same or lower modulation contrast at each spatial frequency.
The MTF calculations based on a beam footprint in the vertical axis indicate the optimum
performance that can be obtained from the display because finer detail (higher spatial fre-
quency information) cannot be written onto the screen smaller than the spot size.
• Assumption 3. The modulation circuit bandwidth must be sufficient to pass the full incoming
video signal. Typically, the video circuit bandwidth is not a problem with current technology
circuits, which usually are designed to provide significantly more bandwidth than the display
is capable of reproducing. However, in cases where this assumption is not true, the calculated
MTF based purely on the vertical beam profile will be incorrect. The calculated performance
will be better than the actual performance of the display.
• Assumption 4. The calculated MTF is normalized to 100 percent modulation contrast at zero
spatial frequency and ignores the light scatter and other factors that degrade the actual mea-
sured MTF. Independent modulation-contrast measurements at a low spatial frequency can be
used to adjust the MTF curve to correct for the normalization effects.
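Under Assumptions 1 and 4, the projected MTF follows directly from the measured spot profile. The sketch below assumes a Gaussian spot, converts a measured line width (full width at half maximum) to the Gaussian sigma, and evaluates the normalized MTF; the 0.3 mm line width is a hypothetical value.

```python
import math

def gaussian_mtf(spatial_freq, fwhm):
    """Projected display MTF from a Gaussian spot profile.
    fwhm: measured line width (full width at half maximum), mm.
    spatial_freq: cycles/mm. MTF(v) = exp(-2 * pi^2 * sigma^2 * v^2),
    normalized to 1.0 at zero spatial frequency (per Assumption 4)."""
    sigma = fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))  # FWHM -> sigma
    return math.exp(-2.0 * (math.pi * sigma * spatial_freq) ** 2)

# Hypothetical 0.3 mm line width; MTF rolls off with spatial frequency:
for v in (0.0, 1.0, 2.0, 4.0):
    print(v, round(gaussian_mtf(v, 0.3), 3))
```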
• The brightness and contrast controls are set for the desired peak luminance with an active ras-
ter background luminance (1 percent of peak luminance) using a stair-step video signal.
• While displaying a flat-field video signal input corresponding to the peak luminance, the ver-
tical gain/size is reduced until the raster lines are barely distinguishable.
• The raster height is measured and divided by the number of active scan lines to estimate the
average height of each scan line. The number of active scan lines is typically about 92 percent
of the total line count. (For example, a 525-line display has 480 active lines, an 875-line
display has 817, and a 1025-line display has 957 active lines.)
The calculated average line height typically is used as a stand-alone metric of display perfor-
mance.
The most significant shortcoming of the shrinking raster method is the variability introduced
through the determination of when the scan lines are barely distinct to the observer. Blinking and
other eye movements often enhance the distinctness of the scan lines; lines that were indistinct
become distinct again.
Line-Width Method
The line-width measurement technique requires a microscope with a calibrated graticule [5]. The
focused raster is set to a 4:3 aspect ratio, and the brightness and contrast controls are set for the
desired peak luminance with an active raster background luminance (1 percent of peak lumi-
nance) using a stair-step video signal. A single horizontal line at the anticipated peak operating
luminance is presented in the center of the display. The spot is measured by comparing its lumi-
nous profile with the graticule markings. As with the shrinking raster technique, determination
of the line edge is subjective.
result in an inflated limiting resolution measurement; too little contrast will result in a degraded
limiting resolution measurement.
Electronic resolution pattern generators typically provide a variety of resolution signals from
100 to 1000 TV lines/picture height (TVL/ph) or more in a given multiple (such as 100). Figure
11.2.3 illustrates an electronically generated resolution test pattern for high-definition video
applications.
Application Considerations
The subjective techniques discussed in this section, with the exception of TV limiting resolution,
measure the resolution of the display [5]. The TV pattern test measures image resolution, which
is quite different.
Consider as an example a video display in which the scan lines can just be perceived—about
480 scan lines per picture height. This indicates a display resolution of at least 960 TV lines,
counting light and dark lines, per the convention. If a pattern from an electronic generator is dis-
played, observation will show the image beginning to deteriorate at about 340 TV lines. This
characteristic is the result of beats between the image pattern and the raster, with the beat fre-
quency decreasing as the pattern spatial frequency approaches the raster spatial frequency. This
ratio of 340/480 = 0.7 (approximately) is known as the Kell factor. Although debated at length,
the factor does not change appreciably in subjective observations.
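The ratio can be reproduced directly from the numbers in the example (480 just-resolvable scan lines, pattern deterioration at about 340 TV lines):

```python
active_lines = 480        # scan lines per picture height (from the text)
image_resolution = 340    # TVL/ph at which the test pattern deteriorates

kell_factor = image_resolution / active_lines
print(round(kell_factor, 2))  # 0.71, i.e., approximately 0.7
```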
Half-Power-Width Method
Under the half-power-width technique, a single horizontal line is activated with the brightness
and contrast controls set to a typical operating level. The line luminance is equivalent to the high-
light luminance (maximum signal level). The central portion of the line is imaged with a micro-
scope in the plane of a variable-width slit. The open slit allows all the light from the line to pass
through to a photodetector. The output of the photodetector is displayed on an oscilloscope. As
the slit is gradually closed, the peak amplitude of the photodetector signal decreases. When the
signal drops to 50 percent of its initial value, the slit width is recorded. The width measurement
divided by the microscope magnification represents the half-power width of the horizontal scan
line.
Figure 11.2.3 Wide aspect ratio resolution test chart produced by an electronic signal generator.
(Courtesy of Tektronix.)
The half-power width is defined as the distance between symmetrical integration limits, cen-
tered about the maximum intensity point, which encompasses half of the total power under the
intensity curve. The half-power width is not the same as the half-intensity width measured
between the half-intensity points. The half-intensity width is theoretically 1.75 times the
half-power width for a Gaussian spot luminance distribution.
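The 1.75 ratio can be checked numerically for a Gaussian intensity profile. This sketch finds the half-power width by bisection (the fraction of a Gaussian's power inside ±a is erf(a/σ√2)) and compares it with the half-intensity (full-width-at-half-maximum) width:

```python
import math

sigma = 1.0  # arbitrary spot-width parameter

# Half-intensity width (FWHM): where intensity falls to 50% of peak.
fwhm = 2.0 * sigma * math.sqrt(2.0 * math.log(2.0))

# Half-power width: symmetric limits about the peak enclosing 50% of
# the total power. Solve erf(a / (sigma*sqrt(2))) = 0.5 by bisection.
lo, hi = 0.0, 5.0 * sigma
for _ in range(60):
    a = 0.5 * (lo + hi)
    if math.erf(a / (sigma * math.sqrt(2.0))) < 0.5:
        lo = a
    else:
        hi = a
half_power_width = 2.0 * a

print(round(fwhm / half_power_width, 2))  # 1.75
```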
It should be noted that the half-power line-width technique relies on line width to predict the
performance of the CRT. Many of the precautions outlined previously apply here also. The pri-
mary difference, however, is that line width is measured under this technique objectively, rather
than subjectively.
the luminance profile of the CRT spot, distance vs. luminance. The microphotometer is cali-
brated for luminance measures and for distance measures in the object plane. Each micron step
of the microphotometer represents a known increment in the object plane. The software then cal-
culates the MTF of the display based on its line spread from the calibrated luminance and dis-
tance measurements. Finite slit-width corrections also may be made to the MTF curve by
dividing it by a measurement-system MTF curve obtained from the luminance profile of an ideal
knife-edge aperture or a standard source.
The knife-edge Fourier transform measurement may be conducted using a low-spatial-fre-
quency vertical bar pattern (5 to 10 cycles) across the display with the brightness and contrast
controls set as discussed previously. The frequency response of the square wave pattern genera-
tor and video pattern generator should be greater than the frequency response of the display sys-
tem. The microphotometer scans from the center of a bright bar to the center of a dark bar (left to
right), measuring the width of the boundary and comparing it to a knife edge. The microphotom-
eter slit is oriented vertically, with its long axis parallel to the bars. The scan usually is made
from a light bar to a dark bar in the direction of spot movement. This procedure is preferred
because waveforms from scans in the opposite direction may contain certain anomalies. When
the beam is turned on in a square wave pattern, it tends to overshoot and oscillate. This behavior
produces artifacts in the luminance profile of the bar edge as the beam moves from an off to an
on state. In the on-to-off direction, however, the effects are minimal and the measured waveform
does not exhibit the same anomalies that can corrupt the MTF calculations.
The bar-edge (knife-edge) measurement, unlike the other techniques discussed so far, uses the
horizontal spot profile to predict display performance. All of the other techniques use the vertical
profile as an approximation of the more critical horizontal spot profile. The bar-edge measure-
ment will yield a more accurate assessment of display performance because the displayed image
is being generated with a spot scanned in the horizontal direction.
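The bar-edge computation can be sketched as follows, assuming NumPy is available: the measured edge profile is differentiated to obtain the line-spread function, Fourier-transformed, and normalized to 100 percent modulation at zero spatial frequency. Here a synthetic Gaussian spot stands in for microphotometer data, and the sample spacing is an assumed calibration value:

```python
import numpy as np

dx = 0.001                               # mm per sample (assumed calibration)
x = np.arange(-1.0, 1.0, dx)
sigma = 0.05                             # mm, hypothetical Gaussian spot width
lsf_true = np.exp(-x**2 / (2 * sigma**2))
edge = np.cumsum(lsf_true)               # light-to-dark edge = integral of LSF
edge /= edge[-1]

lsf = np.gradient(edge, dx)              # recover the line-spread function
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                            # normalize: 100% at zero frequency
freqs = np.fft.rfftfreq(len(lsf), d=dx)  # spatial frequency, cycles/mm

# For a Gaussian LSF the MTF is exp(-2 (pi f sigma)^2); check one point.
expected = np.exp(-2 * (np.pi * freqs[10] * sigma) ** 2)
print(abs(mtf[10] - expected) < 0.02)    # True
```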
• Initial conditions. Setup includes allowing the monitor to warm up and stabilize for 20 to 30
minutes. The room ambient lighting should be the same as it is when the monitor is in normal
service, and several minutes must be allowed for visual adaptation to the operating environ-
ment.
• Initial screen adjustments. The monitor is switched to the setup position, in which the red,
green, and blue screen controls are adjusted individually so that the signals are barely visible.
• Purity. Purity, the ability of the gun to excite only its designated phosphor, is checked by
applying a low-level flat-field signal and activating only one of the three guns at a time. The
display should have no noticeable discolorations across the face.
• Scan size. The color picture monitor application establishes whether the overscan or under-
scan presentation of the display will be selected. An underscanned display is one in which the
active video (picture) area, including the corners of the raster, is visible within the screen
mask. Normal scan brings the edges of the picture tangent to the mask position. Overscan
should be no more than 5 percent.
• Geometry and aspect ratio. Display geometry and aspect ratio are adjusted with the cross-
hatch signal by scanning the display device with the green beam only. Correct geometry and
linearity are obtained by adjusting the pincushion and scan-linearity controls so that the pic-
ture appears without evident distortions from the normal viewing distance.
• Focus. An ideal focus target is available from some test signal generators; if it is unavailable,
multiburst, crosshatch, or white noise can be used as tools to optimize the focus of the dis-
played picture.
• Convergence. Convergence is adjusted with a crosshatch signal; it should be optimized for
either normal scan or underscan, depending upon the application.
• Aperture correction. If aperture correction is used, the amount of correction can be esti-
mated visually by ensuring that the 2T sin² pulse has the same brightness as the luminance
bar or the multiburst signal when the 3 and 4.2 MHz bursts have the same sharpness and con-
trast.
• Chrominance amplitude and phase. The chrominance amplitude and phase are adjusted
using the SMPTE color bar test signal and viewing only the blue channel. Switching off the
comb filter, if it is present, provides a clear blue channel display. Periodically, the red and
green channels should be checked individually in a similar manner to verify that the decoders
are working properly. A detailed description of this procedure is given in [7].
• Brightness, color temperature, and gray scale tracking. The 100-IRE window signal is
used to supply the reference white. Because of typical luminance shading limitations, a cen-
trally placed PLUGE [8] signal is recommended for setting the monitor brightness control.
The black set signal provided in the SMPTE color bars also can be used for this purpose.
• Monitor matching. When color matching two or more color monitors, the same alignment
steps should be performed on each monitor in turn. Remember, however, that monitors cannot
be matched without the same phosphor sets, similar display uniformity characteristics, and
similar sharpness. The most noticeable deviations on color monitors are the lack of uniform
color presentations and brightness shading. Color matching of monitors for these parameters
can be most easily assessed by observing flat-field uniformity of the picture at low, medium,
and high amplitudes.
For complete monitor-alignment procedures, see [7].
11.2.4 References
1. MacAdam, D. L.: “Visual Sensitivities to Color Differences in Daylight,” J. Opt. Soc. Am.,
vol. 32, pp. 247–274, 1942.
2. Bender, Walter, and Alan Blount: “The Role of Colorimetry and Context in Color Dis-
plays,” Human Vision, Visual Processing, and Digital Display III, Bernice E. Rogowitz
(ed.), Proc. SPIE 1666, SPIE, Bellingham, Wash., pp. 343–348, 1992.
3. Tannas, Lawrence E., Jr.: Flat Panel Displays and CRTs, Van Nostrand Reinhold, New York,
N.Y., pg. 18, 1985.
4. Standards and Definitions Committee, Society for Information Display.
5. Verona, Robert: “Comparison of CRT Display Measurement Techniques,” Helmet-Mounted
Displays III, Thomas M. Lippert (ed.), Proc. SPIE 1695, SPIE, Bellingham, Wash., pp.
117–127, 1992.
6. “Critical Viewing Conditions for Evaluation of Color Television Pictures,” SMPTE Recom-
mended Practice RP 166, SMPTE, White Plains, N.Y., 1995.
7. “Alignment of NTSC Color Picture Monitors,” SMPTE Recommended Practice RP 167,
SMPTE, White Plains, N.Y., 1995.
8. Quinn, S. F., and C. A. Siocos: “PLUGE Method of Adjusting Picture Monitors in Televi-
sion Studios—A Technical Note,” SMPTE Journal, SMPTE, White Plains, N.Y., vol. 76,
pg. 925, September 1967.
Chapter
11.3
Camera Performance Verification
Peter Gloeggler
11.3.1 Introduction
With tube-type cameras, it was almost mandatory to fine-tune the device before a major shoot to
achieve optimum performance. The intrinsic stability of the CCD imager and the stability of the
circuitry used in modern CCD cameras now make it possible to operate the camera for several
months without internal readjustment. Physical damage to the camera or lens, in use or transport,
is probably the most frequent cause of a loss in performance in a CCD-based device. With care-
ful handling, the probability of malfunction is very small. It is nevertheless prudent to schedule a
quick check-out of the camera before the start of a major shoot, when the high cost of talent and
other aspects of the production are considered.
The following items are appropriate for inclusion in such a check-out procedure. If the test
results show a significant deviation from the manufacturer's specifications or from the data previ-
ously obtained for the same test, a more thorough examination of the camera, as prescribed in the
camera service manual, is then indicated.
11-36 Video Measurement Techniques
required for this measurement. Vertical smear monitors three regions of the chart: the aper-
ture (for white referencing), and above and below the aperture (for smearing).
White Shading
To confirm white shading:
• Set up a uniformly lit white test chart.
• Using the waveform monitor, open the lens to obtain about 70 IRE units of video (confirm
that the iris is in the range of f / 4.0 to f / 5.6; adjust the lighting if necessary), and confirm
there is a minimum of vertical, then horizontal, shading.
• Adjust as necessary using the camera horizontal and vertical white shading controls.
Flare
The camera flare correction circuitry provides an approximate correction for flare or scattering
of peripheral rays in the various parts of the optical system. To confirm the adjustment of the
flare correction circuit:
• Frame an 11-step gray scale chart that includes a very low reflectance strip of black velvet
added to the chart.
• Adjust the iris from fully closed until the white chip is at 100 IRE units of video. The flare
compensation circuitry is adjusted correctly if there is almost no rise in the black level of the
velvet strip as the white level is increased to 100 IRE units and only a small rise in the black
level, with no change in hue, when the iris is opened one more f-stop beyond the 100 IRE
units point.
• Adjust the R, G, and B flare controls as defined in the camera service manual if the flare cor-
rection is not adjusted correctly.
Linear Matrix
When it is necessary to use two dissimilar camera models in a multicamera shoot, and either of
the two models provides an adjustable linear matrix, it is possible to use the variable matrix to
obtain a better colorimetry match between cameras. Specific matrix parameters and adjustments
(if any) will be found in the camera service manuals.
Figure 11.3.3 Color reference pattern layout specified in SMPTE 303M. (From [1]. Used with per-
mission.)
11.3.4 References
1. “SMPTE Standard: For Television—Color Reference Pattern,” SMPTE 303M, SMPTE,
White Plains, N.Y., 1999.
11.3.5 Bibliography
Gloeggler, Peter: “Video Pickup Devices and Systems,” in NAB Engineering Handbook, 9th
Ed., Jerry C. Whitaker (ed.), National Association of Broadcasters, Washington, D.C.,
1999.
Chapter
11.4
Conventional Video Measurements
Carl Bentz
Jerry C. Whitaker, Editor-in-Chief
11.4.1 Introduction
Although there are a number of computer-based television signal monitors capable of measuring
video, sync, chroma, and burst levels (as well as pulse widths and other timing factors), some-
times the best way to see what a signal is doing is to monitor it visually. The waveform monitor
and vectorscope are oscilloscopes specially adapted for the video environment. The waveform
monitor, like a traditional oscilloscope, operates in a voltage-versus-time mode. While an oscil-
loscope timebase can be set over a wide range of intervals, the waveform monitor timebase trig-
gers automatically on sync pulses in the conventional TV signal, producing line- and field-rate
sweeps, as well as multiple lines, multiple fields, and shorter time intervals. Filters, clamps, and
other circuits process the video signal for specific monitoring needs. The vectorscope operates in
an X-Y voltage-versus-voltage mode to display chrominance information. It decodes the signal in
much the same way as a television receiver or a video monitor to extract color information and to
display phase relationships. These two instruments serve separate, distinct purposes. Some mod-
els combine the functions of both types of monitors in one chassis with a single CRT display.
Others include a communications link between two separate instruments.
Beyond basic signal monitoring, the waveform monitor and vectorscope provide a means to
identify and analyze signal aberrations. If the signal is distorted, these instruments allow a tech-
nician to learn the extent of the problem and to locate the offending equipment.
Although designed for analog video measurement and quality control, the waveform monitor
and vectorscope still serve valuable roles in the digital video facility, and will continue to do so
for some time.
[Waveform drawings for Figure 11.4.1: red, green, blue, and PAL composite color bar signals, with level markings at –0.3 V, 0 V, 0.7 V, and 0.943 V.]
Figure 11.4.1 Examples of color test bars: (a) ITU nomenclature, (b, next page) SMPTE EG 1
color bar test signal, (c) EIA RS-189-A test signal, (d) multi-format color bar test signal. [Drawing
(b) after SMPTE EG 1, (c) after EIA RS-189-A, (d) after SMPTE RP 219.]
The ITU specification of color bar levels (ITU-R Rec. BT.471-1, “Nomenclature and Description of Color Bar
Signals”) is a set of four numbers, separated by slashes or dots, giving RGB levels as a per-
centage of reference white in the following sequence:
white bar / black bar / max colored bars / min colored bars
For example, 100/0/100/25 means 100 percent R, G, and B on the white bar; 0 percent R, G, and
B on the black bar; 100 percent maximum of R, G, and B on colored bars; and 25 percent mini-
mum of R, G, and B on colored bars. (See Figure 11.4.1a.)
Some color bar patterns merit special names. For example, 100/0/75/0 bars are often called
“EBU bars” or “75 percent bars”, and 100/0/100/25 bars are known as “BBC bars.” Neverthe-
less, the ITU four-number nomenclature remains the only reliable specification system to desig-
nate the exact levels for color bar test patterns.
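The four-number nomenclature is mechanical enough to expand programmatically. This sketch (the function name and eight-bar ordering are illustrative, not from a standard) converts a nomenclature string into per-bar RGB percentages:

```python
def bar_levels(spec: str):
    """Expand ITU-R BT.471 four-number color bar nomenclature
    (white/black/max/min, as percent of reference white) into RGB
    levels for the customary eight-bar sequence."""
    white, black, hi, lo = (int(v) for v in spec.split("/"))
    on = {"white": "RGB", "yellow": "RG", "cyan": "GB", "green": "G",
          "magenta": "RB", "red": "R", "blue": "B", "black": ""}
    levels = {}
    for bar, channels in on.items():
        if bar == "white":
            levels[bar] = (white,) * 3
        elif bar == "black":
            levels[bar] = (black,) * 3
        else:
            levels[bar] = tuple(hi if ch in channels else lo for ch in "RGB")
    return levels

ebu = bar_levels("100/0/75/0")      # "EBU bars"
print(ebu["yellow"])                # (75, 75, 0)
bbc = bar_levels("100/0/100/25")    # "BBC bars"
print(bbc["yellow"])                # (100, 100, 25)
```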
The SMPTE variation is a matrix test pattern according to SMPTE engineering guideline EG
1, “Alignment Color Bar Test Signal for Television Picture Monitors.” The guideline specifies
that 67 percent of the field contains seven (without black) 75 percent color bars, plus 8 percent of
the field with the “new chroma set” bars (blue/black/magenta/black/cyan/black/gray), and the
remaining 25 percent contains a combination of –I, white, Q, black, and the black set signal (a
version of PLUGE). This arrangement is illustrated in Figure 11.4.1b.
Chroma gain and chroma phase for picture monitors are usually adjusted by observing the
standard encoded color bar signal with the red and green CRT guns switched off. The four visi-
ble blue bars are set for equal brightness. The use of the chroma set feature greatly increases the
accuracy of this adjustment because it provides a signal with the blue bars to be matched verti-
cally adjacent to each other. Because the bars are adjacent, the eye can easily perceive differences
in brightness. This also eliminates effects resulting from shading or purity variations from one
part of the screen to another.
The EIA color bar signal is a matrix test pattern according to RS-189-A, which consists of 75
percent of the field containing seven (without black) 75 percent color bars (same as SMPTE) and
the remaining 25 percent containing a combination of –I, white, Q, and black. (See Figure
11.4.1c.)
Figure 11.4.2 Waveform monitor display of a color-bar signal at the two-line rate. (Courtesy of
Tektronix.)
The multi-format color bar signal specified in SMPTE RP 219 is intended for an environment
where HD video sources are frequently converted and used as SD video content in either a
525- or 625-line environment, with the same frame frequencies as in the original HDTV signal.
The multi-format color bar signal is composed of four specific patterns, shown in Figure
11.4.1d. The first part of the color bar signal represents a signal for the 4:3 aspect ratio; a second
part of the total signal adds side panels for the 16:9 aspect ratio. A third part adds black and
white ramps and additional color information, and the last part completes the total signal by add-
ing white and black bars, in addition to a set of near-black-level steps for monitor black level
adjustment.
Figure 11.4.3 EIA RS-189-A color-bar displays: (a) color displays of gray and color bars, (b) wave-
form display of reference gray and primary/complementary colors, plus sync and burst.
In the 75 percent mode, a choice of 100 IRE or 75 IRE white reference level may be offered. Figure 11.4.3
shows 75 percent amplitude bars with a 100 IRE white level. Either white level can be used to set
levels, but operators must be aware of which signal has been selected. SMPTE bars have a white
level of 75 IRE as well as a 100 IRE white flag.
The vertical response of a waveform monitor depends upon filters that process the signal in
order to display certain components. The flat response mode displays all components of the sig-
nal. A chroma response filter removes luminance and displays only chrominance. The low-pass
filter removes chrominance, leaving only low-frequency luminance levels in the display. Some
monitors include an IRE filter, designed to average out high-level, fine-detail peaks on a mono-
chrome video signal. The IRE filter aids the operator in setting brightness levels. The IRE
response removes most, but not all, of the chrominance.
If the waveform monitor has a dual filter mode, the operator can observe luminance levels
and overall amplitudes at the same time. The instrument switches between the flat and low-pass
filters. The line select mode is another useful feature for monitoring live signals.
Sync Pulses
Most waveform monitors include 0.5 μs or 1 μs per division magnification (MAG) modes, which
can be used to verify H-sync width between approximately 4.4 μs and 5.1 μs. The width is mea-
sured at the –4 IRE point. On waveform monitors with good MAG registration, sync appearing
in the middle of the screen in the 2-line mode remains centered when the sweep is magnified.
Check the rise and fall times of sync, and the widths of the front porch and entire blanking inter-
val. Examine burst and verify there are between 8 and 11 cycles of subcarrier.
Check the vertical intervals for correct format, and measure the timing of the equalizing
pulses and vertical sync pulses. The acceptable limits for these parameters are shown in Figure
11.4.4.
Figure 11.4.6 Composite vertical-interval test signal (VITS) inserted in field 1, line 18. The video
level in IRE units is shown on the left; the radiated carrier signal is shown on the right.
Field-rate distortions appear as a difference in shading from the top to the bottom of the pic-
ture. A field-rate 60 Hz square wave is best suited for measuring field-rate distortion. Distortion
of this type is observed as a tilt in the waveform in the 2-field mode with the dc restorer off.
Figure 11.4.7 Waveform monitor display showing additive 60 Hz degradation. (Courtesy of Tek-
tronix.)
Line-rate distortions appear as streaking, shading, or poor picture stability. To detect such
errors, look for tilt in the bar portion of a pulse-and-bar signal. The waveform monitor should be
in the 1H or 2H mode with the fast dc restorer selected for the measurement.
The multiburst signal is used to test the high-frequency response of a system. The multiburst
includes packets of discrete frequencies within the television passband, with the higher frequen-
cies toward the right of each line. The highest frequency packet is at about 4.2 MHz, the upper
frequency limit of the NTSC system. The next packet to the left is near the color subcarrier fre-
quency (3.58 MHz, approximately) for checking the chrominance transfer characteristics. Other
packets are included at intervals down to 500 kHz. The most common distortion is high-fre-
quency rolloff, seen on the waveform monitor as reduced amplitude packets at higher frequen-
cies. This type of problem is shown in Figure 11.4.8. The television picture exhibits loss of fine
detail and color intensity when such impairments are present. High frequency peaking, appear-
ing on the waveform as higher amplitude packets at the higher frequencies, causes ghosting on
the picture.
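Quantifying the rolloff seen in such a display can be as simple as referencing each packet's amplitude to the lowest-frequency packet; the IRE amplitudes below are illustrative, not measured values:

```python
import math

# Illustrative packet amplitudes (IRE peak-to-peak) from a multiburst line,
# keyed by the customary NTSC packet frequencies in MHz.
packets = {0.5: 60.0, 1.0: 60.0, 2.0: 58.0, 3.0: 52.0, 3.58: 48.0, 4.2: 42.0}

ref = packets[0.5]                   # lowest-frequency packet as reference
response_db = {f: round(20 * math.log10(a / ref), 2)
               for f, a in packets.items()}

rolloff = response_db[4.2]           # negative dB indicates HF rolloff
print(rolloff)                       # -3.1 dB at 4.2 MHz for these numbers
```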
Differential Phase
Differential phase (dφ) distortion occurs if a change in luminance level produces a change in the
chrominance phase. If the distortion is severe, the hue of an object will change as its brightness
changes. A modulated staircase or ramp is used to quantify the problem. Either signal places
chrominance of uniform amplitude and phase at different luminance levels. Figure 11.4.9 shows
a 100 IRE modulated ramp. Because dφ can change with changes in APL, measurements at the
center and at the two extremes of the APL range are necessary.
To measure dφ with a vectorscope, increase the gain control until the vector dot is on the edge
of the graticule circle. Use the phase shifter to set the vector to the 9 o'clock position. Phase error
Figure 11.4.8 Waveform monitor display of a multiburst signal showing poor high-frequency
response. (Courtesy of Tektronix.)
Figure 11.4.9 Waveform monitor display of a modulated ramp signal. (Courtesy of Tektronix.)
appears as circumferential elongation of the dot. The vectorscope graticule has a scale marked
with degrees of dφ error. Figure 11.4.10 shows a dφ error of 5°.
More information can be obtained from a swept R–Y display, which is a common feature of
waveform monitor and vectorscope instruments. If one or two lines of demodulated video from
the vectorscope are displayed on a waveform monitor, differential phase appears as tilt across the
line. In this mode, the phase control can be adjusted to place the demodulated video on the base-
line, which is equivalent in phase to the 9 o'clock position of the vectorscope. Figure 11.4.11
shows a dφ error of approximately 6° with the amount of tilt measured against a vertical scale.
Figure 11.4.10 Vectorscope display showing 5° differential phase error. (Courtesy of Tektronix.)
Figure 11.4.11 Vectorscope monitor display showing a differential phase error of 5.95° as a tilt on
the vertical scale. (Courtesy of Tektronix.)
This mode is useful in troubleshooting applications. By noting where along the line the tilt
begins, it is possible to determine at what dc level the problem starts to occur. In addition, field-
rate sweeps enable the operator to look at dφ over the field.
A variation of the swept R–Y display may be available in some instruments for precise mea-
surement of differential phase. Highly accurate measurements can be made with a vectorscope
that includes a precision phase shifter and a double-trace mode. This method involves nulling the
lowest part of the waveform with the phase shifter, and then using a separate calibrated phase
control to null the highest end of the waveform. A readout in tenths of a degree is possible.
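In numerical terms, differential phase is the spread of demodulated chroma phase across the luminance steps. This sketch uses hypothetical (B–Y, R–Y) samples, one per staircase step:

```python
import math

# Hypothetical demodulated chroma samples (B-Y, R-Y) at five luminance
# steps of a modulated staircase; the phase drifts with luminance level.
steps = [(1.000, 0.000), (0.999, 0.017), (0.998, 0.035),
         (0.996, 0.052), (0.995, 0.070)]

phases = [math.degrees(math.atan2(ry, by)) for by, ry in steps]
dphi = max(phases) - min(phases)     # differential phase error, degrees
print(round(dphi, 1))                # 4.0 degrees for these samples
```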
Figure 11.4.12 Vectorscope display of a 10 percent differential gain error. (Courtesy of Tektronix.)
Differential Gain
Differential gain (dG) distortion refers to a change in chrominance amplitude with changes in
luminance level. The vividness of a colored object changes with variations in scene brightness.
The modulated ramp or staircase is used to evaluate this impairment with the measurement taken
on signals at different APL points.
To measure differential gain with a vectorscope, set the vector to the 9 o'clock position and
use the variable gain control to bring it to the edge of the graticule circle. Differential gain error
appears as a lengthening of the vector dot in the radial direction. The dG scale at the left side of
the graticule can be used to quantify the error. Figure 11.4.12 shows a dG error of 10 percent.
Differential gain can be evaluated on a waveform monitor by using the chroma filter and
examining the amplitude of the chrominance from a modulated staircase or ramp. With the wave-
form monitor in 1H sweep, use the variable gain to set the amplitude of the chrominance to 100
IRE. If the chrominance amplitude is not uniform across the line, there is dG error. With the gain
normalized to 100 IRE, the error can be expressed as a percentage. Finally, dG can be precisely
evaluated with a swept display of demodulated video. This is similar to the single trace R–Y
methods for differential phase. The B–Y signal is examined for tilt when the phase is set so that
the B–Y signal is at its maximum amplitude. The tilt can be quantified against a vertical scale.
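The waveform-monitor procedure above reduces to simple arithmetic once the chroma amplitudes are read off at each step; the IRE values here are illustrative:

```python
# Chroma amplitude (IRE) measured at each luminance step of a modulated
# staircase, with gain normalized so the largest packet reads 100 IRE.
chroma_ire = [100.0, 98.0, 96.0, 93.0, 90.0]

# dG expressed as a percentage of the largest amplitude.
dg_percent = (max(chroma_ire) - min(chroma_ire)) / max(chroma_ire) * 100
print(round(dg_percent, 1))   # 10.0 -> a 10 percent dG error
```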
ICPM
Television receivers may use a method known as intercarrier sound to reproduce audio informa-
tion. Sound is recovered by beating the audio carrier against the video carrier, producing a 4.5
MHz IF signal, which is demodulated to produce the sound portion of the transmission. From the
interaction between audio and video portions of the signal, certain distortions in the video at the
transmitter can produce audio buzz at the receiver. Distortions of this type are referred to as inci-
dental carrier phase modulation or ICPM. The widespread use of stereo audio for television
increased the importance of measuring this parameter at the transmitter, because the buzz is
Figure 11.4.13 Waveform monitor display using the ICPM graticule of a five-level stairstep signal
with no distortion. (Courtesy of Tektronix.)
more objectionable in stereo broadcasts. It is generally suggested that less than 3° of ICPM be
present in the radiated signal.
ICPM is measured using a high-quality demodulator with a synchronous detector mode and
an oscilloscope operated in a high-gain X-Y mode. Waveform and vector monitors usually have
such a mode as well. Video from the demodulator is fed to the Y input of the scope and the
quadrature output is fed to the X input terminal. Low-pass filters make the display easier to
resolve.
An unmodulated 5-step staircase signal produces a polar display, shown in Figure 11.4.13, on
a graticule developed for this purpose. Notice that the bumps all rest in a straight vertical line if
there is no ICPM in the system. Tilt indicates an error, as shown in Figure 11.4.14. The graticule
is calibrated in degrees per radial division for different gain settings. Adjustment, but not mea-
surement, can be performed without a graticule.
Figure 11.4.14 Waveform monitor display using the ICPM graticule of a five-level stairstep signal
with 5° distortion. (Courtesy of Tektronix.)
Figure 11.4.15 Measurement of tilt on a visual signal with 50 Hz and 15 kHz signals. (After [3].)
Figure 11.4.16 Measurement of rounding of a 15 kHz square-wave test signal to identify high-fre-
quency-response problems in a television transmitter. (After [3].)
Group Delay
The transients of higher-frequency components (15 kHz to 250 kHz) produce overshoot on the
leading edge of transitions and, possibly, ringing at the top and bottom of the waveform area. The
shorter duration of the “flat” top and bottom lowers concern about tilt and rounding. A square
wave contains a fundamental frequency and numerous odd harmonics. The number of the har-
monics determines the rise or fall times of the pulses. Square waves with rise times of 100 ns (T)
and 200 ns (2T) are particularly useful for television measurements. In the 2T pulse, significant
harmonics approach 5 MHz; a T pulse includes components approaching 10 MHz. Because the
TV system should carry as much information as possible, and its response should be as flat as
possible, the T pulse is a common test signal for determination of group delay.
Figure 11.4.17 Tolerance graticule mask for measuring transient response of a visual transmitter.
A complete oscillation is first displayed on the screen. The timebase then is expanded, and the
signal X-Y position controls are adjusted to shift the trace into the tolerance mask. (After [4].)
Group delay is the effect of time- and frequency-sensitive circuitry on a range of frequencies.
Time, frequency, phase shift, and signal delay all are related and can be determined from circuit
component values. Excessive group delay in a video signal appears as a loss of image definition.
Group delay is a fact of life that cannot be avoided, but its effect can be reduced through predis-
tortion of the video signal. Group delay adjustments can be made before the modulator stage of
the transmitter, while monitoring the signal from a feedline test port and adjusting for best
performance.
Group delay can be monitored using a special purpose scope or waveform graticule. The goal
in making adjustments is to fit all excursions of the signal between the smallest tolerance mark-
ings of the graticule, as illustrated in Figure 11.4.17. Because quadrature phase errors are caused
by the vestigial sideband transmission system, synchronous detection is needed to develop the
display.
Table 11.4.1 Summary of Significant Linear Distortions and Test Methods (After [5].)
Table 11.4.3 Summary of Nonlinear Distortions and Test Methods (After [5].)
Table 11.4.5 Summary of Significant Types of Noise and Test Methods (After [5].)
Figure 11.4.18 Block diagram of an automated video test instrument. (Courtesy of Tektronix.)
Figure 11.4.19 Automated video test instrument output charts: (a) waveform monitor mode dis-
play of color bars, (b) vectorscope mode display of color bars. (Courtesy of Tektronix.)
Figure 11.4.20 Expanded view of a sync waveform with measured parameters. (Courtesy of Tek-
tronix.)
Figure 11.4.21 Example error log for a monitored signal. (Courtesy of Tektronix.)
11.4.3 References
1. SMPTE Recommended Practice RP 219: “High-Definition, Standard-Definition Compati-
ble Color Bar Signal,” SMPTE, White Plains, N.Y., 2002.
11.4.4 Bibliography
SMPTE Engineering Guideline EG 1, “Alignment Color Bar Test Signal for Television Picture
Monitors,” SMPTE, White Plains, N.Y., 1990.
Pank, Bob (ed.): The Digital Fact Book, 9th ed., Quantel Ltd, Newbury, England, 1998.
Chapter
11.5
Application of the Zone Plate Signal
11.5.1 Introduction
The increased information content of advanced high-definition display systems requires sophis-
ticated processing to make recording and transmission practical [1]. This processing uses various
forms of bandwidth compression, scan-rate changes, motion-detection and motion-compensa-
tion algorithms, and other techniques. Zone plate patterns are well suited to exercising a complex
video system in the three dimensions of its signal spectrum: horizontal, vertical, and temporal.
Zone plate signals, unlike most conventional test signals, can be complex and dynamic. Because
of this, they are capable of simulating much of the detail and movement of actual video, exercis-
ing the system under test with signals representative of the intended application. These digitally
generated and controlled signals also have other important characteristics needed in test wave-
forms for video systems.
A signal intended for meaningful testing of a video system must be carefully controlled, so
that any departure from a known parameter of the signal is attributable to a distortion or other
change in the system under test. The test signal also must be predictable, so that it can be accu-
rately reproduced at other times or places. These constraints usually have led to test signals that
are electronically generated. In a few special cases, a standardized picture has been televised by a
camera or monoscope—usually for a subjective, but more detailed, evaluation of overall perfor-
mance of the video system.
The zone plate itself is a physical optical pattern, first used in exactly this way: televised by a camera. Now that electronic generators are capable of producing similar patterns, the label "zone plate" is applied to the wide variety of patterns created by video test instruments.
Conventional test signals, for the most part limited by the practical considerations of elec-
tronic generation, have represented relatively simple images. Each signal is capable of testing a
narrow range of possible distortions; several test signals are needed for a more complete evalua-
tion. Even with several signals, this method may not reveal all possible distortions or allow study
of all pertinent characteristics. This is true especially in video systems employing new forms of
sophisticated signal processing.
Figure 11.5.1 Multiburst video test waveform: (a, left) picture display, (b, right) multiburst signal as
viewed on a waveform monitor (1H). (Courtesy of Tektronix.)
1. Figure 11.5.1(a) and other photographs in this chapter show the “beat” effects introduced by
the screening process used for photographic printing. This is largely unavoidable. The screen-
ing process is quite similar to the scanning or sampling of a television image—the patterns
are designed to identify this type of problem.
Figure 11.5.2 Conventional sweep-frequency test waveform: (a, left) picture display, (b, right)
waveform monitor display, with markers (1H). (Courtesy of Tektronix.)
Figure 11.5.3 Single horizontal frequency test signal from a zone plate generator: (a, left) picture
display, (b, right) waveform monitor display (1H). (Courtesy of Tektronix.)
Figure 11.5.4 Horizontal frequency-sweep test signal from a zone plate generator: (a, left) picture
display, (b, right) waveform monitor display (1H). (Courtesy of Tektronix.)
Figure 11.5.5 Single vertical frequency test signal: (a, left) picture display, (b, right) magnified ver-
tical-rate waveform, showing the effects of scan sampling. (Courtesy of Tektronix.)
cesses that combine information from line to line, vertical testing patterns are increasingly
important.
In the vertical dimension, as well as the horizontal, tests can be done at a single frequency or
with a frequency-sweep signal. Figure 11.5.5 illustrates a magnified vertical-rate waveform dis-
play. Each “dash” in the photo represents one horizontal scan line. Sampling of vertical frequen-
cies is inherent in the scanning process, and the photo shows the effects on the signal waveform.
Note also that the signal voltage remains constant during each line, changing only from line to
line in accord with the vertical-dimension sine function of the signal. Figure 11.5.6 shows a ver-
tical frequency-sweep picture display.
The horizontal and vertical sine waves and sweeps are quite useful, but they do not use the
full potential of a zone plate signal source.
Figure 11.5.7 Combined horizontal and vertical frequency-sweep picture display. (Courtesy of
Tektronix.)
Figure 11.5.8 Combined horizontal and vertical frequency sweeps, selected line waveform display
(1H). This figure shows the maintenance of horizontal structure in the presence of vertical sweep.
(Courtesy of Tektronix.)
Figure 11.5.9 The best-known zone plate pattern, combined horizontal and vertical frequency
sweeps with zero frequency in the center screen. (Courtesy of Tektronix.)
effects, there are possible distortions or artifacts that are apparent only with simultaneous excita-
tion in both axes. In other words, the response of a system to diagonal detail may not be predict-
able from information taken from the horizontal and vertical responses.
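A two-axis sweep such as the one in Figure 11.5.9 follows from a quadratic spatial phase term, with instantaneous frequency rising linearly from zero at center screen. A minimal NumPy sketch of how such a frame could be synthesized (scaling constants and names are illustrative, not taken from any particular generator):

```python
import numpy as np

def zone_plate(width=720, height=480, kx2=0.5, ky2=0.5):
    """Static 2-axis frequency sweep: the classic circular zone plate."""
    x = np.arange(width) - width / 2
    y = np.arange(height) - height / 2
    xx, yy = np.meshgrid(x, y)
    # Quadratic phase: local horizontal frequency grows as d(phase)/dx ~ kx2 * x,
    # so the pattern sweeps from zero frequency at center to a maximum at the edges.
    phase = (kx2 * xx**2 + ky2 * yy**2) * np.pi / width
    return 0.5 + 0.5 * np.cos(phase)   # luminance normalized to [0, 1]

frame = zone_plate()
print(frame.shape)   # (480, 720)
```

Replacing the sum of squared terms with a cross term (xx * yy) yields a hyperbolic variant like Figure 11.5.12, and adding a temporal term to the phase per field produces the motion and motion-sweep behavior described below.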
appropriate setting of the time-related coefficients will create constant motion or motion sweep
(acceleration).
Specific motion-detection and interpolation algorithms in a system under test may be exer-
cised by determining the coefficients of a critical sequence of patterns. These patterns then may
be saved for subsequent testing during development or adjustment. In an operational environ-
ment, appropriate response to a critical sequence could ensure expected operation of the equip-
ment or facilitate fault detection.
Although motion artifacts are difficult to portray in the still-image constraints of a printed
book, the following example gives some idea of the potential of a versatile generator. In Figure
11.5.10, the vertical sweep maximum frequency has been increased to the point where it is zero-
beating with the scan at the bottom of the screen. (Note that the cycles/ph of the pattern matches
the lines/ph per field of the scan.) Actually, in direct viewing, there is another noticeable artifact
in the vertical center of the screen: a harmonic beat related to the gamma of the display CRT.
Because of interlace, this beat flickers at the field rate. The photograph integrates the interfield
flicker, thereby hiding the artifact, which is readily apparent when viewed in real time.
Figure 11.5.11 is identical to the previous photo, except for one important difference—
upward motion of 1/2-cycle per field has been added to the pattern. Now the sweep pattern itself
is integrated out, as is the first-order beat at the bottom. The harmonic effects in center screen no
longer flicker, because the change of scan vertical position from field to field is compensated by
a change in position of the image. The resulting beat pattern does not flicker and is easily photo-
graphed or, perhaps, scanned to determine depth of modulation.
A change in coefficients produces hyperbolic, rather than circular 2-axis patterns, as shown in
Figure 11.5.12. Another interesting pattern, which has been used for checking complex codecs,
is shown in Figure 11.5.13. This is also a moving pattern, which was altered slightly to freeze
some aspects of the movement for the purpose of taking the photograph.
Figure 11.5.11 The same vertical sweep as shown in Figure 11.5.10, except that appropriate pat-
tern motion has been added to “freeze” the beat pattern in the center screen for photography or
other analysis. (Courtesy of Tektronix.)
Figure 11.5.12 A hyperbolic variation of the 2-axis zone plate frequency sweep. (Courtesy of Tek-
tronix.)
Figure 11.5.13 A 2-axis frequency sweep in which the range of frequencies is swept several times
in each axis. Complex patterns such as this may be created for specific test requirements. (Courtesy of Tektronix.)
11.5.2 References
1. “Broadening the Applications of Zone Plate Generators,” Application Note 20W7056, Tek-
tronix, Beaverton, Ore., 1992.
Chapter
11.6
Picture Quality Measurement
11.6.1 Introduction
Picture-quality measurement methods include subjective testing, which is always used—at least
in an informal manner—and objective testing, which is most suitable for system performance
specification and evaluation [1]. A number of types of objective measurement methods are pos-
sible for digital television pictures, but those using a human visual system model are the most
powerful.
As illustrated in Figure 11.6.1, three key testing layers can be defined for the modern televi-
sion system:
• Video quality. This consists of signal quality and picture quality
• Protocol analysis. Protocol testing is required because the data formatting can be quite com-
plex and is relatively independent of the nature of the uncompressed signals or the eventual
conversion to interfacility transmission formats. Protocol test equipment can be both a source
of signals and an analyzer that locates errors with respect to a defined standard and deter-
mines the value of various operational parameters for the stream of data.
• Transmission system analysis. To send the video data to a remote location, one of many pos-
sible digital data transmission methods can be used, each of which imposes its own analysis
issues.
Table 11.6.1 lists several dimensions of video-quality measurement methods. Key definitions
include the following:
• Subjective measurements: the result of human observers providing their opinions of the video
quality.
• Objective measurements: performed with the aid of instrumentation, manually with humans
reading a calibrated scale or automatically using a mathematical algorithm.
• Direct measurements: performed on the material of interest, in this case, pictures (also known
as picture-quality measurements).
• Indirect measurements: made by processing specially designed test signals in the same man-
ner as the pictures (also known as signal-quality measurements). Subjective measurements
Figure 11.6.1 Video testing layers for digital television. (After [1].)
are performed only in a direct manner because the human opinion of test signal picture qual-
ity is not particularly meaningful.
• In-service measurements: made while the program is being displayed, directly by evaluating
the program material or indirectly by including test signals with the program material.
Figure 11.6.2 Functional environment for traditional video measurements. (After [1].)
• Out-of-service: appropriate test scenes are used for direct measurements, and full-field test
signals are used for indirect measurements.
In the mixed environment of compressed and uncompressed signals, video-quality measure-
ments consist of two parts: signal quality and picture quality.
Figure 11.6.3 Functional environment for digital video measurements. (After [1].)
useful because their effect will be seen by a human observer if the pictures are processed in the
same way a number of times.
The use of digital compression has expanded the types of distortions that can occur in the
modern television system. Because signal-quality measurements will not do the job, objective
picture-quality measurements are needed, as illustrated in Figure 11.6.3. The total picture-quality
measurement space has increased because of subjective measurements that now include mul-
timinute test scenes with varying program material and variable picture quality. The new objec-
tive measurement methods must have strong correlation with subjective measurements and cover
a broad range of applications.
Even with all the objective testing methods available for analog and digital video, it is impor-
tant to have human observation of the pictures. Some impairments are not easily measured, yet
are obvious to a human observer. This situation certainly has not changed with the addition of
digital compression. Therefore, casual or informal subjective testing by a reasonably expert
viewer remains an important part of system evaluation and/or monitoring.
Figure 11.6.4 Block diagram of the feature-extraction method of picture-quality analysis. (After
[1].)
• Picture processing, where the complete input and output pictures are directly compared in
some manner and must be available at the measurement instrument (also known as picture
differencing).
From a system standpoint, the glass box approach utilizes knowledge of the compression system
to measure degradation. An example would be looking for blockiness in a DCT system. The
black box approach makes no assumptions about operation of the system.
In feature extraction, analysis time is reduced by comparing only a limited set of picture char-
acteristics. These characteristics are calculated, and the modest amount of data is embedded into
the picture stream. Examples of compression impairments would be block distortion, blurring/
smearing, or “mosquito noise.” At the receiver, the same features of the degraded picture are cal-
culated, providing a measure of the differences for each feature (Figure 11.6.4). The major weak-
ness of this approach is that it does not provide correlation between subjective and objective
measurements across a wide variety of compression systems or source material.
For the picture-processing scheme, the reference and degraded pictures are filtered in an
appropriate manner, resulting in a data set that may be as large as the original pictures. The dif-
ference between the two data sets is a measure of picture degradation, as illustrated in Figure
11.6.5.
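The simplest instance of such a full-frame difference measure is PSNR; the HVS-based methods discussed in this section refine the same idea with perceptual filtering before differencing. A minimal sketch, using synthetic frame data for illustration:

```python
import numpy as np

def psnr_db(reference, degraded, peak=255.0):
    """Peak signal-to-noise ratio between a reference and a degraded frame."""
    ref = np.asarray(reference, dtype=float)
    deg = np.asarray(degraded, dtype=float)
    mse = np.mean((ref - deg) ** 2)
    if mse == 0:
        return float("inf")            # identical frames
    return 10 * np.log10(peak**2 / mse)

rng = np.random.default_rng(1)
reference = rng.integers(0, 256, size=(480, 720)).astype(float)
degraded = reference + rng.normal(scale=2.0, size=reference.shape)
print(round(psnr_db(reference, degraded), 1))   # roughly 42 dB for this noise level
```

PSNR correlates poorly with subjective opinion across compression systems, which is precisely why the HVS-model metrics described next were developed.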
A number of approaches to objective picture quality measurement have been proposed by
researchers using the human visual system (HVS) model as a basis. Such a model provides an
Results
Figure 11.6.5 Block diagram of the picture-differencing method of picture-quality analysis. (After
[1].)
image-quality metric that is independent of video material, specific types of impairments, and
the compression system used. The study of the HVS has been going on for some time, investigat-
ing such properties as contrast sensitivity, spatio-temporal response, and color perception [2].
Derivatives of this work include the JNDmetrix (Sarnoff/Tektronix) [3] and the Three-Layered
Bottom-Up Noise Weighting Model [4].
In-service picture quality measurement using the picture differencing method can be accom-
plished as shown in Figure 11.6.6. At the program source, a system-specific decoder is used to
provide the processed sequence to a picture quality evaluation system. The results would be
available in non-real-time, sampled real-time, or real-time, depending on the computational
power of the measurement instrument. The system-specific decoder provides the same picture
quality output as the decoder at the receive end of the transmission system.
Practical transmission systems for broadcast applications operate either virtually error-free
(resulting in no picture degradation) or with an unrecoverable number of errors (degradation so
severe that quality measurements are not particularly useful). Therefore, bit-error-rate evaluation
based on calculations in the receiver may provide sufficient monitoring of the transmission sys-
tem.
The foregoing is based on the assumption that the transmission path is essentially fixed.
Depending upon the application, this may not always be the case. Consider a terrestrial video
link, which may be switched to a variety of paths before reaching its ultimate destination. The
path also can change at random intervals depending on the network arrangement. The situation
gives rise to compression concatenation issues for which there are no easy solutions.
As identified by Uchida [6], video quality deterioration accumulates with each additional
coding device added to a cascade connection. This implies that, although picture deterioration
caused by a single codec may be imperceptible, cascaded codecs, as a whole, can cause serious
deterioration that cannot be overlooked. Detailed measurements of concatenation issues are doc-
umented in [6].
11.6.4 References
1. Fibush, David K.: “Picture Quality Measurements for Digital Television,” Proceedings of
the Digital Television ’97 Summit, Intertec Publishing, Overland Park, Kan., December
1997.
Chapter
11.7
DTV Transmission Performance Issues
11.7.1 Introduction
For analog signals, some transmission impairments are tolerable because the effect at the
receiver is often negligible, even for some fairly significant faults [1]. With digital television,
however, an improperly adjusted transmitter could mean the loss of viewers in the Grade B cov-
erage area. DTV reception, of course, does not degrade gracefully; it simply disappears. Atten-
tion to several parameters is required for satisfactory operation of the 8-VSB system. First
among these is the basic FCC requirement against creating interference to other over-the-air ser-
vices. To verify that there is no leakage into adjacent channels, out-of-band emission testing is
required.
As with analog transmitters, flat frequency response across the channel passband is necessary.
A properly aligned DTV transmitter exhibits many of the same characteristics as a properly
aligned analog unit: flat frequency response and group delay, with no leakage into adjacent chan-
nels. In the analog domain, group delay results in chrominance/luminance delay, which degrades
the displayed picture but still leaves it viewable. Group delay problems in DTV transmitters,
however, result in intersymbol interference (ISI) and a rise in the bit error rate (BER), possibly
causing the receiver to drop in and out of lock. Even low levels of ISI can cause receivers operat-
ing near the edge of the digital cliff to lose the picture completely. Amplitude and phase errors
can cause similar problems, again, resulting in reduced coverage area.
ues divided by the noise power. This is the difference between the ideal signal and the actual
signal as demodulated along the real (in-phase) axis.
• Modulation error ratio. MER is a complex form of the S/N measurement that is made by including the quadrature channel information in the ideal and error signal power computations. MER and S/N will be approximately equal unless there is an imbalance between the I and Q channels. If the value of MER is significantly less than the S/N, Q-axis clipping is likely to occur. This results because the Q-axis contains most of the amplitude peaks.
• Pilot amplitude error. PAE measurements quantify the pilot carrier deviation from the ideal case.
• Error vector magnitude. EVM is the rms value of the magnitudes of the symbol errors
along the real (in-phase) axis, divided by the magnitude of the real (in-phase) part of the out-
ermost constellation state. Because EVM includes both I and Q channels, it will indicate
transmitter clipping slightly before S/N. EVM is the magnitude of error induced by noise and
distortions compared to an ideal version of the signal.
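Given demodulated symbol values and their ideal counterparts, the figures defined above can be computed directly. A simplified sketch with synthetic real-valued 8-VSB symbol levels (pilot offset and other 8-VSB specifics are omitted; function names are illustrative):

```python
import numpy as np

def mer_db(ideal, received):
    """Modulation error ratio: ideal symbol power over error power (I and Q)."""
    err = received - ideal
    return 10 * np.log10(np.sum(np.abs(ideal)**2) / np.sum(np.abs(err)**2))

def evm_percent(ideal, received):
    """RMS error magnitude relative to the outermost constellation state."""
    err = received - ideal
    v_max = np.max(np.abs(ideal))
    return 100 * np.sqrt(np.mean(np.abs(err)**2)) / v_max

rng = np.random.default_rng(0)
levels = np.array([-7, -5, -3, -1, 1, 3, 5, 7], dtype=float)   # 8-VSB levels
ideal = rng.choice(levels, size=10_000)
received = ideal + rng.normal(scale=0.1, size=ideal.size)      # additive noise only
print(round(mer_db(ideal, received), 1))      # high MER: noise only, no clipping
print(round(evm_percent(ideal, received), 1))
```

With noise alone, MER tracks S/N closely; introducing asymmetric clipping of the received values would open a gap between the two figures, as the text describes.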
Several types of transmitter impairments can degrade the overall performance of the system.
These impairments divide roughly into linear and nonlinear errors.
Figure 11.7.1 An 8-VSB constellation diagram: (a) a near-perfect condition, (b) constellation dia-
gram with noise and phase shift (the spreading of the pattern is caused by noise; the slant is
caused by phase shift). (After [1].)
considerable amount of migration might occur before boundary limits are exceeded. Degradation
in the BER is apparent only when those limits have been exceeded.
Emissions Mask
In practice, it is difficult to measure compliance with the RF emissions mask directly. For near-in
spectral components, within the adjacent channel, a common procedure is as follows [2]:
• Measure the transmitter IMD level with a resolution bandwidth of 30 kHz throughout the fre-
quency range of interest. This results in an adjustment to the standard FCC mask by 10.3 dB.
Under this test condition, the measured shoulder breakpoint levels should be at least –36.7 dB
from the mid band level.
• Measure the transmitter output spectrum without the filter using a spectrum analyzer.
• Measure the filter rejection vs. frequency using a network analyzer.
• Add the filter rejection to the measured transmitter spectrum. The sum should equal the trans-
mitter spectrum with the filter.
Output harmonics may be determined in the same manner as the rest of the output spectrum.
They should be at least –99.7 dB below the mid band power level. EVM is checked with a vector
signal analyzer or similar instrument.
The output power, pilot frequency, inband frequency response, and adjacent channel spectrum
should be measured periodically to assure proper transmitter operation. These parameters can be
measured while the transmitter is in service with normal programming.
Sideband Splatter
Interference from a DTV signal on either adjacent channel into NTSC or another DTV signal
will be primarily due to sideband splatter from the DTV channel into the adjacent channel [2].
The limits to this out-of-channel emission are defined by the RF mask as described in the “Mem-
orandum Opinion and Order on Reconsideration of the Sixth Report and Order,” adopted Febru-
ary 17, 1998, and released February 23, 1998.
For all practical purposes, high-power television transmitters will invariably generate some
amount of intermodulation products as a result of non-linear distortion mechanisms. Intermodu-
lation products appear as spurious sidebands that fall outside the 6 MHz channel at the output of
the transmitter. (See Figure 11.7.2.) Intermodulation products appear as noise in receivers tuned
Figure 11.7.2 Measured DTV transmitter output and sideband splatter. (After [2]. Courtesy of Har-
ris.)
to either first adjacent channel, and this noise adds to whatever noise is already present. The
overall specifications for the FCC mask are given in Figure 11.7.3.
Salient features of the RF mask include the following [2]:
• The shoulder level, at which the sideband splatter first appears outside the DTV channel, is
specified to be 47 dB below the effective radiated power (ERP) of the radiated DTV signal.
When this signal is displayed on a spectrum analyzer whose resolution bandwidth is small
compared to the bandwidth of the signal to be measured, it is displayed at a lower level than
would be the case in monitoring an unmodulated carrier (one having no sidebands). If the
analyzer resolution bandwidth is 0.5 MHz, and the signal power density is uniform over 5.38
MHz (as is the case for DTV), then the analyzer would display the DTV spectrum within the
DTV channel 10.3 dB below its true power. The correction factor is 10 log (0.5/5.38) = –10.3
dB. Thus, the reference line for the in-band signal shown across the DTV channel is at –10.3
dB, relative to the ERP of the radiated signal.
• The shoulder level is specified as –47 dB, relative to the ERP. The shoulder level is therefore 36.7 dB below the reference line at –10.3 dB.
• The RF mask is flat for the first 0.5 MHz from the DTV channel edges, at –47 dB relative to
the ERP, and is shown to be 36.7 dB below the reference level, which is 10.3 dB below the
ERP.
• The RF mask from 0.5 MHz outside the DTV channel descends in a straight line from a value
of –47 dB to –110 dB at 6.0 MHz from the DTV channel edges.
• Outside of the first adjacent channels, the RF mask limits emissions to –110 dB below the
ERP of the DTV signal. No frequency limits are given for this RF mask. This limit on out-of-
channel emissions extends to 1.8 GHz in order to protect 1.575 GHz GPS signals.
Figure 11.7.3 RF spectrum mask limits for DTV transmission. The mask is a contour that illus-
trates the maximum levels of out-of-band radiation from a transmitted signal permitted by the FCC.
This graph is based on a measurement bandwidth of 500 kHz. (After [3]. Used with permission.)
• The total power in either first adjacent channel permitted by the RF mask is 45.75 dB below
the ERP of the DTV signal within its channel.
• The total NTSC weighted noise power in the lower adjacent channel is 59 dB below the ERP.
• The total NTSC weighted noise power in the upper adjacent channel is 58 dB below the ERP.
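The bandwidth-correction arithmetic behind the –10.3 dB reference line and –36.7 dB shoulder figure reduces to a few lines. A sketch (function names are illustrative):

```python
import math

def bandwidth_correction_db(rbw_mhz, signal_bw_mhz=5.38):
    """Level offset seen on a spectrum analyzer whose resolution bandwidth
    is narrower than the (uniform-density) signal bandwidth."""
    return 10 * math.log10(rbw_mhz / signal_bw_mhz)

corr = bandwidth_correction_db(0.5)        # 0.5 MHz RBW vs. 5.38 MHz DTV signal
print(round(corr, 1))                      # -10.3 dB: the in-band reference line

shoulder_vs_erp = -47.0                    # dB relative to ERP, per the RF mask
shoulder_vs_reference = shoulder_vs_erp - corr
print(round(shoulder_vs_reference, 1))     # -36.7 dB below the reference line
```

The same correction applies to any measurement in which the analyzer resolution bandwidth is narrower than the signal being measured.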
Table 11.7.1 DTV Pilot Carrier Frequencies for Two Stations (Normal offset above lower channel
edge: 309.440559 kHz; after [4])
Channel Relationship         | DTV Pilot Carrier Frequency Above Lower Channel Edge
                             | NTSC Station,      | NTSC Station,      | NTSC Station,      | DTV Station,
                             | Zero Offset        | +10 kHz Offset     | –10 kHz Offset     | No Offset
DTV with lower adjacent NTSC | 332.138 kHz ±3 Hz  | 342.138 kHz ±3 Hz  | 322.138 kHz ±3 Hz  |
DTV co-channel with NTSC     | 338.056 kHz ±1 kHz | 348.056 kHz ±1 kHz | 328.056 kHz ±1 kHz |
DTV co-channel with DTV      | +19.403 kHz above DTV | +19.403 kHz above DTV | +19.403 kHz above DTV | 328.8436 kHz ±10 Hz
of the cliff effect at the fringes of the service coverage area for a DTV signal, the allowable lower
power value will have a direct impact on the DTV reception threshold. A reduction of 0.97 dB in
transmitted power will change the DTV threshold of 14.9 dB (which has been determined to
yield a 3 × 10^-6 error rate) to 15.87 dB, or approximately a 1-mile reduction in coverage distance
from the transmitter. Therefore, the average operating power of the DTV transmitted signal is of
significant importance.
The ATSC in [3] recommends a lower allowed power value of 95 percent of authorized power
and an upper allowed power value of 105 percent of authorized power.
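The percentage tolerances recommended above convert to decibels with a one-line power-ratio formula; a small sketch (values computed here for illustration):

```python
import math

def ratio_to_db(power_ratio):
    """Convert a power ratio to decibels."""
    return 10 * math.log10(power_ratio)

# The recommended operating window translates to small dB deviations:
print(round(ratio_to_db(0.95), 2))   # 95% of authorized power -> about -0.22 dB
print(round(ratio_to_db(1.05), 2))   # 105% of authorized power -> about +0.21 dB

# By contrast, the 0.97 dB reduction discussed in the text corresponds to
# operating at roughly 80% of authorized power:
print(round(ratio_to_db(0.80), 2))   # about -0.97 dB
```

The asymmetry is slight: a 5 percent shortfall and a 5 percent excess both stay within about a quarter of a decibel of authorized power.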
Measurement Routine
The foregoing list of 8-VSB measurements should be performed during commissioning of the
RF system, and at regular intervals over the life of the transmitter. Sophisticated test instruments
are available from a number of manufacturers to aid in this work. Among the features provided
on many instruments are automatic measurement of selected parameters and storage of results
for comparison with future measurements. The storage feature can be a powerful tool in identify-
ing developing problems at an early stage.
reference. The co-located DTV station carrier should be 5.082138 MHz above the NTSC visual
carrier (22.697 kHz above the normal pilot frequency). The co-channel DTV station should set
its carrier 19.403 kHz above the co-located DTV carrier.
If there is interference with a co-channel DTV station, the analog station is expected to be sta-
ble within 10 Hz of its assigned frequency.
While it is possible to lock the frequency of the DTV station to the relevant NTSC station,
this may not be the best option if the two stations are not at the same location. It will likely be
easier to maintain the frequency of each station within the necessary tolerances. Where co-chan-
nel interference is a problem, that will be the only option.
In cases where no type of interference is expected, a pilot carrier-frequency tolerance of ±1
kHz is acceptable, but in all cases, good practice is to use a tighter tolerance if practicable.
11.7.4 References
1. Reed-Nickerson, Linc: “Understanding and Testing the 8-VSB Signal,” Broadcast Engi-
neering, Intertec Publishing, Overland Park, Kan., pp. 62–69, November 1997.
2. DTV Express Training Manual on Terrestrial DTV Broadcasting, Harris Corporation,
Quincy, Ill., September 1998.
3. ATSC: “Transmission Measurement and Compliance for Digital Television,” Advanced
Television Systems Committee, Washington, D.C., Doc. A/64-Rev A, May 30, 2000.
4. ATSC: “Guide to the Use of the ATSC Digital Television Standard,” Advanced Television
Systems Committee, Washington, D.C., Doc. A/54A, 4 December 2003.
Subject Index
A
A/B sync 3-19
absolute delay 10-20
ac line disturbance 5-133
ac power control 8-50
accuracy 10-11
active antenna system 7-96
active mixer 6-49
ac-to-RF efficiency 3-94
adaptive equalizer 3-97
add-compare-select 7-78
additive white Gaussian noise 2-70, 2-80
address bus 9-11
addressable converters 7-107
addressable tap 7-119
Advanced Television Technology Center 2-70, 2-80
advection 1-40
AGC control range 6-22
aggregate event information table 3-118
air filtering system 8-36
airflow rate 8-33
air-handling system 8-14
air-interlock switch 8-16
airplane flutter 7-32
all-digital IBOC 2-80
allotment plan (DTV) 3-69
all-pass filter 4-18
all-pole filter 4-19
AM stereo 2-72, 6-68
AM-FM receivers 6-39
amplitude discriminator 6-62
amplitude limiting 6-64
amplitude modulation 1-68, 2-5, 3-9
amplitude shift keying 2-60
amplitude-tracking loop 7-76
analog compatibility tests 2-85
analog multiplier 7-35
ancillary data 3-43
annular control electrode (ACE) pulsing 3-77
antenna bandwidth 5-1
antenna efficiency 1-67
antenna elevation pattern 5-47
antenna gain 5-40
antenna height above average terrain 3-9
antenna matching 6-7
antenna power gain 3-9
antenna sweep 9-52
antenna tower 5-81, 5-82
antenna VSWR 5-53
antennas 5-33
aperiodic oscillator 6-57
aperture correction 11-32
arc detector 8-30
arcing 8-40
array antenna 5-22
aspect ratio 3-9
asynchronous counter 6-54
asynchronous data transfer 7-162
atmospheric charge energy 5-122
atmospheric electrical energy 5-116
atmospheric icing 5-83
atmospheric noise 6-41
ATSC A/80 3-111
ATSC GCR Standard 3-37
attenuation band 4-18
audio measurements 10-5
aural 3-75
aural amplifier 3-79
auto black circuit 11-37
automated test instrument 9-31
automatic chroma control 7-23
automatic fine tuning 7-24, 7-33
automatic frequency control 7-33
automatic gain control 6-47, 7-62
automatic gain control system 7-31
automatic level-control 7-120
automatic phase control 7-41
automatic-frequency-control 7-66
automatic-phase-and-frequency control 7-66
average AGC 7-31
average detection 9-53
average life 9-60
average luminance 11-15
B
back lobe 5-25
backdriving 9-31
balance point 11-19
balanced peak detector 7-35
balun 7-89
band switching 9-53
band-reject filter 4-18
bandwidth 5-1, 7-86, 9-35, 10-11
bandwidth modification 1-67
bare rock grounding 5-106
Barkhausen effect 6-9
Barlow-Wadley principle 1-61
baseband 9-53
baseband equalization DOCR 3-103
baseline clipper 9-53
basic input/output system 9-9
bathtub curve 9-65
batteries 8-81
battery action (on tower elements) 5-101
batwing panel antenna 5-70
beam perveance 3-80
beam pulsing 3-77
beam tilt 5-13, 5-47, 5-74
beamwidth 5-12, 7-86
Bessel function 1-75, 2-12, 9-53
Bessel null method 9-53
best-efforts network 7-153
bidirectional cable system 7-125
binary frequency-shift keying 1-84
binary on-off keying 1-84
binary phase-shift keying 1-85, 2-60
binary training sequence 7-74
bi-phase shift keying 7-149
bit error rate 2-59, 6-28, 7-62, 9-31
bit-error-rate evaluation 11-85
black body radiation 1-11
blackout 8-71
blend to analog 2-80
blend to mono 2-80
blind equalization 7-74
block error rate 2-70, 2-80
blocking oscillator 7-49
Bode diagram 1-54
body-current overload 8-29
bonding 5-106
bouncing APL signal 11-50
bouquet association table (BAT) 3-123
braking radiation 1-17
bridge amplifier 7-117
brightness 3-12, 11-32
brightness control 7-36
broadband antenna 5-74
broadband panel radiator 5-74
broadband tracking filter 7-64
Broadcast Internet 7-155
broadcast videoconference 7-168
bulkhead panel 5-136
bulkhead panel grounding 5-137
buried ground system 5-102
burn-in 9-60, 9-65
burst-referenced ACC system 7-46
business networks 7-106
butterfly antenna 5-70
C
C band 7-137
cable modem termination system 7-126
cable service 7-133
cable system 7-133
cable television 7-105
cable television system 7-133
cable-ready tuner 7-123
cadence signal 3-107
calibrator 9-53
camera black shading 11-38
camera detail circuit 11-38
camera encoder measurement 11-37
camera flare correction 11-39
camera performance measurement 11-35
camera scanning aperture 11-10
camera scanning system 11-13
camera white shading 11-38
candelabra structure 5-65
candelabra tower 5-73
capture ratio 6-22
carrier wave 1-70
carrier-to-noise ratio 7-145, 9-53
D
D/U 2-81
data bus 9-11
data de-interleaver 7-63
data derandomizer 7-63
data field sync 7-70
data field synchronization 7-62
data frames 3-48
data rate 7-149
data segment sync 3-47, 7-68, 7-69
data sink 3-112
data source 3-112
data transmission 7-115
data-slicing 7-65
data-stream syntax 3-122
dc restoration 7-37
dc to light spectrum 1-10
decibel 10-12
decision feedback 7-76
decode time stamp (DTS) 3-44
deemphasis 1-79
degradation failure 9-60
degree of modulation 1-69
de-icing 5-85
delay element 3-11
delay spread 2-59, 3-98, 3-99, 6-31
delayed AGC 7-64
delta modulation 1-82
demodulation 6-57
demodulator 6-40, 7-23
demultiplexing 1-68
dense wavelength division multiplexing 7-128
derating factor 5-34
desensitization 6-13, 6-14
Designated Market Area 7-134
desired to undesired field strength 3-72
detector 6-57, 7-23
Diacrode 3-84
dielectric breakdown 8-50
dielectric constant 8-60
diesel generator 8-75
difference-frequency distortion 10-40
differential gain distortion 11-55
differential peak detector 7-35
differential phase distortion 11-52
diffraction 1-30
digital audio broadcasting 2-15, 2-57
digital data system 1-79
Digital Display Working Group 7-161
digital multimeter 9-19
digital on-channel repeater 3-100
digital performance tests 2-85
digital storage oscilloscope 9-37
Digital Visual Interface 7-161
diplexer 4-17, 5-76
dipole antenna 5-1, 5-20, 7-88
dipole factor 3-67
Dirac pulse 6-28
direct broadcast satellite 3-121
direct digital frequency synthesis 1-61
direct lightning strike 5-106
direct memory access 9-9
direct method 5-28
direct modulation 2-14
direct table lookup 1-61
directional antenna 5-36
directional antenna system 5-23
directional array 5-24
directional-coupler multitap 7-118
directivity 5-12
direct-to-home satellite broadcast 3-114
disassembler 9-28
discrete sine wave frequency-response measurement 11-30
discriminator 6-62
display measurement 11-25
dissipator electrode 5-123
distortion 9-53, 10-33
distortion analyzer 10-34
distributed constant circulator 4-30
distributed transmission adapter 3-107
distributed transmission network 3-104
distributed transmission packet 3-107
distribution service 3-112
distribution system 7-107
DMM 9-21
DOCR 3-100
dominant wavelength 3-12
Doppler effect 6-27, 10-38
Doppler shift 3-109
dot interlace 3-13
dot pitch 11-24
double reflector antenna 7-93
double-balanced mixer 6-51
double-sideband amplitude modulation 1-68
doubly truncated waveguide 4-15
down-link 7-137
downtime 8-39, 9-59, 9-60
driven electrode 5-89
DTV peak envelope power (PEP) 3-82
DTV planning factors 3-67
DTV receiver 7-61
DTV RF envelope 3-82
dual feeder system 8-73
Dual Link DVI 7-162
dual-mode channel combiner 5-77
dual-modulus prescaling 1-58
dual-polarity feed horn 7-139
dual-polarized transition 4-12
dual-slope conversion 9-21
dummy load 8-59
duty cycle 10-19
DVB applications 3-122
DVB-C 3-124
DVB-MC 3-125
DVB-MS 3-125
DVB-S 3-124
DVB-T 3-125
DVI-D 7-162
DVI-I 7-162
DWDM architecture 7-128
dynamic echo 7-74
dynamic intermodulation 10-41
dynamic random-access memory 9-9
dynamic range 6-12, 9-46, 9-53
E
E vector 4-11
earth 5-88
earth current 5-122
earth electrode resistance 5-88
earthing 5-88
earth-to-electrode resistance 5-89
effective isotropic radiated power 7-145
effective radiated power 2-16, 3-10, 3-69, 3-98, 3-101, 5-12, 5-35
EIA color bar signal 11-46
EIA/CEA 909 7-102
electric field 5-65, 5-129
electromagnetic focusing 3-80
electromagnetic generation 5-116
electromagnetic interference 9-17
electromagnetic pulse (EMP) 5-127
electromagnetic radiation 1-1
electromagnetic spectrum 1-9, 1-19
electromagnetic waves 5-65, 7-86
electron-bunching 3-76
electronic program guide (EPG) 3-123
electronic speed control 8-80
electrostatic discharge (ESD) 5-127
elevation gain 5-40
elliptical polarization 5-36
elliptical reflector 7-139
EM spectrum 1-9
Emerald Book 5-88
emergency power 8-72
emission multiplex 3-115
EMP event 5-129
EMP radiation 5-129
emulation 9-9
emulative tester 9-29
enabling event phenomenon 8-23
encoder 1-67
F
facility 5-87
fade margin 2-37, 3-97
fading 6-27
fading simulator 6-33
fail-safe system 8-40
failure 9-60
failure mode 8-54
failure mode and effects analysis 9-60
failure modes 9-63
failure rate 9-60
fan dipole 7-90
far-field region 5-14
fast-charge time 8-82
fast-Fourier-transform 10-16
fault injection 9-81
fault isolation 5-88
fault tree 9-30
fault tree analysis 9-60
fault-tolerant systems 9-80
FCC propagation curves 3-68
feature extraction 11-82
feedback balanced diode detector 7-28
ferrite isolator 4-28
ferrite loop antenna 6-43
fiber-based cable television 7-106
field 3-10
field rate side channel 3-109
field replaceable unit 9-13
field strength 3-68, 9-49
figure of merit 7-86
filament voltage 8-19
filament voltage regulation 8-18
filter 4-17, 6-45, 9-54
filter alignment 4-18
filter loss 9-54
filter order 4-19
finite-impulse-response filter 7-76
first detector 6-6, 6-40
first mixer 7-64
first mixer input level 9-54
first order loop 7-75
flange buildup 4-9
flash converter 9-39
flash energy 5-122
flat response mode 11-48
flatness 9-54
flaw-stimulus relationship 9-70
flesh-tone correction 7-46
flicker effect 6-9
float-charge 8-83
flyback generator 7-51
flyback transformer 7-52
FM propagation 5-29
FM stereo 6-66
focus 11-32
folded dipole 5-20
folded unipole antenna 5-28
forward error correction 3-102, 3-112, 3-124, 7-149
Foster-Seeley discriminator 6-63
Fourier analysis 9-54
Fourier transform 11-29
FPLL 7-64
fractional-N-division synthesizer 1-56
frame 3-10
free space loss 2-42, 7-146
free-space azimuthal pattern 5-60
frequency 10-21
frequency bands 1-9, 9-54
frequency changer 6-6
frequency counter 9-23
frequency demodulator 6-62
frequency detector 6-62
frequency deviation 9-54
frequency diversity 2-62
frequency divider 6-54
frequency domain representation 9-54
frequency hopping 1-86
frequency marker 9-54
frequency modulation 1-74, 2-12, 3-10
frequency plan 7-139
frequency pulling 1-52
frequency pushing 1-51
frequency range 9-54
frequency reference 1-45
frequency response 2-18, 11-88
frequency segments 1-9
frequency shift keying 2-61
frequency spectrums 1-9
frequency translation 1-67, 3-100
frequency-and-phase-lock-loop 7-64
frequency-division multiplexing 2-60
frequency-domain multiplexing 1-68
frequency-sweep signal 11-70
frequency-division multiplexing 6-66
Fresnel zone 1-34, 2-39
functional testing 9-31
fundamental frequency 9-54
fundamental impedance 8-63
G
gain 5-12, 7-86
gain compression 6-14
gain control 6-19
galvanized wire 5-102
gamma 11-11
gamma correction 3-13
gamma ray band 1-17
gas turbine generator 8-77
gasoline generator 8-77
gate 9-62
gate pulse generator 7-41
gated AGC 7-31
gated coincidence detector 7-35
gelcell battery 8-83
generator rating 8-78
geometry and aspect ratio 11-32
geostationary arc 7-138
geosynchronous orbit 7-142
ghost canceling reference 3-35
ghost cancellation 3-35
glaze ice 5-83
glitch 9-27
glitch capture 9-39
Global Positioning System 11-94
Grade A coverage 7-11
Grade B contour 3-67
Grade B coverage 7-11
Grand Alliance receiver 7-80
graticule 9-54
gray scale tracking 11-32
grazing incidence 1-34
Green Book 5-88
ground 5-88
ground resistance test meter 5-92
ground ring 5-89
ground rod 5-90, 5-104, 5-133
ground system 5-133
ground system checklist 5-144
ground wave propagation 6-41
grounding 5-133
grounding electrode 5-89
grounding electrode conductor 5-88
grounding elements 5-133
ground-strap connection 5-108
group delay 10-20, 11-59, 11-88
guard interval 3-126, 3-127
guided-fault isolation 9-30
guy anchor grounding 5-101
guy galloping 5-84
guy wire 8-50
guy-anchor point 5-103
guy-anchor point grounding 5-98
guyed tower grounding 5-101
H
H vector 4-11
half-intensity width 11-29
I
iBiquity Digital 2-66
iBiquity FM IBOC system 2-80
IBOC 2-69, 2-71, 2-79, 2-81
ICPM 11-55
ideal impedance 8-63
IEEE 1212 7-162
IEEE 1394 7-130, 7-162
IF amplifier 7-23
IF gain 9-55
IF processing DOCR 3-100
image frequency 6-5, 6-39, 7-64
image signal 7-22
impedance 8-61
impedance matching 5-14, 5-58
impulse Fourier transform 11-28
impulse response 10-54
in-band on channel (radio) 2-15, 2-57, 2-65, 2-69, 2-79
incident wave 5-52
incidental carrier phase modulation 7-34, 11-55
incidental phase modulation 2-19
in-circuit testing 9-31
in-cloud icing 5-83
index of refraction 1-39
inductive output tube 3-77
infrared band 1-10
input impedance 7-86
in-rush current 8-53
in-service measurement 11-84
instantaneous frequency 9-55
instantaneous sampling 1-79
instruction set 9-11
integral cavity klystron 3-79, 8-13
integrated flyback transformer 7-56
integrated receiver decoder 3-115, 3-116, 3-123
integration period 9-21
interactive cable system 7-107
interactive television 3-123
J
jitter 6-23
JNDmetrix 11-84
Johnson counter 6-70
Johnson noise 6-8
K
Kell factor 11-28
kernel 9-9
keyed AGC 7-31
Klystrode 3-77
klystron 3-76, 3-84, 3-88, 7-144
knife-edge Fourier transform 11-28
knife-edge Fourier transform measurement 11-30
Ku band 7-137
L
L network 5-14
latent defect 9-68, 9-72
LC filter 6-46
LDMOS 3-83, 3-85
lead-acid battery 8-83
least-mean-square 7-73
lens back-focus 11-37
level 10-6
level measurement 10-6
light to gamma ray spectrum 1-10
lighting requirements 5-82
lightning 5-133
lightning arrestor 5-141
lightning effect 5-115
lightning prevention 5-122
lightning rod 5-126
lightning strike 5-118
line bounce 7-38
line image 11-8
line select 11-49
line stretcher 5-18
linear array antenna 5-22
linear channel distortion 7-73
linear distortion 11-59, 11-88
linear matrix 3-14
linear scale 9-55
linear-beam device 3-88
line-width measurement 11-27
liquefied-petroleum gas generator 8-77
lithium ion battery 8-82
lithium polymer battery 8-82
live pause 7-164
LO output 9-55
load current 8-82
local origination programming 7-106
local oscillator 6-39, 7-64, 9-55
logic analyzer 9-25
logic current tracer 9-24
M
machine cycle 9-10
magnetic declination 5-25
magnetic doublet 1-20
magnetic north 5-25
magnitude scale 8-67
main facility ground point 5-136
maintainability 9-59, 9-61
maintenance 9-76
maintenance log 8-7
maintenance program 8-7
major lobe 5-24
manometer 8-15
manual gain control 6-14
master guide table 3-117
matching network 5-14
mean time between failure (MTBF) 3-94, 9-61
mean time to repair (MTTR) 3-94, 9-61
meander gate 3-21
measurement set 9-30
medium wave broadcasting 2-5
memory map 9-10
message corruption 1-86
metal-oxide-semiconductor field-effect transistor (MOSFET) 3-83
meter readings 8-7
m-g set 8-74
microwave band 1-12
microwave multipoint distribution system (MMDS) 3-125
microwave radio service 7-137
microwave tower grounding 5-133
millimeter waves 1-13
minimum detectable signal 6-10
minimum discernible signal 6-12
minimum-phase 10-19
mixed-highs principle 3-12
mixer 6-39, 7-22
mixer circuit 6-49
mixing 9-55
mixing products 2-19
mod-anode pulsing 3-77
mode of failure 9-61
modulated staircase 11-52
modulated waveform 1-67
modulation index 1-74, 2-12
modulation system 1-67
modulation transfer function 11-26
modulus error ratio 11-88
moiré effects 11-70
moiré pattern 7-43
monitor point 5-28, 9-62
monochrome level 11-8
motion-detection and interpolation 11-75
mountaintop grounding 5-106
MPEG Layer II 3-122
MPEG-2 AAC 2-71, 2-81
MPEG-2 compliance point 3-122
MPEG-2 data packets 3-122
MSDC (multistage depressed collector) klystron 3-84
multiburst 11-49
multiburst signal 11-70
multicast videoconference 7-168
multichannel microwave 7-110
multi-format color bar signal 11-47
multiloop synthesizer 1-59
multipath 2-71, 2-81, 6-42
multipath distortion 8-25
multipath echo 3-36
multipath immunity 3-126
multipath interference 2-59
multiple frequency network 3-105, 3-109
multiple sampling 9-61
multiple-tower systems 5-60
multiplexed analog component (MAC) 3-121
multiplexer 5-76
multiplexing 1-67
multiresonator filter 6-46
multislope integration 9-22
multislot antenna 5-69
multistage depressed collector (MSDC) klystron 3-77
must-carry 7-134
N
National Electrical Code 5-87
National Radio Systems Committee 2-69, 2-71, 2-81
National Television System Committee 3-11
natural electrode 5-89
natural gas generator 8-77
natural radioactive decay 5-116
natural sampling 1-79
near video-on-demand 3-123, 7-125
negative sign convention 5-14
negative transmission 3-10
network analyzer 8-57
network information table (NIT) 3-123
nickel cadmium battery 8-82
nickel metal hydride battery 8-82
noise 9-55, 11-59
noise bandwidth 9-55
noise canceler 7-50
noise factor 6-6, 6-8
noise figure 6-8, 6-40, 7-61, 7-64
noise floor 9-55
noise gate 7-50
noise limited contour 3-106
noise measurement 10-15
noise sideband 9-55
noise temperature 7-145
nonlinear distortion 11-59, 11-88, 11-90
nonlinearity 10-44
nonsoil grounding 5-107
normalized characteristic impedance 8-64
normalized impedance 8-63
notch filter 4-18
notch-filter distortion analyzer 10-34
NRSC 2-81
NTSC 3-11, 3-37
NTSC composite video 7-44
NTSC interference rejection filter 7-72
NTSC rejection filter 7-77
null-fill 5-47
Nyquist criteria 1-79
O
objective testing 2-71, 2-81, 11-79
OCAP 7-130
occupied bandwidth 9-47
odd-order IM products 6-13
OFDM 2-82, 3-126
offset tracking loop 7-76
one-port measurements 10-5
Open Cable 7-130
open service information system 3-123
operating parameters 8-19
opportunistic 7-156
opportunistic bandwidth 7-156
optical spectrum 1-10
optical splitter 7-129
optically scalable node 7-128
orthogonal energy 4-14
orthogonal frequency division multiplexing 2-72, 2-82, 3-126, 3-127
orthogonal frequency-division multiplexing (OFDM) 3-125
oscillator 1-51
oscilloscope 9-35, 11-43
outer code 3-124
out-of-band emission testing 11-87
out-of-channel emission 11-90
out-of-service picture-quality measurements 11-81
oven-controlled crystal oscillator 1-46, 11-94
overscan 11-32
overshoot 10-51
P
P4 phosphor 11-31
PA compartment 8-16
PA tube 8-14
PA tuning 8-19
PAC 2-71, 2-81
PAL color system 3-11, 3-18
PAL sync 3-19
panel antenna 5-30, 5-49
parabolic reflector 7-139
parallel amplification 3-84
parallel resonant oscillator 6-57
parasitic director elements 5-21
part failure rate 9-61
parts-count method 9-63
passive filter 4-17
Q
quadrature amplitude modulation (QAM) 1-74, 3-124
quadrature detector 7-35
quadrature distortion 7-29
quadrature phase-shift keying 2-60, 3-124, 7-149
quadriphase-shift keying 1-85
quality 9-62
quality assurance 9-61
quality audit 9-62
quantization 1-80, 9-38, 11-1
quantization error 1-80
quantization noise 1-82
quarter-wave monopole 7-92
quartz crystal 1-45, 6-55
quartz filter 6-46
quasi-parallel sound system 7-35
quasi-peak detector 10-8
R
radial ground 5-103
radial scale 8-67
radiating near-field region 5-14
radiation 7-86
radiation resistance 5-11, 7-86
radiative cooling 1-40
radio channel 6-27
Radio Data System 2-82
radio frequency band 1-12
radio horizon 2-38
radio receiver 6-7
radio refractivity 1-40
rain attenuation 7-147
random-noise signal 10-25
raster area 11-9
raster coordinates 11-8
raster spatial frequency 11-28
rating region table 3-117
ratio detector 6-63
ratio measurements 10-5
ray propagation 1-19
ray theory 1-30
Rayleigh channel 3-99
Rayleigh distribution 6-28
Rayleigh’s law 1-39
RDS 2-82
reactance curve 8-65
reactive near-field region 5-14
real-time analyzer 10-16, 10-24
receiver antenna system 7-85
receiver gain 6-8
receiver loop acquisition sequencing 7-63
reciprocal double exponential waveform 5-117
record length 9-39
Reed Solomon encoding/decoding 3-50
Reed-Solomon decoder 7-63
Reed-Solomon error-correction 7-73
Reed-Solomon outer coding 3-125
reference phase 4-21
S
S/N 11-87
safety factor 5-35
salting 5-94
sample 9-61
sample loop 5-26
sampling interval 9-38
sampling lines 5-27
sampling principle 1-79
satellite 7-137
satellite antenna piers 5-108
satellite digital audio radio service 2-71, 2-82
satellite link 3-112
satellite receiving antenna 7-93
satellite relay system 7-137
satellite services 7-108
satellite transmission 3-113
satellite virtual channel table 3-118
saturation 3-12, 7-24
scan width 9-46
scanning 3-10
scan-synchronizing signal 7-47
scatter field 1-41
scatter propagation 1-41
scatter serration 5-60
scattering matrix 4-22
SDARS 2-71, 2-82
sealed lead acid battery 8-82
SECAM 3-11
SECAM III 3-22
SECAM IV 3-26
second detector 6-57
secondary effects of lightning 5-105
secondary gamma rays 1-15, 1-17
secondary service area 2-5
second-order loop 1-53
segment sync 7-62, 7-77
selective fading 6-41
selectivity 6-11, 6-40, 6-46
selenium thyrector 8-30
self-discharge 8-82
self-supporting tower grounding 5-101
sensitivity 6-8, 9-46, 9-56
sensitivity level 7-11
separate service information (SI) 3-123
separation 2-19, 10-13
T
tangential-fire mode 5-50
tangential-firing panel 5-71
tap-off device 7-118
TE waves 4-11
tee network 5-15
telemetry 7-142
television lines 11-24
television receive only (TVRO) system 7-108
television receiver 7-11
television sound system 7-33
television transmitter 3-75
television tuner 7-12
temperature control 8-32
temperature shock testing 9-65
temperature-compensated crystal oscillator 1-47
terrestrial microwave relay 7-108
test charts 11-14
test instruments 11-2
test pattern 11-43
test to failure 9-61
thermal distortion 10-37
thermal fatigue 8-32
thermal noise 6-8, 6-9, 7-138
thermal radiation 1-13
third-order IM dynamic range 6-18
Three-Layered Bottom-Up Noise Weighting Model 11-84
threshold 6-22
threshold-of-visibility (TOV) 7-61
thyristor power control 8-51
tilt 10-51, 11-56
time and date table (TDT) 3-123
time division multiplexing 2-60
time domain 10-49
time gating 7-40
time hopping 1-86
time-division multiplexing 6-66
time-domain multiplexing 1-68
time-domain reflectometer 8-57
time-invariant 6-36
timing analyzer 9-25
TM waves 4-11
tone burst 10-52
U
Ufer ground system 5-97
UHF tetrode 3-84, 3-86
UHF transmitter 3-76
UHF tuner 7-17
ultraviolet band 1-10
underscan 11-32
unit sequential sampling 9-61
up-link 7-137
upper-sideband 2-5
upper-sideband signal 1-68
UPS inverter 8-80
useful storage bandwidth 9-39
utility company power failure 8-72
utility service drop 8-73
UV band 1-12
V
V antenna 7-90
valid data window 9-11
varactor diode 6-55, 7-19
variable transformer 5-58
variable-frequency oscillator 6-7
varicaps 7-19
vector field 5-65
vectorscope 11-43, 11-49
velocity of propagation 8-60, 8-64
Venn diagram 9-70
vertical blanking interval 3-35
vertical interval reference 3-35, 7-47
vertical polarization 5-12, 5-36
vertical resolution 11-12
vertical scan system 7-49
vertical shading 11-36
vertical sync pulse sag 7-32
vertically polarized 7-86
vestigial sideband transmission 3-10
vestigial-sideband AM 1-71
VHF transmitter 3-76
VHF tuner 7-14
video amplifier 7-36
video demodulator 7-27
W
wave optics 1-30
waveform monitor 11-43
waveform subtraction 9-57
waveguide 4-1, 4-10
wavelength 5-1, 7-85
web caching 7-154
weighted quasi peak 2-82
weighting filter 10-15
white balance 11-22
white noise 2-70, 2-80, 10-26, 11-92
windload 5-68
window function 10-27
windstorm effect 5-122
windsway 5-40, 5-73
Y
Yagi-Uda antenna 5-21
Yagi-Uda array 7-92
yaw 7-143
Z
zone plate 11-69
zone plate chart 11-14
zone plate pattern 11-69
zone plate signal 11-73
K. Blair Benson (deceased) was an engineering consultant and one of the world’s most renowned television
engineers. Beginning his career as an electrical engineer with General Electric, he joined the Columbia
Broadcasting System Television Network as a senior project engineer. From 1961 through 1966 he was
responsible for the engineering design and installation of the CBS Television Network New York Broadcast
Center, a project that introduced many new techniques and equipment designs to broadcasting. He advanced
to become vice president of technical development of the CBS Electronics Video Recording Division. He
later worked for Goldmark Communications Corporation as vice president of engineering and technical
operations.
A senior member of the Institute of Electrical and Electronics Engineers and a fellow of the Society of
Motion Picture and Television Engineers, he served on numerous engineering committees for both societies
and for various terms as SMPTE Governor, television affairs vice president, and editorial vice president. He
wrote more than 40 scientific and technical papers on various aspects of television technology. In addition, he
was editor of four McGraw-Hill handbooks: the original edition of the Television Engineering Handbook, the
Audio Engineering Handbook, the Television and Audio Handbook for Engineers and Technicians, and
HDTV: Advanced Television for the 1990s.