AMERICAN
CINEMATOGRAPHER
MANUAL
TENTH EDITION
Volume I
EDITED BY
Michael Goi, ASC
ISBN 978-1-4675-6831-9
Foreword
You hold in your hands the result of five years of thought, debate, and
inspiration. When Stephen Burum, ASC, asked me to be the editor of
this 10th edition of the venerable American Cinematographer Manual, the
industry was in the birth throes of transition; digital intermediates were the
exception and not the rule, we still used the term video rather than digital,
and 4K as a viable production and post format was far beyond our reach.
All these changes and many more came in rapid succession as we labored to
bring this book to press. No sooner had we completed an article when it had
to be updated due to sweeping advances in technology.
I am at heart a low-tech person. I like things simple. I questioned whether
I was even the right person to be taking this book on. But in a strange way, it
made sense. If I could design the manual in a manner that made sense to me,
then the information it contained would be accessible to a wide spectrum of
professional and prosumer image makers. Cinematographers today need to
be closet scientists in order to decipher the tools they have at their disposal,
but all those technologies need not be daunting; they can be fun to explore
and exciting to utilize. Now more than ever, the dreams of a whole new gen-
eration can be made into real moving images. This edition contains some of
the most comprehensive information on digital that you will find anywhere,
but it doesn’t leave behind the essential building blocks of film technology,
which is at its highest level of development. Where we are now is really having
the best of both worlds.
When you embark on a journey to a new world, it’s best to take along a
crew who know the territory. The contributors to this edition have proven to
be the most helpful and dedicated group of scientists, artists and craftspeople
one could possibly hope to assemble. Thanks go to Jim Branch, Curtis Clark,
ASC; Richard Crudo, ASC; Dan Curry; Linwood G. Dunn, ASC; Richard
Edlund, ASC; Jonathan Erland; Jon Fauer, ASC; Ray Feeney; Tom Fraser;
Taz Goldstein; Colin Green and the Previsualization Society; Frieder Hoch-
heim; Michael Hofstein; Bill Hogan; John Hora, ASC; Rob Hummel; Steve
Irwin; Kent H. Jorgensen; Frank Kay; Glenn Kennel; Jon Kranhouse; Lou
Levinson; Andy Maltz and the AMPAS Science and Technology Council;
Vincent Matta; Tak Miyagishima; David Morin; M. David Mullen, ASC;
Dennis Muren, ASC; Iain A. Neil; Marty Ollstein; Josh Pines; Steven Poster,
ASC; Sarah Priestnall; David Reisner; Pete Romano, ASC; Andy Romanoff;
Dr. Rod Ryan; Nic Sadler and Chemical Wedding; Bill Taylor, ASC; Ira Tiffen
and Evans Wetmore.
Special thanks go to Iain Stasukevich for his assistance in research, Lowell
Peterson, ASC, Jamie Anderson, ASC and King Greenspon for their proof-
reading skills, and Deeann Hoff and Mark McDougal for handling the layout
of the book.
Extra special thanks go to Brett Grauman, general manager of the ASC,
Patty Armacost, events coordinator, Delphine Figueras, my assistant when
I was handling being ASC president while trying to finish this book, Saul
Molina and Alex Lopez for their expertise in marketing and events man-
agement, Owen Roizman, ASC, who is the heart, soul and inspiration for
the organization, George Spiro Dibie, ASC, my mentor and friend, Martha
Winterhalter, whose knowledge of what we do and how to convey it to the
world knows no bounds, and Gina Goi, my wife, for her love and support
during my many twilight editing sessions.
Enjoy the Manual. Go make movies.
Table of Contents
Foreword Vol. I iii
Origins of the American Society of Cinematographers Vol. I 1
Responsibilities of the Cinematographer Vol. I 3
Summary of Formats Vol. I 7
Tak Miyagishima
Basic Digital Concepts Vol. I 9
Marty Ollstein
A Primer for Evaluating Digital Motion Picture Cameras Vol. I 31
Rob Hummel
Digital Cinematography on a Budget Vol. I 37
M. David Mullen, ASC
Putting the Image on Film Vol. I 57
Rob Hummel
Comparisons of 35mm 1.85, Anamorphic and Super 35
Film Formats Vol. I 59
Rob Hummel
Anamorphic Cinematography Vol. I 87
John Hora, ASC
Exposure Meters Vol. I 93
Jim Branch
Lenses Vol. I 111
Iain A. Neil
Camera Filters Vol. I 143
Ira Tiffen
Camera-Stabilizing Systems Vol. I 173
Previsualization Vol. I 187
Colin Green
3-D Stereoscopic Cinematography Vol. I 193
Rob Hummel
Day-for-Night, Infrared and Ultraviolet Cinematography Vol. I 207
Dr. Rod Ryan
Aerial Cinematography Vol. I 215
Jon Kranhouse
Underwater Cinematography Vol. I 229
Pete Romano, ASC
Origins of the
American Society
of Cinematographers
For over 93 years, the ASC has remained true to its ideals: loyalty, prog-
ress, artistry. Reverence for the past and a commitment to the future
have made a potent and lasting combination in a world of shifting values
and uncertain motives.
The American Society of Cinematographers received its charter from the
State of California in January 1919 and is the oldest continuously operat-
ing motion picture society in the world. Its declared purpose still resonates
today: “to advance the art of cinematography through artistry and techno-
logical progress, to exchange ideas and to cement a closer relationship among
cinematographers.”
The origins of the ASC lie in two clubs founded by cinematographers in
1913. The Cinema Camera Club was started in New York City by three cam-
eramen from the Thomas A. Edison Studio: Phil Rosen, Frank Kugler and
Lewis W. Physioc. They decided to form a fraternity to establish professional
standards, encourage the manufacture of better equipment and seek recogni-
tion as creative artists. Meanwhile, the similarly conceived Static Club was
formed in Los Angeles. When Rosen came to the West Coast five years later,
he and Charles Rosher combined the clubs. The ASC now has more than 340
active and associate members.
The first ASC screen credit was given to charter member Joseph August
when he photographed a William S. Hart picture in 1919.
The year after its charter, ASC began publishing American Cinematogra-
pher magazine, which ever since has served as the club’s foremost means of
advancing the art.
The ASC has been very active in recent years in expressing concern about
choices for Advanced Television (ATV), ranging from the choice of aspect
ratio to pushing for the abandonment of interlace displays. At the invitation
of the House and Senate in Washington, D.C., members of the ASC have
been asked to inform and advise legislators on these issues.
Currently our technology committee has created a standard test (StEM) for
digital cinema. They are advising the industry on standards in both produc-
tion and postproduction for digital capture, manipulation and presentation.
The ASC is not a labor union or guild, but is an educational, cultural and
professional organization. Membership is possible by invitation and is extend-
ed only to directors of photography with distinguished credits in the industry.
—George E. Turner
Responsibilities
Of The Cinematographer
PREPRODUCTION PLANNING
The cinematographer is frequently invited to watch rehearsals and offer
suggestions for “blocking” scenes to provide artful coverage in the most
efficient way. All aspects of production are planned at this stage, including
how many cameras are needed, how to move them and whether older or
newer prime or zoom lenses are best suited for each task. Cinematographers
also frequently provide input to directors while they are developing story-
boards and shot lists. They go on location scouts and make recommenda-
tions to the production designer and art director for dressing and painting
sets based on the vision for the “look” of the film and requirements for light-
ing and camera movement. The cinematographer also confers with the di-
rector and assistant director about the best times and places to stage exterior
scenes to take maximum advantage of available light. They must also plan
for such variables as the weather and tides if they are going to shoot on the
beach or at sea. The cinematographer consults with the production designer
regarding plans for dressing stages and the space they need for rigging lights
and moving cameras. This includes the use of wild walls, removable sections
of ceilings, placement, sizes and angles of windows, practical lights, and the
colors and textures of props and walls.
The cinematographer also organizes camera, electrical and grip crews,
whose talents and skills are the right match for the tasks at hand. They work
with the gaffer to plan placement and rigging lighting fixtures, including
deciding whether a dimmer control console is needed. The cinematographer
also confers with the key grip on issues related to camera movement, includ-
ing what gear is needed. He or she also consults with the production manager
regarding arrangements for rental of camera, lighting and grip equipment,
and such specialized tools as insert cars, on the days they will be needed.
During preproduction, the cinematographer must establish rapport with
the make-up, hair and costume designers, which frequently includes shoot-
ing tests with the actors. Using information gained by this testing, the cine-
matographer can then present a visual interpretation of the actor’s character,
helping to define and amplify the performance. They might also test new
camera films, lenses and specialized photochemical or digital intermediate
processes (DI) to determine the most efficient way to put the final touches on
the visual design. In addition, the cinematographer establishes communica-
tions with the timer at the film lab or digital postproduction facility, which
will provide dailies. They also make arrangements and check out facilities for
viewing dailies, meet stand-ins for actors, and establish rapport and an open
line of communications with the AD.
PRINCIPAL PHOTOGRAPHY
The cinematographer is responsible for executing the vision for the “look”
of the film, while helping to keep production on budget and on schedule.
On many feature films, the day begins with the cinematographer viewing
dailies during early morning hours at the lab to verify that there are no
technical problems and that nuances in the “look” are working. They frequently
watch rehearsals of the first scene with the director, and suggest whether
modifications in lighting or coverage are needed. The cinematographer
stays in touch with the production manager and AD regarding any changes
in the anticipated schedule caused by unforeseen circumstances. In ad-
dition, the cinematographer approves the setup of lighting by the gaffer.
Many directors want lighting and camera coverage to be “flexible” enough
to give the actors the freedom to perform spontaneously.
Shots are often rehearsed with stand-ins. The cinematographer confers
with the director regarding adjustments in plans for lighting and coverage
and facilitates those changes with the grip, gaffer and lighting crews. They
also work with the standby painter for any last minute touch-ups needed
on sets, assist the AD in staging background action, and give the sound de-
partment the freedom to put their booms where they are needed to record
great audio of the dialogue. If the director desires, a final walk-through or
rehearsal is done with the actors. When cameras are rolling, the cinema-
tographer assures that there are no technical glitches. They also provide an
extra pair of eyes for the director on the set. The cinematographer might
suggest retaking a shot because something didn’t quite look or feel right,
while assuring the director they will find a way to make up the time.
If a DI finish is planned, the cinematographer might request
that digital still images of the scenes be taken, which he or she later manipu-
lates with a personal computer to give the dailies timer and colorist a visual
reference for the “look.”
The cinematographer, director and other key collaborators watch dailies
together to judge how effectively the “look” is working. At the end of each
day, the cinematographer discusses the first setup for the next morning
with the AD and possibly the director. He or she also informs the script
supervisor if there are special camera or lighting notes, makes sure that the
camera, lighting and grip crews have all the information needed to rig their
gear, provides any special notes and instructions required by the film lab and
dailies timer, and works with the production manager regarding the need to
return or acquire equipment.
POSTPRODUCTION
The cinematographer’s role extends deep into postproduction with the goal
of assuring that the “look” that he or she rendered is what audiences see on
cinema and television screens. If possible, the cinematographer handles any
additional photography required by changes in the script. They are also called
on to supervise the seamless blending of visual effects shots with live-action
footage. The cinematographer is responsible for timing the film for continu-
ity and for adding nuances to the “look” in either a DI or optical lab envi-
ronment. They also approve the final answer print in collaboration with the
director and producer. In addition, the cinematographer verifies that what
they approved is reflected in the release print. The final task for the cinema-
tographer involves timing, and, if necessary, reformatting films for release in
DVD, HD and other television formats. All of these final steps are meant to
assure that audiences experience films on motion picture and display screens
the way they are intended to be seen by the creators of the images.
Summary of Formats
compiled by Tak Miyagishima
ASC Associate Member
APERTURE SPECIFICATIONS
• 35mm Camera – Spherical Lens
Academy Camera Aperture .866" x .630" 22mm x 16mm
• 35mm Theatrical Release – Spherical
1.37:1 .825" x .602" 20.96mm x 15.29mm
1.66:1 .825" x .497" 20.96mm x 12.62mm
1.85:1 .825" x .446" 20.96mm x 11.33mm
• 35mm Television Aperture and Safe Areas
Camera Aperture .866" x .630" 22mm x 16mm
TV Station Projector Aperture .816" x .612" 20.73mm x 15.54mm
TV Transmitted Area .792" x .594" 20.12mm x 15.09mm
TV Safe Action Area .713" x .535" 18.11mm x 13.59mm (corner radii = .143"/3.63mm)
TV Safe Title Area .630" x .475" 16mm x 12.06mm (corner radii = .125"/3.17mm)
• 35mm Full Aperture – Spherical Lens (For Partial Frame Extraction Prints, Super 35)
Camera Aperture (Film Center) .980" x .735" 24.89mm x 18.67mm
Finder Markings:
35mm Anamorphic 2.4:1 AR .945" x .394" 24mm x 10mm
70mm 2.2:1 AR .945" x .430" 24mm x 10.92mm
35mm Flat 1.85:1 AR .945" x .511" 24mm x 12.97mm
• 35mm Panavision 2-Perf
Camera Aperture (Film Center) .980" x .365" 24.89mm x 9.27mm
Ground Glass 2.4:1 AR .825" x .345" 20.96mm x 8.76mm
• 35mm Panavision 3-Perf
Camera Aperture (Film Center) .980" x .546" 24.89mm x 13.87mm
1.78:1 .910" x .511" 23.10mm x 12.98mm
• 35mm Panavision 4-Perf
1.85:1 AR Spherical (Flat) Proj. AP .825" x .446" 20.96mm x 11.33mm
2.4:1 AR Anamorphic Squeeze Proj. AP .825" x .690" 20.96mm x 17.53mm
5-Perf 70mm 2.2:1 AR Proj. AP 1.912" x .870" 48.56mm x 22.10mm
• Panavision 35 and Anamorphic Squeezed Negative
Camera Aperture .866" x .732" 22mm x 18.59mm
35mm Squeezed Print, Finder Marking (2.2:1 70mm) & Proj. AP .825" x .690" 20.96mm x 17.53mm
16mm Squeezed Print, Max Proj. AP .342" x .286" 8.69mm x 7.26mm
16mm Un-Squeezed Print (1.85:1), Proj. AP Matte .380" x .205" 9.65mm x 5.20mm
70mm Unsqueezed Print, Proj. AP 1.912" x .870" 48.56mm x 22.10mm
Film records an image by varying the density of silver or dye of the film
emulsion in a continuous gradient from clear to opaque, black to white.
Digital imaging builds an image with numbers in a binary format. The
detail displayed is limited to a scale with a discrete number of values, deter-
mined by the number of bits being used.
In the common Bayer-pattern arrangement, each 2x2 group of filtered sensors
is RGBG: 50% green, 25% red and 25% blue. This allocation mimics the
physiology of the human eye, which is more sensitive to green light. These
groups of four filtered sensors are repeated throughout the chip. Dedicated
proprietary software interprets the signal (the de-Bayering process) from
each sensor site, taking into account the particular spectral transmission of
its color filter along with the values of the adjacent sites in the matrix, and
assigns a color and brightness value to each pixel of the recorded image. The
image data can be recorded before this digital conversion (de-mosaic or de-
Bayer process) is performed. This ‘pre-conversion’ format is called raw, and
yields more data and a higher quality image. (See Figure 6.)
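For readers who think in code, a minimal sketch of the principle follows. It is not any manufacturer's proprietary de-Bayer algorithm; the RGGB mosaic layout, the bilinear kernel and the normalization are all illustrative assumptions.

import numpy as np
from scipy.signal import convolve2d

def debayer_bilinear(raw):
    """Toy bilinear de-mosaic of a sensor image with an assumed RGGB layout.
    raw: 2-D float array of photosite values."""
    h, w = raw.shape
    y, x = np.mgrid[:h, :w]
    masks = [(y % 2 == 0) & (x % 2 == 0),  # red photosites
             (y % 2) != (x % 2),           # green photosites (two per 2x2 group)
             (y % 2 == 1) & (x % 2 == 1)]  # blue photosites
    kernel = np.array([[0.25, 0.5, 0.25],
                       [0.50, 1.0, 0.50],
                       [0.25, 0.5, 0.25]])  # a site's own value counts fully, neighbors less
    rgb = np.empty((h, w, 3))
    for c, mask in enumerate(masks):
        known = np.where(mask, raw, 0.0)
        weights = convolve2d(mask.astype(float), kernel, mode="same")
        # Each output pixel becomes a weighted average of nearby sites of this
        # color, the "values of the adjacent sites" idea described above.
        rgb[..., c] = convolve2d(known, kernel, mode="same") / weights
    return rgb

Real de-Bayering is far more sophisticated (edge-aware and proprietary), but even this toy version shows why raw recording preserves options: the mosaic data survives untouched until this step is performed.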
The choice of the tiny color filters on the sensors—whether narrow spec-
trum or wide band—has a significant effect on the dynamic range and color
saturation of the image captured. A wider-band filter leaves the sensor more
sensitive to light, yielding a wider dynamic range and higher native ISO.
But the color recorded by that wide-band filtered sensor is less true and less
saturated.
The Panavision Genesis camera uses an RGB stripe mosaic pattern. This
design uses a 2x3 matrix of filtered sensors (two each of red, green and blue)
to measure the data that determines the value of each pixel. In this case, the
data is “oversampled”—six sensors contribute data to define each single pixel
in the recorded image.
The Sony F65 uses a new mosaic pattern that provides red, green, and
blue data for every pixel of a recorded 4K image. The higher resolution 8K
sensor array is rotated at a 45-degree angle so as to place the filtered sensors
in position to measure all three color values for each pixel in a recorded
image, producing a ‘true’ 4K RGB output image.
3) Resolution
As the smallest element of a digital image, the pixel represents the limit of
detail that can be displayed. An image composed of a small number of pixels
can only show a rough approximation of a scene, with little detail. The more
pixels used to display an image, the finer the detail that can be revealed.
Resolution is the measure of the finest detail visible in a displayed image,
and is defined numerically by the number of pixels recorded in the image
raster—NOT by the number of sensors in the camera chip. This distinction
has created some confusion and controversy in the resolution claims of
some camera manufacturers.
Camera resolution is commonly defined by the number of lines of pixels
(scan lines) it records. The standard HD camera records 1080 lines, although
some cameras that record 720 lines are also considered HD. An increase in
the pixel line count will produce a proportional increase in resolution and
representation of fine detail. Image resolution is expressed by two numbers:
columns (or pixels per line) x lines. The HD standard image is defined as
being 1920x1080 pixels.
A doubling of pixel lines and pixels per line (as the change from 2K to
4K), increases the total pixel count by a factor of 4, requiring four times the
memory to store the image data. However, the MTF (modulation transfer
function, an optical measure of line-pair resolution) only doubles.
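That factor-of-four relationship is easy to verify with a few lines of Python; the frame heights and the 10-bit-per-channel packing below are illustrative assumptions that ignore file-container overhead.

frames = {"2K": (2048, 1080), "4K": (4096, 2160)}
for name, (width, height) in frames.items():
    pixels = width * height
    megabytes = pixels * 3 * 10 / 8 / 1e6  # 3 channels x 10 bits, packed
    print(f"{name}: {pixels:,} pixels, ~{megabytes:.1f} MB per uncompressed frame")
# 2K: 2,211,840 pixels, ~8.3 MB; 4K: 8,847,360 pixels, ~33.2 MB.
# Four times the pixels and storage, but line-pair resolution (MTF) only doubles.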
Some of the most common image-resolution standards include:
• Standard definition analog NTSC = 640x480 (1.33:1)
• HD = 1920x1080 (or 1280x720) (1.78:1)
• 2K = 2048 pixels per line (line count varies with aspect ratio)
• 4K = 4096 pixels per line
Pixel count is not the only element in the imaging chain that affects picture
resolution. Lens quality, precise registration of the chips in a three-chip cam-
era, and image scaling and resizing conversions also affect image resolution.
4) Exposure
Film has a greater dynamic range, or range of usable f-stops, than most
digital formats. And due to the response of film dyes and silver to exposure
extremes, which causes a gradual “rounding off ” of values at either end of
the tonal scale (highlights and shadows), there is a perceived extension of the
dynamic range. Shadows merge smoothly into darkness, and highlights fade
gradually (or “bloom”) into white.
Digital images, however, “clip” at either end of the tonal scale—the shad-
ows drop off abruptly into solid black, and highlights cut off into flat, white
areas with no definition. Clipping occurs when the exposure moves beyond
the specific threshold which is determined by the sensitivity and capability
of the camera or recording device.
A cinematographer can avoid clipping a digital image by monitoring and
controlling the light levels recorded in a scene. There are several useful digi-
tal tools available for monitoring exposure in a scene. A waveform monitor
displays the exposure levels across a scene, read by the camera from left to
right on a scale of 0–100 IRE (Institute of Radio Engineers). Generally, 0
IRE defines total black and 100 IRE defines total white, indicating the maxi-
mum amount of voltage that the system can handle. If the levels flatten out
at either end of the scale (0 or 100), the image will clip, and no detail will be
recorded in those areas. NTSC defines 7.5 IRE as black (called the “pedestal”
in post, and “setup” on the set). The waveform monitor displays the pedestal
and peak white levels, and indicates when clipping occurs.
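The flat-line test a waveform monitor performs can also be approximated numerically. The sketch below assumes full-range 8-bit luma code values mapped linearly to 0-100 IRE; broadcast "legal" coding, with black at code 16 and white at 235, would map differently.

import numpy as np

def clipping_report(luma_8bit):
    """Return the fraction of pixels pinned at either end of the tonal scale.
    luma_8bit: 2-D uint8 array, one channel of a video frame."""
    ire = luma_8bit.astype(float) / 255.0 * 100.0  # code values -> 0-100 IRE
    crushed = float(np.mean(ire <= 0.0))    # flattened at black: no shadow detail
    clipped = float(np.mean(ire >= 100.0))  # flattened at white: no highlight detail
    return crushed, clipped

A large value at either end means the waveform has flattened out there and no detail will be recorded in those areas.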
Another tool is the Histogram, which graphically displays the distribu-
tion of light values in a scene, from black to white. Basically a bar chart, a
histogram indicates the proportion of image area (y axis) occupied by each
level of brightness from 0 IRE to 100 IRE (x axis). With a clear indicator of
clipping at either end of the scale (often a red line), it is useful for determin-
ing whether the scene fits within the camera’s dynamic range.
Some cameras record light levels up to 110 IRE. These brighter values can
be used throughout the post-production process, then brought back down
to a “legal” level to avoid being hard-clipped at 100 IRE when broadcast on
television.
A handy exposure tool for digital production allows the user to quickly
bring light levels in a scene within the safe range to avoid clipping. Devel-
oped by David Stump, ASC, the device consists of a hollow sphere (diameter
about 1.5 feet) painted glossy white, with a hole in one side (diameter about
2 inches) that reveals a dark interior painted matte black. To use on the set,
place the device in the brightest area of the scene, usually the position of the
main subject. Adjust the lighting and camera exposure so that the specular
(shiny) white highlight on the white ball registers under 100 or 110 IRE, and
the black hole registers 0 IRE.
Some digital cameras have software tools that help protect the shadow
and highlight detail or, if clipping is unavoidable, soften or round off the
edge of the clipping. In the toe or shadow region, the technique involves
the manipulation of the black pedestal (the level at which the image turns
black) and black stretch. Black stretch flattens the curve of the Toe, lowering
the contrast and putting more levels (and subtlety) into the lower part of
the tone scale. The resulting effect of black stretch is to reveal more shadow
detail. Going the opposite direction, “crushing the blacks” raises the slope of
the Toe curve and compresses the tonal values in the shadows. Raising the
black Pedestal turns more of the shadow area to pure black. In the highlight
region, the “soft clip” or “knee” function compresses the values near 100
IRE, rounding off the exposure curve, allowing more highlight detail to be
recorded, and giving a more pleasing shape to the edges of a clip in the very
bright areas in the frame.
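Both corrections are just reshaped transfer curves, as the sketch below illustrates. The knee point, knee slope and stretch strength are made-up numbers for demonstration, not any camera's actual menu values.

import numpy as np

def soft_knee(ire, knee_point=85.0, slope=0.3):
    """Above the knee point, compress highlights instead of hard-clipping them."""
    compressed = knee_point + (ire - knee_point) * slope
    return np.where(ire <= knee_point, ire, np.minimum(compressed, 100.0))

def black_stretch(ire, strength=1.3):
    """Lift the toe with a power curve below 1.0, revealing more shadow detail.
    (A real camera confines the stretch to the shadow region of the curve.)"""
    normalized = np.clip(ire, 0.0, 100.0) / 100.0
    return 100.0 * normalized ** (1.0 / strength)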
7) Video Scanning—Interlace or Progressive
The video image is composed of a certain number of lines of pixels. A light
or energy beam scans the lines to “write out” each frame. Since the begin-
ning of television broadcasting, interlaced scanning was the standard. In this
process, two vertical sweeps are made for each frame—one scans all the odd
lines (1, 3, 5, etc.), the next scans the even (2, 4, 6, etc.). Each sweep produces
a “field.” The NTSC 30 fps television standard records 60 interlaced fields per
second.
Progressive scanning is simpler—all lines are scanned sequentially in one
vertical sweep. There are no fields, only complete frames, just as in motion-
picture film photography. Computer screens use progressive scanning.
When broadcast standards were first being established, the bandwidth of
the equipment was relatively small. That is, the throughput of the pipeline
used to transmit image data, measured by the amount of data it could trans-
mit over a given period of time, was very limited. The existing equipment
was not capable of sending an entire (progressive) frame 30 times per sec-
ond. It could, however, send 60 half-frames (the fields of interlaced frames)
per second, since this requires transmitting only half the frame data at any
given moment.
One benefit of interlaced scanning is that the higher frame rate (60 fields,
instead of 30 frames) makes the portrayal of movement clearer and more ac-
curate. At 24 progressive frames per second, or even 30, the shutter speed is
slow (1⁄48 or 1⁄60 second), and generates motion blur with any action. Camera
movement can appear juddery over certain backgrounds. Interlaced fields,
exposed at twice the shutter speed and frame rate, reduce these problems
and give a sharper, more precise portrayal of movement. Subjectively, an
interlaced image appears to have greater detail—an immediacy associated
with television viewing.
Image flicker is also reduced with interlaced scanning. CRT monitors have
a short persistence—the scanned image fades quickly, leaving a black screen.
At 24 or 30 fps, there would be more black-screen time, causing noticeable
flicker on the monitor. Interlaced scanning at 60 fields per second, refreshing
the image twice as fast, greatly reduces flicker. Progressive computer moni-
tors avoid the flicker problem by using a much higher frame rate.
The downside of interlaced scanning is a loss of resolution and steadi-
ness when shooting movement. With subject or camera movement, the
image changes from one field to the next. But those two consecutive fields
perceptually merge to create each complete frame, even though the mov-
ing subject has changed position. Therefore, any movement will appear
blurred and detail will be substantially reduced in areas of the frame that
contain movement. This factor also creates difficulty for any image pro-
cessing that involves spatial manipulation of the image, such as resizing or
reframing.
A film release of interlaced material requires the creation of progressive
frames from pairs of fields, so as to convert the interlaced video fields back to
progressive film frames. There are various methods of doing this video-field-
to-film-frame conversion. Some involve the interpolation of pixel values
between the different fields. This merging of field pairs into frames reduces
resolution wherever there is movement. For this reason, the progressive
format is preferable for recording back to film.
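A deliberately naive version of that conversion, rebuilding the discarded lines of one field by vertical interpolation, is sketched below; production converters instead use motion-adaptive and motion-compensated methods, for the resolution reasons just described.

import numpy as np

def field_to_frame(frame, keep_even=True):
    """Build a progressive frame from one field of an interlaced frame.
    Lines are 0-indexed: even lines form one field, odd lines the other."""
    out = frame.astype(float).copy()
    first_dropped = 1 if keep_even else 0
    for row in range(first_dropped, out.shape[0], 2):
        above = out[row - 1] if row > 0 else out[row + 1]
        below = out[row + 1] if row + 1 < out.shape[0] else out[row - 1]
        out[row] = (above + below) / 2.0  # interpolate the missing line
    return out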
Some cameras offer the option of recording in either a progressive or
interlaced format. A key factor in deciding which format to use should be a
consideration of the primary distribution goal of the project—be it theatrical
screen, broadcast or DVD/Blu-ray.
8) Compression
One of the biggest challenges to creating high quality digital images is
the handling of the large amounts of data they require. Compression is the
means used to reduce the data content of an image source. The objective of
compression is to reduce image file size with the least possible sacrifice in
quality and detail.
A common method used by compression schemes or codecs is to analyze
an image, identify redundant pixels in the picture, and remove them. For
instance, a stationary solid blue sky is “redundant” and can be easily com-
pressed. Conversely, subjects in motion change position every frame, are not
redundant and are difficult to compress.
There are two categories of compression codecs—intraframe and inter-
frame. The intraframe codec processes each frame individually, only remov-
ing redundant information from within that particular frame. The interframe
codec uses a series of frames, or group of pictures (GOP), to compress the
image data. Interframe compression compares consecutive frames within
each GOP to remove redundancy from frame to frame and arrive at “dif-
ference” information. Although it’s more efficient and can achieve a higher
compression ratio, interframe compression can create challenges in editing
and postprocessing due to the multiframe dependency.
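In spirit (and only in spirit), interframe coding stores a reference frame plus frame-to-frame differences. The toy sketch below omits the motion compensation, quantization and entropy coding that real GOP codecs depend on.

import numpy as np

def gop_encode(frames):
    """frames: list of 2-D arrays. Returns the first frame plus per-frame residuals."""
    reference = frames[0]
    # Static regions difference to zero, which is why they compress so well.
    residuals = [cur - prev for prev, cur in zip(frames, frames[1:])]
    return reference, residuals

def gop_decode(reference, residuals):
    frames = [reference]
    for r in residuals:
        frames.append(frames[-1] + r)  # each frame depends on the one before it
    return frames

The decode loop makes the editing difficulty visible: reconstructing any one frame means walking the chain from the start of its GOP.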
Compression is quantified as a ratio, comparing the amount of data in the
original noncompressed signal to that in the compressed version. The lower
the compression ratio, the higher the quality of the resulting image. A format
that uses a 2:1 compression ratio preserves more image information than
one that uses a 4:1 compression ratio. Some cameras and recording systems
offer several compression ratios, providing a choice between higher quality
with larger image files and lower quality with smaller files.
Some compression codecs allow the original image to be fully reconstitut-
ed when decompressed. Such a codec uses “lossless” compression. However,
a danger exists with certain codecs that claim to be “visually lossless.” The
claim may hold true for direct display of an original image that needs no ma-
nipulation in postproduction, but if it later requires significant color grading
or visual effects, disturbing artifacts may appear that significantly reduce the
quality of the image. Other codecs discard image information that cannot be
subsequently retrieved. This is considered “lossy” compression.
9) Color Space
A color space is a framework that defines a range of colors within a color
model. Most color-space systems use a three-dimensional model to describe
the relationship among the colors. Each color space has a set of specific pri-
mary colors—usually three (a particular red, green and blue)—which shapes
its color model.
Color spaces have been developed to define color for many particular
purposes, including film, video and digital-cinema display. The standard
HD video color space (ITU-R BT.709 or Rec. 709) uses YCrCb space, which
separates luminance (Y) and chrominance (Cr and Cb—red and blue color
values). This separation allows for choices in color sub-sampling—for
instance, the 4:2:2 sampling ratio measures color half as often as it does
luminance, thereby saving memory space and time. (See section 12.) Film
is represented by the RGB color space, in which the red, green and blue
channels are sampled equally. Both the YCrCb and RGB color spaces are
device-dependent.
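The Rec. 709 separation can be sketched directly from its published luma coefficients. The full-range float values and the simple drop-every-other-column subsampling below are simplifying assumptions; broadcast systems add offsets and headroom.

import numpy as np

KR, KG, KB = 0.2126, 0.7152, 0.0722  # Rec. 709 luma coefficients

def rgb_to_ycbcr_422(r, g, b):
    """r, g, b: 2-D float arrays in [0, 1].
    Returns full-resolution Y plus horizontally subsampled Cb and Cr."""
    y = KR * r + KG * g + KB * b        # luminance
    cb = (b - y) / (2.0 * (1.0 - KB))   # blue-difference chroma, spans [-0.5, 0.5]
    cr = (r - y) / (2.0 * (1.0 - KR))   # red-difference chroma, spans [-0.5, 0.5]
    # 4:2:2 sampling: keep every other chroma column; luma stays full resolution.
    return y, cb[:, ::2], cr[:, ::2]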
The color space designated as the standard for Digital Cinema distribu-
tion is X'Y'Z', a gamma-encoded version of the CIE XYZ space on which
the chromaticity diagram is based. A device-independent space, X'Y'Z' has
a larger color gamut than other color spaces, going beyond the limits of
human perception. All physically realizable gamuts fit into this space.

[Figure 3. Chromaticity diagram, showing the black-body curve from 1,000ºK to 10,000ºK.]
10) ACES
The digital workflow of a production today involves the passing of an
image, with a particular Look attached, from facility to facility and system to
system, requiring conversions and transfers to different formats and media.
The risk of failing to achieve the intended result is significant—image quality
(including resolution, exposure and color data) can be lost along the way.
Look management and color management, discussed below, help reduce
these risks, but a larger framework for the entire workflow has been missing.
The Academy of Motion Picture Arts and Sciences’ Science and Technol-
ogy Council developed ACES—the Academy Color Encoding System. ACES
provides a framework for converting image data into a universal format. The
ACES format can be manipulated and output to any medium or device. The
system specifies the correct path from any camera or medium to ACES by
using a dedicated IDT (input device transform). Camera manufacturers such
as Sony and Arri have already generated an IDT for their camera systems.
Display device manufacturers of projectors and monitors provide an ODT
(output device transform) for their devices. Different ODTs are also needed
for each distribution medium, including digital cinema projection, Blu-ray,
and HD broadcast. A final color grading “trim pass” is still recommended
for each master so as to ensure fidelity to the intended look.
The most striking gain for the cinematographer is the ability for ACES to
encode the full range of image information captured by any camera. Using a
16-bit floating-point OpenEXR format, ACES has the potential of encoding
the full gamut of human vision, beyond what any camera can record today.
It accommodates a dynamic range of 25 stops, far past the capability of any
device. Highlights that were clipped and shadows that have been crushed
using other formats and workflows now can re-emerge with surprising clar-
ity. ACES empowers the cinematographer to use the full capability and range
of the tools at his or her disposal.
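That 25-stop claim can be sanity-checked against the encoding itself, counting stops as doublings between the smallest normal and largest finite half-float values (a simplification that ignores denormals and quantization steps):

import numpy as np

half = np.finfo(np.float16)
stops = np.log2(float(half.max) / float(half.tiny))
print(f"16-bit float normal range spans ~{stops:.0f} stops")  # ~30 stops

So the OpenEXR half-float container comfortably exceeds the 25 stops the ACES specification calls for.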
Look management guides the dailies timer or colorist as they embed the Look in the dailies they create. Visual-effects
artists depend on look management to enable them to process shots that
fit seamlessly back into the edited project. Additionally, look management
communicates the cinematographer’s intended look to the DI colorist for
the final grading sessions.
Metadata is descriptive information that travels with an image, recording how it was created and how it is intended to be displayed. This can include frame number, resolution, frame rate, motion-
control data, as well as any look-management data, such as the ASC CDL
values. As the production image is passed through the numerous postpro-
duction steps in the digital workflow, it is important to preserve all metadata
associated with an image and pass it on intact to the next stage.
This discussion of the look and look management describes a “nonde-
structive” process of creating a look and recording its parameters in meta-
data to a format such as the ASC CDL. The original photographed image is
not altered until the final DI session, thereby preserving maximum image
quality (resolution and color data) and full potential for post manipulation.
Another working style instead applies the Look modifications to the image
on the spot during production, either using a LUT, a camera menu scene
file, or an analog device, such as a glass filter. This style “bakes” the Look
into the image, in effect marrying the look to the image. This workflow is
often used by productions seeking to avoid extensive postproduction work.
The disadvantage of baking-in a look, however, is significant. Depending
on the modifications used to create the look, image quality may be reduced,
whether in dynamic range, color record or resolution. The potential for later
image manipulation in the DI may have been compromised, making it diffi-
cult or impossible to undo or alter the look already imposed upon the image.
15) Mastering
Once the final grade is completed in the DI session, a digital source master
(DSM) is rendered, creating new altered digital image files that encompass
all the corrections and enhancements made during the DI. This master is the
source for the versions or sub-masters that will be made for distribution in
each format. Sub-masters are made for distribution in 35mm, digital cinema,
HD and SD broadcast, Blu-ray, DVD, and for online streaming and down-
loading. Creation of the sub-master in each medium requires a format con-
version—and may then require supplemental color grading (a “trim pass”),
so as to come as close as possible to matching the source master created in
the DI session. Each medium, however, has different characteristics, includ-
ing resolution, dynamic range and color gamut, and cannot exactly match
the original DSM. To ensure that the look is preserved in each distribution
format, the cinematographer should stay in the loop as the sub-master for
each medium is created and, if possible, attend the grading sessions or trim-
passes for each. The objective is to preserve the look—the original creative
intent of the filmmakers.
Film
Creating a 35mm film master involves conversion of the source master to
a digital format shaped to take best advantage of the capability of film. Tra-
ditionally, the Cineon 10-bit log format has been used in film recorders, but
by using ACES, a higher quality output format could be used. The converted
digital files are uploaded into a film recorder—either laser or CRT-based—
which records the digital files onto 35mm film, exposing a new film master.
Digital Cinema
For the digital-cinema release, the DCI (Digital Cinema Initiatives—a
joint effort of the major studios) and the SMPTE have established a set of
universal format specifications for distribution—the DCDM and DCP. The
specifications define the elements (“assets”) used for digital-cinema display,
and a structure by which the assets should be used for successful exhibition.
The assets include the picture (Digital Source Master), soundtrack, subtitles,
metadata and security keys.
The DCDM (digital cinema distribution master) incorporates all specified
assets in an uncompressed format. The DCDM encodes the image in the
device-independent CIE X'Y'Z' color space as TIFF files, with 12 bits per
color channel. This color space accommodates the entire human visual color
spectrum and gamut. It has two resolution formats: 2K (2048x1080) and 4K
(4096x2160). Source images of other sizes and aspect ratios can be stored
within the 2K or 4K image containers. The DCDM also accepts 3-D formats.
The DCP (digital cinema package) is a compressed, encrypted version of
the DCDM. Using JPEG 2000 compression, the DCP is used for the efficient
and safe transport of the motion-picture content. Upon arrival at the theater
(or other exhibition venue), the DCP is unpackaged, decrypted and decom-
pressed for projection.
16) Archiving
Finally, all essential digital data should be properly archived for future
generations. Secure storage should be arranged for the digital source
master (or ACES master data), which should be transferred periodically to fresh
media to avoid any degradation or corruption of the data. Digital data is vul-
nerable on all currently available media, whether it be magnetic tape, disk,
hard drive or a solid-state medium. The frequency of transfer is dictated
by the conservatively estimated life span of the medium being used. The
only proven, long-term archive solution is film—black-and-white C/M/Y
separations. Properly stored, black-and-white film negative can last at least
100 years.
Conclusion
With a working understanding of digital technology, the cinematographer
can confidently choose the best methods and devices suited to any project in
production. As new challenges arise and new technology becomes available,
he or she can better know what issues to consider in making the technical
and creative decisions that shape a career.
A Primer for Evaluating
Digital Motion Picture Cameras
by Rob Hummel

Suffice it to say that any aspect ratio is achievable with the current digital
motion picture cameras available. Their capture format is generally that
of HDTV (1920x1080) or what is commonly called 2K.1 The aspect ratio of
all but one or two of these cameras is the HDTV aspect ratio of 1.78:1 (16 x 9
in video parlance). This 1.78:1 aspect ratio is a result of the different cam-
era manufacturers leveraging what they have built for HDTV broadcasting
cameras. It is rare to find a digital camera that doesn’t owe part of its de-
sign to television cameras.
When composing for 1.85, digital cameras generally use almost the en-
tire 1.78 imaging area. When composing for the 2.40:1 aspect ratio, most
digital cameras will capture a letterboxed 2.40 slice out of the center of the
imaging sensor, which, in 1920 x 1080 cameras, results in the pixel height
of the image being limited to only 800 lines (Star Wars Episode 3, Sin City,
Superman Returns).
There is one digital camera that employs Anamorphic lenses to squeeze
the 2.40 aspect ratio to fit within its sensor’s 1.97 aspect ratio, utilizing the
entire imaging area.
Yet another camera does creative things with the clocking of CCD pixels
so that the entire imaging area is still utilized when shooting a 2.40 image,
with a subtle compromise in overall resolution.
In the future, it is likely that more and more cameras will have imaging
sensors with 4K pixel resolution.
1. When film resolutions are discussed, and the terms 2K or 4K are used, these refer
to the number of lines that can be resolved by the film. In the case of 2K, that would
mean 2048 lines or 1024 line pairs as photographed from a resolution chart. In the
case of 4K, that would mean 4096 lines or 2048 line pairs as photo-
graphed from a resolution chart. In digital imagery the term is applied a bit more
loosely. While 2K and 4K still mean 2048 and 4096, respectively, with digital scan-
ning and photography they refer to the number of photosites on the scanner or camera
chip. The number of photosites does not necessarily translate into actual image resolution.
[Figure 1. In a 1920 x 1080 digital camera, only the center 800 lines are used for a “Scope” or 2.40:1 aspect ratio.]

Resolution vs. Megapixels
When it comes to determining which digital camera to choose, don’t allow
claims of “megapixels” to influence your understanding of what a camera is
capable of. “Megapixels” is a term used loosely to indicate the resolution of a
digital camera, and it doesn’t follow any set guidelines. There are many fac-
tors that would have to be taken into account if you were going to evaluate the
merits of a digital imaging device based on specifications alone.
The most straightforward way to understand a digital motion picture
camera’s capabilities is to shoot your own rigorous tests and evaluate them.
In the same manner as when a new film stock is introduced, the best
way to truly understand that emulsion’s capabilities is to test it, rather than
rely on the claims of the manufacturer.
Also, it will be less confusing if you focus your evaluations on the final
image delivered by a given digital camera. Claims about a camera’s imag-
ing sensor can be influenced by marketing. If we concentrate on the final
processed image that is delivered for projection, color correction, and final
presentation, we will be evaluating a camera’s true caliber.
In film scanning, a 4K scan of the full 35mm aperture means the image
captured is 4096 x 3112. With both 2K and 4K scanners, each
individual pixel contains a single unique sample of Red, Green and Blue.
In the digital camera world, 2K often refers to an image that is 1920 pixels
horizontally and 1080 pixels vertically. Again, each individual pixel contains
a single unique sample of Red, Green and Blue. This sampling of a unique
Red, Green and Blue value for each pixel in the image is what is called a
“true RGB” image, or in video parlance, a 4:4:4 image. While these cameras
have an image frame size that corresponds to HDTV standard, they provide
a 4:4:4 image from the sensor, which is not HDTV standard. That is a
good thing, as 4:4:4 will yield a superior picture.
Fill Factor
There is one area of a digital camera’s specifications that is most helpful
in determining its sensitivity and dynamic range. This is the statistic that
conveys how much area of an imaging sensor is actually sensitive to, and
captures light vs. how much of a sensor is blind, relegated to the circuitry
for transferring image information. This is called the “fill factor.” It is also a
statistic that is not readily published by all camera manufacturers.
This is an area where not all digital cameras are created equal. The amount
of a digital imaging sensor’s area that is actually sensitive to light (the “fill
factor”) has a direct correlation to image resolution and exposure latitude. With the
currently available professional Digital Motion Picture Cameras, you will
find a range of high profile cameras where less than 35% of the sensor’s total
area is sensitive to light, to cameras where more than 85% of the sensor is
light sensitive.
As film cinematographers, we are used to working with a medium where
it was presumed that the entire area of a 35mm film frame is sensitive to
light; in digital parlance, that would be a fill factor of 100%. When a digital
camera has a fill factor of 40%, that means it is throwing away 60% of the
image information that is focused on the chip. Your instincts are correct if
you think throwing away 60% of image information is a bad idea. With this
statistic, you can quickly compare camera capabilities, or at least understand
their potential.
The higher the fill factor of a given sensor (closer to 100%), the lower the
noise floor will be (the digital equivalent of film grain) and the better the
dynamic range will be.
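To put those percentages in exposure terms, a rough geometric comparison of the two extremes cited above follows. It deliberately ignores microlenses and readout differences, which is partly why low-fill photodiode sensors can nevertheless rate a higher exposure index in practice.

import math

low_fill, high_fill = 0.35, 0.85
advantage_stops = math.log2(high_fill / low_fill)
print(f"~{advantage_stops:.1f} stops more light gathered")  # about 1.3 stops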
A low-fill-factor sensor is akin to shooting through a fine steel mesh. You
don’t actually see the individual steel gridlines of the mesh, but it tends to
have an effect on image clarity under most conditions.
In terms of sensor types with progressively more area sensitive to light,
there are basically two categories: photodiodes (less area) and photogates
(more area). Depending on the pixel size, cameras utilizing a single-chip
photodiode interline transfer array (either CCD or CMOS) would be on the
low end, with roughly 35% to 40% of the sensor’s total area sensitive to light, up to a
theoretical maximum of 50% for multichip (RGB) photodiode sensors. Next
would be single-chip photogate based sensors that can, again, depending on
pixel size, have anywhere from 70% to over 85% of their area sensitive to light.
In light sensitivity, as in exposure index, photodiode sensors will have a
higher sensitivity than photogate sensors, albeit with an associated trade-off
in latitude and resolution.
In addition, solid state sensors tend to have various image processing cir-
cuits to make up for things such as lack of blue sensitivity, etc., so it is impor-
tant to examine individual color channels under various lighting conditions,
as well as the final RGB image. Shortcomings in automatic gain controls, etc.
may not appear until digital postproduction processes (VFX, DI, etc.) begin
to operate on individual color channels.
More on Megapixels
Much confusion could be avoided if we would define the term “pixel”
to mean the smallest unit area which yields a full color image value (e.g.,
RGB, YUV, etc., or a full grayscale value in the case of black and white).
That is, a “pixel” is the smallest stand-alone unit of picture area that does not
require any information from another imaging unit. A “photosite” or “well”
is defined to be the smallest area that receives light and creates a measure of
light at that point. All current Digital Motion Picture cameras require infor-
mation from multiple “photosites” to create one RGB image value. In some
cases, three photosites yield one RGB value; in others, six photosites create
one RGB image value; and Bayer-pattern devices combine numerous
photosites to create one RGB value.
[Figure 2a. A Bayer-pattern photogate sensor, where each photosite is transformed to yield one full-color pixel value. Figure 2b. A photodiode “macrocell” design, where it takes six photosites to yield one full-color pixel value.]
These definitions inevitably lead us to define the term “resolution” as the
number of pixels that yield a single full color or grayscale image value (RGB,
YUV, etc.).
The images above are just two of many examples of photo sensors used
by digital cameras. One illustrates a Bayer pattern array of photosites using
a photogate sensor, and the other an interline transfer array of photosites
employing a photodiode lenslet design.
Sound Complicated?
Perhaps, but all you need to understand is that product claims can, and
will, be misleading. We’ve lost our way a bit when thinking that counting
pixels alone is a way of quantifying a digital camera’s capability. A return to
photographing resolution charts and actually examining what these cam-
eras are capable of will serve you much better in understanding how a given
camera will help you tell your story.
In short, do not be seduced by technical specification mumbo jumbo.
Look at images photographed by the camera in question, and evaluate them
from a proper viewing distance, measured in screen heights.
Digital Cinematography
on a Budget
by M. David Mullen, ASC
Before the year 2000, video technology had only been used sporadically
for theatrical releases, mainly documentaries. For narrative fiction,
milestones include Rob Nilsson’s independent feature Signal 7 (1985), shot
on ¾" video, and Julia and Julia (1987), shot on 1125-line analog high-def-
inition video. However, using video technology for independent features—
primarily as a lower-cost alternative to film—didn’t catch on until a number
of elements fell into place: the introduction of digital video camcorders,
desktop computer nonlinear editing systems, and the increase in compa-
nies offering video-to-film transfer work, all of which began to appear by
the mid 1990s.
The major turning point came with the worldwide box office success of
two features shot on consumer camcorders, the Dogme 95 movie Festen (The
Celebration) (1998) and The Blair Witch Project (1999), proving that cinema
audiences were willing to watch movies shot in video if the subject matter
was compelling enough and the visual quality seemed to suit the content.
However, with 35mm film being the gold standard for narrative feature
production, many independent filmmakers have pushed manufacturers to
develop affordable technology that would bring an electronic image closer
to the look of 35mm photography.
24 fps progressive-scan (24P) digital video appeared in 2000 with the
introduction of the Sony HDW-F900 HDCAM pro camcorder; then in
late 2002, Panasonic released the AG-DVX100, a Mini-DV “prosumer”
camcorder with a 24P mode that cost less than $5000. Not long after that,
lower-cost high-definition video cameras appeared, starting with the JVC
GR-HD1 HDV camcorder in late 2003. The next significant trend was the
movement away from traditional tape-based recording, allowing greater
options in frame rates and recording formats within the same camera. The
first prosumer camera with this design approach was the Panasonic AG-
HVX200, released in late 2005; it came with a standard Mini-DV VTR but
could also record footage to P2 flash memory cards.
By 2009, there were HD video cameras being sold on the market for less
than $1000, and now even phones and still cameras are capable of shooting
HD video. Today many movie theaters are being converted to digital projec-
tion; this trend—combined with emerging online distribution schemes—
has diminished the need for a digital feature to be transferred to 35mm film (though there remain good archival reasons for doing so).
The term “prosumer” vaguely covers a range of lower-cost video products
with a mix of consumer and professional features, often in a small-sized
camera body. Prosumer cameras, by definition, are not only used by con-
sumers but by professionals as well, either for cost reasons, or because their
portability and low profile are better suited to a particular type of produc-
tion. In fact, some of the cameras discussed in this article are actually made
and sold by the professional division of the manufacturer and thus are not
technically prosumer products.
Therefore, rather than separate cameras into debatable categories of "professional/prosumer/consumer," I will make the cut-off point for discussion any camera sold for under $15,000 that is capable of professional-quality video, preferably with a 24P or 25P option.
In this article, “SD” refers to standard definition video and “HD” refers to
high definition video. “24P” refers to 24 fps progressive-scan video. “DSLR”
refers to single-lens reflex still cameras with a digital sensor. “NLE” refers to
nonlinear editing.
Figure 3. 1.85:1 format positioned near "common top" inside a 1.78:1 frame.
that took PL-mount 35mm cine lenses; Vantage Film
makes a 1.33X version of their Hawk anamorphic lenses that would have
the correct squeeze ratio for a 16x9 sensor camera. And if you were using a
B4-mount 2⁄3" video camera, Canon makes a rear-mounted lens attachment
that adds a 1.33X squeeze.
Some cameras, particularly the large single-sensor ones, have cine lens mounts, so 35mm anamorphic lenses can be used directly. Keep in mind
that the majority of anamorphic lenses made for cinema use have a 2X squeeze
and were designed to fit a 2.40:1 image onto a 1.20:1 area of 4-perf 35mm film.
Since most HD cameras have a 16 x 9 (1.78:1) sensor, once unsqueezed, the 2X
anamorphic lens would create a 3.56:1 image and thus require as much crop-
ping of the sides to make it 2.40:1 as would be involved in cropping a normal
16 x 9 HD image top and bottom to create the same aspect ratio.
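The aspect-ratio arithmetic behind these claims is easy to check with a short script. This is a generic sketch, not tied to any particular camera; the function names and sample values are illustrative:

# Aspect-ratio arithmetic for anamorphic capture on a 16x9 sensor.
def unsqueezed_ratio(sensor_ratio, squeeze):
    # Aspect ratio after unsqueezing an anamorphic image.
    return sensor_ratio * squeeze

def crop_loss(source_ratio, target_ratio):
    # Fraction of image area lost when cropping one ratio to another.
    wide, narrow = max(source_ratio, target_ratio), min(source_ratio, target_ratio)
    return 1 - narrow / wide

sensor = 16 / 9                                  # 1.78:1 HD sensor
print(round(unsqueezed_ratio(sensor, 2.00), 2))  # 3.56 -> the 3.56:1 cited above
print(round(unsqueezed_ratio(sensor, 1.33), 2))  # 2.36 -> why 1.33x lenses suit 16x9
print(round(crop_loss(3.56, 2.40), 2))           # ~0.33 of area lost cropping sides
print(round(crop_loss(sensor, 2.40), 2))         # ~0.26 lost cropping top and bottom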
After testing some of these options, you may end up preferring to just crop
a normal spherical image to 2.40:1. The simplest solutions are often the best,
especially in independent feature production.
problems of NTSC and PAL are being carried over into HDTV, at least as far as filmmakers attempting to use HD technology to create projects for transfer to 24 fps film are concerned. For example, if using an interlace-scan prosumer HD
camera, it may be better to choose a 50i model over a 60i version since it is
simpler to convert 50 fields into 25 frames than it is to convert 60 fields into
24 frames. 50i footage de-interlaced to 25P can be transferred 1:1 to frames
of 35mm film for projection at 25 or 24 fps. However, with a 24P or 25P
option becoming more prevalent in HD cameras, there is little reason to deal
with conversions from interlace-scan photography anymore.
Since 60i is still used by broadcast video and many display devices, con-
sumer cameras often convert 24P capture into a 60i recording using a “pull-
down” scheme. This is particularly common for cameras that use an internal
VTR for recording. Since some users want to simply edit and display the 24P material in 60i, while others wish to edit the original progressive-scan frames, many 24P cameras offer two 60i recording modes: one using the standard 3:2 (2:3) pulldown, and the other using an "advanced" pulldown. The latter is actually a more visible 2:3:3:2 cadence, designed to be easier to remove at the editing stage.

Standard 2:3 pulldown:
  24P original:   A      B        C      D
  60i fields:     A A    B B B    C C    D D D
  24P recovered:  A      B        C      D

Advanced 2:3:3:2 pulldown:
  24P original:   A      B        C      D
  60i fields:     A A    B B B    C C C  D D
  24P recovered:  A      B        C      D
In terms of motion artifacts during playback, the normal 3:2 pulldown
cadence is designed to be as smooth as possible when viewing 24 fps ma-
terial on 60i display devices; the scheme is more effectively “buried” but
therefore also harder to extract in post. The advanced pulldown cadence is
not as smooth but allows for a cleaner extraction of the original 24P frames.
As you can see in the charts above, the original "C" frame is split between two different video frames when using normal pulldown; this means that in order to recover this frame, those video frames have to be decompressed and the recovered 24P frame recompressed back into the original codec. The "C" frame may therefore suffer some degradation compared to the A, B, and D frames. As you can see in the chart for the
advanced pulldown scheme, every third video frame is simply discarded in
order to recover 24P in editing, and this discarded frame was the only one
where each field came from a different original frame.
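A short script makes the two cadences and their recoverability concrete. This is a toy model of the field mapping, not any manufacturer's implementation:

# Expand 24P frames into 60i fields under each pulldown cadence, then pair
# fields into interlaced video frames to see which frames mix two sources.
def pulldown(frames, cadence):
    fields = [f for frame, n in zip(frames, cadence) for f in [frame] * n]
    return [tuple(fields[i:i + 2]) for i in range(0, len(fields), 2)]

src = ["A", "B", "C", "D"]
standard = pulldown(src, (2, 3, 2, 3))   # [AA, BB, BC, CD, DD]
advanced = pulldown(src, (2, 3, 3, 2))   # [AA, BB, BC, CC, DD]

for name, frames in (("2:3", standard), ("2:3:3:2", advanced)):
    mixed = [f for f in frames if f[0] != f[1]]
    print(name, frames, "mixed-field frames:", mixed)
# 2:3 splits C across two video frames (BC and CD), so C must be rebuilt;
# 2:3:3:2 confines the mix to one frame (BC), which is simply thrown away.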
Due to its name, a number of people mistakenly believe that the advanced
pulldown is “better”—either more film-like or better-looking. It’s not. Its
only purpose, in fact, is to be removed in post. If your edited 24P project
needs to be converted back to 60i for broadcast applications, you’d then add
a standard pulldown scheme to a separate 60i master.
While some interlaced-scan cameras offer a simulated 24 fps effect, this
is really only intended to create film-like motion for 60i projects, not for
transfer to 24 fps film. However, there are many consumer HD cameras now
with progressive-scan sensors capable of true 24P capture, even if some of
them record 24P to 60i with a pulldown. A number of these cameras are
multi-format, with different frame rate options, particularly the ones using
solid-state recording instead of videotape.
It is recommended that any pulldown be removed so that one can edit with the original progressive frames. This allows greater flexibility and quality in post when finishing the project for multiple deliverable formats. A progressive-scan master is optimal for a transfer to film as well as for distribution on DVD, Blu-ray, and the Internet. Otherwise, once you start editing a 24P-to-60i recording, you will be breaking up the pulldown cadence, making it harder to remove later.
Since many of these cameras record 24P in different ways, it is prudent to find out which current editing software handles the particular camera and recording method you will be employing. Updates to current programs are constantly being released to accommodate these variations of 24P.
RECORDING FORMATS
This list is not all-inclusive because some formats and recorders fall out-
side the budget range that this article covers.
DV
This term describes both a recording codec (compression/decompression
algorithm) and a tape format. Most lower-end SD cameras use the DV25
codec, a 5:1 DCT intraframe compression at a fixed bitrate of 25 Mbit/sec.
Chroma subsampling is 4:1:1 (NTSC) or 4:2:0 (PAL). The recorded video
signal is 60i (NTSC) or 50i (PAL). Mini-DV and DVCAM cassettes are com-
monly used; the picture quality is the same for both tape formats. DVCAM
uses a slightly wider track pitch and faster speed to reduce drop-outs and
increase physical robustness. DVCPRO uses 4:1:1 worldwide but otherwise
is still 25 Mbit/sec. DVCPRO50 is 50 Mbit/sec with 4:2:2 subsampling and
roughly 3.3:1 compression.
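Fixed bitrates like these translate directly into storage budgeting. A rough sketch, ignoring audio and container overhead (figures are illustrative, using decimal gigabytes):

# Minutes of footage per gigabyte at a fixed video bitrate.
def minutes_per_gb(mbit_per_sec):
    gigabyte_bits = 8 * 1000 ** 3              # decimal GB, as media is labeled
    return gigabyte_bits / (mbit_per_sec * 1e6) / 60

for name, rate in [("DV25", 25), ("DVCPRO50", 50), ("DVCPRO HD", 100)]:
    print(f"{name}: ~{minutes_per_gb(rate):.1f} min/GB")
# DV25: ~5.3 min/GB; DVCPRO50: ~2.7 min/GB; DVCPRO HD: ~1.3 min/GB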
HDV
This is an MPEG-2 recording format that uses both intraframe and inter-
frame compression to reduce HD down to 25 Mbit/sec for 1080i, 19 Mbit/
sec for 720P. Having the same bitrate, more or less, as DV, it can therefore use
Mini-DV tapes for recording and the IEEE 1394 (FireWire) interface. The
aspect ratio is 16x9; pixel dimensions are 1280 x 720 (720P) or 1440 x 1080
(1080i). The 720P format supports 60P/30P or 50P/25P recording with a 24P
option; the 1080i format supports 60i or 50i recording with a 30P, 25P, and
24P option. Chroma subsampling is 4:2:0.
“Pro-HD” is JVC’s name for their professional line of MPEG-2/HDV
cameras.
DVCPRO HD
This is an HD codec and tape format developed by Panasonic. It uses DCT
intraframe compression; the bitrate is 100 Mbit/sec when recording to tape.
Chroma subsampling is 4:2:2. Aspect ratio is 16 x 9; the pixel dimensions
of the recorded image are 960 x 720 (720P), 1280 x 1080 (1080/60i), or
1440 x 1080 (1080/50i).
AVCHD
This stands for Advanced Video Coding High Definition. This is an 8-bit
4:2:0 HD format that uses an MPEG-4 codec known as AVC/H.264. In pro-
sumer HD cameras, it is recorded to nontape media such as memory cards,
hard drives, and 8cm DVDs. It supports the same frame rates as HDV and
DV. The bitrate is between 12 and 24 Mbit/sec. It uses long-GOP (group of
pictures) interframe compression. The Panasonic name for their AVCHD
pro camera line is "AVCCAM"; Sony uses the name "NXCAM" for their line. AVCHD 2.0 allows bitrates up to 28 Mbit/sec for 1080/60P and for 3-D applications.
AVC-Intra
Developed by Panasonic, this variation of MPEG-4/H.264 HD offers intraframe compression rather than interframe, at 10 bits and higher bitrates: either 50 Mbit/sec (4:2:0) or 100 Mbit/sec (4:2:2).
XDCAm
Uses different codecs and compression methods (MPEG-2, MPEG-4,
DV25). See above for DV recording specifications. The HD version of XDCAM
uses MPEG-2 compression at bitrates of 18, 25, or 35 Mbit/sec. Recorded pixel
resolution is 1440 x 1080. Chroma sub-sampling is 4:2:0. The Sony XDCAM
EX “HQ” mode allows full-raster 1280 x 720 or 1920 x 1080 4:2:0 recordings at
35 Mbit/sec. “HD 422” mode allows 8-bit 4:2:2 at 50 Mbit/sec.
Canon XF
Similar to XDCAM HD, allows 8-bit 4:2:2 recording using MPEG-2 com-
pression at 50 Mbit/sec, as well as 4:2:0 at 35 or 25 Mbit/sec.
Redcode
This is the name of the wavelet compression (JPEG2000 variant) scheme
used by Red cameras to record RAW sensor data. Different bitrates are
offered.
BEYOND VIDEOTAPE
Over the past few years, there has been a shift away from using videotape
recorders inside camcorders. These are some alternative recording devices
and mediums:
Optical Disk
SD and HD can be recorded using Sony’s professional XDCAM optical
disk system (similar to Blu-ray), but there are only some lower-end con-
sumer DV cameras with built-in optical disk recorders, using the miniDVD
format.
External Recorders
There are portable recorders that can store larger amounts of video or
data than the camera can internally; some of these external devices are small
enough to be mounted directly to the camera. Some use an HDD (hard disk
drive) or a SSD (solid-state drive); others use memory cards for storage.
Some examples:
◗ The Atomos Ninja and Samurai recorders capture HDMI (Ninja) or HD-SDI (Samurai) output to Apple ProRes on 2.5" HDD/SSDs.
◗ Sound Devices' PIX 220 and 240 recorders capture HDMI (220) and/or HD-SDI (240) to Apple ProRes or DNxHD on CF cards or 2.5" SSDs.
◗ A laptop computer can also be used during shooting to process signals for
recording to an internal or external hard drive.
Blackmagic Cinema Camera uses a sensor that is almost 16mm wide. The
Panasonic AG-AF100 uses a Micro 4/3 sensor, which is about 80% as large as
a 35mm motion picture frame (the sensor is about 18mm wide versus 24mm
wide). Finally, there are now affordable models with 35mm-sized sensors in
them—for example, the Canon C300 and Sony FS700U, NEX-FS100, NEX-
VG10, and PMW-F3 all use a 35mm-sized sensor.
There are also digital still cameras with 35mm-sized sensors that shoot
HD video. Some even have a “full-frame” (FF) 35mm sensor, 36mm x
24mm, which is the same size as the 8-perf 35mm VistaVision frame. With
FF35 cameras, there is an effective 1.5-stop loss in depth of field over 35mm
cine photography once you match field of view, distance, and shooting stop.
Today, the shallower-focus look of 35mm photography is no longer
restricted to expensive professional cameras; however, keep in mind that
with less depth of field comes greater difficulty in follow-focusing during a
scene—and good focus is critical if the goal is projection on a large theatrical
screen, or even on large consumer HDTVs.
SHOOTING HD VIDEO ON A DIGITAL STILL CAMERA
Digital still cameras have had limited video capabilities for years now, but
recently, many have been able to shoot progressive-scan HD, either 720P or
1080P. This type of photography has been labeled “V-DSLR”, “HD-DSLR”,
“HDSLR” or “DSLR Video” in various online forums and publications
(though some of the still cameras being used are not actually DSLRs but are mirrorless). Their small size, low cost, and high sensitivity have opened up
new possibilities in cinematography.
◗ Depth of field. The larger FF35 sensor, such as in the Canon 5D Mark III or Nikon D800, requires longer focal-length lenses to match the field of view of shorter focal-length lenses on a 35mm cine camera, a 1.5X conversion factor. This will lead to a shallower-focus look unless compensated for by stopping down the lens; in practical terms, it's roughly a 1.5-stop difference in effective depth of field (see the sketch following this list). The lower noise floor of the larger sensors helps in this regard, giving you the flexibility of rating the camera at a higher ISO in order to shoot at a deeper stop. The APS-C sensor (such as in the Canon 7D) allows the same depth of field characteristics as 35mm cine cameras.
◗ Lens selection. Due to the flange depth of most PL-mount cine lenses,
the mirror shutter in a DSLR can make it difficult to mount these lenses
without risk of hitting the mirror. An increasingly popular aftermarket
solution has been to remove the mirror shutter and add a PL-mount; still
photography remains possible but would be limited to using the LiveView
function. There are also some cine lenses made with a flange depth that
does not interfere with the mirror function. Another option is to use a
camera without a mirror shutter, such as the Panasonic Lumix DMC-
GH2. Adaptors are made that allow a PL-mount lens to use the still cam-
era’s mount; however, these still camera mounts are not as robust as the
PL-mount. Due to small variations in flange-to-sensor distances between
still camera bodies, a cine lens shimmed to the correct depth for one still
camera may be slightly off on another body, so test your set of lenses if you
plan on sharing them between multiple bodies.
◗ Lens coverage. PL-mount cine lenses are normally designed to cover a
Super-35 area, though many medium-to-long focal-length lenses will
cover the larger FF35 area. Therefore the problem becomes finding the
shorter PL-mount lenses that will cover FF35 for your wide-angle shots.
However, if your frame of reference is Super-35 photography and you are
trying to decide what short focal-length lenses you need, remember that a
50mm lens on a FF35 camera has approximately the same horizontal view
as a 35mm lens on a Super-35 camera. There are also some PL-mount
lenses on the market designed to cover FF35, such as the Zeiss Compact
Primes. At the moment, however, it is more common to use still camera
lenses on a FF35 camera shooting HD.
◗ Focusing still camera lenses. On most DSLRs, the Auto-Focus function
does not work continuously in HD mode, unlike with many consumer
video zooms (there are some exceptions like the Panasonic DMC-GH2
and Nikon D800.) While focusing the lens by hand is possible, many opt
to add a bracket under the camera for holding standard 15mm rods, al-
lowing a follow-focus unit to be attached, as well as a mattebox and vari-
ous motors as needed. The still camera lens itself will need a toothed focus
on a separate unit and sync it with picture in post—some people are using small portable recorders like the Zoom H4N and Tascam DR-100 alongside their DSLR. In the meantime, the biggest challenge for those wishing to record sound on a camera without manual audio controls has been to defeat the AGC. There are various devices on the market that do this by sending a tone to one of the two stereo channels, keeping the noise floor consistent during moments of quiet. For example, the Beachtek DXA-SLR and the JuicedLink DT454 have an AGC disabler, as well as XLR mic inputs, preamps and phantom power. Over time, more of these cameras will allow manual audio control, but you may still need attached devices to allow XLR mic inputs and provide phantom power.
◗ Picture quality. These cameras do not offer a full-sensor RAW recording capability as yet, because of limitations in the processing power and data-recording capacity needed to handle such large files (these cameras are primarily designed for still photography, after all). The Canon 5D Mark
III, for example, has a 22MP sensor; a single frame of 1080P video, on the
other hand, is only 2MP. To get around this problem, most of these cameras
employ coarse downsampling techniques to reduce the amount of sensor
data to be processed into HD video. This can reduce effective resolution and
create aliasing and chroma moiré artifacts. Also, the reset rate of the rolling
shutter creates a distortion called “skewing” during fast motion.
◗ Postproduction. H.264 and other similar compression schemes, plus
the color and luminance limitations of an 8-bit 4:2:0 recording, can have
an impact on flexibility in color-correction. Also, the contrast of the DSLR
video image was really designed for immediate viewing on a monitor,
making it somewhat high for material to be color-corrected. The common
solution has been to set the contrast level as low as possible through the
camera’s menu; some people are also loading custom gamma curves.
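As promised above, here is a small sketch of the format-equivalence arithmetic for matching field of view and depth of field across sensor sizes. The 1.5X factor between FF35 (36mm wide) and Super 35 (roughly 24mm wide) is assumed; note that this naive model yields about 1.2 stops, in the same ballpark as the "roughly 1.5 stops" quoted earlier:

import math

# Focal length and aperture on FF35 that match the field of view and
# depth of field of a given lens on a Super-35 camera (crop factor ~1.5).
def ff35_equivalent(focal_mm, f_stop, crop=1.5):
    return focal_mm * crop, f_stop * crop

focal, stop = ff35_equivalent(35, 2.8)
stops_deeper = 2 * math.log2(1.5)            # exposure cost of matching DoF
print(f"{focal:.0f}mm at f/{stop:.1f}")      # 52mm at f/4.2
print(f"~{stops_deeper:.1f} stops")          # ~1.2 stops by this simple model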
few thousand dollars beyond the budgetary limit of $15,000 that this article
is focused on.
A SELECTION OF CAMERAS
The breakdown is based around average retail prices for the body only.
Please note that some of these cameras will need more accessories than
others to become production-friendly, and some do not come with lenses
at their base price. All of these factors can add dramatically to the cost of
ownership. Keep in mind that it is possible to find deals on many of these
cameras. Also, this is only a selection of what is available; new products are
constantly being released, and existing products are constantly being up-
dated with new features.
A few cameras allow a significantly better image, with attendant higher
data rates, to be recorded using external recorders. However, some of these
recorders are more expensive than the cameras mentioned.
◗ Canon EOS 1D C. Still camera (DSLR) with FF35 18MP CMOS sensor.
Capable of outputting 8-bit 4:2:2 Motion JPEG 4K video to a CF card at
24 fps (4K uses an APS-H crop of the sensor). Also records 8-bit 4:2:0
1080P video at either 24P or 60P using full-frame of the sensor, with
option to use a S35 crop. Uncompressed 8-bit 4:2:2 signal output via the
HDMI-out.
◗ Canon XF305. ⅓" 3-CMOS, 2.1MP full-res 1080P sensors. Records 8-bit 4:2:2 MPEG-2 (50 Mbit/sec) to CF cards. Comes with Canon 4.1–73.8mm zoom.
◗ Red Scarlet. 35mm (S35) CMOS sensor, 4K. Modular design; components
sold individually. Records 4K RAW using Redcode compression to SSDs.
◗ Sony NEX-FS700U. 35mm (S35) CMOS sensor, 4K. Records 1080P using
AVCHD to SD card/MemoryStick or via the FMU (flash memory unit)
port, or it can output 8-bit 4:2:2 (with embedded timecode) via HDMI 1.4
or 3G/HD-SDI to an external recorder. High frame rates in short bursts
(8 to 16 seconds) up to 240 fps at full resolution or 960 fps at reduced
resolution. Future option allowing 4K output to external recorder.
◗ Canon EOS 5D Mark III. Still camera (DSLR) with a single FF35 22MP
CMOS sensor. Records to CF or SD cards.
◗ Canon XF105. Single ⅓" CMOS sensor. Records 8-bit 4:2:2 MPEG-2 (50 Mbit/sec) to CF cards.
◗ JVC GY-HM750U. ⅓" 3-CCD sensors. Records 8-bit 4:2:0 MPEG-2 (1080P,
1080i, 720P), up to 35 Mbit/sec, to SDHC cards (two slots). Optional SxS
card adaptor. 1–30 fps (1080P/i) or 1–60 fps (720P). Comes with detach-
able Canon f/1.6 4.4–61.6mm zoom.
◗ JVC GY-HM250U. ⅓" 3-CCD sensors. Records 8-bit 4:2:0 MPEG-2 (720P, 480P/i), up to 35 Mbit/sec, to HDV tape. Optional SxS card adaptor. 1–30
fps (1080P/i) or 1–60 fps (720P). Comes with detachable Canon f/1.6
4.4–61.6mm zoom.
◗ Nikon D800. Still camera (DSLR) with FF35 36MP CMOS sensor. Re-
cords to CF or SD cards. Has clean 8-bit 4:2:2 HDMI-out for recording to
external device.
◗ Panasonic AG-AF100. Single Micro 4/3 CMOS sensor. Shoots 12–60 fps;
records 1080P, 1080i, and 720P using SD cards to MPEG-4 AVC/H.264.
Has HD-SDI out. Micro 4/3 lens mount.
◗ Canon Vixia HF S21. Single 1/2.6" CMOS (3264 x 1840 pixels) sensor.
Records AVCHD to SD cards (two slots) or internal 64GB flash drive.
24P/30P/60i. Fixed f/1.8 6.4–64mm zoom.
◗ Canon XA10. Single ⅓" CMOS sensor. Records to internal 64GB flash drive in 8-bit 4:2:0 MPEG-4 AVC/H.264 (up to 17 Mbit/sec). Fixed f/1.8 4.25–42.5mm zoom.
◗ Sony NEX-VG10. Single APS-C 35mm sensor. Using Sony Memory Stick or SD cards, records 30P as 1080/60i using MPEG-4 AVC/H.264 (up to 24 Mbit/sec). No 24P. Sony E-mount lenses.
The author would like to thank Adam Wilt, Randy Wedick, Charles Crawford,
and Phil Rhodes for their technical advice.
M. David Mullen, ASC is a member of the Academy of Motion Picture Arts and
Sciences. He has photographed more than thirty independent feature films and two
television series. He has received two IFP Independent Spirit Award nominations
for Best Cinematography, the first for Twin Falls Idaho (2000) and the second for
Northfork (2004).
EXPOSURE
Most exposure meters incorporate some sort of calculator—some simple,
some sophisticated. An exposure meter measures light, either incident or
reflected. The calculator helps you decide how to use the light. There are six
specific variables entering the calculation:
T-STOPS
The T-stop number is defined as the true f-stop number a lens would have if it were completely free from all reflection and absorption losses. The T (transmission) number represents the f-stop number of an open circular
hole or of a perfect lens having 100-percent axial transmission. The T-stop
can be considered as the “effective” f-stop. It is from this concept that the
means arises for standardization of T-stop calibration. T-stops are calibrated
by measuring the light intensity electronically at the focal plane, whereas
f-stops are calculated mathematically, purely on the basis of dividing the
focal length of the lens by the diameter of the effective aperture (entrance
pupil). Thus, f-stops are based on the light that enters a lens, while T-stops
are based on the intensity of the light that emerges from the rear of the lens
and forms the image.
There is no fixed ratio, however, between T-stops and f-stops that applies to
all lenses. The difference actually represents light losses within the elements
of a given lens due to reflection from the glass-air surfaces and from absorp-
tion within the glass itself. Consequently, this factor is variable and cannot
be incorporated into an exposure meter, since the meter must function in
connection with many different lenses calibrated in both f-stops and T-stops.
The reason why lens and exposure tables are presented in f-stops, when all professional cine lenses are calibrated in T-stops, is that f-stops are required for all calculations involving object-image relationships, such as depth of field, extreme close-up work with extension tubes, etc. Such tables are based on the size of the "hole," or diameter of the bundle of light rays, that the lens admits to form the image. The diameter of the f-stop will normally be the same for all lenses of similar focal length set at the same aperture. The T-stop, however, reflects measured transmission, so the same T-stop setting may correspond to different aperture diameters on different lenses.
It is recommended that all professional cine lenses be calibrated in both
T-stops and f-stops, particularly for color work. T-stop calibration is es-
pecially important with zoom lenses, the highly complex optical design of
which necessitates a far greater number of optical elements than is required
in conventional lenses. A considerable light loss is encountered due to the
large number of reflective optical surfaces and absorption losses. A zoom
lens with a geometrical rating of f/2, for example, will transmit considerably
less light than a conventional fixed-focal-length lens of similar rating with
fewer elements.
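Since the T-stop is, in effect, the f-stop of an ideal lens passing the same amount of light, the two scales are related by T = f / sqrt(t), where t is the lens's overall transmittance. A brief sketch with illustrative transmittance values (the percentages are assumptions, not measurements of any particular lens):

import math

# T-stop from f-stop and overall lens transmittance: T = f / sqrt(t).
def t_stop(f_stop, transmittance):
    return f_stop / math.sqrt(transmittance)

print(round(t_stop(2.0, 0.80), 2))   # simple prime, ~80% transmission -> T2.24
print(round(t_stop(2.0, 0.60), 2))   # many-element zoom, ~60% -> T2.58
# Two lenses both marked f/2 can thus differ by roughly 0.4 stop in the
# exposure they actually deliver, which is why cine lenses carry T-stops.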
Exposure tables are generally based on “effective” f-stops (which are, in
fact, T-stops). Small variations in emulsion speed, processing, exposure
readings, etc., tend to cancel out. Cinematographers should shoot tests with
their particular lenses, meter, light and film to find the best combinations for
optimum results.
Other variables such as direction and contrast of the light are factors cal-
culated from the experience of the cinematographer, aided by such tools as
photospheres and spot readings. Finally, manipulation of all the above, plus
special negative processing to achieve a desired “look,” is determined by the
cinematographer.
The laboratory and choice of film are closely tied to exposure. It is impor-
tant to keep exposure within limits satisfactory both to the selected film and
to the printing range of the laboratory.
The tables on pages 812, 813 and 866 through 877 will aid exposure cal-
culation for meters that lack settings for some of the factors, or will aid in
calculating constant exposure control when one factor varies from another.
Comparisons of 35mm,
1.85, Anamorphic,
Super 35 Film Formats
by Rob Hummel
Editor of the 8th edition
of the American Cinematographer Manual
INTRODUCTION
This chapter has always addressed creative aesthetic and technical differences depending on the choice of 35mm film format. The most salient
change from earlier versions of this chapter is the impact of digital interme-
diate (DI) on final image quality.
While a DI can improve upon image quality that might degrade when cre-
ated at a film lab, it does so at greater expense. On many films, that expense
is trivial when compared to the overall cost of the film; on others, it may be a significant portion of the budget. In 2011, John Bailey, ASC saved a low-budget film substantial money and was able to achieve everything desired from the image with a photochemical answer print. It is becoming increasingly rare to color correct a film in this manner, but it is still available.
Also, 2K DIs have made formats such as Super 35 and anamorphic appear almost equal in quality. Filmmakers often assume this is because the DI process makes the Super 35 image look so much better. While a 2K DI does improve the image quality of a Super 35 image, the only reason the image looks competitive with an anamorphic DI is that a 2K DI sacrifices the image quality contained in the anamorphic frame. With the further rollout
of 4K scanning and 4K DI workflows, the superior image quality of ana-
morphic imaging over Super 35 is now readily apparent again. 4K scanning
begins to approach capturing all the resolution that the anamorphic image
has to offer. While Super 35 benefits from 4K scanning, the improvements
are not as dramatic.
Lastly, the focus of this piece has always been image quality. That being said, I recognize, more than most, that we're not in the medical-imaging business; we're in the motion picture business. When I reference grain or noise in an image, I recognize that such a look may actually be what you desire in your picture. Where this chapter may describe one format as having superior image quality to another, you shouldn't take offense if the lower-quality image is the preferred method for telling your story.
What’s New
◗ A new section on how to evaluate images effectively, discussing the importance of screen heights, complete with illustrations. (Page 60)
◗ For motion pictures that will enjoy an IMAX release, you will find a
page briefly illustrating how that process works. (See Figure 11)
◗ If you are considering digital cameras instead of film: after the discussion of film formats and aspect ratios, you will find a new section with guidelines and information to help you understand what to look for when examining a digital camera. This should allow you to make an informed decision about what will most benefit your motion picture, whether comparing various digital cameras or comparing digital to film. (See page 81)
Figure 1: A typical stadium seat theater design, illustrating how many screen heights the audience is from the image.
Figure 2: As an example of how an image can improve with distance, hold the above image
at arm’s length and it will probably look just fine. Upon closer examination, you will notice
the difference between the right and left sides of the picture.
Most of today's stadium seat theaters are no more than 3 to 3½ screen heights deep (see Figure 1).
When you are closer than seven screen heights to a standard-definition television set, you can resolve the pixels and scanning lines that make up the image. Farther than seven screen heights away, however, it is impossible to resolve any of the picture elements that make up the image; basically, the image will appear as sharp as your eyes can resolve.
For example, if you were looking at an IMAX image and a standard defini-
tion TV image side by side, but you were evaluating the images from 7 screen
heights away, both images would appear to have equal resolution. Once you
got closer than 6 or 7 screen heights, the TV image would begin to exhibit its
poor image quality, while the IMAX image would look exemplary at closer
than ½ a screen height. If you have been in an IMAX theater, take notice
that the back row of the theater is rarely much more than one screen height
away from the screen; something IMAX can do because of the dramatically
high resolution of IMAX photography.
At the risk of stating the obvious; the higher resolution the image, the
closer you can get to the image before noticing pixels (in the case of digital)
or grain (in the case of film).
In the case of HDTV, you have to get closer than three screen heights
before you can start to see the pixels in the image. In current stadium seat
theaters, our audiences are sitting no further than three screen heights from
the images, and, in most cases, closer than two screen heights. For this rea-
son, it’s important we evaluate imagery from the same screen distance as our
audience.
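These rules of thumb follow from simple visual-acuity geometry. Assuming the eye resolves roughly one arcminute of detail, a pixel row becomes invisible once it subtends less than that, and the critical distance in screen heights works out to 1 / (lines x tan(1 arcminute)). A quick sketch, with the one-arcminute acuity figure as the stated assumption:

import math

# Distance, in screen heights, beyond which individual pixel rows blend
# together, assuming ~1 arcminute of visual acuity.
ARCMIN = math.radians(1 / 60)

def blend_distance(lines):
    return 1 / (lines * math.tan(ARCMIN))

for lines in (480, 1080, 2160):
    print(f"{lines} lines: ~{blend_distance(lines):.1f} screen heights")
# 480 lines: ~7.2 (the "seven screen heights" for SD)
# 1080 lines: ~3.2 (the "three screen heights" for HDTV)
# 2160 lines: ~1.6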
In the image above, your proximity to the image affects your perception
of the image quality.
Most studios still have screening rooms that place you anywhere from six
to eight screen heights from the image. Therefore, when sitting that far from
the screen, you can fool yourself into thinking the image quality is adequate,
perhaps even superb. Yet, when viewed from a distance of 1½ to 2 screen
heights, your conclusions may be entirely different. Please make sure when
evaluating image quality, you place yourself within 2 screen heights of the
projected image. It is important that critical evaluation of image quality be
viewed from where the audience will see the picture.
The display medium must be taken into consideration as well. Many
people evaluate high-definition (HD) or 2K images on HD displays that can
resolve only 1400 of the 1920 horizontal pixels contained in an HD image.
Those 520 missing pixels of resolution can hide artifacts that will show up
clearly on a digital cinema projector, or a 35mm film out.
FILM FORMATS
History and a Definition of Terms
Currently, in the United States (and most of the world), the most prevalent
motion picture film formats, or aspect ratios, are 1.85 and 2.40 (2.40 is still
often referred to as 2.35).1 As a point of reference, these ratios are determined
by dividing the width of the picture by the height, which is why you will also
see them written as 1.85:1 or 2.40:1. Verbally, you will hear them referred to
as “one eight five” or “two four oh.” 2.40 is also referred to as anamorphic, or
“Scope,” referring to its roots in CinemaScope.
An examination of films over the past sixty years shows that format is
not automatically dictated by dramatic content. It is a creative choice, deter-
mined by the cinematographer and director. The full range of drama, com-
edy, romance, action, or science fiction can be found in both aspect ratios.
The purpose here is to advise on the pros and cons of both aspect ratios and
the photographic alternatives available to achieve them. This should help a
filmmaker make an informed decision as to which format is best for a given
project. Most importantly, you will be presented with the “conventional wis-
dom” arguments for and against the formats; this conventional wisdom will
either be endorsed or countered with reality. This knowledge, in the end, will
help you realize that, creatively, there are no technical obstacles to choosing
any format. However, you will also understand the aesthetic impact those
choices will have upon your production.
1. A historical note regarding 2.35 vs. 2.40 vs. 2.39. CinemaScope films with an ana-
log soundtrack were originally an aspect ratio of 2.35:1. In the early 1970s, the height
of the anamorphic aspect ratio was modified slightly by SMPTE to help hide splices,
effectively changing the ratio to 2.40:1. Old habits die hard and many still refer to
the format as 2.35:1. Also, in 1995, SMPTE again made a change in the size of the
CinemaScope projection area to accommodate digital soundtracks. In both cases the
math of the aspect ratio yields an aspect ratio number of 2.39 and continues on sev-
eral places to the right of the decimal point. Cinematographers felt that rounding up
to 2.40 ensured there would be less confusion with the 2.35 aspect ratio. The actual
difference between 2.39 and 2.40 is so inconsequential that, from a compositional
standpoint, they are the same.
As a clarification, the term full aperture refers to the entire image area be-
tween the 35mm perforations, including the area normally reserved for the
soundtrack. In other literature you will find full aperture used interchange-
ably with camera aperture, silent aperture and full silent aperture. All four
terms define the same area of the 35mm film frame.
In general, the term Academy aperture is used when referring to the imag-
ing area of the negative excluding the analog soundtrack area. More prop-
erly, it would be referred to as the “sound aperture,” the term used to indicate
the area that remained when analog soundtracks were first added to 35mm
film. However, throughout this chapter, we will follow convention, and use
the term Academy aperture when referring to the imaging area excluding
the soundtrack.
Academy aperture2 is an aspect ratio of 1.37:1, centered within the sound
aperture area, arrived at jointly by the American Society of Cinematogra-
phers and Academy of Motion Picture Arts and Sciences in the early days of
sound to restore compositions closer to the 1.33 aspect ratio of silent films,
and resolve the unpleasant composition produced by framing images within
the narrow sound aperture.
All 1.85 composed films are achieved with “normal,” spherical lenses.
However, the 2.40 aspect ratio can be achieved two ways. One method is
with the use of anamorphic3 lenses that squeeze the image to fit within the
Academy aperture (see Figure 6). The alternate method (see Super 35 in
Figures 7, 8, and 9) uses normal lenses without any distortion of the image,
and then is later squeezed for theatrical film release. Both methods will be
discussed here.
The 1.85 and Super 35 formats can also be captured with cameras using a 3-perf pull-down movement. While all 35mm illustrations in this chapter use a 4-perforation frame, the advantages and disadvantages of 1.85 or Super 35 remain the same whether shot 3-perf or 4-perf. 3-perf camera movements do not provide adequate height for anamorphic photography.
Also, the film formats discussed here deal with general 35mm motion
picture photography. Formats such as VistaVision and 65mm are most often
2. If you actually calculate the aspect ratio of Academy aperture from SMPTE specs,
the math has always come out to 1.37:1. More often than not, people will refer to
Academy aperture compositions as 1.33 (the original aspect ratio of silent films), and
almost never is the ratio called 1.37. The compositional difference between the two
is negligible.
3. Anamorphic comes from the word anamorphosis, meaning an image that appears
distorted, yet under the proper conditions will appear normal again. Leonardo da
Vinci is credited with first using the technique during the Renaissance.
used for visual effects and special-event cinematography and would require a chapter of their own to discuss. However, Figures 10 and 11 illustrate how
widescreen compositions are presented when converted to IMAX, and how
films photographed in 65mm are released in 70mm.
At the end of this chapter, we’ll cover current methods for achieving 1.85
and 2.40 aspect ratios with currently available digital cameras.
COMPOSITION
Before getting into specifics about the different formats, I want to point out the composition differences between the two aspect ratios, 2.40 and 1.85, regardless of how they are achieved photographically.
Figure 3 is an image of the Taj Mahal with a 2.40 aspect ratio outlined in yellow.
Figure 3: The yellow rectangle within the image outlines a 2.40:1 aspect ratio.
In Figure 4, two 1.85 aspect ratios are outlined by yellow rectangles. The larger of those two rectangles represents a 1.85 composition equal in its width to the 2.40 aspect ratio of Figure 3. The smaller 1.85 rectangle is equal in height to the 2.40 ratio of Figure 3.
Figure 4: Two 1.85 compositions.
The purpose here is to illustrate that, depending on your framing, a 1.85 image has the potential of encompassing as much width as a 2.40 composition. Although 1.85 will take in the same width with greater height in the composition, it's important to realize that wide sets and vistas are not restricted to the 2.40 format.
I believe you will be able to sort the facts from opinions. That being said,
nothing here is intended to dissuade you from choosing a given film for-
mat. Where there are possible true disadvantages to a format, it is presented
so you have your eyes wide open to any potential challenges or creative
compromises. We are working in a very artistic medium where there can
truly be no absolute rights and wrongs. What one person may find an ob-
jectionable grainy image, another filmmaker may feel enhances the story
being told.
Where possible, when listing outdated opinions, I immediately counter with the present fact or a contrary point of view. With this presentation, I want to inform you of all the arguments and points of view about these formats so that you can draw your own conclusions. If you study it carefully, you will be able to effectively articulate your desire to photograph a film in the aspect ratio you feel will most benefit the story.
3. Depth of Field: Greater depth of field (the total area in focus at a given distance). Since 1.85 uses shorter focal-length lenses than anamorphic, greater depth of field is available.
This advantage is often negated when lenses are shot "wide open," resulting in little or no gain in depth of field.
4. Composition: Considered by many to be a "taller" composition, lending itself to compositions with more emphasis on the vertical than the horizontal; cathedral interiors or city skylines, for example.
5. Wide Sets: An opinion often expressed is that sets don't need to be as wide on a 1.85 film as on one photographed in 2.40, resulting in savings in set construction. However, there are many who would argue that film format
has no bearing on the width of set construction. As the examples in
Figures 3 and 4 (page 64) point out, it’s possible for 1.85 to require as
wide a set as 2.40, depending on the composition.
6. Least Complex: 1.85 is the simplest format to execute from a mechani-
cal/technical standpoint. The choice of photographic equipment is vir-
tually unlimited, as any standard 35mm camera will accommodate this
format. Plus, you have a choice of very wide and “fisheye” lenses not
available in anamorphic.
7. Expendable Cameras: If a stunt camera mount is required that risks
destroying a camera lens, spherical lenses can be used that are much
more affordable than expensive anamorphic lenses.
8. Video Transfers: While standard-definition sets are effectively no longer manufactured, many production companies are still concerned about accommodating the 1.33 (4 x 3) aspect ratio of standard-definition television. With some effort on the shooting company's part, the 1.85 composition can protect for standard-definition video so that a simple one-to-one transfer can be done without panning and scanning. While left and right image integrity remain virtually intact this way, there is an approximate 33% increase in the vertical height of the composition.
Although many think it routine to protect the TV area from intruding
objects (e.g., lights, microphones, etc.), it makes the cinematographer
and soundman’s job more difficult, by not being able to bring lights and
microphones down close to the area of composition. This is why many
cinematographers shooting 1.85 will request to shoot with a 1.66:1 as-
pect ratio hard matte. While the same width on the film, 1.66 is slightly
taller than 1.85, closely approximating the height of the 1.33 (4 x 3) TV
frame. This gives the cameraman more freedom to light his subjects
without fear of a light or microphone showing up when transferred to
video.
Yet, in a world where 1.78:1 (16 x 9) aspect ratio video displays are now the norm, 1.85, for all intents and purposes, drops neatly into the HDTV frame. While not precisely the same aspect ratio, the 1.78:1 HDTV frame is only 42 pixels taller than the 1.85 aspect ratio. This means that if you letterbox a 1.85 image in HDTV, you get two 21-pixel black lines, one above and one below the image (see the short calculation after this list).
9. Sharper Lenses: Many people believe it is an advantage to shoot 1.85 because spherical lenses are sharper than anamorphic lenses.
This is a misconception. It is true that spherical lenses are often sharper
than anamorphic; however, the much greater negative area used with
anamorphic more than makes up for the subtle difference in resolution
from spherical lenses. Properly executed camera tests comparing the
two formats always reach this conclusion.
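The 42-pixel figure in item 8 comes straight from the frame geometry, and the same arithmetic works for any format pair. A minimal sketch:

# Letterbox bar height when fitting a wider picture ratio into a raster.
def letterbox_bars(raster_w, raster_h, picture_ratio):
    picture_h = round(raster_w / picture_ratio)
    total = raster_h - picture_h
    return total, total // 2            # total black lines, lines per bar

print(letterbox_bars(1920, 1080, 1.85)) # (42, 21): two 21-pixel bars
print(letterbox_bars(1920, 1080, 2.40)) # (280, 140): the familiar Scope bars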
B. DISADVANTAGES OF 1.85
1. Negative Area: The principal disadvantage is the actual size of the 1.85 format on the negative. Because of the smaller area, 1.85 is noticeably grainier than anamorphic 2.40. This is not as noticeable at the original negative stage, or when projecting in small screening rooms, but becomes more pronounced when projected in large theaters.
Graininess can be mitigated with digital grain-reduction techniques in the DI; however, if not applied carefully, such techniques can leave the images looking artificial, with a loss of sharpness, almost video-like.
When compared to 1.85, anamorphic 2.40 uses 55% more area on the
negative. This is not insignificant.
2. Composition: Because of the greater height of the 1.85 aspect ratio, ceilings of sets are more prone to being photographed. This can be a restriction on how easily a cameraman can light an interior set (visible ceilings limit where a cameraman can hang lights). On some sets it may require additional construction, as has been the experience on films shooting on studio back lots. Sound can also be compromised by the microphone having to be farther away.
3. Magnification: When projected, the area of the frame for 1.85 is sub-
jected to much greater magnification on a screen than an anamorphic
frame, resulting in more apparent grain in the image.
4. Jump and Weave: If your motion picture will be projected from a film
print, film projectors always have some level of jump and weave, rang-
ing from annoying to barely detectable. Because of 1.85’s vertical height
on the film frame, it is subjected to a 55% increase in visible jump and
weave over an anamorphic image projected in the same theatre. This
artifact is eliminated with digital projection.
5. 70mm: Not truly compatible with standard 70mm: Although it can be
done, there is a large amount of unused print on the sides when blown
up to 70mm. Also, because of the greater magnification in 1.85/70mm
Surprisingly, there still exist major cable channels that transmit a 1.33 standard-definition signal and present films panned and scanned.
The alternative is to release videos in letterbox format, where the
2.40 format is maintained by putting black mattes above and below the
frame. This is a common practice in DVD releases of films. Letterbox-
ing a 2.40 image with HDTV’s 1.78 (16 x 9) aspect ratio isn’t objection-
able at all, yet there are still cable outlets that resist respecting a film’s
original composition.
Performing an HDTV 16 x 9 pan and scan can have just as many
compositional compromises as those for standard definition TV.
The difficulty in video transfer is the most often stated disadvantage
of the 2.40 format.
It has always been the position of the ASC that the original com-
positions of motion pictures should always be respected. Therefore,
letterboxing of widescreen films is the preferred practice. When 1.33
motion pictures are presented on widescreen displays, the image should
be presented with “side curtains” or in “pillar-box” format.
2. Expense: It is often said that anamorphic is more expensive than 1.85. However, the difference in cost between an anamorphic lens package and a 1.85 lens package is negligible. Panavision 'C' series anamorphic lenses would be less than a few thousand dollars more expensive over the course of a ten-week film schedule.
Also, discussions with a number of prominent cameramen indicate
they wouldn’t increase the size of their lighting package significantly for
the 2.40 aspect ratio. In fact, some say it wouldn’t change at all.
3. Composition: Single close-ups result in wide areas on either side of a
face, with potential for distracting objects in the frame. However, due to
the nature of anamorphic’s longer focal length lenses, usually anything
in the background on either side of a face would be severely out of focus,
tending to emphasize the subject.
4. Set Construction: Many feel that sets need to be built wider because
of the wider aspect ratio. There are also many who feel it doesn’t mat-
ter, and can be accommodated by choosing lenses carefully. See again
Figures 3 and 4 and the discussion under Composition (page 64).
5. Blocking: Some directors have a hard time blocking action within the
wider frame.
6. Extras: Expense of additional extras may be necessary for some crowd
scenes.
7. Valuable Lenses: Anamorphic lenses are far too valuable to put at risk in stunt situations (Panavision's Primo lenses, for example). Thus alternative forms of capture with a Super 35 type of setup must be used, risking that the material may not cut well with the balance of the anamorphic-originated material. Though, if used for a quick action shot, the difference in image quality will most likely not be detectable.
with standard lenses, making use of the full aperture area, extending the
width of the frame into that area of the negative traditionally reserved for
the soundtrack; yet the composition is just over 2 perfs high. Although most
cameras already expose picture information in the soundtrack area, it nor-
mally goes unused. Figure 7 portrays how Super 35 composed for 2.40 gets
from camera to the screen.
One version of Super 35 is where the image is centered on the frame of
film, referred to as common center (see Figure 8). As the figure illustrates,
information is exposed over the entire full aperture area of the film. The
filmmaker decides what format he is composing for, and it is that aspect
ratio the film lab or digital intermediate facility will eventually extract from
the frame for theatrical pre-
sentation.
Another method of photography for Super 35 is referred
has over 53% more area. The difference in negative area becomes most
pronounced after 35mm dupe negatives are made. Anamorphic dupe
negs are made with contact printing, which in itself tends to lessen the
appearance of grain. Super 35 dupe negs created in a film lab involve an
optical step where the image is blown up, then squeezed to produce an
anamorphic image for 35mm release prints. Because of this optical step,
grain in the negative tends to be more sharply resolved. Combine that with magnifying a smaller area of negative to fill the same screen area, and images become noticeably grainier than the same scene photographed anamorphically.
The most prevalent method for posting a Super 35 is to employ a
digital intermediate. With a DI, the detrimental effects of the film lab’s
optical steps are eliminated. However, if negatives are scanned and
projected at 4K, the differences between anamorphic and Super 35
negatives again become severe, with the anamorphic having much less
apparent grain, greater sharpness and clarity of the image.
2. Opticals: If a film laboratory workflow is employed, all dissolves and
fades must be done with double IP’s or A & B printing, for best image
quality. If traditional optical methods are employed, they become exces-
sively grainy after being subjected to the second optical step required to
produce the film release prints.
Obviously, you will usually be using a digital intermediate, and these
disadvantages vanish.
3. Composite Prints: In a film lab workflow, because of the full aperture
image, composite prints cannot be struck until after the image has been
repositioned into a dupe negative, thus making room for the optical
sound track. If digital or HD previews are employed, this becomes a
nonissue.
4. Prints from Original Negative: In traditional photochemical workflows, because of the optical step involved, original-negative composite show prints cannot be struck. Actually, it is technically possible, but it can only be done with complex procedures and such a high risk of failure that it doesn't merit subjecting the original negative to the handling involved.
As you may have guessed by now, with a digital intermediate, a
“new” negative will be created on a film recorder that will integrate the
required anamorphic squeeze without introducing the grain found in
traditional optical printing.
5. Previews: It is more difficult to preview a traditional film print, because of the special projection mask required for the full-aperture work print. Since Super 35 uses the area reserved for a soundtrack in the workprint stage, many film theatres cannot be adapted to project the format. A nonissue for digital previews.
Figure 12: In a 1920 x 1080 digital camera, only the center 800 lines are used for a "Scope" or 2.40:1 aspect ratio.
The aspect ratio of all but a few of these cameras is the HDTV aspect ratio of 1.78:1 (16 x 9 in video parlance). This 1.78:1 aspect ratio is a result of the different camera manufacturers leveraging what they have built for HDTV broadcasting cameras. It's rare to find a digital camera that doesn't owe part of its design to television cameras.
More recently however, Arri, Red, and Sony have built cameras that have
unique designs optimized for motion picture cinematography.
Previously, the photochemical process of film put all the burden of imaging science on the film itself, allowing motion picture camera manufacturers to focus, when building cameras, on the functionality of everything required to get the image to the film: lenses, stable film transport, ergonomics, etc.
With the world of digital imaging, there has been a paradigm shift. Now
the burden of the imaging science has been put on the camera manufactur-
ers themselves; something once left to the likes of Kodak or Fuji.
When composing for the 2.40:1 aspect ratio, most digital cameras will
capture a letterboxed 2.40 slice out of the center of the imaging sensor,
which, in 1920 x 1080 cameras, results in the pixel height of the image being
limited to only 800 lines.
Cameras with sensor aspect ratios approaching 2.0:1, or with 1.33 sensors that can take anamorphic lenses, mitigate this compromise in clarity when composing for 2.40:1.
4. When film resolutions are discussed and the terms 2K or 4K are used, these refer to the number of lines that can be resolved by the film. In the case of 2K, that would mean 2048 lines, or 1024 line pairs, as photographed from a resolution chart. In the case of 4K, that would mean 4096 lines, or 2048 line pairs.
In digital imagery the term is applied a bit more loosely. While 2K and 4K still mean 2048 and 4096, respectively, with digital scanning and photography the figure refers to the number of photosites on the scanner or camera image sensor. The number of pixels does not necessarily translate into actual image resolution. Add to the confusion that many manufacturers have a curious habit of rounding up, which is how 1920 pixels gets called "2K."
Yet another camera does creative things with the clocking of CCD pixels
so that the entire imaging area is still utilized when shooting a 2.40 image,
with a subtle compromise in overall resolution.
We are already witnessing more and more cameras with imaging sensors
with 4K pixel resolution. So, hopefully, we’ll be back to movies having the
highest fidelity imagery again.
Fill Factor
There is one statistic in a digital camera's specifications that is most helpful in determining its sensitivity and dynamic range: the figure that conveys how much of an imaging sensor's area is actually sensitive to, and captures, light vs. how much of the sensor is blind, relegated to the circuitry for transferring image information. This is called the "fill factor." It is also a statistic that is not readily published by all camera manufacturers.
This is an area where not all digital cameras are created equal. The proportion of a digital imaging sensor's area that is actually sensitive to light (the "fill factor") has a direct correlation to image resolution and exposure latitude. Among the currently available professional digital motion picture cameras, you will find a range, from high-profile cameras where less than 35% of the sensor's total area
is sensitive to light, to cameras where more than 50% of the sensor is light
sensitive.
As film cinematographers, we are used to working with a medium where
it was presumed that the entire area of a 35mm film frame is sensitive to
light; in digital parlance, that would be a fill factor of 100%. When a digital
camera has a fill factor of 40%, that means it is throwing away 60% of the
image information that is focused on the chip. Your instincts are correct if
you think throwing away 60% of image information is a bad idea.
Since the imaging sites on a solid state sensor are arrayed in a regular grid,
think of the 40% sensitive area as being holes in a steel plate. Thus, the image
gathered is basically similar to shooting with a film camera through a fine
steel mesh. You don’t actually see the individual steel gridlines of the mesh,
but it tends to have an effect on image clarity under most conditions.
The fill factor is accurate only when it ignores lenslets and reflects only
the size of the actual photosite well that captures photons vs. the rest of the
imaging area.
With this statistic, you can quickly compare camera capabilities, or at least
understand their potential.
The higher the fill factor of a given sensor (closer to 100%), the lower the
noise floor will be (the digital equivalent of film grain) and the better the
dynamic range will be.
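As a back-of-the-envelope sketch of the idea (the pitch and well dimensions below are invented for illustration, not taken from any real sensor):

    # Hypothetical photosite: 6 micron pitch with a 3.8 x 3.8 micron
    # light-sensitive well; the rest of each cell is "blind" circuitry.
    pitch_um = 6.0
    well_um = 3.8

    fill_factor = well_um**2 / pitch_um**2
    print(f"fill factor = {fill_factor:.0%}")   # ~40% sensitive, ~60% blind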
In addition, solid state sensors tend to have various image processing cir-
cuits to make up for things such as lack of blue sensitivity, etc., so it is impor-
tant to examine individual color channels under various lighting conditions,
as well as the final RGB image. Shortcomings in automatic gain controls, etc.
may not appear until digital postproduction processes (VFX, DI, etc.) begin
to operate on individual color channels.
Sound complicated? Perhaps, but all you need to understand is that
product claims can, and will, be misleading. We’ve lost our way a bit when
thinking that counting pixels alone is a way of quantifying a digital cam-
era’s capability. A return to photographing resolution charts, and actually
examining what these cameras are capable of will serve you much better in
understanding how a given camera will help you tell your story.
In short, do not be seduced by technical specification mumbo jumbo.
Look at images photographed by the camera in question, and evaluate them
from a proper viewing distance, measured in screen heights.
Photosites/Wells
Figure 13: At left, a Bayer pattern photogate sensor, where each photosite is transformed
to yield one full color pixel value; at right, a photodiode “macrocell” design, where it
takes six photosites to yield one full color pixel value.
The exact meaning of 2K or 4K can vary; the terms 2K and 4K are only
guidelines, and your mileage may vary.
It is important to understand these variations in characteristics, and the
need to be very specific when describing the physical characteristics of
film cameras, digital cameras, scanners and telecines. In the world of film
cameras (and film scanners), 2K refers to an image which is 2048 pixels
horizontally (perf to perf) and 1556 pixels vertically. This image captures the
area of either the SMPTE 59 Style C full aperture frame (.981" x .735") or the
SMPTE 59 Style B sound aperture frame (.866" x .735").
A 4K scan captures the same areas of the film frame as a 2K scan, but the
image yielded is usually 4096 x 3112. With both 2K and 4K scanners, each
individual pixel contains a single unique sample of each of red, green and
blue.
In the digital camera world, 2K often refers to an image that is 1920 pixels
horizontally and 1080 pixels vertically. Again, each individual pixel contains
a single unique sample of each of red, green and blue. This sampling of a
unique red, green and blue value for each pixel in the image is what is called
a “true RGB” image, or in video parlance, a 4:4:4 image. While these cameras
have an image frame size that corresponds to the HDTV standard, they provide
a 4:4:4 image from the sensor, which is not part of the HDTV standard; this is
a good thing, as 4:4:4 will yield a superior picture.
Pixels
Much confusion could be avoided if we would define the term “pixel” to
mean the smallest unit area which yields a full color image value (e.g., RGB,
YUV, etc., or a full grayscale value in the case of black and white). That is, a
“pixel” is the smallest stand-alone unit of picture area that does not require
any information from another imaging unit. A “photosite” or “well” is de-
fined to be the smallest area which receives light and creates a measure of
light at that point. All current digital motion picture cameras require infor-
mation from multiple “photosites” to create one RGB image value: in some
cases three photosites for one RGB value, in others six, and then there are
Bayer pattern devices that combine numerous photosites to create one
RGB value.
These definitions inevitably lead us to define the term “resolution” as the
number of such pixels, each of which yields a single full color or grayscale
image value (RGB, YUV, etc.).
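A deliberately naive sketch of why several photosites feed one pixel: below, a tiny RGGB Bayer mosaic is separated into channels, and the missing two-thirds of each pixel’s color are filled by averaging neighboring samples. Real cameras use far more sophisticated demosaicing; this only illustrates the photosite-to-pixel distinction.

    import numpy as np

    # Each photosite records ONE color; a full RGB pixel must borrow from neighbors.
    raw = np.array([[10, 20, 12, 22],
                    [30, 40, 32, 42],
                    [11, 21, 13, 23],
                    [31, 41, 33, 43]], dtype=float)

    h, w = raw.shape
    rgb = np.zeros((h, w, 3))
    rgb[0::2, 0::2, 0] = raw[0::2, 0::2]   # red photosites
    rgb[0::2, 1::2, 1] = raw[0::2, 1::2]   # green photosites (even rows)
    rgb[1::2, 0::2, 1] = raw[1::2, 0::2]   # green photosites (odd rows)
    rgb[1::2, 1::2, 2] = raw[1::2, 1::2]   # blue photosites

    def fill_channel(ch):
        """Fill unsampled positions with the mean of sampled 3x3 neighbors."""
        out = ch.copy()
        for y in range(ch.shape[0]):
            for x in range(ch.shape[1]):
                if out[y, x] == 0:                      # 0 marks "not sampled here"
                    win = ch[max(y-1, 0):y+2, max(x-1, 0):x+2]
                    out[y, x] = win[win > 0].mean()
        return out

    for c in range(3):
        rgb[:, :, c] = fill_channel(rgb[:, :, c])       # 16 photosites -> 16 RGB pixels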
The images above are just two of many examples of photo sensors used
by digital cameras. One illustrates a Bayer pattern array of photosites using
a photogate sensor, and the other an interline transfer array of photosites
employing a photodiode lenslet design.
© 2012 Robert C. Hummel III
Special Thanks
To Stephen H. Burum, ASC, Daniel Rosen, Garrett Smith, Evans Wetmore, and
Anne Kemp Hummel in helping bring greater clarity to this discussion.
Anamorphic Cinematography
by John Hora, ASC
ratio is now less than a full 2 times in the horizontal direction when com-
pared to the unsqueezed vertical direction. Since the projection expansion
remains constant at 2 times, close-up objects (such as a human face) will be
overly expanded horizontally by the projector lens, resulting in fat faces or
“anamorphic mumps.”
This can be avoided by leaving the lens at a more distant focus, even at in-
finity, and effecting focus by placing a lens of positive diopter power in front of the
anamorphoser. A method of constructing anamorphics is to manufacture
the lens pre-focused at some ideal distance for the particular lens and then
use a spherical focusing lens system in front to do the actual focusing. This
is similar to the methods used to focus most zoom lenses.
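The diopter trick above follows directly from thin-lens arithmetic: a lens focused at infinity, with a +D diopter in front, comes into sharp focus on objects roughly 1/D meters from the diopter. A sketch (thin-lens approximation only):

    # Focus distance produced by a +D diopter on an infinity-focused lens.
    def focus_distance_m(diopters):
        return 1.0 / diopters

    for d in (0.5, 1.0, 2.0):
        print(f"+{d} diopter -> focus at {focus_distance_m(d):.2f} m")
    # +0.5 -> 2.00 m, +1.0 -> 1.00 m, +2.0 -> 0.50 m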
Panavision successfully eliminated the “mumps” problem by using
counterrotating cylindrical elements. When the lens was set at infinity,
these two elements were aligned to cancel each other and provided no
cylindrical effect. As focus was placed on closer objects, the prime ele-
ments were moved forward to focus the vertical axis; at the same time, the
horizontal axis would be brought to focus by the power obtained by rotat-
ing the two astigmatizers by appropriate amounts in opposing directions.
With this design, rather than the image “pumping” horizontally with focus
changes, the background will appear to stretch vertically. The lack of dis-
tortion on actors’ faces quickly made the Panavision system universally
favored in Hollywood.
On longer focal lengths, and on zoom lenses, a rear anamorphic unit can
be fitted. This is equivalent to the familiar 2X extender attachment, except
that it is again not a spherical lens but rather a cylindrical section. In this
case, it cannot increase the angle of view of the spherical fore section, but
rather is used to stretch the image vertically by a factor of 2 so that when
expanded horizontally two times by the projection lens, images will have
the correct geometry. Since the field of view is not increased horizontally
but rather decreased vertically, the focal length is described as that which
would be equivalent to a spherical lens with the same vertical field of view,
i.e., equal to two times the lens’ original focal length. Thus, a 20–100mm
becomes a 40–200mm zoom. A 200mm prime would be labeled a 400mm.
This keeps the terminology consistent. Also, the 2X vertical stretch will
result in a loss of light of one stop, which is half what a conventional 2X
extender would have required, since the expansion of the image is only
along one axis, the vertical.
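The focal-length relabeling and the one-stop loss can be checked with a few lines (a sketch of the arithmetic, not of any specific rear anamorphic unit):

    import math

    SQUEEZE = 2.0   # rear anamorphic: 2x vertical stretch

    # Equivalent focal length is matched on the vertical field of view:
    for f in (20, 100, 200):
        print(f"{f}mm reads as {f * SQUEEZE:.0f}mm")   # 40, 200, 400

    # Light loss in stops is log2 of the image-area increase:
    print(math.log2(SQUEEZE))        # 1.0 stop (stretch along one axis only)
    print(math.log2(SQUEEZE ** 2))   # 2.0 stops for a conventional 2X extender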
As a rule, short lenses are anamorphosed in front of the spherical compo-
nents, and long lenses in the rear.
Other methods of compressing the image include the use of prisms and
mirrors. Two wedge prisms may be placed before a spherical prime so
that they will compress the image in one direction. This system allows the
John Hora is a member of the ASC Board of Governors. His many credits as a
cinematographer include the feature films The Howling, Twilight Zone: The
Movie, Gremlins and Honey, I Blew Up the Kid.
Exposure Meters
by Jim Branch
Incident-Light Meters
These meters are normally used at the location of the photographic
subject. They measure the light that is effective in illuminating the subject,
and provide an answer in terms of f-stop or T-stop for the camera lens. The
camera lens diaphragm opening is then set to match the effective intensity
of the prevailing illumination.
When the film is exposed, the various reflectances presented by the sub-
ject will then each fall into a given place in the film acceptance range. For
example, a face tone of 30% reflectance will fall into the 30% reflectance po-
sition in the film acceptance range. This method thus provides consistently
uniform face tones from scene to scene.
The incident-light meter accomplishes its purpose by doing two things. It
measures the incident light intensity at the location of the photographic sub-
ject. It also takes into account the conditions of illumination geometry—that
is, whether the subject has front key light, side key light, or a back key light.
The meter combines these factors and gives an answer in terms of the correct
setting for the camera’s lens diaphragm.
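For readers who want to see the step from measured light to f-stop, the incident-meter relationship can be sketched as follows. This uses the ISO 2720 form of the exposure equation with a calibration constant of about 250 for a hemispherical receptor; actual meters vary, so treat the numbers as illustrative:

    import math

    def incident_f_stop(lux, iso, exposure_time_s, C=250.0):
        """ISO 2720 incident form: N^2 = E * S * t / C."""
        return math.sqrt(lux * iso * exposure_time_s / C)

    # Cine exposure time at 24 fps with a 180-degree shutter is 1/48 second:
    t = 1.0 / 48.0
    print(f"f/{incident_f_stop(1600, 100, t):.1f}")   # roughly f/3.7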
There are several makes of incident-light meters that use a three-dimen-
sional light collector. The hemispherical light collector allows these meters
to automatically perform the dual function described above.
These incident-light meters are normally used at the position of the prin-
cipal subject, with the hemisphere pointed at the camera lens. The hemi-
sphere then acts as a miniature face of the subject. All illumination that will
be effective on the subject, including key light, fill light, line light, hair light,
eye lights, etc., will be received, evaluated and integrated by the meter. The
meter will then indicate directly the correct f-stop or T-stop for the camera
lens. Incident-light meters are particularly useful because they may be used
on a scene before the principal subject appears. They may also be carried
through a scene, with the hemisphere always pointed at the camera lens, to
detect uneven illumination and particularly hot spots, into which the subject
may move during the action. This allows the scene illumination to be suit-
ably balanced before the principal subject is at hand.
In the case of outdoor photography, it is not always necessary to take the
meter to the location of the principal subject. Under such conditions, the
illumination is usually uniform over considerable areas. If the illumination
is the same at subject location and at camera location, the meter may be used
at camera location. Care should be exercised to point the meter in the proper
direction, as though it were at the subject location.
In general, exposure meters are either analog (with a needle) or digital.
The introduction of the analog incident meter with the 3-D light-collecting
hemisphere revolutionized the method of determining proper exposure for
the cinematographer.
Today, a number of companies throughout the world manufacture expo-
sure meters that employ the basic incident-type principles in their design,
but all due credit for the invention of this meter should be given to ASC
Associate Member Don Norwood, who patented it, and Karl Freund, ASC,
who was instrumental in its development. Most incident meters are provided
with suitable adapters so that they may be converted for use as a reflected-
light meter if the occasion should require it. The reflected-light adapter can
be used in a situation where the cinematographer encounters difficulty in
putting the meter into a position to read either the illumination directly on
the subject, or illumination similar to that on the subject. For example, such
a situation might be encountered when taking a picture out of the window of
an airliner in flight. The reflected-light attachment can also be used in other
situations to evaluate the relative brightness of a background.
Special Effects
When a special effect is desired, the cinematographer may use the incident-
light meter to first determine normal exposure for the subject. He may then
deliberately modify that value, up or down, to achieve the desired effect. This
can be done with considerable confidence, because the incident light meter
will give a firm foundation upon which to base the desired modification.
Specific Situations
There are some situations, occasionally encountered in outdoor photogra-
phy, that require special attention.
1. Unusually light or dark backgrounds are cause for consideration. When
a scene includes an unusually light background, the cinematographer
may wish to first use the meter as an incident-light meter to determine
the basic exposure for the principal subject in the foreground. Then he
can convert the meter to a reflected-light meter in order to measure the
brightness of the unusual background. The second reading is then used to
modify the basic incident-light reading somewhat. The same procedure
could be followed in the case of an unusually dark background.
2. Outdoor scenes that include a subject in the foreground as well as distant
objects, such as mountains, in the background, usually also include con-
siderable aerial haze. This haze may be invisible or only partly visible to
the eye, but strongly visible to the camera. A frequent photographic result
is a recording of the aerial haze overlaid on the scene background, which
gives the appearance of an overexposed background. In such a situation, a
haze-cutting filter should be used to improve the background. In addition,
use the procedure previously described for the case of an unusually light
background.
3. Scenes consisting of a mixture of sunshine and shade areas, with the principal
subject in a shade area, can be handled by: (a) using the meter in the sunshine
area, or (b) opening up the lens by ½ to ⅔ f-stop from the meter indication (a
sketch of this arithmetic follows below).
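A minimal sketch of that adjustment (opening up by s stops divides the f-number by the square root of two raised to s):

    # Open up the lens from a metered f-number by a given number of stops.
    def opened_up(f_number, stops):
        return f_number / (2 ** (stops / 2))

    # Meter reads f/5.6 in the sunshine area; the subject sits in shade:
    for stops in (0.5, 2 / 3):
        print(f"open {stops:.2f} stop -> f/{opened_up(5.6, stops):.1f}")
    # 0.50 stop -> f/4.7; 0.67 stop -> f/4.4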
Reflected-Light Meters
A spot meter may be used at camera location and aimed at a selected spot
in the scene. The effectiveness of the meter is heavily dependent on the opera-
tor’s judgment in the selection of the spot. The selected spot must be precisely
representative of the particular combination of elements in the composition
of the scene. When using such a meter, the operator must be particularly
careful when confronted with a scene that presents strong contrasts between
the selected spot and the scene background. An example of such a situation
would be a case where a person in the foreground is in front of a very light
background, such as sky or white buildings, etc. In such a situation, the op-
erator should modify the spot reading provided by the meter according to
his own estimate of the situation. When the use of a reflected-light meter is
required, the results of determining the exposure can be greatly improved by
using a “Kodak Neutral Test Card.”
This card is a piece of sturdy 8" x 10" cardboard that is neutral gray on one side
and white on the other. The gray side reflects 18% of the light falling on it, and
the white side reflects approximately 90%. Also, the gray side has a protective
lacquer overcoat that reduces specular reflectance and resists damage due to
fading, fingerprints, soil, etc. To a light meter, an average scene is one in which
the tones, when averaged, form a tone brightness that is equivalent to middle
gray—a tone that reflects 18% of the light illuminating it (the same tone and
reflectance of the gray card). When a scene is not average, the gray card is a
reference that helps you make the proper exposure judgments. A Kodak Gray
Card is manufactured under close tolerances to provide a neutral gray-side
reflectance of 18% (±1%) and white-side reflectance of approximately 90%.
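A quick check of the card’s arithmetic, in stops relative to middle gray:

    import math

    gray, white = 0.18, 0.90
    print(f"{math.log2(white / gray):.2f} stops")   # white side ~2.32 stops above gray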
Testing
Small errors may exist in meters, lens calibrations, emulsion speeds and
development. These small errors will frequently cancel out without undue
harm to the final picture. It is when these errors add up in the same direction
that their cumulative effect is serious. It is therefore wise to test equipment,
film and meters under simulated production conditions so that errors may
be detected and corrected before production begins. It is always a good idea
to “tune up to the variables.”
Much of the material in this section of the manual is basic, but reference
should be made to ASC Associate Member Don Norwood and Eastman
Kodak Co. for the gray card information.
Exposure Meters
Gossen Starlite
Type: Handheld exposure meter for measuring ambient and flash and incor-
porating both incident and spot meter reading ability.
Light Sensor: 2 silicon photo diodes
Measuring Capability:
Measuring Range: Ambient light: Incident (at ISO 100/21°): EV -2.5 to +18;
Reflected with 1°: EV 2.0 to 18; Reflected light with 5°: EV 1.0 to 18. Flash
Light: Incident (at ISO 100/21°): f/1.0 to f/128; Reflected light with 1°:
f/2.8 to f/128; Reflected light with 5°: f/1.4 to f/128.
Gossen Color-Pro 3F
Type: Handheld digital 3-color meter for ambient and flash; determines
photographic color temperature of light sources and filtration required.
Light Sensor: 3 balanced silicon photodiodes for ambient and flash.
Measuring Range: 2000 to 40,000 degrees Kelvin.
Light Balancing Filters: -399 to 475 Mired Scale, switchable to correspond-
ing Kodak Wratten filters.
CC filter Values: 0 to 95 Magenta and 0 to 06 Green.
Power Source: 9V MN1604 or equivalent.
Dimensions: 5" x 2¾" x 1"
Weight: Approximately 4.5 ounces.
Gossen Luna-Star F2
Type: Handheld exposure meter for measuring ambient and flash in both
incident and reflected light (with 5° Spot attachment).
Light Sensor: SBC photodiode, swivel head
Measuring Range: Ambient light (at ISO 100/21°): EV -2.5 to +18; Flash
Light (at ISO 100/21°) f/1.0 to f/90.
Measuring Angle in Reflected Mode: 30°
ISO Film Speeds: 3/6° to 8000/40°
Camera Cine Speeds: 8–64 fps, as well as 25 fps and 30 fps for TV.
Shutter Speeds: 1⁄8000 sec. to 60 min.
Flash Sync Speeds: 1 to 1⁄1000 sec., as well as 1⁄90 sec.
F-Stops: f/1.0 to f/90.
Power Source: 9V battery.
Dimensions: 2¾" x 5" x 1"
Weight: Approximately 4.5 ounces.
Gossen Luna-Pro F
Type: Handheld analog exposure meter for measuring ambient and flash light.
Light Sensor: SBC photodiode.
Measuring Range: Incident Light (at ISO 100/21°): EV -4 to +17; Flash Light
(at ISO 100/21°) f/0.7 to f/128.
ISO Film Speeds: 0.8/0° to 100,000/51°
Camera Cine Speeds: 4.5–144 fps.
Shutter Speeds: 1⁄4000 sec. to 8 hours.
Flash Sync Speeds: 1⁄60 sec.
F-Stops: f/0.7 to f/128.
Power Source: 9V battery.
Dimensions: 2½" x 4⅝" x ¾"
Weight: Approximately 3.3 ounces.
Gossen Luna-Pro S
Type: Handheld analog exposure meter for measuring ambient sun and
moon incident and reflected light.
Light Sensor: Photoresistance (CdS)
Measuring Range: Incident Light (at ISO 100/21°): EV -4 to +17.
Measuring Angle in Reflected Light Mode: 30° (with Tele attachment).
Gossen Ultra-Spot 2
Type: Handheld Spot meter for measuring ambient and flash light.
Light Sensor: SBC photodiode.
Measuring Range: Ambient Light (at ISO 100/21°): EV -1 to +22; Flash Light
(at ISO 100/21°) f/2.8 to f/90.
Measuring Angle of Reflected Light: Viewfinder (15°), metering field (1°).
ISO Film Speeds: 1/1° to 80,000/50°
Camera Speeds: 8-64 fps, as well as 25 fps and 30 fps for TV.
Shutter Speeds: 1⁄8000 sec. to 60 min, as well as 1⁄90 sec.
Flash Sync Speeds: ⅛ sec. to 1⁄1000 sec., as well as 1⁄90 sec.
F-Stops: f/1.0 to f/90.
Power Source: 9V battery.
Dimensions: 3½" x 2¼" x 7½"
Weight: Approximately 12 ounces.
Minolta Cinemeter II
Type: Handheld digital/analog incident meter.
Light Sensor: Large area, blue enhanced silicon photo sensor. Swivel head,
270 degrees.
Pentax Spotmeter V
Measuring Range: EV 1–20 (100 ASA).
Film Speeds: ASA 6-6400
Shutter Speeds: 1⁄4000 sec.-4 min.
F-Stops: f/1 to f/128.
EV Numbers: 1-19⅔; IRE 1-10.
Measuring Angle: 1 degree.
Measuring Method: Spot measuring of reflected light; meter switches on
when button pressed; EV direct reading; IRE scale provided.
Exposure Read Out: LED digital display of EV numbers (100 ASA) and up
to 2 dots (each of which equals ⅓ EV).
Photosensitive Cell: Silicon Photo Diode.
Power Source: 6V silver-oxide battery.
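Since several of the meters listed here read out directly in EV, the underlying relationship is worth a short sketch. At a given ISO, EV ties aperture and exposure time together; the ISO 100 form and the speed correction are shown below (illustrative only):

    import math

    def ev(f_number, exposure_time_s):
        """EV = log2(N^2 / t), conventionally quoted at ISO 100."""
        return math.log2(f_number ** 2 / exposure_time_s)

    print(round(ev(4.0, 1 / 48), 1))   # ~9.6: f/4 at 24 fps, 180-degree shutter
    # For other film speeds the scale shifts by log2(ISO / 100):
    print(round(ev(4.0, 1 / 48) + math.log2(500 / 100), 1))   # same settings at ISO 500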
Sekonic L-508C
Type: Handheld exposure meter for ambient and flash incorporating both
incident and spot meter reading ability.
Sekonic L-608C
Type: Handheld exposure meter for ambient and flash incorporating both
incident and spot meter reading ability.
Light Sensor: Silicon photo diodes (incident and reflected)
Measuring Range: Incident Light: EV (-) 2 to EV 22.9 @ 100 ISO
Reflected Light: EV3 to EV 24.4 @ 100 ISO
Measurement Modes: Footcandle 0.12 to 180,000, Lux 0.63 to 190,000;
Cd/m2 1.0 to 190,000; Foot-lambert 0.3 to 190,000
Display Mode: Digital f/0.5 to f/128.9 (in ⅓-stops); Analog f/0.5 to f/45
(in ⅓-stops)
ISO Film Speed: ISO 3 to ISO 8000 (1⁄3-stops)
Camera Speed: 1, 2, 3, 4, 6, 8, 12, 16, 18, 24, 25, 30, 32, 36, 40, 48, 50, 60, 64,
72, 75, 90, 96, 100, 120, 125, 128, 150, 200, 240, 250, 256, 300, 360, 375,
500, 625, 750, 1000 fps
Shutter Angle: 5°–270° in 5° steps, plus 144° and 172°
Shutter Speeds: 30 min. to 1⁄8000 sec. (full, ½- or ⅓-stops)
F-Stops: f/0.5 to f/128.9 (full, ½- or ⅓-stops)
Filter Factors: 85, N.3, N.6, N.9, 85N.3, 85N.6, 85N.9
Memory Function: 9 readings on analog scale (f/stop and shutter speed)
with memory recall and clear feature
Accuracy: ±0.1 EV or less
Additional Functions: Digital f-stop and shutter speed readout in viewfind-
er; Parallax-free rectangular 1–4 spot zoom with digital display. Shutter
speed and aperture are displayed in viewfinder; Retractable incident
Lumisphere for dome or flat disc readings; Digital Radio Transmitter
Module that eliminates the need for an additional external transmit-
ter at the meter’s position; 12 Custom Function Settings for advanced
preferences and features.
Power Source: 3.0V (CR123A lithium battery)
Dimensions: 3.5" W x 6.7" H x 1.9" D (90mmW x 170mmH x 48mmD)
Weight: 9½ ounces (268g)
Sekonic L-308BII
Measuring System: Incident or reflected for flash and ambient light; Silicon
photo diode.
Measuring Modes: Ambient and flash (cord, cordless) – incident and
reflected (40 degrees)
Receptor Head: Nonrotating, noninterchangeable.
Aperture/Shutter Priority: Shutter speed priority.
Display Read-out: Digital LCD
ISO Range: ISO 3 to 8000 in ⅓-stop increments.
F-Stops: f/0.5 to f/90.9
Shutter Speeds: Ambient: 60 sec.–1⁄8000 sec.; Flash: 1 sec.–1⁄500 sec.
EV Range: (ISO-100) EV(-) 5 to EV 26.2
Camera Speeds: 8–28 fps.
Power Source: 1.5V AA battery.
Dimensions: 4.3" x 2.5" x .9" (110 x 63 x 22mm) WDH
Weight: 2.8 ounces (80 g) without battery.
Sekonic L-398M
Measuring System: Incident light type, reflected light measurement is also
possible
Measuring Modes: Ambient incident and reflected
Receptor Head: Rotating, interchangeable receptor.
Display Readout: Indicator needle
ISO Range: 6 to 12,000
Measuring Range: EV 4 to EV 17 (incident light); EV 9 to EV 17 (reflected light)
F-Stops: f/0.7–f/128
Shutter Speeds: Ambient: 1⁄8000 to 60 sec.; Flash: None.
EV Range: (ISO-100) EV 4 to 17
Camera Speeds: 8–128 fps
Power Source: Selenium photocell (no battery needed)
Dimensions: 4.4" x 2.3" x 1.3" (112 x 58 x 34mm) WDH
Weight: 6.7 ounces (190 g)
Sekonic L-358
Measuring System: Incident: Dual retractable lumisphere, Reflected: with
included reflected light attachment; Silicon photo diodes
Measuring Modes: Ambient and flash (cord, cordless, multi flash)—
incident and reflected (54 degrees).
Receptor Head: Rotating 270 degree with built-in retractable lumisphere.
Aperture/Shutter Priority: Aperture and shutter priority
Display Readout: Digital LCD plus LCD analog, (auto-backlit LCD at EV 3
and under for 20 sec.)
ISO Range: Dual ISO settings: 3 to 8000 (⅓-steps)
F-Stops: f/1.0 to f/90.9 (full, ½- or ⅓-steps)
Shutter Speeds: Ambient: 1⁄8000 sec. to 30 min.; Flash: 1⁄1000 sec to 30 min.
EV Range: (ISO-100) EV -2 to 22.9
Camera Speeds: 2–360fps
Exposure Memory: Capable of nine exposure measurement readings
Shadow/Highlight Calculation: Yes
Brightness Difference: Displays the difference in 1⁄10-stop increments
Flash To Ambient Ratio: Yes
Multiple Flash: Yes, unlimited
Exposure Calibration: ±1.0 EV
Power Source: One CR123A lithium battery
Dimensions: 2.4" x 6.1" x 1.46" (60 x 155 x 37mm) WHD
Weight: 5.4 oz (154 g)
Sekonic L-558
Measuring System: Dual function retractable incident lumisphere; 1° spot
viewfinder; Two silicon photo diodes (SPD).
Measuring Modes: Ambient and flash (cord, cordless, multi-flash) – inci-
dent and spot (1°).
Metering Range: Ambient Incident Light: EV -2 to EV 22.9; Reflected Light:
EV 1 to EV 24.4; Flash Incident Light: f/0.5 to f/161.2; Reflected Light:
f/2.0 to f/161.2
Receptor Head: Rotating 270 degrees; with built-in retractable lumisphere.
Aperture/Shutter Priority: Aperture, shutter priority and EV metering value
Display Readout: Digital LCD plus LCD analog, (Auto-backlit LCD at EV
6 or under for 20 seconds)
ISO Range: 3 to 8000 (in ⅓-stop steps)
F-Stops: f/0.5–f/128.9 (full, ½- or ⅓-stops); Under and Overexposure indi-
cation.
Shutter Speeds: Ambient: 30 min. to 1⁄8000 sec. (full, ½- or ⅓-stops, plus 1⁄200
and 1⁄400); Flash: 30 sec. to 1⁄1000 sec. (Full, ½- or ⅓- stops; Special flash
speeds: 1⁄75, 1⁄80, 1⁄90, 1⁄100, 1⁄200, 1⁄400)
Spectra Professional IV
Type: Handheld exposure meter for measuring incident and reflected light.
Light Sensor: Silicon Photovoltaic cell, computer-selected glass filters tai-
lored to spectral response of the film. Swivel head, 270 degrees.
Measuring Capability: Direct readout of photographic exposures. Also
measures illuminance level in foot-candles and Lux.
Measuring Range: One million to one (20 f-stops) direct-reading multiple-
range linear circuit controlled by microcomputer.
Display Range: ISO film speed: 3 to 8000 in ⅓-stop increments.
Camera Speeds: 2–360 fps.
Resolution: Digital: 0.1 f-stop. Analog: 0.2 f-stops.
Accuracy: Digital: 0.05 f-stop.
Additional Functions: Memory store and recall.
Lamp: Optional electroluminescent lamp for backlit liquid-crystal display.
Power Consumption: Operating (reading) 5mA. Data retention 5µA.
Power Source: 6V battery (A544, PX28L or PX28).
Estimated Battery Life: Approximately 1 year with normal use.
Dimensions: 5½" x 2½" x 2".
Weight: Approximately 6 ounces.
apply both to cine and still-photographic lenses. The most common ques-
tions pertain to the basic formulation and function of lenses. For example:
i) What is a lens?
ii) What does it do?
iii) What is it made of?
iv) Where and how does it fit into a cine camera system?
To answer such questions, Figure 1 depicts a simple lens configuration,
which may be referred to in the following brief discussion of the pertinent
questions and other points of interest.
Essentially, a cine camera objective lens, or “taking lens,” is tasked with
collecting light (or radiation) emanating from object points in an object
space (within a given field of view, which is dependent on lens effective focal
length and size of image format) and forming image points in an image
space; hence, the name objective lens.
Before we answer the above questions, some definitions are required.
Because cine lenses are all tasked with collecting light (or radiation) from
an object space and relaying this to an image space, and because the object
and image are real and the light or radiation is almost always in the visible
spectrum (as seen or nearly seen by the human eye), the following discus-
sion of lenses refers to visible waveband objective lenses. Therefore, all cine
lenses may be classified as objective lenses that collect visible light from a
real object in front of the lens (anywhere from close to the lens to infinity
distance) and form a real image of the light somewhere after the lens.
A lens is an image-forming device, normally refractive but sometimes
reflective, which collects light emanating from a real object and, by virtue of
its refracting or reflecting properties, forms a real image. Lens systems may
be refractive or reflective or a combination thereof. Systems that are either
partly or totally reflective, which are quite popular in long focal-length still-
photography lenses and astronomical telescopes because of their compact-
ness or efficiency at collecting light, are uncommon in cine lenses for one
major reason. A reflective or partly reflective system (with coaxial optics)
depends on at least two mirrors to change or reverse direction of the light
from object space before it reaches a real image (see Figure 2a). To achieve
this, such an optical system, with mirrors aligned on a common optical axis,
must involve a central obscuration so that, in the central portion of light
beams, light is vignetted and not transmitted to the final image (see Figure
2a). At first, this condition might not seem important to cinematography;
however, the aesthetic result can be quite unacceptable to a cinematogra-
pher. To explain this, imagine a night scene with two street lamps, one at
six feet and in focus, and one at 20 feet and considerably out of focus. With
a refractive lens system, the result is as expected: one lamp sharp, and one
lamp soft with a blurred image. However, in the case of a reflective or partly
Figure 1: A simple lens configuration: an axial beam of light from object space (object
at infinity) passes through the stop/iris to the image surface at the film/camera face-
plate, with the focal length and back focal length indicated. The Lagrange invariant:
n u h = n' u' h', where h = object height, h' = image height, n = refractive index (in
object space), n' = refractive index (in image space), u = tan (angle) in object space,
and u' = tan (angle) in image space.
Figure 2a: A coaxial reflective system, in which the central obscuration blocks the
central portion of the light beam(s).
reflective system, one lamp is sharp and the other is soft, but in the form of a
blurred, donut-shaped image that does not look normal or realistic (see Fig.
2b). Sometimes such a result is visually acceptable, maybe even appealing.
However, for the majority of filming situations, this characteristic or effect of
reflective (sometimes referred to as catadioptric) lens systems makes them
unappealing to cinematographers.
The question “What is a lens made of?” (now restricted to refractive lens
systems) is answered as follows: Objective lenses, including cine lenses,
comprise one or more lens elements in a series that refract or “bend light” at
their air-to-element surface interfaces. The lens element refractive materials
can be glass, plastic, crystalline, liquid or even chemically vapor-deposited
(CVD) materials (such as zinc sulfide, which transmits light from the visible
to far infrared wavebands).
For the most part, glasses are the predominant refractive medium or
substrate, mainly because their optical and mechanical characteristics are
superior and consistently precise; this is extremely important in the field
of cinematography, where lens-performance requirements are very de-
manding. Therefore, cine lenses are almost always made of glass elements,
4. The use of calcium fluoride is best avoided in telephoto cine lenses because it is
highly sensitive to temperature changes of even a few degrees Fahrenheit (or Celsius)
and can be expected to produce significant defocusing that may be troublesome in
obtaining and maintaining sharp focus of objects over a short period of time (1–5
minutes).
In the case of old wide-angle, short focal-length lenses, the back focal length
was normally smaller than their focal length, which would make them in-
compatible with modern reflex cameras. However, all of the light that is lost
has to go somewhere, and even in the best lens designs, some of it would,
by successive lens-element surface reflections, head toward the film, causing
ghosting and/or veiling glare. To aggravate matters even more, these slow
lenses of T3.6–T5.6 full aperture, coupled with the insensitivity of film stock,
say ASA 50, meant that huge amounts of light were required to illuminate a
scene—good for the lighting supplier but trouble for the cinematographer,
especially in terms of ghosting and veiling glare. Still, the cinematographer
benefitted from one great, indeed overwhelming advantage—a larger depth
of field than he/she is accustomed to now. So these early cine lenses got close
to the film, were necessarily simple in construction (no coatings), and due
to their lack of speed (aperture) performed well (because of good aberration
correction at their full aperture, albeit with careful lighting). A sampling of
these old lens forms is depicted in Figure 3, which includes their well-known
technical or inventor names.
Of course, modern cine cameras are virtually all reflex because of their
need to provide continuous line-of-sight, through-the-lens viewing to the
camera operator. What this means for the lens is that its rear element must
be located some distance in front of the film as predicated by the reflex mir-
ror design of the camera. Fortunately, by the 1950s the previously discussed
transmission problem had been remedied by the introduction of thin-film
technology that ushered in anti-reflection coatings. More complex lens con-
figurations, containing anywhere from ten to twenty elements, were now
considered practical, and the fixed focal-length lens (or prime) suddenly had
a partner—the zoom lens. Both lens types still had to deal with a large back
focal-length distance, but this was now easily managed because complex
lens arrangements were feasible. Even those troublesome wide-angle lenses,
now sitting at a film distance mostly exceeding their focal lengths, could be
relatively easily constructed.
Even though the post-1950s cine lenses were substantially better than their
predecessors, they had one additional demand—faster speed, i.e., greater
aperture. Although film-stock sensitivity had gradually improved, low-light
filming situations had increased, thus requiring cine lenses of full aperture
T1.3-T1.4 and sometimes T1.0 or less. Fortunately, or perhaps with good
timing due to demand, glass technology started to improve substantially in
the 1960s. The first major effect on cine lenses was the realization that those
fast-aperture lenses were now possible due to high refractive index glasses.
However, aberration correction was still limited, especially at T1.3-T1.9 aper-
tures. By the early 1980s, glass technology had improved so much that aber-
ration correction, even in lenses of T1.9 full aperture and, to a lesser extent,
Figure 3: A sampling of old lens forms (e.g., the Petzval). For each lens, object space
is to the left, image space is to the right (light travels left to right), and the image is
formed at surface S. Depending on the design, the focal length F may be less than,
more than, or much less than the lens length L.
T1.3, was approaching the maximum theoretical limit, even after allowing
for all other lens design constraints such as length, diameter, weight, cost, etc.
Perhaps more significantly, zoom lenses could be designed to perform as
well as prime lenses but were still of greater size, weight and cost. Of course,
it is easy to draw comparisons with the still-photography market, but this is
misleading because the performance requirements of that market are nor-
mally lower than those of the cine arena. Nevertheless, advancements
in the still-photography lens market are a good indication of where cine
lenses might go. One important area of distinction between still and cine
lenses is in the mechanical design. Whereas still lenses are intended for a
consumer (amateur) market, cine lenses address an industrial (professional)
market. The mechanical requirements placed on the latter dictate greater ac-
curacy, reliability and higher cost than is necessary for the former. Precision
lead screws (or threaded mated parts) have, for some time, been the norm
in prime cine lenses, but they are slowly being supplanted by linear-bearing
technology in some primes and many zooms. Zooms are the main benefi-
ciary of linear-bearing technology because they have at least two moving zoom
groups and one focus group, all of high optical power requiring precision
alignment and maintenance thereof. Just like in the still-photography mar-
ket, the cost of all the technologies so far described means that in the field
of cine lenses, zooms are likely to eventually dominate over prime lenses,
except in extreme applications such as very wide-angle, fisheye, or long
focal-length lenses.
Another optical technology in its infancy is the design and manufacture
of cine lenses utilizing aspherical surfaces. These axially rotational, sym-
metrical, nonspherical surfaces, which have been used in infrared waveband
military systems (e.g., thermal imagers) since the 1970s, are only now being
introduced in cine lenses. Manufacturing and assembly techniques have im-
proved to the extent that several cine zoom lenses employing one aspherical
surface, and even one cine zoom lens employing two aspherical surfaces,
are now available for use by the cinematographer. The aspheric technology
utilized in cine lenses should not be confused with that used in inferior-per-
formance still-photography lenses. Extremely high-precision, ground and
polished glass aspherical surfaces are needed for cine lenses to achieve the
high-performance imaging expected, but the much-advertised, aspherically
surfaced still-photography lenses depend on essentially low-quality, low-
cost, molded and replicated lens elements. Many other optical technologies
that are perhaps relevant to cine lenses could be described, such as gradient
index glasses (i.e., GRINS), diffractive or binary surfaces, and holographic
elements, but for the time being aspherical-surface technology is the most
promising (at least until the next edition of the ASC Manual). Figure 4 il-
lustrates several modern cine-lens optical designs.
Figure 4: Several modern cine-lens optical designs, including an inverse telephoto
(retrofocus) wide-angle lens (21mm, T1.9) and a 40mm design, plus zoom designs
with their focus and zoom groups indicated.
the mid-1950s but died out by the early 1970s. Just consider that over the last
thirty years, the vast majority of 35mm format anamorphic movies have been
shot using anamorphic lenses with a 2X horizontal squeeze covering a 1.2:1
film negative format, giving 2.40:1 presentation. The originator of the ana-
morphic format, CinemaScope, in the early 1950s defined it as an even larger
2.55:1, but this quickly disappeared when the 2.35:1 format was introduced.
By the early 1970s, these anamorphic formats and others were superseded by
the widely adopted 2.40:1 format, which became and is presently the de facto
standard in anamorphic cinematography (see SMPTE 59-1998).
Before getting into format and lens specifics, it should be mentioned
that detailed information about format image size, area, etc., can be found
elsewhere in this manual (see Cinematographic Systems chapter). Also, to
discuss the effect of specific formats on lenses, it is necessary to explain some
elementary theory about film formats and lenses. Referring to Figure 5a, it
can be seen that if the same focal-length lens, set at a constant aperture, is
used in three widely differing image format diagonals, then the fields of view
are entirely different. Now, let’s say the focal lengths of the lenses (still at a
constant aperture) are selected for constant fields of view as shown in Figure
5b. Then, upon projection of each image (after processing to a print) on a
constant-size viewing screen, it would be apparent that the in-focus objects
would look the same. However, for out-of-focus objects it would be clearly
apparent that the depths of field are quite different. This result is extremely
important to the cinematographer, not only because of the artistic impact,
but also because apart from changing the lens focal length and hence field
of view and perspective, nothing can be done to the lens design to alter this
result. The optical term “Lagrange invariant” (an unalterable law of phys-
ics) has been defined (see Figure 1), and the aforementioned result is a
direct consequence of it. In Figure 5b, its controlling effect on field of view
(perspective), focal length and depth of field vs. image format size is self-
evident. Only one real option is available to the cinematographer to alleviate
this condition or even solve it—change the lens aperture. This seems quite
simple until the practicalities of actual shooting are fully considered. How
can you shoot in the 65mm format or, for that matter, the 16mm format and
achieve the same look as for the 35mm format? Some remedies (not cures)
can be implemented, and they are best understood by taking examples from
old and new 65mm-format feature films. It should be understood that be-
cause the 65mm format intrinsically has less depth of field than the 35mm
format for lenses of equivalent field of view, an abundant use of lighting
combined with stopping down the lens enables a similar image to be realized
(see Lawrence of Arabia, Dr. Zhivago and Ryan’s Daughter all shot by Freddie
Young BSC). Also, diffusion filters can help to increase the apparent depth
of field, albeit with some loss of image sharpness.
Figure 5a: The same 50mm focal length, at a constant aperture, yields entirely differ-
ent fields of view on the 65mm, 35mm and 16mm formats (e.g., 27.8° on 35mm and
11.8° on 16mm). Figure 5b: Focal lengths chosen to give a constant 6.0° field of view
on each format (e.g., 100mm on 35mm, 250mm on 65mm).
Another option to help the 65mm-format depth of field in certain scenes is
the slant-focus lens (see the bar-top scene in Far and Away, shot by Mikael
Salomon, ASC). In comparison, for the 16mm format the greater depth of
field is more difficult to correct, since this implies even faster lenses, which
are not available because there is a theoretical minimum f-stop, f/0.5, below
which no lens can go. Therefore, the
only real solution for the 16mm format is to forego the preferred field of
view and corresponding perspective by changing the focal length of the lens
or working at lesser object distances. Using hard lighting is another approach
that helps somewhat with 16mm format depth of field, but the overall look
may suffer. Electronic enhancement in postproduction is another possibility,
but again, the overall look may suffer.
To conclude, it is fair to say that for 35mm and 65mm film formats, just
about anything can be successfully shot as long as one is willing to accept
the costs involved for, say, lighting. For smaller formats, such as 16mm film
or high-definition video cameras (with ⅔-inch detectors), the main limita-
tion is too much depth of field in low-light-level situations where the lens
aperture cannot realistically and practically be less than T1.2–T1.5. Only
faster lenses or a retreat to larger formats, be they film or electronic, will
completely solve the depth of field issue. Of course, what is or is not deemed
acceptable in terms of the depth of field or look of the picture has so far
been determined by what is expected. In other words, it is highly influenced
by past results. Future results, especially with digital-video cameras and
lenses, might look different, and over time might become quite acceptable.
So maybe the depth of field concerns will disappear. In the meantime, the
Lagrange invariant, just like Einstein’s theory of relativity, cannot be broken,
so lens depth of field, perspective and look are inextricably linked to and gov-
erned by the format size.
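A rough numerical check of this format effect, using the standard hyperfocal-style approximation for depth of field. The focal lengths below are matched for roughly equal fields of view, and the circles of confusion are conventional per-format values (chosen so each format resolves similarly on screen); the point is the trend, not the exact figures:

    # Total depth of field for a subject at distance s (s much less than the
    # hyperfocal distance): DoF ~ 2 * s^2 * N * c / f^2.
    def total_dof_m(f_mm, N, c_mm, s_m):
        f, c = f_mm / 1000.0, c_mm / 1000.0
        return 2 * s_m**2 * N * c / f**2

    # Matched fields of view at T2.8, subject at 3 m:
    for fmt, f_mm, c_mm in (("16mm", 25, 0.015), ("35mm", 50, 0.025), ("65mm", 100, 0.050)):
        print(fmt, round(total_dof_m(f_mm, 2.8, c_mm, 3.0), 2), "m")
    # 16mm ~1.21 m, 35mm ~0.50 m, 65mm ~0.25 m: larger format, less depth of field.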
Anamorphic vs. spherical depth of field will be covered later in this chap-
ter. Also, the deliberate omission of the circle of confusion in the preceding
discussion about depth of field is because it has no bearing on different film
formats that have similar resolution capabilities, especially when viewed
eventually on a cinema screen. The circle of confusion is a purely mathemati-
cal value used to determine an estimate of expected or effective or apparent
depth of field, but that’s all, and it should only be used for that purpose.
in the process of film negative to release print. Since the late 1950s, the ana-
morphic film format has been about 59% greater in negative film area than
the spherical 1.85:1 film format. An often-asked question is, what happened
to the original CinemaScope anamorphic lenses? Interestingly, the word
“scope” has survived to this day, even though the terms spherical (i.e., flat)
and anamorphic (i.e., widescreen) are best suited to describe the format dif-
ference. There are many reasons, mostly economic or business-related, as to
why CinemaScope lenses disappeared by the mid-1960s. Some aspects of the
early lenses did have technical deficiencies, and these are worth expanding
upon.
Early anamorphic lenses produced one particularly disconcerting, focus-
related image characteristic, which caused several problems with both actors
(especially actresses) and the camera crew. The problem was “anamorphic
mumps,” a well-known term coined by movie production people. A good
example of this is to consider an actress (speaking her lines) walking from,
say, 20 feet to 5 feet (i.e., full-body to facial shot) while the camera assis-
tant or focus puller does a follow focus to keep her in focus at all times.
Assuming a high-quality anamorphic prime lens was used (CinemaScope
circa 1955-1965), the face of the actress would naturally increase in size as
she approaches the camera. However, due to lens breathing through focus
(explained in detail later), and more specifically anamorphic lens breathing,
not only did the face of the actress increase in size, but also the face would
become fatter at the close focus. So the breathing effect, or increase in size of
the object, is much greater in the horizontal direction as compared to verti-
cal direction. Obviously, actors were unhappy about this phenomenon, so
early anamorphic pictures had close-up shots at 10 feet instead of 5 feet (to
alleviate the effect). Production companies and camera crews, particularly
the cinematographer, did not like this, because with the old, slow film stocks,
a ton (for lack of a better word) of lighting was required, and the perspective
of the shot was not as it should be. Heat from the vast lighting required also
produced problems, like sweat on the actors’ faces and makeup meltdown.
In 1958, a U.S. patent was issued for an anamorphic lens design that virtu-
ally eliminated this problem, and anamorphic lenses utilizing the patented
invention have been used continuously for more than forty years. They are
the predominant anamorphic lenses used to shoot the majority of widescreen
movies to this day. The importance of these new anamorphic lenses was ex-
emplified by the fact that Frank Sinatra, the main actor in the movie Von
Ryan’s Express, shot by William H. Daniels, ASC, demanded that these lenses
be used. Before leaving this subject, an interesting piece of historical informa-
tion: The first prototype anamorphic prime lenses with reduced “anamorphic
mumps” were used in the 65mm-format film (2.75:1 with 1.25x anamorphic
squeeze) Ben Hur, released in 1959 by MGM and shot by Robert Surtees, ASC.
Figure 6a: The lens “sweet spot,” covering roughly 80% of total image height.
tance are best kept within these sweet spots (at lens apertures approaching
full aperture). It should also be noted that all lenses, spherical and anamor-
phic, tend to perform best beginning at an aperture stopped down by at least
one from their maximum aperture opening and up to, say, an aperture of T11
to T32 depending on lens focal length (i.e., T11 for very short focal-length
lenses, T32 for very long focal-length lenses).
Until quite recently, a particular problem associated with anamorphic
prime lenses has been their limited ability to provide good-quality imaging at
close focus distances. In fact, all lenses, spherical and anamorphic, are usually
designed to perform best at one distance, and gradually lose image-quality
performance toward infinity focus and especially at close focus. Modern zoom
lenses, spherical and anamorphic, are less afflicted by this problem because
they incorporate complex, usually multiple, internal focus lens groups (de-
scribed later). In the case of anamorphic prime lenses (as opposed to spherical
prime lenses, which do quite well in this respect), there has always been a
trade-off with regard to lens size, weight and image quality at close focus (6 feet
to 3 feet). Indeed, compact, lightweight close-focusing anamorphic lenses of
fairly low image quality have been around since the 1960s, but until recently
no anamorphic lenses, large or small, could provide good image quality over
a focus range from infinity to 2–3 feet with low veiling glare characteristics. In
the mid-1980s, technological advances in lens coatings and fabrication tech-
niques brought spherical prime lenses with the above attributes to the mar-
ketplace. Later, cinematographer (now director) Jan DeBont, ASC suggested
that the combination of these spherical prime lenses with the best anamorphic
optics might produce anamorphic prime lenses of high image quality and
low veiling glare that would compete favorably with the best available spheri-
cal prime lenses. Such anamorphic lenses were produced and first used on
the movie Flatliners, shot by DeBont. Still, the close-focus anamorphic lens
image-quality problem had to be solved. The source of the solution turned
out to be using developments in spherical zoom-lens cam technology. Using
precision cams in modern anamorphic lenses, still based around that 1958
anamorphic lens patent invention, high-quality-imaging anamorphic prime
lenses with substantially reduced veiling glare and close focusing down to 2½
feet or less were produced and are commonly used in the marketplace. They
are still somewhat large and heavy, but they provide the main and preferred
image-quality characteristics required for modern filmmaking.
Since anamorphic (widescreen) and spherical (flat) movies are no longer
identified before being shown in theaters, and many times are not even pre-
sented in their full format ratio, what sets them apart lens-wise? Depth of field
differences previously mentioned can be looked for. Streaking of hot (bright)
objects, especially point objects such as a kick of sunlight off the chrome trim
of a car, is usually more pronounced when anamorphic lenses have been
used. Two good examples of films that purposefully use this characteristic to
intensify the action are Close Encounters of the Third Kind,5 shot by Vilmos
Zsigmond, ASC, where the small alien spaceships with bright lights fly low
along a twisty road at night; and Speed, shot by Andrzej Bartkowiak, ASC,
where the underground runaway train with bright headlights hurtles along
the tracks just before crashing. In both of these scenes, the streaking can be
seen as blue, red, purple and white bright lines, mainly horizontally spread
across the picture but also at other angles. To reduce streaking where bright
point sources are unavoidable, one technique is to introduce a tiny amount
of diffusion, say, by single fog or black Pro-Mist types of filters. Perhaps the
definitive giveaway of an anamorphically lensed movie appears in a scene
shot at night with a streetlamp well out of focus. In a spherical movie the
lamp will be blurred and circular in shape, but in an anamorphic movie the
lamp will be blurred and elliptical in shape. (See Figure 6b.)
In summary, the pace of development in spherical vs. anamorphic prime
lenses has been greatest in the former, and yet the latter has benefited greatly from the
5. In some scenes, the streaking was introduced by visual effects using 65mm format
lenses.
same factors. However, zoom lenses, apart from their zoom capabilities, can
and already do offer considerable advantages over even the best prime lenses.
Primarily due to their greater cost but far better return on investment, as
well as their greater overall complexity, zoom lenses can readily incorporate
advanced features, such as close-to-macro focusing (with virtually constant
aperture throughout focus and zoom) and reduced breathing at short focal
lengths. In particular, the breathing control offered in some modern cine
zoom lenses is important to many cinematographers.
Optical breathing is a phenomenon peculiar to cameras that continually
record images over time, i.e., film or video. It is not present in still photog-
raphy. Breathing is well illustrated by considering a scene containing two
persons talking intermittently with each other, one at 6 feet and one at 20 feet
focus. Let’s say the person at 20 feet (in focus and at the edge of the scene)
first talks to the person at 6 feet (slightly out of focus but quite discernible,
centered in the scene). Then let’s say that during the conversation, the person
at 6 feet (now in focus by refocusing the lens) starts talking to the person at 20
feet (now slightly out of focus), but the person at 20 feet, due to refocusing the
lens, moves out of the scene. This means that the person at 6 feet is talking to
nobody in the scene, thus ruining the take. In other words, breathing, through
change in field of view during focusing, has moved objects at the edge of the
scene into and out of the scene. Patents of zoom-lens inventions dating back
to the late 1950s have addressed this problem, and several modern cine zoom
lenses, sometimes using complex internal focusing arrangements, have suc-
cessfully minimized this effect—especially at short focal lengths, where the
larger depths of field make this effect more noticeable.
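To put a number on breathing for the simplest case, a unit-focusing lens (one that focuses by moving the whole lens away from the film): as the image distance grows at close focus, the field of view narrows. A thin-lens sketch follows; internally focused designs behave differently, which is precisely what the breathing-controlled zooms mentioned above exploit:

    import math

    def half_fov_deg(f_mm, subject_mm, half_width_mm):
        """Half-angle of view of a unit-focusing thin lens at a given focus distance."""
        v = f_mm * subject_mm / (subject_mm - f_mm)   # image distance from 1/f = 1/s + 1/v
        return math.degrees(math.atan(half_width_mm / v))

    f, w = 50.0, 12.0                          # 50mm lens on a ~24mm-wide format
    print(round(half_fov_deg(f, 1e9, w), 2))   # ~13.50 deg focused at infinity
    print(round(half_fov_deg(f, 1800, w), 2))  # ~13.13 deg at ~6 ft: the edges creep in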
Modern cine zoom lenses are available in a variety of forms—lightweight
(2-3:1 zoom ratio), variable prime (2.5:1 zoom ratio) and conventional.
The latter comprises medium ratio (3:1–5:1), wide-angle and telephoto
zoom lenses, e.g., 14.5–50mm T2.2, 17.5–75mm T2.3, 20–100mm T2.8,
135–420mm T2.8, 150–600mm T6.3, and large-ratio (10:1-11:1) wide to long
focal length zoom lenses, e.g., 24–275mm T2.8, 25–250mm T3.5–4.0.
To conclude, prime and zoom lenses complement each other in the field of
cinematography, and virtually all movies now feature prime lenses and one
zoom lens. Many movies use prime lenses and more than one zoom lens;
some movies use only a few prime lenses and are shot almost entirely with
zoom lenses.
Figure: A modern cine zoom lens design, showing the stop/iris, the variator and
compensator loci, and an auxiliary group with an aspherical surface. Effective focal
lengths at infinity focus of 14.5mm, 23.9mm and 50.0mm are shown, with object-to-
front-vertex distances (magnifications) of infinity (INF:1), 208mm (12:1) and 60mm
(3:1).
lenses with unusual features, and some may involve an optical system that
accepts attachment of a variety of primes or zooms.
The Cine Lens list, starting on page 653 in this manual, contains some
of the best-known specialty lenses and systems and identifies their specific
characteristics and possible applications. Some of them are dependent on
folded optical configurations utilizing mirrors or prisms. They are all unique,
but some have overlapping properties. None of them can be construed as
front or rear lens attachments, because they attach directly to the camera.
By far the most significant aspect of these lenses and optical systems is
their ability to achieve in-camera real-time shots not possible with regular
primes and zoom lenses. Other advantages include provision of large depth
of field, extreme close or even macro focusing, and maneuvering among
objects (e.g., miniatures, models, forced perspective).
Some good examples of their shot-making capability can be seen in the
following movie and TV sequences. In Titanic, cinematographer Russell
Carpenter, ASC and visual-effects supervisor Erik Nash used a Panavision
/Frazier lens system and camera, each under motion control, to shoot the
beginning of the last sequence in the movie. Shortly after the woman drops
the gemstone into the ocean from the back of the research ship, a dry-for-
wet scene commences with the camera system approaching a model of the
sunken Titanic hulk (in dark blue lighting), then traversing over the bow of
the ship heading toward the port side, then entering and traveling through
a covered outside walkway, and eventually slowing to a halt after turning
left to see two doors with unlit glass windows, which then are lit and open
to reveal a congregation of people collected together to toast the lead actor
and actress. In this shot, which is far too complicated to describe fully (al-
though it is worth noting that CGI and a Steadicam rig are also involved),
the large depth of field, image-rotation control and pointing capability of
the Frazier lens system are utilized up to the point where the doors open.
Another movie, The Rock, shot by John Schwartzman, ASC, exemplifies the
variety of shots that can be accomplished with the Frazier lens system and
some other specialty lenses. Many of the shots are seen toward the end of
the movie, when Nicolas Cage is being chased on the Alcatraz prison walk-
ways while carrying the deadly green marbles and accidentally dropping,
then grabbing, them on the parapet of the lighthouse tower. In this shot,
the macro focus (close-up of his feet) to the infinity focus (San Francisco
distant skyline) carry of focus (i.e., huge depth of field with no follow focus)
is clearly illustrated. For periscopic specialty lenses, a good example can be
seen in the introduction to public television’s Masterpiece Theatre series,
where the point of view given has the lens working its way through table-top
memorabilia in a Victorian-era drawing room, eventually halting in front of
a hardbound book with the title of the play about to be presented on its front
cover. This particular shot combines the maneuvering, pointing, roll and
close-focus capabilities of the Kenworthy snorkel.
The above examples relate to specialty lens systems where different objec-
tive or taking lenses, primes and zooms, can be attached to an optical unit
which basically relays the light through mirrors and prisms to form a final
image, and wherein pointing and image-rotation means are housed. Many
other lenses or lens systems, some under remote control, such as the pitch-
ing lens, can be used to achieve similar results.
Other specialty lenses, such as slant-focus lenses and bellows-type lenses,
typically have fewer features than those afforded by the aforementioned
lens systems. However, they do offer the cinematographer opportunities
to capture other distinctive views of a scene. For example, the slant-focus
lens tilts the plane of focus, overcoming depth-of-field limitations in low-
light-level scenes where objects span the field of view at continuously
increasing or decreasing focus distances, as in the typical car scene with
the driver and passenger in conversation and the camera looking into the
car from either the passenger or driver window. Bellows-type lenses can
produce shots similar to slant-
focus lenses, but because of their bellows dependency cannot easily be ad-
justed in shot. However, the real advantage of bellows lenses is their ability to
produce a distorted field of view or perspective. In other words, even though
the film-format shape remains constant, objects can be highly distorted and
defocused differentially across the scene. For example, the well-known THX
rectangular-shaped credit could be made trapezoidal in shape, with each
corner defocused by different amounts.
All of these specialty lenses are currently very popular with cinematog-
raphers. Their attributes are quite well-suited to movie making, and they
are used to great effect in TV commercials to get that difficult-to-obtain or
different look.
Lens Attachments
The most common cine lens attachments are placed before or after (and
occasionally within) a cine camera lens, be it a prime or zoom, spherical or
anamorphic.
Front lens attachments include diopter and split diopter close-focusing
lenses, image stabilizers, image shakers, distortion optics and inclining
prisms. Their main duty is to maximize the versatility of the lens to which
they are attached. Most of these front attachments lose some light transmis-
sion and, in some instances, image quality, but their overall effect proves
them to be worthwhile even though they have deficiencies.
Rear lens attachments mainly include extenders (to increase or decrease
the overall lens focal length) and mesmerizers (rotatable anamorphic optics).
Lens Formats
Film-format size affects many aspects of a cine lens, including its size
(volume and weight), image quality and aperture, as well as reflex mirrored
camera considerations.
Perhaps the most noticeable effect is the tendency for lens size to increase
almost linearly with each format size, assuming a constant focal length and
constant full aperture. Of course, this must partly happen in reflex camera
systems because the reflex mirror, size and position dictate the lens back-
focal length. Since the size and distance of the mirror from the film (or
image plane) increases mainly according to film-format size, this means that
65mm-format lenses tend to be larger than 16mm-format lenses, at least
at their rears. However, the greater difficulty in covering a larger format,
combined with the need for larger depth of field, makes 65mm-format
lenses slower, T2.8–T5.6 full aperture, whereas 16mm-format lenses are
faster, T1.4–T2.0. Unlike in still photography, where a larger-format lens is
chosen to provide a larger print, in cinematography the 65mm, 35mm and
occasionally 16mm prints are invariably shown on similarly sized cinema
screens, roughly 20–50 feet wide. In summary, 65mm-format cine lenses
are optically less complex, slower and have slightly less resolution per mil-
limeter of film negative than 35mm- or 16mm-format lenses. Nevertheless,
65mm lenses perform as well as they need to, based on the size of the format.
The key to the final presentation quality using large-format lenses, including
65mm, VistaVision and 35mm anamorphic, is, as it always has been, the
larger format size and corresponding area of the negative.
Lens Ergonomics
Unlike most still-photography lenses, which are purely driven by cost vs.
image-quality requirements, cine lenses have a strong ergonomic compo-
nent. This component is indirectly dependent on what the cinematogra-
pher expects of a cine lens, but is directly related to what his or her crew,
specifically camera assistants, must do to obtain the desired shot. Since the
assistant’s job may depend on ergonomic aspects of the chosen cine lens, this
little-discussed lens aspect is certainly worth noting.
It is often assumed that a lens that snaps quickly into apparent sharpness
as the focus gear is rotated is easier to focus accurately. In practice the
opposite is true: it is best (and this has been conclusively tested and will
be verified by most camera assistants) for the lens to have a lengthened
focus scale with many spaced-apart focus calibration marks (e.g., infinity,
60, 30, 20, 15, 12, 10, 9 feet, etc., continuing down to close focus). Sharp
focus may seem slower to reach, but the likely error in focus is reduced.
Camera assistants say that this ergonomic lens aspect of providing an
extended, well-spaced and clear focus scale, usually on a large lens barrel,
is crucial to providing a well-focused, sharp image.
The aperture or T-stop scale is also important, because it should have clear
markings (e.g., T2, 2.8, 4, 5.6, etc.) spaced well apart and preferably linearly
spaced for equal stop differentials. The ergonomics of the latter condition
have become more important lately due to motion-control-powered iris
practices, including variable-aperture and variable-frame-rate shots.
Zoom scales were until recently largely ignored in terms of the focal-length
markings and spacing on the lens zoom scale. On modern zoom-lens scales,
the normal cramping of long focal-length marks has been avoided through
optimization of zoom cam data so that long focal-length zoom mark spacings
have been expanded and short focal-length zoom markings have been com-
pressed (though not so much as to hinder selection of short focal-length settings).
Most modern prime and zoom cine lenses can be expected to offer ex-
panded focus, aperture and zoom scales on both sides of the lens, i.e., dual
scales. Also, none of these scales should be obscured by matte boxes, mo-
tors or other ancillary devices.
Some other ergonomic or perhaps more practical aspects of cine lenses are
worth identifying. In prime or zoom lenses, mechanical precision and repeat-
ability of focus marks are important, especially in 35mm-format lenses of short
focal length, say 30mm or less. Typically, in a high-quality, wide-angle prime
lens, where the entire lens is moved to focus, the repeatability error due to me-
chanical backlash, slop, high spots, etc., should be less than a half-thousandth
of an inch (i.e., 12 microns metrically). Correspondingly, in wide-angle prime
or zoom lenses where internal or “floating” lens elements are utilized, a similar
tolerance, sometimes smaller, needs to be achieved. The eventual tolerance
required in this case may be larger or smaller than that mentioned, depending
upon the focal length (or power) of the movable focus group(s).
Maintenance of line of sight or “boresight” in primes, but more so for
zooms, is a major ergonomic consideration, especially in visual-effects and
motion-control applications. The best quality zoom lenses today can be
expected to hold a worst-case line-of-sight variation through zoom of less
than one inch at a 12-foot focus distance.
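To put that specification in angular terms, a minimal sketch of the trigonometry (the one-inch figure is simply the spec quoted above):

```python
import math

# Worst-case boresight drift: 1 inch at a 12-foot (144-inch) focus distance.
angle_deg = math.degrees(math.atan(1 / 144))
print(round(angle_deg, 2))  # ~0.4 degrees of pointing error through the zoom
```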
In addition to what has been described, the camera assistant or any other
lens user demands that a cine lens not only provide accurate focus, aperture
and zoom control, but also the right feel. An analogous way to explain this is
to consider the turning of the frequency-control knob on a radio. Any move-
ment of the knob should be smooth, not rough or sticky, and yet require some
force. The same applies to the feel of the focus, aperture, zoom or other gears
on a cine lens. For the best cine lenses, this feel is taken for granted, but for
many others an uncertainty and lack of confidence about the lens is built up
in the camera assistant’s mind, and this is not conducive to good filmmaking.
Future Lenses
From conception to final use, all modern cine lenses are heavily depen-
dent on computers. The complexity of modern cine lenses, whether optical,
mechanical, electronic, some combination thereof, or otherwise, is so great
that computers are not only needed, they are fundamental to producing
high-quality cine lenses. The advancement and gradual inroad made by
zoom lenses in many fields, including cinematography, is a testament to the
importance of the computer in the whole zoom-lens design process, from
initial idea to final application. Without computers, most modern zoom
lenses would not exist.
A full series of prime lenses will still continue to be available for some time,
but increasingly, the prime lens will need to become more complex, with fea-
tures such as continuous focusing from infinity to close or even macro object
distances; internal, optically generated filtration effects; and so on, so that
their design and manufacturing costs, i.e., return on investment, remain eco-
nomically attractive. Individual prime lenses do offer one valuable advantage
over zooms: they fill niche or special lens requirements better, e.g., very wide-
angle and fisheye field of view, very long focal lengths, slant focus, bellows,
etc. Therefore, in the future, cine lens complements will still include primes
as well as zooms and specialty lenses, but the mix will undoubtedly change.
Technological advancements in raw glasses, aspherical surfaces, thin films
(i.e., coatings), diffractive optics and other aspects of lenses will continue to
fuel lens development. A recently introduced compact, wide-angle, macro-
focus (continuously focusable from infinity down to almost the front lens
element), constant-aperture (through focus and zoom) cine zoom lens in-
dicates what is to come. This lens, which employs two aspherical surfaces,
some of the most exotic glasses ever produced, five cams, two movable focus
groups, two movable zoom groups, a movable iris and a total of twenty-three
lens elements, is a good example of what is already possible (See Figure 7).
Camera Filters
Changing Technology
More than ever, changes in technology have fostered new applications,
considerations and formats for filters. Digital technology requires a new
array of spectrum and sensitivity concerns as well as opening up new im-
aging opportunities, especially post-capture. Look for references to such
changes throughout this section.
Filter Planning
Filter effects can become a key part of the look of a production, if con-
sidered in the planning stages. They can also provide a crucial last-minute
fix to unexpected problems, if you have them readily available. Where pos-
sible, it is best to run advance tests for preconceived situations when time
allows.
Filter Factors
Many filter types absorb light that must be compensated for when calcu-
lating exposure. These are supplied with either a recommended filter factor
or a stop value. Filter factors are multiples of the unfiltered exposure. Stop
values are added to the stop to be set without the filter. Multiple filters will
add stop values. Since each full stop added is a doubling of the exposure, a
filter factor of 2 is equal to a one-stop increase. Example: Three filters of one
stop each will need three additional stops, or a filter factor of 2 x 2 x 2 = 8
times the unfiltered exposure.
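The bookkeeping reduces to powers of two. A minimal sketch of the arithmetic described above (the function names are illustrative, not from any standard library):

```python
import math

def factor_to_stops(factor):
    # A filter factor multiplies the unfiltered exposure;
    # each doubling (factor 2) costs one stop.
    return math.log2(factor)

def combined_factor(stop_values):
    # Stop values add, so the combined factor is 2 ** total stops.
    return 2 ** sum(stop_values)

print(combined_factor([1, 1, 1]))  # three one-stop filters -> factor 8
print(factor_to_stops(8))          # factor 8 -> 3.0 stops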
When in doubt in the field about compensation needed for a filter that you
have no information on, you might use your light meter with the incident bulb
removed. If you have a flat diffuser, use it; otherwise just leave the sensor bare.
Aim it at an unchanging light source of sufficient intensity. On the ground,
face-up at a blank sky can be a good field situation. Make a reading without
the filter. Watch out for your own shadow. Make a reading with the filter cover-
ing the entire sensor. No light should enter from the sides. The difference in
the readings is the compensation needed for that filter. You could also use a
spot meter, reading the same bright patch, with similar results. There are some
exceptions to this depending on the filter color, the meter sensitivity, and the
target color, but this method is often better than taking a guess.
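The same relationship converts a pair of meter readings into a compensation value. A hedged sketch, assuming the meter reports illuminance on a linear scale (e.g., footcandles or lux):

```python
import math

def filter_stops(reading_unfiltered, reading_filtered):
    # The ratio of the bare reading to the filtered reading,
    # expressed in stops, is the compensation for that filter.
    return math.log2(reading_unfiltered / reading_filtered)

print(round(filter_stops(1000, 250), 1))  # 2.0 stops for a filter passing 25%
```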
Published filter-factor information should be taken as a starting point.
Differing circumstances may call for deviations from the norm.
Filter Grades
Many filter types are offered in a graded series of strengths, and the grade-
numbering range can vary with the effect type; generally, the higher the num-
ber, the stronger the effect. Unless otherwise stated, there is no mathematical
relationship between the numbers and the strengths. A grade 4 is not twice
the strength of a grade 2. A grade 1 plus a grade 4 doesn't add up to a grade 5.
Another possible source of confusion is that the various filter manufac-
turers often offer filters with similar names, using terms such as "fog" and
"diffusion," which may have different characteristics.
The oldest standard for naming filter colors was developed early in the
20th century by Frederick Wratten and his associate, C.E. Kenneth Mees.
While at Kodak they sought to make early film capabilities more closely re-
spond to customer requirements. In doing so, they created specifications for
a series of filter colors that correspond to particular applications. They gave
each color a number; these have since been called Wratten numbers and
will be referenced later in this article. Kodak makes gel filters that are the
defining standard for the Wratten system. Other manufacturers may or may
not reference the Wratten designations alongside their own numbering
systems.
Contact the various manufacturers for additional details about their filter
products and nomenclature.
Figure 1b. WITH FAR RED/INFRARED FILTER: The addition of a filter that absorbs or
reflects an appropriate amount of light in the far red/infrared region can eliminate the
reddish cast and visually restore the objects to their proper neutral appearance, as seen
in this image. Digital cameras differ by design so that the correction filter required by
each will vary. It is best to check with the camera and filter manufacturers for what will
work best with any given camera.
Neutral-Density Filters
When it is desirable to maintain a particular lens opening for sharpness
or depth-of-field purposes, or simply to obtain proper exposure when con-
fronted with too much light intensity, use a neutral-density (ND) filter. This
will absorb light evenly throughout the visible spectrum, effectively altering
exposure without requiring a change in lens opening and without introduc-
ing a color shift.
Neutral-density filters are denoted by (optical) density value. Density
is defined as the logarithm, to base 10, of the opacitance; the opacitance
(degree of absorption) of a filter is the reciprocal of its transmittance.
As an example, a filter with a compensation of one stop has a
transmittance of 50%, or 0.5 times the original light intensity. The reciprocal
of the transmittance, 0.5, is 2. The log, base 10, of 2 is approximately 0.3,
which is the nominal density value. The benefit of using density values is
that they can be added when combined. Thus, two ND 0.3 filters have a
density value of 0.6. However, their combined transmittance would be found
by multiplying 0.5 x 0.5 = 0.25, or 25% of the original light intensity.
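These density relationships reduce to two one-line conversions. A minimal sketch (function names are illustrative):

```python
import math

def transmittance(density):
    # Density is log10 of opacitance (1/transmittance),
    # so transmittance = 10 ** -density.
    return 10 ** -density

def stops(density):
    # One stop halves the light; 0.3 ND ~ one stop (log10 2 ~ 0.301).
    return density / math.log10(2)

print(transmittance(0.6))        # two ND 0.3 filters -> ~0.25 (25%)
print(round(stops(0.6), 1))      # ~2.0 stops
print(transmittance(5.7) * 100)  # solar-filter density: ~0.0002% transmitted
```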
Neutral-density filters are also available in combination with other filters.
Since it is preferable to minimize the number of filters used (see section on
multiple filters), common combinations such as a Wratten #85 (daylight-
conversion filter for tungsten film) with an ND filter are available as one
filter, as in the 85N6. In this case, the two-stop ND 0.6 value is in addition to
the exposure compensation needed for the base 85 filter.
There are two types of neutral-density filters in general use. The most preva-
lent type uses organic dyes to attenuate light. For situations where it is neces-
sary to obtain the most even control from near-ultraviolet through the visible
spectrum into the near-infrared, a metallic vacuum-deposition coating, often
a nickel alloy, is ideal. Bear in mind, though, that the silvered-mirror appear-
ance of these filters introduces internal reflections that need to be addressed in use.
Special metallic coatings can also be employed for recording extreme
bright-light situations, such as the sun during an eclipse. These filters are
very dense and absorb substantially through the IR and UV range, as well as
the visible, to reduce the potentially blinding level of light. The best have a
density value of about 5.7, which allows less than 0.001% of the overall light
through. Caution: Do not use any filter to aim at the sun unless it is clearly
labeled as having been made for that purpose. Follow the manufacturer’s
directions and take all possible precautions to avoid accidental (and poten-
tially permanent) blindness.
Variable ND Filters
A further option is the variable ND filter. The most common version combines two polarizers in
one assembly which, when rotated relative to each other, produce the desired
variation in overall light transmission while retaining a mostly neutral color
rendition. While you can produce your own version of this effect using two
separate polarizers, particularly a circular one mounted on the lens and a
linear one mounted onto that (away from the camera), there are reasons to
obtain a purpose-built unit. Since polarizers can affect light of different wave-
lengths somewhat differently, they can generate a color cast, often deep blue,
at the lowest transmission point, which is reached when both polarizers are
positioned so their polarization axes are perpendicular to each other. Proper
selection of polarizer foil characteristics for use by the manufacturers of vari-
able ND filters will produce more neutral color balance, but even then, using
these filters is usually better when not set for the lowest possible transmission.
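The behavior of the two-polarizer design follows Malus's law. A minimal, idealized sketch (it ignores the base light loss of the polarizer foils themselves and the color shift near minimum transmission noted above):

```python
import math

def extra_stops(theta_degrees):
    # Idealized two-polarizer variable ND: transmittance falls with
    # cos^2 of the relative rotation angle between the two elements.
    t = math.cos(math.radians(theta_degrees)) ** 2
    return math.log2(1 / t)

for angle in (0, 30, 45, 60, 80):
    print(angle, round(extra_stops(angle), 2))
# 0 -> 0.0, 30 -> 0.42, 45 -> 1.0, 60 -> 2.0, 80 -> 5.05 stops;
# at 90 degrees the ideal model goes fully opaque, which is where
# real units drift blue and are best avoided.
```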
For most current cameras that require circular polarization, it is impor-
tant that the variable ND has a quarter-wave retarder on the side facing the
camera, as with a traditional circular polarizer. In addition, if the variable
ND assembly allows not only one filter element to rotate relative to the
other (necessary to achieve the variable density effect) but also allows both
elements to rotate together (while retaining their positions relative to each
other) then the assembly can also function similarly to a traditional polar-
izer, where reduction of polarized reflected glare improves images of the
sky, surfaces of bodies of water, and when imaging through windows. When
using these, remember to make separate rotational adjustments for overall
transmission and for polarized glare reduction.
Figure 2b. GRADUATED NEUTRAL DENSITY 0.6 FILTER: Taken at the same midday time as
the unfiltered image, the filter absorbs two stops from the sky, allowing it to appear cor-
rectly while the exposure is adjusted to properly render the foreground through the clear
half of the filter. The soft transition between the clear and the ND halves of the filter allows
the effect to blend well and makes for a more balanced image.
Graduated ND Filters
Often it is necessary or desirable to balance light intensity in one part of
a scene with another, namely in situations where you don’t have total light
control, as in bright exteriors. Exposing for the foreground will produce a
washed-out, overexposed sky. Exposing for the sky will leave the foreground
dark, underexposed.
Graduated ND filters are part clear, part neutral-density, with a smoothly
graded transition between. This allows the transition to be blended into the
scene, often imperceptibly. An ND .6-to-clear, with a two-stop differential,
will often balance the average bright-sky-to-foreground situation.
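Choosing a grad strength is the same density arithmetic as before. A hedged sketch with an illustrative helper (not from any published table):

```python
def grad_density(sky_stops_over_foreground):
    # Pull the sky down toward the foreground exposure:
    # 0.3 of neutral density per stop of difference.
    return round(sky_stops_over_foreground * 0.3, 1)

print(grad_density(2))  # sky two stops hot -> an ND .6-to-clear grad
```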
These filters are also available in combination colors, where the entire
filter is, for example, a Wratten #85, while one half also combines a graded-
transition neutral-density, as in the #85-to-85N6. This allows one filter to
fulfill the need for two.
Graduated filters generally come in three transition types. The most com-
monly used is the soft-edge graduation. It has a wide enough transition area
on the filter to blend smoothly into most scenes, even with a wide-angle
lens (which tends to narrow the transition within the image). A long focal
length, however, might only image in the center of the transition. In this
case, or where the blend must take place in a narrow, straight area, use a hard
edge. This is ideal for featureless marine horizons.
Polarizing Filters
Polarizers allow color and contrast enhancement, as well as reflection
control, using optical principles different from any other filter types. Most
light that we record is reflected light that takes on its color and intensity from
the objects we are looking at. White light, as from the sun, reflecting off a
blue object appears blue because all other colors are absorbed by that object.
A small portion of the reflected light bounces off the object without being
absorbed and colored, retaining the original (often white) color of its source.
With sufficient light intensity, such as outdoor sunlight, this reflected glare
has the effect of washing out the color saturation of the object. It happens
that for many surfaces, the reflected glare we don’t want is polarized, while
the colored reflection we want isn’t.
The waveform description of light defines nonpolarized light as vibrating in
a full 360-degree range of directions around its travel path. Polarized light in its
linear form is defined as vibrating in only one such direction. A (linear) polar-
izing filter passes light through in only one vibratory direction. It is generally
used in a rotating mount to allow for alignment as needed. In our example
above, if it is aligned perpendicular to the plane of vibration of the polarized
reflected glare, the glare will be absorbed. The rest of the light, the true-colored
reflection vibrating in all directions, will pass through no matter how the po-
larizing filter is turned. The result is that colors will be more strongly saturated,
or darker. This effect varies as you rotate the polarizer through a quarter-turn,
producing the complete variation of effect from full to none.
Polarizers are most useful for increasing general outdoor color saturation
and contrast. Polarizers can darken a blue sky, a key application, on color as
well as on black-and-white film, but there are several factors to remember
when doing this. To deepen a blue sky, the sky must be blue to start with,
not white or hazy. Polarization is also angle-dependent. A blue sky will not be
equally affected in all directions. The areas of deepest blue are determined by
the following rule of thumb: when setting up an exterior shot, make a right
angle between thumb and forefinger; point your forefinger at the sun. The area
of deepest blue will be the band outlined by your thumb as it rotates around
the pointing axis of your forefinger, directing the thumb from horizon to ho-
rizon. Generally, as you aim your camera either into or away from the sun, the
effect will gradually diminish. There is no effect directly at or away from the
sun. Do not pan with a polarizer without checking to see that the change in
camera angle doesn’t create undesirable, noticeable changes in color or satura-
tion. Also, with an extra-wide-angle view, the area of deepest blue may appear
as a distinctly darker band in the sky. Both situations are best avoided. In all
cases, the effect of the polarizer will be visible when viewing through it.
Polarizers need approximately 1½ to 2 stops exposure compensation,
generally without regard to rotational orientation or subject matter. They are
also available in combination with certain standard conversion filters, such
as the 85BPOL. In this case, add the polarizer’s compensation to that of the
second filter.
Certain camera optical systems employ internal surfaces that also polar-
ize light. One example is the use of a videotap. Using a standard (linear)
polarizer may cause the light to be further absorbed by the internal optics,
depending on the relative orientation. This may interfere with normal
operation; a circular polarizer, which adds a quarter-wave retarder behind
the linear polarizing element, avoids the problem. Polarization is also a
concern in stereoscopic 3-D photography, where beam-splitter rigs position
two cameras at right angles to each other, each aimed at opposing sides of a par-
tially silvered mirror. One camera sees through the mirror, the other images
off the reflection. The reflected light will be partially polarized by the mirror;
the transmitted light will only exhibit the partial polarization inherent in
the scene that both cameras are recording. The differences in the two light
paths can cause troublesome differences in brightness of particular details in
the image that can disrupt the stereoscopic effect. One solution to this is to
position a quarter-wave retarder plate, essentially a clear piece of glass that
twists the polarization axis of the incoming light, in front of the mirror in
the beam-splitter rig, effectively neutralizing the negative polarization effect.
Diffusion Filters
Many different techniques have been developed to diffuse image-forming
light. Strong diffusion can blur reality for a dream-like effect. In more subtle
forms, diffusion can soften wrinkles to remove years from a face. The optical
effects all involve bending a percentage of the image-forming light from its
original path to defocus it.
Some of the earliest portrait diffusion filters still in use today are nets. Fine
mesh, like a stocking, stretched across the lens has made many a face appear
youthful, flawless. This effect can now be obtained through standard-sized
optical glass filters, with the mesh laminated within. These function through
“selective” diffusion. They have a greater effect on small details, such as wrin-
kles and skin blemishes, than on the rest of the image. The clear spaces in the
mesh transmit light unchanged, preserving the overall sharp appearance of
the image. Light striking the flat surface of the net lines, however, is reflected
or absorbed. A light-colored mesh will reflect enough light either to lighten
shadows, which lowers contrast, or to tint them with its own color, while
leaving highlight areas alone.
The effect of diffusion, however, is produced by the diffraction of light that
just strikes the edges of the mesh lines. This is bent at a different angle, chang-
ing its distance to the film plane, putting it out of focus. It happens that this
has a proportionately greater effect on finer details than on larger image ele-
ments. The result is that fewer wrinkles or blemishes are visible on a face that
otherwise retains an overall, relatively sharp appearance.
The finer the mesh, the more the image area covered by mesh lines and
the greater the effect. Sometimes, multiple filters are used to produce even
stronger results.
As with any filter that has a discrete pattern, be sure that depth of field
doesn’t cause the net pattern to become visible in the image. Using small
apertures or short focal length lenses make this more likely, as will using a
smaller film format such as 16mm vs. 35mm, given an equal field of view.
Generally, midrange or larger apertures are suitable, but test before critical
situations. When in need of net diffusion in circumstances where mounting
it in front of the lens will cause the pattern to show, try mounting the filter
in a suitable location behind the lens (if the equipment design allows). This
should reduce the chance of the pattern appearing. Placing a glass filter be-
hind the lens may alter the back-focal length, which may need readjustment.
Check with your lens technician. A test is recommended.
When diffusing to improve an actor’s facial appearance, it is important not
to draw attention to the presence of the filter, especially with stronger grades,
when diffusion is not required elsewhere. It may be desirable to lightly dif-
fuse adjacent scenes or subjects not otherwise needing it to ensure that the
stronger filtration, where needed, is not made obvious.
In diffusing faces, it is especially important that the eyes do not get overly
soft and dull. This is the theory behind what might be called circular diffusion
filters. A series of concentric circles, sometimes also having additional radial
lines, are etched or cast into the surface of a clear filter. These patterns have the
effect of selectively bending light in a somewhat more efficient way than nets,
but in a more radial orientation. This requires that the center of the circular
pattern be aligned with one of the subject’s eyes—not always an easy or possible
task—to keep it sharp. The rest of the image will exhibit the diffusion effect.
A variation on the clear-center concept is the center-spot filter. This is a
special-application filter that has a moderate degree of diffusion surrounding
a clear central area that is generally larger than that of the circular diffusion
filter mentioned previously. Use it to help isolate the main subject, held sharp
in the clear center, while diffusing a distracting background, especially in
situations where a long lens and depth-of-field differentiation aren’t possible.
Figure 3a. NO FILTER – STANDARD EXPOSURE: Midday scene as it appears without a filter.
Figure 3b. BLACK PRO-MIST 1 – STANDARD EXPOSURE: Midday scene with highlight haze more visually suggestive of the sun’s heat and the humidity by the lake.
Figure 3c. SUNRISE 3 GRAD PLUS BLACK PRO-MIST 1 – ONE STOP UNDER: Combining the hazy atmosphere of the Black Pro-Mist with the color of the Sunrise Grad, underexposure produces a visual sense of early morning.
Figure 3d. TWILIGHT 3 GRAD – TWO STOPS UNDER: The cool colors of the Twilight Grad plus a two-stop underexposure produce a visual sense of early evening.
(Images by Ira Tiffen.)
Fog Filters
Fog filters mimic the glow of natural fog, which is produced by water
droplets in the air. The soft glow can be used to make lighting more visible
and better felt by the viewer. The effect of humidity in, say, a tropical scene
can be created or enhanced. In lighter grades, these filters can take the edge off
excess contrast and sharpness. Heavier grades can create unnatural effects, as
for fantasy sequences. In general, however, the effect of a strong natural fog is
not produced accurately by fog filters in their stronger grades. Their look is
too soft, yet retains too much contrast, to faithfully reproduce the effect of a
thick natural fog. For that, double-fog or graduated-fog filters are recommended.
Double-fog filters have milder flare and softening characteristics than
standard fog filters while exhibiting a much greater effect on contrast, espe-
cially in the stronger grades. A very thick natural fog will still allow close-up
objects to appear sharp. So will a double fog filter. The key to the effect is the
much lower contrast combined with a minimal amount of highlight flare.
Graduated-fog filters, sometimes called “scenic,” are part clear or light fog
and part denser fog effect. Aligning the clear or weaker half with the fore-
ground and the stronger half with the background will render an effect more
like that of a natural fog, accumulating density with distance.
Mist filters generally produce highlight flare which, by staying closer to
the source, appears more as a halo than will the more outwardly extended
flare of a fog filter. They give an almost pearlescent glow to highlights. The
lighter grades also find uses in toning down the excessive sharpness and
contrast of modern film and lens combinations without detracting from the
image. Black Pro Mist-type filters also create moderate image softening and
modest-to-strong highlight flare, but without as much of a lightening effect
on shadows.
Color-Conversion Filters
Color-conversion filters are used to correct for sizable differences in color
temperature between the film and the light source. They comprise the
Wratten #80 (blue, as used for daylight film in tungsten lighting) and the
Wratten #85 (amber, as used for tungsten film in daylight) series of
filters. Since they see frequent outdoor use in bright sunlight, the #85 series,
especially the #85 and #85B, are also available in combination with various
neutral-density filters for exposure control.
Light-Balancing Filters
Light-balancing filters are used to make minor corrections in color tem-
perature. They comprise the Wratten #81 (yellowish) and the
Wratten #82 (bluish) series of filters. They are often used in combination
with color conversion filters. Certain #81 series filters may also be available
in combination with various neutral-density filters for exposure control.
Color-Compensating Filters
Color-compensating (CC) filters are used to make adjustments to the red,
blue or green characteristics of light. These find applications in correcting
for color balance, light source variations, different reversal film batches and
other color effects. They are available in density variations of cyan, magenta
and yellow, as well as red, blue and green filters.
Decamired® Filters
Decamired (a trademark of the manufacturer) filters are designed to
more easily handle a wide range of color-temperature variations than the
previously mentioned filters. Available in increments of both a red and a
blue series, Decamired filters can be readily combined to create almost any
required correction. In measuring the color temperature of the light source
and comparing it to that for which the film was designed, we can predict the
required filtration fairly well.
A filter that produces a color-temperature change of 100°K at 3400°K will
produce a far larger change at higher color temperatures; rating filters in
mired (micro reciprocal degree) units, of which a decamired is ten, gives a
shift value that holds across the whole color-temperature scale.
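The mired value is simply 1,000,000 divided by the color temperature in kelvins. A minimal sketch of the arithmetic (the 85B comparison is the commonly published rating):

```python
def mired(kelvin):
    # Mired = 10**6 / color temperature; a decamired is ten mireds.
    return 1_000_000 / kelvin

# The same ~9-mired filter shifts 3400K by 100K but 5600K by ~270K:
print(round(mired(3300) - mired(3400)))      # ~9 mireds
print(round(1_000_000 / (mired(5600) + 9)))  # ~5330K, a ~270K shift

# Converting 5600K daylight for 3200K tungsten film needs ~+134 mireds
# (warming); compare the Wratten 85B, usually rated about +131.
print(round(mired(3200) - mired(5600)))      # ~134
```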
Figure 4b. TAKEN AT THE SAME MIDDAY TIME AS THE UNFILTERED IMAGE: The filter
absorbs two stops and is shown here with an additional one stop underexposure – a total
of three stops under. The cool tones and underexposure produce a visual sense of evening
under moonlight illumination.
Coral Filters
As the sun moves through the sky, the color temperature of its light changes.
It is often necessary to compensate for this in a variety of small steps as the day
progresses, to match the appearance of different adjacent sequences to look as
though they all took place at the same time. Coral filters are a range of graded
filters of a color similar to a #85 conversion filter. From light to heavy, any ef-
fect from basic correction to warmer or cooler than normal is possible. Corals
can also compensate for the overly cool blue effect of outdoor shade.
Sepia Filters
People often associate sepia-toned images with “early times.” This makes
sepia filters useful tools for producing believable flashbacks and for period
effects with color film. Other colors are still visible, which is different from
original sepia-toned photography, but appear infused with an overall sepia tint.
Didymium Filters
The Didymium Filter is a combination of rare earth elements in glass. It
completely removes a portion of the spectrum in the orange region. The ef-
fect is to increase the color-saturation intensity of certain brown, orange and
reddish objects by eliminating the muddy tones and maximizing the crim-
son and scarlet components. Its most frequent use is for obtaining strongly
saturated fall foliage. It also enlivens brick and barn reds, colors that aren’t
bright scarlet to begin with, most effectively. The effect is minimal on objects
of other colors. Skin tones might be overly warm. Even after subsequent
color timing or correction to balance out any unwanted bias in these other
areas, the effect on reddish objects will still be apparent. Prior testing should
be done because film color sensitivities vary.
Timing is the laboratory process of balancing color and density from scene
to scene throughout a film production. This accounts for variables in exposure,
print stock and processing. Timing can also be used to impart certain color
effects, but it needs sufficient densities in the film emulsion to work with,
and is limited to the range of variation of the optical printer. These are much
more limiting than the
multitude of colorants in the real world and the number of ways in which
adjustments can be made at the camera. Filtering on the camera brings the
lab that much closer to the desired result, providing a greater latitude of
timing options. It can also center more creative control in the hands of the
cinematographer.
There will be times when counting on the lab is either the only choice, or
can produce some unusual effects. When faced with a low-light situation in
daylight using tungsten film, it may be necessary for exposure reasons to pull
the #85 filter and correct in the printing. When you do this, however, neutral
gray tones will appear slightly yellow, even when all else looks correct. This
effect can be used to artificially enhance lush green foliage colors through
the addition of yellow. It may have other uses, but you will not achieve the
same result as if you had used the 85 filter.
LL-D®
The LL-D was designed to help in the above situation. It requires no ex-
posure compensation and makes sufficient adjustments to the film to enable
the timer to match the color of a properly 85-filtered original. It is not an
all-around replacement for the #85. Use it only where needed for exposure
purposes, and for subsequently printer-timed work.
Figure 5a. WITHOUT THE SPLIT-FIELD LENS (1): The foreground here is out of focus
when the background is sharp. You can’t focus on both at the same time.
Viewing Filters
Viewing filters help in judging how scene contrast and tonal values will
record on film; they are offered for black-and-white work as well as in
various viewer densities for color film. A darker viewer is used for
slower film speeds, where you would tend to use brighter lighting. Faster
film, which can be used in dimmer settings, would require a lighter viewer.
Further details can be obtained from the manufacturers.
There are also Green, Red and Blue viewing filters used to judge lighting
effects when doing process work like greenscreen.
Figure 5b. WITHOUT THE SPLIT-FIELD LENS (2): The foreground here is sharp when the
background is out of focus. You can’t focus on both at the same time.
Figure 5c. WITH THE SPLIT-FIELD LENS: The close-up lens half allows sharp focus on
the foreground while the camera lens is focused on the background. There is a soft
transition between the two areas at the edge of the split-field lens at this middle-of-the-
range lens opening.
Secondary Reflections
Lighting can cause flare problems, especially when using more than one fil-
ter. Lights in the image pose the greatest difficulties. They can reflect between
filter surfaces and cause unwanted secondary reflections. Maintaining paral-
lelism between filters, and further aligning the lights in the image with their
secondary reflections where possible, can minimize this problem. In critical
situations, it may be best to make use of a matte box with a tilting filter stage.
Tilting filters of good optical quality only a few degrees in such a unit can
divert the secondary reflections out of the lens axis, out of the image, without
introducing unwanted distortion or noticeable changes in the filter’s effect.
Rain Spinners
When rain or boat spray sends distracting water drops onto the lens, you
can mount a rain spinner, sometimes called a rain deflector, as you would a
matte box on the front of the lens. A round filter, usually clear, spins at over
3,000 rpm and flings off droplets before they form. This is very effective when
filming the Tour de France in the rain.
Ira Tiffen, ASC Associate Member, has created many of the important filter ef-
fects of the last forty years. He earned a 1992 Academy Technical Achievement
Award for the development of the Ultra Contrast filter series and received a
Primetime Emmy Award in 1998. He was elected a Fellow of SMPTE in 2002.
Camera Stabilizing Systems
There is good reason why the medium discussed in this manual is popu-
larly known as “motion” pictures. Since the earliest days of the industry,
cinematographers have been challenged to serve their stories by finding new
and inventive ways of moving the camera. This section examines the latest
devices that allow the operator to maintain control of the image while taking
the viewer on a ride up, down or anywhere else.
Body-Worn Systems
Modern camera-stabilizing systems enable a camera operator to move
about freely and make dolly-smooth handheld shots without the restrictions
or the resultant image unsteadiness encountered with prior methods. These
systems transfer the weight of the camera unit to the operator’s body via a
support structure and weight distribution suit. This arrangement frees the
camera from body motion influences. It allows the camera to be moved by
the operator through an area, generally defined by the range through which
his arm can move.
Camera smoothness is controlled by the “hand-eye-brain” human servo
system that we use to carry a glass of water around a room or up and down
stairs. Viewing is accomplished through the use of a video monitor system
that displays an actual through-the-lens image, the same image one would
see when looking through a reflex viewfinder. The advantage of these
camera-stabilizing systems is that the camera now moves as if it were an
extension of the operator’s own body, controlled by his or her internal servo
system, which constantly adjusts and corrects for body motions, whether the
operator is walking or running. The camera moves and glides freely in all
directions—panning, tilting, booming—and all movements are integrated
into a single, fluid motion that makes the camera seem as if it were suspend-
ed in midair and being directed to move at will. These camera-stabilization
systems turn any vehicle into an instant camera platform.
As with remotely controlled camera systems, servo controls may be used
for control of focus, iris and zoom on the camera lens.
BodyCam
The lens can be placed anywhere the arm can reach. This also allows
the operator’s unblocked peripheral vision to see upcoming dangers and
obstacles.
The BodyCam prevents unwanted motion from being transferred to the
camera. With the average video camera, lens focal lengths up to 125mm are
possible, and on 35mm film cameras, focal lengths up to 250mm are usable.
The Balance Assembly is suspended in front of the operator. It supports
the camera, a viewfinder monitor and one or two batteries. The monitor is
the green CRT-type. Batteries are brick-style and last two to three hours.
Adjustments allow precise balancing of the Balance Assembly around a
three-axis gimbal. At this gimbal is an attach point where the Balance As-
sembly connects to the Suspension Assembly.
The Suspension Assembly consists of an arm supported and pivoted be-
hind the left shoulder. The rear part of the arm is attached to a double-spring
arrangement that counters any weight carried by the other end of the arm and
provides vertical movement over a short distance. This arm isolates higher
frequency motion. The front end of the arm attaches to the Balance Assembly
with a hook. This hook is connected to a cable that travels from the front of
the arm to the pivot point of the arm and an internal resilient coil.
The Suspension Assembly is attached to the Backbrace. The Backbrace is a
framework that carries the load and transfers it to the human body.
Model L weighs 24 lbs (10.9kg) without camera or batteries and will carry
up to 25 lbs (11.3kg) of camera package (not counting batteries). Model XL
weighs 28 lbs (12.7kg) without camera or battery and will carry up to 40 lbs
(18.2kg) without battery. Both models offer options for wired or wireless
focus, zoom and iris control.
Glidecam V-20
The support arm's spring tension is field adjustable to allow for varying
camera weights. For safety, a dual-spring
design is used to reduce possible spring failure damage.
The free-floating Three-Axis Gimbal, which incorporates integrally shield-
ed bearings, creates the smooth and pivotal connections between the front
end of the arm and the mounting assembly. A locking mechanism allows the
gimbal to be placed at varying positions on the Central Support Post.
The Camera Mounting Assembly (Sled) is designed with a lower Tele-
scoping Center Post which allows for vertical balance adjustment as well as
varying lens heights. The center post can be adjusted from 22" to 32". The
Camera Plate has both ¼" and 3⁄8" mounting slots to accommodate a variety
of camera bases. For remote viewing, an LCD monitor can be attached on
the Base Platform, or either an LCD or a CRT monitor can be attached to
the base’s adjustable monitor bracket. The base platform can also be config-
ured to use counterbalance weight disks if a monitor and/or battery is not
mounted on the base platform. The back of the base platform has threaded
mounting holes for an Anton Bauer Gold Mount battery adapter plate.
Accessories include a low-mode camera mount, sled offset adapter, ve-
hicle mount, and Vista Post 33" central support post extender.
Glidecam V-16
The V-16 is the same as the V-20 but designed to support lighter cameras
weighing 10–20 lbs.
MK-V
MK-V Evolution
The MK-V Evolution Modular Sled system can evolve with the operator.
With the addition of optional extras, the basic MK-V Evolution rig can
become a 24V 35mm sled. The base features a modular framework, allow-
ing for customization in the field to any configuration and battery system
and the addition of various accessories. It has a 4-by battery bar and clamp
system for stability and universal battery compatibility. The modular base is
also compatible with other leading stabilization systems. All battery systems,
12V and 24V, plug into the D-box distribution box. The D-box Deluxe has
built-in digital level sensing and an onscreen battery display. MK-V’s latest,
highly configurable Nexus base is compatible with the Evolution.
The available carbon-fiber posts are two- and four-stage telescopic and
available in three different lengths. The base has optional gyro mounts. The
modular telescopic monitor works with CRT and LCD monitors. The V2
advanced, frictionless gimbal is compatible with leading sled systems, tools-
free and designed to have no play in any axis. Front-mounting standard and
advanced Xo vests are available. The Xo arm is frictionless and can support
cameras of any weight with the change of its springs and features a modular
front end.
Sachtler
Artemis Cine/HD
The Artemis Cine/HD Camera Stabilizing System’s post has a telescopic
length ranging from 16.5"–27.5" (42–70cm). A variety of modules can be
fixed to either end of the post to enable cameras, monitors and battery packs
to be mounted. Scales on both tubes allow fast and precise adjustment. The
gimbal’s self-centering mechanism ensures aligned, precise position of the
central bearing for easy system balancing.
The side-to-side module situated between post and camera gives the cam-
era a sliding range of 1.18" (30mm) left, right, forward and back and includes
a built-in self-illuminating level at the rear of the module. Two positions at
the front and two at the rear enable mounting of remote focus brackets. All
adjustments or locking mechanisms can be performed tool-free or with only
a 4mm Allen wrench. The Artemis vest, made from materials such as Cor-
dura, has a full-length pivoting bridge made of reinforced aluminum alloy.
It can be vertically adjusted independently of body size. Most of the weight
is carried on the hips, reducing pressure on the operator’s back. The arm is
available with different spring sets that offer a load capacity ranging from
30–70 lbs (15–35kg). The two standard arm capacities are 44 lbs (20kg) and
57 lbs (25kg). It is fully compatible with all other stabilizer systems using the
standard arm-vest connector and a 5⁄8" arm post.
Tiffen Steadicam
Universal Model III
The Steadicam system consists of a stabilizing support arm which attaches
at one end to the camera operator’s vest and at the other end to a floating
camera mounting assembly which can accept either a 16mm, 35mm or
video camera. The comfortable, adjustable, close-fitting camera operator’s
vest transfers and distributes the weight of the Steadicam system (including
camera and lens) across the operator’s shoulders, back and hips.
The stabilizer support arm is an articulated support system that parallels
the operator’s arm in any position, and almost completely counteracts the
weight of the camera systems with a carefully calibrated spring force. The
double-jointed arm maximizes maneuverability with an articulated elbow
hinge. One end of the arm attaches to either side of the vest’s front plate,
which can be quickly reversed to allow the stabilizer arm to be mounted on
the right or left side of the plate. A free-floating gimbal connects the stabi-
lizer support arm to the camera-mounting assembly.
The camera-mounting assembly consists of a central support post, around
which the individual components are free to rotate as needed. One end of
the post supports the camera mounting platform, while the other end termi-
nates in the electronics module. The adjustable video monitor is attached to a
pivoting bracket. An electronic level indicator is visible on the CRT viewing
screen. Electronically generated framelines can be adjusted to accommodate
any aspect ratio. Positions of the components may be reversed to permit “low
mode” configuration. The Steadicam unit is internally wired to accept wire-
less or cable-controlled remote servo systems for lens control. A quick-release
mechanism permits the operator to divest himself of the entire Steadicam
unit in an emergency. A 12V/3.5A NiCad battery pack mounts on the electron-
ics module to supply the viewfinder system and film or video camera.
Master Series
The cornerstone of the Steadicam family, this high-end production tool
incorporates a new motorized stage, an enhanced Master Series vest, an ad-
vanced 5" monitor with artificial horizon and every state-of-the-art feature
in the industry today.
System Includes: Camera Mounting Chassis (Sled), Motorized Stage,
Low-Friction Gimbal, Composite Center Post and Covers, Advanced Moni-
tor: 5" CRT w/16:9 aspect ratio switchable to 4:3, Artificial Horizon, Frame-
line Generator, On-screen Battery Display, Dovetail Accessory Bracket,
20–45 lbs camera capacity Iso-Elastic Arm, No-tools Arm Post, Master
Series Leather Vest, four CP batteries with built-in Meter Systems, 12V/24V
DC to DC Power Converter, Dynamic Spin & Docking Bracket, Long and
Short Dovetail Plates, 12V power cable, 24V power cable (open-ended),
two power cables for Panavision and Arri (24V) cameras, two power/video
cables for Sony XC-75, soft case for the Arm, soft case for the Vest, hard case
for the Sled, Owner’s Manual.
Doggicam Systems
Robodog
This motorized version of the Doggicam gives the operator full tilt con-
trol of the camera via the handle-mounted motor controller. The inherent
stability of the system is maintained by keeping the post vertical while allow-
ing the full range of camera tilt within the yoke.
Doggimount
The Doggimount quickly secures the camera at any point in space using a
compact aluminum framework similar to speedrail, yet much smaller.
Bodymount
Bodymount uses a specially designed vest and the standard Doggimount
bracketry to allow easy and comfortable attachment of the camera to a per-
son. The vest can be covered with wardrobe to allow framing to the middle
of the chest.
Pogocam
The Pogocam is a small, weight-balanced handheld system that utilizes a
converted 35mm Eyemo (100' daylight loads) with a videotap and onboard
monitor. Nikon lenses are used. The compact size allows you to shoot where
most stabilization systems would be too cumbersome to go.
Mitchell-Based Platform-Mounted Stabilization System
Coptervision
Rollvision
Rollvision is a lightweight, portable, wireless 3-axis camera system with
360° unlimited pan, tilt and roll capabilities and a gyro for horizon compen-
sation. The dimensions are 14.5" x 19" x 19.5" with a weight of 16 lbs (camera
system only). Rollvision works on numerous platforms including, but not
limited to, cable systems, cranes, jib arms, camera cars, Steadicams, tracking
and rail systems and much more. Mounting options consist of a 4" square
pattern 1⁄4"–20, Mitchell Mount adapter receiver 1⁄4"–20 and two adjustable
T-nuts sized from 1⁄4"–20.
The Rollvision features electronic end stops, an hour meter with total sys-
tem running time, lock for pan and tilt, removable sphere covers, film and
video mode switch, and voltmeter.
Camera options include: Arri 435, 35-3, 2-C, 2-C SL Cine, 16SR-2 and
16SR-3, Aaton XTRprod and A-Minima plus multiple Sony and Panasonic
High-Definition cameras.
Lenses up to 6 lbs can be used, including but not limited to, Panavision
Primos from 17.5mm to 35mm, Ultra, Super and Normal Speeds; 35mm
primes; and 16mm primes. Lens mount options include Arri standard, light-
weight PL and lightweight Panavision mounts.
Several types of film magazines can be used including 400' SL Steadicam
Mag, 200' Standard and 200' Steadicam Mag.
Remotevision, also designed by Coptervision, is the wireless, 4-in-1 radio
control used to operate the Rollvision. Remotevision features a 5.6" color
monitor and pan, tilt, and roll controls as well as camera zoom and iris con-
trols. Gyro on/off and roll/record switches are included. The speed of the
pan, tilt and roll is adjustable and a reverse switch allows the head to reset to
the zero position.
The Rollvision fits in one to two ATA-approved size cases, and travels with
the Rollvision operator as luggage.
Flying-Cam
The Flying-Cam is an unmanned aerial filming helicopter. The Flying-
Cam aeronautical design, manufacturing quality, safety features and main-
tenance procedures are equivalent to those used in General Aviation. The
vehicle has all the characteristics required for safe operation close to actors.
The system is self-sufficient and combines the use of aeronautics, electron-
ics, computer vision and wireless transmission technologies that are tailor
made to fit in a 30 lbs take-off weight platform and a 6' main rotor diameter.
Cameras available are: 16mm, S16mm, 35mm, S35mm and various video
standards including HD and broadcast live transmission.
The embedded Super 35mm 200' camera is custom made by Flying-
Cam and mounted in a 1' diameter, three-axis gyro-stabilized remote
head. The camera is the integration between an Arri 2-C and an Eyemo.
The movement has been designed to achieve the same steadiness as an
Arri 35-3 up to 30 fps and an Arri 2-C up to 50 fps. The 200' magazine
uses standard cores and is part of the camera body. When short reloading
time is required, the cameras are hot-swappable. The electric camera
motor is crystal-controlled with two-digit precision. Ramping is optional.
The minimum speed is 4 fps, the maximum 50 fps. Shutter is 160°. Cam-
era trigger is remote. Indicators and readouts—timer, sync signal and roll
out—are monitored from the ground control station. A color LCD moni-
tor and a digital 8mm tape recorder are provided for monitoring playback
on the ground. A color wireless video assist, used as parallax, gives pe-
ripheral information, allowing for anticipation in the camera remote head
operation. A frameline generator is provided with prememorized ratio.
184
|
Camera Stabilizing Systems
The matte box used is an Arri 3" x 3". ND3-6-9, 85ND3-6-9, Pola and 81EF
filters are provided as standard. Aperture Plate is full and optical axis is
centered on full. Available lenses are wide-angle: Cooke, Zeiss and ana-
morphic 40mm. Aperture is remote, focus remote is optional on standard
lens and included on Anamorphic. Lens mounts are Arri Standard, PL,
BNCR, and Panavision.
The HD Camera is a 3-CCD with HD-SDI output and onboard recording
capability.
The Flying-Cam patented gyro-stabilized remote head includes one-of-a-kind top-shot horizon control. Pan: 360° unlimited, 180°/sec, adjustable from the ground. Roll: 360° unlimited, 60°/sec, adjustable from the ground. Tilt: 190°, including 90° straight up and 100° down, 60°/sec, adjustable from the ground. On a 90° tilt down with the 16mm Zeiss lens, the front glass is 1" above the ground. The maximum height above ground is 300' (100m) to respect FAA safety separation from general aviation. Flying-Cam provides the wireless Body-Pan® proprietary system: the camera operator has the option to take control of the tail rotor, giving an unlimited, unobstructed, gyro-stabilized 360° pan capability. The pilot has transparent control and override authority if needed.
The Flying Platform has a top speed of 75 mph (120 kph) and can combine all the moves of a full-size helicopter. A take-off weight of 30 lbs (15kg) reduces downwash to a minimum. Autorotation capability is enhanced by high rotor inertia. The tail rotor is overpowered for Body-Pan operation. The flight is controlled by a Flying-Cam licensed pilot using a computer-based radio. Density altitude is affected by temperature, humidity and pressure. The radio range is more than one mile. The maximum flight altitude is 14,000'. The practical range is the visual one. Pilot-in-relay operations are optional.
Cablecam
A scalable, multi-axis camera and stunt platform for film and HD. Cablecam scales its rigs to interior and exterior locations. Truss, track, ropeways, towers and crane attachments are modular.
Types of rigs include:
Basic: One-axis, point-to-point.
Elevator Skate: Two-axis XZ travel.
Flying V: Two-axis XZ travel.
Traveling High Line: Three-axis XYZ travel using a gantry-like rail and rope rig.
Dual High Line: XYZ travel using parallel high lines.
Multi-V: Three-axis XYZ travel using four attachment points to construction cranes or high steel.
Mega-V: The upside-down crane. XYZ flight of a 20' vertically telescoping pedestal. Attached to the bottom of the pedestal is a horizontal 12' extension/retraction arm which pans, tilts, and rolls. The camera head of choice is attached at one end of the arm.
Tethered Ballooncam: One- to three-axis aerial control over expansive distances.
Speed range of systems: 0–70 mph. Length, width and height of rigs: 100–5,000'. Propulsion: electric or hydraulic. Software: Kuper and custom.
Cablecam’s Kuper- or custom-controlled, servo-driven camera and stunt rigs are modeled in Maya and Softimage and exported to a previz model. “Virtual” Cablecam is flown in the previz milieu using a teach-and-learn protocol.
Aerial platforms, balloons, and flying 3-D cranes support the Libra, Stab-
C and XR as well as the Cablecam LP (low profile) stabilized HD head.
I. Introduction to Previsualization
Previsualization, also known as “previz,” is a collaborative process that gen-
erates preliminary versions of shots or sequences, predominantly using 3-D
animation tools and a virtual environment. It enables filmmakers to visually
explore creative ideas, plan technical solutions and communicate a shared
vision for efficient production. Filmmakers can leverage the previz process to effectively work out and convey the details of their vision to the people who will help execute it. The process allows sets, stunts, visual effects, sequences of
action, story flow and other elements to be designed, developed, understood,
refined and planned in advance using the visual power of 3-D modeling and
animation and with a high degree of production-specific accuracy.
The prehistory of previsualization prior to the availability of computer-
based imaging includes many years of creative solutions using a variety
of physical tools. Popular techniques included mocking up shots and key
elements with materials such as foam core and prefilming the “action” with
lipstick video cameras. Digital previsualization first developed in the early
1990s in tandem with major milestones and enhancements in computer
animation software and processing speed. Professionals around the industry
quickly embraced these enhancements, with the majority utilizing the new
3-D animation capabilities towards creating stunning, never-before-seen
synthetic creatures and imagery in visual effects. Concurrently, a smaller
group was applying the same advancements to improve fundamental film
design, planning and production techniques to create what is now called
previz.
The earliest innovations in digital previz were made by pioneers at stu-
dios such as Sony Pictures Imageworks, Lucasfilm/ILM and The Trumbull
Company. These innovators focused on large, visual effects-driven projects,
paving the way for a wider use of previz with an expanding pool of previz-
specific artists. Today previz is used on projects of all genres where a high
degree of control over creative execution or advanced problem solving is
sought. Previz continues to map to the needs of the market as the demands
quickly evaluate captured imagery. This includes the use of techniques that
can synchronize and composite live photography with 2-D or 3-D virtual
elements for immediate visual feedback.
Rather than delivering a specific end previz product or finished sequence,
the purpose of on-set previz is to leverage previz assets to provide a quick
mechanism for visualizing and evaluating any number of “what if?” sce-
narios. Decisions on trying out a specific setup can be made effectively, for
example, before heavy rigging or equipment is moved. On-set previz can
additionally serve a purely creative function as instant “postviz,” with the
video tap image being integrated into the previz sequence to immediately
see how the shot footage works.
Postviz combines digital elements and production photography to validate
footage selection, provide placeholder shots for editorial, and refine effects
design. Edits incorporating postviz sequences are often shown to test audi-
ences for feedback, and to producers and visual effects vendors for planning
and budgeting.
Postviz is typically driven by the editorial process and is especially essen-
tial when there are key plot points, CG characters, etc. that are dependent
on being realized through visual effects. Postviz, typically a combination of
previz combined with principal photography, bridges the gap to provide im-
mediate visual placeholders for storytelling purposes until final visual effects
elements arrive. Postviz often also provides helpful information for visual
effects completion and integration.
Asset Assembly
Working from a script, treatment, story outline or storyboards, key scene
elements comprising the main visual subject matter of the sequence are
constructed accurately in 3-D in the computer. The elements can include
characters, sets, vehicles, creatures, props—anything that appears in the
sequence being previsualized. The assets are ideally sourced through the
Art Department (e.g., as concept art, from storyboards or from physical
or digital models) and have already been approved by key creatives on the
project. If specific assets, such as locations, are still being determined, best
approximations are made.
Sequence Animation
The assets are laid out in space and time using 3-D software to create an
animated telling of the story’s action. The resulting previz sequence is reviewed by key creative personnel and is further tweaked until accurate and
approved.
Virtual Shoot
Virtual cameras are placed in the scene. Previz artists ensure that these
cameras are calibrated to specific physical camera and lens characteristics,
including camera format and aspect ratio. This step allows the animated
scene to be “seen” as it would from actual shooting cameras. These views can
be based on storyboard imagery or requests from the director or cinematog-
rapher. New camera angles or additional coverage of the same scene can then be generated with little extra effort.
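As a simple illustration of the calibration step above, a virtual camera's horizontal angle of view can be matched to a physical lens from its focal length and the width of the film gate or sensor. The following is a minimal sketch only; the function name and the nominal Super 35 gate width are illustrative assumptions, not specifications from this manual.

    import math

    def horizontal_fov_deg(focal_length_mm: float, gate_width_mm: float) -> float:
        """Horizontal angle of view for a given focal length and gate width."""
        return math.degrees(2 * math.atan(gate_width_mm / (2 * focal_length_mm)))

    # Example: a 24mm lens on a nominal 24.9mm-wide Super 35 gate (assumed value)
    print(round(horizontal_fov_deg(24, 24.9), 1))  # ~54.8 degrees

Matching this angle between the virtual and physical cameras is what lets the animated scene be "seen" as it would be through the actual shooting lens.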
Sequence Cutting
As previz shots/sequences are completed, they can be cut together to re-
flect a more cinematic flow of action, and are more useful as the entirety of
the film is evaluated and planned. Changes required for previz sequences
often are identified in the editing process.
needs that would benefit from on-set previz. This team can continue to be
effective after the shoot is complete in a postviz capacity, using previz assets
to make temp shots that accelerate the editing process and streamline the
production of final visual effects.
1. The previz represents an approved creative road map. The director can
watch a previz sequence and is confident that it expresses the film’s in-
tended story and creative vision.
2. The previz represents an approved technical road map. The previz is based on a foundation of the most accurate and current representations of sets, locations and production conditions, and incorporates input from many production departments.
3. The previz is distributed across the production and viewed as achievable.
Previz sequences have been evaluated and provided as needed, and ev-
eryone on the production that the director has chosen to help realize the
project feels it can be done as planned in both the practical and budgetary
sense.
4. The previz is completed early enough to be effective. Previz sequences are
done in time to affect the actual production of the shots.
Proscenium Arch: For our purposes, this refers to the edges of the screen, which can occlude an object appearing in front of the screen plane.
Interocular (also called Interaxial): The distance between your eyes
(Figure 3) is properly referred to as interocular. In 3-D cinematogra-
phy, the distance between the taking lenses is properly called interaxial;
however, more recently, you will find filmmakers incorrectly referring
to the distance between taking lenses as the interocular. In 3-D cinema-
tography, if done properly, the interaxial distance between the taking
lenses needs to be calculated on a shot by shot basis. The interaxial
distance between the taking lenses must take into account an average
of the viewing conditions in which the motion picture will be screened.
For large screen presentations, the distance is often much less than the
distance between an average set of human eyes. Within reason, the in-
teraxial can be altered to exaggerate or minimize the 3-D effect.
The 3-D cinematographer must weigh several factors when determining
the appropriate interaxial for a shot. They are: focal length of taking lenses,
average screen size for how the movie will be projected, continuity with the
next shot in the final edit, and whether it will be necessary to have a dynamic
interaxial that will change during the shot.
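To get a feel for how these factors interact, the sketch below estimates on-screen parallax from the interaxial, the convergence distance and the subject distance, using a standard stereography approximation in which parallax scales with screen magnification, focal length and interaxial. All function names and example numbers are assumptions for illustration, not values prescribed by this manual.

    def screen_parallax_mm(interaxial_mm, focal_mm, conv_dist_mm,
                           subject_dist_mm, sensor_w_mm, screen_w_mm):
        """Approximate on-screen parallax; positive = behind the screen plane."""
        magnification = screen_w_mm / sensor_w_mm
        return (magnification * focal_mm * interaxial_mm
                * (1.0 / conv_dist_mm - 1.0 / subject_dist_mm))

    # Example: 65mm interaxial, 35mm lens converged at 4m, subject at 8m,
    # 24.9mm-wide gate shown on a 10m-wide screen (all assumed values).
    print(round(screen_parallax_mm(65, 35, 4000, 8000, 24.9, 10000), 1))  # ~114mm

Note that 114mm of positive parallax is well beyond an average interocular, forcing the eyes to diverge; this is one reason, as noted above, that large-screen interaxials are kept much smaller than the distance between human eyes.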
Because the interaxial distances are crafted for a specific theatrical presen-
tation, a 3-D motion picture doesn’t easily drop into a smaller home viewing
environment. A movie usually will require adaptation and modification of
the interaxial distances in order to work effectively in a small home theater
display screen environment.
The facts presented in this chapter are indisputable, but once you become
enmeshed in the world of 3-D, you will encounter many differing opinions
on the appropriate ways to photograph and project a 3-D image. For ex-
ample, when you’re originating images for large-format 3-D presentations
(IMAX, Iwerks, etc.), some people will direct you to photograph images in
ways that differ from the methods used for 1.85:1 or 2.40:1 presentations.
Part of this is due to the requirements for creating stereoscopic illusions in
a large-screen (rather than small-screen) environment, but approaches also
derive from personal preferences.
Many think stereoscopic cinematography involves merely adding an ad-
ditional camera to mimic the left-eye/right-eye way we see the world, and
everything else about the image-making process remains the same. If that
were the case, this chapter wouldn’t be necessary.
First off, “3-D” movies are not actually three-dimensional. 3-D movies
hinge on visual cues to your brain that trigger depth stimuli, which in turn
create an illusion resembling our 3-D depth perception. In a theatrical envi-
ronment, this is achieved by simultaneously projecting images that represent,
respectively, the left-eye and right-eye points of view. Through the use of
glasses worn by the audience, the left eye sees only the left-eye images, and the right eye sees only the right-eye images. (At the end of this chapter is a list of the types of 3-D glasses and projection techniques that currently exist.)
Most people believe depth perception is created only by the use of our eyes. This is only partially correct. As human beings, our left-eye/right-eye stereoscopic depth perception ends somewhere between 19' and 23' (6 to 7 meters). Beyond that, where stereoscopic depth perception ends, monocular depth perception kicks in.
Monocular depth perception is an acquired knowledge you gain gradually as a child. For example, when an object gets larger, you soon learn it is getting closer, and when you lean from side to side, objects closer to you move more quickly than distant objects. Monocular depth perception is what allows you to catch a ball, for example.
3-D movies create visual depth cues based on where left-eye/right-eye
images are placed on the screen. When you want an object to appear coincident with the screen plane, both left- and right-eye images are
projected on the same location on the screen. When photographing such a
scene, the cinematographer determines the apparent distance of the screen
plane to the audience by the width of the field of view, as dictated by the focal
length of the chosen lenses. For example, a wide landscape vista might create
a screen-plane distance that appears to be 40' from the audience, whereas
a tight close-up might make the screen appear to be 2' from the audience.
Figure 4. Eyes converging on an “on screen” object. As seen from above, as if viewing from the theater’s ceiling, looking down on the audience and the screen plane.
Figure 5. How a behind-screen object is created.
Figure 4 illustrates when an object is at the screen plane and where the au-
dience’s eyes converge while viewing that object. (Figure 4 also effectively
represents where your eyes converge and focus when watching a standard
2-D movie without special glasses).
If we want an object to appear behind the screen, the image is photo-
graphed with the lenses converged behind the screen plane. On set, the
screen plane is an invisible plane that you establish to control where objects
will be perceived by the viewer of the 3-D motion picture. In the theater, of
course, the screen plane is a very real, physical object.
When the object you want to appear behind the screen is projected, it
appears similar to what is presented in Figure 5.
In Figure 5, the right-eye and left-eye images are kept separated by the
special glasses worn by the audience; in other words, the left eye sees only
the left-eye image and the right eye sees only the right-eye image. If you were
to remove your glasses, you would see both images simultaneously, as in the
Figure 6 example of a behind-screen object viewed without special glasses.
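The perceived distance of a behind-screen (or in-front-of-screen) object can be estimated from the on-screen parallax, the viewer's distance from the screen and the interocular separation, using simple similar-triangles geometry. The sketch below is an illustrative approximation; the names and numbers are assumptions, not figures from this manual.

    def perceived_distance_mm(parallax_mm, viewing_dist_mm, interocular_mm=65.0):
        """Distance from viewer at which a point appears to sit.
        Positive parallax = behind the screen; negative = in front."""
        if parallax_mm >= interocular_mm:
            raise ValueError("parallax >= interocular forces the eyes to diverge")
        return interocular_mm * viewing_dist_mm / (interocular_mm - parallax_mm)

    # Viewer seated 10m from the screen (assumed):
    print(round(perceived_distance_mm(30, 10_000)))   # ~18571mm: behind the screen
    print(round(perceived_distance_mm(-65, 10_000)))  # 5000mm: halfway to the viewer

Because the viewing distance appears in the formula, two viewers seated at different distances derive different depths from the same parallax, which is exactly the seating-position effect Figure 9 illustrates.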
Next, we want an object to appear in front of the screen plane so that from
the audience’s perspective, the object appears to be coming into the theater
Figure 7. How an object appearing in front of the screen is created.
Figure 8. On-screen objects are seen in the same location by all audience members.
your face, your eyes converge and focus at half a meter from your face. In
a 3-D movie environment, you can choose an angle of view and scale that,
from your perspective, makes an object appear to be half a meter from your
face even as your eyes are focused on the screen plane, which may be any-
where from 5 to 30 meters away from you.
This doesn’t mean the 3-D approach is “wrong”; it’s just an example of why 3-D depth cues in a 3-D movie often seem to be exaggerated—why 3-D movies seem to be more 3-D than reality.
When an object appears on the screen plane, every member of the audi-
ence sees the object at the same location on the screen because the left- and
right-eye images appear precisely laid on top of each other (and thus appear
as one image). Basically, the image appears the same as it would during a
regular “2-D” movie projection (Figure 8).
Take a look at Figure 9, however, and see how things change when an
object is placed behind the screen plane. As you can see, a person’s specific
location in the theater will affect his perception of where that behind-screen
object is located. Also, how close that person is to the screen will affect how
far behind the screen that object appears to be; the closer one’s seat is to the
screen, the shorter the distance between the screen and the object “behind”
it appears to be.
Again, it is not “wrong” that this happens. Figure 9 simply clarifies the
point that stereoscopic cinematography is not 3-D. Were it truly 3-D, every
audience member would see these behind-screen objects in the same location.
Figure 9. Audience position affects both lateral and depth convergence of behind-screen objects.
When planning shots for a 3-D motion picture, the filmmaker should be conscious of how a dramatic moment might be received by viewers seated in various locations. Audience position affects the perceived location of off-screen objects as well.
My next points concern the proscenium arch and “off-screen” objects. As
mentioned earlier, the edges of the screen image (top, bottom, left and right)
are collectively referred to as the proscenium arch. This is a nod towards
live theater, where the term applies to that area of the stage in front of the
curtain. In 3-D, the term is used when referring to objects that appear to be
in front of the screen plane.
In short, the edges of the screen are relevant to objects appearing in front
of the screen plane. Such an object can have very strong stereoscopic con-
vergence cues that will make it appear to be “floating” very close to a viewer’s
face. A good example of this phenomenon occurs in the film Muppet*vision
3-D, during a scene in which the characters Waldo and Kermit the Frog ap-
pear to extend into the audience while clinging to the end of a ladder. Many
recent 3-D motion pictures avail themselves of this 3-D principle, from Beowulf, when a spear is thrust toward Beowulf’s face after he arrives on a beach, to dogs lunging toward the camera in Hugo.
However, if that floating object moves so close to the edge of the screen
that it is occluded by that edge, your brain will quickly employ its knowledge
of monocular depth cues, and your perception that the object is floating in
front of the screen will diminish to the point of inconsequence. Your brain
has learned that when one object is occluded by another, the occluded object
must be farther away. In spite of all the stereoscopic depth cues, your brain
knows that if that object is occluded by the edge of the screen, then it must
be at, or behind, the screen plane. This scenario will be very noticeable to
viewers as their brains attempt to sort out these contradictory depth cues.
Monocular depth perception overrules stereo depth cues because we are
hard-wired to protect ourselves from danger. Because most danger (such as
an approaching lion, bear or saber-toothed tiger) starts from outside our
stereoscopic depth zone, it’s easy to understand how the brain defaults to
the depth cues that govern most of our life. The 3-D axiom to remember is
that off-screen objects should never touch the edge of the screen, because
if they do, the illusion will be disrupted. The illusion is most effective with
objects that can float or be thrust toward the audience. You will also no-
tice that when you experience these illusions, filmmakers are keeping the
off-screen objects closer to the center of the screen in order to avoid the
proscenium arch.
As with many axioms, however, there is sometimes an exception. There
is a scenario in which an occluded object can still appear as though it is
coming off the screen. Imagine a medium shot of a man who walks from
behind the screen plane toward the screen plane, and then continues toward
the audience until he is in front of the screen. Surprisingly, this shot will still
work with the character apparently coming off the screen, even though the
lower half of his body is cut off by the bottom of the projected image. The
requirement for it to work, contrary to our earlier axiom, is that the viewer
must have other audience members in front of him, with the bottom of the
screen occluded by people’s heads. When the bottom of an object is occluded
by people very close to you, your brain will still believe the object is getting
closer. However, even a clear view of the bottom of the screen will result in a fairly good effect of the man coming off the screen; because we’re programmed to look straight ahead, and often don’t see or focus on the lower half of a person coming toward us, obscuring the lower half of a person usually won’t entirely ruin the off-screen effect.
One must also be aware of the constraints on editing in 3-D. This concept
is relatively simple to grasp but is often disregarded to the detriment of a
3-D presentation. When editing for 3-D, it is important to consider the con-
vergence extremes that the audience will experience in order to realize the
stereoscopic illusion. For example, if the audience is viewing action that oc-
curs behind the screen plane, it is inadvisable to then cut directly to an object
in front of the screen. The average viewer will have difficulty converging the
suddenly “close” object, to the point where he might see double images for
several moments.
Experienced viewers of 3-D films won’t have this problem, and this can
lead to mistakes if you happen to be part of the creative team involved in a
3-D production. If you work extensively in post for 3-D movies, you become
more and more adept at quickly converging disparate objects. However,
your audience won’t have the advantage of exercising their eyes as much as
someone working on a 3-D film. If this disparity isn’t taken into account,
the resultant movie can cause problems for the audience. The filmmakers
will have no trouble watching it, but the average viewer will find it difficult
to converge 3-D images that cut between extreme positions in front of and
behind the screen plane.
These extremes of convergence can be mitigated to a great degree in the digital intermediate or by the post handling of the 3-D images. A technique of “handing off” the A side of a cut to the B side by quickly easing the convergence between the two cuts can be very effective at making the cut much easier for the audience to watch.
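As a rough illustration of such a hand-off, convergence (expressed here as screen parallax) can be eased from the outgoing shot's value to the incoming shot's value over a handful of frames around the cut. This is only a schematic sketch of the idea; the function name, ramp length and parallax values are invented for the example.

    def eased_parallax(p_out, p_in, frame, ramp_frames=12):
        """Ease screen parallax from the outgoing shot's value to the
        incoming shot's value over ramp_frames frames (smoothstep curve)."""
        t = min(max(frame / ramp_frames, 0.0), 1.0)
        t = t * t * (3 - 2 * t)  # gentle start and finish
        return p_out + (p_in - p_out) * t

    # Cut from behind-screen action (+40mm) to a close object (-50mm), assumed:
    for f in (0, 3, 6, 9, 12):
        print(f, round(eased_parallax(40, -50, f), 1))

Easing over roughly half a second gives the audience's eyes time to track the convergence change instead of jumping it.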
Some 3-D films attempt to guide the viewer to converge objects in front of
the screen. They do this by slowly bringing an object closer to the audience,
allowing viewers to track the object as it comes farther and farther off the
screen. The makers of the theme-park attraction Captain EO accomplished
this with a shot of a floating asteroid that comes off the screen at the begin-
ning of the film. In Muppet*vision 3-D, the effect is created with the simple
gag of a “3-D” logo positioned at the end of a broomstick that is pushed into
the audience’s face; the effect is repeated at the end of the film with the shot of
Kermit perched at the end of a fire truck’s ladder. In Terminator 2®: 3-D, Robert Patrick’s molten head slowly comes off the screen towards the audience.
The Glasses
Creating the illusion of true stereoscopic imaging requires that the left
eye and right eye see unique left/right imagery. This is achieved with various
kinds of glasses that perform the task for the viewer. It should be noted that
all 3-D glasses—and projection systems—have an impact on the brightness
of the projected image, generally a light loss on the order of 2–3 stops.
The first public use of polarized glasses for a 3-D motion-picture presenta-
tion took place at the 1939 New York World’s Fair. Called linear polarizers,
these glasses work by orienting the left- and right-eye polarizing material at
right angles to each other. Though this is not actually visible to your eye, the
effect is graphically illustrated in Figure 11.
In this system, the left- and right-eye projected images are each projected through linear polarizers that match the orientation of the glasses. This method is effective at separating the left and right image material; however, if the viewer tilts his head at an angle, there will be “leakage” of the image information, allowing the viewer to see both images simultaneously, which, of course, is not ideal. Polarizing glasses, and all glasses that do not require any electronics in order to function, are called “passive glasses.” Polarized 3-D presentations always require projection onto a silver screen in order to maintain polarization of the projected image.

Figure 10. Linear Polarized Glasses.
Figure 11. Linear Polarizing Glasses, graphically illustrated.

Glasses that employ circular polarization look, on casual observation, just like their linear cousins in Figure 10, but perform quite differently. A simple explanation is that circular polarization can be oriented in clockwise and counterclockwise directions. The left- and right-eye projected images are polarized in opposing circular-polarization orientations, as are the respective lenses on the glasses. The circular-polarization effect is graphically illustrated in Figure 12.

Figure 12. Circular Polarization Glasses, graphically illustrated.

A principal advantage of circular polarization is that the integrity of the left- and right-eye image information is always maintained, no matter how much the viewer tilts his head. Circular polarization is the technique employed with the RealD “Z Screen”
Day-for-Night
darken the sky with the sun at certain angles, are both useful for either color
or black-and-white films because they do not affect color values and can be
used in combination with other effect filters.
Neutral-density filters will tone down a “hot” sky even if it is bald white.
A partial or graduated neutral-density filter covering only the sky will
therefore be very useful for bringing the sky into exposure balance with the
foreground. Care must be taken, however, that action does not cross the
demarcation line between the filter material and the clear glass area. Pola
Screens are most useful when the sun is directly overhead at right angles to
the camera.
A Pola Screen should not be employed if the camera must be panned
through a wide arc, since the polarization will vary and the sky tone will
change in density as the camera revolves. Typical underexposure is 1½ to
2½ stops, rarely more. Brilliant sunlight will require greater underexpo-
sure, gray days less. The underexposure can be handled in several ways.
One is by ignoring the filter exposure increase required, if it is close to
the amount of underexposure desired. For instance, the filter being em-
ployed may require two stops increase in exposure for a normal effect.
The increase is ignored and the diaphragm set for the exposure without
the filter, thus delivering the necessary underexposure for the night effect.
Or, a neutral density of the desired strength is employed and its exposure
increase ignored.
Proceed as follows: insert the effect filter or combination of filters for the
desired effect, and allow for their exposure increase as in normal filming.
Add the desired neutral (a .30 for one stop, .50 for 1½ stops or a .60 for
two stops). Ignoring the neutral filter’s exposure increase will automatically
underexpose the negative the necessary amount. This is a quick and effec-
tive method in fast production shooting where night effects are suddenly
required and little or no time is available for computations.
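The arithmetic behind those density values: each 0.30 of neutral density costs one stop, since a stop halves the light and log10(2) is approximately 0.30. A minimal sketch (the function name is mine), using only the densities quoted above:

    import math

    def nd_stops(density: float) -> float:
        """Stops of light lost to a neutral-density filter of a given density."""
        return density / math.log10(2)  # log10(2) ~= 0.30, so .30 ND ~= 1 stop

    for d in (0.30, 0.50, 0.60):
        print(d, round(nd_stops(d), 1))  # 1.0, 1.7 and 2.0 stops

The .50 value works out to about 1⅔ stops, which the text rounds to 1½ for field use.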
If the sky is not sufficiently blue to filter properly, and if it is impossible to
use a graduated neutral-density filter, try to avoid the sky as much as pos-
sible by shooting against buildings or foliage, or choose a high angle and
shoot downward.
The contrast between the players and the background is very important
because a definite separation is desirable. Dark clothing, for instance, will
merge with a dark background, and the player will be lost. It is better to
leave a dark background dark and players in lighter, although not necessarily
white, clothing than to have a light background and players in dark clothing.
The latter combination will result in a silhouette, rather than a night effect.
This is the reason that back-cross lighting is preferable: so the background
is not illuminated and the players have a definite separation through edge
lighting, which also imparts shimmering highlights.
Black-and-White Film
The illusion of night in black-and-white cinematography is obtained by
combining contrast filtering with underexposure. Since the sky is light by day
and dark by night, it is the principal area of the scene requiring correction. Any
of the yellow-orange or red filters may be used. A very popular combination is
the light red Wratten 23A plus the green 56. This combination does everything
the red filters accomplish—plus it darkens flesh tones, which are rendered too
light by the red filters alone. When combining filters, remember that red filters
add contrast but green filters flatten, so if a greater flattening effect is desired
add a heavier green filter. Because flesh tones are not important in long shots,
such shots are sometimes filmed with heavier red filters, and only the medium
and close shots are made with the combination red-green filters. Care must
be taken, however, that clothing and background colors do not photograph
differently when filters are switched in the same sequence. If in doubt, shoot
tests before production filming begins. Remember that only a blue sky can be
filtered down. No amount of color filtering will darken a bald white sky. Use
graduated neutral densities, or avoid the sky under these adverse conditions.
The 23A-56 combination is usually employed with a filter factor of 6, rather
than the 20 normally required (5 for the 23A and 4 for the 56, which multiplied
equals 20). The factor of 6 automatically underexposes this filter combination
approximately 1½ stops and achieves the desired effect without further com-
putation. If a red filter is used alone, bear in mind that it will lighten faces, and
use a darker makeup (approximately two shades) on close shots.
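A filter factor converts to stops as log2(factor), so the net underexposure from using a factor of 6 instead of 20 can be checked quickly. The sketch below is illustrative only, using just the numbers quoted in the paragraph above:

    import math

    def factor_to_stops(factor: float) -> float:
        """Exposure increase, in stops, implied by a filter factor."""
        return math.log2(factor)

    full = factor_to_stops(5 * 4)  # 23A (5) x 56 (4) = factor 20 -> ~4.3 stops
    used = factor_to_stops(6)      # compensating only for a factor of 6 -> ~2.6
    print(round(full - used, 2))   # ~1.74 stops of net underexposure

The exact figure is closer to 1¾ stops; “approximately 1½ stops” is the customary on-set rounding.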
which will prevent overexposure of the blue sensitive layer and keep the
negative within printing range. Warmer effects may be obtained by sub-
stituting a light yellow filter for the 85. A Pola Screen may also be used to
darken a blue sky and provide the required underexposure (by ignoring its
filter factor). It will have no effect on a bald sky, but it will act as a neutral-
density filter and provide the needed underexposure. Remember that ap-
proximately ⅔ stop exposure is gained by removing the 85 filter. This must
be included in exposure calculations.
Infrared Cinematography
Because cinematography by infrared light has had limited pictorial use,
this will be a brief review. For more information, refer to Kodak publica-
tions number N-17 “Kodak Infrared Films” and M-28 “Applied Infrared
Photography.” Infrared for photographic purposes is defined as that part of
the spectrum, approximately 700 to 900 nanometers (nm), which is beyond
the visible red but not as far as would be sensed by humans as heat.
All infrared films are sensitive to heat and should be kept refrigerated
before exposure and during any holding time before processing. While no
longer listed as a regular catalogue item, Eastman Kodak still manufactures
a black-and-white infrared-sensitive film, Kodak High Speed Infrared Film
2481, and modified color-sensitive film, Kodak Ektachrome Infrared Film
2236. Both of these films are on Estar base. Before deciding to use either
film in a production, the manufacturer should be contacted regarding its
availability, minimum order quantities and delay in delivery.
Black-and-White Films
For pictorial purposes, the greatest use of infrared-sensitive film for
motion-picture photography has been for “day for night” effects. Foliage
and grass reflect infrared and record as white on black-and-white film.
Painted materials that visually match in color but do not have a high in-
frared reflectance will appear dark. Skies are rendered almost black, clouds
and snow are white, shadows are dark but often show considerable detail.
Faces require special makeup and clothing can only be judged by testing.
A suggested EI for testing prior to production is daylight EI 50, tungsten
EI 125 with a Wratten 25, 29, 70 or 89 filter, or daylight EI 25, tungsten EI
64 with 87 or 88A (visually opaque) filter. Infrared light comes to a focus
farther from the lens than does visual light. (Speak to your lens supplier for
correct focus compensation for infrared photography.)
Color
No human can see infrared; color film can only record and interpret it.
Kodak Ektachrome Infrared Film 2236 was originally devised for camouflage detection. Its three image layers are sensitized to green, red and
infrared instead of blue, green and red. Later applications were found in
medicine, ecology, plant pathology, hydrology, geology and archeology. Its
only pictorial use has been to produce weird color effects.
In use, all blue light is filtered out with a Wratten 12 filter; visible green re-
cords as blue, visible red as green, and infrared as red. The blue, being filtered
out, is black on the reversal color film. Because visible yellow light is used as
well as infrared, focus is normal, and the use of a light meter is normal for
this part of the spectrum. What happens to the infrared reflected light is not
measurable by conventional methods, so testing is advisable. A suggested
EI for testing prior to production is daylight EI 100 with a Wratten 12 filter.
Ultraviolet Cinematography
There are two distinctly different techniques for cinematography using
ultraviolet radiation, and since they are often confused with each other,
both will be described.
In the first technique, called reflected-ultraviolet photography, the expo-
sure is made by invisible ultraviolet radiation reflected from an object. This
method is similar to conventional photography, in which you photograph
light reflected from the subject. To take pictures by reflected ultraviolet,
most conventional films can be used, but the camera lens must be covered
with a filter, such as the Wratten 18A, that transmits the invisible ultraviolet
and allows no visible light to reach the film. This is true ultraviolet photog-
raphy; it is used principally to show details otherwise invisible in scientific
and technical photography. Reflected-ultraviolet photography has almost
no application for motion-picture purposes; if you have questions about
reflected-ultraviolet photography, information is given in the book “Ultra-
violet and Fluorescence Photography,” available from Eastman Kodak.
The second technique is known as fluorescence, or black-light, pho-
tography. In motion-picture photography, it is used principally for visual
effects. Certain objects, when subjected to invisible ultraviolet light, will
give off visible radiation called fluorescence, which can be photographed
with conventional film. Some objects fluoresce particularly well and are
described as being fluorescent. They can be obtained in various forms such
as inks, paints, crayons, papers, cloth and some rocks. Some plastic items,
bright-colored articles of clothing and cosmetics are also typical objects
that may fluoresce. For objects that don’t fluoresce, fluorescent paints (oil
or water base), chalks or crayons can be added. These materials are sold
by art-supply stores, craft shops, department stores and hardware stores.
(Many of these items can also be obtained from Wildfire Inc., 10853 Venice
Blvd., Los Angeles, CA, 90034, which manufactures them specially for the
motion-picture industry.)
Determining Exposure
Many exposure meters are not sensitive enough to determine exposure
for fluorescence. An extremely sensitive exposure meter should indicate
proper exposure of objects that fluoresce brightly under intense ultraviolet,
if you make the meter reading with a No. 2A or 2B filter over the meter
cell. If your exposure meter is not sensitive enough to respond to the rela-
tive brightness of fluorescence, the most practical method of determining
exposure is to make exposure tests using the same type of film, filters and
setup you plan to use for your fluorescence photography.
Films
While either black-and-white or color camera films can be used for
fluorescence photography, color film produces the most dramatic results.
The daylight-balanced films will accentuate the reds and yellows, while
the tungsten-balanced films will accentuate the blues. Since fluorescence produces a relatively low light level for photography, a high-speed film such as Eastman Kodak Vision3 500T (5219) or Vision3 250D (5207) is recommended.
Special Considerations
Some lenses and filters will also fluoresce under ultraviolet radiation.
Hold the lens or filter close to the ultraviolet lamp to look for fluorescence.
Fluorescence of the lens or filter will cause a general veiling or fog in your
pictures. In severe cases, the fog completely obscures the image. If a lens or
filter fluoresces, you can still use it for fluorescence photography if you put
the recommended ultraviolet-absorbing filter over the camera lens or the
filter that fluoresces. It also helps to position the ultraviolet lamp or use a
matte box to prevent the ultraviolet radiation from striking the lens or filter.
Aerial Cinematography
by Jon Kranhouse
Aesthetics
Time of day: Of course, early morning and late afternoon light is desirable
to reveal textures on the ground, and wind is usually calmest at these
times. Consideration must be given to keeping the aircraft’s shadow or reflection out of the shot. Midday heat can cause wind gusts, affecting
stability. It is common for pilot and camera crew to be aboard a warmed-
up helicopter, parked and “turning” on the ground, awaiting the earliest
flyable light—it is not safe to fly low-to-ground in pitch-black night.
Night filming is perfectly safe at altitude—above the power lines.
Metering: When the sky is clear, an incident light meter is quick and easy;
the light hitting you is the same as that hitting your subject. When shafts
of sunlight poke through stormy clouds, a spot meter is most useful.
Lens hoods for spot meters prevent flare from biasing readings.
Filming speeds: To create shot dynamics, shooting at less than 24 fps is common. When viewed from the air, most inanimate air-to-ground subjects (e.g., city traffic, ocean waves and large ships) appear to move slowly; they can safely be shot at 18–22 fps and still appear quite natural (the sketch after this list shows the speed-up arithmetic). Exterior gyro mounts allow for exceptional stability at very slow framing rates (e.g., 2 fps), but usable pans and tilts must be pre-programmed. A door mount or handheld camera may be more appropriate if a less steady POV is called for. Conversely, in turbulent air with a door mount, slight overcranking (30–36 fps) can smooth results.
Choice of film stocks/T-stops: Geography and sun conditions may require changing the T-stop in-shot. It is not unusual to pull from T4 to T22 in one shot (a swing of nearly five stops; see the sketch after this list); the trick is hiding the change with shot choreography and a smooth remote aperture motor. Door mounts allow the changing of filters and magazines in midair, allowing for quick adaptation to changing conditions. Nose and exterior gyro mounts, however, require a smart compromise to be considered before take-off during sunrise/sunset periods of rapidly changing light intensity. You don’t want to have to land to change filters or stocks just when the light is magic.
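Both figures above reduce to simple ratios: undercranking speeds up apparent motion by (projection fps ÷ capture fps), and a T-stop pull spans 2 × log2(T_to ÷ T_from) stops. A minimal sketch of that arithmetic (the function names are mine):

    import math

    def speedup(capture_fps: float, projection_fps: float = 24.0) -> float:
        """How many times faster motion appears when undercranked."""
        return projection_fps / capture_fps

    def stops_between(t_from: float, t_to: float) -> float:
        """Exposure difference, in stops, between two T-stops."""
        return 2 * math.log2(t_to / t_from)

    print(round(speedup(20), 2))           # 1.2x faster at 20 fps
    print(round(stops_between(4, 22), 1))  # ~4.9 stops from T4 to T22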
Aviation Basics
FAA Regulations for Camera Mounts
Federal Aviation Administration rules require that when any object is
bolted or strapped anywhere to any airframe—Cessna or 747—the instal-
lation must be certified airworthy by a recognized FAA inspector. The in-
spector issues a temporary-use type 337 certificate specific to that aircraft.
If you use the same mounting bracket on another aircraft of identical type,
a new 337 must be obtained. Once granted, the 337 is added to a particular
aircraft’s manual, so future reinstallations don’t require FAA recertification.
Time must be allowed in the production schedule for this FAA field inspec-
tion, if required; the inspector is typically busy and backlogged.
Mounting hardware that has earned an STC (Supplemental Type Certificate) means that the equipment vendor has persevered through a costly process. STC installations do not require field inspection by an FAA examiner and are certified flight-ready by a more readily available A&P (airframe and powerplant) mechanic.
Pilots
A truly qualified pilot is critical for both the safety and the success of the production; it is obviously essential for the pilot to have many hours of “time-in-type” in similar aircraft. When filming in the United States, a pilot should be
operating under his or her own (or company’s) FAA Motion Picture Manual.
This allows a pilot some latitude within the FAA guidelines, which restrict
the activities of all other aircraft. A high level of expertise and judgment
must be demonstrated before the FAA grants such manuals. Of course, many
flying situations still require advance approval from the regional FAA office,
as well as other local authorities. Preproduction consultation with pilots is
strongly recommended.
Remotely operated gyrostabilized camera systems are often called upon
for very close work with long lenses; precise camera positioning is absolutely
critical. Few pilots have the skills necessary to hold a helicopter in a steady-
enough hover for a shot with a tight frame size. While the gyro systems isolate
the camera from helicopter vibration, an unstable hover causes XYZ spatial
excursions, resulting in constant relocations of the film-plane—an undesir-
able parallax shift. The footage may appear as if the camera is panning or
tilting; actually, the helicopter is wobbling about in the hover position.
A local pilot and helicopter may be the only option when on very remote
locations. Some regional helicopter companies may not allow other pilots
to fly their helicopter. These local pilots must understand that some filming
maneuvers might require an exceptional degree of mechanical reliability from
their aircraft; when hovering below 300 feet in calm air, a safe autorotate is
impossible. Spend the minimum amount of time hovering low, and don’t press
your luck with unnecessary takes. If you must work with a helicopter pilot who
has no film experience, try to choose one with many hours of long-line work
(i.e., heavy cargo suspended by cable, or water-bucket fire suppression), as this
type of flying requires both a high level of aviation skill and judgment.
have less lift and agility. Unexpected wind gusts may demand extra torque (or turbine temperature margin) that is not available when close to maximum payload. An extra person also means that less fuel can be carried.
Fuel Trucks
Helicopter agility suffers greatly as payload increases; having a fuel
truck on location allows fuel weight to be kept to a safe minimum. Not
all airports have jet fuel, which is required for the jet-turbine helicopters
listed below. Not all fuel trucks are permitted to leave airports and travel
on public roads.
Weather Basics
Aircraft performance suffers when air becomes less dense, which happens
when heat and/or altitude increases. Helicopters are most stable when moving forward faster than 20 mph air speed (translational lift).
Max Speed
When an aircraft has any kind of exterior modification for camera mounts
(i.e., door removed, nose/belly mounts or gyro mounts), a maximum allow-
able air speed known as VNE or “Velocity Not to Exceed” is determined
through FAA certification. Be sure this VNE speed will match your subject.
Air speed should not be confused with ground speed; prevailing wind con-
ditions may help or hinder filming logistics.
Safety
See the related Industry-Wide Labor-Management Safety Bulletins at:
http://www.csatf.org/bulletintro.shtml.
quires such a position, have plenty of altitude above ground level for
recovery.
Rudder Pedals: Cause an aircraft to yaw; if using a nose mount, the same as
panning. Airplanes have a vertical rudder, though single-rotor helicop-
ters use a tail rotor to control yaw. Helicopters change the pitch of the
tail rotor blades to alter the force of the tail rotor, which counteracts the
torque from the main rotor.
Settling with Power: When helicopters hover out of ground effect, but still
near the ground in still air, prop wash can bounce off the ground and
wrap back around the top of the helicopter as a cyclonic down-draft.
A pilot can be fooled into applying more horsepower, which only in-
creases the down-draft intensity. All pilots know of this phenomenon,
but those who work infrequently in low-AGL situations may not be on
guard. Try to change shot choreography to continuously move out of
your own “bad” air.
Yoke/Control Stick: Makes the aircraft pitch and roll (stick forward = nose
down, stick right = roll right). When using nose-mounts, coordinate
with the pilot so aircraft pitch and camera tilts are harmonious.
Common Helicopters
Aerospatiale A-Star/Twin-Star (AS350 and AS355 Twin)
Pro: Very powerful engine(s) for superior passenger and fuel-range capacity.
Extremely smooth flying with three-blade system. Accepts door and ex-
terior gyro-stabilized mounts (gyro mounts require helicopter to have aft
“hard points”). Tyler Nose Mount can fit if the aircraft is equipped with
custom bracket by the helicopter owner. High-altitude ability is excellent.
Con: Costlier than Jet Ranger. Does not hold critical position well in
ground-effect hover.
Aerospatiale Lama
Pro: Powerful engine mated to lightweight fuselage specifically designed
for great lifting ability and high-altitude work. Accepts door and some
gyro-stabilized mounts. Holds position quite well in ground-effect
hover. Set the helicopter height record in 1972 at over 40,000'.
Con: Expensive; very few available.
Con: Limited field of view to avoid aircraft structure, for both air-to-air and air-to-ground.
Steadicam
Using a short center post between camera and sled, attached to an appropriate vehicle mount (not worn), good results can be achieved as long as the camera is kept sheltered from the wind. Best used onboard a story aircraft, rather than as a substitute for a door mount.
Limitations: Wind pressure striking the camera causes gross instability. 337
inspections required.
market. By partnering with some of the world’s most creative and experienced DPs, Pictorvision’s engineering team integrated user functionality with industry-defining technology to produce the Eclipse—ushering in a new dawn of aerial cinematography.
Cameras: The open architecture of the Eclipse and XR-Series allows for an unlimited choice of digital or film camera and lens combinations, including the Arri 435, Alexa series, F65, Genesis, IMAX MSM, Iwerks Compact, Phantom 65, Red One and Red Epic. The Wescam can support many current digital camera systems, or Pictorvision’s own Mitchell Mk-2 (1–60 fps), all with a variety of cinema lens choices.
HD Underwater
HD technology is constantly evolving. Although you can instantly see
what you’ve recorded, at this time, HD is not as forgiving as film. The need
Refraction: Light travels in a straight line, but light that penetrates the water
undergoes a change in direction—it is refracted. For example, a pencil
placed in a half-full glass of water appears to be bent at the place where
it enters the water. Refraction is the source of most of the problems that
are encountered in underwater photography.
Color Absorption: As light travels through water, the water acts as a filter, reducing the spectrum of light that penetrates it. The long wavelengths of light—reds, oranges and other warm colors—are the first to be absorbed, and the visible light is dominated by blue-green (cyan) just a few feet below the surface. All water acts as a continuous filter, and the water depth added to the subject-to-camera distance equals the total water path that the light has to travel.
Scattering: Caused by suspended minerals, plankton, dirt, etc. The absorp-
tion and scattering of light rays reduces the intensity of light and the
saturation of colors.
Equipment
The cameraperson and assistant should be completely familiar with the
underwater camera equipment and have the ability to perform simple field
repairs. In general, the camera housing should have provisions to adjust its
weight for salt or fresh water and be slightly negative in buoyancy when sub-
merged. Features to look for are: simplicity in design; easy-to-locate, bright
reflex viewing; the ability to accept a variety of lenses; interchangeable flat
and dome ports; external focus, iris and camera run controls; video assist
to the surface; an underwater monitor; remote camera run capabilities; and
the ability to mount the housing to a tripod. Servicing the housing for lens
changes, fresh batteries and film reloads should be quick and easy so as not
to eat up production time, and a spare parts kit for repairs is mandatory.
Even though a water alarm might be incorporated into the housing’s de-
sign, the assistant should have the ability to check the watertight integrity
of the equipment before it goes into the water. Depending on the design of
the underwater housing, you can check the sealing of the housing by either
pulling a vacuum or adding slight positive pressure (approximately 2 to 3 pounds per square inch). Check with the underwater housing manufacturer as to whether a
vacuum or positive pressure should be used.
Care of Equipment
At the finish of every shooting day, always rinse off the camera housing,
lights and accessories with fresh water, especially when shooting in salt
water. A daily clean and check of all exposed O-rings should be performed,
as well as checking the external controls for camera run, iris and focus for
ease of operation. In full sun, always cover the housing with a solar blanket or at least a white towel to avoid overheating the camera, batteries and
especially the film. Do not overgrease any O-ring. The purpose of silicone
on an O-ring is to keep the rubber supple, allowing it to lay comfortably in
the groove. Too much silicone grease does nothing for the seal and actually
attracts grit and debris, which can cause a leak.
In case of salt water exposure to any camera system, immediately dis-
connect battery power. Break down the entire camera system and rinse
with fresh water. After the fresh water rinse, pour alcohol over the parts to
displace the fresh water and allow the unit to dry. If there was fresh water
exposure, go directly to the alcohol step and allow the unit to dry completely.
These emergency steps might not get your camera system back on line, but
will at least prevent much of the damage that would be caused to the camera
if it were sent in for repairs while wet.
As a camera assistant caring for an underwater housing, one of the most
important things to remember is that it’s your responsibility to maintain the
watertight integrity of the housing. No matter where the equipment comes
from, no matter how recently it has been serviced or how thoroughly it was
prepped, there are countless accidents, mistakes and oversights that can
cause a potential leak. Every time a housing is opened for a magazine or
lens change, careful attention must be paid while resealing the housing. No
housing is 100 percent leakproof if safeguards are not applied.
In addition to the camera assistant’s normal complement of tools, an
underwater service kit will cover most of the problems that might arise. This
kit should include a bottle of alcohol, a tube of silicone lube and spray, sili-
cone adhesive, dental picks, Q-tips, a Scotch Brite pad, electrical tape, WD-
40, soldering iron and solder, cotton towels, a multi-meter, spare O-rings,
self-adhesive Velcro and an Allen wrench set.
Flat Port
The flat port is unable to correct for the distortion produced by the differences between the indexes of refraction of air and water. Using a flat port underwater introduces a number of aberrations. They are:
Refraction: This is the bending of light waves as they pass through different
mediums of optical density (the air inside the camera housing and the
water outside the lens port). Light is refracted 25 percent, causing the lens
to undergo the same magnification you would see through a facemask.
The focal length of your lens also increases by approximately 25 percent.
Radial Distortion: Because flat ports do not distort light rays equally, they
have a progressive radial distortion that becomes more obvious as wider
lenses are used. The effect is a progressive blur that increases with large apertures on wide lenses. Light rays passing through the center of the
port are not affected because their direction of travel is at right angles to
the water-air interface of the port.
Chromatic Aberration: White light, when refracted, is separated into the
color spectrum. The component colors of white light do not travel at
the same speed, and light rays passing from water to glass to air will
be unequally bent. When light separates into its component colors, the
different colors slightly overlap, causing a loss of sharpness and color
saturation, which is more noticeable with wider lenses.
Dome Port
The dome port is a concentric lens that acts as an additional optical ele-
ment to the camera lens. The dome port significantly reduces the problems
of refraction, radial distortion and axial and chromatic aberrations when the
curvature of the dome’s inside radius center is placed as close as possible to
the nodal point of the lens. When a dome port is used, all the rays of light
pass through unrefracted, which allows the “in-air” lens to retain its angle
of view. Optically a “virtual image” is created inches in front of the lens.
To photograph a subject underwater with a dome port, you must focus the lens on the “virtual image,” not the subject itself. The dome port makes the footage marks on the lens totally inaccurate for underwater focus; therefore, lenses should be calibrated underwater. The dome port offers no special optics above water and functions as a clear window.
from the underwater housing’s film plane to the focus chart. Eye focus the
lens and mark the housing’s white focus knob data ring with a pencil. Slide
the camera back to continue the same process at 3', 4', 5', 6', 8', 10', 12', and
14'. This should be done for all lenses. Once a lens has been calibrated, you
must establish reference marks between the lens and data ring so that you
can accurately sync up for underwater focus when lenses are changed during
the shoot. After marking the data rings underwater with pencil, go over the
calibration marks with a fine point permanent marker on the surface.
Dome Port
To calibrate a lens with a dome port, use this basic formula to determine
the starting point for underwater focus: simply multiply the inside radius of the dome by four. That number is the approximate distance, in inches from the film plane, at which the lens should be set as a starting point for underwater eye-focus calibration. The most commonly used dome radius is
4". Multiply the 4" dome radius times 4. That gives you a measurement of 16"
at which to set your lens in the housing to begin calibration for underwater
photography. If a lens cannot focus close enough to take advantage of the
dome port, use a plus diopter to shift the lens focus back. Ultimately, your
lens should be able to focus down to at least 10" to be able to follow focus
from 1' to underwater infinity. When using most anamorphic lenses with
the dome port, you will have to add a +3 diopter to the lens to shift close
focus back in order to focus on the aerial image.
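To make the arithmetic concrete, the rule of thumb above can be expressed in a few lines of Python; this is a minimal sketch, and the function name is illustrative rather than taken from any published tool:

def dome_port_focus_start(inside_radius_inches):
    # Rule of thumb from the text: the virtual image sits at roughly
    # 4x the dome's inside radius, in inches from the film plane.
    return 4.0 * inside_radius_inches

print(dome_port_focus_start(4.0))  # 4" dome radius -> set lens at ~16"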
Flat Port
The refractive effect of the 25 percent magnification produces an apparent
shift of the subject towards the camera, to ¾ its true distance. As a general
rule, for flat port underwater photography, if you measured your camera to
subject distance at 4', you would set your lens at ¾ the distance, or around 3'.
For critical focus, especially on longer lenses and when shooting in low light,
underwater eye focus calibration is recommended. Shooting through a win-
dow or port of a tank or pool is the same as using a flat port. If you do shoot
through a tank window and want to minimize distortion, the camera lens axis
must be kept at 90° to the window’s surface. Camera moves will be limited to
dollying and booming to keep the lens perpendicular to the window’s surface.
Panning and tilting should not be done unless a distortive effect is desired.
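A minimal sketch of the ¾ rule in Python (names are illustrative; critical focus should still be confirmed by eye, as noted above):

def flat_port_focus_setting(measured_distance_feet):
    # Refraction at the flat port makes the subject appear at about
    # 3/4 of its true distance, so set the lens to 0.75x the measured
    # camera-to-subject distance.
    return 0.75 * measured_distance_feet

print(flat_port_focus_setting(4.0))  # measured 4' -> set lens at 3.0'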
Lens Selection
Wider lenses are usually the choice in underwater photography because
they eliminate much of the water column between the camera and subject
to produce the clearest, sharpest image. This is especially important in low
visibility situations. Longer lenses, which are being used more and more
underwater, usually will not focus close enough to take advantage of the
dome port’s “virtual image” (you need to focus down to 12") and a diopter
will have to be added to the lens to shift the focus back, or you can switch to
a flat port and allow for the 25 percent magnification. Most Zeiss and Cooke
lenses between 10mm and 35mm can close focus on the dome port’s “virtual
image” without using diopters. When using Panavision spherical lenses, the
close focus series of lenses allow the use of the dome port without using
diopters. The commonly used anamorphic lenses for dome port cinematog-
raphy are the Panavision “C” Series 30mm, 35mm and 40mm focal lengths.
Because these anamorphic lenses only close focus to approximately 30", you
will have to use a +3 diopter to shift minimum focus back to 12" to 14" to
focus on the “virtual image” created by the dome port. When using longer
lenses for spherical or anamorphic close-ups with the flat port, I set the lens
at minimum focus and move the camera back and forth on the subject or a
slate to find critical focus instead of racking the focus knob.
Filters
Filtering for film underwater, as in above water applications, is used to
lower light levels, correct color balance and improve image contrast. Aside
from the standard 85 filter used to correct tungsten film for daylight ex-
posure, the use of color correction filters can be a very subjective issue.
Water’s natural ability to remove reds, oranges and other warm colors can be
overcome by using underwater color correction filters. These filters alter the
spectrum of light reaching the camera to reproduce accurate skin tones and
the true colors of the sea. Because all water is a continuous filter, the deeper
you go beneath the surface the more colors are filtered out. Also, the distance
between the camera and subject must be added to the depth of the water
to determine the correct underwater filter distance. UR/PRO™ and Tiffen
AQUACOLOR® filters both provide color correction for underwater filming,
and both manufacturers offer detailed information on the use of their filters.
Polarizing filters can improve contrast where backscatter from artificial light is a problem, but sunlight illumination underwater is not sufficiently polarized, making polarizing filters ineffective.
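Since those manufacturers’ correction tables are indexed by the total water path, the combined filter distance can be computed as below (a trivial sketch with illustrative names):

def filter_distance_feet(depth_feet, subject_distance_feet):
    # The water column filters along the whole light path:
    # depth plus camera-to-subject distance.
    return depth_feet + subject_distance_feet

print(filter_distance_feet(15.0, 6.0))  # 15' deep, subject 6' away -> 21'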
Exposure Meters
Both incident and reflected light meters can be used for reading your ex-
posure underwater. In ambient light situations, the reflected meter is usually
used because it measures the light reflected from the subject through the
water column.
The most commonly used underwater light meters are the Sekonic L-164 Marine Meter and the Ikelite digital exposure meter. The Sekonic L-164, designed in 1969 specifically for underwater photography, has since been discontinued.
Underwater Communications
Communication between the director/AD and cameraperson and crew
is critical. We’ve certainly come a long way since the days of banging two
pieces of pipe together, so depending on your budget and shot list, there are
different degrees of technology available.
A simple but effective means of communications between the camera-
person and the above water director/AD is a one-way underwater P.A.
speaker system. Used in conjunction with video assist from the underwater
camera, cues for roll, cut and talent performance can be instigated by the
director on the surface. The cameraperson can also answer questions via the
video assist with hand signals or a slate presented to the camera lens or by
nodding the underwater camera for “yes” or “no”.
For more complex set-ups where the cinematographer and the director
need to talk directly to each other, or when the director works underwater
with the crew, two-way communication is very efficient. While wearing full-
face masks equipped with wireless communication systems, the underwater
cinematographer can converse with the director and also talk to the under-
water and surface crews via underwater and above water speakers.
Lighting
Specifics on lighting underwater, as in above water, will vary from cin-
ematographer to cinematographer, and the following information is merely
a guide. Experimenting and developing your own technique should be your
goal. Today’s underwater cinematographer has a wide variety of HMI, in-
candescent, fluorescent and ultraviolet lights and accessories from which to
choose. Underwater lights designed for motion picture work should have
the ability to control the light output by means of diffusion glass, filters,
scrims, snoots and barndoors. Always check the condition of lampheads,
connectors, plugs and cables before use. Become familiar with the lamp’s
lighting system and be able to perform simple repairs and maintenance.
Unlike air, even the clearest water loses significant clarity over distances as short as 10' to 20'. Artificial lighting is often needed to adjust light
levels for exposure and vision, as well as to modify the color balance and
image contrast of the subject. By incorporating supplemental lighting, the
light’s water path, from lamp head to subject to camera, can be kept relatively
short, and the selective color absorption properties of water will be much
less apparent than if the light had to originate at the water’s surface.
HMI light has a higher color temperature, approximately 5600K; its shorter, blue-green wavelengths are absorbed less by water, so it penetrates further and provides more illumination over a wider area.
The reflection of artificial light off suspended particles in the water is
known as “back scattering”. The effect is much like driving your car with
your headlights on in heavy fog or a snowstorm. In addition to back scatter-
ing there is also side scattering and in some instances even front scattering.
Light scattering can be greatly reduced if you separate the light source from
the camera and by using multiple moderately intense lamps rather than
using a single high intensity lamp. It is advisable to keep the light at a 45-de-
gree to 90-degree angle to the lens axis of the camera. Generating a sharp
light beam to help control the light and further reduce backscatter is done
with the aid of reflectors, barndoors or snoots.
In addition to the basic effect of light intensity reduction in water due to
absorption, the matter is further complicated by the fact that absorption is a
function of color. Red light is absorbed approximately five times faster than
blue-green light in water. This is why long distance underwater photographs
are simply a blue tint without much color.
Fill lighting underwater on a moving subject is best accomplished by
having the lamp handheld by the underwater gaffer. With the lamp being
handheld, the underwater gaffer can maintain a constant distance between
the lamp and the moving subject, keeping the exposure ratio constant. In a
more controlled situation where the subject is confined to an area of a set,
light can be bounced off a white griffolyn stretched on a frame or through a
silk on a frame.
For underwater process photography in front of a blue or green screen, the screen should be hosed off after every day of shooting. Chlorine bleaches the screen quickly, and rotating the screen 180° every day helps minimize the color difference.
From the bottom to the top, the frame should tilt back approximately 10°.
Extending the screen above the surface of the water, even if only a few inches, is very important. Extending the screen above the water allows the under surface of the water to act as a mirror and reflect the screen’s color on the surface. This extends the screen’s coverage and allows the cameraperson to tilt up to follow the shot while using the water’s surface as an extension of the screen. It is also important to stop or at least reduce the agitation of the water to prevent a visible ripple effect on the screen.
Lighting Safety
From a safety standpoint, when using AC power in or near water or other
potentially wet locations, it is essential (and in most cases, mandatory) to use
a Class “A” UL approved GFCI (Ground Fault Circuit Interrupter) for actor
and film crew protection.
The purpose of a GFCI is to interrupt current flow to a circuit or load
when excessive fault current (or shock current) flows to ground or along
any other unintentional current return path. The standard level of current
needed to trip a “people” protection Class “A” GFCI is 5mA.
Class “A” GFCIs are designed for “people” protection. Other GFCIs are
designed for various levels of “equipment” and “distribution” protection. In
general, if you can’t readily see the Class “A” marking on a GFCI, the device
is probably not designed for “people” protection. To make sure that the
GFCI being used is a Class “A” device, Section 38 of UL 943 (the standard for
GFCIs) requires that the “Class” of the GFCI is clearly marked on the device
in letters that are clearly visible to the user.
Today, Class “A” GFCIs are readily available for loads up to 100 Amps,
single- and three-phase, for both 120v and 208/240v fixed voltage applications.
Certain special GFCIs can also operate on variable voltage power supplies
(behind dimmers). If the device’s label does not clearly state the working
voltage range of the unit, check with the device’s manufacturer before using
the unit on dimmers (or other variable voltage applications), since conven-
tional GFCIs may not operate correctly below their rated voltage.
Specialty GFCI manufacturers produce advanced GFCI devices that offer other important safety features, such as predictive-maintenance monitoring and verification of power-supply phase and ground correctness.
Choose your GFCIs carefully: if misapplied, these important safety devices may fail to function without warning, leaving your installation with a false sense of security.
Arctic and Tropical Cinematography
Preparation of Equipment
While the difficulties of photography under arctic conditions can be se-
vere, they are by no means insurmountable. Careful advance preparation
will pay rich dividends in the form of easier and more reliable equipment
operation and better pictorial results. The first step in preparing for filming
in the arctic, high mountain regions, or in unheated aircraft at high altitudes
is to select the most suitable equipment with due regard for the work to be
done and the results desired.
Each kind of camera has its adherents, and no one type seems to be out-
standingly superior to the others. However, considering the working condi-
tions, good judgment dictates that the camera or cameras selected should
be compact, lightweight, easy to use, dependable, adaptable, and portable.
In choosing a 16mm motion picture camera, many arctic explorers prefer the ease and convenience of magazine loading, since threading roll film can be very difficult under conditions of extreme cold. Certain camera models
are advantageous for low-temperature use because large-radius bends in
the film path and low film accelerations help prevent broken film. For best
protection of the film emulsion at extremely low temperatures, film-travel
rollers should have a diameter no smaller than ½" (13mm). Electric power,
if available from a reliable source such as a generator or vehicular power
system, is more dependable than spring-driven or battery power. However,
under field conditions, a spring-driven motor may prove more reliable than
an electric motor drive that depends on portable or storage batteries, which
can fail when subjected to extremely low temperatures.
Cameras should be winterized for satisfactory service under frigid
conditions. Some camera manufacturers provide a winterizing service
for cameras that are to be used at low temperatures over a long period of
time. Winterizing is a highly specialized operation, best entrusted to the
manufacturer or a competent independent camera service representative.
Essentially, the procedure calls for dismantling the camera and removing
the original lubricants. The shutter, lens diaphragm, film transport mecha-
nism, and other moving parts are then relubricated with materials that will
not thicken when the camera is exposed to extreme cold. In some cases,
powdered graphite is still used for this purpose. However, so-called “broad-
range” lubricants (such as Teflon and silicone) are becoming increasingly
popular, not only because of their effectiveness at low temperatures, but
also because they can be left in the camera permanently. In fact, such
lubricants are being used in manufacture. Hence, a camera that has been
lubricated with a broad-range lubricant, either in manufacture or as part of
a winterizing operation, need not be dewinterized and relubricated when
it is returned to use under normal conditions. When cameras are stripped
down for winterizing, weakened or damaged parts may be discovered and
should be replaced to avoid possible failure under the extra stress of severe
arctic temperatures.
It is also sometimes necessary to machine parts to allow greater clearance
between components. This is because aluminum and certain alloys have
greater coefficients of thermal contraction and expansion than steel. Since
small levers and knobs on cameras are difficult to operate when the photog-
rapher is wearing thick gloves, extensions can sometimes be added to levers,
and small knobs can be replaced with larger ones.
It may be helpful to run even recently winterized motion picture cameras
for a period of three or four hours to break them in thoroughly. A piece of
film three or four feet long can be spliced end to end (to form a continuous
loop), threaded into the camera, and allowed to run during the breaking-in.
In cameras intended for use with film magazines, the loop should be formed
in a dummy magazine. After the breaking-in period, the camera should be
checked for speed and general behavior. It should be noted that, although
magazine-type motion picture cameras can be winterized, the magazines
themselves are not winterized and may jam under conditions of extreme
cold. If film magazines are used, each day’s working reserve carried into the
field should be kept as warm as possible under the cinematographer’s parka.
Another possibility is to carry the film supply in an insulated thermal bag,
along with one or two small hand warmers.
Before your location shoot, a test run should be made in a refrigerator or
freezer capable of reaching temperatures as low as -30°F (-34°C) or -40°F
(-40°C). Even “winterized” cameras can fail in use because some detail was
overlooked in preparation, so this final test run is quite important. The film
and camera should be cooled for at least 24 hours prior to the test. This long
period of precooling is often overlooked, and the test becomes invalid.
Motion picture cameras should be given as much protection from icy winds
as possible during use. When battery-driven motors are used on cameras, the
motors and batteries should be kept as warm as possible. A flat black finish
on the cameras has some advantage in the arctic because it absorbs heat when
the sun is shining. Covers made from black felt material or fur and fitted with
eyelets or other suitable fasteners protect the camera from frigid winds and
help to retain its initial warmth for a time. Snaps and slide fasteners are not
recommended for use in subzero temperatures. Small magazine-type motion
picture cameras can be hung inside the coat to obtain some warmth from the
body; you may even need to wrap a chemical heating pad around the camera.
Inspect the camera’s lens each time it is removed from the clothing to take a
picture. The amount of “body static” generated under cold, dry conditions
can cause the lens to attract lint from the clothing.
Tripods should also be conditioned properly for use in the arctic. When
lubrication is required, there are oils available for use at extremely low temperatures.
Film
Great care must be used in handling film in subzero weather. The edges of
cold, brittle film are extremely sharp, and unless caution is exercised, they
can cut the fingers severely.
It is important that film be loaded and exposed promptly after removal
from the original packing, not left in the camera for long periods of time. If
motion picture film is allowed to stand in the camera for a day or so, the film
may dry out and break where the loop was formed when the camera is again
started. The film is adequately protected against moisture loss as long as the
original packaging is intact. When loading the camera, make sure the film and
the camera are at the same temperature—if possible, load the camera indoors.
Static markings are caused by an electrostatic discharge, and they ap-
pear on the developed film emulsion as marks resembling lightning, tree
branches, or fuzzy spots. When static difficulties occur they can usually be
traced to the use of film that has a very low moisture content.
Static markings are not likely to occur if the film is loaded and exposed
within a short time after the original package is opened. In general, field
photography under arctic conditions involves subjects of extremely low
brightness scale and very high levels of illumination. Exposures should be
held to a minimum and overexposure should be avoided.
Storage
If a cold camera is taken indoors where it is warm and humid, condensa-
tion may form on the lens, film, and camera parts. If the camera is then
taken back outdoors before the condensed moisture evaporates, it will freeze
and interfere with operation; the condensate can also cause metal parts to
rust. One way to solve this problem is to leave the camera, when not in use,
in a room at about 32°F (0°C).
If a camera is left in its case outdoors, the case should be made reason-
ably airtight. In the arctic, blown snow becomes as fine as dust or silt and
can enter the smallest slit or crevice. If allowed to enter the camera around
the shutter or other moving parts, the snow will affect the operation of the
equipment. The speed and timing of motors should be checked frequently.
Batteries should be checked every day and recharged at a base every night,
if possible.
T. R. Stobart, who filmed the first conquest of Mt. Everest, preferred to seal
the camera in an airtight polyethylene or rubber bag and then take the cam-
era into the warmth of indoors. Any condensation takes place outside the
bag, not inside, and the camera remains both dry and warm. This method
has the advantage of keeping the camera from becoming “saturated in cold”
for long periods of time. There is no problem in taking warm equipment
back out into the cold, provided the snow isn’t blowing.
Tropical Cinematography
Heat and humidity are two basic sources of potential difficulty when using
or storing photographic goods in tropical climates. Heat alone is not the
worst factor, though it may necessitate special equipment care and process-
ing techniques and may shorten the life of incorrectly stored light-sensitive
materials. High humidity is by far the greater problem because it can cause
serious trouble at temperatures only slightly above normal, and these trou-
bles are greatly increased by high temperatures.
Associated with these conditions are several biological factors—the
warmth and dampness levels encountered in the tropics are conducive to
the profuse growth of fungus and bacteria and encourage the activities of
insects. Many photographic and other related products are “food” for these
organisms—gelatin in films, filters, leather, adhesives and so on. Even if fun-
gus, bacteria or insects cannot attack materials directly, they can develop an
environment that can. Fungus can also either directly or indirectly induce
corrosion in metals, attack textiles and leather, change the color of dyes,
attack glass, and cause a great variety of other forms of deterioration. The
probability of damage is greater with frequent handling and transportation,
especially under the difficulties met in hunting and scientific expeditions
and in military operations. Exposure to harm is greater when equipment is
used out of doors, on the ground or in makeshift facilities.
Atmospheric condition, with respect to moisture content, is usually
described in terms of “relative humidity.” This is the ratio, expressed as a
percentage, between the quantity of water vapor actually present in the air
and the maximum quantity that the air could hold at that temperature. Thus,
if a given sample of air contains only half as much water as it would at satura-
tion, its relative humidity is 50%.
When the temperature rises, a given space can accommodate more water
vapor, and hence the relative humidity decreases, and vice versa. When air
(or an object) is cooled sufficiently, a saturation point (100% relative hu-
midity) is reached, and below this temperature drops of water or “dew” are
deposited. In any locality, the temperature is much lower at high altitudes, so
that dew is likely to form on objects following their arrival by air transport,
especially when high relative humidity is present at ground level. In tropical
climates, this “dew point” is often only a few degrees below the actual tem-
perature during the day and is reached when the temperature drops at night.
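For planning, the dew point can be estimated from air temperature and relative humidity. The sketch below uses the Magnus approximation, a standard meteorological formula that is not from this manual; the constants are one common parameter set:

import math

def dew_point_c(temp_c, rh_percent):
    # Magnus approximation: dew point in deg C from air temperature
    # (deg C) and relative humidity (%).
    a, b = 17.62, 243.12
    gamma = math.log(rh_percent / 100.0) + (a * temp_c) / (b + temp_c)
    return (b * gamma) / (a - gamma)

# Tropical example: 32 deg C air at 85% RH gives a dew point near 29 deg C,
# only a few degrees below the daytime temperature, so dew is likely at night.
print(round(dew_point_c(32.0, 85.0), 1))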
The amount of moisture absorbed by films and by nonmetallic parts of
equipment is determined by the relative humidity of the atmosphere. There-
fore, the moisture absorption of photographic or other equipment can be
reduced by lowering the relative humidity, either by removing some of the
moisture with a desiccating agent or by raising the temperature of the atmo-
sphere where the equipment is stored.
Maintenance of Equipment
One of the best protective measures that can be supplied in the tropics is to
thoroughly clean every piece of photographic equipment at frequent intervals
and expose it to air and sun whenever practical. This is particularly important
for retarding the corrosion of metal surfaces and the growth of fungus or
mold on lens surfaces and on leather coverings. Lens-cleaning fluids and
papers now on the market are recommended for cleaning lenses. During the
tropical dry season, or in any desert areas, dust should be removed from the
lens surfaces with a sable or camel-hair brush before the lens tissue is used, to
avoid scratches. Lens cleaning tissues containing silicones should not be used
for coated lenses. They leave an oily film that changes the color characteristics
of the coating and reduces its antireflection properties. This film is almost
impossible to remove. Leather coverings and cases can best be kept clean by
wiping them often and thoroughly with a clean, dry cloth. Frequent cleaning
and polishing will minimize corrosion on exposed metal parts.
Black-and-White Film
The exposure of black-and-white film in tropical areas is strongly influenced by the illumination in the subject shadow areas. The moisture and dust in tropical air scatter light into the shadows, lowering the effective lighting contrast.
Color Film
In general, the exposure of color films should follow the same basic
recommendations given for temperate zone exposure, with due regard to
lighting and scene classification. There are, however, some differences in
the lighting conditions and scene characteristics in the tropics that justify
special considerations.
1. During the rainy season, a light haze is generally present in the atmo-
sphere. When this haze is present, the disk of the sun is clearly discernible
and fairly distinct shadows are cast. Under these conditions, the exposure
should be increased by about one-half stop over that required for bright
sunlight.
2. Frequently the brightness of beach and marine scenes is appreciably greater than that of average subjects.

Filming Television and Computer Displays

In NTSC video, the scanning beam paints a complete field on the monitor every 1⁄60 of a second. With the film camera running at 30 frames per
second with a 180-degree shutter, the exposure time is 1⁄60 of a second. This
is exactly the right relationship we are looking for. Even though we have now
matched the film camera’s exposure time to our monitor, this still requires
what is generically referred to as a “sync box.” We must align the pulldown
or shutter closed timing of the film camera to occur exactly during video’s
equivalent of pulldown, known as vertical blanking. This is the time interval,
in video, which allows the scanning beam to jump up from the bottom of
the screen to the top and start painting or scanning the next field. The sync
box is fed a reference signal that is in exact speed and time as the video we
are photographing. A phase control on the sync box allows us to adjust the
shutter bar out of the image and precisely match the film camera’s pulldown
to the video’s vertical blanking.
This basic example and explanation holds true whether you are shooting
30-, 25- or 24-frame video. Before we review some newer display technolo-
gies, let’s discuss a few more ground rules for shooting “picture tube” type
displays or projection systems.
Computer Monitors
The same rules just discussed apply to standard computer monitors based
on glass picture tubes. The images are still scanned top to bottom, and the
frame rates must still be matched between the display and film camera.
When photographing a single computer monitor “insert style,” and where
you can shoot at nonstandard frame rates, a manual speed control and some
way to measure the computer monitor’s refresh rate are required. There are
two well-known film industry optical frequency meters that allow you to mea-
sure a monitor’s exact frame rate and then set that rate into the speed control.
Computer monitors are generally noninterlaced, but this has little impact
on shooting them. Since some computer monitors can display very high
refresh rates, it is possible to photograph them at high speeds. For example,
a computer monitor with a 96 Hz refresh rate can be photographed at 48
frames a second. The exposure time of 1⁄96 of a second matches the amount
of time it takes the monitor to display one complete image.
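Both of these cases follow from one relationship: exposure time equals the shutter angle divided by 360, divided by the frame rate. A minimal sketch:

def exposure_time_s(fps, shutter_angle_deg=180.0):
    # Exposure time in seconds for a rotary-shutter film camera.
    return (shutter_angle_deg / 360.0) / fps

print(1.0 / exposure_time_s(30.0))  # 60.0 -> 1/60 s matches a 60 Hz display
print(1.0 / exposure_time_s(48.0))  # 96.0 -> 1/96 s matches a 96 Hz monitor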
Flat-Panel Displays
With display technology constantly evolving, there is now and will con-
tinue to be a large variety of new and constantly changing image display
devices. Each one poses unique and technically challenging problems for
successful photography.
Plasma Displays
Often misunderstood, this display technology is a unique mix of both CRT and LCD science. The images are created by ionizing a gas, which in turn excites a phosphor. The images are not always scanned like a traditional
CRT display, but addressed pixel by pixel or row by row without regard for
the input signal’s refresh rate. This can create very annoying and different-
looking artifacts from those seen when shooting standard monitors. Some
manufacturers and other companies have taken steps to modify plasma
panels that allow them to be used in film productions and photographed at
30, 25 and 24 frames per second. When presented with the task of shoot-
ing plasma screens, try to include them in a test day to work out syncing,
exposure and color-temperature issues.
Color Temperature
Once the frame rate and syncing issues have been worked out, the final
important aspect to be dealt with is color temperature.
Regardless of the display technology you are photographing, most
CRT, LCD, plasma and DLP devices are intrinsically daylight or near-
daylight in color temperature. Most consumer televisions will be at or near
6000-8000°Kelvin. Broadcast and industrial video monitors should be close
to 6500°K. Computer CRT and LCD monitors vary greatly between 5000°K
to 9000°K; some are even as high as 12,000°K.
Conclusion
A majority of productions will hire a specialist to be responsible for play-
back and/or projection. They will usually precorrect the playback material or
display device to correct for tungsten photography. The playback specialist
will also assist you in setting exposure and work with the 1st AC concerning
film camera sync equipment setup and operation.
There is a large amount of misinformation circulating in the industry, and
it is very difficult to keep up on all the display technologies as they evolve.
Often, manufacturers will update or introduce new products several times a
year, and what worked for you six months ago may not work the same way
now. It cannot be emphasized enough to test any new or different display
device when considering its use in your production.
Editor’s note: For photography requiring synchronous sound recording that in-
cludes a TV screen, video monitor or certain computer displays, 24-frame play-
back is necessary. Refer to the American Cinematographer Video Manual for
further information.
Digital Postproduction for Feature Films
by Glenn Kennel and Sarah Priestnall
Kodak introduced the Cineon system in the early 1990s and offered film scanning and film recording services to enable the industry.
Within a few short years, visual effects converted entirely from traditional
optical printers and manual rotoscoping processes to digital compositing,
3-D animation and paint tools.
Kodak’s original goal for Cineon was a complete “electronic intermedi-
ate system” for digitally assembling movies. In the early 1990s, it was not
cost effective to digitize and assemble whole movies this way, but in choos-
ing computer-based platforms over purpose-built (traditional television)
architectures, these costs came down steadily, driven by the much bigger
investments that other industries were making in computing, networking
and storage technologies.
By the end of the 1990s, Kodak had discontinued the Cineon product line,
but Apple, Arri, Autodesk, Filmlight, Imagica, Quantel, Thomson and other
companies were offering products that provided the necessary tools for the
digital postproduction of movies. Kodak chose to focus its energies on the
demonstration and promotion of a new process dubbed “Digital Intermedi-
ate” that offered to revolutionize the traditional lab process of answer print-
ing and release printing. The digital intermediate process provided digital
conforming and color grading tools that opened up new creative opportu-
nities, while improving the quality of the release prints. In 2000 Cinesite
pioneered the digital intermediate process on the Coen Brothers’ O Brother,
Where Art Thou utilizing the Thomson Spirit Datacine and a Pandora color
corrector, working in a digital, tapeless workflow at 2K resolution.
Since that early start, the supporting technology has evolved substan-
tially with software-based color correctors offered from Autodesk, Filmlight,
daVinci and others, and faster film scanners and recorders available. Within
five years, the industry had embraced the digital intermediate process, with
a majority of major Hollywood films, and many independent films, now
finished digitally.
Digital Dailies
With the move to offline editing in the 1990s, and the increasing scrutiny
of the production process by studio executives, video dailies were widely
embraced. However, while the low-resolution, compressed video formats
may have been good enough for editorial or continuity checking, they were
not sufficient for judging focus, exposure and the subtleties of lighting. For
this reason, most cinematographers demanded traditional film dailies al-
though this was sometimes overruled for cost savings.
Now that HD infrastructure is widely available in postproduction, and
the costs of the transfer, playback and projection displays have come down
to reasonable levels, HD dailies are commonly used for the critical dailies
review by the cinematographer and the director. Certainly, 1080/24p HDTV provides the quality needed for this critical evaluation.
Data Dailies
With modern scanners offering high speed scanning capabilities, and with
the cost of storage coming down, it is now possible to forego traditional tele-
cines and videotape workflows and scan dailies footage directly to data. This
approach has been pioneered by E-Film in their CinemaScan process. Dailies
are scanned to the SAN as 2K DPX files, where they are graded, synched and
logged with software tools. The scanned files are archived to LTO4 tape and
stored in a massive tape library, where the select shots can be pulled based on
the editor’s EDL, and conformed for the DI finishing step later.
Scanning
Once the film has been edited and is “locked”, the editor provides an Edit
Decision List (EDL) to the scanning facility. There is no need to cut the
negative. The EDL is translated to negative pull lists with keycode numbers
marking the in and out points for scanning. The scanners provide auto-
mated scanning based on frame counts or keycodes, so it is actually easier
to work with uncut camera or lab rolls, rather than compiling the select
shots on a prespliced roll. And the negative is cleaner without this extra
handling step.
Several manufacturers provide digital film scanners that are used for
digital intermediate work. The Thomson Spirit Datacine, and its successor
the Spirit4K scanner which provides real-time 2K scans, is widely used. The
ARRIscan and Filmlight Northlight pin-registered scanners are also popular,
but run at a somewhat slower rate. Since the Spirit4K is edge-guided, picture
stability depends on clean film edges and tight transport velocity control.
Steadiness is seldom a problem, but some people prefer pin-registration,
which has always been used for visual effects shots. All of these scanners are
calibrated to output logarithmic printing density in 10-bit DPX files.
Framing is a critical issue since camera apertures and ground glasses vary.
It is extremely important to shoot a framing chart with each production cam-
era and to provide that chart to the scanning facility for set up and framing of
the scanner. In addition to defining the scan width, it is important to define
the target aspect ratio. With the increasing use of Super-35 (full aperture)
cameras, the target aspect ratio of 1.85 or 2.39 must be specified, as well as
whether it is desired to scan the full height to use for the 1.33 home video ver-
sion. Table A summarizes the typical camera aperture dimensions and scan
resolutions for popular motion picture production formats including Acad-
emy, Cinemascope, Super-35 and Super-1.85, 3-perf 35mm and Super-16. All
of these are transferred to 35mm inter-negatives for release printing.
TABLE A: Image Sizes, Scanned Dimensions and Aspect Ratios for common film formats (columns: Film Format; Camera Aperture Dimensions, H x V; 2K Scanned Dimensions, H x V; Camera Aspect Ratio; Projection Aspect Ratio).
2K or 4K
One of the biggest debates within the technical community has been the
question of what resolution to use for digital cinema mastering and distribu-
tion, which in itself is a continuation of a longstanding debate about the
resolution of a frame of motion picture negative film. The studios are evenly
divided between two opinions—2K (2048 pixels wide) is good enough and
more cost effective, or 4K (4096 pixels wide) is better and will raise the bar.
This debate raged for nearly a year, before DCI arrived at the grand com-
promise of supporting both 2K and 4K digital cinema distribution masters,
using the hierarchical structure of JPEG2000 compression to provide a com-
patible delivery vehicle.
Today, most Digital Intermediate masters are produced at 2K resolution
although more and more are being produced at 4K as costs come down.
Working in 4K requires four times the storage and four times the render-
ing time as 2K. The creative color grading process for 4K can be done in
essentially the same time as 2K, because proxies are used to support interac-
tive adjustment and display. The color correction can be rendered to the 4K
images once the reel has been approved. Several of the software-based color
correctors support this architecture, including products from Autodesk,
Filmlight, Digital Vision, and DaVinci.
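Rough numbers illustrate the storage difference. Assuming uncompressed 10-bit DPX (three 10-bit channels packed into 4 bytes per pixel) and typical full-aperture scan dimensions, which are assumptions for illustration rather than figures from this chapter:

BYTES_PER_PIXEL = 4  # 10-bit DPX packs 3 channels into one 32-bit word

def feature_storage_tb(width, height, minutes=120.0, fps=24.0):
    # Uncompressed storage, in terabytes, for a feature-length program.
    frames = minutes * 60.0 * fps
    return frames * width * height * BYTES_PER_PIXEL / 1e12

print(round(feature_storage_tb(2048, 1556), 1))  # ~2.2 TB at 2K
print(round(feature_storage_tb(4096, 3112), 1))  # ~8.8 TB at 4K, 4x the 2K figure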
So what do you get for working in 4K? Does it produce pictures that are
twice as good? If the original film format is 35mm and the release format is
35mm (the only viable film distribution format except for 70mm IMAX),
then the answer is no. The difference between 2K and 4K is much more subtle,
and very dependent on the quality of the camera lenses. A seminal paper by
Brad Hunt of Kodak in the March 1991 SMPTE Journal describes the basis
for Kodak’s development of a 4K High Resolution Electronic Intermediate
System in 1992 (which became Cineon), using a system MTF analysis to
illustrate the effect of sampling resolution on the resulting images.1 This is
reproduced in Figure 2.
Figure 2. Modulation Transfer Function (MTF) as a function of sampling structure for Cineon digital file systems.
1. B. Hunt, et al., “High Resolution Electronic Intermediate System for Motion Pic-
ture Film”, SMPTE Journal, 100:156, March 1991.
Figure 2 shows the system MTF responses for a sampling series of 1000 to 5000 samples per picture width
on a 35mm Academy aperture. For reference, the MTF of a camera nega-
tive, which includes the negative film and camera lens, is also shown. MTF
curves representing an analog photochemical system do not give you a
single number to judge the resolution of film. If one takes a middle fre-
quency of 25 cycles per mm as a basis, you can see that the system MTF
response increases from about 15% at 1K to 40% at 2K and about 50% at 3K
and 53% at 4K. So the system MTF improves significantly as the sampling
resolution is increased from 1K to 2K, with diminishing returns beyond
that. Kodak decided to design its Cineon system with a base resolution
of 4K samples per picture width. With a nod towards practical and cost-
effective postproduction work, however, Kodak also provided a direct 2K
mode. Even today, more than fifteen years later, nearly all visual effects are
done at 2K resolution.
Kodak’s analysis included the input camera negative film and the output
internegative element. It modeled the aperture response of the CCD film
scanner and the Gaussian spot of the laser film recorder. It did not include
the print film or the film projector, both of which further reduce the system
MTF. Original negative films have improved substantially in the last fifteen
years, now exhibiting higher speed and lower grain. However, the MTF has
not improved much, and typical camera lenses are the same, and today’s
CCD film scanners and laser film recorders have similar MTF characteris-
tics as the original Cineon conversion devices.
Even with an optically band-limited system like a camera negative (filtered
by the camera lens and the MTF response of the film), it is still desirable to
oversample the picture to avoid aliasing. In fact, scanning at 4K, then filter-
ing and downsampling the image to 2K, produces sharper pictures with less
aliasing than a direct 2K scanning process. So, even if the final digital master
is produced at 2K resolution, there are good reasons to scan it at 4K and
“down-rez” it for 2K manipulation.
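The oversampling argument can be made concrete with the Nyquist limit: the highest spatial frequency a scan can represent is half the sampling rate across the aperture. The sketch below assumes an Academy aperture width of about 21.95mm (0.864"), an assumption for illustration:

APERTURE_WIDTH_MM = 21.95  # 35mm Academy camera aperture width (assumed)

def nyquist_cycles_per_mm(samples_per_picture_width):
    # Highest representable spatial frequency for a given scan resolution.
    return samples_per_picture_width / (2.0 * APERTURE_WIDTH_MM)

print(round(nyquist_cycles_per_mm(2048), 1))  # ~46.7 cycles/mm for a 2K scan
print(round(nyquist_cycles_per_mm(4096), 1))  # ~93.3 cycles/mm for a 4K scan
# A 4K scan captures detail beyond the 2K Nyquist limit; filtering before
# downsampling removes that detail cleanly instead of letting it alias.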
Some studios are willing to pay a little extra for a 4K digital intermedi-
ate to deliver 4K digital cinema packages to those screens that are equipped
with 4K digital cinema projectors. Although the initial installations of 5,000
digital cinema screens were all 2K systems, Sony introduced their 4K SXRD
projectors in 2006, and Texas Instruments announced their support for 4K
in 2009. There may also be some value in archiving a 4K digital master to
protect for future release options.
Data Management
With most digital intermediate processes, the scanned data is centrally
stored in a Storage Area Network (SAN), where it can be accessed by vari-
ous workstations performing dirt fix, conforming and color correction operations.
Dirt Fix
All film images contain some dust or handling marks that have to be
cleaned up. This process is called dirt fixing or “dustbusting.” Basically, the
process is to examine each frame for white marks caused by dirt, cinch marks
or scratches and to remove these artifacts with a combination of automated
and manual tools. Common tools include MTI’s DRS and PF Clean by The
Pixel Farm. The amount of dirt fixing varies with film format and handling.
Typically, 16mm film requires more work than 35mm due to the higher
magnification. Lab dirt and scratches are twice as big with 16mm format.
Some scanners are now equipped with an infrared sensor that can detect
dirt and either remove the dirt or provide a defect matte for later removal
by software. Kodak licenses a technology called Digital ICE for this purpose,
and it is now available on ARRI and Northlight scanners.
Conforming
The next step is to conform or assemble the shots to the EDL, and render
the optical transitions (fades, dissolves etc.) to match the offline edit. As nec-
essary, the data editor also generates subtitles and inserts visual effects shots,
main titles and end credits.
Color Grading
The color of the assembled shots is graded in a mastering suite equipped
with a calibrated digital projector. DLP Cinema™ projectors from Barco,
Christie or NEC are widely used as the reference display for mastering. Most
facilities use a 3-D color display LUT to emulate a film print, implemented
either in the color corrector or an external LUT box. This LUT is tuned for
the characteristics of the specific print stock and lab that will be used for
release printing.
Color grading is more than just adjusting or correcting the color from
scene to scene to provide consistency and continuity. It also helps to impart
the emotional context of the story, and complements the lighting and ex-
posure used by the cinematographer to capture the scene. Color correctors
provide a range of controls, generally grouped into primaries, secondaries
and windows. The primary color correction controls adjust parameters that
were originally implemented inside the telecine—these control the gain
(white), gamma (midscale) and lift (black) in each of the red, green and
blue records. The secondary controls mix the color primaries to create color
difference signals that allow individual hues to be selected and rotated.
Although the first color correctors were analog processors, digital tech-
nology was incorporated in the 1980s, allowing much more control and
flexibility with higher quality. One of the digitally enabled features was the
introduction of user defined windows that allowed the colorist to draw
mattes that applied selective color correction to the windowed region of the
frame. These mattes can be tracked and animated as the subject or camera
moves. Soft-edged mattes make the edges invisible. This powerful capability
allows the cinematographer to digitally “dodge and burn” shots, brighten-
ing or darkening areas of the frame to enhance the original photography. In
certain circumstances these tools can be used to save time in production,
reducing the need for relighting or waiting for just the right natural light.
Color grading does have its limits, however. You can’t enhance detail that
wasn’t captured on the original negative, so lighting and exposure are still
critical. And creating mattes to selectively grade regions of the picture takes
time and money. Extreme adjustments of color or contrast will amplify
grain, so these color changes don’t come free. It is always best to capture a
sharp, well-lit image on the set, because you can always darken or defocus
portions of the frame in post. It’s much harder to go the other way. The
larger the negative area, the finer the grain structure and the further you
can push the color without artifacts. This means that Super-35 and Cin-
emascope negatives provide more creative flexibility over standard 35mm
Academy or Super-16 formats.
Color Gamut
DLP Cinema™ projectors were designed by Texas Instruments to support
a wider color gamut than standard video (ITU Rec. 709). This wider gamut
is commonly called P3, and was supported by DCI and SMPTE DC28 as
the minimum color gamut for digital cinema. This extended gamut is based
on practical mastering tests in which digital images were graded to match
a film answer print, and has served the industry for several years. Figure
3 shows the color gamut of a SMPTE Reference Projector, compared to
HDTV (ITU Rec. 709) and samples from Kodak Vision™ color print film.
There are some film colors that fall outside of the gamut of the digital pro-
jector, particularly dark cyans, but most of the film gamut is enclosed. It
is useful to note that the digital projectors are also capable of reproducing
brighter primary colors (red, green and blue) since print film is a secondary
color system with the image composed of cyan, magenta and yellow dyes.
Figure 3. Color gamut of film compared to that of the digital cinema reference projector and ITU Rec. 709. A series of measured film samples are shown in green, with the reference projector gamut shown by the dashed red line and the ITU Rec. 709 gamut shown by the dashed blue line.
Film Output
The finished digital master, called the Digital Source Master (DSM) in the
DCI specification and some SMPTE digital cinema documents, is sent to a
bank of film recorders for output to one or more inter-negatives for release
printing. The widely used ArriLaser™ film recorder runs at about 2 seconds per
frame, so a typical 20-minute reel requires 16 hours of continuous recording.
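The arithmetic behind that figure, assuming 24 frames per second (a minimal sketch):

def recording_hours(reel_minutes, seconds_per_frame=2.0, fps=24.0):
    # A 20-minute reel at 24 fps is 28,800 frames; at ~2 s per frame
    # that is 57,600 s of continuous recording.
    frames = reel_minutes * 60.0 * fps
    return frames * seconds_per_frame / 3600.0

print(recording_hours(20.0))  # 16.0 hours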
Color calibration is critical, and the 3-D color display LUTs that are used
for the grading process must be built for the specific film stock and pro-
cessing laboratory, because the variations between film stocks and labs are
significant. Film processing is a chemical process that varies somewhat from
day to day, and is a moving target that cannot be precisely matched. It is
important to also recognize that the precision of digital color correction is
much tighter than the process control and simple printer light adjustments
of the film lab.
Each internegative can produce from 1,000 to 2,000 release prints on a
high speed release printer before accumulating dirt or scratches that make it
unsuitable for further printing. Therefore, a wide release (3,000–5,000 prints
or more) will require multiple internegatives. This can be accommodated by
using the conventional IP/IN photochemical duplicating process, but this
introduces another two generations of losses, which reduce the sharpness
and steadiness of the image. The preferred approach is to generate multiple internegatives so that each release print is made directly from a digitally recorded internegative; each print is then essentially a “digital show print.”
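As a quick sketch of the internegative count (the 1,500 prints-per-internegative figure is a midpoint assumption, not from the text):

import math

def internegatives_needed(release_prints, prints_per_in=1500):
    # Each internegative yields roughly 1,000-2,000 prints before wear.
    return math.ceil(release_prints / prints_per_in)

print(internegatives_needed(4000))  # a 4,000-print release needs ~3 INs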
Digital Cinema
Since the movie was graded on a projector that is calibrated to the DCI/
SMPTE Reference Projector specifications, the creative work on the Digital
Cinema Distribution Master (DCDM) is done. All that remains is to render
the 3-D color LUT that was used in the preview process into the digital mas-
ter, along with a simple color conversion to X’Y’Z’. This can be implemented
as a software batch process, or if working in 2K, it can be a real-time output
through an external 3-D LUT box.
The traditional practice in film scanning and recording is to scan and
reproduce the full camera aperture, so that the image remains the same size
on the internegative as the original negative without resampling, and the
resulting print can be projected with a standard projection aperture. The
standard film projection aperture is 5% smaller than the camera aperture,
providing some margin for hiding the frame lines, covering any dirt caught
in the camera aperture and also providing some “wiggle room” for picture
weave (unsteadiness) introduced due to mechanical tolerances in printing
and projection. The traditional practice works fine for film output. However,
the Digital Cinema Distribution Master does not distinguish between the
image structure of the master and the display. In fact, the DCI specifica-
tion explicitly states that the projector must be capable of displaying the full
DCDM, pixel for pixel, without any cropping.
If the same master is converted to a DCDM without resizing, the resulting
digitally projected image will be 5% smaller than the film print shown on a
projector with a standard aperture plate. While this may not be a big issue
aesthetically, it does become a problem if the camera aperture is uneven or
dirty, because now this distracting impairment may be visible on the digital
projector, depending on the adjustment of the masking. With most theatres
equipped with variable masking, it is standard practice to bring the masking
into the picture area a little to provide a good clean black boundary to the
picture. But this may not be enough.
Another option is to resize the picture for the DCDM, pushing in so that
95% of the original scan (1946 W) is resized to 2048 W, ensuring that the digi-
tal picture is projected at the same size as the film print. However, this means
that all DCDMs will suffer a small sharpness loss due to the 5% enlargement.
Although this is probably acceptable operationally, it is inconsistent with the
objective of optimizing the quality of the digital cinema presentation.
At the time of this writing, a proposal has been made to SMPTE to define
a Safe Image Area for digital projection, similar in concept to the film pro-
jection aperture.
For these reasons, a color trim pass is often used to generate the
video masters. However, this trim pass can typically be performed in a day or
two, rather than the 2 to 4 weeks of the traditional mastering process.
Archival Elements
As we move from a film to a digital world, the question of what materials
to archive is stirring heated debates. The biggest problem is that there is no
accepted archival digital medium today, due to both media deterioration
and software and hardware obsolescence. Magnetic tape formats are not
expected to last for more than ten years or so, requiring an expensive migra-
tion strategy to protect the data. The lowest risk approach today to archive
the finished digital master is to make black and white film separations that
have a proven shelf life of over 100 years (if stored properly) and can either
be reconstructed photo-chemically or redigitized.
Which elements should be saved for the future? With traditional film
production, it is common practice to store the original camera negative,
a timed IP, and an answer print in addition to the black and white separa-
tions. A HD 1080/24p master tape is also stored as the home video master.
A similar approach should be taken in the digital world. A comparison of
archival elements for film and digital postproduction is shown in Table B.
TABLE B: A comparison of archival elements for traditional film and digital processes.

Element                  Film Process                   Digital Process
Original                 Original negative (o-neg)      Original tapes
Color corrected master   Timed Interpositive (IP)       Digital Source Master (DSM)
Reference picture        Answer Print                   Digital Cinema Distribution Master (DCDM)
Video master             HD 1080/24p                    HD 1080/24p
Long term storage        Black and White Separations    Black and White Separations
The original negative or original digital tapes should be stored. The con-
formed and graded Digital Source Master (DSM) is the equivalent to the
timed IP. Although the color grading has been applied, it preserves much of
the range of the original photography if the movie needs to be remastered in
the future for a high dynamic range display. The most common file format
is DPX, with the data stored as 10-bit printing density. In the future, the
Open EXR file format may also be used, storing linear RGB data in floating
point form. It is important to tag the data with the color characteristics of
the Reference Projector used in mastering, and the 3-D color LUT used to
emulate print film.
Although the various distribution formats can easily be regenerated from
the DSM, it is also worth saving the output Digital Cinema Distribution
Master (DCDM) in TIFF-16 file format, storing 12-bit X’Y’Z’ color data,
as well as the HD 1080/24p home video master. The DCDM serves as the
answer print for color reference in future releases or restorations as well as
the master from which additional copies can be made. The HD 1080/24p
format is the master from which all other home video formats can be made.
Additional Considerations for Digital Cameras
Digital cameras are increasingly being embraced for motion picture origi-
nation. Just as with film, it is important to run preproduction camera tests
to explore the range of the camera and to optimize the setup. Calibration is
important with both the camera setup and the post facility that is generat-
ing the dailies or Digital Intermediate. Pioneered by the Panavision Genesis,
Thomson Viper and ARRI D-21 cameras, many cameras now offer a film-
style logarithmic storage mode that captures an extended dynamic range, providing grading flexibility in postproduction, much like negative film.
Data storage decisions affect production and postproduction. It is criti-
cal that you consult the postproduction facility that will be supporting your
dailies and finishing work. Many digital cameras store the pictures on
HDCAM-SR tape which is efficient to unload and reload on the set, as well
as quick to load into postproduction. However, this does require the rental
of relatively expensive HDCAM-SR recorders.
Another option is to temporarily store the pictures as data files on portable
flash cards, flash magazines or disk packs. Both compressed and uncom-
pressed storage solutions are available, but it is important to understand the
tradeoffs. Later, these digital files may be transferred to tape, or transferred
directly to working storage in the postproduction facility.
S.two and Codex offer uncompressed HD data storage on their disk-based
field recorders, which can be tethered to the camera via dual HD-SDI cables.
In 2009, S.two introduced the OB-1, providing dockable onboard storage of
uncompressed HD data via flash magazines. Also in 2009, Panavision in-
troduced their dockable Solid State Recorder SSR-1 for the Genesis camera.
The S.two and Codex field recorders also support the ARRI D-21’s
ARRIRAW format, which stores the full 12-bit signal range and full 1.33
frame area in a packed data format that can be transmitted across dual HD-
SDI cables, a proprietary mode that ARRI calls T-LINK. This means that the
ARRI D-21 camera can support traditional anamorphic lenses for widescreen
photography without compromising vertical resolution. ARRI supplies a free
software utility, the ARRIRAW Converter (ARC), which allows the ARRI-
RAW files to be converted to full frame 1.33 DPX files in postproduction.
Several cameras also support compressed data storage with onboard flash
cards or disk drives, including the RED One (r3d), Panasonic 3700 (P2),
and Sony EX-3 (XD-CAM) cameras. The RED camera uses a proprietary
wavelet compression codec, which produces reasonable results at a compres-
sion ratio of about 10:1. RED also provides software utilities for high qual-
ity rendering to DPX files for finishing. The P2 and XD-CAM formats use
much higher compression ratios, and while the resulting artifacts are not a
problem for editorial, they are not the best solution for finishing.
With any of these data formats, the production company must consider
the need to backup or archive the production footage to tape for storage. The
most widely used approach is LTO3 or LTO4 tape. For uncompressed data, this tape backup takes about three times the running time of the footage and can be performed on location or back at the post house that is providing dailies support.
ASC Color Decision List
by David Reisner and Joshua Pines
ASC Associate Members
[Figure 1. ASC CDL workflow: a dailies color correction (ASC CDL, carried in ALE, FLEx or XML files, with the look baked in to the dailies) travels to editorial, and from editorial (via CMX EDL or XML) to the DI conform and final color-correction process applied to the raw image data.(1)]
1) Currently, this communication is performed via the ASC CDL XML format or various vendor-specified methods. A communication protocol for this step will be defined in a future ASC CDL release.
Images corrected with the ASC CDL must still be viewed on calibrated displays with the same characteristics and
in very similar viewing conditions to communicate the intended look. Co-
ordination of data metrics and viewing conditions is also outside the scope
of the ASC CDL and must be handled elsewhere in a project’s workflow.
Although the ASC CDL functions are intended for purposes of inter-
change—to communicate basic color correction operations from set to
facility and between systems from different vendors—many vendors also
provide user level controls that operate on ASC CDL functions directly.
In a workflow, the set of ten ASC CDL parameters for a correction is
interchanged via ASC CDL-defined XML files, by new fields in ALE and
FLEx files, or special comments in CMX EDL files. Most often the formats
shown in Fig. 1 will be used, but the exact methods used will be facility- and
workflow-dependent.
ASC CDL corrections are metadata that are associated with shots. Unlike
LUT corrections, the ASC CDL corrections are not baked-in to the image.
Because of this approach, corrections later in the workflow can be based
on earlier ASC CDL corrections without modifying the image data multiple
times—yielding highest possible image quality. And sharing an ASC CDL
correction gives information about how the earlier corrector was thinking
about the look. Corrections implemented with LUTs are fixed—they can be
viewed, or additional corrections can be layered on, but they cannot practi-
cally be adjusted or tuned.
The ASC CDL supports only the most basic color correction operations.
Not all operations of interest (e.g., log/linear conversions, 3D LUTs, window-
ing, tracking) can be expressed with these operations. It is possible that future
releases will support a somewhat expanded set of interchangeable operations.
The ASC CDL does not handle everything necessary to communicate a
look. A project must manage and communicate basic and critical information
like color space, data representation format, display device, and viewing en-
vironment. To communicate a look between on-set and post or between post
facilities absolutely requires that information be shared and used intelligently.
[Figures 2–4. Transfer-function plots (Input vs. Output, each 0 to 1): Slope curves at 1.0 and 0.5 (offset 0); Offset curves at 0 and −0.5; Power curves at 0.5, 1 and 2.]
Slope
Slope (see Figure 2) changes the slope of the transfer function without
shifting the black level established by Offset (see next section). The input
value, slope, ranges from 0.0 (constant output at Offset) to less than infinity
(although, in practice, systems probably limit at a substantially lower value).
The nominal slope value is 1.0.
Offset
Offset (see Figure 3) shifts the entire transfer function up or down, raising or lowering the output levels without changing the slope. The nominal offset value is 0.0.
Power
Power (see Figure 4) is the only nonlinear function. It changes the overall
curve of the transfer function. The input value, power, ranges from greater
than 0.0 to less than infinity. The nominal power value is 1.0.
Saturation
Saturation provides a weighted average of the normal color (saturation
1.0) and all gray (fully desaturated, saturation 0.0) images. The saturation
operation modifies all color components. Color components are weighted
by the values used in most Rec. 709 implementations of saturation. Satura-
tion values > 1.0 are supported. Values > 4 or so will probably only be used
for special purposes.
sat is the user input saturation parameter. inR is the input red color com-
ponent value, G green, and B blue. outR is the output red color component
value, G green, and B blue. gray is the fully desaturated gray value, based on the color component weightings.
gray = 0.2126 * inR +
0.7152 * inG +
0.0722 * inB
outR = Clamp( gray + sat * (inR - gray) )
outG = Clamp( gray + sat * (inG - gray) )
outB = Clamp( gray + sat * (inB - gray) )
0 ≤ sat < ∞
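Putting the pieces together, the ten ASC CDL parameters (slope, offset and power per channel, plus saturation) define a simple per-pixel transform: each channel is computed as (in × slope + offset) raised to power, and the saturation operation above is then applied. The Python sketch below is a minimal illustration; the function names are ours, and we clamp before applying power to avoid negative bases, a detail that varies between implementations.

```python
def clamp(x):
    """Clamp to the nominal [0, 1] range, as in the equations above."""
    return max(0.0, min(1.0, x))

def asc_cdl(rgb, slope, offset, power, sat):
    """Apply an ASC CDL correction to one pixel.

    rgb, slope, offset and power are (R, G, B) tuples; sat is a scalar.
    """
    # Per-channel slope / offset / power
    sop = [clamp(c * s + o) ** p for c, s, o, p in zip(rgb, slope, offset, power)]
    # Saturation, weighted as in most Rec. 709 implementations
    gray = 0.2126 * sop[0] + 0.7152 * sop[1] + 0.0722 * sop[2]
    return tuple(clamp(gray + sat * (c - gray)) for c in sop)

# Nominal values leave a pixel unchanged:
# asc_cdl((0.5, 0.4, 0.3), (1, 1, 1), (0, 0, 0), (1, 1, 1), 1.0) -> (0.5, 0.4, 0.3)
```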
BEHAVIOUR FOR DIFFERENT IMAGE ENCODINGS
The ASC CDL operations perform the same math regardless of the encod-
ing of the image data to which they are being applied. The resulting modifi-
cations to the image will vary a great deal for different image data encodings.
Management of the image encoding and appropriate application of the ASC
CDL operations is the responsibility of the project/show and outside the
scope of the ASC CDL.
Slope
For linear encodings, Slope controls the brightness of the image while maintaining contrast—like adjusting the f- or T-stop.
Offset
For linear encodings, Offset controls the overall “base fog” of the image. The values of the entire image are moved up or down together, affecting both brightness and contrast. This is not traditionally a common operation for linear data.
Power
For linear encodings, Power controls the contrast of the image.
Saturation
For all encodings, including linear, Saturation controls the saturation—the intensity of the color of the image.
[Example frames: Slope at 0.5, 1.0 and 1.5; Offset at −0.2, 0.0 and +0.2.]
The old telecine Lift function—raising or lowering the darks while holding the highlights constant—can be achieved via a combination of Offset and Slope. Similarly, the telecine Gain function can also be achieved via a combination of Offset and Slope.
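For example, a telecine-style Lift that raises the blacks by an amount l while pinning the white point at 1.0 can be built from those two operations. A minimal sketch (our own notation, not ASC CDL syntax):

```python
def lift(x, l):
    """Raise blacks by l while holding the white point at 1.0.

    Equivalent ASC CDL parameters: slope = 1 - l, offset = l, power = 1.
    x = 0 maps to l; x = 1 still maps to 1.
    """
    return x * (1.0 - l) + l
```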
Log
ASC CDL operations will have similar effects on images in the common encodings that are generally log. Those encodings include printing density (e.g., 10-bit DPX printing density).
Slope
For log encodings, Slope controls the contrast of the image.
Offset
For log encodings, Offset controls the brightness of the image while maintaining contrast—like adjusting the f- or T-stops. This is essentially the same as Printer Lights but with different values/units.
Power
For log encodings, Power controls the level of detail in shadows vs. highlights. This is not traditionally a common operation for log data.
[Example frames: Saturation at 0.5, 1.0 and 2.0.]
ASC CDL INTERCHANGE FORMATS
The ASC CDL allows basic color corrections to be communicated through the stages of production and postproduction and to be interchanged between equipment and software from different manufacturers at different facilities. The underlying color correction algorithms are described above.
When ASC CDL color correction metadata is transferred from dailies to editorial and from editorial to postproduction, provided that data representation, color space, and viewing parameters are handled consistently, the initial “look” set for dailies (perhaps from an on-set color correction) can be used as an automatic starting point or first pass for the final color correction session.
ASC CDL metadata is transferred via extensions to existing, commonly used file formats currently employed throughout the industry: ALE, FLEx, and CMX EDL files. There are also two ASC CDL-specific XML file types that can be used to contain and transfer individual color corrections or (usually project-specific) libraries of color corrections.
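As an illustration of what such a correction looks like on the wire, the sketch below builds a single ColorCorrection element with Python's standard library. The element names (ColorCorrection, SOPNode, SatNode, Slope, Offset, Power, Saturation) follow the ASC CDL XML format; the id value and parameter numbers are invented for the example.

```python
import xml.etree.ElementTree as ET

def make_cdl(slope, offset, power, sat, cc_id="shot_042a"):
    """Serialize one ASC CDL color correction as XML."""
    cc = ET.Element("ColorCorrection", id=cc_id)
    sop = ET.SubElement(cc, "SOPNode")
    ET.SubElement(sop, "Slope").text = " ".join(map(str, slope))
    ET.SubElement(sop, "Offset").text = " ".join(map(str, offset))
    ET.SubElement(sop, "Power").text = " ".join(map(str, power))
    sat_node = ET.SubElement(cc, "SatNode")
    ET.SubElement(sat_node, "Saturation").text = str(sat)
    return ET.tostring(cc, encoding="unicode")

# make_cdl((1.05, 1.0, 0.95), (-0.02, 0.0, 0.01), (1.1, 1.0, 1.0), 0.9)
# -> '<ColorCorrection id="shot_042a"><SOPNode><Slope>1.05 1.0 0.95</Slope>...'
```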
ALE and FLEx files are used to transfer information available at the time of dailies creation to the editorial database. New fields have been added to these files to accommodate ASC CDL color correction metadata for each shot.
CMX EDL files are output from editorial and used primarily to “conform” the original material to match the final edit.
Comments and questions about ASC CDL developments may be sent via e-mail to asc-cdl@theasc.com.
The ASC CDL was created by the American Society of Cinematographers
Technology Committee, DI subcommittee. Participants included ASC mem-
bers, color correction system vendors, postproduction facilities, and color
scientists.
Discussion and questions may be directed to: Lou Levinson, ASC Technology Committee, DI subcommittee chair: joe.beats@yahoo.com; Joshua Pines, ASC Technology Committee, DI subcommittee vice-chair: jzp@technicolor.com; David Reisner, ASC Technology Committee and DI subcommittee secretary: dreisner@d-cinema.us.
The Academy of Motion
Picture Arts and Sciences
Academy Color
Encoding System
by Curtis Clark, ASC and
Andy Maltz, ASC Associate Member
OVERVIEW
Among its benefits, the Academy Color Encoding System (ACES):
◗ Preserves the full range of highlights, shadows and colors captured on-
set for use throughout postproduction and mastering
◗ Preserves the ability to use traditional photometric tools for exposure
control rather than having to compensate for custom or proprietary
viewing transforms in conjunction with a video monitor
◗ Simplifies matching of images from different camera sources
◗ Enables future expansion of the creative palette by removing the limita-
tions of legacy workflows
Key components and features of ACES include:
◗ A standardized, fully specified, high-precision, high dynamic range,
wide color gamut color encoding specification (SMPTE ST2065-
1:2012 Academy Color Encoding Specification) encompassing the full
range of image color and detail captured by current and future digital
cameras
◗ A standardized, fully-specified high precision and high dynamic range
printing density specification (SMPTE ST2065-3:2012 Academy Densi-
ty Exchange and the related SMPTE ST2065-2:2012 Academy Printing
Density) encompassing the full range of image color and detail captured
by modern motion picture film stocks
◗ Standardized file formats for colorimetric and densitometric data based
on the popular OpenEXR and DPX data containers
◗ A methodology for display device-independent mastering that pro-
duces higher quality images than are possible with legacy workflows
◗ Recommended best practices for interfacing digital motion picture
camera “raw” data and film scanner output data to ACES
◗ Support for ASC CDL in on-set color management systems
The full benefits of ACES are achieved when cameras, color correctors,
image creation and processing tools, and display devices correctly imple-
ment the ACES specifications and recommended practices.
ACES is capable of encoding all colors viewable by the human visual sys-
tem (see Figure 1). This greatly exceeds the range covered by HDTV and
Digital Cinema projectors.
ACES is derived from a hypothetical ideal recording device, designated as
the Reference Input Capture Device (RICD), against which actual recording
devices’ behavior can be compared (see Figure 2). As conceived, the RICD
can distinguish and record all visible colors, as well as capture a luminance
range exceeding that of any current or anticipated physical camera. The
RICD’s purpose is to provide a documented, unambiguous, fixed relation-
ship between scene colors (also called “relative scene exposure values”) and encoded RGB values. This fixed relationship is integral to a fully specified imaging architecture, as it provides a reference point for bringing any form of image source data into the system.
[Figure 1. CIE 2-Degree Chromaticity Diagram with gamuts, showing ACES primaries compared to other color encoding specifications.]
Because ACES encodes scene colors, ACES values must be adjusted for
the target display environment and device characteristics to faithfully repre-
sent the recorded images. These adjustments are performed by the Reference
Rendering Transform (RRT) and a display device-specific Output Device
Transform (ODT) that are, in practice, combined into a single transform.
Figure 2 also shows the combined RRT and ODT along with the SMPTE
Digital Cinema Reference projector, a common configuration for mastering
theatrical motion pictures.
When an actual camera records a physical scene, or a virtual camera (e.g.,
a CGI rendering program) constructs a virtual scene, a camera-specific
Input Device Transform (IDT) converts the resulting image data into the
ACES relative exposure values as though the subject had been captured by
the RICD (see Figure 3).
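Conceptually, the path from camera data to a calibrated display is a composition of the transforms described above. The sketch below is purely illustrative; the transform arguments are placeholders standing in for real IDT/RRT/ODT implementations, not anything defined by the ACES documents.

```python
def render_for_display(camera_data, idt, rrt, odt):
    """Illustrative ACES viewing pipeline.

    idt: camera-specific Input Device Transform -> ACES relative exposures
    rrt: Reference Rendering Transform
    odt: display-specific Output Device Transform
    In practice the RRT and ODT are combined into a single transform.
    """
    aces = idt(camera_data)      # scene-referred ACES values
    return odt(rrt(aces))        # display-referred code values
```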
As noted earlier, ACES images are not appropriate for final color evaluation or direct viewing; like film negative or digital files containing scanned images encoded as printing density, they are only intermediate representations.
[Figure 3. Capture paths into ACES: an original scene recorded by a digital motion picture camera, and an original scene recorded by a film camera onto film negative, scanned via an APD-calibrated scanner into the Academy Density Exchange Encoding; film-out reproduction runs from film recorder to film negative, film print, film projector and theater.]
2. RDT (Reference Device Transform) is the ODT for the SMPTE Reference Projector Minimum Color Gamut (Nominal) (see SMPTE RP431-2-2007 Reference Projector and Environment, Table A.1).
Curtis Clark, ASC currently chairs the ASC’s Technology Committee and has
done so since it was revitalized in 2002.
Andy Maltz serves as the director of the Academy of Motion Picture Arts and Sciences' Science and Technology Council.
The Cinematographer
and the Laboratory
revised by Rob Hummel
ASC Associate Member
PRINTER POINTS
The laboratory controls print density and color balance by increasing or
decreasing the intensity of each primary color of light in steps called printer
points. Since the development of the B&H Model C printer, most manufac-
turers have standardized on a range of 50 light points in 0.025 Log E incre-
ments. In addition to the light points, each printer usually also has 24 trim
settings (0.025 Log E), providing an available total of 74 lights.
When a lab changes the brightness/darkness of a print, they will say they
are adjusting the density of the print (which, quite literally, they are). Density
is adjusted by moving the values of all three light points (RGB or YCM) in
unison. Individual adjustment of the colors is obviously used for specific
color correction, or timing.
This means that at some laboratories, a light point of 50 does not necessarily mean there is no more room to go; sometimes you may be able to go up to the equivalent of a 65 light point. At other laboratories, however, a 50 light point really is the end of the scale. Again, you must consult the lab you are using and get advice about the flexibility of their light point scale.
One could argue that the ideal settings for scene-to-scene timing would
be at mid-scale (Trim 12 + Tape 25 = 37 lights). In actual practice the avail-
able range is considerably less. Printer lamps are usually operated under their
rated voltage. This reduces the light intensity in all three colors. For example,
lowering the voltage from 120 to 90 volts on a BRN-1200 watt lamp results in
a relative change in printer points equal to minus 12 Red, 13 Green, 17 Blue.
The trims are usually used to balance the printer for a given print-film emul-
sion. A typical emulsion might require 16 Red, 13 Green, 10 Blue or, in terms
of the ideal, plus 4 Red, plus 1 Green, minus 2 Blue. Other factors influencing
the available printer points are the operating speed of the printer, and the use
of neutral-density filters in the individual channels and the main light beam.
The sum of these variables explains why a given negative might be printed
28 Red, 29 Green, 22 Blue at one laboratory and 36 Red, 32 Green, 36 Blue at
another laboratory to produce matched prints. It is important to understand
that printer points relate only to how the printer exposes film. A one-stop .30
Log E change (12 printer points x .025 Log E ) is equal to a one-stop exposure
in the camera only if the film in the camera has a gamma of approximately 1.0.
The current negative films, both black & white and color, have gammas of
approximately .65. Therefore, in correlating camera and printer exposure,
one camera stop equals ⅔ x 12 = 8 printer points. Much testing has borne
out the fact that one camera stop change in exposure equals 8 points of
printer density. At extreme ends of the printer scale it may vary a point or so
from that, but using 8 points is always the best rule to follow.
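The arithmetic behind the 8-points-per-stop rule can be written out directly. A minimal Python sketch (the function name is ours; the default gamma is the approximate negative gamma cited above):

```python
def camera_stops_to_printer_points(stops, neg_gamma=0.65):
    """Convert a camera exposure change in stops to printer points.

    One stop = 0.30 log E at the camera; the negative passes on
    neg_gamma of that change; one printer point = 0.025 log E.
    """
    return stops * 0.30 * neg_gamma / 0.025

# camera_stops_to_printer_points(1) -> 7.8, i.e., the familiar 8 points per stop
```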
Your lab will happily expose a section of your negative with a 21-step gray scale,
process the negative and plot the negative’s curve, so you will immediately
be able to see what condition your emulsion is in. When compared with
the curves supplied by the manufacturer of that emulsion batch, you can
precisely determine if this batch has shifted.
I’d like to make a comment here about professional motion picture nega-
tives. It is always assumed that motion picture negatives will be used within
six months or so of manufacture, and once exposed, will be processed within
72 hours. Negatives older than six months run the risk of a high base fog
density that will result in milky blacks in your print. Motion picture films
don’t remotely have the latent image stability of still-camera films, which is
why they need to be processed promptly or refrigerated at very low tempera-
tures when timely processing isn’t possible.
For consistent tracking and monitoring of processing from lab to lab, the
industry came up with Laboratory Aim Density, or LAD.
With LAD, film laboratories can determine how far off of a reference
standard they are when printing. We are principally concerned with LAD
for color positives printed from original negative; however, there are LAD
values for interpositives and internegatives as well. Again, a densitometer
is used to read a predefined “LAD patch” on a print. The RGB/YCM LAD
value is the same for all Eastman Kodak Motion Picture negatives. Fuji ele-
ments have their own LAD values.
LAD values for all Kodak negatives are: 0.80 Red, 1.20 Green and 1.60 Blue.
Cutting in of a LAD reference frame is standard operating procedure for
answer printing at film laboratories. This enables the answer-print timer
to know exactly where any given print stands vs. the LAD reference point.
Reading of the LAD patch lets the timer know if the print is leaning one
way or another in color or density. It’s always advisable before beginning a
screening of any answer print to ask the timer how the print is, or “What’s
the chart reading?” The timer will then express to you the values for each
color and how far off of standard, plus or minus, the print is.
While not standard, it’s not out of the question to ask to have a LAD patch
cut into your dailies each day, and have the patch read so you can know how
far off of standard your daily prints may be. If this is requested, also under-
stand that it adds one more part to the delivery of your print dailies, and
will delay their delivery ever so slightly. When printing one-light dailies, an advantage of a LAD patch cut into your dailies is that you will know immediately whether color drift is the fault of the laboratory or of your own cinematography.
As for tolerance in LAD readings, understand that for camera negative,
0.025 Log E equals one point. For release prints, the lab should be allowed
a swing of 0.05 Log E (two points) on any individual color; however, you
shouldn’t have to be tolerant of anything more than 0.05 Log E sway in any
of the readings (especially for answer prints), but how tolerant you want to
be is at your discretion.
An example of how this procedure is implemented at a lab might be as
follows:
You travel to the laboratory to screen a reel of your movie with the answer
print timer. As you settle into your seat, you ask, “How did this print come
up?,” “What’s the chart reading for this print?” or “How are we doing today?”
While the timer is provided the precise Log E value plus or minus from the
standard, he or she will usually do some quick math and express its values in
light points. For example, you might hear, “It came up plus one, zero, minus
a half,” which would translate into plus one point red, on target for green
and minus a half point of blue (unless you are at a lab that uses subtractive
colors, where they would be referring to yellow, cyan, and magenta). That
being said, they will happily provide you with the precise Log E values if you
request it, remembering that 0.025 Log E translates into one printer point.
EXPOSURE REPORTING
It used to be normal practice for laboratories to furnish one-light rather
than timed daily rush prints. However, most labs today find that, to be com-
petitive, they must offer scene-to-scene timing of dailies. That being said,
timed dailies usually yield an inconsistent workprint since they end up being
timed on an individual basis each day.
The term “one-light dailies” doesn’t mean that all negatives are printed at
the same light points. The laboratory establishes a day exterior, day interior,
night exterior and night interior light points for each film stock a cinema-
tographer may choose when he or she starts a picture, based on the first few
days of shooting. Each laboratory establishes its own method, but basically
all try to keep usable negative within the 1 to 50 light point scale (laborato-
ries do not necessarily agree on the numerical value of the preferred mid-
scale light point, but this is not critical as long as you know which system
your laboratory uses). A conference with your laboratory representative will
establish methods that fit your style of photography. After that, variation in
your exposure will show up as variation in the density of your solid black in
any area of the print; exposure must be kept at or above what the laboratory
tells you is their average light point for a well-exposed piece of film.
Should you feel uncomfortable committing to one-light dailies, at least
instruct your dailies timer/grader to try to use the same color and density
ratio light point, should you be shooting the same scene for several days at
a time. This will mitigate color shifts from timing the dailies on their own
merits each day.
Negative raw stock from different manufacturers may or may not have
the same base density, maximum density or density/exposure characteristic
(curve shape), although these differences are usually small. A daily print
made by the LAD control method shows the density and color ratio at
mid-scale on the printer. Negative from two manufacturers, both exposed
correctly, may or may not look the same at this printer point. If necessary, an
adjustment to the printer point may be made for the difference in raw stock,
and this new light point can be used for printing dailies on the subject.
SPECIAL PROCESSING
If special processing is requested, or the cinematographer is using high or low exposure for effect, a conference with the laboratory representative and experimentation (or experience) is desirable. It is also desirable to test the effect by going through the entire release-print technique,
including the interpositive/duplicate negative generations, and to view the
result as nearly as possible under the anticipated release-print viewing con-
ditions. If the scene to be photographed will be used in an optically printed
visual-effect, it is wise to confer with the appropriate visual-effects experts.
In emergency situations, modern color negative films can be pushed up to three stops with some loss in quality. The ability to underexpose these films and still obtain a
usable image should by no means be regarded as a suitable substitute for
additional lighting when it can be provided.
If a cinematographer anticipates the need for deliberate underexposure
during a production, he or she should, if possible, shoot careful tests in advance using the same emulsion that is to be used for the production and have
them processed by the lab that will be processing the production film. The re-
sults can then be analyzed with the help of a laboratory representative. Need-
less to say, underexposed rolls should be clearly marked with instructions as
to how much they should be pushed when they are sent to the laboratory.
FLASHING
Even before discussing flashing, it must be said that one lab’s measure-
ment of a flash is not likely to match how another lab measures it. Thus, don’t
base a request for a type of flash on an experience at another lab; testing is
critically important.
Flashing may be described qualitatively as subjecting the negative film to
a weak, controlled, uniform fogging exposure prior to development—either
before, during or after photographing the desired subject. There is no measurable difference in the effect if the flashing takes place before or after the principal exposure. However, because of various unfavorable factors (such as not being able to control the time interval between the flash exposure and the time that development will actually take place, and not knowing the actual conditions of photography in advance), preflashing is generally avoided in favor of postflashing.
Simultaneous flashing during actual photography by means of a special
device attached to the front of the camera lens is described shortly under the
heading Lightflex. However, if not properly controlled, flashing on the cam-
era in the field runs the risk of an imprecise and improperly color-balanced
flash.
Since color negative consists basically of three emulsion layers sensitive
to red, green and blue light, the spectral composition of the light used for
flashing can be a neutral equivalent to tungsten light (3200°K) or daylight
(5500°K) which, depending on the film, would affect all three emulsion layers
equally. The fundamental reasons for using a neutral flash are to reduce the
contrast of the image and to increase shadow detail. This effect is accom-
plished because the flashing exposure affects principally the shadow region
of the negative image.
Another reason for flashing is to achieve certain creative effects by using a
nonneutral flashing exposure that would then alter the normal color rendi-
tion of the developed negative.
LIGHTFLEX
Lightflex (and Arriflex Varicon) is an on-camera accessory, mounted in
front of the lens, which overlays a controlled amount of light on the scene
to be photographed at the time of exposure. It allows the cameraperson to
modify the gamma curve of the film in the camera during shooting. This
can extend the photometric range of the film, and provide more detail in the
shadow areas without affecting grain.
Lightflex works exactly the same with video cameras, providing more
shadow area detail with no increase in noise. A filter tray between light sphere
and reflector permits colored light overlay for special effects. The system’s
lamphouse holds three quartz bulbs that operate selectively or simultane-
ously to hold color temperature stable over the widest possible range. Com-
pound curved reflectors cover all standard 16mm and 35mm fixed lenses
and zooms.
Intensity control is by a handheld electronic dimmer with a digital 1⁄10-volt
line indicator. The lamphouse front features a built-in Obie light, with
changeable masks for direct or diffused light. The unit housing is mounted
on a swing-away bracket to facilitate lens changes and maintenance. There
are two front-mounted matte box shades for wide angle or normal lenses.
There are two lens filter stages: one for two 3" x 3" filters and one for two
4" x 4" or 4" x 5.6" filters or grads (rotatable). A single 4" x 5.6" filter may be
mounted inside the Lightflex housing.
Panavision’s Panaflasher achieves similar results with an internal device.
Emulsion Testing
by Steven Poster, ASC
The second part of the test determines the optimum black density obtainable with your chosen print stock and any special printing techniques (flashing the print stock, ENR, CCR, optical printing, digital scanning or print skip-bleach processing, for example). This is done
by printing your piece of unexposed processed film stock at a succession of
printer lights increasing by 2 to 4 points of density (the equivalent of ⅓ of a
stop of printer exposure at your lab). If you are planning to use any unusual
printing techniques or print processing techniques, they should be applied at
this point. Any subsequent printing for these series of tests should have these
techniques applied as well.
A trick that I have often used to help me judge my optimum black density
is to punch a hole in the negative with a single-hole paper punch before it
is printed. This will give you a reference to zero density in the frame, which
can help determine the optimal visual black tone that you want. Your desired
black tone will never be as black as the portion printed through the hole, but
the reference helps to determine what density you will want to achieve with
your processing and printing techniques.
If your lab has film-strip projectors that they use for timing purposes, this
is a very good way to view these tests. Two identical prints can be made
which can be viewed side by side on these projectors, allowing you to study
the results and compare different densities. If no filmstrip projectors are
available, the length of each exposure should be long enough to allow you
time to view it sufficiently on the screen during projection.
Once you have determined which density you would like to represent black
in your final print, this should be read on the densitometer and used for later
reference. You can also read the densities of each level of printer lights to see
where reciprocity sets in, although this is not actually necessary because this
density will probably be deeper than you will actually be using to print.
A test for no-density print highlights can also be done at this time by print-
ing a piece of opaque leader at the determined printer lights and reading the
resulting density. The difference between your chosen black density and the
resulting white density will determine the dynamic range of the print stock.
In order to determine the speed and working range of your negative in rela-
tion to that print stock, further testing is necessary.
You should now have an optimum black density and a reference to the
intensity of the printer lights that will be required at your lab to achieve that
density with your chosen negative stock, as well as any unusual processing
methods and any variation in printing techniques that you choose to use.
This brings us to the third part of the test.
EXPOSURE CALIBRATION
This will be the first camera test that will provide the working speed or
exposure index (EI) that will allow you to judge the exposure necessary to
represent the values photographed as normal tones on the final print, when
that print is made using the recommended density arrived at during the first
two parts of these tests. You must determine the amount of light that it will
require to properly photograph a midgray tone when the negative is printed
to the benchmark density.
There are several things I would like to mention at this point about testing
methods. Everybody has their own method of measuring light values. There
are probably as many methods as there are people taking exposure readings.
If your meter and method of reading works for you, it is correct.
I prefer to use a spot meter and take my neutral readings off of a Per-
manent Gray Card. I feel that this gives me a consistent and accurate way
of judging not only the light falling on a subject but the reflectance of that
subject as well. For these tests I also like to vary the amount of light falling
on the subject rather than changing the T-stop on the lens. I feel that this
method gives me a more accurate series of exposures because there is no
reliable way to vary the stop by fractions due to the variables and tolerances
of the lens iris. As you will see later, extreme exposures will be needed. In these cases I may vary exposure time but never T-stop.
The lighting for these tests requires flat, even illumination over the surface of the subject, like copy light (light from two sides of the subject, at a 45° angle from the camera). The color temperature of the light should be as close
to 3200° Kelvin as you can get, except in tests of daylight film, when 5500°
Kelvin should be used.
If you are planning to use filtration (such as diffusion of some kind), these
filters should be used in all subsequent tests. This is because some of these
filters can tend to absorb light. Even though this effect will be very slight, it
can affect the results of your tests by as much as 2⁄3 of a stop when you use
heavy filtration.
Make a series of exposures of an 8" x 10" gray card and a face with neu-
tral skin tone at a series of stops based on variations in the manufacturer’s
recommended exposure index. Start the series at one stop under the recommended EI and increase the exposure by ⅓ of a stop until you reach one stop over the recommended speed.
For instance, if you were testing an emulsion with a recommended speed
of 500, you would start your test at an EI of 1000 and proceed to an EI of 250
in ⅓-stop increments, resulting in seven different exposures.
Remember, don’t vary the T-stop. Change the amount of light to give the
proper exposure at the T-stop you are using.
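The series of ratings is easy to generate. A minimal Python sketch (the function name is ours; EI values are rounded rather than snapped to standard ⅓-stop numbers):

```python
def ei_series(recommended_ei, stops_under=1, stops_over=1):
    """List EI ratings from one stop under (higher EI) to one stop over
    (lower EI), in 1/3-stop increments around the recommended speed."""
    thirds = range(3 * stops_under, -3 * stops_over - 1, -1)
    return [round(recommended_ei * 2 ** (t / 3)) for t in thirds]

# ei_series(500) -> [1000, 794, 630, 500, 397, 315, 250]  (seven exposures)
```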
Print the negative at the benchmark density arrived at in the second part
of the test, adjusting the printer ratio (color balance) to reproduce a neutral
gray. Read the print density of the gray in each exposure. A proper mid-
gray print density for theater viewing should be R/1.09, G/1.06 and B/1.03
(status A filters).
For example:
First Series        Second Series
Normal              Normal
1 Stop Under        1 Stop Over
1⅓ Stops Under      2 Stops Over
1⅔ Stops Under      3 Stops Over
2 Stops Under       3⅓ Stops Over
2⅓ Stops Under      3⅔ Stops Over
2⅔ Stops Under      4 Stops Over
3 Stops Under       4⅓ Stops Over
3⅓ Stops Under      5 Stops Over
4 Stops Under
5 Stops Under
CONCLUSION: TRUST YOUR EYE MORE THAN YOUR METER
It is important to remember that these tests are not scientific but em-
pirical. They are meant to train your eye to the dynamic range of your
emulsion under working conditions. The tests should be a good working
reference. In fact, I have often taken frames of each exposure and mounted
them in slide mounts for viewing on the set if I want to know exactly where
to place a specific tone on the scale so that it will be represented exactly as
I want in the final print. To do this, you will need a small light box properly
color-corrected and with an illumination of 425 FC ±10%.
It is most important to learn to trust your eye rather than relying upon too
many exposure readings. These tests should give you a better understand-
ing of the results of exposing, processing and printing your original camera
negative so that you can predict exactly what the images you make will look
like. With this knowledge, you should be able to make more consistent, dra-
matic images that will help tell the story of your motion picture.
Finding Your Own Printer Light
by Richard Crudo, ASC
Not every lab means the same thing by the same words. Left to its own devices, any lab can deliver a one-light print
every day. The difference is that since the dailies timer is making color and
density choices on his or her own, the lab’s version will inevitably change its
R-G-B Hazeltine values from negative roll to negative roll or even among
different shots and setups within each roll. The printer light that ultimately
results from the following procedure is something chosen by the cinematog-
rapher, and is meant for use in all situations matching the lighting conditions
under which the test is performed. Thus, it is possible—indeed preferable—
to shoot an entire feature film on the same printer light. That said, it is also
viable to establish printer lights for defined situations (e.g., day/exterior, day/interior, night/exterior, night/interior).
PRELIMINARIES
A. Make sure that camera body and lens tests are already completed and
approved.
B. Secure a pair of showcards flat against a wall, end-to-end in a horizontal
fashion. The black side of one of the cards should be placed on the left, the
white side of the second card on the right.
C. Recruit a model with a representative fleshtone. The term “representative”
is somewhat ambiguous in its use here, but that is part of what this test is
trying to determine. Avoid extreme complexions or coloring of any kind
unless you anticipate dealing with them during principal photography.
Also, make sure that your model dresses in neutral shades rather than
solid blacks or whites or any heavily saturated colors.
D. Using a lens from your tested, matched production set, compose the shot
so that the two showcards completely fill the frame. Place your model
directly facing the lens, just slightly in front of the center seam where
the two showcards meet. In 1.85 format, a 40mm lens works well at a
distance of about eight feet and renders a pleasingly medium close-up of
the model.
E. Create a series of individual flash cards that the model will hold within
the frame to clearly indicate the exposure information relevant to each
take:
NORMAL, +½, +1, +1½, +2, NORMAL, -½, -1, -1½, -2, NORMAL
F. In addition to a gray card, the following information should be mounted
in a plainly readable fashion somewhere within the frame but not in such
a way as to impede sight of the model:
• emulsion type and batch/cut numbers
• ASA/Exposure Index
• lens focal length
• T-stop
• development information (normal, push, pull, ENR, CCE, etc.)
• print stock type
• optional: color temperature of the light source you’re using
LIGHTING
A 2K Fresnel is an ideal unit to use with this test. Place it at an angle of about 20 degrees off camera right and make sure the light is evenly spread at full flood across both the model and the two showcards—with no hot spots
or dropoff of any kind. From this position, the lamp serves a dual purpose
by not only properly illuminating the model but by throwing the model’s
shadow onto the black-sided showcard that covers the left half of frame. The
deep, rich, velvety darkness this provides will serve as an important point of
reference when judging the projected print.
Do not use any diffusion on the lamp and do not add any light to the fill side.
ASA/EXPOSURE INDEX
Since film speed is a relative concept, the best starting point is to rate the
negative at the manufacturer’s suggested value. Besides providing the infor-
mation necessary to choose a single printer light for the run of the show, this
test will also allow the setting of an effective ASA/EI rating for the manner
in which the film is to be exposed.
T-STOP
In the interest of contrast uniformity and the elimination of as many
variables as possible, lock the iris ring at a predetermined T-stop and leave
it alone for the length of the test. It should rest precisely at or close to the
primary setting intended for use across much of the shoot. Measured expo-
sure shifts will be carried out through adjustments in lighting intensity and
a combination of neutral-density filters and shutter-angle changes. (For the
purpose of this article, the working stop for the test will be T2.8.)
FILTERS
If plans for principal photography include the use of filters in front of or
behind the lens, slip the appropriate grade into the matte box or filter slot
before you begin the test.
LABORATORY INSTRUCTIONS
The camera report should prominently display the following orders:
◗ Develop—(normal, push, pull, ENR, CCE, etc.)
◗ Print this negative roll two times.
◗ First pass: print on best light for gray card/normal exposure only.
◗ Second pass: correct each take back to normal in ½ stop (4 points each)*
increments.
◗ Note well: normal exposures should all print at the same light in all cases.
◗ Do not join these rolls together or to any other roll.
By keeping the two test rolls separate, you will be able to view them
in rapid succession without having to deal with any other distracting
material.
Basically, the one-light printing of the gray card and normal takes—
and thus the timing of the entire first roll—allows the dailies timer a
fighting chance at showing his or her interpretation of what will look
best onscreen with respect to the lab’s processing standards and the con-
ditions set up by the cinematographer. The second uncorrected printing
pass ensures that you will see the effect over- and underexposure will
have on the emulsion in its purest state—without any assistance or aug-
mentation from the lab. Examining both rolls together essentially de-
fines a place from which the lab timer and cinematographer can begin
to deviate.
Note: For purposes of this test, it is given that the laboratory’s system
is calibrated so that 8 printer points equal one T-stop on the lens.
MISCELLANEOUS
◗ Beware of ambient light or anything else that might compromise the
test’s integrity.
◗ Be meticulous with meter readings. If you choose an iris setting of T2.8,
your normal exposure should read precisely T2.8 at the model’s face.
Measuring the increase in light level needed to support the overexpo-
sure parts of the test should be handled with equal care.
◗ Do a separate test for each emulsion you plan to use and each lighting
condition you plan on encountering.
◗ Be sure the model clearly displays the placards indicating the proper
exposure for the take being photographed.
◗ Don’t rush.
THE TEST
First, fill the frame with a gray card. Light it to T2.8 and expose 20 feet at
the same value.
Next, recompose to fill the frame with the black and white showcards,
featuring the model at the center seam.
Following the notations in each column, expose 30 feet for each step as
noted:
(Note that after the first normal exposure the light level increases from
T2.8 to T5.6. This is done to facilitate the overexposure takes. The standard
iris setting here is T2.8, so before starting the test, simply light the model to
T5.6 and then use two double scrims on the 2K Fresnel to knock down the
intensity to T2.8 when needed.)
If overexposure is to be carried as far as +3 stops, the basic light level must
be increased to T8 to accommodate that portion of the test. Proportional
changes should then be made to the scrims and shutter angle/neutral density
filter combinations.
THE RESULTS
When viewing the projected film, refer to the lab’s printer-light notation
sheet that corresponds to the test exposures.
You should speak to your lab contact as to what is considered a “normal”
printing light for the lab you are using; however, we will assume for this
article that a “normal” printer light would read 25-25-25. Roll 1 will now
obviously play all the way through at light 25-25-25. Any exposure changes
noted on screen will thus be a direct result of what was done at the lens. This
pass is especially helpful in gauging color drift as it relates to exposure. It
is also a good indicator of the emulsion’s ability to hold detail at the noted
extremes.
Roll 2 is merely a second printing pass of the same negative but with the
identical series of exposures corrected back to normal in measured increments by the Hazeltine timer. Based on the concept of 25 across being nor-
mal, refer to the following boxed chart for the progression of printer lights
(assuming a laboratory printing scale of 8 points = 1 stop).
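That progression is simple to compute. A minimal sketch, assuming 25-25-25 as normal and 8 points per stop (the function name is ours):

```python
def corrected_light(stops_off, normal=25, points_per_stop=8):
    """Printer light needed to time an exposure error back to normal.

    An overexposed negative is denser, so it needs more printer light
    (a higher point value); an underexposed negative needs less.
    """
    return round(normal + stops_off * points_per_stop)

# corrected_light(+1.0)  -> 33  (one stop over prints at 33-33-33)
# corrected_light(-0.5)  -> 21  (half a stop under prints at 21-21-21)
```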
In this instance, the dailies timer has “helped out” by correcting “mistakes”
in exposure. The second pass thus provides an idea of how far the emulsion
can be stretched before it falls apart. Special attention should be paid to the
white and black showcards that make up the background behind the model.
Grain, color shift and variation in detail will be most readily apparent in
these areas. This pass can also provide information about contrast. If neces-
sary, it may be requested that the laboratory print “over scale” in order to
achieve correction on an overexposure.
CONCLUSION
Now that the data needed to decide all critical concerns has been revealed,
decisions that will directly affect the film’s look can be made in an informed
manner. Subjective judgement once again comes into play, but the difference
is that the cinematographer is the one doing the judging.
Usually, the model’s fleshtone will need some tweaking regardless of which
test exposure is most pleasing. Let’s say that Eastman 5213 was used at its
recommended ASA/EI rating of 200. While viewing results of the corrected
Roll 2 on screen, it is decided that the grain structure and shadow detail
Richard Crudo, ASC currently serves on the Academy of Motion Picture Arts and
Sciences Board of Governors, and is a former ASC President and current ASC Vice
President. He has photographed the feature films Federal Hill, American Buffalo,
American Pie and Down to Earth.
Adjusting Printer Lights
to Match a Sample Clip
by Bill Taylor, ASC
Here's a low-tech but usefully accurate method for modifying printer lights “in the field” to match the color of a film clip, using CC Wratten filters.
You’ll need a light box with a color temperature around 5400°K, a card-
board mask with two closely-spaced, side-by-side frame-size holes cut into
it and a set of color correction filters of .05, .10, .20, .30, .40, and .50 values
in Yellow, Cyan, Magenta, and Neutral Density. (Kodak Wratten CC filters
are expensive and fragile to use as viewing filters, however they will last
a long time if handled carefully. CP filters retired from darkroom or still
photo use are just fine. You may be able to find less-expensive plastic sub-
stitutes.) You’ll also need a loupe with a field big enough to see at least part
of both frames.
Put the mask on the light box, put the sample print film clip (the one
to match) over the left-hand mask aperture. Put the target print (to be
reprinted) over the right-hand aperture. (The mask keeps your eyes from
being dazzled by white light leaking around the film frame.)
It’s a two-step process.
COLOR ADJUSTMENT
Estimate the color shift needed to bring the color of the target clip around to match the sample. For example, if the target is too blue, add yellow filters. If it's too cyan, add yellow and magenta. If the target clip now looks
too dark with the color filters added, remove some ND from the target (or
add it to the sample). With practice, you’ll be able to get a reasonable match.
From time to time, look away for a moment at a neutral color so your eyes
will not get tired.
Now add up the color filters (ignore the ND’s for now). Let’s say you
added a total CC .15 magenta and .30 yellow to the target print. Consult the
table below for the correction needed in printer points.
Printer Points = CC Value     Printer Points = CC Value
2 = .15                       5 = .40
3 = .20                       6 = .45
3 = .25                       7 = .50
4 = .30                       8 = .60
Why it works: color print stock has a gamma of about 3, meaning, as an example, that a CC .10 (log E) change in exposure produces a change of .30 density. Each printer point equals a CC .025 change in exposure, so it takes 4 printer points to produce that .30 change. The table shows calculations to the nearest whole printer point.
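The same calculation in code form, as a minimal sketch (the function name is ours; gamma 3 is the approximate print-stock value cited above):

```python
def cc_to_printer_points(cc_value, print_gamma=3.0):
    """Convert a summed CC viewing-filter value to printer points.

    A density change of cc_value on the print requires an exposure
    change of cc_value / print_gamma (log E); each printer point
    is 0.025 log E.
    """
    return round(cc_value / print_gamma / 0.025)

# cc_to_printer_points(0.30) -> 4, matching the table above
```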
Cinemagic of the
Optical Printer
by Linwood G. Dunn, ASC
Lin Dunn, the author of this article, is one of the motion-picture industry’s most
accomplished pioneers. Involved with the design and origins of the first optical
printers, Dunn remained vibrantly active in the industry throughout his life and
applauded the introduction of CGI techniques that, in many ways, obviated many
of the needs for his optical printer. Dunn is responsible for the effects you can
and can’t see in Citizen Kane, and literally hundreds of film projects after that.
He was always on the cutting edge, and in his nineties founded one of the first companies to develop digital cinema, Real Image Digital. Dunn's passing in May
1998 was a loss to all who were fortunate enough to be on the receiving end of his
generous knowledge and skill; his was also the loss of one of our remaining links
to Hollywood's earliest days. Winner of the Gordon E. Sawyer Award from the Academy of Motion Picture Arts and Sciences, and a past President of
the ASC, Dunn penned an article that is as relevant today as when he first wrote
it. — Editor
TRANSITIONAL EFFECTS
Employed to create a definite change in time or location between scenes.
The fade, lap dissolve, wipe-off, push-off, ripple dissolve, out-of-focus or
diffusion dissolve, flip-over, page turn, zoom dissolve, spin-in and out, and
an unlimited variety of film matte wipe effects are all typical examples of the
many optical transitional effects possible.
OPTICAL ZOOM
Optical zoom is used to change frame-area coverage and image size during
forward and reverse zooming action in order to: produce a dramatic or impact
effect (according to speed of the move); counteract or add to the speed and
motion of camera zooms or dolly shots; reframe by enlargement and/or add
footage to either end of camera zooms or dolly shots by extending the range
of moves; momentarily eliminate unwanted areas or objects by zooming for-
ward and back at specific footage points (such as when a microphone or lamp
is accidentally framed in during part of a scene); add optical zoom to static
scene to match camera zoom or dolly in a superimposure. The out-of-focus
zoom also is effective to depict delirium, blindness, retrospect, transition, etc.
SUPERIMPOSURE
Superimposure is the capability to print an image from one or more
films overlaid on one film. This is commonly done in positioning title let-
tering over backgrounds. Also used for montages, visionary effects, bas
relief; adding snow, rain, fog, fire, clouds, lightning flashes, sparks, water
reflections and a myriad of other light effects.
SPLIT SCREEN
Employed for multiple image, montage effects, dual roles played by one
actor, for dangerous animals shown appearing in the same scene with peo-
ple (as in Bringing Up Baby, which shows Katharine Hepburn working with
a leopard throughout the picture), where such split screens move with the
action. Matte paintings often utilize this technique when live-action areas
require manipulation within an involved composite scene.
QUALITY MANIPULATION
The quality of a scene, or an area within a scene, may be altered in order
to create an entirely new scene or special effect or to match it in with other
scenes. There are innumerable ways to accomplish this, such as adding or
reducing diffusion, filtering, matting and dodging areas, and altering con-
trast. Often library stock material must be modified to fill certain needs,
such as creating night scenes from day; reproducing black-and-white on
color film through filtering, printed masks or appropriately coloring certain
areas through localized filtering; and the combining of certain areas of two
or more scenes to obtain a new scene, such as the water from one scene and
the terrain or clouded sky of another.
ADDING MOTION
Employed to create the effect of spinning or rotating, as in plane and auto
interiors and in certain montage effects; rocking motion for boat action,
sudden jarring or shaking the scene for explosion and earthquake effects;
distortion in motion through special lenses for drunk, delirious and vision-
ary effects.
TRAVELING MATTES
Used to matte a foreground action into a background film made at another
time. The various matte systems in use today require the optical printer in
order to properly manipulate the separate films to obtain a realistic-quality
matching balance between them when combined into a composite. Use of
this process has greatly increased as modern techniques produce improved
results at reduced costs. Motion control, referred to earlier, has greatly wid-
ened the scope of this visual-effects category.
ANAMORPHIC CONVERSIONS
The standard optical printer equipped with a specially designed “squeeze”
or “unsqueeze” lens can be used to produce anamorphic prints from “flat”
images, or the function reversed. The possibility of the “flat” or spheri-
cal film being converted for anamorphic projection without serious loss
of quality has greatly widened this field of theatrical exhibition. The ma-
nipulations available on the optical printer also make it possible to scan and reframe images for other release formats.
NEW SYSTEMS
The optical printer is being used to develop new horizons in the creation
of special camera moves within an oversized aperture. This is particularly
effective in the creation of camera movement in a composite scene, such as
one involving a matte painting, thereby giving a greater illusion of reality.
Motion-Control Cinematography
by Richard Edlund, ASC
Desirable qualities in motion-control equipment include quietness of operation for sound, high tracking speed, long distance of travel, and
adaptability to various production cameras and visual effects cameras.
A visual effects supervisor who is familiar with this paraphernalia should
be available to collaborate with the director, director of photography, first
assistant director, production designer, and other appropriate crew mem-
bers to achieve the proper set-up for any given plate. Of course there is
responsibility to achieve any given plate within reasonable and predictable
set-up time, and for this reason careful preproduction planning is neces-
sary between the contracted visual effects facility and the unit production
manager. When shooting bluescreen or greenscreen scenes involving actors
within the principal production schedule, the wardrobe should be discussed
with the visual effects supervisor to make an attempt to avoid certain colors
that might cause matting problems in postproduction.
MOTION-CONTROL EXTENDS CINEMATIC CAPABILITIES
Motion-control systems are useful in many ways for visual effects. The
following list is certainly not exhaustive:
1. The ability to program model shots so that the motion of objects in an
effects scene is believable, and to preview these moves and modify them
as needed for approval.
2. The ability to repeat these scenes for front-light/back-light or front-light/
front-light matte passes if needed.
3. The ability to repeat these scenes for enhancement effects such as engine
passes, running lights, smoke-room effects, filtration, etc.
4. Precision flyby and extremely close approaches to objects can be accom-
plished smoothly and in perfect (programmable) focus.
5. Stop-motion and other forms of animation can be subsequently included
in live-action scenes that have field-recorded moving-camera data.
6. Massive rig removals on stage or on location can be easily accomplished
with moving split-screen techniques by shooting a clean pass of the back-
ground once the director is satisfied with any given take.
Using a joystick, each axis of the system can be recorded in real time and played back in concert until all of the functions necessary to the
particular shot are completed. The result is similar to remote controlling a
model airplane or car and making an exact record of what happened.
Using another programming technique, the joystick can be used to move
the components of the system to a series of start and end positions while
a record is made of these key positions, then subsequently commanding
the system to generate a mathematically smooth path through these points.
Much more complex methods of move generation are available using com-
puter graphics. There are many bells and whistles available which include
move-smoothing programs, even the use of a digital graphics tablet which
enables visual modification of graphically displayed motion curves; and
other specialized software ad infinitum.
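As a toy illustration of the key-position technique, the sketch below eases a single axis through recorded key positions with a cosine slow-in/slow-out between each pair of keys. Real motion-control systems use far more sophisticated curve generation; all names here are ours.

```python
import math

def smooth_path(keys, frames_per_segment):
    """Interpolate one motion axis through recorded key positions.

    keys: a list of start/end positions for each move segment.
    A cosine ease gives a slow-in/slow-out between each pair of keys.
    """
    path = []
    for a, b in zip(keys, keys[1:]):
        for f in range(frames_per_segment):
            t = f / frames_per_segment
            ease = (1 - math.cos(math.pi * t)) / 2   # runs 0 -> 1 smoothly
            path.append(a + (b - a) * ease)
    path.append(keys[-1])
    return path

# smooth_path([0.0, 10.0, 4.0], 24) -> per-frame positions for two 1-second moves
```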
The move files can be edited and modified in as many ways as there are
motion-control systems. A few commercial electronic motion-control sys-
tems are available, as well as mechanical systems, the most ubiquitous being the Kuper system, which recently received a Scientific and Engineering Oscar®.
Motion-Control Technique
When working on Star Wars, we started with an empty building and had to amass, modify and build our motion-control equipment before we could produce any images. We had created visual “violins” and had to learn to play them. Fortunately, the picture hit and a large audience showed up for our motion-control recitals. Since then, many innovations have come about in the equipment, and many excellent motion-control cinematographers have appeared, many with specialty techniques. In the studio there are two
main techniques for programming motion files: one is to use start and end
positions for each axis of motion and have the computer generate the moves;
the other allows the cameraperson to generate the move by joystick. It is my
opinion that the computer-generated method is superior for graphics and
animation purposes, and the human interface is best for most miniature and
model photography. If shots are created using a computer, the moves will
have mathematically perfect curves, slow-ins, slow-outs, etc., and will have
no heartbeat or verve, especially in action sequences, therefore becoming
subliminally predictable and less interesting to the audience. The human operator is not interested in mathematical perfection; rather, they tailor the camera move moment by moment to what is interesting in their viewfinder.
This human sense of curiosity is present in the work of a talented operator,
and it transfers to the audience.
Richard Edlund, ASC currently serves on the Academy of Motion Picture Arts and Sciences Board of Governors and chairs the Academy's Scientific and Technical Awards Committee as well as the Academy's Visual Effects Branch.
He has been awarded Oscars® for visual effects in Star Wars, The Empire Strikes
Back, Raiders of the Lost Ark and Return of the Jedi.
Greenscreen and Bluescreen
Photography
by Bill Taylor, ASC and Petro Vlahos
Overview
Fix It in Post
Production and postproduction are not only separated in the budget, they are too often separated by a huge cultural gulf. The driving force of production is time, and in the name of speed it's very easy to take shortcuts that have a disastrous downstream impact. “Fixing it in post” can cost hundreds of thousands of dollars over the scope of a film and jam up an already tight post schedule. Furthermore, there is no “post fix” possible for some on-set mistakes. We describe here what have proven to be the best practices, materials and equipment available for this work.
For ease of reference, we'll deal with the most-requested topic first: setting the brightness and exposure of the backing.
Figure 1. Schematic H&D curve: the color negative response curve (density vs. log exposure), showing the toe (shadows compressed), the linear section, and the white point for the red, green and blue records (colors offset for clarity).
A common misconception is that backing brightness should be adjusted
to match the level of foreground illumination. In fact, the optimum back-
ing brightness depends only on the f-stop at which the scene is shot. Thus,
normally lit day scenes and low-key night scenes require the same backing
brightness if the appropriate f-stop is the same for both scenes. The goal is
to achieve the same blue or green density on the negative, or at the sensor, in
the backing area for every shot at any f-stop.
The ideal blue or green density is toward the upper end of the straight-line portion of the H&D curve, but not on the shoulder of this curve, where the values are compressed. Figure 1 shows an idealized H&D curve, a graph
which shows how the color negative responds to increasing exposure. In
film, each color record has a linear section, where density increases in direct
proportion to exposure, and a “toe” and a “shoulder” where shadows and
highlights respectively can still be distinguished but are compressed. Eight
stops of exposure range can comfortably fit on the H&D curve, a range only
recently achieved by digital cameras. The “white point” is shown for all three
records: the density of a fully exposed white shirt which still has detail.
Imagine a plume of black smoke shot against a white background. It’s a per-
fect white: the measured brightness is the same in red, green, and blue records.
The density of the smoke in the left-hand image ranges from dead black
to just a whisper. What exposure of that white backing will capture the full
range of transparencies of that smoke plume?
Obviously, it’s the best-compromise exposure that lands the white backing
at the white point toward the top of the straight-line portion of the H&D curve.
1. Place a white card in front of the screen and light it to the shooting stop (f4 in this example). The card is now lit to the brightest tone that still has detail (white shirt white), even though the actual set lighting may not reach that level.
2. View the white card against the screen through a Wratten No. 99 green
filter. (Use a Wratten No. 98 blue filter for a blue backing.) In a pinch, Pri-
mary Green or Primary Blue lighting gels, folded to several thicknesses,
will serve.
3. Adjust the backing brightness so that the white card blends into the back-
ing. The circle overlay in Figure 3 shows the view through the filter. When
the edges of the card are invisible or nearly invisible, the green light com-
ing from the screen is now the same brightness as the green light compo-
nent coming from the f4 white card. (If you were to photograph the white
card now, the red, blue and green components coming from the card
would reproduce near the top of the straight line portion of the curve.
Since the greenscreen matches the brightness of the green component
coming from the white card, the green layer will also be exposed near the
top of the straight line portion of the curve, without overexposure.) The
backing will now expose properly at f4.
If it is easier to adjust set lighting than backing brightness, the procedure
can be reversed. Adjust the white card’s light until the card blends in, then
take an incident reading. Light the set to that f-stop.
The white card procedure needs to be done only once for a given screen
and illuminator setup. For the rest of the shoot, a quick spotmeter reading
will confirm the brightness and exposure needed. Once the backing bright-
ness is set, the spot meter may be calibrated for use with the appropriate
color filter to read f-stops directly: Wratten No. 99 + 2B for green; Wratten
No. 98 (or 47B + 2B) for blue. (The UV filters—built into the No. 98—en-
sure that UV from the tubes does not affect the reading.) Simply adjust the meter's ISO speed setting until the reading from the screen yields the target f-stop (f4 in the example above). It's advisable to use the same individual meters (not just the same models) for the shoot as were used for testing.
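The calibration amounts to simple exposure arithmetic: since exposure varies with the square of the f-number, re-rating the meter's ISO by the squared ratio of target to measured stop makes the screen read the target stop directly. A minimal sketch, with illustrative numbers:

```python
def calibrated_iso(current_iso, measured_stop, target_stop):
    # Exposure goes as the square of the f-number, so scaling the meter's
    # ISO rating by (target/measured)^2 shifts its reading accordingly.
    return current_iso * (target_stop / measured_stop) ** 2

# A meter rated at ISO 200 reads f5.6 off a screen known to be correct
# for f4; re-rate it to about ISO 102 and it will read f4 directly.
print(calibrated_iso(200, measured_stop=5.6, target_stop=4.0))
```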
Just as in the smoke plume example, more exposure is counterproductive; it pinches out fine detail due to image spread and pushes the backing values into the nonlinear range of the film or video sensor. Slight underexposure is preferable to overexposure, but more than a stop of underexposure is also counterproductive; the software will have to make up matte density by boosting contrast or applying a levels adjustment.
Figure 4. Waveform display showing Green at 90%. Figure 5. Greenscreen with white strip in same light.
With digital cameras, the screen should be exposed to an IRE video level of about 90% in the appropriate channel, as shown on the waveform monitor.
The waveform display in Figure 4 shows the four channels of the image of a greenscreen displayed in “parade mode”: Y or Luminance (brightness), Red, Green and Blue. The display tells us a lot about the subject, which is a greenscreen filling the width of the frame and exposed to a level of 90% IRE. (Note that 100% IRE represents the full scale of brightness from black to white. IRE values are expressed as a relative percentage because a video signal can be any amplitude.) The traces are flat in all channels, showing that the screen was evenly illuminated across its width. Since this screen was lit with white light, there is a substantial level of Red and Blue contamination present, and the relative amount is shown clearly. (Had it been lit with Kino Green fluorescents, the Red and Blue values would be close to 0.)
The difference in level between Green and Blue (indicated by the white double-headed arrow in Figure 4) is the headroom that the software exploits to create the Alpha channel. The greater the difference, the less the software will need to boost the contrast of the Alpha channel. The waveform monitor makes it a simple matter to compare paints and materials.
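The sketch below shows, in much-simplified form, why that Green-minus-Blue headroom matters. It is not Ultimatte's actual algorithm, only an illustration of a basic color-difference matte on normalized RGB values:

```python
import numpy as np

def greenscreen_alpha(rgb, gain=1.0):
    """rgb: float array (H, W, 3), 0..1. Returns 1.0 over pure backing."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # Backing "density": how much green exceeds the other two channels.
    screen = np.clip(g - np.maximum(r, b), 0.0, 1.0)
    # Less headroom forces a higher gain, which also amplifies noise.
    return np.clip(screen * gain, 0.0, 1.0)
```

With Green at 90 IRE and Blue well below it, the gain can stay modest; when the difference shrinks, the gain, and the noise it amplifies, must rise.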
However, if the screen and the action must be lit together, a value of about 80 IRE on the waveform monitor is the maximum level at which the screen can be exposed without clipping the whites in the foreground. A strip of white card lit to the same level as the greenscreen, as in Figure 5, and the corresponding waveform display in Figure 6, show why. With Green at 80, the white card (indicated by the three white arrows in Figure 6) is at 90; more exposure would push the white into clip. In practice a slight underexposure is preferable to overexposure.
Too much underexposure is undesirable. In Figure 7, the Green level has
been set to IRE 45. There is now significantly less difference between Green
and Blue levels available to create the matte image in the Alpha channel.
Digital cameras like Arri Alexa and Sony F65 have even greater dynamic
range than film, with more latitude for exposure in the linear (nonclipping)
range. In these cameras the waveform monitor can be set to read the “log C”
Figure 6. Waveform display showing Green at 80%, White at 90%. Figure 7. Waveform display showing Green at 45%.
transform (preferable to Rec. 709) of the camera’s “Raw” or data mode. Lack-
ing a waveform monitor, it’s a safe bet to set exposure for these cameras as
you would a film negative.
The Composite
Once the Processed Foreground and the Alpha Channel are made, the
final composite is a straightforward two-step operation. First the values in
the Alpha Channel image are multiplied pixel-by-pixel with the background
image. The result is a silhouette of the foreground combined with the back-
ground image (see Figure 9b).
Then the processed foreground is added pixel-by-pixel to the combined
Alpha and background. If good practices have been followed, the final re-
sult is a seamless combination of foreground and background images (see
Figure 10b).
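In code, the two steps reduce to a multiply and an add. A minimal sketch, assuming the Alpha channel is white (1.0) in the backing area and the processed foreground has the backing suppressed to black:

```python
import numpy as np

def composite(processed_fg, alpha, background):
    """processed_fg, background: (H, W, 3) float images, 0..1.
    alpha: (H, W) float matte, 1.0 = backing, 0.0 = solid foreground."""
    # Step 1: multiply Alpha by the background; the result is the
    # background with a black, foreground-shaped silhouette.
    silhouette = alpha[..., None] * background
    # Step 2: add the processed foreground pixel-by-pixel.
    return np.clip(silhouette + processed_fg, 0.0, 1.0)
```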
Rotoscoping
“Rotoscoping” or “roto” is another method for making alpha-channel
composites. The alpha-channel masks are created by hand-tracing the out-
line of the foreground action frame-by-frame. This technique of tracing live
action (named by its inventor, the pioneer cartoonist Max Fleischer) is now
more refined and widely used than ever, because 3-D conversion depends on
it. It is extremely labor intensive but very flexible, since no special backing
or lighting is required.
Present-day computer-assisted rotoscoping can produce excellent com-
posites. Nearly all the composites of foreground actors in Flags of Our
Fathers (2006) were created by rotoscoping. Good planning was the key to
avoiding difficult foreground subjects. It helped a lot that many of the actors
wore helmets, simplifying their silhouettes.
Some foregrounds are poor candidates for rotoscoping. For example, if the foreground has a bright, burned-out sky behind it, the sky will flare into and even destroy the edge of the foreground. Flare can be very difficult to deal with (unless, of course, the final background is also a bright, burned-out sky). Fine detail and
blurred motion are also difficult or impossible to roto, often requiring paint-
work replacement. The skill of the artists is the single most important factor.
Rough roto masks (“G mattes”) are often used with green screen com-
positing, to clean up contaminated areas of the backing, remove unwanted
supports, and so forth.
Fabrics
Even an indifferent backing can give good results if it is lit evenly with narrow-band tubes or LEDs to the proper level (within plus or minus ⅓ f-stop). Spill from set lighting remains a concern.
Paint
Composite Components’ Digital Green© or Digital Blue© paint is the
preferred choice for large painted backings. As with fabrics, there are other
paint brands with similar names that may not have the same efficiency.
Equivalent paints made specifically for this purpose are also available from
Rosco. Paints intended for video use, such as Ultimatte chroma-key paints,
can also be used with good illuminators (lights). A test of a small swatch is
worthwhile with materials whose performance is unknown.
Plastic Materials
Plastic materials are a good alternative to fabric or paint for floor covering.
Fabric can be hazardous if loose underfoot. Painted floors scuff easily and
quickly show shoe marks and dusty footprints.
ProCyc's Pro Matte plastic material is a good choice for floors or for entire limbo sets. The material is a good match to Digital Green© and Digital Blue©.3
3. A noted researcher and pioneer in the field, Jonathan Erland of Composite Components Co. in Los Angeles, won an Academy Award for CCC's line of patented Digital Green© and Digital Blue© lamps, fabric and paint.
Figure 12a & 12b. Straight composite, and composite with clipped Alpha.
doorway, and so forth. The props all cast shadows on themselves and the
blue or green floor, and the actor casts shadows on everything. With lighting
alone it’s impossible to eliminate set piece shadows without washing out the
actor’s shadow.
Several software packages have features to cope with nonuniform back-
ings. Ultimatte Screen Correction software can compensate for backing
luminance variations as great as two stops.
Screen correction is easy to use: After lighting the set, shoot a few seconds
before the actors enter. This footage is called the “clean plate” or “reference
plate”. All the backing and lighting imperfections are recorded on those few
frames. Now shoot the actors as usual.
When making the composite, the artist selects a well-lit reference point
near the subject. Software derives a correction value by comparison with the
clean plate and corrects the rest of the backing to the measured level. Software
compares the clean frames pixel by pixel with the action frames, and inhibits
the correction process in the subject area (the actor) and proportionately
inhibits the correction in transparencies. Even though the set had a wide
variation in color and brightness, the fine hair detail, the actor’s shadows,
her reflections and the full range of transparencies in the shawl have been
retained in the Alpha (Figure 13a) and in the composite (Figure 13b).
Backing defects such as scuffed floors, set piece shadows, and color varia-
tions in the backing as well as minor lens vignetting all disappear. Note that
the actor’s shadows reproduce normally, even where they cross over shadows
already on the backing.
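A much-simplified sketch of the idea, not Ultimatte's actual implementation: the clean plate supplies a per-pixel gain toward the reference level, and correction is inhibited wherever the action frame departs from the clean plate, i.e., over the subject:

```python
import numpy as np

def screen_correct(frame, clean_plate, reference, falloff=4.0):
    """frame, clean_plate: (H, W, 3) float images of the backing, 0..1.
    reference: RGB level sampled from a well-lit point near the subject."""
    eps = 1e-6
    # Gain that would bring each clean-plate pixel to the reference level.
    gain = np.asarray(reference) / (clean_plate + eps)
    # Where the action frame differs from the clean plate, the subject is
    # present; inhibit correction there, proportionately for transparencies.
    difference = np.abs(frame - clean_plate).mean(axis=-1, keepdims=True)
    inhibit = np.clip(1.0 - difference * falloff, 0.0, 1.0)
    correction = 1.0 + (gain - 1.0) * inhibit
    return np.clip(frame * correction, 0.0, 1.0)
```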
There is a significant limitation: if the camera moves during the shot,
the identical camera move must be photographed on the empty set for the
length of the scene. While it is reasonably quick and simple to repeat pan-
tilt-focus camera moves with small, portable motion control equipment, that
equipment is not always available. Fortunately, top camera operators have an almost uncanny skill at repeating previous moves. Skilled matchmovers
can bring a “wild” clean pass into useful conformance around the actor, and
remove discrepancies with rotoscoping. Some matchmovers prefer the clean
footage to be shot at a slower tempo to improve the chances that more wild
frames will closely match the takes with the actors.
When it’s not possible to shoot a clean plate, Ultimatte AdvantEdge soft-
ware can semiautomatically generate synthetic clean frames. The software
can detect the edges of the foreground image, interpolate screen values in-
ward to cover the foreground, and then create an alpha using that synthetic
clean frame. There are some limitations; it’s always best to shoot a clean plate
if possible.
Illuminators
The best screen illuminators are banks of narrow-band green or blue fluorescent tubes driven by high-frequency flickerless electronic ballasts.4
These tubes can be filmed at any camera speed. The tube phosphors are
formulated to produce sharply cut wavelengths that will expose only the
desired negative layer while not exposing the other two layers to a harm-
ful degree. These nearly perfect sources allow the use of the lowest possible
matte contrast (gamma) for best results in reproducing smoke, transparen-
cies, blowing hair, reflections, and so forth.
Kino Flo four-tube and eight-tube units are the most widely used lamps.
They are available for rent with “Super Green” or “Super Blue” tubes from
Kino Flo in Sun Valley, CA, and lighting suppliers worldwide. The originators
4. Flickerless electronic ballasts prevent the light from being unevenly exposed on film at speeds that are faster or slower than 24 frames per second. If one does not use them and shoots at any speed other than 24 frames per second, the image will appear to flicker. At a 180° shutter, other frame rates that divide evenly into 120 will be flicker-free with standard ballasts, assuming 60 Hz AC power. For 50 Hz, divide the fps into 100. For frame rates that do not divide evenly, other shutter angles that eliminate lighting flicker can be confirmed by testing.
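The footnote's rule is easy to verify in a few lines. Under 60 Hz power the light pulses 120 times per second, so at a 180° shutter any frame rate that divides 120 evenly samples the same portion of each pulse:

```python
def flicker_free(fps, mains_hz=60):
    # Discharge and fluorescent sources pulse at twice the line frequency.
    pulses_per_second = 2 * mains_hz
    return pulses_per_second % fps == 0

print([fps for fps in range(1, 121) if flicker_free(fps)])
# 60 Hz: 1, 2, 3, 4, 5, 6, 8, 10, 12, 15, 20, 24, 30, 40, 60, 120
print([fps for fps in range(1, 101) if flicker_free(fps, mains_hz=50)])
# 50 Hz: 1, 2, 4, 5, 10, 20, 25, 50, 100
```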
Figure 14a & b. Side and top views (not to scale) of a greenscreen lit with six light banks: upper and lower green light banks are spaced approximately half the screen height from the screen, with the camera and actor facing the screen.
results. The Rosco CalColor line of primaries works best. Balance to the
desired brightness in the screen color as described below. The downside is
great loss of efficiency; it takes about four filtered daylight tubes to equal the
output from one special-purpose tube.
Regular 60Hz ballasts can be used with commercial tubes at the cost of
weight and power efficiency. As with any 60Hz fluorescent lamps, 24 fps
filming must be speed-locked (nonlocked cameras are fortunately rare) to
avoid pulsating brightness changes, and any high-speed work must be at
crystal-controlled multiples of 30 fps. These tubes are somewhat forgiving of
off-speed filming because of the “lag” of the phosphors.
Backings can also be front-lit with Primary Green or Primary Blue-filtered
HMI lamps. The only advantage is that the equipment is usually already
“on the truck” when a shot must be improvised. Getting even illumination
over a large area is time-consuming, and filters must be carefully watched
for fading. Heat Shield filter material is helpful. Because of high levels of
the two unwanted colors, HMI is not an ideal source, but it is better than
incandescent.
In an emergency, filtered incandescent lamps can do the job. They are an
inefficient source of green light and much worse for blue (less than 10% the
output of fluorescents) so they are a poor choice for lighting large screens.
Watch for filter fading as above.
A green or blue surface illuminated with white light is the most challenging, least desirable backing from a compositing standpoint. White light, however, is required for floor shots and virtual sets when the full figure of
the actor and his shadow must appear to merge in the background scene.
Advanced software can get good results from white-lit backings with the
aid of Screen Correction and a “clean plate” as described above. Difficult
subjects may require assistance with hand paintwork.
Eye Protection
A word about eye protection: Many high-output tubes produce enough
ultraviolet light to be uncomfortable and even damaging to the eyes. Crew
members should not work around lit banks of these fixtures without UV
eye protection. It is good practice to turn the tubes off when they are not in
use. The past practice of using commercial blueprint tubes was dangerous
because of their sunburn-level UV output.
A blue backing is satisfactory for most colors except saturated blue. Pastel
blues (blue eyes, faded blue jeans, etc.) reproduce well. The color threshold
can be adjusted to allow some colors containing more blue than green (such
as magenta/purple) into the foreground. If too much blue is allowed back
into the foreground, some of the blue bounce light will return. Therefore,
if magenta wardrobe must be reproduced, it is prudent to take extra care
to avoid blue bounce and flare. Keep the actors away from the backing, and
mask off as much of the backing as possible with neutral flats or curtains.
Saturated yellows on the subject’s edge may produce a dark outline that re-
quires an additional post step to eliminate. Pastel yellows cause no problems.
A green backing is satisfactory for most colors except saturated green.
Pastel greens are acceptable. Saturated yellow will turn red in the composite
unless green is allowed back into the subject, along with some of the green
bounce or flare from the original photography. The same precautions as
above should be taken to minimize bounce and flare. Pastel yellow is accept-
able. Figure 15 shows a test of green car paint swatches against green screen.
The hue and saturation of the “hero” swatch was sufficiently distinct from
the screen color to pose no difficulties in matting or reproduction. Bounce
light from the screen was carefully flagged off in the actual shot. Note that
none of the colors in the MacBeth chart are affected except for two saturated
green patches, which have become semi-transparent.
High bounce levels are unavoidable where the actor is surrounded by a
green floor or virtual set: one should not expect to reproduce saturated ma-
genta or saturated yellow on a green floor without assistance in post.
If the foreground subject contains neither saturated green nor saturated
blue, then either backing color may be used. However, the grain noise of
the green emulsion layer on color negative and the green sensor in a digital
camera is generally much lower than the grain noise of the blue layer. Using
a green backing will therefore result in less noise in shadows and in semi-
transparent subjects. Black smoke in particular reproduces better against a
green backing.
Obviously, it is important for the cinematographer and his ally, the VFX supervisor, to be aware of wardrobe and props to be used in greenscreen and bluescreen scenes. Sometimes a difficult color can be slightly changed
without losing visual impact, and save much trouble and expense in post. If
in doubt, a test is always worthwhile. Video Ultimatte Preview (see below)
can be invaluable.
Some visual effects experts prefer blue backings for scenes with Caucasian
and Asian actors, finding it somewhat easier to achieve a pleasing flesh tone
without allowing the backing color into the foreground. Those skin tones
reflect mostly red and green and relatively little blue. For dark-skinned ac-
tors, either backing color seems to work equally well.
Back-lit Backings
Backings can be backlit (translucent) or front-lit. Big translucent backings
are almost extinct due to their high cost, limited size and relative fragility.
Translucent Stewart blue backings gave nearly ideal results and required
no foreground stage space for lighting. Due to lack of demand, Stewart has
never made translucent greenscreens. Front-lit backings are more suscep-
tible to spill light, but with careful flagging they can produce a result every
bit as good as back-lit screens.
Front-lit Backings
If the actor's feet and/or shadow do not enter the background scene, then a simple vertical green or blue surface is all that is needed. The screen can be either a painted surface or colored fabric. Any smooth surface that can be
painted, including flats, a canvas backing, and so forth, can be used. Fabrics
are easy to hang, tie to frames, spread over stunt air bags, and so on. Please
see the section on Illuminators in this chapter for spacing and positioning
of lamps.
both front light and back light. In that instance the more reflective Digital
Green© material is the best compromise, with exposure controlled carefully
to avoid overexposure.
The choice of fabric and paint affects not only the quality of the composite,
but also lighting cost. Some screen materials are much more efficient than
others, and require many fewer lamps to light to the correct level. In general,
green screens and tubes are more efficient than blue screens and tubes. Sav-
ings on lamp rentals can amount to tens of thousands of dollars per week on
large backings.
Underwater Photography
In addition to underwater diving or swimming shots, underwater green-
screen photography creates a zero-G environment for actors with an all-axis
freedom of motion impossible on wire rigs.
The biggest challenge is keeping the water clear of sediment and particulates. Underwater diffusion causes the screen to flare into the foreground and vice versa; it's ruinous to the matte edges. High-capacity pumps, good water circulation and a multistage filter are necessary to keep the water clear. It's also important that all personnel have clean feet when they enter the tank.
Composite Components’ green material stretched on a frame works well
under water in a swimming pool or a tank. Tip the screen back to catch light
from above, with Rosco diffusion material floating on the water surface to kill
caustic patterns on the screen. Build up the screen lighting level with green
fluorescent units above the water. Underwater Kino Flo lamps are also available.
High chlorine levels common in swimming pools bleach out the screen
quickly; pull the screen out of the tank daily and rinse it off with tap water.
Local Color
Of course, skylight is intensely blue, so fill light supposedly coming from
the sky should be blue relative to the key. Likewise, if actors and buildings in
the background are standing on grass, much green light is reflected upward
into their shadows. If the actor matted into the shot does not have a similar
greenish fill, he will not look like he belongs in the shot. Careful observa-
tion is the key. In a greenscreen shot, the bounce light from grass is low in
both brightness and saturation compared to the screen color, so that color
cast can be allowed in the composite foreground while still suppressing the
screen. The same is true of sky bounce in a bluescreen shot.
A day exterior shot will often shoot in the f5.6 to f11 range or even deeper.
Fortunately, efficient lighting and high ASA ratings on films and sensors per-
mit matching these deep f-stops on the stage. In a day car shot, for example,
holding focus in depth from the front to the rear of the car contributes to
the illusion.
Figure 17 shows a 28'-wide screen lit with 16 four-tube Kino Flo lamps, plus
two HMI “helper” lamps with green filters on the sides. This combination made
it possible to film at f11 with 200 ASA Vision 2 negative. Curtains at left, right,
and top made it easy to mask off portions of the screen outside the frame.
Of course, when it’s possible to film foregrounds like this one in daylight,
so much the better.
Figure 20. Bruce Almighty water tank composite: silk diffusion overhead, graduated teasers, upper and lower blue light banks, and a bluescreen behind the weir and spillway. (Image courtesy of Universal Studios.)
If the matte is adjusted until the contamination disappears, all of the transparent foreground pixels of the same color disappear with it. Screen Correction is invaluable in extracting the maximum detail from smoke and spray shot against white-lit backings.
If the foreground must be flat-lit to simulate overcast, a good approach is
to bring most of the light in from overhead through a large, translucent silk.
On stage, much of the overhead soft light may be kept off the backing with a
series of horizontal black teasers hung directly beneath the silk, running its
entire width parallel to the backing. The teasers are progressively longer top
to bottom as they get near the backing, preventing the backing from “seeing”
the silk (see Figure 18 above).
Tracking Markers
When the foreground camera moves, the background must move ap-
propriately. Unless foreground and/or background can be photographed
Figure 22. Three frames from “Bruce Almighty” Steadicam shot
with a motion-control camera, tracking data must be extracted from the
foreground image and applied to the background during compositing. This
process is called Matchmoving.
Tracking marks applied to the otherwise featureless screen give the
matchmovers fixed points to track. These marks must obviously show in
the photographed scene, but ideally they should clear the foreground actors,
or at least avoid their heads, since they must be removed in the composite.
Marks are typically laid out in a rectangular pattern, with about 3' to 5' be-
tween them—depending on the lens used, the action and the distance to the
backing. Black or white tape crosses will usually suffice, though uniquely
identifiable markers are very helpful if there is much tracking to do.
Figure 22 shows the continuation of the elaborate Steadicam shot that begins in Figures 8 through 10 earlier in this chapter, which includes a pan of about 140 degrees. The black tape marks on the screen provided the data to
track in the panoramic background (Figure 23), which was seamed together
from three static BeauCam VistaVision plates shot from a tugboat with a
gyro-stabilized head. In the process the American Falls and the Canadian
Falls were moved closer together.
If camera shake or other sudden motion is required in the foreground
photography, motion blur can obliterate the tracking marks. The Aerocrane
Strobe Tracking System created by Alvah Miller provides target arrays of
LED lamps which strobe in sync with the camera shutter, giving well-defined
marks on every frame even if they are not in focus. Cylindrical LEDs have
uniform brightness even when viewed off axis.
Sometimes it is desirable to light the tracking LEDs continuously, allowing
them to blur in motion. Valuable tracking information can be derived from
the length of the blur. Consult the tracking team for their preference.
On-set Preview
On-set preview composites made with a still camera and calibrated monitor,
like the Kodak/Panavision Preview System, or a live composite made with a
hardware Ultimatte device will alert the crew to problems before they are com-
mitted to film. A few video assist companies provide this specialized service.
Using the digital Ultimatte previewer (hardware device or software on a
computer) on the motion picture set eliminates much guesswork and uncertainty.
TV Monitors
It’s often necessary to matte TV images into monitors when the desired on-
screen material is not available before shooting. When the monitor is live, the
best approach overall is to feed it a pure green signal and adjust its brightness
to match the shooting stop. With this approach the room reflections in the
monitor surface can carry over believably into the composite, the camera
can operate freely, and actors can cross over the screen without difficulties
in post. Watch for reflections of the screen on actors. Where the monitor is
just a prop, it’s often possible to rig a backlit green fabric screen in or behind
the monitor. If set lighting permits the monitor to be front-lit to a sufficiently
high level, the monitor can be painted green or covered with green fabric
behind the glass face plate. The edges of the monitor usually provide all the
tracking data needed in operated shots; no on-screen markers required.
Foreground and background film stocks do not have to match, but of course it's helpful if they have similar grain and color characteristics.
Kodak Vision 2 100T and 200T (tungsten balance) films are ideal for green and blue backing work. The dye clouds are very tight and well defined. Vision 3 500T, the latest in a series of remarkably fine-grain high-speed films, is, as one would expect, still grainier than the lower-speed films. While the 500T film is not ideal, a well-exposed 500T negative is much better than a marginally exposed 200T negative!
An interlayer effect in these films produces a dark line around bright fore-
ground objects (such as white shirts) when they are photographed against a
green screen. Software can deal with this effect.
Kodak Vision 2 50-speed daylight film produces superb results in sunlight, with very low shadow noise, but requires high light levels on stage.
If these 100T and 200T films cannot be used for aesthetic reasons, one
should still pick the finest grain emulsion compatible with lighting re-
quirements. Be aware that additional image processing (and cost) may be
required. A few negative emulsions have so much cross-sensitivity between
the color layers that they should not be used.
Film emulsions are constantly evolving. As an example, recent improve-
ments in red sensitivity in some emulsions have been accompanied by more
sensitivity to infra-red reflected from costumes, altering their color notice-
ably. This effect is easily dealt with by filtration—if you know it’s there. A
quick test of actors and costumes is always worthwhile.
Spatial Resolution
Spatial resolution is broadly related to the number of photosites (light-
sensitive elements) available for each color. In single-chip cameras, the
green, red, and blue photosites are on a single plane in a mosaic geometry.
Depending on the camera, groups of four to six adjacent photosites are
sampled and interpolated to create each full-color pixel. (The variation in
sampling methods is the reason that there is not necessarily a relationship
between pixel count and actual resolution of a given camera.) Most digital
cameras use a mosaic called a Bayer Array on which there are half as many
blue photosites as there are green photosites. Likewise there are half as many
red photosites as green photosites. The “missing” values are derived through
interpolation from adjacent pixels in the “de-Bayering” operation. Since
human visual acuity is greatest in the green wavelengths, Bayer’s array gives
excellent visual results from an optimally small number of photosites.
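A toy version of the green-channel interpolation, assuming an RGGB pattern: every photosite that is not green borrows the average of its four green neighbors. Real de-Bayering is far more sophisticated (edge-aware and multi-channel), but the photosite accounting is the point here:

```python
import numpy as np

def interpolate_green(mosaic):
    """mosaic: (H, W) raw photosite values laid out in an RGGB pattern."""
    h, w = mosaic.shape
    green_mask = np.zeros((h, w), dtype=bool)
    green_mask[0::2, 1::2] = True   # green sites on red rows
    green_mask[1::2, 0::2] = True   # green sites on blue rows
    green = np.where(green_mask, mosaic, 0.0)
    # Each non-green site has four orthogonal green neighbors.
    padded = np.pad(green, 1)
    neighbor_avg = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                    padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
    return np.where(green_mask, mosaic, neighbor_avg)
```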
Even in the best high-resolution Bayer arrays the blue and red image is
still half the resolution of the green image, which limits the resolution and
fine detail of the mask image.5 To address this and other image quality issues,
a few high-end single-sensor cameras (Panavision’s Genesis, Sony F35) have
a 35mm-film-sized sensor with full resolution in all three colors. (Although
the sensors in the two cameras are nearly identical, at this writing the F35
has the edge in dynamic range.)
In three-chip cameras like Sony F23, the color image is split into green, red,
and blue images by a beam-splitter behind the lens. Each component color is
imaged on one of three full-resolution chips, so there is no resolution loss in the
red and blue channels, and no need to interpolate color values. The F23 uses
2⁄3" HD resolution sensors, smaller than 35mm film, which results in greater
depth of field (similar to that of 16mm), which some filmmakers love and oth-
ers find undesirable. F23’s native output is a 4:4:4 pixel-for-pixel uncompressed
image, which when correctly processed yields first-class composites.
“4:4:4” does not refer directly to RGB bandwidth, but rather to “YUV”. The Y channel carries the luma or brightness information, while U and V are the channels from which the color information is derived (similar to Lab color space in Photoshop). In a 4:4:4 recording, every channel is recorded at the full color depth. (“4:4:4” is actually a misnomer, carried over from standard-definition D1 digital video. Because it's well understood to mean full bandwidth in all three channels, its use has continued into the high-definition-and-higher digital cinema world.)
Arri Alexa, with a 35mm-film-sized Bayer Array sensor, on paper isn’t the
winner of the Pixel Count contest. Nevertheless Alexa has produced some
of the best composite results to date, thanks to dynamic range at least the
equal of present-day film negative and extremely high quality on-board image
processing.
RED Epic and Sony F65, examples of a new generation of 4K-and-higher
Bayer Array cameras, have produced state-of-the-art composite work. F65’s
huge dynamic range is particularly useful. At 4K and above, detail loss due to
de-Bayering is less of a factor. Any untried camera should of course be tested
with the actual subject matter.
5. It should be noted that film builders use a roughly equivalent compromise: blue- and red-sensitive negative layers have more grain and less resolution than the green layer.
Recording
Recording in “data mode” gives maximum flexibility and best quality in
post. “Data mode” records the uncompressed data (as directly off the cam-
era sensor as the camera’s design allows) to a hard disk. This is often called
“Raw” mode, but beware: at least one camera’s (RED) “Raw” mode is in fact
compressed. Since Raw data cannot be viewed directly, a separate viewing
conversion path is required to feed on-set monitors.
If recording in data mode is not possible, shoot material intended for
postcompositing as uncompressed 4:4:4 full-bandwidth HD (or better)
video onto a hard drive or a full-bandwidth VCR, such as Sony’s 4:4:4 SR
format machines. While Arri Raw is the preferred output from Alexa cam-
eras, Alexa can also record in Apple ProRes 4444, a remarkably high quality
compressed format that has produced good composite results.
To sum up, resolution numbers are not the whole story, since some cam-
eras trade off resolution for color depth. Test your available camera and
recorder choices.
An Imperfect World
You may have no choice but to shoot or record with 4:2:2 equipment.
While 4:2:2 is not ideal, don’t forget that the last two Star Wars films were
shot with 2⁄3" 4:2:2 cameras, cropping a 2.40 slice from the center, including
thousands of green screen composites. Test the camera on the subject mat-
ter. 4:2:2 can produce a satisfactory result in greenscreen (since the green
channel has the highest resolution in these cameras), but one should not
expect the ultimate in fine edge detail. (Consumer cameras typically record
4:1:1, and are not recommended for pro visual effects use.)
Whatever the camera, it can’t be overemphasized that any edge enhance-
ment or sharpening should be turned off. The artificial edges that sharpening
produces will otherwise carry into the composite and cannot be removed. If
sharpening is needed, it can be added during compositing.
Filtration
In general no color or diffusion filters other than color-temperature cor-
rection should be used on the camera when shooting green or blue screen
work. Compositing can be called “the struggle to hold edge detail”; obvi-
ously low-con, soft effects or diffusion filtering that affects the edge or allows
screen illumination to leak into the foreground will have an adverse effect.
For that reason, smoke in the atmosphere is not recommended; it can be
simulated convincingly in the composite.
To ensure that the filter effect you desire will be duplicated in the compos-
ite, shoot a short burst of the subject with the chosen filter, making sure it is
slated as “Filter Effect Reference”.
Color Correction
Color correction at the scanning/conversion stage can be a major source
of data loss. It should not be built into image files intended for compositing.
On the other hand, a few frames recorded with the desired color and filtra-
tion will be invaluable reference in the composite step.
Ultimatte
Ultimatte and Ultimatte AdvantEdge are still the tools of choice for difficult shots. AdvantEdge borrows from Ultimatte's knockout concept by
processing the edge transitions separately from the core of the foreground
image, blending them seamlessly into the background without loss of detail.
The deep and rich user controls require an experienced operator to get the most from the software. The interface works as a “black box” within the compositing package, which can complicate workflow.
Primatte
Primatte was originally developed at Imagica Japan by Kaz Mishima. The
unique polyhedral color analysis allows fine-tuned color selections between
foreground and background. The user interface is intuitive and uncompli-
cated while offering many options.
Petro Vlahos is the inventor of the film Color Difference Travelling Matte System,
and both video and film Ultimatte compositing. He has won three Oscars® for his
work.
Model Size
Water, fire and exploding models should be as large as the budget and safety allow, even half size if possible, and shot high-speed. Intense wind can help break up out-of-scale water droplets and, in some cases, fire. Exploding models should be prebroken, reassembled and exploded with slow-moving, low-powered and colorful pyrotechnics, preferably with two or more blasts. Other types of models can be built just big enough to be adequately detailed and still carry depth of field.
Miniature explosions and fire can be dangerous because the camera may
need to be in close proximity to the miniature. Plan accordingly.
Shooting Speeds
If there is no motion on the miniature, it can be photographed at any
speed. Water, fire, explosions and falling effects are usually done with large
models and camera speeds of up to 360 fps. The exact speed depends upon
the scale of the model and the effect desired. The chart on page 896 is a
starting point, but for the best results, tests should be made.
High-speed shots can often be expensive and unpredictable events because
of the uncertainty of required camera speeds, pyrotechnics, winds, mechani-
cal equipment, human error and the need to sequence events in much faster
succession than they will be viewed. It is not unusual to shoot miniature ex-
plosions at 300 fps to achieve a huge scale. One second of shooting time will
be seen as 12.5 seconds of screen time (300⁄24). If the pyrotechnician wants to
sequence many explosions, he may ignite the charges using a computer and be
able to specify each timing by the millisecond. Movements of the camera and
any objects in the shot will need to move 12.5 times faster than in real time.
This can make for some very fast-moving riggings. Be sure to work closely
with the rigging crew so that they will understand your problems (reach-
ing speed, operating, event timings, aborting, etc.). Achieving an adequate
level of good-looking lighting can be very difficult if shooting high-speed at
a small f-stop. If using HMIs, make sure that there will be no flicker at the
filming speeds. Scenes which are supposed to take place outdoors should be
shot outdoors if weather permits.
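The arithmetic above generalizes easily. It is also common practice, though the manual's chart on page 896 should govern and tests are always advised, to start from the square-root rule: a 1/n-scale miniature is shot at roughly 24 × √n fps so that gravity-driven action reads full size. A sketch, with that rule flagged as an assumption:

```python
import math

def screen_time_multiplier(camera_fps, projection_fps=24):
    # One second of shooting at 300 fps plays back as 12.5 seconds.
    return camera_fps / projection_fps

def miniature_fps(scale_denominator, projection_fps=24):
    # Rule-of-thumb starting point, not taken from this manual's chart:
    # shoot a 1/n-scale model at 24 * sqrt(n) fps.
    return projection_fps * math.sqrt(scale_denominator)

print(screen_time_multiplier(300))   # 12.5
print(miniature_fps(16))             # 1/16 scale -> 96 fps
```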
With stop motion, shooting is accomplished one frame at a time, with the object being slightly moved by hand between each frame. Exposures of one-quarter second or more per frame allow for great depth of field at low light levels. Stop-motion photography is used to give a freedom of movement and expression to an object or figure.
Motion-control photography is used when the camera or an object or figure
is moved by computer-controlled motors at very slow speeds. Long exposure
times per frame allow for very small f-stops. The computer can repeat the
movements of the motors, which allows for multiple exposures. Any facet of a
shot can be isolated and wedged for intensity, color, filtration and atmosphere.
The image can be built up through multiple exposures made from the chosen
wedge frames, while the computer repeats the same motions each time.
Go-motion shooting is used when shooting animal or creature models.
The major body parts are attached to rods that are moved by computer-con-
trolled motors. Detail movements are animated by hand each frame. Single-
frame shooting allows for small f-stops at long exposure times. Coverage at
various angles and camera speeds is especially useful to help cushion the
risks on high-speed shots.
Dennis Muren, ASC is the senior visual-effects supervisor at Industrial Light &
Magic. Recipient of eight Academy Awards for Best Achievement in Visual Effects,
Muren is actively involved in the design and development of new techniques and
equipment.
In-Camera Compositing
of Miniatures with Full-Scale
Live-Action Actors
by Dan Curry, ASC Associate Member
h = actual height above ground; sh = equivalent scale height
| 369
Mixed-Scale Compositing
It is impossible to predict every situation that may arise, but the following
examples may provide useful guidelines:
Example 1: Actors approach a distant city or structure.
◗ If the actual height of the lens is 10', the ground level on the miniature
must be set below the lens an equal distance in scale. If the scale of the
miniature is ½" = 1', then ground level on the miniature should be 5"
below the lens.
◗ Depth of field must be determined (use the chart elsewhere in this manual) to carry focus to include the miniature. If the nearest point on the model is 4' from the camera and an 18mm lens is used (on a 35mm camera), an f-stop of 5.6 with focus set at 6½' will carry focus from 3'6" to infinity (see the sketch below).
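A minimal sketch of the standard hyperfocal and depth-of-field formulas behind that example. The circle of confusion is an assumption (0.025mm is one common 35mm criterion; the manual's charts use their own), so expect small differences from published tables:

```python
import math

def hyperfocal_mm(focal_mm, f_stop, coc_mm=0.025):
    return focal_mm ** 2 / (f_stop * coc_mm) + focal_mm

def dof_limits_mm(focus_mm, focal_mm, f_stop, coc_mm=0.025):
    h = hyperfocal_mm(focal_mm, f_stop, coc_mm)
    d = focus_mm - focal_mm
    near = h * focus_mm / (h + d)
    far = h * focus_mm / (h - d) if d < h else math.inf
    return near, far

# 18mm lens at f5.6 focused at 6.5 feet (about 1981mm):
print(dof_limits_mm(1981, 18, 5.6))
# -> (~1076mm, ~12511mm): near is about 3'6"; a slightly larger
# circle of confusion carries the far limit to infinity.
```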
Example 2: Using a miniature as a foreground cutting piece.
Example 3: Perspective from a tall structure.
Figure 3. Floor level of the miniature should be the same elevation it would have if full scale. E.g., if the set was full sized and the floor was supposed to be 25' above ground level, the floor of the miniature should also be 25' in elevation to keep the people on the ground in scale with the miniature. The lens should be the correct scale height above the model's floor.
Forced Perspective
Situations may arise where forced perspective or constantly changing
scales may provide the best solution to production needs. If this is the case,
close coordination between director, director of photography, visual effects
supervisor, production designer, and a highly skilled model maker is re-
quired, as there is no room for error.
Mixed Scales
To gain a greater illusion of distance, elements within a miniature scene
can be built, working from larger scale in the foreground to smaller scale the
farther away objects are intended to appear.
Example:
◗ A miniature elevated train, scale ¼" = 1', is to pass through frame as part
of a live-action scene.
◗ There are 4 quarter-inch units per inch, so 4 × 12 = 48 quarter-inch units per foot; the fractional equivalent of this scale is 1⁄48.
◗ (Scale expressed as a fraction) × (depicted speed) = scale speed.
◗ If the speed depicted is 30 mph, then 1⁄48 × 30 mph = .625 mph. The speed of the miniature must be 1⁄48 of the desired speed, or .625 mph.
◗ It is more practical to calculate in feet per second: (5280'/3600 sec) × .625 = .92 ft/sec.
◗ In a practical situation, mark off as many .92 ft increments as is convenient, and adjust the speed of the miniature train to match. If you mark off four .92 ft increments, it should take 4 seconds for your train to cover the distance.

Figure 4 (not to scale). The actual elevation of the lens below the level of the top of the set wall should be the same as the distance in scale below the level of the equivalent part (bottom) of the miniature. The actual distance to the back wall of the set should equal the scale distance to the back of the miniature. (le = actual lens elevation.)
(Please refer to “Calculating Camera Speed” on pages 365, 682-683 and 896.)
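The same arithmetic in a few lines, for any scale and depicted speed (names are illustrative):

```python
def miniature_speed_ftps(scale_fraction, depicted_mph):
    scale_mph = scale_fraction * depicted_mph   # e.g., 1/48 * 30 = 0.625 mph
    return scale_mph * 5280.0 / 3600.0          # mph -> feet per second

print(miniature_speed_ftps(1.0 / 48.0, 30.0))   # ~0.92 ft/sec
```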
Color Temperature
Color temperature describes the “true” temperature of a “black-body
radiator” and thereby completely defines the spectral energy distribution
(SED) of the object. When the object becomes luminous and radiates energy
in the visible portion of the spectrum, it is said to be incandescent. Simply
stated, this means that when an object is heated to an appropriate tempera-
ture, some of its radiated energy is visible.
The color temperature is usually described in terms of degrees Kelvin (°K).
The first visible color when an object is heated is usually described as “dull
cherry red”. As the temperature is increased, it visually becomes “orange,”
then “yellow,” and finally “white” hot.
One of the most important features of incandescent radiators is that they
have a continuous spectrum. This means that energy is being radiated at
all the wavelengths in its spectrum. The term “color temperature” can only
be properly applied to radiating sources that can meet this requirement.
When the term “color temperature” is applied to fluorescent lamps (or other
sources that do not meet the criteria for incandescence), it really refers to
“correlated color temperature.”
Mired Value = 1,000,000 / Color Temperature (degrees Kelvin) = 10⁶ / °K
As a convenience, refer to page 835 to determine the MIRED values for
color temperatures between 2000°K and 10,000°K in 100-degree steps.
Filters which change the effective color temperature of a source by a definite amount can be characterized by a “MIRED shift value.” This value is computed as the MIRED value of the resulting color temperature minus the MIRED value of the original color temperature.
MIRED shift values can be positive (yellowish or minus blue filters) or
negative (blue or minus red/green filters). The same filter (representing a
single MIRED shift value) applied on light sources with different color tem-
peratures will produce significantly different color-temperature shifts. Oc-
casionally, the term “decamireds” will be used to describe color temperature
and filter effects. Decamireds are simply MIREDs divided by 10.
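A quick numerical check of the MIRED arithmetic above, which also shows why a single filter shifts different sources by different Kelvin amounts:

```python
def mired(kelvin):
    return 1_000_000 / kelvin

def filtered_kelvin(kelvin, mired_shift):
    return 1_000_000 / (mired(kelvin) + mired_shift)

# Shift needed to raise 3200K tungsten to 5500K daylight:
print(mired(5500) - mired(3200))     # about -131 MIREDs (a blue filter)
# The same -131 MIRED filter on two different sources:
print(filtered_kelvin(3200, -131))   # ~5510K
print(filtered_kelvin(4300, -131))   # ~9846K -- a much larger Kelvin shift
```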
HMI lamps have a CRI of 90 to 93, referred to the D55 standard illuminant
(D55 is the artificial match to standard daylight of 5500°K).
Natural Daylight
Daylight conditions are highly varied from a photographic viewpoint and are based on the local atmospheric conditions, location on the earth, altitude, time of year, hour of the day and the amount of atmospheric pollutants that may be present. A brief summary of some of the possibilities is presented in the Correlated Color Temperature chart on page 821.
Least Diffuse – In clear cloudless sunlight, the sun as the main lighting
source (key) is truly a point. This produces the hardest, most distinct shad-
ows. The incident light level from the sun on such a day can be as much
as 10,000 footcandles. The skylight contribution (fill) is about 1,500 foot-
candles. This produces a lighting ratio of about 7:1 (key to fill). Lighting
control in these situations may require booster lighting or the use of certain
grip devices such as large overhead scrims.
Most Diffuse – A completely overcast day is essentially shadowless light-
ing. The entire sky, horizon to horizon, becomes the light source. The inci-
dent level may be as low as 200 footcandles.
Incandescent Lamp Operational Characteristics
Filters for Incandescent Lamps
These filters are placed in front of incandescent sources to change the
color temperature to an approximation of daylight. The filters may be plastic
film types, or dichroic filters. The dichroics are usually only utilized as filters
the world until the advent of the tungsten-halogen lamp. Little used in the
United States now, it is still in wide use in other parts of the world and offers
some interesting advantages. There are many situations in which this system
may be both cost-effective and functionally desirable.
Typically, when 120-volt lamps are operated at 165 volts, the color temper-
ature should be approximately 3100–3200°K. It is possible to continue the
boosting operation, and some lamp types will actually yield 3300–3400°K
when operated at approximately 185 volts. Due to the low pressure in the
standard incandescent, long-life lamps, this is a safe type of operation.
A further advantage of this system is that the standard incandescent types
utilized in it tend to be much less expensive than the photographic lamp
types rated at 3200°K at the operating voltage. Further, the expected life of
many of these lamps at 3200°K operation is directly comparable to the life
that can be expected from 3200°K-type photographic lamps operated at
their rated voltages.
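The voltage figures quoted above are consistent with a common lamp-engineering approximation, not stated in this manual, that tungsten color temperature varies roughly as the 0.42 power of applied voltage. A hedged sketch:

```python
def boosted_color_temp(rated_kelvin, rated_volts, applied_volts):
    # Rule-of-thumb approximation; verify with a color-temperature meter.
    return rated_kelvin * (applied_volts / rated_volts) ** 0.42

# A long-life 120-volt lamp of roughly 2800K (an assumed rating):
print(boosted_color_temp(2800, 120, 165))   # ~3200K
print(boosted_color_temp(2800, 120, 185))   # ~3360K
```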
Fluorescent Lighting
There is now a considerable selection of professional lighting fixtures, some
of which are very portable and compact, that utilize a range of fluorescent
lamps which closely approximate 3200°K and 5500°K lighting. These utilize
high frequency (25,000Hz) electronic ballasts which completely eliminate
any concerns regarding possible “flicker” problems from operation at usual
line frequencies. This system was perfected by Kino Flo. The fluorescent
tubes used in these systems have a typical life of 10,000 hours.
Noncolor-correct fluorescents may be corrected by using a series of either
minus green (magenta) or plus green filters on the lamps or camera. Some
color-correct fluorescent tubes still may have some green spikes in their out-
put when they get too hot. This can be easily taken care of with these filters.
(See charts pages 832 and 839.)
HMI™ Lamps
The most widely used of the new types of photographic enclosed AC dis-
charge lamps are known as HMIs. They are made in wattages ranging from
125–18,000. The chart on page 836 illustrates the various versions of this
light source.
These are considered medium-length arc types and are fundamentally
mercury arcs with rare earth halide additives. The color-temperature range
is normally quoted as being between 5600°K and 6000°K with a tolerance of
±400°K. The CRI of all these types is 90 or more, and they are dimmable and
capable of being restruck hot.
As the power to the lamp is reduced, the color temperature increases and the CRI decreases. Where the light output needs to be reduced, it is
preferable to use neutral density filters on the luminaire in order to avoid any
possibility of a shift in color characteristics.
CalColor™ Filters
Rosco Laboratories in conjunction with Eastman Kodak has recently cre-
ated a family of filters for motion-picture lighting for which they were jointly
awarded an Academy Award for Scientific and Technical Achievement. This
is Rosco CalColor™, the first system of lighting filters specifically related to
the spectral sensitivity of color negative film.
These filters are very precise equivalents to the established range of the
very familiar “CC” filters. The Series I colors include the primaries blue,
green and red, along with the secondaries yellow, magenta and cyan. The
Series II will include six intermediaries, two of which (pink and lavender) are available at this writing. All colors are produced in the familiar 15, 30, 60 and 90 designations (½, 1, 2 and 3 stops).
All of the colors are produced on a heat-resistant base. During manufac-
ture the CIE references are continuously monitored by online computerized
colorimetric equipment, which ensures the consistency of product from run
to run. The CalColor™ products are available in sheets (20" x 24") and rolls
(48" x 25') .
The principle of this system is that each color enhances the individual color elements of the light source to which it is applied. For example, CalColor™ 90 Green selectively enhances green transmission by reducing the blue and red transmission by three stops. A CalColor™ 90 Magenta enhances the blue and red transmission by reducing the effective green transmission by three stops. See the CalColor chart on page 834.
Another feature of the CalColor™ system relates to the colorants, which were selected with concern for the purity of each color. The colors finally
presented are so “clean” that they can be combined with fully predictable
results (i.e., combining 30 Cyan (-30R) and 15 Blue (-14G, -16R) results in a
Light Steel Blue filter (-14G, -46R)).
Of Note…
CAUTIONS: Xenon lamps have high internal pressure, even when cold
(cold, the internal pressure can be up to approximately 150 psi, and
in operation this pressure can be as high as 450 psi). The lamps are
supplied with a protective jacket over the bulb, and this should not be
removed until the lamp is fully installed. A suitable face shield, body jacket and gauntlets are required any time the protective jacket is removed, and the jacket should be reinstalled before steps are taken to disconnect and remove a lamp.
A further caution must be considered relative to the characteristically
high luminance of the arc in these sources. Direct viewing of the arc can
result in serious damage to the retina.
Xenon lamps produce a considerable ultraviolet component (up to
about 6% of the total lamp energy output). This can result in the pro-
duction of ozone which is harmful to health if breathed for extended
periods or in poorly ventilated spaces. This caution should be observed
even when using “ozone-free” versions of these sources.
Of Note…
The control equipment for these strobes permits the addition of delay to
the pulse in degree increments. The position of the shutter will move either forward or backward in relation to the gate until it is in the proper
position. For reflex cameras the strobe fires twice for each frame, once to
illuminate the subject and a second time to illuminate the viewfinder.
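As a worked example (assuming a rotating shutter that makes one full 360-degree revolution per frame), each degree of delay corresponds to a small, fixed slice of time at a given frame rate:

```python
def delay_seconds(degrees: float, fps: float = 24.0) -> float:
    """Time for the shutter to rotate the given number of degrees.
    One full 360-degree revolution takes 1/fps seconds."""
    return (degrees / 360.0) / fps

print(f"1 degree at 24 fps = {delay_seconds(1) * 1e6:.1f} microseconds")     # ~115.7 us
print(f"10 degrees at 24 fps = {delay_seconds(10) * 1e3:.3f} milliseconds")  # ~1.157 ms
```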
Luminaires

Fresnel Lens Spotlights
Fresnel spotlights place the light source at the center of a spherical reflector, so that energy radiated toward the back of the housing is redirected through the filament and toward the lens. The effect intended is that the energy radiated to the lens appears to come from a single source. The combination of reflector and light source is moved in relation to the lens to accomplish the focusing.
One of the most important features of the Fresnel lens spotlight is its abil-
ity to barndoor sharply in the wide flood focus position. This property is
less apparent as the focus is moved toward a spot (at spot focus it is not
effective at all). The barndoor accessory used with this spotlight provides
the cinematographer with the means for convenient light control. The sharp
cutoff at the wide flood is, of course, due to the fact that the single-source
effect produces a totally divergent light beam. The action of the barndoor,
then, is to create a relatively distinct shadow line.
Occasionally it may be desirable to optimize the spot performance of these
units, and for this situation “hot” lenses are available. These tend to produce
a very narrow beam with very high intensity. It is important to remember
that the flood focus is also narrowed when these lenses are used.
Open-Reflector Spotlights
The great attraction of these luminaires is that they are substantially more
efficient than the Fresnel lens spotlights. Typical spot-to-flood intensity ratios for these types of units are between 3:1 and 6:1.
Tungsten-halogen Floodlights
A variety of tungsten-halogen floodlighting fixtures take advantage of
these compact sources. Two of the more typical forms are treated here. These
fixtures are available in wattages from about 400 to 2,000.
There are types of “mini” floodlights using the coiled-coil, short-filament, tungsten-halogen lamps, which provide very even, flat coverage with extremely sharp barndoor control in both directions. Due to the design of the reflector in this system, the light output from this fixed-focus floodlight appears to have a single source. This accounts for the improved barndoor characteristics.
Cyclorama Luminaires
These lighting fixtures were originally developed for lighting backings in
theater but have broad application in similar situations in film. Because of
the design of the reflector system, it is possible to utilize these fixtures very
close to the backing that is being lit and accomplish a very uniform distribu-
tion for a considerable vertical distance. Typically these units are made for
tungsten-halogen linear sources ranging from 500–1,500 watts.
Based on the variations in design, some of these may be used as close as
3' to 6' from the backing being illuminated. The spacing of the luminaires
along the length of the backing is in part determined by the distance of these
fixtures from the backing itself.
Soft Lights
Soft lights, which attempt to produce essentially shadowless illumination,
are made in wattages from 500 up to about 8,000 and typically utilize mul-
tiple 1000w linear tube tungsten-halogen lamps. The degree of softness is
determined by the effective area of the source.
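A minimal geometric sketch (illustrative figures, not manufacturer data) of why effective source area governs softness: by similar triangles, the penumbra cast past an object grows with the width of the source.

```python
# The penumbra (soft shadow edge) widens in proportion to the source's
# effective width -- an idealized similar-triangles model.

def penumbra_width(source_width_ft, source_to_subject_ft, subject_to_wall_ft):
    """Penumbra width = source width x (subject-to-wall / source-to-subject)."""
    return source_width_ft * subject_to_wall_ft / source_to_subject_ft

# A 0.5' open-face head vs. a 4' soft light, both 10' from the subject,
# with the shadow falling on a wall 3' behind the subject:
print(penumbra_width(0.5, 10, 3))  # 0.15' -> hard-edged shadow
print(penumbra_width(4.0, 10, 3))  # 1.2'  -> soft, diffuse shadow line
```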
The Aurasoft™ is unique, in that it produces a greater area of coverage than
a comparable conventional unit, at a full stop advantage in light level and
with comparable “shadow-casting” character. These units can be quickly
converted in the field between tungsten-halogen and HMI light sources.
Also available is a plastic tubular diffuser with a reflector at the closed end and a tungsten-halogen spotlight fitted at the open end. The configuration allows the unit to be easily hidden or placed in a corner to provide soft light that can be used very close to the actors.
Helium-filled balloon lights are designed to contain either tungsten-halogen or HMI sources in various combinations. These balloons are tethered and can be used up to an altitude of about 150 feet (45 meters).
Fig 5. Reflector systems of various “soft” lights (tungsten-halogen and standard incandescent lamps with painted-surface reflectors, shields and umbrella)
Light-Control Accessories
Barndoors
The purpose of this accessory is to prevent the illumination beam from the
fixture from reaching certain portions of the set. A relatively well-defined
edge can be established that delineates the end of an illuminated area and the
beginning of an unilluminated zone.
Barndoors are most effective when used on Fresnel spotlights when the
spotlight is in the wide flood position. The effectiveness of the barndoor is
reduced as the focus is moved toward spot and is totally without effect at the
spot focus.
The effectiveness of the barndoor as an accessory on other types of lumi-
naires varies sharply with the design of the specific item. In a number of the
open reflector tungsten-halogen systems (particularly floodlights), barndoor
effectiveness is limited to the edge of the barndoor that is parallel to the source.
Snoots
This is a funnel-shaped device used to limit the beam of a Fresnel spotlight. It is available in various diameters.
Scrim
The type of scrim referred to here is placed directly in the accessory-
mounting clips on a luminaire. This type of scrim is normally wire netting,
sometimes stainless-steel wire, which is used as a mechanical dimmer.
The advantage of the scrim is that it permits a reduction in light intensity
in several steps (single and double scrims) without changing the color tem-
perature or the focus of the luminaire. Contrary to popular belief, it is not
a diffuser.
The half-scrim permits the placement of scrim material in only half of the beam and is widely used on Fresnel spotlights. It overcomes a problem encountered when the Fresnel is used at fairly high angles: the portion of the beam striking the floor, or objects near the floor closest to the luminaire, produces intensities too high to match the desired level at the distance associated with the center of the beam. The reason, of course, is the substantial variation in the distances the illumination energy travels. Applying the half-scrim to the portion of the beam impinging on the nearest objects overcomes this problem.
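A small worked example (hypothetical distances) of the inverse-square falloff that the half-scrim corrects:

```python
import math

def stops_hotter(near_ft: float, far_ft: float) -> float:
    """How many stops brighter the near distance is than the far distance,
    assuming intensity falls off as 1/d**2 (inverse-square law)."""
    ratio = (far_ft / near_ft) ** 2
    return math.log2(ratio)

# A Fresnel rigged high, hitting the floor 6' away at the near edge of the
# beam and 12' away at the beam center:
print(f"{stops_hotter(6, 12):.1f} stops hotter at the near edge")  # 2.0 stops
```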
Gel Frames
Different forms of these holders are made and designed to fit into the ac-
cessory clips on the front of most luminaires. They permit the use of various
types of plastic filter materials to modify the characteristics of the beam.
Color media may be put in these holders to affect color, and a wide range of
diffusion products are available.
Electrical Dimmers
The old-fashioned resistance and autotransformer dimmers have given way to solid-state SCR dimmer systems. Computer-managed DMX controllers can not only control the intensity of each luminaire, but can also switch cues, replug circuits, precisely control the duration of a dim, and operate many other accessories (color wheels, lamp movement and focus). All of these cues can be recorded and stored in the computer for replay. With arcs, motorized mechanical shutters execute the dim.
Egg Crates
Large, fabric egg crates can be stretched in front of soft diffusion material
to control spill light.
LED Lighting For Motion Picture Production
Over the last four years LED lighting fixtures have been working their way into the motion picture industry. Companies such as Litepanels Inc. have introduced products that effectively exploit the inherent characteristics of LED technology: low DC power and amperage draw, low heat, and dimmability without color shift. LEDs are primarily powered by low-voltage DC with low energy demands, which has enabled the design of some innovative battery-operated fixtures. Untethered from a power cable, these instruments provide handy, easy-to-rig fixtures well suited to the fast-paced shooting styles of today’s production environments. The use of multicolored LEDs based on RGB principles, or the newer multicolored LED mixing, allows for products that expand the potential for accurate spectral displays.
A number of companies have emerged over the past few years that are providing innovative LED products: Color Kinetics, Litepanels, Mole-Richardson, Gekko Technologies, Element Labs, Kino Flo, Zylight, Nila and LEDz, among others.
Challenges, however, remain in this new technology. For a cinematographer it is all about the light: if a fixture’s light characteristics aren’t correct for the scene, it will be passed over. Cinematographers strive for clean edge shadows or soft light sources that display diffuse shadow lines. The LED will have to deliver light as good as or better than existing tools provide; energy savings alone will not be the reason to embrace the LED.
The introduction of the LED has presented lighting designers with somewhat of a predicament as it pertains to motion picture lighting. The LED is a point source, much like the one Thomas Edison started with back in the 1880s, which was wrapped in a clear glass envelope.
Glare
As a bright point source, the LED is very glaring when viewed on axis, and large quantities have to be assembled to provide adequate levels of illumination. When an LED fixture is in the direct eyeline of actors, they tend to complain of discomfort.
There is not much you can do to reduce glare if you need the light output. Adding a diffuser to the fixture will reduce the glare, but that leads to the next problem: substantial light loss and reduced efficiency.
Multiple Shadows
Depending on the spacing of the LEDs, the light can display multiple shadow lines. Much as the number of pixels per inch determines the resolution of a digital image, the density of LEDs in a fixture defines its shadow-line characteristics: the tighter the spacing of the LEDs, the more homogeneous and clean the shadow. Open-faced fixtures without diffusion can throw very distracting multiple shadows from a barndoor.
Color
Early adoption of LEDs was hampered by their spectral output, as LED manufacturers specified their color space as dictated by the demands of general illumination.
Heat Management
Although the lamps are highly efficient, their output comes at the cost of heat. Put a few 2-watt or 5-watt LEDs on a matrix and heat builds up rapidly. The lightweight LED then has to be linked to a much heavier and bulkier heat-management system, or heat sink. This challenges designers to balance the LED’s advantages of size and light output against the weight and size of the heat-sink requirements. If the fixture ends up as hot and bulky as a 1K Fresnel and the color is not as good as an incandescent, will the industry care to work with it?
The challenges are being addressed in innovative ways and are resulting in
new and useful lighting tools. Together with the inherent advantages of the
LED, we can expect it to play a major role in providing lighting solutions to
the motion-picture industry.
Litepanels – www.litepanels.com
Product names: Micro Series, Mini-Plus Series,
1 x 1, 2 x 2, 4 x 4, Ringlite Series, SeaSun
One of the first companies to successfully exploit the advantages of LEDs was Litepanels. Founded by industry gaffers, the company has provided innovative lighting tools for on-camera lighting as well as more general set lighting applications.
Element Labs – www.elementlabs.com
Product names: Versa Bank
The Versa Bank provides changing light effects for process shots; it can simulate the ever-changing source light in synchronization with a process background plate.
Nila – www.nila.tv
Product names: Nila JNH series
Founded by gaffer/grip Jim Sanfilippo, Nila offers a high-powered LED lighting system consisting of modules that interconnect to form larger fixtures. Interchangeable lenses provide 10-, 25-, 45- and 90-degree beam angles, as well as vertical and horizontal elliptical beam options. Other accessories such as yokes and gels flesh out the system. The module is dimmable through the DMX protocol and is available in either daylight or tungsten equivalents. The units operate on a universal 90–240V AC/DC power supply.
Zylight – www.zylight.com
Product names: Zylight IS3, Z90 & Z50, Remote using ZyLink protocol.
Founded by Charlie Collias, a veteran of video and documentary production, and his brother Jim, who comes from an electrical engineering background, Zylight produces a range of color-changing RGB portable lights well suited to on-camera and general studio lighting applications. An innovative wireless remote-control system, ZyLink, offers control over numerous fixtures at one time. The units operate on AC or DC power. Given the high density of LEDs on the light-emitting surface, the shadow characteristics are very clean, without multiple shadow lines.
LEDz – www.led-z.com
Product names: Mini-Par, Brute 9, 16, and 30.
Founded by veteran HMI designer Karl Schulz, LEDz offers a range of small portable lighting instruments.
The Mini-Par is a 12V DC on-camera light that offers various beam angles using a set of accessory lenses. The Brute fixture family consists of 30-, 50- and 90-watt fixtures, available in daylight (5500°K) or tungsten (3000°K) versions.
An Introduction
to Digital Terminology
by Marty Ollstein
and Levie Isaacks, ASC
ANALOG – The natural world is analog. Light and sound are described as
waves whose shape varies with their amplitude and frequency in a con-
tinuously variable signal. An analog signal is understood to be formed
by an infinite number of points. Analog human vision perceives light as a
continuous gradient and spectrum from black to white. Film is an analog
medium and can record a continuous spectrum. Digital video cameras
capture a scene as analog voltage and then digitize it with an A/D (analog-
to-digital) converter to create linear digital code values.
A/D CONVERSION – Analog-to-digital conversion transforms analog data
(such as light intensity or voltage) into a digital binary format of discrete
values. Referred to as digitization or quantization.
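A minimal sketch (hypothetical values) of that conversion: an analog level is scaled and rounded onto a finite set of code values.

```python
def quantize(voltage: float, v_max: float, bits: int) -> int:
    """Map an analog level (0..v_max) onto 2**bits discrete code values."""
    levels = (1 << bits) - 1                        # e.g. 1023 for 10-bit
    code = round(max(0.0, min(voltage, v_max)) / v_max * levels)
    return code

print(quantize(0.35, 1.0, 8))    # 89  of 255  (8-bit)
print(quantize(0.35, 1.0, 10))   # 358 of 1023 (10-bit: finer steps)
```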
ANAMORPHIC – An optical process in which a widescreen (wide aspect
ratio) image is recorded onto a narrower target (film or sensor) using an
anamorphic lens to squeeze the image horizontally. An anamorphic lens
will squeeze a 2.40:1 ’Scope scene onto a 1.33:1 negative frame or 4 x 3
camera chip. The format uses almost the entire image area with no waste
(of pixels or negative area), resulting in a higher-resolution image. For
display, an anamorphic projector lens is needed to unsqueeze the image.
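As quick arithmetic (assuming the common 2x anamorphic squeeze factor), the squeeze and unsqueeze work out as follows:

```python
SQUEEZE = 2.0                      # common anamorphic squeeze factor

scene_ar = 2.40                    # 'Scope scene, width:height
recorded_ar = scene_ar / SQUEEZE   # 1.20:1 -- fits within a 1.33:1 gate or chip
projected_ar = recorded_ar * SQUEEZE

print(recorded_ar, projected_ar)   # 1.2 2.4
```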
ARTIFACT – A flaw or distortion in an image—a result of technical limita-
tion, incompatibility or error. An artifact can be introduced at any step in
which an image is altered or converted to another format.
ASPECT RATIO – Ratio of screen width to height.
Common ratios:
1.33:1 (4 x 3) Standard TV—or
35mm Full Aperture (Silent)
1.37:1 Academy Aperture—or
Regular 16mm Full Aperture
1.66:1 European Theatrical Standard
Figure 1. Triangles within CIE chromaticity chart define limits of color gamuts (CIE x,y diagram showing the blackbody curve from 1000°K to 10,000°K)
COLOR SPACE – A color space is a structure that defines colors and the re-
lationships between colors. Most color-space systems use a three-dimen-
sional model to describe these relationships. Each color space designates
unique primary colors—usually three (a defined red, green and blue)—on
which the system is based. The system can be device-dependent or device-
independent. A device-dependent system is limited to the color represen-
tation of a particular device or process, such as film. The RGB color space
that describes film is representative of the range of colors created by the
color dyes used in film. A device-independent color space defines colors
universally, which through a conversion process using a profile, can be
used to define colors in any medium on any device.
The color space designated as the standard for digital-cinema distribution is X'Y'Z', a gamma-encoded version of the CIE XYZ space. A de-
vice-independent space, X'Y'Z' has a larger color gamut than other color
spaces, going beyond the limits of human perception. All other physically
realizable gamuts fit into this space.
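As an illustration, a minimal sketch (assuming the commonly published digital-cinema encoding parameters: 12-bit code values, 2.6 gamma, 52.37 cd/m² normalization) of how a linear XYZ component becomes a gamma-encoded X'Y'Z' code value:

```python
def dci_encode(linear: float) -> int:
    """Encode one linear XYZ component (in cd/m^2) to a 12-bit code value,
    assuming 2.6 gamma and 52.37 cd/m^2 normalization."""
    normalized = max(0.0, linear) / 52.37
    return min(4095, round(4095 * normalized ** (1 / 2.6)))

print(dci_encode(48.0))   # a 48 cd/m^2 peak white -> code value 3960
```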
COMPONENT VIDEO SIGNAL – A video format in which luminance and
chrominance remain as separate entities and have separate values.
COMPOSITE IMAGE – An image created by the combination of two or
more source images. Compositing tools include matte-creation software and chroma-key hardware.
COMPOSITE VIDEO SIGNAL – A video format in which luminance and
chrominance are combined.
COMPRESSION – Compression is the process used to reduce the size of an image file with the least sacrifice in quality or usable information.
DIGITAL INTERMEDIATE (DI) – Used variously to refer to:
• The digital version (image files) of the production, used during post production.
• The final color-grading process (which may take weeks for a feature) that integrates all visual elements and determines the final look of the production.
• The product or result of the final color-grading process (digital master).
DIGITAL MASTER – The completed output or result of the final color-grading process, subsequently used to create the distribution masters. Or, the full digital version of the camera-original production.
DIGITAL SIGNAL – A signal represented by discrete units (code values), measured by a sampling process. A digital signal is defined by a finite number of code values.
PRIMARIES – The basic colors on which a color system is built, usually a specified red, green and blue. But some systems are now experimenting with the use of more than three primaries in order to improve color display.
PRIMARY COLOR CORRECTION – Independently adjusting the lift, gain
and gamma of the three additive color channels (red, green and blue).
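A minimal per-channel sketch of lift, gain and gamma (one common formulation; systems differ in terminology, order of operations and exact definitions):

```python
def correct_channel(value: float, lift: float, gain: float, gamma: float) -> float:
    """Apply gain (slope) and lift (offset), then a gamma (power) function,
    to one channel. Input and output are normalized 0.0-1.0 values."""
    v = value * gain + lift
    v = max(0.0, min(1.0, v))        # clamp before the power function
    return v ** gamma

# Warm up the image slightly: raise red gain, leave green, lower blue.
r, g, b = 0.40, 0.40, 0.40
print(correct_channel(r, 0.0, 1.10, 1.0),
      correct_channel(g, 0.0, 1.00, 1.0),
      correct_channel(b, 0.0, 0.95, 1.0))
```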
PROGRESSIVE – The method of sequentially scanning all lines of a pixel
array to create a frame in one vertical sweep. There are no fields as there
are in interlaced scanning—only complete frames. Computer screens use
progressive scans.
QUANTIZATION ARTIFACT – A distortion in a region of an image resulting from some form of image processing. Loss of image information can
result from reduction in color space, bit depth, color sampling or spatial
resolution. A common example: downconverting an image from 10-bit
color to 8-bit color.
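The 10-bit to 8-bit example works out as follows (a toy sketch):

```python
def downconvert_10_to_8(code10: int) -> int:
    """Drop the two least-significant bits: 1024 levels collapse to 256."""
    return code10 >> 2

# Four adjacent 10-bit values collapse to one 8-bit value -- in a smooth
# gradient this loss shows up as visible banding (a quantization artifact).
print([downconvert_10_to_8(v) for v in (512, 513, 514, 515)])  # [128, 128, 128, 128]
```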
RAID ARRAY (Redundant Array of Independent Disks) – A group of
hard-disk drives, linked by a system that enhances speed, increases storage
capacity and performs automatic backup of data. A RAID array writes to
multiple disks simultaneously and backs up data in the process. Often used
in film production, the benefits of the RAID system include increases in
data integrity, fault tolerance, throughput, speed and storage capacity. There
are different RAID levels, including 0 (no redundancy or backup capability)
and 5 (most common, with redundancy) through level 10 and “RAID S.”
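A toy sketch (not any vendor's implementation) of the XOR-parity idea behind redundant RAID levels such as RAID 5:

```python
from functools import reduce

def parity(blocks):
    """XOR corresponding bytes of the blocks; any one lost block can be
    rebuilt from the survivors plus this parity."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]   # blocks striped across three disks
p = parity(data)                      # parity stored on a fourth disk

# Disk 1 fails: rebuild its block from the remaining data plus parity.
rebuilt = parity([data[0], data[2], p])
print(rebuilt == data[1])             # True
```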
RAW FILES – The digital recording of images without applying the gamma correction of video. This unprocessed data retains more information and can potentially produce a higher-quality image. Instead of applying video gamma correction, the camera converts the linear code values it captures into logarithmic code values that approximate a film gamma and color space, resulting in a wider dynamic range and greater shadow detail.
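A simplified sketch (illustrative constants, not any camera's actual transfer function) of such a linear-to-log conversion, showing how each stop of exposure receives a similar share of code values:

```python
import math

def lin_to_log(linear: float, black: float = 0.01) -> float:
    """Map linear scene exposure (0.0-1.0) to a normalized log value."""
    stops_of_range = math.log2(1.0 / black)            # dynamic range, in stops
    return max(0.0, math.log2(max(linear, black) / black) / stops_of_range)

for stop in (1.0, 0.5, 0.25, 0.125):                   # each halving = 1 stop
    print(f"linear {stop:5.3f} -> log {lin_to_log(stop):.3f}")
# Successive stops land at evenly spaced log values: 1.000, 0.849, 0.699, 0.548
```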
REDUNDANCY – Values that are repeated in an image.
Compression codecs find and compress redundant values in an image
in order to reduce the size of the file. An example of visual redundancy is
a blue sky with consistent color.
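A toy run-length encoder (one of the simplest ways a codec can exploit redundancy) applied to the blue-sky example:

```python
def rle_encode(pixels):
    """Collapse runs of identical values into (value, count) pairs."""
    runs, count = [], 1
    for prev, cur in zip(pixels, pixels[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append((prev, count))
            count = 1
    runs.append((pixels[-1], count))
    return runs

sky_scanline = ["blue"] * 1917 + ["white"] * 3
print(rle_encode(sky_scanline))   # [('blue', 1917), ('white', 3)]
```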
REGION-OF-INTEREST CORRECTION – Using “windows” or articu-
lated mattes to isolate specific areas of the frame to make alterations.
REGISTRATION (COLOR) – The alignment of the three-color sensors
or sources (red, green, blue) on a three-chip recording or display device.
Poor alignment can create color ringing artifacts. Registration charts can be used to ensure that the camera is in precise registration.
RENDER – The process of re-recording a digital image in accordance with
certain specifications or adjustments.
RESOLUTION – The measure of the finest detail visible in an image. The
resolution of an image is measured by the number of pixels in the image
display, not by the number of sensors in the camera chip.
VECTORSCOPE – An oscilloscope adapted to display the chrominance (hue and saturation) of a video signal, typically read from a color-bar test signal. 0 degrees is at the 9 o’clock position. The color boxes, clockwise from 0 degrees, are labeled YL (yellow), R (red), MG (magenta), B (blue), CY (cyan) and G (green).
VFX (Visual-effects material) – Plates, composites, CGI, etc.
VISUALLY LOSSLESS – A compression scheme that claims to be lossless
and may appear to be lossless without manipulation (or color correction).
However, when it must be manipulated for color grading or visual effects,
artifacts often appear.
WATERMARK – A security scheme in which bits are altered within an
image to create a pattern, which constitutes proof of ownership.
WAVEFORM MONITOR – An oscilloscope adapted to display luminance
or brightness light levels across a scene, from left to right, as recorded by a
camera, as well as other video signal levels. It measures voltage levels from
the camera and displays them on a screen in a scale of 0–100 IRE (black to
white). If the levels flatten out at either end of the scale (at 0 or 100, with
some exceptions), the image will clip and record no detail in those areas.
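A rough sketch (illustrative, not broadcast-accurate) of the underlying computation: mapping 8-bit luminance samples onto the 0–100 IRE scale and flagging values pinned at the extremes.

```python
def column_trace(column_8bit):
    """Map one column of 8-bit luminance samples to IRE units (0-100)."""
    return [v / 255 * 100 for v in column_8bit]

column = [0, 12, 128, 250, 255]
ire = column_trace(column)
clipped = [v for v in ire if v <= 0 or v >= 100]
print([round(v, 1) for v in ire])   # [0.0, 4.7, 50.2, 98.0, 100.0]
print(clipped)                       # [0.0, 100.0] -> no detail in these areas
```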
WHITE POINT – The white reference, expressed in chromaticity coordinates, that determines the white balance, or color temperature, of a system. An important parameter in color management and device calibration. Devices such as monitors and projectors can be calibrated to a specified white point.
Further References

Aerial Cinematography
Wagtendonk, W.J., Principles of Helicopter Flight, Aviation Supplies & Academics,
Newcastle, WA, 1996.
Crane, Dale, Dictionary of Aeronautical Terms, Aviation Supplies & Academics,
Newcastle, WA, 1997.
Spence, Charles, Aeronautical Information Manual and Federal Aviation Regulations,
McGraw-Hill, New York, NY, 2000.
Padfield, R., Learning to Fly Helicopters, McGraw-Hill, New York, NY, 1992.
Industry-Wide Labor-Management Safety Bulletins at: http://www.csatf.org/bulletintro.shtml
Arctic Cinematography
Eastman Kodak Publication: Photography Under Arctic Conditions.
Fisher, Bob, “Cliffhanger’s Effects were a Mountainous Task,”
American Cinematographer, Vol. 74, No. 6, pp. 66-74, 1993.
Miles, Hugh, “Filming in Extreme Climatic Conditions,” BKSTS Journal Image Technology,
February 1988.
Moritsugu, Louise, “Crew’s Peak Performance Enhanced Alive,” American Cinematographer,
Vol. 74, No. 6, pp. 78-84, 1993.
Laszlo, ASC, Andrew, Every Frame a Rembrandt: Art and Practice of Cinematography,
Boston, MA; Focal Press, 2000.
Laszlo, ASC, Andrew, It’s A Wrap!, Hollywood, CA; ASC Press, 2004.
LoBrutto, Vincent, Principal Photography: Interviews with Feature Film Cinematographers,
Westport, CT; Praeger, 1999.
Maltin, Leonard, The Art of the Cinematographer: A Survey and Interviews with Five Masters,
New York, Dover Publications, 1978.
McCandless, Barbara, New York to Hollywood: The Photography of Karl Struss, ASC,
Albuquerque, NM; University of New Mexico, 1995.
Miller, ASC, Virgil E., Splinters From Hollywood Tripods: Memoirs of a Cameraman, New York,
Exposition Press, 1964.
Rainsberger, Todd, James Wong Howe Cinematographer, San Diego, CA; A.S. Barnes, 1981.
Rogers, Pauline B., More Contemporary Cinematographers on Their Art, Boston, MA;
Focal Press, 2000.
Schaefer, Dennis and Salvato, Larry, Masters of Light: Conversations with Contemporary
Cinematographers, Berkeley, CA; University of California Press, 1985.
Sterling, Anna Kate, Cinematographers on the Art and Craft of Cinematography,
Metuchen, NJ; Scarecrow Press, 1987.
Walker, ASC, Joseph, The Light On Her Face, Hollywood, CA; ASC Press, 1984.
Young, BSC, Freddie, Seventy Light Years: An Autobiography as told to Peter Busby,
London, Faber and Faber, 1999.
Camera
Adams, Ansel, The Camera, New York, Morgan and Morgan, Inc., 1975.
Fauer, ASC, Jon, Arricam Book, Hollywood, CA; ASC Press, 2002.
Fauer, ASC, Jon, Arriflex 16 SR Book, Boston, MA; Focal Press, 1999.
Fauer, ASC, Jon, Arriflex 16 SR3 the Book, Arriflex Corp., 1996.
Fauer, ASC, Jon, Arriflex 35 Book, Boston, MA; Focal Press, 1999.
Fauer, ASC, Jon, Arriflex 435 Book, Arriflex Corp., 2000.
Samuelson, David W., Panaflex Users’ Manual, Boston, MA; Focal Press, 1990.
Camera Manufacturers
Aaton, +33 47642 9550, www.aaton.com
ARRI, (818) 841-7070, www.arri.com
Fries Engineering, (818) 252-7700, www.frieseng.com
Ikonoskop AB, +46 8673 6288, info@ikonoskop.com
Panavision, (818) 316-1000, www.panavision.com
Photo-Sonics, (818) 842-2141, www.photosonics.com
Pro8mm, (818) 848-5522, www.pro8mm.com
Camera Supports
A + C Ltd., +44 (0) 208-427 5168, www.powerpod.co.uk
Aerocrane, (818) 785-5681, www.aerocrane.com
Akela: Shotmaker, (818) 623-1700, www.shotmaker.com
Aquapod, (818) 999-1411
Chapman/Leonard Studio Equipment, (888) 883-6559, www.chapman-leonard.com
Egripment B.V., +31 (0)2944-253.988, Egripment USA, (818) 787-4295, www.egripment.com
Fx-Motion, +32 (0)24.12.10.12, www.fx-motion.com
Grip Factory Munich (GFM), +49 (0)89 31901 29-0, www.g-f-m.net
Hot Gears, (818) 780-2708, www.hotgears.com
Hydroflex, (310) 301-8187, www.hydroflex.com
Isaia & Company, (818) 752-3104, www.isaia.com
J.L. Fisher, Inc., (818) 846-8366, www.jlfisher.com
Jimmy Fisher Co., (818) 769-2631
Libra, (310) 966-9089
Louma, +33 (0)1 48 13 25 60, www.loumasystems.biz
Megamount, +44 (0)1 932 592 348, www.mega3.tv
Movie Tech A.G., +49 0 89-43 68 913, Movie Tech L.P., (678) 417-6352, www.movietech.de
Nettman Systems International, (818) 623-1661, www.camerasystems.com
Orion Technocrane, +49 171-710-1834, www.technocrane.de
Pace Technologies, (818) 759-7322, www.pacetech.com
Panavision Remote Systems, (818) 316-1080, www.panavision.com
Panther, +49 89 61 39 00 01, www.panther-gmbh.de
Spacecam, (818) 889-6060, www.spacecam.com
Strada, (541) 549-4229, www.stradacranes.com
Straight Shoot’r, (818) 340-9376, www.straightshootr.com
Technovision, (818) 782-9051, www.technovision-global.com
Wescam, (818) 785-9282, www.wescam.com
Cinematography
Brown, Blain, Cinematography, Boston, MA; Focal Press, 2002.
Campbell, Russell, Photographic Theory for the Motion Picture Cameraman,
London, Tantivy Press, 1970.
Campbell, Russell, Practical Motion Picture Photography, London, Tantivy Press, 1970.
Color
Albers, J., Interaction of Color, New Haven and London; Yale University Press, 1963.
Eastman Kodak Publication H-12, An Introduction to Color, Rochester, 1972.
Eastman Kodak Publication E-74, Color As Seen and Photographed, Rochester, 1972.
Eastman Kodak Publication H-188, Exploring the Color Image, Rochester.
Evans, R. M., An Introduction to Color, New York, NY; John Wiley & Sons, 1948.
Evans, R. M., Eye, Film, and Camera Color Photography, New York, NY; John Wiley & Sons, 1959.
Evans, R. M., The Perception of Color, New York, NY; John Wiley & Sons, 1974.
Friedman, J. S., History of Color Photography, Boston, MA;
American Photographic Publishing Company, 1944.
Hardy, A. C., Handbook of Colorimetry, MIT, Cambridge, MA; Technology Press, 1936.
Hunt, R. W. G., The Reproduction of Colour, Surrey, UK, Fountain Press, 1995.
Itten, J., The Art of Color, New York, Van Nostrand Reinhold, 1973.
National Bureau of Standards Circular 553, The ISCC-NBS Method of Designating
Colors and A Dictionary of Color Names, Washington D. C., 1955.
Optical Society of America, The Science of Color, New York, NY; Thomas Y.
Crowell Company, 1953.
Society of Motion Picture and Television Engineers, Elements of Color in Professional
Motion Pictures, New York, NY, 1957.
Wall, E. J., History of Three-Color Photography, New York and London, Boston, MA;
American Photographic Publishing Company, 1925.
Film
Adams, Ansel, The Negative, New York, Little Brown, 1989.
Adams, Ansel, The Print, New York, Little Brown,1989.
Eastman Kodak Publication H-1: Eastman Professional Motion Picture Films.
Eastman Kodak Publication H-23: The Book of Film Care.
Eastman Kodak Publication H-188: Exploring the Color Image.
Film Design
Affron, Charles and Affron, Mirella Jona, Sets in Motion, Rutgers University Press, 1995.
Carrick, Edward, Designing for Films, The Studio LTD and the Studio Publications Inc, 1941, 1947.
Carter, Paul, Backstage Handbook, 3rd edition, Broadway Press, 1994.
Cruickshank, Dan, Sir Banister Fletcher’s A History of Architecture, 20th edition,
New York, NY, Architectural Press, 1996.
Edwards, Betty, Drawing on the Right Side of the Brain, revised edition, Jeremy P. Tarcher, 1989.
de Vries, Jan Vredeman, Perspective, Dover Publications, 1968.
Heisner, Beverly, Studios, McFarland and Co., 1990.
Katz, Stephen D., Shot by Shot – Visualizing from Concept to Screen, Boston, MA;
Focal Press, 1991, pp. 337-356.
Preston, Ward, What an Art Director Does, Silman-James Press, 1994.
Raoul, Bill, Stock Scenery Construction Handbook, 2nd edition, Broadway Press, 1999.
St John Marner, Terrance, Film Design, The Tantivy Press, 1974.
Film History
The American Film Institute Catalog: Feature Films 1911–1920, Berkeley and Los Angeles,
University of California Press, 1989.
The American Film Institute Catalog: Feature Films 1921–1930, Berkeley and Los Angeles,
University of California Press, 1997.
The American Film Institute Catalog: Feature Films 1931–1940, Berkeley and Los Angeles,
University of California Press, 1993.
The American Film Institute Catalog: Feature Films 1941–1950, Berkeley and Los Angeles,
University of California Press, 1999.
The American Film Institute Catalog: Feature Films 1961–1970, Berkeley and Los Angeles,
University of California Press, 1997.
The American Film Institute Catalog: Within Our Gates: Ethnicity in American Feature Films
1911–1960, Berkeley and Los Angeles, University of California Press, 1989.
Belton, John, Widescreen Cinema, Cambridge, MA; Harvard University Press, 1992.
Brownlow, Kevin, Hollywood the Pioneers, New York, NY; Alfred A. Knopf, 1979.
Brownlow, Kevin, The Parade’s Gone By, New York, Knopf, 1968.
Coe, Brian, The History of Movie Photography, New York, Zoetrope, 1982.
Film Processing
ACVL Handbook, Association of Cinema and Video Laboratories.
Case, Dominic, Motion Picture Film Processing, London, Butterworth and Co. Ltd. (Focal Press), 1985.
Eastman Kodak publications: H-1, H-2, H-7, H-17, H-21, H-23, H-24.07, H-26, H-36, H-37, H-37A,
H-44, H-61, H-61A, H-61B, H-61C, H-61D, H-61E, H-61F, H-807 and H-822.
Happe, L. Bernard, Your Film and the Lab, London, Focal Press, 1974.
Kisner, W.I., Control Techniques in Film Processing, New York, SMPTE, 1960.
Ryan, R.T., Principles of Color Sensitometry, New York, SMPTE, 1974.
Filters
Eastman Kodak Publication B-3: Filters.
Harrison, H.K., Mystery of Filters-II, Porterville, CA; Harrison & Harrison, 1981.
Hirschfeld, ASC, Gerald, Image Control, Boston, MA; Focal Press, 1993.
Hypia, Jorma, The Complete Tiffen Filter Manual, AmPhoto, New York, 1981.
Smith, Robb, Tiffen Practical Filter Manual.
Tiffen Manufacturing Corporation Publication T179: Tiffen Photar Filter Glass.
Lenses
Angenieux, P., “Variable focal length objectives,” U.S. Patent No. 2,847,907, 1958.
Bergstein, L., “General theory of optically compensated varifocal systems,”
JOSA Vol. 48, No. 9, pp. 154-171, 1958.
Lighting
Adams, Ansel, Artificial Light Photography, New York, Morgan and Morgan, Inc., 1956.
Alton, John, Painting With Light, Berkeley and Los Angeles, University of California Press, 1995.
Bergery, Benjamin, Reflections – 21 Cinematographers at Work, Hollywood, CA; ASC Press, 2002.
Box, Harry, Set Lighting Technician’s Handbook, Boston, MA, Focal Press, 2003.
Malkiewicz, Kris J., Film Lighting: Talk with Hollywood’s Cinematographers and Gaffers,
New York, Touchstone, a Division of Simon & Schuster, 2012.
Millerson, Gerald, The Technique of Lighting for Television and Film, Boston, MA; Focal Press, 1991.
Miscellaneous
Arnheim, Rudolf, Art and Visual Perception, Berkeley, CA, University of California Press, 1974.
Darby, William, Masters of Lens and Light: A Checklist of Major Cinematographers and
Their Feature Films, Metuchen, NJ, Scarecrow Press, 1991.
Houghton, Buck, What a Producer Does, Silman-James Press, 1991.
Kehoe, Vincent J. R., The Technique of the Professional Makeup Artist, Boston, MA,
Focal Press, 1995.
Kepes, Gyorgy, Language of Vision, New York, NY; Dover Publications, 1995.
Moholy-Nagy, L., Vision in Motion, Wisconsin; Cuneo Press, 1997.
Nilsen, Vladimir, The Cinema as a Graphic Art, New York; Garland Pub., 1985.
Waner, John, Hollywood’s Conversion of All Production to Color Using Eastman Color
Professional Motion Picture Films, Newcastle, ME; Tobey Publishing, 2000.
Photography
Evans, R.M., W.T. Hanson Jr., and W.L. Brewer, Principles of Color Photography,
New York, John Wiley & Sons Inc., 1953.
Mees, C.E.K., The Theory of the Photographic Process, New York, Macmillan, 1977.
Dazian Theatrical Fabrics: East Coast (877) 232-9426 or East Coast Design Studio (212) 206-3515,
West Coast (877) 432-9426 or West Coast Design Studio (818) 841-6500.
Underwater Cinematography
Mertens, Lawrence, In Water Photography: Theory and Practice, Wiley Interscience,
New York, John Wiley & Sons, 1970.
Visual Effects
Abbott, ASC, L.B., Special Effects with Wire, Tape and Rubber Bands,
Hollywood, CA; ASC Press, 1984.
Bulleid, H.A.V. (Henry Anthony Vaughan), Special Effects in Cinematography,
London, Fountain Press, 1960.
Clark, Frank P., Special Effects in Motion Pictures Some Methods for Producing Mechanical Effects,
New York, SMPTE, 1966.
Dunn, ASC, Linwood, and Turner, George E., ASC Treasury of Visual Effects,
Hollywood, CA; ASC Press, 1983.
Fielding, Raymond, The Technique of Special Effects Cinematography, Boston, MA;
Focal Press, 1985.
Glover, Thomas J., Pocket Ref, Littleton, CO, Sequoia Publishing, 1997.
Harryhausen, Ray, Ray Harryhausen: An Animated Life, New York, NY, Billboard Books, 2004.
Rogers, Pauline B., The Art of Visual Effects: Interviews on the Tools of the Trade,
Boston, MA; Focal Press, 1999.
The Nautical Almanac, commercial edition, Arcata, CA, Paradise Cay Publications (yearly).
Vaz, Matt Cotta and Barron, Craig, The Invisible Art: The Legends of Movie Matte Painting,
San Francisco, CA; Chronicle Books, 2002.