The relationship between the camera and the object being captured (i.e. the ANGLE) gives
emotional information to an audience and guides their judgment about the character or
object in shot. The more extreme the angle (i.e. the further away it is from eye level), the
more symbolic and heavily loaded the shot.
Each shot requires placing the camera in the best position for viewing a scene. This
positioning of the camera, especially the camera angle, determines both the audience viewpoint
and the area covered in the shot. To achieve the desired angle within the frame, the camera
must be placed at the appropriate level.
Low-angle:
When the camera is below the subject, the shot is a low-angle shot. The camera
makes whatever it is looking at seem big. It also gives a general sense of dramatic
intensity and sometimes makes a subject seem threatening.
High-Angle:
When the camera is above the subject, looking down at it, the shot is a high-angle
shot. The psychology behind the shot is the reverse of the low-angle shot: a high-angle
shot makes a subject look smaller. To show weakness or vulnerability, a high-angle
shot is appropriate.
Eye-Level-Angle:
These shots are best for close-ups of people and for depicting a general
scene. They may provide a neutral narrative position, or they may support a high-impact,
challenging situation. A camera positioned at eye level invites the viewer to read the shot
rather than respond to it emotionally, as would be the case with either a high- or a low-angle
shot.
Moving the camera can also make objects appear to be bigger or smaller. It may be used to make things blurred, scary, or just
different. Camera movement techniques are often used, however, to tell a story.
Multiple points of view in a visual story can be given by rapid transitions from one camera
position to another. It is also possible to change the viewing perspective within a single
shot by moving the camera or zooming the lens. Several types of movement can be used.
The most common are zooms, pans, tilts, dollies, trucks and booms.
Pans-
A movement which scans a scene horizontally. The camera is placed on a tripod, which
operates as a stationary axis point as the camera is turned, often to follow a moving
object which is kept in the middle of the frame.
To create a smooth pan it's a good idea to practice the movement first. If you need to
move or stretch your body during the move, it helps to position yourself so you end up in
the more comfortable position. In other words, you should become more comfortable as
the move progresses rather than less comfortable.
Tilt
A tilt is a vertical camera movement in which the camera points up or down from a
stationary location. For example, if you mount a camera on your shoulder and nod it up
and down, you are tilting the camera.
Tilting is less common than panning because that's the way humans work — we look left
and right more often than we look up and down.
Tracking
The term tracking shot is widely considered to be synonymous with dolly shot; that is, a
shot in which the camera is mounted on a cart which travels along tracks.
However there are a few variations of both definitions. Tracking is often more narrowly
defined as movement parallel to the action, or at least at a constant distance (e.g. the
camera which travels alongside the race track in track & field events). Dollying is often
defined as moving closer to or further away from the action.
Some definitions specify that tracking shots use physical tracks, others consider tracking
to include hand-held walking shots, Steadicam shots, etc.
Other terms for the tracking shot include trucking shot and crabbing shot.
Dolly Shots
Sometimes called TRUCKING or TRACKING shots. The camera is placed on a moving
vehicle and moves alongside the action, generally following a moving figure or object.
Complicated dolly shots will involve a track being laid on set for the camera to follow,
hence the name. The camera might be mounted on a car, a plane, or even a shopping
trolley (good method for independent film-makers looking to save a few dollars). A dolly
shot may be a good way of portraying movement, the journey of a character for instance,
or for moving from a long shot to a close-up, gradually focusing the audience on a
particular object or character.
The camera is mounted on a cart which travels along tracks for a very smooth movement.
Also known as a tracking shot or trucking shot.
Trucking
Trucking is basically the same as tracking or dollying. Although it means slightly
different things to different people, it generally refers to side-to-side camera movement
with respect to the action.
The term trucking is not uncommon but is less widely-used than dollying or tracking. Yet
another equivalent term is crabbing.
You should do white balances regularly, especially when lighting conditions change (e.g.
moving between indoors and outdoors).
Always consider the purpose of the shot before you start to set it up. After finding
your subject, think about what you want to show about it or them.
The following points explain the relationship between the camera and the
subject:
1. Placement of the subject: Be clear about where the subject is placed in the frame.
For example, be aware of the Optical Center.
3. Eye-Gaze:
A) Direct gaze: When a subject looks directly at the camera, their power and
confidence are emphasized.
B) Indirect gaze: Sometimes the subject deflects emphasis to other people
or objects in the scene.
4. Camera angle: This is another important point which should always be remembered.
We must be aware of all the angles, such as high and low angles, and along with these
we should also keep in mind head-room and nose-room.
5. Portrait of the subject in different shots: For example, in an introductory shot, if your
subject is very poor, the portrait should be made to reflect that.
VIDEO CAMERA
A video camera is a camera used for electronic motion picture acquisition, initially
developed by the television industry but now common in other applications as well.
A video camera records and plays back visual images and sound, traditionally on magnetic tape.
The Video camera is the single most important piece of production equipment. Other
production equipment and techniques are greatly influenced by the camera’s technical
and performance characteristics. The video camera is central to all television production.
Video cameras are used primarily in two modes.
The first, characteristic of much early television, is what might be called a live broadcast,
where the camera feeds real time images directly to a screen for immediate observation.
A few cameras still serve live television production, but most live connections are for
security, military/tactical, and industrial operations where surreptitious or remote viewing
is required.
The second is to have the images recorded to a storage device for archiving or further
processing; for many years, videotape was the primary format used for this purpose, but
optical disc media, hard disk, and flash memory are all increasingly used. Recorded video
is used in television and film production, and more often in surveillance and monitoring
tasks where unattended recording of a situation is required for later analysis.
1. Built-in Microphone: This records the sound along with the picture when the
camera is in operation.
2. White-Balance Sensor window: This indicates the white balancing of the camera
for its color corrections.
3. Lens:- The lens assembly handles all the light and the image that comes into the
camera. On some models we can add lenses to achieve different effects.
5. LCD viewfinder:- Most new versions of digital camcorders have the liquid crystal
display (LCD) viewfinder, a small screen that allows us to see what we are
recording in color.
6. Zoom:-The two way zoom button enables us to zoom the camera lens in and out,
that is, it allows us to go closer to the subject when we zoom in and further from
the object when we zoom out.
7. Recording Levels:- Most professional models have a control that we can use to
adjust the levels of the audio we are recording.
8. Operation switch: This switch is used for the power supply to the camera.
9. Auto light button: This button is pressed to activate the auto light function.
11. Tape eject button: To insert or take out a video cassette, press this button.
12. External microphone socket: If you want to use an external microphone, connect
it to this socket (in this case, the built-in microphone will be deactivated).
13. Start/stop button: Press this button to start and stop shooting a scene.
14. White balance button- Press this button to select manual white balance
adjustment. Press it again to reset to the automatic white balance adjustment
mode.
15. Exposure/aperture:- The exposure control on the camera helps us to increase or decrease
the aperture levels so that the picture becomes brighter or darker depending on
what we desire. This increases or decreases the amount of light entering the
camera.
CLOSEUP (CU)
This allows the camera to focus on the detail of the subject as there is little emphasis on
background or setting. The frame contains the person’s head and shoulders allowing the
audience to recognise a subject’s thoughts and feelings.
Two Shot:-
There are a few variations on this one, but the basic idea is to have a comfortable shot of
two people. Often used in interviews, or when two presenters are hosting a show.
Two-shots are good for establishing a relationship between subjects. If you see two sports
presenters standing side by side facing the camera, you get the idea that these people are
going to be the show's co-hosts. As they have equal prominence in the frame, the
implication is that they will provide equal input. Of course this doesn't always apply, for
example, there are many instances in which it's obvious one of the people is a presenter
and the other is a guest. In any case, the two-shot is a natural way to introduce two
people.
A two-shot could also involve movement or action. It is a good way to follow the
interaction between two people without getting distracted by their surroundings.
Aperture:-
The aperture, or f/stop as it is commonly called, is used to regulate the diameter of the
lens opening. That controls the luminance on the film plane. Besides controlling the
luminance on the film plane, the f/stop also controls image sharpness by partially
correcting various lens aberrations.
The most commonly used aperture control device is the iris diaphragm. An iris
diaphragm is an adjustable device that is fitted into the barrel of the lens or shutter
housing. It is called an iris diaphragm because it resembles the iris in the human eye.
An iris diaphragm is a series of thin, curved, metal blades that overlap each other and is
fastened to a ring on the lens barrel or shutter housing.
The size of the aperture is changed by turning the aperture control ring.
The blades move in unison as the control ring is moved, forming an aperture of any
desired size.
The control ring is marked in a series of f/stops that relate to the iris opening.
The aperture controls the intensity of light that is allowed to pass to the film and the parts
of the image that will appear in sharp focus.
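As a rough illustration of the arithmetic behind the f/stop series (a sketch, not tied to any particular lens): each full stop multiplies the f-number by about 1.4 and halves the light reaching the film plane, because transmission is proportional to one over the square of the f-number.

    import math

    # Standard full-stop series: each step multiplies the f-number by roughly 1.4 (the square
    # root of 2), which halves the area of the iris opening and therefore the light admitted.
    f_stops = [1.4, 2, 2.8, 4, 5.6, 8, 11, 16, 22]

    for n in f_stops:
        # Light transmission relative to f/1.4 (light is proportional to 1 / N squared).
        relative_light = (1.4 / n) ** 2
        stops_down = math.log2(1 / relative_light)
        print(f"f/{n}: relative light {relative_light:.3f} ({stops_down:.1f} stops below f/1.4)")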
1) Normal lens: A Normal Lens gives a viewpoint that is very close to what is seen by
the human eye. It has very little distortion.
2) Telephoto Lens: A telephoto lens, or narrow-angle lens, is designed to have a long focal
length. The subject appears much closer than normal, but you can see only a smaller part
of the scene.
3) Zoom Lens: Most video cameras come with the familiar zoom lens system, which is
a remarkably flexible production tool. At any given setting within its range the zoom lens
behaves like a prime lens of that focal length.
4) Wide angle: A wide-angle lens has a short focal length that takes in correspondingly
more of the scene. However subjects will look much further away and the depth and
distance appear exaggerated.
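To make the relationship between focal length and angle of view concrete, here is a minimal sketch using the standard formula 2 x atan(sensor width / (2 x focal length)). The 36 mm sensor width is only an assumption for illustration; substitute your camera's actual sensor size.

    import math

    def angle_of_view(focal_length_mm, sensor_width_mm=36.0):
        # Horizontal angle of view in degrees: 2 * atan(sensor width / (2 * focal length)).
        # The 36 mm default is a full-frame sensor width; adjust for your own camera.
        return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

    # Wide-angle, normal and telephoto focal lengths compared
    for f in (24, 50, 200):
        print(f"{f} mm lens -> {angle_of_view(f):.1f} degrees horizontal angle of view")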
FUNCTIONS:-
The function of the lens in the camera is to direct the light source to the camera sensor
and also to focus your image.
A camera lens uses refraction to focus light on to the area where the film is located inside
the camera. Refraction is caused by a change in the direction of light as it passes through
the curved glass.
2. To help the viewers recognize what things and people look like and where they
are in relation to one another and to their immediate environment.
4. Lighting is an area of the graphics pipeline that has seen no limits to its
complexity and continues to promote active research.
9. For one thing, so far, we’ve talked about rendering triangle-by-triangle, whereas
photoreal rendering is generally done pixel-by-pixel, but we will look at these
techniques in more detail in a later lecture.
Types
• Ambient Light
• Directional Lights
• Spot Lights
• Point Lights
Ambient Light
• An ever present light
• `Floods’ the scene
• No highlights
• No Shadows
• Good for Base Lighting
Directional Light
• Light from a single direction.
• Like sun-light.
• Has shadows
• Has highlights.
• A good basic light.
Spot Light
• A theatrical spot-light.
• Has shadows
• Radiates out in a cone
• Has fall-off
• Has penumbra
• Very powerful.
Point Light
• A local light
• Radiates in all directions
• Great `filler’ light
• Has shadows
• Can really punch up a scene.
• Very dramatic
3 Point Lighting
• Most Common lighting scheme
• Three lights:
Key light
The Key light establishes the dimension, form and surface detail of subject matter.
Fill light
The Fill light fills in the shadows created by the horizontal and vertical angles of the key
light. The fill light should be placed about 90-degrees away from the key light.
Back light
The function of the Back light is to separate the subject from the background by creating
a subtle rim of light around the subject.
What is "Audio"?
Audio means "of sound" or "of the reproduction of sound". Specifically, it refers to the
range of frequencies detectable by the human ear — approximately 20Hz to 20kHz. It's
not a bad idea to memorise those numbers — 20Hz is the lowest-pitched (bassiest) sound
we can hear, 20kHz is the highest pitch we can hear.
Audio work involves the production, recording, manipulation and reproduction of sound
waves. To understand audio you must have a grasp of two things:
1. Sound Waves: What they are, how they are produced and how we hear them.
2. Sound Equipment: What the different components are, what they do, how to
choose the correct equipment and use it properly.
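As a small illustration of sound as a frequency within the audible range, the following sketch uses only Python's standard library to write a two-second 440 Hz test tone to a WAV file (the file name test_tone.wav is arbitrary).

    import math
    import struct
    import wave

    SAMPLE_RATE = 44100   # samples per second, comfortably above twice the 20 kHz upper limit
    FREQUENCY = 440.0     # test-tone pitch in Hz; anything between ~20 Hz and ~20 kHz is audible
    DURATION = 2.0        # seconds
    AMPLITUDE = 0.5       # fraction of full scale

    with wave.open("test_tone.wav", "w") as wav:
        wav.setnchannels(1)   # mono
        wav.setsampwidth(2)   # 16-bit samples
        wav.setframerate(SAMPLE_RATE)
        for i in range(int(SAMPLE_RATE * DURATION)):
            sample = AMPLITUDE * math.sin(2 * math.pi * FREQUENCY * i / SAMPLE_RATE)
            wav.writeframes(struct.pack("<h", int(sample * 32767)))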
Microphones
How They Work
MICROPHONE PATTERNS
These are polar graphs of the output produced vs. the angle of the sound source. The
output is represented by the radius of the curve at the incident angle.
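The patterns described below can be expressed as simple gain-versus-angle formulas. This sketch uses the textbook approximations (omni = 1, figure-of-eight = cos theta, cardioid = half of 1 + cos theta); real microphones only approximate these curves, and do so differently at different frequencies.

    import math

    def pattern_gain(pattern, angle_deg):
        # Relative output for sound arriving angle_deg off-axis (0 = directly in front).
        theta = math.radians(angle_deg)
        if pattern == "omni":
            return 1.0                          # equal pickup from every direction
        if pattern == "bidirectional":
            return abs(math.cos(theta))         # figure-of-eight: nulls at 90 and 270 degrees
        if pattern == "cardioid":
            return 0.5 * (1 + math.cos(theta))  # heart shape: null directly behind (180 degrees)
        raise ValueError(pattern)

    for angle in (0, 90, 180):
        print(angle, {p: round(pattern_gain(p, angle), 2)
                      for p in ("omni", "bidirectional", "cardioid")})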
Omni
The simplest mic design will pick up all sound, regardless of its point of origin, and is
thus known as an omni directional microphone. They are very easy to use and generally
have good to outstanding frequency response. To see how these patterns are produced,
here's a sidebar on directional microphones.
Bi-directional
It is not very difficult to produce a pickup pattern that accepts sound striking the front or
rear of the diaphragm, but does not respond to sound from the sides. This is the way any
diaphragm will behave if sound can strike the front and back equally. The rejection of
undesired sound is the best achievable with any design, but the fact that the mic accepts
sound from both ends makes it difficult to use in many situations. Most often it is placed
above an instrument. Frequency response is just as good as an omni, at least for sounds
that are not too close to the microphone.
Cardioid
This pattern is popular for sound reinforcement or recording concerts where audience
noise is a possible problem. The concept is great, a mic that picks up sounds it is pointed
at. The reality is different. The first problem is that sounds from the back are not
completely rejected, but merely reduced about 10-30 dB. This can surprise careless users.
The second problem, and a severe one, is that the actual shape of the pickup pattern
varies with frequency. For low frequencies, this is an omni directional microphone. A
mic that is directional in the range of bass instruments will be fairly large and expensive.
Furthermore, the frequency response for signals arriving from the back and sides will be
uneven; this adds an undesired coloration to instruments at the edge of a large ensemble,
or to the reverberation of the concert hall.
A third effect, which may be a problem or may be a desired feature, is that the
microphone will emphasize the low frequency components of any source that is very
close to the diaphragm. This is known as the "proximity effect", and many singers and
radio announcers rely on it to add "chest" to a basically light voice. Close, in this context,
is related to the size of the microphone, so the nice large mics with even back and side
frequency response exhibit the strongest presence effect. Most cardioid mics have a built-in
low-cut filter switch to compensate for proximity. Mis-setting that switch can cause
hilarious results. Bidirectional mics also exhibit this phenomenon.
Tighter Patterns
It is possible to exaggerate the directionality of cardioid-type microphones, if you don't
mind exaggerating some of the problems. The hyper-cardioid pattern is very popular, as
it gives better overall rejection and a flatter frequency response at the cost of a small back
pickup lobe. This is often seen as a good compromise between the cardioid and
bidirectional patterns. A "shotgun" mic carries these techniques to extremes by mounting
the diaphragm in the middle of a pipe. The shotgun is extremely sensitive along the main
axis, but possesses pronounced extra lobes which vary drastically with frequency. In fact,
the frequency response of this mic is so bad it is usually electronically restricted to the
voice range, where it is used to record dialogue for film and video.
Stereo microphones
You don't need a special microphone to record in stereo, you just need two. A so called
stereo microphone is really two microphones in the same case. There are two kinds:
extremely expensive professional models with precision matched capsules, adjustable
capsule angles and remote switching of pickup patterns; and very cheap units (often with
the capsules oriented at 180 deg.) that can be sold for high prices because they have the
word stereo written on them.
Typical Placement
Single microphone use
Use of a single microphone is pretty straightforward. Having chosen one with appropriate
sensitivity and pattern, (and the best distortion, frequency response, and noise
characteristics you can afford), you simply mount it where the sounds are. The practical
range of distance between the instrument and the microphone is determined by the point
where the sound overloads the microphone or console at the near end, and the point
where ambient noise becomes objectionable at the far end. Between those extremes it is
largely a matter of taste and experimentation.
If you place the microphone close to the instrument, and listen to the results, you will find
the location of the mic affects the way the instrument sounds on the recording. The
timbre may be odd, or some notes may be louder than others. That is because the various
components of an instrument's sound often come from different parts of the instrument
body (the highest note of a piano is nearly five feet from the lowest), and we are used to
hearing an evenly blended tone. A close in microphone will respond to some locations on
the instrument more than others because the difference in distance from each to the mic is
proportionally large. A good rule of thumb is that the blend zone starts at a distance of
about twice the length of the instrument. If you are recording several instruments, the
distance between the players must be treated the same way.
If you place the microphone far away from the instrument, it will sound as if it is far
away from the instrument. We judge sonic distance by the ratio of the strength of the
direct sound from the instrument (which is always heard first) to the strength of the
reverberation from the walls of the room. When we are physically present at a concert,
we use many cues beside the sounds to keep our attention focused on the performance,
and we are able to ignore any distractions there may be. When we listen to a recording,
we don't have those visual clues to what is happening, and find anything extraneous that
is very audible annoying. For this reason, the best seat in the house is not a good place to
record a concert. On the other hand, we do need some reverberation to appreciate certain
features of the music. (That is why some types of music sound best in a stone church)
Close microphone placement prevents this. Some engineers prefer to use close miking
techniques to keep noise down and add artificial reverberation to the recording, others
solve the problem by mounting the mic very high, away from audience noise but where
adequate reverberation can be found.
Stereo
Stereo sound is an illusion of spaciousness produced by playing a recording back through
two speakers. The success of this illusion is referred to as the image. A good image is one
in which each instrument is a natural size, has a distinct location within the sound space,
and does not move around. The main factors that establish the image are the relative
strength of an instrument's sound in each speaker, and the timing of arrival of the sounds
at the listener's ear. In a studio recording, the stereo image is produced artificially. Each
instrument has its own microphone, and the various signals are balanced in the console as
the producer desires. In a concert recording, where the point is to document reality, and
where individual microphones would be awkward at best, it is most common to use two
mics, one for each speaker.
Spaced microphones
The simplest approach is to assume that the speakers will be eight to ten feet apart, and
place two microphones eight to ten feet apart to match. Either omnis or cardioids will
work. When played back, the results will be satisfactory with most speaker arrangements.
(I often laugh when I attend concerts and watch people using this setup fuss endlessly
with the precise placement of the mics. This technique is so forgiving that none of their
efforts will make any practical difference.)
The big disadvantage of this technique is that the mics must be rather far back from the
ensemble, at least as far as the distance from the leftmost performer to the rightmost.
Otherwise, those instruments closest to the microphones will be too prominent. There is
usually not enough room between stage and audience to achieve this with a large
ensemble, unless you can suspend the mics or have two very tall stands.
Coincident cardioids
There is another disadvantage to the spaced technique that appears if the two channels are
ever mixed together into a monophonic signal. (Or broadcast over the radio, for similar
reasons.) Because there is a large distance between the mics, it is quite possible that
sound from a particular instrument would reach each mic at slightly different times.
(Sound takes 1 millisecond to travel a foot.) This effect creates phase differences between
the two channels, which results in severe frequency response problems when the signals
are combined. You seldom actually lose notes from this interference, but the result is an
uneven, almost shimmery sound. The various coincident techniques avoid this problem
by mounting both mics in almost the same spot.
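To see why mixing spaced microphones to mono gives an uneven response, this small sketch estimates the first cancelled frequencies for a given path-length difference, using the rule of thumb quoted above that sound travels roughly one foot per millisecond. The three-foot difference is just an example.

    SPEED_OF_SOUND_FT_PER_S = 1130   # roughly one foot per millisecond, as noted above

    def comb_filter_nulls(path_difference_ft, how_many=4):
        # Frequencies (Hz) cancelled when the two spaced-mic signals are summed to mono.
        # Nulls fall where the extra path equals an odd number of half wavelengths.
        delay = path_difference_ft / SPEED_OF_SOUND_FT_PER_S   # seconds
        return [(2 * k + 1) / (2 * delay) for k in range(how_many)]

    # A source three feet closer to one microphone than to the other
    print([round(f) for f in comb_filter_nulls(3.0)])   # approx. [188, 565, 942, 1318] Hz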
This is most often done with two cardioid microphones, one pointing slightly left, one
slightly right. The microphones are often pointing toward each other, as this places the
diaphragms within a couple of inches of each other, totally eliminating phase problems.
No matter how they are mounted, the microphone that points to the left provides the left
channel. The stereo effect comes from the fact that the instruments on the right side are
on-axis for the right channel microphone and somewhat off-axis (and therefore reduced
in level) for the other one. The angle between the microphones is critical, depending on
the actual pickup pattern of the microphone. If the mics are too parallel, there will be
little stereo effect. If the angle is too wide, instruments in the middle of the stage will
sound weak, producing a hole in the middle of the image. [Incidentally, to use this
technique, you must know which way the capsule actually points. There are some very
fine German cardioid microphones in which the diaphragm is mounted so that the pickup
is from the side, even though the case is shaped just like many popular end addressed
models. (The front of the mic in question is marked by the trademark medallion.) I have
heard the results where an engineer mounted a pair of these as if the axis were at the end.
You could hear one cello player and the tympani, but not much else.]
You may place the microphones fairly close to the instruments when you use this
technique. The problem of balance between near and far instruments is solved by aiming
the mics toward the back row of the ensemble; the front instruments are therefore off axis
and record at a lower level. You will notice that the height of the microphones becomes a
critical adjustment.
M.S.
The most elegant approach to coincident miking is the M.S. or middle-side technique.
This is usually done with a stereo microphone in which one element is omnidirectional,
and the other bidirectional. The bidirectional element is oriented with the axis running
parallel to the stage, rejecting sound from the center. The omni element, of course, picks
up everything. To understand the next part, consider what happens as instrument is
moved on the stage. If the instrument is on the left half of the stage, a sound would first
move the diaphragm of the bidirectional mic to the right, causing a positive voltage at the
output. If the instrument is moved to center stage, the microphone will not produce any
signal at all. If the instrument is moved to the right side, the sound would first move the
diaphragm to the left, producing a negative voltage. You can then say that instruments on
one side of the stage are 180 degrees out of phase with those on the other side, and the
closer they are to the center, the weaker the signal produced.
M.S. produces a very smooth and accurate image, and is entirely mono compatible. The
only reason it is not used more extensively is the cost of the special microphone and
decoding circuit, well over $1,000.
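The decoding itself is just a sum and a difference: left = M + S and right = M - S. The following sketch shows the idea on plain sample lists; the side_gain parameter is an illustrative extra, not part of the classic decoding circuit, that lets you widen or narrow the stereo image.

    def ms_decode(mid, side, side_gain=1.0):
        # mid and side are lists of samples from the omni (M) and bidirectional (S) capsules.
        # left = M + S, right = M - S; summing left and right gives back 2 x M, which is why
        # the technique is fully mono compatible.  side_gain widens or narrows the image.
        left = [m + side_gain * s for m, s in zip(mid, side)]
        right = [m - side_gain * s for m, s in zip(mid, side)]
        return left, right

    # A source left of centre produces a positive side signal, so it comes out louder on the left.
    print(ms_decode([0.5, 0.5, 0.5], [0.2, 0.2, 0.2]))   # ([0.7, 0.7, 0.7], [0.3, 0.3, 0.3])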
Types of Microphones
Different Types of Microphones
The use of different types of microphones in a church setting will enhance the tone,
sound level and dynamics of the music and, importantly, the way the instruments are
made to sound. Having at least one microphone for each instrument your church uses
is a good baseline. The more microphones your church has, the better off
you’ll be in the future.
Windshields/windscreens
These can be bought separately and are attached to the microphone or placed in front of
it. They help cut down on wind noise and p-popping (the distortion caused by the sudden
rush of air if you say plosive consonants like p, b and g directly into a mic).
Audio Meters
An “audio-first” approach is available in many different contexts and basic video
production programs already have the necessary tools to make it work. Classroom
instruction begins with recording techniques: microphone placement, avoidance of wind
noise, avoidance of mic and cable noise, awareness of noise floor of different devices,
monitoring, slating takes, and use of sound logs. Students work with omni-directional and
directional microphones, and condenser and dynamic microphones. As we have both
analog and digital equipment available, I first introduce students to the warmth of analog
recording (traditional cassette recorders) and then the uniqueness of digital recorders
(DATs, mini discs, or hard disk recorders). Each device presents its own unique
problems: drop-out vs. distortion, the necessity of the limiter in digital recording, and
differences between monitoring with a peak meter as compared to a VU meter.
Audio mixer
Definition
· An audio mixer is an electronic console that is used to mix different recorded tracks by
changing their volume levels, adding effects and changing the timbre of each instrument
on the tracks.
Names
Audio mixers are also called mixing consoles and soundboards
Use
· Audio mixers are most often used by recording studios but are also typically used in live
situations by live sound engineers.
Types
There are two types of mixers, digital and analog, and both are commonly used by the
same recording studio to achieve different results.
Mixing consoles are used in many applications, including recording studios, public
address systems, sound reinforcement systems, broadcasting, television, and film postproduction.
An example of a simple application would be to enable the signals that originated from
two separate microphones (each being used by vocalists singing a duet, perhaps) to be
heard through one set of speakers simultaneously. When used for live performances, the
signal produced by the mixer will usually be sent directly to an amplifier, unless that
particular mixer is "powered" or it is being connected to powered speakers.
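At its core, what a mixing console does to combine signals is scale each track by its fader gain and sum the results, as in this minimal sketch (the clipping step at the end simply stands in for the overload behaviour of a real console; the track names and values are made up).

    def mix(tracks, gains):
        # Sum several tracks (lists of samples, all the same length) after applying a gain
        # to each; the clip at the end stands in for console overload.
        mixed = []
        for samples in zip(*tracks):
            value = sum(g * s for g, s in zip(gains, samples))
            mixed.append(max(-1.0, min(1.0, value)))
        return mixed

    vocal_1 = [0.30, 0.40, 0.20]
    vocal_2 = [0.10, -0.20, 0.50]
    print(mix([vocal_1, vocal_2], gains=[0.8, 0.6]))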
Broadcast Standards
Broadcasting is the sending of audio or video content to a wide audience using the
electromagnetic spectrum (radio waves), for a mass medium like TV or radio. It is a
classic example of the one-to-many model of communication.
Transmission of programs from a radio or television station to home receivers over the
spectrum is referred to as OTA (over-the-air) or terrestrial broadcasting and in most
countries requires a broadcasting license. Standardization is therefore required to make
the relay of messages possible. The standardization of formats across various set-ups
is essential.
The three most widely accepted broadcast standards are:
PAL (Phase Alternating Line): The PAL standard was introduced in the early 1960s and
implemented in most European countries except France. The PAL standard utilizes a wider
channel bandwidth than NTSC, which allows for better picture quality. Used almost
everywhere else in the world, it displays 625 lines of resolution at a frame rate of
25 frames per second.
The camera we use today to shoot our home videos, news or even films was not made in a day.
It took years of scientific research and development, first on the still camera, then on film, and,
with the advent of recording media, on video. The following are a few landmark dates in the
history of the video camera.
1888: Thomas Edison, the inventor of the light bulb, had another light bulb moment in
1888, when he filed a patent for the 'Kinetoscope', a device which 'does for the eye what
the phonograph does for the ear'.
1895: The Lumière brothers patented the cinematograph, a device for capturing,
developing and projecting moving images.
1912: Bell and Howell introduce the first all-metal camera after their wood-and-leather
camera was destroyed by termites.
1927: Philo Farnsworth's video camera tube converts images into electrical signals.
1983: With the advent of CCDs as image sensors, Sony introduces the first one-piece
video camcorder, Betamovie. But by this point the Betamax format is already losing the
war with VHS.
2001: Once Upon a Time in Mexico is the first mainstream movie filmed in
24-frame-per-second high-definition digital video.
2007: The RED One, the first 4K-resolution digital camera, revolutionizes
digital filmmaking.
2009: Slumdog Millionaire is the first film shot mostly in digital to win the
Academy Award for Best Cinematography
2012: Felix Baumgartner straps on five GoPro video cameras before his historic 24-mile
skydive. YouTube sets its live-stream record as more than 8 million tune in.
Webcams are video cameras which stream a live video feed to a computer.
Camera phones - nowadays most video cameras are incorporated into mobile
phones.
Special camera systems are used for scientific research, e.g. on board a satellite or
a space probe (for example the Hubble Space Telescope), in artificial intelligence and
robotics research, and in medical use. Such cameras are often tuned for non-visible
radiation such as infrared (for night vision and heat sensing) or X-rays (for medical and video
astronomy use).
1. Camera lens: It consists of one or more pieces of glass that focus and frame an
image within the camera. The lens contains a focus ring, zoom ring and aperture control
ring that allow the camera operator to adjust the frame and light in accordance with the
subject.
2. Lens shade or hood protects the lens from picking up light distortions (called lens
glare) from the sun or a bright light and saves it from direct heat and dust when shooting
outdoors.
3. The power zoom switch: Located on the side of the lens, it allows the camera
operator to electronically zoom the lens. The speed of the zoom can be varied,
depending on the pressure applied to the switch.
5. Viewfinder: The viewfinder contains a small screen with a magnifying lens that
enlarges the image to be viewed by the camera operator. Depending on the camera, a
viewfinder can come in various shapes and sizes.
Viewfinder Types
The viewfinder of a camcorder can be a CRT, tube-type (like those used in most TV
sets), or a flat, LCD type (similar to those in laptop computers). CRT stands for cathode
ray tube; LCD for liquid crystal display. Unlike studio cameras that typically use at least
seven-inch displays, the viewfinders for camcorders must be much smaller. They
typically use a miniature CRT with a magnifying eyepiece, or, as shown below, a small
LCD screen.
LCD swing-out viewfinder. An increasing number of video cameras are fitted with a
foldout rectangular LCD screen (liquid crystal display), which is typically 2.5 to 3.5
inches wide and shows the shot in colour. It is lightweight and conveniently folds flat
against the camera body when out of use. However, stray light falling onto the screen
can degrade its image, making it more difficult for the camera operator to focus and to
judge the picture quality.
Image sensor
It is a kind of camera device which takes the output of each of the three colour channels
(RGB) and recombines them into one colour signal, including both chrominance and
luminance. This is the encoded colour signal, and when it is displayed on a monitor or
receiver we see the camera shot in full colour.
The two principal components of the colour television signal are luminance and
chrominance. Luminance refers to the black-and-white brightness information; every
colour television signal contains a luminance signal as well as the colour information.
Chrominance is the colour information, and includes two components: hue and
saturation. Hue refers to the colour itself: red, green, blue and so on, and saturation
refers to the intensity of the colour.
Once the white light that enters the lens has been divided into the three primary colours,
each light beam must be translated into electrical signals. The principal electronic
component that converts light into electricity is called the imaging device or image
sensor*. These imaging devices are pickup tube, CCD (Charge-coupled device) or
CMOS (Complementary metal oxide semiconductor).
All of the highest-quality colour cameras use three image sensors. These cameras
produce the best picture because each colour is assigned to its own CCD or pick up
tube thereby ensuring the highest amount of control over the signal of each.
Pick up tube
In older video cameras, before the 1990s, a video camera tube or pickup tube was used
instead of a charge-coupled device (CCD). Several types were in use from the 1930s to
the 1980s. These tubes are a type of cathode ray tube.
The camera pickup tube in a television camera is a small vacuum tube, usually 3" or
4" long and 1/2", 2/3" or 1" in diameter. The tube has several main components. The
tube itself is made of glass. Attached to the inside of the glass face of the tube is an
extremely thin, transparent photosensitive coating. Next to this is a layer of
photoconductive material. Immediately behind the photoconductive layer is the target,
which has a slight positive electrical charge when the camera is turned on. Light from
the scene that is recorded is focused by the lens of the camera onto the face of the
pickup tube. It passes through the glass and the photosensitive coating, onto the
photoconductive layer. As light hits the photoconductive layer it causes the charge on
the target to change in proportion to the relative intensity of the light.
The video signal is produced as the target is scanned by a beam of electrons. This
beam of electrons is emitted by the electron gun at the back of the pickup tube. The
electron beam is focused on the target plate and scans it in a series of horizontal lines.
Each line of the picture is composed of about 500 dots, or bits of information.
Bright spots on the photoconductive layer change the charge on the target, and when
that particular spot is scanned by the electron beam a greater number of electrons pass
through the target than pass through in places where the image on the photoconductive
layer is darker. These changes in the charge on the target plate produce the video signal.
CCD (charge-coupled device)
A charge-coupled device (CCD) is a light-sensitive integrated circuit that stores and
displays the data for an image in such a way that each pixel (picture element) in the
image is converted into an electrical charge the intensity of which is related to a colour
in the colour spectrum.
A CCD normally contains hundreds of thousands or millions of image-sensing
elements, called pixels (a word made up of pix, for picture, and els, for elements), that
are arranged in horizontal and vertical rows. Pixels function very much like the tiles that
compose a complete mosaic image. A certain number of such elements is needed to
produce a recognizable image.
An image is projected by a lens on the capacitor array (the photoactive region), causing
each capacitor to accumulate an electric charge proportional to the light intensity at that
location. Digital colour cameras generally use a Bayer mask over the CCD. Each
square of four pixels has one filtered red, one blue, and two green (the human eye is
more sensitive to green than either red or blue). The result of this is that luminance
information is collected at every pixel, but the colour resolution is lower than the
luminance resolution.
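A minimal sketch of the layout described above; it only prints which colour filter covers each photosite and assumes the common RGGB arrangement, in which rows alternate red/green and green/blue.

    def bayer_colour(row, col):
        # Which colour filter covers the photosite at (row, col) in an RGGB Bayer mosaic:
        # every 2 x 2 block holds one red, one blue and two green filters.
        if row % 2 == 0:
            return "R" if col % 2 == 0 else "G"
        return "G" if col % 2 == 0 else "B"

    # Print the filter layout of a 4 x 8 corner of the sensor
    for r in range(4):
        print(" ".join(bayer_colour(r, c) for c in range(8)))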
[Images: CCD and CMOS image sensors]
Aperture/iris:
It is an adjustable opening, which controls the amount of light coming
through the lens (i.e. the "exposure"). The video camera iris works in basically the same
way as a still camera iris -- as you open the iris, more light comes in and the picture
appears brighter.
The difference is that with video cameras, the picture in the viewfinder changes
brightness as the iris is adjusted. Professional cameras have an iris ring on the lens
housing, which you turn clockwise to close and anticlockwise to open. Consumer-level
cameras usually use either a dial or a set of buttons. You will probably need to select
manual iris from the menu.
Professional cameras have an additional feature called zebra stripes which can help
you to judge exposure. The stripes show the areas in the frame which are overexposed.
As in still photography, the aperture ranges from f/2 or f/2.4 up to f/32 or f/64,
depending on the lens.
Shutter speed: The term shutter comes from still photography, where it describes a
mechanical "door" between the camera lens and the film. When a photo is taken, the
door opens for an instant and the film is exposed to the incoming light. The speed at
which the shutter opens and closes can be varied — the faster the speed, the shorter
the period of time the shutter is open, and the less light falls on the film.
Shutter speed is measured in fractions of a second. A speed of 1/60 second means that
the shutter is open for one sixtieth of a second. A speed of 1/500 is faster, and 1/10000
is very fast indeed. Video camera shutters work quite differently from still film camera
shutters but the result is basically the same. The shutter "opens" and "closes" once for
each frame of video; that is, 25 times per second for PAL and 30 times per second for NTSC.
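Shutter speed and aperture trade off against each other: the combined exposure can be summarised by the exposure value, EV = log2(N squared / t). This small sketch (the particular f-numbers and speeds are only examples) shows that opening the iris one stop while halving the exposure time leaves the overall exposure unchanged.

    import math

    def exposure_value(f_number, shutter_seconds):
        # EV = log2(N^2 / t); each +1 EV means half as much light reaches the sensor.
        return math.log2(f_number ** 2 / shutter_seconds)

    # Opening the iris one stop while halving the exposure time leaves the exposure unchanged.
    print(round(exposure_value(5.6, 1 / 50), 1))    # about 10.6 EV
    print(round(exposure_value(4.0, 1 / 100), 1))   # about 10.6 EV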
Focus:
The ability to manually focus your camera is a critical skill at any level of
video production. When the subject is zoomed in on and focussed, it is called sharp
focus. When it is not in sharp focus, but clear enough, it is called soft focus. When the
focus shifts from one subject to another within the same shot, it is called shift focus or rack
focus. The area of acceptable sharpness in front of and behind the point of focus is
called depth of field. The depth of field increases as we close down the aperture, and
decreases when we open it up. This means that the depth of field at f/22 will be greater
than that at f/2.
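The effect of the aperture on depth of field can be estimated with the hyperfocal-distance approximation. The sketch below assumes a 0.03 mm circle of confusion and a 50 mm lens focused at 3 m, purely for illustration; real figures depend on the sensor and on how sharp "acceptably sharp" has to be.

    def depth_of_field(focal_mm, f_number, subject_m, coc_mm=0.03):
        # Simple hyperfocal approximation of the near and far limits of acceptable sharpness.
        # coc_mm is the assumed circle of confusion (0.03 mm is a common full-frame figure).
        s = subject_m * 1000                               # work in millimetres
        hyperfocal = focal_mm ** 2 / (f_number * coc_mm)
        near = hyperfocal * s / (hyperfocal + s)
        far = hyperfocal * s / (hyperfocal - s) if s < hyperfocal else float("inf")
        return near / 1000, far / 1000                     # back to metres

    # Closing down from f/2 to f/22 greatly increases the zone of sharpness, as described above.
    for n in (2, 22):
        near, far = depth_of_field(focal_mm=50, f_number=n, subject_m=3)
        print(f"f/{n}: acceptably sharp from {near:.2f} m to {far:.2f} m")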
Zoom
The zoom is the function which moves your point of view closer to, or further away from,
the subject by narrowing the angle of view. The two most common zoom mechanisms,
digital and optical, are described below.
Digital Zoom
This is often trumpeted as a big selling point by manufacturers. It's common to see a
large "150X" emblazoned on the side of a camcorder. Video stores are full of naive
customers comparing the digital zoom of different cameras.
Digital zoom works by magnifying a part of the captured image using digital
manipulation. This is the same as how a graphics program resizes an image to a larger
size. The process involves taking a certain number of pixels and creating a larger
image, but because the new image is based on the same number of pixels, the image
loses quality. At small zooms (up to 20x) the loss may not be too noticeable. At large
zooms (up to 100x or more) the quality becomes absolutely terrible.
Remember that digital zoom can be done in post-production with any half-decent editing
software, so you really gain nothing by having the camera do it.
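A toy illustration of why digital zoom degrades the picture: the frame is represented as a small grid of numbers, the centre is cropped, and the crop is blown back up to the original size by repeating pixels, so no new detail is created.

    def digital_zoom(image, factor):
        # Crop the centre of a 2-D list of pixel values, then blow the crop back up to the
        # original size by repeating pixels (nearest neighbour).  No new detail is created,
        # which is why heavy digital zoom looks blocky.
        h, w = len(image), len(image[0])
        ch, cw = max(1, int(h / factor)), max(1, int(w / factor))
        top, left = (h - ch) // 2, (w - cw) // 2
        crop = [row[left:left + cw] for row in image[top:top + ch]]
        return [[crop[int(r * ch / h)][int(c * cw / w)] for c in range(w)] for r in range(h)]

    frame = [[r * 10 + c for c in range(6)] for r in range(6)]
    for row in digital_zoom(frame, factor=3):
        print(row)   # only the four centre pixels remain, each repeated as a 3 x 3 block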
Optical Zoom
This is the zoom spec which matters. Optical zoom is provided by the lens (i.e. the
optics) and does not lose image quality. The zoom is provided by a telephoto lens
mechanism. It works by narrowing the angle of view while still using the full area of the
sensor, so no resolution is lost.
White balance:
The colour of the light depends on the source, but our eyes adjust
accordingly and see the true colour in any light. The camera on the other hand needs to
be given the reference of white in all lights. This process is called white balance. It's a
function which tells the camera what each colour should look like, by giving it a "true
white" reference. If the camera knows what white looks like, then it will know what all
other colors look like.
a) Either the camera has a colour temperature preset, through which one can
choose what colour temperature one is shooting at.
b) Or the second method is manual: point the camera at something matt (non-reflective)
white in the same light as the subject, and frame it so that most or all of the picture is
white. Set your focus and exposure, then press the "white balance" button (or throw the
switch). There should be some indicator in the viewfinder which tells you when the white
balance has completed.
[Images: correct white balance; balanced for tungsten; balanced for daylight]
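Automatic white balance can be sketched with the simple "grey world" assumption (the scene should average out to neutral grey). Real cameras use more sophisticated sensors and algorithms, so treat this only as an illustration; the pixel values are made up to mimic a warm tungsten cast.

    def grey_world_white_balance(pixels):
        # pixels is a list of (r, g, b) tuples in the range 0-255.  Each channel is scaled
        # so that the channel averages match, pushing the overall cast towards neutral grey.
        n = len(pixels)
        avg = [sum(p[ch] for p in pixels) / n for ch in range(3)]
        grey = sum(avg) / 3
        gains = [grey / a for a in avg]
        return [tuple(min(255, round(v * g)) for v, g in zip(p, gains)) for p in pixels]

    # A scene lit by warm tungsten light: too much red, not enough blue.
    warm_pixels = [(200, 160, 110), (180, 150, 100), (220, 170, 120)]
    print(grey_world_white_balance(warm_pixels))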
A filter wheel behind the lens may hold several filters, including:
• A fluorescent light filter, which can reduce the blue-green effect of fluorescent lights
• One or more special effects filters, including the previously discussed star filter
• An opaque "lens cap," which blocks all light going through the lens
Although the filters shown here are located behind the lens, it should be noted that some
filters, such as polarizing filters, must be mounted in front of the camera lens to be most
effective.
Aspect Ratio
An image's Aspect Ratio, or AR, represents a
comparison of its width to height. Notation for Aspect
Ratio is normally in the form of X:Y, where X
represents screen width and Y represents height.
A standard analog TV has an AR of 4:3, which means
that for every 4 units of width it is 3 units high. In recent
research and development of high-definition television (HDTV), HDTV systems differ from
existing conventional television systems and use a wider 16:9 aspect ratio.
So the framing of the image differs with keeping the aspect ratio of the broadcast
standard, or output medium in mind.
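Reducing a frame size to its X:Y aspect ratio is a one-line calculation, as in this small sketch:

    from math import gcd

    def aspect_ratio(width, height):
        # Reduce a frame size to its simplest X:Y form.
        d = gcd(width, height)
        return f"{width // d}:{height // d}"

    print(aspect_ratio(640, 480))     # 4:3  (standard-definition analogue TV)
    print(aspect_ratio(1920, 1080))   # 16:9 (HDTV)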
Because of over scanning and other types of image loss between the camera and the
home receiver, an area around the sides of the TV camera image is cut off before being
seen. To compensate for this, directors must assume that about ten percent of the
viewfinder picture may not be visible on the home receiver.
This area (framed by the lines in the photo) is referred to by various names including
safe area and essential area. All written material should be confined to an “even safer”
area, the safe title area (the area inside the blue frame).
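The ten percent figure translates directly into a centred rectangle. The sketch below assumes a 720 x 576 PAL frame and treats 90% as the safe (action) area and 80% as the safe title area; these are common working figures rather than values given in the text.

    def safe_area(width, height, fraction=0.9):
        # Centre rectangle of the frame assumed to survive overscan on home receivers.
        # fraction=0.9 reflects the roughly ten percent loss described above; 0.8 is a
        # common working figure for the tighter safe title area.
        w, h = round(width * fraction), round(height * fraction)
        x, y = (width - w) // 2, (height - h) // 2
        return x, y, w, h

    print(safe_area(720, 576))         # action-safe area for a 720 x 576 PAL frame
    print(safe_area(720, 576, 0.8))    # tighter safe title area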
Studio cameras:
Most television studio cameras stand on the floor and are usually on
wheels; the wheeled tripod is called a dolly. Any video camera used along with
other video cameras in a multiple-camera setup is controlled by a device known as a CCU
(camera control unit) in the production control room (PCR). Studio cameras are bulky
and have no recording compartments, as they do not need to be taken out into the
field.
ENG (Electronic news gathering): ENG cameras are larger and heavier (helps
dampen small movements), and usually supported by a camera shoulder support or
shoulder stock on the camera operator's shoulder, taking the weight off the hand, which
is freed to operate the zoom lens control.
The lens is focused manually and directly, without intermediate servo controls. However,
the lens zoom and focus can be operated with remote controls in a television studio
configuration, operated by a camera control unit (CCU), in the case of an outdoor broadcast.
Others: Remote cameras are used for scientific research and areas that humans usually
cannot reach. CCTV is used for surveillance, and lipstick cameras or pen
cameras are used for conducting sting operations.
Dollies
A dolly is a camera platform or support device on wheels which allows the camera to
move smoothly. Although it is widely used in smaller studios, the rolling tripod dolly does
not lend itself to subtle camerawork; camera moves tend to be approximate. Camera
height is adjusted by resetting the heights of the tripod legs, so height changes while
shooting are not practical unless a jib is attached to the dolly, and the dolly base should
be adjustable.
This type of mount is used for remote productions and in
some small studios. Unlike the elaborate studio pedestal that can be smoothly rolled
across a studio floor-even while the camera is on the air-the wheels on small dollies are
intended primarily to move the camera from place to place between shots.
Slider Mounts
Sliders are handy extensions which can be added either to tripods or placed directly on
the floor. As the name suggests, the equipment is used to give a slight sliding effect,
either right to left or front to back. Because of the short length, the effect cannot be
sustained for a long interval of time.
Camera Jibs
A device that has come into wide use in the last decade
is the camera jib, which is essentially a long, highly
maneuverable boom or crane-like device with a
mounted camera at the end. You will frequently see
them in action swinging over large crowds at concerts
and events with large audiences.
A jib allows sweeping camera movements from ground
level to 9 or more meters (30 or more feet) in the air.
Camera Tracks
For elaborate productions camera tracks may be
installed that allow the camera to smoothly follow
talent or move through a scene. Although a camera
operator can ride with the camera (as shown here),
some cameras are remotely controlled.
Spider Cam
Looking like a giant spider with a TV camera in its nose, the
spider cam shown here moves along its web and can
provide aerial views of various sporting events. The whole
unit is remotely controlled by a ground observer. The video
is relayed to the production van by microwave.
Handheld Camera:
In handheld operation the camera operator holds the camera by hand,
usually because the camera has to be mobile and able to change positions quickly. This
method is most commonly used by news crews, for
documentaries, at sports events, or for shooting music
videos. In all of these situations, the camera generally
needs to move around to follow the action.
The operator places his or her right hand through a support loop on the side of the lens.
This way, the operator’s fingers are free to control the zoom rocker (servo zoom) switch
while the thumb presses the record/pause switch. The camera operator’s left hand
adjusts the manual zoom ring, the focusing ring, and the lens aperture.
Monopod:
A monopod is a telescoping rod that is attached to the base of the camera.
The monopod is an easily carried, lightweight mounting. It consists of a
collapsible metal tube of adjustable length that screws to the camera base.
This extendable tube can be set to any convenient length. Braced against
a knee, foot, or leg, the monopod can provide a firm support for the
camera yet allow the operator to move it around rapidly for a new
viewpoint. Its main disadvantage is that it is easy to accidentally lean the
camera sideways and get sloping horizons.
Tripod:
A tripod offers a compact, convenient method of holding a camera steady, provided it is
used properly. It has three legs of independently adjustable length that are spread apart
to provide a stable base for the camera. Tripod heads come in two types: friction heads
and fluid heads. Friction heads, the less expensive of the two, give fair control over
camera panning (left to right) and tilting (up and down).
Professional camera operators always use tripods with fluid heads. These are more
expensive than friction heads but are designed so that it is virtually impossible to make
a jerky horizontal or vertical camera movement. For smooth, solid camera operation,
the tripod-mounted fluid head is the usual choice.
Normal lens: A normal lens has an angle of view corresponding to the human eye,
achieved at around 50-55 mm.
Wide angle lens: The angle of view is greater than that of the human eye, achieved at less
than 50 mm. These lenses are good for shooting scenery and creating an illusion of space.
A good wide angle will have a focal range from 16 mm to 50 mm; below 16 mm, the image
starts getting distorted.
Telephoto lens: The angle of view is narrower than that of the human eye, achieved at focal lengths
greater than 55 mm. Telephoto zoom lenses are great for sports enthusiasts and nature
photographers. They allow you to get close to your subject from a safe distance. A
typical telephoto zoom will offer a focal length range of 75mm to 300mm.
Macro lenses: These open up the world of close-up photography, and are a great addition to
your equipment. Focussing is critical when working in close-up photography, so mount
the camera on a tripod and switch to manual focus for the best results.
Lens controls:
FOCUS: The focus control is the ring farthest from the camera body, on the front of the
lens. Distance settings are marked in meters and in feet. While a non-zoom (fixed focal
length) lens is focused simply by turning the ring until the image is sharp, the zoom lens
must be zoomed in to the smallest angle of view and the largest image size to adjust
focus.
Zoom: The center ring on most lenses is the zoom control. Most cameras use a rocker
switch beside the lens. This allows you to change the focal length of the lens through a
range from wide angle (short focal length) to telephoto (long focal length).
IRIS: The ring closest to the camera body controls the amount of light passing through
the lens to the light-sensitive surface of the pickup tube or chip. It is called the iris,
aperture, or f-stop control and is marked off in f-numbers. The lowest f-stop lets in the
most light, and the highest f-stop lets in the least.
Filters
Glass filters consist of a transparent colored gel sandwiched between two precisely
ground and sometimes coated pieces of glass. Filters can be placed in a circular holder
that screws over the end of the camera lens, (as shown here) or inserted into a filter
wheel behind the camera lens (to be discussed later). A type of filter that’s much
cheaper than a glass filter is the gel. A gel is a small, square or rectangular sheet of optical
plastic used in front of the lens in conjunction with a matte box, which will be illustrated
later. Among professional videographers, these filter types are referred to as round
filters and rectangular filters. There are many types of filters. We’ll only cover the most
commonly used types.
Ultraviolet Filters
News photographers often put an ultraviolet filter (UV filter) over the camera lens to
protect it from the adverse conditions encountered in ENG (electronic newsgathering)
work. A damaged filter is much cheaper to replace than a lens. Protection of this type is
particularly important when the camera is used in high winds where dirt or sleet can be
blown into the lens. By screening out ultraviolet light, the filter also slightly enhances
image color and contrast and reduces haze in distant scenes. In so doing, the filter also
brings the scene being photographed more in line with what the eye sees. Video
cameras also tend to be more sensitive to ultra-violet light, which can add a kind of haze
to some scenes. Since UV filters screen out ultra-violet light while not appreciably
affecting colors, many videographers keep an ultraviolet filter permanently over the lens
of their camera.
Although general color correction in a video camera is done through the combination of
optical and electronic camera adjustments, it’s sometimes desirable to introduce a
strong, dominant color into a scene. For example, when a scene called for a segment
shot in a photographic darkroom, one camera operator simulated a red darkroom
safelight by placing a dark red glass filter over the camera lens. (Darkrooms haven’t
used red filter safelights to print pictures for decades, but since most audiences still
think they do, directors feel they must support the myth.) If the camera has an internal
white balance sensor, the camera must be color balanced before the filter is placed over
the lens or else the white balance system will try to cancel out the effect of the colored
filter.
Polarizing Filters
Most people are familiar with the effect that polarized sunglasses have on reducing
reflections and cutting down glare. Unlike sunglasses, the effect of professional
polarizing filters can be continuously varied—and, as a result, go much farther in their
effect. Polarizing filters can:
• Reduce glare and reflections
• Deepen blue skies
• Penetrate haze
• Saturate (intensify) colors
Once its many applications are understood, a polarizing filter can become a
videographer’s most valuable filter. Simple polarizing filters need between one and
two extra f-stops of exposure. The effect of some professional polarizing filters is
adjustable. This is done by rotating the orientation of two polarizing filters used next to
each other in a single housing. When doing critical copy work, such as photographing
paintings with a shiny surface, polarizing filters can be used over the lights, as well as
the camera lens. By rotating one or more of these filters, all objectionable reflections
can be eliminated.
Day-For-Night
A common special effect—especially in the days of black and white film and television—
was the “night scene” shot in broad daylight—so-called day-for-night. In the black and
white days a deep red filter could be used over the lens to turn blue skies dark—even
black. (Recall the effect that filters have on color that we covered earlier.)
Although not quite as easy to pull off in color, the effect is now created by
underexposing the camera by at least two f-stops and either using a blue filter, or
creating a bluish effect when you white balance your camera. A contrast control filter
and careful control of lighting (including avoiding the sky in scenes) adds to the effect.
To make the “night” effect more convincing other filtering embellishments can be added
during postproduction editing. With the sensitivity of professional cameras now down to
a few foot-candles (a few dozen lux), “night-for-night” scenes are within reach.
Whatever approach you use, some experimentation will be needed using a high-quality
color monitor as a reference.
Ordinary fluorescent lamps pose color-balance problems of their own, and there are other
sources of light that are even worse, in particular the metal halide lights often used in
gymnasiums and for street lighting. Although the public has gotten used to these lighting
aberrations in news and documentary footage, when it comes to such things as drama or
commercials it’s generally a different story. As we will see, there are some color-balanced
fluorescent lamps that are not a problem, because they have been especially designed for TV
and film work. But don’t expect to find them in schools, offices, or boardrooms.
Star Filters
You’ve undoubtedly seen scenes in which “fingers of light” projected out from the sides
of shiny objects—especially bright lights. This effect is created with a glass star filter that
has a microscopic grid of crossing parallel lines cut into its surface. A four-point star
filter also slightly softens and diffuses the image.
Star filters can produce four-, five-, six-, or eight-point stars, depending on the lines
engraved on the surface of the glass. The star effect varies with the f-stop used.
A starburst filter adds color to the diverging rays. Both star filters and starburst
filters slightly reduce the overall sharpness of the image, which may or may not be
desirable.
Soft Focus and Diffusion Filters
Sometimes you may want to create a dreamy, soft-focus effect. You can do this with a soft
focus filter or a diffusion filter. These filters, which are available in various levels of
intensity, were regularly used in early cinema to give starlets a soft, dreamy appearance
(while hiding signs of aging).
A similar effect can be achieved by shooting through fine screen wire placed close to
the lens, or by shooting through a single thickness of a nylon stocking. The f-stop used
will greatly affect the level of diffusion. When using soft focus filters or diffusion
materials, it’s important to white balance your camera with these items in place.
Whenever a filter is used with a video camera, the black level of the video is raised
slightly, which can create a subtle graying of the image. Because of this, it’s advisable
to readjust the camera setup (black) level, either automatically or manually, whenever a
filter is added.
Unlike electronic special effects created during postproduction, the optical effects
created by filters during the taping of a scene can’t be undone. To make sure there are
no unpleasant surprises, it’s best to carefully check the results on location with the help
of a high-quality color monitor.
3. Shoot with different filters. Submit the footage on a DVD.
1. Lens: A piece of glass or other transparent substance with curved sides for
concentrating or dispersing light rays. A camera lens is a lens that focuses the image
onto the camera’s sensor or film.
2. Viewfinder: A device on a camera showing the field of view of the lens, used in
framing and focusing the picture.
6. Aspect Ratio: Aspect ratio refers to the shape, or format, of the image produced
by a camera, and is derived by dividing the width of the image by its height. The
aspect ratio of a 35mm film frame is 3:2. Most computer monitors and digicams have a 4:3
aspect ratio. Many digital cameras offer the option of switching between 4:3, 3:2, or 16:9.
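As a small illustration of the width-divided-by-height formula above, the Python sketch below reduces a pixel resolution to its aspect ratio; the specific resolutions used are only examples.

# Sketch: derive an aspect ratio by dividing image width by height and
# reducing the resulting fraction (e.g., 1920 x 1080 reduces to 16:9).
from math import gcd

def aspect_ratio(width: int, height: int) -> str:
    divisor = gcd(width, height)
    return f"{width // divisor}:{height // divisor}"

print(aspect_ratio(1920, 1080))  # 16:9
print(aspect_ratio(640, 480))    # 4:3
print(aspect_ratio(3600, 2400))  # 3:2 (the 35mm still-film shape)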
7. AWB (Auto White Balance): An in-camera function that automatically adjusts
the white balance (color balance) of the scene to a neutral setting regardless of the
color characteristics of the ambient light source.
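Auto white balance algorithms vary from camera to camera; purely as an illustration of the idea of pulling a scene toward a neutral setting, here is a minimal “gray world” sketch, which assumes the average color of the scene should be neutral. It is not any particular manufacturer’s method.

# Sketch of a simple "gray world" auto white balance: scale the red and blue
# channels so their averages match the green channel's average, nudging the
# image toward neutral. Illustrative only.
import numpy as np

def gray_world_awb(image: np.ndarray) -> np.ndarray:
    """image: float RGB array of shape (H, W, 3) with values in 0..1."""
    means = image.reshape(-1, 3).mean(axis=0)   # per-channel averages
    gains = means[1] / means                    # scale R and B toward G
    return np.clip(image * gains, 0.0, 1.0)

# Example: a warm (reddish) test patch is pulled toward neutral gray.
warm_patch = np.full((4, 4, 3), [0.8, 0.6, 0.4])
print(gray_world_awb(warm_patch)[0, 0])  # approximately [0.6, 0.6, 0.6]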
9. 1080p: Also known as “Full HD,” 1080p is a shorthand term for video recorded at
1920 × 1080 pixels, that is, 1,920 pixels across each line and 1,080 lines of vertical
resolution, and is optimized for 16:9 format playback. The “p” stands for progressive,
which means all of the image data is contained in each frame, as opposed to “interlaced”
(i), in which the image data is split between two fields made up of alternating lines of
the image.
10. CCD: A semiconductor device that converts optical images into electronic
signals. CCDs contain rows and columns of ultra-small, light-sensitive elements
(pixels) that, when exposed to light, generate electrical charges; read out together,
the millions of pixels collectively produce a photographic image. CCDs and CMOS
(Complementary Metal Oxide Semiconductor) sensors are the dominant technologies for
digital imaging.
12. Color Temperature: A scale for measuring the color of ambient light, with
warm (yellowish) light measured in lower numbers and cool (bluish) light measured in
higher numbers. It is measured in kelvins (commonly referred to as “degrees Kelvin”):
midday daylight is approximately 5,600 K, a candle flame is approximately 1,900 K, an
incandescent lamp is approximately 2,800 K, a photoflood lamp is 3,200 to 3,400 K, and
a midday blue sky is approximately 10,000 K.
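Purely as a convenience, the approximate values listed above can be gathered into a small lookup, as in the sketch below, which picks the nearest listed source for a given reading; the numbers are the same rough approximations given in this entry.

# Sketch: nearest-match lookup of the approximate color temperatures above
# (values in kelvins; all figures are rough approximations).
APPROX_COLOR_TEMPS_K = {
    "candle flame": 1900,
    "incandescent lamp": 2800,
    "photoflood lamp": 3400,
    "midday daylight": 5600,
    "midday blue sky": 10000,
}

def closest_source(kelvin: float) -> str:
    return min(APPROX_COLOR_TEMPS_K,
               key=lambda name: abs(APPROX_COLOR_TEMPS_K[name] - kelvin))

print(closest_source(3200))  # photoflood lamp
print(closest_source(6000))  # midday daylight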
13. Depth of Field (DOF): The measure of how much of the area in front of and
beyond your subject appears acceptably sharp. Depth of field is increased by stopping
the lens down to smaller apertures; conversely, opening the lens to a wider aperture
narrows the depth of field.
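To make the aperture dependence concrete, here is a rough sketch using the standard hyperfocal-distance approximation for depth of field; the focal length, f-numbers, subject distance, and circle-of-confusion value below are arbitrary example figures, not taken from the text.

# Sketch: approximate depth of field from focal length, f-number, and subject
# distance, using the common approximations H = f^2 / (N * c),
# near = H*s / (H + s), far = H*s / (H - s). Illustrative values only.

def depth_of_field(focal_mm: float, f_number: float, subject_m: float,
                   coc_mm: float = 0.03):
    """Return (near_limit_m, far_limit_m); the far limit may be infinite."""
    hyperfocal_m = (focal_mm ** 2) / (f_number * coc_mm) / 1000.0
    near = hyperfocal_m * subject_m / (hyperfocal_m + subject_m)
    far = (hyperfocal_m * subject_m / (hyperfocal_m - subject_m)
           if subject_m < hyperfocal_m else float("inf"))
    return near, far

# Stopping down from f/2.8 to f/11 clearly deepens the zone of sharpness:
print(depth_of_field(50, 2.8, 3.0))  # roughly (2.7, 3.3) metres
print(depth_of_field(50, 11, 3.0))   # roughly (2.1, 5.0) metres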
14. Digital Zoom: Unlike an optical zoom, which is an optically lossless function of
the camera’s zoom lens, digital zoom takes the central portion of a digital image and
crops into it to achieve the effect of a zoom. No new image data is created or enhanced;
the existing data is simply enlarged at a lower effective resolution, giving the illusion
of a magnified image.
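As a sketch of the cropping this entry describes, the function below computes the central window a hypothetical 2x digital zoom would keep from a 1920 x 1080 frame; the kept pixels are all the detail there is, so the enlarged result has a lower effective resolution.

# Sketch: a digital "zoom" keeps only a central crop of the captured image;
# no new detail is created. Frame size and zoom factor are example values.

def digital_zoom_crop(width: int, height: int, zoom: float):
    """Return (crop_width, crop_height, left, top) of the central window kept."""
    crop_w, crop_h = int(width / zoom), int(height / zoom)
    left, top = (width - crop_w) // 2, (height - crop_h) // 2
    return crop_w, crop_h, left, top

# A 2x digital zoom on a 1920 x 1080 frame keeps only a 960 x 540 region,
# which is then scaled back up for display:
print(digital_zoom_crop(1920, 1080, 2.0))  # (960, 540, 480, 270)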
17. Gain: Gain refers to the relationship between the input signal and the output
signal of any electronic system. Higher levels of gain amplify the signal, resulting in
greater brightness and contrast; lower levels of gain darken the image and soften the
contrast. Effectively, gain adjustment changes the apparent light sensitivity of the
CCD or CMOS sensor. In a digital camera, this concept is analogous to the ISO or ASA
ratings of silver-halide films.
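Video camera gain is usually expressed in decibels, and for a linear sensor the common rule of thumb is that +6 dB of gain adds roughly one f-stop of apparent sensitivity (a doubling, much like doubling an ISO rating). The sketch below simply applies that rule; the base ISO figure is an illustrative assumption, not something stated in the text.

# Sketch: relating video gain in decibels to stops of apparent sensitivity,
# using the rule of thumb that +6 dB of gain is about one stop (a doubling).

def gain_db_to_stops(gain_db: float) -> float:
    return gain_db / 6.0

def equivalent_iso(base_iso: float, gain_db: float) -> float:
    """Hypothetical ISO-style equivalent; base_iso is an example value."""
    return base_iso * (2 ** gain_db_to_stops(gain_db))

print(gain_db_to_stops(12))     # 2.0 stops
print(equivalent_iso(800, 12))  # 3200.0 -- gained brightness, but more noise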
18. Interlaced Scan: Interlaced video is a commonly used video capture technique
in which each frame consists of two fields, each containing alternating lines of the
image, captured one after the other and played back in a manner that reproduces motion
in a natural, flicker-free form while requiring less bandwidth and storage than
progressively captured video.
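To visualize the two fields mentioned above, the sketch below splits a small dummy frame’s lines into its two interlaced fields and weaves them back together; a real interlaced system also captures the two fields at slightly different moments, which this toy example ignores.

# Sketch: splitting a frame into two interlaced fields (even-numbered lines
# and odd-numbered lines) and weaving them back into a full frame.
import numpy as np

frame = np.arange(8 * 4).reshape(8, 4)   # dummy 8-line "frame"

field_a = frame[0::2]   # lines 0, 2, 4, 6 (one field)
field_b = frame[1::2]   # lines 1, 3, 5, 7 (the other field)

rebuilt = np.empty_like(frame)
rebuilt[0::2], rebuilt[1::2] = field_a, field_b
assert (rebuilt == frame).all()          # weaving restores the full frame
print(field_a.shape, field_b.shape)      # (4, 4) (4, 4)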
19. Optical Zoom: Another name for a zoom lens, which is a lens that enables you
to change the magnification ratio, i.e., the focal length of the lens, by pushing,
pulling, or rotating the lens barrel. Unlike varifocal (variable focal length) lenses,
true zoom lenses are constructed to allow a continuously variable focal length while
holding focus.
20. Viewfinder: System used for composing and focusing the subject being
photographed. Aside from the more traditional rangefinder and reflex viewfinders, many
compact digital cameras utilize LCD screens in place of a conventional viewfinder as a
method of reducing the size (and number of parts) of the camera. Electronic viewfinders
(EVFs) have become increasingly capable in recent years and are slowly finding their way
into traditional DSLRs.
21. White Balance: The camera's ability to correct color cast or tint under different
lighting conditions including daylight, indoor, fluorescent lighting and electronic flash.
Also known as “WB,” many cameras offer an Auto WB mode that is usually—but not
always—pretty accurate.
Dewesh Pandey
Asst Prof BAJMC TIAS
GGSIPU DELHI
____________________________________________________________________