Kiran Seminar Report 2007
CHAPTER 1
INTRODUCTION
Image sensors are used in many areas today: cell-phone cameras, digital video
recorders, still cameras, and many more devices. The issue is how to evaluate each sensor and
determine whether significant differences exist among the designs. Megapixel count is the most
widely used barometer of sensor performance, on the assumption that the more pixels an image
has, the better its output. This is not always the case. Many other metrics are important for
sensor design and may give a better indication of performance than raw pixel count. Furthermore,
specific applications may require the optimization of one aspect of the sensor's performance.
As silicon process technology improves, some of these metrics get better, while others
become worse.
CHAPTER 2
HISTORY
The CCD was invented at Bell Labs in 1969 by Willard Boyle and George E. Smith. Honeywell
developed the concept into an X-Y scanner, and IBM took it further, originally for data storage;
it was later taken up in research and astronomy. The CCD thus started its life as a memory
device, into which charge could be injected only at an input register. However, it was
immediately clear that the CCD could also receive charge via the photoelectric effect, so that
electronic images could be created. By 1970, Bell researchers were able to capture images with
simple linear devices, and the CCD imager was born.
CHAPTER 3
ARCHITECTURE
3.1 Pixel
The smallest discrete component of an image or picture on a display screen is known as a pixel.
The greater the number of pixels per inch, the greater the resolution. Each pixel is a sample
of the original image, and more samples typically provide a more accurate representation of
the original.
Fig.3.1 Pixel
The following figure shows the different types of image sensor. The full-frame and
frame-transfer architectures are closely related; the main difference is that the frame-transfer
design adds a separate light-shielded storage area into which the image is shifted before readout.
Another major element that determines image quality is noise reduction. In the "EXMOR"
CMOS sensor, noise in the analog domain is eliminated by a built-in correlated double
sampling (CDS) circuit, and other new structural elements further reduce the
noise-contamination level drastically.
The A/D conversion, conventionally done just before signal readout, is now performed
immediately after the light-to-electricity conversion, once for each column. This
reduces noise because the analog signal path is shorter and operates at a lower frequency.
Noise-elimination (CDS) circuits are provided in the digital domain in addition to
the analog domain.
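The noise-cancelling effect of correlated double sampling can be illustrated with a small numerical sketch (an illustrative toy model in Python; the function and parameter names are ours, and the noise magnitude is an assumed value, not a figure from any sensor datasheet):

```python
import random

def cds_read(signal_electrons, reset_offset_sigma=5.0, gain=1.0):
    """Correlated double sampling: sample the pixel's reset level,
    then the signal level, and subtract. The random reset (kTC)
    offset appears in both samples, so it cancels."""
    reset_offset = random.gauss(0.0, reset_offset_sigma)  # offset shared by both samples
    reset_sample = gain * reset_offset
    signal_sample = gain * (signal_electrons + reset_offset)
    return signal_sample - reset_sample  # the shared offset cancels

print(cds_read(100.0))  # 100.0, regardless of the random reset offset
```

Because the same reset offset appears in both samples, the subtraction removes it exactly; only noise that changes between the two samples survives CDS.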
CHAPTER 4
WORKING
4.1 Basic Operation of CCD
Charge-coupled devices (CCDs) are silicon-based integrated circuits consisting of a dense
matrix of photodiodes that operate by converting light energy in the form of photons into an
electronic charge.
Electrons generated by the interaction of photons with silicon atoms are stored in a potential
well and can subsequently be transferred across the chip through registers and output to an
amplifier. The A/D conversion is done at the edge of the circuit.
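The light-to-charge conversion described above can be sketched numerically (a minimal illustration; the quantum-efficiency and full-well values are assumed for the example, not taken from any particular sensor):

```python
def photons_to_charge(photons, quantum_efficiency=0.5, full_well=20000):
    """Convert an incident photon count into stored electrons.
    Each photon frees an electron with some probability (quantum
    efficiency), and the potential well saturates at its full-well
    capacity."""
    electrons = int(photons * quantum_efficiency)
    return min(electrons, full_well)  # charge beyond full well is lost

print(photons_to_charge(10000))   # 5000 electrons stored
print(photons_to_charge(100000))  # clipped to 20000: the well is saturated
```

The full-well clipping is why bright highlights saturate: once the well is full, additional photons add no signal.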
In a CCD for capturing images, there is a photoactive region, and a transmission region made
out of a shift register (the CCD, properly speaking). An image is projected by a lens on the
capacitor array (the photoactive region), causing each capacitor to accumulate an electric
charge proportional to the light intensity at that location.
A one-dimensional array, used in cameras, captures a single slice of the image, while a two-
dimensional array, used in video and still cameras, captures a two-dimensional picture
corresponding to the scene projected onto the focal plane of the sensor.
1) The photodiode within each pixel receives light, which is converted to electric charges
and accumulated.
2) The electric charges accumulated in all the receiving sections are simultaneously
transferred to the vertical CCD shift registers.
3) The charges that have passed through the vertical CCD shift registers are transferred
to the horizontal CCD shift registers.
4) The charges sent from the horizontal CCD shift registers are converted to a voltage,
amplified in the amplifier, and then sent to camera signal processing.
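The four readout steps above can be sketched in Python (an illustrative toy model, not real driver code; the gain value and function names are assumptions of ours):

```python
def ccd_readout(pixels, gain=2.0):
    """Toy model of CCD readout: accumulated charges are transferred
    in parallel to the vertical shift registers, then each row is
    clocked into the horizontal register, converted to a voltage,
    and amplified. Returns the rows in the order they leave the chip."""
    vccd = [row[:] for row in pixels]   # step 2: parallel transfer to the VCCDs
    output = []
    while vccd:
        hccd = vccd.pop(0)              # step 3: one row into the HCCD per clock
        output.append([gain * charge for charge in hccd])  # step 4: charge-to-voltage + amp
    return output

frame = [[1, 2], [3, 4]]
print(ccd_readout(frame))  # [[2.0, 4.0], [6.0, 8.0]]
```

Note how the readout is inherently serial: every charge packet must pass through the shift registers and the single output amplifier, which is the key structural contrast with CMOS readout later in this chapter.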
In the interline transfer design the charge holding region is shielded from light. Light is
collected over the entire imager simultaneously and then transferred to the next, adjacent
charge transfer cells within the columns. This implies a low fill factor, which on modern
designs usually is compensated for by micro lenses.
Next, the charge is read out: each row of data is moved to a separate horizontal charge
transfer register. Charge packets for each row are read out serially and sensed by a charge-to-
voltage conversion and amplifier section.
The following figure shows that an interline image sensor has a light-shielded vertical
CCD adjacent to each photodiode. Charge is transferred from the photosites to the vertical
CCDs in one cycle, and then into the horizontal CCD one row at a time.
The next image can be integrated while the previous image is safely transferred out of the
imager. Interline sensors, unlike full-frame devices, do not require an external shutter.
In the full-frame design, by contrast, the charge-holding region is integrated with the
light-sensing region. Light is collected over the entire imager simultaneously; the light
must then be shut off so that the charge can be transferred down the columns.
Finally, each row of data is moved to a separate horizontal charge transfer register. Charge
packets for each row are read out serially and sensed by a charge-to-voltage conversion and
amplifier section. This design features a high, almost 100% fill factor but external shuttering
is required and light cannot be collected during readout.
Here the figure shows that the pixels serve as both the photosites and the vertical CCD
(VCCD). Charge is transported down the columns from pixel to pixel; charge in the first VCCD
row is transferred into the horizontal CCD (HCCD), and the HCCD clocks out one row at a time.
1) The photodiode within each pixel receives light, which is converted to electric charges
and accumulated.
2) The accumulated charges are converted to a voltage by an amplifier within the pixel.
3) The converted voltage is transferred to the vertical signal line through the selected
row transistor.
4) Random noise and fixed-pattern noise are eliminated by correlated double sampling
at a column circuit.
5) After CDS, the image signal voltage is output through the horizontal signal line.
Overlaying the entire sensor is a grid of metal interconnects to apply timing and readout
signals, and an array of column output signal interconnects. The column lines connect to a set
of decode and readout (multiplexing) electronics that are arranged by column outside of the
pixel array.
This architecture allows the signals from the entire array, from subsections, or even from a
single pixel to be read out by a simple X-Y addressing technique, something not possible with
a CCD.
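The X-Y addressing idea can be sketched as follows (a simplified software analogy of what the row/column select electronics do in hardware; the function name and index conventions are ours):

```python
def read_window(pixels, row_range, col_range):
    """X-Y addressing: select any window of rows and columns
    directly, something a CCD's serial shift-out cannot do.
    `pixels` is a 2-D list; ranges are (start, stop) index pairs."""
    r0, r1 = row_range
    c0, c1 = col_range
    # Row-select lines pick the rows; column multiplexers pick the columns.
    return [row[c0:c1] for row in pixels[r0:r1]]

array = [[10, 11, 12],
         [20, 21, 22],
         [30, 31, 32]]
print(read_window(array, (1, 3), (0, 2)))  # [[20, 21], [30, 31]]
print(read_window(array, (0, 1), (2, 3)))  # a single pixel: [[12]]
```

Windowed readout like this is what lets CMOS sensors deliver fast region-of-interest modes, whereas a CCD must always clock out full rows.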
A CMOS active pixel has 3-4 transistors. It is fast and offers higher SNR, but the pixel is
larger and the fill factor lower. CMOS sensors also require a lower supply voltage and
consume less power.
Colour can be captured with a one-chip or a three-chip camera; the three-chip design is
usually at least three times as expensive. A one-chip camera uses a colour filter matrix,
usually the Bayer mosaic. The mosaic reduces colour resolution to about half and also reduces
light-collection efficiency, anisotropically in x and y. A newer method, invented by Foveon,
uses "vertical filters" with less resolution loss.
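The Bayer mosaic's effect on colour resolution can be illustrated with a short sketch (assuming an RGGB layout for the example; the function name and the tiny test image are ours):

```python
def bayer_sample(rgb):
    """Sample a full-colour image through a Bayer mosaic (RGGB
    layout assumed). Each pixel keeps only one of its three colour
    channels, which is why colour resolution drops to roughly half
    and the loss differs between the x and y directions."""
    mosaic = []
    for y, row in enumerate(rgb):
        out = []
        for x, (r, g, b) in enumerate(row):
            if y % 2 == 0:
                out.append(r if x % 2 == 0 else g)  # even rows: R G R G ...
            else:
                out.append(g if x % 2 == 0 else b)  # odd rows:  G B G B ...
        mosaic.append(out)
    return mosaic

img = [[(9, 5, 1), (9, 5, 1)],
       [(9, 5, 1), (9, 5, 1)]]
print(bayer_sample(img))  # [[9, 5], [5, 1]]
```

A demosaicing step must later interpolate the two missing channels at every pixel, which is where the resolution loss the text mentions comes from.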
CHAPTER 6
APPLICATIONS
Image sensors are used in many devices: mobile video phones, fingerprint scanners, virtual
keyboards, self-parking cars, aerospace, autopilot technology, and so on. Samsung CMOS image
sensors (CIS), for example, enable bright, crisp images and smooth-motion video for a broad
range of applications, from smartphones and cameras to notebooks and smart TVs.
Almost all medical and near-medical areas benefit from image sensors. These sensors are used
for patient observation and drug production, in dentists' offices, and during surgery. In
most cases the sensor itself represents only a small fraction (in size and cost) of the
larger system, but its functionality plays a major role in the system as a whole.
The figure shows examples of medical applications in which CMOS image sensors are used. In
this section we concentrate on applications that push current image-sensor technology to the
edge of what is possible: wireless capsule endoscopy and retinal implants. Both of these
applications will play an important role in millions of patients' lives in the near future.
The following figure shows the application areas of image sensors by market share:
Office Tools 51%, Consumer & Professional 37%, Video Conferencing 5%, Security 4%,
and Industrial 3%.
CHAPTER 7
SUMMARY
7.1 Summary
Image sensors are an emerging solution for practically every automation-focused machine
vision application. New electronic fabrication processes, software implementations, and new
application fields will dictate the growth of image-sensor technology in the future. The two
major segments of the world image-sensor market, CCD and CMOS, offer a range of image
sensors for multiple end users. The world image-sensor market is poised for growth, with
certain factors likely to aid it in future years.
The technology is well supported by the rapidly developing fields of 3D imaging, motion
capture, biometrics, automotive, and robotics.